How Kyocera’s CISO Addresses Cyber Risks in the Era of AI Adoption

Kyocera’s Chief Information Security Officer, Andrew Smith, discusses proactive strategies for addressing the cyber risks linked to AI technologies and outlines actionable steps businesses can take for implementation.

AI has surged in popularity since the launch of ChatGPT in November 2022, and discussion of the technology has gained significant momentum. While AI shows great promise in sectors such as healthcare and education, and in improving operational efficiency, it has a darker side: cybercriminals exploit it for phishing, automated attacks, and ransomware campaigns, issues that now dominate news headlines.

Opinions about AI's place in the corporate world vary, but one fact is undeniable: it is here to stay. Businesses must confront the realities of this technology, even though it introduces new cybersecurity threats. Firms that cling to outdated practices risk repeating the mistakes of those that resisted change during the dot-com boom and faded into obscurity.

In navigating the challenges of AI adoption, organizations can look to historical parallels: everyone aspires to emulate successful pioneers like Apple, while no one wants to share the fate of companies like Pan Am. The question is how businesses can evolve in their use of AI while effectively managing the associated cyber risks.

A fundamental first step is understanding the legal frameworks governing AI applications and assessing whether a given use is appropriate for a specific business context. The growing commercialization of AI is encouraging in this respect, as it is driving the establishment of legal guidelines for deployment. Although AI technology predates ChatGPT, robust governance structures have only recently begun to emerge.

Given the pace of AI development and regulatory change, companies must stay informed about the legal standards that apply in their industries. Engaging legal experts early in this process is essential; businesses should avoid substantial investments in initiatives that could breach compliance requirements. Once the legal landscape has been reviewed, firms should explore where and how AI can enhance operations and influence cybersecurity measures. This reflection might reveal opportunities to automate tedious tasks or deploy customer service solutions such as chatbots, while also prompting questions about how sensitive information will be secured once AI is integrated.

The next critical phase is selecting an appropriate AI transformation partner. This does not mean relying solely on AI models like ChatGPT to drive business functions. Where internal expertise is lacking, numerous AI transformation firms offer partnerships to guide organizations through their AI journey. Businesses should examine case studies from potential partners and seek feedback from previous clients on the effectiveness and security of their work. Given how rapidly the AI sector is evolving, organizations should not discount capable firms that lack an extensive portfolio; instead, they should allow such firms to demonstrate their qualifications and explain how they can help achieve the organization's objectives.

To mitigate insider threats, which more often stem from human error than malicious intent, organizations must prioritize cybersecurity education at every level. Because most cyber incidents are caused by employees who do not understand basic cybersecurity principles, a comprehensive training program covering AI and other technologies is paramount. Employees should be explicitly advised not to input sensitive data into AI systems, as those interactions can inadvertently expose confidential information. Regular education on data handling, breach reporting, incident response planning, and maintaining secure backups of critical data should be integral components of this training initiative.
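Organizations that want a technical backstop for this advice sometimes screen text for obviously sensitive patterns before it ever reaches an external AI service. The sketch below is a minimal, hypothetical illustration of that idea using simple regular expressions; the pattern names and rule set are assumptions for illustration, and a production deployment would rely on a vetted data-loss-prevention tool with a far broader rule set.

```python
import re

# Hypothetical patterns for a few common categories of sensitive data.
# A real deployment would use a dedicated DLP library and many more rules.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a sensitive pattern with a labelled placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarise this ticket from jane.doe@example.com, SSN 123-45-6789."
print(redact(prompt))
```

A filter like this cannot catch free-form confidential content (strategy documents, source code, client names), which is why it complements rather than replaces employee training.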

Finally, once implementation and employee training are complete, organizations should not treat AI as a static solution. Continuously monitoring and adjusting AI tools will not only maximize their effectiveness but also uncover potential vulnerabilities before they can be exploited. Skipping steps in this process can lead to critical setbacks and wasted resources, while following it through diligently can make AI a key asset in maintaining a competitive edge in an evolving landscape.
