Employers Encouraged to Establish AI Protocols to Mitigate Data Leakage Risks

Generative AI Usage Guidelines Issued to Safeguard Data Privacy

The increasing adoption of generative artificial intelligence (AI) within workplaces has prompted significant concerns regarding data privacy and security. Ada Chung Lai-ling, the Privacy Commissioner for Personal Data in Hong Kong, has recommended that employers establish a well-defined scope for the permissible use of generative AI to mitigate risks associated with data leakage. This recommendation reflects the broader need for organizations to manage the complexities introduced by new technologies.

On March 31, the Commissioner’s Office released a Checklist on Guidelines for the Use of Generative AI by Employees. This comprehensive document is designed to assist organizations in formulating internal policies governing the deployment of generative AI technologies. In a recent radio appearance, Chung highlighted that many employees are utilizing generative AI tools without their companies’ awareness, underscoring the risk of unmonitored activities that could jeopardize sensitive data.

Chung expressed her hope that these guidelines would empower businesses to create policies that enhance the responsible use of generative AI, thereby enabling employees to leverage this technology effectively and safely. The guidelines, detailed across three pages, systematically cover five pivotal elements: defining the scope of use, safeguarding personal data privacy, ensuring lawful and ethical practices while preventing bias, maintaining data security, and outlining consequences for policy violations.

Shortly following the guidelines’ release, various sector representatives conveyed a desire for the Commissioner’s Office to develop sample internal policy templates. Chung noted this request is currently under consideration. Furthermore, the privacy watchdog has launched an AI safety hotline, providing businesses with a resource for inquiries and custom training courses to address AI-related concerns. An AI safety seminar, jointly organized with the Hong Kong Productivity Council, is planned for June to further educate stakeholders on best practices.

Chung identified key risks associated with AI technologies, foremost among them threats to personal data privacy. Employees may inadvertently misuse customer data while using AI tools, or input sensitive information in breach of company policy. Data leakage is a particular concern because training generative AI models typically requires substantial datasets, which, if mishandled, could cause serious harm.
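One practical safeguard against employees pasting sensitive information into generative AI tools is to redact likely personal data from prompts before they leave the organization. The sketch below is purely illustrative and is not part of the Commissioner's guidelines; the regex patterns (email, Hong Kong ID number, local phone number) are simplified assumptions, and a real deployment would need a much fuller data-loss-prevention rule set.

```python
import re

# Illustrative patterns only -- a real DLP rule set would be far broader.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "hkid": re.compile(r"\b[A-Z]{1,2}\d{6}\(?[0-9A]\)?"),  # Hong Kong ID card format
    "phone": re.compile(r"\b\d{8}\b"),  # local 8-digit phone numbers
}

def redact_prompt(prompt: str) -> str:
    """Replace likely personal data with placeholders before the prompt
    is sent to an external generative AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

# Example: both the email address and the ID number are masked.
print(redact_prompt("Contact Chan Tai-man at chan@example.com, HKID A123456(7)"))
```

A filter like this could sit in a company-managed proxy or browser extension, so that the policy is enforced technically rather than relying on each employee's judgment.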

The Commissioner’s Office conducts annual reviews across various organizations, including government departments and affiliated entities. So far, no violations have been recorded. In the event of data leakage incidents, the office is prepared to investigate, publish reports, and issue enforcement notices to the implicated organizations. Furthermore, it holds the authority to intervene if generative AI tools cause data breaches.

Organizations should also stay alert to the adversarial tactics catalogued in the MITRE ATT&CK Matrix that commonly feature in such data breaches: initial access via compromised credentials or misconfigured APIs, persistence through the creation of backdoor accounts or continued use of valid ones, and privilege escalation through the exploitation of known vulnerabilities.
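The persistence and privilege-escalation tactics above often leave traces in identity-provider audit logs. The sketch below shows one crude heuristic for surfacing them: flagging account creation or privilege grants that occur outside business hours. The log format and field names are hypothetical assumptions for illustration; real detection would draw on SIEM data and the detection guidance published alongside each ATT&CK technique.

```python
from datetime import datetime

# Hypothetical audit-log entries: (ISO timestamp, actor, event).
# In practice these would come from SIEM or identity-provider logs.
AUDIT_LOG = [
    ("2025-04-01T02:13:00", "svc-backup", "account_created"),
    ("2025-04-01T02:14:00", "svc-backup", "privilege_granted"),
    ("2025-04-01T09:00:00", "alice", "login"),
]

BUSINESS_HOURS = range(9, 19)  # 09:00-18:59 local time

def flag_suspicious(log):
    """Flag account creation or privilege grants outside business hours --
    a crude indicator of the persistence and privilege-escalation tactics
    described above."""
    flagged = []
    for ts, actor, event in log:
        hour = datetime.fromisoformat(ts).hour
        if event in {"account_created", "privilege_granted"} and hour not in BUSINESS_HOURS:
            flagged.append((ts, actor, event))
    return flagged

for entry in flag_suspicious(AUDIT_LOG):
    print("suspicious:", entry)
```

A time-of-day heuristic alone produces false positives (legitimate overnight maintenance, for example), so in practice it would be one signal among several rather than a standalone alert.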

As generative AI continues to evolve and integrate into workplace processes, the emphasis on robust governance and protective measures becomes ever more essential. Organizations are urged to be proactive in developing strategies that not only protect their data but also support compliant and ethical innovation within their operations. This reflects a broader commitment to safeguarding personal data and reinforcing trust in technological advancements in the industry.
