Artificial Intelligence

GenAI in the Workplace: Hong Kong PCPD Releases Checklist for Employer Policies

By Leslie Veloz and Jennifer Ruehr

The Hong Kong Office of the Privacy Commissioner for Personal Data (“PCPD”) recently published its Checklist on Guidelines for the Use of Generative AI by Employees (“Checklist”). The goal of the Checklist is to help organizations draft internal policies and procedures governing employee use of generative AI (“GenAI”) tools, especially where GenAI is used to process personal data.

The Checklist recommends topics that internal company policies should cover and offers practical tips for supporting employee use of GenAI, as explained further below.

Internal GenAI policies

The PCPD recommends organizations implement GenAI Policies to address the following topics:

Permissible Use of GenAI.

  • Describe the GenAI tools that can be used internally (e.g. publicly available, licensed, or internally developed).

  • Describe how employees can use GenAI, and the applicability of relevant policies or guidelines.

Data Privacy Protections.

  • Provide clear guidance on the types and amounts of data that can (or cannot) be input into GenAI tools, and how output data can or cannot be used.

  • Describe use cases in which privacy-protective measures (e.g. anonymization) should be applied to output data.

  • Provide data security requirements for output data generated by GenAI that align with the organization’s existing policies.

  • Review other internal policies (e.g., data retention, personal data handling, information security, etc.) to decide if updates are necessary.

Lawful and Ethical Use, including Prevention of Bias.

  • Emphasize that employees must verify the accuracy of generated outputs, report biased or discriminatory outputs, label outputs, and refrain from using GenAI tools for unlawful or harmful activities.

In addition to the above, the PCPD also recommends that organizations provide employees with specific security guidance on the use of GenAI tools, addressing permitted devices, authorized users, robust user credentials, and security settings for GenAI tools, as well as an AI Incident Response Plan. Internal policies should also describe the possible consequences of violating the policies.

Practical Tips

To address these recommendations, companies should:

  • Enhance Policy Transparency. Routinely communicate updates to GenAI policies or guidelines, ensuring that employees understand how and when they can use GenAI tools.

  • Conduct Employee Trainings. Conduct trainings on how to use GenAI tools effectively and responsibly, including explaining the tools’ capabilities and limitations.

  • Develop AI Support Teams. Establish an AI support team to assist employees using GenAI tools in their work, provide technical assistance, and address any general AI related employee concerns.

  • Implement Feedback Mechanisms. Create channels or processes for employees to provide feedback on the use of AI, so that governance teams can improve and further tailor applicable AI policies.

In addition to publishing the Checklist, the PCPD also launched an “AI Security Hotline” and a “Data Security Training Series for SMEs,” aimed at assisting organizations in driving high-quality AI development, expanding the diverse application of AI, and complying with the requirements of the Personal Data (Privacy) Ordinance (“PDPO”).

Hintze Law PLLC is a Chambers-ranked and Legal 500-recognized, boutique law firm that provides counseling exclusively on global privacy, data security, and AI law. Its attorneys and data consultants support technology, ecommerce, advertising, media, retail, healthcare, and mobile companies, organizations, and industry associations in all aspects of privacy, data security, and AI law.

Leslie Veloz is an Associate at Hintze Law PLLC offering clients pragmatic and result-driven legal counsel for establishing, maintaining, and maturing effective privacy programs.


Jennifer Ruehr is Co-Managing Partner at Hintze Law PLLC and co-chair of the firm’s Workplace Privacy Group, Cybersecurity and Breach Response Group, and the Artificial Intelligence and Machine Learning Group.

10 areas for US-based privacy programs to focus in 2025

By Sam Castic

The post below was originally published by the IAPP at https://iapp.org/news/a/10-areas-for-privacy-programs-to-focus-in-2025.

This past year was another jammed one for privacy teams and it was not easy to stay on top of all the privacy litigation, enforcement trends, and new laws and regulations in the U.S.

Read More

The EDPB Releases an Opinion on AI Model Development and Deployment

By Emily Litka

On December 18th, in response to a request from the Irish Supervisory Authority (“SA”), the European Data Protection Board (the “EDPB”) published an opinion (the “Opinion”) on the application of the GDPR to certain aspects of AI model development and deployment.

Read More

California Enacts "genAI" Laws That Introduce New Privacy and Transparency Requirements, Amongst Others

By Emily Litka

In September 2024, California Governor Gavin Newsom signed a number of new generative AI (“genAI”) bills into law. These laws address risks associated with deepfakes, training dataset transparency, use of genAI in healthcare settings, privacy, and AI literacy in schools. California is the first US state to enact such sweeping genAI regulations.

Read More

FTC Introduces Novel Ban in Its Settlement with NGL Labs and Scrutinizes AI Representations

By Emily Litka

On July 9, 2024, the Federal Trade Commission (FTC) and the Los Angeles District Attorney’s Office (LA DA) reached a settlement with NGL Labs, the maker of the “NGL: ask me anything” app, and its co-founders. The complaint alleged violations of the Federal Trade Commission Act (FTC Act), the Children’s Online Privacy Protection Act (COPPA), the Restore Online Shoppers’ Confidence Act (ROSCA), and similar California state laws. In the complaint, the FTC and LA DA also brought claims against NGL’s co-founders individually.

Read More