Privacy

OpenAI Investigations Serve as a Caution for AI Developers and Their Customers

Published: Apr. 05, 2023

Updated: May. 04, 2023

OpenAI is facing a flurry of complaints and investigations from regulators and organizations around the world. These actions highlight the importance of privacy law compliance and the uncertainty surrounding what standards should apply to AI in areas such as fairness, transparency, and safety. For developers and users of AI, there are two main takeaways:

  • If you’re an AI developer, make sure your training data has been collected with privacy in mind. “Public” data may still be protected by applicable privacy laws.
  • If you’re a company considering an AI integration, conduct diligence on how the developer built its training data and include appropriate provisions in your master services agreement (“MSA”) to contractually mitigate risks.

On March 30, 2023, exactly four months after ChatGPT’s launch, OpenAI was hit with a complaint to the Federal Trade Commission (“FTC”), a call for EU-wide investigations into its products, and a ban from the Italian data protection authority (“Garante”).

  • The US-based advocacy group Center for AI and Digital Policy (“CAIDP”) filed a complaint with the FTC alleging that GPT-4 (the successor to the model underlying ChatGPT) violates the FTC Act and various normative and principle-based frameworks (even though such frameworks are unenforceable). The complaint alleges that GPT-4 is “biased, deceptive, and a risk to privacy and public safety,” in part because it can generate misinformation, content that reinforces existing biases, and age-inappropriate content for child users. The complaint also takes issue with GPT-4’s alleged lack of transparency and explainability.
  • The European Consumer Organization (“BEUC”) urged EU and national authorities to investigate the risks of ChatGPT and similar chatbots, citing CAIDP’s complaint.
  • The Garante provisionally banned OpenAI from processing Italian residents’ personal data due to alleged noncompliance with the General Data Protection Regulation (“GDPR”), including failure to (1) establish a legal basis for processing Italians’ personal data, (2) provide all information required by Article 13 (typically provided via a privacy policy), (3) take appropriate steps to ensure that personal data is accurate, (4) implement data protection by design, and (5) obtain parental consent to process data of children under the age of 16. The Garante also alleged that ChatGPT lacks age verification mechanisms and presents inappropriate information to children. In response, OpenAI made ChatGPT unavailable in Italy.

Other regulators have since opened investigations into ChatGPT and/or OpenAI or are considering whether to do so:

  • On April 4, 2023, the Office of the Privacy Commissioner of Canada (Canada’s national privacy regulator) announced that it was launching an investigation into OpenAI in response to a complaint.
  • As of April 5, 2023, data protection authorities in France, Ireland, and Germany are reportedly following the Garante’s investigation and considering whether to take action.

Other regulators have issued statements or guidance aimed at generative AI developers. In response to mounting concerns about large language models like ChatGPT, the UK Information Commissioner’s Office (“ICO”), the UK’s national data protection authority, published a list of questions it will use to evaluate generative AI models. These questions assess several basic pillars of GDPR compliance, including whether developers have adequately assessed and mitigated the risks associated with their AI models. Perhaps hinting at future enforcement, the ICO notes that it “will act where organizations are not following the law and considering the impact on individuals.”

As more regulators take action, legal compliance and risk management should be top-of-mind for AI developers and users.