
CPOs are All in the AI Business (Or Should Be)

Published: Apr. 01, 2024

Ahead of her presentation at this year’s IAPP GPS, and on the heels of what could only be called “2023: The Year ChatGPT Ate the World,” Privacy and Data Governance Consultant Anne Toth summarizes the top reasons why the explosion in interest in AI, and rapid advances in using data to fuel AI, fit naturally with the work that privacy professionals do all day, every day.


AI is all about data:

When you boil it down, AI is only as good as the data used to train it. Every discussion about fairness and bias comes right back to the quality and diversity of training data. Even in more sophisticated forms of AI, like deep learning, data provides the fuel that makes learning and improvement possible. While scientific advances may someday make far less data necessary to achieve the same results, today’s AI improvements are often directly proportional to the volume of data ingested, processed, and analyzed. If you were to ask yourself, “Who in my organization has the best handle on where all the data is and how it’s being used (and how it ought to be used)?” the correct answer in many cases would be “the data science team.” However, that answer often holds only for your customer data. Privacy teams today typically have broad oversight not only of customer data, but of employee data and every other type of data your organization collects and uses. More often than not, you need to consider not only the implications of the AI tools you build and buy to optimize your customer experience, but also how your employees use AI in the workplace and how your company uses AI across a wide array of data types. If you answered the earlier question with “my privacy team,” that would not just be another correct answer; it would almost always be the better answer.

Privacy teams are horizontally integrated:

There is no single standard corporate org chart. Some companies are functionally organized. Some are organized by product or business P&L, by global region, or both. Large and complex corporations and entities are often highly matrixed. No matter what your org chart looks like today, you can expect that as soon as you learn to navigate it, it will be reorganized. But regardless of the size and complexity of your organization and who sits where, your privacy team can only be effective when they are horizontally integrated. They have to be, because data rarely stays in one silo in any organization. Whether your privacy team is part of your legal team, your data science team, your information security team, your IT team, or even your marketing team, their work has to cut across every function that collects and uses data. Since we have already established that AI is all about the data, and AI tools are becoming as ubiquitous as the data itself, who better to manage AI governance than the team that is already horizontally integrated?

The only constant is ambiguity:

Rewind 20 years and privacy was not clearly regulated in the U.S. Early privacy lawyers and professionals had to navigate a sea of uncertainty where data collection, use, and sharing were concerned. Even with the additional clarity that regulation, litigation, and widely adopted industry best practices have brought, every new data type and use case carries with it a long list of questions about which regulatory framework applies and how. Just as email, instant messaging, texting, and online communications redefined how we thought about what constitutes wiretapping, generative AI is now forcing us to rethink intellectual property, ownership, and content authenticity. Every day, a new technology or technology application requires us to think about our policies and practices anew. People working in privacy have had to be nimble about this from the very start. For seasoned privacy professionals, navigating ambiguity is a hardwired skill set, and one your organization should take advantage of.

Managing risk is a core competency, but not the only competency:

Privacy teams, much like legal teams, are inherently risk managers. Some kinds of data and uses of data are more sensitive, and therefore carry more risk, than others. That is why we regulate and treat different kinds of data differently. Every data use case is ultimately tied to a business use case, and an organization’s business decisions are entwined with its data strategy. Done right, privacy and data governance are partners to data strategy. Risk management is important, but creative risk management that mitigates risk while enabling positive business outcomes is what privacy professionals do best. Involving them in AI governance and leveraging this experience is another dimension of a truly integrated and smart approach to data strategy.


In sum, your already horizontally integrated privacy teams are proficient in risk management and in navigating ambiguity, and they are deeply versed in your organization’s data architecture, policies, and strategy. That makes them the go-to resource for crafting your AI governance framework. Extending their remit to AI governance is a logical progression of their duties and an effective way to ensure a comprehensive approach to responsible data policy.


For those of you in DC again this spring, I’ll be following up on this topic in a pre-conference workshop, “Getting from Here to There: Career Pathing in Privacy and Data Protection.” Hope to see you there to continue this important conversation.