Artificial intelligence (AI) technology utilizes vast amounts of data and computational power to train models that can make predictions and decisions without being explicitly programmed for the task. AI may make us richer and more innovative, but it comes with privacy and data protection risks. Organizations designing or deploying AI tools should examine how their AI systems collect, process, and disseminate personal data across digital and brick-and-mortar markets.
How is personal data collected?
People generate personal data constantly: while working, browsing the aisles of the grocery store, surfing the internet, listening to music at home, and even navigating the physical world through location data. Collecting and aggregating information about people has never been easier. AI allows organizations to collect, manipulate, and ultimately exchange vast amounts of personal data without individuals’ prior knowledge or consent.
For example, suppose someone chooses a song from a streaming service to match their mood; say one of us picked “Coffee Acoustic” as the playlist of choice for writing this article. Streaming services can feed that signal of a mellow, coffee-shop mood into the online behavioral advertising market, sometimes without detailed, explicit consent. An ad buyer can then serve ads keyed to the music, perhaps Joni Mitchell’s “Morning,” and predictions are likely made about future choices based on this musical preference. Later, when the listener visits a physical store advertised on the streaming service, an inaudible “ping” embedded in the store’s background music can be “heard” by the phone’s microphone and relayed back to the advertiser. What we used to listen to privately in the office can now be tied to our physical location through our mobile phones, and we may then receive emails or texts offering discounts based on predictions derived from processing all of that personal data.
How is my personal data processed?
When personal data is collected as part of an AI training corpus or in queries to an AI, it may be ‘processed’ under legislation such as the European Union’s General Data Protection Regulation (GDPR). Personal data used to train machine learning models can introduce bias into the model. Bias is closely related to the GDPR’s transparency and fairness concepts and is addressed by employment laws such as Title VII of the Civil Rights Act of 1964 and by the U.S. Equal Employment Opportunity Commission’s guidance on AI and Title VII. Using personal data in a way that results in biased predictions or outputs may violate privacy and employment laws. The Federal Trade Commission (FTC) has also warned companies that unfair or deceptive AI practices are subject to its Section 5 enforcement powers under the FTC Act. Companies implementing AI must be transparent and clear when providing AI to consumers and must avoid practices that could be seen as deceptive.
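The bias concern above can be made concrete. As a minimal sketch, a team might screen a model’s selection decisions against the “four-fifths rule,” a screening heuristic used in U.S. adverse-impact analysis. The data and group labels below are hypothetical, and passing this check is not a legal conclusion:

```python
# Minimal sketch: compare per-group selection rates against the
# "four-fifths rule" heuristic used in U.S. adverse-impact analysis.
# Groups, counts, and the decision data here are hypothetical.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> rate per group."""
    totals, hits = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        hits[group] += int(selected)
    return {g: hits[g] / totals[g] for g in totals}

def four_fifths_check(rates):
    """Flag groups whose selection rate is below 80% of the highest rate."""
    top = max(rates.values())
    return {g: (r / top) >= 0.8 for g, r in rates.items()}

# Hypothetical model outputs: group A selected 60/100, group B 30/100.
decisions = ([("A", True)] * 60 + [("A", False)] * 40
             + [("B", True)] * 30 + [("B", False)] * 70)
rates = selection_rates(decisions)   # A: 0.60, B: 0.30
print(four_fifths_check(rates))      # B fails: 0.30 / 0.60 = 0.5 < 0.8
```

A check like this is only a first-pass screen; a disparity flag should trigger human review of the model and its training data, not an automatic fix.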
Additionally, organizations should be aware of trade-offs between privacy and AI utility; for example, forgoing end-to-end encryption allows an AI system to read users’ chat messages. Some friction is unfortunately necessary to achieve compliance and protect individual rights. Trouble can also arise if personal data collected for one purpose is used for another purpose (such as training an AI model) without the person’s consent to that secondary use. We recommend that companies assemble cross-functional program teams to develop policies and procedures to manage these complex and interrelated challenges.
How is personal data disseminated?
Organizations should be careful about disclosing or sharing personal data entrusted to them by individuals with third parties, including AI providers. For example, an organization that wants to better understand and identify patterns in large amounts of data may be tempted to hand that data to an AI provider. Before sharing, organizations should assess privacy and data protection requirements for notice, transparency, fairness, (in some cases) consent, and secondary use.
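One common mitigation before records leave the organization is pseudonymizing direct identifiers. The sketch below uses a keyed hash; the field names and key handling are illustrative assumptions, and pseudonymized data can still qualify as personal data under the GDPR, so this is a risk-reduction step, not a compliance guarantee:

```python
# Minimal sketch: replace direct identifiers with keyed-hash tokens
# before sharing records with a third-party AI provider.
# Field names are hypothetical; the key should live in a secrets vault.

import hashlib
import hmac

SECRET_KEY = b"rotate-me-and-store-in-a-vault"  # placeholder key

def pseudonymize(record, id_fields=("name", "email")):
    """Return a copy of record with identifier fields tokenized.

    The same input always maps to the same token, so records can
    still be joined, but identities are not directly readable.
    """
    out = dict(record)
    for field in id_fields:
        if field in out:
            digest = hmac.new(SECRET_KEY, str(out[field]).encode(),
                              hashlib.sha256).hexdigest()
            out[field] = digest[:16]
    return out

record = {"name": "Ada Lovelace", "email": "ada@example.com", "plan": "premium"}
safe = pseudonymize(record)
print(safe["name"] != record["name"])  # identifier replaced
print(safe["plan"])                    # non-identifying fields untouched
```

Because the tokens are deterministic, the organization can later re-link results to its own records while the provider sees only opaque values.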
What now?
Organizations must protect personal data in accordance with U.S. state and federal privacy laws and, where applicable, extraterritorial laws such as the GDPR. Government contractors using AI will need to consider the privacy protections in President Biden’s executive order on AI. We encourage organizations to “think hard” about how best to address other compliance concerns and maximize the value of AI in a holistic way. Building a multidisciplinary AI ethics and governance team that includes privacy, security, legal, and other stakeholders is a good starting point for a large-scale AI compliance program. Companies may also consider working with outside experts to help navigate this complex and evolving area.