2023 was the year when Generative AI attracted attention, with many individuals and organizations taking on the challenge and utilizing it in various business settings. The time and effort saved by Generative AI is enormous. Internal research shows that Generative AI increases productivity by 62%, resulting in a savings of 26 hours per week.
Many companies are beginning to realize these benefits and are considering integrating Generative AI within their organization’s workflows and into various aspects of business operations. However, this also entails the need for governance training.
Uncovering the risks of generative AI that raise concerns for improved governance
AI is permeating every industry, from healthcare and finance to manufacturing. The recent widespread use of modern generative AI has raised ethical dilemmas. Algorithmic biases, privacy concerns, and unintended consequences are beginning to surface, and these are challenges data professionals must now grapple with. Therefore, data governance professionals must be educated to responsibly and ethically guide implementation and deployment.
Global convergence of AI regulations and frameworks shows urgency
Globally, there is active action by various regulators and discussions about the governance of AI and its impact on people, work, and privacy. This focus on AI regulation demonstrates the urgency for effective internationally harmonized standards and convergence or alignment of frameworks.
Across ASEAN, AI has been identified as a key focus area in national strategies, but there is still little regulation in place regarding AI governance. Malaysia will introduce a set of AI governance and ethics codes this year that will serve as the basis for AI regulation. This will include the seven principles of responsible AI outlined in the country’s National Artificial Intelligence Roadmap 2021-2025, as well as a focus on education and ethics. The Philippines plans to formulate a proposal for an ASEAN-wide AI regulatory framework based on AI-related legislation during its 2026 ASEAN chair.
Meanwhile, Singapore is at the forefront of adopting AI technology, but recognizes the need for responsible governance. In 2019, the first National AI Strategy was rolled out, outlining plans to deepen the use of AI to transform the economy. The latest National AI Strategy 2.0, released last December, focuses on industry collaboration, research and development, talent development, and establishing an effective and trustworthy environment for AI innovation.
Additionally, the country is actively participating in the global debate on AI. For example, it is a founding member of the Global Partnership on Artificial Intelligence. It is also working with the US to find areas of alignment between the US’s AI risk management framework and Singapore’s governance framework.
Generative AI risks and personal data protection considerations
Generative AI brings value, but it is also important to be aware of its risks and limitations. For example, the shift from content creation to content generation is expected to increase privacy and security risks and ethics-related violations. These can be caused by malicious intent, accident, or ignorance in the use of generative AI.
We are already witnessing deepfakes and voice cloning, which are understandably raising concerns about the proliferation of misinformation, data privacy, and intellectual property. This could increase even further in 2024 and could have a very negative impact, especially in the political arena. Around the world he is expected to have 80 major elections. Recently in Indonesia, an AI-generated avatar of the late political leader Suharto was created and disseminated with the aim of influencing voters’ choices in the upcoming Indonesian elections.
The Rise of Custom GPT Opens the Door to Information Leakage and Copyright Issues
OpenAI opened its GPT store in January 2024. This allows users to create custom GPTs and upload files as part of their knowledge. There are currently no guardrails for uploading copyrighted content. This could be a problem for unknown or poorly controlled developers who may monetize these GPTs in the future. The ease of generating such custom GPTs also comes with the risk of introducing hostile prompts.
These include prompt injection (insertion of malicious content to manipulate output), prompt leaks (unintentional disclosure of sensitive information within a response), and jailbreaking (injection of prompts to circumvent limitations of an AI system). adjustment). Addressing these challenges is of paramount importance as these operations can have far-reaching effects.
It is still very necessary to have “people in the field” to oversee, fact-check, and verify the output of technology. Data professionals must keep issues such as potential bias and security vulnerabilities, such as data leaks of personal data, in mind when creating frameworks for the deployment of Generative AI.
Data professionals need to upskill
When Singapore’s Deputy Prime Minister Lawrence Wong announced the updated National AI Strategy in December 2023, he said Singapore will triple its AI talent pool in three to five years by training domestic talent and recruiting overseas. He said he plans to increase the number to 15,000.
As generative AI moves into the mainstream, the role of data professionals will evolve from being simply custodians of information to guardians of ethical AI practices. Therefore, it is critical that data professionals prioritize AI governance training instead of taking a “wait-and-see” attitude. As AI becomes increasingly integrated into decision-making processes, data professionals must have the skills to navigate the ethical complexities surrounding data management. This makes data protection officers (DPOs) and data governance experts invaluable as companies develop and deploy generative AI.