(This article is part of a series on artificial intelligence for directors and senior management.)
Generative AI, especially large language models (LLMs) like ChatGPT, is the most important product to emerge since artificial intelligence research began, at least in terms of its contribution to economic productivity.
How important is it? McKinsey estimates that generative AI could add up to $7.9 trillion annually to the global economy. That is roughly equivalent to the combined economic output of Canada, the United Kingdom, Russia, and Austria.
However, despite its great potential, generative AI is not without its drawbacks. In fact, you might say its arrival has made things rather confusing so far. So, ten months after the release of GPT-4, let’s take a look at the main issues with generative AI and some ideas for overcoming them.
1. Accuracy
We’ve all heard about ChatGPT’s hallucination problem. LLMs are designed to produce probabilistic answers from their training data, and they can be a little overzealous in providing them. Rather than say, “I don’t know,” they make things up, and they have been known to open a Pandora’s box of problems, from brand damage to regulatory violations.
It helps to build topic and ethical guardrails into your generative AI models. You can also use knowledge bases specific to your enterprise domain to inform your generative AI models. But in the near term, I think companies will need to keep refining their prompt engineering for accuracy and keep humans in the loop to double-check everything an LLM produces.
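For technically minded readers, here is a minimal sketch of one common way to apply an enterprise knowledge base: retrieval-based grounding, in which the model is only allowed to answer from retrieved company documents. The `call_llm` placeholder, the toy knowledge base, and the keyword retrieval are illustrative assumptions, not a prescription for any particular product.

```python
# A minimal sketch of retrieval-based grounding, assuming a hypothetical
# call_llm() stand-in for whatever approved model endpoint your team uses.

KNOWLEDGE_BASE = {
    "refund-policy": "Refunds are issued within 30 days of purchase with a receipt.",
    "warranty": "Hardware carries a one-year limited warranty.",
}

def call_llm(prompt: str) -> str:
    """Placeholder: wire this up to your approved LLM endpoint."""
    raise NotImplementedError

def retrieve_passages(question: str) -> list[str]:
    """Naive keyword match; a real system would use vector search."""
    terms = question.lower().split()
    return [
        text for key, text in KNOWLEDGE_BASE.items()
        if any(term in key or term in text.lower() for term in terms)
    ]

def answer(question: str) -> str:
    passages = retrieve_passages(question)
    if not passages:
        # Guardrail: admit uncertainty instead of letting the model guess.
        return "I don't know - no supporting company documents were found."
    prompt = (
        "Answer ONLY from the passages below. If they are insufficient, "
        "reply 'I don't know.'\n\n"
        + "\n".join(f"- {p}" for p in passages)
        + f"\n\nQuestion: {question}"
    )
    return call_llm(prompt)  # output should still be reviewed by a human
```

The point for leadership is not the code itself but the pattern: the model answers only from sources you control, and it is allowed to say “I don’t know.”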
2. Bias
The problem of bias creeping into AI is old news (see “How AI Makes Terrible Mistakes: 5 Biases That Lead to Failure”). However, the rapid adoption of generative AI has amplified this concern more than most people expected. Instead of worrying about small biases creeping in here and there, corporate leaders now need to worry about bias taking over the corporate culture entirely.
Does the ease of use of programs like ChatGPT mean that our diverse voices get flattened into a single perspective? Critics say the software is “woke,” accusing it of being plagued by political bias, of perpetuating gender bias, and of being inherently racially biased.
To ensure that generative AI does not perpetuate harmful perspectives within an organization, engineering teams must stay close to this issue and work to instill the company’s own values, and broader human values, into the AI.
3. Volume
Before generative AI made it easy to create new content, we were already drowning in information: emails, e-books, web pages, social media posts, and other works. Even the volume of job applications has skyrocketed, thanks to the ability to quickly generate customized resumes and cover letters with AI. Managing this flood of new information can be difficult.
How do you leverage the vast volume of assets your organization generates? How do you store all that information? How do you make sense of data analytics and marketing asset attribution? And when so much of it is produced by AI, how do you decide what, and whose, work to value?
To avoid chaos and employee burnout, you need to put the right teams, technology, and tactics in place to stay on top of this situation, because the volume will only continue to grow.
4. Cybersecurity
Generative AI has significantly improved the ability of malicious actors to launch new cyberattacks. It can be used to analyze code for vulnerabilities and create malware to exploit them, to produce deepfake videos and voice clones for fraud and virtual kidnapping schemes, to craft persuasive emails for phishing attacks, and more. Additionally, code written with the assistance of AI may be more susceptible to hacking than code written entirely by humans.
In this case, the best response is to fight fire with fire. AI can be used to analyze code for vulnerabilities and perform continuous penetration testing to improve defense models.
But remember that the biggest cybersecurity vulnerability in your organization is people. Generative AI can analyze logs of user activity to flag risky behavior, but the first line of defense is training staff to be even more vigilant than before.
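As a small illustration of that kind of automated log review, the sketch below flags users with an unusual number of after-hours downloads. The log format, the workday window, and the threshold are all invented for the example; a real deployment would use your own telemetry and tuning.

```python
from collections import Counter
from datetime import datetime

# Hedged sketch: flag users with an unusual number of after-hours downloads.
# The (user, timestamp, action) format and the threshold are illustrative only.
SAMPLE_LOG = [
    ("alice", "2024-01-10T02:14:00", "download"),
    ("alice", "2024-01-10T02:20:00", "download"),
    ("alice", "2024-01-10T02:31:00", "download"),
    ("bob",   "2024-01-10T14:05:00", "download"),
]

def after_hours(timestamp: str) -> bool:
    """Outside a nominal 06:00-22:00 workday."""
    hour = datetime.fromisoformat(timestamp).hour
    return hour < 6 or hour >= 22

def flag_risky_users(log, threshold: int = 3) -> list[str]:
    counts = Counter(
        user for user, timestamp, action in log
        if action == "download" and after_hours(timestamp)
    )
    return [user for user, n in counts.items() if n >= threshold]

if __name__ == "__main__":
    print(flag_risky_users(SAMPLE_LOG))  # ['alice']
```

In practice an LLM or anomaly-detection model would sit on top of far richer signals, but the governance question is the same: flagged activity should go to a human reviewer, not trigger automatic punishment.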
5. Intellectual property issues
Lawsuits by artists, writers, stock photo agencies and others allege that their proprietary data and styles were used to train generative AI software without their permission. Companies using generative AI software are worried about being caught up in this debacle.
Who is liable if an ad campaign image created by generative AI inadvertently infringes on someone else’s copyrighted work? And who owns the assets created with generative AI in the first place? You? The generative AI software company? The AI itself? So far, rulings have found that works created by humans with the assistance of AI can be protected by copyright, but the jury is still out on patents.
My advice is to keep humans in the loop on all asset creation and to make sure your legal team continues its due diligence as the law evolves rapidly.
6. Shadow AI
According to a Salesforce survey of 14,000 employees in 14 countries, half of corporate employees who use generative AI tools do so without their organization’s approval. It would be impossible to put the genie back in the bottle on this one. Therefore, it is best to develop a governance policy for generative AI and establish a program to teach staff what responsible use looks like.
You should also talk to your IT leaders about what they are doing to discover and manage the generative AI tools showing up on corporate devices.
Generative AI still has issues to address. But we are in the midst of unprecedented changes in the way we do business, and we need to manage them.
Few industries, from healthcare and banking to logistics, insurance, customer service, and e-commerce, will escape sudden, lightning-fast disruption from generative AI. The vulnerability to disruption, and its velocity, have never been greater. Companies that figure out how to leverage this technology effectively will create a flywheel effect that makes it very difficult for competitors who fall behind to recover. AI needs to be a board-level priority this year. (See “AI Threat: Winner Takes All.”)
If you’re interested in how AI determines business winners and losers, how you can leverage AI for your organization’s benefit, and how you can manage AI risks, please stay tuned. I write and speak about how senior executives, board members, and other business leaders can use AI effectively. You can read past articles and receive notifications of new ones by clicking the “Follow” button here.
Follow me on LinkedIn. Check out my website.