Employees may be aware that the potential for leakage of sensitive data is the biggest risk of using publicly available generative artificial intelligence (AI) tools, but some continue to feed such data into them anyway.
This sensitive data includes customer information, sales and financial data, and personally identifiable information such as email addresses and phone numbers. According to a study published by Veritas Technologies, employees also lack clear policies and guidance regarding the use of these tools in the workplace.
Related article: 5 ways to use AI responsibly
The study, conducted in December 2023 by market research firm 3Gem, surveyed 11,500 employees worldwide, including respondents in Australia, China, Japan, Singapore, South Korea, France, Germany, the UK, and the US.
When asked about the risks their organizations face from using public generative AI tools, 39% of respondents cited the potential for sensitive data to be compromised, and 38% said the tools could generate inaccurate, imprecise, or unhelpful information. Additionally, 37% cited compliance risks, and 19% said the technology could negatively impact productivity.
Approximately 57% of employees use public generative AI tools in the office at least once a week, and 22.3% use the technology daily. About 28% of people said they do not use such tools at all.
Also: Best AI chatbots: ChatGPT and other notable alternatives
Nearly half (42%) of respondents said they use the tools for research and analysis, while 41% used generative AI to write email messages and take notes, and 40% to improve their writing.
Regarding the types of data that could provide business value when entered into publicly available generative AI tools, 30% of employees cited customer information such as references, bank details, and addresses. About 29% cited sales figures, 28% cited financial information, and 25% cited personally identifiable information. An additional 22% of workers mentioned sensitive human resources data, and 17% mentioned confidential company information.
Approximately 27% of respondents do not believe that putting such sensitive information into public generative AI tools brings value to their business.
Almost a third (31%) of employees admitted to entering such sensitive data into these tools, while 5% were unsure whether they had done so. Nearly two-thirds (64%) said they do not input sensitive data into publicly available generative AI tools.
Also: If we don’t act now, today’s AI boom will exacerbate societal problems
However, when asked about the benefits to their organization, 48% of respondents said the emerging technology could provide faster access to information, 40% cited increased productivity, 39% said generative AI could replace mundane tasks, and 34% said it could help generate new ideas.
Interestingly, 53% of employees consider colleagues' use of generative AI tools to be an unfair advantage, and 40% believe employees who use such tools should be required to teach the rest of their team. A further 29% thought colleagues who used such tools should be reported to their line manager, and 27% thought disciplinary action should be taken.
When it comes to formal guidance and policies on the use of public generative AI tools in the workplace, 36% of respondents said none were available. About 24% said they had a mandatory policy governing such use, while 21% said the guidelines in their workplace were optional. Additionally, 12% said their organization prohibits the use of generative AI tools in the workplace.
A majority of respondents (90%) believe it is important to have guidelines and policies around the use of emerging technologies, and 68% said everyone needs to know the "right way" to deploy generative AI.
Risks increase as GenAI usage increases
As the adoption of generative AI increases, the associated security risks may also increase.
According to IBM’s X-Force Threat Intelligence Index 2024, large-scale attacks on major platforms could occur when a single generative AI technology approaches 50% market share, or when the market consolidates to three or fewer technologies.
Also: Train AI models using your own data to reduce risk
The research is based on the technology vendor's monitoring of more than 150 billion security events per day in more than 130 countries, as well as data and insights from within IBM, including its Managed Security Services and Red Hat.
According to IBM, cybercriminals target technologies that are ubiquitous across organizations worldwide in order to profit from their campaigns. As generative AI gains market dominance, this approach will extend to AI as well, maturing AI as an attack surface and motivating cybercriminals to invest in new tools.
It is therefore important for enterprises to secure their AI models before threat actors scale up their operations, IBM warned. The report noted that there were more than 800,000 posts about AI and GPT across dark web forums in 2023, and added that identity-based threats will continue to grow as attackers use the technology to optimize their attacks.
The technology vendor described generative AI as the next big frontier to protect, saying: "Companies should also recognize that their existing underlying infrastructure is a gateway to their AI models that does not require new targeting tactics from attackers, underscoring the need for a holistic approach to security in the era of generative AI."
Also: These are my 5 favorite AI tools for work
Charles Henderson, global managing partner at IBM Consulting and head of IBM X-Force, said the fact remains that enterprises' biggest security problem boils down to the basic and known rather than the novel and unknown.
Additionally, exploiting valid accounts has become the path of least resistance for cybercriminals. The IBM Threat Intelligence Index found a 266% increase in malware attacks aimed at stealing personally identifiable information, including social media and messaging app credentials, banking details, and cryptocurrency wallet data.
Europe was the most targeted region in 2023, accounting for 32% of the incidents IBM's X-Force responded to worldwide, including 26% of global ransomware attacks. Such attacks contributed to 44% of all incidents experienced in Europe and were the driving force behind the region's rise to the top spot last year. IBM said Europe's heavy use of cloud platforms may also have expanded its attack surface compared with its global peers.
Asia Pacific, the most targeted region in 2021 and 2022, was the third most affected, accounting for 23% of global incidents, with North America accounting for 26%.
MORE: Do you have 10 hours? IBM will train you on AI basics for free
Globally, nearly 70% of attacks targeted critical infrastructure organizations, and nearly 85% of those incidents were caused by exploiting public-facing applications, phishing emails, or the use of valid accounts.
IBM noted that in 85% of attacks on critical sectors, the breach could have been mitigated through patching, multi-factor authentication, or least-privilege principles.