As the use of generative artificial intelligence (GenAI) tools booms, new “file upload” options introduced on many platforms, such as OpenAI’s ChatGPT, have been blamed for a significant increase in attempts to share sensitive data outside of a company’s private network.
Researchers at Menlo Security tracked an 80% increase in file upload attempts to GenAI sites from July to December 2023.
“Data loss [incidents] due to file uploads are on the rise. Previously, most platforms did not natively allow file uploads, but as new versions of generative AI platforms are released, new features such as the ability to upload files have been added,” Menlo Security researchers said in their data loss protection (DLP) risk study released on Wednesday.
The study highlights the security risks posed by GenAI tools and their platform owners, which collect large amounts of user data that can be exposed through the platforms’ own large language model datasets. Menlo Security cited the March 2023 OpenAI data breach involving account data (not data uploaded or pasted in as user queries), in which 1.2 million subscriber records were exposed.
“These uses of generative AI have the biggest impact on data loss because they make it easy and quick to upload and input data such as source code, customer lists, roadmap plans, and personally identifiable information,” the researchers wrote.
According to the February 14 report (registration required), attempts to input PII into GenAI platforms accounted for more than half (55%) of DLP events. The next most common type of data users attempted to share with GenAI platforms was confidential documents (40%).
Closing the GenAI data hole
The evolution of GenAI has outpaced organizations’ efforts to train employees on DLP risks, said Pejman Roshan, chief marketing officer at Menlo Security. “While there has been a commendable decline in copy-and-paste attempts over the past six months, the dramatic increase in file uploads has created significant new risks.”
The report also noted a 26% increase in security policies restricting access to GenAI tools. DLP efforts on GenAI platforms fall into two groups: domain-based blocking of GenAI websites, and user-based permission approaches.
“Security and IT teams that enforce policies on a domain-by-domain basis need to revisit that list frequently to ensure users are not accessing, or even exposing sensitive data to, lesser-known platforms,” the researchers wrote. “This process can be time-consuming and ultimately not scalable. Organizations need to adopt security technologies that enable policy management at the generative AI group level and provide protection for a broader range of generative AI sites.”
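To illustrate the difference between the two policy styles the researchers describe, here is a minimal sketch in Python. The domain names, category tags, and function names are hypothetical examples, not taken from the report or any vendor product: a static blocklist only covers hosts someone remembered to list, while a group-level policy covers any host tagged into the GenAI category.

```python
# Minimal sketch of the two DLP policy styles discussed above.
# All domain names and category data are hypothetical, for illustration only.

# Domain-by-domain approach: a hand-maintained blocklist.
BLOCKED_DOMAINS = {"chat.openai.com", "gemini.google.com"}

# Group-level approach: domains tagged by category (e.g. from a URL
# categorization feed), so new sites are covered once the feed labels them.
DOMAIN_CATEGORIES = {
    "chat.openai.com": "genai",
    "gemini.google.com": "genai",
    "new-llm-tool.example": "genai",   # covered without editing any blocklist
    "docs.example.com": "productivity",
}

def allowed_by_domain_list(host: str) -> bool:
    """Domain-by-domain policy: blocks only explicitly listed hosts."""
    return host not in BLOCKED_DOMAINS

def allowed_by_category(host: str, blocked=frozenset({"genai"})) -> bool:
    """Group-level policy: blocks any host tagged with a blocked category."""
    return DOMAIN_CATEGORIES.get(host) not in blocked

# A lesser-known GenAI site slips past the static list but not the category rule.
host = "new-llm-tool.example"
print(allowed_by_domain_list(host))  # True  -> upload allowed (policy gap)
print(allowed_by_category(host))     # False -> upload blocked
```

The gap shown on the last lines is the scalability problem the researchers point to: every new GenAI site requires a manual blocklist edit, whereas the category-based rule needs none.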