Major U.S. employers such as Walmart, Delta Air Lines, T-Mobile, Chevron and Starbucks, as well as European brands such as Nestlé and AstraZeneca, have turned to seven-year-old startup Aware to monitor chatter among their employees, according to the company.
Jeff Schumann, co-founder and CEO of the Columbus, Ohio-based startup, said the AI helps companies “understand the risks within their communications,” getting a read on employee sentiment in real time rather than depending on an annual or twice-a-year survey.
Using anonymized data from Aware’s analytics products, clients can see how employees of a particular age group or in a particular geography are responding to a new corporate policy or marketing campaign, Schumann said. Aware’s dozens of AI models, built to read text and process images, can also identify bullying, harassment, discrimination, noncompliance, pornography, nudity and other behaviors, he said.
Aware’s analytics tools, which monitor employee sentiment and toxicity, don’t have the ability to flag individual employee names, Schumann said. But its separate eDiscovery tool can, in the event of extreme threats or other risk behaviors that are predetermined by the client, he added.
Walmart, T-Mobile, Chevron, Starbucks and Nestlé did not respond to CNBC’s requests for comment on their use of Aware. A representative for AstraZeneca said the company uses the eDiscovery product but doesn’t use the analytics to monitor sentiment or toxicity. Delta Air Lines told CNBC that it uses Aware’s analytics and eDiscovery to monitor trends and sentiment as a way of gathering feedback from employees and other stakeholders, and to retain legal records on its social media platform.
You don’t have to be a dystopian literature buff to see where it could all go very wrong.
Jutta Williams, co-founder of the AI accountability nonprofit Humane Intelligence, said programs that assess things like corporate espionage risk, especially within email communications, have existed for years. AI, she said, adds a new and potentially problematic wrinkle to them.
Speaking broadly about employee-monitoring AI rather than Aware’s technology specifically, Williams told CNBC that “a lot of it amounts to thought crimes.” She added, “This is treating people like inventory in a way I’ve never seen before.”
Employee-monitoring AI is a rapidly expanding but niche part of a larger AI market that has exploded in the past year, following the launch of OpenAI’s ChatGPT chatbot in late 2022. Generative AI has quickly become a common phrase on corporate earnings calls, and some form of the technology is automating tasks in nearly every industry, from financial services and biomedical research to logistics, online travel and utilities.
Schumann told CNBC that Aware’s revenue has grown an average of 150% per year over the past five years, and that its typical customer has about 30,000 employees. Top competitors include Qualtrics, Relativity, Proofpoint, Smarsh and Netskope.
By industry standards, Aware remains very lean. The company last raised funding in 2021, when it pulled in $60 million in a round led by Goldman Sachs Asset Management. Compare that with large language model (LLM) companies such as OpenAI and Anthropic, which have each raised billions of dollars, largely from strategic partners.
Schumann founded the company in 2017 after spending almost eight years working on enterprise collaboration at insurance company Nationwide.
Before that, he was an entrepreneur. And Aware isn’t the first company he’s started that evokes Orwell.
In 2005, Schumann founded a company called BigBrotherLite.com. The company developed software that “enhances the digital and mobile viewing experience” for the CBS reality series “Big Brother,” according to his LinkedIn profile. In Orwell’s classic novel “1984,” Big Brother was the leader of a totalitarian state whose citizens were under perpetual surveillance.
”I created a simple player that focuses on a cleaner and easier consumer experience for people to watch TV shows on their computers,” Schumann said in an email.
With Aware, he’s doing something completely different.
Every year, the company publishes a report aggregating insights from the billions of messages sent across large enterprises (6.5 billion in 2023), tabulating perceived risk factors and workplace sentiment scores. Schumann calls the trillions of messages sent across workplace communication platforms each year “the world’s fastest-growing unstructured data set.”
When other types of shared content, such as images and videos, are included, Aware’s analytics AI processes more than 100 million pieces of content every day. In doing so, the technology builds a social graph of the company, examining which teams talk to each other internally more than others.
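The article doesn’t describe how Aware builds that graph, but the general technique is easy to illustrate. Below is a minimal sketch, not Aware’s implementation: it assumes hypothetical message metadata consisting only of sender and recipient team labels, no message content, and counts how heavily each pair of teams communicates, including each team’s internal chatter.

```python
from collections import Counter

# Hypothetical message metadata; the field names and records are
# illustrative stand-ins, not Aware's actual schema.
messages = [
    {"from_team": "payments", "to_team": "payments"},
    {"from_team": "payments", "to_team": "risk"},
    {"from_team": "risk", "to_team": "risk"},
    {"from_team": "risk", "to_team": "payments"},
    {"from_team": "hr", "to_team": "hr"},
]

# Edge weights of a team-level social graph: how often each pair of
# teams exchanges messages (direction ignored).
edges = Counter(tuple(sorted((m["from_team"], m["to_team"]))) for m in messages)

# Teams that talk more internally than others: the weight of each
# team's self-loop in the graph.
teams = {m["from_team"] for m in messages} | {m["to_team"] for m in messages}
internal = {team: edges[(team, team)] for team in teams}

print(edges.most_common())  # heaviest communication channels first
print(internal)             # internal chatter per team
```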
“We’re constantly tracking real-time employee sentiment, and we’re constantly tracking real-time toxicity,” Schumann said of the analytics tools. “If you were a bank using Aware and the sentiment of the workforce spiked in the last 20 minutes, it’s because they’re talking about something positively, collectively. The technology would be able to tell them whatever it was.”
Aware confirmed to CNBC that it uses data from its enterprise clients to train its machine-learning models. The company’s data repository contains about 6.5 billion messages, which Aware says represent about 20 billion individual interactions across more than 3 million unique employees.
When a new client signs up for the analytics tool, Aware’s AI models spend about two weeks training on employee messages to learn the patterns of emotion and sentiment within the company, so they can tell what’s normal versus abnormal, Schumann said.
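To make the “normal versus abnormal” idea concrete, here is a minimal sketch of baseline-then-anomaly sentiment tracking. It is not Aware’s method: the per-interval scores, the training window and the three-standard-deviation threshold are all hypothetical stand-ins.

```python
import statistics

def build_baseline(scores):
    """Learn what 'normal' looks like from a training window of sentiment
    scores, e.g., two weeks of per-interval averages in [-1, 1]."""
    return statistics.mean(scores), statistics.stdev(scores)

def is_abnormal(score, baseline, threshold=3.0):
    """Flag intervals deviating more than `threshold` standard deviations
    from the learned baseline: a spike in either direction."""
    mean, stdev = baseline
    return abs(score - mean) > threshold * stdev

# Hypothetical per-20-minute sentiment averages from the training window.
training = [0.12, 0.10, 0.15, 0.11, 0.09, 0.13, 0.14, 0.10]
baseline = build_baseline(training)

print(is_abnormal(0.11, baseline))  # False: within normal variation
print(is_abnormal(0.65, baseline))  # True: a collective sentiment spike
```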
”It won’t have names of people, to protect the privacy,” Schumann said. Rather, clients will see that “maybe the workforce over the age of 40 in this part of the United States is seeing the changes to [a] policy very negatively because of the cost, but everybody else outside of that age group and location sees it positively because it impacts them in a different way.”
Aware’s eDiscovery tool, however, operates differently. Companies can set up role-based access to employee names depending on “extreme risk” categories of the company’s choosing, which instruct Aware’s technology to pull an individual’s name, in certain cases, for human resources or another company representative.
“What we often see is extreme violence, extreme bullying and harassment, but it does vary by industry,” Schumann said, adding that in financial services, suspected insider trading would be tracked.
For instance, a client can use Aware’s technology to specify a “violent threats” policy, or any other category, and have the AI models monitor for violations in Slack, Microsoft Teams and Workplace by Meta, Schumann said. The client could also couple that with rule-based flags for certain phrases or statements. If the AI found something that violated a company’s designated policies, it could provide the employee’s name to the client’s designated representative.
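The combination Schumann describes, deterministic rule-based flags layered on top of a statistical model, is a common pattern in content moderation. Below is a minimal, self-contained sketch of that pattern; the phrases, the toy scoring function and the threshold are hypothetical, and a real deployment would call a trained classifier rather than a lexicon lookup.

```python
import re

# Hypothetical rule-based flags: exact phrases a client might configure.
RULE_PATTERNS = [re.compile(p, re.IGNORECASE)
                 for p in (r"\byou will regret this\b", r"\bor else\b")]

def model_score(text: str) -> float:
    """Stand-in for an ML classifier scoring 'violent threat' likelihood.
    This toy version counts hits from a tiny lexicon so the sketch stays
    self-contained; a real system would run a trained model here."""
    lexicon = {"hurt", "destroy", "threat"}
    return min(1.0, len(set(text.lower().split()) & lexicon) / 2)

def flag_message(text: str, model_threshold: float = 0.5):
    """Combine rule-based flags with the model score; return (flagged, reason)."""
    for pattern in RULE_PATTERNS:
        if pattern.search(text):
            return True, f"rule match: {pattern.pattern}"
    score = model_score(text)
    if score >= model_threshold:
        return True, f"model score {score:.2f} >= {model_threshold}"
    return False, "no violation detected"

print(flag_message("You will regret this."))                   # rule-based hit
print(flag_message("I want to hurt and destroy everything."))  # model hit
print(flag_message("Lunch at noon?"))                          # clean
```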
This type of practice has been used in email communications for many years. What’s new is the use of AI and its applications across workplace messaging platforms like Slack and Teams.
Amba Kak, executive director of New York University’s AI Now Institute, is concerned about the use of AI to help determine what counts as risky behavior.
“It results in a chilling effect on what people are saying in the workplace,” said Kak, adding that the Federal Trade Commission, Justice Department and Equal Employment Opportunity Commission have all expressed concerns on the matter, though she wasn’t speaking specifically about Aware’s technology. “These are as much worker rights issues as they are privacy issues.”
Schumann said that while Aware’s eDiscovery tool allows security or human resources investigation teams to use AI to search through massive amounts of data, a “similar but basic capability already exists today” in Slack, Teams and other platforms.
“The key difference here is that Aware and its AI models are not making decisions,” Schumann said. “Our AI simply makes it easier to comb through this new data set and identify potential risks or policy violations.”
Even when data is aggregated or anonymized, research suggests the concept itself is flawed. A landmark study of data privacy using 1990 U.S. Census data showed that 87% of Americans could be uniquely identified by ZIP code, birth date and gender alone. Aware clients using its analytics tools have the power to add metadata to message tracking, such as employee age, location, division, tenure or job function.
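That re-identification risk can be demonstrated with a k-anonymity check: if any combination of quasi-identifiers is unique within a data set, the “anonymous” record behind it is potentially identifiable. The records and field names below are hypothetical.

```python
from collections import Counter

# Hypothetical "anonymized" records: no names, but quasi-identifiers remain.
records = [
    {"age_band": "40+",   "location": "Ohio",  "division": "legal", "tenure": "10y+"},
    {"age_band": "40+",   "location": "Ohio",  "division": "legal", "tenure": "10y+"},
    {"age_band": "40+",   "location": "Ohio",  "division": "tax",   "tenure": "1-3y"},
    {"age_band": "25-39", "location": "Texas", "division": "legal", "tenure": "1-3y"},
]

QUASI_IDENTIFIERS = ("age_band", "location", "division", "tenure")

def k_anonymity(rows, keys):
    """Smallest group of records sharing the same quasi-identifier combination.
    k == 1 means at least one 'anonymous' record is unique, and therefore
    potentially re-identifiable, echoing the census study's 87% finding."""
    groups = Counter(tuple(r[k] for k in keys) for r in rows)
    return min(groups.values())

print(k_anonymity(records, QUASI_IDENTIFIERS))  # 1: unique records exist
```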
“What they’re saying relies on a concept that’s very outdated and, at this point, completely false: that anonymization and aggregation are some kind of silver bullet that will solve privacy problems,” Kak said.
Additionally, recent research has shown that the type of AI model Aware uses can be effective at generating inferences from aggregated data, accurately inferring personal identifiers based on language, context, slang terms and more.
“No company is in a position to make sweeping, blanket guarantees about the privacy and security of LLMs and these kinds of systems,” Kak said. “No one can tell you with a straight face that these challenges have been solved.”
And what about employee recourse? If an interaction is flagged and a worker is disciplined or fired, it’s difficult for them to mount a defense when they aren’t privy to all of the data involved, Williams said.
“How do you confront your accuser when we know that AI explainability is still in its infancy?” Williams said.
In response, Schumann said, “None of our AI models make decisions or recommendations regarding employee discipline.”
“When the model flags an interaction, it provides full context around what happened and which policy it triggered, giving investigation teams the information they need to decide next steps consistent with company policies and the law,” Schumann said.