The Treasury is sounding the alarm on the cybersecurity risks posed by the growing use of artificial intelligence (AI) in the financial services sector.
In a new report released Wednesday (March 27), the department highlights the potential risks and calls for urgent cooperation between government and industry to protect financial stability. The report, mandated by an executive order from the Biden administration, focuses on a widening capability gap: while large banks and financial institutions have the resources to develop custom AI systems, smaller institutions are increasingly left behind, often relying on third-party AI solutions that leave them more vulnerable.
“Artificial intelligence is redefining cybersecurity and fraud in the financial services sector, and the Biden administration is working with financial institutions to leverage emerging technologies while safeguarding against threats to operational resiliency and financial stability,” said Nellie Liang, Treasury Under Secretary for Domestic Finance, in a news release.
“Treasury’s AI report builds on the success of public-private partnerships for secure cloud adoption and lays out a vision for how financial institutions can securely map out their business lines and disrupt rapidly evolving AI-driven fraud.”
Lack of fraud prevention data sharing
A Treasury investigation found a lack of data sharing around fraud prevention, putting small financial institutions at an even greater disadvantage: while large institutions can leverage vast amounts of data to train their models, smaller ones lack the data needed to develop effective AI fraud defenses. The report was prepared by the Treasury Department’s Office of Cybersecurity and Critical Infrastructure Protection and is based on interviews with more than 40 companies in the financial and technology sectors.
Narayana Pappu, CEO of Zendata, a San Francisco-based provider of data security and privacy compliance solutions, told PYMNTS that the biggest barrier for small financial institutions using AI for fraud detection is not model creation but access to high-quality, standardized fraud data. He said financial institutions could act as nodes that aggregate that data.
“Data standardization and quality assessment would be a great opportunity for startups to offer as a service,” he added. “Technologies such as differential privacy can facilitate information sharing between financial institutions without exposing individual customer data, a concern that often keeps them from sharing in the first place.”
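To make Pappu’s point concrete, here is a minimal sketch of the Laplace mechanism, the textbook differential-privacy technique for releasing aggregate statistics. Everything in it (function names, fraud categories, counts, the epsilon value) is a hypothetical illustration, not something prescribed by Pappu or the Treasury report.

```python
import numpy as np

def dp_count(true_count: int, epsilon: float, rng=None) -> float:
    """Release a count with epsilon-differential privacy via the Laplace
    mechanism. For a counting query, one customer's record changes the
    result by at most 1, so the noise scale is 1 / epsilon."""
    rng = rng or np.random.default_rng()
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical scenario: an institution shares noisy fraud-event counts
# per channel instead of raw customer records.
fraud_events = {"card_not_present": 412, "wire": 37, "ach": 58}
epsilon = 0.5  # smaller epsilon = stronger privacy, noisier statistics
shared = {channel: dp_count(n, epsilon) for channel, n in fraud_events.items()}
print(shared)
```

The aggregate shape of the data survives the noise, while any single customer’s presence or absence has only a bounded effect on what gets shared.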
Marcus Fowler, CEO of Darktrace Federal, told PYMNTS in an interview that the tools used by attackers and defenders, and the digital environments they must protect, are constantly changing and becoming increasingly complex.
“Specifically, while the use of AI by adversaries is still in its infancy and we don’t know exactly how it will evolve, we do know that the barriers to entry are already low and will only fall further as adversaries deploy advanced techniques more quickly and at scale,” he added.
“To effectively protect your organization in the age of offensive AI, you need to increase your inventory of defensive AI.”
Fowler said financial services organizations have always been prime targets for cyber threats because of the essential services they provide, and as a result they typically maintain highly developed, complex cybersecurity programs.
“AI represents the greatest advancement yet in truly augmenting the cyber workforce, and these organizations are a great example of how to apply AI effectively to security operations to increase agility and strengthen defenses against emerging threats,” he said.
“We encourage these organizations to foster open conversations about the successes and failures of AI adoption and to help other organizations across sectors accelerate the adoption of AI in cybersecurity.”
“Nutrition label” for AI
The report’s recommendations include streamlining regulatory oversight to avoid fragmentation as various financial regulators grapple with the challenges posed by AI. It also proposes extending standards developed by the National Institute of Standards and Technology (NIST) to apply specifically to financial services. In addition, the report advocates best practices for data tracking and the development of “nutrition labels” for AI vendors: labels that would clarify what types of data were used in an AI model, where that data originated, and how the model is intended to be used.
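The report does not prescribe a label format, but the idea translates naturally into a structured record. The sketch below is a hypothetical Python rendering; every field name is an illustrative assumption rather than a standard from the report.

```python
from dataclasses import dataclass

@dataclass
class ModelNutritionLabel:
    """Hypothetical 'nutrition label' for an AI model, loosely inspired
    by the Treasury report's recommendation. All fields are illustrative."""
    model_name: str
    vendor: str
    intended_use: str                 # what the model should be used for
    training_data_sources: list[str]  # where the training data came from
    data_types: list[str]             # kinds of data in the training set
    contains_personal_data: bool      # whether PII was used in training
    last_updated: str                 # date of the most recent retraining

label = ModelNutritionLabel(
    model_name="fraud-scorer-v2",
    vendor="Example AI Co.",
    intended_use="Transaction fraud risk scoring for card payments",
    training_data_sources=["internal transaction logs", "consortium fraud data"],
    data_types=["transaction amounts", "merchant categories", "device metadata"],
    contains_personal_data=True,
    last_updated="2024-03-01",
)
print(label)
```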
Additionally, the report argues for addressing “black box” systems by increasing the explainability of complex AI, especially in the rapidly evolving field of generative AI. It emphasizes the importance of bridging the human capital gap by developing training and competency standards for people who work with AI systems. Other key points include creating a common AI vocabulary to standardize definitions, addressing digital identity issues to strengthen fraud prevention, and fostering international cooperation to align AI regulation and risk mitigation strategies.
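On the explainability point, one widely used model-agnostic technique is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops, revealing which inputs actually drive a black-box model’s predictions. The sketch below uses scikit-learn on synthetic data and is purely illustrative; the Treasury report does not endorse any particular method.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for transaction features; real fraud data stays private.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A gradient-boosted model playing the role of the "black box."
model = GradientBoostingClassifier().fit(X_train, y_train)

# Shuffle each feature 10 times and record the average drop in accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: importance {result.importances_mean[i]:.3f}")
```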
Separately, research by PYMNTS Intelligence reveals that financial institutions (FIs) are adopting a range of fraud prevention strategies, relying on a combination of internal fraud prevention systems, external resources, and emerging technologies to protect their operations and customers.
In a 2023 report titled “The State of Fraud and Financial Crime in the United States,” published in September, PYMNTS Intelligence found that 66% of bank leaders reported using AI and machine learning (ML) to combat fraud, a significant increase from 34% the previous year.
However, the report notes that developing AI and ML tools is costly, which may explain why only 14% of FIs embark on creating their own AI and ML solutions to combat fraud. PYMNTS further notes that approximately 30% of FIs rely entirely on external vendors for these technologies; similarly, only 11% develop APIs in-house, while 22% exclusively use third-party API solutions.