AI Poses a Threat to Financial Sector, and Cyberattackers are ‘Outpacing’ Defenses – Treasury

Law.com reports that the U.S. Treasury Department warned the financial services sector this week that artificial intelligence (AI) will become a powerful weapon for fraudsters and cyberattackers, who will outgun the sector’s defensive efforts in the foreseeable future. 

The report was based on interviews with representatives from 42 financial services and technology companies about the current state of AI fraud and cybersecurity risks and safeguards.

While neither the Law.com article nor the report delves into specifics, the days of laughably written phishing emails and SMS messages are rapidly becoming a thing of the past, as criminals use ChatGPT to craft more professional, grammatically correct emails and “alerts.” But as criminals turn ChatGPT and other AI tools to their advantage, can AI also help consumers and the financial sector accurately detect business email compromise (BEC) attempts and other fraud? And while large corporations and financial institutions will have the resources to deploy such defenses, what about small and medium-sized businesses?
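
As a rough illustration of the defensive side of that question (not anything drawn from the Treasury report), the sketch below uses an off-the-shelf zero-shot text classifier from the Hugging Face transformers library to score an email that shows classic BEC pressure tactics. The model name, sample message, and candidate labels are assumptions chosen purely for demonstration.

```python
# Illustrative sketch only: a coarse first-pass filter for suspicious email text
# using a general-purpose zero-shot classifier. The model and labels below are
# assumptions for demonstration, not tools described in the Treasury report.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

# Hypothetical message showing common BEC pressure tactics: authority, urgency,
# a new payee, and a request for secrecy.
email_text = (
    "Hi, this is the CFO. I need you to process an urgent wire transfer to a "
    "new vendor account before end of day. Keep this confidential for now."
)

# Candidate labels are hypothetical; a real deployment would tune them to the
# organization's own threat model and validate against labeled examples.
labels = ["business email compromise attempt", "routine business correspondence"]

result = classifier(email_text, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```

Even if a classifier like this flags the obvious cases, a model score alone is not a BEC defense; smaller organizations would still need sender-domain verification, out-of-band payment confirmation, and human review.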

American Banker notes that Treasury’s report, released on Wednesday, “discusses the inadequacies in financial institutions’ ability to manage AI risk — namely, not specifically addressing AI risks in their risk management frameworks — and how this trend has held financial institutions back from adopting expansive use of emerging AI technologies.”

The issues are not confined to the financial sector. Government agencies at the federal and state levels have been considering and attempting to develop regulations for the use of AI and the risks it poses. Last year, the U.S. House of Representatives limited staffers’ use of ChatGPT to the paid subscription version while banning the free version. Now Axios reports that the House has imposed a strict ban on congressional staffers’ use of Microsoft Copilot, the company’s AI-based chatbot.

For now, some expect a sectoral approach that will produce a slew of new laws and rules, further complicating compliance efforts above and beyond the already complicated patchwork of data security and data breach notification laws. AI has the potential to help entities detect data security problems or threats posed by AI, but will the regulatory framework become so burdensome that some entities avoid AI rather than embrace it? Years after the U.S. Department of Health and Human Services required covered entities to perform and update risk assessments, many entities still failed to conduct risk assessments or to address identified risks. How long will it be before we see entities address AI risks in their risk management frameworks?