Even as AI becomes a household term, some applications and industries have been keener to adopt artificial intelligence than others. For instance, after the Echo became available in 2015, Amazon sold approximately 4.4 million of the devices. The product became so ubiquitous that “Hey Alexa” makes an appearance in many contemporary TV shows and movies. We’re eager to use AI when it makes our lives easier and at least appears relatively simple.
In the surveillance industry, however, AI adoption has been met with skepticism. In an industry so heavily regulated and so dependent on effective reporting, it is understandable to be wary of the promises of AI and to wonder: why now?
After all, AI is a bit of a buzzword, isn’t it? When pressed to actually define it, many cannot. Firms cannot explain the full value of the AI-powered solutions becoming available, their potential impact, or how AI fits into their surveillance workflows.
The adoption of AI, however, can help firms keep up with a tightening regulatory landscape, expanding surveillance laws globally, and an increasing number of channels to monitor in a remote professional world. Encouraged by early successes with AI enablement, most financial institutions see AI as the future, and regulators are deeply exploring what is required to build and validate AI models. Looking at the trajectory of this industry, it is clear that AI will play an increasingly large role in helping firms avoid fines and remain compliant.
The Path Toward AI
Historically, surveillance has been slow to adopt technology solutions. The prevailing viewpoint was that the industry would self-regulate. Checks and balances were enacted using simple surveillance tools such as term searching and manual review of communications. These measures were reactive, allowing teams to comply with relevant regulations such as the SEC recording requirements (2003), the Reg NMS market data rules (2005), the Dodd-Frank recording requirements (2010), and the CFTC’s Regulation Automated Trading (2015).
Many organizations considered themselves advanced simply for maintaining lexicons and filtering by groups. The industry, however, has been aware of the potential benefits of AI for several years. Since the FX and LIBOR scandals, there has been a recognition that communications monitoring is the only feasible way to manage the market abuse risks of collusion as well as employee misconduct. Making this process effective and efficient continues to challenge compliance teams across the regulated community.
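To see why lexicon-based term searching struggles, consider a minimal sketch of that approach. The lexicon terms and messages below are hypothetical, not drawn from any real surveillance product: any message containing a listed phrase is flagged, with no regard for context, which is exactly how benign messages become false positives.

```python
# Naive lexicon-based surveillance: flag any message containing a term.
# LEXICON and the sample messages are hypothetical, for illustration only.
LEXICON = {"guarantee", "off the books", "between us"}

def flag_message(message: str) -> bool:
    """Return True if any lexicon term appears anywhere in the message."""
    text = message.lower()
    return any(term in text for term in LEXICON)

messages = [
    "Keep this between us until the deal closes.",     # flagged
    "Our product comes with a money-back guarantee.",  # flagged, yet benign
    "Lunch at noon?",                                  # not flagged
]

flags = [flag_message(m) for m in messages]
```

The second message is flagged even though it is harmless marketing copy; a human reviewer must then clear it, which is the manual burden described above.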
Reception from the Industry and Regulators
Concerns about effective AI adoption in surveillance have largely been prompted by limited access to high-quality data (how to ensure AI qualifies a good alert) and a lack of transparency into how the algorithms work (how AI determines high-risk behaviors). However, advances in technology and implementation strategy now allow software to detect risk by understanding the context of a conversation. Models have access to thousands of high-quality training examples thanks to increased scale and automation. As for transparency, regulators and clients are becoming more knowledgeable about AI and how it works, making both sides more comfortable with the results.
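The shift from fixed terms to training examples can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration (the labeled examples and scoring rule are invented, not any vendor's model): instead of matching exact phrases, a message is scored by how its words weigh toward labeled "risky" versus "benign" examples.

```python
# Minimal sketch of example-driven risk scoring (hypothetical data).
# Real systems use far richer models; this only shows the principle of
# learning from labeled examples rather than matching a fixed lexicon.
from collections import Counter

training = [  # (message, label) -- invented training examples
    ("delete this chat after reading", "risky"),
    ("keep this off the official channel", "risky"),
    ("please read the attached report", "benign"),
    ("meeting moved to the main channel", "benign"),
]

counts = {"risky": Counter(), "benign": Counter()}
for text, label in training:
    counts[label].update(text.lower().split())

def risk_score(message: str) -> float:
    """Fraction of words seen more often in risky than benign examples."""
    words = message.lower().split()
    risky_hits = sum(counts["risky"][w] > counts["benign"][w] for w in words)
    return risky_hits / len(words) if words else 0.0
```

Even this toy scorer generalizes beyond its exact training phrases, which is why more training data directly improves alert quality, and why its word-level weights are inspectable, the kind of transparency regulators ask about.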
In fact, FINRA’s 2021 report, “Artificial Intelligence (AI) in the Securities Industry,” expressed support for AI in communication surveillance strategy: “[AI] enables firms to holistically surveil and monitor various functions across the enterprise, as well as monitor conduct across various individuals (e.g., traders, registered representatives, employees, and customers), in a more efficient, effective, and risk-based manner.”
In its Business Strategy memo, the FCA stated that one of its goals for 2022/2023 is to increase detection capability through advanced analytics and better data coverage. These priorities were identified largely because:
- The financial surveillance industry has largely moved to a cloud-based system rather than on-premises solutions.
- A recent poll by 1LoD found that 93 percent of banks support a move to risk-based surveillance, rather than 100 percent coverage.
The excitement for AI is based on the expectation that data quality will improve, helping teams monitor risk in the new hybrid workplace. It is clear that support from regulators and compliance teams grows as their understanding of the underlying processes deepens.
AI in the Standard Workflow
Within two years, every organization monitoring communications will be using AI to extend beyond traditional approaches and catch bad actors. AI adoption and integration into the standard surveillance workflow must be a priority now in order to find misconduct effectively and stop it.
In the aforementioned 1LoD poll, 32 percent of banks responded that the false positives generated are "out of control," and 49 percent said the biggest efficiency gains in the long run will come from adding AI or machine learning (ML) technologies to the alert generation process.
As AI is further embedded in the standard workflow, teams will benefit from AI working in concert with their employees. By focusing on defensibility, explainability, and transparency, compliance leaders will find that data cleansing and AI-powered transcription level up their risk management, protecting them from regulatory actions and headline-making misconduct. Interest in reducing false positives and finding real risk with an AI-backed communications surveillance platform like Relativity Trace continues to grow. In service of this demand, we’re devoted to continuing to break down the ins and outs of AI.