The average person sends or receives over 100 written communications per day, and around the world nearly 1.4 million voice and video calls are made every minute. Regulatory bodies know communication volumes are up and have been closely monitoring how firms surveil those communications across all platforms.
From the FCA’s Market Watch 69 to the 2022 Report on FINRA's Examination and Risk Monitoring Program, regulations are clear: communications of all types must be monitored.
And when they’re not, the fines are hefty: penalties in 2021 reportedly topped $5.4 billion (€4.7 billion), a sharp increase over prior years.
How can compliance teams keep up? The answer is moving beyond legacy solutions to embrace artificial intelligence (AI).
AI in Finance
In recent years, AI has started showing up everywhere: healthcare, ecommerce, transportation, and especially the world of finance. Banks have used AI to streamline front-office processes and enhance customer experiences for more than a decade. In consumer finance, customers care most about independence and security, and AI helps with both: chatbots powered by natural language processing answer customer questions 24/7, and fraud systems that constantly scour transactions for anomalies keep their money secure.
As financial institutions’ AI journeys have matured, so have the use cases, which is why the compliance industry is keen to adopt AI to stay ahead of regulatory misconduct and improve surveillance team performance. Exponential data growth and mounting fines have been driving forces behind this shift. Compliance teams cannot keep up with growing data volumes, especially in a remote work environment where more meetings happen virtually and communication is spread across a variety of messaging platforms.
AI: The Perfect Tool to Solve Communication Surveillance Challenges
Instead of relying solely on out-of-the-box, lexicon-only policies, forward-thinking surveillance teams have adopted AI to combat these problems. According to a recent report published by the FCA (Financial Conduct Authority), 67 percent of regulated firms indicated that they are using machine learning (ML) in live production environments. Industry surveys report similar attitudes toward AI and ML: 81 percent of C-suite respondents in financial services saw AI as important to their company’s future success, and more than half thought it gave them a competitive advantage.
Risk is too nuanced and bad actors are too smart for traditional surveillance alone. But when building and training AI solutions, there is no single algorithm, behavioral analytics tool, or model that will solve every surveillance challenge. Surveillance teams must therefore weigh several factors when choosing an AI-powered solution, chief among them data governance and explainability.
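To make the limitation of lexicon-only policies concrete, here is a minimal, purely illustrative sketch (the phrase list and function are hypothetical, not any vendor’s actual policy): a fixed lexicon flags exact matches but misses a trivially reworded version of the same risky statement, which is exactly the gap AI-based detection aims to close.

```python
# Hypothetical lexicon-only surveillance policy: flags verbatim phrase
# matches only, so simple rewording by a bad actor slips through.

LEXICON = {"guaranteed return", "off the books", "delete this chat"}

def lexicon_alert(message: str) -> bool:
    """Return True if any lexicon phrase appears verbatim in the message."""
    text = message.lower()
    return any(phrase in text for phrase in LEXICON)

print(lexicon_alert("Let's keep this off the books."))        # exact phrase: True
print(lexicon_alert("Let's keep this away from the books."))  # reworded: False
```

The second message carries the same risk as the first, but a verbatim phrase list cannot see it; models trained on varied examples of the underlying behavior can.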
1. Data governance.
Data security and privacy are top of mind when exploring AI models, and compliance teams need confidence that every model is built and delivered responsibly. Responsibility starts with appropriate data governance: Are models trained on sufficiently varied data, and built with sufficient security, to handle the sensitive work of communication surveillance? For example, Relativity Trace builds models from public data sets, including the latest regulatory findings, as well as synthetic data and expertise gained from years of e-discovery and compliance monitoring. This approach exposes models to enough variance to yield high performance in real-world workflows. Sufficient variance is critical: it keeps a model from overfitting to historical trends while leaving it robust enough to perform well in the future without degradation.
2. Explainability.
The other key to responsible AI is explainability, which operates at two levels: globally and locally. Global explainability helps teams understand how the AI works in principle; local explainability shows how it behaves for their specific organization. At the model level, certain questions should be answered before a model is released into production: What input variables feed the model? Who has access to it? How is it performing against key metrics? Only once these answers are satisfactory should a model be released for use. Locally, transparency for the individual user comes from the model’s confidence score and from highlighting exactly what the model alerted on, resulting in an approachable, understandable user experience.
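A rough sketch of what that local explainability can look like in practice (the term weights, `Alert` structure, and scoring rule are all hypothetical illustrations, not Relativity Trace’s actual logic): an alert carries both a confidence score and the highlighted spans that produced it, so a reviewer sees not just *that* a message was flagged but *why*.

```python
# Hypothetical "local explainability" for a surveillance alert: return a
# confidence score plus the exact spans the model alerted on, so the
# reviewer can see what drove the flag. Illustrative only.

from dataclasses import dataclass, field

RISK_TERMS = {"front-run": 0.6, "inside information": 0.9, "wipe the logs": 0.8}

@dataclass
class Alert:
    confidence: float                 # score surfaced to the reviewer
    highlights: list = field(default_factory=list)  # (term, start, end) spans

def explain_message(message: str) -> Alert:
    """Score a message and record which spans triggered the alert."""
    text = message.lower()
    hits, score = [], 0.0
    for term, weight in RISK_TERMS.items():
        idx = text.find(term)
        if idx != -1:
            hits.append((term, idx, idx + len(term)))
            score = max(score, weight)  # confidence = strongest signal found
    return Alert(confidence=score, highlights=hits)

alert = explain_message("He said he had inside information on the merger.")
print(alert.confidence)   # 0.9
print(alert.highlights)   # [('inside information', 15, 33)]
```

Surfacing the highlighted spans alongside the score is what turns a “black box” decision into something a compliance reviewer can verify and defend.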
Communication surveillance platforms should be able to fully answer all of these questions and more, so that compliance teams can confidently understand and invest in AI capabilities to detect risk, remove irrelevant content, and enhance review. Responsible AI improves surveillance monitoring no matter the data sources or workflows.
Responsible AI for the Future
AI once made compliance teams apprehensive because it felt like a “black box” approach. Today, AI offers an effective monitoring tool for compliance teams thanks to its ease of use and transparency. We’re learning that AI is not about ceding control; it’s about harnessing technology to help individuals do their jobs better.
Effective use of this technology is supported by defensible reporting, data visualizations, and clear, well-documented application. With thoughtful implementation, an AI tool can offer a suite of highly configurable capabilities that keep transparency and ethical considerations top of mind. That is why adopting AI in a tool like Relativity Trace helps compliance teams achieve their core mission: uncovering real risk, reducing review time, and building confidence in their results.