The industry may be abuzz with the potential of artificial intelligence to transform the manner in which capital markets operate, but have we really thought things through properly? Is rushing into an implementation without the necessary groundwork really the best approach? After all, things didn’t work out too well for denizens of HBO’s “Westworld,” and the movie “Robocop” is testament to the perils of automating compliance without the right security controls in place.
If you haven’t seen it, “Westworld” envisions a future of realistic humanoid AI-based robots who dominate a Wild West-style theme park of sorts. Things go vastly awry when these beings start to think on their own, a cautionary tale about AI. In the business world, AI can similarly be perilous—or it can potentially help to save your enterprise.
Although it is easy to laugh off the lessons that can be learned from a TV series or a movie, they often have a recurring message that can easily be applied to the capital markets. AI is only as good as the data it has access to—until it becomes self-actualizing; then, if “Terminator” or “Westworld” are anything to go by, we’re toast anyway.
This year overall, and the COVID-19 crisis in particular, have proven that digitalization and transformation are essential to the future of this industry—not only in the front office, but in the middle and back office as well. I have had frequent conversations with operations teams over the last five months that indicate we still have a long way to go before we are fully efficient. Firms are still struggling with manual processes, dependent on fax machines, printers, scanners and paper-based documentation. A lot of information is hard to access, and automation is lacking within core post-trade functions. Artificial intelligence has huge potential to augment and transform current market practices, but we need to lay the groundwork to ensure it is implemented successfully. Market leaders have learned that firms first need to build out their data foundation layer to make the best use of this technology.
Over the last few years, there has been a lot of industry discussion about the potential of machine learning and AI to tackle challenges ranging from regulatory change management to improving the client experience. At Sibos 2018, the annual mega-banking conference that travels the globe, AI overtook blockchain and other technologies on the industry’s agenda, with a record number of sessions covering the many facets of machine learning (a total of 25 sessions during the four-day event). Last year, AI continued to dominate the agenda, and the conference’s closing keynote saw Google Cloud CEO Thomas Kurian championing the use of AI by industry regulators to ease the burden of compliance. More recently, however, attention has turned to the importance of the underlying data infrastructure in turning any of these technology ambitions into reality.
As we creep closer to this year’s first-ever virtual Sibos in October, it is important for the industry to keep data infrastructure improvements top of mind. After all, AI’s successful deployment for any task depends on the provision of high-quality, accurate data. The old adage of “garbage in, garbage out” has never been more relevant. Think back to “Westworld” again—would you want your AI making decisions based on faulty data? Of course, your compliance or trading system isn’t likely to shoot you in a Wild West gunfight if you get it wrong, but it could have significant negative impacts on your business and reputation.
Top-tier banks that have successfully deployed AI to support tasks such as market sentiment analysis or trade surveillance can tell you that though the technology is great at pattern recognition, it doesn’t work well if the data inputs are inconsistent or low volume. AI requires a high volume of consistent and compatible data to deliver actionable insights to traders, compliance teams, or any other functions and lines of business attempting to deploy the technology.
So, how does the industry tackle the thorny issue of data management when we must gather that data from a patchwork of systems, including legacy technologies dating back decades and static, inflexible data lakes that are more like “data swamps”? One approach is to deploy a next-generation, real-time intelligent data layer that sits between existing applications and AI-enabled applications, allowing seamless, real-time data access, integration and analysis. These applications must scale out dynamically to accommodate increases in data volumes and workloads as market activity and volatility spike in times of crisis. This means that firms can benefit from next-generation technology without an entire structural rebuild of every enterprise data store.
In contrast to the data lake initiatives of the past, these are “hot, smart data lakes.” Hot refers to the ability to incorporate real-time data; smart refers to the ability to harmonize that data and run advanced analytics that transform it into real value for the business.
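To make the “smart” harmonization idea concrete, here is a minimal sketch in Python. All system names, field names, and schemas below are hypothetical, invented purely for illustration: it shows how an intermediary layer might map inconsistently formatted trade records from two legacy systems onto one common schema, so that downstream analytics or AI models see consistent, compatible data.

```python
# Hypothetical sketch of a harmonization step in an intermediary data layer.
# The source systems, field names, and schema are invented for illustration.

def harmonize(record: dict, source: str) -> dict:
    """Map a source-specific trade record onto a common schema."""
    if source == "legacy_a":  # e.g. a decades-old settlement system
        return {
            "trade_id": record["TradeRef"],
            "symbol": record["Instrument"].upper(),  # normalize casing
            "quantity": int(record["Qty"]),          # strings -> numbers
            "price": float(record["Px"]),
        }
    if source == "legacy_b":  # e.g. a newer system with its own field names
        return {
            "trade_id": record["id"],
            "symbol": record["ticker"].upper(),
            "quantity": int(record["size"]),
            "price": float(record["price"]),
        }
    raise ValueError(f"unknown source: {source}")

# Two inconsistent upstream records describing trades in the same instrument
a = {"TradeRef": "T-001", "Instrument": "ibm", "Qty": "100", "Px": "125.50"}
b = {"id": "T-002", "ticker": "IBM", "size": 250, "price": 125.75}

# After harmonization, both records share one schema and consistent types
unified = [harmonize(a, "legacy_a"), harmonize(b, "legacy_b")]
```

In a production setting this mapping logic would of course be far richer (reference data lookups, validation, streaming input), but the principle is the same: the harmonization layer, not each AI application, absorbs the inconsistency of the legacy sources.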
As AI continues to bring value to capital markets firms by automating manual processes, improving compliance, reducing risk and increasing alpha, forward-looking firms are not only using the latest AI and machine learning software, but the newest advancements with real-time, intelligent data management technologies as well. And that’s how they’re conquering the Wild West of today’s harried environment—and putting AI to work for good.
Read more about Firebrand's latest research at www.intersystems.com/firebrand.
About the Author
Virginie O’Shea is a capital markets fintech research specialist with two decades of experience tracking financial technology developments in the sector, focusing in particular on regulatory developments, data and standards. She is the founder of Firebrand Research, a new research and advisory firm focused on providing capital markets technology and operations insights for the digital age.