While experts continue to argue about what should count as artificial intelligence, technologies now commonly referred to as AI are being widely deployed in the financial sector, in a variety of forms and for various purposes. They are spreading so fast that at times we have to ask ourselves whether all the potential risks have been properly assessed.
What am I talking about? Banks are investing billions in AI-enabled systems and machine learning technologies for information security and fraud response management.
And we should certainly welcome that. Attackers are not wasting their time either; they use countless tricks, technical and otherwise, to get their hands on other people's money. It is no longer possible to combat this effectively without the most advanced technologies, ones that can identify non-obvious patterns in huge arrays of data, because analyzing that data manually would take weeks or months.
Naturally, AI technologies in their current iteration cannot (yet?) fully replace human specialists, but they can make their work much easier.
Machine learning and AI-enabled video analytics can also qualify as protection tools, though that is stretching the term a little. Such systems are capable of identifying a known intruder entering a bank branch for reconnaissance and notifying the security service. Or, when an incident is being investigated, they can quickly narrow down the list of people whose actions suggest involvement.
There are machine learning and AI-based solutions that can detect and curb attempted fraudulent transactions and other threats within the bank's infrastructure, including cloud resources, virtualized networks, and the Internet of Things. Again, their use does not mean banks can do without full-time information security specialists, but the technology does increase their efficiency, including in directly fighting cyberattacks.
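To make the idea of pattern-based fraud screening concrete, here is a minimal, purely illustrative sketch in Python. Real banking systems use far more sophisticated machine learning models over many features; this toy version flags a transaction whose amount deviates sharply from the customer's history using a robust (median-based) outlier score. All data, function names, and thresholds are invented for the example.

```python
from statistics import median

def flag_anomalies(amounts, threshold=3.5):
    """Return indices of transactions whose modified z-score
    (median/MAD based, so robust to the outliers themselves)
    exceeds the threshold -- a classic statistical outlier test."""
    med = median(amounts)
    # Median absolute deviation of the amounts from their median
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:
        return []  # no spread in the data, nothing to flag
    return [i for i, a in enumerate(amounts)
            if 0.6745 * abs(a - med) / mad > threshold]

# Typical spending history with one extreme transfer at index 7
history = [42.0, 55.5, 38.2, 61.0, 47.3, 52.8, 44.1, 9500.0]
print(flag_anomalies(history))  # -> [7]: the 9500.0 transfer stands out
```

In production such a check would be only one weak signal among hundreds feeding a trained model, but it shows why machines beat manual review: the same test runs across millions of accounts in seconds.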
On the other hand, AI applications are quickly expanding in completely different areas, unrelated to information security.
For example, JP Morgan Chase has recently begun using AI in marketing: a machine, in effect a robot copywriter, now writes its advertising texts. That is hardly good news for the millions of professional copywriters whose services can be very expensive on the market.
A similar innovation is the robo-adviser, a digital alternative to human financial advisers for online banking and money transactions. Such digital portfolio managers are already capable of giving solid recommendations on a wide range of issues, from banking products and loyalty programs offered by various banks to financial risk modeling for small businesses.
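At their simplest, the recommendation core of such a system can be thought of as mapping a client's profile to a portfolio suggestion. The following is a deliberately naive, hypothetical sketch (the risk scale, asset classes, and percentages are all invented); real robo-advisers use far richer models of goals, horizon, and market data.

```python
def suggest_allocation(risk_score):
    """Map a 1-10 risk-tolerance score to an illustrative
    stocks/bonds split, in percent. Purely a toy rule."""
    if not 1 <= risk_score <= 10:
        raise ValueError("risk_score must be between 1 and 10")
    stocks = 10 * risk_score  # higher tolerance -> more equity exposure
    return {"stocks": stocks, "bonds": 100 - stocks}

print(suggest_allocation(3))  # conservative client: {'stocks': 30, 'bonds': 70}
print(suggest_allocation(8))  # aggressive client:   {'stocks': 80, 'bonds': 20}
```

The point is not the rule itself but that advice of this kind can be generated instantly, at near-zero marginal cost, for every customer at once.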
According to IHS Markit's estimates published last spring, banks around the world earned about $41.1 billion from AI technologies in 2018. This figure includes both direct income from AI-enabled solutions and the cost savings that come from improved efficiency at financial institutions. Back in 2018, Russia's Sberbank announced that robots had replaced its employees in dozens of routine processes, boosting back-office efficiency by a quarter. The bank also uses the technology to speed up customer service.
It is clear that these figures are a valid reason for further expanding the scope of AI applications.
But here is another question. Any technology as hyped as AI faces three possible futures. One is commoditization, the process by which a product becomes standardized and loses its uniqueness. This is what happened with optical character recognition: the technology used to be called 'artificial intelligence,' but those days are long gone. The second option is a decline in market interest due to the lack of proper means of implementation, as happened with the forerunners of current VR technologies 25 years ago. And the third scenario is reckless commercialization, as happened with the Internet of Things: the market became flooded with unsafe devices running substandard firmware, because the demand for trendy gadgets had to be satisfied and security was treated as an afterthought.
The systems now associated with the term 'artificial intelligence' are hardly headed for commoditization, at least not any time soon, and certainly not all of them. Popular interest could fade if something else generates bigger hype, but that would hardly affect investment or projects in this area in any significant way.
But thoughtless implementation without proper security considerations is a very real prospect.
Sci-fi authors have long been scaring us with the impending advent of artificial superintelligence, which eventually threatens to destroy human civilization. However small the likelihood of creating a Skynet that no longer needs people, or a Samaritan from the Person of Interest series deified by its mad creators, an apocalyptic scenario involving AI does not seem all that far-fetched.
Just imagine a digital infrastructure uniting several of the world's biggest banks and stock exchanges, in which an AI holds privileges extending all the way to blocking accounts and manipulating stock indicators at its own discretion.
If such a system spins out of control, the consequences for the global economy could be comparable to the aftermath of a global military conflict.

https://visioners-rus.blogspot.com/2019/12/aleksey-kuzovkin-ai.html