Fight fraud with voice biometric technology.

With each technological advance, there seems to be a corresponding advance in exploiting that technology for harmful purposes. This is particularly true in the financial services industry, where the methods we use to interact with our banks have produced new forms of "bank robber." When transacting meant visiting a branch, the threat of financial loss came mainly from the armed thief. The advent of the Internet brought online banking, a decisive technological advance for banks and customers alike, but it also introduced a new generation of bank robbers in the form of programmers and hackers. The new theft techniques relied not on firearms but on social engineering, such as phishing, as well as far more advanced methods such as Man-in-the-Middle and Man-in-the-Browser malware. ATMs, being computers that dispense money, have been targeted by malware attacks, and smartphone banking apps have not been immune to malware aimed at their respective operating systems. Our countermeasures are often technology-based as well, such as two-factor authentication using SMS authorization codes. Not surprisingly, these defenses have in turn come under attack, from SIM-swapping to hacking of the SS7 global telecommunications network.

Deepfakes

There is a newer technology, known as Deepfake, that, although its underlying ideas are not new, we believe will become a powerful new fraud vector. Deepfake is the use of machine learning to create audio/visual imitations of real people. It relies on a technique known as the Generative Adversarial Network (GAN), which can generate new data, including images and sound, from existing data sets. For example, existing audio and video of a person speaking can be used to generate new synthetic video and audio, based on what the algorithm has learned from the real material.
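To make the mechanics concrete, the sketch below shows the adversarial training loop at the heart of a GAN. It is a minimal illustration in PyTorch, using a toy two-dimensional distribution as a stand-in for real audio or video data; the layer sizes and names are illustrative, not any particular deepfake implementation.

```python
# Minimal sketch of a Generative Adversarial Network (GAN) training loop.
# Illustrative only: it learns to mimic a toy 1-D distribution rather than
# audio or video, but the generator-vs-discriminator mechanics are the same
# ones a deepfake pipeline builds on. Requires PyTorch.
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 8, 2  # noise size, size of each "real" sample

# Generator: maps random noise to fake samples.
G = nn.Sequential(nn.Linear(LATENT_DIM, 32), nn.ReLU(), nn.Linear(32, DATA_DIM))
# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
D = nn.Sequential(nn.Linear(DATA_DIM, 32), nn.ReLU(), nn.Linear(32, 1))

loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

def real_batch(n=64):
    # Stand-in for real training data (e.g. frames or audio features).
    return torch.randn(n, DATA_DIM) * 0.5 + torch.tensor([2.0, -1.0])

for step in range(2000):
    real = real_batch()
    fake = G(torch.randn(real.size(0), LATENT_DIM))

    # 1) Train the discriminator to separate real from fake.
    d_loss = (loss_fn(D(real), torch.ones(real.size(0), 1)) +
              loss_fn(D(fake.detach()), torch.zeros(real.size(0), 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    g_loss = loss_fn(D(fake), torch.ones(real.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The generator never sees the real data directly; it improves only by learning to fool the discriminator, and it is precisely this arms race that makes the eventual fakes so convincing.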
Although the technique was initially used to transpose celebrities into pornographic videos, the harmful possibilities of Deepfakes range far wider: fake news, election manipulation, and disinformation warfare, delivered in a whole new way, since we can now watch what appears to be the person themselves speaking directly to us. The decline of print media in favor of digital news delivery is not only convenient; it has also brought much richer audio and video content. There are practically unlimited sites we can visit for news and content, and if we see a video clip of a person, known to us or not, delivering a message, we have no reason to suspect that the video is fake. This provides a turnkey forum for those looking to spread fake news via Deepfakes.

Potential impact on financial services

Why could Deepfakes affect financial services too? Information is increasingly disseminated digitally, and so are banking services. Omnichannel and unified communications strategies mean banks communicating with their customers through, for example, browser-based audio/video channels. This could be with a human agent, but in the future also with agents based on artificial intelligence (AI). It is therefore not difficult to imagine a video/audio conversation between a wealthy client and his private banker. If the client looks and sounds like himself, and can of course answer all the security questions (as he always would), why would the banker not accept the instructions the client gives? Now imagine the same deception on a much larger scale, with banks using facial recognition technology to authenticate customers on websites and mobile apps, whether in self-service, in interaction with a human agent, or with an AI chatbot. If the face matches, and remembering that Deepfakes are not static images but show liveness, fraudulent transactions will be executed. These are just two examples involving customer interactions; interbank communications and instructions could no doubt be compromised in ways the author has not even considered. Simply being recognizable to a colleague or external contact could become the key to exploiting Deepfake technology: no one wants to dispute the identity of a known person who looks and sounds perfectly normal.

Detecting a Deepfake

So how do we detect that something that looks real to our eyes and sounds real to our ears is in fact fake? The answer lies in the audio component of a Deepfake and in the use of advanced voice biometric techniques. However real and "human" a Deepfake may appear, it is synthetically generated, and Deepfake videos invariably include an audio track; that audio is the key to detection. Advanced voice biometric algorithms include techniques to detect both recordings, known as presentation or replay attacks, and synthetically generated audio. However "human" a voice may sound to the human ear, that is not what matters to a synthetic-speech detection engine: its judgment of whether the audio was spoken by a human rests on very different evidence than ours. Voice biometrics has long been one of the most powerful and accurate ways to authenticate or identify a person, and the ability of the most advanced voice biometric engines to simultaneously verify who is speaking and distinguish a human from a synthetically generated "human" could prove invaluable if we truly witness the rise of Deepfakes.
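To illustrate the principle, here is a minimal sketch of that detection pipeline: extract spectral features from audio clips, then train a binary classifier to separate genuine human speech from synthetic or replayed speech. It assumes librosa and scikit-learn, and a labeled corpus of .wav files that the reader would have to supply; the directory layout and label scheme are hypothetical, and production voice biometric engines use far deeper models and richer features than this.

```python
# Minimal sketch of synthetic-speech detection: spectral features plus a
# binary classifier. Assumes librosa and scikit-learn, and a labeled corpus
# of .wav files; the file layout and labels here are hypothetical.
import glob
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def features(path):
    # MFCCs summarize the spectral envelope; synthetic speech tends to leave
    # statistical artifacts here that a model can learn, even when the audio
    # sounds perfectly human to our ears.
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical layout: genuine/*.wav are real speakers, spoof/*.wav are
# synthetic or replayed recordings.
genuine = glob.glob("genuine/*.wav")
spoof = glob.glob("spoof/*.wav")
X = np.stack([features(p) for p in genuine + spoof])
y = np.array([0] * len(genuine) + [1] * len(spoof))

clf = LogisticRegression(max_iter=1000).fit(X, y)

def is_synthetic(path, threshold=0.5):
    # True if the model judges the clip more likely spoofed than genuine.
    return clf.predict_proba([features(path)])[0, 1] > threshold
```

The point of the sketch is that the classifier judges statistical properties of the signal, not how natural the voice sounds, which is why audio that fools a human ear can still be flagged as synthetic.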