This is a bit creepy. At Amazon’s re:MARS conference, Alexa senior vice-president Rohit Prasad showed off a startling new voice assistant capability: mimicking voices. That alone isn’t so strange. What is strange is how Amazon framed this mimicry: as a way to commemorate your lost loved ones.
The company played a demo video in which Alexa read to a child in the voice of his recently deceased grandmother. Prasad said Amazon was looking for ways to make AI as personal as possible. “While AI can’t eliminate that pain of loss,” he said, “it can definitely make the memories last.” An Amazon spokesperson added that the new skill can create a synthetic voiceprint after being trained on just a minute of audio of the person.
There is a downside, though. Security experts have long worried that deepfake audio tools, which use text-to-speech technology to create synthetic voices, could usher in a wave of new scams. Voice-cloning software has already been used in crimes: in a 2020 incident in the United Arab Emirates, criminals fooled a bank manager into transferring $35 million by impersonating a company director. Such crimes are still rare, but as the technology matures, they may become more common.
Image Source: Pexels
Filed Under: Gadgets News