
 


Just imagine a world where anyone could create a photo-realistic video and make whoever they want say whatever they want. Add to that the ability to write a script and have a machine recite it back with intonation indistinguishable from that of the person featured. Well, it’s here!

A Montreal-based AI startup has recently revealed a new voice imitation technology that could signal the end of trusting your ears, meaning pretty soon there could be a cloud of doubt over literally every “recording” you see and hear.

Three PhD students at the University of Montreal developed Lyrebird, a deep learning algorithm that reportedly needs only a 60-second sample of a person’s voice to be able to generate a synthesized copy. While the company touts applications such as speech synthesis for people with disabilities, it’s clear this technology is opening a Pandora’s box of future complications.

Lyrebird has a dedicated “Ethics” page on its website, openly discussing the potentially dangerous consequences of the technology. The company intends to release the technology publicly and make it available to anyone, the idea being that by demonstrating so visibly how easily voices can be artificially faked, everyone will learn to be skeptical of the audio recordings they hear in the future.

Adobe revealed a similar project, called VoCo, in late 2016.

 

 

 
