Technology is Spying on You: How Can Artificial Intelligence Prevent It?


By Madhurjya Chowdhury


Human listeners perceive the audio camouflage as background noise and have no difficulty recognizing the spoken words

Companies use “bossware” to listen in on their staff while they are at their computers. A variety of “spyware” apps can record phone calls. Home gadgets like Amazon’s Echo can capture everyday conversations. A new technique known as Neural Voice Camouflage now offers protection: as you speak, it generates bespoke audio noise in the background, confounding the artificial intelligence (AI) that transcribes our recorded speech.

The new technology employs what is known as an “adversarial attack.” The method uses machine learning, in which algorithms find patterns in data, to modify sounds in a way that makes an AI, but not a human, mistake them for something else. In essence, one AI is used to deceive another.
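To make the idea concrete, here is a minimal sketch of the general adversarial-attack recipe in Python (using PyTorch): nudge a waveform along the gradient of a recognizer’s loss so that the change is nearly inaudible to a person but shifts the model’s output. The toy model, the epsilon value, and the FGSM-style update are illustrative assumptions, not the paper’s actual method.

```python
import torch

def fgsm_perturb(model, audio, target, loss_fn, epsilon=0.002):
    """Return a copy of `audio` nudged by `epsilon` in the direction
    that increases the recognizer's loss on the correct label."""
    audio = audio.clone().detach().requires_grad_(True)
    loss = loss_fn(model(audio), target)
    loss.backward()
    # A tiny step along the sign of the gradient: nearly inaudible to
    # people, but enough to push the model toward a wrong transcription.
    return (audio + epsilon * audio.grad.sign()).detach()

# Toy stand-in "recognizer": maps a 1-second, 16 kHz clip to 10 word logits.
model = torch.nn.Linear(16_000, 10)
audio = torch.randn(1, 16_000)
target = torch.tensor([3])  # whatever label the model currently assigns
adversarial = fgsm_perturb(model, audio, target, torch.nn.CrossEntropyLoss())
```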

Concealing speech in real time is hard, however: a typical adversarial attack needs to process the entire sound clip before it can work out how to adjust it.

In the new study, the researchers trained a neural network, a machine-learning system loosely modeled on the brain, to predict the future effectively. They trained it on hours of recorded speech so that it could continually analyze 2-second audio snippets and conceal whatever is likely to be said next.
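The streaming logic this predict-ahead design implies might look like the sketch below. Everything here is an assumption for illustration: `predictor` stands in for the trained network, and the 100 ms chunk size is arbitrary. The one constraint kept faithful to the description above is that the noise for each chunk is committed before that chunk is heard, using only the 2-second history.

```python
import collections
import numpy as np

SAMPLE_RATE = 16_000
WINDOW = 2 * SAMPLE_RATE   # the 2-second context described above
CHUNK = SAMPLE_RATE // 10  # emit noise 100 ms at a time (arbitrary choice)

def camouflage_stream(chunks, predictor):
    """Yield a noise chunk for audio that has not been heard yet,
    computed only from the speech already in the history buffer."""
    history = collections.deque(maxlen=WINDOW)
    for chunk in chunks:
        context = np.zeros(WINDOW, dtype=np.float32)
        past = np.asarray(history, dtype=np.float32)
        if past.size:
            context[-past.size:] = past  # right-align the known past
        # The predictor must commit to its perturbation *before* the
        # next chunk arrives -- the core real-time constraint.
        yield predictor(context)[:CHUNK]
        history.extend(chunk)            # the chunk now becomes history

# Example with a placeholder predictor (random noise, not a trained model):
predictor = lambda ctx: 0.01 * np.random.randn(CHUNK).astype(np.float32)
mic = (np.random.randn(CHUNK).astype(np.float32) for _ in range(5))
for noise in camouflage_stream(mic, predictor):
    pass  # in a real system this noise would be played over a speaker
```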

For example, if somebody has just said “enjoy the big feast,” there is no way to know exactly what will be said next. But by taking into account what was just said, along with the characteristics of the speaker’s voice, the system generates noise that will disrupt a range of possible words that could follow. That included what actually came next, the same speaker saying, “that’s being prepared.” Human listeners perceive the audio camouflage as background noise and have no difficulty recognizing the spoken words. Machines, however, stumble.

When tested against an automatic speech recognition (ASR) system, the camouflage drove the transcription error rate far higher than simpler baselines: speech concealed by plain white noise or by a competing adversarial attack had error rates of just 12.8 percent and 20.5 percent, respectively.

Even when the ASR system was trained to transcribe speech disrupted by Neural Voice Camouflage, its error rate remained at 52.5 percent. Short words such as “the” were generally the hardest to disrupt, but they are also the least informative parts of a conversation.
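For context on figures like these, transcription accuracy is usually scored by word error rate: the edit distance between the reference transcript and the recognizer’s output, divided by the length of the reference. A self-contained sketch:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Two wrong words out of four: a 50% word error rate.
print(word_error_rate("enjoy the big feast", "enjoy a big toast"))  # 0.5
```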

The researchers also tested the technology in the real world, playing a speech recording paired with the camouflage over a set of speakers in the same room as a microphone. The camouflage still worked.

According to Mia Chiquier, a computer scientist at Columbia University who led the research, this is just the first step toward protecting privacy in the face of AI.

Chiquier notes that the program’s predictive component has great potential for other applications that require real-time processing, such as driverless cars. Brains also operate through anticipation; you feel surprised when your brain guesses something incorrectly. In that regard, Chiquier says, “We’re replicating the way people do things.”

“There’s something nice about the way it combines predicting the future, a classic machine-learning problem, with the separate problem of adversarial machine learning,” says Andrew Owens. Bo Li was astonished that the new approach could defeat even the fortified ASR system.

