Someone is tricking voice assistants with white noise and secret commands

Worryingly, the researchers say bad actors could use messages hidden within music to unlock doors, access accounts or add items to shopping lists.

According to The New York Times, researchers in both China and the United States have carried out a series of experiments demonstrating that it is possible to send silent commands, undetectable to the human ear, to voice assistants such as Siri, Alexa and Google Assistant.

Researchers can send secret audio instructions to smart speakers.

The technique, which the Chinese researchers called DolphinAttack, can instruct smart devices to visit malicious websites, initiate phone calls, take a picture or send text messages.
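
To make the idea concrete, here is a minimal sketch of the signal processing behind a DolphinAttack-style payload, assuming Python with NumPy and SciPy; the input file name, carrier frequency and modulation depth are illustrative assumptions, not details taken from the paper. A recorded voice command is amplitude-modulated onto an ultrasonic carrier that humans cannot hear, and the nonlinearity of a device's microphone demodulates it back into the audible band, where the assistant's recognizer picks it up.

```python
# Minimal sketch: amplitude-modulate a recorded voice command onto an
# ultrasonic carrier (here 25 kHz, above the ~20 kHz ceiling of human
# hearing). File names and parameters are illustrative assumptions.
import numpy as np
from scipy.io import wavfile

CARRIER_HZ = 25_000   # ultrasonic carrier, inaudible to humans
OUT_RATE = 96_000     # output rate must exceed 2 * carrier (Nyquist)

# Hypothetical mono recording of the command to hide.
rate, voice = wavfile.read("hey_assistant_command.wav")
voice = voice.astype(np.float64)
voice /= np.max(np.abs(voice))  # normalize to [-1, 1]

# Resample the baseband command up to the output rate.
t_old = np.arange(len(voice)) / rate
t_new = np.arange(0.0, t_old[-1], 1.0 / OUT_RATE)
baseband = np.interp(t_new, t_old, voice)

# Classic AM: shift the command's spectrum up around the carrier, so
# all transmitted energy sits above 20 kHz.
carrier = np.cos(2.0 * np.pi * CARRIER_HZ * t_new)
modulated = (1.0 + 0.8 * baseband) * carrier  # 80% modulation depth

wavfile.write("ultrasonic_payload.wav", OUT_RATE,
              (modulated / np.max(np.abs(modulated)) * 32767).astype(np.int16))
```

Played through a speaker capable of reproducing ultrasound, a file like this is silent to bystanders but, on vulnerable hardware, demodulates back into an ordinary spoken command.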

Apple has additional features to prevent the HomePod speaker from unlocking doors.

In case you haven't noticed, voice-activated gadgets are booming in popularity and, as a result, finding their way into a growing number of homes.

Amazon told The New York Times it has taken steps to ensure its speaker is secure, and it offers a PIN code option for making voice purchases.

Earlier this month, researchers at the University of California, Berkeley published a research paper that moved the needle even further. None of the companies would say, for instance, whether their voice platforms were capable of distinguishing between different audio frequencies and then blocking ultrasonic commands above 20kHz.
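
The filtering the reporters asked about is straightforward to sketch. Assuming Python with SciPy, a platform could low-pass incoming audio so that any energy above the roughly 20kHz ceiling of human hearing is discarded before the recognizer sees it; the cutoff and filter order below are illustrative assumptions, not any vendor's implementation.

```python
# Minimal sketch of an ultrasonic-blocking defense: low-pass filter the
# microphone signal so content above human hearing never reaches the
# speech recognizer. Cutoff and filter order are illustrative.
import numpy as np
from scipy.signal import butter, sosfilt

def strip_ultrasound(samples: np.ndarray, sample_rate: int,
                     cutoff_hz: float = 20_000.0) -> np.ndarray:
    nyquist = sample_rate / 2.0
    if cutoff_hz >= nyquist:
        # Nothing above the cutoff can be represented at this rate anyway.
        return samples
    sos = butter(8, cutoff_hz / nyquist, btype="low", output="sos")
    return sosfilt(sos, samples)
```

Note, though, that a filter like this runs after the microphone: if the microphone's own nonlinearity has already demodulated an ultrasonic payload into the audible band, as DolphinAttack exploits, digital filtering alone will not remove it, which is why the researchers point to hardware as well as software fixes.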

Researchers at China's Zhejiang University published a study last year showing that many of the most popular smart speakers and smartphones equipped with digital assistants could easily be tricked into being controlled by hackers.

What these studies show is that it is possible to manipulate speech-recognition devices by making minute changes to speech or other audio files.

The team was able to launch attacks at frequencies above 20kHz using less than £2.20 ($3) of equipment attached to a Galaxy S6 Edge.

Testing against Mozilla's open source DeepSpeech voice recognition implementation, Carlini and Wagner achieved a 100 percent success rate without having to resort to large amounts of distortion, a hallmark of past attempts at creating audio attacks.
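
The heart of that attack is an optimization loop: search for a tiny perturbation that makes the recognizer transcribe the attacker's phrase while staying quiet. Here is a minimal sketch assuming Python with PyTorch and a differentiable recognizer that, like DeepSpeech, is trained with CTC loss; `model`, its output shape and all hyperparameters are assumptions for illustration, not the authors' code.

```python
# Minimal sketch of a Carlini-Wagner-style targeted audio attack.
# `model` is assumed to map a waveform to per-frame log-probabilities
# of shape (time, batch=1, classes), as CTC loss expects.
import torch
import torch.nn.functional as F

def craft_perturbation(model, audio, target_ids,
                       steps=1000, lr=1e-3, max_amp=0.01):
    # delta is the adversarial perturbation we optimize.
    delta = torch.zeros_like(audio, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        log_probs = model(audio + delta)
        # CTC loss pulls the transcription toward the target phrase;
        # the L2 penalty keeps the perturbation small (low distortion).
        loss = F.ctc_loss(
            log_probs,
            target_ids.unsqueeze(0),
            input_lengths=torch.tensor([log_probs.shape[0]]),
            target_lengths=torch.tensor([target_ids.shape[0]]),
        ) + 0.1 * delta.pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-max_amp, max_amp)  # hard bound on loudness
    return (audio + delta).detach()
```

The hard amplitude bound is what keeps the doctored clip sounding essentially identical to the original, the property that distinguishes this work from earlier, heavily distorted audio attacks.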

Researchers tested Apple iPhone models from the iPhone 4s to the iPhone 7 Plus, the Apple Watch, Apple iPad mini 4, Apple MacBook, LG Nexus 5X, Asus Nexus 7, Samsung Galaxy S6 Edge, Huawei Honor 7, Lenovo ThinkPad T440p, Amazon Echo and Audi Q3.

Researchers say the flaw stems from both software and hardware issues. Carlini and Wagner also claim the attack worked when they hid the rogue command within brief music snippets.
