Even possible through glass windows and over large distances, apparently. OK, these systems have always been crappy and full of holes, but this opens up new possibilities. Leave your mobile on the table somewhere, and bam, the hacker sitting 50 m away in a car has ordered some drugs and paid with your online account. And you're none the wiser...
Read the article, it has some nice pictures too:
I will just paste the text here:
Laser-Based Audio Injection on Voice-Controllable Systems
Light Commands is a vulnerability of MEMS microphones that allows attackers to remotely inject inaudible and invisible commands into voice assistants, such as Google Assistant, Amazon Alexa, Facebook Portal, and Apple Siri, using light.
In our paper we demonstrate this effect, successfully using light to inject malicious commands into several voice-controlled devices such as smart speakers, tablets, and phones, across large distances and through glass windows.
The implications of injecting unauthorized voice commands vary in severity based on the type of commands that can be executed through voice. As an example, in our paper we show how an attacker can use light-injected voice commands to unlock the victim's smart-lock protected home doors, or even locate, unlock and start various vehicles.
See Light Commands in Action
Light Commands were discovered by the following team of academic researchers:
Takeshi Sugawara at The University of Electro-Communications (Tokyo)
Benjamin Cyr at University of Michigan
Sara Rampazzi at University of Michigan
Daniel Genkin at University of Michigan
Kevin Fu at University of Michigan
Contact us at LightCommandsTeam@gmail.com
How do Light Commands work?
By shining the laser through the window at microphones inside smart speakers, tablets, or phones, a faraway attacker can remotely send inaudible and potentially invisible commands which are then acted upon by Alexa, Portal, Google Assistant, or Siri.
Making things worse, once an attacker has gained control over a voice assistant, the attacker can use it to break other systems. For example, the attacker can:
Control smart home switches
Open smart garage doors
Make online purchases
Remotely unlock and start certain vehicles
Open smart locks by stealthily brute forcing the user's PIN
But why does this happen?
Microphones convert sound into electrical signals. The main discovery behind light commands is that in addition to sound, microphones also react to light aimed directly at them. Thus, by modulating an electrical signal in the intensity of a light beam, attackers can trick microphones into producing electrical signals as if they are receiving genuine audio.
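The modulation described above can be sketched in a few lines: the audio signal rides on a DC bias so the laser's intensity never goes negative. The bias and depth values below are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def modulate_onto_light(audio, bias=0.6, depth=0.4):
    """Map an audio waveform in [-1, 1] to a laser intensity in [0, 1].

    A laser cannot emit "negative light", so the audio rides on a DC
    bias: intensity(t) = bias + depth * audio(t). A microphone that
    responds to light then reproduces audio(t) from the intensity.
    """
    audio = np.clip(np.asarray(audio, dtype=float), -1.0, 1.0)
    return np.clip(bias + depth * audio, 0.0, 1.0)

# Example: a 1 kHz test tone sampled at 16 kHz for 10 ms.
t = np.arange(0, 0.01, 1 / 16000)
light = modulate_onto_light(np.sin(2 * np.pi * 1000 * t))
# The intensity stays positive while carrying the tone's shape.
```

Driving a laser diode's current with such a signal is what lets the light beam "speak" to the microphone.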
Ok, but what do voice assistants have to do with this?
Voice assistants inherently rely on voice to interact with the user. By shining a laser on their microphones, an attacker can effectively hijack the voice assistant and send inaudible commands to Alexa, Siri, Portal, or Google Assistant.
What is the range of Light Commands?
Light can easily travel long distances, limiting the attacker only by the ability to focus and aim the laser beam. We have demonstrated the attack at a range of 110 meters, the longest hallway available to us at the time of writing.
But how can I aim the laser accurately, and at such distances?
Careful aiming and laser focusing are indeed required for Light Commands to work. To focus the laser across large distances, one can use a commercially available telephoto lens. Aiming can be done with a geared tripod head, which greatly increases accuracy. An attacker can use a telescope or binoculars to see the device's microphone ports at large distances.
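A rough calculation shows why focusing matters at these ranges: a beam's spot size grows linearly with its divergence angle. The divergence figures below are illustrative assumptions, not measurements from the paper.

```python
def spot_diameter_mm(distance_m, d0_mm=3.0, divergence_mrad=1.0):
    """Approximate beam diameter after distance_m meters.

    A beam spreads roughly linearly with its divergence angle:
    1 mrad of divergence adds about 1 mm of diameter per meter.
    """
    return d0_mm + divergence_mrad * distance_m

# An unfocused pointer (~1 mrad, assumed) vs. a beam focused with a
# telephoto lens (~0.05 mrad, assumed) at the 110 m attack distance:
unfocused = spot_diameter_mm(110, divergence_mrad=1.0)   # ~113 mm wide
focused = spot_diameter_mm(110, divergence_mrad=0.05)    # ~8.5 mm wide
```

An unfocused beam would spread far wider than a microphone port, which is why the telephoto lens is part of the long-range setup.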
Which devices are susceptible to Light Commands?
In our experiments, we test our attack on the most popular voice recognition systems, namely Amazon Alexa, Apple Siri, Facebook Portal, and Google Assistant. We benchmark multiple devices such as smart speakers, phones, and tablets as well as third-party devices with built-in speech recognition.
Device | Voice Recognition System | Minimum Laser Power at 30 cm [mW] | Max Distance at 60 mW [m]* | Max Distance at 5 mW [m]**
Google Home | Google Assistant | 0.5 | 50+ | 110+
Google Home Mini | Google Assistant | 16 | 20 | -
Google NEST Cam IQ | Google Assistant | 9 | 50+ | -
Echo Plus 1st Generation | Amazon Alexa | 2.4 | 50+ | 110+
Echo Plus 2nd Generation | Amazon Alexa | 2.9 | 50+ | 50
Echo | Amazon Alexa | 25 | 50+ | -
Echo Dot 2nd Generation | Amazon Alexa | 7 | 50+ | -
Echo Dot 3rd Generation | Amazon Alexa | 9 | 50+ | -
Echo Show 5 | Amazon Alexa | 17 | 50+ | -
Echo Spot | Amazon Alexa | 29 | 50+ | -
Facebook Portal Mini | Alexa + Portal | 18 | 5 | -
Fire Cube TV | Amazon Alexa | 13 | 20 | -
EcoBee 4 | Amazon Alexa | 1.7 | 50+ | 70
iPhone XR | Siri | 21 | 10 | -
iPad 6th Gen | Siri | 27 | 20 | -
Samsung Galaxy S9 | Google Assistant | 60 | 5 | -
Google Pixel 2 | Google Assistant | 46 | 5 | -
While we do not claim that our list of tested devices is exhaustive, we argue that it provides some intuition about the vulnerability of popular voice recognition systems to Light Commands.
* Limited to a 50 m long corridor.
** Limited to a 110 m long corridor.
Can speaker recognition protect me from Light Commands?
At the time of writing, speaker recognition is off by default for smart speakers and is only enabled by default on devices like phones and tablets. Thus, Light Commands can be used on these smart speakers without imitating the owner's voice. Moreover, even when enabled, speaker recognition only verifies that the wake-up words (e.g., "Ok Google" or "Alexa") are said in the owner's voice, and not the rest of the command. This means that one "Ok Google" or "Alexa" spoken by the owner can be used to compromise all subsequent commands. Finally, as we show in our work, speaker recognition for wake-up words is often weak and can sometimes be bypassed by an attacker using online text-to-speech synthesis tools to imitate the owner's voice.
Do Light Commands require special equipment? How can I build such a setup?
The Light Commands attack can be mounted using a simple laser pointer ($13.99, $16.99, and $17.99 on Amazon), a laser driver (Wavelength Electronics LD5CHA, $339), and a sound amplifier (Neoteck NTK059, $27.99 on Amazon). A telephoto lens (Opteka 650-1300mm, $199.95 on Amazon) can be used to focus the laser for long range attacks.
How vulnerable are other voice controllable systems?
While our paper focuses on Alexa, Siri, Portal, and Google Assistant, the basic vulnerability exploited by Light Commands stems from design issues in MEMS microphones. As such, any system that uses MEMS microphones and acts on this data without additional user confirmation might be vulnerable.
How can I detect if someone used Light Commands against me?
While command injection via light makes no sound, an attentive user can notice the attacker's light beam reflected on the target device. Alternatively, one can attempt to monitor the device's verbal response and light pattern changes, both of which serve as command confirmation.
Have Light Commands been abused in the wild?
So far we have not seen any indications that this attack has been maliciously exploited.
Does the effect depend on laser color or wavelength?
During our experiments, we noted that the effect is largely independent of color and wavelength. Although blue and red light lie at opposite ends of the visible spectrum, the levels of injected audio signal are in the same range, and the shapes of the frequency-response curves are also similar.
Do I have to use a laser? What about other light sources?
In principle, any sufficiently bright light can be used to mount our attack. For example, the Acebeam W30 laser-excited flashlight can be used as an alternative to a laser diode.
Is it possible to mitigate this issue?
An additional layer of authentication can be effective at somewhat mitigating the attack. Alternatively, in case the attacker cannot eavesdrop on the device's response, having the device ask the user a simple randomized question before command execution can be an effective way of preventing the attacker from achieving successful command execution.
Manufacturers can also attempt to use sensor fusion techniques, such as acquiring audio from multiple microphones. When the attacker uses a single laser, only a single microphone receives a signal while the others receive nothing. Manufacturers can attempt to detect such anomalies and ignore the injected commands.
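The multi-microphone check described above can be sketched as follows: genuine airborne speech reaches every microphone in an array, while a single laser drives only one. The silence and ratio thresholds here are illustrative assumptions.

```python
import numpy as np

def looks_like_light_injection(channels, silence_rms=0.01, ratio=10.0):
    """channels: 2-D array, one row of samples per microphone.

    Returns True if exactly one microphone carries a strong signal
    while the others are near-silent, an anomaly for airborne sound.
    """
    rms = np.sqrt(np.mean(np.asarray(channels, dtype=float) ** 2, axis=1))
    loud = rms > silence_rms
    if np.count_nonzero(loud) != 1:
        return False  # zero or several active mics: plausible real audio
    second = np.partition(rms, -2)[-2]  # second-loudest channel
    return bool(rms.max() > ratio * (second + 1e-12))

# Genuine speech: comparable energy on all four mics -> not flagged.
speech = np.random.default_rng(0).normal(0, 0.1, (4, 1600))
# Laser injection: strong signal on mic 0 only -> flagged.
laser = np.zeros((4, 1600))
laser[0] = 0.2 * np.sin(np.linspace(0, 100, 1600))
```

A production implementation would compare short-time spectra rather than raw energy, but the anomaly being detected is the same.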
add this little info to the recipe, just for fun:
mix the two, and there is no reality anymore, just like what already happened with deepfake software that lets you fabricate all kinds of video and audio material that passes as real...
some device records something somewhere, and by order of law, it becomes evidence...
see the text of the article:
Florida police obtain Alexa recordings in murder investigation
This time, though, officers are more realistic about what they may find.
Jon Fingas, @jonfingas
Police have once again obtained Alexa voice recordings as part of an investigation, although they're not necessarily expecting a treasure trove of information this time around. Law enforcement in Hallandale Beach, Florida has used a search warrant to collect Alexa recordings from two Echo Dots as part of a murder case. Investigators want to know if the smart speakers inadvertently picked up audio of a July altercation between Adam Crespo and his wife Silvia Crespo. She died of a spear wound to the chest; Adam maintained that it was the result of an accident that snapped the spear, but detectives want to know if Alexa preserved any evidence of possible foul play.
Unlike a pioneering murder case in Arkansas, Hallandale police weren't expecting a complete audio capture. The search warrant indicated that cops obtained "Amazon Echo Recordings w/ Alexa Voice Command," suggesting that they were only hoping that one or both of the Crespos may have inadvertently set off the Echo Dots during the incident. Outside of security exploits, there's no substantial evidence that Echo speakers record continuously -- they're only supposed to capture audio in a brief window of time after someone says Alexa's wake word.
Adam Crespo's attorney, Christopher O'Toole, was happy to see the recordings turned over as he believed it would support his client's version of events.
As in the past, Amazon stressed in a statement to CBS that it doesn't hand over customer information unless mandated by a "legally valid and binding order," and that it resists "overbroad or otherwise inappropriate" requests.
It's uncertain whether these kinds of requests will continue to grow in the future. While smart speakers continue to sell in large numbers, police are also increasingly aware of their limitations. Moreover, users themselves increasingly have control over their data. An Alexa user can delete the day's voice recordings, for instance. Although many people won't think (or need) to do that, there's now a chance that any relevant clips will have vanished before police can listen to them.
Posted on Rocksolid Light
Listening to a conversation from outside by pointing a laser at a window and measuring its vibration is an old technique, in use since the 1970s. It works very well at about 500-800 meters, depending on the weather. What's new here is actually injecting audio by vibrating a surface. Back then, "people" bought white noise machines and plastic films (similar to laser radar detector scramblers). You could also play loud music, since they could filter out TV channels.
Posted on def3