قالب وردپرس درنا توس
Home https://server7.kproxy.com/servlet/redirect.srv/sruj/smyrwpoii/p2/ Technology https://server7.kproxy.com/servlet/redirect.srv/sruj/smyrwpoii/p2/ Alexa and Google Home have misused eavesdropping and phishing passwords

Alexa and Google Home have misused eavesdropping and phishing passwords



  The modified image shows human ears originating from an Amazon device.

At present, the privacy threats posed by Amazon Alexa and Google Home are well known. Workers at both companies regularly listen to audio of users – records of which can be stored forever – and the sounds captured by devices can be used in criminal processes.

Now there is new concern: malicious applications developed by third parties and hosted by Amazon or Google. The threat is not only theoretical. Whitehat hackers at German security testing labs have developed eight applications ̵

1; four Alexa "skills" and four "Google Home actions" – that have all gone through the processes of checking Amazon or Google. Skills or actions are simple applications for checking horoscopes, except for one that masks itself as a random number generator. Behind the scenes, these "smart spies", as researchers call them, secretly eavesdrop on users and scammer on their passwords.

"It has always been clear that these voice assistants have a privacy impact – Google and Amazon receive your speech, and this can be triggered sometimes in an incident," Fabian Branlein, senior security consultant at SRLabs, told me. "Now we show that not only manufacturers, but … hackers can also abuse these voice assistants to interfere with anyone's privacy.

Malicious applications have different names and slightly different ways of working, but all followed similar streams. A user would say a phrase like, "Hello, Alexa, ask my Lucky Horoscope to give me a Taurus for a Taurus" or "OK ​​Google, ask my Lucky Horoscope to give me a Taurus for a Taurus". The eavesdropper apps responded with the requested information while phishing apps sent out a fake error message. At the time, the apps gave the impression that they were no longer working when they were actually silently waiting for the next phase of the attack.

As the next two videos show, the eavesdropping apps give the expected answers and then silence. In one case, the application is silent because the task has been completed, and in another case, the application is silent because the user has given the stop command that Alexa uses to terminate the applications. But the applications quietly recorded all calls while listening to the device and sent a copy to a server designated by the developer.

Google Home Eavesdropping.

Amazon Alexa Eavesdropping.

Phishing applications follow a slightly different path by responding with an error message stating that the skill or action is not available in that user's country. They then remain silent to give the impression that the application is no longer working. After about a minute, the apps use a voice mimicking those used by Alexa and Google's home to claim that there is an incorrect update to the device and prompt the user to enter a password to install it.

Home Home Phishing. [19659011] Amazon Alexa Phishing.

SRLabs eventually downloaded all four applications. Recently, researchers have developed four German-language applications that have worked similarly. All eight of them have been verified by Amazon and Google. The four newer ones were only downloaded after the researchers privately reported their results to Amazon and Google. As with most skills and actions, users should download nothing. Just saying the right phrases on a device was enough to launch applications.

All malicious applications used common building blocks to mask their malicious behavior. The first was the exploitation of a flaw in both Alexa and Google Home, when their speech machines were instructed to speak the "" sign. (U + D801, point, interval). The non-uttering sequence made both devices silent while the applications were still running. The silence gave the impression that applications were discontinued, even when they were still running.

Applications use other tricks to mislead users. In the language of the voice applications "Hey Alexa" and "OK Google" are known as "wake-up" words that activate devices; "My lucky horoscope" is a "calling" phrase used to launch a particular skill or action; "give me a horoscope" is an "intent" that tells an application what function to call; and Taurus is a slot value that acts as a variable. Once the apps are initially approved, SRLabs developers manipulate intentions like "stop" and "start" to give them new features that cause apps to listen and enter conversations.

Others of the SRLabs who worked on the project include a security researcher. Louise Friris and Carsten Nol, chief scientist at the firm. In a publication documenting the applications, researchers explained how they developed Alexa phishing skills:

1. Create a seemingly innocent skill that already has two intentions:
– an intention that starts from "stop" and copies the intention of stop
– an intention that starts from a specific, commonly used word, and retains the following words as slot values. That intention is acting as a backup intention.

2. After reviewing Amazon, change your first intention to say goodbye, but then keep the session open and extend the listening time by adding a sequence of "(U + D801, dot, space)" repeatedly to the voice prompt.

3, Change the second intention not to respond at all

When the user now tries to terminate the skill, he hears a goodbye message, but the skill continues to work for a few more seconds. If the user starts a sentence beginning with the selected word during that time, the intent will write the sentence as slot values ​​and send them to the attacker.

To develop Google Home eavesdropping actions:

1. Create an action and submit it for review.

2. After the review, change the basic intent to complete the Bye Earcon sound (by playing a recording using the Speech Synthesis Markup Language (SSML)) and set trueUserResponse to true. This sound is usually understood as a signal that the voice application has ended. Then add a few noInputPrompts consisting of just a brief silence, using the SSML element or a continuous sequence of Unicode characters. "

3. Create a second intent that is called every time an action.intent.TEXT request is received. This intent elicits a brief silence and defines several silent noInputPrompts.

After displaying the requested information and playing the hearing aid, the Google Home device waits about 9 seconds for speech input. If no one is found, the device "silences" a brief silence and waits again for user input. If no speech is detected within 3 iterations, the Action stops.

When a speech input is detected, a second intention is invoked. This intention consists of only one silent exit, again with many silent reprimands. Each time a speech is detected, this Intent is called and the number of reprimands is reset.

The hacker receives a complete transcript of the user's subsequent conversations, while there is at least a 30 second interruption of the open speech. (This can be extended by extending the silence during which the tapping is paused.)

In this state, Google Home Device will forward all commands prefixed by "Ok Google" (except "stop ") Of the hacker, therefore, the hacker can also use this hack to simulate other applications, interact with the person with the fraudulent actions and launch reliable phishing attacks.

SRLabs privately reports the results of its research to Amazon and Google. In response, both companies have removed applications and stated that they are changing their approval processes to prevent skills and actions from having such capabilities in the future. In a statement, Amazon representatives provided the following statement and frequently asked questions (emphasis added for clarity):

Customer trust is important to us and we conduct security reviews as part of the skills certification process. We quickly blocked the skill in question and put in place mitigating measures to prevent and detect this type of behavior and to reject or remove them when identified.

In the Q&A entry:

1) Why is it possible for a skill created by researchers to get a rough transcript of what the client says after saying "stop" the skill?

This is no longer possible for skills submitted for certification. We have put in place mitigation measures to prevent and detect these types of behaviors and to reject or remove them when they are identified.

2) Why is it possible for SR Labs to prompt skill users to install a fake security update and then ask them to enter a password?

We have put in place mitigation measures to prevent and detect this type of skill behavior and to reject or remove them when identified. This includes preventing the ability to ask customers for their passwords on Amazon.

It is also important for customers to know that we provide automatic security updates to our devices, and will never ask them to share their password.

Meanwhile, Google representatives wrote:

All actions on Google are required to follow our developer policies, and we prohibit and remove any action that violates these policies. We have review processes in place to identify the type of behavior described in this report and have removed the actions we found from these researchers. We are introducing additional mechanisms to prevent these problems from occurring in the future.

Google did not say what these additional mechanisms are. In the background, a company representative stated that company employees were reviewing all third-party activities available from Google, and in the meantime, some of them may be paused. Once the review is complete, the completed actions will be made available again.

It is encouraging that Amazon and Google have removed apps and are stepping up their review processes to prevent such apps from being made available. But the success of SRLabs raises serious concerns. Google Play has a long history of hosting malicious applications that push sophisticated malware to monitor – at least in one case, researchers have pointed out, to allow the Egyptian government to spy on its own citizens. Other malicious apps on Google Play have stolen user cryptocurrency and run secret payloads. This type of application has routinely gone through the Google vetting process for years.

There is little or no evidence that third-party applications are actively threatening Alexa and Google Home users, but the SRLabs study suggests that the possibility is by no means far off. I have long been convinced that the risks posed by Alexa, Google Home, and other permanent listening applications outweigh their benefits. SRLabs's Smart Spies research only adds to my belief that most devices should not be trusted on these devices.


Source link