Privacy issues surrounding smart speakers are a growing concern. Beyond traditional security threats, consumers also face potential privacy violations from the vendors themselves. The New York Times recently reviewed several patent applications (which do not necessarily reflect shipped products) and found that some smart speakers may collect voice information for advertising optimization; in extreme cases, even breath sounds may be collected to infer the user's health status.
For the first time, we will reveal a new attack surface we call "Hot Mic", which relates to the audio sensing circuitry widely used in smart speakers.
We will give a live demonstration of extracting and recovering the voice signal for a meaningful duration after the mute button is pressed, even though that button is designed to be the last barrier protecting voice privacy. We will also provide countermeasures for attendees of this session.
We will also walk through the landscape of the emerging smart speaker market in China. We will then analyze those designs, especially the hardware, to give attendees a better understanding of potential future security flaws and, most importantly, of the difficulty of earning trust from both consumers and regulatory authorities.
Although the auditable surface keeps shrinking (due, for example, to encrypted network traffic and sensor behavior that cannot be measured externally), we tried to make the most of every audit surface that remains. We will present our observations of the data flowing over internal hardware buses and of network behavior. We have created several innovative testing methods and gadgets, which can also be easily connected to a smartphone to monitor any illicit audio capture.
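As a rough illustration of what such a monitoring gadget could check, the sketch below flags moments when audio data is still flowing on an internal bus while the hardware mute line is asserted. This is our own minimal example, not the gadget's actual firmware: the `BusSample` structure, field names, and the idea of sampling a mute GPIO alongside bus activity (e.g., I2S frames) are assumptions made for clarity.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BusSample:
    """One observation taken by a hypothetical bus-monitoring gadget."""
    timestamp_ms: int
    mute_asserted: bool  # state of the hardware mute line (e.g., read via GPIO)
    bus_active: bool     # whether audio data was seen on the bus (e.g., I2S frames)

def find_mute_violations(samples: List[BusSample]) -> List[int]:
    """Return timestamps at which audio data flowed while mute was asserted."""
    return [s.timestamp_ms for s in samples if s.mute_asserted and s.bus_active]

# Example trace: mute is pressed at t=20 ms, but audio keeps flowing until t=40 ms.
trace = [
    BusSample(0,  False, True),   # normal listening
    BusSample(20, True,  True),   # mute pressed, bus still active -> violation
    BusSample(40, True,  False),  # bus finally idle
]
print(find_mute_violations(trace))  # [20]
```

In practice the gadget would tap the bus electrically and stream samples to a phone app; the point of the check is simply that any bus activity after mute is asserted is evidence of illicit capture.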
Finally, we will provide, as references, some verifiable designs aimed at earning greater trust from regulators and users. We hope that with these designs, smart speaker manufacturers will be able to expand the auditable surface without sacrificing security and privacy.