[1] Zhan X M, Huang N S, Yang C. Research progress in speech recognition technology[J]. Modern Computer (Professional Edition), 2008, 24(9): 43-45.
[2] Sovie D, McMahon L, Murdoch R, et al. Reshape to relevance[R/OL]. Accenture, 2019 [2019-10-20]. https://www.accenture.com/_acnmedia/pdf-93/accenture-digital-consumer-2019-reshape-to-relevance.pdf.
[3] Vanian J, Pressman A. How Amazon, Apple, Google, and Microsoft Created an Eavesdropping Explosion—Data Sheet[N/OL]. Fortune, 2019-08-08 [2019-10-20]. https://fortune.com/2019/08/08/gogole-amazon-microsoft-listen-conversation-siri.
[4] Andriotis A, Stevens L. Amazon considers entering the payments business via Alexa[N/OL]. The Wall Street Journal (Chinese edition), 2018-04-06 [2019-10-20]. https://cn.wsj.com/articles/CN-TEC-20180406192723.
[5] Papernot N, McDaniel P, Sinha A, et al. Towards the science of security and privacy in machine learning[J]. arXiv preprint arXiv:1611.03814, 2016.
[6] Kasmi C, Esteves J L. IEMI Threats for Information Security: Remote Command Injection on Modern Smartphones[J]. IEEE Transactions on Electromagnetic Compatibility, 2015, 57(6): 1752-1755.
[7] Roy N, Hassanieh H, Roy Choudhury R. BackDoor: Making microphones hear inaudible sounds[C]// Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys). ACM, 2017: 2-14.
[8] Zhang G, Yan C, Ji X, et al. DolphinAttack: Inaudible voice commands[C]// Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2017: 103-117.
[9] Song L, Mittal P. Inaudible voice commands[J]. arXiv preprint arXiv:1708.07238, 2017.
[10] Roy N, Shen S, Hassanieh H, et al. Inaudible voice commands: The long-range attack and defense[C]// 15th USENIX Symposium on Networked Systems Design and Implementation (NSDI 18). 2018: 547-560.
[11] Michalevsky Y, Boneh D, Nakibly G. Gyrophone: Recognizing speech from gyroscope signals[C]// 23rd USENIX Security Symposium (USENIX Security 14). USENIX Association, 2014.
[12] Schlegel R, Zhang K, Zhou X, et al. Soundcomber: A Stealthy and Context-Aware Sound Trojan for Smartphones[C]// NDSS. 2011, 11: 17-33.
[13] Dalvi N, Domingos P, Sanghai S, et al. Adversarial classification[C]// Proceedings of the 10th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Seattle, Aug 22-25, 2004. New York: ACM, 2004: 99-108.
[14] Lowd D, Meek C. Adversarial learning[C]// Proceedings of the 11th ACM SIGKDD International Conference on Knowledge Discovery in Data Mining, Chicago, Aug 21-24, 2005. New York: ACM, 2005: 641-647.
[15] Huang L, Joseph A D, Nelson B, et al. Adversarial machine learning[C]// Proceedings of the 4th ACM Workshop on Security and Artificial Intelligence, Chicago, Oct 21, 2011. New York: ACM, 2011: 43-58.
[16] Sharif M, Bhagavatula S, Bauer L, et al. Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition[C]// Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2016: 1528-1540.
[17] Su J, Vargas D V, Sakurai K. One pixel attack for fooling deep neural networks[J]. IEEE Transactions on Evolutionary Computation, 2019, 23(5): 828-841.
[18] Szegedy C, Zaremba W, Sutskever I, et al. Intriguing properties of neural networks[J]. arXiv preprint arXiv:1312.6199, 2013.
[19] Kurakin A, Goodfellow I, Bengio S. Adversarial examples in the physical world[J]. arXiv preprint arXiv:1607.02533, 2016.
[20] Nguyen A, Yosinski J, Clune J. Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images[J]. arXiv preprint arXiv:1412.1897, 2014.
[21] Papernot N, McDaniel P, Jha S, et al. The Limitations of Deep Learning in Adversarial Settings[J]. arXiv preprint arXiv:1511.07528, 2015.
[22] Nikiforakis N, Balduzzi M, Desmet L, et al. Soundsquatting: Uncovering the Use of Homophones in Domain Squatting[M]// Information Security. Springer International Publishing, 2014.
[23] Kumar D, Paccagnella R, Murley P, et al. Skill squatting attacks on Amazon Alexa[C]// 27th USENIX Security Symposium (USENIX Security 18). 2018: 33-47.
[24] Zhang N, Mi X, Feng X, et al. Dangerous skills: Understanding and mitigating security risks of voice-controlled third-party functions on virtual personal assistant systems[C]// 2019 IEEE Symposium on Security and Privacy (SP). IEEE, 2019.
[25] Bispham M K, Agrafiotis I, Goldsmith M. Nonsense attacks on Google Assistant and missense attacks on Amazon Alexa[J]. 2018.
[26] Zhang Y, Xu L, Mendoza A, et al. Life After Speech Recognition: Fuzzing Semantic Misinterpretation for Voice Assistant Applications[C]// NDSS. 2019.
[27] Abdullah H, Garcia W, Peeters C, et al. Practical Hidden Voice Attacks against Speech and Speaker Recognition Systems[J]. arXiv preprint arXiv:1904.05734, 2019.
[28] Alzantot M, Balaji B, Srivastava M. Did You Hear That? Adversarial Examples Against Automatic Speech Recognition[J]. arXiv preprint arXiv:1801.00554, 2018.
[29] Neupane A, Saxena N, Hirshfield L M, et al. The Crux of Voice (In)Security: A Brain Study of Speaker Legitimacy Detection[C]// NDSS. 2019.
[30] Yuan X, Chen Y, Zhao Y, et al. CommanderSong: A systematic approach for practical adversarial voice recognition[C]// 27th USENIX Security Symposium (USENIX Security 18). 2018: 49-64.
[31] Zhou M, Qin Z, Lin X, et al. Hidden Voice Commands: Attacks and Defenses on the VCS of Autonomous Driving Cars[J]. IEEE Wireless Communications, 2019.
[32] Vaidya T, Zhang Y, Sherr M, et al. Cocaine noodles: Exploiting the gap between human and machine speech recognition[C]// 9th USENIX Workshop on Offensive Technologies (WOOT 15). USENIX Association, 2015.
[33] Carlini N, Mishra P, Vaidya T, et al. Hidden voice commands[C]// 25th USENIX Security Symposium (USENIX Security 16). 2016: 513-530.
[34] Carlini N, Wagner D. Audio adversarial examples: Targeted attacks on speech-to-text[C]// 2018 IEEE Security and Privacy Workshops (SPW). IEEE, 2018: 1-7.
[35] Cisse M, Adi Y, Neverova N, et al. Houdini: Fooling Deep Structured Prediction Models[J]. arXiv preprint arXiv:1707.05373, 2017.
[36] Szurley J, Kolter J Z. Perceptual Based Adversarial Audio Attacks[J]. arXiv preprint arXiv:1906.06355, 2019.
[37] Schönherr L, Kohls K, Zeiler S, et al. Adversarial attacks against automatic speech recognition systems via psychoacoustic hiding[J]. arXiv preprint arXiv:1808.05665, 2018.
[38] Kreuk F, Adi Y, Cisse M, et al. Fooling end-to-end speaker verification with adversarial examples[C]// 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018: 1962-1966.
[39] Serdyuk D, Audhkhasi K, Brakel P, et al. Invariant Representations for Noisy Speech Recognition[J]. arXiv preprint arXiv:1612.01928, 2016.
[40] Sriram A, Jun H, Gaur Y, et al. Robust Speech Recognition Using Generative Adversarial Networks[J]. arXiv preprint arXiv:1711.01567, 2017.
[41] Zeng Q, Su J, Fu C, et al. A multiversion programming inspired approach to detecting audio adversarial examples[C]// 2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN). IEEE, 2019: 39-51.
[42] Yang Z, Li B, Chen P Y, et al. Characterizing Audio Adversarial Examples Using Temporal Dependency[J]. arXiv preprint arXiv:1809.10875, 2018.
[43] Shokri R, Stronati M, Song C, et al. Membership inference attacks against machine learning models[C]// 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017: 3-18.
[44] Salem A, Zhang Y, Humbert M, et al. ML-Leaks: Model and data independent membership inference attacks and defenses on machine learning models[J]. arXiv preprint arXiv:1806.01246, 2018.
[45] Miao Y, Zhao B Z H, Xue M, et al. The Audio Auditor: Participant-Level Membership Inference in Voice-Based IoT[J]. arXiv preprint arXiv:1905.07082, 2019.
[46] Dwork C, Roth A. The algorithmic foundations of differential privacy[J]. Foundations and Trends® in Theoretical Computer Science, 2014, 9(3-4): 211-407.
[47] Chaudhuri K, Monteleoni C, Sarwate A D. Differentially private empirical risk minimization[J]. Journal of Machine Learning Research, 2011, 12(Mar): 1069-1109.
[48] Jadhav S, Rawate A M. A New Audio Steganography with Enhanced Security based on Location Selection Scheme[J]. International Journal of Performability Engineering, 2016, 12(5).
[49] Kong Y, Zhang J. Adversarial Audio: A New Information Hiding Method and Backdoor for DNN-based Speech Recognition Models[J]. arXiv preprint arXiv:1904.03829, 2019.
[50] Wang B, Yao Y, Shan S, et al. Neural Cleanse: Identifying and mitigating backdoor attacks in neural networks[C]// 2019 IEEE Symposium on Security and Privacy (SP). IEEE, 2019: 707-723.
[51] Elsayed G F, Goodfellow I, Sohl-Dickstein J. Adversarial reprogramming of neural networks[J]. arXiv preprint arXiv:1806.11146, 2018.
[52] Mukhopadhyay D, Shirvanian M, Saxena N. All Your Voices are Belong to Us: Stealing Voices to Fool Humans and Machines[M]// Computer Security - ESORICS 2015. Springer International Publishing, 2015.
[53] Lei X, Tu G H, Liu A X, et al. The Insecurity of Home Digital Voice Assistants - Amazon Alexa as a Case Study[J]. arXiv preprint arXiv:1712.03327, 2017.
[54] Lai C I, Abad A, Richmond K, et al. Attentive filtering networks for audio replay attack detection[C]// ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019: 6316-6320.
[55] Diao W, Liu X, Zhou Z, et al. Your voice assistant is mine: How to abuse speakers to steal information and control your phone[C]// Proceedings of the 4th ACM Workshop on Security and Privacy in Smartphones & Mobile Devices. ACM, 2014: 63-74.
[56] Xiao Q, Li K, Zhang D, et al. Security risks in deep learning implementations[C]// 2018 IEEE Security and Privacy Workshops (SPW). IEEE, 2018: 123-128.