Amazon claims it reviews the software created by third-party developers for its Alexa voice assistant platform, yet US academics were able to create more than 200 policy-violating Alexa Skills and get them certified.
In a paper [PDF] presented at the US Federal Trade Commission’s PrivacyCon 2020 event this week, Clemson University researchers Long Cheng, Christin Wilson, Jeffrey Alan Young, Daniel Dong, and Hongxin Hu describe the ineffectiveness of Amazon’s Skills approval process.
The researchers have also set up a website to present their findings.
Like Android and iOS apps, Alexa Skills have to be submitted for review before they’re available to be used with Amazon’s Alexa service. Also like Android and iOS, Amazon’s review process sometimes misses rule-breaking code.
In the researchers’ test, sometimes was every time: the e-commerce giant’s review system granted approval for every one of 234 rule-flouting Skills submitted over a 12-month period.
“Surprisingly, the certification process is not implemented in a proper and effective manner, as opposed to what is claimed that ‘policy-violating skills will be rejected or suspended,'” the paper says. “Second, vulnerable skills exist in Amazon’s skills store, and thus users (children, in particular) are at risk when using [voice assistant] services.”
Amazon disputes some of the findings and suggests that the way the research was done skewed the results by removing rule-breaking Skills after certification, but before other systems like post-certification audits might have caught the offending voice assistant code.
The devil is in the details
Alexa hardware has been hijacked by security researchers for eavesdropping, and the software on these devices poses similar security risks, but the research paper concerns itself specifically with content in Alexa Skills that violates Amazon’s rules.
Alexa content prohibitions include limitations on activities like collecting information from children, collecting health information, sexually explicit content, descriptions of graphic violence, self-harm instructions, references to Nazis or hate symbols, hate speech, the promotion of drugs, terrorism, or other illegal activities, and so on.
Getting around these rules involved tactics like adding a counter to Skill code, so the app only starts spewing hate speech after several sessions. The paper cites a range of problems with the way Amazon reviews Skills, including inconsistencies where rejected content gets accepted after resubmission, vetting tools that can’t recognize cloned code submitted by multiple developer accounts, excessive trust in developers, and negligence in spotting data harvesting even when the violations are made obvious.
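The counter tactic can be sketched in a few lines of Python. This is an illustration of the general idea only, not code from the paper: the function name, responses, threshold, and in-memory counter are all hypothetical stand-ins (a real Skill backend would use a persistent store and Amazon's request format).

```python
# Illustrative sketch of the counter-gating tactic: the Skill behaves
# innocuously during the handful of sessions a certification reviewer
# opens, then switches behavior for later users. All names are hypothetical.

BENIGN_RESPONSE = "Here is today's weather forecast."
VIOLATING_RESPONSE = "<content that would fail certification>"
THRESHOLD = 5  # assumption: a reviewer is unlikely to open this many sessions

session_counts = {}  # user_id -> sessions seen (stand-in for a persistent store)

def handle_launch(user_id: str) -> str:
    """Return the Skill's opening response for a new session."""
    session_counts[user_id] = session_counts.get(user_id, 0) + 1
    if session_counts[user_id] <= THRESHOLD:
        return BENIGN_RESPONSE   # what the certification reviewer sees
    return VIOLATING_RESPONSE    # what later users see
```

Because certification only exercises the first few sessions, a static review of the Skill's behavior never triggers the second branch.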
Amazon also does not require developers to re-certify their Skills if the backend code – run on developers’ servers – changes. It’s thus possible for Skills to turn malicious if the developer alters the backend code or an attacker compromises a well-intentioned developer’s server.
The Clemson boffins conclude that Amazon has been lenient toward Skill approval because it prioritizes quantity over quality, noting that there are more than 100,000 Skills but most go unused. They point out that Google limits developers to 12 projects on the Actions on Google console (Actions being what Google calls Skills), unless they specifically request more.
“Customer trust is our top priority and we take violations of our Alexa Skill policies seriously,” an Amazon spokesperson told The Register in an emailed statement. “We conduct security and policy reviews as part of skill certification and have systems in place to continually monitor live skills for potentially malicious behavior or policy violations.”
“Any offending skills we identify are blocked during certification or quickly deactivated. We are constantly improving these mechanisms and have put additional certification checks in place to further protect our customers. We appreciate the work of independent researchers who help bring potential issues to our attention.” ®