In the spring of last year, cybersecurity researcher Takeshi Sugawara walked into the lab of Kevin Fu, a professor he was visiting at the University of Michigan. He wanted to show off a strange trick he'd discovered. Sugawara pointed a high-powered laser at the microphone of his iPad (all inside a black metal box, to avoid burning or blinding anyone) and had Fu put on a pair of earbuds to listen to the sound the iPad's mic picked up. As Sugawara varied the laser's intensity over time in the shape of a sine wave, fluctuating at about 1,000 times a second, Fu picked up a distinct high-pitched tone. The iPad's microphone had inexplicably converted the laser's light into an electrical signal, just as it would with sound.
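The modulation described above can be sketched in a few lines. This is an illustrative model only, not the researchers' actual setup: it generates the kind of sine-wave intensity signal the excerpt describes, biased around a DC offset because a laser's intensity cannot go negative. The sample rate and normalization are assumptions for the sketch.

```python
import numpy as np

# Hypothetical sketch of the modulation signal described in the article:
# the laser's intensity is varied as a ~1,000 Hz sine wave, which the
# microphone transduces into an electrical signal as if it were sound.
SAMPLE_RATE = 48_000   # assumed samples per second for the sketch
TONE_HZ = 1_000        # "fluctuating at about 1,000 times a second"
DURATION_S = 1.0

t = np.arange(int(SAMPLE_RATE * DURATION_S)) / SAMPLE_RATE

# Intensity cannot be negative, so bias the sine wave around a DC
# offset and normalize it to the range [0, 1].
intensity = 0.5 + 0.5 * np.sin(2 * np.pi * TONE_HZ * t)

print(intensity.min(), intensity.max())  # stays within [0, 1]
```

The DC bias is the key physical constraint: the audio tone lives entirely in the fluctuation around that offset, while the offset itself carries no sound.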
Apple customers can now opt out of having their conversations with Siri listened to by human "graders" and delete any clips that have already been uploaded, three months after the Guardian revealed the practice based on a whistleblower report. In the latest software updates for Apple's products, including iOS 13.2 and macOS 10.15.1, users have the option to disable the grading feature while still using Siri as normal. The preferences are not particularly prominent. To opt out of future grading on iOS, in the Settings app, under the Privacy heading, users can tap on "Analytics & Improvements" and then disable the "Improve Siri & Dictation" preference. To delete their uploaded clips, they go to Siri & Search in the Settings app, tap on Siri & Dictation History, and then hit a red button marked "Delete Siri & Dictation History".
Security researchers developed voice apps for both Google Home and Amazon Echo devices that could eavesdrop on people. Smart speakers already face privacy concerns, but now security researchers have found that malicious apps designed to eavesdrop can sneak through Google's and Amazon's vetting processes. On Sunday, Security Research Labs disclosed its findings after developing eight voice apps (Alexa "skills" and Google "actions") that could listen in on people's conversations through Amazon's Echo and Google's Nest devices. All of the apps passed the companies' reviews for third-party apps. The research was first reported by CNET sister site ZDNet.
Oscar-nominated actor Samuel L. Jackson is lending his iconic voice to Amazon's Alexa – profanities and all. During Amazon's event to unveil new products and services Wednesday, the online shopping giant announced that Jackson will be the first celebrity voice for its Alexa virtual assistant, with the voice created using neural text-to-speech technology. There will be both an explicit version and a clean version when the feature launches later this year. The Alexa "skill" will cost 99 cents as an introductory offer. After the introductory period, the price will be $4.99, according to the product page.
Until recently, when users wanted to search for something online, they would need to type their queries into a search engine such as Google or Yahoo. However, the development of voice search technology means that users can now simply speak their query aloud to a device such as a smart speaker (e.g. the Google Home) or an AI-powered virtual assistant (e.g. Amazon Alexa) and receive a verbal answer to that query. ComScore predicts that voice search will account for half of all online searches by the year 2020. So, what exactly is the appeal of this technology for users? Voice search is changing how people search for things online, so you will have to adapt your approach to keyword research accordingly.
One day in 2017, Alexa went rogue. When Martin Josephson, who lives in London, came home from work, he heard his Amazon Echo Dot voice assistant spitting out fragmentary commands, seemingly based on his previous interactions with the device. It appeared to be regurgitating requests to book train tickets for journeys he had already taken and to record TV shows that he had already watched. Josephson had not said the wake word – "Alexa" – to activate it and nothing he said would stop it. It was, he says, "Kafkaesque". This was especially interesting because Josephson (not his real name) was a former Amazon employee.
That was the conclusion of a recent study published in academic journal Marketing Science in which researchers analyzed field data from outbound sales calls between bots or sales reps and 6,200 randomized customers of an anonymous Asia-based financial services company. They found that the customers tended to grow curt when informed upfront of the bot's presence, and that such disclosures led to an 80% drop in sales. "They perceive the disclosed bot as less knowledgeable and less empathetic," the study authors wrote. "The negative disclosure effect seems to be driven by a subjective human perception against machines, despite the objective competence of AI chatbots." The paper raises a moral dilemma for businesses looking to deploy chatbots.
The use of multiple digital devices to support people's daily activities has long been discussed [11]. Multi-device experiences (MDXs) spanning multiple devices simultaneously are viable for many individuals. Each device has unique strengths in aspects such as display, compute, portability, sensing, communications, and input. Despite the potential to utilize the portfolio of devices at their disposal, people typically use just one device per task, meaning they may need to make compromises in the tasks they attempt or may underperform at the task at hand. It also means the support that digital assistants such as Amazon Alexa, Google Assistant, or Microsoft Cortana can offer is limited to what is possible on the current device.
Chatbots have emerged as a great option for providing a 24/7 self-service solution to address a host of customer support requirements. They enable customers to get their questions answered in real time, and they free up support staff from having to field high volumes of repetitive inquiries. And with advances in artificial intelligence and machine learning, chatbots are becoming extremely effective at providing a cognitive and conversational experience that your customers will love. As chatbot adoption has increased, various types have entered the market to address different requirements. For example, chatbots built for specific B2C or B2B support use cases, called transactional bots, are very different from those built for more wide-ranging applications, referred to as knowledge bots.
The BBC is preparing to launch a rival to Amazon's Alexa called Beeb, with a pledge that it will understand British accents. The voice assistant, which has been created by an in-house BBC team, will be launched next year, with a focus on enabling people to find their favourite programmes and interact with online services. While some US-developed products have struggled to understand strong regional accents, the BBC will this week ask staff in offices around the UK to record their voices and make sure the software understands them. The BBC currently has no plans to launch a standalone physical product such as Amazon's Echo speaker or a Google Home device. Instead, the Beeb software will be built into the BBC's website and its iPlayer app on smart TVs, and will be made available to manufacturers who want to incorporate the public broadcaster's software.