Communications: AI-Alerts
Hackers Can Use Lasers to 'Speak' to Your Amazon Echo
In the spring of last year, cybersecurity researcher Takeshi Sugawara walked into the lab of Kevin Fu, a professor he was visiting at the University of Michigan. He wanted to show off a strange trick he'd discovered. Sugawara pointed a high-powered laser at the microphone of his iPad--all inside of a black metal box, to avoid burning or blinding anyone--and had Fu put on a pair of earbuds to listen to the sound the iPad's mic picked up. As Sugawara varied the laser's intensity over time in the shape of a sine wave, fluctuating at about 1,000 times a second, Fu picked up a distinct high-pitched tone. The iPad's microphone had inexplicably converted the laser's light into an electrical signal, just as it would with sound.
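At a signal level, the trick amounts to amplitude modulation: the laser's optical power is varied in a sine-wave pattern, and the microphone's sensing element responds to those rapid intensity fluctuations roughly as it would to sound pressure. The sketch below is a minimal model of that idea; the sample rate, optical power, and sensitivity constants are illustrative assumptions, not values from the research. It builds the roughly 1 kHz modulation described above and recovers the same tone from the simulated microphone output.

```python
import numpy as np

# Toy model of light-based signal injection: laser power is amplitude-modulated
# by a ~1 kHz sine wave, and the microphone output is assumed to track the
# received optical power linearly. All constants are illustrative assumptions.

SAMPLE_RATE = 48_000                      # samples per second
TONE_HZ = 1_000                           # modulation frequency from the article
DURATION_S = 0.05                         # short simulation window

t = np.arange(int(SAMPLE_RATE * DURATION_S)) / SAMPLE_RATE

# Laser intensity: a DC bias plus a sine-shaped modulation (power never goes negative).
bias_mw = 5.0                             # hypothetical average optical power (mW)
depth_mw = 4.0                            # hypothetical modulation depth (mW)
laser_power = bias_mw + depth_mw * np.sin(2 * np.pi * TONE_HZ * t)

# Assumed microphone response: output voltage proportional to incident power.
mic_sensitivity = 0.02                    # volts per mW (made-up constant)
mic_output = mic_sensitivity * laser_power

# Remove the DC offset and find the dominant frequency of what the mic "hears".
tone = mic_output - mic_output.mean()
freqs = np.fft.rfftfreq(tone.size, d=1.0 / SAMPLE_RATE)
spectrum = np.abs(np.fft.rfft(tone))
print(f"dominant frequency heard: {freqs[spectrum.argmax()]:.0f} Hz")  # ~1000 Hz
```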
Taking the Risk Out of Machine Learning and AI - Workflow
Machine learning and artificial intelligence are integral components of any modern organization's IT stack, but these data-harvesting tools can have a dark side if appropriate risk-management and planning protocols aren't in place. There's no denying the power and possibilities created by AI and machine learning. With this power to build models that improve the efficiency and performance of everything from marketing and supply chain to sales and human resources comes considerable responsibility. A recent McKinsey report sheds light on why companies in every industry should be wary of assuming that these relatively new and remarkably complex tools will always deliver the desired outcome as they're integrated with other applications and processes. These tools are like every other tool that's ever existed: they're only as good as the people designing and using them.
Machine Learning for Translation: What's the State of the Language Art? - ReadWrite
A new batch of machine translation tools driven by artificial intelligence is already translating tens of millions of messages per day. Proprietary ML translation solutions from Google, Microsoft, and Amazon are in daily use, while Facebook takes its own road with open-source approaches. What works best for translating software, documentation, and natural-language content? And where is AI-driven neural-network automation heading?
Facebook's AI prevents you from being identified by face recognition tech
Facial recognition systems are all the rage among government agencies around the world as they seek to automate services and keep tabs on their citizens. If there's a picture of you somewhere, you could potentially be identified in photos and videos from public camera feeds. Now, Facebook has devised a way to thwart this technology. Its face de-identification tech, developed by three AI researchers who work with the company, modifies your face slightly in video content so that facial recognition systems can't match what they see in the footage with images of you in their databases. In a demonstration video, certain details are tweaked, such as the shape of a person's mouth or the size of their eyes.
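The article gives no implementation details, and Facebook's system is reportedly a learned encoder-decoder network rather than anything this simple, but the underlying objective can be illustrated generically: change the image just enough that a recognition model's identity embedding no longer matches the original, while keeping the pixel-level change small. The sketch below uses a tiny randomly initialized stand-in encoder and a gradient-based perturbation purely to show that trade-off; none of it reflects Facebook's actual model.

```python
import torch
import torch.nn as nn

# Stand-in "face recognition" encoder with random weights, used only to
# illustrate the objective; real systems are far larger, and Facebook's
# de-identification model is a learned encoder-decoder, not this procedure.
encoder = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 16 * 16, 64),
)

face = torch.rand(1, 3, 64, 64)            # placeholder "face" image
with torch.no_grad():
    original_id = encoder(face)            # embedding a matcher would store

perturbation = torch.zeros_like(face, requires_grad=True)
opt = torch.optim.Adam([perturbation], lr=0.01)

for _ in range(50):
    opt.zero_grad()
    modified = (face + perturbation).clamp(0.0, 1.0)
    new_id = encoder(modified)
    # Push the embedding away from the stored identity while penalizing large
    # pixel changes, so the modified frame still looks like the same person.
    loss = -torch.norm(new_id - original_id) + 10.0 * perturbation.abs().mean()
    loss.backward()
    opt.step()

with torch.no_grad():
    shift = torch.norm(encoder((face + perturbation).clamp(0.0, 1.0)) - original_id)
print(f"identity embedding moved by {shift.item():.2f} (arbitrary units)")
```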
Indic Language Computing
In April 2019, following the Easter Sunday bomb attacks, the Government of Sri Lanka had to shut down Facebook and YouTube for nine days to stop the spread of hate speech and false news, posted mainly in the local languages Sinhala and Tamil. This came about simply because these social media platforms did not have the capability to detect and warn about the provocative content. India's Ministry of Human Resource Development (MHRD) wants lectures on Swayam and NPTEL--the online teaching platforms--to be translated into all Indian languages. Approximately 2.5 million students use the Swayam lectures on computer science alone. The lectures are in English, which many students find difficult to understand; a large number of lectures are manually subtitled in English.
This Technique Can Make It Easier for AI to Understand Videos
From dubious viral memes to gaffe-prone presidential debates to surreal TikTok remixes, you could spend the rest of your life trying to watch all the video footage posted to YouTube in a single day. Researchers want to let artificial intelligence algorithms watch and make sense of it instead. A group from MIT and IBM developed an algorithm capable of accurately recognizing actions in videos while consuming a small fraction of the processing power previously required, potentially changing the economics of applying AI to large amounts of video. The method adapts an AI approach used to process still images to give it a crude concept of passing time. The work is a step towards having AI recognize what's happening in video, perhaps helping to tame the vast amounts now being generated.
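The summary does not name the technique, but the description matches the temporal shift idea published by MIT and IBM researchers: rather than adding costly 3D convolutions, a small slice of each frame's feature channels is shifted one step forward or backward along the time axis, so an ordinary 2D image network can mix information from neighboring frames at almost no extra cost. The sketch below shows only that shifting step, with made-up tensor sizes, and should be read as an illustration rather than the group's exact code.

```python
import numpy as np

def temporal_shift(features: np.ndarray, shift_fraction: float = 0.125) -> np.ndarray:
    """Shift a slice of channels along the time axis.

    features: array of shape (T, C, H, W) -- per-frame feature maps from a 2D CNN.
    One slice of channels moves one frame forward, another moves one frame
    backward, and the rest stay put; vacated positions are zero-filled.
    """
    t, c, h, w = features.shape
    n = int(c * shift_fraction)
    out = features.copy()
    # Channels [0, n): each frame receives the previous frame's values.
    out[1:, :n] = features[:-1, :n]
    out[0, :n] = 0
    # Channels [n, 2n): each frame receives the next frame's values.
    out[:-1, n:2 * n] = features[1:, n:2 * n]
    out[-1, n:2 * n] = 0
    return out

# Toy example: 8 frames, 16 channels, 7x7 feature maps.
frames = np.random.rand(8, 16, 7, 7).astype(np.float32)
shifted = temporal_shift(frames)
print(shifted.shape)  # (8, 16, 7, 7): same size and cost, but frames now share information
```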
How Photos of Your Kids Are Powering Surveillance Technology
One day in 2005, a mother in Evanston, Ill., joined Flickr and posted pictures of her children. Then she more or less forgot her account existed. Years later, her children's faces are in a database that's used to test and train some of the most sophisticated artificial intelligence systems in the world. The pictures of Chloe and Jasper Papa as kids are typically goofy fare: grinning with their parents; sticking their tongues out; costumed for Halloween. None of them could have foreseen that 14 years later, those images would reside in an unprecedentedly huge facial-recognition database called MegaFace.
Using Machine Learning to Hunt Down Cybercriminals
"This is a key first step in being able to shed light on serial hijackers' behavior," says MIT Ph.D. candidate Cecilia Testart. Hijacking IP addresses is an increasingly popular form of cyber-attack. This is done for a range of reasons, from sending spam and malware to stealing Bitcoin. It's estimated that in 2017 alone, routing incidents such as IP hijacks affected more than 10 percent of all the world's routing domains. There have been major incidents at Amazon and Google and even in nation-states -- a study last year suggested that a Chinese telecom company used the approach to gather intelligence on western countries by rerouting their Internet traffic through China.
Facebook Portal security concerns laid bare as company admits humans can listen in
Facebook's Portal smart home device is finally launching in the UK – but a human contractor might end up listening to your voice commands. The device, whose AI-equipped camera will follow users around the room in order to keep them in the frame during video calls, will be available to British consumers for the first time from Oct 15. Users will be able to make voice calls using Facebook Messenger and encrypted voice calls using WhatsApp, as well as watch Facebook's TV service in tandem with their friends. But Facebook admits up front that clips of the instructions given to Portal's voice assistant might be passed to human contractors to check whether they have been correctly interpreted by its speech recognition software – unless users explicitly opt out. Andrew Bosworth, Facebook's vice president of augmented and virtual reality, said that Portal would never record the content of anyone's video calls, and that its "smart camera" software remains entirely on the device without any data being sent back to Facebook.
Hey Siri, Google and Alexa — enough with the snooping
Hey, Google, enough is enough already. Google was caught having contractors listen in on our conversations with its personal assistant, which sounds bad until you realize Google wasn't alone in this. Apple and Facebook were doing the same thing. And this week, Microsoft got stung by Vice's Motherboard and now admits it, too, listens. The companies, which also include Amazon, have said they do this on a limited basis to learn and to make their assistants better.