"This is a key first step in being able to shed light on serial hijackers' behavior," says MIT Ph.D. candidate Cecilia Testart. Hijacking IP addresses is an increasingly popular form of cyber-attack. This is done for a range of reasons, from sending spam and malware to stealing Bitcoin. It's estimated that in 2017 alone, routing incidents such as IP hijacks affected more than 10 percent of all the world's routing domains. There have been major incidents at Amazon and Google, and even incidents involving nation-states -- a study last year suggested that a Chinese telecom company used the approach to gather intelligence on Western countries by rerouting their Internet traffic through China.
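One common form of IP hijack works by announcing a more-specific prefix than the legitimate one, since routers prefer the most specific route. A minimal sketch of that detection idea, using Python's standard `ipaddress` module (the prefixes and AS numbers below are hypothetical illustrations, not from the MIT study):

```python
import ipaddress

def looks_like_hijack(legit_prefix, legit_origin, announced_prefix, announced_origin):
    """Flag a BGP announcement that covers address space inside a known
    legitimate prefix but originates from a different AS."""
    legit = ipaddress.ip_network(legit_prefix)
    announced = ipaddress.ip_network(announced_prefix)
    # A more-specific (or equal) announcement from an unexpected origin AS
    # is the classic signature of a prefix hijack.
    return announced.subnet_of(legit) and announced_origin != legit_origin

# Hypothetical example: AS 64512 legitimately originates 203.0.113.0/24,
# then AS 64666 suddenly announces the more-specific 203.0.113.0/25.
print(looks_like_hijack("203.0.113.0/24", 64512, "203.0.113.0/25", 64666))  # True
```

Real detection systems (including the serial-hijacker work described above) look at many more signals over time, such as how long routes persist and how many prefixes an AS has hijacked before; this only shows the single-announcement heuristic.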
Facebook's Portal smart home device is finally launching in the UK – but a human contractor might end up listening to your voice commands. The device, whose AI-equipped camera will follow users around the room in order to keep them in the frame during video calls, will be available to British consumers for the first time from Oct 15. Users will be able to make voice calls using Facebook Messenger and encrypted voice calls using WhatsApp, as well as watch Facebook's TV service in tandem with their friends. But Facebook admits up front that clips of the instructions given to Portal's voice assistant might be passed to human contractors to check whether they have been correctly interpreted by its speech recognition software – unless users explicitly opt out. Andrew Bosworth, Facebook's vice president of augmented and virtual reality, said that Portal would never record the content of anyone's video calls, and that its "smart camera" software remains entirely on the device without any data being sent back to Facebook.
Jimmy Gomez is a California Democrat, a Harvard graduate and one of the few Hispanic lawmakers serving in the US House of Representatives. But to Amazon's facial recognition system, he looks like a potential criminal. Gomez was one of 28 US Congress members falsely matched with mugshots of people who've been arrested, as part of a test the American Civil Liberties Union ran last year of the Amazon Rekognition program. Nearly 40 percent of the false matches by Amazon's tool, which is being used by police, involved people of color. This is part of a CNET special report exploring the benefits and pitfalls of facial recognition.
Hey, Google, enough is enough already. Google was caught letting contractors listen in on conversations captured by its personal assistant, which sounds bad until you realize Google wasn't alone in this. Apple and Facebook were doing the same thing. And this week, Microsoft got stung by Vice's Motherboard, and now admits it, too, listens. The companies, which also include Amazon, have said they do this on a limited basis to learn and make their assistants better.
Google hopes that open-sourcing Live Transcribe will let any developer deliver captions for long-form conversations. The source code is available now on GitHub. Google released Live Transcribe in February. The tool uses machine learning algorithms to turn audio into real-time captions. Unlike Android's upcoming Live Caption feature, Live Transcribe is a full-screen experience, uses your smartphone's microphone (or an external microphone), and relies on the Google Cloud Speech API.
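Streaming speech APIs such as Google's Cloud Speech endpoint typically emit interim hypotheses that are later replaced by a final transcript, and a captioning client folds that stream into stable lines. A toy sketch of that folding logic (the result tuples and function name are hypothetical, not Live Transcribe's actual code):

```python
def render_captions(results):
    """Fold a stream of (text, is_final) recognition results into the
    caption state a viewer would see: committed lines plus the current
    in-progress hypothesis."""
    committed = []   # finalized caption lines
    interim = ""     # current interim hypothesis, still subject to change
    for text, is_final in results:
        if is_final:
            committed.append(text)   # promote the hypothesis to a fixed line
            interim = ""
        else:
            interim = text           # overwrite the previous interim guess
    return committed, interim

stream = [
    ("hel", False), ("hello wor", False), ("hello world", True),
    ("how ar", False),
]
print(render_captions(stream))
# (['hello world'], 'how ar')
```

This replace-then-commit pattern is why live captions visibly "rewrite themselves" as you speak before settling on a final line.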
What I saw didn't look very much like the future -- or at least the automated one you might imagine. The offices could have been call centers or payment processing centers. One was a timeworn former apartment building in the middle of a low-income residential neighborhood in western Kolkata that teemed with pedestrians, auto rickshaws and street vendors. In facilities like the one I visited in Bhubaneswar and in other cities in India, China, Nepal, the Philippines, East Africa and the United States, tens of thousands of office workers are punching a clock while they teach the machines. Tens of thousands more workers, independent contractors usually working in their homes, also annotate data through crowdsourcing services like Amazon Mechanical Turk, which lets anyone distribute digital tasks to independent workers in the United States and other countries.
On paper, it's a great time to be on a dating app. In the seven years since Tinder's entrance onto the dating scene in 2012, it has gone from fringe novelty to romantic ubiquity; within two years of launching, it was seeing 1bn swipes a day. Other apps have similarly impressive stats: in 2018, Bumble's global brand director revealed it had more than 26 million users and a confirmed 20,000 marriages. It's a far cry from the considerably less optimistic response Tinder received when it launched. Many decried it as the end of romance itself.
The 9th Circuit U.S. Court of Appeals ruled Thursday that Facebook users in Illinois can sue the company over its use of facial recognition technology, meaning a class action can move forward. According to the American Civil Liberties Union, it's the first decision by a U.S. appellate court to directly address privacy concerns posed by facial recognition technology.
Somewhat unceremoniously, Facebook this week provided an update on its brain-computer interface project, preliminary plans for which it unveiled at its F8 developer conference in 2017. In a paper published in the journal Nature Communications, a team of scientists at the University of California, San Francisco backed by Facebook Reality Labs -- Facebook's Pittsburgh-based division devoted to augmented reality and virtual reality R&D -- described a prototypical system capable of reading and decoding study subjects' brain activity while they speak. It's impressive no matter how you slice it: The researchers managed to make out full, spoken words and phrases in real time. Study participants (who were prepping for epilepsy surgery) had a patch of electrodes placed on the surface of their brains; the researchers employed a technique called electrocorticography (ECoG) -- the direct recording of electrical potentials associated with activity from the cerebral cortex -- to derive rich insights. A set of machine learning algorithms equipped with phonological speech models learned to decode specific speech sounds from the data and to distinguish between questions and responses.
With a new feature, Tinder says it wants to make the swiping experience safer for its LGBTQ users traveling and living in certain countries. On Wednesday, the dating app introduced a new safety update dubbed "Traveler Alert" that will warn users who have identified themselves as lesbian, gay, bisexual, transgender and/or queer when they enter a country that could criminalize them for being out. The app plans to use the locations from users' devices to determine whether a user's safety is at risk; affected users can opt to have their profile hidden during their stay or keep it public. The caveat is that if a user keeps their profile public, their sexual orientation or gender identity will no longer be displayed on the app until they return to a location where disclosing it is deemed safer. In the statement, Tinder says they developed the feature so that users "can take extra caution and do not unknowingly place themselves in danger for simply being themselves."
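The mechanics described above reduce to a small decision rule: if an at-risk user is in a flagged country, either hide the profile entirely or keep it visible with identity details withheld. A hypothetical sketch of that logic -- the country codes, field names, and rules here are illustrative, not Tinder's actual implementation:

```python
# Hypothetical set of jurisdictions flagged as unsafe; "XA"/"XB" are
# placeholder codes, not real ISO country codes.
FLAGGED_COUNTRIES = {"XA", "XB"}

def profile_view(is_lgbtq, country_code, keep_public):
    """Decide what a user's profile shows while they are in a given country."""
    at_risk = is_lgbtq and country_code in FLAGGED_COUNTRIES
    if not at_risk:
        return {"visible": True, "show_identity": True, "alert": False}
    if not keep_public:
        # User opted to hide their profile for the duration of the trip.
        return {"visible": False, "show_identity": False, "alert": True}
    # Profile stays public, but orientation/gender identity is withheld.
    return {"visible": True, "show_identity": False, "alert": True}

print(profile_view(is_lgbtq=True, country_code="XA", keep_public=True))
# {'visible': True, 'show_identity': False, 'alert': True}
```

The key design choice the feature reflects is defaulting to withholding identity details even when the user chooses to remain visible, so visibility and disclosure are decided independently.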