In late 2018, Krzysztof Czarnecki, a professor at Canada's University of Waterloo, built a self-driving car and trained it to navigate surrounding neighborhoods with an annotated driving data set from researchers in Germany. The vehicle worked well enough to begin with, recognizing Canadian cars and pedestrians just as well as German ones. But then Czarnecki took the autonomous car for a spin in heavy Ontarian snow. It quickly became a calamity on wheels, with the safety driver forced to grab the wheel repeatedly to avert disaster. The incident highlights a gap in the development of self-driving cars: maneuvering in bad weather.
Google has released a neural-network-powered chatbot called Meena that it claims is better than any other chatbot out there. Data slurp: Meena was trained on a whopping 341 gigabytes of public social-media chatter--8.5 times as much data as OpenAI's GPT-2. Google says Meena can talk about pretty much anything, and can even make up (bad) jokes. Why it matters: Open-ended conversation that covers a wide range of topics is hard, and most chatbots can't keep up. At some point most say things that make no sense or reveal a lack of basic knowledge about the world.
As facial recognition systems become increasingly accurate, more governments and law enforcement organizations are tapping them to verify people's identities, nab criminals, and keep transactions secure. In recent months, France's government announced a nationwide facial recognition ID program, a UK court ruled that live facial recognition doesn't violate privacy rights, and research revealed that the US Immigration and Customs Enforcement (ICE) agency and the FBI are using facial recognition to apprehend undocumented immigrants. Most of this activity is undertaken in the name of safety and security, but it is also raising major red flags among privacy advocates. They argue that the technology--which can scan and identify faces without consent in crowded streets, retail stores and sports stadiums--is predatory and invasive. Among consumers, the jury is still out.
A study at Canada's University of Alberta found that some virtual assistants are far better than others at providing users with reliable, relevant information on medical emergencies--and that none fully lives up to its potential. The team tested four commonly used devices--Alexa, Google Home, Siri, and Cortana--using 123 questions about 39 first aid topics, including heart attacks, poisoning, nosebleeds, and splinters. The devices' responses were measured for accuracy of topic recognition, detection of the severity of the emergency, complexity of language used, and how closely the advice given fit accepted first aid treatment and guidelines. Google Home performed the best, recognizing topics with 98% accuracy and providing relevant advice 56% of the time.
Some of the biggest names in artificial intelligence, including two godfathers of the machine learning boom, are betting that clever algorithms are about to transform the abilities of industrial robots. Geoffrey Hinton and Yann LeCun, who shared this year's Turing Award with Yoshua Bengio for their work on deep learning, are among the AI luminaries who have invested in Covariant.ai. The company, which emerged from stealth Wednesday, announced the first commercial installations of its AI-equipped robots: picking boxes and bags of products for a German electronics retailer called Obeta. Picking up everyday boxes and plastic packages might sound trivial, and it is for most humans. But workers in factories and warehouses are frequently given new objects to handle, or a batch of different items mixed together, and it's deceptively difficult for a machine to quickly work out how to grab the next doodad.
Robots are becoming more human-like every day: now they can sweat. Thomas Wallin at Cornell University in New York and his colleagues have created soft robotic grippers that can sweat to cool down. The grippers achieve a cooling capacity of 107 watts per kilogram, making them more efficient sweaters than mammals; by comparison, humans and horses have a maximum cooling capacity of around 35 watts per kilogram. Each gripper consists of three finger-like parts that bend simultaneously to grasp small objects.
Programming a robotic arm to deal with every situation, one rule at a time, is impossible. At Knapp, Mr. Puchwein and his partners had tried and failed for years to create a robot with the dexterity and flexibility needed for the job. Covariant, which is working with Knapp, built software that could learn through trial and error. First, the system learned from a digital simulation of the task -- a virtual recreation of a bin filled with random items. Then, when Mr. Chen and his colleagues transferred this software to a robot, it could pick up items in the real world.
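Covariant hasn't published the details of its system, but the trial-and-error loop described above--try a grasp in simulation, observe success or failure, update the policy, then transfer the learned policy to the real robot--can be sketched in miniature. Everything below (the pose count, the hidden success probabilities, the tabular value estimate) is invented for illustration; the real system learns from camera images with deep neural networks, not a lookup table.

```python
import random

# Toy stand-in for a bin-picking simulator: each of N candidate grasp
# poses has a hidden success probability, and the robot only observes
# success/failure after trying one. (Hypothetical setup.)
random.seed(0)

N_POSES = 5
success_prob = [0.1, 0.3, 0.9, 0.2, 0.4]  # pose 2 is the reliable grasp

def simulate_grasp(pose):
    """Return 1.0 on a successful simulated pick, else 0.0."""
    return 1.0 if random.random() < success_prob[pose] else 0.0

def train_in_simulation(episodes=2000, epsilon=0.1, lr=0.1):
    """Learn an estimated success value per pose by trial and error."""
    q = [0.0] * N_POSES
    for _ in range(episodes):
        # epsilon-greedy: mostly exploit the best-known pose, sometimes explore
        if random.random() < epsilon:
            pose = random.randrange(N_POSES)
        else:
            pose = max(range(N_POSES), key=lambda p: q[p])
        reward = simulate_grasp(pose)
        q[pose] += lr * (reward - q[pose])  # running estimate of success rate
    return q

q = train_in_simulation()
best_pose = max(range(N_POSES), key=lambda p: q[p])
print("learned values:", [round(v, 2) for v in q])
print("policy transferred to robot picks pose", best_pose)
```

The key property the blurb describes survives even in this sketch: no rule for "how to grasp" is ever written down; the policy emerges from many cheap simulated attempts before a single real-world pick is made.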
Amid the fear and intrigue surrounding the coronavirus, people are downloading a simulation video game in which players use real-time strategy to spread a deadly outbreak around the world. The video game's developers warn people not to take the game too seriously and to seek advice about how epidemics travel from authoritative sources. The coronavirus has sickened more than 4,500 people and killed more than 100. The illness originated in China before moving to other parts of the globe, including the USA. Plague Inc. is an 8-year-old app and PC game in which users play the role of a disease intent on infecting the world with a pathogen.
You've heard it a million times: Americans don't care about our online privacy. Turns out that's not really true. Anxiety levels over privacy and security are peaking as the relentless collection of online data and the steady drumbeat of data incursions and breaches take a toll. People are worried like never before about eavesdropping by smart home devices such as Google Home and the Amazon Echo, or about having their microphones tapped to target them with personalized ads, and they increasingly want a say over how their personal information gets used, according to a survey released Tuesday to mark Data Privacy Day. More than 8 in 10 American adults expect to have control over how a business handles their data, the survey, released by privacy firm DataGrail, found.
The global movie industry generated over $43 billion in revenue in 2018, of which the United States' contribution alone topped $11 billion. Yet these seemingly impressive headline figures can obscure the fact that year-on-year growth has been a sluggish 2 per cent over the last several years, with market researchers forecasting further stagnation. Given the inherent financial risk involved in filmmaking, some now believe artificial intelligence, rather than human expertise, is best placed to select which films are most likely to provide suitable returns on investment. In early January 2020, Warner Bros signed a deal with Cinelytic, a Los Angeles-based artificial intelligence company which, according to the press release, aims to help content creators make faster, better-informed decisions through predictive analytics. Belgium's ScriptBook provides a similar service, touted as "artificially intelligent script analysis and box office forecasting".
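The predictive analytics behind these services are proprietary, but the basic idea--fit a model on historical film data, then score a candidate project before greenlighting it--can be shown with a deliberately tiny sketch. The budget and revenue figures and the single-feature least-squares model below are invented for illustration; services like Cinelytic and ScriptBook presumably use far richer inputs (script text, cast, territories, release windows) and more sophisticated models.

```python
# Made-up historical (budget, revenue) pairs, in millions of dollars.
films = [(20, 55), (50, 120), (80, 190), (120, 300), (150, 340)]

def fit_line(points):
    """Ordinary least squares for revenue = a * budget + b."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    a = (sum((x - mx) * (y - my) for x, y in points)
         / sum((x - mx) ** 2 for x, _ in points))
    b = my - a * mx
    return a, b

a, b = fit_line(films)

def predict_revenue(budget):
    """Score a candidate project by its projected box office."""
    return a * budget + b

# Score a hypothetical $100M project before greenlighting it.
print(f"predicted revenue for a $100M budget: ${predict_revenue(100):.0f}M")
```

Even this toy version makes the pitch clear: the model turns a subjective greenlighting call into a number that can be compared against the project's cost before any money is committed.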