dangerously
Changing the clocks makes people DRIVE more dangerously because it disrupts our sleep, study finds
Changing the clocks could have greater consequences than just missing your alarm, as a new study has found it makes us drive more dangerously. Researchers at the University of Padova in Italy and the University of Surrey have found that Daylight Saving Time (DST) disrupts our sleep-wake cycle. They tested the driving ability of 23 male Italian drivers before and after the introduction of springtime DST, and found they took more risks as a result of the change. Their reaction times and ability to read situations on the road were also compromised after losing the hour. This is thought to be the result of sleep deprivation and disturbances to their circadian rhythms - the internal process that regulates the sleep-wake cycle and other rhythmic functions.
- Europe > Italy (0.26)
- North America > United States > Illinois > Cook County > Chicago (0.05)
- Asia > Japan (0.05)
Cisco ICON Speaker Series: AI is living (dangerously) on the EDGE
Others can attend online over Webex. AI is slowly moving to IoT edge devices, pushing the limits and capacity of edge systems. While there is an increased effort to improve hardware design considerations such as power, latency, throughput, cost, reliability, bandwidth, privacy and security, with full support from economies and markets of scale, it still falls short of the demand. This talk will present efforts to build compilers that enable machine learning models to run on small form factors like microcontrollers, which are part of all sorts of household devices: think appliances, cars, and toys. In fact, around 30 billion microcontroller-powered devices are produced each year.
Military artificial intelligence can be easily and dangerously fooled
Last March, Chinese researchers announced an ingenious and potentially devastating attack against one of America's most prized technological assets--a Tesla electric car. The team, from the security lab of the Chinese tech giant Tencent, demonstrated several ways to fool the AI algorithms on Tesla's car. By subtly altering the data fed to the car's sensors, the researchers were able to bamboozle and bewilder the artificial intelligence that runs the vehicle. In one case, a TV screen contained a hidden pattern that tricked the windshield wipers into activating. In another, lane markings on the road were ever-so-slightly modified to confuse the autonomous driving system so that it drove over them and into the lane for oncoming traffic.
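The attack described above works by adding tiny, targeted perturbations to sensor inputs. The sketch below is not Tencent's actual method; it is a minimal illustration of the general idea (a fast-gradient-sign-style perturbation) on a hypothetical linear classifier, where the weights and input are made-up toy data.

```python
import numpy as np

# Hypothetical toy classifier: score = w . x; a positive score might mean
# "lane marking present". The weights and input here are random stand-ins.
rng = np.random.default_rng(0)
w = rng.normal(size=16)   # assumed learned weights
x = rng.normal(size=16)   # assumed sensor features (e.g. pixel statistics)

def score(v):
    return float(w @ v)

# Adversarial perturbation: move each feature a tiny amount (epsilon) in the
# direction that most decreases the score. For a linear model the gradient of
# the score with respect to x is just w, so we step along -sign(w).
epsilon = 0.05
x_adv = x - epsilon * np.sign(w)

# Each feature changes by at most epsilon, yet the score drops by
# epsilon * sum(|w|), which can be enough to flip the model's decision.
print(score(x), score(x_adv))
```

The per-feature change is imperceptibly small, but the effects accumulate across all features, which is why subtly altered lane markings can redirect a driving system.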
- North America > Canada > Ontario > Toronto (0.14)
- Asia > Russia (0.14)
- North America > United States > California (0.05)
- (5 more...)
- Information Technology (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)
- (3 more...)
Military artificial intelligence can be easily and dangerously fooled
Kanaan is generally very bullish about AI, partly because he knows firsthand how useful it stands to be for troops. Six years ago, as an Air Force intelligence officer in Afghanistan, he was responsible for deploying a new kind of intelligence-gathering tool: a hyperspectral imager. The instrument can spot objects that are normally hidden from view, like tanks draped in camouflage or emissions from an improvised bomb-making factory. Kanaan says the system helped US troops remove many thousands of pounds of explosives from the battlefield. Even so, it was often impractical for analysts to process the vast amounts of data collected by the imager.
Elon Musk, DeepMind founders, and others sign pledge to not develop lethal AI weapon systems
Tech leaders, including Elon Musk and the three co-founders of Google's AI subsidiary DeepMind, have signed a pledge promising to not develop "lethal autonomous weapons." It's the latest move from an unofficial and global coalition of researchers and executives that's opposed to the propagation of such technology. The pledge warns that weapon systems that use AI to "[select] and [engage] targets without human intervention" pose moral and pragmatic threats. Morally, the signatories argue, the decision to take a human life "should never be delegated to a machine." On the pragmatic front, they say that the spread of such weaponry would be "dangerously destabilizing for every country and individual."
After Math: The week of living dangerously
It was a chaotic week in the tech world, even before the YouTube HQ shooting. Apple's pushing its luck by pushing its Mac Pro release to next year, Russia's mail delivery drone barely got off the ground, and Scott Pruitt's EPA is doing its best to suffocate California in smog. Numbers, because how else will you know when yours is up? And it looks like the creatives that the Pro is designed for will have to wait just a little bit longer as the company announced this week that the promised revamp won't happen until 2019. Guess Nick Cage won't be headed to Mars anytime soon.
- Europe > Russia (0.26)
- Asia > Russia (0.26)
- North America > United States > California > San Francisco County > San Francisco (0.06)
- Law > Environmental Law (0.40)
- Government > Regional Government > North America Government > United States Government (0.40)
- Information Technology > Communications > Social Media (0.62)
- Information Technology > Artificial Intelligence > Robots (0.40)