Utah Jazz head coach Quin Snyder had heavy praise for reigning MVP James Harden ahead of Game 2. Snyder compared Harden to artificial intelligence on Tuesday and was asked to expand on that before Wednesday night's game. "The way he plays, there's an artistic nature to it… Obviously he's skilled, but I think the way he processes the game… He literally sees the whole court." Snyder added: "The feel that he has for different things on the court. He's able to put the ball in different locations that he wants, to manipulate spacing."
TEL AVIV, Israel, 16 April 2019 -- Israel's reams of electronic medical records (health data on its population of around 8.9 million people) are proving fruitful for a growing number of digital health startups training algorithms to detect diseases early and produce more accurate medical diagnoses. According to a new report by Start-Up Nation Central, the growth in the number of Israeli digital health startups (537 companies, up from 327 in 2014) has drawn in new investors, including Israeli VCs who had never previously invested in healthcare. This has driven financing in the sector to a record $511M in 2018, up 32% year on year. By the first quarter of 2019 the amount raised had already reached $214M. Of the $511M, over 50% ($285M) went to companies in decision support and diagnostics, which rely heavily on data crunching.
"Artificial intelligence" can be defined as the theory and development of computer systems able to perform tasks that normally require human intervention. Artificial intelligence (AI) is being used in new products and services across numerous industries and for a variety of policy-related purposes, raising questions about the resulting legal implications, including its effect on individual privacy. Aspects of AI related to privacy concerns are the ability of systems to make decisions and to learn by adjusting their code in response to inputs received over time, using large volumes of data. Following the European Commission's declaration on AI in April 2018, its High-Level Expert Group on Artificial Intelligence (AI HLEG) published Draft Ethics Guidelines for Trustworthy AI in December 2018. A consultation process regarding this working document concluded on February 1, 2019, and a revised draft of the document based on the comments that were received is expected to be delivered to the European Commission in April 2019.
Boston Dynamics has taught its fleet of SpotMini robot dogs a new trick: the robotics company posted a new video featuring ten of the mechanical canines hauling a cargo truck up a slightly inclined hill like a team of sled dogs. The video was shared to the company's popular YouTube page on Tuesday. In the video, two lines of SpotMinis are tethered together and marched in unison to inch a semi-truck in neutral gear forward. As with pretty much every video Boston Dynamics publishes, it elicited a flurry of Black Mirror, Terminator, and robot-overlord-themed references across social media. But Boston Dynamics promises that its famous SpotMinis are here to help humans, not to rule them.
Whenever we start to talk about artificial intelligence, machine learning, or deep learning, the cautionary tales from science fiction cinema arise: HAL 9000 from 2001: A Space Odyssey, the T-series robots from Terminator, the replicants from Blade Runner. There are hundreds of stories about computers learning too much and becoming a threat. These movies share a common premise: there are things that computers do well and things that humans do well, and they don't necessarily intersect. Computers are really good at crunching numbers and statistical analysis (deductive reasoning), and humans are really good at recognizing patterns and making inductive decisions using deductive data. Both have their strengths and their role. With the massive proliferation of data across platforms, types, and collection schedules, how are geospatial specialists supposed to address this apparently insurmountable task?
Science fiction didn't do a great job in preparing us for our first real encounters with AI. Most people probably still envision AI in the form of a sentient robot that can talk, move around, and experience feelings – something like WALL-E or C-3PO from the movies. Although that still may be the dream, it turns out that the current iteration of AI is actually quite different. With modern AI, all the "thinking" gets done in the cloud, and the algorithms aren't tied to the identity of a physical machine like we would have expected from the big screen. The modern iteration of AI works silently in the background without a face, and it's starting to impact everything it touches.
Microsoft has said it turned down a request from law enforcement in California to use its facial recognition technology in police body cameras and cars, reports Reuters. Speaking at an event at Stanford University, Microsoft president Brad Smith said the company was concerned that the technology would disproportionately affect women and minorities. Past research has shown that because facial recognition technology is trained primarily on white and male faces, it has higher error rates for other individuals. "Anytime they pulled anyone over, they wanted to run a face scan," said Smith of the unnamed law enforcement agency. "We said this technology is not your answer."
[Photo: Go player Lee Sedol (right) during the third game of the Google DeepMind Challenge Match against Google-developed supercomputer AlphaGo.] Leading Australian artificial intelligence scientist Professor Toby Walsh is warning that we are "sleepwalking" into an AI future in which billions of machines and computers will be able to think. Professor Walsh, from the University of New South Wales, is calling for a national discussion about whether society needs to adopt clear boundaries and guidelines around how AI is developed and how it's used in our lives. In his book It's Alive: Artificial Intelligence From The Logic Piano to Killer Robots, he has highlighted key questions in a series of predictions that describe how our future could be far better or far worse because of AI. Here's how he thinks society might change by 2050 thanks to artificial intelligence.
Almost 20 years ago, the medical and scientific communities were overjoyed. With the Human Genome Project finished, there was an air of inevitability that the causes of some of the most common and destructive diseases would soon be pinpointed and eradicated. It'd be simple: one gene, one problem, one solution. Francis Collins himself said at the time, "over the longer term, perhaps in another 15 or 20 years, you will see a complete transformation in therapeutic medicine." Unfortunately, it was never going to be that easy.