If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Meanwhile, China's surveillance firms continue to expand globally as China aims to be the world leader in artificial intelligence by 2030. Nadella said regulation "does have a real place here," particularly rules at the "time of use" of AI, like facial recognition. "I think we should be thinking a lot harder around regulation at the time of use. Because facial recognition or object recognition by itself is not good or bad; it is just a technology. So we have to be able to sort of even think about regulation more at the run time, more at the design time," Nadella said.
I've been talking in recent posts about how our typical methods of testing AI systems are inadequate and potentially unsafe. In particular, I've complained that all of the headline-grabbing papers so far only do controlled experiments, so we don't know how the AI systems will perform on real patients. Today I am going to highlight a piece of work that has not received much attention, but actually went "all the way" and tested an AI system in clinical practice, assessing clinical outcomes. They did an actual clinical trial! Big news … so why haven't you heard about it?
You may think that artificial intelligence (AI) will make doctors obsolete soon but that day is still far off. In fact, computers are not that intelligent just yet. Most computer solutions emerging in healthcare rely on algorithms written to analyse data and recommend treatments. They do not rely on computers thinking independently. The computers in question are fed with large amounts of known data and use rules or algorithms set by experts to extract information and apply it to a health issue or problem.
ABOUT A CENTURY ago, engineers created a new sort of space: the control room. Before then, things that needed control were controlled by people on the spot. But as district heating systems, railway networks, electric grids and the like grew more complex, it began to make sense to put the controls all in one place. Dials and light bulbs brought the way the world was working into the room. Levers, stopcocks, switches and buttons sent decisions back out. By the 1960s control rooms had become a powerful icon of the modern. At Mission Control in Houston, young men in horn-rimmed glasses and crewcuts sent commands to spacecraft heading for the Moon. In the space seen through television sets, travellers exploring strange new worlds did so within an iconic control room of their own: the bridge of Star Trek's USS Enterprise. A hexagonal room built in Santiago de Chile a decade later fitted right into the same philosophy--and aesthetic. It had an array of screens full of numbers and arrows. It was linked to a powerful computer. It had futuristic swivel chairs, complete with geometric buttons in the armrests to control the displays.
Fully self-driving cars are still a thing of the future. But in today's laboratories, the technology ranges from commonly used cruise control systems to so much automation that humans don't need to get into a car at all. In Taiwan, a startup is developing a driver's cockpit that's comfortable and packed with artificial intelligence features, and that transfers control of the vehicle to the computer whenever the system senses that the human driver is sick, tired, distracted or just sloppy. The 3-year-old Taipei-based Mindtronic AI developed this cockpit, called DMX, last year with luxuries like easy-to-use entertainment for the driver. But what if the driver gets mesmerized by a soccer match?
New technologies are poised to challenge assumptions that AI and robotics will be used to perform only low-level and highly repetitive tasks. Over the past decade, U.S. tech firms have made significant advancements in artificial intelligence and robotics, making it far easier and more efficient to automate tasks and functions across industries. Artificial intelligence (AI) affects all types of risks and lines of insurance, and the workers' compensation market has a particularly large stake in the developments. Although the U.S. has experienced technological change and disruption during prior periods of industrial revolution, the pace and scope of the Fourth Industrial Revolution position it to have a far greater impact on the U.S. and global economies. The recent advancements in AI and robotics are some of the most significant computer science advancements of our generation.
The Brookings Institution last week published a report from global economy expert Indermit Gill prophesying that the AI leader in 2030 will go on to rule the planet until at least 2100. The territories in the running include the US, China, and the European Union. Economists appear to have reached a general consensus that artificial intelligence is among the four great "general purpose technologies" to come along since the 1800s. Gill argues that AI, like steam power, electricity, and information systems technology, will directly impact the way business is conducted at the global scale by 2030. Technological leadership will require big digital investments, rapid business process innovation, and efficient tax and transfer systems.
"Countries that can harness the current wave of innovation, mitigate its potential disruptions, and capitalize on its transformative power will gain economic and military advantages over potential rivals," the report found. Leadership in innovation, research and technology since World War II has made the U.S. the most secure and economically prosperous nation in the world, the task force said, warning, "Today, this leadership position is at risk." Federal support and funding for R&D has stagnated over the past two decades, the report noted. "Washington has failed to maintain adequate levels of public support and funding for basic science. Federal investment in R&D as a percentage of GDP peaked at 1.86 percent in 1964 but has declined from a little over 1 percent in 1990 to 0.66 percent in 2016."
This past fall, diplomats from around the globe gathered in Geneva to do something about killer robots. In a result that surprised nobody, they failed. The formal debate over lethal autonomous weapons systems--machines that can select and fire at targets on their own--began in earnest about half a decade ago under the Convention on Certain Conventional Weapons, the international community's principal mechanism for banning systems and devices deemed too hellish for use in war. But despite yearly meetings, the CCW has yet to agree what "lethal autonomous weapons" even are, let alone set a blueprint for how to rein them in. Meanwhile, the technology is advancing ferociously; militaries aren't going to wait for delegates to pin down the exact meaning of slippery terms such as "meaningful human control" before sending advanced warbots to battle.
Landing in Shanghai recently, I found myself in the middle of a tech revolution remarkable in its sweep. The passport scanner automatically addresses visitors in their native tongues. Digital payment apps have replaced cash. Outsiders trying to use paper money get blank stares from store clerks. Nearby in the city of Hangzhou a prototype hotel called FlyZoo uses facial recognition to open doors, no keys required.