If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
In March the company bought a startup cofounded by Geoffrey Hinton, a University of Toronto computer science professor who was part of the team that won the Merck contest. Extending deep learning into applications beyond speech and image recognition will require more conceptual and software breakthroughs, not to mention many more advances in processing power. Programmers would train a neural network to detect an object or phoneme by blitzing the network with digitized versions of images containing those objects or sound waves containing those phonemes. A team led by Stanford computer science professor Andrew Ng and Google Fellow Jeff Dean showed the system images from 10 million randomly selected YouTube videos.
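The training recipe described above — repeatedly showing a network digitized, labeled examples until its predictions improve — can be sketched in miniature. The toy "images" (short lists of pixel intensities) and the single-neuron classifier below are illustrative assumptions, far simpler than the systems in the article:

```python
import math
import random

# Toy stand-in for "blitzing a network with digitized images":
# each "image" is a list of pixel intensities, labeled 1 if it
# contains the target object and 0 otherwise (illustrative data).
random.seed(0)
data = [([1.0, 0.9, 0.8], 1), ([0.1, 0.2, 0.0], 0),
        ([0.9, 1.0, 0.7], 1), ([0.0, 0.1, 0.3], 0)]

weights = [0.0, 0.0, 0.0]
bias = 0.0
lr = 0.5  # learning rate

def predict(pixels):
    """Probability that the target object is present (logistic unit)."""
    z = bias + sum(w * p for w, p in zip(weights, pixels))
    return 1.0 / (1.0 + math.exp(-z))

# Repeated exposure to the examples nudges the weights toward
# values that separate object-present from object-absent inputs.
for _ in range(200):
    for pixels, label in data:
        err = predict(pixels) - label  # prediction error drives the update
        for i, p in enumerate(pixels):
            weights[i] -= lr * err * p
        bias -= lr * err

print(round(predict([1.0, 0.9, 0.9])))  # bright input → 1
print(round(predict([0.1, 0.0, 0.2])))  # dark input → 0
```

Real systems use deep networks with millions of weights, but the loop — predict, compare to the label, adjust — is the same.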
In other words, he hopes the new chip and the new service will set Google's cloud business apart from services offered by its main rivals, including Amazon and Microsoft, the unnamed competitive threat underlying his I/O keynote. Between its two AI labs--Google Brain, based at company headquarters in Silicon Valley, and DeepMind, a London AI startup Google purchased a little more than three years ago--Google is leading the new wave of artificial intelligence research and development that is so rapidly changing entire industries and economies. But the company believes cloud computing--where computing power is rented over the internet to businesses and software developers--could one day bring in far more. Google built its new chip as a better way of serving its own AI services, most notably Google Translate, says Jeff Dean, the uber-engineer who oversees Google Brain, the company's main AI lab.
Amato, Christopher (University of New Hampshire) | Amir, Ofra (Harvard University) | Bryson, Joanna (University of Bath) | Grosz, Barbara (Harvard University) | Indurkhya, Bipin (Jagiellonian University) | Kiciman, Emre (Microsoft Research) | Kido, Takashi (Rikengenesis) | Lawless, W. F. (Paine College) | Liu, Miao (Massachusetts Institute of Technology) | McDorman, Braden (University of Southern California) | Mead, Ross (Semio) | Oliehoek, Frans A. (University of Amsterdam) | Specian, Andrew (University of Pennsylvania) | Stojanov, Georgi (American University in Paris) | Takadama, Keiki (University of Electro-Communications)
The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University's Department of Computer Science, presented the 2016 Spring Symposium Series on Monday through Wednesday, March 21-23, 2016, at Stanford University. The titles of the seven symposia were (1) AI and the Mitigation of Human Error: Anomalies, Team Metrics and Thermodynamics; (2) Challenges and Opportunities in Multiagent Learning for the Real World; (3) Enabling Computing Research in Socially Intelligent Human-Robot Interaction: A Community-Driven Modular Research Platform; (4) Ethical and Moral Considerations in Non-Human Agents; (5) Intelligent Systems for Supporting Distributed Human Teamwork; (6) Observational Studies through Social Media and Other Human-Generated Content; and (7) Well-Being Computing: AI Meets Health and Happiness Science.
It was in this same dingy diner in April 1993 that three young electrical engineers--Malachowsky, Curtis Priem and Nvidia's current CEO, Jen-Hsun Huang--started a company devoted to making specialized chips that would generate faster and more realistic graphics for video games. "We've been investing in a lot of startups applying deep learning to many areas, and every single one effectively comes in building on Nvidia's platform," says Marc Andreessen of venture capital firm Andreessen Horowitz. Starting in 2006, Nvidia released a programming tool kit called CUDA that allowed coders to easily program each individual pixel on a screen. From his bedroom, Krizhevsky had plugged 1.2 million images into a deep learning neural network powered by two Nvidia GeForce gaming cards.
Neural networks, deep learning, and reinforcement learning are all forms of machine learning, though they remain poorly understood outside the field. Each layer of a deep learning model lets the computer identify another level of abstraction of the same object. Reinforcement learning takes ideas from game theory and includes a mechanism that drives learning through rewards. Even researchers struggle to explain how these models reach their conclusions; they refer to this challenge as the black box problem of machine learning.
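The reward mechanism of reinforcement learning can be illustrated with a minimal tabular Q-learning sketch. The corridor environment, reward scheme, and parameters below are invented for illustration and do not come from any system described here:

```python
import random

# Hypothetical corridor: states 0..4, with a reward only at state 4.
# From rewards alone, the agent learns that moving right pays off.
N_STATES = 5
ACTIONS = [-1, +1]                     # move left / move right
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(500):  # episodes
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-update: nudge the estimate toward reward + discounted future value.
        best_next = max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)  # the learned policy moves right from every state: [1, 1, 1, 1]
```

No one tells the agent which moves are good; the reward signal alone shapes the value estimates — which is also why explaining any individual learned value is hard, the black box problem in miniature.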
The latest chip in the iPhone 7 has 3.3 billion transistors packed into a piece of silicon around the size of a small coin. But the trend toward smaller, increasingly powerful computers could be coming to an end. Silicon-based chips are rapidly reaching a point at which the laws of physics prevent them from getting any smaller. There are also important limitations to what silicon-based devices can do, which makes a strong argument for exploring other ways to power computers. Perhaps the best-known alternative researchers are looking at is the quantum computer, which manipulates the quantum-mechanical properties of matter in a fundamentally different way from traditional digital machines.
A team of researchers from Belgium think they are close to extending computing progress beyond the anticipated end of Moore's Law, and they didn't do it with a supercomputer. Using an artificial intelligence (AI) algorithm called reservoir computing, combined with another algorithm called backpropagation, the team developed a neuro-inspired analog computer that can train itself and improve at whatever task it's performing. Reservoir computing is a neural algorithm that mimics the brain's information-processing abilities. Backpropagation, on the other hand, allows the system to perform thousands of iterative calculations that reduce error, letting the system improve its solution to a problem. "Our work shows that the backpropagation algorithm can, under certain conditions, be implemented using the same hardware used for the analog computing, which could enhance the performance of these hardware systems," Piotr Antonik explains.
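Backpropagation's iterative, error-reducing calculations can be sketched with a two-weight network trained by the chain rule. This is plain gradient descent in software, not the analog hardware implementation the team describes; the target function and all constants are illustrative assumptions:

```python
import math

# Tiny network y = w2 * tanh(w1 * x), trained by backpropagation
# (the chain rule) to shrink squared error over many iterations.
points = [(-0.5, -1.0), (-0.25, -0.5), (0.25, 0.5), (0.5, 1.0)]  # target y = 2x
w1, w2, lr = 0.5, 0.5, 0.1

def loss():
    """Total squared error of the network on the sample points."""
    return sum((w2 * math.tanh(w1 * x) - y) ** 2 for x, y in points)

initial = loss()
for _ in range(2000):  # thousands of small error-reducing updates
    for x, y in points:
        h = math.tanh(w1 * x)  # forward pass through the hidden unit
        err = w2 * h - y       # prediction error at the output
        # Backward pass: chain rule gives each weight's share of the error.
        g2 = 2 * err * h
        g1 = 2 * err * w2 * (1 - h * h) * x
        w2 -= lr * g2
        w1 -= lr * g1

print(loss() < initial)  # the error shrinks as training proceeds → True
```

The researchers' contribution is running this kind of corrective loop on the same analog hardware that does the computing, rather than on a separate digital processor.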
This time next week, on Tuesday 11 October, Bletchley Park will see the launch of an initiative to celebrate women in maths and computing. A new branch of the existing Suffrage Science scheme, it will encourage women into science and support them in reaching senior leadership roles. Women make up no more than four in ten undergraduates studying maths (London Mathematical Society) and fewer than two in ten of those studying computer science (WISE report, 2014). Despite much effort, there has been little sign of improvement. In fact, the number of women studying computer science at the undergraduate level has been in decline since the 1980s.
Alibaba will be among 13 businesses working with the Hangzhou government on a 'brain' for the city and will work with the National Astronomical Observatory of China (NAOC) on deep space exploration projects, it announced at its annual Computing Conference this week. According to the retail and cloud computing giant, it will be supplying a range of its tech services, such as AI, deep learning and data storage. The B2B technology supply side of the Alibaba business is growing fast and puts it in direct competition with Amazon on a global playing field. The Hangzhou City Brain project is a new government initiative to address urban living issues such as traffic congestion. It will use Alibaba Cloud's AI program "ET" and big data analytics capabilities to perform real-time traffic prediction using its video and image recognition technologies.
A new report from the Office of Science and Technology Policy (OSTP) addresses the fast-growing field of artificial intelligence (AI), which is increasingly poised to reshape the way we live and work. Titled "Preparing for the Future of Artificial Intelligence," the report makes 23 policy recommendations on a number of topics concerned with the best way to harness the power of machine learning and algorithm-driven intelligence for the benefit of society. The OSTP position is that government has several roles to play in driving the direction of AI. Namely, "It should convene conversations about important issues and help to set the agenda for public debate. It should monitor the safety and fairness of applications as they develop, and adapt regulatory frameworks to encourage innovation while protecting the public."