If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence (AAAI) offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Though Stanford University professor Fei-Fei Li began her career during the most recent artificial intelligence (AI) winter, she's responsible for one of the insights that helped precipitate its thaw. By creating ImageNet, a hierarchically organized image database with more than 15 million images, she demonstrated the importance of rich datasets in developing algorithms, and launched the competition that eventually brought widespread attention to Geoffrey Hinton, Ilya Sutskever, and Alex Krizhevsky's work on deep convolutional neural networks. Today Li, who was recently named an ACM Fellow, directs the Stanford Artificial Intelligence Lab and the Stanford Vision and Learning Lab, where she works to build smart algorithms that enable computers and robots to see and think. Here, she talks about computer vision, neuroscience, and bringing more diversity to the field. Your bachelor's degree is in physics and your Ph.D. is in electrical engineering.
These are just a few ways the world's top researchers and industry leaders have described the threat that artificial intelligence poses to mankind. Will AI enhance our lives or completely upend them? There's no way around it: artificial intelligence is changing human civilization, from how we work to how we travel to how we enforce laws. As AI technology advances and seeps deeper into our daily lives, its potential to create dangerous situations is becoming more apparent. A Tesla Model 3 owner in California died while using the car's Autopilot feature. In Arizona, a self-driving Uber vehicle struck and killed a pedestrian, even though a safety driver was behind the wheel. Other instances have been more insidious. For example, when IBM's Watson was tasked with helping physicians diagnose cancer patients, it gave numerous "unsafe and incorrect treatment recommendations." Some of the world's top researchers and industry leaders believe these issues are just the tip of the iceberg. How might that redefine humanity's place in the world?
Research groups at KAIST, the University of Cambridge, Japan's National Institute of Information and Communications Technology, and Google DeepMind argue that our understanding of how humans make intelligent decisions has reached a critical point at which robot intelligence can be significantly enhanced by mimicking the strategies the human brain uses in everyday decision-making. In our rapidly changing world, both humans and autonomous robots constantly need to learn and adapt to new environments. The difference is that humans can make decisions suited to the situation at hand, whereas robots still rely on predetermined data to make theirs. Despite the rapid progress being made in strengthening the physical capability of robots, their central control systems, which govern how a robot decides what to do at any moment, are still inferior to those of humans. In particular, they often rely on pre-programmed instructions to direct their behavior, and lack the hallmark of human behavior: the flexibility and capacity to quickly learn and adapt.
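The contrast between a pre-programmed controller and one that learns from outcomes can be sketched with a toy epsilon-greedy bandit agent. This is only an illustration of the general idea of trial-and-error learning; the action set and reward probabilities below are invented and are not from the research described.

```python
import random

def epsilon_greedy_bandit(rewards, steps=1000, epsilon=0.1, seed=0):
    """Learn which of several actions pays best by trial and error.

    `rewards[a]` is the probability that action `a` yields a reward of 1.
    A fixed, pre-programmed controller would always pick one action;
    this agent instead estimates action values online and adapts.
    """
    rng = random.Random(seed)
    n = len(rewards)
    counts = [0] * n      # how often each action has been tried
    values = [0.0] * n    # running estimate of each action's payoff
    for _ in range(steps):
        if rng.random() < epsilon:                        # explore occasionally
            a = rng.randrange(n)
        else:                                             # otherwise exploit best estimate
            a = max(range(n), key=lambda i: values[i])
        r = 1.0 if rng.random() < rewards[a] else 0.0     # sample the environment
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]          # incremental mean update
    return values

# Hypothetical environment: three actions with unknown payoff rates.
values = epsilon_greedy_bandit([0.2, 0.8, 0.5])
```

After enough interaction, the agent's value estimates rank the middle action highest, even though nothing about that ranking was programmed in advance.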
The global construction industry has grown by only 1% per year over the past few decades, compared with a growth rate of 3.6% in manufacturing and 2.8% for the world economy as a whole. Productivity, or the total economic output per worker, has remained flat in construction; by comparison, productivity has grown 1,500% in retail, manufacturing, and agriculture since 1945. One reason is that construction is one of the most under-digitized industries in the world and is slow to adopt new technologies (McKinsey, 2017).
My area focuses on the specific applications of robotics to extreme and challenging environments: robots that handle nuclear waste, climb tall towers in the middle of the ocean, or survive thousands of meters underwater; in other words, not vacuuming the floor or serving you coffee. AI forms a part of this programme not because of the current hype around the technology, but simply because some of the latest developments in machine learning are so well suited to robotic challenges in unstructured environments. Let's look at examples of these latest techniques and the problems they are solving. One of the fields of computer vision that has been disrupted by AI in recent years is image classification.
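At its core, an image classifier maps pixels to a class score. A minimal sketch of that pipeline, with hypothetical hand-picked filters standing in for the learned ones a real network would use: convolve the image with a small filter bank, pool the responses into a feature vector, and score each class linearly. Modern deep networks learn the filters and weights from data; here they are fixed for illustration.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a grayscale image with a small kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def classify(image, kernels, weights):
    """Convolve with each kernel, global-max-pool, then score classes linearly."""
    features = np.array([conv2d(image, k).max() for k in kernels])
    scores = weights @ features
    return int(np.argmax(scores))

# Toy demo: tell vertical stripes from horizontal ones.
k_vertical = np.array([[-1.0, 1.0]])       # fires on left-to-right intensity edges
k_horizontal = np.array([[-1.0], [1.0]])   # fires on top-to-bottom intensity edges
stripes = np.tile([0.0, 1.0, 0.0, 1.0, 0.0, 1.0], (6, 1))  # vertical stripes
label = classify(stripes, [k_vertical, k_horizontal], np.eye(2))  # class 0 = vertical
```

The vertical-edge filter responds strongly to the striped image while the horizontal one does not, so the linear scorer picks class 0; a trained network does the same thing at vastly larger scale.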
Advances in artificial intelligence (AI) software and hardware are giving rise to a multitude of smart devices that can recognize and react to sights, sounds, and other patterns--and do not require a persistent connection to the cloud. These smart devices, from robots to cameras to medical devices, could well unlock greater efficiency and effectiveness at organizations that adopt them. In some industries, smart machines may well help expand existing markets, threaten incumbents, and shift the way revenue and profits are apportioned among industry players. Rapid strides in technology and the growing investment in AI innovation signal how fast AI deployment is moving. Advances in software and hardware are propelling AI outside of the data center into devices and machines we use in our work and our everyday lives.
This year the programme included a new stream, the Future of Work, and that was the one in which I was invited to speak. Before summarising what I presented, I'd like to share some of the ideas and takeaways I picked up about digital marketing and the impact of artificial intelligence (AI) and machine learning (ML). Most of us grew up with text communication, but Gen Z, those born after 1996, are more comfortable with voice. They are less formal but far more impatient than previous generations, and they expect Alexa, Siri, Cortana, and similar voice-activated personal assistants to be available whenever they have a question.
In 1950, Norbert Wiener's The Human Use of Human Beings was at the cutting edge of vision and speculation in its warning about our machines. But this was his book's denouement, and it has left us hanging for 68 years, lacking not only prescriptions and proscriptions but even a well-articulated "problem statement." We have since seen similar warnings about the threat of our machines, even in the form of outreach to the masses, via films like Colossus: The Forbin Project (1970), The Terminator (1984), The Matrix (1999), and Ex Machina (2015). But now the time is ripe for a major update with fresh, new perspectives, notably focused on generalizations of our "human" rights and our existential needs. Concern has tended to focus on "us versus them" (robots), "gray goo" (nanotech), or "monocultures of clones" (bio). To extrapolate current trends: what if we could make or grow almost anything and engineer any level of safety and efficacy desired?
According to some scientists, humans really do have a sixth sense. There's nothing supernatural about it: the sense of proprioception tells you about the relative positions of your limbs and the rest of your body. Close your eyes, block out all sound, and you can still use this internal "map" of your external body to locate your muscles and body parts – you have an innate sense of the distances between them, and the perception of how they're moving, above and beyond your sense of touch.