If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
The digital revolution is built on a foundation of invisible 1s and 0s called bits. As decades pass, and more and more of the world's information and knowledge morph into streams of 1s and 0s, the notion that computers prefer to "speak" in binary numbers is rarely questioned. According to new research from Columbia Engineering, this could be about to change. A new study from Mechanical Engineering Professor Hod Lipson and his PhD student Boyuan Chen suggests that artificial intelligence systems can actually reach higher levels of performance if they are programmed with sound files of human language rather than with numerical data labels. The researchers discovered that in a side-by-side comparison, a neural network whose "training labels" consisted of sound files reached higher levels of performance in identifying objects in images, compared to another network that had been programmed in a more traditional manner, using simple binary inputs.
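The contrast between the two labeling schemes can be caricatured in a few lines. The sketch below is purely illustrative: the class names, the tone-per-class stand-in for a spoken word, and every function name are my assumptions, not the authors' pipeline. It shows how a traditional one-hot target differs from a dense target vector derived from an audio waveform.

```python
import numpy as np

CLASSES = ["cat", "dog", "car"]

def one_hot_label(name):
    """Traditional target: a binary indicator vector, one hot bit per class."""
    v = np.zeros(len(CLASSES))
    v[CLASSES.index(name)] = 1.0
    return v

def audio_label(name, sr=8000, dur=0.5):
    """'Sound file' target: a dense feature vector derived from a waveform.
    A pure tone per class stands in for a recorded spoken word."""
    t = np.linspace(0, dur, int(sr * dur), endpoint=False)
    freq = 200.0 * (CLASSES.index(name) + 1)       # placeholder for speech
    wave = np.sin(2 * np.pi * freq * t)
    spectrum = np.abs(np.fft.rfft(wave))           # crude audio feature
    return spectrum / np.linalg.norm(spectrum)     # unit-norm embedding

# A network trained against audio_label targets regresses toward a rich,
# structured vector instead of a single hot bit.
print(one_hot_label("dog"))            # [0. 1. 0.]
print(audio_label("dog").shape)
```

The point of the comparison is that the audio-derived target carries internal structure (acoustically similar words yield similar vectors), whereas one-hot targets treat every pair of classes as equally unrelated.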
What does being self-aware mean? Do we have self-aware robots? Both of these are key questions in the field of artificial intelligence, and questions that will be covered in this article. I will also explain the difference between a robot and AI, what self-awareness is, and some examples of self-awareness in robots. Robotics and artificial intelligence (AI) are two separate fields of engineering; a robot is a machine, whereas an AI is a program.
"I want to meet, in my lifetime, an alien species," said Hod Lipson, a roboticist who runs the Creative Machines Lab at Columbia University. "I want to meet something that is intelligent and not human." But instead of waiting for such beings to arrive, Lipson wants to build them himself -- in the form of self-aware machines. To that end, Lipson openly confronts a slippery concept -- consciousness -- that often feels verboten among his colleagues. "We used to refer to consciousness as'the C-word' in robotics and AI circles, because we're not allowed to touch that topic," he said.
Columbia University professor and robotics engineer Hod Lipson knows the importance of artificial intelligence (AI) on a global level. "It permeates everything we do, from the stock market, from predicting the weather to what product you're going to buy," he said Wednesday during the second day of the virtual Ai4 2020 conference. AI falls into the category of an exponential technology, meaning it accelerates with time. Both biopharma and med-tech companies are increasingly pulling the technology into their business operations, working on programs that can assist in everything from drug discovery and clinical trial recruitment to precision diagnostics and patient compliance efforts. Computing power has doubled every 20 months or so for the past 120 years, Lipson said, moving from mechanical instruments to graphics processing units (GPUs) today.
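Lipson's figure is easy to sanity-check with back-of-the-envelope arithmetic: 120 years is 1,440 months, which at one doubling every 20 months gives 72 doublings.

```python
# Quick check of the exponential-growth claim as stated:
# one doubling every ~20 months, sustained for 120 years.
months = 120 * 12                # 1440 months
doublings = months / 20          # 72 doublings
growth = 2 ** doublings          # total growth factor
print(doublings, f"{growth:.2e}")
```

That works out to a growth factor of roughly 4.7 x 10^21, which is why the move from mechanical instruments to GPUs spans so many orders of magnitude.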
Artificial intelligence--the ability of a computer program to perform human tasks such as thinking and learning, sometimes referred to as machine learning--is changing classrooms in both K12 and higher ed. But its rapid rise has some questioning whether AI is just a fad that will eventually fade into obscurity or a force that will alter teaching and learning processes as we know them. Experts discussed the topic at a recent conference for future K12 educators held by the Teachers College at Columbia University, "Where Does Artificial Intelligence Fit in the Classroom?" Borhene Chakroun, director of the division for policies and lifelong learning systems at UNESCO, kicked off the event extolling the future of AI technology and its potential to "profoundly alter every aspect of the teaching and learning process." He also acknowledged the implications of AI and how it is altering how machines and humans work together.
Researchers at Columbia University say they've built a robot arm that can construct a self-image from scratch -- a capability they frame, provocatively, as a step toward machines that are truly self-aware. "This is perhaps what a newborn child does in its crib, as it learns what it is," said Hod Lipson, a professor of mechanical engineering who worked on the robot, in a press release. "We conjecture that this advantage may have also been the evolutionary origin of self-awareness in humans. While our robot's ability to imagine itself is still crude compared to humans, we believe that this ability is on the path to machine self-awareness." The robot arm, described in a new paper in the journal Science Robotics, learns how to operate by experimenting -- with no programming about physics, geometry or its own construction.
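The published method is far more sophisticated, but the core loop it describes (move randomly, record outcomes, fit a predictive self-model) can be sketched as a toy analogue. Everything below is an illustrative assumption rather than the authors' implementation: a two-link planar arm stands in for the robot, and a linear least-squares fit stands in for the learned self-image.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_arm(theta):
    """Ground-truth 2-link planar arm the robot does NOT know about."""
    x = np.cos(theta[0]) + np.cos(theta[0] + theta[1])
    y = np.sin(theta[0]) + np.sin(theta[0] + theta[1])
    return np.array([x, y])

# 1. "Babbling": try random joint angles, record where the tip ends up.
thetas = rng.uniform(-0.3, 0.3, size=(500, 2))
tips = np.array([true_arm(t) for t in thetas])

# 2. Fit a crude self-model (affine least squares around the rest pose).
A = np.hstack([thetas, np.ones((500, 1))])
model, *_ = np.linalg.lstsq(A, tips, rcond=None)

# 3. The learned self-image now predicts the outcome of untried motions.
probe = np.array([0.1, -0.2])
pred = np.array([probe[0], probe[1], 1.0]) @ model
err = np.linalg.norm(pred - true_arm(probe))
print(f"prediction error: {err:.3f}")    # small near the rest pose
```

The essential idea survives even in this caricature: the model is induced entirely from the robot's own motor experiments, with no physics, geometry, or kinematics supplied up front.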
Not many robotics companies can boast legions of fans online, but not many robotics companies make robots quite like Boston Dynamics. Each time the firm shares new footage of its machines, they cause a sensation. Whether it's a pack of robot dogs towing a truck or a human-like bot leaping nimbly up a set of boxes, Boston Dynamics' bots are uniquely thrilling. And when a parody video circulated last month showing a CGI "Bosstown Dynamics" robot turning on its creators, many mistook it for the real thing -- a testament to how far the company has pushed what seems technologically possible. But for all its engineering prowess, Boston Dynamics now faces its biggest challenge yet: turning its stable of robots into an actual business.
Taking a cue from biological cells, researchers from MIT, Columbia University, and elsewhere have developed computationally simple robots that connect in large groups to move around, transport objects, and complete other tasks. This so-called "particle robotics" system -- based on a project by MIT, Columbia Engineering, Cornell University, and Harvard University researchers -- comprises many individual disc-shaped units, which the researchers call "particles." The particles are loosely connected by magnets around their perimeters, and each unit can only do two things: expand and contract. That motion, when carefully timed, allows the individual particles to push and pull one another in coordinated movement. On-board sensors enable the cluster to gravitate toward light sources.
Researchers at Columbia Engineering and MIT Computer Science & Artificial Intelligence Lab (CSAIL) have engineered for the first time a particle robotic swarm with individual components that function as a whole. This kind of robot has never been demonstrated before. "You can think of our new robot as the proverbial 'Gray Goo,'" said Hod Lipson, professor of mechanical engineering at Columbia Engineering. "Our robot has no single point of failure and no centralized control. It's still fairly primitive, but now we know that this fundamental robot paradigm is actually possible."
A swarm of robots inspired by living cells can squeeze through gaps and keep moving even if many of its parts fail. Living cells gather together and collectively migrate under certain conditions, such as when inflammatory cells travel through the bloodstream to a wound site to help the healing process. To mimic this, Hod Lipson at Columbia University in New York and his colleagues created 25 disc-shaped robots that can join together. Each is equipped with cogs that cause the robot's outer shell to expand and contract and magnets around its perimeter that let it stick to neighbouring bots. Individually, the bots can't move, but once stuck together, the swarm can slither across a surface by making individual bots expand and contract at different times.
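The timing rule behind that slithering motion can be sketched in a few lines. The snippet below is an illustrative toy, not the published controller: each disc oscillates between a contracted and an expanded radius, and its phase is offset by its distance from a light source, so expansion sweeps across the cluster as a traveling wave.

```python
import numpy as np

# Hypothetical phase rule: discs farther from the light lag behind,
# so the expand/contract cycle propagates through the swarm.
light = np.array([10.0, 0.0])
positions = np.array([[x, 0.0] for x in range(5)], dtype=float)

def radius(pos, t, r0=1.0, amp=0.2, omega=2 * np.pi):
    """Radius of one disc at time t; phase set by distance to the light."""
    phase = np.linalg.norm(pos - light)
    return r0 + amp * np.sin(omega * t - phase)

# Snapshot at one instant: nearby discs are at different points of the
# cycle, which is what lets them push and pull one another.
radii = [radius(p, t=0.25) for p in positions]
print([round(r, 2) for r in radii])
```

Because no disc ever needs to know more than its own clock and the light's direction, the behavior degrades gracefully: losing individual units shifts the wave but does not stop it, which matches the fault tolerance the researchers report.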