If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
What will happen to a person's artificial intelligence (AI) when they retire? When a prospective employee interviews for a job, will their AI be questioned alongside them? Will companies hire AI straight from a factory, or will the system undergo a sort of apprenticeship before being put to work? More importantly, and more realistically in the near term: where is the line at which machines are not reliable enough, or not morally appropriate, to use, and humans must take over? These, along with many more immediate questions, are among the topics USGIF's Machine Learning & Artificial Intelligence Working Group seeks to generate discussion around.
With a focus on chips and artificial intelligence, U.K.-based Graphcore can now be considered one of Europe's hottest startups. Today, the company announced it has raised a $50 million round of funding led by Silicon Valley's Sequoia Capital, a firm not known for investing much in Europe. This follows the $60 million that Graphcore had already raised over the last 18 months. In a blog post, Graphcore cofounder Nigel Toon wrote that the company's partnership with Sequoia is an indication that it intends to remain independent as it seeks to compete in the surging AI chip market. "So over the last few weeks, Graphcore and Sequoia Capital have worked together on a scale-up business plan and on a funding plan which will allow us to grow more quickly and to support our prospective customers more deeply as we bring products to market," Toon wrote.
During the 2008 financial crisis, the banking industry realized that its machine learning algorithms were based on flawed assumptions. So financial system regulators decided that additional controls were needed, and regulatory requirements for "model risk" management on banks and insurers were introduced. Banks also had to prove that they understood the models they were using, so, regrettably but understandably, they deliberately limited the complexity of their technology, resorting to generalized linear models that offered simplicity and interpretability above all else. In the past several years, machine learning and AI have made enormous strides in accuracy. Yet regulated industries (like banking) remain hesitant, often prioritizing regulatory compliance and algorithm interpretability over accuracy and efficiency.
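To see why generalized linear models appealed to regulated banks, here is a minimal sketch: a logistic regression (a GLM) fit by plain gradient descent, whose entire fitted state is one coefficient per input. The feature names and data below are hypothetical, invented purely for illustration, not drawn from any real bank's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Hypothetical credit features: debt-to-income ratio, count of late payments
debt_ratio = rng.uniform(0.0, 1.0, n)
late_payments = rng.poisson(1.0, n).astype(float)
X = np.column_stack([np.ones(n), debt_ratio, late_payments])  # intercept + features

# Synthetic default labels generated from a known linear log-odds relationship
true_w = np.array([-2.5, 3.0, 0.8])
p = 1.0 / (1.0 + np.exp(-X @ true_w))
y = (rng.uniform(size=n) < p).astype(float)

# Fit the logistic regression by gradient descent on the mean log-loss
w = np.zeros(3)
for _ in range(5000):
    grad = X.T @ (1.0 / (1.0 + np.exp(-X @ w)) - y) / n
    w -= 0.5 * grad

# Interpretability: each coefficient is the change in log-odds of default
# per unit of that feature -- a form a bank can document for a regulator.
for name, coef in zip(["intercept", "debt_ratio", "late_payments"], w):
    print(f"{name}: {coef:+.2f} log-odds per unit")
```

The trade-off the excerpt describes is visible here: the whole model is three auditable numbers, whereas a deep network's millions of weights admit no such line-by-line reading.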
Artificial Intelligence techniques such as "deep learning" and "convolutional neural networks" have made stunning advancements in image recognition, self-driving cars, and other difficult tasks. Numerous AI companies have appeared to catch the wave of excitement as funding and acquisitions have accelerated. Yet, leading AI researchers realize something is not right. Despite the impressive progress, current AI techniques are limited. For example, deep learning networks typically require millions of training examples before they start working correctly, while a human can learn something new with just a few exposures.
If there is one technology that promises to change the world more than any other over the next several decades, it is arguably machine learning. By enabling computers to learn certain things more efficiently than humans and discover certain things that humans cannot, machine learning promises to bring increasing intelligence to software everywhere and enable computers to develop ever new capabilities – from driving cars to diagnosing disease – that were previously thought impossible. While most of the core algorithms that drive machine learning have been around for decades, what has magnified its promise so dramatically in recent years is the extraordinary growth of the two fuels that power these algorithms – data and computing power. Both continue to grow at exponential rates, suggesting that machine learning is at the beginning of a very long and productive run. As revolutionary as machine learning will be, its impact will be highly asymmetric.
Siri and Alexa may spend their days responding to requests for trivia and weather. Among these more general agents, though, it's a safe bet that Microsoft will steer its agent, Cortana, more toward a productivity focus at some point. And Will.i.am's tech venture i.am recently raised $117 million in support of its enterprise-focused voice agent Omega, which is set to focus initially on customer service.
AI is a term that gets bandied about a lot these days. It's the capability du jour, the follow-up hit to "big data." But what does it really mean? Luis Perez-Breva is a lecturer and research scientist at MIT's School of Engineering and the originator and lead instructor of the MIT Innovation Teams Program. He's the author of Innovating: A Doer's Manifesto for Starting from a Hunch, Prototyping Problems, Scaling Up, and Learning to Be Productively Wrong.
A trio of new investments in Silicon Valley machine-learning startups shows that the U.S. intelligence community is deeply interested in artificial intelligence. But China is investing even more in these kinds of U.S. companies, and that has experts and intelligence officials worried. Founded to foster new technology for spies, the 17-year-old In-Q-Tel has also helped boost commercial products. Compared to a venture capital firm, whose early-stage investments are intended to make some money and get out, the nonprofit's angle is longer term, less venture, more strategic, according to Charlie Greenbacker, In-Q-Tel's technical product leader in artificial intelligence, machine learning, natural language processing, analytics, and data science. "Our model is to put a little bit of pressure at the right spot to influence a company to make sure it develops things that are useful to our customers," said Greenbacker, who estimated that their investments in a given startup generally amount to about one of every 15 dollars the company has.
Opinion: just what exactly is Artificial Intelligence and why is it so important? The first official use of the term Artificial Intelligence (AI) was in the proposal for the 1956 Dartmouth Summer Research Project on Artificial Intelligence. That six-week workshop marked the birth of the field of study of AI, and the organisers - John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon - and conference attendees led the way for many years. At the beginning, the focus was on developing computational systems that had the capacity for the human abilities traditionally associated with intelligence. These included language use, mathematics, self-improvement on tasks through experience (learning) and planning (for example, in games such as chess).
The most complete set of AI ethics developed to date, the twenty-three Asilomar Principles, was created by the Future of Life Institute in early 2017 at their Asilomar Conference. Ninety percent or more of the attendees at the conference had to agree upon a principle for it to be accepted. Although all twenty-three principles are important, the research issues are especially time sensitive. That is because AI research is already well underway by hundreds, if not thousands of different groups. There is a current compelling need to have some general guidelines in place for this research.