"The field of Machine Learning seeks to answer these questions: How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes?"
– from The Discipline of Machine Learning by Tom Mitchell. CMU-ML-06-108, 2006.
Wherever artificial intelligence is deployed, you will find it has failed in some amusing way. Take the strange errors made by translation algorithms that confuse having someone for dinner with, well, having someone for dinner. But as AI is used in ever more critical situations, such as driving autonomous cars, making medical diagnoses, or drawing life-or-death conclusions from intelligence information, these failures will no longer be a laughing matter. That's why DARPA, the research arm of the US military, is addressing AI's most basic flaw: it has zero common sense. "Common sense is the dark matter of artificial intelligence," says Oren Etzioni, CEO of the Allen Institute for AI, a research nonprofit based in Seattle that is exploring the limits of the technology.
Data center-hosted artificial intelligence is rapidly proliferating in both government and commercial markets, and while it's an exciting time for AI, only a narrow set of applications is being addressed, primarily neural networks based on convolutional approaches. Other categories of AI include general AI, symbolic AI and bio-AI, and all three impose different processing demands and run distinctly different algorithms. Virtually all of today's commercial AI systems run neural network applications. But much more control-intensive and powerful AI workloads using symbolic AI, bio-AI and general AI algorithms are ill-suited to GPU/TPU architectures. Today, commercial and governmental entities that need AI solutions are using workarounds to achieve more compute power for their neural net applications, chief among them specialty processors like Google TPUs and NVIDIA GPUs, provisioned in data centers specifically for AI workloads.
Companies in all industries must stay up to date with the latest tech to survive in this digital world. This is especially true in the case of machine learning (ML), which has the potential to transform the way businesses process and use their data. While ML has a number of useful applications in the business world, applying it to business intelligence (BI) insights can help you optimize your processes and make even better decisions. Thirteen members of Forbes Technology Council shared some creative ways to combine business intelligence with machine learning to produce the best results for your company. One of the most distinctive ways to combine business intelligence and machine learning is the identification of fraud indicators.
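The fraud-indicator idea can be illustrated with a toy sketch: score each transaction in a BI dataset by how far it deviates from the historical norm. The data, threshold, and function name below are invented for illustration; real systems use far richer features than a single amount column.

```python
from statistics import mean, stdev

def flag_fraud_indicators(amounts, threshold=2.5):
    """Return indices of transactions whose amount deviates sharply
    from the historical norm.

    A transaction is flagged when its z-score (distance from the mean,
    measured in standard deviations) exceeds `threshold`.
    """
    mu = mean(amounts)
    sigma = stdev(amounts)
    return [i for i, a in enumerate(amounts)
            if sigma > 0 and abs(a - mu) / sigma > threshold]

# Mostly routine purchases, with one extreme outlier at index 5.
history = [42.0, 38.5, 45.0, 40.2, 39.9, 5000.0, 41.3, 43.7, 40.0, 44.1]
print(flag_fraud_indicators(history))  # → [5]
```

In practice the flagged indices would feed back into a BI dashboard as a "needs review" segment rather than triggering automatic action.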
Lately the fact-checking world has been in a bit of a crisis. Sites like Politifact and Snopes have traditionally focused on specific claims, which is admirable but tedious; by the time they've gotten through verifying or debunking a fact, there's a good chance it's already traveled across the globe and back again. Social media companies have also had mixed results limiting the spread of propaganda and misinformation. Facebook plans to have 20,000 human moderators by the end of the year, and is putting significant resources into developing its own fake-news-detecting algorithms. Researchers from MIT's Computer Science and Artificial Intelligence Lab (CSAIL) and the Qatar Computing Research Institute (QCRI) believe that the best approach is to focus not only on individual claims, but on the news sources themselves.
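The shift from checking individual claims to rating sources can be sketched in a few lines. This is not the CSAIL/QCRI model (which draws on many signals about a source); it simply shows how claim-level verdicts, once accumulated, yield a source-level reliability score. The data and names are invented for illustration.

```python
from collections import defaultdict

def score_sources(fact_checks):
    """Aggregate claim-level verdicts into a per-source reliability score.

    `fact_checks` is a list of (source, verdict) pairs, where verdict is
    True for a claim that checked out and False for a debunked one.
    Returns {source: fraction_of_claims_that_were_accurate}.
    """
    totals = defaultdict(int)
    accurate = defaultdict(int)
    for source, verdict in fact_checks:
        totals[source] += 1
        accurate[source] += verdict  # True counts as 1, False as 0
    return {s: accurate[s] / totals[s] for s in totals}

checks = [("siteA", True), ("siteA", True), ("siteA", False),
          ("siteB", False), ("siteB", False), ("siteB", True)]
scores = score_sources(checks)
print(scores)  # siteA: 2/3 of claims accurate; siteB: 1/3
```

The payoff is speed: once a source has a track record, a new claim from it can be triaged immediately instead of waiting for a full fact-check.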
A screen shows a demonstration of the cognitive level of facial recognition software at the Ericsson AB booth at the Mobile World Congress Shanghai in Shanghai, China, on Thursday, June 28, 2018. The global video surveillance market is expected to post a compound annual growth rate of close to 11% during the period 2018-2022, according to Technavio. Leveraging artificial intelligence (AI) in the physical security industry has both potential benefits and drawbacks, and the debate over the ethical ways to use AI and surveillance continues as more and more surveillance systems gain the brains to match what they see. AI startups like Boulder AI, which offers vision-as-a-service, and IC Realtime, which lets you search and analyze video feeds from CCTV systems, are gaining traction. Alongside Chinese facial recognition startups like Megvii, maker of Face++, with $600 million in private equity; SenseTime, with $620 million from a Series C; and Yitu Technology, with $300 million from a Series C, the potential uses of facial recognition technology are well funded.
Ask poverty attorney Joanna Green Brown for an example of a client who fell through the cracks and lost social services benefits they may have been eligible for because of a program driven by artificial intelligence (AI), and you will get an earful. There was the "highly educated and capable" client who had had heart failure and was on a heart and lung transplant wait list. The questions he was presented with in a Social Security benefits application "didn't encapsulate his issue," and his child subsequently did not receive benefits. "It's almost impossible for an AI system to anticipate issues related to the nuance of timing," Green Brown says. Then there's the client who had to apply for a Medicaid recertification but misread a question and received a denial a month later.
Today's artificial intelligence systems, including the artificial neural networks broadly inspired by the neurons and connections of the nervous system, perform wonderfully at tasks with known constraints. They also tend to require a lot of computational power and vast quantities of training data. That all serves to make them great at playing chess or Go, at detecting if there's a car in an image, at differentiating between depictions of cats and dogs. "But they are rather pathetic at composing music or writing short stories," said Konrad Kording, a computational neuroscientist at the University of Pennsylvania. "They have great trouble reasoning meaningfully in the world." Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.
Have you ever used your credit card at a new store or location only to have it declined? Has a sale ever been blocked because you charged a higher amount than usual? Consumers' credit cards are declined surprisingly often in legitimate transactions. One cause is that fraud-detecting technologies used by a consumer's bank have incorrectly flagged the sale as suspicious. Now MIT researchers have employed a new machine-learning technique to drastically reduce these false positives, saving banks money and easing customer frustration.
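The intuition behind reducing these false positives can be sketched with a toy example. This is not the MIT team's technique (their work involved automatically engineering many behavioral features per cardholder); it only illustrates why a per-customer baseline declines fewer legitimate charges than a one-size-fits-all rule. The data and names are invented.

```python
from statistics import mean, stdev

def personalized_flag(user_history, new_amount, k=3.0):
    """Flag a charge only if it falls far outside this cardholder's
    own spending pattern.

    A naive global rule (e.g. "flag anything over $500") would decline
    many legitimate purchases; comparing against the customer's own
    mean and spread avoids those false positives.
    """
    mu = mean(user_history)
    sigma = stdev(user_history) or 1.0  # guard against zero spread
    return abs(new_amount - mu) / sigma > k

# A frequent traveler who routinely spends $400-$800 per charge:
traveler = [450.0, 620.0, 510.0, 780.0, 590.0, 640.0]
print(personalized_flag(traveler, 700.0))   # within pattern, not flagged
print(personalized_flag(traveler, 9000.0))  # far outside, flagged
```

A $700 charge would trip a $500 global cap but is routine for this customer; the personalized check only fires on the genuinely anomalous $9,000 charge.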
Any system where humans interact with technology involves a tradeoff: security versus accessibility. The more secure the system, the more difficult it is to access. This poses a dilemma for any organization facing pressure to embrace anytime, anywhere accessibility, the mobile workplace and real-time interaction with customers and employees--and that describes almost every organization today. Advances in artificial intelligence (AI)--and the millions of data points created by the Internet of Things--are starting to change the nature of this tradeoff, particularly where trust is part of the product or service. As AI systems learn more, they can be trained to suggest next best actions, automate some repetitive tasks and minimize the greatest risk: human error.