Guide to AI: How Artificial Intelligence is Changing The Business World


Despite its role in early 20th-century fiction, AI has been part of the professional conversation for barely 70 years. The field was formally launched at a Dartmouth College conference in 1956, and the 1960s brought gains in machine translation and analysis. But AI then entered a "winter" that stretched from the 1970s through the early '90s. Researchers shelved their work largely because of the problem of "combinatorial explosion": a U.K. professor who described the issue warned that the sheer number of variables in real-world tasks would make AI useless outside of lab settings. In the early '70s, funders such as the U.S. Defense Advanced Research Projects Agency pulled their support, and failed research programs became the norm.

Interest in AI revived during the 1990s and early 2000s. Processing power increased, data sets grew massively, and algorithms finally had more "meat" on which to train. Advances in game theory and data modeling led to new approaches.

Today, best-in-class infrastructures can support 100,000 or more computers, and roughly two and a half quintillion bytes of data are generated every day. Globally, private firms spend tens of billions of dollars per year researching and improving AI initiatives; 2018's investment total was more than 50 percent larger than 2017's. Add it all up, and AI appears poised for a leap forward unlike any in its history.

Yet after decades of slow progress followed by rapid discoveries, few people outside the field feel they truly understand it. A recent Dell Technologies report found that 67 percent of leaders said their companies were struggling to implement AI, and a similar two out of three consumers don't even realize they're already using it, according to a HubSpot survey. As AI researcher Eliezer Yudkowsky has warned, "By far the greatest danger of artificial intelligence is that people conclude too early that they understand it."