If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Amazon Prime Day 2018 is here, with the 36-hour extravaganza of online deals expected to be the biggest shopping event in the retail giant's history. But consumer group Which? has warned that not all of the offers for TVs, laptops, cameras and other electronics represent the value they claim to. On Monday 16 July, discounts on a range of goods will appear on Amazon's website; however, some of the items may actually be cheaper outside the promotion period. Last year, Amazon Prime Day became the retailer's biggest-ever event, with more purchases than Black Friday and Cyber Monday. At its peak, Amazon customers reportedly ordered 398 items per second.
AI has moved a step closer to achieving human-like thought, after a new project developed machines capable of abstract thought to pass parts of an IQ test. Experts from DeepMind, which is owned by Google parent company Alphabet, put machine learning systems through their paces with IQ tests, which are designed to measure a number of reasoning skills. The puzzles in the test involve a series of seemingly random shapes, which participants need to study to determine the rules that dictate the pattern. Once they have worked out the rules of the puzzle, they should be able to accurately pick the next shape in the sequence. DeepMind researchers hope that developing AI which is capable of thinking outside the box could lead to machines dreaming up novel solutions to problems that humans may never have considered.
As the infrastructure world becomes saturated with progressively sophisticated digital technologies, public and private sector infrastructure leaders will be forced to adopt a new base of knowledge and a new set of skills. Many of these decision makers are accomplished engineers, but with mechanical, civil, structural or electrical backgrounds. Their expertise and experience remain valuable and relevant, but must today be augmented by perspectives from computer science and software engineering in order to meet the demands and expectations of today's citizen-consumers. These changes also mean officials will have to source new partners and vendors -- ones like Xaqt, Rapid Flow Technologies and Pluto AI that can supplement traditional infrastructural intelligence with digital intelligence.
Jean-François Puget, a data professional who holds the title of IBM distinguished engineer, delivered a keynote presentation at the Strata Data Conference in London recently. The theme of his talk was the consideration of humans when bringing in automated processes. His main example focused on the resistance people can have to new automated processes that work in ways they are not used to, but he also spoke of bias in artificial intelligence, saying it "can be a bit more complex." Puget gave an example that he had worked on with his team, a study on facial recognition. They found that the accuracy of the services varied wildly according to the race and gender of the subject.
Asia is fast becoming the home of the chatbot, as millions of consumers with billions of queries swamp traditional customer support and sales tools. Follow what's happening in Asia and you will see why companies both local and global should be chatbot-enabled. For every high-profile test launch in the west, there are a dozen full roll-outs of working chatbots to handle customer inquiries across banks, airlines, hotels and other markets. Asia needs chatbots now, while western business still tinkers with the possibilities. But getting ahead of the game could give your company a major competitive edge.
Standard human IQ tests often require test-takers to interpret perceptually simple visual scenes by applying principles that they have learned through everyday experience. For example, human test-takers may have already learned about 'progressions' (the notion that some attribute can increase) by watching plants or buildings grow, by studying addition in a mathematics class, or by tracking a bank balance as interest accrues. They can then apply this notion in the puzzles to infer that the number of shapes, their sizes, or even the intensity of their colour will increase along a sequence. We do not yet have the means to expose machine learning agents to a similar stream of 'everyday experiences', meaning we cannot easily measure their ability to transfer knowledge from the real world to visual reasoning tests. Nonetheless, we can create an experimental set-up that still puts human visual reasoning tests to good use.
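The 'progression' rule described above can be illustrated with a toy sketch. This is a hypothetical illustration, not DeepMind's actual model: it reduces each puzzle panel to a single numeric attribute (say, the number of shapes) and checks whether the attribute increases by a constant step, predicting the next panel's value if so.

```python
def infer_progression(values):
    """Return the common step if the attribute increases by a constant
    positive amount along the sequence, else None."""
    steps = [b - a for a, b in zip(values, values[1:])]
    if steps and steps[0] > 0 and all(s == steps[0] for s in steps):
        return steps[0]
    return None

def predict_next(values):
    """Predict the next panel's attribute value under a progression
    rule, if one holds for the observed panels."""
    step = infer_progression(values)
    return values[-1] + step if step is not None else None

# Panels containing 1, 2, then 3 shapes: a progression with step 1,
# so the predicted next panel contains 4 shapes.
print(predict_next([1, 2, 3]))  # -> 4

# No consistent increase, so no progression rule applies.
print(predict_next([3, 1, 2]))  # -> None
```

A human test-taker applies this kind of rule implicitly across many attributes at once (count, size, colour intensity); the point of the sketch is only to show how little machinery the abstract rule itself requires once the attribute has been extracted from the scene.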
Earlier this week we learned that worldwide smart speaker sales are expected to increase sixfold within the next couple of years. This mirrors multiple studies that say the majority of U.S. households will have a smart speaker by 2022, powered by current leading intelligent assistants Google Assistant and Alexa. At the same time, tech giants making intelligent assistants seem to want to have it both ways, selling products to both consumers and governments. For example, Microsoft, maker of Cortana, may be supplying facial recognition software to ICE, the government agency tasked with capturing and detaining immigrants who are in the United States illegally. As Amazon rolls out deep learning camera Lens and fashion assistant Echo Look, the company has drawn pleas from employees, the ACLU, and a number of other organizations to stop sharing its facial recognition software with law enforcement agencies.