"PREDICTION IS VERY difficult, especially if it's about the future," said Physics Nobel Laureate Niels Bohr. Bohr was presumably talking about the vagaries of quantum mechanical subatomic life, but the statement holds true at other scales too. Predicting the future is tough, and any good scientist knows enough to hedge his or her bets. That's what error bars are all about. It's why science usually proceeds methodically: hypotheses are formulated, experiments conducted, observations collated, and data evaluated.
Where does the European insurance industry stand on advanced analytics, AI, and automation? Are traditional methods of data analysis simply being relabeled as "machine learning"? Or is the industry further along than that: are real chatbots, for example, already ubiquitous? Let's take a closer look. I recently had the opportunity to attend the "Insurance AI and Analytics Europe" conference in London.
Historically, when new technologies become easier to use, they transform industries. That's what's happening with artificial intelligence and big data: as the barriers to implementation (cost, computing power, and so on) fall, more and more industries will put these technologies to use, and more and more startups will appear with new ideas for disrupting the status quo. By my reckoning, the AI revolution isn't coming; it's already here, and we'll see it first in a few key sectors. Most people agree that healthcare is broken, and many startups believe the biggest answer is putting power back in the hands of the patient. We're all carrying the equivalent of Star Trek's tricorder around in our pockets (or an early version, at any rate), and smartphones and other smart devices will continue to advance and integrate with AI and big data to let individuals self-diagnose.
Artificial intelligence, blockchain, cryptocurrencies: three terms you need to scatter through your conversation if you want to come across as a tech guru. On Tech Tent this week we examine these trends and ask a futurologist to predict which of them will make rapid progress over the next decade. This week saw another major achievement by Google's DeepMind, which showed that a neural network could learn to play Go in just three days, without even looking at how humans play this complex game. AlphaGo Zero took on the previous version of the program, which was developed with human expertise, and beat it by 100 games to nil. The company now hopes to apply the same technique in other areas, such as drug development.
Hollywood has made many big promises about artificial intelligence (AI): how it will destroy us, how it will save us, and how it will pass us butter. One of the less memorable promises is how cool it will look. There's a great example of amazing AI visualization in Avengers: Age of Ultron, when Tony Stark's AI butler Jarvis interacts with Ultron and we see an organic floating network of light morphing and pulsing. I wanted to make something similar to fill the blank space on my apartment wall (an improvement on the usual Ikea art). Obviously, I couldn't create anything as amazing as Jarvis as a floating orb of light; however, I could use a machine learning algorithm that lends itself to quirky data visualization: a neural network!
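As a minimal sketch of the plumbing behind such a visualization, the layer sizes and layout scheme below are my own invented example: compute coordinates for the neurons of a small feed-forward network and list the connections between them, which any plotting library (or wall-mounted LED driver) could then render.

```python
# Minimal sketch: lay out a small feed-forward network as nodes and
# edges, the raw material for a Jarvis-style visualization. The layer
# sizes are arbitrary; feed the coordinates to whatever drawing tool
# you like to actually render the network.

def layout_network(layer_sizes):
    """Return (nodes, edges): nodes maps (layer, index) -> (x, y)
    coordinates in the unit square; edges pairs up node keys."""
    nodes = {}
    for l, size in enumerate(layer_sizes):
        x = l / (len(layer_sizes) - 1)        # layers spread left to right
        for i in range(size):
            y = (i + 1) / (size + 1)          # neurons spread vertically
            nodes[(l, i)] = (x, y)
    edges = [((l, i), (l + 1, j))             # fully connect adjacent layers
             for l in range(len(layer_sizes) - 1)
             for i in range(layer_sizes[l])
             for j in range(layer_sizes[l + 1])]
    return nodes, edges

nodes, edges = layout_network([3, 5, 5, 2])   # input, two hidden, output
print(len(nodes), len(edges))                 # 15 nodes, 50 edges
```

Separating the layout from the rendering keeps the fun part flexible: the same node and edge data can drive a static plot or an animated, pulsing display.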
Artificial intelligence is no longer just a niche subfield of computer science. Tech giants have been using AI for years: machine learning algorithms power Amazon product recommendations, Google Maps, and the content that Facebook, Instagram, and Twitter display in social media feeds. But William Gibson's adage applies well to AI adoption: "The future is already here; it's just not evenly distributed." The average company faces many challenges in getting started with machine learning, including a shortage of data scientists. But just as important is a shortage of executives and nontechnical employees able to spot AI opportunities.
Even though predictive analytics has been around for quite some time, interest in the topic has increased over the last couple of years. It is no longer enough for a company to accurately record what has happened. Today, an organization's success depends on its ability to reliably predict what will happen, be it what a customer is likely to buy next, which asset will require maintenance, or the best next action in a business process. Predictive analytics uses (big) data, statistical algorithms, and machine learning techniques to estimate the likelihood of future outcomes from historical data, enabling both optimization and innovation. Existing processes can be improved, for example by forecasting sales and spikes in demand and making the required adjustments to production planning.
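The sales-forecasting example above can be sketched in its simplest form: fit a linear trend to historical figures with ordinary least squares and extrapolate one period ahead. The monthly sales numbers here are invented for illustration; real predictive analytics would bring in seasonality, more features, and proper validation.

```python
# Minimal sketch: predict next month's sales from a historical series
# by fitting a least-squares trend line. Sales figures are made up.

def fit_linear_trend(y):
    """Return (slope, intercept) of the least-squares line through
    the points (0, y[0]), (1, y[1]), ..."""
    n = len(y)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(y) / n
    cov = sum((x - x_mean) * (v - y_mean) for x, v in zip(xs, y))
    var = sum((x - x_mean) ** 2 for x in xs)
    slope = cov / var
    return slope, y_mean - slope * x_mean

def forecast(y, periods_ahead=1):
    """Extrapolate the fitted trend periods_ahead steps past the data."""
    slope, intercept = fit_linear_trend(y)
    return intercept + slope * (len(y) - 1 + periods_ahead)

sales = [100, 110, 125, 130, 145, 150]   # six months of (invented) unit sales
print(round(forecast(sales), 1))         # -> 162.7
```

A demand planner could compare such a forecast against capacity and adjust production ahead of the predicted spike, which is exactly the "optimization of existing processes" the paragraph describes.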
Recent studies by Google Brain have shown that any machine learning classifier can be tricked into giving incorrect predictions, and with a little skill, you can get it to give pretty much any result you want. This grows more worrisome as more and more systems are powered by artificial intelligence, many of them crucial to our safety and comfort. Until recently, safety concerns about AI revolved around ethics; today we are going to talk about more pressing and concrete issues. Machine learning algorithms accept input in the form of numeric vectors. Designing an input specifically to make a model produce the wrong result is called an adversarial attack.
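A toy version of the idea can be shown in a few lines, in the spirit of the fast gradient sign method: nudge each feature of an input vector slightly in the direction that increases the model's loss, and watch the prediction flip. The logistic classifier, its weights, and the input below are all invented for illustration; real attacks target trained neural networks.

```python
import math

# Minimal sketch of an adversarial attack on a hand-rolled logistic
# classifier. Weights and input are hypothetical, chosen so that a
# small per-feature nudge flips the predicted class.

W = [2.0, -3.0, 1.0]   # fixed "trained" weights (invented)
B = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    """Probability that input vector x belongs to class 1."""
    return sigmoid(sum(w * xi for w, xi in zip(W, x)) + B)

def adversarial(x, y_true, eps=0.25):
    """Fast-gradient-sign step: move each feature by eps in the
    direction that increases the logistic loss for label y_true."""
    p = predict(x)
    grad = [(p - y_true) * w for w in W]        # dLoss/dx for logistic loss
    signs = [1.0 if g > 0 else -1.0 for g in grad]
    return [xi + eps * s for xi, s in zip(x, signs)]

x = [0.5, 0.2, 0.1]                  # classified as class 1 (p ~ 0.62)
x_adv = adversarial(x, y_true=1)     # each feature nudged by only 0.25
print(predict(x) > 0.5, predict(x_adv) > 0.5)   # True False
```

The unsettling part, and the reason this matters for AI-powered safety-critical systems, is that the perturbation is small and systematic: no single feature changes much, yet the classifier's answer reverses.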
Machine learning algorithms work blindly toward the mathematical objective set by their designers, so it is vital that this objective encode the need to behave ethically. Such systems are exploding in popularity: companies use them to decide what news you see and whom you meet in online dating, and governments are starting to roll out machine learning to help deliver public services and to select individuals for audit.
Imagined artificial things that learn and act date back to classical times: many fictional stories and dramas have depicted artificial beings and their immense potential, alongside formal systems such as Ramon Llull's Ars Magna and, much later, Leibniz's calculus ratiocinator. The Church–Turing thesis, which holds that a machine can simulate any process of formal reasoning, underpinned the work of pioneers like Allen Newell, Herbert Simon, John McCarthy, Marvin Minsky, and Arthur Samuel.