Despite being used to make life-altering decisions from medical diagnoses to loan limits, the inner workings of various machine learning architectures – including deep learning, neural networks and probabilistic graphical models – are incredibly complex and increasingly opaque. Just as humans work to make sense of and explain their actions after the fact, a similar method could be adopted in AI, Norvig explained. "So we might end up being in the same place with machine learning where we train one system to get an answer and then we train another system to say – given the input of this first system, now it's your job to generate an explanation." Even so, Norvig added yesterday: "Explanations alone aren't enough, we need other ways of monitoring the decision making process."
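Norvig's idea of a second system that explains the first can be sketched in a few lines. The code below is an illustrative toy, not his actual proposal: the "black box" loan rule, the income/debt features, and the decision-stump explainer are all invented for this example.

```python
# Illustrative sketch (not Norvig's actual proposal): a "black box" model
# whose decisions we explain by training a second, simpler system on its
# input/output pairs -- here a one-feature threshold rule (a decision stump).

def black_box(income, debt):
    """Opaque first system: approves a loan by some internal rule."""
    return 1 if (0.7 * income - 1.3 * debt) > 10 else 0

def fit_stump(samples, labels, feature_index):
    """Second system: find the threshold on one feature that best
    reproduces the black box's decisions."""
    best_t, best_acc = None, -1.0
    for t in sorted(s[feature_index] for s in samples):
        preds = [1 if s[feature_index] > t else 0 for s in samples]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# Query the black box, then train the explainer on its answers.
samples = [(i, d) for i in range(0, 101, 5) for d in range(0, 51, 5)]
labels = [black_box(i, d) for i, d in samples]
threshold, fidelity = fit_stump(samples, labels, 0)
print(f"explanation: approve when income > {threshold} "
      f"(matches the black box {fidelity:.0%} of the time)")
```

The stump only looks at one feature, so its fidelity is below 100% — which is itself a useful signal that the explanation is a simplification, echoing the point that explanations alone aren't enough.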
He continues, "While farfetched at the time, big data and machine learning have come far enough in just four years to lend weight to Vinod's argument. With a trillion gigabytes of patient data collected from devices, EHRs, labs, and DNA sequencing, alongside surrounding factors such as weather, geo-location, and viral outbreaks taken into account, computers learn quickly, and they learn everything. Today, researchers in Europe are using 3-D printers and DNA sequencing to grow human body parts that could potentially replace missing limbs or ailing organs. While some of Aziz's ideas still make me squeamish, machine learning, virtual reality, the Human Genome Project, and the internet of things will undoubtedly impact our lives in the future."
The bots -- known as "dialog agents" -- were creating their own language -- well, kind of. Using machine learning algorithms, the dialog agents were left to converse freely in an attempt to strengthen their conversational skills. Over time, the bots began to deviate from the scripted norms and, in doing so, started communicating in an entirely new language -- one they created without human input. After learning to negotiate, the bots relied on machine learning and advanced strategies in an attempt to improve the outcome of these negotiations.
Simply put, machine learning uses algorithms to find patterns in data fed to it by humans. Traditional machine learning requires humans to provide context for data -- something called feature engineering -- so a machine can make better predictions. Deep learning, by contrast, works directly on raw video, speech or images. Traditional machine learning models can't make heads or tails of complex images, for example.
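Feature engineering is easiest to see in miniature. In this hedged sketch, the tiny "images", the two hand-chosen features, and the classification rule are all invented for illustration — the point is only that a human decides which summary numbers the model gets to see.

```python
# Sketch of "feature engineering": before a traditional model can use raw
# data, a human decides which summary numbers (features) to compute from it.
# The images, features and rule below are invented for illustration.

# Two tiny 3x3 "images": a vertical bar and a horizontal bar.
vertical   = [[0, 1, 0],
              [0, 1, 0],
              [0, 1, 0]]
horizontal = [[0, 0, 0],
              [1, 1, 1],
              [0, 0, 0]]

def engineer_features(img):
    """Human-provided context: how much ink the densest column/row holds."""
    col_ink = max(sum(img[r][c] for r in range(3)) for c in range(3))
    row_ink = max(sum(img[r][c] for c in range(3)) for r in range(3))
    return {"max_column_ink": col_ink, "max_row_ink": row_ink}

def classify(features):
    """A traditional model then only ever sees those engineered features."""
    if features["max_column_ink"] > features["max_row_ink"]:
        return "vertical"
    return "horizontal"

print(classify(engineer_features(vertical)))    # vertical
print(classify(engineer_features(horizontal)))  # horizontal
```

A deep learning model would skip `engineer_features` entirely and learn its own internal representation from the raw pixel grids — which is exactly why it copes with complex images that defeat hand-crafted features.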
In other words, infected people test positive 99 per cent of the time and healthy people test negative 99 per cent of the time. We also need a figure for the prevalence of the infection in the population; if we don't know it, we can start by guessing that half of the population is infected and half is healthy. On that assumption, a positive result seems to imply a 99 per cent chance of infection. But this line of reasoning ignores the fact that 1 per cent of the healthy people will test positive and, as the proportion of healthy people increases, the number of those healthy people who test positive begins to overwhelm those who are infected and also test positive. In slightly more formal terms, we would say that the number of false positives (healthy people being misdiagnosed) begins to overwhelm the true positives (infected people testing positive).
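The argument above can be checked with a few lines of arithmetic, applying Bayes' theorem to the stated 99 per cent figures at several assumed prevalence levels:

```python
# Worked version of the argument above: a test with 99% sensitivity and
# 99% specificity, evaluated at several assumed prevalence levels.

def positive_predictive_value(prevalence, sensitivity=0.99, specificity=0.99):
    """P(infected | positive test) via Bayes' theorem."""
    true_pos = prevalence * sensitivity          # infected and test positive
    false_pos = (1 - prevalence) * (1 - specificity)  # healthy, test positive
    return true_pos / (true_pos + false_pos)

for prevalence in (0.5, 0.1, 0.01, 0.001):
    ppv = positive_predictive_value(prevalence)
    print(f"prevalence {prevalence:>6.1%}: a positive result means "
          f"infection with probability {ppv:.1%}")
```

At 50 per cent prevalence a positive result really does mean a 99 per cent chance of infection, but at 1 per cent prevalence it is a coin flip, and at 0.1 per cent it falls below 10 per cent — the false positives have overwhelmed the true positives, exactly as the passage argues.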
When quizzed about the problem-solving that Google uses ML to enhance, Pande said that Google's large pool of user data allows it to provide answers to text, voice, speech and translation-related issues posed by users. ML applications currently in the works can also read text and detect the tone of what is being written. For instance, they can figure out whether a user is congratulating someone or complaining about something, and act accordingly. Recently, Google has given us real-world examples of ML put to work, with products like Suggested Sharing and Photo Books that use it to select the best of your photos and make a photo album for you.
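The congratulating-versus-complaining distinction can be illustrated with a toy classifier. This is emphatically not Google's model — the cue words and scoring rule below are invented — but it shows the kind of lexical signal a trained ML system would pick up automatically from labelled examples:

```python
# Toy illustration of tone detection (not Google's actual model): score a
# message as a congratulation or a complaint from invented cue words.

CONGRATS_CUES = {"congratulations", "congrats", "well", "done", "amazing"}
COMPLAINT_CUES = {"broken", "refund", "terrible", "disappointed", "worst"}

def detect_tone(message):
    """Return 'congratulating', 'complaining' or 'neutral' by cue counts."""
    cleaned = message.lower().replace("!", "").replace(".", "").replace(",", "")
    words = set(cleaned.split())
    congrats = len(words & CONGRATS_CUES)
    complaints = len(words & COMPLAINT_CUES)
    if congrats > complaints:
        return "congratulating"
    if complaints > congrats:
        return "complaining"
    return "neutral"

print(detect_tone("Congrats on the launch, amazing work!"))    # congratulating
print(detect_tone("My order arrived broken, I want a refund"))  # complaining
```

A real system would learn these cues (and far subtler ones) from data rather than from a hand-written list — which is precisely the feature-engineering-versus-learning trade-off discussed earlier.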
Uber and others--Google and Tesla and the auto companies--have invested a lot of money in developing technology for self-driving cars because technologists believe the technology will eventually become so pervasive that most of us won't drive cars ourselves anymore. If you're the company that controls that technology, then you could in theory control the transportation network that runs on it. That's why Uber is investing a lot of money in its own self-driving car technology, and why General Motors is investing a lot of money too.
One of China's largest state-owned lenders has set up a fintech lab with one of the world's largest internet and gaming companies. Bank of China (BOC) and Tencent have established a joint financial technology laboratory, the lender said in a statement this week. The lab will work on cloud computing, big data, blockchain and artificial intelligence to promote financial innovation.
We should all be blessed with friends like my pal Stafford Masie. Stafford is deeply plugged into the tech world, a great advantage for his friends as he willingly helps the rest of us understand the big trends shaping the world. Stafford's most recent venture, the multibillion mobile payments product Thumbzup, is a huge success, with its Absa relationship now expanded to include clients like Mr Price and even Uber. The next big global winner, Stafford reckons, will be the company that becomes the gorilla in artificial intelligence.