Technology


Introduction to Deep Learning

#artificialintelligence

Machine learning explores the study and construction of algorithms that can learn from and make predictions on data by building a model from sample data sets provided during a "training" period. In a supervised training period, a human feeds the data set to the computer along with the correct answers, and the algorithms must build a model identifying why each correct answer is indeed the correct answer. In an unsupervised training period, only the data set is provided to the computer, which must discover for itself both the correct answers and how to arrive at them.
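The supervised/unsupervised distinction can be made concrete with a toy sketch. Below, a supervised nearest-neighbour classifier is handed labelled examples and only has to generalise from them, while an unsupervised 2-means clustering (Lloyd's algorithm) receives the same kind of data with no labels and must discover the grouping itself. The data and function names are illustrative, not from the article.

```python
def nearest_neighbour(train, x):
    """Supervised: labels are provided; predict the label of the closest example."""
    value, label = min(train, key=lambda pair: abs(pair[0] - x))
    return label

def two_means(data, iterations=10):
    """Unsupervised: no labels; discover two groups by alternating assignment
    and centre updates (Lloyd's algorithm for k = 2)."""
    low, high = min(data), max(data)          # initialise centres at the extremes
    for _ in range(iterations):
        a = [x for x in data if abs(x - low) <= abs(x - high)]
        b = [x for x in data if abs(x - low) > abs(x - high)]
        low, high = sum(a) / len(a), sum(b) / len(b)
    return [0 if abs(x - low) <= abs(x - high) else 1 for x in data]

# Supervised: examples come with answers attached.
train = [(1.0, "cat"), (1.2, "cat"), (5.0, "dog"), (5.3, "dog")]
print(nearest_neighbour(train, 1.1))   # → cat

# Unsupervised: same kind of data, no answers given.
data = [1.0, 1.1, 0.9, 8.0, 8.2, 7.9]
print(two_means(data))                 # → [0, 0, 0, 1, 1, 1]
```

The classifier can only ever reproduce the labels a human supplied; the clustering invents its own grouping, which is why unsupervised results still need human interpretation.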


Disney Is Developing an AI That Can Judge What Makes For a Truly Great Story

#artificialintelligence

The test was to see if the neural networks, once trained, could take a new story from the database and predict how many Quora upvotes it got – a sign that the AI understood what makes a story popular and what doesn't. In the team's experiments, the neural networks proved better at judging stories than traditional machine-learning techniques, with the network that judged each story as a whole registering an 18 percent improvement. Either way, it's another example of the way advanced AI techniques like neural networks – which are capable of accepting and processing vast amounts of data in a layered way – can potentially make systems that are more human-like over time. "The ability to predict narrative quality impacts on both story creation and story understanding," says Disney Research vice president Markus Gross, who wasn't directly involved in the research.


Amazon Has Developed an AI Fashion Designer

MIT Technology Review

The effort points to ways in which Amazon and other companies could try to improve the tracking of trends in other areas of retail--making recommendations based on products popping up in social-media posts, for instance. One group of Amazon researchers based in Israel developed machine learning that, by analyzing just a few labels attached to images, can deduce whether a particular look can be considered stylish. An Amazon team at Lab126, a research center based in San Francisco, has developed an algorithm that learns about a particular style of fashion from images, and can then generate new items in similar styles from scratch--essentially, a simple AI fashion designer. The event included mostly academic researchers who are exploring ways for machines to understand fashion trends.


Understanding overfitting: an inaccurate meme in Machine Learning

@machinelearnbot

Applying cross-validation prevents overfitting, and good out-of-sample performance (low generalisation error on unseen data) indicates a model is not overfit. Aim: in this post, we will give an intuition for why model validation, meaning approximating the generalisation error of a model fit, and detection of overfitting cannot be resolved simultaneously on a single model. Let's use the following functional form, from the classic text of Bishop, but with added Gaussian noise. We generate a large enough set, 100 points, to avoid the sample-size issue discussed in Bishop's book; see Figure 2. Overtraining is not overfitting: overtraining means a model's performance degrades as it learns parameters against an objective variable that affects how the model is built, for example the training-data size or the iteration cycle in a neural network.
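The tension the post describes can be reproduced with Bishop's own toy problem, a sinusoid plus Gaussian noise: a model flexible enough to drive training error to zero tells us nothing by itself about its generalisation error, which only held-out data can estimate. A minimal sketch, assuming NumPy is available; the degrees and sample sizes here are illustrative, not the post's exact setup.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

def make_set(n):
    """Bishop's toy data: y = sin(2*pi*x) plus Gaussian noise."""
    x = np.linspace(0.0, 1.0, n)
    y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, n)
    return x, y

def rmse(p, x, y):
    return float(np.sqrt(np.mean((p(x) - y) ** 2)))

x_train, y_train = make_set(10)    # small training sample, as in Bishop's example
x_test, y_test = make_set(100)     # held-out draw approximates generalisation error

low = Polynomial.fit(x_train, y_train, 3)   # flexible enough for one sine period
high = Polynomial.fit(x_train, y_train, 9)  # 10 coefficients, 10 points: interpolates

# The degree-9 fit drives the training error to (near) zero, yet its error on
# the held-out set stays bounded away from zero by the noise alone: low
# training error does not certify low generalisation error.
print("degree 3: train %.3f  test %.3f" % (rmse(low, x_train, y_train),
                                           rmse(low, x_test, y_test)))
print("degree 9: train %.3f  test %.3f" % (rmse(high, x_train, y_train),
                                           rmse(high, x_test, y_test)))
```

Only the comparison on data the fit never saw reveals which model is the better approximation of the underlying sinusoid.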


Emotion AI Will Personalize Interactions

#artificialintelligence

Artificial intelligence (AI) and affective computing are starting to make this possible. Devices enriched with AI, depth-sensing and neurolinguistic-programming technologies are starting to process, analyze and respond to human emotions. Today's systems use the technological approaches of natural-language processing and natural-language understanding, but they don't currently perceive human emotions. Artificial emotional intelligence ("emotion AI") will change that.


We want to democratise artificial intelligence: Google exec Fei-Fei Li - ETtech

#artificialintelligence

"Google, a pioneer in AI, has been focusing on four key components--computing, algorithms, data and expertise--to organise all the data and make it accessible. Google as a company has always been at the forefront of computing AI," Fei-Fei Li, Chief Scientist of Google Cloud AI and ML, told reporters during a press briefing. Earlier this year, Google announced the second-generation Tensor Processing Units (TPUs) (now called the Cloud TPU) at the annual Google I/O event in the US. The company offers computing power including graphics processing units (GPUs), central processing units (CPUs) and tensor processing units (TPUs) to power machine learning.


What is Deep Learning and how does it work?

#artificialintelligence

One of these is neural networks – the algorithms that underpin deep learning and play a central part in image recognition and robotic vision. Inspired by the nerve cells (neurons) that make up the human brain, neural networks comprise layers of nodes (neurons), with each node connected to nodes in the adjacent layers. To teach a network to recognise cats, we need to compile a training set of images – thousands of examples of cat faces, which we (humans) label "cat", and pictures of objects that aren't cats, labelled (you guessed it) "not cat". In 2001, Paul Viola and Michael Jones from Mitsubishi Electric Research Laboratories, in the US, used a machine learning algorithm called adaptive boosting, or AdaBoost, to detect faces in an image in real time.
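The AdaBoost idea mentioned above can be sketched in a few lines: repeatedly fit a "weak" one-threshold classifier (a decision stump) to weighted examples, upweight the examples it gets wrong, and combine the stumps by weighted vote. This is a minimal 1-D toy illustration of the algorithm, not Viola and Jones's face detector, which boosts stumps over image features.

```python
import math

def stumps(xs):
    """Candidate weak learners: a threshold between each pair of sorted values,
    with both polarities. A stump (t, s) predicts s if x > t, else -s."""
    ts = sorted(set(xs))
    mids = [(a + b) / 2 for a, b in zip(ts, ts[1:])]
    return [(t, s) for t in mids for s in (+1, -1)]

def predict_stump(t, s, x):
    return s if x > t else -s

def adaboost(xs, ys, rounds):
    n = len(xs)
    w = [1.0 / n] * n                      # start with uniform example weights
    ensemble = []                          # list of (alpha, threshold, polarity)
    for _ in range(rounds):
        # pick the stump with the lowest weighted error on the current weights
        err, (t, s) = min(
            (sum(wi for wi, x, y in zip(w, xs, ys)
                 if predict_stump(t, s, x) != y), (t, s))
            for t, s in stumps(xs)
        )
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-12))
        ensemble.append((alpha, t, s))
        # upweight the examples this stump got wrong, then renormalise
        w = [wi * math.exp(-alpha * y * predict_stump(t, s, x))
             for wi, x, y in zip(w, xs, ys)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * predict_stump(t, s, x) for a, t, s in ensemble)
    return 1 if score > 0 else -1

# A labelling no single stump can get right, but a boosted vote of stumps can.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [-1, +1, -1, +1]
model = adaboost(xs, ys, rounds=5)
print([predict(model, x) for x in xs])   # → [-1, 1, -1, 1]
```

Each stump alone misclassifies at least one point here; the reweighting forces later stumps to concentrate on the hard examples, and the weighted vote recovers the full pattern.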


Using machine learning to improve patient care 7wData

#artificialintelligence

"The system could potentially be an aid for doctors in the ICU, which is a high-stress, high-demand environment," says PhD student Harini Suresh, lead author on the paper about ICU Intervene. "The goal is to leverage data from medical records to improve health care and predict actionable interventions." Another team developed an approach called "EHR Model Transfer" that can facilitate the application of predictive models to an electronic health record (EHR) system even when they were trained on data from a different EHR system. "Much of the previous work in clinical decision-making has focused on outcomes such as mortality (likelihood of death), while this work predicts actionable treatments," Suresh says.


Flipboard on Flipboard

#artificialintelligence

Big-data startup Databricks has raised another $140 million in venture funding, it announced on Tuesday, bringing the total raised by the four-year-old company to $247 million. With the new money, it is working on a "Slack for AI" that addresses the shortage of machine-learning and AI scientists. And because Databricks found an untapped niche in big data and AI, it quickly generated revenue, which led to investment, which led to growth, which led to happy cofounders.