"The field of Machine Learning seeks to answer these questions: How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes?"
– from The Discipline of Machine Learning by Tom Mitchell. CMU-ML-06-108, 2006.
As Charles Darwin wrote at the end of his seminal 1859 book On the Origin of Species, "whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved." Scientists have long believed that the diversity and range of forms of life on Earth provide evidence that biological evolution spontaneously innovates in an open-ended way, constantly inventing new things. However, attempts to construct artificial simulations of evolutionary systems tend to run into limits on the complexity and novelty they can produce. This is sometimes referred to as "the problem of open-endedness." Because of this difficulty, scientists have so far been unable to build artificial systems that exhibit the richness and diversity of biological systems.
Machine Learning, Data Science, and Predictive Analytics techniques are in strong demand. That's why, since its launch, IBM Watson Studio has proven very popular with academia. Thousands of students and faculty have been drawn to Watson Studio for its powerful open source and code-free data analysis tools. Now Watson Studio Desktop, with unlimited compute, is available free to students and faculty for teaching and learning purposes via a one-year subscription.
MIT's Computer Science and Artificial Intelligence Lab has developed a new deep learning-based AI prediction model that can anticipate the development of breast cancer up to five years in advance. Researchers working on the project also recognized that other similar efforts have often carried inherent bias because they were based overwhelmingly on white patient populations, so they specifically designed their own model to be informed by "more equitable" data that ensures it's "equally accurate for white and black women." That's key, MIT notes in a blog post, because black women are more than 42 percent more likely than white women to die from breast cancer, and one contributing factor could be that they aren't as well served by current early detection techniques. MIT says its work on this technique was aimed specifically at making health-risk assessments of this kind more accurate for minorities, who are often underrepresented in the development of deep learning models. Algorithmic bias is the focus of much industry research, as well as of forthcoming products from technology companies working on deploying AI in the field.
"We can run these simulations in a few milliseconds, while other 'fast' simulations take a couple of minutes," says study co-author Shirley Ho, a group leader at the Flatiron Institute's Center for Computational Astrophysics in New York City and an adjunct professor at Carnegie Mellon University. The speed and accuracy of the project, called the Deep Density Displacement Model, or D3M for short, wasn't the biggest surprise to the researchers. The real shock was that D3M could accurately simulate how the universe would look if certain parameters were tweaked -- such as how much of the cosmos is dark matter -- even though the model had never received any training data where those parameters varied. "It's like teaching image recognition software with lots of pictures of cats and dogs, but then it's able to recognize elephants," Ho explains. "Nobody knows how it does this, and it's a great mystery to be solved."
While today's deep learning systems are able to natively analyze video, the large file sizes of high-resolution movies present unique challenges in terms of storage space and computational requirements. Sampling them into sequences of still images not only allows for real-time processing of unlimited-length videos but opens the door for creative new applications like "video ngrams." The most straightforward way to sample a video into a sequence of still images is to use a fixed-rate, time-based mechanism such as one frame per second. This kind of sampling is supported natively by most tools like ffmpeg and provides a simple and robust workflow. At the same time, it is highly inefficient, especially for videos where there is a lot of repetition.
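As a rough sketch of the fixed-rate approach described above (the extraction command and function names here are illustrative assumptions, not from the article), sampling at one frame per second just means selecting frames at evenly spaced timestamps:

```python
# Illustrative sketch of fixed-rate, time-based video sampling at one
# frame per second. With ffmpeg installed, the extraction itself could
# look like:
#   ffmpeg -i input.mp4 -vf fps=1 frame_%05d.jpg
# (command shown for illustration only; parameters are assumptions).

def fixed_rate_timestamps(duration_s: float, rate_hz: float = 1.0) -> list[float]:
    """Return the timestamps (in seconds) at which frames are sampled."""
    step = 1.0 / rate_hz
    times = []
    t = 0.0
    while t < duration_s:
        times.append(round(t, 6))
        t += step
    return times

# A 5-second clip sampled at 1 fps yields five still images.
print(fixed_rate_timestamps(5.0))  # [0.0, 1.0, 2.0, 3.0, 4.0]
```

The inefficiency the article mentions is visible here: the sampler takes a frame every second regardless of whether anything in the scene has changed.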
A team of MIT researchers is making it easier for novices to get their feet wet with artificial intelligence, while also helping experts advance the field. In a paper presented at the Programming Language Design and Implementation conference this week, the researchers describe a novel probabilistic-programming system named "Gen." Users write models and algorithms from multiple fields where AI techniques are applied -- such as computer vision, robotics, and statistics -- without having to deal with equations or manually write high-performance code. Gen also lets expert researchers write sophisticated models and inference algorithms -- used for prediction tasks -- that were previously infeasible. In their paper, for instance, the researchers demonstrate that a short Gen program can infer 3-D body poses, a difficult computer-vision inference task that has applications in autonomous systems, human-machine interactions, and augmented reality.
Dreams are, for the most part, delightful. As we sleep, visual and audio fragments combine into nonsensical snippets and epic narratives. Loosely recalled moments merge with vivid, imagined scenes; we interact with characters known and characters conjured up; we explore our fantasies and, sometimes, face our fears. Yet sleeping and dreaming do not exist for our nocturnal pleasure alone. As we slumber, our brains filter information collected in waking hours.
The unprecedented explosion in the amount of information we are generating and collecting, thanks to the arrival of the internet and the always-online society, powers all the incredible advances we see today in the field of artificial intelligence (AI) and Big Data. With this in mind, a great deal of thought and research has gone into working out the best way to store and organize information in the digital age. The relational database model was developed in the 1970s and organizes data into tables consisting of rows and columns – meaning the relationship between different data points can be determined at a glance. This worked very well in the early days of business computing, when information volumes grew slowly. For more complicated operations, however – such as establishing a relationship between data points stored in many different tables – the necessary queries quickly become complex, slow and cumbersome.
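To make the relational model concrete, here is a minimal sketch using Python's built-in sqlite3 module; the schema and data are invented for illustration. Each table holds rows and columns, and relating data across tables requires a join – the kind of operation that, as the passage notes, multiplies in complexity as more tables get involved:

```python
# Minimal illustration of the relational model: data in tables of rows
# and columns, related across tables with a join. Schema and values are
# hypothetical examples, not from the article.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customers(id),
                         total REAL);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (10, 1, 25.0), (11, 1, 40.0), (12, 2, 15.0);
""")

# One join relates two tables; queries spanning many tables need a chain
# of such joins, which is where relational queries grow cumbersome.
rows = conn.execute("""
    SELECT c.name, SUM(o.total)
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY c.name
""").fetchall()
print(rows)  # [('Ada', 65.0), ('Grace', 15.0)]
```

With two tables the query is still readable; spread the same facts across many tables and each additional relationship adds another JOIN clause, illustrating the scaling problem the passage describes.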