Computational Learning Theory

#artificialintelligence

Theoretical results in machine learning mainly deal with a type of inductive learning called supervised learning. In supervised learning, an algorithm is given samples that are labeled in some useful way. For example, the samples might be descriptions of mushrooms, and the labels could be whether or not the mushrooms are edible. The algorithm takes these previously labeled samples and uses them to induce a classifier: a function that assigns labels to samples, including samples the algorithm has never seen before.
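
As a minimal illustration of this setup (a sketch of my own, not code from the source), the snippet below induces a classifier from a handful of labeled samples and then assigns a label to a sample it has never seen. The mushroom-style features and the use of scikit-learn's DecisionTreeClassifier are assumptions chosen for brevity.

```python
# Minimal supervised-learning sketch (illustrative only): induce a classifier
# from labeled samples, then label a previously unseen sample.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical mushroom descriptions: [cap_diameter_cm, has_ring, spore_darkness]
X_train = [
    [5.0, 1, 0.2],
    [9.5, 0, 0.9],
    [4.2, 1, 0.1],
    [11.0, 0, 0.8],
]
y_train = ["edible", "poisonous", "edible", "poisonous"]  # the "useful" labels

clf = DecisionTreeClassifier().fit(X_train, y_train)  # induce the classifier

# The induced function also assigns labels to samples it has never seen before.
print(clf.predict([[6.1, 1, 0.3]]))
```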


Informatics as a Fundamental Discipline for the 21st Century

Communications of the ACM

Informatics for All is a coalition whose aim is to establish informatics as a fundamental discipline to be taken by all students in school. Informatics should be seen as being as important as mathematics, the sciences, and the various languages. It should be recognized by all as a truly foundational discipline that plays a significant role in education for the 21st century. In Europe, education is a matter left to the individual states. However, education, competencies, and preparedness of the workforce are all important matters for the European Union (EU).


Why AI Underperforms and What Companies Can Do About It

#artificialintelligence

Why is the gap between companies' AI ambition and their actual adoption so large? The answer is not primarily technical. It is organizational and cultural. A massive skills and language gap has emerged between key organizational decision makers and their "AI teams." It is a barrier that promises to stall, delay, or sink algorithmic innovations.


Can you learn Data Science and Machine Learning without Maths?

#artificialintelligence

Data scientist is the No. 1 most promising job in America for 2019, according to a Thursday report from LinkedIn. This comes as no surprise: data scientist topped Glassdoor's list of Best Jobs in America for the past three years, with professionals in the field reporting high demand, high salaries, and high job satisfaction. With the increase in demand, employers are also looking for a broader range of skills in modern-day data scientists: a modern data scientist needs to be strong in areas like maths, programming, communication, and problem-solving. In this blog, we are going to explore whether knowledge of mathematics is really necessary to become a good data scientist.


Artificial intelligence platform to detect neurodegenerative diseases

#artificialintelligence

The buildup of abnormal tau proteins in the brain in neurofibrillary tangles is a feature of Alzheimer's disease, but it also accumulates in other neurodegenerative diseases, such as chronic traumatic encephalopathy and additional age-related conditions. Accurate diagnosis of neurodegenerative diseases is challenging and requires a highly trained specialist. Researchers at the Center for Computational and Systems Pathology at Mount Sinai developed and used the Precise Informatics Platform to apply powerful machine learning approaches to digitized microscopic slides prepared from tissue samples of patients with a spectrum of neurodegenerative diseases. Using deep learning on these images, the researchers trained a convolutional neural network capable of identifying neurofibrillary tangles with a high degree of accuracy directly from digitized slides. "Utilizing artificial intelligence has great potential to improve our ability to detect and quantify neurodegenerative diseases, representing a major advance over existing labor-intensive and poorly reproducible approaches," said lead investigator John Crary, MD, PhD, Professor of Pathology and Neuroscience at the Icahn School of Medicine at Mount Sinai.
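
The article does not describe the network itself, so the following is only a generic, hypothetical sketch of the approach: a small convolutional classifier (written here in PyTorch, an assumption) that labels image patches cut from digitized slides as containing a neurofibrillary tangle or not. It is not the architecture used by the Precise Informatics Platform.

```python
# Generic sketch of a patch-level CNN classifier (illustrative; not the
# model used by the Precise Informatics Platform).
import torch
import torch.nn as nn

class TangleCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # tangle vs. no tangle

    def forward(self, x):                # x: (batch, 3, 64, 64) RGB slide patches
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TangleCNN()
patches = torch.randn(8, 3, 64, 64)     # stand-in for patches from digitized slides
logits = model(patches)                 # per-patch scores for the two classes
print(logits.shape)                     # torch.Size([8, 2])
```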


No, Machine Learning is not just glorified Statistics

#artificialintelligence

This meme has been all over social media lately, producing appreciative chuckles across the internet as the hype around deep learning begins to subside. The sentiment that machine learning is really nothing to get excited about, or that it's just a redressing of age-old statistical techniques, is becoming increasingly common; the trouble is, it isn't true. I get it -- it's not fashionable to be part of the overly enthusiastic, hype-drunk crowd of deep learning evangelists. ML experts who in 2013 preached deep learning from the rooftops now use the term only with a hint of chagrin, preferring instead to downplay the power of modern neural networks lest they be associated with the scores of people who still seem to think that import keras is the leap over every hurdle, and that they, in knowing it, have some tremendous advantage over their competition. While it's true that deep learning has outlived its usefulness as a buzzword, as Yann LeCun put it, this overcorrection of attitudes has yielded an unhealthy skepticism about the progress, future, and usefulness of artificial intelligence.


5G will change your business faster than you think

#artificialintelligence

Not that I'm a big fan of such titles, but when I look at what 5G will bring, it's clear most businesses will feel the impact. Most technologies have a slow adoption curve; 5G won't, and this change of speed is going to catch most companies unaware. The technological improvements of better connectivity are apparent, but their consequences aren't. There are two big groups of problems that 5G's lower latency and higher bandwidth will affect. On one side, we have problems that can be solved with little computation but need real-time responses. Think of any remote controller: the computation needs on the controller side are minor, but the remote actuator needs fast reactions.
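
To make the latency argument concrete, here is a back-of-the-envelope sketch with invented numbers (not the author's): it checks whether a remote control loop fits inside its deadline once the network round trip drops from a 4G-like to a 5G-like figure.

```python
# Back-of-the-envelope check (illustrative numbers only): can a remote control
# loop meet its deadline once network latency drops to 5G-like levels?
def loop_fits(deadline_ms, network_rtt_ms, compute_ms, actuation_ms):
    """Return True if sensing -> remote computation -> actuation fits the deadline."""
    return network_rtt_ms + compute_ms + actuation_ms <= deadline_ms

DEADLINE_MS = 20      # e.g. a 50 Hz control loop on the remote actuator
COMPUTE_MS = 5        # modest computation on the controller/server side
ACTUATION_MS = 2      # time for the actuator itself to respond

for label, rtt_ms in [("4G-like RTT ~50 ms", 50), ("5G-like RTT ~10 ms", 10)]:
    verdict = "fits" if loop_fits(DEADLINE_MS, rtt_ms, COMPUTE_MS, ACTUATION_MS) else "misses deadline"
    print(f"{label}: {verdict}")
```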


3 New Chips to Help Robots Find Their Way Around

IEEE Spectrum Robotics

Robots have a tough job making their way in the world. Life throws up obstacles, and it takes a lot of computing power to avoid them. At the IEEE International Solid-State Circuits Conference last month in San Francisco, engineers presented some ideas for lightening that computational burden. That's a particularly good thing if you're a compact robot, with a small battery pack and a big job to do. Engineers at Intel are experimenting with robot-specific accelerators as part of a collaborative multirobot system.


Global Big Data Conference

#artificialintelligence

Artificial intelligence is helping computers drive cars, recognize faces in a crowd, and hold life-like conversations. General Electric engineers now say they've used the data-intensive technology to develop tools that could cut the industrial giant's design process for jet engines and power turbines at least in half, speeding up its next generation of products. Today, it might take two days for engineers to run a computational analysis of the fluid dynamics of a single design for a turbine blade or an engine component. Scientists at General Electric's research center in Niskayuna, New York, say they've used machine learning to train a surrogate model that can evaluate a million different variations of a design in just 15 minutes. "This is, we think, a huge breakthrough," says Robert Zacharias, technology director of thermosciences at GE Research.
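
The article gives no implementation details, so the sketch below only illustrates the general surrogate-modelling idea under assumptions of my own: fit a fast regression model to a limited number of expensive simulation results, then use it to score a very large batch of candidate designs cheaply. The toy "simulation" and the choice of scikit-learn's GradientBoostingRegressor are placeholders, not GE's tooling.

```python
# Illustrative surrogate-model sketch (not GE's actual tooling): replace an
# expensive physics simulation with a cheap learned approximation.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def expensive_simulation(design):
    """Stand-in for a slow CFD run: maps design parameters to a performance score."""
    x, y, z = design
    return np.sin(3 * x) + 0.5 * y**2 - 0.2 * x * z

# 1) Run the expensive simulation on a modest sample of designs.
train_designs = rng.uniform(0, 1, size=(500, 3))
train_scores = np.array([expensive_simulation(d) for d in train_designs])

# 2) Fit the surrogate on those (design, score) pairs.
surrogate = GradientBoostingRegressor().fit(train_designs, train_scores)

# 3) Score a huge batch of candidate designs almost instantly.
candidates = rng.uniform(0, 1, size=(1_000_000, 3))
predicted = surrogate.predict(candidates)
print("best candidate:", candidates[np.argmax(predicted)])
```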


6 impactful applications of AI to the life sciences [new essay]

#artificialintelligence

In 2013, the machine learning (ML) research community demonstrated the uncanny ability of deep neural networks trained with backpropagation on graphics processing units to solve complex computer vision tasks. The same year, I wrapped up my PhD in cancer research, investigating the genetic regulatory circuitry of cancer metastasis. Over the six years that followed, I've noticed more and more computer scientists (we call them bioinformaticians :) and software engineers move into the life sciences. This influx is both natural and extremely welcome. The life sciences have become increasingly quantitative disciplines thanks to high-throughput omics assays such as sequencing and high-content screening assays such as multi-spectral, time-series microscopy. If we are to achieve a step-change in experimental productivity and discovery in the life sciences, I think it's uncontroversial to posit that we desperately need software-augmented workflows. This is the era of empirical computation (more on that here). But which life science problems should we tackle, and what software approaches should we develop?