If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
IBM's Computational Psychiatry and Neuroimaging research team has been working on a way to use machine learning to predict the risk of developing psychosis, and it just published a second study suggesting AI might be a valuable tool for mental health assessment. Building on work published in 2015, the team used AI to analyze the speech patterns of 59 individuals who had participated in a separate study. Transcripts of an interview with each participant were broken down into parts of speech and scored on how coherent the sentences were. The machine learning model then determined, based on those speech patterns, who was at risk of developing psychosis and who wasn't. Of those participants, 19 developed a psychotic disorder within two years while 40 did not, and the model predicted those outcomes with 83 percent accuracy.
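The study's actual features and classifier are not public, so the following is only a rough sketch of the idea: approximate "coherence" as the average similarity between consecutive sentences in a transcript, then feed that score to a simple classifier. The transcripts, labels, and helper names below are all invented for illustration.

```python
# Illustrative sketch only: the IBM team's real features and model are not
# public. "Coherence" here is approximated as the mean cosine similarity
# between TF-IDF vectors of consecutive sentences; a logistic-regression
# classifier then maps that single score to a risk label.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

def coherence_score(sentences):
    """Mean cosine similarity between consecutive sentences (crude proxy)."""
    vecs = TfidfVectorizer().fit_transform(sentences)
    sims = [cosine_similarity(vecs[i], vecs[i + 1])[0, 0]
            for i in range(vecs.shape[0] - 1)]
    return float(np.mean(sims))

# Toy transcripts (made up, not from the study): the second one wanders.
transcripts = [
    ["I went to the store.", "The store was out of milk.", "So I bought bread."],
    ["I went to the store.", "Clouds are heavy today.", "My shoes argue with me."],
]
labels = [0, 1]  # 0 = did not develop psychosis, 1 = did (invented labels)

X = np.array([[coherence_score(t)] for t in transcripts])
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))
```

A real system would use richer linguistic features (parts of speech, semantic embeddings) and far more data; the point is only that a coherence-style score can serve as classifier input.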
Defined as the "ability for (computers) to learn without being explicitly programmed," machine learning is huge news for the information security industry. It's a technology that can potentially help security analysts with everything from malware and log analysis to identifying and closing vulnerabilities earlier. Perhaps too, it could improve endpoint security, automate repetitive tasks, and even reduce the likelihood of attacks resulting in data exfiltration. Naturally, this has led to the belief that these intelligent security solutions will spot - and stop - the next WannaCry attack much faster than traditional, legacy tools. "It's still a nascent field, but it is clearly the way to go in the future."
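As one hedged illustration of ML-assisted log analysis, an unsupervised anomaly detector can flag traffic patterns that deviate from a learned baseline, the kind of outlier that might indicate exfiltration. The features (requests per minute, bytes out, distinct hosts contacted) and all numbers below are invented for the sketch; a real deployment would engineer many more signals.

```python
# Hedged sketch: unsupervised anomaly detection over log-derived features
# using scikit-learn's IsolationForest. All features and values are simulated.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated "normal" rows: [requests/min, bytes out, distinct hosts contacted]
normal = rng.normal(loc=[60, 5_000, 3], scale=[10, 1_000, 1], size=(500, 3))
# One simulated exfiltration burst: an extreme outbound byte count.
suspect = np.array([[70, 500_000, 40]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspect))  # -1 marks an anomaly, 1 marks normal
```

The appeal for security teams is that no labeled attack data is needed: the model only has to learn what "normal" looks like.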
Machine learning is often defined as "a field of study that gives computers the ability to learn without being explicitly programmed." Despite this common claim, anyone who has worked in the field knows that designing effective machine learning systems is a tedious endeavor, and typically requires considerable experience with machine learning algorithms, expert knowledge of the problem domain, and brute-force search to accomplish. Thus, contrary to what machine learning enthusiasts would have us believe, machine learning still requires a considerable amount of explicit programming. In this article, we're going to go over three aspects of machine learning pipeline design that tend to be tedious but nonetheless important. After that, we're going to step through a demo of a tool that intelligently automates the process of machine learning pipeline design, so we can spend our time working on the more interesting aspects of data science.
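To make "automating pipeline design" concrete, here is a minimal stand-in using scikit-learn's GridSearchCV to search over both a preprocessing choice and a model hyperparameter. Dedicated automation tools search a far larger space of operators; this sketch only illustrates the idea, and the dataset and search grid are arbitrary choices.

```python
# Minimal stand-in for automated pipeline design: exhaustively search a few
# preprocessing + hyperparameter combinations with cross-validation.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X, y = load_iris(return_X_y=True)

pipe = Pipeline([
    ("scale", StandardScaler()),              # placeholder, swapped by the grid
    ("clf", LogisticRegression(max_iter=500)),
])
grid = {
    "scale": [StandardScaler(), MinMaxScaler()],  # search over preprocessors
    "clf__C": [0.1, 1.0, 10.0],                   # and model regularization
}
search = GridSearchCV(pipe, grid, cv=5).fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

Even this toy search frees the analyst from hand-tuning one combination at a time, which is precisely the tedium the article is about.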
Humans today live a lot longer than they used to. That's great news, but as modern medical advances give patients second chances at living normal lives, end-of-life care continues to be a difficult thing to plan. Forecasting when someone will die is an extremely challenging and often uncomfortable task, but Stanford researchers have trained an AI to predict death with remarkable accuracy, and it could revolutionize end-of-life care for patients nearing the end of life. The goal is to better match patient (and family) wishes with an accurate timeline of an individual's final months, weeks, and days, while affording them the opportunity to plan ahead for the inevitable. The work is titled Improving Palliative Care with Deep Learning, and it's currently available online.
Some people champion artificial intelligence as a solution to the kinds of biases that humans fall prey to. Even simple statistical tools can outperform people at tasks in business, medicine, academia, and crime reduction. Others chide AI for systematizing bias, which it can do even when bias is not programmed in. In 2016, ProPublica released a much-cited report arguing that a common algorithm for predicting criminal risk showed racial bias. Now a new research paper reveals that, at least in the case of the algorithm covered by ProPublica, neither side has much to get worked up about.
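What "racial bias" meant in the ProPublica analysis was, in large part, a gap in error rates between groups: defendants in one group were more often labeled high risk without going on to reoffend. A sketch of that measurement with entirely made-up data: compute the false positive rate per group.

```python
# Sketch: group-wise false positive rate (flagged high-risk but did not
# reoffend). All data below is invented purely to illustrate the metric.
import numpy as np

group     = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])
predicted = np.array([1, 0, 1, 1, 1, 0, 1, 0])  # 1 = flagged high risk
actual    = np.array([0, 0, 1, 0, 0, 0, 1, 0])  # 1 = actually reoffended

for g in ("A", "B"):
    mask = (group == g) & (actual == 0)   # non-reoffenders in group g
    fpr = predicted[mask].mean()          # fraction of them wrongly flagged
    print(g, round(float(fpr), 3))
```

A large gap between the two printed rates is the kind of disparity the ProPublica report highlighted, even when group membership is never an input to the model.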
Evaluating the accuracy of a classification model can be more complex and time-consuming than evaluating a model that predicts a continuous dependent variable, such as linear regression. Before measuring the accuracy of a classification model, an analyst would first measure its robustness with the help of metrics such as AIC/BIC, AUC-ROC, AUC-PR, the Kolmogorov-Smirnov chart, etc. The next logical step is to measure its accuracy.
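As a hedged sketch of the workflow above, the ranking-based robustness metrics can be computed from predicted scores with scikit-learn and scipy, and only then is a threshold applied to measure plain accuracy. The labels and scores below are toy values chosen for illustration.

```python
# Robustness metrics first (computed on scores), accuracy second (computed
# on thresholded labels). Toy data; scores mostly separate the classes.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.metrics import accuracy_score, average_precision_score, roc_auc_score

y_true  = np.array([0, 0, 0, 0, 1, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.3, 0.2, 0.65, 0.8, 0.7, 0.9, 0.35, 0.6, 0.15])

auc_roc = roc_auc_score(y_true, y_score)            # area under ROC curve
auc_pr  = average_precision_score(y_true, y_score)  # area under PR curve
# KS statistic: max gap between the score distributions of the two classes.
ks = ks_2samp(y_score[y_true == 1], y_score[y_true == 0]).statistic

# Only after the robustness checks: threshold the scores, measure accuracy.
accuracy = accuracy_score(y_true, (y_score >= 0.5).astype(int))
print(round(auc_roc, 3), round(auc_pr, 3), round(ks, 3), round(accuracy, 3))
```

Note that the first three metrics are threshold-free, which is exactly why they come before accuracy: accuracy depends on where the cutoff is placed, while AUC-ROC, AUC-PR, and KS describe the model's ranking quality as a whole.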
Perhaps you heard recently about a new algorithm that can drive a car? Or scan a picture and find your face in a crowd? It seems as though every week companies are finding new uses for algorithms that adapt as they encounter new data. Last year Wired quoted an ex-Google employee as saying that "Everything in the company is really driven by machine learning." Machine learning has tremendous potential to transform companies, but in practice it's mostly far more mundane than robot drivers and chefs.
Our minds may no longer be a safe haven for secrets. Scientists are working toward building mind-reading algorithms that could potentially decode our innermost thoughts through memories that act as a database. For most, this probably sounds like an episode of Netflix's hit series Black Mirror. The dystopian sci-fi thriller recently showcased a chilling episode called "Crocodile" that used memory-reading techniques to investigate accidents for insurance purposes. The eerie episode is set in an AI-driven world of driverless vehicles and facial recognition technologies.
Visual aesthetics has been shown to critically affect a variety of constructs such as perceived usability, satisfaction, and pleasure. However, visual aesthetics is also a subjective concept and therefore presents unique challenges when training a machine learning algorithm to learn such subjectivity. Given the importance of visual aesthetics in human-computer interaction, it is vital that machines adequately assess the concept of visual aesthetics. Machine learning, and deep learning techniques in particular, have already shown great promise on tasks with well-defined goals such as identifying objects in images or translating from one language to another. However, quantification of image aesthetics has been one of the most persistent problems in image processing and computer vision.