If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Higher expression of nine genes may help identify people with systemic lupus erythematosus (SLE) who will respond to treatment with Stelara (ustekinumab) -- an approved therapy in inflammatory disorders but not in SLE. At the 2019 American College of Rheumatology (ACR)/Association for Rheumatology Health Professionals (ARHP) Annual Meeting, being held in Atlanta Nov. 8-13, Janssen is presenting evidence of reduced SLE disease activity with Stelara, as well as a tool to predict benefits in clinical trials. Stelara works by blocking interleukin (IL)-12 and IL-23, two pro-inflammatory molecules. It is approved in the U.S. for the treatment of psoriasis and psoriatic arthritis, as well as Crohn's disease and ulcerative colitis, which are two forms of inflammatory bowel disease. Results from a Phase 2 trial (NCT02349061) showed that Stelara reduced SLE disease activity and severe flares, among other benefits, compared with a placebo.
A lawmaker with severe physical disabilities attended his first parliamentary interpellation Thursday since being elected in July and became the first lawmaker in Japan ever to use an electronically generated voice during a Diet session. In the session of the education, culture and science committee, Yasuhiko Funago, who has amyotrophic lateral sclerosis, a condition also known as Lou Gehrig's disease, greeted the committee using a speech synthesizer. He also asked questions through a proxy speaker. "As a newcomer, I am still inexperienced, but with everyone's assistance, I will do my best to tackle (issues)," he said at the beginning of the session. An aide then posed questions on his behalf and conveyed his desire to see improvements in the learning environment for disabled children.
Conventional inclusion criteria used in osteoarthritis clinical trials are not very effective at selecting the patients who would benefit most from a therapy under test. Typically, these criteria select a majority of patients who show no or limited disease progression during the short evaluation window of the study. As a consequence, less insight into the relative effect of the treatment can be gained from the collected data, and the effort and resources invested in running the study do not pay off. This could be avoided if the selection criteria were more predictive of future disease progression. In this article, we formulate the patient selection problem as a multi-class classification task, with classes based on clinically relevant measures of progression (over a time scale typical for clinical trials). Using data from two long-term knee osteoarthritis studies, OAI and CHECK, we tested multiple algorithms and learning process configurations (including multi-classifier approaches, cost-sensitive learning, and feature selection) to identify the best-performing machine learning models. We examined the behaviour of the best models with respect to prediction errors and the impact of the features used, to confirm their clinical relevance. We found that model-based selection outperforms the conventional inclusion criteria, reducing the number of patients who show no progression by 20-25% and making the representation of the patient categories more even. This result indicates that our machine learning approach could lead to efficiency improvements in clinical trial design.
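The patient-selection setup described above can be sketched in a few lines. This is a minimal illustration only: the feature set, class labels, and synthetic data below are assumptions for the sake of the example, not the actual OAI/CHECK variables or the study's models.

```python
# Hypothetical sketch: trial patient selection as multi-class
# classification with cost-sensitive learning (synthetic data,
# illustrative class labels -- not the OAI/CHECK datasets).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for baseline clinical features (e.g. age, BMI, pain score, ...)
X = rng.normal(size=(600, 5))
# Progression classes: 0 = no progression, 1 = symptomatic, 2 = radiographic
y = rng.choice([0, 1, 2], size=600, p=[0.6, 0.25, 0.15])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Cost-sensitive learning: weight classes inversely to their frequency so
# the rarer progressor classes are not ignored by the classifier.
clf = RandomForestClassifier(class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)

# Model-based selection: enroll only patients predicted to progress,
# instead of applying conventional inclusion criteria.
selected = X_te[clf.predict(X_te) != 0]
print(f"{len(selected)} of {len(X_te)} screened patients selected")
```

In a real setting, the model would be trained on longitudinal cohort data and the predicted-progressor subset would form the trial's enrollment pool.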
More than half of clinicians in the United States experience symptoms of burnout,1 a condition so pervasive that multiple leading healthcare organizations have declared it a public health crisis.2 The electronic health record (EHR) is a major contributing factor to clinician burnout. On average, clinicians spend nearly 6 hours a day – more than half of the workday – carrying out tasks on the EHR. Rather than spending that time with patients, clinicians are burdened with documentation, billing, coding, order entry, system security, and other clerical and administrative tasks.3 Burnout manifests in several ways: emotional exhaustion, feelings of ineffectiveness, an inability to find meaning in work, and even a tendency to view patients and colleagues as objects rather than human beings.
Background. Real-world data show that approximately 50% of psoriasis patients treated with a biologic agent will discontinue the drug because of loss of efficacy. A history of previous therapy with another biologic, female sex, and obesity have been identified as predictors of drug discontinuation, but their individual predictive value is low. Objectives. To determine whether machine learning algorithms can produce models that accurately predict outcomes of biologic therapy in psoriasis at the individual patient level. Results. All tested machine learning algorithms could accurately predict the risk of drug discontinuation and its cause (e.g. lack of efficacy vs adverse event). The learned generalized linear model achieved a diagnostic accuracy of 82% on the psoriasis patient dataset, requiring under 2 seconds per patient. Input optimization analysis established the profile of the patient with the best chances of long-term treatment success: a biologic-naive patient under 49 years of age, with early-onset plaque psoriasis without psoriatic arthritis, weight < 100 kg, and moderate-to-severe psoriasis activity (DLQI ≥ 16; PASI ≥ 10). Moreover, a separate generalized linear model predicted the length of treatment for each patient with a mean absolute error (MAE) of 4.5 months; the Pearson correlation coefficient between actual and predicted treatment lengths was 0.935, indicating a strong linear relationship. Conclusions. Machine learning algorithms predict the risk of drug discontinuation and treatment duration with accuracy exceeding 80%, based on a small set of predictive variables. This approach can be used as a decision-making tool, for communicating expected outcomes to the patient, and for developing evidence-based guidelines.
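The two metrics reported for the treatment-length model, mean absolute error and the Pearson correlation coefficient, can be computed as below. The numbers are illustrative stand-ins, not the study's data.

```python
# Minimal sketch of the regression metrics from the abstract: MAE and
# Pearson correlation between actual and predicted treatment lengths.
# Values are made up for illustration.
import numpy as np

actual = np.array([6.0, 12.0, 24.0, 36.0, 18.0])     # months on biologic
predicted = np.array([9.0, 10.0, 27.0, 33.0, 16.0])  # model predictions

mae = np.mean(np.abs(actual - predicted))            # mean absolute error
r = np.corrcoef(actual, predicted)[0, 1]             # Pearson correlation

print(f"MAE = {mae:.1f} months, Pearson r = {r:.3f}")
```

A high Pearson r alongside a nonzero MAE, as in the abstract (r = 0.935, MAE = 4.5 months), means the predictions track the ordering of treatment lengths closely even though individual estimates are off by a few months.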
Google AI researchers working with the ALS Therapy Development Institute today shared details about Project Euphonia, a speech-to-text transcription service for people with speech impairments. They say their approach can also improve automatic speech recognition for people with non-native English accents. People with amyotrophic lateral sclerosis (ALS) often have slurred speech, but existing AI systems are typically trained on voice data from speakers without impairments or strong accents. The new approach succeeds primarily through the introduction of small amounts of data representing people with accents and ALS. "We show that 71% of the improvement comes from only 5 minutes of training data," according to a paper published on arXiv July 31 titled "Personalizing ASR for Dysarthric and Accented Speech with Limited Data."
Real-world observational data, together with causal inference, allow the estimation of causal effects when randomized controlled trials are not available. To be accepted into practice, such predictive models must be validated for the dataset at hand, and thus require a comprehensive evaluation toolkit, as introduced here. Since effect estimation cannot be evaluated directly, we turn to evaluating the various observable properties of causal inference, namely the observed outcome and treatment assignment. We developed a toolkit that expands established machine learning evaluation methods and adds several causal-specific ones. Evaluations can be applied in cross-validation, in a train-test scheme, or on the training data. Multiple causal inference methods are implemented within the toolkit in a way that allows modular use of the underlying machine learning models. Thus, the toolkit is agnostic to the machine learning model that is used. We showcase our approach using a rheumatoid arthritis cohort (consisting of about 120K patients) extracted from the IBM MarketScan® Research Database. We introduce an iterative pipeline of data definition, model definition, and model evaluation. Using this pipeline, we demonstrate how each of the evaluation components helps drive model selection and refinement of data extraction criteria in a way that provides more reproducible results and ensures that the causal question is answerable with available data. Furthermore, we show how the evaluation toolkit can be used to ensure that performance is maintained when applied to subsets of the data, thus allowing exploration of questions that move towards personalized medicine.
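One of the "observable properties" mentioned above, treatment assignment, can be evaluated with standard ML metrics, for instance by scoring a propensity model in cross-validation. The sketch below is a generic illustration under synthetic data; the covariates, model, and cohort are assumptions, not the toolkit's actual implementation or the MarketScan data.

```python
# Hedged sketch: evaluating the treatment-assignment (propensity) model,
# one observable property of a causal analysis, with cross-validated AUC.
# Synthetic data; not the toolkit's API or the MarketScan cohort.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))  # baseline covariates

# Treatment assignment depends on covariates (i.e., there is confounding)
p_treat = 1.0 / (1.0 + np.exp(-(X[:, 0] - 0.5 * X[:, 1])))
t = rng.binomial(1, p_treat)

# Score the propensity model out-of-sample, as a cross-validation
# evaluation scheme would.
prop = cross_val_predict(LogisticRegression(), X, t,
                         cv=5, method="predict_proba")[:, 1]
auc = roc_auc_score(t, prop)
print(f"propensity AUC = {auc:.2f}")
```

Checks like this cannot validate the effect estimate itself, but they flag models that fail to capture how treatment was assigned, which is exactly the kind of diagnostic the toolkit combines with causal-specific ones.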
"To invent, you need a good imagination and a pile of junk." So said Thomas Edison, America's most prolific inventor. Yet the march of technology is now changing the great man's inventive equation: powerful algorithmic advisory systems are giving inventors far more fertile imaginations, even if they don't have much of one themselves. After being fed vast datasets on a field of inventive endeavor, the deep learning algorithms of artificial intelligence identify patterns that help inventors think laterally, make connections between nonobvious ideas, pinpoint hidden invention features that rivals have missed, and exploit new science and technology-based opportunities from, say, patents and journals.