Chinese tech giant Baidu has created a new way of teaching language to AIs, according to a report in MIT Technology Review. The new method offers better results than the ones used by Google and Microsoft, beating out both companies on the General Language Understanding Evaluation (GLUE) benchmark. Baidu's new model is called Enhanced Representation through kNowledge IntEgration, or ERNIE. It was named after the Sesame Street character as a nod to Google's former champion model, Bidirectional Encoder Representations from Transformers, or BERT. To take the crown from Google, ERNIE had to outperform its rival on GLUE's nine different language tests.
Recent advances in artificial intelligence and machine learning are changing the way doctors practice medicine. Can medical data actually improve health care? At this seminar, Harvard Medical School scientists and physicians will discuss how AI assists doctors in diagnosing disease, determining the best treatments and predicting better outcomes for their patients.
A classic debate in cognitive science revolves around understanding how children learn complex linguistic rules, such as those governing restrictions on verb alternations, without negative evidence. Traditionally, formal learnability arguments have been used to claim that such learning is impossible without the aid of innate language-specific knowledge. However, recently, researchers have shown that statistical models are capable of learning complex rules from only positive evidence. These two kinds of learnability analyses differ in their assumptions about the role of the distribution from which linguistic input is generated. The former analyses assume that learners seek to identify grammatical sentences in a way that is robust to the distribution from which the sentences are generated, analogous to discriminative approaches in machine learning.
We introduce a lifelong language learning setup where a model needs to learn from a stream of text examples without any dataset identifier. We propose an episodic memory model that performs sparse experience replay and local adaptation to mitigate catastrophic forgetting in this setup. Experiments on text classification and question answering demonstrate the complementary benefits of sparse experience replay and local adaptation to allow the model to continuously learn from new datasets. We also show that the space complexity of the episodic memory module can be reduced significantly (50-90%) by randomly choosing which examples to store in memory with a minimal decrease in performance. We consider an episodic memory component as a crucial building block of general linguistic intelligence and see our model as a first step in that direction.
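The abstract's key space-saving idea is to store only a randomly chosen subset of the stream in the episodic memory and to replay small batches from it sparsely. A minimal sketch of that idea follows, using reservoir sampling as one concrete way to keep a uniform random subset of an unbounded stream; the class and method names are illustrative and not the authors' actual API.

```python
import random


class EpisodicMemory:
    """Fixed-capacity episodic memory with random write selection.

    Reservoir sampling (an assumption here, one standard choice) keeps
    each example from the stream in the buffer with equal probability,
    so memory stays bounded at `capacity` regardless of stream length.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0  # total examples observed in the stream so far

    def write(self, example):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # Keep the new example with probability capacity / seen by
            # overwriting a uniformly chosen slot.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        """Draw a mini-batch of stored examples for sparse experience replay."""
        return random.sample(self.buffer, min(k, len(self.buffer)))
```

In a lifelong-learning loop, `write` would be called on (a subset of) incoming training examples and `sample` only occasionally, so replay stays sparse relative to ordinary training updates.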
A long-term goal of machine learning research is to build an intelligent dialog agent. Most research in natural language understanding has focused on learning from fixed training sets of labeled data, with supervision either at the word level (tagging, parsing tasks) or sentence level (question answering, machine translation). This kind of supervision is not representative of how humans learn, where language is both learned through, and used for, communication. In this work, we study dialog-based language learning, where supervision is given naturally and implicitly in the response of the dialog partner during the conversation. We study this setup in two domains: the bAbI dataset of Weston et al. (2015) and large-scale question answering from Dodge et al. (2015).
We investigate heterogeneous employment effects of Flemish training programmes. Based on administrative individual data, we analyse programme effects at various aggregation levels using Modified Causal Forests (MCF), a causal machine learning estimator for multiple programmes. While all programmes have positive effects after the lock-in period, we find substantial heterogeneity across programmes and types of unemployed. Simulations show that assigning the unemployed to programmes that maximise individual gains as identified in our estimation can considerably improve effectiveness. Simplified rules, such as one giving priority to unemployed people with low employability, mostly recent migrants, lead to about half of the gains obtained by more sophisticated rules.
Technology developed by Israel's MedAware could potentially save the United States health system $800 million annually by preventing medication errors, based on a study published earlier this week in the Joint Commission Journal on Quality and Patient Safety. MedAware developed an AI-based patient safety solution. The new study, conducted by two Harvard doctors, validates both the significant clinical impact and anticipated ROI of MedAware's machine-learning-enabled clinical decision support platform designed to prevent medication-related errors and risks. MedAware uses AI methods similar to those used in the finance sector to stop fraud, by identifying "outliers" from a trend or practice in order to recognize suspicious or erroneous transactions. Most other electronic health record alert systems are rule-based. In the US alone, prescription drug errors result in "substantial morbidity, mortality and excess health care costs estimated at more than $20 billion annually in the United States," according to Dr. Ronen Rozenblum, assistant professor at Harvard Medical School and director of business development for patient safety research and practice at Brigham and Women's Hospital. Rozenblum was the study's lead author, along with Harvard professor Dr. David Bates. Rozenblum, an Israeli who has been living in Boston for more than a decade, has been testing MedAware for the past five years.
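MedAware's actual models are proprietary, but the contrast the article draws between outlier detection and fixed rules can be illustrated with a toy sketch: flag a prescription whose dose deviates sharply from past practice, rather than checking it against a hand-written rule. The function name, the z-score test, and the threshold are all assumptions made for illustration.

```python
import statistics


def flag_outlier_dose(past_doses, new_dose, z_threshold=3.0):
    """Flag a dose that is a statistical outlier relative to past practice.

    A toy stand-in for outlier-based alerting: instead of a hard-coded
    rule ("never exceed X mg"), compare the new dose to the distribution
    of doses previously prescribed in similar cases.
    """
    mean = statistics.mean(past_doses)
    sd = statistics.stdev(past_doses)
    if sd == 0:
        # No variation in past practice: anything different is suspicious.
        return new_dose != mean
    return abs(new_dose - mean) / sd > z_threshold
```

A real system would of course condition on far richer context (patient, drug, diagnosis, clinician), but the core shift is the same: the alert comes from the data's own distribution, not from a maintained rule set.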
Abstract--In medicine, a communicating virtual patient or doctor allows students to train in medical diagnosis and develop skills to conduct a medical consultation. In this paper, we describe a conversational virtual standardized patient system to allow medical students to simulate a diagnosis strategy of an abdominal surgical emergency. We exploited the semantic properties captured by distributed word representations to search for similar questions in the virtual patient dialogue system. We created two dialogue systems that were evaluated on datasets collected during tests with students. The first system, based on handcrafted rules, obtains an F1-score of 92.29% on the studied clinical case, while the second system, which combines rules and semantic similarity, achieves 94.88%. This represents an error reduction of 9.70% compared to the rules-only system. Medical diagnosis is traditionally taught at the bedside. Theoretical courses are supplemented by internships in hospital services. The medical student observes the practice of doctors and interns and practices under their supervision. This type of learning has the disadvantage of immediately confronting the medical student with complex situations without prior practical (technical and human) training.
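The retrieval step the abstract describes, matching a student's question against known questions via distributed word representations, is commonly implemented as nearest-neighbour search under cosine similarity. A minimal sketch, with toy hand-set vectors standing in for real trained embeddings; the function names and the example questions are illustrative, not taken from the paper's system.

```python
import math


def cosine(u, v):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0


def most_similar_question(query_vec, known_questions):
    """Return the known question whose embedding is closest to the query.

    `known_questions` maps each question string to its embedding vector.
    """
    return max(known_questions, key=lambda q: cosine(query_vec, known_questions[q]))
```

In the paper's setup this retrieval would back off to (or combine with) handcrafted rules, which is where the reported gain over the rules-only system comes from.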
"The attention around AI tends to focus on the latest technologies," Iansiti argues, "but the firms that are thriving have harnessed the subtle, inherent power of AI to break down traditional operational constraints, capture new value, and accelerate growth and innovation." What sets AI-driven firms apart is their ability to avoid the inefficiencies and bottlenecks that plague growth when complexity -- primarily caused by humans -- outstrips organizational capacity. These firms strive to construct a model for operational execution that does not require human intervention (ideally, no real-time "human bottlenecks"). In the new digital operating model, most operational tasks circumvent humans entirely. The ultimate aim is to automate and digitize as many operational processes as possible to take advantage of digital reliability and scalability.