Information Technology: Overviews

Machine learning and the physical sciences


Abstract: Machine learning encompasses a broad range of algorithms and modeling tools used for a vast array of data processing tasks, and it has entered most scientific disciplines in recent years. We review in a selective way the recent research on the interface between machine learning and the physical sciences. This includes conceptual developments in machine learning (ML) motivated by physical insights, applications of machine learning techniques to several domains in physics, and cross-fertilization between the two fields. After giving a basic overview of machine learning methods and principles, we describe examples of how statistical physics is used to understand methods in ML. We then move on to describe applications of ML methods in particle physics and cosmology, quantum many-body physics, quantum computing, and chemical and material physics. We also highlight research and development into novel computing architectures aimed at accelerating ML.

Artificial Intelligence: Empowering Paradigm Shift in Technology - Hidden Brains Blog


New technology is disrupting business models. From driverless cars to chatbots, artificial intelligence (AI) is completely changing the way we live and do business. In this era of change, innovative AI companies and artificial intelligence solutions will contribute greatly to global economic growth and productivity. PwC estimates that AI could add as much as $15.7 trillion to the global economy by 2030. Here is how AI is changing different industries by better monitoring and managing processes for higher efficiency and performance.

Beyond the digital frontier


DIGITAL transformation has become a rallying cry for business and technology strategists. To those charged with mapping the future, it promises a triumphant response to the pressures and potential of disruptive change. Yet all too often, companies anchor their approach on a specific technology advance. To fuel impactful digital transformation, leading organizations combine technology with other catalysts of new opportunities--from emerging ecosystems to human-centered design and the future of work--because the technology trends that inspire digital transformation efforts don't take place in a vacuum. They cross-pollinate with emerging trends in the physical and social sciences and in business to deliver unexpected outcomes. Developing a systematic approach for identifying and harnessing opportunities born of the intersections of technology, science, and business is an essential first step in demystifying digital transformation and making it concrete, achievable, and measurable.

Artificial Intelligence (AI) in Fintech Global Market Demand, Growth, Opportunities, Analysis of Top Key Player and Forecast to 2024


May 03, 2019 (Heraldkeeper via COMTEX) -- As FinTech applies data and technology to financial services in an effort to address industry challenges, artificial intelligence is essential to FinTech's existence and usage. According to this study, over the next five years the Artificial Intelligence (AI) in Fintech market will register a xx% CAGR in terms of revenue, and the global market size will reach US$ xx million by 2024, from US$ xx million in 2019. In particular, this report presents the global revenue market share of key companies in the Artificial Intelligence (AI) in Fintech business, shared in Chapter 3. The report provides a comprehensive overview, market shares, and growth opportunities of the Artificial Intelligence (AI) in Fintech market by product type, application, key companies, and key regions. It also splits the market by region, with breakdown data in Chapters 4 through 8: Americas (United States, Canada, Mexico, Brazil); APAC (China, Japan, Korea, Southeast Asia, India, Australia); Europe (Germany, France, UK, Italy, Russia, Spain); and Middle East & Africa (Egypt, South Africa, Israel, Turkey, GCC Countries). Finally, the report presents the market competition landscape and a corresponding detailed analysis of the major vendors/manufacturers in the market.

Text Embeddings for Retrieval From a Large Knowledge Base Machine Learning

Text embeddings representing natural language documents in a semantic vector space can be used for document retrieval via nearest-neighbor lookup. To study the feasibility of neural models specialized for retrieval in a semantically meaningful way, we suggest using the Stanford Question Answering Dataset (SQuAD) in an open-domain question-answering context, where the first task is to find paragraphs useful for answering a given question. First, we compare the quality of various text-embedding methods on retrieval performance, giving an extensive empirical comparison of various non-augmented base embeddings with and without IDF weighting. Our main result is that training deep residual neural models specifically for retrieval purposes can yield significant gains when they are used to augment existing embeddings. We also establish that deeper models are superior for this task. Augmenting the best baseline embeddings with our learned neural approach improves the system's top-1 paragraph recall by 14%.
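The retrieval setup described in this abstract can be sketched concretely: embed each paragraph and the query, then return the paragraph with the highest cosine similarity. The toy corpus below is invented, and one-hot vectors stand in for a pretrained embedding table so the example stays deterministic; a real system would use dense pretrained vectors and SQuAD paragraphs.

```python
import numpy as np
from collections import Counter

# Toy corpus of "paragraphs" (hypothetical stand-in for SQuAD paragraphs).
paragraphs = [
    "the higgs boson decays to charm quark pairs".split(),
    "neural networks learn text embeddings for documents".split(),
    "recursive models predict route choice on networks".split(),
]
query = "text embeddings for retrieval".split()

# One-hot vectors as a deterministic stand-in for a pretrained embedding
# table; swap in dense vectors (e.g. GloVe) for a real system.
vocab = sorted({w for p in paragraphs for w in p} | set(query))
vectors = {w: np.eye(len(vocab))[i] for i, w in enumerate(vocab)}

# IDF weights from paragraph frequencies (the "with IDF weighting" baseline).
n = len(paragraphs)
df = Counter(w for p in paragraphs for w in set(p))
idf = {w: np.log((1 + n) / (1 + df[w])) + 1.0 for w in vocab}

def embed(tokens):
    """IDF-weighted sum of word vectors, L2-normalized."""
    v = sum(idf[w] * vectors[w] for w in tokens)
    return v / np.linalg.norm(v)

# Nearest-neighbor lookup: cosine similarity is the dot product of unit vectors.
q = embed(query)
scores = [float(q @ embed(p)) for p in paragraphs]
top1 = int(np.argmax(scores))
print("top-1 paragraph:", top1)  # paragraph 1 shares "text", "embeddings", "for"
```

The learned augmentation the abstract describes would replace `embed` with a trained network while keeping the same nearest-neighbor lookup.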

KALM: A Rule-based Approach for Knowledge Authoring and Question Answering Artificial Intelligence

Knowledge representation and reasoning (KRR) is one of the key areas in the artificial intelligence (AI) field. It is intended to represent world knowledge in formal languages (e.g., Prolog, SPARQL) and then enable expert systems to perform querying and inference tasks. Currently, constructing large-scale knowledge bases (KBs) of high quality is hindered by the fact that the construction process requires many qualified knowledge engineers who not only understand the domain-specific knowledge but also have sufficient skill in knowledge representation. Unfortunately, qualified knowledge engineers are in short supply. Therefore, it would be very useful to build a tool that allows the user to construct and query the KB simply via text. Although a number of systems have been developed for knowledge extraction and question answering, they mainly fail because they do not achieve high enough accuracy, whereas KRR is highly sensitive to erroneous data. In this thesis proposal, I present Knowledge Authoring Logic Machine (KALM), a rule-based system that allows the user to author knowledge and query the KB in text. The experimental results show that KALM achieves superior accuracy in knowledge authoring and question answering compared to state-of-the-art systems.
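To make the KRR setting concrete, here is a minimal forward-chaining inference sketch over Horn-style rules, in the spirit of the Prolog-like formal languages the abstract mentions. The predicates and the rule are illustrative inventions, not KALM's actual representation.

```python
# Minimal forward-chaining inference over Horn-style rules.
# Facts are ground tuples; uppercase strings in patterns are variables.
facts = {("parent", "ann", "bob"), ("parent", "bob", "carol")}

# Each rule: (body patterns, head pattern), read "if body then head".
rules = [
    ([("parent", "X", "Y"), ("parent", "Y", "Z")], ("grandparent", "X", "Z")),
]

def is_var(t):
    return isinstance(t, str) and t[0].isupper()

def unify(pattern, fact, env):
    """Try to match one pattern against one ground fact under bindings env."""
    if len(pattern) != len(fact):
        return None
    env = dict(env)
    for p, f in zip(pattern, fact):
        if is_var(p):
            if env.get(p, f) != f:
                return None
            env[p] = f
        elif p != f:
            return None
    return env

def substitute(pattern, env):
    return tuple(env.get(t, t) for t in pattern)

def forward_chain(facts, rules):
    """Apply the rules to a fixed point, returning all derivable facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            envs = [{}]
            for pat in body:  # join the body patterns against known facts
                envs = [e2 for e in envs for f in facts
                        if (e2 := unify(pat, f, e)) is not None]
            for env in envs:
                new = substitute(head, env)
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

kb = forward_chain(facts, rules)
print(("grandparent", "ann", "carol") in kb)  # prints True: derived fact
```

The abstract's point about error sensitivity is visible here: a single wrong `parent` fact would silently propagate into wrong `grandparent` conclusions.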

Alternative Techniques for Mapping Paths to HLAI Artificial Intelligence

The only systematic mapping of the HLAI technical landscape was conducted at a workshop in 2009 [Adams et al., 2012]. However, the results from it were not what the organizers had hoped for [Goertzel 2014, 2016]: merely a series of milestones, up to 50% of which could be argued to have been completed already. We consider two more recent articles outlining paths to human-like intelligence [Mikolov et al., 2016; Lake et al., 2017]. These offer technical and more refined assessments of the requirements for HLAI rather than just milestones. While useful, they also have limitations. To address these limitations, we propose the use of alternative techniques for an updated systematic mapping of the paths to HLAI. The newly proposed alternative techniques can model complex paths of future technologies using intricate directed graphs. Specifically, we consider two classes of alternative techniques: scenario mapping methods and techniques for eliciting expert opinion through digital platforms and crowdsourcing. We assess the viability and utility of both the previous and the alternative techniques, finding that the proposed alternative techniques could be very beneficial in advancing the existing body of knowledge on plausible frameworks for creating HLAI. In conclusion, we encourage discussion and debate to initiate efforts to use these proposed techniques for mapping paths to HLAI.

A tutorial on recursive models for analyzing and predicting path choice behavior Machine Learning

The problem at the heart of this tutorial consists in modeling the path choice behavior of network users. This problem has been studied extensively in transportation science and econometrics, where it is known as the route choice problem. In this literature, individuals' path choices are typically predicted with discrete choice models. The aim of this tutorial is to present this problem from the novel and more general perspective of inverse optimization, in order to describe the modeling approaches proposed in related research areas and thereby motivate the use of so-called recursive models. The latter have the advantage of predicting path choices without generating choice sets. In this paper, we contextualize discrete choice models as a probabilistic approach to an inverse shortest path problem with noisy data, highlighting that recursive discrete choice models in particular originate from viewing the inner shortest path problem as a parametric Markov Decision Process. We also illustrate through simple numerical examples that recursive models overcome issues associated with the path-based discrete choice models commonly found in the transportation literature.
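The view of the inner shortest path problem as a parametric Markov Decision Process can itself be illustrated with a small numerical example. The sketch below implements the recursive-logit-style value recursion, a log-sum-exp "softmin" over outgoing links; the network, link costs, and scale parameter are hypothetical.

```python
import math

# Toy network (hypothetical): outgoing links with costs; 'D' is the destination.
edges = {
    "A": {"B": 1.0, "C": 2.5},
    "B": {"D": 2.0},
    "C": {"D": 0.5},
    "D": {},
}
mu = 1.0  # scale of the i.i.d. extreme-value noise on link utilities

def value_iteration(edges, dest, mu, iters=100):
    """Expected minimum cost-to-go V(k) under the recursive model:
    V(k) = -mu * log(sum_a exp(-(c(k,a) + V(a)) / mu)), with V(dest) = 0."""
    V = {k: 0.0 for k in edges}
    for _ in range(iters):
        for k in edges:
            if k == dest:
                continue
            V[k] = -mu * math.log(sum(
                math.exp(-(c + V[a]) / mu) for a, c in edges[k].items()))
    return V

def link_probs(edges, V, k, mu):
    """Link choice probabilities at node k: a softmin over cost-to-go."""
    w = {a: math.exp(-(c + V[a]) / mu) for a, c in edges[k].items()}
    z = sum(w.values())
    return {a: wa / z for a, wa in w.items()}

V = value_iteration(edges, "D", mu)
print(link_probs(edges, V, "A", mu))  # both routes cost 3.0, so 0.5 / 0.5
```

Note that no choice set of paths is ever enumerated: the recursion over nodes is enough to induce a probability distribution over all paths to the destination.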

Class Imbalance Techniques for High Energy Physics Machine Learning

A common problem in high energy physics is extracting a signal from a much larger background. Posed as a classification task, there is said to be an imbalance between the number of samples belonging to the signal class and the number of samples from the background class. Techniques for learning from imbalanced data are well established in the machine learning community. In this work we provide a brief overview of class imbalance techniques in a high energy physics setting. Two case studies are presented: (1) the measurement of the longitudinal polarization fraction in same-sign $WW$ scattering, and (2) the decay of the Higgs boson to charm-quark pairs. We find a significant improvement in the performance of the machine learning models used in the longitudinal $WW$ study, while no significant improvement is found for the deep learning models tested. Our charm-quark tagger gives a 14% improvement in the background rejection rate.
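Two of the standard class imbalance techniques such an overview covers, random oversampling and class weighting, can be sketched as follows. The signal/background event counts and the two-feature Gaussian "events" below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical imbalanced dataset: 1000 background events, 50 signal events,
# each with two kinematic-style features.
X_bkg = rng.normal(0.0, 1.0, size=(1000, 2))
X_sig = rng.normal(1.5, 1.0, size=(50, 2))
X = np.vstack([X_bkg, X_sig])
y = np.concatenate([np.zeros(1000), np.ones(50)])

# Technique 1: random oversampling -- resample signal events with
# replacement until the two classes are the same size.
sig_idx = np.flatnonzero(y == 1)
extra = rng.choice(sig_idx, size=1000 - len(sig_idx), replace=True)
X_bal = np.vstack([X, X[extra]])
y_bal = np.concatenate([y, y[extra]])

# Technique 2: class weights -- leave the data alone and weight each
# event inversely to its class frequency in the training loss.
n_sig, n_bkg = int(y.sum()), int((y == 0).sum())
weights = np.where(y == 1, len(y) / (2 * n_sig), len(y) / (2 * n_bkg))

print(np.bincount(y_bal.astype(int)))  # balanced: [1000 1000]
print(weights.sum())                   # weighted class totals match: 1050.0
```

Either the resampled set `(X_bal, y_bal)` or the original data with `weights` passed to the loss can then be fed to the classifier; which helps more is exactly the kind of question the case studies above examine empirically.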

Drug-Drug Adverse Effect Prediction with Graph Co-Attention Machine Learning

Complex or co-existing diseases are commonly treated with drug combinations, which can lead to a higher risk of adverse side effects. The detection of polypharmacy side effects is usually done in Phase IV clinical trials, but plenty remain undiscovered when the drugs are put on the market. Such adverse events affect an increasing proportion of the population (now 15% in the US), so it is of high interest to be able to predict potential side effects as early as possible. Systematic combinatorial screening of possible drug-drug interactions (DDI) is challenging and expensive. However, the recent significant increases in data availability from pharmaceutical research and development efforts offer a novel paradigm for recovering relevant insights for DDI prediction. Accordingly, several recent approaches focus on curating massive DDI datasets (with millions of examples) and training machine learning models on them. Here we propose a neural network architecture able to set state-of-the-art results on this task---using only the type of the side effect and the molecular structure of the drugs---by leveraging a co-attentional mechanism. In particular, we show the importance of integrating joint information from the drug pairs early on when learning each drug's representation.
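A minimal sketch of a co-attentional mechanism of this kind, assuming each drug is represented by a set of per-atom feature vectors: every atom of one drug attends over the atoms of the other, so each drug's representation is conditioned on its partner early on. The random features are placeholders; the paper's model would produce them with a graph neural network, and this sketch is not its exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical per-atom features for two drugs (n_atoms x d); a real model
# would compute these from the molecular graphs.
A = rng.standard_normal((5, 8))   # drug 1: 5 atoms, 8 features
B = rng.standard_normal((7, 8))   # drug 2: 7 atoms, 8 features

def co_attention(A, B):
    """Cross-attend in both directions over a shared affinity matrix."""
    S = A @ B.T                       # pairwise atom-atom affinities (5 x 7)
    A_ctx = softmax(S, axis=1) @ B    # drug-2 context for each drug-1 atom
    B_ctx = softmax(S.T, axis=1) @ A  # drug-1 context for each drug-2 atom
    return A_ctx, B_ctx

A_ctx, B_ctx = co_attention(A, B)
# Pool to fixed-size, pair-aware drug vectors before a side-effect classifier.
drug1 = np.concatenate([A.mean(axis=0), A_ctx.mean(axis=0)])
drug2 = np.concatenate([B.mean(axis=0), B_ctx.mean(axis=0)])
print(drug1.shape, drug2.shape)  # (16,) (16,)
```

The key design point mirrors the abstract's conclusion: the pooled vectors mix each drug's own features with context from its partner, rather than encoding the two drugs in isolation and combining them only at the end.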