Expert Systems


r/MachineLearning - [N] The Promise and Limitations of AI

#artificialintelligence

This is a talk from GOTO Chicago 2019 by Doug Lenat, the award-winning AI pioneer who created the landmark machine learning program AM in 1976 and is CEO of Cycorp. I've dropped the full talk abstract below for a read before diving into the talk: Almost everyone who talks about Artificial Intelligence nowadays means training multi-level neural nets on big data. Developing and using those patterns is a lot like what our right brain hemispheres do; it enables AIs to react quickly and, very often, adequately. But we human beings also make good use of our left brain hemisphere, which reasons more slowly, logically, and causally. I will discuss this "other type of AI", i.e., left-brain AI, which comprises a formal representation language, a "seed" knowledge base with hand-engineered default rules of common sense and good domain-specific expert judgement written in that language, and an inference engine capable of producing hundreds-deep chains of deduction, induction, and abduction on that large knowledge base.
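As a rough illustration of the "left brain" style of AI the abstract describes, here is a minimal sketch of forward-chaining inference over a tiny knowledge base of if-then rules. The facts and rules are invented for illustration only and have nothing to do with Cyc's actual representation language or inference engine.

```python
# Minimal forward-chaining sketch: repeatedly apply if-then rules to a set of
# known facts until no rule adds anything new. Facts and rules are invented
# for illustration; a real system like Cyc uses a far richer logic.
facts = {"socrates_is_human"}

rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)  # fire the rule and record its conclusion
            changed = True

print(sorted(facts))
# ['socrates_is_human', 'socrates_is_mortal', 'socrates_will_die']
```

Chaining the two rules gives a (very shallow) example of the deep deduction chains the talk refers to: each fired rule's conclusion becomes a premise available to later rules.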


The Human Behaviour-Change Project: harnessing the power of artificial intelligence and machine learning for evidence synthesis and interpretation

#artificialintelligence

Behaviour change is key both to addressing the challenges facing human health and wellbeing and to promoting the uptake of research findings in health policy and practice. We need to make better use of the vast amount of accumulating evidence from behaviour change intervention (BCI) evaluations and promote the uptake of that evidence into a wide range of contexts. The scale and complexity of the task of synthesising and interpreting this evidence, and of increasing its timeliness and accessibility, will require increased computer support. The Human Behaviour-Change Project (HBCP) will use Artificial Intelligence and Machine Learning to (i) develop and evaluate a 'Knowledge System' that automatically extracts, synthesises and interprets findings from BCI evaluation reports to generate new insights about behaviour change and improve prediction of intervention effectiveness, and (ii) allow users, such as practitioners, policy makers and researchers, to easily and efficiently query the system to get answers to variants of the question 'What works, compared with what, how well, with what exposure, with what behaviours (for how long), for whom, in what settings and why?'. The HBCP will: a) develop an ontology of BCI evaluations and their reports, linking effect sizes for given target behaviours with intervention content, delivery and mechanisms of action, as moderated by exposure, populations and settings; b) develop and train an automated feature extraction system to annotate BCI evaluation reports using this ontology; c) develop and train machine learning and reasoning algorithms to use the annotated BCI evaluation reports to predict effect sizes for particular combinations of behaviours, interventions, populations and settings; d) build user and machine interfaces for interrogating and updating the knowledge base; and e) evaluate all of the above in terms of performance and utility.
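To make the prediction step (c) concrete, here is a hypothetical sketch of how ontology-style annotations of BCI reports might feed a learned effect-size predictor. The feature names, toy data, and choice of scikit-learn model are all assumptions for illustration, not the HBCP's actual ontology or algorithms.

```python
# Hypothetical sketch: ontology-style annotations of BCI evaluation reports
# (behaviour, technique, delivery, population) used to predict an effect size.
# All labels, numbers, and the model choice are illustrative only.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

reports = pd.DataFrame([
    {"behaviour": "smoking", "technique": "goal_setting", "delivery": "app",       "population": "adults",      "effect": 0.21},
    {"behaviour": "smoking", "technique": "feedback",     "delivery": "face2face", "population": "adults",      "effect": 0.35},
    {"behaviour": "diet",    "technique": "goal_setting", "delivery": "app",       "population": "adolescents", "effect": 0.10},
    {"behaviour": "diet",    "technique": "feedback",     "delivery": "face2face", "population": "adults",      "effect": 0.28},
])

features = ["behaviour", "technique", "delivery", "population"]
model = Pipeline([
    ("encode", ColumnTransformer([("onehot", OneHotEncoder(), features)])),
    ("regress", RandomForestRegressor(n_estimators=100, random_state=0)),
])
model.fit(reports[features], reports["effect"])

# Query: predicted effect size for a new combination of intervention and context.
new_case = pd.DataFrame([{"behaviour": "smoking", "technique": "goal_setting",
                          "delivery": "face2face", "population": "adults"}])
print(model.predict(new_case))
```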


Hot New Releases Expert Systems in Artificial Intelligence Books

#artificialintelligence

In artificial intelligence, an expert system is a computer system that emulates the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if-then rules rather than as conventional procedural code. This new second edition improves on the first with the addition of Spark, a machine learning framework from the Apache Foundation. By implementing Spark, machine learning students can easily process much larger data sets and call the Spark algorithms using ordinary Python code. Machine Learning with Spark and Python focuses on two algorithm families (linear methods and ensemble methods) that effectively predict outcomes.
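As a rough idea of what calling Spark algorithms from ordinary Python code looks like, here is a minimal PySpark sketch fitting one linear method and one ensemble method. The tiny in-memory dataset and column names are invented for illustration and are not taken from the book.

```python
# Minimal PySpark sketch: fit a linear method and an ensemble method from
# plain Python. The toy data and column names are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression
from pyspark.ml.classification import GBTClassifier

spark = SparkSession.builder.appName("spark-ml-sketch").getOrCreate()

rows = [(1.0, 2.0, 0.0), (2.0, 1.5, 0.0), (3.0, 3.5, 1.0), (4.0, 4.0, 1.0)]
df = spark.createDataFrame(rows, ["x1", "x2", "label"])

# Assemble raw columns into the single feature vector Spark ML expects.
assembler = VectorAssembler(inputCols=["x1", "x2"], outputCol="features")
train = assembler.transform(df)

# Linear method: ordinary linear regression on the label column.
linear = LinearRegression(featuresCol="features", labelCol="label").fit(train)

# Ensemble method: gradient-boosted trees treating the label as a class.
ensemble = GBTClassifier(featuresCol="features", labelCol="label").fit(train)

print(linear.coefficients)
print(ensemble.featureImportances)

spark.stop()
```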


Overview of artificial intelligence in medicine

#artificialintelligence

Alan Turing (1950) was one of the founders of modern computing and AI. The "Turing test" was based on the idea that the intelligent behavior of a computer is the ability to achieve human-level performance in cognition-related tasks.[1] The 1980s and 1990s saw a surge in interest in AI. Artificial intelligence techniques such as fuzzy expert systems, Bayesian networks, artificial neural networks, and hybrid intelligent systems were used in different clinical settings in health care. In 2016, the biggest chunk of investment in AI research was in healthcare applications compared with other sectors.[2] AI in medicine can be dichotomized into two subtypes: virtual and physical.[3]


Global Cancer Diagnosis and Treatment, Micro-LEDs, Renewable Energy Generation and Storage, and Fault Detection Innovations Report 2019 – ResearchAndMarkets.com – Tech Check News

#artificialintelligence

The "Innovations in Cancer Diagnosis and Treatment, Micro-LEDs, Renewable Energy Generation and Storage, and Fault Detection" report has been added to ResearchAndMarkets.com's offering. The edition also provides insights on the role of macropinocytosis in pancreatic cancer. The TOE covers use of ceramic electrodes for doubling energy density and a biosensor for earlier diagnosis of tumors.


Global Big Data Conference

#artificialintelligence

According to a report by IDC, worldwide spending on artificial intelligence systems is forecast to reach $35.8 billion in 2019, an increase of 44.0% over the amount spent in 2018. The report also predicts that the retail sector will lead the spending, followed by the banking sector. Artificial intelligence is well-positioned to impact various sectors like retail, healthcare, banking, finance, discrete manufacturing, transportation, etc. According to a Gartner survey, 37% of organizations have implemented AI in some way. In its early stages, AI was based on rule-based systems, in which the system depended on a knowledge base of rules to deliver business value.


Artificial intelligence: The growth factor for Cloud GPU market

#artificialintelligence

According to a report by IDC, worldwide spending on artificial intelligence systems is forecast to reach $35.8 billion in 2019, an increase of 44.0% over the amount spent in 2018. The report also predicts that the retail sector will lead the spending, followed by the banking sector. Artificial intelligence is well-positioned to impact various sectors like retail, healthcare, banking, finance, discrete manufacturing, transportation, etc. According to a Gartner survey, 37% of organizations have implemented AI in some way. In its early stages, AI was based on rule-based systems, in which the system depended on a knowledge base of rules to deliver business value.


AI assisted content classification for corporate learning & knowledge base - Software Technology Blog

#artificialintelligence

There is no shortage of training content for employees. However, quick access to the right information is the challenge. Traditionally, L&D departments spend significant time on instructor-led training and on aggregating and buying third-party training content. Other learning avenues, like on-the-job training, personalized training, micro-learning, and data- or event-driven training programs, are equally important. Employees today learn from content spread across internal and external systems, including intranets, MOOC platforms, LMS, social media platforms, external training content providers, document management systems, collaboration platforms, and even forums, Q&A portals, email and messenger/chat platforms.


Can Pretrained Language Models Replace Knowledge Bases?

#artificialintelligence

The recent rapid development of pretrained language models has produced significant performance improvements on downstream NLP tasks. These pretrained language models compile and store relational knowledge they encounter in training data, which prompted Facebook AI Research and University College London to introduce their LAMA (LAnguage Model Analysis) probe to explore the feasibility of using language models as knowledge bases. The term "knowledge base" was introduced in the 1970s. Unlike databases, which store figures, tables, and other straightforward data in computer memory, a knowledge base is able to store more complex structured and unstructured information. A knowledge base system can be likened to a library that stores facts in a specific field.
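As a rough illustration of the kind of probing LAMA performs, the sketch below queries a pretrained masked language model with a cloze-style factual statement, treating the model itself as a lookup over the "knowledge" it absorbed during training. It assumes the Hugging Face transformers library and the bert-base-uncased checkpoint, which are not necessarily the exact tooling or models used in the paper.

```python
# Cloze-style factual probe: ask a pretrained masked LM to fill in a fact,
# treating the model as a kind of knowledge base. Assumes the Hugging Face
# transformers library and the bert-base-uncased checkpoint.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for result in fill_mask("The theory of relativity was developed by [MASK]."):
    print(f"{result['token_str']:>12}  {result['score']:.3f}")
```

If the model's top-ranked fillers match the true fact, the probe counts that relational knowledge as recoverable from the language model without any explicit, curated knowledge base.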

