Artificial Intelligence and Food Safety: Hype vs. Reality


To understand the promise and peril of artificial intelligence for food safety, consider the story of Larry Brilliant. Brilliant is a self-described "spiritual seeker," "social change addict," and "rock doc." During his medical internship in 1969, he responded to a San Francisco Chronicle columnist's call for medical help for the Native Americans then occupying Alcatraz. Then came Warner Bros.' invitation to join the cast of Medicine Ball Caravan, a sort-of sequel to Woodstock Nation. That caravan ultimately led to a detour to India, where Brilliant spent two years studying in a monastery in the Himalayan foothills under the guru Neem Karoli Baba. Toward the end of his stay, Karoli Baba informed Brilliant of his calling: join the World Health Organization (WHO) and help eradicate smallpox. Brilliant joined the WHO as a medical health officer, part of a team that collectively made over 1 billion house calls. In 1977, he observed the last human case of smallpox, and WHO subsequently declared the disease eradicated. After a decade battling smallpox, Brilliant went on to establish and lead foundations and start-up companies, and to serve as a professor of international health at the University of Michigan. As one corporate brand manager wrote, "There are stories that are so incredible that not even the creative minds that fuel Hollywood could write them with a straight face."[1]

Building the engine that drives digital transformation

MIT Technology Review

This is the consensus view of an MIT Technology Review Insights survey of 210 technology executives, conducted in March 2021. These respondents report that they need, and still often lack, the ability to develop new digital channels and services quickly, and to optimize them in real time. Underpinning these waves of digital transformation are two fundamental drivers: the ability to serve and understand customers better, and the need to increase employees' ability to work more effectively toward those goals. Two-thirds of respondents indicated that more efficient customer experience delivery was the most critical objective, followed closely by the use of analytics and insight to improve products and services (60%).

MIT Engineers Create a Programmable Digital Fiber – With Memory, Sensors, and AI


MIT researchers have created the first fiber with digital capabilities: sewn into a shirt, it can sense, collect, store, and analyze data using a neural network. The digital fiber contains memory, temperature sensors, and a trained neural network program for inferring physical activity. Yoel Fink, a professor of materials science and electrical engineering, a Research Laboratory of Electronics principal investigator, and the senior author of the study, says digital fibers expand the possibilities for fabrics to uncover hidden patterns in the human body that could be used for physical performance monitoring, medical inference, and early disease detection. Or you might someday store your wedding music in the gown you wore on the big day -- more on that later.

GBT Is Researching To Develop An AI Empowered, Wireless Patient Health Monitoring System


SAN DIEGO, June 03, 2021 (GLOBE NEWSWIRE) -- GBT Technologies Inc. (OTC PINK: GTCH) ("GBT" or the "Company") has commenced research with the goal of developing an AI-empowered, wireless patient monitoring system. The project's internal code name is "Apollo". The technology will be based on radio waves and empowered by machine learning. Current health-related monitoring devices are typically wearable or invasive. These self-reporting systems generally monitor a patient's vitals, keep logs, track sleeping habits, and the like.

InteliCare awarded $100K grant from NSSN to improve machine learning for assisted living – Software


InteliCare has been awarded a $100,000 grant from the New South Wales Smart Sensing Network ("NSSN") to develop its machine learning (ML) capability in conjunction with the University of Sydney (USyd) and Macquarie University (MU). The company is negotiating an agreement with USyd, MU, and the NSSN under which these funds will help finance a one-year joint project delivered by the universities' computer science departments. The goal is to build ML algorithms that can predict and prevent the chronic disease and mental health deterioration that can lead to a loss of independence and an increased risk of injury. In addition to the NSSN funds, InteliCare will provide a co-contribution of $152,898 in cash, and the universities will provide $161,021 of in-kind support. Ongoing development beyond the initial project will require the company to budget from working capital.

From Human-Computer Interaction to Human-AI Interaction: New Challenges and Opportunities for Enabling Human-Centered AI Artificial Intelligence

While AI has benefited humans, it may also harm them if not appropriately developed. We conducted a literature review of current related work in developing AI systems from an HCI perspective. Unlike other approaches, our focus is on the unique characteristics of AI technology and the differences between non-AI computing systems and AI systems. We further elaborate on the human-centered AI (HCAI) approach that we proposed in 2019. Our review and analysis highlight unique issues in developing AI systems that HCI professionals have not encountered in non-AI computing systems. To further enable the implementation of HCAI, we promote the research and application of human-AI interaction (HAII) as an interdisciplinary collaboration. There are many opportunities for HCI professionals to play a key role and make unique contributions in the main HAII areas we identified. To support future HCI practice in the HAII area, we also offer enhanced HCI methods and strategic recommendations. In conclusion, we believe that promoting HAII research and application will further enable the implementation of HCAI, allowing HCI professionals to address the unique issues of AI systems and develop human-centered AI systems.

On risk-based active learning for structural health monitoring Machine Learning

A primary motivation for the development and implementation of structural health monitoring (SHM) systems is the prospect of gaining the ability to make informed decisions regarding the operation and maintenance of structures and infrastructure. Unfortunately, descriptive labels for measured data corresponding to health-state information for the structure of interest are seldom available prior to the implementation of a monitoring system. This issue limits the applicability of traditional supervised and unsupervised approaches to machine learning in the development of statistical classifiers for decision-supporting SHM systems. The current paper presents a risk-based formulation of active learning, in which the querying of class-label information is guided by the expected value of said information for each incipient data point. When applied to structural health monitoring, the querying of class labels can be mapped onto the inspection of a structure of interest in order to determine its health state. The risk-based active learning process is explained and visualised via a representative numerical example and subsequently applied to the Z24 Bridge benchmark. The results of the case studies indicate that a decision-maker's performance can be improved via risk-based active learning of a statistical classifier, such that the decision process itself is taken into account.
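The expected-value-of-information criterion behind this kind of risk-based querying can be sketched in a few lines. The sketch below is a minimal illustration, not the paper's implementation: it assumes a two-class (healthy/damaged) posterior from some classifier, and the utility matrix, inspection cost, and posterior values are invented for the example.

```python
import numpy as np

def expected_value_of_information(posterior, utility):
    """EVPI for one data point: expected utility if a perfect label
    were available, minus the utility of the best action under the
    current class posterior.

    posterior : (K,) class probabilities from the current classifier
    utility   : (A, K) utility of taking action a when the true class is k
    """
    # Best action without the label: maximise expected utility over actions.
    eu_no_label = (utility @ posterior).max()
    # With a perfect label, pick the best action for each class,
    # weighted by how likely that class is.
    eu_with_label = (utility.max(axis=0) * posterior).sum()
    return eu_with_label - eu_no_label

# Hypothetical SHM decision: classes {healthy, damaged},
# actions {do nothing, repair}.
utility = np.array([[0.0, -100.0],    # do nothing: costly if damaged
                    [-10.0, -10.0]])  # repair: fixed cost either way
inspection_cost = 5.0

posterior = np.array([0.7, 0.3])      # classifier is uncertain
evpi = expected_value_of_information(posterior, utility)
query = evpi > inspection_cost        # inspect (query label) only if worth it
```

With these numbers, the EVPI works out to 7, which exceeds the inspection cost of 5, so this point would be queried; a point the classifier is confident about would yield a low EVPI and be skipped, which is how the decision process itself shapes the labelling effort.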

Intelligent interactive technologies for mental health and well-being Artificial Intelligence

The field received significant interest in the previous decade, mainly due to advances in automated machine learning (ML) and deep learning (DL). These methods learn useful patterns from large amounts of data and keep the acquired knowledge as model structures and parameters that can then be applied to make predictions on unseen data [1]. The models are either a set of elements or features that contribute to decision-making (in ML) or are organized into several layers of abstraction, such as neural networks, for general and specific interpretation tasks (in DL). Healthcare provision and medicine are among the most significant challenges for AI, as they are pillars of a global society and demand higher-quality assistance for the healthcare workforce [2]. An emerging and expanding domain for the application of AI is mental health. Readily available and ubiquitous devices and applications enable the provision of flexible mental care: on demand, at any time, both at healthcare facilities and at home.
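The idea of knowledge stored as parameters and reapplied to unseen data can be made concrete with a toy example. The sketch below fits a one-weight linear model by plain gradient descent; the data and learning rate are invented purely for illustration.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])    # training inputs
y = 2.0 * x + 1.0                      # training targets (the pattern to learn)

w, b = 0.0, 0.0                        # the "acquired knowledge": parameters
for _ in range(2000):                  # gradient descent on mean squared error
    pred = w * x + b
    w -= 0.05 * 2 * np.mean((pred - y) * x)
    b -= 0.05 * 2 * np.mean(pred - y)

unseen = 10.0                          # a value never seen during training
print(w * unseen + b)                  # ≈ 21.0: the learned pattern generalises
```

A deep learning model differs in scale and structure (many layers of such parameters) but follows the same principle: patterns extracted from data are retained as parameters and interpreted on new inputs.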

A Conversational Agent System for Dietary Supplements Use Artificial Intelligence

Conversational agent (CA) systems have been applied to the healthcare domain, but no such system exists to answer consumers' questions about dietary supplement (DS) use, despite the widespread use of DS. In this study, we develop the first CA system for DS use. Methods: Our CA system, built on the MindMeld framework, consists of three components: question understanding, a DS knowledge base, and answer generation. We collected and annotated 1509 questions to develop a natural language understanding module (e.g., a question type classifier and a named entity recognizer), which was then integrated into the MindMeld framework. The CA then queries the DS knowledge base (i.e., iDISK) and generates answers using rule-based slot-filling techniques. We evaluated the algorithms of each component and the CA system as a whole. Results: A CNN is the best question classifier, with an F1 score of 0.81, and a CRF is the best named entity recognizer, with an F1 score of 0.87. The system achieves an overall accuracy of 81% and an average score of 1.82, with succ@3 of 76.2% and succ@2 of approximately 66%. Conclusion: This study develops the first CA system for DS use, using the MindMeld framework and the iDISK domain knowledge base.
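The rule-based slot-filling step in such a pipeline can be pictured with a minimal sketch. Everything below is illustrative, not the authors' code: the question type and entities are assumed to arrive pre-computed (the paper derives them from its CNN classifier and CRF recognizer), the templates are invented, and a toy dictionary stands in for the iDISK knowledge base.

```python
# Answer templates keyed by question type; slots appear in {braces}.
TEMPLATES = {
    "usage": "{supplement} is commonly used for: {uses}.",
    "interaction": "{supplement} may interact with: {interactions}.",
}

# Toy stand-in for the iDISK knowledge base.
KNOWLEDGE_BASE = {
    "ginkgo": {"uses": "memory support", "interactions": "warfarin"},
}

def answer(question_type, entities):
    """Fill the template for the predicted question type using the
    recognized supplement entity and facts retrieved from the KB."""
    supplement = entities.get("supplement")
    facts = KNOWLEDGE_BASE.get(supplement)
    if facts is None:
        return "Sorry, I have no information on that supplement."
    return TEMPLATES[question_type].format(supplement=supplement, **facts)

print(answer("interaction", {"supplement": "ginkgo"}))
# prints "ginkgo may interact with: warfarin."
```

The appeal of rule-based generation here is predictability: because DS answers are safety-sensitive, a filled template can only ever surface facts that exist in the curated knowledge base.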

An Intelligent Passive Food Intake Assessment System with Egocentric Cameras Artificial Intelligence

Malnutrition is a major public health concern in low- and middle-income countries (LMICs). Understanding food and nutrient intake across communities, households, and individuals is critical to the development of health policies and interventions. To ease the procedure of conducting large-scale dietary assessments, we propose an intelligent passive food intake assessment system using egocentric cameras, particularly for households in Ghana and Uganda. Algorithms are first designed to remove redundant images to minimise storage requirements. At run time, deep learning-based semantic segmentation is applied to recognise multiple food types, and newly designed handcrafted features are extracted for monitoring the weight of consumed food. Comprehensive experiments are conducted to validate our methods on an in-the-wild dataset captured under settings that simulate the unique LMIC conditions, with participants of Ghanaian and Kenyan origin eating common Ghanaian/Kenyan dishes. To demonstrate efficacy, experienced dietitians were involved in this research to perform visual portion size estimation, and their predictions are compared to our proposed method. The promising results show that our method can reliably monitor food intake and give feedback on users' eating behaviour, providing guidance for dietitians in regular dietary assessment.
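The redundant-image-removal step can be illustrated with a simple frame-deduplication sketch: egocentric cameras capture long runs of near-identical frames, and a cheap appearance summary is enough to drop most of them. The greyscale-histogram representation and the L1-distance threshold below are assumptions for illustration; the paper's actual algorithm may differ.

```python
import numpy as np

def histogram(frame, bins=32):
    """Normalised greyscale histogram of an image with values in 0..255."""
    h, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return h / h.sum()

def deduplicate(frames, threshold=0.5):
    """Keep a frame only if its histogram differs from the previously
    kept frame by more than `threshold` in L1 distance."""
    kept = [frames[0]]
    ref = histogram(frames[0])
    for frame in frames[1:]:
        h = histogram(frame)
        if np.abs(h - ref).sum() > threshold:
            kept.append(frame)
            ref = h
    return kept

# Toy usage: two near-identical dark frames followed by a bright frame.
rng = np.random.default_rng(0)
dark = rng.integers(0, 50, size=(8, 8))
frames = [dark, dark.copy(), rng.integers(200, 256, size=(8, 8))]
print(len(deduplicate(frames)))  # the duplicate dark frame is dropped
```

Only frames that survive this filter would then be passed to the expensive semantic-segmentation stage, which is what makes the approach practical on storage- and compute-limited hardware.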