If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Today, Sony announced a new division at the company to focus exclusively on artificial intelligence that will have offices in Japan, the United States, and Europe. Called Sony AI, the new division will initially begin with three 'flagship projects' in gaming, imaging and sensing, and gastronomy. The company hasn't specified what exactly these projects will be, but a concept video from Sony shows the company's vision for how AI and robotics will change how people eat in the future. The video shows a long kitchen countertop equipped with multiple robotic arms and camera sensors; a human enters the kitchen and begins chopping a vegetable while the sensors observe.
Amazon SageMaker Ground Truth helps you build highly accurate training datasets for machine learning. It can reduce your labeling costs by up to 70% using automatic labeling. This blog post explains the Amazon SageMaker Ground Truth chaining feature with a few examples and shows its potential for labeling your datasets. Chaining reduces time and cost significantly because Amazon SageMaker Ground Truth determines which objects are already labeled and optimizes the data for automated data labeling mode. As a prerequisite, you might want to check the post "Creating hierarchical label taxonomies using Amazon SageMaker Ground Truth," which shows how to achieve multi-step hierarchical labeling, and the documentation on how to use the augmented manifest functionality.
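The chaining idea above can be illustrated with a minimal sketch: an augmented manifest is a JSON Lines file, and a chained job skips any object that already carries the label attribute. The attribute names below ("source-ref", "animal-labels") are illustrative assumptions, not prescribed by the service.

```python
import json

def split_manifest(manifest_lines, label_attribute):
    """Partition augmented-manifest JSON Lines into already-labeled
    records (skipped by a chained job) and records still to label."""
    labeled, unlabeled = [], []
    for line in manifest_lines:
        record = json.loads(line)
        (labeled if label_attribute in record else unlabeled).append(record)
    return labeled, unlabeled

# Toy two-object manifest: the first object was labeled by a previous job.
manifest = [
    '{"source-ref": "s3://bucket/cat.jpg", "animal-labels": 0}',
    '{"source-ref": "s3://bucket/dog.jpg"}',
]
done, todo = split_manifest(manifest, "animal-labels")
print(len(done), len(todo))  # 1 1
```

In an actual chained job this filtering happens inside Ground Truth: you point the new job at the previous job's output manifest, and only the unlabeled objects are sent to workers or the automated labeling model.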
San Diego, Calif., November 14, 2019 -- From companies worth billions of dollars to startups employing a small number of people, UC San Diego engineering alumni are at the core of the robotics ecosystem here in San Diego County. This was clearly evident at the sixth annual robotics forum organized by the UC San Diego Contextual Robotics Institute Nov. 7. The forum focused exclusively on local companies this year and was dubbed the San Diego Robotics Forum for the occasion. The goal was to showcase the breadth and depth of the region's robotics strengths, and solidify San Diego's reputation as Robot Beach. "We have an important mission here to showcase how strong San Diego is in the area of robotics," said Henrik Christensen, director of the UC San Diego Contextual Robotics Institute.
Could the same computer algorithms that teach autonomous cars to drive safely help identify nearby asteroids or discover life in the universe? NASA scientists are trying to figure that out by partnering with pioneers in artificial intelligence (AI)--companies such as Intel, IBM and Google--to apply advanced computer algorithms to problems in space science. Machine learning is a type of AI. It describes the most widely used algorithms and other tools that allow computers to learn from data in order to make predictions and categorize objects much faster and more accurately than a human being can. Consequently, machine learning is widely used to help technology companies recognize faces in photos or predict what movies people would enjoy.
Behaviour change is key both to addressing the challenges facing human health and wellbeing and to promoting the uptake of research findings in health policy and practice. We need to make better use of the vast amount of accumulating evidence from behaviour change intervention (BCI) evaluations and promote the uptake of that evidence into a wide range of contexts. The scale and complexity of the task of synthesising and interpreting this evidence, and increasing evidence timeliness and accessibility, will require increased computer support. The Human Behaviour-Change Project (HBCP) will use Artificial Intelligence and Machine Learning to (i) develop and evaluate a 'Knowledge System' that automatically extracts, synthesises and interprets findings from BCI evaluation reports to generate new insights about behaviour change and improve prediction of intervention effectiveness, and (ii) allow users, such as practitioners, policy makers and researchers, to easily and efficiently query the system to get answers to variants of the question 'What works, compared with what, how well, with what exposure, with what behaviours (for how long), for whom, in what settings and why?'. The HBCP will: a) develop an ontology of BCI evaluations and their reports, linking effect sizes for given target behaviours with intervention content, delivery and mechanisms of action, as moderated by exposure, populations and settings; b) develop and train an automated feature extraction system to annotate BCI evaluation reports using this ontology; c) develop and train machine learning and reasoning algorithms to use the annotated BCI evaluation reports to predict effect sizes for particular combinations of behaviours, interventions, populations and settings; d) build user and machine interfaces for interrogating and updating the knowledge base; and e) evaluate all the above in terms of performance and utility.
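Steps b) and c) of the HBCP plan can be sketched in miniature: annotate a report against a small ontology of behaviour-change techniques, then predict an effect size from the annotations. Everything below is invented for illustration — the ontology terms, the keyword matching (standing in for the trained feature-extraction system), and the weights (standing in for the learned prediction model).

```python
# Toy ontology of behaviour-change techniques (illustrative only).
ONTOLOGY = {"goal setting", "feedback", "self-monitoring"}

def annotate(report_text):
    """Tag a report with the ontology terms it mentions. Plain keyword
    matching stands in for the trained feature-extraction system."""
    text = report_text.lower()
    return {term for term in ONTOLOGY if term in text}

# Invented 'trained' weights: each technique's contribution to effect size.
WEIGHTS = {"goal setting": 0.15, "feedback": 0.10, "self-monitoring": 0.20}

def predict_effect_size(annotations):
    """Predict an effect size as a weighted sum of annotated techniques,
    standing in for the learned reasoning component."""
    return round(sum(WEIGHTS[t] for t in annotations), 2)

report = "The intervention combined goal setting with weekly feedback."
tags = annotate(report)
print(sorted(tags), predict_effect_size(tags))  # ['feedback', 'goal setting'] 0.25
```

The real system would of course use trained NLP models for annotation and learned models conditioned on populations and settings for prediction; the sketch only shows how the two stages compose.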
Health care innovators are developing artificial intelligence algorithms called Explainable AI (XAI) that actually reveal the logic behind their diagnoses. Because their results can be verified, doctors and regulators will be more likely to adopt these algorithms than traditional "black box" AI. However, the transparency that makes these algorithms valuable to practitioners also makes the technology trickier to protect as intellectual property: the very nature of XAI algorithms prevents them from being kept secret, and the law governing patents for diagnostic algorithms is nearly undecipherable. Still, with some legal creativity, there are multiple paths to patent protection for XAI-based diagnostics.
This is a keynote from the O'Reilly Artificial Intelligence Conference in London 2019. See additional keynotes and sessions from this event on the O'Reilly online learning platform.
CAIML #9 took place on November 14 at factor-a – part of Dept, demonstrating how AI can be used for social good and to address societal challenges. "Aid organizations and governments expend great effort resolving the negative impacts of food-insecurity-induced crises such as famines or mass migration. One of the most limiting resources these actors face is the lack of preparation time for consistent and sustainable planning of emergency relief, such as setting up refugee camps or securing the supply of food and energy. Hence, increasing the lead time for preparation is an essential step and will result in saving many lives. The aim of this research is to increase the lead time by developing an ML-based mathematical prediction model that can compute the probability of food insecurity for given areas by learning from historical data. For performing such computations, our prediction model is developed and trained on historical open-access data for the Horn of Africa (2009-2018). We used precipitation and vegetation data derived by remote sensing, as well as socio-economic, medical, armed conflict and disaster data. To overcome spatial inconsistencies in the input data and to meet the requirement of spatially homogeneous input for neural networks, all data has been converted to geo-referenced raster maps. Disaster and armed conflict data has been fitted to districts, while local food market prices have been interpolated. The Integrated Food Security Phase Classification (IPC) has been used as the food-security label. Deep learning methods have been used to build the prediction model. Several analyses were applied to the collected data, such as multicollinearity checks and principal component analysis. Preliminary cross-validated results have encouraged us to further investigate the detection of food insecure areas using open access data."
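Two of the preprocessing steps the abstract mentions, multicollinearity checks and principal component analysis, can be sketched with NumPy on synthetic data. The feature names (precipitation, vegetation, market price) and the correlation threshold are assumptions for illustration, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend each row is one raster cell with precipitation, vegetation-index,
# and market-price features; vegetation is deliberately correlated with rain.
rain = rng.normal(size=500)
veg = 0.9 * rain + 0.1 * rng.normal(size=500)
price = rng.normal(size=500)
X = np.column_stack([rain, veg, price])

# Multicollinearity check: flag feature pairs with |correlation| > 0.8.
corr = np.corrcoef(X, rowvar=False)
high = [(i, j) for i in range(3) for j in range(i + 1, 3)
        if abs(corr[i, j]) > 0.8]
print("highly correlated pairs:", high)  # expect (0, 1): rain vs vegetation

# PCA via SVD on the centred features: explained-variance ratio per component.
Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
ratio = s**2 / np.sum(s**2)
print("explained variance:", np.round(ratio, 2))
```

In a pipeline like the one described, such checks would guide which remote-sensing and socio-economic features to drop or combine before training the deep learning model.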