If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
These are exciting times for computational sciences, with the digital revolution permeating a variety of areas and radically transforming business, science, and our daily lives. The Internet and the World Wide Web, GPS, satellite communications, remote sensing, and smartphones are dramatically accelerating the pace of discovery, engendering globally connected networks of people and devices. The rise of practically relevant artificial intelligence (AI) is also playing an increasing part in this revolution, fostering e-commerce, social networks, personalized medicine, IBM Watson and AlphaGo, self-driving cars, and other groundbreaking transformations. Unfortunately, humanity is also facing tremendous challenges. Nearly a billion people still live below the international poverty line, and human activities and climate change are threatening our planet and the livelihood of current and future generations. Moreover, the impact of computing and information technology has been uneven, mainly benefiting profitable sectors while delivering fewer societal and environmental benefits, further exacerbating inequality and the degradation of our planet. Our vision is that computer scientists can and should play a key role in helping address societal and environmental challenges in pursuit of a sustainable future, while also advancing computer science as a discipline. For over a decade, we have been deeply engaged in computational research to address societal and environmental challenges, while nurturing the new field of Computational Sustainability.
Science has always hinged on the idea that researchers must be able to prove and reproduce the results of their research. Simply put, that is what makes science...science. Yet in recent years, as computing power has increased, the cloud has taken shape, and data sets have grown, a problem has appeared: it has become increasingly difficult to generate the same results consistently--even when researchers use the same dataset. "One basic requirement of scientific results is reproducibility: shake an apple tree, and apples will fall downwards each and every time," observes Kai Zhang, an associate professor in the department of statistics and operations research at The University of North Carolina, Chapel Hill. "The problem today is that in many cases, researchers cannot replicate existing findings in the literature and they cannot produce the same conclusions. This is undermining the credibility of scientists and science. It is producing a crisis."
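The failure mode Zhang describes is easy to demonstrate: the same analysis on the same data can yield different numbers unless every source of randomness is pinned down. The following is a minimal, illustrative sketch (the function and data here are invented for the example, not taken from any cited study):

```python
import random

# The "dataset" is fixed; only the analysis introduces randomness.
data = list(range(100))

def bootstrap_mean(seed=None):
    """Mean of a bootstrap resample; deterministic only if seeded."""
    rng = random.Random(seed)
    resample = [rng.choice(data) for _ in range(len(data))]
    return sum(resample) / len(resample)

# Unseeded runs on identical data generally disagree with each other:
a, b = bootstrap_mean(), bootstrap_mean()

# Seeded runs are reproducible, run after run and machine after machine:
x, y = bootstrap_mean(42), bootstrap_mean(42)
assert x == y
```

Recording seeds is only one ingredient; library versions, hardware, and parallel execution order introduce further nondeterminism that seeding alone does not remove.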
The majority of experts and opinion leaders believe that artificial intelligence (AI) is going to revolutionise many industries, including healthcare. In the short term, the power and potential of AI appear most suitable for complementing human expertise. In other words, machines will help humans do a better job. Consequently, it is anticipated that AI will help with repetitive tasks, in-depth quantification and classification of findings, improved patient and disease phenotyping and, ultimately, with better outcomes for patients, physicians, hospital administrators, insurance companies and governments. This focus issue of the Netherlands Heart Journal aims to help general cardiologists explore the state of the art of AI in cardiology.
Leading nations such as Germany, Singapore and South Korea have integrated AI and robotics into the healthcare sector. According to an ITIF report, Korea is the leading nation for AI adoption, followed by Singapore, China and Taiwan. Healthcare systems around the world, notably the UK's National Health Service, have already deployed AI health-assistant applications to streamline the clinical process, giving patients information and facilitating appointments with clinicians. The Indian software company Sigtuple created an AI-based telepathology system that automates smart microscopes to capture images and upload them to the cloud, enabling pathologists to make diagnoses more efficiently.
Looking at the way we live today, it's easy to think that relatively recent discoveries and innovations in science and technology are responsible for our modern lifestyle. But even the newest devices and equipment today have their foundations in technology developed centuries ago. The technologies used for information exchange, communication, transportation and many other essential aspects of our lives are all the result of a series of inventions and innovations that go back well into the past. Let's take a look at some of the most crucial technological advancements in history. Using glass to refract light is a simple idea, but it took humanity a long time to discover it.
AI should be built on rigorous knowledge... Note: This is a follow-up to an earlier article on causal machine learning, "AI Needs More Why". There's much to be excited about with artificial intelligence (AI) in healthcare: Google AI is improving the workflow of clinicians with predictive models for diabetic retinopathy, many new approaches are achieving expert-level performance in tasks such as classification of skin cancer, and others are surpassing the capabilities of doctors -- notably the recent report of DeepMind's AI for predicting acute kidney disease, capable of detecting potentially fatal kidney injuries 48 hours before symptoms are recognized by doctors. Yet medical practitioners and researchers at the intersection of machine learning (ML) and medicine are quick to point out that these successes are not representative of the more nuanced, non-trivial challenges presented by medical research and clinical applications. These ML success stories (notably all deep learning) are disease prediction problems, learning patterns that map well-defined inputs to well-labeled outputs. Domains where instinctive pattern recognition works powerfully are what psychologist Robin Hogarth termed "kind learning environments".
To predict 72-h and 9-day emergency department (ED) return by using gradient boosting on an expansive set of clinical variables from the electronic health record. This retrospective study included all adult discharges from a level 1 trauma center ED and a community hospital ED covering the period of March 2013 to July 2017. A total of 1500 variables were extracted for each visit, and samples were split randomly into training, validation, and test sets (80%, 10%, and 10%). Gradient boosting models were fit on three selections of the data: administrative data (demographics, prior hospital usage, and comorbidity categories), data available at triage, and the full set of data available at discharge. A logistic regression (LR) model built on administrative data was used for baseline comparison. Finally, the top 20 most informative variables identified from the full gradient boosting models were used to build a reduced model for each outcome.
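The pipeline described above can be sketched in a few lines of scikit-learn. This is a minimal illustration on synthetic data, not the study's code: the feature counts, the "administrative" column subset, and all hyperparameters are invented stand-ins, and the study's three data selections are collapsed to one baseline-versus-full comparison.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the EHR extract: 50 variables per visit,
# binary outcome standing in for 72-hour ED return.
X, y = make_classification(n_samples=2000, n_features=50,
                           n_informative=10, random_state=0)

# 80% / 10% / 10% train / validation / test split, as in the study.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.2, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.5, random_state=0)

# Baseline: logistic regression on an "administrative" subset
# (here, arbitrarily, the first 5 columns).
admin = slice(0, 5)
lr = LogisticRegression(max_iter=1000).fit(X_train[:, admin], y_train)
lr_auc = roc_auc_score(y_test, lr.predict_proba(X_test[:, admin])[:, 1])

# Full gradient boosting model on all available variables.
gb = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
gb_auc = roc_auc_score(y_test, gb.predict_proba(X_test)[:, 1])

# Reduced model: refit on the 20 most informative variables,
# ranked by the full model's feature importances.
top20 = np.argsort(gb.feature_importances_)[::-1][:20]
gb_small = GradientBoostingClassifier(random_state=0).fit(
    X_train[:, top20], y_train)
small_auc = roc_auc_score(
    y_test, gb_small.predict_proba(X_test[:, top20])[:, 1])

print(f"LR (admin only): {lr_auc:.3f}")
print(f"GB (full):       {gb_auc:.3f}")
print(f"GB (top 20):     {small_auc:.3f}")
```

The validation split is unused in this sketch; in practice it would tune the boosting hyperparameters (tree depth, learning rate, number of estimators) before the single final evaluation on the test set.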
Science-fiction can sometimes be a good guide to the future. In the film Upgrade (2018) Grey Trace, the main character, is shot in the neck. His wife is shot dead. Trace wakes up to discover that not only has he lost his wife, but he now faces a future as a wheelchair-bound quadriplegic. He is implanted with a computer chip called Stem designed by famous tech innovator Eron Keen – any similarity with Elon Musk must be coincidental – which will let him walk again.
The recent surveys, studies, forecasts and other quantitative assessments of the health and progress of AI estimated the impact on productivity of human-machine collaboration, the number of jobs that could be automated in major U.S. cities, and the size of the future AI in retail and healthcare markets; and found AI optimism among the general population, algorithms outperforming (again) pathologists, and that our very limited understanding of how our brains learn may improve machine learning.

DeepMind has developed a machine learning model that can label most animals at Tanzania's Serengeti National Park at least as well as humans while shortening the process by up to 9 months (it normally takes up to a year for volunteers to return labeled photos) [Engadget]

In a simulation, biological learning algorithms outperformed state-of-the-art optimal learning curves in supervised learning of feedforward networks, indicating "the potency of neurobiological mechanisms" and opening "opportunities for developing a superior class of deep learning algorithms" [Scientific Reports]

The AI in retail market is estimated to reach $4.3 billion by 2024 [P&S Intelligence] [e.g., Nike acquires Celect, August 6, 2019]

The AI in healthcare market is estimated to reach $12.2 billion by 2023 [Market Research Future] [e.g., BlueDot has raised $7 million in Series A funding, August 7, 2019]

AI companies funded in the last 3 months: 417, for total funding of $8.7 billion

Data is eating the world quote of the week: "Although it is fashionable to say that we are producing more data than ever, the reality is that we always produced data, we just didn't know how to capture it in useful ways"--Subbarao Kambhampati, Arizona State University

AI is eating the world quote of the week: "We advocate for a new perspective for designing benchmarks for measuring progress in AI. Unlike past decades where the community constructed a static benchmark dataset to work on for the next decade or two, we propose that future benchmarks should dynamically evolve together with the evolving state-of-the-art"--Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi, Allen Institute for Artificial Intelligence and the University of Washington