If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Imagine an autonomous vehicle traffic sign detector whose accuracy plummets when dealing with rain or unexpected inputs. With machine learning (ML) an increasingly integral part of our daily lives, it is crucial that developers identify such potentially dangerous scenarios before real-world deployment. The rigorous performance evaluation and testing of models has thus become a high priority in the ML community, where an understanding of how and why ML system failures might occur can help with reliability, model refinement, and identifying appropriate human oversight and engagement actions. The process of identifying and characterizing ML failures and shortcomings is, however, extremely complex, and there is currently no effective universal approach for doing so. To address this, a Microsoft research team recently introduced Error Analysis, a responsible AI toolkit for describing and explaining system failures. Error Analysis starts with error identification, illustrated using error heatmaps or error-guided decision trees.
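The error-guided idea can be illustrated with a deliberately simplified sketch (not the toolkit's actual API; all names here are hypothetical): given per-sample features and a flag marking where the model was wrong, search for the single feature split that isolates the highest-error cohort -- in effect, one level of an error-guided decision tree.

```python
# Hedged sketch of error-driven cohort discovery, the idea behind
# error-guided decision trees: find the feature threshold whose two
# cohorts contain the worst concentration of model mistakes.

def best_error_split(samples, errors):
    """samples: list of dicts mapping feature name -> numeric value.
       errors:  parallel list of 0/1 flags (1 = model was wrong).
       Returns ((feature, threshold, worst_cohort_error_rate), base_rate)."""
    base_rate = sum(errors) / len(errors)
    best = None
    for feat in samples[0]:
        values = sorted({s[feat] for s in samples})
        for threshold in values[:-1]:
            left = [e for s, e in zip(samples, errors) if s[feat] <= threshold]
            right = [e for s, e in zip(samples, errors) if s[feat] > threshold]
            # Score a split by the highest error rate either cohort reaches.
            worst = max(sum(left) / len(left), sum(right) / len(right))
            if best is None or worst > best[2]:
                best = (feat, threshold, worst)
    return best, base_rate

# Toy cohort mirroring the example above: the detector fails in rain.
signs = [{"rain": r, "speed": s} for r in (0, 1) for s in (30, 50, 70, 90)]
wrong = [s["rain"] for s in signs]  # error flag: wrong exactly when raining
split, base_rate = best_error_split(signs, wrong)
```

Here the split surfaces the "rain" cohort as a 100% error pocket even though the aggregate error rate is only 50% -- the kind of hidden failure mode aggregate accuracy metrics conceal.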
BETHESDA, Md., Feb. 24, 2020 – Modzy, a leading enterprise artificial intelligence platform, today published "The Race Towards Artificial Intelligence (AI) Adoption" – a report highlighting key challenges and opportunities of AI adoption. In the next few years, many organizations will reach an inflection point for their AI technology programs. With 80% of decision-makers suggesting they will increase AI investment in the next 1-2 years, pressure will grow to demonstrate greater progress and value. But AI technologies will only achieve their potential if organizations can integrate greater explainability, trust and security. "2021 will be the year when those implementing AI will start achieving value at scale, while those spending months training brittle models and failing to catch up will be at an increasing, exponential, disadvantage," said Josh Sullivan, head of Modzy.
This approach also mitigates groupthink and the conservatism that bias produces -- in practice, "people in power will be less likely to give you the benefit of the doubt if you're different. And you respond to that by being more cautious," says Ibarra. And, according to Columbia's McGrath, "the answers to whatever your puzzle is may come from very unexpected places -- it could be a person who normally doesn't have access to power." Make this a routine, not a special exercise. And communicate the strategy -- and the need for change specifically -- in a way that is positive and personal.
In 2007, some of the leading thinkers behind deep neural networks organized an unofficial "satellite" meeting at the margins of a prestigious annual conference on artificial intelligence. The conference had rejected their request for an official workshop; deep neural nets were still a few years away from taking over AI. The bootleg meeting's final speaker was Geoffrey Hinton of the University of Toronto, the cognitive psychologist and computer scientist responsible for some of the biggest breakthroughs in deep nets. He started with a quip: "So, about a year ago, I came home to dinner, and I said, 'I think I finally figured out how the brain works,' and my 15-year-old daughter said, 'Oh, Daddy, not again.'" Hinton continued, "So, here's how it works."
Last July, GPT-3 took the internet by storm. The massive 175 billion-parameter autoregressive language model, developed by OpenAI, showed a startling ability to translate languages, answer questions, and – perhaps most eerily – generate its own coherent passages, poems, and songs when given examples to process. As it turns out, experts were captivated by these abilities, too: captivated enough, in fact, that researchers from OpenAI and a number of universities met several months ago to discuss the technical and sociopolitical implications of the platform. The summit, helmed by OpenAI in partnership with Stanford's Institute for Human-Centered Artificial Intelligence, convened in October. Apart from those two institutions, the remaining participants are not publicly known, as the meeting was held under the Chatham House Rule, whereby attendees are free to use the information shared but may not reveal who said it.
SAVE $20: As of Feb. 24, the Google Nest Audio is on sale for only $79.99 at B&H Photo Video, Best Buy, Target, and the Google Store. After taking each of them for a test drive, Mashable tech reporter Brenda Stolyar noticed "plenty of differences between each one that might sway your opinion." Ultimately, though, she gave the crown to the Nest Audio for several reasons: It looks great, its companion app is easy to use, it's compatible with lots of other smart home accessories, and its voice assistant is the most knowledgeable of the bunch. But don't take our word for it: all four retailers have the Nest Audio on sale for just $79.99, so you can save $20 when you see what all the fuss is about. The Nest Audio comes in five colors: Sage, Sand, Sky, Chalk, and Charcoal.
Some healthcare provider organizations are using machine learning and other forms of artificial intelligence to provide clinicians with the best evidence-based care pathways. A group's aim could be to improve a patient's care plan based on personalized analytics. Another goal could be the further merging of evidence-based care paths with historical utilization and outcomes in order to offer optimal patient care. Provider organizations might be using social determinants of health combined with machine learning to offer clinically meaningful services. Healthcare IT News talked over these ideas with Niall O'Connor, chief technology officer at Cohere Health, a vendor of artificial intelligence technology and services designed to improve the provider, patient and payer experiences.
AI and ML systems have advanced in both sophistication and capability at a staggering rate in recent years. They can now model protein structures based only on the molecule's amino-acid sequence, create poetry and text on par with human writers -- even spot specific individuals in a crowd (assuming their complexion is sufficiently light). But as impressive as these feats of computational prowess are, the field continues to struggle with a number of fundamental moral and ethical issues. A facial recognition system designed to identify terrorists can just as easily be leveraged to monitor peaceful protesters or suppress ethnic minorities, depending on how it is deployed. What's more, the development of AI to date has been largely concentrated in the hands of just a few large companies such as IBM, Google, Amazon and Facebook, as they're among the few with sufficient resources to pour into its development.
ARC Advisory Group engaged in an informative discussion with Derek Gittoes, VP Supply Chain Management Product Strategy at Oracle, as part of ARC's Digital Supply Chain Forum. Derek recently authored an article on Logistics Viewpoints describing recent advancements in logistics predictability through the application of machine learning. And we thought this to be a great opportunity to get further details on this hot topic from a practitioner on the front line of logistics application development. We asked Derek to provide greater detail on a few key points: first, how does machine learning help with predicting shipping transit times? And second, why should shippers and logistics service providers consider using machine learning in their transportation management systems, and why now?
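The core idea -- learning transit times from historical shipments -- can be illustrated with a deliberately simplified sketch (not Oracle's system; the data and names are hypothetical): fit a least-squares line of observed transit days against lane distance, then use it to estimate new shipments. A production TMS model would of course draw on far richer features (carrier, mode, seasonality, port congestion).

```python
# Hedged sketch: predict shipping transit times from historical
# shipments with a one-feature ordinary-least-squares fit, pure stdlib.

def fit_line(xs, ys):
    """OLS for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    return a, mean_y - a * mean_x

# Illustrative history: (lane distance in km, observed transit days)
history = [(200, 1.2), (450, 2.1), (800, 3.4), (1200, 4.9), (1500, 6.1)]
a, b = fit_line([d for d, _ in history], [t for _, t in history])

def predict_transit_days(distance_km):
    """Estimated transit time for a new shipment on a lane of this length."""
    return a * distance_km + b
```

Even this toy version captures the payoff Gittoes describes: instead of quoting a fixed lead time per lane, the estimate is learned from what actually happened on past shipments and improves as history accumulates.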