If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Machine Learning is an algorithmic approach to creating computer models that can learn and adapt from a given data set; these models can then make useful predictions on similar but never-before-seen data. It is often described as a subset of Artificial Intelligence and forms the very base on which AI models are built. Machine Learning rests on the idea that machines can be designed to imitate the human behaviour of learning, adapting skills, and applying them where necessary. Just as living beings learn from every experience and use that learning to make future decisions, the Machine Learning approach creates models that are first trained to 'learn' a data-set distribution. The trained models then predict results by applying the knowledge gained during training, with reasonably high accuracy.
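The train-then-predict cycle described above can be sketched in a few lines. This is a deliberately minimal toy (a least-squares line fit, not a full ML pipeline): the model "learns" the parameters of a linear relationship from training data, then predicts on an input it has never seen.

```python
# Minimal illustration of the train/predict cycle:
# learn from a data set, then predict on never-seen-before input.

def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b (the 'training' step)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Training data drawn from the underlying relationship y = 2x + 1
train_x = [1, 2, 3, 4, 5]
train_y = [3, 5, 7, 9, 11]

a, b = fit_line(train_x, train_y)

# Prediction on an input outside the training set
print(round(a * 10 + b))  # -> 21
```

Real models differ only in scale and flexibility: instead of two parameters of a line, they fit millions of parameters to a high-dimensional data distribution, but the same learn-then-generalize loop applies.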
In an area of West Africa 30 times larger than Denmark, an international team, led by University of Copenhagen and NASA researchers, has counted over 1.8 billion trees and shrubs. The 1.3 million km2 area covers the western-most portion of the Sahara Desert, the Sahel and what are known as sub-humid zones of West Africa. "We were very surprised to see that quite a few trees actually grow in the Sahara Desert, because up until now, most people thought that virtually none existed. We counted hundreds of millions of trees in the desert alone. Doing so wouldn't have been possible without this technology. Indeed, I think it marks the beginning of a new scientific era," asserts Assistant Professor Martin Brandt of the University of Copenhagen's Department of Geosciences and Natural Resource Management, lead author of the study's scientific article, now published in Nature.
Back in January, Google Health, the branch of Google focused on health-related research, clinical tools, and partnerships for health care services, released an AI model trained on over 90,000 mammogram X-rays that the company said achieved better results than human radiologists. Google claimed that the algorithm could recognize more false negatives -- the kind of images that look normal but contain breast cancer -- than previous work, but some clinicians, data scientists, and engineers take issue with that statement. In a rebuttal published today in the journal Nature, over 19 coauthors affiliated with McGill University, the City University of New York (CUNY), Harvard University, and Stanford University said that the lack of detailed methods and code in Google's research "undermines its scientific value." Science in general has a reproducibility problem -- a 2016 poll of 1,500 scientists reported that 70% of them had tried but failed to reproduce at least one other scientist's experiment -- but it's particularly acute in the AI field. At ICML 2019, 30% of authors failed to submit their code with their papers by the start of the conference.
Why does a secondary data store matter for AI? In my previous blog in this data store series, I discussed how the real selection criterion for an AI/ML data platform is how to strike the best balance between capacity (cost per GB stored) and performance (cost per GB of throughput). Indeed, to support enterprise AI programs, the data architecture must deliver both high performance (needed for AI training and validation) and high capacity (needed to store the huge amounts of data that AI training requires). While these two capabilities can be hosted on the same system (an integrated data platform), in large infrastructures they are typically hosted in two separate, specialized systems (a two-tier architecture). This post continues the series of blogs dedicated to data stores for AI and advanced analytics.
Can we design recommenders that encourage user trajectories aligned with the true underlying user utilities? Beyond engagement, user satisfaction and responsibility are emerging as important pillars of the recommendation problem. Motivated by this, we will discuss various efforts that use the reward function as an important lever in Reinforcement Learning (RL)-based recommenders, so as to teach the model that for certain states (i.e., the latent user representation at a certain point of the trajectory) certain actions (i.e., items to recommend) will bring higher user utility than others. We will also outline current and future directions for overcoming the challenges of signal sparsity and the interplay among various reward signals. I am a Research Engineer at Google, leading several efforts on recommender systems and reinforcement learning in Google Brain.
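To make the reward-as-lever idea concrete, here is a hypothetical toy sketch (not Google's system): a tabular Q-learning recommender over two latent user states and two item types, where the reward blends a raw engagement signal with a satisfaction signal. The state names, item types, reward weights, and transition rule are all invented for illustration.

```python
import random

random.seed(0)

STATES = ["casual", "engaged"]        # latent user representations
ACTIONS = ["clickbait", "quality"]    # items to recommend

def reward(state, action, w_satisfaction=0.7):
    """Assumed reward model: clickbait maximizes short-term engagement
    but yields low satisfaction; the weight trades one off for the other."""
    engagement = 1.0 if action == "clickbait" else 0.6
    satisfaction = 0.1 if action == "clickbait" else 0.9
    return (1 - w_satisfaction) * engagement + w_satisfaction * satisfaction

def next_state(state, action):
    # Assumed dynamics: quality content moves users to the "engaged" state.
    return "engaged" if action == "quality" else "casual"

# Standard tabular Q-learning with epsilon-greedy exploration
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2

state = "casual"
for _ in range(5000):
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    r = reward(state, action)
    s2 = next_state(state, action)
    Q[(state, action)] += alpha * (
        r + gamma * max(Q[(s2, a)] for a in ACTIONS) - Q[(state, action)]
    )
    state = s2

best = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(best)  # with this reward weighting, "quality" wins in both states
```

The point of the sketch is the lever itself: change `w_satisfaction` toward 0 and the learned policy flips to recommending clickbait, showing how the reward function, not the learning algorithm, encodes what trajectories the recommender steers users toward.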
SOCs across the globe are most concerned with advanced threat detection and are increasingly looking to next-gen automation tools like AI and ML technologies to proactively safeguard the enterprise, Micro Focus reveals. The report's findings show that over 93 percent of respondents employ AI and ML technologies with the leading goal of improving advanced threat detection capabilities, and that over 92 percent of respondents expect to use or acquire some form of automation tool within the next 12 months. These findings indicate that as SOCs continue to mature, they will deploy next-gen tools and capabilities at an unprecedented rate to address gaps in security. "The odds are stacked against today's SOCs: more data, more sophisticated attacks, and larger surface areas to monitor. However, when properly implemented, AI technologies such as unsupervised machine learning are helping to fuel next-generation security operations, as evidenced by this year's report," said Stephan Jou, CTO Interset at Micro Focus. "We're observing more and more enterprises discovering that AI and ML can be remarkably effective and augment advanced threat detection and response capabilities, thereby accelerating the ability of SecOps teams to better protect the enterprise."
From instantaneous translation to conversational interfaces, artificial-intelligence (AI) technologies are making ever more evident impacts on our lives. This is particularly true in the financial-services sector, where challengers are already launching disruptive AI-powered innovations. To remain competitive, incumbent banks must become "AI first" in vision and execution, as discussed in our previous article (Suparna Biswas, Brant Carson, Violet Chung, Shwaitang Singh, and Renny Thomas, "AI-bank of the future: Can banks meet the AI challenge?"). If fully integrated, these capabilities can strengthen engagement significantly, supporting customers' financial activities across diverse online and physical contexts with intelligent, highly personalized solutions delivered through an interface that is intuitive, seamless, and fast.
In developing a system to help decipher lost languages, MIT researchers studied the language of Ugaritic, which is related to Hebrew and has previously been analyzed and deciphered by linguists. System developed at MIT CSAIL aims to help linguists decipher languages that have been lost to history. Recent research suggests that most languages that have ever existed are no longer spoken. Dozens of these dead languages are also considered to be lost, or "undeciphered" -- that is, we don't know enough about their grammar, vocabulary, or syntax to be able to actually understand their texts. Lost languages are more than a mere academic curiosity; without them, we miss an entire body of knowledge about the people who spoke them.
On 19 August 2020, IBM Watson Assistant launched autolearning. The tagline from IBM is, "Empower your skill to learn automatically with autolearning." This sounds very promising and is indeed a step in the right direction. The big question, of course, is to what extent it learns automatically. For a full and detailed report on Watson Assistant's disambiguation function, I suggest this article. The ideal chatbot conversation is just that: conversation-like, in natural language and highly unstructured.
As the name suggests, a chatbot is a bot that can chat. Everyone has encountered a chatbot in day-to-day life, be it on a banking website, guiding you through a loan or credit-card application, or a customer-service bot handling your queries and complaints. So how exactly does a chatbot understand what the user wants? The answer is pretty simple: it works much the way a human being does, by understanding and comprehending with a brain. So what is the brain of a chatbot? The basics of a chatbot are rooted in a concept called AI, or artificial intelligence, which makes it intelligent enough to absorb what the user means and respond in the same context. Its applications are limitless, from the simple "how to apply for a loan" to the complex "book a flight ticket after checking the weather and lowest price." So how many types of chatbot can there be?
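The "brain" of the simplest chatbots can be sketched as an intent classifier plus a response table. The intents, keyword sets, and responses below are invented for illustration, and real systems use trained NLU models rather than word overlap, but the pipeline — understand the intent, then respond in context — is the same.

```python
# Hypothetical minimal chatbot brain: match the user's message to a
# known intent by word overlap, then route to that intent's response.

INTENTS = {
    "apply_loan":  {"apply", "loan", "credit", "card"},
    "book_flight": {"book", "flight", "ticket", "weather", "price"},
    "complaint":   {"complaint", "issue", "problem", "service"},
}

RESPONSES = {
    "apply_loan":  "Sure, let me guide you through the loan application.",
    "book_flight": "Searching flights with the best price and weather...",
    "complaint":   "Sorry to hear that. Logging your complaint now.",
}

def classify(message):
    """Return the intent whose keywords best overlap the message, or None."""
    words = set(message.lower().split())
    scores = {intent: len(words & kws) for intent, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

def reply(message):
    intent = classify(message)
    return RESPONSES.get(intent, "Sorry, I didn't understand that.")

print(reply("how to apply for a loan"))
# -> Sure, let me guide you through the loan application.
```

Swapping the keyword matcher for a trained intent model (and adding slot filling for details like dates and destinations) is exactly what separates a scripted bot from the "book a flight ticket after checking the weather" class of assistant.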