If you are looking for an answer to the question "What is artificial intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
"Chips can help produce better chips" in terms of frequency, power and other performance design parameters, Karl Freund of Moor Insights & Strategy told this week's AI Hardware Summit. For a range of AI applications, from automotive and networking to AI acceleration, design objectives were met between 84 percent and 89 percent faster using only a single engineer, according to a recent study by chip design leader Synopsys. "These results… are game-changing," Freund said. In one example, an AI algorithm placed transistors from different logic blocks within an IC design "in completely unintuitive manners," the analyst noted. The results: better frequency, lower power and reduced chip area.
Mr. Jean-François: Imagine a dozen cars at a red light. When the light turns green, the first one will take a couple of seconds to react and restart, while the last one will need an extra 10 seconds, because of the accumulated reaction times of the other drivers ahead. If those cars were automated, the human reaction time would be eliminated, all cars would start at the same time, and you could fit more cars through in a given time frame. It's the same for automated trains: the capacity of the line is increased! The computer will also register the line's exact topography and environment and will precisely adjust the train handling accordingly.
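The arithmetic behind this analogy can be sketched in a few lines. This is a minimal illustration, not a traffic model; the per-driver reaction time of one second is an assumption chosen to roughly match the "extra 10 seconds" figure above:

```python
# Sketch of the red-light analogy: each human driver adds a reaction
# delay before starting, so delays accumulate down the queue.
# Automated cars, with zero reaction time, all start together.

def last_car_start_delay(n_cars: int, reaction_s: float) -> float:
    """Seconds until the last car in the queue starts moving."""
    # Each of the n cars waits for the reaction times of every
    # car ahead of it, plus its own.
    return n_cars * reaction_s

human_delay = last_car_start_delay(12, 1.0)      # reaction times stack up
automated_delay = last_car_start_delay(12, 0.0)  # all start with the light

print(human_delay)      # last human-driven car starts ~12 s after green
print(automated_delay)  # automated queue starts immediately
```

The same accumulation argument is what lets automated trains run closer together, increasing line capacity.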
AI models can be more accurate than gut feelings drawn from scant data because they are objective and have "considered" a wide variety and volume of relevant data. Natural language processing allows data scientists to take newsfeeds, social media text, and publicly available business reports and convert them into data for decision making. By training AI models to learn patterns in all sorts of contextual data that pertain to a particular industry, supply chain managers are able to identify the events that will lead to disruption. They are also able to prescribe the best actions to take to avert disaster in their supply chain or pricing structures. Some AI/ML models can automate an appropriate response or alert the human in charge to make the necessary decisions to avert disruption.
The National Institute of Standards and Technology (NIST) is launching the Differential Privacy Temporal Map Challenge. It's a set of contests, with cash prizes attached, that's intended to crowdsource new ways of handling personally identifiable information (PII) in public safety datasets. The problem is that although rich, detailed data is valuable for researchers and for building AI models -- in this case, in the areas of emergency planning and epidemiology -- it raises serious and potentially dangerous data privacy and rights issues. Even if datasets are kept under proverbial lock and key, malicious actors can, based on just a few data points, re-identify individuals and infer sensitive information about them. The solution is to de-identify the data such that it remains useful without compromising individuals' privacy.
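One standard de-identification technique in differential privacy, which the challenge's name alludes to, is the Laplace mechanism: add calibrated random noise to each released statistic so that no single individual's presence in the data can be confidently inferred. The sketch below is illustrative only; the specific query, epsilon value, and function names are assumptions, not the challenge's actual method:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one sample from Laplace(0, scale) via inverse-transform sampling."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy.

    sensitivity = 1 because adding or removing one person changes
    a count by at most 1; smaller epsilon means more noise and
    stronger privacy.
    """
    return true_count + laplace_noise(sensitivity / epsilon)

# Example: publish how many incident records fall in one map cell
# without revealing whether any particular individual is present.
noisy = dp_count(true_count=42, epsilon=0.5)
print(noisy)  # close to 42, but randomized on every release
```

The key property is that the released value's distribution changes only slightly whether or not any one person's record is in the dataset, which is exactly the guarantee the re-identification attacks described above exploit the absence of.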
The advent of technology has brought many divergent views from all walks of life. Some are of the view that technology will replace human beings; others remain adamant that we will continue to be in charge and will never be threatened by technology. Interestingly, there are those who are neutral: they see technology and human beings collaborating. There is no denying the fact that the Fourth Industrial Revolution (4IR) has transformed the workplace.
Film industries all over the world are rapidly producing several hundred movies and grabbing the attention of people of all ages. Every movie producer is keenly interested in knowing which movies are likely to be hits or flops at the box office. So, the early prediction of a movie's popularity is of the utmost importance to the film industry. In this study, we examine the hidden patterns and factors that make a movie popular. In past studies, machine learning techniques were applied to blog articles, social networking, and social media to predict the success of a movie.
The videos in this article will blow your mind... and they are already out of date. Soul Machines is on the cutting edge of building commercial AI avatars that can appear on a computer screen, and even in 3D, to simulate face-to-face engagement. The face in the main image of this article is one of their 3D avatars, and they are already being deployed in banks and energy companies to inform and serve customers. With names such as Jamie (ANZ Bank), Will (Vector Energy), Ava (Autodesk), and Sarah (Daimler Mercedes Benz), they are connecting with customers, replicating human emotion, providing the right answers and asking insightful questions. Many call centre roles in affluent countries have been 'off-shored' to lower-cost countries, and now those roles are set to be outsourced to AI bots.
Mass Eye and Ear researchers have developed a unique diagnostic tool that can detect dystonia from MRI scans. It is the first technology of its kind to provide an objective diagnosis of the disorder. Dystonia is a potentially disabling neurological condition that causes involuntary muscle contractions, leading to abnormal movements and postures. It is often misdiagnosed, and it sometimes takes people up to 10 years to get a correct diagnosis. A new study published September 28 in PNAS shows that the researchers have developed an AI-based deep learning platform, called DystoniaNet, to compare the brain MRIs of 612 people.
If you have trouble reading this email, see it on a web browser. It's been a busy month for Towards AI. We surpassed 115k followers across our social media networks, and we now have over 11k subscribers, all thanks to you, our avid readers who continue to engage with us by sharing, commenting, and discussing our pieces. On our end, we promise we will continue to work hard to provide you with high-quality content.
Previously, we called for the introduction of risk/benefit assessment frameworks to identify and mitigate risks in AI systems. Yet such frameworks are highly contextual and require deep interdisciplinary expertise and multistakeholder collaboration. Not every organisation can afford such talent or has the required processes in place. Further, it's perfectly reasonable to assume that a given company has deployed different AI solutions for various use cases, each requiring a distinct framework. Designing and keeping track of these frameworks could quickly become an impossible task even for the most experienced risk managers. In this situation, an intuitive response would be to proceed with caution and limit the use of AI to low-risk applications to avoid potential regulatory violations.