If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Deciding when to act on a model's prediction is very complex because most of the time there is no simple rule such as a threshold on the confidence score of the prediction. In practice it might be more like, "if the user has more than 7 items in their cart, and the user is not a returning customer who filled out personal data, and the value of their cart is greater than $100, and they have not put a new item in the cart for 2 minutes, and the confidence score of the predictor is less than 0.4, THEN don't show the next recommended item, just display a checkout link."
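A rule like that can be sketched as a plain predicate. All field names and thresholds below simply mirror the hypothetical example in the text; a real system would pull these values from session data and a model-serving layer:

```python
def should_hide_recommendation(cart_item_count: int,
                               is_returning_customer_with_profile: bool,
                               cart_value_usd: float,
                               seconds_since_last_add: float,
                               confidence_score: float) -> bool:
    """Hypothetical business rule: suppress the next recommended item
    and show a checkout link instead when ALL conditions hold.

    Thresholds are illustrative, taken from the example rule above.
    """
    return (cart_item_count > 7
            and not is_returning_customer_with_profile
            and cart_value_usd > 100
            and seconds_since_last_add > 120   # no new item for 2 minutes
            and confidence_score < 0.4)
```

Note that any single condition failing flips the decision, which is exactly why such logic resists being reduced to one confidence threshold.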
NVIDIA today announced that the Italian inter-university consortium CINECA -- one of the world's most important supercomputing centers -- will use the company's accelerated computing platform to build the world's fastest AI supercomputer. The new "Leonardo" system, built with Atos, is expected to deliver 10 exaflops of FP16 AI performance to enable advanced AI and HPC converged application use cases. Featuring nearly 14,000 NVIDIA Ampere architecture-based GPUs and NVIDIA Mellanox HDR 200 Gb/s InfiniBand networking, Leonardo will position Italy as a global leader in AI and high-performance computing research and innovation. Leonardo is being procured by EuroHPC, a collaboration between national governments and the European Union to develop a world-class supercomputing ecosystem and exascale supercomputing in Europe, and is funded by the European Commission through the Italian Ministry of University and Research. "The EuroHPC technology roadmap for exascale in Europe is opening doors for rapid growth and innovation in HPC and AI," said Marc Hamilton, vice president of solutions architecture and engineering at NVIDIA.
A new AI from Microsoft aims to automatically caption images in documents and emails so that screen-reading software for people with visual impairments can read them out. Researchers from Microsoft explained their machine learning model in a paper on the preprint repository arXiv. The model uses VIsual VOcabulary pre-training (VIVO), which leverages large amounts of paired image-tag data to learn a visual vocabulary. A second dataset of properly captioned images is then used to help teach the AI how to best describe the pictures. "Ideally, everyone would include alt text for all images in documents, on the web, in social media – as this enables people who are blind to access the content and participate in the conversation. But, alas, people don't," said Saqib Shaikh, a software engineering manager with Microsoft's AI platform group.
Radiology extenders who read chest X-rays save attending radiologists more time during the day than radiology residents do, potentially streamlining workflow and alleviating provider burnout. At least that has been the experience for researchers at the University of Pennsylvania. Radiologists in their department read more cases per hour when the drafts came from radiology extenders than from residents, resulting in nearly an hour – 51 minutes – of provider time saved each day. The authors shared their experience on Oct. 13 in the Journal of the American College of Radiology. "Interpreting these radiographs entails a disproportionate amount of work (e.g., retrieving patient history, completing standard dictation templates, and ensuring proper communication of important findings before finalization of reports). Given low reimbursement rates for these studies, economic necessities push radiologists to provide faster interpretations, contributing to burnout," said the team led by Arijitt Borthakur, MBA, Ph.D., senior research investigator in the Perelman School of Medicine radiology department.
I had the pleasure of talking recently with Scott Smith, futurist and managing partner of Changeist, about some of the biggest macro trends everyone should be aware of today. While these trends had already begun prior to the coronavirus pandemic, in many ways they accelerated as the world fought to deal with the pandemic and now as we begin to build our post-COVID-19 world. Here are the six future trends he believes everyone should be ready for. The "decoupling" of economies had already started pre-COVID-19, with early indicators appearing five to 10 years ago, according to some thought leaders, but the pandemic certainly made it clearer how dependence on globalization could create vulnerabilities. Some of the world's major powers, such as the UK, the United States, Brazil, Russia, India, and parts of the European Union, had already started to favor nationalism.
We've launched 9,600 satellites since 1957. For the first few decades, no one thought about what would happen once they reached the end of their lives. By the time space agencies decided to do something, it had become a problem. "A vast majority of objects in orbit are effectively stranded there," says Stijn Lemmens, a space debris analyst at the European Space Agency. "And they have a lifetime of hundreds, thousands of years."
It's been nearly 4 years since TensorFlow was released, and the library has evolved to its official second version. TensorFlow, Google's library for deep learning and artificial intelligence, is the world's most popular library for deep learning; Google's parent Alphabet recently became the most cash-rich company in the world (just a few days before I wrote this). It is the library of choice for many companies doing AI and machine learning. In other words, if you want to do deep learning, you gotta know TensorFlow.
Artificial intelligence, the latest facet of information technology, has gained increasing momentum and been widely applied in various sectors with tremendous potential, thus becoming a driving force of scientific and technological development during China's 13th Five-Year Plan (2016-20) period. It has also injected new impetus into the digital economy and played a key role in bolstering high-quality development and accelerating the nation's push for industrial upgrading, experts said. In the 13th Five-Year Plan, the country called for developing AI, with a focus on fostering the industrial ecology of AI and promoting the integration and application of AI into key industries and fields. In July 2017, the State Council, China's Cabinet, issued a plan that set benchmarks for the country's AI sector, predicting that the value of core AI industries would exceed 1 trillion yuan ($150 billion) and that the country would become one of the global leaders in AI innovation by 2030. China has made tremendous strides in AI over the past five years as it has outpaced the United States in the number of worldwide AI-related patent applications, said a report from a Ministry of Industry and Information Technology research unit. The report also pointed out that AI is considered an important direction for industrial upgrading, and that the country's strategic plan for AI offers broad space for the research and development of AI technologies and related industries.
Human intelligence has been creating and maintaining complex systems since the beginnings of civilization. In modern times, digital twins have emerged to aid operations of complex systems, as well as improve design and production. Artificial intelligence (AI) and extended reality (XR) – including augmented reality (AR) and virtual reality (VR) – have emerged as tools that can help manage operations for complex systems. Digital twins can be enhanced with AI, and emerging user interface (UI) technologies like XR can improve people's abilities to manage complex systems via digital twins. Digital twins can marry human and artificial intelligence to produce something far greater by creating a usable representation of complex systems. End users do not need to worry about the formulas that go into machine learning (ML), predictive modeling and artificially intelligent systems, yet they can still capitalize on their power as an extension of their own knowledge and abilities. Digital twins combined with AR, VR and related technologies provide a framework to overlay intelligent decision making into day-to-day operations, as shown in Figure 1. Figure 1: A digital twin can be enhanced with artificial intelligence (AI) and intelligent-realities user interfaces, such as extended reality (XR), which includes augmented reality (AR) and virtual reality (VR). The operations of a physical twin can be digitized by sensors, cameras and other such devices, but those digital streams are not the only sources of data that can feed the digital twin. In addition to streaming data, accumulated historical data can inform a digital twin. Relevant data could include data not generated from the asset itself, such as weather and business cycle data. Also, computer-aided design (CAD) drawings and other documentation can help the digital twin provide context.
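The data flow described above can be sketched as a minimal state store: streaming sensor readings accumulate into history, while external context (such as weather) attaches data not generated by the asset itself. All class and field names here are illustrative, not from any particular digital-twin product:

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class DigitalTwin:
    """Minimal sketch of a digital twin state store.

    The twin accumulates streaming sensor readings as history and
    accepts external context (e.g. weather, business cycle data)
    that does not come from the physical asset itself.
    """
    asset_id: str
    history: list = field(default_factory=list)   # accumulated readings
    context: dict = field(default_factory=dict)   # non-asset data sources

    def ingest(self, reading: float) -> None:
        """Record one streaming sensor reading from the physical twin."""
        self.history.append(reading)

    def add_context(self, key: str, value) -> None:
        """Attach data not generated by the asset itself."""
        self.context[key] = value

    def summary(self) -> dict:
        """Current state that a UI layer (e.g. an XR overlay) could render."""
        return {
            "asset_id": self.asset_id,
            "latest": self.history[-1] if self.history else None,
            "mean": mean(self.history) if self.history else None,
            "context": dict(self.context),
        }
```

In a real deployment the `summary()` output would feed predictive models and the AR/VR interfaces discussed above, rather than being read directly.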
Most of you are probably familiar with chip giants like Intel & AMD, which command a bigger share of the computing processor market, but this 1993 entrant to the chip market has solidified its reputation as a big name in the arena. Although most well-known for its graphics processing units (GPUs) -- GeForce being its primary & most popular product line -- the company also provides system-on-a-chip units (SoCs) for the mobile computing and automotive markets. Since 2014, Nvidia has begun to diversify its business beyond the niche markets of gaming, automotive electronics, and mobile devices. It is now venturing into futuristic AI, along with providing parallel processing capabilities to researchers and scientists that allow them to efficiently run high-performance applications. Let's review some of these endeavors.