
AI Performance


Apple M5 unveiled: 10 CPU cores, 10 GPU cores, and the 'next big leap' in AI

PCWorld

New iPad Pro, MacBook Pro, and Vision Pro models all benefit from upgraded Apple silicon. Apple on Wednesday announced the launch of its M5 processor, saying the chip "ushers in the next big leap in AI performance for Apple silicon." The M5 appears in new editions of the iPad Pro, MacBook Pro, and Vision Pro, all of which are available for U.S. and U.K. customers to pre-order as of today. The M5, as you would expect, is a higher-performance chip than its M4 predecessor.


To Err Is AI! Debugging as an Intervention to Facilitate Appropriate Reliance on AI Systems

He, Gaole, Bharos, Abri, Gadiraju, Ujwal

arXiv.org Artificial Intelligence

Powerful predictive AI systems have demonstrated great potential in augmenting human decision making. Recent empirical work has argued that the vision for optimal human-AI collaboration requires 'appropriate reliance' of humans on AI systems. However, accurately estimating the trustworthiness of AI advice at the instance level is quite challenging, especially in the absence of performance feedback pertaining to the AI system. In practice, the performance disparity of machine learning models on out-of-distribution data makes the dataset-specific performance feedback unreliable in human-AI collaboration. Inspired by existing literature on critical thinking and a critical mindset, we propose the use of debugging an AI system as an intervention to foster appropriate reliance. In this paper, we explore whether a critical evaluation of AI performance within a debugging setting can better calibrate users' assessment of an AI system and lead to more appropriate reliance. Through a quantitative empirical study (N = 234), we found that our proposed debugging intervention does not work as expected in facilitating appropriate reliance. Instead, we observe a decrease in reliance on the AI system after the intervention -- potentially resulting from an early exposure to the AI system's weakness. We explore the dynamics of user confidence and user estimation of AI trustworthiness across groups with different performance levels to help explain how inappropriate reliance patterns occur. Our findings have important implications for designing effective interventions to facilitate appropriate reliance and better human-AI collaboration.


NVIDIA's RTX 500 and 1000 Ada GPUs bring more AI smarts to thin and light workstations

Engadget

Just ahead of Mobile World Congress, NVIDIA unveiled its latest laptop GPUs and, what a surprise, they're designed largely to assist with AI processing. The RTX 500 and 1000 Ada Generation graphics cards are primarily for thin and light laptops. While they won't offer as much TOPS AI performance as current higher-end mobile GPUs, they could be a handy option for on-the-go AI processing for the likes of researchers, content creators and video editors. It's worth noting they're workstation GPUs rather than ones designed for gaming. NVIDIA says the GPUs, which are based on the Ada Lovelace architecture, offer up to twice the ray-tracing performance of previous-gen GPUs (they employ third-gen ray-tracing cores).


NVIDIA Ups The Ante In Edge Computing With Jetson Orin Nano Developer Kit

#artificialintelligence

NVIDIA continues to push the envelope of AI accelerators - both in the data center and at the edge. Last month, it announced the availability of the Jetson Orin Nano Developer Kit, the latest addition to the Jetson family of devices. Initially announced in September 2022, the Jetson Orin Nano system-on-module (SoM) delivers 80x the performance of the previous generation Jetson Nano device. The developer kit puts the power of the SoM in the hands of developers by making it accessible. Below is a snapshot of the benchmark that compares the performance of AI vision models on Jetson Nano and Jetson Orin Nano.


Aetina and Hailo will launch Multi-Inference AI Solutions at the Edge - Coleda Pvt Ltd

#artificialintelligence

Together, Aetina and Hailo are launching multi-inference AI solutions that use 4x Hailo-8 AI accelerators in the AI-MXM-H84A MXM module, Aetina's AI inference platform (AIP-SQ67), and object recognition AI models. The AIP-SQ67 platform, powered by Aetina's AI-MXM-H84A MXM module, offers enough processing power to enable real-time video analytics and numerous low-latency AI inference tasks at the edge, with up to 104 Tera-Operations Per Second (TOPS) of AI performance from Hailo's AI processors. The AI technologies are appropriate for a variety of applications in cities and transportation networks since they are capable of identifying diverse objects, such as people and vehicles, and evaluating large video datasets from multiple cameras simultaneously. Aetina and Hailo will present the solutions at ISC West 2023. The MegaEdge family member AIP-SQ67 from Aetina boasts an Intel 12th Gen Core processor and expansion slots for up to two M.2 AI accelerators and one MXM.
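The 104 TOPS headline figure follows directly from the module's configuration, assuming the commonly quoted 26 TOPS per Hailo-8 accelerator (a sanity check, not vendor-supplied code):

```python
# Aggregate AI throughput of a 4x Hailo-8 module, assuming the
# widely cited 26 TOPS rating per Hailo-8 accelerator.
TOPS_PER_HAILO_8 = 26
NUM_ACCELERATORS = 4

total_tops = TOPS_PER_HAILO_8 * NUM_ACCELERATORS
print(total_tops)  # 104, matching the quoted "up to 104 TOPS"
```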


Using edge AI processors to boost embedded AI performance

#artificialintelligence

The arrival of artificial intelligence (AI) in embedded computing has led to a proliferation of potential solutions that aim to deliver the high performance required to perform neural-network inferencing on streaming video at high rates. Though many benchmarks such as the ImageNet challenge work at comparatively low resolutions and can therefore be handled by many embedded-AI solutions, real-world applications in retail, medicine, security, and industrial control call for the ability to handle video frames and images at resolutions up to 4kp60 and beyond. Scalability is vital and not always an option with system-on-chip (SoC) platforms that provide a fixed combination of host processor and neural accelerator. Though they often provide a means of evaluating the performance of different forms of neural network during prototyping, such all-in-one implementations lack the granularity and scalability that real-world systems often need. Industrial-grade AI applications therefore benefit from a more balanced architecture in which heterogeneous processors (e.g., CPUs, GPUs) and accelerators cooperate in an integrated pipeline: they not only perform inferencing on raw video frames but also apply pre- and post-processing to improve overall results and handle format conversion for multiple cameras and sensor types.
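The pipeline described above (pre-processing, inference, and post-processing as cooperating stages) can be sketched in outline. The stage functions here are hypothetical placeholders standing in for hardware-accelerated steps, not any vendor's API:

```python
import numpy as np

def preprocess(frame: np.ndarray, size=(640, 640)) -> np.ndarray:
    """Crop/normalize a raw video frame before inference (sketch).

    A real system would use a hardware scaler or GPU for resizing
    and color-format conversion here.
    """
    h, w = size
    cropped = frame[:h, :w]  # stand-in for a proper resize
    return cropped.astype(np.float32) / 255.0

def infer(tensor: np.ndarray) -> np.ndarray:
    """Stand-in for dispatching the tensor to a neural accelerator."""
    return tensor.mean(axis=-1)  # dummy per-pixel "score map"

def postprocess(output: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Turn raw model output into candidate detections (sketch)."""
    return np.argwhere(output > threshold)

# One frame flowing through the integrated pipeline.
frame = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
detections = postprocess(infer(preprocess(frame)))
```

In a production system each stage would typically run on the processor best suited to it (CPU for decode, accelerator for inference), connected by zero-copy buffers rather than plain function calls.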


Intel Collaboration With Deci Boosts AI Performance on Intel Hardware

#artificialintelligence

Scott Bair is a Senior Technical Creative Director for Intel Labs, chartered with growing awareness for Intel's leading-edge research activities, like AI, Neuromorphic Computing and Quantum Computing. Scott is responsible for driving marketing strategy, messaging, and asset creation for Intel Labs and its joint-research activities. In addition to his work at Intel, he has a passion for audio technology and is an active father of 5 children. Scott has over 23 years of experience in the computing industry bringing new products and technology to market. During his 15 years at Intel, he has worked in a variety of roles from R&D, architecture, strategic planning, product marketing, and technology evangelism.


Nvidia takes the wraps off Hopper, its latest GPU architecture

#artificialintelligence

After much speculation, Nvidia today at its March 2022 GTC event announced the Hopper GPU architecture, a line of graphics cards that the company says will accelerate the types of algorithms commonly used in data science. Named for Grace Hopper, the pioneering U.S. computer scientist, the new architecture succeeds Nvidia's Ampere architecture, which launched roughly two years ago. The first card in the Hopper lineup is the H100, containing 80 billion transistors and a component called the Transformer Engine that's designed to speed up specific categories of AI models. Another architectural highlight includes Nvidia's MIG technology, which allows an H100 to be partitioned into seven smaller, isolated instances to handle different types of jobs.


5 things lawyers should know about artificial intelligence

#artificialintelligence

Although artificial intelligence has been the subject of academic research since the 1950s and has been used commercially in some industries for decades, it is still in its infancy across much of the broader economy. The rapid adoption of this technology, along with the unique privacy, security and liability issues associated with it, has created opportunities for lawyers to help their clients capture its economic value while ensuring its use is ethical and legal. However, before advising clients on AI issues, lawyers should have some basic technical knowledge to answer questions about legal compliance. Machine learning algorithms are incredibly complex, learning billions of rules from datasets and applying those rules to arrive at an output recommendation. Even the most precise and well-designed AI systems are probabilistic in nature, guaranteeing that the system will, at some point, produce an incorrect result.


Improved algorithms may be more important for AI performance than faster hardware

#artificialintelligence

When it comes to AI, algorithmic innovations are substantially more important than hardware -- at least where the problems involve billions to trillions of data points. That's the conclusion of a team of scientists at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), who conducted what they claim is the first study on how fast algorithms are improving across a broad range of examples. Algorithms tell software how to make sense of text, visual, and audio data so that they can, in turn, draw inferences from it. For example, OpenAI's GPT-3 was trained on webpages, ebooks, and other documents to learn how to write papers in a humanlike way.
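The study's core point, that a better algorithm can outweigh faster hardware, is easy to illustrate with a classic textbook case: replacing a linear scan with sorting plus binary search cuts per-query work from O(n) to O(log n), a gain that grows with data size regardless of clock speed. This is a generic illustration, not code from the MIT study:

```python
import bisect
import random

# A modest dataset; the asymptotic gap widens as n grows.
data = random.sample(range(1_000_000), 10_000)
queries = data[:500]  # all queries are known to be present

def linear_search():
    """Naive O(n) membership scan per query: O(n * q) overall."""
    return sum(1 for q in queries if q in data)

# Sort once up front, then answer each query in O(log n).
sorted_data = sorted(data)

def binary_search():
    """O(log n) lookup per query via bisect: O(q log n) overall."""
    hits = 0
    for q in queries:
        i = bisect.bisect_left(sorted_data, q)
        hits += i < len(sorted_data) and sorted_data[i] == q
    return hits

# Same answer, vastly different work for large n.
assert linear_search() == binary_search() == len(queries)
```

Hardware generations improve constants; algorithmic improvements like this change the growth rate itself, which is why the CSAIL team found them dominant at billion-to-trillion-point scales.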