
What Is Conversational AI? ZeroShot Bot CEO Jason Mars Explains

#artificialintelligence

Entrepreneur Jason Mars calls conversation our "first technology." Before humans invented the wheel, crafted a spear or tamed fire, we mastered the superpower of talking to one another. That makes conversation an incredibly important tool. But if you've dealt with the automated chatbots deployed by the customer service arms of just about any big organization lately -- whether banks or airlines -- you also know how hard it can be to get it right. Deep learning AI and new techniques such as zero-shot learning promise to change that.
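Zero-shot learning lets a chatbot map a user query to an intent it was never explicitly trained on, typically by scoring the query against a natural-language description of each candidate intent. As a rough illustration only (not ZeroShot Bot's actual method), the toy sketch below uses bag-of-words cosine similarity in place of a real sentence encoder; the intent names and descriptions are invented for the example:

```python
from collections import Counter
import math

def embed(text):
    # Toy "embedding": a bag-of-words count vector,
    # standing in for a real sentence encoder.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def zero_shot_intent(query, label_descriptions):
    # Score each unseen intent by similarity between the query and the
    # label's natural-language description; no per-intent training data.
    scores = {label: cosine(embed(query), embed(desc))
              for label, desc in label_descriptions.items()}
    return max(scores, key=scores.get)

# Hypothetical intents for a banking/airline bot.
intents = {
    "check_balance": "check the balance of my bank account",
    "book_flight": "book a flight ticket to a destination",
}
print(zero_shot_intent("what is the balance in my account", intents))  # check_balance
```

Swapping the bag-of-words vectors for embeddings from a pretrained language model is what makes this approach practical: new intents can then be added by writing a one-line description rather than collecting labeled training examples.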


Modern Computing: A Short History, 1945-2022

#artificialintelligence

Inspired by A New History of Modern Computing by Thomas Haigh and Paul E. Ceruzzi. But the selection of key events in the journey from ENIAC to Tesla, from Data Processing to Big Data, is mine. Most home computer users in the 1970s were hobbyists who designed and assembled their own machines. The Apple I, devised in a bedroom by Steve Wozniak, Steven Jobs and Ron Wayne, was a basic circuit board to which enthusiasts would add display units and keyboards. It was the first computer made by Apple Computer Inc., which became one of the fastest-growing companies in history, launching a number of innovative and influential computer hardware and software products. April 1945: John von Neumann's "First Draft of a Report on the EDVAC," often called the founding document of modern computing, defines the stored-program concept. July 1945: Vannevar Bush publishes "As We May Think," in which he envisions the "Memex," a memory extension device serving as a large personal repository of information that could be instantly retrieved through associative links.


Challenges of Artificial Intelligence -- From Machine Learning and Computer Vision to Emotional Intelligence

arXiv.org Artificial Intelligence

Artificial intelligence (AI) has become a part of everyday conversation and our lives. It is considered the new electricity that is revolutionizing the world. Both industry and academia are investing heavily in AI. However, there is also a lot of hype in the current AI debate. AI based on so-called deep learning has achieved impressive results in many problems, but its limits are already visible. AI has been under research since the 1940s, and the industry has seen many ups and downs due to over-expectations and the disappointments that have followed. The purpose of this book is to give a realistic picture of AI, its history, its potential and its limitations. We believe that AI is a helper, not a ruler of humans. We begin by describing what AI is and how it has evolved over the decades. After the fundamentals, we explain the importance of massive data for the current mainstream of artificial intelligence. The most common representations and methods for AI and machine learning are covered. In addition, the main application areas are introduced. Computer vision has been central to the development of AI. The book provides a general introduction to computer vision and includes an exposure to the results and applications of our own research. Emotions are central to human intelligence, but they have seen little use in AI. We present the basics of emotional intelligence and our own research on the topic. We discuss super-intelligence that transcends human understanding, explaining why such an achievement seems impossible on the basis of present knowledge, and how AI could be improved. Finally, we summarize the current state of AI and what to do in the future. In the appendix, we look at the development of AI education, especially from the perspective of the contents at our own university.


Nvidia plans for a more robust Omniverse with avatars, synthetic data

ZDNet

As enterprises prepare to bring more of their business and operations to the virtual world, Nvidia is building out Omniverse, its platform for extending workflows into the virtual sphere. The latest updates to the platform, introduced during GTC 2021, include Omniverse Avatar, a tool for creating embodied AIs, as well as Omniverse Replicator, a synthetic data-generation engine. Omniverse Replicator is a simulation framework that produces physically accurate synthetic data to accelerate training of deep neural networks for AI applications. NVIDIA has created Omniverse Replicators for DRIVE Sim, for training AI perception networks for autonomous vehicles, and for Isaac Sim, for training robots. Nvidia rolled out Omniverse in open beta last December -- nearly a year before Facebook committed to the concept of a "metaverse" by renaming itself Meta.


NVIDIA Invites Developers To Test Experimental DLSS Models Directly From Company's Supercomputer

#artificialintelligence

NVIDIA recently began inviting developers to test the newest build of DLSS (Deep Learning Super Sampling) and submit their experiences and findings to the developer forum on NVIDIA's site. NVIDIA DLSS is "a deep learning neural network that boosts frame rates and generates beautiful, sharp images for your games. It gives you the performance headroom to maximize ray tracing settings and increase output resolution. DLSS is powered by dedicated AI processors on RTX GPUs called Tensor Cores." Through the program, developers can explore and evaluate experimental AI models for DLSS.


Nvidia In the Lead in AI Chips and is Working to Stay There - AI Trends

#artificialintelligence

Nearly 100% of AI-accelerator chips are from Nvidia today, and the company cofounded in 1993 by CEO Jensen Huang is working hard to maintain its lead in AI processing. Still, the AI landscape now includes many companies working to build the next generation of AI chips, capable of processing ever-increasing workloads in data centers and handling more processing pushed out to edge devices. That Nvidia is in a dominant position today in the AI chip market is not in dispute. Its graphics processing unit (GPU) chips were deployed in 2019 in over 97% of AI accelerator instances -- hardware used to boost processing speeds -- at AWS, Google, Alibaba, and Azure, the top four cloud providers, according to a recent account in Wired UK. Nvidia commands "nearly 100%" of the market for training AI algorithms, stated Karl Freund, analyst at Cambrian AI Research. Nearly 70% of the top 500 supercomputers use its GPUs, and AI milestones such as the GPT-3 large language model from OpenAI and DeepMind's board-game champion AlphaGo have run on Nvidia hardware.


Here's why a great gaming laptop is the best all-around computer for college

Mashable

If you're tackling a degree in science, technology, engineering, or mathematics, there's nothing more frustrating than a machine that can't keep up with the apps you need for your coursework. Here's where a powerful gaming laptop proves its mettle. With GPU acceleration, your machine delivers super-fast image processing and real-time rendering for complex component designs, letting you work quickly and efficiently. For engineering students, this means more interactive, real-time rendering for 3D design and modeling, plus faster solutions and visualization for mechanical, structural, and electrical simulations. For computer science, data science, and economics students, NVIDIA's GeForce RTX 30 Series laptops enable faster data analytics for processing large data sets -- all with efficient training for deep learning and traditional machine learning models for computer vision, natural language processing, and tabular data.


NVIDIA's DLSS upscaling comes to 'Rust' and a wave of Linux games

Engadget

NVIDIA's Deep Learning Super Sampling (DLSS) is about to reach a host of big-name games -- and more titles that don't rely on Windows. The company has announced that Facepunch Studios' survival hit Rust is adding DLSS support on July 1st. That's on top of a slew of already-revealed major titles receiving DLSS, including Doom Eternal (which also gets ray-traced reflections) on June 29th and, at an unspecified point, Red Dead Redemption 2. You can also expect to see DLSS in more Linux titles. A driver update arriving on June 22nd will enable DLSS in Vulkan-based games using the Proton compatibility layer. If a Windows game isn't quite running smoothly enough on your Linux rig, the AI-powered tech might make it more enjoyable.


Nintendo's upgraded Switch may use NVIDIA DLSS for 4K gaming

Engadget

Nintendo's next Switch may use an NVIDIA GPU that supports Deep Learning Super Sampling (DLSS), which would allow it to output higher-quality graphics, Bloomberg has reported. The new system-on-chip would enable output at up to 4K quality when the Switch is connected to a TV, and will also reportedly include an upgraded CPU and increased memory. The next-gen Switch is set to have a built-in 7-inch 720p OLED display and 4K output, according to a previous Bloomberg report. Since NVIDIA's DLSS allows for good-quality 4K upscaling, it's not clear whether an upgraded NVIDIA GPU would support native 4K or upscale from a lower resolution. The current generation of Switch uses NVIDIA's Tegra graphics to output up to 1080p game quality.


For Pac-Man's 40th birthday, Nvidia uses AI to make new levels

PCWorld

Pac-Man turns 40 today, and even though the days of quarter-munching arcade machines in hazy bars are long behind us, the legendary game's still helping to push the industry forward. On Friday, Nvidia announced that its researchers have trained an AI to create working Pac-Man games without teaching it about the game's rules or giving it access to an underlying game engine. Nvidia's "GameGAN" simply watched 50,000 Pac-Man games to learn the ropes. That's an impressive feat in its own right, but Nvidia hopes the "generative adversarial network" (GAN) technology underpinning the project can be used in the future to help developers create games faster and train autonomous robots. "This is the first research to emulate a game engine using GAN-based neural networks," Nvidia researcher Seung-Wook Kim said in a press release.
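A GAN pits a generator, which produces fakes, against a discriminator, which tries to tell fakes from real samples, and trains the two in alternation. As a minimal sketch of that adversarial loop (unrelated to GameGAN's actual architecture, which operates on game frames), the toy example below fits a one-parameter generator to a 1-D Gaussian, using hand-derived gradients so no deep learning library is needed:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def real_sample():
    # "Real" data: samples from a Gaussian centered at 3.
    return random.gauss(3.0, 0.5)

theta = 0.0        # generator: g(z) = theta + z, one learnable parameter
w, b = 0.1, 0.0    # discriminator: D(x) = sigmoid(w*x + b)
lr = 0.05

for _ in range(3000):
    x, z = real_sample(), random.gauss(0.0, 0.5)
    fake = theta + z

    # Discriminator step: gradient ascent on log D(x) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * x + b), sigmoid(w * fake + b)
    w += lr * ((1 - d_real) * x - d_fake * fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator step: gradient ascent on log D(fake) (non-saturating loss).
    d_fake = sigmoid(w * fake + b)
    theta += lr * (1 - d_fake) * w

# theta should drift toward the real data's mean of 3.
print(round(theta, 2))
```

The same push-and-pull is what GameGAN scales up: instead of a single scalar, its generator emits whole Pac-Man frames, and the discriminator judges whether a frame sequence looks like one of the 50,000 recorded games.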