If you are looking for an answer to the question "What is artificial intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
We are now looking for a Senior Deep Learning Research Scientist: NVIDIA is searching for a world-class researcher in deep learning to join our applied research team. We are passionate about deep learning applied to computer vision, audio, text and other domains, with the goal of solving specific problems encountered in NVIDIA's products. After building prototypes that demonstrate the promise of your research, you will work with product teams to help them integrate your ideas into products. If you're interested in researching and applying the latest advances in the deep learning revolution to solve real-life problems, this team may be an outstanding fit for you! What You'll Be Doing: conceive deep learning approaches to solving particular product problems.
The U.S. Postal Service (USPS) said on Nov. 7 that it would average 20.5 million packages per day through the remainder of the year. That adds up to a projected 800 million package deliveries between Thanksgiving and New Year's Day. The USPS is making an investment in new artificial intelligence technology to make the processing of those millions of packages more efficient. Although it will not impact this holiday season's shipments, the USPS is testing a range of hardware and software solutions from Nvidia to speed up the processing of packages, according to a November statement. Engineering teams from the Postal Service and Nvidia have been collaborating for several months on the project.
Artificial intelligence (AI) has become integral to practically every segment of the technology industry. AI is even being used to improve AI. What changes in core AI uses, tools, techniques, platforms, and standards are in store for the coming year? Here is what we're already starting to see in 2020. AI hardware accelerators have become a principal competitive battlefront in high tech.
Roundup: Let's get cracking with some machine-learning news. Starsky Robotics is no more: Self-driving truck startup Starsky Robotics has shut down after running out of money and failing to raise more funds. CEO Stefan Seltz-Axmacher bid a touching farewell to his startup, founded in 2016, in a Medium post this month. He was upfront and honest about why Starsky failed: "Supervised machine learning doesn't live up to the hype," he declared. Neural networks only learn to pick up on certain patterns after they are faced with millions of training examples.
While the IBM hardware business today is limited to POWER and Mainframe chips and systems, the technology giant is quietly building its expertise and capabilities in AI hardware. Where this could end up is anybody's guess, but here are a few thoughts about what IBM is doing and speculation as to why. IBM founded the IBM Research AI Hardware Center in early 2019 to conduct AI chip research in collaboration with New York State, the SUNY Polytechnic Institute, and technology companies including Mellanox, Samsung and Synopsys. The center takes a holistic, end-to-end approach to AI hardware, working towards its aggressive goal of delivering a 1000X increase in AI performance over the next 10 years. This starts with the reduced precision techniques we will discuss here.
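To make the reduced-precision idea concrete, here is a minimal illustrative sketch (not IBM's actual method, just the general technique): symmetric 8-bit quantization of float32 weights with NumPy, showing that the round trip loses only a small fraction of the signal.

```python
import numpy as np

# Illustrative sketch of reduced precision: map float32 weights to int8
# using a single per-tensor scale, then dequantize and measure the error.
def quantize_int8(w):
    """Quantize a float32 array to int8 with a symmetric per-tensor scale."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map int8 values back to approximate float32 weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)  # stand-in "weights"
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# The relative reconstruction error is small, which is why low-precision
# storage and arithmetic can preserve model accuracy.
rel_err = np.linalg.norm(w - w_hat) / np.linalg.norm(w)
print(f"relative error: {rel_err:.4f}")
```

Storing weights in 8 bits instead of 32 cuts memory traffic by 4x, which is where much of the speedup from reduced-precision hardware comes from.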
Giving and taking objects to and from humans are fundamental capabilities for collaborative robots in a variety of applications. NVIDIA researchers are hoping to improve these human-to-robot handovers by thinking about them as a hand grasp classification problem. In a paper called "Human Grasp Classification for Reactive Human-to-Robot Handovers", researchers at NVIDIA's Seattle AI Robotics Research Lab describe a proof of concept they claim results in more fluent human-to-robot handovers compared to previous approaches. The system classifies a human's grasp and plans a robot's trajectory to take the object from the human's hand. To do this, the researchers developed a perception system that can accurately identify a hand and objects in a variety of poses.
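The pipeline the paper describes, classify the human's grasp and then plan an approach around it, can be caricatured as a lookup from grasp class to approach direction. Everything here (class names, rules) is invented for illustration and is not NVIDIA's actual system:

```python
# Toy sketch of reactive handover planning: given a (hypothetical) grasp
# class from a perception system, choose where the gripper should approach
# from so it avoids the side of the object the human's hand occupies.

GRASP_APPROACH = {
    "pinch_top":    "from_below",   # fingers on top, underside is free
    "pinch_bottom": "from_above",   # fingers underneath, top is free
    "palm_open":    "from_above",   # object resting on an open palm
}

def plan_handover(grasp_class: str) -> str:
    """Return a made-up approach direction; wait on unrecognized grasps."""
    return GRASP_APPROACH.get(grasp_class, "wait")

print(plan_handover("palm_open"))  # from_above
print(plan_handover("fist"))       # wait
```

The "wait" fallback mirrors the reactive flavor of the approach: if the perception system isn't confident about how the object is held, the safest plan is not to move at all.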
Red Hat, Inc., the world's leading provider of open source solutions, today highlighted that more organizations are using Red Hat OpenShift as the foundation for building artificial intelligence (AI) and machine-learning (ML) data science workflows and AI-powered intelligent applications. OpenShift helps to provide agility, flexibility, portability and scalability across the hybrid cloud, from cloud infrastructure to edge computing deployments, a necessity for developing and deploying ML models and intelligent applications into production more quickly and without vendor lock-in. AI/ML represents a top emerging workload for Red Hat OpenShift across hybrid cloud and multicloud deployments, both for our customers and for the partners supporting these global organizations. By applying DevOps practices to AI/ML on the industry's most comprehensive enterprise Kubernetes platform, IT organizations can pair the agility and flexibility of industry best practices with the promise and power of intelligent workloads. As a production-proven enterprise container and Kubernetes platform, OpenShift delivers integrated DevOps capabilities for independent software vendors (ISVs) via Kubernetes Operators and NVIDIA GPU-powered infrastructure platforms.
Since the topics "Machine Learning" and "Artificial Intelligence" in general are growing bigger and bigger, dedicated AI hardware is popping up from a number of companies. To get an overview of the current state of AI platforms, we took a closer look at two of them: NVIDIA's Jetson Nano and Google's new Coral USB Accelerator. In this article we will discuss the typical workflow for these platforms and their pros and cons. NVIDIA's Jetson Nano is a single-board computer which, in comparison to something like a Raspberry Pi, packs quite a lot of CPU/GPU horsepower, at a much lower price than the other siblings of the Jetson family. It is currently available as a Developer Kit for around 109€ and contains a System-on-Module (SoM) and a carrier board that provides HDMI, USB 3.0 and Ethernet ports.
Deep Learning Super Sampling (DLSS) is one of the marquee features for Nvidia's RTX video cards, but it's also one people tend to overlook or outright dismiss. That's largely because many people equate the technology to something like a sharpening filter that can sometimes reduce the jagged look of lower-resolution images. But DLSS uses a completely different method with much more potential for improving visual quality, and Nvidia is ready to prove that with DLSS 2.0. Nvidia built the second-generation DLSS to address all of the concerns with the technology. It looks better, gives players much more control, and should support a lot more games.
Speech recognition startup Deepgram has secured $12 million in Series A funding led by Wing VC, writes TechCrunch. Deepgram leverages deep learning and has already raised a few million in capital in its five-year existence. Other investors that joined the funding round include Nvidia, Y Combinator and SAP. The startup wants to use the investment to create new job opportunities in go-to-market and engineering, and expand its team from its current 40 members to an undisclosed number. It also wants to purchase some new hardware, as it runs its own service for better margins, which also makes it a natural partner for Nvidia.