If you are looking for an answer to the question "What is Artificial Intelligence?" and you have only a minute, here is the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Today at the Computer Vision and Pattern Recognition Conference in Salt Lake City, Utah, NVIDIA is kicking off the conference by demonstrating an early release of Apex, an open-source PyTorch extension that helps users maximize deep learning training performance on NVIDIA Volta GPUs. Inspired by state-of-the-art mixed precision training in translation networks, sentiment analysis, and image classification, NVIDIA PyTorch developers have created tools that bring these methods to all levels of PyTorch users. The mixed precision utilities in Apex are designed to improve training speed while maintaining the accuracy and stability of single precision training. Specifically, Apex offers automatic execution of operations in either FP16 or FP32, automatic handling of master parameter conversion, and automatic loss scaling, all available with four or fewer changes to existing code. Installation requires CUDA 9, PyTorch 0.4 or later, and Python 3. The modules and utilities are still under active development, and we look forward to your feedback to make them even better.
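To make the loss-scaling idea concrete, here is a minimal pure-Python sketch of dynamic loss scaling, the technique behind automatic loss scaling in mixed precision training. The class name and interface are illustrative, not the Apex API: the loss is multiplied by a large factor before backpropagation so that small FP16 gradients do not underflow to zero, the gradients are unscaled before the optimizer step, and if an overflow (inf/NaN) is detected the step is skipped and the scale is reduced.

```python
import math

class DynamicLossScaler:
    """Illustrative dynamic loss scaler (not the Apex implementation)."""

    def __init__(self, init_scale=2.0 ** 15, growth_interval=2000):
        self.scale = init_scale          # current loss scale factor
        self.growth_interval = growth_interval
        self._good_steps = 0             # overflow-free steps since last change

    def scale_loss(self, loss):
        # Scaling the loss scales all gradients computed from it,
        # keeping small FP16 gradients above the underflow threshold.
        return loss * self.scale

    def step(self, grads):
        # Inspect the (scaled) gradients for overflow.
        if any(math.isinf(g) or math.isnan(g) for g in grads):
            self.scale /= 2.0            # back off after an overflow
            self._good_steps = 0
            return None                  # signal: skip this optimizer step
        self._good_steps += 1
        if self._good_steps >= self.growth_interval:
            self.scale *= 2.0            # cautiously grow the scale again
            self._good_steps = 0
        # Unscale so the optimizer sees true-magnitude gradients.
        return [g / self.scale for g in grads]
```

In a real training loop the scaled loss would be backpropagated and the unscaled gradients handed to the optimizer; in Apex this bookkeeping is what the library automates for you.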
I found Semblance on the second floor of the Fuego Lounge, squeezed into a booth beside a dance floor and a small stage. It was early afternoon, and waitstaff were restocking the long, rectangular bar in the center of the room as game developers, press and PR handlers flitted from station to station. A cloth tent on the balcony offered psychedelic VR meditation; a geodesic dome on the roof showcased swirling galaxies. And all along the walls inside, indie games waited to be played. Semblance stood out among the row of screens for its energetic, purple-tinged visuals.
The startup claims large industrial companies as early customers, including those seeking to improve operations via "dynamic control systems" spanning robotics, wind turbines and machine tuning. "To realize this vision of making AI more accessible and valuable for all, we have to remove the barriers to development, empowering every developer, regardless of machine learning expertise, to be an AI developer," Microsoft noted in a blog post announcing the deal. Terms of the acquisition were not disclosed.
The time required to test an idea should be zero. This was the very first sentence I wrote when considering the Airbnb design tools team vision. We believe that, within the next few years, emerging technology will allow teams to design new products in an expressive and intuitive way, while simultaneously eliminating hurdles from the product development process. As it stands now, every step in the design process and every artifact produced is a dead end. Work stops whenever one discipline finishes a portion of the project and passes responsibility to another discipline.
At some point in the not-too-distant future, the answer will seem self-evident. Like a lot of things, time changes perspective; the essential advancements we take for granted now were once deemed insurmountable. I believe we'll look back at the introduction of a 2 petaFLOPS deep learning system as essential to the evolution of AI in the enterprise. Single GPU systems once offered a seemingly limitless playground on which researchers and developers could innovate. As deep learning model complexity and datasets grew to address increasingly exotic (but important) use cases, the standard currency of deep learning compute grew in response.
With the widespread implementation of electronic health records (EHRs) and the tremendous amount of electronic information being created and collected, the health care industry is a new (or not-quite-so-new) frontier for Artificial Intelligence (AI). AI is finding its way onto physicians' desks to provide information about drug interactions, into EHRs to pull up requested patient records, and into wearables used by health plans to track health care metrics, promote wellness, and address chronic conditions. But health care is heavily regulated, and in Part 1 of this blog we explained how using AI applications and systems like those noted above may trigger a multitude of requirements for AI developers, health care providers, and health plans. An important law that may affect AI in the health care setting is the Health Insurance Portability and Accountability Act and its implementing regulations (HIPAA), the federal law establishing a floor for privacy, security, and breach notification related to most health information. A critical threshold question: at what point does an AI vendor become subject to HIPAA?
A smattering of PC games are tracking where you go and what you do on the internet when you aren't playing. Reddit sleuths have discovered that games including Civilization VI, Elder Scrolls Online, Kerbal Space Program (above), Hunt: Showdown and Warhammer: Vermintide 2 included a tracker called Red Shell. Essentially, it's software that, if installed, discerns whether you were exposed to the marketing campaign for the game you're playing, and whether said campaign led you to purchase the game. The FAQ section of Red Shell's website (addressed to developers) says the following: "All data we collect is YOURs. We do not aggregate, distribute or sell ANY data."
Microsoft has thrown open the doors to its AI Lab, a suite of beginner projects to help developers learn machine learning. There are five different experiments that cover computer vision, natural language processing, and drones. "Each lab gives you access to the experimentation playground, source code on GitHub, a crisp developer-friendly video, and insights into the underlying business problem and solution," according to Microsoftie Tara Shankar Jana on Tuesday. The first one is the DrawingBot. It teaches developers about generative adversarial networks (GANs), a popular type of neural network that learns to create content similar to the data it was trained on.
With AI's meteoric rise, autonomous systems have been projected to grow to more than 800 million in operation by 2025. However, although long envisioned in science fiction, truly intelligent autonomous systems are still elusive and remain a holy grail. The reality today is that training autonomous systems to function amidst the many unforeseen situations of the real world is very hard and requires deep expertise in AI -- essentially making it unscalable. To reach this inflection point in AI's growth, traditional machine learning methodologies aren't enough. Bringing intelligence to autonomous systems at scale will require a unique combination of the new practice of machine teaching, advances in deep reinforcement learning, and the use of simulation for training.