
Quantum computing: A cheat sheet

#artificialintelligence

Quantum computing, considered to be the next generation of high-performance computing, is a rapidly changing field that receives equal parts attention in academia and in enterprise research labs. Honeywell, IBM, and Intel are independently developing their own implementations of quantum systems, as are startups such as D-Wave Systems. In late 2018, President Donald Trump signed the National Quantum Initiative Act, which provides $1.2 billion for quantum research and development. TechRepublic's cheat sheet for quantum computing is positioned both as an easily digestible introduction to a new paradigm of computing and as a living guide that will be updated periodically to keep IT leaders informed on advances in the science and commercialization of quantum computing. Quantum computing is an emerging technology that attempts to overcome limitations inherent to traditional, transistor-based computers. Transistor-based computers rely on encoding data in binary bits, each either 0 or 1. Quantum computers instead use qubits, which have fundamentally different operational properties.
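
To make that last contrast concrete, here is a minimal, vendor-agnostic sketch in plain NumPy (not any quantum SDK mentioned above): where a classical bit holds exactly 0 or 1, a qubit's state is a pair of complex amplitudes whose squared magnitudes give the probabilities of observing 0 or 1 when it is measured.

```python
import numpy as np

# A classical bit is simply 0 or 1.
classical_bit = 1

# A qubit in an equal superposition of |0> and |1> (the "plus" state):
# two complex amplitudes, normalized so their squared magnitudes sum to 1.
qubit = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Measurement collapses the state; the amplitudes only fix the probabilities.
probabilities = np.abs(qubit) ** 2          # [0.5, 0.5]
measurement = np.random.choice([0, 1], p=probabilities)
print(probabilities, measurement)
```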


Using Deep Java Library to do Machine Learning on SpringBoot

#artificialintelligence

Many AWS customers, from startups to large enterprises, are on a path to adopting machine learning and deep learning in their existing applications. The reasons for machine learning adoption are dictated by the pace of innovation in the industry, with business use cases ranging from customer service (including object detection in images and video streams, and sentiment analysis) to fraud detection and collaboration. Until recently, however, the adoption learning curve was steep and required developing internal technical expertise in new programming languages (e.g., Python) and frameworks, with a cascading effect on the whole software development lifecycle, from coding to building, testing, and deployment. The approach outlined in this blog post enables enterprises to leverage existing talent and resources (frameworks, pipelines, and deployments) to integrate machine learning capabilities. Spring Boot, one of the most popular and widely used open source frameworks for microservices development, has simplified the implementation of distributed systems.


This Billie Eilish cover is unlike any other (because it's made by Google's AI)

#artificialintelligence

Google uses common AI tools known as neural networks for a huge variety of tasks, from suggesting text in your Gmail account to serving you an endless stream of recommended videos every time you fire up the YouTube app. Now, Google has tasked a custom neural net with organizing and syncing more than 150,000 YouTube cover versions of "Bad Guy" by Billie Eilish. It doesn't sound like all that impressive a task until you consider the scale of the project. From there, you can click on the related videos next to the player or any of the hashtags scrolling along the bottom of the screen; once you do, the video will seamlessly transition to a cover version of the song that's perfectly synchronized in tempo and key. It works on the most recent versions of all the major browsers on computers, smartphones, and tablets.


Misinformation or artifact: A new way to think about machine learning: A researcher considers when - and if - we should consider artificial intelligence a failure

#artificialintelligence

Deep neural networks are capable of seemingly sophisticated results, but they can also be fooled in ways that range from relatively harmless, such as misidentifying one animal as another, to potentially deadly, as when the network guiding a self-driving car misinterprets a stop sign as one indicating it is safe to proceed. A philosopher at the University of Houston suggests in a paper published in Nature Machine Intelligence that common assumptions about the cause of these supposed malfunctions may be mistaken, information that is crucial for evaluating the reliability of these networks. As machine learning and other forms of artificial intelligence become more embedded in society, used in everything from automated teller machines to cybersecurity systems, it is critical to understand the source of apparent failures, said Cameron Buckner, associate professor of philosophy at UH. Researchers call these failures "adversarial examples": cases in which a deep neural network misjudges images or other data when confronted with information outside the training inputs used to build the network. They are rare and are called "adversarial" because they are often created or discovered by another machine learning network, a sort of brinksmanship in the machine learning world between increasingly sophisticated methods to create adversarial examples and increasingly sophisticated methods to detect and avoid them. "Some of these adversarial events could instead be artifacts, and we need to better know what they are in order to know how reliable these networks are," Buckner said.
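
As an illustration of how such adversarial examples are typically constructed, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one common technique; the `model`, `image`, and `label` inputs are hypothetical placeholders, not anything from Buckner's paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return a slightly perturbed copy of `image` that tends to fool `model`."""
    image = image.clone().detach().requires_grad_(True)

    # Compute the loss of the model's prediction on the clean image.
    loss = F.cross_entropy(model(image), label)
    loss.backward()

    # Nudge every pixel a tiny step in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```

The perturbation is small enough to be imperceptible to a human, yet it can flip the network's prediction, which is exactly the kind of event the article asks us to interpret as either a malfunction or an artifact.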


The Dark Secret at the Heart of AI

#artificialintelligence

Nvidia's experimental self-driving car relies on an underlying AI technology known as deep learning, which has proved very powerful at solving problems in recent years and has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries. But this won't happen, or shouldn't happen, unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur, and it's inevitable they will. That's one reason Nvidia's car is still experimental.


The case against investing in machine learning: Seven reasons not to and what to do instead

#artificialintelligence

The word on the street is that if you don't invest in ML as a company or become an ML specialist, the industry will leave you behind. The hype has caught on at all levels, catching everyone from undergrads to VCs. Words like "revolutionary," "innovative," "disruptive," and "lucrative" are frequently used to describe ML. Allow me to share some perspective from my experiences that will hopefully temper this enthusiasm, at least a tiny bit. This essay materialized from having the same conversation several times over with interlocutors who hope ML can unlock a bright future for them. I'm here to convince you that investing in an ML department or ML specialists might not be in your best interest. That is not always true, of course, so read this with a critical eye. The names invoke a sense of extraordinary success, and for good reason. Yet these companies dominated their industries before Andrew Ng launched his first ML lectures on Coursera. The difference between "good enough" and "state-of-the-art" machine learning is significant in academic publications but not in the real world. About once or twice a year, something pops into my newsfeed informing me that someone has improved top-1 ImageNet accuracy from 86% to 87% or so. Our community enshrines state-of-the-art with almost religious significance, so this score's systematic improvement creates an impression that our field is racing towards unlocking the singularity. No one outside of academia cares if you can distinguish between a guitar and a ukulele 1% better. Sit back and think for a minute.


Counterfactual vs Contrastive Explanations in Artificial Intelligence

#artificialintelligence

Counterfactuals are the Rosetta Stone of causal analysis. Introduction: With the proliferation of deep learning [8] and its anticipated use across various applications in society, trust has become a central issue. Given the black-box nature of these deep learning systems, there is a strong desire to understand the reasons behind their decisions. This has led to the sub-field of explainable AI (XAI) gaining prominence [5]. The goal of XAI is to communicate to the consumer, typically a human, why a black-box model made a particular decision.
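
As a rough illustration of the counterfactual idea, the sketch below searches for the smallest change to an input that flips a black-box classifier's decision, answering "what would have had to be different for the model to decide otherwise?" Here `model`, `x`, and `target_class` are hypothetical placeholders, not anything defined in the article.

```python
import torch
import torch.nn.functional as F

def counterfactual(model, x, target_class, steps=200, lr=0.05, dist_weight=0.1):
    """Gradient-based search for a counterfactual input close to x."""
    x_cf = x.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([x_cf], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x_cf)
        # Push the prediction toward the desired class while keeping
        # the counterfactual close to the original input.
        loss = F.cross_entropy(logits, target_class) + dist_weight * (x_cf - x).norm()
        loss.backward()
        optimizer.step()

    return x_cf.detach()
```

The difference between `x` and the returned `x_cf` is then presented to the consumer as the explanation: these are the features that would have had to change for the decision to change.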


Automatic Differentiation in PyTorch

#artificialintelligence

Thanks to autograd, PyTorch's automatic differentiation engine, we don't need to worry about partial derivatives, the chain rule, or anything like it. To illustrate how it works, let's say we're trying to fit a simple linear regression with a single feature x, using Mean Squared Error (MSE) as our loss. We need to create two tensors, one for each parameter our model needs to learn: b and w. Without PyTorch, we would have to start with our loss and work out the partial derivatives to compute the gradients manually. Sure, it would be easy enough to do for this toy problem, but we need something that can scale. So, how do we do it?
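
A minimal sketch of what this looks like in practice, assuming the single-feature linear regression described above with synthetic data, parameters b and w, and an MSE loss:

```python
import torch

# Synthetic data for y = 2x + 1 plus a little noise.
torch.manual_seed(42)
x = torch.rand(100, 1)
y = 2 * x + 1 + 0.1 * torch.randn(100, 1)

# The two parameters our model needs to learn, with gradient tracking enabled.
b = torch.randn(1, requires_grad=True)
w = torch.randn(1, requires_grad=True)

lr = 0.1
for epoch in range(1000):
    yhat = b + w * x                  # forward pass
    loss = ((yhat - y) ** 2).mean()   # Mean Squared Error

    loss.backward()                   # autograd fills b.grad and w.grad

    with torch.no_grad():             # plain gradient-descent update
        b -= lr * b.grad
        w -= lr * w.grad
        b.grad.zero_()
        w.grad.zero_()

print(b.item(), w.item())  # should approach 1 and 2
```

The single call to loss.backward() is where autograd works out both partial derivatives for us; the rest is just an ordinary gradient-descent update.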


PyTorch in Python

#artificialintelligence

First, let me start by explaining how PyTorch will become useful to you. PyTorch has many different uses, but it primarily serves as a replacement for NumPy that can harness the power of GPUs, and as a deep learning research platform that provides flexibility and speed. Artificial intelligence is essentially the building of smart machines capable of performing tasks that normally require human intelligence; it encompasses both machine learning and deep learning. Machine learning gives computer systems the ability to learn and improve from experience without being explicitly programmed, i.e., the development of computer programs that can access data and learn from it on their own.
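
For instance, here is a minimal sketch of the NumPy-replacement role, assuming only that the machine may or may not have a CUDA-capable GPU:

```python
import torch

# Tensors behave much like NumPy arrays, but can live on a GPU when one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.rand(3, 3, device=device)
b = torch.rand(3, 3, device=device)
c = a @ b                  # matrix multiply runs on the GPU if device == "cuda"

print(c.cpu().numpy())     # interoperate with NumPy by moving the result back to the CPU
```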


AI's next big leap

#artificialintelligence

A few years ago, scientists learned something remarkable about mallard ducklings. If one of the first things the ducklings see after birth is two objects that are similar, the ducklings will later follow new pairs of objects that are similar, too. Hatchlings shown two red spheres at birth will later show a preference for two spheres of the same color, even if they are blue, over two spheres that are each a different color. Somehow, the ducklings pick up and imprint on the idea of similarity, in this case the color of the objects. What the ducklings do so effortlessly turns out to be very hard for artificial intelligence. This is especially true of a branch of AI known as deep learning or deep neural networks, the technology powering the AI that defeated the world's Go champion Lee Sedol in 2016. Such deep nets can struggle to figure out simple abstract relations between objects and reason about them unless they study tens or even hundreds of thousands of examples.