Understand adversarial attacks by doing one yourself with this tool

#artificialintelligence

In recent years, the media have been paying increasing attention to adversarial examples: input data, such as images and audio, that have been modified to manipulate the behavior of machine learning algorithms. Stickers pasted on stop signs that cause computer vision systems to mistake them for speed limit signs; glasses that fool facial recognition systems; turtles that get classified as rifles -- these are just some of the many adversarial examples that have made headlines in the past few years. There is increasing concern about the cybersecurity implications of adversarial examples, especially as machine learning systems become an important component of many of the applications we use. AI researchers and security experts are engaging in various efforts to educate the public about adversarial attacks and to create more robust machine learning systems. Among these efforts is adversarial.js, the tool this article uses to let you stage an adversarial attack yourself.
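The tool mentioned in the article, adversarial.js, runs in the browser; as a rough illustration of the underlying idea, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the simplest ways to craft an adversarial image. The choice of PyTorch, the pretrained model, the epsilon value, and the tensor shapes are placeholder assumptions for illustration, not details taken from the article or from adversarial.js itself.

```python
# Minimal FGSM sketch (illustrative only; not the adversarial.js implementation).
# Assumes a pretrained ImageNet classifier and an input image with pixel values in [0, 1].
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(pretrained=True)
model.eval()

def fgsm_attack(image: torch.Tensor, label: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    """Return a copy of `image` nudged in the direction that increases the
    classifier's loss on `label`, bounded by `epsilon` per pixel."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)   # how wrong is the model right now?
    loss.backward()                               # gradient of the loss w.r.t. the pixels
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()   # keep pixel values in a valid range

# Usage sketch: `x` is a (1, 3, 224, 224) image batch, `y` its true class index (shape (1,)).
# x_adv = fgsm_attack(x, y)
# The perturbation is visually tiny, yet model(x_adv) often predicts a different class.
```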


A Brief Introduction to Edge Computing and Deep Learning

#artificialintelligence

Welcome to my first blog on topics in artificial intelligence! Here I will introduce the topic of edge computing, with context in deep learning applications. This blog is largely adapted from a survey paper written by Xiaofei Wang et al.: Convergence of Edge Computing and Deep Learning: A Comprehensive Survey. If you're interested in learning more about any topic covered here, there are plenty of examples, figures, and explanations in the full 35-page survey: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8976180. Now, before we begin, I'd like to take a moment to motivate why edge computing and deep learning can be very powerful when combined: deep learning is an increasingly capable branch of machine learning that allows computers to detect objects, recognize speech, translate languages, and make decisions, and researchers are solving more machine learning problems with these advanced techniques by the day.


Micron Technology hiring Intern - Artificial Intelligence Solutions Engineer in San Jose, California, United States

#artificialintelligence

Micron's vision is to transform how the world uses information to enrich life for all. Join an inclusive team focused on one thing: using our expertise in the relentless pursuit of innovation for customers and partners. The solutions we create help make everything from virtual reality experiences to breakthroughs in neural networks possible. We do it all while committing to integrity, sustainability, and giving back to our communities. Because doing so can spark the very innovation we are pursuing.


New Artificial Neural Networks To Use Graphene Memristors

#artificialintelligence

Research in traditional computing systems is slowing down, and new types of computing are now moving to the forefront. A team of engineers from Pennsylvania State University (Penn State) in the U.S. has been working on a type of computing modeled on the brain's neural networks and its analog nature. The team has discovered that graphene-based memory resistors (memristors) show promise for this new form of computing. Their findings were recently published in Nature Communications. "We have powerful computers, no doubt about that, the problem is you have to store the memory in one place and do the computing somewhere else," said Saptarshi Das, the team leader and Penn State assistant professor of engineering science and mechanics.


Engineering Practices for Machine Learning Lifecycle at Google and Microsoft

#artificialintelligence

As demand for AI applications grows, we've seen companies put a lot of effort into building Machine Learning Engineering (MLE) tools tailored to their needs. Industries face many challenges in creating a well-designed environment for their Machine Learning (ML) lifecycle: building, deploying, and managing ML models in production. This post will cover two papers explaining MLE practices from two of the leading tech companies: Google and Microsoft. For a bit of context, this article is part of a graduate-level course at Columbia University, COMS6998 Practical Deep Learning System Performance, taught by Prof. Parijat Dube, who also works at IBM New York as a Research Staff Member. The first section will present a paper from Google and will touch on the building part of the ML lifecycle.


The case against investing in machine learning: Seven reasons not to and what to do instead

#artificialintelligence

The word on the street is that if you don't invest in ML as a company or become an ML specialist, the industry will leave you behind. The hype has caught on at all levels, catching everyone from undergrads to VCs. Words like "revolutionary," "innovative," "disruptive," and "lucrative" are frequently used to describe ML. Allow me to share some perspective from my experiences that will hopefully temper this enthusiasm, at least a tiny bit. This essay materialized from having the same conversation several times over with interlocutors who hope ML can unlock a bright future for them. I'm here to convince you that investing in an ML department or in ML specialists might not be in your best interest. That is not always true, of course, so read this with a critical eye. The names invoke a sense of extraordinary success, and for good reason. Yet these companies dominated their industries before Andrew Ng launched his first ML lectures on Coursera. The difference between "good enough" and "state-of-the-art" machine learning is significant in academic publications but not in the real world. About once or twice a year, something pops into my newsfeed informing me that someone improved the top-1 ImageNet accuracy from 86% to 87% or so. Our community enshrines the state of the art with almost religious significance, so this score's steady improvement creates the impression that our field is racing towards unlocking the singularity. No one outside of academia cares if you can distinguish between a guitar and a ukulele 1% better. Sit back and think for a minute.


What enterprise CISOs need to know about AI and cybersecurity

#artificialintelligence

Modern-day enterprise security is like guarding a fortress that is being attacked on all fronts, from digital infrastructure to applications to network endpoints. That complexity is why AI technologies such as deep learning and machine learning have emerged as game-changing defensive weapons in the enterprise's arsenal over the past three years. No other technology can keep up: AI can rapidly analyze billions of data points and glean patterns that help a company act intelligently and instantaneously to neutralize many potential threats. Beginning about five years ago, investors started pumping hundreds of millions of dollars into a wave of new security startups that leverage AI, including CrowdStrike, Darktrace, Vectra AI, and Vade Secure, among others.


Future of AI according to top AI experts of 2020: In-Depth Guide

#artificialintelligence

Investment and interest in AI are expected to increase in the long run, as major AI use cases are likely to materialize thanks to expected improvements in the three building blocks of AI: availability of more data, better algorithms, and more computing power. Short-term changes are hard to predict, and we could experience another AI winter; however, it would likely be short-lived. According to AI Index, the number of active AI startups in the U.S. increased 113% from 2015 to 2018. Thanks to recent advances in deep learning, AI is already powering search engines, online translators, virtual assistants, and numerous marketing and sales decisions. The Google Trends graph below shows the number of queries including the term "artificial intelligence".


Essential Enterprise AI Companies Landscape

#artificialintelligence

Enterprise AI companies are increasingly growing in value and relevance. Global IT spending is expected to soon reach and surpass $3.8 trillion, and enterprise AI companies are at the heart of this growth. This article will explain not only what enterprise AI companies are but also what they produce. We'll also look at how enterprise AI companies are making an impact in fields such as finance, logistics, and healthcare. Enterprise AI companies produce enterprise software, also known as enterprise application software or EAS for short. Generally, EAS is large-scale software developed with the aim of supporting or solving organization-wide problems. Software developed by enterprise AI companies can perform a number of different roles, and its function varies depending on the task and sector it is designed for. In other words, if software "takes care of a majority of tasks and problems inherent to the enterprise, then it can be defined as enterprise software". Many enterprise AI companies use a combination of machine learning, deep learning, and data science solutions. This combination enables complex tasks such as data preparation or predictive analytics to be carried out quickly and reliably. Some enterprise AI companies are established names, backed by decades of experience; other enterprise AI companies are relative newcomers, adopting a fresh approach to AI and problem-solving. This article and infographic will highlight a combination of both, focusing on the real competitors for mergers and acquisitions as well as product development. To help you identify the best enterprise AI software for your business, we've segmented the landscape of enterprise AI solutions into categories. Many of these companies could be classified in multiple categories; however, we have focused on their primary differentiating features. You're welcome to re-use the infographic below as long as the content remains unmodified and in full. The automotive industry is at the cutting edge of using artificial intelligence to support, imitate, and augment human action. Self-driving car companies and the semi-autonomous vehicles of the future will rely heavily on AI systems, leveraging advanced reaction times, mapping, and machine-based systems.


New deep learning models: Fewer neurons, more intelligence – IAM Network

#artificialintelligence

Artificial intelligence has arrived in our everyday lives -- from search engines to self-driving cars. This has to do with the enormous computing power that has become available in recent years. But new results from AI research now show that simpler, smaller neural networks can be used to solve certain tasks even better, more efficiently, and more reliably than ever before. An international research team from TU Wien (Vienna), IST Austria, and MIT (USA) has developed a new artificial intelligence system based on the brains of tiny animals, such as threadworms. This novel AI system can control a vehicle with just a few artificial neurons.