Future of AI Part 5: The Cutting Edge of AI

#artificialintelligence

Edmond de Belamy is a portrait painting created in 2018 with a Generative Adversarial Network by the Paris-based arts collective Obvious; it sold for $432,500 at Sotheby's in October 2018.


Certificate Course on Artificial Intelligence and Deep Learning by IIT Roorkee

#artificialintelligence

Have you ever wondered how self-driving cars run on roads, how Netflix recommends movies you may like, how Amazon recommends products, how Google search returns such accurate results, how speech recognition on your smartphone works, or how the world champion was beaten at the game of Go? Machine learning is behind these innovations. In recent times, it has been shown that machine learning and deep learning approaches to solving a problem give far better accuracy than other approaches. This has led to a tsunami in the area of machine learning. Most of the domains that were considered specializations are now being merged into machine learning. Every domain of computing, such as data analysis, software engineering, and artificial intelligence, is going to be impacted by machine learning.


Understanding the difference between AI, ML and DL!!

#artificialintelligence

"A recursive neural network is a kind of deep neural network created by applying the same set of weights recursively over a structured input, to produce a structured prediction over variable-size input structures, or a scalar prediction on it, by traversing a given structure in topological order."


Alphabet's Next Billion-Dollar Business: 10 Industries To Watch - CB Insights Research

#artificialintelligence

Alphabet is using its dominance in the search and advertising spaces -- and its massive size -- to find its next billion-dollar business. From healthcare to smart cities to banking, here are 10 industries the tech giant is targeting. With growing threats from its big tech peers Microsoft, Apple, and Amazon, Alphabet's drive to disrupt has become more urgent than ever before. The conglomerate is leveraging the power of its first moats -- search and advertising -- and its massive scale to find its next billion-dollar businesses. To protect its current profits and grow more broadly, Alphabet is edging its way into industries adjacent to the ones where it has already found success and entering new spaces entirely to find opportunities for disruption.

Evidence of Alphabet's efforts is showing up in several major industries. For example, the company is using artificial intelligence to understand the causes of diseases like diabetes and cancer and how to treat them. Those learnings feed into community health projects that serve the public, and also help Alphabet's effort to build smart cities. Elsewhere, Alphabet is using its scale to build a better virtual assistant and own the consumer electronics software layer. It's also leveraging that scale to build a new kind of Google Pay-operated checking account. In this report, we examine how Alphabet and its subsidiaries are currently working to disrupt 10 major industries -- from electronics to healthcare to transportation to banking -- and what else might be on the horizon.

Within the world of consumer electronics, Alphabet has already found dominance with one product: Android. The global mobile operating system market is dominated by this Linux-based OS, which Google acquired in 2005 to fend off Microsoft and Windows Mobile. Today, however, Alphabet's consumer electronics strategy is being driven by its work in artificial intelligence. Google is building some of its own hardware under the Made by Google line -- including the Pixel smartphone, the Chromebook, and the Google Home -- but the company is doing more important work on hardware-agnostic software products like Google Assistant (which is even available on iOS).


Artificial Intelligence for Social Good: A Survey

arXiv.org Artificial Intelligence

Its impact is drastic and real: YouTube's AI-driven recommendation system would present sports videos for days if one happens to watch a live baseball game on the platform [1]; email writing becomes much faster with machine learning (ML) based auto-completion [2]; and many businesses have adopted natural language processing based chatbots as part of their customer service [3]. AI has also greatly advanced human capabilities in complex decision-making processes, ranging from determining how to allocate security resources to protect airports [4] to games such as poker [5] and Go [6]. All such tangible and stunning progress suggests that an "AI summer" is happening. As some put it, "AI is the new electricity" [7]. Meanwhile, in the past decade, an emerging theme in the AI research community has been so-called "AI for social good" (AI4SG): researchers aim to develop AI methods and tools that address problems at the societal level and improve the well-being of society.


What Is The Difference Between Deep Learning, Machine Learning and AI?

#artificialintelligence

Over the past few years, the term "deep learning" has firmly worked its way into business language when the conversation is about Artificial Intelligence (AI), Big Data and analytics. And with good reason – it is an approach to AI which is showing great promise when it comes to developing the autonomous, self-teaching systems which are revolutionizing many industries. Deep Learning is used by Google in its voice and image recognition algorithms, by Netflix and Amazon to decide what you want to watch or buy next, and by researchers at MIT to predict the future. The ever-growing industry which has established itself to sell these tools is always keen to talk about how revolutionary this all is. But what exactly is it?
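
As a rough illustration of what "deep" means here, the sketch below stacks several layers so that each transforms the output of the previous one; the nonlinearity between layers is what lets the stack learn richer representations than a single layer. All sizes and weights are toy values chosen for the example, not a trained system.

```python
# A minimal sketch of a "deep" model: several stacked layers, each
# transforming the previous layer's output. Illustrative only, not trained.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [8, 16, 16, 3]  # input -> two hidden layers -> output (toy sizes)
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Pass an input through every layer; stacking nonlinear layers is
    what distinguishes a deep network from a single-layer model."""
    for W in weights[:-1]:
        x = np.maximum(0.0, x @ W)  # ReLU nonlinearity between layers
    return x @ weights[-1]          # final linear layer (e.g. class scores)

scores = forward(rng.standard_normal(8))
print(scores)  # three unnormalized class scores
```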


A 20-Year Community Roadmap for Artificial Intelligence Research in the US

arXiv.org Artificial Intelligence

Decades of research in artificial intelligence (AI) have produced formidable technologies that are providing immense benefit to industry, government, and society. AI systems can now translate across multiple languages, identify objects in images and video, streamline manufacturing processes, and control cars. The deployment of AI systems has not only created a trillion-dollar industry that is projected to quadruple in three years, but has also exposed the need to make AI systems fair, explainable, trustworthy, and secure. Future AI systems will rightfully be expected to reason effectively about the world in which they (and people) operate, handling complex tasks and responsibilities effectively and ethically, engaging in meaningful communication, and improving their awareness through experience. Achieving the full potential of AI technologies poses research challenges that require a radical transformation of the AI research enterprise, facilitated by significant and sustained investment. These are the major recommendations of a recent community effort coordinated by the Computing Community Consortium and the Association for the Advancement of Artificial Intelligence to formulate a Roadmap for AI research and development over the next two decades.


Natural Adversarial Examples

arXiv.org Machine Learning

We introduce natural adversarial examples -- real-world, unmodified, and naturally occurring examples that cause classifier accuracy to significantly degrade. We curate 7,500 natural adversarial examples and release them in an ImageNet classifier test set that we call ImageNet-A. This dataset serves as a new way to measure classifier robustness. Like l_p adversarial examples, ImageNet-A examples successfully transfer to unseen or black-box classifiers. For example, on ImageNet-A a DenseNet-121 obtains around 2% accuracy, an accuracy drop of approximately 90%. Recovering this accuracy is not simple because ImageNet-A examples exploit deep flaws in current classifiers including their over-reliance on color, texture, and background cues. We observe that popular training techniques for improving robustness have little effect, but we show that some architectural changes can enhance robustness to natural adversarial examples. Future research is required to enable robust generalization to this hard ImageNet test set.
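
The robustness measurement the abstract describes can be sketched roughly as follows: score a pretrained classifier's top-1 accuracy on ImageNet-A images. This is an assumed setup, not the paper's released evaluation code; the local dataset path is hypothetical, and the placeholder label mapping must be replaced with the 200-class list published with ImageNet-A (which covers a 200-class subset of the 1000 ImageNet classes).

```python
# A rough sketch of evaluating a pretrained ImageNet classifier on ImageNet-A.
# The dataset path and the label mapping below are assumptions.
import torch
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder("imagenet-a/", transform=preprocess)  # assumed path
loader = torch.utils.data.DataLoader(dataset, batch_size=32)

model = models.densenet121(pretrained=True).eval()

# PLACEHOLDER: maps ImageFolder's sorted folder labels (0..199) to the
# corresponding ImageNet class indices (0..999). In practice this comes from
# the wnid list distributed with ImageNet-A; torch.arange(200) is NOT correct.
folder_to_imagenet = torch.arange(200)

correct = total = 0
with torch.no_grad():
    for images, labels in loader:
        logits = model(images)                     # scores over 1000 ImageNet classes
        preds = logits[:, folder_to_imagenet].argmax(dim=1)  # restrict to the 200 classes
        correct += (preds == labels).sum().item()
        total += labels.numel()
print(f"top-1 accuracy: {correct / total:.1%}")
```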


No Time Like Now to Leverage AI - TEK2day

#artificialintelligence

In deploying artificial intelligence ("AI") or one of its sibling technologies – machine learning and deep learning – the first order of business is defining the business problem. Next, understand your enterprise data and third-party data in terms of scope and quality. Once those elements are in place, you are ready to embark on your AI journey, on which your imagination will be the primary limiting factor. These are problems that can be answered by deploying some combination of AI, machine learning, deep learning/neural networks, and/or natural language processing ("NLP"). Data quality is important: "garbage in, garbage out".
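
A minimal sketch of the "understand your data" step might look like the following; the file name, column handling, and sparsity threshold are hypothetical, and real data profiling would go much further.

```python
# An illustrative first pass at profiling data scope and quality before
# any modeling. The input file and the 50% threshold are assumptions.
import pandas as pd

df = pd.read_csv("enterprise_data.csv")  # assumed input

report = {
    "rows": len(df),
    "columns": df.shape[1],
    "duplicate_rows": int(df.duplicated().sum()),
    "missing_by_column": df.isna().mean().round(3).to_dict(),  # fraction missing per column
}
print(report)

# "Garbage in, garbage out": drop exact duplicates and flag columns that
# are too sparse to trust before they ever reach a model.
df = df.drop_duplicates()
too_sparse = [c for c in df.columns if df[c].isna().mean() > 0.5]
print("columns with >50% missing values:", too_sparse)
```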