Is Fine Art the Next Frontier of AI?

#artificialintelligence

In 1950, Alan Turing proposed the Turing Test as a measure of a machine's ability to display human-like intelligent behavior, asking: "Are there imaginable digital computers which would do well in the imitation game?" In most applications of AI, a model is created to imitate human judgment and implement it at scale, whether in autonomous vehicles, text summarization, image recognition, or product recommendation. By the nature of imitation, a computer can only replicate what humans have already done, based on previous data. This leaves no room for genuine creativity, which relies on innovation, not imitation.


Artificial Intelligence and Machine Learning – Path to Intelligent Automation

#artificialintelligence

With evolving technologies, intelligent automation has become a top priority for many executives in 2020. Forrester predicts the industry will continue to grow from $250 million in 2016 to $12 billion in 2023. As more companies identify and implement Artificial Intelligence (AI) and Machine Learning (ML), the enterprise is gradually being reshaped. Industries across the globe integrate AI and ML into their businesses to enable swift changes to key processes such as marketing, customer relationship management, product development, production and distribution, quality control, order fulfilment, and resource management. AI encompasses a wide range of technologies, including machine learning, deep learning (DL), optical character recognition (OCR), natural language processing (NLP), and voice recognition, which, when combined with robotics, create intelligent automation for organizations across multiple industrial domains.


Image Classification Model

#artificialintelligence

Image classification is one of the most important applications of computer vision. Its applications range from classifying objects in self-driving cars to identifying blood cells in the healthcare industry, and from spotting defective items in manufacturing to building systems that can detect whether a person is wearing a mask. Image classification is used in one way or another across all of these industries. Which framework do they use? You have probably read a lot about the differences between deep learning frameworks, including TensorFlow, PyTorch, Keras, and many more.
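Whichever framework is chosen, the pipeline is the same: images become numeric arrays, and a model maps each array to a class score. As a minimal framework-free sketch (the data, class names, and nearest-centroid model here are illustrative assumptions, not a production approach), a tiny classifier for the mask/no-mask example might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "images": 8x8 grayscale arrays. Two classes differ in mean
# brightness, standing in for visually distinct categories like mask/no-mask.
bright = rng.normal(0.8, 0.1, size=(20, 8, 8))  # hypothetical class 1
dark = rng.normal(0.2, 0.1, size=(20, 8, 8))    # hypothetical class 0

X = np.concatenate([dark, bright]).reshape(40, -1)  # flatten to feature vectors
y = np.array([0] * 20 + [1] * 20)

# Nearest-centroid classifier: one mean feature vector per class.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def classify(image):
    """Return the class whose centroid is closest to the flattened image."""
    dists = np.linalg.norm(centroids - image.reshape(-1), axis=1)
    return int(dists.argmin())

print(classify(np.full((8, 8), 0.85)))  # → 1 (closer to the bright centroid)
```

A real deep learning framework replaces the hand-built centroids with learned convolutional features, but the input/output contract (array in, class label out) is identical.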


Deep Learning In Gaming

#artificialintelligence

Hi All - This event was originally going to be held during GDC week back in March but had to be postponed. Excited to be hosting this event virtually during GDC Summer on Aug 4th. Games have always been at the forefront of AI, and they serve as a good testing bed for AI before we put it to use in the real world. It's therefore natural to look to gaming for a peek at new techniques being discovered in AI. What started with self-learning AI in games has now translated into solving real-world problems in computer vision, natural language processing, and self-driving cars.


Why deep learning won't give us level 5 self-driving cars – IAM Network

#artificialintelligence

Tesla CEO Elon Musk believes the basic functionality of level 5 self-driving cars will be completed by the end of 2020. "I remain confident that we will have the basic functionality for level 5 autonomy complete this year." Musk's remarks triggered much discussion in the media about whether we are close to having full self-driving cars on our roads. Like many other software engineers, I don't think we'll be seeing driverless cars (I mean cars that don't have human drivers) any time soon, let alone the end of this year. I wrote a column about this on PCMag, and received a lot of feedback (both positive and negative).


AI algorithm detects deepfake videos with high accuracy

#artificialintelligence

Artificial intelligence (AI) contributes significantly to good in the world. From reducing pollution to making roads safer with self-driving cars to enabling better healthcare through medical big-data analysis, AI still has plenty of untapped potential. Unfortunately, like any technology, AI can also be used by those with less noble intentions. Such is the case with an AI-based technique called "deepfake" (a combination of "deep learning" and "fake"), which uses deep neural networks to easily create fake videos in which the face of one person is superimposed on that of another. These tools are easy to use, even for people with no background in programming or video editing.


Why deep learning won't give us level 5 self-driving cars

#artificialintelligence

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI. "I'm extremely confident that level 5 [self-driving cars] or essentially complete autonomy will happen, and I think it will happen very quickly," Tesla CEO Elon Musk said in a video message to the World Artificial Intelligence Conference in Shanghai earlier this month. "I remain confident that we will have the basic functionality for level 5 autonomy complete this year." Musk's remarks triggered much discussion in the media about whether we are close to having full self-driving cars on our roads. Like many other software engineers, I don't think we'll be seeing driverless cars (I mean cars that don't have human drivers) any time soon, let alone the end of this year. I wrote a column about this on PCMag, and received a lot of feedback (both positive and negative). So I decided to write a more technical and detailed version of my views about the state of self-driving cars. I will explain why, in its current state, deep learning, the technology used in Tesla's Autopilot, won't be able to solve the challenges of level 5 autonomous driving.


Guide to Interpretable Machine Learning

#artificialintelligence

If you can't explain it simply, you don't understand it well enough.

Disclaimer: This article draws and expands upon material from (1) Christoph Molnar's excellent book on Interpretable Machine Learning, which I definitely recommend to the curious reader, (2) a deep learning visualization workshop from Harvard ComputeFest 2020, and (3) material from CS282R at Harvard University taught by Ike Lage and Hima Lakkaraju, who are both prominent researchers in the field of interpretability and explainability. This article is meant to condense and summarize the field of interpretable machine learning for the average data scientist and to stimulate interest in the subject.

Machine learning systems are increasingly employed in complex, high-stakes settings such as medicine. Despite this increased utilization, there is still a lack of techniques for explaining and interpreting the decisions of these deep learning algorithms. This can be very problematic in areas where the decisions of algorithms must be explainable or attributable to certain features due to laws or regulations (such as the right to explanation), or where accountability is required. The need for algorithmic accountability has been highlighted many times; the most notable cases are Google's facial recognition algorithm, which labeled some Black people as gorillas, and Uber's self-driving car, which ran a stop sign. Because Google was unable to fix the algorithm and remove the bias behind this issue, it instead removed words relating to monkeys from Google Photos' search engine. This illustrates the alleged black-box nature of many machine learning algorithms. The black-box problem is predominantly associated with the supervised machine learning paradigm due to its predictive nature. Accuracy alone is no longer enough.

Academics in deep learning are acutely aware of this interpretability and explainability problem, and while some argue that these models are essentially black boxes, several techniques have been developed in recent years for visualizing aspects of deep neural networks, such as the features and representations they have learned. The term "info-besity" has been coined to describe the difficulty of providing transparency when decisions are made on the basis of many individual features, due to an overload of information.
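One widely used model-agnostic interpretability technique covered in Molnar's book is permutation feature importance: shuffle one feature's values and measure how much the model's accuracy drops. A minimal sketch, with a toy dataset and a stand-in "black box" model that are illustrative assumptions:

```python
import random

random.seed(0)

# Toy dataset: the label depends only on feature 0; feature 1 is pure noise.
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if row[0] > 0.5 else 0 for row in X]

def model(row):
    """Stand-in 'black box': thresholds feature 0 (matches how y was built)."""
    return 1 if row[0] > 0.5 else 0

def accuracy(X, y):
    return sum(model(r) == t for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature):
    """Importance = accuracy drop after shuffling one feature's column."""
    base = accuracy(X, y)
    col = [row[feature] for row in X]
    random.shuffle(col)
    X_perm = [row[:feature] + [v] + row[feature + 1:] for row, v in zip(X, col)]
    return base - accuracy(X_perm, y)

print(permutation_importance(X, y, 0))  # large drop: feature 0 drives predictions
print(permutation_importance(X, y, 1))  # 0.0: the model never looks at feature 1
```

Because it only needs predictions, this kind of probe applies equally to a two-line threshold model and to a deep neural network, which is why model-agnostic methods are a common first step before the network-specific visualization techniques mentioned above.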


Looking into the black box of deep learning - ScienceBlog.com

#artificialintelligence

Deep learning systems are revolutionizing technology around us, from voice recognition that pairs you with your phone to autonomous vehicles that are increasingly able to see and recognize obstacles ahead. But much of this success involves trial and error when it comes to the deep learning networks themselves. A group of MIT researchers recently reviewed their contributions to a better theoretical understanding of deep learning networks, providing direction for the field moving forward. "Deep learning was in some ways an accidental discovery," explains Tommy Poggio, investigator at the McGovern Institute for Brain Research, director of the Center for Brains, Minds, and Machines (CBMM), and the Eugene McDermott Professor in Brain and Cognitive Sciences. "We still do not understand why it works. A theoretical framework is taking form, and I believe that we are now close to a satisfactory theory. It is time to stand back and review recent insights."
