

AI is already building a more resilient post-pandemic world - Watson Blog

#artificialintelligence

For some organizations, AI tools may have been perceived as "nice-to-have" technologies prior to 2020. In a 2019 IBM/Morning Consult survey of businesses, 22% of respondents worldwide reported that they were not using or exploring AI. But in a future characterized by uncertainty, only organizations that embrace the most advanced AI tools will be able to weather future storms. The COVID-19 pandemic remains an immediate threat, but organizations of all kinds are looking ahead to build resilient systems that can better withstand future pandemics, as well as natural disasters, cyberthreats, and other destabilizing scenarios. The current crisis is an opportunity to examine the performance of the technological systems we use to manage the various aspects of human existence.


Teaching Neural Networks to be Humans

#artificialintelligence

The Fourth Industrial Revolution (Industry 4.0) has become a framing challenge for scientific researchers. Industry 4.0 is characterized principally by the evolution and convergence of nano-, bio-, information, and cognitive technologies, driving great transformations in the economic, social, cultural, and humanitarian spheres. The experts managing the development and introduction of sixth-technological-paradigm technologies will largely determine whether our nation can ride the wave of Industry 4.0 developments. For the past 25 years, the authors have been developing the concept of systematic computer-simulation training in schools and teacher-training colleges. These ideas have been summarized and presented in their textbook.


Democratize AI: Exploring Future of Technology

#artificialintelligence

When people talk about artificial intelligence (AI), the first companies that come to mind are typically the FAANGs -- Facebook, Apple, Amazon, Netflix and Google. However, this is far from a complete list. Anyone can deploy AI today, and the FAANGs hold no special advantage. The large technology companies achieved early victories with artificial intelligence. Some even built their own specialized hardware, machine learning frameworks, and research and development centers.


Does Artificial Intelligence Keep Its Promises?

#artificialintelligence

Artificial Intelligence sounds freaking amazing: humanoid robots, artificial consciousness, self-learning systems and understanding the human brain. I won't lie; these were the things that motivated me to look into Artificial Intelligence, and to a certain extent they still do. I started out studying Physics and Life Sciences. One thing that caught my attention was the advancements in the field of so-called "Artificial Neural Networks".


What Is Deep Learning? Meaning and How It Works

#artificialintelligence

Deep learning is a sub-field of machine learning and an aspect of artificial intelligence. Put simply, it is meant to emulate the learning approach that humans use to acquire certain types of knowledge. This is somewhat different from machine learning, and people often confuse the two. Deep learning passes data through a sequence of stacked, layered algorithms, while classical machine learning typically applies a single linear algorithm. For an intuition, consider how a child learns what a flower is: shown one example, the child will point at new objects and ask again and again, "Is this a flower?", refining the concept with each answer.
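To make the layered-versus-linear distinction concrete, here is a minimal plain-Python sketch (the function names and the hand-picked XOR weights are illustrative, invented for this note, not taken from the article): a single linear model cannot compute XOR, while two stacked layers with a ReLU nonlinearity can.

```python
def linear_model(x, w, b):
    # classical ML in its simplest form: one linear transformation
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def relu(z):
    return max(0.0, z)

def deep_model(x, layers):
    # deep learning in miniature: stacked layers, each a linear step
    # followed by a nonlinearity, so later layers build on earlier features
    for weights, biases in layers:
        x = [relu(sum(wi * xi for wi, xi in zip(row, x)) + b)
             for row, b in zip(weights, biases)]
    return x

# Hand-picked (not trained) weights that make the two-layer
# network compute XOR -- something no single linear model can do.
xor_layers = [
    ([[1.0, 1.0], [1.0, 1.0]], [0.0, -1.0]),  # hidden layer
    ([[1.0, -2.0]], [0.0]),                   # output layer
]
for x in ([0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]):
    print(x, deep_model(x, xor_layers)[0])  # prints 0.0, 1.0, 1.0, 0.0
```

The point of the toy is only the structure: the hidden layer computes intermediate features, and the output layer combines them, which is exactly what no single linear step can do.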


Maybe Businesses Don't Need To Worry So Much About Inference

#artificialintelligence

I want to talk about a misconception about the difference between inference and prediction. For a well-run, analytically oriented business, there may not be as many reasons to prefer inference over prediction as one may have heard. A common refrain is that data scientists err in centering so much on prediction, a mistake no true-Scotsman statistician would make. I've come to question this more and more. Mere differences in practice between two fields don't immediately imply that either field is inferior or in error.


How AI & Data Analytics Is Impacting Indian Legal System

#artificialintelligence

In a survey conducted by Gurugram-based BML Munjal University (School of Law) in July 2020, it was found that about 42% of lawyers believed that in the next 3 to 5 years as much as 20% of regular, day-to-day legal work could be performed with technologies such as artificial intelligence. The survey also found that about 94% of law practitioners favoured research and analytics as the most desirable skills in young lawyers. Earlier this year, Chief Justice of India SA Bobde, in no uncertain terms, underlined that the Indian judiciary must equip itself by incorporating artificial intelligence into its system, especially for document management and cases of a repetitive nature. With more industries and professional sectors embracing AI and data analytics, the legal industry, albeit in a limited way, is no exception. According to the 2020 report of the National Judicial Data Grid, over the last decade, 3.7 million cases were pending across various courts in India, including high courts, district and taluka courts.


An IQ Test Proves That Neural Networks Are Capable of Abstract Reasoning

#artificialintelligence

Using those primitives, DeepMind generated a dataset known as Procedurally Generated Matrices (PGM) that consists of triplets [progression, shape, color]. The relationship between the attributes in a triplet represents an abstract challenge. For instance, if the first attribute is progression, the values of the other two attributes must progress along rows or columns of the matrix. In order to show signs of abstract reasoning using PGM, a neural network must be able to explicitly compute relationships between different matrix images and evaluate the viability of each potential answer in parallel. To address this challenge, the DeepMind team created a new neural network architecture called the Wild Relation Network (WReN), in recognition of John Raven's wife Mary Wild, who was also a contributor to the original IQ test. In the WReN architecture, a convolutional neural network (CNN) processes each context panel and an individual answer-choice panel independently to produce 9 vector embeddings. This set of embeddings is then passed to a relation network, whose output is a single sigmoid unit encoding the "score" for the associated answer-choice panel.
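As a rough, untrained sketch of that scoring scheme (everything here -- `wren_score`, the one-layer `mlp` stand-in for the paper's g/f networks, and the hand-set toy weights and embeddings -- is invented for illustration, not DeepMind's code): eight context-panel embeddings plus one candidate embedding form the nine panels; pairs of panel embeddings are combined, summed, and squashed to a single score per candidate.

```python
import math

def mlp(vec, weights):
    # one linear layer + ReLU; stands in for the deeper g and f MLPs
    return [max(0.0, sum(w * v for w, v in zip(row, vec))) for row in weights]

def wren_score(context_embs, answer_emb, g_w, f_w):
    """Relation-network style score for one candidate answer panel."""
    panels = context_embs + [answer_emb]   # 8 context + 1 candidate = 9
    pair_sum = None
    # g is applied to every ordered pair of panel embeddings, then summed
    for p in panels:
        for q in panels:
            g_out = mlp(p + q, g_w)
            pair_sum = (g_out if pair_sum is None
                        else [a + b for a, b in zip(pair_sum, g_out)])
    # the aggregated pair features pass through f; a sigmoid gives the score
    logit = mlp(pair_sum, f_w)[0]
    return 1.0 / (1.0 + math.exp(-logit))

# Toy 2-d embeddings; in the real WReN these come from a shared CNN.
context = [[0.1 * i, 0.05 * i] for i in range(8)]
g_w = [[0.1, -0.1, 0.1, -0.1], [0.05, 0.05, -0.05, 0.05]]
f_w = [[0.2, 0.1]]
scores = [wren_score(context, ans, g_w, f_w)
          for ans in ([0.8, 0.4], [0.0, 0.9])]
best = max(range(len(scores)), key=scores.__getitem__)
```

Each candidate is scored independently, which is what lets the model "evaluate the viability of each potential answer in parallel": the answer with the highest sigmoid score is chosen.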


Using Machine Learning to Predict Price of Product on Mercari

#artificialintelligence

Original article was published by Aniket Mishrikotkar on Deep Learning on Medium. The numbers have no way of speaking for themselves. We …


ZFNet: An Explanation of Paper with Code

#artificialintelligence

For feature visualization, the authors use deconvolutional networks (deconvnets). Think of a deconvnet as the decoder part of an autoencoder: it does the reverse of a normal convolutional network, using unpooling and filtering to recover pixels from features. The only confusing part of this network is how it undoes the pooling, because when pooling is done, only one value remains out of the N² values in an N×N window. The discarded data cannot be recovered; the max value is still there, but it is of no use if we don't know where it was located in the output of the convolutional layer.
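ZFNet's answer is to record the location of each maximum (the "switches") during pooling and put the value back at that position during unpooling. A one-dimensional sketch of the idea (`max_pool_1d` and `unpool_1d` are illustrative helpers, not code from the paper):

```python
def max_pool_1d(values, size):
    # records the index ("switch") of each max so unpooling
    # can put the value back where it came from
    pooled, switches = [], []
    for i in range(0, len(values), size):
        window = values[i:i + size]
        j = max(range(len(window)), key=window.__getitem__)
        pooled.append(window[j])
        switches.append(i + j)
    return pooled, switches

def unpool_1d(pooled, switches, length):
    # approximate inverse: everything except the max stays zero
    out = [0.0] * length
    for v, j in zip(pooled, switches):
        out[j] = v
    return out

x = [0.1, 0.9, 0.4, 0.3, 0.8, 0.2]
p, s = max_pool_1d(x, 2)        # p = [0.9, 0.4, 0.8], s = [1, 2, 4]
print(unpool_1d(p, s, len(x)))  # [0.0, 0.9, 0.4, 0.0, 0.8, 0.0]
```

The non-max values are lost for good, but the switches preserve *where* each surviving value belongs, which is enough for the visualization the paper needs.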