Machine Learning


Tech Talks: Lead AI Scientist Bin Shao on Artificial Intelligence

#artificialintelligence

Welcome to eSimplicity's Tech Talks blog series! Launched by eSimplicity's technical writing interns, Tech Talks explores a range of topics across the tech industry, from personal experiences within the company to emerging technologies, with the aim of gathering diverse perspectives and shedding light on engaging subjects in the sector. In a recent interview, eSimplicity's Lead AI Scientist Bin Shao shared his thoughts on the prominence of artificial intelligence and its place in the future. Bin has over 20 years of professional experience in artificial intelligence, machine learning, computer vision, and cybersecurity.


Why Is Robotics an Emerging Technology Nowadays?

#artificialintelligence

We have explained why artificial intelligence, machine learning, and deep learning are all emerging technologies. Please watch the video for more details.


Introduction To Web Applications: Part 1

#artificialintelligence

It is hardly surprising that web applications have developed so impressively over roughly the last decade. If one were to synthesize the overall experience of desktop applications, a couple of valid arguments would almost certainly come up. First and foremost, a piece of desktop software has to be manually retrieved (downloaded from the Internet or obtained physically) and installed, which can present issues to the "non-technical" user. Needless to say, this process can bring, and frequently has brought, subsequent issues with updating and/or patching the software, system requirements, and so on. Cross-platform development effort is also needed to provide versions for the three major operating systems (macOS, Windows, Linux) if the goal is to reach as large an audience as possible. Desktop applications also used to be bound to the machine in terms of licensing, which further reduced flexibility in how one approached one's work. A further valid point concerns the limited, often delayed, user feedback and how that can shrink the range of testing scenarios. Of course, no solution is built out of disadvantages alone, and we are not dealing with such a case here either: desktop applications tend to be faster and are generally considered more secure than their web counterparts. However, history has shown that while the web is not a perfect solution, its advantages were simply too powerful to ignore. Not only does a web application require no installation by the user, but updates can also be rolled out and made available to all users instantly after a new release.
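To make the deployment contrast concrete, here is a minimal sketch, using only Python's standard library, of why "no installation" holds for web applications: everything is served over HTTP, so users only need a browser and a URL, and a change on the server reaches everyone on their next request. The host, port, and page content are illustrative assumptions, not anything prescribed by the article.

```python
# Minimal sketch (Python standard library only): a web application is just
# code running behind an HTTP endpoint, so there is nothing for users to install.
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # Every request is served by the latest code on the server; an update
    # reaches all users on their next page load, with nothing to patch locally.
    start_response("200 OK", [("Content-Type", "text/html; charset=utf-8")])
    return [b"<h1>Hello from a web application</h1>"]

if __name__ == "__main__":
    with make_server("", 8000, app) as server:   # port 8000 is an arbitrary choice
        print("Serving on http://localhost:8000 ...")
        server.serve_forever()
```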


Pittsburgh Supercomputer Powers Machine Learning Analysis of Rare East Asian Stamps

#artificialintelligence

Setting aside the relatively recent rise of electronic signatures, personalized stamps have long been a popular form of identification for formal documents in East Asia. These identifiers, easily forged but culturally ubiquitous, are the subject of research by Raja Adal, an associate professor of history at the University of Pittsburgh. But, it turns out, studying these stamps at scale required a prohibitive amount of human expertise, so Adal turned to supercomputer-powered AI to lend a hand. "[From] the perspective of the social sciences, what matters is not that these instruments are impossible to forge (they're not) but that they are part of a process by which documents are produced, certified, circulated and approved," Adal explained in an interview with Ken Chiacchia of the Pittsburgh Supercomputing Center (PSC). "In order to understand the details of this process, it's very helpful to have a large database. But until now, it was pretty much impossible to easily index tens of thousands of stamps in an archive of documents, especially when these documents are all in a language like Japanese, which uses thousands of different Chinese characters."


Using AI and old reports to understand new medical images

#artificialintelligence

Getting a quick and accurate reading of an X-ray or another medical image can be vital to a patient's health and might even save a life. Obtaining such an assessment depends on the availability of a skilled radiologist, so a rapid response is not always possible. For that reason, says Ruizhi "Ray" Liao, a postdoc and recent PhD graduate at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), "we want to train machines that are capable of reproducing what radiologists do every day." Liao is first author of a new paper, written with other researchers at MIT and Boston-area hospitals, that is being presented this fall at MICCAI 2021, an international conference on medical image computing. Although the idea of using computers to interpret images is not new, the MIT-led group is drawing on an underused resource to improve the interpretive abilities of machine learning algorithms: the vast body of radiology reports that accompany medical images, written by radiologists in routine clinical practice.
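The paper's specific method is not described in this excerpt, so the following is only a generic sketch of one common way to exploit paired image/report data: train an image encoder and a report encoder so that matching pairs end up with similar embeddings, letting the text supervise the image model. The architectures, the bag-of-words report representation, and the random tensors standing in for real X-rays and reports are all illustrative assumptions.

```python
# Hedged sketch (not necessarily the authors' method): align image and report
# embeddings so the image encoder absorbs information radiologists wrote down.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageEncoder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim),
        )
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

class ReportEncoder(nn.Module):
    # A toy bag-of-words encoder; a real system would use a language model.
    def __init__(self, vocab=5000, dim=128):
        super().__init__()
        self.proj = nn.Linear(vocab, dim)
    def forward(self, bow):
        return F.normalize(self.proj(bow), dim=-1)

def contrastive_loss(img_emb, txt_emb, temperature=0.1):
    # Matching image/report pairs sit on the diagonal of the similarity matrix.
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(len(logits))
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

# One illustrative step on fake data: 8 images paired with 8 report vectors.
images = torch.randn(8, 1, 224, 224)
reports = torch.rand(8, 5000)
loss = contrastive_loss(ImageEncoder()(images), ReportEncoder()(reports))
loss.backward()
print(f"contrastive loss: {loss.item():.3f}")
```

In a real pipeline, an image encoder trained this way would typically then be fine-tuned or probed for downstream tasks such as flagging abnormal studies.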


A Primer To Explainable and Interpretable Deep Learning

#artificialintelligence

One of the biggest challenges in the data science industry is the black-box debate and the resulting lack of trust in algorithms. In his DevCon 2021 talk "Explainable and Interpretable Deep Learning," Dipyaman Sanyal, Head of Academics & Learning at Hero Vired, discusses the developing solutions to the black-box problem. Sanyal holds an MS and a PhD in Economics, and his career has only grown more colourful since: he is currently a co-founder of Drop Math. Over his 15-year career he has received several honours, including being named among India's 40 Under 40 in Data Science in 2019.


Deep Learning’s Diminishing Returns

#artificialintelligence

Deep learning is now being used to translate between languages, predict how proteins fold, analyze medical scans, and play games as complex as Go, to name just a few applications of a technique that is now becoming pervasive. Success in those and other realms has brought this machine-learning technique from obscurity in the early 2000s to dominance today. Although deep learning's rise to fame is relatively recent, its origins are not. In 1958, back when mainframe computers filled rooms and ran on vacuum tubes, knowledge of the interconnections between neurons in the brain inspired Frank Rosenblatt at Cornell to design the first artificial neural network, which he presciently described as a "pattern-recognizing device."
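To ground that history, here is a toy version of the perceptron learning rule behind Rosenblatt's "pattern-recognizing device": a single artificial neuron nudging its weights whenever it misclassifies an example. The AND-gate dataset, learning rate, and number of training passes are illustrative choices, not details from Rosenblatt's original 1958 work.

```python
# Toy perceptron (illustrative only, not Rosenblatt's original hardware):
# learn to separate the outputs of a logical AND from labeled examples.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])                       # AND is linearly separable

w = np.zeros(2)
b = 0.0
for _ in range(10):                              # a few passes over the data
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)
        error = yi - pred                        # +1, 0, or -1
        w += error * xi                          # the perceptron update rule
        b += error

print("weights:", w, "bias:", b)
print("predictions:", [int(w @ xi + b > 0) for xi in X])   # [0, 0, 0, 1]
```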


Financial Engineering and Artificial Intelligence in Python : Views

#artificialintelligence

Have you ever thought about what would happen if you combined the power of machine learning and artificial intelligence with financial engineering? Today, you can stop imagining, and start doing. This course will teach you the core fundamentals of financial engineering, with a machine learning twist. We will learn about the greatest flub made in the past decade by marketers posing as "machine learning experts" who promise to teach unsuspecting students how to "predict stock prices with LSTMs". You will learn exactly why their methodology is fundamentally flawed and why their results are complete nonsense.
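The excerpt does not spell out the flaw, but the usual argument runs as follows: prices are close to a random walk, so a model that merely learns "tomorrow's price is roughly today's price" produces charts that track the real series almost perfectly while saying nothing useful about the direction of the next move. The sketch below illustrates that pitfall on synthetic data; it is not material from the course, and the volatility, series length, and metrics are arbitrary assumptions.

```python
# On a random-walk price series, the naive rule "predict tomorrow = today"
# already looks highly accurate, yet it carries no tradable information.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, size=2000)          # synthetic daily log-returns, ~1% vol
prices = 100 * np.exp(np.cumsum(returns))         # a synthetic random-walk price path

naive_pred = prices[:-1]                          # persistence forecast: tomorrow = today
actual = prices[1:]

mse_naive = np.mean((actual - naive_pred) ** 2)
r2 = 1 - mse_naive / np.var(actual)
print(f"persistence R^2 on price levels: {r2:.4f}")        # close to 1.0, looks impressive

# But the same rule says nothing about the sign of tomorrow's return,
# which is the quantity a trading decision actually needs.
hit = np.mean(np.sign(np.diff(prices)[1:]) == np.sign(np.diff(prices)[:-1]))
print(f"directional accuracy of 'repeat last move': {hit:.3f}")   # about 0.5, a coin flip
```

This is why practitioners generally evaluate such models on returns or trading decisions rather than on price levels.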


Mammoth AI report says era of deep learning may fade, but that's unlikely

ZDNet

The era of deep learning began in 2006, when Geoffrey Hinton, a University of Toronto professor and one of the founders of this particular approach to artificial intelligence, theorized that greatly improved results could be achieved by adding many more artificial neurons to a machine learning program. The "deep" in deep learning refers to the depth of a neural network, that is, how many layers of artificial neurons the data passes through. Hinton's insight led to breakthroughs in the practical performance of AI programs on tests such as the ImageNet image recognition task. The subsequent fifteen years have been called the deep learning revolution. A report put out last week by Stanford University, in conjunction with multiple institutions, argues that the dominance of the deep learning approach may fade in the coming years as it runs out of answers for the tough questions of building AI.
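As an aside on terminology, the sketch below shows what "depth" means in code: the same multilayer perceptron constructor, once with a single hidden layer and once with a dozen stacked ones. It is a generic illustration assuming PyTorch; the layer widths and counts are arbitrary and not taken from the article or the Stanford report.

```python
# "Depth" is simply the number of stacked layers the data passes through.
import torch.nn as nn

def make_mlp(depth, width=64, d_in=784, d_out=10):
    layers, d = [], d_in
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.ReLU()]   # one more stage per hidden layer
        d = width
    layers.append(nn.Linear(d, d_out))
    return nn.Sequential(*layers)

shallow = make_mlp(depth=1)    # a single hidden layer
deep = make_mlp(depth=12)      # "deep": many stacked layers
print(shallow)
print(sum(p.numel() for p in deep.parameters()), "parameters in the deep net")
```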


Intro To ML

#artificialintelligence

The science of today will be the technology of tomorrow. With that mindset, and with great passion and enthusiasm for technology, I have tried to capture the technology that is shaping human life. This piece introduces machine learning and its applications, surveys a handful of methodologies, and closes with a proper conclusion. The term machine learning was coined by Arthur Samuel of IBM, who developed a computer program for playing checkers in the 1950s. Because the program had very little memory, Samuel turned to alpha-beta pruning.
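Since the excerpt name-drops alpha-beta pruning without explaining it, here is a minimal sketch of the idea on a hand-built game tree: skip evaluating branches that provably cannot change the minimax decision, which saves both time and memory. The toy tree and scores below are illustrative assumptions; this is not Samuel's checkers program.

```python
# Alpha-beta pruning on a toy game tree: the search idea the excerpt
# attributes to Samuel's memory-constrained checkers work.
def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if isinstance(node, (int, float)):       # leaf: a static evaluation score
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:                # beta cutoff: opponent avoids this branch
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:                # alpha cutoff
                break
        return value

# A toy 2-ply tree: the maximizer picks the branch whose worst case is best.
tree = [[3, 5], [2, 9], [0, 7]]
print(alphabeta(tree))   # -> 3, found without evaluating every leaf
```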