
Convolutional Neural Network

#artificialintelligence

Description Artificial intelligence is a broad field that includes many techniques for making machines think. In this course, we investigate the mimicking of human intelligence on machines by introducing a modern deep learning algorithm, the convolutional neural network (CNN), which enables a machine to learn and become expert. We present an overview of deep learning, in which we introduce the notion and classification of convolutional neural networks, and we give the definition and advantages of CNNs. We also provide tips for designing your own CNN architecture and describe the hardware and software used to build a CNN model. Finally, we present the limitations and future challenges of CNNs.
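The core operation the course name refers to is convolving a small filter over an input. A minimal pure-Python sketch of that operation (the image and kernel below are illustrative, not from the course materials; like most deep learning frameworks, it actually computes cross-correlation, which the field calls "convolution"):

```python
def convolve2d(image, kernel):
    """Valid 2D convolution (no padding, stride 1) of an image with a kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            # Sum of elementwise products between the kernel and the
            # image patch under it: one output activation per position.
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out

# A 3x3 vertical-edge filter applied to a 4x4 image containing a vertical edge.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
edge_kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]
print(convolve2d(image, edge_kernel))  # [[3, 3], [3, 3]]
```

In a real CNN the kernel values are learned by backpropagation rather than hand-picked, and many such filters are stacked into layers with nonlinearities and pooling in between.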


Hive's cloud-hosted machine learning models draw $85M

#artificialintelligence

While cloud computing continues to gain favor, only a limited number of companies have embraced machine learning based in the cloud. Hive wants to change this by allowing enterprises to access hosted machine learning models via APIs. Hive has had particular success in the area of content moderation, thanks to its deep learning models that help companies interpret unstructured data, like images, videos, and audio. But it's also expanding into areas like advertising and sponsorship measurement as it seeks to find other areas that would benefit from intelligent automation. In an interview with VentureBeat, Hive CEO Kevin Guo said the company kept relatively quiet as it sought to prove its models work.


[D] Complexity of Time Series Models: ARIMA vs. LSTM

#artificialintelligence

Does the concept of VC dimension carry over to models in time series analysis? Is it possible to show that LSTMs have a higher VC dimension than ARIMA-style models? Supposedly, neural network based time series models were developed because models like ARIMA were unable to provide reliable estimates for larger, more complex datasets. Mathematically speaking, what allows an LSTM to capture more variation and complexity in a dataset compared to ARIMA? And as a general question: in what instances would it be better to use a CNN for time series forecasting instead of an LSTM?
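One concrete way to frame the question: an AR(p) forecast is a fixed linear combination of past values, while a single LSTM step passes the past through learned, saturating gates, making the input-to-output map nonlinear. A toy sketch of that contrast (all weights and data below are hypothetical, chosen only for illustration):

```python
import math

def ar1_forecast(series, phi):
    """AR(1) one-step forecast: a *linear* function of the last value."""
    return phi * series[-1]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM step: gating makes the map from input to output nonlinear."""
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])   # forget gate
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])   # input gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])   # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"]) # candidate cell
    c = f * c_prev + i * g         # cell state mixes old and new information
    h = o * math.tanh(c)           # hidden state: gated nonlinear readout
    return h, c

# Doubling the input doubles the AR(1) forecast (linearity)...
print(ar1_forecast([1.0], 0.8), ar1_forecast([2.0], 0.8))  # 0.8 1.6
# ...but not the LSTM output, because the gates saturate.
w = {k: 0.5 for k in
     ["wf", "uf", "bf", "wi", "ui", "bi", "wo", "uo", "bo", "wg", "ug", "bg"]}
h1, _ = lstm_step(1.0, 0.0, 0.0, w)
h2, _ = lstm_step(2.0, 0.0, 0.0, w)
print(h2 / h1)  # noticeably less than 2.0: the map is nonlinear
```

This doesn't settle the VC-dimension question, but it shows the mechanism people usually point to: the gated nonlinearity (plus the recurrent cell state) lets an LSTM represent dependencies no fixed linear filter can.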


IIT Madras' Initiatives on Artificial Intelligence

#artificialintelligence

The Indian Institute of Technology Madras has developed a fellowship program to encourage early-career AI researchers. The Narayanan Family Foundation and the Institute's Robert Bosch Centre for Data Science and AI (RBCDSAI) have teamed up to create a fellowship in Artificial Intelligence for Social Good. The fellowship is open to artificial intelligence researchers who want to use their skills for the betterment of society. Through this program, funded by the Narayanan Family Foundation, IIT Madras hopes to attract to RBCDSAI recent PhD graduates and newly qualified researchers with outstanding academic records in computer science, computational and data sciences, biomedical sciences, management, finance, and other engineering disciplines. Home to India's largest network analytics and deep reinforcement learning study groups, RBCDSAI is one of the world's most prominent interdisciplinary academic research centres for Data Science and AI.


OpenCV Face detection with Haar cascades - PyImageSearch

#artificialintelligence

In this tutorial, you will learn how to perform face detection with OpenCV and Haar cascades. I've been an avid reader of PyImageSearch for the last three years; thanks for all the blog posts! My company does a lot of face application work, including face detection, recognition, etc. We just started a new project using embedded hardware. I don't have the luxury of using OpenCV's deep learning face detector, which you covered before; it's just too slow on my devices.
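Haar cascades are fast on embedded hardware largely because each rectangular Haar feature is evaluated in constant time via an integral image (summed-area table). A minimal sketch of that underlying trick, independent of OpenCV (the 3x3 image is illustrative):

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of pixels in the inclusive rectangle, using only four lookups."""
    total = ii[y1][x1]
    if x0 > 0:
        total -= ii[y1][x0 - 1]
    if y0 > 0:
        total -= ii[y0 - 1][x1]
    if x0 > 0 and y0 > 0:
        total += ii[y0 - 1][x0 - 1]
    return total

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 2, 2))  # 5 + 6 + 8 + 9 = 28
```

In OpenCV itself you would not write this by hand: `cv2.CascadeClassifier` loads a pretrained cascade XML file and `detectMultiScale` runs the detector over the image at multiple scales, with the integral-image machinery handled internally.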


Interlinking Artificial Intelligence with Human Brain through Cognition

#artificialintelligence

For a very long time, humans have been trying to design a machine with complex capabilities like those of the human brain. When artificial intelligence first came into existence, people thought that building a model that imitates humans would be easy. But it took scientists more than five decades to make the concept a reality. Today, we are pursuing machines that carry the cognitive capabilities of the human brain. Why is designing a mechanism similar to the human brain so complex?


Machine learning and artificial intelligence research for patient benefit: 20 critical questions on transparency, replicability, ethics, and effectiveness

#artificialintelligence

Machine learning, artificial intelligence, and other modern statistical methods are providing new opportunities to operationalise previously untapped and rapidly growing sources of data for patient benefit. Despite much promising research currently being undertaken, particularly in imaging, the literature as a whole lacks transparency, clear reporting to facilitate replicability, exploration for potential ethical concerns, and clear demonstrations of effectiveness. Among the many reasons why these problems exist, one of the most important (for which we provide a preliminary solution here) is the current lack of best practice guidance specific to machine learning and artificial intelligence. However, we believe that interdisciplinary groups pursuing research and impact projects involving machine learning and artificial intelligence for health would benefit from explicitly addressing a series of questions concerning transparency, reproducibility, ethics, and effectiveness (TREE). The 20 critical questions proposed here provide a framework for research groups to inform the design, conduct, and reporting; for editors and peer reviewers to evaluate contributions to the literature; and for patients, clinicians and policy makers to critically appraise where new findings may deliver patient benefit.
The potential uses include improving diagnostic accuracy,1 more reliably predicting prognosis,2 targeting treatments,3 and increasing the operational efficiency of health systems.4 Examples of potentially disruptive technology with early promise include image based diagnostic applications of ML/AI, which have shown the most early clinical promise (eg, deep learning based algorithms improving accuracy in diagnosing retinal pathology compared with that of specialist physicians5), or natural language processing used as a tool to extract information from structured and unstructured (that is, free) text embedded in electronic health records.2 Although we are only just …


Deep Learning Tutorial for Beginners: A [Step-by-Step] Guide

#artificialintelligence

Deep Learning is a subdivision of machine learning that imitates the working of a human brain with the help of artificial neural networks. It is useful in processing Big Data and can uncover patterns that provide valuable insight for important decision making. The manual labeling of unsupervised data is time-consuming and expensive. Deep Learning tutorials help to overcome this with the help of highly sophisticated algorithms that provide essential insights by analyzing and aggregating the data. Deep Learning leverages the different layers of neural networks that enable learning, unlearning, and relearning.
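The "layers" the tutorial refers to are just stacked transformations: each fully connected layer computes a weighted sum per neuron followed by an activation function, and the output of one layer feeds the next. A minimal sketch (the weights below are hypothetical fixed numbers; in practice they are learned by backpropagation):

```python
def dense(inputs, weights, biases, activation):
    """One fully connected layer: activation(w . x + b) for each neuron."""
    return [
        activation(sum(w * x for w, x in zip(neuron_w, inputs)) + b)
        for neuron_w, b in zip(weights, biases)
    ]

relu = lambda z: max(0.0, z)      # common hidden-layer nonlinearity
identity = lambda z: z            # linear output for regression

# Forward pass through a 2-input -> 2-hidden -> 1-output network.
hidden = dense([1.0, 2.0],
               weights=[[0.5, -0.25], [1.0, 1.0]],
               biases=[0.0, -1.0],
               activation=relu)
output = dense(hidden, weights=[[1.0, 0.5]], biases=[0.0], activation=identity)
print(hidden, output)  # [0.0, 2.0] [1.0]
```

Without the nonlinearity between layers, the stack would collapse into a single linear map; the activation functions are what let depth add representational power.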


Toward deep-learning models that can reason about code more like humans

#artificialintelligence

Whatever business a company may be in, software plays an increasingly vital role, from managing inventory to interfacing with customers. Software developers, as a result, are in greater demand than ever, and that's driving the push to automate some of the easier tasks that take up their time. Productivity tools like Eclipse and Visual Studio suggest snippets of code that developers can easily drop into their work as they write. These automated features are powered by sophisticated language models that have learned to read and write computer code after absorbing thousands of examples. But like other deep learning models trained on big datasets without explicit instructions, language models designed for code-processing have baked-in vulnerabilities.


What Waymo's new leadership means for its self-driving cars

#artificialintelligence

Waymo, Alphabet's self-driving car subsidiary, is reshuffling its top executive lineup. On April 2, John Krafcik, Waymo's CEO since 2015, declared that he will be stepping down from his role. He will be replaced by Tekedra Mawakana and Dmitri Dolgov, the company's former COO and CTO. Krafcik will remain as an advisor to the company. "[With] the fully autonomous Waymo One ride-hailing service open to all in our launch area of Metro Phoenix, and with the fifth generation of the Waymo Driver being prepared for deployment in ride-hailing and goods delivery, it's a wonderful opportunity for me to pass the baton to Tekedra and Dmitri as Waymo's co-CEOs," Krafcik wrote on LinkedIn as he declared his departure.