
Time Series Prediction using a Recurrent Neural Network with TensorFlow.js


Disclaimer: This demonstration is 100% educational and by no means a trading prediction tool. Stock markets fluctuate dynamically and are unpredictable owing to multiple factors. In data science, 80 percent of the time is spent preparing data, and 20 percent of the time is spent complaining about preparing data. To start making predictions we need to train our deep learning model with data, so I've found two good places to get this kind of data.
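Most of that data preparation boils down to turning a raw price series into supervised training examples. As a minimal sketch (the window size and the toy `prices` array are illustrative, not from the article), a sliding window over the series gives an input sequence plus the next value as the target:

```python
import numpy as np

def make_windows(series, window_size):
    """Slide a fixed-size window over a 1-D price series.

    Returns (X, y) where each row of X holds `window_size` consecutive
    values and y is the value that immediately follows that window.
    """
    X, y = [], []
    for i in range(len(series) - window_size):
        X.append(series[i:i + window_size])
        y.append(series[i + window_size])
    return np.array(X), np.array(y)

prices = np.arange(10, dtype=float)  # stand-in for real closing prices
X, y = make_windows(prices, window_size=3)
print(X.shape, y.shape)  # (7, 3) (7,)
```

Each row of `X` is one training sequence for the recurrent network; the same shape carries over whether the model is built in TensorFlow.js or Keras.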

IBM and Pfizer claim AI can predict Alzheimer's onset with 71% accuracy


Pfizer and IBM researchers claim to have developed a machine learning technique that can predict Alzheimer's disease years before symptoms develop. By analyzing small samples of language data obtained from clinical verbal tests, the team says their approach achieved 71% accuracy when tested against a group of cognitively healthy people. Alzheimer's disease begins with vague, often misinterpreted signs of mild memory loss followed by a slow, progressively serious decline in cognitive ability and quality of life. According to the nonprofit Alzheimer's Association, more than 5 million Americans of all ages have Alzheimer's, and every state is expected to see at least a 14% rise in the prevalence of Alzheimer's between 2017 and 2025. Due to the nature of Alzheimer's disease and how it takes hold in the brain, it's likely that the best way to delay its onset is through early intervention.

Google's New "Hum to Search" AI-Powered Feature to Search Song


Everything that we can imagine is now possible with the power of Artificial Intelligence. Have you ever wished you could find a song you heard somewhere, with only the tune running in your mind? You might have asked your best friend, "What's that song that goes like this: 'hum hum hum hum hum'?" But your friend (a human being) also fails to name the song. Artificial intelligence has proved that it can now read what's going on in your mind where a human being can't.

Artificial intelligence gets real in the OR – IAM Network


Since the start of the year, some surgeons and residents at UC San Diego Health have had access to a new surgical resource: reams of video recordings of them performing operations, parsed by artificial intelligence. Video recordings of procedures are uploaded to the cloud for quick analysis. The five surgeons involved in the project and their residents then receive videos of their minimally invasive procedures, which are divided into critical steps with a dashboard that compares an operation against previous procedures. The system pixelates distinguishing features of patients and staff, such as faces and tattoos, to de-identify them. All done with the assistance of AI. "It's giving active feedback on how your operation performed," said Dr. Santiago Horgan, chief of the minimally invasive surgery division and director of the Center for the Future of Surgery at UC San Diego School of Medicine.

There's No Turning Back on AI in the Military


Thankfully, in many cases, we live up to it. But our present digital reality is quite different, even sobering. Fighting terrorists for nearly 20 years after 9/11, we remained a flip-phone military in what is now a smartphone world. Infrastructure to support a robust digital force remains painfully absent. Consequently, service members lead personal lives digitally connected to almost everything and military lives connected to almost nothing.

Five Trends for Voice Assistants in 2020s


Voice assistants are becoming an essential part of our daily lives. When Apple's Siri hit the market in 2011, it attracted impressive attention from tech enthusiasts, yet no one was certain how this novelty would bring about a tech revolution. Today, we are regular users of Google Voice Assistant, Amazon Alexa, and many more. Things took a turn when Google Home, Amazon Echo, and Apple HomePod went mainstream in 2017. All these instances converge on how voice assistants are proving themselves as a tech enabler with impressive possibilities. Not only in households, they are also slowly proving useful in business settings.

Blood Face Detector in Python (Part-1)


First, we initialize the parameters for the model: learning rate, number of epochs to train for, and batch size. To train the model we use the concept of transfer learning. We fine-tune the MobileNetV2 architecture, pre-trained on ImageNet weights, by leaving off the fully connected head of the base model, constructing our own fully connected head, and placing it on top of the base model. During training, we freeze all the layers of the base model so that they don't get updated during the first training pass. We then compile the model with the Adam optimizer and the binary cross-entropy loss function, as this is a binary classification problem.
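The steps above can be sketched in Keras roughly as follows. This is a minimal illustration, not the article's exact code: the head layer sizes, dropout rate, and hyperparameter values are assumptions; only the overall recipe (pre-trained MobileNetV2 without its top, frozen base, custom head, Adam + binary cross-entropy) comes from the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hyperparameters named in the article (these values are illustrative)
LEARNING_RATE = 1e-4
EPOCHS = 20
BATCH_SIZE = 32

# Base model: MobileNetV2 pre-trained on ImageNet, with its
# fully connected head left off (include_top=False).
base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# Freeze every layer of the base so only the new head trains at first.
base.trainable = False

# Our own fully connected head placed on top of the base model.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # binary classification
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
```

Freezing the base keeps the ImageNet features intact while the randomly initialized head settles; a later unfreeze-and-fine-tune pass at a lower learning rate is the usual follow-up.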

How To Choose The Best Machine Learning Algorithm For A Particular Problem? – IAM Network


How do you know which machine learning algorithm to choose for your problem? Why not simply try all of them, or at least the ones we expect to give good accuracy? Because applying each and every algorithm takes a lot of time. So it is better to apply a technique to narrow down which algorithms are worth using. Choosing the right algorithm is tied directly to the problem statement.
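One common middle ground between "try everything" and picking blindly is to cross-validate a small shortlist of candidates and rank them. A minimal sketch with scikit-learn (the dataset and the three candidate models here are illustrative choices, not from the article):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic regression": LogisticRegression(max_iter=5000),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "k-nearest neighbours": KNeighborsClassifier(),
}

scores = {}
for name, clf in candidates.items():
    # Mean accuracy over 5 folds gives a quick, rough ranking.
    scores[name] = cross_val_score(clf, X, y, cv=5).mean()

for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")
```

The shortlist itself still comes from the problem statement (data size, feature types, interpretability needs); cross-validation only ranks the candidates you already consider plausible.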

How to create an AI that chats like you on WhatsApp


To train a GPT-2 neural network, first of all we need to pre-process the data in order to obtain a single .txt file. For the sake of simplicity, and since the machine learning model we will use requires a GPU to work, we're going to use Google Colab for the next step. If you don't know what Google Colab is, check this other article here. To work with the data, we need to upload it to Colab, into the right folders. Now, run all the cells up until the block "2 Parse the data".
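The pre-processing step amounts to merging the exported chat files into one plain-text file. A minimal sketch, assuming the usual WhatsApp export line format ("date, time - speaker: message"); the regex, function name, and file paths are all illustrative, and the date format varies by locale:

```python
import re
from pathlib import Path

# WhatsApp exports typically look like:
#   "12/31/20, 9:15 PM - Alice: happy new year!"
# (the exact date/time format varies by locale; adjust the regex as needed)
LINE_RE = re.compile(r"^\d{1,2}/\d{1,2}/\d{2,4}, [^-]+ - ([^:]+): (.*)$")

def chats_to_txt(export_files, out_path):
    """Merge one or more exported chats into a single training .txt,
    keeping only 'speaker: message' lines and dropping system messages."""
    lines = []
    for path in export_files:
        for raw in Path(path).read_text(encoding="utf-8").splitlines():
            m = LINE_RE.match(raw)
            if m:
                speaker, message = m.groups()
                lines.append(f"{speaker}: {message}")
    Path(out_path).write_text("\n".join(lines), encoding="utf-8")
    return len(lines)
```

The resulting single .txt file is what gets uploaded to the Colab folders before running the notebook cells.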