Machine Learning


Top 15 Cheat Sheets for Machine Learning, Data Science & Big Data

#artificialintelligence

Data Science is an ever-growing field with numerous tools and techniques to remember. No one can memorize all the functions, operations, and formulas of every concept. That's why we have cheat sheets. But with a plethora of cheat sheets available out there, choosing the right one is a tough task. So I decided to write this article. Enjoy and feel free to share!


[D] Paper Explained - Deep Ensembles: A Loss Landscape Perspective (Full Video Analysis)

#artificialintelligence

Surprisingly, they outperform Bayesian neural networks, which are, in theory, doing the same thing. This paper investigates why Deep Ensembles are especially well suited to capturing the non-convex loss landscape of neural networks.
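The core mechanic behind a deep ensemble is simple: train several networks from independent random initializations and average their predicted class probabilities. A minimal NumPy sketch of that averaging step — the randomly initialized linear "members" and toy inputs below are stand-ins for trained networks, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy 2-class inputs; each ensemble "member" is a randomly
# initialized linear classifier standing in for a trained network.
X = rng.normal(size=(8, 4))
members = [rng.normal(size=(4, 2)) for _ in range(5)]

# Deep-ensemble prediction: average the softmax outputs of all members.
probs = np.mean([softmax(X @ W) for W in members], axis=0)

assert probs.shape == (8, 2)
assert np.allclose(probs.sum(axis=1), 1.0)  # still a valid distribution
```

Because each member settles in a different mode of the non-convex loss landscape, the averaged prediction tends to be better calibrated than any single network's.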


[Project] blendtorch: seamless PyTorch - Blender integration

#artificialintelligence

Training with artificial images is becoming increasingly important to address the lack of real data sets in various niche areas. Yet many of today's approaches write 2D/3D simulations from scratch. To improve this situation and make better use of existing pipelines, we've been working towards an integration between Blender, an open-source real-time, physics-enabled animation tool, and PyTorch. Today we announce blendtorch, an open-source Python library that seamlessly integrates distributed Blender renderings into PyTorch data pipelines at 60 FPS (640x480 RGBA). (Figure caption: Batch visualization from 4 Blender instances running a physics-enabled falling-cubes scene.)
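The general pattern — pulling frames from an external renderer process and grouping them into training batches — can be sketched as below. This is not blendtorch's actual API; `render_frame` is a hypothetical stand-in for the Blender side, and the batching is plain NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

def render_frame():
    """Hypothetical stand-in for a frame streamed from a renderer:
    one 640x480 RGBA image plus a label (e.g. a cube count)."""
    image = rng.integers(0, 256, size=(480, 640, 4), dtype=np.uint8)
    label = int(rng.integers(0, 10))
    return image, label

def frame_batches(batch_size, n_batches):
    # Group streamed frames into training batches, as a data
    # pipeline wrapping the renderer would before handing them
    # to the training loop.
    for _ in range(n_batches):
        frames, labels = zip(*(render_frame() for _ in range(batch_size)))
        yield np.stack(frames), np.array(labels)

images, labels = next(frame_batches(batch_size=4, n_batches=1))
assert images.shape == (4, 480, 640, 4)
```

In the real library, the renderer runs in separate Blender processes and the batches feed a standard PyTorch data pipeline; the sketch only illustrates the streaming-to-batches shape of that design.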


The Case for Causal AI (SSIR)

#artificialintelligence

Much of artificial intelligence (AI) in common use is dedicated to predicting people's behavior. It tries to anticipate your next purchase, your next mouse-click, your next job move. But such techniques can run into problems when they are used to analyze data for health and development programs. If we do not know the root causes of behavior, we could easily make poor decisions and support ineffective and prejudicial policies. AI, for example, has made it possible for health-care systems to predict which patients are likely to have the most complex medical needs. In the United States, risk-prediction software is being applied to roughly 200 million people to anticipate which patients would benefit from extra medical care now, based on how much they are likely to cost the health-care system in the future. It employs predictive machine learning, a class of self-adaptive algorithms that improve their accuracy as they are provided new data. But as health researcher Ziad Obermeyer and his colleagues showed in a recent article in Science magazine, this particular tool had an unintended consequence: black patients who had more chronic illnesses than white patients were not flagged as needing extra care. The algorithm used insurance claims data to predict patients' future health needs based on their recent health costs.
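The failure mode described — predicting cost as a proxy for medical need — can be reproduced in a few lines with synthetic data (all numbers below are invented purely for illustration, not drawn from the study): two groups have identical illness burden, but one incurs lower spending, so a cost-based flag selects fewer of its members.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration only: both groups have the same true need,
# but group B generates lower health-care costs for the same need
# (e.g. due to unequal access to care).
need = rng.normal(loc=5.0, scale=1.0, size=(2, 1000))  # identical distributions
cost = need * np.array([[1.0], [0.6]])                 # group B spends 40% less

# A "risk" flag trained on cost flags the costliest 20% overall.
threshold = np.quantile(cost, 0.8)
flag_rate_a = (cost[0] > threshold).mean()
flag_rate_b = (cost[1] > threshold).mean()

assert flag_rate_a > flag_rate_b  # equal need, unequal flagging
```

The fix Obermeyer and colleagues discuss is causal in spirit: flag on a measure of health need itself, not on the downstream spending it happens to generate.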


Pytorch 101 -- An Introduction to Deep Learning

#artificialintelligence

Whether you've noticed it or not, Deep Learning (DL) plays an important part in all our lives. From the voice assistants and auto-correct services on your smartphone to the automation of large industries, deep learning underlies much of this recent progress. A central concept in deep learning is the neural network: an interconnected system of simple mathematical functions that learns to make predictions by being "trained" on data relevant to the prediction task. The design is partly inspired by the way neurons are connected in biological brains.
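As a concrete, framework-agnostic illustration of what "training" means — repeatedly nudging the network's weights to reduce prediction error — here is a tiny two-layer network learning XOR in NumPy. PyTorch automates exactly these forward and backward passes:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a task no single linear model can solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

losses = []
for _ in range(3000):
    # Forward pass: each layer applies weights, bias, nonlinearity.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((p - y) ** 2)))

    # Backward pass: gradient of the loss w.r.t. each parameter.
    dp = 2 * (p - y) / len(X) * p * (1 - p)
    dW2 = h.T @ dp; db2 = dp.sum(axis=0)
    dh = dp @ W2.T * (1 - h ** 2)
    dW1 = X.T @ dh; db1 = dh.sum(axis=0)

    # Gradient-descent update: this step is the "training".
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.5 * grad

assert losses[-1] < losses[0]  # error shrinks as the network trains
```

In PyTorch, the backward-pass block collapses to `loss.backward()` plus an optimizer step, which is precisely the convenience such frameworks provide.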


Blue Hexagon Next-Gen NDR innovator recognized in Forbes AI 50 list for 2020 – IAM Network

#artificialintelligence

Deep Learning based Network Detection and Response technology leader included in "America's Most Promising Artificial Intelligence Companies." Blue Hexagon, deep learning innovator of Cyber AI You Can Trust, was recognized in the 2020 Forbes AI 50 list. As one of America's most promising artificial intelligence (AI) companies, Blue Hexagon is the only real-time deep learning cybersecurity company to instantly stop zero-day malware and threats before infiltration, detect and block active adversaries, and reduce SOC alert overload. "We are able to achieve 99.8% threat detection accuracy and sub-second verdict speed with our deep learning technology to revolutionize security operations," said Nayeem Islam, CEO of Blue Hexagon. "Forbes included us for using artificial intelligence in meaningful business-oriented ways. We're proud to be included in their list, and believe AI will fundamentally change the way we protect against cyber threats."


Machine Learning, Data Science and Deep Learning with Python

#artificialintelligence

Udemy Coupon - Machine Learning, Data Science and Deep Learning with Python: a complete hands-on machine learning tutorial covering data science, TensorFlow, artificial intelligence, and neural networks. Created by Sundog Education by Frank Kane. English; Italian [Auto]; 2 more.


Digital Transformation Powered by Machine Learning

#artificialintelligence

The recent events arising from the global COVID-19 pandemic are a reminder that change is the only constant in life and business. This disruption has turned our lives upside down. All of us have had to learn and rapidly adapt to this new reality, from figuring out how to work remotely to supporting our children with schoolwork. On the business front, enterprises that could adapt quickly to a changing business environment are in the best position to ensure business continuity and long-term profitability. Digital businesses and enterprises that are further along with their digital transformation journeys are better equipped to respond to this rapidly changing environment.


Prospective evaluation of an artificial intelligence-enabled algorithm for automated diabetic retinopathy screening of 30 000 patients

#artificialintelligence

Background/aims Human grading of digital images from diabetic retinopathy (DR) screening programmes represents a significant challenge, due to the increasing prevalence of diabetes. We evaluate the performance of an automated artificial intelligence (AI) algorithm to triage retinal images from the English Diabetic Eye Screening Programme (DESP) into test-positive/technical failure versus test-negative, using human grading following a standard national protocol as the reference standard. Methods Retinal images from 30 405 consecutive screening episodes from three English DESPs were manually graded following a standard national protocol and by an automated process with machine learning enabled software, EyeArt v2.1. Screening performance (sensitivity, specificity) and diagnostic accuracy (95% CIs) were determined using human grades as the reference standard. Results Sensitivity (95% CIs) of EyeArt was 95.7% (94.8% to 96.5%) for referable retinopathy (human graded ungradable, referable maculopathy, moderate-to-severe non-proliferative or proliferative). This comprises sensitivities of 98.3% (97.3% to 98.9%) for mild-to-moderate non-proliferative retinopathy with referable maculopathy, 100% (98.7% to 100%) for moderate-to-severe non-proliferative retinopathy and 100% (97.9% to 100%) for proliferative disease. EyeArt agreed with the human grade of no retinopathy (specificity) in 68% (67% to 69%), with a specificity of 54.0% (53.4% to 54.5%) when combined with non-referable retinopathy. Conclusion The algorithm demonstrated safe levels of sensitivity for high-risk retinopathy in a real-world screening service, with specificity that could halve the workload for human graders. AI machine learning and deep learning algorithms such as this can provide clinically equivalent, rapid detection of retinopathy, particularly in settings where a trained workforce is unavailable or where large-scale and rapid results are needed.
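Sensitivity and specificity, as reported here, come straight from a 2x2 table of algorithm output against the human reference standard. A minimal sketch (the counts are invented for illustration, not the study's data):

```python
def screening_metrics(tp, fn, tn, fp):
    """Sensitivity: fraction of reference-positive cases the test flags.
    Specificity: fraction of reference-negative cases the test clears."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Illustrative counts only (not taken from the EyeArt study).
sens, spec = screening_metrics(tp=957, fn=43, tn=680, fp=320)
assert round(sens, 3) == 0.957
assert round(spec, 3) == 0.680
```

The trade-off the abstract describes follows directly: a triage tool can afford moderate specificity (more false positives passed to human graders) as long as sensitivity for sight-threatening disease stays near 100%.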


Seamlessly Scaling AI for Distributed Big Data

#artificialintelligence

Originally published at LinkedIn Pulse. Early last month, I presented a half-day tutorial at this year's virtual CVPR 2020. It was a unique experience, and I would like to share some highlights of the tutorial. The tutorial focused on a critical problem that arises as AI moves from experimentation to production: how to seamlessly scale AI to distributed Big Data. Today, AI researchers and data scientists must go through significant pain to apply AI models to production datasets stored in distributed Big Data clusters.