Machine Learning


Machine Learning Intervention Triples Serious Illness Conversations Among Oncology Clinicians

#artificialintelligence

An intervention that delivered machine learning mortality predictions with behavioral nudges to oncology clinicians significantly increased the rate of …


Applying Machine Learning To The E.Coli Class Imbalance Dataset

#artificialintelligence

The E.coli dataset is a very popular dataset to experiment on because it is a multi-class classification problem with several severely imbalanced classes. It is difficult enough that I have not been able to find much written about it on the internet. Jason Brownlee of machinelearningmastery.com suggested deleting the rows of the highly imbalanced classes, but in my opinion such a practice defeats the purpose of endeavouring to make predictions. After much exhaustive research I was able to come up with a solution in which all eight classes in the dataset were identified and predicted on. The E.coli dataset is credited to Kenta Nakai and was developed into its current form by Paul Horton and Kenta Nakai in their 1996 paper titled "A Probabilistic Classification System For Predicting The Cellular Localization Sites Of Proteins."
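The excerpt does not spell out the author's method, so the snippet below is only a minimal sketch of one common way to keep all eight classes: a stratified split plus class weighting in scikit-learn. The UCI download URL and the column layout (sequence name first, class label last) are assumptions about the standard distribution of the dataset, not details from the article.

```python
# Hedged sketch: class-weighted random forest on the (assumed) UCI copy of E.coli.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/ecoli/ecoli.data"  # assumed location

# Whitespace-separated file: first column is the sequence name, last is the class.
df = pd.read_csv(URL, sep=r"\s+", header=None)
X = df.iloc[:, 1:-1].values
y = df.iloc[:, -1].values

# A stratified split keeps every class, including the two-member ones, in both partitions.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

# class_weight="balanced" up-weights the rare classes instead of deleting their rows.
clf = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=42)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test), zero_division=0))
```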


Can We Trust AI Doctors? Google Health and Academics Battle It Out

#artificialintelligence

Machine learning is taking medical diagnosis by storm. From eye disease and breast and other cancers to more amorphous neurological disorders, AI is routinely matching physicians' performance, if not beating them outright. Yet how much can we take those results at face value? When it comes to life-and-death decisions, when can we put our full trust in enigmatic algorithms, the "black boxes" that even their creators cannot fully explain or understand? The problem gets more complex as medical AI crosses multiple disciplines and developers, including both academic and industry powerhouses such as Google, Amazon, and Apple, each with disparate incentives.


Lost in Translation: How Artificial Intelligence is Breaking the Language Barrier - DefinedCrowd

#artificialintelligence

Human interaction with machines has experienced a great leap forward in recent years, largely driven by artificial intelligence (AI). From smart homes to self-driving cars, AI has become a seamless part of our daily lives. Voice interactions play a key role in many of these technological advances, most notably in language translation. Here, AI enables instant translation across a number of mediums: text, voice, images and even street signs. The technology works by recognizing individual words, then leveraging similarities in how various languages express the relationships between those words.
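The article names no particular toolkit, so the snippet below is only a minimal sketch of off-the-shelf neural translation; the Hugging Face transformers library and the Helsinki-NLP/opus-mt-en-fr checkpoint are illustrative assumptions, not choices from the piece.

```python
# Hedged sketch: instant text translation with a pretrained MarianMT model (assumed choice).
from transformers import pipeline

translator = pipeline("translation_en_to_fr", model="Helsinki-NLP/opus-mt-en-fr")
result = translator("Machine translation has become part of everyday life.")
print(result[0]["translation_text"])  # French rendering of the input sentence
```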


Best of arXiv.org for AI, Machine Learning, and Deep Learning – September 2020 - insideBIGDATA

#artificialintelligence

Researchers from all over the world contribute to this repository as a prelude to the peer review process for publication in traditional journals. The articles listed below represent a small fraction of all articles appearing on the preprint server. They are listed in no particular order with a link to each paper along with a brief overview. Links to GitHub repos are provided when available. Especially relevant articles are marked with a "thumbs up" icon.


A practical guide to RNN and LSTM in Keras

#artificialintelligence

After going through a lot of theoretical articles on recurrent layers, I just wanted to build my first LSTM model and train it on some texts! But the huge list of exposed parameters for the layer and the intricacies of layer structures were too complicated for me. This meant I had to spend a lot of time going through StackOverflow and API definitions to get a clearer picture. This article is an attempt to consolidate the notes that can accelerate the transition from theory to practice. The goal of this guide is to develop a practical understanding of using recurrent layers like RNN and LSTM, rather than to provide a theoretical one.
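As a companion to that guide, here is a minimal sketch of a small LSTM text classifier in Keras; the vocabulary size, sequence length, layer widths, and the binary-label setup are illustrative defaults, not values taken from the article.

```python
# Hedged sketch: a tiny LSTM model for integer-encoded text (illustrative shapes only).
import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE, SEQ_LEN = 10_000, 100

model = models.Sequential([
    layers.Embedding(input_dim=VOCAB_SIZE, output_dim=64),  # token ids -> dense vectors
    layers.LSTM(64),                                         # final hidden state summarizes the sequence
    layers.Dense(1, activation="sigmoid"),                   # binary label, e.g. sentiment
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy random batch just to show the expected input/output shapes.
x = np.random.randint(0, VOCAB_SIZE, size=(32, SEQ_LEN))
y = np.random.randint(0, 2, size=(32, 1))
model.fit(x, y, epochs=1, verbose=0)
model.summary()
```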


Hands-On Tutorial On Machine Learning Pipelines With Scikit-Learn

#artificialintelligence

With the increasing demand for machine learning and data science in business, there is a need for better workflows that ensure robustness in data modelling. A machine learning project follows certain steps: data collection, data preprocessing (cleaning and feature engineering), model training, validation, and prediction on the test data (which the model has not previously seen). The test data needs to go through the same preprocessing as the training data. For this iterative process, pipelines are used, which can automate the entire sequence for both training and test data. A pipeline also makes the model reusable by removing redundant steps, thereby speeding up the process.
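A minimal sketch of such a pipeline with scikit-learn is shown below; the dataset and estimator are illustrative choices, not taken from the tutorial itself.

```python
# Hedged sketch: preprocessing and model chained in one Pipeline, so the scaler
# fit on the training data is re-applied, unchanged, to the test data.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),                    # preprocessing step
    ("model", LogisticRegression(max_iter=1000)),   # estimator step
])

pipe.fit(X_train, y_train)          # scaler statistics come from the training split only
print("test accuracy:", pipe.score(X_test, y_test))
```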


NLP News Cypher

#artificialintelligence

As you may already have experienced, your next NLP project may require you to work with knowledge-intensive tasks such as open-domain question answering or fact-checking. Benchmarking these knowledge-intensive tasks can be difficult because they require a huge knowledge source to feed off of (and things can get even harder when you have various knowledge sources to work with). As a result, a new benchmark from Facebook AI, called KILT, gives researchers a centralized baseline to start their research and benchmark model performance on these tough tasks. It leverages an interface across tasks that are grounded in a single knowledge source: the 2019/08/01 Wikipedia snapshot containing 5.9M articles. Here are the tasks you'll work with in KILT: fact checking, open-domain question answering, slot filling, entity linking, and dialogue.
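A minimal sketch of peeking at one KILT task is shown below. It assumes the benchmark is mirrored on the Hugging Face Hub under the kilt_tasks dataset name with a "nq" (Natural Questions) configuration; Facebook AI's own loaders live in the facebookresearch/KILT repository, and the field names here are assumptions about that mirror.

```python
# Hedged sketch: loading one KILT task from an assumed Hugging Face Hub mirror.
from datasets import load_dataset

nq = load_dataset("kilt_tasks", "nq", split="validation")  # assumed dataset/config names
example = nq[0]
print(example["input"])                # the question text
print(example["output"][0]["answer"])  # one gold answer grounded in the Wikipedia snapshot
```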


Why Most Organizations' Investments in AI Fall Flat - SPONSOR CONTENT FROM PWC

#artificialintelligence

Artificial intelligence (AI) is now so pervasive that executives, up to 85% of them in fact, know AI can fundamentally change their businesses. Organizations can look to use AI for everything from automating back-office processes to improving customer experience. In today's COVID-19 era, companies are adopting automation technologies to help compensate for the disruption of core operations. Despite this enthusiasm, 76% of organizations surveyed barely broke even on their investments in AI capabilities. Only 6% had AI initiatives scaled across the enterprise, according to the Analytics Maturity Model (AMM) survey, developed jointly with Carnegie Mellon University through the Digital Transformation and Innovation Center sponsored by PwC.


Not all data is created equal: the promise and peril of algorithms for inclusion at work

#artificialintelligence

In 2016, Microsoft unveiled its first AI chatbot, Tay, developed to interact and converse with users in real time on Twitter and engage Millennials. Tay was released with a basic grasp of language based on a dataset of anonymised public data and some pre-written material, with the intention that it would subsequently learn from interactions with users. On March 23, Tay took its first steps on Twitter, posting mostly innocuous messages and jokes, like "humans are super cool". However, within hours of its release, Tay had tweeted over 95,000 times, and many of those messages were abusive and offensive misogynist or racist remarks, such as variations on "Hitler was right" and "9/11 was an inside job". Microsoft ended up taking down the account 16 hours after it joined the internet.