Machine Learning


I Scraped more than 1k Top Machine Learning Github Profiles and this is what I Found

#artificialintelligence

When searching the keyword "machine learning" on GitHub, I found 246,632 machine learning repositories. Since these are the top repositories in machine learning, I expect their owners and contributors to be experts or at least competent in machine learning. Thus, I decided to extract the profiles of these users to gain some interesting insights into their backgrounds and statistics. After removing duplicates as well as profiles belonging to organizations like udacity, I obtained a list of 1208 users. With the data cleaned, it was time for the fun part: data visualization.
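As a rough illustration of that de-duplication and filtering step, here is a minimal sketch in Python, assuming the scraped profiles have already been saved to a CSV; the file name and column names (login, type) are assumptions for illustration, though the GitHub API does distinguish "User" from "Organization" account types.

```python
import pandas as pd

# Hypothetical CSV of scraped profiles with columns: login, type, followers, ...
profiles = pd.read_csv("scraped_profiles.csv")

# The same user can own or contribute to several repositories, so drop duplicates.
profiles = profiles.drop_duplicates(subset="login")

# Keep individual accounts only, filtering out organizations such as udacity.
users = profiles[profiles["type"] == "User"]

print(f"{len(users)} unique user profiles remain for analysis")
```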


Boltzmann machine learning with a variational quantum algorithm

#artificialintelligence

The Boltzmann machine is a powerful tool for modeling the probability distribution that governs the training data. A thermal equilibrium state is typically used in Boltzmann machine learning to obtain a suitable probability distribution. Boltzmann machine learning consists of calculating the gradient of the loss function, which is given in terms of thermal averages and is the most time-consuming step. Here, we propose a method to implement Boltzmann machine learning on Noisy Intermediate-Scale Quantum (NISQ) devices. We prepare an initial pure state that contains all possible computational basis states with the same amplitude and apply a variational imaginary-time simulation. Reading out the evolved state in the computational basis approximates the probability distribution of the thermal equilibrium state used for Boltzmann machine learning. We perform numerical simulations of our scheme and confirm that Boltzmann machine learning works well under it.
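For context, the thermal averages mentioned above appear in the standard gradient of the negative log-likelihood for an energy-based model; this is the generic textbook form, not a formula quoted from the paper.

```latex
p_\theta(x) = \frac{e^{-E_\theta(x)}}{Z_\theta}, \qquad
Z_\theta = \sum_x e^{-E_\theta(x)}, \qquad
\frac{\partial \mathcal{L}}{\partial \theta}
  = \Big\langle \frac{\partial E_\theta(x)}{\partial \theta} \Big\rangle_{\text{data}}
  - \Big\langle \frac{\partial E_\theta(x)}{\partial \theta} \Big\rangle_{p_\theta}
```

The second term is the thermal average over the model's equilibrium distribution, which is the expensive quantity the proposed NISQ scheme is meant to approximate.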


The State of AI and Machine Learning

#artificialintelligence

The 2020 State of AI and Machine Learning report illustrates the current state of artificial intelligence and machine learning, showcasing where the industry is as a whole in 2020 compared to 2019. The 2020 report is the output of a cross-industry, large-organization study of senior business leaders and technologists. It details where organizations are within the AI journey and provides a comprehensive look at how they are implementing AI within their business -- from the types of data they leverage to the tools they use and budgets they have. For readers who might be in the middle of their own AI projects, this report helps them understand the broader context of their work, what their peers are experiencing, and what dials to turn for AI success.


Introduction to Federated Learning

#artificialintelligence

There are over 5 billion mobile device users all over the world. These users generate massive amounts of data--via cameras, microphones, and other sensors such as accelerometers--which can, in turn, be used to build intelligent applications. Traditionally, such data is collected in data centers, where machine/deep learning models are trained. However, due to data privacy concerns and bandwidth limitations, these centralized learning techniques aren't appropriate: users are much less likely to share their data, so the data remains available only on their devices. This is where federated learning comes into play. In the research paper Communication-Efficient Learning of Deep Networks from Decentralized Data [1], Google's researchers provide the following high-level definition of federated learning: a learning technique that allows users to collectively reap the benefits of shared models trained from [this] rich data, without the need to centrally store it.
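The cited paper introduces federated averaging (FedAvg). The sketch below, in Python, shows only the core idea on a toy linear-regression problem; the client data, helper names, and hyperparameters are made up for illustration and are not the paper's implementation.

```python
import numpy as np

# Toy setup: each client holds a private dataset and fits a linear model locally.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_client_data(n=50):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

clients = [make_client_data() for _ in range(5)]

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few steps of local gradient descent; the raw data never leaves the client."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Federated averaging: the server only ever sees model weights, never raw data.
w_global = np.zeros(2)
for _ in range(20):
    local_weights = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_weights, axis=0)  # the real algorithm weights by client data size

print("global weights:", w_global)
```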


A debate between AI experts shows a battle over the technology's future

#artificialintelligence

Since the 1950s, artificial intelligence has repeatedly overpromised and underdelivered. While recent years have seen incredible leaps thanks to deep learning, AI today is still narrow: it's fragile in the face of attacks, can't generalize to adapt to changing environments, and is riddled with bias. All these challenges make the technology difficult to trust and limit its potential to benefit society. On March 26 at MIT Technology Review's annual EmTech Digital event, two prominent figures in AI took to the virtual stage to debate how the field might overcome these issues. Gary Marcus, professor emeritus at NYU and the founder and CEO of Robust.AI, is a well-known critic of deep learning.


Automated histologic diagnosis of CNS tumors with machine learning

#artificialintelligence

A new mass discovered in the CNS is a common reason for referral to a neurosurgeon. CNS masses are typically discovered on MRI or computed tomography (CT) scans after a patient presents with new neurologic symptoms. Presenting symptoms depend on the location of the tumor and can include headaches, seizures, difficulty expressing or comprehending language, weakness affecting extremities, sensory changes, bowel or bladder dysfunction, gait and balance changes, vision changes, hearing loss and endocrine dysfunction. A mass in the CNS has a broad differential diagnosis, including tumor, infection, inflammatory or demyelinating process, infarct, hemorrhage, vascular malformation and radiation treatment effect. The most likely diagnoses can be narrowed based on patient demographics, medical history, imaging characteristics and adjunctive laboratory studies. However, accurate histopathologic interpretation of tissue obtained at the time of surgery is frequently required to make a diagnosis and guide intraoperative decision making. Over half of CNS tumors in adults are metastases from systemic cancer originating elsewhere in the body [1]. An estimated 9.6% of adults with lung cancer, melanoma, breast cancer, renal cell carcinoma and colorectal cancer have brain metastases [2].


We need a new field of AI to combat racial bias – TechCrunch

#artificialintelligence

Since widespread protests over racial inequality began, IBM announced it would cancel its facial recognition programs to advance racial equity in law enforcement. Amazon suspended police use of its Rekognition software for one year to "put in place stronger regulations to govern the ethical use of facial recognition technology." But we need more than regulatory change; the entire field of artificial intelligence (AI) must mature out of the computer science lab and embrace the entire community. We can develop amazing AI that works in the world in largely unbiased ways. But to accomplish this, AI can't remain just a subfield of computer science (CS) and computer engineering (CE), as it is right now.


Intuitively, How Do Neural Networks Work?

#artificialintelligence

In my previous article, Intuitively, how can we understand different classification algorithms?, I introduced the main principles of classification algorithms. However, the toy data I used was quite simple and almost linearly separable; in real life, data is almost always non-linear, so our algorithms should be able to tackle non-linearly separable data. Let's compare how logistic regression behaves on almost linearly separable data versus non-linearly separable data. With the two toy datasets below, we can see that logistic regression finds the decision boundary when the data is almost linearly separable, but when the data is not linearly separable, it is not capable of finding a clear one. This is understandable, because logistic regression can only separate the data into two parts with a linear decision boundary.
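As a quick illustration of that comparison, here is a minimal sketch assuming scikit-learn is available; the blobs dataset stands in for the article's almost linearly separable toy data and the moons dataset for the non-linearly separable one.

```python
from sklearn.datasets import make_blobs, make_moons
from sklearn.linear_model import LogisticRegression

# Almost linearly separable data: two Gaussian blobs.
X_lin, y_lin = make_blobs(n_samples=200, centers=2, cluster_std=1.0, random_state=0)

# Non-linearly separable data: two interleaving half-moons.
X_moon, y_moon = make_moons(n_samples=200, noise=0.1, random_state=0)

for name, (X, y) in [("blobs", (X_lin, y_lin)), ("moons", (X_moon, y_moon))]:
    clf = LogisticRegression().fit(X, y)
    # Training accuracy is a rough proxy for how well a linear boundary fits the data.
    print(f"{name}: accuracy = {clf.score(X, y):.2f}")
```

On the blobs, the linear boundary fits almost perfectly; on the moons, accuracy drops because no straight line can cleanly separate the two classes.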


How Having Bigger AI Models Can Have A Detrimental Impact On Environment

#artificialintelligence

The COVID crisis has caused applications of artificial intelligence to skyrocket -- from tackling the global pandemic to serving as a vital tool for managing various business processes. Despite its benefits, AI has always been scrutinised for ethical concerns such as bias and privacy issues. However, the technology also has significant sustainability issues -- it is known to consume a massive amount of energy, creating a negative impact on the environment. As AI advances in predicting weather, understanding human speech, enhancing banking payments, and revolutionising healthcare, the models involved must not only be trained on large datasets but also require massive computing power to improve their accuracy. Such heavy computing and processing consumes a tremendous amount of energy and emits carbon dioxide, which has become an environmental concern. According to one report, training a single large AI model can emit approximately 626,000 pounds (284 tonnes) of carbon dioxide, roughly five times the lifetime emissions of the average US car.


Top S&P 500 Stocks Based on Genetic Algorithms: Returns up to 75.82% in 3 Months

#artificialintelligence

This top S&P 500 stocks forecast is designed for investors and analysts who need predictions for the whole S&P 500.

Package Name: Top S&P 500 Stocks
Recommended Positions: Long
Forecast Length: 3 Months (4/1/2020 – 7/1/2020)
I Know First Average: 32.03%

The greatest return came from ABMD at 75.82%. NVDA and ETFC also performed well for this time horizon, with returns of 44.61% and 42.98%, respectively. The overall average return in this Top S&P 500 Stocks package was 32.03%, giving investors an 11.47% premium over the S&P 500's return of 20.56% during the same period.
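For clarity, the quoted premium is simply the spread between the package's average return and the index return over the same window:

```latex
\text{premium} = 32.03\% - 20.56\% = 11.47\%
```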