[D] Quality Contributions Roundup 7/22

#artificialintelligence

The rest of the thread, "Tell me about a paper that you found inspiring," from u/mitare is also quite interesting. This paper is a really comprehensive review detailing exactly what current ML techniques are unable to do that humans do very well. It lays out the groundwork that needs to be done to reach human-level artificial intelligence.


X Prize founder Peter Diamandis: A.I. will be most important tool to keep a job after coronavirus

#artificialintelligence

While many people think of automation as the biggest threat to human labor, X Prize founder and executive chairman Peter Diamandis says that artificial intelligence will be more crucial than people realize to reskill workers on the other side of the Covid-19 crisis. Reskilling was already important for workers to keep pace with advances in technology, and the pandemic has heightened the need to upskill the labor force. "Covid-19 hits, and there's another asteroid impact, it hits the playing field so much, any companies that are teetering on being a decent market product fall apart, people start losing their jobs in old school industries, and so rapid reskilling is really about addressing both the exponential tech impact on our job market and also on Covid-19. We need a means by which all of us are able to continue to upskill what we do," said Diamandis, speaking at a recent CNBC @Work livestream. The speed of reskilling will pick up as well, according to the futurist, who launched a competition called "Rapid Reskilling," which has offered $5 million to teams that create solutions to quickly reskill under-resourced workers.


Valkyrie and Actuarial Risk Management Establish Strategic Alliance

#artificialintelligence

Valkyrie, a science-driven consulting firm that solves organizational and global challenges through AI and machine learning, and Actuarial Risk Management (ARM), a full-service global actuarial consultancy, announced that they have formed a strategic alliance. ARM and Valkyrie's collaboration brings actuaries and data scientists together for a first-of-its-kind advisory team built to penetrate data using advanced learning capabilities, offering customers a competitive advantage and a better means of assessing risk. As the COVID-19 pandemic evolves, risk management has become an increasingly important priority for businesses worldwide as they look to protect their employees, estimate impact and avoid serious financial losses. Austin companies Valkyrie and ARM will jointly offer clients a new view toward data forensics and predictive analytics that are informed by an actuarial perspective and coupled with data scientist-led bespoke models. "This new alliance strengthens our worldwide market presence by combining ARM's strong actuarial team with Valkyrie's deep knowledge of data science and machine learning," said Corwin (Cory) Zass, Founder and Principal at ARM, who recently spoke in a webinar about machine learning and the future impact COVID-19 will have on the life markets.


Powerful Photon-Based Processing Units Enable Complex Artificial Intelligence

#artificialintelligence

The photonic tensor core performs vector-matrix multiplications by utilizing the efficient interaction of light at different wavelengths with multistate photonic phase change memories. The aim is to use photons to create more powerful and power-efficient processing units for more complex machine learning. Machine learning performed by neural networks is a popular approach to developing artificial intelligence, as researchers aim to replicate brain functionalities for a variety of applications. A paper in the journal Applied Physics Reviews, by AIP Publishing, proposes a new approach to performing the computations required by a neural network, using light instead of electricity. In this approach, a photonic tensor core performs multiplications of matrices in parallel, improving the speed and efficiency of current deep learning paradigms.
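For readers less familiar with the terminology, the operation such a tensor core accelerates is the ordinary vector-matrix product at the heart of every neural-network layer. Here is a minimal NumPy sketch of that computation; the sizes and values are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Illustrative sizes only; the photonic core operates on optical signals at
# different wavelengths, but the underlying arithmetic is the same.
rng = np.random.default_rng(0)
x = rng.random(4)          # input vector (e.g., activations of a layer)
W = rng.random((4, 3))     # weight matrix stored in the memory elements

# The core computes all output elements of x @ W in parallel; electronically
# we express the same result as a single matrix product.
y = x @ W
print(y.shape)  # (3,)
```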


Association Rule Learning & Apriori Algorithm

#artificialintelligence

Association rule learning is a rule-based machine learning method for discovering interesting relations between variables in large databases. It is intended to identify strong rules discovered in databases using some measures of interestingness. Association rule mining finds all sets of items (itemsets) whose support exceeds the minimum support, and then uses those large itemsets to generate rules whose confidence exceeds the minimum confidence. The lift of a rule is the ratio of the observed support to that expected if X and Y were independent, i.e. lift(X → Y) = support(X ∪ Y) / (support(X) × support(Y)). A typical and widely used example of association rules application is market basket analysis.
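As a minimal sketch of these three metrics (the transactions below are invented for illustration, not taken from the article), support, confidence, and lift can be computed directly from a list of market baskets:

```python
# Hypothetical market-basket transactions for illustration only.
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"milk", "butter", "bread"},
    {"milk", "eggs"},
]

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """Estimated P(consequent | antecedent) over the transactions."""
    return support(set(antecedent) | set(consequent)) / support(antecedent)

def lift(antecedent, consequent):
    """Ratio of observed joint support to the support expected under independence."""
    return confidence(antecedent, consequent) / support(consequent)

print(support({"bread", "milk"}))       # 0.5
print(confidence({"bread"}, {"milk"}))  # 0.666...
print(lift({"bread"}, {"milk"}))        # ~0.89, i.e. slightly below independence
```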


Regularization: Machine Learning

#artificialintelligence

To understand the concept of regularization and its link with Machine Learning, we first need to understand why we need regularization at all. We all know Machine Learning is about training a model with relevant data and using the model to predict unknown data. By unknown, I mean data which the model has not seen yet. Suppose we have trained the model and are getting good scores on the training data, but during prediction we find that the model underperforms compared to its performance during training. This may be a case of over-fitting (which I will explain below), which causes the model to make incorrect predictions.
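To make the symptom concrete, here is a minimal scikit-learn sketch of the gap between training and test scores on an over-flexible model, and of how an L2 penalty (Ridge) can shrink that gap. The data, polynomial degree, and alpha value are invented for illustration, not taken from the article.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(42)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=60)

# High-degree polynomial features make it easy for the model to over-fit.
X_poly = PolynomialFeatures(degree=12).fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X_poly, y, random_state=0)

plain = LinearRegression().fit(X_train, y_train)
ridge = Ridge(alpha=1.0).fit(X_train, y_train)

print("plain train/test R^2:", plain.score(X_train, y_train), plain.score(X_test, y_test))
print("ridge train/test R^2:", ridge.score(X_train, y_train), ridge.score(X_test, y_test))
```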


Analyzing the Performance of the Classification Models in Machine Learning

#artificialintelligence

A confusion matrix (also called an error matrix) is used to analyze how well classification models (like Logistic Regression, Decision Tree Classifier, etc.) perform. Why do we analyze the performance of the models? Analyzing the performance of the models helps us to find and eliminate bias and variance problems if they exist, and it also helps us fine-tune the model so that it produces more accurate results. The confusion matrix is usually applied to binary classification problems but can be extended to multi-class classification problems as well. Concepts are comprehended better when illustrated with examples, so let us consider one.
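As a minimal sketch of a binary confusion matrix (the label vectors below are invented for illustration rather than taken from the article's example):

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

# Rows are the true classes, columns the predicted classes:
# [[TN, FP],
#  [FN, TP]]
cm = confusion_matrix(y_true, y_pred)
print(cm)

tn, fp, fn, tp = cm.ravel()
print("accuracy:", (tp + tn) / cm.sum())
print("precision:", tp / (tp + fp))
print("recall:", tp / (tp + fn))
```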


Ensemble Methods: A Beginner's Guide

#artificialintelligence

When I started my Data Science journey, terms like ensemble and boosting often popped up. Whenever I opened the discussion forum of any Kaggle competition or looked at any winner's solution, it was mostly filled with these things. At first these discussions sounded totally alien, and this class of ensemble models looked like fancy stuff not meant for newbies, but trust me, once you have a basic understanding of the concepts you are going to love them! So let's start with a very simple question: what exactly is an ensemble? "A group of separate things/people that contribute to a coordinated whole." In a way, this is the core idea behind the entire class of ensemble learning! Let's rewind the clocks a bit and go back to the school days for a while. Remember the report card you used to get with an overall grade? How exactly was that overall grade calculated? The teachers of your respective subjects gave feedback based on their own sets of criteria: your math teacher would assess you on algebra, trigonometry and so on, your sports teacher would judge how you perform on the field, and your music teacher would judge your vocal skills. The point being, each of these teachers has their own set of rules for judging the performance of a student, and later all of these are combined to give an overall grade for the student.
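Here is a minimal sketch of that "combine several judges" idea using a hard-voting ensemble; the dataset and member models are illustrative choices, not anything prescribed by the article.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each "teacher" grades on its own criteria...
members = [
    ("logreg", LogisticRegression(max_iter=5000)),
    ("tree", DecisionTreeClassifier(max_depth=4, random_state=0)),
    ("knn", KNeighborsClassifier()),
]

# ...and the ensemble combines their votes into one overall verdict.
ensemble = VotingClassifier(estimators=members, voting="hard").fit(X_train, y_train)

for name, model in members:
    print(name, model.fit(X_train, y_train).score(X_test, y_test))
print("ensemble", ensemble.score(X_test, y_test))
```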


"What's that? Reinforcement Learning in the Real-world?"

#artificialintelligence

Reinforcement Learning offers a distinctive way of solving the Machine Learning puzzle. Its sequential decision-making ability and its suitability to tasks requiring a trade-off between immediate and long-term returns are some of the qualities that make it desirable in settings where supervised or unsupervised learning approaches would, in comparison, not fit as well. By having agents start with zero knowledge and then learn qualitatively good behaviour through interaction with the environment, it's almost fair to say Reinforcement Learning (RL) is the closest thing we have to Artificial General Intelligence yet. We can see RL being used in robotics control, treatment design in healthcare, and elsewhere; but why aren't we boasting of many RL agents being scaled up to real-world production systems? There's a reason why games, like Atari, are such nice RL benchmarks -- they let us care only about maximizing the score and not worry about designing a reward function.
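To make the "immediate vs. long-term return" trade-off concrete, here is a minimal tabular Q-learning sketch on an invented five-state corridor where the only reward sits at the far end; none of the environment details come from the article, and the agent simply explores at random while the off-policy update learns the greedy values.

```python
import random

N_STATES, GOAL = 5, 4            # states 0..4; reward only at the far end
ACTIONS = (-1, +1)               # step left or right
alpha, gamma = 0.1, 0.9          # learning rate and discount factor
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move along the corridor; reward 1 only when the goal is reached."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

for _ in range(500):
    s, done = 0, False
    while not done:
        a = random.choice(ACTIONS)
        s2, r, done = step(s, a)
        # gamma is what weighs delayed reward against immediate reward.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The greedy policy recovered from Q should point right (+1) in every
# non-terminal state, because all value flows back from the distant goal.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)})
```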


Why Regularization Works

#artificialintelligence

When we train a Machine Learning model or a Neural Network, we sometimes witness that our model performs exceptionally well on the training data but fails to give the desired output on the testing or validation data. One of the many reasons for such a difference in performance is the large weights learned during training, which result in overfitting. Large weights cause instability in the model, and a little variation in the test data leads to high error. Apart from this, large weights also cause problems in the gradient descent step of training. To penalize these large weights, we regularize them towards smaller values.
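Here is a minimal NumPy sketch of how an L2 penalty enters the loss and, through its gradient, pushes the weights towards smaller values during gradient descent; the data, learning rate, and regularization strength are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=100)

w = np.zeros(3)
lr, lam = 0.1, 0.5               # learning rate and regularization strength

for _ in range(200):
    err = X @ w - y
    # Loss = mean squared error + lam * ||w||^2; the penalty grows with the
    # size of the weights, so its gradient (2 * lam * w) pulls w towards zero.
    grad = 2 * X.T @ err / len(y) + 2 * lam * w
    w -= lr * grad

print(w)   # shrunk towards zero relative to the unregularized solution
```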