Bye Data Scientists, Hello AI? Not Likely! - KDnuggets

#artificialintelligence

We have all worked hard to become what is now known as a data scientist (or whatever anyone prefers to call the role). AI has become more mainstream, especially in the last two years. The prospect of computers and robots that keep learning after they are built and eventually surpass human intelligence is terrifying. Computers can certainly think and compute faster than we can, but being able to process larger volumes of data does not necessarily make them smarter.



Machine Learning with Knime

#artificialintelligence

In this presentation, Kathrin Melcher, a data scientist at KNIME who holds a Master's degree in Mathematics from the University of Konstanz, Germany, gives an overview of KNIME Software, including the open-source KNIME Analytics Platform for creating data science applications and services, as well as the deployment options available with KNIME Server. While the structure of a project is often similar--data collection, data transformation, model training, deployment--each one requires its own special trick, whether that is a change in perspective or a particular technique for handling the special case and business questions at hand. You'll learn about demand prediction in energy, anomaly detection in IoT, risk assessment in finance, the most common applications in customer intelligence, social media analysis, topic detection, sentiment analysis, fraud detection, bots, recommendation engines, and more. Join us to learn what's possible in data science.


r/MachineLearning - [D] Are small transformers better than small LSTMs?

#artificialintelligence

Transformers are currently beating the state of the art on various NLP tasks. Something I noticed is that in all of the papers, the models are massive, with maybe 20 layers and hundreds of millions of parameters. Of course, using larger models is a general trend in NLP, but it raises the question of whether small transformers are any good. I recently had to train a sequence-to-sequence model from scratch, and I was unable to get better results with a transformer than with LSTMs. I am wondering if anyone here has had similar experiences or knows of any papers on this topic.
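
For a concrete sense of scale, here is a minimal PyTorch sketch (my own illustration, not from the thread) that builds a "small" two-layer Transformer and a comparably sized two-layer LSTM encoder/decoder and compares their parameter counts. The hyperparameters are assumptions for illustration, not anyone's recommended settings.

```python
import torch.nn as nn

def count_params(model: nn.Module) -> int:
    """Total number of trainable parameters."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

d_model = 256  # hypothetical "small" hidden size

# Small encoder-decoder Transformer: 2 layers each side, 4 attention heads.
transformer = nn.Transformer(
    d_model=d_model, nhead=4,
    num_encoder_layers=2, num_decoder_layers=2,
    dim_feedforward=512,
)

# LSTM encoder + decoder of roughly comparable width and depth.
lstm_encoder = nn.LSTM(input_size=d_model, hidden_size=d_model, num_layers=2)
lstm_decoder = nn.LSTM(input_size=d_model, hidden_size=d_model, num_layers=2)

print(f"transformer params:  {count_params(transformer):,}")
print(f"lstm seq2seq params: {count_params(lstm_encoder) + count_params(lstm_decoder):,}")
```

At this width the two stacks land in the same low-millions range, which is exactly the regime the question is about: far below the hundreds of millions of parameters reported in the papers.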


Papers With Code: Billion-scale semi-supervised learning for image classification

#artificialintelligence

This paper presents a study of semi-supervised learning with large convolutional networks. We propose a pipeline, based on a teacher/student paradigm, that leverages a large collection of unlabelled images (up to 1 billion)... Our main goal is to improve the performance for a given target architecture, like ResNet-50 or ResNeXt. We provide an extensive analysis of the success factors of our approach, which leads us to formulate some recommendations to produce high-accuracy models for image classification with semi-supervised learning. As a result, our approach brings important gains to standard architectures for image, video and fine-grained classification. For instance, by leveraging one billion unlabelled images, our learned vanilla ResNet-50 achieves 81.2% top-1 accuracy on the ImageNet benchmark.
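
To make the teacher/student paradigm concrete, here is a simplified sketch of the pseudo-labelling step under stated assumptions: the function name, the loader, and the top-k-per-class selection rule are an illustrative paraphrase of the pipeline described above, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def pseudo_label(teacher, unlabeled_loader, top_k=16000, num_classes=1000):
    """Score unlabelled images with the teacher and keep, for each class,
    the top-k most confident examples as pseudo-labelled training data.
    (In practice one would store image ids, not tensors, to save memory.)"""
    teacher.eval()
    scores = [[] for _ in range(num_classes)]  # (confidence, image) per class
    with torch.no_grad():
        for images in unlabeled_loader:
            probs = F.softmax(teacher(images), dim=1)
            conf, labels = probs.max(dim=1)
            for img, c, y in zip(images, conf, labels):
                scores[y.item()].append((c.item(), img))
    # Keep the k highest-confidence images per class.
    dataset = []
    for y, items in enumerate(scores):
        items.sort(key=lambda t: t[0], reverse=True)
        dataset += [(img, y) for _, img in items[:top_k]]
    return dataset
```

The student network is then pre-trained on the returned pseudo-labelled set and fine-tuned on the original labelled data.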


Five steps to AI-business

#artificialintelligence

Build prototypes on small data sets to gain momentum, support and experience with AI in your organization. Remember to train and involve everyone from the C-suite to frontline employees in the transformation. It takes 2-3 years to transform a large company into an AI company, but initial results should be evident within 6-12 months. This is the experience of Andrew Ng, founder and former lead of Google Brain. He knows his AI, having also served as Chief Scientist at Baidu; he is currently the founder of Landing AI and an adjunct professor at Stanford University.


Facial recognition: This new AI tool can spot when you are nervous or confused - ZDNet

#artificialintelligence

Whether you're intrigued or sceptical about it, use of facial recognition technology is growing – and now Fujitsu claims to have developed a way to help track emotions better too. The company's laboratories have come up with an AI-based technology that can track subtle changes of expression such as nervousness or confusion. Companies like Microsoft are already using emotion tools to recognise facial expressions, but these are limited to eight "core" states: anger, contempt, fear, disgust, happiness, sadness, surprise or neutral. The current technology works by identifying various action units (AUs) – that is, certain facial muscle movements that can be linked to specific emotions. For example, if both the AU "cheek raiser" and the AU "lip corner puller" are identified together, the AI can conclude that the person it is analysing is happy.
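
To illustrate the AU-combination idea in the last two sentences, here is a minimal sketch that maps sets of detected action units to emotion labels. The AU codes follow the standard Facial Action Coding System (AU6 is the cheek raiser, AU12 the lip corner puller); the rule table itself is a toy assumption, not Fujitsu's or Microsoft's actual model.

```python
# Rules mapping required action-unit combinations to an emotion label.
EMOTION_RULES = {
    frozenset({"AU6", "AU12"}): "happiness",             # cheek raiser + lip corner puller
    frozenset({"AU1", "AU2", "AU5", "AU26"}): "surprise",  # brow raisers + lid raiser + jaw drop
    frozenset({"AU4", "AU5", "AU7", "AU23"}): "anger",     # brow lowerer + lid raiser/tightener + lip tightener
}

def infer_emotion(detected_aus: set[str]) -> str:
    """Return the first emotion whose required AUs are all present."""
    for required, emotion in EMOTION_RULES.items():
        if required <= detected_aus:  # subset check: all required AUs detected
            return emotion
    return "neutral"

print(infer_emotion({"AU6", "AU12", "AU25"}))  # -> happiness
```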


Combination of Artificial Intelligence and Radiologists More Accurately Identified Breast Cancer

#artificialintelligence

An artificial intelligence (AI) tool--trained on roughly a million screening mammography images--identified breast cancer with approximately 90 percent accuracy when combined with analysis by radiologists, a new study finds. Led by researchers from NYU School of Medicine and the NYU Center for Data Science, the study examined the ability of a type of AI, a machine learning computer program, to add value to the diagnoses reached by a group of 14 radiologists as they reviewed 720 mammogram images. "Our study found that AI identified cancer-related patterns in the data that radiologists could not, and vice versa," says senior study author Krzysztof J. Geras, PhD, assistant professor in the Department of Radiology at NYU Langone. "AI detected pixel-level changes in tissue invisible to the human eye, while humans used forms of reasoning not available to AI," adds Dr. Geras, also an affiliated faculty member at the NYU Center for Data Science. "The ultimate goal of our work is to augment, not replace, human radiologists."
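
As a rough illustration of how a model's output might be combined with radiologists' assessments, here is a minimal sketch; equal-weight averaging of the two sources is my own assumption, not necessarily the fusion rule used in the study.

```python
def hybrid_malignancy_score(model_prob: float, radiologist_probs: list[float]) -> float:
    """Average the AI's predicted probability with the mean radiologist assessment."""
    reader_mean = sum(radiologist_probs) / len(radiologist_probs)
    return 0.5 * model_prob + 0.5 * reader_mean

# e.g. the model says 0.82 and three readers say 0.6, 0.7, 0.5 -> hybrid 0.71
print(hybrid_malignancy_score(0.82, [0.6, 0.7, 0.5]))
```

The point of such a hybrid is the complementarity the authors describe: the two sources make different kinds of errors, so a combined score can outperform either alone.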


#ValidateAI Conference

#artificialintelligence

Marta Kwiatkowska is a co-proposer of the Validate AI Conference. She is Professor of Computing Systems and Fellow of Trinity College, University of Oxford. Prior to this she was Professor in the School of Computer Science at the University of Birmingham, Lecturer at the University of Leicester and Assistant Professor at the Jagiellonian University in Cracow, Poland. Kwiatkowska has made fundamental contributions to the theory and practice of model checking for probabilistic systems, focusing on automated techniques for verification and synthesis from quantitative specifications. More recently, she has been working on safety and robustness verification for neural networks with provable guarantees.

