15 Most Common Data Science Interview Questions

#artificialintelligence

Some interviewers ask hard questions while others ask relatively easy ones, and as an interviewee it is up to you to arrive prepared. In a domain as broad as Machine Learning, no amount of preparation ever feels like enough, and at some point you may wonder what more you should read. Based on the 15-17 data science interviews I have attended, I have put together 15 commonly asked and important Data Science and Machine Learning questions that came up in almost all of them, and I recommend studying them thoroughly.


What Are the Most Important Preprocessing Steps in Machine Learning and Data Science?

#artificialintelligence

Data Science and Machine Learning are in high demand right now, and companies are looking for data scientists and machine learning engineers to handle their data and turn it into value. Whenever data scientists receive data, they must take the right preprocessing steps so that the transformed data can be used to train machine learning models effectively and efficiently. Real-world data is often incomplete and inaccurate, and it frequently contains outliers that some machine learning models cannot handle, leading to suboptimal training performance. The data may also contain duplicate rows or columns that must be dealt with before it is given to a model. Addressing these and related issues is crucial for improving a model's performance and its ability to generalize.
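As a concrete illustration of the steps mentioned above (duplicates, missing values, outliers, scaling), here is a minimal sketch using pandas and scikit-learn; the column names, values, and clipping thresholds are invented for the example and are not taken from the article.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Hypothetical raw data with the problems described above:
# missing values, a duplicate row, and an obvious outlier.
df = pd.DataFrame({
    "age":    [25, 32, np.nan, 32, 41, 29, 250],          # 250 is an outlier
    "income": [40e3, 52e3, 48e3, 52e3, np.nan, 45e3, 61e3],
})

# 1. Remove duplicate rows.
df = df.drop_duplicates()

# 2. Impute missing values (here: per-column median).
df = df.fillna(df.median(numeric_only=True))

# 3. Clip extreme outliers to the 1st/99th percentiles (winsorizing).
df = df.clip(lower=df.quantile(0.01), upper=df.quantile(0.99), axis=1)

# 4. Scale features to zero mean and unit variance before modeling.
X = StandardScaler().fit_transform(df)
print(X.shape)
```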


Data Science Blogathon 20th Edition - Analytics Vidhya

#artificialintelligence

The Data Science Blogathon by Analytics Vidhya began with a simple mission: to bring together a large community of data science enthusiasts to share their knowledge with the world. We now have over 4,000 articles under our belt on topics such as Data Science, Machine Learning, Deep Learning, Data Lakes, and Data Engineering, published by more than 700 authors who are avid data science enthusiasts, students, professionals, and researchers from across the globe. We bring you the 20th edition of the Data Science Blogathon. This month's Blogathon offers even more rewards through our special referral programme. Yes, you read that right!


Demystifying Black-Box Models with SHAP Value Analysis - DataScienceCentral.com

#artificialintelligence

As an Applied Data Scientist at Civis, I implemented the latest data science research to solve real-world problems. We recently worked with a global tool manufacturing company to reduce churn among their most loyal customers. A newly proposed tool, called SHAP (SHapley Additive exPlanation) values, allowed us to build a complex time-series XGBoost model capable of making highly accurate predictions for which customers were at risk, while still allowing for an individual-level interpretation of the factors that made each of these customers more or less likely to churn. To understand why this is important, we need to take a closer look at the concepts of model accuracy and interpretability. Until recently, we always had to choose between an accurate model that was hard to interpret, or a simple model that was easy to explain but sacrificed some accuracy.
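To make the idea concrete, here is a minimal sketch of computing SHAP values for a gradient-boosted tree model with the shap and xgboost libraries; the synthetic data, feature names, and toy label are assumptions for illustration, not the churn model described above.

```python
import numpy as np
import pandas as pd
import shap
import xgboost

# Hypothetical churn-style dataset (purely synthetic).
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "orders_last_90d": rng.poisson(5, 500),
    "days_since_last_purchase": rng.integers(1, 365, 500),
    "support_tickets": rng.poisson(1, 500),
})
y = (X["days_since_last_purchase"] > 180).astype(int)  # toy churn label

# Fit a gradient-boosted tree model.
model = xgboost.XGBClassifier(n_estimators=100, max_depth=3)
model.fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Per-customer, per-feature contributions to the predicted risk.
print(shap_values.shape)           # (n_customers, n_features)
shap.summary_plot(shap_values, X)  # global view of feature impact
```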


Community Detection with Node2Vec

#artificialintelligence

Originally published on Towards AI, the World's Leading AI and Technology News and Media Company.


Rateless Codes for Near-Perfect Load Balancing in Distributed Matrix-Vector Multiplication

Communications of the ACM

Large-scale machine learning and data mining applications require computer systems to perform massive matrix-vector and matrix-matrix multiplication operations that need to be parallelized across multiple nodes. The presence of straggling nodes--computing nodes that unpredictably slow down or fail--is a major bottleneck in such distributed computations. Ideal load balancing strategies that dynamically allocate more tasks to faster nodes require knowledge or monitoring of node speeds as well as the ability to quickly move data. Recently proposed fixed-rate erasure coding strategies can handle unpredictable node slowdown, but they ignore partial work done by straggling nodes, resulting in a lot of redundant computation. We propose a rateless fountain coding strategy that achieves the best of both worlds: we prove that its latency is asymptotically equal to that of ideal load balancing, and it performs asymptotically zero redundant computation. Our idea is to create linear combinations of the m rows of the matrix and assign these encoded rows to different worker nodes. The original matrix-vector product can be decoded as soon as slightly more than m row-vector products are collectively finished by the nodes. Evaluations on parallel and distributed computing systems yield as much as a three-times speedup over uncoded schemes.

Matrix-vector multiplications form the core of a plethora of scientific computing and machine learning applications, including solving partial differential equations, forward and back propagation in neural networks, and computing the PageRank of graphs. In the age of Big Data, most of these applications involve multiplying extremely large matrices and vectors, and the computations cannot be performed efficiently on a single machine. This has motivated the development of several algorithms that seek to speed up matrix-vector multiplication by distributing the computation across multiple computing nodes.
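As a rough illustration of the encode-then-decode idea, the sketch below encodes the m rows of A as random linear combinations, lets simulated workers return results out of order, and recovers A·x by least squares once slightly more than m encoded products have arrived. Note this is not the paper's actual construction, which uses sparse, rateless fountain (LT) codes with a peeling decoder; the dense Gaussian code, the sizes, and the straggler model here are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
m, n = 100, 50                              # A is m x n; we want b = A @ x
A = rng.standard_normal((m, n))
x = rng.standard_normal(n)

# Encode: each encoded row is a random linear combination of the m rows of A.
num_encoded = 2 * m                         # generate plenty of encoded rows
G = rng.standard_normal((num_encoded, m))   # encoding matrix (dense stand-in)
encoded_rows = G @ A                        # rows distributed to workers

# Workers compute encoded_row @ x; straggling means results arrive in a
# random order. Stop as soon as slightly more than m results have arrived.
arrival_order = rng.permutation(num_encoded)
k = int(1.05 * m)                           # ~5% decoding overhead
received = arrival_order[:k]
y = encoded_rows[received] @ x              # y = G_recv @ (A @ x)

# Decode: solve G_recv @ b = y for b = A @ x.
G_recv = G[received]
b_hat, *_ = np.linalg.lstsq(G_recv, y, rcond=None)

print(np.allclose(b_hat, A @ x))            # True, up to numerical precision
```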


Complete Machine Learning & Data Science Bootcamp 2022

#artificialintelligence

This brand-new Machine Learning and Data Science course was just launched and updated this month with the latest trends and skills! Become a complete Data Scientist and Machine Learning engineer! Join a live online community of 400,000 engineers and a course taught by industry experts who have actually worked for large companies in places like Silicon Valley and Toronto. Graduates of Andrei's courses are now working at Google, Tesla, Amazon, Apple, IBM, JP Morgan, Facebook, and other top tech companies. You will go from zero to mastery!


Parametric vs. Non-parametric tests, and when to use them

#artificialintelligence

The fundamentals of Data Science include computer science, statistics, and math. It is very easy to get caught up in the latest, greatest, and most powerful algorithms: convolutional neural nets, reinforcement learning, and so on. As an ML/health researcher and algorithm developer, I often employ these techniques. However, having trained for 10 years as an electrical engineer, something I see rife in the data science community is that when all you have is a hammer, everything looks like a nail. Suffice it to say that while many of these exciting algorithms have immense applicability, the statistical underpinnings of data science are too often overlooked.
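As a small illustration of the parametric versus non-parametric choice the article discusses, the sketch below compares an independent-samples t-test (which assumes roughly normal data) with the rank-based Mann-Whitney U test (which does not); the synthetic samples are assumptions made for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Two hypothetical samples: one roughly normal, one heavily skewed.
group_a = rng.normal(loc=50, scale=5, size=40)
group_b = rng.lognormal(mean=3.9, sigma=0.4, size=40)   # skewed, non-normal

# Check the normality assumption before reaching for a parametric test.
print("Shapiro-Wilk p (group_b):", stats.shapiro(group_b).pvalue)

# Parametric: independent-samples (Welch) t-test, assumes approximate normality.
t_stat, t_p = stats.ttest_ind(group_a, group_b, equal_var=False)

# Non-parametric: Mann-Whitney U test, no normality assumption.
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

print(f"Welch t-test:   p = {t_p:.4f}")
print(f"Mann-Whitney U: p = {u_p:.4f}")
```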


9 Completely Free Statistics Courses for Data Science

#artificialintelligence

This is a completely free course on statistics. In it, you will learn how to estimate population parameters from sample statistics, and you will cover hypothesis testing and confidence intervals, t-tests and ANOVA, correlation and regression, and the chi-squared test. The course is taught by industry professionals, and you will learn by working through various exercises.
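Since the paragraph above lists several of the tests covered, here is a minimal example of one of them, a chi-squared test of independence with SciPy; the contingency table is invented purely for illustration.

```python
from scipy.stats import chi2_contingency

# Invented 2x2 contingency table: clicked vs. not clicked, by page variant.
table = [[30, 70],   # variant A
         [45, 55]]   # variant B

# Test whether click behaviour is independent of the page variant.
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
```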


15 Best Data Science Programs Online in 2022- [Free Programs Included]

#artificialintelligence

This is a completely free course and a good first step towards understanding the data analysis process. In this course, you will learn the entire data analysis process, including posing a question, wrangling the data, exploring it, drawing conclusions, and communicating your findings. The course will also teach the Python libraries NumPy, Pandas, and Matplotlib.
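As a tiny sketch of that workflow (pose a question, wrangle, explore, conclude) using pandas and Matplotlib, consider the toy example below; the data and the question are invented for illustration and are not part of the course.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Question (invented for illustration): do weekend orders tend to be larger?
orders = pd.DataFrame({
    "day":   ["Mon", "Tue", "Sat", "Sun", "Wed", "Sat", "Sun", "Thu"],
    "total": [12.5, 9.0, 30.0, 27.5, 11.0, 25.0, None, 10.5],
})

# Wrangle: drop missing totals and flag weekend orders.
orders = orders.dropna(subset=["total"])
orders["is_weekend"] = orders["day"].isin(["Sat", "Sun"])

# Explore and conclude: compare the two group means and visualize them.
summary = orders.groupby("is_weekend")["total"].mean()
print(summary)
summary.plot(kind="bar", title="Mean order total: weekday vs. weekend")
plt.tight_layout()
plt.show()
```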