Explainable-AI: Where Supervised Learning Can Falter

#artificialintelligence 

Disclaimer: I'll be talking mainly about logistic regression and basic feed-forward neural networks, so it's helpful to have programmed with those two models before reading this piece.

OK, before statisticians and ML folks come running after me after reading the title: I'm not talking about linear regression. Yes, in linear regression you can use the R-squared (or adjusted R-squared) statistic to talk about explained variance, and since linear regression only involves a weighted sum of independent variables (or predictors), the model is pretty interpretable. If you were doing a linear regression to predict, say, the price of a car, Car_Price, from the number of seats, mileage, maximum speed, and battery life, your linear model could be Car_Price = c1*Seats + c2*Mileage + c3*Speed + c4*Battery_Power. The fact that the variables are only added together, each scaled by its coefficient, makes the model pretty interpretable: each coefficient tells you how much the predicted price changes per unit change in that feature. But when it comes to prediction models like logistic regression and neural networks, the role each predictor (called a "feature" in ML) plays in the output becomes much harder to read off.
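To make the interpretability point concrete, here is a minimal sketch in NumPy. The feature names and coefficient values are made up for illustration (they are not from any real car dataset); the point is that after fitting an additive model, each coefficient can be read directly as "predicted price change per unit change in that feature."

```python
import numpy as np

# Hypothetical data: each row is [Seats, Mileage, Speed, Battery_Power].
# The "true" coefficients below are invented just to generate the example.
rng = np.random.default_rng(0)
X = rng.uniform(1, 10, size=(100, 4))
true_coefs = np.array([2.0, -0.5, 1.5, 3.0])  # c1..c4
y = X @ true_coefs

# Fit Car_Price = c1*Seats + c2*Mileage + c3*Speed + c4*Battery_Power
# by ordinary least squares.
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)

# Because the model is purely additive, each fitted coefficient is directly
# interpretable: it is the change in predicted price per unit change in
# that one feature, holding the others fixed.
for name, c in zip(["Seats", "Mileage", "Speed", "Battery_Power"], coefs):
    print(f"{name}: {c:.2f}")
```

No such direct reading exists once the features pass through a sigmoid or through the stacked nonlinearities of a neural network, which is where the interpretability trouble discussed here begins.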
