You can't eliminate bias from machine learning, but you can pick your bias
Bias is a major topic of concern in mainstream society, which has embraced the idea that certain characteristics -- race, gender, age, or zip code, for example -- should not matter when making decisions about things such as credit or insurance. But while an absence of bias makes sense on a human level, the picture is different in machine learning.

In machine learning theory, a model with no bias at all -- one that makes no prior assumptions about the data -- cannot generalize beyond the examples it has already seen, so even a provably unbiased, "optimal" model is of little practical value. What this tells us is that, as unfortunate as it may sound, without some bias built into the model, you cannot learn. Modern businesses want to use machine learning and data mining to make decisions based on what their data tell them, but the very nature of that inquiry is discriminatory: it works precisely by treating different inputs differently.
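The point about inductive bias can be sketched in code. The example below is illustrative and not from the article: it compares a learner with no assumptions (pure memorization of training pairs) against a learner "biased" toward a linear relationship, on data that happen to follow one. The data-generating rule, the memorizer, and the least-squares fit are all assumptions made up for this sketch.

```python
import random

# Hypothetical data for illustration: inputs follow y = 2x plus noise.
random.seed(0)
train = [(x, 2 * x + random.uniform(-0.5, 0.5)) for x in range(10)]

# Learner A: pure memorization -- no inductive bias. It can only answer
# for inputs it has already seen; on a new input it has no basis to
# prefer any output over another.
memory = dict(train)

def memorizer(x):
    return memory.get(x)  # None for unseen inputs: no generalization

# Learner B: assumes (is "biased" toward) a linear relationship
# y = a*x + b, fit by ordinary least squares. That assumption is
# exactly what lets it answer for inputs it has never seen.
n = len(train)
sx = sum(x for x, _ in train)
sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train)
sxy = sum(x * y for x, y in train)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

def linear_model(x):
    return a * x + b

# On an unseen input, the unbiased memorizer returns nothing, while the
# biased linear model extrapolates close to the true value 2*25 = 50.
print(memorizer(25))      # None
print(linear_model(25))   # roughly 50
```

The memorizer is "unbiased" in the theoretical sense of making no assumptions, and as a result it has learned nothing it can apply outside its training set; the linear model's built-in assumption is what makes generalization possible, and it only helps because the assumption happens to fit the data.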
Nov-15-2020, 00:56:00 GMT