Testing for bias in your AI software: Why it's needed, how to do it
When it comes to artificial intelligence (AI) and machine learning (ML) in testing, much of today's interest and innovation revolves around using these technologies to improve and accelerate the practice of testing. The more interesting problem lies in how to test AI/ML applications themselves. In particular, how can you tell whether a response is correct?

Part of the answer involves new ways of looking at functional testing, but testers face an even bigger problem: cognitive bias, the possibility that an application returns an incorrect or non-optimal result because of a systematic deviation in processing that produces results inconsistent with reality. This is very different from a bug, which you can define as an identifiable and measurable error in a process or result.
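One way to make the bias-versus-bug distinction concrete is to test for systematic skew across groups rather than for a single reproducible failure. The sketch below is illustrative only: the data, the demographic-parity metric, and the threshold are assumptions chosen for this example, not something prescribed by the article.

```python
# Hypothetical sketch: flagging systematic bias, as opposed to an
# isolated, reproducible bug. Data and threshold are made up for illustration.

def positive_rate(predictions):
    """Fraction of positive (1) predictions in a list of 0/1 outputs."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_group_a, preds_group_b):
    """Absolute difference in positive-prediction rates between two groups.

    A large, persistent gap suggests a systematic skew (bias) in the model,
    rather than a single identifiable error (a bug).
    """
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

# Illustrative model outputs for two demographic groups (invented data).
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% positive
group_b = [0, 1, 0, 0, 1, 0, 0, 0]   # 25% positive

gap = demographic_parity_gap(group_a, group_b)
THRESHOLD = 0.2  # tolerance chosen purely for illustration
print(f"parity gap = {gap:.2f}, biased = {gap > THRESHOLD}")
# → parity gap = 0.50, biased = True
```

A test like this passes or fails on an aggregate property of many outputs, which is exactly what a conventional bug-oriented assertion on a single result cannot capture.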
Mar-6-2021, 21:35:49 GMT