Here's a study that confirms what many of us already experience on YouTube. The streaming video company's recommendation algorithm can sometimes send you on an hours-long video binge so captivating that you never notice the time passing. But according to a study from the software nonprofit Mozilla Foundation, trusting the algorithm means you're more likely to be served videos featuring sexualized content and false claims than content matching your personal interests. In a study of more than 37,000 volunteers, Mozilla found that 71 percent of the videos participants flagged as objectionable had been recommended to them by YouTube's algorithm. The volunteers used a browser extension to track their YouTube usage over 10 months, and when they flagged a video as problematic, the extension recorded whether they had come across it via YouTube's recommendations or on their own.
AI algorithms employed in everything from hiring to lending to criminal justice have a persistent and often invisible problem with bias.

The big picture: One solution could be audits that aim to determine whether an algorithm is working as intended, whether it's disproportionately affecting different groups of people and, if there are problems, how they can be fixed.

How it works: Algorithmic audits, usually conducted by outside companies, involve examining an algorithm's code and the data used to train it, and assessing its potential impact on populations through interviews with stakeholders and those who might be affected by it.

Between the lines: Financial audits exist in part to open up the black box of a company's internal operations to outside investors, and to ensure that a company remains in compliance with financial laws and regulations. Algorithmic audits aim to play a similar role for AI systems.

Details: Algorithmic audits can help companies screen their AI products for flaws that may not be apparent at first glance.
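To make "disproportionately affecting different groups of people" concrete, here is a minimal sketch of one statistical check an auditor might run: the disparate-impact ratio, sometimes called the "four-fifths rule." This is an illustrative assumption, not the method of any specific audit described above, and the group names, data, and 0.8 threshold are all hypothetical.

```python
# Illustrative sketch only: one narrow check an algorithmic audit might run.
# The group names, decisions, and the 0.8 ("four-fifths rule") threshold
# are hypothetical, not drawn from any audit described in the article.

def selection_rates(outcomes):
    """Fraction of positive decisions (e.g., hires) per group."""
    return {group: sum(decisions) / len(decisions)
            for group, decisions in outcomes.items()}

def disparate_impact_ratios(outcomes, reference_group):
    """Each group's selection rate divided by the reference group's."""
    rates = selection_rates(outcomes)
    ref = rates[reference_group]
    return {group: rate / ref for group, rate in rates.items()}

# Hypothetical hiring decisions (1 = advanced to interview, 0 = rejected)
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 = 0.75 selection rate
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 3/8 = 0.375 selection rate
}

ratios = disparate_impact_ratios(outcomes, reference_group="group_a")
# group_b's ratio is 0.375 / 0.75 = 0.5, below the common 0.8 threshold,
# so an auditor would flag this outcome gap for closer review.
flagged = [group for group, ratio in ratios.items() if ratio < 0.8]
```

A real audit would go well beyond a single metric like this, combining statistical tests with code review and stakeholder interviews, but the sketch shows the kind of quantitative disparity an auditor looks for.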
For all the attention that AI audits have received, though, their ability to actually detect and protect against bias remains unproven. The term "AI audit" can mean many different things, which makes it hard to trust the results of audits in general. Even the most rigorous audits can be limited in scope. And even with unfettered access to the innards of an algorithm, it can be surprisingly tough to say with certainty whether it treats applicants fairly. At best, audits give an incomplete picture, and at worst, they could help companies hide problematic or controversial practices behind an auditor's stamp of approval.