Ethical AI Team Says Bias Bounties Can More Quickly Expose Algorithmic Flaws

Bias in AI systems is proving to be a major stumbling block in efforts to more broadly integrate the technology into our society. A new initiative that will reward researchers for finding any prejudices in AI systems could help solve the problem.

The effort is modeled on the bug bounties that software companies pay to cybersecurity experts who alert them to potential security flaws in their products. The idea isn’t a new one; “bias bounties” were first proposed by AI researcher and entrepreneur JB Rubinovitz back in 2018, and various organizations have already run such challenges.

But the new effort seeks to create an ongoing forum for bias bounty competitions that is independent of any particular organization. Made up of volunteers from a range of companies including Twitter, the so-called “Bias Buccaneers” plan to hold regular competitions, or “mutinies,” and earlier this month launched the first such challenge.

“Bug bounties are a standard practice in cybersecurity that has yet to find footing in the algorithmic bias community,” the organizers say on their website. “While initial one-off events demonstrated enthusiasm for bounties, Bias Buccaneers is the first nonprofit intended to create ongoing Mutinies, collaborate with technology companies, and pave the way for transparent and reproducible evaluations of AI systems.”

This first competition is aimed at tackling bias in image detection algorithms, but rather than having entrants target specific AI systems, it challenges researchers to build tools that can detect biased datasets. The idea is to create a machine learning model that can accurately label each image in a dataset with its skin tone, perceived gender, and age group. The competition ends on November 30 and carries a first prize of $6,000, a second prize of $4,000, and a third prize of $2,000.
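For readers curious what such a labeling model might look like in practice, here is a minimal sketch in Python using PyTorch and torchvision. The backbone choice, head names, and label categories are illustrative assumptions on my part, not the competition’s actual specification.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical label spaces -- the actual challenge defines its own categories.
NUM_SKIN_TONES = 10   # e.g., a graded skin-tone scale (assumption)
NUM_GENDERS = 3       # e.g., feminine / masculine / indeterminate (assumption)
NUM_AGE_GROUPS = 4    # e.g., child / young adult / adult / older adult (assumption)

class ImageAttributeLabeler(nn.Module):
    """Shared image backbone with one classification head per attribute."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()  # use the backbone purely as a feature extractor
        self.backbone = backbone
        self.skin_tone_head = nn.Linear(feat_dim, NUM_SKIN_TONES)
        self.gender_head = nn.Linear(feat_dim, NUM_GENDERS)
        self.age_head = nn.Linear(feat_dim, NUM_AGE_GROUPS)

    def forward(self, images):
        feats = self.backbone(images)
        return {
            "skin_tone": self.skin_tone_head(feats),
            "gender": self.gender_head(feats),
            "age_group": self.age_head(feats),
        }

# Example: label a batch of 224x224 RGB images (random tensors stand in for real data).
model = ImageAttributeLabeler().eval()
with torch.no_grad():
    logits = model(torch.randn(8, 3, 224, 224))
    predictions = {name: scores.argmax(dim=1) for name, scores in logits.items()}
```

A multi-head design like this is one common way to predict several attributes from a single image pass; entrants could equally use separate models per attribute.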

The challenge is premised on the fact that the source of algorithmic bias is often not the algorithm itself so much as the data it is trained on. Automated tools that can quickly assess how balanced a collection of images is with respect to attributes that are often sources of discrimination could help AI researchers steer clear of clearly biased data sources.
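As a rough illustration of how such attribute labels feed into a balance check, here is a small Python sketch. The attribute names, categories, and example records are hypothetical and not drawn from any dataset or tool used in the challenge.

```python
from collections import Counter

def attribute_balance(labels, attribute):
    """Return the share of images falling into each category of one attribute.

    `labels` is a list of per-image dicts, e.g. produced by a labeling model.
    """
    counts = Counter(record[attribute] for record in labels)
    total = sum(counts.values())
    return {category: count / total for category, count in counts.items()}

# Hypothetical per-image labels (in practice, thousands of images).
labels = [
    {"skin_tone": "light", "gender": "feminine", "age_group": "adult"},
    {"skin_tone": "light", "gender": "masculine", "age_group": "adult"},
    {"skin_tone": "dark", "gender": "masculine", "age_group": "older adult"},
]

for attr in ("skin_tone", "gender", "age_group"):
    shares = attribute_balance(labels, attr)
    # A heavily skewed distribution flags a dataset that merits a closer look.
    print(attr, shares)
```

Simple proportions like these are only a starting point; a real auditing tool would also need to account for label uncertainty and intersections between attributes.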

But the organizers say this is just the first step in an effort to build up a toolkit for assessing bias in datasets, algorithms, and applications, and ultimately create standards for how to deal with algorithmic bias, fairness, and explainability.

It’s not the only such effort. One of the leaders of the new initiative is Twitter’s Rumman Chowdhury, who helped organize the first AI bias bounty competition last year, targeting the algorithm the platform used to crop pictures, which users complained favored white and male faces over Black and female ones.

The competition gave hackers access to the company’s model and challenged them to find flaws in it. Entrants found a wide range of problems, including a preference for stereotypically beautiful faces, an aversion to people with white hair (a marker of age), and a preference for memes with English rather than Arabic script.

Stanford University has also recently concluded a competition that challenged teams to come up with tools designed to help people audit commercially deployed or open-source AI systems for discrimination. And current and upcoming EU laws could make it mandatory for companies to regularly audit their data and algorithms.

But taking AI bug bounties and algorithmic auditing mainstream and making them effective will be easier said than done. Inevitably, companies that build their businesses on their algorithms are going to resist any efforts to discredit them.

Building on lessons from audit systems in other domains, such as finance and environmental and health regulations, researchers recently outlined some of the crucial ingredients for effective accountability. One of the most important criteria they identified was the meaningful involvement of independent third parties.

The researchers pointed out that current voluntary AI audits often involve conflicts of interest, such as the target organization paying for the audit, helping frame the scope of the audit, or having the opportunity to review findings before they are publicized. This concern was mirrored in a recent report from the Algorithmic Justice League, which noted the outsized role of target organizations in current cybersecurity bug bounty programs.

Finding a way to fund and support truly independent AI auditors and bug hunters will be a significant challenge, particularly as they will be going up against some of the most well-resourced companies in the world. Fortunately, though, there seems to be a growing sense within the industry that tackling this problem will be critical to maintaining users’ trust in their services.

Image Credit: Jakob Rosen / Unsplash



* This article was originally published at Singularity Hub