
AI's Bias Against Women Isn't Just a Technology Problem

The problem really stems from the fact that genAI just takes human bias and automates it, said Wipro's Ivana Bartoletti.

This image is AI-generated (Source: deepai.org)

Time and time again, we've seen generative artificial intelligence models supply users with answers that skew in a particular direction. In fact, earlier this year, UNESCO flagged that large language models like Llama 2 and GPT-2 have a predisposition to present strong bias against women.

"The problem is not bias; the problem is when that bias becomes discrimination," Wipro's Global Chief Privacy and AI Governance Officer, Ivana Bartoletti, told NDTV Profit in a conversation.

Chinese tech company Baidu's response to ChatGPT, Ernie Bot, also has a problem with gender stereotyping roles. For a nurse, it showed a woman with a ponytail and stethoscope around her neck, and for a university professor, it visualised an old man, according to a report from the South China Morning Post.

One can't fault genAI models for developing bias, especially considering that they're trained on data that humans create. By and large, the bias really comes down to the human factor.

So, where do these biases come from?

Bartoletti explains it simply: "Garbage in, garbage out."

It's why AI researchers and companies developing the technology have to be careful in how they choose and trim their datasets. If the data hasn't been carefully selected, any genAI model is bound to come up with either biased responses or hallucinations (incorrect or misleading responses presented as facts by a genAI model).

In a country like India, getting the right kind of data has often proved to be a challenge, something that both the private and public sectors have been trying to find a solution to.

Since algorithms are being given both allocative and predictive functions, Bartoletti explains that the problem really stems from the fact that genAI just takes human bias and automates it.

It's the reason that advertisements for higher-paying jobs are shown more often to men than women, or why men get higher credit limits. It's because historically, men have been paid more, she said.


It's Not All Bad News

Luckily for us, genAI is a two-way street. It's only biased because we as a society are, right? So, in theory, we just need to fix ourselves. Do that, and the datasets become cleaner, genAI becomes less biased, and everything is hunky-dory. Easier said than done.

"We can use AI to identify potential correlations and causations between different things in society to identify sources of biases that we were not aware of and that we can't really see with human eyes," said Bartoletti, adding that there are technical solutions to the bias problem.

Many companies developing genAI are constantly finding ways to reduce instances of bias, either by tweaking their models or by refining their datasets.

Wipro's own AI development programmes have what Bartoletti calls a "fairness analyser," which she explains as something that helps them understand where bias and discrimination may come from. As a result, the company moved to tweak its models to minimise such instances.
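Wipro's fairness analyser is proprietary, so the sketch below is only an illustration of the general idea: one common automated fairness check measures "demographic parity", the gap in a model's positive-outcome rate across groups. The function name and toy data here are hypothetical, not Wipro's actual tooling.

```python
# Illustrative sketch only: one simple fairness check is the demographic
# parity gap -- the largest difference in positive-prediction rates between
# any two groups. A gap of 0.0 means the model treats groups identically
# on this metric. All names and data below are hypothetical.

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rates
    between any two groups (0.0 means perfectly balanced)."""
    rates = {}
    for pred, group in zip(predictions, groups):
        positives, total = rates.get(group, (0, 0))
        rates[group] = (positives + (1 if pred else 0), total + 1)
    shares = [positives / total for positives, total in rates.values()]
    return max(shares) - min(shares)

# Toy example: loan approvals (1 = approved) for two demographic groups.
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")  # group A: 80% approved, group B: 20%
```

A real analyser would track several such metrics at once (equalised odds, calibration, and so on) and flag features whose removal shrinks the gap, which is roughly the "understand where bias may come from" step Bartoletti describes.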

Privacy In An AI-Driven World

GenAI models are built on vast amounts of data, most of which is user-generated. What isn't clear, however, is how much of users' data is being gathered. By now, it's public knowledge that most users' online behaviour is harvested, both for genAI training and for advertising. Has any semblance of privacy online gone out the window?

"If we want to harness the value and benefit of AI, we have to do it in a way that respects human dignity," said Wipro's Bartoletti. She sees privacy as more of a collective value, such as people's online activity or their spending amounts and habits.

It's an interesting perspective, and one that the Indian government seems to share, at least if the Digital Personal Data Protection Act, 2023 is anything to go by. However, various rights groups have criticised the Act, since it neither addresses information scraping nor really protects data that users put online consensually.

"Companies need to really upskill to understand how we can leverage privacy enhancing technologies," said Bartoletti, pointing to homomorphic encryption, which is a form of encryption which enables companies to use user-generated data, without identifying them.

Companies that are able to combine AI innovation with safeguarding people's rights will have a competitive edge, according to her. "Privacy is a value that is universal."
