Australia’s human rights commissioner, Lorraine Finlay, has warned that artificial intelligence could deepen racism and sexism if it develops without strong regulatory oversight. She says that biases already present in AI training data risk being embedded into automated decision-making, creating discrimination that could become invisible over time.
Finlay explained that “algorithmic bias,” where prejudices are built into the datasets, can be compounded by “automation bias,” where humans over-rely on machine outputs. This combination, she said, makes it more likely that discriminatory patterns will be accepted as neutral decisions.
The issue has sparked political debate within the federal Labor government. Labor senator Michelle Ananda-Rajah, a former doctor and AI researcher, has argued against a dedicated AI act and instead called for Australian data to be “freed” so that models better represent the country’s diversity. She believes this would reduce the influence of overseas-trained systems that may not suit local needs.
Ananda-Rajah also supports compensating content creators whose work is used in AI training. She cited skin cancer screening tools as an example of overseas-trained AI systems showing bias, and argued that including diverse Australian datasets would help address such shortcomings, provided sensitive information is safeguarded.
Media and arts groups, however, have warned of “rampant theft” of intellectual property if large technology firms are allowed unrestricted access to Australian content. They have called for stronger copyright and privacy protections to prevent misuse.
Finlay emphasised that while diverse local data is important, regulation must remain the central focus. She has called for measures such as bias testing, independent auditing, mandatory human oversight, and legislative guardrails to complement the Privacy Act.
There is growing evidence of AI-related discrimination. A recent Australian study found that AI recruitment tools could disadvantage job candidates who speak with an accent or have a disability. International studies have shown similar disparities in healthcare diagnostics, with certain demographic groups receiving less accurate results.
Speaking to The Guardian, Judith Bishop, an AI researcher at La Trobe University, said freeing more Australian data could help, but warned it is only part of the solution. She stressed the importance of ensuring that imported AI models are adapted to Australia’s specific context.
The eSafety commissioner, Julie Inman Grant, has also expressed concern about the lack of transparency around AI training data. She warned that concentrating AI development in a small number of companies could sideline key voices, and urged greater disclosure and the use of diverse, representative datasets.
The federal government is expected to discuss both AI’s economic potential and these regulatory challenges at its upcoming national economic summit.