
AI needs to earn our trust: Where does it begin?


“Over time, we’re seeing technology play a greater and greater role in people’s lives, including, and maybe even especially, in government departments and people’s interactions with government.” – Darren Menachemson, ThinkPlaceX Partner and Chief Ethicist. 

This statement, whilst true, is not without its complexities.  

When ThinkPlaceX Partner and Chief Ethicist Darren Menachemson presented at the Digital New South Wales 2024 Showcase, he delved into the challenge government departments are facing: How can government embrace the advancements technology offers, whilst not losing the public’s trust?  

Understanding how trust is built

Digital NSW is an administrative department of the New South Wales government, promoting positive digital transformation in both public policy and services. Darren chaired the Trust Track, where he dove deeper into designing trusted digital services, experiences and regulatory approaches that resonate with the community. 

Digital transformation applies both to the back-end of government (such as analytics-driven risk assessment and AI-supported processing) and to consumer-facing services (such as digital service experience personalisation or identity management).  

Harnessing public trust, Darren argues, is contingent on a thorough understanding of how trust is built. 

At ThinkPlaceX, our public good innovation always involves deep research and meaningful engagement with the community. 

In 2023, we undertook our first comprehensive survey on how Australians feel about the rise of AI. In 2024, we continued this work, undertaking a second survey and publishing another report.  

Year-on-year, people’s confidence in AI fell significantly. In 2023, 52% of respondents said AI would be more trustworthy than humans in a decade’s time; by 2024, only 37% believed the same. 

So how can government departments help build public trust in AI? For starters, it’s prudent to have concrete, convincing answers when fielding questions and worries about AI.  

 

These are five common questions government departments might encounter:   

1. “Is AI safe to use?” 

Will the user’s identity and privacy be safe and secure?  

2. “Is AI fair and unbiased?” 

Will it have the ability to understand nuance? Will it treat people differently based on demographics or circumstances with no legitimate relevance to the matter at hand? 

3. “Does AI align with my values?” 

Does AI have the capacity to be compassionate? To make decisions that align with the values of our society at its best? Is it environmentally and climate sustainable?  

4. “Will AI be there in my moments of need?” 

Is it adaptable and available in a natural disaster? Or in moments of personal crisis?  

5. “Is AI respectful?” 

Is it inclusive? Does it act to reduce the burden of compliance and of using services? Is it easy to use? 

 

Darren’s concluding remarks should resonate with public servants working in an age where digital, data and AI play an ever-greater role in our lives, and where trust is essential to our ability to use these tools to amplify our impact on the public good:  

“As we well know, trust in digital government is fragile. It can take years to build, but it can be lost – or at least severely diminished – in a single bad week…but, when trust is strong, we can use digital for enormous positive impact on society – more tailored and inclusive services, more targeted and active regulation, better policy decisions.” 

This is the preferred future for public services, and one that only exists when trust is on our side.