AI and the Future of Governance: Transformation, diversification and accountability

15 Dec 2023

Imagine a world where all of humanity’s collective knowledge is at your fingertips, accessible in the context you need. Artificial Intelligence (AI) is well on its way to revolutionising the services the public purpose sector delivers to the Australian public. Continuing the thought-provoking dialogue from IPAA Victoria’s AI and the Future of Governance in-person event in October 2023, several eminent speakers weighed in on burning questions that are front of mind for many professionals in the sector.

IPAA Victoria asked our eminent speakers about the impact of AI on transformation, diversification and accountability: Professor Eduard Hovy, Executive Director, Melbourne Connect, University of Melbourne; Peter Williams, Chief Edge Officer, Centre for the Edge, Deloitte Australia; Kristy Hornby, Chair, Risk Community of Practice (CoP), IPAA Victoria, and Associate Director, Grosvenor; Benjamin Yong, Senior Software Engineer, Optimal Reality, Deloitte Australia; and Jackson Calvert-Lane, Engineering Manager, Optimal Reality, Deloitte Australia.

In your opinion, how will AI transform the public service in the next five years? 

Professor Hovy: I think we will begin to see some change, but because of the government’s need to be very sure about privacy and accuracy, we will not see any major shift. However, I expect there will be quite a lot of small adaptations of existing processes across diverse tasks and areas. For example, the tedious task of taking and transcribing minutes for meetings can be made much easier by recording the voices, converting the recording to text, pasting the text (in blocks) into a large language model (LLM) and getting a summary. After a bit of editing, the minutes are complete, much more quickly than by the traditional manual method. As another example, people who face the public to answer questions can be greatly assisted by a suitably trained LLM, because it can very quickly provide a readable summary capturing all the relevant data, so they don’t have to search for it or ask colleagues. It’s impossible to predict all these kinds of uses today, but growing experience from industry suggests that wherever text is used or software is created, LLMs might be brought in to speed up the process. Exactly how depends on the creativity of the individual doing the work. And in the longer term, I think we will increasingly see special-purpose LLMs that are built specifically for government use and that are private and secure.
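As a concrete illustration of the minutes workflow Professor Hovy describes, here is a minimal Python sketch, assuming the OpenAI client library and a transcript that already exists as text; the model name and chunk size are illustrative assumptions, not part of his remarks.

```python
# Minimal sketch: chunk a meeting transcript and summarise it with an LLM.
# Assumes the OpenAI Python client; model name and chunk size are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def chunk_text(text: str, max_chars: int = 8000) -> list[str]:
    """Split the transcript into blocks small enough for one prompt."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]


def summarise_minutes(transcript: str) -> str:
    summaries = []
    for block in chunk_text(transcript):
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Summarise this meeting transcript block as draft minutes."},
                {"role": "user", "content": block},
            ],
        )
        summaries.append(response.choices[0].message.content)
    # A human still edits the combined draft before the minutes are final.
    return "\n\n".join(summaries)
```

The human editing step at the end mirrors the speaker’s point: the LLM drafts, a person finishes.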

Peter: While AI, including Generative AI, provides tremendous opportunity, technology in and of itself won’t transform the public service in the next five years. Transformation will only occur when there is a willingness to look at opportunities and challenges holistically and a willingness to do things differently. To truly transform the public service, the sector needs to look at the biggest opportunities to use a range of AI, data, technologies and devices, working across agencies and layers of government and with those outside the public sector, to deliver the best outcomes. I have always said: “Don’t think about singular technologies, think about combining technologies in the context of a problem to get a result.” We are already seeing some great examples of use, such as Victoria’s Digital Twin Project and the Smarter Roads initiative, which are pockets of transformation occurring now.

Kristy: I love the rule of three, and for me, regarding AI, the three are having the right capability, having the human at the centre, and having good governance. In brief, the public service will be disrupted by AI: one could argue the need for policy research is abolished, the need for human labour to synthesise the results of consultation processes is abolished, and the need for manual data collection and entry is abolished. (I’m being dramatic, of course, but there will be many impacts.) It’s important the public sector starts planning now for the tasks that will be affected, as that will shape the capabilities required and the way in which the public sector ensures it has the right capabilities for the future.

In terms of having the human at the centre, a common tenet of AI principles is to have a ‘human in the loop’. I argue the public sector needs to go further than just keeping a human in the loop, which connotes a compliance-focused approach, and really put humans at the centre of AI tooling design, training, deployment, updates and service delivery, including considering the impact on the humans receiving the service or program. While in a different context (automated decision-making, not AI), the Robodebt program showed in detail what can happen when humans are not at the centre, and we never want this to be repeated in any public sector policy.

Lastly, good governance: you need to govern AI-enabled services and programs as much as, if not more than, ‘normal’ services and programs. While the technology is so new and the use cases continue to evolve, investing more rather than less in fit-for-purpose, flexible governance will help the public sector be prepared for the transformation ahead.

Benjamin and Jackson: In the next five years, we anticipate AI playing a pivotal role in revolutionising public service. The integration of AI technologies, such as machine learning, digital twins, modelling, and automation, has the potential to streamline administrative processes, optimise decision-making, and enhance service delivery.

In our work at Deloitte with Optimal Reality, we have harnessed data from devices and sensors in the built transport environment to visualise and model traffic metrics, network patterns, and incidents. Additionally, we’ve utilised Computer Vision AI technology to autonomously analyse video feeds and proactively detect traffic incidents and abnormal network conditions. While keeping the human in the loop, we can leverage AI technology to transform network management from a reactive approach to a proactive one.

However, it is crucial to approach these advancements with a strong emphasis on ethics, transparency and accountability to ensure that the benefits are widespread and the technology is deployed responsibly. For example, for privacy reasons we never store our video feed data; instead, we capture only inference metadata, using AWS Panorama edge computing devices.
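The metadata-only pattern Benjamin and Jackson describe can be sketched as follows. This is a hedged illustration only: the detector is a hypothetical stand-in rather than Optimal Reality’s actual pipeline, and the frame loop uses OpenCV rather than the AWS Panorama SDK for brevity.

```python
# Sketch of privacy-preserving edge inference: frames are analysed in memory
# and discarded; only inference metadata (never imagery) is persisted.
import json
import time

import cv2  # OpenCV, standing in here for an edge device's video interface


def detect_incidents(frame) -> list[dict]:
    """Hypothetical stand-in for a trained computer-vision model.
    A real implementation would return detections such as
    [{"type": "stopped_vehicle", "confidence": 0.91}]."""
    return []  # replace with real model inference


def run(stream_url: str, sink) -> None:
    capture = cv2.VideoCapture(stream_url)
    while capture.isOpened():
        ok, frame = capture.read()
        if not ok:
            break
        for detection in detect_incidents(frame):
            # Persist only metadata; the frame itself is never written out.
            sink.write(json.dumps({
                "timestamp": time.time(),
                "type": detection["type"],
                "confidence": detection["confidence"],
            }) + "\n")
        # The frame goes out of scope here and is garbage-collected.
    capture.release()
```

The design choice is that privacy is enforced structurally: there is simply no code path that writes pixels to storage.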

What effect will AI have on marginalised and/or under-resourced communities? For example, Indigenous groups, CALD communities, people with disability and people from low SES backgrounds.

Professor Hovy: The nice thing about LLMs is that they require no training to use, and many of them are free. So anyone can access them, start typing in English and get answers in English. Furthermore, LLMs are patient, non-judgmental, respectful and helpful, willing to provide information over and over, in various forms, as desired. These characteristics make them very attractive to marginalised and under-resourced people (as well as to schoolchildren and other students). One problem is that they do tend to contain bias, and they also sometimes hallucinate things that are not true; both factors might be more problematic for vulnerable users. A certain amount of education is necessary, along the lines of: please do use it, but always be aware that it is only an AI and might tell you something that is untrue, or hurtful!

Peter: While no technology is a panacea for marginalised or under-resourced communities, I believe there is a wide range of opportunities afforded by technology. The most profound change brought about by applications such as ChatGPT, Bard or Claude is that when we seek information, we can access it in the context we are seeking it, in words we understand and in the language that we speak. Current search engines direct us to relevant sources, but we can only access the information in the form it is published. I am on the board of LiteHaus International, an education charity that establishes computer labs in remote communities in Australia and in schools in Papua New Guinea, the Philippines and Samoa. We are working on how we can use AI to translate and localise curriculum, develop adaptive learning models and build the capability of teachers. We can now do things that were previously too difficult and time-consuming, particularly with generative AI.

Kristy: I think it will widen the gap between the haves and the have-nots overall. While the technology is very accessible, which means knowledge is more democratised than ever, I think at-risk cohorts are likely to be among the first to have their jobs impacted, and they may find it difficult to transition to jobs that require higher skills and/or more experience, because those things tend to be a luxury when you’re trying to put food on the table. I do have hopes that it can make services and education far more accessible, which may help to bridge the gap, but my mind keeps returning to those most impacted by similar industry and technology shifts in recent history, such as those affected by the decline in Australian manufacturing, the car industry and the Hazelwood mine closure, not all of whom were able to make the jump to new employment and lifestyles. From this, I’d encourage everyone reading to become familiar with the tools and technologies available and to start thinking about how you can augment your job and life with AI, to be better prepared for the changes to come.

Benjamin and Jackson: The impact of AI on marginalised and under-resourced communities is a critical concern. While AI has the potential to address societal challenges and bridge gaps, there is also the risk of perpetuating existing inequalities. It is essential to adopt an inclusive approach in the development and deployment of AI technologies. Collaboration among technologists, policymakers, and community representatives is necessary to address the specific needs and concerns of diverse groups.

Analysing bias in AI/ML training data is a crucial aspect of this inclusive approach. Biases in the data used to train AI models can inadvertently reinforce existing disparities. Therefore, it is imperative to scrutinise training datasets for any biases that may disproportionately impact marginalised communities. This calls for transparent practices in dataset selection, a thorough examination of historical biases, and ongoing evaluation to mitigate and rectify any identified biases throughout the AI development lifecycle. By actively addressing bias in training data, we can contribute to the development of AI systems that are more equitable and sensitive to the unique challenges faced by marginalised communities.
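As a concrete illustration of what scrutinising a training dataset can mean in practice, here is a minimal pandas sketch that compares each group’s share of the data and its positive-label rate across a protected attribute. The column names and the CSV in the usage comment are hypothetical, not taken from the speakers’ work.

```python
# Minimal bias-audit sketch: compare group representation and label rates
# across a protected attribute in a training set. Column names are hypothetical.
import pandas as pd


def audit_group_bias(df: pd.DataFrame, group_col: str, label_col: str) -> pd.DataFrame:
    """Report each group's share of the data and its positive-label rate."""
    report = df.groupby(group_col)[label_col].agg(
        count="size",
        positive_rate="mean",  # assumes a binary 0/1 label
    )
    report["share_of_data"] = report["count"] / len(df)
    # Large gaps in positive_rate or share_of_data flag candidates for
    # re-sampling, re-weighting, or closer qualitative review.
    return report.sort_values("positive_rate")


# Example usage with hypothetical columns:
# training = pd.read_csv("training_data.csv")
# print(audit_group_bias(training, group_col="language_group", label_col="approved"))
```

A report like this is only the start of the evaluation loop the paragraph above calls for; disparities it surfaces still need human interpretation against historical context.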

What are the key policy issues that need to be addressed? How can the sector ensure accountability when using these tools?

Professor Hovy: The Australian government is actively working on formulating policies to regulate AI. Regulations about maintaining privacy, avoiding bias, being truthful and other similar ethical concerns already exist for humans, and they already cover AI or humans+AI. But regarding AI alone, a major concern is the unpredictability of a dynamically evolving, adaptable system: it is very difficult to make rules about something you cannot precisely describe. One good approach is to make sure that every AI builder follows the best-practice standards and guidelines that already exist in international standards bodies, and that the assembly of a new AI out of prior components is governed by the need to ensure that all those prior components were themselves created according to the relevant guidelines and standards. While such a ‘supply chain’ of accountability is no surefire guarantee of safety, it goes some way toward ethical and responsible AI without completely stifling the innovation and experimental freedom that AI builders and users need to continue bringing new benefits to society.

Peter: While AI technology holds great promise, it is still in its early stages, so I recommend a gradual approach. Perhaps the first-order issue is to ensure that information generated by AI is accurate, reliable, usable and, as far as possible, free from bias. The public sector should consider the information sources used and the context in which they are used, involving people with the relevant knowledge to ensure accuracy. Building capability by adopting AI internally before incorporating it into public-facing services or channels, together with training people internally, helps develop an understanding of the issues and opportunities as well as ensuring accountability.

I always recommend that organisations accept that their people are using AI, so if you haven’t provided guidelines, that is the first thing that should be done. Banning use is futile, as people will have their own devices and use them anyway. Recognising that using AI to review your grammar or to help clarify a topic is acceptable, but that inputting sensitive and personal data into a publicly available AI tool is not, is the type of clarification that needs to be stated upfront.

Developing greater understanding across management teams can be facilitated by having cross-functional teams work together on policies as well as collaborate on initial prototypes and pilots. Working across functions such as customer service, communications, IT, HR, legal, finance, risk and other relevant teams ensures all aspects are considered before deploying AI. Perhaps the easiest place to start is for organisations to establish their own private and secure instance of an AI platform, to ensure data doesn’t leak, enable opportunities to experiment and learn, and implement proper training and processes so that tools are used effectively. While the technology is new, adopting a pragmatic approach to working with it across an organisation is the key.

Kristy: There are so many! If I were to pick one that’s more of a thought provocation: what does it mean for the public sector that AI can generate art? Books are being written entirely by algorithms, and androids are painting electric sheep, with the advent of this technology. What does this mean for arts policy, funding and sector development? Does it pivot to human-generated art only? Why? Does the sector continue to exist at all? Should we fund investment attraction for big-screen blockbusters to film in Australia if the content can all be artificially produced? I don’t have a view on these questions myself, but I love our creative industries and have a long and proud association with Writers Vic, so I think this is a really fascinating area of policy disruption that not many people turn their minds to.

Benjamin and Jackson: From a software engineering perspective, there is a critical need for comprehensive policies to guide the development and deployment of AI technologies. The key policy issues that need to be addressed include ensuring ethical AI practices, transparency in technology deployment, and safeguarding against biases in training data. Collaborative efforts, involving technologists, policymakers and community representatives, are essential not only to shape these policies but also to ensure that they reflect the specific needs of our diverse communities. Privacy and the secure handling of data are always of utmost importance, especially for the data used to train and tune AI models. Organisations should establish private instances to train and tune AI models, keeping sensitive data secure.

Moreover, accountability in AI implementation is a vital ongoing process. Policies should outline mechanisms for ongoing software and data evaluation, stakeholder engagement, and addressing ethical concerns as they arise. By incorporating these considerations into policy frameworks, we can create a regulatory environment that fosters responsible AI development and deployment.

Speakers

Professor Eduard Hovy

Executive Director, Melbourne Connect, University of Melbourne

Kristy Hornby

Chair, Risk Community of Practice, IPAA Victoria, and Associate Director, Grosvenor

Peter Williams

Chief Edge Officer, Centre for the Edge, Deloitte Australia

Jackson Calvert-Lane

Engineering Manager, Optimal Reality, Deloitte Australia

Benjamin Yong

Senior Software Engineer, Optimal Reality, Deloitte Australia
