AI Readiness in the Public Sector: NSW Government Roundtable Key Takeaways

04/07/2026 · 6 min read

Earlier this week, I spent time with a group of public sector leaders discussing where AI is genuinely adding value today and where it remains more aspiration than reality. The conversation was refreshingly practical: It focused less on hype and more on delivering AI initiatives under real constraints.

Working with agencies day to day, I am often involved in early AI conversations that start with enthusiasm and urgency. These are usually driven by executive interest, public expectation, or external pressure to keep pace. What stood out in this discussion was how consistently leaders acknowledged that while the opportunity is real, the foundations matter far more than most strategies account for.

AI Return on Investment Is Rarely About the Tool

One of the strongest themes was ROI. Not as a theoretical concept, but as something that must withstand budget scrutiny, audit, and public accountability. There was broad agreement that licensing an AI capability fails to deliver value on its own. In practice, the outcomes organisations care about are shaped by less visible work such as data governance, security controls, information classification, user training, and clear ownership.

In my role, I often see AI positioned as a solution to enterprise productivity challenges, a pattern that surfaced clearly in this discussion.

What this conversation reinforced is that, in the public sector, AI often acts as a magnifier. It accelerates whatever foundations already exist, whether they are strong or weak.

Several participants reflected on whether current AI interest could be leveraged to finally fund long-standing information management and data hygiene initiatives that have struggled to gain momentum on their own.

Understanding Enterprise Value Is a Practical Place to Start

Rather than beginning with where AI could be applied, a more effective starting point that I see working in practice is to:

  • Identify a specific outcome leadership genuinely cares about, such as reducing turnaround times or improving consistency of advice.
  • Be honest about what currently slows that outcome down, such as manual collation, poor visibility of information, inconsistent records, or access risk.
  • Consider whether AI genuinely helps with these bottlenecks, and if so, what constraints need to be in place for it to be safe and defensible (a lightweight way to capture these answers is sketched below).
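
One lightweight way to make this concrete is to capture each candidate use case as a structured record that a central function can review consistently. The Python sketch below is illustrative only; the field names and example values are my own assumptions, not a prescribed template.

    from dataclasses import dataclass, field

    @dataclass
    class AIUseCaseProposal:
        # Mirrors the three questions above: outcome, bottlenecks, constraints.
        outcome: str                          # the outcome leadership genuinely cares about
        bottlenecks: list[str] = field(default_factory=list)  # what slows that outcome down today
        constraints: list[str] = field(default_factory=list)  # what must hold for AI to be safe and defensible
        owner: str = "unassigned"             # who is accountable for the decision

    # Hypothetical example: a correspondence-heavy advice process.
    proposal = AIUseCaseProposal(
        outcome="Reduce turnaround time for routine advice",
        bottlenecks=["manual collation", "inconsistent records"],
        constraints=["approved data sources only", "human review before release"],
        owner="Director, Information Management",
    )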

Personal Productivity and Enterprise Value Are Not the Same Thing

Another distinction that resonated strongly was the gap between individual productivity gains and enterprise-grade use cases. Personal productivity improvements are often felt immediately: People save time and drafts appear faster. However, these benefits are difficult to quantify and even harder to govern at scale.

Enterprise use cases, by contrast, require a much clearer definition: This includes the problem being solved, the expected end state, and how value will be measured — whether through financial impact, service delivery outcomes, or changes to workload and capacity.

Across agencies, it is clear that the most sustainable momentum comes from these types of use cases. They do not replace judgement, but they reduce friction around it.

“Human in the Loop” Oversight Is a Design Choice, Not a Compromise

Concerns about accuracy, context, and trust come up repeatedly in my conversations about AI in the public sector, and they came through clearly here. In regulated environments, context often matters as much as content, and that context cannot always be inferred.

Several leaders spoke about deliberately designing AI use cases with human oversight built in from the outset. This was not treated as a fallback, but as the intended operating model. It included limiting data sources early, validating outputs, and being explicit about where AI supports work and where accountability remains with people. This reflects a clear priority for accountability — ensuring AI supports decisions without obscuring responsibility.

In practice, this approach does not slow progress; it is what makes scale possible without losing organisational confidence.
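
To illustrate what oversight by design can look like when a use case is actually built, the Python sketch below gates every AI-generated draft behind an explicit human approval step and refuses to run against data sources outside an allow-list. The function names and the generate_draft placeholder are assumptions for illustration, not any specific product's API.

    APPROVED_SOURCES = {"records_system", "policy_library"}  # data sources limited from the outset

    def generate_draft(prompt: str, sources: set[str]) -> str:
        # Placeholder: call whatever AI service the agency has actually approved.
        return f"[draft from {sorted(sources)}] {prompt}"

    def produce_advice(prompt: str, sources: set[str], reviewer_approves) -> str | None:
        # Refuse anything outside the approved scope before the model is invoked.
        if not sources <= APPROVED_SOURCES:
            raise ValueError(f"Out-of-scope sources: {sources - APPROVED_SOURCES}")
        draft = generate_draft(prompt, sources)
        # The human decision is the release mechanism, not an afterthought:
        # nothing leaves this function without an explicit, attributable approval.
        return draft if reviewer_approves(draft) else None

In a pilot, reviewer_approves could be as simple as a console prompt; at scale it becomes a recorded approval workflow. Either way, accountability stays with a named person rather than with the model.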

Structure and Ownership Matter More Than Enthusiasm

One of the more candid parts of the discussion focused on organisational structure. A recurring theme was the need for a central function, whether a taskforce or advisory group, that can prioritise use cases, enforce guardrails, and align initiatives with policy and assurance frameworks. This aligns with recent New South Wales (NSW) government guidance for agencies to formalise accountability for AI oversight through designated leadership roles.

Public sector AI initiatives often stall not because of resistance, but because responsibility is unclear. It must be clear who approves use cases, who sets boundaries, and who makes the call when risk and value are in tension. Agencies that appear to move faster tend to settle these questions early, rather than relying on informal alignment.

Guardrails Enable Safe Progress

Data quality, oversharing, fragmented data sources, and security risk remain top concerns. This is particularly critical given surveys suggesting that 75% of organisations experienced at least one AI-related breach in the past year. In the conversation, there was shared recognition that weak guardrails do not just reduce AI value; they actively increase operational and reputational risk.

In my day-to-day work across agencies, there is clear excitement about what AI could enable, alongside concerns about unintended consequences. The discussion reinforced something I strongly agree with: Guardrails are not an obstacle to progress. They are what allow organisations to move forward with confidence.

For organisations early in their journey, instead of aiming for perfection, it often works better to start with minimum viable guardrails. These include the following, sketched as a simple policy after the list:

  • Clear boundaries on what information is in scope.
  • Basic access hygiene and reduction of oversharing.
  • Pragmatic classification that people can realistically apply.
  • The ability to review and explain outputs.
  • Uplift in AI literacy so staff understand both the capabilities and limitations.
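
For illustration only, here is one way those minimum viable guardrails might be written down as a small, checkable policy in Python. Every label, audience, and rule here is an assumption chosen to show the shape of the idea, not a recommended standard.

    # Hypothetical minimum-viable-guardrail policy, expressed as checkable rules.
    GUARDRAILS = {
        "in_scope_information": ["published policy", "internal guidance"],  # clear boundaries on scope
        "blocked_labels": ["PROTECTED", "CABINET"],    # pragmatic classification people can apply
        "allowed_audiences": ["individual", "team"],   # basic access hygiene: no org-wide defaults
        "require_human_review": True,                  # outputs must be reviewable and explainable
        "require_ai_literacy_training": True,          # staff understand capabilities and limits
    }

    def request_allowed(information_type: str, label: str, audience: str) -> bool:
        # A use case proceeds only inside the declared boundaries.
        return (
            information_type in GUARDRAILS["in_scope_information"]
            and label not in GUARDRAILS["blocked_labels"]
            and audience in GUARDRAILS["allowed_audiences"]
        )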

None of this is glamorous, but it is what turns experimentation into something leaders are willing to stand behind.

Key Considerations for Public Sector Leaders on Responsible AI Use

If you are shaping direction rather than selecting tools, three low-regret moves stood out clearly from this conversation:

  1. Treat AI as a data and information management programme, not a technology rollout.
  2. Anchor early use cases to outcomes leadership already cares about.
  3. Establish clear ownership and decision rights before scaling.

Working for a technology company, I am often involved in the space between strategy and implementation. The real questions in this space are rarely about features; they are about risk, assurance, and operating models that stand up to scrutiny. It was genuinely valuable to hear these same themes articulated so clearly from within the public sector itself.

If you are early in your journey and want to compare AI approaches, sense check foundations, or better understand the important but often overlooked prerequisite steps, we welcome starting that discussion with you.


Laurie Yutuc

Laurie Yutuc is a public sector lead at AvePoint ANZ, partnering closely with agencies across NSW, QLD, ACT, and the New Zealand Government. He works alongside government and education leaders to build secure, governed, and compliant data foundations that enable modern collaboration and the safe adoption of AI.

Laurie works with organisations to address the data challenges facing the public sector today: balancing productivity with security, reducing enterprise risk, and demonstrating regulatory compliance while simplifying collaboration for end users. With a strong focus on delivering measurable business outcomes, he brings a practical, consultative perspective to modern data protection, information governance, and operational resilience, helping organisations move forward with confidence in their AI strategy.