
AI, accountability and that bit in between: Why data governance is the gap many organisations overlook

Notitia Managing Director Alex Avery explores the relationship between AI, data governance and organisational accountability.

February 5, 2026




This article is written for executives, technology leaders, and policymakers navigating the shift from AI experimentation to scalable, trusted deployment in Australian organisations.

With the Australian Government making moves to define national AI policy and engage more closely with vendors, interest in AI governance is accelerating.

That’s a good thing.

Key takeaways

• AI initiatives stall not because of model capability, but because data ownership, lineage and accountability aren’t defined early.

• Organisations that treat data governance as an enabler deploy AI faster, with fewer incidents and clearer accountability.

• Clear ownership, documented data lineage and ongoing monitoring are the minimum foundations for scaling AI beyond pilots.

• The fastest wins come from clarifying responsibility and controls, not buying new tools.

How do we do AI “well” in Australia?

For Australian organisations already experimenting with generative tools, predictive models, or automation platforms, the conversation is starting to shift from “what’s possible?” to “how do we do this well?”

We’re seeing this shift firsthand in our work at Notitia (where I advise organisations on data governance and AI readiness), with clients now looking to get more value from earlier pilots.

We’ve entered what Gartner calls the Trough of Disillusionment (the point when early experimentation meets operational reality). Initial experiments were fast-tracked, early enthusiasm was high, and in many cases, budgets were made available to test new ideas. But many of those projects are now slowing down, or not delivering what was expected.

That doesn’t mean AI can’t deliver on its promise. In most cases, it’s a sign that governance needs to catch up.

In this context, data governance (or data stewardship) is about having clear ownership, traceability, controls, and accountability for how data is created, accessed, and used.

AI projects don’t fail because of AI

AI is ultimately a layer built on top of your data. When we work with clients to diagnose underperforming AI pilots, the underlying cause is almost always the same: inconsistent or incomplete data, unclear ownership, and a lack of system-wide data management controls.

This isn’t about fault; it’s about timing. Governance isn’t usually top of mind when projects are moving quickly. But it becomes essential once you want to scale beyond a pilot and into business-critical use.

Interestingly, this is a shift we’re starting to see more of. In the past, data governance was something we at Notitia had to actively sell to clients. Today, clients are coming to us with questions like:

• “How do we trace where this data came from?”

• “Who’s responsible for validating these results?”

• “What happens when we connect this tool to that system?”

These are good questions. They show that the conversation is maturing, and that organisations are ready to think beyond experimentation. They point to the same underlying need: clear data ownership, traceability across systems, and agreed accountability before tools are scaled.


The gaps aren’t just technical, they’re structural

As we speak with teams across sectors, one thing is clear: the lines of responsibility aren’t always well defined. The shift to cloud, the rise of SaaS, and the pace of AI development mean that implementation can happen across multiple vendors, platforms, and teams.

When outcomes aren’t as expected, it can be hard to pinpoint where things went off course. Was the tool unsuitable? Was the data incomplete? Was the model misaligned with the original objective?

Often, it’s a combination of all three. Without strong governance, there’s no shared understanding of who owns what—or how success is measured. That’s why a clear framework from the outset matters, even in early-stage pilots.

In practice, effective AI governance usually includes:

  • Defined data owners
  • Documented data lineage
  • Clear approval and escalation paths
  • Ongoing monitoring of model performance
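
To make these foundations tangible, here is a minimal sketch, in Python, of how they might be captured as a machine-readable register rather than a slide deck. Everything in it (dataset names, owners, thresholds) is a hypothetical example, not a prescribed schema or a Notitia standard.

```python
# A minimal, illustrative governance register. All names here
# (datasets, owners, thresholds) are hypothetical examples,
# not a prescribed schema.
from dataclasses import dataclass, field


@dataclass
class LineageStep:
    source: str     # the system or dataset the data came from
    transform: str  # what was done to it on the way in


@dataclass
class GovernedDataset:
    name: str
    owner: str                        # a named person, not a team alias
    lineage: list[LineageStep] = field(default_factory=list)
    escalation_path: list[str] = field(default_factory=list)
    min_accuracy: float = 0.0         # agreed model-performance floor


# Example entry: forecasting inputs with documented lineage,
# a named owner, and an agreed escalation path.
sales_forecast_inputs = GovernedDataset(
    name="sales_forecast_inputs",
    owner="finance.data.lead@example.com",
    lineage=[
        LineageStep("crm.opportunities", "deduplicated, currency normalised"),
        LineageStep("erp.invoices", "joined on customer_id, monthly rollup"),
    ],
    escalation_path=["finance.data.lead@example.com", "cdo@example.com"],
    min_accuracy=0.85,
)
```

Even a register this small forces the questions that matter: who is the named owner, where did the data come from, and what happens when quality slips?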

The Office of the Australian Information Commissioner (OAIC) has highlighted practical examples of governance supporting AI adoption.

In 2025, it published a case study based on preliminary enquiries into I-MED, examining how the organisation approached privacy governance when developing AI models.

While not an endorsement, the case demonstrated how planning for privacy, clarifying responsibility, and embedding safeguards early can enable innovation while meeting legal and community expectations.

Cloud, SaaS, and the rise of every team owning its own data processes

In many organisations, digital tools are no longer exclusively procured by IT.

Marketing might roll out a campaign platform with AI features. A finance lead might adopt forecasting software that integrates with internal systems. These decisions often start with a small trial, and then expand quickly.

This kind of decentralised innovation isn’t a risk in itself; it can be a strength. But when tools are integrated without a shared strategy around data, access, and security, issues can emerge further down the line.

Security and governance shouldn’t be barriers to innovation. They should be enablers that are built into the process in a way that’s clear, consistent, and not overly complicated.

Where policy can (and can’t) help in shaping AI accountability

There’s an ongoing discussion about the role of government in setting clearer AI guardrails, especially when it comes to accountability across infrastructure providers, cloud platforms, and software vendors.

In Australia, this is unfolding alongside emerging national AI policy, privacy reform, and growing expectations around transparency and accountability.

It’s a conversation worth having, and we welcome more guidance. But realistically, most regulatory change will only follow major incidents: data leaks, service outages, or reputational damage that attracts widespread attention.

Recent Australian evidence reinforces why governance matters for AI adoption. According to consumer research cited by the Office of the Australian Information Commissioner, 83% of Australians believe companies should seek user consent before using their data to train AI models, and 84% want more control and choice over how their personal information is collected and used.

Without trust and clear safeguards, productivity gains from AI are unlikely to be realised at scale.

Many of our clients are choosing to lead the way themselves. They’re building internal policies, investing in infrastructure, and embedding governance practices that support long-term, sustainable innovation.

Sovereignty and smarter infrastructure investments

One area where government action can make a real difference is infrastructure.

While software vendors and platforms will continue to evolve, the physical infrastructure that supports AI (from data centres to connectivity) creates jobs, builds resilience, keeps value onshore, and supports data sovereignty.

Projects like the Tasmanian Government’s investment in a new data centre in Launceston are good examples of future-focused thinking. They show how digital strategy can also support local capability and economic development. And they ensure that we’re not entirely dependent on overseas platforms to power Australian innovation.

The technology may change. The principles don’t.

The AI landscape moves fast. Technology, tools and features evolve. What stays constant is the need for trust, clarity, and ownership, especially when decision-making is shaped by data.

You don’t need to slow down innovation to put governance in place. But you do need to know what you’re building on.

Five governance checks for organisations using AI

If your organisation is working with AI, here are five things worth asking:

1. Do we trust the data powering our tools?

2. Do we have the right controls in place to manage access and usage?

3. Are we involving security and IT from the start?

4. Have we defined what success looks like, and who owns each part?

5. If something doesn’t go to plan, do we have the visibility to respond quickly and effectively?

These questions aren’t just technical. They’re strategic. And they’re becoming critical as AI becomes embedded across more parts of the business.

As AI becomes embedded across organisations, data governance is no longer a supporting function; it is the foundation for trust, scale, and long-term value.

FAQ: AI, data governance and accountability

Why do AI pilots stall or fail to scale?

Most AI pilots don’t stall because of model performance or tooling. They stall because data ownership, lineage and accountability aren’t defined early. As pilots move beyond experimentation, gaps in responsibility, trust and controls become visible, and progress slows.

Is AI underperforming, or is something else going on?

In most cases, AI is doing exactly what it’s designed to do. Underperformance usually points to upstream issues: inconsistent or incomplete data, unclear ownership, or weak governance across systems. AI is a layer built on top of data; when the foundation is unstable, outcomes suffer.

Who should be responsible for AI outputs?

Responsibility should be explicitly defined, not implied. Effective governance assigns clear data owners, establishes approval and escalation paths, and clarifies who is accountable for validating outputs, especially when AI-driven insights inform decisions.

How do organisations trace where AI data comes from?

By documenting data lineage: where data originates, how it moves between systems, and how it’s transformed along the way. Without lineage, it’s difficult to verify results, resolve issues quickly, or build trust in AI-supported decisions.
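
As a rough illustration of the idea (not a specific product or toolchain), lineage can start as an append-only log of every hop a dataset makes. The systems, field names, and file path below are assumptions made for the sketch.

```python
# Illustrative only: an append-only lineage log recording each hop a
# dataset makes between systems. Systems and fields are assumed examples.
import json
from datetime import datetime, timezone


def record_lineage(log_path: str, dataset: str, source: str,
                   destination: str, transform: str) -> None:
    """Append one lineage event so each hop is traceable later."""
    event = {
        "dataset": dataset,
        "source": source,
        "destination": destination,
        "transform": transform,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")


# Example: documenting one hop from a CRM export into a reporting store.
record_lineage(
    "lineage.jsonl",
    dataset="customer_master",
    source="crm.export.daily",
    destination="warehouse.reporting.customers",
    transform="emails lowercased, duplicates merged on customer_id",
)
```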

What happens when AI tools are connected across systems?

When tools are integrated without a shared strategy around data access, security and governance, risks compound. Issues often emerge later, when usage scales, data is shared more widely, or decisions become business-critical. Governance reduces this risk by creating visibility and accountability from the outset.

Does governance slow down innovation?

Not when it’s done well. Poorly implemented governance slows innovation. Clear, lightweight governance enables faster deployment by reducing uncertainty, avoiding rework, and giving teams confidence in the data and systems they rely on.

What governance foundations matter most early on?

At a minimum, organisations should establish:

  • Clear data ownership
  • Documented data lineage
  • Defined approval and escalation paths
  • Ongoing monitoring of model performance

These fundamentals support scale without adding unnecessary complexity.
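
Ongoing monitoring is the foundation most often skipped, so here is a deliberately simple sketch of what it can mean in practice: compare recent model performance against an agreed floor and escalate to the named owner when it slips. The metric, threshold, and owner shown are illustrative assumptions.

```python
# Illustrative monitoring check: compare recent model accuracy against
# an agreed floor and flag for escalation. The threshold and the notify
# step are assumptions for the sketch, not a specific product's API.
def check_model_performance(recent_accuracy: float,
                            agreed_floor: float,
                            owner: str) -> bool:
    """Return True if performance is acceptable; otherwise escalate."""
    if recent_accuracy >= agreed_floor:
        return True
    # In practice this would page or email the named owner; printing
    # here keeps the escalation path visible in the sketch.
    print(f"ESCALATE to {owner}: accuracy {recent_accuracy:.2f} "
          f"below agreed floor {agreed_floor:.2f}")
    return False


check_model_performance(0.78, agreed_floor=0.85, owner="finance.data.lead")
```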

What should executives ask before scaling AI?

Before moving beyond pilot stage, it’s worth asking:

  • Do we trust the data powering our tools?
  • Are access and usage controls clearly defined?
  • Are security and IT involved early?
  • Have we defined success and who owns each part?
  • If something goes wrong, do we have visibility to respond quickly?

Is this a technical issue or a leadership issue?

It’s both, but it’s primarily a leadership and accountability issue. Technology decisions shape how responsibility is shared across teams, vendors and platforms. Governance provides the structure needed to align those decisions with organisational goals.

About Alex Avery, Notitia Managing Director + Founder


Alex Avery is the Managing Director and Founder of Notitia, an Australian data and digital transformation consultancy working across government, healthcare, community and private sectors.

A recognised voice on the intersection of data, technology, AI and public value, Alex focuses on how organisations can use trustworthy data to support better decisions and real-world outcomes.

His work spans data strategy, analytics, human-centred design and digital delivery, with a strong emphasis on practical, implementable solutions.

Alex’s career includes Big 4 consulting, global startups and academia. He holds a Bachelor of Science (Honours) and is an Honorary Research Fellow at the University of Melbourne.

Today, he advises executive teams on building the systems, tools and data foundations needed to turn insight into action at scale.

Book a chat with Alex to find out how he can solve your data challenge.
