Why AI accountability depends on data governance: from AI experimentation to trusted deployment
By Alex Avery, Notitia Managing Director.
In this article, Alex Avery explores the relationship between artificial intelligence, data governance, and organisational accountability. It is written for executives, technology leaders, and policymakers navigating the shift from AI experimentation to scalable, trusted deployment in Australian organisations.
With the Australian Government making moves to define national AI policy and engage more closely with vendors, interest in AI governance is accelerating.
That’s a good thing.
For organisations already experimenting with generative tools, predictive models, or automation platforms, the conversation is starting to shift from “what’s possible?” to “how do we do this well?”
We’re seeing this shift firsthand at Notitia, where I advise organisations on data governance and AI readiness: clients are now looking to get more value from their earlier pilots.
We’ve entered what Gartner calls the trough of disillusionment. Initial experiments were fast-tracked, early enthusiasm was high, and in many cases, budgets were made available to test new ideas. But a lot of those projects are now slowing down—or not delivering what was expected.
That doesn’t mean AI isn’t delivering on its promise. In most cases, it’s a sign that governance needs to catch up.
In this context, data governance is about having clear ownership, traceability, controls, and accountability for how data is created, accessed, and used.
AI projects don’t fail because of AI
AI is ultimately a layer built on top of your data. When we work with clients to diagnose underperforming AI pilots, the underlying cause is almost always the same: inconsistent or incomplete data, unclear ownership, and a lack of system-wide governance.
This isn’t about fault—it’s about timing. Governance isn’t usually top of mind when projects are moving quickly. But it becomes essential once you want to scale.
Interestingly, this is a shift we’re starting to see more of. In the past, data governance was something Notitia had to actively sell to our clients. Today, clients are coming to us with questions like:
- “How do we trace where this data came from?”
- “Who’s responsible for validating these results?”
- “What happens when we connect this tool to that system?”
These are good questions. They show that the conversation is maturing, and that organisations are ready to think beyond experimentation.
The gaps aren’t just technical. They’re structural.
As we speak with teams across sectors, one thing is clear: the lines of responsibility aren’t always well defined. The shift to cloud, the rise of SaaS, and the pace of AI development mean that implementation can happen across multiple vendors, platforms, and teams.
When outcomes aren’t as expected, it can be hard to pinpoint where things went off course. Was the tool unsuitable? Was the data incomplete? Was the model misaligned with the original objective?
Often, it’s a combination of all three. Without strong governance, there’s no shared understanding of who owns what—or how success is measured. That’s why a clear framework from the outset matters, even in early-stage pilots.
In practice, effective AI governance usually includes the following (a brief illustrative sketch appears after the list):
- Defined data owners
- Documented data lineage
- Clear approval and escalation paths
- Ongoing monitoring of model performance
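To make those elements concrete, here is a minimal sketch of how they might be written down as a simple dataset record. This is illustrative only, not a standard or a prescribed implementation; the field names, roles, and dataset are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative only: one way to record governance facts as data
# rather than leaving them in people's heads. All names are hypothetical.

@dataclass
class DatasetGovernanceRecord:
    name: str                     # the dataset being governed
    owner: str                    # defined data owner, accountable for quality
    lineage: list[str]            # documented upstream sources, in order
    approved_uses: list[str]      # what the data may be used for
    escalation_path: list[str]    # who to contact, in order, when issues arise
    last_quality_check: datetime | None = None

    def record_quality_check(self) -> None:
        """Timestamp the most recent validation, supporting ongoing monitoring."""
        self.last_quality_check = datetime.now(timezone.utc)

# Example: registering a dataset before it feeds an AI pilot.
transactions = DatasetGovernanceRecord(
    name="customer_transactions",
    owner="Head of Finance",
    lineage=["point-of-sale export", "nightly ETL job", "analytics warehouse"],
    approved_uses=["revenue forecasting", "churn model features"],
    escalation_path=["data steward", "data owner", "CIO"],
)
transactions.record_quality_check()
```

The specific structure matters far less than the habit: ownership, lineage, approved use, and escalation are recorded somewhere queryable, so the questions above have answers before a pilot scales.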
Cloud, SaaS, and the rise of decentralised decision-making
In many organisations, digital tools are no longer exclusively procured by IT.
Marketing might roll out a campaign platform with AI features. A finance lead might adopt forecasting software that integrates with internal systems. These decisions often start with a small trial, and then expand quickly.
This kind of decentralised innovation isn’t a risk in itself; it can be a strength. But when tools are integrated without a shared strategy around data, access, and security, issues can emerge further down the line.
Security and governance shouldn’t be barriers to innovation. They should be enablers that are built into the process in a way that’s clear, consistent, and not overly complicated.
Where policy can (and can’t) help
There’s an ongoing discussion about the role of government in setting clearer AI guardrails, especially when it comes to accountability across infrastructure providers, cloud platforms, and software vendors.
In Australia, this is unfolding alongside emerging national AI policy, privacy reform, and growing expectations around transparency and accountability.
It’s a conversation worth having, and we welcome more guidance. But realistically, most regulatory change will only follow major incidents—data leaks, service outages, or reputational damage that attracts widespread attention.
Rather than waiting for policy to catch up, many of our clients are choosing to lead the way themselves. They’re building internal policies, investing in infrastructure, and embedding governance practices that support long-term, sustainable innovation.
Sovereignty and smarter infrastructure investments
One area where government action can make a real difference is infrastructure.
While software vendors and platforms will continue to evolve, the physical infrastructure that supports AI (from data centres to connectivity) creates jobs, builds resilience, keeps value onshore, and supports data sovereignty.
Projects like the Tasmanian Government’s investment in a new data centre in Launceston are good examples of future-focused thinking. They show how digital strategy can also support local capability and economic development. And they ensure that we’re not entirely dependent on overseas platforms to power Australian innovation.
The technology may change. The principles don’t.
The AI landscape moves fast. Technology, tools and features evolve. What stays constant is the need for trust, clarity, and ownership, especially when decision-making is shaped by data.
You don’t need to slow down innovation to put governance in place. But you do need to know what you’re building on.
If your organisation is working with AI, here are five things worth asking (a brief sketch of what the last one can look like in practice follows the list):
1. Do we trust the data powering our tools?
2. Do we have the right controls in place to manage access and usage?
3. Are we involving security and IT from the start?
4. Have we defined what success looks like, and who owns each part?
5. If something doesn’t go to plan, do we have the visibility to respond quickly and effectively?
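On the last question, visibility usually comes down to whether each automated decision is recorded with enough context to trace it back afterwards. Here is a minimal sketch of that idea, assuming a Python-based pipeline; the model name, version, and dataset identifier are hypothetical placeholders.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative only: log each model decision with enough context
# to trace it later. All identifiers are hypothetical placeholders.

logger = logging.getLogger("model_audit")
logging.basicConfig(level=logging.INFO)

def log_prediction(model_name: str, model_version: str,
                   dataset_id: str, inputs: dict, output: object) -> None:
    """Write an audit record so an unexpected outcome can be traced
    back to a specific model version and data source."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "version": model_version,
        "dataset": dataset_id,
        "inputs": inputs,
        "output": output,
    }
    logger.info(json.dumps(record))

# Example: a forecasting call that can later be audited end to end.
log_prediction(
    model_name="revenue_forecast",
    model_version="2024-06-rc1",
    dataset_id="customer_transactions",
    inputs={"region": "TAS", "quarter": "Q3"},
    output=1.23e6,
)
```

Even a record this simple means that when an outcome looks wrong, you can say which model version produced it and which dataset fed it. That is what responding quickly and effectively depends on.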
These questions aren’t just technical. They’re strategic. And they’re becoming critical as AI becomes embedded across more parts of the business.
Data governance is no longer a supporting function; it is the foundation for trust, scale, and long-term value.
About Alex Avery, Notitia Managing Director and Founder

Alex Avery is the Managing Director and Founder of Notitia, an Australian data and digital transformation consultancy working across government, healthcare, community and private sectors.
A recognised voice on the intersection of data, technology, AI and public value, Alex focuses on how organisations can use trustworthy data to support better decisions and real-world outcomes.
His work spans data strategy, analytics, human-centred design and digital delivery, with a strong emphasis on practical, implementable solutions.
Alex’s career includes Big 4 consulting, global startups and academia. He holds a Bachelor of Science (Honours) and is an Honorary Research Fellow at the University of Melbourne.
Today, he advises executive teams on building the systems, tools and data foundations needed to turn insight into action at scale.
Book a chat with Alex to find out how he can solve your data challenge.