Field Notes from the Governments and Councils Breakout Forum

At the 2025 AIoT Global Summit, I chaired the Governments and Councils forum. The strongest insight to emerge was this:
AI tends to amplify what is already present.
If your organisation has well-aligned systems, clear roles, and structured data, AI may support your goals. If not, it will likely highlight inefficiencies, increase confusion, and introduce new risks.
This is particularly important in public sector contexts, where governance and clarity are non-negotiable.
Why AI tends to fail in government settings
The most common challenges facing government AI projects are not technical. They are structural.
- Poorly integrated systems
- Siloed teams and responsibilities
- Inconsistent data definitions and reporting
- Lack of visibility around performance and risk
AI tools do not solve these issues. They amplify them.
Key insights from the AIoT Global Summit breakout
Reflections from the Governments & Councils Breakout – by Praxxis Group
Last week’s AIoT Global Summit (hosted by Masoud Shakiba and Amir Mohammadi at TechTalk) brought together an ambitious cross-section of government, industry, and policy leaders. The stated theme was Economic Growth. The real theme, at least in the Government & Councils breakout forum, was trade-offs.
Here’s what stood out, what matters, and where we see room for action.
1. Start with governance, not more data
Frank Zeichner warned against uncontrolled data accumulation. Many agencies are investing in sensors and platforms without the necessary frameworks to make sense of the information. As Frank pointed out, the tech has moved ahead. What’s lagging is coherent frameworks for procurement, accountability, and cross-agency data use. You can build a sensor network in six weeks, but building a decision pathway that someone will stand behind? That’s where it gets slow.
If you are collecting data without knowing how it maps to service delivery or policy outcomes, it can become a liability.
Learn more about Frank’s work:
- Frank Zeichner, CEO of IoT Alliance Australia
2. Public trust depends on operational discipline
Fabrizio Gilardi’s research was a sharp reminder that most public concern around AI has less to do with the technology and more to do with outcomes. People don’t want an AI model explained. They want to know whether it will help them or hurt them, and whether they’ll have any say in the matter. His research shows that most people fear how AI will be used, not what it is. When decisions are made using systems that no one understands or can explain, confidence breaks down. For government leaders, that risk is unacceptable.
Explore Fabrizio’s research:
Study: "Willingness to Read AI-Generated News Is Not Driven by Their Perceived Quality"
3. Māori-led AI is increasing in impact
Karaitiana Taiuru put this best: data sovereignty, particularly for Māori, isn’t something you retrofit. If it’s not designed into the system at the start, it usually doesn’t get done at all. We’d expand that principle to most areas of data governance. Councils and agencies are increasingly working with datasets they don’t fully control. He also surprised us with the growth of Māori-led AI ventures and their influence across both cultural and economic spheres.
Discover more on Māori data sovereignty:
4. Councils are ready to engage
Suzanne Boyd gave a clear picture of the reality councils face. The desire to modernise is there. So is the need. But many are working under funding constraints, infrastructure failures, and overlapping reforms.
She extended a clear invitation to the tech sector: Don't wait to be asked. Engage early. Bring solutions that can help. Taituarā is actively seeking partnerships on behalf of its members and is open to conversations with vendors ready to do the work.
Learn more:
- Taituarā: Aotearoa New Zealand’s leading membership network for professionals working in and for local government
5. Begin with a common service issue
Jannat Maqbool has a three-decade record of turning emerging technologies into public value. She led New Zealand’s first public “Smart Space” in Hamilton, advised on digital twin ecosystems across Australia, New Zealand, and the UK, and founded IoT Waikato to galvanise regional innovation. Her advice for councils looking into IoT: start with a single, widely understood pain point. Choose something that affects residents directly and solve it well. This builds trust and sets the foundation for further development.
Find out more:
AI highlights what is already working... or not
When AI tools enter public systems that lack coherence or accountability, outcomes become more difficult to manage. Risks grow faster than the ability to control them.
This is already happening in areas like automated reporting, compliance systems, and predictive analytics.
What OPAL3 is designed to support
OPAL3 is built for public sector leaders managing performance and risk under pressure. It helps teams:
- Align roles, processes, and data
- Improve transparency and reporting
- Support smarter decision-making
- Prepare systems to safely integrate emerging technologies
AI is not arriving all at once. It is showing up in upgrades, vendor tools, and new expectations. OPAL3 helps ensure your systems are ready before that happens.
Common search queries this post helps address:
- How can government prepare for AI?
- Risks of AI in local government
- Public sector AI readiness framework
- Māori data governance and AI
- OPAL3 software for performance and risk
- Structuring data before AI implementation
- Why AI fails in government projects