I Figured Out What AI Advisory and AI Governance Are

In my previous post, I discussed AI project planning and execution. My approach is very straightforward: keep the project small, iterate fast, and put the code in version control. Also, start from what you already know; that is how you gain lasting learning about AI. As a former teacher, I can tell you that lasting learning happens mainly when new information is connected in multiple ways to things already known.

I can imagine an AI advisor, AI transformation catalyst or AI governance professional would strongly object to what I wrote.

— This is cutting corners! Where's the governance model? Where's the risk assessment? Where's the CoE approval?

I've been puzzled too. For about a year now, I've been wondering about these: AI advisory and AI governance.

— What are these? Are they actually needed for something?

I decided to find out what they are, why they exist and whether they're needed.

AI-native vs AI-enabled

I had a hunch that these new AI "things" are related to the fact that AI is now, in 2026, everywhere and on everyone's lips, and that I had grown up in a very different AI environment.

Before 2022/2023, the situation was entirely different. Companies working with AI were mainly companies that built AI systems. AI was their core product and main business. The vast majority of companies had nothing to do with AI whatsoever. Some were customers of the aforementioned AI companies, but in practice almost everyone was blissfully unaware of AI.

And now everyone is doing something with AI. In software development, AI assistance is permeating the entire field and often determines developers' competitiveness. Copilot is being adopted at an accelerating pace in companies, assisting employees in various administrative and content production tasks. RAG solutions are abundant: a language model is placed between intranet documents and the user in hopes that the intranet would finally yield something useful and plain-spoken. The bravest are starting to build something agentic.
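The RAG pattern mentioned above can be sketched in a few lines. This is a toy illustration, not a production design: the retriever is a naive keyword-overlap scorer, and the final LLM call is left out. A real system would use embedding-based search and send the assembled prompt to an actual language model; all names here are my own illustrative choices.

```python
# Toy sketch of the RAG pattern: retrieve the most relevant intranet
# snippet, then prepend it to the user's question as context for an LLM.

def retrieve(question: str, documents: list[str]) -> str:
    """Return the document sharing the most words with the question
    (a stand-in for real embedding-based retrieval)."""
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question: str, context: str) -> str:
    """Combine the retrieved context and the question into one prompt;
    a real system would send this to a language model."""
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Travel expenses are reimbursed within 30 days of filing.",
    "The office VPN requires two-factor authentication.",
]
question = "How long does travel expense reimbursement take?"
prompt = build_prompt(question, retrieve(question, docs))
```

The point of the sketch is only the shape of the flow: retrieval narrows the intranet down to relevant material, and the model answers from that material rather than from its own guesses.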

The difference between companies that build AI and companies that apply AI is significant, and there are established terms for them: AI-native and AI-enabled.

An AI-native company is built around AI from the start. AI is not an add-on but part of the core business model. These companies typically have their own models, their own data pipelines and a research-driven culture. OpenAI and Anthropic are obvious examples, and smaller companies with their own AI product belong to the same category.

An AI-enabled company is a traditional organisation that integrates AI into its existing processes. The business model is not inherently about AI — AI supports it but does not define it. The current wave — adopting Copilot, RAG for the intranet, chatbot for customer service — is AI-enabled activity.

An AI-native company (which is likely a "startup") has its own framework. In developing and optimising its core business — the AI product — it uses data science, machine learning and data architecture methods: data is analysed with statistically sound methods, models are evaluated with metrics, experiments are tracked systematically, iteration cycles are kept fast and deployment is done through MLOps practices. It is disciplined, but it's not called "governance" — it's called normal product development.

But for a large, long-established company bringing AI into its existing operations, the situation is different. There are systems accumulated over the years, sunk costs, organisational silos, compliance requirements, complex procurement processes and hundreds or thousands of employees whose ways of working are changing. AI doesn't arrive on a blank slate but into the middle of a living, complex organisation.

When a large organisation adopts AI, sensitive situations arise quickly. An employee uses the free version of ChatGPT to draft a customer complaint response, feeding the customer's name, order number and complaint details into a service whose data may end up in the model's training set and leak into the public. The marketing team generates campaign images with AI without checking what usage rights apply to the generated material. The customer service team, wanting to boost efficiency, orders an AI chatbot from an external provider; the bot gives incorrect information and a customer makes a harmful decision based on it.

AI Governance and AI Advisory

These are not hypothetical scenarios but everyday reality that begins as soon as AI enters an organisation. In a large organisation, establishing AI governance is essentially mandatory: if you don't, AI creeps into operations uninvited and unpleasant things follow. AI governance is the structure that addresses these situations: it defines the rules, responsibilities and boundaries for how AI may and should be used. To a large extent, it is also about building processes with AI in mind.

Related to this is AI advisory: a consulting service for companies that are seeking direction with AI or struggling with the problems it brings. AI advisory helps a company understand what is worth doing, where to start and how to avoid pitfalls — and often the concrete output is precisely the building of governance and a CoE.

I'd also like to mention the AI Center of Excellence, as I've been involved in establishing one. Imagine the same large organisation: different departments have their own AI experiments, but nobody knows what's being done elsewhere. Marketing is trying one LLM service, customer service another, product development a third. Each team is solving the same problems separately — how to get data to the model, how to evaluate results, how to take a solution to production. At worst, overlapping licences are purchased and the same integrations are built in parallel. An AI Center of Excellence is a centralised team that brings this together: it maps what is already being done in the company, shares learnings between departments, standardises tools and helps business units move experiments forward. It's not about one team doing all the AI — it's about someone maintaining the big picture, preventing duplicate work and serving as the in-house expert when departments have questions about using AI.

I have to say, this research and writing about it has broadened my understanding. I hope it has been useful for you as well.

What About AI Transformation?

Then there's AI transformation. It means an organisation's transition from an AI-enabled state towards an AI-first state — a situation where AI is so deeply embedded in operations that it fundamentally changes the way work is done.

I don't know whether AI transformation is a real thing. Perhaps it is an inevitable process where AI becomes a basic success factor similar to IT. But in any case, its progress depends on a company's ability to put AI in service of its own industry and its own expertise. An insurance company that deeply understands insurance can also assess where AI produces the most value for it. A weather service that understands meteorology knows what to demand from an AI solution. A company that doesn't know its own industry cannot guide its AI either.

And here I return to where I started: learning about AI should be built on top of what the organisation already does and knows.

If this sparked any thoughts, get in touch!
