In this post, I cover three things that are central to the success of an AI project: how learning stays within the organisation, why the project needs to be fast, and how the implementation should be done.
Build on what the organisation already knows
Learning is often even more important than the output of the work itself. A good practitioner can help ensure that the end result or product is perceived as "useful" and "successful". But even in the first experiments, it's worth recognising that a project's greatest value may lie in how it prepares you for future experiments and projects. An AI product itself ages and changes – but what the organisation learns stays.
Learning from projects should therefore be maximised. In AI projects, this can be supported by keeping the focus firmly on the organisation's own domain and expertise. Then there's only one thing to learn: what AI means in this domain – not everything there is to know about AI.
Yet AI experts – whether enthusiastic employees eager to get involved in exciting new territory, or consultants trying to demonstrate their competence and win the client's trust – often flood people with AI jargon. What they say may be entirely accurate and come from good sources, but it's wasted knowledge if it isn't connected to the organisation's own work. If it doesn't relate to something already known within the organisation, it will be forgotten and no lasting learning takes place.
The client's role in the project is central. I know how to build AI solutions – but I don't know what instructions should be used to test an energy sector agent's capabilities, I don't know what documents can be found in your intranet, and I can't judge which responses from the AI are acceptable. I can't perform meteorological analyses as well as professional meteorologists, and I don't know how the intricate logic of flight reservation systems is designed.
Fortunately, in every project I've worked on, there has been someone with that knowledge. And that knowledge is a good foundation for building AI understanding: you don't need to learn everything that goes into an AI solution, only the factors that made the solution work in this particular case and why certain problems had to be solved before truly good results were achieved. These lessons stick, because they relate to something familiar and well understood.
Keep the project small and fast
In data science and machine learning projects, I've learned one key principle: optimise cycle speed. Every iteration teaches something. Sometimes you go off track quickly, but you also get back on course quickly. When the cycle is short and cheap, a wrong direction isn't a catastrophe – it's a lesson.
The same principle applies to AI projects more broadly. Starting too broadly is a big mistake – one I've made myself: ambitious architectures, five-integration PoCs, months of work before anyone sees results.
The first experiment should be scoped so small that it can be completed in weeks – not months:
- One use case, not three
- Limited data, not the entire company's data warehouse
- Simple architecture that works first and gets optimised later
- A clear success metric that shows whether the experiment was worthwhile
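A "clear success metric" can be as simple as a scripted check agreed with the domain experts before the experiment starts. A minimal sketch in Python – the test cases, answers, and the 0.8 threshold here are illustrative assumptions, not from any real project:

```python
# Minimal sketch of a success metric for a small AI experiment:
# exact-match accuracy over a hand-labelled test set.
# The data and the threshold below are illustrative assumptions.

def accuracy(predictions, expected):
    """Share of predictions that exactly match the expected answer."""
    if not expected:
        return 0.0
    hits = sum(p == e for p, e in zip(predictions, expected))
    return hits / len(expected)

# Hypothetical test set: questions the domain experts have answered in advance.
expected = ["42 EUR", "yes", "contract 17"]
predictions = ["42 EUR", "no", "contract 17"]  # what the AI solution returned

score = accuracy(predictions, expected)
experiment_worthwhile = score >= 0.8  # threshold agreed up front
```

The point is not the metric's sophistication but that the pass/fail criterion is written down before the experiment runs, so the decision to continue, change direction, or stop doesn't rest on impressions alone.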
A small experiment is cheap. You learn from it. And based on its results, you can decide whether to continue, change direction, or stop.
Code and infrastructure are the foundation of iteration
Fast iteration requires a stable foundation. And this is where it's worth discussing who does the work and how.
A talented data scientist can achieve things that seem miraculous. But sometimes the data scientist doesn't document all the steps taken to reach the solution. Similarly, a consultant working through the cloud provider's console may leave no trace of how they built the service or what configuration it runs on. The result may be impressive, but no one else can reproduce it – let alone iterate on top of it.
This is why two things are essential in an AI project's implementation:
Application code in version control. All of the solution's logic is in code, stored in the client's repository. No hidden configurations, no consultant's private environments, no notebooks that only work on one data scientist's machine. When the code is under version control, any competent developer can continue the work.
Infrastructure as Code (IaC). Cloud infrastructure – servers, databases, network settings – is defined as code. The entire solution can be redeployed with a single command, and every change is traceable. This makes iteration safe: you can experiment, break things, and restore without fear of permanent loss.
When the solution's logic and infrastructure are in version-controlled code, you don't need to rebuild the foundation on every iteration cycle. Energy goes where it matters: improving the solution. Or the solution can be quickly applied to another use case entirely.
How this is reflected in my work and pricing
These principles naturally lead to how I price and do my work. I sell AI implementations on a fixed-price basis, where all deliverables – application code, IaC, documentation – remain the client's property. After delivery, the client is free to handle operations and further development themselves, with the team of their choosing.
I publish my prices openly on my pricing page.
If this sparked any thoughts or you have an AI project in mind, get in touch.