The video is in Finnish.
Plain “just do this” prompting tends to produce uneven results: variable quality and a lot of revision cycles. When you know the domain well, model errors and exaggeration show up immediately; when you don’t, the same output can look impressive. The potential is still there; the question is how you steer the model toward the right end result.
What makes up an agent?
In practice, an agent includes a language model, tools (what it can do besides write back to you), a control loop (which decides when each step happens), and a role (its task and instructions). The role isn’t just play-acting: it narrows the task and sets priorities, and that shows up directly in the text the agent produces.
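The four parts above can be sketched in a few lines. This is a minimal illustration, not a real framework: `stub_model` stands in for an actual LLM call, and all names here are hypothetical.

```python
# Minimal sketch of the four agent parts: model, tools, control loop, role.
# "stub_model" is a stand-in for a real language-model call.

def stub_model(role: str, message: str) -> dict:
    """Pretend model: decides whether to call a tool or reply directly."""
    if "word count" in message:
        return {"action": "tool", "tool": "count_words", "args": message}
    return {"action": "reply", "text": f"[{role}] draft for: {message}"}

# Tools: what the agent can do besides writing text back.
TOOLS = {"count_words": lambda text: str(len(text.split()))}

def run_agent(role: str, task: str, max_steps: int = 3) -> str:
    """Control loop: call the model, dispatch tools, stop on a reply."""
    message = task
    for _ in range(max_steps):
        decision = stub_model(role, message)
        if decision["action"] == "tool":
            # Feed the tool result back into the next model call.
            message = TOOLS[decision["tool"]](decision["args"])
        else:
            return decision["text"]
    return "stopped: step limit reached"
```

The role string is passed into every model call, which is all “role” means mechanically: it changes what the model is asked to be while the loop and tools stay the same.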
Job search without roles vs with roles
With prompting alone (“write an application and CV” plus files) you often get exaggeration, poor prioritisation and an overlong “CV with full history”. The outcome is uncertain. A job search isn’t one task: it involves fit assessment, drafting, reviewing materials and interview prep — different angles that a single generic agent easily mixes together.
In the video I build an agent swarm: separate roles as Cursor rule files. Same job ad and source material, but different instructions for each subtask — so the result is more focused, more accurate and easier to verify.
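To make the idea concrete, one role file might look roughly like this. The exact frontmatter fields and file location depend on your Cursor version, so treat this as an illustrative sketch rather than a canonical format; the instruction text is the part that matters.

```markdown
---
description: Reviews a CV draft against one specific job ad
alwaysApply: false
---

You are a CV reviewer. Given a job ad and a CV draft:
- Keep only experience relevant to this ad; flag the rest for removal.
- Mark every claim that cannot be verified from the source material.
- Target one page; list what you cut and why.
```

Because each subtask gets a file like this, the instructions are versioned, reusable, and easy to check against the output they produced.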
What does this actually fix?
Working with a language model is partly an exercise in prioritisation: incentivising the model toward a goal. The clearer the direction and boundaries, the less drift and rework. When each agent’s role and “recipes” live in files, you don’t repeat the same instructions for every application, and the work splits into manageable pieces: an “agent swarm”.
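The swarm structure itself is simple to express. In this sketch the role names and instruction strings are illustrative (they mirror the subtasks mentioned above), and `run_role` is a stand-in for invoking one agent with its own rule file:

```python
# Sketch of the "agent swarm" idea: each subtask gets its own focused
# instructions instead of one generic prompt. Roles are illustrative.

ROLES = {
    "fit_assessor":   "Score how well the candidate matches the ad.",
    "cv_writer":      "Draft a one-page CV targeted at this ad only.",
    "reviewer":       "Flag exaggerations and unverifiable claims.",
    "interview_prep": "List likely questions based on the ad.",
}

def run_role(role: str, instructions: str, material: str) -> str:
    """Stand-in for one agent run; a real setup would call an LLM here."""
    return f"{role}: applied '{instructions}' to {len(material)} chars"

def run_swarm(material: str) -> dict:
    # Same source material for every role, different instructions each,
    # so each output stays narrow and easy to verify on its own.
    return {role: run_role(role, instr, material)
            for role, instr in ROLES.items()}
```

The payoff is verifiability: checking four narrow outputs against four narrow briefs is easier than auditing one long generic answer.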
If this sparked any thoughts, get in touch! Let me know too if you’d like a video on a topic you choose.