Reflections
What I keep, what I abandon, and how to work with me.
Review & stance (2022 -> mid-Nov 2025)
This journal covers three years in which I put AI at the center of my work, sometimes very intensively. Looking back, what remains are mostly convictions and a stance: what I keep, what I abandon, and how I want to work with these tools.
Quality of models and data
One simple conviction: output quality is capped by training-data quality. When the public code a model learns from is average, the model's output is average too. A “powerful” model is not enough: to get clean output, you need explicit rules, well-scoped context, and serious tests.
I stay wary of superficial trends (emojis, “too human” tone, gratuitous storytelling). What interests me is what we can reproduce tomorrow in another project, not just today’s demo.
My conductor stance
At the start, my pleasure was “eating code”. Today, I mostly see myself as a conductor: someone who designs the systems in which AIs work.
Concretely, I spend more time:
- defining rules and contracts the code must respect;
- writing tests and checks that act as guardrails;
- designing flow architectures (context, agents, CI);
- automating context collection and synthesis rather than copying it by hand.
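The last point above can be sketched in code. This is a minimal, hypothetical illustration (the file name `CONTEXT.md`, the `SKIP_DIRS` set, and the selection of extensions are assumptions, not something from this journal): regenerate a compact context file from the source tree instead of maintaining it by hand.

```python
import os
from pathlib import Path

# Hypothetical sketch: regenerate a compact context file from the tree
# instead of copying it by hand. Names and thresholds are illustrative.
SKIP_DIRS = {".git", "node_modules", "__pycache__", ".venv"}

def collect_context(root: Path, exts=(".py", ".md")) -> str:
    """Walk the tree and emit one line per relevant file: path and size."""
    lines = ["# Generated context (do not edit by hand)", ""]
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune skipped directories in place so os.walk never descends into them.
        dirnames[:] = sorted(d for d in dirnames if d not in SKIP_DIRS)
        for name in sorted(filenames):
            if name.endswith(exts):
                path = Path(dirpath) / name
                rel = path.relative_to(root)
                lines.append(f"- {rel} ({path.stat().st_size} bytes)")
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    root = Path(".")
    # Rerun on every tree change (e.g. from a pre-commit hook or CI step).
    (root / "CONTEXT.md").write_text(collect_context(root))
```

Wired into a pre-commit hook or CI step, a script like this keeps the context fresh for free, which is the point: the human designs the collection rule once, then stops copying by hand.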
The developer remains critical and in charge of decisions. For me, AI is a thought amplifier: it accelerates when the frame is clear, and it drifts when it is not. Modern prompt engineering is less about a magic sentence and more about a well-prepared environment.
It is also a tool for compressing exploration time: I test more ideas faster, fail earlier, and learn faster.
Acknowledged failures & what I abandon
What I keep from failure is not shame, but a list of what I no longer want to repeat. I am not covering everything here, but these are the major abandonments:
- Exhaustive local scanning: too slow, exponential cost. I prefer selective indexing and on-demand access.
- Hand-maintained context files: a dead end, even when maintenance was delegated to AIs; it stayed too heavy. Context must be generated and cleaned automatically whenever the tree changes.
- Over-engineered orchestration: too many sub-commands, scripts, and ceremonies end up costing more than they return. KISS and YAGNI are not slogans; they are guardrails.
- Magic prompts without architecture: “more prompts” does not compensate for missing structure. I now aim for “more architecture, fewer prompts”.
- Untestable rules: an intuition that does not become a verifiable rule (lint, check, test) usually fades out. If I can automate it, I do.
These abandonments are not renunciations; they are design choices for the next cycles.
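The last abandonment above can be made concrete. Here is an illustrative sketch (the rule "functions should stay short" and the 50-line threshold are hypothetical examples, not rules from this journal) of turning an intuition into a verifiable check that can fail CI:

```python
import ast
import sys
from pathlib import Path

# Illustrative sketch: turn the intuition "functions should stay short"
# into an automated check. The 50-line threshold is a hypothetical example.
MAX_FUNCTION_LINES = 50

def long_functions(source: str, limit: int = MAX_FUNCTION_LINES):
    """Return (name, length) for every function longer than `limit` lines."""
    tree = ast.parse(source)
    offenders = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > limit:
                offenders.append((node.name, length))
    return offenders

if __name__ == "__main__":
    failed = False
    for path in Path(".").rglob("*.py"):
        for name, length in long_functions(path.read_text()):
            print(f"{path}:{name} is {length} lines (limit {MAX_FUNCTION_LINES})")
            failed = True
    sys.exit(1 if failed else 0)  # non-zero exit fails the CI job
```

Once an intuition lives in a script like this, it stops being a matter of taste in review: the pipeline enforces it, and the discussion moves to whether the rule itself is right.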
Working with me on these topics
If you made it this far, there is a good chance these topics matter to you too. Here are examples of collaborations that interest me (while staying open to other approaches, as long as we focus on systems and quality):
- Putting AI rails in an existing project: clarify context, wire CI, and design a simple but robust orchestration workflow.
- Auditing / consolidating an AI setup: review what exists (prompts, scripts, agents, CI), identify over-engineering, simplify, and document a clearer next version.
- Co-building open-source tools or workflows around context, MCPs (powerful but context-heavy), orchestration, and quality.
- Training or coaching a team on serious LLM usage in development: how to delegate without letting go of the wheel, and how to turn intuitions into rules and tests.
If you want to discuss, the easiest way is to start from a concrete problem (a project, a workflow, a team) and see how these ideas can help. No sales pitch, just field work.
Conclusion
The core of my experience: AI amplifies the thinking that drives it. The point is not to let AI write in our place, but to think with it: design rules, flows, and guardrails that make it useful, repeatable, and safe.
I share this because it is concrete, and because the shift is clear: from the developer typing code to the conductor designing systems. The next chapter will probably be another cycle: less experimentation, more stabilization, and more transmission.