Tuesday, March 31, 2026

From Copy & Paste to AI Agents: A Developer’s Journey (Part 3)


Hello, my AI friends...


If you have not read Part 1 and Part 2 yet, here they are!

A developer discovering that convincing coworkers to use AI agents is harder than using them.

So after the money talk, the tool talk, and the "I only wrote 500 lines myself" confession, there is still one question left:

Can you really trust an AI agent in day-to-day development?

The short answer is: No. And yes.

No, you must not trust the agent the way you trust a compiler. And yes, you can trust it the way you trust a junior developer who works incredibly fast, never gets tired, and is brave enough to touch every file in your repository.

That is exactly the point: the agent is not magic. It is not a senior architect. It is not a legal department. It is not a compiler. It is not your final QA. But it is a surprisingly productive team member if you build the right rails around it.

For me, the real productivity boost did not come from simply saying "implement feature XY". The real boost started when I forced the agent into a workflow that looks more like a disciplined development process.

That means:

  • clear coding rules
  • small, testable tasks
  • build scripts it must use
  • a fixed format for commit messages
  • a habit of writing tests before touching bug fixes
  • and a strong preference for asking questions before changing too much
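To make this less abstract, here is a sketch of what such a rules file can look like. The file name and the exact wording are invented for this post; the real file is much longer and repository-specific.

```markdown
<!-- CODING_RULES.md (illustrative excerpt, not a literal copy) -->

- Target compilers: Delphi 2007 and the current Delphi; never use syntax the older one cannot parse.
- Every class implements an interface; interfaces live in their own *.Intf unit.
- A bug fix starts with a failing DUnitX test (red), then the fix (green).
- Build only through the provided build script, never with ad-hoc command lines.
- Commit messages: "<area>: <imperative summary>"; the body explains the why.
- Stop and ask before changing any public API.
```

The point is not the specific rules, but that they are written down where the agent reads them on every session.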

If you let the agent work without these rails, it will still produce output. Sometimes impressive output. But sometimes it will "improve" things that were not broken, rewrite working code because it found a prettier abstraction, or confidently explain nonsense in a very professional tone.

That part is new for many developers: you are no longer mainly writing code, you are designing the behavior of your digital coworker.

I spend a lot of time defining process now. (And because of that, I had an idea, but more about that in the next blog post.)

Which compiler must be used?

Which config?

Are comments wanted or not?

Must interfaces live in separate units?

Must a bug fix come with a test?

May it edit old ANSI source files directly?

Should it stop and ask before changing public APIs?

All these rules sound boring. But boring rules are exactly what make AI coding useful in production.

Without rules, the agent is creative. With rules, it becomes productive.

And there is another thing I had to learn: context is everything.

If I start a fresh session and just throw a task at the tool, the result may be okay. But if the agent already knows the repository, the coding style, the current branch, the open bug, and the surrounding units, the quality jumps massively.

So a large part of my work now is not coding itself, but feeding the right context and cutting work into chunks that the model can solve safely.

This also changes debugging.

Sometimes I no longer start with the debugger. I start with a question like:

Find the most likely reason why this value can become nil although the constructor should have initialized it. Check all call sites and the lifetime management around the interface references.

And very often the answer is not the final truth, but it gives me three strong places to inspect immediately. That alone saves a lot of time.

Of course, there are still complete failures.

Sometimes the agent overlooks the obvious.

Sometimes it introduces a regression in a totally different area.

Sometimes it uses modern Delphi syntax where Delphi 2007 would simply laugh and die.

Sometimes it writes a beautiful helper class that nobody asked for.

And sometimes it keeps pushing forward, although it should have stopped and asked a question twenty minutes earlier.

That is why reviews matter more, not less.

In the old world, I reviewed code mostly because humans are inconsistent. In the AI world, I review code because the agent is fast enough to create a lot of very convincing mistakes in a very short time.

So my confidence does not come from "AI is so smart." It comes from this combination:

  • strict rules
  • repeatable build steps
  • automatic tests
  • small commits
  • and fast review loops

If all of that is in place, then working with an AI agent feels less like gambling and more like scaling.

And there is something else that changed for me: documentation.

I used to postpone documentation because it always felt like the part of the work that steals time from the "real" work. Now I often let the agent draft it immediately while the implementation is still fresh. README files, release notes, migration hints, installation steps, and even ticket summaries. Suddenly, all the annoying but necessary text around the code is no longer such a burden.

That alone removes a lot of friction from finishing projects properly.

So, where is this heading?

I think the next big step is not that AI writes even more code. The next big step is that it will understand workflows better: tickets, logs, build pipelines, documentation, dependencies, and all the little conventions that make up real software engineering.

We are moving from "generate me a function" to "help me run software development as a system."

And that is why I do not see AI agents as a gimmick anymore.

They are already becoming infrastructure.

Not perfect infrastructure. Not cheap infrastructure. Not trustworthy without supervision.

But infrastructure nevertheless.

So yes, I still read a lot. I still review a lot. I still stop the agent when it goes in the wrong direction. But I also get more done, across more projects, with less context switching pain than ever before.

That trade is worth a lot.

Maybe you are not using AI agents yet. Maybe you are worried that AI might cost you your job in the near future. But I am absolutely sure of one thing: if you do not engage with this topic today, you will be sidelined within the next three years at the latest.

Stay tuned—and have fun with AI.

Wednesday, February 11, 2026

From Copy & Paste to AI Agents: A Developer’s Journey (Part 2)

Hello, my AI friends...



Here is my current AI tool of choice. If you did not read Part 1, here it is!

Currently, I’m using Augment Code. You can use it in VS Code, JetBrains IDEs, and in the terminal.

In VS Code, I use only this plugin. (Yes, I’ve installed all the necessary Delphi tooling too—syntax highlighting, WordStar key bindings, code folding, and more…)

Over the last 14 weeks, I’ve written—if I’m being generous—about 500 lines of code myself. The rest was written by an AI agent (Claude.ai & ChatGPT): hundreds of thousands of lines of source code that compile cleanly (depending on the task) with Delphi 2007 and Delphi 13. And of course, with 100% DUnitX test coverage.

Could I have written all of that myself? Sure—in six months or more, full-time.

I use Claude.ai through an agent (Augment Code). (It can also use ChatGPT, but that burns more credits.)

I’ve stored guidelines for how my code must be formatted and how variables must be named—in a really large *.md file. (And yes: the agent generated that file itself by reading hundreds of my units!)

I defined rules like:

  • Classes must always be created as TInterfacedObject with an interface.

  • Interfaces must always live in their own *.Intf unit.

  • DUnitX must be used.

  • If a project has TestInsight set via IFDEF, it must compile that project with config=AI.

  • It must always use MSBuild and generate a batch file that sets up rsvars.bat and my environment variables.

  • For files encoded as Windows-1252, it must use my tool and must not attempt to edit them via PowerShell.
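To make the first two rules concrete, here is a minimal sketch of the pattern. The unit and type names are invented for illustration, not taken from my codebase:

```delphi
// --- Logger.Intf.pas ---------------------------------------------
unit Logger.Intf;

interface

type
  // The interface lives alone in its *.Intf unit.
  ILogger = interface
    ['{8F2D4C6A-0B1E-4F3A-9D57-2C8E1A7B5F90}']
    procedure Log(const AMessage: string);
  end;

implementation

end.

// --- Logger.pas --------------------------------------------------
unit Logger;

interface

uses
  Logger.Intf;

type
  // The class derives from TInterfacedObject, so callers get
  // automatic reference counting through the interface.
  TLogger = class(TInterfacedObject, ILogger)
  public
    procedure Log(const AMessage: string);
  end;

implementation

procedure TLogger.Log(const AMessage: string);
begin
  Writeln(AMessage);
end;

end.
```

Callers hold only ILogger references; the implementing class stays private to its unit and can be swapped out without touching call sites.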

Oh, and who owns the source code? I paid for it—so me (I think).

So yes, it’s also allowed to buy new credits for $40 whenever my budget is used up. By the way, I’m already on the highest tier—and it’s still cheaper than doing it all myself.

Oh, and the tool for editing non-UTF-8 files was written 100% by the agent. So does it belong to “him” after all?

Well, he published it on my GitHub account. (And of course also wrote the README and the installation guide in German and English.)

He also “learned” a workflow: whenever he needs a new feature, he writes a feature request as a *.md file and hands it to a colleague (himself, in another workspace).

When “the other one” implements the feature, the changelog gets updated, binaries are compiled, zipped, and published to GitHub again.

By now, the tool can also log user mistakes. That log is then analyzed fully automatically, and “he” suggests how the documentation or parameters should be improved—and whether a new function would be useful.

Besides the credits I burn with Augment Code, I’m now also using Claude.ai and Codex (OpenAI) in the terminal in parallel. This also works with auggie, the terminal version of Augment Code. Why the terminal if the VS Code plugin looks so nice? Because running multiple threads/agents in VS Code doesn’t parallelize that well, and I already had to bump my VM to 32 GB RAM. Terminal windows are simply slimmer.

This way, I can work on multiple projects in parallel. (And by the way, Claude.ai has a nice feature too: if you tell it to do something in parallel, it creates subtasks on its own and executes them.)

Sure, you could manage features and bugs with #TODO comments—or use a ticket system like Jira. But if you can just tell the agent about bugs and features and it maintains a *.md checklist or bug list, that’s far easier than creating tickets. (There are also integrations—for example, with Jira and GitHub—that can be synchronized automatically.)
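For illustration, such a checklist is nothing more than a small Markdown file the agent keeps up to date. The entries here are invented:

```markdown
<!-- BUGS.md (illustrative) -->

## Open
- [ ] #013 Index off by one in list deletion (failing test written, fix pending)

## Fixed
- [x] #012 Crash when importing Windows-1252 files
```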

So how has this changed my day-to-day work?

Definitely more exciting, but not more relaxing. You still end up reading along constantly—sometimes across multiple windows—and you keep getting questions or new tasks. The attention load is higher, no doubt. But in exchange, you get the output of 2–3 programmers in the same time. Especially parallel work across different projects boosts productivity massively. You no longer spend three months on one task before you finally find time to return to another topic. That also eliminates the “getting back up to speed” phase. If I don’t remember the current state anymore, I just ask the agent what the project status is and what we wanted to do next.

And the cool part is the answers you get in that situation…

You wanted to debug the XY bug. The problem is most likely in Whatever.pas. Just set a breakpoint at line 1045 and tell me the value of Index.

With that, the problem becomes clear. The agent finds the faulty line and fixes it—of course, not before writing a unit test for it: red first, green after.
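Sketched in DUnitX, that red-first step can look like this. The fixture, the units under test, and the bug itself are invented for illustration:

```delphi
unit OrderList.Tests;

interface

uses
  DUnitX.TestFramework;

type
  [TestFixture]
  TOrderListTests = class
  public
    // Written before the fix: fails (red) on the buggy build,
    // passes (green) once the off-by-one in Delete is corrected.
    [Test]
    procedure Delete_LastItem_DoesNotRaise;
  end;

implementation

uses
  OrderList.Intf, OrderList; // hypothetical units under test

procedure TOrderListTests.Delete_LastItem_DoesNotRaise;
var
  List: IOrderList;
begin
  List := TOrderList.Create(['A', 'B']);
  List.Delete(1);                 // previously raised ERangeError here
  Assert.AreEqual(1, List.Count); // only 'A' remains
end;

initialization
  TDUnitX.RegisterTestFixture(TOrderListTests);

end.
```

Only after this test exists does the agent touch the production code.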

If it’s green, a commit is created—properly named, not just “Bug-Fix-Done”.

As the final step, the known-bugs list is updated.

I don’t know if you’re always proud of your code, and I think everybody writes code that “just works.” But for me—if I write a really good class or something more complex—I’m genuinely proud of it. The kind of code without TODOs that survives for more than a year without refactoring.

I didn’t expect this, but that emotional part of my work is basically gone with this “vibe coding” using an agent.

Sure, you still need to write good request prompts, and you need to watch what the agent is doing. But even if the resulting code is excellent, there’s no emotional bonding. It works—fine. Call it a day.

Next month, I need to write at least an interface myself—just to get that feeling back.

Stay tuned—and have fun with AI.