Hello, my AI friends...
Here is my current AI tool of choice. If you haven't read Part 1 yet, here it is!
Currently, I’m using Augment Code. You can use it in VS Code, JetBrains IDEs, and in the terminal.
In VS Code, I use only this plugin. (Yes, I’ve installed all the necessary Delphi tooling too—syntax highlighting, WordStar key bindings, code folding, and more…)
Over the last 14 weeks, I've written, if I'm being generous, about 500 lines of code myself. The rest was written by an AI agent (Claude.ai & ChatGPT): hundreds of thousands of lines of source code that compile cleanly (depending on the task) with Delphi 2007 and Delphi 13. And of course, with 100% DUnitX test coverage.
Could I have written all of that myself? Sure—in six months or more, full-time…
I use Claude.ai through an agent (Augment Code). It can also use ChatGPT, but that burns more credits.
I’ve stored guidelines for how my code must be formatted and how variables must be named—in a really large *.md file. (And yes: the agent generated that file itself by reading hundreds of my units!)
I defined rules like these (a minimal Delphi sketch of the first two follows the list):

- Classes must always be created as TInterfacedObject with an interface.
- Interfaces must always live in their own *.Intf unit.
- DUnitX must be used.
- If a project has TestInsight set via IFDEF, it must compile that project with config=AI.
- It must always use MSBuild and generate a batch file that sets up rsvars.bat and my environment variables.
- For files encoded as Windows-1252, it must use my tool and must not attempt to edit them via PowerShell.
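To make the first two rules concrete, here's a minimal sketch of the pattern the agent has to follow. The unit and type names (Logger.Intf, TFileLogger) are invented for illustration; they're not from my actual codebase.

```pascal
unit Logger.Intf;

interface

type
  // Rule: the interface lives in its own *.Intf unit.
  ILogger = interface
    ['{D9A2B7C4-5E31-4F8A-9C06-1B2D3E4F5A6B}']
    procedure Log(const AMessage: string);
  end;

implementation

end.
```

```pascal
unit Logger.FileImpl;

interface

uses
  Logger.Intf;

type
  // Rule: classes derive from TInterfacedObject and implement an interface.
  TFileLogger = class(TInterfacedObject, ILogger)
  private
    FFileName: string;
  public
    constructor Create(const AFileName: string);
    procedure Log(const AMessage: string);
  end;

implementation

uses
  SysUtils;

constructor TFileLogger.Create(const AFileName: string);
begin
  inherited Create;
  FFileName := AFileName;
end;

procedure TFileLogger.Log(const AMessage: string);
var
  F: TextFile;
begin
  // Append one timestamped line per call; plain TextFile I/O
  // keeps the example compatible with Delphi 2007 as well.
  AssignFile(F, FFileName);
  if FileExists(FFileName) then
    Append(F)
  else
    Rewrite(F);
  try
    Writeln(F, DateTimeToStr(Now) + ' ' + AMessage);
  finally
    CloseFile(F);
  end;
end;

end.
```

Callers then program against ILogger only and never see the concrete class; reference counting via TInterfacedObject takes care of the lifetime.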
Oh, and who owns the source code? I paid for it—so me (I think).
So yes, it’s also allowed to buy new credits for $40 whenever my budget is used up. By the way, I’m already on the highest tier—and it’s still cheaper than doing it all myself.
Oh, and the tool for editing non-UTF-8 files was written 100% by the agent. So does it belong to “him” after all?
Well, he published it on my GitHub account. (And of course also wrote the README and the installation guide in German and English.)
He also “learned” a workflow: whenever he needs a new feature, he writes a feature request as a *.md file and hands it to a colleague (himself, in another workspace).
When “the other one” implements the feature, the changelog gets updated, binaries are compiled, zipped, and published to GitHub again.
By now, the tool can also log user mistakes. That log is then analyzed fully automatically, and “he” suggests how the documentation or parameters should be improved—and whether a new function would be useful.
Besides the credits I burn with Augment Code, I’m now also using Claude.ai and Codex (OpenAI) in the terminal in parallel. This also works with auggie, the terminal version of Augment Code. Why the terminal if the VS Code plugin looks so nice? Because running multiple threads/agents in VS Code doesn’t parallelize that well, and I already had to bump my VM to 32 GB RAM. Terminal windows are simply slimmer.
This way, I can work on multiple projects in parallel. (And by the way, Claude.ai has a nice feature too: if you tell it to do something in parallel, it creates subtasks on its own and executes them.)
Sure, you could manage features and bugs with #TODO comments—or use a ticket system like Jira. But if you can just tell the agent about bugs and features and it maintains a *.md checklist or bug list, that’s far easier than creating tickets. (There are also integrations—for example, with Jira and GitHub—that can be synchronized automatically.)
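Just to illustrate what that looks like, here's a hypothetical excerpt of such an agent-maintained list (the entries are made up, not from my projects):

```markdown
## Bugs
- [x] Crash when the input list is empty (fixed, test added)
- [ ] Umlauts garbled when saving Windows-1252 files via PowerShell

## Features
- [ ] Add a dry-run switch to the encoding tool
```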
So how has this changed my day-to-day work?
Definitely more exciting, but not more relaxing. You still end up reading along constantly, sometimes across multiple windows, and you keep getting questions or new tasks. The attention load is higher, no doubt. But in exchange, you get the output of 2–3 programmers in the same amount of time. Parallel work across different projects in particular boosts productivity massively. You no longer spend three months on one task before you finally find time to return to another topic. That also eliminates the "getting back up to speed" phase: if I don't remember the current state anymore, I just ask the agent what the project status is and what we wanted to do next.
And the cool part is the answers you get in that situation…
"You wanted to debug the XY bug. The problem is most likely in Whatever.pas. Just set a breakpoint at line 1045 and tell me the value of Index."
With that, the problem becomes clear. The agent finds the faulty line and fixes it—of course, not before writing a unit test for it: red first, green after.
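A minimal sketch of such a regression test might look like this. The fixture, the Whatever unit, and the GetItem function are invented for illustration; the point is the red-first pattern:

```pascal
unit Whatever.Tests;

interface

uses
  DUnitX.TestFramework;

type
  [TestFixture]
  TWhateverTests = class
  public
    // Red first: this test reproduces the reported bug and fails
    // until the faulty line in Whatever.pas is fixed.
    [Test]
    procedure GetItem_WithInvalidIndex_RaisesClearException;
  end;

implementation

uses
  SysUtils, Whatever;

procedure TWhateverTests.GetItem_WithInvalidIndex_RaisesClearException;
begin
  Assert.WillRaise(
    procedure
    begin
      GetItem(-1); // hypothetical function under test
    end,
    EArgumentOutOfRangeException);
end;

initialization
  TDUnitX.RegisterTestFixture(TWhateverTests);

end.
```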
If it’s green, a commit is created—properly named, not just “Bug-Fix-Done”.
As the final step, the known-bugs list is updated.
I don't know whether you're always proud of your code; I think everybody writes code that "just works." But when I write a really good class or something more complex, I'm genuinely proud of it: the kind of code without TODOs that survives for more than a year without refactoring.
I didn’t expect this, but that emotional part of my work is basically gone with this “vibe coding” using an agent.
Sure, you still need to write good prompts, and you need to watch what the agent is doing. But even if the resulting code is excellent, there's no emotional bond. It works; fine. Call it a day.
Next month, I need to write at least an interface myself—just to get that feeling back.
Stay tuned—and have fun with AI.
