Ditching Claude Code for Aider Local

ain’t da beer cold

🎂 Birthday present to myself, I signed up for a Claude Pro plan, ready to learn.

Claude Code

I decided to start with a new project, but stick to a domain I’m very familiar with, horse racing past performance data files. I provided two files:

In addition, I decided to draft a file called domain.txt, in which I tried to break down the major entities and their relationships. Things like Horse, Race, Track, Race Entry, etc. Just a sentence about each, and a list of the most essential properties.

And with that, I booted up claude code.

What follows here is an account of 4 conversations I had with claude code. This is a condensed version, only showing my side of the conversation. I’m sharing it so you can see how I talked to it, the granularity of tasks I chose, and the specific improvements I focused on. Although I’m not showing the tool-call invocations and output, I read through its explanations and reviewed the code it was generating.

Phase 1

Please analyze this codebase and create a CLAUDE.md file, which will be given to future instances of Claude Code to operate in this repository.

I’d like to create a new Go project in this directory. This project should have a single command-line program which takes a required argument, the path to the file to be parsed. The program should use the Go standard-library’s CSV parsing to parse the file. To keep it simple initially, just print the primary horse’s name in each row of the file (column 45).

Can we update CLAUDE.md now that there is code in this project?

I must have read somewhere to keep CLAUDE.md up to date. It seemed to work, and as you’ll see I stick with it.

At this stage the program is nothing more than a toy, one main function doing everything. As the first real challenge, we need to start mapping things from this complex CSV into a Go structure.
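A minimal sketch of that toy stage might look like the following. This is not the code Claude generated; the function names are mine, and the only DRF detail assumed is that the spec numbers columns from 1, so "column 45" lands at index 44. The main function wiring this to os.Args is omitted.

```go
package main

import (
	"encoding/csv"
	"fmt"
	"io"
)

// horseName returns the primary horse's name from one CSV record.
// The DRF layout numbers fields from 1, so "column 45" is index 44.
func horseName(record []string) string {
	if len(record) < 45 {
		return ""
	}
	return record[44]
}

// printHorseNames reads DRF-style CSV from r and prints the primary
// horse's name for each row.
func printHorseNames(r io.Reader) error {
	cr := csv.NewReader(r)
	cr.FieldsPerRecord = -1 // DRF rows can vary in length
	for {
		record, err := cr.Read()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
		fmt.Println(horseName(record))
	}
}
```

Setting FieldsPerRecord to -1 matters with this data: the standard library otherwise rejects any row whose field count differs from the first row's.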

Phase 2

I’d like to create some initial structs for the domain objects described in the domain.txt file. Then I’d like to refactor the main.go such that for each line in the CSV parsed, we produce one new Horse Race Entry object. Feel free to ask me any follow-up questions.

This is great, can you check CLAUDE.md and see if we need to update it for the future?

As you see, I tried to give a clue that the Horse Race Entry was the top-level structure for each row. And it did a great job, without any follow-up questions. I had specifically limited the domain.txt initially, so it wasn’t trying to cover the entire CSV file yet.

This is now a real program, with some thoughtful organization of the functions and structures. But, it’s taking a naive approach to an important issue. Crucially, I don’t have a particular solution in mind at this point, so I lead with a question, not a request to change the code.
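To make the naive approach concrete, the Phase 2 structs would have looked roughly like this. These shapes are illustrative (the real field lists came from domain.txt, which I'm not reproducing here); the point is that Sire is just a string:

```go
package main

// Horse as it stood after Phase 2. Sire is a plain string, which is
// the naive modeling that the next phase revisits.
type Horse struct {
	Name      string
	BirthYear int
	Sire      string // really another Horse, not just a name
}

type Track struct {
	Code string
	Name string
}

type Race struct {
	Track      Track
	RaceDate   string
	RaceNumber int
}

// HorseRaceEntry is the top-level object produced for each CSV row.
type HorseRaceEntry struct {
	Horse Horse
	Race  Race
}
```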

Phase 3

I noticed a general problem with the current implementation, but I will focus on one particular example. Many of the entities described in the contents of the CSV are the same entity. For example, right now the Horse struct has a field named Sire of type string. That isn’t wrong, but it turns out the Sire is another Horse (the original horse’s dad). So, instead of modeling this directly as a string, it should be modeled as a reference to another Horse. Then, taking this a step further, as we parse other lines in the CSV, another horse may have the same Sire, and we should make the correct reference, not create a new Horse instance. How should we address this general problem?

It correctly homed in on the notion of uniqueness being one of the real challenges here.

This is a great start, but I see one problem. Focusing on the properties that ensure uniqueness makes sense, but in some contexts we may learn about a horse only by name, and not know the birth year. But you’re right, strictly speaking two horses can have the same name, and would be differentiated by birth year (if we knew it). There may be no perfect solution, but what would you propose?

Here it proposed a solution I found inelegant, using composite string keys in a hashmap.

Is there anything more elegant than composite string keys? Some entities, like the Race itself, have a complex unique key (track name/code, race date, race number).

Let’s go ahead and implement this using structured keys (option 1).

Great, let’s update CLAUDE.md with these changes, so we’re ready for future work.

It seemed like this approach worked well in this case. We had some back and forth, got to a list of possible ways forward, I chose the one I wanted, and then it implemented it.
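For anyone unfamiliar with the "structured keys" idea, here's a sketch of what it can look like in Go. This is my own illustration, not the code Claude wrote: any struct whose fields are all comparable can be used directly as a map key, which sidesteps fragile composite strings like "SAR|2024-08-24|7".

```go
package main

// A structured key for a Race: track, date, and race number together
// identify it uniquely.
type RaceKey struct {
	TrackCode  string
	RaceDate   string
	RaceNumber int
}

// HorseKey identifies a Horse. BirthYear may be zero when we only
// know the horse by name.
type HorseKey struct {
	Name      string
	BirthYear int
}

type Horse struct {
	Name      string
	BirthYear int
	Sire      *Horse // a reference to another Horse, not a string
}

// Registry hands back the same *Horse for the same key, so two CSV
// rows naming the same sire end up sharing one instance.
type Registry struct {
	horses map[HorseKey]*Horse
}

func NewRegistry() *Registry {
	return &Registry{horses: make(map[HorseKey]*Horse)}
}

func (r *Registry) Horse(key HorseKey) *Horse {
	if h, ok := r.horses[key]; ok {
		return h
	}
	h := &Horse{Name: key.Name, BirthYear: key.BirthYear}
	r.horses[key] = h
	return h
}
```

Because the key is a typed struct rather than a concatenated string, the compiler catches mistakes like passing a RaceKey where a HorseKey belongs, and there are no delimiter-collision worries.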

But even though everything had been successful so far, I was not confident about how to explain to claude that certain elements of the CSV were repeated, and represented one or more historical workouts, which are properties of the Horse. So, I tried to hold its hand more this time. But, I think it just got it.

Phase 4

I’d like to add support for a new domain object, a Horse Workout. Fields 109-209 in the CSV describe the horse’s last 10 workouts. Can you start by reviewing the CSV definitions for these fields, and propose an update to the domain.txt file describing it.

Now can you propose an update to domain.go supporting this data?

Nicely done, now can you update the parseHorseRaceEntry logic to include the workout information from the CSV file. We probably should have a helper method dealing with the fact that workout information is repeated at different offsets for the different workouts.

Great, now let’s update CLAUDE.md to reflect our changes from this session.
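The helper-method idea is the generic "repeated block at a fixed stride" pattern. The sketch below is mine, not Claude's, and the field layout within each workout block is purely illustrative (the real meanings come from the DRF spec file); only the shape of the loop is the point:

```go
package main

// Workout holds one historical workout. The three fields here are
// illustrative; the actual DRF block has more.
type Workout struct {
	Date     string
	Distance string
	Time     string
}

// parseWorkouts pulls up to maxWorkouts Workout blocks out of a row,
// where each block occupies stride consecutive fields starting at
// base (0-based index). Blocks whose first field is empty are
// skipped, since a horse may have fewer than ten recorded workouts.
func parseWorkouts(record []string, base, stride, maxWorkouts int) []Workout {
	var workouts []Workout
	for i := 0; i < maxWorkouts; i++ {
		off := base + i*stride
		if off+2 >= len(record) || record[off] == "" {
			continue
		}
		workouts = append(workouts, Workout{
			Date:     record[off],
			Distance: record[off+1],
			Time:     record[off+2],
		})
	}
	return workouts
}
```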

After these four sessions I was pretty happy with how this experiment was going. And I would say I was impressed with the results from claude code. Over dinner, I think I spent 15 minutes telling my wife how awesome this was, and I started talking through what I planned next. I was excited.

finish line

But, I woke up to find out that Anthropic was changing their Consumer Terms and Privacy Policy. While I never trusted Anthropic, their previous policy of not training on our conversations DID appeal to me. And yes, I can still opt-out, but their signaling here is clear. And although it’s disappointing, I appreciate the clarity it gives me.

Aider

Let’s give aider a try, and we’ll configure it to use Ollama with Qwen3-30B-A3B-Instruct-2507. That’s the model we recently had success with in our own agent.

After following their installation guide, it’s relatively straightforward to point to my local Ollama, and the specific model we want.

export OLLAMA_API_BASE=http://127.0.0.1:11434
aider --model ollama_chat/qwen3:30b-a3b-instruct-2507-q4_K_M

Which when everything works should look like this:

Aider v0.86.1 Model: ollama_chat/qwen3:30b-a3b-instruct-2507-q4_K_M with whole edit format
Git repo: .git with 0 files
Repo-map: using 4096 tokens, auto refresh

The documentation mentioned adding specific files directly relevant to the task at hand. My only objective right now was to see if it understood this project based on the code and other artifacts in the directory.

/add CLAUDE.md

Now let’s see what aider can do.

I’ve been building this project out with claude code. Now I’m switching to aider. Can you familiarize yourself with the project, and tell me what you see?

Aider:

I’ve reviewed the project structure and documentation. The repository contains a Go application for parsing horse racing data in DRF (Daily Racing Form) format. The key components are:

1 main.go - The entry point that parses DRF files using Go’s CSV library
2 domain.go - Defines the domain model with structs for horse racing entities
3 registry.go - Implements an entity management system with structured keys for deduplication
4 go.mod - Go module definition
5 Data files - DRF format specification, track codes, and sample data

The system uses a sophisticated entity management approach with:

• Structured keys for type-safe entity identification
• Progressive enhancement capabilities (upgrading entities when more information becomes available)
• Proper reference integrity between entities

The DRF format is complex with 1,435 fields across multiple sections including race data, horse history, workout records, and past performance data. The parser handles variable-length records and uses the entity manager to ensure proper relationships
between entities.

The documentation is comprehensive and provides detailed guidance on working with the data format, track codes, and entity relationships. The system is designed to handle large files efficiently with proper data deduplication and reference management.

Would you like me to suggest any specific changes to improve the codebase?

Tokens: 2.4k sent, 280 received.

That sounds pretty promising. I specifically wanted to see how confident it was in its ability to go forward with this project.

Not right now, thanks. I have a follow-up question. Does this project appear to be in a state where you would be able to help me continue working on it?

Aider:

Yes, I can help you continue working on this project. The codebase appears to be in a functional state with a well-structured domain model, entity management system, and clear documentation. The architecture is sound with proper separation of concerns
between data parsing, domain modeling, and entity management.

I understand the DRF file format specification, the entity relationships, and the overall design patterns used in the code. I can help with:

• Extending the domain model to handle new fields
• Improving the parsing logic for edge cases
• Enhancing the entity management system
• Adding new features like data analysis or reporting
• Refactoring for better performance or maintainability
• Writing tests for the existing code

The project is ready for continued development, and I’m prepared to help you make progress on your goals.

Tokens: 2.7k sent, 164 received.

So if, like me, you’re ready to move on from claude code, the switch itself is painless. The real test will be how well the agent handles actual development tasks.

I’m going to try and pick up where I left off with claude code. The next phase will involve mapping the horse’s past-performance records. These are similar to the workouts but with many more data elements. As always, I plan to document how it goes, good or bad. Stay tuned.