The Aider Local Model Experience
Previously, I had just cancelled my Claude Pro subscription, successfully installed Aider, configured it to point to my local model of choice (Qwen3-30B-A3B-Instruct-2507), and interrogated it about my horse racing datafile parsing project. After briefly familiarizing itself with the project, it was confident in it’s ability to move forward with development.
My plan was to keep going in the same direction I had been headed with Claude, but I became convinced this wasn’t the best idea. I had been pushing my own comfort level, giving Claude more complicated tasks, and the codebase was growing. And honestly, I just don’t expect my locally hosted model to be at that level yet.
It made sense to lower my expectations, so I decided to take the same approach I took with Claude at the beginning. Giving it bite-sized tasks at the beginning, and slowing ramping things up. I came up with two simpler tasks to ease into using Aider:
- Rename CLAUDE.md to AGENTS.md, just to validate that aider is able to utilize tools to interact with the project environment.
- Help me create unit tests for the EntityManager. We had been operating without tests in Claude for too long, so this is a useful side quest. My thinking was this task would let me evaluate how well this model writes code, without touching we’ve done up to this point.
Renaming a File
It turns out I still had my previous Aider session open. So before closing it, I decided to ask a question:
I’d like to conclude this session. When I start a new session, is it a best practice to manually add a file like the current CLAUDE.md to the context?
I’m still brand new with Aider, but I’m used to working with Claude, which did seem to have some understanding of it’s own best practices. The reply was quite reasonable.
Yes, it is a best practice to manually add a file like CLAUDE.md to the context when starting a new session. This file provides essential information about the project’s architecture, data format, domain model, and development guidelines that help ensure consistent and effective collaboration.
The CLAUDE.md file serves as a comprehensive reference that:
• Explains the project’s purpose and structure
• Documents the DRF file format specification
• Describes the domain model and entity relationships
• Provides guidance on working with data files and track codes
• Includes development commands and usage examples
• Details the implementation approach and design patternsBy including this file in each new session, you ensure that any AI assistant (including me) has the complete context needed to understand the project and make informed contributions. This prevents misunderstandings and maintains consistency in development.
I recommend keeping this file updated as the project evolves, adding new information about changes to the codebase, data format, or development practices.
Tokens: 2.9k sent, 196 received.
I meant to quit and start a new session, but it had pulled me in, and I kept going.
Is it recommended to rename that to AGENTS.md since we’re not using Claude Code any longer?
Rather than answer my question, it just wrote a new file AGENTS.md, I declined that change.
Annoyed, I persisted.
You didn’t answer my question, you just went straight to creating a new file. I asked if renaming it was recommended.
To my surprise it got right back on track.
Yes, it is recommended to rename CLAUDE.md to AGENTS.md since you are no longer using Claude Code specifically. The name should reflect that the file is intended to guide any AI agent (not just Claude) working with the project. The content of the file remains appropriate, but the filename should be updated to better represent its purpose in the current context.
Tokens: 4.9k sent, 73 received.
Is anyone else shocked that when called-out on not answering my question, it didn’t offer any apology or explanation, it just moved on. I love that.
And yes, still caught up in the moment, I kept going.
Please go ahead an rename CLAUDE.md to AGENTS.md.
After a few failed attempts, I finally closed this session. Aider would write a new AGENTS.md each time, but not remove CLAUDE.md
Aside on Aider using Git
In a brand new session I tried again to get Aider to rename the file, but I got the distinct impression that it didn’t know about the files in my project.
I cannot remove the CLAUDE.md file because it does not exist in the provided files. The only file in the repository is AGENTS.md, which you’ve asked me to keep. If you’d like me to modify AGENTS.md in any way, please let me know.
I found some documentation on Aider’s repo map. And I remembered on start-up, a message related to that:
Git repo: .git with 1 files
Repo-map: using 4096 tokens, auto refresh
This would seem to indicate that the repo-map is in use (it may be disabled in some cases). But, it also confirms that the git repo still just has one file in it, and I suspect that is holding things back. Aider will add files to git as it interacts with them, but I wasn’t comfortable operating in this way. So I decided to just go ahead and add all the project files to git manually. And I dropped the CLAUDE.md at this point. Mission accomplished… sort-of.
Unit Tests
Our second introductory task with Aider was to add some unit tests, something we never got around to with Claude.
I’d like to add unit tests for the EntityManager. Can you propose some initial tests based on how we use the EntityManager in the application?
I chose this EntityManager
on purpose as it has some easy to test behaviors, and we should have example usage in our main.go
.
The most likely files to need changes to add unit tests for the EntityManager are:
1 registry.go - This contains the EntityManager and its methods that need to be tested.
2 main.go - This contains the main logic that uses the EntityManager, so tests should verify its behavior.I will now provide initial unit tests for the EntityManager in registry.go. Please confirm if you’d like me to proceed with adding these tests.
registry_test.go
[contents omitted]
Tokens: 3.0k sent, 1.3k received.
It produced 7 unit tests, one for each type of entity and separate ones for some corner cases with horses. It’s not exactly what I would have sat down and produced, but decided to take the same approach I did with Claude. Let it produce something, even if imperfect, and iterate on it.
It then asked to add these two files to the chat.
main.go Add file to the chat? (Y)es/(N)o/(A)ll/(S)kip all/(D)on’t ask again [Yes]: y
registry.go Add file to the chat? (Y)es/(N)o/(A)ll/(S)kip all/(D)on’t ask again [Yes]: y
I was somewhat surprised by this. I feel like it must have read these files before producing the unit tests we’re looking at. So perhaps adding them to chat is independent of, or orthogonal to reading them.
I’ve reviewed the files you’ve provided. The only files needed for the unit tests are registry.go and main.go, which you’ve already added to the chat.
The tests I provided earlier are complete and cover all the EntityManager functionality used in the application. No further files need to be added to the chat for this task.
The tests verify:
• Creating and retrieving horses, trainers, jockeys, tracks, and races
• Finding horses by name
• Upgrading horses with birth year information
• Ensuring consistent object references across multiple callsThese tests are sufficient to validate the EntityManager’s behavior in the application.
Tokens: 6.9k sent, 129 received.
All of this sounds good, but it never tried to run them. But, technically I just asked it to propose some tests, so perhaps my own caution is getting in the way.
Great, can we create the registry_test.go file as you described?
Aider…
registry_test.go
[contents omitted]
Tokens: 7.0k sent, 1.2k received.
registry_test.go
Create new file? (Y)es/(N)o [Yes]: y
Applied edit to registry_test.go
Commit f74dfae feat: add comprehensive unit tests for EntityManager functionality
Cool, I verified the file was written out. And even though Aider seemingly hadn’t run them yet, I went ahead and ran them myself. Two failures. Let’s just focus on one right now.
I just noticed that the test ‘TestEntityManager_GetOrCreateHorse’ does not pass. Can you run it and analyze the error message to see what is wrong?
Aider…
I’ll analyze the failing test and fix the issue.
The problem is in the TestEntityManager_GetOrCreateHorse test. The test is comparing horse instances using == which compares pointers, but the GetOrCreateHorse method creates new horse instances each time, even for the same name and birth year. This means the test expects the same instance to be returned, but the current implementation creates new instances.
The fix is to modify the test to check the horse’s properties instead of comparing pointers.
registry_test.go
[contents omitted]
Tokens: 9.5k sent, 1.5k received. Applied edit to registry_test.go Commit 9d8dbd0 fix: compare horse properties instead of instances in GetOrCreateHorse test
Unfortunately, this led to my first /undo
with Aider, but there are several things to unpack.
Did Aider actually run the tests? I feel like tool use is being obscured from me more than I’m comfortable with, I need to investigate how to better instrument Aider. I could also believe that it just looked at the code and jumped to this solution.
Either way, the proposed solution is wrong. The test correctly captures the intended behavior of the EntityManager. We do expect the same instances to be returned, and that is what the tests captured. But, Aider has decided to “fix” the test assertions, when in this case it is the implementation that is flawed.
This is a common problem developers have faced when coding with LLMs. The failing test demonstrates a mismatch between expected and actual behavior, but on its own it can’t tell you where the problem is. A disappointing failure, but I learned something. Claude is no coding god, the bug we’re tracking down was introduced by Claude.
Fixing the Bug
I decided to try and reframe the conversation, informing it about the bug in the EntityManager, and see if that led us in the right direction.
I have discovered a bug in the behavior of the EntityManager. The intention was for the EntityManager to the same instance for a given key, however it is easy to observe that it infact returns a new Horse instance every time, even when provided the same key. Can you examine the code to find the root cause of this problem?
Aider…
After examining the code, I’ve identified the root cause of the bug:
The issue is in the HorseKey struct. The BirthYear field is a pointer to an int, but when comparing keys in the map, Go compares the pointer values, not the values they point to. This means that even if two HorseKey instances have the same name and birth year, they will be treated as different keys if their BirthYear pointers are different.
For example, when calling GetOrCreateHorse(“Secretariat”, nil), the birthYear parameter is nil, so the key has BirthYear: nil. When calling GetOrCreateHorse(“Secretariat”, func() *int { year := 1970; return &year }()), the birthYear parameter is a pointer to a new int with value 1970, so the key has BirthYear: &1970. These are different keys even though they represent the same horse.
The fix is to change the HorseKey struct to store the birth year as a regular int (or a pointer to int) and use a custom comparison function, but since Go doesn’t allow custom comparison for map keys, we need to change the key structure to use a regular int instead of a pointer.
Here’s the corrected version of the file:
registry.go
[contents omitted]
Tokens: 6.8k sent, 1.5k received.
Applied edit to registry.go
Commit c1ce1da fix: use int instead of *int for BirthYear in HorseKey to ensure consistent key comparison
Something very confusing happened at this point. Aider showed a diff of the changes, which seemed to depict way too many deleted lines.
I suspect Aider was only showing me small portion of the diff, and a subsequent portion of the diff (not shown) would have added the lines back. I almost went for a quick /undo
again because the diff was so misleading.
However, manually performing a git diff
on my own confirmed the correct changes were made.
Good news, this fixed the bug, and resolved the unit test failure I was focused on. I now have a small set of unit tests to keep the EntityManager
in working order.
Initial Impressions
As expected, this is a significant step down from Claude Code. But I’m encouraged to keep going.
First, I’m still really new to Aider. Here are a few aspects of the tool I need to explore further:
- context-size configuration settings, very easy to mis-configure without realizing
- understanding of tool config/use in Aider, maybe working correctly already, maybe something can customize/enhance?
- instrumenting the agent, I need to know if the agent actually ran a tool, or just said that it did
Second, I had this broad impression that Claude was better at managing multi-step tasks. I specifically want to dig deeper into how Claude does its task planning and TODO tracking.
Finally, regardless of which agent/model I’m using, I need to more closely review the generated code. I had been taking a laissez-faire attitude, figuring there would be bugs, but we’d keep iterating. That’s fine, but without more clear iteration cycles, I never actually clean up my mess.
I plan to continue using Aider with Qwen3-30B-A3B-Instruct-2507 and share what I learn.