The Quest for a Free AI Coding Solution: Unlocking the Potential of Goose and Qwen3-Coder
The AI coding landscape is buzzing with the potential of open-source alternatives. With the rise of paid services like Claude Code, developers are seeking more affordable options. Enter Goose and Qwen3-coder, two promising tools that might just change the game. But can they truly compete with the big players? I set out to find the answer.
Jack Dorsey, the visionary behind Twitter, Square, and Bluesky, sparked curiosity with a cryptic tweet about Goose and Qwen3-coder. The internet speculated that these two free tools could challenge Claude Code. But is it really possible? I decided to dive in and put them to the test.
Goose, an open-source agent framework developed by Dorsey's Block, resembles Claude Code in many ways. Qwen3-coder, a coding-centric large language model, plays a role comparable to Anthropic's Claude Sonnet 4.5. Both are entirely free to use, which is a huge advantage for developers on a budget.
This article is the first of a trilogy exploring the integration of Goose, Ollama (an LLM server), and Qwen3-coder. I'll guide you through the setup process, explain their roles in AI agent coding, and attempt to build an iPad app using these tools.
Let's begin with the installation process. You'll need to download Goose and Ollama, then pull Qwen3-coder from within Ollama. I installed Goose first and couldn't get it to talk to Ollama; the order matters, so set up Ollama before configuring Goose. Lesson learned!
In other words, install Ollama before Qwen3-coder. I used the app version on macOS, but you can choose your preferred platform. Selecting the model doesn't download it; the download begins the first time you prompt it, so type something to kick it off. The model is substantial in size, so ensure you have sufficient free storage. This setup keeps your AI local, ensuring privacy and control.
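If you prefer the terminal to the app, the same download step can be done with Ollama's CLI; a minimal sketch, assuming the `qwen3-coder:30b` tag used later in this article:

```shell
# Pull the Qwen3-coder model ahead of time (a large download;
# make sure you have enough free disk space)
ollama pull qwen3-coder:30b

# Or start an interactive chat, which triggers the download on first use
ollama run qwen3-coder:30b

# Confirm the model is now installed locally
ollama list
```

Pulling explicitly is handy if you want the download running in the background before you sit down to work.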
After installing Qwen3-coder, you can expose the Ollama instance to your network if other machines need to reach it. The model itself lives in the hidden .ollama directory in your home folder, so don't be surprised when the large file doesn't show up in Finder. I set my context length to 32K tokens, but you can adjust it based on your available RAM. Remember, we're aiming for a free, local setup, so avoid signing in to Ollama if possible.
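When running the Ollama server from a terminal rather than the desktop app, both of these settings can be made via environment variables; a hedged sketch (the macOS app exposes equivalent options in its settings, and `OLLAMA_CONTEXT_LENGTH` requires a reasonably recent Ollama release):

```shell
# Listen on all interfaces instead of localhost only,
# so other machines on your network can reach the server
export OLLAMA_HOST=0.0.0.0:11434

# Raise the default context window to 32K tokens;
# adjust down if you have less RAM
export OLLAMA_CONTEXT_LENGTH=32768

# Start the server with these settings
ollama serve
```

Leave `OLLAMA_HOST` unset if you only want local access; the default binds to localhost, which is the safer choice for a private setup.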
With Ollama and Qwen3-coder ready, it's time to install Goose. I chose the macOS Apple Silicon Desktop version, but there are multiple options, including a CLI. Configure Goose to connect to Ollama and select the qwen3-coder:30b model. Congratulations! You've set up a fully local coding agent.
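For the CLI version of Goose, the connection is set up interactively; a rough sketch of the flow (the desktop app walks you through the same choices in its settings panel):

```shell
# Interactive setup: when prompted, choose Ollama as the provider,
# point it at your local server, and pick qwen3-coder:30b as the model
goose configure

# Then start an agent session from your project directory
goose session
```

Running `goose session` from inside the project folder gives the agent a working directory, which mirrors the directory setup described below for the desktop app.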
Now, let's put Goose to the test. Like any chatbot, you'll type prompts to interact. I set a directory for Goose to work from and ran a simple test: building a WordPress plugin. Goose struggled at first, failing to generate a working plugin. After I pointed out the issues, it took Goose five attempts to get it right.
First impressions? I was slightly disappointed by Goose's performance compared to other free chatbots. However, agentic coding tools like Goose and Claude Code have an advantage: they work directly on the source code, allowing improvements with each correction.
Performance-wise, my setup on an M4 Max Mac Studio with 128GB RAM held up well. I didn't notice a significant difference in response time compared to cloud-based AI services. But these are initial observations; a more extensive project will reveal the true potential.
Have you ventured into the world of local AI coding? Share your experiences with setting up Goose, Ollama, or Qwen on your hardware. How does it compare to cloud-based solutions like Claude or OpenAI Codex? Let's discuss in the comments. Stay tuned for the next installment, where I'll delve deeper into the AI agent coding process and attempt to build an iPad app.