Every time I interview, I realize I’ve been building on a foundation with major gaps. That foundation is front-end-heavy full-stack development—React, Next.js, TypeScript, Node.js. It’s a stack I’m comfortable in. I know how to move fast, ship features, and build systems that scale on the front end. But the deeper backend work? The LeetCode and algorithm tests? The core computer science terminology? That’s where I wanted, no, needed to grow.
So instead of jumping straight into tutorials or spinning up yet another API, I decided to take a step back and do something I don’t think we do enough: build a learning curriculum from scratch. And this time, I used AI to help me out.
Starting with Research
Before writing a single line of Go, I wanted to make sure I was learning the right language for me. During my job hunt, I collected roles I’d be interested in and noted which backend languages showed up consistently. That quickly narrowed my choices down to two options:
- Python
- Go
Both are solid choices, but they solve slightly different problems. Python is everywhere—data, AI, scripting, backend services. It’s flexible, widely adopted, and easy to pick up. But I kept coming back to one thing: I didn’t just want flexibility. I wanted structure. I wanted types. I wanted straightforward control flow. I wanted something that felt closer to how I already think in TypeScript, but pushed me further into backend systems.
So I chose Go. Go felt like the most natural next step. Coming from JavaScript and TypeScript, the syntax is approachable, but the philosophy is different enough to force you to think differently. There’s no hiding behind abstractions. No over-engineering your way out of decisions. Everything is explicit.
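That explicitness is easiest to see in error handling. Where JavaScript lets exceptions bubble up invisibly, Go makes every failure part of the function signature. A minimal sketch (the `divide` function and its names are my own, just for illustration):

```go
package main

import (
	"errors"
	"fmt"
)

// divide returns an error as a second value instead of throwing.
// Callers can't ignore the failure case without doing so visibly.
func divide(a, b float64) (float64, error) {
	if b == 0 {
		return 0, errors.New("division by zero")
	}
	return a / b, nil
}

func main() {
	result, err := divide(10, 2)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(result) // prints 5
}
```

Coming from TypeScript, the `if err != nil` pattern feels repetitive at first, but it’s exactly the “no hiding behind abstractions” philosophy: every decision about failure is written down where it happens.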
Also—loops.
This might sound small, but I find comfort in having a simple, predictable loop structure that doesn’t rely on array methods for everything. Go brings that back in a way that feels refreshingly direct. More importantly, Go is built for the kind of backend work I want to get better at: APIs, services, concurrency, and performance.
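Concretely, Go has exactly one looping keyword, `for`, which covers the classic indexed loop, the “while” loop, and element iteration via `range`. A quick sketch of what replaces a TypeScript `.map`/`.reduce` chain (the slice and variable names here are my own):

```go
package main

import "fmt"

func main() {
	nums := []int{1, 2, 3, 4, 5}

	// Classic three-part loop: explicit index, no method chaining.
	total := 0
	for i := 0; i < len(nums); i++ {
		total += nums[i]
	}

	// range form when you just want each element (like .map, but explicit).
	doubled := make([]int, 0, len(nums))
	for _, n := range nums {
		doubled = append(doubled, n*2)
	}

	fmt.Println(total)   // prints 15
	fmt.Println(doubled) // prints [2 4 6 8 10]
}
```

It’s more verbose than `nums.reduce(...)`, but there’s only one construct to reach for, and the control flow reads top to bottom.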
Coming Up with a Plan (with AI)
Once I landed on Go, the next challenge was figuring out how to learn it. I didn’t want to only learn how to build an API. I wanted to learn core backend topics first: language fundamentals, data structures, and algorithms. Only then would I move on to real-world application. I gave two AI tools the exact same prompt: “Create a learning plan for Go. Include basic data structures/algorithms.”
A quick note here: I could have iterated on this prompt, refined it, added constraints, and probably gotten better results from both tools. But that wasn’t the point. I wanted to compare what each one produced from the same prompt.
I tested:
- Claude
- Google Gemini
Claude
Claude’s output was… thorough.
What worked
It generated a well-structured Microsoft Word document that broke everything down into phases. Each phase covered a general topic and included example projects to reinforce what I was learning. One thing that stood out was how tailored it felt. Since Claude had access to my resume, it made comparisons to backend tools I already use, primarily Node.js. This inclusion made the daunting task feel more grounded and less abstract. The final phase was especially strong. It included a portfolio project that tied everything together, along with guidance on what I should demonstrate and even suggested project ideas.
It didn’t just tell me what to learn—it told me how to prove I learned it.
What didn’t work
My biggest issue with the results was vagueness. Everything was broken into high-level phases, which is helpful, but I still had to do some work to translate that into day-to-day learning. It told me what to focus on, but not always how or when. It was also long. The plan spanned 14–16 weeks. That’s not unreasonable, but the length felt daunting before I’d even started. Another friction point: all the details lived in the document. The chat itself didn’t give me much to work with unless I opened the file.
Gemini
Gemini took a different approach.
What worked
It was concise. Instead of long phases, Gemini broke things down week by week—an 8-week plan with clear goals for each week. I immediately knew what I was supposed to learn and what I should be able to do by the end of the week. Each week also included a project that built on the previous week’s material. That structure made the learning feel incremental and practical instead of theoretical. Gemini also included a “Real-world engineering” phase, which I appreciated. It acknowledged that learning a language isn’t just about syntax—it’s about how things actually work in production. I loved that this phase came at the end, letting me see everything I’d learned come together.
What didn’t work
The output wasn’t as polished. When I tried to generate a document from the results, the output felt half-baked compared to Claude’s version. Gemini’s plan also lacked a true final project—a capstone that pulls everything together. The weekly projects were helpful, but I still wanted that one larger piece to validate that I’d actually learned what I set out to learn.
Where I Landed
This comparison showed me how each tool fits into my process. I preferred Gemini’s approach to learning: shorter timeline, clear weekly goals, incremental projects, and real-world context. But Claude built a better system: a more thorough structure, a stronger final project, better documentation, and more tailored insights. In the long run, Claude gave me a more complete learning plan. It just needed refinement. So that’s what I’m doing. I’m using Claude to iterate on the plan: pulling in the clarity and pacing I liked from Gemini, while keeping the structure and depth Claude provided. In the end, I got a plan that feels right for me, one that encourages me to start learning instead of aspiring to learn.
And honestly, that’s usually the hardest part.