Choosing your AI tools as a PM - a practical guide
Why the tools matter less than you think, and one decision that matters more than most PMs realise
If you spend any time in PM communities right now, a significant portion of the conversation is about tools. Which AI is best. Which platform does the most. Which new product just launched and whether it changes everything.
Most of it is noise.
Not because the tools don't matter - they do. But because the conversation tends to focus on the wrong question. The question most PMs are asking is "which tool should I use?" The more important question is "what do I actually need, and how do I make sure I own it?"
This article answers both. But it starts with the second one, because most guides skip it entirely and it's the thing you'll be most glad you thought about early.
The platform trap
Right now, there are a growing number of PM-specific AI platforms that promise to handle your workflows end to end. AI-powered roadmapping tools. Platforms that generate PRDs from a prompt. Tools that claim to manage your entire product process with AI built in.
Some of these are genuinely useful. But there's a risk in building your working practice around any of them, and it's worth naming directly.
When you work inside a specialised platform, your prompts, your workflows, your context, and your outputs live inside that platform. The way you work gets shaped by what the platform supports. If the platform changes its pricing, gets acquired, pivots its focus, or shuts down, you lose not just the tool but the entire working practice you built around it.
Platform Dependency Risk
Build your workflow around one proprietary platform and you inherit its business risk. Build your workflow around portable files, prompts, and context docs you own, and you can switch tools without losing how you work.
This has happened repeatedly in the software tools space. Products that PMs built significant workflows around have disappeared, changed fundamentally, or priced themselves out of reach. Every time, the people most exposed were the ones who had invested deeply in platform-specific features rather than portable skills and systems.
The AI space is moving faster than any tools market in recent memory. The platforms that exist today are not all going to exist in their current form in two years. Some will be acquired. Some will pivot. Some will be made obsolete by model improvements that make their core value proposition irrelevant.
Building your working practice in a way that doesn't depend on any single platform surviving is not paranoia. It's just sensible.
What owning your workflow actually looks like
The alternative to platform dependency is a toolkit you own and control, designed to work across multiple tools and models, and stored somewhere you will always have access to it.
This is exactly what the ai-pm-toolkit is built around, and it's the principle worth understanding regardless of whether you use that toolkit or build your own.
When your prompts are markdown files in a GitHub repository, they work with any AI tool that accepts text input. When your context doc templates are plain text files you fill in before a session, they can be pasted into Claude, ChatGPT, Gemini, or any model that comes next. When your workflows are documented as readable playbooks rather than automated sequences inside a proprietary platform, they survive any tool change.
GitHub is the right place to keep this, for several reasons. It's free, it's permanent, it versions your changes over time so you can see how your approach has evolved, and it's accessible from anywhere. More importantly, it's yours. Nothing about how GitHub stores a folder of markdown files is going to change in a way that makes your prompts inaccessible.
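To make this concrete, here is one way to bootstrap such a repository from the command line. The folder and file names are illustrative, not prescribed by the ai-pm-toolkit or any other tool; the only principle that matters is that everything is plain text under version control.

```shell
# A minimal, portable PM toolkit: plain markdown under version control.
# Folder and file names here are illustrative, not prescribed by any tool.
mkdir -p pm-toolkit/prompts pm-toolkit/context-docs pm-toolkit/playbooks

# A prompt is just a markdown file any model can read.
cat > pm-toolkit/prompts/pressure-test-decision.md <<'EOF'
Act as a sceptical senior PM. Given the decision and context below,
list the three strongest objections and the evidence that would
change my mind.
EOF

# A context doc template you fill in before each session.
cat > pm-toolkit/context-docs/product-context-template.md <<'EOF'
## Product
## Users
## Current goal
## Constraints
EOF

# Version everything so you can see how your practice evolves.
git -C pm-toolkit init -q
git -C pm-toolkit add -A
git -C pm-toolkit -c user.name="PM" -c user.email="pm@example.com" \
  commit -q -m "Initial portable toolkit"
```

From here, pushing the folder to a GitHub repository gives you the permanence and history described above, and every file remains something you can paste into any model's interface.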
This might sound like extra work compared to just using a purpose-built PM AI platform. Initially it is. But the compound return on building a portable, platform-agnostic toolkit is significant. Every prompt you refine, every context doc you improve, every workflow you develop becomes a durable asset that travels with you across tools, across jobs, and across whatever the AI landscape looks like in three years.
The PMs who will be most capable in five years are not going to be the ones who used the best platform in 2025. They'll be the ones who developed a clear, portable, personally owned working practice that they've been refining for years.
The tools you actually need
With that principle in place, here's the practical landscape. You need three things, and for most PMs getting started, two of them are enough.
A conversational AI for thinking and drafting
This is your primary thinking partner. The tool you use to work through ambiguous problems, synthesise research, pressure-test decisions, draft documents, and get a second perspective on something you're not sure about.
Claude and ChatGPT are the two obvious choices here. Both are capable. The practical difference that matters most for PMs is how they handle context and instruction. Claude tends to follow nuanced instructions more precisely and handles longer context well, which matters when you're feeding it a full context doc before starting work. ChatGPT has a broader ecosystem of integrations and a more familiar interface for most people.
Start with whichever you're more comfortable with. The skills you develop using one transfer directly to the other, which is the point. If you're starting from scratch, Claude is the recommendation here, partly because it pairs naturally with the Claude Code workflow described below, and partly because its handling of detailed context instructions is particularly strong.
The free tiers of both are enough to get started. The paid tiers are worth it once you're using them seriously, primarily because they unlock longer context windows and access to the most capable models.
An AI-native editor for building and prototyping
This is where the structural shift described in article one becomes practical. An AI-native editor is a code editor with AI deeply integrated, capable of taking a problem description and building working software, running multiple approaches in parallel, and iterating based on your feedback.
The three main options right now are Cursor, Claude Code, and Windsurf.
Cursor is the most established and has the most mature interface. If you're comfortable with VS Code, Cursor will feel immediately familiar. It's the easiest entry point for PMs who haven't used a code editor before.
Claude Code is Anthropic's own offering and is particularly powerful if you're already using Claude as your conversational AI. It's more terminal-native and slightly higher friction to start with, but the depth of integration with Claude's models is strong.
Windsurf is the newest of the three and has been developing quickly. Its agentic capabilities are strong and it has a clean interface that tends to feel accessible to non-developers.
For most PMs who are new to this workflow, the recommendation is to start with Cursor. It has the gentlest learning curve, excellent documentation, and the widest community of PM practitioners using it.
One important thing to understand: you do not need to know how to code to get value from these tools. You need to know how to describe a problem clearly. That's a PM skill you already have. The editor handles the implementation. Your job is direction and evaluation.
Optionally: a specialist tool for research or synthesis
Some PMs find genuine value in tools that specialise in research synthesis, competitive analysis, or handling large document sets. Perplexity is useful for research that requires current information. NotebookLM is strong for synthesising across a large collection of documents.
These are genuinely useful but not essential when you're starting out. Get the first two working well before adding anything else. The risk of adding too many tools early is that you spend more time managing your setup than actually developing your workflow.
How to think about model choice
Alongside tool choice, there's a lot of noise about which underlying AI model is best. GPT-4o versus Claude Sonnet versus Gemini. Benchmarks and comparisons and arguments about which one is smarter.
For most PM work, this matters less than people think.
The models at the frontier are all remarkably capable. The differences between them for the kind of work PMs do - thinking through problems, synthesising research, generating and evaluating approaches - are real but not decisive. What's decisive is how well you direct them.
A well-formed problem with rich context given to a mid-tier model will produce better output than a vague prompt given to the best model available.
More importantly: the model landscape is changing fast. The best model today is not going to be the best model in six months. If you build your working practice around prompts and context docs that work across models, you can switch when better options appear without rebuilding anything. If you build around platform-specific features that only work with one model or one service, every model improvement potentially requires you to change how you work.
This is another argument for the portable toolkit approach. Write your prompts in plain language that any capable model can interpret. Store your context docs in a format that can be pasted into any interface. Build workflows that are model-agnostic by default.
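As a sketch of what "model-agnostic by default" can look like in practice, the snippet below composes a context doc, a stored prompt, and a task into a single plain-text payload. The function and example strings are hypothetical, not part of any specific toolkit; the point is that because the output is just text, it works pasted into Claude, ChatGPT, or Gemini, or sent through whichever API you prefer.

```python
def build_session_input(context_doc: str, prompt: str, task: str) -> str:
    """Compose one plain-text payload from portable parts.

    Nothing here is model-specific: the result can be pasted into any
    chat interface or passed to any provider's API as a user message.
    """
    sections = [
        ("Context", context_doc),
        ("Instructions", prompt),
        ("Task", task),
    ]
    return "\n\n".join(f"## {title}\n{body.strip()}" for title, body in sections)


# In practice these strings would be read from markdown files in your repo.
context = "Product: internal analytics dashboard.\nUsers: support team leads."
prompt = "Act as a sceptical senior PM and pressure-test the plan below."
payload = build_session_input(context, prompt, "Should we ship weekly email digests?")
print(payload)
```

Swapping models then means changing where the payload is sent, not how your prompts and context docs are written.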
A practical starting point
If you're reading this and want to act on it today, here's the minimum viable setup:
Start a free account with Claude at claude.ai. Spend thirty minutes having a real conversation with it about a product problem you're actually working on. Not a test prompt, not a hypothetical. Something live. Notice what happens when you give it more context versus less. Notice where it surprises you and where it misses the point.
Then, when that feels natural, download Cursor and open it on a project. Use the ai-pm-toolkit quickstart to set up your IDE config and run your first session. You don't need to build anything significant. You need to experience what it feels like to describe a problem and watch something take shape.
That experience is what shifts the mental model. Reading about it is useful. Feeling the speed of a well-directed agent working on something real is what makes it concrete.
On the toolkit side: fork the ai-pm-toolkit repository on GitHub and make it yours. Add notes. Modify the context doc templates to fit how you think. Adjust the prompts based on what works for your specific workflow. The version that lives in your GitHub account, customised to how you work, is more valuable than any out-of-the-box platform.
Start simple. Build something that's yours. Make it better over time.
What to expect from here
The tool choices above will get you set up. The rest of the series is about using them well: building the context habits that dramatically improve your output, developing a daily workflow that sticks, and getting the most out of the toolkit from day one.
Next up: the single habit that will improve your AI output more than any tool upgrade.