Exploratory Spikes with AI: My Workflow for Rapid Discovery
A product manager's secret weapon for quickly vetting ideas & building a strategic roadmap
The trap almost every data team falls into is spending 99% of their time responding to inbound requests from the business. As I argued in my data platform product article, it’s essential we balance being responsive to business needs with proactive platform building. But how, HOW exactly do I do that? Well, two ways:
Dedicated long-term platform roadmapping in partnership with engineering leadership
Exploratory research spikes that uncover valuable feature directions for deeper research
Between the two you’ve got a continuously evolving, proactive strategic data platform roadmap. I’ll write about #1 another day; today is about #2.
Why Exploratory Spikes Matter
On data platform teams I’d heard the analogy many times that we are rebuilding the plane while flying it - nothing has ever felt more true to me. Especially because I’ve held this role at two established enterprise companies undergoing digital transformation, but it’d be true at many younger companies too. You have to deliver on:
Table Stakes: Build what the business needs now, with the platform you have today
Continuous Improvement: Improve the ways you build & the platform that enables you and your users to do so
Tomorrowland: Build the platform the business will need in 1, 3, 5 years - all while data & AI landscapes evolve at record pace
Re-build that plane while flying it. Cool!
It’s a lot of concurrent horizons. All day every day new ideas are coming forth - more than I could ever deeply discover. Ideas come from all over:
A dev mentions a new tool that could improve our cost observability
I’m wondering if we should consolidate BI tools and if so, which one and *how*?
I see a LinkedIn headline about a new open-source interoperable semantic standard and wonder - what does this mean for us?
These ideas are often strategically relevant - but chasing them all would derail the roadmap. GenAI has made doing exploratory research spikes faster than ever - unlocking the ability to quickly vet an idea and either abandon it or push it to the discovery backlog for future deep work.
What Triggers a Spike
So when do I run a spike? Literally like 5 times a day, every day. Is this the most efficient? Maybe not, but it’s energizing and gets me results, so I’m gonna keep on doing it. You might be more structured and less chaotic - you do you.
But I do a spike when:
Curiosity meets strategic relevance
The idea could impact cost, reliability, adoption
The idea relates to an already prioritized problem we are working on
Notice, curiosity alone is not enough. I’m not saying I never go down a rabbit hole driven by curiosity alone—sure I do—but then I usually just abandon it after a quick informational Google AI Mode search or something. But a real research spike usually results in me being pulled down a productive rabbit hole that’s headed somewhere useful.
Timebox Your Research Spikes
I’d say most of the time when I do this we’re talking 5-15 minutes. Occasionally up to 30, but not beyond that. That’s the point. This is about rapid exploration, a triage of ideas - is there something there, THERE? You know?
Objective: Learn enough to decide which of these two categories your idea belongs to:
Dead end: Well that was interesting and I’m moving on! Things reach a dead end when I realize:
The tech isn’t mature enough for enterprise context
This overlaps with another part of our stack that works fine
This idea doesn’t do what I hypothesized it could do
Not a good fit with our way of working and priorities
Prioritize for future discovery: Within 5-15 minutes I try to get to a point where it’s clear we’re onto something useful and it’s worth me revisiting later. Which later? Well that depends on urgency, relevance, leverage, if there’s a current project to apply it to etc. We prioritize discovery bandwidth just like we prioritize engineering capacity.
The goal is to reach a stopping point:
Package the learnings so I can pick up where I left off (or hand off to someone else)
Place it in our discovery backlog so I don’t forget about it (a tool like Jira Product Discovery)
The point isn’t to fully vet this, get it to a PRD, pull in others - the point is a quick research spike to turn the flood of ideas into lightly vetted discovery topics to pursue.
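To make “package the learnings” concrete, here’s a minimal sketch of what a parked spike could look like as a structured note. This is illustrative Python, not a real tool we use - the field names and file paths are made up:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from pathlib import Path


class SpikeOutcome(Enum):
    DEAD_END = "dead end"
    FUTURE_DISCOVERY = "prioritize for future discovery"


@dataclass
class SpikeNote:
    """One packaged research spike, ready to park in the discovery backlog."""
    topic: str
    why_relevant: str  # why I thought this mattered at the time
    outcome: SpikeOutcome
    learnings: list[str] = field(default_factory=list)
    next_steps: list[str] = field(default_factory=list)

    def to_markdown(self) -> str:
        # Enough structure that future-me (or a teammate) can pick it up cold.
        lines = [
            f"# Spike: {self.topic} ({date.today().isoformat()})",
            f"**Outcome:** {self.outcome.value}",
            f"**Why relevant:** {self.why_relevant}",
            "## What I learned",
            *[f"- {item}" for item in self.learnings],
            "## How to pick this back up",
            *[f"- {step}" for step in self.next_steps],
        ]
        return "\n".join(lines)


# Park the note where future-you will find it (ours actually live in
# Jira Product Discovery; a markdown folder stands in here).
note = SpikeNote(
    topic="AI-assisted SQL translation for Cognos to Power BI",
    why_relevant="Migration planning starts in ~2 quarters",
    outcome=SpikeOutcome.FUTURE_DISCOVERY,
    learnings=["Out-of-the-box translators look immature for complex reports"],
    next_steps=["Validate accuracy on our own query patterns"],
)
Path("discovery_backlog").mkdir(exist_ok=True)
Path("discovery_backlog/cognos_sql_translation.md").write_text(note.to_markdown())
```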
Copilot as My Accelerator - A Prompt Framework
Did I do research like this before? Of course - I Google all the time. But it was really inefficient: digging through Stack Overflow posts, Medium articles, comparing product documentation. It was hard to get very far in 5-15 minutes. But with AI it’s easier - and small hallucinations don’t matter much here, since the output is usually directionally correct and I’ll do first-hand research when I get to it.
Research spikes follow a lot of patterns, but if I were to synthesize the approach, I might:
Start broad:
“Explain [tool/approach] and its pros/cons.”
“Hmm, how does tool X compare to tool Y (that I’m already familiar with)?”
“How do companies decide whether to consolidate to a single BI tool or maintain a multi-BI strategy long term?”
Narrow:
“Hmm, so if we integrate tool X with tool Z will that be duplicative? Or do teams often mix the two in their stack?”
“If we consolidate to Power BI though, how will our low-code/no-code users get data? They currently rely on Cognos…”
“Well that sounds exciting, but what about applying this in a regulated context (like healthcare or finance)?”
Summarize:
Package findings for future reference.
The goal here is to remind yourself
what you learned,
why you thought this was relevant, and
how to systematically progress the research when you get there.
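I do all of this in a chat UI, but if you wanted to script the same loop, here’s a rough sketch of broad → narrow → summarize as one running conversation. It assumes the Anthropic Python SDK (pip install anthropic); the model id and prompts are placeholders to swap for your own:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
history: list[dict] = []


def ask(prompt: str) -> str:
    """One turn of the spike. Keeping the full transcript means each
    narrowing question builds on the broad answers before it."""
    history.append({"role": "user", "content": prompt})
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder - use whatever model you have
        max_tokens=1024,
        messages=history,
    )
    answer = response.content[0].text
    history.append({"role": "assistant", "content": answer})
    return answer


# 1. Start broad
ask("Explain [tool/approach] and its pros/cons.")
# 2. Narrow based on what comes back
ask("If we integrate tool X with tool Z, is that duplicative, or do teams mix the two?")
# 3. Summarize so future-you can pick the thread back up
print(ask("Summarize this thread as a discovery plan I can revisit next quarter."))
```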
An Example With Sample Prompts & Output
Let’s illustrate with a fun prompt & some screenshots of output, shall we?
Starting Prompt (Broad Understanding)
Can you help me understand the current state of AI-assisted SQL translation tools and whether they might be useful for migrating legacy BI reports from tools like Cognos to modern platforms like Power BI? How realistic are out-of-the-box solutions (vs. custom GitHub Copilot agents we might build in house), and what are the risks/cons of taking such an approach in migrations?
The initial summary from Claude is pretty good:
I mean, as I suspected…this is not straightforward…might be more useful if we were migrating legacy SQL-based pipelines to something like dbt. The response went on a bit longer than this, but for brevity let’s skip ahead…at the end was something useful.
Let’s see what happens on follow-up/narrowing.
Narrowing Prompts (Getting Specific)
Oh interesting. Let’s talk about #1 (AI for discovery & inventory) and #3 (human-in-the-loop validation framework). At work, discovery would probably fall to our product managers (using Office 365 Copilot), but if GitHub Copilot offers key benefits we might have our engineers help us build something we can use for #1.
For #3 we would want something that could potentially act as an agent - given boundaries to tackle in the code base (like a report or set of similar reports), it analyzes, proposes a model, submits a PR, etc. - acting like a junior dev in the workflow. How feasible is something like this?
Whomp whomp - Office 365 Copilot is looking pretty limited, guys, but maybe a hybrid approach could work:
Okay, so maybe we’re not ready to plan our Cognos→Power BI migration for 2 more quarters, but my curiosity is satisfied and I want to park these ideas. Let’s jump to summarize.
*Note: in practice there’s often a lot more back and forth - additional context & clarifying questions - but I’m gonna keep this one simple so you don’t all get super bored…
Summarize Prompt
Interesting, can you summarize this into a discovery plan I can revisit in 2 quarters as we more seriously begin planning our Cognos→Power BI migration? Please include:
1. The findings: What we learned about tool capabilities, accuracy rates, and limitations. Please structure this as three separate discovery directions:
- The hybrid discovery approach you recommend here for discovery & inventory of the current report base
- Semantic context/metadata infrastructure/investments we may need to make to increase the likelihood of accuracy and success
- Actual developer workflow approaches & limitations (after discovery & semantic investment)
2. Possible solution directions:
- Different migration approaches we could take (bulk translation vs. phased, which tools to pilot)
- How we might handle reports whose complexity goes beyond the limits of tools
- How third-party tools (AI-powered or not) beyond Office 365 & GitHub Copilot might expand what's possible here and be worth going through procurement, infosec, etc.
3. Key risks & unknowns: What needs validation before we commit (accuracy on our specific query patterns, security/compliance verification, manual cleanup effort estimates)
4. Dependencies: What we'd need in place (report usage analytics to prioritize, test environment, resource allocation for cleanup)
5. Next steps: What the deep-dive research would need to investigate
Be detailed enough that a team member could pick this up later, but remember the goal is to direct discovery further, not to provide a full report right now.
And ta-da - a nice little summary for me to revisit in 6 months:
It actually made the response way too extensive - I don’t have this problem as much on Office Copilot so apparently Claude is interpreting my prompt differently. Interesting.
Definitely fine-tune this so you get the level of detail you want.
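One way to tame it is to treat the summarize step as a reusable template with explicit knobs for the revisit horizon and the level of detail. A rough sketch - the wording is just my starting point, tune it to taste:

```python
# A template version of the summarize prompt, with knobs for the revisit
# horizon and the level of detail you want back.
SUMMARIZE_TEMPLATE = """\
Can you summarize this into a discovery plan I can revisit in {horizon}?
Please include:
1. The findings, structured as separate discovery directions
2. Possible solution directions
3. Key risks & unknowns: what needs validation before we commit
4. Dependencies: what we'd need in place first
5. Next steps: what the deep-dive research would need to investigate
Target length: {detail}. The goal is to direct discovery further,
not to provide a full report right now.
"""

prompt = SUMMARIZE_TEMPLATE.format(horizon="2 quarters", detail="about one page")
print(prompt)
```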
But the key is this: having the AI take your back & forth rabbit holes and turn them into a coherent future discovery plan makes it much easier on future you to jump back in without having to read a scattered transcript.
In Conclusion
I never know if I’m writing up super obvious AI tips - but simple repeatable patterns like this really do change the flow of work for me. Doing exploratory spikes allows me to consider longer range strategic work in small pockets of time in between the more urgent work we’re always doing to respond to immediate and near term business needs.
Having AI assist speeds up discovery in these small pockets.
Having AI recontextualize these spikes really reduces the cognitive load and sets up future me for success when I go to revisit it.
It lets me follow inspiration and curiosity without wasting too much time. It is really energizing too.
I’m curious - how do you explore ideas without derailing delivery? Drop your workflows—or your favorite prompts—in the comments.