I've read a lot of RFPs lately. Too many, probably.
And I keep seeing the same section. Different heading every time, same questions underneath:
"how many years has your team worked on this platform, list your certified developers, provide three client references from the past three to five years."
I get it. Five years ago these were exactly the right filters. Platform complexity was high, AI tools didn't exist, and the only real signal of competence was time spent doing the work. Experience was a genuine differentiator.
But something has changed. And the RFP process hasn't caught up yet.
When you ask an implementation partner how many years they've worked on Shopify or Sitecore, you're asking about their past. That's useful context. But it tells you almost nothing about how they work today, how fast they can deliver, or whether they'll be able to keep pace with the changes your business will need over the next few years.
Teams that have fully integrated AI into their workflow are delivering the same scope of work significantly faster than teams running the same processes they used in 2020. We're not talking about marginal improvements. Project phases that used to take weeks now take days: code scaffolding, documentation extraction, integration mapping, test generation, edge case discovery. These are the tasks that historically burned project budgets and pushed out timelines, and AI dramatically compresses the time they take.
A firm with three years on a platform and AI woven into every step of its process will outdeliver a firm with ten years on that platform still running the playbook it used before these tools existed. That's not a criticism of experience. Experience still matters enormously. The question is whether that experience is being amplified by AI or running at the same pace it always has.
I'm not suggesting experience and references don't belong in an RFP. They do. But alongside them, the evaluation should include questions that simply didn't exist two years ago. How does your team use AI during the build phase, not in demos but in the actual day-to-day workflow? How much faster do you deliver now than you did two years ago, and what specifically drove that change? Show me a project where AI changed the outcome: what happened, and what did it mean for the timeline and cost?
These questions are harder to answer with a slide deck. They require a firm to actually show you how they work, not just what they've built.
Here's the uncomfortable part. The companies that benefit most from evaluation criteria that haven't changed are the ones whose work hasn't changed either. Which means right now, companies are picking implementation partners based on who those partners were, not what they can deliver today.
