So, apparently a popular (at least in some circles) figure in the procurement technology space retired last week. Dr. Elouise Epstein created the so-called spider chart back in 2017. I first saw it at ScoutRFP’s (now Workday) event in San Francisco that same year. While I understood the idea behind it (i.e., that the Big S2P suites weren’t being used successfully), I never really understood how the specific solution providers were selected or, in some cases, classified. Nor did I ever really understand how the chart was meant to be used.
By 2025, the chart seems to include hundreds of solution providers, often with fairly arbitrary classifications. Are you supposed to use all of them? Are the ones mentioned somehow recommended in their categories? But let me be the first to admit that I have not read up on the spider chart, so I may have made the same mistake as many others do when looking at quadrants, waves, and other simplified models: just looking at the graphic and jumping to conclusions.
What the spider chart did well was give solution providers that will likely never make it onto any ratings quadrant a place in the spotlight. That said, they had to share that spotlight with about a hundred other solution providers, and as more were added, the classifications became more and more arbitrary.
Now, if we compare this with a good tech advisor, magic quadrant, wave, marketscape, or whatever the analyst firm in question calls its report, there are some key differences. Unlike the spider chart, these reports are designed to compare solutions and solution providers in specific markets. Thus, the first difference is that there should be a clear definition of what a solution must offer to be evaluated. The second is that there should be clear definitions of the criteria used to evaluate the solutions. That said, I have seen some of these ratings compare truly confusing groups of solution providers, so nothing is guaranteed.
So, What Can We Take Away From This?
Firstly, that solution providers love being included in whatever type of graphic or report you can think of. I get it. When it comes to the ratings reports, being featured at least requires you to meet a set of inclusion criteria (even more so if you are rated as some sort of leader). That’s an accomplishment. But celebrating being mentioned as some sort of sample solution provider doesn’t feel like much of an accomplishment. The same goes for solution providers that proudly claim to be mentioned as sample vendors in a hype cycle, for example.
A second takeaway is that you as a reader must understand the report you’re reading or the graphic you’re looking at. Understand the inclusion and evaluation criteria before drawing any conclusions. In fact, even if you’ve done that, I strongly recommend talking to one of the report’s authors, as they can provide more detail and more precise recommendations given your specific context and requirements.
At Ardent Partners, we call our evaluation reports Tech Advisors, and they follow a rigorous methodology to ensure fairness and consistency. If you have any questions, don’t hesitate to reach out.