Can we objectively evaluate advanced fielding data?
by Mike Fast
July 16, 2010

Colin has an article at Baseball Prospectus today that is required Sabermetrics 101 reading. It has already been linked in the THTLinks Twitter feed this morning, but it deserves more prominence and more discussion. This is an article of fundamental importance for the baseball analysis community. Anyone who is evaluating fielding, which is almost everyone in these heady days of Wins Above Replacement (WAR) statistics, needs to read and understand what Colin is saying.

He outlines an approach that the analytical community should have tackled before advanced defensive metrics gained such widespread acceptance. How did we come to accept such statistics without ever objectively testing them? Now that they are being tested objectively, it should not surprise us that problems are being found. That does not invalidate the metrics. It is the path to knowledge.

We are, after all, on a search for objective knowledge about baseball, are we not? Openly and objectively testing defensive metrics is not the quest of those who want to destroy baseball knowledge, as some will tell you when this topic is broached. It is a path well worn by sabermetric pioneers, though "small is the gate and narrow the road that leads to life, and only a few find it." We want to know what we know and why we know it, when to trust it and when not to trust it. We want to know the sources and ranges of the errors. This way lies improved fielding metrics and the ability to silence critics with facts that can be demonstrated convincingly, rather than demanding blind faith that sabermetricians know what they are doing.

This also does not mean that we should stop using advanced fielding data today, or that it has zero utility in objective sabermetrics. First and foremost, this is a clarion call to the community to turn its research efforts toward cracking this problem.
Second, it is a wake-up call to understand and quantify the uncertainty in our measurements of fielding, and in derivative statistics like WAR; it is not a call to abandon them altogether. Scientific inquiry has always operated in an environment of measurements made with uncertainty. This has led scientists to devote great effort to estimating the bounds of that uncertainty in order to determine their confidence in their measurements, and thus their confidence in the conclusions based on those measurements. There is no need to abandon the "science" of fielding measurement. Far from it. What is needed is the application of the time-tested sabermetric approach.

Doubt is not something to be feared. When it is grounded in facts, doubt is healthy. Colin's doubt, which I share, is healthy. Let's take this opportunity as an analytical community and turn doubt into growth.