A conversation with Bill Baer

Well, The Pujols Awards again came up short on submissions. Maybe we'll have to make it an every-other-week feature unless I get a lot of nominations in a given week. Anyway, Bill Baer of Baseball Digest Daily and Crashburn Alley (and a person I neglected to mention in Wednesday’s column) had some thoughts to share with me regarding the article “I was born a ramblin’ man.”

I thought our discussion might help clarify some misunderstandings many had about what I wrote. Bill understood where I was coming from and offered feedback on that basis. Here’s what we went over:

John, you stirred up quite a controversy with your recent THT article. As I mentioned, I enjoyed it and I don’t think that your main point should be lost among a couple of generalities.

Well, that’s good—stirring up controversy. I’m not dumping on sabermetrics; I’m concerned about its current form and direction. I began studying it because so much that occurs on the field simply isn’t captured by traditional numbers. My favorite example of what piqued my interest is something like this: Rod Barajas reaches on an error, Lyle Overbay doubles him to third, and Brad Wilkerson hits a deep fly ball and scores Barajas. Wilkerson gets an RBI, Barajas a run scored (even though both made outs) and Overbay nothing. Were that to happen 200 times to Overbay in a season, it would be said that he hit a “soft” .360 even though he was the key guy in producing runs.

Obviously, that’s idiotic. It was situations like this that got me into sabermetrics and why I still study it. I don’t ignore sabermetrics by a long shot. I have some points of general disagreement in that I prefer players be compared to league/positional averages rather than “replacement level” or “bench.” I like the fact that defensive measures are starting to become more reliable. Heck, I was excited as anybody waiting for Bill James’ Win Shares to come out. I loved his old Abstracts and remain a fan.

Sabermetrics is the science of baseball. The credo of science is that it admits its own ignorance, is self-critical, and is always on a quest for more information. It is some of the people who do the science who cause it to stagnate. Obviously, I’m simply restating something that you’ve already said, but I think it’s important to keep in mind that the science is not what’s wrong; it’s the people.

I thought I conveyed that point. Especially in the conclusion: “From Alexander Cartwright to Branch Rickey through Bill James, the search for understanding this great game will continue as long as three strikes mean you’re out.” The trouble is that many identify themselves with the science, and since science is ever-reliable and mathematics represents the ultimate truth, they begin to think, act and write as if they are ever-reliable and sources of ultimate truth.

That mindset is what creates arrogance and closed-mindedness in what they say, think and write. This is where things start to stagnate. It’s why I put out the reminder that:

“In a sense, sabermetrics has jumped the shark since it has gone from being a verb (the first part of the word is from Society for American Baseball Research) to becoming a noun. It has become an ideology, an end; when you embrace sabermetrics you have reached enlightenment in all things baseball and all that is left is a crusade to convert the ignorant masses to the new light. Many of Bill James’ later acolytes have gone the same way as Jesus Christ’s post-apostolic-era “followers.” They have gone from being the persecuted to being the persecutors. Instead of being the ones executed as apostates, they have become the ones executing apostates. They have gone from being students and teachers to conquerors and crusaders.”

It has gone from research to research within certain parameters, instead of weighing all points of data even when those points may be unpalatable. Science is about letting all the data speak and letting the chips fall where they may. However, some within the sabermetric community use only statistical data when doing analysis. Obviously, more often than not such research ends up with an answer that fits neatly within the framework. All this does is reinforce the ideologues’ conviction that it’s the ultimate truth, even though certain information was deliberately excluded.

In a sense it’s like questioning evolution/creation: It’s okay for an evolutionist to raise questions about evolution because he’s an insider; the scientists don’t feel threatened. Were an outsider/skeptic to raise the exact same question, the defenses go up. It goes likewise for those who believe in creation. A fellow believer can raise a question that an atheist cannot. It becomes an issue not of the question itself, but of who is asking it. You’re not allowed to question it unless you’ve pledged fealty to a certain belief system.

That’s how I feel: since I use sabermetrics without being part of the group that embraces it as gospel truth, I draw a lot of fire. It’s okay for Derek Zumsteg to question something, but it’s not okay if Bill Conlin poses the exact same question. One is merely curious; the other is an idiot for even asking.

It is entirely true that there are some adherents of sabermetrics who are causing it to stagnate. There is no argument that when you stop admitting ignorance (or, conversely, think that you know it all), you lose the ability to learn any more.

John, I think you’re bringing up some very important criticisms, and there are going to be a lot of people who aren’t interested in hearing them. Don’t let that stop you. I believe you understand the problem and see the path more clearly than most, and it’s important for you to keep pounding on those keys—it’s for our own good.

Thanks, as I said, I’m not against sabermetrics as long as it remains a verb and not a noun (metaphorically speaking—hi Dave!) … a field of study (that includes all points of data) and not an ideology.

Lastly, on the subject of intangibles, I wonder what the general sabermetric consensus is on them. I’ve never been able to glean that. My personal feeling on it is that they do indeed exist, but since they are intangibles, and by definition immeasurable, it’s more scientifically inaccurate to shove them into your analysis than it is to exclude them.


Well, I think those are best handled as qualifiers rather than as part of the study itself. Often the conclusions reached by some within the community are treated as near absolutes rather than as subject to variation and exception. Although it’s not an intangible, raw run scoring has become sacrosanct (an absolute), and anything that depresses run scoring is evil and idiotic, punishable by being dragged outside McAfee Coliseum and pelted with old Abstracts.

However, no team has won a pennant merely by scoring the most runs in a season; pennant-winning teams have often led the league in run scoring, but correlation doesn’t necessarily equal causation. Teams win pennants by scoring more runs than their opponents more often, within 162 individual game units. You can outscore your division rival over the course of a season, but if they outscore their opponents one more time than you do, they win the flag. Run distribution is what wins games, and sometimes teams have to do things that may depress run scoring over the course of a year if it aids distribution within a given game.
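The distribution point can be shown with a back-of-the-envelope sketch. The numbers below are entirely made up for illustration: two teams score the same 18 runs over four games against an opponent that scores four runs every night, but the team that spreads its runs around wins three games to the other’s one.

```python
# Hypothetical four-game stretch; every figure here is invented
# purely to illustrate distribution vs. raw accumulation.
opponent = [4, 4, 4, 4]

bunched = [12, 2, 2, 2]   # one blowout win, three narrow losses
spread  = [5, 5, 5, 3]    # same 18 total runs, spread more evenly

def wins(us, them):
    """Count the games in which we outscored the opponent."""
    return sum(u > t for u, t in zip(us, them))

print(sum(bunched), wins(bunched, opponent))  # 18 runs, 1 win
print(sum(spread), wins(spread, opponent))    # 18 runs, 3 wins
```

Same season run total, very different place in the standings—which is the whole argument in miniature.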

As I mentioned in a TPoSGD post, the Angels are doing a poor job scoring runs but a good job distributing them. Now, sabermetrics will tell us that it’s most likely a fluke that will even out over time. What some sabermetricians will not tell us is whether, when they say it will “regress to the mean,” that will happen within the 162-game unit being contested or outside of it. Is the Angels’ 31-13 record in games decided by two runs or fewer evening out something that occurred in 2007, or will it even out sometime in 2009?

It’s the same with the Blue Jays’ hitting (especially with RISP): the players should return to the mean—unless their slumps are themselves the return to the mean from a hot run in 2007, or unless they won’t return to career norms until 2009. The larger the sample, the more reliable the results. But sample size doesn’t run on a convenient early April-early October time frame on a year-to-year basis. A team cannot transfer what occurred in 2007, or what will occur in 2009, into this season. It can only deal with the here and now, since there’s no way to know whether a return to the mean will happen within a period of time that benefits the season being played.

This is why I feel teams struggling offensively need to be proactive about improving distribution over raw accumulation. Again, the Jays are 17-27 in games decided by two runs or fewer (and 11-20 in one-run games). This will even out over time, but will it even out within the next 82 games? Will the Jays go 27-17 in their next 44 two-run games and 20-11 in their next 31 one-run games, or will it even out at some point in the 2009 season?

We don’t know, which is precisely why I advocate certain approaches. Had the Jays tossed in a bunt with men on and none out in those 20 one-run losses, might they have won five to 10 of those games? I don’t know, you don’t know and Bill James doesn’t know, but you cannot trot out “suppresses run scoring over a season” or “return to the mean” as reasons not to try it.

The thing is, 2+2=4; it has always been thus, and there are no outliers, flukes or random variations in history that caused 2+2 to equal anything other than four. In baseball, however, things happen that cause outliers, flukes and random variations, which means that a hard-and-fast approach to run scoring (or anything else) is nowhere near a guaranteed solution. Alas, intangibles are one of those things: a pitcher might live for the big moment while a far more talented hurler wets the bed. A hitter might focus better and stay looser in the big situation than a far more talented batter who tightens up when the big blow is required.

As a Jays fan, it’s frustrating to watch A.J. Burnett and Jason Frasor: both have electric stuff, but neither seems to produce the results his talent suggests. Compare those results with a Tom Glavine or a Tug McGraw: inferior raw stuff but far better results when healthy.

A game in which a million new variables will crop up over the next few seasons cannot be fully quantified right now: there is what we know now and what we will know then, and I can guarantee there will be differences. Sadly, some within the sabermetric community react as if all the answers are already in and asking questions is pointless.

I, for one, will keep asking questions regardless of how people react to my queries. It’s not that I do not care about the study of sabermetrics—it’s because I do.

