Ranking the General Managers
1. Introduction
Theo Epstein might be a god, but how do you go about proving that? Billy Beane might be a genius, but how good is he compared to, say, Terry Ryan? And is Brian Sabean really as bad as the sabermetric community makes him out to be, or is he as good as traditional baseball analysts seem to think?
You have questions, and I think I might have the answers. After seeing the Wins-Added Manager Measurement and Evaluation Rating System, which rates managers and was developed by Bradford Doolittle, I started thinking about quantifying the quality of MLB’s general managers. My first step was to think through the key things a GM has to do and decide which of them are quantifiable. I came up with three categories that I can measure and that I think are extremely important: building a roster tailored to the team’s home ballpark, getting good bang for your buck, and making smart in-season decisions, whether that means making a trade or sticking by a struggling player. Let me explain how I tackled each category.
2. Methodology
Building a Team Tailored to Its Home Ballpark
While this is the least significant of the three measures, I find it to be an important one. Every major league team has a home-field advantage, but building a roster that maximizes it can be very important. If you’re the Boston Red Sox, for example, you don’t want a pitching staff consisting mostly of left-handed flyball pitchers. And if you’re the San Diego Padres, you don’t want three slow outfielders patrolling spacious PETCO Park.
Measuring how well a team was suited to its home ballpark was simple and straightforward. I adjusted runs per game at home (H_RPG) by the team’s park factor (taken from the Bill James Handbook) and divided that by runs per game on the road (A_RPG). Next, I scaled each team’s H_RPG/A_RPG so that the league average equaled one (since all of my measurements were going to be expressed as above/below average). I did the same calculation for runs allowed per game, home and away. I then calculated how many runs above (or below) average each team was and divided that by the league runs-per-win number. Here is the leaderboard for 2004:
Team            H/A Diff
Diamondbacks       7.69
Devil Rays         6.78
Rangers            6.67
Brewers            5.82
Red Sox            5.81
Yankees            5.38
Blue Jays          3.29
Twins              3.26
Rockies            2.95
Pirates            2.37
Athletics          2.29
Mets               1.75
White Sox          1.62
Cubs               1.53
Astros             0.84
Dodgers           -0.03
Cardinals         -0.10
Royals            -0.54
Marlins           -0.75
Phillies          -0.78
Braves            -1.59
Giants            -2.98
Nationals         -3.19
Indians           -3.58
Reds              -3.59
Mariners          -3.85
Tigers            -5.86
Angels            -9.60
Padres            -9.94
Orioles          -11.65
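The steps above can be sketched in code. This is a minimal illustration rather than the article's actual calculation; the park factor, league ratio, and runs-per-win figure in the example are made-up inputs.

```python
def park_wins_above_avg(h_rpg, a_rpg, park_factor, lg_ratio, runs_per_win, games=81):
    """Wins above average from home/road scoring, after a park adjustment.

    h_rpg, a_rpg: runs per game at home and on the road
    park_factor: how much the home park inflates scoring (1.04 = +4%)
    lg_ratio: league-average adjusted H/A ratio, used to center the league at 1.00
    """
    adj_h_rpg = h_rpg / park_factor                    # strip the park's effect on home scoring
    norm_ratio = (adj_h_rpg / a_rpg) / lg_ratio        # league average scaled to 1.00
    extra_runs = (norm_ratio - 1.0) * a_rpg * games    # runs above an average home/road split
    return extra_runs / runs_per_win

# A hypothetical team: 5.2 R/G at home in a 1.04 park, 4.6 R/G on the road,
# with roughly 10 runs per win.
print(round(park_wins_above_avg(5.2, 4.6, 1.04, 1.0, 10.0), 2))  # → 3.24
```

The same function would apply to runs allowed (with the sign flipped), with the two results summed per team.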
Bang For Your Buck
I will use a formula here that I recently introduced on Fanhome, which prompted a discussion with Tangotiger. His objection was that a “SMART (his capitalization, not mine) team could add salary and get some bang for their buck” at a higher rate than the formula I am about to present would predict. But I’m not comparing GMs to smart GMs; I’m comparing them to the average GM. And empirical evidence tells us that the more a payroll is increased, the more a team spends for each marginal win. It also makes logical sense that even a team with All-Stars at every position would lose a certain number of games. So the formula I will use here is,
xW% = (.6*Lg_Payroll + Team_Payroll)/(2.2*Lg_Payroll + Team_Payroll),

where Lg_Payroll is the average team payroll in the league.
This sets a replacement-level team ($7.5 million payroll) at .307 and produces a graph like this:
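As a quick sanity check on the formula, here is a short sketch. The $69 million league-average payroll is an assumed round number standing in for 2004, not a figure from the article.

```python
def expected_win_pct(team_payroll, lg_avg_payroll):
    """Expected winning percentage from payroll alone, relative to the average GM."""
    return (0.6 * lg_avg_payroll + team_payroll) / (2.2 * lg_avg_payroll + team_payroll)

LG_AVG = 69.0  # assumed league-average payroll, in $ millions

print(round(expected_win_pct(7.5, LG_AVG), 3))     # replacement level: → 0.307
print(round(expected_win_pct(LG_AVG, LG_AVG), 3))  # average payroll: → 0.5
```

Note that a team spending exactly the league average always comes out at .500, and the curve flattens as payroll grows, which captures the diminishing returns on each marginal dollar.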
I used this formula to predict each team’s won-lost record and then compared that to Diamond Mind’s win projections. One question that might arise is why I did not use actual W/L records. The reason is simple: a GM’s job in the offseason (which is when payroll is mostly set) is to put together a team that he projects will win (or do as well as it possibly can). If the team doesn’t play up to its abilities, or half the squad gets hemorrhoids and is out for a month, it’s not fair to fault the GM. The GM needs to put together a team that looks good on paper, and that is what I’m measuring. So, again, the leaders for 2004:
Team            Payroll vs. Projected Wins
Athletics         13.18
Indians           10.60
Marlins            6.72
Twins              6.58
Phillies           6.50
Nationals          6.17
Padres             5.83
Astros             5.23
Cardinals          4.60
Royals             4.19
Blue Jays          4.12
Brewers            3.23
Red Sox            2.59
Reds               1.64
White Sox          0.92
Pirates            0.69
Cubs               0.30
Mariners           0.16
Giants             0.00
Orioles           -0.58
Devil Rays        -0.88
Braves            -3.58
Diamondbacks      -4.78
Rangers           -5.03
Angels            -6.61
Rockies           -8.17
Yankees           -9.26
Tigers           -12.46
Dodgers          -13.40
Mets             -18.51
Midseason Actions
In Moneyball, Billy Beane says that the first two months of the season are spent finding out what you have, the next two are spent getting what you need, and the team you want plays the last two months. So the third thing I looked at was how adept GMs were at building the team they wanted. Again, my methodology was simple. First, I took each team’s record on August 1 and projected its end-of-season record using Dennis Boz’s formula,
Final W% = .5*(1-GP/162)^2.25 + (W/GP)*(1-(1-GP/162)^2.25),

where GP is games played and W is wins through August 1.
As discussed on Fanhome, this formula is remarkably accurate and fit well with what I was doing. I then compared each team’s projected record with its actual final record, resulting in this table:
Team            PW/W Diff
Astros            11.00
Red Sox            9.91
Braves             7.30
Angels             5.45
Giants             4.41
Orioles            4.09
Phillies           3.54
Cardinals          3.51
Cubs               2.45
Marlins            2.00
Twins              1.85
Nationals          1.47
Yankees            0.96
A’s                0.85
Dodgers           -0.06
White Sox         -0.16
Mariners          -0.42
Rangers           -0.54
Royals            -1.73
Diamondbacks      -2.19
Indians           -2.46
Padres            -2.46
Reds              -3.54
Blue Jays         -4.40
Rockies           -4.54
Pirates           -5.62
Tigers            -6.23
Mets              -6.51
Devil Rays        -7.03
Brewers          -11.33
This should do a pretty good job measuring how well a GM was able to evaluate his team in-season.
One objection that can be made is that bad teams are at a disadvantage, because they will often trade away players or call up minor leaguers with an eye toward the future. But what I’m doing is measuring how well a GM did in a specific season, and a GM’s job in any season is to maximize wins.
3. Final Results for 2004
General Manager (Team)      $/PW Diff   H/A R Diff   H/A RA Diff   MidSea W/W Diff   Overall
Theo Epstein (BOS)              2.59        4.29         1.52            9.91          18.31
Gerry Hunsicker (HOU)           5.23       -0.45         1.28           11.00          17.06
Billy Beane (OAK)              13.18        3.55        -1.26            0.85          16.31
Terry Ryan (MIN)                6.58        2.79         0.47            1.85          11.69
Ed Wade (PHI)                   6.50       -2.68         1.90            3.54           9.26
Walt Jocketty (STL)             4.60        0.42        -0.52            3.51           8.02
Larry Beinfest (FLA)            6.72       -2.81         2.06            2.00           7.97
Mark Shapiro (CLE)             10.60       -1.10        -2.48           -2.46           4.56
Omar Minaya (WAS)               6.17       -6.68         3.49            1.47           4.45
Jim Hendry (CHC)                0.30        4.99        -3.46            2.45           4.29
J.P. Ricciardi (TOR)            4.12        2.96         0.32           -4.40           3.01
Ken Williams (CWS)              0.92        3.98        -2.36           -0.16           2.38
John Schuerholz (ATL)          -3.58        0.35        -1.94            7.30           2.13
Allard Baird (KCR)              4.19       -9.54         8.99           -1.73           1.92
Brian Sabean (SFG)              0.00        3.42        -6.40            4.41           1.43
John Hart (TEX)                -5.03        2.01         4.67           -0.54           1.11
Joe Garagiola (ARI)            -4.78        1.14         6.55           -2.19           0.72
Chuck LaMar (TBD)              -0.88        1.98         4.79           -7.03          -1.14
Doug Melvin (MIL)               3.23        5.16         0.67          -11.33          -2.28
Dave Littlefield (PIT)          0.69       -2.99         5.36           -5.62          -2.57
Brian Cashman (NYY)            -9.26        1.69         3.69            0.96          -2.91
Bill Bavasi (SEA)               0.16       -4.70         0.86           -0.42          -4.10
Dan O’Brien (CIN)               1.64       -4.89         1.31           -3.54          -5.49
Kevin Towers (SDP)              5.83       -3.49        -6.44           -2.46          -6.57
Beattie/Flanagan (BAL)         -0.58       -2.46        -9.19            4.09          -8.14
Dan O’Dowd (COL)               -8.17        3.32        -0.37           -4.54          -9.75
Bill Stoneman (ANA)            -6.61       -2.98        -6.63            5.45         -10.76
Paul DePodesta (LAD)          -13.40        3.50        -3.54           -0.06         -13.50
Jim Duquette (NYM)            -18.51        1.70         0.05           -6.51         -23.28
David Dombrowski (DET)        -12.46       -2.46        -3.40           -6.23         -24.54
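To make the composition explicit: the Overall column is simply the sum of the four component columns (with small rounding differences in some rows). Using Theo Epstein's line from the table as a check:

```python
# Theo Epstein's 2004 component scores, read from the table above.
epstein = {
    "$/PW Diff": 2.59,        # payroll vs. projected wins
    "H/A R Diff": 4.29,       # home/road runs scored
    "H/A RA Diff": 1.52,      # home/road runs allowed
    "MidSea W/W Diff": 9.91,  # midseason projection vs. actual
}
overall = round(sum(epstein.values()), 2)
print(overall)  # → 18.31
```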
4. Conclusion
First of all, I’m happy to say that this table generally conforms to what I expected, with a few exceptions (Minaya, Williams, Sabean, DePodesta), meaning it conforms to Bill James’ 80/20 rule and, more importantly, that I’m likely on the right track. Winning GMs are generally near the top and losing GMs near the bottom, which is expected, since good GMs are going to put together better teams; again, there are some exceptions (Shapiro, Minaya, DePodesta, Stoneman, Cashman) that prove the rule.
But while I’m pretty happy with my results, there is one disclaimer I must include. While this is presented in wins above average, I’m not claiming that if you gave David Dombrowski an average payroll, his team would win 56 games. Wins are simply an easy way to express the results, and I don’t claim to know exactly what the numbers mean. If you want my best guess, regress everything 50 percent and that’s the actual number of extra wins a GM was worth. And who knows? Maybe with an average GM, the Tigers would have won 88 games. That’s certainly not out of the realm of possibility. One quick thing to add: I ran the numbers for 2003 and found an r-squared of .5 (r a little over .7), which makes me believe that this is not just complete randomness (Beane, Epstein, and Ryan were the top three). That makes me happy.
And yes, Theo Epstein is clearly a god. And I now have proof.