Breaking down Division 1 baseball

We’re only a few weeks away from the opening of another college baseball season. With it, we’ll face another round of challenges in evaluating a veritable mountain of statistical data. In Division 1 alone, close to 300 schools will play 50 to 60 games apiece, all under the watchful eyes of Major League scouts and analysts.

As with all baseball stats, that mountain of data must be viewed in context. That’s even more important for college stats than for professional numbers. With so many teams, the level of competition varies drastically. There are plenty of other factors to take into consideration—metal bats, different styles of coaching, park factors that range from 60 to over 200—but there’s more than enough in quality of competition to keep us busy for one article.

Strength of schedule and RPI

Major and minor league baseball fans don’t hear much about “strength of schedule,” but it’s an important tool for comparing teams that play unbalanced schedules, especially when not every team plays every other team.

The Ratings Percentage Index (RPI) is college baseball’s version of football’s BCS computer rankings. It plays a role in determining postseason berths and seedings, using strength of schedule along with the team’s own performance to rank schools. As you might expect, RPI has its detractors—Boyd Nation, for instance, publishes “pseudo-RPIs” designed to replicate the official RPIs, but has also designed his own measure, which he calls Iterative Strength Ratings.
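For reference, the core RPI calculation is just a weighted blend. The weights below are the commonly published NCAA ones; the official formula layers on further tweaks that this sketch ignores:

```python
def rpi(wp, opp_wp, opp_opp_wp):
    """Basic NCAA RPI blend: 25% the team's own winning percentage,
    50% its opponents' winning percentage, and 25% its opponents'
    opponents' winning percentage."""
    return 0.25 * wp + 0.50 * opp_wp + 0.25 * opp_opp_wp

# A .600 team with .550 opponents and .500 opponents' opponents:
print(round(rpi(0.600, 0.550, 0.500), 3))  # 0.55
```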

Regardless of which tool you use to rate schedules, the numbers are fairly consistent from year to year. While the presence of a David Price or a Stephen Strasburg can give an individual school a huge boost for a couple of years, the impact it has on the schedules of other teams is minimal.

For instance, using my measure of team strength, five of the six worst D-1 teams hailed from the Southwestern Athletic Conference (SWAC), and four of the top five are members of the ACC. In fact, the top 21 spots are held by the ACC, SEC, Big 12, and Pac-10. The self-reinforcing nature of college recruiting (the teams that recruit good players find it easier to recruit more good players) ensures that the strongest conferences will remain strong, regardless of minor shifts in team quality within the conference.

Relative conference power

Some version of SOS and RPI isn’t hard to come by, but it is arduous to work with. Unless you’re doing an extensive, detailed project, you probably don’t want to apply one of 300 different multipliers to thousands of players’ stat lines. For most purposes, it’s good enough to have a general idea whether a school plays a difficult, easy, or middle-of-the-road schedule.

Since D-1 teams play about half of their schedules against conference opponents (the stronger teams tend to play more than half in-conference), simply knowing how the conferences rate is enough.

There are a couple of different ways to go about this. The simplest is to take a strength rating for every team in a conference, then average those ratings for a composite conference rating. As we’ll see in a moment, this gives us sensible results—top conferences such as the SEC and ACC are as strong as a .650 team relative to D-1 average, while the SWAC and the Mid-Eastern Athletic Conference come in below .350. That doesn’t mean there are never good teams in the SWAC or bad teams in the ACC, just that we shouldn’t look at, say, a .500 team in each of these conferences and see anything like equivalent teams.
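The first approach is simple enough to sketch in a few lines; the team ratings below are made up for illustration, not actual 2008 numbers:

```python
from statistics import mean

# Hypothetical team strength ratings (winning pct vs. an average
# D-1 opponent); the values are illustrative, not real data
ratings = {
    "ACC":  [0.72, 0.68, 0.66, 0.63, 0.60, 0.58, 0.55],
    "SWAC": [0.42, 0.38, 0.35, 0.33, 0.30, 0.28],
}

# Composite conference rating: the average of its teams' ratings
conference_rating = {conf: mean(teams) for conf, teams in ratings.items()}
print({conf: round(r, 3) for conf, r in conference_rating.items()})
```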

The other approach is to look only at non-conference games. The advantage there is that it more purely compares conferences to each other—the data isn’t cluttered with 4,000 in-conference games. The downsides, however, are many.

First, while every team plays a fair number of non-conference games, that can mean as few as 15-20 games for some teams, and those games are often concentrated at the beginning of the season. In February and early March, northern teams spend a disproportionate amount of time on the road, and as we saw with champion Fresno State last year, it just takes some teams a couple of months to come together.

Perhaps the most serious problem is that, after the first few weeks of the season, most schedules involve conference games on the weekend, with non-conference games relegated to Tuesday and Wednesday. Since teams throw their best pitchers over the weekend, non-conference games are a test of depth—something high-profile teams have, and others don’t. Using this second method is kind of like ranking MLB teams based on their performance in games started by their 4th and 5th starting pitchers. It measures something, but it doesn’t tell the whole story.

Some numbers

Enough chatter—it’s time for some numbers. I’ve listed the 31 Division 1 conferences below, along with their 2008 relative strength, measured in both of the ways I’ve described. “Non-Conf” is the method that includes only non-conference games, while “All Games” is the method that includes—you guessed it—all games.

Non-Conf   All Games   Conference
0.706      0.648       Atlantic Coast Conference (ACC)
0.685      0.643       Southeastern Conference (SEC)
0.681      0.636       Pacific-10 Conference
0.654      0.631       Big 12 Conference
0.580      0.583       Conference USA
0.571      0.567       Big East Conference
0.574      0.557       Big West Conference
0.551      0.539       West Coast Conference
0.527      0.524       Southern Conference
0.492      0.518       Colonial Athletic Association
0.534      0.517       Sun Belt Conference
0.504      0.513       Big Ten Conference
0.503      0.508       Western Athletic Conference (WAC)
0.464      0.501       Atlantic Sun Conference
0.441      0.497       Missouri Valley Conference
0.464      0.488       Mountain West
0.455      0.487       Mid-American Conference
0.414      0.483       Southland Conference
0.466      0.477       Patriot League
0.443      0.472       Big South Conference
0.405      0.449       Atlantic 10 Conference
0.398      0.447       America East Conference
0.403      0.443       Ohio Valley Conference
0.369      0.434       Metro Atlantic Athletic Conference (MAAC)
0.389      0.433       The Summit League
0.367      0.427       Horizon League
0.363      0.410       Ivy Group
0.279      0.376       Northeast Conference
0.302      0.334       Independent
0.209      0.333       Mid-Eastern Athletic Conf.
0.129      0.317       Southwestern Athletic Conf. (SWAC)

If you’re still reading, I’ll infer that you’re interested in some of the gritty details. There’s nothing groundbreaking in my method of rating team strength, but it does differ a bit from other approaches.

I start with Pythagorean winning percentage instead of actual winning percentage. That probably explains some of the extremes in the non-conference ratings for the worst conferences, as some of those teams get absolutely drubbed on occasion.
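For readers who haven’t seen it, the Pythagorean estimate converts runs scored and allowed into an expected winning percentage. The exponent of 2 below is the classic Bill James value; the best-fit exponent for college run environments may differ, so treat it as an assumption:

```python
def pythag_wp(runs_scored, runs_allowed, exponent=2.0):
    """Pythagorean expected winning percentage. An exponent of 2 is
    the classic choice; the ideal value varies with run environment."""
    rs, ra = runs_scored ** exponent, runs_allowed ** exponent
    return rs / (rs + ra)

# A team outscoring its opponents 400-300 projects to .640:
print(round(pythag_wp(400, 300), 3))  # 0.64
```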

Next, I adjust for home field. The best teams tend to spend more time at home, while lesser schools, especially those in the northeast, play more on the road. A few teams played 70 percent of their games at home, while several played that much on the road. Last year, home teams won approximately 56 percent of the time; I attributed a bit of that to the fact that the best teams have more control over their schedules and are more likely to host early-season invitationals, so I used an actual home field advantage of .550.
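One way to apply a .550 home advantage is an odds-ratio (log5-style) shift, sketched below; this illustrates the general technique and is not necessarily the exact adjustment used here:

```python
def home_adjusted(p_neutral, hfa=0.550, home=True):
    """Shift a neutral-site win probability for home field via an
    odds-ratio adjustment; hfa=.550 is the home advantage assumed
    in the article. Set home=False for the road team."""
    odds = p_neutral / (1 - p_neutral)
    hfa_odds = hfa / (1 - hfa)
    odds = odds * hfa_odds if home else odds / hfa_odds
    return odds / (1 + odds)

# A true .500 team projects to .550 at home and .450 on the road:
print(round(home_adjusted(0.5), 3), round(home_adjusted(0.5, home=False), 3))
```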

I calculate a basic strength of schedule: It’s two-thirds the average of the Pythagorean winning percentages of the team’s opponents, and one-third the average of that of their opponents’ opponents. Finally, I adjust the team’s winning percentage for their strength of schedule using the log5 method. This resulted in a change of .050 or more for 35 of 297 teams in 2008.
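In code, that two-step process looks roughly like this (a sketch of the weighting and the log5 inversion just described; the variable names are my own):

```python
from statistics import mean

def strength_of_schedule(opp_wps, opp_opp_wps):
    """Two-thirds the average of opponents' Pythagorean winning
    percentages, one-third that of opponents' opponents."""
    return (2 / 3) * mean(opp_wps) + (1 / 3) * mean(opp_opp_wps)

def log5_adjust(team_wp, sos):
    """Solve log5 for the rating p such that a team rated p would
    post team_wp against opposition of average strength sos."""
    return team_wp * sos / (team_wp * sos + (1 - team_wp) * (1 - sos))

# A .600 team that faced a .550 schedule rates as roughly .647:
sos = strength_of_schedule([0.56, 0.54], [0.55, 0.55])
print(round(log5_adjust(0.600, sos), 3))
```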

Next up

Whether you use conference power ratings or some version of team strength ratings should depend on your purpose. For all but the most involved projects, conference ratings should be sufficient.

What I’m really interested in, though, is division power. Most collegiate talent is concentrated in a few conferences of Division 1, but every year, several dozen players are drafted from schools outside of Division 1. Complete data is more difficult to come by for Division 2, Division 3, and NAIA, but for instance, D-1 teams played about 150 games against D-2 opponents last year.

Armed with that data, combined with strength ratings like the ones described above, we should be able to get a firmer grip on the quality of play in Division 2 and beyond. In a couple of weeks, we’ll take on that very topic.
