# Can We Predict Hitters’ Change Based on How Pitchers Approach Them?

Access to PITCHf/x data has been a boon to the independent baseball analytics community. It’s hard to imagine life without detailed pitch information from Brooks Baseball or FanGraphs, but it was only seven years ago when this sensor-generated data made its way into the public sphere.

We’ve learned a lot about what makes pitchers effective (the examples are too numerous to cite), but we are still quite behind the curve when it comes to hitters and a deeper understanding of their performance outside of classic outcome metrics.

While detailed, sensor-related data are still largely out of the public’s reach (although that might be coming soon), there have been attempts to leverage not just PITCHf/x data, but also the play-by-play data from MLB Advanced Media’s (MLBAM) Game Day app for all it’s worth to get new information on hitters. The MLBAM data include batted ball coordinates, making it possible not only to plot the location of batted balls, but to estimate things like the distance each ball is hit and its horizontal angle relative to home plate.

Brooks and FanGraphs have incorporated some hitter-focused PITCHf/x data into their various reporting platforms. Even outside of the big boys, there are some great examples of trying to leverage these data. For example, there is BaseballHeatmaps.com (maintained by our own Jeff Zimmerman and his brother), which provides a bevy of tools for calculating hitter and pitcher performance beyond our typical outcome statistics. Readers can generate leaderboards for batted ball distance as well as run values relative to league average based on pitch type, velocity, and location. Daren Willman’s BaseballSavant.com has also made a splash in the past year, allowing readers to generate data–including spray charts–for hitters based on PITCHf/x data. I even maintain a spray chart tool that uses PITCHf/x data not only to plot batted balls, but also to allow all sorts of filtering and run-value calculations based on standard variables such as velocity, pitch type, pitch location, batted ball distance, type, and angle (e.g., pull, center, opposite).

Despite these advances, I would argue that we are still very far from gaining any additional explanatory power from these “process” data for hitters. We’ve managed to explain and describe some facts about hitters–such as how their power ages by looking at batted ball distance and how that may impact their future performance–but in general we’ve just scratched the surface in terms of what kind of statistics and metrics based on the publicly available data are truly meaningful.

### Using PITCHf/x data to predict hitter performance

Today, I want to examine whether the way a pitcher approaches a hitter can provide any kind of signal regarding how that hitter’s performance is likely to change the following year.

We know that, in general, pitchers will adjust to hitters over the course of their career. In an article at FanGraphs two years ago, I focused on the frequency of pitch types thrown to hitters over three-year periods. Among other things, I found that the percentage of fastballs thrown to hitters generally increased with age. Combine that with the fact that hitters generally tend to see more pitches in the zone as they age, and one can see how pitchers will take advantage of a hitter’s diminished ability. The question is whether these changes by pitchers are predictive of hitter change–in short, do they give us an early warning signal that a hitter’s ability is changing beyond what we would expect from basic outcome statistics?

Robert Arthur at Baseball Prospectus has dived into this topic with gusto over the past few months (for example, see here). Besides finding a similar pattern to the one above in terms of fastballs seen and age, he looked at how changes in the average location of pitches thrown to hitters might offer clues as to whether we should expect hitters to significantly increase or decrease their performance in the following year.

Arthur calculated the change in the average distance from the center of the zone over the course of a season on a per-batter basis. Essentially, large positive changes indicated that pitchers were avoiding the heart of the strike zone against a hitter, and large negative changes indicated pitchers were grooving the ball at a higher rate to the hitter. As a result, Robert found some evidence that changes in distance from center over the course of Year One might help predict changes to overall hitter performance in Year Two, even after we take into account aging and regression.

I’ve been working on parallel research over the past few months, using a different approach. I’ve used center zone distance in previous research from the 2013 Saber Seminar on the likelihood of generating swings and misses by pitchers, but for this work I’ve decided to leverage Heart%.

Heart% comes out of the work Jeff Zimmerman and I have done on defining the various edges of the strike zone. The graphic below depicts the five areas of the strike zone by batter handedness, with the white area indicating the heart of the strike zone:

I focused on Heart% for this work, specifically fastballs thrown to the heart of the strike zone. My thinking here is that, ideally, pitchers would love to mostly throw fastballs in the heart of the zone. Why? Pitchers get a significantly larger number of called strikes when throwing to the heart of the zone versus the edge: 97 percent of all taken pitches thrown to the heart of the zone are called strikes. Compare that to 71 percent when thrown to the horizontal edges, 75 percent when thrown to the bottom edge, and 58 percent when thrown to the top edge. The problem, of course, is when pitches in the heart of the plate are put into play. There we see that run values swing from -7.1 per 100 pitches overall to 7.0 per 100 once the ball is put in play. Compare that to the 3.2 runs per 100 when the pitch is thrown to the edges of the strike zone.

Given this dynamic, we should expect pitchers to increase fastballs in the heart of the strike zone when they are less fearful of what a hitter can do when putting those pitches in play. Conversely, we should see Heart% decrease when pitchers become more fearful of a hitter’s ability.

Using this idea, I decided to test whether changes in the rate of fastballs to the heart of the strike zone added any predictive value to how hitters might perform in the following season.

### Testing the hypotheses

I pulled data on all non-pitchers from 2009 through 2013. I collected data on plate appearances, age, weighted on-base average (wOBA), and the percent of fastballs thrown to them in the heart of the strike zone. For fastballs, I used four-seamers, two-seamers, sinkers, and cutters. I avoided split-fingered fastballs since they are rarely intentionally thrown in the zone (let alone the heart of the zone).

I then calculated the change in Heart% from Year One to Year Two for each hitter and then the change in his wOBA from Year Two to Year Three. To control for aging and regression I also included each hitter’s Marcel projection. (If you are not familiar with Marcel, you can read up here.)

Now, Marcel alone does a very good job at predicting how a hitter will perform in the coming year. The table below shows the correlations, r-squared, and standard error (the standard deviation of the residuals, i.e., predicted values minus the actual values) for Marcel under various conditions as compared to simply using the current season to predict the next. You can see that Marcel not only does a better job from a pure correlation standpoint, but it has a better standard error.

**Next Season Prediction, Marcel vs. Current Season**

| Predictor | R | R^2 | Standard Error (Actual – Predicted) | Sample Restrictions |
|---|---|---|---|---|
| Marcel | 0.581 | 0.34 | 0.0301 | >=350 PA both years, same team, and three years of data |
| Marcel | 0.574 | 0.33 | 0.0301 | >=350 PA both years and same team |
| Current wOBA | 0.537 | 0.29 | 0.0344 | >=350 PA both years, same team, and three years of data |
| Current wOBA | 0.291 | 0.08 | 0.0796 | No restrictions |
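For readers who want to replicate these summary statistics, all three can be computed from any pair of projected and actual wOBA series. A minimal sketch with toy numbers (the function name and the data here are mine, not the article's actual sample):

```python
from statistics import mean, pstdev

def fit_metrics(predicted, actual):
    """Correlation (R), R^2, and standard deviation of the residuals
    (actual minus predicted) for a set of projections."""
    mp, ma = mean(predicted), mean(actual)
    cov = sum((p - mp) * (a - ma) for p, a in zip(predicted, actual))
    sp = sum((p - mp) ** 2 for p in predicted) ** 0.5
    sa = sum((a - ma) ** 2 for a in actual) ** 0.5
    r = cov / (sp * sa)
    residuals = [a - p for p, a in zip(predicted, actual)]
    return r, r * r, pstdev(residuals)

# Toy example: three hitters' projected vs. actual wOBA
r, r2, se = fit_metrics([0.320, 0.340, 0.360], [0.325, 0.335, 0.365])
```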

Given how good Marcel is, we shouldn’t expect the change in Heart% to outperform it or any other projection system. The question is how well it can add to Marcel or how well it can predict on its own after controlling for natural aging and regression (which is exactly what Marcel does).

I restricted the test to hitters who had at least 350 plate appearances in both the current and predicted seasons*, stayed with the same team in both seasons (to control for park effects), and had at least three years worth of major league wOBA data prior to the predicted season (since Marcel doesn’t account for minor league performance). That left me with a sample of 418 batter seasons. Not a ton, but enough to work with.

I ran four initial tests: first to establish baselines for changes in Heart% alone, and then to measure effects after controlling for Marcel. For each, I used a basic linear regression model:

- Test 1: Actual wOBA (Year 3) minus current wOBA (Year 2) ~ Change in fastball Heart%
- Test 2: Actual wOBA (Year 3) minus current wOBA (Year 2) ~ Change in <93 fastball Heart%
- Test 3: Actual wOBA (Year 3) minus current wOBA (Year 2) ~ Marcel minus current wOBA + Change in fastball Heart%
- Test 4: Actual wOBA (Year 3) minus current wOBA (Year 2) ~ Marcel minus current wOBA + Change in <93 fastball Heart%
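Mechanically, each test above is an ordinary least-squares regression. A stripped-down sketch of Test 1, with a made-up handful of batter seasons standing in for the real sample (the helper function and the data are mine):

```python
from statistics import mean

def ols_fit(x, y):
    """One-predictor least-squares fit of y ~ a + b*x; returns (a, b)."""
    mx, my = mean(x), mean(y)
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Toy inputs: Year 1 -> Year 2 change in fastball Heart% (percentage points)
# and the Year 2 -> Year 3 change in wOBA for five hypothetical hitters.
heart_change = [2.0, -1.5, 0.5, 3.0, -2.0]
woba_change = [0.004, -0.002, 0.001, 0.005, -0.003]

intercept, slope = ols_fit(heart_change, woba_change)
# slope: predicted change in wOBA per one-point change in Heart%
```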

I decided on two Heart% variables: the first is just as I described above, but the second looks only at the percent of fastballs less than 93 mph thrown in the heart of the zone relative to all pitches seen. The logic here is that while elite velocity can be much harder to handle even in the heart of the strike zone, lower-velocity fastballs are even riskier to throw down the middle. We can quantify this by comparing the expected runs on fastballs greater than or equal to 93 mph and those below 93 mph (numbers are per 100 pitches, overall or on contact):

**Expected Run Values, Fastballs**

| MPH | Avg. Run Value (All Pitches) | Avg. Run Value (On Contact) |
|---|---|---|
| <93 | -2.8 | 7.2 |
| >=93 | -3.8 | 5.8 |
| Total | -3.2 | 6.7 |
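The velocity split above is straightforward to compute from pitch-level data. A hypothetical sketch (the record layout and function name are mine; real PITCHf/x tables carry many more fields):

```python
def run_value_per_100(pitches, lo=None, hi=None):
    """Average run value per 100 fastballs with velocity in [lo, hi) mph.
    `pitches` is a list of (mph, run_value) tuples."""
    vals = [rv for mph, rv in pitches
            if (lo is None or mph >= lo) and (hi is None or mph < hi)]
    return 100 * sum(vals) / len(vals)

# Toy data: two sub-93 fastballs and one harder one
pitches = [(90.0, -0.028), (91.5, -0.028), (94.0, -0.038)]
slow = run_value_per_100(pitches, hi=93)   # per-100 value for <93 mph
fast = run_value_per_100(pitches, lo=93)   # per-100 value for >=93 mph
```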

Here are the results, where predicted wOBA change shows how a one-percentage-point change in whichever Heart% variable is used impacts the predicted change in wOBA. I’ve included r-squared and standard errors for just the two baseline models, to give readers a sense of how weak these variables are on their own:

**Results, Tests 1-4**

| Variable | Test 1 | Test 2 | Test 3 | Test 4 |
|---|---|---|---|---|
| R | 0.08 | 0.04 | | |
| R^2 | 0.01 | 0.00 | | |
| Adjusted R^2 | 0.00 | 0.00 | | |
| Standard error (STDEV of residuals) | 0.0334 | 0.0335 | | |
| Predicted wOBA change | 0.001 | 0.000 | 0.001 | 0.000 |
| p-value | 0.168 | 0.530 | 0.235 | 0.558 |

First things first: None of the predictor variables are statistically significant, with or without controlling for Marcel. Second, even if they were, the size of the effect is very, very small, if present at all. Take the change in fastball Heart%; a one percentage point change from Year One to Year Two predicts only a .001 change in wOBA in Year Three. Controlling for Marcel does decrease the impact of how pitchers approach hitters, but it is almost too small to bother noticing.

So, at least initially, it looks like once we control for expected regression and aging, changes in how pitchers approach hitters (at least in terms of fastballs thrown in the heart of the strike zone) don’t give us any additional information about future performance.

Let’s take a different approach, borrowing a bit from Robert Arthur. He wasn’t looking at change in distance from the center across seasons; he was looking at the trend over the current season and using that to predict change in hitter performance the next season.

This time I modeled the change in wOBA based on Marcel and the change in fastball Heart% between April and September of the current season:

- Test 5: Actual wOBA (Year 3) minus current wOBA (Year 2) ~ In-season change in fastball Heart%
- Test 6: Actual wOBA (Year 3) minus current wOBA (Year 2) ~ In-season change in <93 fastball Heart%
- Test 7: Actual wOBA (Year 3) minus current wOBA (Year 2) ~ Marcel minus current wOBA + In-season change in fastball Heart%
- Test 8: Actual wOBA (Year 3) minus current wOBA (Year 2) ~ Marcel minus current wOBA + In-season change in <93 fastball Heart%

**Results, Tests 5-8**

| Variable | Test 5 | Test 6 | Test 7 | Test 8 |
|---|---|---|---|---|
| R | 0.11 | 0.11 | | |
| R^2 | 0.01 | 0.01 | | |
| Adjusted R^2 | 0.01 | 0.01 | | |
| Standard error (STDEV of residuals) | 0.0343 | 0.0342 | | |
| Predicted wOBA change | 0.001 | 0.001 | 0.000 | 0.000 |
| p-value | 0.0322 | 0.0239 | 0.539 | 0.175 |
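For concreteness, here is one way the in-season predictor in Tests 5-8 could be computed per hitter; the dictionary layout and function names are my own invention, not the article's actual pipeline:

```python
def heart_pct(pitches):
    """Percent of all pitches seen that are fastballs in the heart of the zone."""
    fb_heart = sum(1 for p in pitches if p["fastball"] and p["heart"])
    return 100 * fb_heart / len(pitches)

def in_season_change(april, september):
    """September fastball Heart% minus April fastball Heart%."""
    return heart_pct(september) - heart_pct(april)

# Toy data: one hitter's pitches seen in each month
april = [{"fastball": True, "heart": True}, {"fastball": True, "heart": False},
         {"fastball": False, "heart": True}, {"fastball": False, "heart": False}]
september = [{"fastball": True, "heart": True}, {"fastball": True, "heart": True},
             {"fastball": True, "heart": False}, {"fastball": False, "heart": False}]
delta = in_season_change(april, september)  # positive: more grooved fastballs
```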

Initially, the focus on in-season changes seems to do a better job–both Test 5 and Test 6 are statistically significant. However, look at the effect size–we are really no better off than when we started. Add in Marcel, and the change in how pitchers are approaching hitters in-season becomes meaningless.

But what about extreme changes? Maybe the relationship between how pitchers approach hitters and hitter performance isn’t linear. Maybe extreme changes can act as a signal that the hitter’s performance is about to change in an extreme way, while moderate changes are just normal variation.

We can test this by looking to see how changes in fastball Heart% might predict the likelihood of more extreme changes in wOBA.

I coded year-over-year changes in wOBA as a 1 where the change was greater than one standard deviation of all changes observed (which, if you remember, was .0344). I created two dummy variables–one for changes that were positive and one for changes that were negative. The baseline odds of increasing wOBA by more than one standard deviation were .14:1 in the sample; the odds of decreasing wOBA by more than one standard deviation were .28:1. As we would expect, changes of this size in either direction have a low probability.
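The dummy coding described above can be sketched as follows (variable names and the toy inputs are mine; the .0344 threshold is the standard deviation quoted in the text):

```python
SD = 0.0344  # one standard deviation of observed year-over-year wOBA changes

def code_extremes(woba_changes, sd=SD):
    """Dummy-code changes beyond +/- one standard deviation."""
    big_up = [1 if d > sd else 0 for d in woba_changes]
    big_down = [1 if d < -sd else 0 for d in woba_changes]
    return big_up, big_down

def odds(flags):
    """Odds of the event as successes-to-failures (e.g., 0.14 means .14:1)."""
    k = sum(flags)
    return k / (len(flags) - k)

up, down = code_extremes([0.05, -0.06, 0.01, 0.04])
```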

I ran a logistic regression on both the high and low change and used the in-season change for fastballs less than 93 mph as the indicator variable (or independent variable, depending on your lingo). I used 70 percent of the data to train the model, and the remaining 30 percent to evaluate it.
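The 70/30 holdout is a standard scheme; a minimal stdlib sketch (the seed and names here are arbitrary choices of mine):

```python
import random

def train_test_split(rows, train_frac=0.7, seed=42):
    """Shuffle records, then split into training and evaluation sets."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * train_frac)
    return rows[:cut], rows[cut:]

# e.g., splitting 418 batter seasons (here just index stand-ins)
train, test = train_test_split(range(418))
```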

By itself, a one percentage point change in in-season fastball Heart% increased the odds of a hitter increasing his wOBA by more than one standard deviation the following season by 7.7 percent. Controlling for the predicted change in wOBA from Marcel, the result is pretty similar–a 6 percent increase in the odds of a large wOBA jump. The results aren’t so great for predicting big drops in wOBA. Whether we include Marcel or not, the results for fastball Heart% are not statistically significant (even if they were, the size of the effect is quite small).
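As a side note on interpretation: "increased the odds by 7.7 percent" is the usual reading of a logistic-regression coefficient, where exp(beta) is the odds ratio for a one-unit change in the predictor. The beta below is reverse-engineered for illustration, not the model's actual coefficient:

```python
import math

# If a one-point Heart% change multiplies the odds by exp(beta),
# then a 7.7 percent increase in the odds means exp(beta) = 1.077.
beta = math.log(1.077)            # illustrative coefficient
odds_ratio = math.exp(beta)       # back to the odds ratio
pct_change = (odds_ratio - 1) * 100
```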

Finally, I calculated odds ratios where the change (up or down) in in-season fastball Heart% was greater than one standard deviation, coding the variable as either a 1 or a 0. The results showed almost no change in the odds of a large change in either direction, and neither showed statistical significance.

### Summing Up

Well, if how pitchers approach hitters does provide some signal as to how hitters might change the following year, it can’t really be found in the frequency of serving up fastballs in the heart of the strike zone. No matter how we modeled the relationship, after controlling for how we would expect hitters to change based on age and general regression we were left with either non-statistically significant results or results with minuscule effect sizes. Even if we managed to increase our sample size, those effect sizes were too small to worry about.

Now, this is just one way to look at how pitchers approach hitters. Other ways need to be tested before we can throw out the entire theory. Maybe it is more a function of being willing to throw inside, or out over the plate, or maybe it depends on what area of the zone individual hitters traditionally have owned. I plan to look at those additional ways in future articles, but if readers have suggestions I would be happy to try to include them in the analysis.

This also doesn’t fully answer the question regarding the analytical value of PITCHf/x-like data for understanding hitter performance. We are still working through how different uses of PITCHf/x data help us describe (what happened), diagnose (why did it happen), and predict (what will happen next) hitter performance. All three analytical levels are valuable in their own ways, but serve different purposes. Understanding where different variables and metrics fit in that scheme is important and we are still just scratching the surface when it comes to hitter performance and this data. This was just one test of one type of metric, and for the moment it appears changes in fastball Heart% are merely descriptive and possibly diagnostic at best.

…

* I decided to use this criterion mainly because it appears to be when Marcel–or likely any projection–is at its best. Of course, this introduces some potential bias into the analysis, as hitters whose true talent may have collapsed will see a drastic reduction in playing time, such that they accumulate less than 350 PAs the following season. However, players could also fail to accumulate >= 350 PAs for other reasons — for example, an injury sustained some time after the previous season. Changes in how pitchers approach hitters can’t logically be said to predict a future injury (of course they might predict a current, but undiagnosed one), so I tried to limit these false positives in this analysis. Also, if a player starts out poorly and is given a very short time to right his performance the following year, that could also bias the results. This isn’t necessarily the only way to handle these issues (for example, you can weight the difference between the predicted and actual by PAs), just the one I chose for this run of the analysis.

### References & Resources

- Jonah Keri, Grantland, Q&A: MLB Advanced Media’s Bob Bowman Discusses Revolutionary New Play-Tracking System
- Bill Petti, The Hardball Times, How Batted Ball Distance Ages
- Bill Petti, FanGraphs, Hitter Aging Curves: Plate Discipline
- Robert Arthur, Baseball Prospectus, Moonshot: What PITCHf/x Can Tell Us About Batters
- Bill Petti, Saber Seminar 2013, Anatomy of Swings and Misses in the Zone
- Bill Petti, The Hardball Times, Expanding the Edges of the Strike Zone


I would think pitch selection (FB v offspeed) would be a lagging indicator, not leading indicator, of hitter performance. A young hitter who has “trouble with offspeed” would presumably get a lot of them until he learns to keep his weight back. At that point, a well-located fastball is your best bet.

Pitch location, contra pitch selection, is a troubling metric because you can’t assume intent from pitch location. If a pitcher throws a fastball, we can assume he intended to throw a fastball; but if a pitcher throws to a spot, we cannot assume he intended to throw to that spot. In other words, if you’re trying to determine where the pitcher intended to locate a ball, it isn’t very conclusive to measure where the ball ended up. Heart% is a perfect example–in most cases, a ball thrown down the middle is either a mistake in and of itself, or a function of three previous mistakes (i.e., a 3-0 fastball).

While the relationship between location and intent is not one-to-one, location is a good proxy for intent, on the average. IOW, many more pitches that end up in the middle of the plate were intended to be somewhere near the middle than pitches on the edge. Basically, a scatter plot of pitches will be centered around the intended location, which is all that matters in an analysis like this.

BTW, I love the fact that Bill is reporting confirmation of the null hypothesis (or rather the absence of rejection). Too many articles are only printed when there is evidence of a null hypothesis rejection, creating publication bias and too many Type I errors.

Very cool and thorough, Bill. I wonder why our results disagree. Obviously, Heart% is not the same as zone distance, and it might not be as well-suited to detect changes in pitcher approach. On a related note: how well does Heart% correlate with, for example, wOBA? Despite your and Jeff Zimmerman’s neat work on Edge% etc., I haven’t played around with your zone classifications (but I will now).

well duh, how did I miss that in the table.

Also, one has to be careful in using change in approach during the season and then using Marcel in a model. That is because Marcel does not distinguish between the weight of early- and late-season performance. For example, let’s say that a batter is “hot” during the first half and pitchers are staying away from him, starting after the first two weeks of the season. And then let’s say that starting in July the hitter goes and stays ice cold for the rest of the season, and pitchers start pitching him in the middle of the zone for the last two or three months. Marcel would see nothing unusual in the season as a whole, since overall it is around the batter’s historical norm, so the projection would barely change–whereas a projection that weighted the poor second-half performance more heavily would go down a little. So we might see a change in pitcher approach presaging a decline in performance the next season that Marcel does not pick up, because it gives the first and second halves of the season the same weight.
