Adam Wainwright, Run Clusterer
September 24, 2020 at 1:00 pm

On Monday night, I was watching the Cardinals battle the Royals when I heard something that stopped me in my tracks. As Adam Wainwright labored in the sixth inning — two runs in and runners on the corners with two outs — the Cardinals announcers mentioned one of Wainwright’s greatest strengths — in their minds, at least. “That’s something that Adam Wainwright is really good at, is not compounding the inning… going back and getting the next guy.” I’ve been a Cardinals fan my whole life — and to that tidbit, I said, “Huh?”

It was, in truth, something I’d never thought about. Are some pitchers better than others at turning off the tap, amping up their performance when they need it and keeping crooked numbers from getting even crooked-er? My saber sense was tingling — something about this didn’t sound quite right. But of course, these spots are exactly where if a pitcher could bear down more than expected, it would make the most difference. I decided I’d try to find out how real this effect was.

Defining what I was looking for turned out to be difficult. What, exactly, does “not compounding the inning” mean? The announcers seemed to think it meant that Wainwright pitched better after runs were in, or at least pitched the same while most pitchers in baseball got worse. Either way, the general idea was that his ones and twos turned into threes and fours less often than average.

One possible reaction to that might be “So?” His ERA is his ERA, regardless of whether it comes via a three-run spurt and eight zeros over nine innings, or three one-run frames and six zeros. To that I say: reasonable point. There are still reasons to care, though. For one, if a pitcher were actually prone to clustering, they’d tend to underperform their FIP over time. One of the reasons home runs are so bad is because they always result in runs, whereas other hits can be scattered around in otherwise dry innings without damage. A cluster-prone pitcher wouldn’t have that advantage; when you give up baserunners in bunches, a single and a home run become much closer in value.

In the same way, a pitcher who was prone to lots of singleton runs allowed but then mysteriously got better after letting one in would beat his FIP over a long time horizon. Base/out states tend to be more dangerous after a run has scored, naturally enough. Getting better then, or not getting worse while most pitchers do, would be quite the superpower.

Of course, that’s not necessarily the right way to think about it. There’s a simpler way to take this statement. Maybe Wainwright simply has fewer crooked numbers, as a proportion of the runs he gives up, than the average pitcher does. There doesn’t need to be a provable reason why that should happen, or even a real advantage to displaying that behavior. Maybe Wainwright simply allows runs differently.

I wasn’t exactly sure how to approach this problem, so I decided to define terms very narrowly and answer some version of it, rather than spending forever thinking about how best to frame it. Is it the right version? You tell me. Here’s what I did, though: I looked at each inning that a pitcher both started and finished and grouped them by how many runs were allowed.

Why exclude innings where they were pulled? Because we can’t know what would have happened. What relievers do with the scraps they have to pick up doesn’t necessarily mean much about the pitcher who left the mound. We could assign some run value based on the base/out state when the pitcher left — but that wouldn’t do what we want. We’re looking for a place where pitchers behave differently than a naive run expectation.
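In code, that filtering-and-grouping step might look something like the sketch below. The per-inning table and its column names are invented for illustration; they aren’t a real data feed.

```python
import pandas as pd

# Hypothetical per-inning records; the column names are assumptions for illustration.
innings = pd.DataFrame({
    "pitcher":         ["Wainwright", "Wainwright", "Wainwright", "Wainwright"],
    "started_inning":  [True, True, True, False],   # threw the inning's first pitch
    "finished_inning": [True, True, False, True],   # recorded the inning's third out
    "runs_allowed":    [0, 2, 1, 0],
})

# Keep only innings the pitcher both started and finished, then tally runs allowed.
complete = innings[innings["started_inning"] & innings["finished_inning"]]
distribution = (
    complete.groupby("pitcher")["runs_allowed"]
    .value_counts()
    .rename("innings")
    .reset_index()
)
print(distribution)
```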

Here, for example, is Wainwright’s runs allowed distribution across every inning he has both started and finished in his career:

That’s a broad picture, but it gives you a general sense of the shape of the innings he allows. About 60% of the time that he allows at least one run, it’s only one. Put another way, if you know only that Wainwright allowed at least one run, it will be exactly one 60% of the time. If you know that he allowed at least two runs, it will be exactly two runs 65% of the time. If you know that he allowed at least three runs, it will be exactly three 68% of the time.
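Those conditional rates fall straight out of the distribution. Here’s a small sketch of the calculation; the counts below are made up, chosen only to roughly reproduce the percentages above, and aren’t Wainwright’s actual innings.

```python
# Made-up runs-allowed counts for complete innings, chosen to roughly match the rates above.
runs_dist = {0: 1200, 1: 215, 2: 93, 3: 34, 4: 12, 5: 4}

def exactly_given_at_least(dist, n):
    """P(exactly n runs allowed | at least n runs allowed)."""
    at_least = sum(count for runs, count in dist.items() if runs >= n)
    return dist.get(n, 0) / at_least

for n in (1, 2, 3):
    print(n, f"{exactly_given_at_least(runs_dist, n):.0%}")  # ~60%, ~65%, ~68%
```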

Working out what “average” is in this statistic is tricky. In 2020, for example, pitchers as a whole check in at a 60.1% rate of one-run innings out of all the innings in which they allowed a run, almost exactly identical to Wainwright for his career. But this statistic isn’t talent-level agnostic; the better the pitcher, the higher their proportion of one-run innings should be. Jacob deGrom checks in at 63%, Clayton Kershaw at 68%. Among pitchers who have completed 30 innings this year, the slope comes out thusly: for every point of ERA below average, pitchers have a four percentage point higher rate of holding their opponents to just one run.
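Mechanically, that slope is a regression of one-run rate on ERA across qualifying pitchers. Here’s a bare-bones least-squares sketch, with invented pitcher rows standing in for the real data:

```python
import numpy as np
import pandas as pd

# Invented per-pitcher rows (30+ IP); the real table would come from the actual data pull.
pitchers = pd.DataFrame({
    "era":          [2.10, 2.80, 3.50, 4.20, 4.90, 5.60],
    "one_run_rate": [0.70, 0.67, 0.64, 0.61, 0.58, 0.55],  # one-run share of run-scoring innings
})

# one_run_rate = intercept + slope * era; polyfit returns the slope first for degree 1.
slope, intercept = np.polyfit(pitchers["era"], pitchers["one_run_rate"], 1)
print(f"slope: {slope:.3f} per run of ERA")  # roughly -0.04, i.e. ~4 points per run of ERA
```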

Some era adjustment is necessary, because the offensive environment has changed, and that itself could change the rate of one spots as compared to other run-scoring innings. I looked at every complete inning since 2005 and found a rate of … 60.1%. Okay, so maybe we don’t need to adjust for the run-scoring environment.

Over his career, Wainwright has an ERA 0.8 runs better than league average. We’d naively expect him to have a distribution of 63.4% one-run innings out of all of his run-scoring innings. By this metric, he actually allows more big innings, relative to his overall skill level, than the average pitcher!
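Spelled out, that expectation is just the league-wide rate plus the ERA adjustment. The precise slope value below is an assumption; it’s the “four percentage points per run of ERA” from above, un-rounded.

```python
# League rate plus roughly four points of one-run share per run of ERA better than average.
league_one_run_rate = 0.601    # share of run-scoring innings with exactly one run, league-wide
slope_per_run_of_era = 0.041   # assumed precise slope; rounded to "four points" above
era_better_than_average = 0.8  # Wainwright's career edge over league average

expected = league_one_run_rate + slope_per_run_of_era * era_better_than_average
print(f"{expected:.1%}")  # 63.4%
```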

There are plenty of problems with this way of looking at things. An inning with a single and a homer, for example, counts as a “big inning” even though the pitcher never had a chance to bear down after allowing the first run. Maybe the better question is what wOBA a pitcher allows after a run has scored, or their strikeout rate, or something along those lines. But it’s my article, and I like this formulation for its simplicity, so we’re sticking with it.

There’s another question worth answering here: Wainwright doesn’t seem to have this skill, but does anyone? Is it actually a skill, or something that happens randomly to pitchers? To test this, I looked at every pitcher who started and finished at least 100 innings in 2016 and 2017 combined. I assigned each of them a projected one-run percentage (there has to be a better name for this, I just can’t think of one) based on their ERA.

Let’s use Chris Sale as an example. He threw a whopping 441 innings over those two years. When he did allow runs, he tended to allow them in bunches; he allowed one run 51 times, two runs 20 times, and three or more 16 times. That works out to a 58.6% one-run percentage. He had a 3.12 ERA over those two years, though, far better than average, which means we’d predict him to have a one-run percentage of 67.4% (with huge error bars, to be fair). Thus, we assign him a “one-run error” of -8.8%, the difference between his actual rate and our goofy prediction. I did the same thing for every pitcher.
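Here’s that Sale calculation spelled out, using only the figures quoted above (the 67.4% prediction comes from the naive ERA-based model):

```python
# Chris Sale, 2016-17, from the counts above.
one_run, two_run, three_plus = 51, 20, 16
scoring_innings = one_run + two_run + three_plus   # 87

actual_rate = one_run / scoring_innings            # ~58.6%
predicted_rate = 0.674                             # naive prediction from his 3.12 ERA
one_run_error = actual_rate - predicted_rate       # ~ -8.8 points

print(f"actual {actual_rate:.1%}, predicted {predicted_rate:.1%}, error {one_run_error:+.1%}")
```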

Next, I did the same for 2018-2019. Let’s look at Sale again. In 2018 and 2019, he had a 3.21 ERA (consistent!) and a 54.1% one-run percentage. We would have predicted him to have a 64.4% one-run percentage, which means he was again an outlier. Sale seems to have fewer one-run innings, as a percentage of his scoring innings, than the average pitcher of his caliber, not that there are many pitchers of Sale’s caliber.

From here, I took every pitcher with 100 innings in each of my two time periods and divided them based on how much they beat or missed my prediction in 2016-2017. Those groups look like so:

Sale would be in that first group, the one with a much lower actual one-run rate than you’d expect from their ERAs. It’s not clear whether there’s any sample bias — the pitchers with the least and greatest errors are the two groups with the lowest ERAs, and good luck explaining that. Let’s see how each group did in 2018-2019:

Well, so much for that. Pitchers who were at one extreme or the other for two years (quartiles one and four) had almost exactly equal one-run rates in the next two years. The middle half was again the worst group by ERA, which makes sense — it’s the same pitchers who were the worst in the first two years. But they, too, had aggregate one-run percentages almost exactly on top of a naive prediction.
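For anyone who wants to recreate the grouping step, here’s a minimal sketch: split pitchers into quartiles by their 2016-17 one-run error, then compare each group’s actual and predicted one-run rates in 2018-19. The rows and column names here are invented placeholders for the real per-pitcher table.

```python
import numpy as np
import pandas as pd

# Invented per-pitcher rows so the sketch runs; the real table comes from the data pull above.
rng = np.random.default_rng(0)
pitchers = pd.DataFrame({
    "error_16_17": rng.normal(0.0, 0.05, 40),            # actual minus predicted one-run rate, 2016-17
    "one_run_rate_18_19": rng.normal(0.62, 0.05, 40),
    "predicted_rate_18_19": rng.normal(0.62, 0.02, 40),
})

# Quartiles by how much each pitcher beat or missed the prediction in 2016-17,
# then each group's average actual vs. predicted one-run rate in 2018-19.
pitchers["group"] = pd.qcut(pitchers["error_16_17"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
print(pitchers.groupby("group", observed=True)[["one_run_rate_18_19", "predicted_rate_18_19"]].mean())
```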

In the end, I’m not sure what to say about this analysis. Maybe the way I framed it was wrong. Maybe it’s a real skill that only shows up intermittently, or there are so few players who actually possess it that grouping into quartiles obscures their skill. Maybe — and I’d say this is most likely — it’s nearly impossible to perceive the difference between 60% and 63% of your run-scoring innings being one-run jobs. If that’s the case, it wouldn’t be surprising that your brain links a positive trait — keeping the inning manageable — with a pitcher awash in other positive traits, and Wainwright certainly fits that bill.

Great players are — well, they’re great! Adam Wainwright qualifies as one of those over the course of his career. Be careful about haphazardly ascribing traits that go beyond what you can see in the numbers, though. That’s a great way to end up believing something that simply doesn’t appear to be the case. Being great at measurable things is impressive enough without the need to ascribe bonus intangible excellence.

