It requires a certain type of mind to excite itself over “fragments of fragments,” but the normally sober baseball analyst Rob Neyer exulted giddily over them in his column the other day.
The question at issue is how lucky the 2002 Detroit Tigers were. On the one hand, they lost 106 games. On the other, if you apply Pythagorean analysis to their run margin, they “should” have lost 112 games. So they were lucky. But on the third hand, as one of Neyer’s correspondents points out, they scored fewer runs than one would expect from their offensive components, and allowed more than one would expect from the offensive components of their opponents, and they really should have lost 98 games. So they were unlucky.
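For anyone who wants to follow the arithmetic, the Pythagorean estimate is nothing fancier than runs scored squared over the sum of the squares of runs scored and runs allowed. A minimal sketch, using run totals that are roughly Detroit’s 2002 figures, from memory, so treat them as illustrative:

# Classic Pythagorean expectation, exponent 2.
# Run totals are roughly Detroit's 2002 figures, from memory; treat them as illustrative.
def pythagorean_losses(runs_scored, runs_allowed, games=162):
    expected_win_pct = runs_scored ** 2 / (runs_scored ** 2 + runs_allowed ** 2)
    return games - round(games * expected_win_pct)

print(pythagorean_losses(575, 864))  # comes out to about 112 "expected" losses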
But why stop there?
All hits, for example, are not created equal. If two players hit 120 singles, we consider those accomplishments the same. But what if one of the players hit 80 line drives and 40 ground balls with eyes, and the other hit 120 line drives? Would we expect them to match performances the next season?
No, we wouldn’t. We’d expect the guy with 120 line drives to outperform the guy who got lucky with the grounders.
That is just one tiny example of hundreds we could come up with. And for the people who care about such things, finding the fragments of the fragments of the fragments is the next great frontier.
Ah, fragments of fragments of fragments. Perennial employment for baseball analysts! More work for Rob Neyer!
Neyer analogizes this process to pricing financial derivatives, which I happen to know something about, having worked as a programmer for several years for a software company that did exactly that. On slow afternoons the analytics boys would quarrel over whether to construct the yield curve using a two- or three-factor Heath-Jarrow-Morton model. Sure, with a two-factor model you might be able to price the bond to four decimal places, but with a three-factor model you can price it to seven! Eventually someone, usually me, would have to rain on their parade by pointing out that bonds are priced in sixteenths (of a dollar), and that the bid/offer spread dwarfs anything beyond the first decimal place.
In baseball granularity is not measured in sixteenths, but in wins. Since it takes about eight to ten additional runs for each additional win, any variance below five runs or so is a big, fat engineering zero. And I can assure Rob Neyer without even firing up a spreadsheet that a team’s line drive/ground ball ratio when hitting singles won’t get you anywhere near five runs. It’s barely conceivable that it could help you draft a fantasy team. Knock yourself out.
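To put a number on “engineering zero,” here is the rule of thumb in a couple of lines, with ten runs per win as the assumed conversion:

# Rule of thumb from above: roughly eight to ten additional runs per additional win.
RUNS_PER_WIN = 10  # assumed conversion; anything in the 8-10 range tells the same story
print(5 / RUNS_PER_WIN)  # a five-run swing is about half a win, below the resolution of the standings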
Hitting has been well understood since John Thorn and Pete Palmer published The Hidden Game of Baseball twenty years ago. All work since has been on the margins. The new frontiers in baseball analysis lie elsewhere. Pitching is still imperfectly understood, because its results are mixed with fielding, which, until Bill James’s new book on Win Shares, was not understood at all. Voros McCracken (where do you sign up for a name like that?) recently demonstrated that a pitcher’s rate of hits allowed on balls in play is almost entirely random. That’s serious work. Fragments of fragments is masturbation.
The lesson here, which applies more broadly to the social sciences, is not to seek more precision than is proper to your subject. Fortunately Professors Mises and Hayek have already given this lecture, and I don’t have to.
(Update: Craig Henry comments.)
"And I can assure Rob Neyer without even firing up a spreadsheet that a team’s line drive/ground ball ratio when hitting singles won’t get you anywhere near five runs."
I understand your larger point, but not its application to this case. I believe Neyer is right that we would have good reason to expect the 120-line-drive hitter’s performance to be repeated the following season. Your point seems to be about the relative effect of a line drive single vs. a ground ball single, but that doesn’t seem to be Neyer’s point. (If anything, I would guess that a ground ball single is actually more productive than a line drive single.) Or do you mean something else by that ratio?
Speaking to your larger point, we aren’t going to find players that hit only line drive singles anyway, and the difference between players in whether their singles are one or the other probably isn’t that great. Perhaps this is what you are saying. (I do remember one year when Boggs was in a bit of a slump, and people were commenting on the number of his line drive outs as evidence that his performance hadn’t really changed, but I don’t remember anything much coming from that evidence.)
Seems to me Neyer has it exactly backwards: the more consistent player will be the one who gets the ground ball singles. Why? He is probably faster. But since speed is a factor, all we have to do is measure speed — not fragments of fragments.
By the way, do you think string theory is scientism run amok? I do.
Quite right, Eddie, I should clarify. Neyer’s point, as I understand it, is that a player who hits a lot of line drive singles would be more likely to have a higher average next season than a player who hits a lot of ground ball singles, ground balls being more likely outs. (A dubious assumption, but grant it.) Ground-ball/fly-ball ratios range from about 3 to 0.5 among major-league hitters. Ground-ball/fly-ball ratios for singles will have a similar but somewhat larger range, since the data set is smaller. The outliers would be approximately 90-30 and 30-90, and this is counting all fly singles as line drives, including Texas Leaguers. Even if we assign some arbitrarily high number to the increased likelihood of a fly ball dropping, like 0.1, the expected difference between the outliers is something like six hits. That just isn’t significant, and it won’t be significant on a team level either, since outlying teams will be much closer in ratio than outlying players.
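To spell that out, mirroring the loose arithmetic above, and pegging the run value of a single, generously, at about half a run:

# Back-of-the-envelope check on the outliers above (90-30 vs. 30-90 splits on 120 singles).
# The 0.1 "drop rate" bonus, the half-run value of a single, and ten runs per win are assumptions.
fly_singles_low, fly_singles_high = 30, 90
extra_drop_rate = 0.1    # assumed extra chance of the fly-ball type falling in
runs_per_single = 0.5    # generous linear-weights-style value for a single
runs_per_win = 10        # rule of thumb from the post

extra_hits = (fly_singles_high - fly_singles_low) * extra_drop_rate  # roughly 6 hits
extra_runs = extra_hits * runs_per_single                            # roughly 3 runs
print(extra_hits, extra_runs, extra_runs / runs_per_win)             # roughly 6 hits, 3 runs, 0.3 wins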
The true test of Neyer’s theory would be to study the pattern of Tony Gwynn’s hits. Gwynn was famous for hitting ground balls through the 5.5 hole. These weren’t seeing-eye hits, but solid purposeful placements.
The only way Neyer’s theory would hold is if you could distinguish between the seeing-eye hits and those that came as a result of good bat control.
I doubt Gwynn knew where he was going to hit the ball most of the time; he just hit a lot of hard ground balls. Your batting average is probably mostly a matter of how often you hit the ball hard, whether it’s on the ground or in the air.
There’s a famous story that someone once asked Ted Williams (or maybe Willie Keeler, I forget) if, with a man on third and less than two out, you should try to hit under the ball to produce a sacrifice fly. Williams (or Keeler) answered, "Why not hit it square, and break up the game?" If Gwynn could really place the ball that precisely, why would he bother with lousy singles in the 5.5 hole? Why not doubles over third or first base instead?
I also doubt that Gwynn had that much bat control, though it is certainly possible, and besides, if he always used the same offensive approach, both the pitchers and the infielders would adjust to counteract it.
But, insofar as it is under the batter’s control, there is a good reason to shoot for the hole rather than down the line: your margin of error is greater and the ball doesn’t have to be hit as hard to get through. To get a ball past a corner infielder (other than the likes of Mo Vaughn) you have to really blast it, and even then hope you don’t either hit it right at them or shoot it foul. But a reasonably hard-hit ball in the hole takes an exceptional play to get you out, unless the infielders are cheating that way.
On another matter you mention: the McCracken study is a little more complicated than you indicate. Apparently knuckleballers consistently do better at getting outs on balls in play, and I recently read (at baseballprospectus.com) that extreme groundball pitchers give up measurably more hits on balls in play than flyball pitchers do. But they also give up fewer extra-base hits. And that leads to a larger matter: we all know that batting average is the most misleading offensive statistic. Well, so is batting average against. The real question is: do pitchers have any control over slugging percentage? Maybe they do, maybe they don’t. I suspect they do, but I have never seen any attempt to address that issue and make a case for one view or the other. (Pitchers do have meaningful control over on-base percentage, because they largely control walks.)
An ‘OPS against’ for pitchers would certainly be intriguing to look at. It’s a wonder that’s not a common stat, now that I’m thinking about it.
I’ve theorized in the past that OPS is as good a statistic for pitchers as it is for hitters. McCracken’s work makes me doubt that to a degree, but I spent a good deal of time working it out a few years ago, and it’s certainly at least suggestive.
You are right, John, that knuckleballers are something of an exception to McCracken’s rule, but they are rare. To answer your question, pitchers certainly do have control over slugging percentage. It is dominated by home runs allowed, and these vary meaningfully between pitchers.
Aaron: I was referring to slugging percentage on balls in play; that is, excluding home runs (as well as walks and strikeouts). McCracken has shown that a Pedro Martinez is no more likely to get outs on balls in play than is the fourth starter for the Texas Rangers (whoever that is this week). But I would be very surprised if Pedro was not regularly near or at the top of the league in lowest slugging average on balls in play. But that remains a hunch: no one to my knowledge has examined those stats. And that seems to me to be something worth finding out. A lot of the conclusions people are drawing on the basis of McCracken’s study are based on the implicit (and unconscious) assumption that pitchers have no significant control over slugging averages on balls in play, either.
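To make the stat I have in mind concrete, it would be computed something like this, with home runs, walks, and strikeouts excluded by construction; the counts below are entirely hypothetical:

# Rough sketch of slugging average on balls in play: total bases on balls in play
# divided by balls in play, with home runs, walks, and strikeouts excluded.
# The counts passed in below are hypothetical.
def slg_on_balls_in_play(singles, doubles, triples, outs_in_play):
    balls_in_play = singles + doubles + triples + outs_in_play
    total_bases = singles + 2 * doubles + 3 * triples
    return total_bases / balls_in_play

print(round(slg_on_balls_in_play(120, 35, 3, 442), 3))  # about .332 for this made-up line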
I would think that pitchers would still have a good bit of control over a balls-in-play slugging percentage. Pitchers who don’t have exceptional velocity but who tend to "fool" hitters seem to get a lot of weak ground balls, which should be easy outs. But this falls apart with, for example, Randy Johnson. Of balls that are actually put in play against him, you would expect a fair number to go for hits, if not extra bases. And because he strikes out so many batters, he would have fewer balls in play to work with than, say, a Glavine or a Maddux-type pitcher. So slugging percentage on balls in play may actually penalize strikeout pitchers.
John and Casey: This is an empirical question; I doubt our hunches are very useful. I’ll ask Voros, and try to get back to you.
That said, I doubt slugging percentage on balls in play is significant. We’re probably talking about a difference of 40 extra bases between the outliers, or less than 10 runs.
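Here is the rough conversion I have in mind, with a fifth of a run per extra base and ten runs per win as the assumed values:

# Rough conversion of the outlier gap into runs and wins.
# The per-base run value and the ten-runs-per-win rule of thumb are assumptions.
extra_bases = 40
runs_per_extra_base = 0.2   # assumed; the marginal value of an extra base is around a fifth to a quarter of a run
runs_per_win = 10

extra_runs = extra_bases * runs_per_extra_base
print(extra_runs, extra_runs / runs_per_win)  # about 8 runs, or less than one win between the extreme outliers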
Aaron:
I am quite sure that the actual differences from one pitcher to another for slugging average on balls in play can be rather large, just as they are for batting average. What McCracken has shown is that those differences are not consistent from year to year: they reflect random variations in luck (and in the quality of the defense behind the pitcher) and not any skill in the pitcher, unlike rates of walks, strikeouts, and home runs allowed. He has not, to my knowledge, even studied whether variations in slugging percentage are also random. I have a hunch that they are not; you have a hunch that my hunch is wrong. But neither of us KNOWS anything. If you learn anything, please pass it along!
I’m currently pursuing an internship in the minors in hopes of one day attaining a front office position in the majors, but all this sabermetric stuff is intimidating me. It’s starting to look like being a master in this field is a prerequisite for getting hired.