What better way to start off a blog than by ripping ESPN? Let me start with a prelude - I really do like ESPN.com's website. I like some of the colorful stuff like DJ Gallo, and I love their NBA coverage (especially now that they have TrueHoop). And I really do like their MLB coverage, especially the blogs (Insider-only, unfortunately) - Buster Olney and Rob Neyer both have outstanding blogs. I read anything I can find written by Neyer and Keith Law. For some reason, it's become fashionable for bloggers to express their hatred for "mainstream" sites like ESPN.com - I don't feel that way at all. (I'm not a fan of their NFL coverage, with the exception of Tuesday Morning Quarterback, but that's neither here nor there...) So what I'm going to say is not out of hatred for ESPN.com - it's about how ridiculous this particular article was.
But Monday afternoon, I came home and found this on the ESPN.com homepage:
ESPN.com found a scientific method to sort out baseball's best players.
Put your slide rules down. We've got something more cutting-edge. Want baseball's top 100 players, with a mystical number rating their worth? Done. How about All-Star projections? Presto. Player Ratings
Now, I was intrigued. A scientific method to sort out baseball's best players? You mean, like using statistics? We already have a number of ways to do that, and those could be applied to All-Star projections as well. VORP (Value Over Replacement Player) is a very good measure of a player's overall value. EqA is probably the best measure of a player's hitting ability. Bill James spent five years or so working on his Win Shares statistic, which not only rates players but also expresses their worth in a meaningful way (wins contributed to their team), not just a "mystical number". But you're saying these stats are now archaic? Your stat is "more cutting-edge"? Cool! I'd love to see what new and creative stats you came up with!
Unfortunately, the explanation was less impressive. Here's the link to the story explaining the rankings, and it's not nearly as "cutting-edge" as it seems. Instead, the formula basically takes a bunch of counting stats and mixes them to produce one super-stat, a smart idea except it's been done hundreds of times (and usually much more effectively) by other people before. Here's how they figured out how batters rank:
Batting bases accumulated: 20%
Runs produced: 15%
OBP: 15%
BA: 10%
HR: 10%
RBIs: 5%
Runs: 5%
Hits: 5%
Net Steals: 5%
Team winning percentage, defensive position: 5% each
Let's take these one-by-one, to make things more organized:
Batting Bases Accumulated: 20%
This is very reasonable - the formula given is TB + BB + HBP. It's kind of like OPS, except actually a bit better, since it doesn't count hits twice (OPS credits each hit in both OBP and SLG). It's not perfect, but it's a good start.
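To make the comparison concrete, here's a quick sketch in Python (the stat line is hypothetical, not any real player's):

```python
# "Batting bases accumulated" as the article defines it:
# total bases + walks + hit-by-pitches. Unlike OPS, a hit is
# counted only once here; OPS credits every hit in both OBP
# (as a time on base) and SLG (as total bases).
def batting_bases(total_bases, walks, hbp):
    return total_bases + walks + hbp

# Hypothetical season line: 300 TB, 80 BB, 5 HBP
print(batting_bases(300, 80, 5))  # 385
```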
Runs Produced: 15%
Oh, boy. They define "Runs Produced" as Runs + RBI - HR. So, you're saying that 15% of a player's worth is based on how good his teammates are? Raul Ibanez finished seventh in RBI in all of baseball last year. Raul Ibanez. Jimmy Rollins OBP'd only .334 last year, but still managed to finish tied for third in runs scored because he had Utley and Howard hitting behind him. There are dozens of statistics that are a better measure of how good a player is than runs or RBI (though apparently nobody's told ESPN).
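To see how lineup-dependent this category is, here's a sketch with two hypothetical hitters (all the numbers are made up): identical home-run totals, but very different "runs produced" figures purely because of who hits around them.

```python
# "Runs Produced" as the article defines it: R + RBI - HR.
# (Subtracting HR avoids double-counting the run a hitter both
# scores and drives in on his own homer.)
def runs_produced(runs, rbi, hr):
    return runs + rbi - hr

# Two hypothetical hitters with the same personal power (20 HR),
# one in a loaded lineup, one surrounded by weak hitters:
loaded = runs_produced(runs=110, rbi=100, hr=20)
weak = runs_produced(runs=75, rbi=70, hr=20)
print(loaded, weak)  # 190 125
```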
OBP: 15%; BA: 10%
Not really a whole lot to pick at here, although both should be worth much more than they are. Hits are worth more than walks, so it makes sense to have BA as a category alongside OBP. One thing to notice, though, is the curious overlap of stats - both OBP and BA are already largely accounted for in the "Batting Bases Accumulated" category. It seems they could have condensed things and made the formula less confusing (or maybe more confusing)...
HR: 10%

Um...what about doubles? What about triples? You really think you can determine 10% of a player's worth directly from how many homers he hits? Okay, then...
RBIs: 5%, Runs: 5%
Again? Didn't I go through this already? First of all, "RBI" stands for "Runs batted in", so there's really no need to add an extra "s" there. But that's just nitpicking. Still, so far 25% of a player's value is apparently determined by how good his teammates are. And more strange overlapping of stats.
Hits: 5%

Okay...how are "Hits" and "Batting average" different? Couldn't they have just weighted BA by at-bats, eliminating the need for a separate "Hits" category? This is really starting to confuse me...
Net Steals: 5%
I'm assuming they mean (SB - CS). But according to run-expectancy studies, a player is actually hurting his team if he converts fewer than about 75% of his steal attempts - so shouldn't 75% be the baseline, rather than zero? Plus, steals usually aren't worth that much anyway - certainly not 5% of a player's value, with the possible exception of ridiculous Juan Pierre-types. (And, actually, Pierre's success rate was under 75% last year, so the value of those steals was negligible. Not that his .330 OBP added much value either...)
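Where does the 75% figure come from? It falls out of run-expectancy values for steals and caught-stealings. Here's a sketch in Python - the run values below are my own round-number assumptions in the neighborhood of commonly cited figures, not anything from the article:

```python
# Assumed run values (illustrative round numbers, not official):
SB_GAIN = 0.2   # runs gained by a successful steal
CS_COST = 0.6   # runs lost by a caught stealing

def steal_run_value(sb, cs):
    """Net runs added by a player's stolen-base attempts."""
    return sb * SB_GAIN - cs * CS_COST

# Break-even success rate: the rate p where p * SB_GAIN = (1 - p) * CS_COST
break_even = CS_COST / (SB_GAIN + CS_COST)
print(round(break_even, 2))  # 0.75

# A 40-steal, 15-caught-stealing season nets almost nothing:
print(round(steal_run_value(40, 15), 1))  # -1.0
```

Below that break-even rate, every extra attempt costs the team runs, which is why a raw (SB - CS) category overrates high-volume, low-percentage base stealers.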
Team Winning Percentage, Defensive Position 5% each
If you're counting at home, that's 30% of a player's total value that is based on how good his teammates are. Really, winning percentage? You're saying that Oakland's hitters are more valuable than Tampa Bay's because Oakland's pitching staff has given up 150 fewer runs? How does that make sense? To be fair, the actual impact of this ends up being small - the difference between 36-26 Detroit and 23-40 Texas ends up being only about 0.4 points (out of 100). Which prompts the question: why have the category at all?

But the more egregious one is the second column. Only 5% of a position player's worth comes from what position he plays? A player's net steals are worth the same as his position? This is completely absurd. The league-average 1B last year hit .285/.365/.495 (BA/OBP/SLG). Pudge Rodriguez's career numbers are very similar - .304/.342/.483 - but, because he's a catcher, he's a future first-ballot Hall-of-Famer. Are you telling me Pudge, for his career, was only 5% better than the league-average first baseman? Secondly, they count all outfielders as "OF" and don't distinguish where they play in the outfield. Center fielders are much more valuable than corner outfielders, because they must cover much more ground. In fact, according to the defensive spectrum, center fielders are more valuable than third basemen as well. So this system unfairly hurts center fielders, and that is obvious in the rankings. Finally, there is no accounting for quality of defense, either. So, according to ESPN's "cutting edge" stat, it doesn't matter at all how good a defensive player you are! Derek Jeter must be thrilled...

Update: According to ESPN.com, they now differentiate between the types of OFs, though the difference still isn't that much, and CF is still behind 3B. But it's a start...
Here's the formula for starting pitchers:
ERA vs. league average weighted by IP: 40%
Wins weighted by win percentage: 20%
Innings Pitched: 10%
Defensive independent bases allowed per IP: 10%
Opponents' BA: 10%
These are actually okay categories - ERA is probably one of the best stats to use here, and it's good to see they compared against league average to account for the DH. IP is important, because pitchers who throw lots of innings help their team get through games, even when those innings aren't especially effective. The defensive independent bases are defined as (4*HR + BB + HBP), which is a very good measure of a pitcher's true effectiveness (i.e., not determined by luck or defense). Unfortunately, they then count opponents' batting average (heavily influenced by luck and defense) at the same weight, which doesn't make much sense. But the one category that really sticks out is wins. Wins are loosely related to how good a pitcher is, but they are at least as related to how good his team is. Case in point:
Randy Johnson, 2006 - 17 wins, 5.00 ERA
C.C. Sabathia, 2006 - 12 wins, 3.22 ERA
Who had the better year?
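The defense-independent category, at least, is easy to express directly. Here's a sketch of the formula the article gives (the stat lines below are hypothetical, just to show the scale):

```python
# "Defensive independent bases allowed per IP" per the article:
# (4*HR + BB + HBP) / IP. Home runs, walks, and hit batters are
# outcomes the defense behind the pitcher can't influence.
def dibs_per_ip(hr, bb, hbp, ip):
    return (4 * hr + bb + hbp) / ip

# Hypothetical stat lines: a homer-prone pitcher vs. a stingy one,
# both over 200 innings:
print(round(dibs_per_ip(30, 60, 5, 200), 3))  # 0.925
print(round(dibs_per_ip(15, 40, 5, 200), 3))  # 0.525
```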
Here's the formula for relievers:

Saves and wins with steep blown save penalty: 40%
ERA vs. league average weighted by IP: 20%
Opponents' BA: 20%
K/BB: 15%
Inherited runners stranded percentage: 5%

ERA is tough to use for relievers - one bad inning can really damage it when you throw so few innings - but there aren't many better metrics to use. K/BB = good stat, Opponents' BA = bad stat. The save is quite possibly the dumbest stat ever; is your best reliever more valuable with one out and the bases loaded in the seventh of a tie game, or with nobody on and a three-run lead to start the ninth? Wins are almost completely random for relievers; often they come when a reliever actually has a bad outing and blows a save, then ends up with the win because the offense bails him out. And what is this "steep blown save penalty"? The formula given is (saves * 2) + wins - (blown saves * 3). That means a closer breaks even at a 60% conversion rate - convert two out of every three save chances and pick up a lucky win or two, and he gets some points. (What are points, anyways? Is 0 a replacement player? Is 50 average? What the hell do these numbers even mean?) I can't find the exact number, but I'd bet that teams with a lead entering the ninth win much more than 60% of the time. You should not get points for saving 7 out of every 10 games.
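You can back that break-even point out of the formula the article gives, (saves * 2) + wins - (blown saves * 3), directly. Setting the occasional lucky win aside, the category is positive exactly when the conversion rate beats 3/5. A quick sketch:

```python
# The reliever category's formula as given in the article:
def save_points(saves, wins, blown_saves):
    return saves * 2 + wins - blown_saves * 3

# Ignoring wins, points are positive when 2*S > 3*BS, i.e. when
# S / (S + BS) > 3 / (3 + 2) = 0.6 - a 60% conversion rate.
print(save_points(saves=6, wins=0, blown_saves=4))  # 0 (exactly break-even at 60%)
print(save_points(saves=7, wins=0, blown_saves=3))  # 5 (7-for-10 still scores points)
```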
Overall, these ratings aren't awful, but they're worse than most of the sabermetric stats in use today. This would be fine if it were just some guy's creation, but ESPN is actually trying to use it; I saw the ratings mentioned on Baseball Tonight as if they actually proved anything. ESPN's baseball writers (Olney, Neyer, etc.) have been noticeably silent on this topic - they don't want to out-and-out disparage their employer's stat, but if they really thought it was good, don't you think they would have mentioned it by now?
Here are the main problems, as I see it:
1. No park factors - Jake Peavy is rated as the best pitcher right now, mostly because he's having an amazing year, but also partly because he gets to pitch half his games in spacious Petco Park. Meanwhile, stats put up in mile-high Coors Field count exactly the same as those in Petco or RFK. It's not that difficult to add park factors into ratings, so it really surprises (and disappoints) me that they didn't go that extra step. This early in the season, single-season park factors aren't worth using because they fluctuate so much, but using last year's factors wouldn't disrupt anything.
2. No defense adjustment - A great defensive shortstop is worth more than a poor one. That's just fact. Unless you're working with this system. (To be fair, VORP doesn't have a defense adjustment, but it at least bills itself as such.)
3. Not enough adjustment for position - Chase Utley, Joe Mauer, and Jose Reyes can tell you that the position you play determines much more than 5% of your value. The difference between center field and the corner outfield spots isn't enough, and the difference overall isn't nearly enough.
4. Too much reliance on teammates - Really, team winning percentage? Runs and RBI? Wins for a pitcher, and saves for a reliever? These are the best stats you can come up with to evaluate an individual's performance?
5. No meaning to the numbers - I still can't figure out what the numbers mean. 0 is the floor and 100 is the highest possible...but where do normal players score? What's a typical MVP-caliber score? 70? 80? What does a score of 0 mean? A replacement player? A bad minor-leaguer? Is 50 average? Is the difference between 60 and 65 big or small? There are way too many unanswered questions.
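Of the problems above, the park-factor one is the most mechanical to fix: you just scale a stat by how much the park inflates or deflates scoring. A simplified sketch - the factor values here are made up for illustration, and real park factors are computed from multi-year data and applied only to the home half of a player's stats:

```python
# Park factor: 1.00 is neutral, above 1.00 inflates scoring,
# below 1.00 suppresses it. (Illustrative values, not published ones.)
PARK_FACTORS = {"Coors Field": 1.15, "Petco Park": 0.90}

def park_adjusted_era(era, park):
    """Scale a raw ERA toward what it would be in a neutral park."""
    return era / PARK_FACTORS[park]

# The same raw 4.00 ERA means very different things in each park:
print(round(park_adjusted_era(4.00, "Coors Field"), 2))  # 3.48
print(round(park_adjusted_era(4.00, "Petco Park"), 2))   # 4.44
```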
Here's a better system, in my opinion:
Batters MLB ranks in:
The ESPN.com "Rating" is an okay stat, and I'm sure it took a lot of effort, but it's not nearly as comprehensive or meaningful as stats such as VORP, EqA, and Win Shares. If it had been invented/popularized five years ago, it might have been useful, but today we have too many better alternatives. There are too many flaws - it only compares relievers to relievers and position players to position players, and it doesn't adjust for park effects - and it just doesn't measure up to the more comprehensive stats.
Baseball Prospectus' Nate Silver has an excellent take on this as well...