Now that I've finished all of the Win Charts for every 2009-10 NBA team (which you can access anytime in the "Pages" section of this blog), I've been trying to get some answers about the metric and about what it suggests about certain classes of players.
First of all, the overall correlation coefficient between a player's winning percentage last season (the number of wins created per 241 floor minutes) and the win credits he has produced this season is running at .89, so the metric is proving pretty reliable. I was particularly concerned about this issue because there have been suggestions that the Oppo Win Score is based more on luck or good teammates than on skill. That doesn't appear to be the case. In fact, if this post on Sabermetric Research is to be believed, MWS is proving slightly more reliable than all of the other metrics. (And, like Win Score, MWS doesn't need to be "adjusted to match the team's offensive and defensive efficiency" the way the other metrics mentioned in the post did. If you have to adjust a metric to match wins, what good is it? And "Alternate Win Score" isn't even a real metric! It's just somebody screwing with a proven metric to get the results HE thinks the metric ought to produce, arbitrarily saying "well, I think this is worth that, and this is worth that." How can an offensive rebound be worth more than a defensive rebound anyway? What is a defensive rebound other than an offensive rebound denied? If an offensive rebound is such a valuable prize, then denying it should carry equal value.)
Anyway, enough ranting. On to the interesting questions.
I wanted to know two things: (1) When a player changes teams, how does his production with his new team correlate with his past production for his old team? And (2) how does a sophomore's performance correlate with his rookie performance?
To answer the first, I took a sample of 30 players who switched teams last offseason and compared the wins they would have produced this season had they played at last season's level to the wins they actually produced. I was interested in this question primarily because I trumpeted Ramon Sessions all last season, and this season, after he moved to Minnesota, he was awful; so was Ben Gordon after moving from Chicago to Detroit. I wanted to know if this was common.
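The projection step is simple enough to sketch in a few lines. This is my own illustrative code, not the actual dataset: player winning percentage here is wins created per 241 floor minutes, as defined above, and the sample numbers are made up.

```python
# Project this season's wins at last season's production rate.
# "Winning percentage" = wins created per 241 floor minutes, so the
# projection just scales that rate by this season's minutes played.
def projected_wins(last_season_win_pct, minutes_this_season):
    return last_season_win_pct * (minutes_this_season / 241)

# Made-up example: a player who created 0.150 wins per 241 minutes
# last season and has logged 1,800 minutes this season.
print(round(projected_wins(0.150, 1800), 1))  # about 1.1 wins
```

The gap between that projection and the wins the player actually produced is what the correlation below is measuring.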
The correlation coefficient for traveling players was .79, much weaker than the overall NBA correlation to past performance, but still decently strong. It means that only 62% of the variance in win credits produced by a player on his new team can be explained by his performance in his last season with his old team. So it's still a bit of a Pig in a Poke.
Finally, I ran the same calculation for the Sophomore Class to see how their second-year performance correlated with their debut seasons. There the correlation was even weaker, .69, meaning only 48% of the variance in sophomore wins can be explained by rookie wins.
But for Brandon Jennings fans there was some good news. Of the 24 sophs I used, 13 improved upon their rookie performances, while only 9 declined (two stayed exactly the same). And the risers outweighed the decliners, with the overall average difference being +14%.
It seems that a few things can make a team's outlook volatile: (1) a lot of new veteran transfers; (2) a lot of players in their first three seasons; (3) a new coach; (4) a lot of older players; or (5) key players coming off injury.
If you have those things, particularly the last two (and the Pistons had almost all five), things can be dicey.
If you have only the first, and if the veteran transfers are well-established high producers, like the ones the Cleveland Cavaliers received, things are fairly easy to predict. Cleveland's combined player wins produced this season had an astonishing .98 correlation to past performance. That's why established teams stay on top.