With the 2009-10 season’s Win Charts nearly finished, I will have three seasons of recorded Win Charts and corresponding win-production statistics produced using my derivation of Professor Berri’s Win Score metric, which I call “Marginal Win Score”. (For more on Marginal Win Score, visit the Pages listed in the right column of this blog.)
With this much data on the books, this summer I intend to put the metric to the test. One area of concern is consistency. If a player’s MWS48 from one season does not tend to explain his MWS48 the following season, that would suggest that what I am measuring is not “individual skill” but something closer to happenstance, like what I believe the “Adjusted +/-” statistic measures (that statistic is quite inconsistent from season to season).
I was particularly concerned with this issue because my metric, unlike many of the other basketball metrics available, incorporates something that could be loosely referred to as “individual defense” in its final result. The criticism of this usage to date has been that individual defensive statistics are inconsistent and therefore reflect too many “team” aspects to be fairly assigned to individual players. If that were the case, then when I tested this season’s results against last season’s I would expect to find something substantially less than the 80% consistency shown by the original Win Score metric. So far I have actually found greater consistency.
With about 25 team “Win Charts” completed for the 2009-10 season, I’ve tested the consistency of “Win Credit” production for veteran players on 7 of the teams. Here’s how I did it. I took each veteran player’s number of “Game Responsibilities”: his minutes played divided by the team’s total player minutes, multiplied by 82. Those are the number of actual “game results” the player is charged with producing, based on the assumption that every player is responsible for 1/5th of the outcome for every minute he is on the court. I then applied each player’s “Player Winning Percentage” from last season (found on 2009nbawincharts.blogspot) to this season’s GRs and called the results the “expected wins”. Finally, I compared the “expected” Win Credits to the actual “Win Credits” produced by those players this season.
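For readers who want the arithmetic spelled out, here is a minimal sketch of the Game Responsibilities and expected-wins calculation. The player minutes and winning percentage below are hypothetical numbers for illustration, not real 2009-10 data:

```python
# Total player-minutes in a team's season: 82 games x 48 minutes x 5 players
# on the floor (overtime ignored for simplicity).
TEAM_MINUTES = 82 * 48 * 5  # 19,680

def game_responsibilities(player_minutes, team_minutes=TEAM_MINUTES):
    """Player's share of team minutes, scaled to 82 game results.
    A player on the floor every minute gets 82/5 = 16.4 GRs."""
    return player_minutes / team_minutes * 82

def expected_wins(player_minutes, last_season_win_pct):
    """Apply last season's Player Winning Percentage to this season's GRs."""
    return game_responsibilities(player_minutes) * last_season_win_pct

# Hypothetical veteran: 3,000 minutes this season, .600 PW% last season
gr = game_responsibilities(3000)   # 3000 / 19680 * 82 = 12.5 game results
ew = expected_wins(3000, 0.600)    # 12.5 * 0.6 = 7.5 expected Win Credits
```

The comparison step is then just expected Win Credits versus the actual Win Credits the player produced.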
Here were the accuracy results for the teams tested:
1. Orlando Magic…..98.3%
2. Los Angeles Lakers…..97.4%
3. Cleveland Cavaliers…..96.3%
4. New York Knickerbockers…..95.0%
5. Philadelphia Sixers…..94.7%
6. Minnesota Timberwolves…..73.4%
7. Oklahoma City Thunder…..62.9%
What those numbers mean is this: if you had told me in October the exact minutes played for each player on those seven teams’ rosters, then for the players who played more than 100 minutes the previous season I could have told you with 88.3% accuracy how many wins those players would produce. The seven teams are a small sample, but they are fairly representative of the different tiers of success in pro basketball (I’ll do the whole Association shortly).
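For the record, that 88.3% works out to the simple unweighted average of the seven team accuracies listed above (the post doesn’t weight by minutes or roster size, so a plain mean is what I’m showing here):

```python
# Accuracy figures from the seven tested teams, as listed above.
accuracies = [98.3, 97.4, 96.3, 95.0, 94.7, 73.4, 62.9]

mean_accuracy = sum(accuracies) / len(accuracies)
print(round(mean_accuracy, 1))  # 88.3
```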
But a cautionary note. While MWS48 can predict with a pretty high degree of accuracy a given team’s win total, predicting individual player results is a bit dicier. That’s because the variations in production from season to season by individual players tend to even out over an entire roster.
So some players’ production seems to vary, while other players are remarkably consistent. Here are things I have found so far that might make a player more difficult to predict:
1. Was he a rookie last season?
2. Did he switch teams this season?
3. Did he play less than 500 minutes last season?
4. Is he coming off an injury to his legs in either season?
5. Did the team get a new coach this season?
Answering “yes” to any of those makes a player trickier to predict, I think. But for established players who played high minutes last season for the same team under the same coach, predictability is strong.
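The checklist above amounts to a simple any-factor rule. Here is a toy encoding of it, purely my own illustration rather than part of the MWS metric itself:

```python
def hard_to_predict(was_rookie, switched_teams, under_500_minutes,
                    leg_injury, new_coach):
    """Flag a player as tricky to project if ANY checklist factor applies."""
    return any([was_rookie, switched_teams, under_500_minutes,
                leg_injury, new_coach])

# Established veteran: same team, same coach, healthy, heavy minutes
print(hard_to_predict(False, False, False, False, False))  # False

# Second-year player who changed teams over the summer
print(hard_to_predict(False, True, False, False, False))  # True
```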
On a young team with a new, more defensive-minded coach, however, such as the Oklahoma City Thunder, predictability is much lower. But those teams are rare.
All of the above helps explain how I was able to use MWS at the beginning of the season to predict almost precisely the Pythagorean win totals for the Lakers, Spurs, and Blazers, even though some of my assumptions about individual players on each team turned out to be inaccurate.
As I finish more of my work, I will report more of the results.