After completing my NBA Win Charts, I ran some regressions. I discovered something interesting.
The Marginal Win Score correlations I posted after last season, each and every one of them, were wrong. I don’t know what the hell happened, but they were wrong. I believe the results might have been polluted by sample bias.
The true correlation between MWS wins and losses and pythagorean wins and losses is 0.988, meaning MWS and pythagorean wins are nearly perfectly correlated. Likewise, the true correlation between MWS wins and losses and actual wins and losses is 0.965. That suggests MWS does a very good job of explaining wins and losses.
However, it turns out MWS does a poor job of predicting future performances, much worse than I previously thought. I ran regressions for both the 2010-11 season and the 2009-10 season and came up with the very same correlation coefficient each time: 0.671. That’s pretty weak. Since 0.671 squared is about 0.45, it means a player’s MWS and winning percentage from the preceding season explains only about 45% of the variance in his current MWS and winning percentage.
Since Wins Produced, the parent metric to MWS, has a correlation coefficient of 0.83 under similar analysis, the results I calculated suggested at first blush that Opponent Win Score (let’s call it “Defensive Win Score”) might either be based upon luck or be dependent upon teammates.
But after considering the matter, I came up with a possible third explanation. Perhaps the low correlation instead reflects inconsistent defensive motivation or desire.
After all, defense in basketball is a thankless and virtually uncompensated task. As a result, it would seem the motivation to play defense has to come from a non-monetary source, like personal pride… or the demands of the coach.
Do Coaching Changes Help Explain Defensive Fluctuations?
To test my hypothesis that defensive win scores are inconsistent because different coaches demand different levels of effort, I calculated each NBA team’s overall Defensive Win Score from 2009-10 and 2010-11 and then determined the correlation coefficient. It was 0.613. About what I expected. Not especially strong.
Next, I isolated teams that changed coaches between 2009-10 and 2010-11 from teams that did not and ran separate regressions for each.
The correlation coefficient for the teams that maintained the same coach, even if they completely changed over the core of their roster (like the Miami Heat), was 0.832, a very strong correlation and essentially the same coefficient Professor Berri found for Wins Produced.
On the other hand, the correlation coefficient for teams that changed coaches, whether those teams maintained their core group of players (like the Chicago Bulls) or did not, was a measly 0.121.
There it was! The inconsistency in MWS appears to be explained in large measure by the different approaches employed by different coaching regimes. When a team maintains the same coach, even if that team employs a new group of personnel, Defensive Win Score is as consistent as Offensive Win Score. On the other hand, when a team or player changes coaches, there seems to be no consistency with previous defensive results, even if virtually the same players are employed (Bucks fans witnessed this after the team changed over to Scott Skiles).
Here’s what I take away from that (and I may be wrong). Certain coaches demand defensive accountability. Certain coaches establish defensive atmospheres. Others take a more relaxed attitude (Kurt Rambis). Players respond to these pressures (obviously there are other motivating factors as well, such as a team’s championship prospects, etc.).
I will expound on these thoughts in subsequent posts.