In the 1960s, Bill Russell of the Boston Celtics won multiple MVPs and was generally considered the game’s best player. He had virtually no offensive game to speak of. I sometimes contend that if Russell played today, with his awkward offensive skills, ESPN’s public opinion makers would persuade the public to believe Russell was an ordinary player of questionable value. My hunch is he would be viewed today in the same light the public views Tyson Chandler or Marcus Camby.
Even less likely to draw critical acclaim today would be former Bullets C Wes Unseld. Yet in 1968-69, Wes Unseld won both the ROY and MVP. Such a feat by a player of Unseld’s obviously limited offensive skill would be unthinkable today. Unseld was slow and he couldn’t hit the broad side of a Winnebago with his outside shot. Plus his usage rate was lower than Luc Mbah a Moute’s. Yet Unseld was an incredible win force because he was efficient with his shooting and very productive in nearly every non-scoring department. Somehow the press saw his value in 1968-69, but there is absolutely no chance they would see it in 2010-11. He would be even less appreciated than Kevin Love.
Has our understanding of the game gone backwards?
In the wake of Derrick Rose’s MVP selection I have been wondering: has our understanding of winning basketball gone backwards since the 1960s? Did they intuitively understand the hidden game better then than we understand it now? Maybe to a certain extent they did. Consider the following.
The top 20 from the 1967-68 season
I just completed win charts and win-loss assessments for every NBA team and player that participated in the Association’s 1967-68 season. (If you want to see the ’67-’68 Win Charts, I will prepare a Page and provide a link right here. Someday I hope to complete every season and use the information to objectively rank the Top 100 players of all-time.)
Based on my results, the following players were the 20 Most Valuable Players during the 1967-68 NBA season (Glossary explaining Chart Statistics):
1968 Press saw it the same as Marginal Win Score?
Unlike what has happened in recent seasons, in 1967-68, the Marginal Win Score results closely mirrored the post season awards and nominations.
The 1967-68 MWS MVP, by a country mile, was Wilt Chamberlain. He had one of the great seasons in basketball history. He was also the press choice for MVP. The 1967-68 Marginal Win Score ROY was Baltimore Bullets PG Earl “The Pearl” Monroe. He was also the press choice for ROY.
Two obvious choices? Maybe, but I don’t think Chamberlain would have been a shoo-in winner in a 2010-11 vote.
Look at the treatment of Dwight Howard, the Wilt Chamberlain of today (and the 2010-11 MWS MVP). Howard has had several dominating seasons in a row, and he has never even sniffed an MVP award. I wonder whether the contemporary press would devalue Chamberlain in the same way it devalues Howard. That might have cost him the MVP award, regardless of his overwhelming stat sheet.
Plus, in 1967-68, there was a more appealing “modern” MVP candidate who played on an equally successful team: the Lakers’ SF/PF Elgin Baylor. Baylor was a high-flying perimeter player who liked to take the ball to the basket. He was also much more offensively and artistically skilled than Wilt, and he carried a better scoring average than Chamberlain (Wilt’s scoring average actually went down in ’67-’68, a death blow to most modern candidacies). In other words, Baylor was exactly the kind of candidate ESPN would have endorsed and shilled for on a nightly basis.
Finally, Chamberlain might have been hurt in today’s voting environment because he won the award the season before (for some reason reporters consider past awards when they choose — it’s a sign they have no clue what they are doing). So Chamberlain would not have been a slam dunk choice in 2010-11.
Earl Monroe, on the other hand, was the highest scoring rookie in ’67-’68. So he probably would have won the award in either era. I’ll concede that point. (Although technically, the Knicks’ Walt Frazier was the most productive rookie per minute played — he simply did not play enough minutes to surpass The Pearl in MWS Value scoring.)
All-NBA Teams and All-MWS similar in ’67-’68
Here is a chart showing what would have been the Marginal Win Score All-NBA 1st and 2nd Teams in 1967-68 compared by position to the actual All-NBA 1st and 2nd Teams from that season.
| All-MWS 1st Team | Actual All-NBA 1st Team |
|---|---|
| L Wilkens | D Bing |
| O Robertson | O Robertson |
| P Silas | E Baylor |
| J Lucas | J Lucas |
| W Chamberlain | W Chamberlain |

| All-MWS 2nd Team | Actual All-NBA 2nd Team |
|---|---|
| J West | J West |
| H Greer | H Greer |
| D DeBusschere | J Havlicek |
| W Reed | W Reed |
| B Russell | B Russell |
Dave Bing: the 60s version of Allen Iverson
The two sets of nominees are remarkably similar, with two notable exceptions: Detroit’s PG Dave Bing would have been nowhere near the All-MWS teams, yet he was first team All-NBA; and St. Louis SF/PF Paul Silas was an easy nominee using MWS, whereas anyone voting for him in ’67-’68 would have had his press credentials pulled.
Silas was a classic underrated player. He was a terrific win producer, yet Silas went unappreciated throughout his career, despite the fact that he played a central role on three world championship teams (you could argue that he cost the Bucks the championship in 1973-74). That’s probably because he was a rebounding force, not a scoring force.
Dave Bing, on the other hand, appears to have been the most overrated player of the 1960s — the Allen Iverson of his day. Bing is the one choice on the two teams (plus on the All-Star teams listed below) whose MWS average in 1967-68 did not merit consideration. Bing’s MWS average per 48 minutes was (-0.35), which put his win production for the Detroit Pistons just above that produced by the median NBA player that year.
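The per-48 figure is just simple rate math: season total, divided by minutes played, scaled to 48 minutes. Here is a minimal sketch; the function name and the sample numbers are my own illustrations, not the author’s actual MWS bookkeeping:

```python
def mws_per48(total_mws, minutes_played):
    """Convert a season total Marginal Win Score into a per-48-minute rate."""
    return total_mws / minutes_played * 48

# Hypothetical example: a season total of -24.0 MWS over 3,292 minutes
# works out to roughly the -0.35 per-48 rate discussed above.
rate = mws_per48(-24.0, 3292)
```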
So how did he make 1st Team All-NBA, and how did he make the All-Star team? He was the leading scorer in 1967-68.
So perhaps scoring was still a bit overvalued back then. But I don’t know. Take a look at the East and West All-Star teams from 1967-68:
| East | MWS/48 | West | MWS/48 |
|---|---|---|---|
| D Bing | -0.35 | M Rahmin | -0.24 |
| D Barnett | 0.08 | E Baylor | 1.38 |
| W Chamberlain | 7.34 | Z Beaty | 0.39 |
| D DeBusschere | 1.53 | B Boozer | -1.19 |
| H Greer | 0.83 | B Bridges | 1.22 |
| J Havlicek | 0.87 | A Clark | 0.53 |
| G Johnson | 1.82 | J King | 0.01 |
| S Jones | 0.94 | D Kojis | 0.42 |
| W Reed | 1.91 | R LaRusso | -1.86 |
| J Lucas | 3.89 | C Lee | 0.43 |
| O Robertson | 3.19 | N Thurmond | 3.07 |
| B Russell | 4.44 | J West | 3.51 |
Based on the MWS Top 20, the only egregious omissions were the aforementioned Paul Silas, and New York Knick center Walt Bellamy. But Bellamy may have been a victim of circumstance. He had both Russell and Chamberlain in his conference.
The interesting thing about the All-Star teams is the number of inspired choices. I don’t know that “inside” players like Bill Bridges or Nate Thurmond or Gus Johnson would make a 21st Century All-Star team. None of them had much scoring ability. Yet they made the team in 1967-68, and deservedly so.
So did they understand the game better back then?
I think, on balance, the opinion shapers of the 1960s understood the “hidden game” of basketball better than their modern counterparts (with exceptions made for Charles Barkley). Yes, they still admired “volume scoring” in the 1960s, but I think the players of that day helped shape a broader appreciation of the wider variety of tasks that win basketball games.
You see glimpses of this broader understanding when “Russell supporters” among the era’s players explain why Russell was a better player than Chamberlain (at this moment I am thinking specifically of quotes from John Havlicek and others I read in Bill Simmons’ “Book of Basketball”. I will excerpt them in a later update of this post). Invariably the Russell supporters will mold their explanation into some form of the argument that “stats don’t tell the story”. What they clearly mean to argue is “scoring statistics do not necessarily reflect a player’s win production”.
That is the hidden game of basketball in a nutshell. And for some reason they understood it better at an earlier stage in the game’s development, and somewhere along the way we just lost sight of it altogether. Now public opinion is shaped and driven by unreconstructed wankers like Ric Bucher, who proudly and rightly credits himself for playing the role of KingMaker in the Derrick Rose MVP selection. We’re moving backwards.
If you compare the “Value” column from the 1967-68 Top 20 MVPs to the “Value” column from this season’s Top 20 MVPs, you will notice that the “Value” cut-off point for making the 1967-68 list (20th player: John Havlicek — Value: 9.7) is far lower than the “Value” cut-off point for the 2010-11 list (20th player: Al Horford — Value: 14.3). In fact, John Havlicek’s 9.7 Value would be considered somewhat pedestrian in 2010-11.
I wondered what this told me about the two eras. At first I speculated that it indicated a distribution of player winning percentages skewed far differently than the one we have today. Perhaps the NBA featured a weaker overall pool of talent in the 1960s.
Actually, it didn’t mean that at all. After analyzing the situation, I found that the W% distribution in 1967-68 was almost identical to the one I found this season. The W% posted by the median NBA player in 1967-68 was almost identical to the W% posted by the median player in 2010-11 (0.420 in 2010-11, 0.415 in 1967-68). So the distribution of talent was pretty much the same then as it is now.
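Comparing medians like this is a one-liner; here is a sketch with hypothetical per-player winning percentages standing in for the full league lists (only the reported medians, 0.415 and 0.420, come from the analysis above):

```python
from statistics import median

# Hypothetical W% lists; the real analysis used every player in each season.
w_1968 = [0.050, 0.300, 0.415, 0.600, 0.800]
w_2011 = [0.100, 0.350, 0.420, 0.550, 0.900]

median_1968 = median(w_1968)  # 0.415
median_2011 = median(w_2011)  # 0.420
```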
So what was I looking at? Finally, I turned my brain back on and found the error in my reasoning.
I should have expected a far lower 20th player “Value” score in 1967-68 because I was nominating the same number of players (20) in each season, but one season had many fewer candidates than the other. Once I remembered this, the explanation followed.
If W% were similarly distributed in both seasons and if the number of player participants was much lower in 1967-68, then in order to fill out 20 MVPs in 1967-68 I needed to dig much lower down in Value score to do so. (If you grade two separate classes on the same kind of grading curve, the 20th best score in a class of 200 will almost always be higher than the 20th best score in a class of 100.)
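The grading-curve claim is easy to check by simulation: draw two “classes” of different sizes from the same score distribution and compare their 20th-best scores. A minimal sketch, with arbitrary distribution parameters chosen purely for illustration:

```python
import random

random.seed(42)

def kth_best(scores, k):
    """Return the k-th highest score in a list (k=1 is the top score)."""
    return sorted(scores, reverse=True)[k - 1]

# Repeatedly draw a class of 200 and a class of 100 from the same
# distribution, and count how often the bigger class has the higher
# 20th-best score.
wins = 0
trials = 500
for _ in range(trials):
    big = [random.gauss(70, 10) for _ in range(200)]
    small = [random.gauss(70, 10) for _ in range(100)]
    if kth_best(big, 20) > kth_best(small, 20):
        wins += 1

# wins / trials comes out near 1.0: same talent pool, but the larger
# class almost always posts the higher 20th-best score.
```

The same logic explains the lower 1967-68 cut-off: a fixed-size top-20 list dips further down the curve when the pool of candidates is smaller.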
The Value score differences between the 20th ranked MVPs in the two different seasons did not contain any insight at all into the comparative quality of player talent. By almost concluding that it did, I nearly made a classic “real numbers to percentages” critical reasoning error.