Q&A With Kyle Wright and “The NBA Top to Bottom”

The following is a question-and-answer session I recently conducted with journalist Kyle Wright. Kyle just wrote “The NBA Top to Bottom,” a fascinating new book in which he ranks every single NBA team in history. For the casual fan and the NBA buff alike, there is plenty of data to digest and have fun with. Kyle was nice enough to sit down and talk with us for a few moments about his book, his methodology, and some of the interesting issues it raises:

-Why don’t you tell everyone who you are and how you came up with the idea of ranking every NBA team of all time?

My name is Kyle Wright. I currently work on the sports staff at the Pensacola (Fla.) News Journal. I grew up in Indiana, so I am a big fan of the Indiana Pacers.

Indiana happens to be the home base for computer sports ratings godfather Jeff Sagarin, and for one of the companies that calculates the college basketball Ratings Percentage Index. As a result, I got to see regular examples of computer sports ratings systems as far back as the 1980s, before such ratings became household terms.

I took interest in such ratings systems because the teams I supported or played for generally were not good enough to be ranked in the top 20 or top 25 in human polls. A computer system was the only way to evaluate how the 6-14 basketball team I played on as a senior in high school stacked up against the rest of the state.

The idea to rank every team in NBA history initially was supposed to be just a chapter of a book. My original idea was to create one formula to rank every player in NBA history, one formula to rank all of the coaches, and one formula to rank all of the teams. My formula for rating individuals hit some snags, mostly due to incomplete statistical records from the league’s first 25 years. I already had my formula for rating teams, so I decided to proceed and focus on the team-by-team ratings.

-How did you come up with the POST formula? Did it require a lot of tweaking, or was it something that just flowed from your fundamental assumptions?

I began developing my own system in the early 1990s. I went through about four different methods before I hit upon my current system in 1995.

I call the system the POST formula. POST stands for “Points Over (or under) a Standard Team.” It also is a sensible name for ratings for football, basketball, soccer, hockey, and any other sport with a goalpost!

I measure teams by how they would fare against a “standard”, or average team. If my system gives a team a rating of +10.0, then that team would be expected to beat an average team by 10 points. If my system gives a team a rating of -10.0, then that team would be expected to lose against an average team by 10 points. The key components of the formula are strength of schedule and point differential.

This system allows for comparison between teams from different seasons because the definition of “average” does not change much from year to year. If I tell you the 1951-52 Philadelphia Warriors went 33-33, you most likely would think, “Oh, they were an average team.” If I tell you the 2000-01 Indiana Pacers went 41-41, you most likely would think, “Oh, they were an average team.” Those “average” teams are the starting point for comparing the rest of the teams.
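(A quick aside: Kyle doesn’t spell out the POST arithmetic in this interview, so the sketch below is not his actual formula. It simply combines the two components he names, point differential and strength of schedule, in the common “simple rating system” style: a team’s rating is its average scoring margin plus the average rating of its opponents, solved by iteration.)

```python
# Minimal POST-style rating sketch -- an assumption, not Kyle Wright's formula.
# Each team's rating = average point margin + average opponent rating,
# iterated until the values stabilize.

def post_style_ratings(games, iterations=200):
    """games: list of (team_a, team_b, points_a, points_b) tuples."""
    teams = {t for g in games for t in g[:2]}
    ratings = {t: 0.0 for t in teams}
    for _ in range(iterations):
        updated = {}
        for team in teams:
            margins, opp_ratings = [], []
            for team_a, team_b, pts_a, pts_b in games:
                if team == team_a:
                    margins.append(pts_a - pts_b)
                    opp_ratings.append(ratings[team_b])
                elif team == team_b:
                    margins.append(pts_b - pts_a)
                    opp_ratings.append(ratings[team_a])
            updated[team] = (sum(margins) + sum(opp_ratings)) / len(margins)
        ratings = updated
    return ratings

# A +10.0 rating means the team would be expected to beat a perfectly
# average (0.0-rated) opponent by 10 points.
```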

The key point to remember with my system is that it measures dominance, which is not the same as measuring talent or ability. For example, I don’t think the 1946-47 Washington Capitols, who finished 49-11, would be capable of beating the 2006-07 Memphis Grizzlies, the team that finished with the NBA’s worst record last season. However, my system ranks the 1946-47 Capitols as one of the best teams of all time and ranks the 2006-07 Grizzlies 983rd all-time because Washington was much more dominant during the 1946-47 season than Memphis was last season.

The system has identified 56 percent of the NBA’s champions correctly (34 of 61), including the Spurs last season. Since the team with the NBA’s best record wins the title just 48 percent of the time (29 of 61), I think the formula is a useful tool.

-One thing I notice with the POST formula is that it does not seem to take into account post-season dominance because it was designed to measure the teams against their average contemporaries.  Did you ever consider adding a postseason element to the formula?

I very much wanted to add postseason results, but could not do so because I would get a lot of situations where teams could get better ratings by losing a playoff series, and situations where teams could get better ratings by missing the playoffs.

An example of the former would be the 1974-75 Washington Bullets. Washington beat Boston by six points in Game 7 of the Eastern Conference finals, but then got swept in the NBA Finals by a less-than-stellar Golden State Warriors team. If my system counted playoff results, I would have a situation where Washington would have been better off losing Game 7 against Boston by one point and avoiding the additional losses against Golden State.

An example of the latter would be the 1995-96 Miami Heat. The Heat snuck into the playoffs as a No. 8 seed, but got blasted by Chicago in three games in the first round. If I counted playoff results, Miami’s rating would wind up worse than it would have had the Heat simply missed the playoffs. Miami would get some strength of schedule reward for playing the Bulls, but not enough to make up for getting beat by an average of 23 points per game.
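(To make the Heat example concrete, here is a toy calculation with hypothetical numbers, using the same simplified “margin plus opponent rating” idea sketched earlier rather than Kyle’s actual formula.)

```python
# Toy illustration (hypothetical numbers, not Kyle's formula) of why counting
# the 1995-96 Heat's playoff games would hurt them: the strength-of-schedule
# credit for facing a dominant opponent does not offset a -23 average margin.

regular_season = [(+1.0, 0.0)] * 82   # (margin, opponent rating) per game, hypothetical
playoffs = [(-23.0, +12.0)] * 3       # three blowout losses to a roughly +12 opponent

def average_rating(games):
    return sum(margin + opponent for margin, opponent in games) / len(games)

print(average_rating(regular_season))             # +1.0
print(average_rating(regular_season + playoffs))  # about +0.58 -- worse than missing the playoffs
```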

If I were evaluating only the 61 NBA/BAA champions and no one else, I would count playoff results. There would be no danger of a team getting rewarded for losing a playoff series because we would know that none of them actually did lose a playoff series. It would be a fair comparison between those 61 teams. In fact, I might do that and post the results on my Web site.

Since I am evaluating all 1,153 teams at once, under my system it would not be fair to include playoff results and make it possible to punish teams for winning a playoff series.

-You also seem to reject cross-era comparisons of teams as kind of an apples-and-oranges problem. Did you ever consider wrestling with that issue, too?

I do not think my system is appropriate for comparing talent level between eras. In fact, I’m very skeptical that any analysis based on game scores and/or individual statistics would be appropriate for comparing talent level between eras. I think such an exercise first would require measuring physical attributes like height, weight, quickness and jumping ability from era to era. Then, one could find out how those variables correlate to team success. (Do the tallest teams win most of the time? The quickest teams? The ones that jump the highest?). After all of that, then perhaps a person could blend those results with something like my formula to compare raw talent level from era to era.

That said, I still think my book is useful for comparing teams’ dominance from era to era. I’ll use the 1946-47 Washington Capitols as an example again. Could they compete in the modern NBA? I doubt it. They probably would be too short and too slow. Were they more dominant than any team since the late-1990s Chicago Bulls? Without question.

-Another issue is that dominance seems to have varied by era. The early 1970s and the mid-1990s had several really good and really bad teams, while the 1960s and late 1970s seem seriously mediocre. Were you concerned that these trends over time might skew the results of POST?

I initially was concerned that a lot of the really dominant and not-so-dominant teams seemed to emerge from expansion seasons. My concerns were somewhat lessened when I studied each expansion year one-by-one. For every expansion season that seemed to create a “super team” (the 1970-71 and 1995-96 seasons come to mind quickly) I can find expansion seasons where, if anything, the top teams got less dominant (the 1968-69 season and the two late-1980s expansion seasons come to mind). It seems that expansion makes it more likely for a dominant team to emerge, but it is not automatic.

As for the early 1960s and late 1970s … were they times of great parity, or were they times of great mediocrity? Since my system deals with dominance and not overall talent level, I don’t claim to have an answer. I would not say my system indicates “serious mediocrity” during those eras. Rather, I would say my system indicates that not many truly dominant teams emerged from those eras. I don’t think the eras themselves are unduly punished. The 1977-78 Portland Trail Blazers were on pace to be a top 20 team in my system before Bill Walton’s injury. The 1961-62 Celtics are my 10th-best NBA champion. The Celtics of the 1960s finish in my top five when I set the computer to measure ratings over a 10-year span. They come out No. 1 when I measure over a 15-year span. Teams like the 1985-86 Celtics, 1969-70 Knicks and 2006-07 Spurs are rewarded for doing well in competitive NBA seasons.

-Getting past methodology, did you have a theoretical top ten NBA teams of all time in your head?

I grew up assuming the three best teams in NBA history were the three most recent teams to hold the single-season victories record – the 1966-67 Philadelphia 76ers, the 1971-72 Los Angeles Lakers and the 1995-96 Chicago Bulls. After previewing the numbers before putting the data into my system, it became clear to me that the 1970-71 Milwaukee Bucks were going to be in a three-horse race for the No. 1 spot in my book, along with the 1971-72 Lakers and the 1995-96 Bulls.

-How did your results jibe with your gut feelings? What were your biggest surprise teams?

My hunch was that the 1971-72 Lakers were going to wind up in the top spot, but when the computer spit out the final numbers, the 1970-71 Bucks got the No. 1 ranking. As it turns out, the 1971-72 Lakers had a slightly weaker schedule than that of the 1970-71 Bucks and 1995-96 Bulls.

There is a challenge arguing for the 1970-71 Bucks as the most dominant team of all time because they are not as well-known as some other great champions, and because the franchise has just one total title. (Many argue that the Bucks should have won more titles, yet no one seems to hold this against the 1971-72 Lakers. The Laker franchise has 14 titles, but the core of the 1971-72 team accounted for just one of them).

The 1970-71 Bucks’ numbers stack up favorably against those of any team. I think of a team’s statistical profile the same way the public might think of a DNA profile. To me, the 1970-71 Bucks’ statistical “DNA” is that of a 70-win team.

My biggest surprise was the 2006-07 Spurs cracking the top 10. When the playoffs started last season, a lot of people didn’t think San Antonio was one of the top two teams in the league, much less one of the top 10 teams of all time. Fortunately for me, the Spurs backed up their lofty ranking by winning the title. Unfortunately for me, that also meant I had to re-write the Top 10 chapter at the last moment. I hit the “send” button to send my manuscript to the publisher literally the moment San Antonio wrapped up Game 4 of the NBA Finals.

-In terms of good showings, I was pretty surprised by how well the more recent Houston teams did versus the Hakeem title teams.

My two key components are point differential and strength of schedule. The recent Houston teams had better point differential than the franchise’s championship teams of the mid-1990s. The Western Conference clearly is better now than it was in the mid-1990s. As a result, the 2006-07 and 2004-05 Rockets rank as my most dominant Houston teams of all time.

As you might imagine, this did not go over well in a Houston newspaper blog. My response is this:

Sometimes, dominance doesn’t guarantee greatness. A prime example is the Houston Rockets of the 2000s.

And sometimes, greatness need not require dominance. A prime example is the Houston Rockets of the 1990s.

-I was also pretty surprised to see that (a) the Suns had a ton of teams in the top 200 or 300 and (b) the Barkley teams didn’t fare so well in that group.

I hadn’t really noticed until you pointed it out, but I guess Phoenix teams do come out rather well in this system. I would say there are two reasons. First, the Suns have been pretty good over their franchise history. They haven’t won a title, but they seem to be masters of the 55-win season. Second, I admit my system tends to reward offense-oriented teams a little more than defense-oriented teams. And Phoenix teams traditionally tend to score a bunch of points.

As for why the Barkley teams don’t fare so well … the only Barkley-led Suns team to put up a truly impressive point differential was the 1992-93 version. Also, Barkley played for the Suns at a time when the West was noticeably weaker (blame the Mavericks and Timberwolves for much of that), so his Phoenix teams don’t get much of a boost in the strength of schedule department.

-If the 1970-71 Bucks played the 1995-96 Bulls 100 times on a neutral court, what’s your guess at their record?

If you mean putting the two teams in time machines, meeting up in 2007 and putting them on a neutral court … I think the 1995-96 Bulls would slaughter them. I am a big believer that the average NBA player has gotten more and more athletic over the course of history, and that would give Chicago a big advantage. Let’s say the Bulls would win 87 of 100 games, since that was Chicago’s overall record during the 1995-96 season. (No, I don’t really think it would be that one-sided, but I do think the Bulls would win at least 60 of the games.)

Now, if you are asking who would win more games if both teams played an “average” team from their own season 100 times, I would take the Bucks. We’ll say the Bulls (again) would win 87 games, but Milwaukee would win 88.

-Have you received any strong visceral reaction from historical NBA fans?  What most sticks out to you so far?

The most memorable reaction was from a reader on a Houston Chronicle blog message board, who suggested I need to stop writing books and “go back to eating frosted flakes.” I’m not quite sure what that means, but I’m pretty sure it wasn’t a compliment.

On a rational level, there have been two main debates.

The first is whether teams that don’t win titles can be considered more dominant than those that do. To me, it is clear the 1993-94 Seattle Supersonics were more dominant than the 1978-79 Sonics. Yet many people will say the 1978-79 team has to have a better rating since they won a title and the 1993-94 team didn’t.

The other debate is one you brought up, the difference between eras. How much did expansion dilute the league for teams like the 1970-71 Bucks and the 1995-96 Bulls? Were the early 1960s and late 1970s times of great parity or great mediocrity? My answer is that teams like the Bucks and Bulls were dominant even if their competition did stink, and regardless of whether the early 1960s and late 1970s were balanced or mediocre, those eras didn’t produce many dominant teams.

-Are there any plans for a follow up?

I’ll definitely update the top-to-bottom NBA rankings on my Web site (www.sportsfromtoptobottom.com) after each season. My next book project will be a similar top-to-bottom concept for the NFL. The number crunching is done. The writing part will take me about three years, so we’re looking at 2011 at the earliest before I can get a product onto the shelves. I also write a “By the numbers” stats-based column for my newspaper. An example of my weekly column: before the baseball playoffs, I explained Pythagorean wins and predicted a Red Sox victory in the World Series. The column appears on Fridays at pnj.com.
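(For readers who haven’t run into the term, Pythagorean wins is Bill James’ method of estimating a baseball team’s expected record from its runs scored and runs allowed. The sketch below uses the classic exponent of 2; the exact exponent Kyle’s column used is not stated here, so treat this as the textbook version with made-up run totals.)

```python
# Bill James' Pythagorean expectation: expected wins from runs scored and
# runs allowed. Exponent of 2 is the classic form; later refinements use ~1.83.

def pythagorean_wins(runs_scored, runs_allowed, games=162, exponent=2):
    win_pct = runs_scored ** exponent / (runs_scored ** exponent + runs_allowed ** exponent)
    return win_pct * games

# Hypothetical season: 850 runs scored and 700 allowed projects to about 97 wins.
print(round(pythagorean_wins(850, 700)))
```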

To those who have already read the book, thank you for your support and interest. To those interested, check out my Web site at www.sportsfromtoptobottom.com for a sneak preview. I think and hope you’ll enjoy what you see.

You can purchase a copy of “The NBA Top to Bottom” here
