March 16, 2003

Final notes on this year's men's selection process:

Final notes on this year's women's selection process:

March 15, 2003

With only the four final championship games to play, here is an update to my predictions.

What changed:

Women's Tournament:

March 12, 2003

With most of the midmajor tournaments in the books and the major tournaments gearing up, here's a first stab at how the NCAA tournament field will look:

In general, it will be tougher than usual for teams to secure at-large berths. The problem is the overall weakness of midmajor teams. Last season there were plenty (about 10) midmajor teams that would have had good shots at at-large berths; this year there are only three (and two are in the MVC). This means that there are roughly five more "bad" teams given automatic bids, and thus there are fewer spots available for "good" teams. This is slightly mitigated by Georgia's withdrawal from the postseason, but nevertheless means that there are going to be some teams that would have made it last season but are left out. Note that the above total is not 65; there are 8 midmajor conference tournaments pending, none of which will be won by teams I consider likely at-large invitees.


February 22, 2003

With Selection Sunday only a few weeks away, the college basketball selection process, and the RPI in particular, is beginning to draw attention. ESPN.com writer Andy Katz recently ran a piece critical of the RPI's listing of BYU ahead of Maryland and Pitt. This is certainly a valid criticism, but unfortunately his analysis is largely incorrect. Below I describe the various aspects of the RPI that can cause "problems", in decreasing order of importance.

  1. Margin of Victory. The RPI ratings are calculated solely from knowledge of who beat whom, not by how much. I fully agree with this philosophy: using scores in an important rating system causes some coaches to run up scores, thus invalidating the use of scores as an accurate and unbiased rating factor. So while the NCAA is correct in using only wins and losses, it means that the resulting ratings do not use all available data, and thus do not consider everything we see when we watch games. The difference is clear from my own ratings. Ignoring margin of victory, I rank Pitt #13, Maryland #31, and BYU #35. Including scores, the ratings instead become Pitt #5, Maryland #9, and BYU #26. Maryland's remarkable jump is due to the fact that their average winning margin has been 23 points while their average losing margin has been 8 points; they are better than their record would indicate. Given that people have a good intuitive sense of who is better than whom, the lack of score data is the #1 reason why somebody would do a double-take at the RPI ratings.
  2. Schedule Strength. To a reasonable approximation, the RPI computes schedule strength as a straight average of the RPI ratings of a team's opponents. This is not the best approach. You can find a detailed explanation on my predictive ratings page; the bottom line is that a team's schedule strength should be weighted in favor of games against comparably-matched opponents. The reason is that a game against an opponent you have virtually no chance of beating (or no chance of losing to) tells you nothing about your team's strength. The RPI does not recognize this, so a team improves its schedule strength as much by playing the #162 team instead of the #323 team as it would by playing the #1 team instead of the #162 team. Realistically, a top-flight program should have virtually no problem dispatching either the #162 or the #323 team, while the #1 team would be a tough challenge (a small illustration of this kind of weighting follows this list). The end result is that, in my ratings, BYU has the #58 schedule strength, compared with #10 for Maryland and #13 for Pitt. As BYU's record was comparable to Maryland's and worse than Pitt's, this alone would have been enough to put the three teams in a more "reasonable" order.
  3. Schedule Approximation. The previous two issues are well understood and generally are adjusted for correctly by the committee to produce reasonable selections and seeds. What is almost universally overlooked is that the RPI cuts off the chain of opponents after opponents' opponents, so its accuracy rests on an assumed correlation between the average record of a team's opponents and the average records further down the chain: its opponents' opponents' opponents, its opponents' opponents' opponents' opponents, and beyond (the cutoff is visible in the RPI sketch after this list). Overall the approximation is a good one: because a team plays over half of its games against conference foes, there is a good correlation between the strength of its opponents and that of its opponents' opponents' opponents and beyond. In individual cases, however, this can be a very poor approximation, and a team will profit from playing opponents who themselves have weak schedules. What is worse, the selection committee does not appear to be aware of this fact, and is thus unlikely to make mental adjustments for teams benefiting from or hurt by this inaccuracy.
  4. Game Location. The most common (and most overrated) complaint about the RPI is that it does not take game locations into account. In reality, a typical opponent is 0.033 RPI points more difficult on its home floor and 0.033 RPI points easier on yours. Since a typical tournament-bound team will play over 30 games during the season, its RPI is lowered by about 0.001 points for every road game and improved by about 0.001 points for every home game. On average, this equates to one spot in the RPI rankings per extra game at home or on the road. What is particularly interesting is that Katz's article featured the BYU coach complaining about this aspect of the RPI. Apparently he needs some math lessons, as BYU had played 11 home games, 8 road games, and 4 neutral-site games at the time the article was written.
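
To make items 2 and 3 concrete, here is a minimal sketch of how the 2003-era RPI is put together, assuming the commonly cited 25/50/25 weighting of winning percentage, opponents' winning percentage, and opponents' opponents' winning percentage (the exact NCAA bookkeeping may differ in small details, and the team names and game results below are placeholders). Note what never appears: margins, game sites, and anything beyond opponents' opponents.

    from collections import defaultdict

    # Placeholder results: (winner, loser).  Margin of victory and game site
    # never enter the calculation at all.
    games = [
        ("Pitt", "Maryland"),
        ("Maryland", "BYU"),
        ("BYU", "Utah"),
        ("Utah", "Pitt"),
    ]

    opponents = defaultdict(list)
    for w, l in games:
        opponents[w].append(l)
        opponents[l].append(w)

    def wp(team, exclude=None):
        """Winning percentage, optionally ignoring games against `exclude`
        (the RPI drops head-to-head games when averaging opponents' records)."""
        w = sum(1 for a, b in games if a == team and b != exclude)
        l = sum(1 for a, b in games if b == team and a != exclude)
        return w / (w + l) if w + l else 0.0

    def owp(team):
        """Opponents' winning percentage: a straight average over the schedule,
        so every opponent counts equally no matter how lopsided the matchup (item 2)."""
        return sum(wp(opp, exclude=team) for opp in opponents[team]) / len(opponents[team])

    def oowp(team):
        """Opponents' opponents' winning percentage.  The chain stops here;
        anything deeper is simply assumed to follow along (item 3)."""
        return sum(owp(opp) for opp in opponents[team]) / len(opponents[team])

    def rpi(team):
        return 0.25 * wp(team) + 0.50 * owp(team) + 0.25 * oowp(team)

    for team in sorted(opponents, key=rpi, reverse=True):
        print(f"{team:10s} {rpi(team):.4f}")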

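For item 2, here is one simple way to put the "comparably-matched opponents" idea into code. The logistic win-probability model and the p*(1-p) game weight below are placeholders chosen for illustration, not the exact formula from my predictive ratings page, and the ratings fed into the example are made up.

    import math

    def win_prob(rating_gap, scale=100.0):
        """Chance of winning given (my rating - opponent rating); `scale` is an
        arbitrary placeholder for how many rating points make up one 'class'."""
        return 1.0 / (1.0 + math.exp(-rating_gap / scale))

    def weighted_sos(my_rating, opponent_ratings):
        """Schedule strength as a weighted (rather than straight) average of
        opponent ratings.  Each game is weighted by p*(1-p): near-certain wins
        or losses carry almost no weight, toss-up games carry the most."""
        weights = []
        for opp in opponent_ratings:
            p = win_prob(my_rating - opp)
            weights.append(p * (1.0 - p))
        return sum(w * r for w, r in zip(weights, opponent_ratings)) / sum(weights)

    # For a strong team (rating 1000), swapping a #323-caliber opponent (200)
    # for a #162-caliber one (500) changes the weighted schedule strength by
    # only a few points, while swapping the same slot for a #1-caliber
    # opponent (1050) changes it by roughly 50 points.
    print(weighted_sos(1000.0, [200.0, 900.0, 950.0]))    # ~927
    print(weighted_sos(1000.0, [500.0, 900.0, 950.0]))    # ~921
    print(weighted_sos(1000.0, [1050.0, 900.0, 950.0]))   # ~970
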
OK, so now that the issues are explained, how do these things really affect the RPI ratings?

  1. There's really no complaint here. Scores should not be used as part of the RPI, as doing so would encourage unsportsmanlike score-running by teams on the bubble. What bothers me is that the selection committee also uses the Sagarin ratings, which are based 50% on scores and 50% on win-loss information. The effect of scores on ratings can be seen by comparing my "predictive" and "standard" ratings (a toy margin calculation after this list shows how much scoring margins can separate a team from its record).
  2. The schedule strength error penalizes teams that have several weak opponents on their schedule. I doubt anyone will shed tears over Maryland being penalized for playing cupcakes out of conference; the teams that really get hurt are the elite midmajors that are forced to play weak conference opponents. You can judge the effect of this by comparing the "improved RPI" with my standard ratings.
  3. Schedule approximation is a sneaky one to account for, as it is primarily driven by how tough a schedule your opponents are playing. You get helped if your opponents rack up lots of wins against easy opponents, since your opponents' record counts for 50% of the RPI while your opponents' opponents' record counts for only 25%. Similarly, you are penalized if you play teams that scheduled tough opponents. The most glaring recent example is Butler, whose RPI ranking was in the 70s (out of contention) but should have been in the mid-50s. You can judge the effect of this by comparing the "RPI" and the "improved RPI" in my ratings.
  4. As noted above, your RPI is improved by about 0.001 points per home game and lowered by about 0.001 points per road game. Realistically, nobody plays all that imbalanced a schedule, so the effect is minimal; I can't foresee anybody's rating being off by more than 5-6 RPI spots because of this (the arithmetic is spelled out after this list).
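
The numbers behind item 4, worked out explicitly. The 33-game schedule length is an approximation, and the one-spot-per-0.001-points conversion is the rule of thumb quoted above.

    # Worked numbers for the game-location effect described in item 4.
    location_effect = 0.033   # how much tougher a typical opponent is on its home floor
    games_played = 33         # approximate schedule length for a tournament-bound team

    per_game = location_effect / games_played   # about 0.001 RPI points per game
    print(f"Per-game location effect: {per_game:.4f} RPI points")

    # BYU at the time of Katz's article: 11 home, 8 road, 4 neutral-site games,
    # i.e. a net of 3 extra home games.
    net_home_games = 11 - 8
    byu_shift = net_home_games * per_game
    print(f"BYU's net shift: {byu_shift:+.4f} RPI points, "
          f"roughly {byu_shift / 0.001:.0f} spots in the rankings.")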

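And the toy margin calculation promised in item 1. The 19-7 record is a placeholder rather than Maryland's actual record; the +23 and -8 margins are the ones quoted above.

    # What big winning margins and small losing margins imply about a team's
    # average scoring margin.  The record and margins are illustrative only.
    wins, losses = 19, 7
    avg_win_margin, avg_loss_margin = 23, 8

    scoring_margin = (wins * avg_win_margin - losses * avg_loss_margin) / (wins + losses)
    print(f"Average scoring margin: {scoring_margin:+.1f} points per game")
    # About +14.7 per game, which is why a score-based rating sees this team as
    # stronger than its win-loss record alone suggests.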


Note: if you use any of the facts, equations, or mathematical principles introduced here, you must give me credit.

copyright ©2003 Andrew Dolphin