Whose S.O.S. Ratings Are More Accurate—Ours or
the BCS's?
(published the week of Dec. 3, 2000)
The BCS standings are a well-conceived blend of objective (computers)
and subjective (polls) measures of teams’ on-field success, and the BCS
ensures a national title matchup while preserving—and even accentuating—the
week-to-week drama of the most meaningful regular season in all of sports.
But to find college football’s most accurate strength of schedule (S.O.S.)
ratings, don’t look to the BCS; look to the Anderson & Hester/Seattle
Times Rankings.
The BCS S.O.S. ratings say that Florida, Florida State, and Miami
have played the 3 toughest schedules in the country. But, in truth,
the 3 toughest schedules have been played by Stanford, UCLA, and Colorado.
Florida, Florida State, and Miami have played only the 19th-, 14th-, and
13th-toughest schedules in the country, respectively.
Compare the relative difficulties of the BCS’s #1-ranked 11-game schedule
(Miami’s) and the Anderson & Hester/Seattle Times Rankings’ #1-ranked
11-game schedule (Stanford’s), using the BCS’s own rankings of teams from
#1-115 (as unofficially compiled by BCS expert Jerry
Palm):
| BCS rank | Miami's opponents | BCS rank | Stanford's opponents |
|----------|-------------------|----------|----------------------|
| 2        | Florida State     | 4        | Washington           |
| 4        | Washington        | 6        | Oregon State         |
| 5        | Virginia Tech     | 11       | Notre Dame           |
| 39       | Pittsburgh        | 12       | Texas                |
| 41       | Syracuse          | 37       | UCLA                 |
| 44       | West Virginia     | 40       | Arizona State        |
| 51       | Boston College    | 43       | Arizona              |
| 79       | Temple            | 53       | USC                  |
| 91       | Rutgers           | 62       | Washington St.       |
| 104      | Louisiana Tech    | 63       | Cal                  |
| 1-AA     | McNeese St.       | 68       | San Jose State       |
|          | Avg: #46*         |          | Avg: #36             |

*This is the average ranking of Miami's 10 Division 1-A opponents. If McNeese St. were rated as the #116 team, then Miami's average would rise to #52.
Stanford's average opponent is ranked #36 by the BCS; Miami's average
opponent is ranked #46, and that is without even counting Miami's game
against 1-AA McNeese State. This comparison should not surprise most
college football fans, particularly Big Ten and Pac-10 fans, who have
long maintained that teams in those conferences are not granted the
luxury of de facto weeks off against bad teams. According to the BCS
rankings, Miami played 4 opponents ranked lower than Stanford's
lowest-ranked opponent, meaning that more than a third of Miami's games
came against weaker teams than any Stanford faced all season.
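As a quick check on the table's arithmetic, here is a minimal Python sketch that reproduces the averages above (the rank lists are transcribed from the table; rating McNeese St. as a hypothetical #116 team follows the footnote):

```python
# Reproduce the average-opponent-rank arithmetic from the table above.
miami_ranks = [2, 4, 5, 39, 41, 44, 51, 79, 91, 104]         # 10 Division 1-A opponents
stanford_ranks = [4, 6, 11, 12, 37, 40, 43, 53, 62, 63, 68]  # all 11 opponents

print(sum(miami_ranks) / len(miami_ranks))        # 46.0  -> "Avg: #46"
print(sum(stanford_ranks) / len(stanford_ranks))  # ~36.3 -> "Avg: #36"

# Footnote scenario: count 1-AA McNeese St. as a hypothetical #116 team.
print(sum(miami_ranks + [116]) / (len(miami_ranks) + 1))  # ~52.4 -> "#52"
```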
The Anderson & Hester/Seattle Times S.O.S. ratings are far more
consistent with the BCS’s own rankings than the BCS S.O.S. ratings are.
The BCS rankings support the Anderson & Hester/Seattle Times Rankings’
claim that Stanford (average opponent: #36), UCLA (average opponent:
#35), and Colorado (average opponent: #35) have each played tougher
schedules than Florida (average opponent: #43), Florida State (average
opponent: #42), and Miami (average opponent: #46). The
BCS rankings also support the Anderson & Hester/Seattle Times Rankings’
claim that Washington (average opponent: #40) has played the toughest
schedule of any 1-loss team.
So why are the Anderson & Hester/Seattle Times S.O.S. ratings
more accurate? The overarching reason is that the Anderson &
Hester/Seattle Times Rankings feature the only S.O.S. ratings that
directly take teams’ conference strength into account, and taking it
into account is essential. Most teams play 8 of their 11 games
versus teams in their own conference. Therefore, on paper, for more
than 70% of the season the Mid-American Conference's teams (51-51 won-lost
record in conference games) look to be as strong as the Big 12's teams
(49-49). Every conference, no matter how weak or how strong, posts
a collective .500 winning percentage in conference games. Therefore,
unless every team’s performance in conference play is adjusted to reflect
its conference’s actual strength (that is, unless a 4-4 won-lost record
in the Big 12 is recognized as profoundly more impressive than a 4-4
won-lost record in the MAC), S.O.S. ratings will underrate the schedules
of teams from good conferences. Only the Anderson & Hester/Seattle
Times Rankings avoid this pitfall.
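The article does not spell out the Anderson & Hester adjustment itself, so the sketch below is only an illustration of the general idea, under an assumed rule: scale a team's conference winning percentage by its conference's strength. The rule, the function name, and the strength figures are all hypothetical, not the actual method or actual 2000 data:

```python
# Illustrative only: one simple way to adjust conference records, which
# are .500 by construction league-wide, for conference strength. This is
# an assumed rule for demonstration, not the actual Anderson & Hester
# formula, and the strength figures below are hypothetical.

def adjusted_level(conf_wins, conf_losses, conf_strength):
    """Scale a raw conference winning percentage by the conference's
    strength, expressed as its winning percentage in nonconference
    play (0.5 = average)."""
    raw_pct = conf_wins / (conf_wins + conf_losses)
    return raw_pct * (conf_strength / 0.5)

# Two teams, each 4-4 in conference play, are no longer rated equal:
print(adjusted_level(4, 4, 0.70))  # strong (Big 12-like) league -> 0.70
print(adjusted_level(4, 4, 0.40))  # weak (MAC-like) league      -> 0.40
```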
Additionally, the Anderson & Hester/Seattle Times S.O.S. ratings
are more accurate than the BCS's because of two shortcomings in the BCS
S.O.S. ratings that are readily apparent and easily fixable. First,
the BCS takes out opponents’ wins or losses in games played against the
team in question. For example, both Nebraska and Oklahoma played
10-3 Kansas State, but the BCS takes out Kansas State’s results against
those teams when evaluating their schedules. Therefore, since Kansas
State beat Nebraska, and since that win is taken out, the BCS counts Kansas
State as a 9-3 team when evaluating Nebraska’s schedule. And, since
Kansas State lost twice to Oklahoma, and since those two losses are taken
out, the BCS counts Kansas State as a 10-1 team when evaluating Oklahoma’s
schedule. Yet it is the same Kansas State team. To give another
example, in terms of opponents’ won-lost records, the BCS S.O.S. ratings
give Florida exactly the same amount of credit for having played 9-3 Auburn
as they give UCLA for having played 10-1 Washington: once Auburn’s two
losses to Florida and Washington’s one win over UCLA are dropped, both
Florida and UCLA receive credit for having played a 9-1 team. To give one
further example, in terms of opponents’ won-lost records, the BCS S.O.S.
ratings actually give Florida more credit for twice having played 20th-ranked
Auburn than Auburn gets for twice having played 7th-ranked Florida—because
Auburn’s losses, and Florida’s wins, in those games are dropped, leaving
Auburn with a better remaining won-lost record than Florida. In short,
the BCS S.O.S. ratings could easily be improved by not dropping
opponents’ results against the team in question (and by including the won-lost
record of the team in question in the opponents’ opponents’ won-lost tally).
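Here is a minimal Python sketch of this flaw's arithmetic, using the Kansas State records as given in the text (the helper function is ours, for illustration):

```python
# The BCS drops an opponent's results against the team being evaluated,
# so the same opponent is credited differently on different schedules.

def bcs_opponent_record(wins, losses, wins_vs_team, losses_vs_team):
    """An opponent's record as the BCS counts it: its games against the
    team whose schedule is being rated are removed."""
    return wins - wins_vs_team, losses - losses_vs_team

ksu_wins, ksu_losses = 10, 3  # Kansas State's actual won-lost record

# On Nebraska's schedule, KSU's win over Nebraska is dropped:
print(bcs_opponent_record(ksu_wins, ksu_losses, 1, 0))  # (9, 3)

# On Oklahoma's schedule, KSU's two losses to Oklahoma are dropped:
print(bcs_opponent_record(ksu_wins, ksu_losses, 0, 2))  # (10, 1)
```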
Second, the BCS undervalues the importance of opponents’ opponents’
won-lost records. According to published reports, the BCS’s original
plan was to count a team’s opponents’ won-lost records and its opponents’
opponents’ won-lost records equally—each would have accounted for 50% of
a team’s S.O.S. rating. This was changed to a 2/3-1/3 split out of
concern that teams can control only whom they play, not whom their
opponents play. But that concern is beside the point: if one truly
followed that argument, then opponents’ opponents’ won-lost records
should not be included at all. The reason for reviewing opponents’
opponents’ won-lost records is that, without looking at whom teams play,
won-lost records don’t tell you much. Opponents’ opponents’ won-lost records
need to be fully incorporated in order to evaluate the strength of a team’s
opponents—to evaluate how good a team’s opponents actually are.
The BCS should count opponents’ won-lost records and opponents’ opponents’
won-lost records equally, as it originally intended to do. An example
demonstrates why. If, say, Alabama were to go 5-5 versus teams that
usually win 70% of the time, then that would be no more or less impressive
than if the Crimson Tide were to have gone 7-3 versus teams that usually
win 50% of the time—in either case, Alabama would have been playing at
a .700-winning-percentage level. Thus, if a team’s opponents have
posted .500 winning percentages against teams that in turn have posted
.700 winning percentages, then that team’s schedule would be as difficult
as one in which a team’s opponents had posted .700 winning percentages
against teams that in turn had posted .500 winning percentages. These
would be identical schedules in terms of difficulty, yet the BCS
currently would not regard them as such. A 50-50 weighting of opponents’
and opponents’ opponents’ won-lost records would give each of these
schedules a rating of .600; the BCS’s 2/3-1/3 emphasis would give the
former a rating of .567 and the latter a rating of .633, a big
difference. By over-emphasizing opponents’ won-lost records without
sufficiently considering whom those opponents played in amassing them,
the BCS S.O.S. ratings more often than not have the indirect effect of
further underrating the schedules of teams from good conferences. But,
again, this could very easily be fixed.
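The weighting arithmetic from this example, as a short Python check (the rating is just the weighted average described above):

```python
# S.O.S. rating as a weighted average of opponents' winning percentage
# and opponents' opponents' winning percentage, per the text.

def sos_rating(opp_pct, opp_opp_pct, opp_weight):
    return opp_weight * opp_pct + (1 - opp_weight) * opp_opp_pct

# Two schedules of identical real difficulty (the example in the text):
#   A: opponents .500, opponents' opponents .700
#   B: opponents .700, opponents' opponents .500
for weight, label in [(1 / 2, "50-50  "), (2 / 3, "2/3-1/3")]:
    a = sos_rating(0.500, 0.700, weight)
    b = sos_rating(0.700, 0.500, weight)
    print(f"{label}: A = {a:.3f}, B = {b:.3f}")

# 50-50  : A = 0.600, B = 0.600   (identical, as they should be)
# 2/3-1/3: A = 0.567, B = 0.633   (a big difference)
```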
For all of these reasons, the Anderson & Hester/Seattle Times
S.O.S. ratings are the most accurate S.O.S. ratings available. Only
in these ratings are the schedules of teams from across the country, from
across all conferences, evaluated with maximum accuracy.