Rankled by the Rankings

A numbers whiz pinpoints where the current system fails -- and how it can be fixed.

By DEAN KNUTH
Golf Digest magazine, April, 1999

The Official World Golf Ranking is a quirky, slow-to-respond system that few people understand, much less agree with. This became abundantly clear in recent years as the system kept Greg Norman ranked too high during his extended absence in '98, overrated Jumbo Ozaki of Japan for years and gave too many points to tournaments that too few top players compete in. Most recently, the failure to declare David Duval No. 1 amid a sizzling stretch of golf had people puzzled again.

Still, the rankings didn't rankle us too badly before, because they weren't really important. Now they are. The rankings, overseen by International Management Group (IMG) and the PGA Tour, are being used to help answer the thorniest question in golf: Who gets to play in all of the majors and two of the new, wealthy World Golf Championships?

The rankings as of designated cutoff dates will hand some players passes to, and keep others just decimal points away from, golf's most elite events: the Masters and British Open (each exempting the top-50 ranked players), the U.S. Open (top 20), the Andersen Consulting Match Play Championship (top 64) and the American Express Stroke Play Championship (top 50).

While much attention has been focused on whether the rankings get the top spots right, more unrest should be brewing among players in the range of 40th to 70th place -- where the current system is the most error prone. Players in this range are separated by about a point in all, and it would take a micrometer to measure the differences among them. In fact, one-tenth of a point separated the 50th-ranked player -- who gets invited to Augusta -- from the 51st, who does not.

That's not to say the rankings couldn't be improved at the top, too. Had the methodology for figuring the rankings not been so scientifically unsophisticated, Mark O'Meara would have overtaken Tiger Woods at the end of 1998. And Duval, after winning his first two events of 1999 and having nine victories in his last 30 starts, would have supplanted both of them by now (with a cutoff date of Jan. 31, 1999, after the first four events of this season).

Before I break down the problems in the rankings and proffer some solutions, here's a little background. IMG has made some refinements over the years in an attempt to make the rankings more responsive. For example, it shortened the period of retaining points from three years to two. Events from the five professional tours are used and points are awarded according to a player's finishing positions and are related to a "strength of field'' factor -- based on the number and ranking of the top-100 ranked players and the top 30 of the "home tour" players that are playing in each event. The four major championships are rated separately and higher than other events. Certain other events such as the Players Championship and the Volvo PGA Championship on the European tour get higher points. The world ranking points for each player are accumulated over a two-year rolling period, with the points made in the most recent 52 weeks being doubled. Each player is then ranked according to his average points per tournament.
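To make that arithmetic concrete, here is a minimal sketch in Python of the averaging rule just described. The function name and sample events are mine, not the ranking board's; real point values depend on each event's strength of field.

```python
# A minimal sketch of the current averaging rule described above.
# The event data below are hypothetical.

def world_ranking_average(events):
    """events: list of (weeks_ago, points) pairs over the two-year window.
    Points earned in the most recent 52 weeks are doubled, then the total
    is divided by the number of events entered."""
    total = sum(points * (2 if weeks_ago <= 52 else 1)
                for weeks_ago, points in events)
    return total / len(events)

# A win (18 pts) 10 weeks ago, a 20th place (1 pt) 30 weeks ago,
# and a missed cut (0 pts) 60 weeks ago:
print(world_ranking_average([(10, 18), (30, 1), (60, 0)]))  # (36+2+0)/3 = 12.67
```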

The Official World Golf Ranking is overseen by a board made up of leaders of the world's major golf organizations and tours, with IMG founder Mark McCormack serving as its chairman. The board includes a urologist representing the U.S. Golf Association and a banker representing the Augusta National Golf Club -- but no mathematicians.

Golf Digest asked me to analyze the current system and make recommendations for improving it. Since I know from experience, having worked 16 years at the USGA, that a committee of one is a lot more flexible than a committee of world golf leaders, I was more than willing to tackle the job. Here are the major problems I found and some suggested ways for fixing them:

Problem 1: An inadequate "decay'' process. Most conspicuously, this system is supposed to be tracking current performance, but has been terribly slow to respond, because of an inadequate performance "decay'' procedure. Put another way, a victory last week is given the same weight as a win 51 weeks ago. It's clear here that the world ranking uses a primitive mathematical moving average. A very basic "step function'' is used to degrade the importance of a performance from 53 weeks to 104 weeks old in just one step. It makes no sense that the importance of a performance should suddenly drop drastically from week 52 to week 53 -- and then stay constant for another 51 weeks.

Solution 1: A good system should not have "step functions'' -- or else it should have lots of small steps. A simple linear decay in importance from week 1 to week 104 would make the rankings much more responsive and current. I suggest that each month the "current'' importance of a past event be decreased by one-twelfth. The weighting would then be 2.0 for this month, 1.92 for the previous month, down to 0.083 for a tournament two years ago. Such a process would have dropped Norman, for example, much more quickly during his extended absence.

However, because there are only four majors a year, I would not subject them to the same decay process. Instead, they would maintain their full weight until the following year's majors take place, after which their weight would drop in half.
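A short sketch of this proposed decay, assuming the exact formula implied by the figures above (2.0 this month, 1.92 last month, 0.083 at two years); the treatment of majors follows the preceding paragraph, with the details of my own choosing.

```python
# A sketch of the proposed linear decay. The formula is inferred from the
# figures in the text: weights fall by 1/12 per month across the two-year
# window. Majors are assumed to keep full weight for a year, then halve.

def decay_weight(months_ago, is_major=False):
    """Multiplier applied to an event's ranking points."""
    if is_major:
        # Full weight until the following year's major, then half weight.
        return 2.0 if months_ago < 12 else 1.0
    return max(2.0 - months_ago / 12.0, 0.0)

for m in (0, 1, 12, 23):
    print(m, round(decay_weight(m), 3))  # 2.0, 1.917, 1.0, 0.083
```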

Problem 2: Points for majors. The points awarded to the four majors are higher than in any other events, representing 10 percent of the total yearly points assigned. But this is still not high enough. The four majors are the biggest competitions in golf, a huge step above everything else. They are also the four occasions when the best players in the world compete against each other for history, not just money. Currently, a player can finish fifth in the U.S. Open, or any major, and earn fewer points than he can for finishing third in certain regular tour events -- or for winning some Japan and other tournaments. Some egregious examples:

David Duval won the Tucson Chrysler Classic a year ago to get 50 points -- 25 multiplied by 2 for 1998 -- the same number of points he got for finishing tied for second in the 1998 Masters.

Mark O'Meara finished fourth in the PGA Championship to get only 12x2=24 points. His seventh place in the limited-field World Series of Golf got half that number of points (6x2=12), but he got a whopping 44 points (22x2) for winning the 12-player World Match Play at Wentworth.

Lee Westwood finished seventh in the U.S. Open and got only 8x2=16 points and yet got 21x2=42 points for winning the Deutsche Bank TPC in Europe. Finishing fifth in the '98 Players Championship got him 8x2=16 points. Yet first in the Dunlop Phoenix in Japan got him 20x2=40 points.

Solution 2: I'd increase the points for the four majors to 15 percent of the year's total. A victory in each would be worth 75 points (multiplied by 2 for the current year), up from 50. A smoother distribution on places would spread out the points better down to the bottom-cut position.

Problem 3: Strength begets strength. The strength-of-field issue is a big one and has inherent problems. It's too heavily weighted to the participation of the top-ranked players. If the No. 1-ranked player enters any event, he brings with him 50 strength-of-field weighting points. However, if the No. 7 player enters, only 20 weighting points are assigned, and the No. 16-ranked player brings 11 points. This system is self-perpetuating; top players in effect create their own strength of field wherever they play.

Clearly, certain tour events are more important and prestigious than others, but how that is calculated is worrisome. In 1998, for example, the Bay Hill Invitational gave out more points than the Western Open and the Memorial Tournament, because of differences in strength of field.

Tiger Woods, as the No. 1-ranked player, takes his strength-of-field points (50) anywhere in the world and makes it a much more highly ranked tournament. On the other hand, Tom Lehman (20th), to pick a good player, could go on a tear this spring or summer after recuperating from shoulder surgery. But if he enters an event that Tiger isn't playing (which otherwise has a strong field) and performs well, he'll still be penalized because Tiger's strength-of-field points won't be added. The result is that it takes a player like Lehman a lot longer than it should to move up in the rankings.

Solution 3: I'd keep a strength-of-field factor, but reduce its dominance and fluidity based on individual players. To smooth out the strength-of-field points more evenly, I'd compute the average strength of field for tournaments over the past five years (1994-'98). That value would be used to determine the 1999 strength of field, no matter who enters the event. If an event has a particularly weak or strong field, that would be reflected as a marginal change in the next year's computation. The majors would not fluctuate.
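Under the assumption that each event's strength of field is already expressed as a single yearly number, the smoothing amounts to a simple five-year mean, sketched below; the annual values are invented for illustration.

```python
# A sketch of the proposed smoothing: an event's 1999 strength-of-field
# value is the average of its 1994-98 values, regardless of who enters.
# The annual values below are hypothetical.

def smoothed_field_strength(past_values):
    """past_values: the event's strength-of-field figures for the
    previous five years."""
    return sum(past_values) / len(past_values)

bay_hill_1994_98 = [380, 410, 395, 420, 405]  # hypothetical
print(smoothed_field_strength(bay_hill_1994_98))  # 402.0, fixed for all of 1999
```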

Problem 4: Averaging procedure. As with strength of field, there are some serious flaws with the current averaging procedure. Dividing total points by total events entered puts too much emphasis on finishes in the top 10 places; the system becomes one that measures ability to stay in the top 10 positions. Players who don't finish high get no points or only 1 or 2, depending on the event, and this drives down their average tremendously. It is commonplace for really good players to miss the top-10 spots, but not commonplace for them to miss cuts. A simple averaging of points accentuates the problem by driving down averages too far and closing the gap with the lesser players. This diminishes the accuracy of the system in the now-important lower rankings.

The current data clearly indicate that it's easier to finish in the point-getting positions in some weaker events than in strong U.S. tour events. Finishing out of the top 20 in a strong event will often give a player 0 or 1 point, while he more likely could have earned points in a weaker event despite the weak event's low strength-of-field factor. Foreign players like Norman, Nick Price and Ernie Els who return home and play in relatively weak events pick up a lot of easy points this way. For instance, in 1995, Norman received 36 points for winning the Australian Open -- many more than he did for finishing third in that year's Masters.

Solution 4: More work is required to build a better mathematical model here. As a start, I would clip off the worst 10 percent of a player's finishes (if 0 or 1 point). A form of this technique, known as "Winsorization,'' is used in other handicapping systems, including the USGA's. The answer might be to use a player's performance in the majors, plus the best half of his other finishes. This change would put more focus on how well a player normally plays and less emphasis on how many cuts he misses.
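A minimal sketch of that clipping step follows. The 10 percent threshold and the 0-or-1-point condition come from the paragraph above; everything else, including the sample season, is my assumption.

```python
# A sketch of the suggested clipping: discard up to 10 percent of a
# player's starts, lowest first, but only those worth 0 or 1 point,
# then average what remains.

def clipped_average(points):
    """points: ranking points earned in each start during the period."""
    n_drop = len(points) // 10
    kept = sorted(points)
    while n_drop > 0 and kept and kept[0] <= 1:
        kept.pop(0)              # drop a worst (0- or 1-point) start
        n_drop -= 1
    return sum(kept) / len(kept)

season = [50, 24, 16, 12, 8, 4, 2, 1, 0, 0, 0, 0]  # hypothetical starts
print(round(clipped_average(season), 2))  # drops one 0; 117/11 = 10.64
```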

Problem 5: Inequitable distribution. As stated above, the assignment of points is heavily weighted to the top few positions, then drops off dramatically to 1 or 0. Giving the same 1 or 0 point to 30th and 60th place makes little statistical sense.

The current ranking points are typically distributed as follows:

1st place = 23.1 percent.
2nd place = 13.8 percent.
3rd place = 9.2 percent.
4th place = 7.7 percent.
5th place = 6.2 percent.
6th-7th places = 4.6 percent.
8th-14th places = 3.1 percent.
15th-20th places = 1.5 percent.
21st place and below = 0.

The step functions in this breakdown are very pronounced (eighth place is the same as finishing 14th). Points are awarded for all positions in the four majors, but the drop-off goes flat after seventh place. The difference in points given to seventh place and 50th is only eight. The entire table is such that if you don't finish third or better in any event, you (a world-class player) lose ground in your average standing. If you don't finish fairly high, you end up with only a point or two. Such a system cannot track a lot of players accurately.

Solution 5: The table is too short and uneven, and it has a "scaling'' problem. It is designed to track a few good players -- not to rank all the best players in the world. The way to fix this is to increase the points issued by a factor of 10 (18 for winning a normal event becomes 180; 50 for a major becomes 500), then scale the table from that number down to the cut. This would make for a much smoother distribution of points than the current allocation. I would also adopt the PGA Tour's purse distribution, which puts a lower percentage on the top positions and assigns points down to the cut line (the tour purse breakdown is 18 percent for first, 10.8 percent for second, 6.8 percent for third, 4.8 percent for fourth, and so on). The U.S. Open purse distribution, which I developed for the USGA 15 years ago, is even smoother and more equitable.
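A sketch of that rescaling, treating the winner's share as 18 percent of a notional pool; that reading of the proposal, and the names below, are mine, not the article's.

```python
# A sketch of the proposed rescaling: the winner's points grow tenfold
# (18 -> 180 for a normal event, 50 -> 500 for a major), and lower places
# are scaled against the PGA Tour purse percentages quoted above. Places
# beyond fourth would continue down to the cut line.

PURSE_PCTS = {1: 18.0, 2: 10.8, 3: 6.8, 4: 4.8}  # percent, per the article

def rescaled_points(winner_points, place):
    """Points for a finishing place, assuming the winner's share is 18
    percent of a notional pool (an assumption, not the article's words)."""
    pool = winner_points / (PURSE_PCTS[1] / 100.0)  # 180 pts -> pool of 1,000
    return pool * PURSE_PCTS[place] / 100.0

for place in range(1, 5):
    print(place, rescaled_points(180, place))  # 180.0, 108.0, 68.0, 48.0
```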

Problem 6: Weak links. The Japan and Australasian tours are getting too many points, even with a normally weak strength of field. Eighteen foreign tour pros are ranked in the top 64, but five of them play the Japanese tour. Jumbo Ozaki's eternally high ranking is a case in point. How has he maintained such a high ranking when he is never competitive against the best players (in the majors)? At this writing he is still ranked 13th, ahead of Fred Couples (14) and Steve Elkington (15), both major winners. It would seem obvious that Japanese tour players are not as competitive as U.S. tour players, yet their winners earn as many points as some PGA Tour event winners, and sometimes more -- 16 points (multiplied by 2) are awarded for winning the Japan Open. Again, keep in mind that if any Japan Tour player could finish fifth in the Masters, U.S. Open, British Open or PGA Championship -- which only Brian Watts in recent years has been able to do -- he would earn fewer points than for winning back home.

Some examples of players who do well in Japan, but nowhere else:

Jumbo Ozaki missed the cut in the three majors he entered last year (and has not finished in the top 20 in any major since the 1989 U.S. Open). But he won in Japan three times in 1998 and finished frequently in the top five there.

Carlos Franco of Paraguay is ranked 41st, with a 3.29 point average. He tied for 64th in the British Open for 1 point, tied for 40th in the PGA for 2 and the rest of his points were generated in Japan, including two victories there.

Frankie Minoza of the Philippines finished tied for 52nd in the British Open, but won the Kirin Open in Japan to get 12x2=24 points. He is ranked 54th in the world.

New this year, additional strength-of-field ranking points (up to 75) will be awarded based on the number and position of the top 30 from the previous year's money list of the "home tour'' players entered in an event. It seems that this change will only serve to make the above overrating problem in Asia worse.

Solution 6: Cut the points for the Japan and Australasian tours in half. This would drop Jumbo out of the top 30, and his brother Joe, Minoza and Franco would all fall out of the top 64.

Problem 7: The European issue. The current strength-of-field factor penalizes the top European players, because most of the "heavyweight'' points are used on the U.S. tour. The Europeans are good enough to beat our American stars in the Ryder Cup. But 40 of the current top-50 ranked players are regulars on the PGA Tour, and 53 percent of all points are played for in America (counting our three majors). As a result, the strength-of-field weighting on the European tour has diminished significantly in recent years. European events often get minimum strength-of-field points and, consequently, the likes of Colin Montgomerie and Lee Westwood don't always get the points commensurate with their high level of play.

Solution 7: Ensure that if the top "home tour'' players are competing in a European event, it awards points comparable to a U.S. tour event's, thus allowing the most outstanding Europeans to be ranked alongside the Americans.

Problem 8: Small, stacked fields. Significant points are being awarded to several unofficial, off-season events. Mark O'Meara got 22 points, multiplied by 2, for winning the '98 World Match Play (against Woods in the final). This is a very limited-entry event that gives four players an enormously unfair advantage -- a first-round bye in a match-play field of 12. David Duval and some other top players did not even enter it in 1998. The entries outside of the four seeds have been disproportionately IMG clients. I don't believe such a limited, stacked-field event should count this much.

By the way, if the World Match Play had not counted as much, Duval would have passed Woods last fall. I also feel the same about the validity of the Million-Dollar Challenge in South Africa. With only 12 by-invitation players, it doesn't fit the criteria of "competitive golf.''

Solution 8: Cut the off-season points in half. Be stingy also in allotment of points for any extremely limited-field event, such as the second of the new World Golf Championships, open only to the most recently named Ryder and Presidents cup teams. Such small-field events feel more like exhibitions, not competitions.

Conclusions: So who should be No. 1 right now (as of Jan. 31, 1999)? David Duval, indisputably, according to all except the Official World Golf Ranking. Since his first victory in October '97, he has nine wins and 18 top-10 finishes in 30 starts. He also had a strong showing in last year's majors: T2 in the Masters, T7 in the U.S. Open, T11 in the British Open and missed the cut in the PGA. In 1999 he was 2 for 3 as of my deadline with victories in Hawaii and Palm Springs.

Three weaknesses in the world ranking are chiefly responsible for Duval not being ranked No. 1 sooner:

He is a more active player than Woods and O'Meara (Duval played in 55 events the last two years; Woods 45 and O'Meara 50). More active players seem to be penalized by the current averaging technique.

The step function of depreciating old scores clearly hurt his ranking. If more steps were in place, he'd move up fast.

The weighting of finishes for players finishing below fourth place is wrong in the current system, and it also penalizes a player like Duval, who has so many top-10 finishes, but has not gotten the points that he should (a top-10 finish can actually lower your ranking in some events).

Perhaps some improvements in the system will result from this analysis. I hope so. One thing's for sure: As the rankings become more important, they need more scrutiny, and changes are inevitable -- the sooner, the better.

Reshuffling the deck: A revised top 64

The methodology detailed here by Dean Knuth would reshuffle most of the top 64 players in the Official World Golf Ranking. In all, 28 (of the top 64) players move up, 24 drop and eight stay the same (as of Jan. 31, 1999). Four players move in and four move out of the top 64 -- the cutoff point for the initial World Golf Championships Andersen Consulting Match Play in late February. Here's the author's revised list, with each player's current IMG/PGA ranking in parentheses.

1. David Duval (2)
2. Tiger Woods (1)
3. Mark O'Meara (3)
4. Vijay Singh (9)
5. Fred Couples (14)
6. Davis Love III (4)
7. Ernie Els (5)
8. Lee Westwood (6)
9. Jim Furyk (10)
10. Colin Montgomerie (8)
11. Justin Leonard (11)
12. Jesper Parnevik (17)
13. Phil Mickelson (12)
14. Nick Price (7)
15. Lee Janzen (19)
16. Darren Clarke (16)
17. Steve Stricker (22)
18. Tom Lehman (20)
19. Steve Elkington (15)
20. Jeff Maggert (23)
21. M. Calcavecchia (24)
22. Scott Hoch (21)
23. Payne Stewart (30)
24. J.M. Olazabal (29)
25. John Huston (26)
26. Brian Watts (18)
27. Bernhard Langer (27)
28. Tom Watson (28)
29. Hal Sutton (31)
30. Brandt Jobe (37)
31. Jeff Sluman (33)
32. Bill Glasson (38)
33. Greg Norman (25)
34. Bob Tway (34)
35. Stuart Appleby (32)
36. Thomas Bjorn (46)
37. Billy Mayfair (35)
38. John Cook (36)
39. Bob Estes (39)
40. Fred Funk (44)
41. M.A. Jimenez (56)
42. Scott Verplank (48)
43. Jumbo Ozaki (13)
44. Eduardo Romero (61)
45. Stephen Leaney (63)
46. Paul Azinger (62)
47. Ian Woosnam (45)
48. Patrik Sjoland (52)
49. Loren Roberts (40)
50. Andrew Magee (50)
51. Stewart Cink (49)
52. Shigeki Maruyama (43)
53. Steve Jones (53)
54. Brad Faxon (42)
55. Rocco Mediate (57)
56. Dudley Hart (55)
57. Steve Pate (59)
58. Craig Parry (51)
59. Per-U. Johansson (67)
60. Frank Nobilo (69)
61. Jay Haas (74)
62. Nick Faldo (64)
63. Glen Day (47)
64. David Toms (72)

Move out of top 64: Carlos Franco (from 41st to 76th), Frankie Minoza (54th to 72nd), Joe Ozaki (58th to 65th) and Michael Bradley (60th to 66th).

After beating the math portion of his SAT with a perfect 800 score, Dean Knuth attended the U.S. Naval Academy and served as a captain in the Naval Reserve. In civilian life, he spent 16 years with the U.S. Golf Association, where he created the USGA Course Rating and Slope Rating Systems, earning the sobriquet "the Pope of Slope.'' He now runs the San Diego office of Inter-National Research Institute, which writes high-tech software for the U.S. military.

April 1999
Copyright © 1999 The New York Times Company Magazine Group, Inc. All rights reserved.
