RANKING SYSTEMS IN FACE TO FACE PLAY:
GAME RANKINGS VS TOURNAMENT RANKINGS AND IN BETWEEN

By Edi Birsan


We are a hobby of individual egotists who sometimes, shockingly, will ask the question of who is the best or what is the best.

Whenever there is a tournament, there is a Scoring System that is applied to the table results to give an overall ranking of the participants in that specific event. The scoring system reflects the bias or the perverse whims of the Tournament Director. A historical review of players indicates that fewer than 50% of the players in an average tournament are trying to play the scoring system to achieve recognition of their results (and it is argued that fewer than 50% of those who try really understand what the scoring system is and how to play to it). Tournament players will change their play styles, and even the targets in their game, based on their perception of the scoring system and where they and their rivals stand at each individual round of the tournament. This is the meta-gaming aspect of tournament play.

A Ranking System is a method that is applied to a group of results based on the bias of the Ranking System maintainer/creator. The pure Tournament Ranking compilation looks only at the final rankings in a tournament and then combines them, whereas the Game Ranking systems ignore the tournament results and look to rescore the games as if they were played under the scoring method of the system.

Game Ranking Systems that take the results of individual games and rescore them are most prominently represented by the North American Diplomacy Federation's (NADF) two ranking systems: the NADF-Masterpoint Ranking and the NADF scoring system. These North American systems are created and maintained by Buz Eddy and applied to both social games and tournament games reported to him. The NADF system goes back about 12 years and has over 3000 games in its compilation, whereas the NADF-Masterpoint system exists only for about the last 5 years' worth of games. Complicating the understanding of the NADF system is that at various times over the last decade the technical methodology has been tweaked. Furthermore, the data is maintained mostly as a current running total, making research and rechecking almost impossible. The philosophical argument against the game ranking systems is that, as tournament games are played under different incentives, the application of a Draw-Based bias or a Lead-Center-based system to rescore the games is contrary to what the players were playing for in the tournament. The argument in favor of it is classically represented by the comment 'but that is the way I want it.' Those who do the work make their own rules.

NADF Masterpoint:

The basis is 6 points per game divided by the number of people in a draw. So a three-way draw is worth 2 points to each of the draw partners, or all 6 points go to the winner of a solo. Players also get a base value for playing, which is weighted by the country played. The country factor ranges roughly from 0.8 to 1.2, with Italy/Austria/Germany at the bottom and England/France/Russia at the top. There is a division in points scored between Social play and Grand Prix play. Advancing to the different titled classifications, such as Master and Grand Master, takes a combined score with different quantities of Social vs. Grand Prix play counting.
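The core of the scheme above can be sketched in a few lines of code. This is only an illustration of the 6-points-per-game split; the country factor values and the base-value mechanics shown here are placeholder assumptions within the 0.8-1.2 range the article gives, not the official NADF tables.

```python
# Hypothetical country factors within the stated 0.8-1.2 range;
# the real NADF values are not given in this article.
COUNTRY_FACTOR = {
    "England": 1.2, "France": 1.2, "Russia": 1.2,
    "Turkey": 1.0,
    "Italy": 0.8, "Austria": 0.8, "Germany": 0.8,
}

def masterpoints(draw_size, country, base_value=1.0):
    """Points for one participant in a draw of `draw_size` players.

    draw_size=1 means a solo, taking all 6 points. The base value
    for playing is weighted by the country factor (an assumption
    about how the weighting is applied).
    """
    return 6 / draw_size + base_value * COUNTRY_FACTOR[country]

# A three-way draw is worth 2 points to each draw partner,
# before the base/country component is added.
```

So with the base value set aside, a three-way draw partner scores 2 and a soloist scores 6, matching the description above.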

NADF Ranking

The basis of this is a draw-based system with a chess-like opponent component.

The last time it was described in print, the computation formula was:

All players start at 1000. The base is raised each game by two points per winner. Ratings that fall below the base revert to the base rating.

  1. Update the country factors. (Currently about R-16.1, E-16.3, F-16.2, T-15.1, G-13.2, A-12.1, I-10.4)
  2. Input the player names by country and search the table for ratings. A non-match equals 1000.
  3. Input number of winners where 'winning' means solo or those participating in a draw as reported.
  4. Compute the game "pot", which is 12% of the total ratings in the game.
  5. Compute the winners' share (WS), which is the pot divided by the number of winners.
  6. Compute the losers' average rating (LA) and the winners' average rating (WA), and modify the winners' share by LA/WA. (This reduces it to not much if WA is significantly larger than LA.)
  7. Winners get the country-factor-adjusted winners' share added to their old rating (less their own contribution to the pot). They must gain a minimum of the game base adjustment plus two.
  8. Losers get their country-factor-adjusted contribution to the pot subtracted from their rating.
  9. A single-game gain is limited to one half the difference between the starting rating and the player's Individual Cap. The Individual Cap is 3000, increased by 50 points for each country with which that player has a rated solo from Grand Prix play.
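Steps 3 through 8 can be sketched as follows. This is a simplified reading, not Buz Eddy's actual implementation: the rising game base, the minimum-gain rule, and the individual cap of step 9 are omitted, and the way the country factor adjusts a share (scaling by its ratio to the average factor) is an assumption, since the article does not say exactly how it is applied.

```python
def nadf_ranking_update(ratings, winners, country_factor):
    """Simplified sketch of NADF Ranking steps 3-8.

    ratings: dict of country -> current rating (1000 for new players)
    winners: set of countries that soloed or shared the reported draw
    country_factor: dict of country -> factor, as in step 1
    """
    pot = 0.12 * sum(ratings.values())                    # step 4
    share = pot / len(winners)                            # step 5
    losers = [c for c in ratings if c not in winners]
    la = sum(ratings[c] for c in losers) / len(losers)    # losers' average
    wa = sum(ratings[c] for c in winners) / len(winners)  # winners' average
    share *= la / wa                       # step 6: shrinks share when WA >> LA
    avg_factor = sum(country_factor.values()) / len(country_factor)
    new = {}
    for c in ratings:
        contribution = 0.12 * ratings[c]   # this player's share of the pot
        adj = country_factor[c] / avg_factor  # assumed form of the adjustment
        if c in winners:                   # step 7 (minimum-gain rule omitted)
            new[c] = ratings[c] + share * adj - contribution
        else:                              # step 8
            new[c] = ratings[c] - contribution * adj
    return new
```

For a game of seven 1000-rated players with equal country factors, a soloist gains the whole 840-point pot less their own 120-point contribution, and each loser drops by their 120-point contribution.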

Combined Tournament and Game Ranking methods:

There are combined Tournament Ranking and Game Ranking systems, most notably represented by the Downunder (Australia and New Zealand) DAANZ Masterpoint system, where players get points for tournament rankings as well as for individual games played over a lifetime of tournament play only.

DAANZ Master Point

All players receive 2 points for a tournament win and 34 points for having a solo in a tournament. In addition there are classifications: Novice/Intermediate/Veteran/Champion/Master. Within each category you get points for individual games based on your class and the results.

For example, for a Veteran to get points you need a result with 10 centers, which earns 3 Master points. A Champion, however, only gets points for centers if his final count is 16. There is also a requirement of tournament placings, or a number of solos, to get to the next classification. Go to www.DAANZ.org for a detailed version.

TOURNAMENT RANKINGS

The pure Tournament Rankings method was until recently only represented by The Diplomatic Pouch Tournament Ranking, created and maintained by Matt Shields. This system ignored individual game results and social games, and focused on the final ranking of the players. It then applied a percentile step-advance methodology over a 10-year moving window, and a weighting system to reflect a bias for results in the World DipCon Championships and the equivalent Continental Tournaments, such as DipCon in North America and the European DipCon Championship, to come up with a final worldwide ranking.

With the massive tournament (900+) and game results database maintained by Laurent Orange (Joly) and Emmanuel du Pontavice, the prospects for additional Ranking Systems now exist using a common database (the lack of which has plagued those in the past who have tried to create hobby-ranking systems). Originally started on www.18centres.com with Arnaud Boirel, Laurent and Emmanuel have since split with that web site and moved the most updated version to www.EuroDip.eu. They have a section for individual research as well as a section on 'Rankings'.

There are three currently updated Hobby Wide ranking systems, all of which are Tournament Ranking systems rather than Game Rescoring systems.

The DPTR

It is a fluctuating system in which your score goes up and down a little based on your latest results, with an emphasis on your position in the tournament relative to the size and the prestige of the event.

Each player begins with an initial rating of 40 before their first tournament. After each event, a player's final placement (e.g. 1st, 2nd, 3rd, etc.) is converted to a percentile score. The formula used is:

(([Number of Players in the event] + 0.5 - [Placement]) / [Number of Players in the event]) * 100

If the event was the World Championship, the tournament value is 20. Otherwise, if the tournament has only one round, the tournament value is equal to the number of players in the event divided by 7, plus 2. If the event has more than one round, the tournament value is equal to the number of players in the event divided by 3.5, plus two: ((#players/3.5) + 2). However, the maximum tournament value for any non-World Championship event is 15. Events before 1996, with the exception of World Championships, have a tournament value of zero, meaning they have no effect on ratings. The theory is that results should be based only on the last ten years of competitive play.

As an example of calculating new ratings, suppose a player enters an event with a rating of 55 and finishes in 8th place out of 65 players. The player's percentile score is (65 + 0.5 - 8)/65 * 100 = 88.46. A 65-player event would have a tournament value of (65/3.5) + 2 = 20.57; however, since the maximum tournament value is 15, that will be the value for this event. Therefore, this player's rating will move 15% of the distance from 55 to 88.46. The difference between 88.46 and 55 is 33.46, and 15% of 33.46 is 5.019. Therefore, we add 5.019 points to this player's rating as a result of this event, meaning their rating increases from 55 to 60.019.
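The whole DPTR update, as described above, fits in a short function. The parameter names are my own; the formula follows the published description.

```python
def dptr_update(rating, placement, num_players,
                world_championship=False, multi_round=True, year=2006):
    """One DPTR rating update, following the published description."""
    # convert the finishing position to a percentile score
    percentile = (num_players + 0.5 - placement) / num_players * 100
    if world_championship:
        value = 20.0
    elif year < 1996:
        value = 0.0  # pre-1996 non-World-Championship events have no effect
    else:
        divisor = 3.5 if multi_round else 7.0
        value = min(num_players / divisor + 2, 15.0)  # capped at 15
    # the rating moves `value` percent of the way toward the percentile
    return rating + (percentile - rating) * value / 100
```

Running it on the worked example (rating 55, 8th of 65 in a multi-round event) reproduces the new rating of about 60.019.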

This method was invented by Matt Shields and has attracted the attention of world-class players since its inception about 6 years ago.

C-Diplo Method

It is a sum-of-the-points method; that is, your score can never go down. It looks only at the tournament results, with a bias towards the tournaments with more players.

You get points based on the number of players in the tournament that you did better than: (Number of players / your rank) = points. In addition:

If you finish in first place, +38 points
If you finish in second place, +14 points
If you finish in third place, +7 points
If you finish in any other place, you get no bonus points
If you finish third in a tournament of 35 players, you have: 14 + 35/3 = 25.67 points.

This method comes from the 'C-Diplo' bias, which puts an emphasis on the 1st-2nd-3rd positions only, and combines with it the idea that the number of participants in the tournament that you do better than gives credibility to your achievements. (Created by Edi Birsan)

The WPE or World Performance Evaluation System

It is a sum-of-the-points method; that is, your score can never go down. It has a host of factors giving different prestige points for doing well in events such as the World DipCon, the EDC, any National Tournament, and some long-standing events that are given National Tournament status, such as DixieCon in North Carolina, which has been running for 24 years or so.

The method of calculation is not detailed on the EDA site, and as of this writing we have not gotten the details from its inventor, Lei Saarlainen. However, as it was once described, it relates to points for the top 50% of a tournament and the value of different tournaments.

Short Term Ranking Systems

Starting with the Australian/New Zealanders' Bismarck Cup concept, there are several yearly tournament circuits that combine a few events together to get a yearly champion. In addition to the Bismarck Cup, the largest is the European Grand Prix, and then the NADF Grand Prix (not to be pronounced in the French manner). There are also some smaller combinations of events such as the East Coast (U.S.A.) Swing and the Tour De France. One of the unique aspects of the West Coast 'Swaggle' of 4 events is that it awards points not only for ranking in a tournament, but also for winning any of the Best Country Awards, for attending all four events, and for solo victories. Points at the Swaggle (invented by Mike Hall) are awarded for:

1st 100pts    6th 40pts
2nd 80pts     7th 30pts
3rd 70pts     8th 20pts
4th 60pts     9th 10pts
5th 50pts     10th 10pts
Solo 50pts
Best Country Awards 30pts
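A circuit tally under the points above can be sketched as follows. The attendance bonus mentioned in the text is omitted because its value is not given here, and the function and parameter names are mine.

```python
# Placement points per event, from the Swaggle table above.
PLACEMENT_PTS = {1: 100, 2: 80, 3: 70, 4: 60, 5: 50,
                 6: 40, 7: 30, 8: 20, 9: 10, 10: 10}

def swaggle_points(placements, solos=0, best_countries=0):
    """Total circuit points for one player.

    placements: finishing positions across the circuit's events
    (11th or lower scores nothing). Solos and Best Country Awards
    add 50 and 30 points each; the all-events attendance bonus is
    omitted since its value is not stated in the article.
    """
    base = sum(PLACEMENT_PTS.get(p, 0) for p in placements)
    return base + 50 * solos + 30 * best_countries
```

For example, a 1st and a 3rd plus one solo and two Best Country Awards would tally 100 + 70 + 50 + 60 = 280 points.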

SO WHY NOT MAKE YOUR OWN?

In making a Ranking System you need to decide on some basic issues:

  1. Will it count social games as well as tournament games?
  2. Will it look at tournament ranking results or the individual games?
  3. Will it rescore the games?
  4. Will the system be a summation: scores always add?
  5. Will it be a time-factored system: scores weighted by time, or only looking at a certain period, say the last year, as the Grand Prix circuits do?
  6. Is the time fixed or rolling? The Grand Prix circuits are calendar-fixed; the DPTR is 10 years rolling.
  7. Will all event sizes be treated equally: is coming in 1st in a 14-player event better than 2nd in a 21-player event?
  8. Will all events' prestige be the same: is winning the South African National Championship with 14 players better than winning the Sydney Championship, regardless of the number of players?
  9. What about player vs. player match ups: is there any difference if the players in the tournament are all new players or all experienced? How do you determine experience vs. new?
  10. Are all tournament systems equal: is a tournament with a Top Board method the same value as one without it?
  11. Are individual game match ups equal: do you look at the scores of the players involved in each board to change the value of one board vs. another?
  12. Are all countries equal: do players who play Austria/Italy/Germany in a tournament get treated differently than a player who has France, England, and Turkey?
  13. Do individual game results matter: is the achievement of someone who comes in third in a 21-player tournament with a score that includes a solo better than that of someone who comes in third in a 21-player tournament with a score that includes two two-way draws and a 10-center third place? How do you relate the achievement recognition of playing under draw-based vs. center-based systems?
  14. If you go to a game scoring, what is the value of 'other results' that will equal a solo result score in your system?
  15. Is there a value to awards in a tournament other than ranking, for example the Best or Outstanding Country awards? How would you handle awards such as Best Diplomat, Best Tactician, Worst Stab, Dead Bunny, or Best Female Player under 18?
  16. Does nationality matter: is being French and getting a score in Germany the same as a Frenchman scoring the same in France? Is a non-national winning a national tournament worth more than if it were won by a national? How do you define national?
  17. Does the number of events attended give you a bonus or a negative: for example, the Swaggle gives a bonus for attending all 4 events if you are a West Coaster (3 for non-West-Coasters), and some of the Grand Prix circuits take a percentage of your BEST scores for the records.
Look at the structure of the database that Laurent/Emmanuel have created and see what you can come up with. Then sell it to them, see what your results are, and decide whether they are acceptable to you. After all, there are many different philosophies on how to play this game and what achievement is, so there should be as many different ranking systems to reflect those biases.

Edi Birsan
The EDItor
(editor@diplom.org)

If you wish to e-mail feedback on this article to the author, and clicking on the envelope above does not work for you, feel free to use the "Dear DP..." mail interface.