Ratings
- Eric Strange
- Posts: 438
- Joined: Thu Mar 12, 2009 12:10 pm
- What do you like about checkers?: What's not to like?
- Location: Colorado
- Contact:
Re: Ratings
I knew this would eventually happen...
Okay, so it seems some players from the old system recently played in a tournament and are now part of the active ACF ratings. We have a B division player in the top 4, LOL.
I propose we add an exclamation mark or something along those lines next to players who don't have an established Elo rating yet. I'm not trying to be nitpicky about who has an established rating or not, but if a rating is extremely off, there needs to be some type of identifying marker.
Let me know what you guys think
-
- Posts: 583
- Joined: Sat Jan 08, 2011 10:11 pm
- What do you like about checkers?: It is a game of beauty when played at a high level.
- Location: PA
Re: Ratings
How would we accurately decide who would qualify for the special marking? We could make it so that, in order to get on the active list from the inactive list, a player needs to play a minimum of 20 games, the same principle as a new player starting at a 1600 rating. It is no secret that some players on the inactive list have a rating that is much higher or lower than their true Elo rating. I think that method may be the more professional approach and would work. I can tell you that many people who came off that inactive list after one tournament had ratings that were way off, but after more tournaments their ratings became more accurate. The way it works now, a player can come back, play one game to get on the active list, and then decide not to play another game for the next two years. Perhaps this can be improved upon. My method could also motivate inactive players to come play in more tournaments so they can get on the active list. I would also like to thank JR for doing a terrific job with the ratings.
Eric Strange wrote: I knew this would eventually happen...
Okay so it seems some players from the old system recently played in a tournament and are now part of the active ACF ratings. We have a B division player in the top 4 LOL
I propose we add an exclamation mark or something along those lines next to players who don't have an established Elo rating yet. I'm not trying to be nitpicky about who has an established rating or not, but if a rating is extremely off, there needs to be some type of identifying marker.
Let me know what you guys think
-
- Posts: 105
- Joined: Fri Dec 22, 2006 10:18 am
- Location: Ireland
Re: Ratings
Thanks to all involved. There is something more eye-catching than a "B" player in the top 4. Just look at the ratings ....
There was a time when there were many "Grandmaster" players (5-6) rated at 2500+. I remember Tinsley rated at 2700+. According to these current ratings, Tinsley would be lucky to have a rating of 2400+.
How has this happened? Today we have someone who is considered the best player in the world (according to ratings) with a rating of considerably less than 2400. Keep going, and in the not-too-distant future it will be less than 2300 if the same criteria are followed. Just look at Alex Moiseyev's ratings. Here we have someone who, on entering the National tournament, would be considered a favourite to win it, with a rating just above 2200 (it was once close to 2600). Has the standard of these players, and the standard of the game in general (e.g. the Masters Division of the National Tournament), dropped that low?
A Grandmaster or Master "norm" used to be earned/awarded by players who played exceptionally well in a strong event (with a high average rating of opponents). (I believe that GM Shane Mc Cosker was the last player to have earned such a title "the hard way".) Today the average high rating is about 250 points less, so how can anyone ever achieve such a norm? There was a time when the average Grandmaster had a rating of 2500. Today it's about 2250. These ratings ask more questions than they answer. Maybe someone can explain the phenomenon? Hugh
- Alex_Moiseyev
- Posts: 4339
- Joined: Sat Nov 12, 2005 5:03 pm
- What do you like about checkers?: .....
Re: Ratings
Hugh, at one point I briefly hit the 2700 (ACF rating) barrier and was dreaming of reaching Dr. Tinsley's record of 2812. No way! After several conversations on this forum we figured out that one thing seriously impacts my rating: my extra activity. In order to keep a high rating, you have to play in a small number of high-caliber events. These are our standards today.
Hugh Devlin wrote: Just look at Alex Moiseyev's ratings ... it was once close to 2600).
The main goals of any rating system are ranking and motivation. What kind of motivation do we have now? For at least 2 years I have kept hearing a nice story that this is an initial period and eventually the ratings will become accurate. How long do we have to wait? And yes, I have stopped supporting my GM title, because I don't see any real chance of returning to the 2500 zone.
From now on, until I come back to 2500, please don't use "GMI" next to my name in tables.
Alex
I am playing checkers, not chess.
Re: Ratings
Alex, I'm having the same problem: my rating goes down after each tournament! Steve Holliday, Class "B" Grandmaster
-
- Posts: 105
- Joined: Fri Dec 22, 2006 10:18 am
- Location: Ireland
Re: Ratings
Alex wrote: "In order to keep high rating you have to play in small number high caliber events".
Alex, I've heard this before and this is simply not true. Such statements explain nothing.
You only have to look at Lubabulo Kondlo's rating as proof to the contrary. He has only played in a small number of high calibre events - US National tournament, WQT, WCM (GAYP), and SportAccord Games 2012. Sometimes not even one event per year - yet his rating is dropping like a stone. I'm baffled.
-
- Posts: 940
- Joined: Sun Nov 27, 2005 2:56 pm
- Location: Ireland
Re: Ratings
Hi Hugh,
With regard to Lubabalo Kondlo’s rating, I think a point to bear in mind is that his results were, nevertheless, sufficient to place him top of the list. The deflation that has occurred in the ratings is, in my view, due to the scoring system adopted of points per game. This is an inherent defect in the system that cannot be overcome. The opposite inflationary effect, however, can also occur in the case of the frozen rating but that is a separate category. (see list below)
In the case under discussion, I agree with Alex that the best route to maximise his rating is to choose a limited selection of high grade events and avoid the weaker tournaments.
Below is a list of items that may need fixing.
1. The compression syndrome.
Scoring by points per game instead of points per round greatly compresses the bandwidth of the ratings.
Scoring by points per ballot (or round) in 3-Move Tournaments would produce a vastly improved rating list and allow the Elo system to operate in the manner for which it was designed (as in chess).
Adherence to scoring by points per game in 3-Move Tournaments will, I believe, only prolong the current unsatisfactory situation.
When I pointed this out on a previous occasion, one commentator suggested that compression would be a good thing, as it would lessen the difference between the lower and higher rated players!
2. The apples and oranges syndrome.
The practice, when calculating the ratings, of including such a wide range of strengths of tournaments from the strongest master tournaments right down to friendly or fun day events, all in one huge amorphous mass.
3. Allocation.
One of the main purposes of the ratings is:
To assist in grading players in tournaments when these are classified as Master, Major, Minor etc.
In short, to make certain that a player cannot enter a class below or above the one to which they belong.
4. The frozen rating syndrome.
Enter one event and if you come away with a good score, do not enter any further events until the 3 year cycle is up.
-
- Posts: 583
- Joined: Sat Jan 08, 2011 10:11 pm
- What do you like about checkers?: It is a game of beauty when played at a high level.
- Location: PA
Re: Ratings
Hugh,
Hugh Devlin wrote: Thanks to all involved. There is something more eye-catching than a "B" player in the top 4. Just look at the ratings ....
There was a time when there were many "Grandmaster" players (5-6) rated at 2500+. I remember Tinsley rated at 2700+. According to these current ratings, Tinsley would be lucky to have a rating of 2400+.
How has this happened? Today we have someone who is considered the best player in the world (according to ratings) with a rating of considerably less than 2400. Keep going, and in the not-too-distant future it will be less than 2300 if the same criteria are followed. Just look at Alex Moiseyev's ratings. Here we have someone who, on entering the National tournament, would be considered a favourite to win it, with a rating just above 2200 (it was once close to 2600). Has the standard of these players, and the standard of the game in general (e.g. the Masters Division of the National Tournament), dropped that low?
A Grandmaster or Master "norm" used to be earned/awarded by players who played exceptionally well in a strong event (with a high average rating of opponents). (I believe that GM Shane Mc Cosker was the last player to have earned such a title "the hard way".) Today the average high rating is about 250 points less, so how can anyone ever achieve such a norm? There was a time when the average Grandmaster had a rating of 2500. Today it's about 2250. These ratings ask more questions than they answer. Maybe someone can explain the phenomenon? Hugh
You pose many of the same questions that I and other ACF members have thought about regarding our current rating system. While I am not an expert on rating systems, I do have experience working with our current system, and hopefully I can help you understand some of what is going on. You mention that grandmaster ratings have decreased by a couple of hundred points or so from where they used to be in the old system. I'd like to point out that the lowest ratings on the current list have actually "increased" from where they used to be in the old system. We had a good number of players in the old system with a 1000 rating (the lowest it could go). Now most of the lowest-rated players in this system are nowhere near that mark. So what we can conclude is that the overall rating range in this current system has decreased; it is not just a matter of higher-rated players' ratings going down. I believe one of the ideas behind this new system was to have a more competitive and exciting system. Under this system, people's ratings are much closer together, and therefore they are constantly moving up or down the ranks. It challenges the higher-ranking players more than the old system did, but it is also more forgiving to the lower-ranked players.
You also ask, "Today the average high rating is about 250 points less, so how can anyone ever achieve such a norm?" The answer is that under this system the norms have changed. As I discussed, the ratings have condensed. So yes, we will probably not see anyone reach the 2500 mark in this new system, and Alex will not be able to break Tinsley's 2800+ record. But I would argue that checkers and ratings are not just about the top grandmasters; they are about everyone as a whole. If you look at the ratings, the best players are still at the top and the weakest players are still at the bottom. Only the norms have changed, and I don't think that is really a detriment to checkers. Sure, some people are still out of place in the ratings, especially toward the top, but I don't think that is the fault of the system. Rather, I would argue those people have just not played in enough tournaments under this new rating system. As a result, the ratings are still in a transition period, but at some point soon we will see a more accurate representation of the rankings.
-
- Site Admin
- Posts: 92
- Joined: Tue Aug 14, 2007 11:58 am
- What do you like about checkers?: Winning
- Location: Kitchener, ON, Canada
Re: Ratings
Can anyone provide details on the exact rating formula used by the ACF? I wouldn't mind taking a look at it.
Re: Ratings
One problem is that players generally enter the tournament arena, improve (mostly) over several tournaments, and then never play tournaments again (quitting, retiring, dying, ...). So, over time, the ratings of the remaining players go down. The average rating will go down every year, and the amount that it decreases will be unpredictable (to some extent) depending on which players quit. The range of ratings (top-bottom) will apparently compress in a varying and unpredictable manner. There is probably no sure way to keep the average rating the same from year to year, or to keep the range from high to low from shrinking, except to readjust the average and the range now and then. As I seem to recall, the chess organizations routinely deal with the same problems.
I find it interesting that the ratings are so volatile. Some players leap up or down the list, when one might expect a little inertia to decrease the leaps, maybe by including more games over a greater number of years. And the number of players is small. Increasing the number of players (perhaps by increasing the number of years) will increase the statistical validity of the data.
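The deflation mechanism described above is easy to reproduce in a toy simulation: improving players carry rating points out of the pool when they retire, so the survivors' average drifts down. Here is a rough sketch with made-up numbers (standard Elo arithmetic, not the ACF's actual formula or data):

```python
import random

random.seed(1)

def expected(r_a, r_b):
    """Standard Elo expected score for A against B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def play(pool, k=24):
    """One rated game between two random pool members.

    Win probability follows the Elo expectation of the current ratings
    (a crude stand-in for true skill). With the same K on both sides,
    rating points are conserved within each game.
    """
    a, b = random.sample(range(len(pool)), 2)
    score = 1.0 if random.random() < expected(pool[a], pool[b]) else 0.0
    pool[a] += k * (score - expected(pool[a], pool[b]))
    pool[b] += k * ((1 - score) - expected(pool[b], pool[a]))

pool = [1600.0] * 20          # everyone starts at 1600
for year in range(10):
    for _ in range(200):
        play(pool)
    # The most improved player retires; a fresh 1600 newcomer replaces them.
    pool.remove(max(pool))
    pool.append(1600.0)

# The surviving players' average is now below the 1600 starting point:
# the points the retirees gained left the pool with them.
print(sum(pool) / len(pool))
```

Each game conserves points, so the drift comes entirely from who leaves, which matches the observation that the amount of deflation is unpredictable.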
- Eric Strange
- Posts: 438
- Joined: Thu Mar 12, 2009 12:10 pm
- What do you like about checkers?: What's not to like?
- Location: Colorado
- Contact:
Re: Ratings
You guys seem to have the correct understanding.
The system we use is Elo; you can Google it to get all the information you want about the system.
The old system, which could jump people as high as 2700 and as low as 1000, caused HUGE gaps in ratings. Players were sometimes hundreds of points apart, which is not good considering how low our tournament attendance is.
The new system is bringing players closer together, which causes low ratings to go up and high ratings to come down. This compression keeps everyone competitive with each other and creates new norms.
With this system, if we had a larger number of players, the point spread would be larger too. With double the number of tournament players, we could expect the top of the rating scale to be about 200 points higher, give or take.
I would say that 5 of the top 6 on http://icheckers.net/ratings/ do not have an established Elo rating yet. Kondlo is a key example. You see how he keeps dropping significantly every single tournament? That is because he has not yet reached his real rating under the new system.
Alex, your rating would be punished under the Glicko system (which is what the WCDF uses) for playing in more tournaments. The Elo system is unbiased in that regard: Elo only takes into account your rating versus your opponents' ratings. If you're playing in a tournament where everyone is 200 points lower than you, you won't get much for wins and you will lose points on draws. That is just how it works, as it should. The only reason it seems like you're being punished is that you are one of the players who has played enough games to establish a true Elo rating. You will still see some changes when you play in tournaments with players who are over- or under-rated from the old system; they will give your rating a boost or a drop in a tournament.
Every year this new system becomes more and more accurate.
I don't think 20 games is enough for some players' ratings to be accurate. I think a symbol next to the obvious extremes is the best way to go.
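The point about losing points on draws against lower-rated opponents follows straight from the Elo expected-score formula. A minimal illustration (standard Elo arithmetic; the K-factor of 16 for 2400+ players is the tier given later in this thread):

```python
def expected_score(r_a, r_b):
    """Standard Elo expected score for player A against player B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

# A 2400-rated player facing a 2200-rated player is "expected" to score
# about 0.76 per game, so a draw (0.5) costs points and a win gains little.
e = expected_score(2400, 2200)
k = 16  # illustrative K-factor for a 2400+ player
delta_draw = k * (0.5 - e)  # negative: the draw lowers the rating
delta_win = k * (1.0 - e)   # small positive gain
print(round(e, 3), round(delta_draw, 2), round(delta_win, 2))
```

So a strong player who draws often against a 200-points-lower field bleeds rating even while going undefeated, exactly as described.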
-
- Posts: 448
- Joined: Sat Dec 15, 2007 10:57 am
Re: Ratings
For every master disappointed by his low rating, there is an over-rated average player happy as a lark!
Viva la incentive!
-
- Site Admin
- Posts: 92
- Joined: Tue Aug 14, 2007 11:58 am
- What do you like about checkers?: Winning
- Location: Kitchener, ON, Canada
Re: Ratings
What k-factor is used for our ratings?
Is this k-factor adjusted based on player ratings or the same for everyone?
Are the calculations for each round done using the pre-tournament ratings for each player, or do the calculations take into account games played up to that point?
Thanks
- Eric Strange
- Posts: 438
- Joined: Thu Mar 12, 2009 12:10 pm
- What do you like about checkers?: What's not to like?
- Location: Colorado
- Contact:
Re: Ratings
The K-factors are as follows (I had read this was the most accurate K-factor scheme):
Players below 2100 --> K-factor of 32 used
Players between 2100 and 2400 --> K-factor of 24 used
Players above 2400 --> K-factor of 16 used.
Ratings are scored round by round, so after each round a player's rating technically changes.
Clint, the simplest way to understand it: the ACF's Elo rating system is essentially the same rating system used on Yahoo! and Playok.com for two-player games.
The only differences are round-by-round scoring (because of how crosstables are notated)
and the provisional period (the first 20 games do not affect opponents' ratings, only the newcomer's).
The reason the ratings look screwy is that we are converting from one system to another; as time goes on, the ratings will correct themselves. A LOT OF PLAYERS are right at their proper levels for this number of active players. The ones that aren't make it look wrong. This is what I am attempting to get feedback on: how should we handle players on the active ratings list whose rating is extremely out of proportion? I say we put a symbol by their name, motivating them to play more tournament games to get their proper rating. Joe suggests having new players go through a provisional period before they are allowed onto the active list. I don't like the second option, although it is a good idea; just a preference. Any more questions I can answer?
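The tiered K-factors and round-by-round scoring described above can be sketched as follows. This is a reconstruction from the description in this thread, not the ACF's actual code; in particular, how the boundary ratings of exactly 2100 and 2400 are binned is an assumption:

```python
def k_factor(rating):
    """Tiered K-factor as described: <2100 -> 32, 2100-2400 -> 24, >2400 -> 16.
    Treating the boundaries as inclusive of the middle tier is an assumption."""
    if rating < 2100:
        return 32
    if rating <= 2400:
        return 24
    return 16

def expected(r_a, r_b):
    """Standard Elo expected score for A against B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(r_a, r_b, score_a):
    """One rated game; score_a is 1 for a win, 0.5 for a draw, 0 for a loss.
    Each player's change uses their own K-factor."""
    new_a = r_a + k_factor(r_a) * (score_a - expected(r_a, r_b))
    new_b = r_b + k_factor(r_b) * ((1 - score_a) - expected(r_b, r_a))
    return new_a, new_b

# Round-by-round scoring: each round starts from the ratings as updated
# after the previous round, not from the pre-tournament ratings.
ra, rb = 2450, 2050
for score in (1, 0.5, 1):  # a hypothetical three-round encounter
    ra, rb = update(ra, rb, score)
```

One consequence of scoring round by round is that a player's tier (and therefore K) can change mid-tournament, which is part of what Clint's question was probing.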