
Archived

This topic is now archived and is closed to further replies.

J10

CET Competition 2015-2016 Rules thread


This thread is to discuss the rules of the CET competition.

 

Entries for December 2015 should go here -> https://forum.netweather.tv/topic/84464-cet-forecasts-for-december-2015-start-of-2015-16-competition-year/

 

(however, any predictions made here in error will be moved over, so no need to worry)

 

Is everyone happy with the deadlines?

 

Day 1 = 10 pt overall deduction

Day 2 = 20 pt overall deduction

Day 3 = 30 pt overall deduction

Day 4+ = no entry allowed

 

Is everyone happy with the scoring system, or would you prefer more of a bonus for getting it closer, compared to overall accuracy and consistency?

 

Finally, is everyone happy with 2 missed entries being allowed in the overall competition, but none in the seasonal competition?

 

I would also like to thank Roger for his continued support in running the comp; it really makes my task that much easier. :D


Could you explain the scoring system in a bit more detail, J10? I'm not sure I've ever fully understood it!


I honestly believe there should be some form of penalty for missing entries - it doesn't have to be very punitive but it makes no sense if someone can win on just 10 predictions.


Thanks for the replies,

 

In terms of the entries, the reasoning for allowing two missed entries is that people miss out for whatever reason, and 2 was deemed reasonable.

 

However, no player has won the competition having missed any entries, although 2 players (in different years) have come 2nd with only 10 entries. The competition started in 2006-07, so this is its 9th year of running.

There are currently built-in penalties for missed entries as part of the ranking system; however, in my next post I have given a few options to make full entry more of a benefit.

 

In terms of BFTV's query on the scoring, I will try to simplify the rules, so I can understand them. :crazy:  :rofl:

In essence, the scores are based on rankings across 4 criteria.

1. Monthly Error of Entries (Average): if you guess 10.5C and the CET is 10.0C, the error is 0.5C.

2. Monthly Ranking Points (Average): the winner gets 100pts per month and last place 1pt, with everyone in between on a sliding scale.

3. Monthly Ranking Points (Total): as above, but summed over the year rather than averaged.

4. Monthly Accuracy Points (Average): you get a set number of points depending on how close you are to the actual CET, 5pts for spot on down to 1pt for being within 0.5C, plus a bonus 1pt for getting the right side of the average, i.e. predicting above or below average correctly.

Each of these scores is then ranked by performance, with the best player getting 150pts (if there are 150 entrants) and the lowest 1pt.

There are also weighting factors to apply (these are shown in columns AV to AY of the current spreadsheet).

 

1. Monthly Error of Entries (Average) *1

2. Monthly Ranking Points (Average) *1

3. Monthly Ranking Points (Total) *1

4. Monthly Accuracy Points (Average) * 0.5

 

So there is a benefit, via criterion 3, in entering all 12 months.

 

So on the current basis, the maximum score is 3.5 * the total number of entrants (141) = 493.5, less any penalty points. In (C) of my next post, I have set out a number of options to make this more consistent across the year.
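For anyone who finds the ranking mechanics easier to follow in code, here is a minimal sketch of how the four weighted criteria combine. All names and numbers here are illustrative, not taken from the actual spreadsheet:

```python
# Illustrative sketch only -- names and numbers are made up, not from the spreadsheet.

def rank_points(values, higher_is_better=True):
    """Best entrant gets n points (n = number of entrants), worst gets 1."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i], reverse=higher_is_better)
    points = [0] * n
    for rank, i in enumerate(order):
        points[i] = n - rank  # rank 0 (best) -> n, last -> 1
    return points

# Three hypothetical entrants, one value per criterion:
avg_error    = [0.3, 0.5, 0.9]    # lower is better
avg_ranking  = [80, 70, 40]       # higher is better
tot_ranking  = [960, 840, 400]
avg_accuracy = [2.5, 2.0, 1.0]

weights = [1.0, 1.0, 1.0, 0.5]    # criteria 1-4 as listed above
criteria = [
    rank_points(avg_error, higher_is_better=False),
    rank_points(avg_ranking),
    rank_points(tot_ranking),
    rank_points(avg_accuracy),
]
totals = [sum(w * c[i] for w, c in zip(weights, criteria)) for i in range(3)]
print(totals)  # [10.5, 7.0, 3.5] -- maximum is 3.5 * 3 entrants, matching the 3.5 * 141 formula
```

With 3 entrants the best possible is 3.5 * 3 = 10.5, which is the same shape as the 3.5 * 141 = 493.5 maximum quoted above.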


Summary of above

 

A) Entry Requirements - See Below

 

B) Ranking Changes

 

There are other ways the scoring could be amended to favour those who enter each month.

 

By say changing it to

1. Monthly Error of Entries (Average) *1

2. Monthly Ranking Points (Average) *1

3. Monthly Ranking Points (Total) *1

4. Monthly Accuracy Points (Average) * 1

5. Monthly Accuracy Points (Total) * 1

 

This would make 40% of the score dependent on total scoring, compared to 14% at the moment.

 

C) Total Points available before penalties / Bonuses

It would also be possible on this basis to rank 1-100, so there is a definite maximum number of points each month, which could be set at 400 or 500pts before time bonuses/penalties, and this maximum would then stay the same for the duration of the tournament.

 

Question Options - Please answer the questions below.

 

Rules re Entry Requirements (A)

1. Keep it as it is. (2 missed entries allowed)

2. Only allow 1 missed entry

3. Allow no missed entries

4. Allow for 2 missed entries - but have a bonus for entering each month.

  So 40 pts for entering on time,

and 30 pts for entering 1 day late,

and 20 pts for entering 2 days late,

and 10 pts for entering 3 days late,

and nil pts for no entry.

 

Question re Rankings (B)

1. Keep it as it is.

2. Make Total scoring account for 25% of total scoring. (both average above *0.5)

3. Make Total scoring account for 40% of total scoring. (as above)

 

Question on Total Scores (C)

1. Keep as it is

2. Make Maximum Score 100pts

3. Make Maximum Score 500pts (very similar to current maximum)


First point, I am happy to assist with the opening posts and tables, the scoring alone is a big chore in this contest so taking care of the other housekeeping is an easy contribution.

 

As to the scoring system, I suppose since this has been in place for many years (all of them since I joined NW ten years ago, I think), people might be reluctant to change anything.

 

However, for the sake of discussion, I would suggest that the weighting of the four factors be changed so that total score was 1.0 and average accuracy points was 0.5. This would give somewhat more weight to the scores of regular entrants, while accuracy points are generally going to correlate highly with the other scoring features anyway.

 

And just for the sake of discussion, another scoring method that is somewhat simpler and a bit more rewarding for accuracy in some cases is to start with 100 points for the right forecast and to lose two points per 0.1 C error. You could maintain the order of entry feature by giving the first person that score and all later entrants one point fewer (e.g., three at 100 would change to one at 100 and two at 99).
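That simpler scheme, with the order-of-entry tie-break as described, could be sketched like this (function names are mine, purely illustrative):

```python
# Sketch of the simpler scoring idea above: 100 pts for spot on, minus 2 pts
# per 0.1 C of error; later entrants on a tied score get one point fewer.
# Names are illustrative, not an agreed implementation.

def raw_score(forecast, actual):
    error_tenths = round(abs(forecast - actual) * 10)
    return max(0, 100 - 2 * error_tenths)

def score_with_entry_order(entries, actual):
    """entries: list of (name, forecast) in posting order."""
    seen = set()
    scores = {}
    for name, forecast in entries:
        s = raw_score(forecast, actual)
        scores[name] = s - 1 if s in seen else s  # later tied entrants lose one point
        seen.add(s)
    return scores

# Three entrants tied on a perfect guess become 100, 99, 99, as in the example above:
print(score_with_entry_order([("A", 10.0), ("B", 10.0), ("C", 10.0)], 10.0))
```

A guess 0.5 C out scores 90 under this scheme, so scores scatter more for extreme months than near-average ones.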

 

This would eliminate all the scoring other than late penalties which could be applied to the score (in another contest, the procedure is to remove 5 pts per half day late).

 

From this scoring, you could then calculate seasonal and annual order of finish. If you wanted to give a chance to those who miss a month or two, you could go with best 11 or 10 out of 12 for the annual. My system would require three entries to qualify for a seasonal win although I would imagine the system we have now pretty much weeds out anyone who fails to enter three. I think the system we have now probably weeds out annual entrants at about 9 or fewer simply because of the low positional score in total score. My earlier suggestion of amending that would definitely weed out 9 and possibly 10 of 12 (hard to crack the top half with only 10 goes although you'll likely beat at least one of us, hmm wonder who??).

 

Looking at the pros and cons of the simpler method, no doubt simpler to administer, CET value more significant as guesses will be more widely scattered from extremes than near-average, no scoring distinction based on number of entries, and if you have a good forecast in a cluster of good forecasts, you don't get shoved down the table very far. Those were all pros, so cons? I'm going to leave that open because I don't really see any. One minor variation that I have seen employed is to widen the scoring range when an actual value is very anomalous. Perhaps a minimum scoring progression rule, such as this: first place has to be at least 90, second at least 80, third at least 75, fourth at least 70, etc down to tenth has to be at least 40, then differentials of 2 down to 20 points and 1 down to zero, but if your raw score is higher than the formula, you get the raw score instead. That might come into play with a Dec 2010 type of result.
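One possible reading of that minimum-progression rule in code; the anchor points (90, 80, 75, 40 at tenth, steps of 2 down to 20, then 1 down to zero) are from the suggestion above, but the exact in-between steps are my assumption:

```python
# Hypothetical floors: 1st >= 90, 2nd >= 80, 3rd >= 75, then steps of 5 down to
# 40 at 10th, steps of 2 down to 20, then steps of 1 down to zero.
# The in-between step sizes are an assumption, not a settled rule.

def floor_for_position(pos):
    if pos == 1:
        return 90
    if pos == 2:
        return 80
    if pos == 3:
        return 75
    if pos <= 10:
        return 70 - 5 * (pos - 4)        # 4th = 70 ... 10th = 40
    if pos <= 20:
        return 40 - 2 * (pos - 10)       # 11th = 38 ... 20th = 20
    return max(0, 20 - (pos - 20))       # 21st = 19 ... down to 0

def apply_floors(raw_scores):
    """raw_scores sorted best-first; each entrant keeps the higher of raw score and floor."""
    return [max(raw, floor_for_position(i + 1)) for i, raw in enumerate(raw_scores)]

print(apply_floors([90, 58, 54, 52]))  # [90, 80, 75, 70]
```

The example run uses the raw scores quoted later in this thread for December 2015 (90, 58, 54, 52), which the progression lifts to 90, 80, 75, 70.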

 

Final point, late penalties -- they are a bit lenient in this contest, I find, once you see what numbers are being reduced by 10, 20 and 30 points (numbers that can be well over 200). I would suggest increasing these to 10 per half day if we stay on the current scoring system. Half day being 0001-1200 and 1201-2400h of each day, 1st to 3rd. There aren't many late entries, so this would not be much of a task. Losing that many points would keep people more aware of the deadline.

 

Anyway, all of the above just offered for a bit of a discussion, I am quite happy to go with the current system too. Anyone who made 12 wonderful forecasts would win this easily under any scoring system imaginable.

 

 


Just a thought I had, J: what about each member being allowed to play their 'joker'? They have only 1 month in the year to play it, and whatever score they get by playing it is doubled?

 

Just a thought. :)


Cheers for that Roger, a very good contribution there.


Cheers for that Roger, a very good contribution there.

 

A question if I may: looking at the annual table, it looks like (unless I missed it) 'Coldest Winter', who is currently in 4th on 444 points, has not entered a guess for November. Can they still win the competition? Or have I got the wrong end of the stick?


You were looking at the right totals. But I don't think in this case it is mathematically possible for coldest winter to move up 27 points, given how the month is ending and where the other three are located in the probable scoring for the month. BFTV looks set to maintain rankings even if the other two drop slightly, and there aren't enough ranking points available for coldest winter to catch up, also he(?) can only drop a few places on total points as at least ten people will likely pass him, so that will balance out any gains as he stays steady on other averages that might rise in this more unusual month. I've said this without knowing at what point an entrant is dropped from the main table into the auxiliary table of infrequent forecasters (some of whom also have high totals given that they only entered a few and scored high in those).

 

However, I think that if the contenders had all been way off the mark in November and inflated their average error enough to drop a large number of rankings, then coldest winter could have snuck through that maze to claim top spot. BFTV has 8.0 C for Nov, and the way the current average is dropping he's likely to score at least his average number of points and get his average error or close enough that his contest rankings won't drop. So in this case the problem is avoided.

 

Personally, I think the only change I would really like to see of the ones I suggested is to reverse the scoring for total points and average accuracy points, and to make it even more total-oriented you could change average accuracy points to total accuracy points.

 

As I say, not sure where the cutoff lies for entries, coldest winter had already missed two. However, sometimes a person changes username and I have not gone through the new usernames in November to see if that happened here, maybe J10 knows.


 

Thank you for the reply.


It all looks very complicated to me, J10... Can't the table simply reflect how accurate (or not) peeps' guesses are? I'd allow up to 12 non-entries per person, with no entry earning zero points?

 

I suspect that you and Roger are infinitely more savvy than I am. So I'll no be digging my hole any bigger! :D  :D  :D  :fool:


If someone misses 3 goes, they are out.

 

In terms of people changing names, I tend to look at new names and see if they have a previous name, which should in theory stop entries being missed. I dare say though that I have missed a couple over time. :whistling:


More complex than I realised, but I think I get it. Must have been fun putting all of that together the first time!


To be fair, it was Stratos Ferric who first designed the rules for this, and they haven't really changed that much since, only a little tinkering around the edges.

 

And I can confirm that, for the first time, the 2015-16 competition (Dec 15 to Nov 16) will have a prize for the winner.

 

This will comprise a sub for NW Extra and some gift vouchers kindly agreed to by Paul.


I think it's great as it is.


Now that you've mentioned that three missed forecasts drops the entrant to the second tier, that makes our earlier discussion moot. Ranking them on any of 10, 11 or 12 out of 12 is equivalent to a best 10 out of 12 system. And we don't yet have the contest scored for November, so a direct comparison at this point would be a lot of work. However, to show how the two systems compare (current format vs best ten of twelve raw scores from 100 minus 2 pts per 0.1 C) I have compared the actual top 10 after October with same positions from the other system (ranks there include lower ranked entrants that I checked as far down as about 20th in the average error category, beyond that it's self-evident that scores would be lower). By doing that I found two other best 9 of 11 scores that would push Coldest Winter down to 13th overall.

 

Why are there any differences at all? It tends to matter slightly where you get your good scores, if you get them in months where many guesses are close, you'll lose more in the current system than in my proposed alternative.

 

As you can see, it's not quite the same order but fairly similar. The last number in the row is the sum of the nine largest scores with the ranking of that in brackets. That tends to make it more similar to the existing list. I kept "coldest winter" in the list because this is just a discussion of systems, not actual outcomes.

 

 

* missed one month and ** missed two months

 

FORECASTER _______ CONTEST SCORE ____ ERROR-ONLY SCORE ____ BEST 9 (rank)

 

 1. Weather26 ______________471.5 ________________ 1000 __________ 834 (t2)

 2. Born from the Void ______469.0 _________________ 986 __________ 840 (1)

 3. Summer Blizzard _________444.5 _________________ 978 __________ 834 (t2)

 3t Coldest winter **_________444.5 _________________ 790 __________ 790 (13)

 5. Diagonal Red Line ________441.0 _________________ 976 __________ 824 (t5)

 6. DAVID SNOW ____________ 429.0 _________________ 978 __________ 830 (4)

 7. Stargazer* _______________ 427.5 _________________ 876 __________ 808 (9)

 8. Always Expect Rain _______425.5 _________________ 956 __________ 824 (t5)

 9. Weather-history _________ 425.0 _________________ 950 __________ 818 (8)

10  Mark Bayley ____________ 424.5 _________________ 956 __________ 822 (7)

 

example of somebody with two small scores from large errors and where they stand

 

44. Roger J Smith __________ 235.5 _________________ 780 __________ 732

 

So I think it would be mathematically possible for somebody to finish about 15th to 20th in the current system and win on the best 10 of 12 if they tended to be more hit or miss than average.

 

But you can see from the top ten that it makes very little difference to the results which system is used.

 

Since there is very little difference, I see no reason to change unless you like the greater simplicity.

 

(note 1, nobody in the top ten had any late penalties -- as I've mentioned, these seem lenient in ratio to the numbers being reduced, but in this case it never seems to factor into the outcome, I checked the highest ranked entrants who did have late penalties and the differences between systems were negligible)

 

(note 2, my system is scored without reference to order of entry and if you staggered the numbers then some of these would drop a few points assuming that top scorers are often later entrants. Again this would not really change the comparison of systems in any significant way).


At the end of the contest year 2014-15, this, for the sake of comparison, is how the top 38 ranked entrants would have finished using simplified scoring based on the best ten of twelve, losing 2 pts per 0.1 C error each month. Below 38th, late penalties and lower scores in general would be more work than is needed to get a good picture of how things might compare. One or two of the lower scores might have edged out the lowest ranked below, but in any case this should give anyone the full picture of how contending scores compare and whether there are any surprises using a simplified method.

No consideration has been given to order of posting, late penalties are assessed as 10 points per day late (same as the current system but they bite harder in this system due to lower possible scores). So that you understand how the alternate system drops two months, the two largest errors which generate the two smallest scores are dropped from entrants with 12 forecasts. The largest error is dropped from entrants with 11 entries. Nothing is taken away from entrants with 10 entries. Anyone with 9 or fewer does not make either this list or the one we use now. Compare the ranks and you'll see that this simplified system is not generally much different from how we score now but it seems to be somewhat harder on those who miss a month. The advantages besides simplicity would include (a) non-frequent participants basically have no influence on scoring of regular entrants in any way and (b) where you make your errors has no effect on ranking.

 

FORECASTER ______ CURRENT SYSTEM (rank) ___ SIMPLIFIED SCORE __ BEST 10/12

 

BornFromTheVoid ____________ 492.5 (1) ___________ 1056 (2) ___________ 916 (1)

Weather26 ____________________ 473.6 (2) ___________ 1044 (3) ___________914 (2)

DAVID SNOW __________________458.5 (3) ___________ 1060 (1) __________ 912 (3)

Diagonal Red Line _____________451.5 (4) ___________ 1028 (5t)__________ 902 (5)

Mark Bayley ___________________449.0 (5) ___________ 1032 (4) __________ 898 (6)

Stargazer* ____________________ 445.5 (6) ____________ 940 (29t) _______ 876 (15t)

Weather-history ______________ 443.5 (7) ___________ 1012 (8) __________ 892 (7t)

summer blizzard _____________ 443.0 (8) ____________1028 (5t)_________ 904 (4)

Always expect rain ___________ 425.0 (9) ____________ 1006 (10t) ________892 (7t)

Costa del Fal*________________ 423.5 (10) ____________ 944 (28) _________ 870 (19)

syed2878* ___________________ 418.0 (11) ____________ 938 (31) ________ 866 (20)

stewfox ______________________ 395.5 (12) ____________ 1018 (7) __________ 886 (10t)

Norrance (L) _________________ 391.5 (13) _____________ 990 (16) _________ 882 (12t)

simshady ___________________  390.0 (14) ______________988 (17t) _________ 888 (9)

DR Hosking (L) ______________ 389.0 (15) _____________1006 (10t) ________874 (17)

Reef _________________________ 388.5(16) _____________ 1000 (12t)  ________ 882 (12t)

davehsug ___________________ 378.0 (17) _____________ 1010 (9) __________ 880 (14)

march blizzard ______________ 376.5 (18) ______________ 994 (14) _________ 872 (18)

seabreeze86* (L) ____________ 375.0 (19) ______________ 884 (35) __________ 850 (25t)

SteveB ** (L) ________________ 371.5 (20) _______________820 (38) __________ 820 (36)

damianslaw _________________ 370.5 (21) ______________ 988 (17t) _________ 886 (10t)

Stationary Front* ___________ 367.5 (22) _______________882 (36) _________ 856 (24)

Larger than Average Hobo __ 352.0 (23) _____________ 1000 (12t) ________ 876 (15t)

Midlands Ice Age ___________ 342.5 (24) _______________ 950 (26) _________ 836 (31t)

Ed Stone ___________________ 340.0 (25) _______________ 962 (23) ________ 848 (28t)

Don (LL) ____________________330.5 (26) ________________ 958 (24) ________ 822 (35)

Mulzy ______________________ 328.0 (27) ________________ 992 (15) ________ 862 (23)

Congleton Heat ___________ 326.0 (28) _________________ 982 (19) _________ 850 (26t)

Dr (S) NO __________________ 314.5 (29) _________________ 972 (20t) ________ 864 (21t)

Roger J Smith ______________ 299.5 (30) _________________ 878 (37) _________ 830 (33)

jonboy ____________________ 297.5 (31t) _________________ 924 (32) _________ 836 (31t)

easy-oasy*_________________ 297.5 (31t) ________________ 912 (34) ________ 852 (25)

Duncan McAlister __________ 295.5 (33) _________________ 954 (25) ________ 848 (28t)

RJBW _______________________ 294.5 (34) _________________ 948 (27) ________ 826 (34)

J10 _________________________ 290.0 (35) _________________ 972 (20t) ________ 864 (21t)

sundog ____________________ 289.0 (36) _________________ 940 (29t) ________ 814 (37)

The PIT ____________________ 280.0 (37) __________________ 964 (22) _________ 840 (30)

coram (L) __________________ 279.0 (38) __________________ 922 (33) _________ 804 (38)

___________________________________________________

* month missed, L late entry

(to adjust to forecast scores without late penalties, add 10 to scores in both alternative score columns for each L).


I have uploaded the scores based on the previous method of doing scores and a slightly tweaked version.

The tweaked version goes along with Roger's suggestion to give more points for total scores and half points for Average scores.

So the new split is


Average - Monthly Error  * 1.0
Average - Monthly Ranking Points * 0.5
Total - Monthly Ranking Points * 1.0
Average - Accuracy Points * 0.5
Total - Accuracy Points * 1.0

The impact of this gives more of a bonus for people entering every month.
It also means that competitors who get some spot on and some way off will do a little better than before.

Conversely those who are steadier but never spot on will not do quite as well.

Saying that the positions after 1 month are identical under both scoring systems.

Also the score has been standardised that the maximum score will always be 500, regardless of how many enter.
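A sketch of how that standardisation might work, assuming the five tweaked weights above (which sum to 4.0) and a scale factor chosen so a clean sweep of first places always equals 500. Names are illustrative; the actual spreadsheet may do this differently:

```python
# Illustrative only: scale weighted rank totals so the monthly maximum is
# always 500, regardless of the number of entrants.

WEIGHTS = [1.0, 0.5, 1.0, 0.5, 1.0]   # the five tweaked criteria above, sum = 4.0

def scale_to_500(weighted_totals):
    """weighted_totals: one weighted rank-points total per entrant.
    A clean sweep of first places scores sum(WEIGHTS) * n raw, which maps to 500."""
    n = len(weighted_totals)
    max_possible = sum(WEIGHTS) * n
    return [round(t * 500 / max_possible, 1) for t in weighted_totals]

# Two entrants: one top-ranked on every criterion (raw 8.0), one bottom (raw 4.0):
print(scale_to_500([8.0, 4.0]))  # [500.0, 250.0]
```

The point of the fixed 500 ceiling is that a month with 50 entrants and a month with 150 entrants contribute the same weight to the annual table.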

I have attached interim scores based on the above, assuming a 9.5C CET; the final value may affect the final outcome,

but in any case IHaveNoTrueFriendsNow! (Craig Evans) will win this month.

Dec 2015 cet.xlsx (Excel 2007 format)

 


What about just using the old system except for changing the weight of the accuracy scores? The maximum of 500 is going to give some lopsided results for months where everyone is near consensus and you have to break the tie by earliest entry. In this past month's case I think it's fair enough to see Craig's much closer guess get more points but what if it was like last October's cluster?

On the subject of fair points for an extreme guess, maybe a rule saying that when CET is more extreme than any guess or maybe all but two guesses within 1.0 deg, then three extra accuracy point awards would automatically be given to the extreme forecast, for example this past month, Craig should maybe have 3 accuracy points in addition to the one that almost everyone had, for a total of four (out of six possible). If somebody else had been within one degree (nobody was) then maybe they should have had two instead of zero additional. Or else have a system where accuracy points had to be at least 6,4,2,1... if that was not reached the usual way, in this past month the top four accuracy scores are 2, 1, 1 and 1. Being 0.5 out in a month like this past one is a lot different than being 0.5 out in an average sort of August.

We aren't going to see much difference in the "old" and "new" versions until we have several months and a few top scorers who miss a month, then we will have a better idea. But the one suggestion I would make is to drop the 500 top score and just let the scores fall as calculated but with that extra reward for the extreme forecast that verifies. That's going to maintain its clout in future months as it will take longer for rankings to change.

Bottom line is, of course, the contest entrants will be quite happy with whatever you decide, if there had been any unhappiness with the old scoring system I think there would have been more posts made. I tinkered enough with it to come to the realization that any variation will likely have almost the exact same outcomes.

This past month, assuming 9.5 is the finishing value, the raw scores on an unranked system would look like this: 90, 58, 54, 52 etc. Ouch. Those would probably be adjusted by minimum progression to 90, 80, 75, 70 and then probably one fewer point all the way down to where points ran out.


The 500pts issue is somewhat skewed by a problem with the Accuracy.

I have now changed this to widen the accuracy bands as per the above, so that if most people are way off, you can get accuracy points further out; e.g. it is easier to be 0.5C out in an average month than to be 0.5C out this month, where that would win it.

So I have now attached the revised figure, which again sees very little overall change.

Dec 2015 cet New.xlsx

 


That looks just right to me. When we get fewer entrants than in December there might be a bigger gap between the automatic first place score of 500 and the rest but you've given a very generous helping of extra accuracy points to people who will probably never have a bigger monthly error this year. I might though. ;)

 

On 31/12/2015 at 5:48 PM, Roger J Smith said:

That looks just right to me. When we get fewer entrants than in December there might be a bigger gap between the automatic first place score of 500 and the rest but you've given a very generous helping of extra accuracy points to people who will probably never have a bigger monthly error this year. I might though. ;)

 

 

It will be interesting to see how it works out; I hope I haven't messed it up come later in the year. :ninja:

 

