TargetingChallenge/ResultsFastLearning

Robo Home | TargetingChallenge | Changes | Preferences | AllPages


/Constitution - /ReferenceBots - /HowTo - /FastLearning - /Results - /ResultsFastLearning - /ResultsChat - /Calculator
Name Author DT Asp TAOW Spar Cig Tron Fhqw HTTC Yngw Funk Total Comment
CassiusClay 1.9.9.94* PEZ 83.90 93.92 99.91 98.32 84.23 89.16 98.22 87.09 98.08 90.79 92.36 36 seasons
DCResearch 0071 Simonton 82.45 93.71 99.99 98.77 84.81 88.61 97.30 88.83 97.77 91.31 92.36 104.1 seasons
Dookious 1.543 Voidious 86.29 92.89 99.98 98.54 83.47 88.01 97.53 85.44 98.03 90.07 92.02 100 seasons
Ascendant 1.0 Mue 82.00 93.33 99.99 97.88 84.35 86.87 96.92 85.50 98.58 89.65 91.51 15 seasons
Komarious 1.705 Voidious 79.66 93.27 99.74 98.08 84.14 87.08 97.09 86.75 98.08 89.36 91.33 100 seasons
Shadow 3.06 ABC 78.85 90.24 99.63 98.45 83.23 87.43 93.95 89.67 97.86 88.99 90.83 15 seasons
Pugilist 1.9.9.7 PEZ 81.05 91.69 99.85 98.14 82.11 84.79 93.54 85.74 97.45 88.97 90.33 16 seasons
Raiko 0.43 Jamougha 80.02 89.93 99.63 97.65 81.63 84.23 95.92 85.19 97.20 88.06 89.95 16 seasons
Toad 0.8t Florent 77.11 90.95 99.78 97.75 79.75 84.55 97.04 84.35 97.68 89.49 89.84 15 seasons
Cyanide 1.72 Alcatraz 77.26 93.38 99.74 98.39 79.21 84.71 94.57 83.83 97.53 89.47 89.81 15 seasons
Phoenix 0.2 dev David Alves 74.20 89.98 99.88 96.18 82.15 84.52 95.28 87.49 96.84 89.81 89.63 50 seasons
Ali 0.2.5 PEZ 78.17 92.09 99.75 96.42 79.35 82.26 95.79 86.48 96.67 88.66 89.56 15 seasons
DarkHallow Jim 76.71 90.34 99.30 97.42 80.72 85.74 95.14 85.35 96.90 87.59 89.52 15 seasons
Aristocles 0.3.7* PEZ 78.50 90.33 99.72 96.47 81.86 82.41 95.86 83.58 97.34 88.25 89.43 15 seasons
Shadow 2.47 ABC 71.13 88.14 100.00 97.54 83.10 86.70 94.65 87.19 96.77 87.64 89.29 15 seasons
Aleph 0.33 rozu 72.37 88.74 99.99 97.86 79.44 86.92 93.67 87.33 96.79 88.18 89.13 15 seasons
Pear 0.58 iiley 79.08 88.02 100.00 96.39 77.65 85.18 94.42 84.72 96.48 88.60 89.05 15 seasons
YALT 1.11* David Alves 72.79 87.18 99.82 96.10 80.44 85.01 93.87 86.67 96.84 88.90 88.76 15 seasons
Okami 1.04.TC Axe 71.16 90.45 99.57 96.75 82.35 86.55 91.49 85.31 96.88 86.23 88.67 15 seasons
FloodMini Kawigi 71.73 89.30 99.97 97.54 73.77 85.38 95.74 84.90 96.21 86.98 88.15 15 seasons
Jekyl dev Jim 73.56 89.45 99.90 96.14 74.78 85.56 90.01 87.70 96.73 86.22 88.00 15 seasons
Quest 007dev Frakir 79.37 84.11 100.00 94.74 79.54 90.51 90.63 80.89 98.83 80.77 87.94
Musashi 2.08.8.TC Axe 72.72 89.24 99.40 96.06 82.40 83.75 87.47 87.01 94.50 84.51 87.71 15 seasons
GresSuffurd 0.2.4 GrubbmGait 75.11 86.98 99.87 96.63 74.50 83.41 93.94 84.71 95.58 85.60 87.63 15 seasons
SandboxDT 2.11 Paul Evans 66.18 89.49 99.40 98.12 74.29 83.94 94.68 84.59 97.58 84.64 87.29 15 seasons
Quantum [DevTC 1]?.0 Wolfman 70.07 83.60 99.53 92.73 69.93 84.27 96.38 83.47 98.67 87.22 86.59
FhqwhgadsMicro Kawigi 73.01 89.24 99.43 95.43 67.93 84.97 90.31 80.30 95.21 83.86 85.97
FloodMini # Kawigi 68.41 85.80 99.48 94.39 73.64 82.43 91.33 83.30 95.01 85.02 85.88
Griffon 0.9.0* PEZ 68.03 85.64 99.20 93.93 75.18 81.27 89.79 86.01 95.13 83.18 85.74
Fractal dev (0.54) Vuen 62.77 83.54 96.98 90.57 70.08 84.20 86.42 81.46 94.33 83.98 83.43 10 seasons
GloomyDark 0.5* PEZ 62.97 81.73 99.33 87.29 72.82 82.89 84.49 83.71 95.08 79.29 82.96
GrubbmGrb 1.2.1 GrubbmGait 58.72 76.04 86.00 94.01 67.05 80.57 90.32 82.37 96.37 86.50 81.79 15 seasons
Falcon PEZ 55.73 78.94 71.49 78.84 71.21 77.39 85.31 86.29 95.68 81.68 78.26
Resin 0.2 PEZ 59.53 75.33 97.04 78.61 62.52 76.28 81.39 77.03 91.51 76.64 77.59
HaikuTrogdor 1.1* Kawigi 39.12 55.21 68.69 79.95 42.38 57.35 74.50 56.28 84.87 63.16 62.15


I've found that results for 15 seasons are only accurate to within a point of the actual results. I use 50 seasons. --David Alves
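David's point about 15 versus 50 seasons is a standard sampling-error effect. A minimal sketch of the arithmetic, treating per-season TC totals as independent samples (all numbers here are illustrative assumptions, not real challenge data):

```python
# Hedged sketch: 95% confidence half-width of the mean per-season total.
# Assumes per-season totals are independent samples; scores are invented.
import math

def margin_of_error(season_scores, z=1.96):
    """95% confidence half-width of the mean of per-season totals."""
    n = len(season_scores)
    mean = sum(season_scores) / n
    var = sum((s - mean) ** 2 for s in season_scores) / (n - 1)  # sample variance
    return z * math.sqrt(var / n)

# If one season's total varies with a standard deviation of about 2 points,
# 15 seasons give a margin of roughly 1.96 * 2 / sqrt(15), i.e. about one
# point, matching David's observation; 50 seasons shrink it to about 0.55.
print(round(1.96 * 2 / math.sqrt(15), 2))  # 1.01
print(round(1.96 * 2 / math.sqrt(50), 2))  # 0.55
```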

CC .94* is my dev version, which might not get released. I'm testing a quite different segmentation now. Well, probably I'll release the version that scored the above record. The experiment probably won't succeed... Anyway here's the best season out of those 36:

Name Author DT Asp TAOW SparCig Tron Fhqw HTTC Yngw Funk Total Comment
CassiusClay 1.9.9.94* PEZ 88.91 92.94 100.00 100.00 86.14 97.06 100.00 91.40 98.83 90.69 94.60

Quite cool, huh? =)

-- PEZ

Amazing, that improvement will succeed in the rumble for sure. -- ABC

Well, it didn't. BeeRRGC equipped with that improvement lost 5 RR@H points. That's amazing in itself. I think we should work out a new TC setup. What about using the RRGC versions of our bots and a carefully selected testbed of shooting "real" bots? The problem is how to calculate the index, but maybe we can use the same one as with this TC? Since all challengers share the same movement... -- PEZ
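The index PEZ mentions is, as far as I can tell from the table above, just the mean of the ten per-reference-bot percentage scores, each of which is the challenger's share of the total score averaged over seasons. A sketch of that reading (not an official implementation):

```python
# Hedged sketch of the TC index as this page's table appears to use it:
# each cell is the challenger's percentage share of the combined score
# against one reference bot; the Total column averages the ten cells.

def percent_score(my_score, their_score):
    """Challenger's share of the combined score, as a percentage."""
    return 100.0 * my_score / (my_score + their_score)

def tc_index(per_bot_percentages):
    """Average of the per-reference-bot percentage scores."""
    return sum(per_bot_percentages) / len(per_bot_percentages)

# Check against the CassiusClay 1.9.9.94* row from the table above:
cc_row = [83.90, 93.92, 99.91, 98.32, 84.23, 89.16, 98.22, 87.09, 98.08, 90.79]
print(round(tc_index(cc_row), 2))  # 92.36, matching the Total column
```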

That is strange. Maybe the RRGC is not the best test either; we are probably optimising the score against RMX problem bots there. Did Ascendant gain the same number of points as AscendantRRGC with the new gun? One thing that would be very useful, imo, is an opponent rating column in the details page; that way we could sort it and make some interesting graphs in Excel... -- ABC

RMX and RRGC have nothing in common. Maybe you mean Raiko? Ascendant sure gained a lot of points using the RRGC-optimized gun. Not the same number of points, since Ascendant surfs many bots to death better than you could ever shoot them dead. So far the RRGC measure is very accurate in telling me when I have reason to expect a rating boost from CC. I think you can deduce the opponent rating from the PBI on the details page. What kind of graph are you thinking about? -- PEZ

Isn't the reason as simple as the TC having too few simple bots compared to the RR@H? That, or your own movement creates other situations. -- Pulsar

Might be both. But an RRGC challenger setup would allow us to use any reference bots we fancy, including closed-source and abandoned ones. -- PEZ

I remember now which graph I was thinking about. It would be useful to have a ranking/expected score column in the details comparison page, and maybe even an LRP for it. That way we could see where different versions win/lose points. Like this:

This is the comparison between Shadow 3.53.4 and 3.54. The X values are expected score, the Y values are score diff. As you can see, I lost more points against the top bots by reducing the fighting distance.

-- ABC

I'm not sure I completely follow. The graph shows 3.53.4's expected score on the X axis and score diff to 3.54 on the Y axis? What, exactly, should the extra column show? You can try adding &fullInfo=true to the details comparison page and tell me if it's any of that info you are lacking. If so it'll be really easy to fix. =) -- PEZ

Yes, that's it. Would it be easy to incorporate the expected score and an LRP into the comparison page? It would be very useful to know where a certain version lost/gained ranking points. -- ABC

Well, adding the expected score for bot1 isn't as easy as I first thought. Does it matter if we use actual score for the LRP instead? -- PEZ

That's not the same graph. Never mind, I can use the &fullInfo=true flag and excel to get it. -- ABC

I mind! =) -- PEZ

Ok :), then find a way to add an expected score column (or, even better, opponent rating) to the comparison page and make an LRP out of it... -- ABC

How would I make an LRP using the opponent rating? -- PEZ

Using the rating as X values, instead of expected score. -- ABC
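The plot ABC describes could be assembled like this — a minimal sketch with invented data, assuming X is the opponent rating and Y the score difference between two versions, with a least-squares trend line standing in for the LRP fit:

```python
# Hedged sketch of the rating-based LRP: X = opponent rating, Y = score
# difference (new version minus old), fitted with ordinary least squares.
# The data points below are invented for illustration.

def least_squares(xs, ys):
    """Return (slope, intercept) of the least-squares line y = slope*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Invented (opponent rating, score diff) points:
points = [(2050, -3.1), (1980, -1.8), (1900, -0.5), (1750, 0.9), (1600, 1.4)]
slope, intercept = least_squares([p[0] for p in points],
                                 [p[1] for p in points])
# A negative slope means the new version loses ground specifically
# against higher-rated opponents — the pattern ABC saw with Shadow 3.54.
print(slope < 0)  # True
```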


Last edited October 18, 2007 0:19 EST by Simonton (diff)