Robo Home | TargetingChallenge2K6 | Changes | Preferences | AllPages

/ReferenceBots - /HowTo - /FastLearning - /Results - /ResultsFastLearning - /ResultsChat - /PreChat

So, how long will CassiusClay stay on top of the new TargetingChallenge? ;) -- Voidious

At best till the summer. I will have finalised my current projects (Griezel, GrypRepetyf) and am planning to start on my first GF-gun then. Be prepared . . . ;-D -- GrubbmGait

Hehe, sweet, I look forward to it. :) If I could just up Dooki's score against Cigaret to the score that CC gets, that would be enough for me to nearly top CC's score. Easier said than done! I am amazed that Dooki's gun does basically the same against Cigaret as against Cyanide, a bot that I consider an expert WaveSurfer and who is ranked #7 overall. -- Voidious

Mue should have something to say about that top place, Ascendant has an amazing anti-wavesurfing gun! -- ABC

I have a really, really hard time believing that head-on targeting scored 96% against pedersen.Butterfly. I guess my movement system just doesn't hold up to 500 rounds of testing. /shrug -- Martin

Well, a 96% score is 6 hits per round out of 57, still only barely above 10%... Nevertheless, it's certainly a surprise considering how poorly LT and CT do against it. -- Voidious

Two of the movement options are variations on tangential oscillation, which LT and CT have a tough time dealing with. One is fixed period and one has a variable period. I think Grubbm's observation about the ramming is probably key, and a vulnerability in the Rumble. If I see the enemy is out of energy I switch to ram mode immediately and never look back (until next round). I'll need to add a delay before ramming and a check to see if the enemy is still at 0 energy. (I'd put it in my onHitByBullet event but it would break down if I decided to start ramming in melee.) -- Martin
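The delay-and-recheck guard Martin describes could look something like the sketch below. This is only an illustration of the idea, not Ugluk's code; the class name, the `RAM_DELAY_TICKS` value, and the method signature are all made up.

```java
// Sketch: wait a few ticks after first seeing zero enemy energy, and keep
// re-checking that the enemy is still disabled, before committing to ram mode.
public class RamGuard {
    private static final int RAM_DELAY_TICKS = 10; // hypothetical delay
    private long zeroEnergySince = -1;

    /** Call once per tick with the current time and last-scanned enemy energy. */
    public boolean shouldRam(long time, double enemyEnergy) {
        if (enemyEnergy > 0) {
            zeroEnergySince = -1;   // enemy recovered, abort the countdown
            return false;
        }
        if (zeroEnergySince < 0) {
            zeroEnergySince = time; // first tick we saw zero energy
        }
        return time - zeroEnergySince >= RAM_DELAY_TICKS;
    }
}
```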

In my test of Ugluk vs. Butterfly 2.0 I scored 89.262 points per round. That's using non-3.0 bullets, not firing when out of range, and my usual energy management (which would reduce scores since it is aimed at winning the round). Hopefully Butterfly 2.0 will reduce the frequency of competitors getting 96%+ scores against it. -- Martin

PEZ, I just noticed that the table on the /ResultsFastLearning page doesn't include a "gun type". It could be added as a field on the form, and the output could just say "<INSERT GUN TYPE>"; or should we just leave it out? -- Voidious

I added it to the form. Very raw, but so is everything else about the calculator. =) -- ~~~~

HeadOnTargeting got scores of 66% and 68% vs pedersen.Butterfly 2.0. I would consider that quite acceptable, as it's right around what Cigaret gets, and I consider Cigaret a superb non-surfing mover. Shall I replace Butterfly with Butterfly 2.0, and re-run the current contenders' scores using it? And does anyone have a problem with changing that one bot? -- Voidious

Go for it. --wcsv

Switch away. Good to see my best wavesurfing effort scores worse than RandomMovementBot. CassiusClay's gun is scary as hell. --Corbos

Well, it still blows him away against anything but the best guns. If you don't mind me making a suggestion, I'd highly recommend looking into RollingAverage (or some type of decay) on your surfing stats. You would probably laugh out loud if you saw how shallow a rolling depth I use in Dookious's movement :) It really provided an immediate boost in his surfing against expert guns. -- Voidious
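For reference, the rolling average described on the RollingAverage page boils down to one formula; a minimal sketch (class and parameter names are illustrative):

```java
// Weighted update: once sampleCount reaches rollingDepth, each new sample is
// weighted 1/(depth+1), so old data decays instead of accumulating forever.
// A shallow rollingDepth makes surfing stats adapt quickly to learning guns.
public class RollingAverage {
    public static double roll(double oldValue, double newValue,
                              double sampleCount, double rollingDepth) {
        double n = Math.min(sampleCount, rollingDepth);
        return (oldValue * n + newValue) / (n + 1);
    }
}
```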

Wow, that didn't take long... Nice work, ABC!!! -- Voidious

Hehe. Well, if the RR@H was about crushing CC then this'll be a champ. =) -- PEZ

And silence fell over the RoboWiki, as everyone got to work on their guns... :-D -- Voidious

Good work PEZ, but Shadow 3.77 scored 88.57% in my last test... ;) -- ABC

Yeah, you're eating surfers for breakfast. My experiment with some anti-adaptive medicine only gained some minor points against the surfers. But I broke the 90-wall against Cigaret! That's a major thing for those of us who remember how bloody hard it was to hit Cigaret back in the days. -- PEZ

Back in the day? For you, I guess... Cigaret is beating Dookious head to head in the RoboRumble *right now*. :) -- Voidious

I guess I shouldn't take it too hard when the top bots absolutely crush Butterfly's movement. I'll just pretend I am lulling them into a false sense of security. And mutter. 99.99 score? He must have tried to escape by ramming his way out of the arena... -- Martin

Success is a poor teacher. (At least that's what I keep telling myself.) --Corbos

Yeah, it's really all about how you end up doing in the RoboRumble. And dodging high level targeters is a whole different animal... The algorithms you use to successfully dodge simpler targeters are also what make it easier for better guns to hit you; notice how RandomMovementBot's scores are pretty consistent throughout, while Butterfly does much better below a certain line, but much worse above it. -- Voidious

Martin, have you graphed your movement? You must have some very obvious spikes there that statistical guns can pick up. I can't find anywhere to read about that bot. Why don't you hijack the Butterfly page for it and just leave some see-also notes about other Butterflies? Watching Butterfly move, it looks like it is MultiMode? I'll assume it is for the purpose of this piece of thought. It might seem like multi-mode movement would force a gun like CC's to relearn often and generally confuse it. But in CC's case there are so many segmentations. Chances are that each mode finds a unique bin in one or two of those segmentations, and then the different modes become just another dimension of segmentation. Does that make sense? For instance, CC has a segmentation that is Time-Since-Last-Velocity-Change. Many movements leave distinct trails in this dimension. And chances are the different modes of movement differ here when all segmentations are taken into account.

Cigaret's movement is awesome. No wonder many bots still struggle with it. What I meant by back-in-the-days was before Raiko found Cigaret's number.

-- PEZ
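The Time-Since-Last-Velocity-Change dimension PEZ mentions can be tracked with just a few lines. A hypothetical sketch, not CC's actual code (the class name and the change threshold are assumptions):

```java
// Feed it the enemy's velocity each scan; it reports how many ticks have
// passed since the velocity last changed. Different movement modes tend to
// leave distinct trails in this dimension, as PEZ describes.
public class VelocityChangeTimer {
    private double lastVelocity = Double.NaN;
    private long lastChangeTime = 0;

    public long update(long time, double velocity) {
        if (Double.isNaN(lastVelocity) || Math.abs(velocity - lastVelocity) > 0.001) {
            lastChangeTime = time;
        }
        lastVelocity = velocity;
        return time - lastChangeTime;
    }
}
```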

Well, I posted the index for the current dev version of PowerHouse. It's better than I expected, but still needs a lot of work before I'll be satisfied with it. And yes, Cigaret is really hard to hit. --wcsv

The movement is basically Ugluk 0.7.4's movement but without a 'mirroring' style. It tracks the total time each movement is in use and the total number of times it has been hit, and uses the least-hit-over-time movement, checked with every bullet hit. The movement options are 'full throttle' (constant movement), fixed and variable period tangential oscillation, and primitive wave surfing. There is also a push coming off of walls and opponents. I haven't done any graphing of the surfing profile (or others). Part of the reason is that I don't make any effort to smooth it, instead using randomly generated bearing offsets to go to. I'm sure my new surfing style, while still not smoothing itself, has a less obvious profile, but I'm not going to keep upgrading Butterfly as I try new things with Ugluk. It isn't really an excellent example of multi-mode. It's just the only one provided. -- Martin / Ugluk
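The least-hit-over-time selection Martin describes can be sketched as below. This is an illustration of the idea only, not Ugluk's implementation; all names are hypothetical.

```java
// Each movement mode tracks how long it has been active and how often the bot
// was hit while it was active. On every bullet hit, switch to the mode with
// the lowest hits-per-tick ratio.
public class MovementSelector {
    private final double[] activeTime;
    private final double[] hits;
    private int current = 0;

    public MovementSelector(int modes) {
        activeTime = new double[modes];
        hits = new double[modes];
        java.util.Arrays.fill(activeTime, 1.0); // avoid division by zero
    }

    public void onTick() { activeTime[current]++; }

    public void onHitByBullet() {
        hits[current]++;
        int best = 0;
        for (int i = 1; i < hits.length; i++) {
            if (hits[i] / activeTime[i] < hits[best] / activeTime[best]) best = i;
        }
        current = best;
    }

    public int currentMode() { return current; }
}
```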

It puzzles me that Dookious and PowerHouse have Tigger pegged so well. I really have to look into that, as it worries me that maybe it is pure luck and my gun's score should be much lower... Edit: Ok, maybe that's exaggerating, since I just realized it wouldn't lower my score that much if it were 5-10 points lower vs Tigger; still, I have to know why! -- Voidious

Yes, please investigate. And please share what you find! This could be yet another little (or big) clue for the AntiSurferTargeting quest. -- PEZ

@Voidious, I thought that was strange too, but I've run it against Tigger several times without much change. Tigger must just have some weakness in its movement that our guns are picking up better than the others for some reason. Maybe it's because we are both using VirtualGuns? Hopefully I'll have some time for Robocode soon so I can investigate... -- wcsv

Yeah, I don't doubt the legitimacy of the scores, as I've had 90+ for several versions. My first guess is the segmentations, either very similar to or (more likely?) very different from Tigger. I read through some of the code yesterday, but the TileCoding stuff makes it a little tough to comprehend :)

On a different note... I'm surprised more people don't post FastLearning scores. At this point, it is my primary benchmarking tool, and the full 500-round TC is more of an afterthought (although still valuable/interesting). -- Voidious

Depends on what you're after. The RR@H uses 35-round battles so the better tuning tool for that is the fast-learning scores. But I think it is over long battles that the guns really get to show what they can do. Ideally we would run the RR@H over longer battles, but I guess we'll have to stick with 35 rounds a while. Or introduce a 500-rounds division maybe. =) -- PEZ

About the fast learning, I've just been being lazy and haven't run it yet. I probably won't be moving from this desk for a while, so there's no reason I can't run it right now... --wcsv

PEZ, I would be game for a 100- or 500-round division in the RR, for sure! I do like the idea of longer battles in general, but like you said, 35 rounds is the best for tuning for the RoboRumble. And wcsv, that would be cool, it's PowerHouse and Shadow that I've been most curious about for FastLearning. :) -- Voidious

You can always run Shadow yourself. -- PEZ

Yeah, and PowerHouse, too... But I'm usually spending that CPU time running my own TC scores, or the RR@Home client =) I do have access to some idle machines at work, though, maybe I'll run Shadow there next week. -- Voidious

I know none of this benchmarking is an exact science, but it's interesting that Shaakious's FastLearning score went down slightly (let's say it stayed the same, within the margin of error) from 0.11 to 0.12, but the RoboRumble ranking went up a good 7-8 points. -- Voidious

That is to be expected when there are so many surfers in the challenge. The rumble doesn't have that many surfers, so it doesn't correlate all too exactly with the challenge results. -- PEZ

Thanks a lot for running Shadow in the fast learning challenge Voidious. I like the result a lot. :) -- ABC

No problem, I wanted to see the results, too. (Impressive!) I've been doing a lot of "away from the computer" work at work lately, so I might as well put these CPU cycles to good use. =) -- Voidious

Cool results! Let's see if I can recover any soon. =) -- PEZ

Actually I thought CC2zeta would be beating that score, but it has the same score as that old CC.96bd. It has much higher top scores, but then too many low scores. What's Shadow's top score for a season, V? -- PEZ

Doh, the XML file is at work... I will let you know tonight (most likely) or tomorrow when I am near there. -- Voidious

Lowest season was 85.54, highest season was 88.86. I uploaded the source XML to my web space: [shadow_369m_tc2k6-35.xml]. -- Voidious

That's the kind of stability I would like. My seasons vary from 83.93 to 89.47. But I have a few, as yet untested, ideas on how to stabilize it. Hope never leaves me. =) -- PEZ

Wow. Now I got 90.44 in the first season. Looks like so:
Season Name Butterfly 1.0 CassiusClay 1.9996bdTC Chalk 1.2TC Cigaret 1.31TC Cyanide 1.80.bTC DuelistMicro 1.22TC FloodMini 1.4TC GrubbmGrb 1.1.3TC RandomMovementBot 1.0 Tigger 0.0.23TC Score
1: CC 2eta 96.46 66.06 90.89 94.17 83.77 99.66 97.89 97.57 89.11 88.80 90.44

But it's just a fluke of course. The scores that should have been good, had the VG been doing its job, are the first two, and those are pretty lame.

-- PEZ

I sort of think we should consider upping the number of seasons in the fast learning benchmark, maybe up to 25, though even 20 would be better. I run a lot of FL benchmarks when tuning my gun, and I don't even trust 15 seasons against a given tank to give a very accurate benchmark. I've had differences of at least 1.5% over 15 seasons versus a certain tank, possibly much more that I just don't remember. The final score may even out over 10 tanks in 15 seasons, but it would be nice to have the score for each reference bot be accurate, too. I know 15 seasons x 35 rounds x 10 bots is already over 5,000 rounds, though, so that is a good argument against this. Thoughts? -- Voidious

15 seasons sure isn't enough to present the results with two decimals. But it gives a ballpark figure that is quite useful. 25 isn't enough either, but it is more accurate, no doubt. If it's worth it? Dunno. -- PEZ

The current figures are undoubtedly useful, and the final score is probably very close to accurate. I guess we can just leave the mandated amount at 15, but I will probably run more than that myself when I can for my bots. (I certainly run at least 20-25 if I am trying to discern if a small tweak is better or worse against a certain tank.) -- Voidious

Is "eta" your dev version? At first, I thought it was a typo of "zeta", but I can't imagine you'd replace a 50-season benchmark with a 15-season one just to get a higher score... Just curious. Nice score! -- Voidious

Yes, 2eta is the dev version. The 50-season was with a different tweak of the VG selection. I got 87.17 at first with that one and wanted to get rid of some of the uncertainty in the results. This tweak works quite a lot better in this challenge, but looking at the score against Butterfly I wonder if I dare use it. The problem with this challenge is the extra bonus it gives to anti-wave-surfing. Something that doesn't pay very well in the rumble. -- PEZ

Yeah, that's definitely something to be aware of with this TC... At first, we were talking about putting 5 WaveSurfers in, so I'm glad we at least took it down to 4. And this version of Chalk seems to be handled pretty well by general-purpose guns, perhaps because it doesn't decay the stats much or at all. The number of WaveSurfers is bound to keep increasing in the RoboRumble, though, and I think it's reasonable to say there's a difference between "overall gun performance" and "RoboRumble gun performance"; there are just a lot of non-surfers in the RoboRumble. Anyway, I still say 15 seasons isn't accurate enough to judge a score against a single opponent - I run 20-25 against a single opponent (usually Cigaret these days) if I want to know their particular FastLearning score. -- Voidious

My intuition tells me that if 15 seasons are about enough for all ten bots in the challenge then 150 seasons will be needed for a single opponent. This can be tested somewhat. The TCCalc script calculates the confidence interval. Run 15 seasons against Cigaret and see what it says and compare it to the figure reported from a full TC run. -- PEZ

It's a pretty common statistical calculation. Opinion polling comes immediately to mind. Basically you need to decide on the confidence level (e.g. +-5%? 1%? 0.001%?) and look up how many samples you need. I don't have my college statistics book handy but I'm sure it is in the appendix somewhere. Somewhat related, I recently took down periodic rating figures and number of battles processed, and I found that in the roborumble the rating after ~300 battles was within 2 points of the final rating, even though there are over 400 robots in the rumble. The meleerumble ratings never really stabilized, even after 1200 battles. -- Martin
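The sample-size math Martin alludes to can be made concrete with the usual normal approximation. A rough sketch (the per-season standard deviation of 1.5 points used in the example below is a made-up figure, not measured data):

```java
// 95% margin of error on a mean estimated from n independent samples,
// using the normal approximation: margin = 1.96 * stdDev / sqrt(n).
// The margin shrinks with the square root of the number of seasons,
// so 150 seasons against one bot is about sqrt(10) ~ 3x tighter than 15.
public class Margin {
    public static double marginOfError(double stdDev, int samples) {
        return 1.96 * stdDev / Math.sqrt(samples);
    }
}
```

For example, with a per-season standard deviation of 1.5 points, 15 seasons gives roughly a ±0.76 margin while 150 seasons gives roughly ±0.24.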

I have definitely seen my bots lose or gain 20+ points from the 300 battles mark. -- PEZ

Dookious 0.72 barely broke 2K after 1,000 matches (2000.17) but ended up at 2003 after 1400 battles. -- Voidious

Well I made note of Ugluk v0.9.1c's first momentum reversal (from positive to negative) at 1750 after 91 battles. He's at 1751 after 1304 battles. 65.6 specialization index, which isn't low, so I'd expect more volatility. Certainly not exhaustive research, but perhaps for those of us not trying to escape the Earth's gravity with our scores a few hundred battles is good enough. -- Martin

Nice score, Mue! I posted the XML file if anyone wants to see all 3 seasons: [Ascendant_TC2K6-500.xml]. I'm running 25 seasons of FastLearning now. -- Voidious

Ascendant's FastLearning score is posted, and I posted the RoboLeague file: [Ascendant_TC2K6-35.xml]. -- Voidious

Thank you for running these battles. I guess I'd need about a day to get these results. Interesting that Shadow's fast learning score is higher than Ascendant's. Apart from that it's about the score I expected. --mue

Truly good score against Cigaret too. -- PEZ

Wow Void, that's amazing progress! Big congrats from Sweden! -- PEZ

Thanks! I'm very pleased to have cracked the 88-barrier. And the 2 new scores in front of me will keep me humble =) Back to the MovementLaboratory for me... -- Voidious

Thanks for running Cyanide, Voidious. That's what a bad implementation of CC's segmentation looks like. I guess the strength of the bot really is in the movement. -- Alcatraz

No problem, I like having a lot of the top tanks listed for reference. Yeah, your movement must be quite excellent... and fast (CPU-wise), too - tied with CC for fastest-executing WaveSurfer among the surfers I considered when working on the TC2K6 reference bot list. It is worth noting that most of those TC points are lost to the WaveSurfers. I noticed that you don't check if the gun is aimed before firing, which gave me a significant boost in RR and TC scores once I finally implemented it. -- Voidious
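The "check if the gun is aimed before firing" tip Voidious mentions is a one-line condition in practice. A hedged sketch (the tolerance value is an assumption; in a real Robocode AdvancedRobot you would compare `getGunTurnRemainingRadians()` against it before calling `setFire`):

```java
// Only fire once the remaining gun turn is small enough that the shot
// actually leaves at (close to) the intended angle. In a robot:
//   if (Math.abs(getGunTurnRemainingRadians()) < tolerance) setFire(power);
public class AimCheck {
    public static boolean gunIsAimed(double gunTurnRemainingRadians,
                                     double toleranceRadians) {
        return Math.abs(gunTurnRemainingRadians) < toleranceRadians;
    }
}
```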

I didn't expect to be the first one to crack the 90-barrier, and I'm not sure how lucky that one season was... but the more precise calculation of max escape angle seems like it might be worth it. We'll see what the RoboRumble rating says. -- Voidious

@Kev: You have a fine set of results there, a lot better than mine, but scoring better against Cigaret than against GrubbmGrb?? Are you sure the results are in the right order? -- GrubbmGait

I'm positive the results are in the correct order. Puzzle definitely does have an unusual set of scores; I'm trying to make the next version more well rounded. -- Kev

Voidious, are those DCBot scores with or without the gun heat dimension? Thanks. --Corbos

The only thing I did was set "stationary" to "true" in the DCBot that ABC posted. I'll take a look to see if there's a simple setting I could adjust in the source, and if there is, I'll re-run them with that enabled when I get a chance. I don't really know this system well enough to alter much code if it isn't something very simple... -- Voidious

Well, I don't see any setting about enabling the gunheat dimension, but I think this line means that it's enabled:

//the following line makes it score over 90%+ against Cigaret 1.31TC, but worse against others (wavesurfers in particular)
currentDistance += sqr((currentInfo.myGunHeat - info.myGunHeat)) * 100;
Does that answer your question? -- Voidious

Just inserted the comment above it, too, which I was blind enough to miss on my first copy/paste =) -- Voidious

That answers my question. Thanks a ton for running them. You've been the saint of the 2K6 challenges. I really appreciate it. The WaveSurfing/Tutorial was excellent as well. --Corbos

Yes, thanks for running it Voidious. Interesting result, I made a small change before releasing it, it should have scored a bit higher than that. I'll optimise it for the TC tomorrow. Basically, if you want it to score even higher against Cigaret and FloodMini, increase the topCount and angMax variables from 100 to 250 and leave that gunheat line like it is. If you want better performance against CC set them to 50 and comment out that line. If you want the best of both worlds, just make it a simple condition like this: if((double)bulletsHit / bulletsFired > .12) anti_cigaret; else anti_CC;. -- ABC

The filesizes of these (unzipped) data files are directly proportional to the number of segments in Dookious 0.98's gun that each bot visits (more than a couple times), recorded over a single season of the TC2k6 (500 rounds.)

14766 Apr 20 14:00 kawigi.sbf.FloodMini 1.4TC.g2
12193 Apr 20 14:00 gh.GrubbmGrb 1.1.3TC.g2
10833 Apr 20 14:00 pedersen.Butterfly 2.0.g2
10721 Apr 20 14:00 pez.rumble.CassiusClay 1.9996bdTC.g2
10483 Apr 20 14:00 wiki.etc.RandomMovementBot 1.0.g2
 9455 Apr 20 14:00 cx.mini.Cigaret 1.31TC.g2
 8582 Apr 20 14:00 stefw.Tigger 0.0.23TC.g2
 8306 Apr 20 14:00 cjm.Chalk 1.2TC.g2
 8134 Apr 20 14:00 dft.Cyanide 1.80.bTC.g2
 7357 Apr 20 14:00 davidalves.net.DuelistMicro 1.22TC.g2

I don't really know what this data implies, and there's clearly no direct correlation between segments visited and difficulty as a reference bot, but somehow it still seems interesting to me. In the 0.98 format, it should average to about 7 bytes per segment, of 13,500 total.

-- Voidious

Thanks for running ALi for me V. Seems that the overall score is quite similar to Lukious even if the individual scores differ significantly. And I have absolutely NO ideas on how to move Ali's gun anywhere near CC's. That is a bit depressing. -- PEZ

Funny, I have *lots* of ideas to try in a DynamicClustering system, but I just haven't found any that provide significant improvements enough to bring up to top-5 caliber. (Then again, maybe you've already tried a lot of these things that I have ideas about.) -- Voidious

Wow, did you notice that PowerHouse would probably be 88+ if its gun were better in the AntiSurfer department? It's higher than Shadow against the non-surfers in the TC, about the same as Pear, and not far behind the top 3. Nice! -- Voidious

Thanks, man. I think i'm going to devote some time to an antisurfer gun now... -- wcsv

I think you should test your gun in the RoboRumbleGunChallenge. It will give you an idea of how far your gun is taking you. And it will give you an idea of whether it is in targeting or movement that you have the most to gain. -- PEZ

I feel like I've been saying this a lot lately, but great work on WaveSerpent's gun, Kev. Keep it up! Er, actually, you should probably turn focus back to movement now. =) And yeah, the RRGunChallenge is a very good benchmark - I wouldn't mind seeing one for WaveSerpent, and an updated one for PowerHouse. -- Voidious

Actually PowerHouse's rrgc score is fairly recent, but I have made some advances in the gun department since. I'll post an rrgc version later tonight. EDIT: I got sidetracked with something else tonight, and I have to get up for work again in too few hours. I'll post an rrgc version tomorrow. --wcsv

Thanks, Voidious, I'm pretty pleased with the result myself. I'm surprised how well it did, considering WaveSerpent's gun really isn't much more than a bug-free version of Puzzle's, although there is a little tweaking. I've just released WaveSerpent/RRGC 0.43. It will be interesting to see how well it does... -- Kev

I do better than anyone else against the surfers (CC, Cyanide, Chalk, and Tigger), scoring 83.51 average against them. However, I can't seem to raise my score against Cigaret, no matter what I try. I was wondering if anyone had any tips on hitting Cigaret better. I'm especially interested in hearing from PEZ, his 87 against Cigaret is stunning. --David Alves

Well, the next best thing to hearing from PEZ is hearing what he has told me about the subject... And he says it's all about wall distance segmentation. No disrespect to PEZ intended, but that 87 might be a little higher than what CC would've gotten over, say, 50 seasons; individual bot scores are just not too accurate over 15 seasons of fast learning, although the overall TC score isn't too far off. Edit: Then again, it could also be a little lower than it would be over 50 seasons. =) -- Voidious
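For the curious, wall distance segmentation can be as simple as bucketing the enemy's distance to the nearest wall. This is one plausible illustration, not PEZ's actual segmentation; the segment count and the 200-unit cutoff are made-up parameters:

```java
// Bucket the enemy's raw distance to the closest wall into numSegments bins,
// saturating at maxRelevantDistance (beyond which walls barely constrain the
// enemy's escape angle).
public class WallSegment {
    public static int segment(double x, double y, double fieldW, double fieldH,
                              int numSegments, double maxRelevantDistance) {
        double wallDist = Math.min(Math.min(x, fieldW - x), Math.min(y, fieldH - y));
        int seg = (int) (wallDist / maxRelevantDistance * numSegments);
        return Math.min(seg, numSegments - 1);
    }
}
```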

Name Author Gun BFly CC Chk Cig Cya DM FM Grb RMB Tig Total Comment
Phoenix 0.61 David Alves GF 99.44 73.57 88.49 79.78 76.57 91.61 91.82 87.54 88.42 92.34 86.96 166 seasons, 0.088 (no joke!)
Phoenix 0.32 David Alves GF 99.54 74.36 89.08 77.75 77.71 89.36 91.70 87.71 87.75 92.90 86.79 35 seasons, 0.182

Every version from Phoenix 0.32 until (but not including) 0.6 uses the exact same gun. Because it was so good at hitting surfers, 0.6 (and 0.61) use it as an anti-surfer gun, with a different main gun used for non-surfers. My scores against the non-surfers (DuelistMicro, Cigaret, etc.) went up because I'm using this new gun. However my scores against surfers went down because Phoenix takes a while to realize that it's fighting a surfer and switch to the anti-surfer gun. --David Alves

Interesting that your PL score is as high as ever, too... -- Voidious

432 wins, 1 loss for 0.61. Hmm, gotta work on that one loss. ;-) --David Alves

That's an insane score Voidious! Wow, wow, wow. You've taken targeting to the next level. And Dookious 1.16 has 2100.99 with 866 battles. You're the man! -- PEZ

Thanks, PEZ, that means a heck of a lot coming from you. Much like the XBox 360, though, it's more of a very powerful evolution than a revolution. =)

This tip almost deserves a spoiler warning with how many TC and rating points it gained me; so for those who care about such things, there's my warning. But I sure don't mind sharing it. As you know, I've used a "more precise" calculation of GF 1 / -1 for a while - just precisely simulate moving at a right angle at the time of firing (which Math.asin(8 / bulletVelocity) roughly approximates), accounting for initial heading and velocity. It didn't account for walls at all - now it does. I figured WallSmoothing with a wall stick of 130 is pretty close to maximizing your escape angle along the walls. No tweaking at all yet, either - a different length stick could work better, and if the enemy would never quite reach the wall, it might be better to ditch the WallSmoothing in calculating GF1. Surprised and glad it worked so well. =)

-- Voidious
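The core of the idea above can be sketched in a few dozen lines. This is a deliberately simplified illustration, not Dookious's implementation: it only clamps the simulated enemy inside the field (18 units from each wall, a bot's half-width) instead of doing true WallSmoothing with a 130-unit stick, and it ignores the enemy's initial heading and velocity.

```java
// Simulate the enemy orbiting at full speed (8 units/tick) perpendicular to
// the gun, stop when the bullet "wave" catches it, and measure the angle it
// actually reached - instead of the open-field Math.asin(8 / bulletVelocity).
// Angles use Robocode's convention: atan2(dx, dy), 0 = straight up.
public class PreciseEscape {
    public static double maxEscapeAngle(double srcX, double srcY,
                                        double enemyX, double enemyY,
                                        double bulletVelocity, int direction,
                                        double fieldW, double fieldH) {
        double startAngle = Math.atan2(enemyX - srcX, enemyY - srcY);
        double x = enemyX, y = enemyY;
        double traveled = 0;
        while (traveled < Math.hypot(x - srcX, y - srcY)) {
            double angleToEnemy = Math.atan2(x - srcX, y - srcY);
            double moveAngle = angleToEnemy + direction * Math.PI / 2;
            // advance the enemy one tick, clamped inside the battlefield
            x = Math.max(18, Math.min(fieldW - 18, x + Math.sin(moveAngle) * 8));
            y = Math.max(18, Math.min(fieldH - 18, y + Math.cos(moveAngle) * 8));
            traveled += bulletVelocity; // advance the bullet one tick
        }
        double endAngle = Math.atan2(x - srcX, y - srcY);
        return Math.abs(normalRelative(endAngle - startAngle));
    }

    static double normalRelative(double a) {
        while (a > Math.PI) a -= 2 * Math.PI;
        while (a < -Math.PI) a += 2 * Math.PI;
        return a;
    }
}
```

In an open field this comes out a little below asin(8/11) ≈ 0.81 rad for power 3 bullets (the orbit spirals slightly outward), and noticeably smaller when the simulated path runs into a wall, which is exactly the effect being exploited.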

I've always assumed that my wall segmentation would take care of that. But of course that's breaking one of my main rules in life: "Never assume anything!" Thanks for sharing, I may very well take this opportunity to turn to gun development again. Movement just isn't as fun for a gun nut like me. -- PEZ

V, can you run the regular (old) TargetingChallenge (35-round matches) too? -- Curious George

Sure =) Will have 'em done sometime in the next couple days. -- Voidious

I'll be finished with 100 seasons of the old TC fast learning today - so far, they look the same or just less than your (PEZ's) highest score on that page. About the wall segmentation, I thought that too; I'm going to try reducing my wall distance segments now, too. Another factor in the fast learning and in the RoboRumble is that my fast-learning buffers don't have wall distance segments, so they might be more accurate and less contaminated by near-wall situations now. -- Voidious

That's pretty good for a pattern matcher, Simonton. I had to trim some features from Ugluk just to allow him to last 500 battles without running out of memory. I dunno how you manage it with a pattern matcher. -- Martin

The FoldedPatternMatcher that Corbos came up with seems significantly faster over long battles. It's used in Che, which is used in the MovementChallenge2K6 over 500 round battles. And indeed, that is a very good TC score for a PM gun! -- Voidious

Thank you both! I just updated with better scores - there were a couple bugs in it before. Hopefully not anymore! -- Simonton

Hmmm, strange, I got this result with my new gun and Komarious's segmentation (I'm not even sure if I translated the minicode correctly :/ ):
Garm 0.01 | Krabb | GF | 99.52 | 75.05 | 96.85 | 95.30 | 75.28 | 97.82 | 99.25 | 97.87 | 98.28 | 90.60 | 92.58 |

I don't believe this could be true... I get the following output (error):

Running season 1.
Next grouping: Garm 0.01 vs. Tigger 0.0.23TC (9 remaining).
Next grouping: Cigaret 1.31TC vs. Garm 0.01 (8 remaining).
Next grouping: Chalk 1.2TC vs. Garm 0.01 (7 remaining).
Next grouping: Cyanide 1.80.bTC vs. Garm 0.01 (6 remaining).
Next grouping: Garm 0.01 vs. RandomMovementBot 1.0 (5 remaining).
Next grouping: GrubbmGrb 1.1.3TC vs. Garm 0.01 (4 remaining).
Next grouping: FloodMini 1.4TC vs. Garm 0.01 (3 remaining).
Next grouping: Garm 0.01 vs. Butterfly 2.0 (2 remaining).
Next grouping: DuelistMicro 1.22TC vs. Garm 0.01 (1 remaining).
Next grouping: Garm 0.01 vs. CassiusClay 1.9996bdTC (0 remaining).
Season 1 completed.
Storing results...
[Fatal Error] :137:5: The element type "xsl:stylesheet" must be terminated by the matching end-tag "</xsl:stylesheet>".
ERROR:  'The element type "xsl:stylesheet" must be terminated by the matching end-tag "</xsl:stylesheet>".'
FATAL ERROR:  'The stylesheet could not be compiled.'
javax.xml.transform.TransformerConfigurationException: The stylesheet could not be compiled.
	at com.sun.org.apache.xalan.internal.xsltc.trax.TransformerFactoryImpl.newTemplates(Unknown Source)
	at com.sun.org.apache.xalan.internal.xsltc.trax.TransformerFactoryImpl.newTransformer(Unknown Source)
	at roboleague.XmlIO.getTransformer(XmlIO.java:203)
	at roboleague.XmlIO.getXSLTTransformer(XmlIO.java:267)
	at roboleague.XmlIO.getSeasonHtmlTransformer(XmlIO.java:303)
	at roboleague.XmlIO.htmlTransformDataModel(XmlIO.java:1619)
	at roboleague.gui.RoboLeagueGui.bg_runSeasons(RoboLeagueGui.java:679)
	at roboleague.gui.RoboLeagueGui.access$700(RoboLeagueGui.java:21)
	at roboleague.gui.RoboLeagueGui$5.run(RoboLeagueGui.java:807)
Executed 5000 battles in 00:49:05, average: 0,59 seconds per battle.
Finished (0 warnings).
Could there be any connection with the "strange" results? --Krabb

Well, that was just a bug in RoboLeague's XML generation, so I'd bet it's just an issue with RoboLeague. Although it could have come from something ill-formed in Robocode's output, I guess, I don't know... As for the extra high scores, were you possibly not using power 3 bullets? What version of Robocode was this? -- Voidious

Ahh damn, it's been so long since I ran a TC season -.- I forgot the power 3 bullets.
These results are realistic:
Garm 0.02 Krabb GF 96.39 62.28 85.77 84.30 69.22 92.96 85.42 87.20 86.35 80.15 83.00


Nice results, at least better than mine. If you get your WaveSurfing working soon, I'll have a tough job beating you for The2000Club. -- GrubbmGait

Yes, I'm quite happy with the gun, but the segmentation is just stolen from Voidious. There is still some work to do :) --Krabb

Okay, velshea is far from being the best in this challenge (or the old one for that matter), but I am testing a virtual-gun array in velshea. Currently it gets about 65% vs CC, but its gun-picker is kinda dumb and ends up causing more problems than solutions. As it is now the gun isn't bad, but I wouldn't mind making it better; I'm addicted to making guns (or at least GF ones). =) -- Chase-san

Nice work Krabb, your development version beat mine by 0.01 points; I had an entire day to enjoy my score, though. (If my anti-surf gun turns out good, I might have a secret weapon here soon to bolster my bot's new score.) -- Chase-san

Thanks, it was just the first result :) The average is 85.5, but let's see how my next update turns out! --Krabb

That's okay ;). Even if you do beat my latest dev version (it beats yours by 1.11 currently), I welcome the competition =). -- Chase-san

Last edited December 7, 2006 14:07 EST by svr2.pace.k12.mi.us (diff)