RoboRumble/OldRankingChat 031210


Speaking of such a league, it occurs to me that F-Micro is doing superbly so far considering he doesn't save data and such. Of course, some robots are just not made for 1000-round battles. Sedan, FloodMini, FloodHT, FloodNano, Fhqwhgads and FhqwhgadsMicro would do fine in such a league (probably do some pretty impressive work, actually), but FunkyChicken, FloodMicro and Teancum aren't intended for that sort of battle - Teancum would probably allocate more memory than he deserves, and FunkyChicken and FloodMicro would slow to a crawl by 250 rounds in their current versions. -- Kawigi

A sensible idea would be one bot per coder, possibly a special version (more segments, you name it), then a round-robin tournament. -- Frakir

Why round-robin? If all pairings are fought it shouldn't matter how you select them, should it? We would need to ban some bots though, those PM nanos come to mind. -- PEZ

If you run all pairings for 1000-round battles it can take as long as 120 days!!! -- Albert

Well, round-robin meaning all pairings are fought... the alternative being a shorter, Swiss tourney (this is a term taken from chess, not sure if most are familiar with it) where bots don't play everyone, but only several bots in their rating neighbourhood, and it takes a predefined number of rounds. In Swiss pairings some luck can be involved though... -- Frakir

120 days is actually quite impressive performance! That means that after the initial run an updated/new bot would take less than a day to get all its fights done. And that is with 200 bots; with more strict entry rules we would probably have around 50 entrants. And of course, with an ELO-based ranking and "smart pairings", we would have meaningful results much sooner... ;) -- ABC

I thought round-robin indicated a method by which all pairings are fought... Swiss tourney, that's what is to be used for the MiniBotChallenge if I remember correctly, and I think it's something like that used in RobocodeLittleLeague as well. There's a huge luck factor there. Well, anyway, 120 days is a long time! I wouldn't want the computing power of the regular RR@H gone for that long. -- PEZ

RLL pairings are completely random and depend on the S-Curve to eliminate some of the luck factor. If we do a 1000-round league, I think we should make it a different division with its own page to add entrants to. People know if their bots can perform reasonably in that length of match. Of course, the variety may be reduced by a little bit. -- Kawigi

Well, what if we had a separate participants page, with say, only 200-round battles? With a highly reduced number of participants (those with either good bots or bots in production), this would take at most a week to complete. I wouldn't bother submitting bots that I'm no longer working on, as would a lot of other people; I would only submit the newest version of the bot I'd be working on.

In any case, with the number of battles that are being fought, I suggest that we increase the number of rounds per battle in the regular league to at least 50. Right now the default is only 35; while it was an appropriate number before, now that we have enough clients, we have the power to bring it up to 50 or even 100. Who votes we increase the round count? -- Vuen

My bots don't save data, so I would be very happy with any increase of the round number ;). PEZ, the fact is that we are already wasting the computing power of the RR@H. It has been running the same pairings over and over for some time now, taking care of updates in a matter of hours. If we wanted a definitive 35-round "from scratch" ranking we would already have run many times the number of battles needed. Right now what we have is a setup where bots that can save data on 200 opponents have an always increasing advantage and others depend on the luck factor of having data for the right bot at the right time (in the right client). Of course those without data saving can only go down. I have even stopped running the client so often because I believe my contribution just isn't necessary... -- ABC

ABC, I agree with your observations. I have even been asked about the timing of a release of my next bot so that another could release their new bot after mine. The stated reason was for a data-saving advantage and the resultant boost in their rating. More and more a bot's placement in the RR@H ladder is about the data it comes pre-loaded with. Unfortunately I am drawn to the same arms race though, so I should not cast stones. I am re-writing Jekyl with the hope of one day releasing it without pre-saved data. I have always thought that bots that can reach the top with no data pre-loaded are the most impressive. That you can do it without any data and without saving at all is even more impressive. -- jim

But these are the terms of the RR@H, you'll have to work with them. DT and Lacrimas (and maybe Neo?) prove you don't need the pre-loaded data advantage to rank high. And Jim, when I asked you about the expected release of Jekyl I was half joking. ABC's bots are a bit of a special case since they don't save data at all. A separate league where saved data isn't used would be highly interesting. I would be glad if you switched on your client again ABC, even if it's not strictly needed; one of the qualities of RR@H is that it keeps running those pairings over and over again, eliminating the luck factor more each time. I have often seen my bots have too low or too high a %share against some bots just because they have only fought 3 or 4 pairings. Think PL, it needs those extra pairings. -- PEZ

Just a couple of observations: (1) It is not true that once a bot has all its pairings fought, running more battles is a waste of time. The %score "averages" all battles fought against an enemy, so the higher the number of battles against it, the better (also, if some bot was unlucky, it gives it the opportunity to correct the results). (2) We are speaking as if we had unlimited computing power, but that is not true. Maybe it will increase in the future when we publish RR@H in new forums, but don't expect something spectacular. The main advantage of RR@H is that there are 2 or 3 clients running at any moment, so when you update a bot its rating is recalculated easily, and what is best, you don't depend on a single person to run it. We run about 5000-15000 battles per day (175000-525000 rounds a day) and this limit should be kept in mind when thinking about new leagues or rule changes. -- Albert

PEZ, I figured you were half joking (that's why I did not mention who it was) but the fact that it is a valid strategy is what I find interesting. I am on the fence about data storage. On the one hand I find it incredible that a person can compress their data requirements for 200 or so bots to under 1KB each and have a gun which is fairly solid. I find myself hard pressed not to applaud these people. On the other hand, it seems an unfair advantage given that the distributed nature of RR@H almost guarantees that your opponent will not learn about you from match to match. I fear that we will start seeing data acquisition as a primary reason for a new bot version. In the end I often come back to the old adage: "If you find yourself in a fair fight then your planning was bad." Sometimes it does not pay to fight with honor =^> -- jim

500000 rounds a day is already spectacular, imo, and is more than enough for a couple of additional competitions. We lived with much less than that for so long; the ER ran what, 50 pairings*10 rounds = 500 rounds per week average? That's 1/1000 per week of what RR@H does per day! -- ABC

It has been a valid strategy in any competition so far. I used it in the ER very much. I think it's cool that this kind of trade-off exists. And it will push for more full strategies like those Frakir is currently exploring. In any case, before we go forward adding functionality to the RR@H we will need to refactor the current prototype. Issues like using a database and/or XML for the data and an MVC-ish design need to be addressed. Else we will find RR@H getting quite unmaintainable quite soon. -- PEZ

I agree, but nothing like trying to expand it to become aware of its structural limitations. Did I mention I miss a melee competition? :-) -- ABC

You just have to look at the code to be aware of its limitations. It is the worst programmed application I have ever seen :-) -- Albert

Edit conflict :(. I do also miss a melee competition; many of the top bots in 1v1 don't even do melee, and some of my bots (namely Wisp and Cake) would do much better in such a competition because melee is what Cake is best at and Wisp was designed for melee. Many of the bots in the 1v1 competition are highly underestimated, because they would perform much better in a melee.

Anyway, the way the rumble is set up now, bots replay each other more and more often. A mere 12 hours ago, the new Mushashi had not yet played Fractal; last I checked it already had 6 battles against it. While this was probably spread out across clients, across a longer period of time recurring battles are more and more likely. Saving data can become a good strategy. Of course, if we increase the number of rounds per battle, this is less likely, but like ABC none of my bots save data so I still vote to increase the round count :D.

-- Vuen

Wow, it never occurred to me that so many of the current top 1-on-1 bots had a complete lack of melee strategy. You actually have to go down most of the top 50 to find 10 strong melee bots. -- Kawigi

I want to try some melee ideas for Shadow, but there is currently no incentive to do it... -- ABC

Aw, you should work on them anyway; you've always been the god of melee ABC. At least, make your ideas extremely good, so when we do make the RR@H for melee your bots will be sitting on top! -- Vuen

Albert, could you outline what needs to be done in order to create a /Melee rumble? On the /Melee page please. -- PEZ

I must poke a little at a sore wound of some of yours. Watching the new Jekyl: after 200+ battles it had 1855 rating points and now after 548 battles it has 1828 points. It doesn't seem to me that 1855 was an all too accurate guess. We still had to wait some 200+ battles before its rating stabilized (if, indeed, it has), at which point the raw %score strength measure is quite accurate. -- PEZ

I observed that, too. Note MrHyde had a quite accurate rating after 200 games. I suspect this is what happened: since most of Jekyl's matches were presumably done on jim's 2 machines, learning for other bots was quick. Now, when other machines kick in, Jekyl's rating might go considerably up. This is a bad side effect of prelearned files; not only the 'overrated' part, but also that a bot's rating is rather unstable over the long run (and it is hard to predict its 'real' value). -- Frakir

Can you see my point about (pre)learning vs big battles? ;) -- ABC

Of course, it could be solved by sending the data files between clients... hard work, but the perfect solution. -- Tango

When I ready a bot for release the first thing I do is wipe out my .robotcache and robot database and restart Robocode to rebuild it. This makes sure that every bot starts with a fresh data dir. I wipe out any data that Jekyl may have learned in development as well. Then I run a RoboLeague division of the whole RR@H list with Jekyl as the focused competitor. This first season is 100 rounds and takes about 4-5 hours. This serves as a baseline for learning for all bots. I would like to be able to push that to 500 rounds but I am far too impatient for that. After this first season, Jekyl/MrHyde placed #2 overall in the division. There were only 6-7 bots that actually beat him 1-v-1. After this season, I ran 5 or 6 more seasons of 35 rounds each just to validate and collect more data. Again Jekyl only lost to 5-6 bots, most of them top of the table guys. With 35 round matches anything can happen, as PEZ is so fond of pointing out =^>. After all this was over, the list of bots that could outright beat Jekyl 1-v-1 in raw score included: SandboxDT, Lacrimas, Neo, VertiLeach, HypoLeach, and BlestPain. Bots that occasionally beat Jekyl included Shadow, CigaretBH, Sedan, and Griffon. Some others could get close on occasion but not consistently. I thought the 1855 rating was a bit high. I got an early match vs. SandboxDT where I beat him, which I think pushed me up some. My goal all along was to try and get to the head of the 182x list of bots. That's where I thought it would end up. It will be interesting to see if it can stay there as others gain some knowledge about him now. In conducting some long tests against the top 11 bots (minus VertiLeach) Jekyl consistently placed #4 on the list in raw score. These tests consisted of a focused group of just the top 11. At this point all bots started with the data from the first run of testing. I ran three 500-round seasons not looking at the results. I then ran 10 seasons of 35 rounds each to determine that Jekyl was around 4 on the list. The range of finishes was from 6 to 2, with 4 occurring most frequently. In the end I think Jekyl is right about where he belongs. I originally told PEZ I thought it would end up at about 6th +/- 2. I think the 1855 rating was more about the uncertainty around 35 round matches than anything else. Longer battles might settle the rankings more quickly as you would establish a more accurate rating for the pairing in question. Or you could just wait a few days before you actually know where it is that you belong. I know I have not decided to think of Jekyl as #4 yet. I only think he is top 10 material. I have to wait through the weekend to truly know where he belongs. -- jim

Wow... it is a lot of testing... :) I wish I had that much persistence for it. All I ever do is run roboleague for 14 opponents with 1000 rounds when I go to sleep. About your 'slow learning gun': I don't believe in slow learning guns :) Once I thought using a simple pattern matcher in the first few rounds was better, but then I started experimenting with data analysis. Look up the Lacrimas source for one cheap way to converge your stats 'fast'. There are better ways than that, but it is simple and short, and might work for you as well. -- Frakir

(edit conflict) Wow, that is some extensive testing. I've been thinking about running a top 10/20 500/1000 round roboleague myself, but my home pc has been having some overheating problems. I'm pretty sure that Jekyl is top 10 material (top 5 actually), my guess is it will ping-pong with BlestPain in 4th-5th place. -- ABC

Hey, that's Verti's role (to ping-pong with BlestPain). =) Jekyl and MrHyde have stirred up things for VL though and I'm not sure it will recover. Note that Jim's procedure is not only testing. It's training too. Which makes it more worth the effort I guess. Note also that you don't need to wipe the .robotcache and restart Robocode to get rid of any data saved by the bots. Just "touch" all .jar files and press F5 in the "New Battle" dialog. It's safer, if nothing else.

I don't see how the enemies learning on Jim's machine could make Jekyl's rating too highly estimated? My point is that ELO isn't much better than raw %score in getting an accurate rating fast. 35 rounds is perfect I think. It forces you to learn fast and gives megabots the fair advantage of the ability to save data. Yes, some minis (like mine) save data too, but at the expense of something else. Which is why I like it the way it is. It provides more trade-off situations.

-- PEZ

You keep separating ELO from percent score, which messed with my head. ELO IS raw percent score:

ELO=-800*log(percent_score)/log(20);

Just like in geography and for some graphs, it is better to re-scale it in log scale. So if B beats A by 50 ELO and C beats B by 30 ELO, then C should be better than A by 50+30 ELO. In percentages it will not work... -- Frakir

And I'm not interested in deducing the theoretical (and seldom very accurate) outcome of a fight between bot C and A in that way. I rather look at the raw %score of the actual fight between A and C. Logarithms made me fluke a major math test in school by the way. You just brought back that horrible memory. =) I'm at home in linear space. -- PEZ

Sorry for being a pest, but % are hardly linear space here. 50-51% and 70-71% both mean 1% difference but they are not the same at all. Long live linear ELO! :) -- Frakir

Got me there. I really don't understand this ELO thingy. I fully understand though that two bots with strengths 20% and 25% are about just as lousy, but that between 70% and 75% there's a huge leap. I don't need ELO to obfuscate that for me. -- PEZ
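
For reference, here is a minimal sketch of the score-to-rating relation discussed above, assuming the expected %score curve is 1 / (1 + 20^(-diff/800)); the constants 20 and 800 are my assumption and are not confirmed in this discussion.

    // Minimal sketch, assuming expected %score = 1 / (1 + 20^(-diff/800)).
    public class EloSketch {
        // Rating difference implied by a pairwise %score (0 < score < 1).
        static double ratingDiff(double score) {
            return 800 * Math.log(score / (1 - score)) / Math.log(20);
        }

        // Expected %score implied by a rating difference.
        static double expectedScore(double diff) {
            return 1 / (1 + Math.pow(20, -diff / 800));
        }

        public static void main(String[] args) {
            System.out.println(ratingDiff(0.51)); // about +11
            System.out.println(ratingDiff(0.71)); // about +239
            System.out.println(ratingDiff(0.99)); // about +1227
        }
    }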

Like PEZ said, that is testing and training. The 100 round season I run overnight. The 35 round seasons I run during the day. I re-nice the java process to consume 90% of my CPU. I have a 3.2GHz PIV with 2 GB of RAM so it clips along pretty quickly. I think the rounds average about 0.4 seconds per round or some such. Frakir, do you use AOL IM or some other chat program? I would be interested in speaking with you about 'converging' my stats. If this truly is as powerful as all that I may have a chance. I do not care who does it but I would soooooooo like to see a 1-v-1 page with SandboxDT in the #2 position. Here's hoping. -- jim

Aw crappy, the new Fractal dropped almost 10 places from the old version! I don't quite get what's wrong with it, but its specialization index is through the roof; almost the entire PBI column is colored. I'm not quite sure what to do with it now. I think it's that movement bug that got inadvertently added to it, but I have no clue what caused it; it happens often though, because it often crashes into walls or gets jammed sideways in them. I do have the source to 0.32 still, and I can try restoring the movement from that, but even if I do that I don't have much of a choice now besides finishing off the basics of the gun, and then scrapping its movement. -- Vuen

Yeah, remove all movement and use the TargetingChallenge to measure the gun. Then when you like the results plug in movement again. I constantly make the mistake of working with both targeting and movement at the same time. -- PEZ

Wow. I made some changes, and it dropped another 10 places. Lovely. I'm impressed by MoOOo's ranking though; 28 seems pretty high. Doesn't compare to PEZ or Kawigi or Iiley's guns, but at least it's a step in the right direction :D -- Vuen

Wow! Quest regained the first place in the PremierLeague! -- Albert

Grr, I really want to update my bot. I have a new version ready that performs much better in the targeting challenge and should be able to improve its ranking. In this site update they're talking about, is that the whole tribes thing, or is it the robot versioning they have? If the latter, the RR@H client may need some changes to download bots... -- Vuen

Upload your bot anywhere and post the link here. People can download it the old-fashioned way. -- PEZ

Nah, I don't want to make it too much hassle for people to try out my new version. I've improved it some since I wrote that last comment anyway. I'll survive on testing in the TargetingChallenge :D -- Vuen

It's quite interesting (and pleasing) to see that VertiLeach is back fighting for the #4 slot with BlestPain. Jekyl / Hyde stirred things up quite badly and it took Verti several weeks to recover. If someone happens to look when (if) Verti grabs the 4th place for a while; take a screenshot of it for me in case I miss the window of opportunity. =) -- PEZ

FYI. Whatever bug it is that is plaguing GloomyDark's gun, I'm pretty sure VertiLeach shares it. So watch out for Verti when I manage to nail that bug! -- PEZ

Well, VertiLeach is back at #4 and has been for a few days. Jekyl continues his freefall but seems to have settled at the bottom range of where I expected him to finish (#6 +/-2). I have some new movement ideas but I seem to have lost all motivation for doing any robocoding. PEZ, I caution you about fixing any gun bugs. When I finally nailed down Jekyl's gun bug, my rating against several opponents dropped significantly, as the bug seemed to optimize my gun for those opponents. I even managed to rationalize the results as this: when facing top movement bots, which are attempting to logically balance their movement against a logically operating gun, they are not able to cope with a gun that operates in an ill-conceived fashion. Specifically, I could routinely get 85%+ against Cigaret in the TargetingChallenge with a calculation for direction that was non-deterministic (ie: I was essentially guessing which direction he was moving at shot time). -- jim

This fruitless bug-hunt of mine is wearing on my motivation as well... Verti was in #6 when I checked yesterday and #5 when I wrote the above. Now it is in #4 with some small margin. Nice!! Can you expand some on the non-deterministic thingy? It sounds like it could be a thing I should check if I am doing... -- PEZ

Essentially my calculation of "which direction is my enemy going" at shot time had a flaw that would return different results than what it should have. This meant that when I "laid out" my stat buffer I was sometimes placing +ve results into -ve buckets and vice versa. It was not happening all the time, but it was happening enough to make a difference. In the TargetingChallenge, I was getting results in the 85% range for Cigaret but in the 60% range for SandboxDT. It was frustrating, as Jekyl's gun was essentially the same as FloodMini's (in design and concept, not code). When I finally found it (with Kawigi's help of course) my results vs. several bots shot up dramatically. My results against the top bots degraded some. I actually wrestled with putting the error back in. But as Paul has pointed out (for those that have listened), performance against all bots is more important in the rumble than performance against a select few bots, so I left the fix in. The results of the change are what pushed Jekyl to its current position. As a point of reference, the major difference between Jekyl (around #18 at the time) and Griffon (around #6 at the time) was that each had a different gun. I improved the gun and Jekyl shot up. Based on their relative position and performance to date, I would say that the gun in Jekyl is still lacking in some key area. It may be time to redesign Jekyl from the ground up. -- jim
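
For illustration, a hypothetical sketch of the sign convention jim describes; this is not Jekyl's actual code, and BINS and the method names are made up.

    class DirectionSketch {
        static final int BINS = 31;

        // +1 if the enemy's lateral velocity relative to us is positive at fire time, -1 otherwise.
        static int lateralDirection(double enemyHeading, double absBearing, double enemyVelocity) {
            return Math.sin(enemyHeading - absBearing) * enemyVelocity >= 0 ? 1 : -1;
        }

        // When the wave hits, the offset angle is folded into a symmetric bucket;
        // a wrong direction sign here puts +ve results into -ve buckets and vice versa.
        static int bucket(int direction, double offsetAngle, double maxEscapeAngle) {
            double guessFactor = direction * offsetAngle / maxEscapeAngle;
            return (int) Math.round((guessFactor + 1) / 2 * (BINS - 1));
        }
    }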

It should be noted that the fix also mysteriously hindered Jekyl's performance against the Flood bots, for which Jekyl has been a significant problem for some time, since before he was in the top 10. I may finally be passing up DT as "the bot that requires buggy aim to be hit" (FloodHT 0.7 had a really funny targeting bug in one of its guns that ended up getting selected against SandboxDT when it kicked in). -- Kawigi

I guess that's a big advantage of VirtualGuns. Right now I have a huge bug in GD's gun though, it is definitely not worth keeping there. =) -- PEZ

All, FYI - DT does not use Robocode physics to calculate hits or not - it simply assumes a circle of diameter 50 for the enemy bot and ignores any real hits - it's much easier to calculate angles this way. It also only checks as the wave passes the center of the enemy so that an accurate angle is recorded. -- Paul Evans
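
For those unfamiliar with the technique, a minimal sketch of that kind of wave check might look like this; the field names are assumptions, not DT's code.

    import java.awt.geom.Point2D;

    // The hit angle is only recorded on the tick when the wave front reaches the enemy's center.
    class Wave {
        double originX, originY, bulletPower;
        long fireTime;

        boolean hasPassedCenter(double enemyX, double enemyY, long time) {
            double bulletSpeed = 20 - 3 * bulletPower;  // standard Robocode bullet speed
            double travelled = (time - fireTime) * bulletSpeed;
            return travelled >= Point2D.distance(originX, originY, enemyX, enemyY);
        }
    }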

Released Fractal 0.55. Upon recording a wave hit it does a gaussian smooth at the bot's center with a deviation of half the bot's width. Interestingly enough, it immediately improved its performance against many bots. It's now in the rumble, along with MoOOo. Hopefully its ranking will improve over past versions... -- Vuen
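
A rough sketch of that kind of bin smoothing, assuming a guess-factor stat array; BINS and the width-to-bins conversion are placeholders, not Fractal's actual code.

    class SmoothingSketch {
        static final int BINS = 31;
        double[] stats = new double[BINS];

        // Instead of incrementing only the hit bin, spread a gaussian around it,
        // with a standard deviation of half the bot's width (expressed in bins).
        void registerHit(int hitBin, double botWidthInBins) {
            double sigma = botWidthInBins / 2;
            for (int bin = 0; bin < BINS; bin++) {
                double d = bin - hitBin;
                stats[bin] += Math.exp(-(d * d) / (2 * sigma * sigma));
            }
        }
    }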

Put the gun in MoOOo, man. -- PEZ

The new DT should put some more clear air between the rankings - I'm hoping it will settle at around 1920 - but only time will tell. For all those tinkering with targeting, note that this improvement has been made through movement changes, not targeting (and you can see from the TargetingChallenge/ResultsFastLearning - DT is not a good targeter in this competition format - yet). -- Paul Evans.

I could not believe you had the guts to predict 1900+. Then you went out and actually did it. 1910 and still trending upwards as I type. Catching you will be even tougher now. Very frustrating, but you of course know this :) For what it's worth, I have been sold on targeting being over-rated since Jekyl made it all the way to the top 15 or so with a significant bug in its targeting. I'll tell you what: as I am ahead of you in the quick learning results I will trade you my vast and "impressive" targeting solutions for your movement solutions. Seems fair to me =^> (PS: I have no targeting solutions. I barely beat you with 4 simple segments. I am willing to bet that you are using many more segments than that and are still very close. Underselling your gun does not hold with me, although I do agree that movement seems to be the most important aspect of Robocode at the moment) -- jim

Let's speculate. DT saves its data for a single opponent in about 220 compressed bytes. That byte data doesn't compress too well, so say 250 bytes uncompressed (one may actually get the exact number by uncompressing DT's data file). Take away the opponent's name and possibly some extra things like melee/1on1 etc. and you are left with 210 bytes or so. If he does it in a way similar to mine (one bucket compressed to 1 byte, hard to get it beyond that) it means 210 different buckets. Take basic ones: [wall:2] [acceleration:3] [distance: guess is 8-9 minimum]. So 210/(2*3*9) leaves room for [4] or [2][2]. Since DT has at least 2 guns, you are left with something like [2] or maybe [3], actually. Doesn't seem like it is any more than your gun... -- Frakir

I seem to recall reading somewhere that Paul uses 13 distance ranges. Not sure how that affects your calculations :) I would actually guess that his additional guns use a subset of the data that his "best" gun uses. I infer this from something he said about Griffon (I paraphrase: only his most segmented gun seemed to be able to deal with it, and that did not kick in for 35+ rounds). Maybe he aggregates the more segmented data into another buffer without the added segments (one array [2][3][13] and one array [3][13] where all of the [2] segment is recorded as well). Not sure, but the truth of it is, I think many of us are close enough in gunnery to put it aside for a bit and try to close the gap in movement. -- jim
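
A hypothetical sketch of the "fold into a coarser buffer" idea jim speculates about; the segment counts and names are invented for illustration and are not DT's actual layout.

    class StatBuffers {
        static final int BINS = 31;
        double[][][] fine = new double[3][13][BINS]; // [acceleration][distance][guess factor]
        double[][] coarse = new double[13][BINS];    // [distance][guess factor] only

        // Every visit updates both buffers: the fine one is accurate but slow to fill,
        // the coarse one aggregates over acceleration and is usable much earlier.
        void registerVisit(int accelSegment, int distSegment, int bin) {
            fine[accelSegment][distSegment][bin]++;
            coarse[distSegment][bin]++;
        }
    }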

Which I think I told you some while ago. =) However, we shouldn't be too quick to draw conclusions from the outcome of the targeting challenge. DT is close enough to the other competitors that it could well be the best gunner against the full population of bots in the rumble. I am pretty sure DT has the best movement for the RR@H format. But it could also be the best targeter for all we know. -- PEZ

Looks like a rating of 1920 was a little optimistic :( - In your sizing information you forgot to allow for the number of bins kept - DT's storage on an enemy is 32 bins (guess factors), 13 distances (actually flight times) and 12 segments/guns for 1v1 + separate gun statistics - altogether around 5500 doubles, but reduced to 2.5 Kbytes per bot after compression. -- Paul

Actually MoOOo uses exactly the same engine as Fractal, but overrides onScannedRobot where Cigaret's movement code is basically pasted in. No code is duplicated between the bots, and so any changes I make to Fractal also change MoOOo (as well as FractalTC, FractalMC, and any other mod versions I decide to make). Interestingly enough, the new version of MoOOo dropped another 4 places. Fractal went up, but only because I fixed that very large movement bug that had been haunting it since 0.32. This actually means that it has been 5 consecutive updates in which the performance of Fractal's gun has dropped (0.3, 0.32, 0.45, 0.48, and now 0.55), and with every single one I expected it to improve. Looks like targeting is not my thing. Maybe I'll look into an algorithm for deciding firepower; power 3 bullets just do not work : ) -- Vuen

Cool, I just tested it, my sandboxkiller flag works again against DT 2.22, maybe I should put a if (e.getName().equals("pe.SandboxDT")) mirrorMovement = true; line in Shadow and maybe climb a few positions :) -- ABC

I came to the same conclusion yesterday that DT is poor against mirrorers, which is why PrairieWolf does so well. I may have to put in an AM gun if I keep improving movement and not my gun. I've just coded the CribSheet concept in the development version; if necessary DT can build and keep a CribSheet on a couple of thousand bots. I'll have another look at the movement to see if it can be tuned further. - Paul Evans.

Will you consider releasing a trained DT then? -- CuriousGeorge

Good work with Shadow, ABC. As always VertiLeach doesn't cope with overtakers very well. But it will be back. =) -- PEZ

It's tempting to release a pre-trained DT but I don't think it will be necessary. The CribSheet collection will be built up during the competition. -- Paul

Perhaps you don't need an AM gun if you can detect when you are being mirrored and then maybe adjust your movement. Earlier versions of VertiLeach did this with some success. Detecting if you're mirrored is quite easy I think:

    private boolean isMirrored() {
        // My position reflected through the field center; is the enemy within 50 pixels of it?
        Point2D mirror = new Point2D.Double(
            2 * fieldRectangle.getCenterX() - robotLocation.getX(),
            2 * fieldRectangle.getCenterY() - robotLocation.getY());
        return mirror.distance(enemyLocation) < 50;
    }
Though an AM gun to kick in here would be very effective of course. It should be quite simple to do such a gun as well since you are using a plan for your movement.

-- PEZ

Detecting when DT's in a mirroring situation is not my main problem - the main problem is knowing where I'll be in the future. My movement is so random even I don't know what's happening next! Looks like DT 2.31 is going well but it's early days yet :) -- Paul

I didn't think it was your main problem. Then I would never have offered help. =) Since knowing where you're going to be can be tricky, maybe you could first try adjusting your movement to defeat mirrorers? When working with the ChaoticMovement of Mako I could look forward into the chaos by using the AngleGenerator pattern. Since the pendulum of my generator moved as a function of time this was quite easy. But much the same trick can be done with RandomMovement as well. I have worked with movement plans (which I thought DT used as well) and then I often know about where I am heading. Even if the plan can be altered, of course, it could be enough of an edge for the AM gun. I have also worked with pregenerating my random numbers so that I can simulate my own movement quite predictably. I used this for an AM gun that worked but that I later trashed when I abandoned VirtualGuns for the bot I was working with. The AM gun was quite deadly against mirrorers though. -- PEZ
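
A hypothetical sketch of such an AM gun: if you can predict your own future position (from a movement plan or pre-generated random numbers), a mirrorer should sit near its reflection through the field center. All names here are assumptions.

    import java.awt.geom.Point2D;
    import java.awt.geom.Rectangle2D;

    class MirrorGunSketch {
        // Aim at the reflection of my own predicted position through the battlefield center.
        static Point2D.Double mirrorAimPoint(Point2D.Double myPredictedLocation, Rectangle2D field) {
            return new Point2D.Double(
                2 * field.getCenterX() - myPredictedLocation.getX(),
                2 * field.getCenterY() - myPredictedLocation.getY());
        }
    }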

You could detect the mirroring and then go into ultra-conservative power saving mode in an attempt to outlast them. It is not like they will get any more hits on you than you will on them. (Bully me for having the stones to suggest something be put into SandboxDT) -- jim

DT 2.31 currently has over 1930 ranking points, amazing! -- ABC

Congratulations, Paul, just as everyone was thinking the community was slowly catching up, you come and jump 10s of points... :-) -- Tango

Was it still movement only changes? -- PEZ

This last jump of 20 or so rating points looks like it was mainly a side effect of the crib sheet code - it has brought up the performance of DT against the lower ranking bots - however it looks like it is because of the side effect that a 6-way segmented gun is now available from round 2 onwards - against good bots this is usually neutral at best - against poorer bots it seems to work very well. -- Paul

It really pays off to trash the lower ranked bots! Looking at the table that tells me the most, this is very evident:

Lacrimas is king of the /PremierLeague though.

-- PEZ

Holy crap... 1930... Will we ever catch up? Anyway it's kind of funny how SandboxDT loses to mirrorers again; should this discussion on AM guns (especially the code snippet) be moved to an AM targeting page? Also, hey Paul, this 6-way segmented gun of yours, if you don't mind me asking :D... Are any of these segments based on your own movement? I had an axis in the development version of Fractal that segmented on whether or not Fractal and the enemy were moving in the same direction. I also had an axis, which I accidentally left in Fractal 0.48, that segmented on how long Fractal had been moving in the same direction (although this one seems kind of useless without the direction one). I don't know if they improved its performance at all, and I swapped it out in Fractal 0.55 to try to keep the segmentation depth down (I think 0.55 is only on 5 dimensions). I was just wondering if anyone else had any similar segmentation axes, and whether or not they're worth using. -- Vuen

The ranking-goes-up-when-I-trash-weak-enemies effect is one of the most perverse weaknesses of the current ranking system. There is no glory in smashing a bot that was badly beaten before. That's why I prefer a win/lose ranking system. -- Albert

Hmm, I disagree... Bots that prove themselves worthy against the more powerful bots should be expected to stomp all over the weaker bots. I'm less impressed by bots that beat all bots by a small margin than those who lose to some top bots but can trash the lower bots. It's more challenging to write a bot that can perform exceptionally well against such a wide variety of bots because there are so many different strategies and movement modes and targeting modes that your bot has to recognize and overcome. -- Vuen

I like the current system. A good bot should be equally good (relative to enemy ranking) against top bots and poor bots (ie. should have a very low specialisation index). -- Tango

Though if you are equally bad against top and poor bots you'll get the low specialisation index too. =) -- PEZ

