RoboRumble/RankingChat


Archived Chat 2/24/2007


I read that link from the rankings page, but how exactly is scoring done in the rumble? -- Simonton

How long does it take for a newly-uploaded robot to appear in the rankings? I just uploaded my first and now I'm all excited! -- djdjdj

It depends on how many people are running their RoboRumble clients and how many other bots are in need of battles - right now, most of us have our clients set to prioritize bots with under 2000 battles. Results should start coming in quickly, within 10-15 minutes, but it is not "stable" until around 1,000 battles. If there are 2-3 clients running and your bot is reasonably fast, it should only take a few hours to get a stable ranking. Welcome to the wiki and best of luck! -- Voidious

Thanks! I have a computer at home that's currently devoted to Folding@Home. Guess what it's going to start doing instead? :) Those darn proteins can go fold themselves! -- djdjdj

2 things: 1) I just tried to archive the chat, but when I try to save all the old text on the new archive page it just comes up with "To infinity and beyond". Any ideas? I have the old text saved on my machine, for when someone figures out what I'm doing wrong. 2) I was thinking of Voidious' recent jump in rankings from a very minor tweak. So let's try to do some math. Let's say you make an improvement that avoids exactly 1 high-powered bullet per battle. Let's say that you average 5000 points per battle. So that's roughly a 16/5000~=.3% improvement in % score across the board. So my question is: how will that affect your LRP score in the rumble? -- Simonton
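For reference, the 16 in Simonton's estimate comes from Robocode's standard bullet-damage rule: a full-power (3.0) bullet does 4*3 + 2*(3-1) = 16 damage. The 5000-point battle average is his assumption, not a measured value; this sketch just replays the arithmetic:

```java
// Sketch of the arithmetic behind Simonton's estimate. The damage
// formula is Robocode's standard rule; the 5000-point average score
// is an assumption from the discussion above.
public class BulletMath {
    // Robocode bullet damage: 4 * power, plus an extra 2 * (power - 1)
    // for bullets fired with power above 1.
    static double damage(double power) {
        return 4 * power + Math.max(0, 2 * (power - 1));
    }

    public static void main(String[] args) {
        double maxPowerDamage = damage(3.0);        // 16.0
        double avgBattleScore = 5000.0;             // assumed average
        double improvement = maxPowerDamage / avgBattleScore;
        System.out.printf("%.0f damage avoided = %.2f%% of score%n",
                maxPowerDamage, improvement * 100); // roughly 0.32%
    }
}
```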

1) Want to e-mail me the archived text? I will post it to an archived page. I think there are still some old anti-wiki-spam measures in the script that silently prevent updates that contain too many external links. I have admin access, so it won't stop me when I try. =) 2) This is an interesting question, but I think it is difficult to answer it generally, for a couple of reasons:

The ELO system isn't linear in this fashion, but I have found that 1 rating point is about 0.05% across the board. That is an estimate and is not always the case, but I think it's pretty close to the truth. Dookious 1.534 is about 0.6% per opponent above Dookious 1.522 and rates 15 points higher, so in that case, it's more like 0.04%.

-- Voidious
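Voidious's "about 0.05% per rating point" can be pictured with the classic ELO expected-score curve. This is a sketch only - the rumble's actual rating formula may differ in scale and detail, and the 1600-point average opponent is an assumption - but it shows why the relationship is nonlinear: the curve flattens as you pull ahead of the field, so a rating point is "worth" less average % at the top than in the middle.

```java
// Classic ELO expected-score curve (400-point logistic scale).
// Illustrative only; the RoboRumble server's implementation may differ.
public class EloCurve {
    // Expected score (0..1) for a player rated `rating` against an
    // opponent rated `opp`.
    static double expectedScore(double rating, double opp) {
        return 1.0 / (1.0 + Math.pow(10, (opp - rating) / 400.0));
    }

    public static void main(String[] args) {
        double field = 1600; // assumed average opponent
        for (double r : new double[] {1600, 1900, 2100}) {
            double gain = expectedScore(r + 1, field) - expectedScore(r, field);
            // The per-point gain shrinks as the rating climbs.
            System.out.printf("rating %4.0f: +1 point = +%.3f%% avg score%n",
                    r, gain * 100);
        }
    }
}
```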

It indeed is a rough estimation, but you can get a feeling by the 'total %wins' of the PL-ranking. If I am correct that number is the sum of all percentages you score throughout the rumble. Meaning Dookious scores 433/507 and that would be approx 85.4%, and WeeksOnEnd 408/507 would be 80.5%. The exact calculation is somewhere (linked to?) on this wiki, but hard to find. As Voidious already stated it is not linear at the very low and very high percentages, so beating the weak harder does pay off (looks like real life ;-) ). -- GrubbmGait

Wow. Now I want to know even more the question I left at the top of this page :). -- Simonton

And another thing! Why is it that when I upload a new version, the rumble client sometimes goes back to running the old version?? Right now LifelongObsession 0.1.3 is fighting 0.1.5. Is there some way to fix this? -- Simonton

I think what happens is this: some client is halted before uploading its battles; later, it's started again, and uploads a battle for an old version, re-inserting that bot into the rumble; when another client runs, it "removes" the old version, but sometimes will still run battles for that old version during that iteration. A permanent solution is to delete the old version from your RR client; there's no longer a URL available for the client to download it again. (Yes, it's still in .robotcache, but that doesn't matter - the RR client thinks it's gone.) -- Voidious

The 0.1.3 version has not fought enough battles yet, and because it is in the list when getting the participants (it is only removed after that) it still fights battles. You can prevent that by manually removing the 0.1.3 version from the 'robots' directory in your client. You can do that even when your client is running, as the client gets its information from the .robotcache directory. When starting a new iteration, the .robotcache will be updated and so the 0.1.3 version is automatically removed there also. -- GrubbmGait

Ah, good idea, thanks! Oddly, whenever I ctrl+c the client when it's fighting an old version, invariably it will correct itself for the next iteration. It won't always stay corrected, but sometimes it does. -- Simonton

I posted Dookious 1.542 as a re-release of Dookious 1.522. I just can't help but wonder if upgrading my clients from 1.1.3 to 1.1.5 had something to do with the rating jumps. If it ranks within a few points of 2108, I'll be happy that 1.1.5 wasn't the issue; but if it does end up being a big jump without any changes, that will really suck and throw a lot of things into question. It seems 1.2+ versions don't work right with RR@Home (without changes), and 1.1.3 doesn't have the Rules class that some new bots use.

So far, 1.542 does seem to be ranking well above 1.522. I guess I (or someone) should just look into updating the RR@Home code to work with 1.2.5a in either case, but I want to know first if 1.1.5 is affecting ratings much.

-- Voidious

Where was that talk about bullet collisions happening different numbers of times depending on the version of robocode? Is that a difference between 1.1.3 and 1.1.5? -- Simonton

Some talk about it can be found at GresSuffurd/WeefSuffurd. According to the versionhistory (or Robocode/OldNews) a change in bulletcollisiondetection (Scrabble word!) was made in 1.2.1, but a bugfix was also done in 1.1.5. Not sure if that bug existed in 1.1.3 or whether it was one of the optimization features. I don't know why everyone is so keen on using 1.1.5, as that version was pulled because of critical bugs. -- GrubbmGait

The only reason I upgraded from 1.1.3 to 1.1.5 is because it has the Rules class, which some bots have started using. I assumed it had been tested and worked fine for RRAH because people using the Rules class clearly must have been using it. So far it looks like Dookious 1.542 is proving that the different versions do yield different ratings, though, so I will definitely be rolling back to 1.1.3 in the morning. If this is the case, I'm glad it was only a few days with 1.1.5 before explicitly testing it like this... Anyway, sorry for the ratings damage I'm contributing to until then. :-\ -- Voidious

It still could be a bug in 1.1.3 regarding bulletcollision. I don't know where the bugs in 1.1.5 are, so maybe the RR@Home performance is not affected by it. Fnl should know more about it. -- GrubbmGait

You know what, I'm going to just rollback now. I think the rating differences between Robocode versions are clear: Dookious 1.542 (on 1.1.5) is right on par with 1.534 (on 1.1.5) and destroying 1.522 (on 1.1.3, though it's identical to 1.542) after ~300 battles. And I could already tell you how impossible a 15 point rating jump from 2108 to 2123 seems to me in the first place. PEZ, for sake of reference, CC 2swarm.ab is at 2074 / +77 over 500 opponents / 1200 battles compared to 2swarm.a right now. I imagine you'll want to re-release one or both anyway.

Hopefully we can get anybody running 1.1.5 to rollback to 1.1.3 asap, but people using the Rules class are left out in the cold with that change. Getting 1.2.5a working with RRAH shouldn't be tough, and hopefully the ratings will be consistent. I've got some time tomorrow so I'll check it out.

-- Voidious

I've been out of the loop for too long and haven't had the time yet to read up on all things that happened in this world in my absence. Anyway, are you saying that CC's ratings are tainted? Up or down? And, if so, what causes it? -- PEZ

I think the rating of the Swarm versions is tainted by me (who was running most or all of the battles) using Robocode 1.1.5 in my clients. I'm not sure what it is about 1.1.5, but you may remember that 1.1.4 and 1.1.5 had some flaws and ended up being pulled. I only upgraded my clients from 1.1.3 to 1.1.5 a couple of days ago, and I've just put them back down to 1.1.3. So the comparison of .ab being better than .a is valid, but I bet they are both ~10 points higher than they would have been with 1.1.3. Sorry, man, I know it's a bummer =( -- Voidious

LOL! Then my crowd might not be so wise. I'll see how easy it would be for me to resubmit one of the tainted versions. -- PEZ

Out of curiosity, and because I had some spare time, I did a rerun of TheBestScoringBot script. The results from over a year ago are there too. Don't know if it is useful, but there are some small surprises hidden there. Have fun digging through the data! -- GrubbmGait


Rankings disappeared, but they will restore themselves after some time - except some of Simonton's bots, as they run exclusively on his client. -- GrubbmGait

Somehow, that train ride I just took seems so much longer now that I see the rankings completely disappeared and reappeared during it! :) -- Voidious


Question: If I release 2 melee bots at the same time, will they fight each other a disproportionate # of times? -- Simonton

No, there will be only 1 priority bot in the selection (the first bot mentioned). The other bots are chosen randomly, so the second new bot has the same chance as any other bot. -- GrubbmGait


Ok, everything I try with MeleeSeed does the same thing: while fighting in the general league it runs a score of 1730 in the nano melee rumble, 3rd place in the micro rumble, 10th place in the mini rumble, etc. But then, as soon as its battles get limited to the lower codesize leagues, the score plummets. Does anyone know what would cause my bot to do so well against the big dogs, but not as well against its fellow little guys? Is it the fact that it oscillates - do the smaller leagues have more pattern matchers? Is it because in the smaller leagues it has more competition for the corners? Do the smaller leagues have more bots that can avoid HOT really well? Note that even fighting against only nanos is the same story. -- Simonton
Question: I am using the API from Robocode 1.33 and my robot is doing very badly in the rankings. So I downloaded some robots that I have a very low score against and ran them on my Robocode 1.33... and I won like 9 games to 1... I have no idea why that happened... does that have anything to do with the 1.33 API? I am using some classes like Rules -- Patson?

Yes, that is exactly your problem. Rumble clients generally use old versions. That means no Rules class (among other things). I recommend downloading the earliest version you can find on sourceforge & testing. Hey, does anyone have any more thoughts on whether the new version is rumble-worthy yet? -- Simonton

I'm using 1.3.2 for running the rumble (I have a PII running almost full time) and it seems fairly stable. I don't use the Rules class, just to be safe, but it's easy to get the same functionality into your bot by looking at the source code. I am using the Utils class, but that seems to work fine. Decado is 5th in the MicroRumble?, so it doesn't seem to be affecting my score that much. ;-) A question: how do you get your country's flag to appear on the rumble rankings page? -- Skilgannon

Alright! Unless someone objects, let's make the new version official. Shall we wait about a week to make sure nobody has a problem with it? Oh, and also, you just leave a comment on the /CountryFlags page and sometime later somebody sets it up for you on the server. -- Simonton

Last time I tried running the 1.3+ version of Robocode as my RoboRumble client, it didn't work on the Mac yet. I'll try again sometime soon. -- Voidious

I promise I will not change any algorithm inside Robocode. Some of the problems occurred when changing from Java 1.4.2 to Java 5.0 (1.5.0). So I have spent a lot of time on debugging and fixing all the issues that occurred with this shift. I think everything is running as it is supposed to now. At least I will not change anything inside Robocode regarding robot behaviour. I promise!! --Fnl

We could fix the problem with the Rules class on the RoboRumble server side by upgrading the server with a new robocode.jar file and making sure it runs the newest Java 5.0. Then no robot should have problems with running on the server. I can help upgrade it if necessary. --Fnl

I had NO idea that I was a problembot for ArmyOfShiz... --Starrynte

Well, a preliminary release of WeeklongObsession is a little disappointing. Its micro rumble score is 20 points lower than before, and its sum difference against the same bots is 44 points lower (the sum of the "diff" column [here]). Perhaps someone is still running an old rumble client and got some out-of-memory exceptions. Krabb mentioned not wanting to switch yet (which is fair enough - it needs to be bug free). -- Simonton

Strange, in the last 300 battles (from 1900 to 2100) Waylander dropped 6 points, but also, Thorn dropped 2, and that's since this morning. I remember a few weeks ago Thorn was at 1984, now it's down to 1980. This 'drift' thing is really annoying. Just when I was starting to make progress, it drifts back down. -- Skilgannon

Different robocode versions might be the source, and I think it's not limited to micro ratings. Maybe it's related to my client; I stopped it this morning. I would advise setting the new robocode as default and no longer allowing old versions. The new robocode might behave a bit differently compared to ancient versions, but the most important thing is a stable result. I'm afraid there is no way to ignore old clients? --Krabb

The 'long term drift' is mostly due to new (good) opponents. As the average rating is approx 1600, every new good bot will inflict a very small drop on average to the other ones. Also the fact that new versions of the same bot start at 1600 again (although they should start at the rating of the previous version) helps lower the ratings a bit. The 'short term drift' can partly be due to filling up the PL-ranking, bots that save and read stats from file (but not in micro) and just plain bad luck. The discussion about drift has been around since the start of the rumble, and no satisfying solution has yet been found. If you want to talk about drift, take a look at the meleerumble: in 18 months the number 1 (and the rest) has dropped 60 points! -- GrubbmGait
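GrubbmGait's "new good bot drains the field" effect can be illustrated with a toy zero-sum rating update. This is a sketch only: the rumble's actual rating math differs in detail, and the K-factor, field size, and battle schedule here are all made up.

```java
// Toy illustration of long-term drift under zero-sum ELO-style
// updates: a strong newcomer enters at the 1600 average, wins every
// battle while climbing, and every point it gains is a point taken
// from an incumbent. All constants are assumptions for illustration.
public class DriftSketch {
    static double expected(double a, double b) {
        return 1.0 / (1.0 + Math.pow(10, (b - a) / 400.0));
    }

    public static void main(String[] args) {
        double k = 16;                       // assumed K-factor
        double[] field = new double[100];    // settled population
        java.util.Arrays.fill(field, 1600);
        double newcomer = 1600;              // enters at the average
        for (int round = 0; round < 50; round++) {
            for (int i = 0; i < field.length; i++) {
                double e = expected(newcomer, field[i]);
                double delta = k * (1.0 - e); // newcomer wins this battle
                newcomer += delta;
                field[i] -= delta;            // zero-sum: the loser pays
            }
        }
        double avg = java.util.Arrays.stream(field).average().orElse(0);
        System.out.printf("newcomer %.0f, field average %.1f%n",
                newcomer, avg);
    }
}
```

Because every point the newcomer gains comes out of an incumbent's rating, the field's average ends below 1600 even though no incumbent played any worse.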

Why isn't my bot (ProjectNano) showing up in the Nano rankings? Its size is 1.91 KB (.jar format) (&& 2.32 KB as a .class), whereas WeekendObsession's size is 3.73 KB (.jar) and Splinter's is 2.52 KB (.jar)... My bot does show up in the micro and up rankings though... --ProjectX

The "code size" is not based on the actual size of the .class or the .jar, but is a measurement of the amount of execution code in your tank. You have to use codesize.jar to find it, such as: java -jar codesize.jar ./robots/mybotjar.jar. I think codesize.jar is packaged with Robocode, but if you need to grab it, find the link on the CodeSize page. -- Voidious

I must admit, I'm excited to see just how well DustBunny 3.1 does over time. Locally, it regularly beat the top 5 bots in 5on5 competitions and was consistently top 3 in huge melees. Go Nano Anti-grav :) --Miked0801

Gonna go for the nano-crown? --Chase-san

Yep - that's where I get the most enjoyment. I can code and test in my limited spare time. Bigger bots just aren't possible with family and work :) --Miked0801

Nothing like a good "Upgrade" that drops you 30 points in ranking. Sigh. Next version. --Miked0801

Happens to me all the time. ;) --Chase-san

Still early, but I just got the lead with DustBunny 3.3. WOOOHOOO! Here's hoping it stands up overnight! --Miked0801

Nicely done. Don't forget to update TheBestBot page if it's in first place when its rating is stable. =) --David Alves

At this point, I'm pretty much calling it a 3-way tie between the top 3 melee nano bots. They each take turns rotating up and down the ratings. I do take solace in the fact that I have all +50 percentages against my opponents at the moment, though ;) --Miked0801


Hmm, what happened to the RoboRumble rankings? They look completely jacked. -- Voidious

Not sure, but I wonder if this is related. -- Simonton

java.io.IOException: Server returned HTTP response code: 500 for URL: http://rumble.fervir.com/rumble/UploadedResults
Unable to upload results meleerumble,35,1000x1000,Simonton,1197335125717,SERVER mn.Combat 1.0,10797,2499,6 ahf.NanoAndrew .4,9780,2268,0
java.io.IOException: Server returned HTTP response code: 500 for URL: http://rumble.fervir.com/rumble/UploadedResults
Unable to upload results meleerumble,35,1000x1000,Simonton,1197335125717,SERVER mn.Combat 1.0,10797,2499,6 fm.mammillarias 1.3,8381,2752,1
java.io.IOException: Server returned HTTP response code: 500 for URL: http://rumble.fervir.com/rumble/UploadedResults
Unable to upload results meleerumble,35,1000x1000,Simonton,1197335125717,SERVER mn.Combat 1.0,10797,2499,6 adt.Ar1 2.1,8186,2914,1
java.io.IOException: Server returned HTTP response code: 500 for URL: http://rumble.fervir.com/rumble/UploadedResults
Unable to upload results meleerumble,35,1000x1000,Simonton,1197335125717,SERVER mn.Combat 1.0,10797,2499,6 amk.Punbot.Punbot 0.01,7567,1393,0
java.io.IOException: Server returned HTTP response code: 500 for URL: http://rumble.fervir.com/rumble/UploadedResults
Unable to upload results meleerumble,35,1000x1000,Simonton,1197335125717,SERVER ahf.NanoAndrew .4,9780,2268,0 fm.mammillarias 1.3,8381,2752,1
OK. ahf.NanoAndrew .4 vs. fm.mammillarias 1.3 received
OK. ahf.NanoAndrew .4 vs. fm.mammillarias 1.3 received
OK. ahf.NanoAndrew .4 vs. fm.mammillarias 1.3 received
java.io.IOException: Server returned HTTP response code: 500 for URL: http://rumble.fervir.com/rumble/UploadedResults
Unable to upload results meleerumble,35,1000x1000,Simonton,1197335125717,SERVER ahf.NanoAndrew .4,9780,2268,0 adt.Ar1 2.1,8186,2914,1
java.io.IOException: Server returned HTTP response code: 500 for URL: http://rumble.fervir.com/rumble/UploadedResults
Unable to upload results meleerumble,35,1000x1000,Simonton,1197335125717,SERVER ahf.NanoAndrew .4,9780,2268,0 amk.Punbot.Punbot 0.01,7567,1393,0

Yeah, I'm seeing that too. I guess I'll just e-mail Pulsar if it keeps up... I wonder how David's new rumble server is coming along? =) -- Voidious

Sorry, didn't have time to update this earlier this morning - I contacted Pulsar the other day and he has fixed the issue with the RoboRumble server. It was simply out of disk space. I've got my clients running again and bots are reappearing as they get new results. Hopefully there was little to no data corruption as a result... -- Voidious

There still seems to be something wrong. I don't get any priority battles for robots with less than 2000 battles. -- Ebo

It's happening again ... -- Simonton

Running more battles does not solve the corruption problem. Some files on the server are damaged (read: contain invalid lines) and need to be edited (see e.g. roborumble_Voidious.Dookious_1.585.txt). Quite a few bots have these problems, also in the melee rankings. I cannot tell whether it is worth fixing those files (which only Pulsar can do) or whether we can wait till the new battleserver is ready. -- GrubbmGait


Wow, DrussGT 1.0.5 looking great! +64 over Dookious after ~420 pairings! Could be a new King on our hands. Quick question, just to get it out of the way ASAP - any changes in what version of RoboRumble you're running? Since I've seen up to 15 points difference from those "wacky" Robocode versions we encountered a while back, I'd like to re-release Dookious to make sure there's no discrepancy there. (I'm still running 1.4, which is where Dookious got most of its battles.) Great work, in any case. =) Exciting! -- Voidious

I'm running 1.5.2 with a 100 times increased cpu constant. I think we should agree on one specific version in order to keep ratings comparable. --Krabb

I haven't been running a client lately, due to lack of spare computing power, but I have 1.4.9 installed. If you look at the details sheet it seems Dookious has some dodgy results, even a loss to Waylander (45%). Oh and btw, this DrussGT is with the same gun that is 10 points lower than Dookious's gun....so it may be time for another DrussiousGT =) -- Skilgannon

I am currently running two 1.5.3 clients, just started up, both on different machines: wired hot on a Linux Core Duo, and my mid-range Windows Athlon 64 X2. They've been running for about 2 hours now. I am on my laptop, so I decided I should put that spare CPU power to use somehow. --Chase-san

I never expected such a big difference with Dookious, I guess we will have to do something drastic to the rumble. How is David's version doing? I knew there were bad results and a lot of uncertainty, but I thought they would balance each other out. Now we'll maybe have to reset the whole ranking? :( -- ABC

I've seen both Dookious and Phoenix drift up 3-5 points in recent weeks. When we move to the new RoboRumble server, we may have to start the rankings from scratch, anyway, but I'm not sure. I think I did some rough calculations and found it would take roughly a month with 5 clients always running to get everyone back up to 2,000 battles. (I could probably be running two clients most of the time.) We could do it incrementally, too, maybe 50 at a time with current bots first, to ease the transition for active authors.

The CPU constant thing is almost definitely what hurt Dookious, IMO. It's possible that's the only issue here. I'll have to do another rerelease to see if it gets the same rating as 1.573. I don't see anything in the last few Robocode versions that sounds like it could be having any weird rating effect, for what it's worth.

-- Voidious

Well, here are my back-of-the-envelope calculations:

Obviously the number of clients and the length of the average battle are unknown to me, so I could be way off. I think Dookious or Shadow takes ~2 minutes to run a battle against itself on my MacBook, while CassiusClay takes 45 seconds and many MiniBots take more like 10-20 seconds.

-- Voidious
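The kind of estimate Voidious describes can be sketched like this. Every input below - participant count, battles needed, seconds per battle, client count - is an assumption for illustration, not a measured rumble statistic:

```java
// Back-of-the-envelope: how long would a full rumble reset take?
// All inputs are assumed, not measured.
public class ResetEstimate {
    static double estimateDays(int bots, int battlesPerBot,
                               double secsPerBattle, int clients) {
        // Each battle credits both participants, so the total number
        // of battles needed is halved.
        long totalBattles = (long) bots * battlesPerBot / 2;
        return totalBattles * secsPerBattle / clients / 86400.0;
    }

    public static void main(String[] args) {
        // Assumed: 600 participants, 2000 battles each, 30s/battle,
        // 5 always-on clients.
        System.out.printf("about %.0f days%n",
                estimateDays(600, 2000, 30, 5)); // ~42 days
    }
}
```

With these made-up inputs the sketch lands around 42 days, the same ballpark as the "roughly a month with 5 clients always running" mentioned above.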

Wow, my CPU could burn out from that much use, but I could give it a shot, as I have access to remote systems and a laptop and my desktop, so when not in use, maybe 3 servers max. 6 if I run 2 on each to maximize their dual cores. --Chase-san

We wouldn't need half that many battles to get a (mostly) stable ranking, I think. In fact, a few days of random battles should, in principle, put everyone in its correct relative position. We could get different absolute numbers from what we have now, though. -- ABC

Well, this is interesting. It's looking pretty certain to me that there's no huge rating tainting going on with DrussGT 1.0.5 (which is a huge relief). However, so far 1.573c is a bit above 1.573 and very close to DrussGT 1.0.5. We could have a real close one on our hands =) Despite that, I do expect to be crowning a new King in the next day or so... -- Voidious

Well, you did hold the crown for quite a while (a year at least), it's only fair to pass it on. Just make sure Dookious stays near the top there; I don't want it ending up like SandboxDT, RaikoMX, Ascendant, or CassiusClay, all of which still kick major butt, but all of which are no longer in, or very soon to be shoved rudely out of, the top 10. Btw, I have the Linux machine cranking out battles in groups of 25. I am pondering working on/finishing Seraphim 2 tonight. --Chase-san

Wow... Still 3-4 pairings left for Dookious 1.573c and it is +2.4 percentage points total over DrussGT 1.0.5. (As in, .004% per opponent better.) I guess I don't have to hand over that crown just yet... Now if only I could pull myself away from Ocarina of Time, maybe I could do some throne defense! =) I'm not sure what the procedure is here. Does the tie go to the defending King, or do we enter a period of chaos without a definitive leader among us?? -- Voidious

The winner is whoever has the shotgun, ;) --Chase-san

Wow, I only just noticed that NeophytePattern and his two siblings are now at the top of the NanoBot 1v1 rankings. Nice work, John Cleland (deduced from /CountryFlags)! Is there possibly some Robocode version issue going on (like the 1.5.3 CPU constant thing), or have you really rocketed 60+ points past the former #1? In any case, feel free to make a page for yourself and your bots, as I'm sure we'd all like more details on the new king of NanoBots. =) -- Voidious

Hey, can anyone see on their client what errors they might be getting from the rumble? The rankings seem to be taking a beating, both melee and 1v1... Maybe the wrong participants URL in there? -- Voidious

I've shut down both my clients. Looks like server problems, errors while uploading results. -- ABC

Woah... the ranking page is really messed up... upload errors here too (error code 500)... I wonder what the heck is going on. -- Rednaxela

Seems like the same problem as a few months ago; could it be 'disk full' again, or maybe that hard drive is slowly giving up on us. I will not run any battles till further notice. -- GrubbmGait

Is the Roborumble back to right again? -- Martin?

Nope, still getting a http:500 error message when I try to access rankings. --Baal

Most probably the disk is full, I think. Just after I uploaded into the repository and entered my new bot name it crashed. Sorry Guys. -- aryary

Actually, I highly doubt it is your fault. I'm just worried about when it's going to be back up, but I can wait. --Baal

Well, I am able to wait, but just excited to know if I got past SandboxDT. I have been developing this bot for about 3 months. -- aryary

Hey guys, Pulsar took care of the RoboRumble server issue yesterday. I just ran my client, successfully uploaded a battle, and the General 1v1 ranking reappeared. I'm gonna leave my 2 clients running overnight. I think editing roborumble.txt to have a smaller "NUMBATTLES" is a good idea: when your client checks with the server, any bot that isn't in the listing will get battles, but it really just needs 1 battle before it reappears and has all its previous battles (assuming no data corruption). So it would be a lot faster to make them all reappear if your client checks with the server more often. -- Voidious
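The roborumble.txt tweak Voidious suggests would look something like this (a fragment only; exact property names and defaults vary by Robocode version, and the value 5 is purely illustrative):

```
# roborumble.txt (fragment) - fewer battles per server check-in
# means missing bots reappear in the rankings sooner.
NUMBATTLES=5
```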

Awesome! I'll start running my client too. -- Rednaxela

This is good news :) -- Baal

Nice! I updated my bot to v1.1 and I am curious about the momentum rating. How much is considered stable? And good job repairing the server :) -- aryary

There are some major problems with the rankings....some bots still haven't had battles run yet, and some have gone WAAY down (like Phoenix, down 10 points). I think that once everything is uploaded it might settle down a bit, but the momentum isn't really reliable. Wait until your bot has 2000 battles, that's the only way to ensure a stable ranking. -- Skilgannon

Hm... I wonder if the server broke again... I'm getting error code 500 stuff again uploading :( -- Rednaxela

No errors here, 40 bots left for the full ranking, just in time for my new promising gun tweaks, great! :) -- ABC

@Rednaxela - Can you try clearing the "robocode/roborumble/files/results1v1.txt" file and see if they start coming back up? I remember still getting errors on some bots but not on others the last time this happened (server ran out of disk space). I think it is because some bots had their data files corrupted and the RoboRumble update script doesn't know how to handle it. You might be getting errors on some results, then the client keeps them in the cache of "results not uploaded" and tries again every time... -- Voidious

Thanks, Pulsar. - Martin

Thanx Pulsar for putting your time into it. The only bot in the megarumble giving trouble now is cx.mini.BlackSwans, giving that error code 500. Maybe it is possible to edit and repair its resultsfile or just delete it, so that it reappears in the ranking. -- GrubbmGait

There are at least 11 new bots in the rumble, so we'll just have to be patient while they take their share of CPU time. -- Martin

I've only got one client going today, but I'll try and have 2 going as much as possible in the coming days. I see 4 of us on [Who's Upload] right now, so that's pretty good. -- Voidious

sgs.DogManSPE? has been a long-time thorn in my side. If you look at this chart, you can see that Hubris is his 2nd best positive problem bot index (behind kid.TOA), and Ugluk v1.0 is 4th best. So I went to the other end of the scale to see who is sticking it to DogMan?. I might learn something to defeat him. I copied over 3 of his 5 worst opponents, problem-bot-index-wise. One of them, Cephalosporin, has 30 battles credited with an average of 3:1 dominance over DogMan?. In my tests, DogMan? consistently slaughtered these opponents with about a 5:1 score ratio. No exceptions thrown.

I bring this up to illustrate that there are some very bogus scores in the Roborumble records, dragging down bots that should have higher ratings, which in turn drags down the ratings of all of their opponents (with valid battle results). I do not know how widespread this problem is, but it may be time for a general purge and restarting of the Rumble. -- Martin

It is widespread. It drags down ratings of bots that lose against DogmanSPE? and drags up ratings for bots that defeat him. I don't believe it affects the relative positioning of bots submitted now, but I'm still up for a rumble reset, even if it means a week without the rumble. -- ABC

I meant that bots that got 99.9 scores against him because of the 1.5.x (before 1.5.4) clients have inflated scores. Anyway, we should probably also make sure that the server rejects results from those versions. -- ABC

I also have long been bothered by DogMan?. To top it off, I don't believe the author of that bot was even around when the RoboRumble started, so it's not as if we're removing another author's bot without his permission. There are others like this - I believe TheBrainPi and/or ScruchiPu used to be one of them. I'm also up for a RoboRumble reset, but I wonder if we should just wait until David Alves finishes his RoboRumble server at roborumble.org. Lastly, just wanted to say I think it'd be more like 3+ weeks w/o the rumble if we did a full reset, possibly more than a month, but I think we could do it incrementally and all be happy in the meantime. -- Voidious

Thanks Voidious, I have finally created a basic page or three for my bots and me. Hopefully the rankings are real and the servers will behave; I have seen a few crashes lately, and half the bots dropped off the score sheets. I have now moved up to micro bots, but my poor Hedgehog bots can't get past Waylander and Thorn; those guys are just too tough. -- John Cleland


Any newcomers just start running a RoboRumble client? We're missing some bots in the 1v1 rumbles, which could be caused by putting the wrong "Participants URL" in. I'm gonna let my client go for a while, it should sort itself soon enough, but please check your settings. -- Voidious

Is there a problem with the server? It seems that my bots have not battled with others for 2 days, and my new ColorNanoP? is not in the nanorumble after a full day already. Does anyone know why? I am curious about how I fare. Thanks. -- Aryary

So sorry about that, and thanks for your info. I will change that straight away. As for Help, I will change it if I update, because I am playing with codesize restrictions. Also, can I have my flag? I have been asking for it on the Country Flags page for a month already. Thanks. -- Aryary

Hi again, I thought I would start running a client for the 1v1 battles. How long would it take to finish running a match? I can't leave it on all night long, and how frequently would my bots run on it? Also, there are some errors with some other bots' URLs, and the run screen exits suddenly when ITERATE=YES. Is there a problem? -- Aryary

One match will take between 2 seconds and 1 minute on a 2.6GHz P4. Personally I have ITERATE=NO, because in the past (v1.0.7) there were problems with it. I do have a batch file though that calls roborumble.bat and after that calls itself, so it continues until I stop it. If your bot is the only one with less than 2000 fights, every battle will be between your bot and some random other (priority battles). I'll see if I can make an update of the zipfile containing all bots tonight. -- GrubbmGait
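GrubbmGait's self-restarting wrapper can be sketched as a small Unix shell loop (a hypothetical reconstruction: his actual version was a Windows batch file calling roborumble.bat and then itself, `roborumble.sh` here stands in for the real client launch script, and the pass count is capped only so the sketch terminates rather than looping until interrupted):

```shell
#!/bin/sh
# Restart loop for a RoboRumble client configured with ITERATE=NO:
# run one pass of the client, then start it again.
# RUNS is capped so this sketch terminates; a real wrapper would loop forever.
RUNS=3
i=0
while [ "$i" -lt "$RUNS" ]; do
  # sh ./roborumble.sh   # real client launch, commented out in this sketch
  echo "client pass $((i + 1))"
  i=$((i + 1))
done
```

Restarting the whole client per pass also works around any leaks or crashes inside a single long-running iteration, which is presumably why it was preferred over ITERATE=YES.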

Hi Aryary, what robocode version do you use for your roborumble client? It seems like Garm gets worse results on your machine than on my own machine. (It also could be my fault) :) --Krabb

Sorry, I have been very busy with schoolwork. Well, all I know is that I don't remember running your bot against mine before (not very sure). And what happened to the repository? I tried accessing it and my bots were... gone? -- Aryary

On the page of the robocode repository, they have a notice that they lost recently uploaded bots or recently created users, that would be why they're gone there I'd presume... -- Rednaxela

Hello, is anything wrong? I can't seem to download any new bots or run any battles. Thanks. -- Aryary

Try deleting the robots/.robotcache/* and robots/robot.database. Also, I'd appreciate if you went through your bots on the participants page to make sure you can download all of them, a lot of your bots got lost in the recent RobocodeRepository crash, and I can't run battles against them because I can't download them. Thanks =) -- Skilgannon
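Skilgannon's cleanup step can be sketched as a short shell session. The two `rm` commands are the ones described above, run from the Robocode install root; the `demo` directory and the bot directory name are made up here just to simulate that layout:

```shell
#!/bin/sh
# Simulate a Robocode install layout, then clear the cached bot data:
# empty robots/.robotcache and delete robots/robot.database, so the
# client rebuilds them from fresh downloads on the next run.
mkdir -p demo/robots/.robotcache/some.bot_1.0
touch demo/robots/robot.database
cd demo
rm -rf robots/.robotcache/*
rm -f robots/robot.database
```

After this, the next client run re-downloads and re-extracts any bots it needs, which clears out entries corrupted by a failed download.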

I have been patiently awaiting results on my first ever robot. Why can I never see it in the results (entered about two weeks ago)? Also, why does everything about Robocode seem 4 or 5 years old? -- Pakistan

It looks like there may be a problem with your entry in the participants list... perhaps it's due to the space after the comma? -- Tim Foden

Yep, that is a problem. I fixed the space after the comma, and then realised that it still won't work because you didn't include the version number. When you package the bot with Robocode you put a version number in - you need to put that after the bot name separated by a space, before the comma, then the repository number, or the url location (without a space). The roborumble depends on every bot having a version number - and if you release your code in the .jar it works as a simple form of version control =) -- Skilgannon
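Put together, a participants entry of the form described above looks like this (the bot name and download URL are made-up placeholders; note the version after a single space, then the comma with no space before the download location):

```
pakistan.FirstBot 1.0,http://somehost.example/pakistan.FirstBot_1.0.jar
```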

Thanks guys, but why does everything seem like it is a billion years old? -- Pakistan


Hmm, for some reason SilverSurfer is getting 0 scores for me in the rumble, it seems to be giving a "java.security.AccessControlException: Preventing Robot Loader from access to the internal Robocode pakage: robocode.exception" error. Running Robocode 1.6.0 here. Anyone else see this issue? -- Rednaxela

DrussGT just got a bunch of funny results... anybody know what's going on? -- Skilgannon

Not sure myself. I have a RR client running here and lots of DrussGT battles are running, but nothing looks anomalous to me. I see the results you mean in the details page though, against Fenix.FenixTrack? and a few others. -- Rednaxela
Update: I just updated my local copy of the complete results file, and it appears the client that did the bad results had a name of "Put_Your_Name_Here" so I'm pretty sure it's neither of us. -- Rednaxela

Gah, a bunch of funny results for RougeDC now, and according to the results file they're from someone with their client name set to "darkstorm". Perhaps this is the same as the "Put_Your_Name_Here" person before? Either way, this is being rather problematic... It also looks like bad results from "darkstorm" are not restricted to RougeDC, as they also show DrussGT 1.2.0b losing to Drifter 24 by a fairly large margin, and this is something I cannot reproduce no matter how many times I try here. -- Rednaxela

I just checked to make sure it's not me. On my work machine the name is configured to be "tcf". I'll have to look at my home machine later, but my guess is it will either be "tcf" or the default. Certainly I wouldn't expect Drifter (of any version) to get 65% against DrussGT =:) I thought I'd better check as I've made some slight changes to the client to make it easier to run battles for just the bot I want. I don't think I've changed anything that would affect the battles themselves though, just who is in them. Also the last lot I ran for Drifter 24 excluded all the other currently new bots, which means it shouldn't have done any matches against Druss at all. Anyway, my conclusion from all this is that I don't think it's me. -- Tim Foden

There are also lots of 2%, 9% etc. scores... I'll wait a while and then give it another go, but with the next version, whose gun code should take about half the time it did before. -- Skilgannon


I'm darkstorm. I'm actually running version 1.6.1 of the RoboRumble@Home client under Ubuntu. I didn't get any errors on the console (just some bots that can't be downloaded), so your bad results are real. If you want, I can redirect the output (stderr and stdout) to a file.

For instance, I see that DrussGT 1.2.0b got 3.3% in a battle against pla.Memnoch_0.5 and 13.0% against dummy.micro.Sparrow_2.5. These are very suspect results - unless Skilgannon accidentally left the bot in TC mode or something =), these results could only happen if DrussGT were crashing or something like that. Would you mind trying to run that battle manually through Robocode and see what happens? If DrussGT crushes those bots (as he should), then the problem is probably in the RoboRumble code. If you see DrussGT crashing, if you could post the contents of his Robocode console output, that would also be really helpful in tracking it down.
-- Voidious

OK, switched to version 1.6.0.1... is everything fine now? -- Asdasd


Hi guys, I am just curious. Firstly, are we supposed to use 1.5.4 or 1.6.0 for the roborumble? I ask because my bot, Weak, is receiving some funny data (according to 1.6.0). For example, el.Attackr 0.1 should be getting > 90% on my computer, and simonton.nano.GF_Nano 3.1b should be getting about 73%, when now it is 63%. Another is timmit.nano.timCat_0.13: my computer shows I should get 90+% against it, but now I am getting 60+%. Could someone tell me what is wrong - the Robocode version, Weak's stopNGo not functioning, me, or something else? Thanks. -- Aryary

Have you tried local testing with 1.6.0? And with 1.5.4? From my experience with Waylander etc. I found that occasionally the StopAndGo didn't detect correctly, and many points can be found in tuning your multi-mode movement to pick the right mode as often as possible. I often find that in maybe 1 in 10 battles the multi-mode incorrectly turns on the flattener, but this is enough to hurt your performance a lot. On the other hand, if the flattener is NOT put on when it should be, there is the even worse risk of losing to top bots by HUGE margins, e.g. 24%, because they have nearly a 100% hit rate against you. But test the different versions, you might have uncovered a bug. -- Skilgannon

I will try with 1.5.4 though, because local tests with 1.6.0 led me to the conclusions above. My new multi-mode movement also changes to random movement if I die more than 2 times, so naturally it will lose some points, but not across all 35 rounds. Then again, I have no idea what happened in 1.5.4. My local tests still show that those problem bots I lose to do not get the expected score in 1.6.0, and the flattener is always there, so perhaps it has some issues with 1.5.4. Would anyone mind helping me test, and uploading results for some of my mega problem bots (i.e. the cx family and bots with <= -10% problem bot index)? My computer seems to be more restricted nowadays (meaning I have problems installing it). Also, would anyone mind helping me verify apv.NanoLauLetrik?'s score (58.8%)? It came from my own computer, so maybe that's where the problem is. Thanks. -- Aryary


Last edited July 26, 2008 10:33 EST by Aryary (diff)