RoboRumble/RankingChat20070224


DT's new performance matches the 'S' curve of ratings almost perfectly now. In other words, the amount it thrashes weaker bots is, on average, the amount predicted. Previous versions of DT did not do so well against the weaker bots, and these dragged the rating down. Tip: take the details ratings page for your bot, order the results by expected score (lowest first) and plot expected score (x axis) against problem bot index (y axis). A linear regression through the data should run straight down the zero line for a well balanced bot. If it does not, you have an improvement to make. For a stably ranked bot the regression line will pass the zero line at the center of the plot; however, if the line is downhill it indicates that targeting needs improving (because good targeting works best on weak bots), and if it is uphill it indicates that movement needs improving (because good movement prevents good bots beating you through targeting).

Vuen - DT does not use any information on its own movement for segmentation. I think DT's gun is weak against the new movement of DT (which is why I chose the movement), as are most guns. However, it looks like there is a feature of the movement which can be predicted by some enemies that DT does not use. I need to remove that part of the predictable movement, or create a gun which sees this prediction, or create an AM gun (all three would be best :)). An AM gun would prevent enemies using DT's movement for protection - perhaps I will start with this.

-- Paul

Picky PEZ notes: An AM gun won't prevent enemies using DT's gun for protection. But it sure will prevent them from doing so successfully. =) -- PEZ

Er.. that would be "movement" not "gun", surely? Mirroring an enemy's gun would be quite an achievement... (excluding the "extends SandboxDT" method). -- Tango

That's me! ;-) -- Tango

BTW, thanks for that graphing tip, it should be really useful. Apparently my movement is letting me down, which is what I would expect, considering the gun took a few days, and the movement a few minutes. -- Tango

Does extends SandboxDT work? - I thought I had closed that loophole. -- Paul

You may well have, I haven't tested it. -- Tango

I wonder if it would be possible to generate this graph from a link on a bot's results page. This would be a useful addition if someone knows how to generate the graphs. I would offer, but I have no idea how to do the graphics. Not the first clue. -- jim

Do you mean a graph of the theoretical S-Curve data? -- Kawigi

Since RR keeps stats for all old versions of bots, it would be cool to have a page which did a comparison of performance for 2 versions (bots) against the whole pack. -- Frakir

The plot should be pretty straightforward to do. I'm not sure what is meant by the regression line though. I wonder if Frakir's idea is as easy to realize... But it should be possible. And interesting to read. -- PEZ

For the "regression line" i just used Excel's "Add Trendline" feature, on linear mode. I think that's what paul meant, as what he said afterwards made sense with that line. -- Tango

Yep, Tango is correct - Excel is the tool I use for such work - IMHO the best MS product ever. Actually a scatter graph of expected score (x) against problem bot index (y) does the job without the need to order the data. As my calculator can do regression analysis I'm sure it is not difficult to do - the gradient of the line would give an indication as to whether a bot is strongest in movement or targeting. The numerical result could be displayed at the bottom of the page - with the momentum and specialist results. Does anyone know the maths? - Paul

Excel is one of the best products ever, almost as great as gaffa tape. =) However I would need the math since, in all its greatness, Excel is a poor server app. I've seen people using it like that, but always with bad service as a result. If someone creates a Java class doing the math, or knows where to find one, I can look at what needs to be done to spiff up the details page with this. -- PEZ

There must be some way to calculate a line of best fit (which is essentially what we want) using simple maths, possibly an iterative algo... I'll do some research. -- Tango

OK, after some brief research, all I've found is the hard way. You check every possible line, and for each calculate the sum of the squares of the differences between the predicted and actual y-values for each x-value. The line with the smallest sum is the best line. There is probably an easier way to do this, but I can't find it. -- Tango

Just a note - I've recently accepted an offer to work for Microsoft on Excel :-) -- Kawigi

Cool! -- PEZ

So, you can take a quick peek at the source code and tell us how it's done. (You didn't want to *keep* this job, did you? ;-)) -- Tango

Google's first hit for "linear regression java source": http://www.math.csusb.edu/faculty/stanton/m262/regress/regress.html. Looks good, source included. -- ABC

Looks very good indeed. I'm just having a look at the source, to see if I can work it out... -- Tango

I'm not sure, but it looks like the key code is this bit:

Sxx = sumxx - sumx*sumx/n;  // corrected sum of squares for x
Syy = sumyy - sumy*sumy/n;  // corrected sum of squares for y
Sxy = sumxy - sumx*sumy/n;  // corrected sum of cross-products
b = Sxy/Sxx;                // slope
a = (sumy - b*sumx)/n;      // intercept

sumxx is the sum of x squared for each point, sumx is the sum of x for each point, n is the number of points, sumyy and sumy are the same but for y, and sumxy is the sum of x*y for each point. a and b are the coefficients: y = a + bx. -- Tango

I've just tested it, and it gets almost exactly the same as Excel. We have a winner! :-) -- Tango
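For reference, here is a self-contained Java sketch of the same least-squares math. The class and method names are made up for illustration; the formulas are the standard ones from the page linked above.

public class LeastSquares {
    /** Returns {a, b} for the best-fit line y = a + b*x. */
    public static double[] fit(double[] x, double[] y) {
        int n = x.length;
        double sumx = 0, sumy = 0, sumxx = 0, sumxy = 0;
        for (int i = 0; i < n; i++) {
            sumx += x[i];
            sumy += y[i];
            sumxx += x[i] * x[i];
            sumxy += x[i] * y[i];
        }
        double Sxx = sumxx - sumx * sumx / n; // corrected sum of squares for x
        double Sxy = sumxy - sumx * sumy / n; // corrected sum of cross-products
        double b = Sxy / Sxx;                 // slope
        double a = (sumy - b * sumx) / n;     // intercept
        return new double[] { a, b };
    }

    public static void main(String[] args) {
        double[] x = { 1, 2, 3, 4 };
        double[] y = { 2.1, 3.9, 6.2, 7.8 };
        double[] ab = fit(x, y);
        System.out.println("y = " + ab[0] + " + " + ab[1] + "x");
    }
}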

OK, I think I've made the required modifications. I don't have Tomcat or anything like that, so I don't know if it will work, but I'll email it to PEZ to try. It *should* calculate the equation and print it at the bottom of the details screen. Someone needs to come up with a name for it though; at the moment it's called "line". -- Tango

Wait, is it a servlet that creates an image or something? Why not make an applet? -- Kawigi

I was thinking that too. Can you make the applet query the servlet for the data? -- PEZ

My physics lab manual has all the formulas for linear regression and Gaussian standard deviation (error) on the slope and intercept. Then they tell us we aren't allowed to use them, that they are only there for reference, and that we have to draw everything and do the line of best fit ourselves. Grr... Anyway, here are the formulas for linear regression; Tango already got them working, but this is what they look like mathematically:

For y = mx + b:

m = (n sum(xy) - sum(x)sum(y)) / (n sum(x^2) - (sum(x))^2)

b = y_avg - m x_avg

where n is the number of points and x_avg and y_avg are the mean values of the x and y values respectively.

-- Vuen

My code just gives the numbers; it doesn't draw the graph at all. I was listening to Paul: "The numerical result could be displayed at the bottom of the page - with the momentum and specialist results." -- Tango

Ah. If the numbers could be delivered in an easily parseable format, it should be quite easy to ask for them from an applet (just using http and such). What I'm picturing is a scatter plot with an S-curve through it basically. If the server can give me a page of text that is the points on the scatter plot (pairs of percent-score against a robot and expected percent-score I believe?) and if someone can help me translate that math up there into some lines, it should be very doable. -- Kawigi

There isn't an S-curve for these numbers. You get a straight line. It's expected score vs problem index. You should be able to get all the info fairly easily. Take a look at the source code. I could understand it, so I'm sure you can. -- Tango

Ah, you're right, since the numbers are adjusted for such a curve, there would be just a line left. -- Kawigi
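If the servlet ends up serving the points as plain text, the applet side could be as small as this sketch. The URL argument and the name,expected,actual line format are assumptions for illustration, not the actual servlet output.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

public class PointFetcher {
    /** Reads "name,expected%,actual%" lines and returns {x, y} pairs. */
    public static List<double[]> fetch(String urlString) throws Exception {
        List<double[]> points = new ArrayList<double[]>();
        BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(urlString).openStream()));
        String line;
        while ((line = in.readLine()) != null) {
            String[] parts = line.split(",");
            if (parts.length == 3) {
                points.add(new double[] {
                        Double.parseDouble(parts[1]),    // expected %score (x)
                        Double.parseDouble(parts[2]) }); // actual %score (y)
            }
        }
        in.close();
        return points;
    }
}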

Here is a graph I produced for VertiLeach in the RoboRumble, let me know if this is the sort of thing we were wanting to graph. It plots expected %score on the X axis and actual %score on the Y axis. Here is my interpretation of it:

The gray lines are reference lines - the diagonal one is what a balanced bot would have. Dots below the horizontal gray line represent bots that are ranked higher than VertiLeach (or "expected to beat it"), bots to the left of the vertical gray line are the ones it loses to. The blue line is the regression line. VertiLeach underperforms against any dot above the blue line, does better than expected against dots below the blue line. -- Kawigi

I want to see it! The link is broken though. While at it, can you produce a graph for Verti in the MiniBots category too? (It is not designed to compete with MegaBots really.) Ah, you fixed it. I tried every possible image format except BMP. You deserve your job at Microsoft dude. =) -- PEZ

[edit: woops, this didn't make any sense. Lemme try this again...] This isn't exactly the graph Paul meant. It's much simpler if you plot problem bot index on the y axis rather than actual score; this way your reference line is a line parallel to the x axis (slope 0). Thus you can tell whether your robot's movement or gunning needs improvement simply by the sign of the slope of the line of best fit. -- Vuen

Hmmm... so tilt your head 45 degrees and cross your eyes and that's what you want? -- Kawigi

Heh, pretty much. More like a skew than a tilt :D. It just makes it easier mathematically because you don't have to compare the slope of the line to the slope of the reference line, since the reference line's slope is zero. [edit] Also, in your graph, VertiLeach underperforms against all bots above the grey line, not the blue line; likewise with overperforming. In Paul's graph, this would be all bots above and below a horizontal line passing through the mean (again easier to see :D) -- Vuen

How about this one:

I think the little dot by itself out in the bottom-left quadrant is SandboxDT. Other than that, though, it looks like VertiLeach is better than expected against bots that are ranked higher. -- Kawigi

Way cool! But I need yet another "this is what you see". You know how thick I am... Does each quadrant have a meaning? What would the labels on the X and Y axis be? -- PEZ

MrHyde is closing the gap on Jekyl. -- PEZ

You could argue that Jekyl is closing the gap on MrHyde too :( -- jim

There we go. I was right again above before I incorrected myself; the reference line is in fact the x axis. Also, now that the server errors have subsided, VertiLeach performs better than expected against bots above the reference line and vice versa, not the other way around. Anyway, what you can tell from the graph is that, basically regurgitating what Paul said earlier, since the slope is negative (the line is downhill), VertiLeach performs well against high ranked bots and not well enough against lower ranked bots. This indicates that its targeting needs improvement, because most lower ranked bots don't have very powerful movement systems. If the line were uphill (positive slope), it would mean the movement needs improvement, because the bot would perform badly against high ranked bots but exceptionally well against weak bots, meaning its targeting is good against their weak movement. To summarize: work on VertiLeach's gun! :D -- Vuen
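In code, Vuen's rule of thumb boils down to checking the sign of the regression slope over the (expected score, PBI) points. A hedged sketch, reusing the fit() method sketched earlier:

public static String diagnose(double[] expectedScore, double[] pbi) {
    double slope = LeastSquares.fit(expectedScore, pbi)[1];
    if (slope < 0) {
        // Underperforming against the weak bots: good targeting punishes them most.
        return "downhill: work on your targeting";
    } else if (slope > 0) {
        // Underperforming against the strong bots: good movement protects against them.
        return "uphill: work on your movement";
    }
    return "flat: nicely balanced bot";
}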

I already knew that! =) Actually I think I have been saying now and then that Verti's gun is quite weak. All my guns are. -- PEZ

Paul's comments also assume that your movement and gun are roughly the same against all bots, which isn't the case with VertiLeach. VertiLeach's movement is frankly worse against most weaker bots than against stronger bots.

As far as labeling the axes, now the Y axis is the problem bot index - the horizontal gray line is 0, the top would be 100, and the bottom would be -100. Anything outside about -30 to 30 is probably extremely abnormal. The X axis is your expected score against the opponent. The vertical line is 50% (you tie), the left side is 0% (you don't score), the right side is 100% (you shut them out). -- Kawigi

Can you make an applet of this? I can add a request parameter to the details page that when added to the request would produce just the info you need, in a format of your choice. -- PEZ

I actually made it as an applet - and then temporarily made it into a frame so I could connect to the wiki without being on the wiki. Right now it parses the html details page created by the server, but I think it would be more efficient if there is some way it is stored raw on the server. Is it? Or is the data non-localized and generated into the details pages periodically?

If I were to choose the format, it would probably be a list of robot names, expected %score, and PBI. The way it is now, I wouldn't even need the names, but I could add a feature like being able to mouse over the dots to see what robots they represent (just for fun). The current program can also be any size, but probably should be square (I think I might have made some assumptions that it was square in there). It would take html parameters indicating the bot's name and which competition to take data from.

So, all that aside, how do we want to do this? Should I read the generated details page or some more raw data? If it were on the server, where would it be (more importantly, where would it be relative to the files it will need to access)? Is the mouse-over-dots feature desirable? Anything else I should do with it before sending it to you? Will you be disturbed to see Barracuda's graph? -- Kawigi

I think it might work to read the raw files directly. They are in Java properties format. Let me check what the URL is. It would bypass the rumble data manager though, so you might get corrupted files now and then. Could we load the applet from your server and still read the data from mine? Then we can generate a request URL from the ranking table and you can build the supporting html on the fly from this request? I would really like the mouse over feature. -- PEZ

What is Java properties format? We can't host it on my server and have it access your server without doing some kind of signing process, which sounds like a pain in the neck. Of course, having to develop it, send it to you, and hope it works on your server is only slightly less of a pain in the neck. Meanwhile, the mouseover feature is implemented. -- Kawigi

Mouse over sounds really cool. A Java properties file is of the form key=value (think of SandboxDT's properties file). You access it using java.util.Properties. -- jim
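Reading such a file takes only a few lines; a minimal sketch (the file name here is hypothetical):

import java.io.FileInputStream;
import java.util.Enumeration;
import java.util.Properties;

public class PropsDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        FileInputStream in = new FileInputStream("somebot.data"); // hypothetical file
        props.load(in);
        in.close();
        for (Enumeration e = props.propertyNames(); e.hasMoreElements();) {
            String key = (String) e.nextElement();
            System.out.println(key + " = " + props.getProperty(key));
        }
    }
}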

Ah, I see. So I guess I have to know what the names of the files are that the properties are stored in, and then I need to know what the relevant keys are. -- Kawigi

Do you have an upload account on the wiki? If not I can create one for you. Then you could upload the applet jar along with an html page loading it. Maybe you could make the applet read the /Participants page and provide a drop-down menu for selecting the bot to graph? Looking at the raw files I don't think you could work with those without calculating expected score and stuff yourself. How about I try to patch the details servlet to accept a request parameter to produce a raw file for your applet? -- PEZ

I don't yet have a wiki account. Any chance you could get on ICQ? That might make this discussion more streamlined. -- Kawigi

After some hacking and reconfiguration of the webserver, the applet producing those scatter-plots is now up and running (but it doesn't have any links to it yet. Next version of the server classes?) For instance, look at http://rumble.robowiki.net/lrp?game=roborumble&name=kawigi.sbf.FloodHT%200.8 to see how FloodHT is doing. You can change roborumble to minirumble, microrumble, nanorumble or teamrumble, and you can substitute your own bot's name in for FloodHT's. -- Kawigi

Truly, truly good work, Kawigi. /LRP deserves its own page. -- PEZ

Wow. Excellent work Kawigi; the applet shows the extreme uphill slope that I was expecting with Fractal. It really does show how incredibly crappy its movement is. Looks like I have some work to do :) -- Vuen

Nice work Kawigi - I love it when someone does the hard work. -- Paul

About the teams ranking: I've been running the teamrumble the last couple of days. It is a slow and painful process, I think mainly because DTTeam has that old memory leak. Anyway, I have come to the conclusion that the rating constant should be proportional to the number of competitors. ShadowTeam has had all its pairings run for a long time now, yet the momentum is still huge. Maybe the server could re-iterate through the bot's results until the momentum is close to zero? Either that or an increased constant. I believe ShadowTeam is currently the best team (what I don't know is by how much), but the time the ranking takes to show that is too long. The PL league, on the other hand, is way faster. ;) -- ABC

Hey, wait a minute, we don't know if you beat BlindFighters yet! Seriously, though, the one thing that is humiliating is that BlindFighters does better against AresHaikuTeam than TidalWave does. -- Kawigi

BlindFighters rule! It's just that my client refuses to choose that fight; the "smart" pairings code could probably be improved. Also, why the 1200x1200 battlefield? It makes it very hard to test because the sliders don't stop at 1200. And, finally, how about 35 rounds per battle? 10 rounds is very limiting for bots that don't save data. -- ABC

The sliders stop at 1600; just click the slider bar after to drop it 400 down. -- Vuen

I must be doing something wrong then, my sliders don't stop at 1600, and increase/decrease by 500 when I click the bar... -- ABC

Then it stops at 1700. :) Wait, lemme actually try this... Okay, on my computer they change by 460, and you can get exactly 1660, so you can bring it down to 1200 from there. Not sure if yours are different... You can always just decompile Robocode and make it default to 1200 so you can just start it up at the right size :) -- Vuen

You can use saved battle files also. -- PEZ


And /PremierLeague king is DT. Look at that score!! Way to go Paul! You are simply the best. -- PEZ

But it is about to lose the Teams crown :-)

Now the details graph has nice coloured points. Do the colours mean something? -- Albert

Ignore them until I fix it :-p (Not sure what's wrong, actually). Eventually it will indicate your actual % score. -- Kawigi

About 2 minutes after I posted that, I all-of-a-sudden figured out what wasn't working. Now, after a little more tweaking, it's done. A line between the green dots and the cyan dots is the line between bots you beat and bots you don't. Each band to the left you lose to by more, and each band to the right you beat by more. -- Kawigi

What green dots? =) -- PEZ

It looks to me like cyan is you lose, blue is you draw (probably 40-60%, maybe 45-55%), and magenta is you win. -- Tango

=) Look closer. -- PEZ

I've looked closer, and I stand by my assessment. (I'm assuming the explanation of green dots was simply a mistake.) -- Tango

Well, it wasn't. In fact there are yellow dots for big losses. The graph above is for VertiLeach in the minibot game (look closer at the URL displayed in the screen shot). So if you look at [the Mini PremierLeague] you'll see a score of 92/0/0. No green dots for Verti. Check a bot in the middle of one of the leagues (like FloodMicro in the general game) and you'll see dots of all colors. -- PEZ

Hey, stop picking on FloodMicro (even though the nano whose gun it's trying to use is ranked higher). Red dots are REALLY bad losses as well; look at [SpareParts] for a good example of the full range. And the next version of FloodMicro shouldn't have any yellow dots (hopefully; I'm working on getting rid of green dots in the microrumble for FloodMicro). -- Kawigi

Ouch. Red dots hurt! What kind of gunning are you using in this new FloodMicro? -- PEZ

Something more like FhqwhgadsMicro. I would think that a FhqwhgadsMicro gun with a new tweak of the Flood movement should be semi-scary in micro-land. Speaking of which, I'm always surprised how much room FloodMini has free. Every time I look at it again, it has another 100 bytes I can use. This makes me wonder if I could have put the original FloodMini into a Micro with no functional loss. I do know that without DynamicDistancing, data saving, position interpolation and the wall segment, I can fit FloodMini 1.3 into a micro. -- Kawigi

That would be cool. I don't think dynamic distancing and data saving really pay very much. It could be a very strong micro. -- PEZ

It looks good ABC:

-- PEZ

Very nice indeed! :) But I think DT will eventually rise to 2nd, Tron will be in 3rd, and HawkOnFire 4th. Don't really know about SandboxLump, haven't tested with it for a long time, maybe it will fight with GlowBlowMelee for the 5th spot after some learning... -- ABC

I don't know anything about melee battle. But I think the movement I am working on might work in melee. Now that there's an active (and eternal) league I might throw some time at trying to be competitive in melee. Though I think all top-10 bots can be quite relaxed about this. =) -- PEZ

That is exactly why a melee rumble was needed, I don't want to be relaxed! ;) I'm sure you will easily find your way into the top10, all it takes is a good (melee-functional) gun and a decent movement, like Tron, or an amazing movement and HeadOnTargeting, like HawkOnFire, the best melee movement ever made, imo. -- ABC

Yep! Indeed I think that melee needs PEZ in its rankings. As I already said: it's only a matter of staying far from the center until you clean up the battlefield (and a lot of tricks too)! I promise that Musashi will soon be available to enter the melee (I was too busy with that 1v1). I hope that ABC and Paul will have a bad time soon. :^) -- Axe

With the best melee bots, you don't have to stay away from the center in the beginning (but I've only noticed FloodHT and DuelistMiniMelee actually keeping away from the walls when everyone else is against the walls). As long as there is room by the walls, though, it's not a bad thing ;-) The big puzzle is whether it's better to be hard to hit (move in the right direction) or be unlikely to be targeted (be in the right place). Hopefully FloodHT will be roughly in the same place as he is in 1-on-1. -- Kawigi

Yep, that is probably the main puzzle of melee movement. Of course the "perfect" movement is the one that manages to stay in the right place _and_ move in the right direction at the same time ;). I think currently DT favours moving in the right direction and Shadow favours being in the right place. What I like in melee fights is that there are many more variables involved in the outcome of a battle. With PM based guns (like mine), for example, staying near the corners also gives you the advantage of better scanner info for targeting. -- ABC

It seems counterintuitive to me, but ELO is better at predicting expected outcomes for melee than for 1v1 (so the overall ranking is more robust). Just take a look at the graphs and you will see that PBIs are smaller for melee. Maybe because there are many enemies on the field and strong points/weaknesses compensate? -- Albert

I would expect there to be higher PBIs in melee when few rounds have been played (in both melee and 1v1), and lower when more rounds are played. There are more bots, which means there is more chance of crazy results (e.g. the 2 best bots kill each other, leaving poor bots to win), but over time the crazy results cancel out more than in 1v1. -- Tango

*grin* Nice to see both of ABC's bots on top of the melee rumble! Great work! -- Vuen

I still think that corners and walls are the best places to be in a melee; staying in the center means that you can be more easily caught in crossfire, and also means that your radar will have to sweep 360 degrees if you want to scan everybody. By the way Vuen, it seems that your Preferences are wrong; your edits appear as Axe, that is, me. :P -- Axe

The other thing against being in the middle is that a lot of bots seem to shoot at the nearest enemy. If you are in the center then you stand a real good chance of being nearest to a lot of enemies and that can never be good :) -- jim

I was going to ask, but you guys started anyway. =) Please go on with this on a page like MeleeMovement. -- PEZ

Melee results are more volatile, but the spreads aren't as pronounced as in one-on-one, which is why the PBI's are so small. -- Kawigi

Maybe it's because melee results are more volatile, maybe it's the small number of entrants, but I never expected to see Tron in 1st place like it is now... I left my work computer running the melee rumble for the weekend, I was expecting DT to come very close to 1st place, or even take it, since it should now have good data saved on all enemies. Shadow has exactly the same "start from scratch every time" gun as Tron, and, imo, a much better movement, I never expected to see it overtaken by Tron... maybe Tron's movement is stronger against the weak? -- ABC

I also believe the gun is better against the weak - Tron has always been classic against the SampleBots. I suspect it just "fits the S-curve" better than DT does in melee. I think bots can do well in melee with a very simple movement in the right part of the battlefield (Infinity or DuelistNanoMelee are good examples, mostly because they are very simple but designed for melee). On that note, Tron will tend to make those simple bots do worse than if DT were there and not Tron. All just a rough theory, though. -- Kawigi

Tron (and probably Shadow too) kills the sampleTeam single-handed! :) Maybe the increased round count, compared to the ER, and the "everybody fights everyone else" rule of the melee rumble can explain why my gun seems to be working wonders in the RR@H. SandboxLump in 4th place is also a surprise for me, since PM-based guns (again, like mine ;)) can crush its corner movement after a few rounds. It's another case of being in the right place, even if not moving in the "right direction", working quite well in melee. -- ABC


Why is the expected score on the details page of bots in the minirumble etc. based on the wrong rating? (I haven't done the maths, but it looks like it is using the mini rating of the bot whose page it is, and the opponent's roborumble rating, or something... all Recrimpo's mini PBIs are positive, many of them with green backgrounds, yet they should average to 0, shouldn't they?) -- Tango

When the bot rating is stable the PBIs should average to 0. But they will not when the bot is moving up or down the rankings (i.e. a new version is released). Then they will average to a positive number (when the bot is climbing) or to a negative number (when the bot is going down). The momentum index shows it: a big positive number means your bot is going up, a negative number that your bot is going down, and a small number that your bot's rating is stable. Note that PBIs are not useful at all until your bot's rating stabilizes. Recrimpo has 96 battles fought as I write this (probably fewer when you checked), so what you are describing is just your bot going up :-) -- Albert
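A small sketch of the relationship Albert describes. The expected-score curve used here (base 20, divisor 800) is the commonly quoted RR@H form, so treat the exact constants as an assumption; the battle data is made up.

public class PbiDemo {
    /** Expected %score from the rating difference (curve constants assumed). */
    static double expectedScore(double myRating, double enemyRating) {
        return 100.0 / (1.0 + Math.pow(20.0, (enemyRating - myRating) / 800.0));
    }

    public static void main(String[] args) {
        // {my rating, enemy rating, actual %score} - invented example data
        double[][] battles = { { 1700, 1600, 68 }, { 1700, 1750, 47 }, { 1700, 1500, 80 } };
        double sum = 0;
        for (double[] b : battles) {
            sum += b[2] - expectedScore(b[0], b[1]); // per-pairing PBI
        }
        // A stable bot averages near 0; a clearly positive average means it is climbing.
        System.out.println("average PBI: " + sum / battles.length);
    }
}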

Ah! That's ok then. And that also explains where momentum comes from, which was something i intended to look up in the source. :) -- Tango


In case anyone wonders. I won't leave Paolo in the rumble. I'll just keep it there until Wednesday so I get a wiki snapshot of its ranking. After that there will be one Swedish flag less in the top-10. For a while. -- PEZ

The king is dead -- long live the king!

Shadow takes the 1v1 top position and becomes the new King of Robocode!

-- Albert


There's something a bit strange with what battles are fought. I released Tityus 0.8.9.1 around lunch time (CET). Then I let my client run and I could see in the ranking table that it was only my client running Tityus battles. I let the client run 200 battles and then closed it down (wanted the moment of 1900+ points to last a little). Then half an hour or so later T had a battle count of 1800! Now it's up at 2100+ battles. If it was 500 for regular rumble and 500 for mini it would be at max 1000. I'm not complaining, but this is a bit strange. -- PEZ

It can happen that someone is running the client off-line and then, when they re-connect, there are a lot of new results? -- Albert


I added arthord.mini.ProofOfConceptExploit 1.0. I'm 99.9 percent sure it'll work, so any comments/ideas/suggestions on how to close the hole? Obviously using the exploit this blatantly would never last a day, but if you had a few ProblemBots that needed to stop coming up this would be a formidable tactic. -- Kuuran

Fenrir dropped 15 positions :( I did test it against my standard set of opponents and it did better than the previous version. I probably need a new set of reference opponents... --Loki

Here is my list of test opponents:

--deathcon

I use a wider set of test opps. I think that using only top bots as refs won't give you a good representation of the RoboRumble "real world". I use instead The_Top_10+My_Problem_Bots+Some_Old_Problem_Bots. -- Axe

Smog can give you a good picture of how you're doing against deceptively simple things. -- Kuuran

Thanks for the tips. DT and Shadow were already part of my test set, next to TheArtOfWar, PrairieWolf, Wolverine, Floodmini, GlowBlow and some other bots that are my problem bots (or had that status in the past).

Some testing this weekend showed that its movement profile has a stronger peak at 0 degrees, making it more vulnerable against 'low end' bots that just shoot straight at it. Back to the drawing board... -- Loki

Before you scrap a movement that may otherwise be good for a small +0 spike (I'm assuming you meant a moderate spike and otherwise flat movement) read MusashiTrick. -- Kuuran


Recent versions of Shadow seem to have a tendency to reappear in the rankings lately. It has happened before, but this time I noticed that they even fight the current version of themselves, and are probably influencing the ranking table. Does anyone have a theory on why this happens? -- ABC

I think it can have something to do with a dormant client that wakes up; even if it downloads the new version of a bot and starts running it, when it uploads data it uploads old battles where previous versions might be included. This seems to cause the client to start running those old versions... Quite strange. But I seem to be able to get my client to snap out of this by removing the results file before starting it. -- PEZ

But in this case the client managed to run Shadow 2.42 vs Shadow 2.31. That should be impossible, since they never appeared in the participants page at the same time... The only possibility I see is if someone altered the participants file locally. -- ABC

It can happen that both versions of your bot appear in the rankings, if a client that has been running disconnected from the internet uploads results for the previous version (it can happen also if it has battles pending for upload and uploads them after some days...). It usually lasts for a short time, till a client checks the rankings against the participants and sends the order to remove the old version. In theory, it is not possible to have two versions of a bot fight in RR@H, unless there was a time window where both versions were present in the participants page. In any case, it should not affect the rankings, because only the active participants are used to calculate them. Also, because RR@H reevaluates the rating of a bot against ALL the active participants each time a battle is uploaded, any error should correct itself pretty quickly. -- Albert

Remind me why we reject results where one bot has got a score of zero. The later versions of Aristocles seem to shut out some of their enemies quite often. It seems unfair that those 100% wins don't count. If it wasn't a micro I could maybe check in the last round if my opponent has gotten any points and give it a hit, but that doesn't really fit in Aristocles and seems like a really weird thing to do anyway. -- PEZ

When we made the decision we thought it was impossible to get a 0 score unless something had gone wrong, so to stop client crashes harming results they were blocked. If we now have a bot that can get 100% then I guess it's time to remove the block (the bugs in the client that caused the original problem have been fixed, I believe). How does Aristocles manage it anyway? It seems almost too impressive... -- Tango

I've just run a few battles, and it's just a perfect implementation of the MusashiTrick, and I think perfect is the word considering it gets 100% scores... Well done! -- Tango

Sweet! Thanks for those words. But it's not the MusashiTrick actually. Though it just might be the most perfect avoid-head-on-fire implementation yet. I think the MusashiTrick is a better choice against bots that just don't fire GF1, like Barracuda for instance. But Aristocles' way is smaller of course. Using some more bytes it could be made quite effective against Barracuda too, I think. -- PEZ

From watching it, it keeps going in the same direction until it gets hit, which is the basic idea behind the MusashiTrick. It may not be the same implementation, but it's the same general idea, isn't it? -- Tango

The MusashiTrick is the implementation. The idea of keeping going in the same direction to avoid head-on fire has been around since long before Musashi. It's the basic idea behind WallsPoet and its siblings, for instance. It's one of the things making Walls successful, for that matter. You wouldn't say Walls is using the MusashiTrick, would you? -- PEZ

No, because Walls doesn't start changing direction after it realises that going in a straight line doesn't work. The clever bit of the MusashiTrick is not the fact that it goes in one direction, it's that it stops doing that once it gets hit (at GF1, because it wall-bounces; your bot wall-smooths, so I would guess it's always hit at GF1 and doesn't need to check). -- Tango

Still, it can be and has been done using EnemyWaves. I'd say the clever part was to do it that accurately without EnemyWaves. I tried for quite a while to fit EnemyWaves into Tityus but couldn't get them below 300 bytes. When Axe presented the MusashiTrick I was immediately saved, since it only costs 40 bytes or so. And look at it this way: Paolo uses the MusashiTrick too, but not to start changing direction or anything like that; it just tries to balance its GF1 exposure. So the MusashiTrick is the implementation of a way to quite accurately know if you are being hit at GF1, even if you are wall bouncing. Aristocles does not use the MusashiTrick to know this. Paolo uses it, but has a different strategy for utilising it. See what I mean? -- PEZ

Yes, I think I see what you are saying. You are using the term MusashiTrick to refer only to the detection of being hit at GF1, and not to the action taken upon that detection, whereas I was using it to mean the action taken, but not how it was detected. -- Tango

Yes, and I think describing its different uses in, say, Paolo and Musashi would get really messy with your definition. =) The real contribution of the trick is the detection mechanism. It's very cost effective in terms of CodeSize and accuracy. The reason I don't use it in Aristocles is that I'm playing with the last 3 or so bytes there all the time, and my less accurate way costs some 15 bytes less. It costs points against Barracuda and the likes, but gives the satisfaction of seeing those 100% wins against head-on targeters. -- PEZ
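For readers who haven't seen it in code, a minimal sketch of the flip-on-hit idea discussed above. This illustrates the general technique only; it is not the actual Musashi or Aristocles implementation.

import robocode.AdvancedRobot;
import robocode.HitByBulletEvent;
import robocode.ScannedRobotEvent;
import robocode.util.Utils;

public class FlipOnHit extends AdvancedRobot {
    int direction = 1;

    public void run() {
        turnRadarRightRadians(Double.POSITIVE_INFINITY);
    }

    public void onScannedRobot(ScannedRobotEvent e) {
        // Orbit the enemy: turn perpendicular to its bearing and keep moving.
        setTurnRightRadians(Utils.normalRelativeAngle(e.getBearingRadians() + Math.PI / 2));
        setAhead(100 * direction);
        setTurnRadarRightRadians(Double.POSITIVE_INFINITY);
    }

    public void onHitByBullet(HitByBulletEvent e) {
        // Getting hit while holding one direction usually means head-on (GF1)
        // fire is connecting, so reverse.
        direction = -direction;
    }
}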

I just caught my client doing the multiple versions thing:

Iteration number 21
Preparing battles list ... Using smart battles is true
Executing battles ...
Fighting battle 0 ... pe.SandboxDT 2.61,jam.micro.RaikoMicro 1.1
RESULT = pe.SandboxDT 2.61 wins 3141 to 1788
Fighting battle 1 ... pe.SandboxDT 2.61,pe.SandboxDT 2.51
java.lang.OutOfMemoryError

There is no way the participants file could have both versions of DT, so why does this happen?

-- ABC

I added my bot to the participants list several days ago. Anyone know why it isn't in the rankings? -- Mike Z.

It's because the participants page is sensitive to formatting (naturally), and there were a couple of entries with a space between the "," and the bot number. This causes the bot to not be downloaded by any client, so it never runs any battles. I've removed the spaces on all the ones I could see, so hopefully you will get a ranking soon. --Brainfade

You should also download the RR@H client and help run battles. If you install it on a Robocode installation separate from where you develop your bots, you will also immediately see if there are any problems downloading your bots. -- PEZ

Thanks. I might set up my other comp to run the RR@H client a lot. -- Andrew


For the record: I might still think that ELO obfuscates the ranking table some. But it seems I am the only one having trouble understanding it. And, since I have given up on trying and just think "this or that many points in this or that direction", I have begun to not only accept it. I like it! ELO rocks! -- PEZ

I still think we must do something about this problem:
Fighting battle 0 ... pez.micro.Aristocles 0.3.5,kawigi.micro.Shiz 1.0
RESULT = pez.micro.Aristocles 0.3.5 wins 4555 to 0
Fighting battle 1 ... pez.micro.Aristocles 0.3.5,kawigi.micro.Shiz 1.0
RESULT = pez.micro.Aristocles 0.3.5 wins 4600 to 0
Fighting battle 2 ... pez.micro.Aristocles 0.3.5,kawigi.micro.Shiz 1.0
...

Sometimes the client locks in on pairing two bots. But that's not the problem. I think we must accept that a 100% win isn't illegal. Shiz is not the only bot that Aristocles shuts out. -- PEZ

I've noticed that each time DT is updated it has to build its ranking up from around 1900 - I think the start point is the ranking of the first DT. Should it use the latest DT ranking as a start point? -- Paul

I think that's just because DT is so strong. All bots actually start at 1600. But DT only needs a few battles to prove it doesn't belong there. -- PEZ

Is it just me or is DT 2.71 *extremely* slow? -- Tango

I've just run a test, and DT vs DT gets about 180fps, which isn't great, but isn't terrible... maybe the client was just competing with something else... -- Tango

How did DT 2.71 suddenly get so many battles? -- ABC

That was me - last night, just as DT was getting close to completing 500 battles, I set my roborumble file to run 1000 battles per iteration - the results just came in in one batch. But even with a single client contributing to the scores it did not break the 2000 barrier - it looks like I'll have to make DT better to get there :( -- Paul

Cool hack! =) But. Everybody, please don't do that trick again - or any other trick. My poor Tomcat server still hasn't recovered from the massive upload. If you want to experiment with stuff like this, install a local RR@H server and go ahead on that one. If you ask me nicely I can pack the server battle data for you so you get a reference starting point. -- PEZ

Sorry about that PEZ - is there any way the RR@H server can have a second priority setting for smart battles, so that once all known bots have had their 500 battles, priority for the first selected bot is given to those bots that have fought a lower number of battles? -- Paul

It's cool. My server survived. =) I have wished for that feature too. Never gotten around to trying to implement it though. I would like it if the client prioritized battles for bots in inverse proportion to the battles they have compared to the bot with the most battles. And preferably starting by filling the ranking tables from smallest to largest. If it's a nano it would catch up with the other nanos quicker then. Maybe this is too complicated to implement, but it would keep the battle counts of the bots nice and level I think. -- PEZ

Well, at least the client ensures all pairings are fought :-) The problem is that the client only downloads the complete file from time to time (to avoid overloading the server), and it would make many clients run the same battle. But I'm sure there is some workaround. I'm very busy nowadays, so it is quite difficult for me to implement. But feel free to do it. -- Albert

What Paul did would have no more effect on your servers than someone running loads of battles offline and then uploading them all at once, so does that mean you have problems if people do that? (If so, I might need to remember to stop my client halfway through uploading, and give your server a chance to catch up after I've left it running battles for a while.) -- Tango

Yes, I guess it does. But 1000 battles are a bit extreme. The server still stands! =) I have had problems like the ones I got from that huge upload before, but I have never known anything that can cause it. Maybe it's off-line clients. Maybe the clients should have a go-easy-on-PEZ's-poor-server feature... -- PEZ

I think I may have hammered you one time when I left my client running and my connection died at some point - it took me 45 minutes to upload the results when I fixed the connection. Maybe you could just bodge in a counter that stops the client if a certain number of iterations are made without reaching the server. I'd imagine it'd just be a case of incrementing a value in the server-not-found try/catch statement. I suppose it just depends how much of a problem it is... --Brainfade

The PL isn't working... I agree that a better pairing logic would be great, and so would a more server controlled battle list. A client should be limited to a number of consecutive battles before an upload/update. We are not really playing the same game when this kind of experiments are made. -- ABC

What about PL? -- PEZ

I see it. It was probably just bad luck. We'll wait for the next batch. -- PEZ

You don't need to stop the client running battles, just set a limit on the number of battles that can be uploaded during one iteration (say 90 battles, or 3 iterations' worth), and if execute is set to NOT, set a delay between the iterations. Also, hardcode more stuff; most of the things in the properties file should never be changed, so they don't need to be changeable. And if it doesn't already, don't let the client use smart battles if it hasn't downloaded this iteration. -- Tango
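A rough sketch of that throttling idea; all names here are hypothetical, not the actual RR@H client code:

import java.util.List;

public class UploadThrottle {
    static final int MAX_RESULTS_PER_UPLOAD = 90;     // about 3 iterations' worth
    static final long DELAY_BETWEEN_UPLOADS = 60000L; // one minute

    /** Hypothetical hook into whatever actually posts results to the server. */
    interface ResultUploader {
        void upload(List<String> results);
    }

    /** Uploads pending results in capped batches, pausing between batches. */
    static void uploadAll(List<String> pending, ResultUploader uploader)
            throws InterruptedException {
        while (!pending.isEmpty()) {
            int batch = Math.min(MAX_RESULTS_PER_UPLOAD, pending.size());
            uploader.upload(pending.subList(0, batch));
            pending.subList(0, batch).clear();
            Thread.sleep(DELAY_BETWEEN_UPLOADS); // give the server room to breathe
        }
    }
}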

Please consider implementing that. -- PEZ

It seems that it is David's client that has problems with DT (and probably Brainfade's client too, but no recent records there). Please check it. Anyway, I keep insisting: please release all your bots considering RR@H characteristics (i.e. disable reference modes). Until now, all major "problems" in the rankings have appeared because of bots that allowed these problems to appear, when they were easily avoidable. Having a clean Robocode installation for RR@H is a good practice, but it shouldn't be a must. -- Albert

Ahh crap, I forgot I had DT in reference mode. I changed DT over prior to releasing Slain 0.37 (I wanted to test something), and then ran a couple of iterations of RR@H that all included Slain (none of the other battles could have been me). Slain may well have fought Sandbox. I don't know whether it sorts the problem, but I have since removed Slain from the rumble, as it was purely intended to be a test - does this eliminate its effect on the rumble rankings? Ordinarily I run roborumble off a separate installation, but the one I use for testing broke, and I wanted a quick release so I didn't reinstall it. Sorry guys, I didn't mean to mess things up :( --Brainfade

It's not the end of the world dude! =) And I think removing that version of Slain also removes the impact those particular battles had. Cool that this can be sorted out so soon. -- PEZ

It was me too, I had DT in reference + hiseg mode on my laptop from when I ran the TargetingChallenge in hiseg mode. Sorry Paul. :-( --David Alves

Wow... did you see what happened after David removed the two melee bots? Most of the bots using the MusashiTrick have momentums of -100 or worse. Scary... I feel like going out and writing a couple of head-on-targeters to compensate. :p -- Jamougha

Go ahead, a couple of high ranked head-on-targeters would be great. ;) -- ABC

Yep. Seems that David wants to defeat us by starving ... ;) -- Axe

I don't know if he will succeed, though ;). The bots that used to lose against Duelist*Melee will also go slightly up in the rankings, and, because we still defeat them by the same margin, the top ones will also go up again. Everything should stabilise around the same numbers; it's only a small temporary disturbance to the ELO ranking system. -- ABC

Wow, why does my team have so many downloads? It can't be that good :P -- Mike

It's a strange bug in the repository download counter. It seems like you have somehow inherited Shadow's download count. -- PEZ

@ABC The thing that pushes your ranking around the most is greatly over- or underperforming. Bots using the MusashiTrick were greatly overperforming vs. Duelist*Melee. I would expect the ranking of the MusashiTrick bots to drop significantly. Maybe someone can provide their bot's rating before and after? --David Alves

Cool. My trick is a menace! :) --Axe

With more than 250 bots in the mega-rumble and several RR@Home clients, there is a very good chance that DT will not see an opponent more than once before it is no longer selected for smart battles. There are two solutions for this - either include a data file with the bot (which simply encourages releases of bots with new data files) or increase the limit for smart battles on mega bots (or, after all bots have had their allocated rounds, prioritise the bot with the fewest battles). -- Paul

Agreed. And I think all agree that this would be good. It's open for anyone to implement it. -- PEZ

Rather than only running battles for the bot with the least, I think it would be better to simply weight the random chooser by the reciprocal of the number of battles. This could be instead of the current 500-battles rule, or in addition to it; I'm not sure which would be better. -- Tango (I'll look at the code later this evening, and see if I can do it)
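A sketch of what that weighting could look like; the names are illustrative, not the actual client code:

import java.util.Random;

public class WeightedChooser {
    /**
     * Picks a bot index with probability proportional to 1/(battles+1),
     * so under-battled bots are chosen more often.
     */
    static int pick(int[] battleCounts, Random rnd) {
        double[] weights = new double[battleCounts.length];
        double total = 0;
        for (int i = 0; i < battleCounts.length; i++) {
            weights[i] = 1.0 / (battleCounts[i] + 1); // +1 avoids division by zero
            total += weights[i];
        }
        double r = rnd.nextDouble() * total;
        for (int i = 0; i < weights.length; i++) {
            r -= weights[i];
            if (r <= 0) {
                return i;
            }
        }
        return weights.length - 1; // floating-point fallback
    }
}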

My client doesn't seem to be doing the 500 battles thing. It says:

 Prioritary battles file not found ...

Anything I can fix? -- Tango

You will always get this message in the first iteration. Don't care about it, the client works fine. -- Albert

Could somebody run the teams rumble? I'm not able now (not enough memory) and there are new entrants that are not ranked. -- Albert

I was the one who uploaded the last team results. As you can see, I wasn't able to run any DT team battles either. I know that my team also consumes some memory... ;) BTW, I have a thought about the problem above: what we could do is set up a new rumble where the 50 (or 30 or 20 or 10...) best bots of the general table meet to fight for the crown. I mean, the number of participants will increase all the time, and the saved learning data will be distributed more and more thinly across all the people running the client. If we have such a "new" rumble we don't need to worry about the learning stuff as much as before. Just an idea... -- rozu

Or possibly a new rumble which has a restriction of one bot per person (or group of people). If only the person who owns the bot adds it to the new rumble, we will only get bots from active robocoders, which will also cut it down a lot. I imagine there are fewer than 50 active robocoders, so that means fewer than 50 bots. (PS: This should be as well as the current one. It wouldn't need as many battles run, so shouldn't be a problem. TheBestBot should be decided by the current one.) -- Tango

I would probably run team battles if DTTeam 2.71 was removed as well :-p Since robocode sets itself up to use a heap size of 256 Megabytes, no team or robot in the rumble should probably use more than that on their own. I won't say that a team necessarily has to be half that, maybe they can depend on their opponents using less, but using more than the default max memory seems like a bad idea. -- Kawigi

Melee rumble when DT and others like him are involved eats memory like there's no tomorrow to begin with, I really can't reasonably run teams with five of it. -- Kuuran

It seems that DT is the bot that is preventing us from running the teams league. I'll remove it from competition till its long-lasting memory problems are fixed. -- Albert

Hmm, something looks strange here, or my bot sure does have a problem with GlowBlowAPM: rz.GlowBlowAPM 1.0 0.9 1 13-5-2004:15:28 56.5 -55.6

A PBI of -55.6 is rather fascinating :) I'm not at home so I can't do a test run against that bot until later tonight, but I will have some fun with it late this evening I guess! -- Pulsar

If you are shooting head-on or using a pattern-matcher, there's a good reason to have problems with GlowBlowAPM (the Anti-Pattern-Matcher GlowBlow) -- Kawigi

But it's GF targeting here. And the score is 0.9. My bet is that Pulsar has a bug in his bot. Fix it and you just might collect lots of rating points! -- PEZ

1st: pulsar.PulsarMax 0.2    3351  1350  270  1487  244  0  0  27   8  0
2nd: rz.GlowBlowAPM 1.0      1703   400   80  1124   98  0  0   8  27  0

From what I can see from watching, there are no tendencies that suggest a bug. We'll see I guess. I wonder if it could be my roborumble client, which keeps crashing out of memory on one of the computers, that produced such a result? If so, this might not be the only such result. No idea why it would crash; it has the memory and I increased it to 384m. Oh well, I guess I shouldn't run it on that computer. -- Pulsar

Ooops, that computer was running with jdk 1.5 beta 1, that's probably why it kept using more and more memory (for some reason). I hope that hasn't caused any strange results etc for anyone. I stopped that one now and made sure other(s) are using 1.4 :-/ -- Pulsar

The rankings are broken, all give a fatal exception: 500 Servlet Exception

javax.servlet.ServletException: Class `Rankings' was not found in classpath. Classes normally belong in /WEB-INF/classes.

	at com.caucho.server.http.Application.instantiateServlet(Application.java:3185)
	at com.caucho.server.http.Application.createServlet(Application.java:3093)
	at com.caucho.server.http.Application.loadServlet(Application.java:3054)
	at com.caucho.server.http.QServletConfig.loadServlet(QServletConfig.java:418)
	at com.caucho.server.http.Application.getFilterChainServlet(Application.java:2794)
	at com.caucho.server.http.Application.buildFilterChain(Application.java:2750)
	at com.caucho.server.http.Invocation.service(Invocation.java:310)
	at com.caucho.server.http.CacheInvocation.service(CacheInvocation.java:135)
	at com.caucho.server.http.RunnerRequest.handleRequest(RunnerRequest.java:342)
	at com.caucho.server.http.RunnerRequest.handleConnection(RunnerRequest.java:272)
	at com.caucho.server.TcpConnection.run(TcpConnection.java:1

Sorry I can't help more... please fix this :D

TIA -- M3thos

I'm working on setting them up on davidalves.net and will keep you posted on my progress. Hopefully I can be finished tonight before my girlfriend comes over, if not it won't happen until late saturday. ;-) --David Alves

Is there a way to avoid a specific pairing? I have fought 49 of my 404 battles (12%) against SilverSurfer, and that's nice for Axe, but at this pace I'll never reach rank 50. ;-) -- GrubbmGait

Never mind it. It doesn't matter too much for your ranking. In due time we will have this bug fixed in the RR@H client. -- PEZ

It doesn't bother me, I am already glad to see that tweaking with bulletpower (ok, and solving one minor bug) can give me such a leap forward. -- GrubbmGait

Are there any more thoughts about preventing/counteracting the rating drift? There were some reactions a few weeks ago, but I can't find them. Anyway, I have an idea to counteract it.
As everybody knows, every new bot is entered at a rating of 1600. When it fights battles it wins points from, or loses points to, others. The point here is that the total amount of points should always stay the same. Simply said: if there are 300 bots, the total amount of points must be 300 * 1600. If not, the drift can be easily determined and dealt with when, for example, the weekly rankings are archived.

Btw, drift is only introduced when a bot retires without a successor, or when its successor does not start on its current rating.

-- GrubbmGait

Interesting. It would be really sweet to get rid of the drift. Lately there has not been so much of it. Might be because I am not updating CC all the time. It always gets its rating reset to 1600 with each new version, for reasons unknown. Not so with BeeRRGC. -- PEZ

I calculated the drift and the outcome was quite different from what I expected. Currently the average of all ratings together is 1613.79. This means that EVERY bot's rating should be lowered by 13.79 points! That is approximately 0.00002 points of drift per battle.
Is it possible that a rounding error somewhere in the calculation is the cause of this? As far as I know, the rating calculation plays break-even (bot A wins the same amount of points as bot B loses). --GrubbmGait

I guess it could be a rounding error. One that's probably a bit hard to avoid too. But if the ratings are always compensated with the diff average_rating - 1600 then things would be cool? But we would still have the other drift you talked about, or? I'm clueless on the rating system, as you might have suspected by now. I'm an advocate of using raw score_share_average_percentages instead. Produces the same ranking, but without the voodoo filter. -- PEZ

If you compensate on a weekly basis with average_rating - 1600, the corrections would be small (max 1 point I guess) and all drift is eliminated on that moment. --GrubbmGait
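The weekly correction would then be tiny; a sketch under the assumption that all ratings are available in one array:

static void compensate(double[] ratings) {
    double sum = 0;
    for (double r : ratings) {
        sum += r;
    }
    double drift = sum / ratings.length - 1600.0; // e.g. the +13.79 measured above
    for (int i = 0; i < ratings.length; i++) {
        ratings[i] -= drift; // shift everyone so the average is 1600 again
    }
}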

There seems to be a problem with rz.SmallDevil: it fights a lot of battles in Megabot (19000 in the last few days), has only 2 battles in Mini, and its details cannot be viewed. Probably someone has to restart their client. --GrubbmGait

No, I think there must be something wrong with its rating files on the server. I checked them, but couldn't see what's wrong. I'll check again now. -- PEZ

I think I have fixed it now.

About the rating drift reset. We'll have to ponder it some I guess. It's rather drastic to introduce it.

-- PEZ

Don't calculate drift by adjusting ratings so that the average is 1600! Since most new bots are introduced above 1600, if we always keep the average at 1600 then bots which are not updated will slowly lose points over time as more and more bots get added to the top. The point of drift correction (in my mind at least) is to make sure that a given bot's rating is always the same. For example, re-introducing SandboxDT 1.91 should give it the same rating that it had back when it was first introduced. The way to do this is to pick a bot to have its rating fixed; for example, we agree that tobe.Fusion 1.0 has a rating of 1590. Now we adjust all the ratings upward or downward so that tobe.Fusion 1.0 always has a rating of 1590. This ensures that a bot's rating is stable over time, even though new bots are generally high-rated ones. tobe.Fusion seems like a good choice to me because it's close to 1600, has a low specialization index, and it's a way of honoring Tobe. If you read Paul's original document on rating robots, he mentions that this is the way to do drift correction. --David Alves
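In code, the anchor correction is equally small. A sketch (the bot name and the 1590 figure are taken from David's example):

import java.util.Map;

public class AnchorCorrection {
    /** Shifts all ratings so the chosen anchor bot keeps a fixed rating. */
    static void correct(Map<String, Double> ratings) {
        double offset = 1590.0 - ratings.get("tobe.Fusion 1.0");
        for (Map.Entry<String, Double> e : ratings.entrySet()) {
            e.setValue(e.getValue() + offset);
        }
    }
}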

And this doesn't mean that a bot's rating is in any way decided by its performance against Fusion? I guess it doesn't, but I have to ask. =) This is something for the new RR@H server to introduce, I think. Once it can produce somewhat the same ratings as today's server without drift correction. -- PEZ

@David: I started this topic again because I thought it would (partly) solve the continuous slow drop in ratings. For some reason the outcome was completely different from what I expected. I don't like the degradation of rating over time, because you can't really compare two versions of your bot, even if they are only a few weeks apart. So now you have my vote to use an 'anchor'; which one I leave to the veterans of robocoding. (And this way someone might even reach the 2100 border this year ;-) ) --GrubbmGait

The rankings seem to be drifting a LOT more than they used to, or at least they're going consistently down. It's kind of scary. How long will it be until a new server can solve this? -- Alcatraz

Well we can move to the new server right away. The only problem is that ratings on the new server will be very different than ratings on the old one. If we can all live with that then let's do it. --David Alves

What's important to me are the rankings. If the rankings are the same then I'll vote for a move. -- PEZ

If you feed all the battles to the new server the ratings end up different? If that is the case there must be something wrong in the new server code... -- ABC

I rewrote it from scratch and may be misreading some part of the original code. The order that battles are submitted may also affect the rankings, right now I'm sorting them by timestamp but that's only an approximation of the battle submission order. I'll look over the code some more later today. --David Alves

You should talk to Albert, he's the only one who truly understands what's going on in the server code... -- ABC


Florent - Congratulations on FloatingTadpole! I realize its rating hasn't stabilized but the outcome looks certain. I think the last person to break the top 50 was GrubbmGrb. If you need a break from writing bots, how about adding wiki pages for yourself and your bots? --Corbos

Actually FloatingTadpole is my project for an OO design postgraduate class, so I am already writing one report about it; I might put it on the wiki when it's done. FloatingTadpole uses wave surfing with forward/backward sensors (the precise prediction is not working yet) for movement, and virtual guns for firing (5 different GF targeting guns and one symbolic pattern matcher). --florent

Congratulations on becoming a stable subtopper instead of a pain in the **** ;-) --GrubbmGait


Two congratulations in one day? Guess I need another week off at work. Anyhow, congratulations to StefW on Tigger. Nice work. (I especially like the new data structures.) --Corbos

How do you get a good ranking? I have tried my robot against many of the top 100 ones and I consistently do better than reported in the rumble. Does it have something to do with the fact that one battle usually consists of a few rounds? -- Kinsen

I did the exact opposite: I never tested against bots stronger than me; I tried to be able to beat the ones below me first. And I am still doing that, concentrating on the ones against whom I should get a high score but where my ProblemBot index is below -10. --Florent

Most of the top-100 bots do better with longer battles, because they keep statistics between rounds (and they have GF or PM guns). A lot of them even keep stats between battles; you can check that by searching for files <packagename>.* in the .robotcache directory. Mostly I test by running 3 battles of 35 rounds. It is also good to focus on a specific aspect. I got rid of my ProblemBot ahf.nanoAndrew by implementing the StopNGo movement, and as a consequence no LinearTargeting bot can really hurt me anymore. Now I am concentrating on simple pattern-movers like Infinity and strider.Mer --GrubbmGait

In the rumble each battle is 35 rounds, so if you are comparing, use that number of rounds when you test. The top bots keep statistics between rounds by saving them in static variables (some even between battles by saving to a file, but many don't save to file; I don't, for example). Having said that, the secret to getting a high ranking is not only to beat the good bots but to be able to beat the simpler bots by a high margin. Look at what the top bots do against a robot that fires only head-on, for example! Quite a few bots use that kind of targeting, where you simply fire in the direction of the enemy at the time of firing. -- Pulsar
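For anyone new to the term, head-on targeting in Robocode is just a few lines; a minimal sketch (a hypothetical demo bot, not any rumble participant):

import robocode.AdvancedRobot;
import robocode.ScannedRobotEvent;
import robocode.util.Utils;

// Fires straight at the enemy's current position every time it is scanned.
public class HeadOnBot extends AdvancedRobot {
    public void run() {
        while (true) {
            turnRadarRightRadians(Double.POSITIVE_INFINITY); // keep the radar spinning
        }
    }

    public void onScannedRobot(ScannedRobotEvent e) {
        // Absolute bearing to the enemy = own heading + relative bearing.
        double absBearing = getHeadingRadians() + e.getBearingRadians();
        setTurnGunRightRadians(
                Utils.normalRelativeAngle(absBearing - getGunHeadingRadians()));
        setFire(2.0);
    }
}

A decent random or surfing movement makes a gun like this miss almost every shot, which is where the big margins against the weak end of the field come from.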

Congratulations to Florent for going up in the rankings and overtaking Tigger! -- StefW

Thanks, I had a huge bug in my GF guns, they only worked half the time. I don't doubt that Tigger will react soon ;) . --Florent


Geez... I have some crazy specialisation happening with mine. Not sure why, possibly because the CircularTargeting gun is so much more effective than the LinearTargeting one. I really need to add some half-decent guns to my bot, it's definitely the bottleneck at the moment. -- DarienPhoenix

An adequate CircularTargeting gun is quite effective (see Gruwel), but is not able to hit some movements (like cx.nano.Smog). My CT gun is still my most used gun, although my PM gun looks promising. Your Specialization Index is not that bad for your ranking. Even top bots have 'Angstgegner' (feared opponents). --GrubbmGait


I just have a general question about 1v1 RoboRumble scoring. As a gamer that is a big fan of 1v1, my intuition is to go for winning the majority of the rounds as opposed to getting a high score. I know that these often correlate, but sometimes they don't; the MovementChallenge and WaveSurfingChallenge are a prime example, where you can "win" the majority of the rounds but have a much lower score than the firing bot. Has there ever been talk of, or actually been, a RoboRumble where only the number of rounds won is the deciding factor? (You could still calculate %'s based on rounds won for the rankings; and the majority winner for PL, obviously.)

I certainly understand the scoring for Melee, and I really have no problem with it for 1v1, it just seems counter-intuitive to me as a gamer. The World Series, for example, doesn't count total runs, batting average, and errors... it's just games won ;) Just curious... -- Voidious

The closest thing is Premier League, which is the [PL] option to the right of the regular weight-class based listings. It scores based on matches won, rather than the ratio of scores for the match compared to the ranking of the bots facing off yada yada. You still have to outscore your opponent to win each match though. -- Martin Alan Pedersen

Yep, it is philosophically closer to what I'm saying, but it's still based on scores over rounds won, which is the real issue for me. Via some digging (and, uh, searching) I found that there used to be a Survivalist league in days long past, run by David McCoy. I'm quite content to compete in the current ranking system based on score, but maybe I'll focus on Survivalism in a later bot, or [RobocodeNG]. -- Voidious

Firing bullets at all is bad for survival, except at very close range. A power 2 bullet does, what, 10 damage. It's very hard to get to a 20% hit rate even against a relatively weak mover. If bullet damage didn't play a role then bots would just circle around one another, waiting for the other to fire. Not much fun! :-) -- Jamougha

Jamougha, I definitely see your point... However, that seems fun to me. ;) I've played some games of 1v1 Quake where most of the match is spent not seeing my opponent, but there's still a lot of skill and strategy involved. Not fun to watch, maybe, but I still find it fun to become skilled and win in that type of game. I'm not arguing with you, it's not like anything's stopping me from writing a survivalist, but I guess I can see how it would change the strategy of a "good tank" pretty drastically. Thanks for the input! -- Voidious

I disagree with not firing. If you don't fire, you survive longer, but it is deceiving: the main reason you survive longer is that the enemy is not losing its energy as quickly. If you had a 100% hit rate, the battle would definitely be very fast. The other problem is that you only gain 60 points for surviving in a duel. That means the opponent can win one round, do decently the rest, and still win. When you fire a 2.0 bullet and hit, you gain 6 and your opponent loses 10. The final result is a gain of 16 - 2 (the energy spent), or 14. All you need is a 12.5% hit rate and your opponent will lose the same amount of energy as you--provided that the opponent does not fire at you. However, if you fire a 3.0, you do need a better hit rate. -- Kinsen
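A quick sanity check of that power-2.0 arithmetic, using the standard Robocode energy rules (damage = 4 * power, plus 2 * (power - 1) extra for power above 1; energy returned on a hit = 3 * power; firing cost = power):

// Verifies the numbers above: a power 2.0 hit is a 14-point energy swing,
// and 2 / (6 + 10) = 12.5% is the break-even hit rate against a
// non-firing opponent.
public class BreakEven {
    public static void main(String[] args) {
        double power = 2.0;
        double damage = 4 * power + 2 * (power - 1); // 10.0 (formula valid for power > 1)
        double gain = 3 * power;                     // 6.0 returned to you on a hit
        System.out.println("swing per hit: " + (gain + damage - power));       // 14.0
        System.out.println("break-even hit rate: " + power / (gain + damage)); // 0.125
    }
}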

Hmmm, I had completely forgotten about energy gain from hitting the opponent, I guess I've been away too long. :) That would substantially change things against much of the field. (Mind you, I doubt many of the top-ten hit one another as high as 12.5% of the time. The usual hit rates used to be 10-11%, and I think movements have improved!) Anyhow, it would be interesting to see what sort of a strategy you come up with for a Survivalist, Voidious. -- Jamougha


Hmm, is there a client reporting bad results? At least for PulsarMax, which has dropped about 5-8 points in the ranking over the last weeks, dropped in the PL ranking as well, and suddenly has 9 losses instead of 1-3. Has anyone else noticed this for their bots? A theory could be skipped turns. With newer computers Robocode seems to set the tick time to 1, which seems dangerous to me for several reasons. First of all, on Windows systems measuring 1ms using regular timers is not accurate at all. Secondly, I don't trust Robocode with regards to the "robocode-framework" delay, as it now and then "jumps"/"holds" for a tiny fraction (look at the output window which gets refreshed at times, or look at the pause/resume behaviour). Probably due to the wait() with delays still there; even with the patched version of 1.0.7 there are threading problems (but they don't crash Robocode). -- Pulsar

I just now upgraded to 1.0.7, and am using Kawigi's version. No, I haven't run the RR@H client with this yet. But I notice that CassiusClay now skips turns like crazy, when before it skipped only 2 or so turns in a battle over 500 rounds. Anyone know a fix for this? -- PEZ

Yes, I noticed that one of Shadow's latest versions got 95%+ against PulsarMax, an impossible score. Some client must have trouble running PulsarMax... I have the CPU constant (manually) set to 300+ in both my RR@H clients. -- ABC

For me it doesn't help whatever I set the cpu constant to. Switching back to 1.0.6, CC doesn't skip turns anymore. Can you check what CC says in its output window when you run it, ABC? If it skips turns it says so. With a cpu constant of 300 it shouldn't skip turns at all, except when it is cornered and has trouble smoothing out of it. -- PEZ

Version 0.65 of Dookious got 90%+ against PulsarMax, another impossible score, so something is definitely going on there. Dookious 0.66 actually dealt a legitimate loss to PulsarMax, though. :-) And one of those two Dookious/CC combos did, too, but I'm going to remove them today. -- Voidious

Thanks ABC. Can someone using 1.0.7 test run CC and see if it skips turns? -- PEZ

I'm using 1.0.7, and CC skipped 5 turns over a 35-round match vs Sitting Duck. I haven't changed any CPU constant settings or anything, AMD2000 w/768 RAM. -- Voidious

And if you run it against Shadow? -- PEZ

Sorry, I was on my way out the door there. Now I'm at work... On the P4/2.4GHz here, it gets 8 skipped turns over 10 rounds against Sitting Duck, 105 over 10 rounds against Shadow. This is still Robocode 1.0.7, but Java 1.4.2 here at work; at home I was using Java 1.5. -- Voidious

Using Kawigi's 1.0.7 and Java 1.5, CC gets 3 skipped turns in 35 rounds against Shadow on a P3/1.0GHz with 128Mb(!) RAM. The cpu.constant is set to 10. Maybe it is more Java-version related? -- GrubbmGait

Maybe it happens when the CPU constant is really low, like 1, AND version 1.0.7 (patched or not). 10 is fine but 20 is better. I manually set it to 100+ though. We should probably stop using anything other than 1.0.6 in the RR@H for now, then? If you remove Kawigi's patch, PEZ, what happens then? Maybe 1.0.7 is fine in itself (with a large cpu constant). -- Pulsar

I thought I had robocode.cpu.constant.1000=100 on all of my machines, but it turns out it was set to '2' on my work machine. I am guessing the general feeling is that Matt Nelson had the right idea in terms of encouraging code efficiency and limiting the impact of a robot on a machine's performance / the time it takes to run a competition. Try running mn.CombatTeam in a team battle .. each round takes longer than a normal match. However, the implementation isn't what it could be. Other programs running in the background will have an impact on the robot's time allowances. If there were a way to measure machine cycles actually used by a thread (maybe there is), I expect that would be a better approach. When I bumped up the time allowance I felt guilty about it, but I don't have an abusive bot (though it can be slowish at times) and I'd rather see the results of my work without muddying the view with skipped turns. I used to get around 3% of turns skipped, but now it is less than 1/100th of a percent. -- Martin

The funny thing is that it matters what opponent you have. In my tests with 1.0.7, CC got loads of skipped turns against Shadow but not very many against other bots. Anyway, on my machine it is a 1.0.7 issue. I've tried different JVMs and it doesn't matter. Kawigi's patch or official 1.0.7 are the same, except that with Kawigi's version Robocode doesn't freeze after a couple of rounds with Shadow involved. I've now set single-CPU affinity for the 1.5.0_06 version of java.exe that I have settled on. I'll be running 1.0.6 except for when I experiment with RobocodeSG. RR@H clients should certainly not use anything else. -- PEZ

I imagine that Shadow will probably skip 100x more turns than CC in that setup... Any reason to justify "upgrading" to 1.0.7? Does Kawigi's patch solve that annoying debug window bug? -- ABC

Yeah. Since I am trying to figure out how to trick Shadow's scary gun, I didn't like the idea of running against a bot that probably skipped turns like crazy. Reasons to go for 1.0.7 other than masochism? Doubt it. =) -- PEZ

Kawigi's version does solve the Hyperthreading bug and adds a better editor. The debug window still redraws every 10 seconds though. -- GrubbmGait

Ok, I'm a relative newbie here, so I just installed the "latest" Robocode when I got into all this. Should I find 1.0.6 and switch to it? I don't have skipped turn issues for CC, but I certainly don't mind switching, either. And I use an external text editor, anyway. -- Voidious

You don't have skipped turn issues for CC? How does 105 skipped turns over 10 rounds sound to you? =) And of course you use an external editor! -- PEZ

I still use the built-in editor! Copy-paste into notepad when you need to find/replace rules. ;) -- ABC

Hmm, I guess I have a much higher tolerance for skipped turns than you do ;) And I haven't tried upping the CPU constant on my home machine, either, which might help. But I might as well try 1.0.6 while I'm at it. -- Voidious

Any new tool takes some getting used to, but I used it (Eclipse) at a former job, installed it at my present job, and use it for Robocode now, though I started with a relatively simple text editor. I have my source files on a USB drive and Eclipse workspaces / projects set up at home and work, each set up to place the compiled-on-the-fly .class files into Robocode's robots directory. The only issue I've run into is when I have to do a 'Clean' (removes all prior output) and it wipes out the jar files of the downloaded bots. I just keep a backup of the bots I use handy in a separate directory on my desktop. I can't imagine building a behemoth like Ugluk without an IDE. Well, I can, but it involves lots of pain. Perhaps I'll add a section on setting up Eclipse for use with Robocode. Right now I'm focused on some specific developments for Ugluk though. -- Martin

Yeah, as I mentioned on the Editors page, I use a program called Syn on my Windows machine. I haven't explored Mac programs much, but I would like to make "the switch" soon, so I will surely look into that more at some point. What little editing I've done on the Mac at work, I used a program called "JEdit" that I found to be pretty good. -- Voidious

For what it's worth, on my home machine with Robocode 1.0.7: CC skipped 121 turns over 35 rounds vs Shadow with CPU constant set to 1 (which is what it's been set to), and skipped none when I upped that to 100. For now, I'm just going to keep that constant raised until I see a reason to put it back. -- Voidious

Could you please check if Shadow skips a lot of turns in that battle? You'll need to add a line with "gunLogLevel=10" in its properties file. And then maybe test it with the CPU constant set to 1? Thanks in advance. -- ABC

No problem... With CPU constant = 1, Shadow 3.69 skipped 291 turns vs CassiusClay 1.9.9.99b over 35 rounds. With CPU constant = 100, he skipped none. -- Voidious

Funny. For me it didn't matter what I set the CPU constant to. Thanks for testing that stuff. I might send you a test version of CC to see how much it skips turns. -- PEZ

Thanks Voidious, 291 skipped turns shouldn't affect Shadow's performance much. PEZ, maybe you forgot to restart Robocode after changing the properties file? -- ABC

No, I didn't forget that. And I skipped a lot more turns than V did. -- PEZ

I figured it was worth mentioning this, in light of Pulsar's recent comments... I know this could just be a fluke, but I cannot come close to duplicating one result for Dookious 0.71. He got 49.1% vs dft.Virgin in the RR, but I cannot get a score lower than 60% after quite a few test matches here, even with my CPU constant set to 1. -- Voidious

Have you tried running tests with Dookious entered second into the battle? I have noticed that on some occasions the order of entry seems to have an effect on the number of skipped turns that a bot gets. --wcsv

Good thought, I hadn't tried that... But still, 3 matches all 60-65% in favor of Dookious, entering second with CPU constant = 1. I haven't found any other results that I cannot at least come close to duplicating... -- Voidious

Three battles is hardly an extensive test. The results over 35 rounds can vary greatly. Get a few strange starting positions and your surfing stats could be contaminated for a few rounds, and such things. Run 100 battles in RoboLeague and see what happens. -- PEZ

I had run nearly a dozen before that, the "3 more" was just to test it with Dookious listed as the second participant. Still, I've seen some very strange results, it just seemed worth mentioning... -- Voidious

Wow, my Nanos are still fairly competitive. I might have to dust off the old compiler and update them some more ;) -- Miked0801

Most of the bots I've seen lately (since I started in August) have been megabots. I know ABC and PEZ have been going back and forth with their beasts, and even Axe popped his head in, though I don't know if he's brewing something. -- Martin

Yes, but I've been paying some attention to Pugilist too. I'll force it back above 2000 points if it's the last thing I do. All this while I have allowed that surfing bug to remain unfixed in P. My bet is that once I track down the bug and fix it 2000 points and more will be there. And right now I have almost 90 bytes to play with. Plenty!

Mike, please dust that compiler off! Yeah, your nanos still compete with bravura, but I think nano development has found a few new things that you might be able to turn into gold.

-- PEZ

Ugluk v0.8.10 is not distributed. I am only running it from one machine so that I can build statistics correctly, which isn't working under RoboLeague. I have two other machines processing the Rumble tonight as well. Ugluk's file not being at the URL listed shouldn't really affect anyone aside from seeing the message that they cannot download it. -- Martin (pasted from up the page by Voidious)

Martin, I know this wasn't a "real" version that you were going to leave in the RoboRumble, but I still think it was kind of an unfair thing to do. This version got an unfair advantage versus every other tank it faced, and it (if very slightly) affected all of their ratings. That said, I don't think it's any kind of big deal, really, but I just wanted to point out that it seemed a bit unfair to me. -- Voidious

I used to use the latest particip1v1.txt file to build a RoboLeague file that would run my bot against all others in the rumble. I had a script for it. Let's see.... Yes, this:

#!/bin/bash
# Builds a RoboLeague league file pitting one bot (given as $1, e.g.
# "pez.rumble.CassiusClay 2gamma") against every RR@H participant.
ENEMIES_FILE="$HOME/robocode/robots/roborumble/files/particip1v1.txt"
BOT=$1
echo '<?xml version="1.0" encoding="UTF-8"?>'
echo '<LEAGUE version="1.0" opponents_per_grouping="2" rounds_per_grouping="35" inactivity_time="450" gun_cooling_rate="0.1" battlefield_width="800" battlefield_height="600" focused_competitor_id="'${BOT}'">'
awk -F',' ' {
    # $1 is "package.Class version"; split it into classname and version
    id = classname = version = $1;
    sub(/.* /, "", version);
    sub(" " version "$", "", classname);
    id = classname;
    sub(/.*\./, "", id);
    id = id " " version;
    print "<COMPETITOR id=\"" id "\" classname=\"" classname "\" version=\"" version "\" use_latest=\"false\"/>"
} ' $ENEMIES_FILE
echo "</LEAGUE>"
It's probably smarter (more portable) to make this stuff in Java, but I kinda suck at Java programming and shell scripting is always easier for me.
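(A rough, untested Java equivalent, for the curious, assuming the same participants file format of one "package.Class version,downloadURL" entry per line:)

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Usage: java LeagueGen "pez.rumble.CassiusClay 2gamma" particip1v1.txt > league.xml
public class LeagueGen {
    public static void main(String[] args) throws IOException {
        System.out.println("<?xml version=\"1.0\" encoding=\"UTF-8\"?>");
        System.out.println("<LEAGUE version=\"1.0\" opponents_per_grouping=\"2\""
                + " rounds_per_grouping=\"35\" inactivity_time=\"450\""
                + " gun_cooling_rate=\"0.1\" battlefield_width=\"800\""
                + " battlefield_height=\"600\" focused_competitor_id=\"" + args[0] + "\">");
        BufferedReader in = new BufferedReader(new FileReader(args[1]));
        for (String line; (line = in.readLine()) != null; ) {
            String entry = line.split(",")[0];   // "package.Class version"
            int space = entry.lastIndexOf(' ');
            String classname = entry.substring(0, space);
            String version = entry.substring(space + 1);
            String id = classname.substring(classname.lastIndexOf('.') + 1) + " " + version;
            System.out.println("<COMPETITOR id=\"" + id + "\" classname=\"" + classname
                    + "\" version=\"" + version + "\" use_latest=\"false\"/>");
        }
        in.close();
        System.out.println("</LEAGUE>");
    }
}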

Note that I still run this script on Windows. Cygwin is my friend in Windows land. =)

-- PEZ

Believe it or not, I have recently been thinking about this exact thing. I'm sure I could do a Perl and/or Java version of the above pretty quickly. Thanks PEZ. =) It would probably also be trivial to make a Perl script that spiders the RR details page and does a comparison of those results to your RoboLeague results. (It could be done in a more direct SQL query by someone with access to the RoboRumble server, but it should be easy enough to spider/parse with Perl for now.) Of course, it will have to wait until I can pull myself away from Dookious... -- Voidious

V, if you do a CGI of this we can host it on the robowiki.net server. The CGI could read the participants list directly from the wiki maybe? -- PEZ

Sure, how are the pages stored? Is it some kind of database, or just text files? In any case, I'm sure it's no problem. The first thing to create would be a CGI to turn the Participants List into a RoboLeague XML template. Next, I could look into the results comparison against a RoboRumble details page. -- Voidious

Thanks, V. I've placed your RoboLeague template generator script here:

I don't think the comparison part is strictly necessary. If you really want to know how your bot would compete in the RR@H then just enter it into the competition. I believe Martin's thing was about making his bot collect data on all participants. The path of the dark side really. =)

-- PEZ


Just because I was curious which bot was the best-scoring bot against the most bots (and because I need some Perl on my next assignment), I started with Perl and made a script that counts the number of times a bot is the best bot against any other bot. Example: I know GrubbmGrb is the best bot against fnc.bandit2002; you can check that. The outcome has no real surprises I think, but it can be interesting for those who like all sorts of rankings.
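(The counting itself boils down to something like this minimal sketch, with hypothetical data structures rather than GrubbmGait's actual Perl: for every victim, find the best scorer against it and tally one "best against" win for that bot.)

import java.util.HashMap;
import java.util.Map;

public class BestBotCount {
    // scores.get(victim).get(bot) = bot's average score % against victim
    static Map<String, Integer> bestCounts(Map<String, Map<String, Double>> scores) {
        Map<String, Integer> counts = new HashMap<String, Integer>();
        for (Map<String, Double> vsVictim : scores.values()) {
            String best = null;
            double bestScore = -1;
            for (Map.Entry<String, Double> e : vsVictim.entrySet()) {
                if (e.getValue() > bestScore) {
                    bestScore = e.getValue();
                    best = e.getKey();
                }
            }
            Integer c = counts.get(best); // one more victim this bot is best against
            counts.put(best, c == null ? 1 : c + 1);
        }
        return counts;
    }
}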

#Rankings summary, updated 1139303839212 #Tue Feb 07 10:17:19 CET 2006

rank  best  botname                          RR-ranking
   1    88  mue.Ascendant_1.2.5                       1
   2    47  abc.tron3.Shadow_3.69k                    4
   3    45  pez.rumble.CassiusClay_2gamma             2
   4    33  pulsar.PulsarMax_0.8.9                    3
   5    30  axeBots.SilverSurfer_2.53.33              5
   6    18  jam.RaikoMX_0.32                          6
   7    16  florent.test.Toad_0.14t                   9
   8    12  voidious.Dookious_0.75b                  11
   9    10  davidalves.Phoenix_0.27                  10
  10     9  rz.Aleph_0.34                            19
  11     8  dft.Cyanide_1.80.b                        8
  12     7  jam.mini.Raiko_0.43                      29
  12     7  pe.SandboxDT_3.02                        16
  12     7  tide.pear.Pear_0.62.1                     7
  15     6  dft.Immortal_1.31                        18
  15     6  pez.mini.Pugilist_2.3                    17
  17     5  gh.GrubbmGrb_1.1.3                       50
  17     5  pez.rumble.Ali_0.3.1                     13
  19     4  apv.test.Virus_0.6.1                     15
  19     4  gh.mini.Gruwel_0.8                      129
  19     4  voidious.Shaakious_0.12                  21
  19     4  wcsv.PowerHouse.PowerHouse_1.5           14
  23     3  abc.tron3.Tron_3.11                      26
  23     3  jam.micro.RaikoMicro_1.44                38
  25     2  (6 bots)
  31     1  (18 bots)

-- GrubbmGait

Hey, that's neat! I like any type of ranking that puts Dookious higher than his true RoboRumble rating =) Wow, Ascendant really is a monster. -- Voidious

Very cool indeed! -- ABC

Yes, very cool. And I think there is something for us to learn from this, even if it eludes me exactly what the lesson is... -- PEZ

Knowing the average ranking of the bots you're the best against should also be interesting, maybe. The more rankings the better. :) -- ABC

By the way, PEZ, I remember somewhere on the wiki a situation (I checked, ABCsLinkedListChallenge) where you likened yourself to Pooh... Your comment here also seemed very Pooh-like to me. =) -- Voidious

For the complete list including victims, but without ranking, see http://home.versatel.nl/gheijenk/robocode/things/bestbot.html Note that this ranking is from the 7th! -- GrubbmGait

Wow, thanks. That's some really interesting reading. I have none of Kawigi's bots on CC's list. Pooh is starting to think here. Think, think, think. Hmmm, I think I need some food. =) -- PEZ

Paul had asked about the momentum calculation but I didn't spot an answer to it. My question about it is if the figure is relative to the number of battles processed, so that rating fluctuation is dampened for bots that have been around a long time. Really the question is 'how many battles is good enough'? For example, if you want a rating that you are confident is within 0.5 points of the 'true rating', and your specialization index is x, you have to fight n or more battles ... -- Martin

One thing I noticed is that the value was 3 times the sum of the ProblemBot Index column. My understanding is that your rating can go up or down a maximum of 0.01 times this value. (e.g. rating of 1700 with a momentum of -33 can drop to 1699.67 in one fight). I think this system has resulted in wild fluctuations in early melee ratings, since a bad run can drop you 9 times as fast. -- Martin

I'm not really making a proposal here or anything, but an idea did occur to me: might it not be reasonable to weight an opponent's importance (in your rating) inversely to their specialization index? With the recent onslaught of rammers, the whole top 10 has gone down 3-4 points in recent weeks. That just seems kinda weird, as I tend to think of the ratings as more "objective" than that. Changing the rating criteria would obviously be a majorly big deal, and it's really not fair to just change it like that, but it's just a thought I felt like sharing. (It's not already that way, right?) -- Voidious

The rating reflects the opponent's ability to defeat the competition. The competition is changing (though not significantly), and with it the ratings. I am guessing that your reasons for wanting rating stability boil down to (a) 2000 Club members should not lose their status, and (b) you use rating as a benchmark for performance testing. Of those, I can see the benchmark being a problem but hey .. time marches on .. like the money lending banks in The Grapes of Wrath, "If you aren't growing, you're dying." If you want your rating testbed you can always run RoboLeague with the same set of bots. That doesn't have the same 'community' feel (or bragging rights) but if you want to keep your rating high you need to find ways to crush enemies who choose to fight at point blank range, or whatever other techniques attract interest. Assuming I develop Banzai! to a competitive level, I plan to use Banzai! mode in Ugluk against opponents for which that technique gives a better rating. Underhanded? Maybe .. but I don't think anyone is handing out awards for being nice. -- Martin

(Edit conflict) It seems that the SPI (Specialization Index) is also dependent on the number of battles. NanoDeath for instance has a low index, while its results are comparable to GrubbmThree or MaxRisk. The whole rating system is built upon the results against everyone. Replacing your bot with a newer version immediately influences the momentum of everyone else; it is all tied together. Just look upon rammers as sort of Robin Hoods, stealing points from the rich and giving them to the poor, instead of the mean thieves they actually are. ;-) -- GrubbmGait

(Edit conflict x2, heh) Nobody's getting kicked out of any 2000 club (Shiva and DarkHallow are both below 2K now), and I don't really think it's right to change a rating system after so long. However, if I were coding a rating system from scratch right now, I think taking the specialization index into account would be a very reasonable move; I think the ratings would be more stable on the whole, a new bot's ranking might stabilize much more quickly, and a low specialization index means that a tank is a "better reference" to set your rating against, IMO. I guess I just don't think that Ascendant's rating should go down another 50 points if 50 more RamBots entered the competition =) But maybe it should, I dunno. @Grubb: Ok, I'll think of it that way now, then :-) What you said about number of battles intrigues me, I'm going to have to check that out. -- Voidious

Whoah, weird indeed... Is this just not true? Specialization index measures how much specialized is your bot. A big index means your bot is highly specialized. A low index means your bot is a generalist. It is calculated as the average squared value for the ProblemBot index. Average squared value of PBI for NanoDeath is 145, but his SPI is listed as 34. -- Voidious
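(For reference, the quoted definition as code; it is a trivial computation, which is what makes a listed value of 34 against an average squared PBI of 145 so puzzling.)

// The SPI as quoted above: the average of the squared ProblemBot indexes
// over all pairings. A consistently small |PBI| means a low, generalist index.
public class SpecIndex {
    static double specializationIndex(double[] pbi) {
        double sum = 0;
        for (double x : pbi) {
            sum += x * x;
        }
        return sum / pbi.length;
    }
}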

I'm not sure it's the rambots who have induced the rating drop. It's something that happens now and then for reasons unknown. And there probably won't be a time when a large part of the bot population are rammers anyway. My main problem with the rankings is the added 1600 points. I think that if people saw that rammers actually have scores around 0, then contributing them would become less interesting. -- PEZ

So then some would have negative ratings? From a little Googling on ELO ratings, it looks like it's standard for "average" to be 1300 - 1700, so the adding of 1600 could at least provide some continuity with other ELO systems. Although ratings well over 2500 (as in Chess champions) are still quite unheard of in these parts =) -- Voidious

Sorry, but I can't for my life see the point of continuity with other ELO systems. Unlike other ELO systems, our participants fight all other participants. Our ratings are real hard facts. What's wrong with negative ratings? A bot reaching the equivalent of the chess rating 2500 would enter the RR@H 1000 club, which would make quite a lot of sense. -- PEZ

Well, only because some people might be familiar with the significance of certain ELO ratings - like I played some Magic: The Gathering some years ago (that collectible card game), and I remember from those ratings how impressive a 2,000 ELO rating is. I'm guessing at least some of these guys are familiar with Chess ratings (not that I am). But you're right that the RoboRumble is very different from other ELO rating systems, so I suppose it might as well be zero-based with negative ratings... -- Voidious

It is zero-based already. All we do is add 1600 to it, which seems really silly to me. I have problems relating to the whole ELO thing actually, and I would much rather we used the raw average score %. It produces the exact same rankings, and the worst bots get 0%, the average gets 50%, and the best imaginable would be 100%. But that's just me, and I think it would be cool just to get rid of the silly 1600 fluff-points. Not that it's a biggie for me. I'll continue to subtract 1600. =) -- PEZ

Average score percent of the pairings sounds good to me. This would pretty much stop the rating shifts and it could even make all the pairings of a bot contribute equally to its ranking (right now the impact depends on the number of battles for that matchup). But how do you know that this would result in the same ranking? --mue

Because I have tested it. Before the big ugly server crash (see WikiOutage and OperationRecovery and probably other pages) we had a servlet that produced these alternate ratings. It was always the same rankings. And so it should be, if ELO is doing its job I guess. Knowing that Ascendant scores 83.8% on average against the population of RR@H bots impresses me quite a lot more than seeing it have a rating of 2080 (well, 480 then). To pick the #10 bot (Dookious) and see it scores 80.9% puts that bot's real strength in clearer view too, in my book. I'd like it if the ranking servlet would produce:

Sorted by score % by default but sortable on any of the others at the viewer's whim.

-- PEZ
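(The rating PEZ is arguing for is nothing more exotic than this sketch, again with hypothetical data structures rather than the actual servlet code:)

import java.util.HashMap;
import java.util.Map;

public class ScoreShareRanking {
    // pairings.get(bot).get(enemy) = bot's rolling-average score % against enemy
    static Map<String, Double> ratings(Map<String, Map<String, Double>> pairings) {
        Map<String, Double> result = new HashMap<String, Double>();
        for (Map.Entry<String, Map<String, Double>> e : pairings.entrySet()) {
            double sum = 0;
            for (double pct : e.getValue().values()) {
                sum += pct;
            }
            // Each pairing contributes equally, regardless of battle count.
            result.put(e.getKey(), sum / e.getValue().size());
        }
        return result;
    }
}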

(Edit conflict) Average score % is the same as 'total % wins' in Premier League. The ELO rating is exactly the same, but blurred with the momentum thingy. Using the average score % makes the whole momentum thingy obsolete, so your 'exact' ranking will only be known once all pairings have been fought. Another thing is the rolling average on any pairing. Removing it does influence the data-saving bots (and therefore every bot). I can live with the current rating system; frankly, I do not care which system is used, as long as the results are clear. As long as new bots and updates appear, the ratings will fluctuate. Adding a few rambots will make the top bots lose some points, also with raw average score %. -- GrubbmGait

Last time I checked, the "total % wins" in the PL was something like 300+% for the top bots. Truly weird, and I think Pulsar has said he will remove it once he has some free minutes for it. As the ELO ratings do not stabilize until all pairings are fought twice over, it doesn't matter that the score % average doesn't produce a "final" result until all pairings are fought. I just like to be able to see why a bot has the rating it has, in plain view. "It scores 30% on average against the competition", as opposed to "It has a rating of 1300 (or whatever it would be) as a result of some magic function and rolling averages, and then we add 1600." With my suggested update of the ranking servlet we'll have both ratings anyway. As I would definitely focus on the score % there, I could even let you guys keep your silly 1600 added to the ELO-encrypted figures. -- PEZ

(edit conflict) I like the ELO rating, it makes winning against a top bot more valuable than crushing a weak one. In that score% ranking we must wait for more pairings for it to be close to correct. I think that the current fluctuation comes from many bots starting far from their last position. Every time a bot gets a 300+ momentum it makes a shockwave that takes a while to settle. What we could do is wait for a bot to get 100 pairings (instead of the current 10 battles) before it starts to influence other bots. -- ABC

@PEZ: As the 'total % wins' is just the total of all your score %, you have to divide by the number of pairings to get the 'average score %'. For Ascendant this means 347/415 = 83.6% (as the total is rounded). -- GrubbmGait

Weird. But finally someone knows what that figure means. It won't be needed if I write a new ranking servlet according to my specs above, anyway. It would be PL and score share rankings in the same table. We could sort it by the ELO figure by default if you are fond of it. (Doesn't matter, since it produces the same table as sorting by score %.) -- PEZ

The average-score-percentage idea seems sound. I would still like having ELO ratings listed, since that is what I have grown used to. Are the score percentages the only thing stored from past matches, or are all of the scoring details stored? I think it would be cool to see Survivalist rankings, too, but I'm guessing that data isn't kept around. -- Voidious

I don't remember if that is saved. Totally agree a survivalist ranking would be cool too. Let's hope it's available to us without messing around with the upload code. -- PEZ

All uploaded battles are saved from stardate 1130869000000 (approx) till now (huge file!). These battles contain total score, bullet damage and rounds won for both bots. Besides that, each bot has its own file containing its current ELO rating and results against all enemies it ever fought (being win percentage and number of battles). So it is possible to make a survivalist ranking, but you have to iterate through at least a year's worth of battles. -- GrubbmGait
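(Iterating that file would be straightforward; a sketch, where the comma-separated field order is an assumption based on the result lines pasted further down this page:)

import java.util.HashMap;
import java.util.Map;

// Assumed line format:
// rounds,field,client,timestamp,bot1,score1,bulletdmg1,wins1,bot2,score2,bulletdmg2,wins2
public class SurvivalTally {
    static final Map<String, int[]> stats = new HashMap<String, int[]>(); // {roundsWon, roundsFought}

    static void addBattle(String line) {
        String[] f = line.split(",");
        int rounds = Integer.parseInt(f[0]);
        tally(f[4], Integer.parseInt(f[7]), rounds);
        tally(f[8], Integer.parseInt(f[11]), rounds);
    }

    static void tally(String bot, int wins, int rounds) {
        int[] s = stats.get(bot);
        if (s == null) {
            stats.put(bot, s = new int[2]);
        }
        s[0] += wins;   // rounds this bot survived longest
        s[1] += rounds; // rounds it fought in total
    }
}

Each bot's survival percentage is then simply s[0] / (double) s[1] over all saved battles.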

Cool! Hey, we're all used to running many-thousand round benchmarks overnight, what's a year's worth of battle totals that need processing? =) I don't know who all maintains / has access to this RoboRumble code, but I'm willing to help however possible. -- Voidious

Not too long ago I was stoked when I scored 99.7 in 1 battle in the RoboRumble against ScruchiPu, a long time serious problem bot. I just got the same score against DogManSPE, another long time serious problem bot. As much as I like the thought of such victories, I don't believe it is more than a glitch in the matrix. My beating all of Loki's bots, however, was not. -- Martin / Ugluk

Both ScruchiPu and DogManSPE are vulnerable to getting disabled, so I do not pay any attention to the score against them. Beating all of Loki's bots, although close, is something I only did in melee. -- GrubbmGait

Does this mean it's completely random if you get 99.x% or a "real score" against these tanks? If so, I think they should be removed from the RoboRumble. (Another good case for why Specialization Index should be factored in. :P) -- Voidious

It's just a game. -- Martin

Yes, this is true... However, that's far from a good reason not to remove them, IMO. ;) -- Voidious

I think you need to look at it from these particular bots' point of view. It's them getting disabled and thus not reaching their true potential. If their authors still want them in the rumble, then so they should remain. For all other bots it's just one or two games out of 1000, and thus doesn't begin to really impact the rating. Besides, it might be that something provokes the shutout, and then bots that succeed in that have a fair advantage. "Might", it's pure speculation from my side, but prove me wrong if you object to the possibility. =) -- PEZ

Well, I wouldn't want to go removing anybody's tanks without permission, especially Albert's, father of the RoboRumble (right?). Clearly, looking at his RR details, it's a rare occurrence with ScruchiPu - they just got lumped together from a previous comment. But DogManSPE, was he even entered by his author, or just put in because he was a classic? And he gets 76.6% against ad.last.Bottom, which makes me think he might crash something like 1 in 4 times. The way it ends up, he has a super low rating (I think) because he crashes so much, which means when another tank faces him, they: (a) get him to crash, and get a much higher than expected %; (b) don't get him to crash, and get a much lower than expected %; or (c) face him multiple times, so that it averages out to the expected %.

I know it's "just a game", but if there's a tank with a meaningless rating, whose effect on the RR scores is to add a Math.random() to everyone's rating, I don't see why we shouldn't consider removing it. For now, I'll look into making him crash, even though it makes me feel really dirty =) -- Voidious

Hehe. I'll have to update ScruchiPu and make sure it doesn't crash, even if it gets a little bit slower :-) Anyway, its rating is absolutely meaningful and I don't think it will make a difference for anyone (1 bot out of 400!), so I'd prefer to keep it in the RoboRumble. -- Albert

Yeah, I'm really sorry if I got a little grumpy there :-\ It's definitely no big deal, and I think DogManSPE is really the main tank I was talking about, even if I wasn't too clear. -- Voidious


This is the first time I see a draw in the PL -- GrubbmGait
99  timmit.mini.TimVA 0.43  597  298/1/115  414  242

It seems that the melee rankings cannot really be trusted at the moment, as there have been some strange battle results in melee. Just see the details of e.g. FloodHT, CoriantumrWithDMMGun or Logic. -- GrubbmGait

I have stopped my instance of MeleeRumble@Home, as the uploading of results hangs after a random number of results have been uploaded. Sometimes I get a server error message, but most times it just hangs. So maybe my RR client is the cause, maybe it's the server... --Loki

It may be me, as I just downloaded the client last night and may not have gotten everything right in the installation process. Someone should redo the RoboRumble/StartingWithRoboRumble page as it is a little confusing and maybe out of date. -- KID

Yeah, it's me. I just ran a test round with Coriantumr and now that bot has problem bots where before it didn't. Oh boy... I'm not sure what the problem is, so some help would be nice. -- KID

I will post my meleerumble.txt file when I get home - I think GrubbmGait might've posted his elsewhere on the site as an example, too, but I'm not sure where. I agree that the directions for setting up the RR client could be a little easier to follow for someone just setting it up. The bad results will fade away, as the most recent matches for any pairing get the highest weight. -- Voidious

You may use http://home.comcast.net/~kokyunage/robocode/roborumble.zip for my version of the complete setup (with my name, not that anyone is counting uploads). You can copy over the robot jar files to the /roborumble/robots directory if you don't want to download them all again. -- Martin

I don't think it's the meleerumble.txt file. I used the one GrubbmGait posted and just changed the name and set how many matches it runs to 1. I think the problem is somewhere else, because Logic and Coriantumr normally crash on my computer when running Robocode. It might be my start .bat; is it OK that I use "../newrobocode.zip;" and not "../robocode.jar;", and use "-Xmx1024M"? -- KID

Do you have robocode.cpu.constant.1000=100 in your robocode.properties file? -- Florent

No, it's not set to 100 but to 1. I'll change that and try it. -- KID

Thank you! That made it work! I ran a match with all the bots that would normally crash on me and none of them did! I'll let the MeleeRumble run for a while just to let it even out. -- KID

I don't think it's mandatory, but having -Dsun.io.useCanonCaches=false in your .bat files is also good, to avoid the JRE 1.4.2 SecurityException Bug. -- Voidious

I tried to repair the biggest part of the damage taken by Coriantumr, FloodHT and CoriantumrWithDMMGun (all from Kawigi?). The rest will be repaired as battles go by. -- GrubbmGait

Sorry if this is the wrong place but I couldn't find a better one. Is there a newbie league? --T.

As far as I know, right now there is only the RoboRumble. (Someone will correct me if I'm wrong.) However, there is a wide range of skill levels of the tanks there, it's not only for "big bad tanks". With a program like RoboLeague, you can run matches among bots on your own to test them. Another idea is to start with a NanoBot, MiniBot, or MicroBot, which is a restriction on the code size of a tank - those tanks are a bit simpler and less feature-filled, and they have separate rankings in the RoboRumble, as well. -- Voidious

Hey Grubb, it's been a while - could you possibly rerun your "The best scoring bot against the most bots" script sometime? That'd be sweet. Or just send it along ;) -- Voidious

Running the script is no problem, collecting the info needed is a hell of a job. I did it when I was in between projects and getting familiar with Perl. I have mailed you the script. -- GrubbmGait

Gladiator seems to be dropping in the melee rankings. I just don't think that after being 7th, a bot should drop to 13th. He also has 8 ProblemBots. My thought was that someone is running the MeleeRumble on a computer that Gladiator crashes on. Thoughts? -- KID (EDIT: X2, FloodHT, and Shiz all have the same problem)

You are right, and it has happened a couple of times in the last few months. And it mostly affects the same bots every time. I'll see if I can run some dedicated battles this evening. -- GrubbmGait

I've done some dedicated battles, Gladiator, X2, FloodHT and Shiz are close to their rightful ranking again. The rest will be done gradually by normal battles. -- GrubbmGait

I haven't had time to investigate it yet, but I wanted to mention that it seems like some bad results are being reported by some RR client. Shiva has dropped about 10 points in the last day or two, and these three results from recent bots really stand out: yk.JahMicro_1.0 0.2 (-91.6); kinsen.nano.Hoplomachy_1.4 27.4 (-53.4); stelo.IntrinsicVolatility_1.0 3.1 (-66.7). I will investigate when I have some time (which may not be today), but maybe someone will notice an issue with their client before then. -- Voidious

Does Shiva do file i/o? -- Martin

Shiva is not the only one with problems, also Locke, SilverSurfer, Aristocles and some more. As far as I can tell, Stelokim has run (some of) the battles that give strange results. Could you give some insight on the environment you are running RR@Home on? -- GrubbmGait

The bad battles don't discriminate. There are data savers, non-savers, fast bots, slow bots and everything in between. It's affected a LOT of bots. Just check out ProblemBot Indexes > 25 on stelo.IntrinsicVolatility_1.0, stelo.Mirror_1.1 and kinsen.nano.Hoplomachy_1.4. Not sure what's next, but this is killing ratings at every level. Hrmph. --Corbos

If the bad results are centered on a few new bots, it should fix itself quickly when new versions of them are posted. I'll be running my clients a lot in the meantime... (Makes it easier to resist RC while I study, too, hehe.) -- Voidious

They're centered on a few bots, but aren't caused by them. There's a bad client out there. It happened to a lesser extent a month or so ago. If the client continues to run, I think we'll still get bad results. Is there a way to view who turned in some of the crazy battles? If we can isolate the client, maybe we can troubleshoot what's wrong with it. --Corbos

Stelokim, have you stopped your client, or looked into why it might be giving bad results? It would probably be best if you stopped your client until it's figured out. Actually, the same goes for anyone who might've set up a new client lately. Re-releasing the bots with the most bad results as new versions would be a huge help once the client issue is resolved, too. -- Voidious

-- What's the problem with my bots?

I'm using roborumbleathome_beta9_server_b12.ZIP. I run RR@home client these days. (8 hours a day I think.)

stelo.Mirror and stelo.IntrinsicVolatility are my recently developed bots. Mirror does coordinate-based MirrorMovement. IntrinsicVolatility has ActualMovements (in contrast to VirtualBullets) composed of (MirrorMovement, RandomMovement), choosing the best-win-rate movement every round.

I have run a battle between Shiva and IntrinsicVolatility.

Shiva's output window:

=========================
Round 1 of 10
=========================
Error while reading:java.io.FileNotFoundException: C:\Robocode\.robotcache\cf.proto.Shiva_2.2.jar_\cf\proto\Shiva.data\stelo.IntrinsicVolatility.txt.gz
cf.proto.Shiva 2.2: Exception: java.util.ConcurrentModificationException
java.util.ConcurrentModificationException
    at java.util.AbstractList$Itr.checkForComodification(Unknown Source)
    at java.util.AbstractList$Itr.next(Unknown Source)
    at robocode.peer.robot.EventManager.processEvents(Unknown Source)
    at robocode.peer.RobotPeer.tick(Unknown Source)
    at robocode.AdvancedRobot.execute(Unknown Source)
    at cf.proto.Shiva.run(Unknown Source)
    at robocode.peer.RobotPeer.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
=========================
Round 2 of 10
=========================
cf.proto.Shiva 2.2: Exception: java.util.ConcurrentModificationException
java.util.ConcurrentModificationException
    at java.util.AbstractList$Itr.checkForComodification(Unknown Source)
    at java.util.AbstractList$Itr.next(Unknown Source)
    at robocode.peer.robot.EventManager.processEvents(Unknown Source)
    at robocode.peer.RobotPeer.tick(Unknown Source)
    at robocode.AdvancedRobot.execute(Unknown Source)
    at cf.proto.Shiva.run(Unknown Source)
    at robocode.peer.RobotPeer.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
=========================
Round 3 of 10
=========================
cf.proto.Shiva 2.2: Exception: java.util.ConcurrentModificationException
java.util.ConcurrentModificationException
    at java.util.AbstractList$Itr.checkForComodification(Unknown Source)
    at java.util.AbstractList$Itr.next(Unknown Source)
    at robocode.peer.robot.EventManager.processEvents(Unknown Source)
    at robocode.peer.RobotPeer.tick(Unknown Source)
    at robocode.AdvancedRobot.execute(Unknown Source)
    at cf.proto.Shiva.run(Unknown Source)
    at robocode.peer.RobotPeer.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)

I don't know why. I will not run RR@home client until it's figured out. -- Stelokim

That's very odd. What version of java are you using? --wcsv

java version "1.5.0_06"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_06-b05)
Java HotSpot(TM) Client VM (build 1.5.0_06-b05, mixed mode, sharing)
-- Stelokim

I'm not sure if this is it, but do you have the argument "-Dsun.io.useCanonCaches=false" in your Robocode / RoboRumble startup file? If not, could you try adding it and see if you get the same result? -- Voidious

lol this is like a real-time chat. :) I run robocode.bat so it's guaranteed:

java -Xmx512M -Dsun.io.useCanonCaches=false -jar robocode.jar
robocode version is 1.1.4 Beta

 -- Stelokim

Hehe. Ok, I forgot that it was added to the default .bat files in the latest Robocode releases (thanks to Fnl). I'll try some things, but hopefully someone with more Java knowledge will have a clue about it. -- Voidious

Hey Stelokim, if you have time could you run a battle between Aristocles 0.3.7 and Mirror 1.1 and see if you get the same error? It would be easier to study this problem if we could look at the code of the bot throwing the exception. --wcsv

=========================
Round 1 of 10
=========================
pez.micro.Aristocles 0.3.7: Exception: java.util.ConcurrentModificationException
java.util.ConcurrentModificationException
    at java.util.AbstractList$Itr.checkForComodification(Unknown Source)
    at java.util.AbstractList$Itr.next(Unknown Source)
    at robocode.peer.robot.EventManager.processEvents(Unknown Source)
    at robocode.peer.RobotPeer.tick(Unknown Source)
    at robocode.peer.RobotPeer.turnRadar(Unknown Source)
    at robocode._AdvancedRadiansRobot.turnRadarRightRadians(Unknown Source)
    at pez.micro.Aristocles.run(Aristocles.java:47)
    at robocode.peer.RobotPeer.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
=========================
Round 2 of 10
=========================
pez.micro.Aristocles 0.3.7: Exception: java.util.ConcurrentModificationException
java.util.ConcurrentModificationException
    at java.util.AbstractList$Itr.checkForComodification(Unknown Source)
    at java.util.AbstractList$Itr.next(Unknown Source)
    at robocode.peer.robot.EventManager.processEvents(Unknown Source)
    at robocode.peer.RobotPeer.tick(Unknown Source)
    at robocode.peer.RobotPeer.turnRadar(Unknown Source)
    at robocode._AdvancedRadiansRobot.turnRadarRightRadians(Unknown Source)
    at pez.micro.Aristocles.run(Aristocles.java:47)
    at robocode.peer.RobotPeer.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)

and this: Mirror 1.1 vs Randomness 1.1: Randomness crashes.

=========================
Round 1 of 10
=========================
stelo.Mirror 1.1
stelo.Randomness 1.1: Exception: java.util.ConcurrentModificationException
java.util.ConcurrentModificationException
    at java.util.AbstractList$Itr.checkForComodification(Unknown Source)
    at java.util.AbstractList$Itr.next(Unknown Source)
    at robocode.peer.robot.EventManager.processEvents(Unknown Source)
    at robocode.peer.RobotPeer.tick(Unknown Source)
    at robocode.peer.RobotPeer.turnRadar(Unknown Source)
    at robocode._AdvancedRadiansRobot.turnRadarRightRadians(Unknown Source)
    at stelo.Randomness.run(Randomness.java:36)
    at robocode.peer.RobotPeer.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)

 -- Stelokim 20061013 0110 KST(GMT+0900)

and this: my bot crashes too! stelo.Randomness 1.1 vs sample.MyFirstRobot:

=========================
Round 1 of 10
=========================
sample.MyFirstRobot
stelo.Randomness 1.1: Exception: java.util.ConcurrentModificationException
java.util.ConcurrentModificationException
    at java.util.AbstractList$Itr.checkForComodification(Unknown Source)
    at java.util.AbstractList$Itr.next(Unknown Source)
    at robocode.peer.robot.EventManager.processEvents(Unknown Source)
    at robocode.peer.RobotPeer.tick(Unknown Source)
    at robocode.peer.RobotPeer.turnRadar(Unknown Source)
    at robocode._AdvancedRadiansRobot.turnRadarRightRadians(Unknown Source)
    at stelo.Randomness.run(Randomness.java:36)
    at robocode.peer.RobotPeer.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
-- Stelokim 20061013 0114 KST(GMT+0900)

Are you using a CPU that has HyperThreading, by any chance? -- Voidious

I think not. My CPU is Intel Celeron M. (1.5Ghz) -- Stelokim 20061013 0117 KST(GMT+0900)

Do you use the same directory for RR@Home and for your own development? If so, split it up so you have separate directories for your own development and for RR@Home. Doing your own development in another directory while running RR@Home should not be a problem in itself if you have enough processing power and memory. -- GrubbmGait

OK. I will split it up :) -- Stelokim 20061014 1126 KST(GMT+0900)

Just a note that I think it may be the new version of Robocode that is causing this error with some bots, as I just got that error in 1.1.4 with Quest. I definitely didn't have multiple copies open. I hate to cause a scare if it's something else, but it would be really good to avoid too many bad results... My RR@Home clients are 1.1.2, and seem to be OK. -- Voidious

I didn't see any reason to update my RR@Home robocode.jar files. All of the updates have been related to the visual interface. -- Martin

Well, one of them added some new setColors methods, which 1.0.7 wouldn't support. I also wasn't sure if the drawing-related code in tanks might be causing problems. *shrug* -- Voidious

I don't want to seem rude, but could we maybe only have one version of MirrorNano and Mirror at a time? -- Alcatraz

I was about to remove old versions after some battles. It's ok that you removed them. -- Stelokim

Phoenix 0.805 got 27% against Ascendant. While Ascendant is very strong, this seemed wrong to me. So I ran 10 battles of Ascendant vs. Phoenix, and Phoenix's score was always between 43% and 61%. The 35% vs. DT and 53% vs. bing2.Melody 1.3.1 are also much lower than they should be. I know DT's memory usage is quite high, maybe someone doesn't have -Xmx set? Could we check and see who submitted Phoenix 0.805 vs. Ascendant 1.2.27 so that we can try to figure out what the issue is? --David Alves

I've looked into the issue a bit and found the specific battles. The weirdest results were

35,800x600,Kinsen,1161325590921,mue.Ascendant 1.2.27,3936,1914,28,davidalves.Phoenix 0.805,1348,879,7
35,800x600,Ugluk,1161325180937,pe.SandboxDT 3.02,3102,1380,25,davidalves.Phoenix 0.805,1693,998,10
35,800x600,Kinsen,1161325923453,davidalves.Phoenix 0.805,2492,1004,22,bing2.Melody 1.3.1,2207,1287,13
Kinsen and Martin, could you use your roborumble install directories to manually run some 35-round battles of those matches? That way, if you're able to reproduce those scores, we can see if there's an OutOfMemoryError or something like that going on. Also, what do your roborumble command lines look like, and what versions of Robocode are you running on your RR@H clients? --David Alves

java -Dsun.io.useCanonCaches=false -Xmx512M -cp .;../robocode.jar;../codesize.jar; roborumble.RoboRumbleAtHome ./roborumble/roborumble.txt
I have 1.1.3 at work and on my main computer. 1.0.7 on my wife's computer. All 3 were running last night.
Machine at work got 2707 vs. 2267 in favor of Phoenix. -- Martin

Thanks for the prompt response. Is the command line you gave the same for all three computers? I don't suppose there's any way of figuring out which of the three was causing the trouble. =/ Also, I found another strange one for 0.805:

35,800x600,Kinsen,1161330190093,davidalves.Phoenix 0.805,3098,1434,25,wiki.BasicGFSurfer 1.0,1934,1227,11
Normally Phoenix would get 80% or higher against BasicGFSurfer, not 60%. Here's a comparison of Phoenix 0.805 vs. Phoenix 0.805b, which is identical but running only on my home computer: http://rumble.fervir.com/rumble/RatingDetailsComparison?game=roborumble&name1=davidalves.Phoenix%200.805b&name2=davidalves.Phoenix%200.805 --David Alves

I'll be leaving for home in about 15 minutes. The command line should be the same for all three, though. I keep that stuff on a USB drive (along with all of Ugluk's source code) for easy infestation .. erm, installation. I had brought up a comparison of 805 vs. 805b earlier on my own, and there is a fair amount of fluctuation both up and down between the bots. I think the results you are seeing are too limited to be a question of a bad RR@Home setup. Dunno though. -- Martin

I am running RoboCode 1.1.3 with the command line

java -Xmx256M -cp .;../robocode.jar;../codesize.jar; roborumble.RoboRumbleAtHome ./roborumble/roborumble.txt
-- Kinsen

@Martin: No way, the results are not within the normal range. I haven't gotten a red result (<40%) against any bot in the past 90+ releases, except for releases that contained big bugs. 27% just isn't possible without Phoenix crashing.

@Kinsen: Could you please add -Dsun.io.useCanonCaches=false to your batch file the way that Martin has it? This could be due to the CanonCaches issue. It's also possible that there's some bug in 1.1.3. Personally I run all RoboRumble battles on Robocode 1.0.6; the newer versions are only GUI updates, and I think they run more slowly than 1.0.6. --David Alves

I tried running battles with Phoenix and BasicGFSurfer, and Phoenix got 70% both times. -- Kinsen

Hmm, it's possible that the BasicGFSurfer result is OK. I was just comparing 0.805b (85%) to 0.805 (60%). Looking at other recent versions, I seem to average around 75%, though, so 60% might be a legitimate result. Could you run Phoenix vs. Ascendant using that same Robocode installation a few times and see if Phoenix ever crashes? If it does, could you paste the console output here? --David Alves

I ran Phoenix and Ascendant, and the first time Phoenix won with about 54%, but the second time it lost with 33% of the total score. There were no exceptions printed to the console, except that it couldn't read the gun data file the first round. -- Kinsen

I ran 60 battles of Ascendant vs. Phoenix. Phoenix's mean score was 48.5% and the standard deviation was 5.55%. That means the chance of a battle where Phoenix gets only 33% is 0.25%, or about one in 400. The chance of a battle where Phoenix gets 27% (the RoboRumble battle) is 0.005253%, or about one in 19,036. So I think your client is producing weird results. Could you run a few more battles after you add -Dsun.io.useCanonCaches=false to your batch file to see if that was it? If that wasn't it, I really don't have a clue what it might be. =/ --David Alves
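For the curious, those tail probabilities follow from treating battle scores as roughly normal. The sketch below is illustrative only; since Java has no built-in error function, it uses the Abramowitz-Stegun approximation, and small differences from the figures above come down to rounding.

    public class TailProbability {
        // Abramowitz & Stegun 7.1.26 approximation of erf(x), |error| < 1.5e-7
        static double erf(double x) {
            double sign = x < 0 ? -1 : 1;
            x = Math.abs(x);
            double t = 1 / (1 + 0.3275911 * x);
            double poly = ((((1.061405429 * t - 1.453152027) * t
                    + 1.421413741) * t - 0.284496736) * t + 0.254829592) * t;
            return sign * (1 - poly * Math.exp(-x * x));
        }

        // P(X < v) for X ~ Normal(mean, sd), via the standard normal CDF
        static double lowerTail(double v, double mean, double sd) {
            return 0.5 * (1 + erf((v - mean) / (sd * Math.sqrt(2))));
        }

        public static void main(String[] args) {
            double mean = 48.5, sd = 5.55; // from the 60-battle sample
            for (double score : new double[] {33, 27}) {
                double p = lowerTail(score, mean, sd);
                System.out.printf("P(score < %.0f%%) = %.6f%% (about 1 in %.0f)%n",
                        score, 100 * p, 1 / p);
            }
        }
    }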

I was just missing the CanonCaches argument in the roborumble batch file, not in the robocode.bat file. I will post more results once I run them. -- Kinsen
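For the record, Kinsen's command line from above with the flag added would presumably read:

    java -Dsun.io.useCanonCaches=false -Xmx256M -cp .;../robocode.jar;../codesize.jar; roborumble.RoboRumbleAtHome ./roborumble/roborumble.txt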

What's your robocode.cpu.constant set at in your robocode.properties file? Also, what CPU do you have? --David Alves

@David, just wanted to mention that there were some new setColors methods made available in the new versions, so bots that use them will get 0 scores every time in 1.0.6/1.0.7. (I am running 1.1.2 in my RR clients.) -- Voidious

WaveSerpent is getting some really weird battles (0.9 vs. FalseProphet, 37.6 vs. Mirror, 69.2 vs. UnderDark4). Does anyone know why this happened? In tests, WaveSerpent 1.102 performs about the same as 1.101 against them, and only skips a couple of turns. Maybe someone is running an old version of Robocode while WaveSerpent uses the new color methods instead of setColors()? I've re-released WaveSerpent with setColors(), so the problem might go away... -- Kev

Bots that use the new setColors methods will simply get a score of 0 in battles, and when the score is uploaded, it will be ignored. So that shouldn't be the source of the problem. I know David Alves was also reporting some discouraging results. Really I suspect that a problem has crept into the newer versions of Robocode. I am going to roll back my RoboRumble robocode.jar files to 1.0.7 across the board. I'll use the latest client for testing / watching battles, since it does have some handy features. It definitely has some scoring bugs, though. -- Martin
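One defensive pattern for bot authors (a sketch, not what any of the bots above actually does, and assuming the new methods in question are the per-part setters like setBodyColor) is to try the new calls and fall back to the old three-argument setColors if they are missing, so the same jar runs on both old and new clients. On Sun's JVM the linkage error only surfaces when the call executes, so it can be caught:

    import java.awt.Color;
    import robocode.AdvancedRobot;

    public class ColorSafeBot extends AdvancedRobot {
        public void run() {
            try {
                // per-part setters added in the 1.1.x line;
                // they throw NoSuchMethodError on 1.0.x clients
                setBodyColor(Color.blue);
                setGunColor(Color.white);
                setRadarColor(Color.red);
            } catch (NoSuchMethodError pre11Client) {
                setColors(Color.blue, Color.white, Color.red); // old API
            }
            while (true) {
                turnRadarRight(360); // placeholder behavior
            }
        }
    }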

I'm seeing strange results for "sam.samspin" in the nano rumble. When I run him against bots on my machine, he rarely scores points. But check out his results and you'll see certain bots whose butts he seems to be kicking pretty well (e.g. GFNano). When I run those same battles myself, though, he rarely connects with a single bullet. Any ideas what's going on? -- Simonton

