Please vote on these rules and explain your vote below. I think we should call these rules binding when at least 10 people have voted.
Should bots entered in the RoboRumble@Home be allowed to come preloaded with data?
Should bots entered in the RoboRumble@Home be allowed to come preloaded with more data than the Robocode client allows to be written?
Should we change the default limit (200K) on how much data Robocode allows to be written to #_RR_participants * 1000 bytes of data?
Hmmm... loading your bot up with data is fair, although a bit cheesy if you ask me. But changing your copy of robocode so you can load it with more than 200000 bytes of data seems like cheating to me. Granted, you're not over that 200000 by much, but still... --David Alves
I can bet you my house and family that Axe has '''not''' changed his copy of RC to be able to load more data. Something in RC must allow it under some circumstances. -- PEZ
I'm not saying he decompiled robocode or disabled the security manager to do this... You don't need to, just change robocode.properties. Or you could gather half the data, move it out of your data directory, and gather the other half... then manually add it all to your jar after packaging. But both of those methods still seem like cheating the system to me. --David Alves
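For reference, that limit lives as a single entry in the robocode.properties file. A minimal sketch of the relevant line; the exact key name is an assumption from memory and may differ between Robocode versions:

```properties
# Assumed key name for the per-bot data quota; the default is 200000 bytes.
# Raising it in your local copy lets a bot save more than 200K.
robocode.robot.filesystem.quota=200000
```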
Doesn't matter. The Axe I know would never do anything to try cheat the system. RC must have a glitch in how it ensures the 200k limit. -- PEZ
Glad you all noticed the point of this POD version! The funny POD name was meant to call your attention, indeed...
That's what I did: ran 297 battles with the league (100 battles, then saved the data in the versions dir, then 197 more battles and saved them too), then packed the version (not much different from other similar processes like WT).
I have no intention of maintaining that version; once the test is done, I'll recall it. The idea came when trying to squeeze saved data for 300 bots into 200K... I realized that RC actually doesn't restrict having more than 200KB in the data dir, it only forbids your bot from SAVING data beyond that limit.
The question here is: is this allowed? I think that should be discussed, but it seems to me I didn't break any rule directly, though I did break it indirectly. And I also think that the INTENTION of the can't-save-more-than-200KB rule is to not allow more than 200KB in the dir. If this were a vote, my vote would go to forbidding pre-loaded data beyond 200KB. @PEZ: Thanks for your faith in me. -- Axe
Btw: In recent talks, PEZ called my attention to the fact that I didn't have to do it this way, exploiting the flaw without explaining it first. He is right, I think. Sorry for the experiment. But I still think we have to make this rule explicit. -- Axe
Actually the reason that RC has this feature is probably to stop rogue bots from filling our hard disks. But I agree with both of you that it should be disallowed to package more than 200k with your bot. -- PEZ
I vote to disallow _any_ kind of pre-saved data. ;) -- ABC
I can sympathize with that. I have actually suggested that before, with you voting against it. But to do it now would be to change the rules too much. And place an extra burden on the participants to remember to clean their data dirs before release. (I saw your smiley, but since I think this rule should be pondered I answered as if it was a serious suggestion.) -- PEZ
I have always been against pre-saved data. I think a bot must be able to adapt to the environment and not rely on pre-acquired knowledge. That smiley was exactly because of that fact, I'm against it but have learned to live with it. This POD exploit, otoh, is a bit too much, I think the RR client should wipe out the data dir from the packaged bot before the first run. -- ABC
I'm for it. -- PEZ
I also don't much like pre-saved stuff... -- Axe
In my opinion, the 200k rule is a pretty limiting ...eeehmm... limitation. And I think it WAS the intention of Mat to make sure with this rule data saving would not be too easy. I think it is very interesting and challenging to get the most out of 200k so I would vote for allowing data saving and preloading, if there was a vote. By the way, I saw this discussion coming, but expected it sooner after BeeWT hit the charts. --Vic
Yeah. BeeWT is a bit too interesting to disallow just yet maybe. It would be cool to at least try to push more and see what happens. Wiping data dirs is easier said than done. I can imagine bots that can't handle it gracefully. It would also be a step away from Mat's intentions since he obviously chose to let the RC packager include data collected by the dev bot. So I'm not that much for wiping any data dirs after all. =) -- PEZ
BeeWT's data saving strategy is very interesting indeed, but it could surely be made to collect it from real world battles, just like most other bots do. It is basically a great "crib sheet" saving method. @Vic: We already had this discussion before, more than once. I have never used (and never will) pre-saved data because I don't like the idea, but everyone is free to do whatever they want. And I enjoy it even more when I manage to beat a bot that does use it... ;) -- ABC
So do I. So do I. -- PEZ
Ok, I rethought my position, and I vote to allow pre-saved data. Pre-saved data could be home to lots of interesting solutions & possibilities, which I don't think it's fair to prohibit. Of course, only if under 200K... -- Axe
I didn't use data saving in any of my bots at all. Still I'd allow bots to use this feature. Moreover I think that the limit should be above the 200k, for me a dynamic limit of ''number of bots in the rumble * 1K'' would sound more reasonable. And in case of breaking the limit the client should wipe the dir. --lRem
@ABC: Yes, it's easy to make it save data from the actual battles. In fact, I purposely disabled that function in BeeWT 1.2. However, the quality of the data would not be as good as when you run 100+ rounds instead of 35. And as the design is now, you would be stuck with the Nodes and corresponding GF's found after those 35 rounds for all battles after that. But that could be changed. @lRem: At this point we can't (legally) change the Robocode client, we can only do something about the RoboRumble client. And cheating the RC package using the POD method doesn't seem right. Your idea is good though. We should remember it when RC ever becomes OpenSource. --Vic
Actually this limit is configurable via a config parameter. The RR client could probably change the limit at will. -- PEZ
Yes. Limitations is the source of all inventions. -- PEZ
In fact I meant the client changing the Robocode properties file. This can be done quite easily, in fact much more easily than wiping the dirs, which should be implemented in any case. Maybe we should vote on that limit? -- lRem
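Having the RR client rewrite the properties file could look something like this minimal sketch. The class name and the property key "robocode.robot.filesystem.quota" are assumptions for illustration, not confirmed Robocode internals:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Properties;

public class QuotaPatcher {
    // Rewrites the quota entry in robocode.properties, keeping any other
    // entries intact. The key name is an assumption and may differ per version.
    public static void setQuota(File propsFile, long bytes) throws IOException {
        Properties props = new Properties();
        if (propsFile.exists()) {
            try (InputStream in = new FileInputStream(propsFile)) {
                props.load(in);
            }
        }
        props.setProperty("robocode.robot.filesystem.quota", Long.toString(bytes));
        try (OutputStream out = new FileOutputStream(propsFile)) {
            props.store(out, "updated by the RR client");
        }
    }
}
```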
Table added for this vote. --Vic
@Vic: There are over 200 bots in RR@H right now, so your votes seem contradictory... :-p --David Alves
Trying to interpret Vic's vote I have changed the suggestion texts. Change back if it is now wrong or isn't clearer than before. Change your vote if your vote isn't valid for the current text. -- PEZ
I'm fairly new to Robocode - I still have the intention to do some kind of AI based on NeuralNetworks? - therefore it's necessary to preload bots, sometimes with more than 200K. Nevertheless it would be a nice feature to get back some saved data from other RR@home hosts... what's your opinion on this? Any ideas how to manage this? -- bjd
Personally I think the questions as stated are a bit too limited. So I haven't actually voted, since I found mayslef wanting to say "maybe" in each case :)
There is absolutely no reason why data should not be pre-packaged with a bot. Any sort of config/properties file is an example of this.
In my mind ethical issues start to arise depending on the nature of the data. For example, if it is purely the normal data that a bot can calculate for itself and save, pre-packaging is not actually giving any advantage, it merely speeds up the learning curve of the bot. Presumably its rating would stabilise at the same level in the end. (My opinion would be different for a fixed-duration league however).
Although I personally prefer the idea of self-contained bots that learn for themselves from scratch every battle (in fact I've not written a bot yet that even stores any data between rounds!), bots exhibiting longer term learning seem quite an interesting area to explore as well. So far so good.
If however data is being stored that cannot be derived by the bot itself I start to seriously question it. i.e. if somebody wrote a bot that had an array of guns, but didn't have any means of its own such as virtual bullets and virtual hit/miss statistics to choose which was best against different targets, and hand-crafted a data file saying what gun to use against what specific named opponent, then I would have a problem with that approach. Not really any different to hard-coding enemy bot names into if statements in the code!
If stored data falls into my "ethical" category, then I'm not too bothered how much of it there is. It is prudent though to have some sort of ruling just to encourage people to not be too profligate with diskspace, in a similar way to the restrictions on cpu time per turn. -- Shrubbery
What I will always say is that the save quota (if any) should depend on the number of participants. To put it simply: whatever limit you set for the data saved by one bot, one day it will be insufficient for the rumble if a static limit is applied. And any special algorithms for deciding which opponents deserve to have their data saved are possible only in megabots. --lRem
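lRem's earlier "number of bots in the rumble * 1K" proposal is simple enough to sketch; the class and method names here are made up for illustration:

```java
// Sketch of the proposed dynamic quota: scale the per-bot data limit
// with the number of rumble participants instead of a fixed 200K.
public class DynamicQuota {
    static final int BYTES_PER_PARTICIPANT = 1000;

    static int dataQuota(int participants) {
        return participants * BYTES_PER_PARTICIPANT;
    }

    public static void main(String[] args) {
        // With ~370 participants the quota would be 370,000 bytes,
        // comfortably above today's fixed 200,000-byte limit.
        System.out.println(dataQuota(370));
    }
}
```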
Well, I'm not totally sure how the data thing works (can bots continue to add data once they are submitted to the RR?). But if we are talking about simply preloaded data, then you can turn those data files into classes and simply pull from those, without it counting, thus allowing incredibly detailed crib sheets to be made. In fact (while in poor taste), if you are up against a bot that does not make any random calls, you could have a copy of that bot's code and be able to counteract its targeting and movement using that. In a way, I would say the ultimate bot, the HolyGrail of targeting and movement, would be one that can create those code profiles itself without you copying them in. That is actually something I have been toying around with. If you fix the starting positions, a bot that does not make random calls will perform exactly the same way every battle (provided the opponent does not make random calls either). So by extensive analysis, one could feasibly recreate the code of your opponent. This would be the ultimate form of pattern matching. It would probably become infinitely complex to recreate even the mid-range bots, but it could perform excellently against any open-source bot that has been preloaded (as long as you can solidify those first 8 ticks). Sorry that this post has kind of degenerated from a comment about data size into a bot idea... -- Jokester
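The crib-sheet idea above can be sketched very roughly: against a fully deterministic opponent with fixed starting positions, the enemy occupies the same position on the same turn in every battle, so a recorded trace doubles as a perfect predictor. Everything here is hypothetical illustration, not an actual bot:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: record a deterministic opponent's positions keyed
// by turn number, then replay the trace as a "perfect" prediction.
public class CribSheet {
    // turn number -> recorded enemy (x, y)
    private final Map<Integer, double[]> trace = new HashMap<>();

    // While learning: store what the enemy actually did on this turn.
    public void record(int turn, double x, double y) {
        trace.put(turn, new double[] { x, y });
    }

    // While replaying: predict the enemy position at a given turn.
    // Returns null once the battle runs past the recorded trace.
    public double[] predict(int turn) {
        return trace.get(turn);
    }
}
```

In practice the trace would break as soon as either bot makes a random call or the starting positions differ, which is exactly why the first 8 ticks matter in Jokester's description.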
I think we should limit the number of bots you are allowed to upload! It's no fun to wait 8h for 300 rounds :( Especially if you are the one running the client. I suggest that you should only be allowed to upload a new bot if all your other bots have more than 999 battles. --Krabb
I agree. This goes hand-in-hand with the unspoken rule of good taste: Please test your bots. The recent flurry of florent.* bots contained more than one version that simply threw an error and quit or stopped responding. It's depressing to watch your RoboRumble@home client churn through battles to stabilize these bots when you know they'll be replaced in less than 24 hours. It needs to be stressed that the RoboRumble is the competition - not the testing ground. --Corbos
While I agree that we need to have a limit: I think that you should only be allowed to post a new version of bot once/week (168 hours). This way, you can't use the RR@H for testing, but can still post new bots as you make them --UnderDark
I agree with Krabb; the limitation of one update per week I find too strict. Sometimes when I have time I want to put my ideas into action and improve them as necessary. Although my testbed becomes bigger and bigger, I only test against bots that I think (read: wish) I am improving against. You cannot expect someone to test against 370 bots, so some unexpected results (which will result in bugfixes) are still possible. One note: having fought 999 battles does not guarantee that all pairings have been fought. The unfilled pairings (see PL ranking) will be fought when no priority battles are present anymore. --GrubbmGait
Maybe. Although I think there should be some limit, during the summer and certain other times I have much more time and try to get my latest robot revision up quickly so that I can see how it fares. -- Kinsen
Can we change the way the melee chooses its targets to include at least 2 "new" bots and perhaps 3? This would speed up the new melee bot rating process significantly. Melee is slow enough to run without having to wait forever to chew through new bots. It would also be nice to have a couple of Mini/Micro?/Nano?-only rounds to help out those rankings as well. --Miked0801
I agree that it might skew the ratings. If you want the rating faster, just ask people to run RRAH. I'd be happy to, and I know others would too. I'll start some clients now. --David Alves
For the next version of Robocode, could it be possible to display the codesize of the compiled bot within the compile popup box? That would save me running a batch file every time I compile.
I have an ant script that has a target for "check codesize" and a target for "make jar". Ant integrates pretty well into eclipse. I can post my script on the wiki if you like. (Actually I think there may already be a version at Ant...) --David Alves
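A minimal sketch of what such an Ant target might look like; the jar location, the target names, and the invocation of the common codesize utility are all assumptions to adjust for your own setup:

```xml
<!-- Hypothetical "check codesize" target: runs the codesize utility
     against the packaged bot jar after the "jar" target builds it.
     Paths and the jar name are placeholders. -->
<target name="codesize" depends="jar">
    <java jar="lib/codesize.jar" fork="true" failonerror="true">
        <arg value="dist/MyBot.jar"/>
    </java>
</target>
```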