Lukious/Rewrite
I'm re-deriving Lukious from post-rewrite Dookious. Simonton demanded I post some TC results by today, so here are the latest. I'll clean this up later.

TargetingChallenge2K7/Results (500 rounds):

Name             Gun    | CC    RMX   SHA   WS    WOE   Surf  | DM    FT    GG    RMC   WLO   No Surf | Total | Comment
Lukious 1.19.10  DC/GF  | 68.03 74.49 64.95 69.13 69.21 69.16 | 87.11 77.76 82.63 77.79 81.97 81.45   | 75.31 | 3 seasons
Lukious 1.19.13  DC/GF  | 72.94 76.93 69.59 76.67 72.84 73.79 | 87.99 79.71 84.85 78.04 80.63 82.24   | 78.02 | 3 seasons
Lukious 1.19.15  DC/GF  | 69.82 76.40 71.37 75.43 69.87 72.58 | 87.66 78.94 83.03 76.44 79.56 81.13   | 76.85 | 3 seasons
Lukious 1.19.16  DC/GF  | 71.21 76.80 70.67 76.96 71.50 73.43 | 88.06 78.39 83.55 77.13 81.40 81.71   | 77.57 | 3 seasons
Lukious 1.19.20  DC/GF  | 70.95 78.45 72.45 77.76 71.60 74.23 | 86.83 78.06 85.96 77.66 79.59 81.62   | 77.93 | 3 seasons
Lukious 1.19.24  DC/GF  | -     -     -     -     -     -     | 87.79 77.84 84.14 77.65 81.27 81.74   | -     | 3 seasons

TargetingChallenge2K7/ResultsFastLearning:

Name             Gun    | CC    RMX   SHA   WS    WOE   Surf  | DM    FT    GG    RMC   WLO   No Surf | Total | Comment
Lukious 1.19.10  DC/GF  | 67.87 75.65 61.07 71.89 68.87 69.07 | 87.47 78.77 80.88 78.29 81.64 81.41   | 75.24 | 40 seasons
Lukious 1.19.13  DC/GF  | 70.65 77.40 64.45 76.03 71.93 72.09 | 86.77 80.14 81.91 78.17 81.73 81.74   | 76.92 | 40 seasons
Lukious 1.19.20  DC/GF  | 69.49 76.99 68.53 75.58 70.63 72.24 | 86.63 78.52 82.71 77.86 79.41 81.03   | 76.63 | 40 seasons
Lukious 1.19.24  DC/GF  | 68.39 76.93 67.22 74.21 69.14 71.18 | 86.23 76.21 82.17 75.34 79.92 79.25   | 75.58 | 50 seasons

TargetingChallenge2K6/Results (500 rounds):

Name             Gun    | BFly  CC    Chk   Cig   Cya   DM    FM    Grb   RMB   Tig   | Total | Comment
Lukious 1.19.10  DC/GF  | 99.76 67.58 94.57 87.73 77.83 93.31 89.75 87.44 90.99 83.54 | 87.25 | 4 seasons

Version Notes:


Some info on the above:

A couple key speed improvements I've learned along the way:

-- Voidious


What exactly do you mean by "keeping the list of closest scans sorted"? Closest in terms of match closeness, time, or what? I'm not seeing how any of those gives you a speedup; you still have to check all stored scans, right? --David Alves

Ok, my bad for being so terse. Yes, match closeness. And yes, you still have to check all the scans. It's just one element - the updating of maximum distance among the closest scans - that is sped up here. As you're iterating through the 20,000 (or whatever) total scans, you need to keep track of what the maximum distance in the closestScans collection is, so that you can check if the distance for the current scan (in the loop through the 20,000) is less than that maximum; if it is, you sub the new scan into the closestScans array and then update the new maxDistance. I used to just iterate through all 50 elements in closestScans to find the maximum again after I changed one out. Now, I keep the closestScans list sorted by distance; if a new scan has a distance < maxDist, I start from the end of closestScans, shuffle the existing scans up, and insert the new scan into the correct spot. So maxDistance is just the distance on the last element in closestScans.

So you have far fewer total iterations in all those recalculate-maxDistance loops and a lot fewer data reads, but a lot more data writes (updating references in closestScans). I actually wouldn't be 100% sure off the top of my head which "should be" worse, but I was able to go from ~10,000 to ~25,000 scans without skipping turns after I made that change.

-- Voidious
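The sorted-insert scheme described above might look something like this minimal sketch. The class and method names (ClosestScans, offer) are my own illustrations, not Lukious's actual code; the idea is just that keeping the cluster sorted by distance makes the maximum always the last element, so a replacement is one insertion shuffle instead of a full rescan for the new maximum.

```java
import java.util.Arrays;

// Illustrative sketch: a fixed-capacity "closest scans" cluster that stays
// sorted by distance, so maxDistance() is just the last element.
public class ClosestScans {
    private final double[] distances; // sorted ascending
    private int size = 0;

    public ClosestScans(int capacity) {
        distances = new double[capacity];
    }

    // Current maximum distance in the cluster: the last element.
    public double maxDistance() {
        return size == 0 ? Double.POSITIVE_INFINITY : distances[size - 1];
    }

    // Offer a candidate distance. Instead of replacing an element and then
    // rescanning all k entries to re-find the maximum, shuffle the larger
    // entries up and drop the new one into its sorted position.
    public boolean offer(double d) {
        if (size == distances.length && d >= maxDistance()) {
            return false; // not closer than the current worst member
        }
        // If full, the last (worst) entry gets overwritten by the shuffle.
        int i = (size < distances.length) ? size++ : size - 1;
        while (i > 0 && distances[i - 1] > d) {
            distances[i] = distances[i - 1];
            i--;
        }
        distances[i] = d;
        return true;
    }

    public static void main(String[] args) {
        ClosestScans c = new ClosestScans(3);
        c.offer(5.0); c.offer(1.0); c.offer(3.0);
        System.out.println(c.maxDistance()); // prints 5.0
        c.offer(2.0); // evicts the 5.0 entry
        System.out.println(c.maxDistance()); // prints 3.0
    }
}
```

Each offer is still O(k) in the worst case, but it replaces both the per-replacement rescan and most of the reads, which matches the trade-off Voidious describes (fewer reads, more writes).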

Thanks for the update! I know that the non-surfer scores are way more important, but improving the surfer scores is so fun! I don't understand the whole Gaussian distribution thing; I may have to look into that, just to learn what it means. But a note ... keeping your cluster sorted could make each update O(log n), if you used a tree instead of an array to store them. -- Simonton
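As a sketch of Simonton's tree suggestion (again with illustrative names, not anyone's actual code): a bounded max-heap gives O(log n) eviction of the worst cluster member. Java's PriorityQueue is a binary heap, and a reversed comparator turns it into a max-heap keyed on distance.

```java
import java.util.Collections;
import java.util.PriorityQueue;

// Illustrative sketch: a capacity-bounded max-heap of cluster distances,
// so the worst (largest-distance) member is always at the top.
public class HeapCluster {
    private final PriorityQueue<Double> worstFirst =
        new PriorityQueue<>(Collections.reverseOrder()); // max-heap
    private final int capacity;

    public HeapCluster(int capacity) {
        this.capacity = capacity;
    }

    // The heap's top is the maximum distance in the cluster.
    public double maxDistance() {
        return worstFirst.isEmpty() ? Double.POSITIVE_INFINITY : worstFirst.peek();
    }

    public void offer(double d) {
        if (worstFirst.size() < capacity) {
            worstFirst.add(d);     // O(log k)
        } else if (d < worstFirst.peek()) {
            worstFirst.poll();     // evict current worst, O(log k)
            worstFirst.add(d);
        }
    }
}
```

With cluster size 50 the difference between O(k) shuffles and O(log k) heap updates is modest, but the bound scales better if the cluster grows.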

Added another nice speed optimization to my list of tips above, took me from 25k to 40k scans. -- Voidious

Added info for 1.19.20 and noted for 1.19.15 that I'd enabled a bunch of attributes that weren't previously enabled. I think all these extra attributes might be a problem - some of them share mutual information, yet they are being weighted individually, as if they were all independent variables. Thus in 1.19.21, 5 of them are disabled. Also, the entropy-based weights in 1.19.21 are not quite linear (Math.pow(x, 0.75), so not quite square root either). Fingers crossed =) I know entropy is a widely used way of judging what we want to judge - the information gain of a specific attribute in a data set - but I'm starting to doubt it is really what I want to use to determine my dynamic weights. I'll stick with it a little longer... -- Voidious
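As a rough illustration of the entropy machinery mentioned here: the only detail taken from the post is the Math.pow(x, 0.75) mapping; the Shannon-entropy and information-gain formulation below is a standard textbook version, assumed rather than taken from Lukious, and all names are hypothetical.

```java
// Illustrative sketch: information gain of an attribute as the drop in
// Shannon entropy after splitting the data on that attribute, mapped
// through the not-quite-square-root curve mentioned for 1.19.21.
public class EntropyWeights {
    // Shannon entropy (base 2) of a normalized probability distribution.
    static double entropy(double[] p) {
        double h = 0;
        for (double pi : p) {
            if (pi > 0) {
                h -= pi * Math.log(pi) / Math.log(2);
            }
        }
        return h;
    }

    // Information gain: H(whole) minus the weighted average entropy of the
    // splits, where splitWeights[i] is the fraction of data in split i.
    static double infoGain(double[] whole, double[][] splits, double[] splitWeights) {
        double gain = entropy(whole);
        for (int i = 0; i < splits.length; i++) {
            gain -= splitWeights[i] * entropy(splits[i]);
        }
        return gain;
    }

    // Non-linear gain-to-weight mapping, per the 1.19.21 note.
    static double weight(double gain) {
        return Math.pow(gain, 0.75);
    }
}
```

A perfectly informative split (each half pure) of a 50/50 distribution yields a gain of 1 bit; an uninformative split yields 0. Note that, as the post worries, gain computed per attribute double-counts attributes that share mutual information.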

I dunno if I understand DC-guns correctly, but what about segmenting your scans (or should I say clustering :) )? Searching over ~20k scans gives me a headache :/ You wrote something about sorting your scans above, just ignore this post if you meant exactly this... I think it could be done in the same way we segment our GF stats: put every scan in a multidimensional array (with as many dimensions as you have attributes; a multidimensional array of ArrayLists would do the job). If you want to find the closest scan to your current scan, just search in segments around the segment in which the current scan is located. That should save you a lot of needless searching :D You could first calculate the distance of your current scan to the "middle" of each nearby segment (I hope you get what I mean) and then search in all segments with a short distance. Maybe it is faster to store your scans in a tree structure, but I don't know much about woods :P --Krabb

kd-trees (http://en.wikipedia.org/wiki/Kd-tree)? --Corbos
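For what Krabb's bucketing idea might look like in its simplest form, here is a sketch reduced to one attribute (all names are illustrative, and real DC guns would need this across many attributes, which is where kd-trees come in): scans are binned by attribute value, and a nearest-neighbor query widens outward from the query's bucket, stopping once no farther ring of buckets can beat the best distance found.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: one-attribute segmented scan log with a
// ring-widening nearest-neighbor search.
public class SegmentedLog {
    private static final int BUCKETS = 10;
    private final List<List<Double>> buckets = new ArrayList<>();

    public SegmentedLog() {
        for (int i = 0; i < BUCKETS; i++) {
            buckets.add(new ArrayList<>());
        }
    }

    // Attribute values are assumed normalized to [0, 1).
    private int bucketOf(double v) {
        return Math.min(BUCKETS - 1, (int) (v * BUCKETS));
    }

    public void add(double v) {
        buckets.get(bucketOf(v)).add(v);
    }

    // Scan the home bucket, then rings of buckets outward. Any value in a
    // bucket r rings away is at least (r - 1) / BUCKETS from the query, so
    // we can stop once the best find already beats that bound.
    public double nearest(double q) {
        int home = bucketOf(q);
        double best = Double.NaN;
        double bestDist = Double.POSITIVE_INFINITY;
        for (int r = 0; r < BUCKETS; r++) {
            if (bestDist <= (r - 1) / (double) BUCKETS) {
                break;
            }
            for (int b = home - r; b <= home + r; b += (r == 0 ? 1 : 2 * r)) {
                if (b < 0 || b >= BUCKETS) continue;
                for (double v : buckets.get(b)) {
                    double d = Math.abs(v - q);
                    if (d < bestDist) {
                        bestDist = d;
                        best = v;
                    }
                }
            }
        }
        return best;
    }
}
```

With many attributes a flat grid explodes combinatorially (buckets^attributes cells, most empty), which is exactly the problem the kd-tree suggestion above addresses: it only subdivides where data actually lives.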

Surprisingly, the methodology I use in 1.19.24 is not that much code or that ugly. It is about as intensive as aiming. With 10 attributes, I'm doing 54 getClosestScans (10 with one attribute, 44 with two), cluster size 50 and log size 1,000, and an equal number of entropy calculations. The results are nothing too impressive, though. My next test is using this system for the info gain but not doing all the mutual information stuff - this also lets you use much bigger log size for the initial information gain calculation. -- Voidious


Edited September 29, 2007 0:07 EST by Voidious