RougeDC/TargetingLab

Well, inspired by Simonton/DCResearch and my poor targeting results, here's a page where I'll document my progress with the gun in RougeDC, including versions that won't get to the RoboRumble.


TargetingChallenge2K7/ResultsFastLearning
RR Version TC Version CC RMX SHA WS WOE Surf DM FT GG RMC WLO NoSurf Total Comment
Alpha15 TC_RAIKO 58.51 56.58 58.22 61.92 62.85 59.61 86.77 80.17 82.11 78.24 86.28 82.71 71.16 103.0 seasons
Beta3 TC01 51.03 69.51 59.16 60.15 58.35 59.64 87.59 73.75 71.49 74.19 76.96 76.80 68.22 15.0 seasons
TC02 51.91 65.88 58.87 59.57 58.91 59.03 86.23 73.59 74.34 73.63 76.73 76.90 67.96 16.0 seasons
TC03 51.38 66.41 61.82 62.30 63.72 61.13 82.49 77.31 69.67 74.78 70.66 74.98 68.05 15.0 seasons
TC04 59.06 76.54 62.82 76.33 79.53 70.86 81.53 73.99 69.21 70.64 75.76 74.23 72.54 15.0 seasons
TC05 51.68 70.76 60.72 64.79 61.43 61.87 84.44 75.47 70.25 74.26 75.08 75.90 68.89 25.0 seasons
TC06 65.00 83.24 64.38 85.72 78.58 75.38 83.22 76.74 74.88 73.29 76.85 77.00 76.19 16.0 seasons
TC07 - - - - - - 87.08 78.54 80.34 79.17 81.29 81.28 - 25.0 seasons
TC08 - - - - - - 86.30 79.08 81.54 78.96 81.22 81.42 - 15.0 seasons
TC09 66.40 79.80 62.69 76.97 69.78 71.13 86.77 79.88 80.57 78.40 81.66 81.46 76.29 114.0 seasons
TC10 - - - - - - 85.35 78.68 79.82 77.89 79.65 80.28 - 16.0 seasons
TC11 - - - - - - 87.86 80.48 80.89 77.91 82.85 82.00 - 115.0 seasons
TC12 - - - - - - 86.46 79.45 80.04 77.90 82.66 81.30 - 18.0 seasons
TC13 - - - - - - 87.23 79.39 79.80 77.34 82.81 81.32 - 15.0 seasons
TC14 - - - - - - 87.42 81.56 78.34 78.25 83.25 81.76 - 15.0 seasons
TC15 - - - - - - 86.81 80.17 81.13 78.41 82.32 81.77 - 119.0 seasons
Beta4a TC16 65.64 77.91 63.22 75.33 69.38 70.29 87.28 80.27 80.38 78.04 83.07 81.81 76.05 183.0 seasons
TC17 68.43 80.32 61.73 77.52 70.96 71.79 86.76 80.27 80.13 79.95 83.13 82.05 76.92 43.0 seasons
TC18 67.57 78.95 62.13 76.98 71.35 71.40 85.91 80.49 80.62 79.30 84.18 82.10 76.75 125.0 seasons
TC19 68.23 80.74 63.20 76.94 72.04 72.23 86.46 81.15 81.40 79.62 83.11 82.35 77.29 107.0 seasons
Gamma1 TC20 69.08 80.25 63.84 77.69 72.42 72.66 86.38 80.86 81.74 78.49 84.12 82.32 77.49 113.0 seasons
TC22 68.84 80.82 63.79 77.66 71.27 72.48 86.46 80.16 80.38 78.90 83.34 81.85 77.16 45.0 seasons
TC23 70.03 85.56 65.06 83.92 83.19 77.55 82.44 77.40 76.26 75.64 79.76 78.30 77.92 41.0 seasons
TC24 69.58 82.56 65.77 83.49 83.99 77.08 81.69 74.85 74.42 73.41 77.20 76.31 76.70 16.0 seasons
TC25 69.49 76.97 66.56 75.55 72.74 72.26 78.39 74.85 82.83 75.63 80.70 78.48 75.37 2.8 seasons
TC26 73.55 86.94 65.82 87.70 83.70 79.54 85.75 79.59 79.90 79.04 83.08 81.47 80.51 84.0 seasons
Gamma2 TC27 73.73 87.15 66.57 86.48 82.45 79.28 86.45 80.40 80.54 78.47 82.06 81.58 80.43 60.0 seasons

TargetingChallengeRM/Results
RR Version TC Version Aspd Sprw Fhqw Yngw FlMn EASY Tron HTTC RnMB DlMc Grbb MEDIUM SnDT Cgrt Frtn WkOb RkMc HARD TOTAL Comments
Gamma1 TC20 91.65 97.50 94.84 96.50 92.18 94.53 87.27 87.86 87.94 85.98 80.64 85.94 75.15 83.55 80.23 83.65 79.68 80.45 86.98 50.0 seasons
Gamma2 TC27 89.34 96.95 94.67 96.45 92.79 94.04 86.32 87.37 88.09 86.45 80.54 85.75 75.18 83.13 80.40 82.06 78.47 79.85 86.55 53.0 seasons
TC28 88.47 95.21 94.68 96.50 91.95 93.36 85.87 87.15 87.48 85.47 77.14 84.62 73.13 82.95 79.88 81.93 79.16 79.41 85.80 48.0 seasons
TC29 90.81 96.82 95.90 96.55 93.00 94.62 87.39 88.27 87.97 88.27 81.30 86.64 75.29 83.85 79.19 83.99 78.62 80.19 87.15 50.0 seasons
TC31 90.19 97.53 95.29 96.13 92.94 94.42 87.25 88.00 88.69 89.11 80.70 86.75 73.92 84.36 78.30 82.07 77.71 79.27 86.81 50.0 seasons
TC32 92.92 95.97 94.65 96.42 92.50 94.49 87.22 88.35 87.01 87.37 81.61 86.31 74.33 84.20 80.44 82.23 78.47 79.93 86.91 50.0 seasons
PM01 67.30 95.51 92.36 94.58 84.01 86.75 83.76 82.12 87.23 62.91 70.10 77.23 61.72 60.59 70.10 78.35 72.29 68.61 77.53 30.0 seasons
PM05 85.21 96.42 91.85 94.75 87.28 91.10 81.59 80.48 86.69 87.41 76.23 82.48 63.13 74.91 75.52 79.08 76.84 73.90 82.49 30.0 seasons
TC33 89.87 97.53 94.47 96.93 93.59 94.48 87.52 87.94 88.10 89.50 81.54 86.92 74.62 83.87 78.95 82.21 80.63 80.06 87.15 50.0 seasons
TC34 90.57 96.06 94.29 96.23 92.57 93.94 85.83 86.64 88.22 87.75 80.05 85.70 73.47 82.41 79.26 82.81 78.84 79.36 86.33 10.0 seasons
TC36 90.27 97.03 94.41 96.48 92.95 94.23 86.01 88.59 88.41 88.03 81.64 86.54 74.82 84.56 77.90 82.35 77.27 79.38 86.71 50.0 seasons

Custom Surfer Challenge: the TargetingChallenge2K7 surfers, plus Dookious 1.573c, Chalk 2.5.Al, Komarious 1.78b, GresSuffurd 0.2.10, DarkHallow .90.9, and MatchupWS 1.2c
RR Version TC Version CC RMX SHA WS WOE TC2K7 Dooki Chalk Kom Sub1 Gres DH MWS Sub2 Total Comment
Gamma1 TC20 68.80 79.67 64.90 78.63 73.47 73.09 58.31 67.50 84.68 70.16 86.59 76.21 85.07 82.63 75.29 50.0 seasons
Gamma2 TC27 73.84 87.15 66.57 86.48 82.45 79.30 55.08 71.14 90.14 72.12 94.60 84.09 89.21 89.30 80.24 40.0 seasons
Gamma3 TC33 67.35 76.17 62.42 75.21 70.73 70.38 58.68 68.72 82.30 69.90 86.12 74.94 80.77 80.61 73.63 50.0 seasons

PatternMatcherChallenge/PMCIndex
Version PMCIndex Comment
PM01 95.00% 3 seasons
PM02 58.74% 1 season*
PM03 100.02% 1 season
PM04 98.71% 1 season
TC20 97.71% 1 season
PM05 99.32% 1 season

Version History:

Todo:

Discussion

Hmm, one trend I'm finding is that many things I do that help against surfers hurt against non-surfers, and vice versa. It's rather bothersome really. It's looking like I'll either need to go the virtual gun route, OR use dynamic weighting of some kind. What bothers me about dynamic weighting, though, is that I'm not sure how I could make it work with my kd-tree. At best, it would be a very invasive change to my kd-tree to make it support dynamic weightings... not to mention that I'm somewhat dissatisfied with most existing metrics I've seen for dynamically deciding weightings. I think for now I'll first focus on adding some wall segments, because firing out of bounds when CassiusClay has run away to the other wall is tiresome. -- Rednaxela
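
To illustrate why per-query weights and a kd-tree clash: the weights are typically baked into each point's coordinates at insertion time, so every splitting plane in the tree assumes them. A minimal sketch, with hypothetical segment names and weights (not RougeDC's actual code):

    // Weights are multiplied in when the point is built; the tree's splits
    // live in this scaled space, so changing weights later would require
    // rebuilding (or invasively re-scaling) the whole tree.
    static final double LAT_VEL_WEIGHT = 1.0; // hypothetical values
    static final double ACCEL_WEIGHT   = 1.0;
    static final double WALL_WEIGHT    = 1.0;

    static double[] toTreePoint(double latVel, double accel, double wallDist) {
        return new double[] {
            LAT_VEL_WEIGHT * (latVel / 8.0),           // lateral velocity, normalized
            ACCEL_WEIGHT   * ((accel + 2.0) / 3.0),    // accel in [-2, 1] -> [0, 1]
            WALL_WEIGHT    * (1.0 / (1.0 + wallDist))  // bounded wall distance
        };
    }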

Don't put too much weight on the Surfer score - having a strong anti-surfer gun will at most give you a 10-15 point boost. The majority of the bots in the rumble don't have adaptive movement, and they are where you get the majority of your points. If you can increase your gun's performance against non-adaptive movement by 1%, I would say it is worth an 8% drop against surfers. Oh yes, a *very* useful segment is time-since-deceleration, or time-since-lateral-direction-change, possibly divided by bullet flight time to make it relevant at all distances. -- Skilgannon
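
A sketch of that last idea (names assumed; bullet speed follows standard Robocode physics): dividing by flight time turns the raw tick count into a fraction of the time the enemy has to react, which is what makes it distance-independent.

    // Time-since segment normalized by bullet flight time, so it means
    // the same thing at close and long range.
    static double timeSinceSegment(long ticksSinceDirChange, double distance, double firePower) {
        double bulletFlightTime = distance / (20 - 3 * firePower); // Robocode bullet speed
        return ticksSinceDirChange / bulletFlightTime;
    }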

Well, while a strong anti-surfer gun doesn't give much of a bonus in ELO rating, I think it can still be very worthwhile for Premier League score (which I am rather interested in), because that tends to be more about beating as many different bots as you can in 1v1, as opposed to trashing the weak bots by as big a margin as possible. Right now, my goal is to make the gun as strong as I can in both categories, and then see how well I can make some sort of dynamic weighting tune against both sorts (and if not, split the gun into two versions and use virtual guns). About time-since-lateral-direction-change segments and such: yes, I have heard good things about those, and they are up soon on my list of things to try :) -- Rednaxela
Update: Well, I just added the results with the Raiko gun above to compare. As I had been expecting, it scored a fair bit lower against the surfers but a fair bit higher against the non-surfers; in fact, its non-surfer score is among the best in TargetingChallenge2K7/ResultsFastLearning. Hmm, I suppose I'll do some tests with the non-surfers only, primarily because that will speed the test runs up (waiting at least 2 hours to get results is kind of tiresome). -- Rednaxela

By the way Skilgannon, I'm currently experimenting with the time-since segments you noted. One interesting thing I found, though, is that dividing by bullet flight time makes it perform even worse than having no time-since segments at all. I haven't been able to get much out of the time-since segments either way yet... still experimenting. -- Rednaxela

Hmm, lately the numbers I'm getting seem bothersomely similar for the most part. I wonder if the 15 or even 24 seasons I've been running is enough to get a reasonably noise-free result. I'm going to run a much larger number of seasons on version TC09 overnight and see how that goes. I hope I won't have to start using more seasons to get useful results, because even 15 seasons takes longer than I'd like on my not-quite-state-of-the-art Core 2 Duo 1.7GHz laptop. Unfortunately, the only other computers I could possibly use to augment the processing speed are an old AMD Athlon 2000+ and, if I'm lucky, perhaps an Athlon 2400+ too, which even together would probably at best double the rate I can run seasons. On top of that, I can't get RoboResearch to work in a distributed way for some reason. Processing power limitations are rather bothersome; they really make me wish I had access to a computer lab like Simonton did during one stage of his DCResearch. -- Rednaxela

Another suggestion for clustering: try making some of the dimensions non-linear. The best example I can think of is the time-since segments - take the sqrt of the value before putting it into your tree, so that for short times small differences have a large effect, but for big values of time-since they don't make as much of a difference (think of a sqrt graph: it flattens out as the x value gets higher, so the delta-y is smaller for the same delta-x). -- Skilgannon

Actually, I already make some of my segments non-linear in a way much like that. In particular, my wall-distance and time-since segments go through the function f(x) = 1 / (1 + x), which should have a similar effect to the sqrt one except it's always bounded between 0 and 1 (note: like sqrt, it only makes sense for x greater than or equal to 0) :) -- Rednaxela
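
Concretely, the two transforms discussed here might look like this (a sketch; the max used to scale the sqrt version is an assumption):

    // Both compress large values, so small differences near zero matter
    // more than the same differences at large values.
    static double sqrtTransform(double timeSince, double max) {
        return Math.sqrt(timeSince / max);  // Skilgannon's suggestion; in [0,1] for timeSince <= max
    }

    static double reciprocalTransform(double timeSince) {
        return 1.0 / (1.0 + timeSince);     // f(x) = 1/(1+x); always in (0,1] for x >= 0
    }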

Hmm, after re-running test 09 with 114 seasons, it appears my earlier result with that test was partially luck... So this means that 24 seasons is not enough, and to be honest I'm not sure that even 114 seasons is really enough for a decently accurate value with some of the smaller changes I've been working with. It's times like these that I wish I had either a faster computer or access to a larger number of them. -- Rednaxela

Huh... one thing that's interesting is that, based on the roborumble results, even the rather crappy-against-everything Beta3 gun (TC01) seems to score better than Raiko against one specific rare subset of bots: DC-surfers. In particular, RougeDC Beta3 is scoring at least 5 or 10 points higher against Lukious, Hydra, Horizon, and Firebird, all of which are DC-surfers, than Alpha15 did... I wonder what it is about my gun that gives some DC-surfers so much trouble when it's so harmless to most other things... -- Rednaxela

Well, RougeDC Beta4a seems to be doing pretty well so far. Tomorrow morning I'll have the TC test corresponding to it (TC16) done, but it should be nearly exactly the same as TC11. I'm getting a little tired of trying endless tweaks, so now I'll try to add a 'secret' ingredient. If anyone is curious I might say more, but for now I'll keep it a surprise until I've run a test on it. :) -- Rednaxela

Haha! I've successfully combined a DC/GF gun and a PM gun in a way that makes a better gun! Now if only I could get the MultipleChoice feature into the PM gun! I'm curious - does anyone know if anyone has put both GF and PM in the same bot (other than with VirtualGuns) before? -- Rednaxela

Nope, although I wouldn't be surprised if SandboxDT had some sort of mixture =). I've mixed DC with VCS for movement (not that it worked), but that's as far as it's gone AFAIK. PEZ always wanted to, and it was one of my earlier ideas too (look at the WaveSurfing/Goto page). I'm curious - are you weighting each gun based on its individual hitrate and on how sure the gun is of being correct (number of overlapping GF ranges / height of spike for DC/VCS, match length for PM)? Because long ago, before I got distracted with perfecting my surfing, that was what I was planning to do =) Seems you beat me to it =) -- Skilgannon

Well, currently I weight by a rolling rate-like value based on "the maximum height of the profile within the GF range the enemy covered". So basically it's rather similar to a rolling hitrate; however, it accounts for the fact that any intersection has at least some value as far as getting a better MultipleChoice result is concerned. I'm not 100% satisfied with my weighting scheme, though. For one, the way I normalize the max height prevents how confident a targeting method is of itself from being taken into account, so I'm intending to fix that some time by normalizing by the area of the profile instead of the maximum height (which will also make the handling more mathematically consistent with how probability curves should be dealt with). While I'm not completely happy with the weighting right now, it does seem capable of extracting targeting value from something as mundane as CircularTargeting, so it's clearly not bad either. I also believe that the way I use perfect-precision (perfect botwidth) GF ranges is part of what makes my MultipleChoice use effective, due to how it finds peaks that less accurately inclusive GF range calculations may not see. By the way, I see you made a comment about match length for PM, which I find interesting. I haven't implemented it quite yet, but for my soon-to-be-multiple-choice PM gun I believe I've also come up with a way (which I think is statistically sound) to calculate a probability/confidence in a PM match, which while generally higher for longer matches can occasionally be highest for a match which isn't quite the longest; I hope to use that to prioritize different possible permutations of the future in the MC-PM. -- Rednaxela
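
A sketch of that rolling "rate-like" weighting, with assumed names and an assumed rolling rate (the real details are RougeDC's own):

    // When a wave breaks, credit a gun by the normalized maximum height of
    // its profile within the GF range the enemy covered, then fold that
    // into an exponential rolling average. Any intersection earns credit.
    static final double ROLL_RATE = 0.05; // assumed

    static double updateGunWeight(double oldWeight, double[] profile, int gfLow, int gfHigh) {
        double maxInRange = 0, maxOverall = 1e-9; // epsilon guards an all-zero profile
        for (int i = 0; i < profile.length; i++) {
            maxOverall = Math.max(maxOverall, profile[i]);
            if (i >= gfLow && i <= gfHigh)
                maxInRange = Math.max(maxInRange, profile[i]);
        }
        double credit = maxInRange / maxOverall; // normalized by max height
        return oldWeight + ROLL_RATE * (credit - oldWeight);
    }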

Gah! I finally got my MultipleChoice SingleTick PatternMatcher working, only to discover that it is FAR too slow (45 minutes to run 3 seasons of TargetingChallenge2K7/ResultsFastLearning)! I think I unfortunately need to do one of the following:
- Go the route of just improving the non-MultipleChoice SingleTick PatternMatching
- Go the route of non-SingleTick MultipleChoice PatternMatching
- Or find some MASSIVE performance optimization...
In any case... I'm rather disheartened now... :( -- Rednaxela

I was thinking it might be slow, but that is SLOW! Although, thinking of the algorithm: regular pattern matching has one nested loop (to find the match), followed by an inline loop (to rebuild the data), therefore x*x + x iterations. SingleTick has a nested loop (find the first match) followed by a loop within a loop (find another match for each tick) within a loop (rebuild the data), therefore x*x + x*x*x - it's bound to be a lot slower. Put the entire thing in a loop again (multiple choice) and it's just going to slow down even more (x*x*x + x*x, and x*x*x + x*x*x*x respectively). Thus, I guess MultipleChoice with a regular PatternMatcher would be about as fast as a SingleTick PatternMatcher, because they have the same depth of nested loops - make sense? It's kind of like doing limits of rational functions in calculus =) -- Skilgannon
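
Illustrative skeletons of those loop depths (not anyone's actual gun code; n is the log length):

    // Regular PM: O(n^2) search, then a single O(n) replay.
    static void regularPM(int n, int matchDepth, int flightTime) {
        for (int start = 0; start < n; start++)      // candidate match positions
            for (int k = 0; k < matchDepth; k++) { } // compare recent history
        for (int t = 0; t < flightTime; t++) { }     // replay best match forward
    }

    // SingleTick PM: the search repeats on every projected tick, so the
    // O(n^2) search nests inside the replay loop: O(n^3) overall.
    static void singleTickPM(int n, int matchDepth, int flightTime) {
        for (int t = 0; t < flightTime; t++)
            for (int start = 0; start < n; start++)
                for (int k = 0; k < matchDepth; k++) { }
    }
    // MultipleChoice wraps either version in one more loop over candidate
    // branches: roughly O(n^3) for regular PM, O(n^4) for SingleTick.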

Yep... it's really slow like that, even with a limiting factor of never considering more than the 5 most likely permutations at each tick. Personally, I think MultipleChoice-SingleTick has the most raw potential as an algorithm (i.e. with the method I was using, you could very accurately calculate the probability of a given "reality", and consider a large number of possible branchings); however, it seems prohibitively time consuming. Of course, one contributing factor was that my coding style for it was not at all designed for speed, but rather for flexibility and "correctness" to my intuitions about what is completely statistically sound. I'm sure that someone could make something functionally identical that is at least 5x faster if they really tried (by using more raw arrays as opposed to OOP-ness, avoiding a bunch of the unnecessary object allocation and recalculation I tend to do, etc.), but even so I'm not sure it could be optimized enough to be practical. Despite that failed experiment, though, not all is a waste... I've learned a bit about PatternMatching, and some of my theories about how to improve the statistical accuracy (or at least "correctness") of MultipleChoice PatternMatching may still end up useful to me in a non-SingleTick gun. Also, if you're wondering why I'm a little lighter spirited than when I was disheartened, some of that green in the diff column [here] can be rather uplifting :) -- Rednaxela

Oh, just wondering - are you only aiming your gun when your gun heat is low? If you're aiming your gun the whole time, it could slow you down a LOT. In DrussGT I start aiming my gun as soon as the number of ticks for my gun to point at the MaxEscapeAngle is >= the number of ticks until my gun heat is 0. -- Skilgannon
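
In Robocode terms, that check might look something like this (a sketch; firePower is assumed to be chosen already, and gunHeat/coolingRate would come from getGunHeat() and getGunCoolingRate()):

    // Begin the expensive aiming only when the worst-case gun swing would
    // take at least as long as the remaining cool-down.
    static boolean shouldStartAiming(double firePower, double gunHeat, double coolingRate) {
        double bulletSpeed    = 20 - 3 * firePower;           // Robocode physics
        double maxEscapeAngle = Math.asin(8.0 / bulletSpeed); // max lateral escape angle
        double ticksToTurn    = maxEscapeAngle / Math.toRadians(20.0); // gun turns 20 deg/tick
        double ticksToCool    = gunHeat / coolingRate;
        return ticksToTurn >= ticksToCool;
    }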

I already do that to an extent: I aim head-on except for the last 4 ticks, which are given to aiming; that seems to be enough. -- Rednaxela

Woah there! It's not even close to done running yet, but TC26 is looking extremely promising! Very good scores against WaveSerpent and Shadow for one! It would place in TargetingChallenge2K7/ResultsFastLearning just a hair below Dookious and Phoenix! I've sure got one strong gun! -- Rednaxela

Wow! That's one strong anti-surfer gun you have there! And not at all shabby against the random movers, either. -- Skilgannon

Thanks! Indeed, and hitting surfers better than Master D is no small feat, I think! Now I just need to get that random mover score back up, at least to TC20 levels. It's frustrating that I still haven't been able to beat Raiko's random mover score. -- Rednaxela

Oh! Just to check, did you disable Raiko's data saving? That would influence the results a lot! -- Skilgannon

Yeah, it's the same version of Raiko's gun as is required for MovementChallenge2K7, which does have the data saving disabled. -- Rednaxela

Heh, Gamma2 is performing worse than I had hoped... In particular, my AntiSurferTargeting additions actually decreased my score against Dookious, DrussGT, Chalk, and some others I was hoping for an increase against; not to mention it seems to have notably more trouble with a few bots that use StopAndGo at first and later switch to RandomMovement, for some reason. Overall it scores about the same as Gamma1, but with a very different profile of problematic and unproblematic bots... -- Rednaxela

Well, I've decided that due to the inadequacies of TargetingChallenge2K7, I'm going to do my further testing in two sections. I will use TargetingChallengeRM for testing against random movement, as it seems fairly good, and for surfing I will use a custom challenge consisting of: the surfers from TargetingChallenge2K7, 3 surfers that my anti-surfer gun hurt results against (Dookious, Chalk, and Komarious), and 3 surfers that the anti-surfer gun helped results against (GresSuffurd, DarkHallow, and MatchupWS). All the added bots are modified not to fire, of course, like the TC2K7 reference bots. I feel this should give me notably more helpful challenge stats... (though it will be more time consuming to run) -- Rednaxela

Huh... it's bothersome how plugging in a better PM gun degraded overall performance. I suspect this is because the new PM gun is now getting a higher weighting than is best for the overall health of the gun. I'm running tests with the PM gun alone and the DC gun alone now, and it's looking like the DC gun alone is stronger overall in the RandomMover challenge than the combined gun; however, against certain bots it is measurably weaker than the PM gun alone (and the combined gun is usually somewhere in between against those ones). I think this means that the biggest thing I need to work on right now is improving my CrowdTargeting weighting and profile mixing scheme... -- Rednaxela

Well, after testing TC32 (DC gun only), it appears that its strength is somewhere between TC31 and TC29, which to me indicates that either my CrowdTargeting weighting is indeed the problem, or that my new matcher is unexpectedly weaker than the old one. I suspect the former, but I'm running TargetingChallengeRM tests with the PM guns alone to see how they compare. -- Rednaxela

So... PM05 is indeed far stronger overall against most random movers than PM01, though it's weaker against a couple (perhaps because of the change from matching on acceleration to matching on velocity?). The places where TC31 takes its notable losses, though, are against the "hard" category bots, which PM05 performs far better against than PM01; therefore I guess I can conclude that the CrowdTargeting weighting algorithm is what needs some work... gah... -- Rednaxela

Alright... it seems the new multiplicative CrowdTargeting scheme in TC33 brings performance up to TC29 levels; however, I'm still hoping for better, because after all, the PM is much stronger now. I think my merging of the profiles from each gun is better now, but the dynamic weighting could still use a little work. -- Rednaxela
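
For the curious, a multiplicative merge of per-gun profiles might look something like this (a sketch under assumed conventions - normalized profiles over GF bins, per-gun weights as exponents - not necessarily what TC33 does):

    // Combine each gun's GF profile multiplicatively, weighting each gun
    // by raising its profile to the power of its weight, then renormalize.
    static double[] mergeProfiles(double[][] profiles, double[] weights) {
        int bins = profiles[0].length;
        double[] combined = new double[bins];
        java.util.Arrays.fill(combined, 1.0);
        for (int g = 0; g < profiles.length; g++)
            for (int i = 0; i < bins; i++)
                combined[i] *= Math.pow(profiles[g][i] + 1e-9, weights[g]); // epsilon avoids zeroing out
        double sum = 0;
        for (double v : combined) sum += v;
        for (int i = 0; i < bins; i++) combined[i] /= sum; // back to a distribution
        return combined;
    }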

Well... I'm having a lot of trouble evaluating changes... When I test something in the random mover challenge, even at 50 seasons, changes that are virtually no different - ones that only affect the first 20 bullets I ever fire, and should improve results there at that - sometimes erratically show up as half-point performance degradations! I have no clue what the heck is going on anymore, really, and I wonder if I'd be better off not spending time on these tests and instead just doing more frequent rumble releases. Also, for example, when I first added PatternMatching it gave a very slight increase against surfers and random movers, yet in the Rumble I gained 14 points overall. To make matters worse, when I switched to a multiplicative scheme in the CrowdTargeting weighting, it showed a measurable increase in the challenge score against random movers, yet for some bothersome reason it hurt results against surfers a bit. While I was successful in making an anti-surfer mode and integrating it, any rumble points I gained against surfers were canceled out by losses against other things (though it did help the PL rank). It's also seeming like I've hit somewhat of a BrickWall as far as the gun goes: adding a PatternMatcher that's confirmed to be considerably stronger didn't help gun performance at all, despite how adding the first simple PatternMatcher gained 14 rumble points. All in all, I'm quite frustrated with my inability to improve the gun past RougeDC Gamma1 in rumble points or RougeDC Gamma2 in PL ranking. Making matters worse, preliminary tests with RoussGT and DrugeDC seem to indicate that even after all this hard work, my gun is STILL my weak spot compared to my surfing, which was developed over a far, far smaller number of revisions. They say WaveSuffering is bad? Bah! It ain't so bad! At least not compared to these gun woes! -- Rednaxela

I personally like movement more than guns, so for the gun, rather than going the perfectly-tuned-rumble-gun route (Raiko, Ascendant), I did the brute force method: just throw as many attributes as you can at them, and you'll catch them on one of them :-D. Which is why DrussGT currently has 10 attributes in the gun =). If you're going for rumble points, distance is a very important attribute. Lots of bots don't have movement as good as the ones in the challenges, and a distance attribute can help a lot. Time-since-direction-change or time-since-deceleration helps against bots that have a simple oscillation; acceleration is obviously important, as is lateral velocity. But maybe your problem is getting the recordings into a decent gun angle? Do you smooth the angles? I.e., if you have about 10 shots all right next to each other, but not quite overlapping, and 2 shots on the other side that do overlap, which do you shoot for? Heavy smoothing would make you shoot for the 10 (see the sketch below). In DrussGT I do smoothing until I have about 7000 scans and I'm pulling a cluster size of 80, after which I follow a method similar to DCBot, but with a constant botwidth tolerance. The smoothing method seems better at hitting random movers with limited information, but the 'overlap' method seems a little better once a lot of info has been gathered, and against surfers. -- Skilgannon
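
A sketch of the "smoothing" idea (kernel shape and bandwidth are assumptions, not DrussGT's actual numbers): score each candidate angle by a smoothed sum over the recorded GuessFactors, so near-misses still pull the aim toward dense clusters.

    // Kernel-smoothed scoring: the 10 nearly-overlapping shots win because
    // their kernels stack, even though no single pair overlaps exactly.
    static double smoothedScore(double candidateGf, double[] recordedGfs, double bandwidth) {
        double score = 0;
        for (double gf : recordedGfs) {
            double d = (candidateGf - gf) / bandwidth;
            score += Math.exp(-0.5 * d * d); // Gaussian kernel
        }
        return score;
    }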

Well, I currently use a time-since-deceleration segment, and indeed that seems to help; plus, the PM helps destroy most oscillators even more thoroughly. I tried adding distance a couple of times, but at least in the challenges I got absolutely no gain from it, and in fact some loss of score. As far as smoothing and such, the method I use is the "overlap" one; however, instead of going by the enemy's current botwidth, I overlap the whole perfect-precision GF range that would have hit them, from the wave the original record is from. I think this probably causes fewer "not-quite-overlapping" cases. It seems to work fairly well, though I suppose I might try other things (a sketch of the overlap idea is below). By the way, you should see my bot's debugging graphics some time, which include very pretty graphs of the "movement profile" it estimates for the enemy at the current moment in time; I find it fun to watch anyway. What frustrates me the most, though, is that I'm 90% certain my CrowdTargeting weighting scheme has room for improvement, yet I can't seem to find where that room is, particularly considering how adding a much better PM didn't do any good (in fact, it now seems to give the PM too much weight when the DC is significantly better, because the PM's guesses aren't so bad anymore against non-simple movers). In any case, that point about the smoothing is interesting and I might give switching away from the overlap method a try. -- Rednaxela
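
For contrast with the smoothing sketch above, the "overlap" selection might look roughly like this (again a sketch with assumed names; each stored scan contributes the GF interval that would have hit):

    // Pick the angle covered by the most hit-intervals: exact overlaps win,
    // near-misses count for nothing.
    static double bestGuessFactor(double[][] intervals) { // intervals[i] = {gfLow, gfHigh}
        double bestGf = 0;
        int bestCount = -1;
        for (double[] candidate : intervals) {
            double mid = (candidate[0] + candidate[1]) / 2;
            int count = 0;
            for (double[] other : intervals)
                if (mid >= other[0] && mid <= other[1]) count++;
            if (count > bestCount) { bestCount = count; bestGf = mid; }
        }
        return bestGf;
    }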

