ZoomTargeting

ZoomTargeting is a hybrid of StatisticalTargeting and a new form of PatternMatching, intended to combine the best of both worlds. It uses SegmentedData (currently 6 dimensions) and gathers this data using a variant of Waves.

Its StatisticalTargeting properties should make sure that it stays effective against random movers, where no exact repeating pattern exists.

Its PatternMatching properties ensure that highly specific, recurring situations are exploited quickly, even with very little data.

The new thing is that this targeting method basically zooms in (more PatternMatching) or out (more StatisticalTargeting) depending on the amount of data it has and the zoom level's segment size. Currently I use 8 zoom levels with increasing segment sizes. PatternMatching takes place in what I call the MicroSegments. In these tiny segments, even a single previously recorded angle is enough to deliver a firing angle with a high probability of hitting the enemy. StatisticalTargeting takes place when you zoom out, in the MacroSegments. The segments there are much larger and contain more data to do statistical calculations with. Learning speed increases as the zoom level increases.
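To make the zooming concrete, here is a minimal sketch of how a gun might pick its zoom level. All names, thresholds, and the data layout are illustrative guesses, not ZoomTargeting's actual code:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class ZoomGunSketch {
        static final int ZOOM_LEVELS = 8;                       // 0 = finest (MicroSegments)
        static final int[] MIN_VISITS = { 1, 2, 4, 8, 16, 32, 64, 128 };

        // One map per zoom level: packed segment key -> recorded bearing offsets.
        private final List<Map<Integer, List<Double>>> levels =
                new ArrayList<Map<Integer, List<Double>>>();

        public ZoomGunSketch() {
            for (int i = 0; i < ZOOM_LEVELS; i++) {
                levels.add(new HashMap<Integer, List<Double>>());
            }
        }

        // Zoom out until a level has enough data for a confident firing angle.
        public double bestOffset(int[] keyPerLevel) {
            for (int level = 0; level < ZOOM_LEVELS; level++) {
                List<Double> data = levels.get(level).get(keyPerLevel[level]);
                if (data != null && data.size() >= MIN_VISITS[level]) {
                    return pickAngle(data);
                }
            }
            return 0;                                           // no data yet: fire head-on
        }

        private double pickAngle(List<Double> data) {
            return data.get(0);                                 // placeholder; see Question 2 below
        }
    }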

The PatternMatching algorithm that I use differs greatly from traditional PatternMatchers. You might even argue whether it is PatternMatching at all. I call it PatternMatching because it matches a very specific situation. As a result it performs very well against bots that other PM guns perform well against :-). A situation is basically a very small segment with a lot of dimensions. Currently I use 6 dimensions, but if I can solve a memory issue I will use even more. I might even try a dimension that uses the movement history, to make it a more official pattern matcher :-). In practice, this algorithm is a bit less accurate against truly predictable movers like SpinBot or PatternBot, but more effective against random movers. And since none of the top bots are truly predictable...

There is a nice side-effect to using MicroSegments: I keep track of the number of Segments ('situations') that are used. This number increases more rapidly the better the enemy's movement is. I am guessing this Segment count (say, after 1000 rounds) could make a nice alternative Movement Index.

-- Vic Stewart

How far are you in developing this awesome gun?

Yeah, yeah, yeah..... You talk the talk, but does it walk the walk?

Sounds great, can I help in any way?


Question 1:

Currently I am reserving memory space for every zoom level, whether it is used or not. I use a simple (6-dimensional) array for that. The drawback is that memory is reserved for segments that I never use, and memory consumption soon spins out of control. Now I want to implement a dynamic array (Vector or ArrayList or something). The question is: which type of dynamic array would be most suitable for this situation? Most importantly, I need the lowest possible memory footprint.

-- Vic

Both an ArrayList and a Vector would work here. The constructor for each allows you to specify the initial capacity of the ArrayList to be built. They will also grow as more items are added to them. The "growing" part can be expensive in terms of CPU cycles, so you will most likely need to play with them to see how it works out with skipped turns. Of the two, I would imagine the ArrayList is more appropriate than the Vector. I am also unsure whether this will buy you anything in terms of memory footprint, as I think in the end both classes use an array with wrapper methods to implement their functionality. -- jim
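For illustration, the capacity constructor mentioned above looks like this (the numbers are made up):

    import java.util.ArrayList;

    public class CapacityDemo {
        public static void main(String[] args) {
            // Pre-sizing avoids some of the grow-and-copy cost mentioned above.
            // 64 is just an example capacity (2^6 possible micro segments).
            ArrayList<Double> angles = new ArrayList<Double>(64);
            angles.add(12.5);               // still grows automatically past 64 if needed
            System.out.println(angles.size());
        }
    }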

LOL, I already implemented this gun about three months ago for Statistician -- it's sitting on my hard drive right now. This gun was the main feature of Statistician (and his namesake) and it will be one of Tax's main guns. You can probably find me talking about it here and there on the wiki. Although, it will be better implemented in Tax due to the ability to segment stats more intelligently with his neural network. Oh, to answer your question, I used a HashMap that stores strings of movement. For instance, if I tokenize a bot's velocity into the letters a-p, where a is -8 and p is +8, then I might have a string that looks like this: "aaaacegiiii". I stick that in the HashMap, along with a two-dimensional array of all my segments, and I just update the array as I need to. This makes sure that you only use memory for the patterns you have actually seen, and it is a very fast pattern matcher. -- nano
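A rough sketch of that scheme, as I read it (the tokenization details and array dimensions are guesses):

    import java.util.HashMap;
    import java.util.Map;

    public class PatternStatsSketch {
        // Movement string -> segment data for that pattern.
        private final Map<String, double[][]> patterns = new HashMap<String, double[][]>();

        // Map velocity -8..+8 onto the sixteen letters 'a'..'p'.
        static char token(double velocity) {
            int t = (int) Math.round((velocity + 8) * 15 / 16);     // 0..15
            return (char) ('a' + Math.max(0, Math.min(15, t)));
        }

        // Look up (or lazily create) the segments for a string like "aaaacegiiii",
        // so memory is only spent on patterns that have actually been seen.
        double[][] segmentsFor(String movement) {
            double[][] segments = patterns.get(movement);
            if (segments == null) {
                segments = new double[8][47];                       // placeholder dimensions
                patterns.put(movement, segments);
            }
            return segments;
        }
    }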

Interestingly, this approach also allows an infinite amount of "zooming". Of course, there are memory limits, but I ran my algorithm at a maximum of 200 frames of movement, and it ran fast with no memory problems. Thank God for a fast and efficient HashMap implementation. -- nano

Ok, thanx. What would I use as an index if I wanted to implement an ArrayList? Say, for instance, in my original multidimensional array I want to access Angle[0][1][1][0][1][1]. I need some sort of mapping from the 6 original indices to a single one. A possibility is to convert these indices to a single integer, in this case ranging from 0 to 63 (6 bits). But can I insert the Angle at position 54 [ArrayList: add(54, Angle)] without my ArrayList or Vector immediately reserving memory space for the preceding positions 0 to 53? I think not, but I'm not sure. Does anyone know for sure?
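(For what it's worth, ArrayList.add(54, angle) on a list with fewer than 54 elements actually throws an IndexOutOfBoundsException, so a sparse structure is indeed needed for that approach.) The index mapping itself could look like the sketch below; the bit order is chosen so that the Angle[0][1][1][0][1][1] example comes out as 54:

    public class SegmentKey {
        // Pack six binary segment indices into one int, first index in the
        // lowest bit: [0,1,1,0,1,1] -> 0*1 + 1*2 + 1*4 + 0*8 + 1*16 + 1*32 = 54.
        public static int pack(int[] indices) {
            int key = 0;
            for (int i = 0; i < indices.length; i++) {
                key |= indices[i] << i;
            }
            return key;                     // 0..63 for six binary dimensions
        }
    }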

That probably means the HashMap is the better solution. I can put my single index 54 in it along with my Angle value [HashMap: put(indexObject, angleObject)]. However, a HashMap stores both key and value objects. Suppose my original value to store is an int; I now need to construct a wrapper object for it, an Integer. I also need to construct an Integer object for my key. Assuming an Integer takes about the same memory space as an int (it is just an int with wrapper methods, right?), I must conclude that using a HashMap requires double the amount of memory PER ENTRY. So if I have fewer than 32 entries I should use the HashMap, else I should stick to my original array. This is getting complicated... Although I think that in practice I will almost never get more than 32 MicroSegments per Segment. I'll try the HashMap and report back if it works...

-- Vic

It might be more sensible to just use a binary string instead of an Integer object (might as well use java.lang.Integer, though, if you do that), meaning you would access said angle with map.get("011011") or something. Might be simpler. If you use the array approach, it might make sense to just go ahead and allocate the whole thing (4 bytes * the size of the array, for that many null pointers); it may not be a big hit when it's empty anyway. If you know some segments won't be hit, though, that's a consideration (particularly if it happens to be a lot of segments). -- Kawigi
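In code, the string-key variant is about this simple (the types are an assumption):

    import java.util.HashMap;
    import java.util.Map;

    public class StringKeyDemo {
        public static void main(String[] args) {
            Map<String, Double> angles = new HashMap<String, Double>();
            angles.put("011011", -30.0);                // key is the segment's bit pattern
            System.out.println(angles.get("011011"));   // null if that segment is empty
        }
    }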

It would sure be easier and more readable to use such a binary string. But how does the HashMap store this? I guess it generates a hash number, so it will not use more memory than using my Integer... Is this assumption correct? -- Vic

That should be correct. If a HashMap turns out to be weightier than you can handle, you could also try implementing your own hashtable. For even more memory efficiency (at some compromise on speed), you could throw everything into a BST or something. -- Kawigi

Binary Search Tree, I presume... Hmm, gotta think about that tomorrow... it's getting late here. By the way, I already tried using a normal array, and memory consumption goes through the roof, even with a high percentage of null pointers. -- Vic

Kawigi, I don't understand how I would use a BST in my situation. I'm still interested, since you spoke of more efficient memory, so could you explain a little? For now I've decided to try to make my own container class. I only need a byte (8 bits) of memory for my key, and I'm pretty sure all the java.util containers use an int (32 bits). Besides, it's a good exercise :-) -- Vic

Well, you sort them into the BST by key. The root key can either be the first MicroSegment you use, or an intelligently chosen constant one. Then you sort new MicroSegments into the BST as you get them. It shouldn't ever take more memory than it needs, and in the best case, look-ups and insertions are O(log(n)). In the worst case, without intervention, look-ups are O(n), which probably isn't what you want. You can reduce that by a constant factor (in half, actually) by always picking an intelligent root node. If you think look-ups are more common/time-critical than insertions, you can do checks to make the tree more balanced for consistent O(log(n)) look-ups, at the expense of some checks and possible AVL rotations during insertion. -- Kawigi
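A bare-bones version of such a BST, keyed on the packed segment index (no balancing, so worst-case look-ups degrade exactly as warned above):

    public class SegmentTree {
        private static class Node {
            final int key;
            double[] angles;                // the data stored for this micro segment
            Node left, right;
            Node(int key, double[] angles) { this.key = key; this.angles = angles; }
        }

        private Node root;

        public void insert(int key, double[] angles) {
            if (root == null) { root = new Node(key, angles); return; }
            Node n = root;
            while (true) {
                if (key == n.key) { n.angles = angles; return; }
                if (key < n.key) {
                    if (n.left == null) { n.left = new Node(key, angles); return; }
                    n = n.left;
                } else {
                    if (n.right == null) { n.right = new Node(key, angles); return; }
                    n = n.right;
                }
            }
        }

        public double[] lookup(int key) {
            Node n = root;
            while (n != null && n.key != key) {
                n = (key < n.key) ? n.left : n.right;
            }
            return n == null ? null : n.angles;
        }
    }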

Just use a TreeMap. ;) If memory is a problem with a HashMap (it was never really a problem with mine -- do you have less than 256MB?), then you can try a TreeMap. If that isn't sufficient, then consider implementing your own structure. These two Java implementations are incredibly fast and efficient (though they can be improved upon), and I would definitely try both before hacking something out on my own. -- nano


Question 2:

Suppose I have a segment with data, and I want to calculate the angle that has the best chance of hitting the enemy.

some ascii art:

         -47_______*_**__**_____________________0________*_______________*_*____*_*______+47 degrees

Each * marks a previously observed successful targeting angle. I can clearly see that I should not take the average angle (AveragedBearingOffsetTargeting). This data shows me that there are two spikes, one at -30 and one at +30 degrees. The -30 angle has the most occurrences and is the narrower spike, so that is the angle to choose.

I could of course calculate the angle range for that specific distance, loop from -47 to +47, and determine at which angle the most occurrences fall within the angle range. But that would mean a loop within a loop (times the number of zoom levels), so performance would not be so good.

I could also calculate the angle range for that distance and iterate over the occurrences, determining which occurrence picks up the most neighbouring occurrences. Again a loop within a loop, but this time both loops are limited in size. And the result may not be optimal in some specific situations...

So, what would be the most elegant way to calculate this angle? Is there an algo with only one loop?

-- Vic

Well, I've thought about this some, as I also do statistics with waves, and it seems like it would be more accurate to use those old-school virtual bullets, because *every* bucket that hits is recorded (and as you observe, you want the mode, not the mean). There are two ways I've thought of to tune this - the first would be to collect data the same way and then figure out how wide of a margin of error you are allowed. The eventual calculation does take a loop inside of the loop, but the inside loop would be trivial. The second way would be to have your wave bullet tell you every bucket in your array that would have hit. In this case there is much more to do toward the end of the wave's life, but calculating the angle is as simple as finding the index of the max. -- Kawigi
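A sketch of that second way, under the assumption that angles are tokenized into fixed-width buckets and that the wave knows how many buckets the enemy's body spans at its distance:

    public class BucketStats {
        static final int BUCKETS = 95;                 // e.g. one per degree, -47..+47

        private final int[] hits = new int[BUCKETS];

        // Called when a wave reaches the enemy: credit every bucket the
        // enemy's width would have covered.
        public void waveHit(int centerBucket, int botWidthBuckets) {
            int from = Math.max(0, centerBucket - botWidthBuckets / 2);
            int to = Math.min(BUCKETS - 1, centerBucket + botWidthBuckets / 2);
            for (int b = from; b <= to; b++) {
                hits[b]++;
            }
        }

        // Picking the angle is then a single loop: the index of the max.
        public double bestAngle() {
            int best = 0;
            for (int b = 1; b < BUCKETS; b++) {
                if (hits[b] > hits[best]) best = b;
            }
            return best - (BUCKETS - 1) / 2.0;         // back to degrees, -47..+47
        }
    }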

I have just the same problem with a new pattern matcher I'm developing. At the end, I get N bearings distributed along a line, and I have to choose the best place to fire. None of the approaches I have tested satisfies me: sometimes the granularity is too coarse, sometimes too fine... Now I'm using an approach of segmenting it into, let's say, 6 overlapped parts, searching for the one with the highest density, zooming into that segment, doing it again, and so on. But I'm thinking seriously about using a NN to learn the best bearing from the distribution. -- Albert

It seems to me that a binary tree structure would be perfect for this. The top node holds the total number of hits; its children hold the total number of hits on the left half and on the right half; their children split left and right again, and so on. Then you can choose which level to shoot at based on the hit percentage of the best node at each level among all sibling nodes (nodes which have the same parent). For instance, at the second level you have full left and full right. Let's say after 100 measurements you have 60 on the left and 40 on the right. On the next level you have 40, 20, 10, 30, from left to right. Then you will shoot at the third level, because 40/60 is a better percentage than 60/100. Of course, margin of error should play a part in this calculation as well. You could choose based on the highest minimum percentage after margin of error, or the highest raw percentage (probably a bad idea), or you could choose randomly, weighted by the amount of overlap among all of the choices (that's how I do my dynamic distancing).

For instance, in this case, the margin of error for the 40/60 measurement at 95% confidence is 1.96 * sqrt((2/3) * (1/3) / 60) = 11.9%. Thus, the minimum hit percentage is 66.6% - 11.9% = 54.7%. For the 60/100 measurement, we get 60% - 9.6% = 50.4%. Therefore, we will choose the third level of segmentation (which is a particularly good term for this structure), and we will fire full left.
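That arithmetic as a method (95% confidence, normal approximation):

    public class Confidence {
        // Lower bound of the hit percentage: p - 1.96 * sqrt(p * (1 - p) / n).
        public static double lowerBound(int hits, int total) {
            double p = (double) hits / total;
            double margin = 1.96 * Math.sqrt(p * (1 - p) / total);
            return p - margin;
        }

        public static void main(String[] args) {
            System.out.println(lowerBound(40, 60));   // ~0.547, the 54.7% above
            System.out.println(lowerBound(60, 100));  // ~0.504, the 50.4% above
        }
    }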

As for using a NN to learn the best bearing...how will you train it? It seems to me that it is a statistical problem no matter how you look at it -- though I suppose a kind of fuzzy logic would apply here. I don't know anything about that though. Have you done any research on using fuzzy networks to do statistical analysis?

-- nano

Well, I have a cloud of points (always a fixed number of N points), and I know the right bearing to hit for all my historical data. So I just have to train the NN with input vector (PredBearing1, ..., PredBearingN) and, as output value, the right bearing to hit. What I want is for the NN to decide which way is best to "average" the bearings. I think it should work, but I haven't tried it yet. -- Albert

nano's idea of dividing it up and going in the direction where there are more hits is what I did with my first stat gun in SpareParts (although it didn't work at all). -- Kawigi

Here is an old idea I had but never got around to trying: how about using Java2D rectangles to represent the enemy at the hit positions, then taking intersections of these until you find the point with the most intersections? -- tobe

Sounds pretty CPU-intensive for thousands of rectangles, but I'm sure it could be optimized somehow. -- nano

If you found thousands of matches in the pattern-analyzer, that would be a big hit :-p Of course, on the statistical side, that's still a problem. -- Kawigi

Thanks folks, there are some nifty solutions mentioned here... however, most of them don't seem appropriate for my situation. My use of MicroSegments means that I have A HUGE NUMBER of them, so I must minimize my use of memory. I think I'll try my own second option (which is basically the same as Kawigi's first option). That way I'm sure to minimize the amount of memory taken up (only the actual angle values, nothing more). Performance should be OK, and I'm only losing a slight bit of accuracy, which I actually think is negligible. -- Vic
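For reference, a sketch of that second option; halfWindow would be half the angular width of the enemy at the firing distance (an assumption here):

    public class NeighbourCount {
        // For each recorded angle, count the other angles that fall within the
        // hit window around it, and fire at the angle with the most neighbours.
        // Two small loops, as described above.
        public static double bestAngle(double[] angles, double halfWindow) {
            double best = 0;
            int bestCount = -1;
            for (int i = 0; i < angles.length; i++) {
                int count = 0;
                for (int j = 0; j < angles.length; j++) {
                    if (Math.abs(angles[j] - angles[i]) <= halfWindow) count++;
                }
                if (count > bestCount) {
                    bestCount = count;
                    best = angles[i];
                }
            }
            return best;
        }
    }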


Sorry for my late reaction. I was away for a week and this week I was very busy with work. Anyway, thanx for thinking with me here :-) Appreciate it. -- Vic

