ZoomTargeting/MemoryIssue


Currently I am reserving memory space for every zoom level, whether it is used or not. I use a simple (6-dimensional) array for that. The drawback is that memory is reserved for segments I never use, and memory consumption soon spins out of control. Now I want to implement a dynamic array (Vector or ArrayList or something). The question is: which type of dynamic array would be most suitable for this situation? Most importantly, I require the lowest possible memory footprint.

-- Vic

Both an ArrayList and a Vector would work here. The constructor for each allows you to specify the initial capacity of the list to be built. They will also grow as more items are added to them. The "growing" part can be expensive in terms of CPU cycles, so you will most likely need to play with them to see how it works out with skipped turns. Of the two, I would imagine that the ArrayList is more appropriate than the Vector, since Vector's methods are synchronized and that overhead buys you nothing here. I am also unsure whether this will buy you anything in terms of memory footprint, as I think in the end both classes use a plain array with wrapper methods to implement their functionality. -- jim
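
A minimal sketch of the pre-sizing jim mentions (current Java syntax; the capacity of 64 is just the 2^6 segment count from the question above, not a figure from jim's post):

    import java.util.ArrayList;

    public class PreSizedList {
        public static void main(String[] args) {
            // An initial capacity of 64 covers all 2^6 micro-segment indices,
            // so the backing array never has to grow (and be copied) mid-round.
            ArrayList<Double> angles = new ArrayList<Double>(64);
            angles.add(17.5);                  // capacity is not the same as size
            System.out.println(angles.size()); // prints 1
        }
    }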

LOL, I already implemented this gun about three months ago for Statistician -- it's sitting on my hard drive right now. This gun was the main feature of Statistician (and his namesake) and it will be one of Tax's main guns. You can probably find me talking about it here and there on the wiki. Although, it will be better implemented in Tax due to the ability to segment stats more intelligently with his neural network. Oh, to answer your question, I used a HashMap that stores strings of movement. For instance, if I tokenize a bot's velocity into the letters a-p, where a is -8 and p is +8, then I might have a string that looks like this: "aaaacegiiii". I stick that in the HashMap, along with a two-dimensional array of all my segments, and I just update the array as I need to. This makes sure that you only use memory for the patterns that you have seen, and it is a very fast pattern-matcher. -- nano

Interestingly, this approach also allows an infinite amount of "zooming". Of course, there are memory limits, but I ran my algorithm at a maximum of 200 frames of movement, and it ran fast with no memory problems. Thank God for a fast and efficient HashMap implementation. -- nano
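
A rough sketch of the movement-string idea described above (the class name, bucket formula, and stat-array dimensions are illustrative guesses, not nano's actual code):

    import java.util.HashMap;

    public class MovementPatterns {
        private final HashMap<String, double[][]> stats = new HashMap<String, double[][]>();

        // Quantize a velocity in [-8, +8] to one of 16 letters, 'a' = -8 ... 'p' = +8.
        static char tokenizeVelocity(double velocity) {
            int bucket = (int) Math.round((velocity + 8.0) / 16.0 * 15.0);
            return (char) ('a' + Math.max(0, Math.min(15, bucket)));
        }

        // Fetch (or lazily create) the stat array for a movement string, so memory
        // is only spent on patterns that have actually been observed.
        double[][] statsFor(String pattern) {
            double[][] segment = stats.get(pattern);
            if (segment == null) {
                segment = new double[31][47]; // placeholder stat dimensions
                stats.put(pattern, segment);
            }
            return segment;
        }

        public static void main(String[] args) {
            MovementPatterns patterns = new MovementPatterns();
            StringBuilder history = new StringBuilder();
            for (double v : new double[] {-8, -8, -8, -8, -4, 0, 4, 8, 8, 8, 8}) {
                history.append(tokenizeVelocity(v));
            }
            System.out.println(history);  // "aaaaeilpppp"
            System.out.println(patterns.statsFor(history.toString()).length); // 31
        }
    }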

OK, thanks. What would I use as an index if I wanted to implement this with an ArrayList? Say, for instance, in my original multidimensional array I want to access Angle[0][1][1][0][1][1]. I need some sort of mapping from the 6 original indices to a single one. A possibility might be to convert these indices to a single integer, in this case ranging from 0 to 63 (6 bits). But can I insert the Angle at position 54 [ArrayList: add(54, Angle)] without my ArrayList or Vector immediately reserving memory space for the preceding positions 0 to 53? I think not, but I'm not sure. Does anyone know for sure?

That probably means the HashMap is the better solution. I can put my single index 54 in it along with my Angle value [HashMap: put(indexObject, angleObject)]. However, a HashMap stores both key and value objects. Suppose my original value to store is an int. I now need to construct a wrapper object for it, an Integer. I also need to construct an Integer object for my key. Assuming an Integer takes the same memory space as the value (it is just an int with wrapper methods, right?), I must conclude that using a HashMap requires double the amount of memory PER ENTRY. So, if I have fewer than 32 entries I should use the HashMap; otherwise I should stick to my original array. This is getting complicated... Although I think that in practice I will almost never get more than 32 MicroSegments per Segment. I'll try the HashMap and report back if it works...

-- Vic
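
As suspected above, ArrayList.add(54, Angle) on a list shorter than 54 entries throws an IndexOutOfBoundsException rather than padding the list, so the list route would mean pre-filling it with nulls. A sketch of the packed-key HashMap alternative instead (class and method names are illustrative):

    import java.util.HashMap;

    public class PackedSegments {
        private final HashMap<Integer, Double> angles = new HashMap<Integer, Double>();

        // Fold six 0/1 indices into one int key; [0][1][1][0][1][1] becomes 54
        // with the first index in the lowest bit, matching the example above.
        static int pack(int i0, int i1, int i2, int i3, int i4, int i5) {
            return i0 | (i1 << 1) | (i2 << 2) | (i3 << 3) | (i4 << 4) | (i5 << 5);
        }

        void put(int key, double angle) {
            // Integer.valueOf / Double.valueOf are the wrapper objects discussed
            // above; on current JVMs autoboxing would insert them automatically.
            angles.put(Integer.valueOf(key), Double.valueOf(angle));
        }

        Double get(int key) {
            return angles.get(Integer.valueOf(key));
        }

        public static void main(String[] args) {
            PackedSegments segments = new PackedSegments();
            int key = pack(0, 1, 1, 0, 1, 1);       // 54
            segments.put(key, 0.42);
            System.out.println(segments.get(key));  // 0.42 -- only one entry stored
            System.out.println(segments.get(27));   // null -- no memory reserved
        }
    }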

It might be more sensible to just use a binary string instead of an Integer object (might as well use java.lang.Integer, though, if you do go that route), meaning you would access said angle with map.get("011011") or something. Might be simpler. If you use the array approach, it might make sense to just go ahead and allocate the whole thing (4 bytes * size of the array for that many null pointers); it may not be a big hit when it's empty anyway. If you know some segments won't be hit, though, that's a consideration (particularly if it happens to be a lot of segments). -- Kawigi
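
A sketch of the binary-string variant (the helper name is illustrative):

    import java.util.HashMap;

    public class StringKeyedSegments {
        private final HashMap<String, Double> angles = new HashMap<String, Double>();

        // Concatenate the six segment indices into a readable key such as "011011".
        static String key(int[] indices) {
            StringBuilder sb = new StringBuilder(indices.length);
            for (int i : indices) {
                sb.append(i == 0 ? '0' : '1');
            }
            return sb.toString();
        }

        public static void main(String[] args) {
            StringKeyedSegments segments = new StringKeyedSegments();
            String k = key(new int[] {0, 1, 1, 0, 1, 1});       // "011011"
            segments.angles.put(k, 0.42);
            System.out.println(segments.angles.get("011011"));  // 0.42
        }
    }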

It would sure be easier and more readable to use such a binary string. But how does the HashMap store this? I guess it would generate a hash number, so it will not use more memory than using my Integer... Is this assumption correct? -- Vic

That should be correct. If a HashMap turns out to be weightier than you can handle, you could also try implementing your own hash table. For even more memory efficiency (but somewhat of a compromise on speed), you could throw everything into a BST or something. -- Kawigi

Binary Search Tree, I presume... Hmm, I'll have to think about that tomorrow; it's getting late here. By the way, I already tried using a normal array and memory consumption went through the roof, even with a high percentage of null pointers. -- Vic

Kawigi, I don't understand how I would use a BST in my situation. I'm still interested, since you spoke of more efficient memory, so could you explain a little? For now, though, I've decided to try making my own container class. I only need a byte (8 bits) of memory for my key, and I'm pretty sure all the java.util containers use an int (32 bits). Besides, it's a good exercise :-) -- Vic

Well, you sort them into the BST by key. The root key can either be the first MicroSegment you use, or an intelligently chosen constant one. Then you sort new MicroSegments into the BST as you encounter them. It shouldn't really ever take more memory than it needs, and in the best case, look-ups and insertions are O(log(n)). In the worst case, without intervention, look-ups are O(n), which probably isn't what you want. You can reduce that by a constant factor (in half, actually) by always picking an intelligent root node. If you think look-ups are more common/time-critical than insertions, you can do checks to keep the tree more balanced for consistent O(log(n)) look-ups, at the expense of some checks and possible AVL rotations in the insertion part. -- Kawigi
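
A bare-bones, unbalanced version of what is described above might look like this (illustrative names, no AVL rotations):

    public class SegmentTree {
        private static class Node {
            final int key;
            double angle;
            Node left, right;
            Node(int key, double angle) { this.key = key; this.angle = angle; }
        }

        private Node root;

        // Insert or update; making the first inserted key a middle-ish value
        // (e.g. 31 or 32) keeps the tree closer to balanced, as suggested above.
        void put(int key, double angle) {
            if (root == null) { root = new Node(key, angle); return; }
            Node node = root;
            while (true) {
                if (key == node.key) { node.angle = angle; return; }
                if (key < node.key) {
                    if (node.left == null) { node.left = new Node(key, angle); return; }
                    node = node.left;
                } else {
                    if (node.right == null) { node.right = new Node(key, angle); return; }
                    node = node.right;
                }
            }
        }

        // Returns null if the segment has never been seen -- no memory is spent on it.
        Double get(int key) {
            Node node = root;
            while (node != null) {
                if (key == node.key) return Double.valueOf(node.angle);
                node = key < node.key ? node.left : node.right;
            }
            return null;
        }

        public static void main(String[] args) {
            SegmentTree tree = new SegmentTree();
            tree.put(32, 0.0);                // intelligently chosen near-middle root
            tree.put(54, 0.42);
            System.out.println(tree.get(54)); // 0.42
            System.out.println(tree.get(7));  // null -- never inserted
        }
    }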

Just use a TreeMap. ;) If memory is a problem with a HashMap (it was never really a problem with mine -- do you have less than 256MB?), then you can try it with a TreeMap. If that isn't sufficient, then consider implementing your own structure. These two Java implementations are incredibly fast and efficient (though they can be improved upon), and I would definitely try both before hacking it out on my own. -- nano
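
For comparison, a TreeMap is nearly a drop-in replacement for the HashMap (a sketch using the packed int keys from earlier):

    import java.util.TreeMap;

    public class TreeMapSegments {
        public static void main(String[] args) {
            // Same packed int keys as above, but stored in a red-black tree
            // instead of a hash table.
            TreeMap<Integer, Double> angles = new TreeMap<Integer, Double>();
            angles.put(54, 0.42);               // only observed segments are stored
            System.out.println(angles.get(54)); // 0.42
            System.out.println(angles.get(7));  // null -- never inserted
        }
    }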

OK, I used the HashMap in the end and... hey, it works! :-) Even without optimizing the initial capacity and load factor, the memory consumption is a fraction of what it was. Thanks for your help, guys! -- Vic

This is belated, but have you considered BSTs? -- Kuuran

OK, next time I'll read the page first. Umm... the best way to implement a BST for future revisions is probably to keep it sorted on a numerical key, the way you first proposed, with each key being a 6-bit bit vector (with a couple of bits of wasted memory, since you'll no doubt be using shorts for this). Then simply fly through, find your object, and use the reference to get whatever data is attached to the key, et voila. Since memory consumption and readability are issues, simply use the binary operators for everything: they ensure every bit represents something instead of every byte, long, double or whatnot, which cuts your memory consumption severely, and they maintain easy readability -- everyone knows what's happening when you binary OR or AND values. BSTs also have the property of growing/shrinking to accommodate only the given data; just make sure to do your housekeeping. -- Kuuran
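
A small sketch of the bit-vector handling described above, using plain shifts, AND and OR on a short key (the named bit positions are hypothetical):

    public class BitVectorKey {
        static final int VELOCITY_BIT = 0;   // example bit positions for six
        static final int DISTANCE_BIT = 1;   // hypothetical segmentation axes
        // ... bits 2..5 for the remaining four dimensions

        // Set or clear a single bit of the 6-bit key.
        static short setBit(short key, int bit, boolean on) {
            return on ? (short) (key | (1 << bit)) : (short) (key & ~(1 << bit));
        }

        // Read a single bit back out of the key.
        static boolean getBit(short key, int bit) {
            return (key & (1 << bit)) != 0;
        }

        public static void main(String[] args) {
            short key = 0;
            key = setBit(key, VELOCITY_BIT, true);
            key = setBit(key, DISTANCE_BIT, false);
            System.out.println(key);                       // 1
            System.out.println(getBit(key, VELOCITY_BIT)); // true
            System.out.println(getBit(key, DISTANCE_BIT)); // false
        }
    }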

