[Home]History of WikiTargeting/DynamicSegmentation



Revision 17 . . September 16, 2007 18:26 EST by Skilgannon [potential]
Revision 16 . . June 27, 2005 4:12 EST by Jokester
  

Difference (from prior major revision) (no other diffs)

Changed: 196c196,198
A few things that I have been thinking about. First, your visit-count method has the problem that, given a flat distribution, your spike could just be the maximum (i.e. with 99999999 you could have 99990000 and 00009999). What I am contemplating is a combination of the visit counts, an entropy calculation, and a maximum hit probability. Also, you don't need to perform the recursive divisions, because if the first split is not the most useful, the profiles will just make a useful split the next time; they will just start off the same. I am also working on a system to see how similar two arrays are, which should be useful both for selecting the most appropriate split and for comparing on-fire behavior to general behavior (if they can be shown to be the same it will increase the learning speed 10-fold). -- Jokester
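
For illustration only, here is a minimal Java sketch of what the scores mentioned above could look like over a visit-count array: a Shannon entropy, a maximum hit probability, and a simple similarity measure between two arrays. The class and method names and the choice of a cosine-style similarity are assumptions of this sketch, not Jokester's actual code.

 // Illustrative scoring helpers for visit-count arrays.
 public class SegmentScoring {
 
     // Shannon entropy of the normalized visit counts, in bits.
     // Lower entropy means a more concentrated (spikier) profile.
     public static double entropy(int[] visits) {
         double total = 0;
         for (int v : visits) total += v;
         if (total == 0) return 0;
         double h = 0;
         for (int v : visits) {
             if (v > 0) {
                 double p = v / total;
                 h -= p * (Math.log(p) / Math.log(2));
             }
         }
         return h;
     }
 
     // Maximum hit probability: the share of visits in the single best bin.
     public static double maxHitProbability(int[] visits) {
         double total = 0, max = 0;
         for (int v : visits) { total += v; if (v > max) max = v; }
         return total == 0 ? 0 : max / total;
     }
 
     // Cosine-style similarity between two visit-count arrays, in 0..1.
     // Could be used both for choosing splits and for comparing
     // on-fire behavior against general behavior.
     public static double similarity(int[] a, int[] b) {
         double dot = 0, na = 0, nb = 0;
         for (int i = 0; i < a.length; i++) {
             dot += (double) a[i] * b[i];
             na  += (double) a[i] * a[i];
             nb  += (double) b[i] * b[i];
         }
         return (na == 0 || nb == 0) ? 0 : dot / Math.sqrt(na * nb);
     }
 }

The similarity measure could be swapped for anything that compares the shapes of two profiles; cosine similarity is only one convenient choice.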

I'm thinking that my gun for DrussGT will be something along these lines. And, from what I can reason out, it shouldn't matter which split comes first. If it is a 'bad' choice, then the data afterwards should make *each* of the subnodes do a good split once we have more data. It may slow down our learning time, but that shouldn't matter with this gun anyway. If it is a problem, one way to stabilize the initial splits would be to add a few simple segments as defaults (e.g. lateral velocity, walls) and let the dynamic segmentation add in all the fancy stuff (distance, accel, decel, time since decel, distance last x ticks, time since direction change, gunheat, heading change, advancing velocity, etc.) individually for each bot. If you kept a counter it would even be possible to do rolling averages, weighting each entry by how long ago it was collected. As mentioned above, the requirements for a node split are that it divides the data so that there is a balance of data on each side (we ARE using a binary tree here), that there are enough entries in the segment beforehand, and that the resultant nodes have a higher entropy than the initial node. Additionally, with this arrangement you will never have an empty node when you want to fire, and each subnode is optimised for its conditions: for example, it may be useful to make a decel split inside the near-wall segment, but not otherwise. Anyway, if anybody else wants to check my code when I've got something working, feel free =) -- Skilgannon
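
As a rough sketch of the node structure described above: each node holds a guess-factor profile plus the raw observations, and once it has enough entries it accepts the best candidate split that keeps the data balanced on both sides and meets the entropy condition as stated. The bin count, the thresholds, splitting at each attribute's median, and the reuse of the SegmentScoring.entropy() helper from the earlier sketch are all assumptions made for illustration; this is not DrussGT's actual code.

 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.List;
 
 // Illustrative dynamic-segmentation node; constants and split strategy are assumptions.
 public class SegNode {
     static final int BINS = 31;              // guess-factor bins (assumed)
     static final int MIN_ENTRIES = 200;      // "enough entries" before trying a split (assumed)
     static final double MIN_BALANCE = 0.25;  // each child must get at least 25% of the data (assumed)
 
     final List<double[]> attrs = new ArrayList<double[]>(); // attributes recorded at fire time
     final List<Integer> bins = new ArrayList<Integer>();    // guess-factor bin each wave hit
     int[] visits = new int[BINS];
 
     int splitAttribute = -1;
     double splitValue;
     SegNode left, right;     // only set once a split has been accepted
 
     // Record one wave: a = segmentation attributes at fire time, bin = the bin it hit.
     void add(double[] a, int bin) {
         visits[bin]++;                       // parent keeps aggregate counts; unused once split
         if (left != null) {
             child(a).add(a, bin);
             return;
         }
         attrs.add(a);
         bins.add(bin);
         if (attrs.size() >= MIN_ENTRIES) trySplit();
     }
 
     // Walk down to the leaf whose conditions match the current attributes.
     int[] profileFor(double[] a) {
         return left == null ? visits : child(a).profileFor(a);
     }
 
     private SegNode child(double[] a) {
         return a[splitAttribute] < splitValue ? left : right;
     }
 
     // Try a median split on every attribute; accept the candidate that keeps the
     // data balanced and best satisfies the entropy criterion described above.
     private void trySplit() {
         double parentH = SegmentScoring.entropy(visits);  // helper from the earlier sketch
         double bestGain = 0, bestVal = 0;
         int bestAttr = -1;
         for (int attr = 0; attr < attrs.get(0).length; attr++) {
             double val = median(attr);
             int[] lo = new int[BINS], hi = new int[BINS];
             int nLo = 0;
             for (int i = 0; i < attrs.size(); i++) {
                 if (attrs.get(i)[attr] < val) { lo[bins.get(i)]++; nLo++; }
                 else hi[bins.get(i)]++;
             }
             double frac = (double) nLo / attrs.size();
             if (frac < MIN_BALANCE || frac > 1 - MIN_BALANCE) continue;  // too unbalanced
             // "Resultant nodes have a higher entropy than the initial node":
             double gain = Math.min(SegmentScoring.entropy(lo), SegmentScoring.entropy(hi)) - parentH;
             if (gain > bestGain) { bestGain = gain; bestAttr = attr; bestVal = val; }
         }
         if (bestAttr < 0) return;  // no acceptable split yet; keep collecting data
         splitAttribute = bestAttr;
         splitValue = bestVal;
         left = new SegNode();
         right = new SegNode();
         for (int i = 0; i < attrs.size(); i++) {
             child(attrs.get(i)).add(attrs.get(i), bins.get(i));
         }
         attrs.clear();
         bins.clear();
     }
 
     private double median(int attr) {
         double[] v = new double[attrs.size()];
         for (int i = 0; i < v.length; i++) v[i] = attrs.get(i)[attr];
         Arrays.sort(v);
         return v[v.length / 2];
     }
 }

Splitting at each attribute's median is just an easy way to keep the children balanced; choosing the split value, and which attributes to offer as candidates, is exactly the part still being worked out in the discussion above.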
