WikiTargeting


/DynamicSegmentation

I've had an idea recently, and I thought I'd throw it into the community. I hope this will result in a lively discussion and hopefully I (or someone else) can implement it when the theory seems to check out. (Actually I'd like to get working on it right now, but I promised myself to first finish my SOOL targeting gun and my new movement)

Feel free to shoot at / correct any assumptions I'm making on this page. Also feel free to add or modify anything if you feel it improves the design or adds to the discussion. It would be cool if this would turn out a community design. (Hence the name WikiTargeting)

--Vic

So here's the theory thus far (a summary is provided below):

GuessFactor guns have proven to be very successful over the years. In fact PEZ's CassiusClay gun holds the crown in the TargetingChallenge at the moment. So why do these guns perform so well?

These guns generally have a fixed number of segmentation nodes. Take for instance CC's gun: it has 6 segmentation dimensions, most of them with 5 divisions, except for WALL_INDICES which has 4. That means that it has a total of 5*5*5*5*5*4 = 12500 nodes.

In the RoboRumble CC will encounter most robots in only one 35 round match. Let's assume it gathers 1000 waves per round. That results in a total of 35000 waves to be distributed over 12500 nodes, which comes down to an average of 2.8 waves per node.

I obviously don't think the strength of CC's gun lies in that number. The thing is that some nodes will be more populated than others. (It would be cool if someone has data on how the waves are distributed.) My guess is that a good GF gun like CC's gains most of its points using nodes that are much more frequently visited. The assumption for now would be that nodes with only one or two waves in them do not contribute to CC's success.

For now, I'm going to assume that a REALLY good node needs 100 waves collected or more. Probably this number could be much less, but I think we can agree that such a population will give a pretty good estimate of the enemy's future position. Opinions?

Now I will make a bold assumption: I think on average 80% of CC's score can be attributed to 20% of its nodes. That means 2500 nodes do most of the work. If this WikiTargeting gun would use only 2500 nodes (dynamically segmented for each individual enemy) that would slightly decrease performance, but we can compensate for that. Let's suppose that CC would have 100 waves collected in each of those 2500 uber-nodes. Would that compensate for the above loss? I'd say yes it does. This would mean that in total 2500 * 100 = 250,000 waves would have to be collected. Assuming 1000 waves per battle, that would mean 250 battles should be enough to gather the required amount of data. And the more battles, the better these nodes will predict the enemy's future position. Of course all these assumptions need proof. Thoughts?

It is easy to conclude this would require prepopulating the gun with data. I will come back to that later.

Now we have a problem: some nodes will have hundreds or even over a thousand waves collected, while others may only have 10! The distribution is not equal. This is caused by the fixed segmentation. Could we dynamically segment this data so we end up with exactly 2500 nodes, each populated with 100 waves? I think this can be done by using a tree structure where we cut the population in half, using the most efficient dimension, as we climb up the tree. Probably there are other solutions. Suggestions?
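
To make the tree idea a bit more concrete, here is a rough sketch (the per-wave double[] representation and the widest-spread split criterion are placeholders for illustration, not an existing implementation):

    import java.util.*;

    // Sketch of the proposed tree: keep cutting the wave population in half
    // along some "most efficient" dimension until each leaf holds roughly the
    // desired 100 waves. Wave representation and split criterion are placeholders.
    class SegmentTree {
        static final int LEAF_SIZE = 100;

        int splitDimension = -1;      // -1 marks a leaf
        double splitValue;
        SegmentTree low, high;
        List<double[]> waves;         // each wave = its segmentation attribute values

        SegmentTree(List<double[]> population) {
            if (population.size() <= 2 * LEAF_SIZE) {
                waves = population;   // enough waves here for a usable GF estimate
                return;
            }
            final int dim = widestDimension(population);
            splitDimension = dim;
            // sort on the chosen dimension and cut the population in half at the median
            population.sort(Comparator.comparingDouble(w -> w[dim]));
            int mid = population.size() / 2;
            splitValue = population.get(mid)[dim];
            low = new SegmentTree(new ArrayList<double[]>(population.subList(0, mid)));
            high = new SegmentTree(new ArrayList<double[]>(population.subList(mid, population.size())));
        }

        // "most efficient" is a placeholder: here simply the dimension with the
        // largest spread; a real version could use variance, entropy or hit rate
        static int widestDimension(List<double[]> pop) {
            int best = 0;
            double bestRange = -1;
            for (int d = 0; d < pop.get(0).length; d++) {
                double min = Double.MAX_VALUE, max = -Double.MAX_VALUE;
                for (double[] w : pop) {
                    min = Math.min(min, w[d]);
                    max = Math.max(max, w[d]);
                }
                if (max - min > bestRange) {
                    bestRange = max - min;
                    best = d;
                }
            }
            return best;
        }
    }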

Now suppose we have this stuff all working. Would one 35 round match have much effect on the data gathered in 250 matches? Not very much I think. That means the optimal GF found in each node should be enough data to store per node. No need to save all the other bins! This will reduce memory consumption. It also means that during matches no waves have to be fired and collected anymore. This will improve performance leaving more room for expensive movement algorithms. It will also decrease the data file sizes, leaving room for more prepopulated data files on individual enemies.

This creates a new problem: the gun will not be able to adapt to adversaries using adaptive movement. I think this problem can be discarded because there are only a few of those in the rumble. Opinions?

About prepopulating: we now have 2500 nodes that need to be stored. We have reduced each node to one value, its best GF, so that requires at least one byte to be saved. Because we will need dynamic segmentation, we will also need data on which segments this node has. Of course this depends greatly on the chosen algorithm, but we will need at least one byte to identify which segmentation parameter to use and a byte for the division border. Probably we will need some more data if we were to use a tree structure, but for now let's assume we need 3 bytes per node. This means we will have to serialize 2500 * 3 = 7500 bytes, roughly 7 kB, per enemy. This obviously is way too much, because ideally we want to save data on all 286 RR participants.

We need to solve another set of problems: How do we collect such huge amounts of wave data, when we can only store 200kB in our robot's data directory? And should our robot perform the dynamic segmentation itself? I have experimented with the latter a bit with Locke, and I must conclude that there is no way a robot can perform such heavy algorithms in-game. It will even time out in the post-match time. Therefore a separate application should gather the data out of the robot directory and process it. This application will generate the file(s) needed by the robot to populate its gun. We can solve the first problem by letting the application read and empty the robot's data directory every few seconds (asynchronously to the robot which at that moment is fighting its matches).
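
A rough sketch of what such an external harvester loop could look like (the directory paths and polling interval are made-up placeholders):

    import java.io.File;

    // Sketch of the external harvester: every few seconds, move whatever the
    // robot has dumped in its data directory into a collection directory for
    // offline processing, freeing the 200kB quota again. Both directory paths
    // and the polling interval are placeholders.
    public class WaveHarvester {
        public static void main(String[] args) throws InterruptedException {
            File robotData = new File("robots/wiki/WikiTargeting.data");   // assumed location
            File collected = new File("collected-waves");
            collected.mkdirs();
            while (true) {
                File[] files = robotData.listFiles();
                if (files != null) {
                    for (File file : files) {
                        // time-stamped name so successive dumps don't overwrite each other
                        file.renameTo(new File(collected,
                                file.getName() + "." + System.currentTimeMillis()));
                    }
                }
                Thread.sleep(5000);   // poll asynchronously to the battles being fought
            }
        }
    }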

This will significantly reduce the amount of code required for the robot itself, meaning it will probably fit in a micro- or even nanobot.

Summary:

  1. A GF gun like CC's spreads its waves over thousands of segmentation nodes, but most of its score probably comes from a small fraction of heavily visited nodes.
  2. By pre-training over hundreds of battles and saving only the best GuessFactor of those nodes (dynamically segmented per enemy), the gun can start every RoboRumble battle with near-optimal data.
  3. Wave collection and segmentation are done offline by a separate application; the robot itself only loads the resulting file, so it needs very little code and no in-game wave bookkeeping.

Challenges to overcome:

  1. Verifying the assumptions made above (how the waves are really distributed, how many visits a node needs, the 80/20 guess).
  2. Fitting pre-trained data for all RR participants within the 200kB data quota.
  3. Collecting and processing the huge amounts of wave data outside the game, since the robot cannot do the dynamic segmentation itself.
  4. The gun will not be able to adapt to enemies with adaptive movement.

What do you all think of these ideas? Don't be shy now ;-) --Vic


Way cool! I don't follow along all the rocket science stuff, but I like that you keep thinking big. =) I agree that a pre-trained gun like this would not be too impaired by the adaptive movers yet, since they are not plentiful. I wonder one thing though. What is it that makes a super-node? I understand that the figure 100 is a very, very raw estimate, but I still wonder where it comes from. A node with just one sample in it doesn't contribute of course. With two samples there's at least a theoretical possibility, though only with 1/16 or so. I'm assuming here that with enough dimensions one previous sample is enough to make it useful. Maybe that's a wrong assumption? I think that the number of super-nodes will vary greatly depending on the enemy bot. Some simple movers might have very few. If we only save the data from the actual super nodes we might shrink the data size greatly. And this might also make the dynamic segmentation unnecessary? (Not that I really have a clue what dynamic segmentation is about...) -- PEZ

A node is a super-node when it produces successful predictions. How exactly it would be defined I don't know. The definition is only that it contributes significantly to the score. That means it's a combination of how predictable the enemy is in that particular node and the number of waves collected in that node. I am not a math wizard so the figure 100 is not based on anything fancy. It's an intuitive number with which I get the feeling that if there is a tendency in the enemy's movement in that particular segment, I will find it. But as I said, it could be much lower: 50 or even 20. I would love a statistics wizard's view on this! You say 1 previous sample can be useful. My view here is that since it has only one sample, this node is obviously hardly ever visited by the enemy, so discard it for the sake of file reduction and run the risk that you miss one or two shots more when it unexpectedly does visit that segment. Of course the loss of these shots (in total estimated at 20%) will have to be compensated by the remaining super-nodes. These need to be at least 20% more accurate after 250+ matches than what CC would achieve with the data gathered in one 35 round match. I think you are right that the number of super-nodes will depend on the quality of the enemy's movement. This could definitely work in our advantage when reducing filesize. A lot of assumptions I made are very, very raw. I am hoping that some people have investigated some of this stuff before and can fill in the right numbers. If not, we must find the right ones ourselves. Maybe in that case we can use CC or another top GF gun to find out some of this stuff. Shouldn't be too hard. About dynamic segmentation: I'm a sucker for that stuff. But if this can be done without DS I'm all for it. I can think of a couple of reasons why we *should* use it, but not using it would save a lot of file space. We should discuss that in some more depth. Thanx for thinking with me! --Vic

What I meant was that 1 previous sample is useful if that's what you have when you are to fire. If it was correct last time, chances are it's correct this time too, in'it? Of course if after a 250 round battle you have nodes with only one visit in them these nodes are pretty useless. But I think a super node should have a spiky distribution. This is more important than the number of samples in the node. (If it's 10000 samples with a flat profile, we're not any better off than without samples, are we?) Of course we'll need enough samples to tell that the spike is statistically significant. On the issue of dynamic segmentation. Can you name the main reasons you think it's needed? That would maybe help me understand what DS is and also make us able to see if DS can be substituted with one or several other techniques or not. -- PEZ

I agree with you on the spiky nodes issue.

But, I do think there may be a category of supernodes that is visited very, very often, but is quite flat. Let's call these Nemesis Nodes :-) Even if the rate of scoring in these nodes is much lower than that of the spiky nodes, they still contribute significantly to the score because they are visited so darn often. Of course these are the nodes that we hate :-) But we cannot discard them in my opinion. This is also a good example where dynamic segmentation may be useful: DS may be able to split these Nemesis Nodes up into smaller nodes that may be more spiky. I don't think DS is needed per se, but at least it would be interesting :-) Without DS the data files could probably be smaller. There are other reasons why DS may be useful as well.

Brainstorming on fixed segmentation and supernodes: suppose we simply use fixed segmentation like the one used in CC. We run 250+ matches and discard the non-supernodes. We save the supernodes in a file. Now our robot retrieves that file and fills part of the array (the supernodes only). That means most of the nodes will be empty. If an empty node is visited we could fall back to less segmented data or a simple targeting method, for instance.

What would the saved file look like? How can we make sure the right supernodes will be loaded into the array in the correct spot? If we could solve this, then I think implementing such a gun would be much easier than I initially thought. --Vic

Version 1.9.6.13 of CC tried to hold fire when it didn't have a certain amount of visits in the node. I tried different limits, but using "roundNum * 3 + 3" seemed to only actually hold fire in the beginning of the first round. This indicates that most nodes in the CC guns actually are visited multiple times during even a short battle. From my tests and the RR results this variant of the gun didn't perform any better than without this feature (rather slightly worse). Note that I didn't check for spikiness, which might tell a different story all together. As for data format for a non-DS version. We could try simply writing the array object like CC does with its surf data. Before writing we could mark all non-super nodes by writing -1 in all the buckets of them. I think this would help zip shrink the file considerably. Another option would be to use a crib sheet like Swiffer's. Again with -1 marking a non-super node. (And then filling all buckets with -1 upon reading the data.) -- PEZ
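
A sketch of the first option (the -1 marking) could look like this; the isSuperNode test and the flattened double[node][bucket] array shape are just illustration, not Bee's actual layout:

    // Needs java.io.*, java.util.zip.*, java.util.Arrays and
    // robocode.RobocodeFileOutputStream.
    static void saveMarked(double[][] nodes, File file) throws IOException {
        // blank out everything that is not a super node so the zipped file shrinks
        for (double[] buckets : nodes) {
            if (!isSuperNode(buckets)) {
                Arrays.fill(buckets, -1);      // long runs of -1 compress very well
            }
        }
        ZipOutputStream zip = new ZipOutputStream(new RobocodeFileOutputStream(file));
        zip.putNextEntry(new ZipEntry("nodes"));
        ObjectOutputStream out = new ObjectOutputStream(zip);
        out.writeObject(nodes);
        out.flush();
        zip.closeEntry();
        zip.close();
    }

    static boolean isSuperNode(double[] buckets) {
        double visits = 0;
        for (double b : buckets) visits += b;
        return visits >= 100;                  // the rough threshold from the discussion
    }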

Sounds like an interesting idea. The biggest advantage I see is that you can revert to only use information gathered from waves that are aligned to real bullets (since the lack of samples is no problem anymore). That should produce better results vs movements that react to bullet fire (except real adaptive movements of course). On the other hand the inability to adapt might be a problem. I don't know how many bots in the rumble use some kind of adaptive movement, but I'm sure it's not only wave surfers (thinking of MusashiTrick...). So combining this gun with a learning one might be necessary; perhaps using WikiTargeting first and switching to the other gun if the hit ratio is lower than it was while recording the data?

I just looked up what CribSheet means. If I understood it correctly, it will produce one value for each node (so in case of CC's segmentation 12500 values, which would be too much). Perhaps it is better to build up this crib sheet array and store just the pairs of index (2 bytes) and corresponding best guess factor (1 byte) for super nodes. If less than one third of the nodes are super nodes this should consume less memory, shouldn't it? Of course using zip might change things..., I have no experience there. --Mue
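
For illustration, that pair format could be written out along these lines (a sketch; the superNodes map is an assumed input, keys being flattened node indexes 0..12499 and values the best GF bucket):

    // Needs java.io.* and java.util.Map.
    static void savePairs(Map<Integer, Integer> superNodes, OutputStream target)
            throws IOException {
        DataOutputStream out = new DataOutputStream(target);
        for (Map.Entry<Integer, Integer> node : superNodes.entrySet()) {
            out.writeShort(node.getKey());   // node index, 2 bytes
            out.writeByte(node.getValue());  // best guess factor bucket, 1 byte
        }
        out.flush();                         // 3 bytes per super node before zipping
    }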

Well, it's easy enough to write two nodes in one byte, which would amount to 6250 bytes. Zip ratios of 90% have been seen before so it would fit. At least it could be tried without too much effort. We could write two or three different implementations and just pick the most efficient one if there is a big difference, or the simplest one if the difference isn't too big. -- PEZ

Two nodes in one byte? Can you explain how? If you are right about most nodes being visited above your limit then the entire assumption of the existence of supernodes is false. In that case this whole theory is false and it is back to the drawing board (oh wait... that's where we are now :-). I think we should find out more about the distribution of nodes in the array before we can progress. I have thought of a formula that returns a node's Contribution to the gun. It is simply VirtualHits (as I described above) * BulletDamage. I think we could modify CC slightly to save a file containing a node per line, including this Contribution number and the segmentation indices. With that file we can check if super nodes really exist, and if so, how many of those there are. This modified CC would only need one 'shadow' array with pure visits (I assume CC takes botwidth into account with the existing array) and a function that analyses the nodes and writes the results to a file at the end of the match. If I have some extra time the coming days I'll try to make this CCWT version, but you are also welcome to do it if you want and have the time. Your solution for saving the data could work. I forgot about the fact that a file will be zipped when using a zipstream. Another (more difficult) solution would be saving a bitstream with 1 bit to denote a supernode/non-supernode and 6, 7 or 8 bits describing the optimal GF (only in case of a supernode). If that can be zipped then I think we're as small as we can get. --Vic

Mue, yes, we could try firing only real bullet waves. In theory that should work, provided the number of matches we need does not spin out of control. We will need about 15 times more matches in this case. On the other side of the spectrum, I was thinking of sending multiple waves per tick, for different bullet power levels. That would give us an extra choice: use the bullet power whose corresponding node has the highest contribution (VirtualHits * BulletDamage) to the gun's success..... Switching to an adaptive gun in case of adaptive movement should work, although I think that the gain here would not be very high. About crib sheets: I think even one third would be too much. Also 2 bytes extra info per supernode is as expensive as using dynamic segmentation as described somewhere above. PEZ's solution or my solution below would produce smaller files I think. I appreciate your input! Are you by any chance good at statistics? We could use some guru input in that field as well. --Vic

I was too quick saying you could write two nodes in one byte... That would limit us to using 16 bins and that's not really useful. Sending multiple waves per tick doesn't work for the purposes you want. Even if most bots are non-adapters, plenty of bots use the actual bullet power in their movement. As for the existence of super nodes. They might still exist if they are defined as "being used enough times to become accurate and being spiky". As for firing only "real" waves. I'm not sure it pays. But it can be tested. CC doesn't care about bot widths anywhere, though it has a method in its wave objects that returns the target bot width in factors. (This function is going to be deleted in the next version of CC though, since it's just dead meat.) I guess, in your parlance, this means CC is only using pure visits. I won't have the time to code much at all the coming days. The time I have I will spend trying to find out why CC is the only surfer that does not shut out japs.Serenity... -- PEZ

One more thing. While Bee might be the best TC gun by some margin, it's not the best RR gun. I think that this WikiTargeting thingy's biggest promise is that it could provide us with a tool to explore just what segmentations to do and how to optimize the results. -- PEZ

Well, CC (or Bee) is the best GF gun in the RR, tied with Raiko's gun. Plus it's open source and very clearly coded. That's why I picked it as an example. I hope we get to the stage that we build the tool you mention. That would indeed be interesting. But if we decide to stick to fixed segmentation the tool is not needed for this gun. It would be needed if we went with dynamic segmentation. Of course DS could be a separate thing to explore. Ok, I'll see if I can find the time to hack together a CCWT and analyse some data. --Vic

Just the results from trying to identify the super nodes will make a good tool if you find a good test bed and are ready to hand tune the segmentations. Maybe DS is a better path, but I'll keep the judgement of that until I've seen it working. =) Please base any CassiusClay/Bee WT bot on BeeRRGC rather than CC. That will allow direct comparison with the other RRGC bots. How about BeeWT? -- PEZ

@Vic: No, I'm not that good at statistics :-) --Mue

I haven't had the time to create BeeWT yet. I'm too busy with work right now, and next week I'll be off on a 3 week vacation. Until then a few Locke tweaks will be the only thing I can manage. --Vic

Maybe this could spark some discussion on the subject of /DynamicSegmentation --Vic

The first results of the BeeWT test are in: in a 35 round match against RaikoMX, BeeWT fired and collected 19885 waves, and distributed them amongst a total of ................. no more than 572 nodes!!!!!!!!!!! (out of a possible 12500). Of these nodes the sizes differ greatly, and I have counted 52 nodes with 100+ visits. I am now confident that supernodes DO exist. I'll post BeeWT on the repository right away, so you can verify my results are correct. Here's the piece of code responsible for saving the visits data:

    public static void serialize(File file)
    {
        try {

            ZipOutputStream zipout = new ZipOutputStream(new RobocodeFileOutputStream(file));
            zipout.putNextEntry(new ZipEntry("data.txt"));
            PrintWriter out = new PrintWriter(zipout);

            double totalVisits = 0;
            int totalFilled = 0;
            boolean check = true;
            for(int a=0;a<BULLET_POWER_INDEXES;a++) {
                for(int b=0;b<DISTANCE_INDEXES;b++) {
                    for(int c=0;c<VELOCITY_INDEXES;c++) {
                        for(int d=0;d<VELOCITY_INDEXES;d++) {
                            for(int e=0;e<TIMER_INDEXES;e++) {
                                for(int f=0;f<WALL_INDEXES;f++) {
                                    double visits = 0;
                                    double coverage = 0;
                                    double value;
                                    //count visits
                                    for(int bucket=1; bucket<FACTORS; bucket++) {
                                        value = factorsFull[a][b][c][d][e][f][bucket];
                                        visits += value;
                                        if (value > coverage) coverage = value;
                                    }
                                    if(visits > 0){
                                        totalFilled++;
                                        totalVisits += visits;
                                        String line = (int)visits + " " + (int)coverage;
                                        out.println(line);
                                    }
                                }
                            }
                        }
                    }
                }
            }
            out.flush();
            zipout.closeEntry();
            out.close();
            System.out.println("total visits:" + totalVisits + ", used nodes:" + totalFilled + " (of 12500)");
        }
        catch (IOException e) {
            System.out.println("Error writing file:" + e);
        }
    }

No mistakes I hope, PEZ?

--Vic

That looks correct. -- PEZ

Wow! Interesting ideas, Vic! First time reading this, and it sounds indeed interesting. Unfortunately I've only taken a quick overview, I'll read it more calmly later... -- Axe

I've analyzed the data some more. The big conclusion is:

95% of BeeWT's hits are done by only 450 supernodes!

I have defined hits as the number of visits in the most visited bucket of each node. In this case a total of 3361 visits are 'hits'. The 450 nodes with the highest 'hits' number have a summed total of 3211 hits. So if you discard the non-supernodes, you only discard 150 hits, or 5%.
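
For what it's worth, that analysis boils down to something like this (a sketch; hitsPerNode is an assumed input array holding the biggest bucket of each non-empty node, 572 of them in the RaikoMX test):

    // Sort the per-node 'hits' and count how many of the biggest nodes are
    // needed to cover 95% of all hits.
    static int nodesFor95Percent(double[] hitsPerNode) {
        double[] hits = hitsPerNode.clone();
        java.util.Arrays.sort(hits);                 // ascending
        double total = 0;
        for (double h : hits) total += h;
        double covered = 0;
        int count = 0;
        for (int i = hits.length - 1; i >= 0 && covered < 0.95 * total; i--) {
            covered += hits[i];                      // walk from the biggest node down
            count++;
        }
        return count;
    }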

Loading data from a file containing 450 supernodes would make Bee operate very strongly (at 95% of the efficiency of an array filled during a 35 round match) from the start of a new match. Making Bee perform even better is as simple as saving data after longer matches.

Now how do we serialize those supernodes? My suggestion would be: save each supernode as an index-value pair (as Mue suggested). Each node has an index 0-12499. This can be coded into 14 bits. A node contains 27 buckets, so the best bucket (most visited GF) can be encoded in 5 bits. 450 supernodes * (14+5 bits) = 8550 bits = 1069 bytes = 1.04 kB. Hopefully zipped this number can fall below 1 kB, meaning we can save data on all RR participants. I think this can be done, and I think this could make BeeWT the strongest gun yet.
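
A sketch of what that 14+5 bit packing could look like (the two input arrays are assumptions; 450 records come out at exactly the 1069 bytes estimated above):

    // Each super node becomes a 19-bit record (node index 0..12499 in 14 bits,
    // best bucket 0..26 in 5 bits) written most-significant-bit first into one
    // continuous bit stream. 450 records fill (450 * 19 + 7) / 8 = 1069 bytes.
    static byte[] pack(int[] nodeIndex, int[] bestBucket) {
        byte[] data = new byte[(nodeIndex.length * 19 + 7) / 8];
        int bitPos = 0;
        for (int i = 0; i < nodeIndex.length; i++) {
            int record = (nodeIndex[i] << 5) | bestBucket[i];   // 19 significant bits
            for (int b = 18; b >= 0; b--, bitPos++) {
                if (((record >> b) & 1) != 0) {
                    data[bitPos / 8] |= 1 << (7 - bitPos % 8);
                }
            }
        }
        return data;
    }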

Thoughts?

--Vic


I think it's fair to say the SuperNodes theory checks out. BeeWT, CassiusClayWT and SilverFistWT are all doing insanely well with prepopulated data files on all RR participants. Each data file contains only the 200 best SuperNodes and their best GuessFactor. Should we pursue this path even more and experiment with things like:

  1. cramming more data in a file
  2. identifying which enemies it actually performs worse against compared to not using saved data
  3. using longer matches when generating the data files
  4. (dis)allowing saving data on short matches (after you save a file, you're stuck with those SuperNodes+bestGF forever)

About WikiTargeting in general, is there interest to continue the discussion on this page? Now that the SuperNodes issue is "resolved" the discussion on how to get the most efficient segmentation can continue if anyone is interested.

--Vic

How many rounds did you use to populate BeeWT 1.2? -- PEZ

An interesting (imho) observation to make is that Bee's rationale for using the segmentations it does was formulated without SuperNodes in mind. I'm trying to strike a balance between high granularity and fast enough learning. But with SuperNodes data management we can try segmenting much harder. -- PEZ

100 rounds. Segmenting harder does come with a price: for each SuperNode that you divide up (into more spiky nodes) you have to drop one existing SuperNode from the datafile. I agree it does become easier to experiment when you don't have to worry at all about fast learning. Wait a minute, you do have to worry when you have no data on the enemy. Then it becomes just regular (non WT) targeting. --Vic

That's not a problem. Just use the current buffers for targeting while you are collecting data in the SuperNode aware ones. -- PEZ

Yes, if you simply keep those buffers separated then there's no problem. About harder segmentation: we could use an offline application to read all WT data files and generate an overview of which nodes are most visited overall. That would give us insight on what to split and what to lose. Anyone care to write it? It shouldn't be that hard, since the code of reading the files is already written. I have my hands full at the moment. --Vic

This discovery of SuperNodes has convinced me even more that dynamic segmentation is the way to go. One worry I had before was if I could manage saving data of enough tree nodes to compete with GF targeting. Now that we've found that only 200-500 nodes do all of the work, it may even be possible to save segmentation data AND best GF's of that amount of tree nodes. --Vic

Somewhere I saw discussion about reducing the file size for WT. These are generally small zipped files, where the zip header is a big part of the size. However there is a quite simple solution:

 bash-2.05b$ echo qwerty >qwerty
 bash-2.05b$ zip qwerty.zip qwerty 
   adding: qwerty (stored 0%)
 bash-2.05b$ gzip qwerty
 bash-2.05b$ ls -l qwerty*
 -rw-r--r--  1 lrem users  34 Sep 27 19:45 qwerty.gz
 -rw-r--r--  1 lrem users 151 Sep 27 19:45 qwerty.zip

Any conclusions? :> -lRem

Yeah. I agree. I wrote above that I thought putting all files in one archive would help reduce zip overhead. Your example there is pretty convincing. =) -- PEZ

That's not all. I'm not quite sure how the files look (I'm too lazy to check it by myself :P), but I suspect that not zipping at all could even reduce the size. This statement could be even more true if you did the bit-shifting trick, which is in fact a very effective number compression ;) And this could be done in quite short code, at least I was able to do so in C++. -- lRem

I doubt the raw files would be smaller unzipped. We need to save an index into the soup of 12500 nodes and also what bin is the most visited. This for 200 nodes. Without bit shifting this means 2 bytes for the index and 1 byte for the bin number. 600 bytes. Well, that means the zipped files are about the same size as the uncompressed ones. But with all enemies' data in the same archive this could probably be shrunk considerably. Bit shifting should be pretty similar in C++ and Java since they share the operators for it. But let's not resort to that until we know how far we get with the single-archive change. -- PEZ

The bit-optimised raw files _are_ smaller unzipped. I use them in Shadow's movement, the smallest ones grow to double the size when zipped. You don't have to shift bits though, just some multiplications and additions. Anyway, the single file solution is probably the best. -- ABC

Hehe. Bit shifting and additions can be very similar to multiplications and additions. In fact back in the days when CPU's didn't have multiplication operations you had to bit shift to achieve things like multiplication and division. Anyway, the single file solution is probably the best. =) -- PEZ

If your multiplications/divisions are all by 2^x you'll just gain execution speed. Acceleration, for example, is normally defined by 0-3, you'll lose 1 bit by bit-shifting :). The single file solution is, otoh (and imho), harder to code, you'll have to include some time-stamp if you don't want it to crash after a lot of rumble updates. With a multiple file solution you can use the file creation date to delete the oldest file when there isn't enough room for a new one... -- ABC

Does anyone have any example code on how to pick the right data file from the zipfile when reading the data? Currently the solution to ABC's problem is quite simple: don't save any data when there is no data file available. Besides the data quota reason I did this for two other reasons: 1) having no data file could be by choice (i.e. against adaptive movers); 2) data files on short matches are not very good anyway. --Vic

Isn't it as simple as calling getEntry(enemyName) on the zip file instance? Check this page: http://java.sun.com/j2se/1.4.2/docs/api/java/util/zip/ZipFile.html#getInputStream(java.util.zip.ZipEntry) -- PEZ
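
A minimal sketch of that approach, assuming one zip entry per enemy name (whether ZipFile behaves inside the robot's restricted data directory would still need to be verified):

    // Needs java.io.* and java.util.zip.*.
    static InputStream openEnemyData(File dataFile, String enemyName) throws IOException {
        ZipFile zip = new ZipFile(dataFile);
        ZipEntry entry = zip.getEntry(enemyName);
        if (entry == null) {
            zip.close();
            return null;                     // no saved data for this enemy
        }
        return zip.getInputStream(entry);
    }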

You are right, a multiple entry zip file should be just as easy as a multiple file solution, but I don't know if the compression rate will be the best. I was thinking about a single entry zipped file where you would store a big structure with all the enemies data... -- ABC

Ah, I had only looked at ZipInputStream. Cramming a ZipFile in between should do the trick then. lRem, thanx for your input! Could you provide a code example of how you would implement the bit-shifting trick? Pseudocode will do, but the real thing would also be appreciated :-) As far as I've considered it the main problem is the absence of a BitOutputStream class and its Input counterpart. So I'm curious how you have solved it "with little code" as you said. --Vic

For example: Assuming 10 bulletTime segments, 3 vel and 3 accel. To store the node coordinates in one byte you could do this:

byteVal = (byte)(bt + v * 10 + acc * 30);

(Note: in this case you waste some bits, since the max value is 89. For perfect bit usage you'd have to use only power-of-2 segmentation)

To convert it back to ints:

int acc = byteVal / 30;
int v = (byteVal - acc * 30) / 10;
int bt = byteVal - acc * 30 - v * 10;

-- ABC

This is similar to building hashCode() keys with the exception that here you must build unique keys of course. Anyway, I think this might not be necessary. Why not try zipping several files in one archive first. If it's not small enough try writing all data to one single zip file (or gzip which might get smaller). If that's not small enough we can consider resorting to custom compression schemes imho. -- PEZ

Use gzip. It has very very low overhead... Even for data that can't be compressed the file size will only be 10 bytes larger than the raw data. In addition you get the file modification times which can be useful as ABC pointed out. I would only use ZIP for larger files, or files which you know will compress well. In those cases the ability to set the compression level for a ZipOutputStream might make up for the larger header. --David Alves
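
A quick, purely illustrative way to compare the two overheads mentioned here (the 600 byte payload just mirrors the estimate above):

    import java.io.*;
    import java.util.zip.*;

    // Writes the same raw bytes through GZIP and through a single-entry ZIP
    // and prints the resulting sizes.
    public class OverheadCheck {
        public static void main(String[] args) throws IOException {
            byte[] raw = new byte[600];

            ByteArrayOutputStream gz = new ByteArrayOutputStream();
            GZIPOutputStream gzip = new GZIPOutputStream(gz);
            gzip.write(raw);
            gzip.close();

            ByteArrayOutputStream zp = new ByteArrayOutputStream();
            ZipOutputStream zip = new ZipOutputStream(zp);
            zip.putNextEntry(new ZipEntry("data"));
            zip.write(raw);
            zip.closeEntry();
            zip.close();

            System.out.println("gzip: " + gz.size() + " bytes, zip: " + zp.size() + " bytes");
        }
    }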

David, something weird happened with your last post. Some words in unrelated paragraphs where garbled. Look at your diff and you will see what I mean. Any clues to why this happened? --Vic

I've seen it before. Thought it was one of those moron vandals that did it. But it was in a page that David posted actually. Strange thing anyway! -- PEZ

I'm using wireless internet access... maybe it's messing things up? I've got a wall between me and the access point... --[iDvad Alevs]? <-- kidding

---

I wrote a lot of words here, then thought it might be better to move them to a linked page PhaseSpaceRambling as it's somewhat off the main topic! -- Shrubbery

I'm thinking... couldn't you use Entropy to refine what segments to keep? You process all the bins as usual, sort them by the actual bins' Entropy and then only save the top 50 or so... -- Skilgannon

There's still a lot of benefit to keeping the most visited ones instead - a bin visited once in a 35 round battle would look maximally spiky (the lowest possible entropy), but it's surely not as useful to keep as a bin that was visited 500 times but has a somewhat higher entropy. It could be a good tie breaker to choose among bins with similar numbers of visits, though. -- Voidious

Yeah, I'm thinking maybe just discarding if the entropy is above a certain level, or the hits are below a certain level. It doesn't help to keep a bin with a perfectly flat (with flat juice on top =) ) profile. -- Skilgannon
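
For reference, the entropy measure being discussed could be computed per node along these lines (a sketch; the buckets array stands in for one node's GF bins, and lower entropy means a spikier profile):

    // Shannon entropy of one node's visit distribution over its guess factor
    // buckets. Empty nodes return 0 here.
    static double nodeEntropy(double[] buckets) {
        double total = 0;
        for (double b : buckets) total += b;
        if (total == 0) return 0;
        double entropy = 0;
        for (double b : buckets) {
            if (b > 0) {
                double p = b / total;
                entropy -= p * Math.log(p);
            }
        }
        return entropy;    // in nats; divide by Math.log(2) for bits
    }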

