So, let's start with the basics of using function approximation on cluster data. Here's what the GuessFactors of LinearTargeting (one that doesn't account for walls) look like:
It's nice and simple, and easy to fit a function to by regression. Note that in the multiple dimensions of segmentation you might use, this line would be a plane (or hyperplane) which is flat in all axes except the lateral velocity. Keep in mind that lines are basically planes in 2 dimensions and are interchangeable for the purpose of this discussion.
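To make that regression step concrete, here is a minimal, untested sketch (class and method names invented for illustration) of fitting GuessFactor as a linear function of lateral velocity by ordinary least squares:

    // Minimal sketch, not from the original post: ordinary least squares fit of
    // GuessFactor against lateral velocity. All names are invented for illustration.
    public class SimpleGFRegression {
        private double sumX, sumY, sumXY, sumXX;
        private int n;

        // Record one observation: the lateral velocity and the GuessFactor that hit.
        public void addPoint(double lateralVelocity, double guessFactor) {
            sumX += lateralVelocity;
            sumY += guessFactor;
            sumXY += lateralVelocity * guessFactor;
            sumXX += lateralVelocity * lateralVelocity;
            n++;
        }

        // Slope of the fitted line: gf = slope * latVel + intercept.
        public double slope() {
            return (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
        }

        public double intercept() {
            return (sumY - slope() * sumX) / n;
        }

        // Predicted GuessFactor for a given lateral velocity.
        public double predict(double lateralVelocity) {
            return slope() * lateralVelocity + intercept();
        }
    }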
Well, that's all well and good, but what happens when something multimodal comes along?
Well, I know the above isn't a realistic targeting profile to expect in RoboCode, but it's a hypothetical multimodal one for the sake of discussion. So let's say that hypothetically those given points are the points your DynamicClustering gave. As a person, you can pretty clearly see there are two vaguely linear lines there, but how do you make an algorithm see that?
Well, the algorithm I came up with (still untested though) is essentially a hybrid of [K-means clustering] and [linear regression]. Similar to how you pick random centroids in K-means, you pick two or more random planes. Then for each data point, you find the plane it lies closest to. Then you regenerate each plane using multiple linear regression (linear regression across more than 2 dimensions), and repeat the process until you have a level of convergence you are satisfied with. Note that in this case, rather than "clustering" by situations as you do with DynamicClustering, you are clustering different modes.
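To make the iteration concrete, here is a rough, untested sketch of that loop: K random planes in place of K centroids, assignment of each point to the plane it lies closest to (measured along the GuessFactor axis), and a re-fit of each plane by multiple linear regression through the normal equations. All class and method names are invented for illustration; this is a sketch of the idea, not a tested implementation.

    import java.util.Arrays;
    import java.util.Random;

    // Hypothetical sketch of the K-means/regression hybrid described above.
    public class KPlanes {
        private final int k;             // number of planes ("modes")
        private final int dims;          // number of segmentation dimensions
        private final double[][] planes; // planes[p] = [w_0 .. w_{dims-1}, bias]

        public KPlanes(int k, int dims) {
            this.k = k;
            this.dims = dims;
            this.planes = new double[k][dims + 1];
            Random rnd = new Random();
            for (double[] p : planes)            // random initial planes
                for (int j = 0; j <= dims; j++)
                    p[j] = rnd.nextDouble() * 2 - 1;
        }

        // Value of plane p at segmentation point x.
        public double evaluate(int p, double[] x) {
            double y = planes[p][dims];          // bias term
            for (int j = 0; j < dims; j++)
                y += planes[p][j] * x[j];
            return y;
        }

        // Assign each data point to the plane it lies closest to,
        // measured along the GuessFactor axis.
        public int[] assign(double[][] x, double[] gf) {
            int[] owner = new int[x.length];
            for (int i = 0; i < x.length; i++) {
                double best = Double.POSITIVE_INFINITY;
                for (int p = 0; p < k; p++) {
                    double d = Math.abs(gf[i] - evaluate(p, x[i]));
                    if (d < best) { best = d; owner[i] = p; }
                }
            }
            return owner;
        }

        // One full pass: assign points, then re-fit each plane by least squares.
        public void iterate(double[][] x, double[] gf) {
            int[] owner = assign(x, gf);
            int m = dims + 1;
            for (int p = 0; p < k; p++) {
                // Build the normal equations A w = b over the points owned by plane p.
                double[][] a = new double[m][m];
                double[] b = new double[m];
                for (int i = 0; i < x.length; i++) {
                    if (owner[i] != p) continue;
                    double[] xi = Arrays.copyOf(x[i], m);
                    xi[dims] = 1.0;              // constant column for the bias
                    for (int r = 0; r < m; r++) {
                        b[r] += xi[r] * gf[i];
                        for (int c = 0; c < m; c++)
                            a[r][c] += xi[r] * xi[c];
                    }
                }
                double[] w = solve(a, b);
                if (w != null) planes[p] = w;    // keep the old plane if singular
            }
        }

        // Plain Gauss-Jordan elimination; returns null if the system is singular.
        private static double[] solve(double[][] a, double[] b) {
            int n = b.length;
            for (int col = 0; col < n; col++) {
                int pivot = col;
                for (int r = col + 1; r < n; r++)
                    if (Math.abs(a[r][col]) > Math.abs(a[pivot][col])) pivot = r;
                if (Math.abs(a[pivot][col]) < 1e-12) return null;
                double[] tmpRow = a[col]; a[col] = a[pivot]; a[pivot] = tmpRow;
                double tmp = b[col]; b[col] = b[pivot]; b[pivot] = tmp;
                for (int r = 0; r < n; r++) {
                    if (r == col) continue;
                    double f = a[r][col] / a[col][col];
                    for (int c = col; c < n; c++) a[r][c] -= f * a[col][c];
                    b[r] -= f * b[col];
                }
            }
            double[] w = new double[n];
            for (int i = 0; i < n; i++) w[i] = b[i] / a[i][i];
            return w;
        }
    }

Each call to iterate() is one assign-and-refit pass; like K-means itself, the loop only settles into a local optimum, so in practice you would repeat it until the assignments stop changing or until some fixed iteration cap.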
To use the data from these planes, you simply take every data point, shift its GuessFactor by however much the formula for the plane it is clustered with says it should be shifted, and then use a kernel density algorithm on these shifted values. If you wanted to, you could also keep statistics on the upper and lower bounding planes of each cluster to improve its reproduction of clusters with more complex probability distributions. In particular, I think keeping track of the upper and lower bounding planes of a cluster would let you build a truly accurate model, with less data, of where a bot like Splinter is targeting, for instance. Also, if you wanted more flexibility, you could very well replace the multiple linear regression with a neural network, however I'm reluctant to consider that on account of the CPU requirements.
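Here is a hypothetical sketch of that shift-and-smooth step, building on the KPlanes sketch above: each logged GuessFactor is moved to where its plane says it ought to sit in the current situation, then a Gaussian kernel density over the shifted values picks the firing GuessFactor. The bandwidth and the candidate grid are arbitrary choices made for illustration.

    // Hypothetical sketch: reposition GuessFactors with the fitted planes,
    // then smooth them with a Gaussian kernel density estimate.
    public class ShiftedKernelDensity {

        // shifted_i = gf_i + plane_i(current situation) - plane_i(situation_i)
        public static double[] shiftToCurrent(KPlanes planes, double[][] x,
                                              double[] gf, double[] current) {
            int[] owner = planes.assign(x, gf);
            double[] shifted = new double[gf.length];
            for (int i = 0; i < gf.length; i++) {
                shifted[i] = gf[i]
                        + planes.evaluate(owner[i], current)
                        - planes.evaluate(owner[i], x[i]);
            }
            return shifted;
        }

        // Pick the candidate GuessFactor in [-1, 1] with the highest kernel density.
        public static double bestGuessFactor(double[] shifted, double bandwidth) {
            double bestGf = 0, bestScore = -1;
            for (int step = -100; step <= 100; step++) {
                double candidate = step / 100.0;
                double score = 0;
                for (double s : shifted) {
                    double u = (candidate - s) / bandwidth;
                    score += Math.exp(-0.5 * u * u);   // Gaussian kernel
                }
                if (score > bestScore) { bestScore = score; bestGf = candidate; }
            }
            return bestGf;
        }
    }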
In any case, I believe that this algorithm could be an extremely effective way to model multimodal targeting/movement algorithms. Any comments/questions about this idea? :) -- Rednaxela
You are assuming a linear relation between the dimension and the guessfactor, but that is true only for linear targeting. Won't that be a problem? -- ABC
Well, I wasn't proposing it as a new method all on its own; I'm proposing combining this with DynamicClustering and using it to refine the cluster's data points for the current situation. So yes, while it does assume linear relationships between the dimensions and the guessfactor, and thus won't get amazing results on its own, it should serve to refine the data from DynamicClustering somewhat. The point isn't to be perfect, but to do better than plainly weighting the points in the cluster by their similarity to the current situation, by repositioning the points to what they roughly "ought" to be in the current situation. I would estimate the lower bound of this approach's performance to be about the same as plain DynamicClustering, and the upper bound to be no better than DynamicClustering's upper bound, however I think it may give more refined results when there are relatively few data points available.
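For what it's worth, a hypothetical sketch of that combination, reusing the KPlanes and ShiftedKernelDensity sketches above: take the k nearest logged situations (standing in for the DynamicClustering step, done here by brute force rather than a KD-tree), fit planes to just those neighbors, and reposition their GuessFactors to the current situation before aiming. The iteration count and bandwidth are arbitrary choices made for illustration.

    import java.util.Arrays;
    import java.util.Comparator;

    // Hypothetical sketch: nearest-neighbor "cluster" refined by the plane fit.
    public class ClusterRefinement {

        // Indices of the k logged situations closest to the current one.
        public static int[] nearestNeighbors(double[][] situations,
                                             double[] current, int k) {
            Integer[] idx = new Integer[situations.length];
            for (int i = 0; i < idx.length; i++) idx[i] = i;
            Arrays.sort(idx, Comparator.comparingDouble(
                    (Integer i) -> squaredDistance(situations[i], current)));
            int[] out = new int[Math.min(k, idx.length)];
            for (int i = 0; i < out.length; i++) out[i] = idx[i];
            return out;
        }

        private static double squaredDistance(double[] a, double[] b) {
            double d = 0;
            for (int j = 0; j < a.length; j++) d += (a[j] - b[j]) * (a[j] - b[j]);
            return d;
        }

        // Refine the cluster's GuessFactors against the current situation
        // and return a firing GuessFactor.
        public static double refineAndAim(double[][] situations, double[] gfs,
                                          double[] current, int k, int numPlanes) {
            int[] cluster = nearestNeighbors(situations, current, k);
            double[][] x = new double[cluster.length][];
            double[] gf = new double[cluster.length];
            for (int i = 0; i < cluster.length; i++) {
                x[i] = situations[cluster[i]];
                gf[i] = gfs[cluster[i]];
            }
            KPlanes planes = new KPlanes(numPlanes, current.length);
            for (int iter = 0; iter < 10; iter++)  // a few assign-and-refit passes
                planes.iterate(x, gf);
            double[] shifted = ShiftedKernelDensity.shiftToCurrent(planes, x, gf, current);
            return ShiftedKernelDensity.bestGuessFactor(shifted, 0.1);
        }
    }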
The more I think about it though, it may be interesting to toss the DynamicClustering, just use recent logs instead of near neighbors, and replace the linear regression with a separate neural network for each cluster (after all, neural networks are good nonlinear function approximators). For that to work, I would need to toss out old data and only keep a certain length of logs, due to not being able to optimize it like a KDTree, but it could be an interesting hybrid of neural, clustering, and log-based approaches. Actually, I think I may give that a try some day...
-- Rednaxela