AgentSmith/Old

Well, TestBot finally got a name, and the name is AgentSmith! I thought I had better have a relevant name, so I came up with AgentSmith: in The Matrix, Agent Smith was an advanced AI, although it couldn't learn beyond the boundaries of its programming. Which is pretty relevant to my bot as well!

Since becoming AgentSmith I've tried to get it to do well with its current learning system. But it doesn't, so I'm in the process of a major overhaul. Hope it works!

-wolfman

OK, I've decided to stop development of AgentSmith. I might continue it at some later point, but it wasn't working that well. Instead I'm going to take some of the lessons I have learnt whilst doing it and update Warlord. - wolfman


Comments:

I just stopped coding and I'm trying to create some movement theory that helps me to create better movements (I think that for now, all the effort involves a lot of trial and error, and I *hate* testing my bots). Unfortunately, I'm not able to find a good approach. Could you explain which approach you are using to make your bot learn the best movements? (I understand that maybe you prefer not to explain it - or maybe it is too early to do it. Please decline to answer if you are not happy with the question.) -- Albert

Hrm, well I've tried several methods. The first was to make the robot learn several things at once: 1) how often to change direction, 2) what distance to move (including a random factor with learned boundaries), and 3) what angle to move at (with a random factor with learned boundaries). I did this for several distances away from the target, so that it might learn, for example, that close up an angle that moves away from the target is better, but further away it's better to move closer, etc. This method works, but not that well. What I mean is that it gets better, but the best it reaches is not as good as my other robot's movement (Warlord).
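
For illustration only, here is a minimal Java sketch of that kind of per-distance-bucket learner. This is not Wolfman's actual code - the class name, the number of buckets, the bucket size, the 0.1 learning rate and the starting bounds are all invented:

 // Not the actual AgentSmith code - just a sketch of the bucketed scheme above.
 // The bot keeps, for each band of distance-to-target, learned bounds for how
 // long to hold a move, how far to go and at what angle, and nudges the bounds
 // of the active band towards moves that scored well.
 import java.util.Random;

 public class BucketMovementLearner {
     static final int BUCKETS = 5;          // hypothetical: 5 distance bands
     static final double BUCKET_SIZE = 200; // hypothetical: 200 units per band

     // per bucket: {holdTurns, minDist, maxDist, minAngle, maxAngle}
     final double[][] params = new double[BUCKETS][];
     final Random rand = new Random();

     public BucketMovementLearner() {
         for (int i = 0; i < BUCKETS; i++) {
             params[i] = new double[] {20, 50, 150, 45, 135}; // arbitrary starting guesses
         }
     }

     int bucketFor(double distanceToTarget) {
         return Math.min((int) (distanceToTarget / BUCKET_SIZE), BUCKETS - 1);
     }

     // Pick a move: how many turns to hold it, a distance and an angle,
     // the distance and angle drawn at random from the learned bounds.
     public double[] chooseMove(double distanceToTarget) {
         double[] p = params[bucketFor(distanceToTarget)];
         double hold  = p[0] * (0.5 + rand.nextDouble());
         double dist  = p[1] + rand.nextDouble() * (p[2] - p[1]);
         double angle = p[3] + rand.nextDouble() * (p[4] - p[3]);
         return new double[] {hold, dist, angle};
     }

     // After seeing how a move went, pull the active bucket's bounds towards it
     // when the score was good and push them away when it was bad.
     public void reward(double distanceToTarget, double[] usedMove, double score) {
         double[] p = params[bucketFor(distanceToTarget)];
         double step = 0.1 * score; // score assumed to be roughly in [-1, 1]
         p[0] += step * (usedMove[0] - p[0]);
         p[1] += step * (usedMove[1] - p[1]);
         p[2] += step * (usedMove[1] - p[2]);
         p[3] += step * (usedMove[2] - p[3]);
         p[4] += step * (usedMove[2] - p[4]);
     }
 }

The reward step just pulls the active bucket's bounds towards a move that scored well and pushes them away from one that scored badly, which is one plausible reading of "a random factor with learned boundaries".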

The method I am currently trying is to learn which movement vectors are the best - i.e. a move at 40 degrees to the opponent with distance 50 is better than a move at 90 degrees to the opponent with distance 10. The robot tries out a whole range of these vectors. When selecting a move vector I pick a random vector, but limit myself to the X best of them, and also bias towards the best vector found so far. This is a bit better than the above method, but still not that good. I'm thinking of giving up on this, as I have spent far too long on it with little success. - wolfman
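
Again for illustration only, a minimal sketch of the vector-table idea. This is not the real AgentSmith code - the 15-degree/35-pixel grid, the TOP_X of 5 and the 40% best-vector bias are arbitrary guesses:

 // Rough sketch of the vector-scoring idea: keep a score for every
 // (angle, distance) move vector, sort by score, and pick randomly from the
 // top X with extra weight on the single best vector found so far.
 import java.util.ArrayList;
 import java.util.Comparator;
 import java.util.List;
 import java.util.Random;

 public class VectorMovementLearner {
     static class MoveVector {
         final double angle;    // degrees relative to the opponent
         final double distance; // distance to travel
         double score;          // running estimate of how well this vector does

         MoveVector(double angle, double distance) {
             this.angle = angle;
             this.distance = distance;
         }
     }

     final List<MoveVector> vectors = new ArrayList<>();
     final Random rand = new Random();
     static final int TOP_X = 5;          // hypothetical: choose among the 5 best
     static final double BEST_BIAS = 0.4; // hypothetical: 40% of picks take the best outright

     public VectorMovementLearner() {
         // enumerate a coarse grid of candidate vectors to try out
         for (double angle = 0; angle < 180; angle += 15) {
             for (double dist = 10; dist <= 150; dist += 35) {
                 vectors.add(new MoveVector(angle, dist));
             }
         }
     }

     public MoveVector chooseVector() {
         vectors.sort(Comparator.comparingDouble((MoveVector v) -> v.score).reversed());
         if (rand.nextDouble() < BEST_BIAS) {
             return vectors.get(0);               // bias towards the best so far
         }
         return vectors.get(rand.nextInt(TOP_X)); // otherwise any of the top X
     }

     // Update the chosen vector's score from whatever fitness measure the bot
     // uses (e.g. damage avoided); a simple running average here.
     public void reward(MoveVector v, double fitness) {
         v.score = 0.9 * v.score + 0.1 * fitness;
     }
 }

Picking randomly from the top X keeps some exploration going, while the bias towards the single best vector exploits what has already been learned.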

