Robo Home | Changes | Preferences | AllPages

Training the network is not straightforward. It is sensitive both to the training parameters and to the order in which the input/output pairs are presented to it.

Much of the experience explained here comes from the development of TheBrainPi.

-- Albert

Thanks for sharing! I will try both the momentum and the learning rate levers and see what happens. What about the bias stuff? You have mentioned it elsewhere on the wiki, but I don't quite follow. Maybe you could elaborate some on it? This thing with the order of the input/output pairs sounds amazing. But the Orcas (and GB) don't use paired input/outputs. They have plenty of inputs (from 4 up to 50-something) and just one output. I couldn't just randomize the order of the inputs, could I? GB is quite interesting since it's so arbitrary. In the last Outpost 1-v-1 league GB was one of very few bots beating SandboxDT. It quite often beats the new Marshmallow also. But then, most of the time, GB just shoots like a very drunk robot... I feed a very shallow history (2 ticks if I remember correctly) of "bearing deltas" (think of it as LateralVelocity with all your own movement removed), distance and bullet power into the net, and out comes a guessed bearing. I've tried to compensate for the mislearning by using many nets in a VirtualGuns type of array, but it would be much better to try to understand how to get more stable predictions from one net. Your path with one brain for all bots sounds great and I think I will try walking it as well. -- PEZ

By input/output pairs I mean the input vector and its associated output vector (i.e. the expected values): input0 = (x0...xn), output0 = (y0...ym). So you have to keep the inputs and the associated outputs always together, but present the different pairs (that correspond to different times) in a random order, instead of presenting them in arrival order.
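A minimal sketch of this in Java: shuffle the presentation order, not the arrays themselves, so each input stays glued to its expected output. The data and the commented-out training call are hypothetical, not TheBrainPi's actual code.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class PairShuffle {
    // Return a random presentation order; index i keeps inputs[i]
    // and outputs[i] together as one pair.
    static List<Integer> shuffledOrder(int n) {
        List<Integer> order = new ArrayList<>();
        for (int i = 0; i < n; i++) order.add(i);
        Collections.shuffle(order);
        return order;
    }

    public static void main(String[] args) {
        // Hypothetical training set: three pairs recorded at different times.
        double[][] inputs  = { {0.1, 0.2}, {0.3, 0.4}, {0.5, 0.6} };
        double[][] outputs = { {1.0},      {0.0},      {1.0}      };

        for (int i : shuffledOrder(inputs.length)) {
            // net.train(inputs[i], outputs[i]);  // hypothetical training call
            System.out.println("presenting pair " + i);
        }
    }
}
```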

The bias is controlled (like the learning rate and the momentum) by a function in the NNPop class (just take a look at TheBrainPi's source code).

-- Albert

Well, but what is the bias? How does it affect the net? -- PEZ

If you wanted to go with the completely biological solution - and had a week or so to spare for simulations :p - you could simply have the neural net wipe and retrain over and over, with a genetic algorithm controlling the parameters, designed to optimize some arbitrary function combining hitrate and learning time (a possible example: minimize learning time / hitrate). The program would find everything best for you; you'd just need to be able to write the function. I doubt this is the best tweaking solution, though; it's far too time consuming, and the function may not be able to analyze performance nearly as well as a human. -- Kuuran

Hmmm, well, this is how I often seek the best parameters for stuff. I call them tuning factors. Sometimes I let my bot find them in battle and sometimes it's just hard labour back at the lab. =) Of course I'm not using any genetic algorithm, since I'm an illiterate dude and know nothing about such stuff. If I can't find the correct parameters for learning rate and momentum just by pondering what Albert wrote (be it because I'm thick-headed or because my nets are too different from Albert's), I'll start a lab session with it. Good idea Kuuran, thanks! -- PEZ
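For anyone pondering those two levers, the textbook backpropagation weight update combines them as delta_w(t) = -eta * gradient + alpha * delta_w(t-1). This is a generic sketch; the eta and alpha values below are only illustrative, not anything from TheBrainPi or NRLIBJ.

```java
public class MomentumUpdate {
    // One gradient-descent step with momentum:
    //   delta_w(t) = -eta * grad + alpha * delta_w(t-1)
    // Mutates w in place and returns the deltas for the next step.
    static double[] step(double[] w, double[] grad, double[] prevDelta,
                         double eta, double alpha) {
        double[] delta = new double[w.length];
        for (int i = 0; i < w.length; i++) {
            delta[i] = -eta * grad[i] + alpha * prevDelta[i];
            w[i] += delta[i];
        }
        return delta; // keep this for the momentum term of the next step
    }

    public static void main(String[] args) {
        double[] w    = {0.5, -0.3};
        double[] grad = {0.2, -0.1};          // illustrative gradient
        double[] prev = {0.0,  0.0};
        // eta = 0.1 and alpha = 0.9 are common textbook starting points.
        prev = step(w, grad, prev, 0.1, 0.9);
        System.out.println(w[0] + " " + w[1]);
    }
}
```

A large alpha smooths the updates (good for noisy Robocode data) but can overshoot if eta is also large, which may be one source of the "drunk robot" behaviour.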

Bias is a fixed weight that's added to the neuron inputs.

Just think for a moment about a linear regression:

 y = a + b * x (let's say x is not a single variable but a vector)

The bias would be "a": a constant weight added to the inputs that allows them to be centered. A linear regression without bias would be:

 y = b * x (so the plane defined would always cross 0 when x is 0).

The concept is just the same for NNs, because to get a neuron you just have to add the non-linear function to the equations above:

 with bias --> z = f(y) = f(a + b * x)
 without bias --> z = f(y) = f(b * x)

By default NRLIBJ doesn't use biases, so every neuron is modelled as f(b*x). You need to activate the "bias learning" if you want to use it.

Note that bias behaves in a similar way to "a" in a linear regression: it can be necessary to predict the outputs accurately, but if it's not necessary, it can disturb the results.
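The two equations above can be checked with a toy neuron. This sketch assumes a sigmoid for f and is not NRLIBJ code; passing a = 0 gives the "without bias" case.

```java
public class BiasDemo {
    static double sigmoid(double y) { return 1.0 / (1.0 + Math.exp(-y)); }

    // z = f(a + b . x); pass a = 0.0 for the "without bias" neuron.
    static double neuron(double a, double[] b, double[] x) {
        double y = a;
        for (int i = 0; i < b.length; i++) y += b[i] * x[i];
        return sigmoid(y);
    }

    public static void main(String[] args) {
        double[] b = {1.0, -2.0};
        double[] x = {0.0, 0.0};
        // Without a bias, the output is pinned to f(0) = 0.5 whenever
        // all inputs are 0 - the "plane crosses 0" effect Albert describes.
        System.out.println(neuron(0.0, b, x));   // 0.5
        // A bias shifts the operating point even for zero inputs.
        System.out.println(neuron(1.5, b, x));
    }
}
```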

-- Albert

Bias basically shifts the centerline where the threshold lies; think of it as memory/general experience. When you're trying to figure out what shade of red something is, you work in the ballpark of reds; you don't have to take time to figure out it's not black, indigo or teal, you immediately start thinking 'is it maroon or burgundy or pink?'. That's the best analogy I can think of at 5 in the morning... -- Kuuran

To get better performance for categorical values like walls (I mean, if, say, north wall = 0, west = 1, and so on), it can be easier for the network to learn if you encode them as 4 inputs with one value set to 1 and all the others set to 0. The same can be true for large data ranges (like distance to the enemy), but in this case you must discretize the value yourself. One other thing: for circular data like angles (or dates or things like that) it can be easier to learn if you feed in Math.sin or Math.cos, since then the network doesn't need to learn that the value is circular. -- Synnalagma
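Both encodings can be sketched in a few lines; the wall numbering here is an arbitrary assumption, not a Robocode convention.

```java
public class InputEncoding {
    // One-hot encode a wall index (assumed: 0 = north, 1 = west,
    // 2 = south, 3 = east): one input is 1, the rest are 0.
    static double[] oneHotWall(int wall) {
        double[] v = new double[4];
        v[wall] = 1.0;
        return v;
    }

    // Encode a circular quantity (radians) as two inputs, so the
    // network never sees the artificial jump between 2*pi and 0.
    static double[] circular(double angle) {
        return new double[] { Math.sin(angle), Math.cos(angle) };
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(oneHotWall(1)));
        // prints [0.0, 1.0, 0.0, 0.0]
        System.out.println(java.util.Arrays.toString(circular(Math.PI / 2)));
    }
}
```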

Has anyone tried using genetic algorithms to evolve the weights? If so, could anyone give me some help before I jump into the problem of making a bot with evolving weights in a neural network? --DragonTamer

If you know the correct value for the neural network at any point, then you shouldn't use a GA. Use back propagation or another supervised learning algorithm. However, if you don't know the correct answers for your particular network, GAs can sometimes find reasonable answers. I have experimented with GAs for evolving neural networks (about a year ago), but I never got useful results. They seem to be better suited for building programs (oftentimes in LISP) than for evolving neural networks. -- nano

In NRLIBJ you can find some methods to train the network using a GA. This training is useful for global search (finding some good initial weights) but not for fine tuning (a GA helps you avoid falling into a local minimum). So the answer, I think, is to train with a GA at the beginning and then use backpropagation. --Synnalagma
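As an illustration of the "GA for global search, backprop for fine tuning" idea, here is a minimal (1+lambda) evolution sketch. The fitness function is a toy stand-in for network error, and the commented-out training calls at the end are hypothetical, not NRLIBJ's API.

```java
import java.util.Arrays;
import java.util.Random;

public class GaSeed {
    static final Random RND = new Random(42);

    // Toy fitness: negative squared distance to a hypothetical "good"
    // weight vector. A real bot would use hitrate or prediction error.
    static double fitness(double[] w, double[] target) {
        double err = 0;
        for (int i = 0; i < w.length; i++) {
            err += (w[i] - target[i]) * (w[i] - target[i]);
        }
        return -err;
    }

    // Gaussian mutation of a weight vector.
    static double[] mutate(double[] w, double sigma) {
        double[] c = w.clone();
        for (int i = 0; i < c.length; i++) c[i] += sigma * RND.nextGaussian();
        return c;
    }

    // Simple (1+lambda) evolution: keep the best individual, spawn lambda
    // mutated children per generation, accept only improvements. Coarse,
    // but enough to find a reasonable starting point for backprop.
    static double[] evolve(double[] target, int generations, int lambda) {
        double[] best = mutate(new double[target.length], 1.0);
        for (int g = 0; g < generations; g++) {
            for (int k = 0; k < lambda; k++) {
                double[] child = mutate(best, 0.3);
                if (fitness(child, target) > fitness(best, target)) best = child;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        double[] target = {0.7, -0.2, 1.1};  // stands in for low-error weights
        double[] seed = evolve(target, 200, 10);
        System.out.println(Arrays.toString(seed));
        // net.setWeights(seed);         // hypothetical hand-off...
        // net.trainBackprop(pairs);     // ...then fine-tune with backprop
    }
}
```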

Last edited January 14, 2004 7:38 EST by Albert