Using constructive neural networks for serial learning

A three-dimensional mapping of the equation learned by a back-propagation neural network trained on the exclusive-or dataset

Here’s a scary thought: five years ago today I submitted the final print of my master’s thesis. Having found one of the backups of the final repository (nothing is lost, but some of it is buried deep), I thought I would post it here as a record of note. It took me longer than it was supposed to, but I was also teaching part-time and being lazy about half-time.

Looking back on it, I’m still pretty proud of my work and my writing. Here, then, is the final print of my thesis, presented for your (uh huh) enjoyment.

Since finishing it and becoming employed, I haven’t fooled around with any of the stuff in there again. This is probably a universal truth about theses in general. One thing I have always wanted to do, though, is write some more about the visualisation techniques I developed while (instead of?) working on my actual research.

I spent entirely too much time using my trained neural networks to make pretty pictures, rather than SCIENCE.

Much of that time went into writing neural networks that generate pretty pictures, and into coming up with fun ways of animating the training process so that you could see (well, rendered afterwards rather than in real time) exactly what the network learned and how it learned it.
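For anyone curious what the picture at the top actually shows, here is a minimal sketch (this is not the thesis code, and the layer sizes, learning rate, and iteration count are all my own guesses): train a small back-propagation network on the exclusive-or dataset, then sample the function it learned over the unit square. Plotting `surface` as a height map gives a three-dimensional mapping like the one in the caption.

```python
import numpy as np

rng = np.random.default_rng(0)

# The four exclusive-or examples.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer; four units is more than XOR strictly needs,
# but it makes plain gradient descent less likely to get stuck.
W1 = rng.normal(scale=1.0, size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(scale=1.0, size=(4, 1))
b2 = np.zeros(1)

lr = 1.0  # assumed learning rate
for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: squared-error gradient through the sigmoids.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# Sample the learned function on a 50x50 grid over [0, 1]^2;
# `surface` is the height field a 3-D plot would render.
g = np.linspace(0, 1, 50)
xx, yy = np.meshgrid(g, g)
grid = np.column_stack([xx.ravel(), yy.ravel()])
surface = sigmoid(sigmoid(grid @ W1 + b1) @ W2 + b2).reshape(50, 50)
```

Feeding `xx`, `yy`, and `surface` to something like matplotlib's `plot_surface` is enough to recreate the still image; re-rendering the grid every few hundred iterations is one way to animate the training process.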

Maybe I’ll get around to posting some more of the off-cuts from my thesis work here later on.

(P.S. Don’t read the code. It’s terrible.)