Nova

March 03, 2010

Near the end of high school (2009-ish), I realized that I really wanted my robot butler. At that point I thought I was a programming superstar, because I’d had no trouble writing a LOLCode interpreter in VB. In any case, I decided that AI couldn’t be much harder than that.

So I decided to make a “recurrent neural network” in C++.

Having always just messed around with code until stuff worked, I took the same approach to AI: I randomly generated recurrent neural networks and ran them through thousands of modifications until they started doing what I wanted. In effect, it was a simplified genetic algorithm.
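
To give a feel for what that looked like, here’s the loop reduced to its simplest form: keep one network, randomly tweak it, and keep the tweak only if it scores better. This is a from-memory sketch rather than Nova’s actual code; `Network`, `mutate()`, and `fitness()` are just stand-ins.

```cpp
// Minimal "mutate until it works" loop: a (1+1)-style hill climber.
// Network, mutate(), and fitness() are illustrative stand-ins, not Nova's code.
#include <cmath>
#include <cstdlib>
#include <ctime>
#include <iostream>

struct Network {
    double params[8];   // stand-in for weights/connections
};

// Toy fitness: higher is better. A real version would score how well the
// network drives the virtual robot (e.g. how well it climbs a gradient).
double fitness(const Network& net) {
    double sum = 0.0;
    for (double p : net.params) sum += p;
    return -std::fabs(sum - 4.0);              // best when the parameters sum to 4
}

// Mutation: copy the network and randomly nudge one parameter.
Network mutate(const Network& net) {
    Network child = net;
    child.params[std::rand() % 8] += (std::rand() / (double)RAND_MAX) - 0.5;
    return child;
}

int main() {
    std::srand(static_cast<unsigned>(std::time(nullptr)));
    Network best = {};                         // start from a blank network
    double bestScore = fitness(best);

    for (int i = 0; i < 10000; ++i) {          // thousands of modifications
        Network candidate = mutate(best);
        double score = fitness(candidate);
        if (score > bestScore) {               // only keep changes that help
            best = candidate;
            bestScore = score;
        }
    }
    std::cout << "final fitness: " << bestScore << "\n";
}
```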

What was unusual about Nova, though, was that it was not limited to logistic or binary neurons. I thought of the neurons as elements of a circuit: there were “if” neurons, “and” neurons, and so on. In effect, my program was randomly constructing programs/circuits and using a genetic algorithm to nudge them toward the desired functionality.
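
A circuit-style neuron in that spirit might look something like the following. Again, this is a reconstruction for illustration; the node types and evaluation rules here are my stand-ins, not what Nova actually used.

```cpp
// Circuit-style "neurons": each node has a type instead of a fixed activation.
// The types and rules below are illustrative, not Nova's originals.
#include <cstddef>
#include <iostream>
#include <vector>

enum class NodeType { And, Or, If, Threshold };

struct Node {
    NodeType type;
    std::vector<std::size_t> inputs;   // indices of upstream nodes
    double threshold;                  // only used by Threshold nodes
};

// Compute one node's output from the current activations of the network.
double evaluate(const Node& node, const std::vector<double>& act) {
    switch (node.type) {
        case NodeType::And:                // fires only if every input fires
            for (std::size_t i : node.inputs)
                if (act[i] < 0.5) return 0.0;
            return 1.0;
        case NodeType::Or:                 // fires if any input fires
            for (std::size_t i : node.inputs)
                if (act[i] >= 0.5) return 1.0;
            return 0.0;
        case NodeType::If:                 // passes input 1 through only when input 0 fires
            if (node.inputs.size() < 2) return 0.0;
            return act[node.inputs[0]] >= 0.5 ? act[node.inputs[1]] : 0.0;
        case NodeType::Threshold: {        // fires if the input sum clears the threshold
            double sum = 0.0;
            for (std::size_t i : node.inputs) sum += act[i];
            return sum >= node.threshold ? 1.0 : 0.0;
        }
    }
    return 0.0;
}

int main() {
    // Tiny circuit: node 2 is an "and" over input nodes 0 and 1.
    std::vector<double> act = {1.0, 1.0, 0.0};
    Node andNode{NodeType::And, {0, 1}, 0.0};
    act[2] = evaluate(andNode, act);
    std::cout << act[2] << "\n";           // prints 1
}
```

The nice thing about a representation like this is that mutation is easy: flip a node’s type, rewire an input, or tweak a threshold, and you have a new candidate circuit.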

I was young and naive. I didn’t get much further than ‘teaching’ a virtual robot to follow a gradient.

Anyway, you can get the code here (bitbucket).