During my freshman year of college (2010), I came to the conclusion that there wasn’t a good way to store the knowledge a machine learning algorithm gains. When experimenting with AI, I frequently stored my “learned” networks as XML, which I found slow and suboptimal. Ideally, an AI program would learn continuously and update its state to non-volatile media automatically, so that if (when!) the computer crashed, nothing would be lost.
At first, I experimented with SQLite, but quickly realized that the optimal storage medium for machine learning is a graph database. So I decided to make one.
nDB combined a SQL-like database for data with an underlying graph structure: entries were directly linked to other entries.
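The idea of SQL-like records that link directly to one another can be sketched like this. This is a minimal illustration of the concept, not nDB's actual data model; the `Entry` class and its field names are hypothetical.

```python
# Hypothetical sketch of nDB-style records: each entry holds named
# fields (like a SQL row) plus direct references to other entries,
# so the data set forms a graph rather than a flat table.

class Entry:
    def __init__(self, entry_id, **fields):
        self.id = entry_id
        self.fields = fields   # SQL-like column data
        self.links = []        # direct pointers to other Entry objects

    def link(self, other):
        """Connect this entry to another, forming a graph edge."""
        self.links.append(other)

# A tiny two-node graph: one neuron entry linked to another.
a = Entry(1, kind="neuron", weight=0.5)
b = Entry(2, kind="neuron", weight=-0.3)
a.link(b)
```

Because links are ordinary object references, traversing the graph is just pointer-chasing (`a.links[0]` is `b`), with no join step in between.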
With nDB, it was possible to keep a graph structure in RAM while continuously updating the disk representation, so that minimal data would be lost in the event of a crash or failure. nDB also solved the problem of saving and loading neural networks from files.
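One common way to get this "graph in RAM, continuously mirrored to disk" behavior is an append-only log: every mutation is applied in memory and immediately flushed to disk, so a crash loses at most the write in flight, and restarting replays the log to rebuild the graph. This is a generic sketch of that technique under my own assumptions, not nDB's actual on-disk format.

```python
import json
import os

class PersistentGraph:
    """In-RAM adjacency map backed by an append-only mutation log."""

    def __init__(self, log_path):
        self.log_path = log_path
        self.nodes = {}                      # node id -> set of neighbor ids
        if os.path.exists(log_path):         # crash recovery: replay the log
            with open(log_path) as f:
                for line in f:
                    self._apply(json.loads(line))
        self.log = open(log_path, "a")

    def _apply(self, op):
        # Mutate only the in-RAM structure.
        if op["op"] == "add_node":
            self.nodes.setdefault(op["id"], set())
        elif op["op"] == "add_edge":
            self.nodes[op["src"]].add(op["dst"])

    def _write(self, op):
        # Apply in RAM, then force the record onto stable storage.
        self._apply(op)
        self.log.write(json.dumps(op) + "\n")
        self.log.flush()
        os.fsync(self.log.fileno())

    def add_node(self, node_id):
        self._write({"op": "add_node", "id": node_id})

    def add_edge(self, src, dst):
        self._write({"op": "add_edge", "src": src, "dst": dst})
```

After a crash, constructing `PersistentGraph` with the same path replays every logged operation, so the in-RAM graph comes back exactly as of the last flushed write.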
The database was most definitely not ACID compliant.
While the project fizzled out before I really put it to good use, it had a huge impact on my ideas for an intelligent file system.
The code is available here (7zip file).