As with all projects, there is a story behind this one.
I was working on my RNN code, hoping to eventually get a GPU version of it running. Somewhere between the formulas and the implementation I couldn't help but notice that convergence and initial conditions are very fragile things. So out of frustration I did some research on what could possibly make my RNN a little more stable, and amidst it all I came across some articles explaining the fractal structure of the convergence of recurrent networks. "Egads!" I thought to myself, "No wonder!" Both systems have a persisting state variable that is recursively fed through a (non)linear transform: the weight matrix of the RNN plays the role of the linear component of an IFS transformation matrix, and the RNN activation function plays the role of the final nonlinear transformation applied to the result.
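To make the analogy concrete, here's a minimal sketch of the shared skeleton: both updates are "state ← nonlinearity(linear map of state)". The particular weights, the `tanh` activation, and the sinusoidal variation are illustrative choices of mine, not anything from the actual project.

```python
import math
import random

def rnn_step(h, W, b):
    # RNN update: h' = tanh(W h + b) -- a persistent state vector pushed
    # through a linear map and then a squashing nonlinearity.
    return [math.tanh(sum(w * x for w, x in zip(row, h)) + bi)
            for row, bi in zip(W, b)]

def ifs_step(p, transforms):
    # Flame/IFS update (chaos game): pick one affine map (A, b) at random,
    # apply it, then apply a nonlinear "variation" (here: sin of each coord).
    A, b = random.choice(transforms)
    x = A[0][0] * p[0] + A[0][1] * p[1] + b[0]
    y = A[1][0] * p[0] + A[1][1] * p[1] + b[1]
    return [math.sin(x), math.sin(y)]

# Same recursion shape in both cases: iterate a state through the transform.
W = [[0.5, -0.3], [0.8, 0.1]]
b = [0.0, 0.1]
h = [0.1, -0.2]
for _ in range(20):
    h = rnn_step(h, W, b)

transforms = [(W, b), ([[0.2, 0.4], [-0.4, 0.2]], [0.5, -0.5])]
p = [0.0, 0.0]
for _ in range(20):
    p = ifs_step(p, transforms)
```

The structural parallel is exactly what the articles point at: swap the activation for a flame variation and the weight matrix for a set of affine maps, and the RNN's state trajectory becomes an IFS orbit.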
I embarked on designing a flame fractal renderer with the intention of writing a renderer for the RNN basins of attraction. (Somewhat like all those Newton fractals you see out there: originally designed to solve for the roots of quintic equations, but in the end all they're truly used for is their pretty pictures.) Sadly my GPU-greedy nature got the best of me, and I went awry and ended up making a GPU-powered flame fractal renderer instead.
This thing is powered by FBOs and float textures to iterate all its points, and it uses PBOs and VBOs to feed the iterated points to the point renderer. As a result it can iterate 16k points 20 times per frame at 85 fps. Looking at those million-particle GPU simulators out there, I'm sure there's still room for improvement.
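The loop structure can be sketched on the CPU: the float texture holding point positions becomes a plain list, and the 20 iterations per frame become a ping-pong between two buffers, standing in for rendering back and forth between FBOs. Everything here besides the "16k points, 20 iterations per frame" figures is an illustrative assumption on my part.

```python
import math
import random

NUM_POINTS = 16 * 1024   # one point per texel of the position texture
ITERS_PER_FRAME = 20     # render-to-texture passes per displayed frame

def variation(x, y):
    # Stand-in for a flame variation applied after the affine part.
    return math.sin(x), math.sin(y)

def iterate_frame(points, transforms):
    # Each pass reads every point from the "source texture", applies one
    # randomly chosen affine map plus the variation, and writes the result
    # to the "destination texture"; swapping buffers models the FBO ping-pong.
    src = points
    for _ in range(ITERS_PER_FRAME):
        dst = []
        for x, y in src:
            (a, b, c), (d, e, f) = random.choice(transforms)
            dst.append(variation(a * x + b * y + c, d * x + e * y + f))
        src = dst  # swap FBOs
    return src

random.seed(0)
transforms = [((0.5, 0.0, 0.25), (0.0, 0.5, 0.25)),
              ((0.5, 0.0, -0.25), (0.0, 0.5, -0.25))]
points = [(random.uniform(-1, 1), random.uniform(-1, 1))
          for _ in range(NUM_POINTS)]
points = iterate_frame(points, transforms)
```

On the GPU the final `points` buffer would be handed to the point renderer through a PBO-to-VBO copy and splatted as GL_POINTS; the sketch only captures the iteration side.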
Many thanks to the authors of the flame fractal paper for all the help it has been in implementing this.