For my final in the shader class I decided to do hardware-based radiosity.
The coolest thing about video-card radiosity rendering is the hardware box filtering. Combine that with FBOs and VBOs and you scarcely need any shader code at all; maybe none.
I thought I would just take my 3D realtime environment-map code, render the result into a buffer, and find the buffer's average by having the hardware compute all mipmap levels through its box filter (generate_mipmap). The smallest mipmap level conveniently contains the average of all colors in the rendered buffer, and that result gets stored into a vertex buffer object set aside for the scene's colors. This goes on between frame updates, and eventually, between the energy inserted into the scene via surface emissivity and the energy lost through absorption (one minus reflectivity), the luminance values converge to a steady state.
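The averaging trick can be shown without any GL at all. Here is a small Python sketch (for illustration only; the actual project is Lua code in my shell) of what generate_mipmap's box filter does in hardware: repeatedly average 2x2 blocks until a single texel remains, which is the mean of the whole buffer.

```python
# Hypothetical sketch, not the project's code: emulate a hardware box-filter
# mipmap chain. Each pass averages 2x2 texel blocks; the final 1x1 level
# holds the average of every texel in the original buffer.
def box_filter_mipmap(buf):
    """buf: square 2D list of luminance values, side a power of two."""
    while len(buf) > 1:
        half = len(buf) // 2
        buf = [[(buf[2*y][2*x] + buf[2*y][2*x+1] +
                 buf[2*y+1][2*x] + buf[2*y+1][2*x+1]) / 4.0
                for x in range(half)]
               for y in range(half)]
    return buf[0][0]  # the 1x1 mip level: the mean of the buffer

avg = box_filter_mipmap([[0.0, 1.0], [1.0, 2.0]])
print(avg)  # prints 1.0, the mean of the four texels
```

On the GPU this whole reduction is a single generate_mipmap call, which is why so little shader code is needed.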
The algorithm's load can be balanced between frame rendering and radiosity updates however the implementer likes: the fewer radiosity updates per frame, the faster the application's frame rate, but the slower the radiosity convergence.
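To see the convergence itself, here is a toy two-patch model in Python (an assumed illustration, not my Lua code): each patch sees only the other, and each update a patch's luminance becomes its own emission plus the reflected fraction of what it receives. Iterating drives the system to a steady state, and the `updates` parameter plays the role of the per-frame radiosity budget from the trade-off above.

```python
# Hypothetical two-patch radiosity model (assumed for illustration):
# patch A and patch B face each other, so each receives the other's
# luminance. Per update: lum = emissivity + reflectivity * incoming.
def radiosity_steps(lum, emissivity, reflectivity, updates):
    a, b = lum
    for _ in range(updates):
        a, b = (emissivity[0] + reflectivity[0] * b,
                emissivity[1] + reflectivity[1] * a)
    return a, b

# One emissive patch (a light source) and one purely reflective patch.
# Steady state solves a = 1 + 0.5*b, b = 0.5*a, i.e. a = 4/3, b = 2/3.
a, b = radiosity_steps((0.0, 0.0), (1.0, 0.0), (0.5, 0.5), updates=50)
```

More updates per call converge closer to the fixed point but cost more time, which is exactly the frame-rate versus convergence-speed knob.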
Originally I intended to do this all in floating-point operations, but I soon found a few limitations with floating-point ops on my Radeon X1600. I think they centered on the hardware mipmap generation, but I don't remember all the details of my results; I hope I left them in the code's comments.
This project is implemented in my GL/Lua Shell program, so all the code is in Lua, written specifically for my nifty little shell.