I’ve wanted to learn more about GLSL shaders for a while now. GLSL is a high-level shading language created to give developers more direct control of the graphics pipeline. In short, you can tell your GPU directly what it should be doing. These last two months I’ve been reading up on syntax and diving into shader development headfirst to gain the basic knowledge and experience needed to write shaders that work. My intention with this blog post is to share some of my initial experiments and the lessons learned along the way!
Let’s go back to where this process started for me. Two months ago I read this fascinating article about warping by demoscener and Pixar employee Iñigo Quílez. With his description in hand, I was able to quickly produce some interesting visuals in Processing (see the above screenshot). Unfortunately this sketch was incredibly slow, so I started thinking about using the GPU to do some of the heavy lifting (read: everything). You can see the rudimentary result of that very first shader in this Vimeo video. All of this is made possible by Andres Colubri’s GLGraphics library, which significantly expands the OpenGL capabilities of Processing. On an important sidenote: many of these capabilities will also be added to Processing 2.0, so among other things it will become possible to use shaders. That is, if you know how to write them, because there are definitely a few challenges to face when trying to learn and use GLSL, especially for those with a Processing-only background. The major challenges are:
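To give a flavor of the warping idea from that article: you distort a pattern by feeding it its own output as an offset, along the lines of pattern(p + warp(p)). Below is a minimal GLSL sketch of that structure. The hash, noise and fbm functions are generic stand-ins (a cheap hash-based value noise), not Quílez’s actual code, and the uniform names are my own:

```glsl
// Domain warping sketch: offset the lookup position of an fbm pattern
// by another fbm evaluation, i.e. fbm(p + fbm(p)).
uniform vec2 resolution;
uniform float time;

// Cheap pseudo-random hash of a 2D point (stand-in, not Quilez's code).
float hash(vec2 p) {
    return fract(sin(dot(p, vec2(127.1, 311.7))) * 43758.5453);
}

// Value noise: smoothly interpolate hashed corners of the grid cell.
float noise(vec2 p) {
    vec2 i = floor(p);
    vec2 f = fract(p);
    vec2 u = f * f * (3.0 - 2.0 * f); // smoothstep falloff
    return mix(mix(hash(i),                  hash(i + vec2(1.0, 0.0)), u.x),
               mix(hash(i + vec2(0.0, 1.0)), hash(i + vec2(1.0, 1.0)), u.x),
               u.y);
}

// A few octaves of noise (fractal Brownian motion).
float fbm(vec2 p) {
    float sum = 0.0;
    float amp = 0.5;
    for (int i = 0; i < 4; i++) {
        sum += amp * noise(p);
        p *= 2.0;
        amp *= 0.5;
    }
    return sum;
}

void main() {
    vec2 p = gl_FragCoord.xy / resolution * 4.0;
    // The warp: evaluate fbm at two offset positions and use the result
    // to displace the coordinate of the final fbm lookup.
    vec2 q = vec2(fbm(p), fbm(p + vec2(5.2, 1.3)));
    float v = fbm(p + 4.0 * q + time * 0.1);
    gl_FragColor = vec4(vec3(v), 1.0);
}
```

The offsets (5.2, 1.3) and the warp strength 4.0 are arbitrary tuning values; varying them is exactly the kind of experimentation that makes this technique fun.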
- The big picture. When you start using GLSL, it’s hard to get a feel for what shaders actually are. GLSL shaders replace very specific parts of the graphics pipeline. This gives a great deal of flexibility and opens up a world of possibilities, but in most cases you can’t translate a ‘regular’ program to a shader one-to-one. It’s very important to get a basic grasp of the graphics pipeline and the role of shaders within it.
- The syntax. GLSL is based on the syntax of the C programming language, which most Processing users won’t be familiar with. There is no way around it: if you want to use shaders, you will need to learn the syntax. The good news is that the number of functions and built-in variables is relatively limited, and they are fairly straightforward. A first piece of advice is to download a quick reference card for the GLSL version you are going to be working with.
- The Black Box of Perfection. GLSL is very unforgiving when it comes to errors. One little typo is all it takes to turn a perfectly smooth-running shader into a static black screen. Furthermore, since shaders run on the GPU, there is little to no feedback when things go south. In contrast, Processing shields users even from the try-catch error handling of Java and often points out the exact location of errors in the code. In this area GLSL and Processing are really worlds apart. With GLSL, iterative development is the way to go: run and test your code as often as possible to make sure things are still working, otherwise bug-tracking is going to be tough.
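To make the big-picture point above concrete: a fragment shader is a small program that the GPU runs once for every fragment (roughly: every pixel a primitive covers), and its only job is to output that fragment’s color. Here is about the smallest complete example I can think of, written in the older GLSL 1.20 style that GLGraphics works with; the `resolution` uniform is a name I chose, passed in from the Processing sketch:

```glsl
// Minimal fragment shader: runs once per fragment and writes a color.
// 'resolution' is a custom uniform supplied by the host sketch.
uniform vec2 resolution;

void main() {
    // gl_FragCoord holds this fragment's window coordinates;
    // normalize them to the 0..1 range.
    vec2 uv = gl_FragCoord.xy / resolution;
    // Output a simple two-axis color gradient.
    gl_FragColor = vec4(uv.x, uv.y, 0.5, 1.0);
}
```

Note how different this is from a Processing sketch: there is no loop over pixels. You write the program for a single fragment, and the GPU runs it for all of them in parallel, which is exactly where the speed comes from.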
Fortunately there are many helpful resources out there. To quickly get a first grasp of the big picture, I recommend checking out this tutorial. For those interested in more modern capabilities such as geometry and tessellation shaders, there is an updated version of the tutorial, which you can find here. To get a more complete and in-depth overview of GLSL there is the essential orange book, whose full title is OpenGL Shading Language (3rd Edition) by Randi J. Rost & Bill Licea-Kane. This book more or less assumes knowledge of OpenGL and C++ (making it a challenging read for Processing users!), but it is currently the number one book on OpenGL shading. Another important resource is the set of examples that comes with the GLGraphics library. There are different ways to use shaders through GLGraphics, for example via the respective GLSLShader, GLTextureFilter or GLModelEffect classes. The information on the Processing forums with regard to shaders is still quite limited so far, but it’s slowly increasing. On the web in general, however, there is actually quite a lot of information on GLSL to be found, and many open-source shaders have been made available for download. So googling something with the search term “GLSL” included will often give useful results, for example from Stack Overflow or GameDev.net. The thing with GLSL is that there are no shortcuts. Unlike many other Processing examples, just copy-pasting some code and running it won’t help you much when it comes to learning the language. You’ll really have to dig in deep and invest the time yourself.
All right, fast forward to my most recent finished project. Where ‘finished’ is a somewhat relative term, since all experiments are ongoing! 😀 Proudly I present… the showcase video (after which I will explain everything).
The basic shader that’s running this is actually quite simple. I’m just doing it the old-fashioned, brute-force way, which means I’m calculating the total blobness of each fragment based on its distance to all the existing metaball center points. I am fully aware that this is not the most efficient way possible; for now I just wanted to take this method as far as I could. Most of the time was spent adding and tweaking features for the physics system and the color controls (which both run on the CPU). One of the most important improvements with regard to the shader itself came when I changed my data transfer method from arrays to textures. And by data transfer I mean the transfer of information from CPU to GPU. In my current code a total of two textures transfer all the particle positions (including each particle’s radius) and the whole color palette to the GPU. This makes the code on both sides much more concise and flexible. It’s also better for my desktop, which tends to turn black when it has to deal with too many variables. The code now runs smoothly on both my desktop’s old ATI Radeon graphics card and my laptop’s new NVIDIA graphics card.
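For readers who want to picture what such a brute-force metaball fragment shader could look like, here is a sketch with the particle data read from a texture, as described above. All the names, the packing scheme (position in the RG channels, radius in B) and the scale factors are my assumptions for illustration, not the actual code behind the video:

```glsl
// Brute-force metaballs: sum each metaball's contribution at this fragment.
// Particle data is packed into a 1-row data texture by the host sketch
// (assumed layout: x, y in RG as 0..1 values, radius in B).
uniform sampler2D particles;
uniform int particleCount;
uniform vec2 resolution;
uniform float threshold;           // blobness level at which a fragment is 'inside'

const int MAX_PARTICLES = 100;     // compile-time bound, required by older GLSL loops

void main() {
    vec2 pos = gl_FragCoord.xy;
    float blobness = 0.0;
    for (int i = 0; i < MAX_PARTICLES; i++) {
        if (i >= particleCount) break;
        // Sample the i-th texel (center of texel i) of the data texture.
        vec4 p = texture2D(particles,
                           vec2((float(i) + 0.5) / float(particleCount), 0.5));
        vec2 center = p.rg * resolution;   // unpack position
        float radius = p.b * 100.0;        // unpack radius (arbitrary scale)
        // Contribution falls off with distance; clamp to avoid division by zero.
        blobness += radius / max(distance(pos, center), 1.0);
    }
    // Simple two-tone output; the real sketch maps blobness to a color palette.
    gl_FragColor = (blobness > threshold) ? vec4(1.0) : vec4(0.0, 0.0, 0.0, 1.0);
}
```

The cost is one texture fetch and one distance calculation per metaball per fragment, which is why this approach scales linearly with the metaball count and eventually becomes the bottleneck.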
Apart from Processing, three contributed libraries were used in the making of this. First, the GLGraphics library by Andres Colubri for handling the GLSL shaders. Second, the Toxiclibs physics and color libraries by Karsten Schmidt for the particle physics system and the different coloring options and strategies. Third, the controlP5 library for the graphical user interface. The main customizable features of this sketch/application are:
- Runs Full HD 1920×1080 with on-screen GUI
- Runs up to 100 metaballs in real-time, but capable of handling many more (tested up to 5000)
- Physics system that includes gravity, drag and world boundaries
- Particle behavior with regard to radius, strength and jitter
- Attractors that interact with the particles and can be controlled through intuitive mouse controls
- Metaball appearance settings for the minimum and maximum radius
- Extensive color settings such as number of colors, stepsize, manual color selection and thresholds
- Three color modes: gradient, color theories and lerp
- Three color sort methods: HSV, RGB, CMYK
- All the Toxiclibs color strategies such as complement, analogous, monochrome and others
- Interactive text mode with real-time text input, interpolation and shader-based blur settings
- Automatic randomization for colors, color settings and physics settings
- Saving UHQ output (up to 16,000 pixels, see the GLSL Flickr set for some examples)
- Saving continuous image sequences with or without GUI
All in all I’ve learned a lot while working on my first shaders. The most important epiphany probably comes at the start, when things actually work for the first time; from there on it’s just incremental development. Apart from expanding my programming range into GLSL, this whole endeavour has also broadened my view of the world beyond Processing. I’ve even installed Cinder (and started reading up on C++), but that’s something for another time! With regard to the metaball sketch, I have some ideas for further development. I’d like to change the metaball technique from brute-force calculation to ‘gradient blending’. The latter is image-based, so the number of metaballs won’t matter, which makes it much more efficient. I’d also like to investigate an alternative option with regard to movement, namely a flowfield instead of a physics-based particle system. These were obviously only first steps, as I’ve so far experimented exclusively with fragment shaders. I’d like to move to 3D and expand my experiments to vertex and geometry shaders. This is only the beginning!