Playing with GLSL in Processing


I’ve wanted to learn more about GLSL shaders for a while now. GLSL is a high-level shading language created to give developers more direct control of the graphics pipeline. In short, you can tell your GPU directly what it should be doing. These last two months I’ve been reading up on syntax and diving into shader development headfirst to gain the basic knowledge and experience needed to write shaders that work. My intention with this blog post is to share some of my initial experiments and the lessons learned along the way!

Let’s go back to where this process started for me. Two months ago I read this fascinating article about warping by demoscener and Pixar employee Iñigo Quílez. With his description in hand, I was able to quickly produce some interesting visuals in Processing (see the above screenshot). Unfortunately this sketch was incredibly slow, so I started thinking about using the GPU to do some of the heavy lifting (read: everything). You can see the rudimentary result of that very first shader in this Vimeo video. All of this is made possible by Andres Colubri’s GLGraphics library, which significantly expands the OpenGL capabilities of Processing. An important sidenote: many of these capabilities will also be added to Processing 2.0, so among other things it will become possible to use shaders there. That is, if you know how to write them, because there are definitely a few challenges to face when trying to learn and use GLSL, especially for those with a Processing-only background. The major challenges are:

  • The big picture. When you start using GLSL, you just don’t get what shaders are. GLSL shaders replace very specific parts of the graphics pipeline. This gives a great deal of flexibility and opens up a world of possibilities, but in most cases you can’t translate a ‘regular’ program to a shader one-to-one. It’s very important to get a basic grasp of the graphics pipeline and the role of shaders within it.
  • The syntax. GLSL is based on the syntax of the C programming language, which most Processing users won’t be familiar with. There is no way around it: if you want to use shaders, you will need to learn the syntax. The good news is that the number of built-in functions and types is relatively limited and they are fairly straightforward. A first piece of advice: download a quick reference card for the GLSL version you are going to be working with.
  • The Black Box of Perfection. GLSL is very unforgiving when it comes to errors. One little typo is all it takes to turn a perfect and smooth-running shader into a static black screen. Furthermore, since shaders run on the GPU, there is little to no feedback when things go south. In contrast, Processing shields users even from the try-catch error handling of Java, and it often specifically points out errors in the code. In this area GLSL and Processing are really worlds apart. With GLSL, iterative development is the way to go: run and test your code as often as possible to make sure things are still working, otherwise bug-tracking is going to be tough.

Fortunately there are many helpful resources out there. To quickly get a first grasp of the big picture, I recommend checking out this tutorial. For those that are into more modern capabilities such as geometry and tessellation shaders, there is an updated version of the tutorial, which you can find here. To get a more complete and in-depth overview of GLSL there is the essential orange book: OpenGL Shading Language (3rd Edition) by Randi J. Rost & Bill Licea-Kane. This book quasi-assumes knowledge of OpenGL and C++ (making it a challenging read for Processing users!), but it is currently the number one book on OpenGL shading. Another important resource is the set of examples that comes with the GLGraphics library. There are different ways to use shaders through GLGraphics, for example via the respective GLSLShader, GLTextureFilter or GLModelEffect classes. The information on the Processing forums with regard to shaders is so far still quite limited, but it’s slowly increasing. On the web in general, however, there is actually quite a lot of information on GLSL to be found, and many open-source shaders have been made available for download. So googling something with the search term “GLSL” included will often give useful results, for example from Stackoverflow or Gamedev.net.

The thing with GLSL is that there is no shortcut. Unlike with many other Processing examples, just copy-pasting some code and running it won’t help you much when it comes to learning the language. You’ll really have to dig in deep and invest the time yourself.

All right, fast forward to my most recent finished project. Where ‘finished’ is a somewhat relative term, since all experiments are ongoing! :D Proudly I present… the showcase video (after which I will explain everything).

The basic shader that’s running this is actually quite simple. I’m just doing it the old-fashioned, brute-force way, which means I’m calculating the total blobness of each fragment based on the distance to all the existing metaball center points. I am fully aware that this is not the most efficient way possible; for now I just wanted to take this method as far as I could. Most of the time was spent adding and tweaking features for the physics system and the color controls (which both run on the CPU). One of the most important improvements with regard to the shader itself came when I changed my data transfer method from arrays to textures, where by data transfer I mean the transfer of information from CPU to GPU. In my current code a total of two textures transfer all the particle positions (including each particle’s radius) and the whole color palette to the GPU. This makes the code on both sides much more concise and flexible. It’s also better for my desktop, which tends to turn black when it has to deal with too many variables. The code now runs smoothly on both my desktop’s old ATI Radeon graphics card and my laptop’s new NVIDIA graphics card.
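To make the brute-force idea concrete, here is a hedged CPU-side sketch in plain Java of the per-fragment calculation. The names and the {x, y, radius} layout are my own for illustration; in the actual application this sum runs per pixel inside the fragment shader.

```java
public class Blobness {
    // Accumulate radius / distance over all metaball centers for one pixel.
    public static float blobness(float px, float py, float[][] balls) {
        float sum = 0f;
        for (float[] b : balls) { // each ball is {x, y, radius}
            float dx = px - b[0], dy = py - b[1];
            float d = (float) Math.sqrt(dx * dx + dy * dy);
            sum += b[2] / Math.max(d, 1e-6f); // guard against division by zero
        }
        return sum; // clamping this sum later selects a color from the palette
    }
}
```

Because every pixel visits every metaball, the cost grows linearly with the particle count, which is exactly why this approach eventually hits a wall.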

Apart from Processing, three contributed libraries were used in the making of this. First, the GLGraphics library by Andres Colubri for handling the GLSL shaders. Second, the Toxiclibs physics and color libraries by Karsten Schmidt for the particle physics system and the different coloring options and strategies. Third, the controlP5 library for the graphical user interface. The main customizable features of this sketch/application are:

  • Runs Full HD 1920×1080 with on-screen GUI
  • Runs up to 100 metaballs in real-time, but capable of handling many more (tested up to 5000)
  • Physics system that includes gravity, drag and world boundaries
  • Particle behavior with regard to radius, strength and jitter
  • Attractors that interact with the particles and can be controlled through intuitive mouse controls
  • Metaballs appearance settings for the minimum and maximum radius
  • Extensive color settings such as number of colors, stepsize, manual color selection and thresholds
  • Three color modes: gradient, color theories and lerp
  • Three color sort methods: HSV, RGB, CMYK
  • All the Toxiclibs color strategies such as complement, analogous, monochrome and others
  • Interactive text mode with real-time text input, interpolation and shader-based blur settings
  • Automatic randomization for colors, color settings and physics settings
  • Saving UHQ output (up to 16,000 pixels; see the GLSL Flickr set for some examples)
  • Saving continuous image sequences with or without GUI

All in all I’ve learned a lot while working on my first shaders. The most important epiphany probably comes at the start, when things actually work for the first time. From there on it’s just incremental development. Apart from expanding my programming range into GLSL, this whole endeavour has also broadened my view of the world beyond Processing. I’ve even installed Cinder (and started reading up on C++), but that’s something for another time! With regard to the metaball sketch, I have some ideas for further development. I’d like to change the metaball technique from brute-force calculation to ‘gradient blending’. The latter is image-based, so the number of metaballs won’t matter, which makes it much more efficient. I’d also like to investigate an alternative option with regard to movement, namely a flowfield instead of a physics-based particle system. These were obviously only my first steps with shaders, as I’ve only really experimented with fragment shaders so far. I’d like to move to 3D and expand my experiments to vertex/geometry shaders. This is only the beginning!

Comments
21 Responses to “Playing with GLSL in Processing”
  1. Hey Amnon, Do you have any code that you can share? I’ve just started learning GLSL too and I’m trying to figure out how to do a simple metaball 2D shader. I’d be interested in at least understanding the general flow of making a shader using the techniques you described. Thanks!

    • Amnon says:

      Hi Greg,

      I copy-pasted the shader below. It’s actually really, really simple this one. It started out a bit more elaborate, but gradually I simplified it and moved things into textures, like I describe in the article. For every fragment it calculates the blobness by adding up all the distances to all the particles.

      uniform sampler2D src_tex_unit0; // color palette texture
      uniform sampler2D src_tex_unit1; // particle data texture (x, y, radius, active)
      uniform sampler2D src_tex_unit2; // offscreen texture (red channel modulates blobness)

      uniform int numColors;
      uniform int MAX_PARTICLES;
      uniform int NUM_PARTICLES;

      float fmp = float(MAX_PARTICLES);

      // sum up the blobness contribution of all particles for this fragment
      float sumXY() {
          float sum = 0.0;
          for (int i = 0; i < NUM_PARTICLES; i++) {
              // sample the texel holding particle i's data
              vec2 coordinate = vec2((float(i) + 0.5) / fmp, 0.5);
              vec4 p = texture2D(src_tex_unit1, coordinate);
              sum += p.a * p.z / distance(p.xy, gl_FragCoord.xy);
          }
          return sum;
      }

      void main() {
          float s = sumXY();
          vec4 offscreen_color = texture2D(src_tex_unit2, gl_TexCoord[2].st);
          s *= smoothstep(0.0, 1.0, offscreen_color.r);
          // map the clamped blobness onto an index into the palette texture
          int colorSelect = int((float(numColors) - 1.0) * clamp(s, 0.0, 1.0));
          vec2 coordinate = vec2((float(colorSelect) + 0.5) / float(numColors), 0.5);
          gl_FragColor = texture2D(src_tex_unit0, coordinate);
      }
      
      • Guy Baston says:

        Hi,

        Could you please show how to write data into the textures?

        Thanks
        Guy Baston

      • Teis says:

        Very beautiful!
        What information is stored in the p.a and p.z component of the particle? Mass and something else?

      • Amnon says:

        The p.z component contains the metaBallRadius value (which you could indeed see as the mass) for each individual particle. As you can see in the process video, the application allows the user to set the range for this and it shows the impact of changing these settings.

        The p.a component is a failsafe that is either 1.0 for active particles or 0.0 for non-active particles to make double-sure that they won’t impact the calculation. In most cases, the NUM_PARTICLES variable will also take care of this, but on some video cards (such as my old Radeon I believe) you need a fixed number inside the shader, because a uniform int within the for loop will slow down the shader drastically.
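To illustrate the texel layout described above, here is a hedged CPU-side sketch in plain Java of how the particle data could be packed before being uploaded as a float texture. The helper is hypothetical; the actual upload goes through the GLGraphics texture classes and is not shown.

```java
public class ParticleTexture {
    // Pack each particle {x, y, radius} into one RGBA texel:
    // r = x, g = y, b = radius, a = active flag (the failsafe).
    public static float[] pack(float[][] particles, int maxParticles) {
        float[] texels = new float[maxParticles * 4];
        for (int i = 0; i < particles.length; i++) {
            texels[i * 4]     = particles[i][0];
            texels[i * 4 + 1] = particles[i][1];
            texels[i * 4 + 2] = particles[i][2];
            texels[i * 4 + 3] = 1f; // active; unused slots stay 0 and add nothing
        }
        return texels;
    }
}
```

Because inactive slots keep an alpha of 0, the `p.a *` factor in the shader zeroes out their contribution even if the loop bound overshoots.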

      • Teis says:

        ok – thanks for explaining this,

  2. maweibezahn says:

    Hi, thanks for the tutorial, this stuff looks really great. I am not really a math guy and have problems with the very first step, which does not even have anything to do with the shaders: how do you implement the warping? I am a bit confused because I do not know if the fbm function alters the color of a given position, or the position itself. If the latter is the case, one must use a source image, right?
    Thanks!

    • Amnon says:

      fbm is just a way to make noise more interesting by using multiple layers of noise on top of each other. See this page: http://code.google.com/p/fractalterraingeneration/wiki/Fractional_Brownian_Motion
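To show what that layering looks like in code, here is a minimal, hedged sketch in plain Java. The noise2() function is just a smooth placeholder standing in for Processing's noise(); it is not real Perlin noise, only something continuous to layer.

```java
public class Fbm {
    // Placeholder smooth function in [0, 1]; NOT real Perlin noise.
    static float noise2(float x, float y) {
        return 0.5f + 0.5f * (float) Math.sin(x * 1.7f + (float) Math.cos(y * 2.3f));
    }

    // Classic fBm: each octave doubles the frequency and halves the amplitude.
    public static float fbm(float x, float y, int octaves) {
        float sum = 0f, amp = 0.5f, freq = 1f;
        for (int i = 0; i < octaves; i++) {
            sum += amp * noise2(x * freq, y * freq);
            freq *= 2f;
            amp *= 0.5f;
        }
        return sum;
    }
}
```

Swapping noise2() for a proper noise implementation (or Processing's own noise()) gives the familiar cloudy fBm look; the octave loop is the essential part.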

      • maweibezahn says:

        Amnon, thank you for this link! My problem was that I searched for a way to set the amplitude and frequency somewhere else, with something like “noiseDetail()” :)

        I still have some problems porting Iñigo Quilez’ code to Processing, maybe it has something to do with the vector math… Whenever he uses simple operators, I must resort to the PVector methods for adding and multiplying. So the first domain warping looks like this:


        float pattern( PVector p ) {
          PVector q = new PVector( fbm( PVector.add( p, new PVector(0.0, 0.0) ) ),
                                   fbm( PVector.add( p, new PVector(5.2, 1.3) ) ) );
          q.mult(4.0);
          p.add(q);
          return fbm(p);
        }

        But the result is just a blurry noise image and does not look similar to the picture he shows.
        My fbm method is ported from the link you gave me. I played around with the different values, but had no success.

        Would you mind looking at the code above and give me a hint, what could be the problem? I would really appreciate it!

        Thank you! :)

      • Amnon says:

        The code you posted is fine. It is only the first domain warp, but it should work.

        Is your PVector p at least 2-dimensional (x, y)?
        You could scale the PVector p by a factor of 0.01 before sending it to the pattern() method.
        Does the fbm() method return at least 2-dimensional noise?

      • maweibezahn says:

        Yes, both the PVector and the fbm noise are 2d. Here is the complete code, if you want to check it: http://weibezahn.com/dlstuff/fbm.txt
        I played around with all values, also used the scaling you recommended, but with no success. I just can’t see what I am doing wrong…

      • Amnon says:

        The thing is that noise() in Processing is not really Perlin noise. The noise code was actually taken from the 64k demo Art (2001) by the German demo group farbrausch. Not kidding. Anyhow, a beneficial side effect of this quasi-bug is that you can take a shortcut.

        Add the following line to the top of the pattern() method:

        p.mult(0.01);

        Then use the following fbm() method instead:

        float fbm( PVector p ) {
          return noise(p.x, p.y, p.z);
        }

        The downside of course is that it’s not pure fbm nor reproducible in other languages. The upside is that it makes pretty pictures with less code.

      • maweibezahn says:

        Cool, thank you! I have made those changes, and they work out well :) I should have been sceptical, because one can set the octaves in the Processing noise implementation, which makes it clear that it is not a simple noise function.
        With your help I arrived at the (moving) pretty pictures now and am trying to create a nice coloring. But as you experienced, it is dead slow. I made some experiments in C++ too, but walking through all the pixels and applying noise multiple times is slow there as well; with a simple fbm I get 2 fps or so in openFrameworks. I think I will also look into GLSL for that! :)
        Thanks again for the help!

  3. 60rpm says:

    Hi Amnon!
    Inspired by this post I researched a little on noise based warping. I did this explorer, trying to understand the effect of each parameter of the technique into the final result. I tried with standard Perlin and Simplex:
    http://www.openprocessing.org/sketch/68369
    Maybe you find it interesting.

    Thanks, btw. Your research is always a source of inspiration. :-)

    • Amnon says:

      Very cool! Thanks for sharing. I’m sure your code will be useful for others that acquire an interest in this topic. As shaders become part of Processing 2.0 it will probably become easier to use them for calculations like these, which will in turn facilitate even more interactive experimentation. Good times. :-)

  4. imaginaryhuman says:

    How are you storing enough precision in the particle coordinates texture? Is it just an RGBA 32-bit (1 byte per component) texture, so you only have a coordinate range of 0..255? … because in your video it looks like particles can contribute across a much larger span of coordinates and yet still have accurate per-pixel precision. Or are you using a floating point texture format?

    • Amnon says:

      Yes, a floating point texture, because I was having exactly the problem you describe, which was causing jittery movement. With a floating point texture the movement is smooth.
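The difference is easy to quantify. Here is a hedged sketch in plain Java of the round-trip error you get when a screen coordinate is squeezed into a single 8-bit channel (the 0..1920 range is an assumption for illustration):

```java
public class TexturePrecision {
    // Encode a coordinate into one 8-bit channel and decode it again.
    public static float roundTrip8bit(float coord, float range) {
        int b = Math.round(coord / range * 255f); // quantize to 0..255
        return b / 255f * range;                  // back to screen space
    }
}
```

With a 1920-pixel range, each of the 256 levels spans about 7.5 pixels, so positions snap in visible steps; a 32-bit float channel stores the coordinate essentially exactly, which is why the jitter disappears.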

  5. Hi man! I was giving a try to GLSL + Processing this holidays and I made this: https://vimeo.com/56333207#at=0

    I pass all my data using arrays because I can’t find enough information about the floating point texture procedure. Do you have any resources to understand that process better?

    Thanks in advance & best regards.

Trackbacks
Check out what others are saying...
  1. [...] Examples written in the OpenGL shader language and modified/implemented using Processing by Amnon Owed [...]


