This is the breaking news that crashed into our mailboxes yesterday. The PC 64kB Intro we released last year at Revision, F – Felix’s workshop, has been selected to be shown at SIGGRAPH 2013 as part of the Real-Time Live! demoscene reel event.

Unfortunately we won’t be able to attend SIGGRAPH this year, but having our work shown there is awesome news nonetheless.

I don’t know if this is going to become some sort of tradition for us, but as a matter of fact, we have attended every Easter party since the creation of our group. This year was no exception, and we had a really great time at Revision.

Revision is the kind of party that is just big enough that even though at some point you think “OK, I’ve met pretty much everyone I wanted to”, when you get home you realize how many people you wanted to meet and did not. It’s also the kind of party that is so massively awesome that when you get back to your normal life, you experience some sort of post-party depression on top of the exhaustion, and you have to be prepared for when it strikes.

Sidrip Alliance performing at Revision

So we’ve been there, and this year we presented the result of the last months of work in the PC 64k competition. The discussion of the concept started back in May 2011, and we seriously started working on it maybe around August.

As Revision was approaching, rumors were getting stronger about who would enter the competition, how serious they were about it, and how likely they were to finish in time. It became very clear that the competition was going to be very interesting, but even so, it completely exceeded expectations. It even got mentioned on Slashdot!

Our intro, F – Felix’s workshop, ended up in 2nd place, behind Approximate’s gorgeous hypno-strawberries, Gaia Machina. The feedback has been very warm, during the competition as well as afterwards. Also, as if that was not enough, to our surprise our previous intro, D – Four, has been nominated for two Scene.org Awards: Most Original Concept and Public Choice. Do I need to state we’re pretty happy with so much good news? :) Thank you all!

Now that a week has already passed, we’re back to our daily lives, slowly recovering, and already thinking about what we’re going to do next. :) Until then, here is a capture of our intro:

I’ve just released Shader Minifier 1.1. You can download the binary at the usual place.

Changes

  • New output options: use --format js to generate a JavaScript file, and --format c-array to get a comma-separated list of strings (to be included in a C array).
  • Use the new option --no-renaming-list if there are identifiers you don’t want to be renamed (e.g. entry point functions in HLSL).
  • If you have #define macros, Shader Minifier will now avoid conflicts between macros and identifier renaming.
  • If your code has conditions with compile-time known values, they will get simplified (e.g. if (false), or int i_tag = 2; if (i_tag < 4) ...).
  • If there are many identifiers in your code (this probably won’t happen in a 4k intro), the Minifier will now use 2-letter names when needed. This is not fully optimized, but some people use this tool for obfuscation rather than size optimization.
  • The option --preserve-all-globals won’t rename any global variable or any function. This is useful if your shader is split between multiple files.
  • You can now tell the Minifier not to parse some blocks of code. Put your code between //[ and //] and it won’t be parsed: identifiers won’t be renamed, but spaces and comments will still be removed. This is very useful if you want to use features not yet supported by the tool (e.g. forward declarations, or layouts); see the example after this list.
  • Other fixes in the parser (macros can be used in a function block, numbers can have suffixes, etc.)
  • Unix support. Use this zip file instead of the standard release, install a recent version of Mono (at least 2.10) and run mono shader_minifier.exe. This should work.
  • Online Minifier. You can use shader minifier online, but only one shader at a time.
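As an illustration of the verbatim markers mentioned above, here is a minimal sketch; the layout declaration and the identifier names are just an assumed example, not taken from a real shader:

    //[
    layout(location = 0) out vec4 fragColor; // not parsed: name and layout kept as-is
    //]

    void main()
    {
      // The rest of the file is minified as usual.
      fragColor = vec4(1.0);
    }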

Who is using it?

Over the last year, many great intros have used it successfully.

  • Another Theory, by FRequency (#1 at Main 2010)
  • white one by Never (#1 at the Ultimate Meeting 2010)
  • anglerfish by Cubicle (#1 at Assembly 2011)
  • RED by BluFlame (#1 at Evoke 2011)
  • akiko by flopi (#2 at Riverwash 2011)
  • fr-071: sunr4y by farbrausch (#1 at Sunrise 2011)

Every year, we enjoy going to the German demoparty at Easter. It’s always a great party, where we meet our friends again. However, this year we were invited to the Scene.org Awards ceremony in Norway, and it was hard to choose between the two. We eventually decided to follow Truck’s suggestion and go to both (thanks again Truck, you’ve been extremely helpful). We took ten days of holiday and visited Norway at the same time (Bergen, Oslo, fjords, etc.). Everything was awesome (and very expensive!).

The Gathering is a good party: it is huge and well organized. There are many compos, seminars, and a separate area for sceners. Alcohol is not allowed, but right next to it there is Easter Garden, a small and cosy place for sceners. Having both parties is a great idea, as they complement each other. The main problem with The Gathering is that the screen and the sound system are not big enough to cover the whole huge place, so many people (gamers) end up ignoring the compo shows.

Scene.org awards

Fairlight statues in front of our nominee certificate.

I’ve been very happy with the Scene.org Award organisation. The team has done a wonderful job, both for the show and the after party, thank you very much for this!

Revision was a great party, with almost the same mood as Breakpoint. Beer was cheap (especially when you come from Norway), and the organizers were friendly. Revision was also the occasion for Zavie and Cyborg Jeff to release D – Four (ranked 3rd out of 11).

Everyone should go to a demoparty at Easter.

We have been told that B – Incubation was nominated in the Breakthrough Performance category of the Scene.org Awards 2010. Needless to say this was awesome news that made our day (and probably the upcoming ones too; being nominated certainly does not mean actually winning the award, but it’s still quite something).

Whoever played a role in this: thank you very much!

As a side note, I’m pretty happy to see that most of the productions I hoped would be nominated were indeed nominated. :-)

Since the last release, the number of users of GLSL Minifier has increased, while the number of bug reports has decreased. I think that’s a good reason to move to version 1.0. This obfuscator has been used in a few great intros, such as Another Theory (winner at Main) and White one (winner at tUM). Hopefully we will see many intros using it at Revision.

The main feature of this release is support for HLSL obfuscation, so we’ll now call this tool “Shader Minifier”. I got pretty good results with it, and I’ve been able to reduce the size of some famous 4k intros. The renaming strategy has also been improved a bit; detailed statistics will come later.

Try it now, download Shader Minifier!

Changes
  • HLSL support. Please use the --hlsl flag.
  • in/out qualifiers on global values now behave like uniform (requested by several people): you can choose to preserve them.
  • Information on console output is removed unless you use -v (verbose).
  • Spaces in macros are now stripped (thanks to @lx!).
  • New flag --no-renaming to disable renaming entirely.
  • Various fixes and improvements.

Our latest production, E – Departure, is completely music driven and features some fast camera movements. To make those visually more interesting, we use a motion blur effect we are fairly happy with. This article explains how this motion blur post-processing effect is achieved. The shader language used in the code samples is GLSL.

E - Departure

Motion blur in real life

A photo with motion blur

First, let’s take a moment to recall what motion blur physically is. When a photo or a video is shot with a camera, the exposure time controls how long the shutter remains open. The longer it remains open, the more light is caught by the sensor, thus the brighter the image gets, and the more blurred moving elements become. This can be seen as a drawback (getting crisp images in a dark environment becomes more difficult) or as a desired effect (since motion blur gives a feeling of speed).

While the shutter is open, a given amount of incoming light always contributes the same amount of light to the sensor: a light trail should therefore have a more or less constant brightness. I feel this point needs to be stated, since some people confuse it with the artistic technique of giving more contrast at the base of the trail than at the tail. In photography such an effect can be achieved by firing a flash right before the shutter closes (a technique known as rear curtain sync), but this is not the kind of motion effect I am referring to in this article.

Velocity map based motion blur

I am not an expert, but I’d say there are mainly two ways of achieving motion blur: accumulative based and velocity map based.

Accumulative motion blur consists in imitating the physical effect by blending together snapshots taken over the duration of the simulated exposure. Doing so is obviously very expensive, since for each final frame the geometry has to be processed and shaded several times, but it is also pretty close to real motion blur.
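As a rough sketch of the blending step only (assuming the scene has already been rendered into a few snapshot textures spread over the exposure; the uniform names are mine, and the real cost lies in rendering the scene once per snapshot):

    // Hypothetical accumulation pass: average snapshots of the scene taken at
    // several instants during the simulated exposure.
    uniform sampler2D snapshot0;
    uniform sampler2D snapshot1;
    uniform sampler2D snapshot2;
    uniform sampler2D snapshot3;

    void main()
    {
      vec2 uv = gl_TexCoord[0].xy;
      vec3 c = texture2D(snapshot0, uv).rgb
             + texture2D(snapshot1, uv).rgb
             + texture2D(snapshot2, uv).rgb
             + texture2D(snapshot3, uv).rgb;
      gl_FragColor = vec4(c / 4., 1.);
    }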

The velocity map based motion blur consists in rendering one image of the scene, no different from the regular rendering one would have without blur, and to make every point of this image bleed in some direction according to its speed. To do so, the speed of each point is computed and stored in a buffer: the velocity map. This is the way we chose.

Making three educated steps away from reality

Bleeding is not motion blur

At this point of the description, the velocity map technique can already be expected to give a result that does not match reality. The reason is simply that having object colors bleed over each other is not equivalent to the accumulation of light during an exposure. Let’s give a simple example to illustrate this.

Viewer's point of view

Imagine we have a black background, a static red sphere, and a moving white sphere passing in front of the red one. If the snapshot used for the final image is taken at the moment the white sphere completely occludes the red one from the viewer’s point of view, the red sphere won’t contribute at all to the final image. With a camera taking a photo, there may be a part of the exposure during which the red ball is still visible, hence contributing to the amount of light caught by the sensor. Accumulative motion blur will reproduce this effect, while the velocity map approach will not.

Expected resulting images

We know this is not accurate, but we are trading accuracy for speed, and as long as we don’t forget this and accept the resulting quality, this is OK. After all, what we are interested in here is a good looking result more than a physically correct one.

Linear motion versus arbitrary motion

Now another inaccuracy lies hidden in the velocity map technique description: the fact that we will make colors bleed in the direction of the speed. The wording suggests that the blur for a given point will always be linear, and this is indeed what will be assumed, since it is way easier to store the representation of a linear move than of any other kind of move. But it then means that points moving along a circle, like with a rotation, will have a linear trail instead of a curved one. This approximation works as long as the amount of blur remains very limited compared to the path. But with elements making fast enough movements, like small circular moves, this will not work any more and glitches will become noticeable. Should this be a problem, I would suggest storing an instant center of rotation instead, for example.

As long as we are using linear operations, we can also avoid computing the speed for each point, and instead just compute it for each vertex and interpolate between them. The benefit of doing so is obvious: the speed will be a varying value computed in the vertex shader, and available thereafter in the fragment shader.

Scatter versus gather

At last, let’s get even farther from reality for a technical reason, still trading accuracy for speed. As explained, we will have two images: a snapshot of the scene, and an image representing the speed of each point. Ideally, we would like to have each moving point bleed over its neighbors according to its speed. Unfortunately today’s GPUs are designed so that they can easily gather, but cannot scatter information: in a fragment shader, you can retrieve data from other fragments, but you cannot modify other fragments. So if you want one fragment to affect another one, you have to do the work from the affected fragment and read the source fragment from there. And if you don’t know in advance which fragments are affected, which is the case here, for a given fragment you have to check all neighboring fragments and gather the ones affecting it. If a fragment might affect fragments up to n pixels away, this means having to check an n×n neighborhood, thus O(n²) fragments (for more on this topic, see chapter 32 of the book GPU Gems 2). This quickly becomes very expensive.

So what we will do here is consider the motion blur to be homogeneous enough that we can trade the scattering we would like to do for a simple gather operation. Instead of making a fragment bleed over its neighbors, we will dilute it among the other fragments. This is a strong assumption that will prove false most of the time. But fortunately, the average result will still look nice, even though some obvious artifacts will appear here and there.

Computing the velocity map

Let’s now get to the implementation part. I am assuming here that we have some FBOs (Frame Buffer Objects) ready, in order to have efficient render to texture, or the equivalent if the API is not OpenGL. I won’t go into further detail about this, since it is quite a different topic from what is being discussed here. If you don’t know how to set up and use FBOs, at least you now have the keywords and can Google them.

We will need a couple of things to build our velocity map. First, we will need a separate buffer to store it. With OpenGL this is done with glDrawBuffers(), which lets us tell which buffers we are going to write into. By default there is only one such buffer, but we can have several of them; here we just need one additional buffer. Second, we will also need a way to compute the speed of the vertices. The motion blur is a screen space effect, so all we need is the speed in screen space.

During a typical forward rendering pass, each object will have some kind of matrix, stack of matrices, or any equivalent means to compute its position in world coordinates. The camera is likely to have its own set of matrices, to transform points from world space to camera space, then from camera space to screen space (in OpenGL this last transformation is typically defined by the matrix referred to by GL_PROJECTION_MATRIX). The position of a vertex at a given time t is therefore multiplied by each of those matrices to get the final position in screen space. To know the speed of such a vertex in screen space, we just need to know where it was at time t – dt. If dt is exactly the duration of the exposure, the two positions will define the limits of the motion during that time. This is exactly what we want.

So basically we need a way to retrieve the transformation at t – dt while rendering at t. If you are playing back an animation, this shouldn’t be too difficult: you just have to query your system for t – dt. If your animation is ad hoc, like with some real-time interaction, then you may store the previous transformations. Anyway, as long as you have both the previous object transformation and the previous camera transformation, you have everything you need.

In a minimalistic vertex shader, one would have something like the following:

    void main()
    {
      gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; // equivalent to ftransform(), which is a deprecated function
    }

For each object we will feed the shading pipeline with one additional piece of information: the transformation matrix at t – dt. I considered that the projection matrix was unlikely to change fast enough to have any visible consequence on the motion blur, so I decided not to store it, and to use the current one for both the old and the current position. It is up to you to choose your policy, but you have to be consistent when computing the speed in the end. Anyway, this leads to the following new vertex shader:

    uniform mat4 oldTransformation;
    varying vec2 speed;

    vec2 getSpeed()
    {
      vec4 oldScreenCoord = gl_ProjectionMatrix * oldTransformation * gl_Vertex;
      vec4 newScreenCoord = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
      vec2 v = newScreenCoord.xy / newScreenCoord.w - oldScreenCoord.xy / oldScreenCoord.w;
      return v;
    }

    void main()
    {
      gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; // No change here
      speed = getSpeed();
    }

You may notice that the current position is computed twice (gl_Position and newScreenCoord). I left it this way to make the code easier to read, since the shader compiler will most likely optimize it anyway. Also, you have to be careful with the homogeneous coordinates: note the division by the w component.

From now on let’s retrieve the interpolated speed in the fragment shader and finally store it in the velocity map. A typical fragment shader would look like this:

    void main()
    {
      gl_FragColor = /* whatever */;
    }

We are simply modifying it the following way:

    varying vec2 speed;

    vec3 getSpeedColor()
    {
      return vec3(0.5 + 0.5 * speed, 0.);
    }

    void main()
    {
      gl_FragData[0] = /* whatever the fragment color was */;
      gl_FragData[1] = vec4(getSpeedColor(), 1.); // gl_FragData expects a vec4
    }

This is it. At this point we have a velocity map where color represents the motion of each point in screen space. So far so good.

Update: actually, so far, not so good. The above code contains a bug that I didn’t notice at the time, because we rarely met the conditions that make it visible. But it has been bugging me since then, as we have been working on a scene that exhibits it pretty badly.

When polygons get clipped, this may lead to broken speed values, resulting in annoying, strong blur artifacts. The reason is the non-linear operation of dividing by the w component in the vertex shader, which gives wrong interpolated values after clipping.

To solve this, I suggest not dividing in the vertex shader, but instead passing the w component to the fragment shader and doing the division only there.
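Here is a minimal sketch of that fix, assuming the shaders shown above; the varying names are mine:

    // Vertex shader: keep the clip space coordinates undivided.
    uniform mat4 oldTransformation;
    varying vec4 oldClipCoord;
    varying vec4 newClipCoord;

    void main()
    {
      oldClipCoord = gl_ProjectionMatrix * oldTransformation * gl_Vertex;
      newClipCoord = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
      gl_Position = newClipCoord;
    }

    // Fragment shader: divide by w only now, after clipping and interpolation.
    varying vec4 oldClipCoord;
    varying vec4 newClipCoord;

    vec3 getSpeedColor()
    {
      vec2 speed = newClipCoord.xy / newClipCoord.w - oldClipCoord.xy / oldClipCoord.w;
      return vec3(0.5 + 0.5 * speed, 0.);
    }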

Using the velocity map and applying the blur

Once the color buffer and the velocity map are filled, the post-processing pass uses both to generate the motion blurred image. This is done very simply: while in a very minimal blitting shader you would have something like the following,

    uniform sampler2D colorBuffer;

    void main()
    {
      gl_FragColor = texture2D(colorBuffer, gl_TexCoord[0].xy);
    }

here it becomes

    uniform sampler2D colorBuffer;
    uniform sampler2D velocityMap;

    vec4 motionBlur(sampler2D color, sampler2D motion, vec2 uv, float intensity)
    {
      vec2 speed = 2. * texture2D(motion, uv).rg - 1.;
      vec2 offset = intensity * speed;
      vec3 c = vec3(0.);

      float inc = 0.1;
      float weight = 0.;
      for (float i = 0.; i <= 1.; i += inc)
      {
        c += texture2D(color, uv + i * offset).rgb;
        weight += 1.;
      }
      c /= weight;
      return vec4(c, 1.);
    }

    void main()
    {
      gl_FragColor = motionBlur(colorBuffer, velocityMap, gl_TexCoord[0].xy, 0.5);
    }

In this example, the inc value defines a motion blur with about ten fetches. That proved to be sufficient in our case, and even as few as 8 fetches could be enough for moderate blur. With 20 fetches it becomes really hard to notice artifacts, but of course it is also slower.

The function argument intensity controls the length of the motion trails. You can set it to be consistent with your rendering speed, or low for a fainter effect, or high for an exaggerated effect.

Dealing with precision matters

At this point, I already had a nice looking motion blur, but I noticed it tended to introduce instability at low speeds. This is because a low speed means a short velocity vector, which, being stored in two low-precision integer channels, loses precision both in magnitude and in direction. To overcome this issue, I decided to represent the velocity vector differently.

First, instead of storing its components, I stored the components of the normalized velocity vector on one side, and the norm on the other side. This dealt with the direction problem.

Second, I applied the very same trick gamma correction relies on: using a power function to increase the precision of low values, at the cost of precision for high values. The norm is then stored in non-linear space, and transformed back when used. Since varyings are interpolated linearly, we have no choice but to stay in linear space in the vertex shader, and switch to non-linear space only in the fragment shader.

So in the vertex shader, once we have our per vertex speed vector we store it this way:

    varying vec3 speed;

    vec3 getSpeed()
    {
      vec2 v = /* … */;
      float norm = length(v);
      return vec3(normalize(v), norm);
    }

While in the fragment shader the color we write becomes:

    varying vec3 speed;

    vec3 getSpeedColor()
    {
      return vec3(0.5 + 0.5 * speed.xy, pow(speed.z, 0.5));
    }

During the postprocessing pass, the speed is then read this way:

    vec3 speedInfo = texture2D(motion, uv).rgb;
    vec2 speed = (2. * speedInfo.xy - 1.) * pow(speedInfo.z, 2.);

Update: again, for the reason mentioned above, this code has to be changed so that the norm is computed in the fragment shader. I am too lazy to fix the code here, so this is left as an exercise to the reader. ;-)

Bugs and limitations

Motion blur artifacts

As said before, each trade-off introduces inaccuracies that result in visual artifacts. The most important one is obviously the scatter versus gather trade-off. You can see its visual cost here: notice the ghosting effect visible on the edges of some elements.

Another problem is how to handle the edges of the image. When an object is moving near the border, the algorithm may try to fetch colors outside the image. In that case, I see three solutions: leaving it as is if this is fine in your case (it might simply not happen often enough to be a problem), clamping the fetch to the border (this is what is done in E – Departure; notice how it affects the right side of the sample screenshot), or rendering a slightly bigger image so there is a thin band outside the displayed frame from which colors can still be fetched.
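For the clamping option, a sketch of the change in the blur loop shown earlier could be as simple as the line below; this is my own rewrite, not the exact code used in E – Departure:

    // Clamp the fetch coordinates so samples near the border stay inside the image.
    c += texture2D(color, clamp(uv + i * offset, vec2(0.), vec2(1.))).rgb;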

Those problems though are difficult to notice at low speed, and will appear only briefly at higher speed. So for E – Departure, this was an acceptable trade off.

Conclusion

I have presented here a technique that is fairly simple to implement, gives a visually pleasing effect, and noticeably increases the realism of the image at a limited cost. It played an important role in our demo; I hope you will find a good use for it too.

Twenty-four hours after getting back from tUM 2010, I have almost completely recovered from the party coding, the sleep deprivation, the walking in the snow, and the two-hour-late train rides both there and back. As planned, our new 64kB PC intro, E – Departure, was presented there (sadly there was no other 64kB entry, so it was shown in a combined competition with PC demos).

What first drew my attention to this party was the website: carefully polished, with a beautiful layout, loads of useful and detailed information, random photos and links to productions from previous years, etc. Details like the number of different music competitions (executable, tracked, streamed, loop: even though I am not into music myself, I think this is great), the fact that tracked music would be shown with a tracker rendering, and the rule preventing namevoting, all stacked together, left no doubt it was a carefully prepared event. Moreover, the so-called Ultimate Breakfast and the free coffee and tea gave the feeling it would be pretty cozy.

Once there, all of this proved to be true. Flawless German organization. The intranet had an embedded Google Map with everything a scener might possibly need, from food and cash machines to computer stores, and featured a food delivery service: the so-called Foodwave. No need to get lost outside looking for some hypothetical pizza; one just had to choose from the broad selection, pay at the food desk, and get back to coding or to whatever other activity the party provided that was more interesting than braving the cold, until the food delivery was announced. A truly great service.

fr-063: Magellan, by Farbrausch

The party place itself, within walking distance of the train station, is a very nice hall with nice looking iron architecture inside. The wooden floor, although it makes stomps more disturbing during the night, contributes to making it a very warm environment. The work done on the lighting and the theme is nice too: this year was the 10th edition, and various artifacts from previous years were exhibited here and there, as well as some printed graphics productions. Those details and the rather small, but not cramped, size of the place create a cozy feeling, yet not a mediocre party: there are great people there, and the competitions have a fair level even though the number of entries could be way higher. The real downside of the place though, and maybe the biggest downside of the party altogether, is the absence of showers, and the absence of hot water in the toilets. Washing as little as hands and face with cold water during a German winter is probably among the rites of passage into adulthood, along with explaining to Scamp that you’re launching a demoparty in Norway called The Breakpoint Replacement.

Something I absolutely loved there was the job Franky did by bringing one of his pinball machines. A good old late-80’s blinking and blipping pinball! It became a hot spot, where people gathered and chatted while playing when there was nothing to see on screen. I doubt I would have met as many sceners otherwise. The Ultimate Meat Thing was a very nice job done by the Nuance folks, even though a bonfire in the snow isn’t exactly as much fun as one during summer. The Ultimate Breakfast, on the other hand, was pretty cool, as a way to wake up and all have a meal together while waiting for the first events.

Rumors and Facts, by Rebels

I won’t list all the competitions, but the main productions were without a doubt the new Farbrausch demo, fr-063: Magellan, and the much awaited Easter party invitation, Rumors and Facts, by Rebels. The 4k intro White One, by Never, also won over the audience with its delicate feedback effect. Another production that blew me away was the epic streamed music entry, End Credits, by jco.

Finally, one thing that disappointed me (apart from how we ranked) was the way the party feeling suddenly vanished once the competitions ended. While Breakpoint, Evoke and Main all had a live show to end with, here nothing happened and many people left, leaving the place with a weird feeling. It’s not that there weren’t any live acts; there was even one right before the demo competition, but it felt like the party was missing a proper finale.

So as a conclusion, the Ultimate Meeting is definitely a good party, with the nice feeling brought by its small size and perfect organization. But comparing it to another German party, I preferred Evoke 2010 (which doesn’t have this family feeling though).

Here is a sketch made about a month ago, after Cyborg Jeff, our dear composer, let us hear the first elements of his soundtrack. This was the base of our upcoming production, a 64kB PC intro codenamed E. Our first demo, B – Incubation, received positive feedback, but was criticized design-wise. E is music driven and focused on the visual experience; I hope it will succeed on the points the previous demo missed.

There is still work to do, but I am pretty confident we will be able to release it at the Ultimate Meeting, which takes place in Karlsruhe, Germany, starting tomorrow. I do not guarantee the fulfillment of this assertion though, should any supervening circumstances amounting to force majeure arise. Such circumstances shall include, but shall not be limited to, strike, snow storm, riots, train hitting a truck, hard drive toast, power plug left at home, earthquake, war or acts of God.

See you at tUM!

Update: neither the snow and the resulting almost three-hour delay on the train schedule, nor the very addictive pinball game could prevent the release of E as planned. :-)

The new version of our tool is released! Here is the changelog:

  • Allow forward declarations in the input code and remove them (functions are automatically reordered). Please use the syntax “int foo(int x)” and not “int foo(int)”.
  • More intelligent renaming, based on the context in which each variable is used.
  • Allow structs in source code, fields are not renamed. Field names cannot look like vec fields (.rgb, .r…) because I haven’t written the typer yet.
  • Remove the --macro-threshold option. This will be fixed in a future version.
  • As usual, several bug fixes

The most important news is the improvement of the renaming strategy. In version 0.4, the Minifier tried to reuse the same variable names again and again, increasing the frequency of a few characters. Now it’s getting more subtle: the name of a variable depends on how it is used.

For instance, if you often call the functions max and mix, you’ll often have the “x(” pattern. Thus, GLSL Minifier will probably name your function x to increase the frequency of this pattern. The same goes for every two-character pattern the tool finds.
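As a purely hypothetical illustration (this is not actual Minifier output, and the shader below is made up for the example), a helper function that is often used next to max and mix might get renamed like this:

    // Before minification:
    uniform sampler2D tex;
    varying vec2 uv;

    float luminance(vec3 c) { return dot(c, vec3(0.3, 0.59, 0.11)); }

    void main()
    {
      vec3 a = texture2D(tex, uv).rgb;
      vec3 b = texture2D(tex, uv + 0.01).rgb;
      gl_FragColor = vec4(max(luminance(a), mix(luminance(a), luminance(b), 0.5)));
    }

    // After renaming: "x(" now sits next to "max(" and "mix(", so the same
    // two-character pattern appears more often and compresses better.
    uniform sampler2D t;varying vec2 v;
    float x(vec3 c){return dot(c,vec3(.3,.59,.11));}
    void main(){vec3 a=texture2D(t,v).rgb,b=texture2D(t,v+.01).rgb;gl_FragColor=vec4(max(x(a),mix(x(a),x(b),.5)));}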

Here are some statistics I’ve just made, using shaders from 4k intros. I took a short C file, inserted the shader as a string, compiled, and compressed it using Crinkler (/COMPMODE:SLOW /ORDERTRIES:3000). So, it’s all about making the shader compress better. Numbers are the file size in bytes:

  • Retrospection
    • Original: 1462 (hand optimized)
    • Minifier 0.4: 1429
    • Minifier 0.5: 1421
  • Valleyball
    • Original: 2240 (using old BluFlame minifier)
    • Minifier 0.5: 2184
  • Another theory
    • Original: 1511 (hand optimized)
    • Minifier 0.4: 1475
    • Minifier 0.5: 1463
  • Lunaquatic
    • Minifier 0.4: 2635
    • Minifier 0.5: 2613
  • Sult
    • Minifier 0.4: 1411
    • Minifier 0.5: 1408
  • Slicesix
    • Minifier 0.4: 2493
    • Minifier 0.5: 2432

Conclusion: If you’re not using any tool to minify your GLSL shader, I bet you could save at least 20 bytes on your 4k intro. Try and see!

=> GLSL Minifier 0.5