In the previous part, we saw how textures are generated in H – Immersion. This time, we’ll have a look at another important tool for size coding: procedural geometry.
More specifically, since our rendering uses traditional polygons, we wrote a procedural mesh generator. We’ll see how, with a few well-chosen techniques, it is possible to create a variety of shapes, or at least to make the viewer believe we did.
First, Cubes
When we started making demos, the 64kB limit felt intimidating. We didn’t know anything about procedural mesh generation, and we already had a lot to do with the rendering, the camera, the textures, the story… well, with everything. So in our first demo, B – Incubation, we took the early decision to skip 3D modeling altogether. Instead, we chose to use only cubes and designed the demo around this concept.
This is an example of how a technical constraint can become a creative challenge, and force us to look for new ideas and do something unexpected. In all of our 64kB intros, the size limitation affects the design, sometimes in small and unexpected ways: we are constantly looking for tricks, code reuse, and workarounds to evade this barrier.
After this first 64kB, it was time to introduce procedural meshes at last! For F – Felix’s Workshop, we implemented some rudimentary mesh generation. The demo received good feedback, but the code is probably simpler than what many people expect.
If you pay close attention to the image below, you might notice that there are only two kinds of shapes behind all the geometry. Some elements, like the table, the shelf and the wall, are made by assembling deformed cubes. The rest have varied shapes, but are all somewhat cylindrical: we built them as surfaces of revolution.
The idea is to draw simple splines, then rotate them around an axis to create 3D models. Here is the spline we used to create pawns on a chessboard.
The numbers on the left are the 2D coordinates of a list of control points. We interpolate between the points using Catmull-Rom splines. Catmull-Rom is a nice algorithm, first published in 1974, which Iñigo Quilez details and recommends. The shape on the right is the result (after symmetry) of applying the technique to the list of points.
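For illustration, here is a minimal sketch of the interpolation step, using the classic uniform Catmull-Rom basis (a standalone example written for this article, not our actual generator code):

    // Uniform Catmull-Rom: interpolate between p1 and p2, with p0 and p3
    // as the neighboring control points; t goes from 0 to 1.
    struct vec2 { float x, y; };

    vec2 catmullRom(vec2 p0, vec2 p1, vec2 p2, vec2 p3, float t)
    {
        float t2 = t * t, t3 = t2 * t;
        float w0 = -0.5f*t3 +      t2 - 0.5f*t;
        float w1 =  1.5f*t3 - 2.5f*t2 + 1.f;
        float w2 = -1.5f*t3 + 2.f*t2  + 0.5f*t;
        float w3 =  0.5f*t3 - 0.5f*t2;
        return { w0*p0.x + w1*p1.x + w2*p2.x + w3*p3.x,
                 w0*p0.y + w1*p1.y + w2*p2.y + w3*p3.y };
    }

Sampling this function for a handful of t values per segment turns the short list of control points into a smooth profile.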
Once done, we can convert the data to 3D by creating faces along the spline. With a little variation, we can also create the other chess pieces. Here’s the final result.
How many bytes do we need for this? Not too many, especially when you reuse the technique in lots of ways throughout the demo. If we stored each number in one byte, we would need less than 40 bytes of data to represent the pawn… and this doesn’t take into account the compression step.
If you look at the source code for the chessboard, you’ll notice that we actually use a floating-point type to store these integers between 0 and 255. These 32-bit floats use 4 bytes each. Is it a waste of bytes? Not quite: as said in the previous paragraph, the program is compressed. If you check the binary representation of those floats, you’ll see they are very similar and end with a bunch of 0s. The compression tool (kkrunchy) will pack this efficiently, and it can be smaller than if we tried to be smart. Going further, delta encoding could improve compression rates, but it only becomes beneficial when there’s enough data to store. There’s more to say about floats, and we’ve touched the topic before in the article Making floating point numbers smaller.
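To make the claim concrete, here is a small standalone snippet (an illustration written for this post, not code from the demo) that prints the bit patterns of a few floats holding small integer values:

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main()
    {
        const float values[] = { 37.f, 38.f, 112.f, 255.f };
        for (float v : values) {
            uint32_t bits;
            std::memcpy(&bits, &v, sizeof bits); // inspect the IEEE representation
            std::printf("%6.1f -> 0x%08x\n", v, bits);
        }
        // Prints 0x42140000, 0x42180000, 0x42e00000, 0x437f0000: the low
        // bytes are always zero, which the compressor packs very well.
        return 0;
    }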
In the scene above, notice how the drum has distinct faces. Our function lets us control the number of faces to generate, so not everything has to be perfectly circular. For example, the pencil on the desk is hexagonal.
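In sketch form, the sweep itself fits in a dozen lines (addQuad is a hypothetical stand-in for whatever appends a face to the mesh being built):

    #include <cmath>
    #include <vector>

    struct vec2 { float x, y; };
    struct vec3 { float x, y, z; };

    void addQuad(vec3 a, vec3 b, vec3 c, vec3 d); // hypothetical: appends one face

    // Rotate a profile point around the vertical axis by angle a:
    // p.x is the distance to the axis, p.y the height.
    vec3 onAxis(vec2 p, float a)
    {
        return { p.x * std::cos(a), p.y, p.x * std::sin(a) };
    }

    void revolve(const std::vector<vec2>& profile, int faces)
    {
        const float TWO_PI = 6.2831853f;
        for (size_t i = 0; i + 1 < profile.size(); ++i)
            for (int j = 0; j < faces; ++j) {
                float a0 = TWO_PI * j / faces;
                float a1 = TWO_PI * (j + 1) / faces;
                addQuad(onAxis(profile[i],     a0), onAxis(profile[i],     a1),
                        onAxis(profile[i + 1], a1), onAxis(profile[i + 1], a0));
            }
    }

The faces parameter is the knob mentioned above: a low value keeps the drum faceted and the pencil hexagonal, while a high value makes the pawns look round.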
In the background of the chess scene, even the white ornament at the top of the hearth is made with this technique: it is built as a pointy octagonal shape. Then the central part is elongated along one axis, resulting in a large shape with beveled corners. We can not only elongate the shape along an axis, but also generate it along a curve. This is how the train ramp is made, with its path described by another spline. If you watch Felix’s Workshop again, you can see how everything comes either from a revolution or from a cube. We create a wide range of objects just by combining these two primitives.
Growing Cubes
Combining and deforming simple cubes also has a lot of potential. For the vegetation in H – Immersion, we started from a cuboid and deformed it a bit. Then we made many copies of the mesh, placed around an imaginary vertical axis, with random size and orientation. This creates something that vaguely looks like a plant. We repeat, again with random parameters, to create more of them:
This looks very rough, and you’re probably expecting to read about the next steps that refine the shape. There aren’t any: this is the final mesh. We didn’t even create a custom texture for it. Instead, we just applied the ground texture to that mesh!
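In sketch form, the generation loop looks something like this (deformedCuboid is a hypothetical helper, and the mesh is reduced to a plain vertex list for brevity):

    #include <cmath>
    #include <cstdlib>
    #include <vector>

    struct vec3 { float x, y, z; };
    using Mesh = std::vector<vec3>;   // just vertices, for the sake of the sketch

    float rand01() { return rand() / (float)RAND_MAX; }

    Mesh deformedCuboid();            // hypothetical: the deformed box used as a leaf

    // Rotate v around the vertical axis and scale it.
    vec3 place(vec3 v, float angle, float size)
    {
        float c = std::cos(angle), s = std::sin(angle);
        return { size * (c * v.x - s * v.z), size * v.y, size * (s * v.x + c * v.z) };
    }

    Mesh generatePlant(int leafCount)
    {
        Mesh plant;
        for (int i = 0; i < leafCount; ++i) {
            float angle = rand01() * 6.2831853f;        // random heading around the axis
            float size  = 0.5f + rand01();              // random size
            for (vec3 v : deformedCuboid())             // copy the leaf...
                plant.push_back(place(v, angle, size)); // ...rotated and scaled
        }
        return plant;
    }

Calling generatePlant again with a different random seed gives the next plant.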
But during the demo, the effect works well enough thanks to the rendering, the lights and shadows, and a simple but convincing animation. The editing also helps a lot: the shapes and movements set the mood, but the viewer doesn’t have time to notice the details before the camera moves on. Sometimes evoking a shape with a proper mood is more effective than painstakingly modeling it.
At some point, we wanted to have more complex meshes. As usual, we started from our beloved cube and decided to modify it. Merely deforming a cube will still result in… well, a cuboid. So we needed something more. Enter extrusion: we pick a face and extrude it. This operation creates a new face, which we can pull away from the object, resize, or transform in any way we like.
We iterate multiple times, to create the shape we want. Each extrusion will add more details. The result is often low poly, but we use the Catmull-Clark subdivision algorithm to smooth the result out. This approach was inspired by the Qoob modeling tool.
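Here is a sketch of one extrusion step on a quad mesh (a simplified data layout for illustration, not our exact code; the Catmull-Clark pass then runs on the resulting faces):

    #include <vector>

    struct vec3 { float x, y, z; };
    vec3 operator+(vec3 a, vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

    struct Quad { int v[4]; };                 // four indices into the vertex list
    struct Mesh {
        std::vector<vec3> vertices;
        std::vector<Quad> faces;
    };

    // Extrude one face: duplicate its vertices with an offset, stitch four
    // side walls, and turn the original face into the new cap, which can
    // then be resized, transformed, or extruded again.
    void extrude(Mesh& m, int faceIndex, vec3 offset)
    {
        Quad face = m.faces[faceIndex];        // copy: push_back may reallocate
        int base = (int)m.vertices.size();
        for (int i = 0; i < 4; ++i)
            m.vertices.push_back(m.vertices[face.v[i]] + offset);
        for (int i = 0; i < 4; ++i) {
            int j = (i + 1) % 4;               // side wall between old and new edge
            m.faces.push_back({ { face.v[i], face.v[j], base + j, base + i } });
        }
        for (int i = 0; i < 4; ++i)            // the cap replaces the original face
            m.faces[faceIndex].v[i] = base + i;
    }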
What we’ve described is exactly what we did to generate the small statues used as decorative props in several places during the demo:
Since it’s all procedural generation, we can pass arguments to the function. These arguments can control the angles of the legs, the arms, etc. Write a loop that generates lots of statues with random parameters, et voilà! There is enough variation that it doesn’t get visually boring.
We also created two statues with hard-coded parameters, for better results. Here is how it looks after applying textures and lighting.
In the temple, we wanted to show a colossal statue of Poseidon seated on his throne. The technique used for the small statues was too rough for a model that would get more focus: Poseidon is huge and we wanted more details. The demo has a lot of content, and fitting everything in was a challenge. After a lot of size optimization work, we managed to free up around one kilobyte, and we decided to spend it on a better model for Poseidon.
To do so, we used a completely different technique from what we’ve seen so far: an implicit surface expressed as a signed distance field (SDF). This technique is very popular in 4kB intros, where it is usually paired with the ray marching algorithm and implemented as a screen-space shader. But since our rendering is based on meshes, we generated a polygon mesh by evaluating the SDF with the marching cubes algorithm instead of ray marching. We built the humanoid shape as a series of segments, with a little bit of tweaking to give it an organic look.
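In sketch form, such a field is a union of capsules, i.e. segments with a radius (the coordinates below are made up, and the real model uses many more segments plus a smooth minimum for the organic blending):

    #include <algorithm>
    #include <cmath>

    struct vec3 { float x, y, z; };
    vec3 operator-(vec3 a, vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    vec3 operator*(float s, vec3 a) { return { s * a.x, s * a.y, s * a.z }; }
    float dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    float length(vec3 a) { return std::sqrt(dot(a, a)); }

    // Signed distance to a capsule: a segment from a to b with radius r.
    float sdCapsule(vec3 p, vec3 a, vec3 b, float r)
    {
        vec3 pa = p - a, ba = b - a;
        float h = std::clamp(dot(pa, ba) / dot(ba, ba), 0.f, 1.f);
        return length(pa - h * ba) - r;
    }

    // The statue as a union of segments (two shown here); marching cubes
    // then samples this function on a regular grid to produce triangles.
    float statue(vec3 p)
    {
        float d = sdCapsule(p, { 0.f, 0.f, 0.f }, { 0.f, 2.f, 0.f }, 0.5f);           // torso
        d = std::min(d, sdCapsule(p, { 0.f, 1.8f, 0.f }, { 1.2f, 1.2f, 0.f }, 0.2f)); // an arm
        return d;
    }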
On the left, the normals deduced from the SDF reveal the underlying simple shapes. On the right, the normals estimated from the triangle mesh reveal the sampling grid, but hide the structure and give a desirably rough-looking shape.
There was only so much detail we could afford, not to mention that modeling humans is difficult and people are very good at spotting issues in human-like models. So we turned the low resolution of the generated mesh to our advantage. It turns out that estimating the normals on the final mesh (as opposed to deducing them from the SDF function) creates visible artifacts: the surface is covered with creases and edges. This very rough appearance gives it the look of a sculpture. On top of that, we used lighting and cinematographic techniques to trick the viewer into filling in the details. In the final shot, the statue seems more detailed than it actually is.
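For reference, “deducing the normals from the SDF” is typically done with central differences of the field. A minimal sketch, reusing the statue() function and vec3 type from the sketch above:

    // Gradient of the field by central differences, normalized.
    vec3 sdfNormal(vec3 p)
    {
        const float e = 0.001f;
        float dx = statue({ p.x + e, p.y, p.z }) - statue({ p.x - e, p.y, p.z });
        float dy = statue({ p.x, p.y + e, p.z }) - statue({ p.x, p.y - e, p.z });
        float dz = statue({ p.x, p.y, p.z + e }) - statue({ p.x, p.y, p.z - e });
        float len = std::sqrt(dx*dx + dy*dy + dz*dz);
        return { dx / len, dy / len, dz / len };
    }

Skipping this step and keeping the normals estimated from the triangles is what gives the sculpture look described above.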
In creative activities, it is often crucial to iterate quickly on a design. You cannot get everything right on the first try, so you need to be able to easily make changes, iterate, explore, and see what works.
At some point, we put our mesh generator on a web server, just like we did with the textures. The webpage had a textarea where we could write C++ code. When we clicked on a button, it compiled the code on the server and returned the mesh in a JSON format. The webpage displayed the result with three.js, so that we could view and rotate the model with the mouse. Just like in Shadertoy, this allowed us to quickly try ideas, share them with the team, fork and tweak other models.
We later moved to a different solution, C++ recompilation, which we mentioned in the first part.
Conclusion
Mesh generation is arguably more difficult to design and more involved to implement than texture generation. While textures are just flat surfaces, meshes have varying topologies, which adds a new layer of complexity.
But as with textures, the simplest building blocks offer a wide range of possibilities to explore, as long as they can be combined in various ways: a few simple elements used creatively can give a wide variety of shapes.
Moreover, as we’ve seen with at least two examples, the power of suggestion can play an important part and replace modeling work that would be tedious or even impossible to do with the available building blocks.
Using both of these observations can go a long way, as we hope to have demonstrated. The trick is to find the right balance between modeling work and expressiveness.
Earlier this month, the large demoscene Easter meeting, Revision, took place with its full range of events and competitions, albeit in a new online format due to the pandemic. One of those competitions is the Shader Showdown: a tournament where participants write the best possible visual effect within a limited time. Each round pits two of them against each other on stage for 25 minutes, while a DJ plays and the public cheers. At the end of the round, the public decides the winner, who moves up to the next round.
After this year’s competition, Friol interviewed six contestants, including the winners of the last three editions:
ferris is a software engineer coming from the US. He’s currently working on machine learning processors for Graphcore in Norway.
fizzer comes from the UK and is currently living in Finland. He is a graphics engineer who worked on Cinema4D before doing GPU path tracing in Notch, a tool to create visual effects for events and concerts.
Flopine is a French PhD student in digital art and a technical artist at Dontnod. She won the Revision Shader Showdown 2020.
Gargaj is a Hungarian game developer. He worked on the indie MMO Perpetuum before moving to the AAA industry with the Project CARS racing game series.
NuSan is a French technical artist who works in the game studio Dontnod and also creates small video games with Unity and PICO-8. He is the 2019 winner of the Revision Shader Showdown.
provod is a Russian software engineer from Siberia who recently moved to California and is currently working on the game platform Roblox. He won the Revision Shader Showdown in 2018.
For reasons we will not elaborate on, the original article was eventually withdrawn. With their permission, we’re sharing here their stories and insights for everyone to read.
What is this all about?
Whether it is at home, during a stream on Twitch, or as a performance during a live event, live coding consists of writing code with immediate visual feedback. In this article, it refers more specifically to writing a shader effect on stage to entertain an audience. At Revision and several other demoparties, participants have a given time to write their effect from scratch. A DJ accompanies them with a live mix, and their code and its result are projected on a big screen for everyone to see, in a collaborative audio-visual performance.
When asked to define live coding, ferris mentioned a “competitive aspect which makes it at least e-sports like, while still being an inherently artistic medium”. Gargaj pointed out that “the shaders themselves are rarely rewatched afterwards”, unlike “a good demo that lasts years to come”. His comments emphasized the “in-the-moment performance”, comparing it to “rap or beatbox battles”. fizzer further described the “various twists and build-ups that occur during the coding session”, comparing it to the World Cup. “The end result does of course matter, but everyone shows up to watch the game in full because that’s where the excitement is”, he added.
This early video from 2011 can be a good introduction to shader live coding. It presents some simple 2D effects that are relatively accessible for a new programmer, compared to the 3D effects and advanced tricks seen in live coding competitions:
When did you first hear of live coding in the demoscene?
Flopine: When I entered Revision’s main Hall in 2016. I didn’t know it existed at all! That people would code live in front of others…
fizzer: I think it was when I read about IQ coding shaders as a member of Beautypi, which was a group that put on a show with live music and visuals. It wasn’t competitive, but it was live and I thought back then that it must be extremely difficult. It turned out of course that many years later it would be made an official competition at Revision and I found it to be quite fun when I tried it myself!
provod: At the demoparty WeCan 2013 in Poland. I was accidentally visiting this party (was visiting friends in Germany and became aware of a demoparty nearby) and only at the party place noticed that they had a live coding competition. A few years before that I had a rather small experience of doing music performances live coding PureData patches and live soldering custom digital synthesizers on stage, so that thrill looked like a lot of fun, even though I only began exploring shaders back then. So I just applied as a contestant and got selected. I’ve heard about live coding performances before that outside demoscene, mostly in the experimental electronic music scene. I always wanted to try doing something like that also with visuals myself, so I guess it felt like a good opportunity to just try.
ferris: In the form it is now, I guess 5+ years ago. I’m not sure exactly when. But I was aware of the first competition at WeCan (I think that’s where it was?) and I helped do some mac fixes for Bonzomatic in 2015 or so (which I think were superseded by more proper work by alkama etc) and I’ve tried to participate in most of the competitions that were happening at parties that I was attending around that time, especially as Solskogen was embracing them. That said, I remember there being compos with a similar format long before this. The “drunk master coding compo” comes to mind, which I never personally saw, but heard about from the likes of kusma / excess and a few others. I believe this was a feature of Kindergarden in Norway, but I’m not sure. This was more theme-focused, i.e. make a “texture mapped tunnel” and every time you hit compile you had to take a shot. Or something. :)
NuSan: I saw some people live coding sounds at local concerts. I also saw some live coding by IQ around 2012 on his YouTube channel, but it’s really the Revision 2018 live stream where I saw a proper shader showdown, and that blew my mind.
fizzer: Actually I had never heard of live coding before I came across it in the demoscene. I had tried a few times to code something in a limited amount of time as a challenge for myself, such as coding an effect on a laptop during a train journey home, but I hadn’t ever imagined it as a real sport. I think it’s quite futuristic.
The editors
If live coding is a sport, the editor is its field, where the champions show their skills. Two shader live coding editors popular in the demoscene are Shadertoy and Bonzomatic. Shadertoy is a creative website where a community writes and shares shaders; it is a very nice place to find shaders and analyze how they are made. Bonzomatic is a tool focused on live shader competitions. Its author Gargaj shared the story behind its creation.
Gargaj: I know live coding has been a thing outside the scene since the mid-2000s, but I never really saw the appeal, because the people doing it in “algoraves” were usually using very basic graphics or sound libraries. So the actual results were never particularly pleasant, and they were always somehow laced with an influence from either databending or 8-bit or both, which got really old after a while.
The demoscene insertion happened when Bonzaj and Misz came up with the idea for WeCan 2013 and created the competition using two projectors and a custom editor integrated with Visual Studio; D.Fox (editor’s note: D.Fox is the lead organizer of Revision) liked the idea and tried to bring it to Revision 2014, but when it was announced (about a week or two before the party), I saw the package they were trying to use, and it was really janky and obviously required extra hardware. Just to prove a point, I found a GitHub repo by a guy called Sopyer that implemented a syntax highlighting editor over OpenGL, and within about 2 hours I had the basics of a tool that was able to show a shader being run and recompiled. I then merged in some of Bonzaj’s code (like the FFT and MIDI code), and posted it back on Pouet. After the party itself, I was aware of the many bugs and the overall hacky nature of the editor, so I decided to write one from the ground up with a bit more thought and cleaner basics in mind. When I was looking for a name, I obviously thought of Bonzaj again and named it after him — something which he apparently never realized until years later.
A two hour long commented live coding session from NuSan using Bonzomatic.
When asked about the editor, the participants had all sorts of suggestions as to where to take it next. At a basic level, provod regrets the lack of a vi mode, causing him to “feel crippled, and sometimes do slip live”, while Flopine found its automatic indentation frustrating. To “keep the concept fresh”, fizzer suggests high-level features like multipass shaders, which “would allow a lot of unseen effects”, and mentions how a stateful shader would make effects “from basic feedback zoomers right up to physics and particle simulations” possible. These features became available in Shadertoy in 2016, enabling such effects.
NuSan was more interested in collaborative editing. The Cookie collective experimented with that idea recently in a collaborative live coding session where several coders edited a shader simultaneously. Going further, fizzer wonders “what it would be like if the two contestants had access to each other’s shader output as an input texture”.
The heat of the competition
“I generally think that it doesn’t make sense to be alive and not constantly try to do the most complex and exhausting stuff in the dumbest way possible and with the silliest look.” — provod
The Revision 2018 final match: provod vs Flopine, with Ronny mixing.
Do you prepare for a live coding competition?
provod: Usually I do. I did not prepare for WeCan 2013 at all, however, and entered it at the party place on a whim really. It had a different format where you could continue to work on your shader in the next round, and in fact I immediately started making very specific preparations between rounds when I got the hang of it after the first round.
For subsequent compos I looked at the participants list, got scared by all the big names, and decided that if I’m to at least show something comparable I should really have a few techniques memorized and should try to train myself to make a few scenes with combinations of these techniques. I don’t do graphics professionally, and I’m not always involved with the demoscene, so it’s not rare that there are a few consecutive months in which I don’t touch shaders (or any graphics stuff really) at all, so skills get very rusty. :D I certainly would not be able to write anything coherent if e.g. I forgot all the relevant math.
Usually a week or two before the compo I try to make a list of techniques to rehearse (e.g. raymarching basic loop, normals, basic shading) that I should be able to write with my eyes closed in a few minutes. Then I come up with a few scenes, maybe remember some ideas I had and did not have time to try previously. Then I just set up a timer and try a scene. Then I try to tweak a scene for a few hours to feel its “phase space” and get an idea what works and what doesn’t. Some of these end up being 4kB intros later :D.
NuSan: I try to train myself regularly by doing a shader coding live stream on Twitch each week so I have some pressure from an audience. Before an important showdown like the ones from Revision, I try to find an idea that could be original. A few hours before the match I mess around in Bonzomatic with that idea and try to find a nice visual target.
Flopine: I’m always prepared because I absolutely love coding shaders, thus I’m doing it all year long. And before a match or a VJ set, I usually practice and test ideas 2 hours before, until it’s my turn to go on stage! For the Revision showdown I often have 2 ideas at least, and then for the finals I’m always surprised and need to improvise more because I never think I’ll be able to pass the quarter and semi-finals!
NuSan: I don’t try to memorize each value or piece of code; it’s more about what steps I want to follow, so that once in the match I already know where to start and where to go. I don’t prepare for more than one round, and usually there is at least an hour between two rounds so I can find a new idea. It’s a good process to get something cool on the screen without destroying all improvisation.
provod: I try to have at least three solid scene ideas, each of which I can do within 20 minutes and tweak in the remaining minutes. There’s an interesting meta with these scenes: which scene I should play against whom, and in which order. And I think I completely failed the meta for the past two Revision parties! It doesn’t mean that I have a super rigid scene idea for each round that I just type out (there were people doing this, I think, but my memory is way worse than that). A scene is mostly a vague pointer to a collection of rendering techniques, geometries and maybe a palette and camera movements. For some rounds I wasn’t sure what exactly I was doing before I started typing. And for some I just went and combined things from different scenes I practiced previously.
fizzer: When I know that I will be doing any kind of live coding, I brush up on basic flexible effects that I can code out quickly at the start. So during a round I will start with one of those and then just improvise on top of that. I think this is a nice compromise between going in cold and memorising an entire shader. Complete memorisation can be very risky. If it goes wrong, the time that it takes to come up with a Plan B can be significant. Likewise, if you have memorised some basic small effects that take little time to code then you have the choice of switching to another effect mid-round if it’s not going so well. You can also entertain the audience better that way, by making big changes to your shader mid-round.
What kind of shader will win over the audience?
Gargaj: That’s always a tough one, sometimes it’s just sheer visual splendor, sometimes it’s hitting the right note with the audience, sometimes it’s doing something more interesting than the other person even if it’s not as good looking. Plus there’s always audience bias as well :)
provod: Path tracing! It’s super trivial to do, and any random crap rendered with it looks good. It’s like the pentatonic scale :D.
ferris: It depends on the audience of course. It’s the same with demos – people like what they like. For the demoscene, a cool 3D scene with lots of shiny things will often win. However, sometimes a very stylistic 2D shader will also win if it captures the audience (which is particularly common if it appeals to an “oldschool” palate, eg. by using a classic C64 color scheme/dithering style).
NuSan: Making some 2D neon lasers rotating in all directions always gives a good effect.
Flopine: The MirrorOctant from mercury sdf library! Such a cool way to duplicate space seamlessly. That function combined with a cool shape and maybe an angular repetition can win almost all shader showdown matches I’ve been into :) but I like to try different things because I don’t want to do stuff that are “too easy” on stage.
Flopine and Cupe look at each other’s creation after a match at Revision 2018.
Any particularly memorable live coding event?
NuSan: My first shader showdown in public (at Cookie demoparty) was very intense, my fingers were shaking but it was awesome. I had some bad live coding sessions mostly when I’m tired or if I do several sessions in a row. After a while, you can’t really think anymore and your creativity runs dry. Then it’s not fun and usually you lose ^^.
Flopine: The first time I went on stage at Revision 2018 will remain in my memory for decades I think! So exciting, so many emotions and fun… I think I want to feel that again over and over, that might be why I’m doing a lot of showdowns, but obviously (and unfortunately) the feelings are never as strong as they were the first time!
fizzer: A terrible one. It was Revision 2018. I had requested (or so I thought) to be coding in GLSL, but my computer on-stage had been set up for HLSL. This would ordinarily not be a huge problem, except that I didn’t notice this problem until some minutes into the round. I had no idea how to switch the shader language, and in my focused state of mind I had no choice but to continue, rewrite the code I had written in GLSL into HLSL and make the most of it. I still can’t believe that I was able to keep coding coherently after that!
provod: The whole Revision 2018 experience was completely surreal, I felt like such a huge impostor really. I was recovering from a severe flu (had been hospitalized just a few days before the party), so the preparation was interrupted, was sleep deprived due to long flights from Siberia, and also was desperately trying to finish a 4k and 8k before their deadlines. I did not expect to be able to write anything coherent on stage, even less so to win against all those strong sceners I still consider lightyears ahead of me.
Revision 2019 final, with NuSan and Flopine.
“Absolute panic”
On the third day of Revision 2020, a DJ set took place, together with another live coding event. Unlike the competition rounds, this session was much longer and was not meant to be a competition, but more of a freestyle VJ performance. Accompanying Bullet during his mix, it featured not two, but three participants: Flopine, Gargaj, and IQ. IQ is widely known for having pioneered many ray marching techniques, for creating the vegetation in Pixar’s film Brave, and for being the co-author of Shadertoy.
Gargaj talked about his reaction when he was asked to take part in the event.
Gargaj: Oh my first reaction was absolute panic when I found out at around 1:30AM that they want me in it too, and then just mounting anxiety. Then the whole thing kicked off and I just locked in to the keyboard and the screen. Aside from the quick HLSL-to-GLSL port, my focus narrowed in so much that I didn’t watch the stream, had no idea what the other two were doing, and completely forgot about the relaxo-beer I opened myself 10 seconds earlier. I don’t know if I’d call it “fun” to be honest, because while I was incredibly lucky that the idea I decided to go with worked out, it was really just luck and it could’ve been equally as much of a disaster with me staring at a blank screen for an hour. I really genuinely dislike things that have a time limit and try to actively avoid them. I got to participate (and let’s face it, totally undeservedly, compared to what others can do) against the two best people who do this, so it was at the very least a good entry on my demoscene CV. :) But I don’t know if I ever want to do it again.
The online event between Flopine, IQ, and Gargaj, with DJ Bullet, all streaming from different countries.
“It’s important to have fun during a match or else the pressure can be a pain.” — NuSan
Did live coding shaders get better with time, technically?
Flopine: For me I’m getting better each year, but not because I’m working on the same thing again and again. I’m learning new things every month and filling my own palette of techniques with them, which helps me build more complex/mastered images in my opinion.
In general, live coding shaders is also becoming better and better technically because intros and 4k are getting better and better! I once saw a path-traced shader typed under 25 minutes for a showdown! I think the demoscene and also papers that come out of research labs will continue to improve, thus improving what type of effect you can make in 25 minutes.
Gargaj: I don’t know, I don’t remember the first ones; if they’ve gotten better, I think it’s because people learned to scope the 25 minutes better and started to explore more ideas.
NuSan: I think that live coding did get better, as new techniques are found regularly. Also there is a lot of knowledge sharing between demosceners so we can all improve, for example with shadertoy and live coding streams. Then some tricks might sometimes get lost, and also we are maybe too focused on technical skills and impressive shaders. We could spend more time on simpler but more improvised shaders that would be more personal and maybe fit the music better.
ferris: I think they’ve peaked a bit actually, and I think it has to do with a certain overlap of 1. what people can conceive of in their heads with such a limited format and 2. the round time limit. Admittedly I think it’s usually a bit stale, though we are seeing better and more impressive space folding/lighting stuff as GPUs get faster in particular. In fact I think GPU technology advancements are what drive live coding advancements more than coder skill, but it’s certainly not the only factor.
fizzer: I think they did. When the event was first introduced at Revision it was totally new. The idea of having the volume of the cheering from the audience dictate the winner in a very subjective manner was even more radical. It was quite hard to guess how it would work, and what the best strategy would be, how much stress it would be to code on stage in front of everyone. I think now it’s quite well established what makes a good live coding round and what to expect from it. It’s getting more intense every year, and some participants are effectively training all year round by doing time-limited live coding streams on Twitch in their own time, for example. I’ve been out of the game for a while, but I may yet return. We’ll see.
provod: Absolutely! My impression is that previously people generally were just improvising on the spot. But for the past year or two I see people both make special preparations for live coding compos (you can see that from the way people write complex scenes with great confidence) and keep themselves in good shape, constantly refining their skills by doing shaders regularly. Like NuSan, evvvvil and Flopine streaming on Twitch (I’m sure there are other sceners who also stream shaders but I can’t remember them all). Unfortunately, I don’t have a time budget to keep a similar routine.
provod vs Ferris at Revision 2018.
Does live coding require distinct skills from coding?
Flopine: The demoscene is full of super great coders that don’t livecode! Live coding is stressful not only because you have to type and think fast, but also because you have to be present, to be on stage in front of hundreds of people that will judge both your production AND the code of it. I think a super talented coder needs to be sure every line is in its right place, that he does not implement something in a dull way or whatever… but in a live coding environment you don’t have time to think about all those!
ferris: Frankly I think the best coders are the ones that take their time and do something more comprehensive. Live coding to me isn’t better or worse, it’s a very different thing, and it’s an odd skillset that to me doesn’t really overlap much with what I’ve typically thought of as “best” coding skills, though I have a lot of respect for it certainly!
NuSan: Live coding is a very special thing, focused around short and intense sessions. Also you have to sacrifice some elegance in coding style to get faster. Some coders prefer having time and peace of mind while creating and it doesn’t make them less impressive. Right now, I like live coding because it lets me create fast and I don’t have the patience to take my time.
provod: Yes. You have to be able to do a lot of stuff consistently in a very limited time and under quite a bit of pressure. Not everyone will find this fun. It is also laser focused on short-term results sacrificing everything else, so it’s kind of orthogonal or even outright adversarial to other code writing dimensions like code quality, performance, cleanliness etc, which are very important in real life, and lack of which would cause physical pain to many great coders I know.
provod vs NuSan at Revision 2019.
Who is the livecoder in the demoscene you appreciate the most?
fizzer: I would have to say Flopine, she has a real sense of showmanship and personality resulting in a real cool performance when she is live coding, and the shaders she produces are likewise always charming and funny. In the demoscene it was a bit like she came out of nowhere and suddenly started making these awesome live coding performances. She has also produced demos besides that too, and is very active online sharing her code and livestreaming on Twitch and such.
Gargaj: That’s a tough one; the NuSan / Evvvvil battle this Friday was probably the most insane thing I’ve ever seen, and yet when you ask the question, I find myself thinking about people like Ferris and Cupe and Visy who usually do something completely lateral and non-3D, because I think that’s not only harder to pull off, but also shows guts. On a higher level though, what NuSan, Flopine, Evvvvil, yx, and the likes are doing with their weekly streams is fantastic and helps a great deal with community building and knowledge-sharing.
The Revision 2020 quarterfinal between NuSan and Evvvvil was a memorable match.
provod: It feels weird to pick just one person — I envy all of them and would love to grow and be like them some day. Evvvvil is doing some crazy space warping stuff that I can’t wrap my head around. LJ is an absolute god of making complex (and realistic!) geometry. NuSan’s stuff is very beautiful and clean. And of course Flopine always comes up with some unconventional approach and kicks all of us default abstract raymarchers :D.
But if I absolutely had to pick one scener, it’d be Ferris. I love the diversity of the stuff he’s doing and the polish of it. And I absolutely hate the fact that he’s always ahead of me in whatever and is also doing it way better than I could ever even attempt (I wanted to do FPGAs, he’s already streaming it; I wanted to do some DJing, and he immediately streams a DJ set breakdown he did weeks earlier; etc.). He’s also the person who showed me shaders in 2009 when I didn’t even know what those were.
Flopine: Tough question… I think I’ll always admire LJ, both for what he did in previous showdown tournaments and for the quality of his 4ks. He gave me the initial spark, not only because what he did was amazing but also because it “hit the spot” artistically, at least for me. The first time I saw him on stage I saw an artist, in such control of his medium that he was capable of improvising a masterpiece on short notice. And then with his 4ks I think he is also able to express a mood, a feeling, and to share it with us. I’m very receptive to that, and I’m glad to see more and more entries being sensitive, like some from Prismbeings for example.
ferris: Admittedly, while I’ve tried to be somewhat involved in the live coding scene, as it’s become larger/more competitive, my interest has waned a bit. I’ve been in the scene for a long time and I appreciate the typical compo entries more, especially larger ones like 64k and full sized demos (even though I focused specifically on 4k’s in my late teens). When live coding happened, the fresh part for me was the spontaneity of it, that you could just bang out something and it would be cool. As people started memorizing more and more it was less interesting to watch, even though I know that’s not an easy feat and it’s impressive, it just doesn’t capture me personally as much. That said, blueberry is always my favorite person to watch because he comes up with crazy colors and usually fresh ideas, and I think he has a similarly not-so-competitive attitude as me. So, he’s both my favorite contestant and my favorite opponent :)
NuSan: Flopine often manages to surprise me with a new effect or style so watching her is a lot of fun. I also like coders who make impressive 2D effects, like Cupe or Ferris, it always feels magical to me.
When making a demo in 64kB (or less!), many unexpected issues arise. One issue is that floating point numbers can take quite a lot of space. Floats are found everywhere: positions of objects in the world, position of the camera, constants for the effects, colors in the texture generator, etc. In practice, we often don’t need as much precision as floats offer. Can we take advantage of that to pack more data in a smaller space?
In many cases, it’s not important if an object is 2.2 or 2.21 meters high. Our goal is to reduce the amount of space used by those numbers. A float takes 4 bytes (8 bytes for a double). This can be reduced a bit with compression, but when there are thousands of them, it’s still quite big. We can do better.
A naïve solution
Suppose we have some numbers between 0 and 1000, and we need a precision of 0.1. We could store those numbers as integers between 0 and 10000 and then divide by 10. Whether we use 32-bit or 16-bit integers in the code doesn’t make a difference: since we don’t use their full range of values, all these integers start with leading 0s. The compression code will detect such repetitive 0s and use around 13 bits per number in both cases.
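In sketch form (names and values made up for the example):

    // Store tenths in 16-bit integers; convert when reading.
    static const unsigned short heights_tenths[] = {
        22, 157, 3041, 9990,             // i.e. 2.2, 15.7, 304.1, 999.0
    };

    float height(int i)
    {
        return heights_tenths[i] / 10.f; // the processing step discussed below
    }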
The problem with this solution is that we need some processing in the runtime code. Each time we use a number, we have to convert it to a float and divide it by 10. If all our data is in the same place, we can loop over it. But if we have numbers all over our code base, we’ll also need processing code in all those places. This simple operation can be cumbersome and expensive in terms of space.
It turns out we can get rid of the processing, and use floating point numbers directly.
A note on IEEE floats
Floats are stored using the IEEE 754 standard. Some of them have a binary representation that contains lots of 0s, and compress better than others.
Let’s look at two examples written in binary. The IEEE representation is not exactly what is shown below (it also has to store the exponent), but close enough.
6.25 -> 110.01
6.3 -> 110.010011…
In fact, 6.3 has no exact representation in base 2: the number stored is an approximation, and it would require an infinite number of digits to represent 6.3. On the other hand, the binary representation for 6.25 is compact and exact.
If we’re optimizing for size, we should prefer numbers like 6.25, which have a compact binary representation. For example, 0.125, 0.5, 0.75 and 0.875 have at most 3 binary digits after the decimal mark. The binary representation will have a lot of 0s at the end of the number, which will compress really well. The great thing is that we don’t need processing code anymore, because we’re still using standard floats.
To better understand IEEE representation, try some tools to visualize the floats. You’ll see how removing the last 1s will reduce the precision.
How much precision do we need?
Floats are much more precise for values around 0. As our numbers get bigger, we’ll have less and less precision (or we’ll need more bits).
To check how much precision is needed, keep in mind how the bits are spent: a 32-bit float uses 1 bit for the sign, 8 for the exponent and 23 for the mantissa. Keeping b bits in total therefore keeps b − 9 mantissa bits, and rounding a number around 2^e to m mantissa bits gives an error of at most 2^(e − m − 1). For example, if the input numbers are around 100 and we use 16 bits per float, the error will be at most 0.25. If we want the error to be less than 0.01, we need 21 bits per float.
Of course, each time you add a bit, you halve the expected error.
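To check a given case quickly, this rule fits in a few lines (a back-of-the-envelope helper written for this article, not code from the demo):

    #include <cmath>
    #include <cstdio>

    // Worst-case rounding error when keeping `bits` bits of floats whose
    // values are around `scale`.
    float worstError(float scale, int bits)
    {
        int e = (int)std::floor(std::log2(scale));    // values lie in [2^e, 2^(e+1))
        int mantissaBits = bits - 9;                  // minus sign and exponent bits
        return std::ldexp(1.f, e - mantissaBits - 1); // half a spacing: 2^(e - m - 1)
    }

    int main()
    {
        std::printf("%g\n", worstError(100.f, 16));   // 0.25
        std::printf("%g\n", worstError(100.f, 21));   // 0.0078125, below 0.01
        return 0;
    }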
How to automate it?
An ad hoc solution is to remember this list of numbers and use them in the code when possible: 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875. An alternative is to use a list of macros from Iñigo Quilez. As Iñigo points out, this is not very elegant. Fortunately, this is hardly a problem because chances are this is not where most of your data lies.
64kB can actually contain a lot of data. Developers often rely on tools and custom editors to quickly modify and iterate on the data. In that case, we can easily use code to truncate the floating point numbers as part of the process.
Here is the function we use to round the binary representation of the floats:
// roundb(f, 15) => keep 15 bits in the float, set the other bits to zero
float roundb(float f, int bits) {
    union { int i; float f; } num;
    bits = 32 - bits;                  // assuming sizeof(int) == sizeof(float) == 4
    num.f = f;
    num.i = num.i + (1 << (bits - 1)); // round instead of truncate
    num.i = num.i & (-1 << bits);
    return num.f;
}
Just pass the float, choose how many bits you want to keep, and you’ll get a new float that will compress much better. If you generate C++ code with that number, be careful when printing it (make sure you print it with enough decimals):
printf("%.10ff\n", roundb(myinput, 12));
The great thing about this function is that we decide exactly how much precision we want to keep. If we desperately need space at some point, we can try to reduce that number and see what happens.
By applying this technique, we’ve managed to save several kilobytes on our 64kB executable.
Hopefully you will, too.
People not familiar with the demoscene often ask us how it works. How is it possible that a 64kB file contains so much? It can seem magical, since a typical song compressed as MP3 can be 100 times as big as our animations – not to mention the graphics. People also ask why other programs or games are getting so big. In 1990, when games had to fit on one or two floppy disks, they used only 1 or 2MB (which is still 20 times as much as our 64kB intros). Modern games now use 10-100GB.
The reason for that is simple: software engineering is all about making trade-offs. The typical trade-off is to use more memory to improve performance. But when you write a program, there are many more dimensions to consider, and if you optimize one dimension, you might lose on other fronts. We make optimizations and trade-offs that wouldn’t make any sense outside the demoscene.
First, we optimize all the data we store in the binary. We use JSON files during development for convenience, but then we generate compact C++ files to embed in the binary. This saves us a few kilobytes. Is it worth doing? If you had to make a demo without the 64kB limit, you wouldn’t waste time on this. You’d prefer the 70kB executable instead. It’s almost the same.
Then, we compress our file (kkrunchy for 64kB intros, Crinkler for 4kB intros). Compression slows down the startup time, and antivirus software may complain about the file. It’s generally not a good deal. I bet you’ll choose the 300kB file instead. It’s still small, right?
We use compiler optimizations that slightly slow down execution to save bytes. That’s not what most users want. We disable language features like C++ exceptions, we give up object-oriented programming (no inheritance), and we avoid external libraries – including the STL. This is a bad trade-off for most developers, because it slows down development. Instead of rewriting existing functions, you’ll prefer the 600kB file.
Our music is computed in real time (more precisely, we start a separate thread that fills the audio buffer). This means that our musician has to use a special synth and cannot use his favorite instruments. That’s a huge constraint that very few musicians would accept outside the demoscene. They will send you an MP3 file instead. You also need an MP3 player, and your demo is now 10MB.
Similarly, we generate all the textures procedurally. And all the 3D models. For that, we write code, and it is a lot of work. This adds a huge constraint on what we do (but constraints are fun and make us more creative). While procedural textures have lots of benefits, your graphics artists will prefer using their normal tools. You get JPEG images and – even if you’re careful – your demo size increases to 20MB.
At this point, you may wonder if it makes sense to write your own engine. It’s hard, error-prone, and other people have probably made a better one than you would. You could use an existing engine, and it would add at least 50MB. Of course, this is still a simple application made by a small team; you can imagine what happens when you scale this up to a full game studio.
So demosceners achieve very small executable sizes because we care deeply about it. In many regards, demoscene works are an art form. We make decisions meant to support the artistic traits we’re pursuing. In this case, we’re willing to give up development velocity, flexibility, loading time, and a lot of potential content to fit everything in 64kB. Is it worth it? No idea, but it’s a lot of fun. You should try it.
It’s been 4 years since the last message here. Sorry about that.
It’s time for a quick update. Here is what we did since the last blog post:
G – Level One was released at Tokyo Demofest 2014 and took 1st place. One year later, we made a final version.
We released H – Immersion last week at Revision. It ranked 5th (out of 10). If you haven’t seen them, you should check out the other prods from the competition.
I’ve just released Shader Minifier 1.1. You can download the binary at the usual place.
Changes
New output options: use --format js to generate a JavaScript file, and --format c-array to get a comma-separated list of strings, to be included in a C array (see the sketch after this list).
Use the new option --no-renaming-list if there are identifiers you don’t want to get renamed (e.g. entry point functions in HLSL).
If you have #define macros, Shader Minifier will now avoid conflicts between macros and identifier renaming.
If your code has conditions with compile-time known values, they will get simplified (e.g. if (false), or int i_tag = 2; if (i_tag < 4) ...).
If there are many identifiers in your code (this probably won’t happen in a 4k intro), the Minifier will now use 2-letter names if needed. This case is not optimized for size, but some people use this tool for obfuscation instead of size optimization.
The option --preserve-all-globals won’t rename any global variable or any function. This is useful if your shader is split between multiple files.
You can now tell the Minifier not to parse some block of code. Put your code between //[ and //] and it won’t be parsed, identifiers won’t be renamed, but spaces and comments will still be removed. This is very useful if you want to use features not yet supported in the tool (e.g. forward declarations, or layouts).
Other fixes in the parser (macros can be used in a function block, numbers can have suffixes, etc.)
Unix support. Use this zip file instead of the standard release, install a recent version of Mono (at least 2.10) and run mono shader_minifier.exe. This should work.
Online Minifier. You can use Shader Minifier online, but only one shader at a time.
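As an illustration of the c-array format mentioned above (file and variable names are made up):

    // shaders.h was generated with --format c-array: it contains a
    // comma-separated list of string literals.
    const char *shaderParts[] = {
    #include "shaders.h"
    };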
Who is using it?
Over the last year, many great intros have used it with success.
Every year, we enjoy going to the German demoparty at Easter. It’s always a great party, where we meet our friends again. This year, however, we were invited to the Scene.org Awards ceremony in Norway, and it was hard to choose between the two. We eventually decided to follow Truck’s suggestion and go to both (thanks again Truck, you’ve been extremely helpful). We chose to take ten days of holiday and visit Norway at the same time (Bergen, Oslo, fjords, etc.). Everything was awesome (and very expensive!).
The Gathering is a good party: it is huge and well organized. There are many compos, seminars, and a separate area for sceners. Alcohol is not allowed, but Easter Garden, a small and cosy place for sceners, is right next to it. Having both parties is a great idea, as they complement each other. The main problem with The Gathering is that the screen and the sound system are not big enough to cover the whole huge place, so many people (gamers) end up ignoring the compo shows.
Fairlight statues in front of our nominee certificate.
I’ve been very happy with the Scene.org Awards organisation. The team has done a wonderful job, both for the show and the after-party, thank you very much for this!
Revision was a great party, with almost the same mood as Breakpoint. Beer was cheap (especially when you come from Norway), and the organizers were friendly. Revision was also the occasion for Zavie and Cyborg Jeff to release D – Four (ranked 3rd out of 11).
Since the last release, the number of users of GLSL Minifier has increased, while the number of bug reports has decreased. I think it’s a good reason to move to version 1.0. This obfuscator has been used in a few great intros, such as Another Theory (winner at Main) and White one (winner at tUM). Hopefully we will see many intros using it at Revision.
The main feature of this release is support for HLSL obfuscation, so we’ll now call this tool “Shader Minifier”. I got pretty good results with it, and I’ve been able to reduce the size of some famous 4k intros. The renaming strategy has been improved a bit; detailed statistics will come later.
The new version of our tool is released! Here is the changelog:
Allow forward declarations in the input code and remove them (functions are automatically reordered). Please use the syntax “int foo(int x)” and not “int foo(int)”.
More intelligent renaming, based on the context in which the variable is used.
Allow structs in the source code; fields are not renamed. Field names cannot look like vector fields (.rgb, .r…) because I haven’t written the typer yet.
Remove the --macro-threshold option. It will be fixed in a future version.
As usual, several bug fixes
The most important news is the improvement in the renaming strategy. In version 0.4, the Minifier tried to reuse the same variable names again and again, to increase the frequency of a few characters. Now it’s getting more sophisticated: the name of a variable depends on how it is used.
For instance, if you often call the functions “max” and “mix”, you’ll often have the “x(” pattern. Thus, GLSL Minifier will probably name your function x to increase the frequency of this pattern. The same goes for every two-character pattern the tool finds.
Here are some statistics I’ve just made, using shaders from 4k intros. I took a short C file, inserted the shader as a string, compiled it, and compressed the result with Crinkler (/COMPMODE:SLOW /ORDERTRIES:3000). So it’s all about making the shader compress better. Numbers are file sizes in bytes:
Retrospection
Original: 1462 (hand optimized)
Minifier 0.4: 1429
Minifier 0.5: 1421
Valleyball
Original: 2240 (using old BluFlame minifier)
Minifier 0.5: 2184
Another theory
Original: 1511 (hand optimized)
Minifier 0.4: 1475
Minifier 0.5: 1463
Lunaquatic
Minifier 0.4: 2635
Minifier 0.5: 2613
Sult
Minifier 0.4: 1411
Minifier 0.5: 1408
Slicesix
Minifier 0.4: 2493
Minifier 0.5: 2432
Conclusion: If you’re not using any tool to minify your GLSL shader, I bet you could save at least 20 bytes on your 4k intro. Try and see!
I’ve just fixed a few bugs in GLSL Minifier. Here is the list of changes for the 0.4.2 version:
Smaller file to download (700kB instead of 1.8MB), using MPress. Thanks eyebex!
Print -.5 instead of -0.5. Thanks to stan_1901!
Parse octal and hexadecimal numbers. Bug found in Valleyball source code, thanks BluFlame!
Can compress several shaders at once, but only if the --preserve-externals flag is set.
Reorder uniform/varying/attribute declarations. This reduces the size of some shaders.
Fix a bug where the order of instructions was messed-up. Thanks to XT95!
Fix the --macro-threshold option. Thanks to Řrřola!
Forbid reusing variable names in the same function (which is rejected by the ATI compiler). Thanks again to Řrřola!
Handle multiline macros in the parser. Bug found in The Wind under my wing code, thanks Navis!
Improve the way the C header file is generated, trying to avoid name clashing. Thanks again eyebex!
My testing scripts are not fully set up, so you might find some other bugs. Please report them! If you use the --preserve-externals option, you might get name clashes if you use one-letter names. That will be fixed another time.