Oct 26, 2011

Signed Distance Fields

I was first exposed to Signed Distance Fields through this white paper written by Chris Green from Valve Software. Valve often make technical documentation on their graphics technology available, which is awesome.  In this case, the technique was used in Team Fortress 2, specifically for some of the signs and decals in the game.

Signed Distance Fields are a different way of storing (typically monochrome) bitmap data. Instead of each pixel in the bitmap representing the intensity at that point, it stores the distance from that point to the nearest pixel of the opposite state in the source image (in other words, the distance to the nearest edge of the shape). Pixels outside of the shape have positive distance values, while pixels inside the shape have negative distance values, hence the "signed" part of the name.

Figure 1. An example of how a signed distance field looks at the pixel level.

Note that in reality the distance field is made up of floating point data, but for illustrative purposes I've shown them as integers.

To create a signed distance field you apply a distance transform to a source image. Then, to render the original image you do the following:

For each distance value in the field,
    is the value <= 0?
    if yes, draw the pixel
    if no, ignore it.
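The rule above can be sketched in a few lines of Python (the function and variable names are my own, purely for illustration):

```python
def render(field):
    """field is a 2D list of signed distances; returns a 2D list of 0/1 pixels."""
    return [[1 if d <= 0.0 else 0 for d in row] for row in field]

# A tiny 3x3 field: negative inside the shape, positive outside.
field = [
    [1.4,  1.0, 1.4],
    [1.0, -0.5, 1.0],
    [1.4,  1.0, 1.4],
]
print(render(field))  # only the centre pixel is drawn
```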

The thing that makes signed distance fields really useful is what happens if you need to resize the image. The distance values are interpolated by the 3D hardware in the same way as intensity values in a traditional bitmap, but because the edge comes from a test, it remains crisp. This means you can magnify the bitmap many times and not suffer the blurry or chunky edges that traditional bitmaps exhibit due to filtering artifacts.
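To make that concrete, here's a small Python sketch (names are my own) of what the hardware effectively does: sample the field with bilinear interpolation, and only then apply the threshold test, so every output pixel is still fully on or off and the edge stays crisp at any scale:

```python
def bilinear(field, x, y):
    """Bilinearly interpolate a 2D list of floats at fractional coords (x, y)."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(field[0]) - 1)
    y1 = min(y0 + 1, len(field) - 1)
    fx, fy = x - x0, y - y0
    top = field[y0][x0] * (1 - fx) + field[y0][x1] * fx
    bot = field[y1][x0] * (1 - fx) + field[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def magnify(field, scale):
    """Upscale by 'scale', thresholding the interpolated distances afterwards."""
    h, w = len(field), len(field[0])
    out = []
    for py in range(h * scale):
        row = []
        for px in range(w * scale):
            d = bilinear(field, px / scale, py / scale)
            row.append(1 if d <= 0.0 else 0)  # edge re-derived at the new resolution
        out.append(row)
    return out
```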

Creating the Distance Field

There are many different transforms you can use to calculate the distance field. It turns out that the best approach is to generate the field from a high resolution source image and then downsample the result to a size acceptable for your application, since the distance field degrades gracefully when downsampled and preserves most of the edge information.

The brute force method (by which I mean calculating the euclidean distance between a given pixel and every other pixel in the bitmap) will generate a perfect distance field, but it is particularly slow for large bitmaps as it is O(n^2) in the number of pixels. There are faster (albeit less accurate) algorithms out there, though the choice of algorithm isn't too important, as the transform doesn't need to run at runtime. Valve actually used the brute force solution and ran the processing as a batch job overnight on their servers. I chose a faster approach simply because I was eager to get some results.
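For reference, the brute force transform is only a few lines; this Python sketch (names are mine, and it assumes the image contains at least one pixel of each state) uses the same sign convention as above, negative inside the shape:

```python
import math

def brute_force_sdf(image):
    """image is a 2D list of 0/1 values; returns a 2D list of signed distances."""
    h, w = len(image), len(image[0])
    field = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            inside = image[y][x] == 1
            # euclidean distance to the nearest pixel of the opposite state
            best = min(
                math.hypot(x - ox, y - oy)
                for oy in range(h)
                for ox in range(w)
                if (image[oy][ox] == 1) != inside
            )
            field[y][x] = -best if inside else best
    return field
```

Every pixel scans every other pixel, which is where the O(n^2) cost comes from.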

Figure 2. An example of what the final distance field looks like in Bulldog.

An application of Distance Fields

Since the source data for Distance Fields is generally monochromatic, their usefulness is somewhat limited. However, one application that is particularly well suited to distance fields is drawing text.

For example, a TrueType or OpenType font (or "typeface" to use the correct term) is stored in a vector format. Each of the glyphs (the characters in the typeface) is represented as a shape, with x,y coordinates for its corners and control points. This is converted into a bitmap, or "rasterised", for display on the screen. To display text at a particular size, the x,y coordinates and the curves that represent the shape of each letter are scaled before rasterisation. The rasterisation process can be a little slow even on modern hardware, so in games the preferred technique is to pre-rasterise the font at the size you want, store it as a bitmap, and then construct the text required by picking individual letters from the bitmap.

This technique is very fast, but has some limitations. You need to create a separate font bitmap for every size and style combination you want to use in your game, since bitmaps don't scale well. Take for example a game that wants to have 2 different typefaces: say a normal looking sans serif typeface like Arial, and maybe a fancier calligraphy typeface for ye olde books and scrolls and the like. You may want to support 3 different sizes for stylistic reasons, such as titles/captions or tooltips. Or perhaps you want to support a wide range of screen resolutions and don't want people to require a microscope to read your UI elements. Already we have 2 * 3 = 6 different bitmaps to create and manage in the game. It gets even worse if you want to support italic or bold styles as well.

The signed distance field approach helps alleviate this burden by taking size out of the equation, since the distance field can be scaled up and down with far fewer graphical artifacts.

As an added bonus distance field data can be used to anti-alias the edges. This can be achieved by adding an additional term to the distance test to smoothly ramp from lit to unlit for distance values within a certain range. The Valve paper covers a few other effects, such as drop shadows and outlines.

In Bulldog, I implemented signed distance field fonts using a pixel shader, which makes juggling the distance field data considerably easier.

The shader code is surprisingly short; for basic anti-aliased text it looks like this:

float4 ps( VS_OUTPUT IN ) : COLOR
{
    // get the alpha value from the distance field texture
    float rawAlpha = tex2D( TextureSampler, IN.TilingCoords ).a;

    // discard pixels well outside the edge
    clip( rawAlpha - (0.5f - delta) );

    // ramp the alpha across the edge for anti-aliasing
    return float4( fillColour, smoothstep( 0.5f - delta, 0.5f + delta, rawAlpha ) );
}
delta is a constant which determines how far from the edge of the letter the anti-aliasing extends. The actual shader code I use in Bulldog is slightly more complex, as it handles borders and reversed characters (such as when selecting text in a textbox), as well as adjusting the anti-aliasing for different font sizes. I'll write up a more detailed post on the new font system I built using signed distance fields at some point. In the meantime, here's an example of the same font rendered at several different sizes:

Figure 3. An example of the font output at 12, 24 and 64 pixels high.

As a last note, I should mention I've also used signed distance fields in an unusual way in a different part of Bulldog, but that's a post for another time...


  1. Very interesting, thank you ;)

  2. This post is a great explanation of distance fields. Keep up the good work!

  3. You can considerably speed up the brute force method by limiting your distance calculations with your spread value. This is mentioned in the Valve paper. Basically, if your spread is 4 then you only need to check a maximum of 4 pixels away from each pixel to find a distance. If you don't find an opposite pixel within the spread, then set the distance for the pixel to the min or max distance, depending on your implementation. For example, a 4096x4096 image with a 16 pixel spread takes about 18 hours on my 3.4GHz gen 3 i7 processor with the brute force method. By considering the spread, this now only takes 20s. Still not fast enough for real-time, but more than tolerable for preprocessing.
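A minimal Python sketch of the spread-limited search just described (the function name is illustrative): only pixels within the spread are tested, and the distance is clamped to the spread when nothing closer is found.

```python
import math

def spread_limited_distance(image, x, y, spread):
    """Signed distance at (x, y), searching no further than 'spread' pixels."""
    h, w = len(image), len(image[0])
    inside = image[y][x] == 1
    best = float(spread)  # clamp to the spread if no opposite pixel is found
    for oy in range(max(0, y - spread), min(h, y + spread + 1)):
        for ox in range(max(0, x - spread), min(w, x + spread + 1)):
            if (image[oy][ox] == 1) != inside:
                best = min(best, math.hypot(x - ox, y - oy))
    return -best if inside else best
```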

    1. My math was a bit off. Running a 4096x4096 image through my generator, I came up with the following numbers.

      Brute force(1 scan line): 2m 19s
      Brute force(extrapolated 4096 scanlines): 158h 9m 4s
      Optimized (16 pixel spread): 32.516s
      Optimized & Multithreaded (16 pixel spread, 8 threads): 7.047s

      Note these numbers are from a command line C++ generator with no SSE or other architecture optimizations so it could probably still be sped up some. For a preprocessor, though, it is fast enough.

      The benefits of limiting the search area by the spread are obvious. There is a downside, though. You don't have as much information to work with for certain uses of the SDF, such as collision detection. This could probably be mitigated some by having two spread values, with the inside spread being larger to capture more information about penetration depth.

    2. I didn't go into much detail about the brute force method (optimisation etc) as you can never quite get enough speed out of it to make it real-time. Though, as you discovered, you can get pretty close. Less than 10 seconds is pretty good for such a large source image.

      Re: your pixel spread idea, you could start with a certain pixel spread and then expand your search if you don't find any opposite pixels. You lose some speed, but you regain accuracy, which is a valid reason for sticking with the brute force method in my opinion, as all of the real-time techniques I've seen sacrifice some accuracy.

    3. Your suggestion inspired another optimization. I'm still limiting myself to a spread value but I imagine the optimization would work well for computing full accuracy.

      What I did was change my generator to divide the image into "chunks". I then process each chunk to determine if its pixels are all-in, all-out, or mixed. Then when processing each image pixel I first look at the states of the chunks surrounding the pixel. If the state of a chunk matches the pixel state, the pixels in the chunk are skipped. If the states don't match, then the distance is calculated against the chunk's pixels. For full accuracy, you would just continue to check the chunks outward from the pixel until you exhausted them or you found a chunk with a different state.
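A rough Python sketch of the chunk classification step described above (names are illustrative): each square chunk is labelled so that whole chunks can later be skipped when their state matches the query pixel.

```python
def classify_chunks(image, chunk):
    """Return a 2D list of 'in', 'out', or 'mixed' labels, one per chunk."""
    h, w = len(image), len(image[0])
    labels = []
    for cy in range(0, h, chunk):
        row = []
        for cx in range(0, w, chunk):
            pixels = [image[y][x]
                      for y in range(cy, min(cy + chunk, h))
                      for x in range(cx, min(cx + chunk, w))]
            if all(p == 1 for p in pixels):
                row.append('in')     # skippable for inside pixels
            elif all(p == 0 for p in pixels):
                row.append('out')    # skippable for outside pixels
            else:
                row.append('mixed')  # must always be searched
        labels.append(row)
    return labels
```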

      So far using this technique, processing a 4096x4096 image takes about 1.5s. That's down from about 7s when just using the spread as a limit. Again, there are drawbacks. Speed improvements from "chunking" are highly dependent on the image. It works best on larger images that contain large areas of in or out pixels. For the graphical images I'm processing it is ideal. If you had data that had a lot of noise, the extra checks might sap some speed, but I can't imagine it would be too bad.

    4. Glad to hear you could speed up the processing of your images.

      The "chunked" approach you mentioned is actually a key part of how I handle my game world. The world is broken down into 32x32 tile "zones", which makes all sorts of processing MUCH easier. I never thought of using it to speed up distance field generation.