Unordered dithering gives better form shading, as there is no structure overlaying the shape. I would love to see "Recursive Wang Tiles for Real-Time Blue Noise" combined with that technique:
https://www.youtube.com/watch?v=ykACzjtR6rc
That does seem lovely; I might experiment with it at some point.
You don't need to make things all that difficult just to get some random-looking blue noise, though. You can get pretty reasonable randomized ordered dithering just by rotating the 2x2, 4x4, and 8x8 blocks of the Bayer matrix randomly. This makes more sense if you view it as adding octaves of the 2x2 Bayer matrix together with random rotations.
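For anyone who wants to try it, here is a minimal NumPy sketch of one reading of that idea (my illustration, not the commenter's actual code): build the Bayer matrix with the usual recursive construction, then randomly rotate every 2x2, 4x4, and 8x8 block in place.

    import numpy as np

    def bayer(n):
        """Standard 2^n x 2^n Bayer matrix via the recursive construction."""
        m = np.zeros((1, 1), dtype=int)
        for _ in range(n):
            m = np.block([[4 * m + 0, 4 * m + 2],
                          [4 * m + 3, 4 * m + 1]])
        return m

    def randomize_blocks(m, rng):
        """Randomly rotate every 2x2, 4x4, ... block of the matrix.

        Each block keeps the same set of threshold values, so the tonal
        response is unchanged; only the fixed ordered structure is broken.
        """
        size = m.shape[0]
        block = 2
        while block <= size:
            for y in range(0, size, block):
                for x in range(0, size, block):
                    k = int(rng.integers(4))  # 0..3 quarter turns
                    m[y:y + block, x:x + block] = \
                        np.rot90(m[y:y + block, x:x + block], k).copy()
            block *= 2
        return m

    rng = np.random.default_rng(1)
    threshold = randomize_blocks(bayer(3), rng)  # randomized 8x8, values 0..63
    # A pixel is "on" where gray * 64 > threshold (tiling the matrix as needed).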
Wow, that's a very cool paper and demo!
Damn, the demo around 3:30 is lovely.
Yes, but not as good as in the submitted post, alas. You can see how the points 'flicker' when they zoom in.
Link to part of video that shows it in action:
https://youtu.be/HPqGaIMVuLs?si=P11cFnSLcv57Wj3K&t=1236
Technically, this is very cool. Aesthetically, though, the end result doesn't look very good, at least in my opinion. The Obra Dinn visual style tries to make the game look like prerendered 3D scenes dithered for an old 1-bit display. As the video explains, a lot of work had to go into striking a balance between the intended aesthetic and playability, because it turns out that the dithered aesthetic is difficult to work with. This, though, just kind of ends up looking like a pseudo-halftone style applied to high-res polygonal models. Maybe it would look better at 320x240 or something?
Related discussion from a couple of months ago: https://news.ycombinator.com/item?id=42084080
Lots of related links, including one to a tweetier version of this work.
I got the itch to try and create an Atkinson dithering paint program.
What do I mean by that? Imagine a grayscale paint program (like, say, Procreate, just all grays) but all the pixels go through an Atkinson dither before hitting the screen.
To the artist, you're kind of darkening or lightening the dither, so to speak, in areas of the canvas with your brush/eraser (crowding or thinning the resulting B&W pixels).
An hour spent with Claude making this happen in HTML5 led me to set the experiment aside: the result was okay, but it only applied the dither after the mouse was released. I wasn't driven enough to try to get it to dither in real time (as the brush is being stroked).
The mouse is a terrible painting tool too; with a touch interface on an iPad (again, like Procreate) it might be worth pursuing further. It would need to be very performant, as I say, so that you could see the dither as the brush is moving. (This might require a special bitmap and code where you store away the diffusion error so that you can update only the portion of the screen where the brush has moved, rather than having to re-dither the entire document at 60 fps.)
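For reference, the core of an Atkinson dither is small. A minimal NumPy sketch of the classic algorithm (an illustration, not the prototype described above):

    import numpy as np

    # Atkinson pushes the quantization error to six neighbors, each
    # receiving 1/8 of it (so only 6/8 of the error is propagated):
    #         *   1/8 1/8
    #    1/8 1/8 1/8
    #        1/8
    TAPS = [(1, 0), (2, 0), (-1, 1), (0, 1), (1, 1), (0, 2)]

    def atkinson(gray):
        """Dither a float grayscale image (values in [0, 1]) to 0.0/1.0."""
        img = gray.astype(np.float64).copy()
        h, w = img.shape
        out = np.zeros_like(img)
        for y in range(h):
            for x in range(w):
                old = img[y, x]
                new = 1.0 if old >= 0.5 else 0.0
                out[y, x] = new
                err = (old - new) / 8.0
                for dx, dy in TAPS:
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < w and 0 <= ny < h:
                        img[ny, nx] += err
        return out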
I distinctly remember that "Paintbrush" in Windows 3.1 had something very similar to this. Check it out in the Win3.1 emulator: https://archive.org/details/win3_stock -- open Paintbrush, go to Options -> Image Attributes and set "Colors" to "Black and White".
However, the dithering there is not fixed to the background but depends on your brushstroke / mouse position.
Error diffusion is not ideal for real-time work, and it cannot be parallelized. Consider storing a precomputed blue-noise texture.
This is correct. Atkinson dithering looks cool (at least to my eyes) but is not the best choice for real-time work. In particular, it cannot be implemented in a shader, although you can approximate it. But computers are fast enough that a CPU-bound algorithm can still work at interactive speeds. Not sure how well it would work in an editor, though; in theory changing a single pixel at the top left of the image could cause a chain reaction that could force the whole image to be re-dithered.
I did an implementation of Atkinson dithering for a web component in case anyone is feeling the itch to dither like it is 1985.
Demo: https://sheep.horse/2023/1/improved_web_component_for_pixel-...
Source: https://github.com/andrewstephens75/as-dithered-image
> in theory changing a single pixel at the top left of the image could cause a chain reaction that could force the whole image to be re-dithered.
My thought is to store the error for each pixel in a separate channel. When a portion of the bitmap is "dirtied" you could start at the top left of the dirty rectangle and re-Atkinson until, once outside the dirty rect, you compute the same error as the existing stored error for a given pixel. From that point on the same dither pattern would follow, and you can stop.
As you say, it's conceivable you would have to go to the very end of the document. If the error is an integer, though, I feel like you would hit a point where you can stop early. Maybe I am misunderstanding how error diffusion works, or grossly misjudging how wildly mismatched the before/after errors would be.
Neat idea, I'd be curious to see it implemented! You can probably get away with just stopping at the bounds of the dirty area. Sure, errors would accumulate, but they'd be localized to a single pixel. I doubt it would be visible.
One issue I suspect you'll encounter is that redrawing a restricted area during movement will cause that area to flicker weirdly while the rest of the image stays relatively stable.
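Here is a rough sketch of how the early-stop idea upthread could work (my illustration, not tested code from the thread; `err` is the assumed separate channel holding each pixel's outgoing quantization error from the last full dither, and for simplicity it rescans whole rows starting at the top of the dirty region):

    import numpy as np

    # Atkinson's six downstream taps, each receiving 1/8 of the error.
    TAPS = [(1, 0), (2, 0), (-1, 1), (0, 1), (1, 1), (0, 2)]

    def redither_from(src, out, err, y0, y1):
        """Re-dither from row y0 (top of the dirty rows y0..y1), stopping early.

        A pixel's incoming error is the sum of the stored/updated errors
        of its six upstream neighbors, so rows above y0 are read straight
        from the channel. Atkinson only pushes error two rows down, so
        once two consecutive rows below the dirty region reproduce their
        stored errors exactly, nothing further down can change.
        """
        h, w = src.shape
        matched = 0
        for y in range(y0, h):
            before = err[y].copy()
            for x in range(w):
                incoming = 0.0
                for dx, dy in TAPS:  # upstream neighbor at (x - dx, y - dy)
                    xx, yy = x - dx, y - dy
                    if 0 <= xx < w and 0 <= yy < h:
                        incoming += err[yy, xx]
                val = src[y, x] + incoming
                new = 1.0 if val >= 0.5 else 0.0
                out[y, x] = new
                err[y, x] = (val - new) / 8.0
            if y > y1:
                matched = matched + 1 if np.array_equal(err[y], before) else 0
                if matched == 2:  # frontier reached; the rest is unchanged
                    return
        # Worst case we fall through and re-dither to the bottom of the image.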
> in theory changing a single pixel at the top left of the image could cause a chain reaction that could force the whole image to be re-dithered.
For a paint program, I think it would be acceptable if painting with the brush never changed existing pixels, only pixels newly painted with the brush, and you'd apply dithering only to newly added pixels in the brush stroke as the mouse dragged. The fact that you might be able to kinda see the discontinuities at the edge of the brush feels like it would be a feature, not a bug -- that you can see the brush strokes.
The really interesting effect would come when you implemented a dodge or burn tool...
The trouble is that dithering works by smearing the error over an area of pixels in a way that your brain unsmears into something close to the original image. If you start manipulating some pixels but not nearby areas then you will get very visible artifacts.
Maybe that is OK if you are drawing straight lines and boxes but any kind of detail is going to be destroyed.
Well sure, that's why you work in grayscale and only dither at the end if you want to maximize quality and detail.
I'm talking about it from an artistic perspective. The way you see the brush strokes in certain styles of painting, it would add character to see where two dithered areas of the same lightness had a slightly visible discontinuity. I think it could be a very cool artistic effect.
I am pondering a different approach:
* Use error diffusion dithering in screen space
* Generate motion vectors for every pixel from the previous frame
Now, to make the next frame:
* Take the previously displayed (dithered) image and apply the motion vectors.
* Now use that as the threshold map to do error diffusion dithering on the next frame.
The threshold doesn't really matter for error diffusion dithering, since any error will be propagated to the next pixel. However, if you use the previous frame as a threshold map, it will encourage pixels not to 'flicker' every frame.
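A minimal sketch of what that could look like, with Floyd-Steinberg as the diffusion kernel (my reading of the idea: `prev` is the previous dithered frame, assumed to be already warped by the motion vectors, and `strength` is an assumed knob for how strongly history damps flicker):

    import numpy as np

    def temporal_dither(gray, prev, strength=0.25):
        """Floyd-Steinberg error diffusion with a history-biased threshold.

        The threshold is nudged per pixel: where `prev` was white the
        threshold drops, so the pixel tends to stay white (and vice
        versa). The bias is absorbed into the diffused error, so the
        average tone is preserved; it only influences which pixels flip
        from frame to frame.
        """
        img = gray.astype(np.float64).copy()
        h, w = img.shape
        out = np.zeros_like(img)
        for y in range(h):
            for x in range(w):
                t = 0.5 - strength * (prev[y, x] - 0.5)  # biased threshold
                old = img[y, x]
                new = 1.0 if old >= t else 0.0
                out[y, x] = new
                err = old - new
                if x + 1 < w:
                    img[y, x + 1] += err * 7 / 16
                if y + 1 < h:
                    if x > 0:
                        img[y + 1, x - 1] += err * 3 / 16
                    img[y + 1, x] += err * 5 / 16
                    if x + 1 < w:
                        img[y + 1, x + 1] += err * 1 / 16
        return out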
As I watched the video I got an idea.
Could this somehow be repurposed such that the points would be "checkpoints" for a generative texture algorithm, with zoom level somehow taken into account (the spacing of the dots, maybe)?
Then, in a computer game, one could for example look at a brick wall. At first, from farther back, the surface of the tiles looks somewhat matte and smooth. But as one gets closer, the features become coarser and show more and more detail; eventually even tiny holes in the surface become visible, and so on.
Another example: sand. It looks smooth from afar, but as one zooms in, actual grains of sand become visible.
That's level of detail (LOD), and there are various ways of implementing it. This dither technique incorporates LOD, but I'm not sure how it would be useful for the type of LOD you are suggesting, unless you think it might be applicable as a procedural technique, in which case some of the observations here might be an inspiration.
That looks crazy good! I'm speechless.
What is the point of this?
To create a dithering effect for 3D scenes.
Doing a straight 2D dither on each frame will result in noisy artifacts and won't be "stable".
Doing "ordered" dithering produces a "porch screen" artifact, where a stable pattern in screen space can be seen.
Naively mapping dithering patterns as textures onto 3D objects again results in unstable patterns, as the dithering depends on the distance from the camera to the object, and produces noise.
This is a proposed solution: a dithering pattern mapped as a texture that doesn't have the "porch screen" effect and is stable.
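For contrast, a screen-space ordered dither fits in a few lines. In this minimal sketch the threshold depends only on the screen coordinates, which is exactly why the pattern stays glued to the screen while the scene moves underneath it:

    import numpy as np

    # 4x4 Bayer threshold matrix, normalized to [0, 1).
    BAYER4 = np.array([[ 0,  8,  2, 10],
                       [12,  4, 14,  6],
                       [ 3, 11,  1,  9],
                       [15,  7, 13,  5]]) / 16.0

    def ordered_dither(frame):
        """Threshold each pixel against a Bayer tile fixed in screen space.

        Stable from frame to frame (no noise), but the pattern is anchored
        to screen x, y rather than to surfaces: the "porch screen" effect.
        """
        h, w = frame.shape
        t = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
        return (frame > t).astype(np.float64)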
The video linked to in the repo [0] talks about experiments with, and shortcomings of, the approaches I just listed. This work is a response to challenges the developer of the game "Return of the Obra Dinn" experienced when trying to create a dithering effect in-game [1][2].
[0] https://www.youtube.com/watch?v=HPqGaIMVuLs&t=201s
[1] https://news.ycombinator.com/item?id=42084080
[2] https://forums.tigsource.com/index.php?topic=40832.msg136374...
I do wonder if it would be possible now to not map this to textures, but to pixels in screen space.
Because right now it looks like high-res 3D-mapped textures with circular dots on them, rather than low-res screen-pixel dithering.
I remember seeing the video about the difficulties of dithering stability in "Return of the Obra Dinn".
The solution here renders the dither into textures, but the dots all still have a similar size in screen space, so mapping this to actual screen space might still be possible (as in, no circular dots on textures, but low-res pixels on screen being turned on/off directly)?
As far as I can tell, it's purely artistic.
If dithering isn't stable in motion it creates distracting 'shimmering' and other effects as the camera or objects move, which can be unpleasant to look at.
Unstable dithering is also potentially harder to compress in videos.
If this is surface-stable it should also work for stereo/VR without much stereo disparity, where normal dithering would have a mismatch. And PCVR is often streamed over video codecs now, so what you said about video compression should help there too.
I was also wondering how well it would look in stereo. My guess is it would still look strange (the "depth map" would also appear dithered) but the effect would be interesting to experiment with.
Yeah, but if your dithering is noticeable in the first place, you're doing it wrong.
Unless you want it as an artistic style.