Inspired by this amazing video by Sebastian Lague, I thought I'd see if I could reproduce his results in WebGL2. In his example he uses compute shaders, but WebGL2 doesn't have compute shaders so I used the techniques mentioned here.
The simple explanation is there are N particles, each with a position and direction. They move every frame, each drawing a single pixel, so they effectively leave trails in the image. The image has a post-processing effect to spread the trails out (blur) and fade them to black over time. On top of that, each particle looks forward, left, and right a few pixels ahead of its direction and turns a random amount toward whichever of those 3 directions is brightest. That's it.
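For reference, here's a rough JavaScript sketch of one particle's update step. The names (`sense`, `trailMap`, `settings`, etc.) are mine, not taken from either implementation:

```js
// Rough sketch of one particle's update step (illustrative names, not the demo's code).
// trailMap.sample(x, y) is assumed to return the trail brightness at that pixel.
function updateParticle(p, trailMap, settings) {
  const { sensorDistance, sensorAngle, turnSpeed, speed } = settings;

  const sense = (angleOffset) => {
    const a = p.angle + angleOffset;
    return trailMap.sample(p.x + Math.cos(a) * sensorDistance,
                           p.y + Math.sin(a) * sensorDistance);
  };

  const forward = sense(0);
  const left = sense(sensorAngle);
  const right = sense(-sensorAngle);

  // turn a random amount toward whichever of the 3 samples is brightest
  if (forward >= left && forward >= right) {
    // keep going straight
  } else if (left > right) {
    p.angle += Math.random() * turnSpeed;
  } else {
    p.angle -= Math.random() * turnSpeed;
  }

  p.x += Math.cos(p.angle) * speed;
  p.y += Math.sin(p.angle) * speed;
  // the particle then deposits a bright pixel at (p.x, p.y) into the trail texture
}
```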
One big optimization: instead of using `sensorSize` to decide how large an area of pixels to sample when sensing (which is extremely slow), I generate a mipmap and `sensorSize` is just a bias for which mip level to use. It's not exactly the same, but at a glance I mostly didn't notice a difference and it's significantly faster.
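In shader terms the sensing lookup then boils down to a single `textureLod` fetch instead of a loop over an area of pixels. Something roughly like this (GLSL ES 3.00 in a JS string; the uniform names are mine):

```js
// Illustrative sensing snippet, assuming a mipmapped trail texture.
const senseSnippet = `
  uniform sampler2D trailTex;
  uniform float sensorSize;  // used as a mip level instead of a scan radius

  float sense(vec2 uv) {
    // textureLod reads an already-averaged mip level, so one fetch roughly
    // approximates averaging a sensorSize-sized neighborhood.
    return textureLod(trailTex, uv, sensorSize).r;
  }
`;
```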
You can try a version that scans here. On my 2014 MBP it runs too slowly (10-15fps).
Another optimization is the option to use 8-bit textures. That is also significantly faster. For most examples I don't notice much of a difference in the results. One obvious place you would notice is if you set `trailWeight` above 60.
This brings up one major difference, which is that my coloring is not nearly as pretty as Sebastian's. Would love to know what I'm doing wrong. The output of the simulation is just a grayscale texture. I apply a gradient map, which basically maps each of the gray values to some color gradient. Maybe I just need to experiment with more/better gradients.
Update:
I decided to make a WebGL1 version as well which uses these techniques. It's funny, though, that the WebGL2 version is more likely to work on mobile. This is because the WebGL2 version doesn't need floating point textures, since it can do the particle math using WebGL2's transform feedback feature. The WebGL1 version requires floating point textures, which are not available on mobile. There are ways to work around that problem but I was not interested in trying. WebGL2 exists on the majority of Android devices and it's behind a flag in Safari 14. Hopefully it will be enabled by default in Safari 15.
Inspired by the Secret Sky Music Festival. It just uses a low-res video to set the height and color of a grid of lines. Warning: there is sound! To stop it, close the tab or press back.
PS: I'm not 100% sure about the license on the videos. I made them myself on my PS4 using the PS4 OS's built-in feature, which is there entirely for sharing and lets users upload directly to various video sharing sites as well as copy to USB. But, they are of a copyrighted game with copyrighted music. They are fairly low-res, so I'm just crossing my fingers that it's ok to post them here.
Resurrecting an old example I wrote about using WebGL via three.js across multiple windows. It used to be part of the three.js examples but was removed
It shows an example of having multiple windows with different views into the same scene, like a 3D editor such as Maya would allow.
Click "create new window" in the top left, then drag a sphere. The windows stay in sync. You can create more than one extra window.
A port of https://github.com/greggman/hft-syncThreeJS that only works on the same machine, without happyFunTimes. The original syncs across machines using happyFunTimes.
This uses a similar technique to the one used for the multi-screen WebGL aquarium.
Be sure to open multiple windows!!!! and move and resize them so you can see all of them at once
This was a test to learn how to get canvas relative positions even with CSS transforms. The result is spelled out in this gist.
For a stackoverflow question: How to draw a grid that adds / removes detail as you zoom in and out.
Basically it just draws 2 grids, one for the "current" zoom level and one at 10x that zoom level, and fades between them as a transition. The important code is probably the part that decides what the current zoom level is. `zoom` is the distance from the origin, which goes from 0.0001 to 10000 in this example.
    const gridLevel = Math.log10(zoom * zoomAdjust);
    const gridFract = euclideanModulo(gridLevel, 1);
    const gridZoom = Math.pow(10, Math.floor(gridLevel));
`gridLevel` is then the power we need to raise 10 to for our current zoom level. `gridFract` is a linear interpolation from 0 to 1 between the current 2 levels, which we can use for deciding how much to fade. `gridZoom` is effectively the scale at which to draw our grid.
This code computes the alpha values to use for the 2 grids
    const alpha1 = Math.max((1 - gridFract) * 1);
    const alpha2 = Math.max(gridFract * 10) - 1;
`zoomAdjust` is 1, but you can change it to decide where the transition between grids happens.
This was an experiment in trying to make a camera that shows a group of players. Drag the 3 blue dots and see the camera try to show them. Unfortunately I never got it to work. The idea was to show them from the best angle, so I never want them to line up behind each other, but the problem is that which side to look from ends up swinging wildly, which is no good. This is probably a solved problem written up in some book on game dev, but I actually don't know a game that has solved it. Most games either let the user control the camera, OR they put the camera behind the player and let them drag it as though it's on a rope, OR they pick a fixed angle and at most zoom in and out without changing the direction of the camera. If you know of a solution post an issue.
This shows an issue in Chrome (at least still as of v66), which is that when a webpage asks to use your camera (or your mic) that permission is permanent. You can see why you might want this. Using Google Hangouts or Facebook Messenger, maybe you just want the camera to work.
Unfortunately it has a bunch of issues.
The biggest is that the permission is for the domain and works in iframes. So, some site says "Check out this cool circus mirror demo" and you click to enable the camera. Now that same domain can embed invisible iframes all around the net in ads that turn on your camera/mic and spy on you.
Another related issue is user expectation. When I run FaceTime on my phone I expect the camera to come on. When I go to randomsite.com I don't. So, before I run the FaceTime app I'll make sure I'm presentable, whereas when I go to randomsite.com I will not. Randomsite.com/old-post.html might have had a camera demo I viewed a year ago. Randomsite.com/new-post.html might have a page that takes a picture immediately and uploads it publicly. Maybe it's called "what I'm doing now" and takes a pic and immediately uploads it to imgur for fun. But I didn't know my picture was going to be taken, yet randomsite.com has permission to use my camera at any time once I've given it permission once.
It seems to me browsers should treat the camera just like GPS. They should ask for permission EVERY SINGLE TIME. There should maybe be the option to give permanent permission but the default should be every time. Firefox has those options. Chrome does not.
I'm surprised people still use skyboxes in 3d. They make a 3d cube and map a texture to the inside and then go through various contortions to render that without clipping issues etc.
It seems like a simpler solution is just to render a cubemap to a plane with a very simple shader. The plane won't have any clipping issues.
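Roughly, the idea is a full-screen quad pinned to the far plane whose fragment shader un-projects each point back into a direction and samples the cubemap with it. A minimal sketch, with my own names (this isn't three.js's or twgl's actual API):

```js
// Vertex shader: draw a full-screen quad on the far plane so it never clips.
const skyboxVS = `
  attribute vec4 position;   // clip-space quad corners
  varying vec4 v_position;
  void main() {
    v_position = position;
    gl_Position = vec4(position.xy, 1.0, 1.0);  // z = 1: as far away as possible
  }
`;

// Fragment shader: turn each pixel back into a world direction and sample the cubemap.
const skyboxFS = `
  precision mediump float;
  uniform samplerCube skybox;
  uniform mat4 viewDirectionProjectionInverse;  // inverse of projection * rotation-only view
  varying vec4 v_position;
  void main() {
    vec4 t = viewDirectionProjectionInverse * v_position;
    gl_FragColor = textureCube(skybox, normalize(t.xyz / t.w));
  }
`;
```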
I have a feeling the reasons are (1) tradition: it worked before shaders, so why change; and (2) asset pipeline: if you have a system to import models and the textures they use, then adding a cube model with textures just works, whereas to use this method you'd need at least some way of tagging the model to use a custom shader.
In any case I wrote this back when I first started doing webgl stuff. I put an example on twgl and then here's a three.js example.
Someone asked about GPU perf and overdraw. This example shows how GPUs are not really that fast. Or rather that we're asking them to do a lot of work and that can really add up.
This sample just draws simple fullscreen quads with very simple shaders. Up the count to draw more. You'll see even fast GPUs can't draw that many before not being able to reach 60fps. Increase the value slowly else your OS might reset your GPU.
Someone asked how to make an x-ray-ish looking image. You can get that by making surfaces brighter where their view-space normal is perpendicular to the view direction.
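The core of it is just a fresnel-style term. A sketch of what the fragment shader might look like (names are mine):

```js
// Illustrative x-ray-ish fragment shader (GLSL in a JS string).
const xrayFS = `
  precision mediump float;
  varying vec3 v_viewNormal;  // the normal transformed into view space

  void main() {
    // In view space the view direction is roughly (0, 0, 1), so the dot product
    // is just the normal's z. Silhouette edges (perpendicular normals) get
    // bright; surfaces facing the camera stay dark.
    float brightness = 1.0 - abs(normalize(v_viewNormal).z);
    gl_FragColor = vec4(vec3(brightness), 1.0);
  }
`;
```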
For HappyFunTimes I need to pick unique colors for each player. I have no idea when I start how many players will be playing. Players come and go as a game progresses, so for example I can't evenly space the colors, since I don't know up front how many players there will be. But even spacing the colors evenly isn't enough; after about 8 colors they start to blend together. So I need to do something else: make them darker or more saturated.
Of course ideally each game would let you choose a character or assign you one. So if there are 8 characters + 8 colors that's probably enough. Tonde-Iko did this with something like 12 characters to pick from. It also let the players pick the character and their own color but of course that was a bunch of work to implement.
Another idea is to use patterns. One player gets a solid color, another stripes, another checkers or an outline or something. PowPow tried to do that. Watching players play, I'm not sure it was successful. Of course maybe the issue is more the game and less the colors of the players. On Tonde-Iko, with 6 screens and knowing where you start, it's easy to just look at your character alone, whereas on PowPow it's relatively hard to pick yourself out. You have to really be concentrating.
I needed to see what colors were being picked and experiment with picking algorithms. I can't really say I succeeded, but at least I can visualize the colors.
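For context, here's one simple spread-the-hues approach. This is just a sketch of the general idea, not necessarily what HappyFunTimes actually does:

```js
// Space hues by the golden ratio so colors stay spread out no matter how many
// players join, then start varying saturation/lightness once hues get crowded.
function colorForPlayer(playerNdx) {
  const goldenRatio = 0.618033988749895;
  const hue = (playerNdx * goldenRatio) % 1;
  const band = Math.floor(playerNdx / 8);   // every 8 players, switch "bands"
  const sat = 100 - (band % 2) * 40;        // alternate saturation
  const light = 50 - (band % 3) * 15;       // cycle lightness darker
  return `hsl(${Math.round(hue * 360)}, ${sat}%, ${light}%)`;
}
```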
Tweening probably has many definitions. I heard one guy call it mapping, as in mapping one range to another range. I've always called that lerping. The simplest lerp is something like
value = start + (end - start) * lerp
Then as lerp goes from 0 to 1, value goes from start to end.
People have come up with easing functions so that instead of linearly going from 0 to 1 you get some kind of ease in, ease out, or both.
value = start + (end - start) * easingFunc(lerp)
Where easingFunc might be something like
    function easingFunc(lerp) {
      return -0.5 * (Math.cos(Math.PI * lerp) - 1);
    }
This gives you a nice ease in and ease out.
I'd been doing that for years, but I'd always hand coded it, meaning I'd just put calls to the easing functions directly in the code where I needed them.
Recently I saw this pretty cool presentation about making your game "juicy" and I decided to look into the code a little to see what ideas to learn from. The big one I came away with from looking at their code was using a tweening library.
Basically a tweening library lets you create tiny mini tasks that lerp some value over time for you. What makes them useful is you can just fire and forget. Much easier than hand writing each case.
So anyway I wrote my own tweening library and have been playing around with it.
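The gist of such a library, sketched in a few lines. This is just the idea, not the actual library's API:

```js
// Minimal fire-and-forget tween manager (illustrative, not a real library).
const tweens = [];

function addTween(obj, prop, end, duration, ease = t => t) {
  tweens.push({ obj, prop, start: obj[prop], end, duration, ease, time: 0 });
}

// call once per frame with the elapsed time in seconds
function updateTweens(deltaTime) {
  for (let i = tweens.length - 1; i >= 0; --i) {
    const t = tweens[i];
    t.time = Math.min(t.time + deltaTime, t.duration);
    const lerp = t.ease(t.time / t.duration);
    t.obj[t.prop] = t.start + (t.end - t.start) * lerp;
    if (t.time === t.duration) {
      tweens.splice(i, 1);  // done: the mini task removes itself
    }
  }
}

// usage: addTween(sprite, 'x', 300, 0.5, easingFunc);  // then forget about it
```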
My math sucks. I needed to convert from field of view Y to field of view X. I googled it and found an answer on Stack Overflow. The marked answer was WRONG! :P One of the unmarked answers was correct. I mentioned to the guy that his answer was wrong; with just a little thought it was clearly 100% wrong. His first response was effectively "don't blame answers you don't like on the math. The math is correct." I wrote this to show him it wasn't.
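For the record, the conversion itself is a one-liner (angles in radians, aspect = width / height):

```js
function fovYToFovX(fovY, aspect) {
  return 2 * Math.atan(Math.tan(fovY / 2) * aspect);
}

// e.g. a 60° vertical field of view on a 16:9 display is ≈ 91.5° horizontal
const fovX = fovYToFovX(60 * Math.PI / 180, 16 / 9) * 180 / Math.PI;
```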
This was an attempt at making a glyph postprocessing effect to turn whatever is being displayed into text.
textme01: First step, make a grid of letters
textme02: Second step, organize by brightness
textme03: Third step, support different glyphs
textme04: Fourth step, render with it
textme05: Fifth step, add more levels through double indirection
textme06: Sixth step, render with it
textme07: Seventh step, try just brightness, no corners.
textme08: Eighth step, render with it
textme09: Ninth step, add inverse and clip glyphs.
textme10: Tenth step, render with it
Note: under glyphs de-select ascii and select boxElements and geometricShapes. Also under debug select dosColors. Try changing the scene as well.
---
Originally I thought I'd take 2x2 pixels from the original picture and try to map that to a glyph. So, I look at the 4 corners of each glyph rectangle and compute a brightness. Then for 8 levels of brightness per corner I try to find the best matching glyph. With 8 levels of brightness there are 4096 possible combinations, and for each of those I try to find the best glyph. That's effectively 64x64 glyphs across a texture. If each glyph is 32x32 pixels that's already a giant 2048x2048 texture, so I couldn't do more than 8 levels.
It works, and sometimes you can even tell it's matching shapes on the edges. I wasn't totally happy with the results. It seemed like the system was quite often picking the same glyph as the closest match for some combination of colors. For example, if you check this example, where it shows each glyph over the 2x2 brightness levels it most closely matches, you'll see '[', 't', 'F', and 'T' repeat as the best match in a ton of places.
I thought maybe I should say a glyph is brighter if 2 pixels next to each other are on. In other words `**..**` is considered brighter than `*.*.`. That's the `weightNeighbors` option. But it wasn't enough of a help.
So... step05, I tried adding a level of indirection. Instead of making the glyph texture be 8x8x8x8, make a texture with just each glyph once and then a mapping texture that maps from a 2x2 brightness to a glyph. In that case I can do 64 levels of brightness because the map will be 64x64x64x64, which is a 4096x4096 texture.
Unfortunately it doesn't look much better and it takes way too long to find the closest match for each 2x2 level of brightness. At 32 levels that's 32*32*32*32, or 1 million 2x2 brightness combinations for which it needs to find the best matching glyph. It takes 3 to 4 minutes! That's including using a kd-tree to make the search faster. I never tried 64 levels, which is 16 million combinations, so that should take 16 times longer, or about 1 hour to compute! Of course I could pre-compute, but I wanted to be able to pick the font at runtime. I also wanted to use the user's fonts, which are different on each browser and OS, rather than ship a glyph texture.
So... I said to heck with the 4 corners, I'll just do 1 brightness level. I'll just sort the glyphs by brightness, put them all in a texture, and then map to brightness directly. I still pull out 4 pixels from the original image and average them to get the brightness when mapping to glyphs. The hope is that gives me 10 bits of brightness instead of just 8. It's different but definitely not better.
Jaume Sanchez Elias pointed me to this well known? sorted glyph texture.
I noticed it includes all of the characters inverted (black on white instead of white on black). So, I added that, which made me notice the padding around my glyphs was really affecting the glyph choices. If there is a border around the glyph, then when its inverse is used it will have a white border, making it bright.
I had put in an option to trim the border called `pack`, so I exposed that. Trimming 2 pixels off each glyph gets something similar to Jaume's demo.
At least if you look at the sphere you'll see it's fairly evenly shaded with characters.
This points out a problem: I have no way to know what size the glyphs are in HTML5. I figure out the glyph extents by drawing each glyph one at a time to the middle of a canvas and then scanning the pixels of the canvas to compute the extents. You'd think that if you ask for a `monospace` font all the glyphs would be the same size, or at least no larger than, say, a `W` or an `M`, but as you add in characters higher up the Unicode chart that rule doesn't hold. So, as you add in characters you have to manually bump the `pack` value.
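Here's roughly what that measuring step looks like. This is a sketch of the approach described above, not the exact code used:

```js
// Draw one glyph to a canvas and scan the pixels to find its extents.
function measureGlyph(glyph, font, size = 64) {
  const canvas = document.createElement('canvas');
  canvas.width = size * 2;
  canvas.height = size * 2;
  const ctx = canvas.getContext('2d');
  ctx.font = `${size}px ${font}`;
  ctx.fillStyle = 'white';
  ctx.fillText(glyph, size / 2, size);  // draw near the middle of the canvas

  const { data, width, height } = ctx.getImageData(0, 0, canvas.width, canvas.height);
  let minX = width, minY = height, maxX = -1, maxY = -1;
  for (let y = 0; y < height; ++y) {
    for (let x = 0; x < width; ++x) {
      if (data[(y * width + x) * 4 + 3]) {  // any non-zero alpha counts
        minX = Math.min(minX, x);
        minY = Math.min(minY, y);
        maxX = Math.max(maxX, x);
        maxY = Math.max(maxY, y);
      }
    }
  }
  return maxX < 0 ? undefined : { minX, minY, maxX, maxY };
}
```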
Anyway, I tried applying the pack and inverse options to the 2x2 version of the glyph postprocessing but so far it doesn't look any better so I haven't pushed that version.
Also under debug is the `dosColors` option, which I really haven't given much thought to doing better.
An attempt to make lines look like they've been sketched in a low-framerate, hand-drawn-animation kind of way. Still a work in progress.
This was a step on the way to sketchtext. It was easier to use built-in primitives first rather than figure out how to get the outlines of glyphs from a font.
A work in progress that started as an example to an answer on SO.
Brendan Annable suggested moving the vertices behind the camera in the vertex shader instead of discarding in the fragment shader, because that would be faster: the GPU will clip the triangle instead of trying to draw every pixel and having it get discarded. So, here are those versions.
A sort visualizer inspired by this video.
Mostly I had just never personally implemented a quicksort. I first googled "quicksort explained", which brought up this AWESOME video that extremely clearly explains the algorithm. So I implemented that.
But then I found this page which shows a bunch of algorithms and describes them very tersely. Their quicksort is nothing like the one in the video above. Theirs moves a random `pivot` through the array going only to the right, whereas the one above uses the leftmost element as its `pivot`, takes it out of the array, and uses left and right pointers to move stuff before and after the pivot, finally inserting the pivot into the gap that's found.
The LR one appears to be much faster than the right-only one. I also don't quite get the point of choosing a random pivot. I suppose it's to try to avoid the worst case, which I think is an inversely sorted array when starting from element #1. I'm sure somewhere on the net there's an explanation of why a random pivot is best for algo #2. I'm a little worried there's a bug, because the non-LR one takes so long on the uniform array example.
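To make the difference concrete, here's a hedged sketch of the two partition styles (not the exact code from either source):

```js
// "LR" style: leftmost element as pivot, left/right pointers converging.
function partitionLR(a, lo, hi) {
  const pivot = a[lo];
  let l = lo + 1;
  let r = hi;
  while (true) {
    while (l <= r && a[l] < pivot) ++l;
    while (l <= r && a[r] > pivot) --r;
    if (l >= r) break;
    [a[l], a[r]] = [a[r], a[l]];
    ++l;
    --r;
  }
  [a[lo], a[r]] = [a[r], a[lo]];  // drop the pivot into the gap
  return r;
}

// "right only" style: a single index sweeps right past the pivot.
function partitionRightOnly(a, lo, hi) {
  const pivot = a[hi];  // a randomly chosen element would be swapped here first
  let store = lo;
  for (let i = lo; i < hi; ++i) {
    if (a[i] < pivot) {
      [a[i], a[store]] = [a[store], a[i]];
      ++store;
    }
  }
  [a[store], a[hi]] = [a[hi], a[store]];
  return store;
}

function quicksort(a, lo = 0, hi = a.length - 1, partition = partitionLR) {
  if (lo < hi) {
    const p = partition(a, lo, hi);
    quicksort(a, lo, p - 1, partition);
    quicksort(a, p + 1, hi, partition);
  }
}
```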
Anyway, I added a bunch more, including the quicksort 3-way partition. I also added cycle times, but they make no sense currently. It's basically 2 cycles for a load, 3 for a store, 1 for a compare, though a compare includes a load so it's 3 cycles. But cache hits are not currently taken into account, nor are a few array accesses in the algos. Maybe the only thing you care about is array accesses and cache misses, in which case I should remove the various set, cmp, copy, swap stuff, just have get/put, count cycles, and try to guess cache hits.
Inspired by an exhibit I saw about Marcel Duchamp at the Moderna Museet in Stockholm.
Some samples:
Inspired by Spirograph toys.
Some Samples:
Written to demonstrate how the original Missile Command was created.
Most games today erase the entire display every frame. Back when Missile Command was created, machines were not fast enough to draw the entire display in pixels at 60 frames per second. So, Missile Command never erases the display. Missiles and bombs are just a line drawing function drawing the lead pixel and then erasing that lead pixel with a trail pixel. This leaves a trail while drawing only 2 pixels per frame per bomb/missile. To erase the trail the line is drawn again in black.
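In canvas terms, each missile's per-frame work is something like this (an illustrative sketch, not the demo's actual code):

```js
// Each frame a missile only touches 2 pixels: overwrite the old lead pixel
// with the trail color, then draw the new lead pixel one step further along.
function updateMissile(ctx, m) {
  ctx.fillStyle = 'red';                 // trail color
  ctx.fillRect(m.x | 0, m.y | 0, 1, 1);  // the old lead becomes trail

  m.x += m.dx;                           // advance along the line toward the target
  m.y += m.dy;

  ctx.fillStyle = 'white';               // lead color
  ctx.fillRect(m.x | 0, m.y | 0, 1, 1);  // draw the new lead pixel
}
```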
The explosions similarly only draw the outer edge of the circle. Because nothing is erased the circle expands. To clear the explosion black circles are drawn in the opposite direction.
These techniques make the game fast even on the 1MHz machine it was originally written on. They also give Missile Command some of its distinctive looks, like when two explosions overlap and one starts erasing the other, leaving crescent shapes. In this case though I'm using the HTML5 Canvas arc function to draw the circle. The original Missile Command likely used a 1-pixel-thick function to draw the circle. Unfortunately HTML5 Canvas has no way to draw circles exactly 1 pixel thick. I could implement my own circle function but I was too lazy.
The glowing colors were achieved back in the 80s by color cycling: the system could only display 4 colors, but you could choose which 4 colors. By changing 1 of the 4 colors every frame to a different color you'd get a glowing color "for free", with almost zero work on the part of the processor. Systems today with their 24-bit color displays can't really do color cycling. To simulate color cycling requires re-drawing every pixel that changes color, either directly or through various GPU techniques. Since Missile Command only really needs 1 glowing color, I achieve the glow by changing the background color of the canvas every frame and filling the canvas with opaque black. I draw the tips of the missiles and the explosions by setting pixels to 0,0,0,0. This makes them show through to the background color. Since it's changing every frame I get "free" glowing similar to the color cycling technique from the 80s.
Just wanted to see if canvas supports drawing with emoji. Yes on Safari 7+, FF 29+. No on Chrome as of v34.
Update: not sure when but Chrome now works.