Shooting for a Homepage Feature: Timelapse and Multi-exposure Photography the DIY Way (Make or Write Your Own Code!)

by SteveMann

swimstructable9up.gif
cement_diagram.png

What I love about Instructables is that it is photo-centric: the first thing you see when creating a new Instructable is "Add Images", before any text entry dialog appears! In the world we live in today, pictures are everything.

My last Instructable was featured on the Instructables homepage. The editors wrote "it's excellent, wonderful, and just plain awesome."

People have been asking me how I made my pictures, so in this Instructable I teach how to make animated .gif images and multiple exposures in a true DIY way, using very simple computer programs you can build or write yourself. I also share some practical tips for shooting for Instructables, e.g. make image borders with RGB=(246,246,246) and make sure images uploaded to Instructables do not exceed 10,000,000 bytes.

Since childhood I've been fascinated by the passage and stoppage of time, inspired by the work of Harold Edgerton (pictures of bullets going through apples, etc.), and I've been particularly interested in timelapse along the spacetime continuum of radio waves, which normally travel at the speed of light, i.e. in making them visible.

Whatever photographic subject you're shooting, timelapse is a good medium in which to express it. And if your project emits light, there are some great opportunities for multiple-exposure photography, and for something I invented many years ago that I call the Computer Enhanced Multiple Exposure Numerical Technique (CEMENT), a generalization of another of my inventions, HDR (High Dynamic Range) imaging.

Whereas HDR uses comparametric equations to analyze and combine differently exposed pictures of the same subject matter, CEMENT uses superposimetric equations to analyze and combine differently illuminated pictures of the same subject matter.
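To make that concrete with the simple power-law model used later in this Instructable: if a camera maps incoming light q to a pixel value f(q) = q^(1/2.2), then HDR relates differently exposed pictures f(q) and f(kq) of the same scene, where k is the exposure ratio, whereas CEMENT relates differently illuminated pictures f(q1), f(q2), and f(q1+q2) of the same scene. The three-picture test in a later step exploits exactly this relationship.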

Make or Improvise an Optimal Environment for Shooting Great Photos

nine-up08flat.png
swimstructable_tabletop_photo_setup.jpg
swimstructable_tabletop_photo_setup_closeup.jpg

There are two key tricks to good documentation of an Instructable: (1) consistency (e.g. so pictures are in good alignment), and (2) good lighting.

For consistency, a good tripod is useful, but if you don't have a tripod, or if a tripod is getting in the way of the shot (getting in the way of your working, or casting shadows on your work), you can affix the camera to a homemade mount. If I'm using a camera phone, I usually secure it to a support overhanging my work area.

I also usually affix things to the work area, e.g. I glue the breadboard to the desk temporarily (using hot melt glue -- enough to hold the board securely but not so much as to make it difficult to remove).

I have a DIY-style holography lab with an optical table that has threaded holes for securing objects, but any good solid workbench will work quite well. Try to avoid flimsy tables that shake between exposures.

For lighting I prefer to experiment and improvise with cheap and simple home-made fixtures, e.g. simple lamp holders, lamps, etc. DC light sources tend to give better results (less flicker, etc.). We like to build our own LED power supplies so we have better control of the lighting. That way the lights can be blanked more quickly than by simply cutting the power to a light that has a filter capacitor in it (and therefore a lot of afterglow). Blanking can even be done on a per-frame basis (e.g. to have the lights on in alternating frames of video, using an LM1881 sync separator, and therefore generate lightspaces).

The more control you can establish over lighting, the better you can manage it creatively and artistically. Plus I just simply prefer the DIY approach of building my own systems rather than using expensive professional photography equipment.

In the old days I shot on film, which had some gate weave as the film moved around from frame to frame, requiring pins to register the sprocket holes; with modern filmless cameras, getting good stability in an image sequence is much simpler.

For manual cameras I usually affix the zoom and focus with a small drop of hot melt glue to keep the lens from jiggling around while I'm working.

I also use manual exposure settings so that the exposure doesn't dance around from frame-to-frame as subject matter varies. A manually operated remote trigger is very helpful, to grab a frame for each step of a sequence. Typically I like to grab at least one frame for every component or wire inserted onto a breadboard. These can be downsampled later if desired.

In my studio, lab, or the like, I usually paint the walls black, and wear black clothing so that I have better control of the lighting. If you don't want to commit to black walls, you can temporarily use some black cloth to make a "blackground" or "blackdrop" (black background or black backdrop). I find that the choice of lighting is far more important than the choice of camera; most modern cameras have enough resolution, so the difference between a great picture and a good picture is in lightspace rather than imagespace.

For the pictures in my last Instructable I glued a piece of black acrylic to my table, and then glued my breadboard to the acrylic. I also glued two desk lamps to the acrylic and glued one floor lamp to the floor to keep it stable. After completing the circuit on the breadboard I pried the glue away so that I could wave the breadboard back and forth and show the radio waves as an augmented reality overlay.

The human eye does a really good job of integrating light, so that when you wave something back and forth the eye can see it nicely. But many cameras do a poor job of capturing a true and accurate rendition of what the eye sees.

If your project produces light, you have a really great opportunity to make it really shine, by using multiple exposures to capture the project the way the human eye perceives it. In my case I took a set of pictures in ambient light, which I gave filenames like a1.jpg, a2.jpg, a3.jpg, etc. (a sequence of images shot with ambient light). Then I shot another sequence of images in the dark, with longer exposures, to show light trails the same way that the eye sees them. I labeled these h1.jpg, h2.jpg, h3.jpg, ... for "head", and t1.jpg, t2.jpg, t3.jpg, etc., for "tail". The above example shows three ambient light images in the top row, three "heads" in the middle row, and three "CEMENTs" in the bottom row. Each CEMENT was made by CEMENTing the two images above it.

CEMENT (Computer Enhanced Multiple Exposure Numerical Technique)

guelph_sleeman.jpg
yearsAGO_years_proc.jpg

CEMENT (Computer Enhanced Multiple Exposure Numerical Technique) is a concept and simple computer program that I created about 30 years ago, in the 1980s, in FORTRAN and then ported to "C". I still use it regularly (several times a day in a typical workday), and in true DIY style it is best kept raw and simple (e.g. command-line interface, nothing too fancy or sophisticated). It's all so simple, in fact, that you can easily write it yourself without being held prisoner to any API or SDK!

Yet it gives you a powerful tool for managing lighting and exposures.

Over the years I've found that pixel count (more megapixels) matters less than dynamic range, the range of lightspace, and lighting in general. My HDR eyeglass runs at only NTSC resolution, yet allows me to see better than most cameras can, owing to a dynamic range of more than 100,000,000:1, even though the pixel count is not very high.

The best way to get control over exposures is to use multiple exposures, and manage each exposure separately. When shooting something that has LED lights on it, or a video display, or TV screen, for example, one shot taken with flash or ambient light, and another taken without flash or without the ambient light (e.g. in the dark) can be combined using the Computer Enhanced Multiple Exposure Numerical Technique (CEMENT) that I invented for combining multiple differently illuminated pictures of the same scene or subject matter.

Above you can see examples of pictures I took with a 4-hour-long exposure, and a ten-year-long exposure, using CEMENT (HDR with 9 exposure brackets every 2 minutes for 10 years).

I spent most of my time working through the philosophical, inventive, and mathematical aspects of CEMENT and less time polishing code, so the programs are very primitive and simple, in true DIY style; don't expect great code. You can download it from http://wearcam.org/cement.tgz

Here's also a mirror site in case wearcam.org is busy serving requests:

http://www.eyetap.org/cement.tgz

CEMENT is meant to be run on a simple GNU/Linux computer system.

Make (compile) the program using gcc.

If you have too much trouble getting it to compile, you can skip ahead to Step 3, and do it using Octave instead.

In the main CEMENT directory there are some example images you can learn and test with. Check that these are present:

$ ls *.jpg

sv035.jpg sv080.jpg sv097.jpg sv100.jpg sv101.jpg

Now you can try CEMENT.

First generate a lookup table:

$ makeLookup

With CEMENT, images are combined in lightspace, so you first convert one of the images to lightspace, CEMENT it to another image, and then convert the result back to imagespace.

If you care about this you can read more about comparametric and superposimetric equations, or you can just assume we're doing the math right, and continue.
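If it helps to see the idea in code, here is a minimal Octave sketch of what the pipeline does conceptually. It assumes a simple power-law (gamma 2.2) response like the one the powLookup22.txt table embodies; it's an illustration of the math, not the actual C implementation:

f1 = double(imread('sv035.jpg'))/255; % imagespace pixels, normalized to [0,1]
f2 = double(imread('sv080.jpg'))/255;
q1 = f1.^2.2; % imagespace to lightspace (what cementinit does)
q2 = f2.^2.2;
q = q1 + q2; % superimpose the two lightvectors (what cementi does)
f = min(q,1).^(1/2.2); % back to imagespace, clipped (what plm2pnm does)
imwrite(f, 'sketch.jpg');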

Once you've generated the lookup table, you can apply it to the first image. Let's say we want to CEMENT 35 and 80 together; we'll begin by initializing with sv035.jpg using RGB (Red, Green, Blue) values 1 1 1 (white):

$ cementinit sv035.jpg 1 1 1 -o spimelapse.plm
Init sv035.jpg (powLookup22.txt) 1 1 1 100%

If you forgot to makeLookup you'll get an error message:

Unable to open powLookup22.txt.
Segmentation fault

I love machines, so rather than exit gracefully, I print a warning message and then let the raw ungraceful exit occur.

Once you get cementinit going on sv035.jpg you've created a Portable Lightspace Map, with filename spimelapse.plm

Now CEMENT the second image into that PLM:

$ cementi spimelapse.plm sv080.jpg 1 1 1
p: 2.2 exp: 22 filename: powLookup22.txt
Add sv080.jpg 1 1 1 100%

and convert the result back to imagespace:

$ plm2pnm spimelapse.plm -o spimelapse.jpg
Create spimelapse.jpg (powLookup22.txt) -1 -1 -1 100%

Now you've just CEMENTed two pictures together!

If you got this far, please click "I made it!" and upload the two input images and the CEMENTed result.

Check Your Results: How to Write Your Own Version of CEMENT and Test It to See How Well It Works!

CEMENT_err_plot.png
deconism1c.jpg
deconism2c.jpg
deconism3c.jpg

How do we know how well CEMENT works?

One way to test it is to take 3 pictures of a scene or object lit by 2 lights, as shown above (from our ICIP 2004 paper; see the reference at the end of this step).

The first picture, call it "v1.jpg", is a picture taken with one light turned on. Call that one light lamp 1. In our case, that's the lamp to the left of our studio space (notice how it casts shadows to the right of the corresponding objects).

The second picture, call it "v2.jpg", is a picture with that light turned off and another light turned on, say lamp 2, so v2 is the picture as lit only by lamp 2. In our case, lamp 2 is to the right of our studio space (notice how it casts shadows to the left of the corresponding objects).

The third picture, call it "v3.jpg", is a picture with both lights turned on together. Notice how we see double shadows in this picture.

Now try CEMENTing v1 and v2 together, call the result "v12.jpg".

Now test to see how similar v12 is to v3.

The easiest way to read these images into an array is to download the raw images:

http://wearcam.org/instructableCEMENT/octave_scrip...

http://wearcam.org/instructableCEMENT/octave_scrip...

http://wearcam.org/instructableCEMENT/octave_scrip...

but if you have a slow net connection, just grab the .jpeg images and decompress them:

djpeg -gray v1.jpg > v1.pgm
djpeg -gray v2.jpg > v2.pgm
djpeg -gray v3.jpg > v3.pgm

then edit out the header so you have the raw data, saved, let's say, as files "v1", "v2", and "v3".

You can do this in Matlab, but if you're in the true DIY spirit, you'll prefer the free and open-source program "octave" (apt-get install octave), and then try this:

fid1=fopen('v1');
fid2=fopen('v2');
fid3=fopen('v3');
v1=fread(fid1,'uint8');
V1=reshape(v1,2000,1312); % these dimensions are assuming you downloaded from wearcam
v2=fread(fid2,'uint8');
V2=reshape(v2,2000,1312);
v3=fread(fid3,'uint8');
V3=reshape(v3,2000,1312);
colormap("gray");
image(V1/4);
image(V2/4);
image(V3/4);
V12=V1+V2; % CEMENT by simply adding the two single-lamp images
e=sum(sum((V12-V3).^2))

Which returns:

e = 9.0995e+09

If you downloaded from Instructables, the image dimensions may have changed, e.g. if the dimensions are something like 1024 by 672, then change the above reshape commands to:

V1=reshape(v1,1024,672);
and the same for V2 and V3.
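Alternatively, if your Octave build can read PGM files directly (most can), you can skip the header surgery and the reshape step entirely:

V1 = double(imread('v1.pgm')); % imread parses the PGM header for you
V2 = double(imread('v2.pgm'));
V3 = double(imread('v3.pgm'));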

We have just CEMENTed the two single-lamp images together in Octave, by simply adding them, and tested to see how similar the result is to the picture with both lights on.

Now instead of adding them, try taking the square root of the sum of their squares, i.e. like a "distance" metric:

V12=sqrt(V1.^2+V2.^2);
e=sum(sum((V12-V3).^2))

and what you get is a much lower error:

e = 6.5563e+08

Now try cubing them and taking the cube root; here the error comes out a little lower still.
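In Octave, following the same pattern as before:

V12=(V1.^3+V2.^3).^(1/3);
e=sum(sum((V12-V3).^2))

which gives: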

e = 2.2638e+08

More generally, we can raise them to some exponent, n, and then take the nth root. Of course n needn't necessarily be an integer. So let's try a whole bunch of different "n" values and plot a graph of the error as a function of "n". We can do this nicely by writing a simple Octave function in a file named "err.m":

function err=err(v1,v2,v3,N)
if(nargin~=4)
disp("err must have exactly 4 input arguments: v1,v2,v3,N");
end%if
if(min(size(N))>1)
disp("err only deals with vector N, not arrays of N");
end%if
for k=1:length(N)
n=N(k);
v12=(v1.^n+v2.^n).^(1/n); % CEMENT the two single-lamp images with exponent n
err(k)=sum(sum( (v12-v3).^2 )); % squared error against the two-lamp picture
end%for

Now we can test CEMENT for a whole bunch of "N" values in a long list, e.g. let's try 901 different N values going from 1 to 10 in steps of 0.01:

N=(1:.01:10).';

The error for each of these is in:

e=err(V1,V2,V3,N);

which is at a minimum around N = 3.27 or 3.28 (the error is close to equal for those values of N), so let's say that the optimal value of "N" is 3.275.
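To see the whole error curve (that's the plot shown above) and pick off the minimum automatically:

plot(N,e); xlabel('n'); ylabel('error');
[emin,imin]=min(e); % value and index of the smallest error
N(imin) % the corresponding n, about 3.275 here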

The optimal value of "N" depends on the response function of a particular camera, which in my case is the Nikon D2h.

Others who have done this Instructable report "n" values for other cameras, so I propose the creation of a "Massive Superposimetric Chart", much like the "Massive Dev Chart" for film:

Massive Superposimetric Chart:

Camera make and model      "n" (response function exponent)
Nikon D2H                  3.275
Nikon D60                  3.3
Sony RX100                 2.16
Canon Powershot S50        2.1875

Going further:

We've used a simple power law here for illustrative purposes, but in fact, we can do something a lot more powerful: we can actually unlock the secrets of any camera, non-parametrically, i.e. determine its true response function, from three pictures, as above, but instead of solving for one "n" we solve for the 256 quantimetric camera response function entries. See for example:

Manders, Corey, Chris Aimone, and Steve Mann. "Camera response function recovery from different illuminations of identical subject matter." In Proceedings of the 2004 IEEE International Conference on Image Processing (ICIP 2004), vol. 5, pp. 2965-2968. IEEE, 2004.

Automate CEMENT With TROWEL

convocationhall1.jpg
sv035.jpg
sv080.jpg
sv097.jpg
sv101.jpg

TROWEL is a tool for applying CEMENT.

In true DIY spirit, TROWEL and CEMENT are command-line based. Keep things pure and simple to start with. Then add fancy GUIs later (we wrote something called X-CEMENT, an X Windows front end, and eCEMENT, an online web-based interactive CEMENT, etc., but let's not go there yet!).

TROWEL is an acronym for To Render Our Wonderful Excellent Lightvectors. It is simply a Perl script that reads a file named "cement.txt" and calls CEMENT for each line of the file; each line specifies a filename and RGB (Red, Green, and Blue) values.

So for the previous example, create a cement.txt file like this:

sv035.jpg 1 1 1
sv080.jpg 1 1 1

and then run TROWEL with that cement.txt file in the current working directory:

$ trowel
Init sv035.jpg (powLookup22.txt) 1 1 1 100%
p: 2.2 exp: 22 filename: powLookup22.txt
Add sv080.jpg 1 1 1 100%
Create trowel_out.ppm (powLookup22.txt) -1 -1 -1 100%

Try experimenting with different colors and different RGB values, e.g. try changing the cement.txt file to:

sv035.jpg 1 1 0
sv080.jpg 1 2 9

and you will get something with nice yellow light coming from the window, and a bluish sky and building.
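In terms of the earlier Octave sketch, those numbers just scale each colour channel of a lightvector before the lightspace sum. Continuing that hypothetical gamma-2.2 model (q1 and q2 being the two colour images already converted to lightspace):

q = q1 .* reshape([1 1 0],1,1,3) + q2 .* reshape([1 2 9],1,1,3); % per-channel weights
f = min(q,1).^(1/2.2); % back to imagespace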

These are just low-resolution test images, included in the cement.tgz file, so that the examples run quickly.

You can get the raw data for the above picture at full resolution from http://wearcam.org/convocationhall.htm (click "index of lightvectors" to see the individual exposures that made this multi-exposure picture). If you want to reproduce my result exactly, use this textfile: http://wearcam.org/ece1766/lightspace/dusting/conv... Rename it to "cement.txt" and then run trowel on those lightvectors.

Making Image Sequences With CEMENT

spimelapse246246246.png

Instructables.com creates a light grey border around each picture. If you're creating diagrams for an Instructable, like the diagram above, you should set the background color to this same light grey, specifically RGB=(246,246,246)=#F6F6F6, because if you leave it as NaN (transparent or undefined) it gets set to black. I created the above drawing using Inkscape and then converted the SVG to a PNG file (it would be nice if Instructables had better support for SVG and other vector graphics).

To make a sequence of images, I usually use CEMENT to build each frame, calling TROWEL from within a shell script (usually bash), with a file named "cements.sh" in the same directory as the images being CEMENTed.

Here's an example "cements.sh" file where I generate 2 frames called out101.jpg and out102.jpg. The first frame is made by CEMENTing a1.jpg (ambient picture #1) and h1.jpg ("head" picture #1), into output frame 101, and then the second frame is made by CEMENTing a2 and h2 into output frame 102. The other image "r.jpg" is just the radar lit up and nothing else.

#!/bin/sh
echo "a1.jpg 1 1 1" > cement.txt
echo "h1.jpg 1 1 1" >> cement.txt
echo "r.jpg 1 1 1" >> cement.txt # radar only lit up
trowel
cjpeg -quality 98 trowel_out.ppm > out101.jpg
echo "a2.jpg 1 1 1" > cement.txt
echo "h2.jpg 1 1 1" >> cement.txt
echo "r.jpg 1 1 1" >> cement.txt # radar only lit up
trowel
cjpeg -quality 98 trowel_out.ppm > out102.jpg

You can get that raw data and images I used from http://wearcam.org/swim/swimstructable/swimstruct...

Here's the shell script I wrote to make the main image used in my homepage Feature last week:

http://wearcam.org/swim/swimstructable/swimstructa...

Now Make a ".gif" File But Be Sure Not to Exceed 10,000,000 Bytes!

steve_less_than_10000000bytes.gif

One thing I really love about the Instructables.com website is its excellent handling of .gif images.

In the self-portrait above, I took one picture with flash, and then 35 long-exposure pictures with a light bulb, with the cord defining an arc in front of the surveillance camera. I'm using a tube-type television receiver and amplifier system I created more than 30 years ago, in which four 6BQ5 tubes in a push-pull configuration (two in parallel for push and two in parallel for pull) drive a 220 volt light bulb directly with an amplified NTSC television signal. This results in video feedback as described in one of my earlier Instructables.

A nice feature of CEMENT is that you can keep CEMENTing in new lightvectors. In the above sequence, the first image is the one with flash, then the next one has the first bulb trace CEMENTed in, then the next bulb trace is CEMENTed into that total, and so on.

Here's the simple two-line shell script called "cementij.sh" I wrote to do the above (calling cementij.sh from another script, once for each output image):

#!/bin/sh
cementi temp.plm img$1.jpg 1 1 1 # CEMENT the next lightvector into the running total
plm2pnm temp.plm -o steve$1.jpg # convert the running total back to imagespace

Finally, at the end, I CEMENTed in the whole thing at double weight, and quadruple weight, etc., to build it up for a final crescendo of "(sur)veillance flux". Lastly, if the final image is the most interesting, rename it so it comes up first in the globbing of filenames when generating the .gif image. In this way, when the .gif file is non-animated (e.g. while loading initially, or when iconified) the first frame (and sometimes the only frame visible) will be the most interesting of the frames.

Make sure none of your .gif images exceeds 10,000,000 bytes, or it just won't work when you upload to Instructables, and there's no warning message (it just silently fails to load).

In the true DIY spirit, I like simple command-line interfaces, and my command of choice to generate the .gif files is "convert".
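For example, a typical invocation (not my exact script, which is linked below) looks something like "convert -delay 10 -loop 0 steve*.jpg steve.gif", where -delay is in hundredths of a second and -loop 0 makes the animation repeat forever.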

The main picture from my previous homepage Feature Instructable was generated using the following script:

convert.sh

This generates the various sizes of .gif files I used internally, one of them coming in at just under 10,000,000 bytes.

To get the exact size, I simply generate something near the size I think it should be, and then correct it. For example, the picture at the top of this page at original resolution of 1024 lines (steve1536x1024, close to HDTV resolution) was too big (22894455 bytes) by a factor of 22894455/10,000,000, i.e. 2.289... times too big.

Take the square root of that ratio (file size scales roughly with pixel area, i.e. width times height), and you get about the size reduction you need to hit the target size: cut each dimension by a factor of about 1.5131 and you get 1015x677.

Odd sizes like that don't tend to handle well on computers. So pick the next size down that keeps the original 3:2 aspect ratio and results in image dimensions that are reasonably composite numbers, e.g. dimensions divisible by 32 (a typical blocksize in image processing and handling).

This gives us 960x640, which ends up giving a .gif file that's 9,100,702 bytes, i.e. just under 10,000,000 bytes.
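Here's that arithmetic as a quick Octave sketch (assuming, as above, that you want to keep the 3:2 aspect ratio with both dimensions divisible by 32, which means the width should be a multiple of 96):

ratio = 22894455/10000000; % 2.289... times too many bytes
scale = sqrt(ratio) % ~1.5131 linear shrink to hit the byte budget
w0 = 1536/scale % ~1015, an awkward width
w = floor(w0/96)*96 % 960 (a multiple of 96, so the height works out too)
h = w*2/3 % 640 (3:2 aspect, divisible by 32)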

Have fun and make some great .gif pictures the DIY way (e.g. with code you make or write yourself)!