Nature Time Lapse of the World

Since the 2013 road trip, I have managed to compile and fill in most of the gaps in this project – the project I’ve been working on intermittently for 3 years. It’s a time lapse I started in 2010, when I was caught and shocked by the realism of time lapse photography. Those pioneers on YouTube and Vimeo made nature time lapse videos with moving clouds and starry nights.

I made a rough calculation and immediately realized how fast a DSLR’s shutter life would be consumed: about 1000 exposures for a single time lapse at an interval of 6 seconds. So I started with a bridge camera, but its image quality and sluggish response greatly reduced the success rate. Finally, so as to have only one camera to work with during travel, I started serious time lapse photography. The first series was shot in the UK during my internship, and I was amazed at how transient yet colorful nature could be.
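That rough calculation is easy to reproduce; a small sketch (the 150,000-actuation rating is an assumed figure typical of consumer DSLR shutters, not a number from any specific camera):

```python
# Rough shutter-life budget for DSLR time lapse shooting.
INTERVAL_S = 6               # seconds between exposures
CLIP_FRAMES = 1000           # exposures for one finished clip
SHUTTER_RATING = 150_000     # assumed rated actuations (illustrative)

clip_minutes = CLIP_FRAMES * INTERVAL_S / 60
clips_per_shutter = SHUTTER_RATING // CLIP_FRAMES

print(f"One clip covers {clip_minutes:.0f} minutes of real time")
print(f"The rated shutter life is spent after ~{clips_per_shutter} clips")
```

At 30 fps playback, those 1000 frames become barely half a minute of video, which is why the shutter wears out so quickly.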

Gradually, problems began to appear. Flickering posed a challenge to correct. Then came the annoying vibration, sometimes caused by wind, sometimes by necessary manual control interruptions. As more videos were taken between sunrise and sunset, exposure control became a serious hurdle to smoothing the transitions. And worst of all was the physical perseverance needed to withstand the wind and bitter cold for an hour at a time.

Rio Grande Sunset

In 2011 the music was chosen and I started to fill each melody with an appropriate clip. The flicker was vastly reduced with a VirtualDub filter. When better clips became available, entire sections of the edit would be moved around or discarded. In 2012 I got my new D7000, whose new Active D-Lighting feature was both a curse and a blessing. On one hand, it could greatly expand the dynamic range of a single-shot JPEG; on the other, it introduced extreme flicker between images because its algorithm treats each image individually. People moving through the scene were suppressed with a statistical filter I coded myself. Every time a new semester started, it became another period of downtime in which I doubted whether I could ever finish this project.
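The statistical filter mentioned above amounts to a per-pixel temporal median: anything that appears in only a few frames, like a passing pedestrian, gets voted out. A minimal sketch of that idea (the window size of 5 is an illustrative assumption, not my actual setting):

```python
import numpy as np

def suppress_transients(frames: np.ndarray, window: int = 5) -> np.ndarray:
    """Replace each frame with the per-pixel median of a sliding
    temporal window, suppressing short-lived objects such as
    people walking through the scene.

    frames: array of shape (n_frames, height, width[, channels])
    """
    n = len(frames)
    half = window // 2
    out = np.empty_like(frames)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out[i] = np.median(frames[lo:hi], axis=0)
    return out
```

The trade-off is that anything moving slower than the window length, such as clouds, survives, which is exactly what a time lapse wants.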

At the end of 2013 I traveled to Big Bend National Park in Texas for the final shot. Against all odds, I managed to suppress the light from passing cars and finished this project. Please enjoy the video. For the best effect, use full screen, dim the lights and turn the sound up! If you like it, please help me share it so others can see it as well.

Technical Detail

The time lapses were mostly taken at a 6-second interval. To reduce the data burden, most were shot at medium resolution. Preprocessing was done at 4K resolution where applicable.

Panning was done with an AstroTrac pointed at the zenith. Cameras included a D200 and a D7000 with a Samyang 14mm lens. One shot was taken with a Canon 6D.


Beware of DIFF_TERM

The Vivado tool chain can occasionally be wrong. And it screwed me up big time!

A bit of background on where the problem occurred

Serial data reception is usually straightforward because each channel is independent of the others. A simple IDELAYE2 can sweep through all possible delay tap values and feed the data to an ISERDESE2 for deserialization. The final value should be set at the mid-point of the window where deserialization is most reliable. But for parallel data, all bits in a word must be perfectly deskewed and aligned! Assume 8 data pairs have 16 taps each to scan through; the number of combinations would then be an astronomical 16^8! The workaround is to call in a second IDELAYE2 to independently validate the result from the primary data path, which makes the deskew of each channel independent.
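The per-channel sweep boils down to finding the widest run of tap values where the training pattern decodes correctly, then parking in its middle. A sketch of that search (in Python for clarity; on hardware the pass/fail vector would come from the actual tap scan, and the function name is my own):

```python
def best_tap(pass_fail):
    """Given one boolean per IDELAYE2 tap value (True = training
    pattern decoded correctly at that tap), return the midpoint
    of the longest contiguous passing window."""
    best_start, best_len = 0, 0
    start = None
    for i, ok in enumerate(list(pass_fail) + [False]):  # sentinel flushes last run
        if ok and start is None:
            start = i
        elif not ok and start is not None:
            if i - start > best_len:
                best_start, best_len = start, i - start
            start = None
    if best_len == 0:
        raise ValueError("no passing tap found")
    return best_start + best_len // 2
```

Parking mid-window maximizes the timing margin on both sides of the sampling point.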

How the problem occurred

The implemented design worked perfectly in hardware on each individual LVDS word channel. I later combined the channels, but did not connect the deskew feedback into the downstream logic for this partial validation. This is where Vivado got “smart”: it optimized out all my unused logic, including the secondary (slave) IDELAYE2. But in the process, it also “optimized” away the DIFF_TERM on the IBUFDS_DIFF_OUT! This severely degraded the signal integrity on some bit lines, which in turn made my data reception fail completely.

The following Tcl command can be used to check whether all your IBUFDS cells have DIFF_TERM set:

get_property DIFF_TERM [get_cells -hier -regexp {.*IBUFDS_inst}]

If every returned value is 1, then DIFF_TERM is set to TRUE on all of them.


Interfacing Nikon CMOS module with ZYNQ SoC

In my last post I showed a CMOS camera in progress. This time I’m going to deviate from that topic a little bit by interfacing the image sensor module from D5100, the mighty IMX071.

On the same relay board that serves as the carrier card for the microZed, I intentionally included a flex connector for the Nikon module. The connector carries 8 data pairs and one accompanying clock signal, all using the sub-LVDS signaling standard. The rest of the pins are mostly power switches, SPI for configuration, and synchronization signals.


The same connector on the Nikon D5100 motherboard

The great thing about an FPGA is its versatile I/O standards. On the ZYNQ fabric side, each I/O bank can host multiple I/O standards as long as their voltages are the same. Here I combined LVDS25 with LVCMOS25 for control in I/O bank 35. LVDS25 is required to enable the 100 Ohm differential ODT (On-Die Termination). A simplified block design is shown below; a 2.5V to 1.8V logic shifter has been omitted.


The LVDS clock signal drives the entire logic fabric, which includes 2 major components. The first is a signal receiver/decoder that writes data into a FIFO; the AXI-DMA then transfers the data from the FIFO into the DDR3 memory on the processor side through the AXI-HP port. The second is a sensor driver responsible for generating the correct line synchronization pulses for the sensor. The driver is configured through the AXI-GP port by the program running on the PS.

With everything connected, we have the microZed stacked on top of my relay board, the D5100 CMOS module on the top right and its DC-DC power board on the bottom right. Upon power-up, the logic loads the bitstream configuration. Once this is done, the program running on the ZYNQ ARM processor configures the system, interrupts and PS I/O ports. Then the sensor powers up, followed by the register setting sequence. The final step is the actual synchronization driving and DMA run. After data acquisition completes, the program writes the image from RAM to the SD card.


A decoded test image (lens not attached) between vertical blanking regions

We are currently designing the full product! I’ll keep you posted!

The Making of a Cooled CMOS Camera – P1

As my last post suggested, I have been working on a camera design. Right now the “prototype”, as I would call it, is in the test phase. The project actually dates back 3 years, to when we envisioned a large focal area CCD imager customized for deep sky astrophotography. At that time, the price of such a camera on the commercial market was prohibitive. The most suitable monochromatic chip was the interline KAI-11002 with a size of 36 x 24mm^2. Unlike a full frame CCD, which necessitates a mechanical shutter for exposure control, an interline CCD can handle this electronically. However, the addition of a shielded VCCD region greatly impacts the quantum efficiency and full well capacity. Beyond that, Kodak CCDs don’t seem to recover QE well enough with microlenses, peaking at 50% and reaching only 30% at 650nm on a B/W device. Later on, we dug deep into the datasheet and soon abandoned the project: the dark current accumulated in the VCCD was simply too high at the slow readout speed required for a decent level of read noise.


The KAI-11002ABA in the original plan

What happened next was dramatic. After getting my hands on the D7000 and hacking it, I was shocked by how well a CMOS sensor performs. I soon realized that the era of the CCD in astronomy might be coming to an end; sooner or later, telescopes too would embrace the low-noise CMOS. When Kodak spun off its imaging division into Truesense, it soon released its first CMOS sensor with sub-4e- read noise and CCD-like dark current. We decided to give it a try.


With the sensor in hand, big challenges lay ahead. To speed things up, I decided to use the microZed SOM board as the embedded controller, at least for the prototype, so only the power supplies and a connecting PCB had to be designed. The Zynq-7010 configures the sensor over SPI through its MIO pins on the ARM PS side. The data is received by the FPGA programmable logic (PL) and relayed to the PS DDR3 memory. The data can then undergo complex calibration and be saved to the SD card or transferred over GbE/USB.


The microZed SOM with 1GB DDR3 and various I/O

The board was then designed and fabricated, with a Socket 754 CPU socket holding the sensor. The main PCB contains the voltage regulators, oscillator and temperature sensing circuits.



The data lines go through a relay board, which also provides power to the Zynq PL I/O banks. The whole stack was then triple-checked before applying power. After weeks of hardware and software debugging, the sensor was finally configured and running at the designated frame rate. Now it’s time to work on the Verilog needed to receive the data. I’m going to cover that in the next part.

Modifying a Socket 754 CPU socket for an image sensor

Recently I have been working on a camera design using a CMOS image sensor. The problem is that the sensor comes in a custom uPGA package (1.27mm pin pitch) with a lot of pins. The actual socket built for this sensor is very expensive, and it requires force to mount and tools to dismount. Luckily, a lot of old CPU sockets use a similar uPGA standard, and they are ZIF (Zero Insertion Force) sockets. The problem is that they have far too many extra pins you don’t want. A straightforward approach is to cut off every unused pin on the package, but that risks bending an unintended pin in the process. And most often the cut is not clean to the edge, preventing a flat and smooth fit on the surface of the PCB.

After a bit of investigation, I found that the upper movable lid (white) can be easily disassembled by gently prying the side rail. This exposes the bottom part holding all the ZIF contacts, with the pins in each square hole.


The lid holding the CPU

The release/lock handle

Bottom part with tons of ZIF contacts

Now, with a jumper cable pin (2.54mm pitch male end), you can easily pop out all the unwanted contacts, leaving a custom ZIF socket!

And here’s the actual ZIF contact.

The ZIF acceptor assembly is inserted from the bottom and locked inside the square hole. The CPU pin comes down from above. Once the locking handle rotates, all the pins are carried and pushed toward the narrower slot between the 2 U-shaped hairpins to establish electrical contact.

The Dark Side of Image Sensors

An ideal image sensor, one with negligible read noise and dark current and nearly 100% quantum efficiency, is always desired in astronomical imaging. It would mean your SNR is limited only by the background sky noise and exposure time. Reality is, most of the time, far from ideal.

In the last decade, fierce competition for consumer camera market share and the accompanying investment in R&D perfected the CMOS sensor to a point far exceeding the performance of CCDs. Here, let’s look at the imperfections left in some of these famous CMOS sensors.

Recently I ported the “Dark Current Enable Tool” to the 3rd generation Nikon DSLRs. This made it possible to evaluate the image sensor more accurately and conveniently.

Uneven Bias Level

In the imaging pipeline, the first calibration step is subtraction of the master bias frame. This evens up the pedestal of every pixel before any scaling step. The master bias is usually created by combining dozens of bias frames. Just by looking at the master bias, we can check how well the image sensor and readout circuit are made.


Portion of bias frame in NC81338L (D3, D700) at ISO200

The above bias frame is from a D700. Because it uses 6 discrete AD9974 dual channel ADCs for its analog readout, a repeating 12-column pattern of uneven bias levels is clearly visible even without frequency analysis. Sony sensors, like the IMX028 in the D3X, employ column-parallel ADCs. Their column-wise irregularity is much smaller because the voltage comparators share a common ramping DAC reference. There should also be calibration circuits before readout to cancel the differences between the column circuits.
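The column period is easy to confirm numerically: collapse the bias frame to a single row of column means and look at the magnitude spectrum of that profile. A sketch (the synthetic bias below is an assumption standing in for real frame data):

```python
import numpy as np

def dominant_column_period(bias: np.ndarray) -> float:
    """Estimate the period (in columns) of the strongest repeating
    column pattern in a bias frame via an FFT of the column means."""
    profile = bias.mean(axis=0)
    profile = profile - profile.mean()      # remove the DC pedestal
    spectrum = np.abs(np.fft.rfft(profile))
    k = int(np.argmax(spectrum[1:])) + 1    # skip the zero-frequency bin
    return len(profile) / k                 # columns per cycle
```

Run against a D700 master bias, this should report a period of 12 columns, matching the 6 dual-channel ADCs.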


Bias pattern in D3X at ISO 1600 and ISO 100

However, I do notice a weird global pattern in the IMX028 bias frame. At first I thought it was a light leak, but I quickly ruled that out since the pattern decreases with increasing ISO and is absent in 2 of the Bayer channels. It cannot be low frequency noise in the power circuit either, as the same pattern is observed in individual bias frames and at different ISOs. Could this be an issue with my individual sensor? I don’t know. But it’s not observed in any other camera I have had data access to.

Read Noise

Most calibration programs only output a master bias frame. During this process, the read noise of each pixel can be calculated as the sample standard deviation without much effort. (In statistics, the sample SD is a biased estimator, but it is nonetheless consistent if the sample size is large.)
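In code, the per-pixel read noise falls out of the same stack used to build the master bias; a minimal sketch with NumPy (`ddof=1` selects the sample standard deviation discussed above):

```python
import numpy as np

def per_pixel_read_noise(bias_stack: np.ndarray) -> np.ndarray:
    """bias_stack: (n_frames, h, w) array of bias frames.
    Returns an (h, w) map of per-pixel sample standard
    deviations, i.e. the read noise in ADU."""
    return bias_stack.std(axis=0, ddof=1)
```

The resulting map can be histogrammed to show the read noise distribution, or saved as an image to reveal spatial structure, as done below.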


Read noise in D700, D3X, D5100 and D800

It is clear how the sensor design affects the read noise distribution. Unlike the standalone 4T pixel design in the NC81338L, all the Sony CMOS sensors inside Nikons use 2.5T shared pixels. That’s what gives you the pairs of noisy pixels.

Photo Response Non-Uniformity

PRNU is very difficult to estimate unless a light source more uniform than the sensor response itself is used. Some forms of PRNU, such as stitching artifacts, are more apparent.

Stitching artifact in IMX028

These artifacts are caused by multi-exposure photolithography. Since the stepper does not have an imaging circle large enough to cover a full frame image sensor die, the die has to be stitched together like a panorama. The IMX028 has only one seam, while the NC81338L has 3 due to the much smaller mask used.

Astrophotography in pure darkness

In Michigan, I can see only one nebula – the “Michigan Nebula”. Nah, that’s just a joke in the local amateur astronomy community about the frequency of cloudy nights in this state. For me, the complaint is real. I do not have an observatory for regular imaging, and packing a heavy EQ mount out to some dark rural site only to find clouds building up is frustrating and unacceptable. Now it seems a road trip every half year offers a better opportunity to reach the best dark sites in the States.

So here are some examples. During Christmas 2013, I went to Big Bend National Park in Texas. There is absolutely no light pollution from almost any direction except for some desert towns outside the park, and the terrain should perfectly shade those local glares.

We entered the park at dusk, but the drive to where we were staying took about an hour. The surroundings lost their colorful appearance as the last patch of sky turned completely black. The headlights of our vehicle and of passing cars prevented us from becoming dark adapted, but when we stepped out of the car, the brilliant zodiacal light immediately caught my attention. It was so bright that even under the streetlights of a parking lot I could see it reaching 30 degrees high in the sky. Clouds kept me blinded for a day and a half; it wasn’t until the third night that I could view it in its full majesty. Even at midnight that day, the zodiacal light was still bright on the horizon.

Zodiacal Light

Zodiacal Light

This time, all the clouds moved away to the west, offering a clear night for astrophotography. I picked a spot near the park entrance to set up my tracking rig, plus another camera for a time lapse. Orion’s Belt was my imaging priority. With a total exposure of 2 hours and 40 minutes, I was able to reveal all the dark nebulae and dust bands adjacent to the bright M42 and the Horsehead.


Meanwhile, we considered the sunset at Rio Grande Village the most scenic view after 3 days of lonely driving through the desert.


360 panorama – Sunset of Rio Grande

Now 6 months have passed, and another opportunity took me to the Mojave Desert in California. This time, I had substituted the optical glass inside the camera with one having an antireflection coating, so all the glare surrounding bright stars and nebula cores is gone. About 10 minutes’ drive from the small desert town of Baker, I set up my AstroTrac on a sandy road in the Mojave National Preserve. It was dry and hot at such a low altitude. Besides the intermittent wind blowing against you, there is the occasional sound of some unknown animal sheltering in the wasteland. With the glare from Baker and the headlights of passing cars on I-15 all to my north, the Rho Ophiuchi Nebula made a perfect target. Yet in the dry heat, trying to sleep inside a car was exhausting. I managed to get 100 minutes of exposure in total.

Rho Oph

The Rho Ophiuchi Cloud Complex

This time I was using the hacked firmware that preserves the raw output from the sensor. With a custom calibration pipeline now developed, I can achieve proper preprocessing before the actual alignment and stacking.

Meteor and Milky Way

An occasional meteor captured during the time lapse that same night. Rho Ophiuchi gradually set into the light dome from southern California as my TT-320X tracked it. The background light still impacts the SNR in the dark nebulae.

Some 360 panoramas along the way, click to pan and zoom.


Devil's Postpile


At Mono Lake, I took a panorama of the sky, but it seems more challenging to process. The sky was divided into 7 areas of 4 subframes each. Airglow greatly increased the sky background near the horizon that night.


IMX071 Characteristics

The IMX071 is the CMOS image sensor in the Nikon D7000 and D5100. After almost 2 years of work, we have found the register controlling the black level, and the patch is available for both the D5100 and D7000. As expected, the register is part of the EXPEED image processor, not on the sensor. With more tweaking of the EXPEED preprocessing switches, we discovered that the optical black region is also filtered to get rid of hot pixels and pattern noise. The digital gain above ISO 1000 and the color channel scaling can also be turned off.

Much work has been done to understand the sensor’s SPI protocol as well. So far, a register controlling the average black level after digitization has been discovered. There is another set of 4 registers controlling the analog gain of each color channel. The camera faithfully issues the same gain setting for each ISO regardless of exposure length, so linearity should be ideal.

Based on this work, a more accurate test of the sensor can be performed. Here is a sensor characterization using data obtained via the service mode hack, with the internal image pre-processing disabled.

Area Definition

The mod also turns on image sensor overscan, where the peripheral dummy pixels and optical black pixels are included in the NEF image.

Total Area: 0, 0 – 5054, 3357

Virtual Dummy: 0, 0 – 5039, 11 (Blank ADC runs, very low readout noise)

Dummy Pixels: 0, 12 – 5039, 35 & 4960, 36 – 5039, 3357 (Pixel readout without dark current, for bias level estimation)

Optical Black: 0, 36 – 4959, 3357 except 2, 70 – 3, 3357 & 4956, 70 – 4957, 3357 (Dummy pixels) and effective pixels

Effective Pixel: 4, 70 – 4955, 3357

Offset to non-overscan: 8, 74

Columns 5040–5054 are the horizontal blanking region, and they output a constant value. The dummy pixels don’t seem to have a photodiode, hence only read noise, not dark current, is recorded there. However, dummy pixels should not be used for read noise estimation. The optical black pixels are actual pixels with light shielding, and they can be used for read noise and dark current estimation.
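For calibration code, the regions above can be encoded as slices so each area is cut out of the overscan NEF consistently; a sketch (coordinates copied from the area definition above, end-inclusive as written there; the names and helper are my own):

```python
# Regions of the IMX071 overscan readout as (x0, y0, x1, y1),
# end-inclusive, copied from the area definition above.
AREAS = {
    "total":         (0, 0, 5054, 3357),
    "virtual_dummy": (0, 0, 5039, 11),
    "effective":     (4, 70, 4955, 3357),
}

def crop(frame, area):
    """Cut one named region out of a full overscan frame
    (frame indexed as [row, column])."""
    x0, y0, x1, y1 = AREAS[area]
    return frame[y0:y1 + 1, x0:x1 + 1]
```

The optical black region is irregular (it excludes embedded dummy columns and the effective area), so it is easier to handle as the total area minus the other named regions.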

Testing Procedure

First, a pair of images is taken with the same exposure settings against a flat lighting source. An LCD panel with 4 layers of parchment paper serves as the uniform light source. Then the RAW image is split into 4 separate CFA channels and a central 256 x 256 uniform region is cropped.

Image pair

The average pixel value and standard deviation are recorded.


Then the second image is subtracted from the first, taking away the variance caused by Photo Response Non-Uniformity (PRNU). The remaining variance is twice the sum of the photon shot noise, dark current shot noise and read noise variances. Since the shutter speed is relatively fast (1/6s), we ignore the dark current.

2x Std EV

The read noise can be calculated from the optical black region with the same subtraction. Its variance is then subtracted from the rest.
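These steps plug into the standard photon-transfer relation: once the PRNU is cancelled by the subtraction and the read noise variance removed, the gain in e-/ADU is the mean signal divided by the remaining shot-noise variance. A sketch of the bookkeeping (function name and arguments are my own; values fed in would come from the measurements above):

```python
def gain_e_per_adu(mean_signal_adu, var_diff_adu2, var_read_adu2):
    """Photon-transfer gain estimate.

    mean_signal_adu : average pixel value of the flat pair
    var_diff_adu2   : variance of (flat1 - flat2); equals
                      2 * (shot variance + read variance)
    var_read_adu2   : read noise variance from the optical black
                      region (bias-difference variance / 2)
    """
    shot_var = var_diff_adu2 / 2 - var_read_adu2
    return mean_signal_adu / shot_var
```

Full well capacity in electrons then follows as the saturation level in ADU times this gain.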

Test Result


The following table shows the more accurate sensor characteristics at ISO 100 and ISO 320. Note that with the patch, the sensor uses the default analog gain settings instead of the calibrated ones; thus ISO 100 behaves more like ISO 80 in the factory setting. Also note that the “DEV 2” values are the combined deviation of the 2 frames resulting from the image subtraction.


The sensor behaves pretty much “ISO-less”, with identical read noise across ISOs. The slightly elevated read noise at base ISO is likely due to quantization error in the ADC at such a low gain. The PRNU figure may not be accurate, since I did not use a diffuser; flatness was achieved by manually picking the most uniform 512×512 section of the image.

Read Noise

To further characterize the read noise at ISO 320, I used an in-house tool to take 64 bias frames without shutter actuation, much like the in-camera Long Exposure Noise Reduction (LENR). The images were then converted to TIFF and imported into RStudio for analysis. For a single-channel CCD, every pixel goes through the same output amplifier, so the read noise is identical. However, this is not true for CMOS, where each pixel has its own amplifier and each column has its own CDS and ADC circuit. To address this, I calculated the standard deviation of each pixel and plotted the distribution. In the best mathematical model, each pixel would also have its own gain, but measuring that with the current setup is difficult. Under the assumption that the gain is the same across each channel, we have this:

RN Distribution

2.55e- is the median of the read noise distribution, meaning that half of the pixels have read noise below 2.55e- RMS in that channel. Albeit a bit higher than scientific CMOS, which has a median read noise of around 1e-, this is a lot better than CCDs.

When we save the read noise map as a FITS image, more interesting results are revealed!


Notice that the pixels with high readout noise always occur in vertical pairs. Why? My guess is the vertical 2-pixel shared readout structure, namely the 2.5T pixel: elevated noise in the shared floating diffusion or transistors would cause elevated read noise to appear in both photodiodes.

Dark Current

To measure the dark current, I first checked the linearity of the dark count with exposure time. A bracketing set of exposures was taken after 40 minutes of thermal stabilization, and the average count in the optical black region and the white point of the effective pixels were checked. The test was done before the hack was available, but judging from the subtracted black level, linearity is nicely followed.



Now the actual dark current can be computed. A series of 5-minute exposures was taken continuously, and the dark count is illustrated in the following diagram. The dark current gradually increased from 0.13eps to 0.34eps at the end. It never reached equilibrium, even after 100 minutes, as the battery continued to heat up.


As dark current accumulates with increasing temperature, the distribution shifts right and becomes more dispersed due to dark current shot noise and hot pixels.

Recently, 2 NTC thermistors were discovered inside the camera, one of which resides on the sensor PCB. The good news is that there is a PTP command to read the resistance value, although it is not reported in the EXIF. The thermistor is not in direct contact with the sensor heat sink or the metal mounting frame, but it is nonetheless the best representation of sensor temperature so far.


The above dark current vs. temperature diagram was constructed from 146 30-second dark frames, each with an immediate NTC thermistor readout. The 5% most extreme pixels on each end were discarded. The result: 0.14eps at 25°C.


With linear regression, I can better estimate the doubling temperature. A 4.8°C doubling temperature was recorded around room temperature. Below 10°C, cooling the sensor gives diminishing returns, as the doubling temperature rises to 6.8°C. But nonetheless, only 0.005eps of dark current accumulates at freezing. That’s only 3 electrons in a 10-minute exposure!
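The doubling temperature comes from a straight-line fit of log2(dark current) against temperature; the reciprocal of the slope is the number of degrees per doubling. A sketch of that regression (the sample points in the test are made up for illustration, not my measured data):

```python
import math

def doubling_temperature(temps_c, dark_eps):
    """Least-squares fit of log2(dark current) = a + b*T;
    the doubling temperature is 1/b (degrees C per doubling)."""
    ys = [math.log2(d) for d in dark_eps]
    n = len(temps_c)
    mx = sum(temps_c) / n
    my = sum(ys) / n
    b = sum((t - mx) * (y - my) for t, y in zip(temps_c, ys)) \
        / sum((t - mx) ** 2 for t in temps_c)
    return 1.0 / b
```

Fitting in log2 space rather than linear space is what makes the slope directly interpretable as doublings per degree.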

For more sensor related information, view this post.

Updated 2015/2/27