Cooled CMOS Camera – P3: Image Quality

In the previous post I successfully obtained the test pattern with a custom VDMA core. The next step is to implement an operating system and the software on the host machine. To obtain real-time live view and control, both sides of the software need to be developed in parallel. So in this post, let’s take a look at the image quality with a simple bare-metal application.

The sensor is capable of 10 FPS at 14-bit, 30 FPS at 12-bit, or 70 FPS at 10-bit ADC resolution. For astrophotography, 14-bit provides the best dynamic range and achieves unity gain at the default setting. The sensor IR filter holder and the camera mounting plate are still in design, so for now I will only offer a glimpse at some bias and dark images.

To facilitate dark current estimation, the protective tape on the cover glass was glued to a piece of cardboard, and the whole sensor was then shielded from light with a metal can lid. Lastly, the camera assembly was placed inside a box and exposed to the -15°C winter temperature. During the process, the camera continuously acquired 2-minute dark frames for 2 hours, followed by 50 bias frames.

Bias Hist

Pixel Intensity distribution for a 2×4 repeating block (Magenta, Green, Blue for odd rows)

The above distribution reflects a RAW bias frame. It appears each readout bank is built with a slightly different bias voltage. The readout banks are assigned in a repeating pattern of 2 rows by 4 columns, one color per channel. A spike in the histogram at regular intervals implies a scaling factor is applied to odd rows post-digitization to correct for uneven gain between the top and bottom ADCs.
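
Splitting a RAW frame into its per-bank channels before histogramming can be sketched as follows. This is a minimal sketch assuming the 2-row × 4-column repeating bank layout described above; the synthetic frame and its offsets are illustrative, not real sensor data.

```python
import numpy as np

def channel_histograms(raw, rows=2, cols=4, bins=64):
    """Split a RAW frame into readout-bank channels (assumed to repeat
    in a `rows` x `cols` block) and histogram each one separately."""
    hists = {}
    for r in range(rows):
        for c in range(cols):
            channel = raw[r::rows, c::cols]   # all pixels of one bank
            hists[(r, c)] = np.histogram(channel, bins=bins)
    return hists

# Synthetic bias frame: each of the 8 banks gets its own offset,
# mimicking the per-bank bias voltages seen in the real histogram.
rng = np.random.default_rng(0)
frame = rng.normal(512, 4, (256, 256))
for r in range(2):
    for c in range(4):
        frame[r::2, c::4] += 10 * (4 * r + c)

hists = channel_histograms(frame)
print(len(hists))   # 8 channels, one per readout bank
```

Plotting each channel's histogram separately makes the per-bank bias offsets visible as distinct peaks.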

Read Noise Distribution

Read Noise – Mode 3.12, Median 4.13, Mean 4.81

The read noise distribution is obtained by taking, for each pixel, the standard deviation across the 50 bias frames. The distribution above shows the mode, median and mean. The result is much better than a typical CCD.
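
The per-pixel computation can be sketched in a few lines of Python. Synthetic data stands in for the real bias frames here; the stack shape and noise level are illustrative.

```python
import numpy as np

def read_noise_map(bias_stack):
    """Per-pixel read noise: the standard deviation of each pixel
    across a stack of bias frames (shape: frames x height x width)."""
    return np.std(bias_stack, axis=0, ddof=1)

# Synthetic stack standing in for the 50 real bias frames:
# true read noise of 4 ADU around a 512 ADU offset.
rng = np.random.default_rng(1)
stack = rng.normal(512, 4, (50, 128, 128))

noise = read_noise_map(stack)
counts, edges = np.histogram(noise, bins=100)
mode = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
print(f"mode {mode:.2f}  median {np.median(noise):.2f}  mean {noise.mean():.2f}")
```

On real data the mode, median and mean separate because hot and telegraph-noise pixels drag the mean upward while the mode tracks the well-behaved majority.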

Dark_current_minus_15

Finally, the dark current in a series of 2-minute exposures is measured after subtracting the master bias frame. Two interesting observations: 1. The density plot gets sharper (taller, narrower) as temperature decreases, corresponding to an ever lower dark generation rate at colder temperatures. 2. The bias drifts with temperature. This could originate in my voltage regulator, in the sensor, or a combination of the two.

Bias drift is usually compensated internally by the clamping circuit prior to the ADC, but I had to turn this calibration off due to a specific issue with this particular sensor design; I will elaborate in a later post. To measure the dark generation rate, I instead use the FWHM of the noise distribution and compare it against that of a bias frame. After temperature stabilization, the FWHM registered at 8.774, while the corrected bias frame gave 8.415 e-. For a Gaussian distribution, the FWHM is 2.3548σ, and since the noise sources are independent their variances subtract, leaving a variance of 1.113 for the accumulated dark current. The dark generation rate at this temperature is therefore less than 0.01 eps. Excellent!
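
The arithmetic above can be written out explicitly. For Poisson-distributed dark current, the shot-noise variance in e- equals the accumulated dark signal itself, which is what lets the variance difference be read directly as electrons:

```python
FWHM_TO_SIGMA = 2.3548  # FWHM = 2*sqrt(2*ln 2) * sigma for a Gaussian

def dark_rate_from_fwhm(fwhm_dark_sub, fwhm_bias, exposure_s):
    """Dark generation rate from the noise FWHM of a bias-subtracted
    dark frame vs. a bias frame. Independent noise sources, so the
    variances subtract; Poisson dark shot noise, so variance = signal."""
    var_total = (fwhm_dark_sub / FWHM_TO_SIGMA) ** 2
    var_bias = (fwhm_bias / FWHM_TO_SIGMA) ** 2
    dark_e = var_total - var_bias          # accumulated dark signal, e-
    return dark_e, dark_e / exposure_s     # e-, e-/s

dark_e, rate = dark_rate_from_fwhm(8.774, 8.415, 120)
print(f"{dark_e:.3f} e- accumulated, {rate:.4f} e-/s")  # 1.113 e-, 0.0093 e-/s
```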

Preliminary Summary

The sensor performs well in terms of noise. For long exposures, the dark generation rate of this CMOS is more sensitive to temperature change than that of CCDs: the dark current drops massively when cooled below the freezing point, and the doubling temperature is below 5°C.
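
The doubling-temperature behavior can be modeled with a simple exponential. The 5°C doubling value used below is an assumption (the post only bounds it as "below 5°C"), so treat the extrapolation as an upper-bound sketch:

```python
def dark_rate(rate_ref, t_ref_c, t_c, doubling_c=5.0):
    """Exponential dark-current model: the rate doubles every
    `doubling_c` degrees Celsius above the reference temperature."""
    return rate_ref * 2 ** ((t_c - t_ref_c) / doubling_c)

# Extrapolating the measured < 0.01 eps at -15 °C up to +25 °C:
# 40 °C warmer = 8 doublings, i.e. a 256x increase.
print(f"{dark_rate(0.01, -15, 25):.2f} eps")   # 2.56 eps at room temperature
```

This is why sub-zero cooling pays off so dramatically on this sensor: every 5°C of cooling halves the dark current again.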

LEXP_001

An uncorrected dark frame after 120s exposure showing visible column bias and hot pixels

Scraping the Bayer, Gain or Loss? – A quantitative analysis of mono-mod sensitivity

When you are deep into astrophotography, you’ll probably start doing monochromatic deep-sky imaging. A typical choice would be a cooled CCD imager. These CCD cameras come in a variety of formats and architectures: the most affordable are the interline CCDs offered by Sony and Kodak (now ONSemi), followed by the expensive full-frame CCDs from Kodak that require a mechanical shutter. Now, however, as most of my previous posts and other similar studies have pointed out, CMOS holds a clear edge over CCD. The only problem is that not many CMOS-based monochromatic devices are out there to choose from.

CMOSIS CMV12000

One option is the sCMOS from EEV and Fairchild, but I would imagine those to be expensive. Then there is CMOSIS, who offer global-shutter monochrome sensors in various formats, but their dark current (~125 eps) and read noise (>10 e-) figures are no clear competitor to CCD. Sony makes small-format B/W CMOS, but nothing bigger than the 1-inch format. As a result, several specialized conversion services that scrape away the Bayer filter layer have appeared in recent years. Unfortunately, doing so also removes the microlens array, which boosts the quantum efficiency. So in this post, I’m going to investigate the QE loss and gain of such a modification.

Data is kindly provided by ChipMod for this study.

The modification involves camera disassembly, filter stack removal, prying open the cover glass, protecting the bonding wires, and finally scratching the pixel array. In the last step, the scratching actually happens in layers. We’ll again use the IMX071 cross-section EM image from Chipworks for illustration.

image

The surface texture of an image sensor, as described by ChipMod, varies in its resistance to scratching. The first layer to come off is the microlens array, indicated by the green arrow; this layer is usually made of polymer. Applying further force strips away the RGB Bayer filter as well, indicated by the red arrow. The yellow region marks the pixel pitch, with blue defining the photodiode boundary. Comparing the blue length to the yellow, we can estimate a linear fill factor of about 50%. Accounting for the channel stop and overflow drain on the other axis, the area fill factor is typically around 40%. The gapless microlens above focuses the light rays onto the photodiode, bringing the effective fill factor close to 90%.
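
The length-ratio estimate above amounts to multiplying the linear fill factors of the two axes. The numbers below are illustrative stand-ins, not actual IMX071 measurements:

```python
def fill_factor(pd_x, pitch_x, pd_y, pitch_y):
    """Area fill factor from photodiode/pitch lengths measured on a
    cross-section, one ratio per axis."""
    return (pd_x / pitch_x) * (pd_y / pitch_y)

# ~50% linear along the imaged axis; better on the other axis, but
# reduced there by the channel stop and overflow drain:
print(fill_factor(0.50, 1.00, 0.80, 1.00))   # 0.4 area fill factor
```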

image

The sensor was scraped into 3 vertical regions. From top to bottom: A, the microlens array removed; B, both layers removed; and C, the original surface. Comparing A/B tells you how much light the color dye absorbs at that wavelength; A/C tells you how effective the microlens is; and B/C gives you the net gain/loss after the mod.

An identical test condition was set up with a 50F6.5 ED telescope in front of a white screen. Two wavelengths, Hα and OIII, were tested with 7 nm FWHM filters at the back. The field is sufficiently flat, so the center regions are used to calculate mean intensity.
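
The region comparison boils down to three mean intensities and their ratios. A minimal sketch with a synthetic flat frame; the band positions and intensity levels are hypothetical (on a real frame you would pick flat center patches of each scraped region):

```python
import numpy as np

def region_ratios(frame, rows_a, rows_b, rows_c):
    """Mean intensity of the three scraped regions (A: microlens
    removed, B: microlens + CFA removed, C: untouched) and the
    diagnostic ratios described in the text."""
    a = frame[rows_a].mean()
    b = frame[rows_b].mean()
    c = frame[rows_c].mean()
    return {"A/B (dye transmission)": a / b,
            "A/C (microlens gain)": a / c,
            "B/C (net mod gain/loss)": b / c}

# Synthetic flat frame with three bands at illustrative levels
frame = np.empty((300, 200))
frame[:100] = 320     # A: no microlens, CFA still absorbs
frame[100:200] = 640  # B: bare photodiode
frame[200:] = 1000    # C: original, microlens + CFA

print(region_ratios(frame, slice(0, 100), slice(100, 200), slice(200, 300)))
```

With these example levels, B/C comes out at 0.64, i.e. the modded sensor retaining 64% of the original signal.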

image

Test result

The microlens array performs as expected: it typically boosts QE by 2x in its native channel, and even in non-native color channels the µlens still boosts signal by 50% or more. Losing the µlens array is therefore a major downside. But considering the absorption of the color dye even at its peak transmission, stripping the CFA actually minimizes the QE loss. For example, in the red channel at H-alpha, the signal was still at 64% even though losing the µlens alone should cut the QE by more than half. The effect is more apparent at the OIII wavelength: because the green channel peaks at 550 nm, the absorption at 500 nm is nearly half for this particular sensor, so the net result is no different from the original sensor.

In conclusion, the mono-mod sacrifices some QE for resolution and full-spectrum sensitivity. My estimate puts the final peak QE at around the photodiode fill factor, roughly 45%. The state-of-the-art CMOS process maximizes the photodiode aperture, making such a mod less prone to QE loss after microlens removal. This is in stark contrast with the Kodak interline CCD structure, where a 5-fold QE penalty would result should the microlenses be stripped away. The mod should perform well for narrowband imaging, especially of emission nebulae. For broadband imaging, however, a fully microlensed monochromatic sensor is still preferred.

The Making of a Cooled CMOS Camera – P1

As my last post suggested, I have been working on a camera design. Right now the “prototype”, as I would call it, is in the test phase. The project actually dates back 3 years, when we envisioned a large-focal-area CCD imager customized for deep-sky astrophotography. At that time, the price of such a commercial camera was prohibitive. The most suitable monochromatic chip was the interline KAI-11002 with a size of 36 x 24 mm². Unlike a full-frame CCD, which necessitates a mechanical shutter for exposure control, an interline CCD handles this electronically. However, the addition of a shielded VCCD region greatly impacts the quantum efficiency and full well capacity. Beyond that, Kodak CCDs don’t seem to recover QE well enough with microlenses, peaking at 50% and reaching only 30% at 650 nm on a B/W device. Later on we dug deep into the datasheet and soon abandoned the project: the dark current accumulated in the VCCD was simply too high at the slow readout speed required for a decent level of read noise.

KAL-11k

The KAI-11002ABA in the original plan

What happened next was dramatic. After getting my hands on the D7000 and hacking it, I was shocked by how well a CMOS sensor performs. I soon realized the era of CCD in astronomy might be coming to an end; sooner or later, the telescope too will embrace the low-noise CMOS. When Kodak spun off its imaging division into Truesense, it soon released its first CMOS sensor with sub-4e- read noise and CCD-like dark current. We decided to give it a try.

KAC

Got the sensor; now big challenges lay ahead. To speed things up, I decided to use the MicroZed SOM board as the embedded controller, at least for the prototype, so that only the power supplies and interconnect PCB had to be designed. The Zynq-7010 configures the sensor over SPI through the ARM PS MIO pins. The data is received in the FPGA programmable logic (PL) and somehow relayed to the PS DDR3 memory, where it can undergo complex calibration and be saved to SD card or transferred over GbE/USB.

microZed

The microZed SOM with 1GB DDR3 and various I/O

The board was then designed and fabricated, with a Socket 754 CPU socket mounting the sensor. The main PCB contains the voltage regulators, oscillator, and temperature sensing circuits.

Main_PCB

Stack-up

The data lines go through a relay board, which also provides power to the Zynq PL I/O banks. The whole stack was triple-checked before applying power. After weeks of hardware and software debugging, the sensor was finally configured and running at the designated frame rate. Now it’s time to work on the Verilog to receive the data. I’ll cover that in the next part.

Peeping into Pixel – A micrograph of CMOS sensor

Macro photography is done at 1x ~ 2x magnification. A microscope, on the other hand, can easily deliver 40x magnification without an eyepiece. In this post, we are peeping into the basic element that captures the image in digital photography – a pixel on a CMOS sensor. I obtained a Nikon JFET LBCAST sensor from a broken D2H imaging board. LBCAST is still based on CMOS fabrication technology, and it’s an active pixel sensor.

Photographing an opaque sample, compared to a biological slice, is extremely difficult, since ordinary trans-illumination will not work. Epi-illumination – that is, illuminating through the objective – must be used instead. Basically, a half mirror is placed in the optical path to direct light out through the objective, with the reflected light traveling back into the eyepiece and camera. Epi-fluorescence uses a dichroic mirror and a pair of filters instead.

LBCAST

Back Side

Cover Glass

The D2H sensor die sits inside a robust 38-pin ceramic dual in-line package, but the bonding wires are shielded by a metal frame underneath the cover glass, making it impossible to see the die marking. There’s no package marking on the backside except a tape indicating its serial number (or possibly color correction information used for calibration). The cover glass is rather thick, roughly 0.7 mm.

Top Left

Top Right

Bottom Right

The corner clearly shows the active pixel region bordered by the optical black and non-microlensed regions. This image was taken with a 10x objective on a stereo microscope. Now let’s peep in with the 40x objective!

Effective Pixel

The effective pixel array (the pixels that respond to light normally). Note that the active array discards the periphery of the effective array due to color interpolation.

Unfortunately the camera is B/W. The brighter pixels are green, while the darker ones are red and blue. At this resolution we can actually estimate the optical fill factor: it’s well below 60%, given such a big lens gap! Even though a square microlens seems to be employed, not all light is directed into the photodiode window. It also seems the microlens array is not fabricated in one pass, as the lenslets on the blue and red pixels are slightly larger than the green lenslets.

Optical Black

Now comes the optical black (OB) region at the edge of the active pixels. Optical black pixels have a metal shield over the photodiode window; by blocking light, they output only dark current and bias level, which serves as the black reference for the active pixel region. Nikon subtracts the average OB value from the intensities in the active region, which pins the black level at 0. This is not good for astrophotography; Canon instead adds a 1024 ADU pedestal. From the OB pixels it becomes even clearer that only a partial region of each microlens is illuminated, roughly 40%. I believe that’s the reason for the low QE, and as a result, the low 18% SNR in the D2H.
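
The two black-level strategies can be sketched side by side. This is a minimal sketch, not actual camera firmware; the OB column positions and ADU levels are hypothetical:

```python
import numpy as np

def ob_correct(raw, ob_cols, pedestal=0):
    """Subtract the mean optical-black level from the frame, then
    re-add a pedestal. pedestal=0 mirrors the Nikon-style behaviour
    (negative noise excursions get clipped downstream); a positive
    pedestal like Canon's 1024 ADU preserves the full noise
    distribution for later calibration."""
    black = raw[:, ob_cols].mean()
    return raw.astype(np.float64) - black + pedestal

# Synthetic frame: uniform ~600 ADU bias, first 8 columns acting as OB
rng = np.random.default_rng(2)
raw = rng.normal(600, 5, (64, 64))
out = ob_correct(raw, slice(0, 8), pedestal=1024)
print(round(out[:, 8:].mean()))   # active region now centered near 1024
```

Keeping a pedestal matters for astrophotography because half of the read-noise distribution around zero would otherwise be lost to clipping.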

Lens Array border

Finally we come to the border of the lens array, where the bare Color Filter Array (CFA) sits above the pixels. You can clearly see the metal lines (column lines; the sensor is oriented 90°) running in between; they occupy a lot of space and are the very reason a microlens is needed.

Wiring

The very bottom right corner of the total pixel array

Each row has a pair of control lines. The upper one of the pair is for JFET select/reset, while the bottom one controls the photodiode transfer gate. The pair is made of polysilicon on the substrate. The two small black dots in between are likely vias or contacts. The column line (metal layer 1) connects to the source of the JFET transistor according to this paper, relaying the pixel signal. Notice how thick the line is, to reduce resistance! The reset drain is on metal layer 2 above the row and also serves as a light shield for the transistor below; it is not visible here.

Another interesting observation is that this cover glass has no blurring function. Canon has integrated the second anti-aliasing layer of the OLPF into the sensor cover glass in its full-frame and newer-generation APS-C DSLRs; apparently the OLPF stack is standalone in the D2H.

Line 40

Moving the view to the bottom long edge reveals the column circuitry – possibly the buffer, CDS, and column-scanning driver that latches to the output amplifier. Note the letter “4” photo-lithographed on the die, indicating column 40. The die also carries a “+” mark every 5 columns in between.

Line 390

Somewhere between columns 385 and 405, there’s a recess in the long edge. I’m not sure what this is for. (Image at 10x)

Corner

Top left corner on the opposite long edge of the sensor. (Image at 10x) Even though some of the non-microlensed pixels hide beneath the metal frame, we can still read the column numbers 3, 4, 5…

Last Line

Top right corner: the last marked line is 256, which indicates a total of 2560 columns.

LBCAST-XRay

The ceramic package viewed through an X-ray scanner, showing the bonding wires linking the leads to the die. The metal frame is also visible.

Very interesting, huh? Now we can compare it against a Micron (now Aptina) CMOS sensor with a 5.2 µm pixel.

Micron 1300

Die marking MI-1300 from year 2002, MT9M001.

MI-1300 Microlens

Now the microlens itself. We can clearly see a much narrower lens gap and a higher fill factor. This sensor boasts a 55% peak QE, though still less than the Sony Exmor sensors. I hope someone will donate one for dissection.

The contributor behind the scenes – an Olympus LUC PL FLN 40x objective. This objective is designed for inverted microscopes and has a correction collar to set the glass thickness, compensating chromatic and spherical aberration.

Updated: 6/8/2014

Image orientation corrected.