National Park Time Lapse – Tranquility

Since my last astrophotography road trip in California two and a half years ago, I really haven’t spent any time writing about travel and photography. Amidst the camera project and my PhD, I have somehow accumulated a pile of decent photographs yet to be processed or released. But all that hard work serves to produce better images, doesn’t it? So I took a break over the past few weeks to finish off some of that leftover photo work.

Please enjoy my second time lapse compilation – Tranquility

Included are some of the time lapses I took in Big Bend NP, Mojave National Preserve, Death Valley, Pictured Rocks National Lakeshore and Shenandoah NP. Then there’s also Jiuzhai Valley in Sichuan, China!

In terms of astrophotography, I only have a few images left on the hard drive for release. The road trips I covered recently were on the east coast. With light pollution and bad weather along the way, there really weren’t many stars to be seen, let alone deep space imaging.

Cygnus

Wide Field Milky Way Center shot in Death Valley

As for 360 panoramas, they have become routine for me now that the pipeline for 3×6 stitching is well established. In the meantime I have started to incorporate the floor image in the stitching process.

Carlsbad Caverns · The Window · White Sands · Big Bend · Porcupine Mountains · Tybee Island Lighthouse · Shenandoah · Death Valley

Mouse over for location, Click for 360 View

The link to my first time lapse compilation is here:


CMOS Camera – P6: First Light

In July I finally got the UV/IR cut filter for this camera. I designed a simple filter rack and 3D printed it. The whole thing now fits together nicely in front of the sensor. The IR cut is necessary because a huge proportion of light pollution falls in the near-infrared spectrum.

Filter rack

UV/IR cut taped to the plastic rack.

With all the hardware in place, I added a single-trigger exposure mode to the camera firmware, along with a protocol command in the PC software to issue a shutter release.

70SA

The camera is then attached to a SkyRover 70SA astrograph. Inside the camera angle adjuster sits a 12nm-bandwidth Ha filter, which lets me easily reject light pollution while imaging in front of my house. Focusing through the Ha filter is extremely difficult: I chose a bright star and pulled the exposure time to maximum during liveview. Finally, before the battery pack went dry (it supplies both the AZ-EQ6 mount and my camera), I managed to obtain 15 frames of 5 minutes each.

NGC7000

No dark frames were used for the first light image, and guiding performance was exceptional. With no dithering between frames, hot pixels stayed on the same spots, which foiled the kappa-sigma algorithm for hot pixel removal and left the background very noisy. Still, NGC7000 already shows rich details!

Remarks

1. This sensor has higher dark current than Sony CMOS sensors, somewhat more than 4-fold at the same temperature. However, its doubling temperature is small; in other words, its dark current drops quickly with cooling. Last time I observed no dark noise at –15°C. Imaging the Horsehead during winter here in Michigan would be brilliant!
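To illustrate why a small doubling temperature matters, here is a minimal scaling model. The reference current, reference temperature and doubling temperature below are placeholder numbers for illustration, not measured values for this sensor:

```python
# Illustrative only: i_ref, t_ref and t_doubling are placeholders,
# not measured values for this sensor.
def dark_current_eps(temp_c, i_ref=0.5, t_ref=20.0, t_doubling=5.0):
    """Dark current (e-/pixel/s), assuming it doubles every t_doubling deg C."""
    return i_ref * 2.0 ** ((temp_c - t_ref) / t_doubling)
```

With these placeholder numbers, cooling from 20°C to -15°C cuts the dark current by a factor of 2^7 = 128, which is why a sensor with a small doubling temperature benefits so much from cooling.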

2. Power issues. The sensor consumes ~110mA @5V during long integration, compared to ~400mA for continuous readout, which is minimal. However, the Zynq SoC + Ethernet PHY consume much more than a fully running CMOS sensor. Some power saving techniques could be employed: throttling the CPU during long integration/standby, powering down the fabric during standby mode, moving the bulk of the RTOS to OCM instead of using DDR, etc. But many of these require substantial work.

 

Anyway, I’m going to use this during the solar eclipse here in the USA!

CMOS Camera – P5: Ethernet Liveview

To make camera control easier, I spent the last several weeks building a control scheme based on Ethernet. The camera acts as a server with lwIP tasks running on a FreeRTOS operating system. The client is my computer, on any OS platform. The only thing connecting the two is a 1G Ethernet cable. To speed things up, the client demo program is written in Python 3.

image

Client application based on TKinter

Once the RTOS boots up, a core task sets up the network and instantiates a listening port. On the client side, all control commands are sent over TCP once the connection is established. At the application layer, there’s really not much protocol going on: I chose to decode commands using a magic code followed by the actual command id. Four commands are established so far:

1. Send Setting

2. Start Capture (RTOS will create the CMOS run task)

3. Halt Capture

4. Send Image

Once the TCP handshake is done, the client can send commands 1 and 2 to begin video capture with the defined settings. During this time, command 4 retrieves the latest image to decode and display on the GUI. The camera settings include exposure time and gain, frame definition and on-chip binning, shutter mode and ADC depth, as well as many other readout-related registers.
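The client side of this framing can be sketched roughly as below. The magic value, command ids and helper names are placeholders for illustration, not the actual ones in my protocol:

```python
import socket
import struct

# Hypothetical wire format: a 4-byte magic followed by a 4-byte command id,
# both big-endian. The magic value and ids here are placeholders.
MAGIC = 0xCA11AB1E
CMD_SEND_SETTING, CMD_START_CAPTURE, CMD_HALT_CAPTURE, CMD_SEND_IMAGE = 1, 2, 3, 4

def make_command(cmd_id, payload=b""):
    """Frame one command: magic + id + optional payload (e.g. settings)."""
    return struct.pack(">II", MAGIC, cmd_id) + payload

def request_image(sock, nbytes):
    """Issue command 4 and read back one RAW frame of known size."""
    sock.sendall(make_command(CMD_SEND_IMAGE))
    buf = bytearray()
    while len(buf) < nbytes:
        chunk = sock.recv(min(65536, nbytes - len(buf)))
        if not chunk:
            raise ConnectionError("camera closed the connection")
        buf.extend(chunk)
    return bytes(buf)
```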

Images are transferred as linear RAW data, so numpy functions become very helpful for implementing level control and post-readout binning. RAW images can be written to disk as RAW video given fast enough I/O.
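For illustration, the two numpy operations might look like this. The function names and the 8-bit display mapping are my own sketch, not the actual client code:

```python
import numpy as np

def stretch_to_8bit(raw, black, white, gamma=1.0):
    """Level control: map linear RAW counts to an 8-bit display range."""
    x = np.clip((raw.astype(np.float32) - black) / float(white - black), 0.0, 1.0)
    return (255.0 * x ** gamma).astype(np.uint8)

def bin2x2(raw):
    """Post-readout 2x2 binning by summing neighboring pixels."""
    h, w = raw.shape
    return (raw[:h // 2 * 2, :w // 2 * 2]
            .reshape(h // 2, 2, w // 2, 2)
            .sum(axis=(1, 3)))
```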

Several improvements are in progress. First and foremost is Ethernet performance. In a direct point-to-point connection there really shouldn’t be reliability issues. According to my tests, TCP can achieve ~75MB/s on Gigabit Ethernet. UDP would be even faster, but it might need to deal with potential packet drops. Either way, TCP should be able to handle 24FPS 1080P liveview, though both server and client need optimization. Other issues include the file saving task on the RTOS and better long exposure control.
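As a quick sanity check on that claim, the raw payload rate for 1080P liveview depends entirely on how the samples are packed. A back-of-the-envelope sketch, ignoring protocol overhead:

```python
def liveview_bandwidth_mb(width, height, bits_per_px, fps):
    """Raw payload bandwidth in MB/s (1 MB = 10**6 bytes), headers excluded."""
    return width * height * bits_per_px / 8 * fps / 1e6

# 1080P at 24 FPS:
#   16-bit container: ~99.5 MB/s -> exceeds the measured ~75 MB/s
#   12-bit packed:    ~74.6 MB/s -> just barely fits
#   8-bit preview:    ~49.8 MB/s -> comfortable
```

So a 24FPS 1080P stream is feasible at ~75MB/s only if the samples are packed tightly (or truncated for preview), which is part of why both ends still need optimization.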

Update 6/24

Some updates on the board operation system.

1. By modifying the socket API, I incorporated the zero-copy mode of TCP operation. A pointer to the data memory is passed directly to the EMAC task and no stack memcpy is involved. This provides a 15% bandwidth gain under TCP operation. Top speed is around 70MB/s of payload.

2. I added an interrupt event to the SDIO driver to avoid polling the status register. IO no longer wastes CPU cycles, and the single core can keep up with the EMAC listening task. As a result, SD file I/O can be performed simultaneously with the video liveview.

Microscopic survey of recent image sensors

Last year, through cooperation with ChipMod studio, we obtained die shots of multiple recent Sony sensors, and in this post we’re going to show some of them. Most of our device identification is based on teardowns from various reliable sources, such as Chipworks and manufacturer repair manuals, or on direct microscopic imaging. Otherwise, inference has to rely on die signatures such as the number of bond pads and their relative locations, referred to as the “bond pad signature”.

 

Let’s begin with the first one, the IMX038AQE from the Pentax K-x/K-r. It’s the same silicon die as the AQQ variant seen in the Nikon D90 and D5000 DSLRs.

SONY and device marking code IMX038

Layer number during photolithography of Bayer pattern and on chip lens (OCL)

Factory die-level testing left probe scratches on the test pads

Next, let’s take a look at the IMX071AQQ from the D5100.

No device marking was found on the die except “SONY”

Bayer layer mark. PAC appears to be Photo Activated Chemical based on patents

Factory test pads

Finally, we have the IMX094AQP from the D800/D800E. The first image shows the alignment mark near the die boundary. Interestingly, Nikon customized the cover glass into a quartz anti-moiré layer. As advertised by Nikon, both the D800 and the E variant include the vertical separation glass. The glass appears to be specially AR coated only in the image area, not across the whole plate. We had never seen this on any other Sony sensor, not even the IMX128.

Alignment marks show a duplicated image in the vertical direction

Edge of the multilayer AR coating shows uneven gradient

Similar to the 071, Sony did not imprint a device marking in the corner. However, I found a pair of mask numbers related to this device: MM094L and MM094R, on the long edges of the silicon die. This pair of marks appears only on Sony full frame sensors; we later found it on the IMX235 and IMX128 as well. Based on their locations, I realized they could be mask codes for a stitching pair. A full frame sensor is simply too big to fit within the circle of a stepper’s imaging field, so a pair of masks has to be used to expose one sensor, just like stitching a panorama. This was the case for the IMX028, where I discovered the non-uniformity in its flat field image.

The microscope I had access to has a 40x objective, but its working distance is too short to image directly through the sensor cover glass. With the permission and at the request of ChipMod studio, I’ll show some more enlarged images of the pixels themselves.

One interesting sensor was the X-Pro1 CMOS, harboring a Sony marking code but again no actual device code.

Xpro-1 IMX165

Xpro-1 IMX165

The corner of Fujifilm X-trans array

Through the central opening on the back of the PCB, the package marking reads X165A?, with the second character presumably an R, P or F. It is possibly the IMX165AFE, based on IC searches where many distributors have it listed. Sony usually uses the second letter to denote the Bayer type, with Q for RGB Bayer and L for mono; F would naturally mean a different pattern like X-Trans. The die itself appears to be the same as the 16MP IMX095 found in the Sony NEX-F3 and Pentax K-01.

Fujifilm CMOS PCB

IMX095AQE-K-01

Pentax K-01 uses CLCC version IMX095AQE

It’s possible that Sony kept the underlying circuitry fixed, only altering the last few steps of the back end of line (BEOL) to pattern a different color filter array. This would significantly reduce cost by avoiding a new sensor design. So the question is: when will we see a native monochromatic CMOS in APS-C or larger format?

Next we have a big one, the IMX235AQR in the Sony A7S, a 12MP full frame sensor at around 8.5um pixel pitch. ChipMod obtained the following image during a mono-mod, in essence scraping away the microlens and Bayer layers. The pixel opening is super wide, with about a 55% area fill factor at the metal 3 layer.

50x objective view of the Metal 3 layer after Bayer removal

IMX235

The microlens array appears to shift towards top left of pixel boundary

We also surveyed the IMX183 BSI sensor. Surprisingly, a BSI sensor also has a grid on the light-sensitive side. After some literature search, it appears this grid reduces color crosstalk between adjacent pixels: on a BSI sensor, light arriving at oblique angles can easily reach the collecting well of the next pixel as the fill factor gets larger. It is also a reason microlens arrays are employed to focus light rays onto the pixel center.

IMX183

IMX183 BSI pixel boundary grid

Finally, we take a look at the old-school interline CCDs, starting with the ICX413 in the Pentax K100D.

And the ICX493, which uses rotated horizontal-vertical transfer registers.

 

The ICX493 employs a four-phase CCD, with two pixels covering one period, so readout is interlaced. Charges on odd and even columns are transferred upward-then-right or downward-then-left to their respective HCCDs (organized vertically) on each side for readout. The same is then repeated for the interlaced rows.

Cooled CMOS Camera – P4: Lens Mount

Things have been going slowly recently. Instead of improving the image acquisition pipeline, I decided to add some mechanical structure to make the camera more stable. The PCI-E connector is, without a doubt, the weakest link in the entire structure. I also need to actually make this a camera by mounting a lens on it, instead of leaving it as several pieces of PCB.

Drawing_1Drawing_2

3D Visualization with PCBs

Notice that the linkage on the side plate consists of three slots instead of holes. This allows tuning the flange distance from the focal plane. Both PCBs are mounted on M3x0.5mm standoffs, just like a motherboard in a computer case.

ASM_BackASM_FrontMount

View through the lens mount

An EF macro extension tube is used to mount the lens; the flange distance is approximately 44mm. The electrical contacts are left floating for now. I attached a 50mm f/1.8D lens using a mount adapter.

50mm Lens

The first image this camera sees, through my window.

No AA filter? More of a marketing hype

Back in 2012 when the D800 was released, Nikon did a bit of tweaking on its antialiasing filter, which led to the higher resolution D800E: a pair of birefringent crystals is arranged anti-parallel (rotated 180 degrees) so the second cancels out the effect of the first. But was it worth it? Having disassembled more cameras since then, I decided to write a post on how these filter stacks are organized.

ChipMod sent me a pair of filters from a Nikon D600 whose IMX128 was scraped during a monochromatic mod.

Filter set

Filters from D600: UV-IR, CMOS Cover Glass, Color Correction Stack

Back on the D7000, I had shown that the filter set consists of an antialiasing layer with a UV-IR coating, plus an ICF stack sandwiching a wave plate, a color correction glass and another AA layer. Upon receiving these filters, I initially suspected the same. After closer examination, though, I found the color correction glass was actually just a single thin layer, with no wave plate glued to it. On a micrometer it registered 0.484mm thick.

Without a wave plate, it’s impossible to spread a point into four, since the two light rays are polarized in orthogonal directions. I thought a workaround might be to cut the AA filter at 45 degrees instead of 0 or 90. (Here I refer to the orientation of the direction in which the two light rays separate; the AA filter is always cut perpendicular to the optical axis, or Z-axis, of the birefringent crystal.) As such, the blue color could be mixed with the red. However, inspection under a microscope refuted this as well. It turned out the first UV-IR layer blurs only in the vertical direction, leaving moiré as-is in the horizontal direction.

AA under Microscope

Calibration slide between objective and AA filter, 1mm in 100 divisions

Stage setup with micrometer ruler in the vertical direction

The spread from this filter is around 5 microns, wider than that in the D7k, corresponding to a thicker crystal at 0.8mm. Now we know for sure the D600 only blurs vertically. This gives it a bit higher resolution in the horizontal direction, and DPreview had an excellent resolution test confirming the case: the D600 resolves horizontally well beyond the 36 mark on the chart, albeit with accompanying color moiré, but blurs out at around 34 in the vertical direction.

Do any other cameras do this? It turns out that many follow this trend. To name a few: the Sony A7Rii, Nikon D5100, and possibly other low-end DSLRs all have a single AA layer glued to a color correction filter. One possibility is that it suppresses the already strong false color arising from row skipping during video liveview. However, I would still argue the effect is minimal given a spread distance close to the pixel pitch.

The material for the AA filter and wave plate is usually crystalline quartz. Many websites cite lithium niobate, and that is incorrect. An argument floats around that quartz has too small a birefringence and would require a thick slice. This was true in the early days of digital imaging, when pixel pitches were huge (>10um)! Once a proper calculation is done, the 0.8mm-thick material above happens to give close to a 5um displacement. Should lithium niobate be used, it would be way too thin to manufacture. Another interesting property quartz shares with fused silica is UV transparency: based on the above transmission spectrum scan, the AA substrate material permits UV to pass when measured at the corner. Lithium niobate would absorb strongly in the UV, just like those ICFs. Notice that without any coating, the glass itself reflects about 10% of the light. Again, for emission nebula imaging, you could keep the UV-IR filter.
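That "proper calculation" is easy to reproduce. A sketch using published indices of crystalline quartz, for a plate cut with the optic axis at 45 degrees to the surface (where the walk-off is near maximal):

```python
# Ordinary and extraordinary refractive indices of crystalline quartz
# near 590 nm (published values).
N_O, N_E = 1.5443, 1.5534

def walkoff_separation_um(thickness_mm):
    """Beam separation in microns for an optic axis at 45 degrees,
    where tan(rho) = (n_e**2 - n_o**2) / (n_e**2 + n_o**2)."""
    tan_rho = (N_E ** 2 - N_O ** 2) / (N_E ** 2 + N_O ** 2)
    return thickness_mm * 1000.0 * tan_rho
```

A 0.8mm plate comes out at about 4.7um, consistent with the ~5um spread measured under the microscope.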

Cooled CMOS Camera – P3: Image Quality

In the previous post I successfully obtained the test pattern with the custom VDMA core. The next step is to implement an operating system and the software on the host machine. To obtain real-time liveview and control, both pieces of software should be developed in parallel. So in this post, let’s take a look at the image quality with a simple baremetal application.

The sensor is capable of 10FPS at 14bit, 30FPS at 12bit, or 70FPS at 10bit ADC resolution. For astrophotography, 14bit provides the best dynamic range and achieves unity gain at the default setting. The sensor IR filter holder and the camera mounting plate are still in design, so I will only provide a glimpse into some bias and dark images at this moment.

To facilitate dark current estimation, the cover glass protective tape was glued to a piece of cardboard and the whole sensor shielded from light with a metal can lid. The camera assembly was then placed inside a box and exposed to the -15°C winter temperature. During the process, the camera continuously acquired 2-minute dark frames for 2 hours, followed by 50 bias frames.

Bias Hist

Pixel Intensity distribution for a 2×4 repeating block (Magenta, Green, Blue for odd rows)

The above distribution reflects a RAW bias frame. It appears each readout bank is constructed with a different bias voltage. The readout bank assignment is a 2-row by 4-column repeating pattern, one color per channel. Spikes in the histogram at certain intervals imply a scaling factor is applied to odd rows post-digitization to correct for uneven gain between the top and bottom ADCs.
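Estimating the per-bank offsets is straightforward in numpy. A sketch assuming the 2×4 repeating pattern (the function name is mine, not from my actual scripts):

```python
import numpy as np

def bank_means(bias, rows=2, cols=4):
    """Mean level of each readout bank, assuming the banks tile the frame
    in a (rows x cols) repeating block."""
    h, w = bias.shape
    b = bias[:h // rows * rows, :w // cols * cols].astype(np.float64)
    return np.array([[b[r::rows, c::cols].mean() for c in range(cols)]
                     for r in range(rows)])
```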

Read Noise Distribution

Read Noise – Mode 3.12 Median 4.13 Mean 4.81

The read noise distribution is obtained by taking the standard deviation among the 50 bias frames for each pixel; the distribution above shows the mode, median and mean. The result is much better than a typical CCD.
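The per-pixel computation can be sketched like this, assuming the stack is already calibrated to electrons; the 1000-bin histogram for locating the mode is an arbitrary choice of mine:

```python
import numpy as np

def read_noise_stats(bias_stack):
    """Per-pixel read noise from a stack of bias frames, shape (N, H, W).
    Returns the sigma map plus its mode, median and mean."""
    sigma = bias_stack.std(axis=0, ddof=1)           # (N, H, W) -> (H, W)
    hist, edges = np.histogram(sigma, bins=1000)
    mode = edges[hist.argmax()] + (edges[1] - edges[0]) / 2  # bin center
    return sigma, float(mode), float(np.median(sigma)), float(sigma.mean())
```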

Dark_current_minus_15

Finally, the dark current in a series of 2-minute exposures is measured by subtracting the master bias frame. Two interesting observations: 1. The density plot gets sharper (taller, narrower) as temperature decreases, corresponding to an even lower dark generation rate at colder temperatures. 2. The bias drifts with temperature. This could be in my voltage regulator, in the sensor, or a combination of the two.

The bias drift is usually compensated internally by the clamping circuit prior to the ADC, but I had to turn this calibration off due to a specific issue with this particular sensor design (I will elaborate in a later post). So to measure the dark generation rate, I use the FWHM of the noise distribution and compare it against that of a bias frame. At temperature stabilization, the FWHM registered 8.774 e-, while the corrected bias gave 8.415 e-. For a Gaussian distribution, the FWHM is 2.3548σ, so the variance of the accumulated dark current is 1.113 given independent noise sources. As such, the dark generation rate at this temperature is less than 0.01 eps. Excellent!
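The arithmetic above can be reproduced in a few lines: a sketch assuming Gaussian noise and pure shot-noise statistics (the variance of the accumulated dark signal equals the signal itself):

```python
# Worked numbers from this post: dark-frame FWHM 8.774 e-, bias FWHM 8.415 e-,
# 120 s exposures.
FWHM_PER_SIGMA = 2.3548

def dark_generation_rate(fwhm_dark, fwhm_bias, exposure_s):
    """Dark e-/pixel/s from noise FWHMs, assuming independent Gaussian
    noise sources and shot-noise statistics for the dark signal."""
    var_dark = (fwhm_dark / FWHM_PER_SIGMA) ** 2 - (fwhm_bias / FWHM_PER_SIGMA) ** 2
    return var_dark / exposure_s
```

Plugging in the measured FWHMs gives a dark variance of about 1.11 and a rate just under 0.01 eps, matching the figure above.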

Preliminary Summary

The sensor performs well in terms of noise. For long exposures, the dark generation rate in this CMOS is more sensitive to temperature change than in CCDs: the dark current is massively reduced when cooled below the freezing point, and the doubling temperature is below 5°C.

LEXP_001

An uncorrected dark frame after 120s exposure showing visible column bias and hot pixels

Scraping the Bayer, Gain or Loss? – A quantitative analysis of mono-mod sensitivity

When you are deep into astrophotography, you’ll probably start doing monochromatic deep sky imaging, and a typical choice would be a cooled CCD imager. These CCD cameras come in a variety of formats and architectures: the most affordable are interline CCDs offered by Sony and Kodak (now ON Semi); then there are the expensive full frame CCDs from Kodak, which require a mechanical shutter. Now, however, as most of my previous posts and other similar studies have pointed out, CMOS holds a clear edge over CCD. The only problem is, not a lot of CMOS-based monochromatic devices are out there to choose from.

CMOSIS CMV12000

One option is the sCMOS from EEV and Fairchild, but I would imagine those to be expensive. Then there is CMOSIS, who offer global shutter monochrome sensors in various formats, but their dark current (~125 eps) and read noise (>10e-) figures are not a clear competitor to CCD in any way. Sony makes small format B/W CMOS but nothing bigger than the 1-inch format. As such, we have seen many specialized conversion services in recent years that scrape away the Bayer filter layer. Unfortunately, doing so also removes the microlens array, which boosts the quantum efficiency. So in this post, I’m going to investigate the QE loss and gain with such a modification.

Data is kindly provided by ChipMod for this study.

The modification involves camera disassembly, filter stack removal, prying open the cover glass, protecting the bonding wires and finally scratching the pixel array. The scratching actually happens in layers; we’ll use the IMX071 cross-section EM image from Chipworks again for illustration.

image

The surface texture of an image sensor, as described by ChipMod, varies in resistance to scratching. The first layer to come off is the microlens array, indicated by the green arrow; this layer is usually made of polymer. Applying further force strips away the RGB Bayer filter as well, indicated by the red arrow. The yellow region represents the pixel pitch, with the blue defining the photodiode boundary. Comparing the length of blue to yellow, we can estimate the fill factor at 50%; accounting for the channel stop and overflow drain along the other axis, the fill factor is typically 40%. The gapless microlens above focuses the light rays onto the photodiode, bringing the effective fill factor close to 90%.

image

The sensor was scraped into 3 vertical regions. From top to bottom: A, where the microlens array is removed; B, where both layers are removed; and C, the original. Comparing A/B tells you how much light the color dye absorbs at that wavelength; A/C tells you how effective the microlenses are; and B/C gives you the gain/loss after the mod.
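With the three regions defined, the analysis reduces to a few lines of numpy. The function name and region coordinates below are illustrative, not the actual crops used:

```python
import numpy as np

def region_ratios(flat, region_a, region_b, region_c):
    """Mean-intensity ratios between the three scraped regions of a flat
    frame. Each region is a (row_slice, col_slice) into the image."""
    ma, mb, mc = (float(flat[r].mean()) for r in (region_a, region_b, region_c))
    return {"A/B": ma / mb,   # transmission of the color dye alone
            "A/C": ma / mc,   # effect of losing the microlens
            "B/C": mb / mc}   # net gain/loss of the full mono-mod
```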

An identical test condition was set up for each region, with a 50mm F6.5 ED telescope in front of a white screen. Two wavelengths, Ha and Oiii, were tested with 7nm FWHM filters at the back. The field is sufficiently flat, so center regions were used to calculate mean intensity.

image

Test result

The microlens array performs as expected: it typically boosts QE about 2x in native channels, and even in a non-native color channel it still boosts signal by 50% or more. Losing the uLens array is a major downside. But considering the absorption of the color dye even at its peak transmission, stripping the CFA actually minimizes the QE loss. For example, in the red channel at H-alpha, the signal remained at 64% even though losing the uLens alone should cut the QE by more than half. The same is more apparent at the Oiii wavelength: because the green channel peaks at 550nm, the absorption at 500nm is nearly half for this particular sensor, so the net result is no different from the original sensor.

In conclusion, the mono-mod sacrifices some QE for resolution and full-spectrum sensitivity. My estimate puts the final peak QE at around the photodiode fill factor, or about 45%. The state-of-the-art CMOS process maximizes the photodiode aperture, making such a mod less prone to QE loss after microlens removal. This is in vast contrast to the Kodak interline CCD structure, where a 5-fold QE penalty results should the microlenses be stripped away. The mod should perform well for narrowband imaging, especially emission nebulae; however, a fully microlensed monochromatic sensor is still preferred for broadband imaging.