Phase AF CCD Die Shot

Back in 2014 we were investigating the AF/Lens system at NikonHacker. To understand the operation of phase AF, some effort went into the AF sensor itself. Leaked D1X schematics indicated that three linear CCDs made by Sony (ILX105 and ILX107) are incorporated into the MultiCAM-1300. In the old days, a single chip could not handle that many segments of linear pixels on one die, so the light path had to be split and focused onto multiple chips. The same is done in the MultiCAM-2000, which also uses 3 chips.

Then, from the D200 through the D90, a single chip, the ILX148, handles all 11 focus points of the newer CAM-1000 AF system. Some teardowns serve as great resources, even showing a die photo of that sensor. Missing in between was the D70's CAM-900. Later I came across a cheap working sensor stripped from a broken D70 and decided to take a look.

Front

Back

The entire module came in covered with dust, clearly from a camera that had been dropped onto the ground. I tore off the two pieces of duct tape covering the slit between the chip and the plastic optical assembly. The opening is a metal mask outlining the light transmission boundaries of the 5 focus points.

Then I used a knife to pry off the glue on the sides, exposing the reddish epoxy bonding the chip carrier to the optical module. A gentle pull separated them.

The Sensor

Sensor Die

Now the AF CCD is exposed! You can see a total of 12 linear CCD segments forming 6 pairs.

Let’s look at the back side of the optical assembly to understand why.

Lenslet

It appears each focus point has a pair of microlenses. The center cross-type point uses two perpendicular linear segments, thus four lenslets. That gives a total of 6 pairs.

To illustrate how this works, I covered the focal plane with a piece of scratch paper and pointed the front toward a light bulb. Here's the image.

Segment-Image

The pattern matches the layout of the linear CCD segments.

Now we can mimic a high-contrast target by half-covering two focus points with a sticker.

You can see the two lenslets form a copy of the high-contrast edge in each of the two segments.

When this is relayed from a photographic lens, the distance between the two high-contrast edges varies with the defocus. Firmware A then uses some sort of cross-correlation algorithm to determine that distance. The distance is compared against a calibrated value to get the actual defocus amount used to drive the lens AF motor.
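To illustrate the idea (this is not Nikon's actual algorithm), here is a minimal cross-correlation sketch in Python. The segment length, the test profile and the calibration constants are all made up:

```python
import numpy as np

def estimate_shift(seg_a, seg_b):
    """Displacement (in pixels) between the intensity profiles of a
    pair of linear CCD segments, found by cross-correlation."""
    a = seg_a - seg_a.mean()                 # remove the DC offset
    b = seg_b - seg_b.mean()
    corr = np.correlate(a, b, mode="full")
    return np.argmax(corr) - (len(b) - 1)    # lag of the correlation peak

def bar_profile(length, pos):
    """A single bright bar, standing in for a high-contrast target."""
    p = np.zeros(length)
    p[pos:pos + 10] = 1.0
    return p

# The two lenslets project the same feature onto their segments,
# displaced by an amount that depends on defocus (7 px in this toy case).
seg_a = bar_profile(96, 40)
seg_b = bar_profile(96, 47)

shift = estimate_shift(seg_a, seg_b)         # -> -7 (the sign encodes direction)

# Hypothetical calibration: the shift measured at perfect focus and a
# gain converting the extra shift into a lens-drive amount.
SHIFT_AT_FOCUS = 0.0
GAIN = 1.0
defocus = (shift - SHIFT_AT_FOCUS) * GAIN
print(shift, defocus)
```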

So far, that's the working principle of the phase AF optics. There's still a lot to dig into in the ASM code of firmware A, and in the electronic interface between the AF CCD and the MCU running that code. Here I decided to desolder the CCD from the flex board. The CCD is packaged in a CLCC, and the contacts form an L-shape covering both the side and the bottom. It turned out the heat from the soldering iron separated the traces from the flex board before melting the solder on the bottom, destroying all the contact pads on the flex board.

The backside of the CLCC package has the following marking.

20140906_205726

It's a Sony ILX127AA linear CCD. 405 R9KK is the product batch code; "405" indicates it was made in the 5th week of 2004, around the time of the D70 and D70s.

The schematic can be reconstructed by tracing the wiring. In the diagram below, VREF is probably 3.3V based on the traces. SD0~3 and STB form a simple parallel command interface. CLK is the master clock input. The analog pixel intensity is output on Vout, synchronized to SYNC.

image
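Before digging into firmware A, here is a guess at how such an interface might be exercised: latch a command on SD0~3 with STB, run CLK, and sample Vout at every SYNC pulse. This is purely illustrative; the pin assignments, command values and handshake details are assumptions, not reverse-engineered facts (MicroPython on a generic microcontroller is used just for the sketch):

```python
# Hypothetical reader for the ILX127AA command/readout interface.
# Everything here (pin numbers, command codes, timing) is assumed.
from machine import Pin, ADC

SD = [Pin(n, Pin.OUT) for n in (2, 3, 4, 5)]   # SD0..SD3, placeholder pins
STB = Pin(6, Pin.OUT)
CLK = Pin(7, Pin.OUT)
SYNC = Pin(8, Pin.IN)
VOUT = ADC(26)                                  # placeholder ADC channel

def send_command(cmd):
    """Present a 4-bit command on SD0..SD3 and latch it with STB."""
    for i, pin in enumerate(SD):
        pin.value((cmd >> i) & 1)
    STB.value(1)
    STB.value(0)

def read_pixels(n):
    """Clock the CCD and sample Vout once per SYNC pulse."""
    samples = []
    while len(samples) < n:
        CLK.value(1)
        CLK.value(0)
        if SYNC.value():                        # assumed: SYNC marks a valid pixel
            samples.append(VOUT.read_u16())
    return samples

send_command(0x1)      # e.g. "select segment"; the value is made up
pixels = read_pixels(912)
```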

Now we can dig into the image sensor die under a microscope. I took more than 50 shots and stitched them with panoramic software. The CCD was manufactured on a very old process node, probably larger than 1 micron.

ILX127AA

Click for Large View

Charge transfer is based on a 2-phase CCD. The total number of pixels is around 996; discounting the metal-masked pixels, this reduces to 912, so the name MultiCAM-900 makes sense. The greenish regions are the actual photodiodes. The photo-generated charge is transferred to the shielded region to the left or above, then clocked and shifted out to the output amplifier. The three long segments are continuous, with dummy pixels between the two correlated pixel regions. The six shorter segments forming the left, center and right focus points are each broken in two by the long segments, so each of them has its own amplifier. The CCD integrates all the command decoder/segment select/CCD driver logic on chip, as indicated by the vertical grid of synthesized transistors and their metal interconnect wires.


Microscopic survey of recent image sensors

Last year, through cooperation with ChipMod studio, we obtained die shots of multiple recent Sony sensors, and in this post we're going to show some of them. Most of our device identification is based on teardowns from reliable sources, such as Chipworks and manufacturer repair manuals, or on direct microscopic imaging. Where inference is needed, it relies on die signatures such as the number of bond pads and their relative locations, referred to here as the "bond pad signature".


Let's begin. The first one is the IMX038AQE from the Pentax K-x/K-r. It's the same silicon die as the AQQ variant seen in the Nikon D90 and D5000 DSLRs.

SONY and device marking code IMX038

Layer numbers from the photolithography of the Bayer pattern and on-chip lens (OCL)

Factory die-level testing left probe scratches on the test pads

Next, let's take a look at the IMX071AQQ from the D5100.

No device marking was found on the die except “SONY”

Bayer layer mark. PAC appears to stand for Photo-Activated Chemical, based on patents

Factory test pads

Finally we have the IMX094AQP from the D800/D800E. The first image shows the alignment mark near the die boundary. Interestingly, Nikon customized the cover glass into a quartz anti-moiré layer. As advertised by Nikon, both the D800 and the E variant include the vertically separating glass. The glass appears to be AR coated only over the image area, not across the whole plate. We had never seen this on any other Sony sensor, not even the IMX128.

Alignment marks show a duplicated image in the vertical direction

The edge of the multilayer AR coating shows an uneven gradient

Similar to the 071, Sony did not imprint the device marking in the corner. However, I found a pair of mask numbers related to this device: MM094L and MM094R on the long edge of the silicon die. This pair of marks appears only on Sony full-frame sensors; we later found it on the IMX235 and IMX128 as well. Based on their location, I realized it could be the mask code of a stitching pair. A full-frame sensor is simply too big to fit inside the stepper's imaging field, so a pair of masks has to be used to expose the full sensor, much like shooting a panorama. This was the case for the IMX028, where I discovered the non-uniformity in its flat-field image.

The microscope I had access to has a 40x objective, but its working distance is too short to image through the sensor cover glass. With the permission of, and at the request of, ChipMod studio, I'll show some more magnified images of the pixels themselves.

One interesting sensor was the X-Pro1 CMOS, which carries a Sony marking. Again, no actual device code.

Xpro-1 IMX165

Xpro-1 IMX165

The corner of the Fujifilm X-Trans array

Through the central opening on the back of the PCB, the package marking reads X165A?. The obscured second suffix character is presumably an R, P, or F. It's possibly IMX165AFE based on IC part searches, where many distributors have such an entry in their listings. Sony usually uses the second suffix letter to denote the color filter type, with Q for RGB Bayer and L for mono; F would naturally mean a different pattern such as X-Trans. The die itself appears to be the same as the 16MP IMX095 found in the Sony NEX-F3 and Pentax K-01.

Fujifilm CMOS PCB

IMX095AQE-K-01

The Pentax K-01 uses the CLCC-packaged IMX095AQE

It's possible that Sony kept the underlying circuitry fixed, altering only the last few steps of the back end of line (BEOL) to pattern a different color filter array. This would significantly reduce cost by avoiding a new sensor design. So the question is: when will we see a native monochrome CMOS in APS-C or a larger format?

Next we have a big one, the IMX235AQR in the Sony A7S, a 12MP full-frame sensor with a pixel pitch around 8.5um. ChipMod obtained the following image during a mono-chip conversion, in essence scraping away the microlens and Bayer layers. The pixel opening is very wide, with roughly a 55% area fill factor at the metal 3 layer. (A quick sanity check on that pixel pitch follows the images below.)

50x objective view of the Metal 3 layer after Bayer removal

IMX235

The microlens array appears shifted toward the top left of the pixel boundary
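Back to the quoted pixel pitch: a quick back-of-the-envelope check, assuming a nominal 36x24mm active area and roughly 12.2MP (nominal figures, not measured):

```python
# Sanity check on the ~8.5 um pixel pitch of a 12MP full-frame sensor.
sensor_w_mm, sensor_h_mm = 36.0, 24.0
pixels = 12.2e6                        # approximate effective pixel count
pitch_um = ((sensor_w_mm * sensor_h_mm / pixels) ** 0.5) * 1000
print(round(pitch_um, 1))              # ~8.4 um, consistent with "around 8.5 um"
```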

We also surveyed the IMX183 BSI sensor. Surprisingly, a BSI sensor also has a grid on the light-sensitive side. Based on some literature searching, this grid reduces color crosstalk between adjacent pixels: on a BSI sensor, light can easily stray into the collecting well of the next pixel as the fill factor gets larger and the light arrives at more oblique angles. It is also the reason a microlens array is employed to focus the light rays onto the pixel center.

IMX183

IMX183 BSI pixel boundary grid

To wrap up, we take a look at the old-school interline CCDs: the ICX413 in the Pentax K100D.

And the ICX493, with its rotated horizontal/vertical transfer registers.


The ICX493 employs a four-phase CCD, with two pixels per clocking period, so readout is interlaced. Charge in odd and even columns is transferred upward-then-right or downward-then-left to its respective HCCD (oriented vertically) on each side for readout. The same is then repeated for the interlaced rows.

No AA filter? More of a marketing hype

Back in 2012 when the D800 was released, Nikon did a bit of tweaking on its antialiasing filter, which led to the higher-resolution D800E. A pair of birefringent crystals is arranged rotated 180 degrees to each other so that the second cancels the separation introduced by the first. But was it worth it? Having disassembled more cameras, I decided to write a post on how these filter stacks are organized.

ChipMod sent me the pair of filters from a Nikon D600 whose IMX128 was scraped during a monochromatic mod.

Filter set

Filters from D600: UV-IR, CMOS Cover Glass, Color Correction Stack

Back on the D7000, I had shown that the filter set consists of an antialiasing layer with a UV-IR coating, plus an ICF stack sandwiched from a wave plate, a color correction glass and another AA layer. Upon receiving the filters, I initially suspected the same. On closer examination, I found the color correction glass was actually just a single thin layer; no wave plate was glued to it. On a micrometer, it registered 0.484mm thick.

Without a wave plate, it's impossible to spread a point into four, since the two exiting rays are polarized in orthogonal directions. I thought a workaround might be to cut the AA filter at 45 degrees instead of 0 or 90. (Here I'm referring to the orientation of the direction along which the two rays separate; the AA filter is always cut perpendicular to the optical axis, or Z-axis, of the birefringent crystal.) That way, blue could still be mixed with red. However, inspection under a microscope rebutted this as well. It turned out the first UV-IR layer blurs only in the vertical direction, leaving moiré as-is in the horizontal direction.

AA under Microscope

Calibration slide between the objective and the AA filter; 1mm in 100 divisions

Stage setup with micrometer ruler in the vertical direction

The spread from this filter is around 5 microns, wider than that of the D7000, which corresponds to a thicker crystal of about 0.8mm. Now we know for sure the D600 only blurs vertically, which gives it a bit of extra resolution in the horizontal direction. DPReview has an excellent resolution test confirming this: the D600 resolves horizontally well beyond the 36 mark on their chart, albeit with accompanying color moiré, but blurs out at around 34 in the vertical direction.

Do any other cameras do this? It turns out that many follow the same trend. To name a few: the Sony A7R II, the Nikon D5100, and possibly other low-end DSLRs all have a single AA layer glued to a color correction filter. One possible reason is to suppress the already strong false color during video live view arising from row skipping. However, I would still argue the effect of this is minimal, given that the spread distance is close to the pixel pitch.

The material for the AA filter and wave plate is usually crystalline quartz. Many websites cite lithium niobate, and that is incorrect. An argument floats around that quartz's birefringence is too small and would require a thick slice. That was true in the early days of digital imaging, when pixel pitches were huge (>10um). Once a proper calculation is done, the 0.8mm-thick material above happens to give close to a 5um displacement; had lithium niobate been used, the plate would be far too thin to manufacture. Another interesting property of quartz, like fused silica, is its UV transparency. Based on the transmission spectrum scan above, the AA substrate material passes UV when measured at the corner, whereas lithium niobate would absorb strongly in UV just like those ICFs. Notice that without any coating, the glass itself reflects about 10% of the light. Again, for emission nebula imaging, you can keep the UV-IR filter.
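For the record, here is that back-of-the-envelope calculation: a minimal sketch assuming the plate is cut with its optic axis at 45 degrees to the surface (the maximum walk-off orientation) and using approximate textbook refractive indices, not measured values:

```python
def walkoff_displacement_um(n_o, n_e, thickness_mm):
    """Lateral e-ray displacement for a uniaxial plate cut with its
    optic axis at 45 degrees to the surface (maximum walk-off)."""
    tan_rho = abs(n_o**2 - n_e**2) / (n_o**2 + n_e**2)
    return thickness_mm * 1000.0 * tan_rho

# Crystalline quartz, approximate indices near 550 nm
print(walkoff_displacement_um(1.544, 1.553, 0.8))   # ~4.7 um, close to the ~5 um measured

# Lithium niobate, approximate indices; the thickness needed for the
# same ~5 um spread comes out to only ~0.12 mm, hard to manufacture.
tan_rho_ln = abs(2.29**2 - 2.20**2) / (2.29**2 + 2.20**2)
print(0.005 / tan_rho_ln)                            # required thickness in mm
```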

Scraping the Bayer, Gain or Loss? – A quantitative analysis of mono-mod sensitivity

When you are deep into astrophotography, you'll probably start doing monochromatic deep-sky imaging. A typical choice would be a cooled CCD imager. These CCD cameras come in a variety of formats and architectures: the most affordable are the interline CCDs offered by Sony and Kodak (now ON Semiconductor), followed by the expensive full-frame CCDs from Kodak that require a mechanical shutter. Now, however, as most of my previous posts and other similar studies have pointed out, CMOS holds a clear edge over CCD. The only problem is that not many CMOS-based monochrome devices are available to choose from.

CMOSIS CMV12000

One option is the sCMOS from EEV and Fairchild, but I would imagine those are expensive. Then there's CMOSIS, who offer global shutter monochrome sensors in various formats, but their dark current (~125 e-/s) and read noise (>10 e-) figures are not clearly competitive with CCDs. Sony makes small-format B/W CMOS, but nothing bigger than the 1-inch format. As a result, in recent years we have seen many specialized conversion services that scrape away the Bayer filter layer. Unfortunately, doing so also removes the microlens array, which boosts the quantum efficiency. So in this post I'm going to investigate the QE loss and gain from such a modification.

Data was kindly provided by ChipMod for this study.

The modification steps involve camera disassembly and filter stack removal, followed by prying open the cover glass, protecting the bonding wires and finally scratching the pixel array. For the last step, the scratching actually happens in layers. We'll use the IMX071 cross-section EM image from Chipworks again for illustration.

image

The surface of an image sensor, as described by ChipMod, varies in its resistance to scratching. The first layer to come off is the microlens array, indicated by the green arrow; this layer is usually made of polymer. Applying further force strips away the RGB Bayer filter as well, indicated by the red arrow. The yellow span represents the pixel pitch, with the blue span marking the photodiode boundary. Comparing the blue length to the yellow, we can estimate the fill factor along this axis at about 50%; accounting for the channel stop and overflow drain on the other axis, the overall fill factor is more typically 40%. The gapless microlenses above focus the light rays onto the photodiode and bring the effective fill factor close to 90%.

image

The sensor was scraped into 3 vertical regions. From top to bottom: A, with the microlens array removed; B, with both layers removed; and C, the original. Comparing A/B tells you how much light the color dye absorbs at that wavelength, A/C tells you how effective the microlenses are, and B/C gives you the net gain/loss after the mod.
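If you want to reproduce the numbers, the arithmetic is just ratios of mean intensities. A minimal sketch, with placeholder values rather than the measured data:

```python
def region_ratios(mean_a, mean_b, mean_c):
    """Signal ratios from the three scraped regions:
    A = microlens removed, B = microlens + Bayer removed, C = untouched."""
    return {
        "A/B (color dye transmission)": mean_a / mean_b,
        "A/C (signal kept without the microlens)": mean_a / mean_c,
        "B/C (net change after the full mono-mod)": mean_b / mean_c,
    }

# Placeholder numbers for illustration only; use the mean ADU of a flat,
# central crop from each region of the narrowband flat frame.
print(region_ratios(mean_a=1000.0, mean_b=1400.0, mean_c=2200.0))
```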

An identical test condition was set up with a 50mm F6.5 ED telescope in front of a white screen. Two wavelengths, Ha and OIII, were tested with 7nm FWHM filters behind the telescope. The field is sufficiently flat, so the center regions were used to calculate the mean intensity.

image

Test result

The microlens array performs as expected, typically boosting QE by 2x in the native channels. Even in non-native color channels, the uLens still boosts signal by 50% or more, so losing the uLens array is a major downside. But considering the absorption of the color dye even at its peak transmission, stripping the CFA actually limits the QE loss. For example, in the red channel at H-alpha, the signal remained at 64% even though losing the uLens alone should cut the QE by more than half. The effect is even more apparent at the OIII wavelength: because the green channel peaks at 550nm, at 500nm the absorption is nearly half for this particular sensor, so the net result is no different from the original sensor.

In conclusion, the mono-mod sacrifices some QE for resolution and full-spectrum sensitivity. My estimate puts the final peak QE at around the photodiode fill factor, or roughly 45%. State-of-the-art CMOS processes maximize the photodiode aperture, making such a mod less prone to QE loss after microlens removal. This is in stark contrast with the Kodak interline CCD structure, which would suffer roughly a 5-fold QE penalty should the microlenses be stripped away. The mod should perform well for narrowband imaging, especially for emission nebulae. However, a fully microlensed monochrome sensor is still preferred for broadband imaging.

Teaser: Nikon DSLR Black Point Hack for Astrophotography

Heads up, astrophotographers: Canon no longer offers the best image quality for astrophotography. Today we, the Nikon Hackers, are the first to extract the real, authentic RAW image from the Nikon D7000, removing the last hurdle towards serious astro-imaging, especially for people doing narrowband where the background is very dark. It also promises better bias and dark calibration.

This is an exciting moment, for me as an amateur astronomer at least. Here's a quick peek at the dark frame image.

Preview

Here's the image straight out of the camera. The DSP engine still treats 0 as the black point, so the preview looks pink on screen. The histogram also looks odd because its X-axis is gamma-corrected for the JPEG preview.

statistics

The average will now be brought back to around 600 ADU, the on-sensor black level setting.
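For anyone who wants to check the black level on their own dark frames, here is a minimal sketch using the third-party rawpy library (the file name is just an example; the ~600 ADU figure applies to the D7000):

```python
import numpy as np
import rawpy

# Load a dark frame NEF and look at the raw statistics before any
# black-point subtraction is applied.
with rawpy.imread("dark_frame.NEF") as raw:
    data = raw.raw_image.astype(np.float64)
    print("mean: %.1f ADU" % data.mean())             # ~600 ADU expected on a D7000
    print("black level per channel:", raw.black_level_per_channel)
```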

As for image quality, the Sony Exmor CMOS has far less readout noise and FPN than Canon's sensors. Dark current is also in the range of 0.15 e-/s once stabilized at room temperature. Under typical winter conditions, the dark current is low enough to be comparable to cooled CCDs.

Two options are now available to get the sensor data without any pre-processing. One: install the firmware patch called "True dark current". The drawback is that the camera will not use calibrated data, so Gr and Gb pixels will not share the same conversion gain; and currently it is only available for the D5100 and D7000, as we haven't had time to dig into the assembly code of other DSLR models. The second option is my "Dark Current Enable Tool". The downside is that it's only transient: the camera returns to normal once power-cycled or once the metering system goes to sleep.

Thus, if you have a computer with you during imaging and also use your camera for daylight photography, the second option is best. Otherwise, if you travel like me, go for the first option and keep two firmware copies on your smartphone: copy the desired version over USB OTG and flash the camera with a fully charged battery.

5/17/2015 Update:

We released a new firmware patch for the D5100/D7000/D800, which repurposes the menu entry called "Color Space" into one that activates the original sensor data. Thus you can use your DSLR during travel for both astrophotography and daily photography without flashing different firmware or tethering to a computer. Here's a demo:

Dual Use Modification for D7000

Clearly, H-alpha astronomical imaging benefits greatly from a modified camera: many more nebulae come within feasible exposure times. Yet this poses another hurdle for using the camera in daily photography, as its color balance is completely thrown off. Preset WB is one way to go, but scenes vary so much from one another that it becomes a chore to keep a sheet of white paper with me. Besides, preset WB only corrects for one light source; with the sensor's modified spectrum, the correction ratios for different color temperatures are no longer the same. Spatially variable lighting, street lights plus moonlight for example, is a real challenge to correct. All of this adds up to an insurmountable task in post-processing.

The genius solution is to take advantage of the original factory filter and make it switchable! So here's the plan: I designed a filter rack just like the one offered with the Hutech LPS front filter, but one that can hold the ICF stack. After measuring the rack and the dimensions of the Nikon lens mount, I drafted the 3D model in CAD and exported the final version as an STL file, a universal format in 3D printing.

ICF-FF-N4

ICF-FF-N4-2

The 3D rendering of the filter rack. I named it ICF-FF-N4

The printing process is accurate to about 0.1mm in XY; the precision in Z is not as high. Nonetheless, it is the horizontal accuracy that matters for filter mounting. It turned out the filter could be secured inside the frame without any screws. The dent in the upper beam clears an extra bump in the middle of the reflex mirror when it flips up.

IMG_6649

The wave plate must sit between the two antialiasing layers. Since we are moving the ICF from behind the dust filter to in front of it, we need to make sure the AA layer faces the lens.

IMG_6650

Now the only thing left is to spray the filter rack black. The ICF is multilayer coated. We can now remove the clear filter installed in front of the sensor, since the original focus is restored. But an offset in the focusing system is still needed, because the AF sensor has an additional piece of glass in front of it.

LPS Filter for D7000

Just got my Hutech clip-in filter for the D7000, so here's an unboxing post of the LPS-P2-N4 filter. LPS stands for Light Pollution Suppression, I guess. Covering the glass is a special interference coating that rejects the unwanted wavelengths of most sodium lamps while passing the majority of nebula emission light.

Package

ADs

An advertising brochure, including the spectrum of the filter

Filter in Box

The filter came packed inside an SD card box, wrapped in bubble wrap

Installed on D7000

Clip the filter into the lens mount

Now it's time to mount the lens. Their website says lenses with more than 8.7mm of rear protrusion are not compatible. The Samyang 14mm f/2.8 AE works pretty well with less than 8mm of protrusion, and it clears up my biggest doubt about using wide-angle lenses with an interference filter. Here are two images taken with the filter on, showing none of the color vignetting seen with a front-mounted LPS.

Cloudy day with LPS

Indoor shot with LPS

The indoor shot shows the distinctive spectra of the two fluorescent tubes

As expected, the AF 50mm f/1.8D cannot be mounted because of a protruding "Aperture Indexing Post", which communicates the maximum aperture mechanically. This is only required on old mechanical SLRs like this one:

Aperture Indexing Post

The Aperture Indexing Post couples with the small black tab in the bottom of the mount on camera side

Since DSLRs communicate with the lens electronically, there is no reason to keep this protruding post. One way is to simply shave it down, but the rear element could be damaged in the process and it would leave dust inside. During my last repair of the plastic mount on my 18-105 DX lens, I found that the plastic ring hosting the "Aperture Indexing Post" can be removed independently of the mount, and it turned out to be the same on the 50mm f/1.8D.

Plastic Ring

Disassembled plastic ring from the metal mount

If you remove the 3 black screws on the metal mounting ring and the 2 on the side holding the electronic contacts, the black plastic ring comes out easily. Then just screw the contact post back onto the metal ring, and the lens can be used with the clip-in filter.

The 180mm f/2.8D telephoto lens was my biggest concern. The screws on the filter holder scraped the two corners of its plastic baffle tube. It seems the baffle tube can be removed, but that would cause a problem: light from the last element deep inside would shine onto the metal mount and create stray light. Unlike the 50mm f/1.8D, where the last optical element sits outside the mount, the 180 has its last element and aperture blades recessed deep inside the mount.

180 Mount Scrape

2 corners next to the electronic contact post are scraped on 180 2.8D


Star Field Test



Spectrum

The measured spectrum agrees reasonably well with the advertised one. The spectrophotometer used has only a 2nm FWHM bandwidth; the graph below shows the transmission as a 5nm moving average (a sketch of that smoothing follows the plot).

LPS-P2 Spectrum

Spectrum of LPS-P2 at 0 incident angle (Measuring interval 2nm)
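For reference, the smoothing mentioned above is nothing more than a boxcar moving average over the sampled curve. A minimal sketch, assuming the transmission is sampled at a fixed wavelength step (the window is rounded to a whole, odd number of samples):

```python
import numpy as np

def boxcar_smooth(transmission, step_nm=2.0, window_nm=5.0):
    """Smooth a transmission curve sampled every `step_nm` with a boxcar
    roughly `window_nm` wide (forced to an odd number of samples)."""
    n = max(1, int(window_nm // step_nm)) | 1
    kernel = np.ones(n) / n
    return np.convolve(transmission, kernel, mode="same")

# Example with a made-up curve sampled every 2 nm (placeholder data only)
wavelength = np.arange(400, 701, 2)
curve = np.random.rand(wavelength.size)
smoothed = boxcar_smooth(curve)
```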