No AA filter? More of a marketing hype

Back in 2012, when the D800 was released, Nikon did a bit of tweaking on its anti-aliasing filter, which led to the higher-resolution D800E. A pair of birefringent crystals is arranged anti-parallel (rotated 180 degrees) so their effects cancel out. But was it worth it? Having disassembled more cameras since then, I decided to write a post on how these filter stacks are organized.

ChipMod sent me a pair of filters from a Nikon D600; its IMX128 sensor had been scraped during a monochrome mod.

Filter set

Filters from D600: UV-IR, CMOS Cover Glass, Color Correction Stack

Back with the D7000, I showed that the filter set consists of an anti-aliasing layer with a UV-IR coating, plus an ICF stack sandwiched together from a wave plate, a color correction glass and another AA layer. Upon receiving the filters, I initially suspected the same arrangement. After closer examination, I found the color correction glass was actually just a single thin layer; no wave plate was glued to it. On a micrometer, it registered 0.484mm thick.

Without a wave plate, it is impossible to spread a point into four, since the two light rays are polarized in orthogonal directions. I thought a workaround might be to cut the AA filter at 45 degrees instead of 0 or 90. (Here I refer to the orientation of the direction in which the two rays separate; the AA filter is always cut perpendicular to the optical axis, or Z-axis, of the birefringent crystal.) That way, blue could be mixed with red. However, inspection under a microscope ruled this out as well. It turned out the UV-IR coated layer only blurs in the vertical direction, leaving moiré untouched in the horizontal direction.

AA under Microscope

Calibration slide between objective and AA filter; 1mm in 100 divisions

Stage setup with micrometer ruler in the vertical direction

The spread from this filter is around 5 microns, wider than that of the D7000, which corresponds to a thicker crystal at 0.8mm. Now we know for sure the D600 only blurs vertically, which gives it a slightly higher resolution in the horizontal direction. DPReview's excellent resolution test confirms this: the D600 resolves horizontally well beyond 36, albeit with accompanying color moiré, but blurs out at around 34 in the vertical direction.

Do any other cameras do this? It turns out many follow this trend. To name a few: the Sony A7R II, Nikon D5100, and possibly other low-end DSLRs all have a single AA layer glued to a color correction filter. One possible reason is to suppress the already strong false color that arises from row skipping during video live view. However, I would still argue the effect is minimal, given that the spread distance is close to the pixel pitch.

The material for the AA filter and wave plate is usually crystalline quartz. Many websites cite lithium niobate, and that is incorrect. An argument floats around that quartz has too small a birefringence and therefore requires a thick slice. That was true in the early days of digital imaging, when pixel pitches were huge (>10um)! Once a proper calculation is done, the 0.8mm thick material above happens to give close to a 5um displacement. If lithium niobate were used, the slice would be far too thin to manufacture. Another interesting property of quartz, like fused silica, is its UV transparency. Based on the transmission spectrum scan above, the AA substrate material passes UV when measured at the corner; lithium niobate would absorb strongly in UV, just like those ICFs. Notice that without any coating, the glass itself reflects about 10% of the light. Again, for emission nebula imaging you could keep the UV-IR filter.
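Here is that calculation spelled out (a sketch: textbook quartz indices near 590nm, and the plate is assumed to be cut with its optic axis about 45° to the surface, which maximizes walk-off):

```python
import math

# Crystalline quartz indices near 590 nm
n_o, n_e = 1.5443, 1.5534

# Walk-off angle for a plate cut ~45 deg to the optic axis
rho = math.atan((n_e**2 - n_o**2) / (n_e**2 + n_o**2))

for label, t_mm in [("D600 AA layer", 0.80), ("D7000 AA layer", 0.538)]:
    d_um = t_mm * 1000 * math.tan(rho)
    print(f"{label}: {t_mm} mm quartz -> {d_um:.1f} um separation")
# D600:  ~4.7 um, close to the ~5 um spread measured above
# D7000: ~3.2 um, close to its pixel pitch
```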

Scraping the Bayer, Gain or Loss? – A quantitative analysis of mono-mod sensitivity

When you are deep into astrophotography, you'll probably start doing monochromatic deep-sky imaging. A typical choice would be a cooled CCD imager. These CCD cameras come in a variety of formats and architectures: the most affordable are the interline CCDs offered by Sony and Kodak (now ON Semiconductor), followed by the expensive full-frame CCDs from Kodak that require a mechanical shutter. Now, however, as most of my previous posts and other similar studies have pointed out, CMOS holds a clear edge over CCD. The only problem is that there are not many CMOS-based monochromatic devices to choose from.

CMOSIS CMV12000

One option is the sCMOS from EEV and Fairchild, but I would imagine those are expensive. Then there is CMOSIS, which offers global-shutter monochrome sensors in various formats, but their dark current (~125 e-/s) and read noise (>10 e-) figures are not clearly competitive with CCDs. Sony makes small-format B/W CMOS but nothing bigger than the 1-inch format. As a result, in recent years we have seen many specialized conversion services that scrape away the Bayer filter layer. Unfortunately, doing so also removes the microlens array that boosts the quantum efficiency. So in this post, I'm going to investigate the QE loss and gain from such a modification.

Data is kindly provided by ChipMod for this study.

The modification involves camera disassembly and filter stack removal, followed by prying open the cover glass, protecting the bonding wires and finally scratching the pixel array. For the last step, the scratching actually happens in layers. We'll use the IMX071 cross-section EM image from Chipworks again for illustration.

image

The surface of an image sensor, as described by ChipMod, varies in its resistance to scratching. The first layer to come off is the microlens array, indicated by the green arrow; this layer is usually made of polymer. Applying further force strips away the RGB Bayer filter as well, indicated by the red arrow. The yellow region marks the pixel pitch and the blue marks the photodiode boundary. Comparing the length of blue to yellow, we can estimate the linear fill factor at 50%. Because of the channel stop and overflow drain on the other axis, the areal fill factor is typically around 40%. The gapless microlenses above focus the light rays onto the photodiode, bringing the effective fill factor close to 90%.

image

The sensor was scraped into three vertical regions. From top to bottom: A, the microlens array removed; B, both layers removed; and C, the original surface. Comparing A/B tells you how much light the color dye absorbs at that wavelength, A/C tells you how effective the microlenses are, and B/C gives you the net gain or loss after the mod.
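Here is a minimal sketch of that comparison; the ADU values are made up for illustration and would in practice be the mean of each region in a flat-field frame:

```python
# Hypothetical mean ADU of one color channel from a flat-field frame,
# measured in each scraped band (bias already subtracted).
A = 2100.0   # microlens array removed, CFA still present
B = 3300.0   # microlens array and CFA both removed (bare photodiode)
C = 5200.0   # untouched region

print(f"CFA transmission   A/B = {A/B:.2f}")   # light left after the color dye
print(f"Microlens boost    C/A = {C/A:.2f}")   # gain contributed by the uLens
print(f"Net effect of mod  B/C = {B/C:.2f}")   # >1 means the mod gains signal
```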

An identical test condition was set up with a 50mm f/6.5 ED telescope pointed at a white screen. Two wavelengths, H-alpha and OIII, were tested with 7nm FWHM filters at the rear. The field is sufficiently flat, so the center regions are used to calculate mean intensity.

image

Test result

The microlens array performs as expected: it typically boosts QE by about 2x in the native channels. Even in non-native color channels, the uLens still boosts the signal by 50% or more. Losing the uLens array is a major downside, but considering the absorption of the color dye even at its peak transmission, stripping the CFA actually minimizes the QE loss. For example, in the red channel at H-alpha, the signal stayed at 64% of the original, even though losing the uLens alone should cut the QE by more than half. The effect is even more apparent at the OIII wavelength: because the green channel peaks at 550nm, the absorption at 500nm is nearly half for this particular sensor, so the net result is no different from the original sensor.

In conclusion, the mono-mod sacrifices some QE for resolution and full-spectrum sensitivity. My estimate puts the final peak QE at around the photodiode fill factor, or roughly 45%. The state-of-the-art CMOS process maximizes the photodiode aperture, making such a mod less prone to QE loss after microlens removal. This is in stark contrast with the Kodak interline CCD structure, where stripping the microlenses would incur a 5-fold QE penalty. The mod should perform well for narrowband imaging, especially of emission nebulae. However, a fully microlensed monochromatic sensor is still preferred for broadband imaging.

Teaser: Nikon DSLR Black Point Hack for Astrophotography

Heads up, astrophotographers: Canon is no longer the best choice for image quality in astrophotography. Today we, the Nikon Hackers, are for the first time able to extract the real, untouched RAW image from the Nikon D7000. This removes the last hurdle towards serious astro-imaging, especially for people doing narrowband work where the background is very dark. It also promises better bias and dark calibration.

This is an exciting moment, for me as an amateur astronomer at least. Here's a quick peek at the dark frame image.

Preview

Here's the image straight out of the camera. The DSP engine is still treating 0 as the black point, so it appears pink on the screen. The histogram also looks odd because its X-axis is gamma-corrected for the JPEG preview.

statistics

The average is now brought back to around 600 ADU, the on-sensor black level setting.

As for image quality, the Sony Exmor CMOS has far less read noise and FPN than Canon's sensors. Dark current is also in the range of 0.15 e-/s once stabilized at room temperature. Under typical winter conditions, the dark current is so low that it is comparable to cooled CCDs.
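A quick synthetic illustration of why that ~600 ADU pedestal matters for calibration (made-up Gaussian read noise, not real camera data):

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 3.5, 1_000_000)   # synthetic read noise, in ADU

with_pedestal = noise + 600                # raw data kept at the on-sensor black level
clipped       = np.clip(noise, 0, None)    # black point forced to 0, negatives clipped

print(with_pedestal.mean() - 600)  # ~0.0 -> the pedestal preserves the true mean
print(clipped.mean())              # ~1.4 -> clipping biases every bias/dark frame
```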

Two options are now available to get the sensor data without any pre-processing. One is the firmware patch called "True dark current". The drawback is that the camera will not use calibrated data, so Gr and Gb pixels will not be at the same conversion gain; and currently it is only available for the D5100 and D7000, as we don't have time to dig into the assembly code for other DSLR models. The second option is my "Dark Current Enable Tool". The downside is that it's only transient: the camera returns to normal once it is power cycled or the metering system goes to sleep.

Thus, if you have a computer during imaging and also use your camera for daylight photography, the second option is best. Otherwise, if you travel like me, go for the first option and keep two firmware copies on your smartphone: copy the desired version over USB OTG and flash the camera with a fully charged battery.

5/17/2015 Update:

We released a new firmware patch for the D5100/D7000/D800 that repurposes the "Color Space" menu entry into one that activates the original sensor data. Thus you can use your DSLR during travel for both astrophotography and daily photography, without flashing different firmware or tethering to a computer. And here's a demo:

Dual Use Modification for D7000

Clearly, H-alpha astronomical imaging benefits greatly from a modified camera: many more nebulae come within feasible exposure times. Yet this poses another hurdle for using the camera in daily photography, as its color balance is completely thrown off. Preset white balance is one way to go, but scenes vary so much that it becomes a chore to keep a sheet of white paper with me. Besides, preset WB only corrects for one light source; with the modified spectral response of the sensor, the correction ratio is no longer the same across color temperatures. Spatially variable lighting, street lights plus moonlight for example, becomes a real challenge to correct. All of this adds up to an insurmountable task in post-processing.

An elegant solution is to take advantage of the original factory filter and make it switchable! So here's the plan: I designed a filter rack, just like the one offered with the Hutech LPS front filter, which can now hold the ICF stack. After measuring the rack and the dimensions of the Nikon lens mount, I drafted the 3D model in CAD and exported the final version as an STL file, a universal format for 3D printing.

ICF-FF-N4

ICF-FF-N4-2

The 3D rendering of the filter rack. I named it ICF-FF-N4

The printing process is accurate to about 0.1mm in XY; the precision in Z is not as high. Nonetheless, it is the horizontal accuracy that matters for filter mounting. It turned out the filter can be secured inside the frame without any screws. The notch in the upper beam is reserved for a bump in the middle of the reflex mirror when it flips up.

IMG_6649

The wave plate must sit between the two anti-aliasing layers. Since we moved the ICF from behind the dust filter to in front of it, we need to make sure its AA layer faces the lens.

IMG_6650

Now the only thing left is to spray the filter rack black. The ICF is multilayer coated. We can now remove the clear filter installed in front of the sensor, since the original focus is restored. But an offset in the focusing system is still needed, because the AF sensor has an additional piece of glass in front of it.

Peeping into Pixels – A micrograph of a CMOS sensor

Macro photography is done at 1x to 2x magnification. A microscope, on the other hand, can easily deliver 40x magnification without an eyepiece. In this post, we are peeping into the basic element that captures the image in digital photography: a pixel on a CMOS sensor. I obtained a Nikon JFET LBCAST sensor from a broken D2H imaging board. LBCAST is still based on CMOS fabrication technology, and it is an active pixel sensor.

Photographing an opaque sample, compared to a biological slice, is considerably harder, since ordinary trans-illumination will not work. Epi-illumination, that is, illuminating through the objective, must be used instead. Basically, a half mirror is placed in the optical path to direct light towards the objective and then back into the eyepiece and camera. Epi-fluorescence instead uses a dichroic mirror and a pair of filters.

LBCAST

Back Side

Cover Glass

The D2H sensor die sits inside a robust 38-pin ceramic dual-in-line package. The bonding wires are shielded by a metal frame underneath the cover glass, which makes it impossible to see the die marking. There is no package marking on the backside except a label with its serial number (or possibly color correction information used for calibration). The cover glass is rather thick, roughly 0.7mm.

Top Left

Top Right

Bottom Right

The corner clearly shows the active pixel region bordered by the optical black and non-microlensed regions. This image was taken with a 10x objective on a stereo microscope. Now we peep in with the 40x objective!

Effective Pixel

The effective pixel array (the pixels that respond to light normally). Note that the active array discards the periphery of the effective array because of color interpolation.

Unfortunately the camera is B/W. The brighter pixels are green, while the darker ones are red and blue. At this resolution we can actually estimate the optical fill factor: it is well below 60%, given such a big lens gap! Even though square microlenses appear to be employed, not all the light is directed into the window. It also seems the microlens array is not fabricated in one cycle, as the lenslets on the blue and red pixels are slightly larger than the green ones.

Optical Black

Now comes the optical black (OB) region at the edge of the active pixels. The optical black pixels have a metal shield over the photodiode window. With light blocked, they output only dark current and the bias level, which is used as the black reference for the active pixel region. Nikon subtracts the average OB value from the intensity values in the active pixel region, which shifts the black level to 0. This is not good for astrophotography; Canon instead adds a 1024 ADU pedestal. From the OB pixels it also becomes clear that only a partial region of the microlens is illuminated, roughly 40%. I believe that is the reason for the low QE and, as a result, the low SNR 18% score of the D2H.

Lens Array border

Finally we come to the border of the lens array, where the bare color filter array (CFA) sits above the pixels. You can clearly see the metal lines (column lines; the sensor is oriented 90°) running in between, which occupy a lot of space and are the reason microlenses are needed.

Wiring

The very bottom right corner of the total pixel array

Each row has a pair of control lines. The upper one in the pair is for JFET select/reset, while the lower one controls the photodiode transfer gate. The pair is made of polysilicon on the substrate, and the two small black dots in between are likely vias or contacts. The column line (metal layer 1) connects to the source of the JFET transistor according to this paper, relaying the pixel signal. Notice the line is really thick to reduce resistance! The reset drain is in metal layer 2 above the row and also serves as a light shield for the transistor below; it is not visible here.

Another interesting observation is that this cover glass has no blurring function. Canon integrates the second anti-aliasing layer of the OLPF into the sensor cover glass in its full-frame and newer-generation APS-C DSLRs. Apparently the OLPF stack is standalone in the D2H.

Line 40

Moving the view to the bottom long edge reveals the column circuits: possibly the buffer, CDS and column scanning driver that latch onto the output amplifier. Note the numeral "4" photo-lithographed on the die, indicating column 40. The die also has a "+" mark every 5 columns in between.

Line 390

Somewhere between columns 385 and 405, there is a recess in the long edge. I'm not sure what this is for. (Image at 10x)

Corner

Top left corner on the opposite long edge of the sensor. (Image at 10x) Even though some of the non-microlensed pixels are hidden beneath the metal frame, we can still see the column numbers running 3, 4, 5…

Last Line

Top right corner: the last marked line is 256, which indicates a total of 2560 columns.

LBCAST-XRay

The ceramic package viewed through an X-ray scanner, showing the bonding wires linking the leads and the die. The metal frame is also visible.

Very interesting, huh? Now we can compare it against a Micron (now Aptina) CMOS sensor with a 5.2um pixel.

Micron 1300

Die marking MI-1300, dated 2002; this is the MT9M001.

MI-1300 Microlens

Now the microlenses themselves. We can clearly see a much narrower lens gap and higher fill factor. This sensor boasts a 55% peak QE, still less than the Sony Exmor sensors. I hope someone can donate one of those for dissection.

The contributor behind the scenes: an Olympus LUC PL FLN 40x objective. This objective is designed for inverted microscopes and has a collar to set the glass slide thickness, allowing compensation of chromatic and spherical aberration.

Updated: 6/8/2014

Image orientation corrected.

Nikon D7000 H-Alpha Conversion

It has long been recognized that Nikon DSLRs have been powered by Sony sensors since the D100. Now, in the third-generation cameras, Nikon switched from HAD CCDs to Sony Exmor CMOS sensors. These CMOS sensors use on-chip column-parallel analog-to-digital converters (ADCs) running at a very low clock rate, around 20kHz, compared to serial external ADCs that run at around 10MHz (such as in the D3/D700/D3s and Canon DSLRs; calculated as 12.2MP x 9FPS / 12 channels ≈ 9.2MP per second per channel). The slower clock rate and the integration of the ADC on-chip significantly reduce read noise, which benefits dynamic range.
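The per-channel rate in that parenthetical, spelled out:

```python
# Serial external ADC readout, e.g. a 12.2 MP sensor at 9 fps over 12 channels
megapixels, fps, channels = 12.2, 9, 12
rate_mpix_per_s = megapixels * fps / channels
print(f"{rate_mpix_per_s:.1f} MP/s per channel")   # ~9.2, i.e. roughly a 10 MHz pixel clock
```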

The latest of them all, the D7000, has been analyzed by Chipworks, who found a Sony IMX071 Exmor sensor inside. Tests by DxOMark indicated increased sensitivity compared to the previous CMOS sensors (D90 and D300s). Interestingly, neither Nikon nor Sony advertises a gapless microlens structure the way Canon always did, but the teardown clearly revealed that the IMX071 microlens array has no gaps in between. (Fig 1)

IMX071 5D vs 5Dii

Figure 1. Chipworks teardown showing the vertical pixel structure. Above: IMX071 gapless microlenses. Below: Canon 5D and Canon 5D Mark II.

With the combination of significantly improved quantum efficiency (QE), lower read noise and lower dark current at room temperature compared to CCDs, these DSLRs should be highly competent at astrophotography. However, one more obstacle remains: DSLRs are not sensitive to the hydrogen-alpha line (656nm) emitted by most emission nebulae, simply because camera manufacturers add a color correction filter to mimic the response of the human eye. This color correction filter absorbs a large proportion of red and almost all infrared photons while leaving the green and blue channels untouched. The silicon-based sensor itself is highly sensitive across the entire visible spectrum from 400 to 700nm, extending into the near infrared up to 1100nm with much lower response. So the simple answer is: remove it!

 

Warning!

Please follow these instructions at your own risk! I will not be responsible if you break your camera in the process! If you have shaky hands or are not confident doing this yourself, send the camera to a qualified company for conversion, or don't do it at all!

 

Filter Structure

But removing it causes another problem: a large amount of infrared now rushes towards the sensor. How do we solve that? Let's take a look at the filter in detail.

Actually, the filter assembly is more complicated: it consists of two parts, a dust reduction filter and a sandwich of three layers of glass cemented together. The latter is usually called the ICF (infrared cut filter), or sometimes, after its other role, the anti-aliasing filter or optical low-pass filter (OLPF). Each layer has its own function. The stack not only suppresses infrared leakage and corrects the color in the red channel, but also blurs the image at the pixel level to reduce the color moiré inherent in a Bayer sensor. The dust reduction filter and the last layer of the ICF are similar in thickness and are both made of birefringent material, while the first layer of the ICF is a wave plate similar to the one in your circular polarizing filter. With the two birefringent layers oriented 90° to each other and the wave plate in between, they separate the light in the horizontal and vertical directions respectively, turning one dot into four and thus blurring the image. The thickness of both is tuned to the pixel pitch: as a general rule, the bigger the pixel, the more blur you need and the thicker the birefringent layers.
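A tiny sketch of that one-dot-into-four geometry; the ~3um walk-off value is only illustrative:

```python
# Each birefringent plate splits every incoming spot into two, offset by its
# walk-off distance; the wave plate in between re-mixes the polarization so
# the second plate can split both beams again.
d_h, d_v = 3.2, 3.2    # horizontal/vertical walk-off in um (illustrative)

spots = [(0.0, 0.0)]
spots = [(x + dx, y) for (x, y) in spots for dx in (0.0, d_h)]   # first plate: horizontal split
spots = [(x, y + dy) for (x, y) in spots for dy in (0.0, d_v)]   # second plate: vertical split
print(spots)   # four spots on the corners of a small square -> the low-pass blur
```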

D7000 sensor module showing inner filter stack on top and a dust reduction filter disassembled on right.

The middle layer of the ICF is the color correction filter, which absorbs deep red photons and infrared. Another thing to notice is that the dust reduction glass is coated with an interference filter that blocks UV and IR; its sharp cutoff lets more than 95% of the visible spectrum between 430 and 680nm pass through. So in this modification we will only remove the three-layer ICF stack while preserving the dust reduction filter, which keeps blocking UV/IR as well as keeping dust away. Removing the ICF also changes the infinity focus distance, and you may not be able to reach infinity focus with some lenses, so I ordered a clear piece of 1mm thick optical glass just in case.

 

Modification

The detailed disassembly steps can be found on the Lifepixel website: http://www.lifepixel.com/tutorials/infrared-diy-tutorials/nikon-d7000-ir

Before tearing down your camera, prepare small boxes for keeping the screws. I used petri dishes to help me remember which batch of screws came from which panel; duct tape can also help you organize them. Also be sure to ground yourself, and avoid doing this during cold, dry winter weather, to prevent static charge from damaging the delicate electronics inside. The screwdriver you need is a #000 or 5/64″ Phillips. Do not fire the flash before the conversion, and do not pop it up: doing so charges the capacitor, which may give you an electric shock if you touch it, or destroy the circuit if you accidentally create a short inside.

The disassembly steps are as follows:

1. Remove battery, eyepiece rubber cap and SD cards. Remove the lens and cap the body.

2. Unscrew and remove the bottom panel

Bottom Removed Bottom Panel

Bottom panel removed; it is made of plastic.

3. Unscrew the screws on the SD slot compartment and viewfinder, the one on the left, and the 3 silver ones on the bottom. Gently lift off the LCD panel unit.

Back Panel

The back panel is made of magnesium alloy.

 Main PCB

4. Carefully flip the hinged connector latch using your fingernail or forceps and gently release the ribbon cable from the slot. Be sure to identify which part is the rotating bar and which is the fixed jack. This is the most crucial and tricky step, and you need to be really careful when dealing with these flat cables. Make sure all the cables are properly released before the next step.

Toshiba CPU Ribbon Cable Disconnect

The main PCB with the Toshiba microcontroller chip. The ribbon cables are disconnected.

5. Unscrew the 6 silver screws on the PCB. Lift the PCB from the right side and slowly pull it to the right, since the USB/HDMI jacks are inserted into the plastic panel on the left of the camera.

Main PCB Backside CMOS PCB

The back side of the PCB, with the virtual horizon sensor in the bottom right corner. Removing the PCB exposes the sensor frame. The sensor power supply cable is L-shaped, and the sensor outputs 8 LVDS channels (+1 for the clock) to the main PCB.

6. Now it's time to access the sensor. From this step on, take dust and dirty hands very seriously; I did the conversion in a laboratory fume hood and wore latex gloves from this step onwards. The 3 big silver screws in the image above fasten the sensor module to the front body of the camera. Once you touch them, the focus calibration is no longer valid. To proceed, release the last ribbon cable and unscrew the sensor board. I suggest marking the screws with a color marker pen so you can match each screw with its mounting hole, as well as its original rotation position.

 

Marking

Mark the screws before proceeding. The CMOS sensor is driven by a 54MHz crystal oscillator.

Working in hood

Now move to the hood

Sensor before Shutter

The sensor module and the shutter blades

7. The piezoelectric element attached to the dust filter is connected via a ribbon cable as well, but we do not need to desolder it. Unscrew the 4 screws holding the dust filter to the sensor unit (2 more on the right side are hidden below the aluminum foil). Once the screws come loose, gently lift the dust filter frame. Use forceps to lift one corner of the ICF and take it out. Reverse the steps to reassemble your camera.

Sensor after conversion

After removal, the sensor assembly appears much more transparent.

 

ICF Structure

ICF/OLPF Dimensions

The stack measures 29.50mm x 25.30mm with a vernier caliper. The wave plate and color correction filter are a little smaller, at 21.50mm in height. These dimensions have been standard for Nikon since the D70. The thickness read from a micrometer is 1.182±0.002mm for the entire stack and 0.538±0.003mm for the transparent anti-aliasing glass. This means the color correction filter plus wave plate is about 0.64mm thick, identical to that in the D90.

ICF

The cross section of the ICF stack under the microscope, showing the three layers: wave plate, color correction layer and birefringent glass from bottom to top. The top faces the CMOS sensor.

 

Focus Shift

Since we did not replace the ICF stack with a clear sheet of glass of equivalent optical depth, the camera becomes significantly nearsighted and phase-detection focus is useless. In my test I set both the lens and the default focus fine-tune to +20, and the result is still slightly blurred. For lenses with a hard infinity stop, it becomes impossible to achieve sharp focus at distance. Right now the only lens I have that can reach infinity focus is the AF 180mm f/2.8D, since this lens has a large tolerance beyond infinity to accommodate the temperature-dependent focus shift of its ED elements.

Infinity after conversion

The live view manual focus position at infinity deviates greatly from the original infinity focus; the bar should point right at the middle of the ∞ mark.

My 18-105mm DX seems to achieve infinity focus at focal lengths greater than 30mm, but not at the wide-angle end.

 

Sharpness Test

The removal of the ICF along with its anti-aliasing layer should increase resolution at the pixel scale, in principle similar to the recently announced D800E. However, since we preserved the first AA layer for its IR-rejection coating and dust reduction, this camera now has astigmatic vision. The images below are of an ISO 12233 chart shot at 70mm from a fixed distance using live view focus. NEF raw files were taken and demosaiced in MaxIm DL. These sections are cropped at 200% from the same image.

After V

After H

After conversion, the vertical resolution is increased compared to the horizontal, while the horizontal resolution remains the same as before (see below).

 

Before V

Before H

Before conversion, resolution is the same in both directions.

This test suggests that the dust filter splits the image in the horizontal direction, while the last layer of the ICF blurs the vertical one.

 

OLPF under microscope

To investigate under a microscope, the last transparent layer of the ICF stack is placed between a stage micrometer and a 40x objective. The images below clearly illustrate the effect of the OLPF layer through the directional blurring of the ruled lines. When the layer is placed with its long edge parallel to the division lines (2nd image), it is clear that the OLPF blurs the vertical direction. The displacement is roughly 3~4um, close to the pixel pitch of the IMX071.

Ruler

The stage micrometer with each division 10um apart

ICF Last Layer-Horizontal

Placing the transparent last layer on the micro-ruler, showing the displacement

ICF Last Layer-Vertical

Rotate 90° and the blurring direction changes accordingly. The green tint is from the color correction layer.

 

Clear glass replacement

Now, in order to achieve infinity focus with my 18-105mm lens, I'm going to replace the ICF with a clear optical glass. One way to test whether the optical depths differ is simply to use a microscope. Place a sheet of paper underneath the filter and focus the microscope at 200x on a thin fiber in the paper, then gently swap in the new optical glass. If the focus is still sharp, the optical depth is at least similar; the remaining error will be within the range of the D7000's focus fine-tune system.

The replacement HK-9 glass with an APS-C sensor for size comparison

Bonding Wire

Notice the bonding wires are quite different from those of the IMX038; many camera websites have used the wrong photo for this sensor. (Click for the large image)

The procedure is the same, just place the glass in the same rubber gasket frame using forceps.

Clear glass on sensor

The rubber gasket sits on the IMX071 Sony CMOS sensor

 

This replacement glass lacks an anti-reflection coating. The thickness is 40.3 mil, or 1.023mm. HK-9 glass is equivalent to BK7 borosilicate glass, with a refractive index of about 1.5. After the replacement, the focus almost comes back to the original factory setting.
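A rough plane-parallel-plate estimate of the focus shift (the refractive index of the original cemented stack is assumed to be about 1.52, and the formula ignores the beam convergence angle):

```python
def focal_shift(thickness_mm: float, n: float) -> float:
    """Longitudinal focus shift introduced by a plane-parallel glass plate.
    Removing the plate moves the image plane toward the lens by this amount."""
    return thickness_mm * (1 - 1 / n)

removed  = focal_shift(1.182, 1.52)   # original ICF stack (n assumed ~1.52)
restored = focal_shift(1.023, 1.517)  # HK-9 / BK7-equivalent replacement glass
print(f"shift from removing the ICF: {removed:.3f} mm")   # ~0.40 mm
print(f"shift restored by HK-9:      {restored:.3f} mm")  # ~0.35 mm
print(f"residual mismatch:           {removed - restored:.3f} mm")
```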

 

Transmission Analysis

To roughly assess the transmission, I used the same lens set to infinity focus and took a 1/5s RAW exposure of an LCD screen at the same aperture and ISO, with the LCD panel at maximum brightness. The table below shows the average raw ADU count in the uniform center area of the image, with the ratios normalized to the ICF-removed case.

             RAW ADU                 Ratio
             R       G       B       R      G      B
Before       5738    11257   7947    0.51   0.86   0.92
ICF Removed  11308   13114   8643    1.00   1.00   1.00
HK-9         10651   12312   7966    0.94   0.94   0.92

It is astonishing that the red channel almost doubled. The green channel also improved a little. Note that the uncoated HK-9 reflects and absorbs 6-8% of the light in all channels.
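The normalization is straightforward; a minimal sketch using the ADU values from the table above:

```python
# Mean raw ADU in the frame center for each configuration (values from the table)
adu = {
    "Before":      {"R": 5738,  "G": 11257, "B": 7947},
    "ICF Removed": {"R": 11308, "G": 13114, "B": 8643},
    "HK-9":        {"R": 10651, "G": 12312, "B": 7966},
}

reference = adu["ICF Removed"]            # everything is normalized to the bare-sensor case
for config, channels in adu.items():
    ratios = {ch: channels[ch] / reference[ch] for ch in "RGB"}
    print(config, {ch: f"{r:.2f}" for ch, r in ratios.items()})
```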

 

Color Correction Filter Spectral Properties

Gaining twice the red light in a channel doesn't mean the gain is distributed equally across the wavelengths within it. Here we use a UV-VIS spectrophotometer to measure the transmission profile of the infrared-absorbing glass.

Transmission Spectrum

Transmission of the ICF, the replacement glass and a quartz glass window (RCP) from 200 to 900nm. Actually, I had doubted whether the replacement glass is HK-9; it absorbs strongly at 300nm and its transmission curve closely resembles N-BK7 glass. Also notice that an uncoated glass window reflects about 8% of the light in total across its two interfaces. The quartz filter is broadband coated, which brings its transmittance up to >96%. Quartz is transparent down to 160~180nm in the UV, below the detection range of this spectrophotometer.

All in all, this means roughly a 4-fold gain at the 656nm H-alpha emission line, plus the increased QE of the new-generation sensor. Personally, I'm well satisfied with this result.

 

Updated 6/7/2013