Cheaper yet powerful camera solutions

It’s been a while since my last blog post. During the past year, I built a few other cameras that were never released on this blog. In the meantime, I have been looking into options to make this work available to fellow amateur astronomers as a viable product. One major blocker is cost. FPGAs are expensive devices for two reasons: 1. They are produced in far smaller volumes than ASICs, yet still use state-of-the-art silicon processes. 2. A massive die area is dedicated to routing and configuration logic. Let’s look at a simple comparison. The MicroZed board I’m using costs $200 for dual Cortex-A9 cores clocked at 666MHz. Contrast that with the quad-core Raspberry Pi 3B, clocked at nearly double the frequency, which costs only $30.

However, these single-board-computer SoCs are not free of challenges either. Most scientific CMOS sensors do not output data over a standard MIPI CSI-2 interface and require an FPGA fabric to do the conversion. Beyond that, we also need to choose a SoC whose CSI-2 interfaces support a high enough total bandwidth. Taking functionality into consideration, it would be preferable to enable edge computing/storage and internet hosting in a single solution. In the end, I concluded the next generation should have the following connectivity:

1. 1000Base-T Ethernet and built-in WiFi support

2. USB 3.0 with a Type-C connector

3. Fast storage via a PCIe NVMe SSD

Besides these, the device should be open enough, with a Technical Reference Manual (TRM) and driver source code available for its various IP blocks. The Raspberry Pi clearly drops out due to its limited CSI-2 bandwidth and absence of fast I/O. After a lengthy and careful comparison, I landed on the Rockchip RK3399. It has dual CSI-2 receivers providing a total of 1.5GB/s bandwidth, and powerful hexa-core A72/A53 CPUs running above 1.5GHz for any processing. FriendlyARM’s NanoPC-T4 is the most compact among all RK3399 dev kits. The board also has its I/O interfaces aligned along one edge, making case design straightforward. And it is vastly cheaper than a Zynq MPSoC with similar I/O connectivity.
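As a quick sanity check on that bandwidth figure, here is a back-of-the-envelope calculation, assuming each of the two CSI-2 receivers runs four data lanes at the 1.5Gb/s per-lane rate used in this project:

```python
# Back-of-the-envelope CSI-2 bandwidth check for the RK3399.
# Assumption: two receiver ports, each with 4 data lanes at 1.5 Gb/s per lane.
LANES_PER_PORT = 4
GBPS_PER_LANE = 1.5
PORTS = 2

gb_per_sec_per_port = LANES_PER_PORT * GBPS_PER_LANE / 8  # GB/s per port
total_gb_per_sec = gb_per_sec_per_port * PORTS

print(f"per port: {gb_per_sec_per_port:.2f} GB/s, total: {total_gb_per_sec:.2f} GB/s")
# per port: 0.75 GB/s, total: 1.50 GB/s
```

So a single four-lane port carries 750MB/s, and the two ports together reach the 1.5GB/s quoted above.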

NanoPC T4

Two MIPI CSI-2 connectors on the right

Now what remains is to provide a cheap FPGA bridge between the sensor and the CSI-2 interface. The difficult part is, of course, the 1.5Gb/s MIPI CSI-2 transmitter. On the datasheet, the 7 Series HR bank OSERDES is rated at 1250Mb/s. But like any other chip vendor, Xilinx derates the I/O with a conservative margin. It has been shown before that these I/Os can be toggled safely at 1.5Gb/s for 1080p60 HDMI operation. Still, that is TMDS33, with a much larger swing than the LVDS/SLVS used by the MIPI D-PHY. To test this out, I put a compatible connector on the last carrier card design using spare I/Os. Because D-PHY is a mixed I/O standard running on the same wires, only the latest UltraScale+ parts support it natively. To combine both the low-power single-ended LVCMOS12 and the high-speed differential SLVS on a cheap 7 Series FPGA, we must add an external resistor network, following Figure 10 in Xilinx XAPP894.

PCB resistor network with some rework

It is possible, though, to merge all the LP positive lines and all the LP negative lines respectively to save some I/O, if we only ever use high-speed differential signaling. In this case, toggling the merged LP lines switches all four lanes into HS mode simultaneously. The resistor divider ratio also had to change, because the same HR bank is shared with LVDS25 signals from the CMOS sensor.
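When reworking a divider ratio like this, a quick Thevenin calculation is enough to sanity-check the resulting swing. The resistor and voltage values below are placeholders for illustration, not the actual XAPP894 figures:

```python
def divider_output(v_src, r_series, r_shunt):
    """Voltage at the junction of a series/shunt divider,
    driven by v_src through r_series into r_shunt to ground."""
    return v_src * r_shunt / (r_series + r_shunt)

# Placeholder values: a 1.2 V LVCMOS12 driver attenuated toward the
# few-hundred-millivolt swing expected on the D-PHY high-speed lines.
v = divider_output(1.2, 499, 100)
print(f"{v * 1000:.0f} mV")  # ~200 mV
```

Repeating this for each leg of the network quickly shows whether a changed ratio still lands inside the receiver’s input range.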

To produce an image, I wrote a test pattern generator that outputs a simple ramp, incrementing the pixel value pixel by pixel along each line. On every new frame, the starting value increases by four. Timing closure was done at 190MHz for the AXI stream, which prevents FIFO underrun at 1.5Gb/s across four lanes. I then took the stock OV13850 camera as the device to mimic. A simple bare-metal application runs on the PS7: it listens for I2C command interrupts, configures the MMCM clocking, sets the image size and blanking, and enables the core.
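The actual generator is FPGA logic, but its behavior can be modeled in a few lines of Python. This is a sketch under my reading of the pattern (each line ramps from a per-frame starting value; the 10-bit depth and function name are assumptions for illustration):

```python
def ramp_frame(width, height, frame_idx, bit_depth=10):
    """Model of the ramp TPG: each line ramps pixel by pixel from a
    starting value that advances by 4 every frame (assumed 10-bit wrap)."""
    max_val = (1 << bit_depth) - 1
    start = (frame_idx * 4) & max_val
    return [[(start + x) & max_val for x in range(width)]
            for _ in range(height)]

frame0 = ramp_frame(8, 2, frame_idx=0)
frame1 = ramp_frame(8, 2, frame_idx=1)
print(frame0[0])  # [0, 1, 2, 3, 4, 5, 6, 7]
print(frame1[0])  # [4, 5, 6, 7, 8, 9, 10, 11]
```

Because the pattern is fully deterministic, any corruption on the link shows up immediately as a deviation from the expected ramp, which is exactly what makes it useful for bring-up.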

Finally, some non-trivial changes are needed on the RK3399 side to receive the stream correctly. After a lengthy driver code review, I found two places that require changes. First, the lane frequency setting in the driver: this eventually populates a V4L2 struct that affects the HS-settle timing for the LP-to-HS transition. Second, the device tree entry specifying the number of lanes used by this sensor.
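For the device tree side, the lane count lives in the sensor’s endpoint node. The fragment below is illustrative only: the node path, labels, and remote-endpoint phandle depend on the board’s device tree, and the 750MHz link frequency corresponds to 1.5Gb/s DDR on each lane. It follows the generic V4L2 fwnode endpoint bindings:

```dts
/* Illustrative endpoint for the mimicked OV13850 node (names are
 * placeholders; adapt to the NanoPC-T4 device tree). */
&i2c1 {
    ov13850: camera@10 {
        compatible = "ovti,ov13850";
        reg = <0x10>;
        port {
            ucam_out: endpoint {
                remote-endpoint = <&mipi_in_ucam>;
                data-lanes = <1 2 3 4>;  /* all four HS lanes */
                link-frequencies = /bits/ 64 <750000000>; /* 1.5 Gb/s DDR */
            };
        };
    };
};
```

The lane-frequency change in the driver itself is separate; in this driver it is set in C code rather than read from the device tree, which is why both places needed patching.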

MicroZed stacked on top of the NanoPC-T4. The jumper cables carry I2C.

There is a mode that disables all ISP functions to get RAW data, which proved extremely helpful for verifying data integrity. In the end, we won’t need the ISP for astronomical imaging anyway.

Timing of the low-power toggle plus HS settle costs 21% overhead

The rolling-ramp TPG wrapping around, previewed on an HDMI screen

This work paves the way for our ongoing full-fledged adapter board. Stay tuned for more information soon!

2 Responses to Cheaper yet powerful camera solutions

  1. Dirk says:

    Maybe this is a stupid question, but I didn’t really get why the FPGA is necessary. Shouldn’t it be possible to connect the OV13850 directly to the NanoPC, as described in the FriendlyARM wiki?

    • jackshencn says:

      This project is about interfacing a true, high-performance scientific sensor, not an average cell-phone CMOS. Most of those sensors do not run on a standard MIPI interface, but instead on custom sLVDS, SLVS, etc. Thus an FPGA bridge is necessary.
