
Integrate a Multi-Camera Solution into Your Multiple View Stereo System

Multiple View Stereo (MVS) systems can be applied in AR/VR, Autonomous Driving, Reverse Engineering, Robot Manipulation, and Remote Sensing. OpenNCC has built several multi-camera visual imaging systems that enable our customers to deploy their algorithms across these different scenarios.



What is MVS?


The following overview is adapted from work at the University of Washington.

Reconstructing 3D geometry from photographs is a classic computer vision problem that has occupied researchers for more than 30 years. Multi-view stereo (MVS) is the general term given to a group of techniques that use stereo correspondence as their main cue and use more than two images. All the MVS algorithms described assume the same input: a set of images and their corresponding camera parameters.


Different applications may use different implementations of each of the main blocks, but the overall approach is always similar (a minimal code sketch follows the list below):

  • Collect images,

  • Compute camera parameters for each image,

  • Reconstruct the 3D geometry of the scene from the set of images and corresponding camera parameters,

  • Optionally reconstruct the materials of the scene.
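
To make the pipeline concrete, here is a minimal two-view sketch of the core correspondence step using OpenCV's semi-global block matcher. It is an illustration only, not OpenNCC code: the file names, focal length, and baseline are placeholder values, and a full MVS system (e.g. COLMAP or OpenMVS) would repeat this matching over many calibrated views and fuse the results.

```python
# Minimal two-view illustration of the stereo-correspondence step in MVS.
# Placeholder inputs: "left.png"/"right.png" are assumed to be a rectified pair.
import cv2
import numpy as np

# 1. Collect images (placeholder file names).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# 2. Camera parameters (assumed values for illustration).
focal_px = 800.0      # focal length in pixels
baseline_m = 0.06     # distance between the two cameras in meters

# 3. Reconstruct geometry: dense correspondence -> disparity -> depth.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM output is fixed-point

valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = focal_px * baseline_m / disparity[valid]  # Z = f * B / d
print("median scene depth (m):", np.median(depth[valid]))
```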

As we shared in our last post, Raytrix uses this kind of technology to build a multi-view television broadcasting system.


OpenNCC's multi-camera solution

A complete system is composed of an image vision system, a hardware system, and a software algorithm system.

For the vision system of a multi-camera setup, the following key problems need to be solved:

Multi-camera connection

Different application scenarios require different numbers of cameras. We provide customers with different combinations of solutions, ranging from 3 to 12 MIPI sensors. The supported configurations are as follows (a short illustrative sketch follows the list):

  • Up to 6 sensors x 2 lanes @ 2.5 Gbps on one SoC, composing a sub-system

  • Up to 3 sensors x 4 lanes @ 2.5 Gbps on one SoC, composing a sub-system

  • Up to 3 x (RGB sensor + ToF sensor) sub-systems combined

  • Up to 18 sensors with 3 multi-camera sub-systems
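
As an illustration of how these combinations scale, the sketch below models a sub-system as sensors x lanes x line rate and sums sensor counts and raw MIPI bandwidth across sub-systems. The class and field names are hypothetical and are not part of the OpenNCC SDK.

```python
# Hypothetical data model for the configurations listed above; the class and
# field names are illustrative, not OpenNCC SDK types.
from dataclasses import dataclass

@dataclass
class SubSystem:
    sensors: int          # MIPI sensors handled by one SoC
    lanes: int            # MIPI lanes per sensor
    gbps_per_lane: float  # line rate per lane

    @property
    def bandwidth_gbps(self) -> float:
        # Upper bound on raw MIPI bandwidth into one SoC.
        return self.sensors * self.lanes * self.gbps_per_lane

# The two single-SoC configurations listed above.
six_cam = SubSystem(sensors=6, lanes=2, gbps_per_lane=2.5)
three_cam = SubSystem(sensors=3, lanes=4, gbps_per_lane=2.5)

# Three sub-systems combined, e.g. up to 18 sensors in total.
array = [six_cam, six_cam, six_cam]
print("total sensors:", sum(s.sensors for s in array))                 # 18
print("total raw bandwidth (Gbps):", sum(s.bandwidth_gbps for s in array))
```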

Frame synchronization

MVS involves heavy algorithmic processing. For a real-time processing system, the image frames must be strictly time-synchronized. This requires a hardware synchronization strategy in the hardware design; on top of that, the software system must avoid delayed output and frame loss and strictly ensure that the frames submitted to the algorithm are time-aligned.
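
As a minimal sketch of the software side of this requirement, the snippet below groups frames from several cameras into time-aligned sets, dropping stragglers rather than passing a time-skewed set downstream. It assumes each frame carries a timestamp from the hardware-synchronized clock; the Frame structure, function name, and 100 µs tolerance are illustrative choices, not OpenNCC SDK APIs.

```python
# Sketch of software-side frame alignment for a multi-camera rig, assuming a
# shared hardware-synchronized timestamp source. Illustrative only.
from collections import namedtuple

Frame = namedtuple("Frame", ["camera_id", "timestamp_us", "data"])

def align_frames(per_camera_frames, tolerance_us=100):
    """Group one frame per camera whose timestamps agree within tolerance_us.

    per_camera_frames: one list per camera, each sorted by timestamp.
    Unmatched frames are dropped so the MVS algorithm never receives a
    time-skewed set.
    """
    aligned = []
    queues = [list(frames) for frames in per_camera_frames]
    while all(queues):
        heads = [q[0] for q in queues]
        t_min = min(f.timestamp_us for f in heads)
        t_max = max(f.timestamp_us for f in heads)
        if t_max - t_min <= tolerance_us:
            aligned.append(heads)          # time-aligned set: submit to MVS
            for q in queues:
                q.pop(0)
        else:
            # Drop the oldest head (its companion was late or lost) and retry.
            oldest = min(range(len(heads)), key=lambda i: heads[i].timestamp_us)
            queues[oldest].pop(0)
    return aligned

# Example: two cameras at ~30 fps, with one frame lost on camera 1.
cam0 = [Frame(0, 0, None), Frame(0, 33_333, None), Frame(0, 66_666, None)]
cam1 = [Frame(1, 40, None), Frame(1, 66_700, None)]
groups = align_frames([cam0, cam1])
# -> two aligned pairs; the unmatched frame at 33_333 us is discarded
```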

Array cameras also bring high aggregate bandwidth, which poses a great challenge for system reliability design. OpenNCC has gone through several iterations, and the system has stood the test of time. The framework synchronizes multiple video channels within one chip, and stable solutions have been accumulated for microsecond-level external synchronization between multiple chips when the camera count is expanded.

For more information, please contact us.
