How Structured Light Sensors Work

A structured light camera such as the Matterport Pro2 uses a very different technology from LIDAR scanners, despite the common perception that they are the same. So what are those differences and why do they matter?

Structured light cameras project a pattern of light onto the surface of a 3D object or space. Different sensors allow for the measurement or capture of different types of objects. For example, a FARO handheld scanner (Freestyle) is great for collecting 3D data on objects, while a Matterport camera (Pro2) is great for rooms, closed spaces and buildings. Both instruments fundamentally perform the same function but use very different technology to achieve similar results.

With structured light, one sensor sends out a pattern of light and another sensor “sees” the distortion in that known pattern. From that distortion, the device calculates 3D information about the objects in view and creates a point cloud. Both instruments also have high-resolution imaging sensors so that pictures can be taken and the color information from those images (RGB data) can be applied. The end result is a photo-realistic point cloud, where you see the individual points in their true color, allowing the space to look very real (even though it is represented by thousands if not millions of points).
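To make the geometry concrete, here is a minimal sketch of how a shift in the projected pattern translates into depth. This is not any vendor’s actual algorithm; the focal length, baseline, and disparity values below are illustrative assumptions:

```python
# Sketch of structured-light triangulation (hypothetical values).
# A projector and a camera sit a known baseline apart; the shift
# ("disparity") of each pattern element on the image sensor encodes depth.

def depth_from_disparity(focal_length_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Depth (m) of a point via the triangulation relation z = f * b / d."""
    return focal_length_px * baseline_m / disparity_px

# Example: 1400 px focal length, 8 cm projector-camera baseline,
# and a pattern dot shifted by 35 px on the sensor:
z = depth_from_disparity(1400.0, 0.08, 35.0)
print(z)  # 3.2 (metres)
```

Repeating this for every recognizable element of the pattern yields the raw 3D points that make up the point cloud.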

As the sensor is moved through space, it adds more points, “building” the entire point cloud outward from the points that were created first. This is where the term “relative accuracy,” which Matterport uses in its accuracy specifications, comes from. Every new point is positioned relative to the points created before it, not from a known, surveyed position, and this is fundamentally one of the biggest differences between the two technologies. While structured light sensors can produce great results, they are not as accurate as LIDAR or other laser-derived measuring devices.
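As a toy illustration of why relative accuracy degrades over a long chain of setups, we can model each setup-to-setup registration as adding a small random alignment error. The 5 mm per-registration figure is an assumption for the sketch, not a Matterport specification:

```python
import random

def registration_drift(num_setups: int, sigma_m: float = 0.005,
                       seed: int = 0) -> list[float]:
    """Cumulative misalignment after each setup, modeled as a 1-D random walk.

    Each registration to the previous setup adds a small Gaussian error
    (assumed sigma_m metres), so later points are accurate only *relative*
    to the chain of setups before them.
    """
    rng = random.Random(seed)
    drift, path = 0.0, []
    for _ in range(num_setups):
        drift += rng.gauss(0.0, sigma_m)
        path.append(drift)
    return path

# After 50 setups, the accumulated offset can far exceed any single
# registration's error, even though each step was individually small.
path = registration_drift(50)
print(f"final drift: {path[-1] * 1000:.1f} mm")
```

A laser scanner registered to known control points does not accumulate error this way, which is the core of the accuracy difference discussed below.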

How a LIDAR Scanner Works

For the best explanation, we defer to Leica Geosystems, which has a great white paper on how their scanners work. For those of you who don’t want to take the time to read through a white paper, let’s take a stab at explaining how LIDAR scanners work and why they are more accurate.
The main differences center on the sensors inside a LIDAR scanner and how their outputs are combined to calculate 3D points in space. On the sensor side they include:

  • Laser for accurate distance measurement
  • Laser measurement encoders to derive accurate distance measurements from the laser, along with intensity measurement of the return signal to gauge measurement quality
  • Compass for instrument orientation
  • Altimeter for accurate elevation detection and atmospheric corrections
  • Angle measurement encoders for the precise automated turning of the device and measurement of how much the device has turned
  • Instrument level compensators to determine how level the instrument is and, when it is not level, to compensate and keep it within its proper operating range
  • HDR camera sensors

How Sensors Work to Produce 3D Coordinates

Remember in grade school calculating which train traveled the farther distance using the formula Rate x Time = Distance? This is exactly what the laser on a 3D scanner is doing, with additional checks to ensure a high degree of quality in the measurement. The scanner measures the amount of time it takes for the laser pulse to leave the scanner, hit an object and return.
From this data, a distance is calculated. Combine that with a horizontal angle and a vertical angle and you can determine where a point sits relative to the known origin of the scanner. Now add a spinning mirror to sweep the laser vertically along the axis of the scanner and you can determine points along that axis. Finally, add a motor to rotate the scanner in a circle and points are created along new vertical axes, so the entire area gets measured, or “spray-painted,” with 3D points. This, in turn, creates your point cloud for each scanner setup.
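The steps above can be sketched in a few lines. This is a simplified model (real scanners apply calibration, leveling, and atmospheric corrections from the other sensors listed earlier; the pulse timing and angles here are made-up values):

```python
import math

C = 299_792_458.0  # speed of light in a vacuum, m/s

def tof_distance(round_trip_s: float) -> float:
    """Rate x Time = Distance, halved because the laser travels out and back."""
    return C * round_trip_s / 2.0

def polar_to_xyz(distance_m: float, horizontal_deg: float,
                 vertical_deg: float) -> tuple[float, float, float]:
    """Combine range with the two encoder angles to get a 3D point
    relative to the scanner's origin (vertical angle from the horizon)."""
    h = math.radians(horizontal_deg)
    v = math.radians(vertical_deg)
    x = distance_m * math.cos(v) * math.cos(h)
    y = distance_m * math.cos(v) * math.sin(h)
    z = distance_m * math.sin(v)
    return (x, y, z)

# A pulse returning after ~66.7 nanoseconds hit something about 10 m away:
d = tof_distance(66.7e-9)
print(round(d, 2))  # 10.0 (metres)
print(polar_to_xyz(d, 45.0, 30.0))  # one point in the cloud
```

Sweeping the mirror varies the vertical angle and rotating the base varies the horizontal angle, so repeating this calculation millions of times produces the full point cloud.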
Most of the scanners today also have HDR camera sensors that are calibrated to the scanner itself, so that the RGB data (color) from the images can be applied to the point cloud. Just like the structured light sensor, you end up with a photorealistic point cloud.
We hope this helps shed some light on, and simplify, a complex process. We would love to hear from you if you have questions or comments!
