Design of 3D Laser Radar Based on Laser Triangulation

  • Yang, Yang (School of Electronics Information Engineering, North China University of Technology) ;
  • Zhang, Yuchen (School of Electronics Information Engineering, North China University of Technology) ;
  • Wang, Yuehai (School of Electronics Information Engineering, North China University of Technology) ;
  • Liu, Danian (School of Electronics Information Engineering, North China University of Technology)
  • Received : 2018.08.22
  • Accepted : 2018.11.19
  • Published : 2019.05.31

Abstract

The aim of this paper is to design a 3D laser radar prototype based on laser triangulation. The mathematical model of distance sensitivity is deduced, and a pixel-to-distance conversion formula is discussed and used to complete 3D scanning. A spot center extraction algorithm is proposed, and the errors introduced by the line laser, camera distortion and installation are corrected using the proposed weighted average algorithm. Finally, a three-dimensional analytic computational algorithm is given to transform the measured distances into point cloud data. The experimental results show that this 3D laser radar can accomplish 3D object scanning and environment 3D reconstruction tasks. In addition, the experiments prove that the product of the camera focal length and the baseline length is the key factor influencing measurement accuracy.

1. Introduction

The principle of laser triangulation is to irradiate the measured object with a laser beam at a certain angle of incidence. The laser is reflected and scattered at the surface of the object, and a lens placed at another angle converges the reflected light so that the spot is imaged on a CMOS position sensor. When the measured object moves along the laser direction, the spot on the position sensor also moves, and its displacement corresponds to the moving distance of the measured object. Therefore, the distance between the measured object and the baseline can be calculated from the displacement of the spot. Since the incident light and the reflected light form a triangle, the geometric triangle theorem is used to relate the spot displacement to the distance, which is why the method is called laser triangulation.

The earliest application of laser triangulation was proposed by D. Burrows and J. Hadwin [1] in 1973. They built a system that used laser scanning to create a seafloor profile with a scan range of 10-90 m; the vertical accuracy of the profile is about 0.75%. This system needs to be deployed on a large airship and requires a high-powered laser transmitter. In 1986, Osamu Ozeki et al. [2] used laser triangulation to measure and reconstruct a 3D object. The measured object has to be placed in a square area with a fixed side length of 60 cm, the distance between the measured object and the measuring device is essentially constant, and the object has to be rotated slowly; after reconstruction the measurement error is about 2 cm. In 1988, J.-L. Jezouin et al. [3] also used laser triangulation to reconstruct 3D objects, while David Acosta et al. [4] used laser triangulation to scan a simplified human head model in 2006. Both applications gave only the final reconstruction renderings, without detailed error analysis. In 2008, Marc Hildebrandt et al. [5] built an underwater scanner using laser triangulation and designed a calibration procedure for the instrument. In 2015, Muhammad Ogin Hasanuddin et al. [6] scanned a human tooth model and achieved errors of less than 4 mm at a distance of 40 cm. In 2013, C. C. Liebe and K. Coste [7] proposed a distance measuring sensor based on images and triangulation. The angle of the laser varied with distance and was precisely measured by a calibration method. The design used a linear polarizer and a band-pass filter to remove the background light. Its maximum measuring distance was 10 m, and the measurement error was 1.5 mm at 1 m, increasing to 8 mm at 4 m. However, it supports only single-point ranging and cannot measure distances within 1 m. The choice of a high-cost CCD camera is another problem, and the measurement efficiency decays quickly as the measurement distance increases. The common problems of these applications of laser triangulation are low scanning efficiency, accuracy that is acceptable at close range but degrades rapidly as the measuring distance increases, and unresolved errors caused by surface properties [8] - [10]. Papers [11] - [17] used various methods to address these problems. Klaus Wendt et al. [11] used multiple highly accurate tracking laser interferometers to avoid Abbe error; P. Morandi et al. [12] applied a rotatable plane laser beam; Feng Li et al. [13] and Guolu Ma et al. [14] proposed new coordinate measuring methods. Yajun Wang et al. [15] used a digital micro-mirror device (DMD) to improve measuring speed; V.H. Flores et al. [16] presented a novel Panoramic Fringe Projection (PFP) system, and César-Cruz Almaraz-Cabral et al. [17] introduced a panoramic profilometric system. These applications measure 3D structures or shapes with hidden parts, create structural tomograms, monitor the welding process, or retrieve the 3D terrain of quasi-cylindrical objects. However, their drawbacks are multi-sensor systems, long processing times and expensive special equipment.

It can be seen that the ranging method based on laser triangulation has the advantages of low cost and a wide range of applications; how to improve its accuracy, however, is a key issue, and how to handle the errors introduced by installation or electronic components is not addressed in the papers above. This paper aims to design a low-cost 3D laser radar. The radar uses laser triangulation to measure distance, and a line laser and a stepper motor to realize 3D scanning. During scanning, a spot center extraction algorithm and a weighted average algorithm are adopted to reduce the error. Finally, a three-dimensional solution algorithm is proposed and used to generate point cloud data. The radar has the advantages of low cost and high measurement efficiency, and has practical application benefits in three-dimensional object modeling, environment map construction, autonomous robot obstacle avoidance and other fields.

2. Method

2.1 Single Point Ranging Model

The principle of laser triangulation is applied for distance measurement. Fig. 2.1 shows the basic geometric model of triangulation.

 

Fig. 2.1. Laser triangulation ranging model

In Fig. 2.1, s is the distance between the laser and the camera center, f is the focal length of the camera, d is the distance between the laser and the measured object, q is the vertical distance from the measured object to the line formed by the laser and the camera, and x is the pixel position of the reflected laser spot in the camera frame. The dotted line parallel to the laser beam indicates that when the object is infinitely far from the radar, the reflected laser spot falls exactly on the edge of the CMOS sensor; in other words, when measuring objects at infinity the laser spot falls on one end of the image, while at the smallest ranging distance \(q_{\min }\) the laser spot falls on the other end of the image. Equation (2.1) gives the vertical distance from the radar to the object.

\(q=\frac{f s}{x}\)    (2.1)

This equation shows the hyperbolic relationship between the spot position in the image and the object distance, which is the essence of laser triangulation. This non-linear relationship makes it difficult to measure distant objects: the distance sensitivity \(d q / d x\) varies with the square of the distance.

 \(\frac{d q}{d x}=-\frac{q^{2}}{f s}\)       (2.2)

The design of the single-point model therefore involves a trade-off between the minimum measurable distance (2.1) and the distance resolution (2.2): a small fs allows nearer objects to be measured, while a larger fs yields a higher distance resolution.
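To make this trade-off concrete, the short sketch below evaluates Equations (2.1) and (2.2) numerically. It is only an illustration: the constants (f = 6 mm, fs = 760 mm2, 6 μm pixels) are borrowed from the prototype parameters discussed in Section 3, and the spot offset is assumed to be the pixel index times the pixel size.

```python
# Illustrative check of Eqs. (2.1) and (2.2); constants are assumed, not calibrated values.
f = 6e-3            # camera focal length [m]
fs = 760e-6         # product f*s [m^2]
pix_size = 6e-6     # CMOS pixel pitch [m]

def distance_from_pixel(x_pix):
    """Eq. (2.1): q = f*s / x, with x the spot offset on the sensor [m]."""
    return fs / (x_pix * pix_size)

def sensitivity(q):
    """Eq. (2.2): |dq/dx| = q^2 / (f*s), which grows with the square of the distance."""
    return q ** 2 / fs

for x_pix in (640, 64, 8):   # spot near the image edge ... spot near the optical axis
    q = distance_from_pixel(x_pix)
    print(f"x = {x_pix:3d} px -> q = {q:6.2f} m, |dq/dx| = {sensitivity(q):9.1f}")
```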

2.2 Spot Center Extraction Algorithm

When the camera captures the laser light reflected from the measured object, the laser does not produce a precise point on the sensor but forms a light spot. The center of this spot must therefore be computed as a pixel position before the distance can be calculated. The simplest method is to take the coordinate of the brightest pixel in the spot directly. However, the pixel value x used in Equation (2.1) would then be an integer, so the calculated distance q would exhibit relatively large jumps. In this paper, the centroid method is used to locate the spot at the sub-pixel level so that x becomes more accurate. The algorithm traverses the image from the left to find the position of the first brightest point, traverses from the right to find the position of the first brightest point, and averages the two positions to obtain the central brightest point \(x_{\text {center}}\). Then the weighted average over the ±10 pixels around this central brightest point is calculated to obtain the actual spot center position \(x_{\text {true}}\), as given by Equation (2.3), where \(I(x)\) is the brightness value at pixel x. As a result, the laser spot center can be located to at least the 0.1-pixel level.

\(x_{\text {true }}=\sum_{\pm 10}[I(x) * x] / \sum_{\pm 10} I(x)\)      (2.3)
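The centroid extraction described above can be sketched as follows; the function name and the synthetic test row are our own, and an 8-bit grayscale camera line is assumed.

```python
import numpy as np

def spot_center(row, half_window=10):
    """Sub-pixel spot center on one camera row (Section 2.2, Eq. (2.3))."""
    peak = row.max()
    left = int(np.argmax(row == peak))                         # first brightest pixel from the left
    right = len(row) - 1 - int(np.argmax(row[::-1] == peak))   # first brightest pixel from the right
    x_center = (left + right) // 2                             # central brightest point

    lo = max(x_center - half_window, 0)                        # +/- 10 pixel neighbourhood
    hi = min(x_center + half_window + 1, len(row))
    x = np.arange(lo, hi)
    I = row[lo:hi].astype(float)
    return float((I * x).sum() / I.sum())                      # x_true, roughly 0.1-pixel resolution

# Synthetic blurred spot centered near pixel 300.4
row = (np.exp(-0.5 * ((np.arange(640) - 300.4) / 3.0) ** 2) * 255).astype(np.uint8)
print(spot_center(row))   # approximately 300.4
```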

After obtaining the spot center position, the distance from the object to the radar can be found through Equation (2.1), which achieves single-point ranging. If the spot laser is replaced with a line laser, the laser emits a straight line; wherever this line meets objects in the scene it is reflected, and the camera captures an image of the reflected light, so that many points are ranged simultaneously.

During installation, the mechanical constants of the assembled structure cannot exactly match the theoretical values for various reasons. This results in a deviation of the mapping from pixel position to actual distance. Therefore, the installed radar first has to measure a standard plane multiple times; the data are preprocessed, and the mapping between pixel position and actual distance is then curve-fitted using the least squares method.
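As an illustration of this fitting step, the sketch below fits the functional form of Equation (2.10), q = fs/(pixSize * x + C), to a handful of hypothetical calibration samples with SciPy's least-squares curve_fit; the lumped parameters a = fs/pixSize and b = C/pixSize and the sample values are assumptions for demonstration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    """q = a / (x + b), i.e. Eq. (2.10) with a = fs/pixSize and b = C/pixSize."""
    return a / (x + b)

# Hypothetical calibration pairs: spot pixel position vs. known standard-plane distance [m]
pixels    = np.array([620.0, 310.0, 205.0, 152.0, 121.0])
distances = np.array([0.205, 0.405, 0.610, 0.820, 1.030])

(a, b), _ = curve_fit(model, pixels, distances, p0=(120.0, 0.0))
print(f"fitted: q = {a:.2f} / (x + {b:.2f})")
print("residuals [mm]:", 1000 * (distances - model(pixels, a, b)))
```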

2.3 The Weighted Average Algorithm

The core of the distance ranging is to determine the distance from the pixel position of the brightest laser spot in the camera frame, so the pixel position is very important. However, because of errors that cannot be completely avoided, such as the manufacturing error of the line laser and the installation error, the reflection of the laser beam seen in the camera frame is not a vertical straight line when the laser irradiates a standard plane. This leads to errors when measuring points at different heights on the same plane. From Equation (2.2), we know that the error grows with distance, which greatly affects the measurement accuracy. Therefore, this paper uses a weighted average algorithm for error correction at the pixel level.

A standard plane is measured at a certain position, and the theoretical true value of the resulting line is \(P_{i}\). The actual measurement value of this laser radar system is \(Z_{i}\), and the compensation value \(\Delta_{i}\) of the measured data is obtained from Equation (2.4). According to the minimum detection distance of this system, the standard plane is measured k times at equal intervals between 0.02 m and 5 m, and the lookup table \(\Delta D=\{\Delta_{i}(i=1,2, \ldots, k)\}\) of pixel position error compensation values is established by Equation (2.4). To facilitate the search, a lookup table \(D=\{Z_{i}(i=1,2, \ldots, k)\}\) of index coordinates is also created. In this paper, the compensation lookup table \(\Delta D\) and the index coordinate lookup table D are collectively referred to as the pixel position error compensation lookup table.

\(\Delta_{i}=P_{i}-Z_{i}(i=1,2, \ldots, k)\)        (2.4)

Since the lookup table is discrete, global measurement data cannot be error-compensated using Equation (2.4) alone. When the uncorrected data \(d \notin D\), a weighted average compensation method is required to compensate the pixel value error in the continuous depth space. Suppose the uncorrected value \(z(u)\) lies between \(Z_{i}(u)\) and \(Z_{i+1}(u)\) in the index coordinate lookup table (that is, \(Z_{i}(u)<z(u)<Z_{i+1}(u)\)). A threshold s is set, and the two-dimensional Euclidean distances between \(z(u)\) and all points in the neighborhoods of \(Z_{i}(u)\) and \(Z_{i+1}(u)\) in the lookup table are calculated. The distance between any point of \(Z_{i}\) and \(z(u)\) is given by Equation (2.5), and the distance between any point of \(Z_{i+1}\) and \(z(u)\) is given by Equation (2.6).

\(L_{i}(m)=\sqrt{\left(z(u)-Z_{i}(m)\right)^{2}+\left(u-m_{i}\right)^{2}}\)      (2.5)

\(L_{i+1}(m)=\sqrt{\left(z(u)-Z_{i+1}(m)\right)^{2}+\left(u-m_{i+1}\right)^{2}}\)      (2.6)

In these formulas, \(u-s \leq m \leq u+s\); \(m_{i}\) and \(m_{i+1}\) are the ordinates of the points \(Z_{i}(m)\) and \(Z_{i+1}(m)\) in the camera frame, respectively, and u is the ordinate of the point \(z(u)\) in the camera frame.

Then, the regional weighted average compensation value is calculated as follows:

\(\Delta d(u)=\Delta_{i}(u), \text { when } z(u)=Z_{i}(u)\)      (2.7)

\(\Delta d(u)=\frac{S U M_{\Delta}(u)}{S U M_{L}(u)}, \text { when } Z_{i}(u)<z(u)<Z_{i+1}(u)\)       (2.8)

In the formula,

\(S U M_{L}(u)=\sum_{m=u-s}^{u+s}\left(L_{i}(m)+L_{i+1}(m)\right)\)

\(S U M_{\Delta}(u)=\sum_{m=u-s}^{u+s}\left(\frac{S U M_{L}(m)}{L_{i}(m)} * \Delta_{i}(m)\right)+\sum_{m=u-s}^{u+s}\left(\frac{S U M_{L}(m)}{L_{i+1}(m)} * \Delta_{i+1}(m)\right)\)

Thus, the error compensation formula at any point in the continuous depth space is:

\(Z_{\text {corrected}}(u)=\mathrm{z}(\mathrm{u})+\Delta \mathrm{d}(\mathrm{u})\)       (2.9)
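One possible implementation of Equations (2.4)-(2.9) is sketched below. It assumes the lookup tables D and ΔD are stored as k x H arrays indexed by calibration plane i and camera row m, with each column of D sorted by increasing depth, and that z lies between the first and last calibrated planes. Where the paper's notation is ambiguous (the inner SUM_L term of Equation (2.8)), SUM_L is evaluated at u.

```python
import numpy as np

def compensate(z, u, D, Delta, s=3):
    """Weighted-average error compensation for one reading z at camera row u (Section 2.3).

    D     : (k, H) measured standard-plane depths Z_i(m)
    Delta : (k, H) compensation values Delta_i(m) = P_i - Z_i(m)   (Eq. 2.4)
    s     : half-width of the row neighbourhood used in the weighting
    """
    column = D[:, u]
    hit = np.where(np.isclose(column, z))[0]
    if hit.size:                                   # Eq. (2.7): z coincides with a calibrated plane
        return z + Delta[hit[0], u]

    i = int(np.searchsorted(column, z)) - 1        # bracket: Z_i(u) < z < Z_{i+1}(u)
    m = np.arange(max(u - s, 0), min(u + s + 1, D.shape[1]))

    L_i  = np.sqrt((z - D[i, m]) ** 2 + (u - m) ** 2)       # Eq. (2.5)
    L_i1 = np.sqrt((z - D[i + 1, m]) ** 2 + (u - m) ** 2)   # Eq. (2.6)
    sum_L = (L_i + L_i1).sum()

    sum_D = (sum_L / L_i * Delta[i, m]).sum() + (sum_L / L_i1 * Delta[i + 1, m]).sum()
    return z + sum_D / sum_L                       # Eqs. (2.8) and (2.9)
```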

2.4 Three-dimensional Solution Algorithm

From the previous two steps, we obtain the set of image coordinates (x, y) of the brightest spots. Combining these with the known constants, a point cloud in the polar coordinate system can be solved according to the following formulas. Equations (2.11) and (2.12) can be deduced from Fig. 2.2, where O'P' = q.

\(q=f s /(\text {pixSize} * x+C)\)      (2.10)

\(\text { pitch }=\arctan \left(\frac{P P^{\prime}}{f}\right)=\arctan \left(\left(y-\frac{480}{2}\right) * \text { pixSize } / f\right)\)       (2.11)

\(O^{\prime} P=O^{\prime} P^{\prime} / \cos (p i t c h)\)       (2.12)

In Fig. 2.3, the five points L, O, O', C and P' lie in the same plane. The object point P lies in the plane below these five points, directly below the point P'. Equations (2.13) to (2.17) can be deduced from Fig. 2.3.

\(O^{\prime} L=O^{\prime} P^{\prime} * \tan \alpha\)      (2.13)

\(P^{\prime} L=\sqrt{O^{\prime} P^{\prime 2}+O^{\prime} L^{2}}\)      (2.14)

\(O P^{\prime}=\sqrt{O O^{\prime 2}+O^{\prime} P^{\prime 2}}\)       (2.15)

\(y a w=\frac{\pi}{2}-\arccos \left(\frac{O L^{2}+O P^{\prime 2}-P^{\prime} L^{2}}{2 O L^{*} O P^{\prime}}\right)\)       (2.16)

\(d=\sqrt{O O^{\prime 2}+O^{\prime} P^{2}}\)       (2.17)

 

Fig. 2.2. 3D solution diagram 1

 

Fig. 2.3. 3D solution diagram 2

Finally, the result of the algorithm is output as three-dimensional polar coordinate data, where O is the coordinate origin and d is the polar radius.
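A sketch of this solution chain is given below. The mounting constants (the offset C, the line-laser angle α used in Equation (2.13), and the length OO' in Fig. 2.3) are placeholder values, not the prototype's calibrated ones, and the length OL, which Equations (2.13)-(2.17) do not define explicitly, is reconstructed here as the hypotenuse of the right triangle O-O'-L; that reconstruction is our assumption based on Fig. 2.3.

```python
import math

F        = 6e-3               # focal length f [m] (assumed)
FS       = 760e-6             # f*s [m^2] (assumed)
PIX_SIZE = 6e-6               # pixel pitch [m]
C        = 0.0                # offset constant of Eq. (2.10) (assumed)
ALPHA    = math.radians(13)   # line-laser plane angle of Eq. (2.13) (assumed)
OO       = 0.05               # length O-O' in Fig. 2.3 [m] (assumed)

def pixel_to_polar(x, y):
    """Map a spot pixel (x, y) to polar data (d, pitch, yaw), following Eqs. (2.10)-(2.17)."""
    q     = FS / (PIX_SIZE * x + C)                       # Eq. (2.10), q = O'P'
    pitch = math.atan((y - 480 / 2) * PIX_SIZE / F)       # Eq. (2.11)
    OP_p  = q / math.cos(pitch)                           # Eq. (2.12), O'P
    OL_p  = q * math.tan(ALPHA)                           # Eq. (2.13), O'L
    PL_p  = math.hypot(q, OL_p)                           # Eq. (2.14), P'L
    OP2   = math.hypot(OO, q)                             # Eq. (2.15), OP'
    OL    = math.hypot(OO, OL_p)                          # assumed: OL from the right triangle O-O'-L
    yaw   = math.pi / 2 - math.acos((OL**2 + OP2**2 - PL_p**2) / (2 * OL * OP2))  # Eq. (2.16)
    d     = math.hypot(OO, OP_p)                          # Eq. (2.17), polar radius
    return d, pitch, yaw

print(pixel_to_polar(320, 260))
```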

3. System Setup and Experiment Results

In order to verify the feasibility of the algorithms, a 3D laser radar prototype was produced, and its mechanical, electronic, optical and other parts were completed.

3.1 Mechanical Structure

As shown in Fig. 3.1, the 3D laser radar communicates with the outside world through three wires. The figure is an exploded view of the plastic box, in which the layout of the electrical and electronic installation can be clearly seen.

 

Fig. 3.1. 3D laser radar structure diagram

3.2 Optical Part Design

As shown in Fig. 3.2, the line laser emits a continuous laser beam. In order to remove interference, the light reflected by the object is filtered and converged by the lens, then converted to a digital signal by the CMOS image sensor and uploaded to a PC.

 

Fig. 3.2. Optical system diagram

The wavelength of the laser is 808 nm, which is in the infrared band. This effectively avoids interference from visible light and makes detection more reliable. When the lenses were purchased, the manufacturer was asked to attach a dedicated 808 nm narrow-band filter in front of the lens, so that only light near 808 nm can pass through. An OV2710-based USB camera is selected for image acquisition; the size of each pixel is 6 μm x 6 μm. In this design, more than half of the cost of the entire radar is spent on the high-speed camera.

 

3.3 Electronic System Design

The system design diagram is shown in Fig. 3.3. The hardware of the 3D laser radar mainly consists of two parts: the rotary scanning PTZ and the electronic system. The line laser, camera and stepper motor are the main components of the rotary scanning PTZ. The electronic system consists of an STM32 controller, a human-computer interface, a stepper motor driver and a relay.

 

Fig. 3.3. System design diagram

3.4 Measurement Accuracy Analysis

This paper selects a 640x480 resolution camera whose pixel size is 6 μm. In addition, the algorithm is expected to locate the laser spot with a precision of 0.1 pixel or better, as expressed in Equation (3.2). Based on these parameters, the relationship between fs, the minimum distance and the distance resolution can be plotted (Fig. 3.4).

If the minimum measuring distance is to be within 20 cm, fs should be 768 mm2 or less, as calculated by Equation (3.1). If the distance resolution is to be 30 mm or better at 6 m, fs should be greater than 700 mm2 according to Equation (3.3).

\(q=\frac{f s}{640 * p i x S i z e}\)       (3.1)

\(d x=d(\text {pixSize} * 0.1 * \text {pixel})=\text {pixSize} * 0.1 * d(\text {pixel})\)       (3.2)

\(\frac{d q}{d(\text {pixel})}=\frac{\text {pixSize} * 0.1 * q^{2}}{f s}\)       (3.3)
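The two numerical bounds quoted above follow directly from Equations (3.1) and (3.3). The short check below assumes the stated 640-pixel image width, 6 μm pixel size and 0.1-pixel spot localisation.

```python
pix_size = 6e-6                       # [m]

# Eq. (3.1): the spot can shift at most 640 pixels, so q_min = fs / (640 * pixSize).
q_min = 0.20                          # required minimum range [m]
fs_max = q_min * 640 * pix_size       # -> 7.68e-4 m^2 = 768 mm^2

# Eq. (3.3): dq = pixSize * 0.1 * q^2 / fs, required to stay below 30 mm at q = 6 m.
q, dq = 6.0, 0.030
fs_min = pix_size * 0.1 * q**2 / dq   # -> 7.2e-4 m^2 = 720 mm^2 (cf. the ~700 mm2 stated above)

print(f"fs <= {fs_max * 1e6:.0f} mm^2 for a 20 cm minimum range")
print(f"fs >= {fs_min * 1e6:.0f} mm^2 for 30 mm resolution at 6 m")
```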

Fig. 3.4 shows the relationship between the minimum detection range and the distance resolution at 6 m. In this paper, two laser radars with fs of 760 mm2 and 1500 mm2 are built as a control group, and their minimum detection distances and distance resolutions are compared. Theoretically, increasing fs improves the distance resolution, and the ranging accuracy decreases more slowly as the measuring distance increases. On the other hand, a small fs shortens the minimum measurement distance, allowing close objects to be measured. The vertical short line on the left of Fig. 3.4 corresponds to fs = 760 mm2, while the vertical short line on the right corresponds to fs = 1500 mm2.

 

Fig. 3.4. The minimum detection range and relative resolution range at 6 m distance

Finally, the laser angle β relative to the main optical axis can be determined by the Equation (3.4):

\(\beta=\arctan (f /(320 * 6 \mu m)) \approx 77^{\circ}\)     (3.4)

The constant fs of the 3D laser radar can be achieved in different ways. In this paper, a wide field of view is preferred; since a small focal length gives a wider field of view, a 6 mm focal length lens is chosen.

3.5 3D Laser Radar Prototype

The physical structure of the 3D laser radar prototypes is shown in Fig. 3.5. Because the same camera is used for both radars, f is identical, and a longer frame allows a longer baseline s. Therefore, the radar with the green frame (smaller fs) is shorter than the red one.

 

Fig. 3.5. 3D laser radar prototype (fs=760 mm2, GREEN; fs=1500 mm2, RED)

3.6 Curve Fitting

According to the single-point model and the distance conversion formula of laser triangulation, we choose the longitudinal center of the camera as the standard point. By measuring a standard plane at equally spaced distances, we obtain a mapping between pixel position and true distance. Then, according to the formula model, curve fitting is performed using the least squares method, and the pixel-position-to-actual-distance conversion formula is finally obtained. The results are shown in Table 3.1 and Table 3.2.

Table 3.1. Curve Fitting (fs=760 mm2)

 

Table 3.2. Curve Fitting (fs=1500 mm2)

 

3.7 Error Analysis of Weighted Average Algorithm

As shown in Fig. 3.6 and Fig. 3.7, it can be seen clearly from the point sets obtained by measuring a standard plane that, in the 640 x 480 frame, the "center part" of the plane (pixels near the 240th of the 480 vertical pixels) is measured well, while the "upper and lower ends" are warped, with significantly higher error than the center. The point set of the standard plane measurement thus forms a curve. In Fig. 3.6 and Fig. 3.7, the point set is measured by the line laser, and the vertical line in the middle marks the pixel position corresponding to the actual distance at the central point (real point) of the camera's longitudinal direction.

 

Fig. 3.6. Standard plane distance measurement at 1.007 m (fs=760 mm2)

 

Fig. 3.7. Standard plane distance measurement at 1.005 m (fs=1500 mm2)

In order to quantitatively analyze the distribution of the pixel error, the installed radar was used to measure standard planes at distances from 20 cm to 4 m, and the deviation between the measured data and the standard plane was calculated, as shown in Tables 3.3 and 3.4. From the data, we can see that:

The standard deviation of the point sets near the central point is small, while the standard deviation at the upper and lower edges is large.

Table 3.3. Pixel error distribution (fs=760 mm2)

 

Table 3.4. Pixel error distribution (fs=1500 mm2)

 

It can be seen from the above analysis that the error of the pixel value data in the middle part is obviously smaller than at the upper and lower ends. Therefore, this paper takes the longitudinal center point of each measured point set as the theoretical value of the current distance measurement. Then, following the method described in Section 2.3, the pixel position error compensation lookup table is created from Equation (2.4).

To validate the effectiveness of the proposed algorithm, pixel value data for testing are collected from standard planes at randomly chosen locations using the installed radar, and the pixel value data are compensated and verified according to Equation (2.9). Fig. 3.8 and 3.9 compare the pixel data collected by the radar before and after compensation. It can be seen that the pixel error is corrected noticeably after applying our algorithm.

 

Fig. 3.8. Pixel data collected by the radar before and after the compensation at 1.007m (fs=760 mm2)

 

Fig. 3.9. Pixel data collected by the radar before and after the compensation at 1.005m (fs=1500 mm2)

The maximum and average errors before and after compensation are calculated for the test data, and the actual errors after pixel-to-distance conversion are shown in Tables 3.5 and 3.6.

Table 3.5. Comparison of uncompensated and compensated data (fs=760 mm2)

 

Table 3.6. Comparison of uncompensated and compensated data (fs=1500 mm2)  

As can be seen from the tables above, a larger fs generally provides better distance resolution and higher accuracy but sacrifices the minimum measurement distance, while a smaller fs can measure objects at closer distances. When fs is 760 mm2, distances down to 20 cm can be measured; when fs is 1500 mm2, the minimum measurable distance is 40 cm, which agrees with the function curves in Fig. 3.4.

3.8 Scan Test

In order to demonstrate the usability of the laser radar designed in this paper, we set up an indoor scene and placed the two radars at the same location to conduct scan tests. A photograph of the scene is shown in Fig. 3.10.

 

Fig. 3.10. Indoor scan scene

There are three boxes in the scene: boxes A, B and C. The three boxes are the same size, and the distances between the laser radar and boxes A, B and C are 1 m, 2 m and 3 m, respectively. The point cloud data of each scan are shown below.

 

Fig. 3.11. Scan test

Fig. 3.11 shows the scan results of the two laser radars. The points describing each box can be counted according to their 3D coordinates. The errors between the measured distances and the true distances of all points are calculated, and the points whose errors are less than 1% are counted as correct points. The number of points and the percentage of points that match the true distance of each box are shown in Tables 3.7 and 3.8.

Table 3.7. The correct rate of boxes scanning (GREEN, fs=760mm2)

 

Table 3.8. The correct rate of boxes scanning (RED, fs=1500mm2)

 

According to all the scan results of the two laser radars, the measurement accuracy of the equipment with the larger fs is greatly improved. It can also be seen from Tables 3.5 and 3.6 that when the measuring distance is less than 3 m, the error of the smaller-fs equipment is at least twice that of the larger-fs equipment. The product of the camera focal length and the baseline length is the key factor influencing measurement accuracy. Therefore, in applications that do not require measuring near objects and do not constrain the volume of the equipment, the measurement accuracy can be greatly improved by selecting a larger fs.
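As a rough illustration of how the correct-point counts in Tables 3.7 and 3.8 can be obtained from the scanned point cloud, the sketch below computes the range error of every point attributed to one box and counts the points whose relative error is below 1%; the point data, the radar origin and the function name are hypothetical.

```python
import numpy as np

def correct_rate(points, true_distance, radar_origin=(0.0, 0.0, 0.0), tol=0.01):
    """Count points whose measured range deviates from the true distance by less than tol (1 %).

    points: (N, 3) array of point-cloud coordinates attributed to one box.
    """
    d = np.linalg.norm(np.asarray(points) - np.asarray(radar_origin), axis=1)
    errors = np.abs(d - true_distance) / true_distance
    correct = int((errors < tol).sum())
    return correct, correct / len(d)

# Hypothetical cluster of points scattered around box A at about 1 m
pts = np.random.normal(loc=(1.0, 0.0, 0.3), scale=0.004, size=(200, 3))
print(correct_rate(pts, true_distance=float(np.linalg.norm([1.0, 0.0, 0.3]))))
```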

4. Conclusion

This paper designed and produced a low-cost 3D laser radar. Based on the principle of laser triangulation, a mathematical model of distance sensitivity is deduced. On this basis, a pixel-to-distance conversion formula is derived and extended to 3D scanning. A spot center extraction algorithm is proposed, and the weighted average model is used to correct the manufacturing error of the line laser, camera distortion and installation error. Finally, the measured distances are converted into point cloud data using a three-dimensional solution algorithm. The experimental results show that the 3D laser radar designed in this paper can accomplish 3D object scanning and environment 3D reconstruction tasks. When the product of the camera focal length and the radar baseline length is 760 mm2, the minimum measurement distance is 200 mm, and the measurement error is less than 25 mm within 1 m, no more than 53 mm within 3 m, and no more than 70 mm within 4 m. When the product of the camera focal length and the baseline length is 1500 mm2, the minimum measurement distance is 400 mm, and the measurement error is no more than 6 mm within 1 m, no more than 33 mm within 3 m, and no more than 64 mm within 4 m. The product of the camera focal length and the baseline length is the key factor influencing measurement accuracy: when measuring near objects, a smaller fs can be selected to ensure measurement coverage, while in applications that do not require close-range measurement, a larger fs can be selected to ensure accuracy. The radar system designed in this paper has practical application benefits in the fields of three-dimensional object modeling, environment map building and autonomous robot obstacle avoidance.

Acknowledgments

This work was partially supported by a grant from the National Natural Science Foundation of China (No.61573019) and Scientific Research Project of North China University of Technology (No.110051370018XN143).

References

  1. D. Burrows, J. Hadwin, "A scanned laser and tracking system for sea floor profiling and precision survey," in Proc. of IEEE Int. Conf. on Engineering in the Ocean Environment, pp. 31-34, September 25-28, 1973.
  2. Osamu Ozeki, Tomoaki Nakano and Shin Yamamoto, "Real-Time Range Measurement Device for Three-Dimensional Object Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 8, no. 4, pp. 550-554, July, 1986.
  3. J.-L. Jezouin, P. Saint-Marc and G. Medioni, "Building an accurate range finder with off the shelf components," in Proc. of CVPR 1988, pp. 195-201, June 5-9, 1988.
  4. David Acosta, Olmer Garcia and Jorge Aponte, "Laser Triangulation for shape acquisition in a 3D Scanner Plus Scanner," in Proc. of the Electronics, Robotics and Automotive Mechanics Conference, pp. 14-19, September 26-29, 2006.
  5. Marc Hildebrandt, Jochen Kerdels, Jan Albiez and Frank Kirchner, "A practical underwater 3D-Laserscanner," in Proc. of OCEANS, September 15-18, 2008.
  6. Muhammad Ogin Hasanuddin, Gilang Eka Permana, Ibrahim Akbar and Aciek Ida Wuryandari, "3D Scanner for Orthodontic Using Triangulation Method," in Proc. of 5th Int. Conf. on Electrical Engineering and Informatics, pp. 360-364, August 10-11, 2015.
  7. Carl Christian Liebe and Keith Coste, "Distance Measurement Utilizing Image-Based Triangulation," IEEE Sensors Journal, Vol. 13, no. 1, pp. 234-244, January, 2013. https://doi.org/10.1109/JSEN.2012.2212428
  8. Bing Li, Bing Sun, Lei Chen and Xiang Wei, "Application of Laser Displacement Sensor in Free-from Surface Measurement," Optics and Precision Engineering, vol. 23, no. 7, pp. 1939-1947, 2015. https://doi.org/10.3788/OPE.20152307.1939
  9. Guangfang Ning and Quan Gan, "Simulation Analysis of Error Compensation of Laser Displacement Sensor," Laser Journal, vol. 37, no. 4, pp. 37-40, 2016.
  10. Bing Li, Jianlu Wang, Fei Zhang, Lei Chen and Jianlu Wang, "Error analysis and compensation of single-beam laser triangulation measurement," in Proc. of IEEE Int. Conf. on Automation and Logistics, pp. 1223-1227, August 5-7, 2009.
  11. Klaus Wendt, Matthias Franke and Frank Hartig, "Measuring large 3D structures using four portable tracking laser interferometers," Measurement, vol. 45, no. 10, pp. 2339-2345, December, 2012. https://doi.org/10.1016/j.measurement.2011.09.020
  12. P. Morandi, F. Breman, P. Doumalin, A. Germaneau and J.C. Dupre, "New Optical Scanning Tomography using a rotating slicing for time-resolved measurements of 3D full field displacements in structures," Optics and Lasers in Engineering, vol. 58, pp. 85-92, July, 2014. https://doi.org/10.1016/j.optlaseng.2014.02.007
  13. Feng Li, Andrew Peter Longstaff, Simon Fletcher and Alan Myers, "A practical coordinate unification method for integrated tactile-optical measuring system," Optics and Lasers in Engineering, vol. 55, pp. 189-196, April, 2014. https://doi.org/10.1016/j.optlaseng.2013.11.004
  14. Guolu Ma, Bin Zhao and Yiyan Fan, "Non-diffracting beam based probe technology for measuring coordinates of hidden parts," Optics and Lasers in Engineering, vol. 51, pp. 585-591, May, 2013. https://doi.org/10.1016/j.optlaseng.2012.12.011
  15. Wang Yajun, Bhattacharya Bhaskar, H. Eliot, Kosmicki Peter, H. El-Ratal Wissam and Song Zhang, "Digital micromirror transient response influence on superfast 3D shape measurement," Optics and Lasers in Engineering, vol. 58, pp. 19-26, July, 2014. https://doi.org/10.1016/j.optlaseng.2014.01.015
  16. V.H. Flores, L. Casaletto, K. Genovese, A. Martinez, A. Montes and J.A. Rayas, "A Panoramic Fringe Projection System," Optics and Lasers in Engineering, vol. 58, pp. 80-84, July, 2014. https://doi.org/10.1016/j.optlaseng.2014.02.002
  17. Cesar-Cruz Almaraz-Cabral, Jose-Joel Gonzalez-Barbosa, Jesus Villa, Juan-Bautista Hurtado-Ramos, Francisco-Javier Ornelas-Rodriguez and Diana-Margarita Cordova-Esparza, "Fringe projection profilometry for panoramic 3D reconstruction," Optics and Lasers in Engineering, vol. 78, pp. 106-112, March, 2016. https://doi.org/10.1016/j.optlaseng.2015.10.004