
Displacement Measurement of a Floating Structure Model Using Video Data

  • Han, Dong-Yeob (Department of Marine and Civil Engineering, College of Engineering, Chonnam National University) ;
  • Kim, Hyun-Woo (Korea Infrastructure Safety and Technology Corporation) ;
  • Kim, Jae-Min (Department of Marine and Civil Engineering, College of Engineering, Chonnam National University)
  • Received : 2013.04.01
  • Accepted : 2013.04.24
  • Published : 2013.04.30

Abstract

It is well known that the 3-dimensional position of an object can be extracted from the video of a single moving camera. Based on this, image-based monitoring of a floating structure model was performed with a camcorder measurement system. Frame images were extracted from digital camcorder video recorded under regular and irregular wave conditions, and interest points were matched to obtain relative 3D coordinates. The transformation accuracy of the modified SURF-based matching and the accuracy of the image-based displacement measurement of the floating structure model under regular waves were then evaluated. In the regular wave condition, the wave generator was set to a period of 3.0 sec, while the period derived from the image-based displacement was 2.993 sec; considering mechanical error, the two values can be regarded as consistent. Visually, the shape of the regular wave was also evident both in the 3-dimensional trajectory and in its 1-dimensional projections onto the X, Y, and Z axes. In conclusion, the displacement of the floating structure model could be computed in near real time from the video of an ordinary 30 fps digital camcorder.
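As a rough illustration of the pipeline the abstract outlines (feature detection and matching between frames of a single moving camera, RANSAC-based outlier rejection, and triangulation of relative 3D coordinates), the following Python/OpenCV sketch shows one way such a step could look. It is not the authors' implementation: SIFT stands in for the paper's modified SURF (SURF requires the opencv-contrib build), the camera matrix K is assumed to be known from calibration, and all threshold values are placeholder assumptions.

```python
# Illustrative sketch only, not the paper's implementation.
import cv2
import numpy as np

def relative_3d_points(img1, img2, K):
    """Match features between two frames and triangulate relative 3D points."""
    sift = cv2.SIFT_create()                      # stand-in for modified SURF
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)

    # Ratio-test matching (the 0.7 ratio is an assumed value).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
            if m.distance < 0.7 * n.distance]

    pts1 = np.float32([k1[m.queryIdx].pt for m in good])
    pts2 = np.float32([k2[m.trainIdx].pt for m in good])

    # RANSAC rejects mismatches while estimating the essential matrix.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                      method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

    # Triangulate inlier correspondences; the result is up to an unknown scale.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    in1 = pts1[mask.ravel() > 0].T
    in2 = pts2[mask.ravel() > 0].T
    X_h = cv2.triangulatePoints(P1, P2, in1, in2)
    return (X_h[:3] / X_h[3]).T                   # N x 3 relative coordinates
```

Stacking the per-frame coordinates of a tracked point over time gives a displacement series from which the dominant period can be estimated, for a comparison like the 3.0 sec generator setting versus the 2.993 sec image-based result reported above. A minimal FFT-based sketch, again with assumed names and a 30 fps sampling rate:

```python
# Assumed illustration: dominant period of a displacement series sampled at 30 fps.
import numpy as np

def dominant_period(z, fps=30.0):
    z = np.asarray(z, dtype=float) - np.mean(z)    # remove the mean offset
    spectrum = np.abs(np.fft.rfft(z))              # one-sided amplitude spectrum
    freqs = np.fft.rfftfreq(len(z), d=1.0 / fps)   # frequencies in Hz
    peak = np.argmax(spectrum[1:]) + 1             # skip the DC bin
    return 1.0 / freqs[peak]                       # period in seconds
```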

Keywords

References

  1. Bay, H., Ess, A., Tuytelaars, T. and Van Gool, L. (2008), SURF: speeded up robust features, Computer Vision and Image Understanding, Vol. 110, No. 3, pp. 346-359. https://doi.org/10.1016/j.cviu.2007.09.014
  2. Fischler, M.A. and Bolles, R.C. (1981), Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography, Communications of the ACM, Vol. 24, No. 6, pp. 381-395. https://doi.org/10.1145/358669.358692
  3. Funayama, R., Yanagihara, H., Van Gool, L., Tuytelaars, T. and Bay, H. (2009), Robust interest point detector and descriptor, Patent US8165401.
  4. Hartley, R. and Zisserman, A. (2003), Multiple View Geometry in Computer Vision, 2nd ed., Cambridge University Press, Cambridge, p. 607.
  5. Jeon, H. S., Choi, Y. C., Park, J. H. and Park, J. W. (2010), Multi-point measurement of structural vibration using pattern recognition from camera image, Nuclear Engineering and Technology, Vol. 42, No. 6, pp. 704-711. https://doi.org/10.5516/NET.2010.42.6.704
  6. Kim, H.W., Kim, J.M., Kim, Y.J. and Kim, Y.S. (2011), Measurement of lateral prestress force of UHPC cross beam using the smart tendon, COSEIK Annual Conference 2011, T22, pp. 178-181.
  7. Kim, J.H., Koo, K.M., Kim, C.K. and Cha, E.Y. (2012), SURF algorithm to improve correspondence point using geometric features, Journal of The Korea Society of Computer and Information, Vol. 20, No. 2, pp. 43-46.
  8. Lischinski, D. (2007), Structure from motion: Tomasi-Kanade factorization, http://www.cs.huji.ac.il/~csip/sfm.pdf (last date accessed: 16 April 2013).
  9. Lucas, B.D. and Kanade, T. (1981), An iterative image registration technique with an application to stereo vision, International Joint Conference on Artificial Intelligence, pp. 674-679.
  10. Luhmann, T., Robson, S., Kyle, S. and Harley, I. (2006), Close Range Photogrammetry: Principles, Methods and Applications, Wiley, Scotland, UK, p. 510
  11. Mikolajczyk, K. and Schmid, C. (2005), A performance evaluation of local descriptors, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, No. 10, pp. 1615-1630. https://doi.org/10.1109/TPAMI.2005.188
  12. Ozbek, M., Rixen, D. J., Erne, O. and Sanow, G. (2010), Feasibility of monitoring large wind turbines using photogrammetry, Energy, Vol. 35, pp. 4802-4811. https://doi.org/10.1016/j.energy.2010.09.008
  13. Rousseeuw, P.J. and Leroy, A.M. (1987), Robust regression and outlier detection, John Wiley & Sons, New York, p. 360
  14. Shi, J. and Tomasi, C. (1994), Good features to track, IEEE Conference on Computer Vision and Pattern Recognition, pp. 593-600.
  15. Sun, H.H. and Soares, C.G. (2003), Reliability-based structural design of ship-type FPSO units, Journal of Offshore Mechanics and Arctic Engineering, Vol. 125, No. 2, pp. 108-113. https://doi.org/10.1115/1.1554700
  16. The MathWorks (2011), Computer Vision System Toolbox User's Guide, http://www.mathworks.co.kr/help/pdf_doc/vision/vision_ug.pdf (last date accessed: 16 April 2013).
  17. Trucco, E. and Plakas, K. (2006), Video tracking: a concise survey, IEEE Journal of Oceanic Engineering, Vol. 31, No. 2, pp. 520-529. https://doi.org/10.1109/JOE.2004.839933
  18. Yilmaz, A., Javed, O. and Shah, M. (2006), Object tracking: a survey, ACM Computing Surveys, Vol. 38, No. 4, pp. 1-45. https://doi.org/10.1145/1132952.1132953