A Real-time Virtual Model Synchronization Algorithm Using Object Feature Detection

  • Lee, Ki-Hyeok (Department of Electrical and Electronic Engineering, Hanyang University) ;
  • Kim, Mu-In (Division of Electronics Engineering, Hanyang University) ;
  • Kim, Min-Jae (Division of Electronics Engineering, Hanyang University) ;
  • Choi, Myung-Ryul (Division of Electronics Engineering, Hanyang University)
  • Received : 2018.10.26
  • Accepted : 2019.01.20
  • Published : 2019.01.28

Abstract

In this paper, we propose a real-time virtual model synchronization algorithm using object feature detection. The proposed algorithm synchronizes real objects with their corresponding virtual models in real time by detecting object features in two-dimensional images. It consists of an algorithm that classifies objects individually by color and an algorithm that analyzes the direction in which each object is installed by its angle. By synchronizing the motion of a real object with its virtual model, the proposed algorithm provides an environment in which a user can move a virtual object by hand, without a dedicated controller. Future research will extend the algorithm to synchronize real objects of unspecified shape, color, and orientation with their corresponding virtual objects, and will apply it to VR/AR technology to provide realistic virtual environments.

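This record does not include the paper's source code, so the sketch below is only a rough illustration of the two steps named in the abstract: classifying objects individually by color and estimating an installation angle for each detected object. It is written in C# with OpenCvSharp; the class name, the input file name `frame.png`, the HSV ranges, and the noise-area threshold are illustrative assumptions rather than values from the paper (the actual per-block color ranges and installation-direction angles appear in Table 1 and Table 2).

```csharp
// Minimal sketch (not the paper's code): classify colored blocks in one 2D frame
// and estimate each block's orientation angle with OpenCvSharp.
using System;
using System.Collections.Generic;
using OpenCvSharp;

class BlockDetectorSketch
{
    // Hypothetical HSV ranges; the paper's actual per-block ranges are in Table 1.
    static readonly Dictionary<string, (Scalar Low, Scalar High)> ColorRanges =
        new Dictionary<string, (Scalar Low, Scalar High)>
        {
            ["red"]  = (new Scalar(0, 120, 80),   new Scalar(10, 255, 255)),
            ["blue"] = (new Scalar(100, 120, 80), new Scalar(130, 255, 255)),
        };

    static void Main()
    {
        using var frame = Cv2.ImRead("frame.png");             // a single 2D image of the real blocks
        using var hsv = new Mat();
        Cv2.CvtColor(frame, hsv, ColorConversionCodes.BGR2HSV);

        foreach (var entry in ColorRanges)
        {
            using var mask = new Mat();
            // Step 1: classify pixels individually by color.
            Cv2.InRange(hsv, entry.Value.Low, entry.Value.High, mask);

            Cv2.FindContours(mask, out Point[][] contours, out _,
                RetrievalModes.External, ContourApproximationModes.ApproxSimple);

            foreach (var contour in contours)
            {
                if (Cv2.ContourArea(contour) < 500) continue;  // placeholder noise threshold

                // Step 2: estimate the installation direction from the angle of the
                // minimum-area bounding rectangle around the detected block.
                RotatedRect box = Cv2.MinAreaRect(contour);
                Console.WriteLine($"{entry.Key}: center={box.Center}, angle={box.Angle:F1} deg");
                // In the synchronization step this angle would drive the matching
                // virtual model in the virtual environment.
            }
        }
    }
}
```

The minimum-area bounding rectangle is just one common way to derive an orientation angle from a detected region; the paper's own angle-analysis step may differ. Whatever the method, the resulting per-block angle would then be applied to the corresponding virtual model during synchronization.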

Keywords

Fig. 1. Flow Chart of Virtual Model Synchronization
Fig. 2. Image Classification Detection Result
Fig. 3. Object Angles and Virtual Models
Fig. 4. Development Language Conversion Steps
Fig. 5. Real Chaos-Block Image
Fig. 6. Virtual Model Synchronization

Table 1. Color Range for each Block
Table 2. Angle by Object Installation Direction
Table 3. System Configuration
Table 4. Virtual Environment Configuration
