• Title/Summary/Keyword: camera pose estimation


The Estimation of Craniovertebral Angle using Wearable Sensor for Monitoring of Neck Posture in Real-Time (실시간 목 자세 모니터링을 위한 웨어러블 센서를 이용한 두개척추각 추정)

  • Lee, Jaehyun;Chee, Youngjoon
    • Journal of Biomedical Engineering Research / v.39 no.6 / pp.278-283 / 2018
  • Nowadays, many people suffer from neck pain due to forward head posture (FHP) and text neck (TN). In clinics, the craniovertebral angle (CVA) is used to assess the severity of FHP and TN, but it is difficult to monitor neck posture with the CVA in daily life. We propose a new method that uses the cervical flexion angle (CFA), obtained from a wearable sensor, to monitor neck posture in daily life. Fifteen participants were asked to pose in FHP and TN. The CFA from the wearable sensor was compared with the CVA measured by a 3D motion camera system to analyze their correlation. The coefficients of determination between CFA and CVA were 0.80 for TN, 0.57 for FHP, and 0.69 for TN and FHP combined. While monitoring neck posture during 20 minutes of laptop use, the wearable sensor estimated the CVA with a mean squared error of 2.1 degrees.
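
The coefficient of determination reported above can be computed with an ordinary least-squares line fit. A minimal sketch, using made-up CFA/CVA angle pairs (the actual study used 15 participants and a motion-capture reference):

```python
def r_squared(x, y):
    """Coefficient of determination of a least-squares line fit y ~ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Hypothetical paired readings: CFA from the wearable sensor (deg)
# vs. CVA from the motion-capture reference (deg).
cfa = [10, 15, 22, 30, 35, 41, 48]
cva = [52, 49, 45, 40, 38, 33, 29]
print(round(r_squared(cfa, cva), 3))
```

A larger CFA (neck flexed forward) corresponds to a smaller CVA, so the fitted slope is negative while R² stays close to 1 for strongly correlated postures.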

Design of Personalized Exercise Data Collection System based on Edge Computing

  • Jung, Hyon-Chel;Choi, Duk-Kyu;Park, Myeong-Chul
    • Journal of the Korea Society of Computer and Information / v.26 no.5 / pp.61-68 / 2021
  • In this paper, we propose an edge computing-based exercise data collection device for exercise rehabilitation services. In the existing cloud computing approach, the data center's load grows with the number of users, causing significant delays. We design and implement a device that estimates the positions of body joint keypoints from movement information captured by a 3D camera on the user's side using edge computing and transmits them to the server. This enables a seamless data collection environment without burdening the cloud system. The results of this study can be applied to a personalized rehabilitation exercise coaching system based on IoT and edge computing technologies for the many users who need exercise rehabilitation.
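
The paper does not specify its wire format, but the core idea, estimating keypoints on the edge device and shipping only compact joint coordinates to the server, can be sketched as follows; the packet layout and field names here are purely illustrative:

```python
import json
import time

def make_packet(user_id, keypoints):
    """Serialize edge-side joint keypoints for transmission to the server.

    keypoints: {joint_name: (x, y, z)} in camera coordinates (metres).
    Only these few floats cross the network; raw 3D-camera frames stay on
    the edge device, which keeps the cloud-side load small.
    """
    return json.dumps({
        "user": user_id,
        "ts": time.time(),
        "joints": {name: list(p) for name, p in keypoints.items()},
    })

pkt = make_packet("user-01", {"l_shoulder": (0.21, 1.42, 2.05),
                              "r_shoulder": (-0.19, 1.41, 2.07)})
print(pkt)
```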

Zoom Lens Distortion Correction Of Video Sequence Using Nonlinear Zoom Lens Distortion Model (비선형 줌-렌즈 왜곡 모델을 이용한 비디오 영상에서의 줌-렌즈 왜곡 보정)

  • Kim, Dae-Hyun;Shin, Hyoung-Chul;Oh, Ju-Hyun;Nam, Seung-Jin;Sohn, Kwang-Hoon
    • Journal of Broadcast Engineering / v.14 no.3 / pp.299-310 / 2009
  • In this paper, we propose a new method to correct zoom lens distortion in video sequences captured with a zoom lens. First, we define a nonlinear zoom lens distortion model, parameterized by focal length and lens distortion, exploiting the fact that lens distortion parameters change nonlinearly and monotonically as the focal length increases. We then select sample images from the video sequence and estimate a focal length and a lens distortion parameter for each. Using these estimates, we optimize the zoom lens distortion model. Once the model is obtained, the lens distortion parameters of other images can be computed from their focal lengths. The proposed method was tested on many real images and videos. As a result, accurate distortion parameters were estimated from the zoom lens distortion model, and distorted images were corrected without visible artifacts.
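
The paper's actual model is nonlinear and fitted by optimization; as a simplified stand-in, the sketch below interpolates a single radial distortion coefficient k1 between values estimated on sample frames (the (focal length, k1) pairs are invented), then applies a first-order radial undistortion:

```python
def k1_of_focal(f, samples):
    """Look up the radial distortion coefficient k1 at focal length f from
    (focal_length, k1) pairs estimated on sample frames, sorted by focal
    length. Piecewise-linear interpolation stands in for the paper's
    nonlinear monotonic model."""
    xs = [s[0] for s in samples]
    ys = [s[1] for s in samples]
    if f <= xs[0]:
        return ys[0]
    if f >= xs[-1]:
        return ys[-1]
    for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
        if x0 <= f <= x1:
            t = (f - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)

def undistort(x, y, k1):
    """Approximate inverse of the first-order radial model
    x_d = x_u * (1 + k1 * r^2), evaluated at the distorted radius."""
    r2 = x * x + y * y
    s = 1.0 + k1 * r2
    return x / s, y / s

samples = [(10.0, -0.120), (20.0, -0.050), (40.0, -0.010)]  # hypothetical (f, k1) pairs
k1 = k1_of_focal(15.0, samples)
print(k1, undistort(0.5, 0.5, k1))
```

Once the model is fitted, every frame's k1 follows from its focal length alone, which is exactly what makes per-frame calibration unnecessary.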

Improved CS-RANSAC Algorithm Using K-Means Clustering (K-Means 클러스터링을 적용한 향상된 CS-RANSAC 알고리즘)

  • Ko, Seunghyun;Yoon, Ui-Nyoung;Alikhanov, Jumabek;Jo, Geun-Sik
    • KIPS Transactions on Software and Data Engineering / v.6 no.6 / pp.315-320 / 2017
  • Efficiently estimating the correct pose of augmented objects in the live camera view is one of the most important problems in image tracking. In computer vision, a homography is used for camera pose estimation in markerless augmented reality systems. To estimate the homography, features such as SURF are extracted from the images and matched, and the homography is estimated from the matches. RANSAC is widely used for this estimation, and the DCS-RANSAC algorithm improves performance by dynamically applying constraints based on the Constraint Satisfaction Problem. In DCS-RANSAC, however, the dataset is grouped manually by the feature distribution pattern of the images, so the algorithm cannot classify input images whose feature distribution pattern it does not recognize, which reduces its performance. To address this problem, we propose the KCS-RANSAC algorithm, which applies K-means clustering to CS-RANSAC to cluster images automatically by feature distribution pattern and then applies constraints to each image group. Experimental results show that KCS-RANSAC outperforms DCS-RANSAC in speed, accuracy, and inlier rate.
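
The clustering step can be sketched with plain k-means over a simple feature-distribution descriptor. Here each image is represented by a hypothetical per-quadrant keypoint count (the paper does not publish its exact descriptor), and images with similar distributions end up in the same group, which would then share one constraint set:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means. Each point is a feature-distribution descriptor of
    one image, e.g. keypoint counts per image quadrant."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign every descriptor to its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[i].append(p)
        # Recompute centers as group means (keep old center if group is empty).
        centers = [tuple(sum(d) / len(g) for d in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Two images with features bunched in one quadrant, two with features spread evenly.
descs = [(40, 2, 3, 1), (38, 4, 2, 2), (10, 11, 9, 12), (12, 10, 11, 9)]
centers, groups = kmeans(descs, k=2)
print(groups)
```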

Display of Irradiation Location of Ultrasonic Beauty Device Using AR Scheme (증강현실 기법을 이용한 초음파 미용기의 조사 위치 표시)

  • Kang, Moon-Ho
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.9 / pp.25-31 / 2020
  • In this study, for the safe use of a portable ultrasonic skin-beauty device, an Android app was developed that shows the user the irradiation locations of the focused ultrasound through augmented reality (AR), enabling stable self-treatment; its utility was assessed through testing. While the user treats their face with the device, the user's face and the ultrasonic irradiation location on it are detected in real time with a smartphone camera. The irradiation location is then marked on the face image and shown to the user, so that excessive ultrasound is not applied to the same area during treatment. To this end, ML Kit is used to detect the user's facial landmarks in real time, and they are compared with a reference face model to estimate the pose of the face, such as rotation and movement. An LED mounted on the ultrasonic irradiation part of the device is lit during irradiation; the LED light is detected to locate the ultrasonic irradiation position on the smartphone screen, and that position is registered and displayed on the face image based on the estimated face pose. Each task in the app was implemented with threads and timers, and all tasks executed within 75 ms. Test results showed that registering and displaying 120 ultrasound irradiation positions took less than 25 ms, and the display accuracy was within 20 mm when the face did not rotate significantly.
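
Comparing detected landmarks against a reference face model to recover rotation and movement can be illustrated with a 2D least-squares similarity fit (a simplified stand-in for the app's pose estimation; the landmark coordinates below are invented):

```python
import cmath
import math

def estimate_pose_2d(ref, obs):
    """Least-squares rotation (deg), uniform scale and translation mapping
    reference face-model landmarks onto observed landmarks.
    Treating (x, y) as complex numbers makes the similarity fit one division."""
    r = [complex(x, y) for x, y in ref]
    o = [complex(x, y) for x, y in obs]
    mr = sum(r) / len(r)
    mo = sum(o) / len(o)
    num = sum((oi - mo) * (ri - mr).conjugate() for ri, oi in zip(r, o))
    den = sum(abs(ri - mr) ** 2 for ri in r)
    a = num / den                      # complex similarity: scale * e^{i*theta}
    t = mo - a * mr
    return math.degrees(cmath.phase(a)), abs(a), (t.real, t.imag)

# Synthetic check: rotate a square of "landmarks" by 10 deg and shift it.
ref = [(0.0, 0.0), (2.0, 0.0), (2.0, 1.0), (0.0, 1.0)]
th = math.radians(10)
obs = [(5 + x * math.cos(th) - y * math.sin(th),
        3 + x * math.sin(th) + y * math.cos(th)) for x, y in ref]
angle, scale, t = estimate_pose_2d(ref, obs)
print(round(angle, 2), round(scale, 2), (round(t[0], 2), round(t[1], 2)))  # → 10.0 1.0 (5.0, 3.0)
```

With the pose known, a registered irradiation point can be re-projected onto the face image even as the face moves.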

Real-Time Joint Animation Production and Expression System using Deep Learning Model and Kinect Camera (딥러닝 모델과 Kinect 카메라를 이용한 실시간 관절 애니메이션 제작 및 표출 시스템 구축에 관한 연구)

  • Kim, Sang-Joon;Lee, Yu-Jin;Park, Goo-man
    • Journal of Broadcast Engineering / v.26 no.3 / pp.269-282 / 2021
  • As the distribution of 3D content such as augmented reality and virtual reality increases, real-time computer animation technology is becoming more important. However, the computer animation process still consists mostly of manual work or marker-based motion capture, which requires experienced professionals and a very long time to obtain realistic results. To solve these problems, animation production systems and algorithms based on deep learning models and sensors have recently emerged. In this paper, we therefore study four methods of implementing natural human movement in a deep learning model and Kinect camera-based animation production system, each chosen for its environmental characteristics and accuracy. The first method uses a Kinect camera alone; the second uses a Kinect camera with a calibration algorithm; the third uses a deep learning model alone; and the fourth uses a deep learning model together with the Kinect. Experiments showed that the fourth method, combining the deep learning model and the Kinect, produced the best results.

Estimation of Manhattan Coordinate System using Convolutional Neural Network (합성곱 신경망 기반 맨하탄 좌표계 추정)

  • Lee, Jinwoo;Lee, Hyunjoon;Kim, Junho
    • Journal of the Korea Computer Graphics Society / v.23 no.3 / pp.31-38 / 2017
  • In this paper, we propose a system that estimates Manhattan coordinate systems for urban scene images using a convolutional neural network (CNN). Estimating the Manhattan coordinate system of an image under the Manhattan world assumption is the basis for solving computer graphics and vision problems such as image adjustment and 3D scene reconstruction. We construct a CNN that estimates Manhattan coordinate systems based on GoogLeNet [1]. To train the CNN, we collect about 155,000 images satisfying the Manhattan world assumption using the Google Street View APIs and compute their Manhattan coordinate systems with existing calibration methods to generate the dataset. In contrast to PoseNet [2], which trains a per-scene CNN, our method learns from images under the Manhattan world assumption and can therefore estimate Manhattan coordinate systems for images it has never seen. Experimental results show that our method estimates Manhattan coordinate systems with a median error of $3.157^{\circ}$ on a test set of Google Street View images of non-trained scenes. In addition, the proposed method shows lower intermediate errors on the test set than an existing calibration method [3].
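
The median angular error used to evaluate such estimates is simply the median, over the test images, of the angle between an estimated Manhattan axis and its ground truth. A small sketch with invented per-image axis estimates:

```python
import math

def angular_error_deg(u, v):
    """Angle in degrees between two 3D direction vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    c = max(-1.0, min(1.0, dot / (nu * nv)))  # clamp against rounding noise
    return math.degrees(math.acos(c))

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

# Hypothetical estimated axes, each compared to the ground-truth axis (1, 0, 0).
estimates = [(1, 0.02, 0), (1, 0.06, 0.03), (1, 0, 0.1), (0.99, 0.05, -0.04)]
errs = [angular_error_deg((1, 0, 0), v) for v in estimates]
print(round(median(errs), 2))
```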

A Study on the Application of ColMap in 3D Reconstruction for Cultural Heritage Restoration

  • Byong-Kwon Lee;Beom-jun Kim;Woo-Jong Yoo;Min Ahn;Soo-Jin Han
    • Journal of the Korea Society of Computer and Information / v.28 no.8 / pp.95-101 / 2023
  • Colmap is an innovative technology that is highly effective as a tool for 3D reconstruction tasks. It excels at constructing intricate 3D models from images and their corresponding metadata, generating 3D models by merging 2D images, camera position data, depth information, and so on. Through this, it achieves detailed and precise 3D reconstructions of real-world objects. In addition, Colmap leverages GPUs for rapid processing, allowing it to operate efficiently even on large datasets. In this paper, we present a method for collecting 2D images of traditional Korean towers and reconstructing them into 3D models using Colmap, and we apply this technology to the restoration of traditional stone towers in South Korea. As a result, we confirmed the potential applicability of Colmap in the field of cultural heritage restoration.

Visualization System for Dance Movement Feedback using MediaPipe (MediaPipe를 활용한 춤동작 피드백 시각화 시스템)

  • Hyeon-Seo Kim;Jae-Yeung Jeong;Bong-Jun Choi;Mi-Kyeong Moon
    • The Journal of the Korea institute of electronic communication sciences / v.19 no.1 / pp.217-224 / 2024
  • With the rapid growth of K-POP, the dance content industry is expanding, and with the spread of social media, people increasingly shoot and share their own dance videos. However, it is not easy for beginners to learn dance moves, because it is difficult to receive objective feedback when practicing alone with videos. This paper describes a system that uses MediaPipe to compare a choreography video with a user's dance video and detect whether the user is following the movements correctly. We propose a method that computes the similarity of dance movements between the user's video, captured with a webcam or camera, and the choreography video using cosine similarity and the COCO OKS metric, and gives the user feedback based on a color map. With this system, users can receive objective, visual feedback on their dance movements, and beginners are expected to be able to learn accurate dance moves.
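
Both similarity measures named above are short formulas. The sketch below implements cosine similarity over flattened keypoint vectors and the COCO Object Keypoint Similarity; the keypoint coordinates are invented, and the per-keypoint constant uses COCO's shoulder value (k = 2σ with σ = 0.079):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two flattened keypoint vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def oks(gt, dt, s, kappas):
    """COCO Object Keypoint Similarity.

    gt, dt: matched (x, y) keypoints; s: object scale (sqrt of area);
    kappas: per-keypoint falloff constants. Each keypoint contributes
    exp(-d^2 / (2 s^2 k^2)), averaged over the keypoints.
    """
    terms = []
    for (gx, gy), (dx, dy), k in zip(gt, dt, kappas):
        d2 = (gx - dx) ** 2 + (gy - dy) ** 2
        terms.append(math.exp(-d2 / (2 * s * s * k * k)))
    return sum(terms) / len(terms)

gt = [(100.0, 100.0), (150.0, 120.0)]   # choreography keypoints (hypothetical)
dt = [(102.0, 99.0), (149.0, 125.0)]    # user keypoints (hypothetical)
kappas = [0.158, 0.158]                 # COCO shoulder constant, k = 2 * 0.079
score = oks(gt, dt, 50.0, kappas)
flat_gt = [x for p in gt for x in p]
flat_dt = [x for p in dt for x in p]
print(round(cosine_similarity(flat_gt, flat_dt), 4), round(score, 4))
```

Per-joint OKS terms like these are what a color map can visualize, coloring each joint by how closely it matches the choreography.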

Fast Natural Feature Tracking Using Optical Flow (광류를 사용한 빠른 자연특징 추적)

  • Bae, Byung-Jo;Park, Jong-Seung
    • The KIPS Transactions:PartB / v.17B no.5 / pp.345-354 / 2010
  • Visual tracking techniques for augmented reality are classified as either marker tracking or natural feature tracking. Marker-based tracking algorithms can be implemented efficiently enough to run in real time on mobile devices. Natural feature tracking methods, on the other hand, require computationally expensive procedures: most previous methods perform heavy feature extraction and pattern matching on every input image frame, which makes it difficult to implement real-time augmented reality applications with natural feature tracking on low-performance devices, and the computational cost also grows in proportion to the number of patterns to be matched. To speed up natural feature tracking, we propose a novel fast tracking method based on optical flow. We implemented the proposed method on mobile devices so that it runs in real time and is suitable for mobile augmented reality applications. Moreover, during tracking we maintain the total number of feature points by inserting new feature points in proportion to the number of vanished ones. Experimental results showed that the proposed method reduces the computational cost and also stabilizes the camera pose estimation results.
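
The replenishment step, keeping the tracked feature count constant by detecting as many new features as were lost, is simple bookkeeping around the optical-flow call. In a real implementation the flow and detection would come from routines such as OpenCV's pyramidal Lucas-Kanade tracker and a corner detector; here both are stubbed out and only the bookkeeping is shown:

```python
def update_tracks(points, status, detector, target):
    """Keep the tracked feature set at `target` points.

    points: current feature positions; status[i] is True if optical flow
    found point i in the new frame. Lost points are dropped and the same
    number of fresh features is requested from the detector.
    """
    survivors = [p for p, ok in zip(points, status) if ok]
    needed = target - len(survivors)
    if needed > 0:
        survivors.extend(detector(needed))
    return survivors

def fake_detector(n):
    """Stand-in for a corner detector returning n new feature positions."""
    return [(float(i), 0.0) for i in range(n)]

# One point was lost during tracking, so one new feature is detected.
pts = update_tracks([(1.0, 1.0), (2.0, 2.0), (3.0, 3.0)],
                    [True, False, True], fake_detector, target=3)
print(pts)
```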