• Title/Summary/Keyword: Camera pose

Visual Servoing-Based Paired Structured Light Robot System for Estimation of 6-DOF Structural Displacement (구조물의 6자유도 변위 측정을 위한 비주얼 서보잉 기반 양립형 구조 광 로봇 시스템)

  • Jeon, Hae-Min;Bang, Yu-Seok;Kim, Han-Geun;Myung, Hyun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.17 no.10
    • /
    • pp.989-994
    • /
    • 2011
  • This study demonstrates the feasibility of a visual servoing-based paired structured light (SL) robot for estimating structural displacement under various external loads. The earlier paired SL robot, proposed in a previous study, consisted of two screens facing each other, each with one or two lasers and a camera. That robot was found to estimate translational and rotational displacement, each in 3-DOF, with high accuracy and low cost; however, its measurable range is fairly limited by the screen size. In this paper, therefore, a visual servoing-based 2-DOF manipulator that controls the pose of the lasers is introduced. By steering the projected laser points so that they remain on the screen, the proposed robot can estimate displacement regardless of the screen size. Various simulations and experimental tests verify the performance of the newly proposed robot. The results show that the proposed system overcomes the range limitation of the former system and can be used to accurately estimate structural displacement.
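
The visual-servoing idea can be sketched in a few lines: drive the 2-DOF manipulator so the projected laser point stays near the screen center. This is a minimal sketch of the concept only; the gain, geometry, and pixel values are illustrative assumptions, not the authors' controller.

```python
# Minimal sketch: proportional visual servoing of a 2-DOF laser manipulator.
# Gain and coordinates are assumed values for illustration.
import numpy as np

def servo_step(point_px, screen_center_px, gain=0.01):
    """Turn the pixel error of the projected laser point into
    pan/tilt increments (radians) for the 2-DOF manipulator."""
    error = np.asarray(point_px, float) - np.asarray(screen_center_px, float)
    return -gain * error  # simple P-control; a real system would add damping

# Example: laser point drifted 40 px right and 25 px down of center.
delta = servo_step((360, 265), (320, 240))
print(delta)  # small corrective pan/tilt command
```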

POSE-VIEWPOINT ADAPTIVE OBJECT TRACKING VIA ONLINE LEARNING APPROACH

  • Mariappan, Vinayagam;Kim, Hyung-O;Lee, Minwoo;Cho, Juphil;Cha, Jaesang
    • International journal of advanced smart convergence
    • /
    • v.4 no.2
    • /
    • pp.20-28
    • /
    • 2015
  • In this paper, we propose an effective tracking algorithm whose appearance model is built from features extracted from video frames under posture variation and camera-viewpoint changes, employing non-adaptive random projections that preserve the structure of the object's image-feature space. Existing online tracking algorithms update their models with features from recent frames, yet numerous issues remain to be addressed despite the improvements in tracking. In particular, data-dependent adaptive appearance models often suffer from drift because online algorithms do not receive enough data for online learning. The proposed appearance model therefore relies on a fixed, data-independent random projection rather than on data-dependent adaptation.
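
A minimal sketch of the non-adaptive random projection idea follows: a fixed sparse matrix compresses high-dimensional patch features while approximately preserving distances (Johnson-Lindenstrauss). The dimensions and the Achlioptas-style matrix are illustrative assumptions, not the paper's exact model.

```python
# Minimal sketch: fixed (non-adaptive) sparse random projection of
# appearance features. Dimensions d and k are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)
d, k = 4096, 64                       # raw feature dim -> compressed dim
R = rng.choice([-1.0, 0.0, 1.0], size=(k, d), p=[1/6, 2/3, 1/6])
R *= np.sqrt(3.0 / k)                 # sparse Achlioptas-style scaling

def compress(feature_vec):
    """Project a high-dimensional appearance feature into k dims.
    The projection is fixed, so the model cannot drift through it."""
    return R @ feature_vec

x = rng.standard_normal(d)
print(np.linalg.norm(x), np.linalg.norm(compress(x)))  # norms stay close
```

Because R never changes, every frame's features are compressed identically, which is what makes the projection immune to the drift that plagues data-dependent model updates.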

Real-time Multiple Pedestrians Tracking for Embedded Smart Visual Systems

  • Nguyen, Van Ngoc Nghia;Nguyen, Thanh Binh;Chung, Sun-Tae
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.2
    • /
    • pp.167-177
    • /
    • 2019
  • Even though much progress has been achieved in Multiple Object Tracking (MOT), most reported MOT methods are still not satisfactory for commercial embedded products such as Pan-Tilt-Zoom (PTZ) cameras. In this paper, we propose a real-time multiple-pedestrian tracking method for embedded environments. First, we design a new lightweight convolutional neural network (CNN)-based pedestrian detector, constructed to detect even small pedestrians. To further save processing time, the detector is applied only to every other frame, and a Kalman filter predicts pedestrians' positions in the frames where the detector is not run. Pose-orientation information is incorporated to enhance object association without additional computational cost. Experiments on Nvidia's embedded computing board, the Jetson TX2, verify that the designed detector finds even small pedestrians quickly and accurately compared with many state-of-the-art detectors, and that the proposed method tracks pedestrians in real time with accuracy comparable to that of many state-of-the-art tracking methods that do not target embedded systems.
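
The frame-skipping scheme is easy to sketch: run the expensive detector on alternate frames and propagate each pedestrian with a constant-velocity Kalman filter in between. The matrices below are textbook values, not the paper's tuned parameters, and the detections are stand-ins.

```python
# Minimal sketch: detector on even frames, Kalman prediction on odd frames.
# State is [x, y, vx, vy] for one pedestrian's box center.
import numpy as np

F = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
              [0, 0, 1, 0], [0, 0, 0, 1]], float)   # constant-velocity model
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)    # we observe position only
Q, Rm = np.eye(4) * 1e-2, np.eye(2) * 1.0            # assumed noise levels

def kf_predict(x, P):
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z):
    S = H @ P @ H.T + Rm
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P

x, P = np.array([100., 50., 0., 0.]), np.eye(4)
for frame in range(6):
    x, P = kf_predict(x, P)
    if frame % 2 == 0:                                # CNN detector runs here
        z = np.array([100. + 3 * frame, 50. + frame]) # stand-in detection
        x, P = kf_update(x, P, z)
    print(frame, x[:2])                               # tracked position
```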

The Estimation of Craniovertebral Angle using Wearable Sensor for Monitoring of Neck Posture in Real-Time (실시간 목 자세 모니터링을 위한 웨어러블 센서를 이용한 두개척추각 추정)

  • Lee, Jaehyun;Chee, Youngjoon
    • Journal of Biomedical Engineering Research
    • /
    • v.39 no.6
    • /
    • pp.278-283
    • /
    • 2018
  • Nowadays, many people suffer from neck pain due to forward head posture (FHP) and text neck (TN). The craniovertebral angle (CVA) is used in clinics to assess the severity of FHP and TN; however, it is difficult to monitor neck posture with the CVA in daily life. We propose a new method that uses the cervical flexion angle (CFA), obtained from a wearable sensor, to monitor neck posture in daily life. Fifteen participants were asked to assume FHP and TN postures. The CFA from the wearable sensor was compared with the CVA observed by a 3D motion camera system to analyze their correlation. The coefficients of determination between CFA and CVA were 0.80 for TN, 0.57 for FHP, and 0.69 for TN and FHP combined. In monitoring neck posture during 20 minutes of laptop use, the wearable sensor estimated the CVA with a mean squared error of 2.1 degrees.
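
A minimal sketch of the pipeline, under assumptions: the CFA is taken as the sensor's tilt relative to gravity from an accelerometer, and a hypothetical linear fit (slope and intercept invented here, in practice regressed against motion-capture CVA) maps CFA to an estimated CVA.

```python
# Minimal sketch: neck flexion angle from a wearable accelerometer,
# mapped to CVA by an assumed linear calibration.
import numpy as np

def cervical_flexion_angle(accel):
    """Tilt of the sensor relative to gravity, in degrees.
    Axis convention is an assumption for illustration."""
    ax, ay, az = accel
    return np.degrees(np.arctan2(ax, np.sqrt(ay**2 + az**2)))

# Hypothetical calibration coefficients (would come from regression
# against a 3D motion camera system's CVA measurements).
slope, intercept = -0.8, 55.0

cfa = cervical_flexion_angle((0.42, 0.05, 0.90))  # one sample, in g
cva_est = slope * cfa + intercept
print(f"CFA = {cfa:.1f} deg, estimated CVA = {cva_est:.1f} deg")
```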

Design of Personalized Exercise Data Collection System based on Edge Computing

  • Jung, Hyon-Chel;Choi, Duk-Kyu;Park, Myeong-Chul
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.5
    • /
    • pp.61-68
    • /
    • 2021
  • In this paper, we propose an edge computing-based exercise data collection device for exercise rehabilitation services. In the existing cloud computing approach, throughput at the data center grows as the number of users increases, causing significant delay. We design and implement a device that, on the user's side, uses edge computing to measure and estimate the positions of body-joint keypoints from movement information captured by a 3D camera, and transmits them to the server. This builds a seamless data collection environment that places no load on the cloud system. The results of this study can be applied to a personalized rehabilitation exercise coaching system, built on IoT and edge computing technologies, for the many users who need exercise rehabilitation.
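
The edge-side flow can be sketched as follows: keypoints are estimated locally, and only the compact joint coordinates are sent to the server, so raw 3D-camera frames never leave the device. The endpoint URL and payload layout are hypothetical, invented for illustration.

```python
# Minimal sketch: ship locally-estimated 3D joint keypoints to a server
# as a small JSON payload. URL and schema are assumptions.
import json
import urllib.request

def send_keypoints(user_id, keypoints_3d,
                   url="http://server.example/api/keypoints"):
    """POST the edge-computed keypoints; only coordinates travel,
    keeping the cloud free of heavy video processing."""
    payload = json.dumps({"user": user_id, "joints": keypoints_3d}).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status

# Example payload: {joint name: [x, y, z] in meters from the 3D camera}.
joints = {"left_knee": [0.12, 0.45, 1.80], "right_knee": [0.33, 0.46, 1.82]}
# send_keypoints("user-01", joints)  # requires a reachable server
```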

BIM model-based structural damage localization using visual-inertial odometry

  • Junyeon Chung;Kiyoung Kim;Hoon Sohn
    • Smart Structures and Systems
    • /
    • v.31 no.6
    • /
    • pp.561-571
    • /
    • 2023
  • Ensuring the safety of a structure requires that repairs be carried out based on accurate inspections and records of damage information. Traditional methods of recording damage rely on individual paper-based documents, making it challenging for inspectors to accurately record damage locations and track chronological changes. Recent research has suggested adopting building information modeling (BIM) to record detailed damage information; however, localizing damage on a BIM model can be time-consuming. To overcome this limitation, this study proposes a method to automatically localize damage on a BIM model in real time, using consecutive images and measurements from an inertial measurement unit in close proximity to the damage. The proposed method employs a visual-inertial odometry algorithm to estimate the camera pose, detect damage, and compute the damage location in the coordinate system of a prebuilt BIM model. The feasibility and effectiveness of the proposed method were validated through an experiment on a campus building. Results revealed that the proposed method successfully localized damage on the BIM model in real time, with a root mean square error of 6.6 cm.
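
The geometric core of this localization step can be sketched in a few lines: given the camera pose in the BIM frame (from VIO) and a detected damage pixel at a known depth, back-project the pixel and transform it into BIM coordinates. The intrinsics, pose, and depth below are made-up values; the paper's VIO pipeline supplies the real ones.

```python
# Minimal sketch: back-project a damage pixel into the BIM (world) frame
# using a VIO-estimated camera pose. All numeric values are assumptions.
import numpy as np

K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0.,   0.,   1.]])        # assumed camera intrinsics

def damage_location_bim(pixel, depth, R_wc, t_wc):
    """Back-project a pixel at known depth, then move the 3D point
    from the camera frame into the BIM frame via pose (R_wc, t_wc)."""
    uv1 = np.array([pixel[0], pixel[1], 1.0])
    p_cam = depth * (np.linalg.inv(K) @ uv1)  # 3D point in camera frame
    return R_wc @ p_cam + t_wc                # 3D point in BIM frame

R = np.eye(3)                    # camera aligned with BIM axes (assumption)
t = np.array([2.0, 1.5, 1.2])    # camera position in the BIM model, meters
print(damage_location_bim((350, 260), depth=1.8, R_wc=R, t_wc=t))
```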

Mono-Vision Based Satellite Relative Navigation Using Active Contour Method (능동 윤곽 기법을 적용한 단일 영상 기반 인공위성 상대항법)

  • Kim, Sang-Hyeon;Choi, Han-Lim;Shim, Hyunchul
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.43 no.10
    • /
    • pp.902-909
    • /
    • 2015
  • In this paper, mono-vision-based relative navigation for satellite proximity operations is studied. The chaser satellite uses only one camera sensor to observe the target satellite and performs image tracking to obtain the target's pose information. With mono-vision alone, however, it is difficult to obtain depth information, which relates to the relative distance to the target. To resolve this well-known difficulty of a single camera, the active contour method is adopted in the image tracking process. The active contour method provides the size of the target in the image, which can be used to indirectly calculate the relative distance between the chaser and the target. A 3D virtual-reality environment models the space scene in which the two satellites move relative to each other and produces the virtual camera images. An unscented Kalman filter lets the chaser satellite estimate the relative position of the target during a glideslope approach. Closed-loop simulations analyze the performance of the relative navigation with the active contour method.
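
The depth-from-size relation behind this approach is the pinhole model: range Z = f·L / s, where s is the target's apparent size in pixels (from the active contour) and L its physical span. The focal length and target span below are illustrative assumptions.

```python
# Minimal sketch: convert the active contour's apparent target size
# into a range estimate via the pinhole model. Values are assumed.
def relative_distance(apparent_size_px, true_size_m, focal_px):
    """Pinhole relation Z = f * L / s: range grows as the
    contour-measured image size shrinks."""
    return focal_px * true_size_m / apparent_size_px

# Example: a 2.0 m-wide target imaged at 85 px with a 1200 px focal length.
print(relative_distance(85.0, 2.0, 1200.0))  # ~28.2 m estimated range
```

In the paper this size-derived range serves as a measurement feeding the unscented Kalman filter, which smooths it over the glideslope approach.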

Estimation of Manhattan Coordinate System using Convolutional Neural Network (합성곱 신경망 기반 맨하탄 좌표계 추정)

  • Lee, Jinwoo;Lee, Hyunjoon;Kim, Junho
    • Journal of the Korea Computer Graphics Society
    • /
    • v.23 no.3
    • /
    • pp.31-38
    • /
    • 2017
  • In this paper, we propose a system that estimates the Manhattan coordinate system of an urban scene image using a convolutional neural network (CNN). Estimating the Manhattan coordinate system from an image under the Manhattan world assumption is the basis for solving computer graphics and vision problems such as image adjustment and 3D scene reconstruction. We construct a CNN, based on GoogLeNet [1], that estimates Manhattan coordinate systems. To train it, we collect about 155,000 images satisfying the Manhattan world assumption through the Google Street View APIs and compute their Manhattan coordinate systems with existing calibration methods to generate the dataset. In contrast to PoseNet [2], which trains per-scene CNNs, our method learns from images under the Manhattan world assumption and can therefore estimate Manhattan coordinate systems for new images of scenes it has never seen. Experimental results show that our method estimates Manhattan coordinate systems with a median error of 3.157° on a test set of Google Street View images of untrained scenes. In addition, compared with an existing calibration method [3], the proposed method shows lower intermediate errors on the test set.
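
The error figure quoted above is naturally computed as the angular (geodesic) distance between the estimated and ground-truth Manhattan frames, both expressed as rotation matrices; a sketch of that metric follows, with toy matrices standing in for real estimates.

```python
# Minimal sketch: geodesic error on SO(3) between an estimated and a
# ground-truth Manhattan frame. Matrices here are toy values.
import numpy as np

def rotation_error_deg(R_est, R_gt):
    """Angle of the relative rotation R_est^T R_gt, in degrees."""
    cos = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

theta = np.radians(3.0)                       # a 3-degree yaw error
R_gt = np.eye(3)
R_est = np.array([[np.cos(theta), -np.sin(theta), 0],
                  [np.sin(theta),  np.cos(theta), 0],
                  [0,              0,             1]])
print(rotation_error_deg(R_est, R_gt))        # ~3.0 degrees
```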

A Study on the Estimation of Multi-Object Social Distancing Using Stereo Vision and AlphaPose (Stereo Vision과 AlphaPose를 이용한 다중 객체 거리 추정 방법에 관한 연구)

  • Lee, Ju-Min;Bae, Hyeon-Jae;Jang, Gyu-Jin;Kim, Jin-Pyeong
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.7
    • /
    • pp.279-286
    • /
    • 2021
  • Recently, a policy of keeping a physical distance of at least 1 m between people has been enforced to prevent the spread of COVID-19 in public places. In this paper, we propose a method for measuring the distances between people in real time, together with an automated system that, based on the estimated distances, recognizes objects within 1 m of each other in stereo images acquired by drones or CCTVs. A problem with existing methods for estimating distances between multiple objects is that a single CCTV cannot provide three-dimensional information about the objects, yet three-dimensional information is needed to measure distances when people stand right next to each other or overlap in a two-dimensional image. Furthermore, existing methods use only bounding-box information to locate a person. Therefore, to obtain the exact two-dimensional coordinates at which a person is located, we extract the person's keypoints, convert them to three-dimensional coordinates using stereo vision and camera calibration, and estimate the Euclidean distances between people. In an experiment evaluating the accuracy of the 3D coordinates and the distances between objects (persons), the method showed an average error within 0.098 m when estimating distances between multiple people within 1 m of each other.
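
The distance check can be sketched as follows: a keypoint's disparity between the stereo pair gives its depth (Z = f·B / d), the pixel is back-projected to 3D, and pairwise Euclidean distances are thresholded at 1 m. The calibration values and keypoints below are assumptions, not the paper's setup.

```python
# Minimal sketch: stereo triangulation of body keypoints plus a 1 m
# distancing check. Calibration constants are assumed values.
import numpy as np
from itertools import combinations

FX, CX, CY, BASELINE = 700.0, 640.0, 360.0, 0.12  # assumed stereo calibration

def keypoint_to_3d(u, v, disparity):
    """Triangulate one keypoint: Z = f*B/d, then back-project.
    Square pixels (fx = fy) are assumed."""
    z = FX * BASELINE / disparity
    return np.array([(u - CX) * z / FX, (v - CY) * z / FX, z])

# e.g., one neck keypoint per person, as produced by a pose estimator
people = [keypoint_to_3d(400, 300, 9.0),
          keypoint_to_3d(480, 305, 8.5),
          keypoint_to_3d(900, 310, 4.0)]
for (i, a), (j, b) in combinations(enumerate(people), 2):
    d = np.linalg.norm(a - b)
    print(f"person {i} - person {j}: {d:.2f} m",
          "VIOLATION" if d < 1.0 else "")
```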

A Home-Based Remote Rehabilitation System with Motion Recognition for Joint Range of Motion Improvement (관절 가동범위 향상을 위한 원격 모션 인식 재활 시스템)

  • Kim, Kyungah;Chung, Wan-Young
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.20 no.3
    • /
    • pp.151-158
    • /
    • 2019
  • Patients with disabilities caused by disasters, injuries, or chronic illness, as well as elderly people whose range of body motion is limited by aging, are advised to participate in rehabilitation programs at hospitals. Typically, however, commuting without help is not simple for them, as their access outside the home is limited. From the hospital's perspective, maintaining a workforce to run the rehabilitation sessions also increases costs. For these reasons, this paper develops a home-based remote rehabilitation system that uses motion recognition and requires no help from others. The system runs on a personal computer with a stereo camera at home, and the user's motion is monitored in real time through the motion-recognition feature. The system tracks the joint range of motion (joint ROM) of particular body parts to check improvements in body function. For the demonstration, a total of four subjects of various ages and health conditions participated. Their motion data were collected over three exercise sessions; each session was repeated nine times per person, and the results were compared.
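
The ROM measurement can be sketched simply: compute the angle at a joint from three tracked keypoints in each frame, then take the observed range (max minus min) over a repetition. The keypoint triplets below are made-up samples standing in for camera output.

```python
# Minimal sketch: joint angle from three keypoints, and ROM over frames.
# Keypoint coordinates are invented sample data.
import numpy as np

def joint_angle(a, b, c):
    """Angle at keypoint b (degrees) between segments b->a and b->c."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Shoulder-elbow-wrist triplets over a few frames of one repetition.
frames = [((0, 0), (0, -30), (0, -60)),     # arm straight: ~180 deg
          ((0, 0), (0, -30), (25, -45)),    # mid-flexion
          ((0, 0), (0, -30), (30, -30))]    # arm flexed: ~90 deg
angles = [joint_angle(*f) for f in frames]
print(f"ROM = {max(angles) - min(angles):.1f} deg")  # 90.0 deg
```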