• Title/Summary/Keyword: Pose Estimation (자세 추정)

Search results: 469

Multi-Scale, Multi-Object and Real-Time Face Detection and Head Pose Estimation Using Deep Neural Networks (다중크기와 다중객체의 실시간 얼굴 검출과 머리 자세 추정을 위한 심층 신경망)

  • Ahn, Byungtae; Choi, Dong-Geol; Kweon, In So
    • The Journal of Korea Robotics Society, v.12 no.3, pp.313-321, 2017
  • One of the most frequently performed tasks in human-robot interaction (HRI), intelligent vehicles, and security systems is face-related applications such as face recognition, facial expression recognition, driver state monitoring, and gaze estimation. In these applications, accurate head pose estimation is an important issue. However, conventional methods have lacked the accuracy, robustness, or processing speed required for practical use. In this paper, we propose a novel method for estimating head pose with a monocular camera. The proposed algorithm is based on a deep neural network for multi-task learning using a small grayscale image. This network jointly detects multi-view faces and estimates head pose under hard environmental conditions such as illumination changes and large pose changes. The proposed framework quantitatively and qualitatively outperforms the state-of-the-art method with an average head pose mean error of less than 4.5° in real-time.
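To make the multi-task idea concrete, the sketch below shows a small network that jointly outputs a face/non-face score and three head-pose angles from a grayscale patch. It is only an illustrative PyTorch sketch with assumed layer sizes and a hypothetical combined loss, not the network proposed in the paper.

```python
# A minimal sketch (not the authors' network): a multi-task CNN that takes a
# small grayscale patch and jointly predicts a face/non-face score and the
# three head-pose angles (yaw, pitch, roll). Layer sizes are illustrative.
import torch
import torch.nn as nn

class FacePoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16x16 -> 8x8
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 128), nn.ReLU(),
        )
        self.face_head = nn.Linear(128, 1)   # face / non-face logit
        self.pose_head = nn.Linear(128, 3)   # yaw, pitch, roll

    def forward(self, x):
        feat = self.backbone(x)
        return self.face_head(feat), self.pose_head(feat)

def multitask_loss(face_logit, pose_pred, is_face, pose_gt):
    # Detection loss on every sample; pose regression only on true faces.
    det = nn.functional.binary_cross_entropy_with_logits(face_logit.squeeze(1), is_face)
    mask = is_face > 0.5
    reg = nn.functional.mse_loss(pose_pred[mask], pose_gt[mask]) if mask.any() else 0.0
    return det + reg

# Example: a batch of 4 grayscale 32x32 patches with dummy labels.
net = FacePoseNet()
x = torch.randn(4, 1, 32, 32)
face_logit, pose = net(x)
is_face = torch.tensor([1.0, 0.0, 1.0, 1.0])
pose_gt = torch.zeros(4, 3)
loss = multitask_loss(face_logit, pose, is_face, pose_gt)
print(face_logit.shape, pose.shape, float(loss))
```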

Analysis of Transfer Gyro Calibration Error Budget (전이궤도 자이로보정 오차버짓 해석)

  • Park, Keun-Joo; Yang, Koon-Ho; Yong, Ki-Lyuk
    • Aerospace Engineering and Technology, v.9 no.2, pp.36-43, 2010
  • A GEO satellite launched by an Ariane 5 ECA launcher is first placed into a transfer orbit, from which several apogee burn maneuvers are required to reach the target orbit. To obtain the required performance of the apogee burn maneuvers, a calibration of the gyro drift error needs to be performed before each maneuver. In this paper, the unique gyro calibration scheme applied to COMS is described and its calibration error budget analysis is performed.
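As a rough illustration of what a gyro drift calibration does, the sketch below estimates a constant drift rate by fitting the slope of the attitude error between a gyro-integrated and a reference attitude over a quiet window. The signals and noise levels are simulated assumptions; this is not the COMS calibration scheme described in the paper.

```python
# A minimal sketch (not the COMS scheme): estimating a constant gyro drift
# rate by fitting a line to the difference between gyro-integrated attitude
# and a reference attitude (e.g., from star/earth sensors) over a quiet
# calibration window. All signals here are simulated for illustration.
import numpy as np

dt = 1.0                                  # sample period [s]
t = np.arange(0.0, 600.0, dt)             # 10-minute calibration window
true_drift = 0.002                        # deg/s, unknown to the estimator

true_rate = 0.01 * np.sin(2 * np.pi * t / 300.0)        # true body rate [deg/s]
gyro_rate = true_rate + true_drift + 1e-4 * np.random.randn(t.size)

ref_attitude = np.cumsum(true_rate) * dt                # reference attitude [deg]
gyro_attitude = np.cumsum(gyro_rate) * dt               # gyro-integrated attitude [deg]

# The attitude error grows linearly with time at the drift rate,
# so its slope is a least-squares estimate of the drift.
err = gyro_attitude - ref_attitude
drift_est, offset = np.polyfit(t, err, 1)
print(f"estimated drift = {drift_est:.5f} deg/s (true {true_drift} deg/s)")
```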

A Study on the GPS/INS Integration and GPS Compensation Algorithm Based on the Particle Filter (파티클 필터를 이용한 GPS 위치보정과 GPS/INS 센서 결합에 관한 연구)

  • Jeong, Jae Young; Kim, Han Sil
    • Journal of the Institute of Electronics and Information Engineers, v.50 no.6, pp.267-275, 2013
  • The EKF has been widely used as the standard method for GPS/INS integration, but it has one well-known drawback: if the errors are not within a bounded region, the filter may diverge. The particle filter, in contrast, can handle nonlinear and non-Gaussian systems. This paper proposes a method for compensating GPS position errors based on the particle filter and presents a loosely-coupled GPS/INS integration using the proposed algorithm. We used the GPS position pattern with the particle filter and added an attitude Kalman filter to improve attitude accuracy. To verify the performance, the proposed method is compared with a high-cost GPS receiver as reference. In the experimental results, we verified that accuracy and robustness were improved by the proposed method compared with the original loosely-coupled integration, particularly when the vehicle turns at a corner.
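For readers unfamiliar with the technique, the sketch below implements a generic bootstrap particle filter that tracks position and velocity from noisy GPS-like fixes. The motion model, noise levels, and resampling rule are assumptions for illustration, not the paper's filter.

```python
# A minimal sketch (assumptions, not the paper's exact filter): a bootstrap
# particle filter that tracks position/velocity from noisy GPS-like fixes.
# Handling non-Gaussian measurement noise is what motivates a particle
# filter over an EKF in this setting.
import numpy as np

rng = np.random.default_rng(0)
N = 500                        # number of particles
dt = 1.0
particles = rng.normal(0.0, 1.0, size=(N, 2))   # columns: [position, velocity]
weights = np.full(N, 1.0 / N)

def predict(particles):
    # Constant-velocity motion model with process noise.
    particles[:, 0] += particles[:, 1] * dt + rng.normal(0, 0.1, N)
    particles[:, 1] += rng.normal(0, 0.05, N)
    return particles

def update(particles, weights, z, sigma=3.0):
    # Weight each particle by the likelihood of the GPS position measurement z.
    lik = np.exp(-0.5 * ((z - particles[:, 0]) / sigma) ** 2)
    weights = weights * lik + 1e-300
    weights /= weights.sum()
    return weights

def resample(particles, weights):
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Simulate a vehicle moving at 1 m/s and filter its noisy GPS fixes.
true_pos = 0.0
for k in range(60):
    true_pos += 1.0 * dt
    z = true_pos + rng.normal(0, 3.0)            # noisy GPS fix
    particles = predict(particles)
    weights = update(particles, weights, z)
    if 1.0 / np.sum(weights ** 2) < N / 2:       # resample when degenerate
        particles, weights = resample(particles, weights)
    estimate = np.average(particles[:, 0], weights=weights)
print(f"final estimate {estimate:.1f} m, truth {true_pos:.1f} m")
```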

Design and Implementation of Pedestrian Position Information System in GPS-disabled Area (GPS 수신불가 지역에서의 보행자 위치정보시스템의 설계 및 구현)

  • Kwak, Hwy-Kuen; Park, Sang-Hoon; Lee, Choon-Woo
    • Journal of the Korea Academia-Industrial cooperation Society, v.13 no.9, pp.4131-4138, 2012
  • In this paper, we propose a Pedestrian Position Information System (PPIS) using low-cost inertial sensors in GPS-disabled areas. The proposed scheme estimates the pedestrian's attitude/heading angle and performs step detection. Additionally, the estimation error due to the inertial sensors is mitigated by using additional sensors. We implement a portable hardware module to evaluate the performance of the proposed system. Through experiments in an indoor building, the position estimation error was measured to be approximately 2.4%.
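A bare-bones pedestrian dead-reckoning loop of the kind the abstract describes (step detection plus heading propagation) might look like the sketch below. The peak threshold, minimum step gap, and fixed step length are assumed values, not those of the proposed PPIS.

```python
# A minimal sketch (illustrative, not the paper's implementation): a basic
# pedestrian dead-reckoning loop that detects steps from peaks in the
# accelerometer magnitude and advances the position along the current
# heading with a fixed step length. Thresholds and step length are assumed.
import numpy as np

def detect_steps(acc_mag, threshold=11.0, min_gap=20):
    """Return sample indices of detected steps (simple peak detector)."""
    steps, last = [], -min_gap
    for i in range(1, len(acc_mag) - 1):
        is_peak = acc_mag[i] > acc_mag[i - 1] and acc_mag[i] > acc_mag[i + 1]
        if is_peak and acc_mag[i] > threshold and i - last >= min_gap:
            steps.append(i)
            last = i
    return steps

def dead_reckon(step_indices, heading_rad, step_length=0.7):
    """Propagate 2-D position: one fixed-length stride per detected step."""
    pos = np.zeros(2)
    track = [pos.copy()]
    for i in step_indices:
        h = heading_rad[i]
        pos = pos + step_length * np.array([np.cos(h), np.sin(h)])
        track.append(pos.copy())
    return np.array(track)

# Synthetic data: 10 s at 100 Hz, ~2 steps per second, walking due east.
t = np.arange(0, 10, 0.01)
acc_mag = 9.81 + 2.5 * np.maximum(0, np.sin(2 * np.pi * 2.0 * t + 0.3))  # bouncy gait
heading = np.zeros_like(t)                                               # 0 rad = east
steps = detect_steps(acc_mag)
track = dead_reckon(steps, heading)
print(f"{len(steps)} steps detected, final position {track[-1]} m")
```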

Face and Facial Feature Detection under Pose Variation of User Face for Human-Robot Interaction (인간-로봇 상호작용을 위한 자세가 변하는 사용자 얼굴검출 및 얼굴요소 위치추정)

  • Park Sung-Kee; Park Mignon; Lee Taigun
    • Journal of Institute of Control, Robotics and Systems, v.11 no.1, pp.50-57, 2005
  • We present a simple and effective method of face and facial feature detection under pose variation of the user's face in a complex background for human-robot interaction. Our approach is a flexible method that can be applied to both color and gray facial images and is also feasible for detecting facial features in quasi real-time. Based on the intensity characteristics of the neighborhood of facial features, a new directional template for facial features is defined. By applying this template to the input facial image, a novel edge-like blob map (EBM) with multiple intensity strengths is constructed. Using this map and conditions for facial characteristics, we show that the locations of the face and its features, i.e., two eyes and a mouth, can be successfully estimated regardless of the color information of the input image. Without information about the facial area boundary, the final candidate face region is determined by both the obtained facial feature locations and weighted correlation values with standard facial templates. Experimental results on many color images and well-known gray-level face database images confirm the usefulness of the proposed algorithm.
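As a loose stand-in for the directional-template idea, the sketch below slides a small dark-center template over a grayscale image and keeps the normalized correlation as a feature response map. The template values and synthetic image are illustrative assumptions, not the paper's EBM construction.

```python
# A minimal sketch (an assumption-laden stand-in for the paper's directional
# template / edge-like blob map): slide a small template that favors a dark
# horizontal blob (eye/mouth-like) over a grayscale image and keep the
# normalized response as a feature map. The template values are illustrative.
import numpy as np

def feature_response(gray, template):
    """Normalized cross-correlation of a small template over a grayscale image."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-8)
    h, w = gray.shape
    out = np.zeros((h - th + 1, w - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = gray[y:y + th, x:x + tw].astype(float)
            p = (patch - patch.mean()) / (patch.std() + 1e-8)
            out[y, x] = np.mean(p * t)
    return out

# Dark-center template: rewards regions darker than their surroundings,
# a crude proxy for eyes and mouth on a bright face.
template = np.ones((5, 11))
template[1:4, 2:9] = -1.0

# Synthetic "face": bright background with two dark eye blobs and a mouth.
img = np.full((60, 60), 200.0)
img[20:24, 15:24] = 60.0   # left eye
img[20:24, 36:45] = 60.0   # right eye
img[42:46, 24:37] = 80.0   # mouth

resp = feature_response(img, template)
peak = np.unravel_index(np.argmax(resp), resp.shape)
print("strongest feature response near", peak)
```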

Performing Missions of a Minicar Using a Single Camera (단안 카메라를 이용한 소형 자동차의 임무 수행)

  • Kim, Jin-Woo; Ha, Jong-Eun
    • The Journal of the Korea institute of electronic communication sciences, v.12 no.1, pp.123-128, 2017
  • This paper deals with performing missions through autonomous navigation using a camera and other sensors. Extracting the pose of the car is necessary to navigate safely within the given road; a homography is used to find it. The color image is converted into a gray image, and thresholding and edge detection are used to find control points. Two control points are converted into world coordinates using the homography to find the angle and position of the car. Color is used to find the traffic signal. Experiments confirmed that the given tasks were performed well.
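The homography step described above can be sketched as follows: image control points are mapped to ground-plane coordinates and the car's heading and lateral offset are derived from them. The calibration correspondences and pixel coordinates here are hypothetical.

```python
# A minimal sketch (assumed calibration points, not the paper's setup): map
# two lane control points from image pixels to world (ground-plane)
# coordinates with a homography, then derive the car's heading angle and
# lateral offset from the line through those points.
import numpy as np
import cv2

# Four image<->ground correspondences from a hypothetical calibration.
img_pts = np.float32([[100, 400], [540, 400], [220, 250], [420, 250]])
wld_pts = np.float32([[-0.5, 1.0], [0.5, 1.0], [-0.5, 3.0], [0.5, 3.0]])  # meters
H = cv2.getPerspectiveTransform(img_pts, wld_pts)

def to_world(pt_img):
    """Apply the homography to one pixel coordinate."""
    v = H @ np.array([pt_img[0], pt_img[1], 1.0])
    return v[:2] / v[2]

# Two control points extracted from the thresholded/edge image (assumed).
near = to_world((330, 380))
far = to_world((350, 280))
heading = np.degrees(np.arctan2(far[0] - near[0], far[1] - near[1]))  # 0 deg = straight ahead
lateral = near[0]                                                     # offset from camera axis [m]
print(f"heading {heading:+.1f} deg, lateral offset {lateral:+.2f} m")
```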

A Study on the Sail Force Prediction Method for Hull Hydrodynamic Force Measurement of 30feet Catamaran Sailing Yacht (30ft급 쌍동형 세일링 요트의 선체 유체력 계측에 의한 세일력 추정방법에 관한 연구)

  • Jang, Ho-Yun; Park, Chung-Hwan; Kim, Hyen-Woo; Lee, Byung-Sung; Lee, In-Won
    • Journal of the Society of Naval Architects of Korea, v.47 no.4, pp.477-486, 2010
  • While sailing under wind-driven thrust on the sail, a catamaran sailing yacht develops leeway and heeling. To predict the sail force, a model test was carried out according to the running attitude, and through the model test the drag and side force of the real ship were predicted. The purpose of this study is to estimate the sail force acting at the center of effort (C.E.) from the changed attitude along the running direction. From the force balance between hull and sail, the heeling force of the designed sail is predicted. Then, from the heeling force and driving force, the total sail force and its direction at the C.E. are considered for the changed mast condition, including leeway and heeling.
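The hull-sail force balance the abstract relies on can be written out in a few lines: in steady sailing the sail driving force equals the hull drag and the sail heeling force equals the hull side force, so the total sail force and its direction follow directly. The numbers below are illustrative, not the measured values from the model test.

```python
# A minimal sketch (illustrative numbers, not the paper's measurements): in
# steady sailing the sail driving force balances the hull drag and the sail
# heeling (side) force balances the hull side force, so the total sail force
# at the centre of effort follows from the measured hull forces.
import math

hull_drag = 850.0         # N, hull drag at the tested running attitude (assumed)
hull_side_force = 2300.0  # N, hull side force at the tested leeway/heel (assumed)

driving_force = hull_drag          # equilibrium along the course
heeling_force = hull_side_force    # equilibrium across the course

total_sail_force = math.hypot(driving_force, heeling_force)
force_angle = math.degrees(math.atan2(heeling_force, driving_force))
print(f"total sail force {total_sail_force:.0f} N at {force_angle:.1f} deg from the course")
```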

Defects Length Measurement Using an Estimation Algorithm of the Camera Orientation and an Inclination Angle of a Laser Slit Beam (레이저 슬릿 빔의 경사각과 카메라 자세 추정 알고리듬을 이용한 벽면결함 길이측정)

  • Kim, Young-Hwang; Yoon, Ji-Sup; Kang, E-Sok
    • Journal of Institute of Control, Robotics and Systems, v.8 no.1, pp.37-45, 2002
  • A method of measuring the length of defects on a wall and reconstructing the defect image is proposed based on an estimation algorithm of the camera orientation, which uses the inclination angle of a laser slit beam. The estimation algorithm for the horizontally inclined angle of the CCD camera adopts a 3-dimensional coordinate transformation of the image plane on which both the laser beam and the original image of the defects exist. The estimation equation is obtained by using the information of the beam projected on the wall, and the parameters of this equation are obtained experimentally. With this algorithm, the original image of the defect can be reconstructed as an image normal to the wall. A series of experiments shows that the defect is measured within a 0.5% error bound of the real defect size for horizontally inclined angles of up to 30 degrees. The proposed algorithm thus provides a way of reconstructing an image taken at an arbitrary horizontally inclined angle as an image normal to the wall, enabling accurate measurement of defect lengths with a single camera and a laser slit beam.
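One way to picture the reconstruction step is the pure-rotation rectification sketched below: once the camera's horizontal inclination is known, the image is warped with H = K R K^{-1} as if the camera faced the wall squarely. The intrinsics, angle, and test image are assumptions, not the paper's estimation algorithm.

```python
# A minimal sketch (assumed intrinsics and angle, not the paper's estimator):
# once the camera's horizontal inclination relative to the wall normal is
# known, the image can be warped as if the camera had been rotated to face
# the wall squarely, using the pure-rotation homography H = K R K^{-1}.
import numpy as np
import cv2

theta = np.radians(30.0)                    # estimated horizontal inclination (assumed)
f, cx, cy = 800.0, 320.0, 240.0             # hypothetical pinhole intrinsics
K = np.array([[f, 0, cx],
              [0, f, cy],
              [0, 0, 1.0]])

# Rotation about the vertical (y) axis that would bring the optical axis
# back to the wall normal.
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
H = K @ R @ np.linalg.inv(K)

img = np.zeros((480, 640), dtype=np.uint8)
cv2.line(img, (200, 100), (420, 360), 255, 3)     # stand-in for a wall defect
rectified = cv2.warpPerspective(img, H, (640, 480))

# Lengths measured in the rectified image correspond to the wall plane up to
# a single scale factor, which the laser slit geometry supplies in the paper.
print(rectified.shape)
```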

Pose Estimation of Ground Test Bed using Ceiling Landmark and Optical Flow Based on Single Camera/IMU Fusion (천정부착 랜드마크와 광류를 이용한 단일 카메라/관성 센서 융합 기반의 인공위성 지상시험장치의 위치 및 자세 추정)

  • Shin, Ok-Shik; Park, Chan-Gook
    • Journal of Institute of Control, Robotics and Systems, v.18 no.1, pp.54-61, 2012
  • In this paper, a pose estimation method for a satellite GTB (Ground Test Bed) using a vision/MEMS IMU (Inertial Measurement Unit) integrated system is presented. The GTB, used for verifying a satellite system on the ground, is similar to a mobile robot that has thrusters and a reaction wheel as actuators and floats on the floor on compressed air. An EKF (Extended Kalman Filter) is used to fuse the MEMS IMU with a vision system consisting of a single camera and infrared LEDs that serve as ceiling landmarks. The fusion filter generally utilizes the positions of feature points from the image as the measurement. However, this method can cause position errors due to the MEMS IMU bias when no camera image is available, if the bias is not properly estimated by the filter. Therefore, a fusion method is proposed that uses both the positions of the feature points and the camera velocity determined from their optical flow. Experiments verify that the proposed method is more robust to the IMU bias than the method that uses only the feature point positions.
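The optical-flow velocity measurement mentioned above can be approximated as in the sketch below: with a ceiling-facing camera at a known ceiling distance, the average pixel flow of the landmarks scales to a planar camera velocity. The focal length, ceiling distance, and sign convention are assumed for illustration.

```python
# A minimal sketch (assumed geometry, not the paper's filter): with a
# ceiling-facing camera at a known distance Z from the ceiling, the average
# optical flow of the ceiling features gives the camera's planar velocity,
# v ~ -(Z / f) * (pixel flow / dt), which could then be fed to an EKF as a
# velocity measurement alongside the landmark positions.
import numpy as np

f = 700.0      # focal length in pixels (assumed)
Z = 2.5        # distance from camera to ceiling in meters (assumed)
dt = 1 / 30.0  # frame period

# Tracked ceiling feature positions in two consecutive frames (pixels).
pts_prev = np.array([[320.0, 240.0], [400.0, 250.0], [280.0, 200.0]])
pts_curr = pts_prev + np.array([[-6.0, 1.5]])   # all features shift together

flow = (pts_curr - pts_prev).mean(axis=0) / dt  # average pixel flow [px/s]
vel_xy = -(Z / f) * flow                        # camera velocity in the ceiling plane [m/s]
print(f"camera velocity estimate: vx={vel_xy[0]:+.2f} m/s, vy={vel_xy[1]:+.2f} m/s")
```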

Attitude Estimation for the Biped Robot with Vision and Gyro Sensor Fusion (비전 센서와 자이로 센서의 융합을 통한 보행 로봇의 자세 추정)

  • Park, Jin-Seong; Park, Young-Jin; Park, Youn-Sik; Hong, Deok-Hwa
    • Journal of Institute of Control, Robotics and Systems, v.17 no.6, pp.546-551, 2011
  • A tilt sensor is required to control the attitude of a biped robot when it walks on uneven terrain. A vision sensor, which is used for recognizing humans or detecting obstacles, can also be used as a tilt angle sensor by comparing the current image with a reference image. However, a vision sensor alone has significant technological limitations for controlling a biped robot, such as a low sampling frequency and estimation time delay. To verify these limitations, an experimental setup with an inverted pendulum, which represents the pitch motion of a walking or running robot, is used, and it is shown that the vision sensor alone cannot control the inverted pendulum, mainly because of the time delay. In this paper, to overcome the limitations of the vision sensor, a Kalman filter for multi-rate sensor fusion is applied with a low-quality gyro sensor. It resolves the limitations of the vision sensor and also eliminates the drift of the gyro sensor. Through the inverted pendulum control experiment, it is found that the tilt estimation performance of the fused sensors is improved enough to control the attitude of the inverted pendulum.
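A minimal version of the multi-rate fusion described above is sketched below: a two-state Kalman filter (tilt and gyro bias) predicts with every gyro sample and corrects only when a slower vision tilt measurement arrives. The rates, noise levels, and simulated signals are assumptions, not the paper's experimental setup.

```python
# A minimal sketch (illustrative rates and noise levels, not the paper's
# filter): a two-state Kalman filter, state = [tilt angle, gyro bias], that
# integrates the gyro at 100 Hz and corrects with a vision tilt measurement
# arriving at only 10 Hz, which is the multi-rate situation described above.
import numpy as np

dt, vision_div = 0.01, 10                      # 100 Hz gyro; vision every 10th sample
F = np.array([[1.0, -dt], [0.0, 1.0]])         # tilt integrates (rate - bias); bias is constant
B = np.array([[dt], [0.0]])
H = np.array([[1.0, 0.0]])                     # vision measures tilt directly
Q = np.diag([1e-6, 1e-8])                      # process noise (assumed)
R = np.array([[4e-4]])                         # vision measurement noise (assumed)

x = np.zeros((2, 1))                           # [tilt; bias]
P = np.eye(2) * 0.1
rng = np.random.default_rng(1)

true_tilt, true_bias = 0.0, 0.02               # rad, rad/s
for k in range(500):                           # 5 seconds of data
    true_rate = 0.3 * np.cos(0.3 * k * dt)     # true tilt rate [rad/s]
    true_tilt += true_rate * dt
    gyro = true_rate + true_bias + rng.normal(0, 0.005)

    # Predict with every gyro sample.
    x = F @ x + B * gyro
    P = F @ P @ F.T + Q

    # Correct whenever a (slower) vision measurement arrives.
    if k % vision_div == 0:
        z = true_tilt + rng.normal(0, 0.02)
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P

print(f"tilt est {x[0, 0]:+.3f} rad (true {true_tilt:+.3f}), "
      f"bias est {x[1, 0]:+.3f} rad/s (true {true_bias})")
```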