• Title/Summary/Keyword: camera position estimation (카메라 위치 추정)

Gaze Detection by Computing Facial Rotation and Translation (얼굴의 회전 및 이동 분석에 의한 응시 위치 파악)

  • Lee, Jeong-Jun;Park, Kang-Ryoung;Kim, Jai-Hie
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.5
    • /
    • pp.535-543
    • /
    • 2002
  • In this paper, we propose a new gaze detection method that uses 2-D facial images captured by a camera mounted on top of the monitor. Only facial rotation and translation are considered, not eye movements. The proposed method computes the gaze point caused by facial rotation and the amount of facial translation separately, and combines the two to obtain the final gaze point on the monitor screen. The gaze point due to facial rotation is detected with a neural network (a multi-layered perceptron) whose inputs are the 2-D geometric changes of the facial feature points, while the amount of facial translation is estimated in real time by image-processing algorithms. Experimental results show that the RMS error between the computed gaze positions and the real ones is about 2.11 inches when the distance between the user and a 19-inch monitor is about 50~70 cm. The processing time is about 0.7 seconds on a Pentium PC (233 MHz) with 320×240-pixel images.
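
The mapping described above, from 2-D displacements of facial feature points to a point on the monitor, is learned by a multi-layer perceptron. A minimal sketch of that idea follows, using scikit-learn's MLPRegressor with entirely hypothetical feature vectors, network size, and training data (none of these values come from the paper):

```python
# Sketch: regress a monitor gaze point from 2-D facial-feature displacements.
# Feature layout, network size, and training data are assumptions, not the paper's.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical training set: each row holds 2-D displacements of a few facial
# feature points (eyes, nostrils, ...) relative to a frontal reference pose.
X_train = rng.normal(size=(500, 8))          # 4 feature points x (dx, dy)
y_train = rng.uniform(0, 1, size=(500, 2))   # normalized (x, y) on the screen

gaze_net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
gaze_net.fit(X_train, y_train)

# At run time, combine the network's rotation-induced gaze point with a
# separately estimated translation offset, as the abstract describes.
rotation_gaze = gaze_net.predict(rng.normal(size=(1, 8)))[0]
translation_offset = np.array([0.02, -0.01])  # hypothetical, from image processing
print("gaze point (normalized screen coords):", rotation_gaze + translation_offset)
```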

Development of Vehicle and/or Obstacle Detection System using Heterogenous Sensors (이종센서를 이용한 차량과 장애물 검지시스템 개발 기초 연구)

  • Jang, Jeong-Ah;Lee, Giroung;Kwak, Dong-Yong
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.11 no.5
    • /
    • pp.125-135
    • /
    • 2012
  • This paper proposes a new object detection system that uses two laser scanners and a camera to classify objects and estimate their locations on the road. Such a detection system could support new C-ITS services such as ADAS (Advanced Driver Assist System) or (semi-)automatic vehicle guidance services that rely on object types and precise positions. The paper reviews examples from other countries and examines the feasibility of an object detection system based on a camera and two laser scanners. A fusion method for the heterogeneous sensors was developed, and results of its implementation in a road environment are presented. The results show that an object detection system on roadside infrastructure is a useful approach for reliable classification and positioning of road objects such as vehicles, pedestrians, and obstacles in a street. The algorithm was evaluated under ideal conditions, so it still needs to be tested under varying conditions such as lighting and weather. This work should support better object detection and the development of new methods in an improved C-ITS environment.
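
The abstract does not spell out the fusion step, but one common way to combine a roadside camera with laser scanners is to project the scanner returns into the image so that camera-based classifications can be attached to precise laser positions. The sketch below illustrates that idea under assumed (not the paper's) calibration values:

```python
# Sketch: project laser-scanner points into a camera image so that camera-based
# classification can be attached to precise laser positions.
# Calibration values below are placeholders, not the paper's.
import numpy as np

K = np.array([[800.0, 0.0, 640.0],      # hypothetical camera intrinsics
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                           # laser-to-camera rotation (assumed aligned)
t = np.array([0.0, 0.5, 0.0])           # laser-to-camera translation [m]

def project_laser_points(points_laser):
    """points_laser: (N, 3) points in the laser frame -> (N, 2) pixel coordinates."""
    points_cam = points_laser @ R.T + t     # transform into the camera frame
    uvw = points_cam @ K.T                  # apply the pinhole projection
    return uvw[:, :2] / uvw[:, 2:3]

scan = np.array([[2.0, 0.0, 10.0], [-1.5, 0.0, 8.0]])   # example laser returns
print(project_laser_points(scan))
```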

Generation of Feature Map for Improving Localization of Mobile Robot based on Stereo Camera (스테레오 카메라 기반 모바일 로봇의 위치 추정 향상을 위한 특징맵 생성)

  • Kim, Eun-Kyeong;Kim, Sung-Shin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.13 no.1
    • /
    • pp.58-63
    • /
    • 2020
  • This paper proposes a method for improving the localization accuracy of a mobile robot based on a stereo camera. To restore position information from the stereo images, the point on the right image that corresponds to a given pixel on the left image must be found. The common approach is to search for this corresponding point by computing pixel similarity along the epipolar line. However, this has drawbacks: every pixel on the epipolar line must be evaluated, and the similarity is computed only from pixel values such as RGB color. To make up for this weakness, this paper implements a simpler search: when feature points extracted and matched by a feature-matching method form a pair located on the same y-coordinate in the left and right images, the corresponding point is found by computing the gap between their x-coordinates. In addition, the proposed method tries to preserve as many feature points as possible by falling back to the conventional algorithm for unmatched features, since the number of feature points affects the localization accuracy. The position of the mobile robot is then compensated based on the 3-D coordinates restored from the feature points and their correspondences. Experimental results show that the proposed method increases the number of feature points available for position compensation and that the robot's position can be compensated more accurately than with feature extraction alone.
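
A rough sketch of the described shortcut (match features, keep pairs lying on nearly the same y-coordinate in the left and right images, and use the x-coordinate gap as the disparity for depth) might look like this in OpenCV; the ORB detector, focal length, and baseline are assumptions rather than the paper's setup:

```python
# Sketch: 3-D points from matched features on a (rectified) stereo image pair.
# Detector choice, focal length, and baseline are assumptions, not the paper's.
import cv2
import numpy as np

def stereo_points_3d(img_left, img_right, focal_px=700.0, baseline_m=0.12,
                     max_y_gap=1.0):
    orb = cv2.ORB_create(nfeatures=1000)
    kp_l, des_l = orb.detectAndCompute(img_left, None)
    kp_r, des_r = orb.detectAndCompute(img_right, None)
    if des_l is None or des_r is None:
        return np.empty((0, 3))

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    points = []
    for m in matcher.match(des_l, des_r):
        xl, yl = kp_l[m.queryIdx].pt
        xr, yr = kp_r[m.trainIdx].pt
        disparity = xl - xr
        # Keep only pairs on (almost) the same scanline with positive disparity.
        if abs(yl - yr) <= max_y_gap and disparity > 0:
            z = focal_px * baseline_m / disparity
            points.append([(xl - img_left.shape[1] / 2) * z / focal_px,
                           (yl - img_left.shape[0] / 2) * z / focal_px,
                           z])
    return np.array(points)
```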

Localization of a Tracked Robot Based on Fuzzy Fusion of Wheel Odometry and Visual Odometry in Indoor and Outdoor Environments (실내외 환경에서 휠 오도메트리와 비주얼 오도메트리 정보의 퍼지 융합에 기반한 궤도로봇의 위치추정)

  • Ham, Hyeong-Ha;Hong, Sung-Ho;Song, Jae-Bok;Baek, Joo-Hyun;Ryu, Jae-Kwan
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.36 no.6
    • /
    • pp.629-635
    • /
    • 2012
  • Tracked robots usually have poor localization performance because of slippage of their tracks. This study proposes a new localization method for tracked robots that uses fuzzy fusion of stereo-camera-based visual odometry and encoder-based wheel odometry. Visual odometry can be inaccurate when an insufficient number of visual features are available, while the encoder is prone to accumulating errors when large slips occur. To combine these two methods, the weight of each method was controlled by a fuzzy decision depending on the surrounding environment. The experimental results show that the proposed scheme improved the localization performance of a tracked robot.
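
One minimal way to realize the described fuzzy weighting, trusting visual odometry more when many features are tracked and wheel odometry more otherwise, is sketched below; the membership breakpoints and the single-input rule are illustrative assumptions, not the paper's fuzzy rule base:

```python
# Sketch: fuzzy-weighted fusion of wheel odometry and visual odometry increments.
# Membership breakpoints are illustrative, not the paper's rule base.
import numpy as np

def feature_trust(num_features, low=30, high=150):
    """Fuzzy membership in [0, 1]: how much to trust visual odometry."""
    return float(np.clip((num_features - low) / (high - low), 0.0, 1.0))

def fuse_increment(delta_wheel, delta_visual, num_features):
    w_vo = feature_trust(num_features)
    w_wheel = 1.0 - w_vo
    return w_wheel * np.asarray(delta_wheel) + w_vo * np.asarray(delta_visual)

# Example: (dx, dy, dtheta) increments from each odometry source.
print(fuse_increment([0.10, 0.00, 0.010], [0.08, 0.01, 0.012], num_features=90))
```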

PTZ Camera Based Multi Event Processing for Intelligent Video Network (지능형 영상네트워크 연계형 PTZ카메라 기반 다중 이벤트처리)

  • Chang, Il-Sik;Ahn, Seong-Je;Park, Gwang-Yeong;Cha, Jae-Sang;Park, Goo-Man
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.11A
    • /
    • pp.1066-1072
    • /
    • 2010
  • In this paper, we propose a surveillance system that handles multiple events using multiple PTZ cameras. One event is assigned to each PTZ camera to detect unusual situations. If a new object appears in the scene while a camera is tracking an existing one, that camera cannot handle both objects simultaneously; likewise, if the object moves out of the scene during tracking, the camera loses it. In the proposed method, a nearby camera takes over in each case, tracing the new object or reacquiring the lost one. The nearby camera receives the object's location from the original camera and establishes a seamless event link for that object. Our simulation results show continuous camera-to-camera object tracking.
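
The hand-off described above, in which a nearby camera takes over a new or lost object using the location passed from the original camera, could be organized roughly as in the sketch below; the class and message names are hypothetical, not taken from the paper:

```python
# Sketch: event hand-off between PTZ cameras. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class TrackEvent:
    object_id: int
    pan: float      # last known pan angle toward the object [deg]
    tilt: float     # last known tilt angle [deg]

class PTZCamera:
    def __init__(self, name):
        self.name = name
        self.current = None          # the single event this camera handles

    def track(self, event: TrackEvent):
        self.current = event
        print(f"{self.name}: tracking object {event.object_id} "
              f"at pan={event.pan}, tilt={event.tilt}")

    def handoff(self, event: TrackEvent, neighbour: "PTZCamera"):
        # Busy with one object (or the object left the view): pass the event's
        # last known location to a nearby camera so tracking continues.
        neighbour.track(event)

cam_a, cam_b = PTZCamera("cam_a"), PTZCamera("cam_b")
cam_a.track(TrackEvent(1, pan=30.0, tilt=-5.0))
cam_a.handoff(TrackEvent(2, pan=75.0, tilt=-3.0), cam_b)   # a new object appears
```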

A Real-time Audio Surveillance System Detecting and Localizing Dangerous Sounds for PTZ Camera Surveillance (PTZ 카메라 감시를 위한 실시간 위험 소리 검출 및 음원 방향 추정 소리 감시 시스템)

  • Nguyen, Viet Quoc;Kang, HoSeok;Chung, Sun-Tae;Cho, Seongwon
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.11
    • /
    • pp.1272-1280
    • /
    • 2013
  • In this paper, we propose an audio surveillance system that can detect and localize dangerous sounds in real time. The location information allows a PTZ camera to be directed so as to capture a snapshot of the area around the dangerous sound source and send it to clients immediately. The proposed system first detects foreground sounds based on an adaptive Gaussian mixture background sound model and classifies them into one of the pre-trained classes of dangerous foreground sounds. For detected dangerous sounds, a sound source localization algorithm based on the Dual delay-line algorithm is applied to localize the source. Finally, the system orients a PTZ camera toward the dangerous sound source region and takes a snapshot of it. Experimental results show that the proposed system detects foreground dangerous sounds stably and classifies them into the correct classes with a precision of 79%, while the sound source localization estimates the orientation of the source with acceptably small error.
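
The localization step comes down to estimating the inter-microphone time difference of arrival and converting it to a bearing. The sketch below uses plain cross-correlation in place of the paper's Dual delay-line algorithm, with assumed microphone spacing and sample rate:

```python
# Sketch: bearing of a sound source from the inter-microphone time delay.
# Plain cross-correlation stands in for the paper's Dual delay-line method;
# microphone spacing and sample rate are assumptions.
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
MIC_DISTANCE = 0.2       # m (assumed)
SAMPLE_RATE = 16000      # Hz (assumed)

def estimate_bearing(mic_left, mic_right):
    """Return the source azimuth in degrees from two synchronized signals."""
    corr = np.correlate(mic_left, mic_right, mode="full")
    lag = np.argmax(corr) - (len(mic_right) - 1)      # delay in samples
    tau = lag / SAMPLE_RATE                           # delay in seconds
    sin_theta = np.clip(SPEED_OF_SOUND * tau / MIC_DISTANCE, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

# Example: the right channel is the left channel delayed by 4 samples.
rng = np.random.default_rng(0)
left = rng.normal(size=800)
right = np.roll(left, 4)
print(f"estimated bearing: {estimate_bearing(left, right):.1f} deg")
```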

An Implementation of QR Code based On-line Mobile Augmented Reality System (QR코드 기반의 온라인 모바일 증강현실 시스템의 구현)

  • Park, Min-Woo;Park, Jung-Pil;Jung, Soon-Ki
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.8
    • /
    • pp.1004-1016
    • /
    • 2012
  • This paper proposes a mobile augmented reality system that provides detailed product information using the QR codes printed on the products. In the proposed system, the camera pose is estimated with both marker-based and markerless methods. While the camera can see the QR code, the pose is estimated from the set of rectangles in the QR code; when the QR code is out of sight, the pose is estimated from the homography between consecutive frames. Moreover, the augmented reality content in the proposed system is described by meta-data, so users can build contents for various scenarios using only a meta-data file, without modifying the system. In particular, the contents are kept up to date by an on-line server, which reduces unnecessary program updates.
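
In OpenCV terms, the two pose paths described (PnP from the QR code's corners while the code is visible, and a frame-to-frame homography otherwise) could be sketched roughly as follows; the intrinsics and the physical code size are assumptions, not the paper's values:

```python
# Sketch: camera pose from a visible QR code, falling back to frame-to-frame
# homography when the code leaves the view. Intrinsics and the physical code
# size are assumptions, not the paper's values.
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
DIST = np.zeros(5)
CODE_SIZE = 0.05  # QR code edge length in metres (assumed)

# 3-D corners of the QR code in its own plane (z = 0).
OBJ_CORNERS = np.array([[0, 0, 0], [CODE_SIZE, 0, 0],
                        [CODE_SIZE, CODE_SIZE, 0], [0, CODE_SIZE, 0]], np.float32)

qr_detector = cv2.QRCodeDetector()
orb = cv2.ORB_create()

def estimate_pose(frame_gray, prev_gray=None):
    found, corners = qr_detector.detect(frame_gray)
    if found:
        # Marker-based path: PnP from the code's four outer corners.
        ok, rvec, tvec = cv2.solvePnP(OBJ_CORNERS, corners.reshape(4, 1, 2), K, DIST)
        return ("qr", rvec, tvec)
    if prev_gray is not None:
        # Markerless path: homography between consecutive frames from ORB matches.
        kp1, d1 = orb.detectAndCompute(prev_gray, None)
        kp2, d2 = orb.detectAndCompute(frame_gray, None)
        if d1 is None or d2 is None:
            return ("lost", None, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        if len(matches) < 4:
            return ("lost", None, None)
        src = np.float32([kp1[m.queryIdx].pt for m in matches])
        dst = np.float32([kp2[m.trainIdx].pt for m in matches])
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC)
        return ("homography", H, None)
    return ("lost", None, None)
```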

Infrastructure 2D Camera-based Real-time Vehicle-centered Estimation Method for Cooperative Driving Support (협력주행 지원을 위한 2D 인프라 카메라 기반의 실시간 차량 중심 추정 방법)

  • Ik-hyeon Jo;Goo-man Park
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.23 no.1
    • /
    • pp.123-133
    • /
    • 2024
  • Existing autonomous driving technology has been developed around sensors attached to the vehicle that perceive the environment and formulate driving plans. However, it has limitations, such as performance degradation in specific situations like adverse weather, backlighting, and occlusion by obstructions. To address these issues, cooperative autonomous driving technology, which extends the perception range of autonomous vehicles through support from road infrastructure, has attracted attention. Nevertheless, the real-time estimation of the 3D centroids of objects required by international standards is challenging with single-lens cameras. This paper proposes an approach that detects objects and estimates vehicle centroids in real time using the fixed field of view of road infrastructure and pre-measured geometric information. The proposed method was confirmed to estimate object center points effectively using GPS positioning equipment, and it is expected to contribute to the spread and adoption of cooperative autonomous driving infrastructure technology applicable to both vehicles and road infrastructure.
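
The abstract does not detail the geometry, but with a fixed roadside camera and pre-measured ground geometry, one common realization is an image-to-road-plane homography that maps a detection's footprint point to road coordinates. A rough sketch with made-up calibration correspondences:

```python
# Sketch: estimate a vehicle's ground-plane position from a fixed roadside
# camera using a pre-measured image-to-road homography. The calibration
# correspondences and the example bounding box are made up for illustration.
import cv2
import numpy as np

# Four pre-surveyed correspondences: pixel coordinates -> road-plane coordinates [m].
img_pts = np.float32([[100, 600], [1180, 610], [700, 300], [560, 295]])
road_pts = np.float32([[0, 0], [7, 0], [5, 40], [2, 40]])
H, _ = cv2.findHomography(img_pts, road_pts)

def vehicle_ground_position(bbox):
    """bbox = (x, y, w, h) in pixels; use the bottom-centre as the footprint point."""
    x, y, w, h = bbox
    foot = np.float32([[[x + w / 2, y + h]]])
    ground = cv2.perspectiveTransform(foot, H)[0, 0]
    return ground          # (X, Y) on the road plane in metres

print(vehicle_ground_position((620, 420, 180, 120)))
```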

Development of Building Monitoring Techniques Using Augmented Reality (증강현실을 이용한 건물 모니터링 기법 개발)

  • Jeong, Seong-Su;Heo, Joon;Woo, Sun-Kyu
    • Korean Journal of Construction Engineering and Management
    • /
    • v.10 no.6
    • /
    • pp.3-12
    • /
    • 2009
  • To distribute resources effectively, it is critical to understand the status and progress of a construction site quickly and accurately. Augmented Reality (AR) can present this information in a convenient and intuitive way. Conventional implementations of AR outdoors or on construction sites require additional sensors or markers to track the position and direction of the camera. This research aims to develop technologies for gathering information on buildings and structures under construction or already constructed. An AR technique that requires no device other than the camera was implemented to simplify the system and improve its utility in inaccessible areas. To do so, the position of the camera's perspective center and the camera's direction were estimated using exterior orientation techniques, and the 3D drawing model of the building was projected and overlapped onto the image using this information. The results show that with this technique the virtual drawing image was registered onto the real image with an error of a few pixels. The technique and procedure introduced in this paper simplify the hardware organization of the AR system, making the AR technology easier to use on construction sites and even in inaccessible areas. In addition, combining this technique with 4D CAD technology is expected to provide the project manager with more intuitive and comprehensive information that simplifies monitoring of construction progress and planning.
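
The marker-free overlay described above amounts to solving exterior orientation from known building points visible in the photo and then projecting the 3-D drawing model with the recovered pose. A compact sketch in OpenCV terms, with made-up control points and intrinsics:

```python
# Sketch: exterior orientation from surveyed building points, then projection
# of the 3-D drawing model onto the photo. Control points and intrinsics are
# made up for illustration, not taken from the paper.
import cv2
import numpy as np

K = np.array([[1500.0, 0.0, 960.0], [0.0, 1500.0, 540.0], [0.0, 0.0, 1.0]])
DIST = np.zeros(5)

# Surveyed 3-D points on the building [m] and where they appear in the photo [px].
object_pts = np.float32([[0, 0, 0], [10, 0, 0], [10, 0, 6], [0, 0, 6],
                         [0, 5, 0], [10, 5, 0]])
image_pts = np.float32([[310, 820], [1480, 840], [1460, 260], [330, 240],
                        [520, 900], [1290, 915]])

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, DIST)

# Project the drawing model's vertices with the recovered pose for the overlay.
model_vertices = np.float32([[2, 0, 1], [8, 0, 1], [8, 0, 5], [2, 0, 5]])
projected, _ = cv2.projectPoints(model_vertices, rvec, tvec, K, DIST)
print(projected.reshape(-1, 2))
```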

Panoramic 3D Reconstruction of an Indoor Scene Using Depth and Color Images Acquired from A Multi-view Camera (다시점 카메라로부터 획득된 깊이 및 컬러 영상을 이용한 실내환경의 파노라믹 3D 복원)

  • Kim, Se-Hwan;Woo, Woon-Tack
    • 한국HCI학회:학술대회논문집
    • /
    • 2006.02a
    • /
    • pp.24-32
    • /
    • 2006
  • This paper proposes a new method for 3D reconstruction of an indoor scene using partial 3D point clouds acquired from a multi-view camera. Various disparity estimation algorithms have been proposed so far, which means that a variety of depth images are available; accordingly, this paper deals with reconstructing an indoor environment using a generalized multi-view camera. First, a depth-image refinement step removes 3D points with large variation based on the temporal characteristics of the point clouds and fills empty regions by referring to neighboring 3D points based on their spatial characteristics. Second, the 3D point clouds from two consecutive viewpoints are projected onto the same image plane, corresponding points are found with a modified KLT (Kanade-Lucas-Tomasi) feature tracker, and a precise registration is performed by minimizing the distance error between the correspondences. Finally, the 3D point clouds acquired from several viewpoints and a pair of 2D images are used together to finely adjust the positions of the 3D points and generate the final 3D model. The proposed method reduces computational complexity by finding correspondences on the 2D image plane and works effectively even when the precision of the 3D data is low. Moreover, by using a multi-view camera, an indoor environment can be reconstructed in 3D from only the depth and color images of a few viewpoints. The proposed method can be used to generate 3D models for interaction as well as for navigation.
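
Two of the steps described above, tracking correspondences between consecutive viewpoints and rigidly aligning the matched 3-D points, can be sketched as follows; the tracker parameters are OpenCV defaults and the alignment is a plain Kabsch/SVD fit rather than the paper's modified KLT and refinement:

```python
# Sketch: track correspondences between two viewpoints with a KLT tracker,
# then rigidly align matched 3-D points (Kabsch/SVD). Parameters are defaults,
# not the paper's modified KLT settings.
import cv2
import numpy as np

def track_features(img_prev, img_next, max_corners=300):
    pts_prev = cv2.goodFeaturesToTrack(img_prev, max_corners, 0.01, 7)
    pts_next, status, _ = cv2.calcOpticalFlowPyrLK(img_prev, img_next, pts_prev, None)
    good = status.ravel() == 1
    return pts_prev[good].reshape(-1, 2), pts_next[good].reshape(-1, 2)

def rigid_align(src_xyz, dst_xyz):
    """Least-squares rotation R and translation t with dst ~= src @ R.T + t."""
    c_src, c_dst = src_xyz.mean(axis=0), dst_xyz.mean(axis=0)
    U, _, Vt = np.linalg.svd((src_xyz - c_src).T @ (dst_xyz - c_dst))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t
```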
