• Title/Summary/Keyword: Image-based Position Estimation (영상기반 위치 추정)

Object Detection and 3D Position Estimation based on Stereo Vision (스테레오 영상 기반의 객체 탐지 및 객체의 3차원 위치 추정)

  • Son, Haengseon;Lee, Seonyoung;Min, Kyoungwon;Seo, Seongjin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.10 no.4, pp.318-324, 2017
  • We introduce a stereo camera on an aircraft to detect flying objects and estimate their 3D positions. A PCT-based saliency map algorithm is proposed to detect small objects among clouds, and a stereo matching algorithm is then applied to find the disparity between the left and right cameras. To extract an accurate disparity, the cost aggregation region is made variable so that it adapts to the detected object; in this paper, the detection result itself is used as the cost aggregation region. For more precise disparity, sub-pixel interpolation is used to extract a floating-point disparity at the sub-pixel level. We also propose a method to estimate the spatial position of an object using the camera parameters. The approach is expected to be applicable to image-based object detection and collision-avoidance systems for autonomous aircraft.
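
For reference, the core back-projection step the abstract describes (disparity plus camera parameters to a 3D point) can be written in a few lines; the sketch below assumes a rectified stereo pair, and the focal length, baseline, and pixel values are illustrative, not those used in the paper.

```python
import numpy as np

def disparity_to_3d(u, v, disparity, fx, fy, cx, cy, baseline):
    """Back-project a pixel with known disparity to a 3D point in the
    left-camera frame. fx, fy are focal lengths in pixels, (cx, cy) the
    principal point, and baseline the camera separation in meters."""
    Z = fx * baseline / disparity          # depth from standard stereo geometry
    X = (u - cx) * Z / fx                  # lateral offset
    Y = (v - cy) * Z / fy                  # vertical offset
    return np.array([X, Y, Z])

# Example: a detected object at pixel (640, 360) with a 2.5-pixel
# (sub-pixel interpolated) disparity, 1000-pixel focal length, 0.3 m baseline.
print(disparity_to_3d(640, 360, 2.5, 1000.0, 1000.0, 640.0, 360.0, 0.3))
```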

Research to improve the performance of self localization of mobile robot utilizing video information of CCTV (CCTV 영상 정보를 활용한 이동 로봇의 자기 위치 추정 성능 향상을 위한 연구)

  • Park, Jong-Ho;Jeon, Young-Pil;Ryu, Ji-Hyoung;Yu, Dong-Hyun;Chong, Kil-To
    • Journal of the Korea Academia-Industrial cooperation Society, v.14 no.12, pp.6420-6426, 2013
  • As mobile robots are increasingly used indoors for commercial and automatic monitoring applications, accurate self-localization and recognition of the surrounding environment become essential, and most existing localization and object recognition methods rely only on sensors mounted on the robot itself. However, estimating the position of an indoor mobile robot using only its on-board sensors remains difficult. In this paper, an improved and effective self-position estimation method for a mobile robot is therefore proposed that uses a marker on the robot together with CCTV cameras already installed in the building. The square marker and the robot are first detected in the input image, the marker vertices are located, and the marker feature points are used for marker recognition. Self-position estimation of the mobile robot is then performed from the geometric relationship of the image marker, and a coordinate transformation converts the estimate, together with CCTV-derived information such as robot and obstacle positions, into absolute coordinates. The proposed method enables convenient self-position estimation of the robot indoors, and it was verified through experiments on an actual robot system.
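
The coordinate-transformation step (mapping the marker seen in the fixed CCTV image to absolute floor coordinates) can be illustrated with a planar homography; the calibration points and pixel values in this sketch are hypothetical, not taken from the paper.

```python
import numpy as np
import cv2

# Hypothetical calibration: four floor points with known absolute (world)
# coordinates and their pixel locations in the fixed CCTV image.
image_pts = np.float32([[120, 400], [520, 410], [500, 120], [140, 110]])
world_pts = np.float32([[0.0, 0.0], [4.0, 0.0], [4.0, 6.0], [0.0, 6.0]])  # meters

H, _ = cv2.findHomography(image_pts, world_pts)

def pixel_to_world(u, v):
    """Map a pixel on the floor plane (e.g. the detected marker center)
    to absolute floor coordinates using the CCTV homography."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]

# Marker center detected at pixel (310, 260) -> absolute robot position.
print(pixel_to_world(310, 260))
```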

Fuzzy Logic Based Sound Source Localization System Using Sound Strength in the Underground Parking Lot (지하주차장에서 음의 세기를 이용한 퍼지로직 기반 음원 위치추정 시스템)

  • Choi, Chang Yong;Lee, Dong Myung
    • The Journal of Korean Institute of Communications and Information Sciences, v.38C no.5, pp.434-439, 2013
  • Blind spots that conventional surveillance camera (CCTV) systems cannot cover are very difficult to monitor, so even though many accidents and incidents could be resolved by such systems, their surveillance efficiency remains low. In this paper, a fuzzy-logic-based sound source localization system using sound strength in an underground parking lot is proposed, and its performance is analyzed in order to improve the stability and accuracy of the localization algorithm. The results confirm that the localization stability of the fuzzy-logic algorithm (SLA_fuzzy) in the proposed system is 4 times higher than that of the conventional localization algorithm (SLA), and that the localization accuracy of SLA_fuzzy is 29% higher than that of the SLA.
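
A very rough sketch of intensity-based fuzzy localization is shown below: received sound levels are fuzzified and used to weight the microphone positions. The membership thresholds, microphone layout, and level values are made-up examples, and the paper's actual rule base is not reproduced here.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function used as a simple fuzzy set."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_locate(levels_db, mic_positions):
    """Fuzzify the sound level at each microphone ('near'/'far' sets over dB,
    thresholds assumed) and estimate the source position as the
    membership-weighted centroid of the microphone positions."""
    weights = []
    for level in levels_db:
        near = tri(level, 60, 80, 100)   # louder -> more 'near'
        far = tri(level, 30, 50, 70)     # quieter -> more 'far'
        weights.append(near / (near + far + 1e-9))
    weights = np.array(weights)
    return (weights[:, None] * np.array(mic_positions)).sum(axis=0) / weights.sum()

# Four microphones at the corners of a 20 m x 20 m parking section.
mics = [(0, 0), (20, 0), (20, 20), (0, 20)]
print(fuzzy_locate([78, 65, 52, 60], mics))
```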

Color-Based Image Retrieval and Localization using Color Vector Angle (칼라 벡터각을 이용한 칼라 기반 영상 검색과 위치 추정)

  • 이호영;이호근;김윤태;남재열;하영호
    • The Journal of Korean Institute of Communications and Information Sciences, v.26 no.6B, pp.810-819, 2001
  • Although color provides a very effective cue for object recognition, color distributions are strongly affected by viewing conditions and camera position. To address the problem of color distribution changes caused by variations in appearance and shape, this paper extracts color edges using the color vector angle, which is insensitive to changes in brightness and sensitive to the hue component, and then separates image pixels into smooth pixels and edge pixels before extracting color features. For edge pixels, the overall distribution of color pairs around the edges is represented by a color adjacency histogram obtained through non-uniform quantization of the HLS color space; for smooth pixels, a color vector angle histogram is constructed through non-uniform quantization of the HLS color space and uniform quantization of the color vector angle to represent the spatial color distribution. Image retrieval experiments with the proposed color histograms show that the proposed method, which uses a small number of bins, is far more efficient than conventional methods, enables retrieval that is highly robust to changes in appearance and shape, and allows much more accurate object localization than the conventional color histogram back-projection method.
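
The key property the paper relies on, that the color vector angle is insensitive to brightness changes but sensitive to hue, can be checked with a small sketch; the exact edge-detection and histogram construction used in the paper differ from this illustration.

```python
import numpy as np

def color_vector_angle(c1, c2):
    """Angle between two RGB vectors. Scaling either vector (a pure brightness
    change) leaves the angle unchanged, while a hue change does not."""
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    cos = np.dot(c1, c2) / (np.linalg.norm(c1) * np.linalg.norm(c2) + 1e-9)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# A brightness change keeps the angle near zero; a hue change produces a large angle.
print(color_vector_angle([200, 60, 40], [100, 30, 20]))   # ~0 degrees
print(color_vector_angle([200, 60, 40], [60, 200, 40]))   # large angle
```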

Joint Deep Learning of Hand Locations, Poses and Gestures (손 위치, 자세, 동작의 통합 심층 학습)

  • Kim, Donguk;Lee, Seongyeong;Jeong, Chanyang;Lee, Changhwa;Baek, Seungryul
    • Proceedings of the Korea Information Processing Society Conference, 2020.11a, pp.1048-1051, 2020
  • This paper proposes a Faster R-CNN-based framework that unifies hand localization, hand pose estimation, and hand gesture recognition, which have so far been studied separately. The proposed framework takes an RGB video as input, first generates bounding boxes for hand locations, and then recognizes hand poses and gestures based on the generated box information. Since no dataset provides ground truth for hand locations, poses, and gestures simultaneously, a method for effectively using the Egohands and FPHA datasets together is proposed, and the framework is evaluated on the FPHA data. The hand localization accuracy reached an mAP of 90.3, and gesture recognition achieved 70.6%, close to the accuracy obtained using the FPHA ground truth.
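
A minimal sketch of the first stage only (hand box detection with a Faster R-CNN) is given below, using a generic pretrained torchvision model as a stand-in for the paper's network trained on Egohands/FPHA; the frame path and score threshold are hypothetical.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Generic pretrained Faster R-CNN as a placeholder for the paper's hand detector.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = to_tensor(Image.open("frame.jpg").convert("RGB"))  # hypothetical video frame
with torch.no_grad():
    detections = model([image])[0]

# Boxes above a confidence threshold would then be cropped and passed on to the
# pose- and gesture-recognition heads described in the paper.
keep = detections["scores"] > 0.8
print(detections["boxes"][keep])
```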

Deep Image Retrieval using Attention and Semantic Segmentation Map (관심 영역 추출과 영상 분할 지도를 이용한 딥러닝 기반의 이미지 검색 기술)

  • Minjung Yoo;Eunhye Jo;Byoungjun Kim;Sunok Kim
    • Journal of Broadcast Engineering, v.28 no.2, pp.230-237, 2023
  • Self-driving is a key technology of the Fourth Industrial Revolution and can be applied to various platforms such as cars, drones, and robots. Among its components, localization, which identifies the location of objects or users using GPS, sensors, and maps, is one of the key technologies for implementing autonomous driving. Localization can be performed with GPS or LiDAR, but such equipment is expensive and heavy, and precise position estimation is difficult in places with radio interference such as underground areas or tunnels. To compensate for this, this paper proposes an image retrieval method that uses an attention module and image segmentation maps, taking color images acquired with a low-cost vision camera as input.
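
Once global descriptors have been extracted by such a network, retrieval itself reduces to nearest-neighbour search; the sketch below shows cosine-similarity retrieval over placeholder descriptors, with the attention- and segmentation-guided feature extraction of the paper left out.

```python
import numpy as np

def retrieve(query_desc, db_descs, top_k=5):
    """Nearest-neighbour retrieval over L2-normalized global descriptors by
    cosine similarity. The descriptors would come from the paper's network;
    here they are just arrays."""
    q = query_desc / np.linalg.norm(query_desc)
    db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    sims = db @ q
    order = np.argsort(-sims)[:top_k]
    return order, sims[order]

# Toy database of 100 random 512-D descriptors and one query.
db = np.random.randn(100, 512)
idx, scores = retrieve(np.random.randn(512), db)
print(idx, scores)
```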

Object Recognition in Complex Backgrounds for User-Object Interaction (사용자-객체 상호작용을 위한 복잡 배경에서의 객체 인식)

  • Bae, Ju-Han;Hwang, Yeong-Bae;Choe, Byeong-Ho;Kim, Hyo-Ju
    • Information and Communications Magazine, v.31 no.3, pp.46-53, 2014
  • For user-object interaction, the type and position of objects in an image must be identified accurately so that when the user acts on an object, the appropriate interaction can be performed. Local-invariant-feature-based methods, which are widely used for such object recognition, suffer from reduced recognition rates in complex backgrounds or on uniform objects because of incorrect matches. To address this, this article divides the scene into depth layers based on color and depth proximity and performs feature-point matching between each depth layer and the target object image, minimizing false correspondences caused by a complex background. In addition, the position of the object in each depth layer is estimated by color histogram back-projection, and the color and depth similarity between the estimated region and the target object image is evaluated. Finally, the object is recognized by measuring a confidence score that combines the number of feature correspondences (with background effects minimized) and the color and depth similarities, enabling flexible user-object interaction even in complex backgrounds.
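
The color-histogram back-projection step used to locate the object inside a depth layer can be sketched with standard OpenCV calls; the file names and histogram bin counts below are illustrative, not the article's settings.

```python
import cv2
import numpy as np

target = cv2.imread("object.png")          # hypothetical reference image of the object
scene = cv2.imread("depth_layer.png")      # hypothetical color image of one depth layer

target_hsv = cv2.cvtColor(target, cv2.COLOR_BGR2HSV)
scene_hsv = cv2.cvtColor(scene, cv2.COLOR_BGR2HSV)

# Hue-saturation histogram of the object, normalized to [0, 255].
hist = cv2.calcHist([target_hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

# Back-project onto the layer: high values mark pixels with object-like colors.
backproj = cv2.calcBackProject([scene_hsv], [0, 1], hist, [0, 180, 0, 256], 1)

# The peak of the back-projection gives a rough object location in this layer.
_, _, _, max_loc = cv2.minMaxLoc(backproj)
print("estimated object location:", max_loc)
```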

A Study on the Compensating of the Dead-reckoning Based on SLAM Using the Inertial Sensor (관성센서를 이용한 SLAM 기반의 위치 오차 보정 기법에 관한 연구)

  • Kang, Shin-Hyuk;Jang, Mun-Suck;Lee, Dong-Kwang;Lee, Eung-Hyuk
    • Journal of the Institute of Electronics Engineers of Korea SC, v.46 no.2, pp.28-35, 2009
  • Positioning technology, a core component of mobile robotics, is essential for locating the robot and navigating it to a desired position. Wheel-driven robots typically use odometry for positioning, but it is difficult to determine a constant error value because wheel slip occurs while the robot runs. In this paper, we present a way to minimize the positioning error by combining odometry with an inertial sensor. We also show how the inertial sensor reduces the error in image-based SLAM.
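
As a rough illustration of odometry/inertial fusion (not the paper's specific compensation scheme), the sketch below blends a gyro-integrated heading with the slip-prone odometry heading using a complementary filter; the blending factor and values are assumptions.

```python
import math

def fuse_heading(odom_heading, gyro_rate, prev_heading, dt, alpha=0.98):
    """Complementary-filter style fusion: trust the integrated gyro rate for
    short-term heading changes and the slip-prone odometry heading for the
    long term. alpha is an illustrative tuning value."""
    gyro_heading = prev_heading + gyro_rate * dt
    return alpha * gyro_heading + (1.0 - alpha) * odom_heading

# One step: odometry reports 10 deg, gyro measured 0.5 deg/s over 0.1 s from 9 deg.
heading = fuse_heading(math.radians(10), math.radians(0.5), math.radians(9), 0.1)
print(math.degrees(heading))
```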

Absolute Position Estimation Using IRS Satellite Images (IRS 위성영상을 이용한 절대위치 추정)

  • O, Yeong-Seok;Sim, Dong-Gyu;Park, Rae-Hong;Kim, Rin-Cheol;Lee, Sang-Uk
    • Journal of the Institute of Electronics Engineers of Korea SP, v.38 no.5, pp.453-463, 2001
  • This paper presents an absolute position estimation method using Indian remote sensing (IRS) satellite images, which is a part of a position estimation (PE) system. The accumulated buffer (AB) matching method is proposed, in which a set of accumulator cells is employed for fast edge-based matching. Through computer simulations with two sets of real aerial image sequences, the performance of the AB matching method is analyzed and its effectiveness is shown in terms of the position error in the hybrid PE system.
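
As a loose stand-in for the accumulated-buffer (AB) matching idea, the sketch below performs edge-based template matching of an aerial patch against a reference satellite image; plain correlation replaces the paper's accumulator cells, and the file names are hypothetical.

```python
import cv2

reference = cv2.imread("irs_reference.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
patch = cv2.imread("aerial_patch.png", cv2.IMREAD_GRAYSCALE)       # hypothetical

# Edge maps for both images; matching on edges rather than raw intensities.
ref_edges = cv2.Canny(reference, 50, 150)
patch_edges = cv2.Canny(patch, 50, 150)

# Score every offset by how well the edge maps coincide; the best offset gives
# the estimated absolute position of the patch within the reference image.
scores = cv2.matchTemplate(ref_edges, patch_edges, cv2.TM_CCORR)
_, _, _, best_offset = cv2.minMaxLoc(scores)
print("estimated position in reference image:", best_offset)
```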

Performance Analysis of Vision-based Positioning Assistance Algorithm (비전 기반 측위 보조 알고리즘의 성능 분석)

  • Park, Jong Soo;Lee, Yong;Kwon, Jay Hyoun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.37 no.3, pp.101-108, 2019
  • Due to recent improvements in computer processing speed and image processing technology, research is being actively carried out to combine camera information with existing GNSS (Global Navigation Satellite System) and dead reckoning. In this study, we developed a vision-based positioning assistance algorithm that estimates the distance to an object from stereo images. In addition, a GNSS/on-board vehicle sensor/vision-based positioning algorithm is developed by combining the vision-based algorithm with the existing positioning algorithm. For the performance analysis, the velocity calculated from an actual driving test was used to correct the navigation solution, and simulation tests were performed to analyze the effect of velocity precision. The analysis confirms that position accuracy improves by about 4% when vision information is added to the existing GNSS/on-board-sensor-based positioning algorithm.
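
The vision-derived quantity used for the correction, a velocity obtained from the change in stereo-estimated distance between frames, can be sketched as follows; the focal length, baseline, disparities, and frame interval are illustrative values, not those from the study.

```python
def stereo_distance(disparity_px, focal_px, baseline_m):
    """Distance to a tracked object from stereo disparity (standard geometry)."""
    return focal_px * baseline_m / disparity_px

def vision_velocity(d_prev, d_curr, dt):
    """Approach speed toward a static object between two frames; this kind of
    vision-derived velocity could be fed back as a correction to the
    GNSS/on-board navigation solution."""
    return (d_prev - d_curr) / dt

d1 = stereo_distance(18.0, 1200.0, 0.5)   # distance at frame k-1
d2 = stereo_distance(19.5, 1200.0, 0.5)   # distance at frame k
print("vision velocity [m/s]:", vision_velocity(d1, d2, 0.1))
```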