• Title/Abstract/Keyword: scene localization


A Comparison of Scene Change Localization Methods over the Open Video Scene Detection Dataset

  • Panchenko, Taras; Bieda, Igor
    • International Journal of Computer Science & Network Security
    • Vol. 22, No. 6
    • pp. 1-6
    • 2022
  • Scene change detection is an important topic because of the wide and growing range of its applications. Streaming services from many providers are increasing their capacity, which drives growth in the industry. A method for scene change detection is described here and compared with state-of-the-art methods on the Open Video Scene Detection (OVSD) dataset - an open dataset of Creative Commons-licensed videos freely available for download and use in evaluating video scene detection algorithms. The proposed method is based on scene analysis using threshold values and handling of smooth scene changes. A comparison of the presented method with these baselines was conducted in this research. The obtained results demonstrate the high efficiency of the proposed scene cut localization method: its efficiency, measured in terms of precision, recall, accuracy, and F-score, exceeds the best previously known results.
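The abstract above describes threshold-based scene cut detection without giving the exact rules, so the following is only a minimal Python sketch of the general idea: compare consecutive frame histograms, flag a hard cut above one threshold, and report a gradual (smooth) change when a lower threshold is exceeded over several consecutive frames. The thresholds, histogram size, and run length are illustrative assumptions, not values from the paper.

```python
import numpy as np

def detect_cuts(frames, hard_thresh=0.5, soft_thresh=0.25, soft_len=5):
    """Flag scene changes from per-frame histogram differences.

    A frame pair whose histogram distance exceeds `hard_thresh` is a hard cut;
    a run of at least `soft_len` consecutive distances above `soft_thresh`
    is reported as a gradual (smooth) scene change.
    """
    def hist(frame):
        h, _ = np.histogram(frame, bins=64, range=(0, 255))
        return h / h.sum()

    cuts, run, prev = [], 0, None
    for i, frame in enumerate(frames):
        cur = hist(frame)
        if prev is not None:
            d = 0.5 * np.abs(cur - prev).sum()   # L1 histogram distance in [0, 1]
            if d > hard_thresh:
                cuts.append(("cut", i))
                run = 0
            elif d > soft_thresh:
                run += 1
                if run == soft_len:
                    cuts.append(("gradual", i - soft_len + 1))
            else:
                run = 0
        prev = cur
    return cuts

# Toy demo: 30 dark frames followed by 30 bright frames -> one hard cut at frame 30.
frames = [np.full((120, 160), 40, np.uint8)] * 30 + [np.full((120, 160), 200, np.uint8)] * 30
print(detect_cuts(frames))   # [('cut', 30)]
```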

이동 로봇의 상대적 위치 추정을 위한 직사각형 기반의 기하학적 방법 (Geometric Formulation of Rectangle Based Relative Localization of Mobile Robot)

  • 이주행; 이재연; 이아현; 김재홍
    • 로봇학회논문지
    • Vol. 11, No. 1
    • pp. 9-18
    • 2016
  • A rectangle-based relative localization method is proposed for a mobile robot, based on a novel geometric formulation. In the artificial environments where mobile robots navigate, rectangular shapes are ubiquitous. When a scene rectangle is captured with a camera attached to the robot, localization can be performed and described in the relative coordinates of that rectangle. In particular, our method works from a single image of a scene rectangle whose aspect ratio is unknown. Moreover, camera calibration is unnecessary under the assumption of a pinhole camera model. The proposed method is largely based on the theory of coupled line cameras (CLC), which provides a basis for efficient computation with analytic solutions and an intuitive geometric interpretation. We introduce the fundamentals of CLC and describe the proposed method with experimental results in a simulation environment.
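For contrast with the CLC formulation above, the sketch below shows the conventional route to rectangle-based relative pose: it assumes a calibrated pinhole camera (a hypothetical intrinsic matrix K) and a rectangle of known size, and recovers the pose with OpenCV's solvePnP. The point of the paper is precisely that these assumptions (known aspect ratio, calibration) can be dropped via the CLC analytic solutions, which are not reproduced here; all numeric values are illustrative.

```python
import cv2
import numpy as np

# Hypothetical rectangle, 0.8 m x 0.5 m, with corners expressed in its own frame.
W, H = 0.8, 0.5
obj = np.array([[0, 0, 0], [W, 0, 0], [W, H, 0], [0, H, 0]], dtype=np.float32)

# Hypothetical calibrated pinhole camera (focal length and principal point in pixels).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Illustrative image coordinates of the four detected corner points.
img = np.array([[210.0, 300.0], [430.0, 310.0], [420.0, 180.0], [220.0, 170.0]],
               dtype=np.float32)

# Camera pose relative to the rectangle: rotation (rvec) and translation (tvec).
ok, rvec, tvec = cv2.solvePnP(obj, img, K, None)
print(ok, tvec.ravel())   # translation of the rectangle origin in the camera frame
```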

비전 시스템을 이용한 이동로봇 Self-positioning과 VRML과의 영상오버레이 (Self-Positioning of a Mobile Robot using a Vision System and Image Overlay with VRML)

  • 권방현; 정길도
    • 대한전기학회:학술대회논문집
    • Proceedings of the 2005 Symposium, Information and Control Division
    • pp. 258-260
    • 2005
  • We describe a method for localizing a mobile robot in its working environment using a vision system and VRML. The robot identifies landmarks in the environment and carries out self-positioning. Image-processing and neural network pattern matching techniques are employed to recognize the landmarks placed in the robot's working environment. Self-positioning with the vision system is based on a well-known localization algorithm. After self-positioning, the 2D camera scene is overlaid with the VRML scene. This paper describes how to realize the self-positioning and shows the result of overlaying the 2D scene and the VRML scene. In addition, we describe the advantages expected from overlapping the two scenes.

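The entry above does not spell out how the camera image and the VRML scene are combined on screen; a common way to realize such an image overlay is simple alpha blending of the rendered VRML view with the camera frame, as in the sketch below. The synthetic images, blend weight, and function name are illustrative only, not taken from the paper.

```python
import numpy as np

def overlay(camera_rgb, vrml_rgb, alpha=0.4):
    """Blend a rendered VRML view over the camera image for visual comparison."""
    cam = camera_rgb.astype(np.float32)
    ren = vrml_rgb.astype(np.float32)
    return np.clip((1.0 - alpha) * cam + alpha * ren, 0, 255).astype(np.uint8)

# Toy demo with synthetic images standing in for the camera frame and the VRML render.
camera = np.full((240, 320, 3), 90, np.uint8)
render = np.zeros((240, 320, 3), np.uint8)
render[80:160, 120:200] = (0, 255, 0)          # a green box marking a modelled object
blended = overlay(camera, render)
print(blended.shape, blended.dtype)
```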

오차 감소를 위한 이동로봇 Self-Localization과 VRML 영상오버레이 기법 (Self-localization of a Mobile Robot for Decreasing the Error and VRML Image Overlay)

  • 권방현; 손은호; 김영철; 정길도
    • 제어로봇시스템학회논문지
    • Vol. 12, No. 4
    • pp. 389-394
    • 2006
  • Inaccurate localization exposes a robot to many dangerous conditions. It can cause the robot to move in the wrong direction or be damaged by collisions with surrounding obstacles. There are numerous approaches to self-localization, using different modalities (vision, laser range finders, ultrasonic sonars). Since sensor information is generally uncertain and contains noise, many studies have tried to reduce this noise, but their accuracy is limited because most rely on statistical approaches. The goal of our research is to measure the robot's location more exactly by matching a pre-built VRML 3D model against the real vision image. To determine the position of the mobile robot, a landmark-localization technique is applied. Landmarks are any detectable structures in the physical environment; some approaches use vertical lines, others use specially designed markers. In this paper, specially designed markers are used as landmarks. Given a known focal length and a single image of three landmarks, it is possible to compute the angular separation between the lines of sight to the landmarks. Image-processing and neural network pattern matching techniques are employed to recognize the landmarks placed in the robot's working environment. After self-localization, the 2D vision scene is overlaid with the VRML scene.
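The angular-separation step described above follows directly from the pinhole model: back-project each landmark's image point into a viewing ray using the known focal length and take the angle between the rays. A minimal sketch, assuming the focal length and principal point are given in pixels (the specific numbers and the function name are made up for illustration):

```python
import numpy as np

def angular_separation(p1, p2, f, cx, cy):
    """Angle between the lines of sight through two image points (pinhole model).

    p1, p2 : (u, v) pixel coordinates of two landmark centers
    f      : focal length in pixels
    cx, cy : principal point in pixels
    """
    r1 = np.array([p1[0] - cx, p1[1] - cy, f], dtype=float)
    r2 = np.array([p2[0] - cx, p2[1] - cy, f], dtype=float)
    cosang = np.dot(r1, r2) / (np.linalg.norm(r1) * np.linalg.norm(r2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Two hypothetical landmark detections in a 640x480 image, f = 800 px.
print(angular_separation((150, 240), (480, 240), f=800, cx=320, cy=240))
```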

VRML 영상오버레이기법을 이용한 로봇의 Self-Localization (VRML image overlay method for Robot's Self-Localization)

  • 손은호; 권방현; 김영철; 정길도
    • 대한전기학회:학술대회논문집
    • Proceedings of the 2006 Symposium, Information and Control Division
    • pp. 318-320
    • 2006
  • Inaccurate localization exposes a robot to many dangerous conditions. It can cause the robot to move in the wrong direction or be damaged by collisions with surrounding obstacles. There are numerous approaches to self-localization, using different modalities (vision, laser range finders, ultrasonic sonars). Since sensor information is generally uncertain and contains noise, many studies have tried to reduce this noise, but their accuracy is limited because most rely on statistical approaches. The goal of our research is to measure the robot's location more exactly by matching a pre-built VRML 3D model against the real vision image. To determine the position of the mobile robot, a landmark-localization technique is applied. Landmarks are any detectable structures in the physical environment; some approaches use vertical lines, others use specially designed markers. In this paper, specially designed markers are used as landmarks. Given a known focal length and a single image of three landmarks, it is possible to compute the angular separation between the lines of sight to the landmarks. Image-processing and neural network pattern matching techniques are employed to recognize the landmarks placed in the robot's working environment. After self-localization, the 2D vision scene is overlaid with the VRML scene.


야지환경에서 연합형 필터 기반의 다중센서 융합을 이용한 무인지상로봇 위치추정 (UGV Localization using Multi-sensor Fusion based on Federated Filter in Outdoor Environments)

  • 최지훈; 박용운; 주상현; 심성대; 민지홍
    • 한국군사과학기술학회지
    • Vol. 15, No. 5
    • pp. 557-564
    • 2012
  • This paper presents UGV localization using multi-sensor fusion based on a federated filter in outdoor environments. A conventional GPS/INS integrated system does not guarantee robust localization because GPS is vulnerable to external disturbances. In many environments, however, a vision system is very effective because, compared to open space, there are many features, and these features can provide rich information for UGV localization. Thus, this paper uses vision navigation based on scene matching and pose estimation, a magnetic compass, and an odometer to cope with GPS-denied environments. An NR-mode federated filter is used for system safety. Experimental results on a predefined path demonstrate the enhanced robustness and accuracy of localization in outdoor environments.
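The abstract above does not detail the filter equations. As background, a federated filter runs one local filter per sensor group and fuses their estimates in a master filter by information-weighted averaging; in NR (no-reset) mode the fused result is not fed back to the local filters. The sketch below shows only that fusion step on two hypothetical local estimates, not the authors' full filter design.

```python
import numpy as np

def federated_fuse(states, covariances):
    """Master-filter fusion step of a federated filter (information-weighted average).

    states      : list of local state estimates, each of shape (n,)
    covariances : list of matching (n, n) covariance matrices
    """
    info = sum(np.linalg.inv(P) for P in covariances)          # fused information matrix
    P_fused = np.linalg.inv(info)
    x_fused = P_fused @ sum(np.linalg.inv(P) @ x for x, P in zip(states, covariances))
    return x_fused, P_fused

# Toy 2D position estimates from two hypothetical local filters
# (e.g. vision navigation and compass/odometry dead reckoning).
x_vision, P_vision = np.array([10.2, 5.1]), np.diag([0.5, 0.5])
x_odo,    P_odo    = np.array([10.8, 4.7]), np.diag([2.0, 2.0])
x, P = federated_fuse([x_vision, x_odo], [P_vision, P_odo])
print(x, np.diag(P))   # fused estimate lies closer to the lower-variance vision estimate
```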

자연 영상에서의 정확한 문자 검출에 관한 연구 (A Study on Localization of Text in Natural Scene Images)

  • 최미영; 김계영; 최형일
    • 한국컴퓨터정보학회논문지
    • Vol. 13, No. 5
    • pp. 77-84
    • 2008
  • This paper proposes a new approach for accurately detecting text in natural scene images. Reflection components present in an image acquired under strong light or illumination can introduce errors into text extraction and recognition when the boundaries of characters or objects of interest become blurred or when objects of interest blend into the background. To remove the reflection components in an image, two peak points are first detected in the histogram of the red color channel. The distribution between the two detected peaks is used to decide whether the image is a normal image or a reflection-affected (polarized) image. For a normal image, text regions are detected without additional processing; for a polarized image, homomorphic filtering is applied to remove the regions corresponding to the reflection components. Then, to detect text regions, candidate character regions are determined using color merging and a saliency map. Finally, the final text regions are detected by combining the two sets of candidate regions.

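The entry above applies homomorphic filtering to suppress reflection and illumination components. A minimal sketch of the standard technique follows: take the log of the image, attenuate low spatial frequencies (slowly varying illumination) relative to high frequencies (reflectance detail) in the Fourier domain, then exponentiate. The cutoff and gain values are illustrative, and the red-channel peak test, color merging, and saliency-map steps of the paper are not reproduced.

```python
import numpy as np

def homomorphic_filter(gray, cutoff=30.0, low_gain=0.5, high_gain=1.5):
    """Suppress slowly varying illumination with homomorphic filtering."""
    h, w = gray.shape
    log_img = np.log1p(gray.astype(np.float64))          # log-transform the image
    spec = np.fft.fftshift(np.fft.fft2(log_img))

    # Gaussian high-pass style transfer function rising from low_gain to high_gain.
    v, u = np.mgrid[0:h, 0:w]
    d2 = (u - w / 2.0) ** 2 + (v - h / 2.0) ** 2
    H = (high_gain - low_gain) * (1.0 - np.exp(-d2 / (2.0 * cutoff ** 2))) + low_gain

    out = np.real(np.fft.ifft2(np.fft.ifftshift(spec * H)))
    out = np.expm1(out)                                   # back from the log domain
    out = (out - out.min()) / (out.max() - out.min() + 1e-9)
    return (out * 255).astype(np.uint8)                   # rescale to 0-255 for display

# Toy input: uniform scene under a strong left-to-right illumination gradient.
img = np.tile(np.linspace(60, 220, 256), (256, 1)).astype(np.uint8)
filtered = homomorphic_filter(img)
print(filtered.shape, filtered.dtype)
```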

Accurate Human Localization for Automatic Labelling of Human from Fisheye Images

  • Than, Van Pha; Nguyen, Thanh Binh; Chung, Sun-Tae
    • 한국멀티미디어학회논문지
    • Vol. 20, No. 5
    • pp. 769-781
    • 2017
  • Deep learning networks such as Convolutional Neural Networks (CNNs) show successful performance in many computer vision applications, such as image classification and object detection. For implementation in an embedded system with limited processing power and memory, a deep learning network may need to be simplified. However, a simplified deep learning network cannot learn every possible scene. One realistic strategy for an embedded deep learning network is to construct a simplified network model optimized for the scene images of the installation place; automatic training is then necessary for commercialization. In this paper, as an intermediate step toward automatic training under fisheye camera environments, we study more precise human localization in fisheye images and propose an accurate human localization method, the Automatic Ground-Truth Labelling Method (AGTLM). AGTLM first localizes candidate human bounding boxes using a GoogLeNet-LSTM approach and then, after a reassurance process by a GoogLeNet-based CNN, refines them more correctly and precisely (tightly) by applying a saliency object detection technique. The improvement of the proposed human localization method, AGTLM, with respect to accuracy and tightness is shown through several experiments.
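The final AGTLM step above tightens candidate boxes using saliency object detection. The abstract does not specify the saliency model, so the sketch below assumes a precomputed saliency map and illustrates only the box-tightening idea on a toy example; the threshold and function name are assumptions.

```python
import numpy as np

def tighten_box(saliency, box, thresh=0.5):
    """Shrink a candidate box to the bounding box of salient pixels inside it.

    saliency : 2-D float array with values in [0, 1]
    box      : (x0, y0, x1, y1) candidate human bounding box
    """
    x0, y0, x1, y1 = box
    patch = saliency[y0:y1, x0:x1] >= thresh
    if not patch.any():
        return box                       # nothing salient: keep the original box
    ys, xs = np.nonzero(patch)
    return (x0 + xs.min(), y0 + ys.min(), x0 + xs.max() + 1, y0 + ys.max() + 1)

# Toy saliency map with one salient blob inside a loose candidate box.
sal = np.zeros((200, 200))
sal[60:120, 80:130] = 1.0
print(tighten_box(sal, (40, 30, 180, 170)))   # -> (80, 60, 130, 120)
```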

지역적, 전역적 특징을 이용한 환경 인식 (Scene Recognition Using Local and Global Features)

  • 강산들; 황중원; 정희철; 한동윤; 심성대; 김준모
    • 한국군사과학기술학회지
    • Vol. 15, No. 3
    • pp. 298-305
    • 2012
  • In this paper, we propose an integrated algorithm for scene recognition, a challenging computer vision problem, with application to mobile robot localization. The proposed scene recognition method utilizes SIFT and visual words as local-level features and GIST as a global-level feature. Because the local-level and global-level features complement each other, their combination results in improved scene recognition performance. The proposed algorithm has low computational complexity and is robust to image distortions.
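The abstract above combines local features (SIFT quantized into visual words) with the global GIST descriptor. One simple way to fuse such cues, assumed here purely for illustration and not claimed to be the authors' integration scheme, is to normalize each descriptor and concatenate them before classification; the vocabulary size and GIST dimensionality below are made-up values.

```python
import numpy as np

def combined_descriptor(bovw_hist, gist_vec):
    """Concatenate an L2-normalized visual-word histogram (local) with an
    L2-normalized GIST descriptor (global) into one scene feature vector."""
    def l2(v):
        v = np.asarray(v, dtype=float)
        return v / (np.linalg.norm(v) + 1e-9)
    return np.concatenate([l2(bovw_hist), l2(gist_vec)])

# Hypothetical sizes: a 200-word vocabulary and a 512-D GIST descriptor.
bovw = np.random.randint(0, 10, size=200)
gist = np.random.rand(512)
feat = combined_descriptor(bovw, gist)
print(feat.shape)   # (712,) -> fed to a scene classifier / place matcher
```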

팬데믹 시대의 도시 씬 요소 변화 (Changes in Urban Scene Elements in the Pandemic)

  • 구선아; 장원호
    • 한국경제지리학회지
    • Vol. 23, No. 3
    • pp. 262-275
    • 2020
  • The COVID-19 pandemic has brought change to cities around the world. As the global economic system weakened, localization intensified in the production and distribution of goods. In cities where localization strengthened, consumption patterns changed, and the way physical places are consumed is changing as well. Consumption at large multi-use facilities has dropped sharply, the boundary between online and offline has been eroding faster, and amenity consumption for sharing tastes has become more segmented, specialized, and private. Urban scenes, understood as concentrations of urban amenities, have also changed significantly. The local scale and locality have become more important in urban scenes, and a new urban scene element, empathy, has emerged. Empathy aims at social and emotional connection in an individual's consumption of urban amenities and pursues connectedness, taste-based consumption, and nostalgia. This study names spaces of cultural consumption based on empathy 'empathic spaces,' explains the concept, and suggests the importance of empathic spaces in urban scenes in the post-COVID era.