• Title/Summary/Keyword: scene localization

A Comparison of Scene Change Localization Methods over the Open Video Scene Detection Dataset

  • Panchenko, Taras;Bieda, Igor
    • International Journal of Computer Science & Network Security / v.22 no.6 / pp.1-6 / 2022
  • Scene change detection is an important topic because of its wide and growing range of applications; as streaming providers keep expanding their catalogs, the industry's need for it grows as well. A method for scene change detection is described here and compared with state-of-the-art methods on the Open Video Scene Detection (OVSD) dataset, an open collection of Creative Commons-licensed videos freely available for download and for evaluating video scene detection algorithms. The proposed method analyzes scenes using threshold values and handles smooth (gradual) scene changes. The comparison conducted in this research demonstrates the high efficiency of the proposed scene cut localization method: measured in terms of precision, recall, accuracy, and F-score, it exceeds the best previously published results.
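A minimal sketch of threshold-based scene cut detection on per-frame histogram differences is shown below; the metric, the threshold value, and the handling of transitions here are assumptions for illustration, not the authors' published algorithm.

```python
import cv2
import numpy as np

def detect_scene_cuts(video_path, threshold=0.5):
    """Flag frames whose color-histogram distance from the previous frame
    exceeds a threshold. Illustrative only; the OVSD paper's method also
    treats smooth (gradual) scene changes."""
    cap = cv2.VideoCapture(video_path)
    cuts, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            # Bhattacharyya distance: 0 = identical, 1 = maximally different
            d = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
            if d > threshold:
                cuts.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return cuts
```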

Geometric Formulation of Rectangle Based Relative Localization of Mobile Robot (이동 로봇의 상대적 위치 추정을 위한 직사각형 기반의 기하학적 방법)

  • Lee, Joo-Haeng;Lee, Jaeyeon;Lee, Ahyun;Kim, Jaehong
    • The Journal of Korea Robotics Society / v.11 no.1 / pp.9-18 / 2016
  • A rectangle-based relative localization method is proposed for a mobile robot based on a novel geometric formulation. In the artificial environments where mobile robots navigate, rectangular shapes are ubiquitous. When a scene rectangle is captured by a camera attached to the robot, localization can be performed and described in the relative coordinates of that rectangle. Notably, the method works from a single image of a scene rectangle whose aspect ratio is unknown, and camera calibration is unnecessary under the pinhole camera model assumption. The proposed method is largely based on the theory of coupled line cameras (CLC), which provides efficient computation with analytic solutions and an intuitive geometric interpretation. We introduce the fundamentals of CLC and describe the proposed method with experimental results in a simulation environment.
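The CLC formulation itself is not reproduced here. As a point of comparison only, the same "localize against a scene rectangle" problem can be sketched with OpenCV's conventional PnP solver, under the extra assumptions (known rectangle size and a calibrated camera matrix `K`) that the paper's method explicitly avoids.

```python
import numpy as np
import cv2

def pose_from_rectangle(image_corners, width, height, K):
    """Camera pose relative to a scene rectangle of known size via PnP.
    NOTE: this is NOT the coupled-line-cameras (CLC) method of the paper,
    which needs neither the aspect ratio nor a calibrated camera; it is a
    conventional baseline for the same localization idea."""
    # Rectangle corners in their own coordinate frame (z = 0 plane).
    object_pts = np.array([[0, 0, 0],
                           [width, 0, 0],
                           [width, height, 0],
                           [0, height, 0]], dtype=np.float64)
    image_pts = np.asarray(image_corners, dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)
    # Camera position expressed in the rectangle's coordinate frame.
    cam_pos = (-R.T @ tvec).ravel()
    return R, tvec, cam_pos
```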

Self-Positioning of a Mobile Robot using a Vision System and Image Overlay with VRML (비전 시스템을 이용한 이동로봇 Self-positioning과 VRML과의 영상오버레이)

  • Hyun, Kwon-Bang;To, Chong-Kil
    • Proceedings of the KIEE Conference / 2005.05a / pp.258-260 / 2005
  • We describe a method for localizing a mobile robot in its working environment using a vision system and VRML. The robot identifies landmarks in the environment and carries out self-positioning. Image-processing and neural-network pattern-matching techniques are employed to recognize landmarks placed in the robot's working environment, and the vision-based self-positioning follows a well-known localization algorithm. After self-positioning, the 2D camera scene is overlaid with the VRML scene. This paper describes how the self-positioning is realized, shows the resulting overlay of the 2D and VRML scenes, and discusses the advantages expected from combining the two.

Self-localization of a Mobile Robot for Decreasing the Error and VRML Image Overlay (오차 감소를 위한 이동로봇 Self-Localization과 VRML 영상오버레이 기법)

  • Kwon Bang-Hyun;Shon Eun-Ho;Kim Young-Chul;Chong Kil-To
    • Journal of Institute of Control, Robotics and Systems / v.12 no.4 / pp.389-394 / 2006
  • Inaccurate localization exposes a robot to many dangerous conditions: it can drive the robot in the wrong direction or lead to damage through collision with surrounding obstacles. There are numerous approaches to self-localization, using different sensing modalities (vision, laser range finders, ultrasonic sonar). Since sensor information is generally uncertain and noisy, much research has aimed at reducing the noise, but accuracy remains limited because most of this work relies on statistical approaches. The goal of our research is to measure the robot's location more exactly by matching a prebuilt VRML 3D model against the real camera image. To determine the position of the mobile robot, a landmark-localization technique is applied. Landmarks are any detectable structures in the physical environment; some systems use vertical lines, others use specially designed markers, and in this paper specially designed markers serve as landmarks. Given a known focal length and a single image of three landmarks, it is possible to compute the angular separation between the landmarks' lines of sight. Image-processing and neural-network pattern-matching techniques are employed to recognize the landmarks placed in the robot's working environment. After self-localization, the 2D camera scene is overlaid with the VRML scene.
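The angular-separation step mentioned in the abstract follows directly from the pinhole model; the sketch below assumes landmark image positions given in pixels and a known principal point, and does not reproduce the landmark-recognition or VRML-matching stages.

```python
import numpy as np

def angular_separation(p1, p2, focal_length, principal_point):
    """Angle between the lines of sight to two image points (in pixels),
    assuming a pinhole camera with the given focal length (pixels) and
    principal point."""
    cx, cy = principal_point
    # Back-project each pixel to a viewing ray in camera coordinates.
    r1 = np.array([p1[0] - cx, p1[1] - cy, focal_length], dtype=float)
    r2 = np.array([p2[0] - cx, p2[1] - cy, focal_length], dtype=float)
    cos_theta = r1 @ r2 / (np.linalg.norm(r1) * np.linalg.norm(r2))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

# With three landmarks at known positions, the three pairwise angles set up
# a classical resection problem whose solution is the robot pose.
```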

VRML image overlay method for Robot's Self-Localization (VRML 영상오버레이기법을 이용한 로봇의 Self-Localization)

  • Sohn, Eun-Ho;Kwon, Bang-Hyun;Kim, Young-Chul;Chong, Kil-To
    • Proceedings of the KIEE Conference / 2006.04a / pp.318-320 / 2006
  • Inaccurate localization exposes a robot to many dangerous conditions: it can drive the robot in the wrong direction or lead to damage through collision with surrounding obstacles. There are numerous approaches to self-localization, using different sensing modalities (vision, laser range finders, ultrasonic sonar). Since sensor information is generally uncertain and noisy, much research has aimed at reducing the noise, but accuracy remains limited because most of this work relies on statistical approaches. The goal of our research is to measure the robot's location more exactly by matching a prebuilt VRML 3D model against the real camera image. To determine the position of the mobile robot, a landmark-localization technique is applied. Landmarks are any detectable structures in the physical environment; some systems use vertical lines, others use specially designed markers, and in this paper specially designed markers serve as landmarks. Given a known focal length and a single image of three landmarks, it is possible to compute the angular separation between the landmarks' lines of sight. Image-processing and neural-network pattern-matching techniques are employed to recognize the landmarks placed in the robot's working environment. After self-localization, the 2D camera scene is overlaid with the VRML scene.

UGV Localization using Multi-sensor Fusion based on Federated Filter in Outdoor Environments (야지환경에서 연합형 필터 기반의 다중센서 융합을 이용한 무인지상로봇 위치추정)

  • Choi, Ji-Hoon;Park, Yong Woon;Joo, Sang Hyeon;Shim, Seong Dae;Min, Ji Hong
    • Journal of the Korea Institute of Military Science and Technology / v.15 no.5 / pp.557-564 / 2012
  • This paper presents UGV localization using multi-sensor fusion based on a federated filter in outdoor environments. A conventional GPS/INS integrated system cannot guarantee robust localization because GPS is vulnerable to external disturbances. In many environments, however, a vision system is very effective, since the scene contains far more features than open space and those features provide rich information for UGV localization. This paper therefore combines vision navigation based on scene matching and pose estimation with a magnetic compass and odometer to cope with GPS-denied environments, and a no-reset (NR) mode federated filter is used for system safety. Experimental results along a predefined path demonstrate improved robustness and accuracy of localization in outdoor environments.
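As a rough illustration of the federated architecture (not the paper's implementation; the state layout and the set of local filters are assumptions), the master-filter step fuses the local filters' estimates by information weighting, and in NR mode the fused result is not fed back to the local filters.

```python
import numpy as np

def federated_fusion(estimates, covariances):
    """Master-filter step of a no-reset federated filter: fuse the state
    estimates of independent local filters (e.g. scene-matching vision,
    compass + odometer dead reckoning) by information-weighted averaging.

    estimates   : list of state vectors x_i (shape (n,))
    covariances : list of covariance matrices P_i (shape (n, n))
    """
    info_total = np.zeros_like(covariances[0])
    info_state = np.zeros_like(estimates[0], dtype=float)
    for x_i, P_i in zip(estimates, covariances):
        P_inv = np.linalg.inv(P_i)
        info_total += P_inv          # accumulate information matrices
        info_state += P_inv @ x_i    # accumulate weighted states
    P_fused = np.linalg.inv(info_total)
    x_fused = P_fused @ info_state
    return x_fused, P_fused
```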

A Study on Localization of Text in Natural Scene Images (자연 영상에서의 정확한 문자 검출에 관한 연구)

  • Choi, Mi-Young;Kim, Gye-Young;Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information / v.13 no.5 / pp.77-84 / 2008
  • This paper proposes a new approach that eliminates the reflectance component for the localization of text in natural scene images. Natural scene images normally contain an illumination component as well as a reflectance component, and it is well known that the reflectance component obstructs detecting and recognizing objects such as text, since it blurs the overall image. We developed an approach that efficiently removes reflectance components while preserving illumination components. To determine the lighting environment, an input image is first classified as normal or polarized using a histogram of its red component. For a normal image, the text region is acquired without additional processing; for a polarized image, light reflected from objects is removed using homomorphic filtering. Candidate text regions are then determined using a color-merging technique and a saliency map, and the text region is finally localized from these two sets of candidates.
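A minimal homomorphic-filtering sketch in the spirit described above follows; the Gaussian low-pass cutoff is an assumed parameter, and the paper's normal/polarized classification and color-merging stages are not reproduced.

```python
import numpy as np

def homomorphic_suppress_reflectance(gray, cutoff=30):
    """Homomorphic filtering sketch: since I = illumination * reflectance,
    log(I) separates the two additively; keeping the low-frequency part of
    the log image retains the slowly varying illumination and suppresses
    the high-frequency reflectance."""
    img = np.log1p(gray.astype(np.float64))
    F = np.fft.fftshift(np.fft.fft2(img))
    rows, cols = img.shape
    u = np.arange(rows)[:, None] - rows / 2
    v = np.arange(cols)[None, :] - cols / 2
    lowpass = np.exp(-(u**2 + v**2) / (2 * cutoff**2))  # Gaussian low-pass
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(F * lowpass)))
    out = np.expm1(filtered)  # back from log domain
    return np.clip(out, 0, 255).astype(np.uint8)
```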

Accurate Human Localization for Automatic Labelling of Human from Fisheye Images

  • Than, Van Pha;Nguyen, Thanh Binh;Chung, Sun-Tae
    • Journal of Korea Multimedia Society / v.20 no.5 / pp.769-781 / 2017
  • Deep learning networks such as Convolutional Neural Networks (CNNs) perform well in many computer vision applications, including image classification and object detection. To implement a deep learning network on an embedded system with limited processing power and memory, the network may need to be simplified; however, a simplified network cannot learn every possible scene. One realistic strategy for embedded deep learning is to construct a simplified network model optimized for the scene images of the installation site, and automatic training then becomes necessary for commercialization. In this paper, as an intermediate step toward automatic training under fisheye camera environments, we study more precise human localization in fisheye images and propose an accurate human localization method, the Automatic Ground-Truth Labelling Method (AGTLM). AGTLM first localizes candidate human bounding boxes using a GoogLeNet-LSTM approach, verifies them with a GoogLeNet-based CNN, and finally refines them more correctly and precisely (tightly) by applying a saliency object detection technique. The improvement of the proposed method with respect to accuracy and tightness is shown through several experiments.
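Only the final saliency-based tightening step is sketched below, under assumed conventions for the saliency map and box format; the GoogLeNet-LSTM detection and CNN verification stages of AGTLM are not reproduced here.

```python
import numpy as np

def tighten_box_with_saliency(saliency, box, thresh=0.5):
    """Shrink a candidate bounding box to the extent of salient pixels
    inside it. `saliency` is an HxW map scaled to [0, 1]; `box` is
    (x1, y1, x2, y2) in pixel coordinates. The threshold and this exact
    rule are illustrative assumptions."""
    x1, y1, x2, y2 = box
    roi = saliency[y1:y2, x1:x2] >= thresh
    ys, xs = np.nonzero(roi)
    if ys.size == 0:
        return box  # nothing salient inside; keep the original box
    return (x1 + xs.min(), y1 + ys.min(),
            x1 + xs.max() + 1, y1 + ys.max() + 1)
```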

Scene Recognition Using Local and Global Features (지역적, 전역적 특징을 이용한 환경 인식)

  • Kang, San-Deul;Hwang, Joong-Won;Jung, Hee-Chul;Han, Dong-Yoon;Sim, Sung-Dae;Kim, Jun-Mo
    • Journal of the Korea Institute of Military Science and Technology / v.15 no.3 / pp.298-305 / 2012
  • In this paper, we propose an integrated algorithm for scene recognition, which remains a challenging computer vision problem, with application to mobile robot localization. The proposed method uses SIFT and visual words as local-level features and GIST as a global-level feature. Because local-level and global-level features complement each other, combining them improves scene recognition performance. The resulting algorithm has low computational complexity and is robust to image distortions.
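A hedged sketch of the local-plus-global feature combination: the visual-word codebook is assumed to be precomputed (e.g. by k-means over SIFT descriptors), and a coarse intensity grid stands in for the GIST descriptor the paper actually uses.

```python
import cv2
import numpy as np

def combined_scene_descriptor(gray, codebook):
    """Concatenate a bag-of-visual-words histogram of SIFT descriptors with
    a coarse global descriptor. `codebook` is a (k, 128) array of SIFT
    cluster centers assumed to exist already."""
    # Local part: quantize SIFT descriptors against the visual-word codebook.
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(gray, None)
    hist = np.zeros(len(codebook))
    if desc is not None:
        dists = np.linalg.norm(desc[:, None, :] - codebook[None, :, :], axis=2)
        words = dists.argmin(axis=1)
        hist = np.bincount(words, minlength=len(codebook)).astype(float)
        hist /= hist.sum()
    # Global part: 4x4 grid of mean intensities as a crude GIST-like stand-in.
    grid = cv2.resize(gray, (4, 4), interpolation=cv2.INTER_AREA).flatten() / 255.0
    return np.concatenate([hist, grid])
```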

Changes in Urban Scene Elements in the Pandemic (팬데믹 시대의 도시 씬 요소 변화)

  • Gu, Suna;Jang, Wonho
    • Journal of the Economic Geographical Society of Korea / v.23 no.3 / pp.262-275 / 2020
  • Due to the COVID-19 pandemic, cities around the world have faced change. As the global economic system weakens, localization is increasing in product production and distribution systems, and consumption patterns have changed in urban areas where localization has strengthened. As a result, the way physical places are consumed is also changing: consumption in large multi-use facilities has drastically decreased, the collapse of the boundary between online and offline has accelerated, and the consumption of amenities for sharing tastes has become more subdivided, specialized, and private. A major change has also appeared in the urban scene, understood as the concentration of urban amenities. Local scale and locality have become important in the urban scene, and a new urban scene element called empathy has emerged. Empathy aims to connect socially and emotionally with individuals consuming urban amenities, through the pursuit of connectivity, taste consumption, and nostalgia. In this study, the space for cultural consumption based on empathy is named empathetic space and the concept is explained, and the importance of empathetic space in the urban scene of the post-corona future is presented.