• Title/Summary/Keyword: Visual

Image Mosaicking Considering Pairwise Registrability in Structure Inspection with Underwater Robots (수중 로봇을 이용한 구조물 검사에서의 상호 정합도를 고려한 영상 모자이킹)

  • Hong, Seonghun
    • The Journal of Korea Robotics Society
    • /
    • v.16 no.3
    • /
    • pp.238-244
    • /
    • 2021
  • Image mosaicking is a common and useful technique to visualize a global map by stitching a large number of local images obtained from visual surveys in underwater environments. In particular, visual inspection of underwater structures using underwater robots is a promising application for image mosaicking. Feature-based pairwise image registration is a commonly employed process in most image mosaicking algorithms to estimate visual odometry information between compared images. However, visual features are not always uniformly distributed on the surface of underwater structures, so the performance of image registration can vary significantly, which results in unnecessary computation in image matching for poorly conditioned image pairs. This study proposes a pairwise registrability measure to select informative image pairs and improve the overall computational efficiency of underwater image mosaicking algorithms. The validity and effectiveness of the image mosaicking algorithm considering pairwise registrability are demonstrated using an experimental dataset obtained with a full-scale ship in a real sea environment.
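
The pair-selection idea can be illustrated with a minimal sketch: score each candidate image pair before spending time on full feature matching, and register only the pairs above a threshold. The registrability score used here (the smaller ORB keypoint count of the pair, normalized by a minimum-feature count) and the function names are illustrative assumptions, not the measure defined in the paper.

```python
import cv2

def feature_richness(img, detector):
    """Count detected keypoints as a crude proxy for local texture."""
    return len(detector.detect(img, None))

def registrability(img_a, img_b, detector, min_features=150):
    """Assumed pairwise registrability score: the smaller keypoint count
    of the two images, normalized by a minimum-feature threshold."""
    return min(feature_richness(img_a, detector),
               feature_richness(img_b, detector)) / min_features

def select_pairs(images, threshold=1.0):
    """Keep only sequential image pairs worth sending to full matching."""
    orb = cv2.ORB_create(nfeatures=1000)
    selected = []
    for i in range(len(images) - 1):
        if registrability(images[i], images[i + 1], orb) >= threshold:
            selected.append((i, i + 1))  # only these pairs are registered
    return selected
```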

Visual Positioning System based on Voxel Labeling using Object Simultaneous Localization And Mapping

  • Jung, Tae-Won;Kim, In-Seon;Jung, Kye-Dong
    • International Journal of Advanced Culture Technology
    • /
    • v.9 no.4
    • /
    • pp.302-306
    • /
    • 2021
  • Indoor localization is one of the basic elements of Location-Based Services, such as indoor navigation, location-based precision marketing, spatial recognition for robotics, augmented reality, and mixed reality. We propose a voxel-labeling-based visual positioning system using object simultaneous localization and mapping (SLAM). Our method determines a location through single-image 3D cuboid object detection and object SLAM for indoor navigation: it builds an indoor map, addresses the map with voxels, and matches against a defined space. First, high-quality cuboids are generated by sampling 2D bounding boxes and vanishing points for single-image object detection. Then, after jointly optimizing the poses of cameras, objects, and points, the system performs visual positioning (VPS) by matching against the object pose information stored in the voxel database. Our method provides the spatial information the user needs, with improved location accuracy and direction estimation.
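
A rough sketch of the voxel-addressing step, assuming a simple uniform grid: an object position estimated by object SLAM is mapped to an integer voxel index and looked up in a voxel database of labeled object poses. The voxel size, database layout, and function names are assumptions for illustration only.

```python
import numpy as np

VOXEL_SIZE = 0.5  # meters per voxel edge (assumed resolution)

def voxel_address(position, origin=np.zeros(3), size=VOXEL_SIZE):
    """Map a 3D position (e.g., an object centroid from object SLAM)
    to an integer voxel index used as a key in the voxel database."""
    return tuple(np.floor((np.asarray(position) - origin) / size).astype(int))

# Hypothetical voxel database: voxel index -> labeled object pose
voxel_db = {
    voxel_address([2.1, 0.3, 1.0]): {"label": "door", "yaw_deg": 90.0},
}

def locate(detected_objects):
    """Match detected object positions against the voxel database to
    decide which mapped space the camera is currently in."""
    return [voxel_db[voxel_address(obj["position"])]
            for obj in detected_objects
            if voxel_address(obj["position"]) in voxel_db]

print(locate([{"position": [2.2, 0.4, 1.1]}]))  # -> [{'label': 'door', ...}]
```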

Sharing Eye Gaze in Mixed Reality Remote Collaboration System (원격협업 시스템에서 협력자 눈 시점 공유)

  • Jeong, Jaejoon;Kim, Seungwon
    • Smart Media Journal
    • /
    • v.11 no.6
    • /
    • pp.30-36
    • /
    • 2022
  • This paper explores the effect of adding an eye gaze pointer to the hand gesture visual communication cue in remote collaboration. We recruited 24 participants and conducted a user study comparing two conditions: (1) the hand gesture visual communication cue only, and (2) the eye gaze pointer together with the hand gesture cue. The results showed that the added eye gaze pointer reduced workload and increased co-presence when used together with the hand gesture cue.

Omni-directional Visual-LiDAR SLAM for Multi-Camera System (다중 카메라 시스템을 위한 전방위 Visual-LiDAR SLAM)

  • Javed, Zeeshan;Kim, Gon-Woo
    • The Journal of Korea Robotics Society
    • /
    • v.17 no.3
    • /
    • pp.353-358
    • /
    • 2022
  • Due to the limited field of view of the pinhole camera, camera pose estimation applications such as visual SLAM lack stability and accuracy. Nowadays, multiple-camera setups and large field-of-view cameras are used to address these issues. However, a multiple-camera system increases the computational complexity of the algorithm. Therefore, for multiple-camera-assisted visual simultaneous localization and mapping (vSLAM), a multi-view tracking algorithm is proposed that balances the feature budget in tracking and local mapping. The proposed algorithm is based on the PanoSLAM architecture with a panoramic camera model. To avoid the scale issue, a 3D LiDAR is fused with the omnidirectional camera setup. Depth is estimated directly from the 3D LiDAR, and the remaining features are triangulated from pose information. To validate the method, we collected a dataset in an outdoor environment and performed extensive experiments. Accuracy was measured by the absolute trajectory error, which shows comparable robustness across various environments.
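
The depth-assignment idea can be sketched as follows, under an assumed pinhole projection: LiDAR points are transformed into the camera frame, projected into the image, and each 2D feature takes the depth of the nearest projected point; features that receive no depth would be triangulated from pose information instead (not shown). The interface (K, T_cam_lidar, the pixel threshold) is an assumption, not taken from the paper.

```python
import numpy as np

def assign_lidar_depth(features_uv, lidar_points, K, T_cam_lidar, max_px=2.0):
    """Give each 2D feature (u, v) a metric depth from the nearest
    projected LiDAR point. K: 3x3 intrinsics, T_cam_lidar: 4x4 extrinsics."""
    # Transform LiDAR points into the camera frame and project them.
    pts_h = np.hstack([lidar_points, np.ones((len(lidar_points), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]        # keep points in front
    depths = np.full(len(features_uv), np.nan)
    if len(pts_cam) == 0:
        return depths
    proj = (K @ pts_cam.T).T
    proj_uv = proj[:, :2] / proj[:, 2:3]

    for i, uv in enumerate(features_uv):
        dist = np.linalg.norm(proj_uv - uv, axis=1)
        j = np.argmin(dist)
        if dist[j] < max_px:                      # accept only close projections
            depths[i] = pts_cam[j, 2]
    return depths
```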

Loosely Coupled LiDAR-visual Mapping and Navigation of AMR in Logistic Environments (실내 물류 환경에서 라이다-카메라 약결합 기반 맵핑 및 위치인식과 네비게이션 방법)

  • Choi, Byunghee;Kang, Gyeongsu;Roh, Yejin;Cho, Younggun
    • The Journal of Korea Robotics Society
    • /
    • v.17 no.4
    • /
    • pp.397-406
    • /
    • 2022
  • This paper presents an autonomous mobile robot (AMR) system and operation algorithms for logistics and factory facilities that do not require magnetic guide-line installation. Unlike widely used AMR systems, we propose an EKF-based loosely coupled fusion of LiDAR measurements and visual markers. Our method first constructs an occupancy grid and a visual marker map in the mapping process and uses the prebuilt maps for precise localization. We also developed a waypoint-based navigation pipeline for robust autonomous operation in unconstrained environments. The proposed system estimates the robot pose by updating the state with fused visual marker and LiDAR measurements. Finally, we evaluated the proposed method in indoor environments and existing factory facilities. The experimental results present the performance of our system compared with a well-known LiDAR-based localization and navigation system.
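
A minimal sketch of loosely coupled fusion in the spirit described above: a small EKF over a planar pose (x, y, yaw) predicts with wheel odometry and corrects with independent pose measurements from visual markers and LiDAR scan matching. The state layout, motion model, and noise values are assumptions; the paper's actual filter is not reproduced.

```python
import numpy as np

class LooselyCoupledEKF:
    """Minimal 2D pose EKF illustrating loosely coupled fusion."""

    def __init__(self):
        self.x = np.zeros(3)            # state: [x, y, yaw]
        self.P = np.eye(3) * 0.1        # state covariance

    def predict(self, v, w, dt, q=0.01):
        """Propagate the pose with a unicycle motion model."""
        x, y, yaw = self.x
        self.x = np.array([x + v * dt * np.cos(yaw),
                           y + v * dt * np.sin(yaw),
                           yaw + w * dt])
        F = np.array([[1, 0, -v * dt * np.sin(yaw)],
                      [0, 1,  v * dt * np.cos(yaw)],
                      [0, 0, 1]])
        self.P = F @ self.P @ F.T + np.eye(3) * q

    def update(self, z_pose, r):
        """Correct with an absolute pose measurement (marker or LiDAR)."""
        H = np.eye(3)                    # measurement is the full pose
        R = np.eye(3) * r
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z_pose - self.x)
        self.P = (np.eye(3) - K @ H) @ self.P

ekf = LooselyCoupledEKF()
ekf.predict(v=0.5, w=0.0, dt=0.1)
ekf.update(np.array([0.05, 0.0, 0.0]), r=0.05)   # visual marker pose
ekf.update(np.array([0.05, 0.01, 0.0]), r=0.02)  # LiDAR scan-match pose
```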

Is the Peak-Affect Important in Fast Processing of Visual Images in Printed Ads?: A Comparative Study on the Affect Integration Theories

  • Bu, Kyunghee;Lee, Luri
    • Asia Marketing Journal
    • /
    • v.24 no.3
    • /
    • pp.96-108
    • /
    • 2022
  • This study investigates how affects elicited by visual images in print ads are integrated to form a liking for the ads. Assuming sequential rather than simultaneous processing of still-cut images, we adopt the 'think-aloud' method to capture consumers' spontaneous responses to visual images. We hypothesize not only that consumers show mixed affects toward a still-cut visual image but also that they integrate their serial affects heuristically rather than simply averaging them as the compensatory hypothesis suggests. By comparing the effects of two contradictory affect integration hypotheses (i.e., peak-affect and mood-maintenance) with compensatory integration in a single regression model, we found that peak-negative integration of serial affects, along with mood maintenance, works best in the formation of liking for a print ad. The results also support our initial premise that people can hold mixed valence even toward a still-cut ad.

Infrared Visual Inertial Odometry via Gaussian Mixture Model Approximation of Thermal Image Histogram (열화상 이미지 히스토그램의 가우시안 혼합 모델 근사를 통한 열화상-관성 센서 오도메트리)

  • Jaeho Shin;Myung-Hwan Jeon;Ayoung Kim
    • The Journal of Korea Robotics Society
    • /
    • v.18 no.3
    • /
    • pp.260-270
    • /
    • 2023
  • We introduce a novel Visual Inertial Odometry (VIO) algorithm designed to improve the performance of thermal-inertial odometry. Thermal infrared images, though advantageous for feature extraction in low-light conditions, typically suffer from high noise levels and significant information loss during 8-bit conversion. Our algorithm overcomes these limitations by approximating the 14-bit raw pixel histogram with a Gaussian mixture model. The conversion method effectively emphasizes image regions where texture for visual tracking is abundant while reducing unnecessary background information. We incorporate robust learning-based feature extraction and matching methods, SuperPoint and SuperGlue, and a zero-velocity detection module to further reduce the uncertainty of visual odometry. Tested across various datasets, the proposed algorithm shows improved performance compared to other state-of-the-art VIO algorithms, paving the way for robust thermal-inertial odometry.
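
One way to realize the described conversion, sketched under assumptions: fit a Gaussian mixture to the 14-bit intensity distribution and use the mixture CDF as the tone curve that maps raw values to 8 bits, so densely populated intensity ranges (typically textured regions) receive more output levels. The number of components, the subsampling, and the function name are illustrative; this is not the paper's exact formulation.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def thermal_to_8bit(raw14, n_components=3, sample=20000):
    """Map a 14-bit thermal image to 8 bits using the CDF of a Gaussian
    mixture fitted to the pixel-intensity histogram."""
    pixels = raw14.reshape(-1, 1).astype(np.float64)
    idx = np.random.choice(len(pixels), size=min(sample, len(pixels)),
                           replace=False)
    gmm = GaussianMixture(n_components=n_components).fit(pixels[idx])

    # Mixture CDF evaluated at every pixel value acts as the tone curve.
    means = gmm.means_.ravel()
    stds = np.sqrt(gmm.covariances_.ravel())
    cdf = np.zeros(len(pixels))
    for w, m, s in zip(gmm.weights_, means, stds):
        cdf += w * norm.cdf(pixels.ravel(), loc=m, scale=s)

    return (cdf.reshape(raw14.shape) * 255.0).astype(np.uint8)
```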

Visual Feature Extraction Technique for Content-Based Image Retrieval

  • Park, Won-Bae;Song, Young-Jun;Kwon, Heak-Bong;Ahn, Jae-Hyeong
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.12
    • /
    • pp.1671-1679
    • /
    • 2004
  • This study proposes visual feature extraction methods for each band in the wavelet domain, capturing both spatial frequency and multi-resolution features. In addition, it introduces a similarity measurement method based on fuzzy theory and a new color feature representation that uses the frequency of identical colors after color quantization, reducing the quantization error that is a disadvantage of the existing color histogram intersection method. Experiments are performed on a database containing 1,000 color images. The proposed method gives better performance than the conventional method in both objective and subjective evaluations.
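
A small sketch of two ideas from the abstract, with assumed simplifications: a coarse quantized-color frequency feature and a fuzzy-set style similarity (ratio of elementwise minimum to elementwise maximum). The quantization level and the exact similarity form are assumptions, not the paper's definitions.

```python
import numpy as np

def quantized_color_hist(img_rgb, levels=4):
    """Quantize each RGB channel to a few levels and count how often each
    quantized color occurs (a coarse color-frequency feature)."""
    q = (img_rgb.astype(np.int32) * levels) // 256              # per-channel level
    codes = q[..., 0] * levels * levels + q[..., 1] * levels + q[..., 2]
    hist = np.bincount(codes.ravel(), minlength=levels ** 3).astype(np.float64)
    return hist / hist.sum()

def fuzzy_similarity(h1, h2):
    """Fuzzy-set style similarity between two normalized histograms:
    sum of elementwise min (intersection) over sum of elementwise max (union)."""
    return np.minimum(h1, h2).sum() / np.maximum(h1, h2).sum()
```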

Image Enhancement for Visual SLAM in Low Illumination (저조도 환경에서 Visual SLAM을 위한 이미지 개선 방법)

  • Donggil You;Jihoon Jung;Hyeongjun Jeon;Changwan Han;Ilwoo Park;Junghyun Oh
    • The Journal of Korea Robotics Society
    • /
    • v.18 no.1
    • /
    • pp.66-71
    • /
    • 2023
  • As cameras have become primary sensors for mobile robots, vision-based Simultaneous Localization and Mapping (SLAM) has achieved impressive results with the recent development of computer vision and deep learning. However, visual information has the disadvantage that much of it disappears in low-light environments. To overcome this problem, we propose an image enhancement method for performing visual SLAM in low-light environments. Using deep generative adversarial models and a modified gamma correction, the quality of low-light images is improved. The proposed method produces less sharp images than the existing method, but it can be applied to ORB-SLAM in real time because it dramatically reduces the amount of computation. Experimental results on the public TUM and VIVID++ datasets demonstrate the validity of the proposed method.
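
The gamma-correction part can be sketched as follows, assuming a common adaptive scheme in which the gamma value is derived from the image's mean intensity; the learned generative enhancement model from the paper is not reproduced here, and the function name and target mean are illustrative.

```python
import numpy as np
import cv2

def adaptive_gamma(img_bgr, target_mean=0.5):
    """Brighten a low-light 8-bit image with a gamma chosen from its mean
    intensity (gamma = log(target) / log(mean)), applied via a lookup table."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY) / 255.0
    mean = max(gray.mean(), 1e-3)
    gamma = np.log(target_mean) / np.log(mean)   # < 1 for dark images
    lut = ((np.arange(256) / 255.0) ** gamma * 255.0).astype(np.uint8)
    return cv2.LUT(img_bgr, lut)
```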

A Study on Migration of a Web-based Convergence Service (웹 기반 융합 서비스의 이동성 연구)

  • Song, Eun-Ji;Kim, Su-Ra;Choi, Hun-Hoi;Kim, Geun-Hyung
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2011.11a
    • /
    • pp.1129-1131
    • /
    • 2011
  • Recently, the variety of Internet-connected devices with different display sizes, operating systems, and hardware capabilities, such as smartphones, tablet PCs, and smart TVs, has been increasing, and web service providers are offering new web services by converging existing content and services. As personally owned devices and services multiply, technology that lets a converged service move freely between a user's devices has become necessary. However, because seamless service migration between devices with different characteristics is difficult, device manufacturers and network operators have been providing N-Screen services tied to their own devices or platforms to overcome this. This paper defines the movable objects involved in migrating a web service between devices and shows, using HTML5 WebSocket technology, that service migration between user devices is possible regardless of device characteristics or platform.
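
A minimal sketch of the migration step over a WebSocket, under assumptions: the "movable object" is serialized as JSON and pushed to a hypothetical relay server, which forwards it to the target device over its own connection. The relay URI, message schema, and the Python 'websockets' client stand in for the paper's browser-side HTML5 WebSocket implementation.

```python
import asyncio
import json
import websockets  # third-party 'websockets' package

# Hypothetical "movable object": the state needed to resume the service
# on another device (service name, URL, playback position, etc.).
service_state = {
    "service": "video-player",
    "url": "https://example.com/watch?v=abc",
    "position_sec": 217.4,
}

async def migrate(relay_uri, target_device_id):
    """Send the serialized service state to an assumed relay server, which
    forwards it to the target device and returns an acknowledgement."""
    async with websockets.connect(relay_uri) as ws:
        await ws.send(json.dumps({"to": target_device_id,
                                  "state": service_state}))
        ack = await ws.recv()          # wait for the target's acknowledgement
        print("migration ack:", ack)

# asyncio.run(migrate("ws://relay.example.com/ws", "tablet-01"))
```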