• Title/Summary/Keyword: Visual system

Loosely Coupled LiDAR-visual Mapping and Navigation of AMR in Logistic Environments (실내 물류 환경에서 라이다-카메라 약결합 기반 맵핑 및 위치인식과 네비게이션 방법)

  • Choi, Byunghee;Kang, Gyeongsu;Roh, Yejin;Cho, Younggun
    • The Journal of Korea Robotics Society / v.17 no.4 / pp.397-406 / 2022
  • This paper presents an autonomous mobile robot (AMR) system and operation algorithms for logistics and factory facilities that require no magnetic guide-line installation. Unlike widely used AMR systems, we propose an EKF-based loosely coupled fusion of LiDAR measurements and visual markers. Our method first constructs an occupancy grid and a visual marker map in the mapping process and then uses these prebuilt maps for precise localization. We also developed a waypoint-based navigation pipeline for robust autonomous operation in unconstrained environments. The proposed system estimates the robot pose by updating the state with the fused visual marker and LiDAR measurements. Finally, we tested the proposed method in indoor environments and existing factory facilities. The experimental results compare the performance of our system with a well-known LiDAR-based localization and navigation system.
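
A minimal sketch of the loosely coupled fusion described above, assuming a 3-DoF pose state [x, y, yaw]: each sensor produces its own pose estimate, and the EKF applies them as independent measurement updates. The measurement values, noise covariances, and names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ekf_update(x, P, z, R, H=np.eye(3)):
    """Generic EKF measurement update for a pose state [x, y, yaw]."""
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

x = np.zeros(3)                          # predicted robot pose
P = np.eye(3) * 0.1                      # its covariance

# Loosely coupled: LiDAR scan matching and marker detection each yield
# a full pose measurement, fused as two separate updates.
z_lidar,  R_lidar  = np.array([1.02, 0.48, 0.10]), np.diag([0.05, 0.05, 0.010])
z_marker, R_marker = np.array([1.00, 0.50, 0.09]), np.diag([0.02, 0.02, 0.005])

x, P = ekf_update(x, P, z_lidar, R_lidar)
x, P = ekf_update(x, P, z_marker, R_marker)
print(x)                                 # fused pose estimate
```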

A LiDAR-based Visual Sensor System for Automatic Mooring of a Ship (선박 자동계류를 위한 LiDAR기반 시각센서 시스템 개발)

  • Kim, Jin-Man;Nam, Taek-Kun;Kim, Heon-Hui
    • Journal of the Korean Society of Marine Environment & Safety / v.28 no.6 / pp.1036-1043 / 2022
  • This paper discusses the development of a visual sensor that can be installed in an automatic mooring device to detect the berthing condition of a vessel. Despite controls on a ship's speed and checks of its location during berthing, ship collisions occur at piers every year, causing great economic and environmental damage. It is therefore important to develop a visual system that can quickly obtain information on the speed and location of a vessel to ensure safe berthing. In this study, a visual sensor was developed to observe a ship through images during berthing and to properly check the ship's status under the surrounding conditions. To establish the adequacy of the sensor, its characteristics were analyzed against the information provided by existing sensors, namely detection range, real-time performance, accuracy, and precision. Based on this analysis, we developed a 3D visual module that can acquire information on objects in real time, covering the conceptual design of a LiDAR (Light Detection And Ranging) type 3D visual system, its driving mechanism, and a position and force controller for the motion tilting system. Finally, a performance evaluation of the control system and a scan speed test were conducted, and the effectiveness of the developed system was confirmed through experiments.
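
A toy illustration of the kind of measurement such a sensor must deliver (not the paper's algorithm): berthing distance and approach speed read off two successive range scans of the hull. The scan values and timing below are made up.

```python
import numpy as np

def approach_speed(scan_t0, scan_t1, dt):
    """Estimate hull distance and approach speed from two successive
    range scans (arrays of ranges toward the berthing vessel)."""
    d0 = np.min(scan_t0)              # nearest hull point at time t
    d1 = np.min(scan_t1)              # nearest hull point at time t + dt
    return d1, (d0 - d1) / dt         # positive speed: the vessel is closing

# Two 1-D slices of a scan taken 0.5 s apart (illustrative values)
d, v = approach_speed(np.array([12.4, 11.8, 12.1]),
                      np.array([12.1, 11.5, 11.9]), dt=0.5)
print(f"distance {d:.1f} m, approach speed {v:.2f} m/s")
```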

Survey on Visual Navigation Technology for Unmanned Systems (무인 시스템의 자율 주행을 위한 영상기반 항법기술 동향)

  • Kim, Hyoun-Jin;Seo, Hoseong;Kim, Pyojin;Lee, Chung-Keun
    • Journal of Advanced Navigation Technology / v.19 no.2 / pp.133-139 / 2015
  • This paper surveys vision-based autonomous navigation technologies for unmanned systems. The main branches of visual navigation are visual servoing, visual odometry, and visual simultaneous localization and mapping (SLAM). Visual servoing provides a velocity input that guides a mobile system to a desired pose; this velocity is calculated from the feature difference between the desired image and the acquired image. Visual odometry estimates the relative pose between consecutive image frames, which can improve accuracy compared with existing dead-reckoning methods. Visual SLAM aims to construct a map of an unknown environment while simultaneously determining the mobile system's location, which is essential for operating unmanned systems in unknown environments. Trends in visual navigation are surveyed by examining international research on these technologies.
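
The visual servoing velocity input mentioned above is classically computed as v = -λL⁺e, where e = s - s* is the feature error and L the interaction (image Jacobian) matrix. A minimal sketch with two point features; the feature coordinates and depth below are placeholders:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of one normalized image point at depth Z."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
        [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
    ])

def ibvs_velocity(s, s_star, L, lam=0.5):
    """Classic IBVS law: camera velocity v = -lambda * L^+ * (s - s*)."""
    return -lam * np.linalg.pinv(L) @ (s - s_star)

# Two point features (stacked x, y coordinates) and their desired values
s, s_star = np.array([0.12, 0.05, -0.08, 0.02]), np.zeros(4)
L = np.vstack([interaction_matrix(0.12, 0.05, Z=1.0),
               interaction_matrix(-0.08, 0.02, Z=1.0)])
v = ibvs_velocity(s, s_star, L)   # 6-DoF camera velocity command
```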

The Hierarchy of Images according to Construction Factors of the Flared Skirts

  • Lee, Jung-Soon;Han, Gyung-Hee
    • Journal of Fashion Business / v.13 no.6 / pp.137-146 / 2009
  • This study analyzed the hierarchy of images in the visual evaluation of flared skirts, based on the frequencies of image-expression words for skirts of different flare volumes and lengths. The stimuli comprised four flare volumes (90°, 180°, 270°, 360°) and three skirt lengths (48 cm, 58 cm, 68 cm), produced with I-Designer, a virtual sewing system. From the simulated flared skirts, subjects were asked to freely write down adjectives, yielding 210 adjectives; of these, 38 were selected based on their frequencies in the pre-study. We then analyzed how the expression words combine according to the construction factors of the flared skirt, and derived the image hierarchy from the dendrogram produced by hierarchical cluster analysis. 'Feminine' scored high for all 12 flared skirts. Short skirts appeared vivid, while longer skirts conveyed ordinary and pure images. As the flare volume increased, the average score of the visual effect was higher than that of the visual image. The visual hierarchy according to the construction factors could be divided into visual image and visual effect, and the visual images formed three groups: 'A type' (large flare volume, short length), 'H type' (small flare volume, short length), and 'X type' (large flare volume, long length).
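
For readers unfamiliar with the method, a minimal sketch of the hierarchical cluster analysis behind the dendrogram, using made-up adjective-frequency profiles; the data and cluster count are illustrative, not the study's:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical adjective-frequency profiles (rows: skirt stimuli,
# columns: frequencies of selected image-expression words).
ratings = np.array([
    [5, 1, 4, 2],   # e.g. large flare, short length
    [4, 2, 4, 1],
    [1, 5, 2, 4],   # e.g. small flare, short length
    [2, 4, 1, 5],
])

Z = linkage(ratings, method="ward")            # agglomerative clustering
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)  # cluster membership per stimulus
# scipy.cluster.hierarchy.dendrogram(Z) draws the tree from which
# the image hierarchy is read off.
```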

A Study on the Visual Odometer using Ground Feature Point (지면 특징점을 이용한 영상 주행기록계에 관한 연구)

  • Lee, Yoon-Sub;Noh, Gyung-Gon;Kim, Jin-Geol
    • Journal of the Korean Society for Precision Engineering / v.28 no.3 / pp.330-338 / 2011
  • Odometry is a critical factor in estimating a robot's location. In wheeled mobile robots, odometry can be computed from encoder readings; however, encoder-based localization is inaccurate because of errors caused by wheel misalignment or slip. In general, visual odometry has been used to compensate for the robot's kinematic errors. Conventional visual odometry, however, requires a kinematic analysis of the particular robot system to compensate for such errors, so it cannot easily be transferred to other types of robot. In this paper, a novel visual odometry method is proposed that employs only a single camera facing the ground, mounted at the center of the bottom of the mobile robot. Feature points of the ground image are extracted using a median filter and a color contrast filter. The linear and angular vectors of the mobile robot are then calculated by matching feature points, and the odometry is computed from these vectors. The proposed method is verified through driving tests comparing encoder odometry with the new visual odometry.
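
The step from matched ground points to linear and angular motion can be sketched with the standard Kabsch/Procrustes estimate of a planar rigid transform; this is a generic reconstruction, not the authors' exact computation:

```python
import numpy as np

def rigid_motion_2d(p_prev, p_curr):
    """Recover planar rotation and translation from matched ground
    feature points (two N x 2 arrays) via the Kabsch method."""
    c0, c1 = p_prev.mean(axis=0), p_curr.mean(axis=0)
    H = (p_prev - c0).T @ (p_curr - c1)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c1 - R @ c0
    theta = np.arctan2(R[1, 0], R[0, 0])
    return theta, t                   # angular and linear increments

# Integrating (theta, t) frame to frame yields the visual odometry track.
```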

Analysis of learning effects using audio-visual manual of SWAT (SWAT의 시청각 매뉴얼을 통한 학습 효과 분석)

  • Lee, Ju-Yeong;Kim, Tea-Ho;Ryu, Ji-Chul;Kang, Hyun-Woo;Kum, Dong-Hyuk;Woo, Won-Hee;Jang, Chun-Hwa;Choi, Jong-Dae;Lim, Kyoung-Jae
    • Korean Journal of Agricultural Science / v.38 no.4 / pp.731-737 / 2011
  • In modern society, GIS-based decision support systems have been used to evaluate environmental issues and changes, thanks to the spatial and temporal analysis capabilities of GIS. Without a proper manual, however, such systems cannot achieve their desired goals. In this study, an audio-visual SWAT tutorial system was developed and its effectiveness in learning the SWAT model was evaluated. Learning effects were analyzed through an in-class demonstration and a survey. The survey was conducted with third-grade students, with and without the audio-visual materials, using 30 questionnaires composed of 3 items on respondent background, 5 items on the effects of audio-visual materials, and 12 items on the effect of the manual on learning the model. The group without the audio-visual manual scored 2.98 out of 5, while the group with it scored 4.05 out of 5, indicating better content delivery with audio-visual learning. As shown in this study, audio-visual learning materials should be developed and used for various computer-based modeling systems.

Robust Person Identification Using Optimal Reliability in Audio-Visual Information Fusion

  • Tariquzzaman, Md.;Kim, Jin-Young;Na, Seung-You;Choi, Seung-Ho
    • The Journal of the Acoustical Society of Korea / v.28 no.3E / pp.109-117 / 2009
  • Reliable identity recognition in real environments is a key issue in human-computer interaction (HCI). In this paper, we present a robust person identification system based on a score-level optimal reliability measure for audio-visual modalities. We propose an extension of the modified reliability function that introduces optimizing parameters for both the audio and visual modalities. To degrade the visual signals, JPEG compression was applied to the test images; to create a mismatch between the enrollment and test sessions, acoustic babble noise and artificial illumination were added to the test audio and visual signals, respectively. Local PCA was used on both modalities to reduce the dimension of the feature vectors, and a swarm intelligence algorithm, particle swarm optimization, was applied to tune the modified reliability function's optimizing parameters. The person identification experiments were performed on the VidTimit DB. Experimental results show that the proposed optimal reliability measures improved identification accuracy by 7.73% and 8.18% under changed illumination direction on the visual signal and added babble noise on the audio signal, respectively, compared with the best classifier in the fusion system, while maintaining the modality reliability statistics; this verifies the consistency of the proposed extension.
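
The core idea, weighting each modality's match scores by an estimated reliability before fusing, can be sketched as follows; the reliability values and normalization are illustrative, whereas the paper optimizes its own reliability function with PSO:

```python
import numpy as np

def fused_scores(s_audio, s_visual, r_audio, r_visual):
    """Reliability-weighted score-level fusion of two modalities."""
    w_a = r_audio / (r_audio + r_visual)
    return w_a * np.asarray(s_audio) + (1.0 - w_a) * np.asarray(s_visual)

# Match scores for three enrolled identities; the visual channel is
# degraded (e.g. JPEG compression), so it gets a lower reliability.
scores = fused_scores([0.2, 0.7, 0.1], [0.4, 0.3, 0.3],
                      r_audio=0.9, r_visual=0.4)
print(scores.argmax())   # index of the identified person
```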

A Novel Feature Map Generation and Integration Method for Attention Based Visual Information Processing System using Disparity of a Stereo Pair of Images (주의 기반 시각정보처리체계 시스템 구현을 위한 스테레오 영상의 변위도를 이용한 새로운 특징맵 구성 및 통합 방법)

  • Park, Min-Chul;Cheoi, Kyung-Joo
    • The KIPS Transactions: Part B / v.17B no.1 / pp.55-62 / 2010
  • The human visual attention system has a remarkable ability to interpret complex scenes with ease, selecting or focusing on a small region of the visual field without scanning the whole image. In this paper, a novel feature map generation and integration method for an attention-based visual information processing system is proposed. In our approach, the depth information obtained from a stereo pair of images is exploited as a spatial visual feature to form a set of topographic feature maps. Comparative experiments show that the correct detection rate of visual attention regions improves when the depth feature is used.
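
A minimal sketch of using stereo disparity as an additional feature map in a saliency-style integration, via OpenCV block matching; the file names, the intensity-contrast feature, and the equal weights are assumptions, not the paper's scheme:

```python
import cv2
import numpy as np

# Rectified grayscale stereo pair (hypothetical file names)
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

def norm(m):
    """Scale a feature map to [0, 1] before integration."""
    return cv2.normalize(m, None, 0.0, 1.0, cv2.NORM_MINMAX)

# One conventional feature map (intensity contrast) plus the depth map
intensity_map = norm(np.abs(cv2.Laplacian(left.astype(np.float32), cv2.CV_32F)))
depth_map = norm(disparity)
saliency = 0.5 * intensity_map + 0.5 * depth_map   # integrated attention map
```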

Implementation of Remote Image Surveillance for Mobile Robot Platform based on Embedded Processor (주행용 로봇 플랫폼을 위한 임베디드 프로세서 기반 원격영상감시 시스템 구현)

  • Han, Kyong-Ho;Yun, Hyo-Won
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers / v.23 no.1 / pp.125-131 / 2009
  • In this paper, we propose a remote visual monitoring system on a mobile robot platform. The proposed system is composed of an ARM9-core PXA255 processor, a micro CMOS camera, and a wireless network; the captured images are transmitted over an 802.11b/g wireless LAN (WLAN) for remote visual monitoring. Robot maneuvering commands are sent from the host via the WLAN, and fixed-size images of 640×480 or 320×240 pixels are transmitted to the host at 3 to 10 frames per second. The experimental system was implemented on Linux, tested in remote visual monitoring operation, and verified against the proposed objectives.
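
In the same spirit, the robot-side frame streamer can be sketched in a few lines: grab a frame, JPEG-encode it, and send it length-prefixed over TCP. The address, port, and resolution are placeholders; the original system's wire protocol is not specified in the abstract.

```python
import socket
import struct
import cv2

HOST, PORT = "192.168.0.10", 5000        # hypothetical monitoring host

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)   # 320x240, the paper's low-res mode
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)

with socket.create_connection((HOST, PORT)) as sock:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        ok, jpeg = cv2.imencode(".jpg", frame)
        if not ok:
            continue
        data = jpeg.tobytes()
        # Length-prefixed framing lets the receiver split the byte stream
        sock.sendall(struct.pack(">I", len(data)) + data)
```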