• Title/Abstract/Keyword: mobile vision system

290 search results (processing time: 0.028 s)

비전 시스템을 이용한 이동로봇 Self-positioning과 VRML과의 영상오버레이 (Self-Positioning of a Mobile Robot using a Vision System and Image Overlay with VRML)

  • 권방현;정길도
    • 대한전기학회:학술대회논문집 / 대한전기학회 2005년도 심포지엄 논문집 정보 및 제어부문 / pp.258-260 / 2005
  • We describe a method for localizing a mobile robot in its working environment using a vision system and VRML. The robot identifies landmarks in the environment and carries out self-positioning. Image processing and a neural-network pattern-matching technique are employed to recognize landmarks placed in the robot's working environment. Self-positioning with the vision system is based on a well-known localization algorithm. After self-positioning, the 2D scene is overlaid with the VRML scene. This paper describes how the self-positioning is realized, shows the result of overlaying the 2D scene with the VRML scene, and discusses the advantages expected from overlapping the two scenes.

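The abstract above does not name the localization algorithm it builds on. As a loose illustration of landmark-based self-positioning, the sketch below estimates a 2D robot position by least-squares trilateration from ranges to landmarks at known positions; the landmark coordinates and ranges are made-up values, and the actual paper may use bearings or a different formulation.

    import numpy as np

    def trilaterate(landmarks, ranges):
        """Least-squares 2D position fix from ranges to known landmarks.

        Linearizes ||p - L_i||^2 = r_i^2 by subtracting the first landmark's
        equation, which yields a linear system A p = b.
        """
        L = np.asarray(landmarks, dtype=float)   # shape (N, 2)
        r = np.asarray(ranges, dtype=float)      # shape (N,)
        A = 2.0 * (L[1:] - L[0])
        b = (r[0]**2 - r[1:]**2) + np.sum(L[1:]**2 - L[0]**2, axis=1)
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos

    # Hypothetical landmarks and noise-free ranges for illustration.
    landmarks = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
    true_pos = np.array([1.5, 1.0])
    ranges = [np.linalg.norm(true_pos - np.array(l)) for l in landmarks]
    print(trilaterate(landmarks, ranges))   # approximately [1.5, 1.0]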

능동 전방향 거리 측정 시스템을 이용한 이동로봇의 위치 추정 (Localization of Mobile Robot Using Active Omni-directional Ranging System)

  • 류지형;김진원;이수영
    • 제어로봇시스템학회논문지 / Vol. 14, No. 5 / pp.483-488 / 2008
  • An active omni-directional ranging system that combines an omni-directional vision sensor with structured light has many advantages over conventional ranging systems: robustness against external illumination noise, thanks to the laser structured light, and computational efficiency, because a single shot image contains 360° of environment information from the omni-directional vision. The omni-directional range data represent a local distance map at a certain position in the workspace. In this paper, we propose a matching algorithm between the local distance map and a given global map database, thereby localizing a mobile robot in the global workspace. Since the global map database generally consists of line segments representing edges of environment objects, the matching algorithm is based on the relative position and orientation of line segments in the local map and the global map. The effectiveness of the proposed omni-directional ranging system and the matching algorithm is verified through experiments.
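As a rough sketch of the line-segment matching idea described above (relative position and orientation of local versus global segments), the following code scores candidate robot poses by counting local segments that, once transformed into the global frame, have a similarly oriented global segment nearby. The maps, tolerances, and candidate poses are invented for illustration; the paper's actual matching criteria are not specified in the abstract.

    import numpy as np

    def seg_angle(seg):
        (x1, y1), (x2, y2) = seg
        return np.arctan2(y2 - y1, x2 - x1) % np.pi   # undirected line orientation

    def transform(seg, pose):
        """Transform a local segment into the global frame for robot pose (x, y, theta)."""
        x, y, th = pose
        R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
        return [tuple(R @ np.asarray(p) + np.array([x, y])) for p in seg]

    def match_score(local_segs, global_segs, pose, ang_tol=0.1, dist_tol=0.3):
        """Count local segments that, after transformation, have a global segment
        with similar orientation and a nearby midpoint."""
        score = 0
        for ls in local_segs:
            g_ls = transform(ls, pose)
            mid_l = np.mean(g_ls, axis=0)
            for gs in global_segs:
                mid_g = np.mean(np.asarray(gs, dtype=float), axis=0)
                d_ang = abs(seg_angle(g_ls) - seg_angle(gs))
                d_ang = min(d_ang, np.pi - d_ang)
                if d_ang < ang_tol and np.linalg.norm(mid_l - mid_g) < dist_tol:
                    score += 1
                    break
        return score

    # Pick the candidate pose with the highest score (toy global map of two walls).
    global_map = [[(0, 0), (5, 0)], [(5, 0), (5, 4)]]
    local_map = [[(0, -1), (5, -1)]]          # robot at (0, 1) sees the bottom wall
    candidates = [(0, 1, 0.0), (2, 2, 0.0)]
    best = max(candidates, key=lambda p: match_score(local_map, global_map, p))
    print(best)                                # (0, 1, 0.0)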

RFID 기반 재고조사용 이동로봇 시스템의 설계 (Mobile Robot System Design for RFID-based Inventory Checking)

  • 손민혁;도용태
    • 로봇학회논문지 / Vol. 6, No. 1 / pp.49-57 / 2011
  • In many industries, accurate and quick checking of goods in storage is of great importance. Most of today's inventory checking is based on bar-code scanning, but the relative position between a bar code and an optical scanner must be maintained at close distance and at a proper angle for successful scanning. This requirement makes it difficult to fully automate inventory information/control systems. The use of RFID technology can be a solution to this problem. The mobile robot presented in this paper is equipped with an RFID tag scanning system that automates the otherwise manual or semi-automatic inventory checking process. We designed the robot system with a quite practical approach, and the developed system is close to the commercialization stage. In our experiments, the robot collected information on goods stacked on shelves autonomously, without any failure, and maintained the corresponding database while it navigated predefined paths between the shelves using vision.
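A minimal sketch of the inventory-checking step, assuming the robot simply compares the RFID tags read at each shelf against a database of expected tag locations; the tag IDs, shelf names, and data structures here are hypothetical and not taken from the paper.

    # Hypothetical inventory database: tag id -> expected shelf.
    expected = {
        "TAG001": "shelf-A", "TAG002": "shelf-A", "TAG003": "shelf-B",
    }

    def check_shelf(shelf, read_tags, expected_db):
        """Compare the tags read at one shelf against the inventory database."""
        should_be_here = {t for t, s in expected_db.items() if s == shelf}
        found = should_be_here & set(read_tags)
        missing = should_be_here - set(read_tags)
        misplaced = set(read_tags) - should_be_here
        return found, missing, misplaced

    # One stop on the robot's predefined path: the reader returns a list of tag ids.
    found, missing, misplaced = check_shelf("shelf-A", ["TAG001", "TAG003"], expected)
    print(found, missing, misplaced)   # {'TAG001'} {'TAG002'} {'TAG003'}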

비전시스템에 의한 열간 선재 단면 측정 (Measurement of Hot Wire-Rod Cross-Section by Vision System)

  • 박중조;탁영봉
    • 제어로봇시스템학회논문지 / Vol. 6, No. 12 / pp.1106-1112 / 2000
  • In this paper, we present a vision system that measures the cross-section of a hot wire-rod in a steel plant. We developed a mobile vision system capable of accurate measurement that is robust to vibration and jolting while moving. Our system uses green laser light sources and CCD cameras as sensors: laser sheet beams form a cross-section contour on the surface of the hot wire-rod, and the light reflected from the wire-rod is imaged on the CCD cameras. We use four lasers and four cameras to obtain an image of the complete cross-section contour without occluded regions. We also perform camera calibration to obtain each camera's physical parameters using a single calibration pattern sheet. In our measuring algorithm, the distorted images from the four cameras are corrected using the camera calibration information and combined to generate an image containing the complete cross-section contour of the wire-rod. Then, from this image, the cross-section contour of the wire-rod is extracted by preprocessing and segmentation, and its height, width, and area are measured.

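The following is a simplified sketch of the final measurement step described above, assuming the combined, distortion-corrected image has already been produced and the cross-section appears as a bright region that can be segmented by a fixed threshold; the threshold, scale factor, and synthetic image are illustrative only.

    import numpy as np

    def measure_cross_section(image, threshold=128, mm_per_pixel=0.1):
        """Measure height, width, and area of the bright cross-section region
        in a combined, distortion-corrected image."""
        mask = np.asarray(image) > threshold          # crude segmentation
        ys, xs = np.nonzero(mask)
        if ys.size == 0:
            return None
        height = (ys.max() - ys.min() + 1) * mm_per_pixel
        width = (xs.max() - xs.min() + 1) * mm_per_pixel
        area = mask.sum() * mm_per_pixel ** 2
        return height, width, area

    # Synthetic example: a filled 20 x 30 pixel rectangle stands in for the cross-section.
    img = np.zeros((100, 100), dtype=np.uint8)
    img[40:60, 30:60] = 255
    print(measure_cross_section(img))   # (2.0, 3.0, 6.0) at 0.1 mm/pixel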

스테레오 비전에서 비용 축적 알고리즘의 비교 분석 (Comparative Analysis of Cost Aggregation Algorithms in Stereo Vision)

  • 이용환;김영섭
    • 반도체디스플레이기술학회지 / Vol. 15, No. 1 / pp.47-51 / 2016
  • The human visual system infers 3D structure from the stereo disparity between stereoscopic images, and stereo vision has recently been adopted in consumer electronics, which has led to much research in this application field. Basically, a stereo vision system consists of four processes: cost computation, cost aggregation, disparity calculation, and disparity refinement. In this paper, we present and evaluate various existing methods, focusing on cost aggregation for stereo vision systems, to comparatively analyze the performance of their algorithms for a given set of resources. Experiments show that Normalized Cross Correlation and Zero-Mean Normalized Cross Correlation provide higher accuracy, but they are computationally heavy for embedded real-time systems. Sum of Absolute Differences and Sum of Squared Differences are more suitable choices for embedded systems, but they require improvement before being applied to real-world systems.
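To make the compared matching costs concrete, the sketch below implements SAD, SSD, and ZNCC on image windows and picks the disparity with the minimum SAD for one pixel of a synthetic rectified pair. Window size, disparity range, and the test images are arbitrary; the paper's exact aggregation schemes and benchmark data are not reproduced here.

    import numpy as np

    def sad(a, b):                      # Sum of Absolute Differences
        return np.abs(a - b).sum()

    def ssd(a, b):                      # Sum of Squared Differences
        return ((a - b) ** 2).sum()

    def zncc(a, b):                     # Zero-Mean Normalized Cross Correlation
        a0, b0 = a - a.mean(), b - b.mean()
        return (a0 * b0).sum() / (np.linalg.norm(a0) * np.linalg.norm(b0) + 1e-12)

    def best_disparity(left, right, y, x, win=3, max_disp=16):
        """Pick the disparity whose right-image window best matches the left window
        (minimum SAD here; ssd could be swapped in, while zncc would be maximized)."""
        h = win // 2
        ref = left[y-h:y+h+1, x-h:x+h+1].astype(float)
        costs = []
        for d in range(max_disp):
            if x - d - h < 0:
                break
            cand = right[y-h:y+h+1, x-d-h:x-d+h+1].astype(float)
            costs.append(sad(ref, cand))
        return int(np.argmin(costs))

    # Synthetic rectified pair: the right image is the left shifted by 4 pixels.
    rng = np.random.default_rng(0)
    left = rng.integers(0, 255, (32, 64)).astype(float)
    right = np.roll(left, -4, axis=1)
    print(best_disparity(left, right, y=16, x=40))   # expect 4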

전방향 구동 로봇에서의 비젼을 이용한 이동 물체의 추적 (Moving Target Tracking using Vision System for an Omni-directional Wheel Robot)

  • 김산;김동환
    • 제어로봇시스템학회논문지 / Vol. 14, No. 10 / pp.1053-1061 / 2008
  • In this paper, moving target tracking using binocular vision for an omni-directional mobile robot is addressed. In the binocular vision, three-dimensional information on the target is extracted by vision processes including calibration, image correspondence, and 3D reconstruction. The robot controller uses SPI (serial peripheral interface) to communicate effectively between the robot master controller and the wheel controllers.
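As a sketch of the 3D reconstruction step mentioned above, the code below recovers a 3D point from a rectified stereo correspondence using the standard pinhole relations Z = f·B/d, X = (x - cx)·Z/f, Y = (y - cy)·Z/f. The calibration values are hypothetical; the paper's actual calibration and correspondence methods are not described in the abstract.

    import numpy as np

    def reconstruct_point(xl, xr, y, f, baseline, cx, cy):
        """3D point from a rectified stereo correspondence (pinhole model).

        xl, xr : column of the matched point in the left/right image (pixels)
        f      : focal length in pixels; baseline in meters
        """
        d = xl - xr                      # disparity in pixels
        if d <= 0:
            raise ValueError("non-positive disparity")
        Z = f * baseline / d             # depth from disparity
        X = (xl - cx) * Z / f
        Y = (y - cy) * Z / f
        return np.array([X, Y, Z])

    # Hypothetical calibration values for illustration only.
    print(reconstruct_point(xl=420, xr=380, y=260, f=800.0, baseline=0.12,
                            cx=320.0, cy=240.0))   # [0.3, 0.06, 2.4] meters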

센서 합성을 이용한 자율이동로봇의 주행 알고리즘 설계 (Design of Navigation Algorithm for Mobile Robot using Sensor fusion)

  • 김정훈;김영중;임묘택
    • 대한전기학회논문지:시스템및제어부문D / Vol. 53, No. 10 / pp.703-713 / 2004
  • This paper presents a new obstacle avoidance method that combines vision and sonar sensors, and a navigation algorithm is also proposed. Sonar sensors provide poor information because the angular resolution of each sonar sensor is not exact, so they are not suitable for detecting the relative direction of obstacles. In addition, it is not easy to detect obstacles with vision sensors alone because of image disturbances. In this paper, a new obstacle-direction measurement method is proposed that combines sonar sensors, for exact distance information, with vision sensors, for richer information. A modified splitting/merging algorithm is proposed; it is more robust to image disturbances than the edge-detection algorithm and is efficient for grouping obstacles. To verify the proposed algorithm, we compare it with the edge-detection algorithm through experiments. The obstacle direction and the relative distance are used as the inputs of a fuzzy controller. We design angular velocity controllers for obstacle avoidance and for navigating along the corridor center, respectively. To verify the stability and effectiveness of the proposed method, it is applied to a vision- and sonar-based mobile robot navigation system.
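A very small Mamdani-style sketch of a fuzzy angular-velocity controller of the kind described above, taking obstacle direction and distance as inputs. The membership functions, rule base, and output values are invented for illustration and are not the paper's actual design.

    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

    def avoid_controller(obstacle_dir, obstacle_dist):
        """Tiny rule base: steer away harder when the obstacle is near and ahead.
        obstacle_dir in radians (0 = straight ahead, positive = left),
        obstacle_dist in meters; returns angular velocity in rad/s."""
        near = tri(obstacle_dist, 0.0, 0.0, 1.0)
        far = tri(obstacle_dist, 0.5, 2.0, 2.0)
        left = tri(obstacle_dir, 0.0, 0.8, 1.6)
        ahead = tri(obstacle_dir, -0.5, 0.0, 0.5)
        right = tri(obstacle_dir, -1.6, -0.8, 0.0)

        # Rules: (firing strength, output angular velocity); defuzzify by weighted mean.
        rules = [
            (min(near, left), -1.0),    # near & left  -> turn right
            (min(near, right), +1.0),   # near & right -> turn left
            (min(near, ahead), +1.0),   # near & dead ahead -> break the tie by turning left
            (far, 0.0),                 # far          -> go straight
        ]
        w = sum(s for s, _ in rules)
        return sum(s * out for s, out in rules) / (w + 1e-9)

    # Obstacle 0.4 m away, slightly to the left: controller turns right.
    print(avoid_controller(obstacle_dir=0.6, obstacle_dist=0.4))   # about -1.0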

Vision Sensor and Ultrasonic Sensor Fusion Using Neural Network

  • Baek, Sang-Hoon;Oh, Se-Young
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2004년도 ICCAS / pp.668-671 / 2004
  • This paper proposes a new method of fusing an ultrasonic sensor and a vision sensor at the sensor level. In general, a vision system finds the edges of objects, while an ultrasonic system finds the absolute distance between the robot and an object. The method therefore integrates these two different types of data, and in the end the system produces a complete output for robot control. The paper proposes not only integrating different kinds of data but also fusing the information received from the different kinds of sensors. This method has the advantages that the algorithm is simple to implement and that the robot can be controlled in real time.

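A minimal sketch of sensor-level fusion with a neural network, assuming vision-derived edge features and an ultrasonic range are simply concatenated and fed to a small multilayer perceptron. The feature layout, network size, and (untrained) weights are placeholders; the paper's actual network and training procedure are not given in the abstract.

    import numpy as np

    rng = np.random.default_rng(42)

    def mlp_forward(x, W1, b1, W2, b2):
        """Two-layer perceptron: fused sensor features in, control-relevant output out."""
        h = np.tanh(W1 @ x + b1)
        return np.tanh(W2 @ h + b2)

    # Hypothetical input: 4 vision features (e.g., edge statistics) + 1 ultrasonic range.
    vision_features = np.array([0.2, 0.7, 0.1, 0.4])
    ultrasonic_range = np.array([0.35])                       # meters, scaled as needed
    x = np.concatenate([vision_features, ultrasonic_range])   # sensor-level fusion

    # Randomly initialized weights stand in for a trained network.
    W1, b1 = rng.normal(size=(8, 5)), np.zeros(8)
    W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)
    print(mlp_forward(x, W1, b1, W2, b2))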

비젼시스템을 이용한 이동로봇의 이동물체 추적에 관한 연구 (A Study of the tracking of moving object of mobile robot using vision system)

  • 전재현;홍석교
    • 대한전기학회:학술대회논문집 / 대한전기학회 1999년도 하계학술대회 논문집 G / pp.3083-3085 / 1999
  • This paper presents an algorithm by which a mobile robot accurately tracks a moving object using information from a CCD camera mounted on the robot. Singular Value Decomposition is adopted to remove measurement noise from the raw CCD data. The mobile robot estimates the trajectory using a Kalman filter and tracks the path of the moving object with a servo motor. Computer simulation results show that the tracking system for the mobile robot is designed properly.

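As a sketch of the Kalman-filter trajectory estimation mentioned above, the code below runs a constant-velocity filter on noisy 2D position measurements such as those a CCD camera might provide. Noise levels, time step, and the simulated trajectory are arbitrary choices, not values from the paper.

    import numpy as np

    def kalman_step(x, P, z, dt=0.1, q=1e-3, r=1e-2):
        """One predict/update cycle of a constant-velocity Kalman filter.
        State x = [px, py, vx, vy]; measurement z = [px, py] from the camera."""
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], dtype=float)
        H = np.array([[1, 0, 0, 0],
                      [0, 1, 0, 0]], dtype=float)
        Q = q * np.eye(4)
        R = r * np.eye(2)

        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        return x, P

    rng = np.random.default_rng(1)
    x, P = np.zeros(4), np.eye(4)
    for t in range(50):                                # target moving at (1.0, 0.5) units/s
        true_pos = np.array([0.1 * t, 0.05 * t])
        z = true_pos + rng.normal(0.0, 0.05, 2)        # noisy camera measurement
        x, P = kalman_step(x, P, z)
    print(x)   # position/velocity estimate near [4.9, 2.45, 1.0, 0.5]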

Mobile 기기에 적합한 Vertex Shader 의 설계 및 구현 (A Design of a Vertex Shader for Mobile Devices)

  • 정형기;남기훈;이광엽;허현민;이병옥;이주석
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2005년도 추계종합학술대회 / pp.751-754 / 2005
  • In this paper, we designed a vertex shader for mobile devices. The proposed vertex shader is compatible with the OpenGL ARB and DirectX 8.0 Vertex Shader 1.1 specifications and is organized as a modified IEEE-754 24-bit floating-point SIMD architecture. All floating-point arithmetic units complete their operations in a single cycle at over 100 MHz. We built a vertex shader demo system with a Xilinx Virtex-II and obtained synthesis results confirming a size of 11M gates in TSMC 0.13 um at 115 MHz.

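The shader itself is hardware, but the core operation its SIMD floating-point units accelerate is the 4x4 matrix transform of a homogeneous vertex followed by the perspective divide. The sketch below shows that operation in plain numpy with a hypothetical projection matrix, purely to illustrate what the datapath computes.

    import numpy as np

    def transform_vertex(mvp, vertex):
        """Core vertex-shader operation: multiply a 4x4 model-view-projection
        matrix by a homogeneous vertex, then perform the perspective divide.
        Each of the four dot products maps naturally onto a 4-wide SIMD unit."""
        v = mvp @ np.append(vertex, 1.0)          # four dot products
        return v[:3] / v[3]                       # perspective divide

    # Hypothetical perspective projection (fov 90 deg, aspect 1, near 0.1, far 100).
    n, f_ = 0.1, 100.0
    proj = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, (f_ + n) / (n - f_), 2 * f_ * n / (n - f_)],
                     [0, 0, -1, 0]], dtype=float)
    print(transform_vertex(proj, np.array([0.5, 0.25, -2.0])))   # normalized device coords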