• Title/Abstract/Keywords: local vision

Search results: 378 (processing time: 0.228 s)

동적 Range 검출에 의한 원료 Pile 형상 관리 시스템 (Profile Management System of Material Piles by Dynamic Range Finding)

  • 안현식
    • 융합신호처리학회 학술대회논문집
    • /
    • 한국신호처리시스템학회 2000년도 하계종합학술대회논문집
    • /
    • pp.333-336
    • /
    • 2000
  • In this paper, a profile management system consisting of global and local range finders is presented for the automation of material pile handling. A global range finder detects range data of the front part of the material piles, and a profile map is obtained from a 3D profile detection algorithm. A local range finder attached to the side of the reclaimer's arm detects range data dynamically during the handling operation, and a local profile patch is acquired from the range data. A yard profile map manager constructs a map using the 3D profile from the global range finder and revises the map by replacing regions with the local profile patch obtained from the local range finder. The developed vision system was applied to a simulator, and the test results show that it is suitable for automating material handling.

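The map-revision step described above, in which the yard profile map manager overwrites part of the global map with a local profile patch, can be sketched as follows. This is a minimal illustration under assumed data shapes (2D lists of heights); the helper name `revise_profile_map` and the indexing convention are hypothetical, not from the paper.

```python
def revise_profile_map(global_map, patch, row0, col0):
    """Replace a region of the global height map with a local profile patch.

    global_map: 2D list of heights from the global range finder
    patch: 2D list of heights from the local range finder
    (row0, col0): top-left cell of the region the patch covers
    """
    revised = [row[:] for row in global_map]  # copy so the original map is kept
    for i, patch_row in enumerate(patch):
        for j, h in enumerate(patch_row):
            revised[row0 + i][col0 + j] = h   # local data overrides global data
    return revised
```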

자율 주행로봇을 위한 국부 경로계획 알고리즘 (A local path planning algorithm for free-ranging mobile robot)

  • 차영엽;권대갑
    • 한국정밀공학회지
    • /
    • Vol. 11, No. 4
    • /
    • pp.88-98
    • /
    • 1994
  • A new local path planning algorithm for free-ranging robots is proposed. Considering that a laser range finder has excellent angular and distance resolution, a simple local path planning algorithm is achieved by a directional weighting method for obtaining the heading direction of a mobile robot. The directional weighting method decides the heading direction of the mobile robot by estimating the attractive resultant force, obtained as the directional weighting function times the range data, and by testing whether the collision-free path and open-pathway conditions are satisfied. The effectiveness of the proposed local path planning algorithm is evaluated by computer simulation in a complex environment.

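The directional weighting idea above (weighting function times range data, with a collision-free check) can be sketched as below. The cosine weighting toward the goal direction and the `safety_range` threshold are illustrative assumptions; the paper's exact weighting function is not given here.

```python
import math

def choose_heading(ranges, goal_dir, safety_range=0.5):
    """Pick the heading whose weighted range measurement is largest,
    skipping directions that are too close to an obstacle.

    ranges: dict mapping candidate heading angle (rad) -> measured range
    goal_dir: desired direction toward the goal (rad)
    """
    best_dir, best_score = None, -math.inf
    for theta, r in ranges.items():
        if r < safety_range:          # collision-free condition fails
            continue
        weight = max(0.0, math.cos(theta - goal_dir))  # favor the goal direction
        score = weight * r            # weighting function times range data
        if score > best_score:
            best_dir, best_score = theta, score
    return best_dir
```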

실감 만남을 위한 네트워크 기반 Visual Agent Platform 설계 (The Design of a Network based Visual Agent Platform for Tangible Space)

  • 김현기;최익;유범재
    • 대한전기학회:학술대회논문집
    • /
    • 대한전기학회 2006년도 심포지엄 논문집 정보 및 제어부문
    • /
    • pp.258-260
    • /
    • 2006
  • In this paper, we designed an embedded system that will perform a primary role in the implementation of Tangible Space. This hardware provides image capture through a camera interface, image processing, and transmission of image information over a LAN (local area network) or WLAN (wireless local area network). We define this hardware as a network-based Visual Agent Platform for Tangible Space.


실감 만남을 위한 네트워크 기반 Visual Agent Platform 개발 (The Development of a Network based Visual Agent Platform for Tangible Space)

  • 김현기;최익;유범재
    • 대한전기학회:학술대회논문집
    • /
    • 대한전기학회 2007년도 심포지엄 논문집 정보 및 제어부문
    • /
    • pp.172-174
    • /
    • 2007
  • In this paper, we designed an embedded system that will perform a primary role in the implementation of Tangible Space. This hardware provides image capture through a camera interface, image processing, and transmission of image information over a LAN (local area network) or WLAN (wireless local area network). We define this hardware as a network-based Visual Agent Platform for Tangible Space. This Visual Agent Platform includes RTLinux and CORBA as its software components.


시력교정 과정에서 착안된 새로운 메타휴리스틱 최적화 알고리즘의 개발: Vision Correction Algorithm (Development of the new meta-heuristic optimization algorithm inspired by a vision correction procedure: Vision Correction Algorithm)

  • 이의훈;유도근;최영환;김중훈
    • 한국산학기술학회논문지
    • /
    • Vol. 17, No. 3
    • /
    • pp.117-126
    • /
    • 2016
  • In this study, a new meta-heuristic optimization algorithm, the Vision Correction Algorithm (VCA), inspired by the optical characteristics of eyeglasses, was developed. VCA applies the optometry and correction procedures of ophthalmic optics to the search for an optimal solution, performing optimization through the stages of myopia/hyperopia correction, brightness adjustment, compression, and astigmatism correction. Unlike existing meta-heuristic algorithms, the proposed VCA automatically adjusts the probabilities of global and local search, as well as the direction of global search, based on the optimization results accumulated so far. The proposed method was applied to representative optimization problems in mathematics and engineering, and the results were compared with those of existing algorithms.
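The key property claimed in the abstract above, adapting the global/local search probability from accumulated results, can be illustrated with a highly simplified sketch. The update rules, step sizes, and clamping bounds below are illustrative assumptions and do not reproduce VCA's actual myopia/hyperopia, brightness, compression, and astigmatism operators.

```python
import random

def vision_correction_sketch(f, lo, hi, iters=500, seed=0):
    """Minimize f on [lo, hi], alternating global and local search and
    adapting the global-search probability from the search history."""
    rng = random.Random(seed)
    best_x = rng.uniform(lo, hi)
    best_f = f(best_x)
    p_global = 0.5                       # probability of a global move
    for _ in range(iters):
        if rng.random() < p_global:      # global search over the whole range
            x = rng.uniform(lo, hi)
        else:                            # local search around the best so far
            x = min(hi, max(lo, best_x + rng.gauss(0.0, 0.1 * (hi - lo))))
        fx = f(x)
        if fx < best_f:                  # improvement: exploit near the best
            best_x, best_f = x, fx
            p_global = max(0.1, p_global * 0.99)
        else:                            # stagnation: explore more widely
            p_global = min(0.9, p_global * 1.01)
    return best_x, best_f
```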

비전과 퍼지 규칙을 이용한 이동로봇의 경로계획과 장애물회피 (Path Planning and Obstacle Avoidance for Mobile Robot with Vision System Using Fuzzy Rules)

  • 배봉규;채양범;이원창;강근택
    • 한국지능시스템학회논문지
    • /
    • Vol. 11, No. 6
    • /
    • pp.470-476
    • /
    • 2001
  • This paper proposes a new algorithm for path planning and obstacle avoidance of a mobile robot with a vision system operating in an unknown environment. A distance-change-rate technique was applied for path planning toward the goal point and for obstacle avoidance, and the Sobel operator was used to extract obstacle contours. Fuzzy rules were used for path setting and obstacle avoidance to improve the autonomy of the mobile robot. Computer simulations showed that the proposed algorithm outperforms the conventional vector field method. A small mobile robot was also built to verify the algorithm's practicality, and experiments with the proposed algorithm on board confirmed good performance even in complex surroundings.

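The Sobel operator mentioned above for extracting obstacle contours can be sketched in a few lines. This is the standard 3x3 Sobel gradient-magnitude computation on a grayscale image given as a 2D list; border pixels are left at zero for simplicity.

```python
def sobel_magnitude(img):
    """Return the Sobel gradient magnitude for each interior pixel of img."""
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal Sobel kernel
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical Sobel kernel
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[i][j] * img[y + i - 1][x + j - 1]
                     for i in range(3) for j in range(3))
            gy = sum(gy_k[i][j] * img[y + i - 1][x + j - 1]
                     for i in range(3) for j in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5   # edge strength
    return out
```

Thresholding the resulting magnitudes yields the obstacle contour pixels used by the fuzzy path-setting rules.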

정적 및 동적 range 검출에 의한 원료 처리 자동화용 vision 시스템 (A vision system for autonomous material handling by static and dynamic range finding)

  • 안현식;최진태;이관희;신기태;하영호
    • 전자공학회논문지S
    • /
    • Vol. 34S, No. 10
    • /
    • pp.59-70
    • /
    • 1997
  • Until now, considerable progress has been made in applying range finding techniques that perform direct 3-D measurement of an object. However, the method has seen little use in material handling applications. We present a range finding vision system consisting of static and dynamic range finders to automate a reclaimer used for material handling. A static range finder detects range data of the front part of the material piles, and a height map is obtained from the proposed image processing algorithm. The height map is used to calculate the optimal job path and the features required for the material handling function. A dynamic range finder attached to the side of the reclaimer's arm detects changes in the local properties of the material during handling, which are used for avoiding collisions and detecting the ending point for changing direction. The developed vision system was applied to a 1/20-scale simulator, and the test results show that it is suitable for automating material handling.

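The first stage above, turning a static range-finder scan into a height map, can be sketched with basic trigonometry. The scan format (angle measured downward from horizontal, one range per angle) and the `sensor_height` parameter are assumptions for illustration, not the paper's calibration model.

```python
import math

def scan_to_heights(scan, sensor_height):
    """Convert one range-finder scan into a height profile.

    scan: list of (angle_rad, range) pairs, angle measured downward
          from the horizontal
    sensor_height: mounting height of the range finder above the yard floor
    """
    profile = []
    for angle, r in scan:
        drop = r * math.sin(angle)            # vertical distance below sensor
        profile.append(sensor_height - drop)  # pile height above the floor
    return profile
```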

Multi-robot Mapping Using Omnidirectional-Vision SLAM Based on Fisheye Images

  • Choi, Yun-Won;Kwon, Kee-Koo;Lee, Soo-In;Choi, Jeong-Won;Lee, Suk-Gyu
    • ETRI Journal
    • /
    • Vol. 36, No. 6
    • /
    • pp.913-923
    • /
    • 2014
  • This paper proposes a global mapping algorithm for multiple robots from an omnidirectional-vision simultaneous localization and mapping (SLAM) approach based on an object extraction method using Lucas-Kanade optical flow motion detection and images obtained through fisheye lenses mounted on robots. The multi-robot mapping algorithm draws a global map by using map data obtained from all of the individual robots. Global mapping takes a long time to process because it exchanges map data from individual robots while searching all areas. An omnidirectional image sensor has many advantages for object detection and mapping because it can measure all information around a robot simultaneously. The computational load of the correction algorithm is reduced relative to existing methods by correcting only the object's feature points. The proposed algorithm has two steps: first, a local map is created based on an omnidirectional-vision SLAM approach for individual robots. Second, a global map is generated by merging individual maps from multiple robots. The reliability of the proposed mapping algorithm is verified through a comparison of maps based on the proposed algorithm and real maps.
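The second step above, merging per-robot local maps into one global map, can be sketched as below. The grid representation, the shared frame, and the max-occupancy conflict rule are assumptions for illustration; the paper's merging procedure is not reproduced here.

```python
UNKNOWN = -1  # cell value for "never observed by this robot"

def merge_maps(maps):
    """Merge per-robot occupancy grids (same size, same frame assumed)
    into one global map. A cell observed by any robot overrides UNKNOWN;
    conflicting observations keep the maximum occupancy value."""
    h, w = len(maps[0]), len(maps[0][0])
    merged = [[UNKNOWN] * w for _ in range(h)]
    for m in maps:
        for y in range(h):
            for x in range(w):
                if m[y][x] != UNKNOWN:
                    merged[y][x] = max(merged[y][x], m[y][x])
    return merged
```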

Real-time geometry identification of moving ships by computer vision techniques in bridge area

  • Li, Shunlong;Guo, Yapeng;Xu, Yang;Li, Zhonglong
    • Smart Structures and Systems
    • /
    • Vol. 23, No. 4
    • /
    • pp.359-371
    • /
    • 2019
  • As part of a structural health monitoring system, the relative geometric relationship between a ship and bridge has been recognized as important for bridge authorities and ship owners to avoid ship-bridge collision. This study proposes a novel computer vision method for the real-time geometric parameter identification of moving ships based on a single shot multibox detector (SSD) by using transfer learning techniques and monocular vision. The identification framework consists of ship detection (coarse scale) and geometric parameter calculation (fine scale) modules. For the ship detection, the SSD, which is a deep learning algorithm, was employed and fine-tuned by ship image samples downloaded from the Internet to obtain the rectangle regions of interest in the coarse scale. Subsequently, for the geometric parameter calculation, an accurate ship contour is created using morphological operations within the saturation channel in hue, saturation, and value color space. Furthermore, a local coordinate system was constructed using projective geometry transformation to calculate the geometric parameters of ships, such as width, length, height, localization, and velocity. The application of the proposed method to in situ video images, obtained from cameras set on the girder of the Wuhan Yangtze River Bridge above the shipping channel, confirmed the efficiency, accuracy, and effectiveness of the proposed method.
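The projective-geometry step above, mapping image pixels onto the shipping-channel plane, can be sketched with a planar homography. The 3x3 matrix `H` is assumed to come from a prior camera calibration; the function below only shows the homogeneous-coordinate transform itself.

```python
def pixel_to_plane(H, u, v):
    """Map an image pixel (u, v) onto the water plane with a 3x3
    homography H, given as a nested list of floats."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w  # normalize homogeneous coordinates
```

Applying this transform to ship-contour pixels in consecutive frames gives plane coordinates from which width, length, localization, and velocity can be derived.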

카메라-라이다 센서 융합을 통한 VRU 분류 및 추적 알고리즘 개발 (Vision and Lidar Sensor Fusion for VRU Classification and Tracking in the Urban Environment)

  • 김유진;이호준;이경수
    • 자동차안전학회지
    • /
    • Vol. 13, No. 4
    • /
    • pp.7-13
    • /
    • 2021
  • This paper presents a vulnerable road user (VRU) classification and tracking algorithm using a vision and LiDAR sensor fusion method for urban autonomous driving. Classification and tracking of vulnerable road users such as pedestrians, bicycles, and motorcycles are essential for autonomous driving in complex urban environments. In this paper, a real-time object image detection algorithm called YOLO and an object tracking algorithm based on the LiDAR point cloud are fused at a high level. The proposed algorithm consists of four parts. First, the object bounding boxes in pixel coordinates, obtained from YOLO, are transformed into the local coordinates of the subject vehicle using a homography matrix. Second, the LiDAR point cloud is clustered based on Euclidean distance, and the clusters are associated using GNN. In addition, the states of the clusters, including position, heading angle, velocity, and acceleration, are estimated in real time using a geometric model free approach (GMFA). Finally, each LiDAR track is matched with a vision track using the angle information of the transformed vision track and assigned a classification ID. The proposed fusion algorithm is evaluated via real-vehicle tests in an urban environment.
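The second part above, clustering the LiDAR point cloud by Euclidean distance, can be sketched as a simple transitive distance-threshold grouping. The 2D point format and the `eps` threshold are hypothetical; the paper's clustering parameters are not given here.

```python
def euclidean_cluster(points, eps=1.0):
    """Group 2D points so that points closer than eps, directly or
    transitively, share a cluster (single-linkage style)."""
    clusters = []
    unassigned = list(points)
    while unassigned:
        seed = unassigned.pop()
        cluster, frontier = [seed], [seed]
        while frontier:                    # grow the cluster transitively
            px, py = frontier.pop()
            near = [q for q in unassigned
                    if (q[0] - px) ** 2 + (q[1] - py) ** 2 <= eps ** 2]
            for q in near:
                unassigned.remove(q)
            cluster.extend(near)
            frontier.extend(near)
        clusters.append(cluster)
    return clusters
```

Each resulting cluster would then be tracked (position, heading, velocity, acceleration) and matched to a vision track for classification.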