• Title/Abstract/Keywords: Vision-based

Search results: 3,438 entries

Study on the Target Tracking of a Mobile Robot Using Active Stereo-Vision System

  • 이희명;이수희;이병룡;양순용;안경관
    • Proceedings of the Korean Society for Precision Engineering Conference / Proceedings of the KSPE 2003 Spring Conference / pp.915-919 / 2003
  • This paper presents a fuzzy-motion-control based tracking algorithm for mobile robots, which uses the geometric information derived from the active stereo-vision system mounted on the mobile robot. The active stereo-vision system consists of two color cameras that rotate in two angular dimensions. With the stereo-vision system, the center position and depth information of the target object can be calculated. The proposed fuzzy motion controller computes the tracking velocity and angular position of the mobile robot so that the robot keeps following the object at a constant distance and orientation.
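
The abstract describes the geometry and the controller only at a high level. The following Python sketch, which is not the authors' implementation, illustrates the two ingredients it names: triangulating target depth from the pan angles of a verging stereo head, and a toy fuzzy rule base that turns distance and bearing errors into a tracking command. The baseline, membership breakpoints, and gains are invented for illustration.

```python
# Minimal sketch (not the paper's implementation): depth from the pan angles of
# a two-camera stereo head, plus a toy fuzzy rule base for target following.
import math

def target_depth(baseline_m, pan_left_rad, pan_right_rad):
    """Triangulate target depth from the two camera pan angles.
    Pan angles are measured from each camera's forward axis, positive toward
    the right; both cameras are assumed to lie on a horizontal baseline."""
    a = math.pi / 2 - pan_left_rad      # interior angle at the left camera
    b = math.pi / 2 + pan_right_rad     # interior angle at the right camera
    c = math.pi - a - b                 # angle at the target
    r = baseline_m * math.sin(b) / math.sin(c)   # range from the left camera
    return r * math.sin(a)              # perpendicular distance to the baseline

def tri(x, lo, mid, hi):
    """Triangular fuzzy membership function."""
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (mid - lo) if x < mid else (hi - x) / (hi - mid)

def fuzzy_tracking_command(distance_m, bearing_rad, desired_m=1.0):
    """Toy Mamdani-style controller: weighted average of rule consequents."""
    e = distance_m - desired_m
    near, ok, far = tri(e, -1.0, -0.5, 0.0), tri(e, -0.5, 0.0, 0.5), tri(e, 0.0, 0.5, 1.0)
    w = near + ok + far or 1e-9
    v = (near * -0.2 + ok * 0.0 + far * 0.4) / w   # m/s: back up / hold / speed up
    omega = 1.5 * bearing_rad                      # proportional heading correction
    return v, omega

if __name__ == "__main__":
    d = target_depth(0.12, math.radians(5), math.radians(-3))
    print(round(d, 3), fuzzy_tracking_command(d, math.radians(4)))
```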


Real Time Vision System for the Test of Steam Generator in Nuclear Power Plants Based on Fuzzy Membership Function

  • 왕한흥
    • Proceedings of the Korean Society of Machine Tool Engineers Conference / Proceedings of the KSMTE 1996 Autumn Conference / pp.107-112 / 1996
  • In this paper, a new approach is proposed for the development of an automatic vision system to examine and repair steam generator tubes at a remote distance. In nuclear power plants, workers are reluctant to work inside the steam generator because of the high-radiation environment and the limited working space. It is strongly recommended that examination and maintenance work be done by an automatic system to protect the operator from radiation exposure. Digital signal processors are used to implement real-time recognition and examination of steam generator tubes in the proposed vision system. The performance of the proposed digital vision system is illustrated by experiments on a similar steam generator model.


Measurement of GMAW Bead Geometry Using Biprism Stereo Vision Sensor

  • 이지혜;이두현;유중돈
    • Journal of Welding and Joining / Vol. 19, No. 2 / pp.200-207 / 2001
  • The three-dimensional bead profile in GMAW was measured using a biprism stereo vision sensor, which consists of an optical filter, a biprism, and a CCD camera. Since a single CCD camera is used, this system has various advantages over the conventional two-camera stereo vision system, such as finding the corresponding points along the same horizontal scanline. In this work, the biprism stereo vision sensor was designed for GMAW, and a linear calibration method was proposed to determine the prism and camera parameters. Image processing techniques were employed to find the corresponding points along the pool boundary. The iso-intensity contour corresponding to the pool boundary was found at the pixel level, and a filter-based matching algorithm was used to refine the corresponding points to subpixel accuracy. Predicted bead dimensions were in broad agreement with the measured results under spray-mode and humping-bead conditions.
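
Once the biprism splits the image into two virtual half-views that share scanlines, depth recovery reduces to the familiar disparity relation of a rectified stereo pair. The sketch below assumes the calibration (focal length, effective virtual baseline, pixel pitch) has already been obtained; all numeric values are hypothetical and are not taken from the paper.

```python
# Minimal sketch (not the paper's calibration or matching code): once the biprism
# produces left/right virtual views, corresponding points share a scanline, so
# depth follows the usual relation Z = f * B / d for a rectified stereo pair.
def depth_from_scanline_disparity(x_left_px, x_right_px,
                                  focal_mm=25.0, baseline_mm=60.0,
                                  pixel_pitch_mm=0.0074):
    """Return the depth (mm) of a bead-surface point from its column positions
    in the two virtual half-images on the same scanline."""
    disparity_mm = abs(x_left_px - x_right_px) * pixel_pitch_mm
    if disparity_mm == 0:
        raise ValueError("zero disparity: point at infinity or bad match")
    return focal_mm * baseline_mm / disparity_mm

def bead_profile(matches):
    """matches: list of (x_left_px, x_right_px) pairs along one pool-boundary
    scanline; returns the reconstructed depth for each pair."""
    return [depth_from_scanline_disparity(xl, xr) for xl, xr in matches]

if __name__ == "__main__":
    print([round(z, 1) for z in bead_profile([(812.0, 140.0), (815.0, 150.0)])])
```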


A Hierarchical Motion Controller for Soccer Robots with Stand-alone Vision System

  • 이동일;김형종;김상준;장재완;최정원;이석규
    • Journal of the Korean Society for Precision Engineering / Vol. 19, No. 9 / pp.133-141 / 2002
  • In this paper, we propose a hierarchical motion controller with a stand-alone vision system to enhance the flexibility of the robot soccer system. In addition, we simplified the model of the robot's dynamic environment using a Petri net and a simple state diagram. Based on the proposed model, we designed the robot soccer system with velocity and position controllers organized into a four-level hierarchical structure. Experimental results with the vision system running stand-alone from the host system show improved controller performance owing to the reduced processing time of the vision algorithm.
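
As a rough illustration of what a four-level hierarchy of this kind can look like, the sketch below chains a strategy level, an action level, a position level, and a velocity level. The specific levels, states, and gains are assumptions for illustration, not the authors' design.

```python
# Illustrative sketch of a layered soccer-robot controller
# (strategy -> action -> position -> velocity); all details are assumed.
import math

def strategy_level(ball, own_goal):            # level 1: pick a role
    return "attack" if ball[0] > own_goal[0] else "defend"

def action_level(role, ball, own_goal):        # level 2: pick a target point
    if role == "attack":
        return ball                            # go to the ball
    # defend: take position midway between the ball and our goal
    return ((ball[0] + own_goal[0]) / 2.0, (ball[1] + own_goal[1]) / 2.0)

def position_level(target, robot_pose):        # level 3: position error
    x, y, heading = robot_pose
    dx, dy = target[0] - x, target[1] - y
    return math.hypot(dx, dy), math.atan2(dy, dx) - heading

def velocity_level(dist, heading_err, kv=0.8, kw=2.0):   # level 4: wheel command
    return kv * dist, kw * heading_err

if __name__ == "__main__":
    robot_pose = (0.0, 0.0, 0.0)               # x (m), y (m), heading (rad)
    ball, own_goal = (0.6, 0.2), (-1.0, 0.0)
    role = strategy_level(ball, own_goal)
    target = action_level(role, ball, own_goal)
    v, w = velocity_level(*position_level(target, robot_pose))
    print(role, target, round(v, 2), round(w, 2))
```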

The Moving Object Gripping Using Vision Systems

  • 조기흠;최병준;전재현;홍석교
    • Proceedings of the Korean Institute of Electrical Engineers Conference / Proceedings of the KIEE 1998 Summer Conference, Vol. G / pp.2357-2359 / 1998
  • This paper proposes trajectory tracking of a moving object based on a single-camera vision system, together with a method by which a robot manipulator grips the moving object by predicting its coordinates. The trajectory and position coordinates are computed from the vision data acquired by the camera, and the robot manipulator tracks and grips the moving object using these data. The proposed vision system uses an algorithm suitable for real-time processing.
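
The abstract does not specify the predictor, so the sketch below uses the simplest option: constant-velocity extrapolation of the object's coordinates from two timestamped vision measurements, aiming the gripper ahead of the object by an assumed approach delay.

```python
# Minimal sketch (the paper's predictor is not given in the abstract):
# constant-velocity extrapolation of the object's coordinates from two
# timestamped vision samples. The approach delay is a made-up figure.
def predict_grip_point(p_prev, t_prev, p_curr, t_curr, approach_delay_s=0.4):
    """p_* are (x, y) object positions reported by the vision system at times t_*."""
    dt = t_curr - t_prev
    if dt <= 0:
        raise ValueError("samples must be in increasing time order")
    vx = (p_curr[0] - p_prev[0]) / dt
    vy = (p_curr[1] - p_prev[1]) / dt
    # Aim where the object will be once the manipulator has moved there.
    return (p_curr[0] + vx * approach_delay_s,
            p_curr[1] + vy * approach_delay_s)

if __name__ == "__main__":
    print(predict_grip_point((100.0, 50.0), 0.00, (106.0, 50.5), 0.10))
```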


Analysis of Requirements for Night Vision Imaging System

  • 권종광;이대열;김환우
    • Journal of the Korea Institute of Military Science and Technology / Vol. 10, No. 3 / pp.51-61 / 2007
  • This paper concerns the requirements analysis for the night vision imaging system (NVIS), whose purpose is to intensify the available nighttime near-infrared (IR) radiation sufficiently to be perceived by the human eye on a miniature green phosphor screen. The requirements for NVIS include NVIS radiance (NR), chromaticity, and daylight legibility/readability. NR is a quantitative measure of the night vision goggle (NVG) compatibility of a light source as viewed through the goggles. Chromaticity is the quality of a color as determined by its purity and dominant wavelength. Daylight legibility/readability is the degree to which words are readable based on appearance, and a measure of an instrument's ability to display incremental changes in its output value. In this paper, the requirements on NR, chromaticity, and daylight legibility/readability for Type I and Class B/C NVIS are analyzed, and the rationale for those requirements is presented.
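
NR is essentially the source's spectral radiance weighted by the goggle's relative spectral response and integrated over wavelength, up to the normalization defined in the applicable standard (e.g. MIL-STD-3009). The sketch below evaluates that integral numerically; the sample response and radiance values are invented placeholders, not Class B data.

```python
# Worked sketch of the NVIS radiance (NR) figure the abstract refers to:
# NR ~ integral over wavelength of G(lambda) * N(lambda), where N is the source
# spectral radiance and G the goggle's relative spectral response.
def nvis_radiance(wavelength_nm, spectral_radiance, nvg_response):
    """Trapezoidal approximation of the weighted spectral integral."""
    nr = 0.0
    for i in range(len(wavelength_nm) - 1):
        dw = wavelength_nm[i + 1] - wavelength_nm[i]
        f0 = nvg_response[i] * spectral_radiance[i]
        f1 = nvg_response[i + 1] * spectral_radiance[i + 1]
        nr += 0.5 * (f0 + f1) * dw
    return nr

if __name__ == "__main__":
    wl = [600, 650, 700, 750, 800, 850, 900]              # nm
    n  = [2e-9, 3e-9, 5e-9, 4e-9, 3e-9, 2e-9, 1e-9]        # placeholder radiance samples
    g  = [0.01, 0.05, 0.40, 0.90, 1.00, 0.80, 0.30]        # placeholder relative NVG response
    print(f"NR ~ {nvis_radiance(wl, n, g):.3e}")
```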

Automated Vision-based Construction Object Detection Using Active Learning

  • 김진우;지석호;서준오
    • Journal of the Korean Society of Civil Engineers / Vol. 39, No. 5 / pp.631-636 / 2019
  • Recently, many researchers have been actively developing vision-based analysis techniques that automatically identify the type and location of construction resources deployed on large-scale sites. However, existing methods require labeling work in which the construction objects to be recognized (workers, heavy equipment, materials, etc.) are annotated in the training image data, which wastes unnecessary time and effort. To address this limitation, this study proposes an automated vision-based construction object detection framework using active learning. To validate the developed framework, experiments were conducted on a benchmark dataset from the construction domain. As a result, the model trained through active learning successfully recognized construction objects with diverse characteristics, and compared with the conventional approach to building a training database, a high-performance vision model could be developed with fewer data samples and fewer training iterations. Consequently, the proposed framework not only reduces the labeling work previously required to build the training database, but also minimizes the total time and cost.
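
The core of such a framework is the active-learning loop itself: train, score the unlabeled pool, and ask a human to label only the most informative samples. The sketch below shows a generic uncertainty-sampling loop with dummy stand-ins for the detector and the labeling step; it is not the authors' pipeline.

```python
# Generic uncertainty-sampling active-learning loop (illustrative only).
import random

def least_confident(unlabeled, predict_proba, batch_size=10):
    """Pick the images whose top predicted class is least confident."""
    scored = [(max(predict_proba(img)), img) for img in unlabeled]
    scored.sort(key=lambda s: s[0])              # lowest confidence first
    return [img for _, img in scored[:batch_size]]

def active_learning_loop(unlabeled, label_fn, train_fn, predict_proba, rounds=5):
    labeled = []
    for _ in range(rounds):
        batch = least_confident(unlabeled, predict_proba)
        labeled += [(img, label_fn(img)) for img in batch]   # human labels only these
        unlabeled = [img for img in unlabeled if img not in batch]
        train_fn(labeled)                                    # retrain the detector
    return labeled

if __name__ == "__main__":
    # Dummy stand-ins so the loop runs end to end.
    imgs = list(range(100))
    def fake_proba(i):
        random.seed(i)                           # deterministic dummy confidence
        p = random.random()
        return [p, 1.0 - p]
    picked = active_learning_loop(imgs, label_fn=lambda i: i % 3,
                                  train_fn=lambda data: None,
                                  predict_proba=fake_proba, rounds=5)
    print(len(picked), "images labeled instead of", len(imgs))
```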

Correlation Extraction from KOSHA to enable the Development of Computer Vision based Risks Recognition System

  • Khan, Numan;Kim, Youjin;Lee, Doyeop;Tran, Si Van-Tien;Park, Chansik
    • International Conference Proceedings / The 8th International Conference on Construction Engineering and Project Management / pp.87-95 / 2020
  • Generally, occupational safety, and particularly construction safety, is an intricate phenomenon. Industry professionals have devoted vital attention to enforcing Occupational Safety and Health (OSH) over the last three decades to enhance safety management in construction. Despite the efforts of safety professionals and government agencies, current safety management still relies on manual inspections, which are infrequent, time-consuming, and prone to error. Extensive research has been carried out to deal with the high fatality rates confronted by the construction industry; sensor systems, visualization-based technologies, and tracking techniques have been deployed by researchers in the last decade. Recently, computer vision has attracted significant attention in the construction industry worldwide. However, the literature reveals the narrow scope of computer vision technology for safety management; hence, broader-scope research on safety monitoring is needed to attain fully automatic job-site monitoring. In this regard, the development of a broader-scope computer-vision-based risk recognition system for detecting correlations between construction entities is inevitable. For this purpose, a detailed analysis was conducted and rules depicting the correlations (positive and negative) between construction entities were extracted. A deep-learning-based Mask R-CNN algorithm is applied to train the model. As proof of concept, a prototype is developed based on real scenarios. The proposed approach is expected to enhance the effectiveness of safety inspection and reduce the burden on safety managers. It is anticipated that this approach may reduce injuries and fatalities by enforcing the exact relevant safety rules, and that it will contribute to enhancing overall safety management and monitoring performance.
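
To make the idea of correlation rules concrete, the sketch below checks detection outputs (labels and boxes, as one would obtain from a Mask R-CNN model) against one negative and one positive correlation rule. The two rules and the distance threshold are invented examples patterned on the kind of provisions the paper extracts, not the actual extracted rule set.

```python
# Illustrative rule check over detection results; rules and threshold are assumed.
def center(box):                      # box = (x1, y1, x2, y2) in pixels
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

def too_close(b1, b2, min_px):
    (x1, y1), (x2, y2) = center(b1), center(b2)
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 < min_px

def check_rules(detections, min_px=150):
    """detections: list of (label, box). Returns a list of violation messages."""
    workers  = [b for lbl, b in detections if lbl == "worker"]
    hardhats = [b for lbl, b in detections if lbl == "hardhat"]
    machines = [b for lbl, b in detections if lbl == "excavator"]
    violations = []
    # Negative correlation: a worker inside a machine's clearance zone.
    for w in workers:
        if any(too_close(w, m, min_px) for m in machines):
            violations.append("worker within excavator clearance zone")
    # Positive correlation: every worker should co-occur with a nearby hardhat.
    for w in workers:
        if not any(too_close(w, h, min_px) for h in hardhats):
            violations.append("worker without a nearby hardhat detection")
    return violations

if __name__ == "__main__":
    dets = [("worker", (100, 200, 140, 320)), ("excavator", (180, 150, 420, 380))]
    print(check_rules(dets))
```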


Intelligent Rain Sensing and Fuzzy Wiper Control Algorithm for Vision-based Smart Windshield Wiper System

  • Lee, Kyung-Chang;Kim, Man-Ho;Lee, Suk
    • Proceedings of the Institute of Control, Robotics and Systems Conference / ICCAS 2003 / pp.1694-1699 / 2003
  • A windshield wiper system plays a key role in assuring the driver's safety during rainfall. However, because the quantity of rain or snow varies irregularly with time and with the speed of the automobile, the driver of a traditional windshield wiper system must change the wiper speed and interval from time to time to secure a sufficient visual field. Because manual operation of the windshield wiper distracts the driver's attention and leads to inattentive driving, it is a direct cause of traffic accidents. Therefore, this paper presents the basic architecture of a vision-based smart windshield wiper system and a rain-sensing algorithm that regulates the speed and interval of the windshield wiper automatically according to the quantity of rain or snow. This paper also introduces a fuzzy wiper control algorithm based on human expertise and evaluates the performance of the suggested algorithm in an experimental simulator.
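
A minimal way to approximate the rain-sensing step is to difference consecutive frames of the wiped zone and measure how many pixels were disturbed. The sketch below does that and maps the ratio to a wiper command through a coarse three-step rule; the paper's fuzzy rule base, tuned from driver expertise, is replaced here by this stand-in, and the thresholds are assumptions.

```python
# Minimal rain-sensing sketch: frame differencing plus a coarse command mapping.
import numpy as np

def rain_ratio(prev_frame, curr_frame, diff_threshold=25):
    """Fraction of pixels whose gray level changed more than the threshold."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float((diff > diff_threshold).mean())

def wiper_command(ratio):
    if ratio < 0.01:
        return "off"
    if ratio < 0.05:
        return "intermittent"
    return "continuous"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prev = rng.integers(0, 255, (240, 320), dtype=np.uint8)
    curr = prev.copy()
    drops = rng.random((240, 320)) < 0.03        # simulate ~3% droplet pixels
    curr[drops] = 255
    r = rain_ratio(prev, curr)
    print(round(r, 3), wiper_command(r))
```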


Vision-Based Eyes-Gaze Detection Using Two-Eyes Displacement

  • Ponglanka, Wirote;Kumhom, Pinit;Chamnongthai, Kosin
    • Proceedings of the Institute of Electronics Engineers of Korea Conference / ITC-CSCC 2002, Vol. 1 / pp.46-49 / 2002
  • One problem of vision-based eye-gaze detection is its low resolution. Based on the displacement of the eyes, we propose a method for vision-based eye-gaze detection. While the user looks at different positions on the screen, the distance between the centers of the two eyes changes accordingly. This relationship is derived and used to map the measured displacement to a position on the screen. Experiments were performed to measure the accuracy and resolution of the proposed method. The results show that the accuracy of the screen mapping function in the horizontal plane is 76.47%, with an error of 23.53%.
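
The mapping the abstract refers to can be obtained by a short calibration: have the user fixate known horizontal targets, record the eye-center displacement for each, and fit a function from displacement to screen position. The sketch below uses a linear least-squares fit, which is an assumption; the calibration numbers are made up.

```python
# Calibration sketch: fit eye-center displacement (px) -> horizontal screen position.
import numpy as np

def fit_gaze_map(displacements_px, screen_x_mm):
    """Least-squares line mapping eye-center displacement to screen x."""
    slope, intercept = np.polyfit(displacements_px, screen_x_mm, 1)
    return lambda d: slope * d + intercept

if __name__ == "__main__":
    # Calibration: the subject fixates known horizontal targets on the screen.
    d_cal = [61.0, 62.5, 64.0, 65.4, 66.8]        # measured displacement (px)
    x_cal = [-200.0, -100.0, 0.0, 100.0, 200.0]   # target positions (mm)
    gaze_x = fit_gaze_map(d_cal, x_cal)
    print(round(gaze_x(63.2), 1))                 # estimate for a new measurement
```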
