• Title/Summary/Keyword: Vision Module

Search Results: 197

Aircraft Recognition from Remote Sensing Images Based on Machine Vision

  • Chen, Lu; Zhou, Liming; Liu, Jinming
    • Journal of Information Processing Systems, v.16 no.4, pp.795-808, 2020
  • Because the Yolov3 network yields poor evaluation indexes, such as detection accuracy and recall rate, when detecting aircraft in remote sensing images, this paper proposes a machine-vision-based aircraft detection method for remote sensing images. To improve target detection, the Inception module was introduced into the Yolov3 network structure, and the dataset was cluster-analyzed using the k-means algorithm. To obtain the best aircraft detection model, we adjusted the network parameters in the pre-training model, increased the resolution of the input image, and adopted a multi-scale training model. Experiments on the remote sensing aircraft dataset RSOD-Dataset show that our method improves several evaluation indicators, and that it also has good detection and recognition ability for other ground objects.
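As an aside, the k-means anchor clustering step mentioned in this abstract can be sketched in a few lines. The box sizes and the value of k below are illustrative assumptions, and the 1 − IoU distance follows the usual YOLO convention rather than this paper's exact setup.

```python
import random

def iou_wh(box, anchor):
    """IoU between two (w, h) boxes assumed to share a common center."""
    inter = min(box[0], anchor[0]) * min(box[1], anchor[1])
    union = box[0] * box[1] + anchor[0] * anchor[1] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100, seed=0):
    """Cluster (w, h) ground-truth boxes into k anchors using 1 - IoU distance."""
    rng = random.Random(seed)
    anchors = rng.sample(boxes, k)
    for _ in range(iters):
        # assign each box to the anchor with the highest IoU
        clusters = [[] for _ in range(k)]
        for b in boxes:
            best = max(range(k), key=lambda i: iou_wh(b, anchors[i]))
            clusters[best].append(b)
        # recompute each anchor as the mean (w, h) of its cluster
        for i, c in enumerate(clusters):
            if c:
                anchors[i] = (sum(b[0] for b in c) / len(c),
                              sum(b[1] for b in c) / len(c))
    return sorted(anchors)

# toy ground-truth box sizes (w, h) in pixels -- purely illustrative
boxes = [(10, 12), (11, 11), (40, 38), (42, 41), (90, 88), (95, 92)]
print(kmeans_anchors(boxes, k=3))
```

In practice the anchors would be clustered from all labeled boxes in the training set, and the resulting (w, h) pairs substituted into the network's anchor configuration.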

A Study of the Defect Detection Method of Vision Technology via Camera Image Analysis on 4-col 7-row LED Screen Module (4단 7열 LED 사이니지 전면부 설치형 카메라기반 불량 LED 소자 검출 Vision 기술에 관한 연구)

  • Park, Young ki; Im, Sang il; Jo, Ik hyeon; Cha, Jae sang
    • Journal of Korea Multimedia Society, v.23 no.11, pp.1383-1387, 2020
  • Recently, 4-col 7-row LED screens that provide various information for major roads and local governments have been installed and operated. However, deterioration caused by changes in temperature and humidity, static electricity, and mechanical stress can produce partial module failure of the display, which is a major cause of missing information that is vital to citizens. Because these signboards are installed on roads and outdoors, and because of the lack of constant monitoring means and of manpower, failed modules are frequently neglected for long periods. Accordingly, this paper proposes a method to detect defective modules by analyzing images collected through a camera fixed to the front of the LED display.
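A minimal sketch of the camera-based check this abstract describes might divide each captured frame into the screen's 7 x 4 module grid and flag cells that stay dark. The grid mapping, threshold, and synthetic frame below are assumptions for illustration, not the paper's parameters.

```python
def find_dark_modules(frame, rows=7, cols=4, threshold=50):
    """Split a grayscale frame (list of pixel rows) into a rows x cols grid
    and report grid cells whose mean brightness is below the threshold."""
    h, w = len(frame), len(frame[0])
    ch, cw = h // rows, w // cols
    defects = []
    for r in range(rows):
        for c in range(cols):
            cell = [frame[y][x]
                    for y in range(r * ch, (r + 1) * ch)
                    for x in range(c * cw, (c + 1) * cw)]
            if sum(cell) / len(cell) < threshold:
                defects.append((r, c))
    return defects

# synthetic 70x40 frame: all modules lit (200) except row 2, column 1
frame = [[200] * 40 for _ in range(70)]
for y in range(20, 30):
    for x in range(10, 20):
        frame[y][x] = 5
print(find_dark_modules(frame))  # prints [(2, 1)]
```

A real system would first rectify the camera image so each module maps onto a fixed grid cell, but the per-cell brightness test itself stays this simple.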

Compensation of Installation Errors in a Laser Vision System and Dimensional Inspection of Automobile Chassis

  • Barkovski, Igor Dunin; Samuel, G.L.; Yang, Seung-Han
    • Journal of Mechanical Science and Technology, v.20 no.4, pp.437-446, 2006
  • Laser vision inspection systems are becoming popular for automated inspection of manufactured components. The performance of such systems can be enhanced by improving the accuracy of the hardware and the robustness of the software used in the system. This paper presents a new approach for enhancing the capability of a laser vision system by applying hardware compensation and using efficient analysis software. A 3D geometrical model is developed to study and compensate for possible distortions in the installation of the gantry robot on which the vision system is mounted. Appropriate compensation, based on the parameters of the 3D model, is applied to the inspection data obtained from the laser vision system. The system is used for dimensional inspection of a car chassis sub-frame and a lower arm assembly module, and an algorithm based on simplex search techniques is used for analyzing the compensated inspection data. The paper presents the details of the 3D model, the parameters used for compensation, the measurement data obtained from the system, the search algorithm used for analyzing those data, and the results obtained. The results show that, by applying compensation and using appropriate analysis algorithms, the error in evaluating the inspection data can be significantly reduced, lowering the risk of rejecting good parts.
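The compensation idea can be illustrated with a much-simplified 2D version of the paper's 3D installation model: subtract a known translation offset of the gantry and rotate the measured points back by its yaw misalignment. The angle and offset values below are hypothetical, chosen only to demonstrate that the nominal coordinates are recovered.

```python
import math

def compensate(points, yaw_deg, offset):
    """Undo a known installation error: subtract the translation offset,
    then rotate the measured XY points back by the yaw misalignment."""
    a = math.radians(-yaw_deg)            # inverse rotation angle
    ca, sa = math.cos(a), math.sin(a)
    ox, oy = offset
    return [(ca * (x - ox) - sa * (y - oy),
             sa * (x - ox) + ca * (y - oy)) for x, y in points]

# simulate a measurement distorted by a 2.0 deg yaw and a (1.5, -0.8) offset
t = math.radians(2.0)
mx = math.cos(t) * 100 - math.sin(t) * 50 + 1.5
my = math.sin(t) * 100 + math.cos(t) * 50 - 0.8
rec = compensate([(mx, my)], yaw_deg=2.0, offset=(1.5, -0.8))[0]
print(rec)  # recovers the nominal point, approximately (100.0, 50.0)
```

In the paper's setting the error parameters themselves are not known in advance; they are estimated from reference measurements (there via simplex search) before a correction of this form is applied.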

A Clean Mobile Robot for 4th Generation LCD Cassette transfer (4세대 LCD Cassette 자동 반송 이동로봇)

  • 김진기; 성학경; 김성권
    • Proceedings of the Institute of Control, Robotics and Systems Conference, 2000.10a, pp.249-249, 2000
  • This paper introduces a clean mobile robot for 4th generation LCD cassettes, guided by an optical sensor with position compensation using a vision module. The mobile robot for LCD cassette transfer is controlled by an AGV controller with powerful algorithms, which provides optimum routes to the robot's destination using a dynamic dispatch algorithm and MAP data. The robot is equipped with a 4-axis fork-type manipulator providing a repeatability accuracy of ±0.05 mm.

Development of Stand-Alone Vision Processing Module Based on Linux OS in ARM CPU (ARM CPU를 이용한 리눅스기반 독립형 Vision 처리 모듈 개발)

  • Lee, Seok; Moon, Seung-Bin
    • Proceedings of the Korea Information Processing Society Conference, 2002.04a, pp.657-660, 2002
  • Many companies are now adopting Linux for embedded systems, and embedded Linux is being applied across many fields, from robot controllers requiring a real-time operating system to PDAs and set-top boxes. This paper describes the development of a stand-alone vision module running Linux on an embedded system built around the StrongARM SA-1110 CPU. The Linux-based stand-alone vision module is evaluated by comparing its performance with a vision module developed using WinCE, and the applicability of Linux to the machine vision field is presented.

Smart Vision Sensor for Satellite Video Surveillance Sensor Network (위성 영상감시 센서망을 위한 스마트 비젼 센서)

  • Kim, Won-Ho; Im, Jae-Yoo
    • Journal of Satellite, Information and Communications, v.10 no.2, pp.70-74, 2015
  • In this paper, a satellite-communication-based video surveillance system consisting of ultra-small aperture terminals with a small smart vision sensor is proposed. Events such as forest fires, smoke, and intruder movement are detected automatically in the field, and false alarms are minimized by using intelligent, highly reliable video analysis algorithms. The smart vision sensor must satisfy requirements of high confidence, high hardware endurance, seamless communication, and easy maintenance. To meet them, a real-time digital signal processor, a camera module, and a satellite transceiver are integrated as a smart vision sensor-based ultra-small aperture terminal, with high-performance video analysis and image coding algorithms embedded. The video analysis functions and performance were verified, and practicality was confirmed, through computer simulation and tests of a vision sensor prototype.
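The on-board event detection such a sensor performs can be sketched, in a greatly simplified form, as frame differencing: flag an event when enough pixels change between consecutive frames. The frame size, thresholds, and synthetic data below are illustrative assumptions, not the paper's algorithm.

```python
def motion_score(prev, curr, threshold=25):
    """Fraction of pixels whose absolute change exceeds the threshold."""
    flat_prev = [px for row in prev for px in row]
    flat_curr = [px for row in curr for px in row]
    changed = sum(1 for p, c in zip(flat_prev, flat_curr)
                  if abs(p - c) > threshold)
    return changed / len(flat_prev)

def detect_event(prev, curr, min_fraction=0.02):
    """Raise an event when enough of the frame has changed."""
    return motion_score(prev, curr) >= min_fraction

# synthetic 48x64 grayscale frames: a 10x20 region brightens between frames
prev = [[100] * 64 for _ in range(48)]
curr = [row[:] for row in prev]
for y in range(10, 20):
    for x in range(10, 30):
        curr[y][x] = 220
print(detect_event(prev, curr))  # True: 200/3072 of the pixels changed
```

Running a check like this on the terminal itself, and transmitting video only when an event fires, is what keeps the satellite link usage and false-alarm rate low.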

Development of Progressive Scan Camera Module using FPGA (FPGA를 이용한 프로그래시브 스캔 카메라 접속 모듈 개발)

  • Kim, Jeong-Hun; Jeon, Jae-Wook; Byun, Jong-Eun
    • Proceedings of the KIEE Conference, 2000.07d, pp.2865-2867, 2000
  • In machine vision fields around FA, there is demand for functions to capture high-speed moving objects as blur-free images, which progressive scan cameras can do by means of electronic shutters. This paper develops a module to interface with a progressive scan camera, the XC-55.

Development of Mobile Robot for CAS inspection of Oil Tanker (유조선의 상태평가계획 검사를 위한 이동로봇의 개발)

  • Lee, Seung-Heui; Son, Chang-Woo; Eum, Yong-Jae; Lee, Min-Cheol
    • The Journal of Korea Robotics Society, v.2 no.2, pp.161-167, 2007
  • It is dangerous for an inspector to examine defects and the condition of the inner parts of an oil tanker because of harmful gases, complex structures, and other hazards; however, such inspections are necessary for the many oil tankers that have been in service for years. In this study, we propose the design of a mobile robot for CAS inspection in oil tankers. The developed CAS inspection mobile robot has four modules: a module for measuring the tanker's plate thickness, a corrosion inspection module, a wall-climbing module, and a monitoring module. A driving control algorithm was developed so that the robot can reach each check position, and magnetic wheels are used to move on the wall surface. This study also constructed a communication network and a monitoring program to operate the developed mobile robot from remote sites. To evaluate its inspection ability, experiments on CAS inspection performance using the developed mobile robot were carried out.

Interface of Tele-Task Operation for Automated Cultivation of Watermelon in Greenhouse

  • Kim, S.C.; Hwang, H.
    • Journal of Biosystems Engineering, v.28 no.6, pp.511-516, 2003
  • Computer vision technology has been utilized as one of the most powerful tools to automate various agricultural operations. Though it has demonstrated successful results in various applications, the current state of the technology is still far behind human capability, typically in unstructured and variable task environments. In this paper, a man-machine interactive hybrid decision-making system utilizing the concept of tele-operation is proposed to overcome the limitations of computer image processing and cognitive capability. Tasks of greenhouse watermelon cultivation, such as pruning, watering, pesticide application, and harvest, require identification of the target object. Identifying watermelons, including their position data, from the field image is very difficult because of the ambiguity among stems, leaves, shades, and fruits, especially when a watermelon is partly covered by leaves or stems. Watermelon identification from a cultivation field image transmitted wirelessly was selected to realize the proposed concept. The system was designed so that the operator (farmer), computer, and machinery share roles, each contributing its strengths to accomplish the given tasks successfully. The developed system was composed of an image monitoring and task control module, a wireless remote image acquisition and data transmission module, and a man-machine interface module. Once a task was selected from the task control and monitoring module, the analog color image signal of the field was captured and transmitted wirelessly to the host computer using an RF module. The operator communicated with the computer through a touch-screen interface, and a sequence of algorithms to identify the location and size of the watermelon was then performed based on local image processing. The system showed a practical and feasible way of automating the volatile bio-production process.

Hybrid Inertial and Vision-Based Tracking for VR applications (가상 현실 어플리케이션을 위한 관성과 시각기반 하이브리드 트래킹)

  • Gu, Jae-Pil; An, Sang-Cheol; Kim, Hyeong-Gon; Kim, Ik-Jae; Gu, Yeol-Hoe
    • Proceedings of the KIEE Conference, 2003.11b, pp.103-106, 2003
  • In this paper, we present a hybrid inertial and vision-based tracking system for VR applications. One of the most important aspects of VR (Virtual Reality) is providing a correspondence between the physical and virtual worlds; as a result, accurate and real-time tracking of an object's position and orientation is a prerequisite for many applications in virtual environments. Pure vision-based tracking has low jitter and high accuracy but cannot guarantee real-time pose recovery under all circumstances. Pure inertial tracking has high update rates and full 6DOF recovery but lacks long-term stability due to sensor noise. To overcome these individual drawbacks and build a better tracking system, we introduce the fusion of vision-based and inertial tracking. Sensor fusion makes the proposed tracking system robust, fast, and accurate, with low jitter and noise. Hybrid tracking is implemented with a Kalman filter that operates in a predictor-corrector manner. A Bluetooth serial communication module gives the system full mobility and makes it affordable, lightweight, energy-efficient, and practical. Full 6DOF recovery and the full mobility of the proposed system enable the user to interact with mobile devices such as PDAs and provide a natural interface.
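The predictor-corrector Kalman cycle described here can be sketched in one dimension: the fast inertial input drives the prediction every cycle, and the slower, low-jitter vision measurement corrects the accumulated drift when a frame arrives. All noise parameters, rates, and the static-target scenario below are illustrative assumptions, not the paper's tuning.

```python
def kalman_step(x, p, vel_inertial, z_vision, q=0.01, r=0.05, dt=0.01):
    """One predictor-corrector cycle: predict position from the (fast)
    inertial velocity estimate, then correct with the (slower, low-jitter)
    vision measurement when one is available."""
    # predict: dead-reckon from the inertial input, inflate uncertainty
    x_pred = x + vel_inertial * dt
    p_pred = p + q
    if z_vision is None:              # no vision frame this cycle
        return x_pred, p_pred
    # correct: blend prediction and measurement by the Kalman gain
    k = p_pred / (p_pred + r)
    return x_pred + k * (z_vision - x_pred), (1 - k) * p_pred

# static target at position 1.0: inertial input reports zero velocity,
# vision reports 1.0 at one fifth of the inertial update rate
x, p = 0.0, 1.0
for step in range(50):
    z = 1.0 if step % 5 == 0 else None
    x, p = kalman_step(x, p, 0.0, z)
print(round(x, 3))  # estimate converges to 1.0
```

The full 6DOF system replaces the scalars with state vectors and covariance matrices, but the alternation of inertial prediction and vision correction is exactly this loop.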
