• Title/Summary/Keyword: computer vision systems


Real-time geometry identification of moving ships by computer vision techniques in bridge area

  • Li, Shunlong; Guo, Yapeng; Xu, Yang; Li, Zhonglong
    • Smart Structures and Systems / v.23 no.4 / pp.359-371 / 2019
  • As part of a structural health monitoring system, the relative geometric relationship between a ship and a bridge has been recognized as important for bridge authorities and ship owners to avoid ship-bridge collisions. This study proposes a novel computer vision method for the real-time geometric parameter identification of moving ships based on a single shot multibox detector (SSD), transfer learning, and monocular vision. The identification framework consists of a ship detection (coarse scale) module and a geometric parameter calculation (fine scale) module. For ship detection, the SSD, a deep learning algorithm, was fine-tuned with ship image samples downloaded from the Internet to obtain rectangular regions of interest at the coarse scale. Subsequently, for the geometric parameter calculation, an accurate ship contour was created using morphological operations on the saturation channel of the hue, saturation, value (HSV) color space. Furthermore, a local coordinate system was constructed using a projective geometry transformation to calculate geometric parameters of ships such as width, length, height, location, and velocity. The application of the proposed method to in situ video images, obtained from cameras mounted on the girder of the Wuhan Yangtze River Bridge above the shipping channel, confirmed the efficiency, accuracy, and effectiveness of the proposed method.
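
The fine-scale step described in this abstract (morphological contour extraction on the HSV saturation channel inside an SSD-detected region) can be illustrated with a short OpenCV sketch. The frame path and box coordinates below are placeholders, not values from the paper, and the snippet is not the authors' full pipeline.

```python
# Minimal sketch of the fine-scale contour step: saturation-channel thresholding
# plus morphological cleanup inside a hypothetical SSD-detected region of interest.
import cv2
import numpy as np

frame = cv2.imread("bridge_camera_frame.jpg")   # hypothetical video frame
x, y, w, h = 420, 310, 260, 90                  # hypothetical SSD bounding box
roi = frame[y:y + h, x:x + w]

# Work on the saturation channel of the HSV color space.
saturation = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)[:, :, 1]

# Threshold, then clean the mask with morphological opening and closing.
_, mask = cv2.threshold(saturation, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

# Keep the largest contour as the ship outline.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
ship_contour = max(contours, key=cv2.contourArea)
print("contour points:", len(ship_contour))
```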

Correlation Extraction from KOSHA to enable the Development of Computer Vision based Risks Recognition System

  • Khan, Numan; Kim, Youjin; Lee, Doyeop; Tran, Si Van-Tien; Park, Chansik
    • International conference on construction engineering and project management / 2020.12a / pp.87-95 / 2020
  • Occupational safety in general, and construction safety in particular, is an intricate phenomenon. Industry professionals have devoted considerable attention to enforcing Occupational Safety and Health (OSH) over the last three decades to enhance safety management in construction. Despite the efforts of safety professionals and government agencies, current safety management still relies on manual inspections, which are infrequent, time-consuming, and prone to error. Extensive research has been carried out to deal with the high fatality rates confronting the construction industry, and sensor systems, visualization-based technologies, and tracking techniques have been deployed by researchers in the last decade. Recently, computer vision has attracted significant attention in the construction industry worldwide. However, the literature reveals that computer vision has so far been applied to safety management only within a narrow scope, so broader-scope research is needed to attain fully automatic job-site monitoring. In this regard, the development of a broader-scope computer-vision-based risk recognition system that detects correlations between construction entities is essential. For this purpose, a detailed analysis was conducted and rules depicting the correlations (positive and negative) between construction entities were extracted. The deep-learning-based Mask R-CNN algorithm is applied to train the model. As a proof of concept, a prototype was developed based on real scenarios. The proposed approach is expected to enhance the effectiveness of safety inspection and reduce the burden on safety managers. It is anticipated that this approach may reduce injuries and fatalities by applying the exact relevant safety rules, and will contribute to enhancing overall safety management and monitoring performance.
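
As a rough illustration of the detection backbone the abstract names, the sketch below runs an off-the-shelf, COCO-pretrained Mask R-CNN from torchvision and applies a toy co-occurrence rule between two detected classes. The classes, rule, confidence threshold, and image path are assumptions; the paper's KOSHA-derived rules and construction-trained model are not reproduced here.

```python
# Sketch only: Mask R-CNN inference plus a toy "entities co-occur" risk check.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("site_photo.jpg").convert("RGB")     # hypothetical image
with torch.no_grad():
    pred = model([to_tensor(image)])[0]

# Hypothetical rule: flag a risk when two specific entity classes are both
# present with high confidence (stand-ins for a KOSHA correlation rule).
RULE = {"entity_a": 1, "entity_b": 3}                   # illustrative COCO label ids
found = {int(l) for l, s in zip(pred["labels"], pred["scores"]) if s > 0.7}
if set(RULE.values()) <= found:
    print("negative correlation detected: review safety rule")
```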


Measuring the volume of powder by vision

  • Seiji Ishikawa; Shigeru Harada; Hiroyuki Yoshinaga; Kiyoshi Kato
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1987.10a / pp.776-779 / 1987
  • This paper describes a technique for measuring the volume of a pile of powder visually. The volume of a fragile object, whose shape is easily deformed by the slightest touch of another object, must be measured without any contact. This can be achieved by applying a three-dimensional shape reconstruction technique employed in computer vision. We developed a measurement system that finds the volume of a pile of powder using a range finder, and performed an experiment to determine the volume of PVC powder piled on a table. The result of the experiment was satisfactory.
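
The volume computation implied by the abstract reduces to integrating a range-finder height map over its grid; a minimal sketch, with made-up grid spacing and data, is shown below.

```python
# Minimal sketch: volume of a pile as the sum of cell heights times cell area.
import numpy as np

pixel_pitch_mm = 0.5                          # hypothetical grid spacing (mm per cell)
height_map_mm = np.random.rand(200, 200) * 3  # placeholder for measured heights

cell_area_mm2 = pixel_pitch_mm ** 2
volume_mm3 = float(np.sum(height_map_mm) * cell_area_mm2)
print(f"estimated volume: {volume_mm3 / 1000.0:.1f} cm^3")
```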


Wavelet Analysis to Real-Time Fabric Defects Detection in Weaving processes

  • Kim, Sung-Shin; Bae, Hyeon; Jung, Jae-Ryong; Vachtsevanos, George J.
    • International Journal of Fuzzy Logic and Intelligent Systems / v.2 no.1 / pp.89-93 / 2002
  • This paper introduces a vision-based on-line inspection methodology for woven textile fabrics. The current procedure for determining fabric defects in the textile industry is performed by humans in an off-line stage. The advantage of an on-line inspection system is not only defect detection and identification, but also quality improvement through a feedback control loop that adjusts set-points. The proposed inspection system consists of hardware and software components. The hardware components are CCD array cameras, a frame grabber, and appropriate illumination. The software routines capitalize upon vertical and horizontal scanning algorithms characteristic of a particular defect. A signal-to-noise ratio (SNR) calculation based on the results of the wavelet transform is performed to measure any defects, and the defect declaration is carried out using the SNR and the scanning methods. Test results from different types of defects and different styles of fabric demonstrate the effectiveness of the proposed inspection system.
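
A minimal sketch of the wavelet-plus-SNR idea described above is given below, using PyWavelets; the wavelet choice, threshold, and image path are assumptions rather than the paper's parameters.

```python
# Sketch: 2-D wavelet transform of a fabric patch, then a simple SNR measure
# on the detail coefficients to declare a defect.
import numpy as np
import pywt
import cv2

image = cv2.imread("fabric_patch.png", cv2.IMREAD_GRAYSCALE).astype(float)

# Single-level 2-D DWT: approximation plus horizontal/vertical/diagonal details.
_, (cH, cV, cD) = pywt.dwt2(image, "db4")

def snr(detail):
    """Ratio of peak detail energy to mean detail energy, in dB."""
    energy = detail ** 2
    return 10.0 * np.log10(energy.max() / (energy.mean() + 1e-12))

# Horizontal details respond to vertical scanning artifacts and vice versa.
if max(snr(cH), snr(cV)) > 20.0:   # illustrative threshold
    print("defect declared")
else:
    print("no defect")
```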

Development of a Lateral Control System for Autonomous Vehicles Using Data Fusion of Vision and IMU Sensors with Field Tests (비전 및 IMU 센서의 정보융합을 이용한 자율주행 자동차의 횡방향 제어시스템 개발 및 실차 실험)

  • Park, Eun Seong; Yu, Chang Ho; Choi, Jae Weon
    • Journal of Institute of Control, Robotics and Systems / v.21 no.3 / pp.179-186 / 2015
  • In this paper, a novel lateral control system is proposed to improve lane keeping performance independently of GPS signals. Lane keeping is a key function for the realization of unmanned driving systems. To achieve this objective, a vision-sensor-based real-time lane detection scheme is developed. Furthermore, we employ data fusion together with the real-time steering angle of the test vehicle to improve its lane keeping performance; the fused direction data are obtained from an IMU sensor and a vision sensor. The performance of the proposed system was verified by computer simulations and by field tests using a MOHAVE, a commercial vehicle from Kia Motors of Korea.
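
The abstract does not spell out the fusion law, so the sketch below only shows a generic complementary filter for blending a vision-derived heading with an integrated IMU yaw rate; the gain and sample time are placeholders, not the paper's design.

```python
# Generic complementary-filter sketch for vision/IMU heading fusion.
def fuse_heading(prev_heading_rad, imu_yaw_rate_rps, vision_heading_rad,
                 dt_s=0.02, alpha=0.98):
    """Blend the integrated IMU yaw rate (smooth but drifting) with the
    vision heading (absolute but noisier / lower rate)."""
    predicted = prev_heading_rad + imu_yaw_rate_rps * dt_s
    return alpha * predicted + (1.0 - alpha) * vision_heading_rad

# Example step: previous heading 0.10 rad, yaw rate 0.05 rad/s, vision says 0.12 rad.
print(fuse_heading(0.10, 0.05, 0.12))
```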

A Evaluation of Sun Tracking Performance of Parabolic Dish Concentrator using Vision System (비전시스템을 이용한 태양추적시스템의 추적정밀도 평가)

  • 안효진; 박영칠
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2000.10a / pp.408-408 / 2000
  • A parabolic dish concentrator used in high temperature applications of solar energy tracks the sun's movement with a two-axis sun tracking system. In such a system, sun tracking performance directly affects system efficiency; generally, the higher the tracking accuracy, the better the system performance. A large number of parabolic dish type concentrators have been developed and implemented around the world, but none of them clearly provided a method of evaluating how accurately the sun tracking system performs. The work presented here evaluates the sun tracking performance of a parabolic dish concentrator, which follows the sun's movement using a sensor, by means of a computer vision system. We installed a camera on the parabolic dish concentrator, and while the concentrator follows the sun, images of the sun are captured continuously. The performance of the sun tracking system is then evaluated by analyzing the variation of the sun's position in the images.
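
The evaluation idea, locating the sun in each captured frame and measuring how far it drifts from the image center while the concentrator tracks, can be sketched as follows; the frame paths and brightness threshold are illustrative, not the authors' settings.

```python
# Sketch: sun-centroid offset from image center per frame, summarized as RMS error.
import cv2
import numpy as np

def sun_offset_px(frame_gray):
    """Return (dx, dy) of the bright sun blob's centroid from the image center."""
    _, mask = cv2.threshold(frame_gray, 240, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None                      # sun not found in this frame
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    h, w = frame_gray.shape
    return cx - w / 2.0, cy - h / 2.0

offsets = []
for path in ["sun_0001.png", "sun_0002.png"]:   # hypothetical captured frames
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    off = sun_offset_px(gray)
    if off is not None:
        offsets.append(off)

if offsets:
    rms = float(np.sqrt(np.mean(np.sum(np.square(offsets), axis=1))))
    print(f"RMS tracking error: {rms:.1f} px")
```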


Implementation of Measuring System for the Auto Focusing (자동 초점 조절 검사 시스템 설계 및 구현)

  • Lee, Young Kyo; Kim, Young Po
    • Journal of Korea Society of Digital Industry and Information Management / v.8 no.4 / pp.159-165 / 2012
  • Camera lens focusing is an important problem in computer vision and in video measuring systems (VMS) that use CCD cameras and high-precision XYZ stages: the exact focusing position must be determined before accurate measurements can be made. The auto focusing process consists of two steps, a focus value measurement step and an exact focusing position determination step. The focus value measure is suitable for eliminating high frequency noise with low processing time and without blurring. The automatic focusing technique is applied to measuring a crater, with a one-dimensional search algorithm used to find the best focus. In this paper, the suggested auto focusing algorithm is combined with learning; as a result, it is expected that such a combination can be extended to systems that recognize voices in a noisy environment.
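
A minimal sketch of the two steps named above, a per-position focus value and a one-dimensional search over stage height, is given below; the focus measure (variance of the Laplacian) and the `capture_at` stage/camera interface are assumptions, not the paper's exact implementation.

```python
# Sketch: focus value per Z position plus a coarse-to-fine 1-D search.
import cv2
import numpy as np

def focus_value(image_gray):
    """Variance of the Laplacian: one common sharpness measure."""
    return cv2.Laplacian(image_gray, cv2.CV_64F).var()

def autofocus(capture_at, z_min, z_max, coarse_step=50.0, fine_step=5.0):
    """Coarse-to-fine search over stage height z (arbitrary units).
    `capture_at(z)` is assumed to move the stage and return a grayscale image."""
    best_z = max(np.arange(z_min, z_max, coarse_step),
                 key=lambda z: focus_value(capture_at(z)))
    fine = np.arange(best_z - coarse_step, best_z + coarse_step, fine_step)
    return max(fine, key=lambda z: focus_value(capture_at(z)))
```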

Design of Autonomous Stair Robot System (자율주행 형 계단 승하강용 로봇 시스템 설계)

  • 홍영호; 김동환; 임충혁
    • Journal of Institute of Control, Robotics and Systems / v.9 no.1 / pp.73-81 / 2003
  • An autonomous stair-climbing robot that recognizes stairs and climbs up and down them using robot vision, photo sensors, and an appropriate climbing algorithm is introduced. Four arms coupled to four wheels allow the robot to climb up and down more safely and faster than a simple track-type robot. The robot can adjust its wheel base according to the stair width, so it can adapt to stairs of varying width with different algorithms for climbing up and down. The command and image data acquired by the robot are transferred to a main computer through RF wireless modules, and the data are delivered to a remote computer over a network connection with proper data compression, so that real-time image monitoring is implemented effectively.

A New Refinement Method for Structure from Stereo Motion (스테레오 연속 영상을 이용한 구조 복원의 정제)

  • 박성기; 권인소
    • Journal of Institute of Control, Robotics and Systems / v.8 no.11 / pp.935-940 / 2002
  • For robot navigation and visual reconstruction, structure from motion (SFM) is an active issue in the computer vision community, and its properties are becoming well understood. In this paper, using a stereo image sequence and a direct method as the tool for SFM, we present a new method for overcoming the bas-relief ambiguity. We first show that direct methods based on the optical flow constraint equation are also intrinsically exposed to this ambiguity, even though they introduce robust estimation. Therefore, regarding the motion and depth estimates produced by the robust direct method as approximations, we suggest a method that refines both the stereo displacement and the motion displacement with sub-pixel accuracy, which is the central process for reducing the ambiguity. Experiments with real image sequences were executed, and we show that the proposed algorithm improves the estimation accuracy.
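
One generic way to obtain the sub-pixel refinement the abstract mentions is parabolic interpolation of the matching cost around the best integer displacement; the sketch below shows that idea with toy numbers and is not the paper's specific refinement procedure.

```python
# Sketch: sub-pixel disparity from a parabola fit through three cost samples.
import numpy as np

def subpixel_disparity(costs, d_best):
    """Fit a parabola through cost[d-1], cost[d], cost[d+1] and return the
    refined (fractional) disparity at the parabola's minimum."""
    c_m, c_0, c_p = costs[d_best - 1], costs[d_best], costs[d_best + 1]
    denom = c_m - 2.0 * c_0 + c_p
    if denom == 0:
        return float(d_best)
    return d_best + 0.5 * (c_m - c_p) / denom

costs = np.array([9.0, 4.0, 2.5, 3.5, 8.0])   # toy matching costs per disparity
print(subpixel_disparity(costs, int(np.argmin(costs))))   # -> 2.1
```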

Real-Time Fire Detection Method Using YOLOv8 (YOLOv8을 이용한 실시간 화재 검출 방법)

  • Tae Hee Lee; Chun-Su Park
    • Journal of the Semiconductor & Display Technology / v.22 no.2 / pp.77-80 / 2023
  • Since fires in uncontrolled environments pose serious risks to society and to individuals, many researchers have been investigating technologies for early detection of fires that occur in everyday life. Recently, with the development of deep learning vision technology, research on fire detection models using neural network backbones such as the Transformer and the Convolutional Neural Network has been actively conducted. Vision-based fire detection systems can solve many of the problems of physical-sensor-based fire detection systems. This paper proposes a fire detection method using the latest YOLOv8, which improves on existing fire detection methods. The proposed method develops a system that detects sparks and smoke in input images by training the YOLOv8 model on a universal fire detection dataset. We also demonstrate the superiority of the proposed method through experiments comparing it with existing methods.
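
The workflow described above maps directly onto the ultralytics YOLOv8 API; the dataset configuration, epoch count, and test image in the sketch below are placeholders rather than the paper's settings.

```python
# Sketch: fine-tune a pretrained YOLOv8 model on a fire dataset, then run inference.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                                    # small pretrained backbone
model.train(data="fire_dataset.yaml", epochs=50, imgsz=640)   # hypothetical dataset config

# Run inference on a new image and print detected fire/smoke boxes.
results = model("test_scene.jpg")
for box in results[0].boxes:
    print(int(box.cls), float(box.conf), box.xyxy.tolist())
```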
