• Title/Summary/Keyword: computer vision systems


Autonomous-flight Drone Algorithm use Computer vision and GPS (컴퓨터 비전과 GPS를 이용한 드론 자율 비행 알고리즘)

  • Kim, Junghwan;Kim, Shik
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.11 no.3
    • /
    • pp.193-200
    • /
    • 2016
  • This paper introduces an autonomous navigation algorithm for mid- to low-priced drones using computer vision and GPS. Existing drone operation mainly relies on either programming the flight path into the drone's software before flight or following signals transmitted from a controller. In contrast, the proposed algorithm allows the autonomous navigation system to locate a specific place, a specific shape, or a specific space within an area the user wishes to explore. Technology originally developed for the military industry was implemented on a low-cost hobby drone without changing its hardware, using the proposed algorithm to maximize performance. When the user supplies an image of the place to be found, the camera mounted on the drone processes the captured images and searches for the corresponding area of interest. With this algorithm, the autonomous navigation systems of mid- to low-priced drones are expected to find application in a variety of industries.
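The abstract does not specify how the drone matches the user-supplied image against the camera feed; as a minimal illustration of that search step, the following numpy sketch locates a template in a larger frame by normalized cross-correlation (NCC is an assumption here, not the paper's stated algorithm):

```python
import numpy as np

def ncc_match(image, template):
    """Slide the template over the image and return the (row, col) of the
    best normalized cross-correlation score -- a stand-in for the
    'search for the user-supplied image' step described above."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best, best_pos = -1.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * t_norm
            if denom == 0:
                continue  # flat window: correlation undefined
            score = (wz * t).sum() / denom
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best
```

A real system would run this (or a faster FFT-based equivalent) per frame and steer the drone toward the best-scoring location.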

Development of an Intelligent Control System to Integrate Computer Vision Technology and Big Data of Safety Accidents in Korea

  • KANG, Sung Won;PARK, Sung Yong;SHIN, Jae Kwon;YOO, Wi Sung;SHIN, Yoonseok
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.721-727
    • /
    • 2022
  • Construction safety remains an ongoing concern, and project managers have been increasingly forced to cope with myriad uncertainties related to human operations on construction sites and the lack of a skilled workforce in hazardous circumstances. Various construction fatality monitoring systems have been widely proposed as alternatives to overcome these difficulties and to improve safety management performance. In this study, we propose an intelligent, automatic control system that can proactively protect workers using both the analysis of big data of past safety accidents and the real-time detection of worker non-compliance in using personal protective equipment (PPE) on a construction site. These data are obtained using computer vision technology and data analytics, which are integrated and reinforced by lessons learned from the analysis of big data of safety accidents that occurred in the last 10 years. The system offers data-informed recommendations for high-risk workers and proactively eliminates the possibility of safety accidents. As an illustrative case, we selected a pilot project and applied the proposed system to workers in uncontrolled environments. Decreases in workers' PPE non-compliance rates, improvements in variable compliance rates, reductions in severe fatalities through guidelines customized to each worker, and accelerated achievement of safety performance goals are expected.
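The PPE detector itself is not described in the abstract; the sketch below only illustrates the per-worker non-compliance rate such a system might track downstream of the vision model, with `Detection` and `wearing_helmet` as hypothetical names:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One per-frame result from a (hypothetical) PPE vision detector."""
    worker_id: str
    wearing_helmet: bool

def noncompliance_rate(detections):
    """Fraction of detections in which PPE was missing -- the kind of
    per-worker metric the proposed system would accumulate; the
    detector producing these records is assumed, not shown."""
    if not detections:
        return 0.0
    missing = sum(1 for d in detections if not d.wearing_helmet)
    return missing / len(detections)
```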


Design and Implementation of Automatic Detection Method of Corners of Grid Pattern from Distortion Corrected Image (왜곡보정 영상에서의 그리드 패턴 코너의 자동 검출 방법의 설계 및 구현)

  • Cheon, Sweung-Hwan;Jang, Jong-Wook;Jang, Si-Woong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.17 no.11
    • /
    • pp.2645-2652
    • /
    • 2013
  • For a variety of vision systems, such as car omni-directional surveillance systems and robot vision systems, many cameras are deployed. To detect the corners of a grid pattern in AVM (Around View Monitoring) systems, the non-linear radial distortion in images obtained from a wide-angle camera must first be corrected, and the grid corners must then be detected in the corrected image. Although corner detection methods such as Sub-Pixel refinement and the Hough transform exist for AVM systems, Sub-Pixel methods are difficult to automate and the Hough transform lacks accuracy. We therefore designed, implemented, and evaluated an automatic detection method that accurately detects corners in the distortion-corrected image, and showed that it can be applied to AVM systems.
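The paper's own detector is not reproduced here; as a generic illustration of automating corner detection on a corrected image, a minimal Harris corner response in numpy (a standard technique, offered only as a sketch):

```python
import numpy as np

def harris_corners(img, k=0.04, thresh=0.01):
    """Minimal Harris corner response on a grayscale float image.
    Returns a boolean mask of pixels whose response exceeds a fraction
    of the maximum (assumes at least one corner is present)."""
    Iy, Ix = np.gradient(img.astype(float))  # central-difference gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box3(a):
        # 3x3 box filter via shifted sums over an edge-padded copy
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box3(Ixx), box3(Iyy), box3(Ixy)
    det = Sxx * Syy - Sxy ** 2          # determinant of structure tensor
    trace = Sxx + Syy
    R = det - k * trace ** 2            # Harris response
    return R > thresh * R.max()
```

Edges yield a negative response and flat regions zero, so only true corners survive the threshold.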

Visual Tracking Control of Aerial Robotic Systems with Adaptive Depth Estimation

  • Metni, Najib;Hamel, Tarek
    • International Journal of Control, Automation, and Systems
    • /
    • v.5 no.1
    • /
    • pp.51-60
    • /
    • 2007
  • This paper describes a visual tracking control law for an Unmanned Aerial Vehicle (UAV) intended for the monitoring of structures and the maintenance of bridges. It presents a control law based on computer vision for quasi-stationary flight above a planar target. The first part of the UAV's mission is to navigate from an initial position to a final position in order to define a desired trajectory in an unknown 3D environment. The proposed method uses the homography matrix computed from the visual information and derives, using backstepping techniques, an adaptive nonlinear tracking control law that allows effective tracking and depth estimation. The depth represents the desired distance separating the camera from the target.
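The homography at the core of the control law can be estimated from point correspondences on the planar target. A minimal DLT (direct linear transform) sketch follows; DLT is the standard estimator and only an assumption about this paper's implementation:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst points via the
    direct linear transform. In the paper, a matrix like this (computed
    from visual features on the planar target) feeds the backstepping
    control law, which is not reproduced here."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # each correspondence contributes two rows of the DLT system
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # the homography is the right-singular vector of the smallest
    # singular value (the null space of A for exact correspondences)
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

Four non-collinear correspondences determine H exactly; more points give a least-squares fit.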

Defect Detection Method using Human Visual System and MMTF (MMTF와 인간지각 특성을 이용한 결함성분 추출기법)

  • Huh, Kyung-Moo;Joo, Young-Bok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.19 no.12
    • /
    • pp.1094-1098
    • /
    • 2013
  • AVI (Automatic Vision Inspection) systems automatically detect defect features and measure their sizes via camera vision. Defect detection is not an easy process because of noise from various sources and optical distortion. In this paper, images acquired from a TFT panel are enhanced by adopting an HVS (Human Visual System) model. The human visual system is more sensitive to the defect area than to the illumination components because of its greater sensitivity to variations in intensity. We modified an MTF (Modulation Transfer Function) in the wavelet domain and exploited these HVS characteristics. The proposed algorithm flattens the inner illumination components while preserving the defect information intact.
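As a rough analogue of suppressing illumination while keeping defect detail in the wavelet domain, the sketch below uses a one-level Haar transform and flattens the low-frequency band; the paper's modified MTF weighting is not reproduced, and the Haar basis is an assumption:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: LL holds slowly varying
    illumination, the LH/HL/HH detail bands hold edge-like defect energy."""
    a = img.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0   # row-direction averages
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0   # row-direction differences
    LL = (lo[0::2] + lo[1::2]) / 2.0
    LH = (lo[0::2] - lo[1::2]) / 2.0
    HL = (hi[0::2] + hi[1::2]) / 2.0
    HH = (hi[0::2] - hi[1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    lo = np.empty((LL.shape[0] * 2, LL.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2], lo[1::2] = LL + LH, LL - LH
    hi[0::2], hi[1::2] = HL + HH, HL - HH
    out = np.empty((lo.shape[0], lo.shape[1] * 2))
    out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
    return out

def flatten_illumination(img, gain=0.0):
    """Shrink the LL (illumination) band toward its mean while keeping
    the detail bands intact -- a crude stand-in for the HVS-weighted
    MTF modification described in the abstract."""
    LL, LH, HL, HH = haar2d(img)
    LL = gain * LL + (1.0 - gain) * LL.mean()
    return ihaar2d(LL, LH, HL, HH)
```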

Crowd Activity Recognition using Optical Flow Orientation Distribution

  • Kim, Jinpyung;Jang, Gyujin;Kim, Gyujin;Kim, Moon-Hyun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.8
    • /
    • pp.2948-2963
    • /
    • 2015
  • In the field of computer vision, visual surveillance systems have recently become an important research topic. Growth in this area is being driven both by the increasing availability of inexpensive computing devices and image sensors and by the general inefficiency of manual surveillance and monitoring. In particular, the ultimate goal of many visual surveillance systems is to provide automatic activity recognition for events at a given site. A higher-level understanding of these activities requires certain lower-level computer vision tasks to be performed. In this paper, we propose an intelligent activity recognition model that uses a structure learning method and a classification method. The structure learning method is a K2 learning algorithm that generates Bayesian networks of causal relationships between sensors for a given activity. The statistical characteristics of the sensor values and the topological characteristics of the generated graphs are learned for each activity, and a neural network is then designed to classify the current activity according to features extracted from the collected multi-sensor values. Finally, the proposed method is implemented and tested using the PETS2013 benchmark data.
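The title's "optical flow orientation distribution" can be sketched as a magnitude-weighted histogram of flow directions; the dense flow field itself (from any flow estimator) is assumed given, and the binning details are illustrative:

```python
import numpy as np

def orientation_histogram(flow_u, flow_v, bins=8, min_mag=1e-3):
    """Quantize optical-flow vectors into a magnitude-weighted
    orientation histogram -- a simple form of the 'optical flow
    orientation distribution' feature named in the title."""
    ang = np.arctan2(flow_v, flow_u)            # direction in [-pi, pi]
    mag = np.hypot(flow_u, flow_v)              # flow magnitude
    keep = mag.ravel() > min_mag                # drop near-static pixels
    idx = ((ang.ravel()[keep] + np.pi) / (2 * np.pi) * bins).astype(int) % bins
    hist = np.bincount(idx, weights=mag.ravel()[keep], minlength=bins)
    s = hist.sum()
    return hist / s if s > 0 else hist          # normalize to a distribution
```

A crowd moving coherently concentrates mass in one bin; chaotic motion spreads it across bins, which is what makes the distribution discriminative for activity recognition.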

Visual Bean Inspection Using a Neural Network

  • Kim, Taeho;Yongtae Do
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2003.09a
    • /
    • pp.644-647
    • /
    • 2003
  • This paper describes a neural network-based machine vision system designed for inspecting yellow beans in real time. The system consists of a camera, lights, a belt conveyor, air ejectors, and a computer. Beans are conveyed in four lines on a belt, and their images are taken by a monochrome line-scan camera as they fall from the belt. Back-lighting separates the beans easily from the image background. After the image is analyzed, a decision is made by a multilayer artificial neural network (ANN) trained with the error back-propagation (EBP) algorithm. The global mean, variance, and local change of a bean's gray levels are used as the input nodes of the network. In our experiment, the system could process about 520 kg/hour.
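The three network inputs named in the abstract can be sketched as follows; the exact "local change" measure is an assumption (mean absolute difference of horizontal neighbors), and the ANN that consumes these features is not shown:

```python
import numpy as np

def bean_features(img):
    """Compute the three inputs the paper feeds to its EBP-trained
    network: global mean, global variance, and a local-change measure
    of the gray levels (here, mean absolute neighbor difference --
    the paper's exact local measure is an assumption)."""
    a = img.astype(float)
    local_change = np.abs(np.diff(a, axis=1)).mean()
    return np.array([a.mean(), a.var(), local_change])
```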


Vision-based Navigation for VTOL Unmanned Aerial Vehicle Landing (수직이착륙 무인항공기 자동 착륙을 위한 영상기반 항법)

  • Lee, Sang-Hoon;Song, Jin-Mo;Bae, Jong-Sue
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.18 no.3
    • /
    • pp.226-233
    • /
    • 2015
  • Pose estimation is an important operation for many vision tasks. This paper presents a method of estimating the camera pose using a known landmark, for the purpose of autonomous vertical takeoff and landing (VTOL) unmanned aerial vehicle (UAV) landing. The proposed method takes a distinctive approach to the pose estimation problem: extrinsic parameters from known and unknown 3-D (three-dimensional) feature points and the inertial estimate of the camera's 6-DOF (Degrees Of Freedom) pose are combined into one linear inhomogeneous equation. This allows singular value decomposition (SVD) to solve the given optimization problem neatly. We present experimental results that demonstrate the ability of the proposed method to estimate the camera's 6-DOF pose while remaining easy to implement.
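Solving one stacked linear inhomogeneous system by SVD, as the abstract describes, reduces to a minimum-norm least-squares solve. A minimal numpy sketch of that numerical core (the construction of the pose constraint rows is the paper's contribution and is not reproduced):

```python
import numpy as np

def svd_solve(A, b):
    """Minimum-norm least-squares solution of A x = b via SVD.
    Small singular values below a numerical tolerance are treated as
    zero, so rank-deficient systems still return the min-norm answer."""
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    tol = max(A.shape) * np.finfo(float).eps * s[0]
    s_inv = np.where(s > tol, 1.0 / s, 0.0)   # pseudo-inverse of singular values
    return Vt.T @ (s_inv * (U.T @ b))
```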

Human Action Recognition Using Pyramid Histograms of Oriented Gradients and Collaborative Multi-task Learning

  • Gao, Zan;Zhang, Hua;Liu, An-An;Xue, Yan-Bing;Xu, Guang-Ping
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.2
    • /
    • pp.483-503
    • /
    • 2014
  • In this paper, human action recognition using pyramid histograms of oriented gradients and collaborative multi-task learning is proposed. First, we accumulate global activities and construct a motion history image (MHI) for the RGB and depth channels respectively to encode the dynamics of an action in different modalities; different action descriptors are then extracted from the depth and RGB MHIs to represent the global textural and structural characteristics of these actions. Specifically, average values in hierarchical blocks, GIST, and pyramid histograms of oriented gradients descriptors are employed to represent human motion. To demonstrate the superiority of the proposed method, we evaluate the descriptors with KNN, SVM with linear and RBF kernels, SRC, and CRC models on the DHA dataset, a well-known dataset for human action recognition. Large-scale experimental results show that our descriptors are robust, stable, and efficient, and outperform the state-of-the-art methods. In addition, we further investigate the performance of our descriptors by combining them on the DHA dataset, and observe that combined descriptors perform much better than any single descriptor alone. With multimodal features, we also propose a collaborative multi-task learning method for model learning and inference based on transfer learning theory. The main contributions lie in four aspects: 1) the proposed encoding scheme can filter out the stationary parts of the human body and reduce noise interference; 2) different kinds of features and models are assessed, and the neighboring gradient information and pyramid layers are very helpful for representing these actions; 3) the proposed model can fuse features from different modalities regardless of the sensor types, the ranges of the values, and the dimensions of the features; 4) the latent common knowledge among different modalities can be discovered by transfer learning to boost performance.
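The MHI construction underlying these descriptors can be sketched as a simple per-pixel update; `tau` and the motion threshold below are illustrative parameters, not the paper's values:

```python
import numpy as np

def update_mhi(mhi, frame_diff, tau=10, thresh=0.1):
    """One update step of a motion history image: pixels where the
    inter-frame difference exceeds the threshold are stamped with the
    current 'time' tau, while static pixels decay by 1 toward zero.
    Descriptors (GIST, PHOG, block averages) are then extracted from
    the resulting gray-level history."""
    moving = np.abs(frame_diff) > thresh
    return np.where(moving, float(tau), np.maximum(mhi - 1.0, 0.0))
```

Recent motion therefore appears bright and older motion progressively darker, which is what lets a single image encode an action's dynamics.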

Computer vision and deep learning-based post-earthquake intelligent assessment of engineering structures: Technological status and challenges

  • T. Jin;X.W. Ye;W.M. Que;S.Y. Ma
    • Smart Structures and Systems
    • /
    • v.31 no.4
    • /
    • pp.311-323
    • /
    • 2023
  • Ever since ancient times, earthquakes have been a major threat to civil infrastructure and to human safety. The majority of casualties in earthquake disasters are caused by damaged civil infrastructure rather than by the earthquake itself. Therefore, efficient and accurate post-earthquake assessment of structural damage conditions has been an urgent need for human society. Traditional approaches to post-earthquake structural assessment rely heavily on field investigation by experienced experts, yet such investigation is inevitably subjective and inefficient. Structural response data are also applied to assess damage; however, this requires sensor networks mounted in advance and is not intuitive. As many types of structural damage states are visible, computer vision-based post-earthquake structural assessment has attracted great attention among engineers and scholars. With the development of image acquisition sensors, computing resources, and deep learning algorithms, deep learning-based post-earthquake structural assessment has gradually shown potential in dealing with image acquisition and processing tasks. This paper comprehensively reviews the state-of-the-art studies of deep learning-based post-earthquake structural assessment in recent years. Conventional image processing and machine learning-based approaches to structural assessment are presented briefly. The workflow of the methodology for computer vision and deep learning-based post-earthquake structural assessment is introduced. Applications of the assessment of multiple types of civil infrastructure are then presented in detail. Finally, the challenges of current studies are summarized as a reference for future work to improve the efficiency, robustness, and accuracy of this field.