• Title/Abstract/Keywords: computer vision systems


Autonomous-Flight Drone Algorithm Using Computer Vision and GPS

  • 김정환;김식
    • 대한임베디드공학회논문지 / Vol.11 No.3 / pp.193-200 / 2016
  • This paper introduces an algorithm for an autonomous navigation flight system for low- to mid-priced drones using computer vision and GPS. Existing drone operation methods mainly rely on either pre-programming the flight path into the drone's software before flight or following signals transmitted from a controller. In contrast, the algorithm introduced here allows the autonomous navigation system to locate a specific place, a specific shape, or a specific space within an area the user wishes to explore. Technology originally developed for the military industry was implemented on a low-cost hobby drone without hardware modification, and the proposed algorithm was used to maximize its performance. When the user supplies an image of the place to be found, the camera mounted on the drone processes the captured imagery and searches for the corresponding area of interest. With this algorithm, the autonomous navigation flight system for low- to mid-priced drones is expected to be applicable to a variety of industries.
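The abstract does not specify the matching method; as a hypothetical illustration of one component such a system might use — locating a user-supplied reference image inside a camera frame by normalized cross-correlation — a minimal sketch could look like:

```python
import numpy as np

def find_template(frame, template):
    """Return (row, col) of the best match of `template` in `frame`
    using brute-force normalized cross-correlation on grayscale arrays."""
    fh, fw = frame.shape
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best, best_pos = -np.inf, (0, 0)
    for r in range(fh - th + 1):
        for c in range(fw - tw + 1):
            w = frame[r:r + th, c:c + tw]
            w = w - w.mean()
            denom = np.linalg.norm(w) * tn
            # skip flat windows to avoid division by zero
            score = (w * t).sum() / denom if denom > 0 else -np.inf
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

A real system would use a scale- and rotation-tolerant detector rather than raw correlation, but the search-and-localize idea is the same.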

Development of an Intelligent Control System to Integrate Computer Vision Technology and Big Data of Safety Accidents in Korea

  • KANG, Sung Won;PARK, Sung Yong;SHIN, Jae Kwon;YOO, Wi Sung;SHIN, Yoonseok
    • 국제학술발표논문집 / The 9th International Conference on Construction Engineering and Project Management / pp.721-727 / 2022
  • Construction safety remains an ongoing concern, and project managers are increasingly forced to cope with myriad uncertainties related to human operations on construction sites and the lack of a skilled workforce in hazardous circumstances. Various construction fatality monitoring systems have been proposed as alternatives to overcome these difficulties and to improve safety management performance. In this study, we propose an intelligent, automatic control system that can proactively protect workers by combining the analysis of big data on past safety accidents with real-time detection of workers' non-compliance in using personal protective equipment (PPE) on a construction site. These data are obtained using computer vision technology and data analytics, which are integrated and reinforced by lessons learned from analyzing big data on safety accidents that occurred in the last 10 years. The system offers data-informed recommendations for high-risk workers and proactively eliminates the possibility of safety accidents. As an illustrative case, we selected a pilot project and applied the proposed system to workers in uncontrolled environments. Decreases in workers' PPE non-compliance rates, improvements in variable compliance rates, reductions in severe fatalities through guidelines customized to each worker, and accelerations in safety performance achievements are expected.


Design and Implementation of an Automatic Detection Method for Grid-Pattern Corners in Distortion-Corrected Images

  • 천승환;장종욱;장시웅
    • 한국정보통신학회논문지 / Vol.17 No.11 / pp.2645-2652 / 2013
  • Cameras are used in a variety of vision systems, such as omni-directional surveillance for vehicles and robot vision. To detect the corners of a grid pattern in an AVM (Around View Monitoring) system, the nonlinear radial distortion of images acquired from a wide-angle camera must first be corrected, and the corners of the grid pattern within the corrected image must then be detected automatically. Existing line- and corner-detection methods used in AVM systems include the sub-pixel method and the Hough transform, but the sub-pixel method is difficult to automate and the Hough transform has accuracy problems. This paper therefore designs and implements a method that takes a distortion-corrected image as input and automatically and accurately detects the corners of the grid pattern, and shows through performance evaluation that it can be applied to the corner-detection stage of an AVM system.
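The paper's own detector is not reproduced in the abstract; as a hedged sketch of a standard corner-response alternative to the sub-pixel and Hough approaches it criticizes (a Harris-style detector, assumed here rather than taken from the paper), one could write:

```python
import numpy as np

def harris_corners(img, k=0.04, thresh=0.1):
    """Harris corner response on a grayscale image; returns a boolean
    mask of pixels whose response exceeds thresh * (max response)."""
    # image gradients via central differences
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):
        # 3x3 box filter (structure-tensor smoothing)
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    R = det - k * trace * trace  # corner: R > 0, edge: R < 0, flat: R ~ 0
    return R > thresh * R.max()
```

For a calibration grid, candidate corners from such a response map would typically be refined and ordered into the grid topology afterward.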

Visual Tracking Control of Aerial Robotic Systems with Adaptive Depth Estimation

  • Metni, Najib;Hamel, Tarek
    • International Journal of Control, Automation, and Systems / Vol.5 No.1 / pp.51-60 / 2007
  • This paper describes a visual tracking control law for an Unmanned Aerial Vehicle (UAV) used in structural monitoring and bridge maintenance. It presents a control law based on computer vision for quasi-stationary flights above a planar target. The first part of the UAV's mission is navigating from an initial position to a final position to define a desired trajectory in an unknown 3D environment. The proposed method uses the homography matrix computed from the visual information and derives, using backstepping techniques, an adaptive nonlinear tracking control law that enables effective tracking and depth estimation, where the depth is the desired distance separating the camera from the target.
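The control law builds on a homography computed from visual information. As background, a minimal direct linear transform (DLT) estimate of a homography from point correspondences — a standard construction, not the paper's own implementation — might look like:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src (homogeneous)
    from >= 4 point correspondences via the direct linear transform.
    No coordinate normalization is done, so this sketch suits
    well-scaled, low-noise points only."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # each correspondence contributes two rows of the DLT system
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # the homography is the right singular vector of the smallest
    # singular value (null space of A for exact data)
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]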

Defect Detection Method Using the Human Visual System and MMTF

  • 허경무;주영복
    • 제어로봇시스템학회논문지 / Vol.19 No.12 / pp.1094-1098 / 2013
  • AVI (Automatic Vision Inspection) systems automatically detect defect features and measure their sizes via camera vision. Defect detection is not an easy process because of noise from various sources and optical distortion. In this paper, images acquired from a TFT panel are enhanced by adopting an HVS (Human Visual System) model. The human visual system is more sensitive to the defect area than to the illumination components because of its greater sensitivity to variations in intensity. We modify an MTF (Modulation Transfer Function) in the wavelet domain and utilize the characteristics of the HVS. The proposed algorithm flattens the inner illumination components while preserving the defect information intact.
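The paper's MMTF weighting operates in the wavelet domain; as a toy stand-in (a plain one-level Haar transform with the smooth LL band attenuated — an assumption for illustration, not the authors' MMTF), illumination flattening can be sketched as:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands.
    Assumes even image dimensions."""
    a = img.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0   # row-wise averages
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0   # row-wise differences
    LL = (lo[0::2] + lo[1::2]) / 2.0
    LH = (lo[0::2] - lo[1::2]) / 2.0
    HL = (hi[0::2] + hi[1::2]) / 2.0
    HH = (hi[0::2] - hi[1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    h, w = LL.shape
    lo = np.empty((2 * h, w)); hi = np.empty((2 * h, w))
    lo[0::2], lo[1::2] = LL + LH, LL - LH
    hi[0::2], hi[1::2] = HL + HH, HL - HH
    out = np.empty((2 * h, 2 * w))
    out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
    return out

def flatten_illumination(img, gain=0.0):
    """Suppress the smooth illumination (LL band) while keeping the
    detail bands that carry defect edges; gain=0 removes LL entirely."""
    LL, LH, HL, HH = haar2d(img)
    return ihaar2d(gain * LL, LH, HL, HH)
```

A constant (pure-illumination) image is driven to zero, while high-frequency defect structure survives in the detail bands.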

Crowd Activity Recognition using Optical Flow Orientation Distribution

  • Kim, Jinpyung;Jang, Gyujin;Kim, Gyujin;Kim, Moon-Hyun
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol.9 No.8 / pp.2948-2963 / 2015
  • In the field of computer vision, visual surveillance systems have recently become an important research topic. Growth in this area is driven both by the increasing availability of inexpensive computing devices and image sensors and by the general inefficiency of manual surveillance and monitoring. In particular, the ultimate goal of many visual surveillance systems is to provide automatic activity recognition for events at a given site, and a higher-level understanding of these activities requires certain lower-level computer vision tasks to be performed. In this paper, we propose an intelligent activity recognition model that uses a structure learning method and a classification method. The structure learning method is a K2 learning algorithm that generates Bayesian networks of causal relationships between sensors for a given activity. The statistical characteristics of the sensor values and the topological characteristics of the generated graphs are learned for each activity, and a neural network is then designed to classify the current activity according to features extracted from the collected multi-sensor values. Finally, the proposed method is implemented and tested using the PETS2013 benchmark data.
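As an illustration of the low-level quantity named in the title — a distribution of optical-flow orientations — a minimal sketch (assuming dense flow components `flow_u`, `flow_v` are already computed by some flow method) might be:

```python
import numpy as np

def orientation_histogram(flow_u, flow_v, bins=8, min_mag=1e-3):
    """Histogram of optical-flow orientations over `bins` equal sectors
    of [0, 2*pi), ignoring near-zero vectors; normalized to sum to 1."""
    ang = np.arctan2(flow_v, flow_u) % (2 * np.pi)
    mag = np.hypot(flow_u, flow_v)
    ang = ang[mag > min_mag]          # drop static pixels
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi))
    s = hist.sum()
    return hist / s if s else hist.astype(float)
```

Such per-frame histograms are a natural sensor-level feature to feed into the structure learning and classification stages the abstract describes.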

Visual Bean Inspection Using a Neural Network

  • Kim, Taeho;Do, Yongtae
    • 한국지능시스템학회:학술대회논문집 / 한국퍼지및지능시스템학회 ISIS 2003 / pp.644-647 / 2003
  • This paper describes a neural network-based machine vision system designed for inspecting yellow beans in real time. The system consists of a camera, lights, a belt conveyor, air ejectors, and a computer. Beans are conveyed in four lines on a belt, and their images are taken by a monochrome line-scan camera as they fall from the belt. Back-lighting separates the beans easily from their background in the images. After the image is analyzed, a decision is made by a multilayer artificial neural network (ANN) trained with the error back-propagation (EBP) algorithm. The global mean, variance, and local change of gray levels of a bean serve as the input nodes of the network. In our experiment, the system could process about 520 kg/hour.
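The input features named in the abstract (global mean, variance, and local change of gray levels) can be sketched as a simple extractor; the ANN itself and the exact definition of "local change" are not given in the abstract, so the gradient proxy below is an assumption:

```python
import numpy as np

def bean_features(img):
    """Three-element feature vector in the spirit of the paper's ANN
    inputs: global mean, global variance, and mean absolute local
    change (a first-difference proxy) of gray levels."""
    a = img.astype(float)
    local = (np.abs(np.diff(a, axis=0)).mean()
             + np.abs(np.diff(a, axis=1)).mean())
    return np.array([a.mean(), a.var(), local])
```

Each segmented bean image would be reduced to such a vector before classification by the EBP-trained network.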


Vision-based Navigation for VTOL Unmanned Aerial Vehicle Landing

  • 이상훈;송진모;배종수
    • 한국군사과학기술학회지 / Vol.18 No.3 / pp.226-233 / 2015
  • Pose estimation is an important operation for many vision tasks. This paper presents a method of estimating the camera pose using a known landmark for autonomous vertical takeoff and landing (VTOL) unmanned aerial vehicle (UAV) landing. The proposed method uses a distinctive methodology to solve the pose estimation problem: extrinsic parameters from known and unknown 3-D (three-dimensional) feature points and the inertial estimate of the camera's 6-DOF (Degree Of Freedom) pose are combined into one linear inhomogeneous equation. This allows us to use singular value decomposition (SVD) to neatly solve the given optimization problem. We present experimental results that demonstrate the ability of the proposed method to estimate the camera's 6-DOF pose with ease of implementation.
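The key numerical step — stacking the unknowns into one linear inhomogeneous equation and solving it with SVD — amounts to an SVD-based least-squares solve. A generic sketch (the matrix `A` and vector `b` stand in for the paper's actual stacking, which the abstract does not spell out):

```python
import numpy as np

def solve_pose_ls(A, b):
    """Least-squares solution of the inhomogeneous system A x = b via
    the SVD pseudo-inverse, as used to recover stacked pose unknowns."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # invert only well-conditioned singular values for stability
    s_inv = np.where(s > 1e-10 * s[0], 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ b))
```

The truncation of small singular values is what makes the solve robust when the stacked system is nearly rank-deficient.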

Human Action Recognition Using Pyramid Histograms of Oriented Gradients and Collaborative Multi-task Learning

  • Gao, Zan;Zhang, Hua;Liu, An-An;Xue, Yan-Bing;Xu, Guang-Ping
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol.8 No.2 / pp.483-503 / 2014
  • In this paper, human action recognition using pyramid histograms of oriented gradients and collaborative multi-task learning is proposed. First, we accumulate global activities and construct a motion history image (MHI) for the RGB and depth channels respectively to encode the dynamics of an action in different modalities, and then extract different action descriptors from the depth and RGB MHIs to represent the global textural and structural characteristics of these actions. Specifically, average values in hierarchical blocks, GIST, and pyramid histograms of oriented gradients descriptors are employed to represent human motion. To demonstrate the superiority of the proposed method, we evaluate the descriptors with KNN, SVM with linear and RBF kernels, and SRC and CRC models on the DHA dataset, a well-known dataset for human action recognition. Large-scale experimental results show that our descriptors are robust, stable, and efficient, and outperform state-of-the-art methods. In addition, we further investigate the performance of our descriptors by combining them on the DHA dataset, and observe that combined descriptors perform much better than any single descriptor. With multimodal features, we also propose a collaborative multi-task learning method for model learning and inference based on transfer learning theory. The main contributions lie in four aspects: 1) the proposed encoding scheme can filter out the stationary parts of the human body and reduce noise interference; 2) different kinds of features and models are assessed, and the neighboring gradient information and pyramid layers are very helpful for representing these actions; 3) the proposed model can fuse features from different modalities regardless of the sensor types, value ranges, and feature dimensions; 4) the latent common knowledge among different modalities can be discovered by transfer learning to boost performance.
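As background for the motion history image (MHI) the descriptors are built on, one standard per-frame update step can be sketched as follows (the parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def update_mhi(mhi, frame_diff, tau=255.0, delta=32.0, thresh=30.0):
    """One MHI update step: pixels that moved (absolute frame
    difference above thresh) are set to tau; all other pixels decay
    by delta toward zero, so recent motion stays brightest."""
    moved = np.abs(frame_diff) > thresh
    return np.where(moved, tau, np.maximum(mhi - delta, 0.0))
```

Iterating this over a clip leaves a single grayscale image whose intensity ramp encodes where and how recently motion occurred; the paper's block-average, GIST, and PHOG descriptors are then computed from such images.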

Computer vision and deep learning-based post-earthquake intelligent assessment of engineering structures: Technological status and challenges

  • T. Jin;X.W. Ye;W.M. Que;S.Y. Ma
    • Smart Structures and Systems / Vol.31 No.4 / pp.311-323 / 2023
  • Ever since ancient times, earthquakes have been a major threat to civil infrastructure and human safety. The majority of casualties in earthquake disasters are caused by damaged civil infrastructure, not by the earthquake itself. Therefore, efficient and accurate post-earthquake assessment of structural damage conditions has been an urgent need for human society. Traditional post-earthquake structural assessment relies heavily on field investigation by experienced experts, yet this is inevitably subjective and inefficient. Structural response data are also applied to assess damage; however, this requires sensor networks to be mounted in advance and is not intuitive. As many types of structural damage states are visible, computer vision-based post-earthquake structural assessment has attracted great attention among engineers and scholars. With the development of image acquisition sensors, computing resources, and deep learning algorithms, deep learning-based post-earthquake structural assessment has gradually shown its potential for image acquisition and processing tasks. This paper comprehensively reviews the state-of-the-art studies of deep learning-based post-earthquake structural assessment in recent years. Conventional image processing and machine learning-based structural assessment are presented briefly, and the workflow of computer vision and deep learning-based post-earthquake structural assessment is introduced. Applications of the assessment of multiple civil infrastructures are then presented in detail. Finally, the challenges of current studies are summarized as a reference for future works to improve efficiency, robustness, and accuracy in this field.