• Title/Summary/Keyword: Vision-based


Reflectance estimation for infrared and visible image fusion

  • Gu, Yan;Yang, Feng;Zhao, Weijun;Guo, Yiliang;Min, Chaobo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.8
    • /
    • pp.2749-2763
    • /
    • 2021
  • The desirable result of infrared (IR) and visible (VIS) image fusion should contain the textural details of the VIS image and the salient targets of the IR image. However, detail in the dark regions of a VIS image has low contrast and blurry edges, degrading fusion performance. To resolve the problem of fuzzy details in dark regions of the VIS image, we propose a reflectance estimation method for IR and VIS image fusion. To maintain and enhance details in these dark regions, dark region approximation (DRA) is proposed to optimize the Retinex model. With the improved, DRA-based Retinex model, a quasi-Newton method is adopted to estimate the reflectance of the VIS image. The final fusion result is obtained by fusing the DRA-based reflectance of the VIS image with the IR image. Our method simultaneously retains the low-visibility details of VIS images and the high-contrast targets of IR images. Experimental results show that, compared to several advanced approaches, the proposed method is superior in detail preservation and visual quality.
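The pipeline in this abstract can be sketched in miniature on a 1-D image row. The moving-average illumination estimate and the max-fusion rule below are illustrative stand-ins for the paper's DRA-optimized Retinex model (solved by quasi-Newton iteration) and its actual fusion rule; only the log-domain Retinex decomposition itself is standard.

```python
import math

def estimate_reflectance(vis_row, window=3):
    # Illumination estimated with a moving average -- a crude stand-in
    # for the paper's DRA-optimized Retinex model and quasi-Newton solve.
    n = len(vis_row)
    illum = []
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        illum.append(sum(vis_row[lo:hi]) / (hi - lo))
    # Retinex decomposition in the log domain: log R = log I - log L
    return [math.log(v + 1e-6) - math.log(l + 1e-6)
            for v, l in zip(vis_row, illum)]

def fuse_rows(vis_reflectance, ir_row):
    # Toy fusion rule (not the paper's): keep the stronger response at
    # each pixel, so VIS texture and salient IR targets both survive.
    ir_norm = [x / 255.0 for x in ir_row]
    return [max(r, t) for r, t in zip(vis_reflectance, ir_norm)]

vis = [30, 32, 31, 200, 198, 29, 28, 30]   # dark scene with bright detail
ir  = [20, 22, 21, 25, 24, 240, 238, 22]   # hot target invisible in VIS
fused = fuse_rows(estimate_reflectance(vis), ir)
print(len(fused))  # 8 fused samples
```

The hot IR target at index 5 dominates the fused row even though the VIS pixel there is dark, which is the behavior the abstract describes.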

Computer vision monitoring and detection for landslides

  • Chen, Tim;Kuo, C.F.;Chen, J.C.Y.
    • Structural Monitoring and Maintenance
    • /
    • v.6 no.2
    • /
    • pp.161-171
    • /
    • 2019
  • A number of monitoring systems have been designed to protect and improve the quality of the natural environment, but most of them are limited in their capabilities. In this paper, an intelligent monitoring system designed for disaster relief and services is presented. The desired services, the requirements, and the resulting design proposal are specified. This leads to a system that relies primarily on environmental analysis in order to offer care and security services with the autonomy of natural habitats. In this sense, environmental recognition is considered, where, building on previous work, novel contributions are made to support feature-based and computer vision scenarios. This novel computer vision technique, used as a warning system for landslide detection, relies on changes in the natural terrain. A multi-criteria decision-making method is used to integrate slope information and the degree of variation of the features. The simulation results of feature point detection are shown in feature map matching to find stable, matching feature points, which are successfully detected using these two techniques by examining the variation in the detected features and the feature matching.
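The warning idea, detecting terrain change from the variation of matched feature points, can be sketched as follows. The function name, the displacement metric, and the 5-pixel threshold are illustrative assumptions, not values from the paper.

```python
def terrain_change_alarm(ref_points, new_points, threshold=5.0):
    # Matched feature points from two observation epochs of the same
    # slope; a large average displacement of otherwise stable points
    # suggests terrain movement. The pixel threshold is illustrative.
    displacements = [((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
                     for (x1, y1), (x2, y2) in zip(ref_points, new_points)]
    mean_shift = sum(displacements) / len(displacements)
    return mean_shift > threshold, mean_shift

stable = [(10, 10), (50, 40), (80, 90)]   # reference feature map
moved  = [(14, 18), (57, 49), (89, 97)]   # same features after sliding
alarm, shift = terrain_change_alarm(stable, moved)
print(alarm)  # True
```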

Development of Vision-Based Monitering System Technology for Traffic (교통량 분석 및 감시를 위한 영상 기반 관측 시스템 기술 개발)

  • Hong, Gwang-Soo;Eom, Tae-Jung;Kim, Byung-Gyu
    • Convergence Security Journal
    • /
    • v.11 no.4
    • /
    • pp.59-66
    • /
    • 2011
  • Recently, it has become very important to establish and predict traffic policy for expanding social infrastructure such as roads, because the number of cars is increasing significantly. In this paper, we propose and develop an automated system technology based on a vision sensor (CCTV) that can provide efficient information for traffic policy establishment and for expanding the social infrastructure. First, the CCTV image is captured as the input of the developed system. From this image, we propose a scheme for extracting vehicles on the road and classifying them as small-type or large-type vehicles based on color, motion, and geometric features. We also develop a database (DB) system for supplying complete traffic information for a specified period. With the proposed system, we verify a recognition rate of 90.1% in a real-time traffic monitoring environment.
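A minimal sketch of the classification and period-summary steps, using only a geometric feature (blob area); the 4000-pixel threshold and function names are illustrative assumptions, since the paper combines color, motion, and geometric features.

```python
def classify_vehicle(blob_area_px, small_max=4000):
    # Geometric-feature stand-in: classify an extracted vehicle blob by
    # its pixel area. The threshold is illustrative, not the paper's.
    return "small" if blob_area_px <= small_max else "large"

def traffic_summary(blob_areas):
    # DB-style summary for a period: vehicle counts per class.
    counts = {"small": 0, "large": 0}
    for area in blob_areas:
        counts[classify_vehicle(area)] += 1
    return counts

summary = traffic_summary([1200, 3500, 9000, 15000, 2800])
print(summary)  # {'small': 3, 'large': 2}
```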

FPGA based HW/SW co-design for vision based real-time position measurement of an UAV

  • Kim, Young Sik;Kim, Jeong Ho;Han, Dong In;Lee, Mi Hyun;Park, Ji Hoon;Lee, Dae Woo
    • International Journal of Aeronautical and Space Sciences
    • /
    • v.17 no.2
    • /
    • pp.232-239
    • /
    • 2016
  • Recently, in order to increase the efficiency and mission success rate of UAVs (unmanned aerial vehicles), the need for formation flight has grown. In general, GPS (Global Positioning System) is used to obtain the relative position of the leader with respect to the follower in formation flight. However, it cannot be used in environments where GPS jamming may occur or communication is impossible. Therefore, in this study, monocular vision is used to measure relative position. General PC-based vision processing systems are larger than embedded systems and are hard to install on small vehicles, so an FPGA-based processing board is used to make the system small and compact. The processing system is divided into two blocks, PL (programmable logic) and PS (processing system). The PL consists of many parallel logic arrays that can handle large amounts of data quickly, and it is designed in hardware. The PS consists of a conventional processing unit, an ARM processor, on which the sequential processing algorithm runs. Consequently, the HW/SW co-designed FPGA system is used to process input images and measure the relative 3D position of the leader, and this system showed an RMSE accuracy of 0.42 cm to 0.51 cm.

Vision-based Obstacle State Estimation and Collision Prediction using LSM and CPA for UAV Autonomous Landing (무인항공기의 자동 착륙을 위한 LSM 및 CPA를 활용한 영상 기반 장애물 상태 추정 및 충돌 예측)

  • Seongbong Lee;Cheonman Park;Hyeji Kim;Dongjin Lee
    • Journal of Advanced Navigation Technology
    • /
    • v.25 no.6
    • /
    • pp.485-492
    • /
    • 2021
  • Vision-based autonomous precision landing technology for UAVs requires precise position estimation and landing guidance. For safe landing, the system must also assess the safety of the landing point with respect to ground obstacles and guide the landing only when safety is ensured. In this paper, we propose vision-based navigation and algorithms for determining the safety of the landing point to perform autonomous precision landing. For vision-based navigation, CNN technology is used to detect the landing pad, and the detection information is used to derive an integrated navigation solution. In addition, a Kalman filter is designed and applied to improve position estimation performance. To determine the safety of the landing point, we perform obstacle detection and position estimation in the same manner and estimate the speed of the obstacle using the LSM (least squares method). Whether a collision with the obstacle will occur is determined from the CPA (closest point of approach) calculated using the estimated state of the obstacle. Finally, we perform flight tests to verify the proposed algorithm.
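The two estimation steps named in the abstract, an LSM velocity fit over a short track history and a CPA collision check, are standard formulas and can be sketched as follows; the sample track values are illustrative, not from the paper.

```python
def lsm_velocity(times, positions):
    # Least-squares (LSM) fit of position = p0 + v * t over a short
    # track history, giving the obstacle's estimated velocity.
    n = len(times)
    t_mean = sum(times) / n
    p_mean = sum(positions) / n
    num = sum((t - t_mean) * (p - p_mean) for t, p in zip(times, positions))
    den = sum((t - t_mean) ** 2 for t in times)
    v = num / den
    return p_mean - v * t_mean, v  # intercept p0 and velocity v

def cpa(rel_pos, rel_vel):
    # Closest point of approach under constant relative velocity:
    # t* = -(r . v) / |v|^2, clamped so only future approaches count.
    dot_rv = sum(r * v for r, v in zip(rel_pos, rel_vel))
    v_sq = sum(v * v for v in rel_vel)
    t_star = max(0.0, -dot_rv / v_sq) if v_sq > 0 else 0.0
    miss = sum((r + v * t_star) ** 2
               for r, v in zip(rel_pos, rel_vel)) ** 0.5
    return t_star, miss

# Obstacle closing along x at 2 m/s (from the LSM fit), drifting in y.
_, vx = lsm_velocity([0.0, 1.0, 2.0, 3.0], [10.0, 8.0, 6.0, 4.0])
t_star, miss = cpa([10.0, -5.0], [vx, 1.0])
print(t_star, miss)  # closest approach at t* = 5 s, miss distance 0
```

A miss distance below a safety radius at a future t* would flag the landing point as unsafe.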

Vision-based recognition of a simple non-verbal intent representation by head movements (고개운동에 의한 단순 비언어 의사표현의 비전인식)

  • Yu, Gi-Ho;No, Deok-Su;Lee, Seong-Cheol
    • Journal of the Ergonomics Society of Korea
    • /
    • v.19 no.1
    • /
    • pp.91-100
    • /
    • 2000
  • In this paper, an intent recognition system that recognizes a human's head movements as simple non-verbal intent representations is presented. The system recognizes five basic intent representations, i.e., strong/weak affirmation, strong/weak negation, and ambiguity, by image processing of nodding or shaking head movements. The vision system for tracking the head movements is composed of a CCD camera, an image processing board, and a personal computer. A modified template matching method, which replaces the reference image with the target image found in the previous step, is used for robust tracking of the head movements. To improve the processing speed, the search is performed on a pyramid representation of the original image. By inspecting the variance of the head movement trajectories, we can recognize the two basic intent representations, affirmation and negation. Also, by focusing on the speed of the head movements, we see the possibility of recognizing the strength of the intent representation.
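The trajectory-variance test for the affirmation/negation split can be sketched as below. This covers only the two basic classes; the paper additionally uses movement speed to grade strength, and the sample trajectories and function name are illustrative assumptions.

```python
def classify_head_gesture(xs, ys):
    # Variance of the tracked head-center trajectory along each image
    # axis: vertically dominant motion reads as nodding (affirmation),
    # horizontally dominant motion as shaking (negation).
    def variance(vals):
        mean = sum(vals) / len(vals)
        return sum((v - mean) ** 2 for v in vals) / len(vals)
    return "affirmation" if variance(ys) > variance(xs) else "negation"

nod = classify_head_gesture([100, 101, 100, 99], [80, 95, 78, 96])
shake = classify_head_gesture([70, 95, 68, 94], [120, 121, 120, 119])
print(nod, shake)  # affirmation negation
```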


Robust Vision-Based Autonomous Navigation Against Environment Changes (환경 변화에 강인한 비전 기반 로봇 자율 주행)

  • Kim, Jungho;Kweon, In So
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.3 no.2
    • /
    • pp.57-65
    • /
    • 2008
  • Recently, many studies on intelligent robots have been conducted. An intelligent robot is capable of recognizing environments or objects in order to autonomously perform specific tasks using sensor readings. One of the fundamental problems in vision-based robot applications is to recognize where the robot is and to decide on a safe path for autonomous navigation. However, previous approaches only consider well-organized environments in which there are no moving objects or environment changes. In this paper, we introduce a novel navigation strategy that handles occlusions caused by moving objects using various computer vision techniques. Experimental results demonstrate the capability to overcome such difficulties in autonomous navigation.


Development of the Driving path Estimation Algorithm for Adaptive Cruise Control System and Advanced Emergency Braking System Using Multi-sensor Fusion (ACC/AEBS 시스템용 센서퓨전을 통한 주행경로 추정 알고리즘)

  • Lee, Dongwoo;Yi, Kyongsu;Lee, Jaewan
    • Journal of Auto-vehicle Safety Association
    • /
    • v.3 no.2
    • /
    • pp.28-33
    • /
    • 2011
  • This paper presents a driving path estimation algorithm for an adaptive cruise control system and an advanced emergency braking system using multi-sensor fusion. Through data collection, the characteristics of the road curvature obtained from yaw-rate filtering and from the vision sensor are analyzed. The two curvature estimates are fused into one curvature by a weighting factor that considers the characteristics of each curvature source. The proposed driving path estimation algorithm has been investigated via simulation performed with the vehicle simulation package CarSim and MATLAB/Simulink. The simulations show that the proposed algorithm improves the primary target detection rate.
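The fusion step described above is a weighted combination of the two curvature sources; a minimal sketch, assuming a simple convex combination and an illustrative weight value (the paper derives the weighting factor from the characteristics of each source):

```python
def fuse_curvature(kappa_yaw, kappa_vision, w_vision):
    # Convex combination of yaw-rate-based and vision-based road
    # curvature (units 1/m). The weight here is a fixed illustrative
    # value; in the paper it reflects each source's characteristics.
    return w_vision * kappa_vision + (1.0 - w_vision) * kappa_yaw

fused = fuse_curvature(0.010, 0.012, 0.7)
print(fused)  # leans toward the vision estimate
```

In practice the weight would vary with conditions under which each sensor is more reliable, e.g. trusting the yaw-rate curvature at higher speeds.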

Development of Vision System Model for Manipulator's Assemble task (매니퓰레이터의 조립작업을 위한 비젼시스템 모델 개발)

  • 장완식
    • Journal of the Korean Society of Manufacturing Technology Engineers
    • /
    • v.6 no.2
    • /
    • pp.10-18
    • /
    • 1997
  • This paper presents the development of real-time estimation and control details for a computer vision-based robot control method. This is accomplished using a sequential estimation scheme that permits placement of target points in each of the two-dimensional image planes of the monitoring cameras. The estimation model is developed based on a model that generalizes the known 4-axis Scorbot manipulator kinematics to accommodate unknown relative camera position and orientation, etc. This model uses six uncertainty-of-view parameters estimated by an iterative method. The method is tested experimentally in two ways: first, the validity of the estimation model is tested using a self-built test model; second, the practicality of the presented control method is verified by performing a 4-axis manipulator assembly task. These results show that the control scheme used is precise and robust. This feature can open the door to a range of applications of multi-axis robots, such as deburring and welding.


Novel Measurement method for Image Sticking based on Human Vision System

  • Park, Gi-Chang;Lee, Jong-Seo;Souk, Jun-Hyung;Yi, Jun-Sin
    • Proceedings of the Korean Information Display Society Conference
    • /
    • 2007.08b
    • /
    • pp.1478-1481
    • /
    • 2007
  • This paper introduces a measurement method for image sticking based on human visual perception. Existing image sticking quantification methods mostly disagree with the level visible to human perception. Measuring image sticking, which degrades over time, takes a long time with a spot photometer, so many test samples could not be evaluated in the short time available on a mass production line. However, the new measurement method in this paper makes it possible to evaluate a large quantity of samples quickly and with high correlation to the human perceptual level of image sticking.
