• Title/Summary/Keyword: vision-based method

Search results: 1,463

Automatic indoor progress monitoring using BIM and computer vision

  • Deng, Yichuan;Hong, Hao;Luo, Han;Deng, Hui
    • International conference on construction engineering and project management / 2017.10a / pp.252-259 / 2017
  • The existing manual methods for recording actual progress on a construction site have several drawbacks: heavy reliance on the experience of professional engineers, and being labor-intensive, time-consuming, and error-prone. A method integrating computer vision and BIM (Building Information Modeling) is presented for automatic indoor progress monitoring. The developed method can accurately calculate the engineering quantity of a target component in time-lapse images. First, sample images of the on-site target are collected to train a classifier. After the construction images are processed by edge detection and the classifier, a voting algorithm based on geometry and vector operations delineates the target contour. Then, following the camera calibration principle, image pixel coordinates are converted into real-world coordinates, which are corrected using the geometric information in the BIM model. Finally, the actual engineering quantity is calculated.
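The pixel-to-world conversion step can be sketched under the standard pinhole camera model: once the depth of the component's plane is known (here it would come from the BIM model), a pixel back-projects to a 3D point. This is an illustrative sketch, not the paper's implementation; the function name and all parameter values are assumptions.

```python
def pixel_to_plane(u, v, fx, fy, cx, cy, Z):
    """Back-project pixel (u, v) onto a plane at known depth Z
    (depth assumed to come from the BIM model) with the pinhole model:
    X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy.
    fx, fy: focal lengths in pixels; cx, cy: principal point."""
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return X, Y, Z
```

The real-world coordinates produced this way would then be snapped to the nearest component geometry in the BIM model, as the abstract describes.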


A Contrast-based Color Conversion Method for the Maintenance of Sense of the People with Color Vision Deficiency (색각 이상자들의 감각 유지를 위한 대비기반 색변환 방법)

  • An, Jihye;Park, Jinho
    • Journal of Digital Contents Society / v.15 no.6 / pp.751-761 / 2014
  • People with color vision deficiency have insufficient discrimination for colors of low saturation and brightness, and they report negative emotions caused by this distortion. The goal of recovering the distortion of vision, which underlies emotion, is to increase the positive rather than the negative emotions that people with color vision deficiency feel when experiencing digital cultural content. By converting saturation and brightness in different directions so as to increase their contrast, the emotional qualities that the original images intend to convey, such as dynamic vs. static and vivid vs. somber, can be delivered to viewers for whom such contrast is otherwise reduced. Accordingly, this study proposes a contrast-based color conversion method that converts saturation and brightness within the color-conversion zone, and verifies through a color conversion simulation and a user test whether the method can reduce emotional distortion.
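The idea of converting saturation and brightness so as to preserve their contrast can be illustrated with a toy version that pushes each pixel's S and V away from the image mean. This is only a simplified stand-in for the paper's conversion; the function name and the stretch factor k are assumptions.

```python
import colorsys  # stdlib RGB <-> HSV conversion

def stretch_sv(pixels, k=1.3):
    """Toy contrast stretch in HSV space: move each pixel's saturation
    and value away from the image mean by factor k, clamped to [0, 1].
    pixels: list of (r, g, b) floats in [0, 1]."""
    hsv = [colorsys.rgb_to_hsv(*p) for p in pixels]
    s_mean = sum(s for _, s, _ in hsv) / len(hsv)
    v_mean = sum(v for _, _, v in hsv) / len(hsv)
    out = []
    for h, s, v in hsv:
        s2 = min(1.0, max(0.0, s_mean + k * (s - s_mean)))
        v2 = min(1.0, max(0.0, v_mean + k * (v - v_mean)))
        out.append(colorsys.hsv_to_rgb(h, s2, v2))
    return out
```

Hue is left untouched here; the paper's method additionally restricts the conversion to a color-conversion zone, which this sketch omits.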

Vision-based Target Tracking for UAV and Relative Depth Estimation using Optical Flow (무인 항공기의 영상기반 목표물 추적과 광류를 이용한 상대깊이 추정)

  • Jo, Seon-Yeong;Kim, Jong-Hun;Kim, Jung-Ho;Lee, Dae-Woo;Cho, Kyeum-Rae
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.37 no.3 / pp.267-274 / 2009
  • UAVs (Unmanned Aerial Vehicles) are increasingly expected to serve as unmanned systems for various missions, many of which depend on a vision system. In particular, missions such as surveillance and pursuit are carried out using the vision data transmitted from the UAV. For small UAVs, monocular vision is often adopted for reasons of weight and cost. Research on performing missions with monocular vision continues, but because the ground and the target differ in distance from the UAV, 3D distance measurement remains inaccurate. In this study, the Mean-Shift algorithm, optical flow, and the subspace method are combined to estimate relative depth. The Mean-Shift algorithm is used for target tracking and for determining the region of interest (ROI). Optical flow captures image motion information from pixel intensities. The subspace method then computes the translation and rotation of the image and estimates the relative depth. Finally, we present results obtained from images captured in UAV experiments.
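The depth-from-flow idea rests on the fact that, for purely translational camera motion parallel to the image plane, flow magnitude is inversely proportional to scene depth. The sketch below exploits only that relation; it is not the paper's subspace method, and the function name and normalization are assumptions.

```python
import math

def relative_depth(flow_vectors):
    """Relative depth from optical flow under pure lateral translation:
    |flow| is proportional to 1/Z, so take 1/|flow| and normalize so the
    farthest point has relative depth 1.0.
    flow_vectors: list of (u, v) flow components per tracked point."""
    mags = [math.hypot(u, v) for u, v in flow_vectors]
    inv = [1.0 / m for m in mags]      # 1/|flow| is proportional to depth
    scale = max(inv)
    return [d / scale for d in inv]
```

The subspace method in the paper additionally separates the rotational component of the flow, which this sketch assumes away.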

Implementation of a Stereo Vision Using Saliency Map Method

  • Choi, Hyeung-Sik;Kim, Hwan-Sung;Shin, Hee-Young;Lee, Min-Ho
    • Journal of Advanced Marine Engineering and Technology / v.36 no.5 / pp.674-682 / 2012
  • A new intelligent stereo vision sensor system was studied for the motion and depth control of unmanned vehicles. A new bottom-up saliency map model for a human-like active stereo vision system, based on the biological visual process, was developed to select a target object. If the left and right cameras successfully find the same target object, the implemented active vision system with two cameras focuses on the landmark and can extract depth and direction information. Using this information, the unmanned vehicle can approach the target autonomously. A number of tests of the proposed bottom-up saliency map were performed, and their results are presented.
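Once both cameras fixate the same landmark, depth follows from the classic stereo triangulation relation Z = fB/d. A minimal sketch with assumed parameter names, not the paper's implementation:

```python
def stereo_depth(f_px, baseline_m, x_left, x_right):
    """Stereo triangulation for rectified cameras: Z = f * B / d,
    where d = x_left - x_right is the disparity (pixels) of the same
    landmark in the two images, f_px the focal length in pixels, and
    baseline_m the camera separation in meters."""
    d = x_left - x_right
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return f_px * baseline_m / d
```

The direction to the target then comes from the landmark's horizontal offset from the image center, which this sketch omits.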

The Moving Object Gripping Using Vision Systems (비젼 시스템을 이용한 이동 물체의 그립핑)

  • Cho, Ki-Heum;Choi, Byong-Joon;Jeon, Jae-Hyun;Hong, Suk-Kyo
    • Proceedings of the KIEE Conference / 1998.07g / pp.2357-2359 / 1998
  • This paper proposes trajectory tracking of a moving object based on a single-camera vision system, together with a method by which a robot manipulator grips the moving object and predicts its coordinates. The trajectory and position coordinates are computed from the vision data acquired by the camera, and the robot manipulator tracks and grips the moving object using these data. The proposed vision system uses an algorithm suitable for real-time processing.
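Predicting the coordinates of a moving object from successive camera measurements can be sketched with a constant-velocity model, the simplest real-time choice; the paper does not specify its predictor, so this is illustrative only and all names are assumptions.

```python
def predict_position(p_prev, p_curr, dt, lookahead):
    """Constant-velocity prediction: estimate velocity from two successive
    vision measurements taken dt seconds apart, then extrapolate the
    object's (x, y) position 'lookahead' seconds ahead for the gripper."""
    vx = (p_curr[0] - p_prev[0]) / dt
    vy = (p_curr[1] - p_prev[1]) / dt
    return (p_curr[0] + vx * lookahead, p_curr[1] + vy * lookahead)
```

The lookahead would be chosen to cover the vision-processing and manipulator-motion latency, so the gripper arrives where the object will be.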


Self-Localization of Mobile Robot Using Single Camera (단일 카메라를 이용한 이동로봇의 자기 위치 추정)

  • 김명호;이쾌희
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2000.10a / pp.404-404 / 2000
  • This paper presents a single-vision-based self-localization method for a corridor environment. We use the Hough transform to find parallel lines and vertical lines, take their cross points as feature points, and calculate the relative distance from the mobile robot to these points. To match the environment map to the feature points, a search window is defined, and self-localization is performed by the matching procedure. Experimental results show the suitability of this method.
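The cross points used as feature points can be computed directly from lines in Hough (rho, theta) form, where a line satisfies x·cos θ + y·sin θ = ρ. A minimal sketch (function name assumed):

```python
import math

def hough_intersection(l1, l2):
    """Intersection of two lines given in Hough (rho, theta) form:
    x*cos(theta) + y*sin(theta) = rho. Solves the 2x2 linear system;
    returns None when the lines are (nearly) parallel."""
    (r1, t1), (r2, t2) = l1, l2
    a1, b1 = math.cos(t1), math.sin(t1)
    a2, b2 = math.cos(t2), math.sin(t2)
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None  # parallel lines: no single cross point
    x = (r1 * b2 - r2 * b1) / det
    y = (a1 * r2 - a2 * r1) / det
    return x, y
```

In the corridor setting, intersections of the detected parallel floor lines with vertical door/wall edges would yield exactly such feature points.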


A Vision-Based Jig-Saw Puzzle Matching Method (영상처리 시스템을 이용한 그림조각 맞추기에 관한 연구)

  • 이동주;서일홍;오상록
    • Journal of the Korean Institute of Telematics and Electronics / v.27 no.1 / pp.96-104 / 1990
  • In this paper, a novel method of jig-saw puzzle matching is proposed using a modified boundary matching algorithm that requires no a priori knowledge of the completed puzzle. Specifically, a boundary tracking algorithm is used to segment each piece from low-resolution image data. Each segmented piece is described by its corner points, the angle and distance between adjacent corner points, and the convexity or concavity of each corner point. The proposed algorithm was implemented and tested on an IBM PC with a PC-based vision system, and was applied successfully to real jig-saw puzzles.
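The convexity or concavity of a corner point can be decided from the sign of the cross product of the two boundary edges meeting there. A minimal sketch for a counter-clockwise boundary (names assumed; the paper's exact descriptor is not reproduced):

```python
def corner_convexity(prev_pt, corner, next_pt):
    """Classify a boundary corner by the z-component of the cross product
    of edge prev_pt->corner with edge corner->next_pt. For a boundary
    traversed counter-clockwise: positive = convex, negative = concave."""
    ax, ay = corner[0] - prev_pt[0], corner[1] - prev_pt[1]
    bx, by = next_pt[0] - corner[0], next_pt[1] - corner[1]
    cross = ax * by - ay * bx
    if cross > 0:
        return "convex"
    if cross < 0:
        return "concave"
    return "straight"
```

A convex tab on one piece would then be matched against a concave socket with a compatible angle and edge length on a candidate neighbor.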


A Computerized Scoring Method of The Hahn Double 15 Hue Test (한식(韓式) 2중(重) 15색상(色相) 검사(檢査)의 컴퓨터를 이용(利用)한 점수화(點數化) 방법(方法))

  • Park, Wan-Seoup;Lee, Jong-Young
    • Journal of Preventive Medicine and Public Health / v.29 no.3 s.54 / pp.521-527 / 1996
  • The Hahn double 15 hue test is used as a social and vocational aptitude test to separate strongly and mildly affected subjects among people with color vision defects detected by color vision screening. However, because the assessment of the type and severity of a color vision defect is based on hue confusions represented diagrammatically on the Hahn double 15 hue score sheet, this qualitative assessment does not provide a numerical score suitable for mathematical analysis. This paper presents a new proposal for quantitatively scoring the Hahn double 15 hue test based on the hue confusions made by the subject. With this program, large numbers of double 15 hue test results can be processed easily and rapidly, and the program helps to compare the severity of a specific type of color vision defect and to continuously monitor acquired color vision defects arising from various disease processes.
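One way such a quantitative score can be built (illustrative only; the paper's actual Hahn scoring rule is not reproduced here) is to sum color-space distances between consecutively placed caps, so that hue-circle crossings by a confused subject raise the total:

```python
import math

def confusion_score(arranged_order, cap_coords):
    """Illustrative arrangement score for a hue test: sum of Euclidean
    distances in a 2D color space between consecutively placed caps.
    A perfect arrangement of neighboring hues yields the minimum total;
    large jumps across the hue circle inflate the score.
    arranged_order: cap numbers in the order the subject placed them.
    cap_coords: {cap_number: (x, y)} color-space coordinates."""
    total = 0.0
    for a, b in zip(arranged_order, arranged_order[1:]):
        (x1, y1), (x2, y2) = cap_coords[a], cap_coords[b]
        total += math.hypot(x2 - x1, y2 - y1)
    return total
```

Comparing a subject's score against the perfect-arrangement minimum gives a single number suitable for the kind of mathematical analysis the abstract calls for.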


Fine-tuning Neural Network for Improving Video Classification Performance Using Vision Transformer (Vision Transformer를 활용한 비디오 분류 성능 향상을 위한 Fine-tuning 신경망)

  • Kwang-Yeob Lee;Ji-Won Lee;Tae-Ryong Park
    • Journal of IKEEE / v.27 no.3 / pp.313-318 / 2023
  • This paper proposes a neural network that applies fine-tuning to improve the performance of video classification based on the Vision Transformer. Recently, the need for real-time, deep-learning-based video analysis has emerged. Because of the characteristics of the CNN models conventionally used for image classification, it is difficult to capture the associations between consecutive frames. We compare and analyze the Vision Transformer and the Non-local neural network, both built on the attention mechanism, to find the optimal model. In addition, we propose an optimal fine-tuned neural network model by applying various fine-tuning methods as a form of transfer learning. In the experiments, the model was trained on the UCF101 dataset, and its performance was then verified by applying transfer learning to the UTA-RLDD dataset.
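The attention mechanism shared by the Vision Transformer and Non-local models is scaled dot-product attention, softmax(QK^T/√d)V. A single-head, plain-Python sketch of that core operation (not the paper's network):

```python
import math

def attention(Q, K, V):
    """Single-head scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
    Q, K, V are lists of equal-length float vectors; each output row is a
    weighted mix of the value vectors, weighted by query-key similarity."""
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        m = max(scores)                        # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

Applied across frame embeddings rather than image patches, this same operation is what lets attention-based models relate consecutive frames, which the abstract identifies as the weakness of CNNs.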

Unusual Motion Detection for Vision-Based Driver Assistance

  • Fu, Li-Hua;Wu, Wei-Dong;Zhang, Yu;Klette, Reinhard
    • International Journal of Fuzzy Logic and Intelligent Systems / v.15 no.1 / pp.27-34 / 2015
  • For a vision-based driver assistance system, unusual motion detection is one of the important means of preventing accidents. In this paper, we propose a real-time unusual-motion-detection model comprising two stages: salient region detection and unusual motion detection. In the salient-region-detection stage, we present an improved temporal attention model. In the unusual-motion-detection stage, three factors (speed, motion direction, and distance) are extracted for detecting unusual motion. A series of experimental results demonstrates the effectiveness and feasibility of the proposed model.
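The three factors named in the abstract (speed, motion direction, distance) could in the simplest case be combined as threshold rules. A toy sketch with assumed thresholds, not the paper's fuzzy-logic model:

```python
def is_unusual(speed, direction_change_deg, distance_m,
               speed_limit=20.0, turn_limit=45.0, min_gap=2.0):
    """Toy rule-based unusual-motion detector over the three factors from
    the abstract: flag motion as unusual if the tracked object is too
    fast, turns too sharply between frames, or comes too close.
    All threshold values are illustrative assumptions."""
    return (speed > speed_limit
            or abs(direction_change_deg) > turn_limit
            or distance_m < min_gap)
```

A fuzzy formulation, as the publishing venue suggests the paper uses, would replace these hard thresholds with graded membership functions over the same three factors.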