• Title/Summary/Keyword: Image Feature


Radiometric and Geometric Correction of the KITSAT-1 CCD Earth Images

  • 이임평;김태정
    • Korean Journal of Remote Sensing
    • /
    • Vol. 12, No. 1
    • /
    • pp.26-42
    • /
    • 1996
  • The CCD Earth Images Experiment (CEIE) is one of the payloads of KITSAT-1. Since the launch of KITSAT-1, the CEIE has acquired about 500 images of the Earth's surface around the world. Because of inherent radiometric errors and geometric distortions, the acquired images differ considerably from the actual appearance of the surface. Before the images can be processed and analyzed for various applications, a preprocessing step that removes these errors must be performed. This paper describes the preprocessing procedure that applies radiometric and geometric corrections to the images acquired by KITSAT-1.
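The two preprocessing steps described above can be sketched minimally as a per-pixel linear (gain/offset) radiometric model followed by nearest-neighbour resampling through an inverse geometric mapping. The gain, offset, and mapping below are illustrative assumptions, not the actual KITSAT-1 calibration parameters.

```python
# Minimal sketch of radiometric then geometric correction.
# Gain/offset and the inverse mapping are made-up placeholder values.

def radiometric_correct(raw, gain, offset):
    """Per-pixel linear correction: digital number -> corrected value."""
    return [[gain * px + offset for px in row] for row in raw]

def geometric_correct(img, rows, cols, inverse_map):
    """Resample onto a regular output grid via nearest-neighbour lookup
    through an inverse mapping (output coords -> input coords)."""
    h, w = len(img), len(img[0])
    out = []
    for r in range(rows):
        out_row = []
        for c in range(cols):
            sr, sc = inverse_map(r, c)
            sr = min(max(int(round(sr)), 0), h - 1)
            sc = min(max(int(round(sc)), 0), w - 1)
            out_row.append(img[sr][sc])
        out.append(out_row)
    return out

raw = [[10, 20], [30, 40]]
corrected = radiometric_correct(raw, gain=2.0, offset=1.0)
# an identity mapping leaves the corrected image unchanged
warped = geometric_correct(corrected, 2, 2, lambda r, c: (r, c))
print(corrected)  # [[21.0, 41.0], [61.0, 81.0]]
```

In practice the inverse mapping would come from the satellite's attitude and orbit model rather than a hand-written lambda.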

Autonomous pothole detection using deep region-based convolutional neural network with cloud computing

  • Luo, Longxi;Feng, Maria Q.;Wu, Jianping;Leung, Ryan Y.
    • Smart Structures and Systems
    • /
    • Vol. 24, No. 6
    • /
    • pp.745-757
    • /
    • 2019
  • Road surface deteriorations such as potholes cause motorists heavy monetary damage every year. However, effective road condition monitoring has been a continuing challenge to road owners. Depth cameras have a small field of view and can be easily affected by vehicle bouncing. Traditional image processing methods based on algorithms such as segmentation cannot adapt to varying environmental and camera scenarios. In recent years, novel object detection methods based on deep learning algorithms have produced good results in detecting typical objects, such as faces, vehicles, and structures, even in scenarios with changing object distances, camera angles, lighting conditions, etc. Therefore, in this study, a Deep Learning Pothole Detector (DLPD) based on the deep region-based convolutional neural network is proposed for autonomous detection of potholes from images. About 900 images of potholes and road surface conditions were collected and divided into training and testing data. Parameters of the network in the DLPD were calibrated based on sensitivity tests. The calibrated DLPD was then trained on the training data and applied to the 215 testing images to evaluate its performance. It is demonstrated that potholes can be automatically detected with an average precision of over 93%. Potholes can be differentiated from manholes by training and applying a manhole-pothole classifier constructed from the convolutional neural network layers in the DLPD. Repeated detection of the same pothole can be prevented by matching features of a newly detected pothole with previously detected potholes within a small region.
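The duplicate-suppression step mentioned at the end of the abstract can be sketched as a proximity check of each new detection against previously recorded potholes within a small region. The coordinates and distance threshold below are made-up placeholders, not the paper's actual feature-matching method.

```python
# Hedged sketch: a detection is treated as a repeat if it falls
# within `radius` of an already-recorded pothole position.
import math

def is_duplicate(new_pos, known_positions, radius=5.0):
    """Return True if `new_pos` lies within `radius` of any
    previously recorded pothole position."""
    return any(math.dist(new_pos, p) <= radius for p in known_positions)

known = [(10.0, 10.0), (50.0, 20.0)]
print(is_duplicate((12.0, 11.0), known))  # True: near the first pothole
print(is_duplicate((80.0, 80.0), known))  # False: a new pothole
```

The paper matches learned image features rather than raw positions; position alone is used here only to keep the sketch self-contained.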

Requirements Analysis of Image-Based Positioning Algorithm for Vehicles

  • Lee, Yong;Kwon, Jay Hyoun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • Vol. 37, No. 5
    • /
    • pp.397-402
    • /
    • 2019
  • Recently, with the emergence of autonomous vehicles and increasing interest in safety, a variety of research has been actively conducted to precisely estimate the position of a vehicle by fusing sensors. Previously, research determined the location of moving objects using GNSS (Global Navigation Satellite Systems) and/or an IMU (Inertial Measurement Unit). Lately, however, precise positioning of a moving vehicle has been performed by fusing data obtained from various sensors, such as LiDAR (Light Detection and Ranging), on-board vehicle sensors, and cameras. This study aims to enhance kinematic vehicle positioning performance by using feature-based recognition; therefore, an analysis of the required precision of the observations obtained from images was carried out. Velocity and attitude observations, assumed to be obtained from images, were generated by simulation, and errors of various magnitudes were added to them. By applying these observations to the positioning algorithm, the effects of the additional velocity and attitude information on positioning accuracy during GNSS signal blockages were analyzed with a Kalman filter. The results show that yaw information with a precision better than 0.5 degrees should be used to improve existing positioning algorithms by more than 10%.
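A heavily simplified one-dimensional version of the experiment above can be sketched as a scalar Kalman filter in which an image-derived velocity observation drives the prediction step while GNSS position updates stop during the blockage. All noise settings and measurement values below are illustrative assumptions, not the paper's filter design.

```python
# 1-D sketch: image velocity carries the position estimate through
# a GNSS outage. Noise values are illustrative assumptions.

def fuse_velocity(z_pos, z_vel, dt=1.0, r_pos=1.0, q=0.1):
    x, p = 0.0, 1.0                  # position estimate and variance
    for pos, vel in zip(z_pos, z_vel):
        x, p = x + vel * dt, p + q   # predict with image-derived velocity
        if pos is not None:          # GNSS available: correct position
            k = p / (p + r_pos)
            x, p = x + k * (pos - x), (1 - k) * p
    return x

# GNSS blocked after the first two epochs; velocity still observed
x = fuse_velocity(z_pos=[1.0, 2.0, None, None],
                  z_vel=[1.0, 1.0, 1.0, 1.0])
print(x)  # velocity observations carry the estimate to 4.0
```

The paper's filter additionally carries attitude (yaw) states; this scalar form only illustrates why an external velocity observation limits drift during an outage.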

Design and Implementation of the Feature Information Parsing System for Video Image

  • 최내원;지정규
    • Journal of the Korea Society of Computer and Information
    • /
    • Vol. 7, No. 3
    • /
    • pp.1-8
    • /
    • 2002
  • With the rapid advance of computer applications, video data are used in many areas of the Internet and society at large, and their volume is growing exponentially. Because video analysis systems are fundamentally text-based, they have difficulty expressing the ambiguity inherent in video data and suffer from the heavy workload and lack of objectivity involved in manual annotation. This paper proposes a method that uses color and shape information from segmented regions of video frames to analyze large volumes of video data efficiently. To extract color information, the image is converted from the conventional RGB space to the HSI space, and feature information matched to representative colors is used. Shape information is obtained with an Improved Moment Invariant (IMI) that operates only on the pixels along an object's contour.

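Since the paper extracts color features in HSI rather than RGB space, a common form of that conversion is sketched below. The abstract does not specify which variant the authors use, so this is only one standard formulation, assuming channels normalized to [0, 1] and hue returned in degrees.

```python
# One standard RGB -> HSI conversion (an assumption; the paper's
# exact variant is not given in the abstract).
import math

def rgb_to_hsi(r, g, b):
    i = (r + g + b) / 3.0                       # intensity
    m = min(r, g, b)
    s = 0.0 if i == 0 else 1.0 - m / i          # saturation
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = 0.0 if den == 0 else math.degrees(
        math.acos(max(-1.0, min(1.0, num / den))))
    if b > g:                                   # reflect to full circle
        h = 360.0 - h
    return h, s, i

print(rgb_to_hsi(1.0, 0.0, 0.0))  # pure red   -> H = 0
print(rgb_to_hsi(0.0, 1.0, 0.0))  # pure green -> H = 120
```

Hue's independence from intensity is what makes HSI attractive for matching representative colors under varying lighting.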

Night-Time Blind Spot Vehicle Detection Using Visual Property of Head-Lamp

  • 정정은;김현구;박주현;정호열
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • Vol. 6, No. 5
    • /
    • pp.311-317
    • /
    • 2011
  • The blind spot is an area that a driver's visibility does not reach, so drivers must pay attention to it when changing to an adjacent lane; attempting a lane change without noticing a vehicle approaching in the blind spot can cause an accident. This paper proposes a camera-based night-time blind-spot vehicle detection method. At night, head-lamps are used as the characteristic feature for detecting vehicles. Head-lamp candidates are selected by their high luminance, and a shape filter and a Kalman filter then remove noisy blobs whose luminance is similar to that of head-lamps. In addition, the vehicle position is estimated from the detected head-lamp using a virtual center line approximated by a first-order linear equation. Experiments show that the proposed method achieves relatively high detection performance in clear weather regardless of road type, but its performance is not sufficient in rainy weather because of various ground reflections.
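The first and last steps of the pipeline above can be sketched as a luminance threshold that picks head-lamp candidate pixels, followed by a least-squares first-order fit for the virtual center line. The threshold and the tiny test image are assumed values, not the paper's settings.

```python
# Sketch: bright-pixel candidates, then a first-order line fit.
# Threshold and image values are illustrative assumptions.

def headlamp_candidates(img, threshold=200):
    """Return (row, col) of pixels bright enough to be head-lamps."""
    return [(r, c) for r, row in enumerate(img)
            for c, v in enumerate(row) if v >= threshold]

def fit_line(points):
    """Least-squares first-order fit: col = a * row + b."""
    n = len(points)
    sr = sum(r for r, _ in points)
    sc = sum(c for _, c in points)
    srr = sum(r * r for r, _ in points)
    src = sum(r * c for r, c in points)
    a = (n * src - sr * sc) / (n * srr - sr * sr)
    b = (sc - a * sr) / n
    return a, b

img = [[0, 0, 0, 0],
       [0, 255, 0, 0],
       [0, 0, 250, 0],
       [0, 0, 0, 240]]
pts = headlamp_candidates(img)
a, b = fit_line(pts)
print(pts)   # [(1, 1), (2, 2), (3, 3)]
print(a, b)  # the diagonal: col = 1 * row + 0
```

The shape and Kalman filtering stages that reject non-head-lamp blobs are omitted here for brevity.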

Two-Stream Convolutional Neural Network for Video Action Recognition

  • Qiao, Han;Liu, Shuang;Xu, Qingzhen;Liu, Shouqiang;Yang, Wanggan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 15, No. 10
    • /
    • pp.3668-3684
    • /
    • 2021
  • Video action recognition is widely used in video surveillance, behavior detection, human-computer interaction, medically assisted diagnosis, and motion analysis. However, it can be disturbed by many factors, such as background and illumination. A two-stream convolutional neural network trains spatial and temporal models of the video separately and fuses them at the output. The multi-segment two-stream model trains on temporal and spatial information from the video, extracts and fuses their features, and then determines the action category. This paper adopts Google's Xception model with transfer learning, using weights pre-trained on ImageNet as the initialization. This largely overcomes the model underfitting caused by the limited size of video behavior datasets, effectively reduces the influence of confounding factors in the video, improves accuracy, and shortens training time. Furthermore, to compensate for the shortage of data, the Kinetics-400 dataset was used for pre-training, which greatly improved the model's accuracy. Through these improvements the expected goal is essentially achieved, and the design of the original two-stream model is improved.
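The fusion at the output end of the two streams can be sketched as a weighted average of per-class scores. The scores, the weight, and the class indices below are placeholders, not values from the paper.

```python
# Sketch of two-stream late fusion: average the class scores from
# the spatial (RGB) and temporal (optical-flow) streams.

def fuse_streams(spatial_scores, temporal_scores, w=0.5):
    """Weighted late fusion of two streams' per-class scores."""
    return [w * s + (1 - w) * t
            for s, t in zip(spatial_scores, temporal_scores)]

spatial = [0.7, 0.2, 0.1]    # placeholder scores, RGB-frame stream
temporal = [0.5, 0.4, 0.1]   # placeholder scores, optical-flow stream
fused = fuse_streams(spatial, temporal)
predicted = fused.index(max(fused))
print(predicted)  # class 0 wins after fusion
```

In the full model each stream is an Xception backbone producing these scores per video segment; only the fusion arithmetic is shown here.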

Person-Independent Facial Expression Recognition with Histograms of Prominent Edge Directions

  • Makhmudkhujaev, Farkhod;Iqbal, Md Tauhid Bin;Arefin, Md Rifat;Ryu, Byungyong;Chae, Oksam
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 12, No. 12
    • /
    • pp.6000-6017
    • /
    • 2018
  • This paper presents a new descriptor, named Histograms of Prominent Edge Directions (HPED), for the recognition of facial expressions in a person-independent environment. In this paper, we raise the issue of sampling error in generating the code-histogram from spatial regions of the face image, as observed in the existing descriptors. HPED describes facial appearance changes based on the statistical distribution of the top two prominent edge directions (i.e., primary and secondary direction) captured over small spatial regions of the face. Compared to existing descriptors, HPED uses a smaller number of code-bins to describe the spatial regions, which helps avoid sampling error despite having fewer samples while preserving the valuable spatial information. In contrast to the existing Histogram of Oriented Gradients (HOG) that uses the histogram of the primary edge direction (i.e., gradient orientation) only, we additionally consider the histogram of the secondary edge direction, which provides more meaningful shape information related to the local texture. Experiments on popular facial expression datasets demonstrate the superior performance of the proposed HPED against existing descriptors in a person-independent environment.
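The core idea of HPED can be roughly sketched as voting each pixel's two strongest edge directions, rather than only the strongest, into a region histogram. This is a simplification for illustration: the descriptor's actual joint coding of primary and secondary directions differs, and the bin count and edge responses below are assumed values.

```python
# Rough sketch of the top-two-direction idea behind HPED (an
# illustrative simplification, not the descriptor itself).

def top_two_histogram(responses, bins=4):
    """For each pixel, vote its two strongest edge-direction
    responses into a region histogram."""
    hist = [0] * bins
    for resp in responses:  # resp: edge response per direction
        order = sorted(range(bins), key=lambda d: resp[d], reverse=True)
        primary, secondary = order[0], order[1]
        hist[primary] += 1
        hist[secondary] += 1
    return hist

# three pixels, responses in 4 edge directions (made-up values)
pixels = [[9, 7, 1, 0],
          [8, 2, 6, 1],
          [0, 5, 9, 3]]
h = top_two_histogram(pixels)
print(h)  # [2, 2, 2, 0]
```

Voting two directions per pixel doubles the samples per region, which is how the descriptor mitigates the sampling error discussed above.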

Improved Image Retrieval Method using Color and Texture Feature Extraction

  • 박성현;신인경;안효창;이용환;조한진;이준환
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2013 Fall Conference of the Korea Information Processing Society
    • /
    • pp.1563-1566
    • /
    • 2013
  • With recent advances in network and multimedia technologies, large volumes of multimedia data such as images and video are accumulating, creating demand for efficient retrieval of visual information from large data sets. Conventional indexing, in which an administrator views each image and enters appropriate text, is time-consuming, and because index terms vary with the administrator's preferences, it can introduce retrieval errors. This paper therefore proposes a content-based image retrieval method that extracts color and texture features from images. Experiments confirm that the proposed method is more stable and yields better retrieval performance than existing methods.

Comparison Analysis of Machine Learning for Concrete Crack Depths Prediction Using Thermal Image and Environmental Parameters

  • 김지형;장아름;박민재;주영규
    • Journal of the Korean Association for Spatial Structures
    • /
    • Vol. 21, No. 2
    • /
    • pp.99-110
    • /
    • 2021
  • This study presents the estimation of crack depth by analyzing temperatures extracted from thermal images together with environmental parameters such as air temperature, air humidity, and illumination. The statistics of all acquired features and the correlation coefficients among the thermal-image and environmental parameters are presented. Concrete crack depths were predicted by four machine learning models: Multi-Layer Perceptron (MLP), Random Forest (RF), Gradient Boosting (GB), and AdaBoost (AB). The models are validated by the coefficient of determination, accuracy, and Mean Absolute Percentage Error (MAPE). The AB model performed best among the four, owing to the non-linearity of the features and its aggregation of weak learners with extra weight on misclassified data. With a base-estimator maximum depth of 11, the AB model achieved 97.6% accuracy and a MAPE of 0.07%. Feature importances, permutation importance, and partial dependence were analyzed for the AB model; the results show that the marginal effects of air humidity, crack depth, and crack temperature, in that order, are higher than those of the other features.
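The two evaluation metrics reported above, accuracy and MAPE, can be computed as follows. The depth values and the accuracy tolerance are illustrative assumptions, not the paper's data or criterion.

```python
# Sketch of the reported metrics on made-up crack-depth predictions.

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * sum(abs((t - p) / t)
                       for t, p in zip(y_true, y_pred)) / len(y_true)

def accuracy(y_true, y_pred, tol=0.5):
    """Share of predictions within `tol` (assumed) of the true depth."""
    hits = sum(abs(t - p) <= tol for t, p in zip(y_true, y_pred))
    return hits / len(y_true)

true_depths = [10.0, 20.0, 40.0]  # placeholder depths
pred_depths = [10.1, 19.8, 40.4]  # placeholder predictions
print(mape(true_depths, pred_depths))      # each error is 1% -> MAPE 1.0
print(accuracy(true_depths, pred_depths))  # all within tolerance -> 1.0
```

MAPE's division by the true depth explains why it complements plain accuracy for shallow versus deep cracks.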

Development of vision system for quality inspection of automotive parts and comparison of machine learning models

  • 박영민;정동일
    • The Journal of the Convergence on Culture Technology
    • /
    • Vol. 8, No. 1
    • /
    • pp.409-415
    • /
    • 2022
  • Computer vision acquires an image of a target with a camera and detects the desired feature values, vectors, and regions by applying algorithms and library functions. The detected data are then computed and analyzed in various forms depending on the purpose. Computer vision is used in many areas, particularly for automatically recognizing automotive parts and measuring their quality. In industry it is referred to as machine vision and, combined with artificial intelligence, is used to judge product quality or predict outcomes. In this study, a vision system for judging the quality of automotive parts was built, and five machine learning classification models were applied to the generated data and their results compared.