• Title/Summary/Keyword: 영상매핑 (image mapping)

Search Results: 339

Comparative Analysis among Radar Image Filters for Flood Mapping (홍수매핑을 위한 레이더 영상 필터의 비교분석)

  • Kim, Daeseong;Jung, Hyung-Sup;Baek, Wonkyung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.34 no.1, pp.43-52, 2016
  • Due to the characteristics of microwave signals, radar satellite imagery can be used for flood detection regardless of weather or time of day, and as detection methods have developed, flood detection rates have increased. Because floods cause extensive damage, flooded areas must be distinguished accurately from non-flooded areas; both image resolution and the filtering process are therefore critical, and filtering should minimize resolution degradation. Although radar image resolution has improved with technology, little work has focused on filtering methods well suited to flood detection. The purpose of this study is thus to find the most appropriate filtering method for flood detection by comparing three filters: the Lee filter, the Frost filter, and the NL-means filter. Each filter was applied to a radar image, the filtered images were compared, and the flood maps derived from them were then evaluated in turn. As a result, the Frost and NL-means filters removed speckle noise more effectively than the Lee filter. The Frost filter, however, caused severe resolution degradation while removing noise. The NL-means filter did not eliminate the shadow effect, one of the main causes of false detection, as well as the other filters did; nevertheless, because shadow pixels make up a relatively small fraction of the whole image, it achieved the best detection rate. The Kappa coefficient was 0.81 for the NL-means-filtered image, versus 0.55, 0.64, and 0.74 for the unfiltered, Lee-filtered, and Frost-filtered images, respectively. Moreover, the NL-means filter removed speckle noise without resolution degradation, so flooded areas could be distinguished effectively from other areas in the NL-means-filtered image.
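The reported accuracies use Cohen's kappa against reference flood maps. As a minimal illustration (the confusion-matrix counts below are hypothetical, not the paper's data), kappa can be computed from a 2x2 flood/non-flood confusion matrix like this:

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix (rows: reference, cols: predicted)."""
    total = sum(sum(row) for row in confusion)
    # observed agreement: fraction of pixels on the diagonal
    observed = sum(confusion[i][i] for i in range(len(confusion))) / total
    # chance agreement: product of row and column marginal proportions, summed over classes
    expected = sum(
        (sum(confusion[i]) / total) * (sum(row[i] for row in confusion) / total)
        for i in range(len(confusion))
    )
    return (observed - expected) / (1.0 - expected)

# hypothetical flood/non-flood pixel counts: [[TP, FN], [FP, TN]]
kappa = cohens_kappa([[420, 80], [60, 440]])
```

Kappa discounts agreement expected by chance, which is why it is preferred over raw accuracy for imbalanced flood/non-flood maps.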

Three-dimensional Texture Coordinate Coding Using Texture Image Rearrangement (텍스처 영상 재배열을 이용한 삼차원 텍스처 좌표 부호화)

  • Kim, Sung-Yeol;Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP, v.43 no.6 s.312, pp.36-45, 2006
  • Three-dimensional (3-D) texture coordinates specify the positions of texture segments that are mapped onto the polygons of a 3-D mesh model. To compress texture coordinates, previous works reused the same linear predictor already employed to code geometry data. However, these approaches could not perform linear prediction efficiently because texture coordinates are discontinuous along the coding order; the discontinuities become more serious in 3-D mesh models with a non-atlas texture. In this paper, we propose a new scheme that codes 3-D texture coordinates using texture image rearrangement. The proposed scheme first extracts texture segments from the texture image, then rearranges the segments consecutively along the coding order and applies linear prediction to compress the texture coordinates. Because the proposed scheme minimizes discontinuities of texture coordinates, it improves their coding efficiency. Experimental results show that the proposed scheme outperforms the MPEG-4 3DMC standard in terms of coding efficiency.
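The linear-prediction step the scheme relies on can be sketched with the simplest predictor, previous-value (delta) coding; this is an illustrative stand-in, not the exact MPEG-4 3DMC predictor:

```python
def delta_encode(coords):
    """Predict each (u, v) texture coordinate from the previous one along the
    coding order and store the residual. When coordinates are continuous along
    that order (the point of the rearrangement), residuals stay small and
    entropy-code well; discontinuities produce large, costly residuals."""
    residuals = [coords[0]]  # first coordinate is stored as-is
    for prev, cur in zip(coords, coords[1:]):
        residuals.append((cur[0] - prev[0], cur[1] - prev[1]))
    return residuals

def delta_decode(residuals):
    """Invert delta_encode by accumulating residuals."""
    coords = [residuals[0]]
    for du, dv in residuals[1:]:
        u, v = coords[-1]
        coords.append((u + du, v + dv))
    return coords
```

A real coder would quantize the residuals before entropy coding; the round trip here is lossless only because no quantization is applied.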

Ensemble Deep Network for Dense Vehicle Detection in Large Image

  • Yu, Jae-Hyoung;Han, Youngjoon;Kim, JongKuk;Hahn, Hernsoo
    • Journal of the Korea Society of Computer and Information, v.26 no.1, pp.45-55, 2021
  • This paper proposes an algorithm that efficiently detects densely packed small vehicles in large images. It consists of two ensemble deep-learning networks arranged in a coarse-to-fine manner, so that vehicles can be detected precisely within selected sub-images. In the coarse step, each deep-learning network individually produces a voting space, and the voting spaces are combined into a voting map used to select sub-regions. In the fine step, each sub-region selected in the coarse step is passed to a final deep-learning network. Sub-regions are defined by dynamic windows; in this paper, a pre-defined mapping table defines the dynamic windows for a perspective road image. The identity of a vehicle moving across sub-regions is determined by the closest bottom-center point of the detected vehicle's bounding box, and the vehicle is tracked by its box information across consecutive images. The proposed algorithm was evaluated for detection performance and real-time cost using day and night images captured by roadside CCTV.
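The coarse step's combination of per-network voting spaces into a voting map might be sketched as follows; the array shapes, threshold, and agreement rule here are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def build_voting_map(voting_spaces, threshold):
    """Combine per-network voting spaces (same-shape 2-D score arrays) by
    summation, then keep the cells where enough networks agree. The selected
    cells would seed the sub-regions handed to the fine-stage detector."""
    voting_map = np.sum(voting_spaces, axis=0)
    return voting_map, voting_map >= threshold

# three hypothetical 4x4 voting spaces from three coarse detectors
spaces = [np.zeros((4, 4)) for _ in range(3)]
for s in spaces:
    s[1:3, 1:3] = 1.0          # every network votes for the central region
spaces[0][0, 0] = 1.0          # one spurious single-network vote
votes, selected = build_voting_map(spaces, threshold=2)
```

Requiring agreement from at least two networks suppresses the spurious single-network vote while keeping the region all networks agree on.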

Video Event Detection according to Generating of Semantic Unit based on Moving Object (객체 움직임의 의미적 단위 생성을 통한 비디오 이벤트 검출)

  • Shin, Ju-Hyun;Baek, Sun-Kyoung;Kim, Pan-Koo
    • Journal of Korea Multimedia Society, v.11 no.2, pp.143-152, 2008
  • Nowadays, many researchers are studying methodologies for expressing events to enable semantic retrieval of video data. However, most work still relies on annotation-based retrieval, in which annotations are defined for each data item, or on content-based retrieval using low-level features. We therefore propose a method that creates motion units and extracts events through those units, enabling more semantic retrieval than existing methods. First, we classify motions by event unit. Second, we define a semantic unit for each classified object motion. To use these for event extraction, we create rules that can be matched against the low-level features, from which semantic events can be retrieved at the granularity of a video shot. To evaluate the method's usefulness, we ran a semantic-event extraction experiment on video footage and obtained a precision of approximately 80%.
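The rule-based mapping from low-level motion features to semantic units could look roughly like this; the feature names, thresholds, and event rules are invented for illustration only:

```python
def classify_motion(speed, direction_change):
    """Map hypothetical low-level motion features (speed, heading change in
    degrees) of a tracked object to a semantic motion unit."""
    if speed < 0.1:
        return "stop"
    if direction_change > 45:
        return "turn"
    return "move"

def detect_event(feature_sequence, event_rules):
    """Convert a shot's per-frame features to semantic units, then return the
    first event whose rule (a set of required units) is fully matched."""
    units = [classify_motion(s, d) for s, d in feature_sequence]
    for event, required_units in event_rules.items():
        if all(u in units for u in required_units):
            return event
    return None
```

Rules are checked in order, so more specific events (requiring more units) should be listed before more general ones.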


The Lens Aberration Correction Method for Laser Precision Machining in Machine Vision System (머신비전 시스템에서 레이저 정밀 가공을 위한 렌즈 수차 보정 방법)

  • Park, Yang-Jae
    • Journal of Digital Convergence, v.10 no.10, pp.301-306, 2012
  • We propose a method for accurate image acquisition in a machine vision system. The essential optical role of the lenses is to form a high-quality image faithful to the real scene; in practice, however, the input to the machine vision system is distorted by lens aberration. To solve this problem, a transformation is defined between the real-world coordinate system and the image coordinate system: a mapping function, computed by matrix operations on the distances between corresponding coordinates, specifies the exact location of each point. Correcting the focus-lens tolerance caused by lens aberration improves the precision of galvanometer-based laser machining. The aberration of an aspheric lens follows a two-dimensional curve, but existing linear lens-correction methods require examining a large number of calibration points and are therefore time-consuming. We propose applying bilinear interpolation in order to reduce the machining error caused by lens aberration in the processing equipment.
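The bilinear interpolation step can be sketched as below, assuming a hypothetical coarse calibration grid of per-point correction offsets sampled at regular spacing:

```python
def bilerp(f00, f10, f01, f11, tx, ty):
    """Bilinear interpolation of four corner values; tx, ty in [0, 1]."""
    top = f00 * (1 - tx) + f10 * tx
    bot = f01 * (1 - tx) + f11 * tx
    return top * (1 - ty) + bot * ty

def correct_point(grid, spacing, x, y):
    """Look up a correction offset by bilinear interpolation on a coarse
    calibration grid, where grid[j][i] is the offset measured at the
    calibration point (x=i*spacing, y=j*spacing)."""
    i, j = int(x // spacing), int(y // spacing)
    tx = (x - i * spacing) / spacing
    ty = (y - j * spacing) / spacing
    return bilerp(grid[j][i], grid[j][i + 1], grid[j + 1][i], grid[j + 1][i + 1], tx, ty)
```

Only the sparse calibration points need to be measured; the correction for any intermediate position is interpolated, which is what reduces the calibration time the abstract mentions.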

Neural-network based Computerized Emotion Analysis using Multiple Biological Signals (다중 생체신호를 이용한 신경망 기반 전산화 감정해석)

  • Lee, Jee-Eun;Kim, Byeong-Nam;Yoo, Sun-Kook
    • Science of Emotion and Sensibility, v.20 no.2, pp.161-170, 2017
  • Emotion affects many parts of human life, such as learning ability, behavior, and judgment, so understanding it is important to understanding human nature. Yet emotion can only be inferred indirectly, for example from facial expressions or gestures. Emotion is also difficult to classify, not only because individuals experience emotions differently but also because visually induced emotion is not sustained over the whole testing period. To address this, we acquired bio-signals and extracted features from them, which provide objective information about the emotional stimulus. The emotion pattern classifier combined an unsupervised learning algorithm with hidden nodes and feature vectors: a restricted Boltzmann machine (RBM) based on probability estimation was used for the unsupervised learning and maps the emotion features into a transformed space. Emotions were then characterized by a non-linear classifier with hidden nodes, a multi-layer neural network called a deep belief network (DBN). The accuracy of the DBN (about 94%) was better than that of a back-propagation neural network (about 40%), showing that the DBN performs well as an emotion pattern classifier.
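A minimal Bernoulli RBM trained with one-step contrastive divergence (CD-1) gives a feel for the unsupervised feature-mapping stage; this toy NumPy sketch is not the paper's network or data:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal Bernoulli RBM trained with one-step contrastive divergence (CD-1)."""
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)  # Gibbs sample
        v1 = self.visible_probs(h0_sample)                     # reconstruction
        h1 = self.hidden_probs(v1)
        # positive minus negative phase statistics, averaged over the batch
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)
        return np.mean((v0 - v1) ** 2)  # reconstruction error

# toy "feature vectors": two repeated binary patterns
data = np.array([[1, 1, 0, 0], [0, 0, 1, 1]] * 20, dtype=float)
rbm = RBM(n_visible=4, n_hidden=2)
errors = [rbm.cd1_step(data) for _ in range(200)]
```

In a DBN, several such layers are stacked, each trained unsupervised on the hidden activations of the layer below before supervised fine-tuning.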

3D object generation based on the depth information of an active sensor (능동형 센서의 깊이 정보를 이용한 3D 객체 생성)

  • Kim, Sang-Jin;Yoo, Ji-Sang;Lee, Seung-Hyun
    • Journal of the Korea Computer Industry Society, v.7 no.5, pp.455-466, 2006
  • In this paper, 3D objects are created from a real scene captured by an active sensor, which provides depth and RGB information. To obtain the depth information, we use the Zcam™ camera, which has a built-in active sensor module. [abridged] Third, we calibrate the detailed parameters, create a 3D mesh model from the depth information, and connect neighboring points to complete the mesh. Finally, the color image data are applied to the mesh model and mapping is carried out to create the 3D object. Experiments show that creating 3D objects from the data of a camera with active sensors is possible, and that this method is easier and more useful than using a 3D range scanner.
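The depth-to-mesh step starts by back-projecting depth samples into 3-D points; a pinhole-model sketch (with assumed intrinsics fx, fy, cx, cy, not the Zcam's actual calibration) might look like:

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map into 3-D camera-space points using the pinhole
    model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy. The resulting points
    would then be triangulated with their grid neighbors to form the mesh."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:  # skip invalid (zero) depth samples
                points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points
```

Because the depth samples sit on a regular image grid, mesh connectivity comes almost for free: each quad of adjacent valid pixels is split into two triangles.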


A Distributed High Dimensional Indexing Structure for Content-based Retrieval of Large Scale Data (대용량 데이터의 내용 기반 검색을 위한 분산 고차원 색인 구조)

  • Cho, Hyun-Hwa;Lee, Mi-Young;Kim, Young-Chang;Chang, Jae-Woo;Lee, Kyu-Chul
    • Journal of KIISE:Databases, v.37 no.5, pp.228-237, 2010
  • Although conventional index structures provide various nearest-neighbor search algorithms for high-dimensional data, large-scale data impose additional requirements: higher search performance and index scalability. To support these requirements, we propose a distributed high-dimensional indexing structure for cluster systems, called the Distributed Vector Approximation-tree (DVA-tree), a two-level structure consisting of a hybrid spill-tree and VA-files. We also describe the algorithms for constructing the DVA-tree over multiple machines and for performing distributed k-nearest-neighbor (k-NN) searches. To evaluate the performance of the DVA-tree, we conducted an experimental study using both real and synthetic datasets. The results show that the proposed method offers significant performance advantages over existing index structures on different kinds of datasets.
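The VA-file half of the DVA-tree rests on filter-and-refine search over compact quantized codes; the sketch below shows the idea with a crude adjacent-cell filter (a simplification that, unlike a real VA-file's distance lower bounds, can miss true neighbors):

```python
import math

def va_code(v, lo, hi, levels):
    """Quantize each coordinate into one of `levels` slices: the vector's
    compact VA-file cell code."""
    return tuple(
        min(int((x - l) / (h - l) * levels), levels - 1)
        for x, l, h in zip(v, lo, hi)
    )

def knn_filter_refine(vectors, query, k, lo, hi, levels=4):
    """Filter-and-refine search: scan only the small codes first, keep vectors
    whose cell is adjacent to the query's cell in every dimension, then refine
    the survivors with exact distances."""
    q_code = va_code(query, lo, hi, levels)
    candidates = [
        v for v in vectors
        if all(abs(c - qc) <= 1 for c, qc in zip(va_code(v, lo, hi, levels), q_code))
    ]
    candidates.sort(key=lambda v: math.dist(v, query))
    return candidates[:k]
```

The payoff is that the expensive exact-distance computations run on a small candidate set instead of the full collection, which is what makes the approach scan-friendly in high dimensions.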

HMM-based Upper-body Gesture Recognition for Virtual Playing Ground Interface (가상 놀이 공간 인터페이스를 위한 HMM 기반 상반신 제스처 인식)

  • Park, Jae-Wan;Oh, Chi-Min;Lee, Chil-Woo
    • The Journal of the Korea Contents Association, v.10 no.8, pp.11-17, 2010
  • In this paper, we propose HMM-based upper-body gesture recognition. To recognize gestures in space, the poses composing each gesture must first be distinguished. To capture the poses used by the interface, we installed two IR cameras, one in front and one to the side, so that each pose is acquired as a front-view and a side-view IR image. The acquired IR pose images are classified with an SVM using a non-linear RBF kernel, which can separate poses that a linear classifier would misclassify. Sequences of classified poses are then recognized as gestures using the HMM's state-transition matrix. A recognized gesture can be mapped to an OS value and thus applied to existing applications.
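Scoring a classified pose sequence against a gesture HMM uses the forward algorithm; a minimal sketch with hypothetical start, transition, and emission tables:

```python
def hmm_forward(obs, start, trans, emit):
    """Forward algorithm: the probability of an observation sequence (here, a
    sequence of discrete pose labels) under an HMM, summed over all hidden
    state paths. start[s], trans[s][t], emit[s][o] are probabilities."""
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[s] * trans[s][t] for s in range(n)) * emit[t][o]
            for t in range(n)
        ]
    return sum(alpha)
```

To recognize a gesture, the pose sequence is scored against one HMM per gesture class and the highest-likelihood model wins; long sequences would use log probabilities or scaling to avoid underflow.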

(A) study on location correction method of indoor/outdoor 3D model through data integration of BIM and GIS (BIM과 GIS 데이터 융합을 통한 실내외 3차원 모델 위치보정 방안 연구)

  • Kim, Ji-Eun;Hong, Chang-Hee
    • Journal of the Korea Academia-Industrial cooperation Society, v.18 no.3, pp.56-62, 2017
  • As the need for 3D spatial information increases, many local governments and related industries are building map-based 3D spatial information services and offering them to users. In such services, positional accuracy is one of the most important factors determining applicability to specific tasks. This study examined a location correction method between indoor and outdoor 3D spatial information through the construction of modeling data on a BIM/GIS platform. First, we selected the sites and constructed the BIM/GIS data in three steps. When the BIM model containing indoor spatial data was connected with the 3D textured model based on orthoimages, mismatches occurred, so we propose a location correction method: using a conversion algorithm, the relative-coordinate-based BIM data are converted to absolute positions and then relocated to match the textured data on the BIM/GIS platform.
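The relative-to-absolute conversion can be sketched as a 2-D rigid transform (rotation about the local BIM origin plus translation to a known geodetic anchor); the paper's actual conversion algorithm and parameters may differ:

```python
import math

def to_absolute(points, origin, rotation_deg):
    """Convert relative (local BIM) 2-D coordinates to absolute map coordinates
    by rotating about the local origin and translating to a known anchor point
    `origin` expressed in the absolute (map) coordinate system."""
    th = math.radians(rotation_deg)
    cos_t, sin_t = math.cos(th), math.sin(th)
    ox, oy = origin
    return [
        (ox + x * cos_t - y * sin_t, oy + x * sin_t + y * cos_t)
        for x, y in points
    ]
```

In practice the anchor and rotation would be estimated from surveyed control points shared by the BIM model and the orthoimage-based textured model, and the height component would be corrected the same way.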