• Title/Summary/Keyword: mapping algorithms

Search results: 365

Real-Time Shadow Generation using Image Warping (이미지 와핑을 이용한 실시간 그림자 생성 기법)

  • Kang, Byung-Kwon;Ihm, In-Sung
    • Journal of KIISE: Computer Systems and Theory / v.29 no.5 / pp.245-256 / 2002
  • Shadows are important elements in producing a realistic image. Generating exact shapes and positions of shadows is essential in rendering since shadows provide users with visual cues about the scene. It is also very important to be able to create soft shadows resulting from area light sources, since they drastically increase visual realism. In spite of their importance, existing shadow generation algorithms still have problems producing realistic shadows in real time. While image-based rendering techniques can often be applied effectively to real-time shadow generation, they usually demand large amounts of memory for storing preprocessed shadow maps. An effective compression method can reduce the memory requirement, though at additional decoding cost. In this paper, we propose a new image-based shadow generation method built on image warping. With this method, it is possible to generate realistic shadows using only small pre-generated shadow maps, and the method is easy to extend to soft shadow generation. Our method can be used efficiently for generating realistic scenes in many real-time applications such as 3D games and virtual reality systems.
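The image-based pipeline above builds on the standard shadow-map test: render depth from the light's viewpoint, then compare each point's light-space depth against the stored value. A minimal NumPy sketch of that underlying test, not the paper's warping scheme; the map contents and bias are illustrative assumptions:

```python
import numpy as np

def shadow_test(depth_map, light_uv, light_depth, bias=1e-3):
    """Return True if the point is in shadow.

    depth_map: HxW depths rendered from the light's viewpoint.
    light_uv: (u, v) texel the point projects to in light space.
    light_depth: the point's own depth from the light.
    """
    u, v = light_uv
    stored = depth_map[v, u]
    # A point farther from the light than the stored depth is occluded.
    return light_depth > stored + bias

# Toy 4x4 shadow map: one occluder at depth 2, background at depth 5.
depth_map = np.full((4, 4), 5.0)
depth_map[1, 1] = 2.0

assert shadow_test(depth_map, (1, 1), 4.0)      # behind the occluder: shadowed
assert not shadow_test(depth_map, (2, 2), 4.0)  # nothing closer to the light: lit
```

The warping idea in the paper amortizes this by reusing a few small precomputed maps rather than re-rendering per frame.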

Single Image Dehazing Based on Depth Map Estimation via Generative Adversarial Networks (생성적 대립쌍 신경망을 이용한 깊이지도 기반 연무제거)

  • Wang, Yao;Jeong, Woojin;Moon, Young Shik
    • Journal of Internet Computing and Services / v.19 no.5 / pp.43-54 / 2018
  • Images taken in hazy weather are characterized by low contrast and poor visibility. The process of reconstructing a clear-weather image from a hazy image is called dehazing. The main challenge of image dehazing is to estimate the transmission map or depth map for an input hazy image. In this paper, we propose a single-image dehazing method that utilizes a Generative Adversarial Network (GAN) for accurate depth map estimation. The proposed GAN model is trained to learn a nonlinear mapping between the input hazy image and the corresponding depth map. With the trained model, the depth map of the input hazy image is first estimated and used to compute the transmission map. Then a guided filter is utilized to preserve the important edge information of the hazy image, yielding a refined transmission map. Finally, the haze-free image is recovered via the atmospheric scattering model. Although the proposed GAN model is trained on synthetic indoor images, it can be applied to real hazy images. The experimental results demonstrate that the proposed method achieves superior dehazing results compared to state-of-the-art algorithms on both real and synthetic hazy images, in terms of quantitative and visual performance.
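The final recovery step uses the atmospheric scattering model I = J·t + A·(1 − t), where I is the hazy image, J the scene radiance, t the transmission, and A the airlight; inverting it gives J. A minimal sketch of that inversion (the array shapes and the t-floor are illustrative assumptions, not the paper's code):

```python
import numpy as np

def recover_scene(I, t, A, t_min=0.1):
    """Invert I = J*t + A*(1-t) to recover the haze-free image J.

    I: HxWx3 hazy image, t: HxW transmission map, A: per-channel airlight.
    t is floored at t_min to avoid amplifying noise where haze is dense.
    """
    t = np.clip(t, t_min, 1.0)
    return (I - A) / t[..., None] + A

# Round-trip check: synthesize a hazy image under the model, then invert.
rng = np.random.default_rng(0)
J = rng.random((2, 2, 3))
t = np.full((2, 2), 0.6)
A = np.array([0.9, 0.9, 0.9])
I = J * t[..., None] + A * (1 - t[..., None])

assert np.allclose(recover_scene(I, t, A), J)
```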

Timestamps based sequential Localization for Linear Wireless Sensor Networks (선형 무선 센서 네트워크를 위한 시각소인 기반의 순차적 거리측정 기법)

  • Park, Sangjun;Kang, Jungho;Kim, Yongchul;Kim, Young-Joo
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.10 / pp.1840-1848 / 2017
  • Linear wireless sensor networks typically construct a network topology with high reliability through sequential 1:1 mapping among sensor nodes, so they are used in various surveillance applications for major national infrastructures. Most existing techniques for identifying sensor nodes in such networks use GPS, AOA, or RSSI mechanisms. However, GPS- or AOA-based node identification techniques affect the size or production cost of the nodes, so it is not easy to construct practical sensor networks with them. RSSI-based techniques may show high deviation in location identification depending on propagation environments and equipment quality, which increases the complexity of the error correction algorithm. We propose a timestamp-based sequential localization algorithm that uses the transmit and receive timestamps of messages between sensor nodes, without relying on GPS, AOA, or RSSI. The distance measurement algorithms between nodes are expected to measure distance to within 1 meter given a crystal oscillator of 300 MHz or more.
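The ranging idea can be illustrated with a one-way time-of-flight computation from tick-count timestamps; it also shows why a 300 MHz clock bounds the resolution at roughly 1 m, since light travels about one meter per tick. A sketch under the assumption of synchronized tick counters (the abstract's method details may differ):

```python
C = 299_792_458.0  # speed of light, m/s

def one_way_distance(t_tx_ticks, t_rx_ticks, clock_hz):
    """Distance from transmit/receive tick counts, assuming synchronized clocks."""
    return C * (t_rx_ticks - t_tx_ticks) / clock_hz

# One tick of a 300 MHz clock corresponds to ~1 m of light travel,
# which is the quantization floor of any timestamp-based ranging.
tick_resolution_m = C / 300e6
assert 0.9 < tick_resolution_m < 1.1

# Example: a 30-tick flight time at 300 MHz is ~30 m.
d = one_way_distance(1_000_000, 1_000_030, 300e6)
assert 29.0 < d < 31.0
```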

Utilizing Airborne LiDAR Data for Building Extraction and Superstructure Analysis for Modeling (항공 LiDAR 데이터를 이용한 건물추출과 상부구조물 특성분석 및 모델링)

  • Jung, Hyung-Sup;Lim, Sae-Bom;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.26 no.3 / pp.227-239 / 2008
  • Processing LiDAR (Light Detection And Ranging) data obtained from ALS (Airborne Laser Scanning) systems mainly involves organization and segmentation of the data for 3D object modeling and mapping purposes. ALS is a viable and increasingly mature technology across various applications. ALS technology requires complex integration of optics, opto-mechanics and electronics in the multi-sensor components, i.e. data captured from GPS, INS and the laser scanner. In this study, digital image processing techniques were applied mainly to gray-level coded images of the LiDAR data for building extraction and superstructure segmentation. One advantage of using a gray-level image is that various existing digital image processing algorithms can be applied easily. Gridding and quantization of the raw LiDAR data into limited gray levels might introduce a smoothing effect and loss of detail. However, the quantization of the height values yields smoothed surface data that are more suitable for surface patch segmentation and modeling. The building boundaries were precisely extracted by a robust edge detection operator and regularized with shape constraints. For segmentation of the roof structures, region-growing and gap-filling segmentation methods were implemented. The results show that various image processing methods are applicable for extracting buildings and segmenting surface patches of the superstructures on roofs. Finally, a conceptual methodology for extracting characteristic information to reconstruct roof shapes was proposed, in which statistical and geometric properties were utilized to segment and model superstructures. The simulation results show that segmentation and modeling of the roof surface patches were possible with the proposed method.
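The gridding-and-quantization step described above can be sketched as follows; the cell size, the highest-return-per-cell rule, and the 8-bit gray range are illustrative assumptions, not the study's exact parameters:

```python
import numpy as np

def lidar_to_gray(points, cell=1.0, levels=256):
    """Grid an (N,3) x/y/z point cloud and quantize heights to gray levels."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    grid = np.full((iy.max() + 1, ix.max() + 1), np.nan)
    for i, j, h in zip(iy, ix, z):
        # Keep the highest return in each cell (first-return surface).
        if np.isnan(grid[i, j]) or h > grid[i, j]:
            grid[i, j] = h
    zmin, zmax = np.nanmin(grid), np.nanmax(grid)
    gray = (grid - zmin) / (zmax - zmin) * (levels - 1)
    # Empty cells become 0; quantizing smooths fine height variation.
    return np.nan_to_num(gray, nan=0.0).astype(np.uint8)

pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 5.0], [0.0, 1.0, 10.0]])
gray = lidar_to_gray(pts)
assert gray.dtype == np.uint8 and gray.max() == 255
```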

A Study on the Statistical GIS for Regional Analysis (지역분석을 위한 웹 기반 통계GIS 연구)

  • 박기호;이양원
    • Spatial Information Research / v.9 no.2 / pp.239-261 / 2001
  • A large suite of official statistical data sets has been compiled for geographical units under national directives, and it is quantitative regional analysis procedures that can add value to them. This paper reports our attempts at prototyping a statistical GIS capable of serving, over the Web, a variety of regional analysis routines as well as value-added statistics and maps. A pilot database of major statistical data was ingested for the city of Seoul. A baseline subset of regional analysis methods of practical use was selected and accommodated into the business logic of the target system, ranging from descriptive statistics, regional structure/inequality measures, spatial ANOVA, and spatial (auto)correlation to regression and residual analysis. Leading-edge information technologies, including an application server, were adopted in the system design and implementation so that the database, analysis modules and analytic mapping components cooperate seamlessly behind the Web front-end. The prototyped system supports tables, maps, and downloadable files for the input and output of analyses. One of the most salient features of our proposed system is that both the database and the analysis modules are extensible via a bi-directional interface for end users: the system provides users with operators and parsers for algebraic formulae so that the stored statistical variables may be transformed and combined into newly derived variables. This functionality eventually leads to on-the-fly fabrication of user-defined regional analysis algorithms. The stored dataset may also be temporarily augmented by user-uploaded datasets; this extension, in essence, results in a virtual database which awaits user commands as usual. An initial evaluation of the proposed system confirms that the issues involving the usage and dissemination of information can be addressed with success.
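The algebraic-formula operators and parsers described above have a modern analogue in expression evaluation over a tabular store. A hedged sketch using pandas, with hypothetical district names and columns (the original system's parser and schema are not described in detail):

```python
import pandas as pd

# Hypothetical regional table; column names are illustrative only.
regions = pd.DataFrame({
    "district": ["A", "B"],
    "population": [150_000, 560_000],
    "area_km2": [23.9, 39.5],
})

# A user-supplied algebraic formula is parsed into a derived variable,
# analogous to the system's on-the-fly variable transformation.
regions = regions.eval("density = population / area_km2")

assert abs(regions["density"].iloc[0] - 150_000 / 23.9) < 1e-6
```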


Current Status of Hyperspectral Data Processing Techniques for Monitoring Coastal Waters (연안해역 모니터링을 위한 초분광영상 처리기법 현황)

  • Kim, Sun-Hwa;Yang, Chan-Su
    • Journal of the Korean Association of Geographic Information Studies / v.18 no.1 / pp.48-63 / 2015
  • In this study, we introduce various hyperspectral data processing techniques for the monitoring of shallow and coastal waters, to enlarge the application range and to improve the accuracy of the end results in Korea. Unlike land, coastal regions show relatively low reflectance in visible wavelengths and therefore require more accurate atmospheric correction. Sun glint, which occurs due to the geometry of sun, sea surface and sensor, is another issue in processing hyperspectral imagery for ocean applications. After preprocessing of the hyperspectral data, a semi-analytical algorithm based on a radiative transfer model and a spectral library can be used for bathymetry mapping in coastal areas, type classification and status monitoring of benthos, or substrate classification. In general, semi-analytical algorithms using spectral information obtained from hyperspectral imagery show higher accuracy than empirical methods using multispectral data. Water depth and water quality are constraining factors in ocean applications of optical data. Although a radiative transfer model suggests a theoretical limit of about 25 m in depth for bathymetry and bottom classification, hyperspectral data have in practice been used at depths of up to 10 m in shallow and coastal waters. This means we have to focus on the maximum water depth and the water quality conditions that affect the coastal applicability of hyperspectral data, and to define a spectral library of coastal waters for classifying the types of benthos and substrates.
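The radiative-transfer reasoning behind the depth limit can be illustrated with a simplified shallow-water reflectance model of the common semi-analytical form, where the bottom term decays as exp(−2KH). The coefficients below are illustrative assumptions, not values from this review:

```python
import numpy as np

def shallow_reflectance(r_deep, rho_b, K, H):
    """Simplified semi-analytical shallow-water reflectance.

    r_deep: optically deep water reflectance, rho_b: bottom albedo,
    K: effective attenuation coefficient (1/m), H: depth (m).
    """
    att = np.exp(-2.0 * K * H)
    # Water-column term grows with depth; bottom term decays with depth.
    return r_deep * (1.0 - att) + (rho_b / np.pi) * att

shallow = shallow_reflectance(0.01, 0.3, 0.15, 5.0)
deep = shallow_reflectance(0.01, 0.3, 0.15, 50.0)

# At large depth the bottom signal vanishes and only deep-water
# reflectance remains, which is why bathymetry has a depth limit.
assert abs(deep - 0.01) < 1e-3
assert shallow > deep
```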

Application of Multispectral Remotely Sensed Imagery for the Characterization of Complex Coastal Wetland Ecosystems of southern India: A Special Emphasis on Comparing Soft and Hard Classification Methods

  • Shanmugam, Palanisamy;Ahn, Yu-Hwan;Sanjeevi, Shanmugam
    • Korean Journal of Remote Sensing / v.21 no.3 / pp.189-211 / 2005
  • This paper compares the recently evolved soft classification method based on Linear Spectral Mixture Modeling (LSMM) with traditional hard classification methods based on the Iterative Self-Organizing Data Analysis (ISODATA) and Maximum Likelihood Classification (MLC) algorithms, in order to achieve appropriate results for mapping, monitoring and preserving the valuable coastal wetland ecosystems of southern India, using Indian Remote Sensing Satellite (IRS) 1C/1D LISS-III and Landsat-5 Thematic Mapper image data. The ISODATA and MLC methods were applied to these satellite image data to produce maps of 5, 10, 15 and 20 wetland classes for each of three contrasting coastal wetland sites: Pitchavaram, Vedaranniyam and Rameswaram. The accuracy of the derived classes was assessed with the simplest descriptive statistic, overall accuracy, and with a discrete multivariate technique, KAPPA accuracy. ISODATA classification resulted in maps with poor accuracy compared to MLC classification, which produced maps with improved accuracy. However, there was a systematic decrease in overall accuracy and KAPPA accuracy when a larger number of classes was derived from the IRS-1C/1D and Landsat-5 TM imagery by ISODATA and MLC. There were two principal factors behind the decreased classification accuracy, namely spectral overlap/confusion and inadequate spatial resolution of the sensors. Compared to the former, the limited instantaneous field of view (IFOV) of these sensors caused a number of mixture pixels (mixels) to occur in the image, and their effect on the classification process was a major obstacle to deriving accurate wetland cover types, in spite of the increasing spatial resolution of new-generation Earth Observation Sensors (EOS).
To improve the classification accuracy, a soft classification method based on Linear Spectral Mixture Modeling (LSMM) was applied to calculate the spectral mixture and classify the IRS-1C/1D LISS-III and Landsat-5 TM imagery. This method considers the reflectance endmembers that form the scene spectra, determines their nature, and finally decomposes the spectra into their endmembers. To evaluate the LSMM areal estimates, the resulting fractional endmembers were compared with the normalized difference vegetation index (NDVI) and ground truth data, as well as with estimates derived from the traditional hard classifier (MLC). The findings revealed that NDVI values and vegetation fractions were positively correlated ($r^2$ = 0.96, 0.95 and 0.92 for Rameswaram, Vedaranniyam and Pitchavaram respectively) and NDVI and soil fraction values were negatively correlated ($r^2$ = 0.53, 0.39 and 0.13), indicating the reliability of the sub-pixel classification. Compared with ground truth data, the precision of LSMM was 92% for the moisture fraction and 96% for the soil fraction. The LSMM in general seems well suited to locating small wetland habitats which occur as sub-pixel inclusions, and to representing continuous gradations between different habitat types.
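LSMM decomposes each pixel spectrum into endmember fractions by linear inversion. A minimal unconstrained least-squares sketch with a hypothetical two-endmember, three-band library (the paper's actual implementation and any non-negativity or sum-to-one constraints may differ):

```python
import numpy as np

def unmix(pixel, endmembers):
    """Solve pixel = endmembers @ fractions in the least-squares sense.

    endmembers: (bands, n_endmembers) spectral library matrix.
    Returns fractions normalized to sum to one.
    """
    fracs, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    return fracs / fracs.sum()

# Hypothetical vegetation / soil endmember spectra over three bands.
E = np.array([[0.1, 0.6],
              [0.2, 0.5],
              [0.8, 0.3]])
true = np.array([0.7, 0.3])      # 70% vegetation, 30% soil
pixel = E @ true                  # a perfectly mixed pixel

est = unmix(pixel, E)
assert np.allclose(est, true, atol=1e-6)
```

On an exactly linear mixture the fractions are recovered perfectly; real pixels add noise, which is why the paper validates fractions against NDVI and ground truth.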

Extraction of Important Areas Using Feature Feedback Based on PCA (PCA 기반 특징 되먹임을 이용한 중요 영역 추출)

  • Lee, Seung-Hyeon;Kim, Do-Yun;Choi, Sang-Il;Jeong, Gu-Min
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.6 / pp.461-469 / 2020
  • In this paper, we propose a PCA-based feature feedback method for extracting important areas of handwritten-digit data sets and face data sets, extending the previous LDA-based feature feedback method. In the proposed method, the data is reduced to important feature dimensions by applying PCA, one of the dimension-reduction machine learning algorithms. Through the weights derived during the dimension-reduction process, the important points of the data along each reduced dimension axis are identified. Each dimension axis carries a different weight in the total data according to the size of the eigenvalue of that axis. Accordingly, a weight proportional to the eigenvalue of each axis is assigned, and the important points of the data along each axis are summed. The critical area of the data is obtained by applying a threshold to the result of this computation. The derived important area is then reverse-mapped to the original data, and the important area in the original data space is selected. The results of the experiment on the MNIST dataset are examined, and the effectiveness and potential of the pattern recognition method based on PCA-based feature feedback are verified by comparison with the existing LDA-based feature feedback method.
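The eigenvalue-weighted feedback described above can be sketched directly: weight each principal axis's loadings by its eigenvalue, sum the magnitudes per input feature, and threshold. This is an illustrative reconstruction, not the authors' code; the component count and quantile threshold are assumptions:

```python
import numpy as np

def important_mask(X, n_components=2, quantile=0.8):
    """Mark input features that dominate the top principal components."""
    Xc = X - X.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(eigvals)[::-1][:n_components]
    w, V = eigvals[order], eigvecs[:, order]
    # Feedback step: weight loadings by eigenvalue, sum over components,
    # then threshold to select the important region in input space.
    score = (np.abs(V) * w).sum(axis=1)
    return score >= np.quantile(score, quantile)

# Feature 0 has much higher variance, so it dominates the first PC
# and should be reverse-mapped as an important input feature.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
X[:, 0] *= 5.0

mask = important_mask(X)
assert mask[0]
```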

Ensemble Deep Network for Dense Vehicle Detection in Large Image

  • Yu, Jae-Hyoung;Han, Youngjoon;Kim, JongKuk;Hahn, Hernsoo
    • Journal of the Korea Society of Computer and Information / v.26 no.1 / pp.45-55 / 2021
  • This paper proposes an algorithm for efficiently detecting dense small vehicles in large images. It consists of two ensemble deep-learning network stages based on a coarse-to-fine method, so that vehicles can be detected exactly on selected sub-images. In the coarse step, a voting space is built from the result of each deep-learning network individually, and a voting map is formed by combining the voting spaces to select sub-regions. In the fine step, the sub-regions selected in the coarse step are passed to a final deep-learning network. The sub-regions are defined using dynamic windows; in this paper, a pre-defined mapping table is used to define dynamic windows for perspective road images. The identity of a vehicle moving across sub-regions is determined by the closest center point of the bottom of the detected vehicle's bounding box, and the vehicle is tracked by its box information over consecutive images. The proposed algorithm was evaluated for detection performance and real-time cost using day and night images captured by CCTV on the road.
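The coarse step can be sketched as summing per-network vote maps and thresholding the combined map to pick sub-regions for the fine stage. The grid size, confidence values, and threshold below are toy assumptions, not the paper's parameters:

```python
import numpy as np

def voting_map(spaces):
    """Combine per-network HxW vote maps into a single voting map."""
    return np.sum(spaces, axis=0)

def select_subregions(vmap, threshold):
    """Return (row, col) cells whose combined votes pass the threshold."""
    return np.argwhere(vmap >= threshold)

# Two networks vote on a 3x3 grid; only a cell supported by both
# networks accumulates enough votes to be refined in the fine step.
a = np.zeros((3, 3)); a[1, 1] = 1.0
b = np.zeros((3, 3)); b[1, 1] = 0.8; b[0, 0] = 0.4

vm = voting_map([a, b])
cells = select_subregions(vm, 1.0)
assert cells.tolist() == [[1, 1]]
```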

Wildfire Severity Mapping Using Sentinel Satellite Data Based on Machine Learning Approaches (Sentinel 위성영상과 기계학습을 이용한 국내산불 피해강도 탐지)

  • Sim, Seongmun;Kim, Woohyeok;Lee, Jaese;Kang, Yoojin;Im, Jungho;Kwon, Chunguen;Kim, Sungyong
    • Korean Journal of Remote Sensing / v.36 no.5_3 / pp.1109-1123 / 2020
  • In South Korea, where forest is the major land cover class (over 60% of the country), many wildfires occur every year. Wildfires weaken the shear strength of the soil, forming a soil layer that is vulnerable to landslides. It is important to identify the severity of a wildfire as well as the burned area in order to manage forests sustainably. Although satellite remote sensing has been widely used to map wildfire severity, it is often difficult to determine severity using only the temporal change of satellite-derived indices such as the Normalized Difference Vegetation Index (NDVI) and Normalized Burn Ratio (NBR). In this study, we proposed an approach for determining wildfire severity based on machine learning through the synergistic use of Sentinel-1A Synthetic Aperture Radar-C data and Sentinel-2A Multi Spectral Instrument data. Three wildfire cases (Samcheok in May 2017, Gangreung·Donghae in April 2019, and Gosung·Sokcho in April 2019) were used for developing wildfire severity mapping models with three machine learning algorithms (Random Forest, Logistic Regression, and Support Vector Machine). The results showed that the random forest model yielded the best performance, with an overall accuracy of 82.3%. Cross-site validation examining the spatiotemporal transferability of the machine learning models showed that the models were highly sensitive to temporal differences between the training and validation sites, especially in the early growing season. This implies that a more robust model with high spatiotemporal transferability can be developed when more wildfire cases from different seasons and areas are added in the future.
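The NBR index mentioned above, and the pre/post-fire difference (dNBR) commonly derived from it, can be computed as follows. The band values are illustrative; in the study, severity is determined by the machine learning models rather than by thresholding dNBR:

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from NIR and SWIR reflectance."""
    return (nir - swir) / (nir + swir)

def dnbr(nir_pre, swir_pre, nir_post, swir_post):
    """Pre-fire minus post-fire NBR; larger values suggest more severe burn."""
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)

# After a burn, vegetation loss lowers NIR and exposed soil/char raises
# SWIR, so dNBR comes out positive for burned pixels.
change = dnbr(0.5, 0.2, 0.2, 0.4)
assert change > 0
```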