• Title/Summary/Keyword: Consistency Algorithm

Search results: 256

HS-PSO Hybrid Optimization Algorithm for HS Performance Improvement (HS 성능 향상을 위한 HS-PSO 하이브리드 최적화 알고리즘)

  • Tae-Bong Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.16 no.4
    • /
    • pp.203-209
    • /
    • 2023
  • Harmony search (HS) does not use the fitness of individual harmonies when referring to harmony memory (HM) to construct a new harmony, whereas particle swarm optimization (PSO) uses both the fitness of individual particles and the fitness of the population to find a solution. In this study, we sought to improve the performance of HS by identifying similarities between HS and PSO and applying the particle-improvement process of PSO to HS. Applying the PSO algorithm requires the local best of each particle and the global best of the swarm. We viewed the process by which HS improves the worst harmony in HM as very similar to the particle-update process of PSO: the worst harmony in HM was regarded as a particle's local best, and the best harmony as the swarm's global best. In this way, the performance of HS was improved by introducing the particle-improvement process of PSO into the harmony-improvement process of HS. The results were confirmed by comparing optimization values for various benchmark functions, which showed that the proposed HS-PSO substantially outperforms standard HS in both accuracy and consistency.
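
The hybrid described in this abstract can be sketched roughly as follows. The exact form of the PSO step, the parameter values, and the `sphere` benchmark are assumptions for illustration, not the paper's formulation:

```python
import random

def sphere(x):
    # benchmark objective: global minimum 0 at the origin
    return sum(v * v for v in x)

def hs_pso(f, dim, lo, hi, hms=10, hmcr=0.9, par=0.3, c=1.5, iters=3000, seed=1):
    rng = random.Random(seed)
    clamp = lambda v: min(hi, max(lo, v))
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        # --- standard HS improvisation ---
        new = []
        for d in range(dim):
            if rng.random() < hmcr:                    # memory consideration
                v = rng.choice(hm)[d]
                if rng.random() < par:                 # pitch adjustment
                    v += rng.uniform(-0.01, 0.01) * (hi - lo)
            else:                                      # random selection
                v = rng.uniform(lo, hi)
            new.append(clamp(v))
        hm.sort(key=f)
        gbest, worst = hm[0], hm[-1]
        # --- PSO-style step: treat the worst harmony as a particle's local
        # best and pull it toward the best harmony (the swarm's global best)
        pso = [clamp(w + c * rng.random() * (g - w)) for w, g in zip(worst, gbest)]
        cand = min([new, pso], key=f)
        if f(cand) < f(worst):                         # usual HS replacement rule
            hm[-1] = cand
    return min(hm, key=f)

best = hs_pso(sphere, dim=5, lo=-10.0, hi=10.0)
```

The only change relative to plain HS is the extra `pso` candidate: the worst harmony is nudged toward the current best before the usual replace-the-worst decision.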

A Study on the Effectiveness of Small-scale Maps Production Based on Tolerance Changes of Map Generalization Algorithm (지도 일반화 알고리듬의 임계값 설정에 따른 소축척 지도 제작의 효용성 연구)

  • Hwakyung Kim;Jaehak Ryu;Jiyong Huh;Yongtae Shin
    • Journal of Information Technology Services
    • /
    • v.22 no.5
    • /
    • pp.71-86
    • /
    • 2023
  • Recently, a wide range of services have been built on the spatial information held in geographic information systems, so producing small-scale maps from large-scale source maps is essential for the diverse uses of that information. However, currently produced maps suffer from inconsistencies between datasets caused by differing production times and limitations of cartographic expression, and productivity is greatly reduced by product errors and redundant processes. Various efforts have been made to improve this, such as publishing research and reports on automating domestic map production, but in the absence of concrete results, mapmaking still relies on human editors. Because this work is largely manual, mapping takes excessive time and quality varies from producer to producer. Solving these problems requires technology that can produce maps automatically through computer programs, and research has accordingly applied rule-based approaches to geometric generalization. The tolerance settings of the algorithms used in rule-based modeling strongly affect the result, and the quality of the output changes with them. In this paper, we studied the effectiveness of map production as a function of tolerance settings. To this end, the results were validated by comparison with a manually produced map, and the original data and reduction rates were analyzed under different generalization algorithms and tolerance values. Although there were some regional differences, complexity decreased on average. We expect this work to contribute to spatial-information-based services by identifying tolerances suited to small-scale mapping regulations, thereby securing spatial data that guarantees consistency and accuracy.
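
The abstract does not name a specific generalization algorithm, but the effect of a tolerance value on line simplification is easy to illustrate with the classic Douglas-Peucker routine, a common choice for geometric generalization (the algorithm and test line below are our assumptions):

```python
def perp_dist(p, a, b):
    # perpendicular distance from point p to the line through a and b
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dy * (px - ax) - dx * (py - ay)) / (dx * dx + dy * dy) ** 0.5

def douglas_peucker(pts, tol):
    """Classic line simplification; a larger tolerance keeps fewer vertices."""
    if len(pts) < 3:
        return list(pts)
    idx, dmax = 0, 0.0
    for i in range(1, len(pts) - 1):
        d = perp_dist(pts[i], pts[0], pts[-1])
        if d > dmax:
            idx, dmax = i, d
    if dmax <= tol:                       # everything within tolerance: drop interior
        return [pts[0], pts[-1]]
    left = douglas_peucker(pts[: idx + 1], tol)
    right = douglas_peucker(pts[idx:], tol)
    return left[:-1] + right

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7),
        (6, 8.1), (7, 9), (8, 9), (9, 9)]
coarse = douglas_peucker(line, tol=1.0)   # aggressive generalization
fine = douglas_peucker(line, tol=0.05)    # keeps much more detail
```

Sweeping `tol` and measuring the vertex-reduction rate is essentially the tolerance-versus-complexity analysis the abstract describes.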

Urban Area Building Reconstruction Using High Resolution SAR Image (고해상도 SAR 영상을 이용한 도심지 건물 재구성)

  • Kang, Ah-Reum;Lee, Seung-Kuk;Kim, Sang-Wan
    • Korean Journal of Remote Sensing
    • /
    • v.29 no.4
    • /
    • pp.361-373
    • /
    • 2013
  • Since high-resolution X-band SAR images became available from airborne and satellite SAR systems, urban-area monitoring, target detection, and building reconstruction have been actively studied. This paper describes an efficient approach to reconstructing artificial structures (e.g., apartments, office buildings, and houses) in urban areas using high-resolution X-band SAR images. A building footprint is first extracted from a 1:25,000 digital topographic map, and the corner line of the building is then detected by an automatic algorithm. From the SAR amplitude images, an initial building height is calculated from the length of the layover estimated from the corner line using the Kolmogorov-Smirnov (KS) test. Interferometric SAR phases are then simulated for the given SAR geometry over candidate building heights ranging from -10 m to +10 m around the initial height. The simulations are compared against an interferogram from the real SAR data set using a phase-consistency measure, and the best-matching candidate is taken as the reconstructed building height. The developed algorithm was applied to a repeat-pass TerraSAR-X spotlight-mode data set over an apartment complex in Daejeon, Korea. The final building heights were validated against reference heights extracted from a LiDAR DSM, with a root-mean-square error (RMSE) of about 1-2 m.
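
The two-stage estimate can be sketched as below. The flat-terrain layover relation h = L·tan(θ) and the toy consistency function are our assumptions; the paper's actual phase simulation depends on the full SAR geometry:

```python
import math

def height_from_layover(layover_len_m, incidence_deg):
    # flat-terrain approximation: a building of height h produces a layover
    # of ground-range extent L = h / tan(theta), so h = L * tan(theta)
    return layover_len_m * math.tan(math.radians(incidence_deg))

def best_height(initial_h, consistency, step=0.5, search=10.0):
    # test candidate heights in [h0 - 10 m, h0 + 10 m] and keep the one whose
    # simulated interferometric phase agrees best with the observed interferogram
    n = int(2 * search / step) + 1
    candidates = [initial_h - search + i * step for i in range(n)]
    return max(candidates, key=consistency)

h0 = height_from_layover(layover_len_m=30.0, incidence_deg=45.0)   # about 30 m
true_h = 32.0
phase_consistency = lambda h: -abs(h - true_h)   # toy stand-in for phase matching
h = best_height(h0, phase_consistency)
```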

A Comparison of Pan-sharpening Algorithms for GK-2A Satellite Imagery (천리안위성 2A호 위성영상을 위한 영상융합기법의 비교평가)

  • Lee, Soobong;Choi, Jaewan
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.40 no.4
    • /
    • pp.275-292
    • /
    • 2022
  • To detect climate change using satellite imagery, the GCOS (Global Climate Observing System) defines requirements such as spatio-temporal resolution, temporal stability, and uncertainty. Owing to the limitations of the GK-2A sensor, its level-2 products cannot satisfy these requirements, especially for spatial resolution. In this paper, we sought the optimal pan-sharpening algorithm for GK-2A products. Six pan-sharpening methods from the CS (component substitution), MRA (multi-resolution analysis), VO (variational optimization), and DL (deep learning) families were used. For DL, the synthesis-property-based method was used to generate the training dataset: the pan-sharpening model is applied to panchromatic (PAN) and multispectral (MS) images at reduced spatial resolution, and the fused image is compared with the original MS image. This method produces a fused image of the desired quality only when the geometric characteristics of the reduced-resolution PAN image and the MS image are similar; since some dissimilarity inevitably exists, random down-sampling (RD) was additionally used to minimize it, and among the pan-sharpening methods PSGAN was combined with RD (PSGAN_RD). The fused images were validated qualitatively and quantitatively against both the consistency property and the synthesis property. The GSA algorithm performed best on indices of spatial quality, while PSGAN_RD showed the best spectral agreement with the original MS image. Considering both the spatial and spectral characteristics of the fused images, we conclude that PSGAN_RD is suitable for GK-2A products.
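
The synthesis-property evaluation the abstract relies on (Wald's protocol) can be sketched as below. Block averaging as the degradation filter and Brovey fusion as the stand-in fusion method are our assumptions, chosen only to keep the sketch self-contained:

```python
import numpy as np

def block_mean(img, f):
    # spatial degradation by f x f block averaging (stand-in for an MTF filter)
    h, w = img.shape[0] // f * f, img.shape[1] // f * f
    img = img[:h, :w]
    sh = (h // f, f, w // f, f) + img.shape[2:]
    return img.reshape(sh).mean(axis=(1, 3))

def brovey(pan, ms_up):
    # simple CS-family fusion, used here only as a placeholder
    intensity = ms_up.mean(axis=2, keepdims=True)
    return ms_up * (pan[..., None] / (intensity + 1e-9))

def synthesis_check(pan, ms, f=2):
    """Wald's synthesis property: fuse at reduced scale, then compare the
    fused product against the original MS image (RMSE here)."""
    pan_lo = block_mean(pan, f)                          # PAN degraded to MS scale
    ms_lo = block_mean(ms, f)                            # MS degraded further
    ms_lo_up = np.repeat(np.repeat(ms_lo, f, 0), f, 1)   # naive upsampling
    fused = brovey(pan_lo, ms_lo_up)
    return float(np.sqrt(np.mean((fused - ms) ** 2)))

rng = np.random.default_rng(0)
ms = rng.uniform(0.2, 1.0, size=(16, 16, 4))             # toy 4-band MS image
pan = np.repeat(np.repeat(ms.mean(axis=2), 2, 0), 2, 1)  # toy PAN at 2x resolution
err = synthesis_check(pan, ms)
```

The consistency property is the complementary check: degrade the full-resolution fused product back to MS scale and compare it with the original MS image.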

An Experiment for Surface Soil Moisture Mapping Using Sentinel-1 and Sentinel-2 Image on Google Earth Engine (Google Earth Engine 제공 Sentinel-1과 Sentinel-2 영상을 이용한 지표 토양수분도 제작 실험)

  • Jihyun Lee ;Kwangseob Kim;Kiwon Lee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_1
    • /
    • pp.599-608
    • /
    • 2023
  • Increasing interest in satellite-derived soil moisture for hydrology, meteorology, and agriculture has driven the development of methods for generating soil moisture maps at various resolutions. This study demonstrates the generation of soil moisture maps from Sentinel-1 and Sentinel-2 data provided by Google Earth Engine (GEE). The soil moisture map was derived from synthetic aperture radar (SAR) and optical imagery: Sentinel-1 analysis-ready SAR data in GEE were combined with the normalized difference vegetation index (NDVI) from Sentinel-2 and the Environmental Systems Research Institute (ESRI) land cover map. A soil moisture map was produced for a study area in Victoria, Australia, and compared with field measurements from a previous study. The values obtained by the applied algorithm agreed with field-based measurements to within 4-10 percentage points, and showed very high consistency, within 0.5-2 percentage points, with satellite-based soil moisture products. Therefore, the public open data provided by GEE and the algorithm applied in this study can be used for high-resolution soil moisture mapping that represents regional land-surface characteristics.
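
The abstract does not spell out the retrieval model, but a common way to combine SAR backscatter with an NDVI vegetation mask is a change-detection index between dry- and wet-reference scenes. The sketch below is that generic approach under our own assumptions, not the paper's exact algorithm:

```python
import numpy as np

def soil_moisture_index(sigma0_db, dry_db, wet_db, ndvi, ndvi_max=0.6):
    """Relative soil moisture from where the current backscatter sits between
    a dry and a wet reference image; densely vegetated pixels are masked."""
    mv = (sigma0_db - dry_db) / (wet_db - dry_db)   # 0 = dry reference, 1 = wet
    mv = np.clip(mv, 0.0, 1.0)
    return np.where(ndvi <= ndvi_max, mv, np.nan)   # NaN where NDVI is too high

sigma0 = np.array([[-14.0, -10.0], [-8.0, -12.0]])  # current VV backscatter (dB)
ndvi = np.array([[0.3, 0.7], [0.2, 0.4]])           # Sentinel-2 NDVI
smi = soil_moisture_index(sigma0, dry_db=-16.0, wet_db=-6.0, ndvi=ndvi)
```

In a GEE workflow the same arithmetic would be expressed as per-pixel image operations on the Sentinel-1 and Sentinel-2 collections.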

A study on the development of quality control algorithm for internet of things (IoT) urban weather observed data based on machine learning (머신러닝기반의 사물인터넷 도시기상 관측자료 품질검사 알고리즘 개발에 관한 연구)

  • Lee, Seung Woon;Jung, Seung Kwon
    • Journal of Korea Water Resources Association
    • /
    • v.54 no.spc1
    • /
    • pp.1071-1081
    • /
    • 2021
  • In addition to the current quality control (QC) procedures applied to weather observations by the Korea Meteorological Administration (KMA), this study proposes machine-learning-based QC standards for Internet of Things (IoT) urban weather observations that can be used in the smart cities of the future. To check whether the standards currently defined for ASOS (Automated Synoptic Observing System) and AWS (Automatic Weather System) stations are suitable for urban weather, their usability was first verified against data from SKT AWS stations installed in Seoul, and a machine-learning-based QC algorithm was then proposed that accounts for the characteristics of IoT data. The algorithm first performs a missing-value test, value-pattern test, sufficient-data test, statistical range-anomaly test, temporal-anomaly test, and spatial-anomaly test, followed by the physical-limit, step, climate-range, and internal-consistency tests recommended by the KMA. To verify the proposed algorithm, it was applied to actual IoT urban weather observations from a station located in Songdo, Incheon. The approach makes it possible to identify device defects that the existing KMA QC cannot detect, and provides a QC algorithm for the IoT weather observation devices to be installed in the smart cities of the future.
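
A few of the listed checks can be sketched as simple rule-based flags. The thresholds, flag names, and the persistence ("stuck sensor") window below are illustrative assumptions, not the KMA's or the paper's actual parameters:

```python
def qc_flags(series, lo, hi, max_step, min_var_window=6):
    """Flag each observation: 'missing', 'limit' (physical-limit test),
    'step' (jump versus the previous value), 'stuck' (persistence), or 'ok'."""
    flags = []
    for i, v in enumerate(series):
        if v is None:
            flags.append("missing")
        elif not (lo <= v <= hi):
            flags.append("limit")
        elif i > 0 and series[i - 1] is not None and abs(v - series[i - 1]) > max_step:
            flags.append("step")
        else:
            flags.append("ok")
    # persistence test: identical values over a full window suggest a stuck sensor
    for i in range(min_var_window, len(series) + 1):
        w = series[i - min_var_window:i]
        if None not in w and len(set(w)) == 1:
            for j in range(i - min_var_window, i):
                if flags[j] == "ok":
                    flags[j] = "stuck"
    return flags

# toy air-temperature series (deg C): gap, spike beyond physical limits, jumps
temps = [21.0, 21.2, None, 80.0, 21.4, 30.0]
flags = qc_flags(temps, lo=-40.0, hi=60.0, max_step=5.0)
```

A machine-learning layer, as in the paper, would then learn station-specific thresholds instead of the fixed constants used here.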

Non-Simultaneous Sampling Deactivation during the Parameter Approximation of a Topic Model

  • Jeong, Young-Seob;Jin, Sou-Young;Choi, Ho-Jin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.1
    • /
    • pp.81-98
    • /
    • 2013
  • Since probabilistic latent semantic analysis (PLSA) and latent Dirichlet allocation (LDA) were introduced, many revised or extended topic models have appeared. Because the likelihood of these models is intractable, training any topic model requires an approximation algorithm such as variational approximation, Laplace approximation, or Markov chain Monte Carlo (MCMC). Although these approximation algorithms perform well, training a topic model remains computationally expensive given the large amount of data it requires. In this paper, we propose a new method, called non-simultaneous sampling deactivation, for efficient approximation of the parameters of a topic model. Whereas traditional approximation algorithms sample or update every random variable for a single predefined burn-in period, our method is based on the observation that the random-variable nodes of a topic model converge after different numbers of iterations. During the iterative approximation process, the proposed method therefore terminates (deactivates) each random-variable node individually as soon as it has converged. Compared to the traditional approach, in which every node is deactivated simultaneously, the proposed method improves inference efficiency in both time and memory. We do not propose a new approximation algorithm, but a new process applicable to existing approximation algorithms. Through experiments, we show the time and memory efficiency of the method and discuss the trade-off between the efficiency of the approximation process and parameter consistency.
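
The core idea, per-node early stopping inside an iterative approximation loop, can be sketched generically. The toy `update`/`converged` functions below are assumptions standing in for a real sampler's per-node updates and convergence diagnostics:

```python
def approximate(nodes, update, converged, max_iters=1000):
    """Keep updating only the still-active variable nodes; deactivate each
    node individually once its own chain has converged, instead of running
    every node for one shared burn-in period."""
    active = set(nodes)
    history = {n: [] for n in nodes}
    it = 0
    while active and it < max_iters:
        it += 1
        for n in list(active):
            history[n].append(update(n))
            if converged(history[n]):
                active.discard(n)      # this node stops early, saving work
    return {n: hist[-1] for n, hist in history.items()}, it

# toy "nodes": two quantities that converge at very different speeds
state = {"fast": 0.0, "slow": 0.0}
target = {"fast": 1.0, "slow": 5.0}
rate = {"fast": 0.5, "slow": 0.05}

def update(n):
    state[n] += rate[n] * (target[n] - state[n])
    return state[n]

def converged(hist, tol=1e-3):
    return len(hist) >= 2 and abs(hist[-1] - hist[-2]) < tol

est, iters = approximate(["fast", "slow"], update, converged)
```

The fast node is deactivated after roughly ten iterations while the slow node keeps updating, which is exactly the source of the time and memory savings the abstract claims.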

A Heuristic Algorithm for Designing Traffic Analysis Zone Using Geographic Information System (Vector GIS를 이용한 교통 Zone체계 알고리즘 개발 방안에 관한 연구)

  • Choi, Kee-Choo
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.3 no.1 s.5
    • /
    • pp.91-104
    • /
    • 1995
  • The spatial aggregation of data, in transportation and other planning processes, is an important theoretical consideration because the results of any analysis are not entirely independent of the delineation of zones; using a different spatial aggregation may lead to different, and sometimes contradictory, conclusions. Two criteria are considered important in designing zone systems: scale and aggregation. The scale problem arises from uncertainty about the number of zones needed for a study, and the aggregation problem from uncertainty about how the data should be aggregated to form zones at a given scale. In a transportation study, especially in the design of traffic analysis zones (TAZ), the scale problem is directly related to the number of zones, while the aggregation problem involves spatial clustering that must meet the general requirements of a zone system, such as roughly equal traffic generation, convexity, and consistency with administrative boundaries. This study first presents a comparative review of approaches to delineating spatial units. Second, a FORTRAN-based heuristic algorithm for designing a TAZ system from socio-economic data was developed and applied to the Korean peninsula, containing 132 micro parcels. The vector-type ARC/INFO GIS topological data model was used to provide adjacency information between parcels. The results, however, leave room for improvement with respect to problems such as non-convexity of the agglomerated TAZ system and uneven traffic generation across TAZs.
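
The aggregation step, clustering adjacent parcels into zones of roughly equal traffic generation, can be sketched as greedy region growing over an adjacency graph. The growth rule and the toy four-parcel chain below are our assumptions, not the paper's FORTRAN heuristic:

```python
def build_zones(parcels, adjacency, trips, target):
    """Seed a zone at an unassigned parcel, then absorb adjacent unassigned
    parcels until the zone's total trip generation reaches the target."""
    assigned, zones = set(), []
    for seed in parcels:
        if seed in assigned:
            continue
        zone, total = [seed], trips[seed]
        assigned.add(seed)
        frontier = [p for p in adjacency[seed] if p not in assigned]
        while frontier and total < target:
            p = max(frontier, key=trips.get)    # absorb the biggest generator first
            frontier.remove(p)
            if p in assigned:
                continue
            zone.append(p)
            assigned.add(p)
            total += trips[p]
            frontier += [q for q in adjacency[p] if q not in assigned]
        zones.append(zone)
    return zones

# four parcels in a chain A-B-C-D with trip-generation counts
adjacency = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
trips = {"A": 10, "B": 20, "C": 30, "D": 40}
zones = build_zones(["A", "B", "C", "D"], adjacency, trips, target=50)
```

Because zones grow only along adjacency edges, each zone is contiguous, but, as the abstract notes for the real algorithm, nothing here enforces convexity or exactly even traffic totals.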


An Extensible Transaction Model for Real-Time Data Processing (실시간 데이타 처리를 위한 확장 가능한 트랜잭션 모델에 관한 연구)

  • Moon, Seung-Jin (문승진)
    • Journal of Internet Computing and Services
    • /
    • v.1 no.2
    • /
    • pp.11-18
    • /
    • 2000
  • In this paper we present a new extensible model based on the concept of subtransactions in real-time transaction systems. The nested transaction model originally proposed by J. Moss is extended for real-time uniprocessor transaction systems by adding explicit timing constraints. Based on this model, an integrated concurrency control and scheduling algorithm is developed that not only guarantees the timing constraints of a set of real-time transactions but also maintains the consistency of the database. The algorithm builds on the priority ceiling protocol of Sha et al. We prove that the resulting Real-Time Nested Priority Ceiling Protocol prevents unbounded blocking and deadlock, and maintains the serializability of a set of real-time transactions. Using the upper bound on the duration for which a transaction can be blocked, we show that the schedulability of a transaction set can be analyzed under rate-monotonic priority assignment. This work is viewed as a step toward multiprocessor and distributed real-time nested transaction systems, and it can also be extended to real-time multimedia transactions in emerging web-based database applications.
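
The lock-grant rule of the classical priority ceiling protocol, which the paper extends to nested transactions, can be sketched as below. This is the basic single-level rule only; priority inheritance and the nested-transaction extensions are omitted, and the scenario data are our assumptions:

```python
def can_lock(txn, resource, locks, priorities, ceilings):
    """A transaction may acquire a lock only if its priority is strictly
    higher than the ceilings of all resources currently locked by other
    transactions (higher number = higher priority). Otherwise it blocks,
    which bounds blocking to one lower-priority critical section and
    prevents deadlock."""
    system_ceiling = max(
        (ceilings[r] for r, owner in locks.items() if owner != txn), default=-1)
    return priorities[txn] > system_ceiling

priorities = {"T1": 3, "T2": 2, "T3": 1}
ceilings = {"x": 3, "y": 2}   # ceiling = highest priority that may ever use it
locks = {"y": "T3"}           # low-priority T3 currently holds y

t1_ok = can_lock("T1", "x", locks, priorities, ceilings)   # 3 > ceiling(y) = 2
t2_ok = can_lock("T2", "x", locks, priorities, ceilings)   # 2 > 2 fails: blocked
```

Blocking T2 here even though `x` is free is what rules out deadlock and chained blocking, and it is this bounded-blocking property that makes the rate-monotonic schedulability analysis in the paper possible.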


Temporal Color Rolling Suppression Algorithm Considering Time-varying Illuminant (조도 변화를 고려한 동영상 색 유동성 저감 알고리즘)

  • Oh, Hyun-Mook;Kang, Moon-Gi
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.5
    • /
    • pp.55-62
    • /
    • 2011
  • In this paper, a temporal color and luminance variation suppression algorithm for digital video sequences is proposed that accounts for a time-varying light source. When a video sequence is captured under a periodically flickering illuminant with a short exposure time, the color rolling phenomenon occurs: the color and luminance of the image change periodically from field to field. With conventional signal processing techniques, the luminance variation remaining in the resulting video degrades the constancy of the image sequence. The proposed method produces video with constant luminance and color by compensating for the inter-field luminance variation. Based on a motion detection technique, the luminance variation of each channel is estimated over the background of the sequence, excluding the influence of moving objects. Experimental results show that the proposed strategy estimates the illuminant change without being affected by moving objects, and that the variations are effectively reduced.
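
The per-channel compensation step can be sketched as below. The difference-threshold motion test and the global per-channel gain are simplifying assumptions; the paper's motion detection and estimation are more elaborate:

```python
import numpy as np

def compensate(prev, cur, motion_thresh=10.0):
    """Estimate a per-channel gain between two fields using only background
    pixels (small inter-field difference => assumed static), then rescale
    the current field so its luminance matches the previous one."""
    out = np.empty_like(cur, dtype=float)
    for c in range(cur.shape[2]):
        p = prev[..., c].astype(float)
        q = cur[..., c].astype(float)
        bg = np.abs(q - p) < motion_thresh   # crude motion detection mask
        gain = p[bg].mean() / q[bg].mean() if bg.any() else 1.0
        out[..., c] = np.clip(q * gain, 0.0, 255.0)
    return out

prev = np.full((8, 8, 3), 100.0)   # previous field, constant gray
cur = prev * 1.05                  # 5% flicker brightening in every channel
out = compensate(prev, cur)        # flicker removed: back to ~100
```

Estimating the gain only on the background mask is what keeps moving objects from biasing the illuminant estimate, mirroring the motion-detection step in the abstract.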