• Keyword: database systems


Implementation of a Web-Based Early Warning System for Meteorological Hazards (기상위험 조기경보를 위한 웹기반 표출시스템 구현)

  • Kong, In Hak;Kim, Hong Joong;Oh, Jai Ho;Lee, Yang Won
    • Journal of Korean Society for Geospatial Information Science / v.24 no.4 / pp.21-28 / 2016
  • Numerical weather prediction is important for preventing meteorological disasters such as heavy rain, heat waves, and cold waves. The Korea Meteorological Administration provides a real-time special weather report, and the Rural Development Administration provides 2-day warnings of agricultural disasters for farms in a few regions. To improve early warning systems for meteorological hazards, a nationwide high-resolution dataset for weather prediction should be combined with web-based GIS. This study aims to develop a web service prototype for early warning of meteorological hazards that integrates web GIS technologies with a weather prediction database at a temporal resolution of 1 hour and a spatial resolution of 1 km. The spatially and temporally high-resolution dataset for meteorological hazards, produced by downscaling the GME model, was served via a web GIS. In addition to information about the current status of meteorological hazards, the proposed system provides hourly dong-level forecasts of meteorological hazards such as heavy rain, heat wave, and cold wave for the upcoming seven days. This system can be utilized as an operational information service for municipal governments in Korea once future work improves the accuracy of the numerical weather predictions and reduces the preprocessing time for the raster and vector datasets.
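
As a minimal sketch of the kind of grid-level classification such a system might perform, the snippet below assigns warning levels to an hourly 1 km rainfall grid. The thresholds, array shapes, and variable names are illustrative assumptions; the paper does not specify its warning criteria.

```python
import numpy as np

def classify_heavy_rain(rain_3h_mm: np.ndarray) -> np.ndarray:
    """Classify a 1 km rainfall grid into 0 = none, 1 = advisory, 2 = warning.

    The 60 mm / 110 mm per-3-hour thresholds are assumed for illustration.
    """
    levels = np.zeros(rain_3h_mm.shape, dtype=np.int8)
    levels[rain_3h_mm >= 60.0] = 1   # advisory
    levels[rain_3h_mm >= 110.0] = 2  # warning
    return levels

# A 7-day hourly forecast cube (168 hours x 100 x 100 cells); in the paper's
# system the hourly 1 km grids would be aggregated to dong-level polygons
# before being served through the web GIS.
forecast = np.random.gamma(2.0, 5.0, size=(168, 100, 100))
rain_3h = forecast[:3].sum(axis=0)   # accumulation over the first 3 hours
print(int(classify_heavy_rain(rain_3h).max()))
```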

Design and Implementation of A User-Centered Training System for Efficient IT Service Management based on ITIL (ITIL 기반의 효율적 IT 서비스 관리를 위한 사용자 중심 교육 시스템 설계 및 구현)

  • Kim, Do Sung;Lee, Nam Yong
    • KIISE Transactions on Computing Practices / v.23 no.12 / pp.651-659 / 2017
  • IT (Information Technology) focused on infrastructure technologies in the past but now focuses on IT services. Many companies strive to save costs and improve IT services. For this reason, they have pursued a good internalization of ITSM (IT Service Management) by developing ITSM systems based on ITIL (Information Technology Infrastructure Library) since the late 2000s. In particular, IT service operation is one of the structural elements of ITIL version 3 and is highly related to the internalization of ITSM. However, in spite of successful implementations of ITSM, the efficiency of IT service management has not improved, due to recurring (iterative) issues. Therefore, this study developed a user-centered training system by defining guidelines for these recurring issues and implementing a database. The implemented user-centered training system provides IT service users with regular training services that resolve recurring issues, connected to the operation part of the ITSM system. Based on the results of this study, we expect that the proposed system will contribute to efficient IT service management by overcoming the limitations of IT service operation in the ITSM system.
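
A minimal sketch of how a database of recurring-issue guidelines might drive training recommendations; the issue categories, record layout, and recurrence threshold are hypothetical, not taken from the paper.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Guideline:
    issue_category: str      # a recurring (iterative) issue type
    training_topic: str      # training content offered to users

# Hypothetical guideline records for recurring service-desk issues.
GUIDELINES = [
    Guideline("password_reset", "Self-service credential management"),
    Guideline("vpn_failure", "Remote-access configuration basics"),
]

def recommend_training(incident_categories: list, min_count: int = 3) -> list:
    """Recommend training topics for issue categories that recur often."""
    counts = Counter(incident_categories)
    frequent = {cat for cat, n in counts.items() if n >= min_count}
    return [g.training_topic for g in GUIDELINES if g.issue_category in frequent]

# Usage: feed issue categories logged by the ITSM operation module.
print(recommend_training(["vpn_failure"] * 4 + ["password_reset"]))
```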

Wavelet Thresholding Techniques to Support Multi-Scale Decomposition for Financial Forecasting Systems

  • Shin, Taeksoo;Han, Ingoo
    • Proceedings of the Korea Database Society Conference / 1999.06a / pp.175-186 / 1999
  • Detecting significant patterns in historical data is crucial to good performance, especially in time-series forecasting. Recently, data filtering methods based on multi-scale decomposition, such as wavelet analysis, have been considered more useful than other methods for handling time series that contain strong quasi-cyclical components, because wavelet analysis can theoretically extract better local information at different time intervals from the filtered data. Wavelets can process information effectively at different scales. This implies inherent support for multiresolution analysis, which suits time series that exhibit self-similar behavior across different time scales. The specific local properties of wavelets can, for example, be particularly useful for describing signals with sharp, spiky, discontinuous, or fractal structure in financial markets based on chaos theory, and they also allow the removal of noise-dependent high frequencies while preserving the signal-bearing high-frequency terms. Existing studies of wavelet analysis are increasingly being applied to many different fields. In this study, we focus on several wavelet thresholding criteria and techniques to support multi-signal decomposition methods for financial time-series forecasting, and we apply them to forecasting the Korean Won / U.S. Dollar currency market as a case study. One of the most important problems to be solved in applying such filtering is the correct choice of filter type and filter parameters: if the threshold is too small or too large, the wavelet shrinkage estimator will tend to overfit or underfit the data. The threshold is often selected arbitrarily or by adopting a certain theoretical or statistical criterion, and new and versatile techniques have recently been introduced for this problem. First, we analyze thresholding and filtering methods based on wavelet analysis that use multi-signal decomposition algorithms within neural network architectures, especially in complex financial markets. Second, by comparing the results of different filtering techniques, we present the filtering criteria of wavelet analysis that support neural network learning optimization, and we analyze the critical issues related to optimal filter design in wavelet analysis; these issues include finding the optimal filter parameters for extracting significant input features for the forecasting model. Finally, from theoretical and experimental viewpoints on the criteria for wavelet thresholding parameters, we propose the design of an optimal wavelet for representing a given signal in forecasting models, especially well-known neural network models.
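
As a hedged illustration of the thresholding choice the abstract discusses, the sketch below applies soft thresholding with the Donoho-Johnstone universal threshold using the PyWavelets library. The wavelet family, decomposition level, and noise estimator are assumptions; the paper compares several criteria rather than prescribing one.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(series: np.ndarray, wavelet: str = "db4", level: int = 3) -> np.ndarray:
    """Soft-threshold detail coefficients with the universal threshold."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    # Noise sigma estimated from the finest detail level (median absolute deviation).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    # Donoho-Johnstone universal threshold: sigma * sqrt(2 * log n).
    thr = sigma * np.sqrt(2.0 * np.log(len(series)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(series)]

# Example: filter a noisy quasi-cyclical series before feeding it to a
# forecasting model such as a neural network.
t = np.linspace(0, 8 * np.pi, 1024)
noisy = np.sin(t) + 0.3 * np.random.randn(t.size)
smooth = wavelet_denoise(noisy)
```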


Applying Emotional Information Retrieval Method to Information Appliances Design -The Use of Color Information for Mobile Emotion Retrieval System- (감성검색법을 기초로 한 정보기기 콘텐츠 디자인 연구 -색채정보를 이용한 모바일 감성검색시스템을 사례로-)

  • Kim, Don-Han;Seo, Kyung-Ho
    • Science of Emotion and Sensibility / v.13 no.3 / pp.501-510 / 2010
  • A knowledge base of emotional information is one of the key elements in implementing emotion retrieval systems for the content design of mobile devices. This study proposed a new approach to knowledge base implementation that automatically extracts color components from full-color images, and the validity of the proposed method was empirically tested. A database was built using 100 interior images as visual stimuli, and a total of 48 subjects participated in the experiment. To test the reliability of the proposed emotional information knowledge base, we first derived the recall ratio, which refers to the frequency of correct images among the retrieved images. Second, correlation analysis was performed to compare the subjects' ratings with the system's calculations. Finally, the rating comparison was used to run a paired-sample t-test. The analysis demonstrated a satisfactory recall ratio of 62.1%. A significant positive correlation (p<.01) was observed for all the emotion keywords. The paired-sample t-test found that, for all emotion keywords except "casual", the system retrieved images in order from more relevant to less relevant, and the difference was statistically significant (t(9)=5.528, p<.05). The findings of this study support that the proposed emotional information knowledge base, established only with color information automatically extracted from images, can be used effectively for visual stimulus search tasks such as retrieving commercial interior images.
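
A minimal sketch of automatic color-feature extraction of the kind such a knowledge base relies on. The coarse RGB quantization and Euclidean distance are illustrative assumptions, and the mapping from color features to emotion keywords is omitted.

```python
import numpy as np
from PIL import Image

def color_histogram(path: str, bins: int = 4) -> np.ndarray:
    """Quantize an image's RGB values into a coarse, normalized histogram."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.int64).reshape(-1, 3)
    idx = np.clip(rgb // (256 // bins), 0, bins - 1)
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    hist = np.bincount(flat, minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def retrieve(query_hist: np.ndarray, database: dict, k: int = 5) -> list:
    """Return the k image names whose histograms are closest to the query."""
    dist = {name: float(np.linalg.norm(query_hist - h))
            for name, h in database.items()}
    return sorted(dist, key=dist.get)[:k]

# Usage sketch: database = {name: color_histogram(name) for name in paths},
# then map each emotion keyword to a reference histogram and call retrieve().
```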

  • PDF

Chatbot Design Method Using Hybrid Word Vector Expression Model Based on Real Telemarketing Data

  • Zhang, Jie;Zhang, Jianing;Ma, Shuhao;Yang, Jie;Gui, Guan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.4 / pp.1400-1418 / 2020
  • In the development of commercial promotion, the chatbot is known as one of the significant applications of natural language processing (NLP). Conventional design methods use the bag-of-words (BOW) model alone, based on Google databases and other online corpora. For one thing, in the bag-of-words model the vectors are unrelated to one another; even though this method is friendly to discrete features, it is not conducive to the machine understanding continuous statements, because the connections between words are lost in the encoded word vectors. For another, existing methods are tested on state-of-the-art online corpora but are hard to apply to real applications such as telemarketing data. In this paper, we propose an improved chatbot design method using a hybrid of the bag-of-words model and the skip-gram model based on real telemarketing data. Specifically, we first collect real data in the telemarketing field and perform data cleaning and data classification on the constructed corpus. Second, the word representation adopts a hybrid of the bag-of-words model and the skip-gram model. The skip-gram model maps synonyms to nearby points in the vector space, so the correlation between words is expressed and the amount of information contained in the word vectors is increased, making up for the shortcomings of using the bag-of-words model alone. Third, we use term frequency-inverse document frequency (TF-IDF) weighting to increase the weight of key words and then output the final word representation. Finally, the answer is produced using a hybrid of a retrieval model and a generative model: the retrieval model can accurately answer in-domain questions, while the generative model supplements open-domain questions, with the final reply completed by long short-term memory (LSTM) training and prediction. Experimental results show that the hybrid word vector expression model can improve the accuracy of responses and that the whole system can communicate with humans.
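
A toy sketch of the hybrid word representation described above: skip-gram vectors weighted by inverse document frequency. Using the IDF weight alone (rather than a full per-document TF-IDF) is a simplification, and the corpus and parameters are placeholders.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer

# Tiny pre-tokenized corpus standing in for the cleaned telemarketing data.
corpus = [["refund", "policy", "question"], ["product", "price", "question"]]

# Skip-gram vectors (sg=1) place synonyms near each other in vector space.
w2v = Word2Vec(corpus, vector_size=50, sg=1, min_count=1, window=2)

# IDF weights raise the influence of key domain words.
tfidf = TfidfVectorizer(analyzer=lambda doc: doc)  # accept pre-tokenized input
tfidf.fit(corpus)
idf = dict(zip(tfidf.get_feature_names_out(), tfidf.idf_))

def sentence_vector(tokens: list) -> np.ndarray:
    """IDF-weighted average of skip-gram word vectors."""
    vecs = [idf.get(t, 1.0) * w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

print(sentence_vector(["refund", "question"])[:5])
```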

Aerodynamic Design of EAV Propeller using a Multi-Level Design Optimization Framework (다단 최적 설계 프레임워크를 활용한 전기추진 항공기 프로펠러 공력 최적 설계)

  • Kwon, Hyung-Il;Yi, Seul-Gi;Choi, Seongim;Kim, Keunbae
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.41 no.3 / pp.173-184 / 2013
  • A multi-level design optimization framework for the aerodynamic design of rotary wings, such as propellers and helicopter rotor blades, is presented in this study. The strategy of the proposed framework is to enhance aerodynamic performance by sequentially applying planform and sectional design optimization. In the first level, a planform design, we used a genetic algorithm and blade element momentum theory (BEMT) based on a two-dimensional aerodynamic database to find optimal planform variables. After the initial planform design, the local flow conditions of the blade sections are analyzed using high-fidelity CFD methods. In the next level, a sectional design optimization is conducted using two-dimensional Navier-Stokes analysis and a gradient-based optimization algorithm. Once optimal airfoil shapes are determined at several spanwise locations, the planform design is performed again. Through this iterative design process, both an optimal flow condition and an optimal shape of an EAV propeller blade are obtained. To validate the optimized propeller blade design, it was tested in a wind-tunnel facility under different flow conditions. The measured efficiency gain is slightly less than the 7% improvement predicted by the proposed design framework but is still satisfactory for enhancing the aerodynamic performance of the EAV system.
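
To make the first-level planform evaluation concrete, here is a bare-bones blade-element thrust integration. It neglects induced velocity and drag and substitutes a thin-airfoil lift slope for the paper's two-dimensional aerodynamic database, so it is an assumption-laden sketch rather than the authors' BEMT implementation.

```python
import numpy as np

def bemt_thrust(radii, chords, twists, omega, v_inf,
                rho=1.225, blades=2, cl_alpha=2 * np.pi):
    """Integrate sectional thrust along the blade span.

    Induced velocity and profile drag are neglected, and a thin-airfoil
    lift slope replaces the 2D aerodynamic database, so this is only a
    first-cut planform evaluator of the kind a GA could call repeatedly."""
    dr = np.gradient(radii)                  # width of each blade element
    u_t = omega * radii                      # tangential velocity per station
    phi = np.arctan2(v_inf, u_t)             # inflow angle
    alpha = twists - phi                     # local angle of attack (rad)
    w2 = u_t ** 2 + v_inf ** 2               # resultant velocity squared
    cl = cl_alpha * alpha                    # sectional lift coefficient
    dT = 0.5 * rho * w2 * chords * cl * np.cos(phi) * dr
    return blades * dT.sum()

# Example: a 0.2-0.8 m blade, 20 stations, 2000 rpm, 20 m/s axial inflow.
r = np.linspace(0.2, 0.8, 20)
thrust = bemt_thrust(r, chords=np.full(20, 0.07),
                     twists=np.radians(np.linspace(25, 8, 20)),
                     omega=2000 * 2 * np.pi / 60, v_inf=20.0)
print(f"{thrust:.1f} N")
```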

Susceptibility Mapping of Umyeonsan Using Logistic Regression (LR) Model and Post-validation through Field Investigation (로지스틱 회귀 모델을 이용한 우면산 산사태 취약성도 제작 및 현장조사를 통한 사후검증)

  • Lee, Sunmin;Lee, Moung-Jin
    • Korean Journal of Remote Sensing / v.33 no.6_2 / pp.1047-1060 / 2017
  • In recent years, global warming has continued and abnormal weather phenomena have occurred frequently. In the 21st century in particular, the intensity and frequency of hydrological disasters are increasing due to regional trends in precipitation. Since the damage caused by disasters in urban areas is likely to be extreme, it is necessary to prepare landslide susceptibility maps to predict and prepare for future damage. Therefore, in this study, we analyzed landslide susceptibility using a logistic regression model and assessed the post-landslide management plan through a field survey. The landslide areas were extracted from aerial photographs and interpretation of field survey data collected by the local government at the time of the landslides. Landslide-related factors were extracted from topographical maps generated from aerial photographs and from a forest map. A logistic regression (LR) model was used within a geographic information system (GIS) to identify areas where landslides are likely to occur. A landslide susceptibility map was constructed by applying the LR model to a spatial database built from a total of 13 factors affecting landslides. A validation accuracy of 77.79% was derived using the receiver operating characteristic (ROC) curve for the logistic model. In addition, a field investigation was performed to validate how the landslides were managed afterward. The results of this study can provide a scientific basis for policy recommendations on urban landslide management for municipal governments.
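
A minimal sketch of LR-based susceptibility scoring on synthetic data; the factor values, labels, and train/test split are placeholders for the paper's spatial database of 13 factors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# X: one row per grid cell, one column per landslide-related factor
# (the paper uses 13 factors, e.g. slope, aspect, forest type).
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 13))
y = rng.integers(0, 2, size=5000)          # 1 = mapped landslide cell

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Susceptibility = predicted probability of the landslide class; mapping
# these scores back onto the grid cells yields the susceptibility map.
scores = model.predict_proba(X_te)[:, 1]
print("ROC AUC:", roc_auc_score(y_te, scores))
```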

Customized Configuration with Template and Options (맞춤구성을 위한 템플릿과 Option 기반의 추론)

  • Lee, Hyun-Jung;Lee, Jae-Kyu
    • Journal of Intelligence and Information Systems / v.8 no.1 / pp.119-139 / 2002
  • In electronic catalogs, each item is represented as an independent unit, while the parts of an item can be composed into a higher level of functionality. Thus, search in this kind of product database is limited to the retrieval of the most similar standard commodities. However, many industrial products need optional parts configured to fulfill the required specifications. Since there are many paths to the required specifications, we need a search system that works via a configuration process. In this system, we adopt a two-phase approach: the first phase finds the most similar template, and the second phase adjusts the template specifications toward the required set of specifications using a Constraint and Rule Satisfaction Problem approach. There is no guarantee that the most similar template leads to the most desirable configuration, so the search system needs backtracking capability, allowing the search to stop at a satisfactory local optimum. This framework is applied to the configuration of computers and peripherals. Template-based reasoning is basically the same as case-based reasoning. The required set of specifications is represented by a list of criteria and matched against product specifications to find the closest ones. To measure the distance, we developed a thesaurus of values that can identify the meaning of numbers, symbols, and words. With this configuration, the performance of the search-by-configuration algorithm is evaluated in terms of feasibility and admissibility.
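
A toy sketch of the two-phase search: nearest-template retrieval followed by option adjustment with backtracking. The numeric distance stands in for the paper's thesaurus-based similarity, and all data structures are hypothetical.

```python
def nearest_template(required: dict, templates: list) -> dict:
    """Phase 1: retrieve the template whose specs are closest to the request."""
    def distance(t):
        return sum(abs(t["specs"].get(k, 0) - v) for k, v in required.items())
    return min(templates, key=distance)

def configure(template: dict, required: dict, options: list):
    """Phase 2: add options until constraints hold, backtracking on failure."""
    def satisfied(chosen):
        specs = dict(template["specs"])
        for opt in chosen:
            for k, v in opt["delta"].items():
                specs[k] = specs.get(k, 0) + v
        return all(specs.get(k, 0) >= v for k, v in required.items())

    def search(i, chosen):
        if satisfied(chosen):
            return chosen
        if i == len(options):
            return None                               # dead end: backtrack
        return (search(i + 1, chosen + [options[i]])  # take option i
                or search(i + 1, chosen))             # or skip it

    return search(0, [])

# Usage sketch with hypothetical PC templates and options:
templates = [{"name": "desktop", "specs": {"ram_gb": 8, "ssd_gb": 256}}]
options = [{"name": "ram_kit", "delta": {"ram_gb": 8}}]
best = nearest_template({"ram_gb": 16, "ssd_gb": 256}, templates)
print(configure(best, {"ram_gb": 16, "ssd_gb": 256}, options))
```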


STANDARDISATION OF NIR INSTRUMENTS, INFLUENCE OF THE CALIBRATION METHODS AND THE SIZE OF THE CLONING SET

  • Dardenne, Pierre;Cowe, Ian-A.;Berzaghi, Paolo;Flinn, Peter-C.;Lagerholm, Martin;Shenk, John-S.;Westerhaus, Mark-O.
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference / 2001.06a / pp.1121-1121 / 2001
  • A previous study (Berzaghi et al., 2001) evaluated the performance of three calibration methods, modified partial least squares (MPLS), local PLS (LOCAL), and artificial neural networks (ANN), on the prediction of the chemical composition of forages, using a large NIR database. The study used forage samples (n=25,977) from Australia, Europe (Belgium, Germany, Italy and Sweden) and North America (Canada and U.S.A.) with reference values for moisture, crude protein, and neutral detergent fibre content. The spectra of the samples were collected using 10 different Foss NIR Systems instruments, only some of which had been standardized to one master instrument. The aim of the present study was to evaluate the behaviour of these calibration methods when predicting the same samples measured on different instruments. Twenty-two sealed samples of different kinds of forages were measured in duplicate on seven instruments (one master and six slaves). Three sets of near infrared spectra (1100 to 2500 nm) were created. The first set consisted of the spectra in their original form (unstandardized); the second set was created using a single-sample standardization (Clone1); the third was created using a multiple-sample procedure (Clone6). WinISI software (Infrasoft International Inc., Port Matilda, PA, USA) was used to perform both types of standardization. Clone1 applies just a photometric offset between a "master" instrument and a "slave" instrument; Clone6 modifies both the X-axis, through a wavelength adjustment, and the Y-axis, through a simple regression wavelength by wavelength. The Clone1 procedure used one sample spectrally close to the centre of the population, while the six samples used in Clone6 were selected to cover the range of spectral variation in the sample set. The remaining fifteen samples were used to evaluate the performance of the different models. The values for dry matter, protein, and neutral detergent fibre predicted by the master instrument were considered "reference Y values" when computing the statistics RMSEP, SEPC, R, bias, slope, mean GH (global Mahalanobis distance), and mean NH (neighbourhood Mahalanobis distance) for the six slave instruments. From the results we conclude that i) all the calibration techniques gave satisfactory results after standardization; without standardization, the predicted data from the slaves would have required slope and bias correction to produce acceptable statistics; ii) standardization reduced the errors for all calibration methods and parameters tested, reducing not only systematic biases but also random errors; iii) standardization removed slope effects that were significantly different from 1.0 in most cases; iv) Clone1 and Clone6 gave similar results except for NDF, where Clone6 gave better RMSEP values than Clone1; v) GH and NH were reduced by half, even for very large data sets including unstandardized spectra.
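
As a hedged sketch of the two standardization styles described above: a single-sample photometric offset (Clone1-style) and a per-wavelength slope/intercept regression fitted from several samples (Clone6-style, omitting the X-axis wavelength adjustment). The array layouts are assumptions, not WinISI's implementation.

```python
import numpy as np

def clone1(slave_spec: np.ndarray, master_ref: np.ndarray,
           slave_ref: np.ndarray) -> np.ndarray:
    """Single-sample standardization: add the photometric offset between
    the master's and the slave's spectrum of one standardization sample."""
    return slave_spec + (master_ref - slave_ref)

def clone6(slave_spec: np.ndarray, master_refs: np.ndarray,
           slave_refs: np.ndarray) -> np.ndarray:
    """Multi-sample standardization: fit a simple regression wavelength by
    wavelength from several standardization samples (X-axis shift omitted)."""
    corrected = np.empty_like(slave_spec, dtype=float)
    for w in range(slave_spec.shape[-1]):
        slope, intercept = np.polyfit(slave_refs[:, w], master_refs[:, w], 1)
        corrected[..., w] = intercept + slope * slave_spec[..., w]
    return corrected

# Example with synthetic spectra: 6 standardization samples, 700 wavelengths.
rng = np.random.default_rng(1)
master_refs = rng.normal(size=(6, 700))
slave_refs = 0.98 * master_refs + 0.01 + 0.001 * rng.normal(size=(6, 700))
fixed = clone6(slave_refs[0], master_refs, slave_refs)
```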


A Study on the Development of a Data Linkage Method for a Performance-Based Port Facility Maintenance Decision-Making System (성능기반의 항만시설물 유지관리 의사결정체계 개발을 위한 데이터 연계방안 도출에 관한 연구)

  • Kim, Yong-Hee;Kang, Yoon-Koo
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.11 / pp.9-18 / 2020
  • Recently, studies on integrated management platforms and performance-based maintenance decision-making systems have been conducted for the efficient management of port facilities. The purpose of this study was to support the performance-based management and operation of port facilities and to ensure long-term durability and effective budget execution. To this end, it is essential to secure the basic data to be analyzed in the integrated platform and decision-making system. This study derived data linkage measures to secure port facility design and management information. The target of the data linkage was the POMS (Port Facility Management System) currently operated by the MOF (Ministry of Oceans and Fisheries). To derive the data linkage, we analyzed the database of POMS and selected the data required for the integrated platform and decision-making system. The final data linkage targets were determined by compiling the requirements of the relevant experts and selecting three final groups: port and facility information, management information, and user information. As a result, an API interface design and a data linkage framework were prepared for the detailed linked data of POMS. The provision of real-time data linkage between POMS and the integrated platform is expected to improve the operational efficiency of the integrated platform.
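
A hypothetical sketch of record types for the three linked data groups and a JSON-ready payload shape; the field names are illustrative assumptions, not the actual POMS schema or API.

```python
from dataclasses import dataclass, asdict

# Hypothetical record types mirroring the three linked data groups
# selected from POMS; field names are illustrative only.
@dataclass
class PortFacility:          # port and facility information
    facility_id: str
    port_name: str
    facility_type: str

@dataclass
class MaintenanceRecord:     # management information
    facility_id: str
    inspected_on: str        # ISO date of the inspection
    condition_grade: str     # e.g. a performance grade "A".."E"

@dataclass
class SystemUser:            # user information
    user_id: str
    organization: str

def to_api_payload(facility: PortFacility, records: list) -> dict:
    """Shape linked data as a JSON-ready payload for the integrated platform."""
    return {"facility": asdict(facility),
            "maintenance": [asdict(r) for r in records]}

wharf = PortFacility("F-001", "Busan", "wharf")
print(to_api_payload(wharf, [MaintenanceRecord("F-001", "2020-05-04", "B")]))
```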