Kong, In Hak; Kim, Hong Joong; Oh, Jai Ho; Lee, Yang Won / Journal of Korean Society for Geospatial Information Science / v.24 no.4 / pp.21-28 / 2016
Numerical weather prediction is important for preventing meteorological disasters such as heavy rain, heat waves, and cold waves. The Korea Meteorological Administration provides real-time special weather reports, and the Rural Development Administration provides two-day warnings of agricultural disasters for farms in a few regions. To improve early warning systems for meteorological hazards, a nationwide high-resolution weather prediction dataset should be combined with web-based GIS. This study aims to develop a web service prototype for early warning of meteorological hazards that integrates web GIS technologies with a weather prediction database at a temporal resolution of 1 hour and a spatial resolution of 1 km. The spatially and temporally high-resolution hazard dataset, produced by downscaling the GME global model, was serviced via a web GIS. In addition to information on the current status of meteorological hazards, the proposed system provides hourly dong-level forecasts of hazards such as heavy rain, heat wave, and cold wave for the upcoming seven days. The system can serve as an operational information service for municipal governments in Korea once future work improves the accuracy of the numerical weather predictions and shortens the preprocessing time for the raster and vector datasets.
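A minimal sketch of the point-query step such a web GIS backend would need: sampling one hazard variable from an hourly 1 km forecast grid at a latitude/longitude. The file name, grid origin, and variable layout below are hypothetical, not from the paper.

```python
import numpy as np

# Hypothetical downscaled forecast stack: (168 lead hours, ny, nx)
GRID = np.load("hourly_1km_heatwave.npy")
LAT0, LON0, CELL = 38.6, 124.6, 0.01   # assumed upper-left corner and ~1 km cell size

def forecast_at(lat: float, lon: float, hour: int) -> float:
    """Return the forecast value for one dong-level point at a given lead hour."""
    row = int((LAT0 - lat) / CELL)     # rows count down from the northern edge
    col = int((lon - LON0) / CELL)
    return float(GRID[hour, row, col])

print(forecast_at(37.57, 126.98, 24))  # e.g. central Seoul, +24 h lead time
```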
IT (Information Technology) focused on infrastructure technologies in the past but now focuses on IT services. Many companies strive to save costs and improve their IT services. For this reason, since the late 2000s they have worked to internalize ITSM (IT Service Management) by developing ITSM systems based on ITIL (Information Technology Infrastructure Library). In particular, IT service operation is one of the structural elements of ITIL version 3 and is closely related to the internalization of ITSM. However, despite successful ITSM implementations, the efficiency of IT service management has not improved because of recurring issues. This study therefore developed a user-centered training system by defining guidelines for recurring issues and implementing a supporting database. Connected to the operation part of the ITSM system, the training system provides IT service users with regular training that resolves recurring issues. Based on the results of this study, we expect the proposed system to contribute to efficient IT service management by overcoming the limitations of IT service operation in the ITSM system.
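A sketch of the kind of recurring-issue detection the training database could be built from, assuming a hypothetical incident table exported from the ITSM operation module; the categories and threshold are illustrative only.

```python
import pandas as pd

# Stand-in for an incident export from the ITSM operation module
incidents = pd.DataFrame({
    "category": ["password reset", "printer", "password reset", "vpn", "password reset"],
    "user": ["u1", "u2", "u3", "u1", "u2"],
})

RECUR_THRESHOLD = 3  # assumed cut-off for flagging an "iterative issue"
counts = incidents.groupby("category").size()
recurring = counts[counts >= RECUR_THRESHOLD]
print(recurring)     # categories to target with user training content
```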
Proceedings of the Korea Database Society Conference / 1999.06a / pp.175-186 / 1999
Detecting the features of significant patterns in historical data is crucial to good performance, especially in time-series forecasting. Recently, data filtering (or multi-scale decomposition) methods such as wavelet analysis have been considered more useful than other methods for handling time series that contain strong quasi-cyclical components, because wavelet analysis extracts better local information from the filtered data across different time intervals. Wavelets can process information effectively at different scales, which implies inherent support for multiresolution analysis and suits time series that exhibit self-similar behavior across time scales. The local properties of wavelets are particularly useful for describing signals with sharp, spiky, discontinuous, or fractal structure, as found in financial markets viewed through chaos theory, and they allow the removal of noise-dependent high frequencies while conserving the signal-bearing high-frequency terms. Wavelet analysis is increasingly being applied to many different fields. In this study, we focus on wavelet thresholding criteria and techniques that support multi-signal decomposition for financial time-series forecasting, and we apply them to forecasting the Korean Won / U.S. Dollar currency market as a case study. One of the most important problems in applying such filtering is the correct choice of filter type and filter parameters: if the threshold is too small or too large, the wavelet shrinkage estimator will tend to overfit or underfit the data. The threshold is often selected arbitrarily or by adopting a theoretical or statistical criterion, and new, versatile techniques have recently been introduced for this problem. We first analyze thresholding and filtering methods based on wavelet analysis that use multi-signal decomposition algorithms within neural network architectures for complex financial markets. Second, by comparing the results of different filtering techniques, we present the filtering criteria of wavelet analysis that support neural network learning optimization and analyze the critical issues in optimal filter design, namely finding the optimal filter parameters for extracting significant input features for the forecasting model. Finally, from theoretical and experimental viewpoints on wavelet thresholding parameters, we propose the design of an optimal wavelet for representing a given signal in forecasting models, in particular well-known neural network models.
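A sketch of soft wavelet thresholding with the universal (VisuShrink) threshold, one member of the family of thresholding criteria the study compares; the synthetic series below stands in for the KRW/USD data, which is not reproduced here.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
n = 512
# Toy quasi-cyclical signal plus noise, standing in for the currency series
signal = np.sin(np.linspace(0, 8 * np.pi, n)) + 0.3 * rng.standard_normal(n)

coeffs = pywt.wavedec(signal, "db4", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise estimate from finest details
thresh = sigma * np.sqrt(2 * np.log(n))          # universal (Donoho-Johnstone) threshold
denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                                 for c in coeffs[1:]]
denoised = pywt.waverec(denoised_coeffs, "db4")  # filtered series fed to the forecaster
```

If the threshold is set larger, more detail coefficients shrink to zero and the reconstruction underfits; smaller, and noise leaks through, which is exactly the overfit/underfit trade-off the abstract describes.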
The knowledge base of emotional information is one of the key elements in implementing emotion retrieval systems for the content design of mobile devices. This study proposed a new approach to knowledge base implementation that automatically extracts color components from full-color images, and the validity of the proposed method was empirically tested. A database was developed using 100 interior images as visual stimuli, and a total of 48 subjects participated in the experiment. To test the reliability of the proposed emotional information knowledge base, we first derived the recall ratio, the frequency of correct images among the retrieved images. Second, correlation analysis was performed to compare the subjects' ratings with the system's calculations. Finally, the rating comparison was used to run a paired-sample t-test. The analysis showed a satisfactory recall ratio of 62.1%, and a significant positive correlation (p<.01) was observed for all the emotion keywords. The paired-sample t-test found that for all emotion keywords except "casual", the images were retrieved in order from more relevant to less relevant, and the difference was statistically significant (t(9)=5.528, p<.05). These findings support that the proposed emotional information knowledge base, built only from color information automatically extracted from images, can be used effectively for visual stimulus search tasks such as commercial interior image retrieval.
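A minimal sketch of the automatic color-extraction step, assuming the knowledge base stores each image's dominant colors; the color-to-emotion-keyword mapping is the empirical contribution of the study and is not reproduced here. The file name is hypothetical.

```python
from collections import Counter
from PIL import Image

def dominant_colors(path: str, k: int = 5, bins: int = 32):
    """Return the k most frequent quantized RGB colors in an image."""
    img = Image.open(path).convert("RGB").resize((64, 64))  # downsample for speed
    step = 256 // bins
    quantized = [(r // step * step, g // step * step, b // step * step)
                 for r, g, b in img.getdata()]
    return Counter(quantized).most_common(k)

# colors = dominant_colors("interior_001.jpg")  # hypothetical stimulus image
```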
KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.4 / pp.1400-1418 / 2020
With the growth of commercial promotion, the chatbot has become a significant application of natural language processing (NLP). Conventional designs use the bag-of-words (BOW) model alone, based on the Google database and other online corpora. First, in the bag-of-words model the vectors are unrelated to one another; although this suits discrete features, it hinders the machine's understanding of continuous statements because the connections between words are lost in the encoded word vectors. Second, existing methods are tested on state-of-the-art online corpora but are hard to apply to real applications such as telemarketing data. In this paper, we propose an improved chatbot design that uses a hybrid of the bag-of-words model and the skip-gram model, based on real telemarketing data. Specifically, we first collect real data in the telemarketing field and perform data cleaning and classification on the constructed corpus. Second, the word representation adopts a hybrid of the bag-of-words and skip-gram models. The skip-gram model maps synonyms to nearby points in the vector space, so the correlation between words is expressed and the amount of information in the word vector increases, making up for the shortcomings of the bag-of-words model alone. Third, we use term frequency-inverse document frequency (TF-IDF) weighting to raise the weight of key words and output the final word representation. Finally, answers are produced by a hybrid of a retrieval model and a generative model: the retrieval model answers in-domain questions accurately, while the generative model covers open-domain questions, with the final reply completed by long short-term memory (LSTM) training and prediction. Experimental results show that the hybrid word vector representation improves the accuracy of responses and that the whole system can communicate with humans.
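A sketch of the hybrid representation, assuming gensim and scikit-learn: skip-gram vectors are averaged per sentence with TF-IDF-derived weights, so related words stay close in vector space while key terms dominate the encoding. The toy corpus is illustrative, not the telemarketing data.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ["what is the monthly fee", "how do i cancel the plan", "the fee is too high"]
tokens = [s.split() for s in corpus]

w2v = Word2Vec(tokens, vector_size=50, window=3, min_count=1, sg=1)  # sg=1 -> skip-gram
tfidf = TfidfVectorizer().fit(corpus)
idf = dict(zip(tfidf.get_feature_names_out(), tfidf.idf_))           # per-word weights

def sentence_vector(words):
    """TF-IDF-weighted average of skip-gram word vectors."""
    pairs = [(w2v.wv[w], idf.get(w, 1.0)) for w in words if w in w2v.wv]
    vecs, weights = zip(*pairs)
    return np.average(vecs, axis=0, weights=weights)

print(sentence_vector("cancel the plan".split()).shape)  # (50,)
```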
Journal of the Korean Society for Aeronautical & Space Sciences / v.41 no.3 / pp.173-184 / 2013
A multi-level design optimization framework for the aerodynamic design of rotary wings such as propellers and helicopter rotor blades is presented in this study. The strategy of the proposed framework is to enhance aerodynamic performance by sequentially applying planform and sectional design optimization. In the first level, the planform design uses a genetic algorithm and blade element momentum theory (BEMT) based on a two-dimensional aerodynamic database to find optimal planform variables. After the initial planform design, the local flow conditions of the blade sections are analyzed using high-fidelity CFD methods. In the next level, a sectional design optimization is conducted using two-dimensional Navier-Stokes analysis and a gradient-based optimization algorithm. Once the optimal airfoil shapes are determined at several spanwise locations, the planform design is performed again. Through this iterative design process, both an optimal flow condition and an optimal shape of an EAV propeller blade are obtained. To validate the optimized propeller blade design, it was tested in a wind-tunnel facility under different flow conditions. The measured efficiency gain was slightly less than the 7% improvement predicted by the proposed design framework but was still sufficient to enhance the aerodynamic performance of the EAV system.
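A minimal sketch of the first-level planform search, with a toy surrogate standing in for the BEMT-plus-database evaluation described in the paper; the design variables, bounds, and GA settings are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)
BOUNDS = np.array([[0.05, 0.15], [10.0, 40.0]])  # e.g. chord [m], twist [deg], assumed

def bemt_efficiency(x):
    """Placeholder for the BEMT + 2-D aerodynamic-database evaluation."""
    chord, twist = x
    return -(chord - 0.1) ** 2 - 0.001 * (twist - 25.0) ** 2  # toy surrogate

pop = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(40, 2))   # initial population
for _ in range(100):                                          # generations
    fitness = np.array([bemt_efficiency(x) for x in pop])
    parents = pop[np.argsort(fitness)[-20:]]                  # truncation selection
    children = parents[rng.integers(0, 20, 40)] + rng.normal(0, 0.01, (40, 2))
    pop = np.clip(children, BOUNDS[:, 0], BOUNDS[:, 1])       # mutate within bounds
print(pop[np.argmax([bemt_efficiency(x) for x in pop])])      # best planform variables
```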
In recent years, global warming has continued and abnormal weather phenomena have occurred frequently; especially in the 21st century, the intensity and frequency of hydrological disasters have increased with regional trends in precipitation. Since damage caused by disasters in urban areas is likely to be extreme, landslide susceptibility maps are needed to predict and prepare for future damage. Therefore, in this study we analyzed landslide vulnerability using a logistic regression model and assessed the post-landslide management plan through a field survey. The landslide areas were extracted from aerial photographs and from interpretation of the field survey data collected by the local government at the time of the landslides. Landslide-related factors were extracted from topographical maps generated from aerial photographs and from the forest map. A logistic regression (LR) model was used within a geographic information system (GIS) to identify areas where landslides are likely to occur. A landslide susceptibility map was constructed by applying the LR model to a spatial database of 13 factors affecting landslides. A validation accuracy of 77.79% was obtained using the receiver operating characteristic (ROC) curve for the logistic model. In addition, a field investigation was performed to examine how the landslides were managed afterwards. The results of this study can provide a scientific basis for policy recommendations on urban landslide management by municipal governments.
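A sketch of the LR susceptibility step, assuming a table with the 13 conditioning factors as columns and a landslide/no-landslide label per cell; the synthetic data below merely stands in for the real spatial database.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 13))    # stand-in for slope, aspect, forest type, ...
y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
prob = model.predict_proba(X_te)[:, 1]          # per-cell susceptibility index
print(f"AUC = {roc_auc_score(y_te, prob):.3f}")  # ROC-based validation, as in the paper
```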
In electronic catalogs, each item is represented as an independent unit, while the parts of an item can be composed into a higher level of functionality. The search over this kind of product database is thus limited to retrieving the most similar standard commodities. However, many industrial products need optional parts configured to fulfill the required specifications, and since there are many paths to the required specifications, a search system that works via the configuration process is needed. Our system adopts a two-phase approach: the first phase finds the most similar template, and the second phase adjusts the template specifications toward the required set of specifications using the Constraint and Rule Satisfaction Problem approach. Since there is no guarantee that the most similar template yields the most desirable configuration, the search system needs backtracking capability, so the search can stop at a satisfactory local optimum. The framework is applied to the configuration of computers and peripherals. Template-based reasoning is essentially case-based reasoning: the required set of specifications is represented by a list of criteria and matched with the product specifications to find the closest ones. To measure the distance, we developed a thesaurus of values that can identify the meaning of numbers, symbols, and words. With this configuration, the performance of the search-by-configuration algorithm is evaluated in terms of feasibility and admissibility.
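A minimal sketch of the two-phase search: phase one retrieves the nearest template by a simple distance over the specification criteria, and phase two adjusts optional parts with backtracking until the constraints are satisfied. The templates, options, and constraint below are all hypothetical.

```python
TEMPLATES = {
    "office-pc": {"ram_gb": 8, "ssd_gb": 256, "gpu": 0},
    "workstation": {"ram_gb": 32, "ssd_gb": 512, "gpu": 1},
}
OPTIONS = {"ram_gb": [8, 16, 32, 64], "ssd_gb": [256, 512, 1024]}

def distance(spec, template):
    """Toy stand-in for the thesaurus-of-values distance measure."""
    return sum(abs(spec[k] - template[k]) for k in spec if k in template)

def configure(spec, constraint):
    base = min(TEMPLATES.values(), key=lambda t: distance(spec, t))  # phase 1
    def backtrack(cfg, keys):                                        # phase 2
        if not keys:                       # all parts fixed: test the constraint
            return cfg if constraint(cfg) else None
        for value in OPTIONS[keys[0]]:     # try each option, backtrack on failure
            result = backtrack({**cfg, keys[0]: value}, keys[1:])
            if result:
                return result
        return None
    return backtrack(dict(base), list(OPTIONS))

print(configure({"ram_gb": 24, "ssd_gb": 512},
                lambda c: c["ram_gb"] + c["ssd_gb"] / 100 <= 40))
```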
Proceedings of the Korean Society of Near Infrared Spectroscopy Conference / 2001.06a / pp.1121-1121 / 2001
A previous study (Berzaghi et al., 2001) evaluated the performance of three calibration methods, modified partial least squares (MPLS), local PLS (LOCAL) and artificial neural networks (ANN), for predicting the chemical composition of forages using a large NIR database. That study used forage samples (n=25,977) from Australia, Europe (Belgium, Germany, Italy and Sweden) and North America (Canada and U.S.A.) with reference values for moisture, crude protein and neutral detergent fibre content. The spectra were collected on 10 different Foss NIR Systems instruments, only some of which had been standardized to one master instrument. The aim of the present study was to evaluate the behaviour of these calibration methods when predicting the same samples measured on different instruments. Twenty-two sealed samples of different kinds of forages were measured in duplicate on seven instruments (one master and six slaves). Three sets of near-infrared spectra (1100 to 2500 nm) were created. The first set consisted of the spectra in their original (unstandardized) form; the second was created using a single-sample standardization (Clone1); the third using a multiple-sample procedure (Clone6). WinISI software (Infrasoft International Inc., Port Matilda, PA, USA) was used to perform both types of standardization. Clone1 applies only a photometric offset between a “master” instrument and a “slave” instrument, whereas Clone6 modifies both the X-axis, through a wavelength adjustment, and the Y-axis, through a simple regression wavelength by wavelength. The Clone1 procedure used one sample spectrally close to the centre of the population; the six samples used in Clone6 were selected to cover the range of spectral variation in the sample set. The remaining fifteen samples were used to evaluate the performance of the different models. The values for dry matter, protein and neutral detergent fibre predicted by the master instrument were taken as the “reference Y values” when computing RMSEP, SEPC, R, bias, slope, mean GH (global Mahalanobis distance) and mean NH (neighbourhood Mahalanobis distance) for the six slave instruments. From the results we conclude that: i) all the calibration techniques gave satisfactory results after standardization, whereas without standardization the predictions from the slaves would have required slope and bias correction to produce acceptable statistics; ii) standardization reduced the errors for all calibration methods and parameters tested, reducing not only systematic biases but also random errors; iii) standardization removed slope effects that differed significantly from 1.0 in most cases; iv) Clone1 and Clone6 gave similar results except for NDF, where Clone6 gave better RMSEP values than Clone1; and v) GH and NH were reduced by half, even with very large data sets including unstandardized spectra.
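A sketch of the two standardization ideas, assuming master and slave spectra as numpy arrays on a common wavelength grid: a Clone1-style photometric offset fitted from one sample, and a Clone6-style per-wavelength regression fitted from several samples (the wavelength-axis adjustment of Clone6 is omitted for brevity). This is an illustration of the concepts, not the WinISI implementation.

```python
import numpy as np

def clone1_offset(master_one, slave_one):
    """Single-sample standardization: a photometric offset from one sample."""
    offset = master_one - slave_one              # per-wavelength bias
    return lambda slave_spectrum: slave_spectrum + offset

def clone6_regression(master_set, slave_set):
    """Multi-sample standardization: slope and intercept fitted per wavelength."""
    n_wavelengths = master_set.shape[1]
    slope = np.empty(n_wavelengths)
    intercept = np.empty(n_wavelengths)
    for j in range(n_wavelengths):               # simple regression, wavelength by wavelength
        slope[j], intercept[j] = np.polyfit(slave_set[:, j], master_set[:, j], 1)
    return lambda slave_spectrum: slope * slave_spectrum + intercept
```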
Journal of the Korea Academia-Industrial cooperation Society / v.21 no.11 / pp.9-18 / 2020
Recently, studies on an integrated management platform and a performance-based maintenance decision-making system have been conducted for the efficient management of port facilities. The purpose of these efforts is to manage and operate port facilities on a performance basis and to support long-term durability and budget execution, so securing the basic data to be analyzed in the integrated platform and decision-making system is essential. This study derived data linkage measures to secure port facility design and management information. The target of the data linkage was POMS (Port Facility Management System), currently operated by the MOF (Ministry of Oceans and Fisheries). To derive the linkage, the POMS database was analyzed and the data required for operating the integrated platform and decision-making system were selected. The final linkage targets were determined by compiling the requirements of the relevant experts and selecting three groups: port and facility information, management information, and user information. As a result, an API interface design was prepared for the detailed linked data and the data linkage framework for POMS. Providing real-time data linkage between POMS and the integrated platform is expected to improve the platform's operational efficiency.
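A minimal sketch of what a linkage interface per data group could look like, assuming a REST-style endpoint for one of the three groups; the route name, fields, and records are hypothetical, not the actual POMS API design.

```python
from dataclasses import dataclass, asdict
from flask import Flask, jsonify

app = Flask(__name__)

@dataclass
class FacilityRecord:
    facility_id: str
    port: str
    inspected: str   # last inspection date, ISO 8601

FACILITIES = [FacilityRecord("F-001", "Busan", "2020-03-15")]  # stand-in data

@app.route("/api/v1/facilities")
def facilities():
    """Serve port/facility records for real-time pickup by the integrated platform."""
    return jsonify([asdict(f) for f in FACILITIES])

# app.run(port=5000)  # the integrated platform would poll endpoints like this one
```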