• Title/Summary/Keyword: optimization techniques

Search results: 1,385

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.63-83 / 2019
  • Investors prefer to look for trading points based on chart patterns rather than on complex analyses such as corporate intrinsic-value analysis or technical indicator analysis. However, pattern analysis is difficult, and it has been computerized less than users need. In recent years, many studies have examined stock price patterns using machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, advances in IT have made it easier to analyze huge numbers of charts to find patterns that can predict stock prices. Although short-term price forecasting power has improved, long-term forecasting power remains limited, so these methods are used for short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier technology could not recognize, but such approaches can be vulnerable in practice, because whether the patterns found are suitable for trading is a separate matter. Those studies find a meaningful pattern, locate a point that matches it, and then measure performance after n days, assuming a purchase was made at that point. Since this approach calculates virtual revenues, it can diverge considerably from reality. Existing research tries to find patterns with stock price predictive power; this study instead proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M & W wave patterns published by Merrill (1980) are simple, since each can be distinguished by five turning points. Although some of these patterns have been reported to have price predictability, no performance in the actual market has been reported. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern recognition accuracy.
In this study, the 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented in a system, and only the one pattern with the highest success rate per group is selected for trading. Patterns that succeeded with high probability in the past are likely to succeed in the future, so we trade when such a pattern occurs. The measurement reflects a real situation because both the buy and the sell are assumed to have been executed. We tested three ways to calculate the turning points. The first, the minimum-change-rate zig-zag method, removes price movements below a certain percentage and then calculates the vertices. In the second, the high-low-line zig-zag method, a high price that meets the n-day high line is taken as a peak, and a low price that meets the n-day low line is taken as a valley. In the third, the swing wave method, a central high price higher than the n high prices on its left and right is taken as a peak, and a central low price lower than the n low prices on its left and right is taken as a valley. The swing wave method was superior to the other methods in our tests; we interpret this to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished. Because the number of cases in this simulation was too large to search exhaustively for patterns with high success rates, genetic algorithms (GA) were the most suitable solution. We also ran the simulation using Walk-forward Analysis (WFA), which tests the test section and the application section separately, so we were able to respond appropriately to market changes. In this study, we optimize at the level of the stock portfolio, because optimizing the variables for each individual stock risks over-optimization.
We therefore set the number of constituent stocks to 20 to increase the effect of diversified investment while avoiding over-optimization, and tested the KOSPI market by dividing it into six categories. In the results, the small-cap portfolio was the most successful and the high-volatility portfolio was second best. This shows that prices need some volatility for patterns to take shape, but that the highest volatility is not necessarily best.
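The swing wave method, which performed best here, is straightforward to implement. The sketch below is an illustrative reimplementation from the abstract's description only (function and variable names are our own, not the authors' code):

```python
def swing_points(highs, lows, n):
    """Swing wave turning points: a bar is a peak if its high exceeds the
    n highs on each side, and a valley if its low is below the n lows on
    each side. Returns (peaks, valleys) as lists of (index, price)."""
    peaks, valleys = [], []
    for i in range(n, len(highs) - n):
        side_highs = highs[i - n:i] + highs[i + 1:i + n + 1]
        side_lows = lows[i - n:i] + lows[i + 1:i + n + 1]
        if highs[i] > max(side_highs):      # strictly above both sides
            peaks.append((i, highs[i]))
        if lows[i] < min(side_lows):        # strictly below both sides
            valleys.append((i, lows[i]))
    return peaks, valleys
```

An M or W pattern would then be read off as a sequence of five consecutive turning points from the interleaved peak/valley list.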

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.1-21 / 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been applied in various industries. In the field of business intelligence, they have been employed to discover new market and technology opportunities and to support rational decision-making. Market information such as market size, growth rate, and market share is essential for setting a company's business strategy, and there is continuous demand in many fields for market information at the specific-product level. However, such information has generally been provided at the industry level, or in broad categories based on classification standards, making specific and appropriate information difficult to obtain. We therefore propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural-network-based semantic word embedding model, to estimate market sizes automatically from individual companies' product information in a bottom-up manner. The overall process is as follows. First, product-related data are collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data are embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales figures of the extracted products are summed to estimate the market size of each product group. As experimental data, product-name text from Statistics Korea's microdata (345,103 cases) was mapped into a multidimensional vector space by Word2Vec training.
We optimized the training parameters and used a vector dimension of 300 and a window size of 15 in further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as a product-name dataset to cluster product groups more efficiently: product names similar to the KSIC index words were extracted based on cosine similarity, and the market size of each extracted product category was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were estimated automatically by the proposed model. For verification, the results were compared with the actual market sizes of selected items; Pearson's correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques are applied to market size estimation for the first time, overcoming the limitations of traditional sampling-based methods and of methods requiring multiple assumptions. In addition, the level of market category can be adjusted easily and efficiently, according to the purpose of the information, by changing the cosine-similarity threshold. Furthermore, the method has high practical potential, since it can meet unmet needs for detailed market size information in both the public and private sectors: it can be used in technology evaluation and commercialization support programs run by governmental institutions, as well as in business strategy consulting and market analysis reports published by private firms. The limitation of our study is that the model still needs improvement in accuracy and reliability. The semantic word embedding module could be advanced by imposing a proper ordering on the preprocessed dataset or by combining Word2Vec with another measure such as Jaccard similarity.
The product-group clustering could also be replaced with other unsupervised machine learning algorithms. Our group is currently working on follow-up studies, and we expect they will further improve the performance of the basic model proposed here.
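The grouping-and-summation step can be sketched as follows. In the real pipeline the embeddings would come from a trained Word2Vec model; the toy vectors, product names, and the 0.8 threshold below are illustrative assumptions only:

```python
import numpy as np

def group_by_similarity(seed, names, vectors, threshold=0.8):
    """Extract product names whose embedding has cosine similarity to the
    seed word (e.g. a KSIC index word) at or above the threshold.
    `vectors` maps name -> embedding vector."""
    sv = vectors[seed] / np.linalg.norm(vectors[seed])
    group = []
    for name in names:
        v = vectors[name]
        sim = float(np.dot(sv, v / np.linalg.norm(v)))
        if name != seed and sim >= threshold:
            group.append(name)
    return group

def market_size(group, sales):
    """Sum individual companies' sales over the extracted product group."""
    return sum(sales[name] for name in group)
```

Raising the threshold narrows the product category; lowering it broadens the category, which is how the level of market aggregation is adjusted.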

Rapid Detection Method for Human Rotavirus from Vegetables by a Combination of Filtration and Integrated Cell Culture/Real-Time Reverse Transcription PCR (Filtration과 Integrated Cell Culture/Real-Time Reverse Transcription PCR 기법을 이용한 채소류에서 Human Rotavirus 신속 검출)

  • Hyeon, Ji-Yeon;Chon, Jung-Whan;Song, Kwang-Young;Hwang, In-Gyun;Kwak, Hyo-Sun;Lee, Jung-Soo;Kim, Moo-Sang;Lee, Jung-Bok;Seo, Kun-Ho
    • Korean Journal of Microbiology / v.47 no.2 / pp.117-123 / 2011
  • The purpose of this study was to evaluate and compare different elution and concentration methods to optimize the detection of human rotavirus (HRV) using real-time RT-PCR and cell culture techniques. Leafy vegetable samples (lettuce, Chinese cabbage) were artificially inoculated with HRV. Viruses were extracted from the vegetables with two elution buffers, buffer A (100 mM Tris-HCl, 50 mM glycine, 3% beef extract, pH 9.5) and buffer B (250 mM threonine, 300 mM NaCl, pH 9.5), and the extracted viruses were concentrated sequentially by filtration and PEG precipitation. To determine the infectivity of the recovered viruses, MA-104 cells were infected and integrated cell culture real-time RT-PCR (ICC/real-time RT-PCR) was performed at 1, 48, 72, 96, 120, 144, and 168 h post-infection (p.i.). Elution buffer A was more efficient than buffer B at extracting the virus from the produce samples tested, with recoveries of 29.54% and 18.32%, respectively. The sensitivity of real-time RT-PCR was markedly improved when the virus was concentrated by filtration: when the viruses were eluted with buffer A and concentrated by filtration, the average recovery rate was approximately 51.89%. When the recovered viruses were used to infect MA-104 cells, infectious HRV was detected within 48 h p.i. by ICC/real-time RT-PCR, whereas cytopathic effects were not observed until 72 h p.i. The optimized detection method evaluated here could be useful for rapid and reliable detection of HRV in fresh produce and could be applied to the detection of other food-borne viruses.

Bankruptcy prediction using an improved bagging ensemble (개선된 배깅 앙상블을 활용한 기업부도예측)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.121-139 / 2014
  • Predicting corporate failure has long been an important topic in accounting and finance. The costs associated with bankruptcy are high, so the accuracy of bankruptcy prediction matters greatly to financial institutions, and many researchers have addressed the topic over the past three decades. The current research uses ensemble models to improve the performance of bankruptcy prediction. Ensemble classification combines individually trained classifiers to obtain more accurate predictions than individual models, and ensemble techniques have been shown to be very useful for improving the generalization ability of a classifier. Bagging is the most commonly used method for constructing ensemble classifiers: different training subsets are randomly drawn with replacement from the original training dataset, and base classifiers are trained on the different bootstrap samples. Instance selection retains critical instances while removing irrelevant and harmful ones from the original set. Instance selection and bagging are both well known in data mining, but few studies have dealt with their integration. This study proposes an improved bagging ensemble based on instance selection using genetic algorithms (GA) to improve the performance of SVM. GA is an efficient optimization procedure based on the theory of natural selection and evolution; it applies the idea of survival of the fittest by progressively accepting better solutions. Rather than making incremental changes to a single solution, GA searches by maintaining a population of solutions from which better solutions are created. The initial population is generated randomly and evolves into the next generation through genetic operators such as selection, crossover, and mutation; the solutions, encoded as strings, are evaluated by a fitness function.
The proposed model consists of two phases: GA-based instance selection and instance-based bagging. In the first phase, GA selects the optimal instance subset to be used as the input data of the bagging model. The chromosome is encoded as a binary string over the instance subset; the population size was set to 100, the maximum number of generations to 150, and the crossover and mutation rates to 0.7 and 0.1, respectively. The prediction accuracy of the model serves as the fitness function of the GA: an SVM is trained on the training set using the selected instance subset, and its accuracy on a separate test set is used as the fitness value in order to avoid overfitting. In the second phase, the optimal instance subset selected in the first phase is used as the input data of the bagging model, with SVM as the base classifier and majority voting as the combining method. This study applies the proposed model to bankruptcy prediction using a real dataset of Korean companies containing 1,832 externally non-audited firms: 916 bankrupt and 916 non-bankrupt. Financial ratios in the categories of stability, profitability, growth, activity, and cash flow were screened through a literature review and basic statistical methods, and 8 financial ratios were selected as the final input variables. The data were separated into training, test, and validation sets. We compared the proposed model with several baselines, including a single SVM, a simple bagging model, and an instance-selection-based SVM, and used McNemar tests to examine whether the proposed model significantly outperforms the others. The experimental results show that the proposed model outperforms the other models.
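The GA loop described above can be sketched generically. In the paper the fitness is SVM test-set accuracy over the instances the chromosome selects; here the fitness is any callable on a binary chromosome, and every operator choice beyond the stated crossover/mutation rates (truncation selection, one-point crossover, elitism) is our own illustrative assumption:

```python
import random

def evolve(fitness, n_genes, pop_size=20, generations=30,
           cx_rate=0.7, mut_rate=0.1, seed=0):
    """Binary-encoded GA of the kind used for instance selection: each
    chromosome marks which training instances enter the bagging model.
    Returns the best chromosome found."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        nxt = scored[:2]                                  # elitism: keep two best
        while len(nxt) < pop_size:
            a, b = rng.sample(scored[:pop_size // 2], 2)  # truncation selection
            if rng.random() < cx_rate:                    # one-point crossover
                cut = rng.randrange(1, n_genes)
                a = a[:cut] + b[cut:]
            # bit-flip mutation
            child = [1 - g if rng.random() < mut_rate else g for g in a]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

In the paper's setting, `n_genes` would equal the number of training instances, with population size 100, 150 generations, crossover rate 0.7, and mutation rate 0.1.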

Low temperature plasma deposition of microcrystalline silicon thin films for active matrix displays: opportunities and challenges

  • Cabarrocas, Pere Roca I;Abramov, Alexey;Pham, Nans;Djeridane, Yassine;Moustapha, Oumkelthoum;Bonnassieux, Yvan;Girotra, Kunal;Chen, Hong;Park, Seung-Kyu;Park, Kyong-Tae;Huh, Jong-Moo;Choi, Joon-Hoo;Kim, Chi-Woo;Lee, Jin-Seok;Souk, Jun-H.
    • Proceedings of the Korean Information Display Society Conference / 2008.10a / pp.107-108 / 2008
  • The spectacular development of AMLCDs, made possible by a-Si:H technology, still faces two major drawbacks arising from the intrinsic structure of a-Si:H: low mobility and, most important, a shift in the transfer characteristics of the TFTs under bias stress. This has led to intensive research into crystallizing a-Si:H films by laser and furnace annealing to produce polycrystalline silicon TFTs. While these devices show improved mobility and stability, they suffer from poor uniformity over large areas and increased cost. In the last decade we have focused on microcrystalline silicon (μc-Si:H) for bottom-gate TFTs, which can hopefully meet all the requirements for mass production of large-area AMOLED displays [1,2]. In this presentation we focus on the transfer of a deposition process based on SiF4-Ar-H2 mixtures from a small-area research laboratory reactor to an industrial Gen 1 AKT reactor. We first discuss the optimization of process conditions that yield fully crystallized films without any amorphous incubation layer, suitable for bottom-gate TFTs, as well as the use of plasma diagnostics to increase the deposition rate up to 0.5 nm/s [3]. The use of silicon nanocrystals appears to be an elegant way to reconcile the opposing requirements of a high deposition rate and a fully crystallized interface [4]. The optimized process conditions were transferred to large-area substrates in an industrial environment, where some process adjustment was required to reproduce the material properties achieved in the laboratory-scale reactor. For optimized process conditions, the homogeneity of the optical and electronic properties of the μc-Si:H films deposited on 300×400 mm substrates was checked by a set of complementary techniques.
Spectroscopic ellipsometry, Raman spectroscopy, dark conductivity, time-resolved microwave conductivity, and hydrogen evolution measurements demonstrated excellent homogeneity in the structure and transport properties of the films. On the basis of these results, the optimized process conditions were applied to TFTs, studying both bottom-gate and top-gate structures with the aim of achieving characteristics suitable for driving AMOLED displays. Results on the homogeneity of the TFT characteristics across the large-area substrates and on their stability will be presented, as well as their application as a backplane for an AMOLED display.


Improvement of Pregnancy Rate by Assisted Hatching of Human Embryos in In Vitro Fertilization and Embryo Transfer Program (체외수정시술시 배아의 보조부화술을 이용한 임신율 향상에 관한 연구)

  • Kim, Seok-Hyun;Kim, Kwang-Rye;Chae, Hee-Dong;Lee, Jae-Hoon;Kim, Hee-Sun;Ryu, Buom-Yong;Oh, Sun-Kyung;Suh, Chang-Suk;Choi, Young-Min;Kim, Jung-Gu;Moon, Shin-Yong;Lee, Jin-Yong
    • Clinical and Experimental Reproductive Medicine / v.24 no.1 / pp.119-133 / 1997
  • In spite of much progress in in vitro fertilization and embryo transfer (IVF-ET) programs, the pregnancy rate remains at 20-30%, and the endometrial implantation rate per embryo transferred at 10-15%. As a result, about 90% of embryos fail to implant, and many attempts, such as optimization of follicular development, improvement of in vitro culture systems including coculture, and micromanipulation of the zona pellucida, have been made to improve implantation after IVF-ET. Recently, several procedures of assisted hatching (AH) using micromanipulation have been introduced, and pregnancies and births have been obtained after AH. To develop and establish AH as an effective procedure for improving implantation, AH with partial zona dissection (PZD) was performed in 116 cycles of 89 infertile couples from January 1995 to February 1996: couples who had had repeated failures of standard IVF-ET more than twice (Group I: 71 cycles in 54 patients), who had had implantation failure of good-quality embryos (Group II: 15 cycles in 13), or who underwent AH without a specific indication (Group III: 30 cycles in 22). The outcomes of AH were analyzed by pregnancy rate. The number of oocytes retrieved after controlled ovarian hyperstimulation (COH) was 9.9±7.1 in Group I, 11.5±4.5 in Group II, and 7.9±6.4 in Group III. The number of embryos transferred after AH was 4.7±1.8 in Group I, 5.3±1.3 in Group II, and 3.5±2.4 in Group III. The mean cumulative embryo score (CES) was 56.8±30.0 in Group I, 76.1±35.9 in Group II, and 38.5±29.9 in Group III. The overall clinical pregnancy rate per cycle and per patient was 12.7% (9/71) and 16.7% (9/54) in Group I, 33.3% (5/15) and 38.5% (5/13) in Group II, and 6.7% (2/30) and 9.1% (2/22) in Group III, respectively.
There were significant differences among the three groups in the numbers of oocytes retrieved and embryos transferred, in CES, and in the clinical pregnancy rate per cycle. There was a significant inverse correlation between basal serum FSH level and CES, and no pregnancy occurred in patients with a CES below 20. In conclusion, AH of human embryos by PZD prior to ET improved the implantation and pregnancy rates in IVF-ET patients with a history of repeated failures, particularly failures despite the transfer of good-quality embryos, and AH may provide a range of novel techniques contributing to the effective management of infertile couples.


Development of a lumped model to analyze the hydrological effects of landuse change (토지이용 변화에 따른 수문 특성의 변화를 추적하기 위한 Lumped모형의 개발)

  • Son, Ill
    • Journal of the Korean Geographical Society / v.29 no.3 / pp.233-252 / 1994
  • One of the major advantages of a lumped model is its ability to simulate extended flows; a further advantage is that it requires only conventional, readily available hydrological data (rainfall, evaporation, and runoff). These two advantages commend this type of model for analyzing the hydrological effects of landuse change. The experimental catchment (K11) at the Kimakia site in Kenya experienced three phases of landuse change over sixteen and a half years, and the Institute of Hydrology provided the catchment's hydrological data for this research. On the basis of Blackie's (1972) 9-parameter model, a new model (R1131) was constructed to reflect the hydrological characteristics of the catchment, taking account of: 1) the evapotranspiration terms needed for landuse hydrology, 2) highly permeable soils, 3) the small catchment size, 4) an input option for the initial soil moisture deficit, and 5) additional modules for water-budget analysis. The new model has 11 parameters, 3 storages, and 1 input option. Using a number of initial conditions, the model was optimized to the data of the three landuse phases; the model efficiencies were 96.78%, 97.20%, and 94.62%, and the errors in total flow were -1.78%, -3.36%, and -5.32%. The bias of the optimized models was tested by several techniques. Extended flows were simulated in prediction mode using the optimized model and the data for the whole experimental period, and were used to analyze the changes in daily high and low flows caused by landuse change. The relative water-use ratio of the clearing-and-seedling phase was 60.21%, while those of the next two phases were 81.23% and 83.78%, respectively. The annual peak flows of the second and third phases at a 1.5-year return period were 31.3% and 31.2% lower than that of the first phase, whereas at a 50-year return period the annual peak flow increased by only 4.8% in the second phase and by 12.9% in the third.
The annual minimum flow at a 1.5-year return period decreased by 34.2% in the second phase and 34.3% in the third. The changes in annual minimum flows shrank at larger return periods: a 20.2% decrease in the second phase and a 20.9% decrease in the third at a 50-year return period. Two conclusions follow. First, the flow regime in catchment K11 changed with the landuse conversion from the clearing-and-seedling phase to the intermediate stage of the pine plantation, but was little affected after the pine trees reached a certain height. Second, the effects of the pine plantation on daily high and low flows diminished as flood size and drought severity increased.
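As a toy illustration of the lumped-model class (not the 11-parameter, 3-storage R1131 model itself, whose structure the abstract does not fully specify), a single-store water-balance model driven only by rainfall and potential evapotranspiration might look like:

```python
def linear_reservoir(rain, pet, k=0.5, soil_cap=100.0):
    """Minimal lumped water-balance model: one soil store of capacity
    `soil_cap`; evapotranspiration is taken at the potential rate while
    water is available, excess over capacity becomes saturation-excess
    runoff, and the store drains as a linear reservoir with coefficient
    k. All parameter values are illustrative assumptions."""
    store, flows = 0.0, []
    for p, e in zip(rain, pet):
        store = max(store + p - e, 0.0)      # wet the store, remove ET
        excess = max(store - soil_cap, 0.0)  # saturation excess -> quick flow
        store -= excess
        q = k * store                        # linear-reservoir drainage
        store -= q
        flows.append(q + excess)
    return flows
```

A real application would add further stores and parameters and calibrate them against observed runoff, as the paper does for its three landuse phases.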


Optimization for Underwater Welding of Marine Steel Plates (선박용 강판의 수중 용접 최적화에 관한 연구)

  • 오세규
    • Journal of the Korean Society of Fisheries and Ocean Technology / v.20 no.1 / pp.49-59 / 1984
  • An optimization study of the characteristics of underwater welding by a gravity-type arc welding process was carried out experimentally, using six types of domestic coated welding electrodes on domestic marine structural steel plates (KR Grade A-1, SWS41A, SWS41B), in order to develop underwater welding techniques for practical use. The main results are summarized as follows: 1. The water absorption speed of the coating of the domestic lime titania electrode became constant after about 60 minutes in water, and was about 0.18%/min during the initial 8 minutes of absorption. 2. Thus, for such short immersion times, the electrode could be used immediately for underwater welding, with joint strength comparable to that of in-atmosphere and on-water welding using dry, wet, or immediate welding electrodes. 3. By bead appearance and X-ray inspection, the ilmenite, lime titania, and high titanium oxide electrodes proved better for underwater welding of 10 mm KR Grade A-1 steel plates; the proper welding angle, current, and electrode diameter were 60°, above 160 A, and 4 mm, respectively, at a welding speed of 28 cm/min. 4. The weld-metal tensile strength and proof stress of underwater-welded joints have a quadratic relationship with heat input, and the optimal heat-input zone is about 13 to 15 kJ/cm for 10 mm SWS41A steel plates, based on a joint efficiency above 100% and recovery of impact strength and strain. Meanwhile, the optimal heat-input zone based on a tension-tension fatigue limit above that of the SWS41A base metal is 16 to 19 kJ/cm. 5. All the empirical equations are reliable at the 95% confidence level. 6.
The microstructure of SWS41A underwater welds made in this zone shows no weld defects such as hydrogen embrittlement with extremely high hardness, since the HAZ-bond boundary adjacent to both the surface and the base metal reaches only Hv 400 max, with a microstructure of fine martensite, bainite, pearlite, and a small amount of ferrite.
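The quadratic strength-versus-heat-input relationship in result 4 can be recovered by a least-squares fit, with the optimum at the parabola's vertex. The measurement values below are hypothetical stand-ins, not the paper's data; only the quadratic form is taken from the abstract:

```python
import numpy as np

# Hypothetical (heat input, tensile strength) pairs for illustration.
heat = np.array([10.0, 12.0, 14.0, 16.0, 18.0])           # kJ/cm
strength = np.array([380.0, 420.0, 440.0, 430.0, 390.0])  # MPa (assumed units)

# Fit strength ~ a*H**2 + b*H + c; the optimum sits at the vertex -b/(2a).
a, b, c = np.polyfit(heat, strength, 2)
optimal_heat = -b / (2.0 * a)
```

For a concave fit (a < 0) the vertex gives the heat input that maximizes joint strength, which is how an optimal heat-input zone like 13-15 kJ/cm would be located from real data.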


Optimization of the cryopreserved condition for utilization of GPCR frozen cells (GPCR 냉동보관 세포의 활용을 위한 냉동조건의 최적화 연구)

  • Noh, Hyojin;Lee, Sunghou
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.2 / pp.1200-1206 / 2015
  • G-protein coupled receptors (GPCRs), major targets for drug discovery, are involved in many physiological activities and related to various diseases and disorders. Among the experimental techniques used in GPCR drug discovery, cell-based screening methods are influenced by the condition of the cells used throughout the process, and the use of frozen cells has recently been suggested as a way to reduce data variation and cost. The aim of this study was to evaluate cell-freezing conditions such as storage temperature and storage term. Stable cell lines for the calcium-sensing receptor and the urotensin receptor were established, and cultured cells were stored at -80°C for up to 4 weeks. To compare with cells stored in liquid nitrogen, agonist and antagonist responses were recorded by luminescence detection of calcium-induced photoprotein activation. Cell signals decreased as the storage period increased, without changes in the EC50 and IC50 values (EC50: 3.46±1.36 mM, IC50: 0.49±0.15 μM). For cells stored in liquid nitrogen, responses were lower than in live cells, but no changes with storage period and no significant variations in EC50/IC50 values were detected. The decrease in signal in the various frozen cells is probably due to increased cell damage. From these results, the best option for long-term cryopreservation is liquid nitrogen, while for short-term storage within a month, -80°C storage can be adopted. In conclusion, the active use of frozen cells may help reduce variation in experimental data during the initial cell-based screening process.
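The EC50/IC50 values quoted above are defined through the standard Hill dose-response model, which can be written as a short sketch (the unit-free form and parameter names here are generic assumptions, not taken from the paper):

```python
def agonist_response(conc, ec50, n=1.0, top=100.0, bottom=0.0):
    """Rising Hill curve: response is half-maximal at conc == ec50."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** n)

def antagonist_response(conc, ic50, n=1.0, top=100.0, bottom=0.0):
    """Falling Hill curve: inhibition is half-maximal at conc == ic50."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** n)
```

In practice the four parameters are fitted to the luminescence readings by nonlinear least squares; the study's observation is that freezing lowers `top` (the signal amplitude) while leaving the fitted EC50/IC50 essentially unchanged.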

Calculation of Soil Moisture and Evapotranspiration of KLDAS applying Ground-Observed Meteorological Data (지상관측 기상자료를 적용한 KLDAS(Korea Land Data Assimilation System)의 토양수분·증발산량 산출)

  • Park, Gwangha;Kye, Changwoo;Lee, Kyungtae;Yu, Wansik;Hwang, Eui-ho;Kang, Dohyuk
    • Korean Journal of Remote Sensing / v.37 no.6_1 / pp.1611-1623 / 2021
  • This study assesses soil moisture and evapotranspiration performance of the Korea Land Data Assimilation System (KLDAS) under the Korea Land Information System (KLIS). Spin-up was repeated 8 times for 2018. In addition, low-resolution and high-resolution meteorological forcings were generated from data observed by the Korea Meteorological Administration (KMA), Rural Development Administration (RDA), Korea Rural Community Corporation (KRC), Korea Hydro & Nuclear Power Co., Ltd. (KHNP), Korea Water Resources Corporation (K-water), and Ministry of Environment (ME), and applied to KLDAS. To gauge the accuracy improvement of the Korea low-spatial-resolution forcing (K-Low; 0.125°) and the Korea high-spatial-resolution forcing (K-High; 0.01°), soil moisture and evapotranspiration driven by the Modern-Era Retrospective analysis for Research and Applications, version 2 (MERRA-2) and by ASOS-Spatial (ASOS-S), used in a previous study, were evaluated alongside them. The results show that optimizing the initial boundary condition for soil moisture requires 2 spin-up cycles at 58 points, 3 at 6 points, and 6 at 3 points; for evapotranspiration, 1 spin-up cycle (58 points) and 2 spin-up cycles (58 points) are required. For soil moisture driven by MERRA-2, ASOS-S, K-Low, and K-High, the mean R² values were 0.615, 0.601, 0.594, and 0.664, respectively; for evapotranspiration they were 0.531, 0.495, 0.656, and 0.677, so K-High achieved the highest accuracy. The accuracy of KLDAS can be improved by securing a large number of ground observations and generating high-resolution gridded meteorological data. However, if the meteorological conditions at each point are not sufficiently taken into account when converting point data into a grid, accuracy actually declines.
In future work, higher-quality data are expected from generating and applying gridded meteorological data with tuned IDW parameter settings or other interpolation techniques.
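The inverse distance weighting (IDW) mentioned for future work can be sketched generically; the station coordinates, the Euclidean distance metric, and the power parameter below are all illustrative choices:

```python
import math

def idw(stations, values, grid_points, power=2.0):
    """Inverse distance weighting: interpolate station observations onto
    grid points with weight 1/d**power; a grid point that coincides with
    a station takes that station's value exactly."""
    out = []
    for gx, gy in grid_points:
        num = den = 0.0
        exact = None
        for (sx, sy), v in zip(stations, values):
            d = math.hypot(gx - sx, gy - sy)
            if d == 0.0:
                exact = v          # grid point sits on a station
                break
            w = d ** -power
            num += w * v
            den += w
        out.append(exact if exact is not None else num / den)
    return out
```

The `power` parameter is the main tuning knob: larger values localize the interpolation around nearby stations, which is the kind of parameter setting the abstract proposes to optimize.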