• Title/Summary/Keyword: numerical algorithm

Improvements for Atmospheric Motion Vectors Algorithm Using First Guess by Optical Flow Method (옵티컬 플로우 방법으로 계산된 초기 바람 추정치에 따른 대기운동벡터 알고리즘 개선 연구)

  • Oh, Yurim;Park, Hyungmin;Kim, Jae Hwan;Kim, Somyoung
    • Korean Journal of Remote Sensing / v.36 no.5_1 / pp.763-774 / 2020
  • Wind data forecast by a numerical weather prediction (NWP) model is generally used as the first guess in the target-tracking step that derives atmospheric motion vectors (AMVs), because it increases tracking accuracy and reduces computation time. However, there is a contradiction: the NWP model used as the first guess is used again as the reference in the AMV verification process. To overcome this problem, a model-independent first guess is required. In this study, we propose deriving AMVs with the Lucas-Kanade optical flow method and then using them as the first guess. To retrieve AMVs, Himawari-8/AHI geostationary satellite level-1B data were used at 00, 06, 12, and 18 UTC from August 19 to September 5, 2015. To evaluate the impact of applying the optical flow method to AMV derivation, cross-validation was conducted in three ways: (1) without a first guess, (2) with NWP (KMA/UM) forecast wind as the first guess, and (3) with optical-flow-based wind as the first guess. Verification against ECMWF ERA-Interim reanalysis data showed that the highest precision (RMSVD: 5.296-5.804 m/s) was obtained using optical-flow-based winds as the first guess. In addition, AMV derivation was slowest without a first guess, while the other two configurations showed similar computation times. Thus, this study shows that the optical flow method, applied in the target-tracking step of the AMV algorithm, is very effective as a first guess for model-independent AMV derivation.
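
A minimal sketch of the first-guess idea described above, assuming OpenCV is available and that two consecutive satellite images have been exported as grayscale arrays; the file names, pixel size, and time step are hypothetical placeholders, and the paper's own implementation is not reproduced here:

```python
# Illustrative sketch (not the authors' code): a first-guess wind field from
# two consecutive satellite images via pyramidal Lucas-Kanade optical flow.
import cv2
import numpy as np

PIXEL_SIZE_M = 2000.0   # assumed 2 km pixel spacing (placeholder)
DT_S = 600.0            # assumed 10 min between consecutive images

prev_img = cv2.imread("ahi_t0.png", cv2.IMREAD_GRAYSCALE)  # placeholder files
next_img = cv2.imread("ahi_t1.png", cv2.IMREAD_GRAYSCALE)

# Track salient cloud features; operational AMV algorithms use target boxes,
# but the displacement-to-wind conversion is the same.
prev_pts = cv2.goodFeaturesToTrack(prev_img, maxCorners=500,
                                   qualityLevel=0.01, minDistance=10)
next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_img, next_img,
                                               prev_pts, None)

ok = status.ravel() == 1
disp_px = (next_pts - prev_pts).reshape(-1, 2)[ok]   # (dx, dy) in pixels
u = disp_px[:, 0] * PIXEL_SIZE_M / DT_S              # zonal wind (m/s)
v = -disp_px[:, 1] * PIXEL_SIZE_M / DT_S             # meridional (image y points down)
print("first-guess winds (m/s):", np.c_[u, v][:5])
```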

Analysis of Trading Performance on Intelligent Trading System for Directional Trading (방향성매매를 위한 지능형 매매시스템의 투자성과분석)

  • Choi, Heung-Sik;Kim, Sun-Woong;Park, Sung-Cheol
    • Journal of Intelligence and Information Systems / v.17 no.3 / pp.187-201 / 2011
  • The KOSPI200 index is a Korean stock price index consisting of 200 actively traded stocks in the Korean stock market. Its base value of 100 was set on January 3, 1990. The Korea Exchange (KRX) developed derivatives markets on the KOSPI200 index; the KOSPI200 index futures market, introduced in 1996, has become one of the most actively traded index futures markets in the world. Traders can profit by entering a long position on a KOSPI200 index futures contract if they expect the index to rise, and likewise by entering a short position if they expect it to decline. KOSPI200 index futures trading is essentially a short-term zero-sum game, so most futures traders rely on technical indicators. Advanced traders make stable profits by using system trading, also known as algorithmic trading, in which computer programs receive real-time market data, analyze price movements with various technical indicators, and automatically enter orders (timing, price, and quantity) without human intervention. Recent studies have shown the usefulness of artificial intelligence systems in forecasting stock prices or investment risk.

KOSPI200 index data are numerical time-series data: a sequence of data points measured at successive uniform intervals such as a minute, day, week, or month. KOSPI200 index futures traders use technical analysis to find patterns in the time-series chart. Although there are many technical indicators, their results indicate the market state as bull, bear, or flat. Most strategies based on technical analysis divide into trend-following and non-trend-following strategies; both decide the market state from patterns in the KOSPI200 index time series. This fits a Markov model (MM) well. Everybody knows that the next price will be higher than, lower than, or similar to the last price, and that it is influenced by the last price; however, nobody knows in advance whether it will actually go up, go down, or stay flat. Therefore, a hidden Markov model (HMM) fits better than an MM. HMMs divide into discrete HMMs (DHMM) and continuous HMMs (CHMM); the only difference lies in the representation of the observation probabilities, where a DHMM uses a discrete probability distribution and a CHMM uses a continuous one such as a Gaussian mixture model. Since KOSPI200 index values are real numbers following a continuous probability density function, a CHMM is more appropriate than a DHMM for the KOSPI200 index.

In this paper, we present an artificial intelligence trading system based on a CHMM for KOSPI200 index futures system traders. Traders have accumulated experience with technical trading ever since the introduction of the KOSPI200 index futures market, applying many strategies to profit from it: some based on technical indicators such as moving averages or stochastics, and others based on candlestick patterns such as three outside up, three outside down, harami, or doji star. We present a trading system that applies a moving-average cross strategy on top of a CHMM and compare it to a traditional algorithmic trading system, setting the moving-average parameters to values commonly used by market practitioners. Empirical results compare the simulation performance against the traditional algorithmic trading system using more than 20 years of daily KOSPI200 index data. Our suggested trading system shows higher trading performance than naive system trading.
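
A minimal sketch of a CHMM-gated moving-average cross, assuming the hmmlearn package for the Gaussian (continuous-emission) HMM; the data file, window lengths, and three-state choice are illustrative assumptions, not the paper's exact configuration:

```python
# Sketch: fit a continuous HMM to KOSPI200 daily log returns to infer hidden
# bull/bear/flat regimes, then gate a moving-average cross signal by regime.
import numpy as np
import pandas as pd
from hmmlearn.hmm import GaussianHMM

prices = pd.read_csv("kospi200_daily.csv", index_col=0)["close"]  # placeholder
returns = np.log(prices).diff().dropna().to_numpy().reshape(-1, 1)

hmm = GaussianHMM(n_components=3, covariance_type="full", n_iter=200,
                  random_state=0)
hmm.fit(returns)
regime = hmm.predict(returns)          # hidden market state per day (0/1/2)

# Moving-average cross with common practitioner windows (assumed 5/20 days)
fast = prices.rolling(5).mean()
slow = prices.rolling(20).mean()
cross_long = (fast > slow).astype(int).iloc[1:]   # align with returns

# Trade the cross signal only in the regime with the highest mean return
bull = int(np.argmax(hmm.means_.ravel()))
signal = cross_long.to_numpy() * (regime == bull)
print("days long:", signal.sum())
```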

A Construction of TMO Object Group Model for Distributed Real-Time Services (분산 실시간 서비스를 위한 TMO 객체그룹 모델의 구축)

  • 신창선;김명희;주수종
    • Journal of KIISE: Computer Systems and Theory / v.30 no.5_6 / pp.307-318 / 2003
  • In this paper, we design and construct a TMO object group that provides guaranteed real-time services in distributed object computing environments, and we verify the execution of the model for correct distributed real-time services. The suggested TMO object group is based on TINA's object group concept. The model consists of TMO objects with real-time properties and components that support the object management service and the real-time scheduling service within the object group; TMO objects may be duplicated or non-duplicated across distributed systems. Our model can execute guaranteed distributed real-time services on COTS middleware without being restricted to a specific ORB or operating system. To achieve these goals, we defined the concept of the TMO object and the structure of the TMO object group, and we designed and implemented the functions and interactions of the components in the object group. The TMO object group includes a Dynamic Binder object and a Scheduler object that support the object management service and the real-time scheduling service, respectively. The Dynamic Binder object provides the dynamic binding service that selects the appropriate one among the duplicated TMO objects for a client's request, and the Scheduler object provides the real-time scheduling service that determines the priority of tasks executed by an arbitrary TMO object for clients' service requests. To verify the execution of our model, we implemented the Dynamic Binder object with a binding priority algorithm for the dynamic binding service and the Scheduler object with the EDF algorithm for the real-time scheduling service, extending existing well-known algorithms. Finally, the numerical analysis results verified that our TMO object group model supports the dynamic binding service for duplicated or non-duplicated TMO objects, as well as the real-time scheduling service for an arbitrary TMO object requested by clients.
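
For illustration, a toy earliest-deadline-first (EDF) scheduler of the kind the Scheduler object implements, keyed on absolute deadlines via a priority queue; the class, task names, and timings are hypothetical and only sketch the policy, not the paper's TMO implementation:

```python
# Sketch of the EDF policy: serve pending client requests in order of
# ascending absolute deadline.
import heapq
import itertools

class EDFScheduler:
    """Dispatch client requests to a TMO object in deadline order."""
    def __init__(self):
        self._queue = []
        self._tie = itertools.count()   # stable order for equal deadlines

    def submit(self, deadline, request):
        heapq.heappush(self._queue, (deadline, next(self._tie), request))

    def next_request(self):
        if not self._queue:
            return None
        deadline, _, request = heapq.heappop(self._queue)
        return deadline, request

sched = EDFScheduler()
sched.submit(deadline=40, request="client-B: read sensor")
sched.submit(deadline=15, request="client-A: update state")
sched.submit(deadline=25, request="client-C: log event")
while (item := sched.next_request()) is not None:
    print(item)   # served by ascending deadline: A (15), C (25), B (40)
```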

Retrieval of Hourly Aerosol Optical Depth Using Top-of-Atmosphere Reflectance from GOCI-II and Machine Learning over South Korea (GOCI-II 대기상한 반사도와 기계학습을 이용한 남한 지역 시간별 에어로졸 광학 두께 산출)

  • Seyoung Yang;Hyunyoung Choi;Jungho Im
    • Korean Journal of Remote Sensing / v.39 no.5_3 / pp.933-948 / 2023
  • Atmospheric aerosols not only have adverse effects on human health but also exert direct and indirect impacts on the climate system. Consequently, it is imperative to understand the characteristics and spatiotemporal distribution of aerosols. Numerous studies have monitored aerosols, predominantly by retrieving aerosol optical depth (AOD) from satellite observations. Nonetheless, this approach primarily relies on look-up-table-based inversion algorithms, which are computationally intensive and carry associated uncertainties. In this study, a novel high-resolution AOD direct retrieval algorithm based on machine learning was developed using top-of-atmosphere reflectance data from the Geostationary Ocean Color Imager-II (GOCI-II), their differences from the past 30-day minimum reflectance, and meteorological variables from numerical models. The Light Gradient Boosting Machine (LGBM) technique was adopted, and the resulting estimates underwent rigorous validation through random, temporal, and spatial N-fold cross-validation (CV) against ground-based observations from the Aerosol Robotic Network (AERONET). The three CV results consistently demonstrated robust performance, yielding R2 = 0.70-0.80, RMSE = 0.08-0.09, and 75.2-85.1% of retrievals within the expected error (EE). A Shapley Additive exPlanations (SHAP) analysis confirmed the substantial influence of reflectance-related variables on AOD estimation. Examination of the spatiotemporal distribution of AOD in Seoul and Ulsan showed that the developed LGBM model produces results in close agreement with AERONET AOD over time, confirming its suitability for AOD retrieval at high spatiotemporal resolution (i.e., hourly, 250 m). Furthermore, a comparison of data coverage showed that the LGBM model increased retrieval frequency by approximately 8.8% relative to the GOCI-II L2 AOD products, mitigating the excessive masking over bright surfaces often encountered in physics-based AOD retrieval.
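
A hedged sketch of the estimation step, assuming lightgbm and scikit-learn; the matchup file, feature names, and hyperparameters are placeholders rather than the study's actual configuration:

```python
# Sketch: LGBM regression from TOA reflectance, its difference from the
# 30-day minimum reflectance, and model meteorology to AERONET AOD,
# evaluated with random K-fold cross-validation.
import numpy as np
import pandas as pd
from lightgbm import LGBMRegressor
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score, mean_squared_error

df = pd.read_csv("matchups.csv")        # placeholder matchup table
features = ["toa_refl", "refl_minus_30day_min", "rh", "wind", "pbl_height"]
X, y = df[features].to_numpy(), df["aeronet_aod"].to_numpy()

r2s, rmses = [], []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True,
                                 random_state=0).split(X):
    model = LGBMRegressor(n_estimators=500, learning_rate=0.05, num_leaves=63)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    r2s.append(r2_score(y[test_idx], pred))
    rmses.append(mean_squared_error(y[test_idx], pred) ** 0.5)
print(f"R2={np.mean(r2s):.2f}, RMSE={np.mean(rmses):.3f}")
```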

A Study on the Prediction of Korean NPL Market Return (한국 NPL시장 수익률 예측에 관한 연구)

  • Lee, Hyeon Su;Jeong, Seung Hwan;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.123-139 / 2019
  • The Korean NPL (non-performing loan) market was formed by the government and foreign capital shortly after the 1997 IMF crisis. The market has a short history, however, and bad debt began to increase again after the 2009 global financial crisis due to the recession in the real economy. NPL has become a major investment product in recent years as domestic capital market investors began to enter the NPL market in earnest. Although the domestic NPL market has received considerable attention due to its recent overheating, research on it has been scarce because the history of capital market investment in the domestic NPL market is short. In addition, declining profitability and price fluctuations driven by the real estate business require decision-making based on more scientific and systematic analysis.

In this study, we propose a prediction model that determines whether the benchmark yield is achieved, using NPL market data in line with market demand. To build the model, we used Korean NPL data covering about four years, from December 2013 to December 2017, comprising 2,291 asset records in total. As independent variables, only the variables related to the dependent variable were selected from 11 variables describing the characteristics of the underlying real estate; one-to-one t-tests, stepwise logistic regression, and decision trees were used for variable selection. Seven independent variables were chosen: purchase year, SPC (special purpose company), municipality, appraisal value, purchase cost, OPB (outstanding principal balance), and HP (holding period). The dependent variable is a binary variable indicating whether the benchmark rate of return is reached. This choice was made because models predicting binary variables are more accurate than models predicting continuous variables, and this accuracy is directly related to the model's practical effectiveness; moreover, for a special purpose company the main concern is whether or not to purchase an asset, so knowing whether a certain level of return will be achieved is enough to make the decision. We constructed and compared predictive models with the dependent variable computed at different thresholds to ascertain whether 12%, the standard rate of return used in the industry, is a meaningful reference value. The model built with the 12% threshold showed the best average hit ratio, at 64.60%.

To propose an optimal prediction model based on the chosen dependent variable and the 7 independent variables, we built and compared prediction models using five methodologies: discriminant analysis, logistic regression, decision tree, artificial neural network, and a genetic algorithm linear model. Ten sets of training and testing data were extracted using 10-fold cross-validation; after building the models, the hit ratio of each set was averaged and performance was compared. The average hit ratios of the models built with discriminant analysis, logistic regression, decision tree, artificial neural network, and the genetic algorithm linear model were 64.40%, 65.12%, 63.54%, 67.40%, and 60.51%, respectively, confirming that the artificial neural network model is the best. This study shows that it is effective to use the 7 independent variables and an artificial neural network prediction model in the NPL market. The proposed model predicts in advance whether a new asset will achieve the 12% return, which will help special purpose companies make investment decisions. Furthermore, we anticipate that the NPL market will become more liquid as transactions proceed at appropriate prices.
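
A hedged sketch of the best-performing setup, assuming scikit-learn; the file and column names are hypothetical, all features are assumed numeric-encoded, and the paper's actual network architecture is not specified here:

```python
# Sketch: an artificial neural network classifying whether an NPL asset
# reaches the 12% benchmark return from the seven selected features,
# evaluated with 10-fold cross-validation.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("npl_assets.csv")      # placeholder; columns numeric-encoded
features = ["purchase_year", "spc", "municipality_code",
            "appraisal_value", "purchase_cost", "opb", "holding_period"]
X, y = df[features], df["hit_12pct"]    # y: 1 if the 12% benchmark is reached

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                                  random_state=0))
scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
print(f"10-fold hit ratio: {scores.mean():.2%}")
```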

Development of a Network Loading Model for Dynamic Traffic Assignment (동적 통행배정모형을 위한 교통류 부하모형의 개발)

  • 임강원
    • Journal of Korean Society of Transportation / v.20 no.3 / pp.149-158 / 2002
  • To describe real-time traffic patterns in urban road networks precisely, dynamic network loading (DNL) models that can simulate traffic behavior are required. A number of different approaches are available, including macroscopic and microscopic dynamic network models as well as analytical models. Equivalent minimization problems and variational inequality problems are analytical models that include explicit mathematical travel cost functions for describing traffic behavior on the network, while microscopic simulation models move vehicles according to behavioral car-following and cell-transmission rules. However, DNL models embedding such travel time functions have limitations: analytical models cannot adequately describe traffic characteristics such as the relations between flow and speed or between speed and density, and microscopic simulation models, though the most detailed and realistic, are difficult to calibrate and may not be the most practical tools for large-scale networks. To cope with these problems, this paper develops a new DNL model appropriate for dynamic traffic assignment (DTA). The model is combined with a vertical queue model that represents vehicles as vertical queues at the end of links. To compare and assess the model, we use a contrived example network. The numerical results show that the DNL model presented in this paper describes traffic characteristics with a reasonable amount of computing time. The model also shows a good relationship between travel time and traffic flow and captures the backward-bending behavior near capacity.
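
A toy point-queue (vertical-queue) link loading step of the kind such a model combines with DNL, in discrete time for a single link; the inflow profile and parameters are hypothetical:

```python
# Sketch: vehicles traverse the link at free-flow speed, then wait in a
# vertical queue at the link exit when demand exceeds capacity.
FREE_FLOW_STEPS = 3        # free-flow travel time, in time steps
CAPACITY = 10              # max vehicles discharged per step

inflow = [15, 20, 5, 0, 0, 0, 0, 0]
arrivals = [0] * (len(inflow) + FREE_FLOW_STEPS)   # arrivals at link exit
for t, q in enumerate(inflow):
    arrivals[t + FREE_FLOW_STEPS] += q

queue = 0.0
for t, a in enumerate(arrivals):
    queue += a                          # vehicles reaching the exit join the queue
    out = min(queue, CAPACITY)          # discharge limited by capacity
    queue -= out
    delay = queue / CAPACITY            # extra steps of delay for a new arrival
    print(f"t={t}: outflow={out:.0f}, queue={queue:.0f}, "
          f"travel_time={FREE_FLOW_STEPS + delay:.1f} steps")
```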

Koreanized Analysis System Development for Groundwater Flow Interpretation (지하수유동해석을 위한 한국형 분석시스템의 개발)

  • Choi, Yun-Yeong
    • Journal of the Korean Society of Hazard Mitigation / v.3 no.3 s.10 / pp.151-163 / 2003
  • In this study, an algorithm for the groundwater flow process was established to develop a Koreanized groundwater program that handles the geographic and geologic conditions of aquifers exhibiting dynamic behavior in the groundwater flow system. All input data settings of the 3-DFM model developed in this study are organized in Korean, and the model contains a help function for each input item, so that detailed information about an input parameter is shown when the mouse pointer is placed over it. The model also makes it easy to specify the geologic boundary condition for each stratum and the initial head data in the worksheet. In addition, the model displays input boxes for each analysis condition, so that parameter setting for steady and unsteady flow analyses, as well as for analyses of the characteristics of each stratum, is less complicated than in the existing MODFLOW. Descriptions of the input data are displayed on the right side of the window, the analysis results on the left side, and the results can also be exported as a TXT file. The model developed in this study is a numerical model using the finite difference method, and its applicability was examined by comparing observed groundwater heads with heads simulated from actual recharge amounts and estimated parameters. The 3-DFM model was applied to the Sehwa-ri and Songdang-ri areas of Jeju, Korea, to analyze the groundwater flow system under pumping; the observed and computed groundwater heads were in close agreement, with errors of 0.03-0.07%. Equipotentials and velocity vectors computed from the simulation performed before pumping started in the study area indicate that groundwater flows evenly from Nopen-orum and Munseogi-orum toward Wolang-bong, Yongnuni-orum, and Songja-bong. These results agree with those of MODFLOW.
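
A minimal sketch of the finite difference idea underlying such a model (not the 3-DFM code): iterative relaxation of steady two-dimensional confined flow on a regular grid, with hypothetical grid size and boundary heads:

```python
# Sketch: steady 2-D groundwater head via a 5-point finite-difference
# Laplace stencil (homogeneous, isotropic conductivity), relaxed iteratively.
import numpy as np

NX, NY = 20, 20
h = np.zeros((NY, NX))
h[:, 0] = 50.0            # fixed-head boundary (e.g., recharge side), m
h[:, -1] = 30.0           # fixed-head boundary (e.g., discharge side), m

for _ in range(5000):
    h_old = h.copy()
    # each interior head relaxes toward the average of its four neighbours
    h[1:-1, 1:-1] = 0.25 * (h_old[:-2, 1:-1] + h_old[2:, 1:-1] +
                            h_old[1:-1, :-2] + h_old[1:-1, 2:])
    h[0, 1:-1] = h[1, 1:-1]      # no-flow boundaries top/bottom
    h[-1, 1:-1] = h[-2, 1:-1]
    if np.abs(h - h_old).max() < 1e-6:
        break
print("head range (m):", h.min(), h.max())
```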

Analysis of the Characteristics of the Seismic Source and the Wave Propagation Parameters in the Region of the Southeastern Korean Peninsula (한반도 남동부 지진의 지각매질 특성 및 지진원 특성 변수 연구)

  • Kim, Jun-Kyoung;Kang, Ik-Bum
    • Journal of the Korean Society of Hazard Mitigation / v.2 no.1 s.4 / pp.135-141 / 2002
  • Both the non-linear damping values of the deep and shallow crustal materials and the seismic source parameters were estimated from near-field seismic ground motions observed in the southeastern Korean Peninsula. The non-linear numerical algorithm applied in this study is the Levenberg-Marquardt method. All 25 sets of horizontal ground motions (east-west and north-south components at each seismic station) from 3 events (micro to macro scale) were used to estimate the damping values and source parameters. The non-linear damping values of the deep and shallow crustal materials were found to be similar to those of the western United States. The seismic source parameters from this study also showed that the resulting stress drop values are relatively low compared to those of the western United States. Consequently, comparisons of the various seismic parameters from this study with United States seismo-tectonic data suggest that the seismo-tectonic characteristics of the southeastern Korean Peninsula are more similar to those of the western U.S.
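
An illustrative Levenberg-Marquardt inversion of the kind described, using SciPy's curve_fit (whose default solver for unconstrained problems is Levenberg-Marquardt) to fit an omega-square source model with a damping term to synthetic spectral amplitudes; the model form and data are assumptions, not the paper's parameterization:

```python
# Sketch: non-linear least-squares fit of source/damping parameters to
# observed Fourier spectral amplitudes via Levenberg-Marquardt.
import numpy as np
from scipy.optimize import curve_fit

def spectrum(f, omega0, fc, kappa):
    """Brune omega-square source scaled by a path/site damping term."""
    return omega0 / (1.0 + (f / fc) ** 2) * np.exp(-np.pi * kappa * f)

f = np.linspace(0.5, 20.0, 60)                      # frequency band, Hz
rng = np.random.default_rng(0)
obs = spectrum(f, 2.0, 4.0, 0.03) * rng.lognormal(0.0, 0.1, f.size)

popt, pcov = curve_fit(spectrum, f, obs, p0=[1.0, 2.0, 0.01], method="lm")
print("omega0, fc, kappa =", popt)
```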

Comparison of Clinical Tissue Dose Distributions with Those Derived from the Energy Spectrum of a 15 MV Linear Accelerator Determined Using Lead Filter Transmission Dose (연(鉛)필터의 투과선량을 이용한 15 MV X선의 에너지스펙트럼 결정과 조직선량 비교)

  • Choi, Tae-Jin;Kim, Jin-Hee;Kim, Ok-Bae
    • Progress in Medical Physics / v.19 no.1 / pp.80-88 / 2008
  • Recent radiotherapy treatment planning systems (RTPS) generally adopt kernel-beam convolution methods to compute tissue dose. To obtain depth and profile doses for a given photon beam, the energy spectrum was reconstructed from filter transmission measurements through iterative numerical analysis. The experiments were performed with 15 MV X-rays (Oncor, Siemens) and an ionization chamber (0.125 cc, PTW) for measuring the filter-transmitted dose. The energy spectrum of the 15 MV X-rays was determined from the dose transmitted through lead filters of 0.51 cm to 8.04 cm thickness, with an energy interval of 0.25 MeV. The peak flux appeared at 3.75 MeV, and the mean energy of the 15 MV X-rays was 4.639 MeV in these experiments. The transmitted dose through the lead filters agreed within 0.6% on average, with a maximum discrepancy of 2.5% at a lead thickness of 5 cm. Since tissue dose depends strongly on beam energy, lateral doses were derived from the lateral spread of energy fluence through the flattening filter, modeled with shape tangents of 0.075 and 0.125, which gave mean energies of 4.211 MeV and 3.906 MeV, respectively. The analyzed energy spectrum was applied to generate the percent depth dose in the RTPS (XiO, Version 4.3.1, CMS). The generated percent depth doses for fields from 6 × 6 cm² to 30 × 30 cm² were very close to experimental measurements, within 1% discrepancy on average. The computed dose profiles were within 1% of measurements for a 10 × 10 cm field, while larger field sizes were within 2% uncertainty. The resulting algorithm produced an X-ray spectrum matching both beam quality and quantity with small discrepancy in these experiments.
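
A simplified sketch of the spectrum reconstruction idea: the transmitted dose through lead of thickness t is modeled as a sum of spectral weights attenuated by exp(-mu(E)t), and the non-negative weights are recovered by least squares. The attenuation coefficients and measurements below are synthetic placeholders, and a practical reconstruction would need regularization and the paper's iterative scheme:

```python
# Sketch: recover spectral weights w(E) >= 0 from lead-filter transmission,
# b(t) ~= sum_E w(E) * exp(-mu(E) * t), via non-negative least squares.
import numpy as np
from scipy.optimize import nnls

energies = np.arange(0.25, 15.01, 0.25)          # MeV, 0.25 MeV bins as above
mu_pb = 0.9 * energies ** -0.35                  # toy mu(E) for lead, 1/cm
thickness = np.linspace(0.51, 8.04, 12)          # cm of lead, as in the study

A = np.exp(-np.outer(thickness, mu_pb))          # transmission matrix
w_true = np.exp(-((energies - 3.75) / 2.0) ** 2) # synthetic spectrum, peak 3.75 MeV
b = A @ w_true                                   # synthetic "measured" doses

w_est, residual = nnls(A, b)
print("estimated peak at", energies[np.argmax(w_est)], "MeV,",
      "residual =", residual)
```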

Calculation of Surface Heat Flux in the Southeastern Yellow Sea Using Ocean Buoy Data (해양부이 자료를 이용한 황해 남동부 해역 표층 열속 산출)

  • Kim, Sun-Bok;Chang, Kyung-Il
    • The Sea: Journal of the Korean Society of Oceanography / v.19 no.3 / pp.169-179 / 2014
  • Monthly mean surface heat fluxes in the southeastern Yellow Sea are calculated using directly observed air-sea variables, including short- and longwave radiation, from an ocean buoy station, together with the COARE 3.0 bulk flux algorithm. The calculated monthly mean heat fluxes are then compared with previous estimates of climatological monthly mean surface heat fluxes near the buoy location. The sea surface receives heat through net shortwave radiation (Qi) and loses heat through net longwave radiation (Qb), sensible heat flux (Qh), and latent heat flux (Qe). Qe makes the largest contribution to the total heat loss, about 51%, while Qb and Qh account for 34% and 15%, respectively. The net heat flux (Qn) shows a maximum in May (191.4 W/m²), when Qi reaches its annual maximum, and a minimum in December (-264.9 W/m²), when the heat loss terms are largest. The annual mean Qn is estimated to be 1.9 W/m², which is negligibly small considering instrument errors (maximum of ±19.7 W/m²). In the previous estimates, summertime incoming radiation (Qi) is underestimated by about 10-40 W/m², and wintertime heat losses due to Qe and Qh are overestimated by about 50 W/m² and 30-70 W/m², respectively. Consequently, compared to Qn from the present study, the net heat gain during the period of net oceanic heat gain between April and August is underestimated, while the ocean's net heat loss in winter is overestimated in other studies; the difference in Qn is as large as 70-130 W/m² in December and January. Analysis of a long-term reanalysis product (MERRA) indicates that the difference in monthly mean heat fluxes between the present and previous studies is due not to the temporal variability of the fluxes but to inaccurate data used in the flux calculations. This study suggests that caution should be exercised in using the previously documented climatological monthly mean surface heat fluxes for research and numerical modeling purposes.
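
For reference, simplified constant-coefficient bulk formulas for the turbulent flux terms discussed above; the study itself uses the full COARE 3.0 algorithm, which determines the exchange coefficients iteratively from atmospheric stability, so this is only an illustrative approximation with hypothetical inputs:

```python
# Sketch: bulk estimates of sensible (Qh) and latent (Qe) heat fluxes with
# fixed exchange coefficients (COARE 3.0 computes these from stability).
RHO_AIR = 1.22      # air density, kg/m^3
CP_AIR = 1004.0     # specific heat of air, J/(kg K)
LV = 2.5e6          # latent heat of vaporization, J/kg
C_H = 1.1e-3        # bulk transfer coefficient, sensible (assumed)
C_E = 1.2e-3        # bulk transfer coefficient, latent (assumed)

def sensible_heat_flux(wind, t_sea, t_air):
    """Qh (W/m^2), positive = ocean loses heat to the atmosphere."""
    return RHO_AIR * CP_AIR * C_H * wind * (t_sea - t_air)

def latent_heat_flux(wind, q_sea, q_air):
    """Qe (W/m^2) from specific humidities (kg/kg) at surface and air level."""
    return RHO_AIR * LV * C_E * wind * (q_sea - q_air)

# Example: a cold, dry winter outbreak over the Yellow Sea
print(sensible_heat_flux(wind=10.0, t_sea=8.0, t_air=0.0))    # ~108 W/m^2
print(latent_heat_flux(wind=10.0, q_sea=0.0067, q_air=0.003)) # ~135 W/m^2
```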