• Title/Summary/Keyword: probability prediction

Improved AR-FGS Coding Scheme for Scalable Video Coding (확장형 비디오 부호화(SVC)의 AR-FGS 기법에 대한 부호화 성능 개선 기법)

  • Seo, Kwang-Deok;Jung, Soon-Heung;Kim, Jin-Soo;Kim, Jae-Gon
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.31 no.12C
    • /
    • pp.1173-1183
    • /
    • 2006
  • In this paper, we propose an efficient method for improving the visual quality of AR-FGS (Adaptive Reference FGS), which is adopted as a key scheme for SVC (Scalable Video Coding), the scalable extension of H.264. The standard FGS (Fine Granularity Scalability) adopts AR-FGS, which introduces temporal prediction into the FGS layer by using a high-quality reference signal constructed as the weighted average of the base-layer reconstructed image and the enhancement reference, to improve coding efficiency in the FGS layer. However, when the enhancement stream is truncated at a certain bitstream position during transmission, the rest of the FGS-layer data is not available at the FGS decoder. Thus the most noticeable problem of using the enhancement layer in prediction is the degraded visual quality caused by drifting, due to the mismatch between the reference frame used by the FGS encoder and that used by the decoder. To solve this problem, we exploit the principle of cyclical block coding, which encodes quantized transform coefficients in a cyclical manner in the FGS layer. Encoding block coefficients cyclically places 'higher-value' bits earlier in the bitstream, so the quantized transform coefficients included in early coding cycles have a higher probability of being correctly received and decoded than those in later cycles. Therefore, we can minimize the visual quality degradation caused by bitstream truncation by adjusting a weighting factor that controls the contribution of the bitstream produced in each coding cycle when constructing the enhancement-layer reference frame. Simulations show that the improved AR-FGS scheme outperforms the standard AR-FGS by up to about 1 dB in reconstructed visual quality.
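
The core of the scheme is the weighted reference frame: the enhancement-layer reference blends the base-layer reconstruction with the enhancement reference, and the weight is reduced for coefficients sent in later coding cycles. A minimal numpy sketch of that idea follows; the function name, the per-cycle weights, and the cycle map are illustrative assumptions, not the SVC reference implementation.

```python
import numpy as np

def build_enhancement_reference(base_recon, enh_recon, cycle_weights, coeff_cycle):
    """Construct an AR-FGS-style enhancement-layer reference frame as a
    weighted average of the base-layer reconstruction and the enhancement
    reference, with a per-cycle weighting factor (illustrative sketch).

    base_recon   : base-layer reconstructed frame (H x W array)
    enh_recon    : enhancement-layer reference frame (H x W array)
    cycle_weights: weight alpha_c in [0, 1] for each coding cycle c
    coeff_cycle  : H x W map assigning each sample to the coding cycle
                   in which its coefficient bits were sent
    """
    alpha = np.take(cycle_weights, coeff_cycle)   # per-sample weight
    return (1.0 - alpha) * base_recon + alpha * enh_recon

# Toy example: early cycles (more likely to survive bitstream truncation)
# get larger weights; later cycles contribute less to the reference.
H, W = 16, 16
rng = np.random.default_rng(0)
base = rng.random((H, W))
enh = rng.random((H, W))
cycles = rng.integers(0, 3, size=(H, W))          # 3 coding cycles
weights = np.array([0.9, 0.5, 0.1])               # assumed decreasing weights
ref = build_enhancement_reference(base, enh, weights, cycles)
```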

VKOSPI Forecasting and Option Trading Application Using SVM (SVM을 이용한 VKOSPI 일 중 변화 예측과 실제 옵션 매매에의 적용)

  • Ra, Yun Seon;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.177-192
    • /
    • 2016
  • Machine learning is a field of artificial intelligence. It refers to an area of computer science concerned with giving machines the ability to perform their own data analysis, decision making, and forecasting. For example, one representative machine learning model is the artificial neural network, a statistical learning algorithm inspired by biological neural networks. Other machine learning models include the decision tree, naive Bayes, and SVM (support vector machine) models. Among these, we use the SVM model in this study because it is mainly used for classification and regression analysis, which fits our study well. The core principle of SVM is to find a reasonable hyperplane that separates different groups in the data space. Given information about the data in any two groups, the SVM model judges to which group new data belongs based on the hyperplane obtained from the given data set. Thus, the greater the amount of meaningful data, the better the machine learning ability. In recent years, many financial experts have focused on machine learning, seeing the possibility of combining it with the financial field, where vast amounts of financial data exist. Machine learning techniques have proved powerful in describing non-stationary and chaotic stock price dynamics, and many studies have successfully forecast stock prices using machine learning algorithms. Recently, financial companies have begun to provide the Robo-Advisor service, a compound of Robot and Advisor, which can perform various financial tasks through advanced algorithms using rapidly changing, huge amounts of data. A Robo-Advisor's main tasks are to advise investors according to their personal investment propensity and to manage their portfolios automatically. In this study, we propose a method of forecasting the Korean volatility index, VKOSPI, using the SVM model and applying the forecast to real option trading to increase trading performance. VKOSPI is a measure of the future volatility of the KOSPI 200 index based on KOSPI 200 index option prices, similar to the VIX index based on S&P 500 option prices in the United States. The Korea Exchange (KRX) calculates and announces the real-time VKOSPI index. VKOSPI behaves like ordinary volatility and affects option prices: the directions of VKOSPI and option prices show a positive relation regardless of option type (call and put options with various strike prices). If volatility increases, all call and put option premiums increase, because the probability that an option will be exercised increases. Investors can track in real time how much an option's price rises as volatility rises through Vega, the Black-Scholes measure of an option's sensitivity to changes in volatility. Therefore, accurate forecasting of VKOSPI movements is one of the important factors that can generate profit in option trading. In this study, we verified with real option data that accurate VKOSPI forecasts can generate large profits in real option trading. To the best of our knowledge, there have been no studies that predict the direction of VKOSPI with machine learning and apply the prediction to actual option trading.
In this study, we predicted daily VKOSPI changes with the SVM model and then took an intraday short option strangle position, which profits as option prices fall, only on days when VKOSPI was expected to decline. We analyzed the results and tested whether the approach is applicable to real option trading based on the SVM's predictions. The results showed that the average prediction accuracy for VKOSPI was 57.83%, and the number of position entries was 43.2 on average, less than half the benchmark (100). A small number of trades is an indicator of trading efficiency. In addition, the experiment showed that the trading performance was significantly higher than the benchmark.
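
As a rough illustration of the paper's pipeline, the sketch below trains an SVM direction classifier and derives entry signals from it. The features, labels, and trading rule are toy stand-ins, not the authors' actual data or parameters.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Toy stand-ins for the paper's inputs: rows of daily features
# (e.g. lagged VKOSPI changes); label = 1 if VKOSPI fell intraday.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.3 * rng.normal(size=500) < 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("direction accuracy:", accuracy_score(y_te, pred))

# Hypothetical version of the paper's rule: enter a short strangle
# (profits if option prices fall) only on predicted "VKOSPI down" days.
enter_strangle = pred == 1
print("entries:", enter_strangle.sum(), "of", len(pred), "days")
```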

Preliminary Inspection Prediction Model to select the on-Site Inspected Foreign Food Facility using Multiple Correspondence Analysis (차원축소를 활용한 해외제조업체 대상 사전점검 예측 모형에 관한 연구)

  • Hae Jin Park;Jae Suk Choi;Sang Goo Cho
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.1
    • /
    • pp.121-142
    • /
    • 2023
  • As the number and weight of imported foods steadily increase, safety management of imported food to prevent food safety accidents is becoming more important. The Ministry of Food and Drug Safety conducts on-site inspections of foreign food facilities before customs clearance as well as import inspection at the customs clearance stage. However, given limited time, cost, and resources, a data-based safety management plan for imported food is needed. In this study, we tried to increase the efficiency of on-site inspections by building a machine learning prediction model that pre-selects the facilities expected to fail inspection. We collected basic information on 303,272 foreign food facilities and processing businesses from the Integrated Food Safety Information Network, together with 1,689 on-site inspection records from 2019 to April 2022. After preprocessing the facility data, only the records subject to on-site inspection were extracted using the foreign food facility_code, yielding 1,689 records with 103 variables. Of the 103 variables, those whose Theil's U index was '0' were removed, and after dimensionality reduction with Multiple Correspondence Analysis, 49 characteristic variables were finally derived. We built eight different models, tuned hyperparameters through 5-fold cross-validation, and then evaluated the performance of the resulting models. Since the purpose of selecting facilities for on-site inspection is to maximize recall, that is, the probability of judging nonconforming facilities as nonconforming, the Random Forest model, which had the highest Recall_macro, AUROC, Average PR, F1-score, and Balanced Accuracy among the algorithms tested, was evaluated as the best model. Finally, we apply Kernel SHAP (SHapley Additive exPlanations) to present the reasons individual facilities are selected as nonconforming, and discuss applicability to the on-site inspection facility selection system. Based on the results of this study, we expect the model to contribute to the efficient operation of limited resources such as manpower and budget by establishing an imported food management system built on a data-based, scientific risk management model.
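
A hedged sketch of the modeling step follows: a Random Forest tuned by 5-fold cross-validation on macro recall, then Kernel SHAP to explain individual selections. The synthetic data, grid, and sample sizes are assumptions for illustration only.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import make_classification

# Toy stand-in for the 1,689 inspection records x 49 MCA-reduced features.
X, y = make_classification(n_samples=600, n_features=10, weights=[0.7, 0.3],
                           random_state=0)

# 5-fold CV hyperparameter tuning, scored on macro recall as in the paper.
grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    {"n_estimators": [100, 300], "max_depth": [None, 10]},
                    scoring="recall_macro", cv=5)
grid.fit(X, y)
rf = grid.best_estimator_

# Kernel SHAP on a small background sample to explain why individual
# facilities are flagged as likely nonconforming.
background = shap.sample(X, 50, random_state=0)
explainer = shap.KernelExplainer(lambda d: rf.predict_proba(d)[:, 1], background)
shap_values = explainer.shap_values(X[:5], nsamples=100)
print("per-feature contributions for 5 facilities:\n", np.round(shap_values, 3))
```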

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, a Go (Baduk) artificial intelligence program by Google DeepMind, won a landmark victory against Lee Sedol. Many people thought machines would not be able to beat a human at Go because, unlike chess, the number of possible game paths exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence was highlighted as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning drew attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems. It shows good performance in image recognition and in high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether the deep learning techniques studied so far can be used not only for recognizing high-dimensional data but also for binary classification problems of traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data is the telemarketing response data of a bank in Portugal. It has input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer opened an account. To evaluate the applicability of deep learning algorithms to binary classification, we compared the performance of various models using the CNN and LSTM algorithms and dropout, which are widely used deep learning algorithms and techniques, with that of MLP models, a traditional artificial neural network. However, since not all network design alternatives can be tested, the experiment was conducted with restricted settings on the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application conditions of the dropout technique. The F1 score was used to evaluate model performance, showing how well the models classify the class of interest rather than overall accuracy. The detailed methods for applying each deep learning technique were as follows. The CNN algorithm reads adjacent values around a specific value and recognizes features, but in business data the distance between fields usually does not matter because the fields are largely independent. In this experiment, we therefore set the filter size of the CNN to the number of fields so that the whole record is read at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer is reversed relative to the first in order to reduce the influence of field position.
In the case of the dropout technique, we set neurons to drop out with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using dropout, followed by the MLP model with two hidden layers using dropout. The experiments yielded several findings. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models. This is interesting because the CNN performed well in binary classification problems, to which it has rarely been applied, as well as in fields where its effectiveness is proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
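
For concreteness, the sketch below reproduces the flavor of the paper's second-best configuration: an MLP with two hidden layers, dropout of 0.5 after each, evaluated by F1 score. Layer sizes, epochs, and the synthetic data are assumptions, not the paper's settings.

```python
import numpy as np
from tensorflow import keras
from sklearn.metrics import f1_score

# Toy stand-in for the Portuguese bank telemarketing data:
# tabular inputs, binary target (opened an account or not).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 16)).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("float32")

# MLP with two hidden layers and dropout(0.5) after each; a neuron is
# dropped with probability 0.5 during training, as in the paper.
model = keras.Sequential([
    keras.layers.Input(shape=(16,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X[:1500], y[:1500], epochs=5, batch_size=64, verbose=0)

pred = (model.predict(X[1500:], verbose=0).ravel() > 0.5).astype(int)
print("F1:", f1_score(y[1500:], pred))   # F1 rather than accuracy, as in the paper
```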

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.63-83
    • /
    • 2019
  • Investors prefer to look for trading points based on chart shapes rather than complex analyses such as corporate intrinsic value analysis and technical indicator analysis. However, pattern analysis is difficult and has been computerized less than users need. In recent years, many studies have examined stock price patterns using machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, advances in IT have made it easier to analyze huge numbers of charts to find patterns that can predict stock prices. Although short-term price forecasting power has improved, long-term forecasting power remains limited, so these methods are used for short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that past technology could not recognize, but this can be vulnerable in practice, because whether the patterns found are suitable for trading is a separate matter. Such studies find a point that matches a meaningful pattern and then measure performance after n days, assuming a purchase at that point. Since this approach calculates virtual returns, it can diverge considerably from reality. Existing research tries to find patterns with stock price predictive power; this study instead proposes defining the patterns first and trading when a pattern with a high success probability appears. The M & W wave patterns published by Merrill (1980) are simple because each can be identified by five turning points. Although some of these patterns were reported to have price predictability, no performance in actual markets had been reported. The simplicity of a five-turning-point pattern has the advantage of reducing the cost of improving pattern recognition accuracy. In this study, the 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can easily be implemented in a system, and only the one pattern with the highest success rate per group is traded. Patterns with a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The results are realistic because performance is measured assuming that both the buy and the sell were executed. We tested three ways to calculate the turning points. The first, the minimum-change-rate zig-zag method, removes price movements below a certain percentage and computes the vertices. In the second, the high-low line zig-zag method, a high that touches the n-day high line is taken as a peak, and a low that touches the n-day low line is taken as a valley. In the third, the swing wave method, a central high that is higher than the n highs on its left and right is taken as a peak, and a central low that is lower than the n lows on its left and right is taken as a valley. The swing wave method was superior to the other methods in our tests, which we interpret to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished.
Genetic algorithms (GA) were the most suitable solution, since it was practically impossible to search exhaustively for patterns with high success rates given the enormous number of cases in this simulation. We also performed the simulation using the Walk-forward Analysis (WFA) method, which separates the test section from the application section, so we were able to respond appropriately to market changes. In this study we optimize at the portfolio level, because optimizing the variables for each individual stock risks over-optimization; we selected 20 constituent stocks to increase the effect of diversification while avoiding over-fitting. We tested the KOSPI market divided into six categories. The small-cap portfolio was the most successful and the high-volatility portfolio was second best. This suggests that patterns need some price volatility to take shape, but that higher volatility is not always better.
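
The swing wave turning-point rule described above is straightforward to sketch in code. The version below simplifies to a single price series (the paper uses separate high and low prices) and uses made-up data, so treat it as an illustration of the rule, not the authors' implementation.

```python
import numpy as np

def swing_wave_turning_points(prices, n):
    """Swing-wave vertex detection: a bar is a peak if it exceeds the n
    values on both sides, and a valley if it is below the n values on
    both sides (simplified here to a single price series)."""
    points = []
    for i in range(n, len(prices) - n):
        left = prices[i - n:i]
        right = prices[i + 1:i + 1 + n]
        if prices[i] > left.max() and prices[i] > right.max():
            points.append((i, prices[i], "peak"))
        elif prices[i] < left.min() and prices[i] < right.min():
            points.append((i, prices[i], "valley"))
    return points

# Five consecutive turning points then define one M- or W-type pattern.
rng = np.random.default_rng(1)
prices = np.cumsum(rng.normal(size=300)) + 100
for idx, price, kind in swing_wave_turning_points(prices, n=5)[:5]:
    print(idx, round(price, 2), kind)
```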

A Study on Developing a VKOSPI Forecasting Model via GARCH Class Models for Intelligent Volatility Trading Systems (지능형 변동성트레이딩시스템개발을 위한 GARCH 모형을 통한 VKOSPI 예측모형 개발에 관한 연구)

  • Kim, Sun-Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.2
    • /
    • pp.19-32
    • /
    • 2010
  • Volatility plays a central role in both academic and practical applications, especially in pricing financial derivatives and trading volatility strategies. This study presents a mechanism based on generalized autoregressive conditional heteroskedasticity (GARCH) models that can enhance the performance of intelligent volatility trading systems by predicting Korean stock market volatility more accurately. In particular, we embedded into our model the volatility asymmetry widely documented in the literature. The newly developed Korean stock market volatility index of the KOSPI 200, VKOSPI, is used as the volatility proxy. It is the price of a linear portfolio of KOSPI 200 index options and measures the effect of dealers' and option traders' expectations on stock market volatility over 30 calendar days. The KOSPI 200 index options market started in 1997 and has become the most actively traded stock index options market in the world, with a volume of more than 10 million contracts a day. Analyzing the VKOSPI is therefore important for understanding the volatility inherent in option prices and can offer trading ideas for futures and options dealers. Using the VKOSPI as the volatility proxy avoids the statistical estimation problems associated with other measures of volatility, since the VKOSPI is the model-free expected volatility of market participants, calculated directly from transacted option prices. This study estimates symmetric and asymmetric GARCH models for the KOSPI 200 index from January 2003 to December 2006 by maximum likelihood. The asymmetric GARCH models are the GJR-GARCH model of Glosten, Jagannathan and Runkle, the exponential GARCH model of Nelson, and the power autoregressive conditional heteroskedasticity (ARCH) model of Ding, Granger and Engle; the symmetric model is the basic GARCH(1,1). Tomorrow's forecast value and the change direction of stock market volatility are obtained by recursive GARCH specifications from January 2007 to December 2009 and compared with the VKOSPI. Empirical results indicate that negative unanticipated returns increase volatility more than positive return shocks of equal magnitude decrease it, indicating volatility asymmetry in the Korean stock market. The point value and change direction of tomorrow's VKOSPI are estimated and forecast by the GARCH models. A volatility trading system is developed using the forecast change direction of the VKOSPI: if tomorrow's VKOSPI is expected to rise, a long straddle or strangle position is established; a short straddle or strangle position is taken if it is expected to fall. Total profit is calculated as the cumulative sum of VKOSPI percentage changes: if the forecast direction is correct, the absolute value of the VKOSPI percentage change is added to trading profit; otherwise it is subtracted. For the in-sample period, the power ARCH model fits best on a statistical metric, Mean Squared Prediction Error (MSPE), while the exponential GARCH model shows the highest Mean Correct Prediction (MCP). The power ARCH model also fits best for the out-of-sample period and provides the highest probability of predicting tomorrow's VKOSPI change direction. Overall, the power ARCH model fits the VKOSPI best.
All the GARCH models provide trading profits for the volatility trading system, and the exponential GARCH model shows the best in-sample performance, an annual profit of 197.56%. The GARCH models remain profitable out of sample except for the exponential GARCH model; during the out-of-sample period, the power ARCH model shows the largest annual trading profit, 38%. The volatility clustering and asymmetry found in this research reflect volatility non-linearity. This further suggests that combining the asymmetric GARCH models with artificial neural networks, which have been shown to model nonlinear relationships effectively, could significantly enhance the performance of the suggested volatility trading system.
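
A minimal sketch of the forecast-then-trade loop, using the widely available arch package: an asymmetric (GJR-type) GARCH model produces a one-day-ahead volatility forecast, and the straddle direction follows the forecast change. The returns are simulated and the rule is simplified; the paper's recursive estimation and its EGARCH/power ARCH variants are not reproduced here.

```python
import numpy as np
from arch import arch_model

# Toy stand-in for daily KOSPI 200 returns (percent), used to forecast
# next-day volatility.
rng = np.random.default_rng(0)
returns = rng.normal(scale=1.0, size=1000)

# GJR-GARCH(1,1): o=1 adds the asymmetry term, so negative shocks can
# raise volatility more than positive shocks of equal size.
res = arch_model(returns, vol="GARCH", p=1, o=1, q=1).fit(disp="off")
fcast = res.forecast(horizon=1)
sigma_next = float(np.sqrt(fcast.variance.values[-1, 0]))
sigma_today = float(res.conditional_volatility[-1])

# Paper's trading rule: long straddle/strangle if volatility is expected
# to rise tomorrow, short if it is expected to fall.
position = "long straddle" if sigma_next > sigma_today else "short straddle"
print(f"today {sigma_today:.3f} -> tomorrow {sigma_next:.3f}: {position}")
```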

Wave Analysis and Spectrum Estimation for the Optimal Design of the Wave Energy Converter in the Hupo Coastal Sea (파력발전장치 설계를 위한후포 연안의 파랑 분석 및 스펙트럼 추정)

  • Kweon, Hyuck-Min;Cho, Hongyeon;Jeong, Weon-Mu
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.25 no.3
    • /
    • pp.147-153
    • /
    • 2013
  • There exist various types of WEC (Wave Energy Converter), and among them the point absorber is the most widely investigated type. However, it is difficult to find examples of systematically measured data analyzed for the design of a point-absorber power buoy. This study investigates the wave load acting on the point-absorber resonance power buoy wave energy extraction system proposed by Kweon et al. (2010). It analyzes the time series spectra of three years of wave data (2002.05.01~2005.03.29) measured with a pressure-type wave gauge off the north breakwater of Hupo harbor, on the east coast of the Korean peninsula. The analysis shows that monthly variations in wave period and wave height were apparent and that monthly wave power was unevenly distributed over the year. The average steepness of the usual waves was 0.01, lower than the wind wave range of 0.02-0.04. The mode of the average wave period is 5.31 sec, while the mode of the corresponding wave heights is 0.29 m. The occurrence probability of the peak period is bi-modal, with mode values between 4.47 sec and 6.78 sec. The design wave period can be selected from these values (5.31 sec, or the 4.47 sec and 6.78 sec peak-period modes). About 95% of measured wave heights are below 1 m. This study found that a resonance power buoy system is necessary in coastal areas with low wave energy and that an optimal design overcoming the uneven monthly distribution of wave power is a major task in developing a WEF (Wave Energy Farm). Since the average spectrum of the usual waves could not be expressed by a standard spectrum equation, this study proposes a new three-parameter spectrum equation, which provides basic data for predicting power production with a wave power buoy and for fatigue analysis of the system.
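
The spectral statistics quoted above (peak period, wave height, spectral shape) come from standard spectral analysis of the elevation record. A generic sketch using Welch's method is shown below; the sampling rate and record are assumptions, the pressure-to-elevation conversion is skipped, and the paper's three-parameter spectrum equation is not reproduced since its form is not given in the abstract.

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

# Toy surface-elevation series standing in for the Hupo record; real
# pressure-gauge data must first be converted to surface elevation.
fs = 2.0                                  # sampling frequency [Hz], assumed
t = np.arange(0, 1800, 1 / fs)            # 30-minute burst
rng = np.random.default_rng(0)
eta = 0.15 * np.sin(2 * np.pi * t / 5.3) + 0.05 * rng.normal(size=t.size)

f, S = welch(eta, fs=fs, nperseg=512)     # wave spectrum estimate [m^2/Hz]
m0 = trapezoid(S, f)                      # zeroth spectral moment
Hm0 = 4.0 * np.sqrt(m0)                   # significant wave height [m]
Tp = 1.0 / f[1:][np.argmax(S[1:])]        # peak period [s], skipping f = 0
print(f"Hm0 = {Hm0:.2f} m, Tp = {Tp:.2f} s")
```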

Determinants of IPO Failure Risk and Price Response in Kosdaq (코스닥 상장 시 실패위험 결정요인과 주가반응에 관한 연구)

  • Oh, Sung-Bae;Nam, Sam-Hyun;Yi, Hwa-Deuk
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.5 no.4
    • /
    • pp.1-34
    • /
    • 2010
  • Recently, the failure rates of Kosdaq IPO firms have been increasing and their survival rates tend to be very low; when these firms fail, often after receiving a number of governmental financial supports, they may inflict severe financial damage on investors and on the economy as a whole. To ensure investor confidence in Kosdaq and foster promising, healthy businesses, it is necessary to assess their intrinsic value and survivability precisely. This study investigates what contributed to the failure of IPO firms and analyzes how these elements are factored into the firms' stock returns. Failure risk is assessed at the time of IPO, considering factors reflecting IPO characteristics: underwriter prestige, auditor quality, IPO offer price, firm age, and IPO proceeds. The study then examines how the failure risks present at IPO affected post-IPO stock prices. The sample includes 98 Kosdaq firms that failed and 569 healthy firms in the same business categories, and Logit models are used to estimate the probability of failure. Empirical results indicate that auditor quality, IPO offer price, firm age, and IPO proceeds show significant relevance to failure risk at the time of IPO. Among other variables, firm size and ROA, previously deemed significantly related to failure risk, do not show significant relevance here, whereas financial leverage does. This illustrates the efficacy of a model that appropriately reflects the attributes of IPO firms. Also, although R&D expenditure was believed to be value relevant in previous studies, this study reveals that R&D is not a significant factor related to failure risk. Examining the relation between failure risk and stock prices, the study finds that failure risk is negatively related to 1- and 2-year size-adjusted abnormal returns after IPO. These results may provide useful knowledge for government regulatory officials contemplating pertinent policy and for credit analysts evaluating a firm's credit standing.
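
A minimal sketch of the estimation step, fitting a Logit model of failure probability on IPO-time predictors with statsmodels; the variables mimic the paper's determinants, but the data, coding, and coefficients are entirely made up for illustration.

```python
import numpy as np
import statsmodels.api as sm

# Toy stand-in for the 98 failed + 569 healthy Kosdaq IPOs; columns mimic
# the paper's predictors with made-up values.
rng = np.random.default_rng(0)
n = 667
X = np.column_stack([
    rng.integers(0, 2, n),          # auditor quality dummy (assumed coding)
    rng.normal(9, 1, n),            # log offer price
    rng.normal(8, 4, n).clip(0),    # firm age (years)
    rng.normal(10, 1, n),           # log IPO proceeds
    rng.normal(0.5, 0.2, n),        # financial leverage
])
y = rng.binomial(1, 0.15, n)        # 1 = failed after listing

# Logit model: coefficients play the role of failure-risk determinants.
logit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(logit.summary())
print("P(failure) for first 3 IPOs:", logit.predict(sm.add_constant(X))[:3])
```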

A Study on the Volatility of Global Stock Markets using Markov Regime Switching model (마코브국면전환모형을 이용한 글로벌 주식시장의 변동성에 대한 연구)

  • Lee, Kyung-Hee;Kim, Kyung-Soo
    • Management & Information Systems Review
    • /
    • v.34 no.3
    • /
    • pp.17-39
    • /
    • 2015
  • This study examined structural changes and volatility in the global stock markets using the Markov Regime Switching ARCH model developed by Hamilton and Susmel (1994). First, the US, Italy, and Ireland showed that variance in the high volatility regime was more than five times that in the low volatility regime, while in Korea, Russia, India, and Greece the variance in the high volatility regime was more than eight times that in the low. On average, a jump from regime 1 to regime 2 implied a roughly threefold increase in risk, while the risk during regime 3 was up to almost thirteen times that during regime 1 over the study period. Korea, the US, India, and Italy showed ARCH(1) and ARCH(2) effects as well as leverage and asymmetric effects. Second, the persistence of the low volatility regime was estimated at 278 days in Korea, indicating that the mean transition probabilities between volatility regimes exhibited the highest long-term persistence there. Third, Chow tests showed unstable structural changes and volatility in the coefficients for the stock markets during the Asian, Global, and European financial crises. One-step prediction error tests showed that the stock markets were unstable during the Asian crisis of 1997-1998 except for Russia, during the Global crisis of 2007-2008 except for Korea, and during the European crisis of 2010-2011 except for Korea, the US, Russia, and India. N-step tests showed that most stock markets were unstable during the Asian and Global crises. CUSUM tests showed little change during the Asian crisis, and the stock markets were stable until the late 2000s except in some countries; CUSUMSQ tests showed a mix of stable and unstable stock markets across countries during the crises. Fourth, likelihood ratio tests confirmed a close relation between volatility in the Korean stock market and the other markets. Accordingly, this study identified the episodes or events that generated high volatility in the stock markets during the financial crises, and for all seven stock markets the significant switches between volatility regimes implied a considerable change in market risk. High stock market volatility appeared to be related to the business recession of the early 1990s. By closely examining the history of political and economic events in these countries, the results were found to be consistent with Lamoureux and Lastrapes (1990): there were structural changes and volatility during the crises, and specifically every high volatility regime in the SWARCH-L(3,2) student-t model was accompanied by important policy changes, financial crises, or other critical events in the international economy. More sophisticated nonlinear models are needed for further analysis.
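
The abstract's SWARCH-L(3,2) student-t model is not available off the shelf, but the regime-switching idea can be sketched with a Markov switching-variance model from statsmodels, shown below as a simplified stand-in (two regimes, Gaussian errors) on simulated returns.

```python
import numpy as np
import statsmodels.api as sm

# Toy return series with a calm and a turbulent stretch, standing in for
# a national stock index.
rng = np.random.default_rng(0)
returns = np.concatenate([rng.normal(0, 0.5, 400),   # low-volatility regime
                          rng.normal(0, 2.0, 200),   # high-volatility regime
                          rng.normal(0, 0.5, 400)])

# Markov regime switching with switching variance: the model estimates
# regime-specific variances and the transition probabilities between them.
mod = sm.tsa.MarkovRegression(returns, k_regimes=2, trend="c",
                              switching_variance=True)
res = mod.fit()
print(res.summary())
# Regime persistence in days, analogous to the paper's 278-day estimate
# for the low-volatility regime in Korea.
print("expected regime durations:", res.expected_durations)
```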

Bias Correction for GCM Long-term Prediction using Nonstationary Quantile Mapping (비정상성 분위사상법을 이용한 GCM 장기예측 편차보정)

  • Moon, Soojin;Kim, Jungjoong;Kang, Boosik
    • Journal of Korea Water Resources Association
    • /
    • v.46 no.8
    • /
    • pp.833-842
    • /
    • 2013
  • Quantile mapping is used to produce reliable GCM (Global Climate Model) data by correcting systematic biases in the original data set. This scheme, in general, projects the Cumulative Distribution Function (CDF) of the underlying data set onto the target CDF, assuming that the parameters of the target distribution are stationary. Applying stationary quantile mapping to the nonstationary long-term time series of future precipitation scenarios computed by a GCM can therefore yield biased projections. In this research, the Nonstationary Quantile Mapping (NSQM) scheme is suggested for bias correction of nonstationary long-term time series data. The proposed scheme uses statistical parameters with nonstationary long-term trends. The Gamma distribution was assumed for both the source and target probability distributions. As the climate change scenarios, the 20C3M (baseline) and SRES A2 (projection) scenarios of the CGCM3.1/T63 model from CCCma (Canadian Centre for Climate Modelling and Analysis) were used. Precipitation data were collected from 10 rain gauge stations in the Han River basin. To consider seasonal characteristics, the study was performed separately for the flood (June~October) and non-flood (November~May) seasons. The periods for the baseline and projection scenarios were set to 1973~2000 and 2011~2100, respectively. This study evaluated the performance of NSQM under various ways of setting the target distribution parameters. Projections are shown for three periods: the FF scenario (Foreseeable Future, 2011~2040), MF scenario (Mid-term Future, 2041~2070), and LF scenario (Long-term Future, 2071~2100). The trend test for the annual precipitation projection using NSQM shows increases of 330.1 mm (25.2%), 564.5 mm (43.1%), and 634.3 mm (48.5%) for the FF, MF, and LF scenarios, respectively. The stationary scheme overestimates the projection for the FF scenario and underestimates it for the LF scenario; this problem can be improved by applying nonstationary quantile mapping.
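
Quantile mapping itself is compact: pass each value through the source CDF and the inverse target CDF, and in the nonstationary variant let the fitted parameters carry a long-term trend. A sketch with Gamma distributions follows; all parameter values and the linear trend are assumptions for illustration.

```python
import numpy as np
from scipy.stats import gamma

def quantile_map(x, src_params, tgt_params):
    """Map value(s) x through Gamma CDFs: F_tgt^{-1}(F_src(x)).
    src_params/tgt_params are (shape, scale) pairs; in the nonstationary
    variant these pairs carry a long-term trend instead of being fixed."""
    a_s, b_s = src_params
    a_t, b_t = tgt_params
    u = gamma.cdf(x, a_s, scale=b_s)
    return gamma.ppf(u, a_t, scale=b_t)

# Stationary mapping: one fixed parameter set for the whole projection.
x = np.array([10.0, 50.0, 120.0])          # GCM precipitation [mm], toy values
print(quantile_map(x, (1.2, 30.0), (1.5, 25.0)))

# Nonstationary (NSQM-style) mapping: let the target shape parameter drift
# over the projection years (the linear trend here is an assumption).
for yr in (2011, 2040, 2070, 2100):
    a_t = 1.5 + 0.003 * (yr - 2011)        # assumed trend in the shape parameter
    print(yr, round(float(quantile_map(50.0, (1.2, 30.0), (a_t, 25.0))), 1))
```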