• Title/Summary/Keyword: Statistical probability

Search Results: 1,630

Cause-Specific Mortality at the Provincial Level (시도의 사망원인별 사망력)

  • Park Kyung Ae
    • Korea Journal of Population Studies, v.26 no.2, pp.1-32, 2003
  • An analysis of cause-specific mortality at the provincial level provides essential information for policy formulation and makes it possible to draw hypotheses regarding various diseases and causes of death. Although the mortality level and causes of death at the provincial level are determined by the combined effects of socioeconomic, cultural, medical, and ecological factors, this study primarily examines similarities and differences in cause-specific mortality across provinces. Using registered deaths and the registered population as of 1998, delayed death registrations and unreported infant deaths were supplemented at the provincial level, and age-standardized death rates and life tables were calculated. Regarding the mortality level due to all causes, the major findings were as follows: (1) for both sexes as a whole, Seoul showed the lowest mortality level and Jeonnam the highest; and (2) differences in mortality level among provinces were greater for males than for females, and for those under 65 than for those 65 and over. Regarding the cause-specific mortality level revealed in all indicators (cause-specific age-standardized mortality rates and the probability of dying at birth due to a specific cause, for males, females, and both sexes combined), the major findings were as follows: (1) mortality due to heart diseases was highest in Busan and lowest in Gangweon; (2) mortality due to liver diseases was highest in Jeonnam; and (3) mortality due to traffic accidents was highest in Chungnam and lowest in Inchon. As mortality differentials at the provincial level are related to various factors, an exploratory statistical analysis was attempted for 25 explanatory variables, including socioeconomic variables, and 90 mortality variables. Mortality due to all causes is related to socioeconomic variables. Among cause-specific mortality, mortality due to liver diseases and traffic accidents is related to socioeconomic variables. Finally, the need to improve the quality of death certificates is discussed.
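The age-standardized death rates the study compares across provinces follow the direct standardization formula: weight each age-specific rate by a standard population share. A minimal sketch with made-up numbers (the study itself used 1998 registered deaths and population):

```python
import numpy as np

# Toy age groups: deaths and mid-year population per group (illustrative only)
deaths = np.array([20, 150, 900])
population = np.array([100_000, 80_000, 30_000])
std_weights = np.array([0.55, 0.35, 0.10])     # standard population shares, sum to 1

age_rates = deaths / population                # age-specific death rates
asdr = float(np.sum(age_rates * std_weights))  # age-standardized death rate
print(f"ASDR: {asdr * 100_000:.1f} per 100,000")
```

Because the same standard weights are applied to every province, differences in the resulting ASDRs reflect mortality rather than differences in age structure.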

Bias Correction for GCM Long-term Prediction using Nonstationary Quantile Mapping (비정상성 분위사상법을 이용한 GCM 장기예측 편차보정)

  • Moon, Soojin; Kim, Jungjoong; Kang, Boosik
    • Journal of Korea Water Resources Association, v.46 no.8, pp.833-842, 2013
  • Quantile mapping is used to reproduce reliable GCM (Global Climate Model) data by correcting systematic biases in the original data set. This scheme, in general, projects the cumulative distribution function (CDF) of the underlying data set onto the target CDF, assuming that the parameters of the target distribution function are stationary. Therefore, applying stationary quantile mapping to nonstationary long-term time series of future precipitation scenarios computed by a GCM can yield biased projections. In this research, a Nonstationary Quantile Mapping (NSQM) scheme was suggested for bias correction of nonstationary long-term time series data. The proposed scheme uses statistical parameters with nonstationary long-term trends. The Gamma distribution was assumed for both the source and target probability distributions. As the climate change scenarios, the 20C3M (baseline) and SRES A2 (projection) scenarios of the CGCM3.1/T63 model from CCCma (Canadian Centre for Climate Modelling and Analysis) were used. Precipitation data were collected from 10 rain gauge stations in the Han River basin. To account for seasonal characteristics, the study was performed separately for the flood (June-October) and non-flood (November-May) seasons. The periods for the baseline and projection scenarios were set as 1973-2000 and 2011-2100, respectively. This study evaluated the performance of NSQM by experimenting with various ways of setting the parameters of the target distribution. Projections were shown for three periods: the FF scenario (Foreseeable Future, 2011-2040), the MF scenario (Mid-term Future, 2041-2070), and the LF scenario (Long-term Future, 2071-2100). The trend test for the annual precipitation projection using NSQM shows increases of 330.1 mm (25.2%), 564.5 mm (43.1%), and 634.3 mm (48.5%) for the FF, MF, and LF scenarios, respectively. The stationary scheme yields overestimated projections for the FF scenario and underestimated projections for the LF scenario; this problem could be improved by applying nonstationary quantile mapping.
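The core CDF-to-CDF projection of quantile mapping can be sketched as below; the gamma parameters are illustrative stand-ins, and in the nonstationary scheme the target parameters would additionally carry a fitted long-term trend in time:

```python
import numpy as np
from scipy import stats

def quantile_map(x, shape_src, scale_src, shape_tgt, scale_tgt):
    """Map values through the source gamma CDF onto the target gamma quantiles."""
    u = stats.gamma.cdf(x, a=shape_src, scale=scale_src)   # non-exceedance prob.
    return stats.gamma.ppf(u, a=shape_tgt, scale=scale_tgt)

# Illustrative bias correction: model precipitation runs at 2/3 of observed scale
x_model = np.array([5.0, 10.0, 20.0])
x_corrected = quantile_map(x_model, shape_src=2.0, scale_src=2.0,
                           shape_tgt=2.0, scale_tgt=3.0)
```

With equal shape parameters, this mapping simply rescales by the ratio of scales; in general it corrects the whole distribution, not just the mean.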

VKOSPI Forecasting and Option Trading Application Using SVM (SVM을 이용한 VKOSPI 일 중 변화 예측과 실제 옵션 매매에의 적용)

  • Ra, Yun Seon; Choi, Heung Sik; Kim, Sun Woong
    • Journal of Intelligence and Information Systems, v.22 no.4, pp.177-192, 2016
  • Machine learning is a field of artificial intelligence. It refers to an area of computer science concerned with giving machines the ability to perform their own data analysis, decision making, and forecasting. One representative machine learning model is the artificial neural network, a statistical learning algorithm inspired by the biological neural network. Other machine learning models include the decision tree, naive Bayes, and SVM (support vector machine) models. Among these, we use the SVM model in this study because it is mainly used for classification and regression analysis, which fits our study well. The core principle of the SVM is to find a reasonable hyperplane that separates different groups in the data space. Given information about the data in any two groups, the SVM model judges to which group new data belongs based on the hyperplane obtained from the given data set. Thus, the greater the amount of meaningful data, the better the machine learning ability. In recent years, many financial experts have focused on machine learning, seeing the possibility of combining it with the financial field, where vast amounts of financial data exist. Machine learning techniques have proved powerful in describing non-stationary and chaotic stock price dynamics, and much research has successfully forecast stock prices using machine learning algorithms. Recently, financial companies have begun to provide the Robo-Advisor service, a compound of Robot and Advisor, which can perform various financial tasks through advanced algorithms using rapidly changing, huge amounts of data. A Robo-Advisor's main tasks are to advise investors on their personal investment propensity and to manage the portfolio automatically. In this study, we propose a method of forecasting the Korean volatility index, VKOSPI, using the SVM model, one of the machine learning methods, and of applying it to real option trading to increase trading performance. VKOSPI is a measure of the future volatility of the KOSPI 200 index based on KOSPI 200 index option prices; it is similar to the VIX index, which is based on S&P 500 option prices in the United States. The Korea Exchange (KRX) calculates and announces the real-time VKOSPI index. VKOSPI behaves like ordinary volatility and affects option prices: the direction of VKOSPI and option prices shows a positive relation regardless of option type (call and put options with various strike prices). If volatility increases, all call and put option premiums increase, because the probability of the option being exercised increases. Through Vega, the Black-Scholes measure of an option's sensitivity to changes in volatility, the investor can know in real time how much the option price rises for a given rise in volatility. Therefore, accurate forecasting of VKOSPI movements is one of the important factors that can generate profit in option trading. In this study, we verified with real option data that an accurate forecast of VKOSPI can yield a large profit in real option trading. To the best of our knowledge, there have been no studies on predicting the direction of VKOSPI with machine learning and applying the prediction to actual option trading. We predicted daily VKOSPI changes with the SVM model and then took an intraday option strangle position, which profits as option prices fall, only when VKOSPI was expected to decline during the day. We analyzed the results and tested whether the strategy is applicable to real option trading based on the SVM's predictions. The results showed that the prediction accuracy for VKOSPI was 57.83% on average, and the number of position entries was 43.2, less than half of the benchmark (100). A smaller number of trades indicates trading efficiency. In addition, the experiment showed that the trading performance was significantly higher than the benchmark.
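The classification step the paper describes, predicting whether VKOSPI rises or falls the next day, can be sketched with scikit-learn; the features and data below are synthetic stand-ins, not the study's actual inputs:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))                   # stand-in daily market features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # 1 = VKOSPI up, 0 = down

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X[:200], y[:200])                       # train on the first 200 days
accuracy = clf.score(X[200:], y[200:])          # evaluate out of sample
```

In the paper's setup, a short strangle would then be entered only on days the classifier predicts a decline.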

Anomaly Detection for User Action with Generative Adversarial Networks (적대적 생성 모델을 활용한 사용자 행위 이상 탐지 방법)

  • Choi, Nam woong; Kim, Wooju
    • Journal of Intelligence and Information Systems, v.25 no.3, pp.43-62, 2019
  • At one time, the anomaly detection field was dominated by methods that determined whether an abnormality existed based on statistics derived from specific data. This methodology was possible because data in the past were low-dimensional, so classical statistical methods worked effectively. However, as data characteristics have become more complex in the era of big data, it has become difficult to accurately analyze and predict industrial data in the conventional way. Therefore, supervised learning algorithms based on SVMs and decision trees were adopted. However, a supervised model can predict test data accurately only when the classes are balanced, while most data generated in industry have imbalanced classes, so the predictions of a supervised model are not always valid. To overcome these drawbacks, many studies now use unsupervised models that are not influenced by the class distribution, such as autoencoders or generative adversarial networks. In this paper, we propose a method of detecting anomalies using generative adversarial networks. AnoGAN, introduced by Schlegl et al. (2017), is a classification model that performs anomaly detection on medical images; it is composed of convolutional neural networks and has been used in the detection field. In contrast, research on anomaly detection for sequence data using generative adversarial networks is scarce compared to image data. Li et al. (2018) proposed a model using LSTM, a type of recurrent neural network, to classify anomalies in numerical sequence data, but it has not been applied to categorical sequence data, nor has the feature matching method of Salimans et al. (2016). This suggests that much remains to be explored in anomaly classification of sequence data with generative adversarial networks. To learn the sequence data, the generative adversarial network is built from LSTMs: the generator is a 2-stacked LSTM with 32-dim and 64-dim hidden unit layers, and the discriminator is an LSTM with a 64-dim hidden unit layer. In the existing paper on anomaly detection for sequence data, entropy values of the probability of the actual data are used to derive anomaly scores, but in this paper, as mentioned earlier, anomaly scores were derived using the feature matching technique. In addition, the process of optimizing latent variables was designed with an LSTM to improve model performance. The modified generative adversarial model was more precise than the autoencoder in all experiments and was approximately 7% higher in accuracy. In terms of robustness, the generative adversarial network also performed better than the autoencoder: because it can learn the data distribution from real categorical sequence data, it is unaffected by a single normal datum, whereas the autoencoder is not. The robustness test showed that the accuracy of the autoencoder was 92% and that of the generative adversarial network was 96%; in terms of sensitivity, the autoencoder scored 40% and the generative adversarial network 51%. Experiments were also conducted to show how much performance changes with differences in the latent variable optimization structure; as a result, sensitivity improved by about 1%. These results offer a new perspective on optimizing latent variables, which had previously been treated as relatively insignificant.
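The latent-variable optimization step, searching the generator's latent space for the best reconstruction of a query and scoring the residual, can be illustrated with a toy linear "generator" (the paper's generator is an LSTM and its score uses discriminator feature matching; this numpy sketch shows only the latent-search-and-score idea):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 3))   # toy linear "generator": G(z) = W @ z

def anomaly_score(x, steps=5000, lr=0.02):
    """Gradient-descend z so that G(z) ~= x; the residual norm is the score."""
    z = np.zeros(3)
    for _ in range(steps):
        z -= lr * (2 * W.T @ (W @ z - x))   # gradient of ||Wz - x||^2
    return float(np.linalg.norm(W @ z - x))

x_normal = W @ rng.normal(size=3)                 # lies on the generator manifold
x_anomalous = x_normal + 2 * rng.normal(size=8)   # off-manifold perturbation
```

Points the generator can reproduce get scores near zero; points off the learned manifold leave a large residual and are flagged as anomalies.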

Monthly temperature forecasting using large-scale climate teleconnections and multiple regression models (대규모 기후 원격상관성 및 다중회귀모형을 이용한 월 평균기온 예측)

  • Kim, Chul-Gyum; Lee, Jeongwoo; Lee, Jeong Eun; Kim, Nam Won; Kim, Hyeonjun
    • Journal of Korea Water Resources Association, v.54 no.9, pp.731-745, 2021
  • In this study, the monthly temperature of the Han River basin was predicted by statistical multiple regression models that use global climate indices and weather data of the target region as predictors. The optimal predictors were selected through teleconnection analysis between the monthly temperature and the preceding patterns of each climate index, and forecast models capable of predicting up to 12 months in advance were constructed by combining the selected predictors and cross-validating over the past period. For each target month, 1,000 optimized models were derived and forecast ranges were presented. Analyzing the predictability of monthly temperature from January 1992 to December 2020 gave a PBIAS of -1.4 to -0.7%, an RSR of 0.15 to 0.16, an NSE of 0.98, and an r of 0.99, indicating a high goodness of fit. The probability of each monthly observation falling within the forecast range was about 64.4% on average; by month, predictability was relatively high in September, December, February, and January, and low in April, August, and March. The predicted range and median agreed well with the observations, except for some periods when temperature was dramatically lower or higher than in normal years. The quantitative temperature forecast information derived from this study will be useful not only for forecasting temperature changes 1 to 12 months in advance, but also for predicting changes in the hydro-ecological environment, including evapotranspiration, which is highly correlated with temperature.
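The forecasting setup, regressing a target month's temperature on climate indices observed months earlier, can be sketched as follows; the index, the 3-month lag, and the coefficients are synthetic placeholders for the study's teleconnection-selected predictors:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 240                                    # 20 years of monthly values
climate_index = rng.normal(size=n)         # stand-in global climate index
local_weather = rng.normal(size=n)         # stand-in local predictor
# temperature responds to the climate index with a 3-month delay
temp = (0.6 * np.roll(climate_index, 3) + 0.3 * local_weather
        + rng.normal(scale=0.2, size=n))

lagged = np.roll(climate_index, 3)         # 3-month-lagged predictor
X, y = np.column_stack([lagged, local_weather])[3:], temp[3:]  # drop wrapped rows
model = LinearRegression().fit(X[:200], y[:200])
r2 = model.score(X[200:], y[200:])         # out-of-sample goodness of fit
```

The lag is what makes the model a forecast: the predictor is already observed 3 months before the month being predicted. Repeating the fit on resampled past periods, as the study does 1,000 times, yields a forecast range rather than a single value.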

A Study on the Impact of SNS Usage Characteristics, Characteristics of Loan Products, and Personal Characteristics on Credit Loan Repayment (SNS 사용특성, 대출특성, 개인특성이 신용대출 상환에 미치는 영향에 관한 연구)

  • Jeong, Wonhoon; Lee, Jaesoon
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship, v.18 no.5, pp.77-90, 2023
  • This study investigates the potential of alternative credit assessment through Social Networking Sites (SNS) as a complement to conventional loan review processes, seeking to discern the impact of SNS usage characteristics and loan product attributes on credit loan repayment. To this end, we conducted a binomial logistic regression analysis of the influence of SNS usage patterns, loan characteristics, and personal attributes on credit loan outcomes, using data from Company A's credit loan program, which integrates SNS data into its actual loan reviews. Our findings reveal several noteworthy insights. First, regarding profile photos, which reflect users' personalities and individual characteristics: individuals who upload photos directly connected to their personal lives show a higher propensity to repay their credit loans diligently. These include images of themselves, of their private circles (e.g., family and friends), and of social activities such as hobbies, which tend to be favored by extroverted individuals, as well as character- and humor-themed photos, typically favored by conscientious individuals. Conversely, photos such as landscapes or images concealing one's identity exhibited no statistically significant relationship with loan repayment. Furthermore, a positive correlation was observed between the extent of SNS usage and the likelihood of loan repayment. However, the level of SNS interaction had no significant effect on the probability of repayment, which may be attributed to the passive nature of the interaction variable: it primarily involves expressing sympathy for other users' comments rather than generating original content. The study also found loan duration and the number of loans, key characteristics of loan portfolios, to be statistically significant influences on credit loan repayment, underscoring their importance as determinants in the design of microcredit products. Among the personal characteristics examined, only gender emerged as a significant factor. This implies that the loan program analyzed here does not discriminate substantially by age or credit score, as its customer base consists predominantly of individuals in their twenties and thirties with low credit scores who have difficulty securing loans from traditional financial institutions. This research stands out from prior studies by empirically exploring the relationship between SNS usage and credit loan repayment while incorporating variables not typically addressed in credit rating research, such as profile pictures. It underscores the value of harnessing subjective, unstructured SNS information for loan screening, which can mitigate the financial disadvantages faced by borrowers with low credit scores or those caught in short-term liquidity constraints due to limited credit history, a group often referred to as "thin filers." By utilizing such information, these individuals can reduce their credit costs, whereas under the conventional credit assessment system they would first have to accrue a more substantial financial history through credit transactions.
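The binomial logistic regression the study ran can be sketched as below; the variable names, effect directions, and data are illustrative assumptions, not Company A's actual loan records:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
sns_usage = rng.normal(size=n)           # extent of SNS usage (standardized)
loan_count = rng.poisson(2.0, size=n)    # number of outstanding loans
# assumed signs: more SNS usage -> more likely to repay; more loans -> less
logit = 0.8 * sns_usage - 0.4 * loan_count + 0.5
repaid = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([sns_usage, loan_count])
model = LogisticRegression().fit(X, repaid)
sns_coef, loans_coef = model.coef_[0]    # fitted log-odds effects
```

Each fitted coefficient is a log-odds effect: exponentiating it gives the multiplicative change in repayment odds per unit increase in the predictor.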


Development of a Model for Analyzing and Evaluating the Suitability of Locations for Cooling Centers Considering Local Characteristics (지역 특성을 고려한 무더위쉼터의 입지특성 분석 및 평가 모델 개발)

  • Jieun Ryu; Chanjong Bu; Kyungil Lee; Kyeong Doo Cho
    • Journal of Environmental Impact Assessment, v.33 no.4, pp.143-154, 2024
  • Heat waves caused by climate change are rapidly increasing health damage among vulnerable groups, and to prevent this, national, regional, and local governments are establishing climate crisis adaptation policies. A representative adaptation policy for reducing heat wave damage is expanding the number of cooling centers. Because it is highly effective over a short period, most metropolitan local governments, except Jeonbuk, include this project in their adaptation policy. However, the criteria for selecting cooling centers differ depending on budgetary and non-budgetary considerations, so the utilization rate and effectiveness of cooling centers vary. Therefore, in this study, we developed logistic regression models that can predict and evaluate areas with a high probability of hosting additional cooling centers, to support adaptation policy in local governments. In Incheon Metropolitan City, which comprises various heat-wave-vulnerable environments owing to the coexistence of the old city and the new city, logistic models were developed to predict areas suitable for cooling centers, dividing the city into Ganghwa·Ongjin-gun and the other regions to account for socioeconomic and environmental differences. The results showed that, for the Ganghwa·Ongjin-gun region, the higher the land surface temperature and the larger the number of elderly people over 65, the higher the likelihood of a cooling center location; the prediction accuracy was about 80.93%. The developed logistic regression models can predict and evaluate areas with high potential as cooling centers by considering regional environmental and social characteristics, and they are expected to be used for priority selection and management when designating additional cooling centers in the future.

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul; Kim, Jaeseong; Choi, Sangok
    • Journal of Intelligence and Information Systems, v.26 no.2, pp.105-129, 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risk. The data comprise 10,545 rows and 160 columns: 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indexes. Unlike most prior studies, which used the default event itself as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This solves the data imbalance problem caused by the scarcity of default events, which had been pointed out as a limitation of the existing methodology, and reflects the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, the default risks of unlisted companies without stock price information can be appropriately derived. This enables stable default risk assessment services for unlisted companies, such as small and medium-sized companies and startups, whose default risk is difficult to determine with traditional credit rating models. Although the prediction of corporate default risk using machine learning has recently been actively studied, model bias issues exist because most studies make predictions with a single model. A stable and reliable valuation methodology is required for calculating default risk, given that an entity's default risk information is very widely used in the market and sensitivity to differences in default risk is high. Strict standards are also required for the calculation method. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for evaluation methods, including verification of their adequacy, that consider past statistical data and experience on credit ratings as well as changes in future market conditions. This study reduced individual models' bias by using stacking ensemble techniques that synthesize various machine learning models. This captures complex nonlinear relationships between default risk and corporate information and maximizes the advantages of machine-learning-based default risk prediction models, which take less time to calculate. To produce the forecasts used as input data for the stacking ensemble model, the training data were divided into seven pieces and the sub-models were trained on the divided sets. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which performed best among the single models. Next, to check for statistically significant differences between the stacking ensemble model and each individual model, pairs of forecasts were constructed. Because the Shapiro-Wilk normality test showed that none of the pairs followed normality, the nonparametric Wilcoxon rank-sum test was used to check whether the two forecasts in each pair differed significantly. The analysis showed that the forecasts of the stacking ensemble model differed significantly from those of the MLP and CNN models. In addition, this study provides a methodology that allows existing credit rating agencies to apply machine-learning-based default risk prediction, given that traditional credit rating models can also serve as sub-models in calculating the final default probability. The stacking ensemble techniques proposed here can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope this research will be used to increase practical adoption by overcoming the limitations of existing machine-learning-based models.
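The stacking idea, out-of-fold forecasts from sub-models becoming input features for a meta-model, can be sketched with scikit-learn; the synthetic data below stands in for the study's 10,545-row firm-level data, and the sub-model choices mirror, not reproduce, the paper's:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the corporate default-risk data
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("mlp", MLPClassifier(max_iter=1000, random_state=0))],
    final_estimator=LogisticRegression(),
    cv=7,   # the study split the training data into seven folds
)
stack.fit(X_tr, y_tr)           # sub-models produce out-of-fold forecasts
accuracy = stack.score(X_te, y_te)
```

The `cv=7` split matters: the meta-model is trained on forecasts the sub-models made for data they never saw, which is what reduces the single-model bias the abstract describes.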

Robo-Advisor Algorithm with Intelligent View Model (지능형 전망모형을 결합한 로보어드바이저 알고리즘)

  • Kim, Sunwoong
    • Journal of Intelligence and Information Systems, v.25 no.2, pp.39-55, 2019
  • Recently, banks and large financial institutions have introduced many Robo-Advisor products. A Robo-Advisor is a robot that produces the optimal asset allocation portfolio for investors using financial engineering algorithms, without any human intervention. Since its first introduction on Wall Street in 2008, the market has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Since Robo-Advisor algorithms suggest asset allocations to investors, mathematical or statistical asset allocation strategies are applied. The mean-variance optimization model developed by Markowitz is the typical asset allocation model. It is a simple but quite intuitive portfolio strategy: assets are allocated so as to minimize portfolio risk while maximizing expected portfolio return using optimization techniques. Despite its theoretical background, both academics and practitioners find that the standard mean-variance optimization portfolio is very sensitive to the expected returns calculated from past price data, and corner solutions allocated to only a few assets are often found. The Black-Litterman optimization model overcomes these problems by starting from a neutral Capital Asset Pricing Model equilibrium point: implied equilibrium returns of each asset are derived from the equilibrium market portfolio through reverse optimization. The Black-Litterman model uses a Bayesian approach to combine subjective views on the price forecasts of one or more assets with the implied equilibrium returns, resulting in new estimates of risk and expected returns. These new estimates can produce an optimal portfolio through the well-known Markowitz mean-variance optimization algorithm. If the investor has no views on the asset classes, the Black-Litterman model produces the same portfolio as the market portfolio. But what if the subjective views are incorrect? Surveys of the performance of stocks recommended by securities analysts show very poor results; incorrect views combined with implied equilibrium returns may therefore produce very poor portfolios for Black-Litterman users. This paper suggests an objective investor views model based on Support Vector Machines (SVM), which have shown good performance in stock price forecasting. An SVM is a discriminative classifier defined by a separating hyperplane; linear, radial basis, and polynomial kernel functions are used to learn the hyperplanes. Input variables for the SVM are returns, standard deviations, Stochastics %K, and the price parity degree for each asset class. The SVM outputs expected stock price movements and their probabilities, which are used as input variables in the intelligent views model. The stock price movements are categorized into three phases: down, neutral, and up. The expected stock returns form the P matrix, and their probability results are used in the Q matrix. The implied equilibrium returns vector is combined with the intelligent views matrix, yielding the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and the risk parity model are used, and the value-weighted and equal-weighted market portfolios serve as benchmark indexes. We collected 8 KOSPI 200 sector indexes from January 2008 to December 2018, comprising 132 monthly index values. The training period is 2008 to 2015, and the testing period is 2016 to 2018. Our suggested intelligent views model combined with implied equilibrium returns produced the optimal Black-Litterman portfolio. In the out-of-sample period, this portfolio outperformed the well-known Markowitz mean-variance optimization portfolio, the risk parity portfolio, and the market portfolios. The total return of the 3-year Black-Litterman portfolio is 6.4%, the highest value. Its maximum drawdown is -20.8%, the lowest value, and its Sharpe ratio, which measures the return-to-risk ratio, is the highest at 0.17. Overall, our suggested views model shows the possibility of replacing subjective analysts' views with an objective views model, enabling practitioners to apply Robo-Advisor asset allocation algorithms in real trading.
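The Bayesian combination step of the Black-Litterman model can be written out directly; the covariance, weights, and view below are toy numbers, and the delta, tau, and Omega values follow common textbook conventions rather than this paper's calibration:

```python
import numpy as np

Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])     # asset covariance (toy)
w_mkt = np.array([0.5, 0.3, 0.2])          # market-cap weights
delta, tau = 2.5, 0.05                     # risk aversion, prior scaling

pi = delta * Sigma @ w_mkt                 # implied equilibrium returns

P = np.array([[1.0, -1.0, 0.0]])           # view: asset 1 beats asset 2 ...
Q = np.array([0.02])                       # ... by 2%
Omega = np.array([[0.001]])                # view uncertainty

A = np.linalg.inv(tau * Sigma)             # precision of the equilibrium prior
B = P.T @ np.linalg.inv(Omega) @ P         # precision contributed by the view
mu_bl = np.linalg.solve(A + B, A @ pi + P.T @ np.linalg.inv(Omega) @ Q)
```

`mu_bl` then feeds the usual Markowitz mean-variance optimizer; with no views, the posterior reduces to `pi` and the market portfolio is recovered, as the abstract notes.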

Importance and Satisfaction Analysis for Vitalization of River Estuary - Focused on the Nakdong Estuary - (강 하구역 활성화를 위한 자원의 중요도·만족도 분석 - 낙동강 하구역의 사례를 중심으로 -)

  • An, Byung-Chul; Kwon, Jin-Wook
    • Journal of the Korean Institute of Landscape Architecture, v.46 no.6, pp.49-59, 2018
  • The purpose of this study was to analyze the importance of and satisfaction with resources at the mouth of the Nakdong River. A Pearson's chi-square test was performed in SPSS 24.0 for statistical analysis, and the results can be summarized in three points. First, the importance analysis of resources in the Nakdong estuary found that ecological resources were rated the most important at 27.1%, followed by landscape resources (18.5%), waterside leisure resources (6.5%), complex cultural resources (5.4%), and historic and cultural resources (3.3%). The probability values (p-values) of each group showed significant differences depending on gender, age, and the location of the survey. For instance, women respondents reported preferences for ecological resources and for complex cultural resources such as museums that were about two and three times higher, respectively, than those of men, while men reported a preference for waterside leisure resources three times as high as women's. By age, respondents in their 20s and 30s recorded higher values than other age groups, and people in their 30s reported a preference for waterside leisure resources three times higher than other age groups. Lastly, no significant differences were found in the preference analysis by occupation (p>.05). With regard to satisfaction, the average level of satisfaction with landscape resources was 6.01, and those with ecological resources and complex cultural resources were 5.65 and 5.15, respectively. Significant differences were also found between landscape and ecological resources in the satisfaction analysis by age, for landscape resources by age, for ecological resources by region, and between landscape and ecological resources by occupation. The p-value for complex cultural resources was p=0.012, although the satisfaction levels for landscape and ecological resources showed no significant differences by age. Respondents in their 40s and 50s showed high satisfaction with landscape resources, whereas those in their 20s showed relatively low satisfaction in the same category. Respondents living in Busan and South Gyeongsang Province and those living outside these regions revealed no significant differences in satisfaction with landscape resources and complex cultural resources, but the two groups differed significantly in satisfaction with ecological resources. In the satisfaction analysis of landscape and ecological resources by occupation, significant differences were found among college students, government employees, ordinary citizens, and expert groups, but no significant differences appeared for complex cultural resources. Third, the importance-satisfaction analysis of the Nakdong estuary found that the average satisfaction with landscape resources was 6.19, 6.08, and 5.67 for the respondent groups that rated landscape, ecological, and cultural resources as most important, respectively; their satisfaction with ecological resources was 5.95, 5.57, and 5.41. The correlation with importance was insignificant, but the correlation with satisfaction with complex cultural resources showed a significant difference (p=0.025). In addition, the analysis of 15 detailed items aimed at improving values and vitalizing resources at the mouth of the Nakdong River found that respondents considered the vitalization of eco-tourism (49.5%) and the restoration of the reed marsh (47.5%) important. The detailed analysis revealed respondents' high awareness of the need to enhance the value of ecological resources. Improving infrastructure near the river mouth and creating cycling routes, walkways, waterside leisure facilities, and other amenities were also considered requirements for the vitalization of the Nakdong estuary.
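The Pearson chi-square test the authors ran in SPSS corresponds to a single scipy call; the contingency counts below are invented for illustration only:

```python
import numpy as np
from scipy.stats import chi2_contingency

# rows: men, women; columns: ecological, landscape, waterside-leisure (made up)
table = np.array([[30, 40, 45],
                  [60, 42, 15]])
chi2, p_value, dof, expected = chi2_contingency(table)
```

A `p_value` below 0.05, as reported for gender, age, and survey location in the study, indicates that preference proportions differ significantly across the groups.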