• Title/Summary/Keyword: Impact-Based Forecast


Actuarial analysis of a reverse mortgage applying a modified Lee-Carter model based on the projection of the skewness of the mortality (왜도 예측을 이용한 Lee-Carter 모형의 주택연금 리스크 분석)

  • Lee, Hangsuck;Park, Sangdae;Baek, Hyeyoun
    • The Korean Journal of Applied Statistics / v.31 no.1 / pp.77-96 / 2018
  • A reverse mortgage provides a pension until the death of the insured or of the last survivor, so long-term risk management is important for estimating the contractual period of a reverse mortgage. It is also necessary to study mortality prediction methods that appropriately reflect the improvement trend in mortality rates, since the extension of life expectancy, the main cause of population aging, can seriously affect the financial soundness of the pension. In this study, the Lee-Carter (LC) model is used to reflect the improvement in mortality rates, and a multiple-life model is also applied to the reverse mortgage. Because mortality projection with the traditional LC model shows an overly dramatic improvement in mortality rates, this study proposes a mortality projection based on the projected skewness of mortality, applied so as to reflect the improvement trend more appropriately. The paper calculates monthly payments using future mortality rates based on this skewness projection. As a result, mortality rates based on this method reflect the mortality improvement effect less strongly than those based on the traditional LC model, and a larger pension amount is calculated. In conclusion, this method is useful for forecasting future mortality trends and results in a significant reduction of longevity risk. It can also serve as a risk management tool to set appropriate monthly payments and to prevent shortfalls caused by overpayment on the part of the issuing and guarantee institutions of the reverse mortgage.
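The classical Lee-Carter structure that the paper modifies can be sketched in a few lines. This is a minimal illustration only: the mortality matrix below is synthetic, and the paper's skewness-projection step is not reproduced here.

```python
import numpy as np

# Sketch of the classical Lee-Carter fit: log m(x,t) = a_x + b_x * k_t,
# estimated by SVD of the centered log-mortality matrix. Synthetic data.
rng = np.random.default_rng(0)
ages, years = 20, 30
true_a = np.linspace(-7.0, -2.0, ages)            # log mortality rises with age
true_b = np.full(ages, 1.0 / ages)
true_k = np.linspace(5.0, -5.0, years)            # declining k_t = improvement
log_m = true_a[:, None] + np.outer(true_b, true_k) \
        + rng.normal(0, 0.01, (ages, years))

a = log_m.mean(axis=1)                             # a_x: age-specific average
U, s, Vt = np.linalg.svd(log_m - a[:, None])       # rank-1 fit of the residual
b = U[:, 0] / U[:, 0].sum()                        # normalise so sum(b_x) = 1
k = s[0] * Vt[0] * U[:, 0].sum()                   # rescale k_t accordingly

# A random walk with drift is the usual forecast for k_t.
drift = (k[-1] - k[0]) / (years - 1)
k_forecast = k[-1] + drift * np.arange(1, 11)
```

The paper then replaces the standard extrapolation of `k_t` with a projection of the skewness of the mortality distribution; that modification is its own contribution.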

Assessing the Impact of Climate Change on Water Resources: Waimea Plains, New Zealand Case Example

  • Zemansky, Gil;Hong, Yoon-Seeok Timothy;Rose, Jennifer;Song, Sung-Ho;Thomas, Joseph
    • Proceedings of the Korea Water Resources Association Conference / 2011.05a / pp.18-18 / 2011
  • Climate change is impacting and will increasingly impact both the quantity and quality of the world's water resources in a variety of ways. In some areas warming climate results in increased rainfall, surface runoff, and groundwater recharge while in others there may be declines in all of these. Water quality is described by a number of variables. Some are directly impacted by climate change. Temperature is an obvious example. Notably, increased atmospheric concentrations of $CO_2$ triggering climate change increase the $CO_2$ dissolving into water. This has manifold consequences including decreased pH and increased alkalinity, with resultant increases in dissolved concentrations of the minerals in geologic materials contacted by such water. Climate change is also expected to increase the number and intensity of extreme climate events, with related hydrologic changes. A simple framework has been developed in New Zealand for assessing and predicting climate change impacts on water resources. Assessment is largely based on trend analysis of historic data using the non-parametric Mann-Kendall method. Trend analysis requires long-term, regular monitoring data for both climate and hydrologic variables. Data quality is of primary importance and data gaps must be avoided. Quantitative prediction of climate change impacts on the quantity of water resources can be accomplished by computer modelling. This requires the serial coupling of various models. For example, regional downscaling of results from a world-wide general circulation model (GCM) can be used to forecast temperatures and precipitation for various emissions scenarios in specific catchments. Mechanistic or artificial intelligence modelling can then be used with these inputs to simulate climate change impacts over time, such as changes in streamflow, groundwater-surface water interactions, and changes in groundwater levels. 
The Waimea Plains catchment in New Zealand was selected for a test application of these assessment and prediction methods. This catchment is predicted to undergo relatively minor impacts due to climate change. All available climate and hydrologic databases were obtained and analyzed. These included climate (temperature, precipitation, solar radiation and sunshine hours, evapotranspiration, humidity, and cloud cover) and hydrologic (streamflow and quality and groundwater levels and quality) records. Results varied but there were indications of atmospheric temperature increasing, rainfall decreasing, streamflow decreasing, and groundwater level decreasing trends. Artificial intelligence modelling was applied to predict water usage, rainfall recharge of groundwater, and upstream flow for two regionally downscaled climate change scenarios (A1B and A2). The AI methods used were multi-layer perceptron (MLP) with extended Kalman filtering (EKF), genetic programming (GP), and a dynamic neuro-fuzzy local modelling system (DNFLMS), respectively. These were then used as inputs to a mechanistic groundwater flow-surface water interaction model (MODFLOW). A DNFLMS was also used to simulate downstream flow and groundwater levels for comparison with MODFLOW outputs. MODFLOW and DNFLMS outputs were consistent. They indicated declines in streamflow on the order of 21 to 23% for MODFLOW and DNFLMS (A1B scenario), respectively, and 27% in both cases for the A2 scenario under severe drought conditions by 2058-2059, with little if any change in groundwater levels.
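The non-parametric Mann-Kendall method named above can be sketched as follows; this is a minimal version without tie correction, applied to a synthetic series for illustration.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Minimal Mann-Kendall trend test (no tie correction).
    Returns the S statistic, normal score Z, and two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = 0.0
    for i in range(n - 1):                 # S = concordant minus discordant pairs
        s += np.sign(x[i + 1:] - x[i]).sum()
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)       # continuity correction
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, z, p

# Synthetic monitoring record with a clear upward trend plus noise.
rng = np.random.default_rng(0)
trend_series = 0.1 * np.arange(40) + rng.normal(0, 0.5, 40)
s, z, p = mann_kendall(trend_series)
```

In practice the test is applied to long-term climate and hydrologic records; as the abstract notes, regular data without gaps are essential for the result to be meaningful.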


Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung;Yang, Jin Yong
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.35-48 / 2014
  • According to the 2013 construction market outlook report, liquidations of construction companies are expected to continue due to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact than those in other industries, yet they are also harder to forecast because of the industry's distinctive capital structure and debt-to-equity ratio. The construction industry operates on greater leverage, with high debt-to-equity ratios and project cash flows concentrated in the second half of a project. The economic cycle strongly influences construction companies, so downturns tend to rapidly increase their bankruptcy rates. High leverage, coupled with increased bankruptcy rates, can place heavier burdens on the banks that lend to construction companies. Nevertheless, bankruptcy prediction research has concentrated mainly on financial institutions, and construction-specific studies remain rare. Bankruptcy prediction models based on corporate financial data have been studied for many years in various ways, but these models target companies in general and may not be appropriate for forecasting bankruptcies of construction companies, which typically carry high liquidity risks. The construction industry is capital-intensive, operates on long timelines with large-scale investment projects, and has comparatively longer payback periods than other industries; with such a distinctive capital structure, criteria used to judge the financial risk of companies in general are difficult to apply to construction firms.
The Altman Z-score, first published in 1968, is a commonly used bankruptcy forecasting model. It forecasts the likelihood of bankruptcy with a simple formula, classifying the results into three categories that evaluate corporate status as dangerous, moderate, or safe. A company in the "dangerous" category has a high likelihood of bankruptcy within two years, while those in the "safe" category have a low likelihood; for companies in the "moderate" category, the risk is difficult to forecast. Many of the construction-firm cases in this study fell into the "moderate" category, which made their risk difficult to forecast. With the development of machine learning, recent studies of corporate bankruptcy forecasting have adopted this technology. Pattern recognition, a representative application area of machine learning, is applied to bankruptcy forecasting: patterns are extracted from a company's financial information and then classified into the bankruptcy-risk group or the safe group. The representative machine learning models previously used in bankruptcy forecasting are Artificial Neural Networks, Adaptive Boosting (AdaBoost), and the Support Vector Machine (SVM), and there are many hybrid studies combining these models.
Existing studies using the traditional Z-score technique or machine-learning-based bankruptcy prediction focus on companies in non-specific industries, so industry-specific characteristics are not considered. In this paper, we confirm that adaptive boosting (AdaBoost) is the most appropriate forecasting model for construction companies by analyzing its predictive ability by company size. We classified construction companies into three groups - large, medium, and small - based on capital, and analyzed the predictive ability of AdaBoost for each group. The experimental results showed that AdaBoost has greater predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
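The AdaBoost classification step can be sketched with scikit-learn. The paper's Korean construction-firm financials are not available, so synthetic financial ratios (a hypothetical debt-to-equity ratio and current ratio) stand in for the real features below.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: highly leveraged firms with a noisy bankruptcy rule.
rng = np.random.default_rng(42)
n = 600
debt_to_equity = rng.normal(2.0, 1.0, n)     # construction firms: high leverage
current_ratio = rng.normal(1.2, 0.4, n)      # liquidity proxy
bankrupt = (debt_to_equity - current_ratio
            + rng.normal(0, 0.5, n) > 1.0).astype(int)
X = np.column_stack([debt_to_equity, current_ratio])

X_tr, X_te, y_tr, y_te = train_test_split(X, bankrupt, random_state=0)
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)                  # held-out accuracy
```

In the paper this comparison is run separately per capital-size group; here a single synthetic sample illustrates the fit-and-score loop.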

An Intelligent Decision Support System for Selecting Promising Technologies for R&D based on Time-series Patent Analysis (R&D 기술 선정을 위한 시계열 특허 분석 기반 지능형 의사결정지원시스템)

  • Lee, Choongseok;Lee, Suk Joo;Choi, Byounggu
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.79-96 / 2012
  • As the pace of competition dramatically accelerates and the complexity of change grows, a variety of research has been conducted to improve firms' short-term performance and to enhance their long-term survival. In particular, researchers and practitioners have paid attention to identifying promising technologies that bring a firm competitive advantage. Discovery of promising technologies depends on how a firm evaluates their value, so many evaluation methods have been proposed. Approaches based on experts' opinions have been widely accepted for predicting the value of technologies. While this approach provides in-depth analysis and ensures the validity of results, it is usually cost- and time-inefficient and is limited to qualitative evaluation. Considerable studies attempt to forecast the value of technology by using patent information to overcome the limitations of the expert-opinion-based approach. Patent-based technology evaluation has served as a valuable assessment approach in technological forecasting because a patent contains a full and practical description of a technology in a uniform structure. Furthermore, it provides information that is not divulged in any other source. Although the patent-information-based approach has contributed to our understanding of the prediction of promising technologies, it has some limitations: predictions are made from past patent information, and the interpretations of patent analyses are not consistent. To fill this gap, this study proposes a technology forecasting methodology that integrates the patent information approach with an artificial intelligence method. The methodology consists of three modules: evaluation of how promising technologies are, implementation of a technology value prediction model, and recommendation of promising technologies. In the first module, the promise of technologies is evaluated from three different and complementary dimensions: impact, fusion, and diffusion.
The impact of a technology refers to its influence on the development and improvement of future technologies, and is also clearly associated with its monetary value. The fusion of a technology denotes the extent to which it fuses different technologies, and represents the breadth of search underlying it. Fusion can be calculated per technology or per patent, so this study measures two fusion indexes: a fusion index per technology and a fusion index per patent. Finally, the diffusion of a technology denotes its degree of applicability across scientific and technological fields; in the same vein, a diffusion index per technology and a diffusion index per patent are considered. In the second module, the technology value prediction model is implemented using an artificial intelligence method. This study uses the values of the five indexes (i.e., impact index, fusion index per technology, fusion index per patent, diffusion index per technology, and diffusion index per patent) at different times (e.g., t-n, t-n-1, t-n-2, ${\cdots}$) as input variables. The output variables are the values of the five indexes at time t, which are used for learning. The learning method adopted in this study is the backpropagation algorithm. In the third module, final promising technologies are recommended based on the analytic hierarchy process (AHP). AHP provides the relative importance of each index, leading to a final promising-technology index. Applicability of the proposed methodology is tested using U.S. patents in international patent class G06F (i.e., electronic digital data processing) from 2000 to 2008. The results show that the mean absolute error of predictions produced by the proposed methodology is lower than that of multiple regression analysis in the case of the fusion indexes; for the remaining indexes, however, it is slightly higher.
These unexpected results may be explained, in part, by the small number of patents: since this study only uses patent data in class G06F, the number of sample patents is relatively small, leading to learning that is too incomplete to satisfy the complex artificial intelligence structure. In addition, the fusion index per technology and the impact index are found to be important criteria for predicting promising technologies. This study attempts to extend existing knowledge by proposing a new methodology for predicting technology value that integrates patent information analysis with an artificial intelligence network. It helps managers engaged in technology development planning, and policy makers who want to implement technology policy, by providing a quantitative prediction methodology. In addition, this study could help other researchers by providing a deeper understanding of the complex field of technological forecasting.
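The AHP step in the third module, deriving the relative importance of the five indexes from the principal eigenvector of a pairwise comparison matrix, can be sketched as follows. The comparison values below are invented for illustration; the paper's actual judgments are not reported here.

```python
import numpy as np

# Hypothetical Saaty-scale pairwise comparisons over the five indexes:
# [impact, fusion/tech, fusion/patent, diffusion/tech, diffusion/patent].
A = np.array([
    [1,     3,     3,     5,   5],
    [1/3,   1,     1,     3,   3],
    [1/3,   1,     1,     3,   3],
    [1/5,   1/3,   1/3,   1,   1],
    [1/5,   1/3,   1/3,   1,   1],
], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
i = np.argmax(eigvals.real)               # principal eigenvalue (lambda_max)
w = np.abs(eigvecs[:, i].real)
w /= w.sum()                              # normalised priority weights

# Consistency ratio CR = CI / RI, with RI = 1.12 for n = 5 (Saaty's table);
# CR < 0.1 is the usual acceptability threshold.
n = A.shape[0]
ci = (eigvals.real[i] - n) / (n - 1)
cr = ci / 1.12
```

The resulting weights `w` combine the five index predictions into the final promising-technology score.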

Predicting stock movements based on financial news with systematic group identification (시스템적인 군집 확인과 뉴스를 이용한 주가 예측)

  • Seong, NohYoon;Nam, Kihwan
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.1-17 / 2019
  • Because stock price forecasting is an important issue both academically and practically, research on stock price prediction has been actively conducted. Stock price forecasting research can be classified into work using structured data and work using unstructured data. With structured data such as historical stock prices and financial statements, past studies usually used technical and fundamental analysis. In the big data era, the amount of information has increased rapidly, and artificial intelligence methodologies that can find meaning by quantifying text, an unstructured data type that accounts for a large share of that information, have developed rapidly. With these developments, many attempts are being made to predict stock prices from online news by applying text mining. The methodology adopted in many papers is to forecast a target company's stock price using news about that company. However, according to previous research, not only news about a target company affects its stock price; news about related companies can also affect it. Finding highly relevant companies is not easy, though, because of market-wide effects and random signals. Thus, existing studies have identified relevant companies primarily from pre-determined international industry classification standards. However, recent research shows that homogeneity within the sectors of the global industry classification standard varies, so forecasting stock prices from entire sectors without isolating only the relevant companies can hurt predictive performance. To overcome this limitation, we are the first to use random matrix theory together with text mining for stock prediction. When the dimension of the data is large, the classical limit theorems are no longer suitable, because statistical efficiency is reduced.
Therefore, a simple correlation analysis in the financial market does not reveal the true correlation. To solve this issue, we adopt random matrix theory, which is mainly used in econophysics, to remove market-wide effects and random signals and find the true correlations between companies. With the true correlations, we perform cluster analysis to find relevant companies. Based on the clustering, we then use a multiple kernel learning algorithm, an ensemble of support vector machines, to incorporate the effects of the target firm and its relevant firms simultaneously. Each kernel was assigned to predict stock prices with features from the financial news of the target firm and of its relevant firms. The results of this study are as follows. (1) Following the existing research flow, we confirmed that using news from relevant companies is an effective way to forecast stock prices. (2) When looking for relevant companies, looking in the wrong way can lower AI prediction performance. (3) The proposed approach with random matrix theory shows better performance than previous studies when cluster analysis is performed on the true correlations obtained by removing market-wide effects and random signals. The contributions of this study are as follows. First, it shows that random matrix theory, used mainly in econophysics, can be combined with artificial intelligence to produce good methodologies. This suggests that it is important not only to develop AI algorithms but also to adopt theory from physics, extending existing research that integrated artificial intelligence with complex system theory through transfer entropy. Second, this study stresses that finding the right companies in the stock market is an important issue, suggesting that it is important not only to study artificial intelligence algorithms but also to adjust the input values theoretically.
Third, we confirmed that firms grouped together under the Global Industry Classification Standard (GICS) can have low relevance to one another, and suggested that it is necessary to define relevance theoretically rather than simply taking it from the GICS.
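The random-matrix filtering step can be sketched as follows: eigenvalues of the empirical correlation matrix that fall inside the Marchenko-Pastur bulk are treated as noise, the largest eigenvalue is treated as the market-wide mode, and the remaining large eigenvalues carry the inter-firm structure used for clustering. The returns below are synthetic, with one market factor and two latent "industry" groups.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 500, 50                                    # T observations, N firms
market = rng.normal(0, 1, T)
group = np.repeat([0, 1], N // 2)                 # two latent firm groups
factors = rng.normal(0, 1, (T, 2))
returns = (0.4 * market[:, None] + 0.4 * factors[:, group]
           + rng.normal(0, 1, (T, N)))
returns = (returns - returns.mean(0)) / returns.std(0)

C = returns.T @ returns / T                       # empirical correlation matrix
q = N / T
lam_plus = (1 + np.sqrt(q)) ** 2                  # upper Marchenko-Pastur edge

eigvals, eigvecs = np.linalg.eigh(C)              # ascending eigenvalues
keep = eigvals > lam_plus                         # modes above the noise bulk
signal = keep.copy()
signal[np.argmax(eigvals)] = False                # drop the market-wide mode
C_true = (eigvecs[:, signal] * eigvals[signal]) @ eigvecs[:, signal].T
```

Cluster analysis (e.g. on the signs or loadings of the retained eigenvectors) then recovers the related-firm groups from `C_true`.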

Attention to the Internet: The Impact of Active Information Search on Investment Decisions (인터넷 주의효과: 능동적 정보 검색이 투자 결정에 미치는 영향에 관한 연구)

  • Chang, Young Bong;Kwon, YoungOk;Cho, Wooje
    • Journal of Intelligence and Information Systems / v.21 no.3 / pp.117-129 / 2015
  • As the Internet becomes ubiquitous, a large volume of information is posted on it, growing exponentially every day. Accordingly, it is not unusual for investors in stock markets to gather and compile firm-specific or market-wide information through online searches. Importantly, powerful Internet search tools make it easier for investors to acquire value-relevant information for their investment decisions. Our study examines whether the Internet helps investors assess a firm's value better, using firm-level data over a long period spanning January 2004 to December 2013. To this end, we construct weekly search volumes for information technology (IT) services firms on the Internet. We limit our focus to IT firms since they often hold intangible assets and are relatively less recognized by the public, which makes them hard to measure. To obtain information on those firms, investors are more likely to consult the Internet, use the information to appraise the firms more accurately, and eventually improve their investment decisions. Prior studies have shown that changes in search volumes can reflect various aspects of complex human behavior and forecast near-term values of economic indicators, including automobile sales and unemployment claims. Moreover, the search volume of firm names or stock ticker symbols has been used as a direct proxy for individual investors' attention in financial markets since, unlike indirect measures such as turnover and extreme returns, it can reveal and quantify investor interest in an objective way. Following this line of research, this study aims to gauge whether the information retrieved from the Internet is value-relevant in assessing a firm. We also use search volume for analysis but, in contrast to prior studies, explore its impact on return comovements with market returns.
Given that a firm's returns tend to comove excessively with market returns when investors are less informed about the firm, we empirically test the value of information by examining the association between Internet searches and the extent to which a firm's returns comove. Our results show that Internet searches are negatively associated with return comovements, as expected. When the sample is split by firm size, the impact of Internet searches on return comovements is greater for large firms than for small ones. Interestingly, we find a greater impact of Internet searches on return comovements for the years 2009 to 2013 than for earlier years, possibly due to more aggressive and informed use of Internet searches in obtaining financial information. We complement our analyses by examining the association between return volatility and Internet search volumes. If Internet searches capture investors' attention associated with a change in firm-specific fundamentals such as new product releases or stock splits, a firm's return volatility is likely to increase while search results provide value-relevant information to investors. Our results suggest that, in general, an increase in the volume of Internet searches is not positively associated with return volatility. However, we find a positive association between Internet searches and return volatility when the sample is limited to larger firms. The stronger result for larger firms implies that investors still pay less attention to information obtained from Internet searches for small firms, even though the information is value-relevant in assessing stock values. We do not, however, find any systematic differences across time periods in the magnitude of the impact of Internet searches on return volatility. Taken together, our results shed new light on the value of information searched from the Internet in assessing stock values.
Given the informational role of the Internet in stock markets, we believe these results will guide investors to exploit Internet search tools to become better informed, thereby improving their investment decisions.
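Return comovement is commonly measured by the R-squared of a market-model regression; the intuition tested above is that more firm-specific information means more idiosyncratic variation and hence lower R-squared. A minimal sketch with synthetic returns (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(7)
T = 250                                     # roughly one year of daily returns
market = rng.normal(0, 0.01, T)

def comovement_r2(beta, idio_sigma):
    """R-squared of regressing a firm's returns on market returns."""
    firm = beta * market + rng.normal(0, idio_sigma, T)
    X = np.column_stack([np.ones(T), market])
    coef, *_ = np.linalg.lstsq(X, firm, rcond=None)
    resid = firm - X @ coef
    return 1 - resid.var() / firm.var()

# Hypothesised pattern: a firm with little firm-specific information flowing
# to investors (low idiosyncratic volatility) comoves more with the market.
r2_low_info = comovement_r2(beta=1.0, idio_sigma=0.005)
r2_high_info = comovement_r2(beta=1.0, idio_sigma=0.02)
```

The study's test then regresses such comovement measures on Internet search volumes across firms and periods.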

A LSTM Based Method for Photovoltaic Power Prediction in Peak Times Without Future Meteorological Information (미래 기상정보를 사용하지 않는 LSTM 기반의 피크시간 태양광 발전량 예측 기법)

  • Lee, Donghun;Kim, Kwanho
    • The Journal of Society for e-Business Studies / v.24 no.4 / pp.119-133 / 2019
  • Recently, prediction of photovoltaic (PV) power has come to be considered an essential function for scheduling adjustments, deciding on storage size, and overall planning for the stable operation of PV facility systems. In particular, since most PV power is generated at peak time, PV power prediction at peak time is required for PV system operators to maximize revenue and sustain the quantity of electricity supplied. Predicting peak-time PV output without future meteorological information such as solar radiation, cloudiness, and temperature is a challenging problem, because previous studies predicted PV power using forecast, and therefore uncertain, meteorological information over wide areas. This paper therefore proposes an LSTM (Long Short-Term Memory) based PV power prediction model that uses only the meteorological and seasonal information, and the PV power, obtained before peak time. Experimental results based on the proposed model and real-world data show superior performance, indicating a positive impact on improving the peak-time PV power forecast performance targeted in this study.
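The LSTM cell at the core of such a model can be sketched in plain NumPy. This is a forward pass with random, untrained weights, purely to illustrate the gating mechanism; it is not the paper's trained architecture, and the three input features are hypothetical stand-ins.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step. W: (4H, D) input weights, U: (4H, H) recurrent
    weights, b: (4H,) bias; gates ordered [input, forget, cell, output]."""
    H = h_prev.shape[0]
    z = W @ x_t + U @ h_prev + b
    i = sigmoid(z[:H])              # input gate
    f = sigmoid(z[H:2 * H])         # forget gate
    g = np.tanh(z[2 * H:3 * H])     # candidate cell state
    o = sigmoid(z[3 * H:])          # output gate
    c = f * c_prev + i * g
    h = o * np.tanh(c)
    return h, c

# Toy run: D=3 features (e.g. past PV output, season, hour), H=8 units,
# unrolled over 24 pre-peak time steps.
rng = np.random.default_rng(0)
D, H, T = 3, 8, 24
W = rng.normal(0, 0.1, (4 * H, D))
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
h = np.zeros(H)
c = np.zeros(H)
for t in range(T):
    h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
pv_pred = h @ rng.normal(0, 0.1, H)  # linear readout to a scalar forecast
```

In a real model the weights are learned by backpropagation through time; deep learning frameworks supply this cell as a built-in layer.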

Impacts of Seasonal and Interannual Variabilities of Sea Surface Temperature on its Short-term Deep-learning Prediction Model Around the Southern Coast of Korea (한국 남부 해역 SST의 계절 및 경년 변동이 단기 딥러닝 모델의 SST 예측에 미치는 영향)

  • JU, HO-JEONG;CHAE, JEONG-YEOB;LEE, EUN-JOO;KIM, YOUNG-TAEG;PARK, JAE-HUN
    • The Sea: Journal of the Korean Society of Oceanography / v.27 no.2 / pp.49-70 / 2022
  • Sea surface temperature (SST), one of the key ocean variables, has a significant impact on climate, marine ecosystems, and human activities, so SST prediction has always been an important issue. Recently, deep learning has drawn much attention, since it can predict SST by training on past SST patterns. Compared to numerical simulations, a deep learning model is highly efficient, since it can estimate the nonlinear relationships in the input data. With the recent development of the Graphics Processing Unit (GPU), large amounts of data can be processed repeatedly and rapidly. In this study, short-term SST is predicted with a Convolutional Neural Network (CNN)-based U-Net that can handle spatiotemporal data concurrently and overcome the drawbacks of previously existing deep-learning-based models. The SST prediction performance depends on the seasonal and interannual SST variabilities around the southern coast of Korea: the predicted SST has a wide range of variance during spring and summer, and a small range during fall and winter. The wide range of variance also has a significant correlation with changes in the Pacific Decadal Oscillation (PDO) index. These results are found to be affected by the intensity of the seasonal and PDO-related interannual SST fronts and their variations along the southern Korean seas. This study implies that the SST prediction performance of the developed deep learning model can vary significantly with the seasonal and interannual variabilities in SST.
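The season-dependent variance analysis behind these findings can be sketched as follows. The series is entirely synthetic (a mean seasonal cycle plus noise whose amplitude is, by construction, larger in spring and summer); the real study works with gridded SST fields, not a single-point series.

```python
import numpy as np

rng = np.random.default_rng(5)
days = np.arange(365 * 10)                        # ten years, daily
month = (days // 30) % 12 + 1                     # crude month index
seasonal_cycle = 15 + 8 * np.sin(2 * np.pi * (days - 120) / 365)

# Invented setup: anomaly variability is higher in Mar-Aug than in Sep-Feb.
warm_half = np.isin(month, [3, 4, 5, 6, 7, 8])
noise_scale = np.where(warm_half, 1.5, 0.5)
sst = seasonal_cycle + rng.normal(0, 1, days.size) * noise_scale

anomaly = sst - seasonal_cycle                    # remove the mean cycle
var_spring_summer = anomaly[warm_half].var()
var_fall_winter = anomaly[~warm_half].var()
```

Grouping prediction errors or anomalies by season in this way is how the seasonal dependence of model skill is typically diagnosed.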

Deep Learning Based Prediction Method of Long-term Photovoltaic Power Generation Using Meteorological and Seasonal Information (기후 및 계절정보를 이용한 딥러닝 기반의 장기간 태양광 발전량 예측 기법)

  • Lee, Donghun;Kim, Kwanho
    • The Journal of Society for e-Business Studies / v.24 no.1 / pp.1-16 / 2019
  • Recently, with the need to respond to meteorological changes driven by increasing greenhouse gases and electricity demand, the importance of photovoltaic (PV) power prediction is rapidly increasing. In particular, prediction of PV power generation helps to determine a reasonable price of electricity and to solve problems such as system stability and electricity production balance. However, because of dynamic changes in meteorological values such as solar radiation, cloudiness, and temperature, as well as seasonal changes, accurate long-term PV power prediction is significantly challenging. Therefore, in this paper, we propose a deep-learning-based PV power prediction model whose performance is improved by learning from meteorological and seasonal information. We evaluate its performance against a seasonal ARIMA (S-ARIMA) model, one of the typical time series methods, and an ANN model with one hidden layer. In experiments on a real-world dataset, the proposed model shows the best performance, indicating a positive impact on improving PV power forecast performance.
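One way to feed seasonal information to a learned model, and the shape of the one-hidden-layer ANN baseline named above, can be sketched as follows. The monthly PV series and the sin/cos month encoding are invented for illustration; the paper's real inputs also include meteorological variables.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic monthly PV series: seasonal cycle plus noise (made-up numbers).
rng = np.random.default_rng(3)
months = np.arange(120)                           # ten years, monthly
pv = 100 + 40 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 5, 120)

# Seasonal information encoded as sin/cos of the month of the year.
X = np.column_stack([np.sin(2 * np.pi * months / 12),
                     np.cos(2 * np.pi * months / 12)])
X_tr, X_te = X[:96], X[96:]                       # last two years held out
y_tr, y_te = pv[:96], pv[96:]

# One-hidden-layer ANN, as in the paper's baseline comparison.
ann = MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs",
                   max_iter=5000, random_state=0).fit(X_tr, y_tr)
mae = np.abs(ann.predict(X_te) - y_te).mean()     # mean absolute error
```

The cyclical sin/cos encoding avoids the artificial discontinuity between December (12) and January (1) that a raw month number would introduce.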

Prediction of a hit drama with a pattern analysis on early viewing ratings (초기 시청시간 패턴 분석을 통한 대흥행 드라마 예측)

  • Nam, Kihwan;Seong, Nohyoon
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.33-49 / 2018
  • The impact of a TV drama's success on ratings and on the effectiveness of channel promotion is very high, and its cultural and business impact has been demonstrated through the Korean Wave. Therefore, early prediction of a blockbuster TV drama is very important from the strategic perspective of the media industry. Previous studies have tried to predict the audience ratings and success of dramas with various methods, but most have made simple predictions using intuitive factors such as the main actor and the time slot, which limits their predictive power. In this study, we propose a model for predicting the popularity of a drama by analyzing customers' viewing patterns on the basis of various theories. This is not only a theoretical contribution; it also has practical value, as the model can be used by actual broadcasting companies. We collected data on 280 TV mini-series dramas broadcast over terrestrial channels in the 10 years from 2003 to 2012. From these data, we selected the 45 most highly ranked and least highly ranked dramas and analyzed their viewing patterns in 11 steps. The assumptions and conditions for modeling are based on existing studies, on the opinions of actual broadcasters, and on data mining techniques. We then developed a prediction model by measuring the viewing-time distance (difference) using Euclidean and correlation methods, which we term similarity (the sum of distances). Through this similarity measure, we predicted the success of dramas from viewers' initial viewing-time pattern distributions over episodes 1-5. To confirm whether the model is sensitive to the measurement method, various distance measures were applied and the model was checked for robustness. Once the model was established, we built a more predictive model using a grid search.
Furthermore, we classified viewers who had watched more than 70% of a drama's total airtime as "passionate viewers" when a new drama was broadcast. We then compared the percentage of passionate viewers between the most highly ranked and least highly ranked dramas, so that we can determine the probability of a blockbuster TV mini-series. We find that the initial viewing-time pattern is the key factor in predicting blockbuster dramas: our model classified blockbuster dramas correctly with 75.47% accuracy from the initial viewing-time pattern analysis. This paper thus shows a high prediction rate while suggesting an audience measurement approach different from existing ones. Currently, broadcasters rely heavily on a few famous actors, the so-called star system, and face more severe competition than ever due to rising production costs, a long-term recession, aggressive investment by comprehensive programming channels, and large corporations; everyone is in a financially difficult situation. The basic revenue model of these broadcasters is advertising, which is executed on the basis of audience ratings. The drama market contributes heavily to the financial success of a broadcasting company's contents, yet demand is hard to forecast because of the nature of the commodity, creating uncertainty. Therefore, to minimize the risk of failure, analyzing the distribution of first-time viewing can provide practical help in establishing a response strategy (organization, marketing, story changes, etc.) for the company involved. We also found that audience behavior is crucial to a program's success, and we define a measure of how enthusiastically a program is watched.
By calculating the loyalty of these passionate viewers, we can successfully predict the success of a program. This way of calculating loyalty can also be used to compute loyalty to various platforms, and for marketing programs such as highlights, script previews, making-of films, characters, games, and other marketing projects.
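The two distance measures named above (Euclidean and correlation), applied to early viewing-time pattern distributions, can be sketched as follows. All the 11-step distributions below are hypothetical centroids invented for illustration, not the paper's data.

```python
import numpy as np

def euclidean(a, b):
    return np.sqrt(((a - b) ** 2).sum())

def correlation_distance(a, b):
    # 1 - Pearson correlation: 0 for identical shapes, up to 2 for opposite.
    return 1.0 - np.corrcoef(a, b)[0, 1]

# Hypothetical 11-step viewing-time distributions over episodes 1-5
# (fraction of viewers per viewing-time bucket; numbers invented).
hit_centroid = np.array([.02, .02, .03, .04, .05, .06, .08, .10, .15, .20, .25])
flop_centroid = np.array([.25, .20, .15, .10, .08, .06, .05, .04, .03, .02, .02])
new_drama = np.array([.03, .03, .04, .05, .06, .07, .09, .11, .14, .18, .20])

def predict_hit(pattern, dist=euclidean):
    """Nearest-centroid rule on the chosen viewing-time distance."""
    return dist(pattern, hit_centroid) < dist(pattern, flop_centroid)
```

Swapping `dist` between `euclidean` and `correlation_distance` mirrors the robustness check across distance measures described in the abstract.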