

BVOCs Estimates Using MEGAN in South Korea: A Case Study of June in 2012 (MEGAN을 이용한 국내 BVOCs 배출량 산정: 2012년 6월 사례 연구)

  • Kim, Kyeongsu;Lee, Seung-Jae
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.24 no.1
    • /
    • pp.48-61
    • /
    • 2022
  • South Korea is a vegetation-rich country, with forests covering 63% and cropland 16% of its area. Massive NOx emissions from megacities therefore readily combine with BVOCs emitted from forest and cropland areas to produce high ozone concentrations. BVOCs emissions have been estimated using well-known emission models such as BEIS (Biogenic Emission Inventory System) and MEGAN (Model of Emissions of Gases and Aerosols from Nature), which were developed using non-Korean emission factors. In this study, we ran the MEGAN v2.1 model to estimate BVOCs emissions in Korea. The MODIS Land Cover and LAI (Leaf Area Index) products over Korea were used to run the MEGAN model for June 2012. Isoprene and monoterpene emissions from the model were inter-compared against enclosure chamber measurements from the Taehwa research forest in Korea on June 11 and 12, 2012. The initial results show that isoprene emissions from the MEGAN model were up to 6.4 times higher than those from the enclosure chamber measurements, while monoterpene emissions from the chamber measurements were up to 5.6 times higher than the MEGAN emissions. The differences between the two datasets, however, were much smaller during times of high emissions. More inter-comparison results and the possibilities of improving MEGAN modeling performance using local measurement data over Korea will be presented and discussed.
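As a rough illustration of the kind of scaling MEGAN-style models perform (not the authors' actual configuration), a Guenther-style temperature activity factor can be sketched as follows; the emission factor value and the β coefficient here are illustrative assumptions:

```python
import math

def monoterpene_emission(ef, temp_k, beta=0.09, t_std=303.0):
    """Scale a standard emission factor ef (ug m^-2 h^-1) by a simple
    exponential temperature activity factor, gamma_T = exp(beta * (T - T_std)).
    A simplified, temperature-only sketch of MEGAN-style emission scaling."""
    gamma_t = math.exp(beta * (temp_k - t_std))
    return ef * gamma_t

# At the standard temperature (303 K) the activity factor is 1,
# so the emission equals the emission factor itself.
print(monoterpene_emission(100.0, 303.0))          # 100.0
print(monoterpene_emission(100.0, 308.0) > 100.0)  # True: a warmer canopy emits more
```

The full model also applies light, leaf-age, and LAI activity factors; this sketch shows only the temperature term.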

Analysis of the effect of long-term water supply improvement by the installation of sand dams in water scarce areas (물부족 지역에서 샌드댐 설치에 의한 장기 물공급 개선 효과 분석)

  • Chung, Il-Moon;Lee, Jeongwoo;Lee, Jeong Eun;Kim, Il-Hwan
    • Journal of Korea Water Resources Association
    • /
    • v.55 no.12
    • /
    • pp.999-1009
    • /
    • 2022
  • The Chuncheon Mullori area is underprivileged in terms of water welfare, with no local water supply system; water is supplied to the village by a small-scale water supply facility that uses groundwater as its source. To solve the problem of water shortage during drought and to prepare for increasing water demand, a sand dam was installed near the valley river, and this facility has been operating since May 2022. In this study, to evaluate the reliability of water supply had the sand dam existed during past droughts, groundwater runoff simulation results from MODFLOW were used to generate inflow data for 2011 to 2020, an unmeasured period. After performing SWAT-K basin hydrologic modeling for the watershed upstream of the existing water intake source and the sand dam, the groundwater runoff was calculated, and the ratio of the monthly groundwater runoff for each of the previous 10 years to the monthly groundwater runoff in 2021 was obtained. By applying this ratio to the 2021 inflow time series, historical inflow data for 2011 to 2020 were generated. Analyzing the availability of water supply during past extreme droughts for three demand cases of 20 m3/day, 50 m3/day, and 100 m3/day confirmed that the reliability of water supply increases with the installation of the sand dam. In the 100 m3/day case, reliability exceeded 90% only when the existing water intake source and the sand dam were operated in conjunction. All three operating conditions were evaluated to satisfy a demand of 50 m3/day or more at 95% water supply reliability, and 30 m3/day or more at 99% reliability.
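The water supply reliability used in analyses like this one is commonly defined as the fraction of time steps in which available supply meets demand. A minimal sketch, with hypothetical daily supply figures rather than the study's simulated inflows:

```python
def supply_reliability(available, demand):
    """Fraction of periods in which available supply meets or exceeds demand."""
    met = sum(1 for a in available if a >= demand)
    return met / len(available)

# Hypothetical available supply (m3/day) over ten days:
supply = [120, 80, 60, 150, 40, 110, 90, 70, 130, 100]
print(supply_reliability(supply, 50))   # 0.9: a 50 m3/day demand is met on 9 of 10 days
print(supply_reliability(supply, 100))  # 0.5
```

Running this over a decade of daily inflow data for each demand scenario, with and without the sand dam's contribution, reproduces the kind of comparison reported above.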

Improvement of turbid water prediction accuracy using sensor-based monitoring data in Imha Dam reservoir (센서 기반 모니터링 자료를 활용한 임하댐 저수지 탁수 예측 정확도 개선)

  • Kim, Jongmin;Lee, Sang Ung;Kwon, Siyoon;Chung, Se Woong;Kim, Young Do
    • Journal of Korea Water Resources Association
    • /
    • v.55 no.11
    • /
    • pp.931-939
    • /
    • 2022
  • In Korea, about two-thirds of annual precipitation is concentrated in the summer season, so the severity of the turbidity problem during the summer flood season varies from year to year. Concentrated rainfall due to abnormal weather and extreme events is on the rise. Inflowing turbid water causes a sudden increase in turbidity, creating turbidity problems in dam reservoirs. In particular, in Korea, where rivers and dam reservoirs supply most of the annual average water consumption, prolonged turbidity problems lead to social and environmental damage to agriculture, industry, and aquatic ecosystems in downstream areas. To support turbidity prediction, research on turbid water modeling is being actively conducted. Flow rate, water temperature, and SS (suspended solids) data are required to model turbid water. To this end, the national measurement network measures turbidity by measuring SS in rivers and dam reservoirs, but the data resolution is low due to insufficient facilities, and unmeasured periods occur depending on each dam and on weather conditions. Sensors for measuring turbidity include the Optical Backscatter Sensor (OBS) and YSI instruments, while SS is measured with equipment such as Laser In-Situ Scattering and Transmissometry (LISST); however, such high-tech sensors are limited by equipment stability. Because unmeasured periods remain even after analysis of the acquired flow rate, water temperature, SS, and turbidity data, it is necessary to develop a relational expression to calculate the SS used as model input. In this study, the AEM3D model used in the Korea Water Resources Corporation's SURIAN system was employed to improve turbidity prediction accuracy through a turbidity-SS relationship developed from measurement data near the dam outlet.
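A turbidity-SS relational expression of the kind described is often fitted as a power law, SS = a·NTU^b, by least squares in log space. A self-contained sketch; the coefficients recovered below come from synthetic data, not the Imha Dam measurements:

```python
import math

def fit_power_law(turbidity, ss):
    """Fit SS = a * turbidity**b by ordinary least squares on log-transformed data."""
    xs = [math.log(t) for t in turbidity]
    ys = [math.log(s) for s in ss]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
    a = math.exp(ybar - b * xbar)
    return a, b

# Synthetic data generated from SS = 1.2 * NTU**0.9:
ntu = [1.0, 5.0, 10.0, 50.0, 100.0]
ss = [1.2 * t ** 0.9 for t in ntu]
a, b = fit_power_law(ntu, ss)
print(round(a, 3), round(b, 3))   # 1.2 0.9
```

With field data the fit is noisy, so the exponent and coefficient are typically reported with goodness-of-fit statistics before the relation is used to fill unmeasured periods.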

MDP(Markov Decision Process) Model for Prediction of Survivor Behavior based on Topographic Information (지형정보 기반 조난자 행동예측을 위한 마코프 의사결정과정 모형)

  • Jinho Son;Suhwan Kim
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.2
    • /
    • pp.101-114
    • /
    • 2023
  • In wartime, aircraft carrying out deep-strike missions against the enemy are exposed to the risk of being shot down. As key combat assets in modern warfare, military flight personnel who operate high-tech weapon systems require a great deal of time, effort, and national budget to train. Therefore, this study addressed the path problem of predicting an emergency escape route from enemy territory to a target point while avoiding obstacles, thereby increasing the possibility of safely recovering downed military flight personnel. Previous work has treated this as a network-based problem, transforming it into TSP, VRP, or Dijkstra shortest-path formulations and approaching it with optimization techniques. However, when the problem is approached as a network problem, it is difficult to reflect the dynamic factors and uncertainties of the battlefield environment that military flight personnel in distress will face. Therefore, an MDP, which is suitable for modeling dynamic environments, was applied. In addition, GIS was used to obtain topographic information data, and in designing the reward structure of the MDP, topographic information was reflected in more detail so that the model could be more realistic than in previous studies. In this study, a value iteration algorithm and a deterministic method were used to derive a path that allows military flight personnel in distress to move the shortest distance while making the most of topographical advantages. Realism was added to the model by including actual topographic information and the obstacles that personnel may encounter during escape and evasion. Through this, it was possible to predict the route by which military flight personnel would escape and evade in an actual situation. The model presented in this study can be applied to various operational situations through redesign of the reward structure. In actual situations, decision support based on scientific techniques that reflect various factors will be possible when predicting the escape route of military flight personnel in distress and conducting combat search and rescue operations.
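The value iteration the study applies solves the Bellman optimality equation V(s) = max_a [R(s,a) + γ·V(s')]. A minimal deterministic sketch on a toy chain of states; the terrain-based reward structure of the paper is replaced here by an assumed toy reward:

```python
def value_iteration(states, actions, step, reward, gamma=0.95, tol=1e-6):
    """Generic value iteration for a deterministic MDP.
    actions(s) -> iterable of actions available in s;
    step(s, a) -> successor state; reward(s, a) -> immediate reward."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(reward(s, a) + gamma * V[step(s, a)] for a in actions(s))
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Toy 3-state chain: moving right from state 1 reaches the goal (state 2)
# with reward 1; the goal state is absorbing with reward 0.
states = [0, 1, 2]
actions = lambda s: ["right"] if s < 2 else ["stay"]
step = lambda s, a: s + 1 if a == "right" else s
reward = lambda s, a: 1.0 if (s == 1 and a == "right") else 0.0
V = value_iteration(states, actions, step, reward, gamma=0.5)
print(round(V[0], 3), round(V[1], 3), round(V[2], 3))   # 0.5 1.0 0.0
```

In the paper's setting, the states would be GIS grid cells, the actions compass moves, and the reward a function of terrain cover, slope, and obstacles; the iteration itself is unchanged.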

The Effect of Consumers' Value Motives on the Perception of Blog Reviews Credibility: the Moderation Effect of Tie Strength (소비자의 가치 추구 동인이 블로그 리뷰의 신뢰성 지각에 미치는 영향: 유대강도에 따른 조절효과를 중심으로)

  • Chu, Wujin;Roh, Min Jung
    • Asia Marketing Journal
    • /
    • v.13 no.4
    • /
    • pp.159-189
    • /
    • 2012
  • What attracts consumers to bloggers' reviews? Consumers are attracted both by a blogger's expertise (i.e., knowledge and experience) and by his or her unbiased manner of delivering information. Expertise and trustworthiness are both virtues of information sources, particularly when there is uncertainty in decision-making. Noting this point, we postulate that consumers' motives determine the relative weights they place on expertise and trustworthiness. In addition, our hypotheses assume that tie strength moderates consumers' expectations of bloggers' expertise and trustworthiness: the expectation of expertise is enhanced for the power-blog user group (weak ties), while the expectation of trustworthiness is elevated for the personal-blog user group (strong ties). Finally, we theorize that the effect of credibility on willingness to accept a review is moderated by tie strength; the predictive power of credibility is more prominent for personal-blog user groups than for power-blog user groups. To test these assumptions, we conducted a field survey with blog users, collecting retrospective self-report data. The "gourmet shop" was chosen as the target product category, and the obtained data were analyzed by structural equation modeling. The findings provide empirical support for our theoretical predictions. First, we found that the purposive motive, aimed at satisfying instrumental information needs, increases reliance on bloggers' expertise, whereas the interpersonal connectivity value of alleviating loneliness elevates reliance on bloggers' trustworthiness. Second, expertise-based credibility is more prominent for power-blog user groups than for personal-blog user groups. While strong ties attract consumers with trustworthiness based on close emotional bonds, weak ties gain consumers' attention with new, non-redundant information (Levin & Cross, 2004).
Thus, when the existing knowledge system used in strong ties does not work smoothly for addressing an impending problem, the weak-tie source can be utilized as a handy reference. We can therefore anticipate that power bloggers secure credibility by virtue of their expertise while personal bloggers trade on their trustworthiness. Our analysis demonstrates that power bloggers appeal more strongly to consumers than do personal bloggers in the area of expertise-based credibility. Finally, the effect of review credibility on willingness to accept a review is higher for the personal-blog user group than for the power-blog user group. The inference that review credibility is a potent predictor of willingness to accept a review is grounded on the analogy that attitude is an effective indicator of purchase intention. However, if memory of established attitudes is blocked, the predictive power of attitude on purchase intention is considerably diminished. Likewise, the effect of credibility on willingness to accept a review can be affected by certain moderators. Inspired by this analogy, we introduced tie strength as a possible moderator and demonstrated that it moderates the effect of credibility on willingness to accept a review. Previously, Levin and Cross (2004) showed that credibility mediates the effect of strong ties on the receipt of knowledge, but this mediation is not observed for weak ties, where a direct path is activated. Thus, the predictive power of credibility on behavioral intention, that is, willingness to accept a review, is expected to be higher for strong ties.


Interpreting Bounded Rationality in Business and Industrial Marketing Contexts: Executive Training Case Studies

  • Woodside, Arch G.;Lai, Wen-Hsiang;Kim, Kyung-Hoon;Jung, Deuk-Keyo
    • Journal of Global Scholars of Marketing Science
    • /
    • v.19 no.3
    • /
    • pp.49-61
    • /
    • 2009
  • This article provides training exercises for executives in interpreting subroutine maps of executives' thinking when processing business and industrial marketing problems and opportunities. This study builds on premises that Schank proposes about learning and teaching, including (1) that learning occurs by experiencing, and the best instruction offers learners opportunities to distill their knowledge and skills from interactive stories in the form of goal-based scenarios, team projects, and understanding stories from experts; and (2) that telling does not lead to learning, because learning requires action: training environments should emphasize active engagement with stories, cases, and projects. Each training case study includes executive exposure to decision system analysis (DSA). The training case requires the executive to write a "Briefing Report" of a DSA map. Instructions to the executive trainee in writing the briefing report include coverage of (1) the details of the essence of the DSA map and (2) a statement of warnings and opportunities that the executive map reader interprets within the DSA map. The maximum length for a briefing report is 500 words, an arbitrary rule that works well in executive training programs. Following this introduction, section two of the article briefly summarizes relevant literature on how humans think within contexts in response to problems and opportunities. Section three illustrates the creation and interpretation of DSA maps using a training exercise in pricing a chemical product to different OEM (original equipment manufacturer) customers. Section four presents a training exercise in pricing decisions by a petroleum manufacturing firm. Section five presents a training exercise in marketing strategies by an office furniture distributor along with buying strategies by business customers.
Each of the three training exercises is based on research into the information processing and decision making of executives operating in marketing contexts. Section six concludes the article with suggestions for use of this training case and for developing additional training cases for honing executives' decision-making skills. Todd and Gigerenzer propose that humans use simple heuristics because they enable adaptive behavior by exploiting the structure of information in natural decision environments: "Simplicity is a virtue, rather than a curse." Bounded rationality theorists emphasize the centrality of Simon's proposition, "Human rational behavior is shaped by a scissors whose blades are the structure of the task environments and the computational capabilities of the actor." Gigerenzer's view is relevant to Simon's environmental blade and to the environmental structures in the three cases in this article: "The term environment, here, does not refer to a description of the total physical and biological environment, but only to that part important to an organism, given its needs and goals." The present article directs attention to research that combines reports on the structure of task environments with the use of adaptive toolbox heuristics by actors. The DSA mapping approach here concerns the match between strategy and an environment, that is, the development and understanding of ecological rationality theory. Aspiration adaptation theory is central to this approach; it models decision making as a multi-goal problem without aggregating the goals into a complete preference order over all decision alternatives. The three case studies in this article permit the learner to apply propositions about aspiration level rules in reaching a decision. Aspiration adaptation takes the form of a sequence of adjustment steps, each of which shifts the current aspiration level to a neighboring point on an aspiration grid by a change in only one goal variable.
An upward adjustment step is an increase and a downward adjustment step is a decrease of a goal variable. Creating and using aspiration adaptation levels is integral to bounded rationality theory. The present article increases understanding and expertise of both aspiration adaptation and bounded rationality theories by providing learner experiences and practice in using propositions in both theories. Practice in ranking CTSs and writing TOP gists from DSA maps serves to clarify and deepen Selten's view, "Clearly, aspiration adaptation must enter the picture as an integrated part of the search for a solution." The body of "direct research" by Mintzberg, Gladwin's ethnographic decision tree modeling, and Huff's work on mapping strategic thought are suggestions on where to look for research that considers both the structure of the environment and the computational capabilities of the actors making decisions in these environments. Such research on bounded rationality permits both further development of theory in how and why decisions are made in real life and the development of learning exercises in the use of heuristics occurring in natural environments. The exercises in the present article encourage learning skills and principles of using fast and frugal heuristics in contexts of their intended use. The exercises respond to Schank's wisdom, "In a deep sense, education isn't about knowledge or getting students to know what has happened. It is about getting them to feel what has happened. This is not easy to do. Education, as it is in schools today, is emotionless. This is a huge problem." The three cases and accompanying set of exercise questions adhere to Schank's view, "Processes are best taught by actually engaging in them, which can often mean, for mental processing, active discussion."


Estimation of Internal Motion for Quantitative Improvement of Lung Tumor in Small Animal (소동물 폐종양의 정량적 개선을 위한 내부 움직임 평가)

  • Yu, Jung-Woo;Woo, Sang-Keun;Lee, Yong-Jin;Kim, Kyeong-Min;Kim, Jin-Su;Lee, Kyo-Chul;Park, Sang-Jun;Yu, Ran-Ji;Kang, Joo-Hyun;Ji, Young-Hoon;Chung, Yong-Hyun;Kim, Byung-Il;Lim, Sang-Moo
    • Progress in Medical Physics
    • /
    • v.22 no.3
    • /
    • pp.140-147
    • /
    • 2011
  • The purpose of this study was to estimate internal motion using a molecular sieve for quantitative improvement of lung tumor imaging and to localize lung tumors in small animal PET images using the evaluated data. Internal motion was demonstrated in the small animal lung region with a molecular sieve containing a radioactive substance; the molecular sieve serving as the internal lung motion target contained approximately 37 kBq of Cu-64. The small animal PET images were obtained on a Siemens Inveon scanner using an external trigger system (BioVet). SD rat PET images were acquired for 20 min starting 60 min after injection of 37 MBq/0.2 mL FDG via the tail vein. Each line of response in the list-mode data was converted to sinogram gated frames (2-16 bins) by the trigger signal obtained from BioVet. The sinogram data were reconstructed using OSEM 2D with 4 iterations. PET images were evaluated with count, SNR, and FWHM from an ROI drawn in the target region for quantitative tumor analysis. The size of the molecular sieve motion target was 1.59 × 2.50 mm. The reference motion target FWHM was 2.91 mm vertically and 1.43 mm horizontally. The vertical FWHM of the static, 4-bin, and 8-bin images was 3.90 mm, 3.74 mm, and 3.16 mm, respectively; the horizontal FWHM was 2.21 mm, 2.06 mm, and 1.60 mm, respectively. The count for the static, 4-bin, 8-bin, 12-bin, and 16-bin images was 4.10, 4.83, 5.59, 5.38, and 5.31, respectively, and the SNR was 4.18, 4.05, 4.22, 3.89, and 3.58, respectively. The FWHM improved as the gate number increased. The count and SNR did not improve proportionally with gate number, but showed their highest values at specific bin numbers. We determined the optimal gate number that minimizes SNR loss while gaining improved counts when imaging lung tumors in small animals. The internal motion estimation provides localized tumor images and will be a useful method for organ motion prediction modeling without an external motion monitoring system.
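FWHM figures like those reported above can be extracted from a 1-D intensity profile through the ROI by locating the half-maximum crossings with linear interpolation. A generic sketch; the profile below is synthetic, not the rat PET data:

```python
def fwhm(profile, spacing=1.0):
    """Full width at half maximum of a single-peaked sampled profile,
    using linear interpolation between samples; spacing is the pixel size (mm)."""
    peak = max(profile)
    half = peak / 2.0
    # Left half-maximum crossing (interpolated sample index).
    i = next(k for k, v in enumerate(profile) if v >= half)
    left = i if i == 0 else (i - 1) + (half - profile[i - 1]) / (profile[i] - profile[i - 1])
    # Right half-maximum crossing.
    j = len(profile) - 1 - next(k for k, v in enumerate(reversed(profile)) if v >= half)
    right = j if j == len(profile) - 1 else j + (profile[j] - half) / (profile[j] - profile[j + 1])
    return (right - left) * spacing

# Synthetic triangular profile sampled at 0.5 mm pixel spacing:
print(fwhm([0.0, 1.0, 2.0, 1.0, 0.0], spacing=0.5))   # 1.0 (mm)
```

Applying this to profiles through the motion target in each gated reconstruction yields the per-bin FWHM comparison described in the abstract.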

A Study on Commodity Asset Investment Model Based on Machine Learning Technique (기계학습을 활용한 상품자산 투자모델에 관한 연구)

  • Song, Jin Ho;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.127-146
    • /
    • 2017
  • Services using artificial intelligence have begun to emerge in daily life. Artificial intelligence is applied to consumer electronics and communications products such as AI refrigerators and speakers. In the financial sector, Goldman Sachs improved its stock trading process using Kensho's artificial intelligence technology: two stock traders could handle the work of 600, and analytical work that took 15 people 4 weeks could be processed in 5 minutes. In particular, big data analysis through machine learning is actively applied throughout the financial industry, and stock market analysis and investment modeling through machine learning theory are also actively studied. The linearity limits of traditional financial time series studies are overcome by using machine learning approaches such as artificial intelligence prediction models. Quantitative studies of financial data based on past stock market-related numerical data widely use artificial intelligence to forecast future movements of stock prices or indices, and various other studies predict the future direction of the market or of company stock prices by learning from large amounts of text data such as news and comments related to the stock market. Investing in commodity assets, one class of alternative assets, is usually used to enhance the stability and safety of a traditional stock and bond portfolio. There is relatively little research on investment models for commodity assets compared with mainstream assets like equities and bonds. Recently, machine learning techniques have been widely applied in finance, especially to stock and bond investment models, producing better trading models and changing the whole financial field. In this study, we built an investment model using the Support Vector Machine (SVM), one of the machine learning models.
Some research on commodity assets focuses on price prediction for specific commodities, but it is hard to find research on commodity investment models for asset allocation using machine learning. We propose a method of forecasting four major commodity indices, a portfolio made of commodity futures, and individual commodity futures using an SVM model. The four major commodity indices are the Goldman Sachs Commodity Index (GSCI), the Dow Jones UBS Commodity Index (DJUI), the Thomson Reuters/Core Commodity CRB Index (TRCI), and the Rogers International Commodity Index (RI). We selected two individual futures from each of three sectors, energy, agriculture, and metals, that are actively traded on the CME market and have enough liquidity: Crude Oil, Natural Gas, Corn, Wheat, Gold, and Silver futures. We made an equally weighted portfolio of the six commodity futures for comparison with the commodity indices. We set 19 macroeconomic indicators, including stock market indices, exports and imports trade data, labor market data, and composite leading indicators, as the model's input data, because commodity assets are closely related to macroeconomic activity: 14 US, two Chinese, and two Korean economic indicators. The data period is from January 1990 to May 2017; the first 195 monthly observations were used as training data and the last 125 as test data. We verified that the performance of the equally weighted commodity futures portfolio rebalanced by the SVM model is better than that of the commodity indices. The model's prediction accuracy for the commodity indices does not exceed 50% regardless of the SVM kernel function, whereas the prediction accuracy for the equally weighted commodity futures portfolio is 53%. The prediction accuracy of the individual commodity futures models is better than that of the commodity index models, especially in the agriculture and metal sectors.
The portfolio of individual commodity futures excluding the energy sector outperformed the full three-sector individual commodity futures portfolio. To verify the validity of the model, the analysis results should be similar despite variations in the data period; we therefore also used odd-numbered-year data for training and even-numbered-year data for testing and confirmed that the results are similar. In conclusion, when allocating commodity assets to a traditional portfolio of stocks, bonds, and cash, more effective investment performance can be obtained by investing in commodity futures rather than commodity indices, and especially by using the rebalanced commodity futures portfolio designed with the SVM model.
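The evaluation logic described, a directional hit ratio and the return of an equally weighted portfolio rebalanced on up/down signals, can be sketched independently of the SVM itself; the monthly return figures below are hypothetical:

```python
def hit_ratio(signals, returns):
    """Fraction of assets where the up/down signal (1 = up, 0 = down)
    matched the sign of the realized return."""
    hits = sum(1 for s, r in zip(signals, returns) if s == (1 if r > 0 else 0))
    return hits / len(signals)

def rebalanced_return(signals, returns):
    """One-period return of an equally weighted long portfolio of the
    assets predicted to rise; flat (0.0) if no asset is predicted up."""
    longs = [r for s, r in zip(signals, returns) if s == 1]
    return sum(longs) / len(longs) if longs else 0.0

# Hypothetical month: six futures, the model goes long the three predicted up.
signals = [1, 0, 1, 1, 0, 0]
returns = [0.02, -0.01, 0.03, -0.01, 0.01, -0.02]
print(round(hit_ratio(signals, returns), 3))         # 0.667
print(round(rebalanced_return(signals, returns), 4)) # 0.0133
```

In the study, signals for each month of the test period would come from the fitted SVM, and the monthly rebalanced returns would be compounded and compared against the four commodity indices.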

How Enduring Product Involvement and Perceived Risk Affect Consumers' Online Merchant Selection Process: The 'Required Trust Level' Perspective (지속적 관여도 및 인지된 위험이 소비자의 온라인 상인선택 프로세스에 미치는 영향에 관한 연구: 요구신뢰 수준 개념을 중심으로)

  • Hong, Il-Yoo B.;Lee, Jung-Min;Cho, Hwi-Hyung
    • Asia pacific journal of information systems
    • /
    • v.22 no.1
    • /
    • pp.29-52
    • /
    • 2012
  • Consumers differ in the way they make a purchase. An audio enthusiast would willingly make a bold, yet serious, decision to buy a top-of-the-line home theater system while having no interest in replacing his two-decade-old shabby car. On the contrary, an automobile enthusiast wouldn't mind spending forty thousand dollars on a new Jaguar convertible, yet cares little about his junky component stereo system. It is product involvement that helps explain such differences in purchase style among individuals. Product involvement refers to the extent to which a product is perceived to be important to a consumer (Zaichkowsky, 2001). Product involvement is an important factor that strongly influences a consumer's purchase decision-making process, and has thus been of prime interest to consumer behavior researchers. Furthermore, researchers have found that involvement is closely related to perceived risk (Dholakia, 2001). While abundant research addresses how product involvement relates to overall perceived risk, little attention has been paid to the relationship between involvement and different types of perceived risk in an electronic commerce setting. Given that perceived risk can be a substantial barrier to online purchase (Jarvenpaa, 2000), research addressing this issue will offer useful implications on which specific types of perceived risk an online firm should focus on mitigating if it is to increase sales to their fullest potential. Meanwhile, past research has focused on consumer responses such as information search and dissemination as consequences of involvement, neglecting other behavioral responses like online merchant selection. For example, will a consumer seriously considering the purchase of a pricey Guzzi bag perceive a great degree of risk associated with online buying and therefore choose to buy it from a digital storefront rather than from an online marketplace to mitigate risk?
Will a consumer require greater trust on the part of the online merchant when the perceived risk of online buying is rather high? We intend to answer these research questions through an empirical study. This paper explores the impact of enduring product involvement and perceived risks on the required trust level, and further on online merchant choice. For the purpose of the research, five types or components of perceived risk are considered: financial, performance, delivery, psychological, and social risks. A research model was built around these constructs, and 12 hypotheses were developed to examine the relationships between enduring involvement and the five components of perceived risk, between the five components of perceived risk and required trust level, between enduring involvement and required trust level, and finally between required trust level and preference toward an e-tailer. To attain our research objectives, we conducted an empirical analysis consisting of two phases of data collection: a pilot test and a main survey. The pilot test was conducted with 25 college students to ensure that the questionnaire items were clear and straightforward. The main survey was then conducted with 295 college students at a major university over nine days, from December 13 to December 21, 2010. The measures employed to test the model included eight constructs: (1) enduring involvement, (2) financial risk, (3) performance risk, (4) delivery risk, (5) psychological risk, (6) social risk, (7) required trust level, and (8) preference toward an e-tailer. The statistical package SPSS 17.0 was used to test the internal consistency of the items within the individual measures. Based on the Cronbach's α coefficients of the individual measures, the reliability of all the variables is supported.
Meanwhile, the Amos 18.0 package was employed to perform a confirmatory factor analysis designed to assess the unidimensionality of the measures. The goodness of fit for the measurement model was satisfied. Unidimensionality was tested using convergent, discriminant, and nomological validity. The statistical evidences proved that the three types of validity were all satisfied. Now the structured equation modeling technique was used to analyze the individual paths along the relationships among the research constructs. The results indicated that enduring involvement has significant positive relationships with all the five components of perceived risk, while only performance risk is significantly related to trust level required by consumers for purchase. It can be inferred from the findings that product performance problems are mostly likely to occur when a merchant behaves in an opportunistic manner. Positive relationships were also found between involvement and required trust level and between required trust level and online merchant choice. Enduring involvement is concerned with the pleasure a consumer derives from a product class and/or with the desire for knowledge for the product class, and thus is likely to motivate the consumer to look for ways of mitigating perceived risk by requiring a higher level of trust on the part of the online merchant. Likewise, a consumer requiring a high level of trust on the merchant will choose a digital storefront rather than an e-marketplace, since a digital storefront is believed to be trustworthier than an e-marketplace, as it fulfills orders by itself rather than acting as an intermediary. The findings of the present research provide both academic and practical implications. The first academic implication is that enduring product involvement is a strong motivator of consumer responses, especially the selection of a merchant, in the context of electronic shopping. 
Secondly, academicians are advised to pay attention to the finding that an individual component or type of perceived risk can serve as an important research construct, since it allows one to pinpoint the specific types of risk that are influenced by antecedents or that influence consequents. Meanwhile, our research provides implications useful for online merchants (both online storefronts and e-marketplaces). Merchants may develop strategies to attract consumers by managing the perceived performance risk involved in purchase decisions, since it was found to have a significant positive relationship with the level of trust a consumer requires of the merchant. One way to manage performance risk would be to thoroughly examine the product before shipping to ensure that it has no deficiencies or flaws. Secondly, digital storefronts are advised to focus on symbolic goods (e.g., cars, cell phones, fashion outfits, and handbags), in which consumers are relatively more involved, whereas e-marketplaces should put their emphasis on non-symbolic goods (e.g., drinks, books, MP3 players, and bike accessories).
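The recursive chain of paths examined in this study (involvement → perceived risk → required trust → merchant preference) can be sketched, in simplified form, as a series of ordinary least-squares regressions rather than full SEM estimation; the data and effect sizes below are synthetic placeholders, not the survey results:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 295  # same sample size as the main survey

# Hypothetical synthetic data following the hypothesized causal order
involvement = rng.normal(size=n)
perf_risk = 0.5 * involvement + rng.normal(scale=0.8, size=n)
trust_req = 0.6 * perf_risk + 0.3 * involvement + rng.normal(scale=0.7, size=n)
preference = 0.7 * trust_req + rng.normal(scale=0.6, size=n)

def ols(y, *xs):
    """Least-squares path coefficients (intercept dropped from the output)."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

b_risk = ols(perf_risk, involvement)              # involvement -> performance risk
b_trust = ols(trust_req, perf_risk, involvement)  # risk, involvement -> required trust
b_pref = ols(preference, trust_req)               # required trust -> preference
```

Because the model is recursive, each equation can be estimated separately this way; a full SEM (as in the study's Amos analysis) would additionally model measurement error in the latent constructs.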


Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • Beyond stakeholders such as managers, employees, creditors, and investors of bankrupt companies, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models. As a result, even large corporations known as 'chaebol' conglomerates went bankrupt. Even after that, analyses of past corporate defaults focused on specific variables, and when the government carried out restructuring immediately after the global financial crisis, it concentrated on a few main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid sudden, total collapses such as the 'Lehman Brothers case' of the global financial crisis. The key variables in corporate defaults vary over time: Deakin's (1972) study shows that the major factors affecting corporate failure identified in Beaver's (1967, 1968) and Altman's (1968) analyses had changed. Grice's (2001) study likewise found shifts in the importance of the predictive variables in Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most of them do not consider changes that occur over time. Therefore, to construct consistent prediction models, it is necessary to compensate for time-dependent bias by means of a time series algorithm that reflects dynamic change. Centering on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009, divided into training, validation, and test sets of 7, 2, and 1 years, respectively.
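The 7/2/1-year partition described above can be sketched as a simple year-based split of firm-year records (the field names and toy records here are hypothetical):

```python
# Partition firm-year records into train/validation/test by fiscal year,
# mirroring the study's 2000-2006 / 2007-2008 / 2009 split.
records = [
    {"firm": f"F{i}", "year": year, "default": 0}
    for i in range(3)              # three toy firms
    for year in range(2000, 2010)  # ten fiscal years, 2000-2009
]

train_set = [r for r in records if 2000 <= r["year"] <= 2006]
valid_set = [r for r in records if 2007 <= r["year"] <= 2008]
test_set  = [r for r in records if r["year"] == 2009]
```

Splitting by year rather than at random keeps the evaluation strictly out-of-time, which is the point of the study's design: the test year (2009) is never visible during training or tuning.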
To construct a bankruptcy model that is consistent over time, we first train a deep learning time series model on the data before the financial crisis (2000~2006). Parameter tuning of both the existing models and the deep learning time series algorithm is conducted on validation data that include the financial crisis period (2007~2008). As a result, we obtain a model whose behavior is similar to that on the training data and that shows excellent predictive power. Each bankruptcy prediction model is then retrained on the combined training and validation data (2000~2008), applying the optimal parameters found during validation. Finally, the corporate default prediction models trained on these nine years are evaluated and compared on the test data (2009), demonstrating the usefulness of the model based on the deep learning time series algorithm. In addition, by adding Lasso regression to the existing variable-selection methods (multivariate discriminant analysis and the logit model), we show that the deep learning time series model built on the three resulting variable sets is useful for robust corporate default prediction. The definition of bankruptcy is the same as that of Lee (2015). Independent variables include financial information, such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms are then compared. Corporate data pose three limitations: nonlinear variables, multicollinearity among variables, and lack of data.
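The Lasso-based variable selection mentioned above can be illustrated with a minimal coordinate-descent implementation; the "financial ratios" below are synthetic, and only two of them actually drive the target:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """L1-penalized least squares via coordinate descent.
    Minimizes (1/2n)||y - Xb||^2 + lam * ||b||_1; the per-column
    scale z makes the update exact without pre-standardizing X."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding feature j's current contribution
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r / n
            z = X[:, j] @ X[:, j] / n
            # soft-thresholding shrinks weak coefficients exactly to zero
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z
    return beta

# Synthetic "financial ratios": only features 0 and 2 matter
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + rng.normal(scale=0.1, size=200)
beta = lasso_cd(X, y, lam=0.1)
```

The soft-thresholding step is what makes Lasso a variable selector rather than a mere shrinkage method: irrelevant ratios are driven exactly to zero, yielding the kind of compact variable set the study feeds into its models.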
The logit model handles nonlinearity, the Lasso regression model resolves the multicollinearity problem, and the deep learning time series algorithm, combined with a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis toward automated AI analysis and, eventually, interconnected AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm builds corporate default prediction models much faster than regression analysis and delivers better predictive power. Amid the Fourth Industrial Revolution, the Korean government and governments overseas are working to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry remains scarce. This is an initial study on deep learning time series analysis of corporate defaults, and we hope it will serve as comparative material for non-specialists beginning studies that combine financial data with deep learning time series algorithms.
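As a minimal illustration of the recurrent gating that lets a deep learning time series model such as an LSTM carry information across fiscal years, one cell step can be written in plain NumPy (the weights are random placeholders, not a trained default model):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step: x is the input, (h, c) the previous hidden/cell state."""
    z = x @ W + h @ U + b                         # all four gate pre-activations
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # input / forget / output gates
    g = np.tanh(g)                                # candidate cell update
    c_new = f * c + i * g       # forget part of the old state, admit new info
    h_new = o * np.tanh(c_new)  # expose a gated view of the cell state
    return h_new, c_new

# Run a toy sequence of yearly feature vectors through the cell
rng = np.random.default_rng(0)
n_in, n_hid = 6, 4             # e.g. 6 financial ratios, 4 hidden units
W = rng.normal(scale=0.1, size=(n_in, 4 * n_hid))
U = rng.normal(scale=0.1, size=(n_hid, 4 * n_hid))
b = np.zeros(4 * n_hid)
h = c = np.zeros(n_hid)
for x in rng.normal(size=(9, n_in)):  # nine years of data, as in 2000-2008
    h, c = lstm_step(x, h, c, W, U, b)
```

The forget gate is what distinguishes this architecture from a static model: the cell state accumulated from early years can persist or be discarded depending on later inputs, which is the dynamic behavior the study relies on for time-varying default factors.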