• Title/Summary/Keyword: Situation information


Spatial effect on the diffusion of discount stores (대형할인점 확산에 대한 공간적 영향)

  • Joo, Young-Jin;Kim, Mi-Ae
    • Journal of Distribution Research / v.15 no.4 / pp.61-85 / 2010
  • Introduction: Diffusion is the process by which an innovation is communicated through certain channels over time among the members of a social system (Rogers 1983). Bass (1969) proposed the Bass model to describe this diffusion process. The Bass model assumes that potential adopters of an innovation are influenced by mass media and by word-of-mouth communication with previous adopters. Various extensions of the Bass model have been developed: some propose a third factor affecting diffusion, while others propose multinational diffusion models that stress interactive effects on diffusion among several countries. We add a spatial factor to the Bass model as a third communication factor. Because we cannot control the interaction between markets, we need to allow for diffusion within a given market being influenced by diffusion in contiguous markets. The expansion of a retail format in a particular market can be described by the retail life cycle, and the diffusion of retail follows the three phases of spatial diffusion: adoption of the innovation happens near the diffusion center first, then spreads to the vicinity of the diffusion center, and is finally completed in peripheral areas during the saturation stage. We therefore expect the spatial effect to be important in describing the diffusion of domestic discount stores. We define a spatial diffusion model based on the multinational diffusion model and apply it to the diffusion of discount stores. Modeling: To define the spatial diffusion model, we extend the learning model of Kumar and Krishnan (2002) and separate the diffusion process in the diffusion center (market A) from the diffusion process in the vicinity of the diffusion center (market B). The proposed spatial diffusion model is shown in equations (1a) and (1b), where (1a) describes the diffusion process in the diffusion center and (1b) the process in its vicinity. $$S_{i,t}=\left(p_i+q_i\frac{Y_{i,t-1}}{m_i}\right)(m_i-Y_{i,t-1}),\quad i\in\{1,\dots,I\} \quad\text{(1a)}$$ $$S_{j,t}=\left(p_j+q_j\frac{Y_{j,t-1}}{m_j}+\sum_{i=1}^{I}\gamma_{ij}\frac{Y_{i,t-1}}{m_i}\right)(m_j-Y_{j,t-1}),\quad i\in\{1,\dots,I\},\; j\in\{I+1,\dots,I+J\} \quad\text{(1b)}$$ We raise two research questions: (1) the proposed spatial diffusion model is more effective than the Bass model at describing the diffusion of discount stores; (2) the more similar the retail environment of the diffusion center is to that of a contiguous market, the larger the spatial effect of the diffusion center on diffusion in that market. To examine these two questions, we first estimate the diffusion of discount stores with the Bass model, then with the spatial diffusion model in which the spatial factor is added to the Bass model, and finally compare the two models to find out which describes the diffusion of discount stores better. In addition, we investigate the relationship between the similarity of retail environments (conceptual distance) and the spatial effect with a correlation analysis. Result and Implication: To examine the proposed spatial diffusion model, 347 domestic discount stores are used, and we divide the nation into five districts: Seoul-Gyeongin (SG), Busan-Gyeongnam (BG), Daegu-Gyeongbuk (DG), Gwangju-Jeonla (GJ), and Daejeon-Chungcheong (DC). The estimation results are as follows.

    In the result of the Bass model (I), the estimates of the innovation coefficient (p) and the imitation coefficient (q) are 0.017 and 0.323 respectively, while the estimate of market potential is 384. The result of the Bass model (II) for each district shows that the estimate of the innovation coefficient (p) in SG is 0.019, the lowest among the five districts; this is because SG is the diffusion center. The estimate of the imitation coefficient (q) in BG is 0.353, the highest. The imitation coefficient in the vicinity of the diffusion center, such as BG, is higher than that in the diffusion center because more information flows through various paths as diffusion progresses. In the result of the spatial diffusion model (IV), we can notice changes between the coefficients of the Bass model and those of the spatial diffusion model: except for GJ, the estimates of the innovation and imitation coefficients in Model IV are lower than those in Model II. These changes in the innovation and imitation coefficients are absorbed into the spatial coefficient ($\gamma$). From the spatial coefficient ($\gamma$) we can infer that when diffusion occurs in the vicinity of the diffusion center, it is influenced by diffusion in the diffusion center. The difference between the Bass model (II) and the spatial diffusion model (IV) is statistically significant: the ${\chi}^2$-distributed likelihood ratio statistic is 16.598 (p=0.0023), which implies that the spatial diffusion model is more effective than the Bass model at describing the diffusion of discount stores. Research question (1) is therefore supported. In addition, a correlation analysis shows a statistically significant relationship between the similarity of retail environments and the spatial effect, so research question (2) is also supported.
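To make the structure of equations (1a) and (1b) concrete, here is a minimal simulation sketch of the proposed spatial diffusion model. The parameter values are illustrative assumptions, not the paper's estimates; only the functional form follows equation (1).

```python
import numpy as np

def spatial_diffusion(p, q, m, gamma, T):
    """Simulate per-period adoptions S[t] under equations (1a)/(1b).

    p, q, m : length-n arrays (innovation coeff., imitation coeff., potential)
    gamma   : n x n matrix; gamma[i, j] is the spatial effect of diffusion-center
              market i on vicinity market j (a zero column reduces to plain Bass).
    """
    n = len(p)
    Y = np.zeros(n)                      # cumulative adoptions Y[t-1]
    S = np.zeros((T, n))
    for t in range(T):
        # adoption hazard: innovation + internal word-of-mouth + spatial term
        hazard = p + q * Y / m + gamma.T @ (Y / m)
        S[t] = hazard * (m - Y)          # equations (1a) and (1b)
        Y += S[t]
    return S

# Illustrative two-market setup: market 0 is the diffusion center, market 1
# its vicinity, influenced through gamma[0, 1] (all values are assumptions).
p = np.array([0.019, 0.010])
q = np.array([0.30, 0.25])
m = np.array([150.0, 80.0])
gamma = np.array([[0.00, 0.05],
                  [0.00, 0.00]])
S = spatial_diffusion(p, q, m, gamma, T=25)
print(S.cumsum(axis=0)[-1])              # cumulative adoptions approach m
```

A likelihood-ratio comparison against the plain Bass model (gamma fixed at zero), as the paper reports, would then test whether the spatial term is warranted.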

  • The Value and Growing Characteristics of the Dicentra Spectabilis Community in Daea-ri, Wanju-gun, Jeollabuk-do as a Nature Reserve (전북 완주군 대아리 금낭화 Dicentra spectabilis 군락지의 천연보호구역적 가치와 생육특성)

    • Lee, Suk Woo;Rho, Jae Hyun;Oh, Hyun Kyung
      • Korean Journal of Heritage: History & Science / v.44 no.1 / pp.72-105 / 2011
    • This study explores the value of the Dicentra spectabilis community as a nature reserve in the provincial forest at San 1-2, Daea-ri, Dongsang-myeon, Wanju-gun, Jeollabuk-do, also known as Gamakgol, assesses the suitability of its living environment, and ultimately provides basic information for protecting this area. For these reasons, we investigated the morphological and biological features of Dicentra spectabilis and the present situation and problems of designating a herbaceous nature reserve in Korea. Furthermore, we researched and analyzed the solar, soil and vegetation conditions of the site through a field study in order to gauge its value as a nature reserve. The results are as follows. According to the analysis of information on domestic wild Dicentra spectabilis communities, the species is evenly spread throughout mountainous areas, and the community in Wanju Gamakgol is particularly outstanding in size. Based on the findings from the literature and the field study of its dispersion, Gamakgol is an ideal district for Dicentra spectabilis since it meets all the conditions this plant requires to grow vigorously, such as a quasi-high altitude and rich precipitation during its period of active growth in May. Dicentra spectabilis grows in rocky soil ranging from 300~375 m above sea level (344.5 m on average), facing north, northwest and dominantly northeast, with a mean inclination of 19.5°. Analysis of the solar conditions shows that the average light intensity during its growth period, from April to August, is 30,810 lux and tends to increase toward the end of the period. This plant requires around 14,000~18,000 lux while growing, but once it blooms, fruits develop regardless of the degree of brightness. The soil pH shows a slight difference between the topsoil, at 5.2~6.1, and the subsoil, at 5.2~6.2; the mean pH is 5.54 for topsoil and 5.58 for subsoil. These values are typical for Dicentra spectabilis, and the comparative areas present similar conditions. Given these facts, the soil of Gamakgol is evaluated as highly stable. Analysis of the vegetation environment shows a wide variation in the number of taxa, from 13 to 52 depending on the plot. The total number of taxa is 126, and they form a homogeneous group while showing a variety of species. The Dicentra spectabilis community in the Daea-ri Arboretum is an herbaceous community dominated by Dicentra spectabilis, Cardamine leucantha, Boehmeria tricuspis and Impatiens textori, with many differential species such as Impatiens textori, Pueraria thunbergiana and Rubus crataegifolius versus Staphylea bumalda, Securinega suffruticosa and Actinidia polygama, suggesting typical subcommunities divided by topographic features and soil humidity. Considering the above results comprehensively, this area is an excellent habitat for wild Dicentra spectabilis that also offers beautiful scenery for viewing. Additionally, it is the largest wild colony of Dicentra spectabilis in Korea, and its climate, topography, soil conditions and vegetation environment can secure its sustainability as a wild habitat. Therefore, we conclude that the Gamakgol community should be re-examined as a natural asset owing to its established habitat conditions and sustainability.

    Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

    • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
      • Journal of Intelligence and Information Systems / v.25 no.1 / pp.63-83 / 2019
    • Investors prefer to look for trading points based on the shapes shown in charts rather than on complex analyses such as corporate intrinsic value analysis or technical indicator analysis. However, pattern analysis is difficult to formalize and has been computerized far less than users need. In recent years, there have been many studies of stock price patterns using various machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, the development of IT technology has made it easier to analyze huge numbers of charts to find patterns that can predict stock prices. Although the short-term forecasting power of prices has improved, long-term forecasting power remains limited, so such models are used in short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that past technology could not recognize, but this can be vulnerable in practice because whether the patterns found are suitable for trading is a separate matter. These studies find a point that matches a meaningful pattern and then measure performance after n days, assuming a purchase at that point in time; since this approach calculates virtual revenues, there can be many disparities with reality. Whereas the existing approach tries to discover patterns with price prediction power, this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M&W wave pattern published by Merrill (1980) is simple because it can be distinguished by five turning points. Despite reports that some patterns have price predictability, there were no reports of performance in the actual market. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern recognition accuracy. In this study, 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented in a system, and only the one pattern with the highest success rate per group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. This is close to a real trading situation because performance is measured assuming that both the buy and the sell are actually executed. We tested three ways to calculate the turning points. The first method, the minimum change rate zig-zag method, removes price movements below a certain percentage and calculates the vertices. In the second method, the high-low line zig-zag, a high price that meets the n-day high price line is taken as a peak, and a low price that meets the n-day low price line is taken as a valley. In the third method, the swing wave method, a central high price higher than the n high prices on its left and right is taken as a peak, and a central low price lower than the n low prices on its left and right is taken as a valley. The swing wave method was superior to the other methods in the test results, which we interpret to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished.
Because the number of cases was too large to search exhaustively in this simulation, genetic algorithms (GA) were the most suitable way to find patterns with high success rates. We also performed the simulation using the walk-forward analysis (WFA) method, which tests the training section and the application section separately, so we were able to respond appropriately to market changes. In this study, we optimize at the stock portfolio level because there is a risk of over-optimization if we optimize the variables for each individual stock; we therefore selected 20 constituent stocks to increase the effect of diversified investment while avoiding over-optimization. We tested the KOSPI market by dividing it into six categories. In the results, the small-cap portfolio was the most successful and the high-volatility portfolio was the second best. This shows that some price volatility is needed for patterns to take shape, but higher volatility is not always better.
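As a concrete illustration of the third turning-point method, the following is a minimal sketch of the swing wave rule as stated above: a bar is a peak if its high exceeds the n highs on each side, and a valley if its low is below the n lows on each side. The window size and the synthetic data are assumptions, not the paper's settings.

```python
import numpy as np

def swing_turning_points(high, low, n=5):
    """Swing wave method: return indices of peaks and valleys."""
    peaks, valleys = [], []
    for t in range(n, len(high) - n):
        neighbors_h = np.r_[high[t-n:t], high[t+1:t+n+1]]
        neighbors_l = np.r_[low[t-n:t], low[t+1:t+n+1]]
        if high[t] > neighbors_h.max():
            peaks.append(t)       # central high above the n highs on each side
        if low[t] < neighbors_l.min():
            valleys.append(t)     # central low below the n lows on each side
    return peaks, valleys

# Synthetic prices for illustration (not the paper's KOSPI data)
rng = np.random.default_rng(0)
close = 100 + np.cumsum(rng.normal(0, 1, 250))
high, low = close + 0.5, close - 0.5
peaks, valleys = swing_turning_points(high, low, n=5)
# Five consecutive alternating turning points delimit one M or W pattern
print(len(peaks), len(valleys))
```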

    The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

    • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
      • Journal of Intelligence and Information Systems / v.27 no.1 / pp.83-102 / 2021
    • The government recently announced various policies for developing the big-data and artificial intelligence fields, providing a great opportunity for the public through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea, strongly committed to backing export companies with various systems. Nevertheless, there are still few realized business models based on big-data analysis. In this situation, this paper aims to develop a new business model that can be applied to ex-ante prediction of the likelihood of a credit guarantee insurance accident. We utilize internal data from KSURE, which supports export companies in Korea, apply machine learning models, and then compare the performance of the predictive models, including logistic regression, Random Forest, XGBoost, LightGBM, and DNN (deep neural network). For decades, many researchers have tried to find better models to predict bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. The prediction of financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), based on multiple discriminant analysis and still widely used in both research and practice; it utilizes five key financial ratios to predict the probability of bankruptcy within the next two years. Ohlson (1980) introduced the logit model to complement some limitations of previous models, and Elmer and Borowski (1988) developed and examined a rule-based, automated system for the financial analysis of savings and loans. Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy: Kim (1987) analyzed financial ratios and developed a prediction model; Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques, including artificial neural networks; Yang (1996) introduced multiple discriminant analysis and the logit model; and Kim and Kim (2001) utilized artificial neural network techniques for ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with models such as Random Forest or SVM. One major distinction of our research from previous work is that we focus on the predicted probability of default for each sample case, not only on the classification accuracy of each model over the entire sample. Most predictive models in this paper show a classification accuracy of about 70% on the entire sample: the LightGBM model shows the highest accuracy of 71.1% and the logit model the lowest at 69%. However, these results are open to multiple interpretations. In the business context, we have to put more emphasis on minimizing type 2 errors, which cause more harmful operating losses for the guarantee company. Thus, we also compare classification accuracy after splitting the predicted probability of default into ten equal intervals.
When we examine the classification accuracy for each interval, the logit model has the highest accuracy of 100% for the 0~10% interval of predicted default probability but a relatively low accuracy of 61.5% for the 90~100% interval. On the other hand, Random Forest, XGBoost, LightGBM, and DNN show more desirable results: a higher level of accuracy for both the 0~10% and 90~100% intervals, but lower accuracy around the 50% interval. As for the distribution of samples across the predicted probability of default, both the LightGBM and XGBoost models place a relatively large number of samples in the 0~10% and 90~100% intervals. Although the Random Forest model has an advantage in classification accuracy for a small number of cases, LightGBM or XGBoost could be more desirable models since they classify a large number of cases into the two extreme intervals of predicted default probability, even allowing for their relatively low classification accuracy. Considering the importance of type 2 errors and total prediction accuracy, XGBoost and DNN show superior performance; Random Forest and LightGBM show good results; and logistic regression shows the worst performance. However, each predictive model has a comparative advantage under different evaluation standards; for instance, the Random Forest model shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, we could construct more comprehensive ensemble models that contain multiple machine learning classifiers and conduct majority voting to maximize overall performance.
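The interval-wise comparison can be reproduced in outline as follows. The KSURE data is not public, so this sketch uses synthetic data and two of the five models purely to show the decile split of predicted default probability; all names and sizes are assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the guarantee data (features + default flag)
rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {"Logit": LogisticRegression(max_iter=1000),
          "RandomForest": RandomForestClassifier(random_state=0)}

for name, clf in models.items():
    proba = clf.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    out = pd.DataFrame({
        "p_default": proba,
        "correct": (proba >= 0.5) == (y_te == 1),   # classification hit
    })
    # Split the predicted probability of default into ten equal intervals
    out["interval"] = pd.cut(out["p_default"], np.linspace(0, 1, 11),
                             include_lowest=True)
    print(name)
    print(out.groupby("interval", observed=True)["correct"]
             .agg(accuracy="mean", n="size"))
```

Comparing per-interval accuracy together with the number of cases per interval mirrors the paper's point that a model can be preferable even with lower overall accuracy if it pushes many cases into the confident extreme intervals.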

    Analysis of the Shijujils(施主秩), the records on the creation of Buddha statues, of wooden seated Vairocana Buddha Triad of Hwaeomsa Temple (화엄사 목조비로자나삼신불좌상의 조성기 「시주질(施主秩)」 분석)

    • Yoo, Geun-Ja
      • MISULJARYO - National Museum of Korea Art Journal / v.100 / pp.112-138 / 2021
    • This paper analyzes the records titled 'Shijujil (施主秩)' from the Bokjangs of the Rocana and Shakyamuni statues enshrined in the wooden seated Vairocana Buddha Triad, composed of Vairocana (center), Rocana (right), and Shakyamuni (left), at the Daeungjeon Hall of Hwaeomsa Temple in Gurye. The Shijujil from the Shakyamuni statue was recovered through a Bokjang investigation in September 2015 and has been kept in the museum of Hwaeomsa as an undisclosed relic. After the discovery of the Shijujil from the Rocana statue through a Bokjang investigation in July 2020, both Shijujils were officially released only through the special exhibition 'Grand Hwaeomsa Temple in Jirisan Mountain' in September 2021. Existing documents recording the creation of Buddha statues in the 17th century take the form of sheets or rolls; the Shijujils, however, take the form of simple stitched booklets. The Shijujil from Rocana consists of 19 chapters and 38 pages in one book, and the Shijujil from Shakyamuni consists of 11 chapters and 22 pages in one book. The contents of the Shijujils comprise the purpose of the statues' creation, the creation date, the year and place of enshrining, the names of the statues, the people in charge and their roles, the sculptors, the list of items donated, and the list of contributors. In addition, the list of monks staying at Hwaeomsa Temple at that time is also recorded, so the Shijujil is like a time capsule conveying the situation of Hwaeomsa Temple about 400 years ago. According to the records of the Shijujils and the writing on the wooden pedestal of Rocana, work on the Vairocana Triad began in March 1634 (the 12th year of King Injo) and was completed in August of that year, and the statues were enshrined in the Daeungjeon Hall in the fall of the following year. It is very important to confirm that the Vairocana Buddha Triad of Hwaeomsa was created in 1634: since studies on the 17th-century reconstruction of Hwaeomsa Temple and the roles of Byeokam Gakseong have mainly been based on 『湖南道求禮縣智異山大華嚴寺事蹟』, written by the monk Haean in 1636, it had been estimated that the wooden seated Vairocana Buddha Triad was created in 1636; however, it is now known to have been created in 1634. The Shijujils are also a good source of information about Byeokam Gakseong, who played a pivotal role in the reconstruction projects of Hwaeomsa Temple in the 17th century: he led the rebuilding of the East Five-story Stone Pagoda (1630), the creation of the wooden seated Vairocana Buddha Triad (1634), and the production of the Yeongsanhoe Gwaebul (1653, a hanging scroll painting depicting Shakyamuni preaching). It is also very important that the Shijujils can reveal the relationship between Byeokam Gakseong and the royal family of the Joseon Dynasty in the 17th century: they are the first documents ever discovered in which the names of royal family members, such as Uichanggun (Gwang Lee, son of King Seonjo), Ikseong Shin (son-in-law of King Seonjo), and Crown Prince Sohyeon (son of King Injo), are recorded in detail in relation to the production of Buddha statues.
The Shijujils from Rocana and Shakyamuni contain specific information about the production of the wooden seated Vairocana Buddha Triad in the 17th century, such as the year of production, the role of Byeokam Gakseong, and his relationship with the royal family, so they are of great value not only for art history but also for historical studies of Hwaeomsa Temple.

    The Effects on CRM Performance and Relationship Quality of Successful Elements in the Establishment of Customer Relationship Management: Focused on Marketing Approach (CRM구축과정에서 마케팅요인이 관계품질과 CRM성과에 미치는 영향)

    • Jang, Hyeong-Yu
      • Journal of Global Scholars of Marketing Science / v.18 no.4 / pp.119-155 / 2008
    • Customer relationship management (CRM) has been a sustainable competitive edge for many companies. CRM analyzes customer data to design and execute targeted marketing, and analyzes customer behavior to make decisions about products and services, supported by management information systems. It is critical for companies to acquire and retain profitable customers, and how to manage relationships with customers effectively has become an important issue for both academics and practitioners in recent years. However, the existing academic literature and the practical applications of CRM strategies have focused on the technical process and organizational structure of CRM implementation. This limited focus has led to numerous reports of failed CRM projects of various types, and many of these failures are related to the absence of a marketing approach. Identifying success factors and outcomes based on the marketing concept before introducing a CRM project is a pre-implementation requirement. Many researchers have attempted to find the factors that contribute to CRM success, but this research has limitations from a marketing standpoint because it does not explain how marketing-based factors contribute to CRM success. Understanding how to manage relationships with crucial customers effectively based on a marketing approach has become an important topic for both academics and practitioners, yet existing papers have not provided clear antecedent and outcome factors grounded in marketing. This paper attempts to validate whether such marketing factors impact relationship quality and CRM performance from a marketing-oriented perspective. More specifically, marketing-oriented factors comprising market orientation, customer orientation, customer information orientation, and core customer orientation can influence relationship quality (satisfaction and trust) and CRM outcomes (customer retention and customer share). Another major goal of this research is to identify the effect of relationship quality on CRM outcomes, consisting of customer retention and customer share, to show the strength of the relationship between the two factors. Based on a meta-analysis of previous studies, I construct the research model. An empirical study was undertaken to test the hypotheses with data from various companies. The reliability and validity of the measurements were tested using Cronbach's alpha coefficient and principal factor analysis respectively, and seven hypotheses were tested through correlation tests, t-tests, and multiple regression analysis. The first key outcome is a theoretically and empirically sound set of CRM factors (market orientation, customer orientation, customer information orientation, and core customer orientation) from the perspective of marketing. The magnitudes of the ${\beta}$ coefficients among the antecedent factors were not the same: the effects of the marketing-based CRM antecedents on customer trust were significantly confirmed, excluding core customer orientation. Notably, no direct effect of core customer orientation on customer trust was found. This means that customer trust, firmly formed through long-term interaction, is not directly linked to core customer orientation;
the enduring management of these interactions is probably more important for the successful implementation of CRM. The second key result is that the implementation and operation of a successful CRM process based on a marketing approach have a strong positive association with both relationship quality (customer trust and customer satisfaction) and CRM performance (customer retention and customer share). The final key finding, that relationship quality has a strong positive effect on customer retention and customer share, confirms that improvements in customer satisfaction and trust improve accessibility to customers, provide more consistent service, and ensure value for money within the front office, which results in growth of customer retention and customer share. In particular, customer satisfaction and trust, the main components of relationship quality, are found to be positively related to customer retention and customer share, and the interactive management of these main variables plays a key role in connecting the successful antecedents of CRM with the final outcomes of customer retention and share. Based on the results, this paper suggests managerial implications for constructing and executing CRM from a marketing perspective. In general, CRM success can be achieved by recognizing its antecedents and outcomes based on the marketing concept. Implementing marketing-concept-oriented CRM means finding out about customers' purchasing habits, opinions, and preferences, profiling individuals and groups to market more effectively and increase sales, and changing the way a company operates to improve customer service and marketing. Benefiting from CRM is not just a question of investing in the right software but of adapting CRM users to the marketing concept, including market orientation, customer orientation, and customer information orientation. No one denies that CRM is a process or methodology used to develop stronger relationships and is composed of many technological components, but thinking about CRM primarily in technological terms is a big mistake; the more useful way to think about and implement CRM is as a process that brings together many pieces of the marketing concept about customers, marketing effectiveness, and market trends. Finally, the real-world setting in which this research was conducted may enable academics and practitioners to understand the antecedents and outcomes from a marketing perspective more clearly.
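As a rough sketch of the reported measurement and hypothesis-testing steps (Cronbach's alpha for reliability, then multiple regression of relationship quality on the four antecedents), the snippet below uses synthetic survey data; the variable names and effect sizes are assumptions, not the paper's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Reliability of a block of Likert items measuring one construct."""
    k = items.shape[1]
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / total_var)

# Synthetic stand-in for the paper's (non-public) company survey
rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "market_orient":    rng.normal(4, 1, n),
    "customer_orient":  rng.normal(4, 1, n),
    "cust_info_orient": rng.normal(4, 1, n),
    "core_cust_orient": rng.normal(4, 1, n),
})
df["trust"] = (0.3 * df["market_orient"] + 0.3 * df["customer_orient"]
               + 0.2 * df["cust_info_orient"] + rng.normal(0, 1, n))

# Alpha would normally be computed over the items of ONE construct; the
# three columns here are stand-ins for such an item block.
print(cronbach_alpha(df[["market_orient", "customer_orient", "cust_info_orient"]]))

# Relationship quality (trust) regressed on the four marketing antecedents
fit = smf.ols("trust ~ market_orient + customer_orient + cust_info_orient"
              " + core_cust_orient", data=df).fit()
print(fit.params)
```

In this setup, a non-significant coefficient on core_cust_orient would correspond to the paper's finding that core customer orientation has no direct effect on customer trust.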


    An Empirical Study on the Influencing Factors for Big Data Intended Adoption: Focusing on the Strategic Value Recognition and TOE Framework (빅데이터 도입의도에 미치는 영향요인에 관한 연구: 전략적 가치인식과 TOE(Technology Organizational Environment) Framework을 중심으로)

    • Ka, Hoi-Kwang;Kim, Jin-soo
      • Asia pacific journal of information systems / v.24 no.4 / pp.443-472 / 2014
    • To survive in the globally competitive environment, enterprises must be able to solve various problems and find optimal solutions effectively. Big data is perceived as a tool for solving enterprise problems and improving competitiveness through its varied problem-solving and advanced predictive capabilities, and owing to this remarkable capability, implementations of big data systems have increased across many enterprises around the world. Big data is now called the 'crude oil' of the 21st century and is expected to provide competitive superiority. It is in the limelight because, while conventional IT technology has lagged in what it makes possible, big data goes beyond technological possibility and can be used to create new value, such as business optimization and new business creation, through analysis. However, because big data has often been introduced hastily, without considering the strategic value to be derived and achieved through it, companies face difficulties in deriving strategic value and utilizing the data. According to a survey of 1,800 IT professionals from 18 countries worldwide, the percentage of corporations utilizing big data well was only 28%, and many respondents reported difficulties in deriving strategic value and operating through big data. To introduce big data, the strategic value should be identified and environmental factors, such as the corporation's internal and external regulations and systems, should be considered; however, these factors have not been well reflected. The failures turned out to stem from introducing big data by following the IT trend and the surrounding environment, hastily and before the conditions for introduction were well arranged. For a successful introduction, the strategic value obtainable through big data should be clearly understood, and a systematic environmental analysis of applicability is very important; yet corporations consider only the partial achievements and technological aspects obtainable through big data, so successful introductions are not being made. Previous studies show that most big data research has focused on concepts, cases, and practical suggestions without empirical study. The purpose of this study is to provide a theoretically and practically useful implementation framework and strategies for big data systems by conducting a comprehensive literature review, finding the factors that influence successful big data system implementation, and analyzing empirical models. To do this, the elements that can affect the intention to introduce big data were derived by reviewing information systems success factors, strategic value perception factors, environmental factors for information system introduction, and the big data literature; a structured questionnaire was developed; and the questionnaire and statistical analysis were administered to the people in charge of big data inside corporations.
According to the statistical analysis, the strategic value perception factors and the intra-industry environmental factors positively affected the intention to introduce big data. The theoretical, practical, and policy implications derived from the results are as follows. The first theoretical implication is that this study proposes theoretical factors affecting the intention to introduce big data by reviewing strategic value perception, environmental factors, and related precedent studies, and proposes variables and measurement items that were empirically analyzed and verified. The study is meaningful in that it measured the influence of each variable on the introduction intention by verifying the relationships between the independent and dependent variables through a structural equation model. Second, this study defines the independent variables (strategic value perception, environment), the dependent variable (introduction intention), and the moderating variables (type of business and corporate size) for big data introduction intention, and lays a theoretical base for subsequent empirical studies in the big data field by developing measurement items with established reliability and validity. Third, by verifying the significance of the strategic value perception and environmental factors proposed in precedent studies, this study can aid subsequent empirical studies on the factors affecting big data introduction. The practical implications are as follows. First, this study lays an empirical base for the big data field by investigating the cause-and-effect relationships between the strategic value perception factors, the environmental factors, and the introduction intention, and by proposing measurement items with established reliability and validity. Second, this study finds that strategic value perception positively affects the intention to introduce big data, highlighting the importance of strategic value perception. Third, the study proposes that a corporation introducing big data should do so only after precise analysis of its industry's internal environment. Fourth, this study shows that the size and type of business of the corporation should be considered when introducing big data, since the influencing factors differ by size and type of business. The policy implications are as follows. First, more varied utilization of big data is needed. The strategic value of big data can be pursued in the product and service fields, the productivity field, the decision-making field, and so on, and it can be utilized across all business fields; yet the areas that major domestic corporations are considering remain limited to parts of the product and service fields. Accordingly, when introducing big data, corporations will need to review utilization in detail and design their big data systems in a form that maximizes utilization. Second, the study identifies, in the introduction phase, the burden of system introduction costs, the difficulty of utilizing the system, and the lack of trust in supplier corporations.
Since global IT corporations dominate the big data market, the big data introduction of domestic corporations cannot but depend on foreign corporations. Considering that Korea does not have global IT corporations even though it is a world-leading IT country, big data can be seen as a chance to foster world-class corporations, and the government will need to foster such leading corporations through active policy support. Third, corporations lack internal and external professionals for big data introduction and operation. In big data, deriving valuable insight from data matters more than constructing the system itself; for this, talent equipped with academic knowledge and experience in various fields such as IT, statistics, strategy, and management is needed, and such talent should be trained through systematic education. This study lays a theoretical base for empirical studies in big data related fields by identifying and verifying the main variables that affect the intention to introduce big data, and it is expected to provide useful guidelines for the corporations and policy makers who are considering big data implementation.
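The paper verifies its model with structural equations and moderators (type of business, corporate size); a simplified proxy that shows the same logic is an OLS regression with an interaction term. All names, data, and coefficients below are assumptions for illustration, not the paper's measures.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the questionnaire answered by big-data managers
rng = np.random.default_rng(7)
n = 250
df = pd.DataFrame({
    "strategic_value": rng.normal(0, 1, n),   # strategic value perception
    "environment":     rng.normal(0, 1, n),   # internal/external environment
    "large_firm":      rng.integers(0, 2, n), # moderator: corporate size
})
df["adoption_intent"] = (0.5 * df["strategic_value"] + 0.3 * df["environment"]
                         + 0.2 * df["large_firm"] * df["strategic_value"]
                         + rng.normal(0, 1, n))

# The interaction term tests whether corporate size moderates the effect of
# strategic value perception on the intention to introduce big data
fit = smf.ols("adoption_intent ~ strategic_value * large_firm + environment",
              data=df).fit()
print(fit.params)
```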

    Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

    • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
      • Journal of Intelligence and Information Systems / v.26 no.4 / pp.127-148 / 2020
    • A data center is a physical facility for accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure: if a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment and cause enormous damage. Failures in IT facilities are especially irregular because of interdependence, and it is difficult to identify their cause. Previous studies predicting failure in data centers treated each server as a single, isolated state, without assuming that devices interact. In this study, therefore, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and we focus on analyzing complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. On the other hand, the cause of failures occurring within a server is difficult to determine, and adequate prevention has not yet been achieved. In particular, server failures do not occur in isolation: a failing server can trigger failures in other servers or be affected by failures originating elsewhere. In other words, while existing studies analyzed failure on the assumption of single, non-interacting servers, this study assumes that failures have effects between servers. To define the complex failure situation in the data center, failure history data for each piece of equipment in the data center was used. Four major failures are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring in each device are sorted in chronological order, and when a failure occurs in one piece of equipment, any failure occurring in another piece of equipment within 5 minutes is defined as occurring simultaneously. After constructing sequences of the devices that failed at the same time, five devices that frequently failed together within the sequences were selected, and the cases in which the selected devices failed simultaneously were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from the previous state. In addition, unlike the single-server approach, the Hierarchical Attention Network deep learning model structure was used to reflect the fact that the contribution of each server to a complex failure differs; this method increases prediction accuracy by giving more weight to servers with greater impact on the failure. The study began by defining the types of failure and selecting the analysis targets.
In the first experiment, the same collected data was treated as a single-server state and as a multiple-server state, and the two were compared. The second experiment improved the prediction accuracy for complex failures by optimizing the threshold for each server. In the first experiment, under the single-server assumption, three of the five servers were predicted to have no failure even though failures actually occurred; under the multiple-server assumption, all five servers were correctly predicted to have failed. This result supports the hypothesis that servers affect one another, and the study confirms that prediction performance is superior when multiple servers are assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, under the assumption that each server's effect differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study shows that failures whose cause is hard to determine can be predicted from historical data, and presents a model that can predict failures occurring on servers in data centers. The results are expected to help prevent failures in advance.
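A minimal PyTorch sketch of the idea described above: a per-server LSTM over resource time series, followed by an attention layer that weights servers by their estimated impact on the complex failure. The layer sizes, feature counts, and the single shared LSTM are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class HierarchicalFailurePredictor(nn.Module):
    """Hierarchical attention over per-server LSTM states (illustrative)."""
    def __init__(self, n_features=8, hidden=32):
        super().__init__()
        self.server_lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)   # scores each server's summary state
        self.head = nn.Linear(hidden, 1)   # complex-failure probability logit

    def forward(self, x):
        # x: (batch, n_servers, seq_len, n_features) of resource metrics
        b, s, t, f = x.shape
        out, _ = self.server_lstm(x.reshape(b * s, t, f))
        server_state = out[:, -1, :].reshape(b, s, -1)   # last hidden state
        # attention weights: servers with more impact on the failure get
        # more weight, per the paper's motivation
        weights = torch.softmax(self.attn(server_state), dim=1)
        context = (weights * server_state).sum(dim=1)    # weighted summary
        return torch.sigmoid(self.head(context)).squeeze(-1)

model = HierarchicalFailurePredictor()
x = torch.randn(4, 5, 60, 8)   # 4 samples, 5 servers, 60 time steps, 8 metrics
print(model(x).shape)          # per-sample probability of a complex failure
```

Per-server thresholds, as in the second experiment, would then be applied to each server's predicted probability rather than a single global cutoff.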

    Dynamic Limit and Predatory Pricing Under Uncertainty (불확실성하(不確實性下)의 동태적(動態的) 진입제한(進入制限) 및 약탈가격(掠奪價格) 책정(策定))

    • Yoo, Yoon-ha
      • KDI Journal of Economic Policy / v.13 no.1 / pp.151-166 / 1991
    • In this paper, a simple game-theoretic entry deterrence model is developed that integrates both limit pricing and predatory pricing. While there have been extensive studies dealing with predation and limit pricing separately, no study so far has analyzed these closely related practices in a unified framework. Treating each practice as if it were an independent phenomenon is, of course, an analytical necessity to abstract from complex realities; however, welfare analysis based on such a model may give misleading policy implications. By analyzing limit and predatory pricing within a single framework, this paper attempts to shed some light on the effects of interaction between these two frequently cited tactics of entry deterrence. Another distinctive feature of the paper is that limit and predatory pricing emerge, in equilibrium, as rational, profit-maximizing strategies in the model. Until recently, the only conclusion from formal analyses of predatory pricing was that predation is unlikely to take place if every economic agent is assumed to be rational. This conclusion rests upon the argument that predation is costly; that is, it inflicts more losses upon the predator than upon the rival producer, and therefore is unlikely to succeed in driving out the rival, who understands that the price cutting, if it ever takes place, must be temporary. Recently several attempts have been made to overcome this modelling difficulty by Kreps and Wilson, Milgrom and Roberts, Benoit, Fudenberg and Tirole, and Roberts. With the exception of Roberts, however, these studies, though successful in preserving the rationality of players, still share one serious weakness in that they resort to ad hoc, external constraints in order to generate profit-maximizing predation. The present paper uses a highly stylized model of Cournot duopoly and derives the equilibrium predatory strategy without invoking external constraints except the assumption of asymmetrically distributed information. The underlying intuition behind the model can be summarized as follows. Imagine a firm that is considering entry into a monopolist's market but is uncertain about the incumbent firm's cost structure. If the monopolist has low costs, the rival would rather not enter, because it would be difficult to compete with an efficient, low-cost firm; if the monopolist has high costs, the rival will definitely enter the market because it can make positive profits. In this situation, if the incumbent firm unwittingly produces its monopoly output, the entrant can infer the nature of the monopolist's costs by observing the monopolist's price. Knowing this, the high-cost monopolist increases its output to the level that would have been produced by a low-cost firm in an effort to conceal its cost condition. This constitutes limit pricing. The same logic applies when there is a rival competitor in the market: producing the high-cost duopoly output is self-revealing and thus to be avoided, so the firm chooses to produce the low-cost duopoly output, consequently inflicting losses on the entrant or rival producer and thus acting in a predatory manner. The policy implications of the analysis are rather mixed. Contrary to the widely accepted hypothesis that predation is, at best, a negative-sum game, and thus a strategy unlikely to be played from the outset, this paper concludes that predation can be a real occurrence by showing that it can arise as an effective profit-maximizing strategy.
This conclusion alone may imply that the government can play a role in increasing consumer welfare, say, by banning predation or limit pricing. The problem, however, is that it is rather difficult to ascribe any welfare losses to these kinds of entry-deterring practices. The difficulty arises from the fact that if the same practices had been adopted by a low-cost firm, they could not be called entry-deterring; moreover, the high-cost incumbent in the model is doing exactly what the low-cost firm would have done to keep the market to itself. All in all, this paper suggests that a government injunction against limit and predatory pricing should be applied with great care, evaluating each case on its own merits. Hasty generalization may work to the detriment, rather than the enhancement, of consumer welfare.
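A small numerical illustration of the mimicry logic described above, assuming linear demand P = a - Q (the linear form and all numbers are assumptions for illustration, not the paper's specification):

```python
# Cournot duopoly with demand P = a - (q1 + q2); the equilibrium best-response
# quantity for a firm with own cost c against a rival with cost c_r is
#   q* = (a - 2c + c_r) / 3
a = 100.0
c_low, c_high, c_rival = 10.0, 30.0, 20.0

def cournot_q(c_own, c_riv):
    return (a - 2 * c_own + c_riv) / 3

def rival_profit(q_incumbent):
    q_r = (a - q_incumbent - c_rival) / 2   # rival's best response
    price = a - q_incumbent - q_r
    return (price - c_rival) * q_r

q_honest = cournot_q(c_high, c_rival)   # high-cost output: reveals the type
q_mimic = cournot_q(c_low, c_rival)     # low-cost output: conceals the type

print(rival_profit(q_honest))   # ~900: facing a high-cost incumbent is attractive
print(rival_profit(q_mimic))    # ~544: the predatory output squeezes the rival
# With a fixed cost of entry between these two profit levels, mimicking the
# low-cost output deters entry even though the incumbent's cost is high.
```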


    A Study on Maternity Aids Utilization in the Maternal and Child Health and Family Planning (농촌(農村)에 있어서 분만개조요원(分娩介助要員)의 봉사(奉仕)에 의(依)한 모자보건(母子保健)과 가족계획(家族計劃)에 관(關)한 연구(硏究))

    • Yeh, Min-Hae;Lee, Sung Kwan
      • Journal of Preventive Medicine and Public Health / v.5 no.1 / pp.57-95 / 1972
    • This study was conducted to assess the effectiveness of services by maternity aids for maternal and child health in simultaneously improving infant mortality, contraception, and vital registration among expectant mothers in rural Korea, where there is little opportunity for maternal and child health care. It is unrealistic to expect to solve this problem in rural Korea through professional personnel, considering the state of medical facilities and the socioeconomic condition of residents. So we adopted a system of services by maternity aids, formally educated from among indigenous women. After the women were trained in maternal and child health, contraception, and registration for a short period, they were assigned as maternity aids to each village to help with various activities concerning maternal and child health, for example, registration of pregnant women, home visits to check for complications, supplying of delivery kits, attendance at delivery, persuasion toward contraception, and encouragement of registration. Meanwhile, four researchers called on the maternity aids to collect materials concerning vital events, maternal and child health, contraception, and registration, and to give further instruction and supervision as the program proceeded. A. Changes in women's attitudes due to the services of maternity aids. We examined to what extent such a service system for expectant mothers changed the attitudes of women residing in the study area as compared to women in the control area. 1) In the places of birth and death, there were no changes between the last and present infants, in either the study or the control area. 2) In regard to attendants at delivery, there were no changes except for a small percentage of attendance (8%) by maternity aids in the study area. But we expect that more maternity aids could be used as attendants at delivery if they were trained further and the service were explained more fully to residents. 3) Considering the rate of utilization of the sterilized delivery kit, more than 90 percent would likely be used if the delivery kits were supplied at the proper time; there were significant differences in rates between the study and control areas. 4) Judging by the utilization rate of the clinic for prenatal care and well-baby care, such facilities, if installed, would probably be well utilized. 5) Regarding contraception, the rate of approval was as high as 89 percent in the study area, compared to 82 percent in the control area. 6) Considering that the rates of pre- and post-partum acceptance of contraception were 70 percent or more, if women were adequately motivated to use contraception, the government could reach its family planning goals as planned. 7) In vital registration, the rate of birth registration in the study area was somewhat improved compared to that of the control area, while the rate of death registration was not changed at all. Given that the rate of confirmation of vital events by maternity aids was remarkably high, if the registration system were changed to a 'notification' system instead of the formal registration system, registration would improve significantly compared to the present system. B. Effect of the project. With these changes in residents' attitudes, was there a reduction in the infant death rate?
1) It is a very difficult problem to compare the mortality of the last and present infants, because many women do not want to answer accurately about their dead children, especially infants who died within a few days after birth. In this study the data on present deaths come from the maternity aids, who followed up every pregnancy they had recorded to see what had happened; they appear to have very reliable information on what happened in the first few weeks, with follow-up visits to check later changes. From these calculations, when we compared the rate of infant death between the last and present infants, there was a remarkable reduction in the death rate for present infants compared to that of the last children: the former was 30, while the latter was 42. This is the lowest rate that I have ever heard of. The quality of the data can be assessed by comparing the causes of death: the current death rate from communicable diseases, especially tetanus and pneumonia, was much lower than for the last child. 2) Next, how many respondents used contraception after birth because of frequent contact with the maternity aid? Among the registered cases, the respondents showed a tendency to practice contraception at an earlier age and with a smaller number of children. In a comparison of the rate of contraception between the study and control areas, the rate in the former was significantly higher than in the latter. What is more, the proportion favoring smaller numbers of children and the proportion of younger women accepting contraception rose in the study area as compared to the control area. 3) Regarding vital registration, though the rate of registration was gradually improved by the efforts of the maternity aids, it would be better to change the registration system. 4) The crude birth rate in the study area was 22.2, while in the control area it was 26.5; the natural increase rate was 15.4 in the study area and 19.1 in the control area. 5) In an assessment of the efficiency of the maternity aids from a cost-effectiveness viewpoint, the workers in the medium area seemed to be more efficient than those of the other areas.

