• Title/Summary/Keyword: HAS-2


Conclusion of Conventions on Compensation for Damage Caused by Aircraft in Flight to Third Parties (항공운항 시 제3자 피해 배상 관련 협약 채택 -그 혁신적 내용과 배경 고찰-)

  • Park, Won-Hwa
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.24 no.1
    • /
    • pp.35-58
    • /
    • 2009
  • A treaty governing compensation for damage caused by aircraft to third parties on the surface was first adopted in Rome in 1933, but, lacking support from the international aviation community, it was replaced by another convention adopted again in Rome in 1952. Despite the increased compensation amounts and some improvements over the old version, the Rome Convention of 1952, with 49 State parties as of today, is not considered universally accepted. Neither is the Montreal Protocol of 1978 amending it, which has only 12 State parties and excludes major aviation powers such as the USA, Japan, the UK, and Germany. Consequently, it is mostly local laws that apply to compensation cases for surface damage caused by aircraft, contrary to the intention of the countries and people who were involved in drafting the early conventions on surface damage. The 9/11 terrorist attacks proved that even the strongest power in the world, the USA, cannot easily bear all the damage done to third parties by terrorist acts involving aircraft. Accordingly, as a matter of urgency, the International Civil Aviation Organization (ICAO) took up the matter and had it considered among member States for several years through its Legal Committee before proposing it for adoption as new treaties at the Diplomatic Conference held in Montreal, Canada, from 20 April to 2 May 2009. Two treaties based on the drafts of the Legal Committee were adopted in Montreal by consensus: one on compensation for general-risk damage caused by aircraft, the other on compensation for damage from acts of unlawful interference involving aircraft. Both Conventions improve the old Convention/Protocol in many respects. Deleting 'surface' from the definition of third-party damage in the title and contents of the Conventions is the first improvement, because third-party damage is not necessarily limited to the surface of the Earth's soil and sea; mid-air collisions now fall within the scope of application. A steep increase in the compensation limits is another improvement, as is the inclusion of mental injury accompanying bodily injury among the damage to be compensated. In fact, recent jurisprudence in cases involving passengers in aircraft accidents holds aircraft operators liable for such mental injuries. However, the "Terror Convention" covering unlawful interference with aircraft has some unique provisions, some innovative and others less so. While establishing the International Civil Aviation Compensation Fund to supplement, when necessary, damages that exceed the limit covered by aircraft operators through insurance is an innovation, leaving the fate of the Convention to one State Party, in fact implying the USA, harms its universality. Furthermore, given that damage incurred through terrorist acts, wherever they take place and whichever sector or industry they target, falls within the domain of State responsibility, imposing the burden of compensation for terrorist acts in the aviation industry on aircraft operators and passengers/shippers is a source of serious concern for the prospects of the Convention. This is all the more so because the risks of terrorist acts, normally aimed at a few countries owing to the current international political situation, are spread out to many uninvolved countries without any quid pro quo.


Development of a Traffic Accident Prediction Model and Determination of the Risk Level at Signalized Intersection (신호교차로에서의 사고예측모형개발 및 위험수준결정 연구)

  • 홍정열;도철웅
    • Journal of Korean Society of Transportation
    • /
    • v.20 no.7
    • /
    • pp.155-166
    • /
    • 2002
  • Since the 1990s, there has been an increasing number of traffic accidents at intersections, which calls for more urgent measures to ensure safety at intersections. This study set out to analyze the road conditions, traffic conditions, and traffic operation conditions at signalized intersections, to identify the elements that impose obstructions to safety, and to develop a traffic accident prediction model for evaluating the safety of an intersection using the correlation between those elements and accidents. In addition, in developing a traffic accident prediction model for a signalized intersection, the focus was placed on suggesting appropriate traffic safety policies by dealing with the dangerous elements in advance and on enhancing safety at the intersection. The data for the study were collected at intersections located in Wonju City from January to December 2001 and consisted of the number of accidents, the road conditions, the traffic conditions, and the traffic operation conditions at each intersection. The collected data were first statistically analyzed, and the results identified the elements that had close correlations with accidents: the area pattern, the use of land, bus stopping activities, parking and stopping activities on the road, the total volume, the turning volume, the number of lanes, the width of the road, the intersection area, the cycle, the sight distance, and the turning radius. These elements were used in the second correlation analysis; the significance level was 95% or higher for all of them, and there were few correlations between independent variables. The variables that affected the accident rate were the number of lanes, the turning radius, the sight distance, and the cycle, which were used to develop a traffic accident prediction model formula considering their distribution. The model formula was compared with a general linear regression model in terms of accuracy. In addition, the statistics of domestic accidents were investigated to analyze the distribution of the accidents and to classify intersections according to risk level. Finally, the Spearman rank correlation coefficient was applied to the results to see whether the model was appropriate. The coefficient of determination was highly significant with a value of 0.985, and the ranks among the intersections according to risk level were appropriate as well. The actual number of accidents and the predicted one were compared in terms of risk level, and they fell in the same risk level for about 80% of the intersections.
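The ranking validation described above reduces to a Spearman rank correlation between observed and predicted accident counts. A minimal Python sketch, using invented accident counts rather than the paper's Wonju data:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical accident counts for eight signalized intersections;
# the study's actual Wonju data are not reproduced here.
observed = np.array([12, 7, 22, 3, 15, 9, 18, 5])    # counted accidents
predicted = np.array([11, 8, 20, 4, 14, 10, 17, 6])  # model predictions

# Spearman's rank correlation compares the two risk-level rankings;
# a coefficient near 1 means the model orders intersections correctly.
rho, p_value = spearmanr(observed, predicted)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.4f})")
```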

A Study of Guidelines for Genetic Counseling in Preimplantation Genetic Diagnosis (PGD) (착상전 유전진단을 위한 유전상담 현황과 지침개발을 위한 기초 연구)

  • Kim, Min-Jee;Lee, Hyoung-Song;Kang, Inn-Soo;Jeong, Seon-Yong;Kim, Hyon-J.
    • Journal of Genetic Medicine
    • /
    • v.7 no.2
    • /
    • pp.125-132
    • /
    • 2010
  • Purpose: Preimplantation genetic diagnosis (PGD), also known as embryo screening, is a pre-pregnancy technique used to identify genetic defects in embryos created through in vitro fertilization. PGD is considered a means of prenatal diagnosis of genetic abnormalities. PGD is used when one or both genetic parents have a known genetic abnormality; testing is performed on an embryo to determine whether it also carries the genetic abnormality. The main advantage of PGD is the avoidance of selective pregnancy termination, as it imparts a high likelihood that the baby will be free of the disease under consideration. The application of PGD to genetic practices, reproductive medicine, and genetic counseling is becoming a key component of fertility practice because of the need to develop a custom PGD design for each couple. Materials and Methods: In this study, a survey on the contents of genetic counseling in PGD was carried out via direct contact or e-mail with patients and specialists who had experienced PGD during the three months from February to April 2010. Results: A total of 91 persons responded to the survey: 60 patients, 49 of whom had a chromosomal disorder and 11 of whom had a single gene disorder, and 31 PGD specialists. Analysis of the survey results revealed that all respondents were well aware of the importance of genetic counseling in all steps of PGD, including planning, operation, and follow-up. The patient group responded that the possibility of unexpected results (51.7%), genetic risk assessment and recurrence risk (46.7%), reproduction options (46.7%), the procedure and limitations of PGD (43.3%), and information on PGD technology (35.0%) should be included in genetic counseling. In detail, 51.7% of patients wanted to be counseled on the possibility of unexpected results and the recurrence risk, while 46.7% wanted to know their reproduction options. Approximately 96.7% of specialists replied that a non-M.D. genetic counselor is necessary for effective and systematic genetic counseling in PGD, because it is difficult for physicians to offer satisfying information to patients due to a lack of counseling time and of specific knowledge of the disorders. Conclusions: The information from the survey provides important insight into the overall present situation of genetic counseling for PGD in Korea. The survey results demonstrated a general awareness that genetic counseling is essential for PGD, suggesting that appropriate genetic counseling may play an important role in the success of PGD. The establishment of genetic counseling guidelines for PGD may contribute to better planning and management strategies for PGD.

Spatial effect on the diffusion of discount stores (대형할인점 확산에 대한 공간적 영향)

  • Joo, Young-Jin;Kim, Mi-Ae
    • Journal of Distribution Research
    • /
    • v.15 no.4
    • /
    • pp.61-85
    • /
    • 2010
  • Introduction: Diffusion is the process by which an innovation is communicated through certain channels over time among the members of a social system (Rogers 1983). Bass (1969) suggested the Bass model describing the diffusion process. The Bass model assumes that potential adopters of an innovation are influenced by mass media and by word-of-mouth from communication with previous adopters. Various extensions of the Bass model have been proposed. Some of them introduce a third factor affecting diffusion; others propose multinational diffusion models stressing the interactive effects on diffusion among several countries. We add a spatial factor to the Bass model as a third communication factor. Because we cannot control the interaction between markets, we need to consider that diffusion within a certain market can be influenced by diffusion in contiguous markets. The process by which a certain type of retail spreads can be described by the retail life cycle, and the diffusion of retail follows three phases of spatial diffusion: adoption of the innovation happens near the diffusion center first, spreads to the vicinity of the diffusion center, and is then completed in peripheral areas in the saturation stage. So we expect the spatial effect to be important in describing the diffusion of domestic discount stores. We define a spatial diffusion model using the multinational diffusion model and apply it to the diffusion of discount stores. Modeling: To define the spatial diffusion model, we extend the learning model (Kumar and Krishnan 2002) and separate the diffusion process in the diffusion center (market A) from the diffusion process in the vicinity of the diffusion center (market B). The proposed spatial diffusion model is shown in equations (1a) and (1b), where (1a) is the diffusion process in the diffusion center and (1b) is that in the vicinity of the diffusion center: $$S_{i,t}=\left(p_i+q_i\frac{Y_{i,t-1}}{m_i}\right)(m_i-Y_{i,t-1}),\quad i\in\{1,\cdots,I\} \qquad (1a)$$ $$S_{j,t}=\left(p_j+q_j\frac{Y_{j,t-1}}{m_j}+\sum_{i=1}^{I}\gamma_{ij}\frac{Y_{i,t-1}}{m_i}\right)(m_j-Y_{j,t-1}),\quad i\in\{1,\cdots,I\},\; j\in\{I+1,\cdots,I+J\} \qquad (1b)$$ We raise two research questions: (1) the proposed spatial diffusion model is more effective than the Bass model in describing the diffusion of discount stores; (2) the more similar the retail environment of the diffusion center is to that of the contiguous market, the larger the spatial effect of the diffusion center on diffusion in that market. To examine these two questions, we first adopt the Bass model to estimate the diffusion of discount stores. Next, the spatial diffusion model, in which the spatial factor is added to the Bass model, is used to estimate it. Finally, by comparing the Bass model with the spatial diffusion model, we try to find out which model describes the diffusion of discount stores better. In addition, we investigate the relationship between the similarity of retail environments (conceptual distance) and the spatial factor's impact with correlation analysis. Result and Implication: We suggest a spatial diffusion model to describe the diffusion of discount stores. To examine the proposed model, 347 domestic discount stores are used, and we divide the nation into 5 districts: Seoul-Gyeongin (SG), Busan-Gyeongnam (BG), Daegu-Gyeongbuk (DG), Gwangju-Jeonla (GJ), and Daejeon-Chungcheong (DC).

    In the result of the Bass model (I), the estimates of the innovation coefficient (p) and the imitation coefficient (q) are 0.017 and 0.323, respectively, while the estimate of market potential (m) is 384. The result of the Bass model (II) for each district shows that the estimate of the innovation coefficient (p) in SG, 0.019, is the lowest among the 5 areas; this is because SG is the diffusion center. The estimate of the imitation coefficient (q) in BG, 0.353, is the highest. The imitation coefficient in the vicinity of the diffusion center, such as BG, is higher than that in the diffusion center because more information flows through various paths as diffusion progresses. In the result of the spatial diffusion model (IV), we can notice the changes between the coefficients of the Bass model and those of the spatial diffusion model. Except for GJ, the estimates of the innovation and imitation coefficients in Model IV are lower than those in Model II. The changes in the innovation and imitation coefficients are reflected in the spatial coefficient ($\gamma$), from which we can infer that diffusion in the vicinity of the diffusion center is influenced by diffusion in the center. The difference between the Bass model (II) and the spatial diffusion model (IV) is statistically significant, with a $\chi^2$-distributed likelihood ratio statistic of 16.598 (p=0.0023), which implies that the spatial diffusion model is more effective than the Bass model in describing the diffusion of discount stores. So research question (1) is supported. In addition, using correlation analysis, we found a statistically significant relationship between the similarity of retail environments and the spatial effect, so research question (2) is also supported.
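To make equations (1a) and (1b) concrete, the sketch below simulates one diffusion center and one vicinity market over 20 periods. All parameter values are invented for illustration; they are not the estimates reported above.

```python
import numpy as np

# Illustrative parameters (not the paper's estimates): one diffusion
# center A, following (1a), and one vicinity market B, following (1b).
p_a, q_a, m_a = 0.02, 0.35, 200.0   # innovation, imitation, potential in A
p_b, q_b, m_b = 0.01, 0.30, 150.0   # the same parameters for B
gamma = 0.05                        # spatial effect of A's diffusion on B

T = 20
Y_a = np.zeros(T + 1)               # cumulative adopters in A
Y_b = np.zeros(T + 1)               # cumulative adopters in B

for t in range(1, T + 1):
    # (1a): adoption in the center depends only on its own history.
    S_a = (p_a + q_a * Y_a[t - 1] / m_a) * (m_a - Y_a[t - 1])
    # (1b): adoption in the vicinity adds the center's penetration Y_A/m_A.
    S_b = (p_b + q_b * Y_b[t - 1] / m_b
           + gamma * Y_a[t - 1] / m_a) * (m_b - Y_b[t - 1])
    Y_a[t] = Y_a[t - 1] + S_a
    Y_b[t] = Y_b[t - 1] + S_b

print(f"final penetration: A {Y_a[-1] / m_a:.1%}, B {Y_b[-1] / m_b:.1%}")
```

A larger $\gamma$ pulls market B's adoption curve forward in time, which is exactly the spatial effect the likelihood ratio test above finds significant.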

    Robo-Advisor Algorithm with Intelligent View Model (지능형 전망모형을 결합한 로보어드바이저 알고리즘)

    • Kim, Sunwoong
      • Journal of Intelligence and Information Systems
      • /
      • v.25 no.2
      • /
      • pp.39-55
      • /
      • 2019
    • Recently, banks and large financial institutions have introduced many Robo-Advisor products. A Robo-Advisor is a robot that produces an optimal asset allocation portfolio for investors using financial engineering algorithms, without any human intervention. Since its first introduction on Wall Street in 2008, the market has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Since Robo-Advisor algorithms suggest asset allocations to investors, mathematical or statistical asset allocation strategies are applied. The mean-variance optimization model developed by Markowitz is the typical asset allocation model; it is a simple but quite intuitive portfolio strategy in which assets are allocated so as to minimize portfolio risk while maximizing expected portfolio return using optimization techniques. Despite its theoretical background, both academics and practitioners find that the standard mean-variance optimization portfolio is very sensitive to the expected returns calculated from past price data, and corner solutions allocated to only a few assets are often found. The Black-Litterman optimization model overcomes these problems by choosing a neutral Capital Asset Pricing Model equilibrium point: implied equilibrium returns of each asset are derived from the equilibrium market portfolio through reverse optimization. The Black-Litterman model uses a Bayesian approach to combine subjective views on the price forecasts of one or more assets with the implied equilibrium returns, resulting in new estimates of risk and expected returns, which can then produce an optimal portfolio through the well-known Markowitz mean-variance optimization algorithm. If the investor does not have any views on his asset classes, the Black-Litterman optimization model produces the same portfolio as the market portfolio. But what if the subjective views are incorrect? Surveys of the performance of stocks recommended in securities analysts' reports show very poor results, so incorrect views combined with implied equilibrium returns may produce very poor portfolio output for Black-Litterman model users. This paper suggests an objective investor views model based on Support Vector Machines (SVM), which have shown good performance in stock price forecasting. An SVM is a discriminative classifier defined by a separating hyperplane; linear, radial basis, and polynomial kernel functions are used to learn the hyperplanes. Input variables for the SVM are returns, standard deviations, Stochastics %K, and the price parity degree for each asset class. The SVM outputs expected stock price movements and their probabilities, which are used as input variables in the intelligent views model. The stock price movements are categorized into three phases: down, neutral, and up. The expected stock returns form the P matrix, and their probability results are used in the Q matrix. The implied equilibrium return vector is combined with the intelligent views matrix, resulting in the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and the risk parity model are used, and the value-weighted and equal-weighted market portfolios serve as benchmark indexes. We collect 8 KOSPI 200 sector indexes from January 2008 to December 2018, comprising 132 monthly index values. The training period is from 2008 to 2015 and the testing period is from 2016 to 2018. Our suggested intelligent views model, combined with the implied equilibrium returns, produced the optimal Black-Litterman portfolio. Over the out-of-sample period, this portfolio showed better performance than the well-known Markowitz mean-variance optimization portfolio, the risk parity portfolio, and the market portfolio. The total return of the 3-year Black-Litterman portfolio is 6.4%, the highest value; the maximum drawdown is -20.8%, the lowest value; and the Sharpe ratio, which measures return relative to risk, shows the highest value at 0.17. Overall, our suggested views model shows the possibility of replacing subjective analysts' views with an objective view model for practitioners applying Robo-Advisor asset allocation algorithms in real trading.
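The reverse-optimization and view-blending steps described above can be sketched in a few lines. All numbers below (covariance, market weights, the single view) are invented placeholders, and the hand-written view stands in for the paper's SVM-generated P and Q matrices:

```python
import numpy as np

# Illustrative inputs for 3 asset classes; not the paper's KOSPI 200 data.
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])   # covariance of asset returns
w_mkt = np.array([0.5, 0.3, 0.2])        # market-cap weights
delta, tau = 2.5, 0.05                   # risk aversion, scaling factor

# Reverse optimization: implied equilibrium excess returns.
pi = delta * Sigma @ w_mkt

# One view (a stand-in for the SVM output): asset 2 will return 10%.
P = np.array([[0.0, 1.0, 0.0]])          # pick matrix
Q = np.array([0.10])                     # view return
Omega = P @ (tau * Sigma) @ P.T          # view uncertainty (a common choice)

# Bayesian posterior (Black-Litterman) expected returns.
inv_tS = np.linalg.inv(tau * Sigma)
inv_Om = np.linalg.inv(Omega)
mu_bl = np.linalg.inv(inv_tS + P.T @ inv_Om @ P) @ (inv_tS @ pi + P.T @ inv_Om @ Q)

# Feed the posterior returns into unconstrained mean-variance weights.
w_bl = np.linalg.inv(delta * Sigma) @ mu_bl
print("posterior returns:", np.round(mu_bl, 4))
print("portfolio weights:", np.round(w_bl / w_bl.sum(), 4))
```

With no views, the posterior collapses to the implied returns pi and the weights revert to the market portfolio, matching the no-views case noted above.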

    An Empirical Study on Statistical Optimization Model for the Portfolio Construction of Sponsored Search Advertising(SSA) (키워드검색광고 포트폴리오 구성을 위한 통계적 최적화 모델에 대한 실증분석)

    • Yang, Hognkyu;Hong, Juneseok;Kim, Wooju
      • Journal of Intelligence and Information Systems
      • /
      • v.25 no.2
      • /
      • pp.167-194
      • /
      • 2019
    • This research starts from four basic concepts that are confronted when making decisions in keyword bidding: incentive incompatibility, limited information, myopia, and the decision variable. To make these concepts concrete, four framework approaches are designed: a strategic approach for incentive incompatibility, a statistical approach for limited information, alternative optimization for myopia, and a new model approach for the decision variable. The purpose of this research is to propose, through empirical tests, a statistical optimization model for constructing the portfolio of Sponsored Search Advertising (SSA) from the sponsor's perspective, which can be used in portfolio decision making. Previous research to date formulates the CTR estimation model using CPC, Rank, Impression, CVR, etc., individually or collectively as independent variables. However, many of these variables are not controllable in keyword bidding; only CPC and Rank can be used as decision variables in the bidding system. The classical SSA model is designed on the basic assumption that CPC is the decision variable and CTR is the response variable. However, this classical model faces many hurdles in the estimation of CTR. The main problem is the uncertainty between CPC and Rank: in keyword bidding, CPC fluctuates continuously even at the same Rank. This uncertainty raises questions about the credibility of CTR, along with practical management problems. Sponsors make decisions in keyword bids under limited information, and a strategic portfolio approach based on statistical models is necessary. In order to solve the problems of the classical SSA model, the new SSA model frame is designed on the basic assumption that Rank is the decision variable. Rank is proposed as the best decision variable for predicting CTR in many papers, and most search engine platforms provide options and algorithms that make it possible to bid by Rank, so sponsors can participate in keyword bidding with Rank. Therefore, this paper tests the validity of this new SSA model and its applicability to constructing the optimal portfolio in keyword bidding. The research process is as follows: in order to perform the optimization analysis for constructing the keyword portfolio under the new SSA model, this study proposes criteria for categorizing keywords, selects representative keywords for each category, shows the non-linear relationships, screens the scenarios for CTR and CPC estimation, selects the best-fit model through a Goodness-of-Fit (GOF) test, formulates the optimization models, confirms the spillover effects, and suggests a modified optimization model reflecting spillover along with some strategic recommendations. Tests of the optimization models using these CTR/CPC estimation models are empirically performed with the objective functions of (1) maximizing CTR (the CTR optimization model) and (2) maximizing expected profit reflecting CVR (the CVR optimization model). Both the CTR and CVR optimization test results show that the suggested SSA model yields significant improvements and is valid for constructing the keyword portfolio using the CTR/CPC estimation models suggested in this study. However, one critical problem is found in the CVR optimization model: important keywords are excluded from the keyword portfolio because of the myopia of the immediately low profit at present.
In order to solve this problem, a Markov chain analysis is carried out and the concepts of the Core Transit Keyword (CTK) and Expected Opportunity Profit (EOP) are introduced. The revised CVR optimization model is proposed, tested, and shown to be valid for constructing the portfolio. Strategic guidelines and insights are as follows: brand keywords are usually dominant in almost every aspect of CTR, CVR, expected profit, etc. It is found that generic keywords are the CTKs and have spillover potential that might increase consumer awareness and lead consumers to brand keywords; that is why generic keywords should be the focus in keyword bidding. The contributions of the thesis are to propose a novel SSA model based on Rank as the decision variable, to propose managing the keyword portfolio by categories according to the characteristics of keywords, to propose statistical modelling and management based on Rank in constructing the keyword portfolio, and to perform empirical tests, propose new strategic guidelines focusing on the CTK, and propose the modified CVR optimization objective function reflecting the spillover effect instead of the previous expected profit models.
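As an illustration of Rank as the decision variable, the sketch below picks one rank per keyword so that expected clicks are maximized under a daily budget, i.e. a toy CTR optimization model. The CTR/CPC curves, impression counts, and budget are all invented assumptions, not the fitted estimation models of the study:

```python
from itertools import product
import numpy as np

# Invented illustration of a rank-based keyword portfolio; the CTR/CPC
# curves, impressions and budget are assumptions, not the paper's fits.
keywords = ["brand", "generic-1", "generic-2"]
ranks = np.arange(1, 6)                  # candidate ranks 1..5

# Assumed estimated curves: CTR and CPC both fall as the rank worsens.
ctr = {kw: base * 0.08 * ranks ** -0.7
       for kw, base in zip(keywords, [3.0, 1.5, 1.0])}
cpc = {kw: base * 900.0 * ranks ** -0.5
       for kw, base in zip(keywords, [1.2, 1.0, 0.8])}
impressions = {"brand": 5000, "generic-1": 8000, "generic-2": 6000}
budget = 2_000_000                       # daily budget in KRW

# Exhaustive search over the small rank grid (5^3 combinations),
# maximizing clicks subject to the budget constraint.
best = None
for combo in product(ranks, repeat=len(keywords)):
    clicks = sum(impressions[kw] * ctr[kw][r - 1]
                 for kw, r in zip(keywords, combo))
    cost = sum(impressions[kw] * ctr[kw][r - 1] * cpc[kw][r - 1]
               for kw, r in zip(keywords, combo))
    if cost <= budget and (best is None or clicks > best[0]):
        best = (clicks, dict(zip(keywords, map(int, combo))), cost)

clicks, choice, cost = best
print(f"ranks={choice}, clicks={clicks:.0f}, cost={cost:.0f} KRW")
```

A CVR-weighted objective, or an EOP term for the Core Transit Keywords, would slot into the same search by changing the clicks expression.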

    A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

    • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
      • Journal of Intelligence and Information Systems
      • /
      • v.25 no.2
      • /
      • pp.25-38
      • /
      • 2019
    • Selecting high-quality information that meets the interests and needs of users from among overflowing content is becoming more important as content generation continues to grow. In this flood of information, efforts are being made to better reflect the intention of the user in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft also focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is one of the fields expected to benefit from text data analysis, because it constantly generates new information, and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the information flow is vast and new information continuously emerges. However, automatic knowledge extraction faces several practical difficulties. First, it is difficult to build corpora from different fields with the same algorithm, and it is difficult to extract good-quality triples. Second, it becomes harder for people to produce labeled text data as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult owing to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome the limits described above and to improve the semantic performance of stock-related information searching, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike other references, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the effectiveness of the model. From these processes, this study has the following three significances: first, it presents a practical and simple automatic knowledge extraction method that can be applied; second, the possibility of performance evaluation is provided through a simple problem definition; finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports on 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the other 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. After that, using the neural tensor network, the same number of score functions as stocks is trained.
Thus, when a new entity from the testing set appears, we can calculate its score with every score function, and the stock whose function yields the highest score is predicted as the item related to the entity. To evaluate the presented model, we confirm its predictive power and determine whether the score functions are well constructed by calculating the hit ratio for all reports in the testing set. As a result of the empirical study, the presented model shows 69.3% hit accuracy for the testing set, which consists of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at the prediction performance of the model for each stock, only 3 stocks, LG Electronics, KiaMtr, and Mando, show performance far below average; this result may be due to interference effects with other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, that are necessary to search for related information in accordance with the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network, without a learning corpus or word vectors for the field. From the empirical test, we confirm the effectiveness of the presented model as described above. However, some limits remain to be complemented; most notably, the especially poor performance on a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to semantically match new text information with the related stocks.
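The per-stock scoring step can be sketched with the standard neural tensor network form, a bilinear tensor layer plus a linear layer under a tanh nonlinearity. The dimensions and the untrained random parameters below are illustrative assumptions; in the study, one such function is trained per stock and the highest-scoring function indicates the related item:

```python
import numpy as np

# Illustrative NTN score: u^T tanh(e1^T W[1..k] e2 + V [e1; e2] + b).
# Dimensions and the (untrained) random parameters are assumptions.
d, k = 100, 4                                # entity dim, tensor slices
rng = np.random.default_rng(0)

W = rng.normal(scale=0.1, size=(k, d, d))    # bilinear tensor slices
V = rng.normal(scale=0.1, size=(k, 2 * d))   # linear layer
b = np.zeros(k)                              # bias
u = rng.normal(scale=0.1, size=k)            # output weights

def ntn_score(e1: np.ndarray, e2: np.ndarray) -> float:
    """Score how strongly entity e1 relates to stock representation e2."""
    bilinear = np.array([e1 @ W[i] @ e2 for i in range(k)])
    return float(u @ np.tanh(bilinear + V @ np.concatenate([e1, e2]) + b))

# A new one-hot entity from the testing set is scored against each
# stock's trained function; the argmax stock is the predicted match.
entity = np.zeros(d); entity[7] = 1.0        # hypothetical one-hot entity
stock = rng.normal(size=d)                   # hypothetical stock vector
print(f"score = {ntn_score(entity, stock):.4f}")
```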

    Development of New Variables Affecting Movie Success and Prediction of Weekly Box Office Using Them Based on Machine Learning (영화 흥행에 영향을 미치는 새로운 변수 개발과 이를 이용한 머신러닝 기반의 주간 박스오피스 예측)

    • Song, Junga;Choi, Keunho;Kim, Gunwoo
      • Journal of Intelligence and Information Systems
      • /
      • v.24 no.4
      • /
      • pp.67-83
      • /
      • 2018
    • The Korean film industry, which had grown significantly every year, finally exceeded 200 million cumulative admissions in 2013. However, starting in 2015 the industry entered a period of low growth, and it experienced negative growth in 2016. To overcome this difficulty, stakeholders such as production companies, distribution companies, and multiplex chains have attempted to maximize market returns with strategies for predicting market changes and responding to them immediately. Since a film is an experiential product, it is not easy to predict its box office record and the initial number of admissions before release, and the number of admissions fluctuates with a variety of factors after release. Production and distribution companies therefore try to secure a guaranteed number of screens from multiplex chains at the opening of a new release. The multiplex chains, however, tend to set the screening schedule only a week at a time and then determine the number of screenings for the forthcoming week based on the box office record and audience evaluations. Much previous research has dealt with predicting the box office records of films. Early studies attempted to identify the factors affecting the box office record; more recent studies have applied various analytic techniques to the previously identified factors in order to improve prediction accuracy and to explain the effect of each factor, instead of identifying new factors. However, most previous studies are limited in that they used the total number of admissions from opening to the end of the run as the target variable, which makes it difficult to predict and respond to dynamically changing market demand. Therefore, the purpose of this study is to predict the weekly number of admissions of a newly released film, so that stakeholders can respond flexibly and elastically to changes in a film's audience. To that end, we considered the factors affecting the box office used in previous studies and developed new factors not used previously, such as the opening order of movies and the dynamics of sales. With these comprehensive factors, we used machine learning methods, namely Random Forest, Multilayer Perceptron, Support Vector Machine, and Naive Bayes, to predict the cumulative number of visitors from the first to the third week after a film's release. At the first and second weeks, we predicted the cumulative number of visitors for the forthcoming week of a released film, and at the third week we predicted the film's total number of visitors. In addition, we predicted the total number of cumulative visitors at both the first and second weeks using the same factors. As a result, we found that the accuracy of predicting the number of visitors for the forthcoming week was higher than that of predicting the total in all three weeks, and that Random Forest was the most accurate among the machine learning methods we used.
This study has implications in that it 1) comprehensively considered factors affecting the box office record that were scarcely addressed by previous research, such as the weekly audience rating after release, the weekly rank of the film after release, and the weekly sales share after release, and 2) tried to predict and respond to dynamically changing market demand by suggesting models that predict the weekly number of admissions of newly released films, so that stakeholders can respond flexibly and elastically to changes in a film's audience.
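A minimal sketch of the week-ahead prediction step with the best-performing learner, Random Forest. The feature names echo the factors discussed above, but the synthetic data and the hyperparameters are illustrative assumptions, not the study's dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the study's data: one row per film-week.
rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.integers(1, 20, n),      # opening order of the film
    rng.uniform(0, 1, n),        # weekly sales share
    rng.uniform(5, 10, n),       # weekly audience rating
    rng.integers(1, 50, n),      # weekly box-office rank
])
# Next-week cumulative visitors, loosely tied to the features (synthetic).
y = 1e5 + 1.5e6 * X[:, 1] + 2e4 * X[:, 2] + rng.normal(0, 5e4, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)

# R^2 on held-out film-weeks; in the study this comparison is where
# Random Forest beat the MLP, SVM and Naive Bayes alternatives.
print("held-out R^2:", round(model.score(X_te, y_te), 3))
```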

    The Ruling System of Silla to Gangneung Area Judged from Archaeological Resources in 5th to 6th Century (고고자료로 본 5~6세기 신라의 강릉지역 지배방식)

    • Shim, Hyun Yong
      • Korean Journal of Heritage: History & Science
      • /
      • v.42 no.3
      • /
      • pp.4-24
      • /
      • 2009
    • This paper examines archaeological resources that show how Silla entered the Gangneung area, the coastal region along the East Sea that has been excavated most actively. Silla expanded its territories while organizing its system as an ancient state, acquired several independent townships in various regions, and stretched its forces to the East Sea area faster than any other ancient state of the time. In particular, many early relics and heritage sites of Silla have been found in Gangneung, the center of the East Sea area. Many archaeological resources attest to these circumstances and provide brief texts that are valuable for our interpretation of historical facts. In this respect, it was possible for me to examine these resources to answer my question as to why early relics and heritage sites of Silla are found in the Gangneung area. Based on my research on Silla's advance into the Gangneung area, I obtained the following results. How did Silla rule this area after conquering Yeguk in the Gangneung area? After the conquest, Silla attempted indirect rule at first and later adopted a direct ruling system. I divided the indirect ruling period into two phases: introduction and settlement. In detail, Silla's earthenware and stone chamber tombs first appeared in Hasi-dong in the fourth quarter of the 4th Century, and the tombs spread to Chodang-dong in the second quarter of the 5th Century. A belt with dragon-pattern openwork, which seems to date from the second quarter of the 5th Century, was found, telling us that the Gangneung region began receiving rewards from Silla during this time. Thus, the period from the fourth quarter of the 4th Century to the second quarter of the 5th Century is designated, in terms of archaeological findings, as the 1st Phase (Introduction) of indirect rule, when Silla first advanced into the Gangneung area and tolerated independent administration by the conquered. In the third and fourth quarters of the 5th Century, old mound tombs appeared and burials of relics symbolizing power emerged. In the third quarter of the 5th Century, stone chamber tombs were prevalent, but wooden chamber tombs, stone-mounded wooden chamber tombs, and lateral-entrance stone chamber tombs began to emerge. Also, tombs that had been clustered in Hasi-dong and Chodang-dong began to scatter to nearby Byeongsan-dong, Yeongjin-ri, and Bangnae-ri. Iron pots were the symbol of power that emerged at this time. In the fourth quarter of the 5th Century, stone chamber tombs still dominated, but wooden chamber tombs, stone-mounded wooden chamber tombs, and lateral-entrance stone chamber tombs became more popular. More crowns, crown ornaments, big daggers, and belts were bestowed by Silla, mostly in Chodang-dong and Byeongsan-dong. The period from the third to the fourth quarter of the 5th Century is designated, in terms of archaeological findings, as the 2nd Phase (Settlement) of indirect rule. At this time, Silla bestowed items of power on the ruling class of the Gangneung area and gave equal power to the rulers of Chodang-dong and Byeongsan-dong so that they would keep each other in check. However, Silla converted to direct rule once it recognized the Gangneung area as the base for its expedition of conquest to the north. In the first quarter of the 6th Century, old mound tombs disappeared and small and medium-sized mounds appeared in the western inlands and the northern areas.
In this period, the tunnel-entrance stone chamber tombs were large enough for people to enter and were fitted with doors. A cluster of several tunnel-entrance stone chamber tombs was formed in Yeongjin-ri and Bangnae-ri at this time, probably under the influence of Silla's direct rule. In the first quarter of the 6th Century, Silla dispatched officers from the central government to complete the local administration system and replaced the ruling class of Chodang-dong and Byeongsan-dong with that of Silla-friendly Yeongjin-ri and Bangnae-ri, reorganizing the local administration and gaining full control of the Gangneung area.

    Analysis of Korean Dietary Patterns using Food Intake Data - Focusing on Kimchi and Alcoholic Beverages (식품섭취량을 활용한 우리나라 식이 패턴 분석 - 김치류 및 주류 중심으로)

    • Kim, Soo-Hwaun;Choi, Jang-Duck;Kim, Sheen-Hee;Lee, Joon-Goo;Kwon, Yu-Jihn;Shin, Choonshik;Shin, Min-Su;Chun, So-Young;Kang, Gil-Jin
      • Journal of Food Hygiene and Safety
      • /
      • v.34 no.3
      • /
      • pp.251-262
      • /
      • 2019
    • In this study, we analyzed Korean dietary habits with food intake data from the Korea National Health and Nutrition Examination Survey (KNHANES) of the Korea Centers for Disease Control and Prevention, and we proposed a set of management guidelines for future Korean dietary habits. A total of 839 food items (1,419 foods) were analyzed according to the food categories in the "Food Code", the representative food classification system in Korea. The average total daily food intake was 1,585.77 g/day, with raw and processed foods accounting for 858.96 g/day and 726.81 g/day, respectively. Cereal grains contributed the highest proportion of food intake. Among the top 15 consumed food groups, over 90% of subjects consumed cereal grains (99.09%) and root and tuber vegetables (95.80%). According to the analysis by item, rice, Korean cabbage kimchi, apple, radish, egg, chili pepper, onion, wheat, soybean curd, potato, cucumber, and pork were the major (at least 1% of the average daily intake, 15.86 g/day) and frequently consumed (eaten by more than 25% of subjects, 5,168 persons) food items, and Korean spices were at the top of this list. Among kimchi varieties, the intake of Korean cabbage kimchi (64.89 g/day) was the highest. Among alcoholic beverages, intake was highest in the order of beer (63.53 g/day), soju (39.11 g/day), and makgeolli (19.70 g/day), while intake frequency was highest in the order of soju (11.3%), beer (7.2%), and sake (6.6%). Analysis of intake trends over time showed that cereal grains have steadily decreased while beverages have slightly risen. For alcoholic beverage consumption frequency, makgeolli, wine, sake, and black raspberry wine have decreased gradually year by year, and the consumption trend for kimchi has been gradually decreasing as well.
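The core computation above, average daily intake per food group and the share of subjects consuming each item, reduces to grouped aggregations. A minimal pandas sketch with invented rows (the KNHANES microdata are not reproduced here):

```python
import pandas as pd

# Invented example rows; the actual KNHANES intake records cover
# 839 food items (1,419 foods) and are not reproduced here.
records = pd.DataFrame({
    "subject_id": [1, 1, 2, 2, 3],
    "food_group": ["cereal grains", "kimchi", "cereal grains",
                   "alcoholic beverages", "kimchi"],
    "item": ["rice", "Korean cabbage kimchi", "rice", "beer",
             "Korean cabbage kimchi"],
    "intake_g": [280.0, 70.0, 310.0, 160.0, 55.0],
})

n_subjects = records["subject_id"].nunique()

# Average daily intake per food group (g/day over all subjects).
group_intake = records.groupby("food_group")["intake_g"].sum() / n_subjects
print(group_intake.sort_values(ascending=False))

# Share of subjects consuming each item (the intake-frequency measure).
consumers = records.groupby("item")["subject_id"].nunique() / n_subjects
print(consumers.sort_values(ascending=False))
```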

