• Title/Summary/Keyword: Business event


Archival Appraisal of Public Records Regarding Urban Planning in Japanese Colonial Period (조선총독부 공문서의 기록학적 평가 -조선총독부 도시계획 관련 공문서군을 중심으로-)

  • Lee, Seung Il
    • The Korean Journal of Archival Studies
    • /
    • no.12
    • /
    • pp.179-235
    • /
    • 2005
  • This article attempts to appraise the official documents created and issued by the Joseon Governor General's office during the Japanese occupation period from a new perspective based on the macro-appraisal approach developed by Canadian scholars and archival practitioners. The Canadian archival community has recently tended to evaluate the meaning and importance of a document by treating the historical situation and background conditions that produced it as a more important factor than the quality and condition of the document itself. Such an approach requires archivists to decide whether to preserve a document based on the meaning, functions, and status of the entity that produced it, or on the meaning of the documentation practice itself, rather than on the individual document. With regard to appraising the official documents created by the Joseon Governor General's office concerning its city plans, this author established a total of four primary tasks that are crucial in deciding whether a particular theme, event, or ideology should be selected and whether documents involving it should be preserved as important sources on the Korean history of the Japanese occupation period. The four tasks are as follows. First, archivists should study the current and past trends of historical research. Archivists, who are usually not in a position to have comprehensive access to historical details, must consult historians' studies and the trends mirrored in those studies when selecting important historical events and themes. Second, archivists should determine the level of importance of the officials who worked inside the Joseon Governor General's office, since they were the entities that produced the documents; it is natural to assume that the importance of a particular document was determined by the importance (in terms of official functions) of the official who authorized and released it. Third, archivists should be well aware of the inner structure and official functions of the Joseon Governor General's office so that they can produce more appropriate analyses. Fourth, in order to collect historically important documents involving Koreans (the Joseon people), archivists should analyze not only the functions of the Joseon Governor General's office in general but also those areas of the office's business in which Japanese officials and Koreans interacted with each other. Analyzing documents only by their apparent level of importance might lead archivists to miss documents that reflected Koreans' circumstances or related to the general interest of the Korean people. This kind of evaluation should provide the data required to appraise how well the Joseon Governor General's office's city-planning function was documented at the time and how well those records are preserved today, through a comparative study of the office's own appraisals of its records and the current holdings of the National Archives.
The task would also result in a specialized collection strategy, which is urgently needed to establish a well-designed comprehensive archive. We should establish a plan for records that were created by the Joseon Governor General's office but no longer survive, and devise a task model for the primary collecting work that would take place in the future.

Examining the Influence of Science Museum Service Quality on Customer Satisfaction and Revisit Intention - A Case of Gwacheon National Science Museum - (과학관 서비스 품질이 고객만족도 및 재방문 의도에 미치는 영향 분석 - 국립과천과학관을 중심으로 -)

  • Choi, Jung won;Nam, Tae woo;Cho, Jae min
    • Korea Science and Art Forum
    • /
    • v.27
    • /
    • pp.277-288
    • /
    • 2017
  • The number of science museums in Korea expanded from 72 in 2008 to 128 in 2016. This study was motivated by the fact that the government invests a large budget in building science museums, yet more than a quarter of them attract fewer than 50 visitors per day and many are operated inefficiently. The number of visitors is an important factor in improving the efficiency of science museum operation. The purpose of this study is to analyze the relationship between science museum service quality, customer satisfaction, and revisit intention, and to identify where science museums should concentrate their efforts to attract more visitors. Questionnaire items were prepared for the exhibition, education, and culture fields of the Gwacheon National Science Museum, and the results were derived by frequency analysis, reliability analysis, factor analysis, and multiple regression analysis. The findings are as follows. First, in the exhibition field, the quality of exhibition facilities was expected to affect customer satisfaction and revisit intention, but no significant relationship was found. Second, in the education field, all aspects of service quality (operation and contents, instructors, educational facilities and environment) were found to affect customer satisfaction and revisit intention. Third, in the culture (event) field, the quality of the cultural program influenced visitor satisfaction but did not affect revisit intention. Science museums can provide satisfaction to visitors by combining activities such as science and the arts. Despite its limitations, this study suggests continuing efforts to improve visitor satisfaction and revisit intention through future convergence research covering national science museums as a whole.
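
A minimal sketch of the multiple-regression step described in this abstract, assuming hypothetical factor scores derived from the questionnaire; the file name and column names below are illustrative placeholders, not the study's actual data:

```python
# Hypothetical sketch of regressing visitor satisfaction on service-quality
# factor scores, in the spirit of the analysis described above.
import pandas as pd
import statsmodels.api as sm

# survey.csv is assumed to hold factor scores computed from the questionnaire.
df = pd.read_csv("survey.csv")  # placeholder file name

X = df[["exhibition_quality", "education_quality", "culture_program_quality"]]
X = sm.add_constant(X)          # intercept term
y = df["satisfaction"]

model = sm.OLS(y, X).fit()
print(model.summary())          # coefficients, t-values, and p-values
```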

Research on Archive Opening and Sharing Projects of Korean Terrestrial Broadcasters and External Users of Shared Archives : Focusing on the Case of the 5.18 Footage Video Sharing Project 〈May Story(Owol-Iyagi)〉 Contest Organized by KBS (국내 지상파 방송사의 아카이브 개방·공유 사업과 아카이브 이용자 연구 KBS 5.18 아카이브 시민공유 프로젝트 <5월이야기> 공모전 사례를 중심으로)

  • Choi, Hyojin
    • The Korean Journal of Archival Studies
    • /
    • no.78
    • /
    • pp.197-249
    • /
    • 2023
  • This paper focuses on the demand for broadcast and video archive contents by users outside broadcasters, as the archive opening and sharing projects of terrestrial broadcasters have become more active in recent years. The study examined the criteria by which released footage is selected and the methods and processes used for editing when creators make works from broadcasters' footage. To this end, it analyzed the case of the 5.18 footage video sharing project 〈May Story(Owol-Iyagi)〉 contest organized by KBS in 2022, in which KBS released its footage of the May 18 Democratic Uprising and invited external users to create new content using it. Analyzing the works selected as winners of the contest, the study conducted in-depth interviews with the creators of each work. As a result, the following points were identified. Many of the submitted works deal with direct or indirect experiences of the May 18 Democratic Uprising and focus on the impact of this historical event on individuals and on our current society. The study also examined the ways in which broadcasters' footage is used in secondary works, finding that creators use video as a means of sharing historical events, or present video as evidence or metaphor. The findings point to the need for broadcasters to release a wider range of public video materials such as footage of the May 18 Democratic Uprising, to provide richer metadata including copyright information before releasing selected footage, to ensure high-definition, high-fidelity videos that can be used for editing, and to strengthen streaming and downloading functions for user friendliness. Through this, the study explores the future direction of broadcasters' video data opening and sharing projects, and confirms that broadcasters' archival projects can be an alternative means of fulfilling public responsibilities such as strengthening social integration across regions, generations, and classes through moving images.

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings with statistical and machine learning techniques has been a popular research topic. Statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. One major drawback, however, is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited their application to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). SVM in particular is recognized as a promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories; it is simple enough to be analyzed mathematically and achieves high performance in practical applications. SVM implements the structural risk minimization principle and searches to minimize an upper bound of the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many training samples, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have shown that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for binary classification; methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class problems as much as SVM does in binary classification. Second, approximation algorithms (e.g. decomposition methods or the sequential minimal optimization algorithm) can be used to reduce the computation time of multi-class problems, but they can deteriorate classification performance. Third, multi-class prediction suffers from the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers the number of instances in another. Such data sets often cause a default classifier to be built due to the skewed boundary, reducing classification accuracy. SVM ensemble learning is one machine learning approach for coping with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms, and AdaBoost is one of the most widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations; observations incorrectly predicted by previous classifiers are chosen more often than those correctly predicted.
Boosting thus attempts to produce new classifiers that better predict the examples on which the current ensemble performs poorly, which reinforces training on the misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process considers geometric mean-based accuracy and errors over the classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. 10-fold cross-validation was performed three times with different random seeds to ensure that the comparison among the three classifiers does not happen by chance. In each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine sets; the cross-validated folds are thus tested independently for each algorithm. Through these steps, results were obtained for each classifier on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy among individual classifiers, MGM-Boost (52.95%) shows higher prediction accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher prediction accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of each classifier over the 30 folds differs significantly. The results indicate that the performance of MGM-Boost differs significantly from the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
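
The abstract above describes boosting SVM base learners and evaluating them with three repetitions of 10-fold cross-validation, reporting both arithmetic and geometric mean-based accuracy. The sketch below reproduces only that baseline setup with scikit-learn on synthetic placeholder data; it is not the authors' MGM-Boost implementation, and all parameters are illustrative assumptions (scikit-learn >= 1.2 is assumed for the `estimator` argument):

```python
# Baseline sketch: AdaBoost with SVM base learners, evaluated with 3 x 10-fold
# cross-validation; reports arithmetic accuracy and the geometric mean of
# per-class recalls. Synthetic data stands in for the bond-rating features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_classes=5, n_clusters_per_class=1, random_state=0)

clf = AdaBoostClassifier(
    estimator=SVC(kernel="rbf", C=1.0),  # SVC accepts sample_weight, as boosting requires
    n_estimators=50,
    algorithm="SAMME",                   # multi-class boosting on hard predictions
    random_state=0,
)

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=0)
acc, gmean = [], []
for train_idx, test_idx in cv.split(X, y):
    clf.fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    acc.append(accuracy_score(y[test_idx], pred))
    per_class_recall = recall_score(y[test_idx], pred, average=None, zero_division=0)
    gmean.append(float(np.prod(per_class_recall) ** (1.0 / len(per_class_recall))))

print(f"arithmetic-mean accuracy over 30 folds: {np.mean(acc):.4f}")
print(f"geometric-mean accuracy over 30 folds : {np.mean(gmean):.4f}")
```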

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, the period in which K-IFRS was applied in earnest, to predict default risk. The data used in the analysis totaled 10,545 rows and 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used the default event itself as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This addresses the data imbalance problem caused by the scarcity of default events, which has been pointed out as a limitation of the existing methodology, as well as the problem of reflecting the differences in default risk that exist among ordinary (non-defaulting) companies. Because learning was conducted using only corporate information that is also available for unlisted companies, the default risk of unlisted companies without stock price information can be derived appropriately. This makes it possible to provide stable default risk assessment to companies whose default risk is difficult to determine with traditional credit rating models, such as unlisted small and medium-sized companies and startups. Although predicting corporate default risk with machine learning has recently been studied actively, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for calculating default risk, given that a company's default risk information is widely used in the market and sensitivity to differences in default risk is high; strict standards are also required for the calculation method. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data, experience with credit ratings, and changes in future market conditions. This study reduces the bias of individual models by using stacking ensemble techniques that combine various machine learning models, which makes it possible to capture complex nonlinear relationships between default risk and corporate information while retaining the short computation time that is an advantage of machine learning-based default risk prediction models. To calculate the sub-model forecasts used as input data for the Stacking Ensemble model, the training data were divided into seven parts, and the sub-models were trained on the divided sets to produce forecasts. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the Stacking Ensemble model exceeded the predictive power of the Random Forest model, which had the best performance among the single models. Next, to check for statistically significant differences between the Stacking Ensemble model and each individual model, pairs of forecasts were constructed between the Stacking Ensemble model and each individual model.
Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank-sum test was used to check whether the two sets of forecasts making up each pair differed significantly. The analysis showed that the forecasts of the Stacking Ensemble model differed significantly from those of the MLP and CNN models. In addition, this study provides a methodology that allows existing credit rating agencies to adopt machine learning-based default risk prediction, given that traditional credit rating models can also be included as sub-models when calculating the final default probability. The stacking ensemble technique proposed in this study can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will be used as a resource to increase practical adoption by overcoming and improving the limitations of existing machine learning-based models.
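
As a rough illustration of the stacking setup described above (not the authors' code), the sketch below combines a Random Forest and an MLP sub-model through a meta-learner using scikit-learn's StackingRegressor, with 7-fold out-of-fold predictions echoing the paper's seven-way split of the training data; the CNN sub-model and the Merton-based target are replaced by synthetic placeholders:

```python
# Hypothetical sketch of a stacking ensemble for a continuous default-risk
# target; the CNN sub-model from the paper is omitted for brevity.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder features standing in for the 160 financial-statement columns,
# and a synthetic target standing in for the Merton-based default risk.
X, y = make_regression(n_samples=2000, n_features=160, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("mlp", make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(64, 32),
                                           max_iter=1000, random_state=0))),
    ],
    final_estimator=Ridge(),  # meta-model that combines the sub-model forecasts
    cv=7,                     # out-of-fold predictions from 7 splits feed the meta-model
)
stack.fit(X_train, y_train)
print("held-out R^2:", round(stack.score(X_test, y_test), 4))
```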