Title/Summary/Keyword: multiple input multiple

An Implementation Method of the Character Recognizer for the Sorting Rate Improvement of an Automatic Postal Envelope Sorting Machine (우편물 자동구분기의 구분율 향상을 위한 문자인식기의 구현 방법)

  • Lim, Kil-Taek;Jeong, Seon-Hwa;Jang, Seung-Ick;Kim, Ho-Yon
    • Journal of Korea Society of Industrial Information Systems / v.12 no.4 / pp.15-24 / 2007
  • The recognition of postal address images is indispensable for the automatic sorting of postal envelopes. The address image recognition process is composed of three steps: address image preprocessing, character recognition, and address interpretation. The character images extracted in the preprocessing step are forwarded to the character recognition step, in which multiple candidate characters with reliability scores are obtained for each extracted character image. Utilizing those scored character candidates, we obtain the final valid address for the input envelope image through the address interpretation step. The envelope sorting rate depends on the performance of all three steps, among which the character recognition step is particularly important. A good character recognizer is one that produces valid candidates with reliable scores, making the address interpretation step easier. In this paper, we propose a method of generating character candidates with reliable recognition scores. We utilize the existing MLP (multilayer perceptron) neural network of the address recognition system in the current automatic postal envelope sorters as the classifier for each image from the preprocessing step. The MLP is well known to be one of the best classifiers in terms of processing speed and recognition rate. However, false alarms may occur in its recognition results, which makes address interpretation hard. To make address interpretation easier and improve the envelope sorting rate, we propose promising methods for reestimating the recognition score (confidence) of the existing MLP classifier: a method of generating the statistical recognition properties of the classifier, and a method of combining the MLP with a subspace classifier that acts as a confidence reestimator. To confirm the superiority of the proposed method, we used character images from real postal envelopes processed by the sorters in post offices. The experimental results show that the proposed method produces high reliability, in terms of error and rejection rates, for both individual characters and non-characters.
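
A minimal sketch of the confidence-reestimation idea described above, assuming a CLAFIC-style subspace classifier: per-class MLP scores are blended with similarities to per-class PCA subspaces. The blending rule (weight alpha), the basis size k, and all identifiers are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def class_subspace(features, k=4):
    """Top-k principal directions of one class's training feature vectors."""
    X = features - features.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k]                          # rows form an orthonormal basis

def subspace_similarity(x, basis):
    """CLAFIC-style similarity: fraction of x's energy captured by the subspace."""
    proj = basis @ x
    return float(proj @ proj) / float(x @ x)

def reestimate_confidence(mlp_scores, x, class_bases, alpha=0.5):
    """Blend MLP outputs with subspace similarities to reestimate per-class
    confidence (the linear blending rule is an assumed, simple form)."""
    sub = np.array([subspace_similarity(x, B) for B in class_bases])
    return alpha * np.asarray(mlp_scores) + (1.0 - alpha) * sub

# Toy usage: 3 classes of 16-D Gaussian features (illustrative only).
rng = np.random.default_rng(0)
train = [rng.normal(loc=c, size=(100, 16)) for c in (0.0, 1.0, 2.0)]
bases = [class_subspace(f) for f in train]
x = train[1][0]
mlp_scores = [0.2, 0.5, 0.3]               # stand-in for real MLP outputs
print(reestimate_confidence(mlp_scores, x, bases))
```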

Efficiency and Productivity of Seven Large-sized Shipbuilding Firms in Korea (국내 대형조선업계의 효율성 및 생산성 분석)

  • Park, Seok-Ho
    • Journal of Korea Port Economic Association / v.26 no.4 / pp.188-206 / 2010
  • Data Envelopment Analysis (DEA) is an operations research-based method for measuring the performance efficiency of decision-making units that are characterized by multiple inputs and outputs. DEA has been applied successfully as a performance evaluation tool in many fields, but it has not been extensively applied in the shipbuilding industry. This paper applied the input-oriented DEA model and Malmquist indices to seven shipbuilding firms to measure their efficiency and productivity changes during the period from 2004 to 2009. The Malmquist indices are decomposed into three components: pure efficiency change, scale efficiency change, and technical change. The empirical results show the following findings. First, the DEA findings indicate that the main source of inefficiency is scale rather than pure technical inefficiency. Second, the Malmquist indices show an overall decrease in productivity.
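
For readers unfamiliar with DEA, a minimal sketch of the input-oriented CCR efficiency score solved as a linear program; the firm data are purely illustrative, and the Malmquist decomposition is not shown.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0.
    X: (m, n) inputs, Y: (s, n) outputs for n decision-making units."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # variables: [theta, lambdas]
    # inputs:  sum_j lam_j * x_ij - theta * x_i,j0 <= 0
    A_in = np.hstack([-X[:, [j0]], X])
    # outputs: -sum_j lam_j * y_rj <= -y_r,j0
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, j0]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun                              # theta* in (0, 1]

# Toy data: 2 inputs, 1 output, 7 firms (illustrative numbers only).
X = np.array([[20, 30, 40, 25, 35, 50, 45],
              [10, 15, 20, 12, 18, 25, 22]], float)
Y = np.array([[100, 130, 160, 110, 150, 180, 170]], float)
print([round(ccr_efficiency(X, Y, j), 3) for j in range(7)])
```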

The Optimal Turbo Coded V-BLAST Technique in the Adaptive Modulation System corresponding to each MIMO Scheme (적응 변조 시스템에서 각 MIMO 기법에 따른 최적의 터보 부호화된 V-BLAST 기법)

  • Lee, Kyung-Hwan;Ryoo, Sang-Jin;Choi, Kwang-Wook;You, Cheol-Woo;Hong, Dae-Ki;Kim, Dae-Jin;Hwang, In-Tae;Kim, Cheol-Sung
    • Journal of the Institute of Electronics Engineers of Korea TC / v.44 no.6 s.360 / pp.40-47 / 2007
  • In this paper, we propose and analyze an Adaptive Modulation System with an optimal Turbo Coded V-BLAST (Vertical Bell Labs Layered Space-Time) technique that adopts the extrinsic information from a MAP (Maximum A Posteriori) decoder with iterative decoding as a priori probabilities in the two decoding procedures of V-BLAST: ordering and slicing. We also consider and compare the Adaptive Modulation System using the conventional Turbo Coded V-BLAST technique, which simply combines V-BLAST with a Turbo coding scheme, and the Adaptive Modulation System using a Turbo Coded V-BLAST technique decoded by the ML (Maximum Likelihood) decoding algorithm. We examine throughput performance and complexity. The performance comparison shows that the complexity of the proposed decoding algorithm is lower than that of the ML decoding algorithm but higher than that of the conventional V-BLAST decoding algorithm. However, the proposed system achieves a better throughput performance than the conventional system over the whole SNR (Signal to Noise Ratio) range, and a throughput performance close to that of the ML-decoded system. Specifically, simulation shows that the maximum throughput improvement in each MIMO scheme is about 350 kbps, 460 kbps, and 740 kbps, respectively, compared to the conventional system. This suggests that the benefit of the proposed decoding algorithm grows as the number of system antennas increases.
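
For context, a minimal sketch of the conventional V-BLAST detection loop (ordering, nulling, slicing, cancellation) that the proposed scheme modifies by feeding MAP extrinsic information into the ordering and slicing steps; the zero-forcing nulling and QPSK constellation here are assumptions, not the paper's exact setup.

```python
import numpy as np

def vblast_zf_osic(H, y, constellation):
    """Conventional V-BLAST detection: ZF nulling, SNR-based ordering,
    symbol slicing, and successive interference cancellation."""
    y = y.astype(complex).copy()
    n_tx = H.shape[1]
    s_hat = np.zeros(n_tx, dtype=complex)
    remaining = list(range(n_tx))
    for _ in range(n_tx):
        G = np.linalg.pinv(H[:, remaining])                 # ZF nulling matrix
        k = int(np.argmin(np.sum(np.abs(G) ** 2, axis=1)))  # ordering: best layer
        z = G[k] @ y                                        # nulling
        idx = remaining[k]
        s_hat[idx] = constellation[np.argmin(np.abs(constellation - z))]  # slicing
        y -= H[:, idx] * s_hat[idx]                         # cancel detected layer
        remaining.pop(k)
    return s_hat

# Toy 2x2 QPSK example (illustrative only).
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
H = (np.random.randn(2, 2) + 1j * np.random.randn(2, 2)) / np.sqrt(2)
s = qpsk[np.random.randint(0, 4, 2)]
y = H @ s + 0.05 * (np.random.randn(2) + 1j * np.random.randn(2))
print(vblast_zf_osic(H, y, qpsk), s)
```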

Korean Word Sense Disambiguation using Dictionary and Corpus (사전과 말뭉치를 이용한 한국어 단어 중의성 해소)

  • Jeong, Hanjo;Park, Byeonghwa
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.1-13 / 2015
  • As opinion mining in big data applications has been highlighted, a great deal of research on unstructured data has been conducted. Social media services on the Internet generate unstructured or semi-structured data every second, mostly in the natural languages we use in daily life. Many words in human languages have multiple meanings, or senses, which makes it very difficult for computers to extract useful information from such datasets. Traditional web search engines are usually based on keyword search, which can produce incorrect results far from users' intentions. Even though much progress has been made over the years in enhancing the performance of search engines so as to provide users with appropriate results, there is still much room for improvement. Word sense disambiguation can play a very important role in natural language processing and is considered one of the most difficult problems in this area. Major approaches to word sense disambiguation can be classified as knowledge-based, supervised corpus-based, and unsupervised corpus-based approaches. This paper presents a method that automatically generates a corpus for word sense disambiguation by taking advantage of examples in existing dictionaries, avoiding expensive sense-tagging processes. We test the effectiveness of the method with the Naïve Bayes model, a supervised learning algorithm, using the Korean standard unabridged dictionary and the Sejong Corpus. The Korean standard unabridged dictionary has approximately 57,000 sentences; the Sejong Corpus has about 790,000 sentences tagged with both part-of-speech and senses. In our experiments, the Korean standard unabridged dictionary and the Sejong Corpus were evaluated both combined and as separate entities, using cross-validation. Only nouns, the target subjects in word sense disambiguation, were selected: 93,522 word senses among 265,655 nouns, plus 56,914 sentences from related proverbs and examples, were combined into the corpus. The Sejong Corpus was easily merged with the Korean standard unabridged dictionary because it is tagged with the sense indices defined by that dictionary. Sense vectors were formed after the merged corpus was created. Terms used in creating the sense vectors were added to the named entity dictionary of a Korean morphological analyzer. Using the extended named entity dictionary, term vectors were extracted from the input sentences. Given an extracted term vector and the sense vector model built during the preprocessing stage, sense-tagged terms were determined by vector space model-based word sense disambiguation. In addition, this study shows the effectiveness of the corpus merged from the examples in the Korean standard unabridged dictionary and the Sejong Corpus: the experiment shows that better precision and recall are obtained with the merged corpus. This suggests the method can practically enhance the performance of Internet search engines and help derive more accurate meanings of sentences in natural language processing tasks pertinent to search engines, opinion mining, and text mining. The Naïve Bayes classifier used in this study is a supervised learning algorithm based on Bayes' theorem, under the assumption that all attributes are independent given the sense. Even though this assumption is not realistic and ignores correlations between attributes, the Naïve Bayes classifier is widely used because of its simplicity, and in practice it is known to be very effective in many applications such as text classification and medical diagnosis. However, further research needs to be carried out to consider all possible combinations and/or partial combinations of all senses in a sentence. Also, the effectiveness of word sense disambiguation may be improved if rhetorical structures or morphological dependencies between words are analyzed through syntactic analysis.
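
A minimal sketch of the supervised Naïve Bayes disambiguation step described above, trained on (sense, context-words) pairs such as those harvested from dictionary example sentences; the tiny corpus for the ambiguous noun "배" (pear/ship) and all identifiers are illustrative assumptions.

```python
from collections import Counter, defaultdict
import math

def train_nb_wsd(tagged_examples):
    """tagged_examples: list of (sense_id, [context words]) built from
    dictionary example sentences and a sense-tagged corpus."""
    prior = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for sense, words in tagged_examples:
        prior[sense] += 1
        word_counts[sense].update(words)
        vocab.update(words)
    return prior, word_counts, vocab

def disambiguate(context, prior, word_counts, vocab):
    """Pick the sense maximizing log P(sense) + sum log P(word | sense),
    with Laplace smoothing; context words are assumed independent given the sense."""
    total = sum(prior.values())
    best, best_lp = None, -math.inf
    for sense, n in prior.items():
        denom = sum(word_counts[sense].values()) + len(vocab)
        lp = math.log(n / total)
        for w in context:
            lp += math.log((word_counts[sense][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = sense, lp
    return best

# Tiny illustrative corpus for the ambiguous noun "배" (pear / ship).
examples = [("pear", ["과일", "먹다", "달다"]),
            ("ship", ["바다", "항구", "타다"]),
            ("pear", ["나무", "과일"]),
            ("ship", ["바다", "타다"])]
model = train_nb_wsd(examples)
print(disambiguate(["바다", "타다"], *model))   # -> "ship"
```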

Bankruptcy Prediction Modeling Using Qualitative Information Based on Big Data Analytics (빅데이터 기반의 정성 정보를 활용한 부도 예측 모형 구축)

  • Jo, Nam-ok;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.33-56 / 2016
  • Many researchers have focused on developing bankruptcy prediction models that secure enhanced performance using modeling techniques such as statistical methods, including multiple discriminant analysis (MDA) and logit analysis, or artificial intelligence techniques, including artificial neural networks (ANN), decision trees, and support vector machines (SVM). Most bankruptcy prediction models in academic studies have used financial ratios as the main input variables. The bankruptcy of firms is associated with both the firm's financial state and the external economic situation. However, the inclusion of qualitative information, such as the economic atmosphere, has not been actively discussed, despite the fact that exploiting only financial ratios has some drawbacks. Accounting information such as financial ratios is based on past data, and it is usually determined one year before bankruptcy; thus, a time lag exists between the point of closing financial statements and the point of credit evaluation. In addition, financial ratios do not capture environmental factors such as external economic conditions. Therefore, using only financial ratios may be insufficient for constructing a bankruptcy prediction model, because they essentially reflect past corporate internal accounting information while neglecting recent information. Qualitative information must therefore be added to the conventional bankruptcy prediction model to supplement the accounting information. Due to the lack of an analytic mechanism for obtaining and processing qualitative information from various information sources, previous studies have made little use of qualitative information. Recently, however, big data analytics techniques such as text mining have been drawing much attention in academia and industry, as an increasing amount of unstructured text data becomes available on the web. A few previous studies have sought to adopt big data analytics in business prediction modeling. Nevertheless, the use of qualitative information on the web for business prediction modeling is still in its early stages, restricted to limited applications such as stock prediction and movie revenue prediction. Thus, it is necessary to apply big data analytics techniques such as text mining to various business prediction problems, including credit risk evaluation. Analytic methods are required for processing qualitative information represented in unstructured text form, due to the complexity of managing and processing unstructured text data. This study proposes a bankruptcy prediction model for Korean small- and medium-sized construction firms using both quantitative information, such as financial ratios, and qualitative information acquired from economic news articles. The performance of the proposed method depends on how well the qualitative information is transformed into quantitative information suitable for incorporation into the bankruptcy prediction model. We employ big data analytics techniques, especially text mining, as a mechanism for processing the qualitative information. A sentiment index is provided at the industry level, extracted from a large amount of text data, to quantify the external economic atmosphere represented in the media. The proposed method involves keyword-based sentiment analysis using a domain-specific sentiment lexicon to extract sentiment from economic news articles. The generated sentiment lexicon is designed to represent sentiment for the construction business by considering the relationship between an occurring term and the actual economic condition of the industry, rather than the inherent semantics of the term. The experimental results show that incorporating qualitative information based on big data analytics into the traditional, accounting-based bankruptcy prediction model is effective for enhancing predictive performance. The sentiment variable extracted from economic news articles had an impact on corporate bankruptcy. In particular, a negative sentiment variable improved the accuracy of corporate bankruptcy prediction, because the bankruptcy of construction firms is sensitive to poor economic conditions. The bankruptcy prediction model using qualitative information based on big data analytics contributes to the field in that it reflects not only relatively recent information but also environmental factors, such as external economic conditions.
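
A minimal sketch of a keyword-based, industry-level sentiment index of the kind described above; the lexicon entries, polarity values, and aggregation rule are illustrative assumptions, not the paper's domain-specific lexicon.

```python
# Tiny domain lexicon: construction-related terms mapped to polarities
# (all entries and weights are illustrative assumptions).
lexicon = {"수주": +1, "호황": +1, "회복": +1,      # positive for construction
           "부도": -1, "미분양": -1, "침체": -1}    # negative for construction

def article_sentiment(tokens, lexicon):
    """Mean lexicon polarity over an article's tokens; 0 if no match."""
    scores = [lexicon[t] for t in tokens if t in lexicon]
    return sum(scores) / len(scores) if scores else 0.0

def industry_index(articles, lexicon):
    """Industry-level sentiment index: mean article sentiment per period."""
    return sum(article_sentiment(a, lexicon) for a in articles) / len(articles)

# Three tokenized news snippets for one period (illustrative only).
news = [["건설", "수주", "회복"], ["미분양", "침체", "심화"], ["부도", "위기"]]
print(industry_index(news, lexicon))
```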

Predicting stock movements based on financial news with systematic group identification (시스템적인 군집 확인과 뉴스를 이용한 주가 예측)

  • Seong, NohYoon;Nam, Kihwan
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.1-17 / 2019
  • Because stock price forecasting is an important issue both academically and practically, research on stock price prediction has been actively conducted. Stock price forecasting research can be classified by whether it uses structured or unstructured data. With structured data, such as historical stock prices and financial statements, past studies usually applied technical and fundamental analysis. In the big data era, the amount of information has increased rapidly, and artificial intelligence methodologies that can extract meaning by quantifying textual information, an unstructured data type that accounts for a large share of this information, have developed quickly. With these developments, many attempts are being made to predict stock prices from online news by applying text mining. The methodology adopted in many papers is to forecast a stock's price using news about the target company. However, according to previous research, not only a target company's own news but also news about related companies can affect its stock price. Finding highly relevant companies is not easy, though, because of market-wide effects and random signals. Existing studies have therefore identified relevant companies primarily through pre-determined international industry classification standards. However, recent research shows that homogeneity within the sectors of the global industry classification standard varies, so forecasting stock prices by taking all sector members together, without selecting only the truly relevant companies, can hurt predictive performance. To overcome this limitation, we are the first to combine random matrix theory with text mining for stock prediction. When the dimension of the data is large, the classical limit theorems are no longer suitable because statistical efficiency is reduced; a simple correlation analysis of the financial market therefore does not reveal the true correlations. To solve this issue, we adopt random matrix theory, which is mainly used in econophysics, to remove market-wide effects and random signals and find the true correlations between companies. Using the true correlations, we perform cluster analysis to find relevant companies. Based on the clustering, we use the multiple kernel learning algorithm, an ensemble of support vector machines, to incorporate the effects of the target firm and its relevant firms simultaneously, with each kernel assigned to predict stock prices from the financial news features of the target firm or one of its relevant firms. The results of this study are as follows. (1) Following the existing research flow, we confirmed that using news from relevant companies is an effective way to forecast stock prices. (2) Identifying relevant companies in the wrong way can lower AI prediction performance. (3) The proposed approach with random matrix theory performs better than previous studies when cluster analysis is based on the true correlations, obtained by removing market-wide effects and random signals. The contributions of this study are as follows. First, it shows that random matrix theory, used mainly in econophysics, can be combined with artificial intelligence to produce good methodologies; this suggests it is important not only to develop AI algorithms but also to adopt physical theory, and it extends existing research that integrated artificial intelligence with complex systems theory through transfer entropy. Second, this study stresses that finding the right companies in the stock market is an important issue, suggesting that how the input values are theoretically selected matters as much as the artificial intelligence algorithm itself. Third, we confirmed that firms grouped under the Global Industry Classification Standard (GICS) may have low relevance to one another, and suggest that relevance should be defined theoretically rather than simply read off the GICS.
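
A minimal sketch of the random-matrix-theory filtering step, assuming the standard econophysics recipe: eigenvalues of the correlation matrix inside the Marchenko-Pastur band are treated as random noise, and the largest eigenvalue as the market-wide mode; both are discarded before clustering. The paper's exact eigencomponent selection may differ.

```python
import numpy as np

def rmt_clean_correlation(returns):
    """Filter a correlation matrix with random matrix theory: drop the
    market mode (largest eigenvalue) and the random bulk (eigenvalues
    inside the Marchenko-Pastur band), then rebuild the matrix."""
    T, N = returns.shape
    R = (returns - returns.mean(0)) / returns.std(0)   # standardized returns
    C = np.corrcoef(R, rowvar=False)
    lam, V = np.linalg.eigh(C)                         # ascending eigenvalues
    q = N / T
    lam_max = (1 + np.sqrt(q)) ** 2                    # MP upper edge, sigma^2 = 1
    keep = lam > lam_max                               # above the random bulk
    keep[-1] = False                                   # discard the market mode
    C_clean = (V[:, keep] * lam[keep]) @ V[:, keep].T
    np.fill_diagonal(C_clean, 1.0)
    return C_clean

# Toy example: 60 firms, 500 daily returns (synthetic); the cleaned matrix
# would then feed a cluster analysis to find relevant companies.
returns = np.random.randn(500, 60)
print(rmt_clean_correlation(returns).shape)            # (60, 60)
```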

Rare Earth Elements (REE)-bearing Coal Deposits: Potential of Coal Beds as an Unconventional REE Source (함희토류 탄층: 비전통적 희토류 광체로서의 가능성에 대한 고찰)

  • Choi, Woohyun;Park, Changyun
    • Economic and Environmental Geology / v.55 no.3 / pp.241-259 / 2022
  • In general, REE have been produced by mining conventional deposits, such as carbonatite or clay-hosted REE deposits. However, because of the recent increase in demand for REE in modern industries, unconventional REE deposits have emerged as a necessary research topic. Among unconventional REE sources, REE-bearing coal deposits have recently been receiving attention. R-types generally have detrital origins in bauxite deposits and show LREE-enriched REE patterns. Tuffaceous-types are formed by syngenetic volcanic activity and the subsequent input of volcanic ash into the basin; this type shows a characteristic occurrence of detrital volcanic ash-derived minerals and authigenic phosphorous minerals concentrated in a narrow horizon between coal seams and tonstein layers, and its REE patterns are generally flat. Hydrothermal-types can be formed by epigenetic inflow of REE originating from granitic intrusions; the occurrence of authigenic halogen-bearing phosphorous minerals and water-bearing minerals is characteristic of this type, which generally shows HREE-enriched REE patterns. Each type of REE-bearing coal deposit may form by an independent genesis, but most REE-bearing coal deposits with high REE concentrations have multiple geneses. In the case of the US, high-purity rare earth oxides (REO) have been produced from REE-bearing coals and their byproducts in pilot plants since 2018, with the goal of supplying about 7% of national REE demand. Among the coal deposits in Korea, lignite layers found in the Gyungju-Yeongil coal fields show the coexistence of tuff layers and coal seams; they also occur in Tertiary basins, and weak compaction and coalification may have produced high-REE tuffaceous-type coal deposits. Thus, detailed geological research on and exploration of domestic coal deposits are required.

Assessment of water supply reliability in the Geum River Basin using univariate climate response functions: a case study for changing instreamflow managements (단변량 기후반응함수를 이용한 금강수계 이수안전도 평가: 하천유지유량 관리 변화를 고려한 사례연구)

  • Kim, Daeha;Choi, Si Jung;Jang, Su Hyung;Kang, Dae Hu
    • Journal of Korea Water Resources Association / v.56 no.12 / pp.993-1003 / 2023
  • Due to increasing greenhouse gas emissions, the global mean temperature has risen by 1.1℃ compared to pre-industrial levels, and significant changes are expected in the functioning of water supply systems. In this study, we assessed the impacts of climate change and instreamflow management on water supply reliability in the Geum River basin, Korea. We proposed univariate climate response functions, in which mean precipitation and potential evaporation are coupled into a single explanatory variable, to assess the impacts of climate stress on multiple water supply reliabilities. To this end, natural streamflows were generated for the 19 sub-basins with the conceptual GR6J model. The simulated streamflows were then input into the Water Evaluation And Planning (WEAP) model, whose dynamic optimization allowed us to assess water supply reliability against the 2020 water demand projections. Results showed that, when minimizing the water shortage of the entire river basin under the 1991-2020 climate, water supply reliability was lowest in the Bocheongcheon among the sub-basins. In a scenario where the priority of instreamflow maintenance is raised to equal that of municipal and industrial water use, water supply reliability in the Bocheongcheon, Chogang, and Nonsancheon sub-basins decreased significantly. Stress tests with 325 sets of climate perturbations showed that water supply reliability in these three sub-basins decreased considerably under all climate stresses, while the sub-basins connected to large infrastructure did not change significantly. Combining the 2021-2050 climate projections with the stress test results, water supply reliability in the Geum River basin is expected to generally improve; however, if the priority of instreamflow maintenance is raised, water shortages are expected to worsen in geographically isolated sub-basins. We suggest that a climate response function built on a single explanatory variable can be used to assess climate change impacts on many sub-basins' performance simultaneously.
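
A minimal sketch of a univariate climate response function of the kind proposed above: stress-test outputs are regressed on a single coupled climate variable. The aridity-style coupling (precipitation change divided by PET change), the polynomial form, and all numbers are illustrative assumptions.

```python
import numpy as np

# Hypothetical stress-test results: each entry is one climate perturbation.
# delta_p: mean-precipitation multiplier, delta_pet: PET multiplier,
# rel: water supply reliability from the simulation run (all illustrative).
rng = np.random.default_rng(0)
delta_p = rng.uniform(0.7, 1.3, 325)
delta_pet = rng.uniform(0.9, 1.2, 325)
rel = np.clip(0.95 + 0.4 * (delta_p - 1) - 0.3 * (delta_pet - 1)
              + rng.normal(0, 0.01, 325), 0, 1)

# Couple the two climate drivers into one explanatory variable
# (an aridity-style ratio here; the paper's exact coupling is not specified).
x = delta_p / delta_pet

# Univariate climate response function: reliability as a polynomial in x.
coef = np.polyfit(x, rel, deg=2)
response = np.poly1d(coef)

# Evaluate a projected 2021-2050 climate (illustrative multipliers).
print(response(1.05 / 1.02))
```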

Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model (감정예측모형의 성과개선을 위한 Support Vector Regression 응용)

  • Kim, Seongjin;Ryoo, Eunchung;Jung, Min Kyu;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.185-202 / 2012
  • Since the value of information has been recognized in the information society, the collection and usage of information have become important. A facial expression, like an artistic painting, contains a wealth of information and can be described in a thousand words. Following this idea, there have recently been a number of attempts to provide customers and companies with intelligent services that perceive human emotions through facial expressions. For example, MIT Media Lab, a leading organization in this research area, has developed a human emotion prediction model and applied it to commercial business. In academia, conventional methods such as Multiple Regression Analysis (MRA) and Artificial Neural Networks (ANN) have been applied to predict human emotion in prior studies. However, MRA is generally criticized for its low prediction accuracy, which is inevitable since MRA can only explain a linear relationship between the dependent variable and the independent variables. To mitigate the limitations of MRA, some studies, like Jung and Kim (2012), have used ANN as an alternative and reported that ANN generated more accurate predictions than statistical methods like MRA. However, ANN has also been criticized for overfitting and for the difficulty of network design (e.g., setting the number of layers and the number of nodes in the hidden layers). Against this background, we propose a novel model using Support Vector Regression (SVR) to increase prediction accuracy. SVR is an extension of the Support Vector Machine (SVM) designed to solve regression problems. The model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data that is close (within a threshold ε) to the model prediction. Using SVR, we built a model that measures the levels of arousal and valence from facial features. To validate the usefulness of the proposed model, we collected data on facial reactions to appropriate visual stimuli and extracted features from the data. Preprocessing steps were then taken to choose statistically significant variables. In total, 297 cases were used for the experiment. As comparative models, we also applied MRA and ANN to the same data set. For SVR, we adopted the ε-insensitive loss function and a grid search to find the optimal values of the parameters C, d, σ², and ε. For ANN, we adopted a standard three-layer backpropagation network with a single hidden layer. The learning rate and momentum of the ANN were set to 10%, and we used the sigmoid function as the transfer function of the hidden and output nodes. We repeated the experiments, varying the number of nodes in the hidden layer over n/2, n, 3n/2, and 2n, where n is the number of input variables. The stopping condition for the ANN was 50,000 learning events, and we used MAE (Mean Absolute Error) as the performance measure. From the experiment, we found that SVR achieved the highest prediction accuracy on the hold-out data set compared to MRA and ANN. Regardless of the target variable (the level of arousal, or the level of positive/negative valence), SVR showed the best performance on the hold-out set. ANN also outperformed MRA, but showed considerably lower prediction accuracy than SVR for both target variables. The findings of our research are expected to be useful to researchers and practitioners who wish to build models for recognizing human emotions.
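
A minimal sketch of the SVR setup described above, using scikit-learn's RBF-kernel SVR with a grid search over C, the kernel width, and ε, scored by MAE; the synthetic data stand in for the facial-feature variables and are assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import mean_absolute_error

# Illustrative stand-ins for the paper's data: facial-feature vectors (X)
# and an arousal level (y); names, shapes, and values are assumptions.
rng = np.random.default_rng(0)
X = rng.normal(size=(297, 10))
y = X @ rng.normal(size=10) + rng.normal(scale=0.1, size=297)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Grid search over C, the RBF width (gamma plays the role of 1/(2*sigma^2)),
# and epsilon of the eps-insensitive loss, mirroring the paper's search.
grid = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10, 100],
                "gamma": [0.01, 0.1, 1],
                "epsilon": [0.01, 0.1, 0.5]},
    scoring="neg_mean_absolute_error",
    cv=5,
)
grid.fit(X_tr, y_tr)

# MAE on the hold-out set, the paper's comparison measure.
print(mean_absolute_error(y_te, grid.predict(X_te)))
```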

Excavation of Kim Jeong-gi and Korean Archeology (창산 김정기의 유적조사와 한국고고학)

  • Lee, Ju-heun
    • Korean Journal of Heritage: History & Science / v.50 no.4 / pp.4-19 / 2017
  • Kim Jeong-gi (pen name: Changsan; Mar. 31, 1930 - Aug. 26, 2015) made a major breakthrough in the history of cultural property excavation in Korea. In 1959, he began to develop an interest in cultural heritage after starting work at the National Museum of Korea. For about thirty years, until he retired from the National Research Institute of Cultural Heritage in 1987, he devoted his life to the excavation of the country's historical relics and artifacts and compiled countless records about them, and he continued striving to identify the unique value and meaning of Korea's cultural heritage in universities and excavation organizations until he passed away in 2015. Changsan spearheaded all of Korea's monumental archaeological excavations and research. He is widely known at home and abroad as a scholar of Korean archeology, particularly of its early years as an academic discipline, and as such has had considerable influence on its development. Although his many activities and roles are meaningful in terms of the country's archaeological history, there are nevertheless limits to his contributions. The Deoksugung Palace period (1955-1972), when the National Museum of Korea was situated in Deoksugung Palace, is considered a time of great significance for Korean archeology, as relics with diverse characteristics were researched during this period. Changsan actively participated in archeological surveys of prehistoric shell mounds and dwellings, conducted surveys of historical relics, measured many historical sites, and took charge of photographing and drawing the relics. He put to good use all the excavation techniques he had learned in Japan, and his countrywide archaeological surveys are highly regarded in terms of academic history as well. What particularly sets his perspectives apart in archaeological terms is that he raised the possibility of underwater tombs in ancient times and coined the term "Haemi Culture" as part of a theory of local culture aimed at furthering understanding of Bronze Age cultures in Korea. His output was simply breathtaking. In 1969, the National Research Institute of Cultural Heritage (NRICH) was founded and Changsan was appointed as its head. Despite the many difficulties he faced in running the institute with limited financial and human resources, he gave everything he had to research and field studies of the brilliant cultural heritage that Korea has preserved for so long. Changsan succeeded in restoring Bulguksa Temple, and followed this up with the successful excavation of the Cheonmachong and Hwangnamdaechong Tombs in Gyeongju. He then explored the Hwangnyongsa Temple site, Bunhwangsa Temple, and the Mireuksa Temple site in order to systematically evaluate the Buddhist culture and structures of the Three Kingdoms period. We can safely say that the large excavation projects he organized and carried out at that time not only laid the foundations of Korean archeology but also made significant contributions to studies in related fields. Above all, in terms of the development of Korean archeology, the achievements he produced with exceptional passion during this period are almost too numerous to mention; they include his systematization of various excavation methods, cultivation of archaeologists, popularization of archeological excavations, formalization of survey records, and promotion of data disclosure. On the other hand, although this "Excavation King" devoted himself to excavations, kept precise records, and paid keen attention to every detail, he failed to overcome the limitations of his era in defining the nature of cultural remains and interpreting historical sites and structures. Despite his many roles in Korean archeology, the fact that he left behind a controversy over the identity of the occupant of the Hwangnamdaechong Tomb remains a blemish on his otherwise stellar reputation.