• Title/Summary/Keyword: additional information


Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.69-94
    • /
    • 2017
  • Recently, growing demand for big data analysis has driven vigorous development of related technologies and tools. At the same time, advances in IT and the increasing penetration of smart devices are producing enormous amounts of data. As a result, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis continue to increase; big data analysis is therefore expected to become even more important across industries for the foreseeable future. Big data analysis has generally been performed by a small number of experts and delivered to those who request it. However, rising interest in big data analysis has stimulated computer programming education and the development of many data analysis programs. Accordingly, the entry barriers to big data analysis are gradually falling and analysis technology is spreading, so big data analysis is increasingly expected to be performed by the demanders of analysis themselves. Along with this, interest in various kinds of unstructured data, and especially in text data, is continually increasing. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis are being utilized in many fields. Text mining is a concept that embraces the various theories and techniques for text analysis. Among the many text mining techniques applied to diverse research purposes, topic modeling is one of the most widely used and studied. Topic modeling extracts the major issues from a large set of documents, identifies the documents that correspond to each issue, and provides the identified documents as clusters. It is regarded as a very useful technique in that it reflects the semantic elements of documents. 
Traditional topic modeling is based on the distribution of key terms across the entire document collection, so the entire collection must be analyzed at once to identify the topic of each document. This makes the analysis time-consuming when topic modeling is applied to a large number of documents, and it creates a scalability problem: processing time increases sharply as the number of analysis objects grows. The problem is particularly noticeable when documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method enables topic modeling over large document collections with limited system resources and can improve processing speed. It can also significantly reduce analysis time and cost, since documents can be analyzed in each location without first combining them. Despite these advantages, however, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear; local topics can be identified for each document, but global topics cannot. Second, a method for measuring the accuracy of such a divided approach needs to be established: assuming that the global topics are the ideal answer, the deviation of the local topics from the global topics must be measured. Because of these difficulties, this approach has not been studied sufficiently compared with other work on topic modeling. In this paper, we propose a topic modeling approach that solves these two problems. 
First, we divide the entire document cluster (the global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by checking whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirm that the proposed methodology can provide results similar to topic modeling over the entire collection, and we propose a reasonable method for comparing the results of the two approaches.
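The local-to-global topic mapping described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: topics are represented as term-distribution vectors, each local topic is assigned to its most similar RGS/global topic by cosine similarity, and the four-word vocabulary and the vectors are invented for the example.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def map_local_to_global(local_topics, global_topics):
    """Assign each local topic to the most similar global (RGS) topic."""
    mapping = {}
    for i, lt in enumerate(local_topics):
        sims = [cosine(lt, gt) for gt in global_topics]
        mapping[i] = int(np.argmax(sims))
    return mapping

# Toy term-distribution vectors over a 4-word vocabulary (illustrative only).
global_topics = np.array([[0.7, 0.1, 0.1, 0.1],   # topic G0: word 0 dominant
                          [0.1, 0.1, 0.7, 0.1]])  # topic G1: word 2 dominant
local_topics  = np.array([[0.6, 0.2, 0.1, 0.1],   # resembles G0
                          [0.1, 0.2, 0.6, 0.1]])  # resembles G1
print(map_local_to_global(local_topics, global_topics))  # {0: 0, 1: 1}
```

With such a mapping in hand, the accuracy question in the abstract reduces to counting how many documents land under the same topic in the global and mapped-local results.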

A Study on the Establishment of Comparison System between the Statement of Military Reports and Related Laws (군(軍) 보고서 등장 문장과 관련 법령 간 비교 시스템 구축 방안 연구)

  • Jung, Jiin;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.109-125
    • /
    • 2020
  • The Ministry of National Defense is pushing the Defense Acquisition Program to build strong defense capabilities, and it spends more than 10 trillion won annually on defense improvement. Because the Defense Acquisition Program is directly related to the security of the nation as well as the lives and property of the people, it must be carried out very transparently and efficiently by experts. However, the excessive diversification of laws and regulations related to the Defense Acquisition Program has made it challenging for many working-level officials to carry out the program smoothly; many reportedly discover relevant regulations they were unaware of only after their work is already under way. In addition, statutory statements related to the Defense Acquisition Program can cause serious issues if even a single expression within a sentence is wrong. Despite this, efforts to establish a sentence comparison system that corrects such issues in real time have been minimal. Therefore, this paper proposes an implementation plan for a "Comparison System between the Statements of Military Reports and Related Laws" that uses a Siamese Network-based artificial neural network, a model from the field of natural language processing (NLP), to measure the similarity between sentences likely to appear in Defense Acquisition Program-related documents and sentences from related statutory provisions, to determine and classify the risk of illegality, and to make users aware of the consequences. Various artificial neural network models (Bi-LSTM, Self-Attention, D_Bi-LSTM) were studied using 3,442 pairs of "Original Sentence" (described in actual statutes) and "Edited Sentence" (edited sentences derived from an "Original Sentence"). 
Among the many statutes related to the Defense Acquisition Program, the DEFENSE ACQUISITION PROGRAM ACT, the ENFORCEMENT RULE OF THE DEFENSE ACQUISITION PROGRAM ACT, and the ENFORCEMENT DECREE OF THE DEFENSE ACQUISITION PROGRAM ACT were selected. The "Original Sentence" set consists of the 83 clauses that actually appear in these Acts and are most accessible to working-level officials in their work. For each clause, the "Edited Sentence" set comprises 30 to 50 similar sentences that are likely to appear, in modified form, in military reports. During the creation of the edited sentences, the original sentences were modified using 12 predefined rules, and these sentences were produced in proportion to the number of such rules, as was the case for the original sentences. After conducting 1:1 sentence-similarity performance evaluation experiments, each "Edited Sentence" could be classified as legal or illegal with considerable accuracy. The "Edited Sentence" dataset used to train the neural network models reflects a variety of actual statutory statements ("Original Sentence") as characterized by the 12 rules. On the other hand, when trained only on the "Original Sentence" and "Edited Sentence" dataset, the models could not effectively classify other sentences that appear in actual military reports; the dataset is not ample enough for the models to recognize new incoming sentences. Hence, the models' performance was reassessed on an additional 120 newly written sentences that more closely resemble those in actual military reports while remaining associated with the original sentences. We found that the models' performance surpassed a certain level even when they had been trained merely with the "Original Sentence" and "Edited Sentence" data. 
If sufficient model learning is achieved by improving and expanding the training data with sentences that actually appear in reports, the models will be able to better classify sentences from military reports as legal or illegal. Based on the experimental results, this study confirms the feasibility and value of building a real-time automated comparison system between military documents and related laws. The approach developed in this experiment can identify which specific clause, among the several that appear in the related laws, is most similar to a sentence appearing in Defense Acquisition Program-related military reports, which helps determine whether the content of the report sentence is at risk of illegality when compared with the law clauses.
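The Siamese comparison idea above can be sketched as follows. This is a toy stand-in, not the paper's Bi-LSTM/Self-Attention models: both sentences pass through one shared encoder (here, a mean of hypothetical random word embeddings), and cosine similarity between the two encodings stands in for the learned similarity score used to flag risky edits. The vocabulary, embeddings, and sentences are all invented for illustration.

```python
import numpy as np

# Hypothetical toy vocabulary and embeddings, standing in for a trained encoder.
VOCAB = {"the": 0, "contract": 1, "shall": 2, "may": 3, "be": 4,
         "approved": 5, "rejected": 6}
rng = np.random.default_rng(0)
EMB = rng.normal(size=(len(VOCAB), 8))

def encode(sentence):
    """Shared encoder applied to both sides of the Siamese pair."""
    idx = [VOCAB[w] for w in sentence.split() if w in VOCAB]
    return EMB[idx].mean(axis=0)

def similarity(s1, s2):
    """Cosine similarity between the two encodings."""
    u, v = encode(s1), encode(s2)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

original     = "the contract shall be approved"   # statutory clause
edited_legal = "the contract shall be approved"   # faithful restatement
edited_risky = "the contract may be rejected"     # altered expression

# A faithful restatement scores higher than an altered, riskier one.
print(similarity(original, edited_legal) > similarity(original, edited_risky))
```

In the real system, a threshold on this score (learned from the "Original Sentence"/"Edited Sentence" pairs) would separate legal from potentially illegal restatements.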

A Study of the Reactive Movement Synchronization for Analysis of Group Flow (그룹 몰입도 판단을 위한 움직임 동기화 연구)

  • Ryu, Joon Mo;Park, Seung-Bo;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.79-94
    • /
    • 2013
  • Recently, high-value-added business has been growing steadily in the culture and art sector. To generate high value from a performance, audience satisfaction is necessary. Flow is a critical factor in satisfaction, and it should be induced in the audience and measured. To evaluate the audience's interest in and emotional response to content, producers and investors need an index for measuring flow. However, it is not easy to define flow quantitatively, nor to collect the audience's reaction immediately. Previous studies evaluated group flow as the sum of the average values of each person's reactions. The flow, or "good feeling," of each audience member was extracted from the face, especially changes of expression, and from body movement. But it was not easy to handle the large amount of real-time data from the individual sensor signals, and it was difficult to set up the experimental devices, for both economic and environmental reasons: every participant needed a personal sensor to capture physical signals, and a camera had to be located in front of each person's head to capture facial expressions. Therefore, a simpler system for analyzing group flow is needed. This study provides a method for measuring audience flow through group synchronization at the same time and place. To measure synchronization, we built a real-time processing system using differential images and developed a Group Emotion Analysis (GEA) program for the flow judgment model. A differential image is obtained by subtracting the previous camera frame from the present frame, yielding the movement variation of the audience's reaction. After measuring the audience's reaction, synchronization is divided into dynamic-state synchronization and static-state synchronization. 
Dynamic-state synchronization accompanies the audience's active reactions, while static-state synchronization corresponds to the absence of movement. Dynamic-state synchronization can be caused by surprised reactions to scary, creepy, or reversal scenes, whereas static-state synchronization is triggered by impressive or sad scenes. We therefore showed participants several short movies containing such scenes, which made them sad, made them clap, gave them the creeps, and so on. To evaluate audience movement, we defined two critical points, ${\alpha}$ and ${\beta}$: dynamic-state synchronization is meaningful when the movement value is over the critical point ${\beta}$, while static-state synchronization is effective under the critical point ${\alpha}$. ${\beta}$ was derived from the clapping movement of 10 teams rather than from the average amount of movement. After checking the audience's reactive movement, the percentage ratio was calculated by dividing the number of people reacting by the total number of people. In total, 37 teams participated in the experiments at the 2012 Seoul DMC Culture Open. First, staff induced them to clap; second, a basic scene was shown to neutralize the audience's emotion; third, a flow scene was displayed; fourth, a reversal scene was introduced. Then 24 of the teams were shown amusing and creepy scenes, and the other 10 teams were shown a sad scene. The audience clapped and laughed at the amusing scene, shook their heads or hid by closing their eyes at the creepy one, and the sad or touching scene made them silent. If the result exceeded about 80%, the group could be judged to have achieved synchronization and flow. As a result, the audience showed similar reactions to similar stimulation at the same time and place. 
With additional normalization and experiments, the flow factor can be obtained through synchronization in much larger groups, which should be useful for planning contents.
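The differential-image measurement and the two critical points can be sketched as follows; the threshold values and frames are illustrative assumptions, not the study's calibrated numbers (the study derived ${\beta}$ from the clapping movement of 10 teams).

```python
import numpy as np

# Illustrative critical points, not the study's calibrated values.
ALPHA, BETA = 5.0, 40.0

def movement(prev_frame, cur_frame):
    """Differential image: mean absolute pixel change between frames."""
    return float(np.abs(cur_frame.astype(float) - prev_frame.astype(float)).mean())

def classify(value):
    if value >= BETA:
        return "dynamic-sync"   # active reaction, e.g. clapping at a reversal scene
    if value <= ALPHA:
        return "static-sync"    # stillness, e.g. a silent, impressed audience
    return "no-sync"

def sync_ratio(states, target):
    """Share of audience members whose state matches the target sync type."""
    return sum(s == target for s in states) / len(states)

rng = np.random.default_rng(1)
still = rng.integers(0, 256, (4, 4), dtype=np.uint8)
moved = 255 - still                          # drastic frame-to-frame change
print(classify(movement(still, still)))      # static-sync
print(classify(movement(still, moved)))      # dynamic-sync
```

Per the roughly 80% criterion in the abstract, a group would then be judged synchronized (and in flow) when `sync_ratio` for the expected state exceeds about 0.8.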

Chinese Communist Party's Management of Records & Archives during the Chinese Revolution Period (혁명시기 중국공산당의 문서당안관리)

  • Lee, Won-Kyu
    • The Korean Journal of Archival Studies
    • /
    • no.22
    • /
    • pp.157-199
    • /
    • 2009
  • The organization for managing records and archives did not emerge together with the founding of the Chinese Communist Party. Such management became active with the establishment of the Department of Documents (文書科) and its affiliated offices overseeing the reading and safekeeping of official papers, after the formation of the Central Secretariat (中央秘書處) in 1926. Improving the work of the Secretariat's organization became the focus of critical discussions in the early 1930s. The main criticism was that the Secretariat had failed to be cognizant of its political role and had degenerated into a mere "functional organization"; the solution proposed was the "politicization of the Secretariat's work." Moreover, influenced by the "Rectification Movement" of the 1940s, the party emphasized a responsibility of the Resources Department (材料科) that extended beyond managing documents to collecting, organizing, and providing various kinds of important information data. Meanwhile, security in composing documents continued to be emphasized through such methods as using different names for figures and organizations or employing special inks for document production. In addition, communication between the central political organs and regional offices was emphasized through regular reports on work activities and local conditions. The General Secretary not only composed the drafts of the major official documents but also handled the reading and examination of all documents, and thus played a central role in record processing. The records, called archives after undergoing document processing, were placed in safekeeping, a function handled by the "Document Safekeeping Office (文件保管處)" of the Central Secretariat's Department of Documents. 
Although the Document Safekeeping Office, also called the "Central Repository (中央文庫)," could no longer accept additional archive transfers from the early 1930s, the Resources Department continued throughout the 1940s to strengthen its role of safekeeping and providing documents and publication materials. In particular, materials were collected for research and study, and with the recovery of regions that had been under Japanese rule, massive amounts of archival and documentary materials were gathered. After being stipulated by rules in 1931, archive classification and cataloguing methods became actively systematized, especially in the 1940s: basically, "subject" classification and fundamental cataloguing techniques were adopted. The principle of taking "importance" and "confidentiality" as the criteria of management emerged relatively early, but a concept or process of appraisal that differentiated documents for preservation from those for discarding was not clear. While a system of secure management and restricted access for confidential information was implemented, the view favoring the use of archive materials remained very strong, as can be seen in the slogan "the unification of preservation and use." Even during the revolutionary movement and wars, the Chinese Communist Party continued its efforts to strengthen the management and preservation of records and archives. The results were not always desirable, nor did these experiences necessarily lead to stable development; the historical conditions in which the Chinese Communist Party found itself probably made that inevitable. 
The most pronounced characteristic of this process is that the party not only pursued efficiency of records and archives management at the functional level but, while strengthening its self-awareness of the political significance of that management for the Chinese Communist Party's revolutionary movement, also paid attention to the value of archive materials as actual evidence for revolutionary policy research and as historical evidence of the Chinese Communist Party.

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.57-73
    • /
    • 2021
  • Anomaly detection of ICT infrastructure is becoming important for maintenance and failure prevention. System monitoring data is multidimensional time series data, and handling it requires considering both the characteristics of multidimensional data and those of time series data. With multidimensional data, the correlation between variables must be considered; existing methods based on probabilistic and linear models, distance, and so on degrade due to the curse of dimensionality. In addition, time series data is typically preprocessed with sliding windows and time series decomposition for autocorrelation analysis, techniques that increase the dimensionality of the data and therefore need to be supplemented. Anomaly detection is an old research field: statistical methods and regression analysis were used in the early days, and there are now active studies applying machine learning and artificial neural networks. Statistically based methods are difficult to apply when the data is non-homogeneous, and they do not detect local outliers well. Regression-based methods learn a regression formula based on parametric statistics and detect abnormality by comparing predicted and actual values; their performance drops when the model is not solid or when the data contains noise or outliers, imposing the restriction that training data containing noise or outliers should not be used. An autoencoder based on artificial neural networks is trained to produce output as similar as possible to its input. It has many advantages over existing probabilistic and linear models, cluster analysis, and supervised learning: it can be applied to data that does not satisfy a probability distribution or linearity assumption. 
In addition, it can be trained without labeled data, i.e., unsupervised. However, it still has limitations in identifying local outliers in multidimensional data, and the dimensionality of the data grows greatly due to the characteristics of time series data. In this study, we propose a Conditional Multimodal Autoencoder (CMAE) that improves anomaly detection performance by considering local outliers and time series characteristics. First, we applied a Multimodal Autoencoder (MAE) to mitigate the limitations of local outlier identification in multidimensional data. Multimodal models are commonly used to learn from different types of input, such as voice and images; the different modals share the autoencoder's bottleneck and thereby learn their correlation. In addition, a Conditional Autoencoder (CAE) was used to learn the characteristics of the time series effectively without increasing the dimensionality of the data. Conditional inputs are usually categorical variables, but in this study time was used as the condition in order to learn periodicity. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The reconstruction performance over 41 variables was examined for the proposed model and the comparison models. Reconstruction performance differs by variable; reconstruction works well for the Memory, Disk, and Network modals, whose loss values are small in all three autoencoder models. The Process modal showed no significant difference across the three models, while the CPU modal showed excellent performance in CMAE. ROC curves were prepared to evaluate anomaly detection performance, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators, performance ranked in the order CMAE, MAE, UAE. 
In particular, the recall was 0.9828 for CMAE, confirming that it detects almost all of the abnormalities. The accuracy of the model improved to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has an additional advantage beyond performance improvement: techniques such as time series decomposition and sliding windows require managing extra procedures, and the dimensionality increase they cause can slow down inference. The proposed model is easy to apply to practical tasks in terms of inference speed and model management.
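The reconstruction-error principle behind autoencoder-based anomaly detection can be sketched as follows. This is a minimal linear stand-in (a truncated-SVD "autoencoder"), not the proposed CMAE: the multimodal structure and the time condition are omitted, and the data and threshold are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)
# Toy monitoring data: 200 "normal" samples over 4 correlated metrics
# (rank-2 by construction, so a 2-dim bottleneck can reconstruct them).
base = rng.normal(size=(200, 2))
X = np.hstack([base, base @ np.array([[1.0, 0.5], [0.2, 1.0]])])

# Linear "autoencoder" via truncated SVD: encode to 2 dims, decode back.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
W = Vt[:2]  # 2-dim bottleneck (principal subspace)

def recon_error(x):
    z = (x - mean) @ W.T                               # encode
    return float(np.square((x - mean) - z @ W).sum())  # decode + squared error

# Threshold from the normal data; a sample that breaks the learned
# correlation structure reconstructs poorly and is flagged.
threshold = np.quantile([recon_error(x) for x in X], 0.99)
anomaly = np.array([10.0, -10.0, 10.0, 10.0])
print(recon_error(anomaly) > threshold)  # True
```

A neural autoencoder generalizes this beyond the linear case, and CMAE additionally shares the bottleneck across modals and conditions on time.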

Target-Aspect-Sentiment Joint Detection with CNN Auxiliary Loss for Aspect-Based Sentiment Analysis (CNN 보조 손실을 이용한 차원 기반 감성 분석)

  • Jeon, Min Jin;Hwang, Ji Won;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.4
    • /
    • pp.1-22
    • /
    • 2021
  • Aspect-Based Sentiment Analysis (ABSA), which analyzes sentiment based on the aspects that appear in a text, is drawing attention because it can be used in various industries. ABSA analyzes sentiment by aspect for the multiple aspects a text contains, and it is studied in various forms depending on the purpose, such as analyzing all targets or only aspects and sentiments. Here, an aspect refers to a property of a target, and a target refers to the text span that causes the sentiment. For restaurant reviews, for example, the aspects could be food taste, food price, quality of service, the mood of the restaurant, and so on. And given a review that says, "The pasta was delicious, but the salad was not," the words "pasta" and "salad," which are directly mentioned in the sentence, are the targets. So far, most ABSA studies have analyzed sentiment based only on aspects or only on targets. However, even with the same aspects or targets, sentiment analysis can be inaccurate when aspects or sentiments are divided or when sentiment exists without a target. Consider a sentence like "Pizza and the salad were good, but the steak was disappointing": although its aspect is limited to food, conflicting sentiments coexist. In a sentence such as "Shrimp was delicious, but the price was extravagant," the target is "shrimp," yet opposite sentiments coexist depending on the aspect. Finally, in a sentence like "The food arrived too late and is cold now," there is no target (NULL), but it conveys a negative sentiment toward the aspect "service." Failure to consider both aspects and targets in such cases, when sentiment or aspect is divided or when sentiment exists without a target, creates a dual dependency problem. 
To address this problem, this research analyzes sentiment by considering both aspects and targets (Target-Aspect-Sentiment Detection, hereafter TASD). This study identified limitations of existing TASD research: local contexts are not fully captured, and small epoch counts and batch sizes dramatically lower the F1-score. The current model excels at spotting the overall context and the relations between words, but it struggles with phrases in the local context and learns relatively slowly. Therefore, this study tries to improve the model's performance by adding an auxiliary loss for aspect-sentiment classification, computed by CNN (Convolutional Neural Network) layers constructed in parallel with the existing model. Where existing models analyze aspect-sentiment through BERT encoding, a Pooler, and Linear layers, this research adds CNN layers with adaptive average pooling, and training proceeds by adding the additional aspect-sentiment loss to the existing loss. In other words, during training, the auxiliary loss computed through the CNN layers allows the local context to be captured more accurately; after training, the model performs aspect-sentiment analysis through the existing method. To evaluate the model, two datasets, SemEval-2015 Task 12 and SemEval-2016 Task 5, were used, and the F1-score increased compared with existing models. With a batch size of 8 and 5 epochs, the gap between the F1-scores of the existing models and of this study was largest, at 29 and 45, respectively. Even when batch size and epochs were adjusted, the F1-scores remained higher than those of the existing models. Thus, the model can be trained effectively even with small batch and epoch numbers, which makes it useful in situations where resources are limited. Through this study, aspect-based sentiment can be analyzed more accurately. 
Through various business uses, such as product development or establishing marketing strategies, both consumers and sellers will be able to make efficient decisions. In addition, because the model uses a pre-trained model and recorded a relatively high F1-score even with limited resources, it is believed that it can be fully trained and utilized by small businesses that do not have much data.
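The auxiliary-loss training objective described above can be sketched as follows; the class probabilities and the weighting factor are invented for illustration, and in the real model both heads would be computed from BERT encodings, with the CNN head used only during training.

```python
import numpy as np

def cross_entropy(probs, target):
    """Negative log-likelihood of the gold class."""
    return float(-np.log(probs[target]))

# Hypothetical predictions for one aspect-sentiment example (3 classes).
main_probs = np.array([0.2, 0.7, 0.1])  # BERT -> Pooler -> Linear head
aux_probs  = np.array([0.3, 0.6, 0.1])  # parallel CNN -> adaptive-avg-pool head
target = 1                              # gold aspect-sentiment class

AUX_WEIGHT = 0.5  # illustrative weighting of the auxiliary term, not the paper's
total_loss = (cross_entropy(main_probs, target)
              + AUX_WEIGHT * cross_entropy(aux_probs, target))
print(round(total_loss, 4))  # 0.6121
```

At inference time only the main head is used, so the CNN branch adds training signal for local context without changing the deployed prediction path.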

Preservation of World Records Heritage in Korea and Further Registry (한국의 세계기록유산 보존 현황 및 과제)

  • Kim, Sung-Soo
    • Journal of Korean Society of Archives and Records Management
    • /
    • v.5 no.2
    • /
    • pp.27-48
    • /
    • 2005
  • This study investigates the current preservation and management of the four Korean records and documentary heritage items inscribed in UNESCO's Memory of the World Register, analyzes the problems in digitizing these world records heritage items, and presents corresponding solutions. It also reviews four additional Korean documentary works on the wish list for the Register. The study is organized as follows. Chapter 2 examines the value and meaning of Korea's world records and documentary heritage, along with the registry requirements and procedures of the Memory of the World Register. The currently registered Korean records are the Hunmin-Chongum, the Annals of the Choson Dynasty, the Diaries of the Royal Secretariat (Seungjeongwon Ilgi), and Buljo-Jikji-Simche-Yojeol (vol. II), and their worth and significance are carefully analyzed. For example, Hunmin-Chongum ("訓民正音") consists of unique and systematic letters, which were delicately explained with examples in their original manual at the time of their creation, an unparalleled case in world documentary history. The Annals of the Choson Dynasty ("朝鮮王朝實錄") are the most comprehensive historical documents covering the longest period of time; their truthfulness and reliability in describing history lend the annals credibility. The Royal Secretariat Diary (Seungjeongwon-Ilgi, "承政院日記") is the most voluminous primary source in history, surpassing the Annals of the Choson Dynasty and the Twenty-Five Histories of China. Jikji ("直指") is the oldest existing book printed with movable metal type in the world; it evidences the beginning of metal printing in world printing history and is worthy of being world heritage. 
The review of the four registered records confirms that they are valuable world documentary heritage that transmits human culture to the next generations and should be preserved carefully and safely, without deterioration or loss. Chapter 3 investigates the current preservation and management status of the three repositories that store the four registered records: the Kyujanggak Archives at Seoul National University, the Pusan Records and Information Center of the National Records and Archives Service, and the Gansong Art Museum. The quality of preservation and management is excellent in all three institutions in the following respects: 1) detailed security measures are close to perfection; 2) archiving practices are very careful, using special stack rooms with steady temperature and humidity and depositing the records in stacks or archival boxes made of paulownia wood; and 3) fire prevention, lighting, and fumigation are thoroughly prepared. Chapter 4 summarizes the status of digitization projects for records heritage in Korea. The most important issue for digitization and database construction of Korean records heritage is the standardization of digitization processes and facilities; a comprehensive standard system for digitization is urgently needed. Two institutions are closely involved in these tasks: 1) the National Records and Archives Service, experienced in developing government records management systems; and 2) the Cultural Heritage Administration, interested in the digitization of old Korean documents. In collaboration, these two institutions could design a new standard system for digitizing records heritage in Korean Studies. 
Chapter 5 deals with additional Korean records heritage on the wish list for UNESCO's Memory of the World Register: 1) the Wooden Printing Blocks (經板) of the Koryo-Taejangkyong (高麗大藏經) at Haein Temple (海印寺); 2) Dongui-Bogam ("東醫寶鑑"); 3) Samguk-Yusa ("三國遺事"); and 4) Mugujeonggwangdaedaranigyeong. Their world value and importance are examined as follows. The Wooden Printing Blocks of the Koryo-Taejangkyong at Haein Temple are the world's oldest extant wooden printing blocks of the Buddhist canon, created over 750 years ago. They need special conservation treatment to disinfect the germs residing on and inside the wooden plates; otherwise they may be seriously damaged. For their effective conservation and preservation, we hope that UNESCO and the Government will schedule special care and budget and add them to the Memory of the World Register. Dongui-Bogam is the most comprehensive and best-written medical book in Korean history, summarizing all the medical books of Korea and China from ancient times through the early 17th century and concentrating on Korean herbal medicine and prescriptions. It proved to be the best clinical guidebook of the 17th century, easy for doctors and practitioners to use. The book was also published in China and Japan in the 18th century and greatly influenced the development of clinical practice and medical research in Asia at that time. This is why Dongui-Bogam is on the wish list for the Memory of the World Register. Samguk-Yusa is evaluated as one of the most comprehensive history books and treasured sources in Korea, illustrating the foundations of the Korean people and covering the histories and cultures of the ancient Korean peninsula and nearby countries. The book contains the oldest fixed-form verse, called Hyang-Ka (鄕歌), and became the origin of Korean literature. 
In particular, the Gi-ee(紀異篇) section describes the historical processes of dynastic transition from the first dynasty, Gochosun(古朝鮮), to Goguryeo(高句麗) and illustrates the identity of the Korean people from its historical origin. This book is worthy of addition to the Memory of the World Register. Mugujeonggwangdaedaranigyeong is the oldest book printed from wooden printing blocks, estimated to have been printed between 706 and 751. Several reasons and pieces of evidence make it worthy of addition to the Memory of the World Register: it is a great documentary heritage that represents the oldest surviving wood-block-printed book in the world and illustrates the history of wood-block printing in Korea.

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed and zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are compared with the corresponding TLFs produced by the Gravity Model (GM). The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed. 
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. For this research, however, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. 
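The gravity-model distribution step that underlies this calibration can be sketched in a few lines. This is a minimal illustration only, not the study's implementation: the zones, productions, attractions, and friction factors below are invented, and a singly constrained form is assumed.

```python
# Singly constrained gravity model: T_ij = P_i * A_j * F_ij / sum_k(A_k * F_ik).
# All numeric values here are invented for illustration.

def gravity_model(productions, attractions, friction):
    """Distribute each zone's productions across destination zones in
    proportion to (attractions * friction factor)."""
    trips = {}
    for i, p_i in productions.items():
        denom = sum(attractions[j] * friction[(i, j)] for j in attractions)
        for j, a_j in attractions.items():
            trips[(i, j)] = p_i * a_j * friction[(i, j)] / denom
    return trips

P = {"A": 100.0, "B": 200.0}            # zonal truck trip productions
A = {"A": 150.0, "B": 150.0}            # zonal truck trip attractions
F = {("A", "A"): 1.0, ("A", "B"): 0.5,  # friction factors fall with impedance
     ("B", "A"): 0.5, ("B", "B"): 1.0}

T = gravity_model(P, A, F)
# Row sums reproduce the productions exactly in this singly constrained form.
assert abs(sum(T[("A", j)] for j in A) - 100.0) < 1e-9
```

In the study, the friction factors would come from the calibrated friction factor curves for each trip type (I-I, I-E, E-E) rather than fixed constants as used here.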
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (the ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the lowest %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. More importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. Link volume to ground count (LV/GC) ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. 
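The link adjustment factor described above is a simple observed-to-assigned ratio applied back to the zones using the link. A sketch with invented volumes follows; the function names and zone labels are illustrative, not from the study.

```python
def link_adjustment_factor(ground_count, assigned_volume):
    """Ratio of observed link volume (ground count) to total assigned volume.
    A factor above 1 means the assignment under-loads this link."""
    return ground_count / assigned_volume

def adjust_zone_totals(zone_trips, factor):
    """Scale the productions/attractions of every zone whose assigned trips
    use the selected link by that link's adjustment factor."""
    return {zone: trips * factor for zone, trips in zone_trips.items()}

# Invented example: a selected link with 1150 counted trucks but only 1000 assigned.
factor = link_adjustment_factor(1150.0, 1000.0)   # 1.15: model under-assigns here
adjusted = adjust_zone_totals({"zone_12": 400.0, "zone_31": 250.0}, factor)
```

Repeating this for each selected link and re-running the assignment gives the iterative SELINK adjustment cycle the abstract describes (stable after three adjustments in the study).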
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of %RMSE for the four screenlines resulting from the fourth and last GM run is 22% using 32 selected links and 31% using 16 selected links. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. 
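The %RMSE statistic used throughout this evaluation is not defined in the abstract; a common formulation, the root-mean-square error of assigned versus counted link volumes expressed as a percentage of the mean ground count, is assumed in this sketch, and the check-point volumes are invented.

```python
import math

def percent_rmse(assigned, counted):
    """RMSE of assigned vs. counted link volumes, as a percentage of the mean
    ground count (a common definition; the paper's exact form may differ)."""
    n = len(counted)
    rmse = math.sqrt(sum((a - c) ** 2 for a, c in zip(assigned, counted)) / n)
    return 100.0 * rmse / (sum(counted) / n)

# Three invented check points on one screenline.
print(round(percent_rmse([950, 1020, 1100], [1000, 1000, 1000]), 1))  # prints 6.6
```

Under this definition the same absolute error yields a smaller %RMSE where average counts are larger, which is consistent with the inverse relationship between %RMSE and average ground count reported in the screenline and area analyses.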
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond population alone are needed for the development of the heavy truck trip generation model. Additional independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. 
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This estimate is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic consists of I-I trips, while 80% of total collector truck traffic consists of I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic, while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, provide useful information for highway planners seeking to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model under four scenarios. For better forecasting, ground count based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. 
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.

  • PDF

Studies on Garlic Mosaic Virus -lts isolation, symptom expression in test plants, physical properties, purification, serology and electron microscopy- (마늘 모자이크 바이러스에 관한 연구 -마늘 모자이크 바이러스의 분리, 검정식물상의 반응, 물리적성질, 순화, 혈청반응 및 전자현미경적관찰-)

  • La Yong-Joon
    • Korean journal of applied entomology
    • /
    • v.12 no.3
    • /
    • pp.93-107
    • /
    • 1973
  • Garlic (Allium sativum L.) is an important vegetable crop for the Korean people and has long been cultivated extensively in Korea. More recently it has gained importance as a source of certain pharmaceuticals, and this additional use has contributed to the increasing demand for Korean garlic. Garlic has been propagated vegetatively for a long time without control measures against virus diseases; as a result, it is presumed that most of the garlic varieties in Korea may have degenerated. The production of virus-free plants offers the most feasible way to control the virus diseases of garlic. However, little is known about garlic viruses, both in Korea and abroad, and more basic information about them is needed before a sound approach to the control of these diseases can be developed. Garlic mosaic disease is currently the most prevalent disease in plantings throughout Korea and is considered the most important disease of garlic in Korea. Because of this importance, studies were initiated to isolate and characterize the garlic mosaic virus. Symptom expression in test plants, physical properties, purification, serological reactions, and morphological characteristics of the garlic mosaic virus were determined. Results of these studies are summarized as follows. 1. Surveys made throughout the important garlic growing areas in Korea during 1970-1972 revealed that most of the garlic plants were heavily infected with mosaic disease. 2. A strain of garlic mosaic virus was obtained from infected garlic leaves and transmitted mechanically to Chenopodium amaranticolor by a single lesion isolation technique. 3. The symptom expression of this garlic mosaic virus isolate was examined on 26 species of test plants. Among these, Chenopodium amaranticolor, C. quinoa, C. album, and C. koreanse expressed chlorotic local lesions on inoculated leaves 11-12 days after mechanical inoculation with infective sap. 
The remaining 22 species showed no symptoms, and no virus was recovered from them when back-inoculated to C. amaranticolor. 4. Among the four species of Chenopodium mentioned above, C. amaranticolor and C. quinoa appear to be the most suitable local lesion test plants for garlic mosaic virus. 5. Cloves and top-sets originating from mosaic-infected garlic plants were 100% infected with the same virus; consequently, the garlic mosaic virus is successively transmitted through infected cloves and top-sets. 6. Garlic mosaic virus was mechanically transmitted to C. amaranticolor when inoculations were made with infective sap of cloves and top-sets. 7. Physical properties of the garlic mosaic virus, as determined by inoculation onto C. amaranticolor, were as follows. Thermal inactivation point: $65-70^{\circ}C$; dilution end point: $10^{-2}-10^{-3}$; aging in vitro: 2 days. 8. Electron microscopic examination of the garlic mosaic virus revealed long rod-shaped particles measuring 1200-1250 mμ. 9. Garlic mosaic virus was purified from leaf materials of C. amaranticolor by using two cycles of differential centrifugation followed by Sephadex gel filtration. 10. Garlic mosaic virus was successfully detected in infected garlic cloves and top-sets by a serological microprecipitin test. 11. Serological tests of 150 garlic cloves and 30 top-sets collected randomly from separate plants throughout five different garlic growing regions in Korea revealed 100% infection with garlic mosaic virus. Accordingly, it is concluded that most of the garlic cloves and top-sets now being used for propagation in Korea are carriers of the garlic mosaic virus. 12. Serological studies revealed that the garlic mosaic virus is not related to potato viruses X, Y, S, and M. 13. Because of the difficulty of securing mosaic virus-free garlic plants, direct inoculation of garlic plants with the isolated virus was not accomplished. 
Results of the present study, however, indicate that the virus isolate used here is the causal virus of the garlic mosaic disease in Korea.

  • PDF

Seeking a Better Place: Sustainability in the CPG Industry (추심경호적지방(追寻更好的地方): 유포장적소비품적산업적가지속발전(有包装的消费品的产业的可持续发展))

  • Rapert, Molly Inhofe;Newman, Christopher;Park, Seong-Yeon;Lee, Eun-Mi
    • Journal of Global Scholars of Marketing Science
    • /
    • v.20 no.2
    • /
    • pp.199-207
    • /
    • 2010
  • "For us, there is virtually no distinction between being a responsible citizen and a successful business... they are one and the same for Wal-Mart today." ~ Lee Scott, Wal-Mart CEO after the 2005 Katrina disaster; cited in Green to Gold (Esty and Winston 2006). Lee Scott's statement signaled a new era in sustainability as manufacturers and retailers around the globe watched the world's largest mass merchandiser confirm its intentions with respect to sustainability. For decades, the environmental movement has grown, slowly bleeding over into the corporate world. Companies have been born, products have been created, academic journals have been launched, and government initiatives have been undertaken - all in the pursuit of sustainability (Peattie and Crane 2005). While progress has been admittedly slower than some may desire, the emergence and entrance of environmentally concerned mass merchandisers has done much to help with sustainable efforts. To better understand this movement, we incorporate the perspectives of both executives and consumers involved in the consumer packaged goods (CPG) industry. This research relies on three underlying themes: (1) conceptual and anecdotal evidence suggests that companies undertake sustainability initiatives for a plethora of reasons; (2) the number of sustainability initiatives continues to increase in the consumer packaged goods industries; and (3) it is therefore necessary to explore the role that sustainability plays in the minds of consumers. In light of these themes, surveys were administered to and completed by 143 college students and 101 business executives to assess a number of variables in regard to sustainability, including willingness-to-pay, behavioral intentions, attitudes, and preferences. 
Survey results indicate that the top three reasons why executives believe sustainability to be important include (1) the opportunity for profitability, (2) the fulfillment of an obligation to the environment, and (3) a responsibility to customers and shareholders. College students identified the top three reasons as (1) a responsibility to the environment, (2) an indebtedness to future generations, and (3) an effective management of resources. While the rationale for supporting sustainability efforts differed between college students and executives, the executives and consumers reported similar responses for the majority of the remaining sustainability issues. Furthermore, when we asked consumers to assess the importance of six key issues (healthcare, economy, education, crime, government spending, and environment) previously identified as important to consumers by Gallup Poll, protecting the environment only ranked fourth out of the six (Carlson 2005). While all six of these issues were identified as important, the top three that emerged as most important were (1) improvements in education, (2) the economy, and (3) health care. As the pursuit and incorporation of sustainability continues to evolve, so too will the expected outcomes. New definitions of performance that reflect the social/business benefits as well as the lengthened implementation period are relevant and warranted (Ehrenfeld 2005; Hitchcock and Willard 2006). We identified three primary categories of outcomes based on a literature review of both anecdotal and conceptual expectations of sustainability: (1) improvements in constituent satisfaction, (2) differentiation opportunities, and (3) financial rewards. Within each of these categories, several specific outcomes were identified resulting in eleven different outcomes arising from sustainability initiatives. 
Our survey results indicate that the top five most likely outcomes for companies that pursue sustainability are: (1) green consumers will be more satisfied, (2) company image will be better, (3) corporate responsibility will be enhanced, (4) energy costs will be reduced, and (5) products will be more innovative. Additionally, to better understand the interesting intersection between the environmental "identity" of a consumer and the willingness to manifest that identity with marketplace purchases, we extended prior research developed by Experian Research (2008). Accordingly, respondents were categorized as one of four types of green consumers (Behavioral Greens, Think Greens, Potential Greens, or True Browns) to garner a better understanding of the green consumer, in addition to assisting with a more effective interpretation of results. We assessed these consumers' willingness to engage in eco-friendly behavior by evaluating three options: (1) shopping at retailers that support environmental initiatives, (2) paying more for products that protect the environment, and (3) paying higher taxes so the government can support environmental initiatives. Think Greens expressed the greatest willingness to change, followed by Behavioral Greens, Potential Greens, and True Browns. These differences were all significant at p<.01. Further Conclusions and Implications: We have undertaken a descriptive study which seeks to enhance our understanding of the strategic domain of sustainability. Specifically, this research fills a gap in the literature by comparing and contrasting the sustainability views of business executives and consumers with specific regard to preferences, intentions, willingness-to-pay, behavior, and attitudes. For practitioners, much can be gained from a strategic standpoint. In addition to the many results already reported, respondents also reported being willing to pay more for products that protect the environment. 
Other specific results indicate that female respondents consistently communicate a stronger willingness than males to pay more for these products and to shop at eco-friendly retailers. Knowing this additional information, practitioners can now have a more specific market in which to target and communicate their sustainability efforts. While this research is only an initial step towards understanding similarities and differences among practitioners and consumers regarding sustainability, it presents original findings that contribute to both practice and research. Future research should be directed toward examining other variables affecting this relationship, as well as other specific industries.