• Title/Summary/Keyword: Resource Classification system


Characterization of Groundwater Level and Water Quality by Classification of Aquifer Types in South Korea (국내 대수층 유형 분류를 통한 지하수위와 수질의 특성화)

  • Lee, Jae Min;Ko, Kyung-Seok;Woo, Nam C.
    • Economic and Environmental Geology
    • /
    • v.53 no.5
    • /
    • pp.619-629
    • /
    • 2020
  • The National Groundwater Monitoring Network (NGMN) in South Korea has been implemented in alluvial/bedrock aquifers for efficient management of groundwater resources. In this study, aquifer types were reclassified into unconfined and confined aquifers based on water-level fluctuation and water-quality characteristics. Principal component analysis (PCA) of water-level data from paired monitoring wells in alluvial/bedrock aquifers showed that the principal components of both aquifer types follow similar water-level fluctuation patterns, and there was no significant difference between the alluvial and bedrock aquifers in either the rate of water-level rise in response to precipitation or the NO3-N concentrations. In contrast, when the wells were classified by hydrogeological type, the principal components of water level differed between unconfined and confined conditions. The water-level rises in response to precipitation events were estimated at 4.6 (R²=0.8) for the unconfined aquifers and 2.1 (R²=0.4) for the confined aquifers, indicating a smaller impact of precipitation recharge on the confined aquifers. The confined aquifers had average NO3-N concentrations below 3 mg/L, implying a natural background level protected from contamination sources at the surface. In summary, reclassifying aquifers into hydrogeological types clearly reveals differences between unconfined and confined aquifers in water-level fluctuation patterns and NO3-N concentrations. Accounting for the hydrogeologic condition of an aquifer could improve groundwater resource management by providing critical information on groundwater quantity, through recharge estimation, and on quality, for protection from potential contamination sources.
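
As a rough illustration of the kind of analysis the abstract describes, the sketch below applies PCA to standardized water-level series from monitoring wells and regresses event water-level rises on precipitation. It is not the authors' code; the file names and column names (water_levels.csv, recharge_events.csv, precip_mm, rise_m) are hypothetical.

```python
# Illustrative sketch (not the paper's code): PCA of water-level series from
# monitoring wells, plus a regression of water-level rise on precipitation.
# File layout and column names are hypothetical.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression

levels = pd.read_csv("water_levels.csv", index_col="date", parse_dates=True)  # one column per well
X = StandardScaler().fit_transform(levels.dropna())          # standardize each well's series

pca = PCA(n_components=2)
scores = pca.fit_transform(X)                                 # dominant fluctuation patterns over time
print("explained variance ratio:", pca.explained_variance_ratio_)

events = pd.read_csv("recharge_events.csv")                   # precipitation (mm) vs. water-level rise (m)
reg = LinearRegression().fit(events[["precip_mm"]], events["rise_m"])
print("rise per unit precipitation:", reg.coef_[0],
      "R^2:", reg.score(events[["precip_mm"]], events["rise_m"]))
```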

A Study on the Real Condition and the Improvement Directions for the Protection of Industrial Technology (산업기술 보호 관리실태 및 발전방안에 관한 연구)

  • Chung, Tae-Hwang;Chang, Hang-Bae
    • Korean Security Journal
    • /
    • no.24
    • /
    • pp.147-170
    • /
    • 2010
  • This study presents improvement directions for the protection of key industrial technologies. For this purpose, a survey on administrative security activities was carried out at 68 enterprises, including large companies, small and medium-sized companies, and public corporations. The results of the survey, covering 10 items on security policy, 10 items on personnel management, and 7 items on asset management, are as follows. First, a stable foundation for the efficient implementation of security policy is needed: drawing up a policy should be accompanied by putting it into practice and upgrading it continuously. To vitalize security activities, a security organization and security managers should be arranged, with mutual assistance within the company, and periodic security inspections should be practiced to improve the security level and security awareness. Second, increased investment in security work is needed to invigorate security. Cooperation channels with professional security bodies such as the National Intelligence Service, the Korea Internet & Security Agency, information security consulting companies, and security research institutes should be secured, and security outsourcing could be considered as one form of such investment. Small and medium-sized companies in particular are very vulnerable in security management compared with large companies and public corporations, so an increase in the government's budget for security support systems is necessary. Third, human resource management is important, because people are the main cause of leaks of confidential information. The rate of regular education for new employees and staff members is relatively high, but security oaths for staff members and for third parties who access key technologies need to be vitalized. Access rights to key information should also be updated whenever personnel or duties change, and the management of resigned employees should be reinforced through security oaths, the removal of access rights to key information, and the deletion of accounts. Fourth, the control and management of important assets, including patents and designs, should be tightened: assets should be classified by importance and inspected periodically, and the impact of asset leaks should be evaluated.

  • PDF

An Economic Factor Analysis of Air Pollutants Emission Using Index Decomposition Methods (대기오염 배출량 변화의 경제적 요인 분해)

  • Park, Dae Moon;Kim, Ki Heung
    • Environmental and Resource Economics Review
    • /
    • v.14 no.1
    • /
    • pp.167-199
    • /
    • 2005
  • The following policy implications can be drawn from this study: 1) The Air Pollution Emission Amount Report, published by the Ministry of Environment since 1991, classifies industries into four sectors: heating, manufacturing, transportation, and power generation. Currently, the usability of the report is very low, and extra effort should be devoted to refining the current statistics and improving the industrial classification. 2) The major polluting industries are s7, s17, and s20. The current air pollution control policy for these sectors is found to be inefficient compared with other sectors, and this finding should be noted in the implementation of future air pollution policy. 3) s10 and s17 are found to be major polluting industrial sectors whose pollution reduction effects are also significant. 4) The emission coefficient effect (Δf) has the biggest impact on reducing emissions, and the economic growth effect (Δy) has the biggest impact on increasing emissions; the production technology effect (ΔD) and the final demand structure effect (Δu) are insignificant in terms of the change in emission volume. 5) Further studies on emission estimation techniques for each industrial sector and on economic analysis are required to promote the effective enforcement of a total volume control system for air pollutants, the differentiated management of pollution-causing industrial sectors, and the integration of environment and economy. 6) Korea's economic growth in the 1990s was not pollution-driven in terms of Barry Commoner's hypothesis, even though the overall industrial structure and demand structure were not environmentally friendly. This indicates that environmental policies for improving air quality have depended mainly on government initiatives, and that systematic national-level consideration of industrial structure and the development of green technologies have not been fully incorporated.

  • PDF
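
The abstract above attributes emission changes to an emission-coefficient effect (Δf), a production-technology effect (ΔD), a final-demand-structure effect (Δu) and an economic-growth effect (Δy). The sketch below is a simplified, one-factor-at-a-time (Laspeyres-style) additive decomposition under the assumed per-sector identity E = f·D·u·y; it illustrates the idea only and is not the authors' exact formulation.

```python
# Illustrative Laspeyres-style additive decomposition (assumption: per-sector
# identity E = f * D * u * y; a simplified stand-in for the paper's method).
import numpy as np

def decompose(f0, D0, u0, y0, f1, D1, u1, y1):
    """Attribute the change in total emissions to each factor by varying one
    factor at a time while holding the others at base-year values."""
    E0 = np.sum(f0 * D0 * u0 * y0)
    E1 = np.sum(f1 * D1 * u1 * y1)
    eff_f = np.sum((f1 - f0) * D0 * u0 * y0)   # emission-coefficient effect (delta f)
    eff_D = np.sum(f0 * (D1 - D0) * u0 * y0)   # production-technology effect (delta D)
    eff_u = np.sum(f0 * D0 * (u1 - u0) * y0)   # final-demand-structure effect (delta u)
    eff_y = np.sum(f0 * D0 * u0 * (y1 - y0))   # economic-growth effect (delta y)
    residual = (E1 - E0) - (eff_f + eff_D + eff_u + eff_y)
    return {"delta_f": eff_f, "delta_D": eff_D, "delta_u": eff_u,
            "delta_y": eff_y, "residual": residual}

# Toy two-sector example with made-up numbers.
print(decompose(np.array([1.0, 2.0]), np.array([1.1, 0.9]), np.array([0.6, 0.4]), 100.0,
                np.array([0.8, 1.5]), np.array([1.0, 1.0]), np.array([0.5, 0.5]), 130.0))
```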

Spatial Distribution of Benthic Macroinvertebrate Assemblages in Wetlands of Jeju Island, Korea (제주도 일대 습지에 서식하는 저서성 대형무척추동물의 군집 분포 특성)

  • Yung Chul Jun;Seung Phil Cheon;Mi Suk Kang;Jae Heung Park;Chang Su Lee;Soon Jik Kwon
    • Korean Journal of Ecology and Environment
    • /
    • v.57 no.1
    • /
    • pp.1-16
    • /
    • 2024
  • Most wetlands worldwide have suffered from extensive human exploitation, yet they have been less explored than river and lake ecosystems despite their ecological importance and economic value; this is also the case in Korea. This study aimed to characterize the assemblage attributes and distribution of benthic macroinvertebrates in fifty wetlands distributed across subtropical Jeju Island in 2021. A total of 133 taxa belonging to 53 families, 19 orders, 5 classes and 3 phyla were identified during the survey periods. Taxa richness ranged from 4 to 31 taxa per wetland, with an average of 17.5 taxa. Predatory insect groups such as Odonata, Hemiptera and Coleoptera accounted for 67.7% of taxa richness and 68.2% of abundance, and among them Coleoptera were the most diverse and abundant. Taxa richness and abundance did not differ significantly among wetland types classified according to the National Wetland Classification System. Three endangered species (Clithon retropictum, Lethocerus deyrolli and Cybister (Cybister) chinensis) and several species restricted to Jeju Island were recorded. Cluster analysis based on similarity in benthic macroinvertebrate composition divided the 50 wetlands into two major clusters: small wetlands located in lowland areas and medium-sized wetlands in middle mountainous regions. The cluster groups displayed significant differences in wetland area, long axis, percentage of fine particles and macrophyte composition ratio. Indicator Species Analysis selected 19 important indicator taxa, with the highest indicator value for Ceriagrion melanurum at 63%, followed by Noterus japonicus (59%) and Polypylis hemisphaerula (58%). Our results are expected to provide fundamental information on the biodiversity and habitat environments of benthic macroinvertebrates in wetland ecosystems, and thereby help establish conservation and restoration plans for small wetlands that are relatively vulnerable to human disturbance.
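
As a minimal illustration of the composition-based cluster analysis mentioned above, the sketch below groups wetlands by hierarchical clustering of a site-by-taxa abundance table. The input file, the Bray-Curtis distance and the average-linkage choice are assumptions, not details taken from the paper.

```python
# Illustrative sketch: hierarchical clustering of wetlands by macroinvertebrate
# composition. The abundance table and distance/linkage choices are assumptions.
import pandas as pd
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

abund = pd.read_csv("wetland_taxa_abundance.csv", index_col="site")  # rows: wetlands, cols: taxa
dist = pdist(abund.values, metric="braycurtis")       # dissimilarity in community composition
tree = linkage(dist, method="average")                # UPGMA-style agglomerative clustering
clusters = fcluster(tree, t=2, criterion="maxclust")  # cut the tree into two major clusters
print(pd.Series(clusters, index=abund.index).value_counts())
```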

A Study on the Traditional Geographic System Recognition and Environmental Value Estimate of Hannamkeumbuk-Keumbuk Mountains for the Establishment of a Management Plan (관리계획 수립을 위한 한남금북.금북정맥의 전통적 지리체계인식과 환경가치 추정 연구)

  • Kang, Kee-Rae;Kim, Dong-Pil
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.40 no.1
    • /
    • pp.23-33
    • /
    • 2012
  • This study examined how aware users of the Hannamkeumbuk and Keumbuk Mountains are of Baekdudaegan and its attached mountain chains, the traditional geographic system according to Sangyungpyo, together with basic data such as awareness levels and use behaviors. In addition, the environmental value of the Hannamkeumbuk and Keumbuk Mountains, the secondary mountain chains separating the central and southern parts of Korea and acting as an ecosystem buffer for the Baekdudaegan Range, was estimated in current monetary terms. When asked about the traditional classification standard of mountain chains and Baekdudaegan, more than 70% of respondents answered that they had heard of or knew them, but 66.8% were not aware of the Hannamkeumbuk and Keumbuk Mountains: while awareness of Baekdudaegan is high, awareness of its attached mountain chains was very poor. A double-bounded dichotomous choice (DBDC) response format and the contingent valuation method (CVM), which is widely used for valuing environmental goods, were applied. As a result, the additional benefit gained when a person visits the Hannamkeumbuk and Keumbuk Mountains was estimated at 5,813 won, which is very low compared with the average visit cost of 51,984 won. The reasons were judged to be degraded environmental conditions, the monotony of the trails, and ongoing indiscriminate environmental destruction. The results of this study offer a new perspective on public relations activities and resource conservation for Baekdudaegan and its attached mountain chains, and on understanding perceptions of, and providing efficient services for, visitors to the Hannamkeumbuk and Keumbuk Mountains. This study will serve as data for basic planning and management to increase the mountains' value and to preserve them. Further studies are needed to frame the division of work and management among various organizations so that the Hannamkeumbuk-Keumbuk Mountains may be properly managed and their value enhanced.
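
The paper applies CVM with a double-bounded dichotomous choice (DBDC) format. As a much simplified stand-in, the sketch below estimates mean willingness to pay from a single-bounded logit model (mean WTP = -intercept/bid coefficient for a linear logit); the survey file and column names are hypothetical.

```python
# Simplified single-bounded CVM sketch (the paper itself uses a double-bounded
# dichotomous-choice model); survey data and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

survey = pd.read_csv("cvm_responses.csv")            # columns: bid (won), accept (0/1)
X = sm.add_constant(survey["bid"])
logit = sm.Logit(survey["accept"], X).fit(disp=0)

alpha, beta = logit.params["const"], logit.params["bid"]
mean_wtp = -alpha / beta                             # mean WTP under a linear logit model
print(f"estimated mean WTP per visit: {mean_wtp:,.0f} won")
```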

A Methodology to Develop a Curriculum of Landscape Architecture based on National Competency Standards (국가직무능력표준(NCS) 기반 조경분야 교육과정 개발)

  • Byeon, Jae-Sang;Shin, Sang-Hyun;Ahn, Seong-Ro
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.45 no.2
    • /
    • pp.23-39
    • /
    • 2017
  • This study began from the question, "Is there a way to efficiently reflect industrial demand in the university curriculum?", and focused on how to actively accept and respond to the era of the NCS (National Competency Standards). To apply the NCS to individual university departments, industry personnel must actively participate in forming a practical-level curriculum based on the NCS that can be linked to actual work and qualifications. A valid procedure for developing an NCS-based curriculum, as derived in this study, is as follows. First, the university must select a specific NCS classification, considering the outlook of the relevant industry, the specialities of the professors in the university, the relationship with regional industries, the prospects for future employment, and the need for industrial manpower. Second, departments must define a target type of human resource that reconciles the goals of university education with the missions of the chosen NCS; in this process, competency units unique to the university, which can support basic or applied subjects, should be added to the task model. Third, the NCS-based task model should be completed by verifying each competency unit and deciding whether to accept or reject it in the curriculum. Fourth, subjects corresponding to each competency unit in the task model should be developed, considering hours and credits according to university regulations, followed by clear subject descriptions of how the contents of the curriculum will be operated and evaluated. Fifth, a roadmap determining the semester or year in which each subject is offered should be built; this roadmap becomes the basis of the competency achievement frame used to decide on adopting a Process Evaluation Qualification System. For the NCS to be successfully established within a university, a consensus on its necessity must first be formed among professors, students and staff members. Unlike a traditional professor-driven curriculum, the student-oriented NCS curriculum requires sufficient understanding of, and empathy for, the many sacrifices and the commitment demanded of the members of the university.

Dynamic Virtual Ontology using Tags with Semantic Relationship on Social-web to Support Effective Search (효율적 자원 탐색을 위한 소셜 웹 태그들을 이용한 동적 가상 온톨로지 생성 연구)

  • Lee, Hyun Jung;Sohn, Mye
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.19-33
    • /
    • 2013
  • The proposed Dynamic Virtual Ontology using Tags (DyVOT) supports dynamic search of resources according to user requirements, using tags from social-web resources. In general, tags are annotations consisting of a series of words assigned by social users to information resources such as web pages, images, YouTube videos, and so on. Tags therefore characterize and mirror information resources, so tags can serve as metadata matched to resources, and semantic relationships between tags can be extracted from the dependencies among tags as representatives of resources. However, this is limited by the synonyms and homonyms that occur among tags, which are usually expressed as a series of words. Research on folksonomies has therefore applied tags to classifying words by semantic-based synonymy, and some research has focused on clustering or classifying resources by semantic relationships among tags. Nevertheless, such research is still limited because it focuses on semantic-based hyper/hypo relationships or clustering among tags without considering the conceptual associative relationships between the classified or clustered groups, which makes it difficult to search resources effectively according to user requirements. In this research, the proposed DyVOT uses tags to construct an ontology for effective search. We assume that tags are extracted from user requirements and used to construct multiple sub-ontologies, each built from a subset or all of the tags. The DyVOT then constructs an ontology based on hierarchical and associative relationships among tags. The ontology consists of a static ontology and a dynamic ontology. The static ontology defines semantic-based hierarchical hyper/hypo relationships among tags, as in (http://semanticcloud.sandra-siegel.de/), with a tree structure. From the static ontology, the DyVOT extracts multiple sub-ontologies using sub-tags constructed from parts of the tags; each sub-ontology is formed from the hierarchy paths that contain its sub-tag. To create the dynamic ontology, the DyVOT defines associative relationships among the sub-ontologies extracted from the hierarchical relationships of the static ontology. An associative relationship is defined by the resources shared between tags linked by different sub-ontologies, and the degree of association is measured by the proportion of shared resources allocated to the tags of the sub-ontologies. If the association value is larger than a threshold, a new associative relationship among the tags is created. These associative relationships are used to merge the sub-ontologies and construct a new hierarchy: a new class is defined that links two or more sub-ontologies whose tags are highly associated, as evidenced by their shared resources, and this class is placed in the dynamic ontology with new hyper/hypo hierarchical relationships to the tags of the linked sub-ontologies.
Finally, the DyVOT is built from these newly defined associative relationships extracted from the hierarchical relationships among tags. Resources are matched into the DyVOT, which narrows the search boundary and shortens the search paths. Whereas a static data catalog (Dean and Ghemawat, 2004; 2008) searches resources statically according to user requirements, the proposed DyVOT searches resources dynamically using multiple sub-ontologies with parallel processing. In this light, the DyVOT improves the correctness and agility of search and decreases search effort by reducing the search path.
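
A minimal sketch of the association-and-merge step described above: two sub-ontologies are linked by a new class when their tags share enough resources. The tag-to-resource mapping, the normalization of the association measure and the threshold value are illustrative assumptions, not the paper's definitions.

```python
# Illustrative sketch of the association step described above: two sub-ontologies
# (tag sets) are linked when their tags share enough resources. Data are hypothetical.
def association(tag_a, tag_b, tag_to_resources):
    """Degree of association = shared resources / resources of the smaller tag (assumed normalization)."""
    ra, rb = tag_to_resources[tag_a], tag_to_resources[tag_b]
    if not ra or not rb:
        return 0.0
    return len(ra & rb) / min(len(ra), len(rb))

def merge_if_associated(sub_ont_a, sub_ont_b, tag_to_resources, threshold=0.5):
    """Create a new parent class linking two sub-ontologies when any tag pair
    across them is associated above the threshold."""
    for a in sub_ont_a:
        for b in sub_ont_b:
            if association(a, b, tag_to_resources) >= threshold:
                return {"class": f"{a}+{b}", "children": [sub_ont_a, sub_ont_b]}
    return None

tag_to_resources = {                      # hypothetical tag -> resource IDs
    "jazz":   {"r1", "r2", "r3"},
    "guitar": {"r2", "r3", "r4"},
    "travel": {"r9"},
}
print(merge_if_associated(["jazz"], ["guitar"], tag_to_resources))  # merged
print(merge_if_associated(["jazz"], ["travel"], tag_to_resources))  # None
```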

Suggestion of Urban Regeneration Type Recommendation System Based on Local Characteristics Using Text Mining (텍스트 마이닝을 활용한 지역 특성 기반 도시재생 유형 추천 시스템 제안)

  • Kim, Ikjun;Lee, Junho;Kim, Hyomin;Kang, Juyoung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.149-169
    • /
    • 2020
  • "The Urban Renewal New Deal project", one of the government's major national projects, is about developing underdeveloped areas by investing 50 trillion won in 100 locations on the first year and 500 over the next four years. This project is drawing keen attention from the media and local governments. However, the project model which fails to reflect the original characteristics of the area as it divides project area into five categories: "Our Neighborhood Restoration, Housing Maintenance Support Type, General Neighborhood Type, Central Urban Type, and Economic Base Type," According to keywords for successful urban regeneration in Korea, "resident participation," "regional specialization," "ministerial cooperation" and "public-private cooperation", when local governments propose urban regeneration projects to the government, they can see that it is most important to accurately understand the characteristics of the city and push ahead with the projects in a way that suits the characteristics of the city with the help of local residents and private companies. In addition, considering the gentrification problem, which is one of the side effects of urban regeneration projects, it is important to select and implement urban regeneration types suitable for the characteristics of the area. In order to supplement the limitations of the 'Urban Regeneration New Deal Project' methodology, this study aims to propose a system that recommends urban regeneration types suitable for urban regeneration sites by utilizing various machine learning algorithms, referring to the urban regeneration types of the '2025 Seoul Metropolitan Government Urban Regeneration Strategy Plan' promoted based on regional characteristics. There are four types of urban regeneration in Seoul: "Low-use Low-Level Development, Abandonment, Deteriorated Housing, and Specialization of Historical and Cultural Resources" (Shon and Park, 2017). In order to identify regional characteristics, approximately 100,000 text data were collected for 22 regions where the project was carried out for a total of four types of urban regeneration. Using the collected data, we drew key keywords for each region according to the type of urban regeneration and conducted topic modeling to explore whether there were differences between types. As a result, it was confirmed that a number of topics related to real estate and economy appeared in old residential areas, and in the case of declining and underdeveloped areas, topics reflecting the characteristics of areas where industrial activities were active in the past appeared. In the case of the historical and cultural resource area, since it is an area that contains traces of the past, many keywords related to the government appeared. Therefore, it was possible to confirm political topics and cultural topics resulting from various events. Finally, in the case of low-use and under-developed areas, many topics on real estate and accessibility are emerging, so accessibility is good. It mainly had the characteristics of a region where development is planned or is likely to be developed. Furthermore, a model was implemented that proposes urban regeneration types tailored to regional characteristics for regions other than Seoul. Machine learning technology was used to implement the model, and training data and test data were randomly extracted at an 8:2 ratio and used. 
In order to compare the performance between various models, the input variables are set in two ways: Count Vector and TF-IDF Vector, and as Classifier, there are 5 types of SVM (Support Vector Machine), Decision Tree, Random Forest, Logistic Regression, and Gradient Boosting. By applying it, performance comparison for a total of 10 models was conducted. The model with the highest performance was the Gradient Boosting method using TF-IDF Vector input data, and the accuracy was 97%. Therefore, the recommendation system proposed in this study is expected to recommend urban regeneration types based on the regional characteristics of new business sites in the process of carrying out urban regeneration projects."
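
A minimal sketch of the best-performing combination reported above, TF-IDF features with a Gradient Boosting classifier and an 8:2 train/test split. The input file and column names are hypothetical, and the parameters are not tuned to reproduce the reported 97% accuracy.

```python
# Minimal sketch of the best-performing combination reported above
# (TF-IDF features + Gradient Boosting, 8:2 train/test split).
# The dataset file and column names are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import GradientBoostingClassifier

df = pd.read_csv("region_texts.csv")      # columns: text, regeneration_type
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["regeneration_type"], test_size=0.2, random_state=42)

model = make_pipeline(TfidfVectorizer(max_features=5000), GradientBoostingClassifier())
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```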

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • The Convolutional Neural Network (ConvNet) is a powerful class of deep neural network that can analyze and learn hierarchies of visual features. The first such network, the Neocognitron, was introduced in the 1980s; at that time, neural networks were not broadly used in industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, in 2012, Krizhevsky achieved a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, which revived interest in neural networks. The success of Convolutional Neural Networks rests on two main factors: the emergence of advanced hardware (GPUs) for sufficient parallel computation, and the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors: for most domains it is difficult and laborious to gather a large-scale dataset to train a ConvNet, and even with such a dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome by transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (for example, trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned by backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. However, applying high-dimensional features extracted directly from multiple ConvNet layers is still challenging. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Our pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and activation features are extracted from its three fully connected layers. Second, the activation features of these three layers are concatenated to obtain a multiple-ConvNet-layer representation, since this captures more information about an image; when the three fully connected layer features are concatenated, the resulting image representation has 9,192 (4096+4096+1000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in the third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning can be improved.
To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple-ConvNet-layer representations against single-ConvNet-layer representations, using PCA for feature selection and dimension reduction. The experiments demonstrated the importance of feature selection for the multiple-ConvNet-layer representation. Moreover, the proposed approach achieved 75.6% accuracy compared to the 73.9% achieved by the FC7 layer on the Caltech-256 dataset, 73.1% compared to the 69.2% achieved by the FC8 layer on the VOC07 dataset, and 52.2% compared to the 48.7% achieved by the FC7 layer on the SUN397 dataset. We also showed that the proposed approach achieved superior performance, with accuracy improvements of 2.8%, 2.1% and 3.1% on Caltech-256, VOC07, and SUN397 respectively, compared to existing work.
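
A rough sketch of the fixed-feature-extractor pipeline described above, under the assumption that the three fully connected layers of torchvision's AlexNet are the layers used: their activations are concatenated into a 9,192-dimensional vector and PCA is applied before training a classifier. The layer indices, preprocessing and PCA dimensionality are illustrative choices, not the authors' settings.

```python
# Illustrative sketch: AlexNet as a fixed feature extractor, concatenating
# FC6/FC7/FC8 activations (4096+4096+1000 = 9192 dims), then PCA.
# Layer indices follow torchvision's AlexNet; parameter choices are assumptions.
import torch
from torchvision import models, transforms
from sklearn.decomposition import PCA

alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()
captured = {}
for name, idx in {"fc6": 1, "fc7": 4, "fc8": 6}.items():
    alexnet.classifier[idx].register_forward_hook(
        lambda m, inp, out, name=name: captured.__setitem__(name, out.detach()))

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

def extract(images):
    """images: list of PIL images -> N x 9192 concatenated feature matrix."""
    batch = torch.stack([preprocess(img) for img in images])
    with torch.no_grad():
        alexnet(batch)
    return torch.cat([captured["fc6"], captured["fc7"], captured["fc8"]], dim=1)

# features = extract(train_images).numpy()
# reduced = PCA(n_components=500).fit_transform(features)  # salient components for the classifier
```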

A study on the classification of research topics based on COVID-19 academic research using Topic modeling (토픽모델링을 활용한 COVID-19 학술 연구 기반 연구 주제 분류에 관한 연구)

  • Yoo, So-yeon;Lim, Gyoo-gun
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.155-174
    • /
    • 2022
  • From January 2020 to October 2021, more than 500,000 academic studies related to COVID-19 (the respiratory disease caused by the SARS-CoV-2 coronavirus) were published. The rapid increase in the number of COVID-19 papers places time and technical constraints on healthcare professionals and policy makers who need to find important research quickly. In this study, we therefore propose a method of extracting useful information from the text of this extensive literature using the LDA and Word2vec algorithms. Papers related to the keywords of interest were extracted from the COVID-19 literature and their detailed topics were identified. The data were taken from the CORD-19 dataset on Kaggle, a free academic resource prepared by major research groups and the White House to respond to the COVID-19 pandemic and updated weekly. The research method has two main parts. First, 41,062 articles were obtained through data filtering and pre-processing of the abstracts of 47,110 academic papers that include full text. The number of COVID-19 publications per year was analyzed through exploratory data analysis using a Python program, and the top 10 journals with the most active research were identified. The LDA and Word2vec algorithms were used to derive COVID-19 research topics, and after analyzing related words, their similarity was measured. Second, papers containing 'vaccine' and 'treatment' were extracted from the derived topics, yielding 4,555 papers related to 'vaccine' and 5,971 papers related to 'treatment'. For each collection of papers, detailed topics were analyzed using the LDA and Word2vec algorithms, and clustering via PCA dimension reduction was applied, with the t-SNE algorithm used to visualize groups of papers with similar themes. A noteworthy result is that topics which did not emerge when topic modeling was applied to all COVID-19 papers together did emerge when topic modeling was performed separately for each research topic. For example, topic modeling of the 'vaccine' papers extracted a new topic, Topic 05 'neutralizing antibodies'; a neutralizing antibody protects cells from infection when a virus enters the body and plays an important role in the development of therapeutic agents and vaccines. Likewise, topic modeling of the 'treatment' papers revealed a new topic, Topic 05 'cytokine'; a cytokine storm occurs when the body's immune cells attack normal cells instead of defending against the pathogen. Hidden topics that could not be found across the entire corpus were thus uncovered by classifying papers according to keywords and performing topic modeling on each subset. In this study, we proposed a method of extracting topics from a large body of literature using the LDA algorithm and extracting similar words using the Skip-gram method, one of the Word2vec models, which predicts context words from a central word. Combining the LDA model and the Word2vec model aims at better performance by capturing both the relationship between documents and LDA topics and the word relationships learned by Word2vec. In addition, as a clustering approach, PCA dimension reduction followed by the t-SNE technique was presented as an intuitive way to group documents with similar themes into a structured organization. In a situation where the efforts of researchers to overcome COVID-19 cannot keep up with the rapid publication of COVID-19 papers, we hope this approach will save the precious time and effort of healthcare professionals and policy makers and help them rapidly gain new insights. It is also expected to serve as basic data for researchers exploring new research directions.
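
A compact sketch of the two methods named above, LDA topic extraction and skip-gram Word2vec similarity, using gensim. The toy token lists and all parameter values are placeholders; the paper's preprocessing of the CORD-19 abstracts is not reproduced here.

```python
# Compact sketch of the two methods named above: LDA topic extraction and
# skip-gram Word2vec similarity. Tokenization and parameters are placeholders.
from gensim.corpora import Dictionary
from gensim.models import LdaModel, Word2Vec

# abstracts: list of token lists; in practice these would come from CORD-19 abstracts
abstracts = [["vaccine", "antibody", "trial"], ["treatment", "cytokine", "storm"]]

dictionary = Dictionary(abstracts)
corpus = [dictionary.doc2bow(doc) for doc in abstracts]
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, random_state=1)
for topic_id, words in lda.print_topics(num_words=3):
    print(topic_id, words)

w2v = Word2Vec(sentences=abstracts, vector_size=50, window=5, min_count=1, sg=1)  # sg=1 -> skip-gram
print(w2v.wv.most_similar("vaccine", topn=2))
```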

