• Title/Summary/Keyword: 자동화된 기계 학습 (automated machine learning)


Convergence of Artificial Intelligence Techniques and Domain Specific Knowledge for Generating Super-Resolution Meteorological Data (기상 자료 초해상화를 위한 인공지능 기술과 기상 전문 지식의 융합)

  • Ha, Ji-Hun;Park, Kun-Woo;Im, Hyo-Hyuk;Cho, Dong-Hee;Kim, Yong-Hyuk
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.10
    • /
    • pp.63-70
    • /
    • 2021
  • Generating super-resolution meteorological data with a high-resolution deep neural network can support precise research and useful real-life services. We propose a new technique for generating improved training data for super-resolution deep neural networks. To generate high-resolution meteorological data with domain-specific knowledge, Lambert conformal conic projection and objective analysis were applied to observation data and ERA5 reanalysis field data from specialized institutions. As a result, temperature and humidity analysis data based on domain-specific knowledge improved RMSE by up to 42% and 46%, respectively. Next, a super-resolution generative adversarial network (SRGAN), one of the artificial intelligence techniques, was used to automate the manual data-generation technique based on the domain-specific knowledge described above. Experiments were conducted to generate data with 1 km resolution from global model data with 10 km resolution. The results generated with SRGAN not only have a higher resolution than the global model input data and show an analysis pattern similar to the manually generated high-resolution analysis data, but also exhibit smooth boundaries.
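The abstract reports RMSE improvements of up to 42% (temperature) and 46% (humidity) for the knowledge-based analysis data. A minimal sketch of how such a relative-improvement figure could be computed is shown below; the grids, function names, and values are hypothetical toy data, not from the paper:

```python
import math

def rmse(field_a, field_b):
    """Root-mean-square error between two flattened grids."""
    assert len(field_a) == len(field_b)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(field_a, field_b)) / len(field_a))

def improvement_pct(rmse_baseline, rmse_improved):
    """Relative RMSE reduction, the metric form reported in the abstract."""
    return 100.0 * (rmse_baseline - rmse_improved) / rmse_baseline

# Toy grids standing in for temperature fields (hypothetical values).
truth    = [10.0, 12.0, 11.0, 13.0]
baseline = [11.0, 13.0, 12.0, 14.0]   # e.g. plain interpolation of the global model
analysis = [10.5, 12.5, 11.5, 13.5]   # e.g. objective-analysis output

r0 = rmse(truth, baseline)
r1 = rmse(truth, analysis)
```

Here the analysis field halves the RMSE of the baseline, i.e. a 50% improvement by this metric.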

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.135-149
    • /
    • 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are being actively developed, making up for the weaknesses of traditional asset allocation methods and replacing the parts those methods handle poorly. A robo-advisor makes automated investment decisions with artificial intelligence algorithms and is used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets; because it avoids investment risk structurally, it is stable for managing large funds and has been widely used in the financial field. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It not only handles billions of examples in limited-memory environments but also learns much faster than traditional boosting methods, and it is frequently used across many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk to the covariance estimation process. Because an optimized asset allocation model estimates investment proportions from historical data, there are estimation errors between the estimation period and the actual investment period, and these errors adversely affect portfolio performance. This study aims to improve the stability and performance of the model by predicting the volatility of the next investment period, thereby reducing the estimation errors of the optimized asset allocation model. As a result, it narrows the gap between theory and practice and proposes a more advanced asset allocation model.
For the empirical test of the suggested model, we used Korean stock market price data covering 17 years, from 2003 to 2019. The data sets comprise the energy, finance, IT, industrial, materials, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results, and analyzed portfolio performance in terms of cumulative rate of return over this long test period. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative yield and reduction of estimation errors. The total cumulative return is 45.748%, about 5 percentage points higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. Many financial and asset allocation models are limited in practical investment by the fundamental question of whether the past characteristics of assets will persist into the future in a changing financial market. This study, however, not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting asset risks with a recent algorithm. Various studies have examined parametric estimation methods for reducing estimation errors in portfolio optimization; we likewise suggest a new, machine learning-based method for reducing the estimation errors of an optimized asset allocation model.
This study is therefore meaningful in that it proposes an advanced artificial intelligence asset allocation model for fast-developing financial markets.
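The paper feeds XGBoost-predicted risk into the covariance estimation step. The sketch below shows only the simplest, correlation-free form of risk parity (inverse-volatility weighting); the predicted volatilities are hypothetical values standing in for the output of a fitted XGBoost regressor, which is not shown:

```python
def risk_parity_weights(volatilities):
    """Naive risk parity: weight each asset by inverse volatility so that
    each asset contributes equally to risk (correlations ignored)."""
    inv = [1.0 / v for v in volatilities]
    total = sum(inv)
    return [w / total for w in inv]

# Hypothetical predicted volatilities for three sectors.
predicted_vol = [0.10, 0.20, 0.40]
weights = risk_parity_weights(predicted_vol)
```

The least volatile asset receives the largest weight (here 4/7), which is the structural risk-avoidance property the abstract describes.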

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.1-21
    • /
    • 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, text mining has been employed to discover new market and technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies, and there has been continuous demand in various fields for market information at the level of specific products. However, such information has generally been provided at the industry level or for broad categories based on classification standards, making specific and proper information difficult to obtain. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural-network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows. First, the data related to product information are collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data are embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data on the extracted products are summed to estimate the market size of the product groups. As experimental data, text data of product names from Statistics Korea's microdata (345,103 cases) were mapped into a multidimensional vector space by Word2Vec training.
We optimized the training parameters and then applied a vector dimension of 300 and a window size of 15 for further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as a product name dataset to cluster product groups more efficiently. Product names similar to the KSIC index words were extracted based on cosine similarity, and the market size of the extracted products as one product category was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied for the first time to market size estimation, overcoming the limitations of traditional methods that rely on sampling or on multiple assumptions. In addition, the level of market category can be easily and efficiently adjusted to the purpose of information use by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application, since it can resolve unmet needs for detailed market size information in the public and private sectors. Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports published by private firms. The limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantics-based word embedding module could be advanced by imposing a proper order on the preprocessed dataset or by combining another measure, such as Jaccard similarity, with Word2Vec.
The product group clustering could also be replaced with other types of unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect they will further improve the performance of the basic model proposed conceptually in this study.
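The grouping-and-summation step above (extract names similar to a KSIC index word by cosine similarity, then sum their sales) can be sketched directly. The 3-d vectors below are toy stand-ins for the 300-dimension Word2Vec embeddings, and the 0.8 threshold and sales figures are hypothetical:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def market_size(index_vec, products, threshold=0.8):
    """Sum sales of products whose embedding is close to the index word."""
    return sum(sales for vec, sales in products if cosine(index_vec, vec) >= threshold)

# Toy embeddings: (vector, sales) pairs for individual companies' products.
index_word = [1.0, 0.0, 0.0]          # stand-in for a KSIC index word
products = [
    ([0.9, 0.1, 0.0], 120.0),         # similar product -> counted
    ([0.8, 0.2, 0.1], 80.0),          # similar product -> counted
    ([0.0, 1.0, 0.0], 500.0),         # unrelated product -> excluded
]
size = market_size(index_word, products)
```

Raising or lowering `threshold` widens or narrows the product group, which is the adjustable-granularity property the abstract highlights.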

A Method of Machine Learning-based Defective Health Functional Food Detection System for Efficient Inspection of Imported Food (효율적 수입식품 검사를 위한 머신러닝 기반 부적합 건강기능식품 탐지 방법)

  • Lee, Kyoungsu;Bak, Yerin;Shin, Yoonjong;Sohn, Kwonsang;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.3
    • /
    • pp.139-159
    • /
    • 2022
  • As interest in health functional foods has increased since COVID-19, the importance of imported food safety inspections is growing. However, in contrast to the annual increase in imports of health functional foods, the budget and manpower available for import/export inspections are reaching their limits. Hence, the purpose of this study is to propose a machine learning model that efficiently detects defective health functional foods and suits the characteristics of the imported food data held by government offices. First, the components of the food import/export inspection data that affect nonconformity judgments were examined, and derived variables were newly created. Second, to select features for machine learning, class imbalance and nonlinearity were considered during exploratory analysis of the imported food data. Third, we compared the performance and interpretability of various machine learning techniques. The ensemble model performed best, confirming that the derived variables and models proposed in this study can benefit the system used in import/export inspections.
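The abstract notes that class imbalance had to be considered, which is typical when defective items are rare. The paper does not specify its balancing method; one simple, generic option, random oversampling of the minority class, could be sketched like this:

```python
import random

def oversample_minority(samples, labels, minority=1, seed=0):
    """Duplicate minority-class rows until both classes have equal counts.
    A basic alternative to more elaborate techniques such as SMOTE."""
    rng = random.Random(seed)
    majority_n = sum(1 for y in labels if y != minority)
    minority_rows = [x for x, y in zip(samples, labels) if y == minority]
    out_x, out_y = list(samples), list(labels)
    while sum(1 for y in out_y if y == minority) < majority_n:
        out_x.append(rng.choice(minority_rows))
        out_y.append(minority)
    return out_x, out_y

# Toy data: four conforming items (0) vs. one defective item (1).
X = [[1], [2], [3], [4], [9]]
y = [0, 0, 0, 0, 1]
Xb, yb = oversample_minority(X, y)
```

After balancing, a classifier trained on `Xb, yb` no longer sees the defective class only once per four conforming examples.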

Implementation of an Automated Agricultural Frost Observation System (AAFOS) (농업서리 자동관측 시스템(AAFOS)의 구현)

  • Kyu Rang Kim;Eunsu Jo;Myeong Su Ko;Jung Hyuk Kang;Yunjae Hwang;Yong Hee Lee
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.26 no.1
    • /
    • pp.63-74
    • /
    • 2024
  • In agriculture, frost can be devastating, which is why frost observation and forecasting are so important. According to a recent report analyzing frost observation data from the Korea Meteorological Administration, despite global warming due to climate change, the last spring frost date has not moved earlier and the frequency of frost has not decreased. It is therefore important to automate frost observation in risk areas and operate it continuously to prevent agricultural frost damage. In existing frost observation with leaf wetness sensors, the reference voltage fluctuates over long periods because of sensor contamination or changes in the humidity of the surrounding environment. In this study, a datalogger program was implemented to correct these problems automatically. The established frost observation system can stably and automatically accumulate time-resolved observation data over long periods. These data can later be used to develop frost diagnosis models with machine learning methods and to produce frost occurrence forecasts for surrounding areas.
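One generic way to remove slow reference-voltage drift of the kind described above is a rolling-baseline correction. The sketch below is illustrative only, not the datalogger program implemented in the paper, and the voltages are hypothetical:

```python
def drift_corrected(readings, window=3):
    """Subtract a rolling baseline (minimum of the last `window` readings)
    so that slow drift in the reference voltage does not mask wet events."""
    corrected = []
    for i, v in enumerate(readings):
        lo = max(0, i - window + 1)
        baseline = min(readings[lo:i + 1])
        corrected.append(v - baseline)
    return corrected

# Hypothetical leaf-wetness voltages: slow upward drift plus one wet event.
raw = [0.10, 0.11, 0.12, 0.55, 0.13, 0.14]
signal = drift_corrected(raw)
```

After correction, the drifting background stays near zero while the wet event at index 3 stands out clearly.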

Development of Information Extraction System from Multi Source Unstructured Documents for Knowledge Base Expansion (지식베이스 확장을 위한 멀티소스 비정형 문서에서의 정보 추출 시스템의 개발)

  • Choi, Hyunseung;Kim, Mintae;Kim, Wooju;Shin, Dongwook;Lee, Yong Hun
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.111-136
    • /
    • 2018
  • In this paper, we propose a methodology for extracting answers to queries from various types of unstructured documents collected from multiple sources on the web, in order to expand a knowledge base. The proposed methodology comprises the following steps. 1) Collect relevant documents from Wikipedia, Naver Encyclopedia, and Naver News for "subject-predicate" separated queries and classify the proper documents. 2) Determine whether each sentence is suitable for information extraction and derive its confidence. 3) Based on the predicate feature, extract information from the proper sentences and derive the overall confidence of the extraction result. To evaluate the performance of the information extraction system, we selected 400 queries from SK Telecom's artificial intelligence speaker; the proposed system shows a higher performance index than the baseline model. The contribution of this study is a sequence tagging model based on a bidirectional LSTM-CRF that uses the predicate feature of the query, with which we developed a robust model that maintains high recall across the various types of unstructured documents collected from multiple sources. Information extraction for knowledge base expansion must account for the heterogeneous characteristics of source-specific document types; the proposed methodology proved to extract information effectively from such documents compared to the baseline model, whereas previous research performed poorly on document types different from the training data.
In addition, by predicting the suitability of documents and sentences for information extraction before the extraction step, this study prevents unnecessary extraction attempts on documents that do not contain the answer, providing a method that maintains precision even in a real web environment. Information extraction for knowledge base expansion targets unstructured documents on the real web, so it cannot be guaranteed that a given document contains the correct answer. When question answering is performed on the real web, previous machine reading comprehension studies show low precision because they frequently attempt to extract an answer even from documents without one. The policy of predicting document and sentence suitability is meaningful in that it helps maintain extraction performance in that environment. The limitations of this study and future research directions are as follows. First, data preprocessing: the unit of knowledge extraction is determined through morphological analysis based on the open-source KoNLPy Python package, and extraction can go wrong when the morphological analysis fails; improving the extraction results requires a more advanced morphological analyzer. Second, entity ambiguity: the information extraction system cannot distinguish identical names with different referents, so if several people with the same name appear in the news, the system may not extract information about the intended query.
Future research therefore needs measures to disambiguate people sharing the same name. Third, evaluation query data: we selected 400 user queries collected from SK Telecom's interactive artificial intelligence speaker to evaluate the system, and built an evaluation data set from 2,800 documents (400 questions × 7 articles per question: 1 Wikipedia, 3 Naver Encyclopedia, 3 Naver News), judging whether each contains a correct answer. To ensure the external validity of the study, the system should be evaluated on more queries; since this is a costly manual activity, future research needs to evaluate the system on a larger query set. It is also necessary to develop a Korean benchmark data set for information extraction over queries on multi-source web documents, to build an environment in which results can be evaluated more objectively.
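The paper's BiLSTM-CRF tagger emits a tag sequence per sentence; turning BIO-style tags into answer spans can be sketched as below. The tagger itself is omitted, and the example tokens and tags are hypothetical:

```python
def extract_spans(tokens, tags):
    """Collect answer spans from BIO tags produced by a sequence tagger
    (e.g. a BiLSTM-CRF; "B" begins a span, "I" continues it, "O" is outside)."""
    spans, current = [], []
    for tok, tag in zip(tokens, tags):
        if tag == "B":
            if current:
                spans.append(" ".join(current))
            current = [tok]
        elif tag == "I" and current:
            current.append(tok)
        else:
            if current:
                spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans

tokens = ["Ada", "Lovelace", "wrote", "the", "first", "program"]
tags   = ["B",   "I",        "O",     "O",   "O",     "O"]
answers = extract_spans(tokens, tags)
```

A suitability filter of the kind the paper proposes would simply skip sentences before this step when the predicted extraction confidence is low.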

Roles and Preparation for the Future Nurse-Educators (미래 간호교육자의 역할과 이를 위한 준비)

  • Kim Susie
    • The Korean Nurse
    • /
    • v.20 no.4 s.112
    • /
    • pp.39-49
    • /
    • 1981
  • Nursing is expanding rapidly, both qualitatively and quantitatively, within and beyond its existing boundaries. In many countries the health care system is inadequately distributed, so many people receive no proper care, and extending high-quality health care to everyone is urgent; even those who do receive care demand better and more humane nursing. Nursing is also expanding rapidly within its own field. For example, in advanced countries such as the United States, the nurse practitioner has emerged as a new branch of the nursing profession and an independent practitioner within the health care system. Developing countries suffering severe physician shortages expect nurses to perform not only traditional nursing functions but also broader roles in the health care system, and are placing many nurses in local health centers; the Korean government's recent legal measure allowing graduate nurses to provide health care in doctorless villages is a concrete example. These rapid changes within and outside nursing have brought about what Alvin Toffler called "future shock," and they pose several questions for the nursing profession. First, what will characterize the field of nursing in the future society? Second, what roles must nurse-educators play to prepare the nurses this new field requires? Third, what practical and realistic strategies can prepare the nurse-educators who will train tomorrow's nurses?
1. What will characterize the field of nursing in the future society? Future nurses will work in an environment quite different from today's, for the following reasons: 1) much new technology, including computerized and automated machines and instruments, will be used in providing health care; 2) most primary health care will be provided by nurses; 3) tomorrow's health care will be consumer-driven; 4) many new specialties will arise within nursing; 5) the future health care system will respond more sensitively to social change and its demands; 6) the emphasis of the health care system will shift from medical treatment to health care; and 7) nurses' roles will move well beyond medical diagnosis and treatment planning toward more distinctive forms of practice inside and outside hospitals. To nurse effectively amid these changes, future nurses will need broader and deeper education and training than today's. To take a holistic approach in a more advanced technological environment, they need training not only in the physical sciences and medicine but also in the behavioral and management sciences, and they must be trained to act as professionals: enterprising, expressive, autonomous, and grounded in applied science. The nurse must be an effective decision maker, problem solver, and skilled practitioner, and also a researcher who keenly observes consumers' health needs and develops effective responses to them.
2. What roles must future nurse-educators play? Nursing education is the foundation of professional practice, which implies that nurse-educators must devote themselves to supplying nurses capable of meeting the public's health needs in the future society. To accomplish this, nurse-educators should work on two fronts: within the educational institution and as individuals. Within the institution, nurse-educators should 1) provide programs that educate the nurses the future society will require; 2) continuously develop, revise, and supplement effective curricula; 3) train students thoroughly according to a well-designed curriculum; 4) serve as models with the confidence and creativity to incorporate predicted future developments into today's curriculum; and 5) involve students in research and in the major decisions that affect their learning. As individuals, educators must keep striving to be competent and credible, with authority, autonomy, and originality across nursing theory, practice, and research, and with a genuine understanding of people.
3. What practical and realistic strategies can prepare competent nurse-educators to train tomorrow's nurses? In discussing such strategies we refer to the situation in Korea. Professional nurse-educators can be prepared in three ways: first, by raising the licensing level of nurse training to the level of professional practice; second, by developing and expanding baccalaureate and graduate nursing education programs; and third, by improving the quality of existing nursing education programs. The first two fall under direct government jurisdiction, so here we discuss only developing and improving existing curricula. Curriculum development that prepares educators for future challenges can proceed on two fronts. First, the quality of curricula can be raised through international exchange of ideas and experience: when nurse-educators from different countries meet regularly to exchange and study ideas and experience, a more systematic and effective chain of development forms, and convening such meetings through an international organization such as the ICN would be a valuable opportunity. Exchanging nurse-educator training curricula between countries or internationally can effectively spread those ideas within a country; an institution with sufficient nursing education experts can develop a new curriculum and disseminate it through annual conferences with less-equipped institutions, which is economical and also effective for developing curricula suited to each country's culture and circumstances. The second strategy is to make up for the obstacles current nurse-educators face in integrating and advancing nursing theory, practice, and research, such as not having completed courses appropriate to holistic nursing and lacking clinical practice experience. As provisional solutions, 1) several universities can develop continuing education programs during vacations so that working nurse-educators can complete the necessary courses, including clinical practice training, and 2) admission requirements for graduate nursing education programs can include two to three years of practical experience. In conclusion, we believe that a true partnership between teacher and student becomes possible through the practical modeling of qualified, competent faculty.


Analysis of Emerging Geo-technologies and Markets Focusing on Digital Twin and Environmental Monitoring in Response to Digital and Green New Deal (디지털 트윈, 환경 모니터링 등 디지털·그린 뉴딜 정책 관련 지질자원 유망기술·시장 분석)

  • Ahn, Eun-Young;Lee, Jaewook;Bae, Junhee;Kim, Jung-Min
    • Economic and Environmental Geology
    • /
    • v.53 no.5
    • /
    • pp.609-617
    • /
    • 2020
  • After introducing the Industry 4.0 policy, the Korean government announced the 'Digital New Deal' and 'Green New Deal' as the 'Korean New Deal' in 2020. We analyzed the Korea Institute of Geoscience and Mineral Resources (KIGAM)'s research projects related to that policy and conducted market analysis focused on digital twin and environmental monitoring technologies. Regarding the 'Data Dam' policy, we suggested digital geo-contents with Augmented Reality (AR) and Virtual Reality (VR) and a public geo-data collection and sharing system. It is necessary to expand and support smart mining and digital oil field research under the '5th-generation mobile communication (5G) and artificial intelligence (AI) convergence into all industries' policy. The Korean government is proposing downtown 3D maps for its 'Digital Twin' policy; KIGAM can provide 3D geological maps and Internet of Things (IoT) systems for social overhead capital (SOC) management. The 'Green New Deal' proposed developing technologies for green industries, including resource circulation, Carbon Capture Utilization and Storage (CCUS), and electric and hydrogen vehicles; KIGAM has carried out related research projects and currently conducts research on domestic energy storage minerals. The oil and gas industry is a representative application of the digital twin, much progress has been made in mining automation and digital mapping, and Digital Twin Earth (DTE) is an emerging research subject. These emerging research subjects are deeply related to data analysis, simulation, AI, and the IoT, so KIGAM should collaborate with sensor and computing software and system companies.

Automated Analyses of Ground-Penetrating Radar Images to Determine Spatial Distribution of Buried Cultural Heritage (매장 문화재 공간 분포 결정을 위한 지하투과레이더 영상 분석 자동화 기법 탐색)

  • Kwon, Moonhee;Kim, Seung-Sep
    • Economic and Environmental Geology
    • /
    • v.55 no.5
    • /
    • pp.551-561
    • /
    • 2022
  • Geophysical exploration methods are very useful for generating high-resolution images of underground structures, and they can be applied to the investigation of buried cultural properties to determine their exact locations. In this study, image feature extraction and image segmentation methods were applied to automatically distinguish the structures of buried relics in high-resolution ground-penetrating radar (GPR) images obtained at the center of the Silla Kingdom, Gyeongju, South Korea. The major purpose of the image feature extraction analyses is to identify the circular features of building remains and the linear features of ancient roads and fences. Feature extraction is implemented with the Canny edge detection and Hough transform algorithms: we applied the Hough transform to the edge image produced by the Canny algorithm to determine the locations of the target features. However, the Hough transform requires different parameter settings for each survey sector. As for image segmentation, we applied a connected-component labeling algorithm and object-based image analysis using the Orfeo Toolbox (OTB) in QGIS. In the connected-component labeled image, the signals associated with the target buried relics are effectively connected and labeled, although multiple labels are often assigned to a single structure in the given GPR data. Object-based image analysis was conducted using Large-Scale Mean-Shift (LSMS) image segmentation: a vector layer containing pixel values for each segmented polygon was estimated first and then used to build a training-validation dataset by assigning the polygons to one class associated with the buried relics and another class for the background field. With a random forest classifier, we find that the polygons of the LSMS segmentation layer can be successfully classified into polygons of the buried relics and polygons of the background.
We therefore propose that the automatic classification methods applied to the GPR images of buried cultural heritage in this study can be useful for obtaining consistent analysis results when planning excavation processes.
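Connected-component labeling, one of the segmentation steps described above, can be sketched in a few lines. This is a toy 4-connected reimplementation on a binary grid, not the OTB/QGIS pipeline the study actually used:

```python
def label_components(grid):
    """4-connected component labeling on a binary grid: each cluster of
    adjacent nonzero cells receives its own integer label."""
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not labels[r][c]:
                next_label += 1
                stack = [(r, c)]        # flood fill from this seed cell
                while stack:
                    i, j = stack.pop()
                    if 0 <= i < rows and 0 <= j < cols and grid[i][j] and not labels[i][j]:
                        labels[i][j] = next_label
                        stack.extend([(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)])
    return labels, next_label

# Toy binary anomaly map: two separate "relic" signals.
grid = [
    [1, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
]
labels, n = label_components(grid)
```

The two clusters receive distinct labels, which is how separate buried structures would be distinguished on an anomaly map; the multiple-labels-per-structure issue the abstract mentions arises when one structure's signal is broken into disconnected clusters.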