• Title/Summary/Keyword: Recall and Precision


Development of Information Extraction System from Multi Source Unstructured Documents for Knowledge Base Expansion (지식베이스 확장을 위한 멀티소스 비정형 문서에서의 정보 추출 시스템의 개발)

  • Choi, Hyunseung;Kim, Mintae;Kim, Wooju;Shin, Dongwook;Lee, Yong Hun
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.111-136
    • /
    • 2018
  • In this paper, we propose a methodology for extracting answer information for queries from various types of unstructured documents collected from multiple sources on the web, in order to expand a knowledge base. The proposed methodology consists of the following steps: 1) collect candidate documents from Wikipedia, Naver encyclopedia, and Naver news for queries separated into "subject-predicate" form and classify the relevant documents; 2) determine whether each sentence is suitable for information extraction and derive a confidence score; 3) based on the predicate feature, extract the information from the suitable sentences and derive an overall confidence for the extraction result. To evaluate the performance of the information extraction system, we selected 400 queries collected from SK Telecom's artificial intelligence speaker, and the proposed system showed higher performance than the baseline model. The contribution of this study is a sequence tagging model based on a bi-directional LSTM-CRF that uses the predicate feature of the query, which yields a robust model that maintains high recall even on the various types of unstructured documents collected from multiple sources. Information extraction for knowledge base expansion must take into account the heterogeneous characteristics of source-specific document types; previous work degrades when extracting information from document types that differ from the training data, whereas the proposed methodology was shown to extract information effectively from various document types compared to the baseline model. In addition, by predicting the suitability of documents and sentences before the extraction step, this study prevents unnecessary extraction attempts on documents that do not contain the answer. This matters because, in the real web, there is no guarantee that a retrieved document contains the correct answer, and previous machine reading comprehension studies show low precision in this setting because they frequently attempt to extract an answer even from documents without one. The suitability-prediction policy is therefore meaningful in that it helps maintain precision in a real web environment. The limitations of this study and directions for future research are as follows. First, data preprocessing: the unit of knowledge extraction is determined through morphological analysis based on the open-source KoNLPy Python package, and extraction can fail when the morphological analysis is incorrect; an improved morphological analyzer is needed. Second, entity ambiguity: the system cannot distinguish different entities that share the same name, so if several people with the same name appear in the news, it may not extract information about the intended entity; future research should address disambiguation of identical names. Third, evaluation data: we selected 400 user queries collected from SK Telecom's interactive artificial intelligence speaker and built an evaluation data set of 800 documents, collected at 7 articles per query (1 Wikipedia, 3 Naver encyclopedia, and 3 Naver news articles), judging whether each document contains the correct answer. To ensure external validity, more queries should be used, but this manual judgment is costly; future research should evaluate the system on more queries and develop a Korean benchmark data set for information extraction over multi-source web documents so that results can be compared more objectively.
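To make the tagging approach above concrete, here is a minimal, hypothetical sketch (not the authors' code) of a bi-directional LSTM tagger that appends a per-token predicate-match feature to the word embeddings; the paper's CRF output layer is omitted for brevity, and all names, sizes, and the flag encoding are assumptions for illustration.

```python
# Sketch only: BiLSTM tagger with a per-token predicate-match flag appended to
# the word embeddings, in the spirit of the predicate-feature sequence tagging
# described above. The CRF layer used in the paper is omitted here.
import torch
import torch.nn as nn

class PredicateBiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # +1 input feature: 1.0 if the token matches the query predicate, else 0.0
        self.lstm = nn.LSTM(emb_dim + 1, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_tags)  # per-token BIO tag scores

    def forward(self, token_ids, predicate_flags):
        x = self.emb(token_ids)                                    # (B, T, E)
        x = torch.cat([x, predicate_flags.unsqueeze(-1)], dim=-1)  # append predicate feature
        h, _ = self.lstm(x)                                        # (B, T, 2H)
        return self.out(h)                                         # (B, T, num_tags)

# toy usage with made-up sizes
model = PredicateBiLSTMTagger(vocab_size=5000, num_tags=3)  # e.g. B/I/O tags
tokens = torch.randint(1, 5000, (2, 10))
flags = torch.zeros(2, 10)
flags[:, 4] = 1.0  # pretend token 4 matches the query predicate
print(model(tokens, flags).shape)  # torch.Size([2, 10, 3])
```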

Generative Adversarial Network-Based Image Conversion Among Different Computed Tomography Protocols and Vendors: Effects on Accuracy and Variability in Quantifying Regional Disease Patterns of Interstitial Lung Disease

  • Hye Jeon Hwang;Hyunjong Kim;Joon Beom Seo;Jong Chul Ye;Gyutaek Oh;Sang Min Lee;Ryoungwoo Jang;Jihye Yun;Namkug Kim;Hee Jun Park;Ho Yun Lee;Soon Ho Yoon;Kyung Eun Shin;Jae Wook Lee;Woocheol Kwon;Joo Sung Sun;Seulgi You;Myung Hee Chung;Bo Mi Gil;Jae-Kwang Lim;Youkyung Lee;Su Jin Hong;Yo Won Choi
    • Korean Journal of Radiology
    • /
    • v.24 no.8
    • /
    • pp.807-820
    • /
    • 2023
  • Objective: To assess whether computed tomography (CT) conversion across different scan parameters and manufacturers using a routable generative adversarial network (RouteGAN) can improve the accuracy and reduce the variability in quantifying interstitial lung disease (ILD) with deep learning-based automated software. Materials and Methods: This study included patients with ILD who underwent thin-section CT. Unmatched CT images obtained using scanners from four manufacturers (vendors A-D), standard- or low-radiation doses, and sharp or medium kernels were classified into groups 1-7 according to acquisition conditions. CT images in groups 2-7 were converted into the target CT style (Group 1: vendor A, standard dose, and sharp kernel) using a RouteGAN. ILD was quantified on original and converted CT images using a deep learning-based software (Aview, Coreline Soft). The accuracy of quantification was analyzed using the dice similarity coefficient (DSC) and pixel-wise overlap accuracy metrics against manual quantification by a radiologist. Five radiologists evaluated quantification accuracy using a 10-point visual scoring system. Results: Three hundred and fifty CT slices from 150 patients (mean age: 67.6 ± 10.7 years; 56 females) were included. The overlap accuracies for quantifying total abnormalities in groups 2-7 improved after CT conversion (original vs. converted: 0.63 vs. 0.68 for DSC, 0.66 vs. 0.70 for pixel-wise recall, and 0.68 vs. 0.73 for pixel-wise precision; P < 0.002 for all). The DSCs of fibrosis score, honeycombing, and reticulation significantly increased after CT conversion (0.32 vs. 0.64, 0.19 vs. 0.47, and 0.23 vs. 0.54, P < 0.002 for all), whereas those of ground-glass opacity, consolidation, and emphysema did not change significantly or decreased slightly. The radiologists' scores were significantly higher (P < 0.001) and less variable on converted CT. Conclusion: CT conversion using a RouteGAN can improve the accuracy and reduce the variability of deep learning-based quantification of ILD on CT images obtained using different scan parameters and manufacturers.
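The overlap metrics quoted above can be illustrated with a small sketch (not the study's software): the Dice similarity coefficient plus pixel-wise precision and recall between an automated lesion mask and a manual reference mask; the toy masks below are made up.

```python
# Sketch: DSC and pixel-wise precision/recall between a predicted mask and a
# reference mask (True = abnormal pixel). Illustrative masks only.
import numpy as np

def overlap_metrics(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8):
    tp = np.logical_and(pred, ref).sum()
    fp = np.logical_and(pred, ~ref).sum()
    fn = np.logical_and(~pred, ref).sum()
    dsc = 2 * tp / (2 * tp + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return dsc, precision, recall

# toy example: two 4x4 masks
pred = np.zeros((4, 4), bool); pred[1:3, 1:4] = True
ref  = np.zeros((4, 4), bool); ref[1:3, 0:3] = True
print(overlap_metrics(pred, ref))
```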

Korean Word Sense Disambiguation using Dictionary and Corpus (사전과 말뭉치를 이용한 한국어 단어 중의성 해소)

  • Jeong, Hanjo;Park, Byeonghwa
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.1-13
    • /
    • 2015
  • As opinion mining in big data applications has been highlighted, a great deal of research on unstructured data has been conducted. Social media on the Internet generate unstructured or semi-structured data every second, often written in the natural languages we use in daily life, and because many words in human languages have multiple meanings or senses, it is very difficult for computers to extract useful information from these datasets. Traditional web search engines are usually based on keyword search, which produces incorrect results far from users' intentions. Although much progress has been made over the years in improving search engines to provide users with appropriate results, there is still considerable room for improvement. Word sense disambiguation plays a very important role in natural language processing and is considered one of the most difficult problems in this area. Major approaches to word sense disambiguation can be classified as knowledge-based, supervised corpus-based, and unsupervised corpus-based. This paper presents a method that automatically generates a corpus for word sense disambiguation by taking advantage of the examples in existing dictionaries, thereby avoiding expensive sense-tagging processes. It evaluates the effectiveness of the method with a Naïve Bayes model, a supervised learning algorithm, using the Korean standard unabridged dictionary and the Sejong Corpus. The Korean standard unabridged dictionary contains approximately 57,000 sentences, and the Sejong Corpus contains about 790,000 sentences tagged with both part-of-speech and senses. For the experiment, the Korean standard unabridged dictionary and the Sejong Corpus were tested both combined and separately using cross validation. Only nouns, the targets of word sense disambiguation here, were selected: 93,522 word senses among 265,655 nouns, together with 56,914 sentences from related proverbs and examples, were added to the corpus. The Sejong Corpus was easily merged with the Korean standard unabridged dictionary because it is tagged with the sense indices defined by that dictionary. Sense vectors were formed after the merged corpus was created, and the terms used in creating the sense vectors were added to the named-entity dictionary of a Korean morphological analyzer. Using the extended named-entity dictionary, term vectors were extracted from the input sentences to form sentence-level term vectors. Given the extracted term vectors and the sense-vector model built during pre-processing, the sense tags of terms were determined by vector-space-model-based word sense disambiguation. The experiments also demonstrate the effectiveness of the corpus merged from the examples of the Korean standard unabridged dictionary and the Sejong Corpus: better precision and recall were obtained with the merged corpus. This study suggests the approach can practically enhance the performance of Internet search engines and support more accurate interpretation of sentences in natural language processing tasks related to search, opinion mining, and text mining. The Naïve Bayes classifier used in this study is a supervised learning algorithm based on Bayes' theorem and assumes that all attributes are independent given the sense. Even though this independence assumption is not realistic and ignores correlations between attributes, the Naïve Bayes classifier is widely used because of its simplicity, and in practice it is known to be very effective in applications such as text classification and medical diagnosis. However, further research is needed to consider all possible combinations and/or partial combinations of the senses in a sentence. The effectiveness of word sense disambiguation may also improve if rhetorical structures or morphological dependencies between words are analyzed through syntactic analysis.
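For illustration only (not the paper's system, which works on Korean dictionary senses), the following sketch shows the kind of supervised Naïve Bayes sense classifier described above, trained on the context words of an ambiguous noun; the English sentences and sense labels are invented stand-ins for dictionary sense indices and example sentences.

```python
# Sketch: a context-word Naive Bayes sense classifier. Training contexts and
# sense labels below are toy placeholders for dictionary examples and senses.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

contexts = [
    "the bank approved the loan and the mortgage",
    "she deposited cash at the bank counter",
    "we had a picnic on the river bank under the trees",
    "the bank of the stream was muddy after the rain",
]
senses = ["bank_finance", "bank_finance", "bank_river", "bank_river"]

wsd = make_pipeline(CountVectorizer(), MultinomialNB())
wsd.fit(contexts, senses)
print(wsd.predict(["he opened an account at the bank"]))  # expected: ['bank_finance']
```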

Performance Evaluation of Monitoring System for Sargassum horneri Using GOCI-II: Focusing on the Results of Removing False Detection in the Yellow Sea and East China Sea (GOCI-II 기반 괭생이모자반 모니터링 시스템 성능 평가: 황해 및 동중국해 해역 오탐지 제거 결과를 중심으로)

  • Han-bit Lee;Ju-Eun Kim;Moon-Seon Kim;Dong-Su Kim;Seung-Hwan Min;Tae-Ho Kim
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_2
    • /
    • pp.1615-1633
    • /
    • 2023
  • Sargassum horneri is a floating alga that breeds in large quantities in the Yellow Sea and East China Sea and then drifts toward the coast of the Republic of Korea, causing problems such as environmental damage and harm to fish farms. To prevent damage effectively and preserve the coastal environment, Sargassum horneri detection algorithms using satellite-based remote sensing have been actively developed. However, incorrect detection information increases the travel distance of collection vessels and causes confusion in the response of related local governments and institutions, so it is very important to minimize false detections when producing Sargassum horneri spatial information. This study applied a technique for automatically removing false detections from the GOCI-II-based Sargassum horneri detection algorithm of the National Ocean Satellite Center (NOSC) of the Korea Hydrographic and Oceanographic Agency (KHOA). Based on an analysis of the main causes of false detections, the technique removes linear and sporadic false detections and treats the green algae that bloom in large quantities along the coast of China in spring and summer as false detections. The automatic false-detection removal was applied to the dates on which Sargassum horneri occurred between February 24 and June 25, 2022. Visual assessment results were generated using mid-resolution satellite images, and qualitative and quantitative evaluations were performed. Linear false detections were completely removed, and most of the sporadic and green-algae false detections that affected the distribution were removed. Even after the automatic removal process, the distribution area of Sargassum horneri could be confirmed against the visual assessment results, and the accuracy and precision calculated with a binary classification model averaged 97.73% and 95.4%, respectively. Recall was very low at 29.03%, presumably because of Sargassum horneri movement caused by the observation-time discrepancy between GOCI-II and the mid-resolution satellite images, differences in spatial resolution, location deviation from orthocorrection, and cloud masking. The false-detection removal results of this study make it possible to determine the spatial distribution of Sargassum horneri in near real time, but accurate estimation of biomass remains limited. Therefore, continuous research on upgrading the Sargassum horneri monitoring system is required so that it can serve as data for establishing future Sargassum horneri response plans.
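The accuracy, precision, and recall figures above come from a binary classification comparison; the sketch below (illustrative counts only, not the study's data) shows how high accuracy and precision can coexist with low recall when most cells are correctly empty but many reference patches are missed.

```python
# Sketch: binary-classification metrics from confusion-matrix counts of
# detected vs. reference presence. The counts are illustrative placeholders.
def binary_metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return accuracy, precision, recall

# mostly correctly empty cells, few wrong detections, many missed patches
# -> high accuracy and precision, low recall
print(binary_metrics(tp=90, tn=9000, fp=5, fn=220))
```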

A Multimodal Profile Ensemble Approach to Development of Recommender Systems Using Big Data (빅데이터 기반 추천시스템 구현을 위한 다중 프로파일 앙상블 기법)

  • Kim, Minjeong;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.4
    • /
    • pp.93-110
    • /
    • 2015
  • A recommender system recommends products to customers who are likely to be interested in them. Based on automated information filtering technology, various recommender systems have been developed. Collaborative filtering (CF), one of the most successful recommendation algorithms, has been applied in many domains such as recommending Web pages, books, movies, music, and products. However, CF has a critical shortcoming: it finds neighbors whose preferences are like those of the target customer and recommends the products those neighbors have liked most, so it works properly only when there is a sufficient number of ratings on common products. When customer ratings are scarce, neighborhood formation becomes inaccurate, resulting in poor recommendations. To improve the performance of CF-based recommender systems, most related studies have focused on developing novel algorithms under the assumption of a single profile created from users' item ratings, purchase transactions, or Web access logs. With the advent of big data, companies collect far more data and a greater variety of information, and many recognize the importance of utilizing big data to improve competitiveness and create new value. In particular, the use of personal big data in recommender systems is drawing attention because it enables more accurate identification of users' preferences and behaviors. The proposed recommendation methodology is as follows. First, multimodal user profiles are created from personal big data in order to capture users' preferences and behavior from various viewpoints; we derive five user profiles based on rating, site preference, demographic, Internet usage, and textual topic information. Next, the similarity between users is calculated from the profiles and neighbors are identified from the results; one of three ensemble approaches is applied to calculate the similarity, using the similarity of the combined profile, the average of the per-profile similarities, or the weighted average of the per-profile similarities. Finally, the products most preferred within the neighborhood are recommended to the target users. For the experiments, we used demographic data and a very large volume of Web log transactions for 5,000 panel users of a company specializing in Web site ranking analysis. R and SAS E-Miner were used to implement the proposed recommender system and to conduct the keyword-based topic analysis, respectively. To evaluate recommendation performance, we used 60% of the data for training and 40% for testing, and 5-fold cross validation was also conducted to enhance the reliability of the experiments. The widely used F1 metric, which gives equal weight to recall and precision, was employed for evaluation. The proposed methodology achieved a significant improvement over the single-profile CF algorithm, and the ensemble approach using the weighted average similarity showed the highest performance: the improvement in F1 is 16.9 percent for the weighted-average ensemble and 8.1 percent for the ensemble using the average of the per-profile similarities. From these results, we conclude that the multimodal profile ensemble approach is a viable solution to the problems encountered when customer ratings are scarce. This study is significant in suggesting what kinds of information can be used to create profiles in a big data environment and how they can be combined and utilized effectively. However, the methodology should be studied further with real-world application in mind; the differences in recommendation accuracy obtained by applying the proposed method to different recommendation algorithms need to be compared to identify which combination performs best.
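A rough sketch of the weighted-average-similarity ensemble described above (not the paper's implementation, which was built in R and SAS E-Miner): user-user cosine similarities are computed per profile and combined with profile weights before neighbors are selected; the profile matrices and weights here are random placeholders.

```python
# Sketch: weighted average of per-profile user similarity matrices, then
# top-k neighbor selection. All data and weights are placeholders.
import numpy as np
from numpy.linalg import norm

rng = np.random.default_rng(0)
n_users = 6
profiles = {name: rng.random((n_users, 8)) for name in
            ["rating", "site", "demographic", "usage", "topic"]}
weights = {"rating": 0.4, "site": 0.2, "demographic": 0.1, "usage": 0.2, "topic": 0.1}

def cosine_matrix(X):
    Xn = X / (norm(X, axis=1, keepdims=True) + 1e-12)
    return Xn @ Xn.T

# weighted average of the per-profile similarity matrices
combined = sum(w * cosine_matrix(profiles[name]) for name, w in weights.items())

target = 0
neighbors = [u for u in np.argsort(-combined[target]) if u != target][:3]
print("top-3 neighbors of user 0:", neighbors)
```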

Diagnostic Classification of Chest X-ray Pneumonia using Inception V3 Modeling (Inception V3를 이용한 흉부촬영 X선 영상의 폐렴 진단 분류)

  • Kim, Ji-Yul;Ye, Soo-Young
    • Journal of the Korean Society of Radiology
    • /
    • v.14 no.6
    • /
    • pp.773-780
    • /
    • 2020
  • With the development of the 4th industrial revolution, research is being conducted to prevent disease and reduce its harm in fields of science and technology such as medicine, healthcare, and biotechnology. As part of this trend, artificial intelligence technology has been introduced and studied for image analysis of radiological examinations. In this paper, we directly apply a deep learning model to the classification and detection of pneumonia in chest X-ray images and evaluate whether the Inception-series deep learning model is useful for detecting pneumonia. As experimental material, a chest X-ray image data set provided and shared free of charge by Kaggle was used; of the total 3,470 chest X-ray images, 1,870 were assigned to the training set, 1,100 to the validation set, and 500 to the test set. In the experiment, the metric evaluation of the Inception V3 deep learning model gave an accuracy of 94.80%, precision of 97.24%, recall of 94.00%, and F1 score of 95.59%. In addition, the final-epoch accuracy of the Inception V3 model for pneumonia detection and classification of chest X-ray images was 94.91% for training and 89.68% for validation, and the loss values were 1.127% for training and 4.603% for validation. These results indicate that the Inception V3 deep learning model extracts and classifies the features of chest image data very well and that its training behavior is also very good. In the per-class accuracy evaluation on the test set, accuracy of 96% for normal chest X-ray images and 97% for pneumonia chest X-ray images was demonstrated. The Inception-series deep learning model is therefore considered a useful model for classifying chest diseases; it is expected to play an auxiliary role to human experts and could help address the shortage of medical personnel. This study is expected to serve as basic data for similar studies on the diagnosis of pneumonia using deep learning.
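A minimal transfer-learning sketch in the spirit of the setup above (not the study's code): Inception V3 with ImageNet weights, a frozen convolutional base, and a binary normal-vs-pneumonia head; the directory layout, batch size, and other hyperparameters are assumptions.

```python
# Sketch: Inception V3 transfer learning for binary chest X-ray classification.
# Paths and hyperparameters are placeholders, not the study's settings.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

base = InceptionV3(weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # freeze the convolutional base for initial training

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),  # normal vs. pneumonia
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])

# hypothetical directory layout: chest_xray/train/{NORMAL,PNEUMONIA}/*.jpeg
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "chest_xray/train", image_size=(299, 299), batch_size=32, label_mode="binary")
# model.fit(train_ds, epochs=10)
model.summary()
```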

Assessment of climate change impact on aquatic ecology health indices in Han river basin using SWAT and random forest (SWAT 및 random forest를 이용한 기후변화에 따른 한강유역의 수생태계 건강성 지수 영향 평가)

  • Woo, So Young;Jung, Chung Gil;Kim, Jin Uk;Kim, Seong Joon
    • Journal of Korea Water Resources Association
    • /
    • v.51 no.10
    • /
    • pp.863-874
    • /
    • 2018
  • The purpose of this study is to evaluate the impact of future climate change on the stream aquatic-ecology health of the Han River watershed (34,148 km²) using SWAT (Soil and Water Assessment Tool) and random forest. Eight years (2008~2015) of spring (April to June) Aquatic ecology Health Indices (AHI), namely the Trophic Diatom Index (TDI), Benthic Macroinvertebrate Index (BMI), and Fish Assessment Index (FAI), scored (0~100) and graded (A~E) by the National Institute of Environmental Research (NIER), were used. Comparing the 8-year NIER indices with water quality (T-N, NH₄, NO₃, T-P, PO₄) showed that the deviation of the AHI scores is large when concentrations are low, and that the AHI scores are negatively correlated with concentration when concentrations are high. Using random forest, a machine learning technique for classification, the classification results for the grades of the 3 indices showed precision, recall, and F1-score all above 0.81. The future SWAT hydrology and water quality results under the HadGEM3-RA RCP 4.5 and 8.5 scenarios of the Korea Meteorological Administration (KMA) showed that future nitrogen-related water quality, averaged over the watershed, increases by up to 43.2% due to the baseflow increase effect, while phosphorus-related water quality decreases by up to 18.9% due to the surface-runoff decrease effect. The future FAI and BMI showed slightly better index grades, while the future TDI showed slightly worse grades. We can infer that the future TDI is more sensitive to nitrogen-related water quality, whereas the future FAI and BMI respond more to phosphorus-related water quality.
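A small sketch (synthetic data, not the study's SWAT outputs) of the random forest grade classification and the precision/recall/F1 reporting mentioned above; the feature columns stand in for the water-quality variables and the grade rule is invented for illustration.

```python
# Sketch: random forest classification of health-index grades with a
# precision/recall/F1 report. All data below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.random((500, 5))                    # stand-ins for T-N, NH4, NO3, T-P, PO4
y = np.where(X[:, 0] + X[:, 3] < 0.8, "A",  # toy rule: low nutrient load -> good grade
             np.where(X[:, 0] + X[:, 3] < 1.3, "B", "C"))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```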

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.175-197
    • /
    • 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been actively conducted and is showing remarkable results in fields such as classification, summarization, and generation. Among the various text analysis tasks, text classification is the most widely used in academia and industry. Text classification includes binary classification with one label from two classes, multi-class classification with one label from several classes, and multi-label classification with multiple labels from several classes. Multi-label classification in particular requires a different training approach from binary and multi-class classification because each instance carries multiple labels, and as the number of labels and classes grows, prediction becomes harder and performance improvement more difficult. To overcome these limitations, research on label embedding is being actively conducted: (i) the initially given high-dimensional label space is compressed into a low-dimensional latent label space, (ii) a model is trained to predict the compressed labels, and (iii) the predicted labels are restored to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, because these techniques consider only linear relationships between labels or compress labels by random transformation, they cannot capture non-linear relationships between labels and therefore cannot create a latent label space that sufficiently preserves the information of the original labels. Recently, attempts to improve performance by applying deep learning to label embedding have been increasing; label embedding using an autoencoder, a deep learning model effective for data compression and reconstruction, is representative. However, traditional autoencoder-based label embedding suffers large information loss when compressing a high-dimensional label space with a huge number of classes into a low-dimensional latent label space, which is related to the vanishing-gradient problem that occurs during backpropagation. Skip connections were devised to address this: by adding the input of a layer to its output, gradients are preserved during backpropagation, enabling efficient learning even in deep networks. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies using them in autoencoders or in the label embedding process are still rare. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder so that the low-dimensional latent label space reflects the information of the high-dimensional label space well. The proposed methodology was applied to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space. Using these, we conducted an experiment that predicts the compressed keyword vector in the latent label space from the paper abstract and evaluates multi-label classification by restoring the predicted keyword vector to the original label space. As a result, the accuracy, precision, recall, and F1 score used as performance indicators were far superior for multi-label classification based on the proposed methodology than for traditional multi-label classification methods. This suggests that the low-dimensional latent label space derived through the proposed methodology reflects the information of the high-dimensional label space well, which ultimately improves the performance of the multi-label classification itself. In addition, the utility of the proposed methodology was examined by comparing its performance across domain characteristics and across numbers of dimensions of the latent label space.
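A schematic sketch (not the authors' architecture; all dimensions are placeholders) of a dense autoencoder for label embedding in which a residual skip connection is added inside both the encoder and the decoder, as described above.

```python
# Sketch: label-embedding autoencoder with skip (residual) connections in the
# encoder and decoder. Layer sizes are arbitrary placeholders.
import tensorflow as tf
from tensorflow.keras import layers, Model

label_dim, latent_dim, hidden = 1000, 64, 512  # hypothetical sizes

# encoder: high-dimensional label vector -> low-dimensional latent label space
inp = layers.Input(shape=(label_dim,))
h1 = layers.Dense(hidden, activation="relu")(inp)
h2 = layers.Dense(hidden, activation="relu")(h1)
h2 = layers.Add()([h1, h2])                       # skip connection in the encoder
latent = layers.Dense(latent_dim, activation="relu", name="latent")(h2)

# decoder: latent label space -> reconstructed label vector
d1 = layers.Dense(hidden, activation="relu")(latent)
d2 = layers.Dense(hidden, activation="relu")(d1)
d2 = layers.Add()([d1, d2])                       # skip connection in the decoder
out = layers.Dense(label_dim, activation="sigmoid")(d2)

autoencoder = Model(inp, out)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.summary()
# In the pipeline described above, a separate text model would predict the
# "latent" vector from the abstract, and the decoder would restore it to the
# original keyword label space for multi-label evaluation.
```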

A study on improving self-inference performance through iterative retraining of false positives of deep-learning object detection in tunnels (터널 내 딥러닝 객체인식 오탐지 데이터의 반복 재학습을 통한 자가 추론 성능 향상 방법에 관한 연구)

  • Kyu Beom Lee;Hyu-Soung Shin
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.26 no.2
    • /
    • pp.129-152
    • /
    • 2024
  • When deep learning object detection is applied to CCTV in tunnels, a large number of false positive detections occur because of the poor environmental conditions of tunnels, such as low illumination and severe perspective effects. This directly undermines the reliability of tunnel CCTV-based accident detection systems that depend on object detection performance, so it is necessary to reduce false positive detections while also increasing true positive detections. Based on a deep learning object detection model, this paper proposes a false-positive data training method that not only reduces false positives but also improves true-positive detection performance through retraining on false positive data. The method consists of the following steps: initial training on a training dataset, inference on a validation dataset, correction of the false positive data and composition of a dataset from them, and addition of that data to the training dataset followed by retraining. Experiments were conducted to verify the performance of this method. First, the optimal hyperparameters of the deep learning object detection model were determined through previous experiments. Then, the training image format was decided, and experiments were conducted sequentially to check the long-term performance improvement obtained by repeatedly retraining on false detection datasets. In the first experiment, keeping the background in the inferred images was more advantageous for object detection performance than removing the background around the object. In the second experiment, retraining on false positives accumulated across retraining rounds was more advantageous for continuous improvement of detection performance than retraining independently on the false positives of each round. After retraining on the false positive data with the settings determined in the two experiments, the car object class showed excellent inference performance with an AP of 0.95 or higher after the first retraining, and by the fifth retraining its inference performance had improved by about 1.06 times compared to the initial inference. The person object class continued to improve as retraining progressed, and by the 18th retraining it had self-improved its inference performance by more than 2.3 times compared to the initial inference.
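The accumulating retraining procedure above can be summarized in a procedural sketch; the functions below are placeholders standing in for detector training and for inference plus manual correction of false positives, not the authors' pipeline.

```python
# Sketch: accumulate corrected false positives across rounds, then retrain on
# the base set plus everything accumulated so far. All functions are stubs.
def train(dataset):                     # placeholder for object-detector training
    print(f"training on {len(dataset)} images")

def infer_false_positives(round_idx):   # placeholder for inference + manual correction
    return [f"fp_r{round_idx}_{i}" for i in range(3)]  # corrected FP images

train_set = [f"base_{i}" for i in range(10)]
accumulated_fp = []

for round_idx in range(1, 4):           # e.g. three retraining rounds
    train(train_set + accumulated_fp)
    new_fp = infer_false_positives(round_idx)
    accumulated_fp += new_fp            # accumulate FPs across rounds
    print(f"round {round_idx}: +{len(new_fp)} corrected FP images "
          f"(total extra: {len(accumulated_fp)})")
```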

Application of Deep Learning for Classification of Ancient Korean Roof-end Tile Images (딥러닝을 활용한 고대 수막새 이미지 분류 검토)

  • KIM Younghyun
    • Korean Journal of Heritage: History & Science
    • /
    • v.57 no.3
    • /
    • pp.24-35
    • /
    • 2024
  • Recently, research using deep learning technologies such as artificial intelligence, convolutional neural networks, etc. has been actively conducted in various fields including healthcare, manufacturing, autonomous driving, and security, and is having a significant influence on society. In line with this trend, the present study attempted to apply deep learning to the classification of archaeological artifacts, specifically ancient Korean roof-end tiles. Using 100 images of roof-end tiles from each of the Goguryeo, Baekje, and Silla dynasties, for a total of 300 base images, a dataset was formed and expanded to 1,200 images using data augmentation techniques. After building a model using transfer learning from the pre-trained EfficientNetB0 model and conducting five-fold cross-validation, an average training accuracy of 98.06% and validation accuracy of 97.08% were achieved. Furthermore, when model performance was evaluated with a test dataset of 240 images, it could classify the roof-end tile images from the three dynasties with a minimum accuracy of 91%. In particular, with a learning rate of 0.0001, the model exhibited the highest performance, with accuracy of 92.92%, precision of 92.96%, recall of 92.92%, and F1 score of 92.93%. This optimal result was obtained by preventing overfitting and underfitting issues using various learning rate settings and finding the optimal hyperparameters. The study's findings confirm the potential for applying deep learning technologies to the classification of Korean archaeological materials, which is significant. Additionally, it was confirmed that the existing ImageNet dataset and parameters could be positively applied to the analysis of archaeological data. This approach could lead to the creation of various models for future archaeological database accumulation, the use of artifacts in museums, and classification and organization of artifacts.
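A sketch of the transfer-learning and five-fold cross-validation setup described above (not the study's code): EfficientNetB0 with ImageNet weights and a three-class head evaluated with stratified folds; the arrays are random placeholders for the augmented tile images, and only the 0.0001 learning rate follows the abstract.

```python
# Sketch: EfficientNetB0 transfer learning with stratified 5-fold CV for a
# three-class classifier. Images and labels below are random placeholders.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import StratifiedKFold

X = np.random.rand(60, 224, 224, 3).astype("float32")  # placeholder images
y = np.repeat([0, 1, 2], 20)                            # Goguryeo / Baekje / Silla

def build_model():
    base = tf.keras.applications.EfficientNetB0(
        weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(3, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),  # learning rate from the abstract
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (tr, va) in enumerate(skf.split(X, y), 1):
    model = build_model()
    model.fit(X[tr], y[tr], validation_data=(X[va], y[va]), epochs=1, verbose=0)
    print(f"fold {fold} val acc:", model.evaluate(X[va], y[va], verbose=0)[1])
```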