• Title/Summary/Keyword: evaluation dataset (평가 데이터셋)

Missing Value Imputation Technique for Water Quality Dataset

  • Jin-Young Jun;Youn-A Min
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.4
    • /
    • pp.39-46
    • /
    • 2024
  • Many researchers evaluate water quality using various models. Such models require a dataset without missing values, but in the real world most datasets include missing values for various reasons. Simply deleting samples with missing values can distort the distribution of the underlying data and poses a significant risk of biasing the model's inference when the missing mechanism is not MCAR (missing completely at random). In this study, to explore the most appropriate technique for handling missing values in water quality data, several imputation techniques were tested based on existing KNN and MICE imputation, with and without the generative neural network models Autoencoder (AE) and Denoising Autoencoder (DAE). The results show that the combined KNN and MICE imputation without generative networks provides estimates closest to the true values. When binary classification models based on support vector machines and ensemble algorithms were evaluated after applying the combined imputation technique to an observed water quality dataset with missing values, they showed better performance in terms of accuracy, F1 score, ROC-AUC score, and MCC than models evaluated after deleting samples with missing values.
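The KNN side of the combined imputation can be sketched minimally in pure Python; the water-quality feature layout and toy values below are hypothetical, and this distance-based fill-in stands in for the authors' KNN/MICE pipeline rather than reproducing it:

```python
import math

def knn_impute(rows, k=2):
    """Fill missing entries (None) with the mean of that feature over
    the k complete rows nearest in the row's observed features."""
    complete = [r for r in rows if None not in r]
    out = []
    for r in rows:
        if None not in r:
            out.append(list(r))
            continue
        obs = [i for i, v in enumerate(r) if v is not None]
        # rank complete rows by distance over the features this row has
        nbrs = sorted(complete,
                      key=lambda c: math.dist([r[i] for i in obs],
                                              [c[i] for i in obs]))[:k]
        out.append([v if v is not None else sum(n[i] for n in nbrs) / k
                    for i, v in enumerate(r)])
    return out

# Toy water-quality rows: [pH, dissolved O2, turbidity]; None = missing.
data = [[7.0, 8.0, 1.0], [7.2, 8.2, 1.2], [7.1, None, 1.1]]
print(knn_impute(data, k=2))
```

The missing dissolved-oxygen value is replaced by the mean of its two nearest neighbors' values.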

SRLev-BIH: An Evaluation Metric for Korean Generative Commonsense Reasoning (SRLev-BIH: 한국어 일반 상식 추론 및 생성 능력 평가 지표)

  • Jaehyung Seo;Yoonna Jang;Jaewook Lee;Hyeonseok Moon;Sugyeong Eo;Chanjun Park;Aram So;Heuiseok Lim
    • Annual Conference on Human and Language Technology
    • /
    • 2022.10a
    • /
    • pp.176-181
    • /
    • 2022
  • Commonsense reasoning is one of the most human-like abilities and is difficult for artificial intelligence models to imitate. Deep-learning-based language models still perform poorly in areas that require reasoning grounded in common sense, and in Korean in particular, research on commonsense reasoning is quite scarce. To mitigate this problem, Korean CommonGen [1], a Korean dataset for generative commonsense reasoning, was recently released. However, the dataset's evaluation metrics rely on lexical-level similarity and overlap, making it difficult to measure whether a generated sentence actually conforms to common sense. To improve the evaluation of Korean generative commonsense reasoning, this paper therefore proposes SRLev, which evaluates generated output based on the semantic roles of sentence constituents and morphological variation at the jamo level; BIH, which is trained on human evaluation results; and SRLev-BIH, which combines the strengths of the two metrics.
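The jamo-level comparison that SRLev builds on can be illustrated with a small sketch; this shows only a jamo edit-distance similarity (the semantic-role component and the BIH model are beyond a toy example), with the decomposition following the standard Unicode Hangul syllable arithmetic:

```python
def jamo(text):
    """Decompose Hangul syllables into (lead, vowel, tail) jamo indices;
    pass non-Hangul characters through unchanged."""
    out = []
    for ch in text:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:
            out += [("L", code // 588), ("V", (code % 588) // 28)]
            if code % 28:
                out.append(("T", code % 28))
        else:
            out.append(ch)
    return out

def levenshtein(a, b):
    """Standard dynamic-programming edit distance over two sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def jamo_similarity(hyp, ref):
    h, r = jamo(hyp), jamo(ref)
    return 1 - levenshtein(h, r) / max(len(h), len(r))

print(jamo_similarity("먹었다", "먹는다"))  # 0.625: only 3 of 8 jamo differ
```

Comparing at the jamo level credits partial matches inside a syllable that a word- or syllable-level overlap metric would miss entirely.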

Prompt-based Data Augmentation for Generating Personalized Conversation Using Past Counseling Dialogues (과거 상담대화를 활용한 개인화 대화생성을 위한 프롬프트 기반 데이터 증강)

  • Chae-Gyun Lim;Hye-Woo Lee;Kyeong-Jin Oh;Joo-Won Sung;Ho-Jin Choi
    • Annual Conference on Human and Language Technology
    • /
    • 2023.10a
    • /
    • pp.209-213
    • /
    • 2023
  • Recently, in natural language understanding, methods that interact with large language models through prompts have been widely studied; in the counseling domain in particular, a language model can be extended into a dialogue-generation model that leads natural conversations with a client. Training a model that conducts personalized counseling conversations tailored to a client's situation requires past and subsequent counseling sessions for the same client, but existing datasets are mostly built as single dialogue sessions. This paper proposes a prompt-based data augmentation technique that uses a language model to extend an existing counseling dataset built from single sessions into training data composed of consecutive sessions. The proposed technique consists of a stage that generates summary questions reflecting the previous dialogue and a stage that generates the next counseling session while maintaining the dialogue context; we extend a counseling dataset through prompt engineering and confirm the effect of the proposed augmentation on data quality through human evaluation.
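The two prompting stages can be sketched as plain templates; the wording and session data below are hypothetical stand-ins, since the paper's actual prompts are not given in the abstract:

```python
def stage1_summary_prompt(past_session):
    """Stage 1: ask the model to condense the past session into
    follow-up questions for the next session."""
    transcript = "\n".join(f"{who}: {utt}" for who, utt in past_session)
    return (f"Here is a past counseling session:\n{transcript}\n"
            "Write summary questions a counselor could ask next time.")

def stage2_next_session_prompt(past_session, summary_questions):
    """Stage 2: generate the next session, conditioned on stage-1 output
    so the dialogue context is maintained."""
    transcript = "\n".join(f"{who}: {utt}" for who, utt in past_session)
    return (f"Past session:\n{transcript}\n"
            f"Follow-up questions:\n{summary_questions}\n"
            "Continue with a next counseling session that keeps this context.")

session = [("client", "I have trouble sleeping."),
           ("counselor", "How long has this been going on?")]
print(stage1_summary_prompt(session))
```

Each stage-1 output would be fed to stage 2 to yield a consecutive-session pair from a single-session record.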

Practical Datasets for Similarity Measures and Their Threshold Values (유사도 측정 데이터 셋과 쓰레숄드)

  • Yang, Byoungju;Shim, Junho
    • The Journal of Society for e-Business Studies
    • /
    • v.18 no.1
    • /
    • pp.97-105
    • /
    • 2013
  • In the e-business domain, where collections of data objects are large, measuring similarity to find identical or similar objects is important. It basically requires comparing and computing the features of objects in pairs, and therefore takes longer as the amount of data grows. Recent studies have proposed various algorithms to perform it efficiently, most of which demonstrate their superior performance through empirical tests on certain datasets. In this paper, we introduce those datasets and present their characteristics and the meaningful threshold values that each dataset inherently contains. This analysis of practical datasets with respect to their threshold values may serve as a referential baseline for future experiments on newly developed algorithms.
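The threshold-based similarity join that these datasets benchmark can be sketched with Jaccard similarity over small sets; the records and threshold below are illustrative:

```python
def jaccard(a, b):
    """Jaccard similarity: shared elements over all distinct elements."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def similar_pairs(objects, threshold):
    """All-pairs similarity join: keep pairs at or above the threshold."""
    items = list(objects.items())
    return [(n1, n2) for i, (n1, s1) in enumerate(items)
            for n2, s2 in items[i + 1:]
            if jaccard(s1, s2) >= threshold]

records = {"p1": {"data", "mining", "set"},
           "p2": {"data", "mining", "frequent"},
           "p3": {"neural", "network"}}
print(similar_pairs(records, threshold=0.5))  # [('p1', 'p2')]
```

The choice of threshold decides which pairs survive, which is why the paper's per-dataset natural threshold values matter for fair comparisons.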

A Cross-Validation of a Seismic Vulnerability Assessment Model: Application to the 9.12 Gyeongju and 2017 Pohang Earthquakes (지진 취약성 평가 모델 교차검증: 경주(2016)와 포항(2017) 지진을 대상으로)

  • Han, Jihye;Kim, Jinsoo
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.3
    • /
    • pp.649-655
    • /
    • 2021
  • This study aims to cross-validate the performance of the optimal seismic vulnerability assessment model from previous studies conducted in Gyeongju by applying it to another region. The test area was Pohang City, the site of the 2017 Pohang Earthquake, and the dataset was built with the same influencing factors and earthquake-damaged buildings as in the previous studies. The validation dataset was built via random sampling, and prediction accuracy was derived by applying it to a random forest (RF) model trained on Gyeongju. The model's success and prediction accuracies in Gyeongju were 100% and 94.9%, respectively, while the prediction accuracy on the Pohang validation dataset was 70.4%.
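The cross-region validation protocol amounts to: fit on one region's data, then measure accuracy on the other's. The 1-nearest-neighbor rule below is a stand-in for the paper's random forest, and all features and labels are toy values:

```python
import math

def train_1nn(X, y):
    """Stand-in for the paper's random forest: memorize training points."""
    return list(zip(X, y))

def predict(model, x):
    """Label of the nearest training point."""
    return min(model, key=lambda p: math.dist(p[0], x))[1]

def cross_region_accuracy(model, X_test, y_test):
    """Apply a model fit on one region (e.g. Gyeongju) to another (e.g. Pohang)."""
    hits = sum(predict(model, x) == t for x, t in zip(X_test, y_test))
    return hits / len(y_test)

# Toy features: [ground-motion proxy, building age]; label 1 = damaged.
gyeongju_X = [[0.1, 10], [0.4, 40], [0.2, 15], [0.5, 50]]
gyeongju_y = [0, 1, 0, 1]
pohang_X = [[0.15, 12], [0.45, 45]]
pohang_y = [0, 1]

model = train_1nn(gyeongju_X, gyeongju_y)
print(cross_region_accuracy(model, pohang_X, pohang_y))
```

The accuracy gap the paper reports (94.9% in-region vs. 70.4% cross-region) is exactly what this protocol is designed to expose.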

Performance Evaluation of AHDR Model using Channel Attention (채널 어텐션을 이용한 AHDR 모델의 성능 평가)

  • Youn, Seok Jun;Lee, Keuntek;Cho, Nam Ik
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2021.06a
    • /
    • pp.335-338
    • /
    • 2021
  • This paper evaluates how performance changes when a channel attention technique is applied to the existing AHDRNet. Channel attention was applied by adding further convolutional layers between the DRDBs (Dilated Residual Dense Blocks) in the merging network of the existing model and after the dilated convolutional layers inside each DRDB. The Kalantari dataset was used, and a comparison in PSNR (Peak Signal-to-Noise Ratio) showed that the proposed model reaches 42.8135, higher than the 42.1656 of the original AHDRNet.
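A generic channel-attention gate (squeeze-and-excitation style) can be sketched in pure Python; this standalone toy is not the AHDRNet/DRDB integration described above, and the weights are hypothetical:

```python
import math

def channel_attention(fmap, w1, w2):
    """Channel attention on a C x H x W feature map: global-average-pool
    each channel, pass the pooled vector through two small dense layers
    (ReLU then sigmoid), and rescale every channel by its gate."""
    # squeeze: one scalar per channel
    z = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in fmap]
    # excite: C -> C/r (ReLU) -> C (sigmoid gate)
    h = [max(0.0, sum(w * x for w, x in zip(row, z))) for row in w1]
    g = [1 / (1 + math.exp(-sum(w * x for w, x in zip(row, h)))) for row in w2]
    # rescale each channel by its gate value
    return [[[v * g[c] for v in row] for row in ch]
            for c, ch in enumerate(fmap)]

fmap = [[[1.0, 1.0], [1.0, 1.0]],   # channel 0
        [[2.0, 2.0], [2.0, 2.0]]]   # channel 1
w1 = [[0.5, 0.5]]                   # squeeze C=2 -> 1 (hypothetical weights)
w2 = [[1.0], [1.0]]                 # excite 1 -> C=2
out = channel_attention(fmap, w1, w2)
print(out[0][0][0], out[1][0][0])
```

In a real network the two dense layers are learned, so informative channels get gates near 1 and uninformative ones are suppressed.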

Mining Frequent Itemsets using Time Unit Grouping (시간 단위 그룹핑을 이용한 빈발 아이템셋 마이닝)

  • Hwang, Jeong Hee
    • The Journal of the Convergence on Culture Technology
    • /
    • v.8 no.6
    • /
    • pp.647-653
    • /
    • 2022
  • Data mining is a technique that discovers knowledge such as relationships and patterns by exploring and analyzing data. Data that occurs in the real world includes temporal attributes, and temporal data mining research, which seeks useful knowledge in data with temporal properties, can be effectively utilized for predicting the future. In this paper, we propose an algorithm that uses time-unit grouping to partition the database into regular time-period units and discover frequent itemsets per time unit. The proposed algorithm organizes the transactions and items included in each time unit into a matrix and discovers frequent items in the time unit through grouping. In the performance evaluation, the execution time was 1.2 times that of the existing algorithm, but more than twice as many frequent itemsets were discovered.
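The time-unit grouping idea can be sketched directly: bucket transactions by a fixed time unit, then count itemsets per bucket (here only up to pairs, on toy data; the paper's matrix-based algorithm is more elaborate):

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets_by_time(transactions, unit, min_support):
    """Group (timestamp, items) transactions into fixed time-unit buckets
    and keep the itemsets (singletons and pairs) meeting min_support."""
    buckets = {}
    for t, items in transactions:
        buckets.setdefault(t // unit, []).append(sorted(set(items)))
    result = {}
    for b, txs in sorted(buckets.items()):
        counts = Counter()
        for tx in txs:
            for k in (1, 2):
                counts.update(combinations(tx, k))
        result[b] = {s for s, c in counts.items() if c >= min_support}
    return result

logs = [(1, ["a", "b"]), (2, ["a", "c"]), (3, ["a", "b"]),
        (11, ["c"]), (12, ["c", "d"])]
print(frequent_itemsets_by_time(logs, unit=10, min_support=2))
```

Grouping by time unit lets an itemset that is frequent only within one period surface, even when it is rare over the whole database.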

Performance Improvement Analysis of Building Extraction Deep Learning Model Based on UNet Using Transfer Learning at Different Learning Rates (전이학습을 이용한 UNet 기반 건물 추출 딥러닝 모델의 학습률에 따른 성능 향상 분석)

  • Chul-Soo Ye;Young-Man Ahn;Tae-Woong Baek;Kyung-Tae Kim
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_4
    • /
    • pp.1111-1123
    • /
    • 2023
  • In recent times, semantic image segmentation methods using deep learning models have been widely used for monitoring changes in surface attributes with remote sensing imagery. To enhance the performance of UNet-based deep learning models, including the prominent UNet model itself, it is imperative to have a sufficiently large training dataset. However, enlarging the training dataset not only escalates the hardware requirements for processing but also significantly increases the training time. To address these issues, transfer learning is an effective approach, enabling performance improvements even in the absence of massive training datasets. In this paper, we present three transfer learning models, UNet-ResNet50, UNet-VGG19, and CBAM-DRUNet-VGG19, which combine UNet variants with the representative pretrained VGG19 and ResNet50 models. We applied these models to building extraction tasks and analyzed the accuracy improvements resulting from transfer learning. Considering the substantial impact of the learning rate on deep learning model performance, we also analyzed the performance variation of each model under different learning rate settings. We employed three datasets, namely the Kompsat-3A, WHU, and INRIA datasets, to evaluate building extraction performance. The average accuracy improvements over the UNet model across the three datasets were 5.1% for UNet-ResNet50, while both UNet-VGG19 and CBAM-DRUNet-VGG19 achieved a 7.2% improvement.
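The transfer-learning setup, a frozen pretrained encoder with only a small head trained at a chosen learning rate, can be sketched on toy features; the logistic head below is illustrative, not the UNet variants themselves:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def train_head(feats, labels, lr, epochs=200):
    """Transfer-learning sketch: feats stand in for outputs of a frozen
    pretrained encoder; only this logistic head is trained, at rate lr."""
    w, b = [0.0] * len(feats[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(feats, labels):
            g = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def accuracy(w, b, feats, labels):
    hits = sum(int(sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5) == y
               for x, y in zip(feats, labels))
    return hits / len(labels)

# Toy 1-D "encoder features" with separable classes.
feats, labels = [[0.0], [0.1], [0.9], [1.0]], [0, 0, 1, 1]
for lr in (0.5, 1e-4):
    w, b = train_head(feats, labels, lr)
    print(lr, accuracy(w, b, feats, labels))
```

With a reasonable learning rate the head separates the classes quickly; with a rate that is far too small, the weights barely move in the same budget, which mirrors the paper's point that the learning rate strongly affects outcomes.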

Sentence Filtering Dataset Construction Method about Web Corpus (웹 말뭉치에 대한 문장 필터링 데이터 셋 구축 방법)

  • Nam, Chung-Hyeon;Jang, Kyung-Sik
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.11
    • /
    • pp.1505-1511
    • /
    • 2021
  • Pretrained models with high performance across natural language processing tasks have the advantage of learning the linguistic patterns of sentences from a large corpus during training, allowing each token in an input sentence to be represented with an appropriate feature vector. One method of constructing the corpus required for pretraining is collection with a web crawler. However, sentences on the web have varied patterns and may contain unnecessary words in part or in whole. In this paper, we propose a dataset construction method for filtering sentences containing unnecessary words, using neural network models, for a corpus collected from the web. As a result, we construct a dataset containing a total of 2,330 sentences. We also evaluated the performance of neural network models on the constructed dataset, and the BERT model showed the highest performance with an accuracy of 93.75%.
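A filtering rule of the kind such a dataset supports can be sketched heuristically; this stand-in uses word count and symbol density rather than the paper's neural (BERT-based) filter, and the thresholds are hypothetical:

```python
import re

def looks_clean(sentence, min_words=3, max_symbol_ratio=0.1):
    """Heuristic stand-in for a learned sentence filter: reject sentences
    that are too short or dominated by non-word symbols, as navigation
    menus, ads, and markup fragments from web pages tend to be."""
    if len(sentence.split()) < min_words:
        return False
    symbols = len(re.findall(r"[^\w\s]", sentence))
    return symbols / max(len(sentence), 1) <= max_symbol_ratio

crawled = ["Home > Login > Sign-up ©2021",
           "The model learns linguistic patterns from a large corpus.",
           "click!!"]
print([s for s in crawled if looks_clean(s)])
```

A trained classifier such as the paper's BERT filter replaces these hand-set thresholds with decisions learned from labeled clean/noisy sentences.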

A Study on Methodology for Building an NLI Benchmark Dataset in Korean (한국어 추론 벤치마크 데이터 구축을 위한 방법론 연구)

  • Han, Jiyoon;Kim, Hansaem
    • Annual Conference on Human and Language Technology
    • /
    • 2020.10a
    • /
    • pp.292-297
    • /
    • 2020
  • A natural language inference model classifies the semantic relation between a premise and a hypothesis as one of three labels: entailment, contradiction, or neutral. In English, the RTE (Recognizing Textual Entailment) datasets and various NLI (Natural Language Inference) datasets are publicly available as benchmarks for developing and evaluating such models. This study explores a methodology for building a Korean NLI benchmark dataset, building on a linguistic analysis of annotation guidelines and entailment data from abroad and on semantic research into entailment and contradiction. To annotate entailment and contradiction, we define the linguistic phenomena associated with each semantic relation and present methods for generating hypotheses; on this basis, we also discuss the format of the data to be built and the annotation process.
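The three-way label scheme can be illustrated with hypothetical premise-hypothesis pairs in the shape such a benchmark typically takes:

```python
# Hypothetical premise-hypothesis pairs covering the three NLI labels.
nli_samples = [
    {"premise": "A man is playing the guitar on stage.",
     "hypothesis": "Someone is performing music.",
     "label": "entailment"},      # hypothesis must be true given the premise
    {"premise": "A man is playing the guitar on stage.",
     "hypothesis": "The stage is empty.",
     "label": "contradiction"},   # hypothesis cannot be true given the premise
    {"premise": "A man is playing the guitar on stage.",
     "hypothesis": "The concert is sold out.",
     "label": "neutral"},         # premise neither confirms nor denies it
]

LABELS = {"entailment", "contradiction", "neutral"}
assert all(s["label"] in LABELS for s in nli_samples)
```

The paper's methodology concerns how to generate such hypotheses systematically from defined linguistic phenomena, rather than ad hoc as in this toy.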
