• Title/Summary/Keyword: Research dataset

행정정보데이터세트의 데이터 품질평가 연구 (A Study on Data Quality Evaluation of Administrative Information Dataset)

  • 송치호;임진희
    • 기록학연구 / No. 71 / pp. 237-272 / 2022
  • Since 2019, the National Archives of Korea has led a pilot project to build a records management framework for administrative information datasets. Based on the results of the three-year project running through 2021, improved management measures for administrative information datasets are to be reflected in public records legislation and guidelines, bringing these datasets fully within the scope of public records management. Although public records have shifted to an electronic-document-centered regime and the datasets of administrative information systems are now formally subject to records management, research on the quality requirements of the data itself, the raw data that constitutes the records, is still lacking. If data quality is not assured, the dataset, which is both a construct of data and an aggregate of records, has all four essential attributes of a record placed at risk. Moreover, administrative information systems are built to meet the diverse requirements of agencies' operational departments without regard to the specifications of the standard records management system; if the quality of their data cannot be trusted from a records management perspective, the trustworthiness of the public records themselves cannot be secured. Building on the management measures presented in the National Archives of Korea's 2021 study "행정정보데이터세트 기록정보 서비스 및 활용모형 연구" (a study on records information services and utilization models for administrative information datasets), this study addresses appraisal in its actively expanded sense, focusing on data quality evaluation. Referring to government-wide data policies and guides, especially those for public data, it derives quality evaluation requirements from a records management standpoint and proposes concrete indicators, in the hope of supporting the full-scale records management of administrative information datasets to come.
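
The abstract stops at requirements and indicators without giving any formulas; as a rough, hypothetical illustration of the kind of indicator a records-management quality check might compute, the sketch below scores a tabular dataset for completeness and key uniqueness with pandas. The column names, key field, and example records are invented for the example.

```python
# A minimal sketch of dataset-level quality indicators (completeness and
# key uniqueness). The column names and example records are hypothetical,
# not taken from the study.
import pandas as pd

def quality_indicators(df: pd.DataFrame, key_column: str) -> dict:
    total_cells = df.shape[0] * df.shape[1]
    completeness = 1.0 - df.isna().sum().sum() / total_cells   # share of non-missing cells
    uniqueness = df[key_column].nunique() / len(df)            # duplicate-free key ratio
    return {"completeness": round(completeness, 4),
            "uniqueness": round(uniqueness, 4)}

if __name__ == "__main__":
    records = pd.DataFrame({
        "doc_id": ["A-001", "A-002", "A-002", "A-003"],
        "created": ["2021-01-04", "2021-01-05", "2021-01-05", None],
        "department": ["archives", "archives", "archives", "finance"],
    })
    print(quality_indicators(records, key_column="doc_id"))
```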

Land Cover Classification over Yellow River Basin

  • Matsuoka, M.;Hayasaka, T.;Fukushima, Y.;Honda, Y.
    • 대한원격탐사학회 학술대회논문집 / Proceedings of ACRS 2003 ISRS / pp. 511-512 / 2003
  • The Terra/MODIS dataset over the Yellow River Basin, China, was generated as an input to the water resource management model developed in the Research Revolution 2002 (RR2002) project. The dataset is mainly used for land cover classification and radiation budget analysis. This paper introduces the outline of the dataset generation and a simple land cover classification method, which will be developed to avoid the influence of cloud contamination and missing data.
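
The classification method itself is not described in this summary; purely as a sketch of one common way to limit cloud contamination before classifying, the code below builds a per-pixel maximum-NDVI temporal composite from a stack of scenes and applies a simple nearest-centroid rule. The array shapes, class centroids, and cloud simulation are assumptions, not the authors' procedure.

```python
# Sketch: temporal compositing to suppress clouds/missing data, then a
# simple nearest-centroid land cover rule. Shapes and centroids are
# illustrative assumptions, not the method used in the paper.
import numpy as np

def max_ndvi_composite(ndvi_stack: np.ndarray) -> np.ndarray:
    """ndvi_stack: (time, height, width) with NaN for cloudy or missing pixels."""
    return np.nanmax(ndvi_stack, axis=0)

def classify(ndvi: np.ndarray, centroids: dict) -> np.ndarray:
    names = list(centroids)
    values = np.array([centroids[n] for n in names])   # (n_classes,)
    dist = np.abs(ndvi[..., None] - values)            # (H, W, n_classes)
    return np.array(names)[dist.argmin(axis=-1)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stack = rng.uniform(0.0, 0.9, size=(8, 4, 4))
    stack[rng.random(stack.shape) < 0.2] = np.nan      # simulate cloud-contaminated pixels
    composite = max_ndvi_composite(stack)
    labels = classify(composite, {"water": 0.05, "bare": 0.2, "crop": 0.5, "forest": 0.8})
    print(labels)
```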

Developing an Intrusion Detection Framework for High-Speed Big Data Networks: A Comprehensive Approach

  • Siddique, Kamran;Akhtar, Zahid;Khan, Muhammad Ashfaq;Jung, Yong-Hwan;Kim, Yangwoo
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12 No. 8 / pp. 4021-4037 / 2018
  • In network intrusion detection research, two characteristics are generally considered vital to building efficient intrusion detection systems (IDSs): an optimal feature selection technique and robust classification schemes. However, the emergence of sophisticated network attacks and the advent of big data concepts in intrusion detection domains require two more significant aspects to be addressed: employing an appropriate big data computing framework and utilizing a contemporary dataset to deal with ongoing advancements. As such, we present a comprehensive approach to building an efficient IDS with the aim of strengthening academic anomaly detection research in real-world operational environments. The proposed system has the following four characteristics: (i) it performs optimal feature selection using information gain and branch-and-bound algorithms; (ii) it employs machine learning techniques for classification, namely, Logistic Regression, Naïve Bayes, and Random Forest; (iii) it introduces bulk synchronous parallel processing to handle the computational requirements of large-scale networks; and (iv) it utilizes a real-time contemporary dataset generated by the Information Security Centre of Excellence at the University of New Brunswick (ISCX-UNB) to validate its efficacy. Experimental analysis shows the effectiveness of the proposed framework, which is able to achieve high accuracy, low computational cost, and reduced false alarms.
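
The ISCX-UNB data and the bulk synchronous parallel layer cannot be reproduced from the abstract alone; the sketch below only illustrates the general shape of the feature-selection and classification stages it describes, using scikit-learn's mutual information (the quantity information gain measures) to rank features and a Random Forest on a synthetic stand-in dataset. The branch-and-bound search and the other classifiers are omitted.

```python
# Sketch of the feature-selection + classification stage: information-gain
# style ranking (mutual information) followed by Random Forest. The data is
# a synthetic stand-in for the ISCX-UNB flows used in the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=2000, n_features=40, n_informative=10,
                           weights=[0.9, 0.1], random_state=42)   # imbalanced, IDS-like
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

model = make_pipeline(
    SelectKBest(score_func=mutual_info_classif, k=15),   # keep the 15 most informative features
    RandomForestClassifier(n_estimators=200, random_state=42),
)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```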

기반시설 마스터데이터 표준요소 구축에 관한 연구 - 기반시설 표준데이터를 중심으로 - (A Study on the Establishment of Standard Elements of Infrastructure Master Data: Focused on Infrastructure Standard Dataset)

  • 손혜인;남영준
    • 한국비블리아학회지 / Vol. 28 No. 4 / pp. 35-55 / 2017
  • Master data is built for broad use within an organization and is employed mainly in the corporate sector. This study set out to build master data on infrastructure that national public agencies can use. To this end, the individual attributes of the standard datasets provided by the Public Data Portal (공공데이터포털) were analyzed, the standard elements suited to the characteristics of master data were extracted, and the consolidated standard elements were finally verified against the standardization frameworks used by the national government.
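
As a loose, hypothetical illustration of this kind of attribute-level analysis (not the authors' actual procedure), the sketch below counts how often each attribute name occurs across a few standard dataset schemas and keeps the attributes shared by all of them as candidate master data elements. The schemas are invented examples, not the actual portal datasets.

```python
# Sketch: count shared attributes across standard dataset schemas and keep
# the ones common to every dataset as candidate master data elements.
# The schemas below are invented examples.
from collections import Counter

schemas = {
    "parking_lots": ["관리기관명", "소재지도로명주소", "위도", "경도", "데이터기준일자"],
    "streetlights": ["관리기관명", "소재지지번주소", "위도", "경도", "데이터기준일자"],
    "cctv": ["관리기관명", "소재지도로명주소", "위도", "경도", "설치목적", "데이터기준일자"],
}

counts = Counter(attr for attrs in schemas.values() for attr in attrs)
threshold = len(schemas)                      # must appear in every dataset
candidates = [a for a, c in counts.items() if c >= threshold]
print("candidate master data elements:", candidates)
```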

Two-Stream Convolutional Neural Network for Video Action Recognition

  • Qiao, Han;Liu, Shuang;Xu, Qingzhen;Liu, Shouqiang;Yang, Wanggan
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15 No. 10 / pp. 3668-3684 / 2021
  • Video action recognition is widely used in video surveillance, behavior detection, human-computer interaction, medically assisted diagnosis, and motion analysis. However, it can be disturbed by many factors, such as background and illumination. A two-stream convolutional neural network trains separate spatial and temporal models on the video and fuses them at the output end. The multi-segment two-stream model used here trains on the temporal and spatial information of the video, extracts and fuses their features, and then determines the action category. Google's Xception model and transfer learning are adopted in this paper, with the Xception weights trained on ImageNet used for initialization. This largely overcomes the model underfitting caused by the limited video behavior dataset, effectively reduces the influence of various factors in the video, improves accuracy, and shortens training time. Moreover, to make up for the shortage of data, the Kinetics-400 dataset was used for pre-training, which further improved the accuracy of the model. Through this applied research the expected goals are largely achieved and the design of the original two-stream model is improved.
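
The training configuration is not given in the abstract; the sketch below shows one plausible Keras layout of the idea: two ImageNet-pretrained Xception backbones trained separately, one on RGB frames and one on optical flow rendered as 3-channel images, with their softmax scores averaged at the output. The input encoding, image size, and class count are assumptions.

```python
# Sketch of a late-fusion two-stream setup with ImageNet-pretrained Xception
# backbones (TensorFlow/Keras). The two streams are trained separately and
# their softmax scores are averaged at inference time. Input preprocessing
# (Xception's preprocess_input) is omitted for brevity.
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.applications import Xception

NUM_CLASSES = 400  # e.g. Kinetics-400

def build_stream() -> models.Model:
    """One stream: pretrained Xception backbone + new softmax head."""
    backbone = Xception(weights="imagenet", include_top=False, pooling="avg",
                        input_shape=(299, 299, 3))
    return models.Sequential([backbone,
                              layers.Dense(NUM_CLASSES, activation="softmax")])

spatial = build_stream()    # fed single RGB frames
temporal = build_stream()   # fed optical flow rendered as 3-channel images
for m in (spatial, temporal):
    m.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# ... train each stream on its own inputs, e.g. m.fit(frames, labels) ...

def predict_action(rgb_frame: np.ndarray, flow_image: np.ndarray) -> int:
    """Average the two softmax outputs (late fusion) and return the class id."""
    p_spatial = spatial.predict(rgb_frame[None], verbose=0)
    p_temporal = temporal.predict(flow_image[None], verbose=0)
    return int(np.argmax((p_spatial + p_temporal) / 2.0, axis=-1)[0])
```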

A biomedically oriented automatically annotated Twitter COVID-19 dataset

  • Hernandez, Luis Alberto Robles;Callahan, Tiffany J.;Banda, Juan M.
    • Genomics & Informatics / Vol. 19 No. 3 / pp. 21.1-21.5 / 2021
  • The use of social media data, like Twitter, for biomedical research has been gradually increasing over the years. With the coronavirus disease 2019 (COVID-19) pandemic, researchers have turned to more non-traditional sources of clinical data to characterize the disease in near-real time, study the societal implications of interventions, as well as the sequelae that recovered COVID-19 cases present. However, manually curated social media datasets are difficult to come by due to the expensive costs of manual annotation and the effort needed to identify the correct texts. When datasets are available, they are usually very small and their annotations do not generalize well over time or to larger sets of documents. As part of the 2021 Biomedical Linked Annotation Hackathon, we release our dataset of over 120 million automatically annotated tweets for biomedical research purposes. Incorporating best practices, we identify tweets with potentially high clinical relevance. We evaluated our work by comparing several SpaCy-based annotation frameworks against a manually annotated gold-standard dataset. After selecting the best method for automatic annotation, we annotated 120 million tweets and released them publicly for future downstream usage within the biomedical domain.
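
The biomedical spaCy models the authors compared are not named in the abstract; the sketch below only shows the general pattern of streaming tweets through a spaCy pipeline and collecting entity annotations. The small general-purpose model and the two example tweets are placeholders for the biomedical models and the 120-million-tweet corpus actually used.

```python
# Sketch: stream tweets through a spaCy pipeline and collect entity
# annotations. en_core_web_sm and the example tweets are placeholders for
# the biomedical spaCy models and the real Twitter corpus used in the paper.
import spacy

nlp = spacy.load("en_core_web_sm", disable=["parser"])  # keep the NER component

tweets = [
    "Day 12 after my COVID-19 diagnosis and I still have no sense of smell.",
    "Started a course of dexamethasone yesterday, fever finally coming down.",
]

annotations = []
for doc in nlp.pipe(tweets, batch_size=1000):
    annotations.append([(ent.text, ent.label_, ent.start_char, ent.end_char)
                        for ent in doc.ents])

for tweet, ents in zip(tweets, annotations):
    print(tweet)
    print("  entities:", ents)
```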

Exploiting Neural Network for Temporal Multi-variate Air Quality and Pollutant Prediction

  • Khan, Muneeb A.;Kim, Hyun-chul;Park, Heemin
    • 한국멀티미디어학회논문지 / Vol. 25 No. 2 / pp. 440-449 / 2022
  • In recent years, air pollution and the Air Quality Index (AQI) have been a pivotal concern for researchers because of their effect on human health. Various studies have predicted the AQI, but most of them either lack dense temporal data or cover only one or two air pollutant elements. In this paper, a hybrid convolutional neural network integrated with a recurrent neural network architecture (CNN-LSTM) is presented to infer air pollution from a multivariate air pollutant dataset. The aim of this research is to design a robust, real-time air pollutant forecasting system by exploiting a neural network. The proposed approach is implemented on a 24-month dataset from Seoul, Republic of Korea. The predicted results are cross-validated with the real dataset and compared with state-of-the-art techniques to evaluate robustness and performance. The proposed model outperforms the SVM, SVM-Polynomial, ANN, and RF models by 60.17%, 68.99%, 14.6%, and 6.29%, respectively, and outperforms SVM and SVM-Polynomial in predicting O3 by 78.04% and 83.79%, respectively. Overall performance is measured in terms of Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Root Mean Square Error (RMSE).
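
The network configuration is not included in the abstract; the sketch below shows one plausible Keras CNN-LSTM arrangement for this kind of task: a 1D convolution over a 24-hour window of multivariate pollutant readings followed by an LSTM and a regression head. The window length, feature count, layer sizes, and synthetic data are assumptions, not the paper's configuration.

```python
# Sketch of a CNN-LSTM regressor over multivariate air-pollutant windows
# (TensorFlow/Keras). Window length, feature count, and layer sizes are
# illustrative assumptions.
import numpy as np
from tensorflow.keras import layers, models

WINDOW, FEATURES = 24, 6   # 24 hourly readings of 6 pollutants (e.g. PM2.5, PM10, O3, NO2, SO2, CO)

model = models.Sequential([
    layers.Input(shape=(WINDOW, FEATURES)),
    layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),  # local temporal patterns
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(64),                                                      # longer-range dynamics
    layers.Dense(32, activation="relu"),
    layers.Dense(1),                                                      # next-hour target (e.g. PM2.5 or AQI)
])
model.compile(optimizer="adam", loss="mae", metrics=["mape"])

# Synthetic stand-in data just to show the expected shapes.
X = np.random.rand(512, WINDOW, FEATURES).astype("float32")
y = np.random.rand(512, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))
```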

다양한 컨볼루션 신경망을 이용한 태국어 숫자 인식 (Handwriting Thai Digit Recognition Using Convolution Neural Networks)

  • ;정한민;김태홍
    • 한국정보통신학회 학술대회논문집 (2021 Spring Conference) / pp. 15-17 / 2021
  • Handwriting recognition research has focused mainly on deep learning techniques and has made great progress in recent years. Handwritten Thai digit recognition, in particular, matters in many areas that involve numeric information, such as official Thai documents and receipts, but it remains a challenging problem. To address the absence of a large-scale Thai digit dataset, this study built its own dataset and trained various convolutional neural networks on it. Evaluated with the accuracy metric, VGG 13 with batch normalization achieved the best performance at 98.29%.
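
The batch-normalized VGG 13 that performed best is available in torchvision; as a sketch under that assumption, the code below adapts it to 10 Thai digit classes and checks the output shape. The preprocessing (grayscale images replicated to three channels and resized to 224x224) and the omitted training loop are assumptions, not the study's setup.

```python
# Sketch: a batch-normalized VGG 13 adapted to 10 Thai digit classes with
# torchvision. Preprocessing is an assumption; the training loop is omitted.
import torch
from torch import nn
from torchvision import models, transforms

model = models.vgg13_bn(weights=None)          # or load ImageNet-pretrained weights
model.classifier[6] = nn.Linear(4096, 10)      # 10 Thai digit classes

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),   # replicate the single channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Shape check with a dummy batch of 8 images.
dummy = torch.randn(8, 3, 224, 224)
logits = model(dummy)
print(logits.shape)   # torch.Size([8, 10])
```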

DiLO: Direct light detection and ranging odometry based on spherical range images for autonomous driving

  • Han, Seung-Jun;Kang, Jungyu;Min, Kyoung-Wook;Choi, Jungdan
    • ETRI Journal / Vol. 43 No. 4 / pp. 603-616 / 2021
  • Over the last few years, autonomous vehicles have progressed very rapidly. Odometry, which estimates displacement from consecutive sensor inputs, is an essential technique for autonomous driving. In this article, we propose a fast, robust, and accurate odometry technique. The proposed technique is light detection and ranging (LiDAR)-based direct odometry, which uses a spherical range image (SRI) that projects a three-dimensional point cloud onto a two-dimensional spherical image plane. Direct odometry was developed for vision-based methods, so fast execution can be expected; however, applying it to LiDAR data is difficult because of the data's sparsity. To solve this problem, we propose an SRI generation method and its mathematical analysis, two keypoint sampling methods that use the SRI to increase precision and robustness, and a fast optimization method. The proposed technique was tested on the KITTI dataset and in real environments. Evaluation yielded a translation error of 0.69%, a rotation error of 0.0031°/m on the KITTI training dataset, and an execution time of 17 ms. The results demonstrate precision comparable with the state of the art and remarkably higher speed than conventional techniques.
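
The paper's SRI generation and its mathematical analysis are not reproduced in the abstract; the sketch below only shows the basic projection step of turning an unordered LiDAR point cloud into a 2D range image indexed by azimuth and elevation. The image resolution and vertical field of view are illustrative assumptions.

```python
# Sketch: project an unordered LiDAR point cloud onto a spherical range
# image (SRI) indexed by azimuth (columns) and elevation (rows). Resolution
# and vertical field of view are illustrative assumptions.
import numpy as np

def spherical_range_image(points: np.ndarray, width: int = 1024, height: int = 64,
                          fov_up_deg: float = 3.0, fov_down_deg: float = -25.0) -> np.ndarray:
    """points: (N, 3) array of x, y, z in the sensor frame; returns (height, width) ranges."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)                       # [-pi, pi]
    elevation = np.arcsin(z / np.maximum(r, 1e-8))   # [-pi/2, pi/2]

    fov_up, fov_down = np.deg2rad(fov_up_deg), np.deg2rad(fov_down_deg)
    u = ((azimuth + np.pi) / (2.0 * np.pi)) * width                     # column from azimuth
    v = (1.0 - (elevation - fov_down) / (fov_up - fov_down)) * height   # row from elevation

    u = np.clip(u.astype(np.int32), 0, width - 1)
    v = np.clip(v.astype(np.int32), 0, height - 1)

    image = np.zeros((height, width), dtype=np.float32)
    image[v, u] = r        # on pixel collisions, later points overwrite earlier ones
    return image

if __name__ == "__main__":
    cloud = np.random.uniform(-40, 40, size=(100_000, 3)).astype(np.float32)
    sri = spherical_range_image(cloud)
    print(sri.shape, float(sri.max()))
```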