• Title/Abstract/Keywords: Datasets

Search results: 2,094

Knowledge Model for Disaster Dataset Navigation

  • Hwang, Yun-Young; Yuk, Jin-Hee; Shin, Sumi
    • Journal of Information Science Theory and Practice / Vol. 9, No. 4 / pp. 35-49 / 2021
  • In a situation where multiple diverse datasets exist, an efficient method is essential for providing users with the datasets they require. To address this need, the necessary datasets should be selected on the basis of the relationships between them. In particular, to discover the datasets needed for disaster resolution, the disaster resolution stage must be considered. In this paper, in order to provide the necessary datasets for each stage of disaster resolution, we constructed a disaster type and disaster management process ontology and designed a method to determine the necessary datasets for each disaster type and each step of the disaster management process. In addition, we introduce a method to determine the relationships between datasets needed for disaster response. We propose a method for discovering datasets based on minimal relationships such as "isA," "sameAs," and "subclassOf." To discover suitable datasets, we designed a knowledge exploration model and collected 651 disaster-related datasets to refine our method. These datasets were categorized by disaster type from the perspective of disaster management. When actual datasets are categorized by disaster type and disaster management type, a single dataset may be classified under multiple types in both categories. We built the knowledge exploration model on the basis of disaster examples to verify the configuration of our model.
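
The discovery approach described above links datasets through a small set of relationships ("isA," "sameAs," "subclassOf"). Below is a minimal sketch of that idea, assuming a simple in-memory list of triples; the dataset names and relations are hypothetical examples, not items from the authors' 651-dataset collection.

```python
from collections import deque

# Hypothetical relation triples (subject, relation, object); the paper's model
# stores such links in an ontology of disaster types and management stages.
TRIPLES = [
    ("flood_damage_reports", "isA", "disaster_response_dataset"),
    ("river_level_sensors", "subclassOf", "hydrology_dataset"),
    ("hydrology_dataset", "subclassOf", "disaster_response_dataset"),
    ("flood_damage_reports", "sameAs", "flood_loss_records"),
]

def related_datasets(seed: str) -> set[str]:
    """Collect every node reachable from `seed` over the minimal relations."""
    # Build an undirected adjacency list so discovery works in both directions.
    graph: dict[str, set[str]] = {}
    for subj, _rel, obj in TRIPLES:
        graph.setdefault(subj, set()).add(obj)
        graph.setdefault(obj, set()).add(subj)

    found, queue = set(), deque([seed])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, ()):
            if neighbor not in found and neighbor != seed:
                found.add(neighbor)
                queue.append(neighbor)
    return found

if __name__ == "__main__":
    print(related_datasets("flood_damage_reports"))
```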

Effects of Preprocessing on Text Classification in Balanced and Imbalanced Datasets

  • Mehmet F. Karaca
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 18, No. 3 / pp. 591-609 / 2024
  • In this study, all combinations of preprocessing steps were examined for their effects on reducing the number of words, shortening processing time, and improving classification success in balanced and imbalanced datasets, where the imbalanced datasets were unbalanced at different ratios. The reductions in word count and processing time provided by preprocessing were interrelated. Classifications were more successful on Turkish datasets, while English datasets were more strongly affected by whether the dataset was balanced. In highly imbalanced datasets, documents in the small classes were misclassified into a topically similar class in Turkish datasets and into the class with many documents in English datasets. In terms of average scores, the highest classification success was obtained in Turkish datasets by not applying lowercasing, applying stemming, and removing stop words, and in English datasets by applying lowercasing and stemming and removing stop words. Stemming was the preprocessing step that most increased success in Turkish datasets, whereas stop-word removal was most important in English datasets. The maximum scores revealed that feature selection, feature size, and the classifier affect classification success more than preprocessing does. It was concluded that preprocessing is necessary for text classification because it shortens processing time and can achieve high classification success, that a given preprocessing method does not have the same effect in all languages, and that different preprocessing methods are more successful for different languages.
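
To illustrate the kind of experiment described above, the sketch below enumerates every combination of three preprocessing steps (lowercasing, stemming, stop-word removal) and applies each combination to a toy corpus. The stemmer and stop-word list are deliberately simplistic stand-ins, not the tools or datasets used in the paper.

```python
from itertools import product

STOP_WORDS = {"the", "a", "and", "of", "in"}  # illustrative only

def crude_stem(token: str) -> str:
    """Very rough suffix stripping; a real study would use a proper stemmer."""
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def preprocess(text: str, lowercase: bool, stem: bool, drop_stops: bool) -> list[str]:
    tokens = text.lower().split() if lowercase else text.split()
    if drop_stops:
        tokens = [t for t in tokens if t.lower() not in STOP_WORDS]
    if stem:
        tokens = [crude_stem(t) for t in tokens]
    return tokens

corpus = ["The balanced datasets are classified", "Stemming shortens the processing time"]

# Evaluate all 2^3 preprocessing combinations, as the study does per language.
for lowercase, stem, drop_stops in product([False, True], repeat=3):
    vocab = {tok for doc in corpus for tok in preprocess(doc, lowercase, stem, drop_stops)}
    print(f"lower={lowercase!s:5} stem={stem!s:5} stops={drop_stops!s:5} vocab={len(vocab)}")
```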

Comparison of Parametric and Bootstrap Method in Bioequivalence Test

  • Ahn, Byung-Jin; Yim, Dong-Seok
    • The Korean Journal of Physiology and Pharmacology / Vol. 13, No. 5 / pp. 367-371 / 2009
  • The estimation of the 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests is based upon the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods, we performed repeated estimation on bootstrap-resampled datasets. The AUC and Cmax values from three archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver. 3). Bootstrap nonparametric 90% CIs of formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the three archived datasets differed slightly from the nonparametric 90% CIs obtained from BE tests on the resampled datasets. Histograms and density curves of formulation effects obtained from the resampled datasets were similar to those of a normal distribution. However, in two of the three resampled log(AUC) datasets, the estimates of formulation effects did not follow a Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one type of nonparametric CI of formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these two non-normally distributed resampled log(AUC) datasets. Currently, the 80-125% rule based upon the parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when data do not follow this assumption.
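
As a simplified illustration of the resampling procedure, the sketch below computes a percentile bootstrap 90% CI for the test/reference geometric mean ratio of AUC on simulated data. It omits the crossover-model formulation effect and the BCa correction used in the study, and the simulated values are not drawn from the archived datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated log(AUC) values for a test and a reference formulation (paired subjects).
n_subjects = 24
log_auc_ref = rng.normal(loc=5.0, scale=0.25, size=n_subjects)
log_auc_test = log_auc_ref + rng.normal(loc=0.02, scale=0.15, size=n_subjects)

def ratio_ci(log_test, log_ref, n_boot=1000, alpha=0.10):
    """Percentile bootstrap CI for the geometric mean ratio exp(mean difference)."""
    diffs = log_test - log_ref
    idx = rng.integers(0, len(diffs), size=(n_boot, len(diffs)))
    boot_means = diffs[idx].mean(axis=1)
    lo, hi = np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])
    return np.exp(lo), np.exp(hi)

lower, upper = ratio_ci(log_auc_test, log_auc_ref)
print(f"90% bootstrap CI for GMR: {lower:.3f} - {upper:.3f}")
# The conventional BE criterion accepts the interval if it lies within 0.80-1.25.
```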

가정간호의 욕창 의사결정지원 서비스를 위한 욕창 사정 MDS 규명 및 간호 기록 분석 (Identifying Minimum Datasets for Pressure Ulcer Assessment and Analysis of Nursing Records in Home Nursing)

  • 김현영;박현애
    • 지역사회간호학회지 / Vol. 20, No. 1 / pp. 105-111 / 2009
  • Purpose: The purpose of this study was to identify minimum datasets for pressure ulcer assessment and to map the minimum datasets to paper-based nursing records for pressure ulcer care in the home care setting. Methods: To identify minimum datasets for pressure ulcer assessment, the authors reviewed four guidelines for pressure ulcer care. The content validity of the minimum datasets was assessed by three home care nurse specialists. To map the minimum datasets to nursing records, the authors examined 107 pressure ulcer events from 45 patients with pressure ulcers who received home nursing care from two hospitals in Gyeonggi Province. Results: The minimum datasets for initial assessment were anatomical location, stage, size, tissue, exudate, condition of the periwound skin, undermining, odor, and pain. 'Location' was recorded best, with a complete recording rate of 98.1%. 'Exudate' and 'pain' were recorded most poorly, at 2.8% and 0%, respectively. The minimum datasets for progress assessment were wound size, tissue, and exudate, which were recorded in 31.8%, 2.8%, and 4.7% of events, respectively. Conclusion: This study concluded that documentation of pressure ulcer assessment in home care was insufficient and that it can be improved by adopting the minimum datasets identified in this study.
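
The mapping step above amounts to checking, for each pressure ulcer event, which minimum dataset (MDS) elements were actually documented and reporting completeness rates. Below is a minimal sketch of that tally; the records shown are fabricated examples, not data from the study.

```python
# Minimum dataset elements for the initial pressure ulcer assessment (from the study).
MDS_ELEMENTS = ["location", "stage", "size", "tissue", "exudate",
                "periwound_skin", "undermining", "odor", "pain"]

# Hypothetical nursing records: each dict holds only the elements that were documented.
records = [
    {"location": "sacrum", "stage": 2, "size": "3x2 cm"},
    {"location": "heel", "stage": 3, "size": "1x1 cm", "tissue": "slough"},
    {"location": "sacrum", "stage": 2},
]

def completeness_rates(records, elements):
    """Percentage of records in which each MDS element was documented."""
    n = len(records)
    return {e: 100.0 * sum(e in r for r in records) / n for e in elements}

for element, rate in completeness_rates(records, MDS_ELEMENTS).items():
    print(f"{element:15s} {rate:5.1f}%")
```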

Labeling Big Spatial Data: A Case Study of New York Taxi Limousine Dataset

  • AlBatati, Fawaz; Alarabi, Louai
    • International Journal of Computer Science & Network Security / Vol. 21, No. 6 / pp. 207-212 / 2021
  • Clustering unlabeled spatial datasets to convert them into labeled spatial datasets is a challenging task, especially for geographical information systems. In this study we investigated the NYC Taxi Limousine Commission dataset and found that all of its spatial-temporal trajectories are unlabeled, which makes the data unsuitable for data mining tasks such as classification and regression. Therefore, it is necessary to convert unlabeled spatial datasets into labeled ones. In this study we use a clustering technique to perform this task for all of the trajectory datasets. A key difficulty in applying machine learning classification algorithms in many applications is that they require a large amount of labeled data, and labeling big data is often a costly process. In this paper, we show the effectiveness of using a clustering technique to label spatial data, which leads to a high-accuracy classifier.
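
A minimal sketch of the labeling idea follows: cluster pickup coordinates with k-means and treat the resulting cluster IDs as class labels for a downstream classifier. The coordinates here are randomly generated stand-ins, not NYC TLC trip records, and the choice of k-means is only one of several clustering techniques that could be used.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Synthetic pickup locations (latitude, longitude) standing in for TLC trip records.
pickups = np.column_stack([
    rng.normal(40.75, 0.02, size=500),   # latitude, roughly around Manhattan
    rng.normal(-73.98, 0.02, size=500),  # longitude, roughly around Manhattan
])

# Cluster the unlabeled spatial points; each cluster index becomes a label.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
labels = kmeans.fit_predict(pickups)

# The labeled pairs (pickups, labels) can now feed a supervised classifier.
print("points per generated label:", np.bincount(labels))
```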

TOD: Trash Object Detection Dataset

  • Jo, Min-Seok; Han, Seong-Soo; Jeong, Chang-Sung
    • Journal of Information Processing Systems / Vol. 18, No. 4 / pp. 524-534 / 2022
  • In this paper, we present the Trash Object Detection (TOD) dataset to address trash detection problems. A well-organized dataset of sufficient size is essential for training object detection models and applying them to specific tasks. However, existing trash datasets contain only a few hundred images, which is not sufficient to train deep neural networks, and most are classification datasets that simply assign categories without location information. In addition, existing datasets diverge from the actual guidelines for separating and discharging recyclables because their category definitions are based primarily on object shape. To address these issues, we build and experiment with a trash dataset that is larger than conventional trash datasets and has more than twice the resolution. It targets general household goods and is annotated according to the Ministry of Environment's guidelines for separating and discharging recyclables. Our dataset has 10 categories, with around 33K objects annotated across around 5K images at 1280×720 resolution. The dataset, as well as the pre-trained models, has been released at https://github.com/jms0923/tod.
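
Annotations released for object detection datasets like this can be inspected with a few lines of Python. The sketch below assumes a COCO-style JSON layout (images / annotations / categories), which is a common convention rather than a format confirmed by the abstract, and the local file name is hypothetical.

```python
import json
from collections import Counter

# Hypothetical path to a COCO-style annotation file downloaded from the TOD repository.
with open("tod_annotations.json", encoding="utf-8") as f:
    coco = json.load(f)

category_names = {c["id"]: c["name"] for c in coco["categories"]}

# Count annotated objects per category across the dataset.
counts = Counter(category_names[a["category_id"]] for a in coco["annotations"])

print(f"{len(coco['images'])} images, {len(coco['annotations'])} objects")
for name, count in counts.most_common():
    print(f"{name:20s} {count}")
```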

Integration of Single-Cell RNA-Seq Datasets: A Review of Computational Methods

  • Yeonjae Ryu; Geun Hee Han; Eunsoo Jung; Daehee Hwang
    • Molecules and Cells / Vol. 46, No. 2 / pp. 106-119 / 2023
  • With the increasing number of single-cell RNA sequencing (scRNA-seq) datasets in public repositories, integrative analysis of multiple scRNA-seq datasets has become commonplace. Batch effects among different datasets are inevitable because of differences in cell isolation and handling protocols, library preparation technology, and sequencing platforms. To remove these batch effects for effective integration of multiple scRNA-seq datasets, a number of methodologies have been developed based on diverse concepts and approaches. These methods have proven useful for examining whether cellular features identified in one dataset, such as cell subpopulations and marker genes, are consistently present in other datasets, and whether their condition-dependent variations, such as increases in particular cell subpopulations under disease-related conditions, are consistently observed in datasets generated under similar or distinct conditions. In this review, we summarize the concepts and approaches of the integration methods and their pros and cons as reported in the literature.
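
As a deliberately simplified stand-in for the integration methods surveyed in this review (for example ComBat, Harmony, or anchor-based approaches), the sketch below concatenates two expression matrices and removes a batch effect by centering each gene within each batch. Real methods model far more structure; the matrices here are synthetic.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
genes = [f"gene{i}" for i in range(5)]

# Two synthetic cell-by-gene matrices with a constant shift mimicking a batch effect.
batch_a = pd.DataFrame(rng.normal(0.0, 1.0, (100, 5)), columns=genes)
batch_b = pd.DataFrame(rng.normal(2.0, 1.0, (120, 5)), columns=genes)

combined = pd.concat([batch_a, batch_b], keys=["A", "B"], names=["batch", "cell"])

# Naive correction: subtract each gene's per-batch mean (per-batch centering).
corrected = combined - combined.groupby(level="batch").transform("mean")

print(combined.groupby(level="batch").mean().round(2))   # shifted means before
print(corrected.groupby(level="batch").mean().round(2))  # roughly zero after centering
```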

Possibility of the Use of Public Microarray Database for Identifying Significant Genes Associated with Oral Squamous Cell Carcinoma

  • Kim, Ki-Yeol; Cha, In-Ho
    • Genomics & Informatics / Vol. 10, No. 1 / pp. 23-32 / 2012
  • Many studies have attempted to identify expression changes in oral squamous cell carcinoma, but most include too few samples to apply statistical methods for detecting significant gene sets. This study combined two small microarray datasets from a public database and identified significant genes associated with the progression of oral squamous cell carcinoma. The two datasets had different expression scales, even though they were generated on the same platform, Affymetrix U133A gene chips. We discretized the gene expressions of the two datasets, adjusting for the differences between them, to extract more reliable information. From the combined datasets, we detected 51 significant genes that were upregulated in oral squamous cell carcinoma. Most of them have been reported in previous studies as cancer-related genes. From these selected genes, significant genetic pathways associated with expression changes were identified. By combining several datasets from public databases, sufficient samples can be obtained to detect reliable information. Most of the selected genes are known cancer-related genes, including in oral squamous cell carcinoma. Several previously uncharacterized genes can be biologically evaluated in further studies.
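
The abstract does not spell out the discretization scheme, so the sketch below shows one simple possibility: binning each gene's expression into equal-frequency levels separately within each dataset before combining them, which removes the scale difference between the two measurements. The matrices and gene names are synthetic placeholders.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
genes = [f"gene{i}" for i in range(4)]

# Two synthetic expression matrices on different scales (samples x genes).
dataset1 = pd.DataFrame(rng.normal(100, 20, (30, 4)), columns=genes)
dataset2 = pd.DataFrame(rng.normal(8, 2, (25, 4)), columns=genes)

def discretize(df: pd.DataFrame, bins: int = 3) -> pd.DataFrame:
    """Bin each gene into equal-frequency levels (0 = low ... bins-1 = high)."""
    return df.apply(lambda col: pd.qcut(col, q=bins, labels=False))

# Discretize within each dataset, then stack them into one combined matrix.
combined = pd.concat([discretize(dataset1), discretize(dataset2)], ignore_index=True)
print(combined.head())
```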

인공지능에서 저작권과 라이선스 이슈 분석 (Analysis of Copyright and Licensing Issues in Artificial Intelligence)

  • 류원옥;이승윤;정성인
    • 전자통신동향분석 / Vol. 38, No. 6 / pp. 84-94 / 2023
  • Open source has many advantages and is widely used in various fields. However, legal disputes regarding the copyright and licensing of datasets and learning models have recently arisen in artificial intelligence development. We examine how datasets affect artificial intelligence learning and services from the perspective of copyright and licensing when datasets are used to train models. The licensing conditions of datasets can lead to copyright infringement and license violations, and thus determine the scope of disclosure and commercialization of the trained model. In addition, we examine related legal issues.

Comparison of survival prediction models for pancreatic cancer: Cox model versus machine learning models

  • Kim, Hyunsuk; Park, Taesung; Jang, Jinyoung; Lee, Seungyeoun
    • Genomics & Informatics / Vol. 20, No. 2 / pp. 23.1-23.9 / 2022
  • A survival prediction model has recently been developed to evaluate the prognosis of resected nonmetastatic pancreatic ductal adenocarcinoma based on a Cox model using two nationwide databases: Surveillance, Epidemiology and End Results (SEER) and Korea Tumor Registry System-Biliary Pancreas (KOTUS-BP). In this study, we applied two machine learning methods, random survival forests (RSF) and support vector machines (SVM), for survival analysis and compared their prediction performance using the SEER and KOTUS-BP datasets. Three schemes were used for model development and evaluation. First, we utilized data from SEER for model development and used data from KOTUS-BP for external evaluation. Second, these two datasets were swapped by taking data from KOTUS-BP for model development and data from SEER for external evaluation. Finally, we mixed these two datasets half and half and utilized the mixed datasets for model development and validation. We used 9,624 patients from SEER and 3,281 patients from KOTUS-BP to construct a prediction model with seven covariates: age, sex, histologic differentiation, adjuvant treatment, resection margin status, and the American Joint Committee on Cancer 8th edition T-stage and N-stage. Comparing the three schemes, the performance of the Cox model, RSF, and SVM was better when using the mixed datasets than when using the unmixed datasets. When using the mixed datasets, the C-index, 1-year, 2-year, and 3-year time-dependent areas under the curve for the Cox model were 0.644, 0.698, 0.680, and 0.687, respectively. The Cox model performed slightly better than RSF and SVM.
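
A minimal sketch of this kind of model comparison follows, assuming the scikit-survival package (sksurv). The covariates and event times are synthetic stand-ins for the SEER/KOTUS-BP variables, and the exact class names or constructor options may differ slightly across sksurv versions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sksurv.util import Surv
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.ensemble import RandomSurvivalForest
from sksurv.svm import FastSurvivalSVM

rng = np.random.default_rng(0)
n = 500

# Synthetic covariates loosely mimicking age, sex, and a tumor-stage code.
X = np.column_stack([
    rng.normal(65, 10, n),   # age
    rng.integers(0, 2, n),   # sex
    rng.integers(1, 5, n),   # stage
]).astype(float)

# Synthetic survival times with censoring; y is a structured (event, time) array.
time = rng.exponential(scale=24.0 / (1 + 0.3 * X[:, 2]), size=n)
event = rng.random(n) < 0.7
y = Surv.from_arrays(event=event, time=time)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "Cox": CoxPHSurvivalAnalysis(),
    "RSF": RandomSurvivalForest(n_estimators=100, random_state=0),
    "SVM": FastSurvivalSVM(random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    # score() reports Harrell's concordance index (C-index) on the held-out split.
    print(f"{name}: C-index = {model.score(X_test, y_test):.3f}")
```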