• Title/Summary/Keyword: few data


A note on the distance distribution paradigm for Mosaab-metric to process segmented genomes of influenza virus

  • Daoud, Mosaab
    • Genomics & Informatics, v.18 no.1, pp.7.1-7.7, 2020
  • In this paper, we present a few technical notes on the distance distribution paradigm for the Mosaab-metric, using 1-, 2-, and 3-gram feature extraction techniques to analyze composite data points in high-dimensional feature spaces. This technical analysis will help specialists in bioinformatics and biotechnology explore in depth the biodiversity of the influenza virus genome as a composite data point. Various technical examples are presented; in addition, the integrated statistical learning pipeline for processing segmented genomes of influenza virus is illustrated as a sequential-parallel computational pipeline.
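The 1-, 2-, and 3-gram feature extraction mentioned in the abstract can be sketched as k-mer frequency vectors over the nucleotide alphabet. This is a generic illustration of n-gram features for a genome segment, not the authors' Mosaab-metric code; the example sequence is invented.

```python
from collections import Counter
from itertools import product

def kgram_features(seq, k, alphabet="ACGT"):
    """Frequency vector of all k-grams (k-mers) over a fixed alphabet."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    keys = ["".join(p) for p in product(alphabet, repeat=k)]
    total = max(sum(counts[g] for g in keys), 1)
    return [counts[g] / total for g in keys]

segment = "ATGGCGTACGTTAGC"  # invented toy segment
# Concatenated 1-, 2-, and 3-gram features: 4 + 16 + 64 = 84 dimensions
features = [v for k in (1, 2, 3) for v in kgram_features(segment, k)]
```

Each genome segment becomes a fixed-length vector, so a whole segmented genome can be treated as a composite point in the resulting feature space.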

Logistic Model for Normality by Neural Networks

  • Lee, Jea-Young; Rhee, Seong-Won
    • Journal of the Korean Data and Information Science Society, v.14 no.1, pp.119-129, 2003
  • We propose a new logistic regression model of normality curves for normal (diseased) and abnormal (nondiseased) classifications by neural networks in data mining. The fitted logistic regression lines are estimated, interpreted, and plotted using the neural network technique. A few goodness-of-fit test statistics for normality are discussed, and the performance of the fitted logistic regression lines is evaluated.


Combining Support Vector Machine Recursive Feature Elimination and Intensity-dependent Normalization for Gene Selection in RNAseq (RNAseq 빅데이터에서 유전자 선택을 위한 밀집도-의존 정규화 기반의 서포트-벡터 머신 병합법)

  • Kim, Chayoung
    • Journal of Internet Computing and Services, v.18 no.5, pp.47-53, 2017
  • In the past few years, high-throughput sequencing, big-data generation, cloud computing, and computational biology have advanced rapidly. RNA sequencing is emerging as an attractive alternative to DNA microarrays, yet methods for constructing a Gene Regulatory Network (GRN) from RNA-Seq are extremely scarce and urgently needed. Because GRNs build on substantial observations from genomics and bioinformatics, an elementary requirement has been to maximize the number of distinguishable genes. Although RNA sequencing techniques generate large amounts of data, few computational methods exploit them fully. We therefore suggest a novel gene selection algorithm combining Support Vector Machines with intensity-dependent normalization, which uses the log differential expression ratio in RNA-Seq. It is an extended variant of the support vector machine recursive feature elimination (SVM-RFE) algorithm, and it achieves minimum relevancy with subsets of big data such as NCBI-GEO. The proposed algorithm was compared with an existing one that uses gene expression profiling DNA microarrays. The comparison, based on the number of genes selected from RNA-Seq big data, shows that the proposed algorithm is more convenient and faster than the previous one because it relies entirely on functions in an R package, and that it improves classification accuracy based on gene ontology as well as the time required for big data.
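The SVM-RFE backbone of such a method can be sketched in a few lines: train a linear SVM, rank features by weight magnitude, drop the weakest, and repeat. This sketch uses scikit-learn in Python rather than the authors' R package, omits the intensity-dependent normalization step, and uses invented toy data.

```python
import numpy as np
from sklearn.svm import LinearSVC

def svm_rfe(X, y, n_keep, drop_per_step=1):
    """Recursive feature elimination with a linear SVM: repeatedly fit,
    rank features by |w|, and remove the lowest-ranked ones."""
    active = list(range(X.shape[1]))
    while len(active) > n_keep:
        clf = LinearSVC(C=1.0, dual=False).fit(X[:, active], y)
        ranks = np.argsort(np.abs(clf.coef_).sum(axis=0))
        n_drop = min(drop_per_step, len(active) - n_keep)
        weakest = set(ranks[:n_drop])
        active = [f for j, f in enumerate(active) if j not in weakest]
    return active

# Toy data: only feature 0 carries the class signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = (X[:, 0] > 0).astype(int)
selected = svm_rfe(X, y, n_keep=3)
```

In a gene-selection setting the columns of `X` would be genes, and the surviving indices are the candidate marker genes.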

Fuzzy Fingerprint Vault using Multiple Polynomials (다중 다항식을 이용한 지문 퍼지볼트)

  • Moon, Dae-Sung; Choi, Woo-Yong; Moon, Ki-Young
    • Journal of the Korea Institute of Information Security & Cryptology, v.19 no.1, pp.125-133, 2009
  • Security of biometric data is particularly important, as its compromise is permanent. To protect biometric data, we need to store it in a non-invertible transformed version; even if the transformed version is compromised, the original biometric data remain secure. The fuzzy vault mechanism was proposed to cryptographically protect critical data (e.g., an encryption key) with fingerprint data, so that only an authorized user providing a valid fingerprint can access the critical data. However, all previous results fail on fingerprint images with few minutiae, because they use a fixed polynomial degree without considering the number of minutiae. To solve this problem, we adapt the degree of the polynomial to the number of minutiae, and we apply multiple polynomials to handle fingerprints with few minutiae. Experimental results confirm that the proposed approach can enhance both the security level and the verification accuracy.
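The core locking step of a fuzzy vault, and the adaptive-degree idea, can be illustrated with a toy polynomial over a prime field. This is a minimal sketch: chaff-point generation, minutiae alignment, and error correction are omitted, and the modulus, degree margin, and example minutiae are invented for illustration.

```python
P = 65537  # small prime modulus; deployed systems typically work in GF(2^16)

def eval_poly(coeffs, x):
    """Horner evaluation of a polynomial (highest coefficient first) mod P."""
    acc = 0
    for c in coeffs:
        acc = (acc * x + c) % P
    return acc

def adaptive_degree(n_minutiae, margin=2):
    """Choose the polynomial degree from the minutiae count, keeping a margin
    of spare points so the vault can still unlock when minutiae are noisy."""
    return max(1, n_minutiae - margin - 1)

def lock(coeffs, minutiae):
    """Genuine vault points (x, p(x)); a real vault hides them among chaff."""
    return [(x, eval_poly(coeffs, x)) for x in minutiae]

def unlock_secret(points):
    """Recover p(0) (the embedded secret) by Lagrange interpolation at x=0."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

vault = lock([3, 5, 7], [2, 5, 9])  # p(x) = 3x^2 + 5x + 7, secret p(0) = 7
```

A degree-d polynomial needs d+1 genuine points to interpolate, which is why a fixed degree fails on fingerprints with few minutiae; the adaptive choice above keeps the degree below the available point count.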

A Representation of Data Semantics using Bill of Data (자료 구성표를 이용한 데이터의 생성적 의미 표현 연구)

  • Lee, Choon-Yeul
    • Asia Pacific Journal of Information Systems, v.7 no.3, pp.167-180, 1997
  • Data semantics is a well-recognized issue in information systems research. It provides indispensable information for data management: it describes what data mean, how they are created, and where they can be applied, to name a few aspects. Because of this diverse nature, data semantics has been formalized from different perspectives. This article proposes to formalize data semantics by the processes through which data are created or transformed. A scheme called a Bill of Data is proposed to describe this structure. A Bill of Data is a directed graph whose leaves are primary input data and whose internal nodes are output data objects produced from input data objects. Using Bills of Data, algorithms are developed to compare data semantics.
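The Bill of Data structure described above, a directed graph linking each derived data object to its inputs, can be sketched as follows. The node names and the equality-of-leaves comparison rule are illustrative assumptions, not the paper's exact algorithms.

```python
class BillOfData:
    """Directed graph: internal nodes are derived data objects, leaves are
    primary input data (a node with no recorded inputs is a leaf)."""

    def __init__(self):
        self.inputs = {}  # derived object -> objects it is produced from

    def derive(self, output, inputs):
        self.inputs[output] = list(inputs)

    def leaves(self, node):
        """Set of primary input data from which `node` is ultimately produced."""
        if node not in self.inputs:
            return {node}
        found = set()
        for parent in self.inputs[node]:
            found |= self.leaves(parent)
        return found

    def same_origin(self, a, b):
        """One possible semantics comparison: two data objects match when
        they are produced from the same set of primary inputs."""
        return self.leaves(a) == self.leaves(b)

bod = BillOfData()
bod.derive("net_income", ["revenue", "expenses"])
bod.derive("margin", ["net_income", "revenue"])
```

Here `margin` and `net_income` compare as equal in origin because both ultimately derive from the primary inputs `revenue` and `expenses`.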


Trends in Personal Data Storage Technologies for the Data Economy (데이터 경제를 위한 개인 데이터 저장 기술 동향)

  • Jung, H.Y.; Lee, S.Y.
    • Electronics and Telecommunications Trends, v.37 no.5, pp.54-61, 2022
  • Data are an essential resource for artificial intelligence-based services and are considered vital in the AI-driven fourth industrial revolution era. However, it is well known that the few giant platforms providing most current online services tend to monopolize personal data. Some governments have therefore started enforcing personal data protection and mobility regulations to address this problem. There are also notable activities from a technical perspective, and Web 3.0 is one of them. Web 3.0 focuses on a distributed architecture to protect people's data sovereignty. An important technical challenge for Web 3.0 is how personal data storage technology can supply valuable data for new data-based services while preserving data producers' sovereignty. This study reviews some currently proposed personal data storage technologies and discusses domestic countermeasures from the perspective of MyData, a representative project for data-based businesses in Korea.

Analysis of Incomplete Data with Nonignorable Missing Values

  • Kim, Hyun-Jeong
    • Journal of the Korean Data and Information Science Society, v.13 no.2, pp.167-174, 2002
  • In the case of "nonignorable missing data", it is necessary to assume a model for the missingness mechanism in each situation. For example, when the data are income amounts from a survey of individuals, we assume a model in which the larger the value, the higher the probability that it is missing. The method maximizes the likelihood using the EM (Expectation-Maximization) algorithm, based on the mechanism that creates missing data, for the case of an exponential distribution. The method starts from arbitrary initial values and converges in a few iterations. We vary the missing-data probability and the artificial data size to show the estimation accuracy, and then discuss the properties of the estimates.
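The EM iteration described above can be sketched for one concrete mechanism. Assume each exponential(λ) value y is observed with probability exp(-γy), so larger values are more likely to go missing; under this mechanism the missing values have the closed-form conditional mean E[Y | missing] = (2λ+γ)/(λ(λ+γ)), which gives a one-line E-step. The mechanism, γ, and the simulation are illustrative assumptions, not the paper's exact setup.

```python
import math
import random

def em_exponential(observed, n_missing, gamma, iters=100):
    """EM for the exponential rate `lam` when a value y goes missing with
    probability 1 - exp(-gamma * y) (larger values more likely missing).
    E-step: replace each missing value by E[Y | missing, lam].
    M-step: lam = N / (sum of observed + imputed values)."""
    lam = 1.0  # arbitrary starting value
    n = len(observed) + n_missing
    total_obs = sum(observed)
    for _ in range(iters):
        e_missing = (2 * lam + gamma) / (lam * (lam + gamma))
        lam = n / (total_obs + n_missing * e_missing)
    return lam

# Simulate data, delete values with the assumed mechanism, re-estimate.
random.seed(1)
true_lam, gamma = 0.5, 0.2
ys = [random.expovariate(true_lam) for _ in range(5000)]
obs = [y for y in ys if random.random() < math.exp(-gamma * y)]
lam_hat = em_exponential(obs, len(ys) - len(obs), gamma)
naive = len(obs) / sum(obs)  # ignores the mechanism, biased upward
```

Because large values are preferentially deleted, the naive estimate from the observed values alone overstates the rate; the EM estimate corrects for the mechanism.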


Research data repository requirements: A case study from universities in North Macedonia

  • Fidan Limani; Arben Hajra; Mexhid Ferati; Vladimir Radevski
    • International Journal of Knowledge Content Development & Technology, v.13 no.1, pp.75-100, 2023
  • With research data generation on the rise, Institutional Repositories (IR) are one of the tools to manage it. However, the variety of data practices across institutions, domains, and communities often requires dedicated studies to identify research data management (RDM) requirements and map them to the IR features that support them. In this study, we investigated the data practices of a few national universities in North Macedonia, including 110 participants from different departments. The methodology we adopted enabled us to derive some of the key RDM requirements for a variety of data-related activities. Finally, we mapped these requirements to six features that our participants asked for in an IR solution: (1) create (meta)data and documentation, (2) distribute, share, and promote data, (3) provide access control, (4) store, (5) back up, and (6) archive. This list of IR features could prove useful for any university that has not yet established an IR solution.

TAGS: Text Augmentation with Generation and Selection (생성-선정을 통한 텍스트 증강 프레임워크)

  • Kim Kyung Min; Dong Hwan Kim; Seongung Jo; Heung-Seon Oh; Myeong-Ha Hwang
    • KIPS Transactions on Software and Data Engineering, v.12 no.10, pp.455-460, 2023
  • Text augmentation is a methodology that creates new augmented texts by transforming or generating original texts in order to improve the performance of NLP models. However, existing text augmentation techniques have limitations such as a lack of expressive diversity, semantic distortion, and a limited number of augmented texts. Recently, text augmentation using large language models and few-shot learning has been able to overcome these limitations, but it carries a risk of noise from incorrect generation. In this paper, we propose a text augmentation method called TAGS, which generates multiple candidate texts and selects appropriate ones as the augmented texts. TAGS generates varied expressions using few-shot learning while effectively selecting suitable data, even from a small amount of original text, by using contrastive learning and similarity comparison. We applied this method to task-oriented chatbot data and achieved a more than sixtyfold quantitative increase. Analysis of the generated texts confirms that they are semantically and expressively diverse compared with the originals. Moreover, a classification model trained and evaluated on the augmented texts improved performance by more than 0.1915, confirming that the method helps improve actual model performance.
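The generate-then-select idea can be illustrated with a toy selection step: score each candidate against the original and keep those that are similar enough to preserve the meaning but are not near-duplicates. A bag-of-words cosine stands in for the paper's contrastive-learning encoder, and the thresholds and example sentences are invented.

```python
import math
from collections import Counter

def bow_cosine(a, b):
    """Cosine similarity of bag-of-words vectors (toy stand-in for a
    learned sentence encoder)."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_augmented(original, candidates, lo=0.3, hi=0.95):
    """Keep candidates close enough to the original to share its meaning
    (sim >= lo) but not verbatim copies (sim <= hi)."""
    return [c for c in candidates if lo <= bow_cosine(original, c) <= hi]

original = "book a flight to seoul tomorrow"
candidates = [
    "book a flight to seoul tomorrow",  # exact duplicate, rejected
    "reserve a flight to seoul",        # paraphrase, kept
    "the weather is nice today",        # off-topic, rejected
]
kept = select_augmented(original, candidates)
```

Only the paraphrase survives the similarity band, which is the behavior the selection stage is meant to enforce: diversity without semantic drift.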

Improvement for Chromaticity Coordinate Quality of Automotive White LED Packages (차량용 백색 LED 패키지의 색 좌표 품질 개선)

  • So, Soon Jin; Jeoung, Choung Woo; Moon, Tae Eul; Kim, Jeong Bin; Hong, Sung Hoon
    • Journal of Korean Society for Quality Management, v.50 no.3, pp.425-440, 2022
  • Purpose: The purpose of this paper is to improve the chromaticity coordinate quality of white LED packages for automobiles, which require high quality and reliability. Methods: The project follows the structured Six Sigma DMAIC roadmap, which consists of the Define, Measure, Analyze, Improve, and Control phases. Results: A CTQ is determined based on COPQ analysis, and a process map and an XY matrix are used to select process input variables. Three vital few Xs are identified through data analysis: the amount mixed at one time, deviation across head pumps, and deviation across production magazines. Process improvements are performed for each of the three vital few Xs. Conclusion: The improved process conditions for the three vital few Xs are applied to the production line; the results show that the percent defective of the chromaticity coordinate improved from 1.59% to 0.63%, and a financial benefit of about 50 million won per year is obtained.