• Title/Summary/Keyword: Data transforms

Classification of Environmentally Distorted Acoustic Signals in Shallow Water Using Neural Networks: Application to Simulated and Measured Signal

  • Na, Young-Nam;Park, Joung-Soo;Chang, Duck-Hong;Kim, Chun-Duck
    • The Journal of the Acoustical Society of Korea / v.17 no.1E / pp.54-65 / 1998
  • This study tests the classification performance of a neural network and thereby examines its applicability to signals distorted in a shallow-water environment. Linear frequency modulated (LFM) signals are simulated using an acoustic model and also measured in a sea experiment. The network is constructed with three layers and trained on both data sets. To obtain normalized power spectra as feature vectors, the study considers three transforms: the short-time Fourier transform (STFT), the wavelet transform (WT), and the pseudo Wigner-Ville distribution (PWVD). When trained on the simulated signals over water depth, the network achieves over 95% performance with the signal-to-noise ratio (SNR) up to -10 dB. Among the transforms, the PWVD gives the best performance, particularly in highly noisy conditions. The network performs worse with the summer sound speed profile than with the winter profile, and it is also expected to show considerably different performance as bottom properties vary. When the network is trained on the measured signals, it gives slightly better results than the network trained on the simulated data. In conclusion, the simulated signals are successfully applied to training a network, and the trained network performs well in classifying signals distorted by the surrounding environment and corrupted by noise.
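
As a rough illustration of the front end the abstract describes (not the authors' implementation), the sketch below turns a signal into a normalized STFT power spectrum and feeds it to a small feed-forward classifier; the sampling rate, window length, hidden-layer size, and the synthetic up/down chirp classes are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft, chirp
from sklearn.neural_network import MLPClassifier

def stft_feature(x, fs, nperseg=256):
    """Normalized power spectrum averaged over STFT frames (feature vector)."""
    _, _, Zxx = stft(x, fs=fs, nperseg=nperseg)
    power = np.mean(np.abs(Zxx) ** 2, axis=1)      # average power per frequency bin
    return power / (power.sum() + 1e-12)           # normalize to unit sum

# Toy stand-in for simulated LFM signals: up-sweep vs. down-sweep chirps in noise.
fs = 8000
t = np.arange(0, 0.5, 1 / fs)
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(40):
    label = int(rng.integers(2))                   # 0: up-sweep, 1: down-sweep
    f0, f1 = (500, 1500) if label == 0 else (1500, 500)
    sig = chirp(t, f0=f0, t1=t[-1], f1=f1) + 0.5 * rng.standard_normal(t.size)
    X.append(stft_feature(sig, fs))
    y.append(label)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
print(clf.fit(X[:30], y[:30]).score(X[30:], y[30:]))   # held-out accuracy
```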

The Confidence Intervals for Logistic Model in Contingency Table

  • Cho, Tae-Kyoung
    • Communications for Statistical Applications and Methods / v.10 no.3 / pp.997-1005 / 2003
  • The logistic model can be used for categorical data when the response variables are binary. In this paper we consider the problem of constructing confidence intervals for the logistic model in an I×J×2 contingency table. The construction is simplified by applying the logit transformation, which turns the problem into a linear form called the logit model. After confidence intervals are obtained for the logit model, the reverse transform is applied to obtain confidence intervals for the logistic model.
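
A minimal sketch of the logit-and-back idea, not the paper's construction for the I×J×2 table: a Wald-type interval is built for a single proportion on the logit scale and mapped back to the probability scale with the inverse logit.

```python
import numpy as np
from scipy.stats import norm

def logit_ci(successes, n, level=0.95):
    """Wald interval on the logit scale, mapped back to the probability scale."""
    p_hat = successes / n
    logit = np.log(p_hat / (1 - p_hat))
    se = np.sqrt(1 / successes + 1 / (n - successes))   # standard error of the logit
    z = norm.ppf(0.5 + level / 2)
    lo, hi = logit - z * se, logit + z * se
    inv_logit = lambda t: 1 / (1 + np.exp(-t))          # reverse transform
    return inv_logit(lo), inv_logit(hi)

print(logit_ci(30, 100))   # interval for a cell probability of 0.30 (toy numbers)
```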

Image Data Processing by Hadamard-Center Line Symmetric Haar (Hadamard-Center Line Symmetric Haar에 의한 Image Data 처리에 관한 연구)

  • 안성렬;소상호;황재정;이문호
    • Proceedings of the Korean Institute of Communication Sciences Conference / 1984.04a / pp.13-17 / 1984
  • A hybrid version of the Hadamard and Center Line Symmetric Haar transforms, called H-CLSH, is defined and developed. Efficient algorithms for fast computation of the H-CLSH and its inverse are developed. The H-CLSH is applied to digital signal and image processing, and its utility and effectiveness are compared with those of Hadamard-Haar discrete transforms on the basis of some standard performance criteria.
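
The H-CLSH construction itself is not given in the abstract; for reference, the sketch below is a standard in-place fast Walsh-Hadamard transform, the Hadamard half of such hybrid transforms, using O(N log N) butterfly operations.

```python
import numpy as np

def fwht(x):
    """Unnormalized in-place fast Walsh-Hadamard transform; len(x) must be a power of 2."""
    a = np.array(x, dtype=float)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

print(fwht([1, 0, 1, 0, 0, 1, 1, 0]))   # 8-point transform of a toy sequence
```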

Implementation of Gene Information System (유전자 정보시스템 설계 및 구현)

  • Choi, Nak-Joong;Choi, Han Suk;Kim, Dong-Wook
    • Proceedings of the Korea Contents Association Conference / 2018.05a / pp.549-550 / 2018
  • We have developed a web server for the high-throughput annotation of genes. This system processes entire data sets with an automated pipeline of 13 analytic services, deposits the data into a MySQL database, and transforms it into three kinds of reports: preprocessing, assembling, and annotation.
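
Purely to illustrate the pipeline-plus-database idea (the 13 analytic services and the actual schema are not described in the abstract), the sketch below runs invented preprocessing/assembling/annotation steps and stores their report rows, with sqlite3 standing in for the MySQL store so the example is self-contained.

```python
import sqlite3

def run_pipeline(sequences, steps):
    """Apply each analytic step in order and collect its report rows."""
    rows = []
    for step in steps:
        rows.extend(step(sequences))
    return rows

# Invented stand-ins for the analytic services.
def preprocess(seqs):
    return [("preprocessing", s, str(len(s))) for s in seqs]

def assemble(seqs):
    return [("assembling", s, s[:3]) for s in seqs]

def annotate(seqs):
    return [("annotation", s, "hypothetical gene") for s in seqs]

db = sqlite3.connect(":memory:")                     # stand-in for the MySQL store
db.execute("CREATE TABLE report (kind TEXT, sequence TEXT, value TEXT)")
db.executemany("INSERT INTO report VALUES (?, ?, ?)",
               run_pipeline(["ATGCGT", "TTAGC"], [preprocess, assemble, annotate]))
print(db.execute("SELECT * FROM report WHERE kind = 'annotation'").fetchall())
```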

Searchable Encrypted String for Query Support on Different Encrypted Data Types

  • Azizi, Shahrzad;Mohammadpur, Davud
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.10 / pp.4198-4213 / 2020
  • Data encryption, particularly application-level data encryption, is a common solution for protecting data confidentiality and dealing with security threats. Application-level encryption is a process in which data is encrypted before being sent to the database. However, cryptography transforms the data and makes queries difficult to execute. Various studies have sought ways to implement a searchable encrypted database. In this paper, we provide a new method, ZSDB, for encrypting and querying encrypted data of different data types. The proposed method is based on secret sharing. ZSDB provides data confidentiality by dividing sensitive data into two parts and using an additional server as a dictionary server. In addition, it supports the required operations on various types of data, especially the LIKE operator on the string data type. ZSDB delegates most of the query execution work to the server, so the data owner only needs to encrypt and decrypt data.
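
The abstract names secret sharing as the basis of ZSDB without giving details, so the sketch below shows only generic two-party additive secret sharing over a public modulus; the dictionary server and the LIKE handling on encrypted strings are not reproduced, and the modulus is an illustrative choice.

```python
import secrets

P = 2**61 - 1   # public modulus (illustrative choice)

def split(value):
    """Split an integer into two additive shares modulo P, one per server."""
    share1 = secrets.randbelow(P)
    share2 = (value - share1) % P
    return share1, share2

def reconstruct(share1, share2):
    """Recombine the two shares to recover the original value."""
    return (share1 + share2) % P

s1, s2 = split(42)          # each share alone reveals nothing about 42
assert reconstruct(s1, s2) == 42
```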

Forecasting Short-Term KOSPI using Wavelet Transforms and Fuzzy Neural Network (웨이블릿 변환과 퍼지 신경망을 이용한 단기 KOSPI 예측)

  • Shin, Dong-Kun;Chung, Kyung-Yong
    • The Journal of the Korea Contents Association / v.11 no.6 / pp.1-7 / 2011
  • Forecasting the KOSPI has been considered one of the most difficult problems to solve accurately, since the short-term KOSPI is correlated with various factors including politics and economics. In this paper, we present a methodology for forecasting short-term trends of stock prices over five days using a feature selection method based on a neural network with weighted fuzzy membership functions (NEWFM). The distributed non-overlap area measurement method selects a minimal set of input features by removing the worst input features one by one. In the first step, technical indicators are selected to preprocess the KOSPI data. In the second step, thirty-nine input features are produced by wavelet transforms. Twelve input features are then selected from these thirty-nine using the non-overlap area distribution measurement method. The proposed method achieves sensitivity, specificity, and accuracy rates of 72.79%, 74.76%, and 73.84%, respectively.
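
A minimal sketch of the wavelet feature step only, assuming an illustrative wavelet, decomposition level, and window length (PyWavelets); NEWFM and the non-overlap area measurement method are not reproduced.

```python
import numpy as np
import pywt

def wavelet_features(prices, wavelet="haar", level=3):
    """Concatenated multi-level DWT coefficients of a price window."""
    coeffs = pywt.wavedec(np.asarray(prices, dtype=float), wavelet, level=level)
    return np.concatenate(coeffs)

# Toy usage: a 32-day window of synthetic closing prices.
rng = np.random.default_rng(0)
window = 2000 + np.cumsum(rng.standard_normal(32))
print(wavelet_features(window).shape)   # 32 coefficients across 4 bands
```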

ICAIM: An Improved CAIM Algorithm for Knowledge Discovery

  • Yaowapanee, Piriya;Pinngern, Ouen
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2004.08a / pp.2029-2032 / 2004
  • The quantity of data has increased rapidly in recent years, causing data overload and making it difficult to find the required data, so methods for eliminating redundant data are needed. One efficient approach is Knowledge Discovery in Databases (KDD). In general, data fall into two cases: continuous data and discrete data. This paper describes an algorithm that transforms continuous attributes into discrete ones. We present Improved Class-Attribute Interdependence Maximization (ICAIM), which is designed to work with supervised data, for the discretization process. The algorithm does not require the user to predefine the number of intervals. ICAIM improves CAIM by using a significance test to determine which intervals should be merged into one. Our goal is to generate a minimal number of discrete intervals and to improve classification accuracy. We used the iris plant dataset (IRIS) to test this algorithm and compare it with the CAIM algorithm.
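
ICAIM's exact test statistic is not spelled out in the abstract, so the sketch below only illustrates the general idea of merging two adjacent discretization intervals when a chi-square test finds no significant interval/class dependence; the significance level and counts are toy assumptions.

```python
from scipy.stats import chi2_contingency

def should_merge(counts_a, counts_b, alpha=0.05):
    """counts_a, counts_b: per-class sample counts in two adjacent intervals."""
    _, p_value, _, _ = chi2_contingency([counts_a, counts_b])
    return p_value > alpha        # no significant dependence -> merge the intervals

# Two adjacent intervals with per-class counts for three classes (toy numbers).
print(should_merge([10, 2, 1], [9, 3, 2]))
```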

Correlation Analysis of the Frequency and Death Rates in Arterial Intervention using C4.5

  • Jung, Yong Gyu;Jung, Sung-Jun;Cha, Byeong Heon
    • International Journal of Advanced Smart Convergence / v.6 no.3 / pp.22-28 / 2017
  • With the recent development of technologies for managing vast amounts of data, data mining has had a major impact on all industries. Data mining is the process of discovering useful correlations hidden in data, extracting actionable information for the future, and using it for decision making. In other words, it is a core process of Knowledge Discovery in Databases (KDD) that transforms input data and derives useful information, extracting information that was previously unknown from a large database. In this paper, the frequency and mortality of percutaneous coronary interventions for patients with heart disease were grouped by region, and the C4.5 decision tree algorithm was used to analyze the differences in frequency and mortality between regions.
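
C4.5 itself is not what the sketch below runs: scikit-learn provides a CART tree, used here with the entropy criterion as an approximation, on invented per-region intervention/mortality data; the feature layout and labels are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical feature matrix: [region_code, interventions_per_100k]
X = np.array([[0, 120], [0, 340], [1, 95], [1, 410], [2, 210], [2, 380]])
y = np.array(["low", "high", "low", "high", "low", "high"])   # mortality band

tree = DecisionTreeClassifier(criterion="entropy", max_depth=2).fit(X, y)
print(tree.predict([[1, 300]]))   # predicted mortality band for a new region/frequency pair
```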

Integration of Gear Design Data using XML in the Web-based Environment (웹 기반 환경에서 XML을 이용한 기어 설계 데이터의 통합)

  • 정태형;박승현
    • Proceedings of the Korean Society of Precision Engineering Conference / 2001.04a / pp.627-630 / 2001
  • XML is suitable for integrating various forms of engineering design data since it possesses the characteristics of both documents and data. In this research, a web-based design system has been developed that integrates various gear design data in the form of XML. The system generates XML documents containing gear design data and automatically transforms gear design data in the relational database into XML document form. The XML documents are transmitted to the gear modeler agent through SOAP; the agent is then executed automatically and generates CAD model files and VRML files. The designer can check the generated VRML model of the gear immediately through the web service.
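
A minimal sketch of the relational-row-to-XML step only, with invented element names; the paper's schema, the SOAP transport, and the gear modeler agent are not reproduced.

```python
import xml.etree.ElementTree as ET

def gear_row_to_xml(row):
    """row: a dict of gear design parameters fetched from a relational table."""
    gear = ET.Element("gear")
    for name, value in row.items():
        ET.SubElement(gear, name).text = str(value)
    return ET.tostring(gear, encoding="unicode")

# Invented gear parameters, just to show the mapping.
print(gear_row_to_xml({"module": 2.5, "teeth": 40, "pressure_angle": 20}))
```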

Design and Implementation of a XHTML to VoiceXML Converter based on EXI in Pervasive Environments (편재형 컴퓨팅 환경에서 XHTML과 VoiceXML간 EXI 문서의 변환시스템 설계와 구현)

  • Shin, Kyoung-Hee;Kwak, Dong-Gyu;Yoo, Chae-Woo
    • Journal of the Korea Society of Computer and Information / v.14 no.11 / pp.13-20 / 2009
  • In a pervasive environment, there are as many applications as there are connections among various devices, and XML is the most suitable data representation method for this computing environment. XML data can be transformed for other application areas using XSLT. However, because XML is text-based, an XML document is larger than a binary data file, so XML has the disadvantage of being hard to handle in a pervasive environment. In this paper, we survey encoding methods for XML documents and then propose a transformation method that converts an EXI-encoded XML document into an EXI-format XML document suited for another application. Among various applications, we present a system that transforms an EXI-format XHTML document into a VoiceXML document. This system can improve the reusability of EXI-format XML documents in a pervasive environment and is expected to contribute to the utilization of EXI-format XML documents.
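
A sketch of the XHTML-to-VoiceXML mapping step using a toy XSLT stylesheet (lxml); the EXI encode/decode stage would sit around this step and is not shown, and the element mapping is an illustrative assumption rather than the paper's converter.

```python
from lxml import etree

# Toy stylesheet: every XHTML <p> becomes a VoiceXML <prompt> inside one form.
XSLT_DOC = """<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <vxml version="2.1" xmlns="http://www.w3.org/2001/vxml">
      <form><block>
        <xsl:for-each select="//p">
          <prompt><xsl:value-of select="."/></prompt>
        </xsl:for-each>
      </block></form>
    </vxml>
  </xsl:template>
</xsl:stylesheet>"""

transform = etree.XSLT(etree.XML(XSLT_DOC))
xhtml = etree.XML("<html><body><p>Welcome.</p><p>Please say a command.</p></body></html>")
print(str(transform(xhtml)))   # serialized VoiceXML output
```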