• Title/Summary/Keyword: entropy-based test

Search Results: 67

Designing Rich-Secure Network Covert Timing Channels Based on Nested Lattices

  • Liu, Weiwei;Liu, Guangjie;Ji, Xiaopeng;Zhai, Jiangtao;Dai, Yuewei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.4
    • /
    • pp.1866-1883
    • /
    • 2019
  • As the youngest branch of information hiding, network covert timing channels conceal the existence of secret messages by manipulating the timing information of overt traffic. The popular model-based framework for constructing covert timing channels uses the cumulative distribution function (CDF) of the inter-packet delays (IPDs) to modulate secret messages, while completely discarding the high-order statistics of the IPDs. The consequence is vulnerability to high-order statistical tests, e.g., the entropy test. In this study, a rich security model of covert timing channels is established based on IPD chains, which can be used to measure the distortion of the multi-order timing statistics of a covert timing channel. To achieve rich security, we propose two types of covert timing channels based on nested lattices. The CDF of the IPDs is used to construct a dot-lattice and an interval-lattice for quantization, which ensures that the cell density of the lattice is consistent with the joint distribution of the IPDs. Furthermore, compensative quantization and a guard-band strategy are employed to eliminate regularity and enhance robustness, respectively. Experimental results on real traffic show that the proposed schemes are rich-secure and robust to channel interference, whereas some state-of-the-art covert timing channels cannot evade detection under the rich security model.
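
As a minimal illustration of the entropy test this abstract refers to (a sketch only; published detectors use corrected conditional entropy, and the bin count here is an arbitrary choice):

```python
import math
from collections import Counter

def ipd_entropy(ipds, bins=16):
    """First-order entropy of binned inter-packet delays (IPDs).

    Covert timing channels that only reproduce the CDF of legitimate
    traffic tend to shift this statistic away from the baseline.
    """
    lo, hi = min(ipds), max(ipds)
    width = (hi - lo) / bins or 1.0        # guard against zero range
    counts = Counter(min(int((x - lo) / width), bins - 1) for x in ipds)
    n = len(ipds)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

A detector would compare this value (and higher-order variants over IPD chains) against the statistics of legitimate traffic.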

Comparison of Word Extraction Methods Based on Unsupervised Learning for Analyzing East Asian Traditional Medicine Texts (한의학 고문헌 텍스트 분석을 위한 비지도학습 기반 단어 추출 방법 비교)

  • Oh, Junho
    • Journal of Korean Medical Classics
    • /
    • v.32 no.3
    • /
    • pp.47-57
    • /
    • 2019
  • Objectives : We aim to assist in choosing an appropriate word-extraction method when analyzing East Asian Traditional Medicine texts based on unsupervised learning. Methods : In order to assign ranks to substrings, we conducted a test using one method (BE: branching entropy) for the exterior boundary value, three methods (CS: cohesion score, TS: t-score, SL: simple-ll) for the interior boundary value, and six methods (BExSL, BExTS, BExCS, CSxTS, CSxSL, TSxSL) obtained by combining them. Results : When the miss rate (MR) was used as the criterion, the error was minimal when TS and SL were used together, and maximal when CS was used alone. When the number of segmented texts was applied as a weight, SL performed best and BE alone performed worst. Conclusions : Unsupervised-learning-based word extraction can be used to analyze texts without a prepared vocabulary set. When using this method, SL, or the combination of SL and TS, should be considered first.
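
The branching entropy (BE) used here for the exterior boundary value can be sketched as follows (a simplified character-level version; the paper's exact preprocessing is not given in the abstract):

```python
import math
from collections import Counter

def branching_entropy(corpus, prefix):
    """Right branching entropy of `prefix`: the entropy of the
    distribution of characters that follow it in the corpus.
    A high value suggests a word boundary after the prefix."""
    nexts = Counter()
    for sent in corpus:
        for i in range(len(sent) - len(prefix)):
            if sent[i:i + len(prefix)] == prefix:
                nexts[sent[i + len(prefix)]] += 1
    total = sum(nexts.values())
    if total == 0:
        return 0.0
    return -sum(c / total * math.log2(c / total) for c in nexts.values())
```

Many different continuations after "ab" (high BE) hint that "ab" is a complete word, while a single forced continuation (zero BE) hints that the string is word-internal.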

Hardware Implementation of Context Modeler in HEVC CABAC Decoder (HEVC CABAC 복호기의 문맥 모델러 설계)

  • Kim, Sohyun;Kim, Doohwan;Lee, Seongsoo
    • Journal of IKEEE
    • /
    • v.21 no.3
    • /
    • pp.280-283
    • /
    • 2017
  • HEVC (high efficiency video coding) exploits CABAC (context-based adaptive binary arithmetic coding) for entropy coding, where a context model estimates the probability of each syntax element. In this paper, a context modeler was designed and implemented for CABAC decoding. A lookup table was used to reduce computation and increase speed. Twelve simulations with HEVC standard test sequences and encoder configurations were performed, and the context modeler was verified to operate correctly. The designed context modeler was synthesized in 0.18um technology. Its maximum frequency, maximum throughput, and gate count are 200 MHz, 200 Mbin/s, and 29,268 gates, respectively.
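
The role of such a context model can be sketched with a toy adaptive estimator. Note the 8-entry probability table below is hypothetical; HEVC CABAC uses 64 standardized probability states with table-driven transitions, which is what makes a hardware lookup table attractive:

```python
# Hypothetical 8-state LPS probability table (illustration only).
LPS_PROB = [0.45, 0.35, 0.27, 0.20, 0.15, 0.11, 0.08, 0.05]

class ContextModel:
    """Simplified binary context model: tracks the most probable
    symbol (MPS) and a state index into a probability lookup table."""
    def __init__(self):
        self.state = 0   # low state = probabilities near 50/50
        self.mps = 0

    def prob_lps(self):
        return LPS_PROB[self.state]

    def update(self, bit):
        if bit == self.mps:               # MPS observed: grow confidence
            self.state = min(self.state + 1, len(LPS_PROB) - 1)
        elif self.state == 0:             # LPS at the lowest state: flip MPS
            self.mps = 1 - self.mps
        else:                             # LPS observed: shrink confidence
            self.state -= 1
```

The arithmetic decoder queries `prob_lps()` per bin and calls `update()` with each decoded bit, so the estimate adapts to local statistics.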

Preprocessing Methods for Insect Identification Using Footprints (발자국 패턴을 이용한 곤충 판별 기법을 위한 전처리 과정)

  • Woo, Young-Woon;Cho, Kyoung-Won
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • v.9 no.1
    • /
    • pp.485-488
    • /
    • 2005
  • This paper introduces a comparison of three conventional binarization methods for insect footprints, together with the results of a performance evaluation using a proposed performance criterion. The three binarization algorithms compared each belong to a different category, and the proposed criterion is based on a characteristic of insect footprints: the foreground area is much smaller than the background area. In the experiments, average performance results on 71 test images are compared and analyzed. The higher-order entropy binarization algorithm proposed by Abutaleb showed the best results for pattern-recognition applications involving insect footprints.
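
Abutaleb's higher-order method extends maximum-entropy thresholding to a 2D gray-level/neighborhood-average histogram; the simpler 1D relative (Kapur-style) conveys the idea:

```python
import math

def max_entropy_threshold(hist):
    """Kapur-style 1D maximum-entropy binarization: choose the
    threshold t that maximizes the sum of the entropies of the
    below-threshold and above-threshold gray-level distributions."""
    total = sum(hist)
    best_t, best_h = 0, -1.0
    for t in range(1, len(hist)):
        lo = sum(hist[:t])
        hi = total - lo
        if lo == 0 or hi == 0:
            continue
        h = 0.0
        for c in hist[:t]:                 # background entropy
            if c:
                p = c / lo
                h -= p * math.log(p)
        for c in hist[t:]:                 # foreground entropy
            if c:
                p = c / hi
                h -= p * math.log(p)
        if h > best_h:
            best_t, best_h = t, h
    return best_t
```

`hist[g]` is the count of pixels at gray level `g`; pixels below the returned threshold become background.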

Geometry Coding of Three-dimensional Mesh Models Using a Joint Prediction (통합예측을 이용한 삼차원 메쉬의 기하정보 부호화 알고리듬)

  • 안정환;호요성
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.40 no.3
    • /
    • pp.185-193
    • /
    • 2003
  • The conventional parallelogram prediction uses only three previously traversed vertices in a single adjacent triangle; thus, the predicted vertex can be located at a biased position. Moreover, vertices on curved surfaces may not be predicted effectively, since each parallelogram is assumed to lie on the same plane. In order to improve the prediction performance, we use all the neighboring vertices that precede the current vertex. After ordering vertices with a vertex-layer traversal algorithm, we estimate the current vertex position from the previously coded vertex positions in the layer traversal order. The difference between the original and predicted vertex coordinates is encoded by a uniform quantizer and an entropy coder. The proposed scheme demonstrates improved coding efficiency for various VRML test data.
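
The conventional predictor criticized here fits in a few lines; the paper's joint prediction instead combines all previously coded neighboring vertices:

```python
def parallelogram_predict(v_opposite, v_edge_a, v_edge_b):
    """Classic parallelogram prediction: the new vertex across edge
    (v_edge_a, v_edge_b) from v_opposite is predicted so that the
    four vertices form a parallelogram: pred = a + b - opposite."""
    return tuple(a + b - o for o, a, b in zip(v_opposite, v_edge_a, v_edge_b))
```

Only the residual (actual minus predicted coordinates) is quantized and entropy-coded, which is why a less biased predictor directly improves compression.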

Acoustic Model Improvement and Performance Evaluation of the Variable Vocabulary Speech Recognition System (가변 어휘 음성 인식기의 음향모델 개선 및 성능분석)

  • 이승훈;김회린
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.8
    • /
    • pp.3-8
    • /
    • 1999
  • Previous variable vocabulary speech recognition systems with context-independent acoustic modeling could not represent the effect of neighboring phonemes. To solve this problem, we use an allophone-based context-dependent acoustic model. This paper describes a method to improve the system's acoustic model effectively. The acoustic model is improved with an allophone clustering technique that uses entropy as a similarity measure, and the optimal allophone model is generated by varying the number of allophones. We evaluate the performance of the improved system using a phonetically optimized words (POW) DB and a PC commands (PC) DB. As a result, an allophone model composed of six hundred allophones improved the recognition rate by 13% over the original context-independent model on the POW test DB.

A combined spline chirplet transform and local maximum synchrosqueezing technique for structural instantaneous frequency identification

  • Ping-Ping Yuan;Zhou-Jie Zhao;Ya Liu;Zhong-Xiang Shen
    • Smart Structures and Systems
    • /
    • v.33 no.3
    • /
    • pp.201-215
    • /
    • 2024
  • Spline chirplet transform and local maximum synchrosqueezing are combined to present a novel structural instantaneous frequency (IF) identification method named local maximum synchrosqueezing spline chirplet transform (LMSSSCT). The spline chirplet transform (SCT) is first introduced, based on the classic chirplet transform and a spline-interpolated kernel function. Applying the SCT together with local maximum synchrosqueezing then yields the LMSSSCT. An accuracy index and the Rényi entropy show that the LMSSSCT outperforms other time-frequency analysis (TFA) methods in processing analytical signals, especially in the presence of noise. Numerical examples of a single-degree-of-freedom Duffing nonlinear system and a two-layer shear frame structure with time-varying stiffness are used to verify the effectiveness of structural IF identification. Moreover, a test on a nonlinearly supported beam structure is conducted, and the LMSSSCT is utilized for structural IF identification. Numerical simulation and experimental results demonstrate that the presented LMSSSCT can effectively identify the IFs of nonlinear structures and time-varying structures with good accuracy and stability.
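
The Rényi entropy used here to score time-frequency concentration can be computed as below; lower values indicate a sharper time-frequency representation (TFR), and order alpha = 3 is the customary choice:

```python
import math

def renyi_entropy(tfr, alpha=3):
    """Renyi entropy of a time-frequency representation given as a
    2D list of non-negative values. The TFR is normalized to a
    probability distribution; concentrated TFRs score lower."""
    total = sum(sum(row) for row in tfr)
    s = sum((v / total) ** alpha for row in tfr for v in row)
    return math.log2(s) / (1 - alpha)
```

A synchrosqueezed TFR reassigns energy toward ridges, so its Rényi entropy drops relative to the plain transform.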

Discretization of Numerical Attributes and Approximate Reasoning by using Rough Membership Function (러프 소속 함수를 이용한 수치 속성의 이산화와 근사 추론)

  • Kwon, Eun-Ah;Kim, Hong-Gi
    • Journal of KIISE:Databases
    • /
    • v.28 no.4
    • /
    • pp.545-557
    • /
    • 2001
  • In this paper, we propose a hierarchical classification algorithm based on the rough membership function, which can classify a new object approximately. We adopt the fuzzy reasoning approach, which substitutes a membership value for linguistic uncertainty and reasons approximately by composing the membership values of the conditional attributes, but we use the rough membership function in place of the fuzzy membership function. This removes the step in which a fuzzy algorithm must first produce fuzzy rules from fuzzy membership functions. In addition, we transform the information system into an understandable, minimal decision information system. To do so, we study the discretization of continuous-valued attributes and propose a discretization algorithm based on the rough membership function and the entropy measure of information theory. Tests show that it finds good partitions that produce a smaller decision system. We evaluated the proposed algorithm on the IRIS data set and others; the experiments with the IRIS data show a classification rate of 96%~98%.
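
The entropy part of such a discretization can be sketched as a search for the cut point minimizing weighted class entropy (the rough-membership component of the paper's algorithm is omitted here):

```python
import math

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def best_cut(values, labels):
    """Entropy-based discretization of one numeric attribute: return
    the cut point minimizing the weighted class entropy of the two
    resulting intervals (midpoints between distinct sorted values)."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best = (float('inf'), None)
    for i in range(1, n):
        if pairs[i][0] == pairs[i - 1][0]:
            continue                       # no boundary between ties
        left = [y for _, y in pairs[:i]]
        right = [y for _, y in pairs[i:]]
        w = (len(left) * entropy(left) + len(right) * entropy(right)) / n
        cut = (pairs[i - 1][0] + pairs[i][0]) / 2
        best = min(best, (w, cut))
    return best[1]
```

Applying the search recursively to each interval, with a stopping rule, yields a full multi-interval discretization.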

Performance Evaluation of Deep Neural Network (DNN) Based on HRV Parameters for Judgment of Risk Factors for Coronary Artery Disease (관상동맥질환 위험인자 유무 판단을 위한 심박변이도 매개변수 기반 심층 신경망의 성능 평가)

  • Park, Sung Jun;Choi, Seung Yeon;Kim, Young Mo
    • Journal of Biomedical Engineering Research
    • /
    • v.40 no.2
    • /
    • pp.62-67
    • /
    • 2019
  • The purpose of this study was to evaluate the performance of a deep neural network model in determining whether risk factors for coronary artery disease are present, based on heart rate variability (HRV) parameters. The study used 297 de-identified records to evaluate the model. The input data consist of HRV parameters, namely SDNN (standard deviation of the N-N intervals), PSI (physical stress index), TP (total power), VLF (very low frequency), LF (low frequency), HF (high frequency), RMSSD (root mean square of successive differences), APEN (approximate entropy), and SRD (successive R-R interval difference), together with age group and sex. The output data are divided into a normal group and a patient group; the patient group consists of those diagnosed with diabetes, high blood pressure, or hyperlipidemia, among the various risk factors that can cause coronary artery disease. On this basis, a binary classification model using a deep neural network was applied to classify the normal and patient groups efficiently. To evaluate its effectiveness, a kernel SVM (support vector machine), one of the classification models in machine learning, was compared against it on the same data. The results showed that the accuracy of the proposed deep neural network was 91.79% on the training set and 85.56% on the test set, with a specificity of 87.04% and a sensitivity of 83.33% at the point of diagnosis. Since the training-set accuracy of the deep neural network was 7.73% higher than that of the comparative kernel SVM model, these results suggest that deep learning is more efficient for classifying such medical data.
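
Among the listed inputs, APEN is the least standard to compute; a plain-Python version of approximate entropy follows (the parameter choices m and r are illustrative, not the paper's):

```python
import math

def apen(series, m=2, r=0.2):
    """Approximate entropy: a regularity statistic for a time series
    such as an R-R interval sequence. Low values mean the series is
    highly self-similar; r is the match tolerance."""
    def phi(k):
        n = len(series) - k + 1
        templates = [series[i:i + k] for i in range(n)]
        total = 0.0
        for t1 in templates:
            matches = sum(
                1 for t2 in templates
                if max(abs(a - b) for a, b in zip(t1, t2)) <= r
            )
            total += math.log(matches / n)
        return total / n
    return phi(m) - phi(m + 1)
```

A perfectly alternating series is almost perfectly predictable, so its approximate entropy is near zero.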

Sequence Mining based Manufacturing Process using Decision Model in Cognitive Factory (스마트 공장에서 의사결정 모델을 이용한 순차 마이닝 기반 제조공정)

  • Kim, Joo-Chang;Jung, Hoill;Yoo, Hyun;Chung, Kyungyong
    • Journal of the Korea Convergence Society
    • /
    • v.9 no.3
    • /
    • pp.53-59
    • /
    • 2018
  • In this paper, we propose a sequence-mining-based manufacturing process using a decision model in a cognitive factory. The proposed model increases production efficiency by applying a sequence-mining decision model to a small-scale production process. The data generated in the production process form the input variables, and the output variables are the hourly production rate and the defect rate. We use the GSP algorithm and the REPTree algorithm to generate rules and models from the variables found significant by a t-test. As a result, the defect rate improved by 0.38% and the average hourly production rate increased by 1.89. This is a meaningful result for improving production efficiency through data-mining analysis in the small-scale production of a cognitive factory.
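
The significance-screening step can be illustrated with Welch's t statistic (the abstract does not specify which t-test variant the authors used):

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two samples with possibly unequal
    variances; large |t| marks a process variable whose mean differs
    between, e.g., high-yield and low-yield runs."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
```

Variables whose |t| exceeds the critical value for the chosen significance level would then be fed to GSP and REPTree for rule generation.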