• Title/Abstract/Keywords: random data analysis

Search results: 1,698 items (processing time: 0.031 seconds)

블록 암호 연산 모드 RBF(Random Block Feedback)의 알려진/선택 평문 공격에 대한 안전성 비교 분석 (Safety Comparison Analysis Against Known/Chosen Plaintext Attack of RBF (Random Block Feedback) Mode to Other Block Cipher Modes of Operation)

  • 김윤정;이강
    • 한국통신학회논문지
    • /
    • Vol. 39B, No. 5
    • /
    • pp.317-322
    • /
    • 2014
  • Data security and integrity are important factors when transmitting data over wired and wireless communication environments. Large volumes of data are usually encrypted before transmission by a block cipher algorithm using a mode of operation. In addition to existing modes of operation such as ECB and CBC, the RBF mode has been proposed as a block cipher mode of operation. In this paper, we present a comparative analysis of the security of the RBF mode against known-plaintext and chosen-plaintext attacks relative to the existing modes. The analysis shows that while the existing modes of operation are vulnerable to known/chosen-plaintext attacks, the RBF mode is secure against these attacks.
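The weakness the abstract refers to can be seen with a toy mode-of-operation comparison. The sketch below is a minimal illustration, not the RBF mode itself: the "cipher" is a byte-wise XOR stand-in with a 4-byte block, and it only shows why a chained (feedback) mode hides the repeated-plaintext patterns that a mode like ECB exposes to known/chosen-plaintext analysis.

```python
# Toy illustration: why feedback modes hide plaintext patterns that ECB exposes.
# The "cipher" here is a stand-in (byte-wise XOR with a fixed key), NOT a real
# block cipher; the 4-byte block size is for readability only.

BLOCK = 4
KEY = bytes([0x5A, 0xC3, 0x7E, 0x11])

def toy_encrypt_block(block: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(block, KEY))

def ecb(plain: bytes) -> bytes:
    # each block encrypted independently: equal plaintext -> equal ciphertext
    return b"".join(toy_encrypt_block(plain[i:i + BLOCK])
                    for i in range(0, len(plain), BLOCK))

def cbc(plain: bytes, iv: bytes) -> bytes:
    # each block XORed with the previous ciphertext block before encryption
    out, prev = [], iv
    for i in range(0, len(plain), BLOCK):
        block = bytes(p ^ c for p, c in zip(plain[i:i + BLOCK], prev))
        prev = toy_encrypt_block(block)
        out.append(prev)
    return b"".join(out)

msg = b"AAAABBBBAAAA"          # first and third blocks are identical
e = ecb(msg)
c = cbc(msg, iv=b"\x01\x02\x03\x04")
print(e[0:4] == e[8:12])       # True: ECB repeats ciphertext for repeated plaintext
print(c[0:4] == c[8:12])       # False: chaining breaks the pattern
```

An attacker who knows (or chooses) plaintext blocks can exploit the repeated-ciphertext structure in the first case, which is the class of attack the comparison in the paper addresses.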

Asymptotic Test for Dimensionality in Probabilistic Principal Component Analysis with Missing Values

  • Park, Chong-sun
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 11, No. 1
    • /
    • pp.49-58
    • /
    • 2004
  • In this paper we propose an asymptotic test for dimensionality in the latent variable model for probabilistic principal component analysis with values missing at random. The proposed procedure is a sequential likelihood ratio test for an appropriate normal latent variable model for principal component analysis. A modified EM algorithm is used to find the MLEs of the model parameters. Results from simulations and real data sets provide promising evidence that the proposed method is useful for finding the necessary number of components in principal component analysis with values missing at random.
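The sequential testing loop described above can be sketched as follows. The per-model maximized log-likelihoods would come from the modified EM fit (not reproduced here), so the numbers and the per-step degrees of freedom below are illustrative stand-ins.

```python
# Minimal sketch of a sequential likelihood-ratio test for the number of
# components q: test q vs q+1 components, stop at the first non-significant
# improvement. Log-likelihood values and df per step are made up.
CHI2_95 = {1: 3.841, 2: 5.991, 3: 7.815, 4: 9.488}  # chi-square upper 5% points

def choose_dimension(logliks, df_per_step):
    """logliks[q] = maximized log-likelihood of the q-component model."""
    qs = sorted(logliks)
    for q in qs[:-1]:
        stat = 2.0 * (logliks[q + 1] - logliks[q])   # LR statistic for q vs q+1
        if stat < CHI2_95[df_per_step]:              # extra component not needed
            return q
    return qs[-1]

# illustrative fitted log-likelihoods for q = 1..5
ll = {1: -500.0, 2: -470.0, 3: -455.0, 4: -454.5, 5: -454.4}
print(choose_dimension(ll, df_per_step=3))  # 3
```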

Negative binomial loglinear mixed models with general random effects covariance matrix

  • Sung, Youkyung;Lee, Keunbaik
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 25, No. 1
    • /
    • pp.61-70
    • /
    • 2018
  • Modeling of the random effects covariance matrix in generalized linear mixed models (GLMMs) is an issue in the analysis of longitudinal categorical data because the covariance matrix can be high-dimensional and its estimate must satisfy positive definiteness. To satisfy these constraints, we consider the autoregressive and moving average Cholesky decomposition (ARMACD) to model the covariance matrix. The ARMACD provides a more flexible decomposition of the covariance matrix that yields generalized autoregressive parameters, generalized moving average parameters, and innovation variances. In this paper, we analyze longitudinal count data with overdispersion using GLMMs. We propose negative binomial loglinear mixed models to analyze longitudinal count data, and we also model the random effects covariance matrix using the ARMACD. Epilepsy data are analyzed using the proposed model.
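A minimal sketch of why a Cholesky-type parameterization sidesteps the positive-definiteness constraint: unconstrained autoregressive parameters and unconstrained log innovation variances always reconstruct a valid covariance matrix. The 3x3 values below are made up, and the moving-average part of the ARMACD is omitted.

```python
# Sketch of the autoregressive part of a modified Cholesky parameterization:
# T * Sigma * T' = D with T unit lower-triangular (generalized autoregressive
# parameters phi below the diagonal) and D diagonal (innovation variances).
# Any real phi and any real log-variances give a positive-definite Sigma.
import math

def cov_from_cholesky_params(phi, log_d):
    n = len(log_d)
    # T: unit lower-triangular with -phi[i][j] below the diagonal
    T = [[1.0 if i == j else (-phi[i][j] if j < i else 0.0)
          for j in range(n)] for i in range(n)]
    # invert T by forward substitution (T is unit lower-triangular)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        L[j][j] = 1.0
        for i in range(j + 1, n):
            L[i][j] = -sum(T[i][k] * L[k][j] for k in range(j, i))
    d = [math.exp(v) for v in log_d]      # innovation variances, positive by design
    # Sigma = T^{-1} D T^{-T}
    return [[sum(L[i][k] * d[k] * L[j][k] for k in range(n))
             for j in range(n)] for i in range(n)]

phi = [[0, 0, 0], [0.5, 0, 0], [0.2, 0.4, 0]]   # unconstrained AR parameters
S = cov_from_cholesky_params(phi, log_d=[0.0, -0.1, 0.3])
print(S[0][1] == S[1][0])                        # symmetric by construction
```

The estimation problem then works on the unconstrained phi and log-variances directly, which is the practical appeal of the decomposition.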

A New Approach for Information Security using an Improved Steganography Technique

  • Juneja, Mamta;Sandhu, Parvinder Singh
    • Journal of Information Processing Systems
    • /
    • Vol. 9, No. 3
    • /
    • pp.405-424
    • /
    • 2013
  • This research paper proposes a secure, robust approach to information security using steganography. It presents two component-based LSB (Least Significant Bit) steganography methods for embedding secret data in the least significant bits of the blue components and partial green components of random pixel locations along the edges of images. An adaptive LSB-based steganography method is proposed for embedding data based on the data available in the MSBs (Most Significant Bits) of the red, green, and blue components of randomly selected pixels across smooth areas. A hybrid feature detection filter is also proposed that predicts edge areas better, even under noisy conditions. AES (Advanced Encryption Standard) and random pixel embedding are incorporated to provide two-tier security. The experimental results of the proposed approach are better in terms of PSNR and capacity, and a comparative analysis with other existing techniques gives the proposed approach an edge over them. It has been thoroughly tested against various steganalysis attacks, such as visual analysis, histogram analysis, chi-square, and RS analysis, and withstands all of these attacks well.
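A minimal sketch of the blue-channel LSB embedding idea, under simplifying assumptions: pixels are visited sequentially here and the "image" is a flat list of RGB tuples, whereas the paper selects random edge/smooth-area pixels and layers AES encryption on top (neither is reproduced).

```python
# Embed message bits into the least significant bit of the blue channel only;
# each modified pixel changes its blue value by at most 1, which is the
# imperceptibility argument behind LSB steganography.
def embed(pixels, message: bytes):
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    out = []
    for (r, g, b), bit in zip(pixels, bits + [None] * (len(pixels) - len(bits))):
        out.append((r, g, (b & ~1) | bit) if bit is not None else (r, g, b))
    return out

def extract(pixels, n_bytes: int):
    bits = [b & 1 for (_, _, b) in pixels[:n_bytes * 8]]
    return bytes(sum(bit << (7 - i) for i, bit in enumerate(bits[k * 8:(k + 1) * 8]))
                 for k in range(n_bytes))

cover = [(120, 130, 140)] * 16        # toy 16-pixel "image"
stego = embed(cover, b"Hi")
print(extract(stego, 2))               # b'Hi'
```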

머신러닝을 이용한 세금 계정과목 분류 (Taxation Analysis Using Machine Learning)

  • 최동빈;조인수;박용범
    • 반도체디스플레이기술학회지
    • /
    • Vol. 18, No. 2
    • /
    • pp.73-77
    • /
    • 2019
  • Data mining techniques can also be used to increase efficiency in the tax sector, which requires professional skills. As tax-related work was computerized, large amounts of data accumulated, creating a good environment for data mining. In this paper, we develop a system that assists tax accountants, who already have professional expertise, by applying data mining techniques to the accumulated tax-related data. The data mining technique used is a random forest, tuned using the F1-score. Using the implemented system, data accumulated over two years was used for training, and the system showed high prediction accuracy.
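The F1-score criterion mentioned above can be computed as below. This pure-Python macro-F1 sketch uses made-up account-category labels; the random forest classifier itself is not reproduced.

```python
# Per-class F1 (harmonic mean of precision and recall), averaged over classes
# (macro-F1), for a toy multi-class account-code classification task.
def f1_per_class(y_true, y_pred, cls):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

# illustrative account categories, not from the paper's data
y_true = ["travel", "travel", "supplies", "meals", "meals", "meals"]
y_pred = ["travel", "supplies", "supplies", "meals", "meals", "travel"]
classes = sorted(set(y_true))
macro_f1 = sum(f1_per_class(y_true, y_pred, c) for c in classes) / len(classes)
print(round(macro_f1, 3))  # 0.656
```

Unlike plain accuracy, macro-F1 weights rare account categories equally, which is why it is a common tuning criterion for imbalanced classification tasks like this one.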

A simple and efficient data loss recovery technique for SHM applications

  • Thadikemalla, Venkata Sainath Gupta;Gandhi, Abhay S.
    • Smart Structures and Systems
    • /
    • Vol. 20, No. 1
    • /
    • pp.35-42
    • /
    • 2017
  • Recently, compressive sensing based data loss recovery techniques have become popular for Structural Health Monitoring (SHM) applications. These techniques involve an encoding process that is onerous for the sensor node because of the random sensing matrices used in compressive sensing. In this paper, we present a model in which the sampled raw acceleration data is transmitted directly to the base station/receiver without any encoding at the transmitter. The incomplete acceleration data received after data losses can be reconstructed faithfully using compressive sensing based reconstruction techniques. An in-depth simulated analysis is presented of how random losses and continuous losses affect the reconstruction of acceleration signals (obtained from a real bridge). Along with a performance analysis for different simulated data losses (from 10 to 50%), the advantages of performing interleaving before transmission are also presented.
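Why interleaving before transmission helps can be shown with a small block-interleaver sketch (sizes illustrative): a continuous (burst) loss on the channel turns into scattered, random-like losses after de-interleaving, which reconstruction techniques handle better than a contiguous gap.

```python
# Block interleaver: write samples row-wise, transmit column-wise. A burst
# loss of consecutive transmitted samples then maps back to widely spaced
# gaps in the original signal order.
def interleave(x, rows, cols):
    return [x[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(y, rows, cols):
    out = [0] * (rows * cols)
    for i, (c, r) in enumerate((c, r) for c in range(cols) for r in range(rows)):
        out[r * cols + c] = y[i]
    return out

data = list(range(12))                 # 12 toy acceleration samples
tx = interleave(data, rows=3, cols=4)
tx[0:3] = [None, None, None]           # burst loss of 3 consecutive samples
rx = deinterleave(tx, rows=3, cols=4)
print([i for i, v in enumerate(rx) if v is None])   # [0, 4, 8]: losses scattered
```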

3차원 알고리듬을 이용한 랜덤(or s-랜덤) 인터리버를 적용한 터보코드의 성능분석 (Performance Analysis of Turbo-Code with Random (and s-random) Interleaver based on 3-Dimension Algorithm)

  • 공형윤;최지웅
    • 정보처리학회논문지A
    • /
    • Vol. 9A, No. 3
    • /
    • pp.295-300
    • /
    • 2002
  • In this paper, we apply a three-dimensional input/output algorithm to the random and s-random interleavers and analyze the performance of turbo codes using the resulting interleavers. Since interleaver performance is determined by the minimum distance between adjacent data, we improve interleaver performance by increasing that minimum distance. An interleaver based on the 3-D algorithm stores the input data in a three-dimensional storage space and extracts it at random; compared with the conventional random and s-random interleavers, this increases both the minimum and the average distance between adjacent data. The performance of turbo codes using the 3-D algorithm was analyzed by computer simulation, with the transmission environment set to a Gaussian channel.
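For reference, a conventional s-random interleaver (the baseline the 3-D algorithm improves on) can be sketched as follows; the parameters and the retry strategy are illustrative choices, and the 3-D storage scheme itself is not reproduced.

```python
# s-random interleaver sketch: draw indices at random, accepting a candidate
# only if it differs by more than s from each of the previous s accepted
# indices. This enforces the minimum distance between adjacent data that the
# abstract identifies as the driver of interleaver performance.
import random

def s_random_interleaver(n, s, seed=0):
    rng = random.Random(seed)
    while True:                              # retry until a full draw succeeds
        pool, perm = list(range(n)), []
        ok = True
        for _ in range(n):
            cands = [v for v in pool if all(abs(v - u) > s for u in perm[-s:])]
            if not cands:
                ok = False                   # dead end: restart the draw
                break
            v = rng.choice(cands)
            perm.append(v)
            pool.remove(v)
        if ok:
            return perm

p = s_random_interleaver(32, s=3)
# any two outputs within 3 positions of each other differ by more than 3
print(all(abs(p[i] - p[j]) > 3
          for i in range(32) for j in range(i + 1, min(i + 4, 32))))
```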

전화조사에서 재통화 규칙준수와 응답자 임의선택의 영향 - R&R 울산 사례의 통계적 재분석 - (Effects of Call-back Rules and Random Selection of Respondents: Statistical Re-analysis of R&R’s Ulsan Survey Data.)

  • 허명회;임여주;노규형
    • 응용통계연구
    • /
    • Vol. 16, No. 2
    • /
    • pp.247-259
    • /
    • 2003
  • Telephone surveys in the Korean survey industry have mostly used quota sampling, in which sample sizes are pre-allocated by sex, age, and region. Quota sampling has the advantages of lower cost and shorter fieldwork periods, but it lacks theoretical justification and is difficult to accept academically. For this reason, academia has repeatedly asked the survey industry to conduct telephone surveys based on random sampling. In response, Research & Research conducted a random-sampling telephone survey for the 2002 Ulsan mayoral election forecast. This case study re-analyzes those data in depth to examine how the call-back rules and random selection of respondents under random sampling affect data quality and the final prediction.
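The contrast between quota-style interviewing and random selection with call-backs can be illustrated with a simulation under made-up rates: if easy-to-reach and hard-to-reach voters differ in candidate support, interviewing whoever answers first biases the estimate, while a random sample (with call-backs to reach everyone selected) does not. All numbers below are assumptions for illustration, not figures from the study.

```python
# Toy electorate: half reachable on the first call (60% support), half hard
# to reach (40% support). True support is therefore 50%.
import random

rng = random.Random(1)
N = 100_000
pop = []
for _ in range(N):
    easy = rng.random() < 0.5                       # reachable on the first call
    support = rng.random() < (0.6 if easy else 0.4)
    pop.append((easy, support))

first_answer = [s for easy, s in pop if easy]       # quota-style: no call-backs
callbacks = [s for _, s in rng.sample(pop, 5000)]   # random sample + call-backs
est_first = sum(first_answer) / len(first_answer)
est_srs = sum(callbacks) / len(callbacks)
print(round(est_first, 2), round(est_srs, 2))       # first estimate biased high
```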

Multiple Comparisons With the Best in the Analysis of Covariance

  • Lee, Young-Hoon
    • Journal of the Korean Statistical Society
    • /
    • Vol. 23, No. 1
    • /
    • pp.53-62
    • /
    • 1994
  • When comparisons are made with respect to the unknown best treatment, Hsu (1984, 1985) proposed the so-called multiple comparisons with the best procedures in the analysis of variance model. Applying Hsu's results to the analysis of covariance model, simultaneous confidence intervals for multiple comparisons with the best in a balanced one-way layout with a random covariate are developed and applied to a real data example.

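Hsu-style MCB intervals for mu_i - max_{j != i} mu_j can be sketched as below. The margin d would come from a one-sided multivariate-t quantile times the standard error of the adjusted-mean differences; here both d and the covariate-adjusted means are illustrative placeholders, not values from the paper.

```python
# Multiple comparisons with the best: for each treatment i, an interval for
# mu_i - max_{j != i} mu_j, clipped at 0 in Hsu's constrained form. A
# treatment whose upper limit is 0 is ruled out as the best.
def mcb_intervals(means, d):
    out = []
    for i, m in enumerate(means):
        best_other = max(m2 for j, m2 in enumerate(means) if j != i)
        diff = m - best_other
        lower = min(0.0, diff - d)     # cannot exceed 0 from below
        upper = max(0.0, diff + d)     # cannot fall below 0 from above
        out.append((lower, upper))
    return out

adj_means = [10.2, 12.9, 12.1]         # illustrative covariate-adjusted means
ivs = mcb_intervals(adj_means, d=1.0)  # d = placeholder critical margin
for trt, (lo, hi) in enumerate(ivs, start=1):
    print(trt, (round(lo, 1), round(hi, 1)))
```

With these numbers, treatment 1's upper limit is 0, so it is ruled out as the best, while treatments 2 and 3 remain candidates.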

An Analysis of Panel Count Data from Multiple Random Processes

  • 박유성;김희영
    • Proceedings of the Korean Statistical Society Conference
    • /
    • Proceedings of the 2002 Fall Conference of the Korean Statistical Society
    • /
    • pp.265-272
    • /
    • 2002
  • An integer-valued autoregressive integrated (INARI) model is introduced to eliminate stochastic trend and seasonality from time series of count data. This INARI model extends the previous integer-valued ARMA model. We show that it is stationary and ergodic, and establish asymptotic normality for the conditional least squares estimator. Optimal estimating equations are used to reflect the categorical and serial correlations arising from panel count data, and the variation arising from three random processes for obtaining observations, in the estimation. Under regularity conditions for a martingale sequence, we show asymptotic normality for the estimators from the estimating equations. Using cancer mortality data provided by the U.S. National Center for Health Statistics (NCHS), we apply our results to estimate the probability of cells classified by 4 causes of death and 6 age groups, and to forecast the death count of each cell. We also investigate the impact of the three random processes on estimation.

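The binomial-thinning operator underlying integer-valued AR models such as the INARI model above can be sketched as follows. The recursion is X_t = alpha ∘ X_{t-1} + e_t, where ∘ thins each of the X_{t-1} counts independently with probability alpha; the innovation distribution and all parameter values below are illustrative stand-ins.

```python
# Binomial thinning keeps the series integer-valued, unlike a Gaussian AR(1):
# each unit of the previous count independently "survives" with probability
# alpha, and new counts arrive as innovations.
import random

rng = random.Random(42)

def thin(count, alpha):
    # each of `count` units survives independently with probability alpha
    return sum(rng.random() < alpha for _ in range(count))

def simulate_inar1(n, alpha=0.6, mean_innov=2.0, x0=5):
    xs, x = [x0], x0
    for _ in range(n - 1):
        # innovation: a simple Binomial(20, mean_innov/20) stand-in with the
        # stated mean (a Poisson draw would be the textbook choice)
        e = sum(rng.random() < mean_innov / 20 for _ in range(20))
        x = thin(x, alpha) + e
        xs.append(x)
    return xs

series = simulate_inar1(200)
print(all(isinstance(v, int) and v >= 0 for v in series))  # counts stay integer
```

The stationary mean of this recursion is mean_innov / (1 - alpha), here 5, which is why the simulated series fluctuates around small integer counts.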