• Title/Summary/Keyword: Data matrix


Algorithm of Decoding the Base 256 mode in Two-Dimensional Data Matrix Barcode (이차원 Data Matrix 바코드에서 Base 256 모드의 디코딩 알고리즘)

  • Han, Hee June;Lee, Hyo Chang;Lee, Jong Yun
    • Journal of the Korea Convergence Society
    • /
    • v.4 no.3
    • /
    • pp.27-33
    • /
    • 2013
  • A conventional bar code consists of linear bars and spaces and is called a one-dimensional bar code. In contrast, a two-dimensional bar code represents information as small rectangular or square cells arranged in mosaic- or Braille-like patterns. The two-dimensional bar code is much more efficient than the one-dimensional one because it can store and express large amounts of data in a small space, yet little information has been published so far on decoding the Data Matrix in base 256 mode. According to the ISO international standards, there are four kinds of two-dimensional bar code: QR code, Data Matrix, PDF417, and Maxi code. In this paper we focus on the Data Matrix: the basic concepts of its base 256 mode, how its data are encoded and decoded, and how the symbols are organized. In addition, a Data Matrix can be encoded efficiently in numeric, alphanumeric, and binary modes, and we describe in detail how Data Matrix codes are decoded in these modes.
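As a rough illustration of the base 256 mode discussed above: ISO/IEC 16022 scrambles the base 256 data bytes with a 255-state pseudo-random algorithm, and decoding simply reverses it. The sketch below uses a 1-based position over the data codewords, a simplification of the standard's position counting:

```python
def randomize_255_state(data):
    """Apply the 255-state randomization (the encoding direction)."""
    return [(byte + ((149 * pos) % 255) + 1) % 256
            for pos, byte in enumerate(data, start=1)]


def unrandomize_255_state(codewords):
    """Reverse the 255-state randomization of Data Matrix base 256 data."""
    plain = []
    for position, codeword in enumerate(codewords, start=1):
        pseudo_random = ((149 * position) % 255) + 1
        value = codeword - pseudo_random
        if value < 0:
            value += 256   # wrap back into the byte range
        plain.append(value)
    return plain
```

Encoding followed by decoding is an exact round trip, which is the property a base 256 decoder relies on.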

Predicting Personal Credit Rating with Incomplete Data Sets Using Frequency Matrix technique (Frequency Matrix 기법을 이용한 결측치 자료로부터의 개인신용예측)

  • Bae, Jae-Kwon;Kim, Jin-Hwa;Hwang, Kook-Jae
    • Journal of Information Technology Applications and Management
    • /
    • v.13 no.4
    • /
    • pp.273-290
    • /
    • 2006
  • This study suggests a frequency matrix technique for predicting personal credit ratings more efficiently from incomplete data sets. First, multiple discriminant analysis and logistic regression analysis are tested for predicting personal credit ratings with incomplete data sets, where missing values are filled in by mean imputation and regression imputation. An artificial neural network and the frequency matrix technique are then tested on the same task. A data set of 8,234 customers' personal credit information, collected at Bank A in 2004, is used for the test, and the performance of the frequency matrix technique is compared with that of the other methods. The experimental results show that the frequency matrix technique outperforms all the other models: MDA-mean, Logit-mean, MDA-regression, Logit-regression, and artificial neural networks.
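The mean-imputation baseline mentioned in the abstract can be sketched in a few lines; the feature names and values below are made up for illustration:

```python
import numpy as np

def mean_impute(X):
    """Replace NaN entries column-wise with the column mean (mean imputation)."""
    X = np.asarray(X, dtype=float).copy()
    col_means = np.nanmean(X, axis=0)           # per-feature mean, ignoring NaN
    nan_rows, nan_cols = np.where(np.isnan(X))
    X[nan_rows, nan_cols] = col_means[nan_cols]
    return X

# Hypothetical incomplete credit features: income, debt ratio, age
X = np.array([[50.0, 0.3, 35.0],
              [np.nan, 0.5, 42.0],
              [70.0, np.nan, np.nan]])
X_filled = mean_impute(X)
```

Regression imputation replaces the column mean with a prediction from the observed features; the comparison in the paper is between these simple fill-ins and methods that use the incomplete rows directly.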


Adaptive data hiding scheme based on magic matrix of flexible dimension

  • Wu, Hua;Horng, Ji-Hwei;Chang, Chin-Chen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.9
    • /
    • pp.3348-3364
    • /
    • 2021
  • Magic matrix-based data hiding schemes are used to transmit secret information safely through open communication channels. As magic matrices have developed, higher-dimensional ones have been proposed to improve the security level. However, given limited computing resources and real-time processing requirements, these higher-dimensional magic matrix-based methods lose their advantage. Hence, this paper proposes a data hiding scheme based on a single multi-dimensional flexible magic matrix or a group of them, in which the magic matrix can be expanded to higher dimensions with less computing resource. Furthermore, an adaptive mechanism is proposed to reduce the embedding distortion: for each piece of secret data, the magic matrix yielding the least distortion is chosen for embedding, and a marker bit records the choice. Experimental results confirm that the proposed scheme hides data with high security and better visual quality.
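To make the magic-matrix idea concrete, here is a generic 2D reference-matrix embedding, not the authors' flexible-dimension scheme: a pixel pair indexes a matrix M, and the pair is nudged minimally until M gives the secret base-5 digit. The matrix M[x, y] = (x + 2y) mod 5 is an assumed, standard choice:

```python
import numpy as np

BASE = 5  # each pixel pair hides one base-5 secret digit

def reference_matrix(size=256, base=BASE):
    """A simple 2D reference ('magic') matrix M[x, y] = (x + 2y) mod base."""
    x, y = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
    return (x + 2 * y) % base

def embed_pair(p1, p2, digit, M):
    """Adjust pixel pair (p1, p2) minimally so that M[p1', p2'] == digit."""
    best = None
    for dx in range(-2, 3):          # search a small neighbourhood
        for dy in range(-2, 3):
            q1, q2 = p1 + dx, p2 + dy
            if 0 <= q1 < M.shape[0] and 0 <= q2 < M.shape[1] and M[q1, q2] == digit:
                cost = dx * dx + dy * dy   # squared embedding distortion
                if best is None or cost < best[0]:
                    best = (cost, q1, q2)
    return best[1], best[2]

def extract_pair(p1, p2, M):
    """Extraction is just a table lookup."""
    return int(M[p1, p2])
```

The adaptive mechanism in the paper generalizes this: several candidate matrices are tried and the one with least distortion wins, with a marker bit recording which was used.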

Poisson linear mixed models with ARMA random effects covariance matrix

  • Choi, Jiin;Lee, Keunbaik
    • Journal of the Korean Data and Information Science Society
    • /
    • v.28 no.4
    • /
    • pp.927-936
    • /
    • 2017
  • Poisson linear mixed models are commonly used to analyze longitudinal count data. In these models the random effects covariance matrix explains both within-subject variation and serial correlation of repeated count outcomes. When the random effects covariance matrix is misspecified, the estimates of covariate effects can be biased. Therefore, we propose reasonable and flexible structures for the covariance matrix using the autoregressive and moving average Cholesky decomposition (ARMACD). The ARMACD factors the covariance matrix into generalized autoregressive parameters (GARPs), generalized moving average parameters (GMAPs), and innovation variances (IVs). Positive IVs guarantee the positive-definiteness of the covariance matrix. In this paper, we use the ARMACD to model the random effects covariance matrix in Poisson loglinear mixed models, and we analyze epileptic seizure data using the proposed model.
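The key property noted above, that positive innovation variances guarantee a positive-definite covariance matrix, can be checked numerically. This sketch uses a plain modified-Cholesky-style factorization with made-up parameter values; the authors' ARMACD additionally structures T with moving-average terms:

```python
import numpy as np

def covariance_from_cholesky(garps, log_ivs):
    """Build a covariance matrix from unconstrained parameters.

    T is unit lower-triangular with negated autoregressive parameters
    below the diagonal, D is diagonal with innovation variances
    exp(log_ivs) > 0, and T Sigma T' = D, so Sigma = T^{-1} D T^{-T}.
    """
    n = len(log_ivs)
    T = np.eye(n)
    T[np.tril_indices(n, k=-1)] = -np.asarray(garps)
    D = np.diag(np.exp(log_ivs))       # positive by construction
    T_inv = np.linalg.inv(T)
    return T_inv @ D @ T_inv.T

# Arbitrary unconstrained parameters for a 4x4 covariance matrix
rng = np.random.default_rng(0)
Sigma = covariance_from_cholesky(rng.normal(size=6), rng.normal(size=4))
```

Because the parameters are unconstrained real numbers, any values whatsoever yield a valid (symmetric, positive-definite) covariance matrix, which is what makes the decomposition convenient for estimation.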

Speed-up of the Matrix Computation on the Ridge Regression

  • Lee, Woochan;Kim, Moonseong;Park, Jaeyoung
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.10
    • /
    • pp.3482-3497
    • /
    • 2021
  • Artificial intelligence has emerged as the core of the 4th industrial revolution, making large-scale data processing, such as big data technology and rapid data analysis, inevitable. The most fundamental and universal data interpretation technique is analysis through regression, which is also the basis of machine learning. Ridge regression is a regression technique that decreases sensitivity to unique or outlier observations. The time-consuming part of the matrix computation, however, is essentially the inversion of a matrix, and as the matrix grows the solution method becomes a major challenge. In this paper, a new algorithm is introduced that speeds up calculation of the ridge regression estimator through series expansion and computation recycling, without an inverse matrix or other factorization methods in the calculation process. In addition, the performance of the proposed algorithm is compared with that of the existing algorithm across matrix sizes. Overall, the proposed algorithm demonstrates excellent speed-up with good accuracy.
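One generic way to avoid an explicit inverse, in the spirit of the series-expansion idea described above though not the authors' exact algorithm, is a Neumann series: for A = X'X + λI positive definite and c at least the largest eigenvalue of A, A⁻¹ = (1/c) Σₖ (I − A/c)ᵏ, and each term recycles the previous one:

```python
import numpy as np

def ridge_neumann(X, y, lam, n_terms=300):
    """Approximate the ridge estimator (X'X + lam*I)^{-1} X'y by a
    truncated Neumann series, avoiding an explicit matrix inverse."""
    A = X.T @ X + lam * np.eye(X.shape[1])
    b = X.T @ y
    c = np.linalg.norm(A, 2)        # spectral norm = largest eigenvalue of A
    B = np.eye(A.shape[0]) - A / c  # eigenvalues in [0, 1): the series converges
    term = b / c                    # k = 0 term of (1/c) * sum_k B^k b
    beta = term.copy()
    for _ in range(n_terms):        # each iteration recycles the previous term
        term = B @ term
        beta = beta + term
    return beta

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))
y = rng.normal(size=50)
beta_series = ridge_neumann(X, y, lam=0.5)
beta_exact = np.linalg.solve(X.T @ X + 0.5 * np.eye(5), X.T @ y)
```

Each iteration costs only a matrix-vector product, so for large matrices the truncated series can beat direct inversion when moderate accuracy suffices.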

SINGULARITY OF A COEFFICIENT MATRIX

  • Lee, Joon-Sook
    • Communications of the Korean Mathematical Society
    • /
    • v.10 no.4
    • /
    • pp.849-854
    • /
    • 1995
  • The interpolation of scattered data with radial basis functions is known for its good fit. But as the data set grows large, the coefficient matrix becomes nearly singular. We introduce different knots and nodes to improve the condition number of the coefficient matrix. The singularity of the new coefficient matrix is investigated here.
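The ill-conditioning described above is easy to reproduce: build the RBF interpolation matrix A[i, j] = φ(‖xᵢ − xⱼ‖) on increasingly dense data and watch the condition number grow. The Gaussian basis and equispaced nodes here are illustrative assumptions, not the paper's choices:

```python
import numpy as np

def rbf_matrix(nodes, epsilon=1.0):
    """Gaussian RBF coefficient matrix A[i, j] = exp(-(eps * |xi - xj|)^2)."""
    nodes = np.asarray(nodes, dtype=float)
    r = np.abs(nodes[:, None] - nodes[None, :])   # pairwise distances
    return np.exp(-(epsilon * r) ** 2)

# Denser data -> nearly identical rows -> near-singular matrix
for n in (5, 20, 40):
    A = rbf_matrix(np.linspace(0.0, 1.0, n))
    print(n, np.linalg.cond(A))
```

Separating the knots (basis centers) from the data nodes, as the paper does, is one way to keep the rows distinct and the condition number under control.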


Enhanced data-driven simulation of non-stationary winds using DPOD based coherence matrix decomposition

  • Liyuan Cao;Jiahao Lu;Chunxiang Li
    • Wind and Structures
    • /
    • v.39 no.2
    • /
    • pp.125-140
    • /
    • 2024
  • The simulation of non-stationary wind velocity is particularly crucial for the wind-resistant design of slender structures. Recently, data-driven simulation methods have received much attention for their straightforwardness, but they face efficiency problems as the number of simulation points increases. Against this background, this paper proposes a time-varying coherence matrix decomposition method based on Diagonal Proper Orthogonal Decomposition (DPOD) interpolation for the data-driven simulation of non-stationary wind velocity based on the S-transform (ST). Its core idea is to decompose the coherence matrix instead of the measured time-frequency power spectrum matrix obtained from the ST. Because the decomposition of the time-varying coherence matrix is relatively smooth, DPOD interpolation can be introduced to accelerate it, and the DPOD interpolation technique is extended to simulation based on measured wind velocity. Numerical experiments show that the reconstruction results of coherence matrix interpolation are consistent with the target values, and that the interpolation is more efficient than both the coherence matrix time-frequency interpolation method and the coherence matrix POD interpolation method. Compared with existing data-driven simulation methods, the proposed method addresses the efficiency issue of the number of Cholesky decompositions growing with the number of simulation points, significantly enhancing the efficiency of simulating multivariate non-stationary wind velocities, while the simulated data preserve the time-frequency characteristics of the measured wind velocity well.
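The Cholesky step the method accelerates can be illustrated at a single frequency: form the cross-spectral matrix from auto-spectra and a coherence matrix, then Cholesky-factorize it. The Davenport-type exponential coherence and all numbers here are assumed examples, not the paper's data:

```python
import numpy as np

def cross_spectral_matrix(auto_spectra, positions, freq, C=10.0, U=20.0):
    """S[i, j] = sqrt(S_i * S_j) * coh(i, j), with a Davenport-type
    coherence coh = exp(-C * f * |x_i - x_j| / U)."""
    s = np.asarray(auto_spectra, dtype=float)
    x = np.asarray(positions, dtype=float)
    coh = np.exp(-C * freq * np.abs(x[:, None] - x[None, :]) / U)
    return np.sqrt(np.outer(s, s)) * coh

# Three simulation points along a line, at one frequency
S = cross_spectral_matrix([1.0, 0.8, 1.2], [0.0, 5.0, 10.0], freq=0.5)
H = np.linalg.cholesky(S)   # lower-triangular factor used in the simulation
```

In a non-stationary simulation this factorization is repeated over every time-frequency cell, which is exactly the cost that decomposing the smoother coherence matrix and interpolating with DPOD is designed to cut.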

Bayesian modeling of random effects precision/covariance matrix in cumulative logit random effects models

  • Kim, Jiyeong;Sohn, Insuk;Lee, Keunbaik
    • Communications for Statistical Applications and Methods
    • /
    • v.24 no.1
    • /
    • pp.81-96
    • /
    • 2017
  • Cumulative logit random effects models are typically used to analyze longitudinal ordinal data. The random effects covariance matrix is used in these models to capture both subject-specific and time variations. The covariance matrix may also be heterogeneous; however, its structure is commonly assumed to be homoscedastic and restricted because the matrix is high-dimensional and must be positive definite. To satisfy these restrictions, two Cholesky decomposition methods were proposed in linear (mixed) models for the random effects precision matrix and the random effects covariance matrix, respectively: the modified Cholesky and moving average Cholesky decompositions. In this paper, we use these two methods to model the random effects precision matrix and the random effects covariance matrix in cumulative logit random effects models for longitudinal ordinal data. The methods are illustrated with a lung cancer data set.

Negative binomial loglinear mixed models with general random effects covariance matrix

  • Sung, Youkyung;Lee, Keunbaik
    • Communications for Statistical Applications and Methods
    • /
    • v.25 no.1
    • /
    • pp.61-70
    • /
    • 2018
  • Modeling of the random effects covariance matrix in generalized linear mixed models (GLMMs) is an issue in analysis of longitudinal categorical data because the covariance matrix can be high-dimensional and its estimate must satisfy positive-definiteness. To satisfy these constraints, we consider the autoregressive and moving average Cholesky decomposition (ARMACD) to model the covariance matrix. The ARMACD creates a more flexible decomposition of the covariance matrix that provides generalized autoregressive parameters, generalized moving average parameters, and innovation variances. In this paper, we analyze longitudinal count data with overdispersion using GLMMs. We propose negative binomial loglinear mixed models to analyze longitudinal count data and we also present modeling of the random effects covariance matrix using the ARMACD. Epilepsy data are analyzed using our proposed model.

SMOOTH SINGULAR VALUE THRESHOLDING ALGORITHM FOR LOW-RANK MATRIX COMPLETION PROBLEM

  • Geunseop Lee
    • Journal of the Korean Mathematical Society
    • /
    • v.61 no.3
    • /
    • pp.427-444
    • /
    • 2024
  • The matrix completion problem is to predict the missing entries of a data matrix using a low-rank approximation of the observed entries. Typical approaches to the matrix completion problem rely on thresholding the singular values of the data matrix. However, these approaches have some limitations: in particular, a discontinuity is present near the thresholding value, and the thresholding value must be selected manually. To overcome these difficulties, we propose a shrinkage-and-thresholding function that thresholds the singular values smoothly, yielding a more accurate and robust estimate of the data matrix. Furthermore, the proposed function is differentiable, so the thresholding values can be calculated adaptively during the iterations using Stein's unbiased risk estimate. Experimental results demonstrate that the proposed algorithm yields a more accurate estimation with faster execution than other matrix completion algorithms on image inpainting problems.
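For contrast with the smooth thresholding proposed above, the standard soft-thresholding iteration (in the SVT/SoftImpute family) looks as follows; note the threshold tau is fixed by hand, which is exactly the limitation the paper addresses:

```python
import numpy as np

def svt_complete(M, mask, tau=0.5, step=1.0, n_iter=500):
    """Low-rank matrix completion by singular value soft-thresholding.

    M    : observed matrix (entries outside `mask` are ignored)
    mask : boolean array, True where M is observed
    tau  : soft threshold on the singular values (chosen by hand here)
    """
    X = np.zeros_like(M, dtype=float)
    for _ in range(n_iter):
        # Gradient step toward agreement on the observed entries
        X = X + step * mask * (M - X)
        # Shrink the singular values: s -> max(s - tau, 0)
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt
    return X
```

On a low-rank matrix with most entries observed, the iteration recovers the missing entries well, but its result is discontinuous in tau near the cut, the gap a smooth, differentiable shrinkage function closes.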