Title/Summary/Keyword: consistent algorithms

ASSVD: Adaptive Sparse Singular Value Decomposition for High Dimensional Matrices

  • Ding, Xiucai;Chen, Xianyi;Zou, Mengling;Zhang, Guangxing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.6 / pp.2634-2648 / 2020
  • In this paper, an adaptive sparse singular value decomposition (ASSVD) algorithm is proposed to estimate the signal matrix when only one data matrix is observed under high dimensional white noise. We assume that the signal matrix is low-rank and has sparse singular vectors, i.e., it is simultaneously low-rank and sparse. It is a structured matrix, since the non-zero entries are confined to small blocks. The proposed algorithm estimates the singular values and vectors separately by exploiting the structure of the singular vectors, using recent developments in Random Matrix Theory known as the anisotropic Marchenko-Pastur law. We then prove that when the signal is strong, in the sense that the signal-to-noise ratio is above some threshold, our estimator is consistent and outperforms many state-of-the-art algorithms. Moreover, our estimator is adaptive to the data set and does not require the variance of the noise to be known or estimated. Numerical simulations indicate that ASSVD still works well when the signal matrix is not very sparse.
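
To make the simultaneously low-rank and sparse model concrete, here is a minimal Python sketch of a hard-thresholded SVD estimator. It is not ASSVD itself: the fixed rank and threshold are hypothetical placeholders for the adaptive, Marchenko-Pastur-based choices the paper develops.

```python
# Simplified low-rank + sparse-singular-vector estimate (illustrative only).
import numpy as np

def sparse_svd_estimate(Y, rank=2, threshold=0.05):
    """Estimate a low-rank signal matrix with sparse singular vectors."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank, :]
    # Hard-threshold small singular-vector entries to enforce sparsity.
    U[np.abs(U) < threshold] = 0.0
    Vt[np.abs(Vt) < threshold] = 0.0
    # Re-normalize the surviving singular vectors.
    U /= np.linalg.norm(U, axis=0, keepdims=True) + 1e-12
    Vt /= np.linalg.norm(Vt, axis=1, keepdims=True) + 1e-12
    return U @ np.diag(s) @ Vt
```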

Fast Intra Mode Decision for H.264/AVC based on Directional Information (방향 정보를 이용한 H.264/AVC의 고속 인트라 모드 결정)

  • Lee, Kyung-Hee;Kim, Jong-Gu;Suh, Jae-Won
    • The Journal of the Korea Contents Association / v.9 no.3 / pp.20-27 / 2009
  • The H.264/AVC video coding standard adopts a rate-distortion optimization technique to select the best coding mode with multiple reference frames for each macroblock, which yields higher coding efficiency than previous video coding standards but drastically increases the computational complexity. Therefore, many fast mode decision algorithms have been proposed to reduce the computational complexity. Along these lines, we propose a fast intra mode decision algorithm based on the directional information of I4MB blocks. The proposed algorithm achieves consistent time savings of about 15% for IPPP sequences and 44% for all-I-frame sequences, with negligible loss in PSNR and a small increase in bit rate compared with JM11.0.
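
The general flavor of directional fast intra decision can be sketched as follows. This is an illustration, not the authors' algorithm, and the angle-to-mode mapping is a hypothetical simplification.

```python
# Illustrative sketch: use the dominant gradient of a 4x4 luma block to
# shortlist I4MB intra prediction modes instead of evaluating all nine
# with full rate-distortion optimization.
import numpy as np

def shortlist_i4mb_modes(block):
    """block: 4x4 luma samples; returns candidate H.264 intra-4x4 modes."""
    gy, gx = np.gradient(block.astype(float))
    grad_angle = np.degrees(np.arctan2(gy.sum(), gx.sum())) % 180.0
    # The edge runs perpendicular to the gradient direction.
    if grad_angle < 22.5 or grad_angle >= 157.5:   # gradient ~horizontal
        candidates = [0, 5, 7]                     # vertical-ish modes
    elif 67.5 <= grad_angle < 112.5:               # gradient ~vertical
        candidates = [1, 6, 8]                     # horizontal-ish modes
    else:
        candidates = [3, 4]                        # diagonal modes
    return candidates + [2]                        # always keep DC
```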

Novel Method for Face Recognition using Laplacian of Gaussian Mask with Local Contour Pattern

  • Jeon, Tae-jun;Jang, Kyeong-uk;Lee, Seung-ho
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.11 / pp.5605-5623 / 2016
  • We propose a face recognition method that utilizes the LCP face descriptor. The proposed method applies a LoG mask to extract a face contour response and employs the LCP algorithm to produce a binary pattern representation that ensures high recognition performance even under changes in illumination, noise, and aging. The LCP algorithm achieves excellent noise reduction and efficiently removes unnecessary information from the face by extracting a face contour response using the LoG mask, whose behavior is similar to that of the human eye. Most reported algorithms search the full face contour response information; in contrast, our LCP algorithm expresses the major facial information with only 8 bits by applying a threshold to the search area. Therefore, compared to previous approaches, the LCP algorithm maintains consistent accuracy under varying circumstances and produces a high face recognition rate with a relatively small feature vector. The test results indicate that the LCP algorithm achieves a face recognition rate higher than human visual recognition capability and outperforms existing methods.
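
A minimal sketch of the LoG-then-8-bit-pattern pipeline is given below. It is not the authors' exact LCP descriptor: the neighborhood ordering and the >=-comparison used as the threshold are illustrative choices.

```python
# Sketch: Laplacian-of-Gaussian contour response, then an LBP-style
# 8-bit code per pixel from comparisons against its 8 neighbors.
import numpy as np
from scipy.ndimage import gaussian_laplace

def lcp_like_descriptor(face, sigma=1.5):
    resp = gaussian_laplace(face.astype(float), sigma=sigma)
    h, w = resp.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = resp[1:-1, 1:-1]
    # Clockwise neighbor offsets starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = resp[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbor >= center).astype(np.uint8) << bit
    return codes  # one 8-bit pattern per interior pixel
```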

Noisy label based discriminative least squares regression and its kernel extension for object identification

  • Liu, Zhonghua;Liu, Gang;Pu, Jiexin;Liu, Shigang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.5 / pp.2523-2538 / 2017
  • In most of the existing literature, the definition of the class label has the following characteristics. First, the class labels of samples from the same object have an absolutely fixed value. Second, the difference between class labels of samples from different objects should be maximized. However, the appearance of a face varies greatly with illumination, pose, and expression, so the previous definition of the class label is not entirely reasonable. Inspired by the discriminative least squares regression algorithm (DLSR), a noisy label based discriminative least squares regression algorithm (NLDLSR) is presented in this paper. In our algorithm, the difference between the class labels of samples from different objects is still maximized. Meanwhile, the class labels of different samples from the same object are allowed to differ slightly, which is consistent with the fact that different samples from the same object exhibit some differences. In addition, the proposed NLDLSR is extended to the kernel space, yielding a novel kernel noisy label based discriminative least squares regression algorithm (KNLDLSR). A large number of experiments show that our proposed algorithms achieve very good performance.
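
The label-relaxation idea that NLDLSR builds on can be sketched as follows. This follows the epsilon-dragging scheme of DLSR in simplified form (no bias term, fixed iteration count), not the paper's exact optimization.

```python
# Simplified DLSR-style regression with relaxed label targets: same-class
# samples need not map to identical targets, only drift in a consistent
# direction by a learned nonnegative slack.
import numpy as np

def relaxed_lsr(X, Y, lam=0.1, iters=10):
    """X: n x d features, Y: n x c one-hot (0/1) label matrix."""
    B = np.where(Y > 0, 1.0, -1.0)   # dragging directions per entry
    M = np.zeros_like(Y)             # nonnegative slack matrix
    d = X.shape[1]
    W = None
    for _ in range(iters):
        T = Y + B * M                # relaxed regression targets
        # Ridge-regularized least squares solve for the projection W.
        W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ T)
        M = np.maximum(B * (X @ W - Y), 0.0)  # update the slack
    return W
```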

Probabilistic penalized principal component analysis

  • Park, Chongsun;Wang, Morgan C.;Mo, Eun Bi
    • Communications for Statistical Applications and Methods / v.24 no.2 / pp.143-154 / 2017
  • A variable selection method based on probabilistic principal component analysis (PCA) using a penalized likelihood method is proposed. The proposed method is a two-step variable reduction method. The first step uses the probabilistic principal component idea to identify principal components, and a penalty function is used to identify the important variables in each component. We then build a model in the original data space instead of the rotated space of latent variables (principal components), because the proposed method achieves dimension reduction by identifying important observed variables. Consequently, the proposed method is of more practical use. The proposed estimators perform like the oracle procedure and are root-n consistent with a proper choice of regularization parameters. The proposed method can be successfully applied to high-dimensional PCA problems with a relatively large portion of irrelevant variables included in the data set. It is straightforward to extend our likelihood method to handle problems with missing observations using EM algorithms, so it can also be applied effectively when some data vectors have one or more values missing at random.
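
As a rough illustration of the two-step idea (extract components, then keep only variables with non-negligible loadings), the sketch below uses ordinary PCA with a hard loading cutoff; the cutoff value is a hypothetical stand-in for the paper's penalized-likelihood machinery.

```python
# Two-step variable screening via PCA loadings (illustrative only).
import numpy as np
from sklearn.decomposition import PCA

def select_variables(X, n_components=3, cutoff=0.1):
    """Return indices of variables with large loadings on any component."""
    pca = PCA(n_components=n_components).fit(X)
    # components_ has shape (n_components, n_features): rows are loadings.
    important = np.any(np.abs(pca.components_) > cutoff, axis=0)
    return np.where(important)[0]   # model is then built on these variables
```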

Constrained Sparse Concept Coding algorithm with application to image representation

  • Shu, Zhenqiu;Zhao, Chunxia;Huang, Pu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.9 / pp.3211-3230 / 2014
  • Recently, sparse coding has achieved remarkable success in image representation tasks. In practice, clustering performance can be significantly improved if limited label information is incorporated into sparse coding. To this end, a novel semi-supervised algorithm, called constrained sparse concept coding (CSCC), is proposed in this paper for image representation. CSCC incorporates the limited label information into graph embedding as additional hard constraints, and hence obtains embedding results that are consistent with both the label information and the manifold structure of the original data. Therefore, CSCC provides a sparse representation that explicitly exploits prior knowledge of the data to improve discriminative power in clustering. Besides, a kernelized version of CSCC, namely kernel constrained sparse concept coding (KCSCC), is developed to deal with nonlinear data, leading to more effective clustering. Experimental evaluations on the MNIST, PIE, and Yale image sets show the effectiveness of the proposed algorithms.
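
The hard-constraint device can be illustrated in isolation. The sketch below, a simplification that omits the sparse coding step, builds the standard label-constraint matrix used in constrained embeddings: labeled samples of the same class are forced to share one embedding row.

```python
# Label-constraint matrix for semi-supervised embeddings (illustrative).
import numpy as np

def constraint_matrix(labels, n_samples):
    """labels: dict {sample_index: class_id}. Labeled samples share one
    row per class; each unlabeled sample gets its own free row."""
    classes = sorted(set(labels.values()))
    unlabeled = [i for i in range(n_samples) if i not in labels]
    A = np.zeros((n_samples, len(classes) + len(unlabeled)))
    for i in range(n_samples):
        if i in labels:
            A[i, classes.index(labels[i])] = 1.0
        else:
            A[i, len(classes) + unlabeled.index(i)] = 1.0
    # Factoring the embedding as F = A @ Z makes same-class labeled
    # samples share identical embeddings by construction.
    return A
```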

Denoising solar SDO/HMI magnetograms using Deep Learning

  • Park, Eunsu;Moon, Yong-Jae;Lim, Daye;Lee, Harim
    • The Bulletin of The Korean Astronomical Society / v.44 no.2 / pp.43.1-43.1 / 2019
  • In this study, we apply a deep learning model to denoising solar magnetograms. For this, we design a model based on a conditional generative adversarial network, one of the deep learning algorithms, for image-to-image translation from a single magnetogram to a denoised magnetogram. For the single magnetograms, we use SDO/HMI line-of-sight magnetograms at the center of the solar disk. For the denoised magnetograms, we make 21-frame-stacked magnetograms at the center of the solar disk, taking solar rotation into account. We train the model using 7004 pairs of single and stacked magnetograms from January to October 2013 and test it using 1432 pairs from November to December 2013. Our results are as follows. First, our model successfully denoises SDO/HMI magnetograms, and the denoised magnetograms are similar to the stacked magnetograms. Second, the average pixel-to-pixel correlation coefficient between the denoised magnetograms from our model and the stacked magnetograms is larger than 0.93. Third, the average noise level of the denoised magnetograms is greatly reduced from 10.29 G to 3.89 G, which is comparable to or smaller than that of the stacked magnetograms (4.11 G). Our results can be applied to many scientific fields in which many frames are integrated to improve the signal-to-noise ratio.
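
The target construction and the evaluation metric can be sketched as follows, assuming the 21 frames are already derotated and co-aligned (the paper's handling of solar rotation is more involved). Averaging N frames with independent noise ideally reduces the noise level by a factor of sqrt(N).

```python
# Frame stacking and pixel-to-pixel correlation (illustrative sketch).
import numpy as np

def stack_frames(frames):
    """frames: sequence of co-aligned 2-D magnetogram arrays."""
    return np.mean(np.stack(frames, axis=0), axis=0)

def pixel_correlation(a, b):
    """Pixel-to-pixel Pearson correlation between two magnetograms."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])
```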

A detailed analysis of nearby young stellar moving groups

  • Lee, Jinhee
    • The Bulletin of The Korean Astronomical Society / v.44 no.2 / pp.63.3-63.3 / 2019
  • Nearby young moving groups (NYMGs hereafter) are gravitationally unbound, loose young stellar associations located within 100 pc of the Sun. Since NYMGs are crucial laboratories for studying low-mass stars and planets, intensive searches for NYMG members have been performed. Various strategies and methods have been applied to identify NYMG members, so the reliability of claimed members is not uniform and a careful membership re-assessment is required. In this study, I developed an NYMG membership probability calculation tool based on Bayesian inference (Bayesian Assessment of Moving Groups: BAMG). For the development of the BAMG tool, I constructed ellipsoidal models for nine NYMGs via an iterative and self-consistent process. Using BAMG, memberships of members claimed in the literature (N~2000) were evaluated, and 35 per cent of them were confirmed as bona fide members of NYMGs. Motivated by the deficiency of low-mass members that appears in the mass function built from these bona fide members, low-mass members were identified from Gaia DR2: about 2000 new M dwarf and brown dwarf candidate members. Memberships of ~70 members with RVs from Gaia were confirmed, and ~20 additional members were confirmed via spectroscopic observation. Without relying on previous knowledge of the existence of the nine NYMGs, unsupervised machine learning analyses were applied to the NYMG members. The K-means and Agglomerative Clustering algorithms yield similar groupings. As a result, six previously known groups (TWA, beta-Pic, Carina, Argus, AB Doradus, and Volans-Carina) were rediscovered. The three other known groups are recognized as well; however, they are combined into two new separate groups (ThOr+Columba and TucHor+Columba).
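
As a hedged illustration of the unsupervised step, the sketch below clusters members in a 6-D kinematic space with K-means; the choice of XYZ+UVW features, the standardization, and k = 9 are illustrative assumptions, not the study's exact setup.

```python
# K-means grouping of moving-group members in 6-D kinematic space.
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def cluster_members(xyzuvw, n_groups=9):
    """xyzuvw: (n_stars, 6) array of Galactic positions (pc) and
    velocities (km/s); returns a group label per star."""
    X = StandardScaler().fit_transform(xyzuvw)  # put pc and km/s on one scale
    km = KMeans(n_clusters=n_groups, n_init=10, random_state=0)
    return km.fit_predict(X)
```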

A Comparative Performance Analysis of Segmentation Models for Lumbar Key-points Extraction (요추 특징점 추출을 위한 영역 분할 모델의 성능 비교 분석)

  • Seunghee Yoo;Minho Choi;Jun-Su Jang
    • Journal of Biomedical Engineering Research / v.44 no.5 / pp.354-361 / 2023
  • Most spinal diseases are diagnosed based on the subjective judgment of a specialist, so numerous studies have sought objectivity by automating the diagnostic process using deep learning. In this paper, we propose a method that combines segmentation and feature extraction, two techniques frequently used for diagnosing spinal diseases. Four models, U-Net, U-Net++, DeepLabv3+, and M-Net, were trained and compared using 1000 X-ray images, and key-points were derived using the Douglas-Peucker algorithm. For evaluation, the Dice Similarity Coefficient (DSC), Intersection over Union (IoU), precision, recall, and area under the precision-recall curve were used, and U-Net++ showed the best performance on all metrics, with an average DSC of 0.9724. For the average Euclidean distance between estimated key-points and the ground truth, U-Net was the best, followed by U-Net++; however, the difference in average distance was about 0.1 pixels, which is not significant. The results suggest that key-points can be extracted from segmentation results and used to diagnose various spinal diseases, including spondylolisthesis, with consistent criteria.
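
The segmentation-to-key-points step can be sketched with OpenCV's Douglas-Peucker implementation (cv2.approxPolyDP); the epsilon fraction below is a hypothetical tuning choice, not the paper's value.

```python
# Extract key-points from a binary segmentation mask by simplifying the
# largest contour with the Douglas-Peucker algorithm.
import cv2
import numpy as np

def mask_to_keypoints(mask, eps_frac=0.01):
    """mask: binary uint8 vertebra mask; returns (x, y) corner points."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    eps = eps_frac * cv2.arcLength(largest, True)   # tolerance vs. perimeter
    approx = cv2.approxPolyDP(largest, eps, True)
    return approx.reshape(-1, 2)
```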

A Study on Character Consistency Generated in [Midjourney V6] Technology

  • Xi Chen;Jeanhun Chung
    • International journal of advanced smart convergence / v.13 no.2 / pp.142-147 / 2024
  • The emergence of programs like Midjourney, particularly known for its text-to-image capability, has significantly impacted design and creative industries. Midjourney continually updates its database and algorithms to enhance user experience, with a focus on character consistency. This paper's examination of the latest V6 version of Midjourney reveals notable advancements in its characteristics and design principles, especially in the realm of character generation. By comparing V6 with its predecessors, this study underscores the significant strides made in ensuring consistent character portrayal across different plots and timelines. Such improvements in AI-driven character consistency are pivotal for storytelling. They ensure coherent and reliable character representation, which is essential for narrative clarity, emotional resonance, and overall effectiveness. This coherence supports a more immersive and engaging storytelling experience, fostering deeper audience connection and enhancing creative expression. The findings of this study encourage further exploration of Midjourney's capabilities for artistic innovation. By leveraging its advanced character consistency, creators can push the boundaries of storytelling, leading to new and exciting developments in the fusion of technology and art.