• Title/Summary/Keyword: Data Matrix

Search results: 2,907 items (processing time: 0.029 sec)

가중치가 적용된 공분산을 이용한 2D-LDA 기반의 얼굴인식 (Improved Face Recognition based on 2D-LDA using Weighted Covariance Scatter)

  • 이석진;오치민;이칠우
    • 한국멀티미디어학회논문지 / Vol. 17, No. 12 / pp.1446-1452 / 2014
  • Conventional LDA finds a transformation matrix that maximizes the distance between classes, so each image must first be converted into a one-dimensional training vector. In 2D-LDA, by contrast, the two-dimensional image itself can be used directly as the training matrix, which preserves the spatial information of the image and improves classification performance by about 20% over LDA. However, 2D-LDA computes its transformation matrix with the same scheme as LDA, so both methods suffer from the heteroscedastic problem: because the covariance matrix is built only from the correlations in the training data, without reference to the distances between classes, classification cannot exploit the spatial distances between class clusters. In this paper, we propose a new way to build the training matrix of 2D-LDA by borrowing the WPS-LDA idea: the reciprocal of the distance between classes is computed and applied as a weight to the between-class scatter matrix. Experimental results show that the discriminating power of the proposed 2D-LDA with weighted between-class scatter improves by up to 2% over the original 2D-LDA. The method performs particularly well when two classes lie very close together and the dimension of the projection axis is low.
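
The weighting idea in the entry above can be illustrated with a short sketch. The following is a minimal numpy example, assuming the weight for each class pair is the reciprocal of the Frobenius distance between the pair's mean images; the function name and the exact form of the weight are illustrative, not the authors' precise formulation.

```python
import numpy as np

def weighted_between_class_scatter(class_means):
    """Between-class scatter for 2D-LDA in which each class pair is weighted by the
    reciprocal of the distance between its mean images (WPS-style weighting).
    class_means: list of (h, w) per-class mean images."""
    k = len(class_means)
    w = class_means[0].shape[1]
    S_b = np.zeros((w, w))
    for i in range(k):
        for j in range(i + 1, k):
            diff = class_means[i] - class_means[j]   # (h, w) difference of mean images
            dist = np.linalg.norm(diff) + 1e-12      # Frobenius distance between the two class means
            S_b += (diff.T @ diff) / dist            # closer class pairs receive larger weight
    return S_b
```

The projection axes would then be taken as the leading eigenvectors of this weighted between-class scatter (whitened by the within-class scatter), as in ordinary 2D-LDA.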

Dual graph-regularized Constrained Nonnegative Matrix Factorization for Image Clustering

  • Sun, Jing;Cai, Xibiao;Sun, Fuming;Hong, Richang
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 11, No. 5 / pp.2607-2627 / 2017
  • Nonnegative matrix factorization (NMF) has received considerable attention because it reduces high-dimensional data effectively and produces a parts-based image representation. Most existing NMF variants build on the assumption that the observed data lie on a nonlinear low-dimensional manifold. Recent results, however, show that not only the observed data but also the features lie on low-dimensional manifolds. In addition, a small amount of hard prior label information is often available and can help uncover the intrinsic geometric and discriminative structure of the data space. Motivated by these two observations, we propose a novel algorithm, Dual graph-regularized Constrained Nonnegative Matrix Factorization (DCNMF), to enhance the effectiveness of image representation. The underlying idea is to consider the geometric structures of the data manifold and the feature manifold simultaneously while also mining the information carried by a few labeled examples. These schemes improve the quality of the image representation and thus the effectiveness of image classification. Extensive experiments on common benchmarks demonstrate that DCNMF outperforms state-of-the-art methods in image classification.
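
For context, the multiplicative updates of a single-graph-regularized NMF, the building block that DCNMF extends with a feature graph and hard label constraints, can be sketched as follows. The parameter names and the plain GNMF-style update rules are assumptions for illustration, not the exact DCNMF updates.

```python
import numpy as np

def graph_regularized_nmf(X, A, rank, lam=0.1, iters=200, seed=0):
    """Simplified graph-regularized NMF: X ~ U @ V.T with a smoothness penalty on V
    defined by the sample affinity matrix A (data-graph term only).
    X: (m, n) nonnegative data, columns are samples; A: (n, n) nonnegative affinities."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, rank))
    V = rng.random((n, rank))
    D = np.diag(A.sum(axis=1))          # degree matrix of the sample graph (L = D - A)
    eps = 1e-10
    for _ in range(iters):
        U *= (X @ V) / (U @ (V.T @ V) + eps)                                     # basis update
        V *= (X.T @ U + lam * (A @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)   # graph-smoothed coefficients
    return U, V
```

DCNMF would add a symmetric regularization term for the feature graph acting on U and constrain the rows of V that correspond to labeled samples.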

콜레스테롤 자료에 대한 적정 공분산행렬 형태 산출에 관한 통계적 분석 (A statistical analysis on the selection of the optimal covariance matrix pattern for the cholesterol data)

  • 조진남;백재욱
    • Journal of the Korean Data and Information Science Society / Vol. 21, No. 6 / pp.1263-1270 / 2010
  • Sixty patients were divided into three groups of twenty, each group received a different diet, and repeated measurements of cholesterol levels were then collected at one-week intervals over five weeks. Goodness-of-fit and significance tests on these data showed that the homoscedastic Toeplitz structure was the most suitable covariance model among the various covariance matrix patterns considered. Under this model, the correlations between time points ranged from 0.64 to 0.78, indicating generally strong correlation, and the significance tests on the model parameters showed that the time effect was highly significant, whereas the treatment effect and the treatment-by-time interaction were not.
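
As an illustration of the homogeneous (equal-variance) Toeplitz pattern selected in the entry above, the sketch below builds a Toeplitz covariance estimate from repeated-measures data and computes an AIC for model comparison. This is a rough moment-based stand-in, not the REML mixed-model fit such an analysis would normally use, and the function names are illustrative.

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_covariance(y):
    """Homogeneous Toeplitz covariance for repeated measures: one pooled variance
    and one common correlation per lag.  y: (n_subjects, n_times) measurements."""
    yc = y - y.mean(axis=0)                                  # center each time point
    n, t = yc.shape
    var = yc.var()                                           # pooled (equal) variance
    lag_corr = [np.mean([np.corrcoef(yc[:, i], yc[:, i + k])[0, 1]
                         for i in range(t - k)]) for k in range(t)]
    return var * toeplitz(lag_corr)                          # (t, t) banded-correlation covariance

def gaussian_aic(y, cov, n_params):
    """AIC of a multivariate-normal fit with the given covariance (means profiled out)."""
    yc = y - y.mean(axis=0)
    n, t = yc.shape
    sign, logdet = np.linalg.slogdet(cov)
    quad = np.einsum('ij,jk,ik->', yc, np.linalg.inv(cov), yc)
    loglik = -0.5 * (n * (t * np.log(2 * np.pi) + logdet) + quad)
    return 2 * n_params - 2 * loglik
```

For a design with five weekly measurements, the homogeneous Toeplitz model uses five covariance parameters (one variance and four lag correlations), and its AIC would be compared against those of the other candidate covariance patterns.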

GPU-Based ECC Decode Unit for Efficient Massive Data Reception Acceleration

  • Kwon, Jisu;Seok, Moon Gi;Park, Daejin
    • Journal of Information Processing Systems / Vol. 16, No. 6 / pp.1359-1371 / 2020
  • When transmitting and receiving large amounts of data, reliable communication is crucial both for the normal operation of a device and to prevent abnormal behavior caused by errors. This paper therefore assumes an environment in which massive data is received sequentially and protected by an error correction code (ECC) that can detect and correct errors on its own. Because an embedded system has limited resources, such as a low-performance processor and a small memory, applications must run efficiently. We propose accelerating ECC decoding with the graphics processing unit (GPU) built into the embedded system when a large amount of data is received. In the matrix-vector multiplication underlying the Hamming code used for the ECC operation, the matrix is stored in compressed sparse row (CSR) format and a sparse matrix-vector product is computed. The multiplication is performed in a GPU kernel, and the Hamming code computation is also accelerated so that the ECC operation runs in parallel. The proposed technique is implemented in CUDA on a GPU-embedded target board, the NVIDIA Jetson TX2, and its execution time is compared with that of the CPU.
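
To make the Hamming-code syndrome computation concrete, here is a small CPU-side sketch using scipy's CSR format. The paper's actual implementation runs the sparse matrix-vector product in a CUDA kernel on the Jetson TX2; the specific (7,4) code and function name below are assumptions for illustration.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Parity-check matrix H of a (7,4) Hamming code stored in CSR format.
# The syndrome s = H @ r (mod 2) identifies the position of a single-bit error.
H = csr_matrix(np.array([[1, 0, 1, 0, 1, 0, 1],
                         [0, 1, 1, 0, 0, 1, 1],
                         [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8))

def decode_hamming74(received):
    """Single-error correction via a sparse matrix-vector product (CPU stand-in for the GPU kernel)."""
    syndrome = H.dot(received) % 2                       # CSR SpMV, reduced modulo 2
    pos = int(syndrome @ np.array([1, 2, 4]))            # syndrome read as the 1-based error position
    corrected = received.copy()
    if pos:
        corrected[pos - 1] ^= 1                          # flip the erroneous bit
    return corrected

codeword = np.array([0, 1, 1, 0, 0, 1, 1], dtype=np.uint8)    # a valid codeword
received = codeword.copy(); received[4] ^= 1                  # inject a single-bit error at position 5
assert np.array_equal(decode_hamming74(received), codeword)
```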

대기질 예보의 성능 향상을 위한 커널 삼중대각 희소행렬을 이용한 고속 자료동화 (Fast Data Assimilation using Kernel Tridiagonal Sparse Matrix for Performance Improvement of Air Quality Forecasting)

  • 배효식;유숙현;권희용
    • 한국멀티미디어학회논문지 / Vol. 20, No. 2 / pp.363-370 / 2017
  • Data assimilation is an initialization step for air quality forecasting, such as PM10 forecasting, and is essential for improving forecast accuracy. Optimal interpolation is one such data assimilation technique; it is effective and widely used in air quality forecasting. The technique, however, requires a large amount of memory and a long execution time, which makes real-time PM10 forecasting difficult. We propose a fast optimal-interpolation data assimilation method for PM10 air quality forecasting that uses a new kernel tridiagonal sparse matrix together with the CUDA massively parallel processing architecture. Experimental results show that the proposed method is 5 to 56 times faster than conventional ones.
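
The optimal-interpolation step itself can be written compactly. The sketch below uses a tridiagonal background-error covariance so that only neighboring grid cells are correlated, in the spirit of the kernel tridiagonal sparse matrix mentioned above; the grid size, covariance parameters, and function name are illustrative assumptions, and the computation runs on the CPU rather than in the paper's CUDA kernels.

```python
import numpy as np
from scipy.sparse import diags

def optimal_interpolation(xb, obs, obs_idx, sigma_b=1.0, rho=0.5, sigma_o=0.5):
    """Optimal interpolation: x_a = x_b + K (y - H x_b), K = B H^T (H B H^T + R)^-1,
    with a tridiagonal background-error covariance B (only adjacent cells correlated)."""
    n = xb.size
    B = diags([rho * sigma_b**2, sigma_b**2, rho * sigma_b**2],
              offsets=[-1, 0, 1], shape=(n, n), format='csr')   # tridiagonal B
    H = np.zeros((len(obs_idx), n))
    H[np.arange(len(obs_idx)), obs_idx] = 1.0                   # observe selected grid cells
    R = (sigma_o ** 2) * np.eye(len(obs_idx))                   # observation-error covariance
    BHt = B @ H.T
    K = BHt @ np.linalg.inv(H @ BHt + R)                        # gain matrix
    return xb + K @ (obs - H @ xb)

xb = np.full(10, 30.0)                                          # background PM10 field (ug/m^3)
xa = optimal_interpolation(xb, obs=np.array([45.0, 25.0]), obs_idx=np.array([2, 7]))
```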

폐루프 공진 주파수를 이용한 모델 개선법 (Model Updating Using the Closed-loop Natural Frequency)

  • 정훈상;박영진
    • 한국소음진동공학회논문집 / Vol. 14, No. 9 / pp.801-810 / 2004
  • Parameter modification of a linear finite element model (FEM) based on the modal sensitivity matrix is usually carried out by matching the FEM modal data to experimental data. There are cases, however, where this approach fails, for example when reliable modal data are lacking or the modal sensitivity matrix is ill-conditioned. This research proposes the novel concept of introducing feedback loops into the conventional modal test setup. The method uses closed-loop natural frequency data for parameter modification to overcome the problems of the conventional modal-sensitivity-based approach. We present the complete parameter modification procedure based on closed-loop natural frequency data, including the modification of the modal sensitivity and a controller design method that is effective in shifting the modes. A numerical simulation of parameter estimation from time-domain input/output data demonstrates the estimation performance of the proposed method.
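
The core idea of using closed-loop natural frequencies as additional data can be illustrated with a toy example: with a collocated displacement-feedback loop, each feedback gain shifts the stiffness matrix and yields a new set of measurable natural frequencies. The 2-DOF system, the feedback structure, and the gains below are illustrative assumptions, not the authors' controller design.

```python
import numpy as np
from scipy.linalg import eigh

def closed_loop_frequencies(M, K, b, c, g):
    """Natural frequencies (Hz) when the displacement output c^T x is fed back to the
    actuator influence vector b with gain g, i.e. effective stiffness K + g * b c^T."""
    K_cl = K + g * np.outer(b, c)               # feedback modifies the stiffness matrix
    K_cl = 0.5 * (K_cl + K_cl.T)                # symmetric for the collocated case
    w2 = eigh(K_cl, M, eigvals_only=True)       # solve K_cl v = w^2 M v
    return np.sqrt(np.abs(w2)) / (2.0 * np.pi)

# 2-DOF spring-mass chain with a collocated sensor/actuator on the first mass:
M = np.diag([1.0, 1.0])
K = np.array([[2000.0, -1000.0], [-1000.0, 1000.0]])
b = c = np.array([1.0, 0.0])
for g in (0.0, 500.0, 1000.0):
    print(g, closed_loop_frequencies(M, K, b, c, g))   # each gain yields extra frequency data
```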

2차 마르코프 사슬 모델을 이용한 시계열 인공 풍속 자료의 생성 (Generation of Synthetic Time Series Wind Speed Data using Second-Order Markov Chain Model)

  • 유기완
    • 풍력에너지저널 / Vol. 14, No. 1 / pp.37-43 / 2023
  • In this study, synthetic time-series wind data were generated numerically using a second-order Markov chain. One year of wind data measured in 2020 by the AWS on Wido Island was used to characterize the statistics of the measured wind. The transition probability matrix and the cumulative transition probability matrix for the annual hourly mean wind speed were obtained through statistical analysis. The probability density distribution over wind speed and the autocorrelation over time were compared between the first- and second-order Markov chains for various lengths of synthetic time series, and the probability density distributions of the measured and synthetic wind data were also compared. The second-order Markov chain showed some improvement in the autocorrelation, and the autocorrelation was found to converge to zero as the wind speed increases, provided the data size is sufficiently large. The generated artificial wind data are expected to be useful as input for virtual digital-twin wind turbines.
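
A second-order Markov chain for discretized wind speeds can be fitted and sampled as in the sketch below; the 1 m/s binning, the number of states, and the random stand-in for the Wido AWS record are assumptions for illustration only.

```python
import numpy as np

def fit_second_order_markov(states, n_states):
    """Estimate P[(s_{t-2}, s_{t-1}) -> s_t] from a discretized wind-speed sequence."""
    counts = np.full((n_states, n_states, n_states), 1e-12)   # tiny prior avoids empty rows
    for a, b, c in zip(states[:-2], states[1:-1], states[2:]):
        counts[a, b, c] += 1.0
    return counts / counts.sum(axis=2, keepdims=True)

def synthesize(P, length, init, seed=0):
    """Draw a synthetic state sequence from the second-order transition matrix."""
    rng = np.random.default_rng(seed)
    seq = list(init)                                          # init = (s_0, s_1)
    for _ in range(length - 2):
        seq.append(rng.choice(P.shape[0], p=P[seq[-2], seq[-1]]))
    return np.array(seq)

# Stand-in for the measured hourly record (the paper uses the 2020 Wido AWS data),
# binned into 1 m/s states from 0 to 20 m/s:
rng = np.random.default_rng(1)
measured = np.clip(rng.normal(6, 2, 8760), 0, 20).astype(int)
P = fit_second_order_markov(measured, n_states=21)
synthetic = synthesize(P, length=8760, init=(measured[0], measured[1]))
```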

TI-92 계산기를 활용한 이산수학의 이해과정 탐구-「행렬과 그래프」단원을 중심으로- (An Inquiry on the Understanding Process of Discrete Mathematics using TI-92 Calculator - Matrix and Graph-)

  • 강윤수;이보라
    • 한국학교수학회논문집 / Vol. 7, No. 2 / pp.81-97 / 2004
  • This paper studies how students come to understand the "matrix and graph" concepts of discrete mathematics using a graphing calculator. For this purpose, we observed two middle school students learning the "matrix and graph" concepts with a TI-92 calculator. Qualitative data were collected with a camcorder and an audio recorder and were categorized into the students' attitudes toward the technology, their understanding of the meaning of terms, their process of understanding matrix operations, and their mathematical communication. From this we drew the following conclusions. First, the students explored the meaning and role of matrices on their own using the graphing calculator, which served as an excellent learning companion in this process. Second, when the students made mistakes during their exploration, the calculator immediately displayed error messages, enabling self-directed learning. Third, the calculator strengthened mathematical communication between the teacher and the students and among the students themselves.


동시발생 행렬과 하둡 분산처리를 이용한 추천시스템에 관한 연구 (A Study On Recommend System Using Co-occurrence Matrix and Hadoop Distribution Processing)

  • 김창복;정재필
    • 한국항행학회논문지 / Vol. 18, No. 5 / pp.468-475 / 2014
  • Real-time recommendation is becoming difficult for recommender systems as preference data grow large, given the available computing power and recommendation algorithms. Accordingly, methods for processing large preference data in a distributed manner are being actively studied. This paper investigates distributed processing of preference data using the Hadoop distributed processing platform and the Mahout machine learning library. The recommendation algorithm uses a co-occurrence matrix, which is similar to item-based collaborative filtering. The co-occurrence matrix can be processed in a distributed manner across the nodes of a Hadoop cluster; although it fundamentally requires a large amount of computation, the computation can be reduced during distributed processing. In addition, this paper simplifies the distributed processing of the co-occurrence matrix from four steps to three. As a result, the number of MapReduce jobs is reduced while the same recommendation file is produced. When the data were processed in Hadoop pseudo-distributed mode, the processing speed improved and the map output data decreased.
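
The co-occurrence-based scoring itself is simple enough to show in a few lines of numpy. The toy preference matrix below is an illustrative assumption; the paper performs the same computation with MapReduce jobs on a Hadoop cluster using the Mahout library.

```python
import numpy as np

# Toy user-item preference matrix (rows: users, columns: items); 1 = interacted.
R = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1],
              [1, 1, 1, 0]])

C = R.T @ R                        # item-item co-occurrence counts
np.fill_diagonal(C, 0)             # ignore self co-occurrence

user = R[1]                        # preference vector of one user
scores = C @ user                  # co-occurrence matrix times preference vector
scores[user > 0] = 0               # mask items the user has already seen
print("recommended item:", int(np.argmax(scores)))
```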