• Title/Abstract/Keyword: Pixel representation

Search results: 67

Efficient 3D Model based Face Representation and Recognition Algorithm using Pixel-to-Vertex Map (PVM)

  • Jeong, Kang-Hun; Moon, Hyeon-Joon
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 5, No. 1, pp. 228-246, 2011
  • A 3D model based approach for face representation and recognition has been investigated as a robust solution to pose and illumination variation. Since a generative 3D face model consists of a large number of vertices, a 3D model based face recognition system is generally inefficient in computation time and complexity. In this paper, we propose a novel 3D face representation algorithm based on a pixel-to-vertex map (PVM) to optimize the number of vertices. We explore shape and texture coefficient vectors of the 3D model by fitting it to an input face using inverse compositional image alignment (ICIA) to evaluate face recognition performance. Experimental results show that the proposed face representation and recognition algorithm is efficient in computation time while maintaining reasonable accuracy.
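
A minimal sketch of the pixel-to-vertex map idea outlined above, assuming mesh vertices that have already been projected into the image plane; the function names, the one-vertex-per-pixel rule, and the synthetic data are illustrative, not the authors' implementation (which additionally fits shape and texture coefficients via ICIA):

```python
import numpy as np

def build_pixel_to_vertex_map(projected_uv, image_shape):
    """Map each image pixel to at most one mesh vertex.

    projected_uv : (N, 2) array of vertex positions projected into the image
                   plane (illustrative; the camera model is not specified in
                   the abstract).
    image_shape  : (height, width) of the rendered face image.

    Returns a dict {(row, col): vertex_index}. Vertices falling on the same
    pixel collapse to one entry, which is how this sketch reduces the vertex count.
    """
    h, w = image_shape
    pvm = {}
    for v_idx, (u, v) in enumerate(projected_uv):
        r, c = int(round(v)), int(round(u))
        if 0 <= r < h and 0 <= c < w and (r, c) not in pvm:
            pvm[(r, c)] = v_idx            # keep the first vertex per pixel
    return pvm

def reduce_vertices(vertices, pvm):
    """Keep only the vertices referenced by the pixel-to-vertex map."""
    keep = sorted(set(pvm.values()))
    return vertices[keep], keep

# Usage with synthetic data: 50k vertices collapse to at most 128*128 retained ones.
vertices = np.random.rand(50000, 3)
projected_uv = np.random.rand(50000, 2) * [128, 128]
pvm = build_pixel_to_vertex_map(projected_uv, (128, 128))
reduced, kept_idx = reduce_vertices(vertices, pvm)
```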

Exploiting Chaotic Feature Vector for Dynamic Textures Recognition

  • Wang, Yong; Hu, Shiqiang
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 8, No. 11, pp. 4137-4152, 2014
  • This paper investigates the ability of a chaotic feature vector to describe dynamic textures. First, a chaotic feature and other features are calculated from each pixel intensity series. These features are then combined into a chaotic feature vector, so a video is modeled as a feature vector matrix. Next, with the aid of a bag-of-words framework, we explore the representation ability of the proposed chaotic feature vector. Finally, we compare recognition rates across different combinations of chaotic features. Experimental results show the merit of the chaotic feature vector for representing pixel intensity series.
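
A rough sketch of the pipeline summarized above, under stated assumptions: the per-pixel intensity series come from a grayscale video array, the chaotic features (which the abstract does not enumerate) are replaced by simple delay-embedding statistics, and k-means stands in for the bag-of-words codebook:

```python
import numpy as np
from sklearn.cluster import KMeans

def delay_embed(series, dim=3, delay=1):
    """Delay embedding of a 1-D pixel intensity series."""
    n = len(series) - (dim - 1) * delay
    return np.stack([series[i * delay:i * delay + n] for i in range(dim)], axis=1)

def pixel_feature_vector(series):
    """Placeholder feature vector for one pixel's intensity series; simple
    embedding statistics stand in for the paper's chaotic features."""
    emb = delay_embed(np.asarray(series, dtype=float))
    return np.array([series.mean(), series.std(), np.linalg.norm(emb.std(axis=0))])

def video_to_feature_matrix(video):
    """video: (T, H, W) array -> (H*W, n_features) feature vector matrix."""
    T, H, W = video.shape
    series = video.reshape(T, H * W).T          # one intensity series per pixel
    return np.stack([pixel_feature_vector(s) for s in series])

def bag_of_words_histogram(feature_matrix, codebook):
    """Quantize per-pixel feature vectors against a learned codebook and
    return a normalized word histogram describing the whole video."""
    words = codebook.predict(feature_matrix)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

# Usage with synthetic videos: learn the codebook on training videos first.
videos = [np.random.rand(60, 32, 32) for _ in range(5)]
all_feats = np.vstack([video_to_feature_matrix(v) for v in videos])
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(all_feats)
histograms = [bag_of_words_histogram(video_to_feature_matrix(v), codebook) for v in videos]
```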

Adaptive Hyperspectral Image Classification Method Based on Spectral Scale Optimization

  • Zhou, Bing; Bingxuan, Li; He, Xuan; Liu, Hexiong
    • Current Optics and Photonics, Vol. 5, No. 3, pp. 270-277, 2021
  • The adaptive sparse representation (ASR) can effectively combine the structural information of a sample dictionary with the sparsity of the coding coefficients. The algorithm accounts for the correlation between training samples and can switch between the sparse representation-based classifier (SRC) and collaborative representation classification (CRC) depending on the training samples. Unlike SRC and CRC, which use fixed norm constraints, ASR adaptively adjusts the constraints based on the correlation between different training samples, seeking a balance between the l1 and l2 norms and greatly strengthening the robustness and adaptability of the classification algorithm. Correlation coefficients (CC) can better identify pixels with strong correlation. This article therefore proposes a hyperspectral image classification method based on ASR and CC, called correlation coefficients and adaptive sparse representation (CCASR). The method consists of three steps. First, we calculate the CC value between the pixel to be tested and the various training samples. Then we represent the pixel using ASR and calculate the reconstruction error corresponding to each category. Finally, the target pixel is classified according to the reconstruction error and the CC value. The method is verified on two sets of experimental data. On the Indian Pines hyperspectral image, the overall accuracy of CCASR reaches 0.9596. On hyperspectral images captured by the HIS-300, the proposed method achieves a classification accuracy of 0.9354, better than other commonly used methods.
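
A hedged sketch of the three CCASR steps described above. Ridge-regularized least squares stands in for the adaptive sparse representation coding, and the fusion of reconstruction error with the CC value uses an assumed weighted score, since the abstract does not give the exact rule:

```python
import numpy as np

def correlation_with_classes(test_pixel, train_samples, train_labels):
    """Mean Pearson correlation between the test spectrum and each class's training spectra."""
    cc = {}
    for c in np.unique(train_labels):
        X = train_samples[train_labels == c]
        cc[c] = np.mean([np.corrcoef(test_pixel, x)[0, 1] for x in X])
    return cc

def classwise_reconstruction_error(test_pixel, train_samples, train_labels, lam=1e-3):
    """Represent the test pixel with each class's spectra and return the
    reconstruction error per class. Ridge-regularized least squares is a
    simple stand-in for the ASR coding described in the abstract."""
    err = {}
    for c in np.unique(train_labels):
        D = train_samples[train_labels == c].T          # bands x samples
        alpha = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ test_pixel)
        err[c] = np.linalg.norm(test_pixel - D @ alpha)
    return err

def classify_pixel(test_pixel, train_samples, train_labels, weight=0.5):
    """Combine reconstruction error and correlation into one score; the exact
    fusion rule is not given in the abstract, so a weighted score is assumed."""
    cc = correlation_with_classes(test_pixel, train_samples, train_labels)
    err = classwise_reconstruction_error(test_pixel, train_samples, train_labels)
    scores = {c: err[c] - weight * cc[c] for c in err}   # lower is better
    return min(scores, key=scores.get)

# Usage with synthetic spectra (200 bands, 3 classes).
rng = np.random.default_rng(0)
train = rng.random((60, 200)); labels = np.repeat([0, 1, 2], 20)
print(classify_pixel(rng.random(200), train, labels))
```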

Hyperspectral Image Classification via Joint Sparse Representation of Multi-layer Superpixels

  • Sima, Haifeng; Mi, Aizhong; Han, Xue; Du, Shouheng; Wang, Zhiheng; Wang, Jianfang
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 12, No. 10, pp. 5015-5038, 2018
  • In this paper, a novel spectral-spatial joint sparse representation algorithm for hyperspectral image classification is proposed, based on multi-layer superpixels at various scales. Superpixels of various scales provide complete yet redundant correlated information about the class attribute of a test pixel. We therefore design a joint sparse model for a test pixel by sampling similar pixels from its corresponding superpixel combinations. Firstly, multi-layer superpixels are extracted from a false-color image of the HSI data obtained by principal component analysis. Secondly, a group of discriminative sampled pixels is used as the reconstruction matrix of the test pixel, which can be jointly represented by the structured dictionary and the recovered sparse coefficients. Thirdly, the orthogonal matching pursuit strategy is employed to estimate the sparse vector of the test pixel; in each iteration, the approximation is computed from the dictionary and the corresponding sparse vector. Finally, the class label of the test pixel is determined directly by the minimum reconstruction error between the reconstruction matrix and its approximation. The advantage of this algorithm lies in exploiting complete neighborhoods of homogeneous pixels that share a common sparsity pattern, enabling more flexible joint sparse coding of spectral-spatial information. Experimental results on three real hyperspectral datasets show that the proposed joint sparse model achieves better performance than a series of strong sparse classification methods and superpixel-based classification methods.
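
A sketch of the joint sparse classification step, assuming the group of pixels sampled from the multi-layer superpixels has already been collected into a signal matrix; a generic simultaneous OMP routine stands in for the paper's full pipeline (the PCA false-color superpixel extraction is omitted):

```python
import numpy as np

def somp(D, Y, sparsity):
    """Simultaneous orthogonal matching pursuit: the columns of Y (the test
    pixel plus pixels sampled from its superpixels) share one support over
    the structured dictionary D (bands x atoms)."""
    residual = Y.copy()
    support = []
    for _ in range(sparsity):
        scores = np.linalg.norm(D.T @ residual, axis=1)   # joint correlation per atom
        scores[support] = -np.inf                          # do not re-pick atoms
        support.append(int(np.argmax(scores)))
        Ds = D[:, support]
        coeff, *_ = np.linalg.lstsq(Ds, Y, rcond=None)
        residual = Y - Ds @ coeff
    return support, coeff

def classify_by_residual(D, atom_labels, Y, sparsity=5):
    """Assign the class whose atoms explain the joint signal matrix best."""
    support, coeff = somp(D, Y, sparsity)
    errors = {}
    for c in np.unique(atom_labels):
        mask = np.array([atom_labels[s] == c for s in support])
        if not mask.any():
            errors[c] = np.inf
            continue
        Ds = D[:, np.array(support)[mask]]
        errors[c] = np.linalg.norm(Y - Ds @ coeff[mask])
    return min(errors, key=errors.get)

# Usage with synthetic data: 200 bands, 90 atoms in 3 classes, 9 grouped pixels.
rng = np.random.default_rng(1)
D = rng.random((200, 90)); atom_labels = np.repeat([0, 1, 2], 30)
Y = rng.random((200, 9))
print(classify_by_residual(D, atom_labels, Y))
```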

선형 회귀분석 기반 합산영역테이블 정밀도 향상 기법 (Linear Regression-Based Precision Enhancement of Summed Area Table)

  • 정주현; 이성길
    • 정보처리학회논문지:소프트웨어 및 데이터공학, Vol. 2, No. 11, pp. 809-814, 2013
  • A summed area table (SAT) is a data structure that expresses the sum of pixel values within an arbitrary rectangular region of an image as the sum and difference of four pixel values. However, because a SAT accumulates pixel values sequentially from one corner of the image to the other, the accumulated values can exceed the representable range of floating-point formats when the image is large. To address this, this paper proposes approximating the image with linear regression and accumulating only the differences from the regression function, thereby reducing the accumulated precision error. In addition, a method is proposed for computing the sum of the regression function in constant time using double integration when reconstructing the image. Experiments on image reconstruction show that the proposed method reduces accumulated error compared with the conventional fixed-offset method.
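
The summed-area-table idea and the regression-residual variant translate directly into a short sketch. The plane model f(x, y) = a + bx + cy, the closed-form rectangle sum (playing the role the abstract assigns to double integration), and the final consistency check are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def fit_plane(img):
    """Least-squares fit of f(x, y) = a + b*x + c*y, which absorbs most of the
    image's magnitude before accumulation."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(img.size), xs.ravel(), ys.ravel()])
    coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    return coeffs  # a, b, c

def residual_sat(img, coeffs):
    """Summed area table over the regression residuals only, so the running
    sums stay small compared with a plain SAT of the raw pixels."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    a, b, c = coeffs
    residual = img - (a + b * xs + c * ys)
    return residual.cumsum(axis=0).cumsum(axis=1)

def plane_sum(coeffs, x0, y0, x1, y1):
    """Closed-form (constant-time) sum of the fitted plane over an inclusive
    pixel rectangle."""
    a, b, c = coeffs
    nx, ny = x1 - x0 + 1, y1 - y0 + 1
    sx, sy = (x0 + x1) * nx / 2.0, (y0 + y1) * ny / 2.0
    return a * nx * ny + b * sx * ny + c * sy * nx

def region_sum(sat, coeffs, x0, y0, x1, y1):
    """Standard four-corner SAT lookup on the residuals plus the analytic
    plane sum reconstructs the raw-pixel sum of the rectangle."""
    total = sat[y1, x1]
    if x0 > 0: total -= sat[y1, x0 - 1]
    if y0 > 0: total -= sat[y0 - 1, x1]
    if x0 > 0 and y0 > 0: total += sat[y0 - 1, x0 - 1]
    return total + plane_sum(coeffs, x0, y0, x1, y1)

# Usage: the reconstructed sum matches direct summation up to floating-point error.
img = np.random.rand(256, 256).astype(np.float32) * 255
coeffs = fit_plane(img)
sat = residual_sat(img, coeffs)
print(region_sum(sat, coeffs, 10, 20, 200, 180), img[20:181, 10:201].sum())
```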

Neural-network-based Impulse Noise Removal Using Group-based Weighted Couple Sparse Representation

  • Lee, Yongwoo; Bui, Toan Duc; Shin, Jitae; Oh, Byung Tae
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 12, No. 8, pp. 3873-3887, 2018
  • In this paper, we propose a novel method to recover images corrupted by impulse noise. The proposed method has two stages: noise detection and filtering. In the first stage, we use pixel values, rank-ordered logarithmic difference values, and median values to train a neural-network-based impulse noise detector. After training, we apply the network to detect noisy pixels in images. In the next stage, we use group-based weighted couple sparse representation to filter the noisy pixels. In this second stage, conventional methods generally use only clean pixels to recover corrupted pixels, which can yield unsuccessful dictionary learning when the noise density is high and the number of useful clean pixels is inadequate. We therefore use reconstructed pixels to compensate for the deficiency. Experimental results show that the proposed noise detector performs better than conventional noise detectors. Also, given the locations of the noisy pixels, the proposed impulse-noise removal method outperforms conventional methods, yielding recovered images of better quality.
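
A sketch of the detection stage described above, with assumptions: the rank-ordered logarithmic difference is simplified, the window size and feature scaling are illustrative, and scikit-learn's MLPClassifier stands in for the paper's neural-network detector:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def pixel_features(img, r, c, win=1, m=4):
    """Features for one pixel: its value, a simplified rank-ordered logarithmic
    difference (ROLD) statistic, and the window median. These follow the three
    feature types named in the abstract, not the paper's exact formulas."""
    h, w = img.shape
    r0, r1 = max(r - win, 0), min(r + win + 1, h)
    c0, c1 = max(c - win, 0), min(c + win + 1, w)
    patch = img[r0:r1, c0:c1].astype(float).ravel()
    center = float(img[r, c])
    diffs = np.sort(np.abs(patch - center))[1:]          # drop the zero self-difference
    rold = np.sum(1.0 + np.log2(np.maximum(diffs[:m] / 255.0, 1e-3)) / 10.0)
    return np.array([center / 255.0, rold, np.median(patch) / 255.0])

def feature_matrix(img):
    h, w = img.shape
    return np.array([pixel_features(img, r, c) for r in range(h) for c in range(w)])

# Usage: train the detector on a clean / impulse-corrupted pair, then predict a noise map.
rng = np.random.default_rng(0)
clean = (rng.random((32, 32)) * 255).astype(np.uint8)
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.2                     # 20% salt-and-pepper noise
noisy[mask] = rng.choice([0, 255], size=mask.sum())
X, y = feature_matrix(noisy), mask.ravel().astype(int)   # 1 = noisy pixel
detector = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X, y)
noise_map = detector.predict(X).reshape(clean.shape)     # passed on to the filtering stage
```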

Robust appearance feature learning using pixel-wise discrimination for visual tracking

  • Kim, Minji; Kim, Sungchan
    • ETRI Journal, Vol. 41, No. 4, pp. 483-493, 2019
  • Considering the high dimensionality of video sequences, it is often challenging to acquire a dataset sufficient to train tracking models. From this perspective, we propose to revisit hand-crafted feature learning to avoid such a dataset requirement. The proposed tracking approach is composed of two phases, detection and tracking, according to how severely the appearance of the target changes. The detection phase addresses severe and rapid appearance variations by learning a new appearance model that classifies pixels into foreground (target) and background. We further combine raw pixel features of color intensity and spatial location with convolutional feature activations for robust target representation. The tracking phase tracks the target by searching for frame regions where the best pixel-level agreement with the model learned in the detection phase is achieved. Our two-phase approach results in efficient and accurate tracking, outperforming recent methods in various challenging cases of target appearance change.
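
A minimal sketch of the pixel-wise discrimination idea, assuming only raw color and normalized spatial location as features (the convolutional activations used in the paper are omitted) and logistic regression as the pixel classifier:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pixel_feature_map(frame):
    """Per-pixel features: RGB intensity plus normalized (x, y) location."""
    h, w, _ = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.concatenate([frame.reshape(h * w, 3) / 255.0,
                           np.stack([xs.ravel() / w, ys.ravel() / h], axis=1)], axis=1)

def learn_appearance_model(frame, target_mask):
    """Detection phase: classify pixels into target (1) and background (0)."""
    X = pixel_feature_map(frame)
    y = target_mask.ravel().astype(int)
    return LogisticRegression(max_iter=1000).fit(X, y)

def score_region(model, frame, box):
    """Tracking phase: mean pixel-level target probability inside a candidate box."""
    h, w, _ = frame.shape
    x0, y0, x1, y1 = box
    probs = model.predict_proba(pixel_feature_map(frame))[:, 1].reshape(h, w)
    return probs[y0:y1, x0:x1].mean()

# Usage with a synthetic frame and a known target mask in the first frame.
rng = np.random.default_rng(0)
frame = (rng.random((64, 64, 3)) * 255).astype(np.uint8)
mask = np.zeros((64, 64), bool); mask[20:40, 25:45] = True
model = learn_appearance_model(frame, mask)
print(score_region(model, frame, (25, 20, 45, 40)))
```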

다시점 영상 및 깊이 영상의 효율적인 표현을 위한 순차적 복원 기반 포인트 클라우드 생성 기법 (Sequential Point Cloud Generation Method for Efficient Representation of Multi-view plus Depth Data)

  • 강세희; 한현민; 김빛나; 이민회; 황성수; 방건
    • 한국멀티미디어학회논문지, Vol. 23, No. 2, pp. 166-173, 2020
  • Multi-view images, which are widely used to provide free-viewpoint services, can enhance the quality of synthesized views as the number of views increases. However, an efficient representation method is needed because of the tremendous amount of data. In this paper, we propose a method for generating point cloud data for the efficient representation of multi-view color and depth images. The proposed method reconstructs the point cloud sequentially at each viewpoint in order to delete duplicate data. A 3D point of the point cloud is projected onto the frame to be reconstructed, and its color and depth are compared with those of the pixel onto which it is projected. When the 3D point and the pixel are similar enough, the pixel is not used to generate a new 3D point. In this way, we reduce the number of reconstructed 3D points. Experimental results show that the proposed method generates a point cloud that can reproduce the multi-view images while minimizing the number of 3D points.
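
A sketch of the sequential duplicate-removal rule described above, assuming a pinhole camera model with per-view intrinsics K and pose matrices; the depth and color tolerances are illustrative parameters, not values from the paper:

```python
import numpy as np

def backproject(depth, color, K, cam_to_world, skip_mask):
    """Lift pixels that are not already covered by the cloud into 3-D points."""
    ys, xs = np.where(~skip_mask)
    z = depth[ys, xs]
    pts_cam = np.stack([(xs - K[0, 2]) * z / K[0, 0],
                        (ys - K[1, 2]) * z / K[1, 1], z], axis=1)
    pts_world = (cam_to_world[:3, :3] @ pts_cam.T).T + cam_to_world[:3, 3]
    return pts_world, color[ys, xs]

def coverage_mask(points, colors, depth, color, K, world_to_cam,
                  depth_tol=0.01, color_tol=10.0):
    """Project the current cloud into the new view and mark pixels whose depth
    and color are close enough to an existing point; those pixels are skipped."""
    mask = np.zeros(depth.shape, bool)
    pts_cam = (world_to_cam[:3, :3] @ points.T).T + world_to_cam[:3, 3]
    z = pts_cam[:, 2]
    valid = z > 0
    u = np.round(pts_cam[valid, 0] * K[0, 0] / z[valid] + K[0, 2]).astype(int)
    v = np.round(pts_cam[valid, 1] * K[1, 1] / z[valid] + K[1, 2]).astype(int)
    inside = (u >= 0) & (u < depth.shape[1]) & (v >= 0) & (v < depth.shape[0])
    u, v = u[inside], v[inside]
    z_v, col = z[valid][inside], colors[valid][inside]
    close = (np.abs(depth[v, u] - z_v) < depth_tol) & \
            (np.abs(color[v, u].astype(float) - col).max(axis=1) < color_tol)
    mask[v[close], u[close]] = True
    return mask

def build_cloud(views):
    """views: list of (depth, color, K, cam_to_world, world_to_cam), processed
    sequentially so later views only contribute uncovered points."""
    points, colors = np.zeros((0, 3)), np.zeros((0, 3))
    for depth, color, K, c2w, w2c in views:
        skip = (coverage_mask(points, colors, depth, color, K, w2c)
                if len(points) else np.zeros(depth.shape, bool))
        new_pts, new_cols = backproject(depth, color, K, c2w, skip)
        points, colors = np.vstack([points, new_pts]), np.vstack([colors, new_cols])
    return points, colors

# Usage with one synthetic view (identity pose): every pixel becomes a new point.
K = np.array([[100.0, 0, 16], [0, 100.0, 16], [0, 0, 1]])
depth = np.ones((32, 32)); color = np.random.rand(32, 32, 3) * 255
pts, cols = build_cloud([(depth, color, K, np.eye(4), np.eye(4))])
```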

3차원 얼굴인식을 위한 픽셀 대 정점 맵 기반 얼굴 표현방법 (Face Representation Method Using Pixel-to-Vertex Map(PVM) for 3D Model Based Face Recognition)

  • 문현준; 정강훈; 홍태화
    • 대한전자공학회 학술대회논문집, 2006년도 하계종합학술대회, pp. 1031-1032, 2006
  • A 3D model based face recognition system is generally inefficient in computation time because a 3D face model consists of a large number of vertices. In this paper, we propose a novel 3D face representation algorithm to reduce the number of vertices and optimize the computation time.


영상 객체의 특징 추출을 이용한 내용 기반 영상 검색 시스템 (Content-Based Image Retrieval System using Feature Extraction of Image Objects)

  • 정세환; 서광규
    • 산업경영시스템학회지, Vol. 27, No. 3, pp. 59-65, 2004
  • This paper explores an image segmentation and representation method using vector quantization (VQ) on color and texture for a content-based image retrieval system. The basic idea is a transformation from raw pixel data to a small set of image regions that are coherent in color and texture space. This scheme is used for object-based image retrieval. The features for image retrieval are three color features from the HSV color model and five texture features from gray-level co-occurrence matrices. Once feature extraction is performed, an 8-dimensional feature vector represents each pixel in the image. The VQ algorithm is used to cluster the pixel data into groups. A representative feature table based on the dominant groups is obtained and used to retrieve similar images according to the objects within the image. The proposed method can retrieve similar images even when the objects are translated, scaled, or rotated.
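
A sketch of the per-pixel 8-dimensional descriptor and VQ step, with assumptions: simple window statistics stand in for the five GLCM texture features, k-means stands in for the VQ codebook, and the retrieval distance between representative tables is an illustrative choice:

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv
from sklearn.cluster import KMeans

def pixelwise_features(rgb, win=2):
    """Per-pixel 8-D descriptor: three HSV color values plus five local texture
    statistics used here in place of the paper's GLCM features."""
    hsv = rgb_to_hsv(rgb / 255.0)
    h, w, _ = rgb.shape
    gray = rgb.mean(axis=2)
    feats = np.zeros((h, w, 8))
    feats[..., :3] = hsv
    for r in range(h):
        for c in range(w):
            patch = gray[max(r - win, 0):r + win + 1, max(c - win, 0):c + win + 1]
            p = patch / (patch.sum() + 1e-9)
            feats[r, c, 3:] = [patch.mean(), patch.std(), patch.max() - patch.min(),
                               -(p * np.log2(p + 1e-9)).sum(),         # entropy
                               np.abs(np.diff(patch, axis=1)).mean()]  # local contrast
    return feats.reshape(-1, 8)

def representative_table(features, n_words=16, top_k=4):
    """Vector-quantize the pixel descriptors and keep the dominant clusters as
    the image's representative feature table used for retrieval."""
    vq = KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(features)
    counts = np.bincount(vq.labels_, minlength=n_words)
    dominant = np.argsort(counts)[::-1][:top_k]
    return vq.cluster_centers_[dominant]

def table_distance(table_a, table_b):
    """Simple retrieval score: match each representative of one image to the
    closest representative of the other."""
    d = np.linalg.norm(table_a[:, None] - table_b[None], axis=2)
    return d.min(axis=1).mean()

# Usage with a small synthetic image; real use would index a database of tables.
img = np.random.rand(32, 32, 3) * 255
table = representative_table(pixelwise_features(img))
```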