• Title/Summary/Keyword: Pixel representation

Efficient 3D Model based Face Representation and Recognition Algorithm using Pixel-to-Vertex Map (PVM)

  • Jeong, Kang-Hun;Moon, Hyeon-Joon
    • KSII Transactions on Internet and Information Systems (TIIS) / v.5 no.1 / pp.228-246 / 2011
  • A 3D model based approach for a face representation and recognition algorithm has been investigated as a robust solution for pose and illumination variation. Since a generative 3D face model consists of a large number of vertices, a 3D model based face recognition system is generally inefficient in computation time and complexity. In this paper, we propose a novel 3D face representation algorithm based on a pixel to vertex map (PVM) to optimize the number of vertices. We explore shape and texture coefficient vectors of the 3D model by fitting it to an input face using inverse compositional image alignment (ICIA) to evaluate face recognition performance. Experimental results show that the proposed face representation and recognition algorithm is efficient in computation time while maintaining reasonable accuracy.
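
The vertex-reduction idea described in the abstract can be illustrated with a minimal sketch: project the model's vertices onto the image plane and keep at most one vertex per pixel. This is not the paper's PVM algorithm; the orthographic projection, toy vertex data, and resolution below are made-up assumptions.

```python
def build_pvm(vertices, width, height):
    """Map each pixel to at most one vertex index (toy orthographic projection)."""
    pvm = {}  # (px, py) -> vertex index
    for idx, (x, y, z) in enumerate(vertices):
        px, py = int(round(x)), int(round(y))
        # Keep only the first vertex that lands on each pixel.
        if 0 <= px < width and 0 <= py < height and (px, py) not in pvm:
            pvm[(px, py)] = idx
    return pvm

# Toy model: three vertices, two of which project to the same pixel.
verts = [(1.0, 2.0, 0.5), (1.2, 2.1, 0.7), (4.0, 4.0, 1.0)]
pvm = build_pvm(verts, width=8, height=8)
print(len(pvm))  # 2: one duplicate vertex was discarded
```

Retaining one vertex per covered pixel is one plausible way the vertex count, and hence fitting cost, could be reduced.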

Exploiting Chaotic Feature Vector for Dynamic Textures Recognition

  • Wang, Yong;Hu, Shiqiang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.11 / pp.4137-4152 / 2014
  • This paper investigates the descriptive ability of a chaotic feature vector for dynamic textures. First, a chaotic feature and other features are calculated from each pixel intensity series. These features are then combined into a chaotic feature vector, so that a video is modeled as a feature vector matrix. Next, with the aid of a bag-of-words framework, we explore the representation ability of the proposed chaotic feature vector. Finally, we investigate the recognition rate of different combinations of chaotic features. Experimental results show the merit of the chaotic feature vector for pixel intensity series representation.
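
The per-pixel pipeline (intensity series → feature vector → feature-vector matrix) can be sketched as follows. The three features here (mean, standard deviation, and a crude divergence-rate proxy) are stand-ins chosen for illustration; they are not the paper's actual chaotic measures.

```python
import math

def pixel_features(series):
    """Toy feature vector for one pixel's intensity series."""
    n = len(series)
    mean = sum(series) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in series) / n)
    # Illustrative "chaos" proxy: average absolute first difference.
    diverge = sum(abs(b - a) for a, b in zip(series, series[1:])) / (n - 1)
    return [mean, std, diverge]

def video_to_matrix(video):
    """video: list of frames, each frame a flat list of pixel intensities."""
    n_pixels = len(video[0])
    return [pixel_features([frame[p] for frame in video]) for p in range(n_pixels)]

# Toy video: 4 frames of 2 pixels each.
video = [[10, 0], [12, 0], [9, 0], [11, 0]]
matrix = video_to_matrix(video)
print(len(matrix), len(matrix[0]))  # 2 pixels, 3 features each
```

The resulting matrix (one feature vector per pixel) is what a bag-of-words stage would then quantize.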

Adaptive Hyperspectral Image Classification Method Based on Spectral Scale Optimization

  • Zhou, Bing;Bingxuan, Li;He, Xuan;Liu, Hexiong
    • Current Optics and Photonics / v.5 no.3 / pp.270-277 / 2021
  • The adaptive sparse representation (ASR) can effectively combine the structure information of a sample dictionary with the sparsity of coding coefficients. The algorithm can account for the correlation between training samples and convert between the sparse representation-based classifier (SRC) and collaborative representation classification (CRC) under different training samples. Unlike SRC and CRC, which use fixed norm constraints, ASR adaptively adjusts the constraints based on the correlation between different training samples, seeking a balance between the l1 and l2 norms and greatly strengthening the robustness and adaptability of the classification algorithm. Correlation coefficients (CC) can better identify pixels with strong correlation. This article therefore proposes a hyperspectral image classification method, correlation coefficients and adaptive sparse representation (CCASR), that fuses ASR and CC. The method has three steps. First, we determine the pixel to be measured and calculate the CC value between that pixel and the various training samples. Then we represent the pixel using ASR and calculate the reconstruction error corresponding to each category. Finally, the target pixels are classified according to the reconstruction error and the CC value. The method is verified on two sets of experimental data. On the Indian Pines hyperspectral image, the overall accuracy of CCASR reaches 0.9596. On hyperspectral images taken by HIS-300, the proposed method achieves a classification accuracy of 0.9354, which is better than other commonly used methods.
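
The three-step decision rule (CC, per-class reconstruction error, combined score) can be sketched as below. The per-class "dictionary" here is a single spectrum and the ASR coding step is replaced by a one-coefficient least-squares fit, so this is an illustration of the scoring idea only, not the paper's ASR solver; the class names and score formula are made up.

```python
import math

def pearson_cc(x, y):
    """Pearson correlation coefficient between two spectra."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def classify(pixel, class_dicts):
    """Pick the class with low reconstruction error and high |CC|."""
    best, best_score = None, None
    for label, d in class_dicts.items():
        # One-coefficient least-squares fit: pixel ~ a * d.
        a = sum(p * q for p, q in zip(pixel, d)) / sum(q * q for q in d)
        err = math.sqrt(sum((p - a * q) ** 2 for p, q in zip(pixel, d)))
        cc = pearson_cc(pixel, d)
        score = err / (abs(cc) + 1e-9)  # low error and strong correlation win
        if best_score is None or score < best_score:
            best, best_score = label, score
    return best

dicts = {"soil": [1.0, 2.0, 3.0], "water": [3.0, 2.0, 1.0]}
print(classify([2.1, 4.0, 6.2], dicts))  # "soil"
```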

Hyperspectral Image Classification via Joint Sparse Representation of Multi-layer Superpixels

  • Sima, Haifeng;Mi, Aizhong;Han, Xue;Du, Shouheng;Wang, Zhiheng;Wang, Jianfang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.10 / pp.5015-5038 / 2018
  • In this paper, a novel spectral-spatial joint sparse representation algorithm for hyperspectral image classification is proposed, based on multi-layer superpixels at various scales. Superpixels at various scales provide complete yet redundant correlated information about the class attribute of a test pixel. We therefore design a joint sparse model for a test pixel by sampling similar pixels from its corresponding superpixel combinations. Firstly, multi-layer superpixels are extracted from a false-color image of the HSI data obtained by a principal component analysis model. Secondly, a group of discriminative sampled pixels is used as the reconstruction matrix of the test pixel, which can be jointly represented by the structured dictionary and the recovered sparse coefficients. Thirdly, an orthogonal matching pursuit strategy is employed to estimate the sparse vector for the test pixel; in each iteration, the approximation is computed from the dictionary and the corresponding sparse vector. Finally, the class label of the test pixel is determined directly by the minimum reconstruction error between the reconstruction matrix and its approximation. The advantage of this algorithm lies in letting a complete neighborhood of homogeneous pixels share a common sparsity pattern, enabling more flexible joint sparse coding of spectral-spatial information. Experimental results on three real hyperspectral datasets show that the proposed joint sparse model achieves better performance than a series of excellent sparse classification methods and superpixel-based classification methods.
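
The orthogonal matching pursuit (OMP) step mentioned in the abstract greedily selects the dictionary atom most correlated with the current residual and refits the coefficients each iteration. Below is a generic textbook OMP on a toy orthonormal dictionary, not the paper's joint spectral-spatial solver; the dictionary and signal are invented for illustration.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def solve(A, b):
    """Gauss-Jordan elimination for the small normal-equation systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b_ for a, b_ in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def omp(atoms, x, k):
    """Return the support set and coefficients of a k-sparse approximation."""
    residual, support, coef = x[:], [], []
    for _ in range(k):
        # Select the unused atom most correlated with the residual.
        j = max((i for i in range(len(atoms)) if i not in support),
                key=lambda i: abs(dot(atoms[i], residual)))
        support.append(j)
        # Least-squares refit on all selected atoms (normal equations).
        A = [[dot(atoms[a], atoms[b]) for b in support] for a in support]
        b = [dot(atoms[a], x) for a in support]
        coef = solve(A, b)
        approx = [sum(c * atoms[a][t] for c, a in zip(coef, support))
                  for t in range(len(x))]
        residual = [xi - ai for xi, ai in zip(x, approx)]
    return support, coef

atoms = [[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]]
support, coef = omp(atoms, [0.0, 3.0, 4.0], k=2)
print(support, coef)  # atoms 2 and 1 selected, coefficients 4 and 3
```

In the paper's setting the "signal" would be the stacked spectra of the sampled neighborhood pixels, jointly coded over the structured dictionary.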

Linear Regression-Based Precision Enhancement of Summed Area Table (선형 회귀분석 기반 합산영역테이블 정밀도 향상 기법)

  • Jeong, Juhyeon;Lee, Sungkil
    • KIPS Transactions on Software and Data Engineering / v.2 no.11 / pp.809-814 / 2013
  • Summed area table (SAT) is a data structure in which the sum of pixel values in an arbitrary rectangular area can be represented by a linear combination of four pixel values. Since SAT serially accumulates pixel values from one image corner to the other, a high-resolution image can yield overflow in a floating-point representation. In this paper, we present a new SAT construction technique that accumulates only the residuals from a linearly regressed representation of the image, thereby significantly reducing the accumulation errors. We also propose a method to evaluate the integral of the linear regression in constant time using a double integral. We performed experiments on image reconstruction, and the results showed that our approach reduces the accumulation errors more than the conventional fixed-offset SAT.
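
The four-value identity the paper builds on can be sketched directly; the residual accumulation and double-integral steps of the paper are not reproduced here, only the standard SAT construction and rectangle-sum lookup on a toy image.

```python
def build_sat(img):
    """Summed area table with a zero-padded first row and column."""
    h, w = len(img), len(img[0])
    sat = [[0.0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            sat[y + 1][x + 1] = (img[y][x] + sat[y][x + 1]
                                 + sat[y + 1][x] - sat[y][x])
    return sat

def rect_sum(sat, x0, y0, x1, y1):
    """Sum over the inclusive rectangle [x0..x1] x [y0..y1] from 4 entries."""
    return (sat[y1 + 1][x1 + 1] - sat[y0][x1 + 1]
            - sat[y1 + 1][x0] + sat[y0][x0])

img = [[1, 2], [3, 4]]
sat = build_sat(img)
print(rect_sum(sat, 0, 0, 1, 1))  # 10.0, the sum of the whole image
```

The paper's improvement would replace `img[y][x]` in the accumulation with the residual after subtracting a fitted linear plane, keeping the accumulated magnitudes (and thus floating-point error) small.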

Neural-network-based Impulse Noise Removal Using Group-based Weighted Couple Sparse Representation

  • Lee, Yongwoo;Bui, Toan Duc;Shin, Jitae;Oh, Byung Tae
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.8 / pp.3873-3887 / 2018
  • In this paper, we propose a novel method to recover images corrupted by impulse noise. The proposed method uses two stages: noise detection and filtering. In the first stage, we use pixel values, rank-ordered logarithmic difference values, and median values to train a neural-network-based impulse noise detector. After training, we apply the network to detect noisy pixels in images. In the next stage, we use group-based weighted couple sparse representation to filter the noisy pixels. In this second stage, conventional methods generally use only clean pixels to recover corrupted pixels, which can yield unsuccessful dictionary learning when the noise density is high and the number of useful clean pixels is inadequate. We therefore use reconstructed pixels to balance the deficiency. Experimental results show that the proposed noise detector performs better than conventional noise detectors. Also, given the noisy pixel locations, the proposed impulse-noise removal method performs better than conventional methods, yielding recovered images of better quality.
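
As a stand-in for the trained neural-network detector, the detection stage's intuition (a pixel whose value deviates strongly from its local median is likely an impulse) can be sketched as a simple threshold rule. The window size, threshold, and toy image below are assumptions for illustration, not the paper's learned detector.

```python
def median_detect(img, threshold=60):
    """Flag interior pixels that deviate from their 3x3 neighborhood median."""
    h, w = len(img), len(img[0])
    noisy = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[yy][xx]
                            for yy in range(y - 1, y + 2)
                            for xx in range(x - 1, x + 2))
            med = window[4]  # median of the 9 window values
            if abs(img[y][x] - med) > threshold:
                noisy.append((y, x))
    return noisy

img = [[100] * 5 for _ in range(5)]
img[2][2] = 255  # a salt impulse in a flat region
print(median_detect(img))  # [(2, 2)]
```

The paper's detector replaces this hard threshold with a network fed by the pixel value, rank-ordered logarithmic differences, and the median, which adapts better to texture than a fixed rule.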

Robust appearance feature learning using pixel-wise discrimination for visual tracking

  • Kim, Minji;Kim, Sungchan
    • ETRI Journal / v.41 no.4 / pp.483-493 / 2019
  • Considering the high dimensionality of video sequences, it is often challenging to acquire a dataset sufficient to train tracking models. From this perspective, we propose to revisit the idea of hand-crafted feature learning to avoid such a dataset requirement. The proposed tracking approach is composed of two phases, detection and tracking, according to how severely the appearance of a target changes. The detection phase addresses severe and rapid variations by learning a new appearance model that classifies the pixels into foreground (or target) and background. We further combine the raw pixel features of color intensity and spatial location with convolutional feature activations for robust target representation. The tracking phase tracks a target by searching for frame regions where the best pixel-level agreement with the model learned in the detection phase is achieved. Our two-phase approach results in efficient and accurate tracking, outperforming recent methods in various challenging cases of target appearance changes.
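
The pixel-wise foreground/background classification on combined color and spatial features can be illustrated with a nearest-centroid rule. This is not the paper's learned appearance model; the [intensity, x, y] feature, centroids, and toy pixels are assumptions chosen to show the idea.

```python
def centroid(feats):
    """Mean feature vector of a list of pixel features."""
    n = len(feats)
    return [sum(f[i] for f in feats) / n for i in range(len(feats[0]))]

def classify_pixel(feat, fg_centroid, bg_centroid):
    """Label a pixel by its closer class centroid (squared distance)."""
    d = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return "target" if d(feat, fg_centroid) < d(feat, bg_centroid) else "background"

fg = centroid([[200, 5, 5], [210, 6, 5]])  # bright pixels near the target
bg = centroid([[30, 0, 0], [40, 9, 9]])    # dark pixels elsewhere
print(classify_pixel([205, 5, 6], fg, bg))  # "target"
```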

Sequential Point Cloud Generation Method for Efficient Representation of Multi-view plus Depth Data (다시점 영상 및 깊이 영상의 효율적인 표현을 위한 순차적 복원 기반 포인트 클라우드 생성 기법)

  • Kang, Sehui;Han, Hyunmin;Kim, Binna;Lee, Minhoe;Hwang, Sung Soo;Bang, Gun
    • Journal of Korea Multimedia Society / v.23 no.2 / pp.166-173 / 2020
  • Multi-view images, which are widely used for providing free-viewpoint services, can enhance the quality of synthesized views as the number of views increases. However, an efficient representation method is needed because of the tremendous amount of data. In this paper, we propose a method for generating point cloud data for the efficient representation of multi-view color and depth images. The proposed method sequentially reconstructs point clouds at each viewpoint as a way of deleting duplicate data. A 3D point of a point cloud is projected into a frame to be reconstructed, and the color and depth of the 3D point are compared with those of the pixel where it is projected. When the 3D point and the pixel are similar enough, the pixel is not used to generate a new 3D point. In this way, we reduce the number of reconstructed 3D points. Experimental results show that the proposed method generates a point cloud that can reproduce the multi-view images while minimizing the number of 3D points.
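
The duplicate-removal rule can be sketched as follows: before creating a new 3D point from a pixel, compare the pixel with the projection of an already reconstructed point, and skip it if color and depth are close enough. Projection here is a trivial identity mapping and the tolerances are invented, purely for illustration.

```python
def should_skip(pixel_color, pixel_depth, proj_color, proj_depth,
                color_tol=10, depth_tol=0.05):
    """True if the pixel matches an already reconstructed point's projection."""
    return (abs(pixel_color - proj_color) <= color_tol
            and abs(pixel_depth - proj_depth) <= depth_tol)

def merge_views(points, view):
    """points and view: dict (x, y) -> (color, depth); merge without duplicates."""
    for xy, (color, depth) in view.items():
        if xy in points and should_skip(color, depth, *points[xy]):
            continue  # near-duplicate of an existing 3D point: skip it
        points[xy] = (color, depth)
    return points

pts = {(0, 0): (120, 1.00)}
merged = merge_views(pts, {(0, 0): (123, 1.02), (1, 0): (200, 2.0)})
print(len(merged))  # 2: the near-duplicate at (0, 0) was not re-added
```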

Face Representation Method Using Pixel-to-Vertex Map(PVM) for 3D Model Based Face Recognition (3차원 얼굴인식을 위한 픽셀 대 정점 맵 기반 얼굴 표현방법)

  • Moon, Hyeon-Jun;Jeong, Kang-Hun;Hong, Tae-Hwa
    • Proceedings of the IEEK Conference / 2006.06a / pp.1031-1032 / 2006
  • A 3D model based face recognition system is generally inefficient in computation time because a 3D face model consists of a large number of vertices. In this paper, we propose a novel 3D face representation algorithm to reduce the number of vertices and optimize the computation time.
Content-Based Image Retrieval System using Feature Extraction of Image Objects (영상 객체의 특징 추출을 이용한 내용 기반 영상 검색 시스템)

  • Jung Seh-Hwan;Seo Kwang-Kyu
    • Journal of Korean Society of Industrial and Systems Engineering / v.27 no.3 / pp.59-65 / 2004
  • This paper explores an image segmentation and representation method using vector quantization (VQ) on color and texture for a content-based image retrieval system. The basic idea is a transformation from raw pixel data to a small set of image regions that are coherent in color and texture space. These schemes are used for object-based image retrieval. The features for image retrieval are three color features from the HSV color model and five texture features from gray-level co-occurrence matrices. Once feature extraction is performed on the image, an 8-dimensional feature vector represents each pixel. The VQ algorithm is used to cluster the pixel data into groups. A representative feature table based on the dominant groups is obtained and used to retrieve similar images according to the objects within each image. The proposed method can retrieve similar images even when the objects are translated, scaled, or rotated.
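
The VQ clustering step can be sketched with a few iterations of plain k-means, a common way to train a VQ codebook. The 3 color + 5 texture features of the paper are simplified here to toy 2-D features, and the initial centers are chosen by hand for illustration.

```python
def kmeans(points, centers, iters=10):
    """Plain k-means: assign points to nearest center, then recompute means."""
    groups = [[] for _ in centers]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            j = min(range(len(centers)),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            groups[j].append(p)
        # Recompute each center as the mean of its group (keep empty groups' centers).
        centers = [[sum(p[d] for p in g) / len(g) for d in range(len(g[0]))]
                   if g else c for g, c in zip(groups, centers)]
    return centers, groups

pts = [[0, 0], [0, 1], [10, 10], [10, 11]]
centers, groups = kmeans(pts, [[0.0, 0.0], [10.0, 10.0]])
print(centers)  # centroids settle at [0, 0.5] and [10, 10.5]
```

The dominant groups' centroids would then serve as the representative feature table against which query-image objects are matched.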