• Title/Summary/Keyword: Patch image


Camera Source Identification of Digital Images Based on Sample Selection

  • Wang, Zhihui;Wang, Hong;Li, Haojie
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.7, pp.3268-3283, 2018
  • With the advent of the Information Age, the source identification of digital images, as a part of digital image forensics, has attracted increasing attention, and an effective technique for identifying the source of digital images is urgently needed. In this paper, we first study and implement previous work on image source identification based on sensor pattern noise, such as the Lukas method, the principal component analysis method, and the random subspace method. Second, to extract a purer sensor pattern noise, we propose a sample selection method that improves the random subspace method. By analyzing the image texture, we select patches with lower complexity to extract more reliable sensor pattern noise, which improves the accuracy of identification. Finally, experimental results reveal that the proposed sample selection method extracts a purer sensor pattern noise and further improves the accuracy of image source identification. At the same time, the approach is less complex than deep learning models while achieving performance close to the state of the art.
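
A minimal sketch of the general idea described in this abstract, not the paper's exact pipeline: rank fixed-size patches of a grayscale image by a simple texture measure (per-patch variance, used here as a stand-in for the paper's texture feature), keep the smoothest ones, and take the residual against a denoised version as the sensor pattern noise estimate. Patch size, keep ratio, and the Gaussian denoiser are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def extract_pattern_noise(image, patch_size=64, keep_ratio=0.2, sigma=1.0):
    """Estimate sensor pattern noise from low-complexity patches of a
    grayscale image. Texture complexity is approximated by per-patch
    variance; the paper's actual texture feature and denoiser may differ."""
    h, w = image.shape
    patches = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patches.append(image[y:y + patch_size, x:x + patch_size].astype(np.float64))
    # Rank patches by texture complexity and keep the smoothest ones.
    patches.sort(key=lambda p: p.var())
    selected = patches[:max(1, int(len(patches) * keep_ratio))]
    # Noise residual: patch minus a denoised (low-pass) version of itself.
    residuals = [p - gaussian_filter(p, sigma) for p in selected]
    # Average residuals to suppress scene content and keep the sensor fingerprint.
    return np.mean(residuals, axis=0)
```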

A Fast and Efficient Sliding Window based URV Decomposition Algorithm for Template Tracking (템플릿 추적 문제를 위한 효율적인 슬라이딩 윈도우 기반 URV Decomposition 알고리즘)

  • Lee, Geunseop
    • Journal of Korea Multimedia Society, v.22 no.1, pp.35-43, 2019
  • Template tracking refers to finding the image patch most similar to a given template throughout an image sequence. To obtain a more accurate trajectory, the template needs to be updated to reflect appearance changes as it traverses the sequence. To do this, appearance images are used to model appearance variations, and they are obtained by computing the principal components of the augmented image matrix at every iteration. Unfortunately, recomputing the principal components at every iteration is prohibitively expensive. Thus, in this paper we suggest a new sliding window based truncated URV decomposition (TURVD) algorithm that updates the decomposition by recycling the previous one instead of decomposing the image matrix from scratch. Specifically, we show an efficient algorithm for updating and downdating the TURVD simultaneously, followed by a rank-one update to the TURVD, while tracking the decomposition error accurately and adjusting the truncation level adaptively. Experiments show that the proposed algorithm produces no meaningful difference in accuracy but much faster execution than typical algorithms in template tracking applications, while maintaining a good approximation of the principal components.
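
To make the cost the paper avoids concrete, the sketch below shows the naive baseline described in the abstract: recomputing the principal components of the augmented image matrix from scratch at every frame. It is not the proposed sliding-window TURVD update; the window length and rank k are illustrative.

```python
import numpy as np

def update_appearance_model_naive(patch_history, new_patch, k=5, window=30):
    """Recompute the top-k appearance images from scratch each frame.

    patch_history: list of flattened template-sized patches (1-D arrays).
    This full recomputation is what an incremental sliding-window
    URV/SVD update is designed to replace."""
    patch_history.append(new_patch.ravel().astype(np.float64))
    if len(patch_history) > window:          # sliding window over recent frames
        patch_history.pop(0)
    A = np.column_stack(patch_history)       # augmented image matrix
    mean = A.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(A - mean, full_matrices=False)
    return U[:, :k], mean                    # appearance images + mean template
```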

Camera pose estimation framework for array-structured images

  • Shin, Min-Jung;Park, Woojune;Kim, Jung Hee;Kim, Joonsoo;Yun, Kuk-Jin;Kang, Suk-Ju
    • ETRI Journal, v.44 no.1, pp.10-23, 2022
  • Despite significant progress in camera pose estimation and structure-from-motion reconstruction from unstructured images, methods that exploit a priori information on camera arrangements have been overlooked. Conventional state-of-the-art methods do not exploit the geometric structure to recover accurate camera poses from a set of patch images arranged in an array for mosaic-based imaging, which creates a wide field-of-view image by stitching together a collection of regular images. We propose a camera pose estimation framework that exploits the array-structured image setting in each incremental reconstruction step. It consists of two-way registration, 3D point outlier elimination, and bundle adjustment with a constraint term for consistent rotation vectors that reduces reprojection errors during optimization. We demonstrate that by using the connected structure of the individual images at different camera pose estimation steps, we can estimate camera poses more accurately on all structured mosaic-based image sets, including omnidirectional scenes.
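
The kind of objective described above can be sketched as a reprojection-error term plus a penalty that keeps rotation vectors of neighboring cameras in the array consistent. This is an illustration under stated assumptions, not the paper's exact formulation: the weight lam, the neighbor pairs, and the use of Rodrigues rotation vectors with cv2.projectPoints are choices made for the example.

```python
import numpy as np
import cv2

def array_ba_cost(points_3d, observations, rvecs, tvecs, K, neighbors, lam=0.1):
    """Bundle-adjustment style cost: reprojection error plus a rotation
    consistency penalty over neighboring cameras in the array.

    observations[i]: (N, 2) image points seen by camera i;
    neighbors: list of (i, j) index pairs of adjacent cameras in the array."""
    cost = 0.0
    for i, obs in enumerate(observations):
        proj, _ = cv2.projectPoints(points_3d, rvecs[i], tvecs[i], K, None)
        cost += np.sum((proj.reshape(-1, 2) - obs) ** 2)      # reprojection error
    for i, j in neighbors:                                     # constraint term for
        cost += lam * np.sum((rvecs[i] - rvecs[j]) ** 2)       # consistent rotations
    return cost
```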

Analysis of Image Integration Methods for Applying of Multiresolution Satellite Images (다중 위성영상 활용을 위한 영상 통합 기법 분석)

  • Lee Jee Kee;Han Dong Seok
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.22 no.4, pp.359-365, 2004
  • Data integration techniques are becoming increasingly important for overcoming the limitations of a single data source. Two lines of research have been pursued: image fusion, which improves the spatial and spectral resolution of a set of images with different spatial and spectral resolutions, and image registration, which aligns two images so that corresponding coordinates refer to the same physical region of the imaged scene. In this paper, we compared six image fusion methods (Brovey, IHS, PCA, HPF, CN, and MWD) on panchromatic and multispectral IKONOS images and developed a registration method for SPOT-5 and RADARSAT SAR satellite images. The tests on image fusion and image registration showed that the MWD and HPF methods gave good results in both visual comparison and statistical analysis. We could also extract patches depicting detailed topographic information from SPOT-5 and RADARSAT and obtained encouraging registration results.
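
Of the six fusion methods compared, the Brovey transform is the simplest to state, so a short sketch may help; band ordering, resampling, and the stabilizing epsilon are assumptions of this example rather than details from the paper.

```python
import numpy as np

def brovey_fusion(ms, pan, eps=1e-6):
    """Brovey transform pan-sharpening.

    ms:  (H, W, B) low-resolution multispectral bands resampled to the
         panchromatic grid.
    pan: (H, W) high-resolution panchromatic band.
    Each fused band is the original band scaled by pan / intensity, where
    intensity is the per-pixel mean of the multispectral bands."""
    ms = ms.astype(np.float64)
    intensity = ms.mean(axis=2) + eps
    return ms * (pan / intensity)[..., None]
```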

A Comparison of Deep Reinforcement Learning and Deep learning for Complex Image Analysis

  • Khajuria, Rishi;Quyoom, Abdul;Sarwar, Abid
    • Journal of Multimedia Information System, v.7 no.1, pp.1-10, 2020
  • Image analysis is an important and predominant task for classifying the different parts of an image. The analysis of complex images such as histopathological images is a crucial factor in oncology because of its ability to help pathologists interpret images, and various feature extraction techniques have therefore evolved over time for such analysis. Although deep reinforcement learning is a new and emerging technique, little effort has been made to compare deep learning and deep reinforcement learning for image analysis. This paper highlights how the two techniques differ in feature extraction from complex images and discusses their potential pros and cons. Convolutional Neural Networks (CNNs) are important for image segmentation, tumour detection and diagnosis, and feature extraction, but several challenges must be overcome before deep learning can be applied to digital pathology: the availability of sufficient training examples for medical image datasets, feature extraction from the whole image area, ground-truth localized annotations, adversarial effects of input representations, and the extremely large size of digital pathology slides (in gigabytes). Formulating Histopathological Image Analysis (HIA) as a Multiple Instance Learning (MIL) problem, in which the histopathological image is divided into high-resolution patches, predictions are made per patch, and the patch predictions are combined into an overall slide prediction, is a remarkable step, but it suffers from loss of contextual and spatial information. In such cases, deep reinforcement learning techniques can be used to learn features from limited data without losing contextual and spatial information.
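
The MIL formulation mentioned above can be sketched as: tile the slide into high-resolution patches, score each patch with a trained classifier, and pool the patch scores into a slide-level prediction. The patch classifier, patch size, and max-pooling aggregation rule below are placeholders for illustration, not the specific setup discussed in the paper.

```python
def slide_prediction_mil(slide, patch_classifier, patch_size=256, stride=256):
    """Multiple-instance learning style aggregation for a whole-slide image.

    slide: (H, W, 3) image array; patch_classifier: callable mapping a patch
    to a tumour probability (e.g. a trained CNN). Max pooling over patch
    scores is one common MIL aggregation rule; mean or attention pooling
    are alternatives."""
    h, w, _ = slide.shape
    scores = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            scores.append(patch_classifier(slide[y:y + patch_size, x:x + patch_size]))
    # The slide is called positive if its most suspicious patch is positive.
    return max(scores)
```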

Rearranged DCT Feature Analysis Based on Corner Patches for CBIR (contents based image retrieval) (CBIR을 위한 코너패치 기반 재배열 DCT특징 분석)

  • Lee, Jimin;Park, Jongan;An, Youngeun;Oh, Sangeon
    • The Transactions of The Korean Institute of Electrical Engineers, v.65 no.12, pp.2270-2277, 2016
  • In modern society, multimedia contents are actively created and distributed. An enormous amount of multimedia information is produced daily, far larger than the text data of the past. As the need to store multimedia information efficiently and search it easily has grown, various related methods have been actively studied. In particular, image search methods for finding what one wants in a video database or in multiple sequential images have attracted attention as a new field of image processing. The image retrieval method implemented in this paper uses the attributes of corner patches built around the corner points of an object to provide a new, efficient, and robust image search. After detecting the edges of the object in the image, straight lines are extracted using a Hough transform. Corner patches are formed by defining the intersections of the extracted lines as corner points. Feature vectors are then configured from the rearranged patches, and the similarity between images in the database is measured. Finally, for an accurate comparison between the proposed algorithm and existing algorithms, performance was evaluated with the recall-precision rate, which is widely used in content-based image retrieval. For the images used in the experiment, the proposed method detected the correct images more accurately than conventional image retrieval methods.
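
A rough OpenCV-based sketch of the feature extraction described here: Canny edges, Hough lines, pairwise line intersections as corner points, fixed-size patches around them, and DCT coefficients of each patch as features. The thresholds, patch size, and the simple raster-order truncation of DCT coefficients (standing in for the paper's rearrangement step) are assumptions of this example.

```python
import numpy as np
import cv2

def corner_patch_dct_features(gray, patch=16, n_coeffs=32):
    """Corner-patch DCT features: Hough-line intersections define corner
    points, and low-frequency DCT coefficients of the surrounding patches
    form the feature vector."""
    edges = cv2.Canny(gray, 100, 200)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)
    if lines is None:
        return np.zeros(0)
    lines = lines[:, 0, :]                       # (rho, theta) pairs
    feats, half = [], patch // 2
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            (r1, t1), (r2, t2) = lines[i], lines[j]
            A = np.array([[np.cos(t1), np.sin(t1)], [np.cos(t2), np.sin(t2)]])
            if abs(np.linalg.det(A)) < 1e-6:     # near-parallel lines: no corner
                continue
            x, y = np.linalg.solve(A, [r1, r2]).astype(int)
            if half <= x < gray.shape[1] - half and half <= y < gray.shape[0] - half:
                p = gray[y - half:y + half, x - half:x + half].astype(np.float32)
                feats.append(cv2.dct(p).ravel()[:n_coeffs])   # low-frequency DCT terms
    return np.concatenate(feats) if feats else np.zeros(0)
```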

Robust Facial Expression Recognition Based on Local Directional Pattern

  • Jabid, Taskeed;Kabir, Md. Hasanul;Chae, Oksam
    • ETRI Journal, v.32 no.5, pp.784-794, 2010
  • Automatic facial expression recognition has many potential applications in different areas of human-computer interaction. However, they are not yet fully realized due to the lack of an effective facial feature descriptor. In this paper, we present a new appearance-based feature descriptor, the local directional pattern (LDP), to represent facial geometry and analyze its performance in expression recognition. An LDP feature is obtained by computing the edge response values in eight directions at each pixel and encoding them into an 8-bit binary number using the relative strength of these edge responses. The LDP descriptor, a distribution of LDP codes within an image or image patch, is used to describe each expression image. The effectiveness of dimensionality reduction techniques, such as principal component analysis and AdaBoost, is also analyzed in terms of computational cost saving and classification accuracy. Two well-known machine learning methods, template matching and support vector machines, are used for classification on the Cohn-Kanade and Japanese Female Facial Expression databases. The better classification accuracy shows the superiority of the LDP descriptor over other appearance-based feature descriptors.
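
The per-pixel LDP code can be sketched directly from the description above: compute the eight directional edge responses, then set a bit for the k strongest of them. The Kirsch compass masks and k = 3 used below are the usual choices for LDP but are assumptions here, since the abstract does not fix them.

```python
import numpy as np
from scipy.ndimage import convolve

# Kirsch edge mask for the east direction; the other 7 are rotations of it.
KIRSCH_EAST = np.array([[-3, -3, 5],
                        [-3,  0, 5],
                        [-3, -3, 5]])

def kirsch_masks():
    m, masks = KIRSCH_EAST.copy(), []
    for _ in range(8):
        masks.append(m.copy())
        # Rotate the border values one step to get the next compass direction.
        m = np.array([[m[0, 1], m[0, 2], m[1, 2]],
                      [m[0, 0], 0,       m[2, 2]],
                      [m[1, 0], m[2, 0], m[2, 1]]])
    return masks

def ldp_code(gray, k=3):
    """Local Directional Pattern: set a bit for the k strongest of the 8
    directional edge responses at each pixel."""
    responses = np.stack([np.abs(convolve(gray.astype(np.float64), m))
                          for m in kirsch_masks()])
    order = np.argsort(responses, axis=0)              # weakest ... strongest
    code = np.zeros(gray.shape, dtype=np.uint8)
    for r in range(1, k + 1):
        code |= (1 << order[-r]).astype(np.uint8)      # bit of the r-th strongest
    return code   # histogram code values per block to build the LDP descriptor
```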

Reconstruction of Head Surface based on Cross Sectional Contours (단면 윤곽선을 기반으로 한 두부표면의 재구성)

  • 한영환;성현경;홍승홍
    • Journal of Biomedical Engineering Research, v.18 no.4, pp.365-373, 1997
  • In this paper, a new method of 3D image reconstruction is proposed to build a 3D image from 2D images using digital image processing techniques and computer graphics. First, a new feature extraction algorithm is adopted that requires no additional input parameters and is not affected by a threshold; it extracts feature points by eliminating undesirable points on the basis of connectivity. Second, as the cost function for reconstructing surfaces from the extracted feature points, the minimum distance measure between two planar images is adopted. With this measure, the surface formation algorithm needs no complex calculation, and the surface elements take the form of triangles or trapezoids. To investigate its usefulness, the approach was applied to a head CT image and compared with other methods. Experimental comparisons show that the suggested algorithm yields better feature extraction performance than the others, and, in contrast with the other methods, the complex calculation for surface formation is unnecessary.
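
One simple reading of the minimum-distance surface formation described above is a greedy stitch between adjacent contours: advance along whichever contour gives the shorter connecting edge, emitting a triangle at each step. The sketch below follows that reading; the paper's exact measure and handling of trapezoids may differ.

```python
import numpy as np

def stitch_contours(c1, c2):
    """Form triangles between two adjacent cross-sectional contours.

    c1, c2: (N, 3) and (M, 3) arrays of ordered contour points from two
    neighbouring slices. At each step the next vertex comes from whichever
    contour yields the shorter connecting edge, so no global optimization
    is required."""
    i, j, triangles = 0, 0, []
    while i < len(c1) - 1 or j < len(c2) - 1:
        adv1 = np.linalg.norm(c1[min(i + 1, len(c1) - 1)] - c2[j])
        adv2 = np.linalg.norm(c2[min(j + 1, len(c2) - 1)] - c1[i])
        if (adv1 <= adv2 and i < len(c1) - 1) or j == len(c2) - 1:
            triangles.append((c1[i], c1[i + 1], c2[j]))   # advance on contour 1
            i += 1
        else:
            triangles.append((c1[i], c2[j], c2[j + 1]))   # advance on contour 2
            j += 1
    return triangles
```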


A Study for Individual Identification by Discriminating the Finger Face Image (손가락 면 영상 판별에 의한 개인 식별 연구)

  • Kim, Hee-Sung;Bae, Byung-Kyu
    • Journal of Korea Multimedia Society, v.13 no.3, pp.378-391, 2010
  • In this paper, we test whether an individual can be identified from finger face images and present the results. Special operators, FFG (Facet Function Gradient) masks, with which the gradient of a facet function fitted to the gray levels of image patches can be computed, are used, and a new procedure named the F-algorithm is introduced to match the finger face images. With this algorithm, the finger face image is divided into equal sub-regions, and each sub-region is divided into equal patches. The FFG masks are convolved with each patch to produce scalar values. These values form a feature matrix, and the identity of the fingers is determined by a norm over the elements of the feature matrices. The distribution of the norms shows conspicuous differences between pairs of hand images of the same person and pairs from different persons, which demonstrates the discriminative power of the finger face image. An identification rate of 95.0% was obtained in a test in which 500 hand images taken from 100 persons were processed with the F-algorithm. This confirms that the finger face is as good a biometric as other hand parts, owing to its discriminative power and identification rate.
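
A heavily hedged sketch of the patch-wise feature matrix described above: the abstract does not give the FFG mask values or the F-algorithm parameters, so the masks below are the standard 3x3 least-squares plane-fit (facet model) gradient masks, and the grid layout and gradient-magnitude output are illustrative assumptions.

```python
import numpy as np

# Gradient masks from a least-squares plane fit (facet model) over a 3x3
# neighbourhood; the paper's actual FFG masks may differ.
FFG_ROW = np.array([[-1, -1, -1],
                    [ 0,  0,  0],
                    [ 1,  1,  1]]) / 6.0
FFG_COL = FFG_ROW.T

def finger_feature_matrix(region, grid=(4, 4)):
    """Split a finger-face sub-region into equal patches and apply the FFG
    masks at each patch centre to produce one scalar per patch."""
    h, w = region.shape
    ph, pw = h // grid[0], w // grid[1]
    feats = np.zeros(grid)
    for r in range(grid[0]):
        for c in range(grid[1]):
            patch = region[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw].astype(np.float64)
            centre = patch[ph // 2 - 1:ph // 2 + 2, pw // 2 - 1:pw // 2 + 2]
            gr = np.sum(centre * FFG_ROW)          # row-direction gradient
            gc = np.sum(centre * FFG_COL)          # column-direction gradient
            feats[r, c] = np.hypot(gr, gc)
    # Identity is then decided by a norm of the difference between two
    # such feature matrices.
    return feats
```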

An Efficient CT Image Denoising using WT-GAN Model

  • Hae Chan Jeong;Dong Hoon Lim
    • Journal of the Korea Society of Computer and Information, v.29 no.5, pp.21-29, 2024
  • Reducing the radiation dose during CT scanning lowers the risk of radiation exposure, but image quality deteriorates significantly and diagnostic effectiveness is reduced because of the noise that is generated. Noise removal from CT images is therefore an essential step in image restoration. Until now, methods that separate the noise from the original signal in the image domain have had limited ability to remove only the noise. In this paper, we aim to remove noise from CT images effectively using a wavelet transform based GAN model operating in the frequency domain, the WT-GAN model. The GAN model generates denoised images through a U-Net structured generator and a PatchGAN structured discriminator. To evaluate the performance of the proposed WT-GAN model, experiments were conducted on CT images corrupted by various types of noise, namely Gaussian, Poisson, and speckle noise. The results show that the WT-GAN model outperforms a traditional filter (BM3D) as well as existing deep learning models such as DnCNN, the CDAE model, and the U-Net GAN model in both qualitative and quantitative measures, namely PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index Measure).
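
Since the abstract names a PatchGAN structured discriminator, a minimal PyTorch sketch of that component may help; the channel counts and depth are typical PatchGAN choices rather than the paper's configuration, and the wavelet transform and U-Net generator are omitted here.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """PatchGAN-style discriminator: instead of one real/fake score per
    image, it outputs a grid of scores, each judging one local patch of the
    (possibly wavelet-transformed) CT image."""

    def __init__(self, in_channels=1, base=64):
        super().__init__()

        def block(c_in, c_out, norm=True):
            layers = [nn.Conv2d(c_in, c_out, 4, stride=2, padding=1)]
            if norm:
                layers.append(nn.BatchNorm2d(c_out))
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            return layers

        self.net = nn.Sequential(
            *block(in_channels, base, norm=False),
            *block(base, base * 2),
            *block(base * 2, base * 4),
            nn.Conv2d(base * 4, 1, 4, stride=1, padding=1),  # per-patch score map
        )

    def forward(self, x):
        return self.net(x)

# Example: a 256x256 single-channel image yields a 31x31 grid of patch scores.
# scores = PatchDiscriminator()(torch.randn(1, 1, 256, 256))
```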