• Title/Summary/Keyword: Reconstruction error

Person Re-identification using Sparse Representation with a Saliency-weighted Dictionary

  • Kim, Miri;Jang, Jinbeum;Paik, Joonki
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.6 no.4
    • /
    • pp.262-268
    • /
    • 2017
  • Intelligent video surveillance systems have been developed to monitor wide areas and find specific target objects using a large-scale database. However, person re-identification presents some challenges, such as pose change and occlusions. To solve these problems, this paper presents an improved person re-identification method using sparse representation and saliency-based dictionary construction. The proposed method consists of three parts: i) feature description based on salient colors and textures for dictionary elements, ii) orthogonal atom selection using cosine similarity to deal with pose and viewpoint change, and iii) measurement of reconstruction error to rank the gallery corresponding to a probe object. The proposed method provides good performance, since robust descriptors used as dictionary atoms are generated by weighting salient features, and dictionary atoms are selected so as to reduce the excessive redundancy that causes low accuracy. Therefore, the proposed method can be applied in a large-scale database surveillance system to search for a specific object. A rough sketch of the ranking step appears below.
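
As an illustration of the ranking idea, the following sketch selects dictionary atoms by cosine similarity and scores a probe by its reconstruction error. The feature sizes are hypothetical, and a plain least-squares fit over the selected atoms stands in for the paper's sparse coding:

```python
import numpy as np

def reconstruction_error(probe, dictionary, k=5):
    """Pick the k atoms most similar to the probe (cosine similarity),
    then measure how well a least-squares combination of them
    reconstructs the probe. Lower error = better gallery match."""
    D = dictionary / np.linalg.norm(dictionary, axis=0, keepdims=True)
    p = probe / np.linalg.norm(probe)
    idx = np.argsort(D.T @ p)[-k:]            # k most similar atoms
    coeffs, *_ = np.linalg.lstsq(D[:, idx], p, rcond=None)
    return np.linalg.norm(p - D[:, idx] @ coeffs)

# Hypothetical usage: 64-dim saliency-weighted features, 100 gallery atoms.
rng = np.random.default_rng(0)
gallery = rng.standard_normal((64, 100))
probe = rng.standard_normal(64)
print(reconstruction_error(probe, gallery))
```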

A STUDY ON THE DIMENSIONAL ACCURACY OF MODELS USING 3-DIMENSIONAL COMPUTER TOMOGRAPHY AND 2 RAPID PROTOTYPING METHODS

  • Cho Lee-Ra;Park Chan-Jin;Park In-Woo
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.39 no.6
    • /
    • pp.633-640
    • /
    • 2001
  • Statement of problem. The relatively low success rate of root analogue implant systems was presumed to be due to the time elapsed between extraction and implant installation. The use of three-dimensional computer tomography and the reconstruction of objects using rapid prototyping methods could help shorten this time. Purpose. The aim of this study was to evaluate the applicability of 3-dimensional computer tomography and rapid prototyping to root analogue implants. Material and methods. Ten single-rooted teeth were prepared. The width and height of the teeth were measured at marking points. This was followed by CT scanning, data conversion, and rapid prototyping model fabrication. Two methods were used: fused deposition modelling and stereolithography. The same width and height of these models were measured and compared to the original tooth. Results. Fused deposition modelling showed an enlarged width and a reduced height. Stereolithography produced more accurate data than fused deposition modelling, and smaller standard deviations were recorded with the stereolithographic method. The overall width error from tooth to rapid prototyping model was 7.15% for fused deposition modelling and 0.2% for stereolithography. Overall height showed a tendency toward reduced dimensions. Conclusion. From the results of this study, stereolithography seems to be a very predictable method for fabricating root analogue implants.
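
The dimensional comparison reduces to a signed percent-error computation; a minimal sketch with illustrative measurements (not the study's data):

```python
def percent_error(model_mm, original_mm):
    # Signed percent dimensional error of a rapid-prototyped model
    # relative to the original tooth dimension.
    return 100.0 * (model_mm - original_mm) / original_mm

# Illustrative values only:
print(percent_error(model_mm=8.25, original_mm=8.00))    # enlarged width -> +3.125
print(percent_error(model_mm=11.70, original_mm=12.00))  # reduced height -> -2.5
```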

Earthquake time-frequency analysis using a new compatible wavelet function family

  • Moghaddam, Amir Bazrafshan;Bagheripour, Mohammad H.
    • Earthquakes and Structures
    • /
    • v.3 no.6
    • /
    • pp.839-852
    • /
    • 2012
  • Earthquake records are often analyzed in various earthquake engineering problems, making time-frequency analysis of such records a primary concern. The best tools for such analysis appear to be based on wavelet functions, the selection of which is not an easy task and is commonly carried out through a trial-and-error process. Furthermore, a particular wavelet is often adopted for the analysis of various earthquakes irrespective of a record's prime characteristics, e.g., the wave's magnitude. A wavelet constructed from a record's characteristics may yield a more accurate solution and a more efficient solution procedure in time-frequency analysis. In this study, a low-pass reconstruction filter is obtained for each earthquake record based on the multi-resolution decomposition technique; the filter is taken to be the last approximation component normalized with respect to its magnitude. The scaling and wavelet functions are then computed using the two-scale relations. The calculated wavelets are highly efficient in decomposing the original records compared to other commonly used wavelets such as the Daubechies2 wavelet. The method is further advantageous in that it enables one to decompose the original record in such a way that a clear time-frequency resolution is obtained.
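
A loose sketch of the construction, assuming PyWavelets; the quadrature-mirror step and time-reversed analysis filters are orthogonal-case assumptions, whereas the paper derives the scaling and wavelet functions from the two-scale relations directly:

```python
import numpy as np
import pywt

def record_adapted_wavelet(record, level=4):
    # Per the paper's idea: take the final approximation of a
    # multi-resolution decomposition and normalize it by its magnitude
    # to obtain a record-specific low-pass reconstruction filter.
    approx = pywt.wavedec(record, 'db2', level=level)[0]
    rec_lo = approx / np.linalg.norm(approx)
    rec_hi = pywt.qmf(rec_lo)                     # high-pass via QMF (assumption)
    dec_lo, dec_hi = rec_lo[::-1], rec_hi[::-1]   # orthogonal-case analysis filters
    return pywt.Wavelet('adapted', filter_bank=[dec_lo, dec_hi, rec_lo, rec_hi])

# Hypothetical usage on a synthetic "record":
t = np.linspace(0, 10, 2048)
record = np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.2 * t)
w = record_adapted_wavelet(record)
cA, cD = pywt.dwt(record, w)   # decompose with the record-adapted wavelet
```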

A Study on Optimum Subband Filter Bank Design Using Vector Quantizer (벡터 양자화기를 사용한 최적의 부대역 필터 뱅크 구현에 관한 연구)

  • Jee, Innho
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.17 no.1
    • /
    • pp.107-113
    • /
    • 2017
  • This paper provides a new approach to the modeling of vector quantizers (VQ), followed by the analysis and design of subband codecs with embedded VQs. We compute the mean squared reconstruction error (MSE), which depends on N, the number of entries in each codebook; k, the length of each codeword; and the filter bank (FB) coefficients of the subband codec. We show that the optimum M-band filter bank structure in the presence of pdf-optimized vector quantizers can be designed by a suitable choice of equivalent scalar quantizer parameters. Specific design examples have been developed for two different classes of filter banks, paraunitary and biorthogonal, in the two-channel case. These theoretical results are confirmed by Monte Carlo simulation.
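
A minimal Monte Carlo-style sketch of the two-channel setup, assuming a Haar (paraunitary) bank and a k-means-trained codebook standing in for a true pdf-optimized VQ; N and k play the roles named in the abstract:

```python
import numpy as np
from scipy.cluster.vq import kmeans, vq

def subband_vq_mse(x, N=16, k=2):
    # 2-channel paraunitary (Haar) analysis.
    lo = (x[0::2] + x[1::2]) / np.sqrt(2)
    hi = (x[0::2] - x[1::2]) / np.sqrt(2)

    def quantize(band):
        # Group samples into length-k codewords; quantize with an
        # N-entry trained codebook.
        vecs = band[: band.size // k * k].reshape(-1, k)
        codebook, _ = kmeans(vecs, N)
        codes, _ = vq(vecs, codebook)
        return codebook[codes].reshape(-1)

    lo_q, hi_q = quantize(lo), quantize(hi)
    n = min(lo_q.size, hi_q.size)
    y = np.empty(2 * n)
    y[0::2] = (lo_q[:n] + hi_q[:n]) / np.sqrt(2)   # Haar synthesis
    y[1::2] = (lo_q[:n] - hi_q[:n]) / np.sqrt(2)
    return np.mean((x[: y.size] - y) ** 2)         # mean squared reconstruction error

rng = np.random.default_rng(1)
print(subband_vq_mse(rng.standard_normal(4096), N=16, k=2))
```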

Image Coding by Block Based Fractal Approximation (블록단위의 프래탈 근사화를 이용한 영상코딩)

  • 정현민;김영규;윤택현;강현철;이병래;박규태
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.2
    • /
    • pp.45-55
    • /
    • 1994
  • In this paper, a block-based image approximation technique using the Self-Affine System (SAS) from fractal theory is suggested. Each block of an image is divided into 4 tiles, and 4 affine mapping coefficients are found for each tile. To find the affine mapping coefficients that minimize the error between the affine-transformed image block and the reconstructed image block, the matrix equation is solved by setting each partial derivative to zero. To ensure the convergence of a coding block, 4 uniformly partitioned affine transformations are applied. A variable block size technique is employed in order to exploit the natural image reconstruction property of fractal image coding. Large blocks are used for encoding smooth backgrounds to yield high compression efficiency, while texture and edge blocks are divided into smaller blocks to preserve block detail. Affine mapping coefficients are found for each block of 16×16, 8×8, or 4×4 size. Each block is classified as shade, texture, or edge. The average gray level is transmitted for shade blocks, and coefficients are found for texture and edge blocks. Coefficients are quantized, and only 16 bytes per block are transmitted. Using the proposed algorithm, the computational load increases linearly in proportion to image size. A PSNR of 31.58 dB is obtained for the 512×512, 8-bit-per-pixel Lena image.
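
The per-tile fit is a small least-squares problem; a sketch of the standard contrast/brightness solution obtained, as the abstract describes, by setting the partial derivatives to zero (variable names are illustrative):

```python
import numpy as np

def affine_fit(domain, range_block):
    # Contrast s and brightness o minimizing ||s*d + o - r||^2; the
    # closed form follows from setting d/ds and d/do of the error to zero.
    d = domain.ravel().astype(float)
    r = range_block.ravel().astype(float)
    n = d.size
    denom = n * (d @ d) - d.sum() ** 2
    s = (n * (d @ r) - d.sum() * r.sum()) / denom if denom else 0.0
    o = (r.sum() - s * d.sum()) / n
    mse = np.mean((s * d + o - r) ** 2)
    return s, o, mse

# Hypothetical 4x4 tiles:
rng = np.random.default_rng(3)
dom = rng.integers(0, 256, (4, 4))
ran = 0.6 * dom + 20 + rng.standard_normal((4, 4))
print(affine_fit(dom, ran))   # s ~ 0.6, o ~ 20, small error
```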

ECG Compression Structure Design Using of Multiple Wavelet Basis Functions (다중웨이브렛 기저함수를 이용한 심전도 압축구조설계)

  • Kim Tae-hyung;Kwon Chang-Young;Yoon Dong-Han
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.9 no.3
    • /
    • pp.467-472
    • /
    • 2005
  • ECG signals are recorded for diagnostic purposes in many clinical situations. In order to permit good clinical interpretation, data are needed at high resolutions and sampling rates. In this paper, we therefore designed a compression structure using multiple wavelet basis functions (MWBF) and compared it to a single wavelet basis function (SWBF) and the discrete cosine transform (DCT). For objectivity, simulations were performed using arrhythmia data from the MIT-BIH database, with a sampling frequency of 360 Hz and 11-bit resolution. Performance was evaluated in terms of the reconstruction error. Consequently, the compression structure using MWBF shows high performance.
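
A generic sketch of wavelet-based compression with a reconstruction-error readout, assuming PyWavelets, a db4 basis, and hard thresholding; the percent RMS difference (PRD), a standard ECG error measure, stands in for the paper's metric:

```python
import numpy as np
import pywt

def compress_prd(x, wavelet='db4', level=5, keep=0.10):
    # Decompose, keep only the largest `keep` fraction of coefficients,
    # reconstruct, and report the percent RMS difference (PRD).
    coeffs = pywt.wavedec(x, wavelet, level=level)
    flat = np.concatenate(coeffs)
    thresh = np.quantile(np.abs(flat), 1.0 - keep)
    coeffs = [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs]
    y = pywt.waverec(coeffs, wavelet)[: x.size]
    return 100.0 * np.linalg.norm(x - y) / np.linalg.norm(x)

# Hypothetical 360 Hz test signal standing in for an MIT-BIH record:
t = np.arange(0, 10, 1 / 360)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 15 * t)
print(compress_prd(ecg))
```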

Image deblurring via adaptive proximal conjugate gradient method

  • Pan, Han;Jing, Zhongliang;Li, Minzhe;Dong, Peng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.11
    • /
    • pp.4604-4622
    • /
    • 2015
  • It is not easy to reconstruct the geometrical characteristics of distorted images captured by imaging devices. One of the most popular optimization methods is the fast iterative shrinkage/thresholding algorithm (FISTA). In this paper, to deal with its approximation error and the instability of the descent process, an adaptive proximal conjugate gradient (APCG) framework is proposed. It contains three stages. In the first stage, a series of adaptive penalty matrices is generated from iterate to iterate. Second, to trade off the reconstruction accuracy against the computational complexity of the resulting sub-problem, a practical solution is presented, characterized by solving the variable ellipsoidal-norm-based sub-problem by exploiting the structure of the problem. Third, a correction step is introduced to improve the estimation accuracy. Numerical experiments with the proposed algorithm, in comparison to favorable state-of-the-art methods, demonstrate the advantages of the proposed method and its potential.
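
For orientation, a baseline proximal-gradient (ISTA) iteration for deblurring with a known circular blur kernel; the APCG framework replaces the fixed step with adaptive ellipsoidal-norm sub-problems and a correction step, which this sketch does not implement:

```python
import numpy as np

def ista_deblur(y, h, lam=0.01, iters=100):
    # Gradient step on 0.5*||H x - y||^2 followed by an l1 soft-threshold
    # (the proximal step), with the blur applied as circular convolution.
    H = np.fft.fft2(h, s=y.shape)
    L = np.max(np.abs(H)) ** 2                 # Lipschitz constant of the gradient
    x = y.copy()
    for _ in range(iters):
        resid = np.real(np.fft.ifft2(H * np.fft.fft2(x))) - y
        grad = np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(resid)))
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

# Hypothetical usage: blur a test image with a 5x5 box kernel, then restore.
rng = np.random.default_rng(4)
img = rng.random((64, 64))
h = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(h, s=img.shape) * np.fft.fft2(img)))
restored = ista_deblur(blurred, h)
```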

A Precise Projectile Trajectory Registration Algorithm Based on Weighted PDOP (PDOP 가중치 기반 정밀 탄궤적 정합 알고리즘)

  • Shin, Seok-Hyun;Kim, Jong-Ju
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.44 no.6
    • /
    • pp.502-511
    • /
    • 2016
  • Recently, many kinds of smart projectiles are being developed. A smart projectile, as studied here, uses navigation data acquired from a GNSS receiver to determine its location in geocentric (WGS84) coordinates and to estimate the point of impact (P.O.I.). However, because of various error-inducing factors, the positioning results involve some error. We introduce an advanced algorithm for the reconstruction of a navigation trajectory using weighted PDOP, based on a simulated trajectory acquired from PRODAS. It is very fast, robust to noise, and shows reliable output. It can be widely used to estimate the actual trajectory of a projectile.
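
A minimal sketch of the weighting idea: GNSS fixes with high PDOP are less trustworthy, so down-weight them in a least-squares fit. A polynomial stands in for the paper's PRODAS-based trajectory model, and all values are illustrative:

```python
import numpy as np

def weighted_trajectory_fit(t, z, pdop, deg=4):
    # Weight each fix by 1/PDOP so poorly conditioned fixes contribute less.
    return np.polynomial.Polynomial.fit(t, z, deg, w=1.0 / np.asarray(pdop))

# Hypothetical altitude profile with PDOP-dependent noise:
rng = np.random.default_rng(2)
t = np.linspace(0, 30, 200)                           # flight time [s]
z_true = 400 * t - 4.9 * t**2                         # idealized vertical profile
pdop = rng.uniform(1.0, 6.0, t.size)
z_meas = z_true + rng.standard_normal(t.size) * pdop  # noisier at high PDOP
traj = weighted_trajectory_fit(t, z_meas, pdop)
print(traj(15.0))                                     # reconstructed altitude at t = 15 s
```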

3D Shape Descriptor for Segmenting Point Cloud Data

  • Park, So Young;Yoo, Eun Jin;Lee, Dong-Cheon;Lee, Yong Wook
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.30 no.6_2
    • /
    • pp.643-651
    • /
    • 2012
  • Object recognition belongs to high-level processing, one of the difficult and challenging tasks in computer vision. Digital photogrammetry based on the computer vision paradigm began to emerge in the mid-1980s. However, the ultimate goal of digital photogrammetry - intelligent and autonomous processing of surface reconstruction - has not yet been achieved. Object recognition requires a robust shape description of objects. However, most shape descriptors are designed for 2D image data. Such descriptors therefore have to be extended to deal with 3D data such as LiDAR (Light Detection and Ranging) data obtained from an ALS (Airborne Laser Scanner) system. This paper introduces an extension of the chain code to 3D object space, with a hierarchical approach, for segmenting point cloud data. The experiments demonstrate the effectiveness and robustness of the proposed method for shape description and point cloud segmentation. Geometric characteristics of various roof types are well described and will eventually serve as a basis for object modeling. Segmentation accuracy for the simulated data was evaluated by measuring the coordinates of the corners on the segmented patch boundaries. The overall RMSE (Root Mean Square Error) is equivalent to the average distance between points, i.e., the GSD (Ground Sampling Distance).
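
A simple sketch of what extending a chain code to 3D can look like, assuming a 26-neighbor direction alphabet; the paper's hierarchical scheme is more elaborate:

```python
import numpy as np
from itertools import product

# One code per direction of the 26-neighborhood of a 3D grid cell.
DIRS = np.array([d for d in product((-1, 0, 1), repeat=3) if any(d)], dtype=float)
DIRS /= np.linalg.norm(DIRS, axis=1, keepdims=True)

def chain_code_3d(points):
    # Encode each step between consecutive points as the index of the
    # most closely aligned of the 26 grid directions.
    steps = np.diff(np.asarray(points, dtype=float), axis=0)
    steps /= np.linalg.norm(steps, axis=1, keepdims=True)
    return np.argmax(steps @ DIRS.T, axis=1)

# Hypothetical points along a roof ridge:
pts = [(0, 0, 0), (1, 0, 0), (2, 1, 0), (3, 1, 1)]
print(chain_code_3d(pts))
```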

Real-time Fluorescence Lifetime Imaging Microscopy Implementation by Analog Mean-Delay Method through Parallel Data Processing

  • Kim, Jayul;Ryu, Jiheun;Gweon, Daegab
    • Applied Microscopy
    • /
    • v.46 no.1
    • /
    • pp.6-13
    • /
    • 2016
  • Fluorescence lifetime imaging microscopy (FLIM) has been considered an effective technique for investigating the chemical properties of specimens, especially biological samples. Despite this advantageous trait, researchers in this field have had difficulty applying FLIM to their systems because acquiring an image with FLIM consumes too much time. Although the analog mean-delay (AMD) method was introduced to enhance the imaging speed of commonly used FLIM based on time-correlated single photon counting (TCSPC), real-time image reconstruction with the AMD method had not been implemented due to data processing obstacles. In this paper, we introduce real-time image restoration for AMD-FLIM through fast parallel data processing using Threading Building Blocks (TBB; Intel) and an octa-core processor (i7-5960X; Intel). A frame rate of 3.8 frames per second was achieved at 1,024×1,024 resolution, with over 4 million lifetime determinations per second and a measurement error within 10%. This image acquisition speed is 184 times faster than that of single-channel TCSPC and 9.2 times faster than that of 8-channel TCSPC (a state-of-the-art photon counting rate of 80 million counts per second) at the same 10% lifetime accuracy and the same pixel resolution.
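
The AMD estimate itself is compact: subtract the mean delay of the instrument response from that of the measured waveform. A sketch with a synthetic mono-exponential decay (illustrative values, not the paper's data):

```python
import numpy as np

def amd_lifetime(t, signal, irf):
    # Lifetime ~ intensity-weighted mean arrival time of the waveform
    # minus that of the instrument response function (IRF).
    mean_delay = lambda w: np.sum(t * w) / np.sum(w)
    return mean_delay(signal) - mean_delay(irf)

# Hypothetical check with a 2 ns mono-exponential decay and a narrow IRF:
t = np.linspace(0, 50, 5000)                      # time axis [ns]
irf = np.exp(-0.5 * ((t - 5.0) / 0.1) ** 2)       # Gaussian IRF centered at 5 ns
decay = np.exp(-t / 2.0)                          # tau = 2 ns
signal = np.convolve(irf, decay)[: t.size]        # simulated measured waveform
print(amd_lifetime(t, signal, irf))               # ~2 ns
```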