• Title/Summary/Keyword: 화소기반 (pixel-based)

Search Results: 693

2-D DCT/IDCT Processor Design Reducing Adders in DA Architecture (DA구조 이용 가산기 수를 감소한 2-D DCT/IDCT 프로세서 설계)

  • Jeong Dong-Yun;Seo Hae-Jun;Bae Hyeon-Deok;Cho Tae-Won
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.43 no.3 s.345
    • /
    • pp.48-58
    • /
    • 2006
  • This paper presents an 8x8 two-dimensional DCT/IDCT processor with an adder-based distributed arithmetic (DA) architecture that does not use the ROM units of conventional memory-based designs. To reduce hardware cost, the odd part of the DCT and IDCT coefficient matrices is shared. The proposed architecture uses only 29 adders for the coefficient computation of the 2-D DCT/IDCT processor, whereas the 1-D DCT processor uses 18 adders, a 48.6% reduction in adder count compared with the 8x8 1-D DCT NEDA architecture. The paper also proposes a new transpose network that differs from the conventional transpose memory block; it uses 64 registers and requires 18% fewer transistors than the conventional memory architecture. In addition, to improve throughput, the eight inputs receive eight pixels every clock cycle and, accordingly, eight pixels are produced at the outputs.
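
As a hedged illustration of the shift-and-add idea behind adder-based distributed arithmetic (a minimal sketch, not the authors' 29-adder NEDA design; the term count and DCT matrix below are assumptions), each DCT coefficient can be approximated by signed power-of-two terms so that the coefficient computation needs only additions and shifts:

```python
import numpy as np

def to_signed_power_of_two_terms(value, num_terms=12):
    """Greedy signed power-of-two approximation of a real coefficient."""
    terms, residual = [], float(value)
    for _ in range(num_terms):
        if residual == 0.0:
            break
        sign = 1.0 if residual > 0 else -1.0
        exp = int(np.floor(np.log2(abs(residual))))
        terms.append((sign, exp))
        residual -= sign * 2.0 ** exp
    return terms

def multiplierless_dot(coeffs, x):
    """Inner product using only additions and power-of-two scalings
    (shifts in hardware); no general multiplier is needed."""
    acc = 0.0
    for c, xi in zip(coeffs, x):
        for sign, exp in to_signed_power_of_two_terms(c):
            acc += sign * np.ldexp(xi, exp)   # xi * 2**exp, i.e. a shift
    return acc

# Orthonormal 8-point DCT-II matrix
N = 8
C = np.array([[np.sqrt((1.0 if k == 0 else 2.0) / N) *
               np.cos((2 * n + 1) * k * np.pi / (2 * N))
               for n in range(N)] for k in range(N)])

x = np.arange(8, dtype=float)
approx = np.array([multiplierless_dot(C[k], x) for k in range(N)])
print(np.max(np.abs(approx - C @ x)))   # small approximation error
```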

Stereo Matching For Satellite Images using The Classified Terrain Information (지형식별정보를 이용한 입체위성영상매칭)

  • Bang, Soo-Nam;Cho, Bong-Whan
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.4 no.1 s.6
    • /
    • pp.93-102
    • /
    • 1996
  • For the automatic generation of a DEM (Digital Elevation Model) by computer, determining adequate matches from stereo images is time-consuming work. An area-based method using correlation with evenly distributed windows is generally used for the matching operation. In this paper, we propose a new approach that computes matches efficiently by changing the size of the mask window and the search area according to the given terrain information. For image segmentation, an edge-preserving smoothing filter is first applied as preprocessing, and a region-growing algorithm is then applied to the filtered images. The segmented regions are classified into mountain, plain, and water areas using an MRF (Markov Random Field) model. Matching consists of parallax prediction and fine matching: the predicted parallax determines the location of the search area in the fine matching stage, and the size of the search area and the mask window is determined by the terrain information of each pixel. The execution time of matching is reduced by shrinking the search area for plain and water regions. For the experiments, four images, each covering $10km{\times}10km(1024{\times}1024\;pixel)$ of the Taejeon-Kumsan area, were studied. The results show that the computing time of the proposed matching method using terrain information can be reduced by 25% to 35%.
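
A minimal sketch of the terrain-adaptive matching idea, assuming illustrative window/search sizes per terrain class and simple normalized cross-correlation (the paper's MRF classification and parallax prediction are not reproduced):

```python
import numpy as np

SIZES = {  # terrain class -> (half window size, max disparity search)
    "mountain": (7, 32),
    "plain": (3, 8),
    "water": (2, 4),
}

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-9))

def match_pixel(left, right, row, col, terrain_class):
    """Return the disparity maximizing NCC for one left-image pixel,
    with the window and search range chosen from the terrain class."""
    half, max_d = SIZES[terrain_class]
    ref = left[row - half:row + half + 1, col - half:col + half + 1]
    best_d, best_score = 0, -2.0
    for d in range(max_d + 1):
        c = col - d
        if c - half < 0:
            break
        cand = right[row - half:row + half + 1, c - half:c + half + 1]
        score = ncc(ref, cand)
        if score > best_score:
            best_d, best_score = d, score
    return best_d

rng = np.random.default_rng(0)
left = rng.random((64, 64))
right = np.roll(left, -3, axis=1)          # synthetic 3-pixel disparity
print(match_pixel(left, right, 32, 40, "plain"))   # expected disparity: 3
```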


A Fast Motion Vector Search in Integer Pixel Units for Variable Block Sizes (가변 크기 블록에서 정수단위 화소 움직임 벡터의 빠른 검색)

  • 이융기;이영렬
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.40 no.5
    • /
    • pp.388-396
    • /
    • 2003
  • In this paper, a fast motion search algorithm that estimates motion for variable block sizes in integer pixel units is proposed. The proposed method is based on the successive elimination algorithm (SEA), which uses sum norms to find the best motion vector estimate. The motion vectors of the 16${\times}$8, 8${\times}$16, and 8${\times}$8 blocks are obtained by searching the eight pixels around the best motion vector of the 16${\times}$16 block found over all candidates, and the motion vectors of the 8${\times}$4, 4${\times}$8, and 4${\times}$4 blocks are obtained by searching the eight pixels around the best motion vector of the 8${\times}$8 block. The proposed motion search is applied to an H.264 encoder that performs variable-block-size motion estimation (ME). In terms of computational complexity, the proposed search algorithm computes motion vectors about 23.8 times faster than the spiral full search without early termination and 4.6 times faster than the motion estimation method using the hierarchical sum of absolute differences (SAD) of 4${\times}$4 blocks, while showing a 0.1dB∼0.4dB peak signal-to-noise ratio (PSNR) drop compared with the spiral full search.
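
A short sketch of the successive elimination test this search builds on: a candidate is skipped whenever the difference of sum norms already exceeds the current best SAD, since the SAD is lower-bounded by that difference. This is only the SEA core for a single block, not the full variable-block H.264 search:

```python
import numpy as np

def sea_search(ref_block, frame, top, left, search=16):
    """Full-range integer-pixel search with SEA pruning for one block."""
    ref_sum = ref_block.sum()
    h, w = ref_block.shape
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue
            cand = frame[y:y + h, x:x + w]
            if abs(ref_sum - cand.sum()) >= best_sad:   # SEA lower bound
                continue                                # skip the SAD computation
            sad = np.abs(ref_block - cand).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64)).astype(np.int64)
ref = frame[24:40, 28:44]                      # a 16x16 block cut from the frame
print(sea_search(ref, frame, top=20, left=25, search=8))   # expect ((4, 3), 0)
```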

Classification of Scaled Textured Images Using Normalized Pattern Spectrum Based on Mathematical Morphology (형태학적 정규화 패턴 스펙트럼을 이용한 질감영상 분류)

  • Song, Kun-Woen;Kim, Gi-Seok;Do, Kyeong-Hoon;Ha, Yeong-Ho
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.1
    • /
    • pp.116-127
    • /
    • 1996
  • In this paper, a scheme for classifying scaled textured images using a normalized pattern spectrum based on mathematical morphology is proposed for more general environments in which the camera may zoom in and zoom out. The normalized pattern spectrum is obtained by first computing the pattern spectrum and then interpolating it according to the scale change ratio within the same textured image class. The pattern spectrum is obtained efficiently by using both opening and closing: opening is used for pixels whose values are above a threshold, and closing for pixels whose values are below it. We also compare the classification accuracy of the gray-scale and binary methods. The proposed approach has the advantages of efficient information extraction, high accuracy, low computation, and parallel implementability. An important advantage of the proposed method is that high classification accuracy can be obtained with only (1:1)-scale images in the training phase.
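
A minimal sketch of a gray-scale pattern spectrum built from openings with growing structuring elements, followed by a simple normalization (the paper's threshold-based opening/closing split and scale-ratio interpolation are not reproduced; the sizes below are illustrative):

```python
import numpy as np
from scipy.ndimage import grey_opening

def pattern_spectrum(image, max_size=8):
    """Image 'area' removed as the opening's square structuring element grows."""
    opened = [grey_opening(image, size=(n, n)) for n in range(1, max_size + 1)]
    return np.array([opened[i].sum() - opened[i + 1].sum()
                     for i in range(max_size - 1)])

rng = np.random.default_rng(1)
texture = rng.random((128, 128))
spectrum = pattern_spectrum(texture)
spectrum = spectrum / (spectrum.sum() + 1e-9)   # normalize for comparison
print(spectrum.round(3))
```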


A New Hardware Design for Generating Digital Holographic Video based on Natural Scene (실사기반 디지털 홀로그래픽 비디오의 실시간 생성을 위한 하드웨어의 설계)

  • Lee, Yoon-Hyuk;Seo, Young-Ho;Kim, Dong-Wook
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.49 no.11
    • /
    • pp.86-94
    • /
    • 2012
  • In this paper we propose a hardware architecture for a high-speed CGH (computer-generated hologram) generation processor that reduces the number of memory accesses to avoid the bottleneck of memory access operations. Three main schemes are used. The first is pixel-by-pixel calculation rather than light-source-by-source calculation. The second is a parallel calculation scheme derived by modifying a previous recursive calculation scheme. The third is a fully pipelined calculation with exact timing scheduling obtained by adjusting the hardware. The proposed hardware computes one row of a CGH in parallel, and each hologram pixel in a row is calculated independently. It consists of an input interface, an initial parameter calculator, hologram pixel calculators, a line buffer, and a memory controller. The implemented hardware, which calculates one row of a $1,920{\times}1,080$ CGH in parallel, uses 168,960 LUTs, 153,944 registers, and 19,212 DSP blocks in an Altera FPGA environment and operates stably at 198MHz. Owing to the three schemes, the time spent accessing external memory is reduced to about 1/20,000 of that of previous designs at the same calculation speed.
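
A hedged sketch of the pixel-by-pixel ordering the hardware exploits: every hologram pixel in a row accumulates the contributions of all object points independently, so a whole row can be computed in parallel. The wavelength, pixel pitch, and real-valued fringe model below are assumptions, and the paper's recursive/parallel fixed-point pipeline is not reproduced:

```python
import numpy as np

WAVELENGTH = 532e-9          # assumed green laser wavelength, metres
PIXEL_PITCH = 8e-6           # assumed hologram pixel pitch, metres
K = 2 * np.pi / WAVELENGTH   # wavenumber

def cgh_row(points, amplitudes, row_y, num_cols):
    """One hologram row; each column (pixel) accumulates all object points
    independently, which is what makes row-parallel hardware possible."""
    xs = (np.arange(num_cols) - num_cols / 2) * PIXEL_PITCH
    y = row_y * PIXEL_PITCH
    row = np.zeros(num_cols)
    for (px, py, pz), a in zip(points, amplitudes):
        r = np.sqrt((xs - px) ** 2 + (y - py) ** 2 + pz ** 2)
        row += a * np.cos(K * r)          # real-valued fringe contribution
    return row

points = [(0.0, 0.0, 0.2), (1e-3, -5e-4, 0.25)]   # toy object points (metres)
print(cgh_row(points, [1.0, 0.8], row_y=10, num_cols=1920)[:4])
```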

Quantization Noise Reduction in Block-Coded Video Using the Characteristics of Block Boundary Area (블록 경계 영역 특성을 이용한 블록 부호화 영상에서의 양자화 잡음 제거)

  • Kwon Kee-Koo;Yang Man-Seok;Ma Jin-Suk;Im Sung-Ho;Lim Dong-Sun
    • The KIPS Transactions:PartB
    • /
    • v.12B no.3 s.99
    • /
    • pp.223-232
    • /
    • 2005
  • In this paper, we propose a novel post-filtering algorithm with low computational complexity that improves the visual quality of decoded images using block boundary classification and a simple adaptive filter (SAF). First, each block boundary is classified into smooth or complex sub-regions. For smooth-smooth sub-regions, the presence of blocking artifacts is determined using the blocky strength, and simple adaptive filtering is then applied to each block boundary area. The proposed method filters adaptively: a nonlinear 1-D 8-tap filter is applied to smooth-smooth sub-regions with blocking artifacts; for smooth-complex or complex-smooth sub-regions, a nonlinear 1-D variant filter is applied to the block boundary pixels to reduce blocking and ringing artifacts; and for complex-complex sub-regions, a nonlinear 1-D 2-tap filter is applied only to the two block boundary pixels so as to preserve image details. Experimental results show that the proposed algorithm produces better results than conventional algorithms from both subjective and objective viewpoints.
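
An illustrative 1-D sketch of the classify-then-filter idea: each side of a block boundary is labelled smooth or complex from its local activity, and a longer smoothing filter is applied only where both sides are smooth. The threshold, tap counts, and averaging kernel are assumptions, not the paper's SAF definition:

```python
import numpy as np

def side_is_smooth(pixels, threshold=4):
    """A sub-region is 'smooth' if neighbouring pixels differ only slightly."""
    return np.abs(np.diff(pixels)).max() <= threshold

def filter_boundary(line, boundary):
    """Adaptively filter a 1-D run of pixels around a block boundary index."""
    left = line[boundary - 4:boundary]
    right = line[boundary:boundary + 4]
    # long filter only where both sides are smooth; short filter otherwise
    taps = 8 if side_is_smooth(left) and side_is_smooth(right) else 2
    lo, hi = boundary - taps // 2, boundary + taps // 2
    window = np.pad(line[lo:hi], 1, mode="edge")
    out = line.astype(float).copy()
    out[lo:hi] = np.convolve(window, np.ones(3) / 3.0, mode="valid")
    return out

line = np.array([80] * 8 + [96] * 8, dtype=float)   # step at an 8x8 boundary
print(filter_boundary(line, boundary=8))
```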

A Fast Algorithm of the Belief Propagation Stereo Method (신뢰전파 스테레오 기법의 고속 알고리즘)

  • Choi, Young-Seok;Kang, Hyun-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.5
    • /
    • pp.1-8
    • /
    • 2008
  • The belief propagation method, which has been studied recently, yields good performance in disparity extraction. The method models the target function as an energy function based on a Markov random field (MRF) and solves the stereo matching problem by finding the disparity that minimizes the energy function. MRF models provide a robust and unified framework for vision problems such as stereo and image restoration. The belief propagation method produces quite accurate results, but its computational complexity is higher than that of other stereo methods, which makes real-time implementation difficult. To relieve this problem, in this paper we propose a fast algorithm for the belief propagation method. The energy function consists of a data term and a smoothness term. The data term usually corresponds to the difference in brightness between correspondences, and the smoothness term indicates the continuity of adjacent pixels. Smoothness information is created from messages, which are conventionally stored in four different message arrays for the pixel positions adjacent in the four directions; processing these four arrays accounts for 80 percent of the whole program execution time. In the proposed method, the messages are produced not in four arrays but in a single array, which dramatically reduces the processing time required for message calculation. In the last step of the disparity extraction process, the messages are read from the single integrated array, and the algorithm requires 1/4 of the computational complexity of the conventional method. Our method is evaluated by comparing its disparity error rate with that of the conventional method. Experimental results show that the proposed method remarkably reduces the execution time while barely increasing the disparity error.
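
A minimal min-sum message update of the kind used in belief-propagation stereo (the smoothness weight and truncation value are assumptions); the paper's speed-up, storing the four directional messages in one integrated array instead of four, is not reproduced here:

```python
import numpy as np

NUM_DISPARITIES = 16
LAMBDA, TRUNC = 10.0, 2          # truncated-linear smoothness (assumed values)

def send_message(data_cost, sum_incoming):
    """Min-sum message from a pixel to one neighbour, over all disparities."""
    h = data_cost + sum_incoming                     # belief without that neighbour
    d = np.arange(NUM_DISPARITIES)
    smooth = LAMBDA * np.minimum(np.abs(d[:, None] - d[None, :]), TRUNC)
    msg = (h[:, None] + smooth).min(axis=0)          # minimize over sender label
    return msg - msg.min()                           # normalize for stability

rng = np.random.default_rng(3)
print(send_message(rng.random(NUM_DISPARITIES),
                   rng.random(NUM_DISPARITIES)).round(2))
```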

Effects of Various Intracranial Volume Measurements on Hippocampal Volumetry and Modulated Voxel-based Morphometry (두개강의 용적측정법이 해마의 용적측정술과 화소기반 형태계측술에 미치는 영향)

  • Tae, Woo-Suk;Kim, Sam-Soo;Lee, Kang-Uk;Nam, Eui-Cheol
    • Investigative Magnetic Resonance Imaging
    • /
    • v.13 no.1
    • /
    • pp.63-73
    • /
    • 2009
  • Purpose : To investigate the effects of various intracranial volume (ICV) measurement methods on the sensitivity of hippocampal volumetry and modulated voxel-based morphometry (mVBM) in female patients with major depressive disorder (MDD). Materials and Methods : T1 magnetic resonance imaging (MRI) data for 41 female subjects (21 MDD patients, 20 normal subjects) were analyzed. Hippocampal volumes were measured manually, and ICV was measured both manually and automatically using the FreeSurfer package. Gray matter volume (GMV) and gray plus white matter volume (GWMV) were measured separately. Results : Manual ICV normalization provided the greatest sensitivity in hippocampal volumetry and mVBM, followed by FreeSurfer ICV, GWMV, and GMV. Manual and FreeSurfer ICVs were similar in normal subjects (p = 0.696) but differed in MDD patients (p = 0.000002). Manual ICV-corrected total gray matter volume (p = 0.0015) and manual ICV-corrected bilateral hippocampal volumes (right, p = 0.014; left, p = 0.004) were significantly decreased in MDD patients, whereas the differences in hippocampal volumes corrected by FreeSurfer ICV, GWMV, or GMV were not significant between the two groups (p > 0.05). Only the manual ICV-corrected mVBM analysis remained significant after correction for multiple comparisons. Conclusion : The method of ICV measurement greatly affects the sensitivity of hippocampal volumetry and mVBM. Manual ICV normalization was able to detect differences between women with and without MDD for both methods.
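
As a rough sketch of head-size correction, a simple proportional ICV normalization is shown below; the abstract does not spell out its normalization formula, so this form and the toy volumes are assumptions for illustration only:

```python
import numpy as np

def normalize_by_icv(volumes_mm3, icv_mm3):
    """Scale each regional volume by the ratio of mean ICV to subject ICV."""
    volumes = np.asarray(volumes_mm3, dtype=float)
    icv = np.asarray(icv_mm3, dtype=float)
    return volumes * icv.mean() / icv

hippo = [3100.0, 2950.0, 3300.0]     # toy hippocampal volumes (mm^3)
icv = [1.45e6, 1.38e6, 1.60e6]       # toy intracranial volumes (mm^3)
print(normalize_by_icv(hippo, icv).round(1))
```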


A Real-time Correction of the Underestimation Noise for GK2A Daily NDVI (GK2A 일단위 NDVI의 과소추정 노이즈 실시간 보정)

  • Lee, Soo-Jin;Youn, Youjeong;Sohn, Eunha;Kim, Mija;Lee, Yangwon
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_1
    • /
    • pp.1301-1314
    • /
    • 2022
  • The Normalized Difference Vegetation Index (NDVI) is used as an indicator of vegetation condition on the land surface in applications such as land cover mapping, crop yield, agricultural drought, soil moisture, and forest disaster monitoring. However, optical satellite sensors for visible and infrared wavelengths cannot see through clouds, so the NDVI of a cloudy pixel is not a valid value for the land surface. This study proposed a real-time correction of the underestimation noise in the GEO-KOMPSAT-2A (GK2A) daily NDVI and confirmed its feasibility through quantitative comparisons with the Moderate Resolution Imaging Spectroradiometer (MODIS) NDVI and qualitative interpretation of time-series changes. The underestimation noise was effectively corrected by a time-series correction that considers vegetation phenology, outlier removal using a long-term climatology, and gap filling using rigorous statistical methods. The correlation with MODIS NDVI was higher and the difference was lower, a 32.7% improvement compared to the original NDVI product. The proposed method can be extended to other satellite products with some modification.
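
A hedged sketch of the correction steps named in the abstract, with an assumed drop threshold and linear interpolation standing in for the operational climatology test and statistical gap filling:

```python
import numpy as np

def correct_ndvi(daily_ndvi, climatology, drop_threshold=0.15):
    """Flag daily NDVI values far below the climatology (cloud-induced
    underestimation) and fill the gaps by interpolation over time."""
    ndvi = np.asarray(daily_ndvi, dtype=float)
    suspect = (climatology - ndvi) > drop_threshold
    valid_idx = np.flatnonzero(~suspect)
    filled = ndvi.copy()
    filled[suspect] = np.interp(np.flatnonzero(suspect), valid_idx, ndvi[valid_idx])
    return filled

days = np.arange(10)
clim = 0.6 + 0.01 * days                  # toy climatology
obs = clim.copy()
obs[[3, 4]] = 0.2                         # cloud-contaminated days
print(correct_ndvi(obs, clim).round(3))
```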

Image Matching for Orthophotos by Using HRNet Model (HRNet 모델을 이용한 항공정사영상간 영상 매칭)

  • Seong, Seonkyeong;Choi, Jaewan
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.5_1
    • /
    • pp.597-608
    • /
    • 2022
  • Remotely sensed data have been used in various fields such as disaster management, agriculture, urban planning, and the military, and demand for multitemporal datasets with high spatial resolution has recently increased. This manuscript proposes an automatic image matching algorithm that uses a deep learning technique to exploit multitemporal remotely sensed datasets. The proposed deep learning model is based on High Resolution Net (HRNet), which is widely used in image segmentation, and a dense block was added to compute the correlation map between images effectively and to increase learning efficiency. The model was trained on multitemporal orthophotos from the National Geographic Information Institute (NGII). To evaluate the performance of image matching using the deep learning model, a comparative evaluation was performed. In the experiment, the average horizontal error of the proposed algorithm at an 80% image matching rate was 3 pixels, while that of Zero-mean Normalized Cross-Correlation (ZNCC) was 25 pixels. In particular, the proposed method was confirmed to be effective even in mountainous and farmland areas, where images change with vegetation growth. Therefore, the proposed deep learning algorithm is expected to perform relative image registration and image matching for multitemporal remotely sensed datasets.
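
A short sketch of the ZNCC baseline the deep-learning matcher is compared against (the HRNet-plus-dense-block network itself is not reproduced); it slides a template over a search patch and returns the correlation map whose peak gives the match position:

```python
import numpy as np

def zncc_map(template, search):
    """ZNCC of a template against every position of a larger search patch."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum()) + 1e-9
    out = np.full((search.shape[0] - th + 1, search.shape[1] - tw + 1), -1.0)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            w = search[i:i + th, j:j + tw]
            w = w - w.mean()
            out[i, j] = (t * w).sum() / (np.sqrt((w * w).sum()) * t_norm + 1e-9)
    return out

rng = np.random.default_rng(2)
search = rng.random((64, 64))
template = search[20:36, 24:40]            # template cut from the search patch
corr = zncc_map(template, search)
print(np.unravel_index(corr.argmax(), corr.shape))   # expect (20, 24)
```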