• Title/Summary/Keyword: Image summation

Target Detection Using Texture Features and Neural Network in Infrared Images (적외선영상에서 질감 특징과 신경회로망을 이용한 표적탐지)

  • Sun, Sun-Gu
    • Journal of the Institute of Electronics Engineers of Korea SC / v.47 no.5 / pp.62-68 / 2010
  • This study aims to identify target locations with few false alarms in thermal infrared images obtained in natural environments. The proposed method differs from previous research in that it applies morphology filters to Gabor response images, rather than to an intensity image, in the initial detection stage, and it does not require precise extraction of a target silhouette to distinguish true targets from clutter. It comprises three distinct stages. First, morphological operations and adaptive thresholding are applied to the summation image of four Gabor responses of an input image to find salient regions, whose locations can then be classified as targets or clutter. Second, local texture features are computed from the salient regions of the input image. Finally, the local texture features are compared with the training data to distinguish between true targets and clutter, using a three-layer multi-layer perceptron as the classifier. The performance of the proposed method is demonstrated on natural infrared images, so it can be applied to real automatic target detection systems.
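
A minimal sketch of the initial detection stage described above, assuming OpenCV's Gabor kernels; the kernel parameters, the opening element, and the mean-plus-two-sigma threshold rule are illustrative assumptions, not the authors' settings.

```python
import cv2
import numpy as np

def salient_regions(img_gray):
    # Sum the responses of four Gabor filters (0, 45, 90, 135 degrees).
    acc = np.zeros(img_gray.shape, dtype=np.float32)
    for theta in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5)
        acc += cv2.filter2D(img_gray.astype(np.float32), cv2.CV_32F, kernel)

    # Morphological opening suppresses small, clutter-like responses.
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    opened = cv2.morphologyEx(acc, cv2.MORPH_OPEN, se)

    # Adaptive threshold: mean plus two standard deviations (assumed rule).
    return (opened > opened.mean() + 2.0 * opened.std()).astype(np.uint8)
```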

Analysis on Optimal Threshold Value for Infrared Video Flame Detection (적외선 영상의 화염 검출을 위한 최적 문턱치 분석)

  • Jeong, Soo-Young;Kim, Won-Ho
    • Journal of Satellite, Information and Communications / v.8 no.4 / pp.100-104 / 2013
  • In this paper, we present a method for setting the optimal threshold for flame detection in infrared thermal images. Conventional infrared flame detection methods use a fixed intensity threshold to segment candidate flame regions and then perform further processing to confirm the detection, so the threshold-based flame region segmentation step is critical to the fire detection algorithm. The threshold should change with the input image, which depends on the camera type and operating conditions. We analyzed conventional thresholds based on fixed intensity, the average, the standard deviation, and the maximum value, and found that the optimal threshold lies above the sum of the average and the standard deviation and below the maximum value. This enhances the flame detection rate compared with the conventional fixed-threshold method.
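
The finding above pins the optimal threshold between the mean-plus-standard-deviation and the maximum intensity; a tiny sketch of such an adaptive rule follows, where the blend factor `alpha` is an assumed free parameter.

```python
import numpy as np

def flame_threshold(ir_frame, alpha=0.5):
    """Adaptive threshold between (mean + std) and the frame maximum,
    per the analysis above; alpha=0 gives the lower bound, alpha=1
    the maximum (alpha itself is an illustrative assumption)."""
    lo = ir_frame.mean() + ir_frame.std()
    hi = ir_frame.max()
    return lo + alpha * (hi - lo)

# Candidate flame pixels: mask = ir_frame > flame_threshold(ir_frame)
```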

An Iterative Spot Matching for 2-Dimensional Protein Separation Images (반복 점진적 방법에 의한 2차원 단백질 분리 영상의 반점 정합)

  • Kim, Jung-Ja;Hoang, Minh T.;Kim, Dong-Wook;Kim, Nam-Gyun;Won, Yong-Gwan
    • Journal of Biomedical Engineering Research / v.28 no.5 / pp.601-608 / 2007
  • Two-dimensional gel electrophoresis (2DGE) is an essential methodology for analyzing the expression of various proteins. For example, information on the location, mass, expression, size, and shape of proteins obtained by 2DGE can be used for diagnosis, prognosis, and the study of biological processes by comparing patients with normal subjects. Protein spot matching for this purpose is the comparative analysis of protein expression patterns across 2DGE images generated under different conditions. However, visual analysis of the several hundred or more protein spots in a 2DGE image requires much time and effort, and geometrical distortion makes matching spots of the same protein harder. In this paper, an iterative algorithm is introduced for more efficient spot matching. The proposed method first performs a global matching step, which reduces the geometrical difference between the landmarks and the spots to be matched: the movement of a spot is defined as a weighted sum of the movements of the landmark spots, with each weight given by the inverse of the distance from the spot to the corresponding landmark. This movement is applied iteratively as long as the total difference between corresponding landmarks exceeds a pre-selected value. Because of the local distortion commonly found in 2DGE images, many regions contain mis-matched spot pairs; in the second stage, the same spot matching algorithm is applied to such local regions with additional landmarks for those regions, that is, with an expanded landmark set. The proposed spot matching algorithm empirically proved to provide reliable analysis of protein separation images with higher accuracy.
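
A minimal sketch of the global matching update described above: each spot moves by an inverse-distance-weighted sum of landmark displacements. The per-spot weight normalization and the epsilon guard are assumptions for numerical safety.

```python
import numpy as np

def move_spots(spots, landmarks_src, landmarks_dst, eps=1e-9):
    """One global-matching iteration: move each spot by a weighted sum
    of landmark movements, weighted by inverse spot-landmark distance.

    spots:         (N, 2) spot coordinates in the source image
    landmarks_src: (M, 2) landmarks in the source image
    landmarks_dst: (M, 2) corresponding landmarks in the target image
    """
    motion = landmarks_dst - landmarks_src                      # (M, 2)
    d = np.linalg.norm(spots[:, None, :] - landmarks_src[None, :, :], axis=2)
    w = 1.0 / (d + eps)                                         # inverse distance
    w /= w.sum(axis=1, keepdims=True)                           # normalize per spot
    return spots + w @ motion

# Iterate while the total landmark mismatch exceeds a preset tolerance,
# then repeat locally with an expanded landmark set, as described above.
```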

Digital Image Watermarking Based on Exponential Form with Base of 2 (2의 지수형식에 기초한 디지털 이미지 워터 마킹)

  • Ariunzaya, Batgerel;Kim, Han-kil;Chu, Hyung-Suk;An, Chong-Koo
    • Journal of the Institute of Convergence Signal Processing / v.11 no.2 / pp.97-103 / 2010
  • In this paper, we propose a new digital watermarking technique. The main idea of the proposed algorithm relies on the observation that any real number can be expressed as a summation of exponential terms with base 2, and that if only the first few terms are considered, nearby numbers can be expressed in the same form; consequently, a certain amount of change does not affect the first few terms. The algorithm decomposes a host image in the wavelet domain, and the intensity of each significant wavelet coefficient is expressed in exponential form with base 2. Multiple barcode watermarks are then embedded by modifying the parity of the exponent. The proposed scheme is semi-blind and offers both objective and subjective detection as well. From the extracted watermarks, a more accurate final watermark is obtained by a merging technique. Simulation results show that the proposed algorithm resists most cases of salt-and-pepper noise, Gaussian noise, and JPEG compression.
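
One possible reading of the embedding rule, sketched below: the parity of the leading base-2 exponent of a coefficient's magnitude carries one watermark bit. This interpretation, and the nudge-to-nearest-boundary rule, are assumptions rather than the authors' exact scheme.

```python
import math

def embed_bit(coeff, bit):
    """Force the leading base-2 exponent of |coeff| to the parity of
    the watermark bit (an assumed reading of the scheme above)."""
    sign = 1.0 if coeff >= 0 else -1.0
    mag = abs(coeff)
    if mag == 0.0:
        return coeff
    e = math.floor(math.log2(mag))
    if e % 2 != bit:
        # Nudge across the nearest power-of-2 boundary to flip parity.
        lo, hi = 2.0 ** e, 2.0 ** (e + 1)
        mag = hi if (hi - mag) < (mag - lo) else lo * 0.999
    return sign * mag

def extract_bit(coeff):
    return math.floor(math.log2(abs(coeff))) % 2
```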

The Evaluation of Difference according to Image Scan Duration in PET Scan using Short Half-Lived Radionuclide (단 반감기 핵종을 이용한 PET 검사 시 영상 획득 시간에 따른 정량성 평가)

  • Hong, Gun-Chul;Cha, Eun-Sun;Kwak, In-Suk;Lee, Hyuk;Park, Hoon;Choi, Choon-Ki;Seok, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology / v.16 no.1 / pp.102-107 / 2012
  • Purpose: Because of the rapid physical decay of short half-lived radionuclides, the event counts available for imaging are very limited, so a long scan duration is applied for more accurate quantitative analysis in these relatively low-sensitivity examinations. The aim of this study was to evaluate the differences according to scan duration and to investigate a reasonable scan duration for PET scans using the radionuclides 11C and 18F. Materials and Methods: A 1994-NEMA phantom was filled with 30.08±4.22 MBq of 11C and 40.08±8.29 MBq of 18F diluted with distilled water. With 11C, dynamic images were acquired as 20 frames of 1 minute each and a static image was acquired for 20 minutes; with 18F, dynamic images were acquired as 20 frames of 2.5 minutes each and a static image was acquired for 50 minutes. The same reconstruction method and time decay correction were applied to all data. A region of interest (ROI) was set on each image, and the maximum radioactivity concentration (maxRC, kBq/mL) was compared. We also compared the maxRC of dynamic images summed frame by frame to increase the total scan duration. Results: The maxRC over time for 11C was 3.85±0.45 to 5.15±0.50 kBq/mL in the dynamic images and 2.15±0.26 kBq/mL in the static image. For 18F, the maxRC was 9.09±0.42 to 9.48±0.31 kBq/mL in the dynamic images and 7.24±0.14 kBq/mL in the static image. In the summed 11C images, as the total scan duration increased to 5, 10, 15, and 20 minutes, the maxRC was 2.47±0.4, 2.22±0.37, 2.08±0.42, and 1.95±0.55 kBq/mL, respectively. For 18F, as the total scan duration increased to 12.5, 25, 37.5, and 50 minutes, the maxRC was 7.89±0.27, 7.61±0.23, 7.36±0.21, and 7.31±0.23 kBq/mL. Conclusion: As the elapsed time after injection increased, the maxRC increased by 33% and 4% in the dynamic 11C and 18F studies, respectively; as the total scan duration increased, the maxRC decreased by 50% and 20% in the summed 11C and 18F images, respectively. The percentage difference was larger for the shorter half-lived radionuclide. It appears that the accuracy of the decay correction declines not only with increasing scan duration but also with increasing elapsed time from the start of acquisition. For 18F studies the differences were small, so the quantitative error related to elapsed time need not be considered; for shorter half-lived radionuclides such as 11C, it is recommended either to apply an additional decay correction that accounts for the elapsed-time error or to limit the static scan duration to less than 5 minutes, corresponding to 25% of the half-life.
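
The drift reported above is governed by the frame decay-correction factor, which grows with both frame length and elapsed time. A small sketch using the standard correction formula (half-lives from published decay data; the frame boundaries are illustrative):

```python
import math

HALF_LIFE_MIN = {"C-11": 20.36, "F-18": 109.77}   # physical half-lives (min)

def decay_correction_factor(nuclide, t1, t2):
    """Standard decay correction for a frame acquired over [t1, t2]
    minutes after the reference time (e.g. injection): corrects the
    measured counts back to the reference-time activity."""
    lam = math.log(2.0) / HALF_LIFE_MIN[nuclide]
    return lam * (t2 - t1) / (math.exp(-lam * t1) - math.exp(-lam * t2))

# The factor for a single 20-minute 11C frame far exceeds that of the
# first 1-minute frame, so any model mismatch is amplified accordingly:
# decay_correction_factor("C-11", 0, 20)  ->  ~1.38
# decay_correction_factor("C-11", 0, 1)   ->  ~1.02
```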

Performance Analysis of Adaptive Corner Shrinking Algorithm for Decimating the Document Image (문서 영상 축소를 위한 적응형 코너 축소 알고리즘의 성능 분석)

  • Kwak No-Yoon
    • Journal of Digital Contents Society / v.4 no.2 / pp.211-221 / 2003
  • The objective of this paper is a performance analysis of a digital document image decimation algorithm that generates each decimated element as the average of the target pixel value and the value of a neighbor intelligible element, adaptively reflecting the merits of the ZOD and FOD methods in the decimated image. First, a target pixel located at the center of a sliding window is selected, and the gradient amplitudes of its right neighbor pixel and its lower neighbor pixel are calculated using a first-order derivative operator. Second, each gradient amplitude is divided by the sum of the two gradient amplitudes to generate a local intelligible weight. Next, the value of the neighbor intelligible element is obtained by adding the value of the right neighbor pixel times its local intelligible weight to the value of the lower neighbor pixel times its weight. The decimated image is acquired by applying this process repetitively to all pixels in the input image. The proposed method and conventional methods are compared in terms of subjective performance and hardware complexity, and based on this analysis, a preferable approach for developing decimation algorithms for digital document images is reviewed.
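
A compact sketch of the 2:1 decimation described above; the exact derivative operator is not specified in the abstract, so simple forward differences from the target pixel are assumed, and odd image borders are cropped.

```python
import numpy as np

def adaptive_corner_shrink(img):
    """Each output pixel is the average of the target pixel (ZOD term)
    and a gradient-weighted mix of its right and lower neighbors."""
    img = img.astype(np.float64)
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    b = img[:h, :w]
    t  = b[0::2, 0::2]     # target pixel of each 2x2 block
    r  = b[0::2, 1::2]     # right neighbor
    lo = b[1::2, 0::2]     # lower neighbor

    gr = np.abs(r - t)     # assumed gradient amplitude at the right neighbor
    gl = np.abs(lo - t)    # assumed gradient amplitude at the lower neighbor
    s = gr + gl
    s[s == 0] = 1.0        # flat region: avoid division by zero
    intelligible = (gr * r + gl * lo) / s

    return 0.5 * (t + intelligible)
```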

An Analytical Approach to Color Composition in Ray Tracing of Volume Data

  • Jung, Moon-Ryul;Paik, Doowon;Kim, Eunghwan
    • Journal of the Korea Computer Graphics Society / v.2 no.1 / pp.1-6 / 1996
  • In ray tracing of 3D volume data, the color of each pixel in the image is typically obtained by accumulating the contributions of sample points along the ray cast from the pixel. This accumulation is most naturally represented by integration; in most methods, however, it is done by numerical summation because analytical solutions to the integral are hard to find. This paper shows that a semi-analytical solution can be obtained for typical ray tracing of volume data. Tentative conclusions about the significance and usefulness of our approach are presented based on our experiments.
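
For reference, the accumulation in question is the volume rendering integral, shown here in its standard form next to the numerical summation that most methods substitute for it (standard notation, not the paper's):

$$C \;=\; \int_{0}^{L} c(s)\,\mu(s)\,\exp\!\left(-\int_{0}^{s}\mu(u)\,du\right) ds \;\;\approx\;\; \sum_{i=0}^{n-1} c_i\,\alpha_i \prod_{j=0}^{i-1}\bigl(1-\alpha_j\bigr)$$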

Design and Performance Analysis of Adaptive First-Order Decimator Using Local Intelligibility (국부 가해성을 이용한 적응형 선형 축소기의 설계 및 성능 분석)

  • Kwak, No-Yoon
    • Journal of Digital Contents Society / v.9 no.1 / pp.17-26 / 2008
  • This paper proposes AFOD (Adaptive First-Order Decimator), which sets each decimated element to the average of the value of a neighbor intelligible component and the output of the FOD (First-Order Decimator) for the target pixel, and analyzes its performance in terms of subjective image quality and hardware complexity. In the proposed AFOD, a target pixel located at the center of a sliding window is selected first, and the gradient amplitudes of its right neighbor pixel and its lower neighbor pixel are calculated using a first-order derivative operator. Second, each gradient amplitude is divided by the sum of the two gradient amplitudes to generate a local intelligible weight. Next, the value of the neighbor intelligible component is defined by adding the value of the right neighbor pixel times its local intelligible weight to the value of the lower neighbor pixel times its weight. Since the proposed method adaptively reflects the intelligible information of neighbor pixels in the decimated element according to each local intelligible weight, it can effectively suppress the blurring that is the demerit of FOD while keeping FOD's merits of good average results and low computational cost.
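
AFOD differs from the adaptive corner shrinking sketch shown earlier only in the first averaging term: the raw target pixel is replaced by the FOD output, assumed here to be the 2×2 block mean.

```python
import numpy as np

def afod(img):
    """AFOD sketch: average the FOD output (assumed to be the 2x2 block
    mean) with the gradient-weighted neighbor mix."""
    img = img.astype(np.float64)
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    b = img[:h, :w]
    t, r = b[0::2, 0::2], b[0::2, 1::2]
    lo, d = b[1::2, 0::2], b[1::2, 1::2]   # lower and diagonal neighbors

    gr, gl = np.abs(r - t), np.abs(lo - t)
    s = gr + gl
    s[s == 0] = 1.0
    intelligible = (gr * r + gl * lo) / s

    fod = 0.25 * (t + r + lo + d)          # first-order (block-mean) term
    return 0.5 * (fod + intelligible)
```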

Automatic Liver Segmentation on Abdominal Contrast-enhanced CT Images for the Pre-surgery Planning of Living Donor Liver Transplantation

  • Jang, Yujin;Hong, Helen;Chung, Jin Wook
    • Journal of International Society for Simulation Surgery / v.1 no.1 / pp.37-40 / 2014
  • Purpose: For living donor liver transplantation, liver segmentation is difficult due to the variability of liver shape across patients and the similar density of neighboring organs such as the heart, stomach, kidneys, and spleen. In this paper, we propose automatic segmentation of the liver using multi-planar anatomy and a deformable surface model in portal-phase abdominal contrast-enhanced CT images. Method: Our method is composed of four main steps. First, the optimal liver volume is extracted using positional information from the pelvis and ribs and by separating the lungs and heart from the CT images. Second, anisotropic diffusion filtering and adaptive thresholding are used to segment the initial liver volume. Third, morphological opening and connected component labeling are applied to multiple planes to remove neighboring organs. Finally, a deformable surface model and a probability summation map are used to refine the posterior liver surface and the left lobe missed in the previous step. Results: All experimental datasets were acquired from ten living donors using a SIEMENS CT system. Each image had a matrix size of 512×512 pixels with in-plane resolutions ranging from 0.54 to 0.70 mm; the slice spacing was 2.0 mm, and the number of images per scan ranged from 136 to 229. For accuracy evaluation, the average symmetric surface distance (ASD) and the volume overlap error (VE) between automatic segmentation and manual segmentation by two radiologists were calculated. The ASD was 0.26±0.12 mm for manual1 versus automatic and 0.24±0.09 mm for manual2 versus automatic, while the inter-radiologist ASD was 0.23±0.05 mm. The VE was 0.86±0.45% for manual1 versus automatic and 0.73±0.33% for manual2 versus automatic, while the inter-radiologist VE was 0.76±0.21%. Conclusion: Our method can be used for liver volumetry in pre-surgery planning of living donor liver transplantation.
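
A minimal sketch of the third step above (per-plane opening plus connected component labeling); keeping only the largest component and the 5×5 structuring element are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def remove_neighbor_organs(mask):
    """Per-slice morphological opening followed by connected component
    labeling, keeping the largest component as the liver candidate."""
    out = np.zeros_like(mask, dtype=bool)
    for z in range(mask.shape[0]):                      # axial planes
        sl = ndimage.binary_opening(mask[z], structure=np.ones((5, 5)))
        labels, n = ndimage.label(sl)
        if n == 0:
            continue
        sizes = ndimage.sum(sl, labels, index=range(1, n + 1))
        out[z] = labels == (int(np.argmax(sizes)) + 1)  # largest component
    return out
```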

Bandwidth Efficient Summed Area Table Generation for CUDA (CUDA를 이용한 효율적인 합산 영역 테이블의 생성 방법)

  • Ha, Sang-Won;Choi, Moon-Hee;Jun, Tae-Joon;Kim, Jin-Woo;Byun, Hye-Ran;Han, Tack-Don
    • Journal of Korea Game Society / v.12 no.5 / pp.67-78 / 2012
  • A summed area table allows filtering of arbitrary-width box regions for every pixel in constant time per pixel. This characteristic makes it beneficial in image processing applications where the sum or average of the surrounding pixel intensities is required. Although calculating the summed area table of image data is primarily a memory-bound job consisting of row- or column-wise summation, previous works had to endure excessive access to high-latency global memory in order to exploit data parallelism. In this paper, we propose an efficient algorithm for generating the summed area table in a GPGPU environment, in which the input is decomposed into square sub-images with intermediate data propagated between them. By doing so, global memory access is almost halved compared to previous methods, making efficient use of the available memory bandwidth. The results show a substantial increase in performance.
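
The decomposition idea can be illustrated on the CPU: each square tile computes a local 2D prefix sum, and the only data propagated between tiles are the already-finished boundary rows and columns. A NumPy sketch of this idea (not the authors' CUDA kernels):

```python
import numpy as np

def sat_tiled(img, tile=4):
    """Summed area table built from square sub-images, with intermediate
    boundary data propagated between tiles (assumes tile-aligned input)."""
    h, w = img.shape
    assert h % tile == 0 and w % tile == 0
    sat = np.zeros((h, w), dtype=np.int64)
    for by in range(0, h, tile):
        for bx in range(0, w, tile):
            # Local 2D prefix sum inside the tile.
            local = img[by:by+tile, bx:bx+tile].astype(np.int64)
            local = local.cumsum(axis=0).cumsum(axis=1)
            # Propagated intermediate data from finished neighbor tiles.
            top    = sat[by-1, bx:bx+tile] if by else 0       # row above
            left   = sat[by:by+tile, bx-1:bx] if bx else 0    # column to the left
            corner = sat[by-1, bx-1] if (by and bx) else 0    # double-counted part
            sat[by:by+tile, bx:bx+tile] = local + top + left - corner
    return sat

# Sanity check against the direct definition:
# a = np.random.randint(0, 256, (8, 8))
# assert np.array_equal(sat_tiled(a), a.cumsum(0).cumsum(1))
```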