• Title/Abstract/Keyword: Adaptive Contrast

241 search results

The Effect of Mean Brightness and Contrast of Digital Image on Detection of Watermark Noise (워터 마크 잡음 탐지에 미치는 디지털 영상의 밝기와 대비의 효과)

  • Kham Keetaek;Moon Ho-Seok;Yoo Hun-Woo;Chung Chan-Sup
    • Korean Journal of Cognitive Science / Vol. 16, No. 4 / pp.305-322 / 2005
  • Watermarking is a widely employed method for protecting the copyright of a digital image: the owner's unique image is embedded into the original image. A stronger watermark is more resilient during extraction, even after distortions such as changes to image size or resolution. At the same time, its strength must be moderated so that it remains below human visibility; finding a balance between these two requirements is crucial in watermarking. In typical watermarking algorithms, a predefined watermark strength, computed from the physical difference between the original and embedded images, is applied uniformly to all images. However, the mean brightness or contrast of the surrounding image, rather than the absolute brightness of an object, can affect human sensitivity for object detection. In the present study, we examined whether the detectability of watermark noise is altered by image statistics: the mean brightness and contrast of the image. As a first step, we made nine fundamental images by varying the brightness and contrast of the original image. For each fundamental image, detectability of watermark noise was measured. The results showed that the watermark strength required for detection increased as the brightness and contrast of the fundamental image increased. We fitted the data to a regression line that can be used to estimate the watermark strength for a given image with a certain brightness and contrast. Although other factors must be considered before applying this formula directly in an actual watermarking algorithm, an adaptive watermarking algorithm could be built on this formula using image statistics such as brightness and contrast.
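
The fitted regression suggests a simple adaptive rule: estimate the maximum invisible watermark strength from an image's mean brightness and contrast. A minimal sketch of that idea in Python follows; the data values, the linear form, and the function name estimate_strength are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

# Hypothetical measurements (illustrative only): mean brightness, contrast,
# and the watermark strength at which observers first detected the noise.
brightness = np.array([0.2, 0.4, 0.6, 0.8, 0.2, 0.4, 0.6, 0.8, 0.5])
contrast   = np.array([0.1, 0.1, 0.3, 0.3, 0.5, 0.5, 0.7, 0.7, 0.4])
threshold  = np.array([0.8, 1.1, 1.6, 1.9, 1.3, 1.6, 2.2, 2.5, 1.6])

# Fit threshold ~ a*brightness + b*contrast + c by ordinary least squares.
X = np.column_stack([brightness, contrast, np.ones_like(brightness)])
coef, *_ = np.linalg.lstsq(X, threshold, rcond=None)
a, b, c = coef

def estimate_strength(mean_brightness, mean_contrast):
    """Estimate the strongest still-invisible watermark for an image."""
    return a * mean_brightness + b * mean_contrast + c

print(estimate_strength(0.5, 0.3))
```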


Spectral Analysis Method to Eliminate Spurious in FMICW HRR Millimeter-Wave Seeker (주파수 변조 단속 지속파를 이용하는 고해상도 밀리미터파 탐색기의 스퓨리어스 제거를 위한 스펙트럼 분석 기법)

  • Yang, Hee-Seong;Chun, Joo-Hwan;Song, Sung-Chan
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / Vol. 23, No. 1 / pp.85-95 / 2012
  • In this paper, we develop a spectral analysis scheme to eliminate the spurious peaks generated in a high range resolution (HRR) millimeter-wave seeker based on an FMICW system. In contrast to an FMCW system, an FMICW system generates spurious peaks in the spectrum of its IF signal, caused by the periodic discontinuity of the signal. These peaks are problematic for the standard remedies: if a band-pass filter is used to eliminate them, the accuracy of the system depends on the previously estimated range; if a random interruption sequence is used, the noise floor rises; and if a staggering process is used, several waveforms must be transmitted to obtain overlapped information. Using recently introduced spectral analysis schemes such as IAA (Iterative Adaptive Approach) and SPICE (SemiParametric Iterative Covariance-based Estimation), the spurious peaks can be eliminated effectively. Since IAA and SPICE require reliable data to be distinguished from unreliable data, with only the reliable data used, an STFT (Short Time Fourier Transform) is applied for this screening step.
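
As a rough illustration of the screening-plus-IAA idea, the sketch below estimates a spectrum from only the samples flagged as reliable (e.g., by an STFT-based screening step). It is a compact textbook-style IAA implementation under stated assumptions, not the authors' code; the frequency-grid size, iteration count, and regularization constant are arbitrary choices.

```python
import numpy as np

def iaa_spectrum(y, mask, n_freq=256, n_iter=10):
    """Minimal IAA spectral estimate using only the reliable samples.

    y    : complex IF signal of length N (gaps allowed anywhere)
    mask : boolean array, True where the sample is reliable
    """
    t = np.arange(len(y))[mask]            # time indices of reliable samples
    z = y[mask]                            # reliable data only
    freqs = np.arange(n_freq) / n_freq
    A = np.exp(2j * np.pi * np.outer(t, freqs))   # steering matrix (M x K)

    # Initialise powers with the periodogram of the gapped data.
    p = np.abs(A.conj().T @ z) ** 2 / len(z) ** 2
    for _ in range(n_iter):
        R = (A * p) @ A.conj().T           # covariance model R = A diag(p) A^H
        Ri = np.linalg.inv(R + 1e-9 * np.eye(len(z)))
        num = A.conj().T @ (Ri @ z)        # a_k^H R^-1 y
        den = np.einsum('ij,ij->j', A.conj(), Ri @ A)  # a_k^H R^-1 a_k
        p = np.abs(num / den) ** 2         # IAA power update
    return freqs, p
```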

A Study on Projection Image Restoration by Adaptive Filtering (적응적 필터링에 의한 투사영상 복원에 관한 연구)

  • 김정희;김광익
    • Journal of Biomedical Engineering Research / Vol. 19, No. 2 / pp.119-128 / 1998
  • This paper describes a filtering algorithm that employs a priori information about SPECT lesion detectability for the filtering of degraded projection images prior to backprojection reconstruction. In this algorithm, we determined m minimum detectable lesion sizes (MDLSs) by assuming m object contrasts chosen uniformly in the range 0.0-1.0, based on a signal/noise model that expresses the detection capability of SPECT in terms of physical factors. A best estimate of a given projection image is formed as a weighted combination of the subimages from m optimal filters, each designed to maximize the local S/N ratio for its MDLS lesion. These subimages show relatively larger resolution recovery and relatively smaller noise reduction as the MDLS decreases, and the weighting of each subimage is controlled by the difference between the subimage and the maximum-resolution-recovered projection image. The proposed filtering algorithm was tested on SPECT image reconstruction problems and produced good results. In particular, it showed an adaptive effect: it approximately averages the filter outputs in homogeneous areas, while in textured lesion areas of the reconstructed image it depends sensitively on each filter's strength in preserving or enhancing contrast.
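
A hedged sketch of the weighted-combination step is given below. Gaussian filters stand in for the paper's m S/N-optimal filters, and the exponential weighting on the deviation from the sharpest subimage is an illustrative stand-in for the paper's weighting rule; sigmas and beta are assumed parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def adaptive_restore(proj, sigmas=(0.5, 1.0, 2.0, 4.0), beta=5.0):
    """Combine subimages from several filters, weighting each pixel
    toward filters whose output stays close to the sharpest
    (maximum-resolution) subimage."""
    subs = np.stack([gaussian_filter(proj, s) for s in sigmas])
    ref = subs[0]                        # sharpest subimage as reference
    # Larger deviation from the reference -> smaller weight.
    w = np.exp(-beta * (subs - ref) ** 2)
    w /= w.sum(axis=0)
    return (w * subs).sum(axis=0)
```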


Automatic Liver Segmentation of a Contrast Enhanced CT Image Using a Partial Histogram Threshold Algorithm (부분 히스토그램 문턱치 알고리즘을 사용한 조영증강 CT영상의 자동 간 분할)

  • Kyung-Sik Seo;Seung-Jin Park;Jong An Park
    • Journal of Biomedical Engineering Research / Vol. 25, No. 3 / pp.189-194 / 2004
  • Pixel values of contrast-enhanced computed tomography (CE-CT) images vary randomly from scan to scan. In addition, in the middle part of the liver it is difficult to isolate the liver structure, because the pancreas has similar gray-level values in the abdomen. In this paper, an automatic liver segmentation method using a partial histogram threshold (PHT) algorithm is proposed to overcome the randomness of CE-CT images and to remove the pancreas. After histogram transformation, an adaptive multi-modal threshold is used to find the range of gray-level values of the liver structure, and the PHT algorithm is applied to remove the pancreas. Morphological filtering is then performed to remove unnecessary objects and smooth the boundary. Four CE-CT slices from eight patients were selected to evaluate the proposed method. The average normalized area of the automatic segmentation method II (ASM II) using the PHT was 0.1671 and that of the manual segmentation method (MSM) was 0.1711, a very small difference. The average area error rate between ASM II and MSM was 6.8339%. These experimental results show that the proposed method performs similarly to manual segmentation by a medical doctor.
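
The sketch below illustrates the overall pipeline shape: pick a gray-level band from the histogram, threshold, clean up morphologically, and keep the largest component. The peak-based band selection is a simplification, not the paper's adaptive multi-modal threshold or PHT algorithm, and the band width and iteration counts are assumed values.

```python
import numpy as np
from scipy import ndimage

def segment_liver(ct_slice, bins=256, band=10):
    """Illustrative histogram-threshold segmentation of the largest organ."""
    hist, edges = np.histogram(ct_slice, bins=bins)
    peak = np.argmax(hist[1:]) + 1                 # skip the background bin
    lo, hi = edges[max(peak - band, 0)], edges[min(peak + band, bins)]
    mask = (ct_slice >= lo) & (ct_slice <= hi)

    mask = ndimage.binary_opening(mask, iterations=2)  # drop small objects
    mask = ndimage.binary_closing(mask, iterations=2)  # smooth the boundary
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)        # keep largest component
```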

Validation of Deep-Learning Image Reconstruction for Low-Dose Chest Computed Tomography Scan: Emphasis on Image Quality and Noise

  • Joo Hee Kim;Hyun Jung Yoon;Eunju Lee;Injoong Kim;Yoon Ki Cha;So Hyeon Bak
    • Korean Journal of Radiology / Vol. 22, No. 1 / pp.131-138 / 2021
  • Objective: Iterative reconstruction degrades image quality. Thus, further advances in image reconstruction are necessary to overcome some limitations of this technique in low-dose computed tomography (LDCT) scanning of the chest. Deep-learning image reconstruction (DLIR) is a new method used to reduce dose while maintaining image quality. The purpose of this study was to evaluate the image quality and noise of LDCT scan images reconstructed with DLIR and to compare them with those of images reconstructed with adaptive statistical iterative reconstruction-Veo at a level of 30% (ASiR-V 30%). Materials and Methods: This retrospective study included 58 patients who underwent LDCT scanning for lung cancer screening. Datasets were reconstructed with ASiR-V 30% and with DLIR at medium and high levels (DLIR-M and DLIR-H, respectively). The objective image signal and noise (the mean attenuation value and standard deviation in Hounsfield units for the lungs, mediastinum, liver, and background air) and the subjective image contrast, image noise, and conspicuity of structures were evaluated, and the differences between CT scan images subjected to ASiR-V 30%, DLIR-M, and DLIR-H were assessed. Results: Based on the objective analysis, the image signals did not significantly differ among ASiR-V 30%, DLIR-M, and DLIR-H (p = 0.949, 0.737, 0.366, and 0.358 in the lungs, mediastinum, liver, and background air, respectively). However, the noise was significantly lower in DLIR-M and DLIR-H than in ASiR-V 30% (all p < 0.001). DLIR had a higher signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) than ASiR-V 30% (p = 0.027, < 0.001, and < 0.001 for the SNR of the lungs, mediastinum, and liver, respectively; all p < 0.001 for the CNR). According to the subjective analysis, DLIR had higher image contrast and lower image noise than ASiR-V 30% (all p < 0.001), and was superior to ASiR-V 30% in identifying the pulmonary arteries and veins, trachea and bronchi, lymph nodes, and pleura and pericardium (all p < 0.001). Conclusion: DLIR significantly reduced the image noise in chest LDCT scan images compared with ASiR-V 30% while maintaining superior image quality.
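
For reference, the objective metrics reported above are typically computed from region-of-interest statistics. The sketch below uses one common convention for SNR and CNR; definitions vary across studies, so treat it as an assumption rather than this paper's exact formula.

```python
import numpy as np

def roi_stats(image, mask):
    """Mean and standard deviation of the pixels inside a boolean ROI mask."""
    vals = image[mask]
    return vals.mean(), vals.std()

def snr_cnr(image, organ_mask, background_mask):
    """One common convention: SNR = mean(ROI) / sd(ROI);
    CNR = (mean(ROI) - mean(background)) / sd(background)."""
    mu_o, sd_o = roi_stats(image, organ_mask)
    mu_b, sd_b = roi_stats(image, background_mask)
    return mu_o / sd_o, (mu_o - mu_b) / sd_b
```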

Sampling Strategies for Computer Experiments: Design and Analysis

  • Lin, Dennis K.J.;Simpson, Timothy W.;Chen, Wei
    • International Journal of Reliability and Applications / Vol. 2, No. 3 / pp.209-240 / 2001
  • Computer-based simulation and analysis is used extensively in engineering for a variety of tasks. Despite the steady and continuing growth of computing power and speed, the computational cost of complex high-fidelity engineering analyses and simulations limits their use in important areas like design optimization and reliability analysis. Statistical approximation techniques such as design of experiments and response surface methodology are becoming widely used in engineering to minimize the computational expense of running such computer analyses and to circumvent many of these limitations. In this paper, we compare and contrast five experimental design types and four approximation model types in terms of their capability to generate accurate approximations for two engineering applications with typical engineering behaviors and a wide range of nonlinearity. The first example involves the analysis of a two-member frame that has three input variables and three responses of interest. The second example simulates the roll-over potential of a semi-tractor-trailer for different combinations of input variables and braking and steering levels. Detailed error analysis reveals that uniform designs provide good sampling for generating accurate approximations with different sample sizes, while kriging models provide accurate approximations that are robust for use with a variety of experimental designs and sample sizes.
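
The comparison can be reproduced in miniature: draw a space-filling design, run a cheap stand-in for the expensive simulator, fit a kriging (Gaussian process) surrogate, and measure prediction error. The sketch below uses a Latin hypercube in place of the paper's uniform designs and scikit-learn's GP for the kriging model; the test function and all settings are illustrative.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def cheap_simulator(x):            # stand-in for an expensive analysis code
    return np.sin(3 * x[:, 0]) + x[:, 1] ** 2

# Space-filling design (Latin hypercube) for the sample sites in [0, 1]^2.
sampler = qmc.LatinHypercube(d=2, seed=0)
X_train = sampler.random(n=40)
y_train = cheap_simulator(X_train)

# Kriging surrogate: Gaussian process with an RBF correlation model.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3)).fit(X_train, y_train)

X_test = sampler.random(n=200)
rmse = np.sqrt(np.mean((gp.predict(X_test) - cheap_simulator(X_test)) ** 2))
print(f"surrogate RMSE: {rmse:.4f}")
```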


Shadow Removal from Scanned Documents taken by Mobile Phones based on Image Local Statistics (이미지 지역 통계를 이용한 모바일 기기로 촬영한 문서에서의 그림자 제거)

  • Na, Yeji;Park, Sang Il
    • Journal of the Korea Computer Graphics Society / Vol. 24, No. 3 / pp.43-48 / 2018
  • In this paper, we present a method for removing shadows from scanned documents. Compared to existing methods, such as those based on image pyramid representation or adaptive thresholding, our method produces more robust, higher-quality results. The basic idea is to use local image statistics to separate the regions of interest, such as the regions around letters and figures, from the rest of the image. For the separated regions, we adaptively adjust the local brightness and contrast, and also apply a sigmoid function to the intensity values to enhance the clarity of the image. For the remaining empty regions, we apply a gradient-based image hole-filling method that fills each region with a smooth color change.
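
A minimal sketch of the local-statistics idea follows: normalize each pixel by its neighborhood mean and standard deviation, then apply a sigmoid to sharpen text against the page. The window size and sigmoid gain are illustrative choices, and the region separation and hole-filling steps of the paper are omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def remove_shadow(gray, win=31, k=10.0):
    """Shadow suppression via local mean/std normalization + sigmoid."""
    g = gray.astype(np.float64) / 255.0
    mu = uniform_filter(g, win)                    # local mean
    var = uniform_filter(g * g, win) - mu ** 2
    sd = np.sqrt(np.clip(var, 1e-6, None))         # local std
    z = (g - mu) / sd                              # local contrast
    out = 1.0 / (1.0 + np.exp(-k * z))             # sigmoid enhancement
    return (out * 255).astype(np.uint8)
```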

Recursive Estimation of Biased Zero-Error Probability for Adaptive Systems under Non-Gaussian Noise (비-가우시안 잡음하의 적응 시스템을 위한 바이어스된 영-오차확률의 반복적 추정법)

  • Kim, Namyong
    • Journal of Internet Computing and Services / Vol. 17, No. 1 / pp.1-6 / 2016
  • The biased zero-error probability and its related algorithms require a heavy computational burden due to summation operations at each iteration. In this paper, a recursive approach to the biased zero-error probability and its related algorithms is proposed and compared in simulations of shallow-water communication channels with ambient noise consisting of biased Gaussian and impulsive noise. In contrast to the original MBZEP algorithm, whose computational complexity is proportional to the sample size, the proposed recursive method significantly reduces the computational burden regardless of sample size. With this computational efficiency, the proposed algorithm shows robustness to multipath fading and to biased Gaussian and impulsive noise equivalent to that of the block-processing method.
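
The recursive idea can be sketched as replacing a block average of kernel values over the last N errors with an exponentially weighted running estimate updated once per sample. The code below is a hedged illustration of that O(1) update, not the paper's MBZEP recursion; the kernel width, bias, and forgetting factor are assumed parameters.

```python
import numpy as np

def gaussian_kernel(e, sigma):
    """Parzen (Gaussian) kernel evaluated at error e."""
    return np.exp(-e * e / (2 * sigma * sigma)) / (np.sqrt(2 * np.pi) * sigma)

class RecursiveZEP:
    """Running estimate of the biased zero-error probability:
    p_k = (1 - lam) * p_{k-1} + lam * G_sigma(e_k + bias)."""
    def __init__(self, sigma=1.0, bias=0.0, lam=0.05):
        self.sigma, self.bias, self.lam = sigma, bias, lam
        self.p = 0.0

    def update(self, error):
        k = gaussian_kernel(error + self.bias, self.sigma)
        self.p = (1 - self.lam) * self.p + self.lam * k   # O(1) per sample
        return self.p
```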

A Critical Review of Current Crisis Simulation Methodology

  • Kim, Hak-Kyong;Lee, Ju-Lak
    • International Journal of Contents / Vol. 7, No. 1 / pp.58-64 / 2011
  • This paper is concerned with simulation exercises used to train key response agencies for crisis situations. While 'multi-agency' simulations are increasingly acknowledged as a necessary and significant training tool for emergency response organisations, many current crisis simulations still focus only on the revision of existing response plans. A crisis requires a rapid reaction, yet, in contrast to an 'emergency', the risks facing critical decision-makers in crisis situations are difficult to measure owing to their ill-structured nature. In other words, a crisis situation is likely to create great uncertainty, unfamiliarity and complexity, and consequently should be managed by adaptive or second-order expertise and techniques rather than routine or structured responses. In this context, the paper attempts to demonstrate that current simulation exercise practice may not be good enough for uncertain, unfamiliar, and complex 'crisis' situations, in particular by conducting case studies of two different underground fire crises in Korea (Daegu Subway Fire, 2003) and the UK (King's Cross Fire, 1987). Finally, it is suggested that three abilities are critical in responding to a crisis situation: 'flexibility', 'improvisation' and 'creativity'.

Caption Region Extraction of Sports Video Using Multiple Frame Merge (다중 프레임 병합을 이용한 스포츠 비디오 자막 영역 추출)

  • 강오형;황대훈;이양원
    • Journal of Korea Multimedia Society / Vol. 7, No. 4 / pp.467-473 / 2004
  • Captions in video play an important role in delivering video content. Existing caption region extraction methods have difficulty separating caption regions from the background because they are sensitive to noise. This paper proposes a method to extract caption regions from sports video using multiple frame merging and MBRs (Minimum Bounding Rectangles). As preprocessing, an adaptive threshold is obtained using contrast stretching and Otsu's method. The caption frame interval is extracted by merging multiple frames, and the caption region is then efficiently extracted by median filtering, morphological dilation, region labeling, candidate character region filtering, and MBR extraction.
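
A rough sketch of such a pipeline with OpenCV follows. The min-merge across frames (static captions survive while the moving background is suppressed), the component-area cutoff, and the structuring-element size are illustrative choices, not the paper's exact parameters.

```python
import cv2
import numpy as np

def caption_mbrs(frames, min_area=50):
    """Merge frames, binarize, clean up, and return caption MBRs (x, y, w, h)."""
    merged = np.min(np.stack(frames), axis=0).astype(np.uint8)  # static text survives
    gray = cv2.cvtColor(merged, cv2.COLOR_BGR2GRAY)
    gray = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)   # contrast stretch
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    binary = cv2.medianBlur(binary, 3)                          # noise removal
    binary = cv2.dilate(binary, np.ones((3, 3), np.uint8))      # join characters
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    return [tuple(stats[i, :4]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] > min_area]
```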
