• Title/Summary/Keyword: thresholding


Intra Prediction Offset Compensation for Improving Video Coding Efficiency (영상 부호화 효율 향상을 위한 화면내 예측 오프셋 보상)

  • Lim, Sung-Chang;Lee, Ha-Hyun;Choi, Hae-Chul;Jeong, Se-Yoon;Kim, Jong-Ho;Choi, Jin-Soo
    • Journal of Broadcast Engineering / v.14 no.6 / pp.749-768 / 2009
  • In this paper, an intra prediction offset compensation method is proposed to improve intra prediction in H.264/AVC. In H.264/AVC, intra prediction along various directions improves coding efficiency by removing spatial correlation between neighboring blocks: when intra prediction is used, neighboring pixels of previously reconstructed blocks serve as the intra reference for the current block to be coded. To further reduce the prediction error of the intra reference block, the proposed method introduces an intra prediction offset that is determined by rate-distortion optimization and added to the conventional intra prediction block. In addition to the offset compensation, the coefficient thresholding method used for inter coding in JM 11.0 is applied to the chroma components of intra blocks, which improves the luma coding efficiency of the proposed method. Experiments show that the proposed method achieves an average bitrate reduction of 2.45% and a maximum of 4.41% relative to JM 11.0 under the High Profile condition.
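
As a rough sketch of the offset selection step described in this abstract, the following Python fragment picks, for one block, the offset that minimizes a simple rate-distortion cost. The candidate range, the lambda value, and the bit-length rate proxy are illustrative assumptions; the paper itself uses the JM 11.0 rate-distortion machinery.

```python
import numpy as np

def rd_best_offset(pred_block, orig_block, offsets=range(-8, 9), lam=10.0):
    """Pick the intra-prediction offset minimizing a simple RD cost.

    Distortion is the SSD between the offset-compensated prediction and
    the original block; the rate term is a crude proxy (bits to signal
    the offset). Illustrative only, not the JM 11.0 reference encoder.
    """
    best_cost, best_offset = float("inf"), 0
    for off in offsets:
        comp = np.clip(pred_block.astype(int) + off, 0, 255)
        ssd = np.sum((comp - orig_block.astype(int)) ** 2)
        rate = 1 + abs(off).bit_length()        # sign bit + magnitude bits
        cost = ssd + lam * rate
        if cost < best_cost:
            best_cost, best_offset = cost, off
    return best_offset

# Example: a flat prediction that underestimates the original by 5
orig = np.full((4, 4), 130, dtype=np.uint8)
pred = np.full((4, 4), 125, dtype=np.uint8)
print(rd_best_offset(pred, orig))   # -> 5
```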

Background Removal and ROI Segmentation Algorithms for Chest X-ray Images (흉부 엑스레이 영상에서 배경 제거 및 관심영역 분할 기법)

  • Park, Jin Woo;Song, Byung Cheol
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.11 / pp.105-114 / 2015
  • This paper proposes methods to remove the background area and segment the region of interest (ROI) in chest X-ray images. Conventional algorithms for improving the detail or contrast of images normally utilize brightness and frequency information. If such algorithms are applied to the entire image, reliable visual quality cannot be obtained because of unnecessary information such as the background area. We therefore propose two effective algorithms that remove the background and segment the ROI from input X-ray images. First, the background removal algorithm analyzes the histogram distribution of the input X-ray image; an initial background is estimated by appropriate thresholding on the histogram and removed, and the body contour and background area are then refined using a popular guided filter. The ROI (i.e., lung) segmentation algorithm first determines an initial bounding box using the lung's inherent location information. Next, the main intensity value of the lung is computed by vertical cumulative sums within the initial bounding box. Probable outliers are then removed using labeling and the previously determined background information, and a bounding box enclosing the lungs is finally obtained. Simulation results show that the proposed background removal and ROI segmentation algorithms outperform previous works.
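
A minimal sketch of the first stage (histogram-based initial background estimation) under stated assumptions: the background is taken to be the darkest histogram mode, with the threshold placed at the first valley after it. The window sizes are hypothetical, and the guided-filter refinement (e.g., OpenCV's cv2.ximgproc.guidedFilter) is omitted.

```python
import numpy as np

def initial_background_mask(xray, dark_peak_window=64, valley_window=96):
    """Estimate an initial background mask by histogram thresholding.

    Assumes the background is the dominant dark histogram mode and puts
    the threshold at the lowest bin shortly after that mode. A sketch of
    the paper's first stage; contour refinement is not shown.
    """
    hist, _ = np.histogram(xray, bins=256, range=(0, 256))
    dark_peak = int(np.argmax(hist[:dark_peak_window]))   # dark mode
    stop = min(dark_peak + valley_window, 256)
    valley = dark_peak + int(np.argmin(hist[dark_peak:stop]))
    return xray <= valley          # True where background is assumed

img = (np.random.rand(128, 128) * 255).astype(np.uint8)
print(initial_background_mask(img).mean())
```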

Substitutability of Noise Reduction Algorithm based Conventional Thresholding Technique to U-Net Model for Pancreas Segmentation (이자 분할을 위한 노이즈 제거 알고리즘 기반 기존 임계값 기법 대비 U-Net 모델의 대체 가능성)

  • Sewon Lim;Youngjin Lee
    • Journal of the Korean Society of Radiology / v.17 no.5 / pp.663-670 / 2023
  • In this study, we performed a quantitative comparison between region-growing-based segmentation combined with noise reduction algorithms and U-Net-based segmentation. We first applied a median filter, a median-modified Wiener filter, and the fast non-local means algorithm to computed tomography (CT) images, followed by region-growing-based segmentation; we also trained a U-Net-based segmentation model. To compare the segmentation performance of the noise-reduction cases against U-Net, we measured the root mean square error (RMSE), peak signal-to-noise ratio (PSNR), universal quality image index (UQI), and Dice similarity coefficient (DSC). The results showed that U-Net yielded the best segmentation performance: RMSE, PSNR, UQI, and DSC were 0.063, 72.11, 0.841, and 0.982, respectively, improvements of 1.97, 1.09, 5.30, and 1.99 times over the noisy images. In conclusion, U-Net proved more effective than noise reduction algorithms at enhancing segmentation performance in CT images.
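
For reference, the four evaluation metrics the study reports can be computed as below. This is a generic sketch of the standard definitions (with an assumed peak value of 255 for PSNR), not code from the paper.

```python
import numpy as np

def rmse(a, b):
    return float(np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2)))

def psnr(a, b, peak=255.0):
    e = rmse(a, b)
    return float("inf") if e == 0 else float(20 * np.log10(peak / e))

def uqi(a, b):
    # Universal quality image index (Wang & Bovik, 2002)
    a, b = a.astype(float).ravel(), b.astype(float).ravel()
    ma, mb = a.mean(), b.mean()
    cov = np.mean((a - ma) * (b - mb))
    return float(4 * cov * ma * mb / ((a.var() + b.var()) * (ma**2 + mb**2)))

def dice(seg, ref):
    # Dice similarity coefficient between two binary masks
    seg, ref = seg.astype(bool), ref.astype(bool)
    denom = seg.sum() + ref.sum()
    return 1.0 if denom == 0 else float(2 * np.logical_and(seg, ref).sum() / denom)
```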

Voice Activity Detection using Motion and Variation of Intensity in The Mouth Region (입술 영역의 움직임과 밝기 변화를 이용한 음성구간 검출 알고리즘 개발)

  • Kim, Gi-Bak;Ryu, Je-Woong;Cho, Nam-Ik
    • Journal of Broadcast Engineering / v.17 no.3 / pp.519-528 / 2012
  • Voice activity detection (VAD) is generally conducted by extracting features from the acoustic signal and applying a decision rule, so the performance of VAD algorithms driven by the acoustic input depends strongly on acoustic noise. When video is also available, VAD performance can be enhanced with visual information, which is unaffected by acoustic noise. Previous visual VAD algorithms usually detect lip activity with a single visual feature, such as active appearance models, optical flow, or intensity variation. Based on an analysis of each feature's weaknesses, we propose combining an intensity-change measure with optical flow in the mouth region so that the two cues compensate for each other. To minimize computational complexity, we develop simple measures that avoid statistical estimation or modeling: the optical flow is the averaged motion vector of a set of grid regions, and the intensity variation is detected by simple thresholding. To extract the mouth region, we propose a simple algorithm that first detects the two eyes and then uses the intensity profile to locate the center of the mouth. Experiments show that the proposed combination of two simple measures yields higher detection rates at a given false-positive rate than methods using a single feature.
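
A toy sketch of how the two cues might be fused inside the mouth ROI, under assumptions: per-cell mean frame difference stands in for the averaged grid optical flow, and the thresholds are made-up values, not those of the paper.

```python
import numpy as np

def lip_activity(prev_roi, cur_roi, grid=4, motion_thr=1.5, inten_thr=8.0):
    """Fuse a grid motion cue and an intensity-change cue in the mouth ROI.

    Motion cue: mean absolute frame difference per grid cell, averaged
    over cells (a stand-in for averaged grid optical flow). Intensity
    cue: simple thresholding of the global mean intensity change.
    Thresholds here are illustrative assumptions.
    """
    d = np.abs(cur_roi.astype(float) - prev_roi.astype(float))
    h, w = d.shape
    cells = d[:h // grid * grid, :w // grid * grid]
    cells = cells.reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
    motion = cells.mean()
    intensity = abs(cur_roi.mean() - prev_roi.mean())
    return motion > motion_thr or intensity > inten_thr
```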

The Obstacle Avoidance Algorithm of Mobile Robot using Line Histogram Intensity (Line Histogram Intensity를 이용한 이동로봇의 장애물 회피 알고리즘)

  • 류한성;최중경;구본민;박무열;방만식
    • Journal of the Korea Institute of Information and Communication Engineering / v.6 no.8 / pp.1365-1373 / 2002
  • In this paper, we present a vision algorithm for obstacle avoidance by a mobile robot equipped with a CCD camera. The algorithm is simple, comparing grey levels in the input images, and the robot moves according to the image processing results and commands from a host PC. The self-controlled mobile robot system consists of a digital signal processor, step motors, an RF module, and a CCD camera; the wireless RF module transmits movement commands between the robot and the host PC. The robot moves straight ahead until it recognizes an obstacle in the input image, which is preprocessed by edge detection, conversion, and thresholding, and it avoids the obstacle once recognized by means of the line histogram intensity. The host PC measures the intensity waveform along vertical scan lines spaced every 20 pixels, each waveform being the sequence of pixel values along the line; for example, the first scan line runs from (0, 0) to (0, 197) and the last from (280, 0) to (280, 197). We then separate uniform-waveform regions from non-uniform ones, and the extent of the uniform waveform identifies the obstacle region. We believe this algorithm is very useful for obstacle avoidance by mobile robots.
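
A speculative sketch of the line-histogram step, given the abstract's description: one vertical scan line is sampled every 20 pixels of the thresholded edge image, and each line's profile is classified as uniform or non-uniform (the variance threshold is an assumption).

```python
import numpy as np

def classify_scan_lines(binary_img, step=20, var_thr=0.05):
    """Sample one vertical scan line every `step` pixels of a thresholded
    edge image and flag lines whose intensity profile is uniform.

    Per the abstract, runs of uniform-profile lines mark the obstacle
    region. The variance threshold is an illustrative assumption.
    """
    h, w = binary_img.shape
    uniform = []
    for x in range(0, w, step):
        profile = binary_img[:, x].astype(float)
        uniform.append(profile.var() < var_thr)
    return uniform   # True where the scan-line profile is uniform
```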

Time-series Mapping and Uncertainty Modeling of Environmental Variables: A Case Study of PM10 Concentration Mapping (시계열 환경변수 분포도 작성 및 불확실성 모델링: 미세먼지(PM10) 농도 분포도 작성 사례연구)

  • Park, No-Wook
    • Journal of the Korean Earth Science Society / v.32 no.3 / pp.249-264 / 2011
  • A multi-Gaussian kriging approach extended to the space-time domain is presented for uncertainty modeling as well as time-series mapping of environmental variables. Within the multi-Gaussian framework, normal-score-transformed environmental variables are first decomposed into deterministic trend and stochastic residual components. After local temporal trend models are constructed, their parameters are estimated and interpolated in space. The space-time correlation structure of the stationary residual components is quantified using a product-sum space-time variogram model. The conditional cumulative distribution function (ccdf) is then modeled at all grid locations using this variogram model and space-time kriging. Finally, E-type estimates and conditional variances are computed from the ccdf models for spatial mapping and uncertainty analysis, respectively. The approach is illustrated with a case study of time-series Particulate Matter 10 ($PM_{10}$) concentration mapping in Incheon Metropolitan City, using monthly $PM_{10}$ concentrations at 13 stations over 3 years. The proposed approach generates reliable time-series $PM_{10}$ concentration maps with less mean bias and better prediction capability than conventional spatial-only ordinary kriging. It is also demonstrated that the conditional variances and the probability of exceeding a given threshold value are useful information sources for interpretation.
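
As a small illustration of the first step, here is the standard rank-based normal score transform assumed by the multi-Gaussian framework; the plotting-position convention used is one common choice, not necessarily the paper's.

```python
import numpy as np
from scipy.stats import norm

def normal_score_transform(values):
    """Rank-based normal score transform: map each datum to the standard
    normal quantile of its empirical cumulative probability."""
    values = np.asarray(values, dtype=float)
    n = len(values)
    ranks = np.argsort(np.argsort(values))   # 0 .. n-1
    p = (ranks + 0.5) / n                    # plotting positions in (0, 1)
    return norm.ppf(p)

print(normal_score_transform([3.1, 0.4, 7.9, 2.2]))
```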

A Study on the Improvement of Wavefront Sensing Accuracy for Shack-Hartmann Sensors (Shack-Hartmann 센서를 이용한 파면측정의 정확도 향상에 관한 연구)

  • Roh, Kyung-Wan;Uhm, Tae-Kyoung;Kim, Ji-Yeon;Park, Sang-Hoon;Youn, Sung-Kie;Lee, Jun-Ho
    • Korean Journal of Optics and Photonics / v.17 no.5 / pp.383-390 / 2006
  • Shack-Hartmann wavefront sensors are the most popular devices for measuring wavefronts in the field of adaptive optics. A Shack-Hartmann sensor measures the centroid of the spot irradiance distribution formed by each micro-lens; the centroids are linearly proportional to the local mean slopes of the wavefront over the corresponding sub-apertures, and the wavefront is then reconstructed from these slopes. The uncertainty of the sensor arises from various factors, including detector noise, the limited size of the detector, and the magnitude and profile of the spot irradiance distribution. This paper investigates, through computer simulation, noise propagation in two major centroid evaluation algorithms: the first-order moment of irradiance (center-of-gravity) algorithm and the correlation algorithm. First, the center-of-gravity algorithm is shown to depend strongly on the noise magnitude and on the shape and size of irradiance sidelobes, effects that can be minimized by optimal thresholding. Second, the correlation algorithm is robust to those effects, but its measurement accuracy is vulnerable to size variation of the reference spot. The investigation is confirmed by experimental measurements of defocus wavefront aberrations with a Shack-Hartmann sensor using the two algorithms.
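
A minimal sketch of the thresholded center-of-gravity (first-order moment) centroid that the paper analyzes; the threshold handling shown (subtract-and-clip) is one common variant and an assumption here.

```python
import numpy as np

def centroid_cog(spot, threshold=0.0):
    """Thresholded center-of-gravity centroid of a spot image.

    Pixels below `threshold` are suppressed first, which mitigates the
    plain CoG's sensitivity to noise and irradiance sidelobes.
    Returns (x, y) in pixel coordinates, or None for an empty image.
    """
    img = np.where(spot > threshold, spot - threshold, 0.0)
    total = img.sum()
    if total == 0:
        return None
    ys, xs = np.indices(img.shape)
    return (xs * img).sum() / total, (ys * img).sum() / total
```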

Fast Detection of Finger-vein Region for Finger-vein Recognition (지정맥 인식을 위한 고속 지정맥 영역 추출 방법)

  • Kim, Sung-Min;Park, Kang-Roung;Park, Dong-Kwon;Won, Chee-Sun
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.1 / pp.23-31 / 2009
  • Recently, biometric techniques such as face recognition, fingerprint recognition, and iris recognition have been widely applied in areas including door access control, financial security, and electronic passports. This paper presents a method of using finger-vein patterns for personal identification. In general, when a finger-vein image is acquired from the camera, conditions such as the amount of penetrating infrared light and camera noise make it difficult to segment the veins from the background, which in turn degrades identification performance. To solve this problem, we propose a novel, fast method for extracting the finger-vein region, with two advantages over previous methods. First, we adopt a locally adaptive thresholding method to binarize the acquired finger-vein image. Second, simple morphological opening and closing are used to remove segmentation noise, after which the finger-vein region is obtained by skeletonization. Experimental results show that the proposed method quickly and accurately extracts the finger-vein region without the various time-consuming preprocessing filters used in other methods.
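
A compact sketch of the two stages the abstract highlights, with assumed window size and offset: a local-mean adaptive threshold (veins appear dark under infrared) followed by morphological opening and closing; the final skeletonization step is omitted.

```python
import numpy as np
from scipy import ndimage

def segment_vein(img, win=15, offset=5):
    """Locally adaptive threshold plus morphological cleanup.

    A pixel is marked as vein when it is darker than its local mean by
    more than `offset`; opening and closing then remove segmentation
    noise. Window size and offset are illustrative assumptions.
    """
    local_mean = ndimage.uniform_filter(img.astype(float), size=win)
    mask = img < local_mean - offset
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
    return mask
```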

Analysis on Topographic Normalization Methods for 2019 Gangneung-East Sea Wildfire Area Using PlanetScope Imagery (2019 강릉-동해 산불 피해 지역에 대한 PlanetScope 영상을 이용한 지형 정규화 기법 분석)

  • Chung, Minkyung;Kim, Yongil
    • Korean Journal of Remote Sensing / v.36 no.2_1 / pp.179-197 / 2020
  • Topographic normalization reduces terrain effects on reflectance by adjusting the brightness values of image pixels so that pixels covering the same land cover have equal values. Topographic effects are induced by the imaging conditions and tend to be large in highly mountainous regions, so image analysis over mountainous terrain, such as wildfire damage assessment, requires appropriate topographic normalization to yield accurate results. Most previous studies, however, evaluated topographic normalization on satellite images of moderate-to-low spatial resolution, and the alleviation of topographic effects on multi-temporal high-resolution images has not been studied sufficiently. In this study, topographic normalization was evaluated for each band to select the optimal combination of techniques for rapid and accurate wildfire damage assessment using PlanetScope images. PlanetScope has considerable potential in disaster management, as it provides daily 3 m resolution imagery with global coverage. For comparison, seven widely used topographic normalization methods were applied to both pre-fire and post-fire images, and analysis of the bi-temporal images suggests an optimal combination of techniques applicable to images with different land-cover compositions. A vegetation index was then calculated from the topographically normalized images, and the wildfire damage detection results obtained by thresholding the index showed improved detection accuracy for both object-based and pixel-based image analysis. In addition, a burn severity map was constructed to verify the effects of topographic correction on a continuous distribution of brightness values.
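
For concreteness, one widely used topographic normalization method of the kind the paper compares is the cosine correction, sketched below in its textbook form; which seven methods the paper evaluates and how they are selected per band is detailed in the paper itself.

```python
import numpy as np

def cosine_correction(band, slope, aspect, sun_zenith, sun_azimuth):
    """Cosine topographic correction of a reflectance band.

    cos(i) is the local illumination angle computed from terrain slope
    and aspect and the sun geometry; all angles are in radians.
    """
    cos_i = (np.cos(sun_zenith) * np.cos(slope) +
             np.sin(sun_zenith) * np.sin(slope) *
             np.cos(sun_azimuth - aspect))
    cos_i = np.clip(cos_i, 0.1, None)   # avoid blow-up at grazing angles
    return band * np.cos(sun_zenith) / cos_i
```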

Bar Code Location Algorithm Using Pixel Gradient and Labeling (화소의 기울기와 레이블링을 이용한 효율적인 바코드 검출 알고리즘)

  • Kim, Seung-Jin;Jung, Yoon-Su;Kim, Bong-Seok;Won, Jong-Un;Won, Chul-Ho;Cho, Jin-Ho;Lee, Kuhn-Il
    • The KIPS Transactions:PartD / v.10D no.7 / pp.1171-1176 / 2003
  • In this paper, we propose an effective bar code detection algorithm using feature analysis and labeling. After computing the direction of each pixel using four line operators, we obtain a histogram of pixel directions for each block. We calculate the difference between the maximum and minimum values of the histogram and regard the block with the largest difference as a block of the bar code region. A line passing through the bar code region is obtained from the selected block, and blocks of interest are then detected to obtain a more accurate line. The largest difference value is used to set the threshold for binarizing the image. After binarization, labeling is applied to find the blocks of interest in the bar code region, from which the gradient and center of the bar code are calculated; the line passing through the bar code is then obtained and the bar code is detected. By reading the gray levels along this line, the bar code information is extracted.
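
A rough sketch of the block-scoring idea: a per-block histogram of gradient directions is scored by the difference between its maximum and minimum bins, which is large where edges share one dominant direction, as in a bar code. Central-difference gradients (np.gradient) stand in for the paper's four line operators, and the block size and bin count are assumptions.

```python
import numpy as np

def barcode_block_score(gray, block=32, nbins=8):
    """Score each block by (max - min) of its gradient-direction histogram.

    Returns the top-left corner of the best-scoring block and its score.
    Bar code blocks score highly because most gradients fall in one bin.
    """
    gy, gx = np.gradient(gray.astype(float))
    ang = (np.arctan2(gy, gx) % np.pi) / np.pi * nbins   # direction bins
    h, w = gray.shape
    best, best_xy = -1, None
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            hist, _ = np.histogram(ang[y:y+block, x:x+block].astype(int),
                                   bins=nbins, range=(0, nbins))
            score = int(hist.max() - hist.min())
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy, best
```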