• Title/Summary/Keyword: normalization method

Search Results: 639

Optimization of Dehazing Method for Efficient Implementation (효율적인 구현을 위한 안개 제거 방법의 최적화)

  • Kim, Minsang;Park, Yongmin;Kim, Byung-O;Kim, Tae-Hwan
    • Journal of the Institute of Electronics and Information Engineers / v.53 no.10 / pp.58-65 / 2016
  • This paper presents optimization techniques to reduce the processing time of the dehazing method and proposes an efficient dehazing method based on them. In the proposed techniques, the atmospheric light is estimated based on the distributed sorting of the dark channel pixels, so as to reduce the computations. The normalization process required in the transmission estimation is simplified by the assumption that the atmospheric light is monochromatic. In addition, the dark channel is modified into the median dark channel in order to eliminate the transmission refinement process while achieving a comparable dehazing quality. The proposed dehazing method based on the optimization techniques is presented and its performance is investigated by developing a prototype system. When compared to the previous method, the proposed dehazing method reduces the processing time by 65% while maintaining the dehazing quality.
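
The dark-channel and transmission steps described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names, the patch size, and the `omega` weight are assumptions. It shows how a monochromatic (scalar) atmospheric light lets the per-channel normalization in the transmission estimate collapse into a single scalar division.

```python
import numpy as np

def dark_channel(img, patch=3):
    # Per-pixel minimum over the color channels, followed by a local
    # minimum filter over a patch x patch window (edge-padded).
    min_rgb = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    h, w = min_rgb.shape
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def transmission(img, A, omega=0.95, patch=3):
    # With monochromatic atmospheric light A (a single scalar), the
    # per-channel normalization I_c / A_c collapses to one scalar division.
    return 1.0 - omega * dark_channel(img / A, patch)
```

The paper's median dark channel would replace the `min` filter above with a median over the patch, which is what removes the need for transmission refinement.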

A Study on Design and Interpretation of Pattern Laser Coordinate Tracking Method for Curved Screen Using Multiple Cameras (다중카메라를 이용한 곡면 스크린의 패턴 레이저 좌표 추적 방법 설계와 해석 연구)

  • Jo, Jinpyo;Kim, Jeongho;Jeong, Yongbae
    • Journal of Platform Technology / v.9 no.4 / pp.60-70 / 2021
  • This paper proposes a method capable of stably tracking the coordinates of a patterned laser image in a curved-screen shooting system using two or more channels of multiple cameras. The method can track and acquire target points very effectively when applied to a multi-screen shooting setup that can replace the HMD shooting method. Images of severely deformed curved screens obtained from the individual cameras are corrected through image normalization, image binarization, and noise removal. The corrected image is then converted into a Euclidean space map, built from matching points, in which the firing point can be tracked easily. In the experiments, the image coordinates of the pattern laser were stably extracted in the curved-screen shooting system, and the error between the real-world target-point position and its position in the wide-area Euclidean map was minimized. The reliability of the proposed method was confirmed through these experiments.
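
The normalize-binarize-locate pipeline in the abstract can be sketched as below. This is a toy sketch under assumptions (the threshold value and the centroid-based spot locator are illustrative, not from the paper), showing how normalization and binarization isolate the bright laser spot before its coordinate is read off.

```python
import numpy as np

def normalize(img):
    # Min-max normalization to [0, 1], guarding against a flat image.
    lo, hi = float(img.min()), float(img.max())
    return (img - lo) / (hi - lo + 1e-9)

def binarize(img, thresh=0.8):
    # After normalization, the pattern laser is the brightest structure.
    return img >= thresh

def spot_centroid(mask):
    # Coordinate of the laser spot as the centroid of the binary mask.
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())
```

On a real frame, noise removal (e.g. a morphological opening) would sit between `binarize` and `spot_centroid`.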

A Study on 8kbps PC-MPC by Using Position Compensation Method of Multi-Pulse (멀티펄스의 위치보정 방법을 이용한 8kbps PC-MPC에 관한 연구)

  • Lee, See-Woo
    • Journal of Digital Convergence / v.11 no.5 / pp.285-290 / 2013
  • In MPC coding that uses voiced and unvoiced excitation sources, the speech waveform can be distorted. The distortion is caused by the normalization of the voiced synthesis waveform in the process of restoring the multi-pulses of the representative section. To solve this problem, this paper presents a position compensation method (PC-MPC) for the multi-pulses in each pitch interval in order to reduce the distortion of the speech waveform. It was confirmed that the method can synthesize a waveform close to the original speech. The MPC and the PC-MPC using the multi-pulse position compensation method were then evaluated. As a result, the SNRseg of PC-MPC was improved by 0.4 dB for female voices and 0.5 dB for male voices, respectively, compared to the MPC, confirming that the distortion of the speech waveform could be controlled. Accordingly, this method is expected to be applicable to cellular phones and smartphones using low-bit-rate excitation sources.
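
The SNRseg figure quoted above is a standard segmental SNR. A minimal sketch of how it is computed (frame length and skip rule are conventional assumptions, not taken from the paper):

```python
import numpy as np

def snr_seg(ref, synth, frame=160):
    # Segmental SNR: mean over frames of 10*log10(signal power / error power),
    # skipping frames with zero signal or zero error.
    n = min(len(ref), len(synth)) // frame * frame
    vals = []
    for s in range(0, n, frame):
        r = ref[s:s + frame]
        e = r - synth[s:s + frame]
        num, den = float(np.sum(r ** 2)), float(np.sum(e ** 2))
        if num > 0 and den > 0:
            vals.append(10.0 * np.log10(num / den))
    return float(np.mean(vals))
```

Averaging per-frame log ratios (rather than one global ratio) is what makes the measure sensitive to distortion in low-energy segments.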

Image Retrieval Using Combination of Color and Multiresolution Texture Features (칼라 및 다해상도 질감 특징 결합에 의한 영상검색)

  • Chun Young-deok;Sung Joong-ki;Kim Nam-chul
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.9C / pp.930-938 / 2005
  • We propose a content-based image retrieval (CBIR) method based on an efficient combination of a color feature and multiresolution texture features. As the color feature, an HSV autocorrelogram is chosen, which is known to measure the spatial correlation of colors well. As texture features, BDIP and BVLC moments are chosen, which are known to measure local intensity variations and local texture smoothness well, respectively. The texture features are obtained in a wavelet pyramid of the luminance component of a color image. The extracted features are combined for efficient similarity computation by normalization depending on their dimensions and standard deviation vectors. Experimental results show that the proposed method yielded on average 8% and 11% better performance in precision vs. recall than the method using BDIP and BVLC moments and the method using the color autocorrelogram, respectively, and at least 10% better performance than the methods using wavelet moments, CSD, and color histograms. Especially, the proposed method shows excellent performance over the other methods on image DBs containing images of various resolutions.
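
One plausible reading of "normalization depending on their dimensions and standard deviation vectors" is sketched below. This is an assumption-laden illustration, not the paper's exact formula: each feature's distance is divided component-wise by its standard deviation and by its dimension, so features of different sizes and scales contribute comparably.

```python
import numpy as np

def combined_distance(query_feats, target_feats, sigmas):
    # Sum of per-feature L1 distances, each divided by the feature's
    # per-component standard deviation and by its dimension, so that
    # color and texture features of different lengths are comparable.
    total = 0.0
    for q, t, s in zip(query_feats, target_feats, sigmas):
        total += float(np.sum(np.abs(q - t) / (s + 1e-12))) / q.size
    return total
```

The standard-deviation vectors would be estimated once over the whole image database, then reused for every query.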

Improvement in the classification performance of Raman spectra using a hierarchical tree structure (계층적 트리 구조를 이용한 라만스펙트럼 판별 성능 개선)

  • Park, Jun-Kyu;Baek, Sung-June;Seo, Yu-Gyeong;Seo, Sung-Il
    • Journal of the Korea Academia-Industrial cooperation Society / v.15 no.8 / pp.5280-5287 / 2014
  • This paper proposes a method in which classes are grouped into a hierarchical tree structure for the effective classification of Raman spectra. As experimental data, the Raman spectra of 28 chemical compounds were obtained and preprocessed with noise removal and normalization. Spectra that induced classification errors were grouped into the same class, composing a hierarchical class structure. Each upper- and lower-level class was classified using the PCA-MAP method. According to the experimental results, 100% classification was achieved with 2.7 features on average when the proposed method was applied. Considering that the same classification rate was achieved with 6 features using the conventional method, the proposed method was found to be much better than the conventional one in terms of total computational complexity and practical application.
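
The tree-structured classification can be sketched generically as below. The dictionary layout and the callable-per-node design are illustrative assumptions; in the paper each internal node would hold a PCA-MAP classifier rather than a simple threshold.

```python
def classify_hierarchical(x, node):
    # Walk the class tree: each internal node holds a classifier that
    # selects a child subtree; leaves hold the final class label.
    while isinstance(node, dict):
        node = node["children"][node["clf"](x)]
    return node
```

Because each node only discriminates among a few grouped classes, each level needs fewer features than one flat 28-way classifier, which is the source of the reported reduction from 6 features to 2.7 on average.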

Text extraction from camera based document image (카메라 기반 문서영상에서의 문자 추출)

  • 박희주;김진호
    • Journal of Korea Society of Industrial Information Systems / v.8 no.2 / pp.14-20 / 2003
  • This paper presents a text extraction method for camera-based document images. Camera-based document images are more difficult to recognize than scanner-based images because of segmentation problems caused by variable lighting conditions and versatile fonts. Both document binarization and character extraction are important processes in recognizing camera-based document images. After converting the color image into a gray-level image, gray-level normalization is used to extract the character region independently of the lighting condition and background image. A local adaptive binarization method is then used to extract characters from the background after noise removal. In this character extraction step, the information of the horizontal and vertical projections and the connected components is used to extract character lines, word regions, and character regions. To evaluate the proposed method, we experimented with documents mixing Hangul, English, symbols, and digits from the ETRI database. Encouraging binarization and character extraction results were obtained.
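
The normalization and local adaptive binarization steps can be sketched as below. This is a generic mean-based adaptive threshold under assumed parameters (`win`, `k`), not necessarily the exact scheme of the paper; it shows why a local threshold copes with uneven lighting where a single global threshold fails.

```python
import numpy as np

def gray_normalize(gray):
    # Stretch gray levels to [0, 255] to reduce the effect of lighting.
    lo, hi = float(gray.min()), float(gray.max())
    return (gray - lo) * 255.0 / (hi - lo + 1e-9)

def local_binarize(gray, win=5, k=0.9):
    # A pixel is text (True) if darker than k times its local mean,
    # computed over a win x win edge-padded neighborhood.
    pad = win // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    h, w = gray.shape
    out = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            out[i, j] = gray[i, j] < k * padded[i:i + win, j:j + win].mean()
    return out
```

Line and word extraction would then follow by finding runs in the row and column sums (projections) of the resulting mask.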


A novel evidence theory model and combination rule for reliability estimation of structures

  • Tao, Y.R.;Wang, Q.;Cao, L.;Duan, S.Y.;Huang, Z.H.H.;Cheng, G.Q.
    • Structural Engineering and Mechanics / v.62 no.4 / pp.507-517 / 2017
  • Due to the discontinuous nature of uncertainty quantification in conventional evidence theory (ET), the computational cost of reliability analysis based on an ET model is very high. A novel ET model based on fuzzy distributions and the corresponding combination rule to synthesize the judgments of experts are put forward in this paper. The intersection and union of membership functions are defined as the belief and plausible membership functions respectively, and Murphy's average combination rule is adopted to combine the basic probability assignments for focal elements. Then the combined membership functions are transformed into the equivalent probability density function by a normalizing factor. Finally, a reliability analysis procedure for structures with a mixture of epistemic and aleatory uncertainties is presented, in which the equivalent normalization method is adopted to solve for the upper and lower bounds of reliability. The effectiveness of the procedure is demonstrated by a numerical example and an engineering example. The results also show that the reliability interval calculated by the suggested method is almost identical to that solved by the conventional method, while the computational cost of the suggested procedure is much less. The suggested ET model provides a new way to flexibly represent epistemic uncertainty and an efficient method to estimate the reliability of structures with a mixture of epistemic and aleatory uncertainties.
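
Two of the ingredients named above can be sketched briefly. This is an illustrative simplification under assumptions (discrete focal elements keyed by label; trapezoidal integration on a sampled grid), not the paper's full procedure: the averaging step of Murphy's rule over expert BPAs, and the normalizing factor that turns a combined membership function into an equivalent probability density.

```python
import numpy as np

def murphy_average(bpas):
    # Murphy's rule first averages the experts' basic probability
    # assignments over the focal elements before combining.
    keys = set().union(*bpas)
    return {k: sum(b.get(k, 0.0) for b in bpas) / len(bpas) for k in keys}

def membership_to_pdf(x, mu):
    # Scale a membership function so it integrates to one, giving the
    # equivalent probability density (trapezoidal normalizing factor).
    area = float(np.sum((mu[1:] + mu[:-1]) * 0.5 * np.diff(x)))
    return mu / area
```

In the full procedure, the averaged assignment would still be combined across sources before the density conversion.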

Suspectible Object Detection Method for Radiographic Images (방사선 검색기 영상 내의 의심 물체 탐지 방법)

  • Kim, Gi-Tae;Kang, Hyun-Soo
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.3 / pp.670-678 / 2014
  • This paper presents a method to extract objects in radiographic images, where all the allowable combinations of segmented regions are compared to a target object using a Fourier descriptor. In object extraction for ordinary images, a main problem is occlusion. Radiographic images have the advantage that the shape of an object is not occluded by other objects, because radiographic images represent the amount of radiation penetrating the objects. Given this absence of occlusion, shape-based descriptors can be very effective for finding objects. Overall, the proposed object extraction method consists of three steps: segmenting regions, finding all combinations of the segmented regions, and matching the combinations to the shape of the target object. In finding the combinations, we greatly reduce computation by removing unnecessary combinations before matching. In matching, we employ the Fourier descriptor so that the proposed method is rotation and shift invariant. Additionally, shape normalization is adopted to achieve scale invariance. Experiments verify that the proposed method works well in extracting objects.
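
The invariances claimed for the Fourier descriptor can be shown in a few lines. A minimal sketch assuming the boundary is sampled as complex points x + iy (the truncation length `n` is an arbitrary choice here): dropping the DC term removes translation, keeping magnitudes removes rotation, and dividing by the first magnitude removes scale.

```python
import numpy as np

def fourier_descriptor(contour, n=8):
    # contour: complex samples x + iy of the object boundary.
    # Drop the DC term for shift invariance, keep only magnitudes for
    # rotation invariance, divide by the first magnitude for scale
    # invariance (the abstract's shape normalization).
    F = np.fft.fft(contour)
    mags = np.abs(F[1:n + 1])
    return mags / (mags[0] + 1e-12)
```

Matching a region combination to the target then reduces to comparing two such descriptor vectors.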

Measurement of nuclear fuel assembly's bow from visual inspection's video record

  • Dusan Plasienka;Jaroslav Knotek;Marcin Kopec;Martina Mala;Jan Blazek
    • Nuclear Engineering and Technology / v.55 no.4 / pp.1485-1494 / 2023
  • The bow of the nuclear fuel assembly is a well-known phenomenon. One of the vital criteria throughout the history of nuclear fuel development has been the fuel assembly's mechanical stability. Once present, fuel assembly bow can lead to safety issues such as excessive water gaps and power redistribution, or even incomplete rod insertion (IRI). Extensive bow can result in assembly handling and loading problems. This is why the fuel assembly's bow is one of the most often controlled geometrical factors during periodic fuel inspections for VVER, compared e.g. to on-site fuel rod gap measurements or other instrumental measurements performed on-site. Our proposed screening method uses the existing video records from fuel inspection. We establish video frame normalization and aggregation for the purpose of bow measurement. The whole process is done by digital image processing algorithms, which analyze rotations of video frames, extract angles caused by the fuel assembly's torsion, and reconstruct the torsion schema. This approach provides results comparable to the commonly utilized method. We tested the new approach in real operation on 19 fuel assemblies with different campaign numbers and designs, where the deviation from other methods was less than 2% on average. Because the method has not yet been validated in full-scale fuel inspection measurements, on the basis of these preliminary results we recommend it as a complementary part of standard bow measurement procedures, to increase measurement robustness, lower time consumption, and preserve or increase accuracy. After validation is completed, the proposed method is expected to allow standalone fuel assembly bow measurements.
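
Two building blocks of such a pipeline, frame aggregation and angle extraction, can be sketched as below. Both functions are hypothetical stand-ins, not the authors' algorithms: median aggregation of registered frames suppresses video noise, and a least-squares line fit to detected edge points yields a tilt angle of the kind used to reconstruct the torsion schema.

```python
import numpy as np

def aggregate_frames(frames):
    # Median aggregation of registered video frames suppresses
    # frame-to-frame noise before any geometric measurement.
    return np.median(np.stack(frames), axis=0)

def tilt_angle_deg(xs, ys):
    # Least-squares line fit to edge points (x as a function of y);
    # the line's angle from vertical approximates the local tilt.
    slope, _ = np.polyfit(ys, xs, 1)
    return float(np.degrees(np.arctan(slope)))
```

In practice the frames must first be registered (rotation-compensated) so that the median does not blur the assembly edges.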

Robust Eye Localization using Multi-Scale Gabor Feature Vectors (다중 해상도 가버 특징 벡터를 이용한 강인한 눈 검출)

  • Kim, Sang-Hoon;Jung, Sou-Hwan;Cho, Seong-Won;Chung, Sun-Tae
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.1 / pp.25-36 / 2008
  • Eye localization means finding the centers of the pupils, and is necessary for face recognition and related applications. Most eye localization methods reported so far still need improvement in robustness as well as precision for successful application. In this paper, we propose a robust eye localization method using multi-scale Gabor feature vectors without a big computational burden. Eye localization using Gabor feature vectors is already employed in methods such as EBGM, but the method employed in EBGM is known not to be robust with respect to initial values, illumination, and pose, and may need an extensive search range to achieve the required performance, which can cause a big computational burden. The proposed method takes a multi-scale approach. It first localizes the eyes in the lowest-resolution face image by utilizing the Gabor jet similarity between the Gabor feature vector at estimated initial eye coordinates and the Gabor feature vectors in the eye model of the corresponding scale. It then localizes the eyes in the next-scale face image in the same way, but with initial eye points estimated from the eye coordinates localized in the lower-resolution image. After repeating this process recursively, the proposed method finally localizes the eyes in the original-resolution face image. Also, the proposed method applies an effective illumination normalization in the preprocessing stage to make the multi-scale approach more robust to illumination and to enhance the eye detection success rate. Experiment results verify that the proposed eye localization method improves the precision rate without a big computational overhead compared to eye localization methods reported in previous research and is robust to variations of pose and illumination.
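
The coarse-to-fine loop described above can be sketched generically. This is an illustrative skeleton under assumptions: `localize_window` is a toy stand-in (brightest pixel near the seed) for the Gabor-jet similarity search at one scale, and the factor-of-two pyramid spacing is assumed. The point is how each level's estimate, scaled up, seeds a small search window at the next level, keeping the total search cheap.

```python
def localize_window(img, init, r=2):
    # Toy per-scale "localizer": brightest pixel within radius r of init.
    # (Stands in for the Gabor-jet similarity search at one scale.)
    x0, y0 = int(init[0]), int(init[1])
    best, bx, by = None, x0, y0
    for y in range(max(0, y0 - r), min(len(img), y0 + r + 1)):
        for x in range(max(0, x0 - r), min(len(img[0]), x0 + r + 1)):
            if best is None or img[y][x] > best:
                best, bx, by = img[y][x], x, y
    return (bx, by)

def coarse_to_fine(pyramid, localize):
    # pyramid: images from coarsest to finest, each twice the previous
    # size. The estimate at one level, scaled by two, seeds the search
    # at the next level, so only a small window is searched per scale.
    est = None
    for img in pyramid:
        if est is None:
            init = (len(img[0]) // 2, len(img) // 2)  # start at center
        else:
            init = (est[0] * 2, est[1] * 2)
        est = localize(img, init)
    return est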