• Title/Summary/Keyword: 보정처리 (correction processing)


FPGA Design and Realization for Scanning Image Enhancement using LUT Shading Correction Algorithm (LUT 쉐이딩 보정 알고리듬을 이용한 스캐닝 이미지 향상 FPGA 설계 구현)

  • Kim, Young-Bin;Ryu, Conan K.R.
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.8 / pp.1759-1764 / 2012
  • This paper describes the FPGA design and realization of a shading correction algorithm for CCD scan image enhancement. The shading algorithm uses a LUT (look-up table). For image enhancement, the histogram minimum and maximum values over all pixels of the CCD image are extracted, and the shading LUT is constructed to keep the histogram constant using offset data. The sensor output is converted to the corrected image through the LUT in preprocessing, and the conversion system is realized in an FPGA so that it can operate in real time. Experiments on the proposed system show a scanning time below 2.4 ms. The system enables a low-speed processor system to scan enhanced images in real time at low cost.
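The LUT construction described in the abstract (stretching each pixel value so the histogram spans a fixed range) can be sketched roughly as follows; the function names and the 8-bit output range are illustrative assumptions, not details from the paper:

```python
import numpy as np

def build_shading_lut(image, out_min=0, out_max=255):
    """Build a 256-entry LUT that linearly stretches the image's
    histogram range [min, max] to [out_min, out_max]."""
    lo, hi = int(image.min()), int(image.max())
    if hi == lo:
        return np.full(256, out_min, dtype=np.uint8)
    levels = np.arange(256)
    stretched = (levels - lo) * (out_max - out_min) / (hi - lo) + out_min
    return np.clip(stretched, out_min, out_max).astype(np.uint8)

def correct(image, lut):
    # A LUT lookup is a single indexing operation per pixel,
    # which is why it maps well onto FPGA block RAM.
    return lut[image]

img = np.array([[10, 50], [90, 130]], dtype=np.uint8)
lut = build_shading_lut(img)
out = correct(img, lut)
```

Because the per-pixel work is a single table read, the same structure runs at line rate in hardware.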

The Implementation of Automatic Compensation Modules for Digital Camera Image by Recognition of the Eye State (눈의 상태 인식을 이용한 디지털 카메라 영상 자동 보정 모듈의 구현)

  • Jeon, Young-Joon;Shin, Hong-Seob;Kim, Jin-Il
    • Journal of the Institute of Convergence Signal Processing / v.14 no.3 / pp.162-168 / 2013
  • This paper examines the implementation of automatic compensation modules for digital camera images taken while a person's eyes are closed. The modules detect the face and eye regions and then recognize the eye state. If the image is taken with the eyes closed, the modules correct the eyes, producing the output image from the most satisfactory eye-state image among past frames stored in a buffer. To recognize the face and eyes precisely, a preprocessing image-alignment step is carried out using the SURF algorithm and a homography. The face and eye regions are detected with the Haar-like feature algorithm. To decide whether an eye is open, a similarity comparison is used together with template matching of the eye region. The modules were tested in various facial environments and confirmed to effectively correct images containing faces.
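The open/closed decision via template matching can be sketched with normalized cross-correlation as the similarity measure; the threshold, patch shapes, and function names here are assumptions for illustration:

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between an eye patch and an
    open-eye template; 1.0 means identical up to brightness/contrast."""
    a = patch.astype(float) - patch.mean()
    b = template.astype(float) - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def eye_is_open(patch, open_template, threshold=0.7):
    # Decide open vs. closed by similarity to the open-eye template.
    return ncc(patch, open_template) >= threshold

template = np.array([[0, 9, 0], [9, 9, 9], [0, 9, 0]])
same = eye_is_open(template, template)          # identical patch
flat = eye_is_open(np.ones((3, 3)), template)   # featureless patch
```

NCC is robust to uniform brightness changes, which matters when past buffered frames were taken under slightly different exposure.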

A Study on Underwater Camera Image Correction for Ship Bottom Inspection Using Underwater Drone (수중드론을 활용한 선박 선저검사용 수중 카메라 영상보정에 대한 연구)

  • Ha, Yeon-chul;Park, Junmo
    • Journal of the Institute of Convergence Signal Processing / v.20 no.4 / pp.186-192 / 2019
  • In general, many marine organisms attach to the bottom of a ship in operation or under construction. This increases the roughness of the hull surface, causing loss of ship speed and, in turn, economic losses and environmental pollution. This study acquires and utilizes images from a camera attached to an underwater drone to check the condition of the ship's bottom. From the acquired images, an administrator visually judges the roughness caused by marine life. Applying a filter algorithm that corrects the image toward its original appearance therefore helps in correctly determining whether marine life is attached. Various correction filters are required for the underwater image correction algorithm, and lighting suited to the dark underwater environment strongly affects the judgment. The test results for each correction algorithm, and the roughness observed under each, suggest that the approach is applicable to many fields.
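The abstract does not name its correction filters; as one hedged illustration, a gray-world white-balance filter, a common baseline for removing the blue-green color cast of underwater footage, could look like this (the function name and float image convention are assumptions):

```python
import numpy as np

def gray_world(image):
    """Gray-world white balance: scale each channel so its mean
    matches the global mean, countering the blue-green underwater cast.
    `image` is H x W x 3, float, arbitrary positive scale."""
    img = image.astype(float)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return img * gains  # per-channel gains broadcast over pixels

# Synthetic frame with a strong blue-green cast.
frame = np.ones((4, 4, 3)) * np.array([0.2, 0.6, 1.0])
balanced = gray_world(frame)
```

Gray-world assumes the scene averages to neutral gray, which fails on hulls dominated by one color; that is one reason the study needs several filters rather than one.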

A Study on Pattern Inspection of LCD Using Color Compensation and Pattern Matching (색상보정 및 패턴 정합기법을 이용한 LCD 패턴검사에 관한 연구)

  • Ye, Soo-Young;Yoo, Choong-Woong;Nam, Ki-Gon
    • Journal of the Institute of Convergence Signal Processing / v.7 no.4 / pp.161-168 / 2006
  • In this paper, we propose a method for pattern inspection of LCD modules using color compensation and pattern matching. Pattern matching is the inspection method generally used for LCD modules in industry. LCD module images suffer from many variations, such as brightness differences in the backlight, the optical characteristics of the liquid crystal, differences in the light transmitted while driving the LCD, and color differences caused by the lighting. Conventional methods without color compensation cannot handle these variations and reduce the efficiency of LCD module inspection. The proposed method inspects for defects through pattern matching after compensating the color differences of the LCD caused by these various factors. First, the LCD pattern of the reference image is adjusted to a standard color tone. Then preprocessing and the pattern matching algorithm are performed on the compensated image. Experiments confirmed that this algorithm is useful for detecting defects in LCD modules and that the proposed method easily detects faulty products.

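The color-compensation step (adjusting the test image toward the reference pattern's standard tone before matching) might be sketched as follows, under the assumption of a simple per-channel gain model; the matching score here is a placeholder, not the paper's algorithm:

```python
import numpy as np

def compensate_to_reference(test_img, ref_img):
    """Scale each channel of the test image so its mean matches the
    corresponding channel mean of the reference LCD pattern image."""
    test = test_img.astype(float)
    ref = ref_img.astype(float)
    gains = ref.reshape(-1, 3).mean(axis=0) / test.reshape(-1, 3).mean(axis=0)
    return test * gains

def match_score(a, b):
    # Simple pattern-matching score: mean absolute difference
    # (lower is better); real inspection uses correlation-based matching.
    return float(np.abs(a - b).mean())

ref = np.full((2, 2, 3), 100.0)
shifted = ref * np.array([0.8, 1.0, 1.2])   # global color shift, no defect
before = match_score(shifted, ref)
after = match_score(compensate_to_reference(shifted, ref), ref)
```

A defect-free panel with only a lighting-induced color shift scores poorly before compensation and perfectly after it, which is exactly the false-reject case the paper targets.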

Deep Learning-Based Outlier Detection and Correction for 3D Pose Estimation (3차원 자세 추정을 위한 딥러닝 기반 이상치 검출 및 보정 기법)

  • Ju, Chan-Yang;Park, Ji-Sung;Lee, Dong-Ho
    • KIPS Transactions on Software and Data Engineering / v.11 no.10 / pp.419-426 / 2022
  • In this paper, we propose a method to improve the accuracy of 3D human pose estimation models across various movements. Existing pose estimation models suffer from jitter, inversion, swap, and miss errors that produce wrong joint coordinates, lowering the accuracy with which the models detect the exact coordinates of human poses. We propose a pipeline consisting of a detection stage and a correction stage to handle these problems. A deep learning-based outlier detection method effectively detects outliers in human pose coordinates during motion, and a rule-based correction method corrects each outlier according to a simple rule. Experiments using 2D golf swing motion data show that the proposed method is effective across various motions and suggest that it can be extended from 2D to 3D coordinates.
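The rule-based correction stage is described only at a high level; a hedged sketch of detecting a jitter spike in one joint's trajectory and correcting it by interpolating its neighbors (the threshold and the interpolation rule are assumptions, and the paper's detector is a deep network rather than this heuristic) could be:

```python
def detect_outliers(track, max_jump=10.0):
    """Flag frames whose coordinate jumps away from both neighbors
    by more than max_jump -- a crude jitter/miss detector."""
    flagged = []
    for t in range(1, len(track) - 1):
        if (abs(track[t] - track[t - 1]) > max_jump and
                abs(track[t] - track[t + 1]) > max_jump):
            flagged.append(t)
    return flagged

def correct(track, flagged):
    # Rule: replace each outlier with the mean of its neighbors.
    fixed = list(track)
    for t in flagged:
        fixed[t] = (track[t - 1] + track[t + 1]) / 2.0
    return fixed

track = [100.0, 101.0, 250.0, 103.0, 104.0]  # frame 2 is a jitter spike
flagged = detect_outliers(track)
fixed = correct(track, flagged)
```

Separating detection from correction, as the paper does, lets the simple rule run only where an outlier is confirmed, leaving clean frames untouched.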

Evaluation of Contrast and Resolution on the SPECT of Pre and Post Scatter Correction (산란보정 전, 후의 SPECT 대조도 및 분해능 평가)

  • Seo, Myeong-Deok;Kim, Yeong-Seon;Jeong, Yo-Cheon;Lee, Wan-Kyu;Song, Jae-Beom
    • The Korean Journal of Nuclear Medicine Technology / v.14 no.1 / pp.127-132 / 2010
  • Purpose: Because of limitations in image acquisition methods and acquisition time, scatter correction is not easily performed in SPECT studies. With the introduction of a new-generation gamma camera that provides simple scatter correction, however, our hospital can now provide scatter-corrected images to clinicians. On this occasion, we compared scatter-corrected and non-corrected images from the standpoint of image quality. Materials and Methods: We acquired 'Hoffman brain phantom' and '1 mm line phantom' SPECT images, 18 times each, with a GE Infinia Hawkeye 4 SPECT-CT gamma camera. First, we calculated the contrast of each axial slice of the scatter-corrected and non-corrected 'Hoffman brain phantom' SPECT images. Next, we calculated the horizontal and vertical FWHM of each axial slice of the scatter-corrected and non-corrected '1 mm line phantom' SPECT images. We then performed a T-test with the SAS program on the contrast and resolution values of the corrected and non-corrected images. Results: With scatter correction, the contrast value changed from 0.3979 to 0.3509 and the resolution value from 3.4822 to 3.6375; the p-values were 0.0097 for contrast and <0.0001 for resolution, indicating that scatter correction improves contrast and resolution. Conclusion: We obtained improved SPECT images through a simple and easy scatter correction, and we expect to provide clinicians with images improved in both contrast and resolution.

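The two metrics compared in the study can be computed as sketched below; the contrast definition (a Michelson-style ratio between hot and cold region means) and the coarse profile-based FWHM are assumptions about the exact formulas, which the abstract does not give:

```python
def contrast(hot_mean, cold_mean):
    """Michelson-style contrast between hot and cold region means."""
    return (hot_mean - cold_mean) / (hot_mean + cold_mean)

def fwhm(profile, spacing=1.0):
    """Full width at half maximum of a sampled line profile,
    counting samples above half the peak (coarse, no interpolation)."""
    half = max(profile) / 2.0
    return sum(1 for v in profile if v >= half) * spacing

line = [0, 1, 5, 9, 10, 9, 5, 1, 0]  # hypothetical line-phantom profile
c = contrast(90.0, 30.0)
w = fwhm(line)
```

In practice FWHM is interpolated between samples; the sample-counting version above only illustrates the idea.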

Development of the GOCI Radiometric Calibration S/W (정지궤도 해양위성(GOCI) 복사보정 S/W 개발)

  • Cho, Seong-Ick;Ahn, Yu-Hwan;Han, Hee-Jeong;Ryu, Joo-Hyung
    • Proceedings of the KSRS Conference / 2009.03a / pp.167-171 / 2009
  • The Geostationary Ocean Color Imager (GOCI), developed as the world's first ocean observation sensor in geostationary orbit, is a payload of the Communication, Ocean and Meteorological Satellite (COMS), scheduled for launch at the end of 2009. Radiometric calibration of GOCI first performs a dark current correction to remove noise caused by the electrical characteristics of the sensor, and then converts the digital numbers (DN) of the raw satellite data received at the main ground station, the Korea Ocean Satellite Center (KOSC), into radiance ($W/m^2/{\mu}m/sr$), the physical quantity used in ocean remote sensing. Accurate radiometric calibration requires accurate knowledge of the radiance of the reference light source and of the physical characteristics of the sensor. In the on-orbit radiometric calibration of GOCI the sun is the reference source, so the solar irradiance from a reference model (Thuillier 2004 Solar Irradiance Model), corrected for the yearly variation of the Earth-Sun distance, is used, and the radiance entering the sensor is computed taking into account the change in the solar diffuser's attenuation with the solar incidence angle. To reduce calibration errors due to the sensor's physical characteristics, a Diffuser Aging Monitoring Device (DAMD) monitors changes in the solar diffuser, whose characteristics degrade during satellite operation because of cosmic radiation and space debris. The Korea Ocean Satellite Center, the main operating institution for GOCI, has developed the S/W for GOCI radiometric calibration as part of the COMS data processing system development project and is carrying out the related performance tests.

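The DN-to-radiance step with the Earth-Sun distance correction mentioned above can be sketched as follows; the linear gain/offset sensor model and every numeric value here are illustrative assumptions, not GOCI's actual calibration coefficients:

```python
import math

def earth_sun_distance_au(day_of_year):
    """Approximate Earth-Sun distance in AU for a given day of year
    (simple eccentricity formula; perihelion near day 4)."""
    return 1.0 - 0.01672 * math.cos(2.0 * math.pi * (day_of_year - 4) / 365.25)

def dn_to_radiance(dn, gain, offset, day_of_year):
    """Convert raw digital numbers to radiance (W/m^2/um/sr):
    remove the dark-current offset, apply the linear gain, then
    scale by the squared Earth-Sun distance to normalize the
    solar reference to 1 AU."""
    radiance = gain * (dn - offset)       # linear sensor model
    d = earth_sun_distance_au(day_of_year)
    return radiance * d * d               # distance-corrected

rad = dn_to_radiance(dn=1200, gain=0.05, offset=200, day_of_year=4)
```

Whether the distance factor multiplies or divides depends on whether one normalizes the measurement or the reference irradiance; the sketch normalizes the measurement to 1 AU.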

Prestack Depth Migration for Gas Hydrate Seismic Data of the East Sea (동해 가스 하이드레이트 탄성파자료의 중합전 심도 구조보정)

  • Jang, Seong-Hyung;Suh, Sang-Yong;Go, Gin-Seok
    • Economic and Environmental Geology / v.39 no.6 s.181 / pp.711-717 / 2006
  • To study gas hydrate, a potential future energy resource, the Korea Institute of Geoscience and Mineral Resources has conducted seismic reflection surveys in the East Sea since 1997. One piece of evidence for the presence of gas hydrate in seismic reflection data is a bottom simulating reflector (BSR). The BSR occurs at the interface between overlying higher-velocity, hydrate-bearing sediment and underlying lower-velocity, free-gas-bearing sediment, and is often characterized by a large reflection coefficient and a reflection polarity reversed relative to that of the seafloor reflection. Applying depth migration to seismic reflection data requires high-performance computers and parallelization because of the huge data volume and computational load. Phase shift plus interpolation (PSPI) is a useful migration method owing to its low computing time and computational efficiency, and it is intrinsically parallelizable in the frequency domain. We carried out conventional data processing for the East Sea gas hydrate data and then applied prestack depth migration using a message-passing-interface PSPI (MPI_PSPI) parallelized with MPI local-area-multi-computer (MPI_LAM). The velocity model was built from stacking velocities after picking horizons on the stack image with the in-house processing tool Geobit. On the migrated stack section, BSRs were found at about SP 3555-4162 and a two-way travel time of around 2,950 ms; in the depth domain they appear at 6-17 km distance and 2.1 km depth below the seafloor. Since the subsurface where energy is concentrated was well imaged, acquisition parameters should be chosen to transmit seismic energy to the target area.
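The core of PSPI is the phase-shift extrapolation step: in the frequency-wavenumber domain, downward continuing the wavefield by dz multiplies each component by exp(i·kz·dz) with kz = sqrt(ω²/v² − kx²). A single constant-velocity extrapolation step might look like this sketch (the interpolation between reference velocities and the MPI parallelization are omitted):

```python
import numpy as np

def phase_shift_step(wavefield_fk, omega, kx, v, dz):
    """Downward-continue one frequency slice of a wavefield by dz
    using the constant-velocity phase-shift operator.
    wavefield_fk: complex spectrum over kx for one angular frequency."""
    kz2 = (omega / v) ** 2 - kx ** 2
    kz = np.sqrt(np.abs(kz2))
    # Propagating components (kz2 >= 0) get a pure phase shift;
    # evanescent components are exponentially damped for stability.
    op = np.where(kz2 >= 0,
                  np.exp(1j * kz * dz),
                  np.exp(-kz * dz))
    return wavefield_fk * op

kx = np.linspace(-0.05, 0.05, 11)          # horizontal wavenumbers (1/m)
slice_fk = np.ones_like(kx, dtype=complex)
out = phase_shift_step(slice_fk, omega=60.0, kx=kx, v=1500.0, dz=10.0)
```

Because each (ω, kx) slice is independent, the frequency loop distributes naturally over MPI ranks, which is the parallelism the paper exploits.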

Development of Value-added Product Generation Software from Satellite Imagery: 'Valadd-Pro' (고부가 정보 추출을 위한 위성 영상 처리 소프트웨어의 개발: '발라드-프로')

  • Lee, Hae Yeoun;Park, Wonkyu;Kim, S.A.B.;Kim, Taejung;Yoon, Taehun;Shin, Dongseok;Lee, Heungkyu
    • Journal of the Korean Association of Geographic Information Studies / v.2 no.3 / pp.91-100 / 1999
  • To extract value-added products from satellite images for the benefit of science and human life, the Satellite Technology Research Center at the Korea Advanced Institute of Science and Technology has developed integrated software called 'Valadd-Pro'. This paper briefly introduces the software and describes its main components: geometric correction, ortho correction, and digital elevation model (DEM) extraction. Its performance was assessed on $60km{\times}60km$ SPOT panchromatic images using ground control points from GPS measurements. The height accuracy was also measured by comparing our results with the DTEDs produced by USGS and with the DEM generated from the digitized contours of maps produced by the National Geographic Institute. In geometric correction, 'Valadd-Pro' needed fewer ground control points than a commercial software 'P' to achieve satisfactory results. In ortho correction, it showed performance similar to 'P'. In DEM extraction, it was two times more accurate and four times faster than 'P'.

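Geometric correction with ground control points reduces, in the simplest case, to fitting a mapping from image to map coordinates; a hedged sketch with an affine model fitted by least squares (the real software likely uses a rigorous sensor model, and the GCP values below are synthetic) could be:

```python
import numpy as np

def fit_affine(img_pts, map_pts):
    """Least-squares affine transform from image (col, row) to map (x, y)
    using ground control points. Returns a 2x3 parameter matrix."""
    img = np.asarray(img_pts, dtype=float)
    A = np.hstack([img, np.ones((len(img), 1))])          # rows [c, r, 1]
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(map_pts, float), rcond=None)
    return coeffs.T                                        # shape (2, 3)

def apply_affine(T, pt):
    c, r = pt
    return T @ np.array([c, r, 1.0])

# Three synthetic GCPs consistent with scale 10 plus shift (500, 200).
gcps_img = [(0, 0), (10, 0), (0, 10)]
gcps_map = [(500, 200), (600, 200), (500, 300)]
T = fit_affine(gcps_img, gcps_map)
xy = apply_affine(T, (5, 5))
```

With more GCPs than parameters, the least-squares residuals indicate GCP quality, which is one reason fewer, well-placed points can suffice, as the paper reports for 'Valadd-Pro'.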

A Study on Field Seismic Data Processing using Migration Velocity Analysis (MVA) for Depth-domain Velocity Model Building (심도영역 속도모델 구축을 위한 구조보정 속도분석(MVA) 기술의 탄성파 현장자료 적용성 연구)

  • Son, Woohyun;Kim, Byoung-yeop
    • Geophysics and Geophysical Exploration / v.22 no.4 / pp.225-238 / 2019
  • Migration velocity analysis (MVA) for building optimum depth-domain velocities in seismic imaging was applied to marine long-offset multi-channel data, and the effectiveness of the MVA approach was demonstrated in combination with conventional data processing procedures. Time-domain images generated by a conventional time-processing scheme have so far been considered sufficient for seismic stratigraphic interpretation. However, when the purpose of seismic imaging moves to hydrocarbon exploration, especially geologic modeling of an oil and gas play or lead area, drilling prognosis, and in-place hydrocarbon volume estimation, the seismic images should be converted into the depth domain, or depth processing should be applied in the processing phase. CMP-based velocity analysis, which relies on several approximations in the data domain, inherently contains errors and thus high uncertainties. The MVA, in contrast, provides efficient and approximately real-scale (in depth) images even when no logging data are available. In this study, marine long-offset multi-channel seismic data were optimally processed in the time domain to establish the most suitable dataset for iterative MVA. The depth-domain velocity profile was then updated several times, and the final velocity-in-depth was used to generate depth images (CRP gathers and stacks), which were compared with the images obtained from the velocity-in-time. The results confirm that the depth-domain results are more reasonable than the time-domain results. The spurious local minima that can occur during full waveform inversion can also be reduced when the MVA result is used as an initial velocity model.
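Since the study starts its depth-domain velocity model from stacking velocities, one standard building block is Dix conversion of RMS velocities to interval velocities, followed by time-to-depth conversion; a minimal sketch (the iterative MVA update loop itself is far more involved, and the picks below are synthetic) might be:

```python
import math

def dix_interval_velocities(t, v_rms):
    """Dix equation: interval velocity in each layer from RMS
    velocities picked at two-way times t (seconds)."""
    v_int = [v_rms[0]]
    for i in range(1, len(t)):
        num = v_rms[i] ** 2 * t[i] - v_rms[i - 1] ** 2 * t[i - 1]
        v_int.append(math.sqrt(num / (t[i] - t[i - 1])))
    return v_int

def times_to_depths(t, v_int):
    # Convert two-way times to depths by summing interval thicknesses.
    depths, z, prev_t = [], 0.0, 0.0
    for ti, vi in zip(t, v_int):
        z += vi * (ti - prev_t) / 2.0   # one-way layer thickness
        depths.append(z)
        prev_t = ti
    return depths

t = [1.0, 2.0]              # two-way times of picked horizons (s)
v_rms = [1500.0, 1700.0]    # picked stacking/RMS velocities (m/s)
v_int = dix_interval_velocities(t, v_rms)
z = times_to_depths(t, v_int)
```

Dix conversion amplifies picking noise in thin intervals, which is why the study refines this initial model iteratively with MVA rather than trusting it directly.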