• Title/Summary/Keyword: error correction algorithm

475 search results

Improvement of Radar Rainfall Estimation Using Radar Reflectivity Data from the Hybrid Lowest Elevation Angles (혼합 최저고도각 반사도 자료를 이용한 레이더 강우추정 정확도 향상)

  • Lyu, Geunsu;Jung, Sung-Hwa;Nam, Kyung-Yeub;Kwon, Soohyun;Lee, Cheong-Ryong;Lee, Gyuwon
    • Journal of the Korean Earth Science Society / v.36 no.1 / pp.109-124 / 2015
  • A novel approach, the hybrid surface rainfall (KNU-HSR) technique developed by Kyungpook National University, was utilized to improve radar rainfall estimation. The KNU-HSR technique estimates radar rainfall at a 2D hybrid surface consisting of the lowest radar bins that are immune to ground clutter contamination and significant beam blockage. Two HSR techniques, static and dynamic HSR, were compared and evaluated in this study. The static HSR technique utilizes a beam blockage map and a ground clutter map to yield the hybrid surface, whereas the dynamic HSR technique additionally applies a quality index map derived from a fuzzy logic algorithm for quality control in real time. The performances of the two HSRs were evaluated by correlation coefficient (CORR), total ratio (RATIO), mean bias (BIAS), normalized standard deviation (NSD), and mean relative error (MRE) for ten rain cases. Dynamic HSR (CORR=0.88, BIAS=-0.24 mm hr⁻¹, NSD=0.41, MRE=37.6%) shows better performance than static HSR without correction of the reflectivity calibration bias (CORR=0.87, BIAS=-2.94 mm hr⁻¹, NSD=0.76, MRE=58.4%) for all skill scores. The dynamic HSR technique overestimates surface rainfall at near ranges, whereas it underestimates rainfall at far ranges due to the effects of beam broadening and increasing radar beam height. In terms of NSD and MRE, dynamic HSR shows the best results regardless of the distance from the radar. Static HSR significantly overestimates surface rainfall at weaker rainfall intensities, whereas the RATIO of dynamic HSR remains close to 1.0 over the full range of rainfall intensity. After correcting the system bias of reflectivity, NSD and MRE of dynamic HSR are improved by about 20% and 15%, respectively.
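
The skill scores named above (CORR, RATIO, BIAS, NSD, MRE) are standard verification statistics for comparing radar rainfall estimates against gauge observations. The sketch below uses conventional definitions, which may differ in detail from the paper's exact formulations; the array names `radar` and `gauge` are illustrative placeholders.

```python
import numpy as np

def rainfall_skill_scores(radar, gauge):
    """Conventional verification statistics for radar rainfall estimates
    against rain-gauge observations (definitions assumed, not from the paper)."""
    radar = np.asarray(radar, dtype=float)
    gauge = np.asarray(gauge, dtype=float)

    corr = np.corrcoef(radar, gauge)[0, 1]                 # correlation coefficient (CORR)
    ratio = radar.sum() / gauge.sum()                      # total ratio (RATIO)
    bias = np.mean(radar - gauge)                          # mean bias (BIAS), mm/hr
    nsd = np.std(radar - gauge) / np.mean(gauge)           # normalized standard deviation (NSD)
    mre = np.mean(np.abs(radar - gauge) / gauge) * 100.0   # mean relative error (MRE), %
    return {"CORR": corr, "RATIO": ratio, "BIAS": bias, "NSD": nsd, "MRE": mre}

# Example with illustrative hourly rain rates (mm/hr), not data from the paper
print(rainfall_skill_scores([1.2, 4.8, 10.1, 0.6], [1.5, 5.0, 11.0, 0.8]))
```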

A Reflectance Normalization Via BRDF Model for the Korean Vegetation using MODIS 250m Data (한반도 식생에 대한 MODIS 250m 자료의 BRDF 효과에 대한 반사도 정규화)

  • Yeom, Jong-Min;Han, Kyung-Soo;Kim, Young-Seup
    • Korean Journal of Remote Sensing / v.21 no.6 / pp.445-456 / 2005
  • The land surface parameters should be determined with sufficient accuracy, because they play an important role in climate change near the ground. Since surface reflectance is strongly anisotropic, off-nadir viewing results in a strong dependency of the observations on the Sun-target-sensor geometry; these angular effects contribute random noise to the data. The principal objective of the study is to provide a database of accurate surface reflectance over Korea from MODIS 250 m reflective channel data, with the angular effects eliminated. The MODIS (Moderate Resolution Imaging Spectroradiometer) sensor provides visible and near-infrared channel reflectance at 250 m resolution on a daily basis. The analytic processing steps were first performed on a per-pixel basis to remove cloudy pixels. Geometric distortion was then corrected by nearest-neighbor resampling using a 2nd-order polynomial obtained from the geolocation information of the MODIS data set. In order to correct the surface anisotropy effects, this paper applied the semi-empirical kernel-driven Bidirectional Reflectance Distribution Function (BRDF) model. The algorithm inverts the kernel-driven model against the angular components (viewing zenith angle, solar zenith angle, viewing azimuth angle, and solar azimuth angle) of the reflectance observed by the satellite. First, sets of observations covering a 31-day period are assembled to fit the BRDF model. In the next step, nadir-view reflectance normalization is carried out by modifying the angular components separated by the BRDF model for each spectral band and each pixel. Modeled reflectance values show good agreement with measured reflectance values, with an overall RMSE (Root Mean Square Error) of about 0.01 (maximum 0.03). Finally, we provide a normalized surface reflectance database consisting of 36 images over Korea for 2001.
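
The kernel-driven BRDF inversion described above amounts to a linear least-squares fit of observed reflectance against volumetric and geometric kernels accumulated over the 31-day window, followed by evaluating the fitted model at a nadir-view reference geometry. The following is a minimal sketch under those assumptions; the kernel values are treated as precomputed inputs (the actual Ross-Thick/Li-Sparse kernel formulas and MODIS-specific processing are not reproduced), and all names and numbers are illustrative.

```python
import numpy as np

def fit_kernel_driven_brdf(reflectance, k_vol, k_geo):
    """Fit R = f_iso + f_vol*K_vol + f_geo*K_geo by linear least squares.
    Inputs are 1-D arrays over the multi-day observations of one pixel/band,
    with kernel values assumed to be precomputed from the viewing/solar angles."""
    A = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
    coeffs, *_ = np.linalg.lstsq(A, reflectance, rcond=None)
    return coeffs  # (f_iso, f_vol, f_geo)

def nadir_normalized_reflectance(coeffs, k_vol_ref, k_geo_ref):
    """Evaluate the fitted BRDF at the nadir-view / reference-sun geometry."""
    f_iso, f_vol, f_geo = coeffs
    return f_iso + f_vol * k_vol_ref + f_geo * k_geo_ref

# Illustrative example with synthetic kernel values for a 31-day window
rng = np.random.default_rng(0)
k_vol = rng.uniform(-0.2, 0.4, 31)
k_geo = rng.uniform(-1.5, 0.0, 31)
refl = 0.25 + 0.05 * k_vol + 0.02 * k_geo + rng.normal(0, 0.005, 31)
coeffs = fit_kernel_driven_brdf(refl, k_vol, k_geo)
print(nadir_normalized_reflectance(coeffs, k_vol_ref=0.0, k_geo_ref=-1.0))
```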

The Analysis of Dose in a Rectum by Multipurpose Brachytherapy Phantom (근접방사선치료용 다목적 팬톰을 이용한 직장 내 선량분석)

  • Huh, Hyun-Do;Kim, Seong-Hoon;Cho, Sam-Ju;Lee, Suk;Shin, Dong-Oh;Kwon, Soo-Il;Kim, Hun-Jung;Kim, Woo-Chul;K. Loh John-J.
    • Radiation Oncology Journal / v.23 no.4 / pp.223-229 / 2005
  • Purpose: In this work we designed and built an MPBP (Multipurpose Brachytherapy Phantom). The MPBP enables the same patient set-up to be reproduced in the phantom as in the treatment, allowing an accurate analysis of rectal doses without the need for in-vivo dosimetry. Materials and Methods: Doses were measured with a diode detector at rectum point 1, the rectal reference point, for 4 patients treated with tandem and ovoids in brachytherapy for cervical cancer. A total of 20 rectal dose measurements were made, five per patient. The set-up variation of the diode detector was analyzed. The same patient set-ups were reproduced in the self-made MPBP, and rectal doses were then measured with TLD. Results: The diode detector measurements showed that the set-up variation of the detector was at most 11.25 ± 0.95 mm in the y-direction for Patient 1 and at most 9.90 ± 4.50 mm, 20.85 ± 4.50 mm, and 19.15 ± 3.33 mm in the z-direction for Patients 2, 3, and 4, respectively. Analyzing the degree of variation in the three directions, more variation appeared in the z-direction than in the x- and y-directions, except for Patient 1. The TLD measurements in the MPBP showed relative maximum errors of 8.6% and 7.7% at rectum point 1 for Patients 1 and 4, respectively, and 1.7% and 1.2% for Patients 2 and 3, respectively. The doses measured at R1 and R2 were higher than those calculated, except at the R point of Patient 2. This is thought to be related to the dose-calculation algorithm, which corrects for air and water but presumably does not correct for scattered rays; nevertheless, considering the intrinsic error (±5%) of the TLD, the measured and calculated values agreed to within 15%. Conclusion: The reproducibility of dose measurements under the same conditions as the treatment was achieved with the self-made MPBP, and the dose at the point of interest could be analyzed accurately. If treatment is performed after dose optimization using the data obtained in the phantom, the dose to critical organs can be minimized.
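
The measured-versus-calculated comparison above reduces to a relative-error computation at each rectal point. A minimal sketch, with purely illustrative dose values (not from the paper):

```python
def relative_error(measured, calculated):
    """Relative error (%) of a measured dose against the calculated dose."""
    return (measured - calculated) / calculated * 100.0

# Hypothetical doses (cGy) at rectal reference points, for illustration only
doses = {"R1": (232.0, 215.0), "R2": (198.0, 186.0)}
for point, (measured, calculated) in doses.items():
    print(point, f"{relative_error(measured, calculated):+.1f}%")
```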

Development of JPEG2000 Viewer for Mobile Image System (이동형 의료영상 장치를 위한 JPEG2000 영상 뷰어 개발)

  • 김새롬;정해조;강원석;이재훈;이상호;신성범;유선국;김희중
    • Progress in Medical Physics / v.14 no.2 / pp.124-130 / 2003
  • Currently, as a consequence of PACS (Picture Archiving and Communication System) implementation, many hospitals are replacing conventional film-based interpretation of diagnostic medical images with digital-format interpretation that can also be saved and retrieved. However, the major limitation of PACS is considered to be its lack of mobility. The purpose of this study was to determine the optimal communication packet size, taking into account the conditions that arise in wireless communication. After encoding medical images with the JPEG2000 compression method, an automatic error-correction technique was implemented to prevent the loss of packets during wireless communication. A PC-class server was installed with capabilities to load and collect data, save images, and connect with other networks. Image data were compressed using the JPEG2000 algorithm, which provides high energy compaction and high compression ratios, for transmission over a wireless network. Image data were also transmitted in block units coded by JPEG2000 to prevent the loss of packets in the wireless network. When JPEG2000 image data were decoded on a PDA (Personal Digital Assistant), decoding was instantaneous for an MR (Magnetic Resonance) head image of 256 × 256 pixels, while it took approximately 5 seconds to decode a CR (Computed Radiography) chest image of 800 × 790 pixels. In transmission of the image data using a CDMA 1X module (Code-Division Multiple Access), 256 bytes/sec was a stable transmission rate, but packets were intermittently lost at a transmission rate of 1 Kbyte/sec. However, even at transmission rates above 1 Kbyte/sec, packets were not lost on a wireless LAN. Current PACS is not compatible with wireless networks because it lacks an interface between wired and wireless. Thus, a mobile JPEG2000 image viewing system was developed to complement mobility, a limitation of PACS. Moreover, weak connections in the wireless network were compensated for by re-transmitting image data within set limits. The results of this study are expected to serve as an interface between the current wired-network PACS and mobile devices.
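
The block-wise transmission with re-transmission described above can be illustrated with a simple protocol: split the JPEG2000 codestream into fixed-size blocks, attach a sequence number and checksum to each, and request again any block that arrives corrupted or not at all. The following is a minimal in-memory sketch of that idea, assuming nothing about the authors' actual protocol; no real JPEG2000 codec or CDMA/wireless stack is involved, and all names are illustrative.

```python
import random
import zlib

BLOCK_SIZE = 256  # bytes per block, echoing the stable 256 byte/sec rate above

def make_blocks(codestream: bytes):
    """Split a (JPEG2000) codestream into (offset, crc32, payload) blocks."""
    return [(i, zlib.crc32(codestream[i:i + BLOCK_SIZE]), codestream[i:i + BLOCK_SIZE])
            for i in range(0, len(codestream), BLOCK_SIZE)]

def lossy_channel(block, loss_rate=0.1):
    """Simulate a wireless link that occasionally drops a block."""
    return None if random.random() < loss_rate else block

def transmit(codestream: bytes, max_retries=5) -> bytes:
    """Send all blocks, re-sending any block that is lost or fails its checksum."""
    received = {}
    for offset, crc, payload in make_blocks(codestream):
        for _ in range(max_retries):
            arrived = lossy_channel((offset, crc, payload))
            if arrived is not None and zlib.crc32(arrived[2]) == arrived[1]:
                received[offset] = arrived[2]
                break
    return b"".join(received[k] for k in sorted(received))

data = bytes(random.randrange(256) for _ in range(4000))  # stand-in for a codestream
assert transmit(data) == data  # holds unless a block fails all max_retries attempts
```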


The Pattern Analysis of Financial Distress for Non-audited Firms using Data Mining (데이터마이닝 기법을 활용한 비외감기업의 부실화 유형 분석)

  • Lee, Su Hyun;Park, Jung Min;Lee, Hyoung Yong
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.111-131 / 2015
  • Only a handful of studies have been conducted on pattern analysis of corporate distress, compared with research on bankruptcy prediction. The few that exist mainly focus on audited firms, because financial data collection is easier for these firms. In reality, however, corporate financial distress is a far more common and critical phenomenon for non-audited firms, which are mainly small and medium-sized firms. The purpose of this paper is to classify distressed non-audited firms according to their financial ratios using a data mining technique, the Self-Organizing Map (SOM). The SOM is a type of artificial neural network trained by unsupervised learning to produce a lower-dimensional, discretized representation of the input space of the training samples, called a map. The SOM differs from other artificial neural networks in that it applies competitive learning, as opposed to error-correction learning such as backpropagation with gradient descent, and in that it uses a neighborhood function to preserve the topological properties of the input space. It is one of the most popular and successful clustering algorithms. In this study, we classify types of financially distressed firms, specifically non-audited firms. In the empirical test, we collected 10 financial ratios of 100 non-audited firms under distress in 2004 for the previous two years (2002 and 2003). Using these financial ratios and the SOM algorithm, five distinct patterns were distinguished. In pattern 1, financial distress was very serious in almost all financial ratios; 12% of the firms fell into this pattern. In pattern 2, financial distress was weak in almost all financial ratios; 14% of the firms fell into this pattern. In pattern 3, the growth ratio was the worst among all patterns; it is speculated that firms of this pattern may be under distress due to severe competition in their industries. Approximately 30% of the firms fell into this group. In pattern 4, the growth ratio was higher than in any other pattern, but the cash ratio and profitability ratio were not at the level of the growth ratio; it is concluded that firms of this pattern were under distress while pursuing expansion of their business. About 25% of the firms were in this pattern. Last, pattern 5 encompassed very solvent firms; perhaps firms of this pattern were distressed due to a bad short-term strategic decision or to problems with the entrepreneurs of the firms. Approximately 18% of the firms fell under this pattern. This study makes both academic and empirical contributions. From the academic perspective, non-audited companies, which tend to go bankrupt easily and whose financial data are unstructured or easily manipulated, are classified with a data mining technique (the Self-Organizing Map), rather than large audited firms whose financial data are well prepared and reliable. From the empirical perspective, even though only the financial data of non-audited firms are analyzed, the results are useful for detecting the first-order symptoms of financial distress, which helps forecast bankruptcy and manage early-warning signals. A limitation of this research is that only 100 companies were analyzed, owing to the difficulty of collecting financial data for non-audited firms, which made it hard to extend the analysis by category or size. Also, non-financial qualitative data are crucial for bankruptcy analysis, so non-financial qualitative factors will be taken into account in a future study. This study sheds some light on distress prediction for non-audited small and medium-sized firms.
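
A Self-Organizing Map of the kind described above can be sketched in a few dozen lines: each map node holds a weight vector in the space of financial ratios, the best-matching node for each firm is found by Euclidean distance, and that node and its grid neighbors are pulled toward the firm's ratio vector. The following is a minimal numpy-only sketch on synthetic data; it is not the authors' implementation, and the map size, learning schedule, and data are illustrative assumptions.

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=100, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Self-Organizing Map: competitive learning with a Gaussian
    neighborhood, decaying learning rate and shrinking neighborhood radius."""
    rng = np.random.default_rng(seed)
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    weights = rng.normal(size=(len(coords), data.shape[1]))
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)    # shrinking neighborhood radius
        for x in rng.permutation(data):
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))    # best-matching unit
            dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)    # grid distance to BMU
            h = np.exp(-dist2 / (2 * sigma ** 2))                # neighborhood function
            weights += lr * h[:, None] * (x - weights)           # pull nodes toward sample
    return weights, coords

def assign_clusters(data, weights):
    """Map each firm's ratio vector to the index of its best-matching node."""
    return np.argmin(((data[:, None, :] - weights[None, :, :]) ** 2).sum(axis=2), axis=1)

# Synthetic stand-ins for 100 firms x 10 standardized financial ratios
rng = np.random.default_rng(1)
ratios = rng.normal(size=(100, 10))
weights, _ = train_som(ratios)
print(np.bincount(assign_clusters(ratios, weights), minlength=len(weights)))
```

In practice the trained nodes would then be grouped (for example by inspecting the weight vectors or applying a second clustering step) into a small number of distress patterns such as the five reported above.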