
The Relationship Analysis between the Epicenter and Lineaments in the Odaesan Area using Satellite Images and Shaded Relief Maps (위성영상과 음영기복도를 이용한 오대산 지역 진앙의 위치와 선구조선의 관계 분석)

  • CHA, Sung-Eun;CHI, Kwang-Hoon;JO, Hyun-Woo;KIM, Eun-Ji;LEE, Woo-Kyun
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.19 no.3
    • /
    • pp.61-74
    • /
    • 2016
  • The purpose of this paper is to analyze the relationship between the location of the epicenter of a medium-sized earthquake (magnitude 4.8) that occurred on January 20, 2007 in the Odaesan area and lineament features, using a shaded relief map (1/25,000 scale) and satellite images from LANDSAT-8 and KOMPSAT-2. Previous studies have analyzed lineament features in tectonic settings primarily by examining two-dimensional satellite images and shaded relief maps. These methods, however, limit the visual interpretation of relief features, long considered the major component of lineament extraction. To overcome these limitations of two-dimensional images, this study examined three-dimensional images, produced from a digital elevation model and a drainage network map, for lineament extraction. This approach reduces mapping errors introduced by visual interpretation. In addition, spline interpolation was conducted to produce density maps of lineament frequency, intersection, and length, which are required to estimate the lineament density at the epicenter of the earthquake. An algorithm was developed to compute the Value of the Relative Density (VRD), the relative lineament density from each map: the lineament density of each map grid divided by the maximum density value on the map. As such, it is a quantified value that indicates the concentration of lineament density across the area affected by the earthquake. Using this algorithm, the VRD calculated at the earthquake epicenter from the frequency, intersection, and length density maps ranged from approximately 0.60 (minimum) to 0.90 (maximum). However, because the mapped images differed in conditions such as solar altitude and azimuth, the mean VRD was used rather than values categorized by individual images. 
The results show that the average frequency-based VRD was approximately 0.85, about 21% higher than the intersection- and length-based VRDs, demonstrating the close relationship between lineaments and the epicenter. Therefore, it is concluded that the density map analysis described in this study, based on lineament extraction, is valid and can be used as a primary data analysis tool for future earthquake research.
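The VRD computation described above reduces to a single normalization: each cell's density divided by the map maximum. A minimal sketch with hypothetical grid values (not the authors' code):

```python
import numpy as np

def value_of_relative_density(density_map, row, col):
    """Relative density at a grid cell: cell density / maximum density on the map.

    density_map : 2-D array of lineament density (frequency, intersection, or length)
    (row, col)  : grid indices of the point of interest (e.g. the epicenter)
    """
    max_density = density_map.max()
    if max_density == 0:
        raise ValueError("density map is empty")
    return density_map[row, col] / max_density

# Hypothetical 4x4 density grid; the epicenter cell (1, 2) holds density 0.9
# against a map maximum of 1.0, giving VRD = 0.9.
grid = np.array([
    [0.1, 0.2, 0.3, 0.2],
    [0.4, 0.6, 0.9, 0.5],
    [0.3, 1.0, 0.7, 0.4],
    [0.1, 0.3, 0.2, 0.1],
])
vrd = value_of_relative_density(grid, 1, 2)
```

In the study, this value is computed separately on the frequency, intersection, and length density maps and then averaged over images.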

Dosimetric Characteristics of Edge $Detector^{TM}$ in Small Beam Dosimetry (소조사면 선량 계측을 위한 엣지검출기의 특성 분석)

  • Chang, Kyung-Hwan;Lee, Bo-Ram;Kim, You-Hyun;Choi, Kyoung-Sik;Lee, Jung-Seok;Park, Byung-Moon;Bae, Yong-Ki;Hong, Se-Mie;Lee, Jeong-Woo
    • Progress in Medical Physics
    • /
    • v.20 no.4
    • /
    • pp.191-198
    • /
    • 2009
  • In this study, we evaluated an edge detector for small-beam dosimetry. We measured dose linearity, dose-rate dependence, output factors, beam profiles, and percentage depth dose using an edge detector (Model 1118 Edge) for 6-MV photon beams at different field sizes and depths. The obtained values were compared with those obtained using a standard volume ionization chamber (CC13) and a photon diode detector (PFD). The dose linearity results for the three detectors showed good agreement within 1%, the edge detector having the best linearity of ${\pm}0.08%$. The edge detector and PFD showed little dose-rate dependence throughout the range of 100~600 MU/min, while the CC13 showed a significant discrepancy of approximately -5% at 100 MU/min. The output factors of the three detectors agreed within 1% across the tested field sizes, except that the output factor of the CC13 differed from the other two detectors by up to 21% for small field sizes (${\sim}4{\times}4\;cm^2$). When analyzing the 20~80% penumbra, the penumbra measured with the CC13 was approximately two times wider than that measured with the edge detector for all field sizes, and the width measured with the PFD was approximately 30% wider for all field sizes. Compared to the edge detector, the 10~90% penumbras measured with the CC13 and PFD were approximately 55% and 19% wider, respectively. The full width at half maximum (FWHM) measured by the edge detector was close to the real field size, while the other two detectors measured values 8~10% greater for all field sizes. Percentage depth doses measured by the three detectors corresponded well with each other for small beams. Based on these results, we consider the edge detector an appropriate small-beam detector, while the CC13 and PFD can introduce errors when used for small fields under $4{\times}4\;cm^2$.
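The 20~80% penumbra width quoted above can be read off a measured profile by linear interpolation between dose levels. A minimal sketch, assuming a normalized profile sampled across the field's left edge (hypothetical numbers, not the measured data):

```python
import numpy as np

def penumbra_width(positions, dose, lo=0.2, hi=0.8):
    """Width between the lo and hi dose levels on the rising (left) edge of a
    beam profile, found by linear interpolation between sample points."""
    d = np.asarray(dose, dtype=float)
    d = d / d.max()                      # normalize to the profile maximum
    x = np.asarray(positions, dtype=float)
    half = np.argmax(d)                  # index of the maximum
    # np.interp needs an increasing abscissa, which the rising edge provides
    x_lo = np.interp(lo, d[:half + 1], x[:half + 1])
    x_hi = np.interp(hi, d[:half + 1], x[:half + 1])
    return x_hi - x_lo

# Hypothetical profile samples (position in cm, relative dose)
xs = [-10, -8, -6, -4, -2, 0]
ds = [0.0, 0.1, 0.3, 0.6, 0.9, 1.0]
w = penumbra_width(xs, ds)
```

The 10~90% width follows from the same function with `lo=0.1, hi=0.9`.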


A Study on Change in Cement Mortar Characteristics under Carbonation Based on Tests for Hydration and Porosity (수화물 및 공극률 관측 실험을 통한 시멘트모르타르의 탄산화 특성 변화에 대한 연구)

  • Kwon, Seung-Jun;Song, Ha-Won;Park, Sang-Soon
    • Journal of the Korea Concrete Institute
    • /
    • v.19 no.5
    • /
    • pp.613-621
    • /
    • 2007
  • With the increasing significance of durability, much research has been carried out on carbonation, one of the major deterioration phenomena in concrete. However, conventional studies based on fully hardened concrete focus on predicting carbonation depth and can therefore introduce errors. In contrast with steel members, behaviors of early-aged concrete, such as porosity and hydrates (calcium hydroxide), are very important and may change during carbonation. Because the transport of deteriorating agents depends mainly on porosity and saturation, these changes in early-aged concrete under carbonation should be considered for a reasonable analysis of durability under long-term exposure or combined deterioration. As for porosity, unless the decrease in $CO_2$ diffusion due to the change in porosity is considered, the predicted results are overestimated. The carbonation depth and the characteristics of pore water are mainly determined by the amount of calcium hydroxide, which also affects the bound chloride content in carbonated concrete. Accordingly, analyses based on tests for hydration and porosity have recently been carried out to evaluate carbonation characteristics. In this study, changes in porosity and hydrate $(Ca(OH)_2)$ content during carbonation were measured: Mercury Intrusion Porosimetry (MIP) for the changed porosity and Thermogravimetric Analysis (TGA) for the amount of $Ca(OH)_2$. An analysis technique for porosity and hydrates under carbonation was developed using models of early-age concrete behavior, namely the multi-component hydration heat model (MCHHM) and the micro pore structure formation model (MPSFM). The results from the developed technique are in reasonable agreement with the experimental data, and the technique can be used for analyzing chloride behavior in carbonated concrete.
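The TGA measurement mentioned above estimates Ca(OH)2 from the mass loss of its dehydration step, Ca(OH)2 -> CaO + H2O (roughly 400~500°C): the lost mass is water, so the Ca(OH)2 content follows from molar-mass stoichiometry. A back-of-the-envelope sketch (hypothetical mass-loss figure; not the authors' analysis code):

```python
# Molar masses (g/mol)
M_CAOH2 = 74.09   # Ca(OH)2
M_H2O = 18.02     # H2O

def caoh2_content(mass_loss_percent):
    """Estimate Ca(OH)2 content (wt%) from the TGA mass loss (wt%) over the
    dehydration step Ca(OH)2 -> CaO + H2O (roughly 400-500 C)."""
    return mass_loss_percent * M_CAOH2 / M_H2O

# Hypothetical example: a 2.0 wt% loss in the dehydration window
content = caoh2_content(2.0)   # about 8.2 wt% Ca(OH)2
```

Carbonation consumes Ca(OH)2, so a smaller dehydration loss in carbonated samples indicates a lower remaining hydrate content.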

High-Resolution Numerical Simulations with WRF/Noah-MP in Cheongmicheon Farmland in Korea During the 2014 Special Observation Period (2014년 특별관측 기간 동안 청미천 농경지에서의 WRF/Noah-MP 고해상도 수치모의)

  • Song, Jiae;Lee, Seung-Jae;Kang, Minseok;Moon, Minkyu;Lee, Jung-Hoon;Kim, Joon
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.17 no.4
    • /
    • pp.384-398
    • /
    • 2015
  • In this paper, the high-resolution Weather Research and Forecasting/Noah-MultiParameterization (WRF/Noah-MP) modeling system is configured for the Cheongmicheon Farmland site in Korea (CFK), and its performance in land and atmospheric simulation is evaluated against data observed at CFK during the 2014 special observation period (21 August-10 September). To explore the usefulness of turning on Noah-MP dynamic vegetation in midterm simulations of surface and atmospheric variables, two numerical experiments are conducted, without and with dynamic vegetation (referred to as the CTL and DVG experiments, respectively). The main results are as follows. 1) CTL tended to overestimate daytime net shortwave radiation, and thereby the surface heat fluxes and Bowen ratio. It showed reasonable magnitudes and timing of air temperature at 2 m and 10 m; in particular, the small error in simulating minimum air temperature suggests high potential for predicting frost and leaf wetness duration. CTL overestimated 10-m wind and precipitation, but the beginning and ending times of precipitation were well captured. 2) When dynamic vegetation was turned on, the WRF/Noah-MP system produced more realistic values of leaf area index (LAI), net shortwave radiation, surface heat fluxes, Bowen ratio, air temperature, wind, and precipitation. The DVG experiment, in which LAI is a prognostic variable, produced a larger LAI than CTL, and the larger LAI agreed better with observations. The simulated Bowen ratio moved closer to the observed ratio, indicating a more reasonable surface energy partition. DVG showed patterns similar to CTL, with differences in maximum air temperature. Both experiments showed faster-than-observed rising of 10-m air temperature during the morning growth hours, presumably due to the rapid growth of daytime mixed layers in the Yonsei University (YSU) boundary layer scheme. 
The DVG experiment reduced the errors in simulated 10-m wind and precipitation. 3) As horizontal resolution increased, the model showed no practical improvement in simulating surface fluxes, air temperature, wind, and precipitation; further improvement calls for three-dimensional observations at more agricultural sites as well as consistency between model topography and land cover data.
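Two of the evaluation quantities above, the Bowen ratio and the model error, are simple to compute from flux time series. A minimal sketch with hypothetical flux values (not the WRF/Noah-MP evaluation code):

```python
import numpy as np

def bowen_ratio(sensible, latent):
    """Bowen ratio B = H / LE from surface sensible (H) and latent (LE)
    heat fluxes (W m-2); elementwise for time series."""
    return np.asarray(sensible, float) / np.asarray(latent, float)

def rmse(simulated, observed):
    """Root mean square error between model output and observations."""
    s, o = np.asarray(simulated, float), np.asarray(observed, float)
    return float(np.sqrt(np.mean((s - o) ** 2)))

# Hypothetical midday fluxes at two times (W m-2)
b = bowen_ratio([120.0, 150.0], [300.0, 250.0])   # [0.4, 0.6]
err = rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])
```

An overestimated net shortwave radiation inflates H and LE together, so comparing the Bowen ratio isolates how the model partitions the surface energy.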

Development of 2.5D Electron Dose Calculation Algorithm (2.5D 전자선 선량계산 알고리즘 개발)

  • 조병철;고영은;오도훈;배훈식
    • Progress in Medical Physics
    • /
    • v.10 no.3
    • /
    • pp.133-140
    • /
    • 1999
  • In this paper, as a preliminary study toward developing a full 3D electron dose calculation algorithm, we developed a 2.5D electron dose calculation algorithm by extending the 2D pencil-beam model to appropriately consider three-dimensional geometry such as air gaps and surface obliquity. The dose calculation algorithm was implemented using IDL 5.2 (Research Systems Inc., USA). For the Hogstrom pencil-beam calculation, we used measured central-axis depth-dose data for 12 MeV (Siemens M6740) and the linear stopping power and linear scattering power of water and air from ICRU Report 35. To evaluate the accuracy of the implemented program, we compared the calculated dose distributions with film measurements in three situations: a normally incident beam, a 45$^{\circ}$ obliquely incident beam, and a beam incident on a pit-shaped phantom. About 120 seconds were required on a PC (Pentium III 450 MHz) to calculate the dose distribution of a single beam; some optimization is needed to speed up the calculation. Regarding accuracy, for the normally incident beam with regular and irregular fields, the errors in the rapid dose-gradient region of the penumbra were within $\pm$3 mm and the dose profiles agreed within 5%. However, the discrepancy between calculation and measurement was about 10% for the obliquely incident beam and the beam incident on the pit-shaped phantom. In conclusion, we extended the 2D pencil-beam algorithm to take into account the three-dimensional geometry of the patient; in addition to dose calculation for irregular fields, irregular body contours and air gaps can be handled appropriately by the implemented program. In the near future, a more accurate algorithm will be implemented with inhomogeneity correction using CT, at which point the program can serve as a tool for educational and research purposes. 
This study was supported by a grant (#HMP-98-G-1-016) of the HAN (Highly Advanced National) Project, Ministry of Health & Welfare, R.O.K.
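The pencil-beam superposition underlying the algorithm above sums Gaussian lateral kernels over the open field, weighted by the central-axis depth dose. A simplified one-dimensional sketch (hypothetical depth-dose and sigma functions; the Hogstrom algorithm itself also handles obliquity, air gaps, and measured scattering powers):

```python
import math

def gaussian(x, sigma):
    """Normalized 1-D Gaussian kernel."""
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def pencil_beam_dose(x, depth, field_edges, cax_dose, sigma_of_depth, n_pencils=200):
    """Dose at off-axis position x and the given depth: the central-axis depth
    dose spread laterally by summing Gaussian pencil kernels over the open
    field (midpoint rule; a simplified pencil-beam sketch)."""
    x_min, x_max = field_edges
    dx = (x_max - x_min) / n_pencils
    sigma = sigma_of_depth(depth)
    total = 0.0
    for i in range(n_pencils):
        xi = x_min + (i + 0.5) * dx      # pencil position inside the field
        total += gaussian(x - xi, sigma) * dx
    return cax_dose(depth) * total

# Hypothetical inputs: flat depth dose and constant lateral sigma of 0.5 cm
# over a 10 cm field; on-axis the kernels integrate to ~1, at the field edge ~0.5.
d_axis = pencil_beam_dose(0.0, 3.0, (-5.0, 5.0), lambda z: 1.0, lambda z: 0.5)
d_edge = pencil_beam_dose(5.0, 3.0, (-5.0, 5.0), lambda z: 1.0, lambda z: 0.5)
```

In practice `sigma_of_depth` grows with depth via the linear scattering power, which is what widens the penumbra at depth.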


The PRISM-based Rainfall Mapping at an Enhanced Grid Cell Resolution in Complex Terrain (복잡지형 고해상도 격자망에서의 PRISM 기반 강수추정법)

  • Chung, U-Ran;Yun, Kyung-Dahm;Cho, Kyung-Sook;Yi, Jae-Hyun;Yun, Jin-I.
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.11 no.2
    • /
    • pp.72-78
    • /
    • 2009
  • The demand for rainfall data in gridded digital formats has increased in recent years due to the close linkage between hydrological models and decision support systems using geographic information systems. One of the most widely used tools for digital rainfall mapping is PRISM (parameter-elevation regressions on independent slopes model), which uses point data (rain gauge stations), a digital elevation model (DEM), and other spatial datasets to generate repeatable estimates of monthly and annual precipitation. In PRISM, rain gauge stations are assigned weights that account for climatically important factors besides elevation, and aspect and topographic exposure are simulated by dividing the terrain into topographic facets. The size of a facet, or the grid cell resolution, is determined by the density of rain gauge stations, and a $5{\times}5km$ grid cell is considered the practical lower limit in Korea. The PRISM algorithms using a 270m DEM for South Korea were implemented in a script language environment (Python), and relevant weights for each 270m grid cell were derived from monthly data from 432 official rain gauge stations. Weighted monthly precipitation data from at least 5 nearby stations for each grid cell were regressed on elevation, and the selected linear regression equations with the 270m DEM were used to generate a digital precipitation map of South Korea at 270m resolution. Among the 1.25 million grid cells, precipitation estimates at 166 cells, where measurements were made by the Korea Water Corporation rain gauge network, were extracted and the monthly estimation errors were evaluated. An average 10% reduction in root mean square error (RMSE) was found for months with more than 100mm of monthly precipitation, compared to the RMSE of the original 5km PRISM estimates. 
This modified PRISM may be used for rainfall mapping in the rainy season (May to September) at much higher spatial resolution than the original PRISM without losing accuracy.
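The per-cell step described above, regressing weighted station precipitation on elevation and evaluating the fit at the cell's DEM elevation, can be sketched as follows (hypothetical station data; not the authors' Python implementation):

```python
import numpy as np

def prism_cell_estimate(elevations, precip, weights, cell_elevation):
    """Weighted linear regression of station precipitation on elevation,
    evaluated at the grid cell's DEM elevation (PRISM-style, simplified).

    Solves the weighted least squares problem y = a + b * x for the nearby
    stations, then returns a + b * cell_elevation.
    """
    w = np.asarray(weights, float)
    x = np.asarray(elevations, float)
    y = np.asarray(precip, float)
    X = np.column_stack([np.ones_like(x), x])     # design matrix [1, elev]
    W = np.diag(w)
    a, b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return a + b * cell_elevation

# Hypothetical five nearby stations whose monthly precipitation (mm) rises
# exactly 0.1 mm per meter of elevation; a cell at 600 m then gets 110 mm.
est = prism_cell_estimate(
    elevations=[100, 300, 500, 700, 900],
    precip=[60, 80, 100, 120, 140],
    weights=[1, 1, 1, 1, 1],
    cell_elevation=600,
)
```

In full PRISM the weights encode distance, facet membership, and other climatic factors, which is what distinguishes it from a plain elevation regression.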

Quantitative Analysis of Digital Radiography Pixel Values to absorbed Energy of Detector based on the X-Ray Energy Spectrum Model (X선 스펙트럼 모델을 이용한 DR 화소값과 디텍터 흡수에너지의 관계에 대한 정량적 분석)

  • Kim Do-Il;Kim Sung-Hyun;Ho Dong-Su;Choe Bo-young;Suh Tae-Suk;Lee Jae-Mun;Lee Hyoung-Koo
    • Progress in Medical Physics
    • /
    • v.15 no.4
    • /
    • pp.202-209
    • /
    • 2004
  • Flat-panel-based digital radiography (DR) systems have recently become useful and important in the field of diagnostic radiology. In DRs with amorphous silicon photosensors, CsI(Tl) is normally used as the scintillator, which produces visible light corresponding to the absorbed radiation energy. The visible light photons are converted into electric signals in the amorphous silicon photodiodes, which constitute a two-dimensional array. To produce good-quality images, the detailed response of DR detectors to radiation must be studied. The relationship between air exposure and DR output has been investigated in many studies, but only under the condition of a fixed tube voltage. In this study, we investigated the relationship between DR output and X-rays in terms of the energy absorbed in the detector, rather than the air exposure, using SPEC-l8, an X-ray energy spectrum model. Measured exposure was compared with calculated exposure to obtain the inherent filtration, which is an important input variable of SPEC-l8. The energy absorbed in the detector was calculated using an algorithm for the energy absorbed in a material, and pixel values of real images were obtained under various conditions. The characteristic curve was obtained from the relationship between the two parameters, and the results were verified using phantoms made of water and aluminum: the pixel values of the phantom images were estimated and compared with the characteristic curve under various conditions. The relationship between DR output and the energy absorbed in the detector was found to be almost linear. In the phantom experiment, the estimated pixel values agreed with the characteristic curve, although the effect of scattered photons introduced some errors. The effect of scattered X-rays must be studied further, because it was not included in the calculation algorithm. 
The results of this study can provide useful information for the pre-processing of digital radiography images.
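The nearly linear characteristic curve reported above can be modeled by a simple least-squares fit of pixel value against absorbed energy. An illustrative sketch with hypothetical detector readings (not the study's calibration data):

```python
import numpy as np

def fit_characteristic_curve(absorbed_energy, pixel_values):
    """Least-squares linear fit pixel_value = gain * energy + offset,
    modeling the nearly linear detector response (illustrative sketch)."""
    gain, offset = np.polyfit(absorbed_energy, pixel_values, 1)
    return gain, offset

def predict_pixel_value(energy, gain, offset):
    """Evaluate the fitted characteristic curve at a given absorbed energy."""
    return gain * energy + offset

# Hypothetical calibration points: absorbed energy (arbitrary units) vs pixel value
energies = [1.0, 2.0, 3.0, 4.0]
pixels = [110.0, 210.0, 310.0, 410.0]
gain, offset = fit_characteristic_curve(energies, pixels)
```

Scattered photons shift measured pixel values off this curve, which is the residual error noted in the phantom experiment.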


Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.2
    • /
    • pp.127-142
    • /
    • 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. They have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised models have gained more popularity than unsupervised models such as deep belief networks, because of their successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation, short for "backward propagation of errors," is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network; the gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. Using this architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks rest on three basic ideas: local receptive fields, shared weights, and pooling. 
By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for each local receptive field, so all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually placed immediately after convolutional layers. What a pooling layer does is simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks a few years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state that allows the network to exhibit dynamic temporal behavior; unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem: vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through layers, which makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients are propagated backward not just through layers but also through time; if the network runs for a long time, the gradient can become extremely unstable and hard to learn from. 
It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
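The three convolutional ideas above, local receptive fields, shared weights, and pooling, can be shown in a few lines. A minimal sketch (single channel, single kernel, hypothetical input; real frameworks vectorize this):

```python
import numpy as np

def conv2d_shared(image, kernel):
    """Valid 2-D convolution with one shared-weight kernel: every hidden unit
    applies the same weights to its own local receptive field."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            # local receptive field: a small window of the input
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling: keep the strongest response per block,
    simplifying the convolutional layer's output."""
    h, w = feature_map.shape
    h, w = h // size * size, w // size * size
    fm = feature_map[:h, :w].reshape(h // size, size, w // size, size)
    return fm.max(axis=(1, 3))

# Hypothetical 4x4 input and a 2x2 summing kernel
img = np.arange(16, dtype=float).reshape(4, 4)
fmap = conv2d_shared(img, np.ones((2, 2)))   # 3x3 feature map
pooled = max_pool(fmap)                      # 1x1 after 2x2 pooling
```

Because the kernel is shared, the feature map responds to the same pattern wherever it occurs in the image, which is the translation invariance the abstract describes.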

Study on the LOWTRAN7 Simulation of the Atmospheric Radiative Transfer Using CAGEX Data. (CAGEX 관측자료를 이용한 LOWTRAN7의 대기 복사전달 모의에 대한 조사)

  • 장광미;권태영;박경윤
    • Korean Journal of Remote Sensing
    • /
    • v.13 no.2
    • /
    • pp.99-120
    • /
    • 1997
  • Solar radiation is scattered and absorbed by atmospheric constituents before it reaches the surface and again, after being reflected at the surface, before it reaches the satellite sensor. Therefore, consideration of radiative transfer through the atmosphere is essential for quantitative analysis of satellite-sensed data, especially in the shortwave region. This study examined the feasibility of using a radiative transfer code to estimate atmospheric effects on satellite remote sensing data. To do this, the flux simulated by LOWTRAN7 was compared with CAGEX data in the shortwave region. The CAGEX (CERES/ARM/GEWEX Experiment) dataset provides (1) atmospheric soundings, aerosol optical depth, and albedo, (2) ARM (Atmospheric Radiation Measurement) radiation fluxes measured by pyrgeometers, a pyrheliometer, and a shaded pyranometer, and (3) broadband shortwave fluxes simulated by Fu-Liou's radiative transfer code. To simulate the aerosol effect with the radiative transfer model, aerosol optical characteristics were extracted from the observed aerosol column optical depth, Spinhirne's experimental vertical distribution of the scattering coefficient, and D'Almeida's statistical radiative characteristics of atmospheric aerosols. LOWTRAN7 simulations were performed on 31 samples from completely clear days. LOWTRAN7 results and CAGEX data were compared for the upward, downward direct, and downward diffuse solar flux at the surface and the upward solar flux at the top of the atmosphere (TOA). The standard errors of the LOWTRAN7 simulations of these components are within 5%, except for the downward diffuse solar flux at the surface (6.9%). The results show that a large part of the error in the LOWTRAN7 flux simulation appears in the diffuse component, due mainly to scattering by atmospheric aerosols. To improve the accuracy of radiative transfer simulation by the model, better information about the radiative characteristics of atmospheric aerosols is needed.
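The percentage errors quoted above are RMS differences between simulated and observed fluxes expressed relative to the observed mean. A minimal sketch with hypothetical flux values (not the CAGEX processing code):

```python
import numpy as np

def relative_standard_error(simulated, observed):
    """RMS difference between simulated and observed fluxes, as a percent of
    the mean observed flux (the kind of figure quoted for each component)."""
    s, o = np.asarray(simulated, float), np.asarray(observed, float)
    return 100.0 * float(np.sqrt(np.mean((s - o) ** 2)) / np.mean(o))

# Hypothetical diffuse-flux samples (W m-2) on two clear days
err_pct = relative_standard_error([105.0, 95.0], [100.0, 100.0])   # 5.0 %
```

Computed per component (direct, diffuse, upward, TOA), this is how the diffuse term stands out as the dominant error source.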

Estimation of Annual Trends and Environmental Effects on the Racing Records of Jeju Horses (제주마 주파기록에 대한 연도별 추세 및 환경효과 분석)

  • Lee, Jongan;Lee, Soo Hyun;Lee, Jae-Gu;Kim, Nam-Young;Choi, Jae-Young;Shin, Sang-Min;Choi, Jung-Woo;Cho, In-Cheol;Yang, Byoung-Chul
    • Journal of Life Science
    • /
    • v.31 no.9
    • /
    • pp.840-848
    • /
    • 2021
  • This study was conducted to estimate annual trends and environmental effects in the racing records of Jeju horses. The Korea Racing Authority (KRA) collected 48,645 observations for 2,167 Jeju horses from 2002 to 2019. Racing records were preprocessed to eliminate errors that occur during data collection, and racing times were adjusted for comparison between race distances. A stepwise Akaike information criterion (AIC) variable selection method was applied to select appropriate environmental variables affecting the racing records. The annual improvement in race time was -0.242 seconds. The model with the lowest AIC value was established when variables were selected in the following order: year, budam classification, jockey ranking, trainer ranking, track condition, weather, age, and gender. The most suitable model was constructed when the jockey ranking and age variables were treated as random effects. Our findings can serve as basic data when building models for evaluating the genetic ability of Jeju horses.
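Forward stepwise selection by AIC, as used above, greedily adds the candidate variable that lowers the AIC most and stops when no addition helps. A minimal sketch for ordinary least squares with hypothetical data (the study's actual model also includes random effects, which this sketch omits):

```python
import numpy as np

def ols_aic(X, y):
    """AIC of an ordinary least-squares fit: n * log(RSS / n) + 2k."""
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return n * np.log(rss / n) + 2 * X.shape[1]

def forward_stepwise_aic(candidates, y):
    """Greedy forward selection: repeatedly add the variable that lowers the
    AIC most; stop when no addition improves it."""
    n = len(y)
    selected, remaining = [], dict(candidates)
    X = np.ones((n, 1))                       # start from intercept only
    best_aic = ols_aic(X, y)
    improved = True
    while improved and remaining:
        improved = False
        trials = {name: ols_aic(np.column_stack([X, col]), y)
                  for name, col in remaining.items()}
        name = min(trials, key=trials.get)
        if trials[name] < best_aic:
            best_aic = trials[name]
            X = np.column_stack([X, remaining.pop(name)])
            selected.append(name)
            improved = True
    return selected

# Hypothetical example: y truly depends on x1 only; x2 is pure noise.
rng = np.random.default_rng(42)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
y = 2.0 * x1 + rng.normal(scale=0.5, size=200)
chosen = forward_stepwise_aic({"x1": x1, "x2": x2}, y)
```

The AIC's 2k penalty is what keeps an uninformative variable like x2 from being added unless it reduces the residual sum of squares enough to pay for itself.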