• Title/Summary/Keyword: parameter sensitivity

Search Results: 1,014

Sensitivity Analysis of Meteorology-based Wildfire Risk Indices and Satellite-based Surface Dryness Indices against Wildfire Cases in South Korea (기상기반 산불위험지수와 위성기반 지면건조지수의 우리나라 산불발생에 대한 민감도분석)

  • Kong, Inhak;Kim, Kwangjin;Lee, Yangwon
    • Journal of Cadastre & Land InformatiX
    • /
    • v.47 no.2
    • /
    • pp.107-120
    • /
    • 2017
  • Many wildfire risk indices are used worldwide, but objective comparisons among these indices and surface dryness indices have not been conducted for wildfire cases in Korea. This paper describes a sensitivity analysis of wildfire risk indices and surface dryness indices for Korea, using the LDAPS (Local Analysis and Prediction System) meteorological dataset on a 1.5-km grid and MODIS (Moderate Resolution Imaging Spectroradiometer) satellite images on a 1-km grid. We analyzed meteorology-based wildfire risk indices such as the Australian FFDI (forest fire danger index), the Canadian FFMC (fine fuel moisture code), the American HI (Haines index), and the modified Nesterov index (MNI) proposed in the literature. We also examined satellite-based surface dryness indices such as the NDDI (normalized difference drought index) and TVDI (temperature vegetation dryness index). Comparing the six indices against 120 wildfire cases with a damaged area over 1 ha between January 2013 and May 2017, we found that the FFDI and FFMC showed good predictability for most wildfire cases, whereas the MNI and TVDI were not suitable for Korea. The NDDI can serve as a proxy parameter for wildfire risk because its average CDF (cumulative distribution function) scores were stably high irrespective of fire size. The indices tested in this paper should be chosen carefully and used in an integrated way so that they can contribute to wildfire forecasting in Korea.
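
The index-vs-fire comparison described above can be prototyped with an empirical CDF score: each index value recorded at a fire occurrence is ranked against that index's full distribution over the study period, and a consistently high percentile indicates good predictability. The sketch below is a minimal illustration with synthetic data; the index names and distributions are assumptions, not the paper's actual inputs.

```python
import numpy as np

def cdf_score(climatology, fire_day_values):
    """Mean empirical-CDF percentile of index values observed on fire days.

    climatology     : 1-D array of index values over the whole study period
    fire_day_values : index values at the times/places where fires occurred
    A score near 1 means the index was unusually high when fires broke out.
    """
    sorted_clim = np.sort(climatology)
    ranks = np.searchsorted(sorted_clim, fire_day_values, side="right")
    return float(np.mean(ranks / sorted_clim.size))

# Illustrative comparison of two hypothetical indices against 120 fire cases
rng = np.random.default_rng(0)
clim_a = rng.gamma(2.0, 10.0, 100_000)      # e.g. an FFDI-like index
clim_b = rng.normal(0.3, 0.1, 100_000)      # e.g. an NDDI-like index
fires_a = rng.gamma(4.0, 10.0, 120)         # values sampled on fire days
fires_b = rng.normal(0.45, 0.08, 120)

print("index A CDF score:", round(cdf_score(clim_a, fires_a), 3))
print("index B CDF score:", round(cdf_score(clim_b, fires_b), 3))
```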

A Sensitivity Analysis of JULES Land Surface Model for Two Major Ecosystems in Korea: Influence of Biophysical Parameters on the Simulation of Gross Primary Productivity and Ecosystem Respiration (한국의 두 주요 생태계에 대한 JULES 지면 모형의 민감도 분석: 일차생산량과 생태계 호흡의 모사에 미치는 생물리모수의 영향)

  • Jang, Ji-Hyeon;Hong, Jin-Kyu;Byun, Young-Hwa;Kwon, Hyo-Jung;Chae, Nam-Yi;Lim, Jong-Hwan;Kim, Joon
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.12 no.2
    • /
    • pp.107-121
    • /
    • 2010
  • We conducted a sensitivity test of the Joint UK Land Environment Simulator (JULES), investigating the influence of biophysical parameters on the simulation of gross primary productivity (GPP) and ecosystem respiration (RE) for two typical ecosystems in Korea. For this test, we employed whole-year eddy-covariance flux observations from 2006 at two KoFlux sites: (1) a deciduous forest in complex terrain at Gwangneung and (2) farmland with heterogeneous mosaic patches at Haenam. Our analysis showed that the simulated GPP was most sensitive to the maximum rate of RuBP carboxylation and the leaf nitrogen concentration for both ecosystems. RE was sensitive to the wood biomass parameter for the deciduous forest at Gwangneung. For the mixed farmland at Haenam, however, RE was most sensitive to the maximum rate of RuBP carboxylation and the leaf nitrogen concentration, like the simulated GPP. For both sites, the JULES model overestimated GPP and RE when the default values of the input parameters were adopted. Considering that the leaf nitrogen concentration observed at the deciduous forest site was only about 60% of its default value, a significant portion of the model's overestimation can be attributed to this discrepancy in the input parameters. Our findings demonstrate that these key biophysical parameters of the two ecosystems should be evaluated carefully prior to any simulation and interpretation of ecosystem carbon exchange in Korea.
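
The parameter sensitivity described above follows a one-at-a-time pattern: perturb each biophysical parameter around its default and compare the simulated GPP response. JULES itself is a full Fortran land-surface model, so the sketch below substitutes a toy GPP function purely to show the shape of such a test; the function form and parameter values are assumptions, not JULES code.

```python
import numpy as np

def toy_gpp(vcmax, leaf_n, lai):
    """Stand-in for a full JULES run: returns a notional GPP value.
    The functional form is illustrative only."""
    return 0.05 * vcmax * (leaf_n / 2.0) ** 0.7 * (1.0 - np.exp(-0.5 * lai))

defaults = {"vcmax": 60.0, "leaf_n": 2.0, "lai": 4.0}
base = toy_gpp(**defaults)

# One-at-a-time sensitivity: perturb each parameter by +/-20 %
for name, value in defaults.items():
    low  = toy_gpp(**{**defaults, name: 0.8 * value})
    high = toy_gpp(**{**defaults, name: 1.2 * value})
    print(f"{name:7s} relative GPP change over +/-20 %: {(high - low) / base:+.2%}")
```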

Uniform Hazard Spectrum for Seismic Design of Fire Protection Facilities (소방시설의 내진설계를 위한 등재해도 스펙트럼)

  • Kim, Jun-Kyoung;Jeong, Keesin
    • Fire Science and Engineering
    • /
    • v.31 no.1
    • /
    • pp.26-35
    • /
    • 2017
  • Since the Northridge earthquake (1994) and Kobe earthquake (1995), the concept of performance-based design has been actively introduced for major structures and buildings, and a seismic design code has recently been established for fire protection facilities. Important fire protection facilities should therefore be designed and constructed according to this code. Accordingly, performance-based design requires uniform hazard spectra (UHS) at annual exceedance probabilities corresponding to performance levels such as operational, immediate occupancy, life safety, and collapse prevention. Using probabilistic seismic hazard analysis (PSHA), uniform hazard spectra for five major cities in Korea were analyzed for return periods of 500, 1,000, and 2,500 years at spectral frequencies of 0.5, 1.0, 2.0, 5.0, and 10.0 Hz and for PGA. The ground motion prediction equations and several seismotectonic models suggested by an expert panel of 10 members in seismology and tectonics were used as input data for the uniform hazard spectrum analysis. According to the sensitivity analysis, the spectral ground motion prediction equations have a greater impact on the seismic hazard than the seismotectonic models. The resulting uniform hazard spectra showed the maximum seismic hazard at a frequency of 10 Hz and shape characteristics similar to those of previous studies and related technical guides for nuclear facilities.
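
Once PSHA hazard curves are available for each spectral frequency, a uniform hazard spectrum is read off by interpolating every curve at the same annual exceedance probability (roughly 1/2,500 per year for the 2,500-year return period). The sketch below uses synthetic hazard curves only to show that interpolation step; none of the numbers come from the paper.

```python
import numpy as np

# Synthetic hazard curves: annual probability of exceedance vs spectral acceleration,
# one curve per spectral frequency (values are illustrative only).
sa_grid = np.logspace(-3, 0.3, 50)                 # spectral acceleration (g)
hazard = {                                         # frequency (Hz) -> P[SA > sa] per year
    0.5:  1e-2 * np.exp(-sa_grid / 0.05),
    1.0:  1e-2 * np.exp(-sa_grid / 0.08),
    5.0:  1e-2 * np.exp(-sa_grid / 0.15),
    10.0: 1e-2 * np.exp(-sa_grid / 0.20),
}

def uhs_ordinate(sa, annual_poe, target_poe):
    """Interpolate one hazard curve (in log space) at the target annual
    probability of exceedance and return the corresponding SA."""
    return float(np.interp(np.log(target_poe), np.log(annual_poe[::-1]), sa[::-1]))

target = 1.0 / 2500.0                              # ~2,500-year return period
for freq, poe in hazard.items():
    print(f"{freq:5.1f} Hz  UHS ordinate ~ {uhs_ordinate(sa_grid, poe, target):.3f} g")
```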

Development of a Model for Calculating Road Congestion Toll with Sensitivity Analysis (민감도 분석을 이용한 도로 혼잡통행료 산정 모형 개발)

  • Kim, Byung-Kwan;Lim, Yong-Taek;Lim, Kang-Won
    • Journal of Korean Society of Transportation
    • /
    • v.22 no.5
    • /
    • pp.139-149
    • /
    • 2004
  • As the expansion of road capacity has become impractical in many urban areas, congestion pricing has been widely considered in recent years as an effective method for reducing urban traffic congestion. The principal reason is that congestion pricing can shift the user-equilibrium (UE) flow pattern toward the system-optimum (SO) pattern in a road network. In the context of network equilibrium, link tolls set according to the marginal-cost pricing principle can move a UE flow to an SO pattern, so the pricing method offers an efficient tool for steering the network toward system-optimal traffic conditions. This paper proposes a continuous network design problem (CNDP) under network equilibrium conditions to find the optimal congestion toll that maximizes net economic benefit (NEB). The model is formulated as a bi-level program with a continuous variable (the congestion toll): the upper-level problem maximizes the NEB under elastic demand, while the lower level describes the route choice of road users. The bi-level CNDP is intrinsically nonlinear and non-convex and hence difficult to solve, so we suggest a heuristic solution algorithm that uses derivative information of link flow with respect to the design parameter, i.e., the congestion toll. Two example networks are used to test the model proposed in the paper.
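
The marginal-cost pricing principle cited in the abstract sets the link toll equal to the flow times the derivative of the link travel-time function, tau_a = x_a * t_a'(x_a), which has a closed form for the common BPR cost function. The sketch below shows only that first-best toll computation; the paper's bi-level heuristic and elastic-demand NEB objective are not reproduced, and the link data are made up.

```python
def bpr_time(x, t0, cap, alpha=0.15, beta=4.0):
    """BPR link travel time: t(x) = t0 * (1 + alpha * (x / cap)**beta)."""
    return t0 * (1.0 + alpha * (x / cap) ** beta)

def marginal_cost_toll(x, t0, cap, alpha=0.15, beta=4.0):
    """First-best toll tau = x * dt/dx (in time units).
    For the BPR function: dt/dx = t0 * alpha * beta * x**(beta - 1) / cap**beta."""
    return x * t0 * alpha * beta * x ** (beta - 1) / cap ** beta

# Illustrative link: free-flow time 10 min, capacity 1800 veh/h, flow 2000 veh/h
x, t0, cap = 2000.0, 10.0, 1800.0
print("travel time (min):          ", round(bpr_time(x, t0, cap), 2))
print("marginal-cost toll (min-eq):", round(marginal_cost_toll(x, t0, cap), 2))
```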

Optimal Active-Control & Development of Optimization Algorithm for Reduction of Drag in Flow Problems (1) - Development of Optimization Algorithm and Techniques for Large-Scale and Highly Nonlinear Flow Problem (드래그 감소를 위한 유체의 최적 엑티브 제어 및 최적화 알고리즘의 개발(1) - 대용량, 비선형 유체의 최적화를 위한 알고리즘 및 테크닉의 개발)

  • Bark, Jai-Hyeong
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.20 no.5
    • /
    • pp.661-669
    • /
    • 2007
  • Ever since Prandtl's experiment in 1934 and the X-21 air-jet test in 1950, both attempting to reduce drag, controlling the surface velocities of an object moving rapidly through air by suction or injection has been known to be a highly effective, active method. To obtain the right amount of suction or injection, however, repetitive trial-and-error parameter testing is still used today. This study began as an attempt to determine the optimal amount of suction and injection for the incompressible Navier-Stokes equations by employing optimization techniques. Optimization with traditional methods is very limited, however, especially when the Reynolds number is high and many unexpected variables emerge. In an earlier study, we proposed an algorithm that addresses this problem by using a step-by-step method in the analysis and introducing the SQP method in the optimization. In this study, we propose a more effective and robust algorithm and techniques for solving the flow optimization problem.
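
As a rough illustration of the SQP step mentioned above, the sketch below replaces the Navier-Stokes analysis with a cheap surrogate drag function and uses SciPy's SLSQP solver to choose suction/injection velocities under bounds and a control-budget constraint. Everything here, including the surrogate and the constraint, is an assumption made for demonstration; it is not the algorithm developed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def surrogate_drag(q):
    """Stand-in for a Navier-Stokes drag evaluation; q holds suction (<0) or
    injection (>0) velocities at three control ports. Purely illustrative."""
    target = np.array([-0.3, -0.1, 0.2])            # hypothetical optimum
    return 1.0 + float(np.sum((q - target) ** 2))

q0 = np.zeros(3)                                     # start with no control
bounds = [(-0.5, 0.5)] * 3                           # actuator limits
budget = {"type": "ineq",                            # total control "energy" budget
          "fun": lambda q: 1.0 - float(np.sum(q ** 2))}

res = minimize(surrogate_drag, q0, method="SLSQP", bounds=bounds, constraints=[budget])
print("control velocities:", np.round(res.x, 3), " drag:", round(res.fun, 4))
```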

A Study on a Model of Rainfall Drop-Size Distribution over Daegwanryeong Mountainous Area Using PARSIVEL Observations (PARSIVEL 측정 자료를 활용한 대관령 산악지역 강수입자분포 모형 연구)

  • Park, Rae-Seol;Jang, Min;Oh, Sung Nam;Hong, Yun-Ki
    • Journal of the Korean earth science society
    • /
    • v.35 no.7
    • /
    • pp.518-528
    • /
    • 2014
  • In this study, a model of the rainfall drop-size distribution was modified using PARSIVEL-retrieved drop-size distributions over the Daegwanryeong mountainous area. A prototype model (a modified Γ-distribution model) applicable to this area was selected through comparative analysis between results from models proposed in preceding research and the PARSIVEL-retrieved data over the Daegwanryeong area. To apply the prototype model to the Daegwanryeong region, the parameters (α, A, B) were determined via sensitivity experiments, and drop-size distribution models for five classes of rainfall rate were proposed. Results from the five proposed models showed high correlation with the PARSIVEL-retrieved data (R² = 0.975). To suggest a generalized form of the rainfall drop-size distribution, relations between rainfall rate and the parameters (α, A, B) were investigated. The generalized model was also highly correlated with the PARSIVEL-retrieved data (R² = 0.953), which means that the proposed model is effective for simulating the rainfall drop-size distribution over the Daegwanryeong region. However, the proposed model was optimized for the Daegwanryeong region; broad observations in other regions are therefore necessary to develop a model representative of the Korean peninsula.
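
The modified Γ drop-size distribution is commonly written N(D) = N0 D^mu exp(-Lambda D); the abstract's (α, A, B) parameterization is specific to the paper, so the sketch below simply assumes the form N(D) = A·D^α·exp(-B·D) and fits it to synthetic PARSIVEL-like bin data with a least-squares routine. The assumed form and all numbers are illustrative, not the paper's model.

```python
import numpy as np
from scipy.optimize import curve_fit

def dsd(D, A, alpha, B):
    """Assumed modified-Gamma drop-size distribution: N(D) = A * D**alpha * exp(-B * D)."""
    return A * D ** alpha * np.exp(-B * D)

# Synthetic "observed" concentrations on PARSIVEL-like diameter bins (mm)
D = np.linspace(0.25, 5.0, 20)
rng = np.random.default_rng(1)
obs = dsd(D, A=8000.0, alpha=2.0, B=2.5) * rng.lognormal(0.0, 0.1, D.size)

popt, _ = curve_fit(dsd, D, obs, p0=[5000.0, 1.5, 2.0], maxfev=10000)
pred = dsd(D, *popt)
r2 = 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)
print("fitted (A, alpha, B):", np.round(popt, 2), " R^2 =", round(r2, 3))
```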

Prediction of Infarction in Acute Cerebral Ischemic Stroke by Using Perfusion MR Imaging and 99mTc-HMPAO SPECT (급성 허혈성 뇌졸중에서 관류 자기공명영상과 99mTC-HMPAO 단광자방출단층촬영술을 이용한 뇌경색의 예측)

  • Ho Cheol Choe;Sun Joo Lee;Jae Hyoung Kim
    • Investigative Magnetic Resonance Imaging
    • /
    • v.6 no.1
    • /
    • pp.55-63
    • /
    • 2002
  • Purpose: We investigated the predictive values of relative CBV measured with perfusion MR imaging and relative CBF measured with SPECT for tissue outcome in acute ischemic stroke. Materials and Methods: Thirteen patients with acute unilateral middle cerebral artery occlusion underwent perfusion MR imaging and 99mTc-HMPAO SPECT within 6 hours after symptom onset. Lesion-to-contralateral ratios of the perfusion parameters were measured, and the best cut-off values of both parameter ratios, with their accuracy in discriminating between regions with and without evolving infarction, were calculated. Results: Mean relative CBV ratios in regions with and without evolving infarction were 0.58±0.27 and 0.9±0.17 (p < 0.001), and mean relative CBF ratios in those regions were 0.41±0.22 and 0.71±0.14 (p < 0.001). The best cut-off values for discriminating between regions with and without evolving infarction were estimated to be 0.80 for the relative CBV ratio and 0.56 for the relative CBF ratio. The sensitivity, specificity, and efficiency of each cut-off value were 80.6%, 87.5%, and 82.7% for the relative CBV ratio, and 72.2%, 75.0%, and 73.0% for the relative CBF ratio (p > 0.05 between the two parameters). Conclusion: Measurement of relative CBV and relative CBF may be useful in predicting tissue outcome in acute ischemic stroke.
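
Choosing the cut-off that best discriminates regions with and without evolving infarction can be sketched by sweeping candidate thresholds of a perfusion ratio and scoring sensitivity, specificity, and overall accuracy at each one. The data below are synthetic samples loosely echoing the abstract's group means, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic lesion-to-contralateral rCBV ratios for the two region groups
infarct    = np.clip(rng.normal(0.58, 0.27, 36), 0.0, 2.0)   # evolved to infarction
no_infarct = np.clip(rng.normal(0.90, 0.17, 40), 0.0, 2.0)   # spared tissue

best = None
for cut in np.linspace(0.3, 1.2, 91):
    sens = np.mean(infarct < cut)         # infarcting regions correctly flagged
    spec = np.mean(no_infarct >= cut)     # spared regions correctly passed
    acc = (sens * infarct.size + spec * no_infarct.size) / (infarct.size + no_infarct.size)
    if best is None or acc > best[0]:
        best = (acc, cut, sens, spec)

acc, cut, sens, spec = best
print(f"best cutoff ~ {cut:.2f}: sensitivity {sens:.1%}, specificity {spec:.1%}, accuracy {acc:.1%}")
```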

Detection Efficiency of Microcalcification using Computer Aided Diagnosis in the Breast Ultrasonography Images (컴퓨터보조진단을 이용한 유방 초음파영상에서의 미세석회화 검출 효율)

  • Lee, Jin-Soo;Ko, Seong-Jin;Kang, Se-Sik;Kim, Jung-Hoon;Park, Hyung-Hu;Choi, Seok-Yoon;Kim, Chang-Soo
    • Journal of radiological science and technology
    • /
    • v.35 no.3
    • /
    • pp.227-235
    • /
    • 2012
  • Digital mammography reproduces the entire breast image and is used to detect the microcalcifications and masses that are the most important findings of nonpalpable early breast cancer, so it has been used as the primary screening test for breast disease. Microcalcification in breast lesions is reported to be important in the diagnosis of early breast cancer. In this study, six texture-feature algorithms were used to detect microcalcification on breast ultrasound (US) images, and the recognition rate was analyzed between normal US images and US images in which microcalcification is seen. In the experiment, the recognition rate of computer-aided diagnosis in distinguishing disease on mammography and breast US was considerably high, at 70-98%. The average contrast and entropy parameters scored low in the ROC analysis, but the sensitivity and specificity of the other four parameters were over 90%, so it is possible to detect microcalcification on US images. If research on additional parameter algorithms beyond the six texture-feature algorithms continues and a basis for practical use in CAD is prepared, the approach can be meaningful as a pre-reading aid and very useful for the early diagnosis of breast cancer.
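
Texture features such as contrast and entropy are typically derived from a gray-level co-occurrence matrix (GLCM). The abstract does not name its six algorithms exactly, so the sketch below just shows a standard GLCM feature computation with scikit-image on a synthetic patch; a real region of interest around a candidate calcification would replace the random array.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # scikit-image >= 0.19

# Synthetic 8-bit ultrasound-like patch standing in for a real ROI
rng = np.random.default_rng(3)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)

features = {prop: float(graycoprops(glcm, prop)[0, 0])
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
p = glcm[:, :, 0, 0]
features["entropy"] = float(-np.sum(p[p > 0] * np.log2(p[p > 0])))   # not in graycoprops
print(features)
```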

Investigation of image preprocessing and face covering influences on motion recognition by a 2D human pose estimation algorithm (모션 인식을 위한 2D 자세 추정 알고리듬의 이미지 전처리 및 얼굴 가림에 대한 영향도 분석)

  • Noh, Eunsol;Yi, Sarang;Hong, Seokmoo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.7
    • /
    • pp.285-291
    • /
    • 2020
  • In manufacturing, humans are being replaced with robots, but expert skills remain difficult to convert to data and therefore difficult to transfer to industrial robots. One approach is visual motion recognition, but physical features may be judged differently depending on the image data. This study aimed to improve the accuracy of vision methods for estimating human posture. Three OpenPose vision models were applied: MPII, COCO, and COCO+foot. To identify the effects of face-covering accessories and image preprocessing on the Convolutional Neural Network (CNN) structure, the presence or absence of accessories, image size, and filtering were set as the parameters affecting the identification of a person's posture. For each parameter, image data were applied to the three models, and the errors between the actual and predicted values, as well as the percentage of correct keypoints (PCK), were calculated. The COCO+foot model showed the lowest sensitivity to all three parameters. A reduction in image size of up to 50% (from 3024×4032 to 1512×2016 pixels) was considered acceptable. Emboss filtering, in combination with MPII, provided the best results (reducing the error to under 60 pixels).
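
The PCK metric used above counts a predicted keypoint as correct when its distance to the ground-truth keypoint falls within a fraction of a reference length (commonly the head segment or torso size). The sketch below is a generic implementation with synthetic keypoints; it is not tied to any particular OpenPose output format.

```python
import numpy as np

def pck(pred, truth, ref_scale, threshold=0.5):
    """Percentage of Correct Keypoints.

    pred, truth : (N, 2) arrays of predicted / ground-truth pixel coordinates
    ref_scale   : normalizing length in pixels (e.g. head segment length)
    threshold   : a keypoint is correct if its error < threshold * ref_scale
    """
    err = np.linalg.norm(pred - truth, axis=1)
    return float(np.mean(err < threshold * ref_scale))

# Synthetic example: 16 keypoints (an MPII-style count) and a 60-px reference scale
rng = np.random.default_rng(4)
truth = rng.uniform(0, 1512, size=(16, 2))
pred = truth + rng.normal(0, 20, size=(16, 2))     # ~20-px prediction noise
print("PCK@0.5:", pck(pred, truth, ref_scale=60.0))
```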

Parameter Optimization and Automation of the FLEXPART Lagrangian Particle Dispersion Model for Atmospheric Back-trajectory Analysis (공기괴 역궤적 분석을 위한 FLEXPART Lagrangian Particle Dispersion 모델의 최적화 및 자동화)

  • Kim, Jooil;Park, Sunyoung;Park, Mi-Kyung;Li, Shanlan;Kim, Jae-Yeon;Jo, Chun Ok;Kim, Ji-Yoon;Kim, Kyung-Ryul
    • Atmosphere
    • /
    • v.23 no.1
    • /
    • pp.93-102
    • /
    • 2013
  • The atmospheric transport pathway of an air mass is an important constraint controlling the chemical properties of the air mass observed at a given location. Such information can be used to understand observed temporal variability in atmospheric concentrations of long-lived chemical compounds whose sinks and/or sources are related to natural and/or anthropogenic processes at the surface, as well as to perform inversions that constrain the fluxes of such compounds. The Lagrangian particle dispersion model FLEXPART provides a useful tool for estimating detailed particle dispersion during atmospheric transport, a significant improvement over the traditional "single-line" trajectory models that have been widely used. However, those without a modeling background who simply want back-trajectory maps may find it challenging to optimize FLEXPART for their needs. In this study, we explain how to set up, operate, and optimize FLEXPART for back-trajectory analysis, and we provide automation programs based on the open-source R language. The discussion covers setting up an "AVAILABLE" file (a directory listing of the input meteorological fields stored on the computer), creating C-shell scripts that launch FLEXPART runs and store the output in directories designated by date, and processing the FLEXPART output to create figures of the back-trajectory "footprint" (the potential emission sensitivity within the boundary layer). Step-by-step instructions are given for an example case: back trajectories calculated for Anmyeon-do, Korea, for January 2011. One application is also demonstrated, interpreting observed variability in the atmospheric CO2 concentration at Anmyeon-do during this period. The back-trajectory modeling information introduced in this study should facilitate the creation and automation of most common back-trajectory calculations in atmospheric research.
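
The paper's automation scripts are written in R; as a loosely analogous illustration, the Python sketch below scans a directory of meteorological input files and writes an AVAILABLE-style index. The assumed filename pattern and the fixed-width line layout are guesses for demonstration only and must be checked against the FLEXPART manual for the version in use.

```python
# Minimal sketch of building an AVAILABLE-style index for FLEXPART input files.
# The filename pattern and line layout are assumptions, not the documented format.
from pathlib import Path
import re

met_dir = Path("/data/ecmwf")                     # hypothetical input directory
pattern = re.compile(r"EN(\d{8})(\d{2})$")        # assumed names like ENyyyymmddhh

lines = ["DATE     TIME        FILENAME", ""]     # assumed two header lines
for f in sorted(met_dir.glob("EN*")):
    m = pattern.match(f.name)
    if not m:
        continue
    date, hour = m.groups()
    lines.append(f"{date} {hour}0000      {f.name:<18s} ON DISC")

Path("AVAILABLE").write_text("\n".join(lines) + "\n")
print(f"indexed {len(lines) - 2} meteorological files")
```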