• Title/Abstract/Keyword: normal forest model

72 search results (processing time: 0.022 s)

Climate Change Impact on the Flowering Season of Japanese Cherry (Prunus serrulata var. spontanea) in Korea during 1941-2100

  • 윤진일
    • 한국농림기상학회지 / Vol. 8 No. 2 / pp.68-76 / 2006
  • A thermal time-based two-step phenological model was used to project flowering dates of Japanese cherry in South Korea from 1941 to 2100. The model consists of two sequential periods: a rest period described by a chilling requirement and a forcing period described by a heating requirement. Daily maximum and minimum temperatures are used to calculate daily chill units until a pre-determined chilling requirement for rest release is met. After the projected rest release date, daily heat units (growing degree days) are accumulated until a pre-determined heating requirement for flowering is achieved. Model calculations using daily temperature data at 18 synoptic stations during 1955-2004 were compared with observed blooming dates, yielding a mean absolute error of 3.9 days, a root mean squared error of 5.1 days, and a correlation coefficient of 0.86. Considering that phenology observation has never been fully standardized in Korea, this result seems reasonable. Gridded data sets of daily maximum and minimum temperature with a 270 m grid spacing were prepared for the climatological years 1941-1970 and 1971-2000 from observations at 56 synoptic stations, using a spatial interpolation scheme that corrects for the urban heat island effect as well as the elevation effect. A 25 km-resolution temperature data set covering the Korean Peninsula, prepared by the Meteorological Research Institute of the Korea Meteorological Administration under the Intergovernmental Panel on Climate Change Special Report on Emissions Scenarios (SRES) A2 scenario, was converted to 270 m gridded data for the climatological years 2011-2040, 2041-2070 and 2071-2100. The model was run with the gridded daily maximum and minimum temperature data sets, each representing a climatological normal year for 1941-1970, 1971-2000, 2011-2040, 2041-2070, and 2071-2100. According to the model calculation, the spatially averaged flowering date for the 1971-2000 normal is earlier than that for 1941-1970 by 5.2 days.
Compared with the current normal (1971-2000), flowering of Japanese cherry is expected to be earlier by 9, 21, and 29 days in the future normal years 2011-2040, 2041-2070, and 2071-2100, respectively. Southern coastal areas might experience springs with incomplete or even no Japanese cherry flowering caused by insufficient chilling for breaking bud dormancy.
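
The two-step chill/heat accumulation described above can be sketched as follows. This is a minimal illustration of the model structure only: the chill-unit formula, base temperature, and the chilling/heating requirements are made-up placeholder values, not the paper's calibrated parameters.

```python
# Sketch of the two-step model: accumulate chill units until the chilling
# requirement releases rest, then accumulate growing degree days until the
# heating requirement is met. All thresholds here are placeholder values.

def daily_chill_unit(tmax, tmin, t_opt=6.0):
    """Crude chill unit: peaks at an optimum mean temperature, zero far from it."""
    tmean = (tmax + tmin) / 2.0
    return max(0.0, 1.0 - abs(tmean - t_opt) / 10.0)

def daily_heat_unit(tmax, tmin, t_base=5.0):
    """Growing degree days: mean temperature above a base temperature."""
    tmean = (tmax + tmin) / 2.0
    return max(0.0, tmean - t_base)

def flowering_day(tmax_series, tmin_series, chill_req=60.0, heat_req=150.0):
    """Return the day index of predicted flowering, or None if the
    requirements are never met (the incomplete-flowering case above)."""
    chill = heat = 0.0
    rest_released = False
    for day, (tx, tn) in enumerate(zip(tmax_series, tmin_series)):
        if not rest_released:
            chill += daily_chill_unit(tx, tn)
            rest_released = chill >= chill_req
        else:
            heat += daily_heat_unit(tx, tn)
            if heat >= heat_req:
                return day
    return None
```

With a synthetic cool winter followed by spring warming, the function returns the first day on which the heating requirement is satisfied; a series that never accumulates enough chill returns None, mirroring the insufficient-chilling scenario projected for southern coastal areas.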

On the Relationship between Maximum Standing Crop and Species Density in the Herbaceous Vegetation of West Central Korea

  • 오규칠
    • Journal of Plant Biology / Vol. 26 No. 3 / pp.161-171 / 1983
  • To test whether Grime's model of the relationship between maximum standing crop plus litter (350-750 g/m²) and species density (10-30 species/0.25 m²) fits, a total of 52 samples, each with 4 replicate plots (0.5 m × 0.5 m), was collected from various forests, grasslands and coastal salt marshes in the midwestern part of the central Korean peninsula from September to October 1982. The results agree well with the model for grasslands and salt marshes; that is, the curve of species density against maximum standing crop (minus litter) is hump-shaped, resembling a normal distribution. The number of species was 11 for the grasslands and 7 for the salt marshes within the range of 300 g to 700 g per square meter of maximum standing crop. In forest stands, however, species density decreased as the maximum standing crop of herbs increased, so Grime's model does not seem to fit the forest stands of this study. The relationships among maximum standing crop, species density and eleven soil properties were examined further, and the possible cause of this discrepancy was discussed.
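
Grime's hump-backed relationship mentioned above can be illustrated by fitting a quadratic curve of species density against standing crop; the vertex of the fitted parabola marks the standing crop at which species density peaks. The data below are synthetic and symmetric for illustration, not the study's 52 field samples.

```python
# Hump-backed (quadratic) fit of species density vs. maximum standing crop.
# Synthetic, roughly symmetric data for illustration only.
import numpy as np

biomass = np.array([100, 200, 300, 400, 500, 600, 700, 800, 900], dtype=float)
species = np.array([5, 8, 12, 15, 16, 15, 12, 8, 5], dtype=float)

a, b, c = np.polyfit(biomass, species, deg=2)   # a < 0 gives a hump shape
peak_biomass = -b / (2.0 * a)                   # standing crop at peak density
```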

Surface Analysis of Papers Treated with N-chloro-polyacrylamide Using X-ray Photoelectron Spectroscopy: Mechanism of Wet Strength Development

  • Chen Shaoping;Wu Zonghua;Tanaka Hiroo
    • 한국펄프종이공학회 학술대회논문집 / 1999 Pre-symposium of the 10th ISWPC: Recent Advances in Paper Science and Technology / pp.276-281 / 1999
  • The surfaces of sheets treated with N-chloro-polyacrylamide (N-Cl-PAM) were analyzed using X-ray photoelectron spectroscopy (XPS) to clarify the chemical bonding involved in the paper strength development induced by N-Cl-PAM. Comparison of the observed N1s chemical shift of the sheet with those of the paper strength additives and of the model compound, 1-butyryl-3-propyl urea, indicated the presence of covalent alkyl acyl urea and urethane bonds on the fiber surfaces. Thus the formation of covalent bonds among N-Cl-PAM molecules themselves and between N-Cl-PAM and cellulose or hemicellulose may explain the much higher effectiveness of N-Cl-PAM, compared with A-PAM, in improving the wet strength of paper.

Prediction of spatio-temporal AQI data

  • KyeongEun Kim;MiRu Ma;KyeongWon Lee
    • Communications for Statistical Applications and Methods / Vol. 30 No. 2 / pp.119-133 / 2023
  • With the rapid growth of the economy and fossil fuel consumption, the concentration of air pollutants has increased significantly and the air pollution problem is no longer limited to small areas. We conduct statistical analysis with actual air quality data covering the whole of South Korea, using R and Python. Factors such as SO2, CO, O3, NO2, PM10, precipitation, wind speed, wind direction, vapor pressure, local pressure, sea level pressure, temperature, and humidity are used as covariates. The main goal of this paper is to predict air quality index (AQI) spatio-temporal data. Observations in spatio-temporal big datasets like AQI data are correlated both spatially and temporally, and computing predictions or forecasts under this dependence structure is often infeasible: the likelihood function based on the spatio-temporal model may be complicated, so special modeling approaches are needed for statistically reliable predictions. In this paper, we propose several methods for this big spatio-temporal AQI dataset. First, a random-effects model with spatio-temporal basis functions, a classical statistical approach, is proposed. Next, a neural network model, a deep learning method based on artificial neural networks, is applied. Finally, a random forest model, a machine learning method closer to computational science, is introduced. We then compare the forecasting performance of the three methods in terms of predictive diagnostics. As a result of the analysis, all three methods predicted normal levels of PM2.5 well, but performance was poor at extreme values.
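
The first of the three approaches, regression on spatio-temporal basis functions, can be sketched as below: the surface is represented as a linear combination of Gaussian basis functions and the weights are found by least squares. The knot grid, bandwidth, and data are illustrative assumptions, not the paper's specification.

```python
# Least-squares fit of a spatio-temporal surface as a linear combination of
# Gaussian basis functions on a knot grid (synthetic data; knot placement
# and bandwidth are illustrative choices).
import numpy as np

rng = np.random.default_rng(0)
s = rng.uniform(0.0, 1.0, 200)                     # spatial coordinate
t = rng.uniform(0.0, 1.0, 200)                     # time coordinate
y = np.sin(2.0 * np.pi * s) + 0.5 * t + rng.normal(0.0, 0.05, 200)

ks, kt = np.meshgrid(np.linspace(0, 1, 6), np.linspace(0, 1, 6))
knots = np.column_stack([ks.ravel(), kt.ravel()])  # 36 knots in (s, t)

def design(s, t, knots, bw=0.25):
    """Design matrix of Gaussian basis functions centred at the knots."""
    d2 = (s[:, None] - knots[:, 0]) ** 2 + (t[:, None] - knots[:, 1]) ** 2
    return np.exp(-d2 / (2.0 * bw ** 2))

X = design(s, t, knots)
w, *_ = np.linalg.lstsq(X, y, rcond=None)          # basis-function weights
rmse = float(np.sqrt(np.mean((X @ w - y) ** 2)))   # in-sample fit error
```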

Development of Vegetation Structure after Forest Fire in the East Coastal Region, Korea

  • 이규송;정연숙;김석철;신승숙;노찬호;박상덕
    • The Korean Journal of Ecology / Vol. 27 No. 2 / pp.99-106 / 2004
  • A general vegetation development model as a function of years since fire was developed for the slope areas of the fire-damaged east coast region of Korea that require urgent restoration planning. Development models of vegetation indices were also presented as parameters for techniques to manage burned areas. Allometric equations for a total of 17 species were derived to estimate the biomass of woody plants regenerating by sprouting in burned areas. It was estimated that about 20 years are required for recovery to a complete forest with a four-layer structure of tree, subtree, shrub and herb layers. With years since fire, the height of the uppermost vegetation layer, the basal area of woody stems and the aboveground woody biomass increased linearly, while total aboveground cover and the litter layer increased logarithmically. Among the vegetation indices, Ivc and Ivcd increased logarithmically and Hcl and Hcdl increased linearly with years since fire. In particular, within the first five years after fire, when secondary disasters are anticipated, all vegetation factors showed large spatial heterogeneity depending on site conditions and soil fertility. Ivc and Ivcd were suitable indices for expressing the spatial heterogeneity of the early post-fire period, whereas Hcl and Hcdl were judged to be useful indices for predicting the long-term development of vegetation structure.
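
The two recovery trends reported above (linear growth of height, basal area and biomass; logarithmic growth of cover and litter) can be illustrated with simple least-squares fits of each form. The data points below are invented for illustration, not the study's field measurements.

```python
# Ordinary least-squares fits for the two reported trends: height grows
# roughly linearly with years since fire, cover roughly logarithmically.
# The data points are invented for illustration.
import math

years  = [1, 2, 3, 5, 8, 12, 16, 20]
height = [0.5, 1.0, 1.6, 2.4, 4.1, 6.0, 7.9, 10.1]   # canopy height (m)
cover  = [15, 32, 41, 55, 68, 77, 83, 88]            # ground cover (%)

def linfit(x, y):
    """Slope and intercept of a simple least-squares line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return sxy / sxx, my - (sxy / sxx) * mx

slope_h, _ = linfit(years, height)                        # height vs. t
slope_c, _ = linfit([math.log(t) for t in years], cover)  # cover vs. ln t
```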

A Study on the Prediction Model of the Elderly Depression

  • SEO, Beom-Seok;SUH, Eung-Kyo;KIM, Tae-Hyeong
    • 산경연구논집 / Vol. 11 No. 7 / pp.29-40 / 2020
  • Purpose: In modern society, many urban problems are occurring, such as aging, the hollowing out of old city centers, and polarization within cities. In this study, we apply big data and machine learning methodologies to predict depressive symptoms in the elderly population early on, thus contributing to solving the problem of elderly depression. Research design, data and methodology: The random forest technique was used, and the correlations between the widely used CES-D10 scale and other variables were analyzed to estimate important variables. Two dependent variables were set up, one distinguishing normal from depressed and the other moderate from severe depression, and a total of 106 independent variables were included, covering the objective characteristics of the elderly as well as subjective health conditions, cognitive ability, employment, household background, income, consumption, assets, subjective expectations, and quality-of-life surveys. Results: The analysis showed that satisfaction with the residential area, satisfaction with quality of life and economic conditions, cognitive ability scores, and the number of outpatient clinic visits in the living area were important variables in classifying elderly depression. In the random forest performance evaluation, the model classifying depression versus normal achieved an accuracy of 86.3%, a sensitivity of 79.5%, and a specificity of 93.3%; the model classifying the degree of depression achieved an accuracy of 86.1%, a sensitivity of 93.9% and a specificity of 74.7%. Conclusions: In this study, the important variables of the estimated predictive model were identified using the random forest technique, with a focus on predictive performance itself. Although the research has limitations, such as the lack of clear criteria for classifying depression levels and the failure to reflect variables other than the KLoSA data, if additional variables are secured in the future and high-performance predictive models are estimated through various machine learning techniques, early detection of depression could improve the quality of life of senior citizens and help inform public policy decisions.
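
The accuracy, sensitivity, and specificity quoted above are standard confusion-matrix diagnostics. A minimal sketch of the definitions follows; the tp/fn/fp/tn counts are hypothetical values chosen only to roughly reproduce the quoted rates, since the paper's actual confusion matrix is not given.

```python
# Confusion-matrix diagnostics for a binary depression classifier.
# The counts below are hypothetical illustration values.

def diagnostics(tp, fn, fp, tn):
    """Accuracy, sensitivity (recall on the depressed class),
    specificity (recall on the normal class)."""
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

acc, sens, spec = diagnostics(tp=35, fn=9, fp=3, tn=42)
# acc ≈ 0.865, sens ≈ 0.795, spec ≈ 0.933
```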

Applying a Forced Censoring Technique with Accelerated Modeling for Improving Estimation of Extremely Small Percentiles of Strengths

  • Chen Weiwei;Leon Ramon V.;Young Timothy M.;Guess Frank M.
    • International Journal of Reliability and Applications / Vol. 7 No. 1 / pp.27-39 / 2006
  • Many real-world cases in material failure analysis do not follow the normal distribution perfectly. Forcing the normality assumption may lead to inaccurate predictions and poor product quality. We examine the failure process of the internal bond (IB, or tensile strength) of medium density fiberboard (MDF). We propose a forced censoring technique that more closely fits the lower tails of strength distributions and better estimates extremely small percentiles, which may be valuable to continuous quality improvement initiatives. Further analyses are performed to build an accelerated common-shape Weibull model for different product types using the JMP® Survival and Reliability platform. In this paper, a forced censoring technique is implemented for the first time as a software module, using JMP® Scripting Language (JSL) to expedite data processing, which is crucial for real-time manufacturing settings. We also use JSL to automate the task of fitting an accelerated Weibull model and testing model homogeneity in the shape parameter. Finally, a package script is written to readily provide field engineers with customized reporting for model visualization, parameter estimation, and percentile forecasting. Our approach may be more accurate for product conformance evaluation, and may help reduce the cost of destructive testing and data management through reduced testing frequency. It may also be valuable for preventing field failures and improving product safety, even when destructive testing is not reduced, by yielding higher-precision intervals at the same confidence level.
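
The forced-censoring idea, treating strength observations above a chosen cutoff as right-censored so the fit concentrates on the lower tail, can be sketched as follows. This is not the paper's JSL implementation: the data are a deterministic pseudo-sample of Weibull quantiles, and a crude grid search stands in for a proper maximum-likelihood optimizer.

```python
# Forced censoring sketch: strengths above a cutoff contribute only their
# survival probability to the likelihood, so the fit is driven by the
# lower tail. Weibull parameters maximise the censored log-likelihood
# over a coarse grid. Data = exact quantiles of Weibull(scale=1000,
# shape=4), not real MDF internal-bond measurements.
import math

n = 300
data = [1000.0 * (-math.log(1.0 - (i + 0.5) / n)) ** 0.25 for i in range(n)]

def censored_loglik(data, cutoff, shape, scale):
    ll = 0.0
    for x in data:
        if x < cutoff:   # exact (uncensored) failure: log density
            ll += (math.log(shape / scale)
                   + (shape - 1.0) * math.log(x / scale)
                   - (x / scale) ** shape)
        else:            # forced right-censoring: log survival at the cutoff
            ll += -(cutoff / scale) ** shape
    return ll

cutoff = 1100.0
_, k_hat, lam_hat = max(
    (censored_loglik(data, cutoff, k, lam), k, lam)
    for k in (3.0, 3.5, 4.0, 4.5, 5.0)
    for lam in (800.0, 900.0, 1000.0, 1100.0, 1200.0)
)
```

On this idealized sample the grid search recovers the generating parameters; with real data one would refine the grid or hand the censored likelihood to a numerical optimizer.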

Constraints on dark radiation from cosmological probes

  • Rossi, Graziano;Yeche, Christophe;Palanque-Delabrouille, Nathalie;Lesgourgues, Julien
    • 천문학회보 / Vol. 40 No. 1 / pp.44.1-44.1 / 2015
  • We present joint constraints on the number of effective neutrino species $N_{eff}$ and the sum of neutrino masses ${\Sigma}m_{\nu}$, based on a technique which exploits the full information contained in the one-dimensional Lyman-${\alpha}$ forest flux power spectrum, complemented by additional cosmological probes. In particular, we obtain $N_{eff}=2.91{\pm}0.22$ (95% CL) and ${\Sigma}m_{\nu}$ < 0.15 eV (95% CL) when we combine BOSS Lyman-${\alpha}$ forest data with CMB (Planck+ACT+SPT+WMAP polarization) measurements, and $N_{eff}=2.88{\pm}0.20$ (95% CL) and ${\Sigma}m_{\nu}$ < 0.14 eV (95% CL) when we further add baryon acoustic oscillations. Our results tend to favor the normal hierarchy scenario for the masses of the active neutrino species, provide strong evidence for the Cosmic Neutrino Background from $N_{eff}{\approx}3$ ($N_{eff}=0$ is rejected at more than $14{\sigma}$), and rule out the possibility of a sterile neutrino thermalized with active neutrinos (i.e., $N_{eff}=4$) - or more generally any decoupled relativistic relic with ${\Delta}N_{eff}{\approx}1$ - at a significance of over $5{\sigma}$, the strongest bound to date, implying that there is no need for exotic neutrino physics in the concordance ${\Lambda}CDM$ model.

Effects of Fracture Intersection Characteristics on Transport in Three-Dimensional Fracture Networks

  • Park, Young-Jin;Lee, Kang-Kun
    • 한국지하수토양환경학회 학술대회논문집 / 2001 Fall Meeting / pp.27-30 / 2001
  • Flow and transport at fracture intersections, and their effects on network-scale transport, are investigated in three-dimensional random fracture networks. Fracture intersection mixing rules, complete mixing and streamline routing, are defined in terms of fluxes normal to the intersection line between two fractures. By analyzing flow statistics and particle transfer probabilities distributed along fracture intersections, it is shown that for various network structures with power-law size distributions of fractures, the choice of intersection mixing rule makes comparatively little difference in the overall simulated solute migration patterns. The occurrence and effects of local flows around an intersection (local flow cells) are emphasized. Transport simulations at fracture intersections indicate that local flow circulations can arise from variability within the hydraulic head distribution along intersections, and from the internal no-flow condition along fracture boundaries. These local flow cells act as an effective mechanism to enhance the nondiffusive breakthrough tailing often observed in discrete fracture networks. It is shown that such non-Fickian (anomalous) solute transport can be accounted for by considering only advective transport, in the framework of a continuous time random walk model.
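
Of the two intersection rules defined above, complete mixing can be sketched as flux-weighted random routing of particles at an intersection (streamline routing would instead map inflow streamlines deterministically to outflows). The fluxes below are illustrative values, not simulated network fluxes.

```python
# "Complete mixing" at a fracture intersection: a particle leaves through
# an outflow branch with probability proportional to that branch's flux.
import random

def complete_mixing(q_out, rng):
    """Pick an outflow branch index with probability proportional to flux."""
    r = rng.random() * sum(q_out)
    acc = 0.0
    for i, q in enumerate(q_out):
        acc += q
        if r <= acc:
            return i
    return len(q_out) - 1

rng = random.Random(42)
q_out = [3.0, 1.0]                           # fluxes leaving the intersection
picks = [complete_mixing(q_out, rng) for _ in range(10000)]
frac_branch0 = picks.count(0) / len(picks)   # ≈ 0.75 = 3.0 / (3.0 + 1.0)
```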

Protecting Accounting Information Systems using Machine Learning Based Intrusion Detection

  • Biswajit Panja
    • International Journal of Computer Science & Network Security / Vol. 24 No. 5 / pp.111-118 / 2024
  • In general, a network-based intrusion detection system is designed to detect malicious behavior directed at a network or its resources. The key goal of this paper is to examine network data and identify whether it is normal traffic or anomalous traffic, specifically for accounting information systems. In today's world, there are a variety of approaches for detecting various forms of network-based intrusion. In this paper, we use supervised machine learning techniques. Classification models are used to train and validate data: the system is trained on a training dataset and then used to detect intrusions in a testing dataset. In our proposed method, we detect whether the network data is normal or anomalous, so that unauthorized activity on the network and the systems under it can be prevented. Decision Tree and K-Nearest Neighbor classifiers are applied in the proposed model to classify network traffic as normal or abnormal. In addition, Logistic Regression and Support Vector Classification algorithms are used in our model to support the proposed concepts. Furthermore, a feature selection method is used to extract valuable information from the dataset and enhance the efficiency of the proposed approach. The Random Forest machine learning algorithm is used to help the system identify the crucial features and focus on them rather than on all the features. The experimental findings revealed that the suggested method for network intrusion detection has a negligible false alarm rate, with the accuracy of the result expected to be between 95% and 100%. As a result of the high precision rate, this concept can be used to detect network data intrusion and prevent vulnerabilities on the network.
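
The classification step described above can be sketched with a tiny k-nearest-neighbour classifier labelling traffic records as normal or anomalous. The features and records below are toy values, not a real network-traffic dataset, and no feature scaling or selection is applied, so the byte count dominates the distance here.

```python
# Toy k-nearest-neighbour labelling of traffic records as normal/anomaly.
# Feature vector: (duration in s, bytes transferred, failed logins).
import math
from collections import Counter

train = [
    ((0.1, 200, 0), "normal"),   ((0.2, 350, 0), "normal"),
    ((0.1, 180, 0), "normal"),   ((0.3, 400, 0), "normal"),
    ((5.0, 9000, 4), "anomaly"), ((7.5, 12000, 6), "anomaly"),
    ((6.1, 8000, 5), "anomaly"), ((4.2, 15000, 3), "anomaly"),
]

def knn_predict(x, train, k=3):
    """Majority label among the k records closest to x (Euclidean distance)."""
    nearest = sorted(train, key=lambda rec: math.dist(x, rec[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

A record resembling the short, low-volume connections is labelled normal, while a long, high-volume connection with repeated failed logins is labelled an anomaly.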