• Title/Summary/Keyword: threshold model


Life Table Analysis of the Cabbage Aphid, Brevicoryne brassicae (Linnaeus) (Homoptera: Aphididae), on Tah Tsai Chinese Cabbages (다채를 기주로 양배추가루진딧물[Brevicoryne brassicae (Linnaeus)]의 생명표 분석)

  • Kim, So Hyung;Kim, Kwang-Ho;Hwang, Chang-Yeon;Lim, Ju-Rak;Kim, Kang-Hyeok;Jeon, Sung-Wook
    • Korean journal of applied entomology / v.53 no.4 / pp.449-456 / 2014
  • Life table analysis and temperature-dependent development experiments were conducted to characterize the biology of the cabbage aphid, Brevicoryne brassicae (Linnaeus), on detached Tah Tsai Chinese cabbage (Brassica campestris var. narinosa) leaves at seven constant temperatures (15, 18, 21, 24, 27, 30, and 33±1°C; 65±5% RH; 16L:8D). Mortality was lowest at 24°C, at 18% for the 1st-2nd and 0% for the 3rd-4th nymphal stages. The developmental period of the 1st-2nd nymphal stage was 8.4 days at 18°C and decreased with increasing temperature; that of the 3rd-4th nymphal stage was 6.7 days at 18°C. The lower threshold temperature calculated using a linear model was 7.8°C, and the effective accumulated temperature was 120.1 degree-days (DD). Adult longevity was 14.9 days at 21°C, and total fecundity was 58.5 at 24°C. According to the life table, the net reproductive rate was 47.5 at 24°C, and the intrinsic rate of increase and the finite rate of increase were 0.36 and 1.43, respectively, at 27°C. The doubling time was 1.95 days at 27°C, and the mean generation time was 7.43 days at 30°C.
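The lower threshold temperature and thermal constant above come from the standard linear rate model: regress development rate (1/days) on temperature, then the threshold is the x-intercept and the degree-day requirement is the reciprocal of the slope. A minimal sketch, using synthetic rates consistent with the reported 7.8°C and 120.1 DD rather than the paper's raw data:

```python
import numpy as np

def linear_threshold_model(temps, dev_rates):
    """Fit r(T) = a + b*T; lower threshold T0 = -a/b, thermal constant K = 1/b (degree-days)."""
    b, a = np.polyfit(temps, dev_rates, 1)  # polyfit returns [slope, intercept]
    return -a / b, 1.0 / b

# Hypothetical development rates generated from the reported parameters
temps = np.array([15.0, 18.0, 21.0, 24.0, 27.0])
rates = (temps - 7.8) / 120.1

t0, k = linear_threshold_model(temps, rates)
print(round(t0, 1), round(k, 1))  # 7.8 120.1
```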

A Study on Optimal Site Selection for Automatic Mountain Meteorology Observation System (AMOS): the Case of Honam and Jeju Areas (최적의 산악기상관측망 적정위치 선정 연구 - 호남·제주 권역을 대상으로)

  • Yoon, Sukhee;Won, Myoungsoo;Jang, Keunchang
    • Korean Journal of Agricultural and Forest Meteorology / v.18 no.4 / pp.208-220 / 2016
  • The Automatic Mountain Meteorology Observation System (AMOS) is an important ingredient in climatological studies and forest-disaster prediction. In this study, we selected optimal AMOS sites in the mountain areas of Honam and Jeju to help prevent forest disasters such as forest fires and landslides. Spatial datasets, including the national forest map, forest roads, hiking trails, a 30 m DEM (Digital Elevation Model), forest-risk maps (forest fire and landslide), and national AWS information, were used for the site selection. The spatial analysis, performed in ArcGIS, first applied a multifractal model, IDW interpolation, and spatial-redundancy analyses with 2.5 km AWS buffers and 200 m buffers. Candidate sites from the spatial analysis were then assessed through field surveys for accessibility, solar-power conditions, and wireless communication, with a threshold score of 70 points required for final selection. The spatial analysis extracted a total of 159 polygons from the national forest map, from which 64 secondary candidate sites on ridges and summits were identified using Google Earth; a total of 26 optimal sites were finally selected by the quantitative field assessment. These selection criteria will serve the establishment of an AMOS network for observing weather conditions in the national forests, and an effective observation network should improve mountain weather observations and thereby the accuracy of forest-disaster prediction.
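One step in the pipeline above is inverse-distance-weighted (IDW) interpolation of station values onto candidate locations. A minimal sketch of IDW, with hypothetical station coordinates and values and the common power parameter p=2 assumed:

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate at xy_query from known points."""
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if np.any(d < eps):                 # query coincides with a station
        return float(values[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * values) / np.sum(w))

# Hypothetical AWS stations (km coordinates) and a gridded risk value
stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
risk = np.array([10.0, 20.0, 30.0])

# Query equidistant from all three stations -> plain average
print(idw(stations, risk, np.array([0.5, 0.5])))  # 20.0
```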

Development of a Distribution Prediction Model by Evaluating Environmental Suitability of the Aconitum austrokoreense Koidz. Habitat (세뿔투구꽃의 서식지 환경 적합성 평가를 통한 분포 예측 모형 개발)

  • Cho, Seon-Hee;Lee, Kye-Han
    • Journal of Korean Society of Forest Science / v.110 no.4 / pp.504-515 / 2021
  • To examine the relationship between environmental factors influencing the habitat of Aconitum austrokoreense Koidz., this study employed the MaxEnt model to evaluate 21 environmental factors. Fourteen environmental factors with an AUC of at least 0.6 were identified, including age of stand, growing stock, altitude, topography, topographic wetness index, solar radiation, soil texture, mean temperature in January, mean temperature in April, mean annual temperature, mean rainfall in January, mean rainfall in August, and mean annual rainfall. Based on the response curves of the 14 descriptive factors, Aconitum austrokoreense Koidz. on Baekun Mountain favored sites at altitudes of 600 m or lower, and its habitat was not significantly affected by slope angle. The preferred conditions were high stand density, proximity to valleys, and a northwestern aspect. Under the five-age-class system, the species was more likely to be observed in lower age classes. The preferred solar radiation was 1.2 MJ/m². The species was less likely to be observed where the topographic wetness index fell below the reference value of 4.5, and more likely above the threshold of 7.5. Soil analysis showed that the species was more likely to thrive in sandy loam than in clay. Suitable conditions were a mean January temperature of -4.4°C to -2.5°C, a mean April temperature of 8.8°C to 10.0°C, and a mean annual temperature of 9.6°C to 11.0°C. The species first appeared at sites with a mean annual rainfall of 1,670-1,720 mm and a mean August rainfall of at least 350 mm, with rainfall of up to 390 mm preferred. The area of potential habitat with a distributive significance of 75% or higher was 202 ha, or 1.8% of the study area.
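The factor-screening step above keeps only variables whose AUC reaches 0.6. A trivial sketch of that filter; the factor names and AUC values here are hypothetical illustrations, not the paper's results:

```python
# Hypothetical univariate AUCs per environmental factor
aucs = {"altitude": 0.74, "slope": 0.55, "twi": 0.68, "aspect": 0.58}

AUC_THRESHOLD = 0.6  # screening criterion from the abstract
selected = sorted(name for name, auc in aucs.items() if auc >= AUC_THRESHOLD)
print(selected)  # ['altitude', 'twi']
```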

An Exploratory Study on the Industry/Market Characteristics of the 'Hyper-Growing Companies' and the Firm Strategies: A Focus on Firms with more than Annual Revenue of 100 Million dollars from 'Inc. the 5,000 Fastest-Growing Private Companies in America' (초고성장 기업의 산업/시장 특성과 전략 선택에 대한 탐색적 연구: 'Inc. the 5,000 Fastest-Growing Private Companies in America' 기업 중 연간 매출액 1억 달러 이상 기업을 중심으로)

  • Lee, Young-Dall;Oh, Soyoung
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.16 no.2 / pp.51-78 / 2021
  • Following 'start-up', 'scale-up' has become an important agenda in both corporate and policy spheres. Although the term is commonly used in industry and policy, it still lacks even a conceptual definition from an academic perspective: "corporate growth" in the academic literature and "business growth" in management practice are understood differently (Achtenhagen et al., 2010). Previous research on corporate growth has not departed from Penrose (1959)'s view of the firm as a bundle of resources and the role of managers; grounded in economics, it has mainly examined the factors that contribute to firms' growth and their growth patterns. Comparatively little is known about growth measured by annual revenue growth rate. Early-stage firms tend to exhibit high growth rates because they start from a low revenue base, but once firms reach annual revenue of more than 100 billion KRW, the threshold for classification as a 'middle-standing enterprise' by Korean standards, they rarely sustain a high revenue growth rate. Our sample of 333 companies (6.7% of the 5,000 'fastest-growing' companies), each with revenue above USD 100 million and a compound annual growth rate of at least 15% over the last three years, shows that sustaining high growth above a certain firm size is difficult. The study focuses on firms with annual revenue of more than USD 100 million (approximately 120 billion KRW) from the 'Inc. 2020 fastest-growing companies 5,000' list. The companies are categorized into 1) fast-growing companies (revenue CAGR of 15%-40% between 2016 and 2019), 2) hyper-growing companies (40%-99.9%), and 3) super-growing companies (100% or more), with an in-depth analysis of each group's characteristics. The relationship between revenue growth rate, each company's strategic choices (market orientation, generic strategy, growth strategy, pioneer strategy), industry/market environment, and firm age is investigated with a quantitative approach. The study aims to provide a reference 'hyper-growing model' that combines the paths and factors of growth strategies, and, for policymakers, a reference on which factors or environmental variables should be considered in 'optimal effective combinations' to promote firms' growth.
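The three growth bands above can be sketched as a CAGR computation plus a simple classifier; the revenue figures in the example are hypothetical:

```python
def cagr(rev_start, rev_end, years):
    """Compound annual growth rate over the given number of years."""
    return (rev_end / rev_start) ** (1.0 / years) - 1.0

def growth_band(c):
    """Bands from the study: fast 15-40%, hyper 40-99.9%, super >= 100%."""
    if c >= 1.0:
        return "super-growing"
    if c >= 0.4:
        return "hyper-growing"
    if c >= 0.15:
        return "fast-growing"
    return "below threshold"

# Hypothetical revenues in USD millions, 2016 -> 2019 (3 years)
c = cagr(100.0, 337.5, 3)
print(round(c, 3), growth_band(c))  # 0.5 hyper-growing
```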

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.127-148 / 2020
  • A data center is a physical facility housing computer systems and related components, and is essential foundational infrastructure for next-generation core industries such as big data, smart factories, wearables, and smart homes. With the growth of cloud computing in particular, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. A failure in one element of the facility may affect not only that equipment but also other connected equipment, causing enormous damage; IT equipment in particular fails irregularly because of these interdependencies, which makes root causes difficult to identify. Previous studies on failure prediction in data centers treated each server as a single, isolated state and predicted failures without assuming interaction between devices. In this study, therefore, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), with the focus on analyzing complex failures occurring within servers. Server-external failures include power, cooling, and user errors; because such failures can be prevented at the early stages of data center construction, various solutions are already being developed. By contrast, the causes of failures occurring within servers are hard to determine, and adequate prevention has not yet been achieved, in particular because server failures rarely occur in isolation: one server's failure can trigger failures in other servers, or be triggered by them. In other words, whereas existing studies analyzed failures under the assumption that servers do not affect one another, this study assumes that failures propagate between servers.
To define complex failure situations in the data center, failure-history data for each piece of equipment in the data center was used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. Failures for each device were sorted in chronological order, and two failures were defined as simultaneous if they occurred within 5 minutes of each other. After constructing sequences of devices that failed simultaneously, the 5 devices that most frequently co-occurred within these sequences were selected, and their simultaneous failures were confirmed through visualization. Because the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that predicts the next state from previous states. In addition, unlike the single-server case, a Hierarchical Attention Network model structure was used to account for the fact that each server contributes differently to a complex failure; this algorithm improves prediction accuracy by weighting servers according to their impact on the failure. The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data was treated both as a single-server state and as a multiple-server state, and the two were compared. The second experiment improved prediction accuracy in the complex-server case by optimizing a separate threshold for each server.
In the first experiment, which assumed a single server and multiple servers in turn, the single-server setting predicted no failure for three of the five servers even though failures actually occurred, whereas the multiple-server setting correctly predicted failures for all five servers. This result supports the hypothesis that servers affect one another: prediction performance was superior when multiple servers were assumed. In particular, applying the Hierarchical Attention Network algorithm, which assumes that each server's influence differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study shows that failures whose causes are hard to determine can be predicted from historical data, and presents a model for predicting failures occurring in data center servers. The results are expected to help prevent failures in advance.
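The 5-minute simultaneity rule used to define complex failures can be sketched as a simple chronological grouping; the device names and timestamps here are hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical failure-history events: (device, failure time)
events = [
    ("server-A", datetime(2020, 1, 1, 3, 0)),
    ("server-B", datetime(2020, 1, 1, 3, 4)),
    ("server-C", datetime(2020, 1, 1, 3, 20)),
]
events.sort(key=lambda e: e[1])

WINDOW = timedelta(minutes=5)  # "simultaneous" rule from the abstract
groups, current = [], [events[0]]
for dev, t in events[1:]:
    if t - current[-1][1] <= WINDOW:   # within 5 min of the previous failure
        current.append((dev, t))
    else:                              # gap too large: start a new group
        groups.append(current)
        current = [(dev, t)]
groups.append(current)

print([[d for d, _ in g] for g in groups])  # [['server-A', 'server-B'], ['server-C']]
```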

Crystalline Lens Curvature Change Model by Accommodation (조절력에 따른 Crystalline Lens의 곡률 변화 모델)

  • Park, Kwang-Ho;Kim, Yong-Geun
    • Journal of Korean Ophthalmic Optics Society / v.7 no.2 / pp.181-187 / 2002
  • The curvature of the crystalline lens changes with accommodation. When accommodation applies force vertically to the crystalline lens, an elastic body, its length increases in the vertex direction. The density distribution and shape of the loaded lens lean toward the posterior surface, and the horizontal force toward the anterior surface is larger than that toward the posterior surface. However, once accommodation exceeds a threshold value, the expansion of the anterior surface reaches its limit; the horizontal force toward the posterior surface then exceeds that toward the anterior surface, and the thickness increases more in the posterior direction than in the anterior direction. The difference between anterior and posterior thickness changes follows a second-order function of accommodation, Δ = B₁D + B₂D². The thickness changes (Δt_a, Δt_p) of the anterior pole-border and border-posterior pole regions with accommodation are expressed as: Δt_a = t_a - t_a0 = t_max + t_0·exp(-A/B) - t_a0 and Δt_p = t_p - t_p0 = t_min + t_0·exp(A/B) - t_p0. The parameter values obtained for the human crystalline lens are t_min = 1.06, t_0 = -0.33, B = 9.32 for the anterior surface and t_max = 1.97, t_0 = 0.10, B = 7.96 for the posterior surface. The vertex radius of curvature of the anterior and posterior lens surfaces changes with accommodation as R = R_0 + R_1·exp(D/k), with parameter values R_min = 5.55, R_1 = 6.87, k = 4.65 for the anterior surface and R_max = -68.6, R_1 = 76.7, k = 308.5 for the posterior surface.


Modeling of Sensorineural Hearing Loss for the Evaluation of Digital Hearing Aid Algorithms (디지털 보청기 알고리즘 평가를 위한 감음신경성 난청의 모델링)

  • 김동욱;박영철
    • Journal of Biomedical Engineering Research / v.19 no.1 / pp.59-68 / 1998
  • Digital hearing aids offer many advantages over conventional analog hearing aids, and with the advent of high-speed digital signal processing chips, new digital techniques have been introduced to them. However, evaluating new ideas in hearing aids normally requires intensive subject-based clinical tests, which take much time and cost. In this paper, we present an objective method to evaluate and predict the performance of hearing-aid systems without such subject-based tests. In the hearing impairment simulation (HIS) algorithm, a sensorineural hearing-impairment model is established from auditory test data of the impaired subject being simulated, and the nonlinear behavior of loudness recruitment is defined using hearing-loss functions generated from the measurements. To transform natural input sound into its impaired version, a frequency-sampling filter is designed; the filter is continuously refreshed with the level-dependent frequency response provided by the impairment model. To assess its performance, the HIS algorithm was implemented in real time on a floating-point DSP. Signals processed with the real-time system were presented to normal subjects, and their auditory data as modified by the system were measured. The sensorineural hearing impairment was simulated and tested; hearing-threshold and speech-discrimination tests demonstrated the efficiency of the system for hearing-impairment simulation. Using the HIS system, we evaluated three typical hearing-aid algorithms.
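The core idea, applying a frequency-dependent transformation derived from the subject's hearing loss, can be illustrated with a deliberately simplified sketch: a per-band threshold-shift attenuation. This ignores the level-dependent recruitment nonlinearity the paper models, and the band losses are hypothetical:

```python
# Hypothetical audiogram: hearing loss in dB per octave band (Hz)
band_loss_db = {250: 10, 500: 20, 1000: 30, 2000: 45, 4000: 60}

def attenuate(band_levels_db):
    """Subtract the simulated hearing loss from each band level (floored at 0 dB)."""
    return {f: max(0.0, float(lvl - band_loss_db[f]))
            for f, lvl in band_levels_db.items()}

# Hypothetical long-term speech spectrum levels (dB SPL per band)
speech = {250: 55, 500: 60, 1000: 65, 2000: 60, 4000: 50}
print(attenuate(speech))  # high-frequency bands are lost first
```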


A Study on the Precise Lineament Recovery of Alluvial Deposits Using Satellite Imagery and GIS (충적층의 정밀 선구조 추출을 위한 위성영상과 GIS 기법의 활용에 관한 연구)

  • 이수진;석동우;황종선;이동천;김정우
    • Proceedings of the Korean Association of Geographic Information Studies Conference / 2003.04a / pp.363-368 / 2003
  • We have developed a more effective algorithm to extract lineaments in areas covered by wide alluvial deposits, which are characterized by a relatively narrow range of brightness in Landsat TM images, whereas the currently used algorithm is limited to mountainous areas. In the new algorithm, flat areas consisting mainly of alluvial deposits were selected using local enhancement from the Digital Elevation Model (DEM). Aspect values were obtained with a 3×3 moving window using Zevenbergen & Thorne's method, and the slopes of the study area were then determined from the aspect values. After the lineament factors in the alluvial deposits were identified by comparison against threshold values, the first-rank lineaments under the alluvial deposits were extracted using the Hough transform. To extract the final lineaments, the lowest points under the alluvial deposits in topographic sections perpendicular to the first-rank lineaments were determined by spline interpolation, and the final lineaments were obtained by applying the Hough transform to those lowest points. The new algorithm yields clearer lineaments in areas covered by much larger alluvial deposits than the conventional algorithm, although some differences remain between the first-rank lineaments obtained from aspect and slope and the final lineaments. This study shows that the new algorithm extracts lineaments more effectively in areas covered with wide alluvial deposits than in areas of converging slope, narrow alluvial deposits, or valleys.
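The slope/aspect step above uses Zevenbergen & Thorne's finite differences over a 3×3 DEM window. A minimal sketch with a hypothetical tiny DEM; the aspect convention (downslope direction, 0° = north, clockwise, row 0 = northern row) is one common choice, not necessarily the paper's:

```python
import math

def slope_aspect(win, g):
    """win: 3x3 elevation window (row 0 = north), g: cell size in metres."""
    p = (win[1][2] - win[1][0]) / (2 * g)   # dz/dx (east minus west)
    q = (win[0][1] - win[2][1]) / (2 * g)   # dz/dy (north minus south)
    slope_deg = math.degrees(math.atan(math.hypot(p, q)))
    # Aspect: direction of steepest descent, 0 = north, clockwise
    aspect_deg = math.degrees(math.atan2(-p, -q)) % 360
    return slope_deg, aspect_deg

win = [[100, 100, 100],
       [110, 110, 110],
       [120, 120, 120]]   # elevation rises to the south -> faces north
s, a = slope_aspect(win, 30.0)
print(round(s, 1), round(a, 1))  # 18.4 0.0
```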


Study on Tumor Control Probability and Normal Tissue Complication Probability in 3D Conformal Radiotherapy (방사선 입체조형치료에 대한 종양치유확율과 정상조직손상확율에 관한 연구)

  • 추성실
    • Progress in Medical Physics / v.9 no.4 / pp.227-245 / 1998
  • A model of 3-D conformal radiotherapy was derived from clinical evaluation and an animal study, and therapeutic gains were evaluated using numerical expressions for tumor control probability (TCP) and normal tissue complication probability (NTCP). The radiation dose to the tumor and adjacent normal organs was evaluated and compared using dose-volume histograms (DVH). TCP and NTCP were derived from the distribution of delivered dose and irradiated volume, and these values were used as biological indices for assessing treatment effects. Ten patients with liver disease were evaluated, and 3 dogs were sacrificed for the animal study. Based on 3-D images of the tumor and adjacent organs, the optimum radiation dose and beam directions were chosen to maximize the effect on the tumor while minimizing effects on adjacent organs. The most effective shielding of normal adjacent organs was achieved through beam's-eye-view planning with a multileaf collimator. When the dose was increased from 50 Gy to 70 Gy, the TCP for conventional 2-port radiation and for 5-port multidimensional therapy was 0.982 and 0.995, respectively, while the NTCP was 0.725 and 0.142, respectively, suggesting that 3-D conformal radiotherapy can deliver a sufficient dose to the tumor while minimizing damage to normal areas of the liver. A positive correlation was observed between NTCP and actual complications of the normal liver in the animal study. These results suggest that 3-D conformal radiotherapy, together with the mathematical TCP and NTCP models, may improve the treatment of hepatoma.
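The abstract reports TCP/NTCP values but not the functional forms used. As a hedged illustration only, one common pair is a logistic (Poisson-type) TCP curve and the Lyman probit NTCP model; the parameters below (D50, gamma, TD50, m) are hypothetical, not the study's fitted values:

```python
import math

def tcp_logistic(dose, d50=60.0, gamma=2.0):
    """Logistic tumour-control curve with normalized slope gamma at D50."""
    return 1.0 / (1.0 + (d50 / dose) ** (4.0 * gamma))

def ntcp_lyman(dose, td50=45.0, m=0.15):
    """Lyman probit NTCP for uniform whole-organ irradiation."""
    t = (dose - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# At TD50 the complication probability is 0.5 by construction
for d in (50.0, 60.0, 70.0):
    print(d, round(tcp_logistic(d), 3), round(ntcp_lyman(d), 3))
```

Conformal plans aim to shift the dose distribution so that the tumor sits high on the TCP curve while normal tissue stays on the flat low end of the NTCP curve.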


CO2 Exchange in Kwangneung Broadleaf Deciduous Forest in a Hilly Terrain in the Summer of 2002 (2002년 여름철 경사진 광릉 낙엽 활엽수림에서의 이산화탄소 교환)

  • Choi, Tae-jin;Kim, Joon;Lim, Jong-Hwan
    • Korean Journal of Agricultural and Forest Meteorology / v.5 no.2 / pp.70-80 / 2003
  • We report the first direct measurement of CO₂ flux over the Kwangneung broadleaf deciduous forest, one of the tower flux sites in the KoFlux network. An eddy covariance system was installed on a 30 m tower along with other meteorological instruments from June to August 2002. Although the study site is non-ideal (with valley-like terrain), turbulence characteristics for a limited range of wind directions (90±45°) did not differ significantly from those obtained over simple, homogeneous terrain with an ideal fetch. Despite a very low rate of data retrieval, the preliminary results of our analysis are encouraging and worthy of further investigation. Ignoring advection terms, the averaged net ecosystem exchange (NEE) of CO₂ ranged from -1.2 to 0.7 mg m⁻² s⁻¹ from June to August 2002. The effect of weak turbulence on nocturnal NEE was examined in terms of friction velocity (u*), along with an estimate of the storage term. The effect of low u* on nocturnal NEE was evident, with a threshold value of about 0.2 m s⁻¹. The contribution of the storage term to nocturnal NEE was insignificant, suggesting that CO₂ stored within the forest canopy at night was probably removed by drainage flow along the hilly terrain; this could also be an artifact of the uncertainty in storage-term calculations based on a single-level concentration. Hyperbolic light-response curves explained more than 80% of the variation in the observed NEE, indicating that CO₂ exchange at the site was strongly light-dependent; such a relationship can be used effectively to fill gaps in the NEE data through the season. Finally, a simple scaling analysis based on a linear flow model suggested that advection may play a significant role in NEE evaluation at this site.
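The u* screening described above, rejecting nocturnal flux records measured under weak turbulence, can be sketched as a simple filter. The threshold of 0.2 m s⁻¹ is from the abstract; the flux records themselves are hypothetical:

```python
USTAR_THRESHOLD = 0.2  # m/s, low-turbulence threshold reported in the abstract

# Hypothetical nocturnal records: (friction velocity u* in m/s, NEE in mg CO2 m-2 s-1)
records = [
    (0.05, 0.10),
    (0.15, 0.22),
    (0.25, 0.45),
    (0.40, 0.51),
]

# Keep only records with sufficient turbulent mixing
accepted = [nee for ustar, nee in records if ustar >= USTAR_THRESHOLD]
print(accepted)  # [0.45, 0.51]
```

Records below the threshold are typically gap-filled later, e.g. with the light-response relationship mentioned in the abstract.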