• Title/Summary/Keyword: Threshold model


An Exploratory Study on the Industry/Market Characteristics of 'Hyper-Growing Companies' and Their Firm Strategies: A Focus on Firms with Annual Revenue of More than 100 Million Dollars from 'Inc. the 5,000 Fastest-Growing Private Companies in America' (초고성장 기업의 산업/시장 특성과 전략 선택에 대한 탐색적 연구: 'Inc. the 5,000 Fastest-Growing Private Companies in America' 기업 중 연간 매출액 1억 달러 이상 기업을 중심으로)

  • Lee, Young-Dall; Oh, Soyoung
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.16 no.2 / pp.51-78 / 2021
  • Following 'start-up', 'scale-up' has become an important agenda in both corporate and policy spheres. Although the term is commonly used in industry and policy fields, it has not yet received even a conceptual definition in the academic literature: 'corporate growth' in the academic sense and 'business growth' in practical management are understood differently (Achtenhagen et al., 2010). Previous research on corporate growth has not departed from Penrose (1959)'s view of the 'firm as a bundle of resources' and 'the role of managers'. Grounded in economic theory, existing research has mainly examined the factors that contribute to firms' growth and their growth patterns. By comparison, we lack knowledge of firm growth viewed through the 'annual revenue growth rate'. In their early stages, firms tend to exhibit high growth rates because they start from a low revenue base. However, once firms reach annual revenue of more than 100 billion KRW, the threshold for classification as a 'middle-standing enterprise' by Korean standards, they rarely sustain a high revenue growth rate. Our sample of 333 companies (6.7% of the 5,000 'fastest-growing' companies) achieved a compound annual growth rate of at least 15% over the last three years with annual revenue of more than USD 100 million, which shows that sustaining 'high growth' above a certain firm size is difficult. The study focuses on firms with annual revenue of more than USD 100 million (approximately 120 billion KRW) from the 'Inc. 2020 fastest-growing companies 5,000' list. The companies are categorized into 1) fast-growing companies (revenue CAGR of 15%~40% between 2016 and 2019), 2) hyper-growing companies (40%~99.9%), and 3) super-growing companies (100% or more), with an in-depth analysis of each group's characteristics.
The relationship between the revenue growth rate and each company's strategy choices (market orientation, generic strategy, growth strategy, pioneer strategy), industry/market environment, and firm age is also investigated with a quantitative approach. The study aims to provide a reference 'hyper-growing model' that combines the paths and factors of growth strategies, and, for policymakers, a reference for which factors or environmental variables should be considered in 'optimal effective combinations' to promote firm growth.
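The three growth bands above are defined purely by revenue CAGR cut-offs, so the grouping can be sketched directly; the functions below are illustrative (the names and the CAGR helper are not from the paper):

```python
def cagr(revenue_start, revenue_end, years):
    """Compound annual growth rate over `years` years."""
    return (revenue_end / revenue_start) ** (1.0 / years) - 1.0

def growth_band(cagr_value):
    """Map a revenue CAGR onto the study's three bands (cut-offs from the abstract)."""
    if cagr_value >= 1.00:      # 100% or more
        return "super-growing"
    if cagr_value >= 0.40:      # 40%-99.9%
        return "hyper-growing"
    if cagr_value >= 0.15:      # 15%-40%
        return "fast-growing"
    return "below threshold"
```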

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan; An, Sangjun; Kim, Mintae; Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.127-148 / 2020
  • A data center is a physical facility for housing computer systems and related components, and an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in one element of the facility, it may affect not only that equipment but also other connected equipment, and may cause enormous damage. IT facilities in particular fail irregularly because of their interdependence, and the cause is often hard to identify. Previous studies on failure prediction in data centers treated each server as an isolated unit, without considering that devices interact. In this study, therefore, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), with the focus on analyzing complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. The causes of failures inside a server, by contrast, are difficult to determine, and adequate prevention has not yet been achieved, mainly because server failures do not occur in isolation: one server's failure can cause, or be triggered by, failures on other servers. In other words, while existing studies analyzed failures on the assumption that servers do not affect one another, this study assumes that failures propagate between servers.
To define complex failure situations in the data center, failure history data for each piece of equipment was used. Four major failure types are considered: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures for each device are sorted in chronological order, and when a failure on one piece of equipment is followed by a failure on another within 5 minutes, the two are defined as occurring simultaneously. After constructing sequences of devices that failed together, the 5 devices that most frequently co-occurred within those sequences were selected, and their simultaneous failures were confirmed through visualization. Since the server resource information collected for failure analysis is a time series, Long Short-Term Memory (LSTM), a deep learning algorithm that predicts the next state from previous states, was used. In addition, a Hierarchical Attention Network (HAN) model structure was adopted to reflect the fact that each server contributes differently to a complex failure; it improves prediction accuracy by weighting each server in proportion to its impact on the failure. The study began by defining the failure types and selecting the analysis targets. The first experiment compared the same collected data treated as a single-server state versus a multi-server state. The second experiment improved prediction accuracy for complex failures by optimizing a separate threshold for each server.
In the first experiment, under the single-server assumption, three of the five servers were predicted not to have failed even though failures actually occurred; under the multi-server assumption, all five servers were correctly predicted to have failed. This result supports the hypothesis that servers affect one another, and confirms that prediction performance is superior when multiple servers are assumed rather than a single server. In particular, the Hierarchical Attention Network, which assumes each server's influence differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study shows that failures whose causes are hard to determine can be predicted from historical data, and presents a model for predicting server failures in data centers. The results are expected to help prevent failures in advance.
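The 5-minute simultaneity rule above can be sketched as a simple chronological grouping; the event format `(device, timestamp)` is an assumption for illustration, not the paper's data schema:

```python
from datetime import datetime, timedelta

def group_simultaneous(events, window=timedelta(minutes=5)):
    """Chain failure events into 'simultaneous' groups: an event joins the
    current group if it occurs within `window` of the previous event,
    otherwise it starts a new group."""
    events = sorted(events, key=lambda e: e[1])
    groups, current = [], []
    for device, ts in events:
        if current and ts - current[-1][1] > window:
            groups.append(current)
            current = []
        current.append((device, ts))
    if current:
        groups.append(current)
    return groups
```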

Crystalline Lens Curvature Change Model by Accommodation (조절력에 따른 Crystalline Lens의 곡률 변화 모델)

  • Park, Kwang-Ho; Kim, Yong-Geun
    • Journal of Korean Ophthalmic Optics Society / v.7 no.2 / pp.181-187 / 2002
  • The curvature of the crystalline lens changes with accommodation. When accommodation applies force vertically to the crystalline lens, an elastic body, its length increases toward the vertex. The density distribution and shape of the loaded lens lean toward the posterior surface, and the horizontal force toward the anterior surface is larger than that toward the posterior surface. Once accommodation exceeds a threshold value, however, the expansion of the anterior surface reaches its limit; the horizontal force toward the posterior surface then becomes larger than that toward the anterior surface, and the thickness increases more in the posterior direction than in the anterior direction. The difference between the anterior and posterior thickness changes follows a second-order function of accommodation, ${\Delta}=B_1D+B_2D^2$. The thickness-change curves (${\Delta}t_a$, ${\Delta}t_p$) from anterior pole to border and from border to posterior pole with accommodation are expressed as $${\Delta}t_a=t_a-t_{ao}=t_{max}+t_0{\exp}(-A/B)-t_{ao}$$ $${\Delta}t_p=t_p-t_{po}=t_{min}+t_0{\exp}(A/B)-t_{po}$$ The parameter values obtained for the human crystalline lens are $t_{min}=1.106$, $t_0=-0.33$, $B=9.32$ for the anterior surface and $t_{max}=1.97$, $t_0=0.10$, $B=7.96$ for the posterior surface. The change of vertex curvature radius with accommodation for the anterior and posterior surfaces of the crystalline lens follows $$R=R_0+R_1{\exp}(D/k)$$ with parameter values for the human crystalline lens of $R_{min}=5.55$, $R_1=6.87$, $k=4.65$ for the anterior surface and $R_{max}=-68.6$, $R_1=76.7$, $k=308.5$ for the posterior surface, respectively.
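Under the abstract's single-exponential model, the vertex radius at a given accommodation can be evaluated directly. Treating the quoted $R_{min}$ as $R_0$ for the anterior surface is our reading of the fitted values, so this snippet is a sketch, not the authors' implementation:

```python
import math

def vertex_radius(D, R0, R1, k):
    """Vertex radius of curvature as a function of accommodation D (diopters),
    per the single-exponential model R = R0 + R1 * exp(D / k)."""
    return R0 + R1 * math.exp(D / k)

# Anterior surface with the abstract's fitted values (R0 = 5.55, R1 = 6.87, k = 4.65):
# at D = 0 the model gives R0 + R1 = 12.42 mm.
r_relaxed = vertex_radius(0.0, R0=5.55, R1=6.87, k=4.65)
```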


Modeling of Sensorineural Hearing Loss for the Evaluation of Digital Hearing Aid Algorithms (디지털 보청기 알고리즘 평가를 위한 감음신경성 난청의 모델링)

  • 김동욱;박영철
    • Journal of Biomedical Engineering Research / v.19 no.1 / pp.59-68 / 1998
  • Digital hearing aids offer many advantages over conventional analog hearing aids. With the advent of high-speed digital signal processing chips, new digital techniques have been introduced to digital hearing aids. The evaluation of new ideas in hearing aids, however, is normally accompanied by intensive subject-based clinical tests that require much time and cost. In this paper, we present an objective method to evaluate and predict the performance of hearing aid systems without such subject-based tests. In the hearing impairment simulation (HIS) algorithm, a sensorineural hearing impairment model is established from auditory test data of the impaired subject being simulated. The nonlinear behavior of loudness recruitment is defined using hearing loss functions generated from the measurements. To transform natural input sound into its impaired counterpart, a frequency sampling filter is designed; the filter is continuously refreshed with the level-dependent frequency response function provided by the impairment model. To assess its performance, the HIS algorithm was implemented in real time on a floating-point DSP. Signals processed with the real-time system were presented to normal subjects, and their auditory data as modified by the system were measured. The sensorineural hearing impairment was simulated and tested; hearing threshold and speech discrimination tests demonstrated the efficiency of the system for hearing impairment simulation. Using the HIS system, we evaluated three typical hearing aid algorithms.
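Loudness recruitment is often simulated by expanding the input dynamic range so that the normal hearing threshold maps to the impaired threshold while high levels stay matched; the linear mapping below is a common textbook simplification, not the paper's fitted hearing-loss functions (parameter names are illustrative):

```python
def recruitment_map(level_db, thr_normal, thr_impaired, ucl=100.0):
    """Map a normal-hearing input level (dB) to the level a recruiting
    impaired ear would need: the range [thr_normal, ucl] is linearly
    expanded onto [thr_impaired, ucl] (ucl = uncomfortable loudness level)."""
    if level_db <= thr_normal:
        return thr_impaired          # sub-threshold input collapses to the impaired threshold
    slope = (ucl - thr_impaired) / (ucl - thr_normal)
    return thr_impaired + (level_db - thr_normal) * slope
```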


A Study on the Precise Lineament Recovery of Alluvial Deposits Using Satellite Imagery and GIS (충적층의 정밀 선구조 추출을 위한 위성영상과 GIS 기법의 활용에 관한 연구)

  • 이수진;석동우;황종선;이동천;김정우
    • Proceedings of the Korean Association of Geographic Information Studies Conference / 2003.04a / pp.363-368 / 2003
  • We have developed a more effective algorithm to extract lineaments in areas covered by wide alluvial deposits, which appear as a relatively narrow range of brightness in Landsat TM imagery, whereas the currently used algorithm is limited to mountainous areas. In the new algorithm, flat areas consisting mainly of alluvial deposits are selected using local enhancement of the Digital Elevation Model (DEM). Aspect values are obtained with a 3${\times}$3 moving window using Zevenbergen & Thorne's method, and the slopes of the study area are then determined from the aspect values. After the lineament factors in the alluvial deposits are identified by comparison against threshold values, first-rank lineaments under the alluvial deposits are extracted using the Hough transform. To extract the final lineaments, the lowest points under the alluvial deposits in topographic sections perpendicular to the first-rank lineaments are determined by spline interpolation, and the final lineaments are then obtained through a Hough transform on those lowest points. The new algorithm yields clearer lineaments in areas covered by much larger alluvial deposits than the conventional algorithm does. Some differences remain, however, between the first-rank lineaments obtained from aspect and slope and the final lineaments. This study shows that the new algorithm extracts lineaments more effectively in areas covered with wide alluvial deposits than in areas of converging slope, narrow alluvial deposits, or valleys.
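The Hough step the abstract relies on can be sketched as a plain accumulator over (θ, ρ) space; this is a generic implementation of the transform for a point set, not the authors' code:

```python
import numpy as np

def hough_best_line(points, n_theta=180, n_rho=200):
    """Vote each point into (theta, rho) bins via rho = x*cos(theta) + y*sin(theta)
    and return the (theta, rho) of the most-voted bin, i.e. the dominant line."""
    pts = np.asarray(points, dtype=float)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.abs(pts).max() * np.sqrt(2.0) + 1e-9   # bound on |rho|
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in pts:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rhos + rho_max) / (2.0 * rho_max) * (n_rho - 1)).astype(int)
        acc[np.arange(n_theta), idx] += 1               # one vote per (theta, rho) cell
    ti, ri = np.unravel_index(np.argmax(acc), acc.shape)
    rho = ri / (n_rho - 1) * 2.0 * rho_max - rho_max
    return thetas[ti], rho
```

For points along y = x, the dominant bin sits at θ = 3π/4 with ρ ≈ 0, i.e. the 45° line through the origin.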


Study on Tumor Control Probability and Normal Tissue Complication Probability in 3D Conformal Radiotherapy (방사선 입체조형치료에 대한 종양치유확율과 정상조직손상확율에 관한 연구)

  • 추성실
    • Progress in Medical Physics / v.9 no.4 / pp.227-245 / 1998
  • An appropriate model of 3-D conformal radiotherapy was derived from clinical evaluation and an animal study, and therapeutic gains were evaluated using numerical expressions of tumor control probability (TCP) and normal tissue complication probability (NTCP). The radiation dose to the tumor and the adjacent normal organs was accurately evaluated and compared using dose-volume histograms (DVH). TCP and NTCP were derived from the distribution of delivered dose and irradiated volume, and these values were used as biological indices for assessing treatment effects. Ten patients with liver disease were evaluated, and 3 dogs were sacrificed for the animal study. Based on 3-D images of the tumor and adjacent organs, the optimum radiation dose and the projection directions that maximize the effect on the tumor while minimizing the dose to adjacent organs could be determined. The most effective shielding of the normal adjacent organs was achieved through the beam's eye view with a multileaf collimator. When the dose was increased from 50 Gy to 70 Gy, the TCP for conventional 2-port radiation and for 5-port multidimensional therapy was 0.982 and 0.995 respectively, while the NTCP was 0.725 and 0.142 respectively, suggesting that 3-D conformal radiotherapy can deliver a sufficient dose to the tumor while minimizing damage to the normal areas of the liver. A positive correlation was observed between NTCP and actual complications of the normal liver in the animal study. The present study suggests that 3-D conformal radiotherapy, together with the mathematical models of TCP and NTCP, may improve the treatment of hepatoma.
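Standard closed forms exist for both indices; the logistic TCP and Lyman NTCP below are textbook models with illustrative parameters, not necessarily the exact expressions fitted in this study:

```python
import math

def tcp_logistic(dose, d50, gamma50):
    """Logistic tumor control probability: TCP = 1 / (1 + (D50/D)^(4*gamma50)),
    where D50 is the dose giving 50% control and gamma50 the slope there."""
    return 1.0 / (1.0 + (d50 / dose) ** (4.0 * gamma50))

def ntcp_lyman(dose, td50, m):
    """Lyman normal tissue complication probability: a normal CDF of
    t = (D - TD50) / (m * TD50), with m controlling the curve's steepness."""
    t = (dose - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
```

Both curves are monotone in dose, which is why escalating from 50 Gy to 70 Gy raises TCP and NTCP together and the gain of conformal therapy shows up as a lower NTCP at the same TCP.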


CO2 Exchange in Kwangneung Broadleaf Deciduous Forest in a Hilly Terrain in the Summer of 2002 (2002년 여름철 경사진 광릉 낙엽 활엽수림에서의 이산화탄소 교환)

  • Choi, Tae-jin; Kim, Joon; Lim, Jong-Hwan
    • Korean Journal of Agricultural and Forest Meteorology / v.5 no.2 / pp.70-80 / 2003
  • We report the first direct measurement of $CO_2$ flux over the Kwangneung broadleaf deciduous forest, one of the tower flux sites in the KoFlux network. An eddy covariance system was installed on a 30 m tower along with other meteorological instruments from June to August 2002. Although the study site is non-ideal (with valley-like terrain), turbulence characteristics for the usable wind directions (i.e., 90$\pm$45$^{\circ}$) were not significantly different from those obtained over simple, homogeneous terrain with an ideal fetch. Despite a very low rate of data retrieval, the preliminary results of our analysis are encouraging and worthy of further investigation. Ignoring advection terms, the averaged net ecosystem exchange (NEE) of $CO_2$ ranged from -1.2 to 0.7 mg m$^{-2}$ s$^{-1}$ from June to August 2002. The effect of weak turbulence on nocturnal NEE was examined in terms of friction velocity (u*), along with an estimate of the storage term. The effect of low u* on nocturnal NEE was obvious, with a threshold value of about 0.2 m s$^{-1}$. The contribution of the storage term to nocturnal NEE was insignificant, suggesting that the $CO_2$ stored within the forest canopy at night was probably removed by drainage flow along the hilly terrain; this could also be an artifact of the uncertainty in storage-term calculations based on a single-level concentration. Hyperbolic light response curves explained >80% of the variation in observed NEE, indicating that $CO_2$ exchange at the site is strongly light-dependent; such a relationship can be used effectively to fill missing gaps in NEE data through the season. Finally, a simple scaling analysis based on a linear flow model suggested that advection may play a significant role in NEE evaluation at this site.
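The u* screening described above amounts to discarding nighttime records whose friction velocity falls below the ~0.2 m s$^{-1}$ threshold; the record format below is an assumption for illustration, not the KoFlux data format:

```python
def screen_nocturnal_nee(records, ustar_threshold=0.2):
    """Drop nighttime NEE records with u* below the threshold (weak turbulence,
    when drainage flow biases the flux); daytime records and well-mixed
    nighttime records pass through unchanged."""
    return [r for r in records
            if not (r["is_night"] and r["ustar"] < ustar_threshold)]
```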

Plant Hardiness Zone Mapping Based on a Combined Risk Analysis Using Dormancy Depth Index and Low Temperature Extremes - A Case Study with "Campbell Early" Grapevine - (최저기온과 휴면심도 기반의 동해위험도를 활용한 'Campbell Early' 포도의 내동성 지도 제작)

  • Chung, U-Ran; Kim, Soo-Ock; Yun, Jin-I.
    • Korean Journal of Agricultural and Forest Meteorology / v.10 no.4 / pp.121-131 / 2008
  • This study was conducted to delineate temporal and spatial patterns of the potential risk of cold injury by combining the short-term cold hardiness of the Campbell Early grapevine with the IPCC-projected winter minimum temperature at a landscape scale. Gridded datasets of daily maximum and minimum temperature with 270 m cell spacing ("High Definition Digital Temperature Map", HD-DTM) were prepared for the current climatological normal period (1971-2000) from observations at 56 Korea Meteorological Administration (KMA) stations, using a geospatial interpolation scheme that corrects for land surface effects (e.g., land use, topography, and elevation). The same procedure was applied to the official temperature projection dataset covering South Korea (under the IPCC-SRES A2 and A1B scenarios) for 2071-2100. The dormancy depth model was run on the gridded datasets to estimate the geographical pattern of changes in the short-term cold hardiness of Campbell Early across South Korea for the current and future normal periods (1971-2000 and 2071-2100). This result was combined with the projected mean annual minimum temperature for each period to obtain the potential risk of cold injury. Results showed that both the land area with normal cold hardiness (dormancy depth of -150 and below) and that with sub-threshold temperatures for freezing damage ($-15^{\circ}C$ and below) will decrease in 2071-2100, reducing the overall freezing risk. Although more land area will face less risk in the future, the area with higher risk (>70%) will expand from 14% in the current normal period to 23 (A1B) ${\sim}5%$ (A2) in the future. Our method can be applied to other deciduous fruit trees to delineate geographical shifts of cold-hardiness zones under projected climate change, providing valuable information for adaptation strategies in the fruit industry.
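The risk combination can be sketched as a per-grid-cell rule using the two thresholds quoted in the abstract (dormancy depth of -150 as the hardiness criterion, $-15^{\circ}C$ as damaging cold); the paper's actual combination yields a risk percentage, so this boolean rule is a deliberate simplification:

```python
def freeze_injury_flag(dormancy_depth, tmin_c):
    """Flag a grid cell when hardiness is insufficient (dormancy depth
    shallower than -150) AND the extreme minimum is at or below -15 C.
    Thresholds are from the abstract; the rule itself is a simplification."""
    insufficient_hardiness = dormancy_depth > -150.0
    damaging_cold = tmin_c <= -15.0
    return insufficient_hardiness and damaging_cold
```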

A Feasibility Study of the K-LandBridge through a Linear Programming Model of Minimum Transport Costs (최소운송비용의 선형계획모형을 통한 K-LandBridge의 타당성 연구)

  • Koh, Yong Ki; Seo, Su Wan; Na, Jung Ho
    • Journal of Korea Port Economic Association / v.32 no.3 / pp.95-108 / 2016
  • China has recently advocated a national strategy called "One Belt, One Road" and moved it into execution, refining it into detailed action plans and continuing to supplement them. However, the Korean Peninsula, including North Korea, has not been included at all in this Chinese development policy and framework in terms of international logistics. The Korea-China rail ferry system is now being raised again, and this is the moment to develop effective policy on an international multimodal transport system in Northeast Asia. This paper introduces the K-LB (Korea LandBridge) as such an execution plan and conducts a feasibility study of it. The K-LB consists of a Korea-Russia train ferry system based at Pohang Yeongil New Port (right wing) and a Korea-China train ferry system based at Saemangeum New Port (left wing), with both wings linked to the existing rail system in Korea. This study finds the K-LB to be an effective international logistics system under current terms and conditions, and demonstrates that its introduction on the peninsula is feasible. More strictly, through a linear programming model whose objective function minimizes the quantified transport cost, the ranges and conditions of transport cost under which the K-LB remains effective are presented as results. According to these results, if the transport cost of the K-LB is about 34.5% cheaper than that of sea transport such as container shipping, goods would be transported by the K-LB on this route. This means the K-LB has a competitive advantage, owing to faster customs clearance and the elimination of loading and unloading procedures compared with container transport; it also indicates that the threshold level need not be large. Therefore, the K-LB is competitive enough to justify its introduction into the Northeast Asian logistics system.
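The study's headline result reduces to a cost threshold: the K-LB wins a shipment when its cost undercuts the sea route by at least about 34.5%. The decision rule can be sketched directly (the full linear program is not reproduced here, and the function name is illustrative):

```python
def prefer_klb(sea_cost, klb_cost, required_saving=0.345):
    """Choose the K-LB route when its cost is at least `required_saving`
    (default 34.5%, the study's threshold) below the sea-transport cost."""
    return klb_cost <= sea_cost * (1.0 - required_saving)
```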

Rainfall image DB construction for rainfall intensity estimation from CCTV videos: focusing on experimental data in a climatic environment chamber (CCTV 영상 기반 강우강도 산정을 위한 실환경 실험 자료 중심 적정 강우 이미지 DB 구축 방법론 개발)

  • Byun, Jongyun; Jun, Changhyun; Kim, Hyeon-Joon; Lee, Jae Joon; Park, Hunil; Lee, Jinwook
    • Journal of Korea Water Resources Association / v.56 no.6 / pp.403-417 / 2023
  • In this research, a methodology was developed for constructing an appropriate rainfall image database for estimating rainfall intensity from CCTV video. The database was built in the Large-Scale Climate Environment Chamber of the Korea Conformity Laboratories, which can control variables that are highly irregular and variable in real environments. 1,728 scenarios were designed under five experimental conditions, from which 36 scenarios totaling 97,200 frames were selected. Rain streaks were extracted with the k-nearest neighbor algorithm by computing the difference between each image and the background. To prevent overfitting, only data whose pixel values exceeded a set threshold relative to the average pixel value of each image were selected. The area of maximum pixel variability was found by shifting a window in 10-pixel steps and set as the representative area (180×180) of the original image. After resizing to 120×120 as input for a convolutional neural network model, image augmentation was performed under unified shooting conditions. 92% of the data fell within a 10% absolute range of PBIAS. The final results of this study have clear potential to enhance the accuracy and efficacy of existing real-world CCTV systems through transfer learning.
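The representative-area step can be sketched as a stride-10 scan for the 180×180 patch of maximum pixel variance; array shapes and names are illustrative, not the paper's code:

```python
import numpy as np

def best_window(image, size=180, stride=10):
    """Return the top-left corner (row, col) of the size x size patch with
    the largest pixel variance, scanning the image in `stride`-pixel steps."""
    h, w = image.shape
    best, best_var = (0, 0), -1.0
    for i in range(0, h - size + 1, stride):
        for j in range(0, w - size + 1, stride):
            v = float(image[i:i + size, j:j + size].var())
            if v > best_var:
                best, best_var = (i, j), v
    return best
```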