• Title/Summary/Keyword: initial check


Static Behavior of Concrete-Filled and Tied Steel Tubular Arch(CFTA) Girder (CFTA거더의 정적 거동연구)

  • Kim, Jong-In;Kim, Doo-kie;Lee, Jang-hyeong;Kim, Jeong-Ho
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.13 no.3 s.55
    • /
    • pp.225-231
    • /
    • 2009
  • This study introduces the CFTA girder (Concrete-Filled and Tied Steel Tubular Arch Girder), a combined structural system of traditional CFT, arch, and prestressed structures. Static load tests and structural behavior analyses were carried out on a 25 m-long CFTA girder. In the analysis, loads of 58 kN, 88 kN, 148 kN, 207 kN, and 298 kN were applied incrementally at positions spaced 1.0 m from the center of the girder in both directions, and strain and displacement were measured in each test. Linear static FEM analyses using the Strand7 code were also performed to check the structural stability and to investigate the effects of prestressing (±20%) and material properties (Young's modulus) on displacement and strain. The results of this study are summarized as follows: the initial strain and displacement under self-weight and prestressing were influenced by the variation of prestressing, but they were affected mainly by Young's modulus once additional loads were applied.
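For readers who want to reproduce the kind of sensitivity check described in this abstract, the sketch below is a minimal Python illustration, not the authors' Strand7 model: it uses the textbook simply supported beam formula δ = PL³/48EI to show why, in a linear static analysis, displacement under an added load scales with 1/E. Only the span and the load steps are taken from the abstract; the section property `I_m4` and the nominal modulus `E_nominal` are placeholder assumptions, and the simply supported prismatic beam is a stand-in for the actual tied-arch system.

```python
# Hedged sketch (not the authors' Strand7 model): a simply supported beam
# approximation illustrating how, in a linear static analysis, the deflection
# under an added point load scales inversely with Young's modulus E.
# All section properties below are hypothetical placeholder values.

def midspan_deflection(P_kN, L_m, E_GPa, I_m4):
    """Midspan deflection (m) of a simply supported beam under a central point load."""
    P = P_kN * 1e3          # kN -> N
    E = E_GPa * 1e9         # GPa -> Pa
    return P * L_m**3 / (48.0 * E * I_m4)

L = 25.0          # girder span (m), from the abstract
I = 0.05          # second moment of area (m^4), placeholder assumption
E_nominal = 30.0  # concrete-equivalent Young's modulus (GPa), placeholder assumption

for P in (58, 88, 148, 207, 298):            # load steps from the abstract (kN)
    for dE in (-0.2, 0.0, 0.2):              # +/-20% parameter variation, as in the study
        E = E_nominal * (1 + dE)
        d = midspan_deflection(P, L, E, I)
        print(f"P={P:3d} kN, E={E:5.1f} GPa -> deflection {d*1000:6.2f} mm")
```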

Timing Driven Analytic Placement for FPGAs (타이밍 구동 FPGA 분석적 배치)

  • Kim, Kyosun
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.54 no.7
    • /
    • pp.21-28
    • /
    • 2017
  • Practical models for FPGA architectures, which include performance- and/or density-enhancing components such as carry chains, wide function multiplexers, and memory/multiplier blocks, are being applied to academic FPGA placement tools that used to rely on simple imaginary models. Previously, techniques such as pre-packing and multi-layer density analysis were proposed to remedy issues related to such practical models, and wire length is effectively minimized during initial analytic placement. Since timing, rather than wire length, should be optimized, most previous work takes timing constraints into account. However, instead of the initial analytic placement, timing-driven techniques are mostly applied to subsequent steps such as placement legalization and iterative improvement. This paper incorporates timing-driven techniques, which check whether the placement meets the timing constraints given in the standard SDC format and minimize the detected violations, into the existing analytic placer that implements pre-packing and multi-layer density analysis. First, a static timing analyzer is used to check the timing of the wire-length-minimized placement results. In order to minimize the detected violations, a function that minimizes the largest arrival time at end points is added to the objective function of the analytic placer. Since each clock has a different period, this function is evaluated for each clock and added to the objective function. Since this function can unnecessarily reduce unviolated paths, a new function that calculates and minimizes the largest negative slack at end points is also proposed and compared. Since the existing legalization, which is not timing-driven, is used before the timing analysis, any improvement in timing is entirely due to the functions added to the objective function. Experiments on twelve industrial examples show that the minimum-arrival-time function improves the worst negative slack by 15% on average, whereas the minimum-worst-negative-slack function improves the negative slacks by an additional 6% on average.
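As an editorial illustration of the objective-function extension described in this abstract (a hedged sketch, not the paper's placer), the snippet below adds a per-clock timing term to a wire-length objective. A log-sum-exp "soft max" stands in for the max operator so the term stays differentiable for a gradient-based analytic placer; the two variants correspond to the largest-arrival-time term and the largest-negative-slack term discussed above. The clock names, numbers, and weighting scheme are assumptions.

```python
# Hedged sketch, not the paper's actual placer: per-clock timing terms added to a
# wire-length objective, with a smooth max so the objective stays differentiable.
import numpy as np

def soft_max(x, beta=20.0):
    """Numerically stable log-sum-exp approximation of max(x)."""
    x = beta * np.asarray(x, dtype=float)
    m = x.max()
    return (m + np.log(np.exp(x - m).sum())) / beta

def timing_penalty(arrival_by_clock, period_by_clock, use_slack=True):
    """arrival_by_clock: {clock: array of arrival times at that clock's end points}."""
    penalty = 0.0
    for clk, arrivals in arrival_by_clock.items():
        if use_slack:
            # variant 2: largest negative slack at end points of this clock
            neg_slack = np.maximum(0.0, np.asarray(arrivals) - period_by_clock[clk])
            penalty += soft_max(neg_slack)
        else:
            # variant 1: largest arrival time at end points of this clock
            penalty += soft_max(arrivals)
    return penalty

def objective(wire_length, arrival_by_clock, period_by_clock, w_timing=1.0):
    return wire_length + w_timing * timing_penalty(arrival_by_clock, period_by_clock)

# toy usage with two hypothetical clock domains (ns)
arr = {"clk_a": np.array([4.2, 5.1, 3.8]), "clk_b": np.array([9.7, 11.4])}
per = {"clk_a": 5.0, "clk_b": 10.0}
print(objective(1234.0, arr, per))
```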

The Contents and Satisfaction of Home Care Program Delivered by Seoul Nurses Association (서울시 간호사회 가정간호시범사업 서비스 내용 및 만족도 분석)

  • Lim, Nan-Young;Kim, Keum-Soon;Kim, Young-Im;Kim, Kwuy-Bun;Kim, Si-Hyun;Park, Ho-Ran
    • The Korean Nurse
    • /
    • v.36 no.1
    • /
    • pp.59-76
    • /
    • 1997
  • The purposes of this study were to identify the contents and satisfaction level of patients who received home care services, and to compare differences in the contents by the characteristics of the patients. Data from seventy-eight patients who received home care services from January 1 to September 30, 1996 were collected to analyze the contents and outcomes of the home care service. Sixty-nine patients currently receiving home care services participated in evaluating the satisfaction level of the service. The data were analyzed using means, standard deviations, the χ² test, and ANOVA with the SPSS PC+ program. The findings of this study were as follows: 1. The contents and outcomes of home care service: 1) The mean age of the subjects was 64.4 years; 58% of them were female. Those living in Seoul accounted for 83%, and the rest lived in Kyung-Gi. 2) The subjects who had one diagnosis were 41%. Over 60% of them had diseases of the neurologic and sensory system. 3) The mean number of visits was 6; 22% received only one visit. The mean time of care was 79 minutes, and visits lasting from 31 to 60 minutes accounted for 47%. The subjects who terminated the visits because of death were 67.3%. 62% of the persons who referred them to the home care service were nurses. 4) Pain was more relieved after the service than before. The amount of intake and the degree of bed sores, edema, and fracture improved after the service. Health status after the service improved in general. 5) There were significant differences between initial and last conscious levels in tracheostomy care and oxygen inhalation care. There was a significant difference between initial and last degrees of activity in blood sugar checks. 6) There were significant differences in the number of visits for assessment of status, evaluation and observation, vital sign checks, skin care, injection, medication, bed sore care, colostomy care, relaxation therapy for pain relief, patient education, family care, exercise therapy, position change, supply of disinfected equipment, and infection control. There were significant differences in visiting time for nasogastric tube care, drainage tube care, and oxygen inhalation care. 2. The satisfaction level of home care service: 1) 50% were male. Subjects over 60 years of age accounted for 61%. Those living in Seoul were 82%. 2) The subjects who had one or two diagnoses were 32% each. 55% of the persons who referred them to the home care service were nurses. 3) The total level of satisfaction with the home care service was very high. 4) The older the age, the higher the satisfaction level; the larger the number of visits, the higher the satisfaction level. 5) The subjects who were in a cloudy state showed a higher level of satisfaction than those in an alert or comatose state. The subjects whose activity was normal or who needed assistance showed a higher level of satisfaction than bedridden or immobilized subjects. These findings suggest that the patients had substantial need for post-hospital care. They tended to be elderly and to have experienced a wide range of health problems associated with aging and chronicity, including limitations in activities and other serious health problems. Therefore, nationwide home care systems beyond the limits of a demonstration program run by a local association, and the development of an effective financial system for home-based health care, are necessary for clients who are in need of home care.


DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations in future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, including two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed. The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs.
The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the lowest %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. More importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted using screenline volume analysis, functional class and route-specific volume analysis, area-specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC (link volume to ground count) ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of the %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route-specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups.
Area-specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production trip rate (total adjusted productions/total population) and a new trip attraction trip rate. Revised zonal production and attraction adjustment factors can then be developed that only reflect the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not available currently, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%.
Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic, while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground-count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground-count-based segment adjustment factors are satisfactory for long-range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
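As an editorial aid, the following is a minimal sketch of the SELINK-style adjustment described in this abstract, not the author's WisDOT implementation: for each selected link, the adjustment factor is the ratio of the ground count to the total assigned volume, and it is pushed back to the productions and attractions of the zones whose trips use that link. The data structures are hypothetical, and how factors from multiple links are combined for a single zone (here, multiplicatively) is an assumption.

```python
# Hedged sketch of a SELINK-style adjustment (not the author's implementation).
def selink_adjust(productions, attractions, link_flows, ground_counts, selected_links):
    """
    productions, attractions : dicts of zonal trip ends
    link_flows[link]         : (pairs, assigned_volume), where pairs is the set of
                               (origin, destination) zones whose assigned path uses the link
    ground_counts[link]      : observed truck count on the link
    """
    prod_factor = {z: 1.0 for z in productions}
    attr_factor = {z: 1.0 for z in attractions}
    for link in selected_links:
        pairs, assigned_volume = link_flows[link]
        if assigned_volume == 0:
            continue
        factor = ground_counts[link] / assigned_volume   # link adjustment factor
        for origin, destination in pairs:
            # push the correction back to the trip ends of the trips using this link
            prod_factor[origin] *= factor
            attr_factor[destination] *= factor
    new_productions = {z: p * prod_factor[z] for z, p in productions.items()}
    new_attractions = {z: a * attr_factor[z] for z, a in attractions.items()}
    return new_productions, new_attractions
```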


A study on improving self-inference performance through iterative retraining of false positives of deep-learning object detection in tunnels (터널 내 딥러닝 객체인식 오탐지 데이터의 반복 재학습을 통한 자가 추론 성능 향상 방법에 관한 연구)

  • Kyu Beom Lee;Hyu-Soung Shin
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.26 no.2
    • /
    • pp.129-152
    • /
    • 2024
  • In the application of deep learning object detection via CCTV in tunnels, a large number of false positive detections occur due to the poor environmental conditions of tunnels, such as low illumination and a severe perspective effect. This problem directly impacts the reliability of a tunnel CCTV-based accident detection system that relies on object detection performance. Hence, it is necessary to reduce the number of false positive detections while also increasing the number of true positive detections. Based on a deep learning object detection model, this paper proposes a false positive data training method that not only reduces false positives but also improves true positive detection performance through retraining on false positive data. The method consists of the following steps: initial training on a training dataset - inference on a validation dataset - correction of false positive data and dataset composition - addition to the training dataset and retraining. Experiments were conducted to verify the performance of this method. First, the optimal hyperparameters of the deep learning object detection model were determined through previous experiments. Then the training image format was determined, and experiments were conducted sequentially to check the long-term performance improvement obtained through repeated retraining on false detection datasets. In the first experiment, it was found that including the background in the inferred image was more advantageous for object detection performance than removing the background and leaving only the object. In the second experiment, it was found that retraining on false positives accumulated across retraining rounds was more advantageous for continuous improvement of object detection performance than retraining independently at each round. After retraining on the false positive data with the method determined in the two experiments, the car object class showed excellent inference performance, with an AP value of 0.95 or higher after the first retraining, and by the fifth retraining the inference performance had improved by about 1.06 times compared to the initial inference. The person object class continued to improve as retraining progressed, and by the 18th retraining its inference performance had improved by more than 2.3 times compared to the initial inference.
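The retraining loop described in this abstract can be summarized in the hedged sketch below (not the authors' code). The detector-specific operations are passed in as callables because the abstract does not name the training framework or its API; only the loop structure - initial training, inference on the validation set, correction of false positives, accumulation, and retraining - follows the text.

```python
# Hedged sketch of the iterative false-positive retraining loop (not the authors' code).
def iterative_fp_retraining(train_set, val_set, train_fn, infer_fn, correct_fp_fn, rounds=5):
    """
    train_fn(dataset)         -> model        (initial training / retraining)
    infer_fn(model, dataset)  -> detections   (inference on the validation set)
    correct_fp_fn(detections) -> list of corrected false-positive samples
    """
    model = train_fn(train_set)                   # step 1: initial training
    accumulated_fp = []                           # false positives kept across rounds
    for _ in range(rounds):
        detections = infer_fn(model, val_set)     # step 2: inference on validation data
        fp_samples = correct_fp_fn(detections)    # step 3: correct false-positive labels
        accumulated_fp.extend(fp_samples)         # accumulate FPs across rounds
        model = train_fn(list(train_set) + accumulated_fp)  # step 4: retrain on enlarged set
    return model
```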

The Precision Test Based on States of Bone Mineral Density (골밀도 상태에 따른 검사자의 재현성 평가)

  • Yoo, Jae-Sook;Kim, Eun-Hye;Kim, Ho-Seong;Shin, Sang-Ki;Cho, Si-Man
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.13 no.1
    • /
    • pp.67-72
    • /
    • 2009
  • Purpose: The ISCD (International Society for Clinical Densitometry) requires users to perform a mandatory precision test to raise their quality, even though there is no recommendation about patient selection for the test. Thus, we investigated the effect on the precision test by measuring the reproducibility of 3 bone density groups (normal, osteopenia, osteoporosis). Materials and Methods: 4 users performed precision tests on 420 patients (age: 57.8 ± 9.02) undergoing BMD measurement at Asan Medical Center (January 2008 - June 2008). In the first group (A), the 4 users each selected 30 patients regardless of bone density condition and measured 2 regions (L-spine, femur) twice. In the second group (B), the 4 users each measured the bone density of 10 patients in the same manner as group (A), but with the patients divided into 3 categories (normal, osteopenia, osteoporosis). In the third group (C), 2 users each measured 30 patients in the same manner as group (A), taking bone density condition into account. We used a GE Lunar Prodigy Advance (Encore V11.4) and analyzed the results by comparing the %CV to the LSC using the precision tool from the ISCD. Verification was done using SPSS. Results: In group A, the %CV values calculated by the 4 users (a, b, c, d) were 1.16, 1.01, 1.19, and 0.65 g/cm² for the L-spine and 0.69, 0.58, 0.97, and 0.47 g/cm² for the femur. In group B, the %CV values calculated by the 4 users (a, b, c, d) were 1.01, 1.19, 0.83, and 1.37 g/cm² for the L-spine and 1.03, 0.54, 0.69, and 0.58 g/cm² for the femur. Comparing the results of groups A and B, we found no considerable differences. In group C, user 1's %CV values for normal, osteopenia, and osteoporosis were 1.26, 0.94, and 0.94 g/cm² for the L-spine and 0.94, 0.79, and 1.01 g/cm² for the femur, and user 2's %CV values were 0.97, 0.83, and 0.72 g/cm² for the L-spine and 0.65, 0.65, and 1.05 g/cm² for the femur. Analyzing these results, we found almost no difference in reproducibility, but differences in several of the two users' measured values affected the total reproducibility. Conclusions: The precision test is an important factor in bone density follow-up. Better machine and user reproducibility is useful in clinical practice because of the low range of deviation. Users should check the machine's reproducibility before testing and maintain the same approach when performing BMD tests on patients. In precision testing, differences in measured values usually arise from ROI changes caused by patient positioning. For osteoporosis patients, it is more difficult to set the initial ROI accurately than for normal and osteopenia patients because of poor bone recognition, even though the ROI is generated automatically by the software. However, the initial ROI is very important, and users have to set a consistent ROI because the ROI Copy function is used at follow-up. In this study, we performed the precision test considering bone density condition and found that the LSC value stayed within 3%; there was no considerable difference. Thus, patient selection can be done regardless of bone density condition.
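For context on the %CV-to-LSC comparison mentioned above, the sketch below shows the standard ISCD-style precision arithmetic for duplicate measurements (a minimal illustration, not the ISCD precision calculator used in the study): the RMS standard deviation across patients, the corresponding %CV, and the least significant change at 95% confidence, LSC = 2.77 × precision error. The example BMD pairs are hypothetical.

```python
# Hedged sketch of ISCD-style precision arithmetic for duplicate measurements.
import math

def precision_from_pairs(pairs_g_cm2):
    """pairs_g_cm2: list of (measurement1, measurement2) BMD values in g/cm^2."""
    n = len(pairs_g_cm2)
    # per-patient variance for duplicate measurements: (x1 - x2)^2 / 2
    rms_sd = math.sqrt(sum((a - b) ** 2 / 2.0 for a, b in pairs_g_cm2) / n)
    mean_bmd = sum(a + b for a, b in pairs_g_cm2) / (2.0 * n)
    cv_percent = 100.0 * rms_sd / mean_bmd
    # LSC at 95% confidence = 2.77 x precision error
    return rms_sd, cv_percent, 2.77 * rms_sd, 2.77 * cv_percent

# toy usage with hypothetical L-spine BMD pairs
pairs = [(0.912, 0.921), (1.034, 1.027), (0.788, 0.801)]
print(precision_from_pairs(pairs))
```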


Possibility Estimating of Unaccessible Area on 1/5,000 Digital Topographic Mapping Using PLEIADES Images (PLEIADES 영상을 활용한 비접근지역의 1/5,000 수치지형도 제작 가능성 평가)

  • Shin, Jin Kyu;Lee, Young Jin;Choi, Hae Jin;Lee, Jun Hyuk
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.32 no.4_1
    • /
    • pp.299-309
    • /
    • 2014
  • This paper evaluated the feasibility of 1/5,000 digital topographic mapping using PLEIADES images with 0.5 m GSD (Ground Sampling Distance) resolution from the recently launched satellite. The check point results obtained by applying the initial RPCs (Rational Polynomial Coefficients) of the PLEIADES images gave RMSEs of X = ±1.806 m, Y = ±2.132 m, and Z = ±1.973 m. After geometric correction using 16 GCPs (Ground Control Points), the RMSEs became X = ±0.104 m, Y = ±0.171 m, and Z = ±0.036 m, and the RMSEs of the check points were X = ±0.357 m, Y = ±0.239 m, and Z = ±0.188 m; these results indicate that the adjustment accuracy complies with the error tolerances for the 1/5,000 scale. Additionally, we converted the coordinates of points obtained from TerraSAR for comparison with measurements from GPS (Global Positioning System) surveying. The RMSEs between the converted and GPS points were X = ±0.818 m, Y = ±0.200 m, and Z = ±0.265 m, which confirmed the possibility of 1/5,000 digital topographic mapping with PLEIADES images and GCPs. As a method of obtaining GCPs in inaccessible areas, however, the evaluation showed that GCPs extracted from TerraSAR images were not acceptable for 1/5,000 digital topographic mapping. Therefore, further research is needed on the applicability of GCPs extracted from TerraSAR images as a future alternative method.
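The accuracy figures above are per-axis RMSEs of check points; the short sketch below shows how such values are typically computed from model-derived and surveyed coordinates (an editorial illustration under that assumption, not the authors' workflow). The coordinate values in the usage example are hypothetical.

```python
# Hedged sketch: per-axis RMSE of check points against surveyed coordinates.
import numpy as np

def per_axis_rmse(model_xyz, survey_xyz):
    """Both inputs are N x 3 arrays of (X, Y, Z) coordinates in metres."""
    diff = np.asarray(model_xyz) - np.asarray(survey_xyz)
    return np.sqrt(np.mean(diff ** 2, axis=0))   # -> array([RMSE_X, RMSE_Y, RMSE_Z])

# toy usage with hypothetical check point coordinates
model  = [[1000.10, 2000.05, 50.02], [1500.35, 2500.20, 60.15]]
survey = [[1000.00, 2000.00, 50.00], [1500.00, 2500.00, 60.00]]
print(per_axis_rmse(model, survey))
```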

Studies on the Utilization of Exothermic Heat Composting during Winter Season (동계(冬季) 퇴비부숙열(堆肥腐熟熱) 이용(利用)에 관(關)한 연구(硏究))

  • Kim, Sung-Pil;Park, Young-Dae;Joo, Young-Hee;Uhm, Dae-Ik
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.17 no.3
    • /
    • pp.283-288
    • /
    • 1984
  • This study was conducted to evaluate the characteristics of the exothermic heat and compost generated from the decomposition of organic wastes. Composts were piled up from various raw materials: rice straw, rice husk, and human and animal wastes. The duration of exothermic heat generation during the composting process was longer in mixed piles with a rice straw/rice husk ratio of 1:1 than in piles of rice straw alone. The temperature of compost piles amended with phosphate, as fused superphosphate fertilizer, increased rapidly at the early stage of composting and gradually decreased within 30 days compared to the check. The pH of the compost was 5.5 at initial piling; it peaked at 8.8 within 10 days as the compost temperature rose rapidly, and stayed around 8.3 after one month. Compost of rice straw and chicken droppings maintained temperatures of 30 to 65°C for 39 days; rice straw, rice husk, and chicken droppings for 69 days; rice straw, rice husk, and hog manure for 56 days; rice straw, rice husk, and cow manure for 66 days; and rice straw, rice husk, and human wastes for 21 days.


Effects of Change in Body Mass Index on Change in Serum Total Cholesterol Levels among Industrial Workers (산업장 근로자들의 BMI 변화가 혈청총콜레스테롤의 변화에 미치는 영향)

  • Yoon, Seok-Han;Lee, Myung-Jun;Cho, Young-Chae
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.17 no.6
    • /
    • pp.278-290
    • /
    • 2016
  • The purpose of this study was to investigate the association between changes in BMI and total cholesterol levels (TC) and the incidence of hypercholesterolemia. The subjects were 28,249 industrial workers (25,548 male and 2,701 female) aged 30-69 years who had received regular medical check-ups at least once per year from 2002 to 2012 (over 11 years). Data from this period were categorized into a first term (2002-2005), a middle term (2006-2009), and a last term (2010-2012). The average TC was then stratified by BMI, obtained from each individual's initial examination results. In addition, average changes in TC were analyzed by stratifying by changes in BMI over the 10 years starting in 2002. The annual occurrence rates of hypercholesterolemia stratified by BMI were further assessed in subjects with TC in the normal range. In all three terms, the average TC was significantly elevated in the obese group (BMI ≥ 25.0 kg/m²) compared with the low weight group (BMI < 18.5 kg/m²) and the normal weight group (BMI 18.5-25.0 kg/m²). Similarly, the groups with higher BMI showed elevated rates of hypercholesterolemia compared with the groups with lower BMI. In addition, an increase in BMI over the 10-year period significantly influenced TC. Consequently, it is suggested that evaluation of and intervention for obesity control may be needed in order to manage the risk of high serum lipid levels.
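A hedged sketch of the kind of stratified analysis described above (not the study's statistical code): workers are grouped by baseline BMI category, and the mean TC and hypercholesterolemia rate are computed per group. The column names and the 240 mg/dL hypercholesterolemia cutoff are assumptions introduced for illustration; the BMI cut-points follow the categories quoted in the abstract.

```python
# Hedged sketch of a BMI-stratified cholesterol summary (column names and the
# 240 mg/dL cutoff are hypothetical assumptions, not values from the paper).
import pandas as pd

def bmi_category(bmi):
    if bmi < 18.5:
        return "low weight"
    elif bmi < 25.0:
        return "normal"
    return "obese"

def stratify_tc(df, tc_threshold=240):
    """df needs columns 'bmi_2002' (baseline BMI) and 'tc' (total cholesterol, mg/dL)."""
    out = df.assign(bmi_group=df["bmi_2002"].map(bmi_category))
    return out.groupby("bmi_group").agg(
        mean_tc=("tc", "mean"),
        hypercholesterolemia_rate=("tc", lambda s: (s >= tc_threshold).mean()),
    )

# toy usage with hypothetical records
df = pd.DataFrame({"bmi_2002": [17.9, 22.4, 27.1, 31.0], "tc": [175, 198, 251, 263]})
print(stratify_tc(df))
```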

Normalized Digital Surface Model Extraction and Slope Parameter Determination through Region Growing of UAV Data (무인항공기 데이터의 영역 확장법 적용을 통한 정규수치표면모델 추출 및 경사도 파라미터 설정)

  • Yeom, Junho;Lee, Wonhee;Kim, Taeheon;Han, Youkyung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.37 no.6
    • /
    • pp.499-506
    • /
    • 2019
  • An NDSM (Normalized Digital Surface Model) is key information for the detailed analysis of remote sensing data. Although an NDSM can simply be obtained by subtracting a DTM (Digital Terrain Model) from a DSM (Digital Surface Model), in the case of UAV (Unmanned Aerial Vehicle) data it is difficult to obtain an accurate DTM because of the high-resolution characteristics of UAV data, which contain a large number of complex objects on the ground such as vegetation and urban structures. In this study, the RGB-based UAV vegetation index ExG (Excess Green) was used to extract initial seed points with low ExG values for region growing, so that a DTM can be generated cost-effectively from high-resolution UAV data. For this process, local window analysis was applied to resolve the problem of erroneous seed point extraction from locally low ExG points. Using the DSM values of the seed points, region growing was applied to merge neighboring terrain pixels. Slope criteria were adopted for the region growing process, and seed points were accepted as terrain points when the size of the grown segment was larger than 0.25 ㎡. Various slope criteria were tested to derive the optimal value for UAV data-based NDSM generation. Finally, the extracted terrain points were evaluated, and interpolation was performed using the terrain points to generate an NDSM. The proposed method was applied to an agricultural area in order to extract the above-ground heights of crops and check the feasibility of agricultural monitoring.
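To make the pipeline concrete, the sketch below is a minimal editorial illustration (not the authors' implementation) of the ExG index, local-window seed selection, and the final NDSM = DSM - DTM subtraction after interpolating terrain points. The slope-based region growing step is omitted for brevity, and the window size and test arrays are placeholder assumptions.

```python
# Hedged sketch of an ExG-seeded NDSM workflow (not the authors' implementation);
# the slope-based region growing step is omitted, thresholds are placeholders.
import numpy as np
from scipy.ndimage import minimum_filter
from scipy.interpolate import griddata

def excess_green(rgb):
    """ExG = 2G - R - B on a normalized H x W x 3 array."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 2.0 * g - r - b

def terrain_seeds(exg, window=25):
    """Pick, per local window, the pixel with the lowest ExG as a candidate terrain seed."""
    local_min = minimum_filter(exg, size=window)
    return exg == local_min                         # boolean seed mask

def ndsm_from_terrain(dsm, terrain_mask):
    """Interpolate terrain-point heights to a DTM and subtract it from the DSM."""
    rows, cols = np.nonzero(terrain_mask)
    grid_r, grid_c = np.mgrid[0:dsm.shape[0], 0:dsm.shape[1]]
    dtm = griddata((rows, cols), dsm[rows, cols], (grid_r, grid_c), method="linear")
    return dsm - dtm                                # NDSM = DSM - DTM

# toy usage on random tiles (placeholders for a real UAV orthophoto and DSM)
rgb = np.random.rand(100, 100, 3)
dsm = np.random.rand(100, 100) * 5
seeds = terrain_seeds(excess_green(rgb))
print(np.nanmean(ndsm_from_terrain(dsm, seeds)))
```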