• Title/Summary/Keyword: 변환된 변수 (transformed variables)


Differences in Grip Strength by Living Conditions and Living Area among Men and Women in Middle and Later Life (독거여부와 거주지역에 따른 중년기와 노년기 남성과 여성의 악력 차이)

  • Joo, Susanna;Jun, Hey Jung;Park, Hayoung
• 한국노년학 (Journal of the Korean Gerontological Society)
    • /
    • v.38 no.3
    • /
    • pp.551-567
    • /
    • 2018
  • Demographic and socio-structural information is useful for identifying potential welfare recipients in need of disease-prevention and intervention services. The present study therefore explores differences in grip strength among middle-aged and older adults by living condition and living area. Data from the 5th wave of the Korean Longitudinal Study of Aging were used. The dependent variable was grip strength, and the independent variables were living alone (living alone or not) and living area (city or non-city). Covariates were age, education, log-transformed household income, presence of a spouse, body mass index, self-rated health, depressive symptoms, cognitive function, smoking, regular exercise, frequency of meeting friends, and the number of social participation activities. Regression analyses were performed separately for middle-aged men, middle-aged women, older men, and older women. ANOVA and chi-square tests were additionally used to examine significant results in detail. Cross-sectional weights were applied to all analyses. According to the results, living alone and living area had no significant effect on grip strength among middle-aged men, older men, and older women. In middle-aged women, however, both living alone and living area were significantly associated with grip strength: middle-aged women who lived alone in rural areas had the lowest grip strength of all middle-aged women. Additional analysis showed that this group also carried risk factors such as low education level, low income, and high depressive symptoms. This implies that middle-aged women living alone in rural areas may face physical health risks and be in need of disease prevention. The study is meaningful in that it provides reliable information on potential welfare recipients by using representative panel data and applying sampling weights.
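The survey-weighted regression this abstract describes can be sketched in a few lines; the code below is a minimal illustration on synthetic data (the variable names, coefficients, and weights are assumptions, not the KLoSA dataset or its results), fitting grip strength on living-alone and rural indicators with cross-sectional weights.

```python
import numpy as np

# Hypothetical sketch of a survey-weighted regression of grip strength on
# living-alone and rural-area indicators. All data are simulated.
rng = np.random.default_rng(0)
n = 500
living_alone = rng.integers(0, 2, n)     # 1 = lives alone
rural = rng.integers(0, 2, n)            # 1 = non-city area
age = rng.uniform(45, 64, n)             # middle-aged sample
weight = rng.uniform(0.5, 2.0, n)        # cross-sectional survey weights

# Simulated grip strength (kg): lower for living alone and rural residence
grip = 30 - 1.5 * living_alone - 1.0 * rural - 0.1 * (age - 55) \
       + rng.normal(0, 2, n)

X = np.column_stack([np.ones(n), living_alone, rural, age - 55])
W = np.diag(weight)

# Weighted least squares: solve (X' W X) beta = X' W y
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ grip)
print(beta)  # intercept, living-alone, rural, and age effects
```

With enough observations the weighted fit recovers the simulated effects; in the actual study the covariate list is much longer and the weights come from the KLoSA sampling design.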

Development of Estimation Equation for Minimum and Maximum DBH Using National Forest Inventory (국가산림자원조사 자료를 이용한 최저·최고 흉고직경 추정식 개발)

  • Kang, Jin-Taek;Yim, Jong-Su;Lee, Sun-Jeoung;Moon, Ga-Hyun;Ko, Chi-Ung
    • Journal of agriculture & life science
    • /
    • v.53 no.6
    • /
    • pp.23-33
    • /
    • 2019
  • Following an amendment of the relevant law (the national forest management planning and methods, Korea Forest Service), the management information system covering the management records and plans for the entire national forest in South Korea now requires the average, maximum, and minimum DBH values, whereas only average values were required before the amendment. An estimation method is therefore needed to convert the average DBH values recorded before the revision into minimum and maximum values. The purpose of this study is to develop estimation equations that automatically produce the minimum and maximum DBH for 12 main tree species from the data in the national forest management information system. To develop the equations, 6,858 fixed sample plots from the fifth and sixth national forest inventories (2006 to 2015) were used. Two model forms, DBH-tree age and DBH-tree height, were tested using growth variables such as DBH, tree age, and height. The most suitable models for estimating the minimum and maximum DBH were Dmin = a + bD + cH and Dmax = a + bD + cH, with DBH and height as variables. Based on these optimal models, estimation equations were derived for the minimum and maximum DBH of the 12 main tree species.
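The selected model form, Dmin = a + bD + cH, is an ordinary linear regression; a minimal sketch of fitting it is shown below. The plot data and coefficients are synthetic stand-ins, not the NFI estimates for any actual species.

```python
import numpy as np

# Fit the paper's model form Dmin = a + b*D + c*H by least squares
# on synthetic plot-level data (illustrative values only).
rng = np.random.default_rng(1)
D = rng.uniform(10, 40, 200)    # mean DBH per plot (cm)
H = rng.uniform(8, 25, 200)     # mean height per plot (m)
Dmin = 2.0 + 0.6 * D - 0.1 * H + rng.normal(0, 0.5, 200)

X = np.column_stack([np.ones_like(D), D, H])
a, b, c = np.linalg.lstsq(X, Dmin, rcond=None)[0]
print(round(a, 2), round(b, 2), round(c, 2))
```

The same fit, repeated per species and once with Dmax as the response, yields the full set of conversion equations the study reports.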

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.77-97
    • /
    • 2010
  • Market timing is an investment strategy used to obtain excess return from the financial market. In general, market timing means determining when to buy and sell in order to earn excess return from trading. In many market timing systems, trading rules serve as the engine that generates trade signals. Some researchers have proposed rough set analysis as a suitable tool for market timing because, through its control function, it generates no trade signal when the market pattern is uncertain. Data for rough set analysis must be discretized, because the rough set approach accepts only categorical data. Discretization searches for proper "cuts" in numeric data that determine intervals; all values lying within an interval are transformed into the same value. In general, four discretization methods are used in rough set analysis: equal frequency scaling, expert knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes the number of intervals, examines the histogram of each variable, and then chooses cuts so that approximately the same number of samples fall into each interval. Expert knowledge-based discretization determines cuts from the knowledge of domain experts, obtained through literature review or interviews. Minimum entropy scaling recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization first finds candidate categorical values by naïve scaling of the data, then selects the optimal discretization thresholds through Boolean reasoning.
Although rough set analysis is promising for market timing, there has been little research on how the various discretization methods affect trading performance. In this study, we compare stock market timing models that use rough set analysis with the various discretization methods. The research data are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market. It is a market-value-weighted index of 200 stocks selected by criteria on liquidity and status in their industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and finance. The total sample comprises 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that naïve and Boolean reasoning is the most profitable method on the training sample, but expert knowledge-based discretization is the most profitable on the validation sample; expert knowledge-based discretization also produced robust performance on both samples. We also compared rough set analysis with a decision tree, using C4.5 for this purpose. The results show that rough set analysis with expert knowledge-based discretization produced more profitable rules than C4.5.
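Of the four discretization methods above, equal frequency scaling is the simplest to state precisely: choose cuts so that approximately the same number of samples fall into each interval. A minimal sketch (the indicator series is simulated, not the KOSPI 200 data):

```python
import numpy as np

def equal_frequency_cuts(values, n_bins):
    """Return n_bins - 1 cut points giving approximately equal counts."""
    qs = np.linspace(0, 1, n_bins + 1)[1:-1]   # interior quantiles
    return np.quantile(values, qs)

rng = np.random.default_rng(2)
indicator = rng.normal(0, 1, 660)              # e.g. 660 trading days of one indicator
cuts = equal_frequency_cuts(indicator, 4)
labels = np.digitize(indicator, cuts)          # categorical values 0..3 for rough set input
counts = np.bincount(labels)
print(counts)
```

Each numeric attribute is converted this way before the rough set rules are induced; the other three methods differ only in how the cut points are chosen.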

Stochastic Programming Model for River Water Quality Management (추계학적 계획모형을 이용한 하천수질관리)

  • Cho, Jae Heon
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.14 no.1
    • /
    • pp.231-243
    • /
    • 1994
  • A stochastic programming model for river water quality management was developed. River water quality, river flow, and the quality and flow rate of wastewater treatment plant inflow were treated as random variables. Withdrawal for water supply and submerged-weir reaeration were included in the model. A probabilistic model was formulated to compute the expectation and variance of water quality using the Streeter-Phelps equation. Chance constraints of the optimization problem were converted to deterministic equivalents by the chance-constrained method. The objective function was the total annual treatment cost of all wastewater treatment plants in the region. Construction and O&M cost functions were derived as nonlinear equations in treatment efficiency and plant capacity, and the optimization problem was solved by nonlinear programming. The model was applied to the lower Han River. The results show that the reliability of meeting the 1996 DO standards is about 50% when the four wastewater treatment plants in Seoul provide secondary treatment and the BOD load from tributary inflows remains at present levels. When the BOD load from the Tanchon, Jungrangchon, and Anyangchon tributaries is reduced by 50%, the reliability of meeting the 1996 DO standards rises above 60%. These results indicate that, to conserve water quality in the lower Han River, tributary water quality must be improved and at least secondary treatment is required at the wastewater treatment plants.
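The deterministic core of the model is the classical Streeter-Phelps dissolved-oxygen sag, D(t) = (kd·L0/(ka−kd))·(e^(−kd·t) − e^(−ka·t)) + D0·e^(−ka·t); the stochastic treatment propagates randomness in the inputs through this relation. A sketch of the deterministic curve (the coefficients below are illustrative textbook values, not the lower Han River calibration):

```python
import numpy as np

def do_deficit(t, L0, D0, kd, ka):
    """Streeter-Phelps DO deficit at travel time t."""
    return (kd * L0 / (ka - kd)) * (np.exp(-kd * t) - np.exp(-ka * t)) \
           + D0 * np.exp(-ka * t)

L0, D0 = 10.0, 1.0           # initial BOD and DO deficit (mg/L), assumed
kd, ka = 0.3, 0.6            # deoxygenation and reaeration rates (1/day), assumed
t = np.linspace(0, 10, 101)  # travel time (days)
deficit = do_deficit(t, L0, D0, kd, ka)

t_crit = t[np.argmax(deficit)]   # critical point of the DO sag curve
print(round(float(t_crit), 1), round(float(deficit.max()), 2))
```

In the chance-constrained formulation, the DO standard must hold not for this single curve but with a prescribed probability over the random inputs, which is what produces the 50% and 60% reliability figures quoted in the abstract.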


A Devolatilization Model of Woody Biomass Particle in a Fluidized Bed Reactor (유동층 반응기에서의 목질계 바이오매스 입자의 탈휘발 예측 모델)

  • Kim, Kwang-Su;Leckner, Bo;Lee, Jeong-Woo;Lee, Uen-Do;Choi, Young-Tai
    • Korean Chemical Engineering Research
    • /
    • v.50 no.5
    • /
    • pp.850-859
    • /
    • 2012
  • Devolatilization is an important mechanism in the gasification and pyrolysis of woody biomass and must accordingly be considered in designing a gasifier. A number of empirical correlations based on experimental data have been proposed to describe the devolatilization of wood particles, but because of the complex nature of wood devolatilization, these correlations apply only to limited reaction conditions. In this study, a simple model was developed to predict the devolatilization of a wood particle in a fluidized bed reactor. The model considers drying, shrinkage, and intra-particle heat generation for a spherical biomass particle. The influence of parameters such as particle size, initial moisture content, heat transfer coefficient, kinetic model, and temperature was investigated. The devolatilization time increases linearly with initial moisture content and particle size, whereas it decreases with reaction temperature. The results change little once the external heat transfer coefficient exceeds 300 $W/m^2K$, with smaller particles being more sensitive to it. Predictions from the model agree with experimental data from the literature to within 10%.

Study on Queue Length Estimation using GPS Trajectory Data (GPS 데이터를 이용한 대기행렬길이 산출에 관한 연구)

  • Lee, Yong-Ju;Hwang, Jae-Seong;Lee, Choul-Ki
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.15 no.3
    • /
    • pp.45-51
    • /
    • 2016
  • The existing real-time signal control system has well-known limitations: poor performance under oversaturated conditions and reliance on point detection with loop detectors. An advanced, next-generation signal control system is therefore required. This study aimed to compute queue length at intersections for the next-generation signal control system, using the queue itself as the basic parameter for signal control instead of real-time through-traffic volume. The oversaturated condition, which exposes the limits of the existing system, was taken as the target range. Real-time positions of individual vehicles collected as GPS data were converted into coordinates, and a shock wave model was applied using a linear equation extracted by least-squares regression. The computed queue length was compared with the link length; if the queue length exceeds the link, the queued vehicles are judged to affect the downstream intersection, and the queue of the downstream intersection is included in the queue length. A correlation analysis against link travel times, performed to assess the reliability of the estimated queue lengths, yielded coefficients above 0.9 for both links, indicating that they are highly correlated. This research is significant in that it uses real-time data to compute queue length and can contribute to signal control systems.
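The least-squares step described above can be sketched simply: GPS points of stopping vehicles (time since red onset, distance from the stop line) are fitted with a line whose slope approximates the queuing shock wave speed, and the line is extrapolated to the end of red to get the queue length. The data, red duration, and link length below are illustrative assumptions, not the study's field values.

```python
import numpy as np

# Simulated (time, position) points of the queue tail on one approach
rng = np.random.default_rng(3)
t = np.linspace(0, 30, 16)                    # seconds since red onset
pos = 4.0 * t + rng.normal(0, 2.0, t.size)    # queue tail position (m), ~4 m/s wave

# Least-squares line: position = slope * t + intercept
A = np.column_stack([t, np.ones_like(t)])
slope, intercept = np.linalg.lstsq(A, pos, rcond=None)[0]

red_time = 60.0                               # red duration (s), assumed
queue_length = slope * red_time + intercept   # extrapolated queue length (m)
link_length = 200.0                           # link length (m), assumed
spillback = queue_length > link_length        # queue exceeds the link
print(round(float(slope), 2), round(float(queue_length), 1), spillback)
```

When `spillback` is true, the queue has reached the upstream end of the link, which is the condition under which the study folds the neighboring intersection's queue into the estimate.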

Two-dimensional Inundation Analysis Using Stochastic Rainfall Variation and Geographic Information System (추계학적 강우변동생성 기법과 GIS를 연계한 2차원 침수해석)

  • Lee, Jin-Young;Cho, Wan-Hee;Han, Kun-Yeun;Ahn, Ki-Hong
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.13 no.1
    • /
    • pp.101-113
    • /
    • 2010
  • Recent rainfall patterns show fewer rainy days but higher rainfall intensity, and the frequency of flooding has also increased. To account for this, engineers use deterministic methods such as the PMP (Probable Maximum Precipitation). But if the design storm does not occur, raising the design criteria is wasteful, and oversized structures cause conflicts with residents and environmental problems. It is therefore necessary to consider the probability of rainfall parameters in each sub-basin when designing hydraulic structures. In this study, stochastic rainfall patterns were generated using the log-ratio method, the Johnson system, and multivariate Monte Carlo simulation. Using these stochastic rainfall patterns, hydrological analysis, hydraulic analysis, and two-dimensional inundation analysis were performed on a GIS basis to assess their applicability. The simulated results are similar to the actual damage area, so the methodology of this study can be used to produce flood risk maps or resident evacuation route maps for the region.
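A simplified sketch of the stochastic rainfall-pattern step: temporal rainfall fractions are expressed as log-ratios, sampled from a multivariate normal, and back-transformed so that every generated pattern sums to the storm total. This is only a schematic of the log-ratio/Monte Carlo idea; the means, covariance, interval count, and storm depth below are invented, and the paper's Johnson-system marginal fitting is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
mean = np.array([0.5, 1.0, 0.2])        # log-ratios of 3 intervals vs. a 4th (assumed)
cov = np.array([[0.20, 0.05, 0.00],
                [0.05, 0.30, 0.05],
                [0.00, 0.05, 0.20]])    # inter-interval correlation (assumed)

# Multivariate Monte Carlo sampling of the log-ratios
z = rng.multivariate_normal(mean, cov, size=1000)

# Inverse additive log-ratio transform: positive fractions summing to 1
expz = np.exp(z)
fractions = np.column_stack([expz, np.ones(len(z))])
fractions /= fractions.sum(axis=1, keepdims=True)

storm_total = 150.0                      # design storm depth (mm), assumed
patterns = storm_total * fractions       # 1000 stochastic hyetographs
print(patterns.shape, round(float(patterns.sum(axis=1).mean()), 1))
```

Each generated hyetograph then drives the hydrological, hydraulic, and two-dimensional inundation analyses on the GIS base.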

High Energy Photon Dosimetry by ESR Spectroscopy in Radiotherapy (ESR Spectroscopy에 의한 치료용 고에너지 광자선의 선량측정)

  • Chu, Sung-Sil
    • Progress in Medical Physics
    • /
    • v.1 no.1
    • /
    • pp.31-42
    • /
    • 1990
  • The finding of long-lived free radicals produced by ionizing radiation in organic crystals, and the quantification of this effect by electron spin resonance (ESR) spectroscopy, has proven to have excellent dosimetric applicability. The tissue-equivalent alanine dosimeter also appears appropriate for radiation therapy-level dosimetry. Dose measurements were performed in a Rando phantom using high-energy photons produced by a medical linear accelerator and a cobalt-60 teletherapy unit. The absorbed dose range of the ESR/alanine dosimetry system could be extended down to 0.1 Gy. The response of the alanine dosimeters was determined for photons at therapeutic dose levels from less than 0.1 Gy to 100 Gy, and depth dose measurements were carried out for photon energies of 1.25 MeV and 6 and 10 MV with alanine dosimeters in the Rando phantom. Depth dose measurements comparing ESR/alanine in the Rando phantom with an ion chamber in a water phantom were performed to examine the agreement of the two methods under field conditions.


An analysis of Factorial structure of Kinematic variables in Bowling (볼링의 운동학적 분석과 주요인 구조분석)

  • Lee, Kyung-Il
    • Korean Journal of Applied Biomechanics
    • /
    • v.12 no.2
    • /
    • pp.381-392
    • /
    • 2002
  • This study attempted to identify the factorial structure of kinematic variables in bowling. Subjects comprised three groups: high-level bowlers (national-team bowlers averaging 200 points and one professional bowler), middle-level bowlers (three amateurs averaging 170 points), and low-level bowlers (three amateurs averaging 150 points). Motion analysis of the throwing motion in each group was carried out by three-dimensional cinematography using the DLT method, with two high-speed video cameras operating at 180 and 60 frames per second. T-tests and factor structure analysis were used to define the relations among variables. It was concluded that: 1. The differences in x1, x2, x4, x8, x9, x11, x12, and x13 were significant between groups. 2. The differences in number of spins and back-hand angle were statistically significant between groups (p<.001, p<.05). 3. Correlations above r=.5 were found among the kinematic variables x1, x2, x3, x9, x10, and x11; in the rotated loading matrix, Factor 1 comprised x1, x2, x9, and x10, and Factor 2 related to x3 and x11. 4. The factor scores were obtained as follows: Factor 1 = (0.248)X1 + (0.265)X2 + (-0.074)X3 + (0.259)X9 + (0.259)X10 + (-0.025)X11; Factor 2 = (-0.016)X1 + (-0.055)X2 + (0.84)X3 + (-0.013)X9 + (-0.007)X10 + (0.553)X11.
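The factor-score equations listed in item 4 are plain weighted sums, so they can be evaluated directly; the coefficient vectors below are taken from the abstract, while the x-values are illustrative standardized measurements invented for the example.

```python
import numpy as np

# Factor-score coefficients as reported in the abstract
w1 = np.array([0.248, 0.265, -0.074, 0.259, 0.259, -0.025])   # Factor 1
w2 = np.array([-0.016, -0.055, 0.84, -0.013, -0.007, 0.553])  # Factor 2

# Hypothetical standardized values for x1, x2, x3, x9, x10, x11
x = np.array([1.2, 0.8, -0.5, 1.0, 0.9, -0.3])

f1 = float(w1 @ x)   # Factor 1 score
f2 = float(w2 @ x)   # Factor 2 score
print(round(f1, 3), round(f2, 3))
```

Factor 1 loads mainly on x1, x2, x9, and x10 and Factor 2 on x3 and x11, so a bowler's two scores summarize those two variable clusters.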

Synthesis of Porous $TiO_2$ Thin Films Using PVC-g-PSSA Graft Copolymer and Their Use in Dye-sensitized Solar Cells (PVC-g-PSSA 가지형 공중합체를 이용한 다공성 $TiO_2$ 박막의 합성 및 염료감응 태양전지 응용)

  • Byun, Su-Jin;Seo, Jin-Ah;Chi, Won-Seok;Shul, Yong-Gun;Kim, Jong-Hak
    • Membrane Journal
    • /
    • v.21 no.2
    • /
    • pp.193-200
    • /
    • 2011
  • An amphiphilic graft copolymer comprising a poly(vinyl chloride) (PVC) backbone and poly(styrene sulfonic acid) (PSSA) side chains (PVC-g-PSSA) was synthesized via atom transfer radical polymerization (ATRP). Mesoporous titanium dioxide ($TiO_2$) films with a crystalline anatase phase were synthesized via a sol-gel process templated by the PVC-g-PSSA graft copolymer. Titanium isopropoxide (TTIP), a $TiO_2$ precursor, was selectively incorporated into the hydrophilic PSSA domains of the graft copolymer and grew to form mesoporous $TiO_2$ films, as confirmed by scanning electron microscopy (SEM) and X-ray diffraction (XRD) analysis. The performance of the dye-sensitized solar cells (DSSC) was systematically investigated by varying the spin-coating time and the amount of P25 nanoparticles. The energy conversion efficiency reached 2.7% at 100 mW/$cm^2$ with a quasi-solid-state polymer electrolyte.