• Title/Summary/Keyword: estimating

Search Results: 9,691

Diagnosis of Fatty Liver Complicated by Simple Obesity in Children: Serum ALT and Its Correlation with Abdominal CT and Liver Biopsy (소아의 단순성 비만증에 의한 지방간의 진단: ALT치와 복부 전산화단층촬영 및 간생검 소견간의 상관관계)

  • Lee, Seong-Hee;Kim, Hwa-Jung;Oh, Jae-Cheol;Han, Hae-Jeong;Kim, Hee-Sup;Tchah, Hann;Park, Ho-Jin;Shin, Mi-Keong;Lee, Min-Jin;Lee, Sang-Chun
    • Pediatric Gastroenterology, Hepatology & Nutrition
    • /
    • v.2 no.2
    • /
    • pp.153-163
    • /
    • 1999
  • Purpose: The purpose of this study is to provide useful information on diagnostic methods for fatty liver caused by simple obesity in children, and to examine the correlation between serum alanine aminotransferase (ALT) as a screening test and abdominal computed tomography (CT) and liver biopsy as confirmatory diagnostic methods for fatty liver. Methods: Among 78 obese children who visited our hospital, CT was carried out in 26 children. Of these, liver biopsy was carried out in 15 children who had a high obesity index or severely elevated ALT. Based on the level of serum ALT, the 26 cases were classified into 3 groups and compared with respect to physical measurements and the degree of fatty infiltration on CT and liver biopsy. Results: 1) Correlation between ALT and physical measurements: Of the 26 obese children, ALT was abnormally elevated (>30 IU/L) in 17 cases (67.4%), but there was no significant correlation between ALT and physical measurements (p>0.05). 2) Correlation between the degree of fatty infiltration on CT and ALT: Of the 26 cases, 13 cases (50%) showed fatty liver on CT. The degree of fatty liver on CT was significantly correlated with elevation of ALT (p<0.05). 3) Correlation between the degree of fatty infiltration on liver biopsy and ALT: Liver biopsy was performed in 15 cases, of which 14 showed fatty liver; one case with severe obesity and normal ALT had normal hepatic histology. The 14 fatty liver cases on liver biopsy were classified into 3 groups by the degree of fatty infiltration and analysed with respect to obesity index and ALT. Histologic hepatic steatosis had no significant correlation with obesity index (p>0.05) but was significantly correlated with ALT (p<0.05). 4) Correlation between CT and liver biopsy findings: Both CT and liver biopsy were performed in 15 cases, of which 6 showed normal findings on CT and 9 showed fatty liver. There was a significant correlation between CT and liver biopsy findings (r=0.6094). Conclusion: The results of our study suggest that abdominal CT and liver biopsy are useful and accurate methods for assessing fatty liver in childhood obesity, but biochemical abnormalities on routine liver function tests do not correlate well with the severity of fatty liver and liver injury.


Estimation of Fresh Weight and Leaf Area Index of Soybean (Glycine max) Using Multi-year Spectral Data (다년도 분광 데이터를 이용한 콩의 생체중, 엽면적 지수 추정)

  • Jang, Si-Hyeong;Ryu, Chan-Seok;Kang, Ye-Seong;Park, Jun-Woo;Kim, Tae-Yang;Kang, Kyung-Suk;Park, Min-Jun;Baek, Hyun-Chan;Park, Yu-hyeon;Kang, Dong-woo;Zou, Kunyan;Kim, Min-Cheol;Kwon, Yeon-Ju;Han, Seung-ah;Jun, Tae-Hwan
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.23 no.4
    • /
    • pp.329-339
    • /
    • 2021
  • Soybeans (Glycine max), one of the major upland crops, require precise management of environmental conditions such as temperature, water, and soil during cultivation, since they are sensitive to environmental changes. Spectral technologies that remotely measure the physiological state of crops have great potential for improving the quality and productivity of soybean by estimating yields, physiological stresses, and diseases. In this study, we developed and validated a soybean growth prediction model using multispectral imagery. We conducted a linear regression analysis between vegetation indices and soybean growth data (fresh weight and LAI) obtained at fields in Miryang, and the linear regression model was validated at fields in Goesan. The model based on the green ratio vegetation index (GRVI) showed the best performance in predicting fresh weight at the calibration stage (R2=0.74, RMSE=246 g/m2, RE=34.2%). In the validation stage, the RMSE and RE of the model were 392 g/m2 and 32%, respectively. The errors of the model differed by cropping system: the RMSE and RE in single-crop fields were 315 g/m2 and 26%, respectively, whereas the model showed larger errors in double-crop fields (RMSE=381 g/m2, RE=31%). When separate fresh-weight models were developed for the two years with similar accumulated temperature (AT) (2018 and 2020) and for the single year with a different AT (2019), the single-year model predicted better than the two-year model. Compared with the three-year model, splitting the models by AT improved the RMSE for single-crop fields by about 29.1%, whereas that for double-crop fields decreased by about 19.6%. When environmental factors are used along with spectral data, reliable soybean growth prediction may be achieved under various environmental conditions.
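
A minimal sketch of the regression step described above, assuming per-plot mean green and red reflectances and measured fresh weights are already extracted from the imagery; the numbers here are hypothetical, not the study's data.

```python
# Fit a simple linear model between GRVI and soybean fresh weight (illustrative data only).
import numpy as np
from sklearn.linear_model import LinearRegression

green = np.array([0.08, 0.10, 0.12, 0.14, 0.16])            # hypothetical green-band reflectances
red   = np.array([0.060, 0.050, 0.045, 0.040, 0.035])       # hypothetical red-band reflectances
fresh_weight = np.array([350.0, 520.0, 700.0, 880.0, 1050.0])  # measured fresh weight (g/m^2)

grvi = (green - red) / (green + red)                         # GRVI = (G - R) / (G + R)

X = grvi.reshape(-1, 1)
model = LinearRegression().fit(X, fresh_weight)
pred = model.predict(X)

rmse = float(np.sqrt(np.mean((fresh_weight - pred) ** 2)))
re = 100 * rmse / fresh_weight.mean()                        # relative error as RMSE / mean observed
print(f"R^2={model.score(X, fresh_weight):.2f}, RMSE={rmse:.0f} g/m2, RE={re:.1f}%")
```

Validation on an independent site (as done with the Goesan fields) would simply apply the fitted model to new GRVI values and recompute RMSE and RE.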

Application and Analysis of Ocean Remote-Sensing Reflectance Quality Assurance Algorithm for GOCI-II (천리안해양위성 2호(GOCI-II) 원격반사도 품질 검증 시스템 적용 및 결과)

  • Sujung Bae;Eunkyung Lee;Jianwei Wei;Kyeong-sang Lee;Minsang Kim;Jong-kuk Choi;Jae Hyun Ahn
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_2
    • /
    • pp.1565-1576
    • /
    • 2023
  • An atmospheric correction algorithm based on a radiative transfer model is required to obtain remote-sensing reflectance (Rrs) from the top-of-atmosphere radiance observed by the Geostationary Ocean Color Imager-II (GOCI-II). The Rrs derived from the atmospheric correction is used to estimate various marine environmental parameters such as chlorophyll-a concentration, total suspended material concentration, and absorption of dissolved organic matter. Atmospheric correction is therefore a fundamental algorithm, as it significantly affects the reliability of all other ocean color products. In clear waters, for example, the atmospheric path radiance can be more than ten times higher than the water-leaving radiance in the blue wavelengths, which makes atmospheric correction a highly error-sensitive process: a 1% error in estimating the atmospheric radiance can cause an error of more than 10% in Rrs. Therefore, quality assessment of Rrs after atmospheric correction is essential for reliable ocean environment analysis using ocean color satellite data. In this study, a Quality Assurance (QA) algorithm based on in-situ Rrs data archived in the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Bio-optical Archive and Storage System (SeaBASS) was applied and modified to account for the spectral characteristics of GOCI-II. This method is officially employed in the National Oceanic and Atmospheric Administration (NOAA) ocean color satellite data processing system. It provides quality scores for Rrs ranging from 0 to 1 and classifies the water into 23 types. When the QA algorithm was applied to early GOCI-II data processed with less mature calibration, the most frequent score was a relatively low 0.625. When it was applied to the improved GOCI-II atmospheric correction results with updated calibration, the most frequent score rose to 0.875. The water-type analysis using the QA algorithm indicated that parts of the East Sea, the South Sea, and the Northwest Pacific Ocean are primarily characterized as relatively clear case-I waters, while the coastal areas of the Yellow Sea and the East China Sea are mainly classified as highly turbid case-II waters. We expect the QA algorithm to support GOCI-II users not only in statistically identifying Rrs retrievals with significant errors but also in performing more reliable calibration with quality-assured data. The algorithm will be included in the level-2 flag data provided with the GOCI-II atmospheric correction.
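
A simplified sketch of the scoring idea behind spectral QA schemes of this kind (not the operational NOAA or GOCI-II code): normalize an Rrs spectrum, match it to the most similar reference water-type spectrum, and score it by how many bands fall inside that type's tolerance envelope. The bands, reference spectra, and tolerances below are illustrative assumptions.

```python
import numpy as np

wavelengths = np.array([412, 443, 490, 555, 660, 680])   # GOCI-II-like bands (nm), for reference only

# Hypothetical normalized reference spectra for two water types, with a per-type tolerance.
ref_types = {
    "clear":  (np.array([0.55, 0.50, 0.45, 0.30, 0.10, 0.08]), 0.10),
    "turbid": (np.array([0.25, 0.30, 0.40, 0.55, 0.45, 0.40]), 0.15),
}

def qa_score(rrs):
    nrrs = rrs / np.linalg.norm(rrs)                      # normalize the spectral shape
    sims = {name: float(nrrs @ ref) / np.linalg.norm(ref) for name, (ref, _) in ref_types.items()}
    best = max(sims, key=sims.get)                        # most similar reference water type
    ref, tol = ref_types[best]
    score = float((np.abs(nrrs - ref) <= tol).mean())     # fraction of bands inside the tolerance envelope
    return best, score

# Example: a blue-dominated spectrum should match the "clear" type with a score in [0, 1].
print(qa_score(np.array([0.012, 0.011, 0.010, 0.006, 0.002, 0.0015])))
```

The real algorithm uses 23 reference water types derived from the SeaBASS archive; the structure of the computation (shape matching, then per-band bounds checking) is what this sketch illustrates.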

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility of stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-varying characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering observed in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH estimation process and compares it with the MLE-based process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which consists of 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility; the polynomial kernel function shows exceptionally low forecasting accuracy. We propose an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's volatility is forecasted to increase, buy volatility today; if it is forecasted to decrease, sell volatility today; if the forecasted direction does not change, hold the existing position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because historical volatility itself cannot be traded, but the simulation results remain meaningful since the Korea Exchange introduced a volatility futures contract in November 2014. The trading systems with SVR-based GARCH models show higher returns than the MLE-based GARCH systems in the testing period. The percentages of profitable trades of the MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of the SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return and SVR-based symmetric S-GARCH shows +526.4%; MLE-based asymmetric E-GARCH shows -72% and SVR-based asymmetric E-GARCH shows +245.6%; MLE-based asymmetric GJR-GARCH shows -98.7% and SVR-based asymmetric GJR-GARCH shows +126.3%.
The linear kernel function shows higher trading returns than the radial kernel function. The best performance of the SVR-based IVTS is +526.4%, and that of the MLE-based IVTS is +150.2%; the SVR-based GARCH IVTS also shows higher trading frequency. This study has some limitations. Our models are based solely on SVR, and other artificial intelligence models should be examined for better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage. The IVTS trading performance is unrealistic since historical volatility values are used as the trading object. Accurate forecasting of stock market volatility is essential both in real trading and in asset pricing models, and further studies on other machine learning-based GARCH models can give better information to stock market investors.
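
A minimal sketch (not the paper's implementation) of the core idea of SVR-based GARCH(1,1) estimation: regress a volatility proxy on its own lag and the lagged squared return, then apply an IVTS-style direction rule to the forecasts. Returns, the variance proxy, and the SVR hyperparameters below are placeholders.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
r = rng.normal(0, 0.01, 1500)                             # placeholder daily returns (use KOSPI 200 in practice)

r2 = r ** 2                                               # squared returns as a volatility proxy
sigma2 = np.convolve(r2, np.ones(20) / 20, mode="same")   # rough rolling-variance proxy

X = np.column_stack([r2[:-1], sigma2[:-1]])               # features: lagged r^2 and lagged sigma^2 (GARCH(1,1) form)
y = sigma2[1:]                                            # target: next-period variance proxy

split = 1187                                              # train/test split mirroring the paper's design
svr = SVR(kernel="linear", C=1.0, epsilon=1e-6).fit(X[:split], y[:split])
forecast = svr.predict(X[split:])

# IVTS-style rule: long volatility if the forecast rises versus today, short if it falls.
signal = np.sign(forecast - sigma2[split:-1])
print("fraction of long-volatility days:", (signal > 0).mean())
```

The MLE benchmark would instead maximize the GARCH likelihood over the same training window; the comparison in the paper is between that and the SVR fit above with linear, polynomial, and radial kernels.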

Studies on the Mechanical Properties of Weathered Granitic Soil -On the Elements of Shear Strength and Hardness- (화강암질풍화토(花崗岩質風化土)의 역학적(力學的) 성질(性質)에 관(關)한 연구(硏究) -전단강도(剪斷强度)의 영향요소(影響要素)와 견밀도(堅密度)에 대(對)하여-)

  • Cho, Hi Doo
    • Journal of Korean Society of Forest Science
    • /
    • v.66 no.1
    • /
    • pp.16-36
    • /
    • 1984
  • It is very important in forestry to study the shear strength of weathered granitic soil, because this soil covers 66% of the country and because the majority of landslides have occurred in it. In general, the causes of landslides can be classified into external and internal factors. The external factors include vegetation, geography, and climate, while the internal factors are the engineering properties derived from the parent rock and weathering. Soil engineering properties are controlled by the skeleton structure, texture, consistency, cohesion, permeability, water content, mineral components, porosity, density, etc. The combined effect of these internal factors on sliding is summarized as the resistance of the soil mass against sliding, that is, its shear strength. Shear strength basically depends on effective stress, soil type, density (void ratio), water content, and the structure and arrangement of soil particles; these elements do not act alone but together. The purpose of this thesis is to clarify the characteristics of shear strength and its related elements, water content ($w_o$), void ratio ($e_o$), dry density ($\gamma_d$) and specific gravity ($G_s$), and the interrelationships among them, in order to determine the dominant element influencing shear strength in the natural (undisturbed) state of weathered granitic soil, and additionally to characterize the soil hardness of weathered granitic soil and the root distribution of Pinus rigida Mill. and Pinus rigida × taeda planted on erosion-controlled lands. For the characteristics of shear strength of weathered granitic soil and its related elements, three sites were selected from the Kwangju district. The outlines of the sampling sites were: average specific gravity, 2.63~2.79; average natural water content, 24.3~28.3%; average dry density, 1.31~1.43 g/cm³; average void ratio, 0.93~1.001; cohesion, 0.2~0.75 kg/cm²; angle of internal friction, 29°~45°; soil texture, SL. The shear strength of the soil at the different sites was measured by a direct shear apparatus (type B; shear box size, 62.5 × 20 mm; normal stress σ, 1.434 kg/cm²; speed, 1/100 mm/min). For the related element analyses, water content was varied through a series of drainage experiments with four drainage periods; specific gravity was measured by KS F 308; particle size distribution was analyzed by KS F 2302; and soil samples were dried at 110 ± 5°C for more than 12 hours in a drying oven. Soil hardness reflects physical properties such as particle size distribution, porosity, bulk density, and water content, and testing hardness with a soil hardness tester is the simplest and most broadly indicative method for grasping the mechanical properties of soil. It is important to understand the mechanical as well as the chemical properties of soil in order to understand the fundamental phenomena governing the growth and distribution of tree roots. The writer intended to study the correlation between soil hardness and the distribution of tree roots of Pinus rigida Mill. planted in 1966 and Pinus rigida × taeda planted in 199 to 1960 in denuded forest lands with, and after, several erosion control works. The soil texture of the sites investigated was SL, originating from weathered granitic soil.
The former is situated at Pyŏngchang-ri, Kyŏm-myŏn, Kogsŏng-gun, Chŏllanam-do (3.63 ha; slope, 17°~41°; soil depth, thin or medium; humidity, dry or optimum; height, 5.66/3.73~7.63 m; D.B.H., 9.7/8.00~12.00 cm) and the latter at Changun-dong, Kwangju-shi (3.50 ha; slope, 12°~23°; soil depth, thin; humidity, dry; height, 10.47/7.3~12.79 m; D.B.H., 16.94/14.3~19.4 cm). The sampling areas were 24 quadrats (10 m × 10 m) in the former area and 12 in the latter, extending from summit to foot. Sample trees for the hardness test and the investigation of root distribution were selected purposively, and soil profiles were made 50 cm downslope from each tree in each quadrat. Soil layers of the profile were separated at 10 cm intervals from the surface (layers I, II, ...). Soil hardness was measured with a Yamanaka soil hardness tester and expressed as the indicated soil hardness for each layer. The distribution of tree root number per unit area at different soil depths was investigated, and the relationship between soil hardness and the number of tree roots was discussed. The results obtained from the experiments are summarized as follows. 1. Simple correlations between shear strength and its related elements, water content ($w_o$), void ratio ($e_o$), dry density ($\gamma_d$) and specific gravity ($G_s$): 1) Negative correlation coefficients were found between shear strength and water content, and between shear strength and void ratio. 2) Positive correlation coefficients were found between shear strength and dry density. 3) The correlation between shear strength and specific gravity was not significant. 2. Partial and multiple correlation analyses between shear strength and the related elements: 1) From the partial correlation coefficients among water content ($x_1$), void ratio ($x_2$) and dry density ($x_3$), the direct effect of water content on shear strength was the largest, followed by void ratio and dry density; a similar trend was found in the multiple correlation analysis. 2) Multiple linear regression equations with two independent variables, water content ($x_1$) and dry density ($x_2$), were found to be ineffective in estimating shear strength ($\hat{Y}$). However, simple linear regression equations with water content (x) as the single independent variable estimated shear strength ($\hat{Y}$) efficiently, with relatively high goodness of fit. 3. Relationship between soil hardness and root distribution: 1) Soil hardness increased in proportion to soil depth, and negative correlation coefficients were found between indicated soil hardness and the number of tree roots in both plantations. 2) The majority of roots of Pinus rigida Mill. and Pinus rigida × taeda planted on erosion-controlled lands were distributed within 20 cm of the surface. 3) Simple linear regression equations relating indicated hardness (x) to the number of tree roots (Y) were derived to estimate root numbers in both plantations.
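
A minimal sketch of the kind of simple linear regression found most effective above, estimating shear strength from water content alone; the measurements below are hypothetical, not the thesis data.

```python
import numpy as np

water_content = np.array([20.0, 22.5, 24.0, 26.5, 28.0, 30.5])    # w_o (%), illustrative
shear_strength = np.array([1.35, 1.22, 1.15, 1.02, 0.95, 0.83])   # tau (kg/cm^2), illustrative

b1, b0 = np.polyfit(water_content, shear_strength, 1)             # tau_hat = b0 + b1 * w
pred = b0 + b1 * water_content
r = np.corrcoef(water_content, shear_strength)[0, 1]              # expect a negative correlation, as reported

print(f"tau_hat = {b0:.3f} + ({b1:.3f}) * w,  r = {r:.3f}")
```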


Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing a client's business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult both to realize flexible storage expansion for a massive amount of unstructured log data and to execute the many functions needed to categorize and analyze the stored data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the analysis tools and management systems of the existing computing infrastructure. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, including storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic recovery functions so that it can continue to operate after a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data; further, the strict schemas of relational databases make it difficult to expand nodes when rapidly growing data must be distributed across them. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified into key-value, column-oriented, and document-oriented types. Of these, MongoDB, a representative document-oriented database with a free schema structure, is used in the proposed system. MongoDB was chosen because its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when the amount of data grows rapidly, and it provides an auto-sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over each bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis from the MongoDB module, the Hadoop-based analysis module, and the MySQL module by analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, measuring log insert and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is identified through a MongoDB log insert performance evaluation for various chunk sizes.
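
A minimal sketch (not the paper's system) of the document-oriented storage idea: storing heterogeneous, schema-free log records in MongoDB with pymongo and aggregating them per unit time. The host, database, collection, and field names are hypothetical.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")       # assumed local MongoDB instance
logs = client["bank_logs"]["transactions"]

# Unstructured records: documents need not share the same fields.
logs.insert_many([
    {"ts": datetime(2013, 5, 1, 9, 0, tzinfo=timezone.utc), "type": "transfer", "branch": "A01", "amount": 150000},
    {"ts": datetime(2013, 5, 1, 9, 1, tzinfo=timezone.utc), "type": "login", "channel": "web", "user": "u123"},
])
logs.create_index("ts")                                  # index the timestamp for time-window queries

# Aggregate log counts per minute and type, analogous to the per-unit-time graphs described above.
pipeline = [
    {"$group": {"_id": {"minute": {"$dateToString": {"format": "%Y-%m-%d %H:%M", "date": "$ts"}},
                        "type": "$type"},
                "count": {"$sum": 1}}},
]
for row in logs.aggregate(pipeline):
    print(row)
```

In the proposed architecture this collection would additionally be sharded across nodes (MongoDB's auto-sharding) and fed into the Hadoop-based module for heavier batch analyses.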

Studies on the Kiln Drying Characteristics of Several Commercial Woods of Korea (국산 유용 수종재의 인공건조 특성에 관한 연구)

  • Chung, Byung-Jae
    • Journal of the Korean Wood Science and Technology
    • /
    • v.2 no.2
    • /
    • pp.8-12
    • /
    • 1974
  • 1. For estimating the internal stresses occurring in a board, if a value of unity is assigned to prongs whose ends touch each other, the internal stresses developed in open prongs can be evaluated as a ratio to that unity. In accordance with this, the following equation was derived. To employ it, the prongs should be made as shown in Fig. 1, and A and B' should be measured as indicated in Fig. 1; a more precise value results as the angle θ becomes smaller. $CH=\frac{(A-B')(4W+A)(4W-A)}{2A\,[2W+(A-B')]\,[2W-(A-B')]}\times 100\%$, where A is the thickness of the prong, B' is the distance between the two prongs shown in Fig. 1, and CH is the internal stress expressed as a percentage. If precision is not required, the equation can be simplified to $CH=\frac{A-B'}{A}\times 200\%$. 2. Under the scheduled drying condition in the kiln, when the weight of a sample board is constant, the moisture content of the shell of a normally casehardened board is lower than the equilibrium moisture content indicated by the Forest Products Laboratory, U.S. Department of Agriculture. This is usually true, especially for a thin sample board; a thick unseasoned or reverse-casehardened sample does not follow this statement. 3. The comparison of drying rates for the five kinds of wood given in Table 1 shows that the drying rates, i.e., the quantity of water evaporated per square centimeter of surface area per hour, rank in order of magnitude as follows: (1) Ginkgo biloba Linne, (2) Diospyros kaki Thunberg, (3) Pinus densiflora Sieb. et Zucc., (4) Larix kaempferi Sargent, (5) Castanea crenata Sieb. et Zucc. For example, at a moisture content of 20 percent, the highest value, shown by Ginkgo biloba, is on the order of 3.8 times as great as that of Castanea crenata Sieb. et Zucc., which has the lowest value. In particular, below a moisture content of 26 percent the drying rate can be represented as a linear function of moisture content. All of these linear equations are highly significant when testing the coefficient of X, the moisture content in percent. In Table 2, Y is the quantity of water evaporated per square centimeter of surface area per hour, and X is the moisture content in percent. The drying rate is plotted against moisture content in Fig. 2. 4. One hundred times the ratio (P%) of the number of samples falling in the CH 4 class (76 to 100% of CH ratio) to the total number of samples tested at the given SR ratio is given in Table 3 (this P% is taken as the danger probability in percent). The results are summarized in Table 4. Note: in Table 4, columns 1, 2, and 3 indicate, respectively, 1) the minimum SR ratio at which the CH 4 class does not appear, 2) the range of SR ratio confined within the safety allowance of 30 percent, and 3) the lowest SR ratio giving the maximum danger probability of 100 percent. From these results it is clear that chestnut and larch develop internal stresses easily in comparison with persimmon and pine.
However, considering that reverse casehardening occurred in fir and ginkgo under the same drying conditions as the others, it is deduced that fir and ginkgo form normal casehardening with difficulty in comparison with the other species tested. 5. All kinds of drying defects except casehardening develop when the internal stresses exceed the ultimate strength of the material under long-time loading. Under a drying condition of 170°F and low humidity, the drying defects are not severe. However, at 200°F with low humidity and no end coating, all sample boards develop severe drying defects; chestnut in particular was very prone to drying defects such as casehardening and splitting.
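
A worked sketch of the casehardening index derived above, implementing both the full and the simplified prong formulas; the prong dimensions are illustrative, and W (not defined in the abstract) is assumed to be the remaining prong dimension used in the full formula.

```python
def casehardening_full(A, B_prime, W):
    """CH (%) from prong thickness A, prong gap B', and the dimension W, per the full formula."""
    d = A - B_prime
    return d * (4 * W + A) * (4 * W - A) / (2 * A * (2 * W + d) * (2 * W - d)) * 100.0

def casehardening_simple(A, B_prime):
    """Simplified CH (%) when high precision is not required: CH = (A - B')/A * 200%."""
    return (A - B_prime) / A * 200.0

A, B_prime, W = 2.5, 2.2, 5.0   # hypothetical measurements (same length unit throughout)
print(round(casehardening_full(A, B_prime, W), 1))    # ~23.6 %
print(round(casehardening_simple(A, B_prime), 1))     # ~24.0 %, close to the full value as expected
```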


A Study for the Norms of Audiometric Tests in Koreans (정상한국인의 청력검사치에 관한 연구)

  • 오혜경;서장수;이근해;김희남;김영명;권영화;서옥기
    • Proceedings of the KOR-BRONCHOESO Conference
    • /
    • 1981.05a
    • /
    • pp.38.1-38
    • /
    • 1981
  • Currently in the otologic field there are various special audiometric examinations, such as tone decay, SISI, and impedance audiometry, but only a few sporadic studies have been done in these fields in Korea. The purpose of this paper is to establish norms for various special audiometric tests; we therefore performed the special audiometric tests on 100 male medical students in good physical condition and obtained the following results. 1. All cases showed PB scores over 90%. The mean and its 2 S.D. were 98±4.9% in the right ear and 97±5.6% in the left ear. 2. The mean and its 2 S.D. of the MCL (most comfortable level) were 45±15.4 dB in the right ear and 46±17.9 dB in the left ear, and its range was 12±12.2 dB in the right ear and 13±12.6 dB in the left ear. 3. The mean and its 2 S.D. of the UCL (uncomfortable level) were 102±7.9 dB in the right ear and 102±7.9 dB in the left ear, and about half of the cases showed a UCL over 106 dB. 4. In 95% of cases, the SISI (short increment sensitivity index) at 1,000 Hz and 4,000 Hz was below 45% in the right ear at both frequencies and below 55% and 75%, respectively, in the left ear. 5. In 95% of cases, tone decay at 2,000 Hz and 4,000 Hz was below 10 dB in both ears. 6. The difference between SRT and PTA (speech reception threshold minus pure tone average) was 4±9.2 dB in the right ear and 4±10.0 dB in the left ear. 7. The dynamic range (uncomfortable level minus speech reception threshold) was 98±13.5 dB in the right ear and 99±13.5 dB in the left ear. We had trouble estimating the dynamic range in about half of the cases, in which the UCL could not be determined with our conventional audiometry. 8. The results of the impedance audiometric tests were as follows. A. In the tympanogram, all cases were of type A, with one exception of type B in the left ear. The mean and its 2 S.D. of the peak level were 22.8±32.94 mmH2O in the right ear and 23.9±29.81 mmH2O in the left ear. B. The mean and its 2 S.D. of the compliance were 0.6±0.54 cc in the right ear and 0.6±0.53 cc in the left ear. C. Stapedial reflex results: a. The mean and its 2 S.D. of the contralateral stapedial reflex at 500 Hz, 1,000 Hz, 2,000 Hz, and 4,000 Hz were 99±17.7 dB, 87±14.4 dB, 79±13.7 dB, and 77±20.0 dB in the right ear and 99±15.9 dB, 88±13.9 dB, 79±13.7 dB, and 77±21.3 dB in the left ear. Depending on the tested frequency, the stapedial reflex was not elicited in 6 cases in the right ear and 11 cases in the left ear. b. The mean and its 2 S.D. of the ipsilateral stapedial reflex at 1,000 Hz and 2,000 Hz were 89±16.3 dB and 82±15.9 dB in the right ear and 89±18.0 dB and 83±18.9 dB in the left ear. Depending on the tested frequency, the stapedial reflex was not elicited in 1 case in the right ear and 2 cases in the left ear. 9. Eustachian tube function tested with impedance audiometry was abnormal in 21 cases in the right ear, depending on the applied pressure; the shift of the tympanogram peak was 14±26.9 mmH2O (test pressure +250 mmH2O) and 8±21.9 mmH2O (test pressure -250 mmH2O). In the left ear it was abnormal in 11 cases, with a peak shift of 12±22.5 mmH2O (test pressure +250 mmH2O) and 9±17.3 mmH2O (test pressure -250 mmH2O).


The Use of Radioactive $^{51}Cr$ in Measurement of Intestinal Blood Loss ($^{51}Cr$을 사용(使用)한 장관내(賜管內) 출혈량측정법(出血量測定法))

  • Lee, Mun-Ho
    • The Korean Journal of Nuclear Medicine
    • /
    • v.4 no.1
    • /
    • pp.19-26
    • /
    • 1970
  • 1. Sixteen normal healthy subjects free from occult blood in the stool were selected and administered their own $^{51}Cr$-labeled blood via duodenal tube, and the recovery rate of radioactivity in feces and urine was measured. The average fecal recovery rate was 90.7 percent (85.7~97.7%) of the administered radioactivity, and the average urinary excretion rate was 0.8 percent (0.5~1.5%). 2. There was a close correlation between the amount of blood administered and the recovery rate from the feces: the more blood administered, the higher the recovery rate. It was also found that administering the tagged blood in amounts exceeding 15 ml was suitable for measuring the radioactivity in the stools. 3. In five normal healthy subjects whose circulating erythrocytes had been tagged with $^{51}Cr$, there was little fecal excretion of radioactivity (equivalent to an average of 0.9 ml of blood per day). This excretion is not related to hemorrhage, and the main route of excretion of such a negligible amount of radioactivity was postulated to be gastric juice and bile. 4. A comparison of the radioactivity in the blood and feces of patients with $^{51}Cr$-labeled erythrocytes appears to be a valid way of estimating intestinal blood loss.
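
A minimal sketch of the quantitative idea behind comparing blood and fecal radioactivity: if circulating erythrocytes are labeled with $^{51}Cr$, the volume of blood lost into the gut is approximately the fecal activity divided by the activity per milliliter of whole blood. The count values below are illustrative, not the paper's data, and the recovery correction is an assumption based on the average fecal recovery reported above.

```python
def gi_blood_loss_ml(fecal_counts_per_day, blood_counts_per_ml, fecal_recovery=0.907):
    """Estimated daily intestinal blood loss (mL/day), optionally corrected for the
    ~90.7% average fecal recovery of administered radioactivity reported above."""
    return fecal_counts_per_day / blood_counts_per_ml / fecal_recovery

# Hypothetical measurements: 4500 counts/day in stool, 900 counts/mL in whole blood.
print(round(gi_blood_loss_ml(4500.0, 900.0), 1))   # -> about 5.5 mL/day
```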


DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using origin-destination (O-D) surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network and thus provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement; consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS cannot project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the O-D survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting origin-destination trip length frequency (OD TLF) distributions by trip type are applied to the gravity model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: internal-internal (I-I), internal-external (I-E), and external-external (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed. The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration is the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for developing the GM model is limited to ground counts (GC) and a limited set of OD TLFs.
The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, selected link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the lowest %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. More importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and vehicle miles of travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31%, respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, 19% and 33% respectively, which implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups.
Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results, and no specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that only reflect the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%.
Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic, while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model under four scenarios. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
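
A minimal sketch (not WisDOT's model) of the two mechanisms discussed above: a gravity model that distributes zonal truck productions and attractions with a friction factor curve, and a SELINK-style link adjustment factor computed as ground count divided by assigned volume. Zones, impedances, and the link assignment below are hypothetical.

```python
import numpy as np

productions = np.array([120.0, 80.0, 60.0])           # hypothetical zonal truck productions
attractions = np.array([100.0, 90.0, 70.0])           # hypothetical zonal truck attractions
travel_time = np.array([[ 5.0, 20.0, 40.0],
                        [20.0,  5.0, 25.0],
                        [40.0, 25.0,  5.0]])          # zone-to-zone impedances

def friction(t, beta=0.1):
    # One friction factor curve per trip type is used in the study; this is a single exponential curve.
    return np.exp(-beta * t)

F = friction(travel_time)
weights = attractions[None, :] * F
trips = productions[:, None] * weights / weights.sum(axis=1, keepdims=True)   # singly constrained GM

# SELINK-style adjustment: scale the O-D pairs that traverse a selected link by
# the ratio of the observed ground count to the volume assigned to that link.
assigned_volume_on_link = trips[0, 2] + trips[1, 2]   # assume these two movements use the selected link
ground_count = 90.0
adjustment = ground_count / assigned_volume_on_link

print(np.round(trips, 1))
print("link adjustment factor:", round(adjustment, 2))
```

In the research this adjustment factor is applied back to the productions and attractions of all zones contributing trips to the selected link, and the gravity model is then re-run and re-checked against the screenlines.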
