• Title/Summary/Keyword: SAMe


Batch Scale Storage of Sprouting Foods by Irradiation Combined with Natural Low Temperature - III. Storage of Onions - (방사선조사(放射線照射)와 자연저온(自然低溫)에 의한 발아식품(發芽食品)의 Batch Scale 저장(貯藏)에 관한 연구(硏究) - 제3보(第三報) 양파의 저장(貯藏) -)

  • Cho, Han-Ok; Kwon, Joong-Ho; Byun, Myung-Woo; Yang, Ho-Sook
    • Applied Biological Chemistry, v.26 no.2, pp.82-89, 1983
  • In order to develop a commercial storage method for onions by irradiation combined with natural low temperature, two local onion varieties, a precocious (early-ripening) variety and a late-ripening variety, were stored on a batch scale in a natural low-temperature storage room ($450{\times}650{\times}250$ cm (H); year-round temperature range, $2{\sim}17^{\circ}C$; R.H., $80{\sim}85%$) after irradiation at the optimum dose level. Both varieties in the control group had sprouted after five to seven months of storage, whereas the precocious variety irradiated with $10{\sim}15$ Krad showed only $2{\sim}4%$ sprouting after nine months, and sprouting was completely inhibited at the same dose in the late variety. The loss due to rot after ten months of storage was $23{\sim}49%$ in both the control and irradiated groups of the precocious variety, but only $4{\sim}10%$ in the late variety. The weight loss of the irradiated precocious variety after ten months of storage was $13{\sim}16%$, while that of the late variety was $5.3{\sim}5.9%$ after nine months. The moisture content of both varieties remained at $90{\sim}93%$ throughout the storage period, with negligible changes. The total sugar content differed little among varieties and doses immediately after irradiation, but decreased over the storage period: it decreased by 33.6% in the control and 12.5% in the irradiated group, and by $20{\sim}26%$ in both the control and irradiated groups of the late variety after nine months of storage. No appreciable change was observed immediately after irradiation irrespective of variety and dose, but a slight decrease occurred with storage. The ascorbic acid content of the precocious variety increased slightly with dose immediately after irradiation, while that of the late variety decreased slightly; ascorbic acid content generally decreased over the whole storage period. An economical preservation method for onions, applicable to the late variety, would be to irradiate onion bulbs at a dose range of $10{\sim}15$ Krad followed by storage in a natural low-temperature storage room.


A Study on Recent Research Trend in Management of Technology Using Keywords Network Analysis (키워드 네트워크 분석을 통해 살펴본 기술경영의 최근 연구동향)

  • Kho, Jaechang; Cho, Kuentae; Cho, Yoonho
    • Journal of Intelligence and Information Systems, v.19 no.2, pp.101-123, 2013
  • Recently, due to advances in science and information technology, the socio-economic and business environment has been shifting from an industrial economy to a knowledge economy. Furthermore, companies need to create new value through continuous innovation, develop core competencies and technologies, and pursue technological convergence. Therefore, identifying major trends in technology research and making interdisciplinary, knowledge-based predictions of integrated and promising technologies are required for firms to gain and sustain competitive advantage and future growth engines. The aim of this paper is to understand recent research trends in management of technology (MOT) and to foresee promising technologies with deep knowledge of both technology and business. Furthermore, this study intends to provide a clear way to find new technical value for constant innovation and to identify core technologies and technology convergence. Bibliometrics is a quantitative analysis used to understand the characteristics of a body of literature. Traditional bibliometrics, which focuses on quantitative indices such as citation frequency, is limited in its ability to capture the relationship between trends in technology management and the technologies themselves. To overcome this limitation, network-focused bibliometrics, which mainly uses "co-citation" and "co-word" analysis, has been used instead. In this study, a keyword network analysis, a form of social network analysis, is performed to analyze recent research trends in MOT. For the analysis, we collected keywords from research papers published in international journals related to MOT between 2002 and 2011, constructed a keyword network, and then conducted the keyword network analysis. Over the past 40 years, studies of social networks have attempted to understand social interactions through the network structure represented by connection patterns; in other words, social network analysis has been used to explain the structures and behaviors of various social formations such as teams, organizations, and industries. In general, social network analysis uses data in the form of a matrix. In our context, the rows of the matrix are papers and the columns are keywords, and the relations are binary: each cell is 1 if the paper includes the keyword and 0 otherwise. Even though published papers have no direct relations to one another, relations between them can be derived from this paper-keyword matrix; for example, a keyword network can be constructed by connecting papers that share one or more keywords. After constructing the keyword network, we analyzed keyword frequency, the structural characteristics of the network, preferential attachment and the growth of new keywords, components, and centrality. The results of this study are as follows. First, a paper has 4.574 keywords on average; 90% of keywords were used three or fewer times over the past 10 years, and about 75% of keywords appeared only once. Second, the keyword network in MOT is a small-world, scale-free network in which a small number of keywords tend to dominate. Third, the gap between the rich nodes (with more edges) and the poor nodes (with fewer edges) in the network grows over time. Fourth, most newly entering keywords become poor nodes within about 2~3 years.
Finally, keywords with high degree centrality, betweenness centrality, and closeness centrality are "Innovation," "R&D," "Patent," "Forecast," "Technology transfer," "Technology," and "SME". We hope these results will help MOT researchers identify major trends in technology research, serve as useful reference information when they seek consilience with other fields of study, and guide the selection of new research topics.
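The keyword co-occurrence network and the centrality measures reported above can be sketched roughly as follows. This is a minimal illustration with made-up papers and keywords (not the authors' code or data): the binary paper-keyword relation is projected onto a keyword network, and degree, betweenness, and closeness centrality are computed with networkx.

```python
# Minimal sketch of a keyword co-occurrence network analysis (illustrative data only).
import itertools
import networkx as nx

papers = {
    "paper_1": {"Innovation", "R&D", "Patent"},
    "paper_2": {"Innovation", "Technology transfer"},
    "paper_3": {"Patent", "Forecast", "R&D"},
    "paper_4": {"SME", "Innovation"},
}

# Connect keywords that appear together in the same paper; edge weight counts co-occurrences.
G = nx.Graph()
for keywords in papers.values():
    for k1, k2 in itertools.combinations(sorted(keywords), 2):
        if G.has_edge(k1, k2):
            G[k1][k2]["weight"] += 1
        else:
            G.add_edge(k1, k2, weight=1)

# Centrality measures used in the study to identify core keywords.
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
closeness = nx.closeness_centrality(G)

for kw in sorted(G.nodes, key=lambda k: degree[k], reverse=True):
    print(f"{kw:20s} degree={degree[kw]:.2f} "
          f"betweenness={betweenness[kw]:.2f} closeness={closeness[kw]:.2f}")
```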

A Hybrid SVM Classifier for Imbalanced Data Sets (불균형 데이터 집합의 분류를 위한 하이브리드 SVM 모델)

  • Lee, Jae Sik; Kwon, Jong Gu
    • Journal of Intelligence and Information Systems, v.19 no.2, pp.125-140, 2013
  • A data set in which the number of records belonging to one class far outnumbers the number of records belonging to the other class is called an 'imbalanced data set'. Most classification techniques perform poorly on imbalanced data sets. When we evaluate the performance of a classification technique, we need to measure not only 'accuracy' but also 'sensitivity' and 'specificity'. In a customer churn prediction problem, 'retention' records form the majority class and 'churn' records form the minority class. Sensitivity measures the proportion of actual retentions that are correctly identified as such, and specificity measures the proportion of churns that are correctly identified as such. The poor performance of classification techniques on imbalanced data sets is due to the low value of specificity. Many previous studies on imbalanced data sets employed an 'oversampling' technique, in which members of the minority class are sampled more heavily than those of the majority class in order to obtain a relatively balanced data set. When a classification model is constructed using this oversampled balanced data set, specificity can be improved but sensitivity decreases. In this research, we developed a hybrid model of a support vector machine (SVM), an artificial neural network (ANN), and a decision tree that improves specificity while maintaining sensitivity; we call it the 'hybrid SVM model'. The construction and prediction process of the hybrid SVM model is as follows. By oversampling from the original imbalanced data set, a balanced data set is prepared. SVM_I and ANN_I models are constructed using the imbalanced data set, and an SVM_B model is constructed using the balanced data set; SVM_I is superior in sensitivity and SVM_B is superior in specificity. For a record on which SVM_I and SVM_B make the same prediction, that prediction becomes the final solution. If they make different predictions, the final solution is determined by discrimination rules obtained from the ANN and a decision tree: for such records, a decision tree is constructed using the ANN_I output value as input and the actual retention or churn as target. We obtained the following two discrimination rules: 'IF ANN_I output value < 0.285, THEN Final Solution = Retention' and 'IF ANN_I output value ${\geq}0.285$, THEN Final Solution = Churn'. The threshold 0.285 is the value optimized for the data used in this research; the contribution of this research is the structure or framework of the hybrid SVM model, not a specific threshold value such as 0.285, so the threshold in the discrimination rules can be changed to any value depending on the data. To evaluate the performance of the hybrid SVM model, we used the 'churn data set' in the UCI Machine Learning Repository, which consists of 85% retention customers and 15% churn customers. The accuracy of the hybrid SVM model is 91.08%, which is better than that of the SVM_I or SVM_B model. The points worth noting are its sensitivity, 95.02%, and specificity, 69.24%: the sensitivity of the SVM_I model is 94.65%, and the specificity of the SVM_B model is 67.00%. Therefore, the hybrid SVM model developed in this research improves on the specificity of the SVM_B model while maintaining the sensitivity of the SVM_I model.
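A minimal sketch of the hybrid combination logic described above, assuming scikit-learn, numpy arrays, and a 0/1 target where 1 is the minority 'churn' class. The paper derives the disagreement rule with a decision tree over ANN_I outputs; here that rule is simplified to a fixed threshold, and 0.285 is only a data-dependent placeholder.

```python
# Sketch of a hybrid SVM framework: SVM on imbalanced data + SVM on oversampled data,
# with an ANN output threshold resolving disagreements (illustrative, not the authors' code).
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.utils import resample

def fit_hybrid(X, y, threshold=0.285, random_state=0):
    # Oversample the minority class (label 1) to build the balanced training set.
    X_min, y_min = X[y == 1], y[y == 1]
    X_maj, y_maj = X[y == 0], y[y == 0]
    X_os, y_os = resample(X_min, y_min, n_samples=len(y_maj),
                          replace=True, random_state=random_state)
    X_bal = np.vstack([X_maj, X_os])
    y_bal = np.concatenate([y_maj, y_os])

    svm_i = SVC().fit(X, y)          # trained on the imbalanced set (high sensitivity)
    svm_b = SVC().fit(X_bal, y_bal)  # trained on the balanced set (high specificity)
    ann_i = MLPClassifier(max_iter=1000, random_state=random_state).fit(X, y)
    return svm_i, svm_b, ann_i, threshold

def predict_hybrid(models, X):
    svm_i, svm_b, ann_i, threshold = models
    p_i, p_b = svm_i.predict(X), svm_b.predict(X)
    # Agreement: keep the common prediction; disagreement: apply the ANN_I output rule.
    ann_out = ann_i.predict_proba(X)[:, 1]
    return np.where(p_i == p_b, p_i, (ann_out >= threshold).astype(int))
```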

Measuring the Public Service Quality Using Process Mining: Focusing on N City's Building Licensing Complaint Service (프로세스 마이닝을 이용한 공공서비스의 품질 측정: N시의 건축 인허가 민원 서비스를 중심으로)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems, v.25 no.4, pp.35-52, 2019
  • As public services are provided in various forms, including e-government, public demand for service quality is increasing. Although continuous measurement and improvement of public service quality are needed, traditional surveys are costly and time-consuming and thus have limitations. Therefore, there is a need for an analytical technique that can measure the quality of public services quickly and accurately at any time, based on the data generated while the services are delivered. In this study, we analyzed the quality of public services based on data, using process mining techniques, for the building licensing complaint service of N city. This service was selected because it provides the data necessary for the analysis and because the approach can be spread to other institutions through public service quality management. We conducted process mining on a total of 3,678 building licensing complaints filed in N city over the two years from January 2014, and identified the process maps and the departments with high frequency and long processing times. The analysis showed that some departments were overloaded at certain points in time while others handled relatively few cases, and there were reasonable grounds to suspect that an increase in the number of complaints lengthens the time required to complete them. The time required to complete a complaint varied from same-day completion to one year and 146 days. The cumulative frequency of the top four departments (the Sewage Treatment Division, the Waterworks Division, the Urban Design Division, and the Green Growth Division) exceeded 50%, and the cumulative frequency of the top nine departments exceeded 70%; the heavily involved departments were few, and the load among departments was highly unbalanced. Most complaints followed a variety of different process patterns. The analysis shows that the number of 'complement' decisions (requests to supplement the submitted documents) has the greatest impact on the length of a complaint; this is interpreted as follows: a 'complement' decision requires a physical period in which the applicant revises and resubmits the documents, so completion of the whole complaint is delayed.
To address this, the overall processing time can be drastically reduced if applicants prepare their documents thoroughly before filing, so that 'complement' decisions are avoided. By clarifying and disclosing the causes of 'complement' decisions and their solutions, which are among the important data in the system, applicants can prepare in advance and be confident that documents prepared using the disclosed information will be accepted, making the handling of complaints transparent and predictable. Documents prepared from pre-disclosed information are likely to be processed without problems, which not only shortens the processing period but also improves work efficiency by eliminating rework and repeated consultations on the processor's side. The results of this study can be used to find departments with a heavy complaint burden at particular points in time and to manage workforce allocation between departments flexibly. In addition, the analysis of which departments participate in consultations for each type of complaint can be used for automation or recommendation when a consultation department must be assigned. Furthermore, by applying machine learning techniques to the various data generated during the complaint process, the patterns of the process can be learned, and such algorithms can be applied in the system to automate or add intelligence to civil complaint processing. This study is expected to suggest directions for future public service quality improvement through process mining analysis of civil services.
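A rough sketch of the kind of event-log analysis described above, using pandas. The column names and records are hypothetical, not N city's actual schema; each row stands in for one consultation step of a complaint, and the directly-follows counts are the raw material of a discovered process map.

```python
# Illustrative event-log analysis: per-department load and directly-follows relations.
import pandas as pd

log = pd.DataFrame({
    "case_id":    ["c1", "c1", "c1", "c2", "c2", "c3"],
    "department": ["Urban Design", "Sewage Treatment", "Waterworks",
                   "Sewage Treatment", "Green Growth", "Waterworks"],
    "start": pd.to_datetime(["2014-01-02", "2014-01-05", "2014-01-20",
                             "2014-02-01", "2014-02-10", "2014-03-03"]),
    "end":   pd.to_datetime(["2014-01-04", "2014-01-15", "2014-01-25",
                             "2014-02-08", "2014-02-12", "2014-03-20"]),
})

# Frequency and mean processing time per department (the "load" view of the study).
log["days"] = (log["end"] - log["start"]).dt.days
print(log.groupby("department")["days"].agg(frequency="count", mean_days="mean"))

# Directly-follows relations within each case: which department hands over to which.
log = log.sort_values(["case_id", "start"])
log["next_department"] = log.groupby("case_id")["department"].shift(-1)
dfg = (log.dropna(subset=["next_department"])
          .groupby(["department", "next_department"]).size())
print(dfg)
```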

A Study on the Improvement of Recommendation Accuracy by Using Category Association Rule Mining (카테고리 연관 규칙 마이닝을 활용한 추천 정확도 향상 기법)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems, v.26 no.2, pp.27-42, 2020
  • Traditional companies with offline stores have been unable to secure large display space due to cost. This limitation inevitably meant that only a limited range of products could be displayed on the shelves, depriving consumers of the opportunity to experience various items. Taking advantage of the virtual space of the Internet, online shopping goes beyond the physical-space limitations of offline shopping and can display numerous products on web pages, satisfying consumers with a variety of needs. Paradoxically, however, this can also force consumers to compare and evaluate too many alternatives in their purchase decision-making process. As an effort to address this side effect, various consumer purchase decision support systems have been studied, such as keyword-based item search services and recommender systems. These systems can reduce search time for items, prevent consumers from leaving while browsing, and contribute to increased sales for the seller. Among these systems, recommender systems based on association rule mining techniques can effectively detect interrelated products from transaction data such as orders. The associations between products obtained by statistical analysis provide clues for predicting how interested consumers will be in another product. However, since the algorithm is based on transaction counts, products that have not yet sold enough in the early days after launch may not be included in the recommendation list even though they are highly likely to sell. Such missing items do not get sufficient exposure to consumers to record sufficient sales, and thus fall into a vicious cycle of declining sales and omission from the recommendation list. This is an inevitable outcome when recommendations are made solely from past transaction histories rather than from an assessment of potential future sales. This study started from the idea that indirectly capturing this potential would help in selecting products worth recommending. In light of the fact that the attributes of a product affect consumers' purchasing decisions, this study reflects those attributes in the recommender system. In other words, consumers who visit a product page have shown interest in the attributes of that product and are likely to be interested in other products with the same attributes; on this assumption, the recommender system can select recommended products with a higher acceptance rate. Given that a category is one of the main attributes of a product, it can be a good indicator not only of direct associations between two items but also of potential associations that have yet to be revealed. Based on this idea, the study devised a recommender system that reflects not only associations between products but also associations between categories. Through regression analysis, the two kinds of associations were combined into a model that predicts the hit rate of a recommendation. To evaluate the performance of the proposed model, another regression model was also developed based only on associations between products. Comparative experiments were designed to resemble the environment in which products are actually recommended in online shopping malls.
First, the association rules for all possible combinations of antecedent and consequent items were generated from the order data. Then, the hit rate of each association rule was predicted from the support and confidence values calculated by each of the models. The comparative experiments, using order data collected from an online shopping mall, show that recommendation accuracy can be improved by reflecting not only the associations between products but also the associations between categories when recommending related products. The proposed model showed a 2 to 3 percent improvement in hit rates compared to the existing model. From a practical point of view, it is expected to have a positive effect on improving consumers' purchasing satisfaction and increasing sellers' sales.
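The combination of item-level and category-level association signals can be illustrated with a small sketch. The orders, items, and item-to-category mapping below are made up, and the final regression step that the paper uses to predict hit rates is only indicated in a comment.

```python
# Illustrative support/confidence at the item level and at the category level.
orders = [{"shirt_A", "pants_B"}, {"shirt_A", "shoes_C"},
          {"pants_B", "shoes_C"}, {"shirt_A", "pants_B", "shoes_C"}]
category = {"shirt_A": "tops", "pants_B": "bottoms", "shoes_C": "shoes"}

def rule_stats(baskets, antecedent, consequent):
    """Support and confidence of the rule antecedent -> consequent."""
    n = len(baskets)
    n_ante = sum(antecedent in b for b in baskets)
    n_both = sum(antecedent in b and consequent in b for b in baskets)
    return n_both / n, (n_both / n_ante if n_ante else 0.0)

# Item-level association (sparse for newly launched items)...
item_support, item_conf = rule_stats(orders, "shirt_A", "shoes_C")

# ...and category-level association (denser, captures potential interest).
cat_orders = [{category[i] for i in b} for b in orders]
cat_support, cat_conf = rule_stats(cat_orders, "tops", "shoes")

print(item_support, item_conf, cat_support, cat_conf)
# The paper then fits a regression that predicts the hit rate of a recommendation from both
# kinds of support/confidence; the four values above would be the regressors for this rule.
```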

A Review of Personal Radiation Dose per Radiological Technologists Working at General Hospitals (전국 종합병원 방사선사의 개인피폭선량에 대한 고찰)

  • Jung, Hong-Ryang; Lim, Cheong-Hwan; Lee, Man-Koo
    • Journal of Radiological Science and Technology, v.28 no.2, pp.137-144, 2005
  • To determine the personal radiation doses of radiological technologists, a survey was conducted of 623 radiological technologists who had been working at 44 general hospitals in 16 cities and provinces in Korea from 1998 to 2002. A total of 2,624 personal radiation dose records were collected and analyzed by region, year, and hospital, with the following results: 1. The average radiation dose per person across regions and years for the 5 years was 1.61 mSv. By region, Daegu showed the highest value, 4.74 mSv, followed by Gangwon, 4.65 mSv, and Gyeonggi, 2.15 mSv; the lowest values were recorded in Chungbuk, 0.91 mSv, Jeju, 0.94 mSv, and Busan, 0.97 mSv. By year, 2000 showed the highest average dose, 1.80 mSv, followed by 2002 (1.77 mSv), 1999 (1.55 mSv), 2001 (1.50 mSv), and 1998 (1.36 mSv). 2. In 1998, Gangwon had the highest per-person dose, 3.28 mSv, followed by Gwangju, 2.51 mSv, and Daejeon, 2.25 mSv, while Jeju (0.86 mSv) and Chungbuk (0.85 mSv) remained below 1.0 mSv. In 1999, Gangwon again topped the list with 5.67 mSv, followed by Daegu with 4.35 mSv and Gyeonggi with 2.48 mSv; in the same year, the dose stayed below 1.0 mSv in Ulsan (0.98 mSv), Gyeongbuk (0.95 mSv), and Jeju (0.91 mSv). 3. In 2000, Gangwon was again at the top of the list with 5.73 mSv. Ulsan had remained below 1.0 mSv in both 1998 and 1999, whereas in 2000 its value rose sharply to 5.20 mSv; Chungbuk remained below 1.0 mSv at 0.79 mSv. 4. In 2001, Daegu recorded the highest dose observed in the 5 years analyzed, 9.05 mSv, followed by Gangwon with 4.01 mSv; the regions below 1.0 mSv were Gyeongbuk (0.99 mSv) and Jeonbuk (0.92 mSv). In 2002, Gangwon again led the list with 4.65 mSv, while Incheon (0.88 mSv), Jeonbuk (0.96 mSv), and Jeju (0.68 mSv) were below 1.0 mSv. 5. By hospital, KMH in Daegu showed the highest 5-year average dose, 6.82 mSv, followed by GAH in Gangwon, 5.88 mSv, and CAH in Seoul, 3.66 mSv; YSH in Jeonnam had the lowest average dose, 0.36 mSv, followed by GNH in Gyeongnam, 0.39 mSv, and DKH in Chungnam, 0.51 mSv. A limitation of the present study is that it focuses on radiological technologists working at tertiary referral hospitals, which are regarded as stable in terms of working conditions, while technologists working at small hospitals were excluded from the survey. In addition, some hospitals established less than 5 years earlier were included, and technologists who had worked at a hospital for less than 5 years were also surveyed. We also cannot exclude the possibility that the differences in average personal dose by region, hospital, and year are attributable to differences in working conditions and facilities among medical institutions. It therefore seems desirable to develop standardized instruments to measure the working environment objectively and to devise ways to compare and analyze it by region and hospital more accurately in the future.
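The reported averages are straightforward aggregations of the personal dose records; a minimal sketch with made-up records (the study used 2,624 actual records) might look like this.

```python
# Illustrative aggregation of personal dose records by region and year.
import pandas as pd

doses = pd.DataFrame({
    "region":   ["Daegu", "Daegu", "Gangwon", "Chungbuk", "Jeju"],
    "year":     [2000, 2001, 1999, 1998, 2002],
    "dose_mSv": [4.2, 9.1, 5.7, 0.9, 0.7],
})

print(doses["dose_mSv"].mean())                               # overall average dose per record
print(doses.groupby("region")["dose_mSv"].mean())             # average by region
print(doses.groupby(["region", "year"])["dose_mSv"].mean())   # average by region and year
```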


Validity and Pertinence of Administrative Capital City Proposal (행정수도 건설안의 타당성과 시의성)

  • 김형국
    • Journal of the Korean Geographical Society, v.38 no.2, pp.312-323, 2003
  • The author fully agrees with the government that regional disequilibrium is severe enough to justify considering a move of the administrative capital. Pursuing this course solely to establish balanced development, however, is not a convincing enough reason. The capital city is directly related not only to the social and economic situation but, much more importantly, to the domestic political situation as well. In the mid-1970s, the proposal by the Third Republic to move the capital city temporarily was based entirely on security reasons. At the time, the then opposition leader Kim Dae-jung said that establishing a safe distance from the demilitarized zone (DMZ) reflected a typically military decision. His view was that retaining the capital city close to the DMZ would show more consideration for the will of the people to defend their own country. In fact, independent Pakistan moved its capital city from Karachi to Islamabad, situated close to Kashmir, the subject of a heated territorial dispute with India. It is regrettable that no consideration has been given to the urgent political situation on the Korean peninsula, which is presently enveloped in a dense nuclear fog. Just as a person needs health to pursue a dream, a country must have security to implement balanced territorial development. According to current urban theories, the fate of a country depends on its major cities. A negligently guarded capital city runs the risk of becoming hostage and bringing ruin to the whole country. In this vein, North Korea's undoubted main target of attack in any armed communist reunification of Korea is Seoul. For the preservation of our state, therefore, it is only right that Seoul be shielded to prevent it from becoming hostage to North Korea. The location of the US Armed Forces to the north of the capital city is based on the judgment that the defense of Seoul is of absolute importance. At the same time, regardless of their different standpoints, South and North Korea agree that the division of the Korean people into two separate countries is abnormal. Reunification, which so far has defied all predictions, may be realized earlier than anyone expects, and the day of reunification seems to be the best day for the relocation of the capital city. Building a proper capital city would take at least twenty years, and a capital city cannot be dragged from one place to another. On the day of a free and democratic reunification, a national agreement will be reached naturally to found a nationally symbolic city, as in Brazil or Australia. Even if security did not pose a problem, the government's way of thinking would not greatly contribute to the balanced development of the country. The Chungcheong region, which is earmarked as the new location of the capital city, has been the greatest beneficiary of its proximity to the capital region. Since it is not a disadvantaged region, locating the capital city there would not help alleviate regional disparity. If it is absolutely necessary to find a candidate region at present, then considering security, balanced regional development, and a post-reunification scenario, the Cheolwon area in the middle of the Korean peninsula may be a plausible choice. Even if the transfer of the capital is delayed in consideration of the present political conflict between South and North Korea, there is a definite shortcut to balanced regional development.
It is found not in the geographical dispersal of the central government, but in the decentralization of power to the provinces. If the government has surplus money to build a new symbolic capital city, it would do better to improve, for instance, the quality of drinking water, which everyone now avoids, and to help the regional subway authority whose chronic deficits resulted in a recent disastrous accident. It would be proper to time the transfer of the capital city to coincide with the reunification of Korea, whenever Providence intends it.

Comparison of Left Ventricular Volume and Function between 16 Channel Multi-detector Computed Tomography (MDCT) and Echocardiography (16 채널 Multi-detector 컴퓨터 단층촬영과 심초음파를 이용한 좌심실 용적과 기능의 비교)

  • Park, Chan-Beom; Cho, Min-Seob; Moon, Mi-Hyoung; Cho, Eun-Ju; Lee, Bae-Young; Kim, Chi-Kyung; Jin, Ung
    • Journal of Chest Surgery, v.40 no.1 s.270, pp.45-51, 2007
  • Background: Although echocardiography is usually used for the quantitative assessment of left ventricular function, the recently developed 16-slice multidetector computed tomography (MDCT) is capable of evaluating not only the coronary arteries but also left ventricular function. The objective of our study was therefore to compare the values of left ventricular function quantified by MDCT with those obtained by echocardiography, in order to evaluate its suitability for clinical application. Material and Method: From the 49 patients who underwent MDCT in our hospital from November 1, 2003 to January 31, 2005, we enrolled the 20 patients who also underwent echocardiography during the same period. Left ventricular end-diastolic volume index (LVEDVI), left ventricular end-systolic volume index (LVESVI), stroke volume index (SVI), left ventricular mass index (LVMI), and ejection fraction (EF) were analyzed. Result: Average LVEDVI ($80.86{\pm}34.69mL$ for MDCT vs $60.23{\pm}29.06mL$ for echocardiography, p<0.01), average LVESVI ($37.96{\pm}24.52mL$ for MDCT vs $25.68{\pm}16.57mL$ for echocardiography, p<0.01), average SVI ($42.90{\pm}15.86mL$ for MDCT vs $34.54{\pm}17.94mL$ for echocardiography, p<0.01), average LVMI ($72.14{\pm}25.35$ for MDCT vs $130.35{\pm}53.10$ for echocardiography, p<0.01), and average EF ($55.63{\pm}12.91$ for MDCT vs $59.95{\pm}12.75$ for echocardiography, p<0.05) all showed significant differences between the two modalities. Average LVEDVI, LVESVI, and SVI were higher on MDCT, while average LVMI and EF were higher on echocardiography. Comparing the correlation of each parameter between the two modalities, LVEDVI ($r^2=0.74$, p<0.0001), LVESVI ($r^2=0.69$, p<0.0001), and SVI ($r^2=0.55$, p<0.0001) showed high correlation, LVMI ($r^2=0.84$, p<0.0001) showed very high correlation, and EF ($r^2=0.45$, p=0.0002) showed relatively high correlation. Conclusion: Quantitative assessment of left ventricular volume and function using 16-slice MDCT correlated well with echocardiography and may therefore be a feasible assessment method. However, because the averages of the parameters differed significantly, the absolute values from the two modalities may not be interchangeable in clinical applications. Furthermore, considering the future development of MDCT, we expect to be able to evaluate coronary artery stenosis along with left ventricular function in patients with coronary artery disease in a single examination.
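The paired comparison and correlation reported above can be reproduced in outline with scipy; the values below are illustrative, not the study's measurements.

```python
# Illustrative paired comparison of one parameter (LVEDVI) between two modalities:
# paired t-test for the mean difference and r^2 for agreement.
import numpy as np
from scipy import stats

lvedvi_mdct = np.array([82.0, 75.5, 110.2, 64.3, 90.1])   # mL, MDCT (made-up values)
lvedvi_echo = np.array([61.0, 58.2,  85.4, 49.9, 70.3])   # mL, echocardiography (made-up)

t, p = stats.ttest_rel(lvedvi_mdct, lvedvi_echo)   # significance of the paired difference
r, _ = stats.pearsonr(lvedvi_mdct, lvedvi_echo)    # correlation between the modalities
print(f"paired t-test p = {p:.4f}, r^2 = {r**2:.2f}")
```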

A Study on the Amino Acid Components of Soil Humus by Humus Composition (토양부식산(土壤腐植酸)의 형태별(形態別) Amino 산(酸) 함량(含量)에 관(關)한 연구(硏究))

  • Kim, Jeong-Je; Lee, Wi-Young
    • Korean Journal of Soil Science and Fertilizer, v.21 no.3, pp.254-263, 1988
  • The contents and distribution of amino acids in the humic acid and fulvic acid fractions of different humus types ($R_p$, B, A, P) were investigated. The extracted humic and fulvic acids were purified and analyzed. The results are summarized as follows: (1) Composition of humus: The total humus ($H_T$), the amount of humic acid (a), the amount of fulvic acid (b), and ${\Delta}logK$ all decreased in the order $R_p$ > B > A > P type. The same trend was observed for total nitrogen and carbon. (2) Contents and composition of amino acids in humic acids: 1) The total amounts of amino acids in the humic acid fractions of the different types were in the order $R_p$ > B > A > P type for soils under coniferous forest trees, but P > A > $R_p$ > B type for soils under deciduous forest trees. For humic acids from soils under coniferous forest trees there were positive correlations between total amino acids and both total carbon and ${\Delta}logK$, but a negative correlation existed between total amino acids and the C/N ratio; no significant correlation was found for samples taken from soils under deciduous forest trees. 2) The ratios of the amino acid groups were compared. For soils under coniferous forest trees, the ratios of acidic amino acids were in the order P > $R_p$ > B > A type, those of neutral amino acids followed the order $R_p$ > B > A > P type, and those of basic amino acids were in the order B > A > $R_p$ > P type; the contents of total amino acids were in the order neutral > acidic > basic amino acids. For soils under deciduous forest trees the order of the ratios was different: acidic amino acids followed the order A > P > B > $R_p$ type, neutral ones followed the order P > $R_p$ > A > B type, and basic amino acids followed the order P ${\geq}$ A > B ${\geq}$ $R_p$ type, where the differences were very small. 3) In general, aspartic acid, glycine, and glutamic acid were the major components in all samples, while histidine, tyrosine, and methionine were present only in small amounts. (3) Contents and composition of amino acids in fulvic acids: 1) The total amounts of amino acids of the different types of fulvic acids were in the order $R_p$ > B > P > A type regardless of the origin of the samples. Positive correlations were observed between total amino acids and both total carbon and ${\Delta}logK$ for soils under coniferous forest trees. For soils under deciduous forest trees, positive correlations were observed among total amino acids, total nitrogen, total humus ($H_T$), total humic acid (a), and ${\Delta}logK$, but a negative correlation existed between total amino acids and the C/N ratio. 2) The ratios among acidic, neutral, and basic amino acids of the different types were in the order $R_p$ > B > P > A type; in this respect there was no difference between the two soils. 3) In general, glycine, aspartic acid, and alanine were the major constituents in all samples of the different types, while tyrosine and methionine were present only in small amounts; virtually no arginine was measured.


Development of a Traffic Accident Prediction Model and Determination of the Risk Level at Signalized Intersection (신호교차로에서의 사고예측모형개발 및 위험수준결정 연구)

  • 홍정열; 도철웅
    • Journal of Korean Society of Transportation, v.20 no.7, pp.155-166, 2002
  • Since the 1990s, there has been an increasing number of traffic accidents at intersections, which calls for more urgent measures to ensure intersection safety. This study set out to analyze the road conditions, traffic conditions, and traffic operation conditions at signalized intersections, to identify the elements that impair safety, and to develop a traffic accident prediction model that evaluates the safety of an intersection using the correlation between those elements and accidents. In addition, the focus was on suggesting appropriate traffic safety policies that deal with the hazardous elements in advance, and on enhancing intersection safety, through the development of a traffic accident prediction model for signalized intersections. The data for the study were collected at intersections located in Wonju city from January to December 2001, and consisted of the number of accidents and the road, traffic, and traffic operation conditions at each intersection. The collected data were first analyzed statistically, and the results identified the elements that had close correlations with accidents: the area pattern, land use, bus stopping activities, parking and stopping activities on the road, total volume, turning volume, number of lanes, road width, intersection area, cycle length, sight distance, and turning radius. These elements were used in a second correlation analysis; the significance level was 95% or higher for all of them, and there were few correlations between the independent variables. The variables that affected the accident rate were the number of lanes, the turning radius, the sight distance, and the cycle length, which were used to develop a traffic accident prediction model formula considering their distribution. The accuracy of this model formula was compared with that of a general linear regression model. In addition, domestic accident statistics were investigated to analyze the distribution of accidents and to classify intersections according to risk level. Finally, the Spearman rank correlation coefficient was applied to the results to check whether the model was appropriate. As a result, the coefficient was highly significant, with a value of 0.985, and the ranking of the intersections by risk level was appropriate as well. When the actual and predicted numbers of accidents were compared in terms of risk level, they agreed for about 80% of the intersections.
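The validation step described above, fitting a prediction model on the four significant variables and then comparing predicted and observed risk rankings with the Spearman rank correlation coefficient, can be sketched as follows. The intersection data are made up, and a plain linear fit stands in for the paper's model formula, which accounts for the accident-count distribution.

```python
# Illustrative accident prediction model and rank-agreement check (not the paper's data).
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression

# Columns: number of lanes, turning radius (m), sight distance (m), cycle length (s)
X = np.array([[4, 12.0, 60.0, 120],
              [6, 15.0, 80.0, 150],
              [4, 10.0, 50.0, 100],
              [8, 20.0, 90.0, 160],
              [6, 13.0, 70.0, 140]])
accidents = np.array([5, 9, 3, 14, 8])   # observed accident counts per intersection

model = LinearRegression().fit(X, accidents)
predicted = model.predict(X)

# Agreement of the risk-level rankings between observed and predicted accident counts.
rho, p = stats.spearmanr(accidents, predicted)
print(f"Spearman rho = {rho:.3f} (p = {p:.4f})")
```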