
A Study on the Passengers liability of the Carrier on the Montreal Convention (몬트리올협약상의 항공여객운송인의 책임(Air Carrier's Liability for Passenger on Montreal Convention 1999))

  • Kim, Jong-Bok
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.23 no.2
    • /
    • pp.31-66
    • /
    • 2008
  • Until the Montreal Convention was established in 1999, the Warsaw System was the undoubtedly accepted private international air law treaty and played a major role in governing the carrier's liability in the international aviation transport industry. But the Warsaw System as a whole, though revised many times to keep pace with the rapid development of the aviation transport industry, had become complicated, tangled, and outdated. This thesis therefore aims to introduce the Montreal Convention as a new legal instrument on the air carrier's liability, especially liability toward passengers, and to analyze the issues relating to it. The Montreal Convention markedly changed the rules governing international carriage by air, modernizing and consolidating the old Warsaw System of private international air law instruments into one legal instrument. One of its most significant features is that it shifted priority to the protection of the interests of consumers, whereas the Warsaw Convention was originally intended to protect the fledgling international air transport business. Two major features the Montreal Convention adopts are the two-tier liability system and the fifth jurisdiction. In case of death or bodily injury to passengers, the Convention introduces a two-tier liability system: the first tier imposes strict liability up to 100,000 SDR, irrespective of the carrier's fault; the second tier is based on a presumption of the carrier's fault and has no limit of liability. Regarding jurisdiction, the Convention expands upon the four jurisdictions in which the carrier could be sued by adding a fifth: a passenger can bring suit in the country of his or her permanent and principal residence, provided the carrier provides services for the carriage of passengers there, either with its own aircraft or through a commercial agreement.
Other features include the advance payment, electronic ticketing, compulsory insurance, and regulation of the contracting and actual carrier. As these major features show, the Convention heralds the single biggest change in international aviation liability, and there can be no doubt it will prevail in the international aviation transport world in the future. Our government signed this Convention on 20 September 2007, and it came into effect domestically on 29 December 2007; it was thereby recognized that domestic carriers can adequately and independently manage the resulting change in liability risks. I therefore suggest that our country's aviation industry, including the newly born low-cost carriers, prepare the domestic countermeasures necessary for the enforcement of the Convention.
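The two-tier rule described above can be sketched as a simple decision procedure. This is an illustrative model only, not the Convention's text: the function name, the rebuttal flag, and the treatment of the excess are simplifying assumptions.

```python
# Minimal sketch of the Montreal Convention's two-tier passenger liability
# rule, under simplifying assumptions (single claim, damages already proven,
# fault rebuttal reduced to a boolean). Names are illustrative, not legal text.

STRICT_LIABILITY_CAP_SDR = 100_000

def carrier_liability(damages_sdr: float, carrier_rebutted_fault: bool) -> float:
    # Tier 1: owed regardless of fault, capped at 100,000 SDR
    liability = min(damages_sdr, STRICT_LIABILITY_CAP_SDR)
    # Tier 2: the excess is unlimited, but the carrier escapes it
    # by rebutting the presumption of fault
    excess = damages_sdr - liability
    if excess > 0 and not carrier_rebutted_fault:
        liability += excess
    return liability
```

For example, under this sketch a 250,000 SDR claim is owed in full when the carrier cannot rebut fault, but only the first 100,000 SDR when it can.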

  • PDF

Correlation between High-Resolution CT and Pulmonary Function Tests in Patients with Emphysema (폐기종환자에서 고해상도 CT와 폐기능검사와의 상관관계)

  • Ahn, Joong-Hyun;Park, Jeong-Mee;Ko, Seung-Hyeon;Yoon, Jong-Goo;Kwon, Soon-Seug;Kim, Young-Kyoon;Kim, Kwan-Hyoung;Moon, Hwa-Sik;Park, Sung-Hak;Song, Jeong-Sup
    • Tuberculosis and Respiratory Diseases
    • /
    • v.43 no.3
    • /
    • pp.367-376
    • /
    • 1996
  • Background: The diagnosis of emphysema during life is based on a combination of clinical, functional, and radiographic findings, but this combination is relatively insensitive and nonspecific. The development of rapid, high-resolution third- and fourth-generation CT scanners has enabled us to resolve pulmonary parenchymal abnormalities with great precision. We compared chest HRCT findings with pulmonary function tests and arterial blood gas analysis in pulmonary emphysema patients to test the ability of HRCT to quantify the degree of pulmonary emphysema. Methods: From October 1994 to October 1995, the study group consisted of 20 subjects in whom HRCT of the thorax and pulmonary function studies had been obtained at St. Mary's Hospital. The analysis was performed on scans at preselected anatomic levels and incorporated both lungs. On each HRCT slice the lung parenchyma was assessed for two aspects of emphysema: severity and extent. The five levels were graded and scored separately for the left and right lung, giving a total of 10 lung fields. The combination of severity and extent gave the degree of emphysema. We compared the HRCT quantitation of emphysema with pulmonary function tests, ABGA, CBC, and patient characteristics (age, sex, height, weight, smoking amount, etc.) in the 20 patients. Results: 1) There was a significant inverse correlation between HRCT scores for emphysema and the percentage predicted values of DLco (r = -0.68, p < 0.05), DLco/VA (r = -0.49, p < 0.05), FEV1 (r = -0.53, p < 0.05), and FVC (r = -0.47, p < 0.05). 2) There was a significant correlation between the HRCT scores and the percentage predicted values of TLC (r = 0.50, p < 0.05) and RV (r = 0.64, p < 0.05). 3) There was a significant inverse correlation between the HRCT scores and PaO2 (r = -0.48, p < 0.05) and a significant correlation with D(A-a)O2 (r = -0.48, p < 0.05), but no significant correlation between the HRCT scores and PaCO2.
4) There was no significant correlation between the HRCT scores and age, sex, height, weight, smoking amount, hemoglobin, hematocrit, or WBC counts. Conclusion: High-resolution CT provides a useful method for the early detection and quantification of emphysema during life and correlates significantly with pulmonary function tests and arterial blood gas analysis.
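The correlation figures reported above (e.g. r = -0.68 between HRCT score and %predicted DLco) are Pearson coefficients. A minimal sketch of that computation follows, on made-up values chosen only to mimic the inverse trend, not the study's data:

```python
# Pearson correlation coefficient computed from scratch.
# The data below are fabricated for illustration: emphysema scores rising
# while %predicted DLco falls, as in the inverse correlations reported.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

hrct_scores = [10, 25, 40, 55, 70, 85]   # hypothetical emphysema scores
dlco_pct    = [95, 80, 72, 60, 48, 35]   # hypothetical %predicted DLco
r = pearson_r(hrct_scores, dlco_pct)     # strongly negative
```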

  • PDF

Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone (Ensemble of Nested Dichotomies 기법을 이용한 스마트폰 가속도 센서 데이터 기반의 동작 인지)

  • Ha, Eu Tteum;Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.123-132
    • /
    • 2013
  • As smartphones are equipped with various sensors such as the accelerometer, GPS, gravity sensor, gyros, ambient light sensor, proximity sensor, and so on, there have been many research works on making use of these sensors to create valuable applications. Human activity recognition is one such application, motivated by various welfare applications such as support for the elderly, measurement of calorie consumption, analysis of lifestyles, analysis of exercise patterns, and so on. One of the challenges faced when using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors used is restricted, it is difficult to realize a highly accurate activity recognizer, or classifier, because it is hard to distinguish between subtly different activities relying on only limited information. The difficulty gets especially severe when the number of different activity classes to be distinguished is very large. In this paper, we show that a fairly accurate classifier can be built that can distinguish ten different activities by using data from only a single sensor, the smartphone accelerometer. The approach that we take to dealing with this ten-class problem is to use the ensemble of nested dichotomies (END) method that transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the binary tree, the set of all the classes is split into two subsets of classes by using a binary classifier. At a child node of the tree, a subset of classes is again split into two smaller subsets by using another binary classifier. Continuing in this way, we obtain a binary tree where each leaf node contains a single class. This binary tree can be viewed as a nested dichotomy that can make multi-class predictions.
Depending on how a set of classes is split into two subsets at each node, the final tree that we obtain can be different. Since some classes may be correlated, a particular tree may perform better than the others. However, we can hardly identify the best tree without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning and then combining the predictions made by each tree during classification. The END method is generally known to perform well even when the base learner is unable to model complex decision boundaries. As the base classifier at each node of the dichotomy, we have used another ensemble classifier called the random forest. A random forest is built by repeatedly generating a decision tree, each time with a different random subset of features, using a bootstrap sample. By combining bagging with random feature subset selection, a random forest enjoys the advantage of having more diverse ensemble members than simple bagging. As an overall result, our ensemble of nested dichotomies can be seen as a committee of committees of decision trees that can deal with a multi-class problem with high accuracy. The ten classes of activities that we distinguish in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'. The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point but also the maximum, the minimum, and the standard deviation of the vector magnitude within a time window of the last 2 seconds, etc. For experiments to compare the performance of END with those of other methods, the accelerometer data has been collected every 0.1 seconds for 2 minutes for each activity from 5 volunteers.
Among the 5,900 ($=5{\times}(60{\times}2-2)/0.1$) data points collected for each activity (the data for the first 2 seconds are discarded because they do not have time window data), 4,700 have been used for training and the rest for testing. Although 'Walking Uphill' is often confused with some other similar activities, END has been found to classify all of the ten activities with a fairly high accuracy of 98.4%. On the other hand, the accuracies achieved by a decision tree, a k-nearest neighbor, and a one-versus-rest support vector machine have been observed as 97.6%, 96.5%, and 97.6%, respectively.
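The nested-dichotomy construction described above can be sketched as a recursive random split of the class set. A real END implementation would additionally train a binary classifier (in the paper, a random forest) at each internal node; the sketch below shows only the tree structure, with a few of the paper's class names for illustration:

```python
import random

# Build one random nested dichotomy: recursively split the class set into
# two non-empty subsets until every leaf holds a single class. END would
# build many such trees and average their predictions; classifiers per node
# are omitted here (structure only, illustrative).

def build_dichotomy(classes, rng):
    if len(classes) == 1:
        return classes[0]                     # leaf: a single class
    shuffled = list(classes)
    rng.shuffle(shuffled)
    k = rng.randint(1, len(shuffled) - 1)     # random non-trivial split point
    return (build_dichotomy(shuffled[:k], rng),
            build_dichotomy(shuffled[k:], rng))

activities = ['Sitting', 'Standing', 'Walking', 'Running', 'Falling']
tree = build_dichotomy(activities, random.Random(0))
```

Each distinct tree corresponds to one member of the END committee; drawing many random trees is what makes the ensemble robust to an unlucky class split.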

The Effects of Environmental Dynamism on Supply Chain Commitment in the High-tech Industry: The Roles of Flexibility and Dependence (첨단산업의 환경동태성이 공급체인의 결속에 미치는 영향: 유연성과 의존성의 역할)

  • Kim, Sang-Deok;Ji, Seong-Goo
    • Journal of Global Scholars of Marketing Science
    • /
    • v.17 no.2
    • /
    • pp.31-54
    • /
    • 2007
  • The exchange between buyers and sellers in the industrial market is changing from short-term to long-term relationships. Long-term relationships are governed mainly by formal contracts or informal agreements, but many scholars now assert that controlling a relationship through formal contracts under environmental dynamism is inappropriate. In this case, partners will depend on each other's flexibility or interdependence. The former, flexibility, provides a general frame of reference, order, and standards against which to guide and assess appropriate behavior in dynamic and ambiguous situations, thus motivating the value-oriented performance goals shared between partners. It is based on social sacrifices, which can potentially minimize any opportunistic behaviors. The latter, interdependence, means that each firm possesses a high level of dependence in a dynamic channel relationship. When interdependence is high in magnitude and symmetric, each firm enjoys a high level of power, and the bonds between the firms should be reasonably strong. Strong shared power is likely to promote commitment because of the common interests, attention, and support found in such channel relationships. This study deals with environmental dynamism in the high-tech industry. Firms in the high-tech industry regard successfully coping with environmental changes as a key success factor. However, due to the lack of studies dealing with environmental dynamism and supply chain commitment in the high-tech industry, it is very difficult to find effective strategies to cope with them. This paper presents the results of an empirical study on the relationship between environmental dynamism and supply chain commitment in the high-tech industry. We examined the effects of consumer, competitor, and technological dynamism on supply chain commitment. Additionally, we examined the moderating effects of the flexibility and dependence of supply chains.
This study was confined to the type of high-tech industry characterized by rapid technological change and short product lifecycles. Flexibility among the firms of this fast-growing industry is more important than in any other industry, so a variety of environmental dynamism can affect a supply chain relationship. The targeted industries were electronic parts, metal products, computer, electric machine, automobile, and medical precision manufacturing. Data were collected as follows. During the survey, the researchers obtained the list of parts suppliers of two companies, N and L, with international competitiveness in the mobile phone manufacturing industry, and of the suppliers in a business relationship with S company, a semiconductor manufacturer. They were asked to respond to the survey via telephone and e-mail. During the two-month period of February-April 2006, we were able to collect data from 44 companies. The respondents were restricted to direct dealing authorities and subcontractor company (supplier) staff with at least three months of dealing experience with a manufacturer (an industrial material buyer). The measurement validation procedures included scale reliability; discriminant and convergent validity were used to validate measures. The traditionally employed reliability measurements, such as Cronbach's alpha, were also used; all the reliabilities were greater than .70. A series of exploratory factor analyses was conducted, and confirmatory factor analyses were conducted to assess the validity of our measurements. A series of chi-square difference tests was conducted to ensure discriminant validity: for each pair, we estimated two models, an unconstrained model and a constrained model, and compared the two model fits. All these tests supported discriminant validity.
Also, all items loaded significantly on their respective constructs, providing support for convergent validity. We then examined composite reliability and average variance extracted (AVE). The composite reliability of each construct was greater than .70, and the AVE of each construct was greater than .50. According to the multiple regression analysis, customer dynamism had a negative effect and competitor dynamism had a positive effect on a supplier's commitment. In addition, flexibility and dependence had significant moderating effects on customer and competitor dynamism. On the other hand, all hypotheses about technological dynamism had no significant effects on commitment; in other words, technological dynamism had no direct effect on a supplier's commitment and was not moderated by the flexibility and dependence of the supply chain. This study makes its contribution as a rare study on environmental dynamism and supply chain commitment in the field of high-tech industry. In particular, it verified the effects of three sectors of environmental dynamism on a supplier's commitment and empirically tested how these effects were moderated by flexibility and dependence. The results showed that flexibility and interdependence serve to strengthen a supplier's commitment under environmental dynamism in the high-tech industry. Thus relationship managers in the high-tech industry should make supply chain relationships flexible and interdependent. The limitations of the study are as follows. First, regarding the research setting, the study was conducted in the high-tech industry, in which the direction of change in the power balance of supply chain dyads is usually determined by manufacturers, so generalization is difficult; we need to control for the power structure between partners in a future study. Second, regarding flexibility, we treated it throughout the paper as positive, but it can also be negative, i.e., violating an agreement or moving in the wrong direction, etc. Therefore we need to investigate the multi-dimensionality of flexibility in future research.
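Moderating effects of the kind reported above are conventionally tested by adding an interaction term (e.g. dynamism x flexibility) to the regression and inspecting the simple slopes. A hypothetical sketch with invented coefficients, not the study's estimates:

```python
# Moderated regression, illustrated with made-up coefficients:
# commitment = b0 + b1*dynamism + b2*flexibility + b3*(dynamism*flexibility)
# The interaction coefficient b3 captures the moderating effect.

b0, b1, b2, b3 = 2.0, -0.4, 0.3, 0.5  # intercept, dynamism, flexibility, interaction

def commitment(dynamism: float, flexibility: float) -> float:
    return b0 + b1 * dynamism + b2 * flexibility + b3 * dynamism * flexibility

def simple_slope(flexibility: float) -> float:
    # Marginal effect of dynamism on commitment at a given flexibility level:
    # d(commitment)/d(dynamism) = b1 + b3 * flexibility
    return b1 + b3 * flexibility

# With these invented numbers, dynamism hurts commitment at low flexibility
# but not at high flexibility -- the shape of a moderating effect.
slope_low, slope_high = simple_slope(0.0), simple_slope(1.0)
```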

  • PDF

Internal Changes and Countermeasure for Performance Improvement by Separation of Prescribing and Dispensing Practice in Health Center (의약분업(醫藥分業) 실시(實施)에 따른 보건소(保健所)의 내부변화(內部變化)와 업무개선방안(業務改善方案))

  • Jeong, Myeong-Sun;Kam, Sin;Kim, Tae-Woong
    • Journal of agricultural medicine and community health
    • /
    • v.26 no.1
    • /
    • pp.19-35
    • /
    • 2001
  • This study was conducted to investigate the internal changes and the countermeasures for performance improvement brought about by the Separation of Prescribing and Dispensing Practice (SPDP) in health centers. Data were collected from two sources: performance reports from before and after the SPDP of 25 health centers in Kyongsangbuk-do and 6 health centers in Daegu City, and a self-administered questionnaire survey of 221 officials at health centers. The results of this study were summarized as follows: Twenty-four (77.4%) of 31 health centers took convenience measures for the medical treatment of citizens; these were, in order, providing maps of pharmacies, improving the health center interior, and introducing an order communication system. After the SPDP, 19.4% of health centers increased doctors and 25.8% decreased pharmacists. 58.1% of health centers showed that the number of medical treatments decreased. 96.4%, 80.6%, 80.6%, and 96.7% of health centers showed that the number of prescriptions, total medical treatment expenses, amounts paid by the insureds, and the expenses to purchase drugs, respectively, decreased. More than fifty percent (54.2%) of health centers responded that the relative importance of health works increased compared to medical treatments after the SPDP, and that the number of patients decreased compared to before the SPDP. There was a drastic reduction in the number of prescriptions, total medical treatment expenses, amounts paid by the insureds, and the expenses to purchase drugs after the SPDP. Over fifty percent (57.6%) of officers at health centers responded that the function of medical treatment should be reduced after the SPDP.
Fields requiring improvement in health centers were, in order, 'development of health works contents' (62.4%), 'rearrangement of health center personnel' (51.6%), 'priority setting for health works' (48.4%), 'restructuring the organization' (36.2%), 'quality improvement for medical services' (32.1%), and 'replanning the budgets' (23.1%). To better the image of health centers, officers replied that 'health information management' (60.7%), 'public relations for the health center' (15.8%), and 'kindness of health center officers' (15.3%) were necessary, in that order. Health center officers suggested that 'vaccination program', 'health promotion', 'maternal and child health', 'communicable disease management', and 'community health planning' were the relatively important works, in that order, performed by health centers after the SPDP. In the future, medical services in health centers should be cut down, with the SPDP as momentum, so that health centers might reestablish their functions and roles as public health organizations, but the quality of medical services must be improved. Health centers should also pay attention to improving residents' health through 'vaccination program', 'health promotion', 'maternal and child health', 'acute and chronic communicable disease management', 'community health planning', 'oral health', 'chronic degenerative disease management', etc. And there should be a differentiation of relative importance between health promotion services and medical treatment services according to the character of the area (metropolitan, city, county).

  • PDF

Sesquiterpenoids Bioconversion Analysis by Wood Rot Fungi

  • Lee, Su-Yeon;Ryu, Sun-Hwa;Choi, In-Gyu;Kim, Myungkil
    • 한국균학회소식:학술대회논문집
    • /
    • 2016.05a
    • /
    • pp.19-20
    • /
    • 2016
  • Sesquiterpenoids are defined as $C_{15}$ compounds derived from farnesyl pyrophosphate (FPP), and their complex structures are found in the tissue of many diverse plants (Degenhardt et al. 2009). FPP's long chain length and additional double bond enable its conversion to a huge range of mono-, di-, and tri-cyclic structures. A number of cyclic sesquiterpenes with alcohol, aldehyde, and ketone derivatives have key biological and medicinal properties (Fraga 1999). Fungi, such as the wood-rotting Polyporus brumalis, are excellent sources of pharmaceutically interesting natural products such as sesquiterpenoids. In this study, we investigated the biosynthesis of P. brumalis sesquiterpenoids on modified medium. Fungal suspensions of 11 white rot species were inoculated in modified medium containing $C_6H_{12}O_6$, $C_4H_{12}N_2O_6$, $KH_2PO_4$, $MgSO_4$, and $CaCl_2$ for 20 days. Cultivation was stopped by solvent extraction via separation of the mycelium. The metabolites were identified as follows: propionic acid (1), mevalonic acid lactone (2), ${\beta}$-eudesmane (3), and ${\beta}$-eudesmol (4), respectively (Figure 1). The main peaks of ${\beta}$-eudesmane and ${\beta}$-eudesmol, which were indicative of sesquiterpene structures, were consistently detected at 5, 7, 12, and 15 days. These results demonstrated the existence of terpene metabolism in the mycelium of P. brumalis. Polyporus spp. are known to generate flavor components such as methyl 2,4-dihydroxy-3,6-dimethyl benzoate; 2-hydroxy-4-methoxy-6-methyl benzoic acid; 3-hydroxy-5-methyl phenol; and 3-methoxy-2,5-dimethyl phenol in submerged cultures (Hoffmann and Esser 1978). Drimanes of sesquiterpenes were reported as metabolites from P. arcularius and shown to exhibit antimicrobial activity against Gram-positive bacteria such as Staphylococcus aureus (Fleck et al. 1996). The main metabolites of P. brumalis, ${\beta}$-eudesmol and ${\beta}$-eudesmane, were categorized as eudesmane-type sesquiterpene structures. The eudesmane skeleton could be biosynthesized from FPP-derived IPP, and approximately 1,000 such structures have been identified in plants as essential oils. The biosynthesis of eudesmol by P. brumalis may thus be an important tool for the production of useful natural compounds, as presumed from its identified potent bioactivity in plants. Essential oils comprising eudesmane-type sesquiterpenoids have been previously and extensively researched (Wu et al. 2006). ${\beta}$-Eudesmol is a well-known and important eudesmane alcohol with an anticholinergic effect in the vascular endothelium (Tsuneki et al. 2005). Additionally, recent studies demonstrated that ${\beta}$-eudesmol acts as a channel blocker for nicotinic acetylcholine receptors at the neuromuscular junction, and that it can inhibit angiogenesis in vitro and in vivo by blocking the mitogen-activated protein kinase (MAPK) signaling pathway (Seo et al. 2011). Variation of nutrients was conducted to determine an optimum condition for the biosynthesis of sesquiterpenes by P. brumalis. Genes encoding terpene synthases, which are crucial to the terpene synthesis pathway, generally respond to environmental factors such as pH, temperature, and available nutrients (Hoffmeister and Keller 2007, Yu and Keller 2005). Calvo et al. described the effect of the major nutrients, carbon and nitrogen, on the synthesis of secondary metabolites (Calvo et al. 2002). P. brumalis did not synthesize sesquiterpenes under all growth conditions. The differences in metabolites observed between P. brumalis grown in PDB and in modified medium highlighted the potential effect of inorganic sources such as $C_4H_{12}N_2O_6$, $KH_2PO_4$, $MgSO_4$, and $CaCl_2$ on sesquiterpene synthesis. ${\beta}$-Eudesmol was apparent during cultivation except when P. brumalis was grown on $MgSO_4$-free medium.
These results demonstrated that $MgSO_4$ can specifically control the biosynthesis of ${\beta}$-eudesmol. Magnesium has been reported as a cofactor that binds to sesquiterpene synthase (Agger et al. 2008). Specifically, the $Mg^{2+}$ ions bind to two conserved metal-binding motifs. These metal ions complex with the substrate pyrophosphate, thereby promoting the ionization of the leaving group of FPP and resulting in the generation of a highly reactive allylic cation. The effect of the magnesium source on sesquiterpene biosynthesis was also identified via analysis of the concentration of total carbohydrates. Our current study offered further insight that fungal sesquiterpene biosynthesis can be controlled by nutrients. To profile the metabolites of P. brumalis, the cultures were extracted based on the growth curve. Although metabolites were produced during mycelial growth, it was difficult to detect significant changes in metabolite production, especially for those at low concentrations. These compounds may be of interest in understanding their synthetic mechanisms in P. brumalis. The synthesis of terpene compounds began during the growth phase at day 9; sesquiterpene synthesis occurred after growth was complete. At day 9, drimenol, farnesol, and mevalonic acid lactone were identified. Mevalonic acid lactone is the precursor of the mevalonate pathway, and particularly, it is a precursor for a number of biologically important lipids, including cholesterol hormones (Buckley et al. 2002). Farnesol is the precursor of sesquiterpenoids. Drimenol compounds, bicyclic sesquiterpene alcohols, can be synthesized from trans,trans-farnesol via cyclization and rearrangement (Polovinka et al. 1994). They have also been identified in the basidiomycete Lentinus lepideus as secondary metabolites. After 12 days, in the growth phase, ${\beta}$-elemene, caryophyllene, ${\delta}$-cadinene, and eudesmane were detected along with ${\beta}$-eudesmol.
The data showed the synthesis of sesquiterpene hydrocarbons with bi-cyclic structures. These compounds can be synthesized from FPP by cyclization. Cyclic terpenoids are synthesized through the formation of a carbon skeleton from linear precursors by terpene cyclase, which is followed by chemical modification by oxidation, reduction, methylation, etc. Sesquiterpene cyclase is a key branch-point enzyme that catalyzes the complex intermolecular cyclization of the linear prenyl diphosphate into cyclic hydrocarbons (Toyomasu et al. 2007). After 20 days in stationary phase, the oxygenated structures eudesmol, elemol, and caryophyllene oxide were detected. Thus, after growth, sesquiterpenes were identified. Per these results, we showed that terpene metabolism in wood-rotting fungi occurs in the stationary phase. We also showed that such metabolism can be controlled by magnesium supplementation in the growth medium. In conclusion, we identified P. brumalis as a wood-rotting fungus that can produce sesquiterpenes. To mechanistically understand eudesmane-type sesquiterpene biosynthesis in P. brumalis, further research into the genes regulating the dynamics of such biosynthesis is warranted.

  • PDF

Clinical Study of Corrosive Injury of the Esophagus (식도부식증의 임상적 고찰)

  • 박철원;송기준;이형석;안경성;김선곤
    • Proceedings of the KOR-BRONCHOESO Conference
    • /
    • 1981.05a
    • /
    • pp.5.3-6
    • /
    • 1981
  • There are many kinds of esophageal corrosive agents, such as sodium hydroxide, acetic acid, hydrochloric acid, etc. Esophageal burns due to the above chemical agents have been decreasing recently, but still many patients visit the hospital after swallowing corrosive agents, either for the purpose of suicide or accidentally. In the treatment of corrosive injury of the esophagus, prevention of esophageal stricture is the key point, and various methods are currently used for the treatment of corrosive esophagitis and the prevention of esophageal stricture. 51 cases of corrosive injury of the esophagus admitted and treated at the Dept. of Otolaryngology, Han Yang University Hospital during the past 9 years (from May 1972 to Dec. 1980) were evaluated, and we report the results on age distribution, sex incidence, monthly distribution, cause of swallowing, agents swallowed, arrival time at hospital after swallowing, changes in the oral and pharyngeal mucosa, laboratory findings, emergency treatment and treatment during admission, treatment follow-up results, and complications, with a review of the literature. The following results were obtained: 1. Female patients, 27 cases (52.9%), outnumbered male patients, 24 cases (47.1%), a ratio of 1.13:1. 2. Age distribution showed a predilection for ages 21-30 with 20 cases (39.2%); ages 11-20 with 11 cases (21.6%), 31-40 with 7 cases (13.7%), and over 50 with 7 cases (13.7%) followed. 3. Monthly distribution showed a predilection for March with 8 cases (15.7%); April and July with 7 cases (13.7%) each, September with 6 cases (11.8%), and October with 5 cases (9.8%) followed. 4. Suicide was the most common cause of swallowing, with 40 cases (78.4%), versus accidental swallowing with 11 cases (21.6%). 5. Acetic acid was the most common agent swallowed, with 24 cases (47.0%); hydrochloric acid with 11 cases (21.5%), lye with 8 cases (15.7%), and iodine with 2 cases (3.9%) followed. 6. Arrival time at the hospital after swallowing showed a predilection for within 12 hours, with 42 cases (82.4%); from 12 to 24 hours, with 4 cases (7.8%), was next. 7. Moderate change with injection and swelling was the most prevalent change in the oral and pharyngeal mucosa, with 20 cases (39.2%); severe cases with ulceration, 18 cases (35.3%), and mild cases with injection, 10 cases (19.6%), followed. 8. Leukocytosis was seen in 40 cases (78.4%), and increased Hct. was seen in 31 cases (60.8%). On urinalysis, 14 cases (27.5%) showed a specific gravity over 1.030; proteinuria was seen in 25 cases (49.0%), glycosuria in 5 cases (9.8%), and hematuria in 6 cases (11.8%). 9. Gastric lavage was done in 30 cases (58.8%) as emergency treatment, and in 3 cases (5.9%) tracheostomy was done to keep the airway. 10. As methods of treatment during admission, L-tube insertion was done in 50 cases (98.0%), antibiotics were given to 49 cases (96.1%), and steroid and antacid were given to 46 cases (90.2%). 11. 36 cases (70.6%) were in favorable condition after proper treatment, but 2 cases (3.9%) expired during admission, 4 cases (7.8%) showed esophageal stricture in spite of treatment, and 1 case (2.0%) showed pyloric stenosis. 12. Complications were observed in 8 cases (17.7%): renal failure (4 cases), aspiration pneumonia (2 cases), upper GI bleeding (1 case), and diabetic coma (1 case), in order of frequency.

  • PDF

Changes in blood pressure and determinants of blood pressure level and change in Korean adolescents (성장기 청소년의 혈압변화와 결정요인)

  • Suh, Il;Nam, Chung-Mo;Jee, Sun-Ha;Kim, Suk-Il;Kim, Young-Ok;Kim, Sung-Soon;Shim, Won-Heum;Kim, Chun-Bae;Lee, Kang-Hee;Ha, Jong-Won;Kang, Hyung-Gon;Oh, Kyung-Won
    • Journal of Preventive Medicine and Public Health
    • /
    • v.30 no.2 s.57
    • /
    • pp.308-326
    • /
    • 1997
  • Many studies have led to the notion that essential hypertension in adults is the result of a process that starts early in life: investigation of blood pressure(BP) in children and adolescents can therefore contribute to knowledge of the etiology of the condition. A unique longitudinal study on BP in Korea, known as Kangwha Children's Blood Pressure(KCBP) Study was initiated in 1986 to investigate changes in BP in children. This study is a part of the KCBP study. The purposes of this study are to show changes in BP and to determine factors affecting to BP level and change in Korean adolescents during age period 12 to 16 years. A total of 710 students(335 males, 375 females) who were in the first grade at junior high school(12 years old) in 1992 in Kangwha County, Korea have been followed to measure BP and related factors(anthropometric, serologic and dietary factors) annually up to 1996. A total of 562 students(242 males, 320 females) completed all five annual examinations. The main results are as follows: 1. For males, mean systolic and diastolic BP at age 12 and 16 years old were 108.7 mmHg and 118.1 mmHg(systolic), and 69.5 mmHg and 73.4 mmHg(diastolic), respectively. BP level was the highest when students were at 15 years old. For females, mean systolic and diastolic BP at age 12 and 16 years were 114.4 mmHg and 113.5 mmHg(systolic) and 75.2 mmHg and 72.1 mmHg(diastolic), respectively. BP level reached the highest point when they were 13-14 years old. 2. Anthropometric variables(height, weight and body mass index, etc) increased constantly during the study period for males. However, the rate of increase was decreased for females after age 15 years. Serum total cholesterol decreased and triglyceride increased according to age for males, but they did not show any significant trend fer females. Total fat intake increased at age 16 years compared with that at age 14 years. 
Compositions of carbohydrate, protein and fat among total energy intake were 66.2:12.0:19.4 and 64.1:12.1:21.8 at age 14 and 16 years, respectively. 3. Most anthropometric measures, especially height, body mass index(BMI) and triceps skinfold thickness, showed a significant correlation with BP level in both sexes. When BMI was adjusted, serum total cholesterol showed a significant negative correlation with systolic BP at age 12 years in males, but at age 14 years the direction of correlation changed to positive. In females, serum total cholesterol was negatively correlated with diastolic BP at age 15 and 16 years. Triglyceride and creatinine showed positive correlations with systolic and diastolic BP in males, but they did not show any correlation in females. There were no consistent findings between nutrient intake and BP level. However, protein intake correlated positively with diastolic BP level in males. 4. Blood pressure change was positively associated with changes in BMI and serum total cholesterol in both sexes. Change in creatinine was associated with BP change positively in males and negatively in females. Students whose sodium intake was high showed higher systolic and diastolic BP in males, and students whose total fat intake was high maintained a lower level of BP in females. The major determinant of BP change was BMI in both sexes.

  • PDF

APPLICATION OF FUZZY SET THEORY IN SAFEGUARDS

  • Fattah, A.;Nishiwaki, Y.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1051-1054
    • /
    • 1993
  • The International Atomic Energy Agency's Statute in Article III.A.5 allows it “to establish and administer safeguards designed to ensure that special fissionable and other materials, services, equipment, facilities and information made available by the Agency or at its request or under its supervision or control are not used in such a way as to further any military purpose; and to apply safeguards, at the request of the parties, to any bilateral or multilateral arrangement, or at the request of a State, to any of that State's activities in the field of atomic energy”. Safeguards are essentially a technical means of verifying the fulfilment of political obligations undertaken by States and given legal force in international agreements relating to the peaceful uses of nuclear energy. The main political objectives are: to assure the international community that States are complying with their non-proliferation and other peaceful undertakings; and to deter (a) the diversion of safeguarded nuclear materials to the production of nuclear explosives or for military purposes and (b) the misuse of safeguarded facilities with the aim of producing unsafeguarded nuclear material. It is clear that no international safeguards system can physically prevent diversion. The IAEA safeguards system is basically a verification measure designed to provide assurance in those cases in which diversion has not occurred. Verification is accomplished by two basic means: material accountancy, and containment and surveillance measures. Nuclear material accountancy is the fundamental IAEA safeguards mechanism, while containment and surveillance serve as important complementary measures. Material accountancy refers to a collection of measurements and other determinations which enable the State and the Agency to maintain a current picture of the location and movement of nuclear material into and out of material balance areas, i.e. areas where all material entering or leaving is measurable. 
A containment measure is one that is designed by taking advantage of structural characteristics, such as containers, tanks or pipes, etc., to establish the physical integrity of an area or item by preventing the undetected movement of nuclear material or equipment. Such measures involve the application of tamper-indicating or surveillance devices. Surveillance refers to both human and instrumental observation aimed at indicating the movement of nuclear material. The verification process consists of three over-lapping elements: (a) Provision by the State of information such as - design information describing nuclear installations; - accounting reports listing nuclear material inventories, receipts and shipments; - documents amplifying and clarifying reports, as applicable; - notification of international transfers of nuclear material. (b) Collection by the IAEA of information through inspection activities such as - verification of design information - examination of records and reports - measurement of nuclear material - examination of containment and surveillance measures - follow-up activities in case of unusual findings. (c) Evaluation of the information provided by the State and of that collected by inspectors to determine the completeness, accuracy and validity of the information provided by the State and to resolve any anomalies and discrepancies. To design an effective verification system, one must identify possible ways and means by which nuclear material could be diverted from peaceful uses, including means to conceal such diversions. These theoretical ways and means, which have become known as diversion strategies, are used as one of the basic inputs for the development of safeguards procedures, equipment and instrumentation. 
For analysis of implementation strategy purposes, it is assumed that non-compliance cannot be excluded a priori and that consequently there is a low but non-zero probability that a diversion could be attempted in all safeguards situations. An important element of diversion strategies is the identification of various possible diversion paths: the amount, type and location of nuclear material involved, the physical route and conversion of the material that may take place, the rate of removal and concealment methods, as appropriate. With regard to the physical route and conversion of nuclear material, the following main categories may be considered: - unreported removal of nuclear material from an installation or during transit - unreported introduction of nuclear material into an installation - unreported transfer of nuclear material from one material balance area to another - unreported production of nuclear material, e.g. enrichment of uranium or production of plutonium - undeclared uses of the material within the installation. With respect to the amount of nuclear material that might be diverted in a given time (the diversion rate), the continuum between the following two limiting cases is considered: - one significant quantity or more in a short time, often known as abrupt diversion; and - one significant quantity or more per year, for example, by accumulation of smaller amounts each time to add up to a significant quantity over a period of one year, often called protracted diversion. Concealment methods may include: - restriction of access of inspectors - falsification of records, reports and other material balance areas - replacement of nuclear material, e.g. use of dummy objects - falsification of measurements or of their evaluation - interference with IAEA installed equipment. As a result of diversion and its concealment or other actions, anomalies will occur. 
All reasonable diversion routes, scenarios/strategies and concealment methods have to be taken into account in designing safeguards implementation strategies so as to provide sufficient opportunities for the IAEA to observe such anomalies. The safeguards approach for each facility will make a different use of these procedures, equipment and instrumentation according to the various diversion strategies which could be applicable to that facility and according to the detection and inspection goals which are applied. Postulated pathway sets of scenarios comprise those elements of diversion strategies which might be carried out at a facility or across a State's fuel cycle with declared or undeclared activities. All such factors, however, contain a degree of fuzziness that needs human judgment to reach the ultimate conclusion that all material is being used for peaceful purposes. Safeguards have traditionally been based on verification of declared material and facilities using material accountancy as a fundamental measure. The strength of material accountancy lies in the fact that it allows any diversion to be detected independently of the diversion route taken. Material accountancy detects a diversion only after it has actually happened; it is thus powerless to physically prevent a diversion and can deter State authorities from contemplating one only through the risk of early detection. Recently the IAEA has been faced with new challenges. To deal with these, various measures are being considered to strengthen the safeguards system, such as enhanced assessment of the completeness of the State's initial declaration of nuclear material and installations under its jurisdiction, and enhanced monitoring and analysis of open information that may indicate inconsistencies with the State's safeguards obligations. 
Precise information vital for such enhanced assessments and analyses is normally not available or, if available, would be difficult and expensive to collect. Above all, a realistic appraisal of the truth requires sound human judgment.

  • PDF

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy logic based Control Theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common practice [1,2,8,9,10,11] to limit the membership functions either to those having triangular or trapezoidal shape, or to other predefined shapes. These kinds of functions are able to cover a large spectrum of applications with a limited usage of memory, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation on the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions on such points [3,10,14,15]. Such a solution provides a satisfying computational speed, a very high precision of definitions and gives the users the opportunity to choose membership functions of any shape. However, significant memory waste can also occur. It is indeed possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases common points among fuzzy sets, i.e. points with non-null membership values, are very few. 
More specifically, in many applications, for each element u of U, there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on such hypotheses. Moreover, we use a technique that, while not restricting the shapes of membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. The above term set has a universe of discourse with 128 elements (so as to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and it will be represented by the memory rows. The length of a word of memory is defined by: Length = nfm*(dm(m)+dm(fm)), where nfm is the maximum number of non-null values on any element of the universe of discourse, dm(m) is the dimension of the values of the membership function m, and dm(fm) is the dimension of the word representing the index of the membership function. In our case, then, Length = 3*(5+3) = 24. The memory dimension is therefore 128*24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of each element. The fuzzy sets' word dimension is 8*5 bits; therefore, the dimension of the memory would have been 128*40 bits. Consistently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets. 
Focusing on the elements 32, 64 and 96 of the universe of discourse, they will be memorized as follows: The computation of the rule weights is done by comparing those bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value then the corresponding non-null weight derived from the rule is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is in this way reduced, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to the performance of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value in each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization process of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of such parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: Users are not restricted to membership functions with specific shapes. The number of fuzzy sets and the resolution of the vertical axis have a very small influence in increasing memory space. Weight computations are done by a combinatorial network and therefore the time performance of the system is equivalent to that of the vectorial method. 
The number of non-null membership values on any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain a good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
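The memory sizing described in this abstract can be checked numerically. Below is a minimal Python sketch, assuming the reconstructed word-length formula Length = nfm*(dm(m)+dm(fm)) and the term-set parameters quoted in the abstract; all function and variable names are illustrative, not from the paper.

```python
# Sketch of the sparse vs. vectorial membership-memory sizing from the abstract.
# Assumption: word length for the sparse scheme is nfm * (dm(m) + dm(fm)),
# i.e. each universe element stores at most nfm (value, set-index) pairs.

def sparse_word_bits(nfm: int, dm_m: int, dm_fm: int) -> int:
    """Bits per memory row when only the at-most-nfm non-null membership
    values are stored, each paired with its fuzzy-set index."""
    return nfm * (dm_m + dm_fm)

def vectorial_word_bits(n_sets: int, dm_m: int) -> int:
    """Bits per row when a membership value is stored for every fuzzy set."""
    return n_sets * dm_m

# Term set from the abstract: 128-element universe of discourse, 8 fuzzy sets,
# 32 truth levels (5 bits per value), 3-bit set index, at most 3 non-null values.
U_SIZE, N_SETS, DM_M, DM_FM, NFM = 128, 8, 5, 3, 3

sparse_total = U_SIZE * sparse_word_bits(NFM, DM_M, DM_FM)    # 128 * 24 bits
vectorial_total = U_SIZE * vectorial_word_bits(N_SETS, DM_M)  # 128 * 40 bits
print(sparse_total, vectorial_total)  # 3072 5120
```

This reproduces the abstract's figures: a 24-bit word (128*24-bit memory) for the sparse scheme against a 40-bit word (128*40-bit memory) for full vectorial memorization.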

  • PDF