• Title/Summary/Keyword: Rational number

Search Results: 401 (processing time 0.029 seconds)

Spatiotemporal Feature-based LSTM-MLP Model for Predicting Traffic Accident Severity (시공간 특성 기반 LSTM-MLP 모델을 활용한 교통사고 위험도 예측 연구)

  • Hyeon-Jin Jung;Ji-Woong Yang;Ellen J. Hong
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.24 no.4
    • /
    • pp.178-185
    • /
    • 2023
  • Rapid urbanization and advances in technology have led to a surge in the number of automobiles, resulting in frequent traffic accidents and, consequently, increased human casualties and economic losses. There is therefore a need for technology that can predict the risk of traffic accidents in order to prevent them and minimize the damage they cause. Traffic accidents occur due to various factors, including traffic congestion, the traffic environment, and road conditions; these factors give traffic accidents spatiotemporal characteristics. This paper analyzes traffic accident data to understand the main characteristics of traffic accidents and reconstructs the data in a time-series format. An LSTM-MLP based model that captures these spatiotemporal characteristics well was then developed and used for traffic accident prediction. Experiments showed that the proposed model predicts the risk of traffic accidents more rationally and accurately than existing models. The traffic accident risk prediction model suggested in this paper can be applied to systems capable of real-time monitoring of road conditions and environments, such as navigation systems, and is expected to enhance the safety of road users and minimize the social costs associated with traffic accidents.

Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

  • Park, Hyun-Jung;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.19-43
    • /
    • 2014
  • The Pareto principle, also known as the 80-20 rule, states that roughly 80% of the effects come from 20% of the causes for many events, including natural phenomena. It has been recognized as a golden rule in business, widely applied through findings such as 20 percent of customers accounting for 80 percent of total sales. On the other hand, the Long Tail theory, pointing out that "the trivial many" produce more value than "the vital few," has gained popularity in recent times with a tremendous reduction of distribution and inventory costs through the development of ICT (Information and Communication Technology). This study started with a view to illuminating how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration. The importance of virtual knowledge collaboration is soaring in this era of globalization and virtualization transcending geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and to resolve the social dilemma caused by the fact that rational individuals are likely to consume rather than contribute knowledge. Knowledge collaboration can be defined as the creation of knowledge not only by sharing knowledge, but also by transforming and integrating such knowledge. From this perspective of knowledge collaboration, the relative distribution of knowledge sharing among participants can count as much as the absolute amount of individual knowledge sharing. In particular, whether a greater contribution by the upper 20 percent of participants in knowledge sharing enhances the efficiency of overall knowledge collaboration is an issue of interest. This study deals with the effect of this sort of knowledge-sharing distribution on the efficiency of knowledge collaboration and is extended to reflect work characteristics. 
All analyses were conducted on actual behavioral data instead of self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of the editors of 2,978 English Wikipedia featured articles, the best quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of knowledge contributions made by the upper 20 percent of participants to the total number of knowledge contributions made by all participants of an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents the inequality of income among a group of people, was applied to reveal the effect of inequality of knowledge contribution. Hypotheses were set up based on the assumption that a higher ratio of knowledge contribution by more highly motivated participants will lead to higher collaboration efficiency, but that if the ratio gets too high, collaboration efficiency will deteriorate because overall informational diversity is threatened and knowledge contribution by less motivated participants is discouraged. Cox regression models were formulated for each of the focal variables, the Pareto ratio and the Gini coefficient, with seven control variables such as the number of editors involved in an article, the average time between successive edits of an article, and the number of sections a featured article has. The dependent variable of the Cox models is the time from article initiation to promotion to the featured-article level, indicating the efficiency of knowledge collaboration. To examine whether the effects of the focal variables vary depending on the characteristics of a group task, we classified the 2,978 featured articles into two categories: academic and non-academic. Academic articles are those that refer to at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal. 
We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The analysis results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, this curvilinear effect of the Pareto ratio and of the inequality of knowledge sharing on collaboration efficiency is more pronounced for more academic tasks in an online community.
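The two focal variables above can be computed directly from per-editor contribution counts. A minimal sketch (function and variable names are ours, not the paper's), assuming contributions are given as a list of per-editor edit counts:

```python
def pareto_ratio(contribs):
    """Share of total contributions made by the top 20% of contributors."""
    counts = sorted(contribs, reverse=True)
    top_n = max(1, round(0.2 * len(counts)))
    return sum(counts[:top_n]) / sum(counts)

def gini(contribs):
    """Gini coefficient of the contribution distribution (0 = perfectly equal)."""
    counts = sorted(contribs)
    n = len(counts)
    total = sum(counts)
    # Standard discrete formula: G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n
    weighted = sum((i + 1) * x for i, x in enumerate(counts))
    return 2 * weighted / (n * total) - (n + 1) / n
```

For a perfectly equal distribution the Gini coefficient is 0; concentrating all edits in a single editor pushes it toward its maximum of (n-1)/n.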

A Study on the Risk Factors for Maternal and Child Health Care Program with Emphasis on Developing the Risk Score System (모자건강관리를 위한 위험요인별 감별평점분류기준 개발에 관한 연구)

  • 이광옥
    • Journal of Korean Academy of Nursing
    • /
    • v.13 no.1
    • /
    • pp.7-21
    • /
    • 1983
  • For the flexible and rational distribution of limited existing health resources based on measurements of individual risk, the so-called Risk Approach has been proposed by the World Health Organization as a managerial tool in maternal and child health care programs. This approach, in principle, requires developing a technique by which we can measure the degree of risk or discriminate the future outcomes of pregnancy on the basis of prior information obtainable at prenatal care delivery settings. Numerous recent studies have focused on identifying relevant risk factors as the prior information and on defining the adverse outcomes of pregnancy to be discriminated, and have also explored how to develop a scoring system of risk factors for the quantitative assessment of those factors as determinants of pregnancy outcomes. Once the scoring system is established, a technique for classifying patients into those with normal and those with adverse outcomes can easily be developed. The scoring system should be developed to meet four basic requirements: 1) easy to construct; 2) easy to use; 3) theoretically sound; 4) valid. In searching for a feasible methodology that meets these requirements, the author applied the "Likelihood Method," one of the well-known principles of statistical analysis, to develop such a scoring system according to the following process. Step 1. Classify the patients into four groups: Group $A_1$: with adverse outcomes on the fetal (neonatal) side only; Group $A_2$: with adverse outcomes on the maternal side only; Group $A_3$: with adverse outcomes on both the maternal and fetal (neonatal) sides; Group B: with normal outcomes. Step 2. Construct the marginal tabulation of the distribution of risk factors for each group. Step 3. For the calculation of risk scores, take the logarithmic transformation of the relative proportions of the distribution and round them off to integers. Step 4. 
Test the validity of the score chart. A total of 2,282 maternity records registered during the period January 1, 1982 to December 31, 1982 at Ewha Womans University Hospital were used for this study, and the "Questionnaire for Maternity Record for Prenatal and Intrapartum High Risk Screening" developed by the Korean Institute for Population and Health was used to rearrange the information on the records into an easily analyzable form. The findings of the study are summarized as follows. 1) The risk score chart constructed on the basis of the "Likelihood Method" is presented in Table 4 in the main text. 2) From the analysis of the risk score chart, a total of 24 risk factors were identified as having significant predictive power for discriminating pregnancy outcomes into the four groups defined above. They are: (1) age, (2) marital status, (3) age at first pregnancy, (4) medical insurance, (5) number of pregnancies, (6) history of Cesarean sections, (7) number of living children, (8) history of premature infants, (9) history of overweight newborns, (10) history of congenital anomalies, (11) history of multiple pregnancies, (12) history of abnormal presentation, (13) history of obstetric abnormalities, (14) past illness, (15) hemoglobin level, (16) blood pressure, (17) heart status, (18) general appearance, (19) edema status, (20) result of abdominal examination, (21) cervix status, (22) pelvis status, (23) chief complaints, (24) reasons for examination. 3) The validity of the score chart turned out to be as follows: a) sensitivity: Group $A_1$: 0.75; Group $A_2$: 0.78; Group $A_3$: 0.92; all combined: 0.85; b) specificity: 0.68. 4) The diagnosabilities of the score chart for a set of hypothetical prevalences of adverse outcomes were calculated as follows (the sensitivity "for all combined" was used). Hypothetical prevalence: 5%, 10%, 20%, 30%, 40%, 50%, 60%; diagnosability: 12%, 23%, 40%, 53%, 64%, 75%, 80%.
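The "diagnosability" figures in item 4) appear consistent with the positive predictive value obtained from Bayes' rule using the combined sensitivity (0.85) and specificity (0.68). The paper's exact computation is not shown, so the following is an illustrative reconstruction (the function name is ours):

```python
def diagnosability(sensitivity, specificity, prevalence):
    """Positive predictive value: P(adverse outcome | positive screen)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Recompute the hypothetical-prevalence row from the abstract
for p in (0.05, 0.10, 0.20, 0.30, 0.40, 0.50, 0.60):
    print(f"prevalence {p:.0%} -> diagnosability {diagnosability(0.85, 0.68, p):.0%}")
```

With these inputs the computed values track the reported row (12%, 23%, 40%, ...) to within a percentage point or two.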


Limit Pricing by Noncooperative Oligopolists (과점산업(寡占産業)에서의 진입제한가격(進入制限價格))

  • Nam, Il-chong
    • KDI Journal of Economic Policy
    • /
    • v.12 no.1
    • /
    • pp.127-148
    • /
    • 1990
  • A Milgrom-Roberts style signalling model of limit pricing is developed to analyze the possibility and the scope of limit pricing in general, noncooperative oligopolies. The model contains multiple incumbent firms facing a potential entrant and assumes an information asymmetry between incumbents and the potential entrant about the market demand. There are two periods in the model. In period 1, n incumbent firms simultaneously and noncooperatively choose quantities. At the end of period 1, the potential entrant observes the market price and makes an entry decision. In period 2, depending on the entry decision of the entrant, n or (n+1) firms choose quantities again before the game terminates. Since the choice of incumbent firms in period 1 depends on their information about demand, the market price in period 1 conveys information about the market demand. Thus, there is a systematic link between the market price and the profitability of entry. Using Bayes-Nash equilibrium as the solution concept, we find that there exist demand conditions under which incumbent firms will limit price. In symmetric equilibria, incumbent firms each produce an output that is greater than the Cournot output and induce a price that is below the Cournot price. In doing so, each incumbent firm refrains from maximizing short-run profit and supplies a public good: entry deterrence. The reason that entry is deterred by such a reduced price is that the price conveys information about the demand of the industry that is unfavorable to the entrant. This establishes the possibility of limit pricing by noncooperative oligopolists in a fully rational setting, and also generalizes the result of Milgrom and Roberts to general oligopolies, confirming Bain's intuition. Limit pricing by incumbents as explained above can be interpreted as a form of credible collusion in which each firm voluntarily deviates from myopic optimization in order to deter entry using their superior information. 
This type of implicit collusion differs from Folk-theorem type collusions in many ways and suggests that collusion can be credible even in finite games as long as there is information asymmetry. Another important result is that as the number of incumbent firms approaches infinity, that is, as the industry approaches a competitive one, the probability that limit pricing occurs converges to zero and the probability of entry converges to that under complete information. This limit result confirms the intuition that as the number of agents sharing the same private information increases, the value of the private information decreases and the probability that the information gets revealed increases. It also supports the conventional belief that there is no entry problem in a competitive market. Considering that limit pricing is generally believed to occur at an early stage of an industry and that many industries in Korea are oligopolies in their infant stages, the theoretical results of this paper suggest that we should pay attention to the possibility of implicit collusion by incumbent firms aimed at deterring new entry using superior information. The long-term loss to the Korean economy from limit pricing can be very large if the industry in question is part of the world market and the domestic potential entrant whose entry is deterred could have developed into a competitor in the world market. In this case, the long-term loss to the Korean economy should include the lost opportunity in the world market in addition to the domestic long-run welfare loss.


Study on Fire Hazard Analysis along with Heater Use in the Public Use Facility Traditional Market in Winter (겨울철 다중이용시설인 전통재래시장 난방기구 사용에 따른 화재 위험성 분석에 관한 연구)

  • Ko, Jaesun
    • Journal of the Society of Disaster Information
    • /
    • v.10 no.4
    • /
    • pp.583-597
    • /
    • 2014
  • Fires caused by heaters have as many causes as there are types of heater, and heater fires in traditional markets recur every year, producing substantial casualties and property losses. Fires due to heaters are rare in most residential facilities such as houses and apartments, because these are equipped with heating boilers; however, the restaurants, stores, and offices of markets, as well as sports centers, factories, and workplaces, still use heaters such as oil stoves and electric heaters, and so remain exposed to fire hazards. An investigation of the number of fires by heater type found that they occur, in decreasing order of frequency, with home boilers, charcoal stoves, oil stoves, gas heaters/stoves, and electric stoves/heaters, while the number of fires per casualty ranked in the order of gas heaters/stoves, oil heaters/stoves, electric heaters/stoves, and briquette/coal heaters. Gas- and oil-related heaters were found to have low fire frequency but high fire intensity. Therefore, this research aimed at a more scientific approach to fire inspection and identification by reenacting and reviewing the possibility of fire outbreak through contact with combustibles and through conduction, under both normal and abnormal conditions, in terms of ignition hazard: minimum ignition temperature, degree of carbonization, and the associated heat flux. The tests focused on the oil stoves and electric stoves that are still frequently used in the traditional market, a public use facility, and that account for the most actual fires. The reenactment tests showed that the ignition hazard is very small for both test objects (oil stove and electric stove) as long as sufficient heat-storage conditions do not arise, although carbonization was found to progress in each part respectively. 
Ultimately, the transition to fire is ignition due to heat storage, and ignition was found to occur when the minimum heat-storage temperature at the fire location exceeds $500^{\circ}C$. In particular, the quartz pipe, the heating element of the electric stove, heats rapidly to over $600^{\circ}C$ within a very short time (10 s), producing a heat flux of $6.26kW/m^2$, which was analyzed to be sufficient to damage thermal PVC cable and to cause second-degree burns on the human body. The researcher also recognized that, besides the temperature condition, the change of temperature with distance, governed by the geometric view factor and the fire load, which describe the decrease of heat, is an important variable to be considered. Therefore, the researcher considers that a manual for careful fire inspection and identification of such cases is necessary, and expects that the scientific and rational efforts of this research can contribute to establishing such a manual and a theoretical basis for future fire inspection and identification.
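The reported heat flux can be put in context with the standard radiant-flux relation q = ε·F·σ·T⁴. The sketch below is only an order-of-magnitude illustration; the emissivity and view factor are hypothetical, not measured values from the study:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiant_flux(surface_temp_c, view_factor=1.0, emissivity=1.0):
    """Radiant heat flux q = emissivity * F * sigma * T^4, in W/m^2."""
    t_kelvin = surface_temp_c + 273.15
    return emissivity * view_factor * SIGMA * t_kelvin ** 4

# A black-body surface at 600 C emits roughly 33 kW/m^2; the 6.26 kW/m^2
# cited above would then correspond to an effective view factor of about
# 0.19 under these idealized assumptions.
q_surface = radiant_flux(600)
effective_f = 6.26e3 / q_surface
```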

A Study on Light Condition between Pinus densiflora and Quercus variabilis Natural Mixed Forest Stands by Using the Hemispherical Photo Method (수관사진법을 이용한 소나무-굴참나무 천연림에 있어서의 광 조건 연구)

  • Chung Dong-Jun;Kim Young-Chai
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.1 no.2
    • /
    • pp.127-134
    • /
    • 1999
  • This study was performed to obtain basic data for a rational silvicultural tending plan. It takes the widely distributed pine-oak mixed stands and the corresponding pure stands of the central region as its object, and comparatively analyzes light conditions with respect to stand parameters and natural regeneration on each slope aspect (north, west, and south) in the central part of South Korea. Sample plots for the pine-oak mixed stand and the pine and oak pure stands were established on southern, northern, and western slopes based upon the site and growth conditions of each slope. A sample plot was a circle of 0.05 ha with a diameter of 25.24 m and contained between 30 and 40 trees. A total of 23 sample plots were established: 9 pure pine stands, 8 pine-oak mixed stands, and 6 pure oak stands across the lower, middle, and upper parts of the slopes. Relative light intensity within a stand was measured by a crown-photo system with a fish-eye lens ($180^{\circ}$) and by comparing each plot with open ground using a PAR sensor. The crown closure ratio of the pure pine stands (75%) was much lower than that of the mixed stands (90.9%) and the pure oak stands (93%); the relative light intensity within a stand showed the opposite pattern. The crown closure of mixed stands tended to decrease gradually as the slope aspect moves from north to south, while the relative light intensity within the stand tended to rise. Analysis of the relationship between the relative light intensity within a stand and stand parameters showed that light intensity within a stand tends to decrease as the diameter and the number of stems per hectare (N/ha) increase. The number of oak seedlings and the light intensity within a stand are in a straight-line regression relation. In particular, the number of oak seedlings was highest in mixed stands on the southern slope, but not a single pine seedling was found. 
The unfavorable conditions of a 10 cm thick litter layer and the low relative light intensity in a stand (ranging between 4% and 8%) are considered to prevent pine seeds from germinating.


The Impacts of Need for Cognitive Closure, Psychological Wellbeing, and Social Factors on Impulse Purchasing (인지폐합수요(认知闭合需要), 심리건강화사회인소대충동구매적영향(心理健康和社会因素对冲动购买的影响))

  • Lee, Myong-Han;Schellhase, Ralf;Koo, Dong-Mo;Lee, Mi-Jeong
    • Journal of Global Scholars of Marketing Science
    • /
    • v.19 no.4
    • /
    • pp.44-56
    • /
    • 2009
  • Impulse purchasing is defined as an immediate purchase with no pre-shopping intentions. Previous studies of impulse buying have focused primarily on factors linked to marketing mix variables, situational factors, and consumer demographics and traits. In previous studies, marketing mix variables such as product category, product type, and atmospheric factors including advertising, coupons, sales events, promotional stimuli at the point of sale, and media format have been used to evaluate product information. Some authors have also focused on situational factors surrounding the consumer. Factors such as the availability of credit card usage, time available, transportability of the products, and the presence and number of shopping companions were found to have a positive impact on impulse buying and/or impulse tendency. Research has also been conducted to evaluate the effects of individual characteristics such as the age, gender, and educational level of the consumer, as well as perceived crowding, stimulation, and the need for touch, on impulse purchasing. In summary, previous studies have found that all products can be purchased impulsively (Vohs and Faber, 2007), that situational factors affect and/or at least facilitate impulse purchasing behavior, and that various individual traits are closely linked to impulse buying. The recent introduction of new distribution channels such as home shopping channels, discount stores, and Internet stores that are open 24 hours a day increases the probability of impulse purchasing. However, previous literature has focused predominantly on situational and marketing variables and thus studies that consider critical consumer characteristics are still lacking. To fill this gap in the literature, the present study builds on this third tradition of research and focuses on individual trait variables, which have rarely been studied. 
More specifically, the current study investigates whether impulse buying tendency has a positive impact on impulse buying behavior, and evaluates how consumer characteristics such as the need for cognitive closure (NFCC), psychological wellbeing, and susceptibility to interpersonal influences affect the tendency of consumers towards impulse buying. The survey results reveal that while consumer affective impulsivity has a strong positive impact on impulse buying behavior, cognitive impulsivity has no impact on impulse buying behavior. Furthermore, affective impulse buying tendency is driven by sub-components of NFCC such as decisiveness and discomfort with ambiguity, psychological wellbeing constructs such as environmental control and purpose in life, and by normative and informational influences. In addition, cognitive impulse tendency is driven by sub-components of NFCC such as decisiveness, discomfort with ambiguity, and close-mindedness, and the psychological wellbeing constructs of environmental control, as well as normative and informational influences. The present study has significant theoretical implications. First, affective impulsivity has a strong impact on impulse purchase behavior. Previous studies based on affectivity and flow theories proposed that low to moderate levels of impulsivity are driven by reduced self-control or a failure of self-regulatory mechanisms. The present study confirms the above proposition. Second, the present study also contributes to the literature by confirming that impulse buying tendency can be viewed as a two-dimensional concept with both affective and cognitive dimensions, and illustrates that impulse purchase behavior is explained mainly by affective impulsivity, not by cognitive impulsivity. Third, the current study accommodates new constructs such as psychological wellbeing and NFCC as potential influencing factors in the research model, thereby contributing to the existing literature. 
Fourth, by incorporating multi-dimensional concepts such as psychological wellbeing and NFCC, more diverse aspects of consumer information processing can be evaluated. Fifth, the current study also extends the existing literature by confirming the two competing routes of normative and informational influence. Normative influence occurs when individuals conform to the expectations of others or act to enhance their self-image, whereas informational influence occurs when individuals search for information from knowledgeable others or make inferences based upon observations of the behavior of others. The present study shows that these two competing routes of social influence can be attributed to different sources of influence power. The current study also has many practical implications. First, it suggests that people with affective impulsivity may be primary targets to whom companies should pay closer attention; cultivating a more amenable and mood-elevating shopping environment will appeal to this segment. Second, the present results demonstrate that NFCC is closely related to the cognitive dimension of impulsivity. These people are driven by careless thoughts, not by feelings or excitement, so rational advertising at the point of purchase will attract them. Third, people susceptible to normative influence are another potential target market. Retailers and manufacturers could appeal to this segment by advertising their products and/or services as products that can be used to identify with, or conform to the expectations of, others in the aspiration group. However, retailers should avoid targeting people susceptible to informational influence as a segment market. These people engage in an extensive information search relevant to their purchase, and therefore more elaborate, long-term rational advertising messages, which can be internalized into these consumers' thought processes, will appeal to this segment. 
The current findings should be interpreted with caution for several reasons. The study used a small convenience sample, and only investigated behavior in two dimensions. Accordingly, future studies should incorporate a sample with more diverse characteristics and measure different aspects of behavior. Future studies should also investigate personality traits closely related to affectivity theories. Trait variables such as sensory curiosity, interpersonal curiosity, and atmospheric responsiveness are interesting areas for future investigation.


The Study on the Priority of First Person Shooter game Elements using Delphi Methodology (FPS게임 구성요소의 중요도 분석방법에 관한 연구 1 -델파이기법을 이용한 독립요소의 계층설계와 검증을 중심으로-)

  • Bae, Hye-Jin;Kim, Suk-Tae
    • Archives of design research
    • /
    • v.20 no.3 s.71
    • /
    • pp.61-72
    • /
    • 2007
  • Having started with "Spacewar!", the first game produced at MIT in the 1960s, the gaming industry expanded rapidly and grew to a large size over a short period of time. The brand-new games being launched on the market contain so many different elements making up a single content that games are often called the 'most comprehensive ultimate fruit' of the design technologies. This also translates into a large increase in the number of things which need to be considered in developing games, complicating plans for the financial budget, the work force, and the time to be committed. Therefore, an approach for analyzing the elements which make up a game, computing the importance of each of them, and assessing the games to be developed in the future, is the key to successful game development. Many decision-making activities are required in such a planning process, and the decision-making task involves many difficulties, outlined as follows: the multi-factor problem; the uncertainty problem, which impedes the elements from being quantified; the complex multi-purpose problem, whose outcomes cause confusion among decision-makers; and the problem of determining the priority order of the multiple stages leading to the decision. In this study we suggest AHP (Analytic Hierarchy Process) so that these problems can be worked out comprehensively, and a logical and rational alternative plan can be proposed through the quantification of the uncertain data. The analysis takes FPS (First Person Shooter) games, which currently dominate the gaming industry, as the subjects of this study. The most important consideration in conducting an AHP analysis is to group the elements of the subjects accurately and objectively, arrange them hierarchically, and analyze the importance of each through pair-wise comparison between the elements. 
The study is composed of two parts: analyzing these elements and computing the importance among them, and choosing an alternative plan. Of these, this paper focuses in particular on the Delphi-technique-based objective element analysis and hierarchy design for FPS games.
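The pair-wise comparison step of AHP can be sketched in a few lines. This sketch uses the row geometric-mean approximation to the principal-eigenvector weights, and the three-element comparison matrix is hypothetical, not taken from the study:

```python
import math

def ahp_weights(matrix):
    """Approximate AHP priority weights via the row geometric-mean method.

    matrix[i][j] holds how much more important element i is than element j
    (Saaty's 1-9 scale); matrix[j][i] must be its reciprocal.
    """
    n = len(matrix)
    geo_means = [math.prod(row) ** (1 / n) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Hypothetical comparison of three game elements: A is 3x as important as B
# and 5x as important as C; B is 2x as important as C.
m = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
w = ahp_weights(m)  # weights sum to 1, ordered A > B > C
```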


An Activation Analysis of Target("used H218O") for 18FDG Synthesis (18FDG 생산용 타겟("사용 후 H218O")의 방사화 분석)

  • Kang, Bo Sun
    • Journal of the Korean Society of Radiology
    • /
    • v.7 no.3
    • /
    • pp.213-219
    • /
    • 2013
  • Currently, about 35 cyclotrons are operating in South Korea. Most of them are mainly used for the synthesis of radiopharmaceuticals such as $^{18}FDG$, a cancer tracer for nuclear medicine. Highly enriched $H_2{^{18}}O$, containing up to a 98% $^{18}O/O$ isotope ratio, is used as the target for $^{18}F$ production. The price of the highly enriched $H_2{^{18}}O$ ranges from 60 to 70 USD/g, and despite this very high price all of it has been imported from abroad. The target (enriched $H_2{^{18}}O$) is non-radioactive before proton beam irradiation, but the post-irradiation target (used $H_2{^{18}}O$) must be managed under the National Radiation Safety Regulations, because it becomes radioactive through the activation of impurities within the target. Nevertheless, despite the rapidly increasing amount of used $H_2{^{18}}O$ that accompanies the increasing number of nuclear medicine cases, no activation analysis of the used $H_2{^{18}}O$ had yet been conducted in Korea. In this research, activation analysis was conducted to determine the specific radioactivity (Bq/g) of each radioisotope within the used $H_2{^{18}}O$. The analysis was performed on three 20 g samples collected from the used $H_2{^{18}}O$ storages at different cyclotron centers. Based on the results, it was confirmed that the used $H_2{^{18}}O$ contains gamma emitters such as $^{56}Co$, $^{57}Co$, $^{58}Co$, and $^{54}Mn$, as well as a considerable amount of the beta emitter $^3H$. It was also confirmed that only one sample contained gamma emitters above the exemption level, while the specific activity of tritium was lower than the exemption level in all samples. The specific activities of the radioisotopes were measured at different levels in the samples depending on the elapsed time after irradiation. 
Further study on the activation of the used $H_2{^{18}}O$ is definitely necessary; nevertheless, the present results should be useful in establishing a rational management protocol for used $H_2{^{18}}O$.
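Because the measured specific activities depend on the time elapsed since irradiation, comparing samples requires decay correction. A minimal sketch using the standard decay law (the half-lives are physical constants; the function name is ours):

```python
import math

# Physical half-lives, in days
HALF_LIFE_DAYS = {"Co-56": 77.2, "Co-57": 271.8, "Co-58": 70.9,
                  "Mn-54": 312.2, "H-3": 4500.0}

def specific_activity(initial_bq_per_g, nuclide, elapsed_days):
    """Specific activity after elapsed_days: A(t) = A0 * exp(-ln2 * t / T_half)."""
    t_half = HALF_LIFE_DAYS[nuclide]
    return initial_bq_per_g * math.exp(-math.log(2.0) * elapsed_days / t_half)
```

For example, a Co-58 activity halves after one half-life (about 71 days), while tritium, with a 12.3-year half-life, barely decays over the same period.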

Effect of Reperfusion after 20 min Ligation of the Left Coronary Artery in Open-chest Bovine Heart: An Ultrastructural Study (재관류가 허혈 심근세포의 미세구조에 미치는 영향 : 재관류 손상에 관한 연구)

  • 이종욱;조대윤;손동섭;양기민;라봉진;김호덕
    • Journal of Chest Surgery
    • /
    • v.31 no.8
    • /
    • pp.739-748
    • /
    • 1998
  • Background: It has been well documented that transient occlusion of the coronary artery causes myocardial ischemia and, finally, cell death when ischemia is sustained for more than 20 minutes. Extensive studies have revealed that ischemic myocardium cannot recover without reperfusion through adequate restoration of blood flow; however, reperfusion can cause long-lasting cardiac dysfunction and aggravation of structural damage. The authors therefore attempted to examine the effect of postischemic reperfusion on myocardial ultrastructure and to determine the rationale for recanalization therapy to salvage ischemic myocardium. Materials and methods: Young Holstein-Friesian cows (130-140 kg body weight; n=40) of both sexes, maintained on a nutritionally balanced diet and under constant conditions, were used. The left anterior descending coronary artery (LAD) was occluded by ligation with a 4-0 silk snare for 20 minutes and recanalized by release of the ligation under continuous intravenous drip anesthesia with sodium pentobarbital (0.15 mg/kg/min). Drill biopsies of the risk area (antero-lateral wall) were performed immediately upon reperfusion (5 minutes); at 1, 2, 3, 6, and 12 hours after recanalization; and after a 1-hour assist (with mechanical respiration and fluid replacement only) following 12-hour recanalization. The materials were subdivided into subepicardial and subendocardial tissues. Tissue samples were examined with a transmission electron microscope (Philips EM 300) at an accelerating voltage of 60 keV. Results: After the 20-minute ligation of the LAD, myocytes showed slight to moderate ultrastructural changes, including subsarcolemmal bleb formation, loss of nuclear matrix, clumping and margination of chromatin, mitochondrial destruction, and contracture of sarcomeres. However, microvascular structures were relatively well preserved. 
After 1 hour of reperfusion, nuclear and mitochondrial matrices reappeared, and intravascular plugging by polymorphonuclear leukocytes or platelets was observed. Nucleoli and intramitochondrial granules reappeared within 3 hours of reperfusion, and a large number of myocytes recovered progressively within 6 hours of reperfusion. Recovery was most apparent in the subepicardial myocytes, and there were no distinct ultrastructural changes except narrowed lumina of the microvessels in the later period of reperfusion. Conclusions: It is likely that ischemic myocardium cannot be salvaged without adequate restoration of coronary flow, and that the microvasculature is more resistant to a reversible period of ischemia than the subendocardium and subepicardium. Therefore, thrombolysis and/or angioplasty may be a rational method of therapy for coronarogenic myocardial ischemia. However, recovery from the ischemic insult may take a relatively long period of time, and reperfusion injury should be considered.
