• Title/Summary/Keyword: time perception


Methods for Integration of Documents using Hierarchical Structure based on the Formal Concept Analysis (FCA 기반 계층적 구조를 이용한 문서 통합 기법)

  • Kim, Tae-Hwan;Jeon, Ho-Cheol;Choi, Joong-Min
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.3
    • /
    • pp.63-77
    • /
    • 2011
  • The World Wide Web is a very large distributed digital information space. From its origins in 1991, the web has grown to encompass diverse information resources such as personal home pages, online digital libraries, and virtual museums. Some estimates suggest that the web currently includes over 500 billion pages in the deep web. The ability to search and retrieve information from the web efficiently and effectively is an enabling technology for realizing its full potential. With powerful workstations and parallel processing technology, efficiency is not a bottleneck; in fact, some existing search tools sift through gigabyte-size precompiled web indexes in a fraction of a second. But retrieval effectiveness is a different matter. Current search tools retrieve too many documents, of which only a small fraction are relevant to the user query. Furthermore, the most relevant documents do not necessarily appear at the top of the query output order. Also, current search tools cannot retrieve, from the gigantic collection of documents, the documents related to a retrieved document. The most important problem for many current search systems is therefore to increase the quality of search, that is, to provide related documents and to keep the number of unrelated documents in the results as low as possible. For this problem, CiteSeer proposed ACI (Autonomous Citation Indexing) of the articles on the World Wide Web. A "citation index" indexes the links between articles that researchers make when they cite other articles. Citation indexes are very useful for a number of purposes, including literature search and analysis of the academic literature. In detail, references contained in academic articles are used to give credit to previous work in the literature and provide a link between the "citing" and "cited" articles. A citation index indexes the citations that an article makes, linking the articles with the cited works. Citation indexes were originally designed mainly for information retrieval. The citation links allow navigating the literature in unique ways: papers can be located independent of language and of words in the title, keywords, or document, and a citation index allows navigation backward in time (the list of cited articles) and forward in time (which subsequent articles cite the current article?). However, CiteSeer cannot index links between articles that researchers do not make, because it indexes only the links researchers make when they cite other articles; for the same reason, CiteSeer does not scale easily. All of these problems motivated us to design a more effective search system. This paper presents a method that extracts a subject and a predicate from each sentence of a document. The document is converted into a tabular form in which each extracted predicate is checked against its possible subjects and objects. Using this table, we build a hierarchical graph of the document and then integrate the graphs of multiple documents. From the graph of the entire document set, the area of each document is calculated relative to the integrated documents, and relations among the documents are marked by comparing these areas. The paper also proposes a method for structural integration of documents that retrieves documents from the graph, which makes it easier for users to find information. We compared the performance of the proposed approach with the Lucene search engine using ranking formulas.
As a result, the F-measure is about 60%, which is roughly 15% better.
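
The abstract gives no code, so the following is only a minimal sketch of the tabular (formal-context) step it describes, assuming (subject, predicate, object) triples have already been extracted from each sentence; the helper names build_context, doc_area, and relatedness and the Jaccard-style score are illustrative assumptions, not the paper's actual formulas.

```python
# Sketch: predicate-by-term incidence table per document and a crude
# area-based relatedness score between two integrated documents.
from collections import defaultdict

def build_context(triples):
    """Map each predicate to the set of subjects/objects it co-occurs with."""
    context = defaultdict(set)
    for subj, pred, obj in triples:
        context[pred].add(subj)
        if obj:
            context[pred].add(obj)
    return context

def doc_area(context):
    """Use the number of (predicate, term) incidences as a crude 'area' of a document."""
    return sum(len(terms) for terms in context.values())

def relatedness(ctx_a, ctx_b):
    """Compare two documents by the shared portion of their integrated context."""
    shared = 0
    for pred in set(ctx_a) & set(ctx_b):
        shared += len(ctx_a[pred] & ctx_b[pred])
    return shared / max(doc_area(ctx_a) + doc_area(ctx_b) - shared, 1)

doc1 = build_context([("web", "contains", "pages"), ("index", "links", "articles")])
doc2 = build_context([("index", "links", "citations"), ("web", "contains", "pages")])
print(relatedness(doc1, doc2))  # fraction of the integrated area shared by both
```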

Construction of Event Networks from Large News Data Using Text Mining Techniques (텍스트 마이닝 기법을 적용한 뉴스 데이터에서의 사건 네트워크 구축)

  • Lee, Minchul;Kim, Hea-Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.183-203
    • /
    • 2018
  • News articles are the most suitable medium for examining the events occurring at home and abroad. In particular, as the development of information and communication technology has brought various kinds of online news media, news about events occurring in society has increased greatly. Automatically summarizing key events from massive amounts of news data will therefore help users look at many of the events at a glance, and if we build and provide an event network based on the relevance of events, it will greatly help readers understand current events. In this study, we propose a method for extracting event networks from large news text data. To this end, we first collected Korean political and social articles from March 2016 to March 2017 and integrated synonyms, leaving only meaningful words, through preprocessing using NPMI and Word2Vec. Latent Dirichlet allocation (LDA) topic modeling was used to calculate the topic distribution by date, find peaks in that distribution, and detect events. A total of 32 topics were extracted from the topic modeling, and the time of occurrence of each event was deduced by looking at the point at which each topic's distribution surged. As a result, a total of 85 events were detected, of which the final 16 events were filtered and presented using a Gaussian smoothing technique. We then calculated relevance scores between the detected events to construct the event network: using the cosine coefficient between co-occurring events, we calculated the relevance between events and connected related events. Finally, we set up the event network by taking each event as a vertex and the relevance score between events as the weight of the edge connecting them. The event network constructed by our method made it possible to sort out the major events in the Korean political and social fields that occurred in the last year in chronological order and, at the same time, to identify which events are related to which. Our approach differs from existing event detection methods in that LDA topic modeling makes it possible to easily analyze large amounts of data and to identify relations among events that were difficult to detect with existing event detection. We also applied various text mining techniques and Word2Vec in the text preprocessing to improve the accuracy of extracting proper nouns and compound nouns, which have been difficult to handle in analyzing Korean texts. The event detection and network construction techniques in this study have the following advantages in practical application. First, LDA topic modeling, which is unsupervised learning, can easily extract topics, topic words, and their distributions from a huge amount of data, and by using the date information of the collected news articles, the distribution by topic can be expressed as a time series. Second, by calculating relevance scores and constructing an event network from the co-occurrence of topics, we can present the connections among events in a summarized form that is difficult to grasp with existing event detection; this can be seen from the fact that the inter-event relevance-based event network proposed in this study was actually constructed in order of occurrence time. It is also possible to identify, through the event network, which event was the starting point of a series of events.
The limitation of this study is that LDA topic modeling produces different results depending on the initial parameters and the number of topics, and the topic and event names in the analysis results must be assigned by the subjective judgment of the researcher. Also, since each topic is assumed to be exclusive and independent, the relevance between topics is not taken into account. Subsequent studies need to calculate the relevance between events not covered in this study or between events that belong to the same topic.
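
The pipeline above can be illustrated with a small sketch. It assumes a daily topic-share matrix (dates x 32 topics) has already been produced by LDA; the smoothing width, the z-score threshold, and the use of cosine similarity between topic time series (a stand-in for the paper's co-occurrence-based relevance score) are illustrative assumptions, not the study's settings.

```python
# Sketch: surge (peak) detection per topic after Gaussian smoothing, then an
# event network whose edges carry a cosine-based relevance weight.
import numpy as np
import networkx as nx
from scipy.ndimage import gaussian_filter1d
from sklearn.metrics.pairwise import cosine_similarity

def detect_events(topic_shares, sigma=2.0, z_thresh=2.0):
    """Return (topic, day) pairs where a topic's smoothed daily share surges."""
    events = []
    for k in range(topic_shares.shape[1]):
        series = gaussian_filter1d(topic_shares[:, k], sigma=sigma)
        z = (series - series.mean()) / (series.std() + 1e-9)
        for day in np.where(z > z_thresh)[0]:
            events.append((k, int(day)))
    return events

def event_network(topic_shares, events):
    """Connect events whose topic time series are similar (cosine coefficient)."""
    G = nx.Graph()
    for i, (ti, di) in enumerate(events):
        for j in range(i + 1, len(events)):
            tj, dj = events[j]
            w = cosine_similarity(topic_shares[:, [ti]].T, topic_shares[:, [tj]].T)[0, 0]
            if w > 0.5:  # illustrative relevance cut-off
                G.add_edge((ti, di), (tj, dj), weight=float(w))
    return G

shares = np.random.dirichlet(np.ones(32), size=365)  # stand-in for one year of news
G = event_network(shares, detect_events(shares))
```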

An Intelligence Support System Research on KTX Rolling Stock Failure Using Case-based Reasoning and Text Mining (사례기반추론과 텍스트마이닝 기법을 활용한 KTX 차량고장 지능형 조치지원시스템 연구)

  • Lee, Hyung Il;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.47-73
    • /
    • 2020
  • KTX rolling stock is a system consisting of several machines, electrical devices, and components, and its maintenance requires considerable expertise and experience from maintenance workers. In the event of a rolling stock failure, the maintainer's knowledge and experience determine how quickly and how well the problem is solved, so the resulting availability of the vehicle varies. Although problem solving is generally based on fault manuals, experienced and skilled professionals can quickly diagnose and take action by applying personal know-how. Since this knowledge exists in a tacit form, it is difficult to pass it on completely to a successor, and there have been studies that developed case-based rolling stock expert systems to turn it into a data-driven form. Nonetheless, research on the KTX rolling stock most commonly used on the main line, and the development of a system that extracts meaning from text and searches for similar cases, is still lacking. Therefore, this study proposes an intelligent support system that provides an action guide for emerging failures by using the know-how of rolling stock maintenance experts as examples of problem solving. For this purpose, a case base was constructed by collecting the rolling stock failure data generated from 2015 to 2017, and an integrated dictionary was built separately from the case base to include the essential terminology and failure codes, in consideration of the specialty of the railway rolling stock sector. Based on the constructed case base, past cases similar to a new failure were retrieved, and the top three most similar failure cases were extracted so that the actions actually taken in those cases could be proposed as a diagnostic guide. To compensate for the limitations of the keyword-matching case retrieval used in previous case-based rolling stock failure expert system studies, this study applied various dimensionality reduction techniques that take the semantic relationships among failure details into account when calculating similarity, and verified their usefulness through experiments. Similar cases were retrieved by applying three algorithms, Non-negative Matrix Factorization (NMF), Latent Semantic Analysis (LSA), and Doc2Vec, to extract the characteristics of each failure and measure the cosine distance between the vectors. Precision, recall, and F-measure were used to assess the performance of the proposed actions. To compare the dimensionality reduction techniques, they were evaluated against an algorithm that randomly extracts failure cases with identical failure codes and an algorithm that applies cosine similarity directly to the words, and analysis of variance confirmed that the performance differences among the five algorithms were statistically significant. In addition, optimal settings for practical application were derived by verifying how performance differs with the number of dimensions used for dimensionality reduction. The analysis showed that direct word-based cosine similarity performed better than the dimensionality reduction using Non-negative Matrix Factorization (NMF) and Latent Semantic Analysis (LSA), and that the algorithm using Doc2Vec performed best.
Furthermore, for the dimensionality reduction techniques, the larger the number of dimensions, within an appropriate range, the better the performance. Through this study, we confirmed the usefulness of effective methods for extracting data characteristics and converting unstructured data when applying case-based reasoning in the specialized field of KTX rolling stock, where most attributes are text. Text mining is being studied for use in many areas, but studies using such text data are still lacking in environments with many specialized terms and limited access to data, such as the one addressed in this study. In this regard, it is significant that this study is the first to present an intelligent diagnostic system that suggests actions by retrieving cases with text mining techniques that extract the characteristics of a failure, complementing keyword-based case searches. It is expected to provide implications as a basic study for developing diagnostic systems that can be used immediately in the field.
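
A minimal sketch of the retrieval step described above: past failure descriptions are vectorized, reduced in dimensionality (LSA via TruncatedSVD is shown; NMF or Doc2Vec are drop-in alternatives), and the three most similar past cases are returned for a new failure. The tiny corpus and the dimension setting are placeholders, not the study's data or tuned parameters.

```python
# Sketch: TF-IDF + LSA case vectors, cosine-ranked top-k retrieval of past
# failure cases and their recorded maintenance actions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

case_texts = ["door sensor fault on car 3", "brake pressure drop in unit 12",
              "pantograph arcing at high speed"]          # placeholder failure logs
case_actions = ["replace door sensor", "bleed brake line", "inspect pantograph strip"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(case_texts)
svd = TruncatedSVD(n_components=2)                        # e.g. ~100 dims on real data
case_vecs = svd.fit_transform(X)

def recommend(new_failure, top_k=3):
    """Retrieve the top-k most similar past cases and their recorded actions."""
    q = svd.transform(vectorizer.transform([new_failure]))
    sims = cosine_similarity(q, case_vecs)[0]
    ranked = sims.argsort()[::-1][:top_k]
    return [(case_texts[i], case_actions[i], float(sims[i])) for i in ranked]

print(recommend("brake pressure alarm on car 12"))
```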

The Influence of Ventilation and Shade on the Mean Radiant Temperature of Summer Outdoor (통풍과 차양이 하절기 옥외공간의 평균복사온도에 미치는 영향)

  • Lee, Chun-Seok;Ryu, Nam-Hyung
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.40 no.5
    • /
    • pp.100-108
    • /
    • 2012
  • The purpose of the study was to evaluate the influence of shading and ventilation on the Mean Radiant Temperature (MRT) of outdoor space in summer. Wind Speed (WS), Air Temperature (AT), and Globe Temperature (GT) were recorded every minute from the 1st of May to the 30th of September 2011 at a height of 1.2 m above ground in four experimental plots with different shading and ventilating conditions, with a measuring system consisting of a vane-type anemometer (Barini Design's BDTH), a Resistance Temperature Detector (RTD, Pt-100), a standard black globe (Ø 150 mm), and data acquisition systems (National Instruments' LabVIEW and Comfile Tech's Moacon). To implement four different ventilating and shading conditions, three hexahedral steel frames and one natural plot were established in an open grass field. Two of the steel frames had dimensions of 3 m (W) × 3 m (L) × 1.5 m (H), with every vertical side covered with transparent polyethylene film to prevent lateral ventilation (Ventilation Blocking Plot: VP); on one of them an additional shading curtain was applied to the top side (Shading and Ventilation Blocking Plot: SVP). The third frame was 1.5 m (W) × 1.5 m (L) × 1.5 m (H), with only the top side covered by the shading curtain and no lateral film (Shading Plot: SP). The last plot was in natural condition without any shading or wind-blocking material (Natural Open Plot: NP). Based on 13,262 records from 44 sunny days, the time-series differences of AT and GT over 24 hours were analyzed and compared, and statistical analysis was done on the 7,172 records of the daytime period from 7 A.M. to 8 P.M., while the relation between MRT, solar radiation, and wind speed was analyzed from the records of the hottest period, 11 A.M. to 4 P.M. The major findings were as follows. 1. The peak AT was 40.8°C at VP and 35.6°C at SP, a difference of about 5°C, but the difference in average AT was very small, within ±1°C. 2. The difference in peak GT was 12°C, with 52.5°C at VP and 40.6°C at SP, while the gap in average GT between the two plots was 6°C. Comparing all four plots including NP and SVP, shading decreased GT by about 6°C while wind blocking increased GT by about 3°C. 3. According to the calculated MRT, shading has a cooling effect, reducing MRT by a maximum of 13°C and an average of 9°C, while wind blocking has a heating effect, increasing MRT by an average of 3°C. In other words, the MRT of a shaded area with natural ventilation could be up to about 16°C lower than that of a sunny, wind-blocked site. 4. The regression and correlation tests showed that shading is more important than ventilation in reducing MRT, while both play an important role in improving outdoor thermal comfort. In summary, the results of this study showed that shade is the first and ventilation the second most important factor in improving outdoor thermal comfort in summer daylight hours. Therefore, the more shade provided by forests, shade trees, and the like, the more effectively the microclimate of an outdoor space can be conditioned, reducing heat energy that is useless or even harmful for human activities. Furthermore, a carefully designed wind corridor or outdoor ventilation system can even improve the thermal environment of urban areas.
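
The abstract does not state which conversion formula was used, but a standard way to derive MRT from globe temperature, air temperature, and wind speed under forced convection (ISO 7726, 150 mm black globe, emissivity about 0.95) is sketched below for reference.

```python
# Sketch of the standard forced-convection globe-thermometer conversion;
# diameter and emissivity defaults match a standard 150 mm black globe.
def mean_radiant_temperature(gt_c, at_c, ws_ms, diameter_m=0.15, emissivity=0.95):
    """Mean radiant temperature (deg C) from globe temp, air temp and wind speed."""
    term = (gt_c + 273.0) ** 4 \
        + (1.1e8 * ws_ms ** 0.6) / (emissivity * diameter_m ** 0.4) * (gt_c - at_c)
    return term ** 0.25 - 273.0

# e.g. a sunny, lightly ventilated reading: GT 52.5 C, AT 35 C, wind 0.5 m/s
print(round(mean_radiant_temperature(52.5, 35.0, 0.5), 1))
```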

A study on the Degradation and By-products Formation of NDMA by the Photolysis with UV: Setup of Reaction Models and Assessment of Decomposition Characteristics by the Statistical Design of Experiment (DOE) based on the Box-Behnken Technique (UV 공정을 이용한 N-Nitrosodimethylamine (NDMA) 광분해 및 부산물 생성에 관한 연구: 박스-벤켄법 실험계획법을 이용한 통계학적 분해특성평가 및 반응모델 수립)

  • Chang, Soon-Woong;Lee, Si-Jin;Cho, Il-Hyoung
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.32 no.1
    • /
    • pp.33-46
    • /
    • 2010
  • We investigated the decomposition characteristics and by-products of N-Nitrosodimethylamine (NDMA) in a UV process using a design of experiment (DOE) based on the Box-Behnken design. The main factors (variables) were UV intensity (X1, range: 1.5~4.5 mW/cm²), NDMA concentration (X2, range: 100~300 uM), and pH (X3, range: 3~9), each with 3 levels, and 4 responses (Y1: % of NDMA removal, Y2: dimethylamine (DMA) formation (uM), Y3: dimethylformamide (DMF) formation (uM), Y4: NO2-N formation (uM)) were set up to estimate the prediction models and the optimization conditions. The prediction models and the optimal points obtained by canonical analysis to determine the optimal operating conditions were: Y1 [% of NDMA removal] = 117 + 21X1 - 0.3X2 - 17.2X3 + 2.43X1² + 0.001X2² + 3.2X3² - 0.08X1X2 - 1.6X1X3 - 0.05X2X3 (R² = 96%, adjusted R² = 88%), optimum 99.3% (X1: 4.5 mW/cm², X2: 190 uM, X3: 3.2); Y2 [DMA conc.] = -101 + 18.5X1 + 0.4X2 + 21X3 - 3.3X1² - 0.01X2² - 1.5X3² - 0.01X1X2 + 0.07X1X3 - 0.01X2X3 (R² = 99.4%, adjusted R² = 95.7%), 35.2 uM (X1: 3 mW/cm², X2: 220 uM, X3: 6.3); Y3 [DMF conc.] = -6.2 + 0.2X1 + 0.02X2 + 2X3 - 0.26X1² - 0.01X2² - 0.2X3² - 0.004X1X2 + 0.1X1X3 - 0.02X2X3 (R² = 98%, adjusted R² = 94.4%), 3.7 uM (X1: 4.5 mW/cm², X2: 290 uM, X3: 6.2); and Y4 [NO2-N conc.] = -25 + 12.2X1 + 0.15X2 + 7.8X3 + 1.1X1² + 0.001X2² - 0.34X3² + 0.01X1X2 + 0.08X1X3 - 3.4X2X3 (R² = 98.5%, adjusted R² = 95.7%), 74.5 uM (X1: 4.5 mW/cm², X2: 220 uM, X3: 3.1). This study has demonstrated that response surface methodology and the Box-Behnken statistical experiment design can provide statistically reliable results for the decomposition and by-products of NDMA by UV photolysis, and also for the determination of optimum conditions. Predictions obtained from the response functions were in good agreement with the experimental results, indicating the reliability of the methodology used.
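
As a quick check of the fitted response surface, the sketch below evaluates the quoted Y1 model at the reported optimum; because the printed coefficients are rounded, the result only approximates the stated 99.3% removal.

```python
# Sketch: evaluate the quoted quadratic model for Y1 (% NDMA removal) at the
# reported optimum X1 = 4.5 mW/cm2, X2 = 190 uM, X3 = 3.2.
def ndma_removal(x1, x2, x3):
    """Y1 = % NDMA removal as a function of UV intensity, NDMA conc. and pH."""
    return (117 + 21 * x1 - 0.3 * x2 - 17.2 * x3
            + 2.43 * x1 ** 2 + 0.001 * x2 ** 2 + 3.2 * x3 ** 2
            - 0.08 * x1 * x2 - 1.6 * x1 * x3 - 0.05 * x2 * x3)

print(round(ndma_removal(4.5, 190, 3.2), 1))  # ~96, vs. 99.3% reported before rounding
```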

THE RELATIONSHIP BETWEEN PARTICLE INJECTION RATE OBSERVED AT GEOSYNCHRONOUS ORBIT AND DST INDEX DURING GEOMAGNETIC STORMS (자기폭풍 기간 중 정지궤도 공간에서의 입자 유입률과 Dst 지수 사이의 상관관계)

  • 문가희;안병호
    • Journal of Astronomy and Space Sciences
    • /
    • v.20 no.2
    • /
    • pp.109-122
    • /
    • 2003
  • To examine the causal relationship between geomagnetic storms and substorms, we investigate the correlation between the dispersionless particle injection rate of proton flux observed by geosynchronous satellites, which is known to be a typical indicator of substorm expansion activity, and the Dst index during magnetic storms. We use geomagnetic storms that occurred during 1996~2000 and categorize them into three classes in terms of the minimum value of the Dst index (Dst_min): intense (-200 nT ≤ Dst_min ≤ -100 nT), moderate (-100 nT ≤ Dst_min ≤ -50 nT), and small (-50 nT ≤ Dst_min ≤ -30 nT) storms. We use the proton flux in the energy range from 50 keV to 670 keV, the major constituent of the ring current particles, observed by the LANL geosynchronous satellites located within the local time sector from 18:00 MLT to 04:00 MLT. We also examine the flux ratio (f_max/f_ave) to estimate the particle energy injection rate into the inner magnetosphere, with f_ave and f_max being the flux levels during quiet times and at onset, respectively. The total energy injection rate into the inner magnetosphere cannot be estimated from particle measurements by one or two satellites; however, it should be at least proportional to the flux ratio and the injection frequency. Thus we propose a quantity, the "total energy injection parameter (TEIP)", defined as the product of the flux ratio and the injection frequency, as an indicator of the energy injected into the inner magnetosphere. To investigate how the substorm contribution to the development of a magnetic storm depends on storm phase, we examine the correlations separately during the main phase and the recovery phase. Several interesting tendencies are noted, particularly during the main phase. First, the average particle injection frequency tends to increase with storm size, with a correlation coefficient of 0.83. Second, the flux ratio (f_max/f_ave) tends to be higher during large storms; the correlation coefficient between Dst_min and the flux ratio is generally high, for example 0.74 for the 75~113 keV energy channel. Third, it is also worth mentioning that there is a high correlation between the TEIP and Dst_min, with the highest coefficient (0.80) recorded for the 75~113 keV energy channel, the typical particle energy of the ring current belt. Fourth, particle injection during the recovery phase tends to make storms longer, particularly intense storms. These characteristics observed during the main phase of the magnetic storm indicate that substorm expansion activity is closely associated with the development of magnetic storms.
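
A minimal sketch of the proposed quantity: the flux ratio (onset flux over quiet-time flux) averaged over the injections seen in a storm phase, multiplied by the injection frequency. The numbers in the example are illustrative, not values from the study.

```python
# Sketch: flux ratio per injection event and the TEIP proxy for injected energy.
def flux_ratio(f_max, f_ave):
    """Onset flux relative to the quiet-time level for one injection event."""
    return f_max / f_ave

def teip(flux_ratios, n_injections):
    """Average flux ratio times injection frequency, a proxy for injected energy."""
    return (sum(flux_ratios) / len(flux_ratios)) * n_injections

ratios = [flux_ratio(3.2e4, 8.0e3), flux_ratio(2.1e4, 9.0e3)]  # two injections (made up)
print(teip(ratios, n_injections=len(ratios)))
```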

A Study on Market Expansion Strategy via Two-Stage Customer Pre-segmentation Based on Customer Innovativeness and Value Orientation (고객혁신성과 가치지향성 기반의 2단계 사전 고객세분화를 통한 시장 확산 전략)

  • Heo, Tae-Young;Yoo, Young-Sang;Kim, Young-Myoung
    • Journal of Korea Technology Innovation Society
    • /
    • v.10 no.1
    • /
    • pp.73-97
    • /
    • 2007
  • R&D into future technologies should be conducted in conjunction with technological innovation strategies that are linked to corporate survival within a framework of information and knowledge-based competitiveness. As such, future technology strategies should be ensured through open R&D organizations. The development of future technologies should not be conducted simply on the basis of future forecasts, but should take customer needs into account in advance and reflect them in the development of future technologies or services. This research selects as segmentation variables the customers' attitude towards accepting future telecommunication technologies and their value orientation in everyday life, as these factors will have the greatest effect on the demand for future telecommunication services, and thus segments the future telecom service market. It thereby seeks to segment the market from the stage of technology R&D activities and to use the results to formulate technology development strategies. Based on customer attitudes towards accepting new technologies, two groups were derived, and a hierarchical customer segmentation model was provided to conduct a secondary segmentation of the two groups on the basis of their respective customer value orientations. A survey was conducted in June 2006 on 800 consumers aged 15 to 69, residing in Seoul and five other major South Korean cities, through one-on-one interviews. The sample was divided into two sub-groups according to their level of acceptance of new technology: a sub-group demonstrating a high level of technology acceptance (39.4%) and a sub-group with a comparatively lower level of technology acceptance (60.6%). These two sub-groups were each further divided into 5 smaller sub-groups (10 in total) through the second round of segmentation. The ten sub-groups were then analyzed in detail, including general demographic characteristics, usage patterns of existing telecom services such as mobile service, broadband internet, and wireless internet, ownership of computing or information devices, and the desire or intention to purchase one. Through these steps, we were able to show statistically that each of the 10 sub-groups responds to telecom services as an independent market. Through correspondence analysis, the target segmentation groups were positioned in such a way as to facilitate the entry of future telecommunication services into the market, as well as their diffusion and transferability.
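
The abstract does not name the segmentation algorithm, so the sketch below only illustrates the two-stage idea: respondents are first split by technology-acceptance score, then each half is clustered on value-orientation variables into five groups (ten segments in total). KMeans, the random stand-in survey data, and the 0.5 cut-off are assumptions for illustration.

```python
# Sketch: two-stage pre-segmentation (acceptance split, then value-orientation clusters).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
acceptance = rng.random(800)          # stand-in for the new-technology acceptance score
values = rng.random((800, 4))         # stand-in for value-orientation survey items

segments = np.empty(800, dtype=int)
for level, mask in enumerate([acceptance >= 0.5, acceptance < 0.5]):
    km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(values[mask])
    segments[mask] = level * 5 + km.labels_   # segment ids 0-4 (high) and 5-9 (low)

print(np.bincount(segments))          # size of each of the ten segments
```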

A study of the difference between Dongeui-Suse-Bowon and past Oriental-Medicine appearing in the argument on the Interior-overheating-symptom of the Tae-Eum-In caused by the liver's receiving heat (태음인(太陰人) 간수열(肝受熱) 이열병론(裡熱病論)을 통해 살펴본 과거의학(過去醫學)과 동의수세보원(東醫壽世保元)의 음양관(陰陽觀)의 차이(差異))

  • Kim, Jong-Weon
    • Journal of Sasang Constitutional Medicine
    • /
    • v.9 no.1
    • /
    • pp.127-153
    • /
    • 1997
  • Sasang medicine can classify all symptoms with a simpler classifying system than past Oriental-Medicine, because Sasang Byeon-Zeung (= the classifying system of symptoms) separates them clearly into four. The merit of this Sasang Byeon-Zeung can be seen most clearly in the pathology of the expiratory-scattering and inspiratory-gathering of the Tae-Eum and Tae-Yang. From this viewpoint, this thesis discussed the following subjects. 1. Investigate the theory of raising-falling and scattering-gathering developed in the Dongeui-Suse-Bowon. 2. Investigate the changes in the recognition of the Yang-Dog symptom and Jo-Yeol symptom, argued to be Interior-overheating-symptoms of the Tae-Eum-In caused by the liver's receiving heat. 3. Investigate Yi-Je-Ma's view of Eum-Yang in the argument on the interior-overheating-symptom of the Tae-Eum-In caused by the liver's receiving heat. As a result, the following conclusions were reached. 1. Dongeui-Suse-Bowon considers that the Spleen-Kidney has the coupled motion of raising Yang and falling Eum, and the Liver-Lung has the coupled motion of expiratory-scattering and inspiratory-gathering. This theory of raising-falling and scattering-gathering is the same in concept as the theory of raising-falling and floating-sinking of past Oriental-Medicine, but is more consistently systematized in its pathology and prescriptions. 2. Dongeui-Suse-Bowon considers the Yang-Dog symptom and Jo-Yeol symptom to be interior-overheating-symptoms of the Tae-Eum-In. According to the book, the fire of desire weakens the expiratory-scattering power of the lung and deepens the shortage of the expiratory-scattering power in comparison to the inspiratory-gathering power. Therefore the symptom can be treated by releasing ourselves from desire and taking medicine that strengthens the expiratory-scattering power. 3. In the early stage of oriental medicine, prescriptions composed of So-Yang medicines and Tae-Eum medicines that can cool heat were used. Galgeun, Mawhang, and Seungma were used in the age of the Sanghanron; thereafter Jugoing's Jojung-Tang and Gongsin's Galgeunhaegi-Tang were developed as prescriptions for the interior-overheating-symptom of the Tae-Eum-In, and finally the Tae-Eum-In Galgeunhaegi-Tang was settled by Yi-Je-Ma.

Development of Korean Version of Heparin-Coated Shunt (헤파린 표면처리된 국산화 혈관우회도관의 개발)

  • Sun, Kyung;Park, Ki-Dong;Baik, Kwang-Je;Lee, Hye-Won;Choi, Jong-Won;Kim, Seung-Chol;Kim, Taik-Jin;Lee, Seung-Yeol;Kim, Kwang-Taek;Kim, Hyoung-Mook;Lee, In-Sung
    • Journal of Chest Surgery
    • /
    • v.32 no.2
    • /
    • pp.97-107
    • /
    • 1999
  • Background: This study was designed to develop a Korean version of the heparin-coated vascular bypass shunt by using a physical dispersing technique. The safety and effectiveness of the thrombo-resistant shunt were tested in experimental animals. Material and Method: A bypass shunt model was constructed on the descending thoracic aorta of 21 adult mongrel dogs (17.5-25 kg). The animals were divided into groups of no treatment (CONTROL group; n=3), no treatment with systemic heparinization (HEPARIN group; n=6), Gott heparin shunt (GOTT group; n=6), or Korean heparin shunt (KIST group; n=6). Parameters observed were complete blood cell counts, coagulation profiles, kidney and liver function (BUN/Cr and AST/ALT), and surface scanning electron microscope (SSEM) findings. Blood was sampled from the aortic blood distal to the shunt and was compared before the bypass and at 2 hours after the bypass. Result: There were no differences between the groups before the bypass. At bypass 2 hours, the platelet level increased in the HEPARIN and GOTT groups (p<0.05), but there were no differences between the groups. Changes in other blood cell counts were insignificant between the groups. Activated clotting time, activated partial thromboplastin time, and thrombin time were prolonged in the HEPARIN group (p<0.05), and the differences between the groups were significant (p<0.005). Prothrombin time increased in the GOTT group (p<0.05) without any differences between the groups. Changes in fibrinogen level were insignificant between the groups. Antithrombin III levels increased in the HEPARIN and KIST groups (p<0.05), and the inter-group differences were also significant (p<0.05). Protein C level decreased in the HEPARIN group (p<0.05) without any differences between the groups. BUN levels increased in all groups, especially in the HEPARIN and KIST groups (p<0.05), but there were no differences between the groups. Changes in Cr, AST, and ALT levels were insignificant between the groups. SSEM findings revealed severe aggregation of platelets and other cellular elements in the CONTROL group, and the HEPARIN group showed more adherence of cellular elements than the GOTT or KIST groups. Conclusion: The above results show that the heparin-coated bypass shunts (either GOTT or KIST) can suppress thrombus formation on the surface without inducing bleeding tendencies, while systemic heparinization (HEPARIN) may not be able to block activation of the coagulation system on the surface in contact with foreign materials but increases the bleeding tendencies. We also conclude that the thrombo-resistant effects of the Korean version of the heparin shunt (KIST) are similar to those of the commercialized heparin shunt (GOTT).

Retrograde Autologous Priming: Is It Really Effective in Reducing Red Blood Cell Transfusions during Extracorporeal Circulation? (역행성 자가혈액 충전법: 체외순환 중 동종적혈구 수혈량을 줄일 수 있는가?)

  • Lim, Cheong;Son, Kuk-Hui;Park, Kay-Hyun;Jheon, Sang-Hoon;Sung, Sook-Whan
    • Journal of Chest Surgery
    • /
    • v.42 no.4
    • /
    • pp.473-479
    • /
    • 2009
  • Background: Retrograde autologous priming (RAP) is known to be useful in decreasing the need for transfusion in cardiac surgery because it prevents excessive hemodilution due to the crystalloid priming of the cardiopulmonary bypass circuit. However, negative findings have also been reported in terms of blood conservation. We analyzed the intraoperative blood-conserving effect of RAP and also investigated the efficacy of autotransfusion and ultrafiltration as supplemental methods for RAP. Material and Method: From January 2005 to December 2007, 117 patients who underwent isolated coronary artery bypass operations using cardiopulmonary bypass (CPB) were enrolled. Mean age was 63.9±9.1 years (range 36~83 years) and 34 patients were female. There were 62 patients in the RAP group and 55 patients in the control group. Intraoperative autotransfusion was performed via the arterial line. RAP was done just before initiating CPB using retrograde drainage of the crystalloid priming solution. Conventional (CUF) and modified (MUF) ultrafiltration were done during and after CPB, respectively. The transfusion threshold was a hematocrit of less than 20%. Result: Autotransfusion was done in 79 patients (67.5%) and the average amount was 142.5±65.4 mL (range 30~320 mL). Homologous red blood cell (RBC) transfusion was done in 47 patients (40.2%) and the mean amount of transfused RBC was 404.3±222.6 mL. Risk factors for transfusion were body surface area (OR 0.01, 95% CI 0.00~0.63, p=0.030) and cardiopulmonary bypass time (OR 1.04, 95% CI 1.01~1.08, p=0.019). RAP was not effective in terms of the rate of transfusion (34.5% vs 45.2%, p=0.24). However, the amount of transfused RBC was significantly decreased (526.3±242.3 mL vs 321.4±166.3 mL, p=0.001). Autotransfusion and ultrafiltration showed additive, cumulative effects in decreasing the transfusion amount (one: 600.0±231.0 mL, two: 533.3±264.6 mL, three: 346.7±176.7 mL, four: 300.0±146.1 mL, p=0.002). Conclusion: Even though RAP did not appear to be effective in terms of the number of patients receiving intraoperative RBC transfusions, it could conserve blood in terms of the amount transfused, together with the additive effects of autotransfusion and ultrafiltration. To maximize the blood-conserving effect of RAP, more aggressive control will be necessary, such as a high threshold for the transfusion trigger or strict regulation of crystalloid infusion.