• Title/Summary/Keyword: pre-extraction


ORTHODONTIC TRACTION OF HORIZONTALLY ERUPTED LOWER LATERAL INCISOR ON THE LINGUAL SIDE (설측으로 수평 맹출한 하악 측절치의 교정적 견인)

  • Mah, Yon-Joo;Sohn, Hyung-Kyu;Choi, Byung-Jai;Lee, Jae-Ho;Kim, Seong-Oh
    • Journal of the korean academy of Pediatric Dentistry
    • /
    • v.37 no.1
    • /
    • pp.117-123
    • /
    • 2010
  • Tooth eruption is the movement of a tooth from its developing position in the alveolar bone to its functional position in the oral cavity. The permanent incisors originate from the dental lamina on the lingual side of the preceding deciduous teeth and erupt to the level of occlusion through the well-developed gubernacular cord. Ectopic eruption is a developmental disturbance in the eruption pattern of the permanent dentition, and most ectopically erupted lower incisors are found on the lingual side. An ectopically erupted tooth can be repositioned by orthodontic force in the early mixed dentition, which helps prevent loss of space and lingual tilting of the lower anterior teeth. An eight-year-old girl visited the Department of Pediatric Dentistry, Yonsei Dental University Hospital, for evaluation and treatment of the lower right lateral incisor, which had erupted horizontally on the lingual side, parallel to the floor of the mouth; her tongue rested on the labial side of that tooth. There was no previous history of dental caries or trauma on the preceding primary incisor. Clinical and radiographic examinations, including computed tomography (CT), showed no evidence of root dilaceration. Therefore, we decided to start active orthodontic traction of the lower right lateral incisor. A fixed buccal arch wire and a lip bumper with a hook were designed for the traction, a button was attached to the lingual side of the ectopically positioned tooth, and an elastic was stretched between the appliance and the button. After the tooth became upright over the tongue level, the appliance was changed to a removable type, and periodic check-ups with occlusal guidance followed to monitor the position of the tooth. In this case, using a fixed appliance with a modified lip bumper and a hook embedded in the acrylic part, instead of extraction, was very efficient in uprighting the ectopically erupted tooth toward the occlusal plane.

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.105-122
    • /
    • 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. For dimensionality reduction, we should consider the density of the data, which has a significant influence on the performance of sentence classification. Higher-dimensional data require more computation, which can lead to high computational cost and overfitting in the model. Thus, a dimension reduction process is necessary to improve the performance of the model. Diverse methods have been proposed, from merely reducing noise in the data, such as misspellings or informal text, to incorporating semantic and syntactic information. In addition, the representation and selection of text features affect the performance of the classifier in sentence classification, one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in the observation space. Existing methods utilize various algorithms for dimensionality reduction, such as feature extraction and feature selection. Beyond these algorithms, word embeddings, which learn low-dimensional vector-space representations of words that capture semantic and syntactic information, are also utilized. To improve performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations. Once a feature selection algorithm identifies unimportant words, we assume that words similar to them likewise have no impact on sentence classification. This study proposes two ways to achieve more accurate classification: conducting selective word elimination under specific rules, and constructing word embeddings based on Word2Vec. To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words with comparatively low information gain values from the raw text and build a word embedding. Second, we additionally select words similar to those with low information gain values and build another word embedding. Finally, the filtered text and word embeddings are fed to deep learning models: a Convolutional Neural Network and an attention-based bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets and classifies each dataset using the deep learning models. Reviews that received more than five helpful votes and whose ratio of helpful votes exceeded 70% were classified as helpful reviews. Because Yelp only shows the number of helpful votes, we extracted 100,000 reviews that received more than five helpful votes by random sampling from 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters, was applied to each dataset. To evaluate the proposed methods, we compared the performance of Word2Vec and GloVe word embeddings that used all the words. One of the proposed methods outperformed the embeddings built on all the words: removing unimportant words improved performance, although removing too many words lowered it. Future research should consider diverse preprocessing approaches and in-depth analysis of word co-occurrence for measuring similarity among words. Also, we only applied the proposed method with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo can be combined with the proposed elimination methods, making it possible to identify promising combinations of embedding methods and elimination methods.
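The two-step selection described above, dropping words with low information gain and then also dropping words whose embeddings are close to an already-dropped word, can be sketched in plain Python. The data, thresholds, and toy embedding vectors below are illustrative assumptions, not the paper's actual settings:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(C) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values()) if n else 0.0

def information_gain(docs, labels, word):
    """IG(w) = H(C) - [P(w)H(C|w) + (1-P(w))H(C|~w)], over binary word presence."""
    present = [l for d, l in zip(docs, labels) if word in d]
    absent = [l for d, l in zip(docs, labels) if word not in d]
    p = len(present) / len(labels)
    return entropy(labels) - (p * entropy(present) + (1 - p) * entropy(absent))

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def words_to_eliminate(docs, labels, embeddings, ig_threshold, sim_threshold):
    """Step 1: collect words with low information gain.
    Step 2: also drop words whose embedding is close to a dropped word."""
    vocab = set(embeddings)
    low_ig = {w for w in vocab if information_gain(docs, labels, w) < ig_threshold}
    similar = {w for w in vocab - low_ig
               if any(cosine(embeddings[w], embeddings[d]) >= sim_threshold for d in low_ig)}
    return low_ig | similar
```

Note how the second step can eliminate a word that is discriminative on its own merely because it is embedded near eliminated words; this is exactly the assumption the study tests.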

Determination of plasma C16-C24 globotriaosylceramide (Gb3) isoforms by tandem mass spectrometry for diagnosis of Fabry disease (패브리병(Fabry) 진단을 위한 혈장 중 Globotriaosylceramide (Gb3)의 탠덤매스 분석법 개발과 임상 응용)

  • Yoon, Hye-Ran;Cho, Kyung-Hee;Yoo, Han-Wook;Choi, Jin-Ho;Lee, Dong-Hwan;Zhang, Kate;Keutzer, Joan
    • Journal of Genetic Medicine
    • /
    • v.4 no.1
    • /
    • pp.45-52
    • /
    • 2007
  • Purpose: A simple, rapid, and highly sensitive analytical method for Gb3 in plasma was developed, without labor-intensive pre-treatment, by electrospray ionization tandem mass spectrometry (ESI-MS/MS). Measurement of globotriaosylceramide (Gb3, ceramide trihexoside) in plasma is clinically important for monitoring after enzyme replacement therapy in patients with Fabry disease. The disease is an X-linked lipid storage disorder that results from a deficiency of the enzyme α-galactosidase A (α-Gal A). The lack of α-Gal A causes an intracellular accumulation of glycosphingolipids, mainly Gb3. Methods: Only a simple 50-fold dilution of plasma is necessary for the extraction and isolation of Gb3. Gb3 in diluted plasma was dissolved in dioxane containing C17:0 Gb3 as an internal standard. After centrifugation, it was directly injected and analyzed through a guard column in combination with the multiple reaction monitoring mode of ESI-MS/MS. Results: Eight isoforms of Gb3 were completely resolved from the plasma matrix. C16:0 Gb3 was the major component, accounting for 50% of total Gb3 in plasma. A linear relationship for the Gb3 isoforms was found in the range of 0.001-1.0 μg/mL. The limit of detection (S/N=3) was 0.001 μg/mL and the limit of quantification was 0.01 μg/mL for C16:0 Gb3, with acceptable precision and accuracy. Correlation coefficients of the calibration curves for the 8 Gb3 isoforms ranged from 0.9678 to 0.9982. Conclusion: The quantitative method developed could be useful as a rapid and sensitive first-line screening, monitoring, and/or diagnostic tool for Fabry disease.
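The calibration step behind the reported correlation coefficients reduces to an ordinary least-squares line relating the analyte/internal-standard peak-area ratio to spiked concentration. A minimal sketch follows; the calibration points are invented for illustration, not measured values from the study:

```python
import math

def linear_fit(x, y):
    """Ordinary least-squares fit y = a*x + b; returns slope, intercept,
    and the Pearson correlation coefficient r of the calibration curve."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    a = sxy / sxx
    b = my - a * mx
    r = sxy / math.sqrt(sxx * syy)
    return a, b, r

def quantify(ratio, a, b):
    """Back-calculate concentration (ug/mL) from a peak-area ratio."""
    return (ratio - b) / a

# Hypothetical calibration points across the 0.001-1.0 ug/mL range
conc = [0.001, 0.01, 0.1, 0.5, 1.0]
ratio = [0.002, 0.021, 0.198, 1.01, 1.99]  # analyte/IS peak-area ratios (invented)
a, b, r = linear_fit(conc, ratio)
```

An unknown sample's concentration is then read off the fitted line with `quantify`, exactly as a calibration curve is used in practice.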


Development of Samgyetang Broth from Air-dried and Oven-roasted Chicken Feet (열풍건조 및 오븐구이 닭발로부터 추출한 삼계탕 육수 제조)

  • Kim, Juntae;Utama, Dicky Tri;Jeong, Hae Seong;Heidar, Barido Farouq;Jang, Aera;Pak, Jae In;Kim, Yeong Jong;Lee, Sung Ki
    • Korean Journal of Poultry Science
    • /
    • v.46 no.3
    • /
    • pp.137-154
    • /
    • 2019
  • This study was conducted to develop and compare Samgyetang broth made from extracts of pre-treated chicken feet. Chicken feet were subjected to no heating (control), heating at 70°C for 12 h in a hot-air dryer, or heating at 180°C for 1 h in an oven. The heat-treated chicken feet were then extracted at 121°C for 1 h or 2 h. The extract was placed in a pouch with a whole chicken carcass (470±10 g), and the sealed Samgyetang retort was made according to the industrial method. The pH of the extract from preheated chicken feet was lower than that of the extract from fresh chicken feet. The thiobarbituric acid reactive substances (TBARS) value of the preheated chicken feet extract was significantly lower (P<0.05) than that of the fresh chicken feet extract, but there were no significant differences among the broths. As the extraction time increased, the pH and TBARS value decreased in the extract (P<0.05) but increased in the broth (P<0.05). According to the sensory evaluation test, the extract from 1 h of hot-air heating and drying was significantly better in appearance, aroma, and overall preference than the other treatments (P<0.05). The GC-MS results showed that benzaldehyde and benzothiazole, which are widely known to impart meaty and nutty flavors, were detected in those treatments (P<0.05). The Samgyetang broths prepared from the 1 h hot-air heating and drying extract scored significantly higher in overall acceptability in the sensory test (P<0.05). In summary, the quality of retort Samgyetang broth can be improved by adding chicken feet extract subjected to heating and drying for 1 h.

The Protective Role of Gleditsiae fructus against Streptococcus pneumoniae (폐렴 구균에 대한 조협의 보호 역할 연구)

  • Jun-ki Lee;Se-Hui Lee;Dong Ju Seo;Kang-Hee Lee;Sojung Park;Sun Park;Taekyung Kim;Jin-Young Yang
    • Journal of Life Science
    • /
    • v.33 no.2
    • /
    • pp.158-168
    • /
    • 2023
  • Natural products have been used to mitigate the effects of cancer and infectious diseases, as they feature diverse bioactivities such as antioxidant, antibacterial, anti-inflammatory, and immunomodulatory effects. Here, we chose 10 natural products well known as pulmonary enhancers and investigated their bactericidal effects on Streptococcus pneumoniae. In the disk diffusion assay, the growth of S. pneumoniae was significantly inhibited by G. fructus treatment regardless of the extraction method used. We first adopted spraying as a novel delivery method for G. fructus. Interestingly, mice exposed to G. fructus three times a day for 2 weeks were resistant to intranasal S. pneumoniae infection, as shown by both body weight loss and survival rates compared with the control group. Moreover, we confirmed that exposure to G. fructus limited colonization by the bacteria despite sustained inflammation in the lung after exposure to S. pneumoniae, indicating that migrating inflammatory immune cells may be involved in a host defense mechanism against pulmonary infectious diseases. While similar numbers of granulocytes (CD11b+Ly6C+Ly6G+), neutrophils (CD11b+Ly6CintLy6G+), and monocytes (CD11b+Ly6CintLy6G-) were found between groups, a significantly increased number of alveolar macrophages (CD11b+CD11chiF4/80+) was detected in the BAL fluids of mice pre-exposed to G. fructus at 5 days after S. pneumoniae infection. Taken together, our data suggest that this use of G. fructus can induce protective immunity against bacterial infection, indicating that facial spray may be helpful in enhancing the defense mechanism against pulmonary inflammation and in evaluating the efficacy of natural products as immune enhancers against respiratory diseases.

Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.89-105
    • /
    • 2014
  • Since the emergence of the Internet, social media, with its highly interactive Web 2.0 applications, has provided very user-friendly means for consumers and companies to communicate with each other. Users routinely publish content expressing their opinions and interests on social media such as blogs, forums, chat rooms, and discussion boards, and this content is released on the Internet in real time. For that reason, many researchers and marketers regard social media content as a source of information for business analytics, and many studies have reported results on mining business intelligence from social media content. In particular, opinion mining and sentiment analysis, techniques for extracting, classifying, understanding, and assessing the opinions implicit in text content, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques, and tools have been presented by these researchers. However, we have found some weaknesses in their methods, which are often technically complicated and not sufficiently user-friendly for supporting business decisions and planning. In this study, we attempted to formulate a more comprehensive and practical approach to opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining using social media content, from the initial data-gathering stage to the final presentation session. Our proposed approach consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts choose the target social media. Each target medium requires a different way to gain access: open APIs, search tools, DB-to-DB interfaces, purchased content, and so on. The second phase is pre-processing, which generates useful material for meaningful analysis. If garbage data are not removed, the results of social media analysis will not provide meaningful and useful business insights. To clean social media data, natural language processing techniques should be applied. The next step is the opinion mining phase, in which the cleansed social media content set is analyzed. The qualified dataset includes not only user-generated content but also content identification information such as creation date, author name, user ID, content ID, hit counts, reviews or replies, favorites, etc. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool. Topic extraction and buzz analysis are usually related to market trend analysis, while sentiment analysis is utilized for reputation analysis. There are also various applications, such as stock prediction, product recommendation, and sales forecasting. The last phase is visualization and presentation of the analysis results. The major focus of this phase is to explain the results and help users comprehend their meaning. Therefore, to the extent possible, the deliverables of this phase should be simple, clear, and easy to understand rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company, NS Food, which holds 66.5% of the market share and has kept the No. 1 position in the Korean "Ramen" business for several decades. We collected a total of 11,869 pieces of content, including blogs, forum posts, and news articles. After collecting the social media content data, we generated instant-noodle-business-specific language resources for data manipulation and analysis using natural language processing. In addition, we classified content into more detailed categories such as marketing features, environment, and reputation. In these phases, we used free software such as the tm, KoNLP, ggplot2, and plyr packages of the R project. As a result, we present several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, valence tree maps, and other visualized images, providing vivid, full-color examples built with open-library software packages of the R project. Business actors can detect at a glance which areas are weak, strong, positive, negative, quiet, or loud. A heat map can show the movement of sentiment or volume across a category-by-time matrix through the density of color over time periods. The valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers to quickly understand the "big picture" business situation with a hierarchical structure, since a treemap can present buzz volume and sentiment in a visualized result for a certain period. This case study offers real-world business insights from market sensing and demonstrates to practical-minded business users how they can use these types of results for timely decision making in response to ongoing changes in the market. We believe our approach provides a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well.
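The heat-map stage of a pipeline like the one above boils down to aggregating lexicon-based sentiment scores into a category-by-period matrix whose color density becomes the heat map. A minimal Python sketch follows; the toy lexicon and post format are invented for illustration (the study itself used R packages such as tm and KoNLP):

```python
from collections import defaultdict

# Toy domain lexicon (invented; real lexicons are built per domain)
LEXICON = {"delicious": 1, "spicy": 1, "fresh": 1, "salty": -1, "expensive": -1}

def sentiment_score(tokens):
    """Sum of lexicon polarities over a post's tokens."""
    return sum(LEXICON.get(t, 0) for t in tokens)

def heatmap_matrix(posts):
    """Aggregate (category, period) -> net sentiment; this is the matrix
    whose cell values drive the color density of a heat map."""
    cells = defaultdict(int)
    for category, period, tokens in posts:
        cells[(category, period)] += sentiment_score(tokens)
    return dict(cells)
```

Plotting such a matrix with any charting library then yields the category-by-time heat map described in the abstract.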

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.109-122
    • /
    • 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, greatly influencing society. This is a phenomenon unmatched in history, and we now live in the age of big data. SNS data satisfies the defining conditions of big data: the amount of data (volume), the speed of data input and output (velocity), and the variety of data types (variety). Trends of issues discovered in SNS big data can be used as an important new source of value creation, because this information covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and implemented to meet the need to analyze SNS big data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides four functions: (1) provide the topic keyword set corresponding to the daily ranking; (2) visualize the daily time-series graph of a topic over the course of a month; (3) show the importance of a topic through a treemap based on the scoring system and frequency; (4) visualize the daily time-series graph of keywords matching a searched keyword. The present study analyzes the big data generated by SNS in real time. SNS big data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, for processing various unrefined forms of unstructured data. In addition, such analysis requires the latest big data technology to rapidly process a large amount of real-time data, such as the Hadoop distributed system or NoSQL, an alternative to relational databases. We built TITS on Hadoop to optimize big data processing, because Hadoop is designed to scale up from single-node computing to thousands of machines. Furthermore, we use MongoDB, which is classified as a NoSQL database. MongoDB is an open-source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables; its most important goals are data accessibility and data processing performance. In the age of big data, visualization is especially attractive to the big data community because it helps analysts examine data easily and clearly. Therefore, TITS uses the d3.js library as its visualization tool. The library is designed for creating Data-Driven Documents that bind the document object model (DOM) to arbitrary data; interaction with the data is easy, and it is useful for managing real-time data streams with smooth animation. In addition, TITS uses Bootstrap, made of pre-configured plug-in style sheets and JavaScript libraries, to build the web system. The TITS graphical user interface (GUI) is designed with these libraries and can detect issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the superiority of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique used in various research areas, including Library and Information Science (LIS), and confirm the utility of storytelling and time series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets posted in Korea during March 2013.
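The daily topic-keyword ranking such a system produces can be approximated with a simple frequency-weighted log-odds score of a day's words against a background corpus. This is a simplified stand-in for the paper's topic-modeling-based scoring, not its actual algorithm:

```python
import math
from collections import Counter

def daily_issue_keywords(day_tweets, background, top_n=5):
    """Rank a day's words by frequency-weighted log-odds against a
    background corpus, so bursty issue words rise to the top.
    day_tweets: list of token lists; background: Counter of word counts."""
    day = Counter(w for tweet in day_tweets for w in tweet)
    total_day = sum(day.values())
    total_bg = sum(background.values())
    vocab_bg = len(background)

    def weight(w):
        p_day = day[w] / total_day
        p_bg = (background.get(w, 0) + 1) / (total_bg + vocab_bg + 1)  # add-one smoothing
        return day[w] * math.log(p_day / p_bg)

    return sorted(day, key=weight, reverse=True)[:top_n]
```

Common words like "the" score near zero against the background, while a word that suddenly bursts on one day dominates the ranking, which is the behavior an issue tracker needs.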

Aspect-Based Sentiment Analysis Using BERT: Developing Aspect Category Sentiment Classification Models (BERT를 활용한 속성기반 감성분석: 속성카테고리 감성분류 모델 개발)

  • Park, Hyun-jung;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.1-25
    • /
    • 2020
  • Sentiment Analysis (SA) is a Natural Language Processing (NLP) task that analyzes the sentiments consumers or the public feel about an arbitrary object from written texts. Aspect-Based Sentiment Analysis (ABSA) is a fine-grained analysis of the sentiments towards each aspect of an object. Because it has more practical value in business terms, ABSA is drawing attention from both academic and industrial organizations. When there is a review that says "The restaurant is expensive but the food is really fantastic", for example, general SA evaluates the overall sentiment towards the 'restaurant' as 'positive', while ABSA identifies the restaurant's 'price' aspect as 'negative' and its 'food' aspect as 'positive'. Thus, ABSA enables a more specific and effective marketing strategy. To perform ABSA, it is necessary to identify the aspect terms or aspect categories included in the text and judge the sentiments towards them. Accordingly, there are four main areas in ABSA: aspect term extraction, aspect category detection, Aspect Term Sentiment Classification (ATSC), and Aspect Category Sentiment Classification (ACSC). ABSA is usually conducted by extracting aspect terms and then performing ATSC to analyze sentiments for the given aspect terms, or by extracting aspect categories and then performing ACSC to analyze sentiments for the given aspect categories. Here, an aspect category is expressed in one or more aspect terms, or indirectly inferred from other words. In the preceding example sentence, 'price' and 'food' are both aspect categories, and the aspect category 'food' is expressed by the aspect term 'food' included in the review. If the review sentence includes 'pasta', 'steak', or 'grilled chicken special', these can all be aspect terms for the aspect category 'food'. An aspect category referred to by one or more specific aspect terms is called an explicit aspect. On the other hand, an aspect category like 'price', which has no specific aspect terms but can be indirectly inferred from an emotional word such as 'expensive', is called an implicit aspect. So far, the term 'aspect category' has been used to avoid confusion with 'aspect term'. From now on, we treat 'aspect category' and 'aspect' as the same concept and use the word 'aspect' for convenience. Note that ATSC analyzes the sentiment towards given aspect terms, so it deals only with explicit aspects, whereas ACSC treats both explicit and implicit aspects. This study seeks answers to the following issues, ignored in previous studies, when applying the BERT pre-trained language model to ACSC, and derives superior ACSC models. First, is it more effective to reflect the output vectors of the aspect category tokens than to use only the final output vector of the [CLS] token as the classification vector? Second, is there any performance difference between QA (Question Answering) and NLI (Natural Language Inference) types in the sentence-pair configuration of the input data? Third, is there any performance difference according to the order of the sentence containing the aspect category in the QA- or NLI-type sentence-pair input? To achieve these research objectives, we implemented 12 ACSC models and conducted experiments on 4 English benchmark datasets. As a result, we derived ACSC models whose performance exceeds that of existing studies without expanding the training dataset. In addition, it was found that reflecting the output vector of the aspect category token is more effective than using only the output vector of the [CLS] token as the classification vector. It was also found that QA-type input generally provides better performance than NLI, and that the order of the sentence with the aspect category in the QA type is irrelevant to performance.
There may be some differences depending on the characteristics of the dataset, but when using NLI type sentence-pair input, placing the sentence containing the aspect category second seems to provide better performance. The new methodology for designing the ACSC model used in this study could be similarly applied to other studies such as ATSC.
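The QA/NLI sentence-pair configurations compared in the study can be illustrated by how the input string for a BERT-style tokenizer is assembled. The auxiliary-sentence wording below is a plausible example of each style, not the paper's exact template:

```python
def build_acsc_input(review, aspect, style="QA", aspect_sentence_first=False):
    """Assemble the sentence pair fed to a BERT-style ACSC model.
    QA phrases the aspect as a question; NLI as a short pseudo-sentence.
    aspect_sentence_first controls the sentence order being compared."""
    if style == "QA":
        aux = f"what do you think of the {aspect} of it?"
    elif style == "NLI":
        aux = f"the polarity of the aspect {aspect} is"
    else:
        raise ValueError("style must be 'QA' or 'NLI'")
    first, second = (aux, review) if aspect_sentence_first else (review, aux)
    return f"[CLS] {first} [SEP] {second} [SEP]"
```

In a real pipeline the tokenizer would add the special tokens itself; the string form simply makes the four compared configurations (QA vs NLI, aspect sentence first vs second) visible.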

A study on the classification of research topics based on COVID-19 academic research using Topic modeling (토픽모델링을 활용한 COVID-19 학술 연구 기반 연구 주제 분류에 관한 연구)

  • Yoo, So-yeon;Lim, Gyoo-gun
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.155-174
    • /
    • 2022
  • From January 2020 to October 2021, more than 500,000 academic studies related to COVID-19 (the severe respiratory disease caused by the SARS-CoV-2 coronavirus) were published. The rapid increase in the number of papers related to COVID-19 places time and technical constraints on healthcare professionals and policy makers trying to find important research quickly. Therefore, in this study, we propose a method of extracting useful information from the text of an extensive literature using the LDA and Word2vec algorithms. Papers matching the keywords to be searched were extracted from the COVID-19 literature, and their detailed topics were identified. We used the CORD-19 dataset on Kaggle, a free academic resource prepared by major research groups and the White House in response to the COVID-19 pandemic and updated weekly. The research method has two main parts. First, 41,062 articles were collected through data filtering and pre-processing of the abstracts of 47,110 academic papers including full text. The number of COVID-19-related publications by year was analyzed through exploratory data analysis using a Python program, and the top 10 journals with the most active research were identified. The LDA and Word2vec algorithms were used to derive research topics related to COVID-19, and similarity was measured after analyzing related words. Second, papers containing 'vaccine' and 'treatment' were extracted from the topics derived from all papers: a total of 4,555 papers related to 'vaccine' and 5,971 papers related to 'treatment'. For each collected set of papers, detailed topics were analyzed using the LDA and Word2vec algorithms, and clustering with PCA dimension reduction was applied to visualize groups of papers with similar themes using the t-SNE algorithm. A noteworthy finding is that topics not derived when topic modeling was applied to all COVID-19 papers were derived when topic modeling was applied to each research topic separately. For example, topic modeling of the 'vaccine' papers extracted a new topic, Topic 05 'neutralizing antibodies'. A neutralizing antibody protects cells from infection when a virus enters the body and is said to play an important role in the production of therapeutic agents and in vaccine development. In addition, extracting topics from the 'treatment' papers revealed a new topic, Topic 05 'cytokine'. A cytokine storm occurs when the body's immune cells, rather than defending against an attack, attack normal cells. Hidden topics that could not be found across the entire corpus were classified according to keywords, and topic modeling was performed to find these detailed topics. In this study, we proposed a method of extracting topics from a large body of literature using the LDA algorithm and extracting similar words using the Skip-gram variant of Word2vec, which predicts context words from a central word. Combining the LDA and Word2vec models aims at better performance by identifying the relationships between documents and LDA topics together with the word relationships captured by Word2vec. In addition, as a clustering method with PCA dimension reduction, we presented a way to classify documents intuitively, using the t-SNE technique to group documents with similar themes into a structured organization. In a situation where the efforts of many researchers to overcome COVID-19 cannot keep up with the rapid publication of COVID-19-related academic papers, we hope this will save the precious time and effort of healthcare professionals and policy makers and help them rapidly gain new insights. It is also expected to serve as basic data for researchers exploring new research directions.
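Two of the building blocks in this pipeline can be sketched compactly: generating the (center, context) pairs that Skip-gram Word2vec trains on, and grouping papers by their dominant LDA topic before projecting them with PCA/t-SNE. The inputs below are toy examples; the study used full CORD-19 abstracts:

```python
def skipgram_pairs(tokens, window=2):
    """(center, context) training pairs: the input from which Skip-gram
    learns to predict context words given a center word."""
    pairs = []
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        pairs.extend((center, tokens[j]) for j in range(lo, hi) if j != i)
    return pairs

def dominant_topic_groups(doc_topic):
    """Assign each paper to its highest-probability LDA topic; these
    groups are what the t-SNE projection visualizes as clusters.
    doc_topic: {doc_id: [topic probabilities]}."""
    groups = {}
    for doc_id, dist in doc_topic.items():
        top = max(range(len(dist)), key=dist.__getitem__)
        groups.setdefault(top, []).append(doc_id)
    return groups
```

In the full pipeline, a trained LDA model supplies the per-document topic distributions and the learned Word2vec vectors supply the word-similarity measurements; the grouping step above is what turns those distributions into the color-coded clusters of the visualization.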

  • Comparison of Effects of Normothermic and Hypothermic Cardiopulmonary Bypass on Cerebral Metabolism During Cardiac Surgery (체외순환 시 뇌 대사에 대한 정상 체온 체외순환과 저 체온 체외순환의 임상적 영향에 관한 비교연구)

    • 조광현;박경택;김경현;최석철;최국렬;황윤호
      • Journal of Chest Surgery
      • /
      • v.35 no.6
      • /
      • pp.420-429
      • /
      • 2002
    • Moderate hypothermic cardiopulmonary bypass (CPB) has commonly been used in cardiac surgery. Several cardiac centers have recently practiced normothermic CPB in cardiac surgery; however, the clinical effect and safety of normothermic CPB on cerebral metabolism are not established and not fully understood. This study was prospectively designed to evaluate the clinical influence of normothermic CPB on brain metabolism and to compare it with that of moderate hypothermic CPB. Material and Method: Thirty-six adult patients scheduled for elective cardiac surgery were randomized to receive normothermic (nasopharyngeal temperature >34.5°C, n=18) or hypothermic (nasopharyngeal temperature 29~30°C, n=18) CPB with a nonpulsatile pump. Middle cerebral artery blood flow velocity (VMCA), cerebral arteriovenous oxygen content difference (CAVO2), cerebral oxygen extraction (COE), modified cerebral metabolic rate for oxygen (MCMRO2), cerebral oxygen transport (TEO2), cerebral venous desaturation (oxygen saturation in internal jugular bulb blood ≤50%), and arterial and internal jugular bulb blood gas values were measured during six phases of the operation: Pre-CPB (control), CPB-10 min, Rewarm-1 (nasopharyngeal temperature 34°C in the hypothermic group), Rewarm-2 (nasopharyngeal temperature 37°C in both groups), CPB-off, and Post-CPB (skin closure after CPB-off). Postoperative neuropsychologic complications were recorded for all patients, and all variables were compared between the two groups. Result: VMCA at Rewarm-2 was higher in the hypothermic group (153.11±8.98%) than in the normothermic group (131.18±6.94%) (p<0.05). CAVO2 (3.47±0.21 vs 4.28±0.29 mL/dL, p<0.05), COE (0.30±0.02 vs 0.39±0.02, p<0.05), and MCMRO2 (4.71±0.42 vs 5.36±0.45, p<0.05) at CPB-10 min were lower in the hypothermic group than in the normothermic group. The hypothermic group had higher TEO2 than the normothermic group at CPB-10 (1,527.60±25.84 vs 1,368.74±20.03, p<0.05), Rewarm-2 (1,757.50±32.30 vs 1,478.60±27.41, p<0.05), and Post-CPB (1,734.37±41.45 vs 1,597.68±27.50, p<0.05). Internal jugular bulb oxygen tension (40.96±1.16 vs 34.79±2.18 mmHg, p<0.05), saturation (72.63±2.68 vs 64.76±2.49%, p<0.05), and content (8.08±0.34 vs 6.78±0.43 mL/dL, p<0.05) at CPB-10 were higher in the hypothermic group than in the normothermic group. The hypothermic group had a lower incidence of postoperative neurologic complications (delirium) than the normothermic group (2 vs 4 patients, p<0.05), and postoperative delirium lasted for a shorter period in the hypothermic group (60 vs 160 hrs, p<0.01). Conclusion: These results indicate that normothermic CPB should not be routinely applied in all cardiac surgery, especially in patients of advanced age or in clinical situations requiring prolonged operative time. Moderate hypothermic CPB may have relatively beneficial influences on brain metabolism and postoperative neuropsychologic outcomes compared with normothermic CPB.


    (34141) Korea Institute of Science and Technology Information, 245, Daehak-ro, Yuseong-gu, Daejeon
    Copyright (C) KISTI. All Rights Reserved.