The Precision Test Based on States of Bone Mineral Density (골밀도 상태에 따른 검사자의 재현성 평가)

  • Yoo, Jae-Sook;Kim, Eun-Hye;Kim, Ho-Seong;Shin, Sang-Ki;Cho, Si-Man
    • The Korean Journal of Nuclear Medicine Technology / v.13 no.1 / pp.67-72 / 2009
  • Purpose: The ISCD (International Society for Clinical Densitometry) requires users to perform a mandatory precision test to assure quality, but it gives no recommendation on how patients should be selected for the test. We therefore investigated whether bone density status affects the precision test by measuring reproducibility in three bone density groups (normal, osteopenia, osteoporosis). Materials and Methods: Four users performed precision tests on 420 patients (age 57.8 ± 9.02 years) referred for BMD at Asan Medical Center (January-June 2008). In the first group (A), each of the four users selected 30 patients regardless of bone density status and measured two sites (L-spine and femur) twice. In the second group (B), each of the four users measured 10 patients in the same manner, but with the patients divided into three categories (normal, osteopenia, osteoporosis). In the third group (C), two users each measured 30 patients in the same manner as group A while taking bone density status into account. Measurements were made on a GE Lunar Prodigy Advance (Encore v11.4), and the results were analyzed by comparing %CV with the LSC using the ISCD precision tool; the analysis was cross-checked with SPSS. Results: In group A, the %CV values calculated by the four users (a, b, c, d) were 1.16, 1.01, 1.19, and 0.65 in the L-spine and 0.69, 0.58, 0.97, and 0.47 in the femur. In group B, the corresponding %CV values were 1.01, 1.19, 0.83, and 1.37 in the L-spine and 1.03, 0.54, 0.69, and 0.58 in the femur. Comparing groups A and B, we found no appreciable difference. In group C, user 1's %CV for normal, osteopenia, and osteoporosis was 1.26, 0.94, and 0.94 in the L-spine and 0.94, 0.79, and 1.01 in the femur, while user 2's %CV was 0.97, 0.83, and 0.72 in the L-spine and 0.65, 0.65, and 1.05 in the femur. The reproducibility differences between bone density groups were negligible, although differences in a few individual values between the two users affected the overall reproducibility. Conclusions: The precision test is an important factor in bone density follow-up: the better the machine's and the user's reproducibility, the smaller the range of deviation and the more useful the measurement is clinically. Users should check the machine's reproducibility before testing and perform BMD scans consistently. In precision testing, differences in measured values usually arise from ROI changes caused by patient positioning. In osteoporosis patients it is harder to place the initial ROI accurately than in normal or osteopenic patients because of poor bone recognition, even when the ROI is generated automatically by the software. The initial ROI is nonetheless critical, and users must place it consistently because the ROI copy function is used at follow-up. In this study we performed precision tests while taking bone density status into account and found that the LSC remained within 3%, with no appreciable differences between groups. Patient selection for the precision test can therefore be made regardless of bone density status.
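A note on the computation this abstract relies on: the ISCD-style precision test compares each user's %CV against the least significant change (LSC). The sketch below is our illustration, not the authors' ISCD precision tool; it computes the root-mean-square SD, %CV, and 95%-confidence LSC from duplicate scans, and the sample values are made up.

```python
import numpy as np

def iscd_precision(scan1, scan2):
    """Precision error and LSC from duplicate BMD scans (ISCD-style).

    scan1, scan2: first and second BMD measurements (g/cm^2), one pair
    per patient. Returns RMS-SD, %CV, and the LSC at 95% confidence.
    """
    scan1, scan2 = np.asarray(scan1, float), np.asarray(scan2, float)
    pair_mean = (scan1 + scan2) / 2.0
    # With two measurements per patient, the per-patient SD is |x1 - x2| / sqrt(2).
    pair_sd = np.abs(scan1 - scan2) / np.sqrt(2.0)
    rms_sd = np.sqrt(np.mean(pair_sd ** 2))           # precision error in g/cm^2
    cv_percent = 100.0 * rms_sd / np.mean(pair_mean)  # precision error as %CV
    lsc = 2.77 * cv_percent                           # least significant change (%) at 95% confidence
    return rms_sd, cv_percent, lsc

# Example with made-up L-spine values for a handful of patients
first  = [0.912, 1.034, 0.876, 1.101, 0.954]
second = [0.918, 1.025, 0.881, 1.093, 0.961]
print(iscd_precision(first, second))
```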

Utility of Wide Beam Reconstruction in Whole Body Bone Scan (전신 뼈 검사에서 Wide Beam Reconstruction 기법의 유용성)

  • Kim, Jung-Yul;Kang, Chung-Koo;Park, Min-Soo;Park, Hoon-Hee;Lim, Han-Sang;Kim, Jae-Sam;Lee, Chang-Ho
    • The Korean Journal of Nuclear Medicine Technology / v.14 no.1 / pp.83-89 / 2010
  • Purpose: The Wide Beam Reconstruction (WBR) algorithm from UltraSPECT, Ltd. (U.S.) improves image resolution by removing the blurring caused by the collimator's line spread function and by suppressing noise; it controls the resolution and noise level automatically and yields high image quality. The aim of this study was to assess the clinical usefulness of WBR in whole-body bone scans. Materials and Methods: Planar line-source and single photon emission computed tomography (SPECT) spatial resolution measurements were performed on an INFINA gamma camera (GE, Milwaukee, WI) equipped with low-energy high-resolution (LEHR) collimators. Line-source measurements were acquired with total counts of 200 kcps and 300 kcps, and the SPECT phantom data were analyzed for spatial resolution while varying the matrix size. A clinical evaluation was also performed in forty-three patients referred for bone scans. In the first group, the scan speed was varied between 20 and 30 cm/min with a fixed dose of 740 MBq (20 mCi) of 99mTc-HDP; in the second group, the dose of 99mTc-HDP was varied between 740 and 1,110 MBq (20 and 30 mCi) at the same scan speed. The acquired data were reconstructed with both the standard clinical protocol and the WBR protocol. Patient information was removed and each reconstruction was read blindly; for each reading the reader completed a questionnaire scoring the images on a 1-5 point scale. Results: Planar WBR improved resolution by more than 10%, with the full width at half maximum (FWHM) improving by about 16% (standard: 8.45, WBR: 7.09). SPECT WBR improved resolution by roughly 50% (FWHM standard: 3.52, WBR: 1.65). In the clinical evaluation, which included the bone-to-soft-tissue ratio and image resolution, there was no statistically significant difference between the two methods (first group p=0.07, second group p=0.458). Conclusion: WBR makes it possible to shorten bone-scan acquisition time while providing improved image quality, and to reduce the administered radiopharmaceutical activity and hence the radiation dose. WBR can therefore be applied across a wide range of clinical applications, providing clinical value as well as image quality.
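The planar and SPECT resolution figures above are full width at half maximum (FWHM) values measured from line-source profiles. As a rough illustration of how an FWHM can be read off a sampled line spread function (our sketch, not the WBR algorithm or the vendor's analysis), with an invented profile and pixel spacing:

```python
import numpy as np

def fwhm(profile, pixel_spacing=1.0):
    """Estimate FWHM of a 1-D line spread function by linear interpolation.

    profile: sampled counts across the line source.
    pixel_spacing: distance between samples (e.g. mm per pixel).
    """
    y = np.asarray(profile, float)
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    left, right = above[0], above[-1]
    # Interpolate the half-maximum crossing positions on both sides of the peak.
    lx = left - 1 + (half - y[left - 1]) / (y[left] - y[left - 1])
    rx = right + (half - y[right]) / (y[right + 1] - y[right])
    return (rx - lx) * pixel_spacing

# Example with a synthetic, roughly Gaussian profile sampled every 2.0 units
profile = [1, 3, 10, 40, 95, 100, 90, 35, 8, 2, 1]
print(fwhm(profile, pixel_spacing=2.0))
```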

Optimization of Multiclass Support Vector Machine using Genetic Algorithm: Application to the Prediction of Corporate Credit Rating (유전자 알고리즘을 이용한 다분류 SVM의 최적화: 기업신용등급 예측에의 응용)

  • Ahn, Hyunchul
    • Information Systems Review / v.16 no.3 / pp.161-177 / 2014
  • Corporate credit rating assessment consists of complicated processes in which various factors describing a company are taken into consideration. Such assessment is known to be very expensive since domain experts must be employed to assess the ratings. As a result, data-driven corporate credit rating prediction using statistical and artificial intelligence (AI) techniques has received considerable attention from researchers and practitioners. In particular, statistical methods such as multiple discriminant analysis (MDA) and multinomial logistic regression (MLOGIT), and AI methods including case-based reasoning (CBR), artificial neural networks (ANN), and multiclass support vector machines (MSVM), have been applied to corporate credit rating. Among them, MSVM has recently become popular because of its robustness and high prediction accuracy. In this study, we propose a novel optimized MSVM model and apply it to corporate credit rating prediction in order to enhance accuracy. Our model, named 'GAMSVM (Genetic Algorithm-optimized Multiclass Support Vector Machine),' is designed to optimize the kernel parameters and the feature subset selection simultaneously. Prior studies such as Lorena and de Carvalho (2008) and Chatterjee (2013) show that proper kernel parameters may improve the performance of MSVMs, and the results of Shieh and Yang (2008) and Chatterjee (2013) imply that appropriate feature selection may lead to higher prediction accuracy. Based on these prior studies, we propose to apply GAMSVM to corporate credit rating prediction. As the tool for optimizing the kernel parameters and the feature subset selection, we use the genetic algorithm (GA). GA is known as an efficient and effective search method that simulates biological evolution: by applying genetic operations such as selection, crossover, and mutation, it gradually improves the search results. In particular, the mutation operator prevents GA from falling into local optima, so a globally optimal or near-optimal solution can be found. GA has been widely applied to search for optimal parameters or feature subsets of AI techniques, including MSVM; for these reasons, we adopt GA as our optimization tool. To empirically validate the usefulness of GAMSVM, we applied it to a real-world credit rating case in Korea. Our application is bond rating, the most frequently studied area of credit rating for specific debt issues or other financial obligations. The experimental dataset was collected from a large credit rating company in South Korea and contained 39 financial ratios of 1,295 companies in the manufacturing industry, together with their credit ratings. Using statistical methods including one-way ANOVA and stepwise MDA, we selected 14 financial ratios as candidate independent variables. The dependent variable, credit rating, was labeled as four classes: 1 (A1); 2 (A2); 3 (A3); 4 (B and C). For each class, 80 percent of the data was used for training and the remaining 20 percent for validation, and five-fold cross validation was applied to mitigate the small sample size. To examine the competitiveness of the proposed model, we also tested several comparative models, including MDA, MLOGIT, CBR, ANN, and MSVM. For MSVM, we adopted the One-Against-One (OAO) and DAGSVM (Directed Acyclic Graph SVM) approaches because they are known to be the most accurate among the various MSVM approaches. GAMSVM was implemented using LIBSVM, an open-source SVM library, and Evolver 5.5, a commercial software package that provides GA. The comparative models were run using various statistical and AI packages such as SPSS for Windows, Neuroshell, and Microsoft Excel VBA (Visual Basic for Applications). The experimental results showed that the proposed model, GAMSVM, outperformed all the competing models; in addition, it used fewer independent variables while showing higher accuracy. In our experiments, five variables, X7 (total debt), X9 (sales per employee), X13 (years since founding), X15 (accumulated earnings to total assets), and X39 (an index related to cash flows from operating activities), were found to be the most important factors in predicting corporate credit ratings. However, the finally selected kernel parameter values were almost the same across the data subsets. To examine whether the predictive performance of GAMSVM was significantly better than that of the other models, we used the McNemar test. GAMSVM was better than MDA, MLOGIT, CBR, and ANN at the 1% significance level, and better than OAO and DAGSVM at the 5% significance level.
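To make the GA-plus-MSVM idea concrete, here is a minimal sketch; it is our illustration, not the authors' GAMSVM (which was built with LIBSVM and Evolver 5.5). Each chromosome encodes an RBF kernel parameter pair together with a binary feature mask, and fitness is the cross-validated accuracy of a one-against-one SVM; the dataset, population size, and generation count are placeholders.

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_wine(return_X_y=True)          # stand-in multiclass dataset
n_features, pop_size, generations = X.shape[1], 20, 15

def random_chromosome():
    # [log2(C), log2(gamma), feature mask...]
    return np.concatenate([rng.uniform(-5, 10, 2), rng.integers(0, 2, n_features)])

def fitness(ch):
    mask = ch[2:].astype(bool)
    if not mask.any():
        return 0.0
    clf = SVC(C=2.0 ** ch[0], gamma=2.0 ** ch[1],
              kernel="rbf", decision_function_shape="ovo")
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

pop = [random_chromosome() for _ in range(pop_size)]
for _ in range(generations):
    scores = np.array([fitness(c) for c in pop])
    parents = [pop[i] for i in np.argsort(scores)[-pop_size // 2:]]   # selection
    children = []
    while len(children) < pop_size - len(parents):
        a, b = rng.choice(len(parents), 2, replace=False)
        cut = rng.integers(1, len(parents[a]))                        # one-point crossover
        child = np.concatenate([parents[a][:cut], parents[b][cut:]])
        flip = rng.random(n_features) < 0.05                          # mutation on the feature mask
        child[2:][flip] = 1 - child[2:][flip]
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("best cross-validated accuracy:", fitness(best))
```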

Motives for Writing After-Purchase Consumer Reviews in Online Stores and Classification of Online Store Shoppers (인터넷 점포에서의 구매후기 작성 동기 및 점포 고객 유형화)

  • Hong, Hee-Sook;Ryu, Sung-Min
    • Journal of Distribution Research / v.17 no.3 / pp.25-57 / 2012
  • This study identified motives for writing apparel product reviews in online stores and determined which motives increase review-writing behavior. It also classified store customers by type of writing motive and clarified the characteristics of their internet purchase behavior and demographic profile. Data were collected from 252 women in their 20s and 30s who had experience reading and writing reviews when shopping online. Five types of writing motives were identified: altruistic information sharing, remedying of a grievance and vengeance, economic incentives, helping new product development, and expression of satisfaction. Of these, altruistic information sharing, economic incentives, and helping new product development stimulate the writing of reviews. Store customers who write reviews were classified into three groups based on their motive types: other-consumer advocates (29.8%), self-interested shoppers (40.5%), and shoppers with moderate motives (29.8%). There were significant differences among the three groups in writing behavior (frequency of writing reviews, intent to write reviews, duration of writing reviews, and frequency of online shopping) and in age. Based on the results, managerial implications are suggested. Long Abstract: The purpose of the present study is to identify the types of motives for writing reviews in online shopping and to clarify which motives affect review-writing behavior. The study also classifies online shoppers by motive type and identifies the characteristics of the resulting groups in terms of writing behavior, frequency of online shopping, and demographics. Uses and gratifications theory was adopted. Both qualitative research (focus group interviews) and quantitative research were used. Korean women (20 to 39 years old) who reported experience with purchasing clothing online and with reading and writing reviews were selected as the sample (n=252). Most respondents were relatively young (20-34 yrs., 86.1%), single (61.1%), employed (61.1%), and residents of big cities (50.9%). About 69.8% of respondents read, and 40.5% write, apparel reviews frequently or very frequently; 24.6% rated their writing frequency as "average." Based on the focus group interviews and previous studies of motives for online community activities, measurement items for after-purchase review-writing motives were developed. All items used a five-point Likert scale with endpoints 1 (strongly disagree) and 5 (strongly agree). The degree of writing behavior was measured by items on experience of writing reviews, frequency of writing reviews, amount of writing, and intention to write reviews, also on a five-point scale (strongly disagree-strongly agree). SPSS 18.0 was used for exploratory factor analysis, K-means cluster analysis, one-way ANOVA (Scheffé test), and the χ²-test; confirmatory factor analysis and path model analysis were conducted with AMOS 18.0. Principal components factor analysis of the measurement items (varimax rotation, extracting factors with eigenvalues above 1.0) identified five factors: altruistic information sharing, remedying of a grievance and vengeance, economic incentives, helping new product development, and expression of satisfaction (see Table 1). The measurement model including these final items was analyzed by confirmatory factor analysis. The measurement model had good fit indices (GFI=.918, AGFI=.884, RMR=.070, RMSEA=.054, TLI=.941) except for the probability value of the χ² test (χ²=189.078, df=109, p=.00). Convergent validity of all variables was confirmed using composite reliability, and all SMC values were lower than the AVEs, confirming discriminant validity. The path model's goodness of fit exceeded the recommended limits on several indices (GFI=.905, AGFI=.872, RMR=.070, RMSEA=.052, TLI=.935; χ²=260.433, df=155, p=.00). Table 2 shows that the motives of altruistic information sharing, economic incentives, and helping new product development significantly increased the degree of review writing in online shopping; in particular, the effects of altruistic information sharing and the pursuit of economic incentives were larger than the effect of helping new product development. As shown in Table 3, online store shoppers were classified into three groups: other-consumer advocates (29.8%), self-interested shoppers (40.5%), and moderate shoppers (29.8%). There were significant differences among the three groups in the degree of review writing (experience, frequency, amount, intention, and duration of writing reviews, and frequency of online shopping) and in age. For all five aspects of writing behavior, the other-consumer-advocates group, composed mainly of respondents in their 20s, scored higher than the other two groups. There were no significant differences between the self-interested group and the moderate group in writing behavior or demographics.
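As a minimal sketch of the segmentation step described above (K-means on motive scores followed by a group comparison), here is our illustration with synthetic data; it is not the authors' SPSS/AMOS analysis, and the column names, sample size, and cluster count are assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from scipy.stats import f_oneway

rng = np.random.default_rng(42)
motives = ["altruistic", "grievance", "economic", "product_dev", "satisfaction"]

# Synthetic 5-point motive scores for 252 hypothetical respondents
df = pd.DataFrame(rng.integers(1, 6, size=(252, 5)), columns=motives)
df["writing_freq"] = rng.integers(1, 6, size=252)

# Segment shoppers into three clusters on their motive profiles
km = KMeans(n_clusters=3, n_init=10, random_state=0)
df["cluster"] = km.fit_predict(df[motives])

# One-way ANOVA: does writing frequency differ across the three segments?
groups = [g["writing_freq"].values for _, g in df.groupby("cluster")]
print(df.groupby("cluster")[motives + ["writing_freq"]].mean().round(2))
print("ANOVA F=%.2f, p=%.3f" % f_oneway(*groups))
```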

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.109-122 / 2014
  • People nowadays create a tremendous amount of data on social network services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, greatly influencing society; this is an unmatched phenomenon in history, and we now live in the age of Big Data. SNS data satisfies the conditions of Big Data in terms of the amount of data (volume), data input and output speed (velocity), and the variety of data types (variety). If the trend of an issue can be discovered in SNS Big Data, this information can be used as an important new source for the creation of value, because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the needs of SNS Big Data analysis. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides four functions: (1) it provides the topic keyword set corresponding to a daily ranking; (2) it visualizes the daily time-series graph of a topic over the course of a month; (3) it conveys the importance of a topic through a treemap based on a scoring system and frequency; and (4) it visualizes the daily time-series graph of a keyword via keyword search. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, to process the many unrefined forms of unstructured data. In addition, such analysis requires the latest Big Data technology to rapidly process large amounts of real-time data, such as the Hadoop distributed system or NoSQL, an alternative to relational databases. We built TITS on Hadoop to optimize the processing of Big Data, because Hadoop is designed to scale from single-node computing to thousands of machines. Furthermore, we use MongoDB, which is classified as a NoSQL database. MongoDB is an open-source, document-oriented database that provides high performance, high availability, and automatic scaling; unlike existing relational databases, it has no schemas or tables, and its most important goals are data accessibility and data-processing performance. In the age of Big Data, visualization is attractive to the Big Data community because it helps analysts examine data easily and clearly; TITS therefore uses the d3.js library as its visualization tool. This library is designed for creating data-driven documents that bind the document object model (DOM) to data, makes interaction with data easy, and is useful for managing a real-time data stream with smooth animation. In addition, TITS uses Bootstrap, a set of pre-configured plug-in style sheets and JavaScript libraries, to build the web system. The TITS graphical user interface (GUI) is designed with these libraries and is capable of detecting issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the superiority of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time Big Data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique used in various research areas, including library and information science (LIS), and on this basis confirm the utility of storytelling and time-series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets collected in Korea during March 2013.
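The issue-extraction step described above rests on topic modeling over preprocessed tweet text. A minimal sketch of that step, using toy tweets and scikit-learn, is shown below; it is our illustration, not the TITS pipeline itself (which runs on Hadoop and MongoDB), and the tweet texts, topic count, and stop-word handling are assumptions.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

tweets = [
    "subway strike delays commuters downtown this morning",
    "commuters angry as subway strike enters second day",
    "new phone model announced with bigger screen and battery",
    "battery life of the new phone model impresses reviewers",
    "heavy rain floods downtown streets and delays buses",
]

# Term-document matrix with basic stop-word removal (noun extraction omitted)
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(tweets)

# Fit a small LDA model and list the top keywords of each "issue"
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"topic {i}: {', '.join(top)}")
```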

Scalable Collaborative Filtering Technique based on Adaptive Clustering (적응형 군집화 기반 확장 용이한 협업 필터링 기법)

  • Lee, O-Joun;Hong, Min-Sung;Lee, Won-Jin;Lee, Jae-Dong
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.73-92 / 2014
  • An adaptive clustering-based collaborative filtering technique is proposed to address the fundamental problems of collaborative filtering, such as the cold-start problem, the scalability problem, and the data sparsity problem. Previous collaborative filtering techniques make recommendations based on the predicted preference of a user for a particular item, using a similar-item subset and a similar-user subset composed from users' preferences for items. For this reason, if the density of the user preference matrix is low, the reliability of the recommendation system decreases rapidly and it becomes harder to build the similar-item and similar-user subsets. In addition, as the scale of the service increases, the time needed to create these subsets grows geometrically and the response time of the recommendation system increases. To solve these problems, this paper proposes a collaborative filtering technique that actively adapts conditions to the model and adopts concepts from context-based filtering. The technique consists of four major steps. First, the items and the users are clustered according to their feature vectors, and an inter-cluster preference between each item cluster and each user cluster is then estimated. With this method, the run time for creating a similar-item or similar-user subset is economized, the reliability of the recommendation system can be made higher than when only user preference information is used, and the cold-start problem can be partially solved. Second, recommendations are made using the previously composed item and user clusters and the inter-cluster preferences between them. In this phase, a list of items is built for a user by examining item clusters in decreasing order of the inter-cluster preference of the cluster to which the user belongs, and then selecting and ranking items according to predicted or recorded preference information. With this method, the model-building phase bears the heaviest load of the recommendation system and the run-time load is minimized, so the scalability problem is addressed and a large-scale recommendation system can be operated with highly reliable collaborative filtering. Third, missing user preference information is predicted using the item and user clusters, which mitigates the problem caused by the low density of the user preference matrix. Existing studies used either item-based or user-based prediction; in this paper, Hao Ji's idea of using both is improved. The reliability of the recommendation service is improved by combining the predicted values of both techniques according to the condition of the recommendation model, and by predicting user preference from the item or user clusters, the time required for prediction is reduced and missing preferences can be predicted at run time. Fourth, the item and user feature vectors are made to learn from subsequent user feedback, by applying normalized feedback to the vectors. This step mitigates the problems caused by using concepts from context-based filtering, namely item and user feature vectors derived from the user profile and item properties; such vectors are limited by the difficulty of quantifying the qualitative features of items and users. The elements of the user and item feature vectors are therefore matched one to one, and when feedback from a user on a particular item is obtained, it is applied to the opposite feature vector. The method was verified by comparing its performance with existing hybrid filtering techniques using two measures: MAE (mean absolute error) and response time. The MAE results confirmed that the technique improves the reliability of the recommendation system, and the response-time results showed that it is suitable for a large-scale recommendation system. This paper thus proposes an adaptive clustering-based collaborative filtering technique with high reliability and low time complexity, but it has some limitations: because the technique focuses on reducing time complexity, a large improvement in reliability was not expected. Future work will improve the technique with rule-based filtering.
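As a rough illustration of the cluster-to-cluster preference idea above, here is a simplified sketch with an invented rating matrix; it is not the authors' model or Hao Ji's method. Users and items are clustered, a missing rating is filled with the mean rating between the corresponding user cluster and item cluster, and MAE is computed against a hypothetical held-out value.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy user-item rating matrix (0 = unknown)
R = np.array([
    [5, 4, 0, 1, 1],
    [4, 5, 4, 1, 2],
    [1, 1, 2, 5, 4],
    [2, 1, 1, 4, 5],
    [5, 5, 4, 2, 1],
], dtype=float)

# Cluster users on their rating rows and items on their rating columns
user_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(R)
item_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(R.T)

def intercluster_pref(u_cluster, i_cluster):
    """Mean of the known ratings between a user cluster and an item cluster."""
    block = R[np.ix_(user_labels == u_cluster, item_labels == i_cluster)]
    known = block[block > 0]
    return known.mean() if known.size else R[R > 0].mean()

# Predict the missing rating R[0, 2] from its cluster-to-cluster preference
pred = intercluster_pref(user_labels[0], item_labels[2])
true = 4.0                                   # hypothetical held-out rating
print("prediction:", round(pred, 2), "MAE:", abs(pred - true))
```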

Preparation of Vitamin E Acetate Nano-emulsion and In Vitro Research Regarding Vitamin E Acetate Transdermal Delivery System which Use Franz Diffusion Cell (Vitamin E Acetate를 함유한 Nano-emulsion 제조와 Franz Diffusion Cell을 이용한 Vitamin E Acetate의 경표피 흡수에 관한 In Vitro 연구)

  • Park, Soo-Nam;Kim, Jai-Hyun;Yang, Hee-Jung;Won, Bo-Ryoung;Ahn, You-Jin;Kang, Myung-Kyu
    • Journal of the Society of Cosmetic Scientists of Korea / v.35 no.2 / pp.91-101 / 2009
  • In this study, vitamin E acetate (VEA, tocopheryl acetate), a lipid-soluble vitamin widely used in the cosmetics and medical supply fields as an antioxidant material, was formulated into a stable skin-toner-type nanoemulsion. To evaluate skin permeation, experiments on VEA permeation through the skin of ICR outbred albino mice (12 weeks, about 50 g, female) and on differences in solubility as a function of the receptor formulation were performed. Analysis of nanoemulsions containing 0.07% VEA showed that the higher the ethanol content, the larger the emulsion droplets, while the higher the surfactant content, the smaller the droplet size. A certain ethanol content in the receptor phase increased the solubility of VEA from the nanoemulsion: when the ethanol content was 10.0% or 20.0%, VEA solubility was higher than at 5.0% or 40.0%, respectively. The type of surfactant in the receptor solution also influenced VEA solubility; comparing three surfactants with different chemical structures and HLB values, VEA solubility increased in the order sorbitan sesquioleate (Arlacel 83; HLB 3.7) > POE (10) hydrogenated castor oil (HCO-10; HLB 6.5) > sorbitan monostearate (Arlacel 60; HLB 4.7). VEA solubility also differed according to the type of antioxidant: at early time points, the solubility of the sample containing ascorbic acid was similar to that of samples containing other antioxidants, but after 24 h it was about twice as high as the others. Franz diffusion cell experiments using mouse skin were performed with four nanoemulsion samples having different VEA contents. The emulsion with 10 wt% ethanol was the most permeable, at 128.8 μg/cm²; compared with the initial input of 220.057 μg/cm², this corresponds to 58.53% permeation, and the amount permeated at 10% ethanol was 45.0% and 15.0% higher than the results at ethanol contents of 1.0 and 20.0 wt%, respectively. The emulsion particle size with 0.5% surfactant (HCO-60) was 26.0 nm, about one twentieth of the size obtained with 0.007% surfactant (HCO-60) at the same ethanol content; transepidermal permeation of VEA from this emulsion was 54.848 μg/cm², smaller than that of the emulsion with a particle size of 590.7 nm. From these results, the skin permeation of the VEA nanoemulsion and the differences in VEA solubility as a function of the receptor phase formulation were determined, and optimal conditions for transepidermal delivery of VEA could be established.

The Effect of Nasal BiPAP Ventilation in Acute Exacerbation of Chronic Obstructive Airway Disease (만성 기도폐쇄환자에서 급성 호흡 부전시 BiPAP 환기법의 치료 효과)

  • Cho, Young-Bok;Kim, Ki-Beom;Lee, Hak-Jun;Chung, Jin-Hong;Lee, Kwan-Ho;Lee, Hyun-Woo
    • Tuberculosis and Respiratory Diseases / v.43 no.2 / pp.190-200 / 1996
  • Background: Mechanical ventilation is the last therapeutic resort for acute respiratory failure when oxygen therapy and medical treatment fail to improve the respiratory status of the patient. This invasive ventilation, classically administered by endotracheal intubation or tracheostomy, is associated with significant mortality and morbidity; consequently, any less invasive method that avoids endotracheal ventilation would appear useful in high-risk patients. Over recent years, the efficacy of nasal mask ventilation has been demonstrated in the treatment of chronic restrictive respiratory failure, particularly in patients with neuromuscular diseases, and more recently this method has been used successfully in acute respiratory failure due to parenchymal disease. Method: We assessed the efficacy of bilevel positive airway pressure (BiPAP) in the treatment of acute exacerbations of chronic obstructive pulmonary disease (COPD). This study prospectively evaluated the clinical effectiveness of positive pressure ventilation via nasal mask (Respironics BiPAP device) in 22 patients with acute exacerbations of COPD. Eleven patients were treated with nasal pressure support ventilation delivered via a nasal ventilatory support system plus standard treatment for 3 consecutive days; an additional 11 control patients received standard treatment only, consisting of medical and oxygen therapy. Nasal BiPAP was delivered by a pressure support ventilator in spontaneous/timed mode with an inspiratory positive airway pressure of 6-8 cmH2O and an expiratory positive airway pressure of 3-4 cmH2O. Patients were evaluated with physical examination (respiratory rate), the modified Borg scale, and arterial blood gases before and after the acute therapeutic intervention. Results: Before treatment and after 3 days of treatment, mean PaO2 was 56.3 mmHg and 79.1 mmHg (p<0.05) in the BiPAP group and 56.9 mmHg and 70.2 mmHg (p<0.05) in the conventional treatment (CT) group; PaCO2 was 63.9 mmHg and 56.9 mmHg (p<0.05) in the BiPAP group and 53 mmHg and 52.8 mmHg in the CT group; and pH was 7.36 and 7.41 (p<0.05) in the BiPAP group and 7.37 and 7.38 in the CT group. Before and after treatment, the mean respiratory rate was 28 and 23 breaths/min in the BiPAP group and 25 and 20 breaths/min in the CT group, and the Borg scale was 7.6 and 4.7 in the BiPAP group and 6.4 and 3.8 in the CT group. There were significant differences between the two groups in the changes of mean PaO2, PaCO2, and pH. Conclusion: We conclude that short-term nasal pressure support ventilation delivered via nasal BiPAP in the treatment of acute exacerbations of COPD is an efficient mode of assisted ventilation for improving blood gas values and the sensation of dyspnea, and may reduce the need for endotracheal intubation with mechanical ventilation.

Research Trend Analysis Using Bibliographic Information and Citations of Cloud Computing Articles: Application of Social Network Analysis (클라우드 컴퓨팅 관련 논문의 서지정보 및 인용정보를 활용한 연구 동향 분석: 사회 네트워크 분석의 활용)

  • Kim, Dongsung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.195-211 / 2014
  • Cloud computing services provide IT resources as services on demand. This is considered a key concept that will lead a shift from an ownership-based paradigm to a new pay-for-use paradigm, which can reduce the fixed cost of IT resources and improve flexibility and scalability. As IT services, cloud services have evolved from earlier, related computing concepts such as network computing, utility computing, server-based computing, and grid computing, so research into cloud computing is highly related to and combined with various relevant computing research areas. To identify promising research issues and topics in cloud computing, it is necessary to understand its research trends more comprehensively. In this study, we collect bibliographic and citation information for cloud computing research papers published in major international journals from 1994 to 2012, and analyze macroscopic trends and network changes in the citation relationships among papers and the co-occurrence relationships of keywords using social network analysis measures. Through the analysis, we identify the relationships and connections among research topics in cloud computing related areas and highlight potential new research topics. In addition, we visualize dynamic changes of research topics relating to cloud computing using a proposed cloud computing "research trend map." A research trend map visualizes the positions of research topics in a two-dimensional space, with keyword frequency on the X-axis and the rate of increase of the degree centrality of keywords on the Y-axis. Based on these two dimensions, the space is divided into four areas: maturation, growth, promising, and decline. An area with high keyword frequency but a low rate of increase of degree centrality is defined as a mature technology area; an area where both keyword frequency and the increase rate of degree centrality are high is a growth technology area; an area where keyword frequency is low but the rate of increase of degree centrality is high is a promising technology area; and an area where both are low is a declining technology area. On this basis, cloud computing research trend maps make it possible to easily grasp the main research trends in cloud computing and to explain the evolution of research topics. According to the analysis of citation relationships, research papers on security, distributed processing, and optical networking for cloud computing rank at the top by the PageRank measure. From the analysis of keywords in research papers, cloud computing and grid computing showed high centrality in 2009, and keywords for main elemental technologies such as data outsourcing, error detection methods, and infrastructure construction showed high centrality in 2010-2011; in 2012, security, virtualization, and resource management showed high centrality. Moreover, interest in the technical issues of cloud computing was found to increase gradually. From the annual cloud computing research trend maps, it was verified that security is located in the promising area, virtualization has moved from the promising area to the growth area, and grid computing and distributed systems have moved to the declining area. These results indicate that distributed systems and grid computing received much attention as similar computing paradigms in the early stage of cloud computing research, a period focused on understanding and investigating cloud computing as an emergent technology and linking it to established computing concepts. After this early stage, security and virtualization technologies became main issues in cloud computing, which is reflected in their movement from the promising area to the growth area in the research trend maps. This study also reveals that current research in cloud computing has rapidly shifted from a focus on technical issues to a focus on application issues, such as SLAs (Service Level Agreements).
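The quadrant rule behind the research trend map can be stated compactly. The sketch below classifies keywords by frequency and by the increase rate of degree centrality using median-style thresholds; the keyword statistics and cut-offs are invented, since the paper's exact values are not given here.

```python
# Classify keywords into the four trend-map areas by frequency and
# growth of degree centrality; the numbers below are invented examples.
keywords = {
    # keyword: (frequency, increase rate of degree centrality)
    "security":            (30, 0.60),
    "virtualization":      (70, 0.55),
    "grid computing":      (25, 0.05),
    "resource management": (60, 0.10),
}

freq_cut = sorted(f for f, _ in keywords.values())[len(keywords) // 2]  # median-style threshold
rate_cut = sorted(r for _, r in keywords.values())[len(keywords) // 2]

def area(freq, rate):
    if freq >= freq_cut and rate >= rate_cut:
        return "growth"
    if freq >= freq_cut:
        return "maturation"
    if rate >= rate_cut:
        return "promising"
    return "decline"

for kw, (f, r) in keywords.items():
    print(f"{kw:20s} -> {area(f, r)}")
```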

Changes of Brain Natriuretic Peptide Levels according to Right Ventricular Hemodynamics after a Pulmonary Resection (폐절제술 후 우심실의 혈역학적 변화에 따른 BNP의 변화)

  • Na, Myung-Hoon;Han, Jong-Hee;Kang, Min-Woong;Yu, Jae-Hyeon;Lim, Seung-Pyung;Lee, Young;Choi, Jae-Sung;Yoon, Seok-Hwa;Choi, Si-Wan
    • Journal of Chest Surgery / v.40 no.9 / pp.593-599 / 2007
  • Background: The correlation between brain natriuretic peptide (BNP) levels and the effect of pulmonary resection on the right ventricle is not yet widely known. This study aims to assess the relationship between changes in the hemodynamic values of the right ventricle and increased BNP levels as a compensatory mechanism for right heart failure following pulmonary resection, and to evaluate the role of the BNP level as an index of right heart failure after pulmonary resection. Material and Method: In 12 non-small cell lung cancer patients who had undergone a lobectomy or pneumonectomy, the level of NT-proBNP was measured using an immunochemical method (Elecsys 1010®, Roche, Germany) and compared with hemodynamic variables determined with a Swan-Ganz catheter before and after surgery. Echocardiography was performed before and after surgery to measure changes in right ventricular and left ventricular pressures. For statistical analysis, the Wilcoxon rank sum test and linear regression analysis were conducted using SPSSWIN (version 11.5). Result: The level of postoperative NT-proBNP (pg/mL) increased significantly at 6 hours and at 1, 2, 3, and 7 days after surgery (p=0.003, 0.002, 0.002, 0.006, 0.004). Of the hemodynamic variables measured with the Swan-Ganz catheter, the mean pulmonary artery pressure increased significantly over the preoperative value at 0 hours, 6 hours, and 1, 2, and 3 days after surgery (p=0.002, 0.002, 0.006, 0.007, 0.008). The right ventricular pressure increased significantly at 0 hours, 6 hours, 1 day, and 3 days after surgery (p=0.000, 0.009, 0.044, 0.032). The pulmonary vascular resistance index [pulmonary vascular resistance index = (mean pulmonary artery pressure - mean pulmonary capillary wedge pressure) / cardiac output index] increased significantly at 6 hours and 2 days after surgery (p=0.008, 0.028). A regression analysis of the changes in mean pulmonary artery pressure and NT-proBNP levels after surgery showed significance at 6 hours (r=0.602, p=0.038) and no significance thereafter. Echocardiography displayed no significant changes after surgery. Conclusion: There was a significant correlation between the change in mean pulmonary artery pressure and the NT-proBNP level 6 hours after pulmonary resection. Changes in the NT-proBNP level after pulmonary resection can therefore serve as an index reflecting early hemodynamic changes in the right ventricle.
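For readability, the pulmonary vascular resistance index defined in brackets above can be written as an equation (the symbols below are ours, not notation from the paper):

```latex
% PVRI as defined in the abstract, in symbolic form
\mathrm{PVRI} \;=\; \frac{\bar{P}_{\mathrm{PA}} - \bar{P}_{\mathrm{PCW}}}{\mathrm{CI}}
```

where P̄_PA is the mean pulmonary artery pressure, P̄_PCW the mean pulmonary capillary wedge pressure, and CI the cardiac output index.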