• Title/Summary/Keyword: Accuracy Rate


Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.63-83
    • /
    • 2019
  • Investors prefer to look for trading points based on chart patterns rather than on complex analyses such as corporate intrinsic value analysis or technical indicator analysis. However, pattern analysis is difficult and has been computerized less than users need. In recent years there have been many studies of stock price patterns using machine learning techniques, including neural networks, from the field of artificial intelligence (AI). In particular, advances in IT have made it easier to analyze huge volumes of chart data to find patterns that can predict stock prices. Although short-term price forecasting performance has improved, long-term forecasting power remains limited, so these methods are used for short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier technology could not recognize, but such approaches can be vulnerable in practice, because whether the patterns found are suitable for trading is a separate question. When such studies find a meaningful pattern, they locate points that match the pattern and then measure performance after n days, assuming a purchase at that point in time. Since this approach calculates virtual revenues, it can diverge considerably from reality. Whereas existing research tries to find patterns with price prediction power, this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M & W wave patterns published by Merrill (1980) are simple because each can be distinguished by five turning points. Despite reports that some of these patterns have price predictability, no performance in the actual market has been reported. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern recognition accuracy. In this study, 16 upward-reversal patterns and 16 downward-reversal patterns are reclassified into ten groups so that they can be easily implemented in a system, and only the one pattern with the highest success rate per group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The measurement reflects a realistic situation because it assumes that both the buy and the sell were actually executed. We tested three ways to calculate turning points. The first, the minimum change rate zig-zag method, removes price movements below a certain percentage and then calculates the vertices. In the second, the high-low line zig-zag method, a high price that meets the n-day high price line is taken as a peak, and a low price that meets the n-day low price line is taken as a valley. In the third, the swing wave method, a central high price that is higher than the n high prices on its left and right is taken as a peak, and a central low price that is lower than the n low prices on its left and right is taken as a valley (see the sketch after this abstract). The swing wave method was superior to the other methods in our tests, which we interpret to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished. Because the number of cases in this simulation was too large to search exhaustively for patterns with high success rates, genetic algorithms (GA) were the most suitable solution. We also performed the simulation using the walk-forward analysis (WFA) method, which separates the test section from the application section, so that we could respond appropriately to market changes. In this study we optimize at the portfolio level, because optimizing the variables for each individual stock risks over-optimization; we therefore set the number of constituent stocks to 20 to increase the effect of diversification while avoiding over-optimization. We tested the KOSPI market by dividing it into six categories. The small-cap portfolio was the most successful and the high-volatility portfolio was second best. This shows that some price volatility is needed for patterns to form, but that more volatility is not always better.
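A minimal sketch of the swing wave rule described in this abstract: index i is a peak when its high exceeds the n highs on each side, and a valley when its low is below the n lows on each side. The function and parameter names are illustrative, and the choice of n is not specified in the abstract.

```python
def swing_wave_points(high, low, n=3):
    """Swing wave method: index i is a peak if high[i] exceeds the n highs
    on both its left and right; a valley if low[i] is below the n lows on
    both sides. Returns the indices of detected peaks and valleys."""
    peaks, valleys = [], []
    for i in range(n, len(high) - n):
        if high[i] > max(high[i - n:i]) and high[i] > max(high[i + 1:i + n + 1]):
            peaks.append(i)
        if low[i] < min(low[i - n:i]) and low[i] < min(low[i + 1:i + n + 1]):
            valleys.append(i)
    return peaks, valleys
```

Five consecutive turning points produced this way form one M- or W-shaped pattern candidate, which can then be matched against the ten pattern groups before a trade is taken.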

Variation on Estimated Values of Radioactivity Concentration According to the Change of the Acquisition Time of SPECT/CT (SPECT/CT의 획득시간 증감에 따른 방사능농도 추정치의 변화)

  • Kim, Ji-Hyeon;Lee, Jooyoung;Son, Hyeon-Soo;Park, Hoon-Hee
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.25 no.2
    • /
    • pp.15-24
    • /
    • 2021
  • Purpose: In the early stages of its dissemination, SPECT/CT was noted for its excellent correction methods and the qualitative value of its fusion images, and with the recent introduction of companion diagnostics and therapeutics (theranostics), interest in its quantitative capabilities has been increasing. Unlike PET/CT, various conditions such as collimator type and detector rotation make image acquisition and reconstruction challenging for absolute quantification in SPECT/CT. Therefore, in this study, we investigated how increasing or decreasing the total acquisition time, varied through either the number of projections or the acquisition time per projection, affects the radioactivity concentration estimate. Materials and Methods: After filling a 9,293 ml cylindrical phantom with sterile water and diluting 91.76 MBq of 99mTc in it, a standard image was acquired with a total acquisition time of 600 sec (10 sec/frame × 120 frames, matrix size 128 × 128), and the volume sensitivity and calibration factor were verified. Based on the standard image, comparative images were obtained by increasing or decreasing the total acquisition time: 60 (-90%), 150 (-75%), 300 (-50%), 450 (-25%), 900 (+50%), and 1,200 (+100%) sec. For each condition, the acquisition time per projection was set to 1.0, 2.5, 5.0, 7.5, 15.0, and 20.0 sec (number of projections fixed at 120 frames), and the number of projections was set to 12, 30, 60, 90, 180, and 240 frames (time per projection fixed at 10 sec). Based on the counts measured in a volume of interest in each acquired image, the percentage variation of the contrast-to-noise ratio (CNR) was determined as the qualitative assessment, and the percentage variation of the radioactivity concentration estimate served as the quantitative assessment. The relationship between the estimated radioactivity concentration (cps/ml) and the actual radioactivity concentration (Bq/ml) was compared and analyzed using the recovery coefficient (RC) as an indicator (see the sketch below). Results: The results [CNR, radioactivity concentration, RC] when the number of projections was varied, for each rate of change of total acquisition time (-90%, -75%, -50%, -25%, +50%, +100%), were: [-89.5%, +3.90%, 1.04] at -90%; [-77.9%, +2.71%, 1.03] at -75%; [-55.6%, +1.85%, 1.02] at -50%; [-33.6%, +1.37%, 1.01] at -25%; [+33.7%, +0.71%, 1.01] at +50%; and [+93.2%, +0.32%, 1.00] at +100%. The corresponding results when the acquisition time per projection was varied were: [-89.3%, -3.55%, 0.96] at -90%; [-73.4%, -0.17%, 1.00] at -75%; [-49.6%, -0.34%, 1.00] at -50%; [-24.9%, +0.03%, 1.00] at -25%; [+49.3%, -0.04%, 1.00] at +50%; and [+99.0%, +0.11%, 1.00] at +100%. Conclusion: In SPECT/CT, the total counts obtained and the resulting image quality (CNR) changed in proportion to the increase or decrease of total acquisition time. In contrast, quantitative evaluation through absolute quantification changed by less than 5% (-3.55 to +3.90%) under all experimental conditions, maintaining quantitative accuracy (RC 0.96 to 1.04). Considering that reducing, rather than increasing, the total acquisition time is what matters in practice, the reduction in total acquisition time is applicable to quantitative analysis without significant loss and is judged to be clinically useful. This study also shows that, when total acquisition time is increased or decreased under the same total scan time, changing the acquisition time per projection produces smaller qualitative and quantitative fluctuations than changing the number of projections.
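As a sketch of the quantitative step in this abstract, the recovery coefficient compares the concentration estimated from a VOI count rate (via a system calibration factor) with the known phantom concentration; the function and parameter names are illustrative, not the paper's.

```python
def recovery_coefficient(voi_cps, voi_volume_ml, cal_factor_bq_per_cps, true_bq_per_ml):
    """Convert a VOI count rate (cps) into an estimated activity
    concentration (Bq/ml) using a calibration factor, then report
    RC = estimated / true; RC = 1.0 means perfect recovery."""
    estimated_bq_per_ml = voi_cps * cal_factor_bq_per_cps / voi_volume_ml
    return estimated_bq_per_ml / true_bq_per_ml

# True concentration in this study's phantom: 91.76 MBq diluted in 9,293 ml
true_conc = 91.76e6 / 9293  # about 9,874 Bq/ml
```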

Establishment of an Analytical Method for Prometryn Residues in Clam Using GC-MS (GC-MS를 이용한 바지락 중 prometryn 잔류분석법 확립)

  • Chae, Young-Sik;Cho, Yoon-Jae;Jang, Kyung-Joo;Kim, Jae-Young;Lee, Sang-Mok;Chang, Moon-Ik
    • Korean Journal of Food Science and Technology
    • /
    • v.45 no.5
    • /
    • pp.531-536
    • /
    • 2013
  • We developed a simple, sensitive, and specific analytical method for prometryn using gas chromatography-mass spectrometry (GC-MS). Prometryn is a selective herbicide used to control annual grasses and broadleaf weeds in cotton and celery crops. On the basis of its high specificity, sensitivity, and reproducibility, combined with simple analytical operation, we propose that our newly developed method is suitable for use as a Ministry of Food and Drug Safety (MFDS, Korea) official method in the routine analysis of individual pesticide residues; further, the method is applicable to clams. The GC-MS separation was optimized using a DB-5MS capillary column (30 m × 0.25 mm, 0.25 μm) with helium as the carrier gas at a flow rate of 0.9 mL/min. We achieved high linearity over the concentration range 0.02-0.5 mg/L (correlation coefficient r² > 0.998). The method is specific and sensitive, with a limit of quantitation of 0.04 mg/kg. The average recovery in clams ranged from 84.0% to 98.0%, and the reproducibility of measurements, expressed as the coefficient of variation (CV%), ranged from 3.0% to 7.1% (see the sketch below for how these validation statistics are computed). Our analytical procedure showed high accuracy and acceptable sensitivity with respect to the analytical requirements for prometryn in fishery products. Finally, we successfully applied the method to the determination of residue levels in fishery products and showed that none of the analyzed samples contained detectable residues.
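The recovery and reproducibility figures above are standard method-validation statistics; as a sketch (not code from the paper), they can be computed from replicate spiked samples like this, where the replicate values are hypothetical:

```python
import statistics

def recovery_and_cv(measured_mg_kg, spiked_mg_kg):
    """Mean recovery (%) of replicate spiked samples, and reproducibility
    expressed as the coefficient of variation CV% = 100 * stdev / mean."""
    recoveries = [100.0 * m / spiked_mg_kg for m in measured_mg_kg]
    mean_recovery = statistics.mean(recoveries)
    cv_percent = 100.0 * statistics.stdev(recoveries) / mean_recovery
    return mean_recovery, cv_percent

# Hypothetical replicates spiked at the 0.04 mg/kg quantitation limit:
print(recovery_and_cv([0.034, 0.036, 0.035, 0.033, 0.037], 0.04))
```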

Increasing Accuracy of Classifying Useful Reviews by Removing Neutral Terms (중립도 기반 선택적 단어 제거를 통한 유용 리뷰 분류 정확도 향상 방안)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.3
    • /
    • pp.129-142
    • /
    • 2016
  • Customer product reviews have become one of the important factors in purchase decision making. Customers believe that reviews written by others who have already experienced a product offer more reliable information than that provided by sellers. However, when there are too many products and reviews, the advantage of e-commerce can be overwhelmed by increasing search costs: reading all of the reviews to find the pros and cons of a product can be exhausting. To help users find the most useful information about products without much difficulty, e-commerce companies provide various ways for customers to write and rate product reviews, and online stores have devised various ways to surface useful customer reviews. Different methods have been developed to classify and recommend useful reviews, primarily using customer feedback about the helpfulness of reviews. Most shopping websites provide customer reviews and offer the following information: the average preference for a product, the number of customers who have participated in preference voting, and the preference distribution. Most information on the helpfulness of product reviews is collected through a voting system. Amazon.com asks customers whether a review of a certain product is helpful, and it places the most helpful favorable review and the most helpful critical review at the top of the list of product reviews. Some companies also predict the usefulness of a review from attributes such as its length, its author(s), and the words used, publishing only reviews that are likely to be useful. Text mining approaches have been used to classify useful reviews in advance. To apply a text mining approach to all reviews for a product, we need to build a term-document matrix: we extract all words from the reviews and build a matrix of the number of occurrences of each term in each review. Since there are many reviews, the term-document matrix becomes very large, which makes it difficult to apply text mining algorithms; researchers therefore delete some terms on the basis of sparsity, since sparse words have little effect on classification or prediction. The purpose of this study is to suggest a better way of building the term-document matrix by deleting terms that are useless for review classification. We propose a neutrality index to select the words to be deleted: many words appear in both classes (useful and not useful), and these words have little or even a negative effect on classification performance. We define such words as neutral terms and delete those that appear similarly in both classes (see the sketch below). After deleting sparse words, we selected the words to be deleted in terms of neutrality. We tested our approach with Amazon.com review data from five product categories: Cellphones & Accessories, Movies & TV, Automotive, CDs & Vinyl, and Clothing, Shoes & Jewelry. We used reviews that received more than four votes, with a 60% ratio of useful votes to total votes as the threshold for classifying useful and not-useful reviews. We randomly selected 1,500 useful reviews and 1,500 not-useful reviews for each product category, applied Information Gain and Support Vector Machine algorithms to classify the reviews, and compared classification performance in terms of precision, recall, and F-measure. Although performance varies with product category and data set, deleting terms by both sparsity and neutrality showed the best F-measure for both classification algorithms. However, deleting terms by sparsity alone showed the best recall for Information Gain, and using all terms showed the best precision for SVM. Care is therefore needed in choosing the term deletion method and classification algorithm for a given data set.
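The abstract does not give the exact formula for the neutrality index, so the following is an illustrative proxy for the stated idea (a term appearing with similar frequency in useful and not-useful reviews is neutral); the names and the cutoff are assumptions:

```python
from collections import Counter

def neutrality_scores(docs, labels):
    """Score each term near 1.0 when its document frequency is similar in
    useful (label 1) and not-useful (label 0) reviews, and near 0.0 when
    it is concentrated in one class."""
    pos_df = Counter(w for d, y in zip(docs, labels) if y == 1 for w in set(d.split()))
    neg_df = Counter(w for d, y in zip(docs, labels) if y == 0 for w in set(d.split()))
    n_pos = sum(1 for y in labels if y == 1)
    n_neg = len(labels) - n_pos
    scores = {}
    for w in set(pos_df) | set(neg_df):
        p, q = pos_df[w] / n_pos, neg_df[w] / n_neg
        scores[w] = 1.0 - abs(p - q) / max(p + q, 1e-12)
    return scores

# After the usual sparsity pruning, terms scoring above a cutoff (e.g., 0.9)
# would be dropped before building the term-document matrix.
```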

Rear Vehicle Detection Method in Harsh Environment Using Improved Image Information (개선된 영상 정보를 이용한 가혹한 환경에서의 후방 차량 감지 방법)

  • Jeong, Jin-Seong;Kim, Hyun-Tae;Jang, Young-Min;Cho, Sang-Bok
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.54 no.1
    • /
    • pp.96-110
    • /
    • 2017
  • Most vehicle detection studies using conventional or wide-angle lenses suffer from blind spots in rear detection, and the image is vulnerable to noise and to varied external environments. In this paper, we propose a method for detection in harsh external environments with noise, blind spots, and similar conditions. First, a fish-eye lens is used to minimize blind spots relative to a wide-angle lens. As the lens angle grows, nonlinear radial distortion also increases, so calibration was applied after initializing and optimizing the distortion constant in order to ensure accuracy. In addition, the original image was analyzed alongside calibration to remove fog and correct brightness, enabling detection even when visibility is obstructed by light and dark adaptation in foggy conditions or by sudden changes in illumination. Fog removal generally takes considerable computation time; to reduce it, we used Dark Channel Prior, the leading fog removal algorithm. Gamma correction was used to correct brightness, and a brightness and contrast evaluation was conducted on the image to determine the gamma value needed for correction. The evaluation used only a part of the image instead of the whole in order to reduce calculation time; once the brightness and contrast values were computed, they were used to decide the gamma value and correct the entire image (see the sketch below). Brightness correction and fog removal were processed in parallel, and the results were registered as a single image to minimize the total calculation time. The HOG feature extraction method was then used to detect the vehicle in the corrected image. As a result, vehicle detection with the proposed image correction took 0.064 seconds per frame and showed a 7.5% improvement in detection rate over the existing vehicle detection method.
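As a sketch of the brightness-correction step (evaluating only part of the frame to save time, then correcting the whole frame), the following uses OpenCV; the ROI and the brightness-to-gamma mapping are my assumptions, since the paper's evaluation formula is not given in the abstract:

```python
import cv2
import numpy as np

def adaptive_gamma_correct(frame, roi=(0, 0, 160, 120)):
    """Estimate gamma from the mean brightness of a small region of the
    frame, then apply the correction to the entire frame via a LUT."""
    x0, y0, x1, y1 = roi
    patch = cv2.cvtColor(frame[y0:y1, x0:x1], cv2.COLOR_BGR2GRAY)
    mean_v = max(patch.mean() / 255.0, 1e-3)
    gamma = np.log(0.5) / np.log(mean_v)  # pushes mean brightness toward mid-gray
    lut = np.array([((i / 255.0) ** gamma) * 255 for i in range(256)], dtype=np.uint8)
    return cv2.LUT(frame, lut)
```

In the paper's pipeline this correction runs in parallel with Dark Channel Prior fog removal, and the two outputs are registered into a single image before HOG-based vehicle detection.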

A Multimodal Profile Ensemble Approach to Development of Recommender Systems Using Big Data (빅데이터 기반 추천시스템 구현을 위한 다중 프로파일 앙상블 기법)

  • Kim, Minjeong;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.4
    • /
    • pp.93-110
    • /
    • 2015
  • A recommender system recommends products to the customers who are likely to be interested in them. Based on automated information filtering technology, various recommender systems have been developed. Collaborative filtering (CF), one of the most successful recommendation algorithms, has been applied in many domains, such as recommending Web pages, books, movies, music, and products. However, CF has a critical shortcoming: it finds neighbors whose preferences are like those of the target customer and recommends the products those customers have liked most, so it works properly only when there is a sufficient number of ratings on common products from customers. When customer ratings are scarce, neighborhood formation becomes inaccurate, resulting in poor recommendations. To improve the performance of CF-based recommender systems, most related studies have focused on developing novel algorithms under the assumption of a single profile, created from users' item ratings, purchase transactions, or Web access logs. With the advent of big data, companies have come to collect more data and to use a wider variety of large-scale information, and many now consider utilizing big data essential because it improves competitiveness and creates new value. In particular, the use of personal big data in recommender systems is on the rise, because personal big data enable more accurate identification of users' preferences and behaviors. The proposed recommendation methodology is as follows. First, multimodal user profiles are created from personal big data in order to grasp the preferences and behavior of users from various viewpoints; we derive five user profiles based on ratings, site preference, demographics, Internet usage, and topics in text. Next, the similarity between users is calculated from the profiles and neighbors are found from the results, applying one of three ensemble approaches: the similarity of the combined profile, the average similarity across profiles, or the weighted average similarity across profiles (see the sketch below). Finally, the products that the neighbors prefer most are recommended to the target users. For the experiments, we used demographic data and a very large volume of Web log transactions for 5,000 panel users of a company specialized in analyzing Web site rankings. R was used to implement the proposed recommender system, and SAS E-miner was used to conduct the topic analysis via keyword search. To evaluate recommendation performance, we used 60% of the data for training and 40% for testing, and 5-fold cross validation was conducted to enhance the reliability of the experiments. The widely used F1 metric, which gives equal weight to recall and precision, was employed for evaluation. The proposed methodology achieved a significant improvement over the single-profile CF algorithm; in particular, the ensemble approach using weighted average similarity showed the highest performance. Specifically, the improvement in F1 is 16.9 percent for the ensemble using weighted average similarity and 8.1 percent for the ensemble using the average similarity of each profile. From these results, we conclude that the multimodal profile ensemble approach is a viable solution to the problems encountered when customer ratings are scarce. This study is significant in suggesting what kinds of information can be used to create profiles in a big data environment and how to combine and utilize them effectively. However, the methodology should be studied further before real-world application: the proposed method should be applied to different recommendation algorithms to compare differences in recommendation accuracy and to identify which combination performs best.
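A minimal sketch of the averaging ensembles described above, assuming one precomputed user-user similarity matrix per profile; the function and variable names are mine:

```python
import numpy as np

def ensemble_similarity(profile_sims, weights=None):
    """Combine per-profile user-user similarity matrices.
    weights=None gives the simple average ensemble; a weight vector gives
    the weighted-average ensemble, which performed best in this study.
    (The combined-profile ensemble instead computes one similarity matrix
    from the concatenated profiles before this step.)"""
    sims = np.stack(profile_sims)          # shape: (n_profiles, n_users, n_users)
    if weights is None:
        return sims.mean(axis=0)
    w = np.asarray(weights, dtype=float)
    return np.tensordot(w / w.sum(), sims, axes=1)
```

Neighbors of each target user are then taken from the combined matrix and their most-preferred products recommended, as in standard user-based CF.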

The Role and Efficacy of Diagnostic Laparoscopy to Detect the Peritoneal Recurrence of Gastric Cancer (복막 전이가 의심되는 위암 환자에서 진단적 복강경 검사의 의의와 역할)

  • Song, Sun-Choon;Lee, Sang-Lim;Cho, Young-Kwan;Han, Sang-Uk
    • Journal of Gastric Cancer
    • /
    • v.9 no.2
    • /
    • pp.51-56
    • /
    • 2009
  • Purpose: Peritoneal recurrence has been reported to be the most common form of recurrence of gastric cancer. Peritoneal recurrence can generally be suggested by several types of imaging studies and by evidence of ascites or Blumer's rectal shelf. It can be confirmed by explorative laparotomy, but diagnostic laparoscopy is a good alternative, and laparoscopic surgery has also been widely used. We reviewed and analyzed the ability of diagnostic laparoscopy to detect peritoneal recurrence or carcinomatosis of gastric cancer. Materials and Methods: We performed a retrospective review of the 45 gastric cancer patients who underwent diagnostic laparoscopy between February 2004 and March 2009, analyzing the perioperative clinical characteristics and the accuracy of the diagnostic methods. Results: The study group included 14 patients with confirmed gastric cancer who were suspected of having carcinomatosis, and 31 patients who had previously undergone gastric resection and were suspected of having recurrence. The mean operation time was 44.1 ± 26.9 minutes and the mean postoperative hospital stay was 2.7 ± 2.8 days. There was one operation-related complication and no postoperative mortality. The sensitivities for detecting peritoneal recurrence or carcinomatosis were 92.1% for diagnostic laparoscopy, 29.7% for ascites and rectal shelf on physical examination, 86.5% for abdominal computed tomography, 69.2% for PET-CT, and 18.8% for CEA. Conclusion: Diagnostic laparoscopy does not require a long operation time or hospital stay, and it showed a low complication rate in our study. It has high sensitivity for detecting peritoneal recurrence of gastric cancer, can serve as an alternative confirmatory diagnostic method, and is useful for deciding on further treatment.


A Double-Blind Comparison of Paroxetine and Amitriptyline in the Treatment of Depression Accompanied by Alcoholism : Behavioral Side Effects during the First 2 Weeks of Treatment (주정중독에 동반된 우울증의 치료에서 Paroxetine과 Amitriptyline의 이중맹 비교 : 치료초기 2주 동안의 행동학적 부작용)

  • Yoon, Jin-Sang;Yoon, Bo-Hyun;Choi, Tae-Seok;Kim, Yong-Bum;Lee, Hyung-Yung
    • Korean Journal of Biological Psychiatry
    • /
    • v.3 no.2
    • /
    • pp.277-287
    • /
    • 1996
  • Objective: It has been proposed that cognition and related aspects of mental functioning are decreased in depression as well as in alcoholism. The objective of this study was to compare the behavioral side effects of paroxetine and amitriptyline in depressed patients with comorbid alcoholism, focusing on drug effects on psychomotor performance, cognitive function, sleep, and daytime sleepiness during the first 2 weeks of treatment. Methods: After an alcohol detoxification period (3 weeks) and a washout period (1 week), a total of 20 male inpatients with alcohol use disorder (DSM-IV) who also had a major depressive episode (DSM-IV) were treated double-blind with paroxetine 20 mg/day (n=10) or amitriptyline 25 mg/day (n=10) for 2 weeks. All patients were required to have a score of at least 18 on both the Hamilton Rating Scale for Depression (HAM-D) and the Beck Depression Inventory (BDI) at pre-drug baseline. Patients randomized to paroxetine received active medication in the morning and placebo in the evening, whereas those randomized to amitriptyline received active medication in the evening and placebo in the morning. All patients performed the various tasks in a test battery at baseline and at days 3, 7, and 14. The test battery included: critical flicker fusion threshold for sensory information processing capacity; choice reaction time for gross psychomotor performance; tracking accuracy and latency of response to a peripheral stimulus as a measure of fine sensorimotor coordination and divided attention; and digit symbol substitution as a measure of sustained attention and concentration. To rate perceived sleep and daytime sleepiness, 10 cm visual analogue scales were employed at baseline and at days 3, 7, and 14; the subjective rating scales were adapted for this study from the Leeds Sleep Evaluation Questionnaire and the Epworth Sleepiness Scale. In addition, a comprehensive side effect assessment using the UKU side effect rating scale was carried out at baseline and at days 7 and 14. The efficacy of treatment was evaluated using the HAM-D, the BDI, and clinical global impressions of severity and improvement at days 7 and 14. Results: The pattern of results indicated that paroxetine improved performance on most of the test variables and also improved sleep, with no effect on daytime sleepiness over the study period. In contrast, amitriptyline disrupted performance on some tests and improved sleep but increased daytime sleepiness, particularly at day 3. On the UKU side effect rating scale, more side effects were registered on amitriptyline. Therapeutic efficacy was observed in favor of paroxetine as early as day 7. Conclusion: These results demonstrate that paroxetine is much better than amitriptyline for the treatment of depressed patients with comorbid alcoholism, at least in terms of behavioral safety and tolerability. Furthermore, the results may help explain the therapeutic outcome of paroxetine; for example, an earlier onset of antidepressant action may be caused by early improvement in cognitive function or by contributing to good compliance with treatment.


Freezing Time Prediction of Foods by Multiple Regression Analysis (다중회귀분석에 의한 식품의 동결시간 예측)

  • Jeong, Jin-Woong;Kim, Jong-Hoon;Park, Noh-Hyun;Lee, Seung-Hyun;Kim, Young-Dong
    • Korean Journal of Food Science and Technology
    • /
    • v.30 no.2
    • /
    • pp.341-347
    • /
    • 1998
  • To develop a simple and accurate analytical method for predicting the freezing time of beef and tylose under various freezing conditions, the freezing time (Y) was regressed against the reciprocal ($X_3$) of the difference between the initial freezing point and the freezing medium temperature, the reciprocal ($X_4$) of the surface heat transfer coefficient, and the initial temperature ($X_1$) and thickness ($X_2$) of the samples, covering most situations arising in the frozen food industry. The multiple regression analysis yielded the following equations: $Y_{tylose}=3.45X_1+7642.84X_2+4642.67X_3+2946.89X_4-431.33\;(R^2=0.9568)$ and $Y_{beef}=0.68X_1+7568.98X_2+2430.78X_3+3293.26X_4-299.00\;(R^2=0.9897)$ (see the sketch below). These equations offered better results than the Plank, Nagaoka, and Pham models and agreed satisfactorily with the models of Cleland & Earle and Hung & Thompson; their accuracy was high, with an average absolute difference between fitted and experimental results of about 10%. The thermal diffusivities of beef and tylose were measured as 4.43×10⁻⁴ m²/hr and 4.39×10⁻⁴ m²/hr at 6-7°C, and 2.42×10⁻³ m²/hr and 3.32×10⁻³ m²/hr at -10 to -12°C. The initial freezing points of beef and tylose were -1.2°C and -0.6°C, respectively. Surface heat transfer coefficients were estimated as 20.57 W/m²·°C with no packing, 16.11 W/m²·°C with wrap packing, and 13.07 W/m²·°C with Al-foil packing, and the cooling rate of the immersion freezing method was about 10 times faster than that of the air blast freezing method.
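As a sketch, the fitted tylose equation can be evaluated directly; the abstract does not state the units of Y or of the thickness $X_2$, so the units in the comments (and the example values) are assumptions for illustration:

```python
def freezing_time_tylose(t_initial, thickness, t_freeze, t_medium, h):
    """Evaluate the tylose regression reported above.
    X1 = initial sample temperature (°C), X2 = thickness (assumed m),
    X3 = 1 / (initial freezing point - freezing medium temperature),
    X4 = 1 / h, with h the surface heat transfer coefficient (W/m²·°C)."""
    x3 = 1.0 / (t_freeze - t_medium)
    x4 = 1.0 / h
    return 3.45 * t_initial + 7642.84 * thickness + 4642.67 * x3 + 2946.89 * x4 - 431.33

# e.g., 5°C sample, 0.05 m thick, freezing point -0.6°C, -30°C medium,
# no packing (h = 20.57 W/m²·°C):
print(freezing_time_tylose(5.0, 0.05, -0.6, -30.0, 20.57))
```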


Cytopathologic Diagnosis of Bile Obtained by Percutaneous Biliary Drainage (담즙의 세포병리학적 진단에 관한 연구)

  • Park, In-Ae;Ham, Eui-Keun
    • The Korean Journal of Cytopathology
    • /
    • v.3 no.1
    • /
    • pp.1-11
    • /
    • 1992
  • From one hundred forty-eight patients with evidence of biliary tract obstruction, 275 bile samples were obtained from percutaneously placed biliary drainage catheters. Of the 148 patients, ova of Clonorchis sinensis were demonstrated, along with epithelial cells, in 17 patients (11.5%); one of these cases also showed coexisting adenocarcinoma. In 105 patients the medical records were available for review; the clinical diagnoses were malignancy in 99 patients and benign lesions in 6. Of the 99 patients with a clinico-radiologic diagnosis of malignancy, cytologic results were positive in 23.2%. Dividing the patients into two groups, those with tumors of bile duct origin (group I) and those with tumors producing extrinsic compression of the bile duct, such as periampullary carcinoma, pancreatic head carcinoma, or metastatic carcinoma in lymph nodes from tumors of adjacent organs (group II), the cytologic results were positive in 37% and 11.6%, respectively. In patients with histologic confirmation, positive correlation was found in 50% of group I and 20% of group II, a remarkable difference between the two groups. There were no false positives in the cytologic diagnosis. The overall concordance rate of cytologic diagnosis with clinical diagnosis across both benign and malignant lesions was 27.6%, and the diagnostic specificity was 100%.
