• Title/Summary/Keyword: Accuracy Analysis (정확성 분석)


Comparison of marginal and internal fit of 3-unit monolithic zirconia fixed partial dentures fabricated from solid working casts and working casts from a removable die system (가철성 다이 시스템으로 제작된 작업 모형과 솔리드 작업 모형 상에서 제작된 지르코니아 3본 고정성 치과 보철물의 변연 및 내면 적합도 비교)

  • Wan-Sun Lee
    • Journal of Dental Rehabilitation and Applied Science
    • /
    • v.40 no.2
    • /
    • pp.72-81
    • /
    • 2024
  • Purpose: This study aimed to assess the marginal and internal fit of 3-unit monolithic zirconia fixed partial dentures (FPDs) fabricated via computer-aided design and computer-aided manufacturing (CAD/CAM) from solid working casts and from working casts with a removable die system. Materials and Methods: The tooth preparation protocol for a zirconia crown was executed on the mandibular right first premolar and mandibular right first molar of a reference cast with a missing mandibular right second premolar. The reference cast was duplicated using a polyvinyl siloxane impression, from which 20 working casts were fabricated following typical dental laboratory procedures. For comparative analysis, 10 FPDs were produced from working casts with a removable die system (RD group) and the remaining 10 FPDs from solid working casts (S group). The casts were digitized using a dental desktop scanner to establish virtual casts, and the FPDs were designed using CAD. The definitive 3-unit monolithic zirconia FPDs were fabricated via a CAM milling process. The FPDs seated on the reference cast underwent digital evaluation of marginal and internal fit. The Mann-Whitney U test was applied for statistical comparison between the two groups (α = 0.05). Results: The RD group showed significantly higher discrepancies in fit for both premolars and molars compared to the S group (P < 0.05), particularly in terms of marginal and occlusal gaps. Color mapping also highlighted larger deviations in the RD group, especially in the marginal and occlusal regions. Conclusion: The discrepancies in marginal and occlusal fit of 3-unit monolithic zirconia FPDs were primarily associated with fabrication using the removable die system, indicating the significant impact of the fabrication method on the accuracy of FPDs.
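
The two-group comparison described above (Mann-Whitney U test at α = 0.05) can be illustrated with a minimal Python sketch; the gap values and variable names below are hypothetical placeholders, not data from the study.

```python
# Minimal sketch of a Mann-Whitney U comparison between two fit groups.
# Marginal-gap measurements (micrometers) for the removable-die (RD) and
# solid-cast (S) groups are hypothetical placeholder values.
from scipy.stats import mannwhitneyu

rd_gap = [92.1, 88.4, 95.7, 101.2, 90.3, 97.8, 99.5, 93.0, 96.4, 89.9]  # hypothetical
s_gap = [61.5, 58.2, 64.0, 59.7, 62.8, 60.1, 63.4, 57.9, 65.2, 61.0]    # hypothetical

stat, p_value = mannwhitneyu(rd_gap, s_gap, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Significant difference in marginal gap between RD and S groups")
```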

Construction of Event Networks from Large News Data Using Text Mining Techniques (텍스트 마이닝 기법을 적용한 뉴스 데이터에서의 사건 네트워크 구축)

  • Lee, Minchul;Kim, Hea-Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.183-203
    • /
    • 2018
  • News articles are the most suitable medium for examining events occurring at home and abroad. In particular, as the development of information and communication technology has brought various kinds of online news media, news about events occurring in society has increased greatly. Automatically summarizing key events from massive amounts of news data therefore helps users grasp many events at a glance, and if an event network is built and provided based on the relevance between events, it can greatly help readers understand current affairs. In this study, we propose a method for extracting event networks from large news text data. To this end, we first collected Korean political and social articles from March 2016 to March 2017 and, through preprocessing with NPMI and Word2Vec, kept only meaningful words and integrated synonyms. Latent Dirichlet allocation (LDA) topic modeling was used to calculate the topic distribution by date, find the peaks of each topic's distribution, and detect events. A total of 32 topics were extracted, and the time of occurrence of each event was deduced from the point at which the corresponding topic distribution surged. As a result, a total of 85 events were detected, of which a final 16 events were filtered and presented using a Gaussian smoothing technique. We also calculated relevance scores between the detected events to construct the event network: using the cosine coefficient between co-occurring events, we computed the relevance between events and connected them accordingly. Finally, we set up the event network with each event as a vertex and the relevance score between events as the weight of the edge connecting the vertices. The event network constructed with our method made it possible to sort out the major events in the political and social fields in Korea over the last year in chronological order and, at the same time, to identify which events are related to which. Our approach differs from existing event detection methods in that LDA topic modeling makes it possible to analyze large amounts of data easily and to identify relationships between events that were difficult to detect with existing methods. We applied various text mining techniques, including Word2Vec, in the preprocessing step to improve the extraction of proper nouns and compound nouns, which has been a difficulty in analyzing Korean texts. The proposed event detection and network construction techniques have the following advantages in practical application. First, LDA topic modeling, which is unsupervised learning, can easily extract topics, topic words, and their distributions from a huge amount of data, and by using the date information of the collected news articles, the distribution of each topic can be expressed as a time series. Second, by calculating relevance scores from the co-occurrence of topics, which is difficult to capture with existing event detection methods, and constructing an event network, the connections between events can be presented in a clear and summarized form. This is supported by the fact that the inter-event relevance-based event network proposed in this study was actually constructed in order of occurrence time, and the network also makes it possible to identify which event served as the starting point of a series of events.
A limitation of this study is that LDA topic modeling produces different results depending on the initial parameters and the number of topics, and the topic and event names in the analysis results must be assigned by the subjective judgment of the researcher. Also, since each topic is assumed to be exclusive and independent, the relevance between topics is not taken into account. Subsequent studies need to calculate the relevance between events that were not covered in this study or that belong to the same topic.
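
The peak-detection and network-construction steps described above can be sketched roughly as follows; the daily topic-share matrix, smoothing sigma, peak spacing, co-occurrence window, and similarity threshold are all assumptions standing in for the paper's actual LDA output and settings.

```python
# Sketch: detect events as peaks in smoothed per-topic time series and
# link events by cosine similarity of their co-occurrence profiles.
import numpy as np
import networkx as nx
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
daily_topic_share = rng.random((365, 32))          # placeholder for LDA output (days x topics)

events = []                                        # (topic_id, day_index) pairs
for topic in range(daily_topic_share.shape[1]):
    smoothed = gaussian_filter1d(daily_topic_share[:, topic], sigma=3)
    peaks, _ = find_peaks(smoothed, distance=30)   # at most one event per topic per 30 days (arbitrary)
    events.extend((topic, int(day)) for day in peaks)

# co-occurrence profile of each event: average topic shares around its peak day
profiles = np.array([daily_topic_share[max(day - 3, 0):day + 4].mean(axis=0)
                     for _, day in events])
similarity = cosine_similarity(profiles)

graph = nx.Graph()
graph.add_nodes_from(range(len(events)))
for i in range(len(events)):
    for j in range(i + 1, len(events)):
        if similarity[i, j] > 0.8:                 # arbitrary relevance threshold
            graph.add_edge(i, j, weight=float(similarity[i, j]))
print(graph)
```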

M-mode Ultrasound Assessment of Diaphragmatic Excursions in Chronic Obstructive Pulmonary Disease : Relation to Pulmonary Function Test and Mouth Pressure (만성폐쇄성 폐질환 환자에서 M-mode 초음파로 측정한 횡격막 운동)

  • Lim, Sung-Chul;Jang, Il-Gweon;Park, Hyeong-Kwan;Hwang, Jun-Hwa;Kang, Yu-Ho;Kim, Young-Chul;Park, Kyung-Ok
    • Tuberculosis and Respiratory Diseases
    • /
    • v.45 no.4
    • /
    • pp.736-745
    • /
    • 1998
  • Background: Respiratory muscle interaction is profoundly affected by a number of pathologic conditions. Hyperinflation may be particularly severe in chronic obstructive pulmonary disease (COPD) patients, in whom the functional residual capacity (FRC) often exceeds the predicted total lung capacity (TLC). Hyperinflation reduces the effectiveness of the diaphragm as a pressure generator and reduces the diaphragmatic contribution to chest wall motion. Ultrasonography has recently been shown to be a sensitive and reproducible method for assessing diaphragmatic excursion. This study was performed to compare diaphragmatic excursions measured by ultrasonography between normal subjects and COPD patients. Methods: We measured diaphragmatic excursions with ultrasonography in 28 healthy subjects (16 medical students, 12 age-matched controls) and 17 COPD patients. Ultrasonographic measurements were performed during tidal breathing and during maximal respiratory efforts approximating vital capacity breathing, using an Aloka KEC-620 with a 3.5 MHz transducer. Measurements were taken in the supine posture. The ultrasonographic probe was positioned transversely in the midclavicular line below the right subcostal margin. After detecting the right hemidiaphragm in B-mode, the ultrasound beam was positioned approximately parallel to the movement of the middle or posterior third of the right diaphragm, and M-mode recordings at this position were made throughout the test. Diaphragmatic excursion on the M-mode tracing was calculated as the average over three respiratory cycles. Pulmonary function tests (SensorMedics 2800) and maximal inspiratory (PImax) and expiratory mouth pressures (PEmax; Vitalopower KH-101, Chest) were measured in the seated posture. Results: During tidal breathing, diaphragmatic excursions were 1.5 ± 0.5 cm, 1.7 ± 0.5 cm, and 1.5 ± 0.6 cm in the medical students, the age-matched control group, and the COPD patients, respectively. Diaphragmatic excursions during maximal respiratory efforts were significantly decreased in COPD patients (3.7 ± 1.3 cm) compared with the medical students and the age-matched control group (6.7 ± 1.3 cm and 5.8 ± 1.2 cm, respectively; p < 0.05). During maximal respiratory efforts in control subjects, diaphragmatic excursions were correlated with FEV1, FEV1/FVC, PEF, PIF, and height. In COPD patients, diaphragmatic excursions during maximal respiratory efforts were correlated with PEmax (maximal expiratory pressure), age, and %FVC. In multiple regression analysis, the combination of PEmax and age was an independent marker of diaphragmatic excursion during maximal respiratory efforts in COPD patients. Conclusion: COPD patients had smaller diaphragmatic excursions during maximal respiratory efforts than control subjects, and these excursions were well correlated with PEmax. These results suggest that diaphragmatic excursions during maximal respiratory efforts in COPD patients may be valuable for predicting pulmonary function.
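
As a rough illustration of the multiple regression reported above (excursion predicted from PEmax and age), a minimal sketch with hypothetical values might look like this; none of these numbers are from the study.

```python
# Sketch of a multiple linear regression: maximal diaphragmatic excursion (cm)
# regressed on PEmax (cmH2O) and age (years), all values hypothetical.
import numpy as np
import statsmodels.api as sm

excursion = np.array([3.1, 4.2, 2.8, 3.9, 4.5, 3.3, 2.9, 4.0])   # hypothetical
pemax = np.array([60, 85, 55, 80, 95, 65, 58, 82])                # hypothetical
age = np.array([70, 63, 72, 66, 61, 69, 74, 64])                  # hypothetical

X = sm.add_constant(np.column_stack([pemax, age]))
model = sm.OLS(excursion, X).fit()
print(model.summary())
```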


System Development for Measuring Group Engagement in the Art Center (공연장에서 다중 몰입도 측정을 위한 시스템 개발)

  • Ryu, Joon Mo;Choi, Il Young;Choi, Lee Kwon;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.45-58
    • /
    • 2014
  • Korean cultural content is spreading worldwide as the Korean Wave sweeps across the globe, and each country is working to improve its national brand and create high added value through its culture industry. Performing content is an important factor in this industry: building trust in and positive attitudes toward content is a key goal, because if audiences trust the content, they will spread it to the people around them by word of mouth. Accordingly, many researchers have studied how to measure a person's arousal through statistical surveys, physiological responses, body movement, and facial expression. First, statistical surveys cannot measure each person's arousal in real time, and it is hard to obtain good survey results after the audience has already watched the content. Second, physiological responses require sensors to be set up on each person's chair or in the surrounding space, and it is difficult to handle the large amount of information streaming from the sensors in real time. Third, body movement is easy to capture with a camera, but it is difficult to set up the experimental conditions and to interpret the meaning of the body language. Lastly, many researchers study facial expression, measuring facial expressions, eye tracking, and head pose. Most previous studies of arousal and interest are limited to the reaction of a single person and are difficult to apply to multiple audience members; they typically require special laboratory conditions, such as controlled room lighting, and are restricted to one person. In addition, arousal should be measured with respect to the content itself, which is difficult to define, and it is not easy to collect audience reactions immediately. Since many audience members watch a performance in a theater, we propose a system that measures the reactions of multiple audience members in real time during a performance. We use a difference-image analysis method for multiple audience members, but this method is weak in dark environments; to overcome the dark recording environment, an IR camera is used to capture images in dark areas. In addition, we present a Multi-Audience Engagement Index (MAEI) calculated from sound, audience movement, and eye-tracking values. The algorithm calculates audience arousal from the mobile survey, the sound level, audience reactions, and audience eye tracking. To improve and verify the accuracy of the Multi-Audience Engagement Index, we compare it with the mobile survey, and the result is then sent to a reporting system and presented to interested persons. Mobile surveys are easy and fast, visitors' discomfort can be minimized, and additional information can be provided thanks to the mobile platform. The mobile application communicates with the database, and real-time information on visitors' attitudes toward the content is stored. The database can provide a different survey each time based on the information provided. Examples of survey items are: impressive scene, satisfied, touched, interested, did not pay attention, and so on. The suggested system consists of three parts: an external device, a server, and an internal device. The external device records the multiple audience members in a dark environment with an IR camera together with the sound signal; a mobile-application survey is also used, and the data are sent to the server database.
The server part contains the content data, such as each scene's weight value and the group-audience weight index, the camera control program, and the algorithm, and it calculates the Multi-Audience Engagement Index. The internal device presents the Multi-Audience Engagement Index through a web UI, in print, and on a field monitor. Our system was test-operated by Mogencelab in the DMC display exhibition hall located in Sangam-dong, Mapo-gu, Seoul, and visitor data are still being collected daily. If this system can identify the factors of audience arousal, it will be very useful for creating content.
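
The difference-image analysis mentioned above can be sketched as a simple frame-differencing loop; the video file name, threshold, and the idea of using the changed-pixel fraction as a movement score are assumptions for illustration, not the system's actual implementation.

```python
# Sketch: compare consecutive IR-camera frames and use the fraction of
# changed pixels as a crude audience-movement score.
import cv2

capture = cv2.VideoCapture("ir_audience_recording.mp4")   # hypothetical file
ok, previous = capture.read()
if not ok:
    raise SystemExit("could not read video")
previous = cv2.cvtColor(previous, cv2.COLOR_BGR2GRAY)

movement_scores = []
while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, previous)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    movement_scores.append(cv2.countNonZero(mask) / mask.size)  # fraction of changed pixels
    previous = gray

capture.release()
print(f"mean movement score: {sum(movement_scores) / max(len(movement_scores), 1):.4f}")
```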

Quantitative Differences between X-Ray CT-Based and $^{137}Cs$-Based Attenuation Correction in Philips Gemini PET/CT (GEMINI PET/CT의 X-ray CT, $^{137}Cs$ 기반 511 keV 광자 감쇠계수의 정량적 차이)

  • Kim, Jin-Su;Lee, Jae-Sung;Lee, Dong-Soo;Park, Eun-Kyung;Kim, Jong-Hyo;Kim, Jae-Il;Lee, Hong-Jae;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.39 no.3
    • /
    • pp.182-190
    • /
    • 2005
  • Purpose: There are differences between the standardized uptake values (SUVs) of PET images corrected for attenuation using CT and those corrected using $^{137}Cs$. Since various causes can lead to differences in SUV, it is important to identify their origin. Since only the X-ray CT and $^{137}Cs$ transmission data are used for attenuation correction in the Philips GEMINI PET/CT scanner, proper transformation of these data into usable attenuation coefficients for 511 keV photons has to be ascertained. The aim of this study was to evaluate the accuracy of the CT measurement and to compare CT- and $^{137}Cs$-based attenuation correction in this scanner. Methods: For all experiments, CT was set to 40 keV (120 kVp) and 50 mAs. To evaluate the accuracy of the CT measurement, a CT performance phantom was scanned and the Hounsfield units (HU) of its regions were compared to the true values. For the comparison of CT- and $^{137}Cs$-based attenuation correction, transmission scans of an elliptical lung-spine-body phantom and an electron density CT phantom composed of various components, such as water, bone, brain, and adipose tissue, were performed using CT and $^{137}Cs$. The attenuation coefficients transformed from these data were compared to each other and to the true 511 keV attenuation coefficients acquired using $^{68}Ge$ and an ECAT EXACT 47 scanner. In addition, CT- and $^{137}Cs$-derived attenuation coefficients and SUVs for $^{18}F$-FDG measured in regions with normal and pathological uptake in patient data were also compared. Results: The HU of all regions in the CT performance phantom measured with the GEMINI PET/CT were equivalent to the known true values. CT-based attenuation coefficients were about 10% lower than those of $^{68}Ge$ in the bony region of the NEMA ECT phantom. Attenuation coefficients derived from the $^{137}Cs$ data were slightly higher than those from the CT data in the images of the electron density CT phantom and of the patients' bodies. However, the SUVs in images corrected using $^{137}Cs$ were lower than in images corrected using CT; the percent difference between the SUVs was about 15%. Conclusion: Although the HU measured using this scanner were accurate, the accuracy of the conversion from CT data into 511 keV attenuation coefficients was limited in the bony region. The discrepancy in the transformed attenuation coefficients and SUVs between the CT- and $^{137}Cs$-based data shown in this study suggests that further optimization of various parameters in data acquisition and processing would be necessary for this scanner.
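
As a reference for how the SUV comparison above works, a minimal sketch of the standard SUV formula and the percent difference between two attenuation corrections is shown below; the activity, dose, and weight values are hypothetical.

```python
# Sketch: SUV = tissue activity concentration / (injected dose / body weight),
# assuming 1 mL of tissue is approximately 1 g. All numbers are hypothetical.
def suv(activity_kbq_per_ml: float, injected_dose_mbq: float, weight_kg: float) -> float:
    return activity_kbq_per_ml / (injected_dose_mbq * 1000.0 / (weight_kg * 1000.0))

suv_ct = suv(activity_kbq_per_ml=28.5, injected_dose_mbq=370.0, weight_kg=65.0)   # hypothetical
suv_cs = suv(activity_kbq_per_ml=24.2, injected_dose_mbq=370.0, weight_kg=65.0)   # hypothetical

percent_difference = (suv_ct - suv_cs) / suv_ct * 100.0
print(f"SUV(CT) = {suv_ct:.2f}, SUV(Cs-137) = {suv_cs:.2f}, difference = {percent_difference:.1f}%")
```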

Effects of Motion Correction for Dynamic $[^{11}C]Raclopride$ Brain PET Data on the Evaluation of Endogenous Dopamine Release in Striatum (동적 $[^{11}C]Raclopride$ 뇌 PET의 움직임 보정이 선조체 내인성 도파민 유리 정량화에 미치는 영향)

  • Lee, Jae-Sung;Kim, Yu-Kyeong;Cho, Sang-Soo;Choe, Yearn-Seong;Kang, Eun-Joo;Lee, Dong-Soo;Chung, June-Key;Lee, Myung-Chul;Kim, Sang-Eun
    • The Korean Journal of Nuclear Medicine
    • /
    • v.39 no.6
    • /
    • pp.413-420
    • /
    • 2005
  • Purpose: Neuroreceptor PET studies require 60-120 minutes to complete, and head motion of the subject during the PET scan increases the uncertainty of the measured activity. In this study, we investigated the effects of data-driven head motion correction on the evaluation of endogenous dopamine release (DAR) in the striatum during a motor task which might have caused significant head motion artifacts. Materials and Methods: $[^{11}C]raclopride$ PET scans of 4 normal volunteers acquired with a bolus plus constant infusion protocol were retrospectively analyzed. Following a 50 min resting period, the participants played a video game with a monetary reward for 40 min. Dynamic frames acquired during the equilibrium condition (pre-task: 30-50 min, task: 70-90 min, post-task: 110-120 min) were realigned to the first frame of the pre-task condition. Intra-condition registrations between the frames were performed, and the average image for each condition was created and registered to the pre-task image (inter-condition registration). The pre-task PET image was then co-registered to each participant's own MRI, and the transformation parameters were reapplied to the other images. Volumes of interest (VOI) for the dorsal putamen (PU), caudate (CA), ventral striatum (VS), and cerebellum were defined on the MRI. Binding potential (BP) was measured, and DAR was calculated as the percent change of BP during and after the task. SPM analyses on the BP parametric images were also performed to explore regional differences in the effects of head motion on BP and DAR estimation. Results: Changes in the position and orientation of the striatum during the PET scans were observed before head motion correction. BP values in the pre-task condition were not changed significantly after intra-condition registration. However, the BP values during and after the task, as well as DAR, changed significantly after the correction. SPM analysis also showed that the extent and significance of the BP differences were significantly changed by the head motion correction, and such changes were prominent at the periphery of the striatum. Conclusion: The results suggest that misalignment between the MRI-based VOI and the striatum in the PET images, and the resulting incorrect DAR estimation due to head motion during the PET activation study, were significant but could be remedied by data-driven head motion correction.
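
The DAR computation described above (percent change of binding potential relative to the pre-task condition) can be expressed in a few lines; the BP values below are hypothetical placeholders.

```python
# Sketch: DAR as the percent decrease in BP from the pre-task (baseline) condition.
def dopamine_release(bp_baseline: float, bp_condition: float) -> float:
    """Percent change in BP relative to the pre-task condition."""
    return (bp_baseline - bp_condition) / bp_baseline * 100.0

bp_pre, bp_task, bp_post = 2.80, 2.45, 2.70        # hypothetical putamen BP values
print(f"DAR during task: {dopamine_release(bp_pre, bp_task):.1f}%")
print(f"DAR after task:  {dopamine_release(bp_pre, bp_post):.1f}%")
```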

The way to make training data for deep learning model to recognize keywords in product catalog image at E-commerce (온라인 쇼핑몰에서 상품 설명 이미지 내의 키워드 인식을 위한 딥러닝 훈련 데이터 자동 생성 방안)

  • Kim, Kitae;Oh, Wonseok;Lim, Geunwon;Cha, Eunwoo;Shin, Minyoung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.1-23
    • /
    • 2018
  • Since the start of the 21st century, various high-quality services have emerged with the growth of the internet and information and communication technologies. In particular, the e-commerce industry, in which Amazon and eBay stand out, is expanding rapidly. As e-commerce grows and more products are registered at online shopping malls, customers can easily compare various products and find what they want to buy. However, a problem has arisen with this growth: with so many products registered, it has become difficult for customers to find what they really need in the flood of products. When customers search with a generalized keyword, too many products come up as results; conversely, few products are found when customers type in product details, because concrete product attributes are rarely registered as text. In this situation, automatically recognizing text in images can be a solution. Because the bulk of product details is provided in catalog images, most product information cannot be found with text queries in the current text-based search systems. If the information in these images can be converted to text, customers can search for products by their details, which makes shopping more convenient. There are various existing OCR (Optical Character Recognition) programs that can recognize text in images, but they are hard to apply to catalogs because they have trouble recognizing text in certain circumstances, for example when the text is small or the fonts are inconsistent. Therefore, this research suggests a way to recognize keywords in catalogs with deep learning, the state of the art in image recognition since the 2010s. The Single Shot MultiBox Detector (SSD), a model credited for its object-detection performance, can be used with its structure redesigned to take into account the differences between text and general objects. However, the SSD model needs a large amount of labeled training data because, as a deep learning model, it must be trained by supervised learning. To collect data, one could manually label the location and class of text in catalogs, but manual collection raises many problems: some keywords would be missed because humans make mistakes while labeling, collecting enough training data would be too time-consuming given the scale of data needed or too costly if many workers were hired to shorten the time, and if specific keywords need to be trained, finding images that contain those words would also be difficult. To solve the data issue, this research developed a program that creates training data automatically. The program can compose images that contain various keywords and pictures, like a catalog, and save the location information of the keywords at the same time. With this program, data can be collected efficiently and the performance of the SSD model improves: the SSD model recorded a recognition rate of 81.99% with 20,000 samples created by the program. Moreover, this research tested the efficiency of the SSD model with respect to differences in the data, to analyze which features of the data influence the performance of recognizing text in images.
As a result, it was found that the number of labeled keywords, the addition of overlapping keyword labels, the existence of keywords that are not labeled, the spacing between keywords, and the differences in background images are all related to the performance of the SSD model. This test can lead to performance improvements of the SSD model, or of other deep learning-based text recognizers, through higher-quality data. The SSD model redesigned to recognize text in images and the program developed for creating training data are expected to contribute to the improvement of search systems in e-commerce: suppliers can spend less time registering keywords for products, and customers can search for products using the product details written in the catalog.
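
A generator of the kind described above, which composes keyword images and records bounding boxes for supervised training, might be sketched as follows; the keyword list, font file, image size, and output format are assumptions, not the authors' actual program.

```python
# Sketch: render a keyword onto a blank background with Pillow and record its
# bounding box so the pair can serve as a labeled text-detection sample.
import json
import random
from PIL import Image, ImageDraw, ImageFont

keywords = ["cotton", "waterproof", "free shipping"]          # hypothetical keyword set
font = ImageFont.truetype("NanumGothic.ttf", size=28)         # hypothetical font file

samples = []
for i in range(100):
    image = Image.new("RGB", (512, 512), color=(255, 255, 255))
    draw = ImageDraw.Draw(image)
    word = random.choice(keywords)
    x, y = random.randint(10, 300), random.randint(10, 450)
    draw.text((x, y), word, fill=(0, 0, 0), font=font)
    left, top, right, bottom = draw.textbbox((x, y), word, font=font)
    image.save(f"train_{i:04d}.png")
    samples.append({"file": f"train_{i:04d}.png", "label": word,
                    "bbox": [left, top, right, bottom]})

with open("annotations.json", "w", encoding="utf-8") as f:
    json.dump(samples, f, ensure_ascii=False, indent=2)
```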

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.1-21
    • /
    • 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, they have been employed to discover new market and/or technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies. There has been a continuous demand in various fields for market information at the level of specific products. However, such information has generally been provided at the industry level or for broad categories based on classification standards, making it difficult to obtain specific and appropriate information. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously available. We applied the Word2Vec algorithm, a neural network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows. First, the data related to product information are collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data are embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data on the extracted products are summed to estimate the market size of the product groups. As experimental data, text data of product names from Statistics Korea's microdata (345,103 cases) were mapped into a multidimensional vector space by Word2Vec training. We optimized the training parameters and then applied a vector dimension of 300 and a window size of 15 in further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as a product name dataset to cluster product groups more efficiently. The product names similar to the KSIC index words were extracted based on cosine similarity, and the market size of the extracted products, treated as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied for the first time to market size estimation, overcoming the limitations of traditional methods that rely on sampling or require multiple assumptions. In addition, the level of market category can be easily and efficiently adjusted according to the purpose of the information by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application, since it can address unmet needs for detailed market size information in the public and private sectors.
Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis report publishing by private firms. The limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantic word embedding module could be advanced by imposing a proper ordering on the preprocessed dataset or by combining another measure, such as Jaccard similarity, with Word2Vec. Also, the product group clustering could be replaced with other types of unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect that they will further improve the performance of the basic model proposed conceptually in this study.
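
The embedding-and-grouping step described above might be sketched as follows with gensim; the toy corpus, seed term, and similarity threshold are assumptions, while the vector size of 300 and window of 15 follow the parameters reported in the abstract.

```python
# Sketch: embed tokenized product names with Word2Vec and pull out names
# similar to a seed term by cosine similarity to form one product group.
from gensim.models import Word2Vec

# placeholder corpus: each entry is a tokenized product name
corpus = [["stainless", "electric", "kettle"],
          ["electric", "kettle", "1.7l"],
          ["ceramic", "coffee", "mug"],
          ["drip", "coffee", "maker"]]

model = Word2Vec(sentences=corpus, vector_size=300, window=15, min_count=1, epochs=50)

seed = "kettle"                                    # hypothetical seed product term
group = [word for word, score in model.wv.most_similar(seed, topn=20) if score > 0.5]
print(f"product group for '{seed}':", group)
```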

Trends of Cancer Mortality in Gyeongsangbuk-do from 1991 to 1998 (경상북도 주민의 암사망 추이)

  • Kim, Byung-Guk;Lee, Sung-Kook;Kim, Tea-Woong;Lee, Do-Young;Lee, Kyeong-Soo
    • Journal of agricultural medicine and community health
    • /
    • v.26 no.2
    • /
    • pp.59-78
    • /
    • 2001
  • Data on reported cancer mortality in Gyeongsangbuk-do from 1991 to 1998 were collected and analyzed using the existing mortality reporting system as well as the public health network, in order to furnish accurate data on reported cancer deaths and to collect data for establishing a high-quality district health plan. The overall crude death rate in Gyeongsangbuk-do in 1991 was 74.56 deaths per 100,000 persons, and this rate increased to 79.22 in 1998. Among the deaths, cancer accounted for 16.7% in 1991, increasing to 19.3% in 1998; specifically, the proportion among men increased from 19.4% in 1991 to 22.3% in 1998, while that among women increased from 12.4% to 15.5%, a greater increase among women. The leading causes of cancer death in 1991 were gastric cancer (41.5%), followed by liver cancer (28.8%) and lung and bronchogenic carcinoma (8.7%); in 1998 the order was the same, with gastric cancer (24.7%), liver cancer (22.7%), and lung and bronchogenic carcinoma (19.3%). In 1991, gastric cancer was the most common cause of cancer death for both men and women (40.2% and 44.7%, respectively), followed by liver cancer (33.7% and 16.7%) and lung and bronchogenic carcinoma (10.2% and 5.0%). In 1998, gastric cancer (27.8%) was still the most common type among both men and women, followed by liver cancer (18.5%) and lung and bronchogenic carcinoma (12.7%), showing the largest decrease for gastric cancer and the largest increase for lung and bronchogenic carcinoma. The age-adjusted mortality rates for gastric cancer, hepatoma, and laryngeal carcinoma decreased in both males and females, and the rate for uterine cancer also decreased in females. The age-adjusted mortality rates for lung and bronchogenic carcinoma, pancreatic cancer, and rectal cancer increased in both males and females, and the rate for breast cancer also increased in females. The overall age-adjusted death rate calculated on the basis of the 1995 population was 84.25 in 1991, decreasing to 77.67 in 1998; the male rate decreased significantly from 119.81 in 1991 to 101.82 in 1998, while the female rate increased from 48.64 to 53.80. A census of cancer deaths based on accurate death records is important for establishing proper, high-quality district health and medical plans and policies. The effort to improve the accuracy of death reports using the health facility network, as attempted in this study, can be continued. Furthermore, there must be a way for the Health and Welfare Department to use the death reports to improve the present reporting system. Lastly, additional studies need to be conducted to investigate how much the accuracy was improved by the supplemented death reports in this study.
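
For reference, the direct age standardization behind the age-adjusted rates above works as in the sketch below; the age bands, rates, and standard-population counts are hypothetical and do not reproduce the study's data.

```python
# Sketch of direct age standardization: weight age-specific death rates by a
# standard (e.g. 1995) population. All numbers are hypothetical placeholders.
age_specific_rates = [5.0, 40.0, 250.0, 900.0]        # deaths per 100,000 by age band (hypothetical)
standard_population = [12_000_000, 15_000_000, 8_000_000, 2_000_000]  # standard population (hypothetical)

total = sum(standard_population)
age_adjusted_rate = sum(r * n for r, n in zip(age_specific_rates, standard_population)) / total
print(f"age-adjusted mortality rate: {age_adjusted_rate:.2f} per 100,000")
```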


Computed Tomography-guided Localization with a Hook-wire Followed by Video-assisted Thoracic Surgery for Small Intrapulmonary and Ground Glass Opacity Lesions (폐실질 내에 위치한 소결질 및 간유리 병변에서 흉부컴퓨터단층촬영 유도하에 Hook Wire를 이용한 위치 선정 후 시행한 흉강경 폐절제술의 유용성)

  • Kang, Pil-Je;Kim, Yong-Hee;Park, Seung-Il;Kim, Dong-Kwan;Song, Jae-Woo;Do, Kyoung-Hyun
    • Journal of Chest Surgery
    • /
    • v.42 no.5
    • /
    • pp.624-629
    • /
    • 2009
  • Background: Making the histologic diagnosis of small pulmonary nodules and ground glass opacity (GGO) lesions is difficult. CT-guided percutaneous needle biopsies often fail to provide enough specimen for the diagnosis, and video-assisted thoracoscopic surgery (VATS) can be inefficient for non-palpable lesions. Preoperative localization of small intrapulmonary lesions provides a more obvious target to facilitate intraoperative resection. We evaluated the efficacy of CT-guided localization using a hook wire followed by VATS for the histologic diagnosis of small intrapulmonary nodules and GGO lesions. Material and Method: Eighteen patients (13 males) were included in this study from August 2005 to March 2008. Eighteen intrapulmonary and GGO lesions underwent preoperative localization using a CT-guided hook wire system prior to VATS resection. Clinical data such as the accuracy of localization, the rate of conversion to thoracotomy, the operation time, the postoperative complications, and the histology of the pulmonary lesions were retrospectively collected. Result: Eighteen VATS resections were performed in 18 patients. Preoperative CT-guided localization with a hook wire was successful in all patients; dislodgement of the hook wire was observed in one case. There was no conversion to thoracotomy. The median diameter of the lesions was 8 mm (range: 3-15 mm), and the median depth of the lesions from the pleural surface was 5.5 mm (range: 1-30 mm). The median interval between preoperative CT-guided localization with a hook wire and VATS was 34.5 min (range: 10-226 min), and the median operative time was 43.5 min (range: 26-83 min). In two patients, clinically insignificant pneumothorax developed after CT-guided localization with a hook wire; there were no other complications. Histological examination confirmed 8 primary lung cancers, 3 metastases, 3 cases of inflammation, 2 intrapulmonary lymph nodes, and 2 other benign lesions. Conclusion: CT-guided localization with a hook wire followed by VATS for small intrapulmonary nodules and GGO lesions provided a low conversion-to-thoracotomy rate, a short operation time, and few localization-related or postoperative complications. This procedure was efficient for confirming intrapulmonary lesions and GGO lesions.