
Studies on the Properties of Populus Grown in Korea (포플러재(材)의 재질(材質)에 관(關)한 시험(試驗))

  • Jo, Jae-Myeong;Kang, Sun-Goo;Lee, Yong-Dae;Jung, Hee-Suk;Ahn, Jung-Mo;Shim, Chong-Supp
    • Journal of the Korean Wood Science and Technology / v.10 no.3 / pp.68-87 / 1982
  • In Korea, the total demand for timber in 1972 exceeded 5 million cubic meters, while the available domestic supply in the same year was only about 1 million cubic meters, leaving a severe imbalance between demand and supply. To solve this problem, it became necessary to build up forest stocks as early as possible with fast-growing species such as poplar. Accordingly, poplar plantations established by government and private owners since 1962 have reached a large area of 116,603 hectares. Poplar has now become a principal timber resource in this country, and basic studies on the various properties of its wood are required for proper utilization, since no systematic study on the properties of Populus grown in Korea had yet been made. In order to investigate the anatomical, physical, and mechanical properties of nine species of poplar (P. euramericana Guiner I-214, P. euramericana Guiner I-476, P. deltoides Marsh, P. nigra var. italica (Muchk) Koeme, P. alba L., P. alba $\times$ glandulosa, P. maximowiczii Henry, P. koreana Rehder, P. davidiana Dode) for their proper use and for the development of new methods of grading, processing, and quality improvement, this study was carried out by the Forest Research Institute.


FAMILY DYNAMICS OF INCEST PERCEIVED BY ADOLESCENTS (청소년이 지각한 근친상간의 가족역동)

  • Kim, Hun-Soo;Shin, Hwa-Sik
    • Journal of the Korean Academy of Child and Adolescent Psychiatry / v.6 no.1 / pp.56-64 / 1995
  • The family is the primary unit of socialization for children. Among family members, parents are the most important figures from whom children and adolescents acquire a wide variety of behavior patterns, attitudes, values, and norms. The organization of family members produces the family's structural functioning, and an abnormal family structure is one of the most important reference models in the learning of antisocial patterns of behavior. Incest and child sexual abuse, like spouse abuse, elder abuse, and neglect, occur in abnormal family structural settings. In particular, incest, a specific form of sexual abuse, was once thought to be a phenomenon of great rarity, but clinical experience, especially over the past decade, has made us aware that incest and child sexual abuse are not rare and are increasing. The aim of this study was therefore to determine the family problems and dynamics of incest families and the character patterns of post-incest adolescent victims in Korea. A total of 1,838 adolescents from middle and high schools (1,237) and juvenile correctional institutes (601) were studied, sampled from the Korean student population and the delinquent population confined in juvenile correctional institutes using a proportional stratified random sampling method. The subjects' ages ranged from 12 to 21 years. Data were collected through a questionnaire survey and analyzed on an IBM PC at the Behavior Science Center of Korea University using the SAS program. Statistical methods employed were the chi-square test, principal component analysis, and the t-test. The results were as follows: 1) Of 1,071 subjects, 40 (3.7%) reported incest experiences (sibling incest: 1.6%; other types of incest: 2.1%) in their family setting. 2) Post-incest adolescent victims were more socially maladjusted, immature, impulsive, rigid, anxious, and dependent than non-incest adolescents, and also showed problems in academic performance and assertiveness. 3) The other members of incest families revealed more psychological and behavioral problems, such as depression, alcoholism, psychotic disorder, and criminal acts, than non-incest families, although there was no evidence of a causal link between them. 4) The family dynamics of incest families tended to be dysfunctional compared with non-incest families: psychological instability of family members, parental rejection of children, coldness and indifference among family members, and marital discord between the parents were significantly correlated with incest.


The Study about Application of LEAP Collimator at Brain Diamox Perfusion Tomography Applied Flash 3D Reconstruction: One Day Subtraction Method (Flash 3D 재구성을 적용한 뇌 혈류 부하 단층 촬영 시 LEAP 검출기의 적용에 관한 연구: One Day Subtraction Method)

  • Choi, Jong-Sook;Jung, Woo-Young;Ryu, Jae-Kwang
    • The Korean Journal of Nuclear Medicine Technology / v.13 no.3 / pp.102-109 / 2009
  • Purpose: Flash 3D (pixon(R) method; 3D OSEM) was developed as a software program to shorten examination time and improve image quality through reconstruction, and is an image-processing method usefully applied to nuclear medicine tomography. When performing brain diamox perfusion scans with subtracted images reconstructed by Flash 3D under a shortened acquisition time, there is a problem that the SNR of the subtracted image is lower than that of the basal image. To increase the SNR of the subtracted image, we used LEAP collimators, placing emphasis on sensitivity to vessel dilatation rather than on the resolution of brain vessels. The purpose of this study was to confirm the applicability of LEAP collimators to brain diamox perfusion tomography and to identify proper reconstruction factors for Flash 3D. Materials and Methods: (1) Phantom evaluation: We used a Hoffman 3D brain phantom filled with $^{99m}Tc$. We obtained images with LEAP and LEHR collimators (diamox image), and after 6 hours (the half-life of $^{99m}Tc$: 6 hours) we obtained a second image (basal image) by the same method. We also measured the SNR and the white-to-gray-matter ratio of each basal and subtracted image. (2) Patient image evaluation: From May 2008 to January 2009, we quantitatively analyzed a normal group examined with LEAP collimators and a normal group examined with LEHR collimators, applying the factors obtained from the phantom study. We used a one-day protocol and injected 925 MBq of $^{99m}Tc$-ECD for both the basal and diamox image acquisitions. Results: (1) Phantom evaluation: With the LEHR collimator, 41~46 kcounts were detected in the basal image, 79~90 kcounts in the stress image, and 40~47 kcounts in the subtraction image. With LEAP, about 102~113 kcounts were detected in the basal image, 188~210 kcounts in the stress image, and 94~103 kcounts in the subtraction image. The SNR of the LEHR subtraction image was about 37% lower than that of the LEHR basal image, while the SNR of the LEAP subtraction image was about 17% lower than that of the LEAP basal image. The gray-to-white-matter ratio was 2.2:1 in the LEHR basal image and 1.9:1 in its subtraction image, versus 2.4:1 in the LEAP basal image and 2:1 in its subtraction image. (2) Patient image evaluation: The counts acquired with LEHR collimators were about 40~60 kcounts in the basal image and 80~100 kcounts in the stress image; it was proper to set the FWHM to 7 mm for the basal and stress images and 11 mm for the subtraction image. With LEAP, about 80~100 kcounts were acquired in the basal image and 180~200 kcounts in the stress image; LEAP images could reduce blurring with the FWHM set to 5 mm for the basal and stress images and 7 mm for the subtraction image. For the basal and stress images, the LEHR image was superior to the LEAP image, but for the subtraction image, as in the phantom experiment, the LEHR image appeared rough because its SNR decreased, whereas the LEAP subtraction image was better in both SNR and sensitivity. For both LEHR and LEAP collimator images, the proper subset and iteration number was 8. Conclusions: We could achieve clearer, higher-SNR subtraction images by using a proper filter with the LEAP collimator. When applying a one-day protocol with Flash 3D reconstruction, the LEAP collimator can be considered for acquiring better subtraction images.
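The count figures above suggest why the higher-sensitivity LEAP collimator preserves subtraction SNR better. Assuming simple independent Poisson counting statistics (an illustrative model, not the authors' analysis method), the noise of a subtraction image is the square root of the summed counts, so the fractional SNR loss shrinks as total counts grow:

```python
import math

def subtraction_snr(stress_counts, basal_counts):
    """SNR of (stress - basal) under independent Poisson noise:
    signal = difference of means, noise = sqrt(sum of variances)."""
    signal = stress_counts - basal_counts
    noise = math.sqrt(stress_counts + basal_counts)
    return signal / noise

# Representative mid-range counts from the abstract (kcounts -> counts)
lehr = subtraction_snr(85_000, 43_000)    # LEHR: ~79-90k stress, ~41-46k basal
leap = subtraction_snr(200_000, 108_000)  # LEAP: ~188-210k stress, ~102-113k basal
# The higher-count LEAP acquisition retains a better subtraction SNR.
```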


Postmortem Changes of the Protein and Amino Acid Composition of Muscles in the Partially Frozen Prawn, Pandalus japonica (보리새우육의 부분동결저장중 단백질 및 아미노산의 조성변화)

  • PYEUN Jae-Hyeung;CHOI Young-Joon;KIM Jeung-Han;CHO Kweon-Ock
    • Korean Journal of Fisheries and Aquatic Sciences / v.17 no.4 / pp.280-290 / 1984
  • An extensive study was made of the relationship between freshness and the composition of the muscle protein of the prawn, Pandalus japonica, during storage under partially frozen conditions. Changes in the subunit distribution of the sarcoplasmic and myofibrillar proteins extracted from the samples with the loss of freshness were examined by sodium dodecylsulfate-polyacrylamide gel (SDS-PAG) electrophoresis, and the denaturation constants ($K_D$) of the myofibrillar protein from prawn stored at $-3^{\circ}C$ and $-20^{\circ}C$ were compared. The prawn muscle contained about 18% protein, composed of 32% sarcoplasmic protein, 56% myofibrillar protein, 10% residual intracellular protein, and 2% stroma. The freshness indices of the muscle approached the early stage of putrefaction on the 26th day of storage, with 25.29 mg% of total volatile basic nitrogen, a K-value of 31.36%, and a pH of 8.83. The myofibrillar protein content decreased remarkably during storage, while the residual intracellular protein content increased. The $K_D$ values of the myofibrillar protein were $9.03{\times}10^{-6}sec^{-1}$ at $-3^{\circ}C$ and $4.42{\times}10^{-6}sec^{-1}$ at $-20^{\circ}C$. Analysis of the SDS-PAG electrophoretograms indicated that the sarcoplasmic and myofibrillar proteins were composed of 12 and 17 subunits, respectively, in the muscle of instantaneously killed prawn, and changed to 8 and 22 subunits in the muscle stored for 26 days. It is notable that 8 newly appeared subunits of 30,000, 41,000, 107,000, 136,000, 170,000, 173,000, 185,000, and 198,000 daltons were found in the myofibrillar protein of the prawn muscle stored for 26 days. The amino acid composition of the muscle protein showed that most amino acids decreased slightly over the days of storage. In the free amino acid composition of the muscle of instantaneously killed prawn, glycine, proline, arginine, alanine, and taurine comprised 93% of the total free amino acids. Taurine, valine, leucine, phenylalanine, serine, lysine, methionine, isoleucine, and histidine increased during the storage period, while proline, exceptionally, decreased.
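The denaturation constants quoted above are consistent with simple first-order kinetics, in which the fraction of native (extractable) myofibrillar protein decays exponentially. A sketch of how such a $K_D$ would be computed; the remaining fraction and elapsed time below are hypothetical numbers chosen only to reproduce the order of magnitude reported at $-3^{\circ}C$, not data from the paper:

```python
import math

def denaturation_constant(native_fraction_remaining, elapsed_seconds):
    """First-order denaturation: N(t) = N0 * exp(-K_D * t),
    hence K_D = -ln(N(t)/N0) / t, in sec^-1."""
    return -math.log(native_fraction_remaining) / elapsed_seconds

# Hypothetical measurement: 92.5% of the native protein (e.g. by
# Ca-ATPase activity) remains after 2.4 hours of storage.
kd = denaturation_constant(0.925, 2.4 * 3600)  # on the order of 9e-6 sec^-1
```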


Accelerometer-based Gesture Recognition for Robot Interface (로봇 인터페이스 활용을 위한 가속도 센서 기반 제스처 인식)

  • Jang, Min-Su;Cho, Yong-Suk;Kim, Jae-Hong;Sohn, Joo-Chan
    • Journal of Intelligence and Information Systems / v.17 no.1 / pp.53-69 / 2011
  • Vision- and voice-based technologies are commonly utilized for human-robot interaction, but it is widely recognized that their performance deteriorates by a large margin in real-world situations due to environmental and user variance. Human users need to be very cooperative to get reasonable performance, which significantly limits the usability of vision- and voice-based human-robot interaction technologies. As a result, touch screens are still the major medium of human-robot interaction in real-world applications. To improve the usability of robots for various services, alternative interaction technologies should be developed to complement the problems of vision- and voice-based technologies. In this paper, we propose the accelerometer-based gesture interface as one such alternative, because accelerometers are effective in detecting the movements of the human body, while their performance is not limited by environmental context such as lighting conditions or a camera's field of view. Moreover, accelerometers are now widely available in many mobile devices. We tackle the problem of classifying the acceleration signal patterns of the 26 English alphabet letters, one of the essential repertoires for realizing robot-based education services. Recognizing 26 English handwriting patterns from accelerometers is a very difficult task because of the large number of pattern classes and the complexity of each pattern; the most difficult comparable problem previously undertaken was recognizing the acceleration signal patterns of 10 handwritten digits, and most previous studies dealt with sets of 8~10 simple, easily distinguishable gestures useful for controlling home appliances, computer applications, robots, etc. Good features are essential for the success of pattern recognition. To promote discriminative power over the complex English alphabet patterns, we extracted 'motion trajectories' from the input acceleration signal and used them as the main feature. Investigative experiments showed that trajectory-based classifiers performed 3%~5% better than those using raw features, e.g., the acceleration signal itself or statistical figures. To minimize the distortion of trajectories, we applied a simple but effective set of smoothing and band-pass filters. It is well known that acceleration patterns for the same gesture vary greatly among performers, so online incremental learning is applied to make our system adaptive to each user's distinctive motion properties. Our system is based on instance-based learning (IBL), in which each training sample is memorized as a reference pattern. Brute-force incremental learning in IBL continuously accumulates reference patterns, which not only slows down classification but also degrades recall performance. Regarding the latter phenomenon, we observed that as the number of reference patterns grows, some reference patterns contribute more to false positive classification. We therefore devised an algorithm for optimizing the reference pattern set based on the positive and negative contribution of each reference pattern; the algorithm is performed periodically to remove reference patterns that have a very low positive contribution or a high negative contribution. Experiments were performed on 6,500 gesture patterns collected from 50 adults aged 30 to 50. Each alphabet letter was performed 5 times per participant using a $Nintendo{(R)}$ $Wii^{TM}$ remote, with the acceleration signal sampled at 100 Hz on 3 axes. The mean recall rate over all letters was 95.48%. Some letters recorded a very low recall rate and exhibited a very high pairwise confusion rate; the major confusion pairs were D (88%) and P (74%), I (81%) and U (75%), and N (88%) and W (100%). Though W was recalled perfectly, it contributed much to the false positive classification of N. Comparing with major previous results from VTT (96% for 8 control gestures), CMU (97% for 10 control gestures), and Samsung Electronics (97% for 10 digits and a control gesture), the performance of our system is superior considering the number of pattern classes and the complexity of the patterns. Using our gesture interaction system, we conducted 2 case studies of robot-based edutainment services, implemented on various robot platforms and mobile devices including the $iPhone^{TM}$. The participating children exhibited improved concentration and active reaction to the service with our gesture interface. To prove its effectiveness, the children were tested after experiencing an English teaching service: those who played with the gesture-interface-based robot content scored 10% higher than those taught conventionally. We conclude that the accelerometer-based gesture interface is a promising technology for real-world robot-based services and content, complementing the limits of today's conventional interfaces, e.g., touch screen, vision, and voice.
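The reference-pattern optimization described above can be sketched as a 1-NN instance-based learner that records, for each stored pattern, how often it was the nearest neighbour in a correct versus an incorrect classification, and periodically drops low-value patterns. All names, thresholds, and the Euclidean distance below are illustrative assumptions; the paper's actual contribution scores and trajectory features are not reproduced here.

```python
import math
from collections import defaultdict

class PrunedIBL:
    """1-NN instance-based learner with periodic reference-pattern pruning
    based on each pattern's positive/negative contribution (a sketch)."""

    def __init__(self, max_negative=3):
        self.refs = []                 # list of (feature_vector, label)
        self.pos = defaultdict(int)    # correct-classification contributions
        self.neg = defaultdict(int)    # false-positive contributions
        self.max_negative = max_negative

    @staticmethod
    def _dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def add(self, vec, label):
        self.refs.append((vec, label))

    def classify(self, vec, true_label=None):
        idx = min(range(len(self.refs)),
                  key=lambda i: self._dist(vec, self.refs[i][0]))
        pred = self.refs[idx][1]
        if true_label is not None:     # online feedback updates contributions
            if pred == true_label:
                self.pos[idx] += 1
            else:
                self.neg[idx] += 1
        return pred

    def prune(self):
        """Drop consulted references with no positive contribution or a
        high negative contribution; keep never-consulted references."""
        keep = [i for i in range(len(self.refs))
                if self.pos[i] + self.neg[i] == 0
                or (self.neg[i] < self.max_negative and self.pos[i] > 0)]
        self.refs = [self.refs[i] for i in keep]
        self.pos.clear()
        self.neg.clear()

# Demo: a stray "B" reference sitting inside "A" territory gets pruned
clf = PrunedIBL(max_negative=2)
clf.add((0.0, 0.0), "A")
clf.add((1.0, 1.0), "B")
clf.add((0.4, 0.4), "B")                    # misleading reference
clf.classify((0.30, 0.30), true_label="A")  # misclassified as "B"
clf.classify((0.35, 0.35), true_label="A")  # misclassified again
clf.prune()                                 # misleading reference removed
```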

Preparation of Powdered Smoked-Dried Mackerel Soup and Its Taste Compounds (고등어분말수우프의 제조 및 정미성분에 관한 연구)

  • LEE Eung-Ho;OH Kwang-Soo;AHN Chang-Bum;CHUNG Bu-Gil;BAE You-Kyung;HA Jin-Hwan
    • Korean Journal of Fisheries and Aquatic Sciences / v.20 no.1 / pp.41-51 / 1987
  • This study was carried out to prepare a powdered smoked-dried mackerel that can be used as a soup base, and to examine the storage stability and taste compounds of the products. Raw mackerel were filleted, boiled for 10 minutes, pressed to remove lipids, and then soaked in an extract solution of skipjack meat. The soaked mackerel were smoked 3 times at $80^{\circ}C$ for 8 hours to a moisture content of 10-12%, and the smoked-dried mackerel were pulverized to 50 mesh. Finally, the powdered smoked-dried mackerel was packed in laminated film bags (PET/Al foil/CPP: $5{\mu}m/15{\mu}m/70{\mu}m$, $15{\times}17cm$) with air (product C), nitrogen (product N), or an oxygen absorber (product O), and stored at room temperature for 100 days. The moisture and crude lipid contents of the powdered smoked-dried mackerel were 11.3-12.3% and 12%, respectively, and the water activity was 0.52-0.56; these values showed little change during storage. The pH, VBN, and amino nitrogen content increased slowly during storage. Hydrophilic and lipophilic brown pigment formation tended to increase in product C but showed little change in products N and O. The TBA value, peroxide value, and carbonyl value of products N and O were lower than those of product C. The major fatty acids of the products were 16:0, 18:1, 22:6, 18:0, and 20:5; polyenoic acids decreased, while saturated and monoenoic acids increased, during processing and storage. The IMP content of the products was 420.2-454.2 mg/100 g and decreased slightly with storage period. The major non-volatile organic acids in the products were lactic acid, succinic acid, and $\alpha$-ketoglutaric acid. Among free amino acids and related compounds, the major ones were histidine, alanine, hydroxyproline, lysine, glutamic acid, and anserine, which occupied 80.8% of the total free amino acids. The taste compounds of the powdered smoked-dried mackerel were free amino acids and related compounds (1,279.4 mg/100 g), non-volatile organic acids (948.1 mg/100 g), nucleotides and their related compounds (672.8 mg/100 g), total creatinine (430.4 mg/100 g), betaine (86.6 mg/100 g), and a small amount of TMAO. The appropriate extraction condition for preparing soup stock from the powdered smoked-dried mackerel is $100^{\circ}C$ for 1 minute. Judging from the results of the taste and sensory evaluation, the powdered smoked-dried mackerel can be used as a natural flavoring substance in preparing soups and broth.


Methods for Integration of Documents using Hierarchical Structure based on the Formal Concept Analysis (FCA 기반 계층적 구조를 이용한 문서 통합 기법)

  • Kim, Tae-Hwan;Jeon, Ho-Cheol;Choi, Joong-Min
    • Journal of Intelligence and Information Systems / v.17 no.3 / pp.63-77 / 2011
  • The World Wide Web is a very large distributed digital information space. From its origins in 1991, the web has grown to encompass diverse information resources such as personal home pages, online digital libraries, and virtual museums. Some estimates suggest that the web currently includes over 500 billion pages in the deep web. The ability to search and retrieve information from the web efficiently and effectively is an enabling technology for realizing its full potential. With powerful workstations and parallel processing technology, efficiency is not a bottleneck; in fact, some existing search tools sift through gigabyte-size precompiled web indexes in a fraction of a second. But retrieval effectiveness is a different matter. Current search tools retrieve too many documents, of which only a small fraction are relevant to the user query, and the most relevant documents do not necessarily appear at the top of the query output order. Moreover, current search tools cannot retrieve documents related to a retrieved document from a gigantic document collection. The most important problem for many current search systems is to increase the quality of search, that is, to provide related documents while keeping the number of unrelated documents in the results as low as possible. To address this problem, CiteSeer proposed ACI (Autonomous Citation Indexing) of articles on the World Wide Web. A "citation index" indexes the links between articles that researchers make when they cite other articles. Citation indexes are very useful for a number of purposes, including literature search and analysis of the academic literature. In detail, the references contained in academic articles are used to give credit to previous work in the literature and provide a link between the "citing" and "cited" articles; a citation index indexes the citations that an article makes, linking the articles with the cited works. Citation indexes were originally designed mainly for information retrieval. The citation links allow navigating the literature in unique ways: papers can be located independently of language or of the words in the title, keywords, or document, and a citation index allows navigation backward in time (the list of cited articles) and forward in time (which subsequent articles cite the current article?). However, CiteSeer cannot index links between articles that researchers have not made, because it indexes only the links created when researchers cite other articles; for the same reason, CiteSeer does not scale easily. All these problems motivated us to design a more effective search system. This paper presents a method that extracts the subject and predicate of each sentence in documents. A document is transformed into a tabular form in which each extracted predicate is checked against its possible subjects and objects. Using this table, we build a hierarchical graph of each document and then integrate the graphs of the documents. For the graph of the entire document set, we calculate the area of each document relative to the integrated documents, and we mark the relations among the documents by comparing their areas. We also propose a method for the structural integration of documents that retrieves documents from the graph, making it easier for the user to find information. We compared the performance of the proposed approaches with the Lucene search engine using ranking formulas. As a result, the F-measure is about 60%, which is better by about 15%.
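The comparison against Lucene is reported as an F-measure, the harmonic mean of precision and recall. As a quick sketch (the precision and recall values below are illustrative assumptions; the abstract reports only the resulting F-measure of about 60%):

```python
def f_measure(precision, recall):
    """Balanced F1: harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# An illustrative operating point yielding an F-measure near the
# reported ~60%:
score = f_measure(0.65, 0.56)
```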

Construction of Event Networks from Large News Data Using Text Mining Techniques (텍스트 마이닝 기법을 적용한 뉴스 데이터에서의 사건 네트워크 구축)

  • Lee, Minchul;Kim, Hea-Jin
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.183-203 / 2018
  • News articles are the most suitable medium for examining events occurring at home and abroad. In particular, as the development of information and communication technology has brought various kinds of online news media, news about events occurring in society has increased greatly, so automatically summarizing key events from massive amounts of news data will help users survey many events at a glance. In addition, building and providing an event network based on the relevance of events can greatly help readers understand current events. In this study, we propose a method for extracting event networks from large news text data. To this end, we first collected Korean political and social articles from March 2016 to March 2017 and, through preprocessing using NPMI and Word2vec, kept only meaningful words and integrated synonyms. Latent Dirichlet allocation (LDA) topic modeling was used to calculate the topic distribution by date, find the peaks of each topic's distribution, and detect events. A total of 32 topics were extracted by the topic modeling, and the occurrence time of each event was deduced from the point at which its topic distribution surged. As a result, a total of 85 events were detected, of which a final 16 events were selected and presented after filtering with Gaussian smoothing. We then calculated relevance scores between the detected events to construct the event network: using the cosine coefficient between co-occurring events, we calculated the relevance between events and connected them. Finally, we set up the event network with each event as a vertex and the relevance score between events as the edge connecting the vertices. The event network constructed by our method allowed us to sort out the major events in the political and social fields in Korea over the past year in chronological order, and at the same time identify which events are related to which. Our approach differs from existing event detection methods in that LDA topic modeling makes it possible to easily analyze large amounts of data and to identify relations between events that were difficult to detect with existing event detection methods. We applied various text mining techniques and Word2vec in the preprocessing to improve the accuracy of extracting proper nouns and compound nouns, which has been difficult in analyzing Korean texts. The event detection and network construction techniques of this study have the following practical advantages. First, LDA topic modeling, being unsupervised learning, can easily extract topics, topic words, and distributions from a huge amount of data, and by using the date information of the collected news articles, the per-topic distribution can be expressed as a time series. Second, by calculating relevance scores from the co-occurrence of topics, which is difficult to grasp with existing event detection, we can present the connections among events in a summarized form; indeed, the inter-event relevance-based event network proposed in this study was actually constructed in order of occurrence time, and the event network also makes it possible to identify which event served as the starting point of a series of events. The limitations of this study are that LDA topic modeling gives different results depending on the initial parameters and the number of topics, and that the topic and event names in the analysis results must be assigned by the subjective judgment of the researcher. Also, since each topic is assumed to be exclusive and independent, the relevance between topics is not taken into account. Subsequent studies need to calculate the relevance between events not covered in this study, or between events belonging to the same topic.
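The relevance-scoring step, connecting events whose topic activity co-occurs, can be sketched with the cosine coefficient. The event names, daily topic-weight vectors, and threshold below are illustrative assumptions, not the paper's actual data:

```python
import math
from itertools import combinations

def cosine(u, v):
    """Cosine coefficient between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def build_event_network(event_series, threshold=0.5):
    """Connect events whose per-date topic-weight (co-occurrence)
    vectors are similar; returns weighted edges of the event network."""
    edges = []
    for (e1, v1), (e2, v2) in combinations(event_series.items(), 2):
        score = cosine(v1, v2)
        if score >= threshold:
            edges.append((e1, e2, round(score, 3)))
    return edges

# Hypothetical daily topic weights for three detected events
events = {
    "event_A": [0.9, 0.8, 0.1, 0.0],
    "event_B": [0.8, 0.9, 0.0, 0.1],  # co-occurs with event_A
    "event_C": [0.0, 0.1, 0.9, 0.8],  # unrelated later burst
}
network = build_event_network(events, threshold=0.5)
```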

A Study on Developing Customized Bolus using 3D Printers (3D 프린터를 이용한 Customized Bolus 제작에 관한 연구)

  • Jung, Sang Min;Yang, Jin Ho;Lee, Seung Hyun;Kim, Jin Uk;Yeom, Du Seok
    • The Journal of Korean Society for Radiation Therapy / v.27 no.1 / pp.61-71 / 2015
  • Purpose: 3D printers create three-dimensional models from digital blueprints. Based on this characteristic, it is feasible to develop a bolus that minimizes the air gap between the skin and the bolus in radiotherapy. This study compares and analyzes the air gap and target dose of a commercial 1 cm bolus and a customized bolus produced with a 3D printer. Materials and Methods: A RANDO phantom with a protruding tumor was scanned with a CT simulator. The CT DICOM file was converted into an STL file compatible with 3D printers. Using this, a customized bolus molding box (maintaining a 1 cm thickness) was produced on a 3D printer, and paraffin was melted into it to make the customized bolus. The air gaps of the customized bolus and the commercial 1 cm bolus were measured, and the resulting differences were used to compare $D_{max}$, $D_{min}$, $D_{mean}$, $D_{95%}$, and $V_{95%}$ in treatment plans generated in Eclipse. Results: Production of the customized bolus took about 3 days. The total air gap volume averaged $3.9cm^3$ for the customized bolus versus $29.6cm^3$ for the commercial 1 cm bolus, so the 3D-printed customized bolus was more effective in minimizing the air gap. In the 6 MV photon plan, the $D_{max}$, $D_{min}$, $D_{mean}$, $D_{95%}$, and $V_{95%}$ of the GTV were 102.8%, 88.1%, 99.1%, 95.0%, and 94.4% with the customized bolus, and 101.4%, 92.0%, 98.2%, 95.2%, and 95.7% with the commercial 1 cm bolus. In the proton plan, they were 104.1%, 84.0%, 101.2%, 95.1%, and 99.8% with the customized bolus, and 104.8%, 87.9%, 101.5%, 94.9%, and 99.9% with the commercial 1 cm bolus. Thus, there was no significant difference in the treatment plans between the customized bolus and the 1 cm bolus. However, the normal tissue near the GTV showed a relatively lower radiation dose. Conclusion: The customized bolus produced by a 3D printer was effective in minimizing the air gap, especially for treatment areas with irregular surfaces. In this study, the air gap under the commercial bolus was not large enough to cause a change in target dose, although on the chest wall a dose decrease due to a small air gap could be confirmed. Customized bolus production took about 3 days and the cost was quite high; therefore, the commercialization of 3D-printed customized boluses requires low-cost 3D printer materials adequate for bolus use.


Performance Evaluation of Siemens CTI ECAT EXACT 47 Scanner Using NEMA NU2-2001 (NEMA NU2-2001을 이용한 Siemens CTI ECAT EXACT 47 스캐너의 표준 성능 평가)

  • Kim, Jin-Su;Lee, Jae-Sung;Lee, Dong-Soo;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine / v.38 no.3 / pp.259-267 / 2004
  • Purpose: NEMA NU2-2001 has been proposed as a new standard for the performance evaluation of whole-body PET scanners. In this study, the system performance of the Siemens CTI ECAT EXACT 47 PET scanner, including spatial resolution, sensitivity, scatter fraction, and count rate performance in 2D and 3D mode, was evaluated using this new standard method. Methods: The ECAT EXACT 47 is a BGO-crystal PET scanner covering an axial field of view (FOV) of 16.2 cm; retractable septa allow 2D and 3D data acquisition. All PET data were acquired according to the NEMA NU2-2001 protocols (coincidence window: 12 ns, energy window: 250~650 keV). For the spatial resolution measurement, an F-18 point source was placed at the center of the axial FOV ((a) x=0, y=1; (b) x=0, y=10; (c) x=70, y=0 cm) and at a position one fourth of the axial FOV from the center ((a) x=0, y=1; (b) x=0, y=10; (c) x=10, y=0 cm), where x and y are the transaxial horizontal and vertical directions and z is the scanner's axial direction. Images were reconstructed using FBP with a ramp filter without any post-processing. To measure system sensitivity, the NEMA sensitivity phantom filled with F-18 solution and surrounded by 1~5 aluminum sleeves was scanned at the center of the transaxial FOV and at 10 cm offset from the center; attenuation-free sensitivity values were estimated by extrapolating the data to zero wall thickness. The NEMA scatter phantom, 70 cm in length, was filled with F-18 or C-11 solution (2D: 2,900 MBq; 3D: 407 MBq), and coincidence count rates were measured for 7 half-lives to obtain the noise equivalent count rate (NECR) and scatter fraction. We confirmed that the dead time loss of the last frame was below 1%. The scatter fraction was estimated by averaging the true-to-background (scatter+random) ratios of the last 3 frames, in which the random rates are negligibly small. Results: Axial and transverse resolutions at 1 cm offset from the center were 0.62 and 0.66 cm (FBP in 2D and 3D) and 0.67 and 0.69 cm (FBP in 2D and 3D). Axial, transverse radial, and transverse tangential resolutions at 10 cm offset from the center were 0.72 and 0.68 cm, 0.63 and 0.66 cm, and 0.72 and 0.66 cm (FBP in 2D and 3D, respectively). Sensitivity values were 708.6 (2D) and 2,931.3 (3D) counts/sec/MBq at the center, and 728.7 (2D) and 3,398.2 (3D) counts/sec/MBq at 10 cm offset from the center. Scatter fractions were 0.19 (2D) and 0.49 (3D). The peak true count rate and NECR were 64.0 kcps at 40.1 kBq/mL and 49.6 kcps at 40.1 kBq/mL in 2D, and 53.7 kcps at 4.76 kBq/mL and 26.4 kcps at 4.47 kBq/mL in 3D. Conclusion: The performance data for the CTI ECAT EXACT 47 PET scanner reported in this study will be useful for quantitative data analysis and for determining optimal image acquisition protocols with this widely used clinical and research scanner.
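The NECR and scatter fraction figures above follow the standard NEMA definitions, $NECR = T^2/(T+S+R)$ and $SF = S/(S+T)$. In the sketch below, the split of the 2D background into scatter and random rates is back-calculated for illustration from the reported peak values (64.0 kcps trues, 49.6 kcps NECR, SF 0.19); it is not measured data:

```python
def noise_equivalent_count_rate(trues, scatter, randoms):
    """NEMA NU2 figure of merit: NECR = T^2 / (T + S + R)."""
    total = trues + scatter + randoms
    return trues ** 2 / total if total else 0.0

def scatter_fraction(scatter, trues):
    """SF = S / (S + T), estimated where randoms are negligible."""
    return scatter / (scatter + trues)

# Back-calculated 2D decomposition at the peak (kcps): T = 64.0,
# S ~ 15.0 (from SF = 0.19), R ~ 3.6 (remainder)
necr = noise_equivalent_count_rate(64.0, 15.0, 3.6)
sf = scatter_fraction(15.0, 64.0)
```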