• Title/Summary/Keyword: Evaluation of Use


Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.191-207
    • /
    • 2021
  • Over the past decades, mobile communications have evolved rapidly from 2G to 5G, focusing mainly on higher speeds to meet growing data demands. With the start of the 5G era, efforts are being made to provide services such as IoT, V2X, robotics, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our living and industrial environments as a whole. For these services, reduced latency and high reliability are as critical as high data rates for real-time operation. 5G therefore targets a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10⁶ devices/km². In particular, intelligent traffic control systems and services based on vehicle-to-X (V2X) communication, such as traffic control, depend heavily on low delay and high reliability for real-time operation, in addition to high data rates. 5G uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straight propagation, but their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting indoor use. It is therefore difficult to overcome these constraints under existing networks. The underlying centralized SDN also has limited capability for delay-sensitive services, because communication with many nodes overloads its processing. SDN, a structure that separates control-plane signaling from data-plane packets, must control the delay-related tree structure available in the event of an emergency during autonomous driving. In such scenarios, the network architecture that handles in-vehicle information is a major determinant of delay.
Since conventional centralized SDNs have difficulty meeting the required delay level, the optimal size of an SDN for information processing should be studied. SDNs therefore need to be partitioned on a certain scale into a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. The structure of such networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized structure, even under worst-case conditions. In these SDN-structured networks, where vehicles pass through small 5G cells very quickly, the information update cycle, the round-trip delay (RTD), and the SDN's data processing time are strongly correlated with the overall delay. Of these, RTD is not a significant factor because the link is fast enough to keep it below 1 ms, but the information update cycle and the SDN's data processing time greatly affect the delay. In particular, in an autonomous-driving emergency linked to an Intelligent Traffic System (ITS), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and, through simulation, analyze the correlation with the cell layer from which the vehicle should request the relevant information according to the information flow. For the simulation, since the 5G data rate is high enough, we assume that information on neighboring vehicles reaches the car without errors. We further assume 5G small cells with radii of 50 to 250 m and vehicle speeds of 30 to 200 km/h in order to examine the network architecture that minimizes delay.
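The cell radii and vehicle speeds stated in the abstract fix how long a vehicle stays inside one small cell, which bounds how often the information update cycle and SDN processing must complete. A minimal back-of-the-envelope sketch (not the paper's simulator; the function name and printout are illustrative):

```python
# Hypothetical sketch: time a vehicle spends inside a 5G small cell,
# to compare against the sub-1-ms RTD and the SDN processing time.

def dwell_time_s(cell_radius_m: float, speed_kmh: float) -> float:
    """Seconds needed to traverse a cell diameter at a constant speed."""
    speed_ms = speed_kmh / 3.6          # km/h -> m/s
    return (2 * cell_radius_m) / speed_ms

for radius in (50, 250):                # cell radii from the abstract (m)
    for speed in (30, 200):             # vehicle speeds from the abstract (km/h)
        t = dwell_time_s(radius, speed)
        print(f"radius={radius} m, speed={speed} km/h -> dwell {t:.2f} s")
```

Even in the fastest case (50 m cells at 200 km/h), the dwell time is about 1.8 s, so the delay budget is dominated by the update cycle and SDN processing rather than the RTD, consistent with the abstract's argument.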

Assessment for the Utility of Treatment Plan QA System according to Dosimetric Leaf Gap in Multileaf Collimator (다엽콜리메이터의 선량학적엽간격에 따른 치료계획 정도관리시스템의 효용성 평가)

  • Lee, Soon Sung;Choi, Sang Hyoun;Min, Chul Kee;Kim, Woo Chul;Ji, Young Hoon;Park, Seungwoo;Jung, Haijo;Kim, Mi-Sook;Yoo, Hyung Jun;Kim, Kum Bae
    • Progress in Medical Physics
    • /
    • v.26 no.3
    • /
    • pp.168-177
    • /
    • 2015
  • Quality assurance of the treatment plan is recommended for accurate evaluation when patients are treated with IMRT, which is complex and delicate. For this purpose, treatment-plan QA software can be used to verify the delivered dose before and after treatment. The purpose of this study is to evaluate the accuracy of a treatment-plan QA system for each IMRT plan according to the dosimetric leaf gap (DLG) of the MLC. A Novalis Tx with a built-in HD120 MLC was used to acquire the MLC dynalog files to be imported into MobiusFx. IMRT plans were created in the Eclipse RTP system, and target and organ structures (multi-target, mock prostate, mock head/neck, and C-shape cases) were contoured in the I'mRT phantom. To examine the dependence of the dose distribution on the DLG, the dynalog files were imported into MobiusFx and the DLG value was varied (0.5, 0.7, 1.0, 1.3, 1.6 mm). For dose evaluation, the dose distribution was assessed with the 3D gamma index using a 3% dose-difference and 3 mm distance-to-agreement criterion, and the point dose was measured with a CC13 ionization chamber at the isocenter of the I'mRT phantom. For the point dose, the mock head/neck and multi-target cases differed by about 4% and 3% at DLGs of 0.5 and 0.7 mm, respectively, while the other DLGs differed by less than 3%. The gamma passing rates of the mock head/neck case were below 81% for the PTV and cord, and those of the multi-target case were below 30% for the center and superior targets at DLGs of 0.5 and 0.7 mm; however, the inferior target of the multi-target case and the parotid of the mock head/neck case had 100.0% passing rates at all DLGs. The point dose of the mock prostate case differed by less than 3.0% at all DLGs, but the PTV passing rate was below 95% at the 0.5 and 0.7 mm DLGs and above 98% at the other DLGs. The rectum and bladder had 100.0% passing rates at all DLGs. 
The point-dose differences of the C-shape case were 3~9% except at the 1.3 mm DLG, and the PTV passing rates at 1.0 and 1.3 mm were 96.7% and 93.0%, respectively; the passing rates at the other DLGs were below 86%, while the core had a 100.0% passing rate at all DLGs. In this study, we verified that the accuracy of a treatment-planning QA system can be affected by the DLG value. For precise quality assurance of treatment techniques that use MLC motion, such as IMRT and VMAT, an appropriate DLG value should be used in the linear accelerator and the RTP system.
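The 3%/3 mm gamma criterion used above combines a dose-difference term and a distance-to-agreement term into a single pass/fail index per point. A minimal 1-D sketch of the idea (illustrative only; clinical QA systems such as MobiusFx evaluate full 3-D dose grids):

```python
# Minimal 1-D gamma-index sketch with a 3% / 3 mm criterion,
# using global normalization to the reference maximum dose.
import math

def gamma_1d(ref, eval_, spacing_mm=1.0, dd=0.03, dta_mm=3.0):
    """Per-point gamma for two equally spaced 1-D dose profiles."""
    d_norm = max(ref)
    gammas = []
    for i, de in enumerate(eval_):
        best = float("inf")
        for j, dr in enumerate(ref):
            dist = abs(i - j) * spacing_mm            # spatial term (mm)
            ddiff = (de - dr) / (dd * d_norm)          # dose term (fraction of 3%)
            best = min(best, math.hypot(dist / dta_mm, ddiff))
        gammas.append(best)
    return gammas

ref = [0.0, 0.5, 1.0, 1.0, 0.5, 0.0]
ev  = [0.0, 0.52, 1.02, 0.99, 0.5, 0.0]   # small perturbation of the profile
g = gamma_1d(ref, ev)
passing = sum(x <= 1.0 for x in g) / len(g) * 100
print(f"gamma passing rate: {passing:.1f}%")
```

A point passes when its gamma value is at most 1; the passing rate is the fraction of such points, which is the quantity the abstract reports per structure and DLG.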

The Evaluation of SUV Variations According to the Errors of Entering Parameters in the PET-CT Examinations (PET/CT 검사에서 매개변수 입력오류에 따른 표준섭취계수 평가)

  • Kim, Jia;Hong, Gun Chul;Lee, Hyeok;Choi, Seong Wook
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.18 no.1
    • /
    • pp.43-48
    • /
    • 2014
  • Purpose: In PET/CT images, the standardized uptake value (SUV) enables quantitative assessment of biological changes in organs and serves as an index for distinguishing malignant from benign lesions. It is therefore very important to enter the parameters that affect the SUV correctly. The purpose of this study is to evaluate an allowable error range for the SUV by measuring the differences in results caused by input errors in the activity, weight, and uptake-time parameters. Materials and Methods: Three inserts (Hot, Teflon, and Air) were placed in the 1994 NEMA phantom. The phantom was filled with 27.3 MBq/mL of 18F-FDG, and the activity ratio of the hotspot area to the background area was set to 4:1. After scanning, images were re-reconstructed after introducing input errors of ${\pm}5%$, 10%, 15%, 30%, and 50% from the original data in the activity, weight, and uptake-time parameters. ROIs (regions of interest) were set, one in each insert area and four in the background areas. $SUV_{mean}$ and percentage differences were calculated and compared for each area. Results: The $SUV_{mean}$ of the Hot, Teflon, Air, and background (BKG) areas in the original images were 4.5, 0.02, 0.1, and 1.0. With activity errors, the minimum and maximum $SUV_{mean}$ were 3.0 and 9.0 in the Hot, 0.01 and 0.04 in the Teflon, 0.1 and 0.3 in the Air, and 0.6 and 2.0 in the BKG areas, with percentage differences uniformly from -33% to 100%. With weight errors, the $SUV_{mean}$ ranged from 2.2 to 6.7 in the Hot, 0.01 to 0.03 in the Teflon, 0.09 to 0.28 in the Air, and 0.5 to 1.5 in the BKG areas, with percentage differences uniformly from -50% to 50%, except for the Teflon area, where they ranged from -50% to 52%. With uptake-time errors, the $SUV_{mean}$ ranged from 3.8 to 5.3 in the Hot, 0.01 to 0.02 in the Teflon, 0.1 to 0.2 in the Air, and 0.8 to 1.2 in the BKG areas, with percentage differences from 17% to -14% in the Hot and BKG areas. 
The Teflon area's percentage difference ranged from -50% to 52% and the Air area's from -12% to 20%. Conclusion: As the results show, if the allowable SUV error is to stay within 5%, the activity and weight errors must be kept within ${\pm}5%$. The dose calibrator and the scale therefore have to be calibrated to within a ${\pm}5%$ error range, because they directly affect the activity and weight values. The time error showed distinct error ranges depending on the insert type: the Hot and BKG areas stayed within a 5% SUV error when the time error was within ${\pm}15%$. Accordingly, each clock's time error must be considered when more than two clocks, including the scanner's, are used during examinations.
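The asymmetric error ranges above follow directly from how the parameters enter the SUV formula. A hedged sketch using the standard body-weight SUV definition (assumed here; the scanner vendor's exact implementation may differ, and the numbers below are hypothetical inputs, not the phantom data):

```python
# How input errors propagate to SUV_bw = concentration / (decayed dose / weight).
import math

F18_HALF_LIFE_MIN = 109.77  # physical half-life of 18F

def suv(conc_bqml, injected_bq, weight_g, uptake_min):
    """SUV (body weight), with the injected dose decayed to scan time."""
    decayed = injected_bq * math.exp(-math.log(2) * uptake_min / F18_HALF_LIFE_MIN)
    return conc_bqml / (decayed / weight_g)

base = suv(5000, 3.7e8, 70000, 60)
for err in (-0.05, +0.05):
    s_act  = suv(5000, 3.7e8 * (1 + err), 70000, 60)   # activity error
    s_wt   = suv(5000, 3.7e8, 70000 * (1 + err), 60)   # weight error
    s_time = suv(5000, 3.7e8, 70000, 60 * (1 + err))   # uptake-time error
    print(f"{err:+.0%}: activity {100 * (s_act / base - 1):+.1f}%, "
          f"weight {100 * (s_wt / base - 1):+.1f}%, "
          f"time {100 * (s_time / base - 1):+.1f}%")
```

SUV scales linearly with weight but inversely with activity (so a -50% activity error yields a +100% SUV error, matching the abstract's asymmetric range), while a time error acts only through the exponential decay correction, which is why its effect is much smaller.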


Dosimetric evaluation of using in-house BoS Frame Fixation Tool for the Head and Neck Cancer Patient (두경부암 환자의 양성자 치료 시 사용하는 자체 제작한 BoS Frame 고정장치의 선량학적 유용성 평가)

  • Kim, Kwang Suk;Jo, Kwang Hyun;Choi, Byeon Ki
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.28 no.1
    • /
    • pp.35-46
    • /
    • 2016
  • Purpose : The BoS (Base of Skull) frame, a fixation tool used in proton therapy for brain cancer, increases the lateral penumbra because the airgap (the distance between the patient and the snout) must be enlarged to avoid collisions with posterior oblique beams. We therefore manufactured an in-house fixation tool to overcome this limitation of the BoS frame, and in this study we evaluate the utility of the manufactured tool. Materials and Methods : We selected three brain-cancer patients who had received proton therapy at our hospital and six beam angles, all in posterior oblique directions. For each beam planned with the BoS frame, we measured the planned snout distance. We then repeated the set-up at the same beam angles for the same patients using our in-house BoS-frame fixation tool, positioned above the location recommended by the BoS frame's manufacturer: 21 cm toward the superior direction compared with the BoS frame used on the basic couch alone. We then brought the snout as close to the BoS frame as possible and measured the snout distance, from which the airgap was derived. After normalizing each plan to its prescribed dose (100%), we compared and analyzed the lateral penumbra. We also re-planned each case with the changed airgap in the RayStation 5.0 proton treatment-planning system and compared the DVHs (dose-volume histograms). 
Results : Compared with set-ups that did not use the in-house BoS-frame fixation tool, the airgap was reduced by 5.4 cm to 15.4 cm depending on the beam angle (the reduction in snout distance corresponds to the reduction in airgap). With the reduced airgap, the lateral penumbra decreased by 0.1 cm to 0.4 cm on each side, depending on the beam angle. Owing to the reduced lateral penumbra, the doses to the left and right eyeballs, lenses, hippocampi, and cochleae and to the brain stem decreased by 0 CGE to 4.4 CGE. Conclusion : Using our in-house BoS-frame fixation tool for proton therapy reduced the airgap and, consequently, the lateral penumbra. The treatment-plan comparison also confirmed that reducing the lateral penumbra reduces unnecessary irradiation of normal tissue. Proton therapy of brain cancer with posterior oblique beams should therefore be preceded by reducing the airgap, for example with our in-house fixation tool, and continued efforts to minimize the airgap will also be needed for proton therapy of other sites.


A Study on the Various Attributes of E-Sport Influencing Flow and Identification (e-스포츠의 다양한 속성이 유동(flow)과 동일시에 미치는 영향에 관한 연구)

  • Suh, Mun-Shik;Ahn, Jin-Woo;Kim, Eun-Young;Um, Seong-Won
    • Journal of Global Scholars of Marketing Science
    • /
    • v.18 no.1
    • /
    • pp.59-80
    • /
    • 2008
  • Recently, e-sports have been growing into a promising new industry with a conspicuous profit model, but studies dealing with e-sports are still scarce. The purposes of this paper are therefore to establish a basic model for designing e-sports marketing strategy and to contribute to future e-sports research. Recent research has explained sports sponsorship through identification theory: many studies report that an appropriate level of identification is a prerequisite for sponsors to improve their images, which is the essence of sponsorship activity. Consequently, the core of this study is sponsorship associated with identification in e-sports rather than in physical sports. We extracted our variables from the major characteristics of online media and from existing sport-sponsorship research. First, because e-sports are tournaments or leagues played through online games, the underlying event is the online game itself. The attributes of online media are distinct from those of offline media; in particular, interactivity, anonymity, and expandability can be identified as e-sports game attributes, so these inherent online attributes are examined in relation to flow. Second, for physical sports, Fisher (1998) showed that team similarity and team attractivity were positively related to team identification, and Wann (1996) found that the result of a previous game influences the evaluation of the next game, which in turn affects supporters' identification with the team. Applied to e-sports, a gamer's attractivity, similarity, and match results appear to be important antecedents of identification with the gamer, so these gamer attributes are examined in relation to both flow and identification with a gamer. 
Csikszentmihalyi (1988) defined flow as the state in which a person optimally experiences the current moment as positive, and Hoffman and Novak (1996) noted that a user who experiences flow will revisit a website without any external reward; flow should therefore be positively associated with the user's identification with a gamer. Swanson (2003) further showed that team identification leads to positive sponsorship outcomes, including attitude toward sponsors, sponsor patronage, and satisfaction with sponsors; identification with a gamer is thus expected to be significantly connected with identification with the sponsor. On this basis we designed the research model. All variables used in this study (interactivity, anonymity, expandability, attractivity, similarity, match result, flow, identification with a gamer, and identification with a sponsor) were operationally defined from prior research. The sample was collected in June 2006 from people with experience of enjoying e-sports; most respondents are men, because far more men than women play e-sports. A two-step approach was used to test the hypotheses. First, confirmatory factor analysis was conducted to ensure the validity and reliability of the variables; the results showed that all variables had convergent and discriminant validity as well as reliability. The research model was then examined as a full structural equation model using LISREL 8.3, and the fit of the suggested model was mostly at an acceptable level. In brief, among the e-sports game attributes, only interactivity, a basic feature of the online setting, affected flow positively. Among the gamer attributes, similarity with a gamer and match result influenced flow positively, but the attractivity of a gamer had no significant effect on flow. 
As expected, similarity had a significant effect on identification with a gamer, but unexpectedly attractivity and match result did not. Consistent with many prior studies, flow strongly influenced identification with a gamer, and identification with a gamer in turn significantly influenced identification with the sponsor. These results carry several implications. If an e-sports sponsor supports a pro-gamer who has clearly superior ability and is similar to the users enjoying e-sports, many amateur gamers will experience flow and identification with the pro-gamer and, ultimately, identification with the sponsor. Such identification leads e-sports audiences to intend to purchase the sponsor's products and to spread positive word of mouth about those products and the sponsor. For future studies, we offer a few suggestions. New e-sports-related variables not covered in this study should be identified, for which qualitative research on the inherent attributes of e-sports seems necessary. Finally, to generalize the findings on e-sports, a wide range of generations rather than a single generation should be studied.


A Study on the Determinants of Patent Citation Relationships among Companies : MR-QAP Analysis (기업 간 특허인용 관계 결정요인에 관한 연구 : MR-QAP분석)

  • Park, Jun Hyung;Kwahk, Kee-Young;Han, Heejun;Kim, Yunjeong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.21-37
    • /
    • 2013
  • Recently, with the advent of the knowledge-based society, more people have become interested in intellectual property, and the ICT companies leading the high-tech industry in particular are striving for its systematic management. Patent information represents a company's intellectual capital, and quantitative analysis of the continuously accumulated patent information has now become possible at various levels, from the individual patent to the enterprise, industry, and country level. Through patent information we can identify the status of a technology and analyze its impact on performance, and through network analysis we can trace the flow of knowledge, which allows us not only to identify technological change but also to predict the direction of future research. In the network-analysis field, two important analyses utilize patent citation information: citation-indicator analysis based on citation frequency, and network analysis based on citation relationships. Building on these, this study analyzes whether company size affects patent citation relationships. Seventy-four S&P 500 companies that provide IT and communication services were selected. To determine the citation relationships between companies, patent citations in 2009 and 2010 were collected, and sociomatrices representing the inter-company citation relationships were created. In addition, each company's total assets were collected as an index of company size: the distance between two companies is defined as the absolute value of the difference between their total assets, and the signed difference is taken to describe the companies' hierarchy. 
QAP correlation analysis and MR-QAP analysis were carried out using the distance and hierarchy between companies together with the sociomatrices of the 2009 and 2010 patent citations. The QAP correlation results show that the 2009 and 2010 inter-company citation networks are the most highly correlated. A positive correlation is also found between the citation relationships and the distance between companies, because citation increases when there is a size difference between companies. A negative correlation is found between the citation relationships and the hierarchy between companies, indicating that the patents of higher-tier companies are relatively highly regarded by, and influence, the lower-tier companies. The MR-QAP analysis was carried out as follows: the sociomatrix generated from the 2010 citation relationships was used as the dependent variable, and the 2009 citation network and the distance and hierarchy networks between the companies were used as independent variables, in order to find the main factors influencing the 2010 inter-company citation relationships. The results show that all independent variables positively influenced the 2010 citation relationships. In particular, the 2009 citation relationships had the most significant impact on those of 2010, which means that patent citation relationships are persistent over time. Together, the QAP correlation and MR-QAP results show that inter-company patent citation relationships are affected by company size, 
but that the most significant factor is the citation relationships formed in the past. Maintaining citation relationships between companies therefore matters strategically: examining these relationships helps companies share intellectual property with each other and serves as an important aid in identifying partner companies to cooperate with.
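QAP correlation tests the association between two sociomatrices while respecting their row/column (node) structure: the observed correlation over off-diagonal cells is compared against correlations obtained after randomly permuting the nodes of one matrix. A simplified sketch with toy matrices (hypothetical data; published MR-QAP work typically uses UCINET-style Double Semi-Partialling rather than this plain permutation test):

```python
# Simple QAP correlation: Pearson r over off-diagonal cells, with a
# node-permutation test for significance.
import math
import random

def offdiag(m):
    n = len(m)
    return [m[i][j] for i in range(n) for j in range(n) if i != j]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def qap_corr(a, b, n_perm=500, seed=0):
    """Observed correlation and permutation p-value for two sociomatrices."""
    rng = random.Random(seed)
    obs = pearson(offdiag(a), offdiag(b))
    n, hits = len(a), 0
    for _ in range(n_perm):
        p = list(range(n))
        rng.shuffle(p)          # permute rows and columns together
        perm = [[a[p[i]][p[j]] for j in range(n)] for i in range(n)]
        if abs(pearson(offdiag(perm), offdiag(b))) >= abs(obs):
            hits += 1
    return obs, hits / n_perm

# Toy "2009" and "2010" citation matrices (hypothetical):
a = [[0, 1, 1, 0], [1, 0, 0, 1], [0, 1, 0, 1], [1, 0, 1, 0]]
b = [[0, 1, 1, 0], [1, 0, 0, 1], [0, 1, 0, 0], [1, 0, 1, 0]]
r, p = qap_corr(a, b)
print(f"QAP r={r:.3f}, p={p:.3f}")
```

Permuting rows and columns with the same permutation preserves each node's tie pattern while breaking the alignment with the second matrix, which is what makes the test appropriate for dyadic (non-independent) network data.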

Evaluation of indirect N2O Emission from Nitrogen Leaching in the Ground-water in Korea (우리나라 농경지에서 질소의 수계유출에 의한 아산화질소 간접배출량 평가)

  • Kim, Gun-Yeob;Jeong, Hyun-Cheol;Kim, Min-Kyeong;Roh, Kee-An;Lee, Deog-Bae;Kang, Kee-Kyung
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.44 no.6
    • /
    • pp.1232-1238
    • /
    • 2011
  • This experiment was conducted to measure the concentration of dissolved $N_2O$ in the groundwater of 59 wells in agricultural areas of Gyeongnam province from 2007 to 2010 and to derive an emission factor for assessing indirect $N_2O$ emissions from the agricultural sector. Dissolved $N_2O$ concentrations in the 59 wells ranged from trace levels to $196.6{\mu}g-N\;L^{-1}$. $N_2O$ concentrations were positively related to $NO_3$-N, suggesting that denitrification was the principal source of $N_2O$ production and that the $NO_3$-N concentration is the best predictor of indirect $N_2O$ emission. The ratio of dissolved $N_2O$-N to $NO_3$-N in groundwater is central to deriving the emission factor: the mean ratio was 0.0035, far lower than 0.015, the default value currently used in the Intergovernmental Panel on Climate Change (IPCC) methodology for assessing indirect $N_2O$ emissions in agro-ecosystems (IPCC, 1996). This means that the IPCC's present indirect emission factor ($EF_{5-g}$, 0.015), and the indirect $N_2O$ emissions estimated with it, are too high to adopt in Korea. We therefore recommend 0.0034 as the country-specific emission factor ($EF_{5-g}$) for assessing indirect $N_2O$ emissions from the agricultural sector. Using 0.0034 as the emission factor, the estimated indirect $N_2O$ emission from the agricultural sector in Korea in 2008 decreases from 1,801,576 ton ($CO_2$-eq) to 964,645 ton ($CO_2$-eq).
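The emission-factor logic above can be sketched in two steps: the country-specific factor is the mean measured $N_2O$-N/$NO_3$-N ratio, and the groundwater-leaching component of the indirect emission scales linearly with that factor. The numbers below are hypothetical, not the study's well data:

```python
# Hedged illustration of the EF5-g logic (IPCC-style); inputs are made up.

def ef5g(ratios):
    """Country-specific EF5-g as the mean N2O-N / NO3-N ratio across wells."""
    return sum(ratios) / len(ratios)

def rescale_emission(emission_with_default, measured_ef, default_ef=0.015):
    """Rescale the leaching-pathway emission when the factor is replaced."""
    return emission_with_default * (measured_ef / default_ef)

print(f"{rescale_emission(100000, 0.0034):.0f}")   # ~23% of the default estimate
```

Note that the abstract's national totals do not shrink by this full ratio, which appears consistent with replacing only the groundwater component within the IPCC total indirect factor (with EF5-g = 0.0075 and 0.0025 for the other pathways, (0.0034 + 0.0075 + 0.0025) / 0.025 ≈ 0.536 ≈ 964,645 / 1,801,576); this decomposition is an inference, not stated in the abstract.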

A Study on the Effect of Network Centralities on Recommendation Performance (네트워크 중심성 척도가 추천 성능에 미치는 영향에 대한 연구)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.23-46
    • /
    • 2021
  • Collaborative filtering, often used for personalized recommendations, is recognized as a very useful technique for finding similar customers and recommending products to them based on their purchase histories. However, because the traditional technique calculates similarity from direct connections and common features among customers, it has difficulty computing similarity for new customers or products; hybrid techniques that add content-based filtering were designed for this reason. In parallel, efforts have been made to solve these problems by applying the structural characteristics of social networks, calculating similarity indirectly through the similar customers placed between two customers: a customer network is created from purchase data, and the similarity between two customers is calculated from the features of the network that indirectly connects them. Such similarity can be used as a measure for predicting whether a target customer will accept a recommendation, and the centrality metrics of the network can be utilized to calculate it. Different centrality metrics are important in that they may affect recommendation performance differently; furthermore, in this study, the effect of a centrality metric on recommendation performance may also vary with the recommender algorithm. In addition, recommendation techniques using network analysis can be expected to increase performance not only for new customers or products but across entire customer and product sets. By treating a customer's purchase of an item as a link between the customer and the item on the network, predicting user acceptance of a recommendation reduces to predicting whether a new link will be created between them. 
Because classification models suit this binary link/no-link problem, decision tree, k-nearest neighbors (KNN), logistic regression, artificial neural network, and support vector machine (SVM) models were selected for the research. The data for the performance evaluation were order records collected from an online shopping mall over four years and two months: the first three years and eight months of records were organized into the social network, and the remaining four months of records were used to train and evaluate the recommender models. Experiments applying the centrality metrics to each model show that the recommendation acceptance rates of the metrics differ meaningfully across algorithms. This work analyzed four commonly used centrality metrics: degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. Eigenvector centrality recorded the lowest performance in all models except the support vector machine. Closeness centrality and betweenness centrality showed similar performance across all models. Degree centrality ranked in the middle across models, while betweenness centrality always ranked higher than degree centrality. Finally, closeness centrality showed distinct model-dependent differences: it ranked first, with numerically high performance, for logistic regression, the artificial neural network, and the decision tree, but recorded very low rankings, with low performance, for the support vector machine and KNN. As the experimental results reveal, in a classification model, centrality metrics over the subnetwork connecting two nodes can effectively predict their connectivity in a social network, and each metric performs differently depending on the type of classification model. 
This result implies that choosing appropriate metrics for each algorithm can lead to higher recommendation performance. In general, betweenness centrality can guarantee a high level of performance in any model, while introducing closeness centrality can be considered to obtain higher performance for certain models.
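The core idea above is to turn endpoint centralities into features for a link-prediction classifier. A minimal pure-Python sketch of the feature-computation step on a tiny, hypothetical customer-item graph (the study itself trained five classifiers on real order data; the scoring function here is only a toy stand-in for such a model):

```python
# Degree and closeness centrality as link-prediction features.
from collections import deque

def degree_centrality(adj, v):
    """Fraction of other nodes directly connected to v."""
    return len(adj[v]) / (len(adj) - 1)

def closeness_centrality(adj, v):
    """(n-1) / sum of BFS distances from v to the nodes it can reach."""
    dist = {v: 0}
    q = deque([v])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    total = sum(d for node, d in dist.items() if node != v)
    return (len(dist) - 1) / total if total else 0.0

def link_score(adj, u, v):
    """Toy link score: mean of the endpoint centrality features."""
    feats = [degree_centrality(adj, u), degree_centrality(adj, v),
             closeness_centrality(adj, u), closeness_centrality(adj, v)]
    return sum(feats) / len(feats)

adj = {  # hypothetical customer (c*) / item (i*) purchase graph
    "c1": {"i1", "i2"}, "c2": {"i1"}, "c3": {"i2", "i3"},
    "i1": {"c1", "c2"}, "i2": {"c1", "c3"}, "i3": {"c3"},
}
print(f"score(c2, i2) = {link_score(adj, 'c2', 'i2'):.3f}")
```

In the study these centrality features (including betweenness and eigenvector centrality, omitted here for brevity) feed a trained classifier rather than a fixed average, which is exactly where the model-dependent differences in metric performance arise.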

Evaluation of Tangential Fields Technique Using TOMO Direct Radiation Therapy after Breast Partial Mastectomy (유방 부분 절제술 후 방사선 치료 시 TOMO Direct를 이용한 접선 조사의 선량적 유용성에 관한 고찰)

  • Kim, Mi-Jung;Kim, Joo-Ho;Kim, Hun-Kyum;Cho, Kang-Chul;Chun, Byeong-Chul;Cho, Jeong-Hee
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.23 no.1
    • /
    • pp.59-66
    • /
    • 2011
  • Purpose: To investigate the clinical utility of the tangential-fields technique using TOMO Direct in comparison with conventional LINAC-based radiation therapy after breast partial mastectomy. Materials and Methods: Treatment plans were created for three left-sided breast-cancer patients who received radiation therapy after breast partial mastectomy, using the wedged tangential-fields technique, the field-in-field (FIF) technique, TOMO Direct, and TOMO Direct intensity-modulated radiation therapy (IMRT), under a normalized prescription condition ($D_{90%}$: 50.4 Gy/28 fx within the CTV). Dose-volume histograms (DVH) and isodose curves were used to evaluate the dose to the clinical target volume (CTV) and organs at risk (OAR), and the dosimetric parameters of both were compared and analyzed. The CTV parameters were $D_{99}$, $D_{95}$, the dose homogeneity index (DHI: $D_{10}/D_{90}$), $V_{105}$, and $V_{110}$; the OAR parameters were $V_{10}$, $V_{20}$, $V_{30}$, and $V_{40}$ of the heart and $V_{10}$, $V_{20}$, and $V_{30}$ of the left lung. Results: For the CTV, the average $D_{99}$ and $D_{95}$ were $47.7{\pm}1.1Gy$ and $49.4{\pm}0.1Gy$ for the wedged tangential-fields technique (W), $47.1{\pm}0.6Gy$ and $49.2{\pm}0.4Gy$ for FIF (F), and $49.2{\pm}0.4Gy$ and $49.9{\pm}0.4Gy$ for TOMO Direct (D) versus $48.6{\pm}0.8Gy$ and $49.5{\pm}0.3Gy$ for TOMO Direct IMRT (I). The average dose homogeneity index was W: $1.1{\pm}0.02$, F: $1.07{\pm}0.02$, D: $1.03{\pm}0.001$, I: $1.05{\pm}0.02$. Comparing the average $V_{105}$ and $V_{110}$ across techniques, the wedged tangential-fields technique was highest at $34.6{\pm}9.3%$ and $7.5{\pm}7.9%$; the values dropped to $16.5{\pm}14.8%$ and $2.1{\pm}3.5%$ for FIF and $7.5{\pm}8.3%$ and $0.1{\pm}0.1%$ for TOMO Direct IMRT, while TOMO Direct showed the lowest values, 0% for both. The OAR dosimetric results showed no significant difference among the techniques. 
Conclusion: TOMO Direct provides better target dose homogeneity than the wedged tangential-field technique; unlike IMRT or helical TOMO IMRT, it does not increase the volume of normal tissue receiving low doses; and its planning procedure is simpler than FIF. TOMO Direct is a clinically useful technique for breast-cancer patients after partial mastectomy.
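The homogeneity index compared above is DHI = $D_{10}/D_{90}$, where $D_x$ is the minimum dose received by the hottest x% of the target volume. A short sketch computing it from a flat list of voxel doses (the voxel values are hypothetical, and the percentile convention is an assumption; planning systems compute this from the full DVH):

```python
# DHI = D10 / D90 from a list of per-voxel doses (Gy).

def dose_at_volume(doses, volume_pct):
    """D_x: dose such that x% of the volume receives at least that dose."""
    ranked = sorted(doses, reverse=True)
    idx = max(0, round(len(ranked) * volume_pct / 100) - 1)
    return ranked[idx]

def dhi(doses):
    return dose_at_volume(doses, 10) / dose_at_volume(doses, 90)

doses = [48.0, 49.5, 50.0, 50.4, 50.8, 51.0, 51.5, 52.0, 52.5, 53.0]
print(f"D10={dose_at_volume(doses, 10):.1f} Gy, "
      f"D90={dose_at_volume(doses, 90):.1f} Gy, DHI={dhi(doses):.3f}")
```

A perfectly uniform dose gives DHI = 1.0, which is why the TOMO Direct value of $1.03{\pm}0.001$ above indicates the most homogeneous target coverage of the four techniques.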


Genetic Diversity of Korean Native Chicken Populations in DAD-IS Database Using 25 Microsatellite Markers (초위성체 마커를 활용한 가축다양성정보시스템(DAD-IS) 등재 재래닭 집단의 유전적 다양성 분석)

  • Roh, Hee-Jong;Kim, Kwan-Woo;Lee, Jinwook;Jeon, Dayeon;Kim, Seung-Chang;Ko, Yeoung-Gyu;Mun, Seong-Sil;Lee, Hyun-Jung;Lee, Jun-Heon;Oh, Dong-Yep;Byeon, Jae-Hyun;Cho, Chang-Yeon
    • Korean Journal of Poultry Science
    • /
    • v.46 no.2
    • /
    • pp.65-75
    • /
    • 2019
  • A number of Korean native chicken (KNC) populations are registered in the FAO (Food and Agriculture Organization) DAD-IS (Domestic Animal Diversity Information System, http://www.fao.org/dad-is), but a scientific basis proving that they are unique Korean populations has been lacking. This study was therefore conducted to demonstrate the KNC populations' uniqueness using 25 microsatellite markers. A total of 548 chickens from 11 KNC populations (KNG, KNB, KNR, KNW, KNY, KNO, HIC, HYD, HBC, JJC, LTC) and 7 introduced populations (ARA: Araucana; RRC and RRD: Rhode Island Red C and D; LGF and LGK: White Leghorn F and K; COS and COH: Cornish brown and Cornish black) were used. Allele sizes per locus were determined using GeneMapper Software (v 5.0). A total of 195 alleles were observed, ranging from 3 to 14 per locus. The MNA, $H_{\exp}$, $H_{obs}$, and PIC values within populations were highest in KNY (4.60, 0.627, 0.648, and 0.563, respectively) and lowest in HYD (1.84, 0.297, 0.286, and 0.236, respectively). Genetic-uniformity analysis suggested 15 clusters (${\Delta}K=66.22$). Except for JJC, the populations were each grouped into a particular cluster with high genetic uniformity; JJC was not grouped into a single cluster but was split among cluster 2 (44.3%), cluster 3 (17.7%), and cluster 8 (19.1%). The results of this study secure a scientific basis for the KNC populations' uniqueness and can be used as baseline data for the genetic evaluation and management of KNC breeds.
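The per-locus diversity statistics reported above follow standard formulas from allele frequencies: expected heterozygosity $H_{exp} = 1 - \sum p_i^2$ and the polymorphism information content of Botstein et al. A short sketch (the frequencies below are hypothetical, not the study's genotype data):

```python
# Expected heterozygosity and PIC from allele frequencies at one locus.

def expected_het(freqs):
    """H_exp = 1 - sum(p_i^2)."""
    return 1 - sum(p * p for p in freqs)

def pic(freqs):
    """PIC = 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2."""
    het = expected_het(freqs)
    n = len(freqs)
    pairs = sum(2 * freqs[i] ** 2 * freqs[j] ** 2
                for i in range(n) for j in range(i + 1, n))
    return het - pairs

freqs = [0.5, 0.3, 0.2]   # hypothetical allele frequencies at one locus
print(f"H_exp={expected_het(freqs):.3f}, PIC={pic(freqs):.3f}")
```

PIC is always slightly below $H_{exp}$ at the same locus, which matches the ordering of the KNY and HYD values in the abstract; averaging these per-locus values across the 25 markers gives the per-population statistics reported.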