• Title/Summary/Keyword: Application accuracy

3,295 search results

New Insights on Mobile Location-based Services(LBS): Leading Factors to the Use of Services and Privacy Paradox (모바일 위치기반서비스(LBS) 관련한 새로운 견해: 서비스사용으로 이끄는 요인들과 사생활염려의 모순)

  • Cheon, Eunyoung;Park, Yong-Tae
    • Journal of Intelligence and Information Systems / v.23 no.4 / pp.33-56 / 2017
  • As Internet usage becomes more common worldwide and smartphones become a necessity of daily life, technologies and applications related to the mobile Internet are developing rapidly. Consumers' Internet usage patterns around the world imply many potential new business opportunities for mobile Internet technologies and applications. A location-based service (LBS) is a service based on the location information of a mobile device. LBS has recently received much attention among mobile applications, and various LBSs are rapidly developing in numerous categories. Even with this development of LBS-related technologies and services, however, there is still a lack of empirical research on the intention to use LBS. The applicability of previous research is limited because it focused on the effect of one particular factor at a time and did not show a direct relationship with the intention to use LBS. Therefore, this study presents a research model of factors that affect the intention to use and actual use of LBS, whose market is expected to grow rapidly, and tests it through a questionnaire survey of 330 users. The results of the data analysis showed that service customization, service quality, and personal innovativeness have a positive effect on the intention to use LBS, and that the intention to use LBS has a positive effect on actual use. These results imply that LBS providers can enhance users' intention to use LBS by offering customization through the provision of various LBSs based on users' needs, by improving information service qualities such as accuracy, timeliness, sensitivity, and reliability, and by encouraging personal innovativeness. However, privacy concerns in the context of LBS are not significantly affected by service customization or personal innovativeness, and privacy concerns do not significantly affect the intention to use LBS.
In fact, the location information collected by LBS is less sensitive than the information used to perform financial transactions, which explains these outcomes on privacy concern. In addition, the advantages of using LBS matter more, relative to the sensitivity of privacy protection, to LBS users than to users of information systems that involve financial transactions, such as electronic commerce. Therefore, LBS should be treated differently from other information systems. This study makes a theoretical contribution in that it proposed factors affecting the intention to use LBS from a multi-faceted perspective, empirically validated the proposed research model, brought new insights on LBS, and broadened understanding of the intention to use and actual use of LBS. The empirical finding that customization affects users' intention to use LBS also suggests that providing customized LBS based on usage-data analysis, for example through technologies such as artificial intelligence, can enhance the intention to use. From a practical point of view, the results are expected to help LBS providers develop competitive strategies for responding to LBS users effectively and to lead to growth of the LBS market. We expect differences in LBS use depending on factors such as the type of LBS, whether it is free of charge, privacy policies, the reliability of the application and technology, and the frequency of use. Comparative studies along those factors would therefore contribute to the development of LBS research. We hope this study inspires many researchers and initiates further research in the LBS field.
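The reported positive effects of customization, quality, and innovativeness on intention to use can be illustrated with a toy analysis. The sketch below is not the authors' actual analysis (which used a 330-user survey and a full research model); it generates synthetic factor scores under an assumed data-generating process and checks that each factor correlates positively with intention. All variable names and effect sizes are hypothetical.

```python
import math
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return num / den

random.seed(1)
n = 330                                               # sample size matching the survey
custom = [random.gauss(0, 1) for _ in range(n)]       # service customization (hypothetical)
quality = [random.gauss(0, 1) for _ in range(n)]      # service quality (hypothetical)
innov = [random.gauss(0, 1) for _ in range(n)]        # personal innovativeness (hypothetical)
# Assumed data-generating process: intention rises with all three factors.
intention = [0.4 * c + 0.3 * q + 0.2 * i + random.gauss(0, 0.5)
             for c, q, i in zip(custom, quality, innov)]

for name, xs in [("customization", custom), ("quality", quality), ("innovativeness", innov)]:
    print(name, round(pearson(xs, intention), 2))     # all positive by construction
```

This only mirrors the direction of the reported effects; the paper's actual model also includes privacy concern and actual-use paths.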

APPLICATION OF FUZZY SET THEORY IN SAFEGUARDS

  • Fattah, A.;Nishiwaki, Y.
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1993.06a / pp.1051-1054 / 1993
  • The International Atomic Energy Agency's Statute, in Article III.A.5, allows it “to establish and administer safeguards designed to ensure that special fissionable and other materials, services, equipment, facilities and information made available by the Agency or at its request or under its supervision or control are not used in such a way as to further any military purpose; and to apply safeguards, at the request of the parties, to any bilateral or multilateral arrangement, or at the request of a State, to any of that State's activities in the field of atomic energy”. Safeguards are essentially a technical means of verifying the fulfilment of political obligations undertaken by States and given legal force in international agreements relating to the peaceful uses of nuclear energy. The main political objectives are: to assure the international community that States are complying with their non-proliferation and other peaceful undertakings; and to deter (a) the diversion of safeguarded nuclear materials to the production of nuclear explosives or for military purposes and (b) the misuse of safeguarded facilities with the aim of producing unsafeguarded nuclear material. It is clear that no international safeguards system can physically prevent diversion. The IAEA safeguards system is basically a verification measure designed to provide assurance in those cases in which diversion has not occurred. Verification is accomplished by two basic means: material accountancy, and containment and surveillance measures. Nuclear material accountancy is the fundamental IAEA safeguards mechanism, while containment and surveillance serve as important complementary measures. Material accountancy refers to a collection of measurements and other determinations which enable the State and the Agency to maintain a current picture of the location and movement of nuclear material into and out of material balance areas, i.e. areas where all material entering or leaving is measurable.
A containment measure is one designed to take advantage of structural characteristics, such as containers, tanks or pipes, to establish the physical integrity of an area or item by preventing the undetected movement of nuclear material or equipment. Such measures involve the application of tamper-indicating or surveillance devices. Surveillance refers to both human and instrumental observation aimed at indicating the movement of nuclear material. The verification process consists of three overlapping elements: (a) provision by the State of information such as design information describing nuclear installations; accounting reports listing nuclear material inventories, receipts and shipments; documents amplifying and clarifying reports, as applicable; and notification of international transfers of nuclear material; (b) collection by the IAEA of information through inspection activities such as verification of design information, examination of records and reports, measurement of nuclear material, examination of containment and surveillance measures, and follow-up activities in case of unusual findings; and (c) evaluation of the information provided by the State and of that collected by inspectors, to determine the completeness, accuracy and validity of the information provided by the State and to resolve any anomalies and discrepancies. To design an effective verification system, one must identify possible ways and means by which nuclear material could be diverted from peaceful uses, including means to conceal such diversions. These theoretical ways and means, which have become known as diversion strategies, are used as one of the basic inputs for the development of safeguards procedures, equipment and instrumentation.
For analysis of implementation strategy purposes, it is assumed that non-compliance cannot be excluded a priori and that consequently there is a low but non-zero probability that a diversion could be attempted in all safeguards situations. An important element of diversion strategies is the identification of various possible diversion paths: the amount, type and location of nuclear material involved, the physical route and conversion of the material that may take place, the rate of removal, and concealment methods, as appropriate. With regard to the physical route and conversion of nuclear material, the following main categories may be considered: unreported removal of nuclear material from an installation or during transit; unreported introduction of nuclear material into an installation; unreported transfer of nuclear material from one material balance area to another; unreported production of nuclear material, e.g. enrichment of uranium or production of plutonium; and undeclared uses of the material within the installation. With respect to the amount of nuclear material that might be diverted in a given time (the diversion rate), the continuum between the following two limiting cases is considered: one significant quantity or more in a short time, often known as abrupt diversion; and one significant quantity or more per year, for example by accumulation of smaller amounts each time to add up to a significant quantity over a period of one year, often called protracted diversion. Concealment methods may include: restriction of inspectors' access; falsification of records, reports and other documentation; replacement of nuclear material, e.g. use of dummy objects; falsification of measurements or of their evaluation; and interference with IAEA-installed equipment. As a result of diversion and its concealment or other actions, anomalies will occur.
All reasonable diversion routes, scenarios/strategies and concealment methods have to be taken into account in designing safeguards implementation strategies, so as to provide sufficient opportunities for the IAEA to observe such anomalies. The safeguards approach for each facility will make different use of these procedures, equipment and instrumentation according to the various diversion strategies which could be applicable to that facility and according to the detection and inspection goals which are applied. Postulated pathway sets of scenarios comprise those elements of diversion strategies which might be carried out at a facility or across a State's fuel cycle with declared or undeclared activities. All such factors, however, contain a degree of fuzziness that needs human judgment to reach the ultimate conclusion that all material is being used for peaceful purposes. Safeguards have traditionally been based on verification of declared material and facilities, using material accountancy as a fundamental measure. The strength of material accountancy lies in the fact that it allows any diversion to be detected independently of the diversion route taken. Material accountancy detects a diversion only after it has actually happened; it is thus powerless to physically prevent diversion and can only deter, through the risk of early detection, any contemplation by State authorities of carrying out a diversion. Recently the IAEA has been faced with new challenges. To deal with these, various measures are being considered to strengthen the safeguards system, such as enhanced assessment of the completeness of the State's initial declaration of nuclear material and installations under its jurisdiction, and enhanced monitoring and analysis of open information that may indicate inconsistencies with the State's safeguards obligations. Precise information vital for such enhanced assessments and analyses is normally not available or, if available, would require difficult and expensive collection. Above all, a realistic appraisal of the truth needs sound human judgment.
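The abstract's central point, that safeguards factors carry a degree of fuzziness requiring judgment, can be made concrete with a minimal fuzzy-membership sketch. The linguistic grades, the MUF-fraction indicator, and all thresholds below are invented for illustration and are not from the paper.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 outside [a, d], 1 on [b, c], linear in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)    # rising edge
    return (d - x) / (d - c)        # falling edge

def grade_anomaly(muf_fraction):
    """Grade a material-balance anomaly indicator (e.g. MUF as a fraction of a
    significant quantity) against hypothetical linguistic categories."""
    return {
        "negligible": trapezoid(muf_fraction, -1.0, -0.5, 0.1, 0.3),
        "suspicious": trapezoid(muf_fraction, 0.1, 0.3, 0.6, 0.8),
        "serious":    trapezoid(muf_fraction, 0.6, 0.8, 2.0, 3.0),
    }

# A mid-range indicator belongs partly to two grades at once, which is exactly
# the kind of graded evidence that calls for human judgment.
print(grade_anomaly(0.2))
```

Overlapping memberships like these can then be combined with inspection findings before an inspector forms an overall conclusion.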


A Study on the Application of Outlier Analysis for Fraud Detection: Focused on Transactions of Auction Exception Agricultural Products (부정 탐지를 위한 이상치 분석 활용방안 연구 : 농수산 상장예외품목 거래를 대상으로)

  • Kim, Dongsung;Kim, Kitae;Kim, Jongwoo;Park, Steve
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.93-108 / 2014
  • To support business decision making, interest in and efforts to analyze and use transaction data from different perspectives are increasing. Such efforts are not limited to customer management or marketing; they are also used to monitor and detect fraudulent transactions. Fraudulent transactions are evolving into various patterns by taking advantage of information technology. To keep pace, there have been many efforts to improve fraud detection methods and advanced application systems in terms of accuracy and ease of detection. As a case of fraud detection, this study aims to provide effective detection methods for auction-exception agricultural products in the largest Korean agricultural wholesale market. The auction-exception products policy exists to complement auction-based trades in the agricultural wholesale market. That is, most trades of agricultural products are performed by auction; however, specific products are designated as auction-exception products when total product volumes are relatively small, the number of wholesalers is small, or wholesalers have difficulty purchasing the products. However, the auction-exception policy creates several problems for the fairness and transparency of transactions, which calls for fraud detection. In this study, to generate fraud detection rules, real large-scale trade transaction data from the market for 2008 to 2010 were analyzed, comprising more than 1 million transactions and more than 1 billion US dollars in transaction volume. Agricultural transaction data have unique characteristics, such as frequent changes in supply volumes and turbulent time-dependent changes in price. Since this was the first attempt to identify fraudulent transactions in this domain, there was no training data set for supervised learning, so fraud detection rules were generated using an outlier detection approach.
We assume that outlier transactions are more likely to be fraudulent than normal transactions. Outlier transactions are identified by comparing daily, weekly, and quarterly average unit prices of product items; quarterly average unit prices of product items for specific wholesalers are also used. The reliability of the generated fraud detection rules was confirmed by domain experts. To determine whether a transaction is fraudulent, the normal distribution and the normalized Z-value concept are applied. That is, the unit price of a transaction is transformed to a Z-value to calculate its occurrence probability when the distribution of unit prices is approximated by a normal distribution. A modified Z-value of the unit price is used rather than the original Z-value, because for auction-exception agricultural products the Z-values are influenced by the outlier fraud transactions themselves, as the number of wholesalers is small. The modified Z-values are called Self-Eliminated Z-scores because they are calculated excluding the unit price of the specific transaction being checked. To show the usefulness of the proposed approach, a prototype fraud transaction detection system was developed using Delphi. The system consists of five main menus and related submenus. The first functionality of the system is importing transaction databases. The next important functions set up fraud detection parameters; by changing these parameters, system users can control the number of potential fraud transactions. Execution functions provide the fraud detection results found under the chosen parameters; the potential fraud transactions can be viewed on screen or exported as files. This study is an initial attempt to identify fraudulent transactions in auction-exception agricultural products.
Many research topics on this issue remain. First, the scope of the analysis data was limited by data availability; it is necessary to include more data on transactions, wholesalers, and producers to detect fraudulent transactions more accurately. Next, the scope of fraud detection should be extended to fishery products. There are also many possibilities for applying other data mining techniques to fraud detection; for example, a time series approach is a potential technique for this problem. Finally, although outlier transactions are detected here based on unit prices, it is also possible to derive fraud detection rules based on transaction volumes.
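The Self-Eliminated Z-score described above can be sketched directly: the checked transaction's unit price is scored against the mean and standard deviation of all the other prices, so a large fraudulent price cannot mask itself by inflating the statistics it is tested against. The price values below are illustrative, not from the paper's data.

```python
import statistics

def self_eliminated_z(prices, i):
    """Z-score of prices[i] computed against the mean/stdev of all OTHER prices.

    This mirrors the abstract's Self-Eliminated Z-score: the checked
    transaction is excluded from the statistics, which matters when the
    number of wholesalers (and hence prices) is small.
    """
    others = prices[:i] + prices[i + 1:]
    mu = statistics.mean(others)
    sd = statistics.stdev(others)
    return (prices[i] - mu) / sd

prices = [100, 102, 98, 101, 99, 250]   # last unit price looks fraudulent
z = self_eliminated_z(prices, 5)
print(round(z, 1))                       # far above a typical |z| cutoff such as 3
```

Had the 250 price been included in its own mean and standard deviation, its ordinary Z-score would be much smaller, which is exactly the self-masking effect the modified score avoids.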

Target dose study of effects of changes in the AAA Calculation resolution on Lung SABR plan (Lung SABR plan시 AAA의 Calculation resolution 변화에 의한 Target dose 영향 연구)

  • Kim, Dae Il;Son, Sang Jun;Ahn, Bum Seok;Jung, Chi Hoon;Yoo, Suk Hyun
    • The Journal of Korean Society for Radiation Therapy / v.26 no.2 / pp.171-176 / 2014
  • Purpose: To vary the calculation grid of the AAA in Lung SABR plans, analyze the resulting changes in target dose, investigate the associated effects, and consider a suitable method of application. Materials and Methods: All 4D CT images used for planning were acquired with a Brilliance Big Bore CT (Philips, Netherlands). Lung SABR plans (Eclipse™ ver. 10.0.42, Varian, USA) were calculated with the anisotropic analytical algorithm (AAA, ver. 10, Varian Medical Systems, Palo Alto, CA, USA) using calculation grids of 1.0, 3.0, and 5.0 mm for each plan. Results: Across 10 Lung SABR cases calculated with 1.0, 3.0, and 5.0 mm grids: with the 1.0 mm grid, V98 of the prescribed dose was about 99.5% ± 1.5%, Dmin was about 92.5% ± 1.5% of the prescribed dose, and the homogeneity index (HI) was 1.0489 ± 0.0025. With the 3.0 mm grid, V98 was about 90% ± 4.5%, Dmin about 87.5% ± 3%, and HI about 1.07 ± 1. With the 5.0 mm grid, V98 was about 63% ± 15%, Dmin about 83% ± 4%, and HI about 1.13 ± 0.2. Conclusion: A 1.0 mm calculation grid improves the accuracy of dose calculation compared with 3.0 mm and 5.0 mm grids, although the calculation time increases, particularly for relatively small PTVs. For the lung, where dose spread is relatively large, density is low, and the PTV is small, a 1.0 mm calculation grid is considered preferable.
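The coverage metrics reported above can be computed from per-voxel PTV doses as in the following sketch. The abstract does not state its HI formula, so maximum dose divided by the prescribed dose is assumed here; the dose values are hypothetical.

```python
def plan_metrics(ptv_dose, prescribed):
    """Summarize PTV coverage from a list of per-voxel dose values (Gy).

    V98:  percentage of the PTV receiving at least 98% of the prescription.
    Dmin: minimum PTV dose as a percentage of the prescription.
    HI:   homogeneity index, here assumed to be Dmax / prescription
          (the abstract does not give its definition).
    """
    v98 = 100.0 * sum(d >= 0.98 * prescribed for d in ptv_dose) / len(ptv_dose)
    dmin = min(ptv_dose) / prescribed * 100.0
    hi = max(ptv_dose) / prescribed
    return v98, dmin, hi

# Illustrative PTV dose sample for a hypothetical 54 Gy prescription.
doses = [53.5, 54.2, 55.1, 56.0, 52.9, 54.8]
print(plan_metrics(doses, 54.0))
```

In the study, these metrics were read from plans recalculated at each grid size; coarser grids lowered V98 and Dmin and raised HI, exactly as such per-voxel summaries would show.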

Dose Evaluation of TPS according to Treatment Sites in IMRT (세기조절방사선치료 시 치료 부위에 따른 치료계획 시스템 간 선량평가)

  • Kim, Jin Man;Kim, Jong Sik;Hong, Chae Seon;Park, Ju Young;Park, Su Yeon;Ju, Sang Gyu
    • The Journal of Korean Society for Radiation Therapy / v.25 no.2 / pp.181-186 / 2013
  • Purpose: This study carried out treatment plans for prostate cancer (a homogeneous density region) and lung cancer (a non-homogeneous density region) using the radiation treatment planning systems Pinnacle³ (version 9.2, Philips Medical Systems, USA) and Eclipse (version 10.0, Varian Medical Systems, USA), in order to quantify the differences in dose calculation according to density in IMRT. Materials and Methods: The subjects were prostate cancer patients (n=5) and lung cancer patients (n=5) treated at our hospital. Identical constraints and optimization processes, following the protocol, were applied to all subjects. Prostate plans used 10 MV and 7 beams, with 2.5 Gy prescribed in 28 fractions for a total of 70 Gy; lung plans used 6 MV and 6 beams, with 2 Gy prescribed in 33 fractions for a total of 66 Gy. With the two treatment planning systems, the maximum, mean, and minimum doses of the CTV and PTV and of the organs at risk (OARs) around the tumor were examined. Results: For prostate cancer, both planning systems showed less than 2% difference in CTV and PTV doses, and the normal organs near the tumor (bladder, both femurs, and rectum) satisfied the dose constraints. For lung cancer, CTV and PTV likewise showed less than 2% differences, and the normal organs (esophagus, spinal cord, and both lungs) satisfied the dose restrictions. However, the minimum dose in the Eclipse plan was 1.9% higher for the CTV and 3.5% higher for the PTV, and for both lungs there was a 3.0% difference at V5 Gy. Conclusion: Each TPS satisfied our hospital's dose limits regardless of density, demonstrating clinical accuracy. More accurate and precise treatment planning should be possible with further studies on treatment planning for diverse sites and the corresponding application of each TPS.


Evaluation of Factors Used in AAPM TG-43 Formalism Using Segmented Sources Integration Method and Monte Carlo Simulation: Implementation of microSelectron HDR Ir-192 Source (미소선원 적분법과 몬테칼로 방법을 이용한 AAPM TG-43 선량계산 인자 평가: microSelectron HDR Ir-192 선원에 대한 적용)

  • Ahn, Woo-Sang;Jang, Won-Woo;Park, Sung-Ho;Jung, Sang-Hoon;Cho, Woon-Kap;Kim, Young-Seok;Ahn, Seung-Do
    • Progress in Medical Physics / v.22 no.4 / pp.190-197 / 2011
  • Currently, the dose distribution calculation used by commercial treatment planning systems (TPSs) for high-dose-rate (HDR) brachytherapy is derived from the point- and line-source approximation methods recommended by AAPM Task Group 43 (TG-43). However, Monte Carlo (MC) simulation studies are required to assess the accuracy of dose calculation around a three-dimensional Ir-192 source. In this study, the geometry factor was calculated using a segmented-sources integration method, dividing the microSelectron HDR Ir-192 source into smaller parts. The Monte Carlo code MCNPX 2.5.0 was used to calculate the dose rate Ḋ(r,θ) at a point (r,θ) away from an HDR Ir-192 source in a spherical water phantom of 30 cm diameter. Finally, the anisotropy function and radial dose function were calculated from the obtained results. The obtained geometry factor was compared with that calculated from the line-source approximation; similarly, the obtained anisotropy function and radial dose function were compared with those derived from the MCPT results by Williamson. The geometry factors calculated from the segmented-sources integration method and the line-source approximation agreed within 0.2% for r ≥ 0.5 cm and within 1.33% for r = 0.1 cm. The relative root mean square error (R-RMSE) between the anisotropy functions obtained in this study and by Williamson was 2.33% for r = 0.25 cm and within 1% for r > 0.5 cm. The R-RMSE of the radial dose function was 0.46% at radial distances from 0.1 to 14.0 cm. The geometry factors from the segmented-sources integration method and the line-source approximation were in good agreement for r ≥ 0.1 cm. Nevertheless, application of the segmented-sources integration method seems preferable, since this method, using the three-dimensional Ir-192 source, provides a more realistic geometry factor.
The anisotropy function and radial dose function estimated from MCNPX in this study and from MCPT by Williamson are in good agreement within the uncertainty of the Monte Carlo codes, except at the radial distance r = 0.25 cm. It is expected that the Monte Carlo code used in this study can be applied to other sources utilized for brachytherapy.
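The comparison between a segmented (3-D) geometry factor and the TG-43 line-source approximation can be sketched as below, evaluated on the transverse axis (θ = 90°). The cylindrical source dimensions (3.6 mm active length, 0.65 mm active diameter) are typical microSelectron-like values assumed for illustration, and a simple volume-weighted grid summation stands in for the paper's integration; the sketch only reproduces the qualitative trend that the two agree at larger r and diverge near the source.

```python
import math

def g_line(r, theta, L=0.36):
    """TG-43 line-source geometry factor G = beta / (L * r * sin(theta)),
    with beta the angle subtended by the active length at the point (cm, rad)."""
    px, pz = r * math.sin(theta), r * math.cos(theta)
    beta = abs(math.atan2(px, pz - L / 2) - math.atan2(px, pz + L / 2))
    return beta / (L * px)

def g_cylinder(r, theta, L=0.36, R=0.0325, nz=60, nr=6, nphi=24):
    """Geometry factor for a uniformly active cylindrical source, by
    volume-weighted summation of inverse-square distances over a
    (z, rho, phi) grid. Source dimensions are assumed values."""
    px, pz = r * math.sin(theta), r * math.cos(theta)
    num = den = 0.0
    for iz in range(nz):
        z = -L / 2 + (iz + 0.5) * L / nz
        for ir in range(nr):
            rho = (ir + 0.5) * R / nr
            for ip in range(nphi):
                phi = 2 * math.pi * (ip + 0.5) / nphi
                x, y = rho * math.cos(phi), rho * math.sin(phi)
                d2 = (px - x) ** 2 + y ** 2 + (pz - z) ** 2
                num += rho / d2        # rho is the cylindrical volume weight
                den += rho
    return num / den

# Agreement with the line approximation degrades as r shrinks toward the source.
for r in (0.1, 0.5, 1.0):
    gs, gl = g_cylinder(r, math.pi / 2), g_line(r, math.pi / 2)
    print(r, round(100 * abs(gs - gl) / gl, 3))   # percent difference
```

As in the paper, the discrepancy is negligible at r ≥ 0.5 cm and grows at r = 0.1 cm, where the finite source diameter matters.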

Comparison of Natural Flow Estimates for the Han River Basin Using TANK and SWAT Models (TANK 모형과 SWAT 모형을 이용한 한강유역의 자연유출량 산정 비교)

  • Kim, Chul-Gyum;Kim, Nam-Won
    • Journal of Korea Water Resources Association / v.45 no.3 / pp.301-316 / 2012
  • Two models, TANK and SWAT (Soil and Water Assessment Tool), were compared for simulating natural flows in the area upstream of Paldang Dam in the Han River basin, in order to understand the limitations of TANK and to review the applicability and capability of SWAT. For comparison, simulation results from previous research were used. For the calibrated watersheds (Chungju Dam and Soyanggang Dam), the two models provided promising daily flow forecasts, with Nash-Sutcliffe model efficiencies of around 0.8. TANK matched observations during some peak flood seasons better than SWAT, but showed poor results during dry seasons; in particular, its simulations did not fall below a certain value, which can be explained by TANK having been calibrated for relatively larger flows rather than smaller ones. SWAT results showed relatively good agreement with observed flows except for some flood flows, and simulated inflows at Paldang Dam, accounting for discharges from upstream dams, coincided with observations with a model efficiency of around 0.9. This supports SWAT's applicability, with higher accuracy, to predicting natural flows without dam operation or artificial water uses and to assessing flow variations before and after dam development. The two models were also compared for other watersheds (Pyeongchang-A, Dalcheon-B, Seomgang-B, Inbuk-A, Hangang-D, and Hongcheon-A) to which the calibrated TANK parameters were transferred. The results were similar to those for the calibrated watersheds: TANK simulated smaller flows poorly except for some flood flows and had the same problem of staying above a certain value in dry seasons. This indicates that TANK applications may carry serious uncertainties in estimating low flows, which are used as an important index in water resources planning and management.
Therefore, in order to reflect the genuinely complex physical characteristics of Korean watersheds, and to manage water resources efficiently under future land use and water use changes driven by urbanization or climate change, it is necessary to utilize a physically based watershed model like SWAT rather than an existing conceptual lumped model like TANK.
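The Nash-Sutcliffe model efficiency used to score both models above is straightforward to compute from paired observed and simulated series; the flow values below are illustrative only.

```python
def nse(observed, simulated):
    """Nash-Sutcliffe model efficiency: 1 - SSE / variance about the observed mean.

    1.0 is a perfect fit; 0.0 means the simulation is no better than
    predicting the observed mean; negative values are worse than the mean.
    """
    obs_mean = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    svar = sum((o - obs_mean) ** 2 for o in observed)
    return 1.0 - sse / svar

# Illustrative daily flows (m3/s): a close simulation scores near 1.
obs = [12.0, 15.0, 40.0, 22.0, 14.0, 10.0]
sim = [11.0, 16.0, 37.0, 24.0, 13.0, 11.0]
print(round(nse(obs, sim), 2))  # → 0.97
```

Because the squared errors are dominated by large flows, NSE rewards fitting flood peaks, which is consistent with TANK scoring well overall while still failing on low flows.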

A Multimodal Profile Ensemble Approach to Development of Recommender Systems Using Big Data (빅데이터 기반 추천시스템 구현을 위한 다중 프로파일 앙상블 기법)

  • Kim, Minjeong;Cho, Yoonho
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.93-110 / 2015
  • A recommender system recommends products to the customers who are likely to be interested in them. Based on automated information filtering technology, various recommender systems have been developed. Collaborative filtering (CF), one of the most successful recommendation algorithms, has been applied in a number of different domains, such as recommending Web pages, books, movies, music, and products. However, CF has a well-known critical shortcoming. CF finds neighbors whose preferences are like those of the target customer and recommends the products those neighbors have liked most. Thus, CF works properly only when there is a sufficient number of ratings on common products from customers. When customer ratings are scarce, the neighborhood formed by CF becomes inaccurate, resulting in poor recommendations. To improve the performance of CF-based recommender systems, most related studies have focused on developing novel algorithms under the assumption of a single profile, created from users' item ratings, purchase transactions, or Web access logs. With the advent of big data, companies have come to collect more data and to use a greater variety of large-scale information. Many companies now consider utilizing big data essential, because it improves their competitiveness and creates new value. In particular, the use of personal big data in recommender systems is on the rise, because personal big data facilitate more accurate identification of users' preferences and behaviors. The proposed recommendation methodology is as follows. First, multimodal user profiles are created from personal big data in order to grasp users' preferences and behavior from various viewpoints. We derive five user profiles based on personal information: rating, site preference, demographics, Internet usage, and topics in text.
Next, the similarity between users is calculated based on the profiles, and neighbors of each user are found from the results. One of three ensemble approaches is applied to calculate the similarity: the similarity of the combined profile, the average of the per-profile similarities, or the weighted average of the per-profile similarities. Finally, the products most preferred within the neighborhood are recommended to the target users. For the experiments, we used demographic data and a very large volume of Web log transactions for 5,000 panel users of a company specializing in ranking Web sites. R was used to implement the proposed recommender system, and SAS E-miner to conduct the topic analysis using keyword search. To evaluate recommendation performance, we used 60% of the data for training and 40% for testing; 5-fold cross validation was also conducted to enhance the reliability of the experiments. The widely used F1 metric, which gives equal weight to recall and precision, was employed for evaluation. The proposed methodology achieved a significant improvement over the single-profile-based CF algorithm. In particular, the ensemble approach using the weighted average similarity showed the highest performance: the improvement in F1 was 16.9 percent for the weighted-average ensemble and 8.1 percent for the ensemble averaging the per-profile similarities. From these results, we conclude that the multimodal profile ensemble approach is a viable solution to the problems encountered when customer ratings are scarce. This study is significant in suggesting what kinds of information can be used to create profiles in a big data environment, and how they can be combined and utilized effectively.
However, our methodology should be studied further for real-world application. The differences in recommendation accuracy obtained by applying the proposed method to different recommendation algorithms need to be compared, to identify which combination shows the best performance.
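The three similarity ensembles described above can be sketched as follows. The profile vectors and weights are toy values, and cosine similarity is assumed as the base measure (the abstract does not name one).

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors (0 if either is zero)."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def ensemble_similarity(profiles_u, profiles_v, mode="weighted", weights=None):
    """Combine per-profile similarities between two users.

    profiles_u / profiles_v: lists of same-length vectors (e.g. rating,
    site-preference, demographic profiles). The modes mirror the three
    ensembles in the abstract: 'combined' concatenates all profiles before
    measuring similarity, 'average' averages the per-profile similarities,
    and 'weighted' uses caller-supplied weights (hypothetical values here).
    """
    if mode == "combined":
        u = [x for p in profiles_u for x in p]
        v = [x for p in profiles_v for x in p]
        return cosine(u, v)
    sims = [cosine(pu, pv) for pu, pv in zip(profiles_u, profiles_v)]
    if mode == "average":
        return sum(sims) / len(sims)
    w = weights or [1.0 / len(sims)] * len(sims)
    return sum(wi * s for wi, s in zip(w, sims)) / sum(w)

# Two toy users, each with a rating profile and a site-preference profile.
u = [[5, 3, 0], [1, 0, 1]]
v = [[4, 2, 1], [1, 1, 0]]
print(round(ensemble_similarity(u, v, "average"), 3))  # → 0.737
```

In the paper's setting the weighted variant performed best, which corresponds to tuning the weights rather than using the uniform default shown here.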

A Study on Dosimetry for Small Fields of Photon Beam (광자선 소조사면의 선량 측정에 관한 연구)

  • 강위생;하성환;박찬일
    • Progress in Medical Physics / v.5 no.2 / pp.57-68 / 1994
  • Purpose: The purposes are to discuss the rationale for measuring dose distributions of circular small fields for linear-accelerator-based stereotactic radiosurgery, the determination of the beam axis, and considerations for dosimetry using a home-made small water phantom, and to report dosimetric results for the 10 MV X-ray beam of a Clinac-18, namely the TMR, OAR, and field size factor required for treatment planning. Methods and materials: The dose-response linearity and dose-rate dependence of a p-type silicon (Si) diode, whose size and sensitivity are suitable for small-field dosimetry, were determined by measurement. Two water tanks of the same shape and size, with internal dimensions of 30×30×30 cm³, were made in-house from acrylic plates and connected by a hose. One was used as a water phantom and the other as a device to control the depth of the Si detector in the phantom. Two orthogonal dose profiles at a specified depth were used to determine the beam axis. TMRs of four circular cones (10, 20, 30, and 40 mm at 100 cm SAD) were measured, and their OARs were measured at four depths (d_max, 6, 10, and 15 cm) at 100 cm SCD. The field size factor (FSF), defined as the ratio of D_max of a given cone at SAD to MU, was also measured. Results: The dose-response linearity of the Si detector was almost perfect. Its sensitivity decreased with increasing dose rate but was stable at high dose rates of 100 MU/min and above, although the dose outside the field could be slightly overestimated because of the low dose rate there. The method of determining the beam axis from two orthogonal profiles was simple and gave 0.05 mm accuracy. Adjusting the depth of the detector in the water phantom by inserting and removing acrylic plates under the auxiliary water tank was also simple and accurate. The TMR, OAR, and FSF measured with the Si detector were sufficiently accurate for application to treatment planning of linac-based stereotactic radiosurgery. The in-field OAR was nearly independent of depth.
Conclusion: The Si detector was appropriate for dosimetry of small circular fields for linac-based stereotactic radiosurgery. The beam axis could be determined from two orthogonal dose profiles. The depth of the detector in water could be adjusted simply by adding or removing acrylic plates under the auxiliary water tank. The TMR, OAR, and FSF were accurate enough to be applied to stereotactic radiosurgery planning, and OAR data at one depth are sufficient for such planning.
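Finding the beam axis from two orthogonal dose profiles can be sketched as below: along each profile, the axis coordinate is taken as the midpoint of the two 50%-of-maximum crossing points, one common convention assumed here since the paper does not detail its algorithm. Applying the same function to the orthogonal profile yields the other coordinate; the positions and doses below are illustrative.

```python
def profile_center(positions, doses):
    """Beam-axis coordinate along one profile: the midpoint of the two
    50%-of-maximum crossing points, located by linear interpolation."""
    half = max(doses) / 2.0
    crossings = []
    for i in range(len(doses) - 1):
        d0, d1 = doses[i], doses[i + 1]
        if (d0 - half) * (d1 - half) < 0:   # sign change -> a 50% crossing
            t = (half - d0) / (d1 - d0)
            crossings.append(positions[i] + t * (positions[i + 1] - positions[i]))
    return (crossings[0] + crossings[-1]) / 2.0

# Illustrative profile (relative dose) whose center sits at 0.5 mm.
pos = [-4, -3, -2, -1, 0, 1, 2, 3, 4, 5]
dose = [2, 5, 20, 80, 100, 100, 80, 20, 5, 2]
print(round(profile_center(pos, dose), 2))  # → 0.5
```

With profile points spaced finely enough, interpolating the 50% crossings like this can resolve the center well below the sampling step, consistent with the sub-0.1 mm accuracy reported.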


A Comparative Analysis between Photogrammetric and Auto Tracking Total Station Techniques for Determining UAV Positions (무인항공기의 위치 결정을 위한 사진 측량 기법과 오토 트래킹 토탈스테이션 기법의 비교 분석)

  • Kim, Won Jin;Kim, Chang Jae;Cho, Yeon Ju;Kim, Ji Sun;Kim, Hee Jeong;Lee, Dong Hoon;Lee, On Yu;Meng, Ju Pil
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.35 no.6 / pp.553-562 / 2017
  • Among the various sensors mounted on a UAV (Unmanned Aerial Vehicle), the GPS (Global Positioning System) receiver supports functions such as hovering flight and waypoint flight based on GPS signals. A GPS receiver can be used in environments where GPS signals are received smoothly. Recently, however, the use of UAVs has been diversifying into fields such as facility monitoring, delivery services, and leisure as their applications have expanded. Consequently, GPS signals may be interrupted when a UAV flies in a shadow area where signal reception is limited, and multipath can introduce various noises into the signal when flying in dense areas such as those with high-rise buildings. In this study, we used analytical photogrammetry and an auto-tracking total station technique for 3D positioning of a UAV. The analytical photogrammetry is based on bundle adjustment using the collinearity equations, the geometric principle of central projection. The auto-tracking total station technique is based on tracking a 360-degree prism target at intervals of a second or less. In both techniques, the target used for positioning is mounted on top of the UAV, and there is a geometric separation between the targets in the x, y, and z directions. Data were acquired at different speeds of 0.86 m/s, 1.5 m/s, and 2.4 m/s to examine the effect of UAV flight speed, and accuracy was evaluated using the geometric separation of the targets. As a result, errors from 1 mm to 12.9 cm occurred in the x and y directions of UAV flight; in the z direction, where movement is relatively small, an error of approximately 7 cm occurred regardless of flight speed.
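The collinearity equations underlying the bundle adjustment mentioned above express central projection: an image coordinate is the rotated ground-to-camera vector scaled by the focal length. The sketch below uses the standard omega-phi-kappa rotation convention; all numeric values are hypothetical and chosen only to illustrate the geometry.

```python
import math

def collinearity_project(ground, cam_pos, omega, phi, kappa, f):
    """Project a ground point (X, Y, Z) into image coordinates (x, y)
    via the collinearity equations (central projection).

    Rotation is built from omega-phi-kappa angles (radians), following
    the standard photogrammetric convention; f is the focal length.
    """
    co, so = math.cos(omega), math.sin(omega)
    cp, sp = math.cos(phi), math.sin(phi)
    ck, sk = math.cos(kappa), math.sin(kappa)
    # Standard omega-phi-kappa rotation matrix (object -> image frame).
    r = [
        [cp * ck,  co * sk + so * sp * ck,  so * sk - co * sp * ck],
        [-cp * sk, co * ck - so * sp * sk,  so * ck + co * sp * sk],
        [sp,      -so * cp,                 co * cp],
    ]
    dX = [g - c for g, c in zip(ground, cam_pos)]
    u = sum(r[0][j] * dX[j] for j in range(3))
    v = sum(r[1][j] * dX[j] for j in range(3))
    w = sum(r[2][j] * dX[j] for j in range(3))
    return (-f * u / w, -f * v / w)

# Nadir-looking camera 100 m above the scene: image offsets scale by f/H.
x, y = collinearity_project((10.0, 5.0, 0.0), (0.0, 0.0, 100.0), 0.0, 0.0, 0.0, 0.05)
print(round(x, 4), round(y, 4))
```

A bundle adjustment inverts this relationship: given many such image observations of the prism-target point across frames, it solves for the unknown target coordinates (and camera parameters) in a least-squares sense.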