• Title/Summary/Keyword: error range

Search Results: 2,816

Patient Specific Quality Assurance of IMRT: Quantitative Approach Using Film Dosimetry and Optimization (강도변조방사선치료의 환자별 정도관리: 필름 선량계 및 최적화법을 이용한 정량적 접근)

  • Shin Kyung Hwan;Park Sung-Yong;Park Dong Hyun;Shin Dongho;Park Dahl;Kim Tae Hyun;Pyo Hongryull;Kim Joo-Young;Kim Dae Yong;Cho Kwan Ho;Huh Sun Nyung;Kim Il Han;Park Charn Il
    • Radiation Oncology Journal
    • /
    • v.23 no.3
    • /
    • pp.176-185
    • /
    • 2005
  • Purpose: Film dosimetry was performed as part of patient-specific intensity-modulated radiation therapy quality assurance (IMRT QA) to develop a new optimization method for the film isocenter offset and to suggest new quantitative criteria for film dosimetry. Materials and Methods: Film dosimetry was performed on 14 IMRT patients with head and neck cancers. An optimization method for obtaining the local minimum was developed to adjust for the error in the film isocenter offset, which is the largest of the systematic errors. Results: The adjusted value of the film isocenter offset under optimization was 1 mm in 12 patients, while only two patients showed a 2 mm translation. The means of the absolute average dose difference before and after optimization were 2.36% and 1.56%, respectively, and the mean ratios over a 5% tolerance were 9.67% and 2.88%. After optimization, the dose differences decreased dramatically. A low-dose-range cutoff (L-Cutoff) has been suggested for clinical application. New quantitative criteria have been suggested for the verification of film dosimetry: the ratio over a 5% tolerance should be less than 10%, and the absolute average dose difference should be less than 3%. Conclusion: The new optimization method was effective in adjusting for the film dosimetry error, and the new quantitative criteria suggested in this research are believed to be sufficiently accurate and clinically useful.
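
The local-minimum search over film isocenter offsets described in this abstract can be sketched as a brute-force grid search that shifts the film dose grid against the planned dose grid and keeps the offset minimizing the mean absolute dose difference. The dose grids, the 1 mm grid spacing, and the ±2 mm search window below are illustrative assumptions, not the paper's actual data:

```python
def mean_abs_diff(film, plan, dx, dy):
    """Mean absolute dose difference after shifting the film grid by (dx, dy) pixels."""
    n, total = 0, 0.0
    rows, cols = len(plan), len(plan[0])
    for y in range(rows):
        for x in range(cols):
            fy, fx = y + dy, x + dx
            if 0 <= fy < rows and 0 <= fx < cols:  # compare only the overlap
                total += abs(film[fy][fx] - plan[y][x])
                n += 1
    return total / n

def best_offset(film, plan, window=2):
    """Exhaustive search over integer offsets within +/-window pixels (1 px = 1 mm here)."""
    return min(
        ((dx, dy) for dx in range(-window, window + 1)
                  for dy in range(-window, window + 1)),
        key=lambda o: mean_abs_diff(film, plan, o[0], o[1]),
    )

# Toy data: the "film" is the "plan" translated by 1 px in x, so the
# search should recover the offset (1, 0).
plan = [[float((3 * x + 7 * y) % 11) for x in range(6)] for y in range(6)]
film = [[plan[y][x - 1] if x > 0 else plan[y][0] for x in range(6)] for y in range(6)]
```

In practice the offset found this way would be applied before computing the dose-difference statistics against the tolerance criteria.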

Formulation of a reference coordinate system of three-dimensional head & neck images: Part II. Reproducibility of the horizontal reference plane and midsagittal plane (3차원 두부영상의 기준좌표계 설정을 위한 연구: II부 수평기준면과 정중시상면의 재현성)

  • Park, Jae-Woo;Kim, Nam-Kug;Chang, Young-Il
    • The Korean Journal of Orthodontics
    • /
    • v.35 no.6 s.113
    • /
    • pp.475-484
    • /
    • 2005
  • This study was performed to investigate the reproducibility of the horizontal and midsagittal planes, and to suggest a stable coordinate system for three-dimensional (3D) cephalometric analysis. Eighteen CT scans were taken, and the coordinate system was established using 7 reference points marked on a volume model, with no more than 4 points on the same plane. The 3D landmarks were selected in V works (Cybermed Inc., Seoul, Korea), then exported to V surgery (Cybermed Inc., Seoul, Korea) to calculate the coordinate values. All the landmarks were taken twice with a lapse of 2 weeks. The horizontal and midsagittal planes were constructed and their reproducibility was evaluated. There was no significant difference in the reproducibility of the horizontal reference planes, but FH planes were more reproducible than the other horizontal planes. FH planes showed no difference between the planes constructed with 3 out of the 4 points. The angle of intersection made by 2 FH planes composed of both Po and one Or showed less than a 1° difference. The same held when 2 FH planes were composed of both Or and one Po, but the latter cases showed a significantly smaller error. The reproducibility of the midsagittal plane was reliable, with an error range of 0.61° to 1.93°, except for 5 constructions (FMS-Nc, Na-Rh, Na-ANS, Rh-ANS, and FR-PNS). The 3D coordinate system may be constructed with 3 planes: the horizontal plane constructed by both Po and right Or; the midsagittal plane perpendicular to the horizontal plane, including the midpoint of the foramen spinosum and Nc; and the coronal plane perpendicular to the horizontal and midsagittal planes, including point clinoidale, sella, or PNS.
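
The reproducibility comparison above rests on two geometric steps: fitting a plane through 3 landmarks and measuring the angle of intersection between two such planes. A minimal sketch follows; the landmark coordinates are hypothetical stand-ins for Po/Or positions, not the study's data:

```python
import math

def plane_normal(p, q, r):
    """Normal of the plane through three 3D points: (q - p) x (r - p)."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def angle_between_planes(n1, n2):
    """Dihedral angle in degrees between two planes, via their normals."""
    dot = sum(a * b for a, b in zip(n1, n2))
    norm = math.sqrt(sum(a * a for a in n1)) * math.sqrt(sum(b * b for b in n2))
    # abs() folds the two normal orientations together; clamp guards acos
    return math.degrees(math.acos(max(-1.0, min(1.0, abs(dot) / norm))))

# Two hypothetical FH planes sharing both Po points but using different Or points.
po_l, po_r = (-60.0, 0.0, 0.0), (60.0, 0.0, 0.0)
or_r, or_l = (40.0, 80.0, 1.0), (-40.0, 80.0, 0.0)
n1 = plane_normal(po_l, po_r, or_r)
n2 = plane_normal(po_l, po_r, or_l)
```

With these made-up coordinates the two planes intersect at well under 1°, the kind of difference the abstract reports between alternative FH constructions.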

Improvement of Radar Rainfall Estimation Using Radar Reflectivity Data from the Hybrid Lowest Elevation Angles (혼합 최저고도각 반사도 자료를 이용한 레이더 강우추정 정확도 향상)

  • Lyu, Geunsu;Jung, Sung-Hwa;Nam, Kyung-Yeub;Kwon, Soohyun;Lee, Cheong-Ryong;Lee, Gyuwon
    • Journal of the Korean earth science society
    • /
    • v.36 no.1
    • /
    • pp.109-124
    • /
    • 2015
  • A novel approach, the hybrid surface rainfall (KNU-HSR) technique developed by Kyungpook National University, was utilized for improving radar rainfall estimation. The KNU-HSR technique estimates radar rainfall on a 2D hybrid surface consisting of the lowest radar bins that are immune to ground clutter contamination and significant beam blockage. Two HSR techniques, static and dynamic HSR, were compared and evaluated in this study. The static HSR technique utilizes a beam blockage map and a ground clutter map to yield the hybrid surface, whereas the dynamic HSR technique additionally applies a quality index map derived from a fuzzy logic algorithm for quality control in real time. The performances of the two HSRs were evaluated by correlation coefficient (CORR), total ratio (RATIO), mean bias (BIAS), normalized standard deviation (NSD), and mean relative error (MRE) for ten rain cases. Dynamic HSR (CORR=0.88, BIAS=-0.24 mm hr⁻¹, NSD=0.41, MRE=37.6%) shows better performance than static HSR without correction of the reflectivity calibration bias (CORR=0.87, BIAS=-2.94 mm hr⁻¹, NSD=0.76, MRE=58.4%) for all skill scores. The dynamic HSR technique overestimates surface rainfall at near ranges, whereas it underestimates rainfall at far ranges due to the effects of beam broadening and the increasing radar beam height. In terms of NSD and MRE, dynamic HSR shows the best results regardless of the distance from the radar. Static HSR significantly overestimates surface rainfall at weaker rainfall intensities. However, the RATIO of dynamic HSR remains almost 1.0 over the full range of rainfall intensity. After correcting the system bias of reflectivity, the NSD and MRE of dynamic HSR are improved by about 20% and 15%, respectively.
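
The five skill scores used above can be computed from paired gauge observations and radar estimates. The sketch below uses common textbook definitions (e.g., NSD as RMSE normalized by the observed mean); the paper's exact formulas may differ slightly:

```python
import math

def skill_scores(obs, est):
    """CORR, RATIO, BIAS, NSD, MRE for paired rainfall (obs, est) series.
    Definitions are common conventions, not necessarily the paper's exact ones."""
    n = len(obs)
    mo, me = sum(obs) / n, sum(est) / n
    cov = sum((o - mo) * (e - me) for o, e in zip(obs, est)) / n
    so = math.sqrt(sum((o - mo) ** 2 for o in obs) / n)
    se = math.sqrt(sum((e - me) ** 2 for e in est) / n)
    corr = cov / (so * se)                       # Pearson correlation
    ratio = sum(est) / sum(obs)                  # total ratio
    bias = me - mo                               # mean bias
    nsd = math.sqrt(sum((e - o) ** 2 for o, e in zip(obs, est)) / n) / mo
    mre = 100.0 * sum(abs(e - o) / o for o, e in zip(obs, est)) / n  # percent
    return corr, ratio, bias, nsd, mre
```

For a perfect estimator the scores reduce to CORR=1, RATIO=1, BIAS=0, NSD=0, MRE=0, which is a quick sanity check when implementing a verification pipeline.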

Assessment of Dynamic Stereoacuity of Adults in their 20s with the Howard-Dolman Test (하워드-돌먼 입체검사를 이용한 20대 성인의 동적 입체시 평가)

  • Shim, Hyun-Suk;Choi, Sun-Mi;Kim, Young-Cheong
    • Journal of Korean Ophthalmic Optics Society
    • /
    • v.20 no.1
    • /
    • pp.61-66
    • /
    • 2015
  • Purpose: In this study, the dynamic stereoacuity of adults in their 20s was measured using the Howard-Dolman test (H-D test, Bernell, U.S.A.) and compared between males and females. The correlations between dynamic stereoacuity and PD (pupillary distance), and between dynamic stereoacuity and anisometropia caused by a difference in the spherical refractive power of the left and right eyes, were also analyzed. Methods: Sixty-three adults in their 20s (30 males, 33 females; mean age 22.68 ± 0.50 years, range 20-29) participated in this experiment. After full correction of each subject's refractive error, dynamic stereoacuity was measured 5 times per subject at a 2.5 m distance using the H-D test. Results: The mean dynamic stereoacuity was 28.44 ± 25.03 sec of arc for all subjects, 28.23 ± 23.34 sec of arc for males, and 28.63 ± 26.83 sec of arc for females. Classified by the range of inter-pupillary distance (IPD), the dynamic stereoacuity was 33.87 ± 18.53 sec of arc for IPD under 59.80 mm, 26.24 ± 25.26 sec of arc for 59.81-66.15 mm, and 34.60 ± 25.65 sec of arc for over 66.15 mm; however, there were no significant differences between the 3 groups (P=0.73, r=0.03). Classified by the refractive error difference between the two eyes, dynamic stereoacuity was 26.81 ± 24.86 sec of arc for under 1 D and 41.45 ± 24.18 sec of arc for over 1 D, with no significant difference between the two groups (P=0.15, r=0.15). Conclusions: Dynamic stereoacuity measured by the H-D test in adults in their 20s showed no significant difference between males and females, and PD and anisometropia did not have a significant impact on dynamic stereoacuity.
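
In a Howard-Dolman apparatus, stereoacuity is conventionally converted from the just-noticeable rod separation to binocular disparity with the small-angle relation η ≈ (PD · Δd) / d². This is the standard geometry, but the specific PD, depth, and distance values below are hypothetical, not the study's data:

```python
import math

def stereoacuity_arcsec(pd_mm, delta_d_mm, distance_mm):
    """Binocular disparity in seconds of arc for a Howard-Dolman setting.
    Small-angle approximation: eta ~= (PD * delta d) / d^2 (radians)."""
    eta_rad = (pd_mm * delta_d_mm) / (distance_mm ** 2)
    return math.degrees(eta_rad) * 3600.0

# Hypothetical example: PD 63 mm, 5 mm rod offset, 2.5 m test distance.
eta = stereoacuity_arcsec(63.0, 5.0, 2500.0)
```

Because the disparity scales with PD, the study's question of whether IPD groups differ in measured stereoacuity is a natural one to ask with this formula in mind.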

Accurate Quality Control Method of Bone Mineral Density Measurement -Focus on Dual Energy X-ray Absorptiometry- (골밀도 측정의 정확한 정도관리방법 -이중 에너지 방사선 흡수법을 중심으로-)

  • Kim, Ho-Sung;Dong, Kyung-Rae;Ryu, Young-Hwan
    • Journal of radiological science and technology
    • /
    • v.32 no.4
    • /
    • pp.361-370
    • /
    • 2009
  • The quality management of bone mineral density (BMD) measurement is the responsibility and duty of the radiologists who carry out the examinations. However, inaccurate conclusions due to a lack of understanding of quality management methodology can be a fatal error for the patient. The objective of this paper is therefore to explain proper quality management and to enumerate methods for examiners and patients, thereby ensuring the reliability of bone mineral density exams. The accuracy and precision of bone mineral density measurements must be at the highest level so that actual biological changes can be detected even when the change in bone mineral density is slight. Accuracy and precision should be continuously maintained as part of equipment quality management; these factors ensure the reliability of bone mineral density exams. For equipment management, the equipment is calibrated each morning, and a phantom recommended by the manufacturer is measured ten to twenty-five times to establish a mean value, with a permissible range of ±1.5% set as the standard. Daily measurements of the phantom, or at least three measurements per week, are needed to confirm whether the measured bone mineral density values have drifted. In addition, bone mineral density measurements were evaluated and recorded following the rules of the Shewhart control chart. This type of management must be conducted at installation and after any relocation of the equipment. For the management of inspectors, measurement precision was evaluated by testing the reproducibility of repeated measurements with no real biological change occurring between them. The precision assessment was applied to patients by either measuring thirty patients twice or fifteen patients three times. An important point when taking measurements is that, between the first and any second or third scan, the patient must get down from the table and be repositioned. At a 95% confidence level, the least significant change (LSC) is 2.77 times the precision error of the bone mineral density measurements, and only a change larger than the LSC can be stated to be a genuine biological change. From initial installation through any relocation of the equipment, this management must be carried out continuously to be effective. Proper quality control by the radiologists performing bone mineral density inspections extends the life of the equipment, yields accurate results, and assures reliable examinations.
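
The 95% confidence relation quoted above (LSC = 2.77 × precision error, since 2.77 ≈ 1.96 × √2 for the difference of two measurements) can be sketched in a few lines. Pooling replicate scans into a root-mean-square SD follows common DXA precision practice; the sample values are hypothetical:

```python
import math

def rms_sd(replicates):
    """Precision error as the root-mean-square SD over subjects,
    where each inner list holds one subject's repeated BMD scans."""
    ss, dof = 0.0, 0
    for vals in replicates:
        mean = sum(vals) / len(vals)
        ss += sum((v - mean) ** 2 for v in vals)
        dof += len(vals) - 1
    return math.sqrt(ss / dof)

def least_significant_change(precision_error):
    """LSC at 95% confidence: 2.77 = 1.96 * sqrt(2), for a difference
    between two measurements each carrying the same precision error."""
    return 2.77 * precision_error

# Hypothetical duplicate scans (g/cm^2) for two subjects.
precision = rms_sd([[1.00, 1.02], [0.90, 0.94]])
lsc = least_significant_change(precision)
```

A follow-up BMD change smaller than `lsc` would be reported as within measurement noise rather than as a biological change.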


A Digital Audio Watermarking Algorithm using 2D Barcode (2차원 바코드를 이용한 오디오 워터마킹 알고리즘)

  • Bae, Kyoung-Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.2
    • /
    • pp.97-107
    • /
    • 2011
  • Nowadays there are many issues concerning copyright infringement on the Internet, because digital content on the network can be copied and delivered easily, and the copied version has the same quality as the original. Copyright owners and content providers therefore want a powerful solution to protect their content. A popular solution was DRM (digital rights management), which is based on encryption technology and rights control. However, DRM-free service was launched after Steve Jobs, the CEO of Apple, proposed a new music service paradigm without DRM, and DRM has since disappeared from the online music market. Even though online music services decided not to employ DRM, copyright owners and content providers are still searching for a solution to protect their content. One solution to replace DRM technology is digital audio watermarking, which can embed copyright information into the music itself. In this paper, the author proposes a new audio watermarking algorithm with two approaches. First, the watermark information is generated as a two-dimensional barcode, which carries an error correction code, so the information can recover itself as long as the errors fall within the error-tolerance range. Second, spreading sequences in the manner of CDMA (code division multiple access) are used; together these make the algorithm robust to several malicious attacks. Among the many 2D barcodes, QR code, one of the matrix barcodes, can express information more freely than the other matrix barcodes. QR code has nested square finder patterns at three corners that indicate the boundary of the symbol, a feature well suited to expressing the watermark information. That is, because the QR code is a 2D, nonlinear, matrix code, it can be modulated into the spread spectrum and used in the watermarking algorithm.
The proposed algorithm assigns a different spread spectrum sequence to each individual user. When the assigned code sequences are orthogonal, the watermark information of each individual user can be identified in an audio content. The algorithm uses Walsh codes as the orthogonal codes. The watermark information is rearranged from the 2D barcode into a 1D sequence and modulated by the Walsh code. The modulated watermark information is embedded into the DCT (discrete cosine transform) domain of the original audio content. For the performance evaluation, 3 audio samples were used: "Amazing Grace", "Oh! Carol", and "Take Me Home, Country Roads". The attacks for the robustness test were MP3 compression, an echo attack, and a subwoofer boost. The MP3 compression was performed with Cool Edit Pro 2.0; the MP3 specification was CBR (constant bit rate) 128 kbps, 44,100 Hz, stereo. The echo attack added an echo with initial volume 70%, decay 75%, and delay 100 msec. The subwoofer boost attack modified the low-frequency part of the Fourier coefficients. The test results showed that the proposed algorithm is robust to these attacks. Under the MP3 attack, the strength of the watermark information is not affected, and the watermark can be detected in all of the sample audios. Under the subwoofer boost attack, the watermark was detected when the strength was 0.3, and under the echo attack, the watermark can be identified if the strength is greater than or equal to 0.5.
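
The Walsh-code spreading step described above can be sketched as follows: generate orthogonal Walsh codes by the Sylvester construction, spread each ±1 watermark bit over one code, and despread by correlation. This is a generic spread-spectrum sketch, not the paper's embedding into the DCT domain:

```python
def walsh_matrix(order):
    """Walsh-Hadamard codes of length 2**order via the Sylvester construction;
    distinct rows are mutually orthogonal."""
    h = [[1]]
    for _ in range(order):
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h

def spread(bits, code):
    """Modulate: each +/-1 bit is multiplied chip-by-chip by the user's code."""
    return [b * c for b in bits for c in code]

def despread(chips, code):
    """Demodulate: correlate each code-length segment and take the sign."""
    n = len(code)
    out = []
    for i in range(0, len(chips), n):
        corr = sum(chips[i + j] * code[j] for j in range(n))
        out.append(1 if corr > 0 else -1)
    return out
```

In a multi-user setting, correlating the embedded chips with a different user's Walsh code yields zero, which is what lets each user's watermark be identified independently.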

User Centered Interface Design of Web-based Attention Testing Tools: Inhibition of Return (IOR) and Graphic UI (웹 기반 주의력 검사의 사용자 인터페이스 설계: 회귀억제 과제와 그래픽 UI를 중심으로)

  • Kwahk, Ji-Eun;Kwak, Ho-Wan
    • Korean Journal of Cognitive Science
    • /
    • v.19 no.4
    • /
    • pp.331-367
    • /
    • 2008
  • This study aims to validate a web-based neuropsychological testing tool developed by Kwak (2007) and to suggest solutions to potential problems that can deteriorate its validity. When it targets a wider range of subjects, a web-based neuropsychological testing tool is challenged by high drop-out rates, lack of motivation, lack of interactivity with the experimenter, fear of computers, etc. As a possible solution to these threats, this study redesigns the user interface of a web-based attention testing tool through three phases. In Study 1, an extensive analysis of Kwak's (2007) attention testing tool was conducted to identify potential usability problems. The Heuristic Walkthrough (HW) method was used by three usability experts to review various design features. As a result, many problems were found throughout the tool; the design of the instructions, the user information survey forms, the task screen, the results screen, etc. did not conform to the needs of the users and their tasks. In Study 2, 11 guidelines for the design of web-based attention testing tools were established based on the findings of Study 1. The guidelines were used to optimize the design and organization of the tool so that it fits the user and task needs. The resulting new design alternative was then implemented as a working prototype in the Java programming language. In Study 3, a comparative study was conducted to demonstrate the superiority of the new design of the attention testing tool (the graphic style tool) over the existing design (the text style tool). A total of 60 subjects participated in user testing sessions where their error frequency, error patterns, and subjective satisfaction were measured through performance observation and questionnaires. Through the task performance measurement, user errors of various types were observed in the existing text style tool. The questionnaire results were also in support of the new graphic style tool: users rated it higher than the existing text style tool in terms of overall satisfaction, screen design, terms and system information, ease of learning, and system performance.


The characteristics of the dose distribution of a large field (40×40 cm² or larger) (넓은 광자선 조사면(40×40 cm² 이상)의 선량분포 특성)

  • Lee Sang Rok;Jeong Deok Yang;Lee Byoung Koo;Kwon Young Ho
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.15 no.1
    • /
    • pp.19-27
    • /
    • 2003
  • I. Purpose: In special cases such as total body irradiation (TBI), half body irradiation (HBI), non-Hodgkin's lymphoma, Ewing's sarcoma, lymphosarcoma, and neuroblastoma, a large field can be used clinically. In practice, the dose distribution of a large field is usually derived and calibrated from measurements made for small fields (standard SSD 100 cm, field sizes up to 40×40 cm²). With such a simple calculation method alone, it is difficult to know the actual dose and its uniformity in the body because of the various scatter contributions. II. Methods & Materials: In this study, the basic parameters (PDD, TMR, output, Sc, Sp) were measured with a Multidata water phantom at the standard SSD of 100 cm as a function of field size. The same parameters were then measured at SSD 180 cm (with the phantom set vertically toward the floor) with increasing field size, and at SSD 350 cm (with the phantom against the wall, using a small water phantom with a mylar window that allows horizontal-beam measurement), and the results were compared with each other. III. Results & Conclusion: Compared with the standard dose data, the parameters measured at SSD 180 cm and 350 cm showed little difference; the discrepancies did not exceed the experimental error. To obtain accurate data, dose measurements in an anthropomorphic phantom, or absolute dose measurements using a specially devised full-scatter phantom, are required. In addition, the use of a small-volume ionization chamber and the stem effect of the cable should be considered for a large field.
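
A standard way to transfer percent depth dose (PDD) between the measured SSDs is the Mayneord F factor, a pure inverse-square correction. It deliberately ignores the change in phantom scatter, which is exactly the complication the abstract flags for very large fields; the SSD and depth values below are illustrative:

```python
def mayneord_factor(ssd1, ssd2, depth, dmax):
    """Mayneord F factor converting PDD at ssd1 to an estimate at ssd2.
    Pure inverse-square geometry: F = ((ssd2+dmax)/(ssd1+dmax))^2
                                     * ((ssd1+d)/(ssd2+d))^2.
    Scatter changes with field size are NOT modeled here."""
    return (((ssd2 + dmax) / (ssd1 + dmax)) ** 2
            * ((ssd1 + depth) / (ssd2 + depth)) ** 2)

# Illustrative numbers: convert a 10 cm depth PDD from SSD 100 cm to
# SSD 180 cm for a beam with dmax at 1.5 cm.
f = mayneord_factor(100.0, 180.0, 10.0, 1.5)
```

F slightly above 1 here reflects the well-known increase of PDD with SSD; comparing such a geometric prediction against the extended-SSD measurements is one way to isolate the scatter-driven differences the paper discusses.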


Impacts of OSTIA Sea Surface Temperature in Regional Ocean Data Assimilation System (지역 해양순환예측시스템에 대한 OSTIA 해수면온도 자료동화 효과에 관한 연구)

  • Kim, Ji Hye;Eom, Hyun-Min;Choi, Jong-Kuk;Lee, Sang-Min;Kim, Young-Ho;Chang, Pil-Hun
    • The Sea: Journal of the Korean Society of Oceanography
    • /
    • v.20 no.1
    • /
    • pp.1-15
    • /
    • 2015
  • The impact of sea surface temperature (SST) assimilation on the prediction of upper ocean temperature is investigated using a regional ocean forecasting system in which 3-dimensional optimal interpolation is applied. In the present study, the Operational Sea Surface Temperature and Sea Ice Analysis (OSTIA) dataset is adopted for the daily SST assimilation. This study mainly compares two experiments, with (Exp. DA) and without (Exp. NoDA) data assimilation. When comparing both results with OSTIA SST data during September 2011, Exp. NoDA shows a root mean square error (RMSE) of about 1.5°C at the 24, 48, and 72 hour forecasts. On the other hand, Exp. DA yields a relatively low RMSE of below 0.8°C at all forecast hours. In particular, the RMSE from Exp. DA reaches 0.57°C at the 24 hour forecast, indicating that the assimilation of daily SST (i.e., OSTIA) improves the performance of the early SST prediction. Furthermore, the reduction ratio of the RMSE in Exp. DA reaches over 60% in the Yellow and East seas. To examine the impact in shallow coastal regions, the SST measured by eight moored buoys around the Korean peninsula is compared with both experiments. Exp. DA shows an RMSE reduction ratio of over 70% in all seasons except summer, demonstrating the contribution of OSTIA assimilation to short-range prediction in coastal regions. In addition, the effect of SST assimilation on upper ocean temperature is examined by comparison with Argo data in the East Sea. The comparison shows that the RMSE from Exp. DA is reduced by 1.5°C down to 100 m depth in winter, when vertical mixing is strong. Thus, SST assimilation is found to be effective also in upper ocean prediction. However, the temperature below the mixed layer in winter shows a larger difference in Exp. DA, implying that SST assimilation still has limitations for the prediction of the ocean interior.
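
The RMSE and "reduction ratio" metrics used throughout this comparison are straightforward to compute; a minimal sketch, with the reduction ratio defined as the percent decrease of RMSE from the no-assimilation run to the assimilation run (an assumption consistent with the quoted 60-70% figures):

```python
import math

def rmse(obs, model):
    """Root mean square error between paired observed and modeled values."""
    n = len(obs)
    return math.sqrt(sum((m - o) ** 2 for o, m in zip(obs, model)) / n)

def reduction_ratio(rmse_noda, rmse_da):
    """Percent reduction of RMSE achieved by data assimilation."""
    return 100.0 * (rmse_noda - rmse_da) / rmse_noda

# With the abstract's 24 h values (NoDA ~1.5 C, DA 0.57 C) the reduction
# ratio is 62%, in line with the reported "over 60%".
r = reduction_ratio(1.5, 0.57)
```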

A Quantification Method for the Cold Pool Effect on Nocturnal Temperature in a Closed Catchment (폐쇄집수역의 냉기호 모의를 통한 일 최저기온 분포 추정)

  • Kim, Soo-Ock;Yun, Jin-I.
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.13 no.4
    • /
    • pp.176-184
    • /
    • 2011
  • On calm and clear nights in mountainous terrain, cold air on sloping surfaces flows down to the valley bottom. Based on the assumption that cold air flows like water, current models estimate the temperature drop by treating the cold air accumulation at a given location as water-like free drainage. At a closed catchment whose outlet is blocked by man-made obstacles such as banks and roads, however, the water-like free drainage assumption is no longer valid, because the cold air accumulates from the bottom up. We developed an empirical model to quantitatively estimate the effect of a cold pool on nocturnal temperature in a closed catchment. In our model, a closed catchment is treated like a "vessel", and a digital elevation model (DEM) is used to calculate the maximum capacity of the cold pool formed in the catchment. We introduce a topographical variable named the "shape factor", the ratio of the cold air accumulation potential across the whole catchment area to the maximum capacity of the cold pool, to describe the relative size of the temperature drop over a wide range of catchment shapes. The shape factor is then used to simulate the density profile of the cold pool formed in a given catchment based on a hypsometric equation. The cold pool module was incorporated into the existing model (Chung et al., 2006), generating a new model for predicting the distribution of minimum temperature over closed catchments. We applied this model to the Akyang valley (a typical closed catchment of 53 km² area) at the southern skirt of Mt. Jiri National Park, where 12 automated weather stations (AWS) are operational. The performance of the model was evaluated on how accurately it delineated the temperature pattern of the cold pool forming at night. Overall, the model's ability to simulate the spatial pattern of low temperature was improved, especially at the valley bottom, with the estimated temperature showing a pattern similar to that of thermal images obtained across the valley at dawn (0520 to 0600 local standard time) on 17 May 2011. The error in temperature estimation, calculated as the root mean square error over the 10 low-lying AWSs, decreased substantially from 1.30°C with the existing model to 0.71°C with the new model. These results suggest the feasibility of the new method for predicting site-specific freeze and frost warnings in a closed catchment.
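
The "vessel" capacity and shape factor described above can be sketched from a DEM grid: the pool capacity is the volume between the terrain and the blocked outlet level, and the shape factor divides the catchment-wide accumulation potential by that capacity. The tiny DEM, unit cell area, and potential value below are hypothetical placeholders, not the Akyang valley data:

```python
def cold_pool_capacity(dem, outlet_height, cell_area=1.0):
    """Volume of the 'vessel' below the blocked outlet level.
    dem: 2D grid of elevations (arbitrary consistent units)."""
    return sum(
        (outlet_height - z) * cell_area
        for row in dem for z in row
        if z < outlet_height  # only cells inside the pool contribute
    )

def shape_factor(accumulation_potential, capacity):
    """Ratio of catchment-wide cold-air accumulation potential
    to the maximum capacity of the cold pool."""
    return accumulation_potential / capacity

# Hypothetical 2x2 DEM with the outlet blocked at elevation 2.0:
dem = [[0.0, 1.0],
       [2.0, 3.0]]
capacity = cold_pool_capacity(dem, 2.0)
```

A shape factor well above 1 would indicate a catchment that drains far more cold air than its pool can hold, which is where the bottom-up filling assumption departs most strongly from free drainage.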