• Title/Summary/Keyword: average case error

163 search results (processing time: 0.024 seconds)

Estimating design floods based on bivariate rainfall frequency analysis and rainfall-runoff model (이변량 강우 빈도분석과 강우-유출 모형에 기반한 설계 홍수량 산정 방안)

  • Kim, Min Ji;Park, Kyung Woon;Kim, Seok-Woo;Kim, Tae-Woong
    • Journal of Korea Water Resources Association
    • /
    • v.55 no.10
    • /
    • pp.737-748
    • /
    • 2022
  • Due to the lack of flood data, water engineering practice calculates design floods using rainfall frequency analysis and a rainfall-runoff model. However, rainfall frequency analysis for an arbitrary duration does not reflect the regional characteristics of the duration and amount of storm events. This study proposed a practical method to calculate the design flood in a watershed considering the characteristics of storm events, based on bivariate rainfall frequency analysis. After extracting independent storm events for the Pyeongchang River basin and the upper Namhangang River basin, we performed bivariate rainfall frequency analysis to determine design storm events of various return periods, and calculated the design floods using the HEC-1 model. We compared the design floods based on the bivariate rainfall frequency analysis (DF_BRFA) with those estimated by flood frequency analysis (DF_FFA) and those estimated by HEC-1 with univariate rainfall frequency analysis (DF_URFA). In the case of the Pyeongchang River basin, except for the 100-year flood, the average error of the DF_BRFA was 11.6%, the closest to the DF_FFA. In the case of the Namhangang River basin, the average error of the DF_BRFA was about 10%, the most similar to the DF_FFA. As the return period increased, the DF_URFA became much larger than the DF_FFA, whereas the BRFA produced a smaller average error in the design flood than the URFA. When the proposed method is used to calculate the design flood in an ungauged watershed, the estimated design flood is expected to be close to the actual DF_FFA, so the design of hydrological structures and water resource plans can be carried out economically and reasonably.
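The bivariate idea above — treating storm depth and duration jointly instead of fixing an arbitrary duration — can be sketched with a copula-based joint return period. This is a minimal illustration only, not the paper's actual procedure: the Gumbel–Hougaard copula family, the θ value, and the marginal probabilities below are all assumptions for demonstration.

```python
import math

def gumbel_copula(u, v, theta=2.0):
    """Gumbel-Hougaard copula C(u, v); theta >= 1 controls dependence (theta = 1: independence)."""
    return math.exp(-((-math.log(u)) ** theta + (-math.log(v)) ** theta) ** (1.0 / theta))

def joint_return_period_and(u, v, mu=1.0, theta=2.0):
    """'AND' joint return period: both variables (e.g. depth and duration) exceeded.
    u, v: marginal non-exceedance probabilities; mu: mean interarrival time (years)."""
    return mu / (1.0 - u - v + gumbel_copula(u, v, theta))

# e.g. depth and duration each at their marginal 99th percentile
T = joint_return_period_and(0.99, 0.99, mu=1.0, theta=2.0)
```

Note that the joint "AND" return period is always at least as long as either marginal return period, which is why a bivariate design event differs from simply pairing two univariate quantiles.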

Evaluation of the feasibility of prostate SBRT by analyzing interfraction errors of internal organs (분할치료간(Interfraction) 내부 장기 움직임 오류 분석을 통한 전립선암의 전신정위적방사선치료(SBRT) 가능성 평가)

  • Hong, Soon Gi;Son, Sang Joon;Moon, Joon Gi;Kim, Bo Kyum;Lee, Je Hee
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.28 no.2
    • /
    • pp.179-186
    • /
    • 2016
  • Purpose : To determine, by analyzing MR images, whether a treatment plan for the rectum, bladder, and prostate — organs with large interfraction errors — satisfies dosimetric limits without an adaptive plan. Materials and Methods : This study was based on 5 prostate cancer patients who received IMRT (total dose: 70 Gy) using the ViewRay MRIdian system (ViewRay Inc., Cleveland, OH, USA). The treatment plans were made on the same CT images to compare plan quality with and without adaptive planning, using Eclipse (Ver 10.0.42, Varian, USA). After registering the 5 treatment MR images to the planning CT images to analyze interfraction organ changes, we measured the dose-volume histogram and the change of absolute volume of each organ by applying the first treatment plan to each image. Over 5 fractions, the PTV prescription was $V_{36.25\,Gy} \geq 95\%$. To confirm that the prescription dose satisfies the SBRT dose limits for prostate, we measured $V_{100\%}$, $V_{95\%}$, and $V_{90\%}$ for the CTV and $V_{100\%}$, $V_{90\%}$, $V_{80\%}$, and $V_{50\%}$ for the rectum and bladder. Results : All average dose values for the CTV, rectum, and bladder satisfied the dose limits, but when each treatment image was analyzed, there were cases in which an individual fraction exceeded a limit. Comparing the MR image of the first treatment plan with those of the subsequent fractions, the absolute volume differed by up to 1.72 times for the rectum and up to 2.0 times for the bladder. For the rectum, the planned values in the first treatment plan were under the dose limits, on average $V_{100\%}=0.32\%$, $V_{90\%}=3.33\%$, $V_{80\%}=7.71\%$, and $V_{50\%}=23.55\%$. The average absolute rectal volume in the first plan was 117.9 cc, whereas the average volume actually treated was 79.2 cc. For the CTV, 100% prescription dose coverage was not satisfied, even with a 5 mm PTV margin, because of the variation of rectal and bladder volume.
Conclusion : No value averaged over the five fractions exceeded the dosimetric limits. However, the dosimetric errors of the rectum and bladder in individual fractions were significant. Therefore, precise delivery is needed for prostate SBRT, and real-time tracking and adaptive planning are necessary to achieve it.


A Study on Pseudo-Range Correction Modeling in order to Improve DGNSS Accuracy (DGNSS 위치정확도 향상을 위한 PRC 보정정보 모델링에 관한 연구)

  • Sohn, Dong Hyo;Park, Kwan Dong
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.23 no.4
    • /
    • pp.43-48
    • /
    • 2015
  • We studied pseudo-range correction (PRC) modeling to improve differential GNSS (DGNSS) accuracy. The PRC is the range-correction information that provides improved positioning accuracy through the DGNSS technique; the digital correction signal is typically broadcast by ground-based transmitters. Positioning accuracy sometimes degrades because of the loss of PRC signals, radio interference, and similar causes. To prevent this degradation, we designed a PRC model through polynomial curve fitting and evaluated it by comparing two quantities: PRC estimates from the model parameters and observations from the reference station. In the case of GPS, the average is 0.1 m and the RMSE is 1.3 m; most GPS satellites have a bias error of less than ±1.0 m and an RMSE within 3.0 m. In the case of GLONASS, the average and the RMSE are 0.2 m and 2.6 m, respectively; most satellites have a bias error of less than ±2.0 m and an RMSE of less than 3.0 m. These results show that the value estimated by the model can be used effectively to maintain the accuracy of the user's position. However, further work is needed on the large differences between the two values at low elevation angles.
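A minimal sketch of the polynomial curve-fitting idea described above, bridging a gap in the broadcast correction with a model fitted to recent PRC samples. The coefficients, epoch spacing, and noise level of the synthetic series are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical PRC time series (epochs in seconds, corrections in metres)
t = np.arange(0, 300, 30, dtype=float)  # 10 epochs over 5 minutes
rng = np.random.default_rng(1)
prc = 2.0 - 0.004 * t + 1e-5 * t**2 + rng.normal(0, 0.05, t.size)

# Fit a 2nd-order polynomial to the observed corrections
coeffs = np.polyfit(t, prc, deg=2)

# Predict the PRC during a gap in the broadcast signal (e.g. t = 150 s)
prc_est = np.polyval(coeffs, 150.0)

# Model-vs-observation residual, analogous to the paper's RMSE comparison
residual_rms = np.sqrt(np.mean((np.polyval(coeffs, t) - prc) ** 2))
```

The user keeps applying `prc_est` while the correction link is down, which is the failure mode (signal loss, interference) the modeling is meant to cover.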

A study on traffic signal control at signalized intersections in VANETs (VANETs 환경에서 단일 교차로의 교통신호 제어방법에 관한 연구)

  • Chang, Hyeong-Jun;Park, Gwi-Tae
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.10 no.6
    • /
    • pp.108-117
    • /
    • 2011
  • The Seoul metropolitan government has operated a traffic signal control system named COSMOS since 2001. COSMOS uses degrees of saturation and congestion calculated from installed loop detectors. At present, the inductive loop detector is generally used to detect vehicles, but it is inconvenient and costly to maintain because it is buried in the road. In addition, the estimated queue length can be affected by errors in speed measurement, because it relies only on the speed of vehicles passing the detector. We propose a traffic signal control algorithm that enables smooth traffic flow at an intersection. The proposed algorithm assigns vehicles to groups for each lane and calculates traffic volume and degree of congestion using the traffic information of each group obtained through VANET (Vehicular Ad-hoc Network) inter-vehicle communication. It does not require the installation of additional devices such as cameras, sensors, or image-processing units. In this paper, the suggested algorithm is verified in terms of AJWT (Average Junction Waiting Time) and TQL (Total Queue Length) on a single-intersection model based on the GLD (Green Light District) simulator, and the result is better than the random control method and the best-first control method. If real-time control with VANETs becomes widespread, the traffic-control technology for signalized intersections using wireless communication proposed in this research will be highly useful.

Identifying Key Factors to Affect Taxi Travel Considering Spatial Dependence: A Case Study for Seoul (공간 상관성을 고려한 서울시 택시통행의 영향요인 분석)

  • Lee, Hyangsook;Kim, Ji yoon;Choo, Sangho;Jang, Jin young;Choi, Sung taek
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.18 no.5
    • /
    • pp.64-78
    • /
    • 2019
  • This paper explores key factors affecting taxi travel using global positioning system (GPS) data in Seoul, Korea, considering spatial dependence. We first analyzed travel characteristics of taxis such as average travel time, average travel distance, and the spatial distribution of taxi trips according to time of day and day of the week. We found that most taxi trips were generated during the morning peak (8 a.m. to 9 a.m.) and after midnight (until 1 a.m.) on weekdays. The average travel distance and travel time for taxi trips were 5.9 km and 13 minutes, respectively, implying that taxis are mainly used for short-distance travel and as an alternative to public transit after midnight in a large city. In addition, we identified through Moran's I test that taxi trips were spatially correlated at the traffic analysis zone (TAZ) level. Thus, spatial regression models (spatial-lag and spatial-error models) for taxi trips were developed, accounting for socio-demographics (the number of households, the number of elderly people, the female ratio of the total population, and the number of vehicles), transportation services (the number of subway stations and bus stops), and land-use characteristics (population density, employment density, and residential areas) as explanatory variables. The model results indicate that these variables are significantly associated with taxi trips.
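The spatial-dependence check mentioned above can be illustrated with a small global Moran's I computation. The four-zone contiguity matrix and trip values below are hypothetical toy data, not the paper's TAZ dataset; clustered values among neighbouring zones yield a positive I, which is what motivates the spatial-lag and spatial-error specifications.

```python
import numpy as np

def morans_i(x, W):
    """Global Moran's I for values x and spatial weight matrix W (zero diagonal)."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()            # deviations from the mean
    s0 = W.sum()                # sum of all weights
    return (x.size / s0) * (z @ W @ z) / (z @ z)

# Four zones on a line, rook contiguity; high values cluster next to high values
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
trips = [10.0, 9.0, 2.0, 1.0]   # taxi trips per zone (toy numbers)
I = morans_i(trips, W)          # positive -> spatial clustering
```

A significantly positive I (judged against its permutation or normal-approximation distribution, omitted here) is the signal that ordinary least squares would leave spatially correlated residuals.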

Evaluation of Every Other Day - Cone Beam Computed Tomography in Image Guided Radiation Therapy for Prostate Cancer (전립선암의 영상유도방사선치료 시 격일 콘빔 CT 적용의 유용성 평가)

  • Park, Byoung Suk;Ahn, Jong Ho;Kim, Jong Sik;Song, Ki Won
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.26 no.2
    • /
    • pp.289-295
    • /
    • 2014
  • Purpose : With cone beam computed tomography (CBCT) in image guided radiation therapy (IGRT), set-up error can be reduced, but the patient's exposure dose due to CBCT increases. In this study, we evaluate a scenario in which CBCT is performed every other day. Materials and Methods : Among prostate cancer patients, 9 who received intensity modulated radiation therapy (IMRT) with CBCT-based IGRT were analyzed. Based on the correction values obtained by analyzing set-up errors with daily CBCT during actual treatment, we created a scenario that performs CBCT every other day. After applying the set-up error values of the days without CBCT in the scenario to the treatment planning system (Pinnacle 9.2, Philips, USA) as shifts from the treatment isocenter, we established a re-treatment plan under the same conditions as the actual treatment. Based on this, the dose distributions of normal organs and the planning target volume (PTV) were compared and analyzed. Results : In the every-other-day CBCT scenario, relative to daily CBCT, average differences of X-axis: 0.2 ± 0.73 mm, Y-axis: 0.1 ± 0.58 mm, and Z-axis: -1.3 ± 1.17 mm were found. Applying these to the treatment planning and evaluating the dose distribution of the re-treatment plan, differences of Dmean: -0.17 Gy and D99%: -0.71 Gy for the PTV were found in comparison with daily CBCT. For normal organs, differences of V66: 1.55% for the rectal wall and V66: -0.76% for the bladder were found. Conclusion : Performing CBCT every other day can reduce exposure dose and additional treatment time, and since the differences in the dose distributions of normal organs and the PTV are not large, its application can be considered depending on the condition of the patient.

A Case Study on Forecasting Inbound Calls of Motor Insurance Company Using Interactive Data Mining Technique (대화식 데이터 마이닝 기법을 활용한 자동차 보험사의 인입 콜량 예측 사례)

  • Baek, Woong;Kim, Nam-Gyu
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.99-120
    • /
    • 2010
  • Due to the widespread customer use of non-face-to-face services, there have been many attempts to improve customer satisfaction using the huge amounts of data accumulated through non-face-to-face channels. A call center is regarded as one of the most representative non-face-to-face channels. It is therefore important that a call center has enough agents to offer a high level of customer satisfaction; however, employing too many agents increases a call center's operational costs through labor costs. Predicting and calculating the appropriate size of a call center's human resources is thus one of the most critical success factors of call center management. For this reason, most call centers currently establish a WFM (Work Force Management) department to estimate the appropriate number of agents and direct much effort toward predicting the volume of inbound calls. In real-world applications, inbound call prediction is usually performed based on the intuition and experience of a domain expert. In other words, a domain expert usually predicts the volume of calls by calculating the average calls over some period and adjusting the average according to his/her subjective estimation. However, this kind of approach has radical limitations in that the result of the prediction may be strongly affected by the expert's personal experience and competence. It is often the case that one domain expert predicts inbound calls quite differently from another if the two experts have mutually different opinions on selecting influential variables and priorities among the variables. Moreover, it is almost impossible to logically clarify the process of an expert's subjective prediction. Currently, to overcome the limitations of subjective call prediction, most call centers are adopting a WFMS (Workforce Management System) package in which experts' best practices are systemized.
With a WFMS, a user can predict the volume of calls by calculating the average calls of each day of the week, excluding some eventful days. However, a WFMS costs too much capital during the early stage of system establishment. Moreover, it is hard to reflect new information onto the system when factors affecting the amount of calls have changed. In this paper, we attempt to devise a new model for predicting inbound calls that is not only based on theoretical background but also easily applicable to real-world applications. Our model was mainly developed with the interactive decision tree technique, one of the most popular techniques in data mining. Therefore, we expect that our model can predict inbound calls automatically based on historical data and can utilize the expert's domain knowledge during the process of tree construction. To analyze the accuracy of our model, we performed intensive experiments on a real case of one of the largest car insurance companies in Korea. In the case study, the prediction accuracy of the two devised models and a traditional WFMS are analyzed with respect to various allowable error rates. The experiments reveal that our two data mining-based models outperform the WFMS in predicting the amount of accident calls and fault calls in most experimental situations examined.
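The WFMS-style baseline described above — averaging calls per day of the week while excluding eventful days — can be sketched in a few lines. The call history is synthetic, and the weekday/weekend volumes and the 10% eventful-day rate are assumptions for illustration, not figures from the case study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 210  # ~30 weeks of daily observations

# Synthetic history: weekday index 0-6, eventful-day flag, inbound call counts
days = rng.integers(0, 7, n)
eventful = rng.random(n) < 0.1                     # ~10% eventful days (assumed)
calls = np.where(days < 5, 900.0, 400.0) + rng.normal(0, 30, n)
calls[eventful] += 500                             # eventful days inflate volume

# Baseline prediction: average calls per weekday, excluding eventful days
baseline = {d: calls[(days == d) & ~eventful].mean() for d in range(7)}
pred_monday = baseline[0]
```

The interactive decision-tree model in the paper generalizes this: instead of a fixed split on day-of-week, the tree chooses splits from the data while letting the expert steer variable selection during construction.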

Improvement of GPS positioning accuracy by static post-processing method (정적 후처리방식에 의한 GPS의 측위정도 개선)

  • 김민선;신현옥
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.39 no.4
    • /
    • pp.251-261
    • /
    • 2003
  • To measure GPS position accuracy and its distribution according to the length of the baseline, observations of 30 minutes to 24 hours at fixed locations were conducted with two GPS receivers (L1, 12 channels) from May 29 to June 2, 2002. The GPS data received at the reference station, the rover station, and the continuously operating GPS station run by the National Geography Institute in Korea were processed by kinematic and static methods with post-processing software. The results obtained are summarized as follows: 1. The number of satellites that could be observed continuously for more than six hours was 16, and most of these satellites were positioned in the east-west direction on May 31, 2002. The number of satellites observed and the geometric dilution of precision (GDOP), determined as 10-minute averages over the day, were 8 and 3.89, respectively. 2. The average GPS positions both before and after post-processing were shifted to the south and west (standalone: 1.17 m, post-processing: 0.43 m). The twice-distance root mean square (2drms) measured in standalone mode was 6.65 m. The 2drms could be reduced to 33.8% (standard deviation σ=17.2) and 5.3% (σ=2.2) of the standalone value by the kinematic and static post-processing methods, respectively. 3. The relationship between the length of the baseline x (km) and the 2drms y (m) obtained by the static post-processing method was y=0.0016x+0.006 $(R^2=0.87)$. For positioning with the static post-processing method using this GPS receiver, a 2drms within 20 cm was possible when the length of the baseline was less than 100 km and the observation time was more than 30 minutes.
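The fitted relation in item 3 can be evaluated directly. This tiny helper just implements y = 0.0016x + 0.006 as reported above and checks the sub-20 cm claim for a 100 km baseline; it adds nothing beyond the paper's own regression.

```python
def predicted_2drms(baseline_km):
    """2drms (m) as a function of baseline length (km), from the fitted relation
    y = 0.0016x + 0.006 reported for the static post-processing method."""
    return 0.0016 * baseline_km + 0.006

# At the 100 km limit quoted in the abstract, the predicted 2drms is 0.166 m,
# consistent with the 'within 20 cm' conclusion.
y_100km = predicted_2drms(100.0)
```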

Statics corrections for shallow seismic refraction data (천부 굴절법 탄성파 탐사 자료의 정보정)

  • Palmer Derecke;Nikrouz Ramin;Spyrou Andreur
    • Geophysics and Geophysical Exploration
    • /
    • v.8 no.1
    • /
    • pp.7-17
    • /
    • 2005
  • The determination of seismic velocities in refractors for near-surface seismic refraction investigations is an ill-posed problem. Small variations in the computed time parameters can result in quite large lateral variations in the derived velocities, which are often artefacts of the inversion algorithms. Such artefacts are usually not recognized or corrected with forward modelling. Therefore, if detailed refractor models are sought with model based inversion, then detailed starting models are required. The usual source of artefacts in seismic velocities is irregular refractors. Under most circumstances, the variable migration of the generalized reciprocal method (GRM) is able to accommodate irregular interfaces and generate detailed starting models of the refractor. However, where the very-near-surface environment of the Earth is also irregular, the efficacy of the GRM is reduced, and weathering corrections can be necessary. Standard methods for correcting for surface irregularities are usually not practical where the very-near-surface irregularities are of limited lateral extent. In such circumstances, the GRM smoothing statics method (SSM) is a simple and robust approach, which can facilitate more-accurate estimates of refractor velocities. The GRM SSM generates a smoothing 'statics' correction by subtracting an average of the time-depths computed with a range of XY values from the time-depths computed with a zero XY value (where the XY value is the separation between the receivers used to compute the time-depth). The time-depths to the deeper target refractors do not vary greatly with varying XY values, and therefore an average is much the same as the optimum value. However, the time-depths for the very-near-surface irregularities migrate laterally with increasing XY values and they are substantially reduced with the averaging process. 
As a result, the time-depth profile averaged over a range of XY values is effectively corrected for the near-surface irregularities. In addition, the time-depths computed with a zero XY value are the sum of both the near-surface effects and the time-depths to the target refractor. Therefore, their subtraction generates an approximate 'statics' correction, which in turn is subtracted from the traveltimes. The GRM SSM is essentially a smoothing procedure rather than a deterministic weathering correction approach, and it is most effective with near-surface irregularities of quite limited lateral extent. Model and case studies demonstrate that the GRM SSM substantially improves the reliability of determining detailed seismic velocities in irregular refractors.
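The subtraction step described above can be sketched as a per-receiver correction. The five-receiver section and anomaly values below are toy numbers, and modelling the non-zero-XY sections as fully smoothed (so the anomaly survives only at XY = 0) is a deliberate simplification of the lateral-migration behaviour the text describes.

```python
import numpy as np

def grm_ssm_correction(timedepths_by_xy, xy_values):
    """GRM smoothing-statics correction per receiver: the time-depths computed
    with XY = 0 minus the average of the time-depths over the non-zero XY values."""
    t = np.asarray(timedepths_by_xy, dtype=float)   # shape (n_xy, n_receivers)
    zero = list(xy_values).index(0)
    nonzero = np.delete(t, zero, axis=0)
    return t[zero] - nonzero.mean(axis=0)

# Toy section: smooth target-refractor time-depths plus a narrow near-surface anomaly
target = np.full(5, 0.080)                          # time-depths to target (s)
zero_xy = target + np.array([0, 0, 0.010, 0, 0])    # anomaly present at XY = 0
larger_xy = [target, target]                        # assumed smoothed at larger XY
corr = grm_ssm_correction([zero_xy] + larger_xy, [0, 10, 20])
```

The correction is large only at the receiver over the anomaly, so subtracting it from the traveltimes removes the near-surface effect while leaving the target time-depths intact.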

The Effect of Equatorial Spread F on Relative Orbit Determination of GRACE Using Differenced GPS Observations (DGPS기반 GRACE의 상대궤도결정과 Equatorial Spread F의 영향)

  • Roh, Kyoung-Min;Luehr, Hermann;Park, Sang-Young;Cho, Jung-Ho
    • Journal of Astronomy and Space Sciences
    • /
    • v.26 no.4
    • /
    • pp.499-510
    • /
    • 2009
  • In this paper, the relative orbit of low Earth orbit satellites is determined using only GPS measurements, and the effects of equatorial spread F (ESF), one of the biggest ionospheric irregularities, are investigated. First, a relative orbit determination process is constructed based on doubly differenced GPS observations. To assess its performance, the relative orbit of the two GRACE satellites is estimated for one month in 2004 when no ESF was observed. The root mean square of the achieved baselines, compared with those from the K-Band Ranging sensor, is about 2~3 mm, and on average 95% of the ambiguities are resolved. On this basis, the relative orbit is estimated for two weeks in each of two different years: 2003, when many ESF events occurred, and 2004, when only a few occurred. For 2003, the averaged baseline error over the two weeks is about 15 mm, which is about 4 times larger than in the 2004 case (3.6 mm). Ionospheric conditions derived from the K-Band Ranging sensor also show that more equatorial spread F occurred in 2003 than in 2004. Investigation of the raw observations and the screening process revealed that the ionospheric irregularities caused by equatorial spread F had significant effects on the GPS signal, such as signal loss or enhanced ionospheric error. Accordingly, relative orbit determination using GPS observations should consider the effect of equatorial spread F and adjust the orbit determination strategy, especially at times of solar maximum.