• Title/Summary/Keyword: underestimation

Search results: 350 (processing time 0.02 seconds)

The Calculation Method of Shielding Coefficient of Neutral Line against an Induced Voltage by an Aerial Power Distribution Line Reflecting the Principle of Earth Return Current (가공 배전선의 전자유도전압에 대하여 대지 귀로전류 원리를 반영한 중성선 차폐계수 계산 방법)

  • Lee, Sangmu
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.17 no.7
    • /
    • pp.86-91
    • /
    • 2016
  • To solve the problem of excessive error caused by using a single value for the shielding effect of the neutral line of a power distribution line when calculating the voltage it induces in a telecommunication line, this paper employs a general expression, previously developed to reflect the mechanism of voltage induction by a distribution line with multiple grounds, to represent the relationship between the leakage current rates at each ground pole. In this way, the formula for the shielding effect of the neutral line can be factorized against the unbalanced current flowing in the neutral line, which is the root current of induction. The resulting shielding coefficient of the neutral line is not constant but varies across the range of induced voltages generated along the whole distribution line. The calculation method developed herein reduces the error rate to one tenth of that of the existing method in cases of overestimation, and increases the result by 14% in cases of underestimation.
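
The per-segment idea the abstract describes can be illustrated with a minimal sketch (all function names, currents, and impedance values below are hypothetical illustrations, not the paper's formulas): a shielding coefficient that varies along the line as leakage at each ground pole changes the neutral current.

```python
# Hypothetical sketch: a per-segment neutral-line shielding coefficient
# instead of a single fixed value. 0 = perfect shielding, 1 = no shielding.

def shielding_coefficient(i_neutral, i_unbalanced):
    """k = 1 - I_neutral / I_unbalanced for one coupled segment."""
    return 1.0 - i_neutral / i_unbalanced

def induced_voltage(i_unbalanced, z_mutual, i_neutral):
    """Induced voltage in the telecom line for one segment (V)."""
    return abs(i_unbalanced) * z_mutual * shielding_coefficient(i_neutral, i_unbalanced)

# Leakage at successive ground poles reduces the neutral return current,
# so the shielding coefficient (and induced voltage) varies per segment.
segment_neutral_currents = [9.0, 7.5, 6.0]   # A, after successive leakage
i_unbal = 10.0                                # A, unbalanced (inducing) current
z_mutual = 0.05                               # ohm per segment (illustrative)

voltages = [induced_voltage(i_unbal, z_mutual, i_n)
            for i_n in segment_neutral_currents]
```

With these made-up numbers the coefficient grows from 0.1 to 0.4 along the line, which is the qualitative behavior the abstract attributes to its factorized formula.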

Application of Rainfall Runoff Model with Rainfall Uncertainty (강우자료의 불확실성을 고려한 강우 유출 모형의 적용)

  • Lee, Hyo-Sang;Jeon, Min-Woo;Balin, Daniela;Rode, Michael
    • Journal of Korea Water Resources Association
    • /
    • v.42 no.10
    • /
    • pp.773-783
    • /
    • 2009
  • The effects of rainfall input uncertainty on predictions of stream flow are studied using an extended GLUE (Generalized Likelihood Uncertainty Estimation) approach. The uncertainty in the rainfall data is implemented through systematic/non-systematic rainfall measurement analysis in the Weida catchment, Germany. The PDM (Probability Distribution Model) rainfall-runoff model is selected for hydrological representation of the catchment. Using a general correction procedure and the DUE (Data Uncertainty Engine), feasible rainfall time series are generated. These series are applied to the PDM within the MC (Monte Carlo) and GLUE methods; posterior distributions of the model parameters are examined, and behavioural model parameters are selected for a simplified GLUE prediction of stream flow. All predictions are combined into an ensemble prediction, whose 90th percentile is used to show the effects of uncertainty in the input data and model parameters. The results show acceptable performance in all flow regimes, except for underestimation of the peak flows. These results are not definitive proof of the effects of rainfall uncertainty on parameter estimation; however, the extended GLUE approach of this study is a potential method for including the major uncertainties in rainfall-runoff modelling.
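
The GLUE workflow summarized above (sample parameters, score each simulation with a likelihood measure, keep "behavioural" sets, build an ensemble band) can be sketched in miniature. A toy linear-reservoir model stands in for PDM, and all data, thresholds, and parameter ranges are invented for illustration:

```python
import random

random.seed(0)

observed_flow = [1.0, 2.0, 4.0, 3.0, 2.0]   # toy observations
rainfall      = [0.0, 5.0, 10.0, 2.0, 0.0]  # toy rainfall series

def linear_reservoir(rain, k):
    """Toy rainfall-runoff model: storage drains at fraction k per step."""
    storage, flows = 1.0, []
    for r in rain:
        storage += r
        q = k * storage
        storage -= q
        flows.append(q)
    return flows

def nash_sutcliffe(obs, sim):
    """NSE likelihood measure: 1 is perfect, <= 0 is 'no better than mean'."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

# Monte Carlo sampling of the parameter; keep behavioural sets (NSE > 0).
samples = [random.uniform(0.05, 0.95) for _ in range(2000)]
behavioural = [(k, nash_sutcliffe(observed_flow, linear_reservoir(rainfall, k)))
               for k in samples]
behavioural = [(k, nse) for k, nse in behavioural if nse > 0.0]

# Ensemble prediction band from behavioural simulations: 90th percentile
# of the predicted flow at the rainfall peak.
ensemble = [linear_reservoir(rainfall, k) for k, _ in behavioural]
peak_predictions = sorted(sim[2] for sim in ensemble)
p90 = peak_predictions[int(0.9 * (len(peak_predictions) - 1))]
```

In the extended GLUE of the paper, the rainfall series itself is also resampled (via DUE), so the behavioural set reflects input as well as parameter uncertainty.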

Application of the Radar Rainfall Estimates Using the Hybrid Scan Reflectivity Technique to the Hydrologic Model (Hybrid Scan Reflectivity 기법을 이용한 레이더 강우량의 수문모형 적용)

  • Lee, Jae-Kyoung;Lee, Min-Ho;Suk, Mi-Kyung;Park, Hye-Sook
    • Journal of Korea Water Resources Association
    • /
    • v.47 no.10
    • /
    • pp.867-878
    • /
    • 2014
  • Due to the nature of weather radar, blank areas occur because of impediments to observation such as ground clutter, and radar beam blockages have resulted in underestimated rainfall amounts. To overcome these limitations, this study developed the Hybrid Scan Reflectivity (HSR) technique and compared the HSR results with existing methods. As a result, the HSR technique was able to estimate rainfall in areas from which no reflectivity information was observable using existing methods. When estimating rainfall by reflectivity scan technique and in beam-blockage versus non-beam-blockage areas, the HSR accuracy was superior. Furthermore, the rainfall amounts derived from each method were input to the HEC-HMS to examine the accuracy of flood simulations. Compared with the RAR calculation system and the M-P relation, the accuracy of the results using the HSR technique improved, on average, by 7% and 10% (based on correlation coefficients) and by 18% and 34% (based on Nash-Sutcliffe Efficiency), respectively. Therefore, it is advised that the HSR technique be utilized in the hydrology field to estimate flood discharge more accurately.
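
The core hybrid-scan idea (take, for each range gate, the reflectivity from the lowest elevation angle that is not beam-blocked) and the standard Marshall-Palmer Z-R conversion (Z = 200 R^1.6) mentioned as the "M-P relation" can be sketched as follows; the blockage map and reflectivity values are hypothetical:

```python
# Illustrative hybrid-scan selection plus Marshall-Palmer rainfall estimation.

def hybrid_scan_dbz(dbz_by_elevation, blocked_by_elevation):
    """Per gate, return reflectivity (dBZ) from the lowest elevation angle
    that is not beam-blocked; None if every elevation is blocked."""
    for dbz, blocked in zip(dbz_by_elevation, blocked_by_elevation):
        if not blocked:
            return dbz
    return None

def rain_rate(dbz, a=200.0, b=1.6):
    """Invert the Marshall-Palmer relation Z = a * R**b, with
    dBZ = 10 * log10(Z); returns rain rate R in mm/h."""
    z = 10.0 ** (dbz / 10.0)
    return (z / a) ** (1.0 / b)

# One gate: the lowest tilt is blocked by terrain, the second tilt is clear,
# so the hybrid scan uses the 35 dBZ measurement instead of reporting nothing.
dbz = hybrid_scan_dbz([38.0, 35.0, 30.0], [True, False, False])
r = rain_rate(dbz)
```

A plain lowest-tilt scan would return no echo for this gate, which is exactly the blind-area underestimation the HSR technique is designed to remove.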

Analyzing the Characteristics of Sea Ice Initial Conditions for a Global Ocean and Sea Ice Prediction System, the NEMO-CICE/NEMOVAR over the Arctic Region (전지구 해양·해빙예측시스템 NEMO-CICE/NEMOVAR의 북극 영역 해빙초기조건 특성 분석)

  • Ahn, Joong-Bae;Lee, Su-Bong
    • Journal of the Korean Earth Science Society
    • /
    • v.36 no.1
    • /
    • pp.82-89
    • /
    • 2015
  • In this study, the characteristics of the sea ice initial conditions generated from a global ocean and sea ice prediction system, the Nucleus for European Modeling of the Ocean (NEMO) - Los Alamos Sea Ice Model (CICE)/NEMOVAR, were analyzed for the period June 2013 to May 2014 over the Arctic region. For this purpose, observed and reanalyzed data were compared with the sea ice initial conditions. The results indicated that the variability of the monthly sea ice extent and thickness in the model initial conditions was well represented compared to the observations, while the extent and thickness of Arctic sea ice in the initial data were narrower and thinner than those in the reanalysis and observations for the period. The narrower sea ice extent in the model initial conditions appears to be due to the initial sea ice concentration at the boundary of the ice pack being about 20 percent less than in the reanalysis data. The thinner sea ice arises because the model initial conditions underestimate Arctic sea ice thickness by about 60 cm in the part of the Arctic Ocean adjacent to Greenland and the Arctic archipelago, where thick sea ice appears all year round.
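
The link the abstract draws between concentration at the ice edge and diagnosed extent follows from how sea ice extent is conventionally defined: the total area of grid cells whose ice concentration is at least 15%. A toy example (grid, areas, and concentrations invented, not the NEMO-CICE fields):

```python
# Illustrative: extent = sum of cell areas with concentration >= 15%.

def sea_ice_extent(concentration, cell_area_km2, threshold=0.15):
    """Total area (km^2) of cells at or above the concentration threshold."""
    return sum(a for c, a in zip(concentration, cell_area_km2) if c >= threshold)

# Toy comparison: the initial condition is ~20 percentage points lower at
# the ice edge than the reanalysis, pushing edge cells below the threshold.
areas      = [100.0, 100.0, 100.0, 100.0]
reanalysis = [0.90, 0.60, 0.25, 0.05]
initial    = [0.90, 0.60, 0.05, 0.00]

extent_rean = sea_ice_extent(reanalysis, areas)
extent_init = sea_ice_extent(initial, areas)
```

Only the marginal cells change, yet the diagnosed extent shrinks by a full cell's area, which is the mechanism behind the "narrower" initial extent described above.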

Analysis of the Fog Detection Algorithm of DCD Method with SST and CALIPSO Data (SST와 CALIPSO 자료를 이용한 DCD 방법으로 정의된 안개화소 분석)

  • Shin, Daegeun;Park, Hyungmin;Kim, Jae Hwan
    • Atmosphere
    • /
    • v.23 no.4
    • /
    • pp.471-483
    • /
    • 2013
  • Nighttime sea fog detection from satellite is very difficult owing to the limitations of visible channels at night. Currently, the most widely used method for the detection is the Dual Channel Difference (DCD) method, based on the Brightness Temperature Difference (BTD) between the 3.7 and 11 μm channels. However, this method has difficulty distinguishing between fog and low cloud, and sometimes misjudges middle/high cloud, as well as clear scenes, as fog. Using CALIPSO lidar profile measurements, we analyzed the intrinsic problems in detecting nighttime sea fog with various satellite remote sensing algorithms and suggested directions for their improvement. In the comparison with CALIPSO measurements for May-July 2011, the DCD method excessively overestimates foggy pixels (2542 pixels). Among them, only 524 pixels are real foggy pixels, while 331 pixels are clear and 1687 pixels are other types of cloud. The 524 real foggy pixels account for 70% of the 749 foggy pixels identified by CALIPSO. Our proposed new algorithm detects foggy pixels by comparing the difference between the cloud top temperature and the underlying sea surface temperature from assimilated data, in combination with the DCD method. We used two types of cloud top temperature, obtained from the 11 μm brightness temperature (B_S1) and from the operational COMS algorithm (B_S2). The 1794 foggy pixels detected by B_S1 and the 1490 detected by B_S2 significantly reduce the overestimation of the DCD method. However, of these, only 477 and 446 pixels were found to be real foggy pixels, while 329 and 264 pixels were clear and 989 and 780 pixels were other types of cloud, for B_S1 and B_S2 respectively. Analysis of the operational COMS fog detection algorithm reveals that its cloud screening process is strictly enforced, resulting in an underestimation of foggy pixels: of its 538 detected foggy pixels, only 187 are real foggy pixels, while 61 are clear and 290 are other types of cloud. Our analysis suggests that there is no winner among nighttime sea fog detection algorithms, only losers, because real foggy pixels account for less than 30% of the foggy pixels declared by every algorithm. This overwhelming evidence reveals that current nighttime sea fog algorithms provide a great deal of misjudged information, mostly originating from the difficulty of distinguishing clear from cloudy scenes, and fog from other types of cloud. Therefore, in-depth research is urgently needed to reduce the large errors in nighttime sea fog detection from satellite.
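
The two tests the abstract combines can be sketched directly; the thresholds and temperatures below are hypothetical placeholders, not the operational COMS values:

```python
# Illustrative sketch of the DCD test and the added cloud-top/SST test.

def dcd_fog(bt_37, bt_11, threshold=-2.0):
    """Dual Channel Difference: at night, fog and low stratus tend to give
    a negative 3.7 um minus 11 um brightness temperature difference (K)."""
    return (bt_37 - bt_11) < threshold

def fog_with_sst(bt_37, bt_11, cloud_top_temp, sst, max_gap=2.5):
    """Combined test: pass DCD AND require the cloud top temperature to be
    close to the sea surface temperature (fog hugs the surface, so colder
    middle/high cloud tops are rejected)."""
    return dcd_fog(bt_37, bt_11) and abs(cloud_top_temp - sst) <= max_gap

sst = 288.0  # K, assimilated sea surface temperature (hypothetical)

# Both pixels pass the DCD test, but only the fog pixel passes the SST check.
fog_pixel   = fog_with_sst(bt_37=283.0, bt_11=287.0, cloud_top_temp=287.0, sst=sst)
cloud_pixel = fog_with_sst(bt_37=271.0, bt_11=275.0, cloud_top_temp=270.0, sst=sst)
```

This is the mechanism by which the B_S1/B_S2 variants trim the DCD method's overestimation: cloud with a top well colder than the underlying sea surface is rejected even when its BTD looks fog-like.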

Quality of Working Life (직장생활에 대한 새로운 인식)

  • 김영환
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.4 no.4
    • /
    • pp.43-61
    • /
    • 1981
  • Interest in the quality of working life is spreading rapidly, and the phrase has entered the popular vocabulary. That this should be so is probably due in large measure to changes in the values of society, nowadays accelerated as never before by the concerns and demands of younger people. But however topical the concept has become, there is very little agreement on its definition. Rather, the term appears to have become a kind of depository for a variety of sometimes contradictory meanings attributed to it by different groups. A list of all the elements it is held to cover would include availability and security of employment, adequate income, safe and pleasant physical working conditions, reasonable hours of work, equitable treatment and democracy in the workplace, the possibility of self-development, control over one's work, a sense of pride in craftsmanship or product, wider career choices, and flexibility in matters such as the time of starting work, the number of working days in the week, job sharing and so on: altogether an array that encompasses a variety of traditional aspirations and many new ones reflecting entry into the post-industrial era. The term "quality of working life" was introduced by Professor Louis E. Davis and his colleagues in the late 1960s to call attention to the prevailing, and needlessly poor, quality of life at the workplace. In their usage it referred to the quality of the relationship between the worker and his working environment as a whole, and was intended to emphasize the human dimension so often forgotten among the technical and economic factors of job design. Treating workers as if they were elements or cogs in the production process is not only an affront to the dignity of human life, but also a serious underestimation of the human capabilities needed to operate more advanced technologies. When tasks demand high levels of vigilance, technical problem-solving skills, self-initiated behavior, and social and communication skills, it is imperative that our concepts of man be of requisite complexity. Our aim is not just to protect workers' lives and health but to give them an informed interest in their jobs and the opportunity to express their views and exercise control over everything that affects their working life. Certainly, so far as his work is concerned, a man must feel better protected, but he must also have a greater feeling of freedom and responsibility. Something parallel but wholly different is happening in Europe: industrial democracy. What has happened in Europe has been discrete, fixed, finalized, and legalized. Developing countries driving toward industrialization, such as the R.O.K., will have to bear in mind this human complexity in designing work and its environment. Increasing attention is needed to the contradiction between autocratic rule at the workplace and democratic rights in society.


Practical Design of the Sandmat Considering Consolidation Settlement Properties (연약지반의 침하특성을 고려한 샌드매트의 실용적 설계를 위한 고찰)

  • Lee, Bongjik;Kwon, Youngcheul;Lee, Jongkyu
    • Journal of the Korean GEO-environmental Society
    • /
    • v.8 no.5
    • /
    • pp.31-38
    • /
    • 2007
  • The practical design method for a sandmat uses the drain length, rate of consolidation settlement, and permeability of the sand as its major design factors, and on the basis of this design process the sandmat has been installed beneath the embankment with a uniform thickness. However, several authors have pointed out that differential settlement between the center and the end of the embankment can lead to underestimation of the required sandmat thickness and to delayed drainage. In this study, therefore, the effect of differential settlement on sandmat thickness and drainage delay was analyzed through numerical analysis of an embankment. The results show that the substantial sandmat thickness becomes small, and the possibility of delayed drainage was confirmed, because of the differential settlement that develops between the center and ends of the embankment. As a countermeasure, the applicability of a mound-type sandmat was also investigated by the numerical method. It can be concluded that the mound-type sandmat maintains the designated substantial thickness throughout the consolidation process and is a useful means of maintaining drainage capacity; it is especially effective for construction sites, such as embankments, where differential settlement can occur. Furthermore, it must be designed on the basis of accurate prediction of the consolidation settlement, as well as its rate, the drain length, and the permeability of the sand.


Transfer Impedance of Trip Chain with a Railway Mode Embedded - Using Seoul Metropolitan Transportation Card Data - (철도수단이 내재된 통행사슬의 환승저항 추정방안 - 수도권 교통카드자료를 활용하여 -)

  • Lee, Mee young;Sohn, Jhieon
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.36 no.6
    • /
    • pp.1083-1091
    • /
    • 2016
  • This research uses public transportation card data to analyze the inter-regional transfer times, transfer frequencies, and transfer resistance that passengers experience during transit among the metropolitan public transportation modes. Currently, mode transfers between bus and rail are recorded up to five times during one transit movement by the Trip Chain, facilitating greater comprehension of intermodal movements. However, the lack of information on what happens during these transfers leads to an underestimation of transfer resistance in the Trip Chain. A path choice model that reflects passenger movements during transit activities is therefore constructed, which attains explanatory power on transfer resistance through its inclusion of transfer times and frequencies. The methodology adopted in this research is first to conceptualize metropolitan public transportation transfer and, for cases in which the mode transfers include the city rail, to newly conceptualize transfer resistance using transportation card data. The city-rail path choice model within the Trip Chain is then constructed, with transfer time and frequency used to re-evaluate transfer resistance. Further, in order to align the administrative-level small-zone coordinates of bus and city-rail stations with state- and regional-level mid-zone coordinates, the big-node method is utilized. Finally, case studies on trip chains involving at least one transfer onto the city rail are used to determine the validity of the results obtained.
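
The underestimation the abstract describes is easy to see in a generalized-cost sketch of transfer resistance (the weights and minutes below are illustrative placeholders, not the paper's estimated coefficients): in-vehicle time plus weighted transfer time plus a per-transfer penalty.

```python
# Hypothetical generalized-cost form of "transfer resistance".

def generalized_cost(in_vehicle_min, transfer_min, n_transfers,
                     w_transfer_time=2.0, transfer_penalty_min=5.0):
    """Path cost in equivalent minutes: transfer walk/wait time is typically
    weighted more heavily than in-vehicle time, and each transfer adds a
    fixed inconvenience penalty."""
    return (in_vehicle_min
            + w_transfer_time * transfer_min
            + transfer_penalty_min * n_transfers)

# Card data alone records boardings but not the transfer movements between
# them; ignoring those movements understates the path cost.
naive   = generalized_cost(40.0, 0.0, 0)   # transfers invisible
with_tr = generalized_cost(40.0, 6.0, 2)   # 6 min of transfer time, 2 transfers
```

Here the two-transfer path costs 55% more than the naive reading of the same card record, which is the kind of gap the path choice model is built to recover.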

Dosimetric Validation of the Acuros XB Advanced Dose Calculation Algorithm for Volumetric Modulated Arc Therapy Plans

  • Park, So-Yeon;Park, Jong Min;Choi, Chang Heon;Chun, Minsoo;Kim, Jung-in
    • Progress in Medical Physics
    • /
    • v.27 no.4
    • /
    • pp.180-188
    • /
    • 2016
  • The Acuros XB advanced dose calculation algorithm (AXB, Varian Medical Systems, Palo Alto, CA) has recently been released, offering advantages in both speed and accuracy of dose calculation. For clinical use, it is important to investigate the dosimetric performance of AXB compared to the algorithm of the previous version, the Anisotropic Analytical Algorithm (AAA, Varian Medical Systems, Palo Alto, CA). Ten volumetric modulated arc therapy (VMAT) plans were included for each of the following cases: head and neck (H&N), prostate, spine, and lung; the spine and lung cases were treated with the stereotactic body radiation therapy (SBRT) technique. For all cases, the dose distributions were calculated using AAA and the two dose reporting modes of AXB (dose-to-water, AXB_w, and dose-to-medium, AXB_m) with the same plan parameters. For dosimetric evaluation, dose-volumetric parameters were calculated for each planning target volume (PTV) and the normal organs of interest, and the differences between AAA and AXB were assessed statistically with a paired t-test. As a general trend, AXB_w and AXB_m showed dose underestimation compared with AAA, which did not exceed -3.5% and -4.5%, respectively. The maximum dose of the PTV calculated by AXB_w and AXB_m tended to be overestimated, with relative dose differences ranging from 1.6% to 4.6% for all cases. The absolute mean values of the relative dose differences were 1.1±1.2% between AAA and AXB_w, and 2.0±1.2% between AAA and AXB_m. For almost all dose-volumetric parameters of the PTV the relative dose differences were statistically significant, whereas for the normal tissues they were not. Both AXB_w and AXB_m tended to underestimate dose for the PTV and normal tissues compared to AAA, and of the two AXB dose reporting modes, the distributions calculated by AXB_w were the closer to those of AAA.
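
The relative dose difference metric quoted throughout the abstract is a simple percent difference against the reference algorithm; the dose values below are invented for illustration, not the study's measurements.

```python
# Sketch of the relative dose difference used to compare AAA and AXB.

def relative_dose_difference(d_test, d_reference):
    """Percent difference of a dose-volumetric parameter vs. the reference;
    negative values mean the test algorithm underestimates the dose."""
    return 100.0 * (d_test - d_reference) / d_reference

d_aaa   = 60.0   # Gy, AAA-calculated parameter (hypothetical)
d_axb_w = 58.8   # Gy, AXB dose-to-water (hypothetical)
d_axb_m = 58.2   # Gy, AXB dose-to-medium (hypothetical)

diff_w = relative_dose_difference(d_axb_w, d_aaa)   # negative: underestimation
diff_m = relative_dose_difference(d_axb_m, d_aaa)
```

With these placeholder values AXB_w sits closer to AAA than AXB_m does, mirroring the 1.1% versus 2.0% mean differences reported above.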

Accuracy of Endoscopic Ultrasonography for Determination of Tumor Invasion Depth in Gastric Cancer

  • Razavi, Seyed Mohsen;Khodadost, Mahmoud;Sohrabi, Masoudreza;Keshavarzi, Azam;Zamani, Farhad;Rakhshani, Naser;Ameli, Mitra;Sadeghi, Reza;Hatami, Khadijeh;Ajdarkosh, Hossein;Golmahi, Zeynab;Ranjbaran, Mehdi
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.16 no.8
    • /
    • pp.3141-3145
    • /
    • 2015
  • Background: Gastric cancer (GC) is one of the common lethal cancers in Iran, and detection of GC in the early stages would help improve patient survival. In this study, we evaluate the accuracy of EUS in determining the depth of invasion of GC among Iranian patients. Materials and Methods: This is a retrospective study of patients with pathologically confirmed GC who underwent EUS before initiating treatment. The accuracy of EUS and the agreement between the two methods were evaluated by comparing the pretreatment EUS findings with the postoperative histopathological results. Results: The overall accuracy of EUS for T and N staging was 67.9% and 75.47%, respectively. Underestimation and overestimation were seen in 22 (14.2%) and 40 (25.6%) cases, respectively. EUS was more accurate for large tumors and for tumors located in the middle and lower parts of the stomach, and was more sensitive in T3 staging. The weighted kappa values for the T and N staging were 0.53 and 0.66, respectively. Conclusions: EUS is a useful modality for evaluating the depth of invasion of GC. Its accuracy was higher when the tumor was located in the lower parts of the stomach and was larger than 3 cm. Therefore, judgments made upon the other criteria evaluated in this study need to be reconsidered.
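
The weighted kappa statistic reported for staging agreement penalizes disagreements by how far apart the ordinal stages are. A minimal sketch with linear weights follows; the confusion matrix is invented, not the study's EUS/pathology data.

```python
# Linearly weighted Cohen's kappa for ordinal staging agreement.

def weighted_kappa(matrix):
    """Linear-weighted kappa for a square confusion matrix of counts:
    weight 1 on the diagonal, decreasing linearly with stage distance."""
    n_cat = len(matrix)
    total = sum(sum(row) for row in matrix)
    row_tot = [sum(row) for row in matrix]
    col_tot = [sum(matrix[i][j] for i in range(n_cat)) for j in range(n_cat)]
    w = [[1.0 - abs(i - j) / (n_cat - 1) for j in range(n_cat)]
         for i in range(n_cat)]
    p_obs = sum(w[i][j] * matrix[i][j]
                for i in range(n_cat) for j in range(n_cat)) / total
    p_exp = sum(w[i][j] * row_tot[i] * col_tot[j]
                for i in range(n_cat) for j in range(n_cat)) / (total * total)
    return (p_obs - p_exp) / (1.0 - p_exp)

# Toy T-stage (T1-T4) agreement between EUS rows and histopathology columns.
confusion = [
    [10,  3,  1, 0],
    [ 2, 12,  4, 1],
    [ 1,  3, 15, 3],
    [ 0,  1,  2, 9],
]
kappa = weighted_kappa(confusion)
```

Off-diagonal counts one stage away cost less than counts two or three stages away, which is why weighted kappa is the usual agreement measure for ordinal T/N staging.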