• Title/Summary/Keyword: Underestimation

Validation of Satellite Altimeter-Observed Significant Wave Height in the North Pacific and North Atlantic Ocean (1992-2016) (북태평양과 북대서양에서의 위성 고도계 관측 유의파고 검증 (1992-2016))

  • Hye-Jin Woo;Kyung-Ae Park
    • Journal of the Korean earth science society
    • /
    • v.44 no.2
    • /
    • pp.135-147
    • /
    • 2023
  • Satellite-observed significant wave heights (SWHs), which are widely used to understand the response of the ocean to climate change, require long-term and continuous validation. This study examines the accuracy and error characteristics of SWH observed by nine satellite altimeters in the North Pacific and North Atlantic Ocean over 25 years (1992-2016). A total of 137,929 matchups were generated to compare altimeter-observed SWH with in-situ measurements. The altimeter SWH showed a bias of 0.03 m and a root mean square error (RMSE) of 0.27 m, indicating relatively high accuracy in the North Pacific and North Atlantic Ocean. However, the spatial distribution of altimeter SWH errors showed notable differences. To better understand the error characteristics of altimeter-observed SWH, errors were analyzed with respect to in-situ SWH, time, latitude, and distance from the coast. Most satellite altimeters overestimated SWH when in-situ SWH was low and underestimated it when in-situ SWH was high. The errors of altimeter-observed SWH varied seasonally, increasing in winter and decreasing in summer, and the variability of errors increased at higher latitudes. The RMSEs indicated high accuracy of less than 0.3 m in the open ocean more than 100 km from the coast, while errors increased significantly to more than 0.5 m in coastal regions within 15 km of the coast. These findings underscore the need for caution when analyzing the spatio-temporal variability of SWH in the global and regional oceans using satellite altimeter data.

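As a point of reference for the matchup statistics quoted in this abstract, the bias and RMSE of altimeter SWH against collocated in-situ SWH follow the usual definitions. The sketch below shows those definitions on synthetic data; the array names, synthetic error model, and distance binning are illustrative assumptions, not the authors' processing chain.

```python
import numpy as np

def validation_stats(swh_altimeter, swh_buoy):
    """Bias and RMSE (m) of altimeter SWH against collocated in-situ SWH."""
    diff = np.asarray(swh_altimeter) - np.asarray(swh_buoy)
    return diff.mean(), np.sqrt((diff ** 2).mean())

# Synthetic matchups, binned by distance from the coast to mirror the
# open-ocean (>100 km) vs. coastal (<15 km) comparison in the abstract.
rng = np.random.default_rng(0)
buoy = rng.gamma(2.0, 1.2, size=5000)             # synthetic in-situ SWH (m)
alt = buoy + rng.normal(0.03, 0.27, size=5000)    # synthetic altimeter SWH (m)
dist = rng.uniform(0, 300, size=5000)             # distance from coast (km)

for label, mask in [("open ocean (>100 km)", dist > 100),
                    ("coastal (<15 km)", dist < 15)]:
    b, r = validation_stats(alt[mask], buoy[mask])
    print(f"{label}: bias={b:.2f} m, RMSE={r:.2f} m")
```
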
The Regime of Peron(1943-1955) and the Apparition of the People as Social Subjects - from the Perspective of the Populist Discourse of Laclau - (페론체제(1943-1955)와 '대중'의 사회적 주체의 출현 - 라클라우의 포퓰리즘 담론의 시각에서 -)

  • Ahn, Tae-hwan
    • Iberoamérica
    • /
    • v.13 no.1
    • /
    • pp.123-152
    • /
    • 2011
  • The long-standing popular culture of Latin America, based on the social solidarity of its communities, makes the political relationship between leader and people very different from that of European societies grounded in representative democracy. The mainstream of populist discourse nonetheless views actual populist political processes in a pejorative light, attributing them to the demagogic style of their leaders. In this sense, it is important to reconsider the populism discourse of Ernesto Laclau, who understands populism as a way of interpreting the emergence of the people as they articulate social demands within real populist politics. According to Laclau, "populism seeks the radical reconfiguration of the revolt against the 'status quo' and a new order." This work examines whether Laclau's interpretation can be applied to the Peronist political regime. The first, orthodox line of discourse on populism, including Gino Germani, holds that populism is a political movement based on manipulation and demagoguery by a charismatic leader of an irrational mass during the transitional period following the crisis of the traditional oligarchy in Latin America. Another mainstream line, including Cardoso and O'Donnell, regards populism as a political phenomenon of the transition toward modernization and national development by means of import-substitution industrialization and cross-class alliances after the 1930s. These principal interpretations, however, disregard the fact that in Argentina many poor urban working-class people lived under racist, unequal, and painful social relations, owing to the underestimation and discrimination they suffered from the upper and middle classes and many intellectuals. Peronism, by contrast, regarded them as new social subjects endowed with human dignity, and clientelism therefore also has to be rethought with other meanings in mind. In this sense, Ernesto Laclau's theory of populism is very helpful for illuminating the sensitive and ambiguous meanings of Peronism. In particular, Peronism allowed the urban working class to maintain ways of life closer to those of traditional communities while moving against the status quo, which is a key to the success of Peronism not only at that time but up to the present day. This study therefore shows that the most important achievement of the Peronist regime was to bring about the emergence of the 'people', in the sense of advancing democracy in Argentina.

Comparative Study between Design Methods and Pile Load Tests for Bearing Capacity of Driven PHC Piles in the Nakdong River Delta (낙동강 삼각주에 항타된 PHC말뚝의 지지력을 위한 재하시험과 지지력 공식의 비교연구)

  • Dung, N.T.;Chung, S.G.;Kim, S.R.;Chung, J.G.
    • Journal of the Korean Geotechnical Society
    • /
    • v.23 no.3
    • /
    • pp.61-75
    • /
    • 2007
  • Deep foundations in Korea have usually been installed in hard strata such as gravel or rock. In the thick Nakdong River deltaic deposits, however, sand or sandy gravel layers located at mid-depth must be considered as the bearing stratum for piles, as is done in the Chaophraya (Bangkok) and Mississippi River deltas. This study focused on finding suitable methods for estimating bearing capacity when driving prestressed high-strength concrete (PHC) piles to a required depth in the deltaic area. Ground investigations were performed at five locations in two sites in the delta. The bearing capacity of the driven piles was computed using a number of proposed methods, including CPT-based and other analytical methods, based on the ground investigation, and the results were compared with one another. Five PDA (pile driving analyzer) tests, a well-known technique for this purpose, were carried out systematically over the whole embedded depth of the piles. The bearing capacities calculated by the various methods were then compared with the PDA and static load test (SLT) results. It was found that the shaft resistance is significantly governed by set-up effects and that its long-term value agrees well with that of the β method. The design methods for toe resistance were determined from the SLT result rather than the PDA results, which led to underestimation. Moreover, using the CPT results, appropriate methods were proposed for calculating the bearing capacity of piles in the area.

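The β method mentioned in this abstract estimates unit shaft resistance as the product of a β coefficient and the vertical effective stress at each layer. A minimal sketch of that calculation follows; the soil profile, β values, and pile geometry below are hypothetical illustrations, not values from the paper.

```python
import numpy as np

# Hypothetical soil profile for a driven PHC pile (illustrative values only).
# Each layer: (thickness m, total unit weight kN/m^3, beta coefficient)
layers = [(10.0, 17.0, 0.25),   # soft clay
          (15.0, 18.0, 0.30),   # silty sand
          (10.0, 19.0, 0.40)]   # sandy gravel (bearing stratum)

gamma_w = 9.81          # unit weight of water, kN/m^3 (water table at surface)
diameter = 0.6          # assumed PHC pile outer diameter, m
perimeter = np.pi * diameter

shaft = 0.0
sigma_v_eff = 0.0       # vertical effective stress at top of current layer, kPa
for thickness, gamma, beta in layers:
    sigma_mid = sigma_v_eff + (gamma - gamma_w) * thickness / 2.0  # mid-layer stress
    unit_fs = beta * sigma_mid                  # beta method: f_s = beta * sigma'_v
    shaft += unit_fs * perimeter * thickness    # layer contribution, kN
    sigma_v_eff += (gamma - gamma_w) * thickness

print(f"Estimated long-term shaft resistance: {shaft:.0f} kN")
```
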
Application of Self-Organizing Map Theory for the Development of Rainfall-Runoff Prediction Model (강우-유출 예측모형 개발을 위한 자기조직화 이론의 적용)

  • Park, Sung Chun;Jin, Young Hoon;Kim, Yong Gu
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.26 no.4B
    • /
    • pp.389-398
    • /
    • 2006
  • This study applied the self-organizing map (SOM), a type of artificial neural network (ANN), together with the back-propagation algorithm (BPA) to a rainfall-runoff prediction model, taking account of the irregular spatiotemporal distribution of rainfall. To address problems reported in previous ANN studies, such as the overestimation of low flow during the dry season, the underestimation of runoff during the flood season, and the persistence phenomenon in which predicted values merely repeat the preceding runoff, SOM, which is known for its pattern classification ability, was introduced as a preprocessing step in the prediction model. The proposed method first classifies the rainfall-runoff relationship using SOM and then constructs a separate model for each class identified by SOM. Each model is trained and used for runoff prediction with the data corresponding to its own class. As a result, the proposed method predicted runoff better than previous studies that applied ANNs in the usual way, and it did not exhibit the under- and overestimation of runoff or the persistence problem.

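To make the classify-then-predict structure concrete, the sketch below clusters input patterns with a tiny self-organizing map and trains one backpropagation-based regressor per SOM class. The SOM implementation, grid size, synthetic data, and the use of scikit-learn's MLPRegressor as a stand-in for the paper's BPA network are all assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor  # backprop-trained stand-in for the BPA model

def train_som(X, n_nodes=4, n_iter=2000, lr0=0.5, seed=0):
    """Very small 1-D SOM: returns node weight vectors of shape (n_nodes, n_features)."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), n_nodes, replace=False)].astype(float)
    for t in range(n_iter):
        x = X[rng.integers(len(X))]
        lr = lr0 * (1.0 - t / n_iter)
        sigma = max(n_nodes / 2.0 * (1.0 - t / n_iter), 0.5)
        bmu = np.argmin(np.linalg.norm(W - x, axis=1))   # best-matching unit
        d = np.abs(np.arange(n_nodes) - bmu)             # distance on the 1-D map
        h = np.exp(-(d ** 2) / (2 * sigma ** 2))         # neighborhood function
        W += lr * h[:, None] * (x - W)
    return W

def som_label(W, X):
    return np.argmin(np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2), axis=1)

# Synthetic data: inputs are lagged rainfall/runoff features, target is the next runoff.
rng = np.random.default_rng(1)
X = rng.random((1000, 4))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=1000)

W = train_som(X, n_nodes=4)
labels = som_label(W, X)

# One backpropagation model per SOM class, trained only on that class's patterns.
models = {}
for k in np.unique(labels):
    m = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    m.fit(X[labels == k], y[labels == k])
    models[k] = m

# Prediction routes each new pattern to the model of its SOM class.
x_new = rng.random((1, 4))
k_new = som_label(W, x_new)[0]
print("predicted runoff:", models[k_new].predict(x_new)[0])
```
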
Realtime Streamflow Prediction using Quantitative Precipitation Model Output (정량강수모의를 이용한 실시간 유출예측)

  • Kang, Boosik;Moon, Sujin
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.30 no.6B
    • /
    • pp.579-587
    • /
    • 2010
  • A mid-range streamflow forecast was performed using NWP (Numerical Weather Prediction) output provided by KMA. The NWP consists of RDAPS for 48-hour forecasts and GDAPS for 240-hour forecasts. To enhance the accuracy of the NWP, QPM was applied to downscale the original output and Quantile Mapping was applied to adjust its systematic biases. The applicability of the suggested streamflow prediction system was verified in the Geum River basin. In the system, streamflow was simulated with the long-term continuous SSARR model, with the rainfall prediction transformed into the input format required by SSARR. The 2-day RQPM rainfall predictions for the period January 1 to June 20, 2006 showed reasonable predictability, with the total RQPM precipitation amounting to 89.7% of the observed precipitation. The streamflow forecast driven by the 2-day RQPM followed the observed hydrograph pattern with high accuracy, even though missed forecasts and false alarms occurred in some rainfall events. However, predictability decreased at downstream stations such as Gyuam because of difficulties in calibrating the rainfall-runoff model parameters for controlled streamflow and the reduced reliability of the rating curve at gauging stations with large cross-sectional areas. The 10-day precipitation prediction using GQPM significantly underestimated both peak and total amounts, which clearly affected the streamflow prediction. Improving the GDAPS forecast through post-processing appears to have limitations, and efforts to stabilize or reform the original NWP are needed.

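Quantile mapping, used here to remove systematic biases from the NWP precipitation, replaces each forecast value with the observed value at the same quantile of the training climatology. A minimal empirical sketch, assuming a simple training/forecast split and synthetic data (variable names are illustrative):

```python
import numpy as np

def quantile_mapping(forecast, obs_train, fcst_train, n_q=100):
    """Empirical quantile mapping: map forecast values onto observed quantiles.

    obs_train / fcst_train are historical series used to build the mapping;
    `forecast` is new model output to be bias-corrected.
    """
    q = np.linspace(0.0, 1.0, n_q)
    fcst_q = np.quantile(fcst_train, q)
    obs_q = np.quantile(obs_train, q)
    # Locate each forecast value in the forecast climatology, then replace it
    # with the observed value at the same quantile.
    return np.interp(forecast, fcst_q, obs_q)

# Synthetic example: model precipitation with a systematic dry bias.
rng = np.random.default_rng(0)
obs_train = rng.gamma(2.0, 5.0, size=3000)          # "observed" daily precipitation (mm)
fcst_train = 0.6 * rng.gamma(2.0, 5.0, size=3000)   # biased model output (mm)

new_fcst = 0.6 * rng.gamma(2.0, 5.0, size=10)
corrected = quantile_mapping(new_fcst, obs_train, fcst_train)
print(np.round(new_fcst, 1))
print(np.round(corrected, 1))
```
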
Evaluation of Peak Ground Acceleration Based on Seismic Design Standards in Sejong City Area Using Gyeongju-Pohang Type Design Seismic Waves (경주·포항형 설계지진파를 활용한 세종시 지역의 내진설계기준 지표면최대가속도 성능평가)

  • Oh, Hyun Ju;Lee, Sung Hyun;Park, Hyung Choon
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.44 no.1
    • /
    • pp.41-48
    • /
    • 2024
  • In 2017, the Ministry of the Interior and Safety conducted research for the revision of seismic design standards and performed studies on standard design response spectra. As a result, the Common Application Guidelines for Seismic Design Standards were introduced, and these guidelines have been implemented in the national design standards of the Ministry of Land, Infrastructure, and Transport for practical use. However, it should be noted that the research for proposing standard design response spectra during the 2017 revision was conducted before the occurrence of the significant seismic events in South Korea, such as the 2016 Gyeongju Earthquake and the 2017 Pohang Earthquake. To account for these recent major earthquakes, this study adjusted the standard design spectra based on the records of the 2016 Gyeongju Earthquake and the 2017 Pohang Earthquake and conducted ground response analyses accordingly. The results revealed variations in peak ground acceleration (PGA) at the ground surface even within the same ground classification. It was confirmed that this variation can lead to overestimation or underestimation of seismic loads.

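The surface PGA values discussed in this abstract come from ground response analyses. As a heavily simplified, linear illustration of how a surface motion is obtained from a rock motion, the sketch below applies the textbook amplitude transfer function of a damped uniform soil layer on rigid rock to an input record; the layer thickness, shear-wave velocity, damping, and synthetic input motion are all assumptions, and a real analysis would be equivalent-linear or nonlinear and site-specific.

```python
import numpy as np

def surface_pga(acc_rock, dt, H=30.0, vs=300.0, damping=0.05):
    """Surface PGA from a base motion using the approximate amplitude transfer
    function of a damped uniform soil layer on rigid rock:
    |F(w)| ~ 1 / sqrt(cos^2(w*H/vs) + (damping*w*H/vs)^2)  (linear, amplitude-only)."""
    n = len(acc_rock)
    spec = np.fft.rfft(acc_rock)
    freq = np.fft.rfftfreq(n, dt)
    k = 2.0 * np.pi * freq * H / vs
    amp = 1.0 / np.sqrt(np.cos(k) ** 2 + (damping * k) ** 2)
    acc_surf = np.fft.irfft(spec * amp, n)
    return np.max(np.abs(acc_surf))

# Illustrative synthetic acceleration history (g), dt = 0.01 s.
rng = np.random.default_rng(0)
t = np.arange(0, 20, 0.01)
acc_rock = 0.1 * np.sin(2 * np.pi * 2.0 * t) * np.exp(-0.2 * t) + 0.01 * rng.normal(size=t.size)

pga_surface = surface_pga(acc_rock, dt=0.01, H=30.0, vs=300.0)
print(f"rock PGA = {np.max(np.abs(acc_rock)):.3f} g, surface PGA = {pga_surface:.3f} g")
```
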
Development and Assessment of LSTM Model for Correcting Underestimation of Water Temperature in Korean Marine Heatwave Prediction System (한반도 고수온 예측 시스템의 수온 과소모의 보정을 위한 LSTM 모델 구축 및 예측성 평가)

  • NA KYOUNG IM;HYUNKEUN JIN;GYUNDO PAK;YOUNG-GYU PARK;KYEONG OK KIM;YONGHAN CHOI;YOUNG HO KIM
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.29 no.2
    • /
    • pp.101-115
    • /
    • 2024
  • Ocean heatwaves are emerging as a major issue under global warming, posing a direct threat to marine ecosystems and to humanity through decreased food resources and the reduced carbon absorption capacity of the oceans. Consequently, predicting ocean heatwaves around the Korean Peninsula is becoming increasingly important for marine environmental monitoring and management. In this study, an LSTM model was developed to correct the underestimated prediction of ocean heatwaves caused by the coarse vertical grid system of the Korean Peninsula Ocean Prediction System. Based on the ocean heatwave predictions for the Korean Peninsula conducted in 2023 and on those generated by the LSTM model, prediction performance was evaluated for the East Sea, Yellow Sea, and South Sea areas surrounding the Korean Peninsula. The LSTM model developed in this study significantly improved the prediction of sea surface temperature during periods of temperature increase in all three regions, but its effectiveness during periods of temperature decrease, or before the onset of temperature rise, was limited. This demonstrates the potential of the LSTM model to address the underestimation of ocean heatwaves caused by the coarse vertical grid system during periods of enhanced stratification. The utility of data-driven artificial intelligence models is expected to expand in the future, improving the prediction performance of dynamical models or even replacing them.

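A minimal sketch of the kind of correction model described here: an LSTM that maps a window of model-predicted SSTs to a corrected value for the last day. The architecture, window length, and synthetic training pairs (in which the "dynamical model" runs cool on warm days) are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class SSTCorrector(nn.Module):
    """Maps a window of model-predicted SST to a corrected value for the final day."""
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :]).squeeze(-1)

# Synthetic training pairs: the "dynamical model" SST underestimates warm days.
torch.manual_seed(0)
seq_len, n = 14, 512
obs = 20 + 5 * torch.rand(n, seq_len)                          # pseudo-observed SST (deg C)
model_sst = obs - 0.8 * torch.clamp(obs - 22.0, min=0) / 3.0   # cool bias during warming
x, y = model_sst.unsqueeze(-1), obs[:, -1]

net = SSTCorrector()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(net(x), y)
    loss.backward()
    opt.step()
print(f"final training MSE: {loss.item():.3f}")
```
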
Uncertainty Calculation Algorithm for the Estimation of the Radiochronometry of Nuclear Material (핵물질 연대측정을 위한 불확도 추정 알고리즘 연구)

  • JaeChan Park;TaeHoon Jeon;JungHo Song;MinSu Ju;JinYoung Chung;KiNam Kwon;WooChul Choi;JaeHak Cheong
    • Journal of Radiation Industry
    • /
    • v.17 no.4
    • /
    • pp.345-357
    • /
    • 2023
  • Nuclear forensics is understood as a mandatory component of international nuclear material control and non-proliferation verification. Radiochronometry for nuclear forensics uses the decay-series characteristics of nuclear materials and the Bateman equation to estimate when nuclear materials were purified and produced. Radiochronometry values carry measurement uncertainty arising from the uncertainty factors in the estimation process, and these uncertainties should be calculated with appropriate evaluation methods that represent the accuracy and reliability of the result. The IAEA, the US, and the EU have conducted research on radiochronometry and its measurement uncertainty; however, the uncertainty calculation method based directly on the Bateman equation is limited by underestimation related to the decay constants and by the impossibility of estimating ages over more than one generation, which highlights the need for uncertainty calculations using computational simulation such as the Monte Carlo method. In this study, mathematical models and the LHS (Latin Hypercube Sampling) method were analyzed to enhance the reliability of radiochronometry, with the aim of developing an uncertainty algorithm for nuclear material radiochronometry based on the Bateman equation. The LHS method, which can obtain effective statistical results with a small number of samples, was analyzed and applied to Monte Carlo algorithms for uncertainty calculation by computer simulation, implemented in the MATLAB computational software. The uncertainty calculation model based on mathematical models showed characteristics determined by the relationship between sensitivity coefficients and radioactive equilibrium, while the random-sampling results of the computational simulation depended on the sampling method, the number of sampling iterations, and the probability distributions of the uncertainty factors. For validation, models from various international organizations, the mathematical models, and the Monte Carlo method were compared, and the developed algorithm was found to calculate uncertainties at a level of accuracy equivalent to that of overseas institutions and mathematical-model-based methods. To enhance usability, future research, comparison, and validation will need to incorporate more complex decay chains and non-homogeneous conditions. The results of this study can serve as a foundational technology in the nuclear forensics field, providing tools for identifying signature nuclides and supporting the research, development, comparison, and validation of related technologies.

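For a single parent-daughter pair that was completely purified at time zero, the Bateman solution gives the daughter/parent atom ratio as a function of age, and measurement uncertainty can be propagated through that relation by Monte Carlo sampling with a Latin Hypercube design. The sketch below shows this idea with scipy; the nuclide pair (²³⁴U/²³⁰Th), the assumed ratio, and the uncertainty values are illustrative, not the paper's data or MATLAB implementation.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm, qmc

# Approximate half-lives (years) for the 234U -> 230Th pair (illustrative inputs).
lam_p = np.log(2) / 245500.0   # parent decay constant, 234U
lam_d = np.log(2) / 75380.0    # daughter decay constant, 230Th

def ratio_from_age(t):
    """Daughter/parent atom ratio N_d/N_p at age t, assuming N_d(0)=0 (Bateman)."""
    return lam_p / (lam_d - lam_p) * (1.0 - np.exp(-(lam_d - lam_p) * t))

def age_from_ratio(r):
    """Invert the Bateman relation numerically to recover the age (years)."""
    return brentq(lambda t: ratio_from_age(t) - r, 1.0, 5.0e5)

# Hypothetical measured ratio with 2% relative standard uncertainty.
r_meas, r_unc = 0.05, 0.05 * 0.02

# Latin Hypercube Sampling of the measured ratio
# (the design could be extended to the decay constants as well).
sampler = qmc.LatinHypercube(d=1, seed=0)
u = sampler.random(n=2000)                               # stratified uniforms in [0, 1)
r_samples = norm.ppf(u[:, 0], loc=r_meas, scale=r_unc)   # map to the assumed normal

ages = np.array([age_from_ratio(r) for r in r_samples])
print(f"age = {ages.mean():.0f} y, standard uncertainty = {ages.std(ddof=1):.0f} y")
```
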
The Evaluation of the Difference of the SUV Caused by DFOV Change in PET/CT (PET/CT 검사에서 확대된 표시시야가 표준섭취계수에 미치는 영향 평가)

  • Kwak, In-Suk;Lee, Hyuk;Choi, Sung-Wook;Seok, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.15 no.2
    • /
    • pp.13-20
    • /
    • 2011
  • Purpose: The limited FOV (field of view) of CT can cause truncation artifacts outside the DFOV (display field of view) in PET/CT images. In this study, we measured the differences in SUV and compared the influence of reconstruction with an extended DFOV. Materials and Methods: A NEMA 1994 PET phantom was filled with 5.3 kBq/mL of ¹⁸F-FDG and placed at the center of the FOV, and phantom images were acquired by emission scan. The phantom was then shifted to the outer edge of the DFOV and images were acquired in the same way. All data from each experiment were reconstructed with the same method, applying DFOVs of 50 cm and 70 cm, and ROIs were set on the emission images for comparative analysis of SUV. For the clinical test, a group of patients showing truncation artifacts was selected; ROIs were placed on the liver in each patient's image and SUV was compared according to the change of DFOV. Results: In the centered-phantom study, the pixel size increased from 3.91 mm to 5.47 mm with the larger DFOV, and the SUVmax of the ROI decreased from 1.49 to 1.35 when the extended DFOV was applied. In the shifted-phantom study, SUVmax decreased from 1.30 to 1.20, and the SUVmax in the truncated region of the extended DFOV was 1.51, 25.9% higher outside the truncated region than inside. In the patient study, SUVmax decreased from 3.38 to 3.13 when the extended DFOV was applied. Conclusion: With an extended DFOV, the increase in pixel size introduces pixel-to-pixel noise, and for this reason SUVmax is underestimated. The underestimation of quantitative results over the whole image plane should therefore be considered when an extended-DFOV protocol is applied to patient studies, while quantitative results in the truncated region may read higher than those inside the original DFOV.

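The SUV values compared in this study follow the standard body-weight-normalized definition. A minimal sketch of that formula with hypothetical numbers (not values from the study):

```python
def suv_bw(activity_conc_kbq_ml, injected_mbq, body_weight_kg):
    """Body-weight-normalized SUV.

    SUV = tissue activity concentration / (injected activity / body weight),
    assuming 1 g of tissue ~ 1 mL; decay correction of the injected activity
    to scan time is assumed to have been applied already.
    """
    injected_kbq = injected_mbq * 1000.0
    body_weight_g = body_weight_kg * 1000.0
    return activity_conc_kbq_ml / (injected_kbq / body_weight_g)

# Hypothetical liver ROI: 5.0 kBq/mL, 370 MBq injected, 70 kg patient.
print(f"SUV = {suv_bw(5.0, 370.0, 70.0):.2f}")
```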

Study on the Small Fields Dosimetry for High Energy Photon-based Radiation Therapy (고에너지 광자선을 이용한 방사선 치료 시 소조사면에서의 흡수선량평가에 관한 연구)

  • Jeong, Hae-Sun;Han, Young-Yih;Kum, O-Yeon;Kim, Chan-Hyeong
    • Progress in Medical Physics
    • /
    • v.20 no.4
    • /
    • pp.290-297
    • /
    • 2009
  • In radiation treatment using small-field high-energy photon beams, accurate dosimetry is a challenging task because of dosimetrically unfavorable phenomena such as the steep dose changes at the field boundaries, electron disequilibrium, and the non-uniformity between the detector and phantom materials. In this study, the absorbed dose in a phantom was measured with an ion chamber and a diode detector, both widely used in clinics. GAFCHROMIC® EBT film, composed of water-equivalent materials, was also evaluated as a small-field detector and compared with the ion chamber and diode detectors. Output factors were measured at 10 cm depth in a solid phantom located 100 cm from the source of a 6 MV linear accelerator (Varian 6 EX) for six field sizes (5×5, 2×2, 1.5×1.5, 1×1, 0.7×0.7, and 0.5×0.5 cm²). From the 5×5 cm² to the 1.5×1.5 cm² field, the absorbed doses from the three detectors agreed within 1%, whereas the ion chamber underestimated the dose compared with the other detectors for field sizes of 1×1 cm² and smaller. To correct the observed underestimation, a convolution method was employed to remove the volume-averaging effect of the ion chamber. In the 1×1 cm² field, the absorbed dose measured with the diode detector was about 3% higher than that with the EBT film, while the volume-corrected ion chamber dose was 1% lower. For the 0.5×0.5 cm² field, the diode dose was 1% higher than the EBT film dose, while the volume-corrected ion chamber dose was 7% lower. In conclusion, the feasibility of GAFCHROMIC® EBT film as a small-field dosimeter was tested, and further investigation will proceed using Monte Carlo simulation.

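The volume-averaging underestimation corrected here can be approximated, to first order, as the ratio of the central-axis dose to the dose averaged over the chamber cavity length along a measured lateral profile. The sketch below illustrates that idea only; the Gaussian-like profile and the 6 mm cavity length are assumptions, and the paper's convolution-based correction is more rigorous.

```python
import numpy as np

def volume_averaging_factor(x_mm, profile, cavity_length_mm):
    """Ratio of the central-axis dose to the profile averaged over the chamber
    cavity length: a first-order stand-in for a convolution-based correction of
    ion-chamber volume averaging in small fields."""
    center = profile[np.argmin(np.abs(x_mm))]
    in_cavity = np.abs(x_mm) <= cavity_length_mm / 2.0
    return center / profile[in_cavity].mean()

# Illustrative lateral profile of a ~0.5 x 0.5 cm^2 field, approximated by a
# Gaussian (the sigma and the cavity length are assumptions, not measured data).
x = np.linspace(-20.0, 20.0, 801)            # off-axis distance, mm
profile = np.exp(-x**2 / (2 * 3.5**2))       # relative dose

k_vol = volume_averaging_factor(x, profile, cavity_length_mm=6.0)
print(f"volume-averaging correction factor: {k_vol:.3f}")
# The corrected chamber reading would be (measured dose) * k_vol.
```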