• Title/Summary/Keyword: Analysis Error

Search Results: 9,206

Analysis of Tidal Deflection and Ice Properties of Ross Ice Shelf, Antarctica, by using DDInSAR Imagery (DDInSAR 영상을 이용한 남극 로스 빙붕의 조위변형과 물성 분석)

  • Han, Soojeong;Han, Hyangsun;Lee, Hoonyol
    • Korean Journal of Remote Sensing
    • /
    • v.35 no.6_1
    • /
    • pp.933-944
    • /
    • 2019
  • This study analyzes the tidal deformation of the grounding zones on the east (Region A) and west (Region B) sides of the Ross Ice Shelf, Antarctica, using Double-Differential Interferometric Synthetic Aperture Radar (DDInSAR). Seven Sentinel-1A SAR images acquired in 2015-2016 were used to estimate the accuracy of a tide prediction model and the Young's modulus of the ice shelf. First, we compared the Ross Sea Height-based Tidal Inverse (Ross_Inv) model, a representative tide prediction model for the Antarctic Ross Sea, with the tidal deformation of the ice shelf extracted from the DDInSAR images. The model accuracy was 3.86 cm in the east of the Ross Ice Shelf, and it was confirmed that the inverse barometric pressure effect must be corrected for in the tide model. In the west, however, a large error remained even after correction of the atmospheric effect, suggesting that the tide model may be inaccurate there. In addition, the Young's modulus of the ice was calculated on the basis of a one-dimensional elastic beam model that relates the width of the hinge zone, where tidal flexure occurs, to the ice thickness. For this purpose, the grounding line is defined as the line where tide-induced displacement first appears in the DDInSAR image, the hinge line as the line of local maximum/minimum deformation, and the hinge zone as the area between the two lines. According to the one-dimensional elastic beam model assuming a semi-infinite plate, the width of the hinge zone is directly proportional to the 0.75 power of the ice thickness. The hinge-zone width was measured where the grounding line and the hinge line appeared nearly straight in the DDInSAR images. Linear regression against the 0.75 power of the BEDMAP2 ice thickness yielded a Young's modulus of 1.77±0.73 GPa for the east and west of the Ross Ice Shelf. More accurate estimates of the Young's modulus can be expected as Sentinel-1 images accumulate in the future.
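The regression step described in the abstract can be sketched numerically. The inversion below uses the standard semi-infinite elastic-beam result W = π/β with β = [3ρ_w·g·(1−ν²)/(E·h³)]^(1/4); the constant π (hinge line at the first flexural extremum) and the default parameter values are assumptions for illustration, not figures from the paper.

```python
import numpy as np

def youngs_modulus_from_hinge_width(width_m, thickness_m,
                                    rho_w=1028.0, g=9.81, nu=0.3):
    """Fit hinge-zone width W = c * h**0.75 through the origin, then
    invert the 1-D elastic-beam relation W = pi / beta, where
    beta = (3*rho_w*g*(1 - nu**2) / (E*h**3))**0.25, for E."""
    x = thickness_m ** 0.75
    c = float(np.sum(x * width_m) / np.sum(x * x))   # least-squares slope
    E = 3.0 * rho_w * g * (1.0 - nu**2) * (c / np.pi) ** 4
    return E
```

Because W ∝ E^(1/4)·h^0.75, the fitted slope against h^0.75 determines E directly once the water density and Poisson's ratio are fixed.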

A Study of the Representation in the Elementary Mathematical Problem-Solving Process (초등 수학 문제해결 과정에 사용되는 표현 방법에 대한 연구)

  • Kim, Yu-Jung;Paik, Seok-Yoon
    • Journal of Elementary Mathematics Education in Korea
    • /
    • v.9 no.2
    • /
    • pp.85-110
    • /
    • 2005
  • The purpose of this study is to examine the characteristics of the visual representations used in the problem-solving process, to identify the representation types students used to solve problems successfully, and to systematize visual representation methods using the conditions given in the problems. To achieve this goal, the following questions were raised: (1) What characteristics do the representations elementary school students use in the process of solving a math problem possess? (2) What types of representation did students use to successfully solve elementary math problems? 240 fourth graders attending J Elementary School in Seoul participated in this study. Qualitative methodology was used for data analysis; the analysis examined the representation methods the students used in the problem-solving process and then identified the representations that successfully solved five different problems. The results of the study are as follows. First, the students are not familiar with representing problems in various ways during problem solving. Students tend to solve a problem using equations rather than drawing a diagram when they cannot find a word that hints at drawing one. The method students most often used to restate a problem was simply rewriting it, and they could not utilize a table even when one was essential to the solution; thus, various errors were found. Students did not simplify a complicated problem to find the pattern needed to solve it. Second, the image and strategy formed as the problem was first read greatly affected the solution. The first image formed on reading the problem led students to draw different diagrams and choose different strategies. The importance of this first image was shown by the fact that most students skipped the trial-and-error step and kept the strategy they chose first. Third, the students who successfully solved the problems did not rely solely on equations but recast the information into a decodable form. They did not write difficult equations they could not solve, but converted them into simplified equations they knew how to solve. On fraction problems, they drew a diagram to solve the problem without calculation. Fourth, the students who successfully solved the problems drew clear diagrams that could be understood intuitively. In their visual representations, unnecessary information was omitted, simple images were drawn using symbols or lines, and numeric explanations were added to clarify the relationships between pieces of information. In addition, they restricted the use of complicated motion lines and dividing lines, and proper nouns in the word problems were not changed into abbreviations or symbols, so the problem was restated clearly. Adding supplementary information was a useful resource in solving the problems.


The Economic Growth of Korea Since 1990 : Contributing Factors from Demand and Supply Sides (1990년대 이후 한국경제의 성장: 수요 및 공급 측 요인의 문제)

  • Hur, Seok-Kyun
    • KDI Journal of Economic Policy
    • /
    • v.31 no.1
    • /
    • pp.169-206
    • /
    • 2009
  • This study stems from the question, "How should we understand the pattern of the Korean economy after the 1990s?" Among the various applicable analytic methods, this study chooses a Structural Vector Autoregression (SVAR) with long-run restrictions, identifies the diverse shocks that gave rise to the current status of the Korean economy, and differentiates the relative contributions of those shocks. To that end, SVAR is applied to four economic models: Blanchard and Quah (1989)'s 2-variable model, its 3-variable extension, and two New Keynesian-type linear models modified from Stock and Watson (2002). In particular, the latter two models are devised to reflect the recent transitions in the determination of the foreign exchange rate (from a fixed-rate regime to a flexible one) as well as in the monetary policy rule (from aggregate targeting to inflation targeting). When the estimated results are organized in the form of impulse responses and forecast error variance decompositions, two common findings emerge. First, changes in the rate of economic growth are mainly attributable to productivity shocks, and this trend has strengthened since the 2000s, which indicates that Korea's economic growth since the 2000s has been closely associated with its potential growth rate. Second, the magnitude and persistence of impulse responses tend to have subsided since the 2000s. Given Korea's high dependence on trade, it is possible that low interest rates, low inflation, steady growth, and the economic emergence of China as a world player have helped secure capital and demand for exports and imports, which may therefore have reduced the impact of each shock on overall economic conditions. Despite the diverse mixture of models and shocks used for the analysis, these two common findings are observed throughout. 
Therefore, it can be concluded that the decreased rate of economic growth of Korea since 2000 appears to be on the same track as the decrease in Korea's potential growth rate. The contents of this paper are organized as follows: the second section reviews the recent trend of Korea's economic development and the related Korean literature, which helps define the scope and analytic methodology of this study; the third section presents the analysis model, the Structural VAR mentioned above, explaining the variables used, the estimation equations, and the identification conditions of the shocks; the fourth section reports the estimation results derived from the model; and the fifth section concludes.
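The Blanchard-Quah long-run identification that this abstract builds on can be sketched in a few lines of linear algebra: estimate a reduced-form VAR, compute the long-run multiplier, and pick the unique lower-triangular long-run impact matrix so that the second structural shock has no permanent effect on the first variable. This is a generic bivariate illustration, not the paper's estimated models; the lag order and the omission of an intercept are simplifying assumptions.

```python
import numpy as np

def blanchard_quah(y, p=1):
    """Identify structural shocks in a k-variable VAR(p) with the
    Blanchard-Quah long-run restriction (long-run impact matrix is
    lower triangular). Returns the structural impact matrix B0 and
    the long-run impact matrix F."""
    y = np.asarray(y, float)
    T, k = y.shape
    # Stack lagged regressors (no intercept for brevity)
    X = np.hstack([y[p - i - 1:T - i - 1] for i in range(p)])
    Y = y[p:]
    A = np.linalg.lstsq(X, Y, rcond=None)[0]       # stacked lag coefficients
    resid = Y - X @ A
    Sigma = resid.T @ resid / (T - p)              # reduced-form covariance
    A1 = sum(A[i * k:(i + 1) * k].T for i in range(p))   # A(1) = sum of lags
    longrun = np.linalg.inv(np.eye(k) - A1)        # long-run multiplier
    C = longrun @ Sigma @ longrun.T                # long-run covariance
    F = np.linalg.cholesky(C)                      # lower-triangular long-run impact
    B0 = (np.eye(k) - A1) @ F                      # structural impact matrix
    return B0, F
```

In the bivariate Blanchard-Quah application, the zero in the upper-right corner of F encodes the restriction that demand shocks have no long-run effect on output.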


Methodological Comparison of the Quantification of Total Carbon and Organic Carbon in Marine Sediment (해양 퇴적물내 총탄소 및 유기탄소의 분석기법 고찰)

  • Kim, Kyeong-Hong;Son, Seung-Kyu;Son, Ju-Won;Ju, Se-Jong
    • Journal of the Korean Society for Marine Environment & Energy
    • /
    • v.9 no.4
    • /
    • pp.235-242
    • /
    • 2006
  • The precise estimation of total and organic carbon contents in sediments is fundamental to understanding the benthic environment. To test the precision and accuracy of the CHN analyzer and of the procedure to quantify total and organic carbon contents (using in-situ acidification with sulfurous acid ($H_2SO_3$)) in sediment, reference materials such as Acetanilide ($C_8H_9NO$), Sulfanilamide ($C_6H_8N_2O_2S$), and BCSS-1 (standard estuary sediment) were used. The results indicate that the CHN analyzer quantifies carbon and nitrogen content with high accuracy (percent error = 3.29%) and precision (relative standard deviation = 1.26%). Additionally, we conducted an instrumental comparison of carbon values analyzed using the CHN analyzer and a Coulometric Carbon Analyzer. Total carbon contents measured by the two instruments were highly correlated ($R^2=0.9993$, n=84, p<0.0001) with a linear relationship and showed no significant differences (paired t-test, p=0.0003). The organic carbon contents from the two instruments showed similar results, with a significant linear relationship ($R^2=0.8867$, n=84, p<0.0001) and no significant differences (paired t-test, p<0.0001). Although it is possible to overestimate organic carbon contents for some sediment types with high inorganic carbon content (such as calcareous ooze) because of procedural and analytical errors, analysis of organic carbon contents in sediments using the CHN analyzer and the current procedure seems to provide the best estimates. Therefore, we recommend that this method be applied to measure the carbon content of normal sediment samples and consider it one of the best procedures for routine analysis of total and organic carbon.
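The agreement checks reported above (a linear correlation between the two instruments plus a paired t-test on the same samples) can be sketched as follows; the function and variable names are illustrative, not the study's data.

```python
import numpy as np

def instrument_agreement(x, y):
    """Compare two analyzers measuring the same n samples:
    returns the coefficient of determination (R^2) of the linear
    relationship and the paired t statistic on the differences
    d = y - x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]                 # Pearson correlation
    d = y - x
    t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))   # paired t statistic
    return r ** 2, t
```

To obtain p-values like those quoted in the abstract, compare |t| against the Student's t distribution with n−1 degrees of freedom (e.g., via `scipy.stats.t.sf`).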


Analysis and Prediction of Sewage Components of Urban Wastewater Treatment Plant Using Neural Network (대도시 하수종말처리장 유입 하수의 성상 평가와 인공신경망을 이용한 구성성분 농도 예측)

  • Jeong, Hyeong-Seok;Lee, Sang-Hyung;Shin, Hang-Sik;Song, Eui-Yeol
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.28 no.3
    • /
    • pp.308-315
    • /
    • 2006
  • Since sewage characteristics are the most important factors affecting the biological reactions in wastewater treatment plants, a detailed understanding of the characteristics and on-line measurement techniques of influent sewage plays an important role in determining appropriate control strategies. In this study, samples were taken at two-hour intervals over 51 days, from October $1^{st}$ to November $21^{st}$, 2005, at the influent gate of a sewage treatment plant, and the characteristics of the sewage were investigated. The daily flow rate and the concentrations of the sewage components followed a well-defined profile: the highest and lowest peak values were observed during $11:00{\sim}13:00$ and $05:00{\sim}07:00$, respectively. The concentrations of the sewage components were also strongly correlated with the absorbance measured at 300 nm in the UV range. The objective of this paper is therefore to develop an on-line technique for estimating the concentration of each component in the sewage from accumulated profiles of sewage, absorbance, and flow rate, all of which can be measured in real time. As a first step, regression analysis was performed using the absorbance and component concentration data. Then a neural network trained with influent flow rate, absorbance, and inflow duration as inputs was used. Both methods showed remarkable accuracy in predicting the concentrations of the individual sewage components. For the neural network, the predicted and measured values were 19.3 and 14.4 for TSS, 26.7 and 25.1 for TCOD, 5.4 and 4.1 for TN, and 0.45 and 0.39 for TP, respectively.
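The neural-network estimation step can be illustrated with a minimal one-hidden-layer regression network trained by gradient descent. The layer sizes, learning rate, and input naming below are assumptions for illustration, not the authors' architecture or data.

```python
import numpy as np

def train_small_nn(X, y, hidden=8, lr=0.1, epochs=4000, seed=0):
    """One-hidden-layer regression network (tanh hidden units, linear
    output) trained by full-batch gradient descent on mean squared
    error. A sketch of the kind of model the study trained on flow
    rate, absorbance, and inflow duration."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    Y = y.reshape(-1, 1)
    n = len(X)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)            # hidden activations
        out = H @ W2 + b2                   # linear output layer
        err = out - Y                       # gradient of 0.5*MSE w.r.t. out
        gW2 = H.T @ err / n; gb2 = err.mean(0)
        dH = (err @ W2.T) * (1.0 - H ** 2)  # backprop through tanh
        gW1 = X.T @ dH / n; gb1 = dH.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Z: (np.tanh(Z @ W1 + b1) @ W2 + b2).ravel()
```

In practice each sewage component (TSS, TCOD, TN, TP) would get its own trained output, with inputs normalized to a comparable scale first.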

Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.99-112
    • /
    • 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms. It finds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention in machine learning and artificial intelligence because of its remarkable performance improvement and its flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and SVM. In this line of research, DT ensemble studies have demonstrated impressive improvements in the generalization behavior of DT, while NN and SVM ensembles have not shown performance gains as remarkable as those of DT ensembles. Recently, several works have reported that the performance of an ensemble can be degraded when its classifiers are highly correlated with one another, resulting in a multicollinearity problem, and have proposed differentiated learning strategies to cope with this degradation. Hansen and Salamon (1990) argued that it is necessary and sufficient for the performance enhancement of an ensemble that the ensemble contain diverse classifiers. Breiman (1996) showed that ensemble learning can increase the performance of unstable learning algorithms, but does not yield remarkable improvement for stable learning algorithms. Unstable learning algorithms such as decision tree learners are sensitive to changes in the training data, so small changes in the training data can yield large changes in the generated classifiers; an ensemble of unstable learners can therefore guarantee some diversity among the classifiers. 
On the contrary, stable learning algorithms such as NN and SVM generate similar classifiers despite small changes in the training data, so the correlation among the resulting classifiers is very high. This high correlation results in a multicollinearity problem, which degrades the performance of the ensemble. Kim's work (2009) compared bankruptcy prediction on Korean firms using traditional prediction algorithms such as NN, DT, and SVM. It reports that stable learning algorithms such as NN and SVM have higher predictability than the unstable DT, whereas, with respect to ensemble learning, the DT ensemble shows greater improvement than the NN and SVM ensembles. Further analysis with the variance inflation factor (VIF) empirically showed that the performance degradation of the ensemble is due to multicollinearity, and proposed that optimization of the ensemble is needed to cope with the problem. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) in order to improve NN ensemble performance. Coverage optimization is a technique of choosing a sub-ensemble from an original ensemble so as to guarantee the diversity of its classifiers. CO-NN uses a genetic algorithm (GA), which has been widely used for various optimization problems, to handle the coverage optimization. The GA chromosomes are encoded as binary strings, each bit of which indicates an individual classifier. The fitness function is defined as the maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the commonly used measures of multicollinearity, is added to ensure the diversity of classifiers by removing high correlation among them. We use Microsoft Excel and the GA software package called Evolver. 
Experiments on company failure prediction have shown that CO-NN achieves stable performance enhancement of NN ensembles by choosing classifiers with the correlations within the ensemble in mind. Classifiers with potential multicollinearity problems are removed by the coverage optimization process, and CO-NN accordingly showed higher performance than a single NN classifier and the NN ensemble at the 1% significance level, and than the DT ensemble at the 5% significance level. However, further research issues remain. First, a decision optimization process to find the optimal combination function should be considered. Second, various learning strategies to deal with data noise should be introduced in more advanced studies.
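The coverage-optimization loop can be sketched with a toy GA over binary strings. The mean pairwise-correlation penalty below is a simple stand-in for the paper's VIF constraint, and the population size, mutation rate, and penalty weight are assumptions; the actual study used the commercial Evolver package rather than hand-rolled code.

```python
import numpy as np

def ga_select_ensemble(preds, y, pop=30, gens=40, pmut=0.05, seed=1):
    """GA search over binary strings; each bit switches one classifier
    in or out of the sub-ensemble. Fitness = majority-vote accuracy
    minus a penalty on mean pairwise |correlation| (a stand-in for the
    VIF constraint). preds: (n_clf, n_samples) array of 0/1 votes."""
    rng = np.random.default_rng(seed)
    n = preds.shape[0]

    def fitness(mask):
        if mask.sum() == 0:
            return -1.0
        sub = preds[mask.astype(bool)]
        vote = (sub.mean(0) >= 0.5).astype(int)      # majority vote
        acc = (vote == y).mean()
        pen = 0.0
        if mask.sum() > 1:
            k = int(mask.sum())
            corr = np.corrcoef(sub)
            pen = (np.abs(corr).sum() - k) / (k * k - k)  # mean off-diag |corr|
        return acc - 0.1 * pen

    popu = rng.integers(0, 2, (pop, n))
    for _ in range(gens):
        fit = np.array([fitness(m) for m in popu])
        popu = popu[np.argsort(fit)[::-1]]
        elite = popu[: pop // 2]                     # truncation selection
        children = []
        while len(children) < pop - len(elite):
            a, b = elite[rng.integers(0, len(elite), 2)]
            child = np.where(rng.random(n) < 0.5, a, b)   # uniform crossover
            flip = rng.random(n) < pmut                   # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        popu = np.vstack([elite, children])
    fit = np.array([fitness(m) for m in popu])
    return popu[fit.argmax()].astype(bool)
```

The returned boolean mask is the selected sub-ensemble; highly correlated classifiers depress fitness and tend to be pruned away, which is the behavior CO-NN relies on.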

The Comparative Analysis of External Dose Reconstruction in EPID and Internal Dose Measurement Using Monte Carlo Simulation (몬테 카를로 전산모사를 통한 EPID의 외부적 선량 재구성과 내부 선량 계측과의 비교 및 분석)

  • Jung, Joo-Young;Yoon, Do-Kun;Suh, Tae-Suk
    • Progress in Medical Physics
    • /
    • v.24 no.4
    • /
    • pp.253-258
    • /
    • 2013
  • The purpose of this study is to evaluate and analyze the relationship between the external dose reconstruction from radiation transmitted through a patient receiving radiation treatment, as recorded by an electronic portal imaging device (EPID), and the internal dose derived from Monte Carlo simulation, in order to provide a basic indicator for similar studies. The geometry of the experiment and of the radiation source was entered into Monte Carlo N-Particle (MCNPX), the simulation tool, and a tally card in MCNPX was used to visualize and image the dose information for deriving the EPID images. The water phantom was set at a source-to-surface distance (SSD) of 100 cm for the internal measurement, and the EPID was set at an SSD of 90 cm, 10 cm below. The internal dose was collected from the water phantom using the mesh tally function in MCNPX, and accumulated dose data were acquired by four-portal beam exposures. At the same time, after obtaining the dose that had passed through the water phantom, dose reconstruction was performed using a back-projection method. To compare the two cases, the transmitted dose, after self-calibration, was compared with the absorbed dose; we also evaluated the reconstructed dose from the EPID against the partially accumulated (overlapped) dose in the water phantom from the four-portal beam exposures. The summed dose data for the two cases were 3.4580 MeV/g (absorbed dose in water) and 3.4354 MeV/g (EPID reconstruction), and the two sums show good agreement, with a dose error of 0.6536%.

Evaluation of the quality of Italian Ryegrass Silages by Near Infrared Spectroscopy (근적외선 분광법을 이용한 이탈리안 라이그라스 사일리지의 품질 평가)

  • Park, Hyung-Soo;Lee, Sang-Hoon;Choi, Ki-Choon;Lim, Young-Chul;Kim, Jong-Gun;Jo, Kyu-Chea;Choi, Gi-Jun
    • Journal of The Korean Society of Grassland and Forage Science
    • /
    • v.32 no.3
    • /
    • pp.301-308
    • /
    • 2012
  • Near infrared reflectance spectroscopy (NIRS) has become increasingly used as a rapid and accurate method of evaluating chemical compositions in forages. This study explored the accuracy of NIRS for predicting chemical parameters of Italian ryegrass silages. A population of 267 Italian ryegrass silages representing a wide range of chemical parameters and fermentative characteristics was used. Silage samples were scanned intact and fresh at 2 nm intervals over the wavelength range 680~2,500 nm, and the optical data were recorded as log 1/Reflectance (log 1/R). The spectral data were regressed against a range of chemical parameters using partial least squares (PLS) multivariate analysis in conjunction with spectral math treatments to reduce the effect of extraneous noise. The optimum calibrations were selected on the basis of the highest coefficient of determination in cross validation ($R^2$) and the lowest standard error of cross validation (SECV). The results showed that NIRS predicted the chemical parameters with a very high degree of accuracy: the $R^2$ values were 0.98 (SECV 1.27%) for moisture, 0.88 (SECV 1.26%) for ADF, 0.84 (SECV 2.0%), 0.93 (SECV 0.96%) for CP, and 0.78 (SECV 0.56), 0.81 (SECV 0.31%), 0.88 (SECV 1.26%), and 0.82 (SECV 4.46) for pH, lactic acid, TDN, and RFV on a dry matter (%) basis, respectively. These results show the potential of NIRS to predict the chemical composition and fermentation quality of Italian ryegrass silages as a routine analysis method for feeding value evaluation and farmer advice.
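The PLS calibration step can be illustrated with a minimal NIPALS PLS1 (single-response) fit. The synthetic "spectra" and the component count in the example are illustrative assumptions, not the study's calibration set.

```python
import numpy as np

def pls1_fit(X, y, ncomp=3):
    """Minimal NIPALS PLS1: mean-center, extract `ncomp` latent
    components, and return regression coefficients B plus the centers,
    so that y_hat = (Xnew - xm) @ B + ym."""
    X = np.asarray(X, float).copy(); y = np.asarray(y, float).copy()
    xm, ym = X.mean(0), y.mean()
    X -= xm; y = y - ym
    W, P, q = [], [], []
    for _ in range(ncomp):
        w = X.T @ y; w /= np.linalg.norm(w)   # weight vector
        t = X @ w; tt = t @ t                 # scores
        p = X.T @ t / tt                      # X loadings
        qk = y @ t / tt                       # y loading
        X -= np.outer(t, p); y -= qk * t      # deflate X and y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)       # regression vector
    return B, xm, ym

def pls1_predict(Xnew, B, xm, ym):
    return (np.asarray(Xnew, float) - xm) @ B + ym
```

In a real NIRS workflow, `ncomp` would be chosen by cross validation (the $R^2$/SECV criterion the abstract describes), with one model fitted per constituent.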

A comparison study of 76Se, 77Se and 78Se isotope spikes in isotope dilution method for Se (셀레늄의 동위원소 희석분석법에서 첨가 스파이크 동위원소 76Se, 77Se 및 78Se들의 비교분석)

  • Kim, Leewon;Lee, Seoyoung;Pak, Yong-Nam
    • Analytical Science and Technology
    • /
    • v.29 no.4
    • /
    • pp.170-178
    • /
    • 2016
  • The accuracy and precision of isotope dilution (ID) methods with different spike isotopes, 76Se, 77Se, and 78Se, were compared for the analysis of selenium using a quadrupole ICP/MS equipped with an octopole reaction cell (ORC). In the analysis of an inorganic Se standard solution, all three spikes showed less than 1% error and 1% RSD over both short-term (a day) and long-term (several months) periods; they gave similar results to each other, with 78Se a bit better than 76Se and 77Se. However, the spikes gave different results when NIST SRM 1568a and SRM 2967 were analyzed, because of several interferences on the m/z values measured and calculated. Interferences due to the generation of SeH in the ORC were considered, as well as As and Br in the matrix. For SRM 1568a, which has a simple background matrix, all three spikes showed similar accuracy and precision, with a steady recovery rate of about 80%. The %RSD was a bit higher than for the inorganic standard (1.8%, 8.6%, and 6.3% for 78Se, 76Se, and 77Se, respectively) but low enough to conclude that the experiment is reliable. However, mussel tissue (SRM 2967), which has a complex matrix, gave inaccurate results with the 78Se spike (over 100% RSD), whereas 76Se and 77Se gave relatively good recovery rates of about 98.6% and 104.2%. Their errors were less than 5%, though the precision was somewhat worse, at 15% RSD. This clearly shows that the Br interference is so large that a simple mathematical calibration is not sufficient for a complex-matrix sample. In conclusion, all three spikes give similar results when the matrix is simple, but 78Se should be avoided when a large amount of Br exists in the matrix; either 76Se or 77Se will provide accurate results.
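The single-ratio isotope dilution calculation underlying all three spikes can be sketched from a mole balance on the blend: the measured ratio R of reference to spike isotope, together with the isotopic abundances of sample and spike, determines the sample concentration. The abundance and mass values in the example are round illustrative numbers, and the molar-basis simplification is an assumption, not the study's exact working equation.

```python
def id_concentration(R, C_spk, m_spk, m_smp,
                     h_smp_ref, h_smp_spk, h_spk_ref, h_spk_spk):
    """Single-ratio isotope dilution. R is the measured blend ratio
    (reference isotope / spike isotope); h_* are atom fractions of the
    reference and spike isotopes in the sample and in the spike.
    Derived by solving the blend mole balance
    R = (Cx*mx*hxr + Cy*my*hyr) / (Cx*mx*hxs + Cy*my*hys) for Cx."""
    return (C_spk * m_spk / m_smp
            * (R * h_spk_spk - h_spk_ref)
            / (h_smp_ref - R * h_smp_spk))
```

The denominator shows why spike choice matters: when an interference (e.g., from Br) biases R, the error is amplified as R·h_smp_spk approaches h_smp_ref.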

An Oceanic Current Map of the East Sea for Science Textbooks Based on Scientific Knowledge Acquired from Oceanic Measurements (해양관측을 통해 획득된 과학적 지식에 기반한 과학교과서 동해 해류도)

  • Park, Kyung-Ae;Park, Ji-Eun;Choi, Byoung-Ju;Byun, Do-Seong;Lee, Eun-Il
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.18 no.4
    • /
    • pp.234-265
    • /
    • 2013
  • Oceanic current maps in secondary school science and earth science textbooks have played an important role in piquing students' inquisitiveness and interest in the ocean. Such maps give students important opportunities to learn about ocean currents relevant to abrupt climate change and global energy balance issues. Nevertheless, serious and diverse errors in these secondary school oceanic current maps have been discovered upon comparison with up-to-date scientific knowledge of ocean currents. This study presents fundamental methods and strategies for constructing such maps error-free, through the unification of the diverse current maps currently in the textbooks. To do so, we analyzed the maps in 27 different textbooks, compared them with up-to-date maps from scientific journals, and developed a mapping technique for extracting digitized quantitative information on warm and cold currents in the East Sea. We devised analysis items for current visualization in relation to the branching features of the Tsushima Warm Current (TWC) in the Korea Strait. These items include: its nearshore and offshore branches; the northern limit and distance from the coast of the East Korea Warm Current; the outflow features of the TWC near the Tsugaru and Soya Straits and their returning currents; and the flow patterns of the Liman Cold Current and the North Korea Cold Current. The first draft of the current map was constructed from the scientific knowledge and input of oceanographers based on in-situ ocean measurements, and was corrected with the help of a questionnaire survey of the members of an oceanographic society. 
In addition, diverse comments were collected at a special session of the 2013 spring meeting of the Korean Oceanographic Society to assist in constructing an accurate current map of the East Sea, which was corrected repeatedly through in-depth discussions with oceanographers. Finally, we obtained constructive comments and evaluations of the interim version of the map from several well-known ocean current experts and incorporated their input to complete the final version. To avoid errors in the production of oceanic current maps in future textbooks, we provide the geolocation information (latitude and longitude) of the currents by digitizing the map. This study is expected to be the first step toward the completion of an oceanographic current map suitable for secondary school textbooks, and to encourage oceanographers to take more interest in ocean education.