• Title/Summary/Keyword: Process-error model


Case of Non-face-to-face Teaching-learning in the subject of "Research and Guidance on Early Childhood Materials" in the Pre-service Early Childhood Teacher Training Program (예비유아교사 양성과정의 '유아 교재교구 연구 및 지도법' 교과목의 비대면 교수-학습 사례)

  • Kim, Ji-hyun
    • The Journal of the Convergence on Culture Technology
    • /
    • v.8 no.1
    • /
    • pp.227-238
    • /
    • 2022
  • This study presents a case of non-face-to-face teaching-learning in the subject "Research and Guidance on Early Childhood Materials" in a pre-service early childhood teacher training program. The non-face-to-face teaching-learning model was applied to 18 students at B University in region C who took the course in the first semester of 2021. The model consisted of video lectures, real-time Zoom classes, and various forms of 'communication' and 'participation' through frequent feedback and interaction. As teaching-learning strategies for the participation of pre-service early childhood teachers, the course employed comments on questions related to early childhood materials, in-depth reflection on early childhood materials through reflective journals and observation reports, and step-by-step presentation of plans, processes, and results for making early childhood materials. Exploring the pre-service teachers' experience of making early childhood materials yielded factors such as "growth experience through trial and error," "thinking from the child's point of view," "increasing efficiency and reducing burden through communication," "valuing process over result," and "the importance of communication and interaction in non-face-to-face classes."

The Inter-correlation Analysis between Oil Prices and Dry Bulk Freight Rates (유가와 벌크선 운임의 상관관계 분석에 관한 연구)

  • Ahn, Byoung-Churl;Lee, Kee-Hwan;Kim, Myoung-Hee
    • Journal of Navigation and Port Research
    • /
    • v.46 no.3
    • /
    • pp.289-296
    • /
    • 2022
  • The purpose of this study was to investigate the inter-correlation between crude oil prices and dry bulk freight rates. Eco-friendly shipping fuels have been actively developed to reduce carbon emissions; however, at the current pace of development, carbon neutrality will take longer than anticipated. Because of COVID-19 and the Russian invasion of Ukraine, crude oil price fluctuations have been exacerbated, so the impact of oil prices on dry bulk freight rates must be examined, as oil prices play a major role in shipping fuel costs. Using a VAR (vector autoregressive) model with monthly data on crude oil prices (Brent, Dubai, and WTI) and dry bulk freight rates (BDI, BCI, and BPI) from October 2008 to February 2022, the empirical analysis documents that oil prices have an impact on dry bulk freight rates. The forecast-error variance decomposition shows that WTI has the largest explanatory relationship with the BDI, Dubai ranks second, and Brent third. In conclusion, WTI and Dubai have the largest impact on the BDI, while there are some differences according to ship type.
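As an illustrative sketch of the estimation step, a minimal VAR(1) can be fitted by ordinary least squares. The two series below are synthetic stand-ins for one oil-price series and one freight index; the study's actual Brent/Dubai/WTI and BDI/BCI/BPI data are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 161  # roughly the monthly span 2008.10-2022.02

# Synthetic random-walk stand-ins for an oil price and a freight index.
oil = np.cumsum(rng.normal(0.0, 1.0, n))
bdi = 0.6 * oil + np.cumsum(rng.normal(0.0, 1.0, n))
data = np.column_stack([oil, bdi])

# VAR(1): y_t = c + A @ y_{t-1} + e_t, estimated equation-by-equation
# by least squares over the stacked lag matrix.
Y = data[1:]                                      # targets
Z = np.hstack([np.ones((len(Y), 1)), data[:-1]])  # intercept + one lag
B, *_ = np.linalg.lstsq(Z, Y, rcond=None)         # shape (3, 2): [c; A^T]

# One-step-ahead forecast from the last observation.
forecast = np.array([1.0, *data[-1]]) @ B
```

A full replication would add lag-order selection and the forecast-error variance decomposition used to rank WTI, Dubai, and Brent.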

Research on ANN based on Simulated Annealing in Parameter Optimization of Micro-scaled Flow Channels Electrochemical Machining (미세 유동채널의 전기화학적 가공 파라미터 최적화를 위한 어닐링 시뮬레이션에 근거한 인공 뉴럴 네트워크에 관한 연구)

  • Byung-Won Min
    • Journal of Internet of Things and Convergence
    • /
    • v.9 no.3
    • /
    • pp.93-98
    • /
    • 2023
  • In this paper, an artificial neural network based on simulated annealing was constructed. The mapping relationship between the parameters of electrochemical machining of micro-scaled flow channels and the channel shape was established by training on samples. The depth and width of electrochemically machined micro-scaled flow channels on a stainless steel surface were predicted, and a flow-channel experiment was carried out with a pulse power supply in NaNO3 solution to verify the established network model. The results show that the channel depth and width predicted by the simulated-annealing artificial neural network with a "4-7-2" structure are very close to the experimental values, with an error of less than 5.3%. The predicted and experimental data show that the degree of etching during electrochemical machining of the channels is closely related to voltage and current density. When the voltage is less than 5 V, a "small island" forms in the channel; when the voltage is greater than 40 V, lateral etching of the channel is relatively large and the "dam" between the channels disappears. The machining morphology of the channel is best at a voltage of 25 V.
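The "4-7-2" network trained by simulated annealing can be sketched as follows. The training data, initialisation, and annealing schedule here are all hypothetical; only the architecture and the Metropolis acceptance rule follow the abstract's description:

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp_4_7_2(w, x):
    """Forward pass of a 4-7-2 network; w is a flat 51-parameter vector."""
    W1 = w[:28].reshape(4, 7)
    b1 = w[28:35]
    W2 = w[35:49].reshape(7, 2)
    b2 = w[49:51]
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

# Hypothetical training set standing in for the machining data:
# 4 inputs (e.g. voltage, current density, pulse frequency, duty cycle)
# mapped to 2 outputs (channel depth, channel width).
X = rng.uniform(0.0, 1.0, (30, 4))
Y = np.column_stack([X @ np.array([0.5, 0.3, 0.1, 0.1]),
                     X @ np.array([0.2, 0.4, 0.2, 0.2])])

def loss(w):
    return np.mean((mlp_4_7_2(w, X) - Y) ** 2)

# Simulated annealing over the 51 weights: random perturbations are
# accepted with a temperature-dependent Metropolis criterion.
w = rng.normal(0.0, 0.1, 51)
cur = init = loss(w)
best_w, best = w.copy(), cur
T = 1.0
for _ in range(5000):
    cand = w + rng.normal(0.0, 0.05, 51)
    c = loss(cand)
    if c < cur or rng.random() < np.exp((cur - c) / T):
        w, cur = cand, c
        if c < best:
            best_w, best = cand.copy(), c
    T *= 0.999  # geometric cooling schedule
```

In practice the inputs would be the measured machining parameters and the outputs the experimental channel depth and width.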

Monte Carlo Study Using GEANT4 of Cyberknife Stereotactic Radiosurgery System (GEANT4를 이용한 정위적 사이버나이프 선량분포의 계산과 측정에 관한 연구)

  • Lee, Chung-Il;Shin, Jae-Won;Shin, Hun-Joo;Jung, Jae-Yong;Kim, Yon-Lae;Min, Jeong-Hwan;Hong, Seung-Woo;Chung, Su-Mi;Jung, Won-Gyun;Suh, Tae-Suk
    • Progress in Medical Physics
    • /
    • v.21 no.2
    • /
    • pp.192-200
    • /
    • 2010
  • A Cyberknife with a small field size is more difficult and complex to perform dosimetry for than conventional radiotherapy, owing to electronic disequilibrium, steep dose gradients, and spectral changes of photons and electrons. The purpose of this study was to demonstrate the usefulness of Geant4 as a verification tool for measured dose by comparing diode-detector measurements with Geant4 simulation results. The Monte Carlo model of the Cyberknife was developed in two steps. First, the treatment head was simulated and the bremsstrahlung spectrum was calculated. Second, percent depth dose (PDD) was calculated in a water phantom model for six cones of different sizes: 5 mm, 10 mm, 20 mm, 30 mm, 50 mm, and 60 mm. The relative output factor was calculated for 12 fields from 5 mm to 60 mm and compared with diode-detector measurements. Beam profiles and depth profiles were calculated for the six cones and at depths of 1.5 cm, 10 cm, and 20 cm, respectively. The PDD results showed errors of less than 2%, which is acceptable in a clinical setting. For the relative output factors, the difference was less than 3% for cones larger than 7.5 mm; however, there was a difference of 6.91% for the 5 mm cone. Beam profiles showed differences of less than 2% for cones larger than 20 mm and errors of less than 3.5% for cones smaller than 20 mm. From these results, we could demonstrate the usefulness of Geant4 as a dose verification tool.
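The PDD acceptance check can be sketched as a point-by-point comparison of normalised depth-dose curves. Both curves below are synthetic placeholders, not Geant4 or diode data:

```python
import numpy as np

# Synthetic depth-dose curves (arbitrary units) for one cone: stand-ins
# for the diode measurement and for the Geant4 simulation result.
depth_cm = np.linspace(0.0, 20.0, 41)
measured = np.exp(-0.06 * depth_cm) * (1.0 - np.exp(-3.0 * (depth_cm + 0.1)))
simulated = measured * (1.0 + 0.008 * np.sin(depth_cm))  # small deviation

def percent_depth_dose(dose):
    """Normalise a depth-dose curve to its maximum (PDD, in %)."""
    return 100.0 * dose / dose.max()

# Point-by-point PDD difference, as used for the <2% clinical tolerance.
diff = np.abs(percent_depth_dose(simulated) - percent_depth_dose(measured))
within_tolerance = bool(diff.max() < 2.0)
```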

Interactive analysis tools for the wide-angle seismic data for crustal structure study (Technical Report) (지각 구조 연구에서 광각 탄성파 자료를 위한 대화식 분석 방법들)

  • Fujie, Gou;Kasahara, Junzo;Murase, Kei;Mochizuki, Kimihiro;Kaneda, Yoshiyuki
    • Geophysics and Geophysical Exploration
    • /
    • v.11 no.1
    • /
    • pp.26-33
    • /
    • 2008
  • The analysis of wide-angle seismic reflection and refraction data plays an important role in lithospheric-scale crustal structure studies. However, it is extremely difficult to develop an appropriate velocity structure model directly from the observed data, and we have to improve the structure model step by step, because crustal structure analysis is an intrinsically non-linear problem. There are several subjective processes in wide-angle crustal structure modelling, such as phase identification and trial-and-error forward modelling. Because these subjective processes reduce the uniqueness and credibility of the resultant models, it is important to reduce subjectivity in the analysis procedure. From this point of view, we describe two software tools, PASTEUP and MODELING, for developing crustal structure models. PASTEUP is an interactive application that facilitates the plotting of record sections, analysis of wide-angle seismic data, and picking of phases. It is equipped with various filters and analysis functions to enhance the signal-to-noise ratio and to help phase identification. MODELING is an interactive application for editing velocity models and ray-tracing. Synthetic traveltimes computed by MODELING can be directly compared with the observed waveforms in PASTEUP. This reduces subjectivity in crustal structure modelling because traveltime picking, one of the most subjective processes in crustal structure analysis, is not required. MODELING can convert an editable layered structure model into two-way traveltimes, which can be compared with time sections of Multi Channel Seismic (MCS) reflection data. Direct comparison between the wide-angle structure model and the reflection data lends the model more credibility. In addition, both PASTEUP and MODELING are efficient tools for handling large datasets. These software tools help us develop more plausible lithospheric-scale structure models from wide-angle seismic data.
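For flat layers, the conversion MODELING performs from a layered velocity model to vertical two-way traveltime reduces to summing 2h/v over the layers. A minimal sketch with a hypothetical three-layer crustal column:

```python
def two_way_time(layers):
    """Vertical two-way traveltime (s) through stacked flat layers,
    each given as (thickness_km, velocity_km_per_s)."""
    return sum(2.0 * h / v for h, v in layers)

# Hypothetical column: sediments, upper crust, lower crust.
layers = [(1.0, 2.0), (5.0, 5.8), (10.0, 6.7)]
twt = two_way_time(layers)  # about 5.709 s
```

Two-way times for dipping interfaces or offset geometries require the ray tracing the application actually provides.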

Modeling and mapping fuel moisture content using equilibrium moisture content computed from weather data of the automatic mountain meteorology observation system (AMOS) (산악기상자료와 목재평형함수율에 기반한 산림연료습도 추정식 개발)

  • Lee, HoonTaek;WON, Myoung-Soo;YOON, Suk-Hee;JANG, Keun-Chang
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.22 no.3
    • /
    • pp.21-36
    • /
    • 2019
  • Dead fuel moisture content is a key variable in fire danger rating, as it affects fire ignition and behavior. This study evaluates simple regression models estimating the moisture content of a standardized 10-h fuel stick (10-h FMC) at three sites with different characteristics (urban, and outside/inside the forest). Equilibrium moisture content (EMC) was used as the independent variable, and in-situ measured 10-h FMC as the dependent variable and validation data. 10-h FMC spatial distribution maps were created for the dates with the most frequent fire occurrence during 2013-2018, and the 10-h FMC values on those dates were analyzed to investigate under which 10-h FMC conditions forest fire is likely to occur. As a result, the fitted equations explained a considerable part of the variance in 10-h FMC (62~78%). Against the validation data, the models performed well, with R2 ranging from 0.53 to 0.68, root mean squared error (RMSE) from 2.52% to 3.43%, and bias from -0.41% to 1.10%. When the 10-h FMC model fitted for one site was applied to the other sites, R2 remained the same while RMSE and bias increased up to 5.13% and 3.68%, respectively. The major deficiency of the 10-h FMC model was that it poorly captured the difference between 10-h FMC and EMC in the drying process after rainfall. From the analysis of 10-h FMC on the dates fires occurred, more than 70% of the fires occurred under a 10-h FMC of less than 10.5%. Overall, the present study suggests a simple model estimating 10-h FMC with acceptable performance. Applying the 10-h FMC model to the automatic mountain weather observation system was successfully tested to produce a national-scale 10-h FMC spatial distribution map. These data will be fundamental for forest fire research and will support policy makers.
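The site-specific models have the form 10-h FMC = a + b·EMC, fitted by least squares; R2, RMSE, and bias then follow directly from the residuals. The paired data below are synthetic, not the study's observations:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic paired observations: EMC (%) from weather data and
# in-situ measured 10-h fuel-stick moisture content (%).
emc = rng.uniform(5.0, 25.0, 100)
fmc = 1.5 + 0.8 * emc + rng.normal(0.0, 1.0, 100)

# Simple linear regression FMC = a + b * EMC.
b, a = np.polyfit(emc, fmc, 1)
pred = a + b * emc

# The three skill scores reported in the study.
resid = fmc - pred
r2 = 1.0 - np.sum(resid ** 2) / np.sum((fmc - fmc.mean()) ** 2)
rmse = np.sqrt(np.mean(resid ** 2))
bias = np.mean(pred - fmc)
```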

An Intelligent Decision Support System for Selecting Promising Technologies for R&D based on Time-series Patent Analysis (R&D 기술 선정을 위한 시계열 특허 분석 기반 지능형 의사결정지원시스템)

  • Lee, Choongseok;Lee, Suk Joo;Choi, Byounggu
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.79-96
    • /
    • 2012
  • As the pace of competition dramatically accelerates and the complexity of change grows, a variety of research has been conducted to improve firms' short-term performance and enhance their long-term survival. In particular, researchers and practitioners have paid attention to identifying promising technologies that bring a firm competitive advantage. The discovery of promising technology depends on how a firm evaluates the value of technologies, and many evaluation methods have been proposed. Approaches based on experts' opinions have been widely accepted for predicting the value of technologies. Whereas this approach provides in-depth analysis and ensures the validity of results, it is usually cost- and time-ineffective and limited to qualitative evaluation. Considerable research has attempted to forecast the value of technology using patent information to overcome this limitation. Patent-based technology evaluation has served as a valuable approach to technological forecasting because a patent contains a full and practical description of a technology in a uniform structure, and provides information not divulged in any other source. Although the patent-information-based approach has contributed to our understanding of predicting promising technologies, it has limitations: prediction is based on past patent information, and the interpretations of patent analyses are not consistent. To fill this gap, this study proposes a technology forecasting methodology that integrates the patent-information approach with an artificial intelligence method. The methodology consists of three modules: evaluation of how promising technologies are, implementation of a technology value prediction model, and recommendation of promising technologies. In the first module, the promise of technologies is evaluated from three different and complementary dimensions: impact, fusion, and diffusion. 
The impact of a technology refers to its influence on the development and improvement of future technologies, and is also clearly associated with its monetary value. The fusion of a technology denotes the extent to which it fuses different technologies, representing the breadth of search underlying it. Fusion can be calculated per technology or per patent, so this study measures two fusion indexes: fusion index per technology and fusion index per patent. Finally, the diffusion of a technology denotes its degree of applicability across scientific and technological fields; in the same vein, diffusion indexes per technology and per patent are considered. In the second module, the technology value prediction model is implemented using an artificial intelligence method. This study uses the values of the five indexes (impact index, fusion index per technology, fusion index per patent, diffusion index per technology, and diffusion index per patent) at previous times (e.g., t-n, t-n-1, t-n-2, ...) as input variables. The output variables, used for learning, are the values of the five indexes at time t. The learning method adopted is the backpropagation algorithm. In the third module, final promising technologies are recommended based on the analytic hierarchy process (AHP). AHP provides the relative importance of each index, leading to a final promisingness score for each technology. The applicability of the proposed methodology is tested using U.S. patents in international patent class G06F (electronic digital data processing) from 2000 to 2008. The results show that the mean absolute error of predictions produced by the proposed methodology is lower than that of multiple regression analysis for the fusion indexes, but slightly higher for the other indexes. 
These unexpected results may be explained, in part, by the small number of patents: since this study only uses patent data in class G06F, the sample is relatively small, leading to incomplete learning of the complex artificial intelligence structure. In addition, the fusion index per technology and the impact index are found to be important criteria for predicting promising technology. This study attempts to extend existing knowledge by proposing a new methodology for predicting technology value that integrates patent-information analysis and an artificial intelligence network. It helps managers engaged in technology development planning and policy makers implementing technology policy by providing a quantitative prediction methodology. In addition, it offers other researchers a deeper understanding of the complex field of technological forecasting.
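As a sketch of the AHP step, the relative importance of the five indexes can be derived as the normalised principal eigenvector of a pairwise-comparison matrix. The matrix below is hypothetical, purely for illustration; the experts' actual judgments are not given in the abstract:

```python
import numpy as np

# Hypothetical pairwise-comparison matrix over the five indexes
# (impact, fusion/tech, fusion/patent, diffusion/tech, diffusion/patent);
# entry A[i, j] is the judged importance of index i relative to index j.
A = np.array([
    [1.0,   3.0,   3.0,   5.0, 5.0],
    [1/3.0, 1.0,   1.0,   3.0, 3.0],
    [1/3.0, 1.0,   1.0,   3.0, 3.0],
    [1/5.0, 1/3.0, 1/3.0, 1.0, 1.0],
    [1/5.0, 1/3.0, 1/3.0, 1.0, 1.0],
])

# AHP weights = principal eigenvector of A, normalised to sum to 1.
vals, vecs = np.linalg.eig(A)
principal = np.real(vecs[:, np.argmax(np.real(vals))])
weights = principal / principal.sum()
```

The final promisingness score of a technology would then be the weighted sum of its five index values.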

A Real-Time Head Tracking Algorithm Using Mean-Shift Color Convergence and Shape Based Refinement (Mean-Shift의 색 수렴성과 모양 기반의 재조정을 이용한 실시간 머리 추적 알고리즘)

  • Jeong Dong-Gil;Kang Dong-Goo;Yang Yu Kyung;Ra Jong Beom
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.6
    • /
    • pp.1-8
    • /
    • 2005
  • In this paper, we propose a two-stage head tracking algorithm adequate for a real-time active camera system with pan-tilt-zoom functions. In the color convergence stage, we first assume that the shape of a head is an ellipse and that its model color histogram has been acquired in advance. Then, the mean-shift method is applied to roughly estimate the target position by examining the histogram similarity between the model and a candidate ellipse. To reflect temporal changes in object color and enhance the reliability of mean-shift-based tracking, the target histogram obtained in the previous frame is used to update the model histogram. In the updating process, to alleviate error accumulation due to outliers in the previous frame's target ellipse, the previous target histogram is obtained within an ellipse adaptively shrunken on the basis of the model histogram. In addition, to further enhance tracking reliability, we set the initial position closer to the true position by compensating the global motion, which is rapidly estimated from two 1-D projection datasets. In the subsequent stage, we refine the position and size of the ellipse obtained in the first stage by using shape information; here, we define a robust shape-similarity function based on the gradient direction. Extensive experiments showed that the proposed algorithm tracks heads well, even when a person moves fast, the head size changes drastically, or the background contains clutter and distracting colors. The proposed algorithm runs at about 30 fps on a standard PC.
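The histogram similarity driving the mean-shift stage is typically the Bhattacharyya coefficient between the model and candidate color histograms. A minimal sketch with hypothetical 16-bin histograms (the abstract does not specify bin counts):

```python
import numpy as np

def bhattacharyya(p, q):
    """Similarity of two histograms in [0, 1]; 1 means identical shape."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

rng = np.random.default_rng(3)
model = rng.uniform(0.1, 1.0, 16)                       # model head-color histogram
candidate = np.abs(model + rng.normal(0.0, 0.05, 16))   # nearby candidate ellipse

same = bhattacharyya(model, model)      # identical histograms -> 1.0
near = bhattacharyya(model, candidate)  # slightly perturbed -> close to 1
```

Mean-shift iterates the candidate ellipse toward positions of higher similarity; the model-histogram update described above would blend `model` with the previous frame's target histogram.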

Data issue and Improvement Direction for Marine Spatial Planning (해양공간계획 지원을 위한 정보 현안 및 개선 방향 연구)

  • CHANG, Min-Chol;PARK, Byung-Moon;CHOI, Yun-Soo;CHOI, Hee-Jung;KIM, Tae-Hoon;LEE, Bang-Hee
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.21 no.4
    • /
    • pp.175-190
    • /
    • 2018
  • Recently, the policies of advanced maritime countries have shifted from preemptive occupation and subsequent development of ocean space toward plan-based use. In this study, we identify the pending issues, and suggest improvements, that arise when a database of marine spatial information is built on a GIS system for Korean Marine Spatial Planning (KMSP). More than 250 spatial datasets for the seas of Korea were processed in the order of data collection, GIS transformation, data analysis and processing, data grouping, and spatial mapping. This process revealed problems such as coordinate-system errors, digitizing made necessary by gaps in the spatial information, and overlaps in the original marine spatial data. Moreover, methods are needed for processing data while excluding the personal information involved in producing spatial data for analysis of marine use, and for minimizing the differences between GIS-based spatial information and the underlying real-world information. Therefore, the collection and securing of missing marine spatial information should be strengthened for marine spatial planning, and it is necessary to link and expand the marine fisheries survey system. Marine spatial planning also requires evaluation indexes for marine space and detailed marine spatial maps, as well as a standard guideline and quality-management system covering the production, processing, analysis, and utilization phases, so that the quality of marine spatial information can be improved. Finally, we suggest the need for in-depth studies on opening and extending marine spatial information and on deriving application models.

Near Infrared Spectroscopy for Diagnosis: Influence of Mammary Gland Inflammation on Cow's Milk Composition Measurement

  • Roumiana Tsenkova;Stefka Atanassova;Kiyohiko Toyoda
    • Near Infrared Analysis
    • /
    • v.2 no.1
    • /
    • pp.59-66
    • /
    • 2001
  • Nowadays, medical diagnostics is efficiently supported by clinical chemistry, and near infrared spectroscopy is becoming a new dimension with high potential to provide valuable information for diagnosis. This investigation studied the influence of mammary gland inflammation, called mastitis, on cow's milk spectra and on milk composition measured by near infrared spectroscopy (NIRS). Milk somatic cell count (SCC) was used as a measure of mammary gland inflammation. Naturally occurring variations in milk composition within lactation and during milking were included in the experimental design of this study. Time series of unhomogenized raw milk spectral data were collected from 3 cows during morning and evening milking, for 5 consecutive months, within their second lactation; during the trial, the investigated cows had periods of mammary gland inflammation. Transmittance spectra of 258 milk samples were obtained with a NIRSystem 6500 spectrophotometer in the 1100-2400 nm region. Calibration equations for the examined milk components were developed by PLS regression using 3 different sample sets: samples with low somatic cell count (SCC), samples with high SCC, and a combined data set. NIR calibration and prediction of individual cow's milk fat, protein, and lactose were highly influenced by the presence, in the data set, of milk samples from animals with mammary gland inflammation. The best accuracy of prediction (i.e., the lowest SEP and the highest correlation coefficient) for fat, protein, and lactose was obtained for equations developed using only "healthy" samples with low SCC. The standard error of prediction increased and the correlation coefficient decreased significantly when equations for low-SCC milk were used to predict the examined components in "mastitis" samples with high SCC, and vice versa. 
A combined data set including samples from healthy and mastitic animals could be used to build regression models for screening; further use of a separate model for healthy samples improved milk composition measurement. Regression vectors for NIR milk protein measurement obtained for the "healthy" and "mastitic" groups were compared and revealed differences in the 1390-1450 nm, 1500-1740 nm, and 1900-2200 nm regions, illustrating the post-secretory breakdown of milk proteins by hydrolytic enzymes that occurs with mastitis. For the first time, it was found that monitoring the spectral differences in the water bands at 1440 nm and 1912 nm could provide valuable information for inflammation diagnosis.
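The PLS calibration step can be sketched in its simplest one-component form: extract a score vector from the dominant singular pair of X^T Y, then regress the constituents on that score. The spectra and reference values below are synthetic placeholders, not the 258 transmittance spectra:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-ins for NIR transmittance spectra (rows) and reference
# fat/protein/lactose values (3 columns).
n, p = 120, 60
X = rng.normal(0.0, 1.0, (n, p))
Y = X @ rng.normal(0.0, 1.0, (p, 3)) + rng.normal(0.0, 0.1, (n, 3))
Xc, Yc = X - X.mean(0), Y - Y.mean(0)

# One PLS component: weight vector from the dominant singular pair of
# X^T Y, score vector t, then least-squares loadings of Y on t.
u, s, vt = np.linalg.svd(Xc.T @ Yc, full_matrices=False)
t = Xc @ u[:, 0]               # score vector (one per sample)
q = (t @ Yc) / (t @ t)         # loading for each constituent
pred = Y.mean(0) + np.outer(t, q)

# Standard error of prediction (SEP) per constituent, as reported.
sep = np.sqrt(np.mean((pred - Y) ** 2, axis=0))
```

The study's actual calibrations would use multiple PLS components and separate low-SCC, high-SCC, and combined sample sets.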