• Title/Summary/Keyword: basis sub-models


The Analysis of Inquiry Activity in the Material Domain of the Elementary Science Textbook by Science and Engineering Practices (과학 공학적 실천에 의한 초등학교 과학 교과서 물질 영역의 탐구 활동 분석)

  • Cho, Seongho;Lim, Jiyeong;Lee, Junga;Choi, GeunChang;Jeon, Kyungmoon
    • Journal of Korean Elementary Science Education
    • /
    • v.35 no.2
    • /
    • pp.181-193
    • /
    • 2016
  • We examined the inquiry activities in the material domain of the elementary science textbooks and experimental workbooks based on the 2009 revised curriculum. The analysis framework was the SEP (Science and Engineering Practices): 'asking questions and defining problems', 'developing and using models', 'planning and carrying out investigations', 'analyzing and interpreting data', 'using mathematics and computational thinking', 'constructing explanations and designing solutions', 'engaging in argument from evidence', and 'obtaining, evaluating, and communicating information'. The sub-SEP of each grade band were also used. The results showed that, among around 40 sub-SEP, the 3rd~5th grade science textbooks and workbooks mainly emphasized 'make observations and/or measurements', 'represent data in tables and/or various graphical displays', or 'use evidence to construct or support an explanation or design a solution to a problem'. In the inquiry activities for 6th grade, the majority of sub-SEP included were likewise limited to 'collect data to produce data to serve as the basis for evidence to answer scientific questions or test design solutions', 'analyze and interpret data to provide evidence for phenomena', or 'construct a scientific explanation based on valid and reliable evidence obtained from sources'. Of the 8 SEP, 'asking questions and defining problems', 'using mathematics and computational thinking', and 'obtaining, evaluating, and communicating information' were rarely found. Educational implications were discussed.
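A minimal sketch of how such a frequency analysis could be tallied in code, assuming hypothetical coding results (the activity labels and sub-SEP assignments below are illustrative, not the study's data):

```python
from collections import Counter

# Hypothetical coding results: each inquiry activity is tagged with the
# sub-SEP (Science and Engineering Practices sub-element) it exercises.
coded_activities = [
    ("3rd grade, unit 2, activity 1", "make observations and/or measurements"),
    ("3rd grade, unit 2, activity 2", "represent data in tables and/or various graphical displays"),
    ("4th grade, unit 1, activity 3", "make observations and/or measurements"),
    ("5th grade, unit 3, activity 2", "use evidence to construct or support an explanation"),
]

# Frequency of each sub-SEP across all analyzed activities.
tally = Counter(sub_sep for _, sub_sep in coded_activities)
for sub_sep, count in tally.most_common():
    print(f"{count:3d}  {sub_sep}")
```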

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risks. The data used in the analysis totaled 10,545 rows, consisting of 160 columns including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial ratio indices. Unlike most prior studies, which used the default event as the basis for learning about default risk, this study calculated default risk using the market capitalization and stock price volatility of each company based on the Merton model. This made it possible to solve the problem of data imbalance caused by the scarcity of default events, which had been pointed out as a limitation of the existing methodology, and the problem of reflecting differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, default risks of unlisted companies without stock price information can be appropriately derived. Thus the approach can provide stable default risk assessment services to unlisted companies, such as small and medium-sized companies and startups, whose default risk is difficult to determine with traditional credit rating models. Although predicting corporate default risks with machine learning has been actively studied recently, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for the calculation of default risk, given that an entity's default risk information is very widely utilized in the market and sensitivity to differences in default risk is high; strict standards are also required for the calculation methods. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of the adequacy of evaluation methods, in consideration of past statistical data and experience on credit ratings and changes in future market conditions. This study reduced individual models' bias by utilizing stacking ensemble techniques that synthesize various machine learning models. This makes it possible to capture complex nonlinear relationships between default risk and various corporate information while retaining the advantage of machine learning-based default risk prediction models, which take less time to calculate. To calculate the sub-model forecasts used as input data for the Stacking Ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce forecasts. To compare the predictive power of the Stacking Ensemble model, Random Forest, MLP, and CNN models were trained with the full training data, and then the predictive power of each model was verified on the test set. The analysis showed that the Stacking Ensemble model exceeded the predictive power of the Random Forest model, which had the best performance among the single models. Next, to check for statistically significant differences between the Stacking Ensemble model's forecasts and those of each individual model, a pair was constructed between the Stacking Ensemble model and each individual model.
Because the results of the Shapiro-Wilk normality test showed that none of the pairs followed normality, the nonparametric Wilcoxon rank sum test was used to check whether the two model forecasts that make up each pair showed statistically significant differences. The analysis showed that the forecasts of the Stacking Ensemble model differed statistically significantly from those of the MLP model and the CNN model. In addition, this study provides a methodology that allows existing credit rating agencies to apply machine learning-based bankruptcy risk prediction methodologies, given that traditional credit rating models can also be incorporated as sub-models in calculating the final default probability. The Stacking Ensemble techniques proposed in this study can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will be used as a resource to increase practical use by overcoming and improving the limitations of existing machine learning-based models.
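A minimal sketch of the stacking idea described above, not the authors' implementation: out-of-fold forecasts from several sub-models, produced over a seven-fold split of the training data, become the input features of a meta-model. The data, the choice of sub-models, and the linear meta-model are assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))          # stand-in for the 160 financial columns
y = rng.normal(size=500)                # stand-in for Merton-model default risk

sub_models = [RandomForestRegressor(n_estimators=100, random_state=0),
              MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)]

# Out-of-fold predictions: the training data are split into 7 pieces, and each
# sub-model predicts the held-out piece so the meta-model is trained without leakage.
oof = np.zeros((len(y), len(sub_models)))
for train_idx, hold_idx in KFold(n_splits=7, shuffle=True, random_state=0).split(X):
    for j, model in enumerate(sub_models):
        model.fit(X[train_idx], y[train_idx])
        oof[hold_idx, j] = model.predict(X[hold_idx])

meta_model = LinearRegression().fit(oof, y)   # combines the sub-model forecasts
```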

An automaticity indicator computation and a factory automation procedure (자동화 지표 계산 및 공장자동화 순서 결정을 위한 방법)

  • Cho, Hyun-Bo;Jeong, Ki-Yong;Lee, In-Bom;Joo, Jae-Koo;Lee, Joo-Kang;Jeon, Jong-Hag
    • IE interfaces
    • /
    • v.10 no.1
    • /
    • pp.209-222
    • /
    • 1997
  • The paper provides a methodology for obtaining the automaticity indicator of a factory and the sequence of enabling technologies for factory automation. The automaticity indicator is a measure of the current automation status of a factory and can be used as a crucial criterion for future automation scheduling and investment. Although most industries have their own computation methods, which usually consider the number of workers on the shop floor, this research covers five evaluation items of automation: production facility, material transfer system, inspection and test system, information system, and flexibility. Detailed evaluation models are developed for each item. Automation sequencing prioritizes the enabling technologies of factory automation on the basis of criteria organized in two phases. The first phase includes the automation indicator, and the second phase includes six sub-criteria: production rate, quality, number of workers, capital investment, development duration, and development difficulty. For this evaluation, AHP (Analytical Hierarchy Process) is introduced to reduce the influence of the decision maker's subjective judgment. With the resulting automaticity indicator and automation sequence, the manager can save time and cost in building constructive and transparent automation plans.
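A minimal AHP sketch, not the paper's evaluation model: priority weights for the six sub-criteria named above are taken from the principal eigenvector of a pairwise comparison matrix, and a consistency ratio checks the judgments. The comparison values are hypothetical.

```python
import numpy as np

criteria = ["production rate", "quality", "number of workers",
            "capital investment", "development duration", "development difficulty"]

# Hypothetical reciprocal pairwise comparison matrix (Saaty 1-9 scale).
A = np.array([
    [1,   3,   5,   3,   5,   7],
    [1/3, 1,   3,   1,   3,   5],
    [1/5, 1/3, 1,   1/3, 1,   3],
    [1/3, 1,   3,   1,   3,   5],
    [1/5, 1/3, 1,   1/3, 1,   3],
    [1/7, 1/5, 1/3, 1/5, 1/3, 1],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # normalized priority weights

n = len(criteria)
ci = (eigvals[k].real - n) / (n - 1)          # consistency index
cr = ci / 1.24                                # random index for n = 6
for name, w in sorted(zip(criteria, weights), key=lambda t: -t[1]):
    print(f"{name:25s} {w:.3f}")
print(f"consistency ratio = {cr:.3f} (judgments acceptable if < 0.1)")
```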


Clustering of Seoul Public Parking Lots and Demand Prediction (서울시 공영주차장 군집화 및 수요 예측)

  • Jeongjoon Hwang;Young-Hyun Shin;Hyo-Sub Sim;Dohyun Kim;Dong-Guen Kim
    • Journal of Korean Society for Quality Management
    • /
    • v.51 no.4
    • /
    • pp.497-514
    • /
    • 2023
  • Purpose: This study aims to estimate the demand for various public parking lots in Seoul by clustering similar demand types of parking lots and predicting the demand for new public parking lots. Methods: We examined real-time parking information data and used time series clustering analysis to cluster public parking lots with similar demand patterns. We also performed various regression analyses of parking demand based on diverse heterogeneous data that affect parking demand and proposed a parking demand prediction model. Results: As a result of cluster analysis, 68 public parking lots in Seoul were clustered into four types with similar demand patterns. We also identified key variables impacting parking demand and obtained a precise model for predicting parking demands. Conclusion: The proposed prediction model can be used to improve the efficiency and publicity of public parking lots in Seoul, and can be used as a basis for constructing new public parking lots that meet the actual demand. Future research could include studies on demand estimation models for each type of parking lot, and studies on the impact of parking lot usage patterns on demand.
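A minimal sketch of the clustering step under simplifying assumptions: each parking lot is represented by an average hourly occupancy profile and grouped with ordinary k-means, whereas the study itself applies time series clustering to real-time parking data. The number of lots (68) and clusters (4) follow the abstract; the occupancy values are simulated.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_lots, n_hours = 68, 24
profiles = rng.random((n_lots, n_hours))       # stand-in for real-time parking data

# Standardize each hourly feature, then group lots with similar demand patterns.
scaled = StandardScaler().fit_transform(profiles)
labels = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(scaled)
print(np.bincount(labels))                      # number of lots per demand-pattern cluster
```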

A Study on the Design of Controller for Speed Control of the Induction Motor in the Train Propulsion System-2 (열차추진시스템에서 유도전동기의 속도제어를 위한 제어기 설계에 대한 연구-2)

  • Lee, Jung-Ho;Kim, Min-Seok;Lee, Jong-Woo
    • Journal of the Korean Society for Railway
    • /
    • v.13 no.2
    • /
    • pp.166-172
    • /
    • 2010
  • Vector control is currently used for the speed control of trains because high-performance induction motors are installed in electric railroad systems, and various control methods for the induction motor have become possible with the development of inverters and control theory. Rolling stock that uses induction motors can also brake the train with the AC motor. Therefore, models of the motor block and the induction motor are needed to apply these various methods. One control method for the induction motor is Variable Voltage Variable Frequency (VVVF) control, in which the torque and speed are controlled. The propulsion system of an electric railroad has many sub-systems, so analyzing the performance of the speed control is very complex. In this paper, simulation models of the speed control characteristics are developed using Matlab/Simulink. On the basis of the simulation models, the response to a disturbance input from the load is analyzed. Current, speed, and flux control models are also proposed to analyze the speed control characteristics of the train propulsion system.
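A minimal sketch of a speed control loop responding to a load disturbance, as a simplified stand-in for the paper's Matlab/Simulink vector-control model: the mechanical dynamics are reduced to a first-order inertia with friction, and a PI controller sets the torque command. All numerical values are illustrative assumptions.

```python
import numpy as np

J, B = 0.05, 0.01                 # inertia [kg m^2], viscous friction [N m s]
Kp, Ki = 2.0, 20.0                # PI gains
dt, t_end = 1e-3, 2.0

omega, integ = 0.0, 0.0
omega_ref = 100.0                 # speed command [rad/s]
history = []
for k in range(int(t_end / dt)):
    t = k * dt
    load = 5.0 if t > 1.0 else 0.0            # step load disturbance at t = 1 s
    err = omega_ref - omega
    integ += err * dt
    torque = Kp * err + Ki * integ            # PI torque command
    domega = (torque - load - B * omega) / J  # mechanical equation of motion
    omega += domega * dt
    history.append(omega)

print(f"speed after the load disturbance: {history[-1]:.1f} rad/s")
```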

Daily Streamflow Model for the Korean Watersheds (韓國 河川의 日 流出量 模型)

  • Kim, Tae-Cheol;Park, Seong-Ki;Ahn, Byoung-Gi
    • Water for future
    • /
    • v.29 no.5
    • /
    • pp.223-233
    • /
    • 1996
  • A daily streamflow model, DAWAST, considering the meteorological and geographic characteristics of Korean watersheds, has been developed to simulate daily streamflow with input data of daily rainfall and pan evaporation. The model is a conceptual one with three sub-models: optimization, generalization, and regionalization. The conceptual model consists of three linear reservoirs representing the surface, unsaturated, and saturated soil zones, and a water balance analysis is carried out in each soil zone on a daily basis. The optimization model calibrates the parameters by an optimization technique and is applicable to watersheds where daily streamflow data are available. The generalization model predicts the parameters by regression equations considering the geographic, soil type, land use, and hydrogeologic characteristics of the watershed and is applicable to ungaged medium or small watersheds. The regionalization model takes the parameters from already-analyzed watersheds considering river system, latitude, and longitude, and is applicable to ungaged large watersheds.
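A minimal sketch of a three-linear-reservoir daily water balance in the spirit of the conceptual model described above, not the calibrated DAWAST code: surface, unsaturated, and saturated stores each release a fixed fraction of their storage per day. Parameter values and the forcing series are illustrative assumptions.

```python
import numpy as np

rain = np.array([0, 12, 30, 5, 0, 0, 8, 0, 0, 0], dtype=float)   # mm/day
pan_evap = np.full_like(rain, 3.0)                                # mm/day

k_surf, k_unsat, k_sat = 0.5, 0.2, 0.05    # daily release coefficients per reservoir
perc1, perc2 = 0.3, 0.1                    # percolation fractions between stores
s1 = s2 = s3 = 0.0                         # storages [mm]

streamflow = []
for p, e in zip(rain, pan_evap):
    s1 += p - min(e, s1 + p)               # rainfall in, evaporation out of the surface zone
    q1, d1 = k_surf * s1, perc1 * s1       # surface runoff and percolation downward
    s1 -= q1 + d1
    s2 += d1
    q2, d2 = k_unsat * s2, perc2 * s2      # interflow and deep percolation
    s2 -= q2 + d2
    s3 += d2
    q3 = k_sat * s3                        # baseflow from the saturated zone
    s3 -= q3
    streamflow.append(q1 + q2 + q3)        # total daily streamflow [mm]

print(np.round(streamflow, 2))
```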


Yield and Production Forecasting of Paddy Rice at a Sub-county Scale Resolution by Using Crop Simulation and Weather Interpolation Techniques (기상자료 공간내삽과 작물 생육모의기법에 의한 전국의 읍면 단위 쌀 생산량 예측)

  • 윤진일;조경숙
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.3 no.1
    • /
    • pp.37-43
    • /
    • 2001
  • Crop status monitoring and yield prediction at higher spatial resolution are valuable tools in various decision making processes, including agricultural policy making by national and local governments. A prototype crop forecasting system was developed to project the size of the rice crop across geographic areas nationwide, based on daily weather patterns. The system consists of crop models and the input data for 1,455 cultivation zone units (the smallest administrative unit of local government in South Korea, called "Myun") making up coterminous South Korea. CERES-rice, a rice crop growth simulation model, was tuned to have genetic characteristics pertinent to domestic cultivars. Daily maximum/minimum temperature, solar radiation, and precipitation surfaces on a 1 km by 1 km grid spacing were prepared by spatial interpolation of 63 point observations from the Korea Meteorological Administration network. Spatial mean weather data were derived for each Myun and transformed into the model input format. Soil characteristics and management information for each Myun were available from the Rural Development Administration. The system was applied to forecasting national rice production for the recent 3 years (1997 to 1999). The model was run with the past weather data as of September 15 each year, which is about a month earlier than the actual harvest date. Simulated yields of the 1,455 Myuns were grouped into 162 counties by acreage-weighted summation to enable validation, since the official production statistics from the Ministry of Agriculture and Forestry are on a county basis. Forecast yields were less sensitive to changes in annual climate than the reported yields, and there was a relatively weak correlation between the forecast and the reported yields. However, the projected size of the rice crop in each county, obtained by multiplying the mean yield by the acreage, was close to the reported production, with $r^2$ values higher than 0.97 in all three years.
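A minimal sketch of the acreage-weighted summation from Myun to county described above, not the authors' code: Myun-level simulated yields are rolled up to county production and an acreage-weighted mean yield. The example records are hypothetical.

```python
from collections import defaultdict

# (county, simulated yield [kg/ha], paddy acreage [ha]) per Myun -- hypothetical records
myun_results = [
    ("County A", 5200, 340),
    ("County A", 4900, 210),
    ("County B", 5500, 480),
    ("County B", 5100, 150),
]

production = defaultdict(float)   # kg per county
acreage = defaultdict(float)      # ha per county
for county, y, a in myun_results:
    production[county] += y * a   # production = yield x acreage, summed over Myuns
    acreage[county] += a

for county in production:
    mean_yield = production[county] / acreage[county]   # acreage-weighted mean yield
    print(f"{county}: {production[county] / 1000:.0f} t, mean yield {mean_yield:.0f} kg/ha")
```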


Multi-Scaling Models of TCP/IP and Sub-Frame VBR Video Traffic

  • Erramilli, Ashok;Narayan, Onuttom;Neidhardt, Arnold;Saniee, Iraj
    • Journal of Communications and Networks
    • /
    • v.3 no.4
    • /
    • pp.383-395
    • /
    • 2001
  • Recent measurement and simulation studies have revealed that wide area network traffic displays complex statistical characteristics, possibly multifractal scaling, on fine timescales, in addition to the well-known property of self-similar scaling on coarser timescales. In this paper we investigate the performance and network engineering significance of these fine timescale features using measured TCP and MPEG2 video traces, queueing simulations, and analytical arguments. We demonstrate that the fine timescale features can affect performance substantially at low and intermediate utilizations, while the longer timescale self-similarity is important at intermediate and high utilizations. We relate the fine timescale structure in the measured TCP traces to flow controls, and show that UDP traffic, which is not flow controlled, lacks such fine timescale structure. Likewise we relate the fine timescale structure in video MPEG2 traces to sub-frame encoding. We show that it is possible to construct a relatively parsimonious multi-fractal cascade model of fine timescale features that matches the queueing performance of both the TCP and video traces. We outline an analytical method to estimate performance for traffic that is self-similar on coarse timescales and multi-fractal on fine timescales, and show that the engineering problem of setting safe operating points for planning or admission controls can be significantly influenced by fine timescale fluctuations in network traffic. The work reported here can be used to model the relevant characteristics of wide area traffic across a full range of engineering timescales, and can be the basis of more accurate network performance analysis and engineering.
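A minimal sketch of a conservative binomial multiplicative cascade, one common way to generate multifractal fine-timescale structure; the cascade model fitted in the paper is more elaborate, and the uniform split-weight distribution here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def binomial_cascade(total_load=1.0, depth=12, p_low=0.3):
    """Split the load recursively; at each stage a random fraction goes to the left half."""
    series = np.array([total_load])
    for _ in range(depth):
        w = rng.uniform(p_low, 1.0 - p_low, size=series.size)  # random split weights
        left, right = series * w, series * (1.0 - w)
        series = np.empty(2 * series.size)
        series[0::2], series[1::2] = left, right
    return series          # 2**depth fine-timescale traffic increments

trace = binomial_cascade()
print(trace.size, trace.sum())   # 4096 increments, total mass conserved (= 1.0)
```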


State Transition Model-based Design of Wireless Gateway Types to Connect between a Sub-network of Things and Mobile Internet and their Performance Evaluations (사물 서브 망과 모바일 인터넷을 연계하는 무선 게이트웨이 타입들의 상태천이모델 기반 설계와 성능 평가)

  • Seong, Cheol-Je;Kim, Changhwa
    • Journal of the Korea Society for Simulation
    • /
    • v.25 no.3
    • /
    • pp.1-14
    • /
    • 2016
  • This paper proposes four general wireless gateway types, distinguished by the way each processes the connection between a wireless sub-network of things and the mobile internet, which links the mobile network to the internet step by step. We also design general processing procedures for these four types using the state transition model. Gateways of each type were developed on the basis of the resulting state transition models, and their performance was evaluated through several tests, analyzed, and compared. As the result of our evaluation, the type that combines a low-power sleep-interrupt mechanism with polling for receiving data or responses in all the waiting states of the gateway shows the best performance among the four types in data transmission real-timeliness, data loss, and energy consumption.
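A minimal state transition sketch of a gateway that, in every waiting state, combines a low-power sleep-interrupt with periodic polling. The state names, events, and transition table are illustrative assumptions, not the paper's design.

```python
# Transition table: (current state, event) -> next state.
TRANSITIONS = {
    ("SLEEP",                "sensor_interrupt"): "RECEIVE_FROM_THINGS",
    ("SLEEP",                "poll_timer"):       "POLL_MOBILE_INTERNET",
    ("RECEIVE_FROM_THINGS",  "data_buffered"):    "FORWARD_TO_INTERNET",
    ("POLL_MOBILE_INTERNET", "response_ready"):   "FORWARD_TO_THINGS",
    ("POLL_MOBILE_INTERNET", "no_response"):      "SLEEP",
    ("FORWARD_TO_INTERNET",  "ack"):              "SLEEP",
    ("FORWARD_TO_THINGS",    "ack"):              "SLEEP",
}

def run(events, state="SLEEP"):
    """Drive the gateway model through a sequence of events."""
    for ev in events:
        state = TRANSITIONS.get((state, ev), state)   # undefined events leave the state unchanged
        print(f"{ev:18s} -> {state}")
    return state

run(["sensor_interrupt", "data_buffered", "ack", "poll_timer", "no_response"])
```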

Jacobian-free Newton Krylov two-node coarse mesh finite difference based on nodal expansion method

  • Zhou, Xiafeng
    • Nuclear Engineering and Technology
    • /
    • v.54 no.8
    • /
    • pp.3059-3072
    • /
    • 2022
  • A Jacobian-Free Newton Krylov Two-Node Coarse Mesh Finite Difference algorithm based on the Nodal Expansion Method (NEM_TNCMFD_JFNK) is successfully developed and proposed to solve three-dimensional (3D), multi-group reactor physics models. In the NEM_TNCMFD_JFNK method, the efficient JFNK method with a Modified Incomplete LU (MILU) preconditioner is integrated into the discrete systems of the NEM-based two-node CMFD method by constructing residual functions of only the nodal average fluxes and the eigenvalue. All the nonlinear corrective nodal coupling coefficients are updated on the basis of the two-node NEM formulation, including the discontinuity factor, every few Newton steps. Because the expansion coefficients and interface currents of the two-node NEM need not be chosen as solution variables to evaluate the residual functions, the NEM_TNCMFD_JFNK method can greatly reduce the number of solution variables and the computational cost compared with JFNK based on the conventional NEM. Finally, the NEM_TNCMFD_JFNK code is developed and analyzed by simulating the representative PWR MOX/UO2 core benchmark, the popular NEACRP 3D core benchmark, and a complicated full-core pin-by-pin homogeneous core model. Numerical solutions show that the proposed NEM_TNCMFD_JFNK method with the MILU preconditioner has good numerical accuracy and obtains higher computational efficiency than the NEM-based two-node CMFD algorithm that uses the power method in the outer iteration and a Krylov method with the MILU preconditioner in the inner iteration, which indicates that the NEM_TNCMFD_JFNK method can serve as a potential and efficient numerical tool for the reactor neutron diffusion analysis module in JFNK-based multiphysics coupling applications.
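A minimal JFNK illustration, not the reactor solver described above: SciPy's newton_krylov approximates Jacobian-vector products by finite differences inside a Krylov (GMRES) linear solve, so the Jacobian is never formed explicitly. The small nonlinear diffusion-like system below is a stand-in for the CMFD residual of nodal average fluxes and the eigenvalue.

```python
import numpy as np
from scipy.optimize import newton_krylov

n = 50

def residual(phi):
    """Residual of a 1-D nonlinear 'diffusion with removal' toy problem."""
    r = np.empty_like(phi)
    r[0], r[-1] = phi[0], phi[-1]                      # zero-flux boundary conditions
    r[1:-1] = (phi[:-2] - 2 * phi[1:-1] + phi[2:]      # diffusion stencil
               - 0.1 * phi[1:-1] ** 2 + 1.0)           # nonlinear removal plus a fixed source
    return r

phi0 = np.ones(n)                                      # initial guess for the flux
phi = newton_krylov(residual, phi0, method="gmres", f_tol=1e-8)
print(f"max residual after convergence: {np.abs(residual(phi)).max():.2e}")
```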