• Title/Summary/Keyword: Methods of problem solving


Development and Applications of Mathematical Proof Learning-Teaching Methods: the Generative-Convergent Model (증명학습에서 생성-수렴 수업 모형의 개발과 적용)

  • 이종희;김부미
    • School Mathematics
    • /
    • v.6 no.1
    • /
    • pp.59-90
    • /
    • 2004
  • This study has two purposes. The first is to develop a learning-teaching model for enhancing students' creative proof capacities in demonstrative geometry as subject content; the second is to test its effectiveness experimentally. First, we develop the learning-teaching model for enhancing students' proof capacities, named the generative-convergent model based instruction. It consists of the following components: warming-up activities, generative activities, convergent activities, reflective discussion, and other high-quality resources. Second, to investigate the effects of the generative-convergent model based instruction, 160 8th-grade students were selected and assigned to experimental and control groups. We hypothesized that the generative-convergent model based instruction would be more effective than the traditional teaching method for improving middle school students' proof-writing capacities and remediating their errors. In conclusion, the generative-convergent model based instruction appears useful for improving middle-grade students' proof-writing capacities. We suggest the following: first, the generative-convergent model should be refined to enhance proof-problem solving capacities; second, teaching materials for the generative-convergent model based instruction should be developed.


Educational Psychology in the Age of the Fourth Industrial Revolution (제4차 산업혁명 시대의 교육심리학)

  • LEE, Sun-young
    • (The)Korea Educational Review
    • /
    • v.23 no.1
    • /
    • pp.231-260
    • /
    • 2017
  • The Fourth Industrial Revolution foreshadows radical changes in our lives. In this era, also called the digital revolution, individualized learning based on ubiquitous learning is emphasized. Learning content will center on procedural knowledge rather than declarative knowledge, and convergence education that breaks down the boundaries between learning domains will be realized. Above all, learners in the fourth industrial revolution era should have critical thinking and problem-solving abilities. Metacognition, based on self-control and cognitive flexibility, is important for effective self-directed and active learning. Creativity-based collaborative activities, social vision skills, and social and emotional skills are also important competencies. Therefore, to provide individualized learning content to learners in the fourth industrial revolution era, education should shift to a learning paradigm based on personal characteristics such as learners' self-efficacy, interest, curiosity, and creativity. In addition, evaluation formats should be diversified according to the changing teaching and learning methods. To cultivate teachers who can lead such educational innovation, their teaching capacities must be reconsidered. Teachers should be able to construct creative lessons by skillfully exploiting technology in future learning environments, and should also have the cognitive flexibility and collaborative ability to converge with other academic disciplines. Along with these discussions, we propose the need for policy intervention to accompany changes in education.

Detorque force and surface change of coated abutment screw after repeated closing and opening (코팅된 지대주 나사의 반복 착탈 후 풀림력과 표면변화에 대한 연구)

  • Jang, Jong-Suk;Kim, Hee-Jung;Chung, Chae-Heon
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.46 no.5
    • /
    • pp.500-510
    • /
    • 2008
  • Statement of problem: Research on WC/C (tungsten carbide/carbon) and TiN (titanium nitride) coatings on abutment screws is ongoing. Applied to metal surfaces, such coatings decrease the friction coefficient, increase resistance against corrosion, and reduce physical fragility. Coated abutment screws have been reported to improve abrasion resistance, adaptability, and detorque force. Purpose: This study examines the effects of coated abutment screws on screw loosening, with the aim of solving the clinically problematic loosening of abutment screws. Material and methods: Detorque force and surface changes were compared after 10 cycles of repeated closing and opening applied to uncoated titanium abutment screws (Group A) and abutment screws coated with WC/C (Group B) and TiN (Group C). Each group consisted of 10 abutment screws. Results: 1. Before repeated closing and opening, a somewhat rough surface with a regular direction was observed in Group A. Coated granules were observed in Groups B and C, and the overall coated layer appeared regular and smooth. 2. Before repeated closing and opening, the coated surface showed bigger and thicker coated granules in Group C than in Group B. 3. After repeated closing and opening, abrasion and deformation of the abutment screw surface were observed in Groups A and B; an exfoliation phenomenon was observed in Group B. 4. Group A showed the greatest decrease when the weights of the abutment screws were measured before and after repeated closing and opening. Group C showed less weight change than Group B, but the difference between the two groups was not statistically significant. 5. Groups B and C showed a statistically significantly higher average detorque force than Group A. 6. Group A showed a more prominent decreasing tendency in average detorque force than Groups B and C.
Conclusion: Abutment screws coated with WC/C or TiN did not show prominent surface changes compared with uncoated titanium abutment screws even after repeated use, and they showed excellent resistance against friction and high detorque force. Thus, applying a WC/C or TiN coating to abutment screws is expected to alleviate the screw loosening problem.

Design and Implementation of Communication Mechanism between External Educational Contents and LAMS (LAMS와 외부 교육용 콘텐츠간의 통신 메커니즘의 설계 및 구현)

  • Park, Chan;Jung, Seok-In;Han, Cheol-Dong;Seong, Dong-Ook;Yoo, Jae-Soo;Yoo, Kwan-Hee
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.3
    • /
    • pp.361-371
    • /
    • 2009
  • LAMS (learning activity management system)[1] is a useful tool for designing and effectively managing learning activities such as web search, chat, forum, grouping, and board. Although LAMS has been upgraded to support convenient creation of e-Learning contents, it has no method for communicating with external educational contents (EECs) made by external tools such as Flash, Java, and Visual C++. LAMS, which operates in a Web environment, should manage all EECs, including video and dynamic educational contents, as educational contents in the LAMS database. However, the current LAMS supports neither providing information about EECs to the LAMS database nor accessing information about EECs from that database. In this paper, we propose a communication mechanism between LAMS and EECs to solve this problem. In particular, the mechanism produces various statistical data from the collected information, provides them for use in education, and enables forms of learning management that were impossible in the original LAMS. Based on the proposed mechanism, teachers using LAMS can create more diverse educational contents and manage them within the system.
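
The core of the proposed mechanism is that EECs write their results into the LAMS database so that statistics become possible on the LMS side. A rough sketch of that idea follows; the function and field names are hypothetical (not the paper's actual API), and a plain dict stands in for the LAMS database.

```python
from statistics import mean

# Hypothetical sketch: an external educational content (EEC) reports a
# learner's result, and the LMS side builds statistics from the stored
# records.  A plain dict stands in for the LAMS database here.

db = {}   # lesson_id -> list of (learner, score) records

def report_from_eec(lesson_id, learner, score):
    """Called by an external content (e.g. a Flash or Java activity)."""
    db.setdefault(lesson_id, []).append((learner, score))

def lesson_statistics(lesson_id):
    """LMS-side aggregation that stock LAMS could not produce for EECs."""
    scores = [s for _, s in db.get(lesson_id, [])]
    return {"attempts": len(scores),
            "average": mean(scores) if scores else None}

report_from_eec("quiz-1", "kim", 80)
report_from_eec("quiz-1", "lee", 90)
```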

Three-Dimensional High-Frequency Electromagnetic Modeling Using Vector Finite Elements (벡터 유한 요소를 이용한 고주파 3차원 전자탐사 모델링)

  • Son Jeong-Sul;Song Yoonho;Chung Seung-Hwan;Suh Jung Hee
    • Geophysics and Geophysical Exploration
    • /
    • v.5 no.4
    • /
    • pp.280-290
    • /
    • 2002
  • Three-dimensional (3-D) electromagnetic (EM) modeling algorithm has been developed using finite element method (FEM) to acquire more efficient interpretation techniques of EM data. When FEM based on nodal elements is applied to EM problem, spurious solutions, so called 'vector parasite', are occurred due to the discontinuity of normal electric fields and may lead the completely erroneous results. Among the methods curing the spurious problem, this study adopts vector element of which basis function has the amplitude and direction. To reduce computational cost and required core memory, complex bi-conjugate gradient (CBCG) method is applied to solving complex symmetric matrix of FEM and point Jacobi method is used to accelerate convergence rate. To verify the developed 3-D EM modeling algorithm, its electric and magnetic field for a layered-earth model are compared with those of layered-earth solution. As we expected, the vector based FEM developed in this study does not cause ny vector parasite problem, while conventional nodal based FEM causes lots of errors due to the discontinuity of field variables. For testing the applicability to high frequencies 100 MHz is used as an operating frequency for the layer structure. Modeled fields calculated from developed code are also well matched with the layered-earth ones for a model with dielectric anomaly as well as conductive anomaly. In a vertical electric dipole source case, however, the discontinuity of field variables causes the conventional nodal based FEM to include a lot of errors due to the vector parasite. Even for the case, the vector based FEM gave almost the same results as the layered-earth solution. The magnetic fields induced by a dielectric anomaly at high frequencies show unique behaviors different from those by a conductive anomaly. 
Since our 3-D EM modeling code can reflect the effect from a dielectric anomaly as well as a conductive anomaly, it may be a groundwork not only to apply high frequency EM method to the field survey but also to analyze the fold data obtained by high frequency EM method.
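
For readers unfamiliar with the solver combination named in this abstract, a minimal NumPy sketch of a Jacobi-preconditioned iteration for a complex symmetric system follows. It implements the conjugate-orthogonal CG form of the bi-conjugate gradient method (the specialization valid when A equals its transpose), and a small dense toy matrix stands in for the sparse FEM system.

```python
import numpy as np

def cocg_jacobi(A, b, tol=1e-10, max_iter=500):
    """Conjugate-orthogonal CG: bi-conjugate gradient specialized to a
    complex *symmetric* (not Hermitian) matrix, with a point-Jacobi
    preconditioner M = diag(A).  Note the unconjugated inner products."""
    x = np.zeros_like(b, dtype=complex)
    Minv = 1.0 / np.diag(A)           # point-Jacobi preconditioner
    r = b - A @ x
    z = Minv * r
    p = z.copy()
    rho = r @ z                       # r^T z, no complex conjugate
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rho / (p @ Ap)        # p^T A p, no complex conjugate
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = Minv * r
        rho, rho_old = r @ z, rho
        p = z + (rho / rho_old) * p
    return x

# Toy complex symmetric, diagonally dominant test system.
rng = np.random.default_rng(0)
B = rng.standard_normal((12, 12)) + 1j * rng.standard_normal((12, 12))
A = B + B.T + 40 * np.eye(12)
b = rng.standard_normal(12) + 1j * rng.standard_normal(12)
x = cocg_jacobi(A, b)
```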

A Study on Shape Optimization of Plane Truss Structures (평면(平面) 트러스 구조물(構造物)의 형상최적화(形狀最適化)에 관한 연구(硏究))

  • Lee, Gyu won;Byun, Keun Joo;Hwang, Hak Joo
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.5 no.3
    • /
    • pp.49-59
    • /
    • 1985
  • Formulating the geometric optimization of truss structures based on elasticity theory leads to a nonlinear programming problem that must deal with the cross-sectional areas of the members and the coordinates of the nodes simultaneously. Several techniques have been proposed and adopted for this nonlinear programming problem to date. These techniques, however, bear limitations on truss shapes, loading conditions, and design criteria that restrict their practical application to real structures. In this study, a generalized algorithm for the geometric optimization of truss structures is developed that eliminates the above limitations. The algorithm uses a two-phase technique. In the first phase, the cross-sectional areas of the truss members are optimized by transforming the nonlinear problem via SUMT (sequential unconstrained minimization technique) and solving it with the modified Newton-Raphson method. In the second phase, the geometric shape is optimized using the unidirectional search technique of the Rosenbrock method, which makes it possible to minimize only the objective function. The developed algorithm is numerically tested on several truss structures with various shapes, loading conditions, and design criteria, and compared with the results of other algorithms to examine its applicability and stability. The numerical comparisons show that the two-phase algorithm is safely applicable under any design criteria, and that its convergence is fast and stable compared with other iterative methods for the geometric optimization of truss structures.
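
The first-phase idea of transforming a constrained sizing problem via SUMT and solving the resulting unconstrained problems with Newton iterations can be illustrated on a deliberately tiny, hypothetical one-bar problem; the load and allowable stress values below are invented, and this is a sketch of the general SUMT scheme, not the paper's actual formulation.

```python
# Toy SUMT illustration on a hypothetical one-bar sizing problem:
# minimize the member area A (proportional to weight) subject to the
# stress constraint P / A <= sigma_allow.  Interior penalty function:
#     phi(A, r) = A + r / (sigma_allow - P / A),  with r -> 0,
# each unconstrained phi minimized by damped Newton iterations.
P, sigma_allow = 100.0, 250.0       # invented load and allowable stress

def phi_grad_hess(A, r):
    g = sigma_allow - P / A         # constraint margin, must stay > 0
    dg = P / A**2                   # dg/dA
    grad = 1.0 - r * dg / g**2
    hess = r * (2.0 * dg**2 / g**3 + 2.0 * P / A**3 / g**2)
    return grad, hess

A = 1.0                             # strictly feasible start (P/A = 100)
for r in (1.0, 1e-2, 1e-4, 1e-6):   # SUMT: shrink the penalty parameter
    for _ in range(50):             # Newton iterations on phi(., r)
        grad, hess = phi_grad_hess(A, r)
        A_new = A - grad / hess
        while A_new <= 0 or sigma_allow - P / A_new <= 0:
            A_new = 0.5 * (A + A_new)   # damp the step to stay feasible
        if abs(A_new - A) < 1e-12:
            break
        A = A_new
# A now approaches the constrained optimum P / sigma_allow = 0.4
```

As r shrinks, the penalty wall at the constraint boundary steepens and the unconstrained minimizers converge to the constrained optimum, which is the essence of the SUMT transformation used in the first phase.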


An Analysis of Research Trends Related to Software Education for Young Children in Korea (유아의 소프트웨어 교육 관련 국내 최근 연구의 경향 분석)

  • Chun, Hui Young;Park, Soyeon;Sung, Jihyun
    • Korean Journal of Child Education & Care
    • /
    • v.19 no.2
    • /
    • pp.177-196
    • /
    • 2019
  • Objective: This study aims to analyze research trends related to software education for young children, focusing on studies published in Korea from 2016 to March 2019. Methods: A total of 26 research publications on software education for young children, retrieved from the Korea Citation Index and the Research Information Sharing Service, were identified for the analysis. Trends in these publications were classified and examined by publication date, type of publication, and field of study. To examine the research methods used, the analysis covered key topics, types of research methods, and the characteristics of the study variables. Results: The number of publications on software education for young children has increased over the three years, and most were published as scholarly journal articles. Among the 26 studies analyzed, 16 (61.5%) are related to early childhood education or child studies. The key topics and target subjects of most research concern the development of software education curricula for young children or the effectiveness of software education for 4- and 5-year-old children. Most of the analyzed studies are experimental designs or literature reviews. The most frequently studied variable is young children's cognitive characteristics. Among the studies that employ educational programs, the use of a physical computing environment is prevalent, and the most frequently used programming robot is "Albert". The duration of program implementation varies from 5 to 48 weeks. In the analyzed studies, computational thinking is conceptualized as a problem-solving skill that can be improved by software education and is assessed by individual instruments measuring its sub-factors.
Conclusion/Implications: The present study reveals that, although the number of research publications on software education for young children has increased, the accumulated body of research and the variety of research methods are still insufficient. Greater interest in software education for young children and more research activity in this area are needed to develop and implement developmentally appropriate software education programs in early childhood settings.

Multi-Variate Tabular Data Processing and Visualization Scheme for Machine Learning based Analysis: A Case Study using Titanic Dataset (기계 학습 기반 분석을 위한 다변량 정형 데이터 처리 및 시각화 방법: Titanic 데이터셋 적용 사례 연구)

  • Juhyoung Sung;Kiwon Kwon;Kyoungwon Park;Byoungchul Song
    • Journal of Internet Computing and Services
    • /
    • v.25 no.4
    • /
    • pp.121-130
    • /
    • 2024
  • As information and communication technology (ICT) improves exponentially, the types and amount of available data also increase. Although data analysis, including statistics, is essential for utilizing this large amount of data, conventional methods face inevitable limits in processing such diverse and complex data. Meanwhile, with enhanced computational performance and growing demand for autonomous systems, there are many attempts to apply machine learning (ML) in various fields. In particular, processing the data for the model input and designing the model to solve the objective function are critical to achieving good model performance. Data processing methods appropriate to each data type and property have been presented in many studies, and ML performance varies greatly depending on the method chosen. Nevertheless, deciding which data processing method to use is difficult, since the types and characteristics of data have become more diverse. Specifically, multi-variate data processing is essential for solving non-linear problems with ML. In this paper, we present a multi-variate tabular data processing scheme for ML-aided data analysis using the Titanic dataset from Kaggle, which includes various kinds of data. We present methods such as input-variable filtering based on statistical analysis and normalization according to data properties. In addition, we analyze the data structure using visualization. Lastly, we design an ML model, train it with the proposed multi-variate data processing applied, and analyze the trained model's passenger-survival prediction performance. We expect the proposed multi-variate data processing and visualization to extend to various environments for ML-based analysis.
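
The kind of preprocessing the abstract describes (input-variable filtering, encoding, normalization) can be sketched in plain Python. The records below merely mimic a few Kaggle Titanic columns with invented values, and the 50% missing-ratio threshold is an illustrative assumption, not the paper's setting.

```python
import math

# Invented records mimicking a few Kaggle Titanic columns; the real
# dataset has ~12 columns and 891 rows.
records = [
    {"Pclass": 3, "Sex": "male",   "Age": 22.0, "Fare": 7.25,  "Cabin": None},
    {"Pclass": 1, "Sex": "female", "Age": 38.0, "Fare": 71.28, "Cabin": "C85"},
    {"Pclass": 3, "Sex": "female", "Age": 26.0, "Fare": 7.93,  "Cabin": None},
    {"Pclass": 1, "Sex": "female", "Age": 35.0, "Fare": 53.10, "Cabin": None},
]

def preprocess(rows):
    cols = list(rows[0])
    # 1) Input-variable filtering: drop columns with too many missing values.
    keep = [c for c in cols
            if sum(r[c] is None for r in rows) / len(rows) < 0.5]
    numeric = [c for c in keep if not isinstance(rows[0][c], str)]
    # 2) Integer-encode the categorical (string) columns.
    codes, out = {}, []
    for r in rows:
        enc = {}
        for c in keep:
            if c in numeric:
                enc[c] = float(r[c])
            else:
                enc[c] = codes.setdefault(c, {}).setdefault(r[c], len(codes[c]))
        out.append(enc)
    # 3) Z-score normalization of the numeric columns.
    for c in numeric:
        vals = [r[c] for r in out]
        mu = sum(vals) / len(vals)
        sd = math.sqrt(sum((v - mu) ** 2 for v in vals) / len(vals)) or 1.0
        for r in out:
            r[c] = (r[c] - mu) / sd
    return out

clean = preprocess(records)   # "Cabin" (75% missing) is filtered out
```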

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.139-155
    • /
    • 2009
  • For the last few decades, many studies have tried to explore venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Venture companies, which generally make the best use of information technology, have tended to give high returns to investors; for this reason, many of them are keen on attracting investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard and Poor's, Moody's, and Fitch is a crucial source regarding such pivotal concerns as a company's stability, growth, and risk status. But this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses, presenting recent empirical results using financial data of Korean venture companies listed on KOSDAQ in the Korea Exchange. In addition, this paper uses a multi-class SVM to predict the DEA-based efficiency rating for venture businesses derived from our proposed method. Our approach sheds light on ways to locate efficient companies generating a high level of profits. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. Therefore, this paper is constructed on the basis of two ideas for classifying which companies are more efficient venture companies: i) making a DEA-based multi-class rating for the sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies.
First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision-making units (DMUs) using a linear programming based model. It is non-parametric because it requires no assumption about the shape or parameters of the underlying production function. DEA has already been widely applied to evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies, and DEA has also been applied to corporate credit ratings. In this study we utilized DEA to sort venture companies into efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the results of the DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is based on statistical learning theory and has thus far shown good performance, especially in its capacity to generalize in classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum-margin hyperplane, i.e., the hyperplane achieving the maximum separation between classes; support vectors are the points closest to this hyperplane. When the classes are not linearly separable, a kernel function can be used: in the case of nonlinear class boundaries, the original input space is mapped into a high-dimensional dot-product feature space. Many studies have applied SVM to bankruptcy prediction, financial time series forecasting, and credit rating estimation. In this study we employed SVM to develop a data mining-based efficiency prediction model.
We used the Gaussian radial basis function as the kernel function of the SVM. For multi-class SVM, we adopted the one-against-one binary classification approach and two all-together methods, proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. In this research, we used corporate information on 154 companies listed on the KOSDAQ market in the Korea Exchange, obtaining the companies' financial information for 2005 from KIS (Korea Information Service, Inc.). Using these data, we made a multi-class rating with DEA efficiency and built a data mining-based multi-class prediction model. Among the three multi-classification approaches, the hit ratio of the Weston and Watkins method was the best on the test data set. In multi-classification problems such as efficiency ratings of venture businesses, it is very useful for investors to know the class within a one-class error when the accurate class is difficult to determine in the actual market. We therefore also present accuracy results within one-class errors, for which the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than a binary classification, notwithstanding its efficiency level. We believe this model can help investors in decision making, as it provides a reliable tool for evaluating venture companies in the financial domain. For future research, we perceive the need to enhance the variable selection process, the parameter selection of the kernel function, the generalization, and the sample size for multi-class problems.
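
The one-against-one scheme mentioned above, one binary classifier per pair of classes combined by majority voting, is easy to sketch. Here a trivial nearest-centroid rule stands in for each pairwise SVM, and the rating labels and data points are invented for illustration.

```python
from itertools import combinations

# One-against-one multi-classification: train one binary classifier per
# pair of classes and combine their decisions by majority voting.
# A nearest-centroid rule stands in for each pairwise SVM here.

def train_pairwise(X, y):
    centroid = lambda pts: [sum(col) / len(pts) for col in zip(*pts)]
    models = {}
    for a, b in combinations(sorted(set(y)), 2):   # K(K-1)/2 classifiers
        models[(a, b)] = (centroid([x for x, t in zip(X, y) if t == a]),
                          centroid([x for x, t in zip(X, y) if t == b]))
    return models

def predict(models, x):
    votes = {}
    for (a, b), (ma, mb) in models.items():
        da = sum((xi - mi) ** 2 for xi, mi in zip(x, ma))
        db = sum((xi - mi) ** 2 for xi, mi in zip(x, mb))
        winner = a if da <= db else b
        votes[winner] = votes.get(winner, 0) + 1
    return max(votes, key=votes.get)   # class with the most pairwise wins

# Invented 2-D points for three hypothetical rating classes.
X = [[0, 0], [0, 1], [5, 5], [5, 6], [10, 0], [10, 1]]
y = ["A1", "A1", "A2", "A2", "A3", "A3"]
models = train_pairwise(X, y)          # 3 pairwise "classifiers"
```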

A Study on Forecasting Accuracy Improvement of Case Based Reasoning Approach Using Fuzzy Relation (퍼지 관계를 활용한 사례기반추론 예측 정확성 향상에 관한 연구)

  • Lee, In-Ho;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.67-84
    • /
    • 2010
  • In business, forecasting is the task of estimating what is expected to happen in the future in order to make managerial decisions and plans. Accurate forecasting is therefore very important for major managerial decision making and is the basis for various business strategies. But it is very difficult to make unbiased and consistent estimates because of uncertainty and complexity in the future business environment. That is why we should use scientific forecasting models to support business decision making, and make an effort to minimize the model's forecasting error, the difference between the observed and estimated values. Nevertheless, minimizing this error is not an easy task. Case-based reasoning is a problem-solving method that utilizes similar past cases to solve the current problem. To build successful case-based reasoning models, it is very important to retrieve not only the most similar case but also the most relevant one. To retrieve similar and relevant cases from past cases, the measurement of similarity between cases is a key factor, and it is especially difficult when the cases contain symbolic data. The purpose of this study is to improve the forecasting accuracy of the case-based reasoning approach using fuzzy relations and composition. In particular, two methods are adopted to measure the similarity between cases containing symbolic data: one derives the similarity matrix following binary logic (a judgment of sameness between two symbolic values), and the other derives the similarity matrix following fuzzy relation and composition. The study proceeds in the following order: data gathering and preprocessing, model building and analysis, validation analysis, and conclusion. First, in the data gathering and preprocessing stage, we collect a data set including categorical dependent variables.
The data set is cross-sectional, and its independent variables include several qualitative variables expressed as symbolic data. The research data consist of financial ratios and the corresponding bond ratings of Korean companies. The ratings employed in this study cover all bonds rated by one of the bond rating agencies in Korea. Our total sample includes 1,816 companies whose commercial papers were rated in the period 1997~2000. Credit grades are defined as outputs and classified into 5 rating categories (A1, A2, A3, B, C) according to credit level. Second, in the model building and analysis stage, we derive the similarity matrices following binary logic and fuzzy composition to measure the similarity between cases containing symbolic data. The types of fuzzy composition used in this process are max-min, max-product, and max-average. The analysis is then carried out by the case-based reasoning approach with the derived similarity matrix. Third, in the validation analysis stage, we verify the validity of the model through the McNemar test based on the hit ratio. Finally, we draw conclusions from the study. As a result, the similarity measuring method using fuzzy relation and composition shows good forecasting performance compared to the similarity measuring method using binary logic for measuring the similarity between two symbolic values. However, the differences in forecasting performance among the types of fuzzy composition are not statistically significant. The contribution of this study is to propose another methodology, in which fuzzy relations and fuzzy composition can be applied to the similarity measurement between two symbolic values, which is the most important factor in building a case-based reasoning model.
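
For readers unfamiliar with fuzzy composition, the max-min variant named above works as follows on toy membership matrices; the values are invented for illustration, not taken from the study's bond-rating data.

```python
# Max-min composition of fuzzy relations R: X x Y and S: Y x Z,
#     (R o S)(x, z) = max over y of min(R(x, y), S(y, z)),
# one of the three compositions (max-min, max-product, max-average)
# the study uses to propagate similarity between symbolic values.

def max_min_compose(R, S):
    return [[max(min(R[i][k], S[k][j]) for k in range(len(S)))
             for j in range(len(S[0]))] for i in range(len(R))]

# A hypothetical fuzzy similarity relation over three symbolic values.
R = [[1.0, 0.7, 0.2],
     [0.7, 1.0, 0.5],
     [0.2, 0.5, 1.0]]
T = max_min_compose(R, R)   # R o R: similarity through an intermediate value
```

Note how composing R with itself raises the direct similarity 0.2 between the first and third values to 0.5 via the middle value, which is the mechanism that lets fuzzy composition capture graded relatedness that binary (same/different) logic misses.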