• Title/Summary/Keyword: error backpropagation algorithm


Using Artificial Neural Network in the reverse design of a composite sandwich structure

  • Mortda M. Sahib;Gyorgy Kovacs
    • Structural Engineering and Mechanics
    • /
    • v.85 no.5
    • /
    • pp.635-644
    • /
    • 2023
  • The design of honeycomb sandwich structures is often challenging because these structures can be tailored from a variety of possible core and face sheet configurations; the design of sandwich structures is therefore a time-consuming and complex task. The authors develop a data-driven computational approach that integrates an analytical method and an Artificial Neural Network (ANN) to rapidly predict the design of sandwich structures for a targeted maximum structural deflection. The elaborated reverse ANN design approach is applied to obtain the thickness of the sandwich core, the thickness of the laminated face sheets, and safety factors for a composite sandwich structure. The data required for building the ANN model were obtained using the governing equations of the sandwich components in conjunction with the Monte Carlo method. The functional relationship between the input and output features was then created using the neural network backpropagation (BP) algorithm. The input variables were the dimensions of the sandwich structure, the applied load, the core density, and the maximum deflection, the last of which is the reverse input given by the designer. The strong performance of the reverse ANN model was revealed by a low mean square error (MSE) together with a coefficient of determination (R2) close to unity. Furthermore, the output of the model was in good agreement with the analytical solution, with a maximum error of 4.7%. The combination of the reverse concept and ANN may provide a novel approach to the design of sandwich structures. The main added value of this study is the elaboration of a reverse ANN model, which provides a low-cost computational technique and saves time in the design or redesign of sandwich structures compared to analytical and finite element approaches.
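The reverse-design workflow this abstract describes (sample a governing equation with Monte Carlo, then train a backpropagation network on the inverted input/output mapping) can be sketched in a few lines. The deflection formula, network size, and hyperparameters below are illustrative stand-ins, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo sampling of a *hypothetical* governing equation:
# deflection w = P / (k * t**3) for load P and core thickness t
# (a toy stand-in for the sandwich-plate equations in the paper).
P = rng.uniform(1.0, 5.0, size=(500, 1))
t = rng.uniform(0.5, 2.0, size=(500, 1))
w = P / (100.0 * t**3)

# Reverse mapping: inputs are (load, target deflection), output is thickness.
X = np.hstack([P, w])
Y = t
X = (X - X.mean(0)) / X.std(0)   # normalise features for stable training

# One-hidden-layer network trained with error backpropagation.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05

losses = []
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)          # forward pass
    out = h @ W2 + b2
    err = out - Y
    losses.append(float((err**2).mean()))
    g_out = 2 * err / len(X)          # gradient of MSE w.r.t. output
    g_h = (g_out @ W2.T) * (1 - h**2) # backpropagate through tanh
    W2 -= lr * h.T @ g_out; b2 -= lr * g_out.sum(0)
    W1 -= lr * X.T @ g_h;   b1 -= lr * g_h.sum(0)

print(f"MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Training drives the MSE down toward the low values the abstract reports; in the paper this mapping is learned over the full set of design variables rather than this two-input toy.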

Prediction of Elementary Students' Computer Literacy Using Neural Networks (신경망을 이용한 초등학생 컴퓨터 활용 능력 예측)

  • Oh, Ji-Young;Lee, Soo-Jung
    • Journal of The Korean Association of Information Education
    • /
    • v.12 no.3
    • /
    • pp.267-274
    • /
    • 2008
  • A neural network is a modeling technique useful for finding hidden patterns in data through a repetitive learning process and for predicting target values for new data. In this study, we built multilayer perceptron neural networks to predict students' computer literacy based on their personal characteristics, home and social environment, and academic record in other subjects. The prediction performance of the network was compared with that of a widely used prediction method, the regression model. From our experiments, we found that personal characteristic features best explained a student's computer proficiency level, whereas the features of home and social environment yielded the worst prediction accuracy of all. Moreover, the developed neural network model produced far more accurate predictions than the regression model.


Underachievers Realm Decision Support System using Computational Intelligence (연산지능을 이용한 부진아 영역진단 지원 시스템)

  • Lim, Chang-Gyoon;Kim, Kang-Chul;Yoo, Jae-Hung;Jhung, Jung-Ha
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.16 no.1
    • /
    • pp.30-36
    • /
    • 2006
  • In this paper, we propose a system that supports realm diagnosis for underachievers in the middle school Korean language curriculum. Learning disability and stagnation should be minimized by applying the proposed system. The input layer of the system contains 36 variables, corresponding to specific items in the Korean language curriculum; the variables are encoded with specific coding schemes. The number of nodes in the hidden layer was determined through a series of learning experiments, selecting the configuration with the best result. Four neurons were assigned to the output layer, each corresponding to one realm of the curriculum. We used the multilayer perceptron and the error backpropagation algorithm to develop the system. A total of 2,008 samples were used for training and 380 for testing to evaluate the performance.
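The architecture described above (36 encoded input variables, an experimentally sized hidden layer, and 4 output neurons, one per curriculum realm) can be sketched as a forward pass. The hidden-layer size and the softmax output are assumptions for illustration, not the paper's tuned configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Dimensions from the abstract: 36 encoded curriculum items in, 4 realm
# neurons out. The hidden size (20) is an assumption for illustration.
n_in, n_hidden, n_out = 36, 20, 4

W1 = rng.normal(0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_out)); b2 = np.zeros(n_out)

def predict_realm(x):
    """Forward pass of the 36-20-4 multilayer perceptron."""
    h = np.tanh(x @ W1 + b1)
    logits = h @ W2 + b2
    # Softmax turns the 4 output neurons into realm probabilities.
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return p, int(np.argmax(p))

x = rng.integers(0, 2, n_in).astype(float)   # one encoded answer sheet
probs, realm = predict_realm(x)
print(realm, probs.round(3))
```

In the paper these weights would be fitted with error backpropagation on the 2,008 training samples; here they are random, so only the shape of the decision is shown.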

A Design and Implementation of Digital Vessel Context Diagnosis System Based on Context Aware (상황 인식 기반 해양 디지털 선박 상황 진단 시스템 구현 및 설계)

  • Song, Byoung-Ho;Choi, Myeong-Soo;Kwon, Jang-Woo;Lee, Sung-Ro
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.6B
    • /
    • pp.859-866
    • /
    • 2010
  • Fires and collisions under unforeseen circumstances can cause large disasters at sea for digital vessels. In this paper, we propose a context monitoring system for marine digital vessels based on risk analysis, together with an environment information analysis system that uses wireless sensors to acquire the marine environment and the context of the vessel. For the simulation, we chose 300 data sets to train the neural network. As a result, we obtained about 96% accuracy for the fire risk context and 88.7% accuracy for the hull risk context. To improve the accuracy of the system, we implemented an FEC (Forward Error Correction) block. The implemented digital vessel context monitoring system transmits the diagnosis results over CDMA.
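The FEC block mentioned in the abstract can be illustrated with the simplest forward error correction scheme, a repetition code with majority-vote decoding. The paper does not specify which FEC code it uses, so this is only a generic sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

def fec_encode(bits, r=3):
    """Repetition-code FEC: transmit each bit r times."""
    return np.repeat(bits, r)

def fec_decode(coded, r=3):
    """Majority vote over each group of r received bits."""
    groups = coded.reshape(-1, r)
    return (groups.sum(axis=1) > r // 2).astype(int)

msg = rng.integers(0, 2, 64)            # e.g. an encoded diagnosis result
tx = fec_encode(msg)
noise = rng.random(tx.size) < 0.05      # 5% channel bit-flip probability
rx = tx ^ noise.astype(int)
decoded = fec_decode(rx)
raw_errors = int(noise.sum())           # bit flips on the channel
residual = int((decoded != msg).sum())  # errors surviving FEC
print(raw_errors, residual)
```

A decoded bit is wrong only when at least two of its three copies flip, so the residual error count is always at most half the raw channel errors; this is the sense in which the FEC block improves the accuracy of the transmitted diagnosis.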

Optimal Parameter Extraction based on Deep Learning for Premature Ventricular Contraction Detection (심실 조기 수축 비트 검출을 위한 딥러닝 기반의 최적 파라미터 검출)

  • Cho, Ik-sung;Kwon, Hyeog-soong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.23 no.12
    • /
    • pp.1542-1550
    • /
    • 2019
  • Previous studies on arrhythmia classification have applied neural networks, fuzzy methods, and other techniques to improve classification accuracy. Deep learning, which overcomes the limit on the number of hidden layers that constrained earlier neural networks, is now most frequently used for arrhythmia classification with the error backpropagation algorithm. In order to apply a deep learning model to an ECG signal, it is necessary to select an optimal model and parameters. In this paper, we propose an optimal parameter extraction method based on deep learning. For this purpose, the R-wave is detected in the ECG signal from which noise has been removed, and the QRS and RR interval segments are modeled. The weights were then learned by a supervised learning method through deep learning, and the model was evaluated on verification data. The detection and classification rates of R waves and PVCs were evaluated on the MIT-BIH arrhythmia database. The performance results indicate an average of 99.77% in R-wave detection and 97.84% in PVC classification.
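The first processing step the abstract describes, detecting R-waves in a denoised ECG and deriving RR intervals, can be sketched with a toy threshold detector on a synthetic signal. The signal, threshold, and refractory period below are illustrative, not the paper's actual method:

```python
import numpy as np

fs = 360                      # MIT-BIH sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
true_peaks = np.arange(int(0.5 * fs), len(t), int(0.8 * fs))  # 75 bpm beats

# Synthetic "denoised" ECG: small baseline noise plus sharp R waves
# (a toy stand-in for a real MIT-BIH record).
rng = np.random.default_rng(3)
ecg = 0.05 * rng.standard_normal(len(t))
ecg[true_peaks] += 1.0

def detect_r_peaks(sig, fs, thresh=0.5, refractory=0.2):
    """Threshold detector with a refractory period (a much-simplified
    version of the Pan-Tompkins idea)."""
    peaks, last = [], -np.inf
    for i, v in enumerate(sig):
        if v > thresh and i - last > refractory * fs:
            peaks.append(i)
            last = i
    return np.array(peaks)

peaks = detect_r_peaks(ecg, fs)
rr = np.diff(peaks) / fs      # RR intervals in seconds -> model features
print(len(peaks), rr.round(3))
```

The detected peak indices and RR intervals are exactly the kind of QRS/RR segment features the abstract feeds into the deep learning model.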

Parameter Extraction Based on AR and Arrhythmia Classification through Deep Learning (AR 기반의 특징점 추출과 딥러닝을 통한 부정맥 분류)

  • Cho, Ik-sung;Kwon, Hyeog-soong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.10
    • /
    • pp.1341-1347
    • /
    • 2020
  • Previous studies on arrhythmia classification have applied neural networks, fuzzy methods, machine learning, and other techniques to improve classification accuracy. In particular, deep learning, which overcomes the limit on the number of hidden layers that constrained earlier neural networks, is most frequently used for arrhythmia classification with the error backpropagation algorithm. In order to apply a deep learning model to an ECG signal, it is necessary to select an optimal model and parameters. In this paper, we propose parameter extraction based on AR and arrhythmia classification through deep learning. For this purpose, the R-wave is detected in the ECG signal from which noise has been removed, and the QRS and RR intervals are modeled. The weights were then learned by a supervised learning method through deep learning, and the model was evaluated on verification data. The PVC classification rate was evaluated on the MIT-BIH arrhythmia database. The achieved scores indicate an arrhythmia classification rate of over 97%.
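The AR-based feature extraction named in the title can be sketched with a Yule-Walker estimate of autoregressive coefficients, which would serve as a compact per-beat feature vector. The model order and the synthetic signal below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(4)

def ar_features(x, order=4):
    """Estimate AR(p) coefficients with the Yule-Walker equations.
    The coefficients act as a compact feature vector for a segment."""
    x = x - x.mean()
    # Biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x)
                  for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)]
                  for i in range(order)])
    return np.linalg.solve(R, r[1:])

# Synthesise an AR(2) process with known coefficients as a toy stand-in
# for a QRS segment: x[n] = 0.75*x[n-1] - 0.5*x[n-2] + e[n]
a_true = np.array([0.75, -0.5])
x = np.zeros(5000)
e = rng.standard_normal(5000)
for n in range(2, 5000):
    x[n] = a_true[0] * x[n - 1] + a_true[1] * x[n - 2] + e[n]

a_hat = ar_features(x, order=2)
print(a_hat.round(3))
```

On this synthetic signal the estimated coefficients recover the true ones closely; in the paper such coefficients, computed per QRS/RR segment, would be the inputs to the deep learning classifier.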

An Intelligent Decision Support System for Selecting Promising Technologies for R&D based on Time-series Patent Analysis (R&D 기술 선정을 위한 시계열 특허 분석 기반 지능형 의사결정지원시스템)

  • Lee, Choongseok;Lee, Suk Joo;Choi, Byounggu
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.79-96
    • /
    • 2012
  • As the pace of competition dramatically accelerates and the complexity of change grows, a variety of research has been conducted to improve firms' short-term performance and to enhance their long-term survival. In particular, researchers and practitioners have paid attention to identifying promising technologies that give a firm a competitive advantage. Discovery of promising technology depends on how a firm evaluates the value of technologies, so many evaluation methods have been proposed. Approaches based on experts' opinions have been widely accepted for predicting the value of technologies. Whereas this approach provides in-depth analysis and ensures the validity of the results, it is usually cost- and time-ineffective and is limited to qualitative evaluation. Considerable studies have attempted to forecast the value of technology by using patent information to overcome the limitations of the expert-opinion-based approach. Patent-based technology evaluation has served as a valuable assessment approach for technological forecasting because a patent contains a full and practical description of a technology with a uniform structure. Furthermore, it provides information that is not divulged in any other source. Although the patent-information-based approach has contributed to our understanding of the prediction of promising technologies, it has some limitations because predictions are made from past patent information, and the interpretations of patent analyses are not consistent. In order to fill this gap, this study proposes a technology forecasting methodology that integrates the patent information approach and an artificial intelligence method. The methodology consists of three modules: evaluation of how promising technologies are, implementation of a technology value prediction model, and recommendation of promising technologies. In the first module, the promise of a technology is evaluated from three different and complementary dimensions: impact, fusion, and diffusion.
The impact of a technology refers to its influence on future technology development and improvement, and is also clearly associated with its monetary value. The fusion of a technology denotes the extent to which it fuses different technologies, and represents the breadth of search underlying the technology. Fusion can be calculated per technology or per patent, so this study measures two types of fusion index: a fusion index per technology and a fusion index per patent. Finally, the diffusion of a technology denotes its degree of applicability across scientific and technological fields. In the same vein, a diffusion index per technology and a diffusion index per patent are considered. In the second module, the technology value prediction model is implemented using an artificial intelligence method. This study uses the values of the five indexes (i.e., impact index, fusion index per technology, fusion index per patent, diffusion index per technology, and diffusion index per patent) at different times (e.g., t-n, t-n-1, t-n-2, …) as input variables. The output variables are the values of the five indexes at time t, which are used for learning. The learning method adopted in this study is the backpropagation algorithm. In the third module, this study recommends the final promising technologies based on the analytic hierarchy process (AHP). AHP provides the relative importance of each index, leading to a final promising index per technology. The applicability of the proposed methodology is tested using U.S. patents in international patent class G06F (i.e., electronic digital data processing) from 2000 to 2008. The results show that the mean absolute error for predictions produced by the proposed methodology is lower than that produced by multiple regression analysis in the cases of the fusion indexes. However, for the other indexes, the mean absolute error of the proposed methodology is slightly higher than that of multiple regression analysis.
These unexpected results may be explained, in part, by the small number of patents. Since this study only uses patent data in class G06F, the number of sample patents is relatively small, leading to learning that is too incomplete to satisfy a complex artificial intelligence structure. In addition, the fusion index per technology and the impact index are found to be important criteria for predicting promising technology. This study attempts to extend existing knowledge by proposing a new methodology for predicting technology value that integrates patent information analysis and an artificial neural network. It helps managers engaged in technology development planning, and policy makers who want to implement technology policy, by providing a quantitative prediction methodology. In addition, this study could help other researchers by providing a deeper understanding of the complex technological forecasting field.
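The AHP step in the third module (deriving relative index weights and combining them into a final promising score) can be sketched as the classic principal-eigenvector computation. The pairwise judgments and predicted index values below are invented for illustration, not the paper's data:

```python
import numpy as np

# Hypothetical AHP pairwise-comparison matrix for the five indexes
# (impact, fusion/technology, fusion/patent, diffusion/technology,
# diffusion/patent). The 1-9 scale judgments are illustrative only.
A = np.array([
    [1,     3,     3,     5,   5],
    [1/3,   1,     1,     3,   3],
    [1/3,   1,     1,     3,   3],
    [1/5,   1/3,   1/3,   1,   1],
    [1/5,   1/3,   1/3,   1,   1],
], dtype=float)

# AHP weights: principal eigenvector of A, normalised to sum to 1.
vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = np.abs(w) / np.abs(w).sum()

# Final promising score: weighted sum of the model's predicted indexes
# at time t (values here are invented for illustration).
predicted_indexes = np.array([0.8, 0.6, 0.7, 0.4, 0.5])
score = float(w @ predicted_indexes)
print(w.round(3), round(score, 3))
```

Ranking the candidate technologies by this weighted score is what produces the final recommendation list; the paper's finding that the impact index and the fusion index per technology matter most would correspond to those indexes receiving the largest AHP weights.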

The Pattern Analysis of Financial Distress for Non-audited Firms using Data Mining (데이터마이닝 기법을 활용한 비외감기업의 부실화 유형 분석)

  • Lee, Su Hyun;Park, Jung Min;Lee, Hyoung Yong
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.4
    • /
    • pp.111-131
    • /
    • 2015
  • Only a handful of studies have been conducted on pattern analysis of corporate distress, compared with research on bankruptcy prediction. The few that exist mainly focus on audited firms because financial data collection is easier for them. But in reality, corporate financial distress is a far more common and critical phenomenon for non-audited firms, which mainly comprise small and medium sized firms. The purpose of this paper is to classify non-audited firms under distress according to their financial ratios using a data mining technique, the Self-Organizing Map (SOM). A SOM is a type of artificial neural network that is trained using unsupervised learning to produce a lower-dimensional discretized representation of the input space of the training samples, called a map. A SOM differs from other artificial neural networks in that it applies competitive learning, as opposed to error-correction learning such as backpropagation with gradient descent, and in that it uses a neighborhood function to preserve the topological properties of the input space. It is one of the popular and successful clustering algorithms. In this study, we classify types of financially distressed firms, specifically non-audited firms. In the empirical test, we collected 10 financial ratios of 100 non-audited firms under distress in 2004 for the previous two years (2002 and 2003). Using these financial ratios and the SOM algorithm, five distinct patterns were distinguished. In pattern 1, financial distress was very serious in almost all financial ratios; 12% of the firms fell into this pattern. In pattern 2, financial distress was weak in almost all financial ratios; 14% of the firms fell into this pattern. In pattern 3, the growth ratio was the worst among all patterns. It is speculated that the firms of this pattern may be under distress due to severe competition in their industries. Approximately 30% of the firms fell into this group.
In pattern 4, the growth ratio was higher than in any other pattern, but the cash ratio and profitability ratio were not at the level of the growth ratio. It is concluded that the firms of this pattern were under distress in pursuit of expanding their business. About 25% of the firms were in this pattern. Last, pattern 5 encompassed very solvent firms. Perhaps the firms of this pattern were distressed due to a bad short-term strategic decision or due to problems with the firms' entrepreneurs. Approximately 18% of the firms were under this pattern. This study makes both an academic and an empirical contribution. From the academic perspective, non-audited companies, which tend to go bankrupt easily and have unstructured or easily manipulated financial data, are classified by a data mining technique (the Self-Organizing Map) rather than the large audited firms that have well-prepared and reliable financial data. From the empirical perspective, even though only the financial data of non-audited firms are analyzed, the analysis is useful for finding first-order symptoms of financial distress, which allows us to forecast the bankruptcy of such firms and to manage early warning and alert signals. A limitation of this research is that only 100 corporations were analyzed, owing to the difficulty of collecting financial data for non-audited firms; this made it hard to proceed to an analysis by category or size difference. Also, non-financial qualitative data are crucial for the analysis of bankruptcy, so non-financial qualitative factors should be taken into account in a future study. This study sheds some light on the future distress prediction of non-audited small and medium sized firms.
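The SOM described in this abstract, competitive learning with a neighborhood function rather than error backpropagation, can be sketched in pure numpy. The toy two-dimensional data and the 1x5 map below stand in for the paper's 10 financial ratios and its actual map size:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy stand-in for the firms' financial ratios: three blobs in 2-D.
centers = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 4.0]])
data = np.vstack([c + 0.3 * rng.standard_normal((40, 2)) for c in centers])
rng.shuffle(data)

# 1x5 SOM: competitive learning with a shrinking Gaussian neighbourhood
# (no error backpropagation involved, as the abstract notes).
n_nodes = 5
W = rng.uniform(data.min(), data.max(), (n_nodes, 2))
grid = np.arange(n_nodes)

for epoch in range(50):
    lr = 0.5 * (1 - epoch / 50)                    # decaying learning rate
    sigma = max(0.5, 2.0 * (1 - epoch / 50))       # shrinking neighbourhood
    for x in data:
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))        # best matching unit
        h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))  # neighbourhood fn
        W += lr * h[:, None] * (x - W)

# Average quantisation error: distance from each sample to its nearest node.
qerr = float(np.mean([np.min(((W - x) ** 2).sum(axis=1)) ** 0.5 for x in data]))
print(round(qerr, 3))
```

After training, each firm maps to its best matching unit, and grouping firms by unit yields clusters analogous to the five distress patterns the study reports.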