• Title/Summary/Keyword: Hybrid learning


Developing an Ensemble Classifier for Bankruptcy Prediction (부도 예측을 위한 앙상블 분류기 개발)

  • Min, Sung-Hwan
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.17 no.7
    • /
    • pp.139-148
    • /
    • 2012
  • An ensemble of classifiers employs a set of individually trained classifiers and combines their predictions. In most cases, the ensemble produces more accurate predictions than its base classifiers. Combining outputs from multiple classifiers, known as ensemble learning, is one of the standard and most important techniques for improving classification accuracy in machine learning. An ensemble of classifiers is effective only if the individual classifiers make decisions that are as diverse as possible. Bagging is the most popular ensemble learning method for generating a diverse set of classifiers: diversity is obtained by using different training sets, with each training subset drawn randomly with replacement from the entire training dataset. The random subspace method is an ensemble construction technique that uses different attribute subsets; as in bagging, the training dataset is modified, but the modification is performed in the feature space. Bagging and random subspace are well-known and popular ensemble algorithms, yet few studies have dealt with integrating them using SVM classifiers, even though this area holds great potential for useful applications. The focus of this paper is to propose methods for improving SVM performance using a hybrid ensemble strategy for bankruptcy prediction. The proposed ensemble model is applied to the bankruptcy prediction problem using a real dataset of Korean companies.
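
As a quick illustration of the bagging-plus-random-subspace idea with SVM base learners, the sketch below uses scikit-learn's BaggingClassifier, which supports both instance resampling and feature-subset sampling. The paper's actual configuration is not given in the abstract, so the estimator count, subset fractions, kernel, and synthetic data are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the Korean bankruptcy dataset used in the paper.
X, y = make_classification(n_samples=500, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# bootstrap=True resamples instances (bagging); max_features < 1.0 draws a
# random attribute subset per base learner (random subspace).
ensemble = BaggingClassifier(
    estimator=SVC(kernel="rbf", C=1.0),   # scikit-learn >= 1.2 parameter name
    n_estimators=30,
    bootstrap=True,
    max_samples=0.8,
    max_features=0.5,
    random_state=0,
).fit(X_tr, y_tr)

print("hold-out accuracy:", ensemble.score(X_te, y_te))
```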

A Student Modeling Technique for Developing a Student's Level-Oriented Dynamic Tutoring System for Science Class (수준별 동적 교수.학습 시스템 개발을 위한 학습자 모델링 기법)

  • 김성희;김수형
    • Journal of the Korea Society of Computer and Information
    • /
    • v.7 no.2
    • /
    • pp.59-67
    • /
    • 2002
  • A major characteristic of the 7th National Curriculum in science is the provision of deepened and supplementary learning according to the level of each learner. In this level-oriented curriculum, coursewares are used to present teaching materials at various levels. Most coursewares, however, present their contents at a single uniform level, so genuine level-oriented learning can hardly be expected from them. This paper presents a learner-modeling technique for developing a student's level-oriented dynamic tutoring system for science class. The instructional module of this system is built from component units and can be reconstructed dynamically. The learner module is constructed using a hybrid model that combines the overlay model and the bug model. The testing module interprets diagnostic errors using differentiated weights assigned according to each item's difficulty and discrimination. Through this ITS student modeling, the system presents various problem-solving materials reconstructed according to each learner's level.
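
A toy sketch of the testing module's weighting idea follows: diagnostic responses are weighted by each item's difficulty and discrimination, and the weighted score places the learner on a level. The abstract gives no concrete formula, so the field names, weighting rule, and thresholds below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Response:
    correct: bool
    difficulty: float       # 0 (easy) .. 1 (hard), hypothetical scale
    discrimination: float   # 0 (weak) .. 1 (strong), hypothetical scale

def assign_level(responses):
    # Harder, more discriminating items contribute more to the estimate.
    weights = [r.difficulty * r.discrimination for r in responses]
    earned = sum(w for r, w in zip(responses, weights) if r.correct)
    ratio = earned / sum(weights)
    if ratio >= 0.7:
        return "deepened (enriched) track"
    if ratio >= 0.4:
        return "regular track"
    return "supplementary track"

print(assign_level([Response(True, 0.9, 0.8),
                    Response(False, 0.3, 0.5),
                    Response(True, 0.6, 0.7)]))
```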


Resource Allocation for Heterogeneous Service in Green Mobile Edge Networks Using Deep Reinforcement Learning

  • Sun, Si-yuan;Zheng, Ying;Zhou, Jun-hua;Weng, Jiu-xing;Wei, Yi-fei;Wang, Xiao-jun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.7
    • /
    • pp.2496-2512
    • /
    • 2021
  • The requirements of emerging services for powerful computing capability, high capacity, low latency, and low energy consumption pose severe challenges to the fifth-generation (5G) network. As a promising paradigm, mobile edge networks can provide services in proximity to users by deploying computing components and caches at the edge, which effectively decreases service delay. However, the coexistence of heterogeneous services and the sharing of limited resources lead to competition among services for multiple resources. This paper considers two typical heterogeneous services, computing services and content delivery services; to configure resources properly, it is crucial to develop effective offloading and caching strategies. Given the high energy consumption of 5G base stations, this paper adopts a hybrid energy supply model combining the traditional power grid with green energy. It is therefore necessary to design a reasonable association mechanism that allocates more service load to base stations rich in green energy, improving green energy utilization. The paper formulates a joint optimization problem of computation offloading, caching, and resource allocation for heterogeneous services, with the objective of minimizing on-grid power consumption under limited-resource and QoS constraints. Since this joint problem is a mixed-integer nonlinear program that is intractable to solve directly, a deep reinforcement learning method is used to learn a near-optimal strategy through extensive training. Extensive simulation experiments show that, compared with other schemes, the proposed scheme allocates resources to heterogeneous services according to the green energy distribution, effectively reducing on-grid energy consumption.
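
The sketch below illustrates the deep reinforcement learning machinery in its simplest deep-Q-learning form: a Q-network scores joint discrete offloading/caching actions, and a one-step temporal-difference update pushes Q-values toward the observed reward (here, negative on-grid power). The state and action encodings and all sizes are placeholders, since the paper's MDP design is not detailed in the abstract:

```python
import torch
import torch.nn as nn

STATE_DIM = 16   # e.g. task queue, cache state, green-energy level (assumed)
N_ACTIONS = 8    # joint discrete offloading-target / cache-update choices

q_net = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, N_ACTIONS),
)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def act(state, eps=0.1):
    """Epsilon-greedy action over the Q-values."""
    if torch.rand(1).item() < eps:
        return torch.randint(N_ACTIONS, (1,)).item()
    with torch.no_grad():
        return int(q_net(state).argmax())

def td_update(s, a, r, s_next, gamma=0.99):
    """One-step TD update toward r + gamma * max_a' Q(s', a')."""
    with torch.no_grad():
        target = r + gamma * q_net(s_next).max()
    loss = (q_net(s)[a] - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

s = torch.randn(STATE_DIM)
a = act(s)
td_update(s, a, r=-1.0, s_next=torch.randn(STATE_DIM))  # reward = -on-grid power
```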

A System for Determining the Growth Stage of Fruit Tree Using a Deep Learning-Based Object Detection Model (딥러닝 기반의 객체 탐지 모델을 활용한 과수 생육 단계 판별 시스템)

  • Bang, Ji-Hyeon;Park, Jun;Park, Sung-Wook;Kim, Jun-Yung;Jung, Se-Hoon;Sim, Chun-Bo
    • Smart Media Journal
    • /
    • v.11 no.4
    • /
    • pp.9-18
    • /
    • 2022
  • Recently, research on and systems using AI have been increasing rapidly in many fields, and smart farms combining artificial intelligence with information and communication technology are being studied in agriculture. Data-based precision agriculture is also being commercialized by converging advanced technologies such as autonomous driving, satellites, and big data. In Korea, commercialization of facility agriculture within smart agriculture is increasing, but research and investment remain biased toward facility agriculture, and the gap between facility agriculture and open-field agriculture continues to widen. Fruit trees and plant factories receive comparatively little research and investment, and the systems for collecting and utilizing big data in these fields remain insufficient. In this paper, we propose a system for determining the growth stage of fruit trees using a deep learning-based object detection model. The system is implemented as a hybrid app for use at agricultural sites and includes an object detection function for determining the fruit tree growth stage.
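
As a minimal illustration of the growth-stage pipeline, the sketch below runs an object detection model on an image and maps the highest-confidence detection to a growth stage. A torchvision Faster R-CNN with random weights stands in for the paper's model, and the class-to-stage mapping is entirely hypothetical:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

STAGE_NAMES = {1: "flowering", 2: "fruit set", 3: "ripening"}   # hypothetical

# Random-weight model: 3 growth-stage classes + background (nothing downloaded).
model = fasterrcnn_resnet50_fpn(weights=None, weights_backbone=None, num_classes=4)
model.eval()

image = torch.rand(3, 480, 640)            # placeholder for a field photo
with torch.no_grad():
    out = model([image])[0]                # dict with boxes, labels, scores

if len(out["scores"]):
    best = out["scores"].argmax()
    stage = STAGE_NAMES.get(int(out["labels"][best]), "unknown")
    print(stage, float(out["scores"][best]))
else:
    print("no detection above the score threshold")
```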

Finite Element Analysis Study of CJS Composite Structural System with CFT Columns and Composite Beams (CFT기둥과 합성보로 구성된 CJS합성구조시스템의 유한요소해석 연구)

  • Moon, A Hae;Shin, Jiuk;Lim, Chang Gue;Lee, Kihak
    • Journal of the Earthquake Engineering Society of Korea
    • /
    • v.26 no.2
    • /
    • pp.71-82
    • /
    • 2022
  • This paper presents a numerical study of how the concrete and the filled steel tube affect the inelastic behavior and structural performance of the CJS composite structural system, enabling reliable assessment under various load conditions. Modeling parameter values suited to the CJS composite structural system and the effects of the variables used in the finite element analysis were compared and analyzed against experimental results. The Winfrith concrete model, which describes the confinement effect well, was used as the concrete material model, and the concrete structure was modeled with solid elements. Based on a geometric comparison of shell and solid elements, the rectangular steel tube columns and other steel members were modeled with shell elements. In addition, the slip behavior of the joint between the concrete column and the rectangular steel tube was described using the Surface-to-Surface contact function. After the finite element model was built, cyclic loading was simulated with the base of the foundation assumed to be pinned, as in the experiment. The analysis model was verified by comparing the computed results with the experimental results, focusing on initial stiffness, maximum strength, and energy dissipation capability.
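
One of the validation metrics above, energy dissipation capability, lends itself to a small numeric illustration: the energy dissipated per loading cycle is the enclosed area of the force-displacement hysteresis loop. The sketch below integrates a synthetic loop with the shoelace formula; the amplitudes and phase lag are invented, not taken from the paper:

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 200)
disp = 25.0 * np.cos(theta)            # mm, assumed displacement amplitude
force = 180.0 * np.cos(theta - 0.6)    # kN, phase lag gives the loop its area

# Shoelace formula for the enclosed loop area; 1 kN*mm = 1 J.
energy = 0.5 * abs(np.dot(disp, np.roll(force, -1))
                   - np.dot(force, np.roll(disp, -1)))
print(f"dissipated energy per cycle: {energy:.1f} J")
```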

Cyber Threat Intelligence Traffic Through Black Widow Optimisation by Applying RNN-BiLSTM Recognition Model

  • Kanti Singh Sangher;Archana Singh;Hari Mohan Pandey
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.11
    • /
    • pp.99-109
    • /
    • 2023
  • The darknet is frequently referred to as the hub of illicit online activity. To keep track of real-time applications and activities taking place on the darknet, traffic on that network must be analysed. Recognising network traffic tied to unused Internet addresses is undoubtedly important for spotting and investigating malicious online activity: because there are no genuine devices or hosts in an unused address block, any traffic observed there results from misconfiguration, spoofed source addresses, or other methods probing the unused address space. Thanks to recent advances in artificial intelligence, digital systems can now detect and identify darknet activity on their own. In this paper, we offer a generalised method for deep learning-based detection and classification of darknet traffic. We further analyse a state-of-the-art, complex dataset containing extensive information about darknet traffic, and we examine various feature selection strategies to choose the best attributes for detecting and classifying it. To identify threats using network properties acquired from darknet traffic, we devise a hybrid deep learning (DL) approach that combines a Recurrent Neural Network (RNN) and a Bidirectional LSTM (BiLSTM). This probing technique can tell malicious traffic from legitimate traffic. The results show that the suggested strategy outperforms existing approaches, producing the highest accuracy for categorising darknet traffic when the Black Widow optimization algorithm is used for feature selection and RNN-BiLSTM as the recognition model.
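
A minimal sketch of an RNN-BiLSTM recognition model of the kind named above follows: a recurrent layer feeds a bidirectional LSTM over per-flow feature sequences, ending in a traffic-class head. The layer widths, sequence length, and eight-class output are assumptions; the paper's exact architecture is not given in the abstract:

```python
import torch
import torch.nn as nn

class RnnBiLstm(nn.Module):
    def __init__(self, n_features=20, n_classes=8):
        super().__init__()
        self.rnn = nn.RNN(n_features, 64, batch_first=True)
        self.bilstm = nn.LSTM(64, 64, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 64, n_classes)   # 2x for both directions

    def forward(self, x):                  # x: (batch, time, flow features)
        h, _ = self.rnn(x)
        h, _ = self.bilstm(h)
        return self.head(h[:, -1])         # classify from the last time step

logits = RnnBiLstm()(torch.randn(4, 30, 20))
print(logits.shape)                        # torch.Size([4, 8])
```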

Image Quality and Lesion Detectability of Lower-Dose Abdominopelvic CT Obtained Using Deep Learning Image Reconstruction

  • June Park;Jaeseung Shin;In Kyung Min;Heejin Bae;Yeo-Eun Kim;Yong Eun Chung
    • Korean Journal of Radiology
    • /
    • v.23 no.4
    • /
    • pp.402-412
    • /
    • 2022
  • Objective: To evaluate the image quality and lesion detectability of lower-dose CT (LDCT) of the abdomen and pelvis obtained using a deep learning image reconstruction (DLIR) algorithm compared with those of standard-dose CT (SDCT) images. Materials and Methods: This retrospective study included 123 patients (mean age ± standard deviation, 63 ± 11 years; male:female, 70:53) who underwent contrast-enhanced abdominopelvic LDCT between May and August 2020 and had prior SDCT obtained using the same CT scanner within a year. LDCT images were reconstructed with hybrid iterative reconstruction (h-IR) and DLIR at medium and high strengths (DLIR-M and DLIR-H), while SDCT images were reconstructed with h-IR. For quantitative image quality analysis, image noise, signal-to-noise ratio, and contrast-to-noise ratio were measured in the liver, muscle, and aorta. Among the three LDCT reconstruction algorithms, the one showing the smallest difference in quantitative parameters from SDCT images was selected for qualitative image quality analysis and lesion detectability evaluation. For the qualitative analysis, overall image quality, image noise, image sharpness, image texture, and lesion conspicuity were graded on a 5-point scale by two radiologists. Observer performance in focal liver lesion detection was evaluated by comparing jackknife free-response receiver operating characteristic figures-of-merit (FOM). Results: LDCT (35.1% dose reduction compared with SDCT) images obtained using DLIR-M showed quantitative measures similar to those of SDCT with h-IR images. All qualitative parameters of LDCT with DLIR-M images except image texture were similar to or significantly better than those of SDCT with h-IR images. Lesion detectability on LDCT with DLIR-M images was not significantly different from that of SDCT with h-IR images (reader-averaged FOM, 0.887 vs. 0.874, respectively; p = 0.581). Conclusion: Overall image quality and detectability of focal liver lesions are preserved in contrast-enhanced abdominopelvic LDCT obtained with DLIR-M relative to SDCT with h-IR.
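
The quantitative measures used above are simple to state in code: noise is the standard deviation of attenuation in a region of interest (ROI), SNR is the ROI mean over its standard deviation, and CNR is the attenuation difference between two ROIs over the background noise. The sketch below uses synthetic ROIs as stand-ins for the paper's liver/muscle/aorta measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
liver_roi = rng.normal(60, 10, (50, 50))    # HU values, synthetic
muscle_roi = rng.normal(50, 10, (50, 50))   # HU values, synthetic

noise = liver_roi.std()                               # image noise (HU)
snr = liver_roi.mean() / liver_roi.std()              # signal-to-noise ratio
cnr = (liver_roi.mean() - muscle_roi.mean()) / muscle_roi.std()
print(f"noise={noise:.1f} HU, SNR={snr:.2f}, CNR={cnr:.2f}")
```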

Deep Learning-Based Reconstruction Algorithm With Lung Enhancement Filter for Chest CT: Effect on Image Quality and Ground Glass Nodule Sharpness

  • Min-Hee Hwang;Shinhyung Kang;Ji Won Lee;Geewon Lee
    • Korean Journal of Radiology
    • /
    • v.25 no.9
    • /
    • pp.833-842
    • /
    • 2024
  • Objective: To assess the effect of a new lung enhancement filter combined with deep learning image reconstruction (DLIR) algorithm on image quality and ground-glass nodule (GGN) sharpness compared to hybrid iterative reconstruction or DLIR alone. Materials and Methods: Five artificial spherical GGNs with various densities (-250, -350, -450, -550, and -630 Hounsfield units) and 10 mm in diameter were placed in a thorax anthropomorphic phantom. Four scans at four different radiation dose levels were performed using a 256-slice CT (Revolution Apex CT, GE Healthcare). Each scan was reconstructed using three different reconstruction algorithms: adaptive statistical iterative reconstruction-V at a level of 50% (AR50), Truefidelity (TF), which is a DLIR method, and TF with a lung enhancement filter (TF + Lu). Thus, 12 sets of reconstructed images were obtained and analyzed. Image noise, signal-to-noise ratio, and contrast-to-noise ratio were compared among the three reconstruction algorithms. Nodule sharpness was compared among the three reconstruction algorithms using the full-width at half-maximum value. Furthermore, subjective image quality analysis was performed. Results: AR50 demonstrated the highest level of noise, which was decreased by using TF + Lu and TF alone (P = 0.001). TF + Lu significantly improved nodule sharpness at all radiation doses compared to TF alone (P = 0.001). The nodule sharpness of TF + Lu was similar to that of AR50. Using TF alone resulted in the lowest nodule sharpness. Conclusion: Adding a lung enhancement filter to DLIR (TF + Lu) significantly improved the nodule sharpness compared to DLIR alone (TF). TF + Lu can be an effective reconstruction technique to enhance image quality and GGN evaluation in ultralow-dose chest CT scans.
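
Nodule sharpness above is quantified by the full-width at half-maximum (FWHM) of a profile drawn across the nodule. The sketch below builds a synthetic profile for a -630 HU GGN against an assumed lung background and measures the width where the profile crosses half of its peak contrast:

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 401)          # mm across the nodule centre
background, nodule = -850.0, -630.0        # HU; lung background is assumed
profile = background + (nodule - background) * np.exp(-((x / 4.0) ** 2))

half = background + (nodule - background) / 2.0   # half-maximum level
inside = x[profile >= half]                        # samples above half-max
print(f"FWHM = {inside.max() - inside.min():.2f} mm")
```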

Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.99-112
    • /
    • 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms. It finds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention in machine learning and artificial intelligence because of its remarkable performance improvement and its flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and SVM. In those studies, DT ensemble research has consistently demonstrated impressive improvements in the generalization behavior of DT, while NN and SVM ensembles have not shown performance as remarkable as DT ensembles. Recently, several works have reported that ensemble performance can be degraded when the multiple classifiers of an ensemble are highly correlated with one another, resulting in a multicollinearity problem that degrades ensemble performance; they have also proposed differentiated learning strategies to cope with this problem. Hansen and Salamon (1990) insisted that it is necessary and sufficient for the performance enhancement of an ensemble that the ensemble contain diverse classifiers. Breiman (1996) found that ensemble learning can increase the performance of unstable learning algorithms but does not show remarkable improvement on stable learning algorithms. Unstable learning algorithms such as decision tree learners are sensitive to changes in the training data, so small changes in the training data can yield large changes in the generated classifiers; ensembles of unstable learners can therefore guarantee some diversity among the classifiers. In contrast, stable learning algorithms such as NN and SVM generate similar classifiers despite small changes in the training data, so the correlation among the resulting classifiers is very high. This high correlation results in a multicollinearity problem, which leads to performance degradation of the ensemble. Kim's work (2009) compared performance in bankruptcy prediction on Korean firms using traditional prediction algorithms such as NN, DT, and SVM. It reports that the stable learning algorithms NN and SVM have higher predictability than the unstable DT while, with respect to ensemble learning, the DT ensemble shows more improved performance than the NN and SVM ensembles. Further analysis with the variance inflation factor (VIF) empirically proved that the ensemble's performance degradation is due to multicollinearity, and proposed that ensemble optimization is needed to cope with this problem. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) to improve NN ensemble performance. Coverage optimization is a technique of choosing a sub-ensemble from an original ensemble so as to guarantee the diversity of the selected classifiers. CO-NN uses a GA, which has been widely applied to various optimization problems, to deal with the coverage optimization problem. The GA chromosomes for coverage optimization are encoded as binary strings, each bit of which indicates an individual classifier. The fitness function is defined as maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the generally used measures of multicollinearity, is added to ensure classifier diversity by removing high correlation among the classifiers. We use Microsoft Excel and the GA software package Evolver. Experiments on company failure prediction show that CO-NN stably enhances the performance of NN ensembles by choosing classifiers in consideration of their correlations: classifiers with potential multicollinearity problems are removed by the coverage optimization process, and CO-NN thereby shows higher performance than a single NN classifier and the NN ensemble at the 1% significance level, and than the DT ensemble at the 5% significance level. Several research issues remain. First, a decision optimization process to find the optimal combination function should be considered in further research. Second, various learning strategies to deal with data noise should be introduced in future work.
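
A compact sketch of the CO-NN selection step follows: each GA chromosome is a binary string whose bits switch individual base classifiers in or out, fitness rewards sub-ensemble accuracy, and a VIF check penalises highly correlated member outputs. The simulated classifier outputs, penalty weight, and mutation-only GA loop (crossover omitted for brevity) are illustrative assumptions; the paper itself used the commercial Evolver package rather than code like this:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 500, 12                           # validation cases, base classifiers
y = rng.integers(0, 2, N)
# Simulated base-classifier probability outputs (stand-ins for trained NNs).
preds = np.clip(y[None, :] * 0.6 + rng.normal(0.2, 0.25, (M, N)), 0, 1)

def max_vif(subset):
    # VIF_j = 1 / (1 - R_j^2), read off the inverse correlation matrix.
    if subset.sum() < 2:
        return 1.0
    corr = np.corrcoef(preds[subset.astype(bool)])
    return float(np.diag(np.linalg.inv(corr)).max())

def fitness(subset):
    if subset.sum() == 0:
        return -np.inf
    vote = preds[subset.astype(bool)].mean(axis=0) > 0.5
    acc = (vote == y).mean()
    return acc - (0.1 if max_vif(subset) > 10 else 0.0)  # VIF constraint

pop = rng.integers(0, 2, (30, M))        # binary chromosomes, one bit per NN
for _ in range(50):                      # selection + mutation only
    scores = np.array([fitness(c) for c in pop])
    parents = pop[scores.argsort()[-15:]]
    children = parents[rng.integers(0, 15, 15)] ^ (rng.random((15, M)) < 0.05)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(c) for c in pop])]
print("selected classifiers:", np.flatnonzero(best), "fitness:", fitness(best))
```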

A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh;An Jun Won;Choe Jae Gwang;Kim Nam
    • Proceedings of the Optical Society of Korea Conference
    • /
    • 2003.07a
    • /
    • pp.60-61
    • /
    • 2003
  • A new approach to reducing the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms having a diffraction efficiency of 75.8% and a uniformity of 5.8% are proven in computer simulation and demonstrated experimentally. Recently, computer-generated holograms (CGHs) having high diffraction efficiency and design flexibility have been widely developed for many applications such as optical information processing, optical computing, and optical interconnection. Among the proposed optimization methods, the GA has become popular due to its capability of reaching a nearly global optimum. However, there exists a drawback to consider when using the genetic algorithm: the large amount of computation time needed to construct the desired holograms. One major reason the GA's operation can be time-intensive is the expense of computing the cost function, which must Fourier transform the parameters encoded on the hologram into the fitness value. In trying to remedy this drawback, the artificial neural network (ANN) has been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications requiring high precision. We therefore attempt a new approach that combines the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. The optimization of a CGH using the genetic algorithm is a process of iteration, including selection, crossover, and mutation operators [2]. It is worth noting that the evaluation of the cost function, with the aim of selecting better holograms, plays an important role in the implementation of the GA; however, this evaluation process wastes much time Fourier transforming the parameters encoded on the hologram into the value to be evaluated, and depending on the speed of the computer it can last up to ten minutes. It is more effective if, instead of merely generating random holograms in the initial step, a set of approximately desired holograms is employed: the initial population then contains fewer random trial holograms, which reduces the GA's computation time. Accordingly, a hybrid algorithm that uses a trained neural network to initiate the GA's procedure is proposed, so that the initial population contains fewer random holograms, compensated by approximately desired ones. Figure 1 is the flowchart of the hybrid algorithm in comparison with the classical GA. The procedure of synthesizing a hologram on a computer is divided into two steps. First, the simulation of holograms based on the ANN method [1] is carried out to acquire approximately desired holograms. With a teaching data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the trained artificial neural network enables us to attain approximately desired holograms in fairly good agreement with what the theory suggests. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the operation of the hybrid algorithm and the GA are the same except for the modified initial step; hence the parameter values verified in Ref. [2], such as the probabilities of crossover and mutation, the tournament size, and the crossover block size, remain unchanged, aside from the reduced population size. A reconstructed image of 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the iteration number is 2000, the probability of crossover is 0.75, and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is also evaluated, as shown in Fig. 2. With a 66.7% reduction in computation time and a 2% increase in diffraction efficiency compared to the GA method, the hybrid algorithm demonstrates its efficiency. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows pictures of diffracted patterns of the letter "0" from holograms generated using the hybrid algorithm; a diffraction efficiency of 75.8% and a uniformity of 5.8% are measured. The simulation and experimental results are in fairly good agreement with each other. In this paper, the genetic algorithm and the neural network have been successfully combined in designing CGHs. This method gives a significant reduction in computation time compared to the GA method while still achieving holograms of high diffraction efficiency and uniformity. This work was supported by grant No.mOl-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation.
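
The hybrid initialisation step can be sketched schematically: a trained network proposes approximate binary phase holograms that replace part of the GA's otherwise random initial population, so the search starts nearer to good solutions. The placeholder network, hologram size, and seeding ratio below are assumptions; the paper's actual ANN settings (3 layers, 100 hidden nodes, learning rate 0.3, momentum 0.5) are described above:

```python
import numpy as np

rng = np.random.default_rng(0)
H = 32 * 32                                  # hologram pixels (assumed size)

def ann_propose(target_image):
    # Placeholder for the trained ANN of Ref. [1]: maps a desired image to
    # an approximate binary phase hologram. Here it just returns random bits.
    return (rng.random(H) > 0.5).astype(np.uint8)

def init_population(target_image, pop_size=30, seeded_fraction=0.5):
    # Seed part of the population with ANN proposals, fill the rest randomly.
    n_seeded = int(pop_size * seeded_fraction)
    seeded = [ann_propose(target_image) for _ in range(n_seeded)]
    random_part = [rng.integers(0, 2, H, dtype=np.uint8)
                   for _ in range(pop_size - n_seeded)]
    return np.array(seeded + random_part)    # then run the usual GA loop

pop = init_population(target_image=None)
print(pop.shape)                             # (30, 1024)
```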
