• Title/Summary/Keyword: fast optimizer


FAST-ADAM in Semi-Supervised Generative Adversarial Networks

  • Kun, Li;Kang, Dae-Ki
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.11 no.4
    • /
    • pp.31-36
    • /
    • 2019
  • Unsupervised neural networks had not attracted much attention until the Generative Adversarial Network (GAN) was proposed. By using both generator and discriminator networks, a GAN can extract the main characteristics of the original dataset and produce new data with similar latent statistics. However, researchers well understand that training a GAN is not easy because of its unstable dynamics. The discriminator usually performs too well while helping the generator learn the statistics of the training dataset, so the generated data is not compelling. Various studies have focused on how to improve the stability and classification accuracy of GANs, but few delve into how to improve training efficiency and save training time. In this paper, we propose a novel optimizer, named FAST-ADAM, which integrates Lookahead into the ADAM optimizer to train the generator of a semi-supervised generative adversarial network (SSGAN). We assess the feasibility and performance of our optimizer on the Canadian Institute For Advanced Research-10 (CIFAR-10) benchmark dataset. The experimental results show that FAST-ADAM helps the generator reach convergence faster than the original ADAM while maintaining comparable training accuracy.
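
The core idea, Lookahead wrapped around Adam, can be sketched on a single scalar parameter. This is an illustrative reconstruction, not the authors' implementation; the step size, sync interval `k`, and interpolation factor `alpha` are assumed values.

```python
import math

def fast_adam_sketch(grad, x0, steps=200, k=5, alpha=0.5,
                     lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """Lookahead wrapped around Adam on one scalar parameter.

    Adam updates the 'fast' weights every step; every k steps the 'slow'
    weights are pulled toward the fast weights by a factor alpha, and the
    fast weights are reset to the slow weights.
    """
    fast = slow = x0
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(fast)
        m = b1 * m + (1 - b1) * g            # first-moment estimate
        v = b2 * v + (1 - b2) * g * g        # second-moment estimate
        m_hat = m / (1 - b1 ** t)            # bias correction
        v_hat = v / (1 - b2 ** t)
        fast -= lr * m_hat / (math.sqrt(v_hat) + eps)  # Adam step
        if t % k == 0:                       # Lookahead synchronization
            slow += alpha * (fast - slow)
            fast = slow
    return slow
```

On a toy quadratic (gradient `2x`), the slow weights converge toward the minimum while the periodic interpolation damps the oscillation of the fast Adam trajectory.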

Acoustic Full-waveform Inversion using Adam Optimizer (Adam Optimizer를 이용한 음향매질 탄성파 완전파형역산)

  • Kim, Sooyoon;Chung, Wookeen;Shin, Sungryul
    • Geophysics and Geophysical Exploration
    • /
    • v.22 no.4
    • /
    • pp.202-209
    • /
    • 2019
  • In this study, an acoustic full-waveform inversion using the Adam optimizer is proposed. The steepest descent method, which is commonly used for the optimization of seismic waveform inversion, is fast and easy to apply, but the inverse problem does not converge correctly. Various optimization methods suggested as alternatives are much more accurate than the steepest descent method but require long calculation times. The Adam optimizer is widely used in deep learning for the optimization of learning models and is considered one of the most effective optimization methods for diverse models. Thus, we propose a seismic full-waveform inversion algorithm using the Adam optimizer for fast and accurate convergence. To prove the performance of the suggested inversion algorithm, we compared the P-wave velocity model updated using the Adam optimizer with the inversion results from the steepest descent method. As a result, we confirmed that the proposed algorithm provides fast error convergence and precise inversion results.
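
As a hedged illustration of the inversion setting, the sketch below recovers a single velocity parameter from synthetic travel times by minimizing an L2 misfit with Adam updates. The toy forward model `t = x / c`, the offsets, and all hyperparameters are assumptions for illustration, not the paper's acoustic FWI.

```python
import math

def adam_invert_velocity(offsets, t_obs, c0=1.0, steps=1000,
                         lr=0.02, b1=0.9, b2=0.999, eps=1e-8):
    """Invert one velocity c from travel times t = x / c using Adam.

    Minimizes the L2 misfit 0.5 * sum((t_obs - x / c)**2); the gradient
    with respect to c is sum((t_obs - x / c) * x / c**2).
    """
    c, m, v = c0, 0.0, 0.0
    for t in range(1, steps + 1):
        residual = [ti - xi / c for xi, ti in zip(offsets, t_obs)]
        g = sum(ri * xi / c ** 2 for ri, xi in zip(residual, offsets))
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        m_hat = m / (1 - b1 ** t)            # bias-corrected moments
        v_hat = v / (1 - b2 ** t)
        c -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return c
```

With travel times generated from a true velocity of 2.0, the inverted `c` settles near 2.0; Adam's per-parameter step normalization is what makes the update robust to the scale of the misfit gradient.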

Illumination correction via improved grey wolf optimizer for regularized random vector functional link network

  • Xiaochun Zhang;Zhiyu Zhou
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.3
    • /
    • pp.816-839
    • /
    • 2023
  • In a random vector functional link (RVFL) network, optimizing only the weights and biases of the hidden neurons leads to shortcomings such as stagnation in local optima and degraded convergence, which reduce the accuracy of illumination correction. In this study, we proposed an improved regularized random vector functional link (RRVFL) network algorithm based on an optimized grey wolf optimizer (GWO). We first applied the moth-flame optimization (MFO) algorithm to provide a set of good initial populations and thereby improve the convergence rate of the GWO. The MFO-GWO algorithm then simultaneously optimized the input features, input weights, hidden nodes and biases of the RRVFL, avoiding stagnation in local optima. Finally, the MFO-GWO-RRVFL algorithm was applied to improve the illumination correction of various test images. The experimental results revealed that the MFO-GWO-RRVFL algorithm is stable and compatible and exhibits a fast convergence rate.
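
A minimal grey wolf optimizer, without the paper's MFO initialization or the RRVFL coupling, can be sketched as follows; the population size, iteration count, and search bounds are assumed values.

```python
import random

def grey_wolf_optimize(f, dim, n_wolves=20, iters=200,
                       lb=-5.0, ub=5.0, seed=0):
    """Minimal grey wolf optimizer for box-constrained minimization.

    Each wolf moves toward the average of positions suggested by the three
    best wolves (alpha, beta, delta); the coefficient a decays from 2 to 0,
    shifting the pack from exploration to exploitation.
    """
    rng = random.Random(seed)
    wolves = [[rng.uniform(lb, ub) for _ in range(dim)]
              for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=f)
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2.0 * (1 - t / iters)
        for i in range(n_wolves):
            new = []
            for d in range(dim):
                pulls = []
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(), rng.random()
                    A = 2 * a * r1 - a       # encircling coefficient
                    C = 2 * r2
                    D = abs(C * leader[d] - wolves[i][d])
                    pulls.append(leader[d] - A * D)
                new.append(min(ub, max(lb, sum(pulls) / 3.0)))
            wolves[i] = new
    return min(wolves, key=f)
```

MFO initialization, as described in the abstract, would replace the uniform-random starting population with one pre-optimized by moth-flame search.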

Metaheuristic-hybridized multilayer perceptron in slope stability analysis

  • Ye, Xinyu;Moayedi, Hossein;Khari, Mahdy;Foong, Loke Kok
    • Smart Structures and Systems
    • /
    • v.26 no.3
    • /
    • pp.263-275
    • /
    • 2020
  • This research is dedicated to slope stability analysis using novel intelligent models. By coupling a neural network with the spotted hyena optimizer (SHO), salp swarm algorithm (SSA), shuffled frog leaping algorithm (SFLA), and league champion optimization algorithm (LCA) metaheuristics, four predictive ensembles are built for predicting the factor of safety (FOS) of a single-layer cohesive soil slope. The data used to develop the ensembles come from an extensive finite element analysis. After creating the proposed models, it was observed that the best population sizes for the SHO, SSA, SFLA, and LCA are 300, 400, 400, and 200, respectively. Evaluation of the results showed that the combination of metaheuristic and neural approaches offers capable tools for estimating the FOS. The SSA (error = 0.3532 and correlation = 0.9937) emerged as the most reliable optimizer, followed by the LCA (error = 0.5430 and correlation = 0.9843), SFLA (error = 0.8176 and correlation = 0.9645), and SHO (error = 2.0887 and correlation = 0.8614). Owing to the high accuracy of the SSA in adjusting the computational parameters of the neural network, the corresponding FOS predictive formula is presented as a fast yet accurate substitute for traditional methods.
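
The metaheuristic-plus-network pattern can be illustrated with a much simpler stand-in: a (1+1) evolution strategy (not SHO, SSA, SFLA, or LCA) tuning the weights of a one-input MLP. The architecture, mutation scale, and target function are all assumptions for illustration.

```python
import math
import random

def metaheuristic_fit_mlp(samples, hidden=4, iters=3000, seed=1):
    """Tune the weights of a tiny 1-input tanh MLP with a (1+1) evolution
    strategy, a minimal stand-in for the metaheuristic-hybridized training
    described above. samples is a list of (x, y) pairs.
    """
    rng = random.Random(seed)
    n_w = 3 * hidden + 1  # input weights, hidden biases, output weights, bias

    def predict(w, x):
        out = w[-1]
        for h in range(hidden):
            out += w[2 * hidden + h] * math.tanh(w[h] * x + w[hidden + h])
        return out

    def mse(w):
        return sum((predict(w, x) - y) ** 2 for x, y in samples) / len(samples)

    best = [rng.uniform(-1, 1) for _ in range(n_w)]
    best_err = mse(best)
    for _ in range(iters):
        cand = [wi + rng.gauss(0, 0.1) for wi in best]  # mutate every weight
        err = mse(cand)
        if err < best_err:                              # keep only improvements
            best, best_err = cand, err
    return best, best_err
```

Population-based metaheuristics such as the SSA maintain many candidate weight vectors instead of one, but the selection principle, keep whatever lowers the network's error, is the same.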

Improved Deep Learning Algorithm

  • Kim, Byung Joo
    • Journal of Advanced Information Technology and Convergence
    • /
    • v.8 no.2
    • /
    • pp.119-127
    • /
    • 2018
  • Training a very large deep neural network can be painfully slow and prone to overfitting. Much research has been done to overcome these problems. In this paper, a deep neural network combining early stopping with ADAM is presented. This form of deep network is useful for handling big data because it automatically stops training before overfitting occurs. Its generalization ability is also better than that of a pure deep neural network model.
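
The early-stopping rule can be sketched generically; the patience threshold is an assumed parameter, and `train_step`/`val_loss` stand in for one epoch of (e.g. ADAM) updates and a validation pass.

```python
def train_with_early_stopping(train_step, val_loss, max_epochs=100, patience=5):
    """Generic early-stopping loop: stop when the validation loss has not
    improved for `patience` consecutive epochs, and return the best state.
    """
    best, best_epoch, best_state = float("inf"), 0, None
    for epoch in range(max_epochs):
        state = train_step(epoch)          # one epoch of optimizer updates
        loss = val_loss(state)
        if loss < best:                    # new best: remember it
            best, best_epoch, best_state = loss, epoch, state
        elif epoch - best_epoch >= patience:
            break                          # overfitting onset: stop training
    return best_state, best
```

Returning the state from the best epoch, rather than the last one, is what prevents the overfit tail of training from leaking into the final model.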

Performance Optimization of High Specific Speed Pump-Turbines by Means of Numerical Flow Simulation (CFD) and Model Testing

  • Kerschberger, Peter;Gehrer, Arno
    • International Journal of Fluid Machinery and Systems
    • /
    • v.3 no.4
    • /
    • pp.352-359
    • /
    • 2010
  • In recent years, the market has shown increasing interest in pump-turbines. The prompt availability of pumped storage plants and the benefits to the power system achieved by peak lopping, providing reserve capacity, and rapid response in frequency control give them a growing advantage. In this context, there is a need to develop pump-turbines that can reliably withstand dynamic operation modes, fast changes of discharge rate by adjusting the variable diffuser vanes, as well as fast changes from pumping to turbine operation. In the first part of the present study, various flow patterns linked to the operation of a pump-turbine system are discussed. In this context, pump and turbine modes are presented separately and different load cases are shown for each operating mode. In order to create modern, competitive pump-turbine designs, this study further explains which design challenges should be considered in defining the geometry of a pump-turbine impeller. The second part of the paper describes an innovative, staggered approach to impeller development, applied to a low-head pump-turbine project. The first level of the process consists of optimization strategies based on evolutionary algorithms together with 3D inviscid flow analysis. In the next stage, the hydraulic behavior of both pump and turbine modes is evaluated by solving the full 3D Navier-Stokes equations in combination with a robust turbulence model. Finally, the progress in hydraulic design is demonstrated by model test results that show a significant improvement in hydraulic performance compared to an existing reference design.

Design Optimization of the Air Bearing Surface for the Optical Flying Head (Optical Flying Head의 Air Bearing Surface 형상 최적 설계)

  • Lee Jongsoo;Kim Jiwon
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.29 no.2 s.233
    • /
    • pp.303-310
    • /
    • 2005
  • Systems with probe and SIL (Solid Immersion Lens) mechanisms have been researched as technologies for NFR (Near Field Recording). Most of them use a flying-head mechanism to achieve high recording density and a fast data transfer rate. In this paper, the ABS shape of the flying head was optimized with the objective of securing the maximum compliance ability of the OFH. We suggest several different optimization processes to predict the static flying characteristics of the OFH. Two different approximation methods, regression analysis and a back-propagation neural network, were used, and we compared the result of the directly connected (CAE-to-optimizer) method with the two approximation-based optimization results. Design Optimization Tool (DOT) and μGA were used as the optimizers.
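
The approximation-based branch of such a process can be sketched in miniature: fit a quadratic response surface through a few "expensive" evaluations (standing in for CAE runs) and take the surrogate's stationary point. The 1-D setting and sample locations are assumptions for illustration.

```python
def quadratic_surrogate_optimum(f, x0, x1, x2):
    """Fit a quadratic a*x^2 + b*x + c through three expensive evaluations
    of f (a stand-in for CAE runs) and return the surrogate's stationary
    point -b / (2a) as the predicted optimum.
    """
    y0, y1, y2 = f(x0), f(x1), f(x2)
    d01 = (y1 - y0) / (x1 - x0)              # first divided difference
    d12 = (y2 - y1) / (x2 - x1)
    a = (d12 - d01) / (x2 - x0)              # second divided difference
    b = d01 - a * (x0 + x1)                  # linear coefficient
    return -b / (2 * a)
```

The appeal of the approximated route is exactly this: once the cheap surrogate is built, the optimizer queries it instead of re-running the expensive simulation at every candidate design.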

A Simple and Reliable Method for PCR-Based Analyses in Plant Species Containing High Amounts of Polyphenols (Polyphenol 고함유 식물의 간편 PCR 분석)

  • 유남희;백소현;윤성중
    • Korean Journal of Plant Resources
    • /
    • v.14 no.3
    • /
    • pp.235-240
    • /
    • 2001
  • The polymerase chain reaction (PCR) is used in a wide array of research in plant molecular genetics and breeding. However, considerable time and cost are still required to prepare DNA suitable for reliable PCR results, especially in plant species containing high amounts of polyphenols. To reduce the time and effort for PCR-based analysis, a simplified but reliable method was developed by combining a simple and fast DNA extraction procedure with BLOTTO (Bovine Lacto Transfer Technique Optimizer) in the reaction mixture. Genomic DNAs prepared by the one-step extraction method from recalcitrant plant species such as Rubus coreanus, apple, grape and lettuce were successfully amplified by random primers in reaction mixtures containing 2 to 4% BLOTTO. Successful amplification of the γ-TMT transgene in lettuce transformants by specific primers was also achieved under the same conditions, making rapid screening of positive transformants possible. Our results suggest that the combined use of a simple DNA extraction procedure and BLOTTO in the reaction mixture can reduce the time and effort required for the PCR-based analysis of large numbers of germplasms and transformants.


Performance Evaluation of Hash Join Algorithm on Flash Memory SSDs (플래쉬 메모리 SSD 기반 해쉬 조인 알고리즘의 성능 평가)

  • Park, Jang-Woo;Park, Sang-Shin;Lee, Sang-Won;Park, Chan-Ik
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.16 no.11
    • /
    • pp.1031-1040
    • /
    • 2010
  • Hash join is one of the core algorithms in database management systems. If a hash join cannot complete in one pass because the available memory is insufficient (i.e., hash table overflow), however, it may incur a few sequential writes and excessive random reads. With a harddisk as the temporary storage for hash joins, the I/O time would be dominated by slow random reads in the probing phase. Meanwhile, flash memory based SSDs (flash SSDs) are becoming popular, and we will witness in the foreseeable future that flash SSDs replace harddisks in enterprise databases. In contrast to a harddisk, a flash SSD without any mechanical component has fast latency in random reads, and thus it can boost hash join performance. In this paper, we investigate several important and practical issues when a flash SSD is used as temporary storage for hash join. First, we reveal the I/O patterns of hash join in detail and explain why flash SSD can outperform harddisk by more than an order of magnitude. Second, we present and analyze the impact of cluster size (i.e., the I/O unit in hash join) on performance. Finally, we empirically demonstrate that, while a commercial query optimizer is error-prone in predicting the execution time with harddisk as temporary storage, it can precisely estimate the execution time with flash SSD. In summary, we show that, when used as temporary storage for hash join, flash SSD provides more reliable cost estimation as well as fast performance.
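
The partition/build/probe behavior under discussion can be sketched in miniature; this is a generic grace hash join, not the benchmarked implementation, with in-memory lists standing in for temporary storage on disk or SSD.

```python
from collections import defaultdict

def grace_hash_join(r, s, n_partitions=4):
    """Grace hash join sketch: partition both inputs by a hash of the join
    key, then build and probe an in-memory hash table per partition.
    r and s are lists of (key, payload) tuples; returns (key, r_val, s_val).
    """
    r_parts = [[] for _ in range(n_partitions)]
    s_parts = [[] for _ in range(n_partitions)]
    for k, v in r:                     # partition phase: sequential writes
        r_parts[hash(k) % n_partitions].append((k, v))
    for k, v in s:
        s_parts[hash(k) % n_partitions].append((k, v))
    out = []
    for rp, sp in zip(r_parts, s_parts):
        table = defaultdict(list)      # build phase on one partition of r
        for k, v in rp:
            table[k].append(v)
        for k, v in sp:                # probe phase: lookups into the table
            for rv in table[k]:
                out.append((k, rv, v))
    return out
```

When the partitions spill to storage, the partition phase issues sequential writes while the probe phase issues the random reads whose latency dominates on a harddisk, which is exactly where a flash SSD's fast random reads pay off.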