• Title/Summary/Keyword: Computational Experiment

Damage detection in structures using modal curvatures gapped smoothing method and deep learning

  • Nguyen, Duong Huong;Bui-Tien, T.;Roeck, Guido De;Wahab, Magd Abdel
    • Structural Engineering and Mechanics / v.77 no.1 / pp.47-56 / 2021
  • This paper deals with damage detection using the Gapped Smoothing Method (GSM) combined with deep learning. A Convolutional Neural Network (CNN) is the deep learning model used: it has an input layer, an output layer, and a number of hidden layers consisting of convolutional layers. The input is a tensor of shape (number of images) × (image width) × (image height) × (image depth); an activation function is applied as the tensor passes through each hidden layer, and the last hidden layer is fully connected. The output layer, which follows the fully connected layer, gives the CNN prediction. A complete machine learning pipeline is introduced. The training data were generated from a Finite Element (FE) model, and the input images are contour plots of the gapped-smoothing curvature damage index. A free-free beam is used as a case study. In the first step, the FE model of the beam was used to generate data, which were divided into 70% for training and 30% for validation. In the second step, the proposed CNN was trained on the training data and then evaluated on the validation data. Furthermore, a vibration experiment on a damaged steel beam in a free-free support condition was carried out in the laboratory to test the method. A total of 15 accelerometers were set up to measure the mode shapes and compute the gapped-smoothing curvature of the damaged beam. Two scenarios with different damage severities were introduced. The results showed that the trained CNN successfully detected both the location and the severity of the damage in the experimental beam.
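
For illustration only (not the authors' code), the sketch below shows a minimal Keras-style CNN of the kind the abstract describes: contour-plot images go in, a damage class comes out, with a 70/30 train/validation split. All shapes, layer sizes, and class counts are assumptions.

```python
# Illustrative sketch only: a minimal CNN for contour-plot damage images.
# Shapes, layer sizes, and the random data are assumptions, not the paper's setup.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

num_images, h, w, depth = 1000, 64, 64, 3            # hypothetical FE-generated dataset
x = np.random.rand(num_images, h, w, depth).astype("float32")
y = np.random.randint(0, 10, size=num_images)        # hypothetical damage-location classes

model = models.Sequential([
    layers.Input(shape=(h, w, depth)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),              # fully connected layer
    layers.Dense(10, activation="softmax"),           # output layer
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x, y, validation_split=0.3, epochs=5, batch_size=32)   # 70/30 train/validation
```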

A Heuristic for Service-Parts Lot-Sizing with Disassembly Option (분해옵션 포함 서비스부품 로트사이징 휴리스틱)

  • Jang, Jin-Myeong;Kim, Hwa-Joong;Son, Dong-Hoon;Lee, Dong-Ho
    • Journal of Korean Society of Industrial and Systems Engineering / v.44 no.2 / pp.24-35 / 2021
  • Due to increasing awareness of the treatment of end-of-use/life products, disassembly has been a fast-growing research area of interest for many researchers over recent decades. This paper introduces a novel lot-sizing problem that has not been studied in the literature: service-parts lot-sizing with a disassembly option. The disassembly option implies that the demands for service parts can be fulfilled not only by newly manufactured parts but also by disassembled parts, i.e., parts recovered by disassembling end-of-use/life products. The objective of the considered problem is to maximize the total profit, i.e., the revenue from selling the service parts minus the total cost of fixed setup, production, disassembly, inventory holding, and disposal over a planning horizon. This paper proves that the single-period version of the problem is NP-hard and suggests a heuristic that combines a simulated annealing algorithm with a linear-programming relaxation. Computational experiment results show that the heuristic generates near-optimal solutions within reasonable computation time, which implies that it is a viable optimization tool for service-parts inventory management. In addition, sensitivity analyses indicate that setting an appropriate price for disassembled parts and an appropriate collection amount of end-of-use/life products is very important for sustainable service-parts systems.
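
The paper's heuristic combines simulated annealing with an LP relaxation; the following is only a generic simulated-annealing skeleton of the kind such a heuristic builds on. The objective function and neighborhood move are made-up placeholders, not the authors' formulation.

```python
# Generic simulated-annealing skeleton (illustration only, not the paper's heuristic).
# A real implementation would evaluate profit for a candidate setup plan via an
# LP relaxation of the lot-sizing subproblem instead of the placeholder objective.
import math
import random

def objective(setup_plan):
    # Placeholder profit function: simply prefers fewer setups (purely illustrative).
    return -sum(setup_plan)

def neighbor(setup_plan):
    plan = setup_plan[:]
    t = random.randrange(len(plan))
    plan[t] = 1 - plan[t]                  # flip one period's setup decision
    return plan

def simulated_annealing(periods=12, t0=10.0, cooling=0.95, iters=2000):
    current = [random.randint(0, 1) for _ in range(periods)]
    best, temp = current[:], t0
    for _ in range(iters):
        cand = neighbor(current)
        delta = objective(cand) - objective(current)
        if delta >= 0 or random.random() < math.exp(delta / temp):
            current = cand
            if objective(current) > objective(best):
                best = current[:]
        temp *= cooling
    return best, objective(best)

print(simulated_annealing())
```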

2D-MELPP: A two dimensional matrix exponential based extension of locality preserving projections for dimensional reduction

  • Xiong, Zixun;Wan, Minghua;Xue, Rui;Yang, Guowei
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.9 / pp.2991-3007 / 2022
  • Two-dimensional locality preserving projections (2D-LPP) is an improved, image-matrix-based algorithm that addresses the small sample size (SSS) problem which locality preserving projections (LPP) meets. It is able to find a low-dimensional manifold mapping that not only preserves local information but also detects the manifold embedded in the original data space. However, simple and elegant as 2D-LPP is, it may share the limitations of other matrix-based methods: inspired by comparison experiments between two-dimensional linear discriminant analysis (2D-LDA) and linear discriminant analysis (LDA), which indicated that matrix-based methods do not always perform better even when training samples are limited, we surmise that 2D-LPP meets the same limitation as 2D-LDA and propose a novel matrix exponential method to enhance the performance of 2D-LPP. 2D-MELPP is equivalent to employing distance diffusion mapping to transform the original images into a new space in which the margins between labels are broadened, which is beneficial for classification problems. Nonetheless, the computational time complexity of 2D-MELPP is extremely high. In this paper, we replace some of the matrix multiplications with multiple multiplications to reduce the memory cost and provide an efficient way of solving 2D-MELPP. We test it on public databases (a random 3D data set, ORL, the AR face database, and the PolyU Palmprint database) and compare it with other 2D methods such as 2D-LDA and 2D-LPP and with 1D methods such as LPP and exponential locality preserving projections (ELPP), finding that it outperforms the others in recognition accuracy. We also compare different projection-vector dimensions and record the computation time on the ORL, AR face, and PolyU Palmprint databases. These experimental results show that the proposed algorithm performs better on three independent public databases.
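
As a rough illustration of the general "matrix exponential" idea behind exponential LPP-style methods (not the authors' 2D-MELPP implementation), the sketch below replaces the scatter matrices with their matrix exponentials, which are always full-rank and so sidestep the SSS problem. The toy graph construction and dimensions are assumptions.

```python
# Illustration only: matrix-exponential trick for an LPP-style embedding.
import numpy as np
from scipy.linalg import expm, eigh

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))                 # 50 samples, 20 features (hypothetical)

# Toy locality-preserving graph: connect each sample to its nearest neighbor.
W = np.zeros((50, 50))
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
np.fill_diagonal(d2, np.inf)
nn = d2.argmin(axis=1)
W[np.arange(50), nn] = W[nn, np.arange(50)] = 1.0
D = np.diag(W.sum(axis=1))
L = D - W                                         # graph Laplacian

S_L = X.T @ L @ X                                 # locality scatter
S_D = X.T @ D @ X                                 # degree scatter
# The exponentials are symmetric positive definite, so the generalized
# eigenproblem is well posed even when the original scatters are singular.
vals, vecs = eigh(expm(S_L), expm(S_D))
projection = vecs[:, :5]                          # 5 smallest-eigenvalue directions
Y = X @ projection                                # embedded data
print(Y.shape)
```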

Applying a Novel Neuroscience Mining (NSM) Method to fNIRS Dataset for Predicting the Business Problem Solving Creativity: Emphasis on Combining CNN, BiLSTM, and Attention Network

  • Kim, Kyu Sung;Kim, Min Gyeong;Lee, Kun Chang
    • Journal of the Korea Society of Computer and Information / v.27 no.8 / pp.1-7 / 2022
  • With the development of artificial intelligence, efforts to incorporate neuroscience mining with AI have increased. Neuroscience mining (NSM) expands on this concept by combining computational neuroscience and business analytics. Using an fNIRS (functional near-infrared spectroscopy) experiment dataset, we investigated the potential of NSM in the context of business problem-solving creativity (BPSC) prediction. Although BPSC is regarded as an essential business differentiator and a cognitive resource that is difficult to imitate, measuring it is a challenging task. In the context of NSM, appropriate methods for assessing and predicting BPSC are still in their infancy. In this sense, we propose a novel NSM method that systematically combines a CNN, a BiLSTM, and an attention network to significantly enhance BPSC prediction performance. We used a dataset containing over 150 thousand fNIRS-measured data points to evaluate the validity of the proposed NSM method. Empirical evidence demonstrates that the proposed NSM method shows the most robust performance compared with benchmarking methods.
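
For orientation only, the sketch below wires up a generic CNN + BiLSTM + attention stack for multichannel time-series classification. The window length, channel count, use of Keras' built-in dot-product Attention layer, and the binary output are all assumptions, not the paper's architecture.

```python
# Illustration only: generic CNN + BiLSTM + attention for multichannel time-series.
import tensorflow as tf
from tensorflow.keras import layers, models

timesteps, channels = 128, 8                                           # hypothetical fNIRS window
inputs = layers.Input(shape=(timesteps, channels))
x = layers.Conv1D(32, 5, padding="same", activation="relu")(inputs)    # local temporal features
x = layers.MaxPooling1D(2)(x)
x = layers.Bidirectional(layers.LSTM(32, return_sequences=True))(x)    # bidirectional context
attn = layers.Attention()([x, x])                                      # dot-product self-attention
x = layers.GlobalAveragePooling1D()(attn)
outputs = layers.Dense(1, activation="sigmoid")(x)                     # high/low BPSC (assumed binary)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```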

Approximate Multiplier with High Density, Low Power and High Speed using Efficient Partial Product Reduction (효율적인 부분 곱 감소를 이용한 고집적·저전력·고속 근사 곱셈기)

  • Seo, Ho-Sung;Kim, Dae-Ik
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.4 / pp.671-678 / 2022
  • Approximate computing is a computational technique that accepts a degree of inaccuracy in the results in exchange for efficiency. Approximate multiplication is one of the approximate computing methods for high-performance, low-power computing. In this paper, we propose a high-density, low-power, and high-speed approximate multiplier using an approximate 4-2 compressor and an improved full adder. The approximate multiplier based on the approximate 4-2 compressor consists of three regions, i.e., exact, approximate, and constant-correction regions, and designs were compared by adjusting the size of each region while applying an efficient partial-product reduction. The proposed approximate multiplier was designed in Verilog HDL, and its area, power, and delay were analyzed with Synopsys Design Compiler (DC) on a 25 nm CMOS process. As a result of the experiment, the proposed multiplier reduced area by 10.47%, power by 26.11%, and delay by 13% compared to the conventional approximate multiplier.
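
To make the 4-2 compressor idea concrete, the behavioral sketch below compares an exact 4-2 compressor with one generic simplified variant and counts how often their outputs differ. The approximation shown is a textbook-style simplification chosen purely for illustration; it is not the compressor design proposed in the paper, and the paper's hardware is written in Verilog, not Python.

```python
# Behavioral illustration only: exact vs. a generic approximate 4-2 compressor.
from itertools import product

def compressor_4_2_exact(x1, x2, x3, x4, cin):
    total = x1 + x2 + x3 + x4 + cin          # value to re-encode
    s = total & 1
    half = total >> 1                         # split across the two weight-2 outputs
    carry = 1 if half >= 1 else 0
    cout = 1 if half == 2 else 0
    return s, carry, cout                     # total == s + 2 * (carry + cout)

def compressor_4_2_approx(x1, x2, x3, x4, cin):
    # Generic simplification: drop the carry chain and approximate sum/carry.
    s = (x1 ^ x2) | (x3 ^ x4)
    carry = (x1 & x2) | (x3 & x4)
    return s, carry, 0

errors = 0
for bits in product((0, 1), repeat=5):
    se, ce, coe = compressor_4_2_exact(*bits)
    sa, ca, coa = compressor_4_2_approx(*bits)
    errors += (se + 2 * (ce + coe)) != (sa + 2 * (ca + coa))
print(f"approximate outputs differ on {errors}/32 input combinations")
```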

Multi-Region based Radial GCN algorithm for Human action Recognition (행동인식을 위한 다중 영역 기반 방사형 GCN 알고리즘)

  • Jang, Han Byul;Lee, Chil Woo
    • Smart Media Journal / v.11 no.1 / pp.46-57 / 2022
  • In this paper, the multi-region based Radial Graph Convolutional Network (MRGCN) algorithm, which performs end-to-end action recognition using the optical flow and gradient of the input image, is described. Because this method does not use skeleton information, which is difficult to acquire and complicated to estimate, it can be used in a general CCTV environment in which only a video camera is available. The novelty of MRGCN is that it expresses the optical flow and gradient of the input image as directional histograms and then converts them into six feature vectors to reduce the computational load, and that it uses a newly developed radial network model to hierarchically propagate the deformation and shape change of the human body in spatio-temporal space. Another important feature is that the data input regions are arranged to overlap one another, so that information is not spatially disconnected among input nodes. In an evaluation of MRGCN's action recognition performance on 30 actions, a Top-1 accuracy of 84.78% was obtained, which is superior to existing GCN-based action recognition methods that use skeleton data as input.
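
As a small illustration of the kind of per-region descriptor the abstract mentions (not the MRGCN implementation itself), the sketch below turns image gradients into a magnitude-weighted directional histogram; the frame is a random stand-in and the bin count is an assumption.

```python
# Illustration only: a directional histogram of image gradients for one region.
import numpy as np

def directional_histogram(gray, bins=8):
    gy, gx = np.gradient(gray.astype(float))          # image gradients (rows, cols)
    magnitude = np.hypot(gx, gy)
    angle = np.arctan2(gy, gx) % (2 * np.pi)          # direction in [0, 2*pi)
    hist, _ = np.histogram(angle, bins=bins, range=(0, 2 * np.pi), weights=magnitude)
    return hist / (hist.sum() + 1e-8)                 # normalized directional histogram

rng = np.random.default_rng(0)
frame = rng.random((120, 160))                        # stand-in for a grayscale frame
# A multi-region variant would compute one histogram per (overlapping) region.
print(directional_histogram(frame))
```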

Elongation Behavior of Polymeric Materials for Membrane Applications Using Molecular Dynamics (분자동역학을 이용한 분리막용 소재로 사용되는 고분자 소재의 신장거동 연구)

  • Kang, Hoseong;Park, Chi Hoon
    • Membrane Journal / v.32 no.1 / pp.57-65 / 2022
  • Recently, computer simulation research has been increasing rapidly due to the development of computer and software technology. In particular, various computational simulation results related to polymers, which were previously limited by the number of atoms and the model size, are being published. In this study, molecular dynamics (MD) simulation was used to analyze mechanical properties, which are among the important properties for using a polymer material as a membrane. To this end, polyethylene (PE) and polystyrene (PS), commercial polymer materials whose properties are widely reported, were selected as polymer models, and the tensile properties of each polymer were compared with respect to main-chain length. Density, radius of gyration, and scattering analyses showed that the models produced in this study agree well with the mechanical property trends obtained in actual experiments. This approach is expected to enable the prediction of the mechanical properties of various polymer materials for membrane fabrication.
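
Purely as an illustration of one of the structural checks named in the abstract, the sketch below computes a mass-weighted radius of gyration from atomic coordinates; the coordinates and masses are random stand-ins, not an actual PE or PS model.

```python
# Illustration only: mass-weighted radius of gyration of a chain of "atoms".
import numpy as np

def radius_of_gyration(coords, masses):
    com = np.average(coords, axis=0, weights=masses)       # center of mass
    sq_dist = ((coords - com) ** 2).sum(axis=1)
    return np.sqrt(np.average(sq_dist, weights=masses))

rng = np.random.default_rng(1)
coords = rng.random((500, 3)) * 20.0    # 500 hypothetical atoms in a 20 x 20 x 20 box
masses = np.full(500, 12.0)             # carbon-like masses, for illustration only
print(f"Rg = {radius_of_gyration(coords, masses):.2f}")
```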

The Effect of DMM on Learning Motivation and Academic Achievement in SW Education of Non-Major (비전공자의 SW 교육을 위한 시연 중심 모형의 학습동기와 학업성취도 효과)

  • Kang, Yun-Jeong;Won, Dong-Hyun;Park, Hyuk-Gyu;Lee, Min-Hye
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference / 2022.10a / pp.258-260 / 2022
  • In order to nurture talents who will lead the digital convergence era of the 4th industrial revolution, which creates new knowledge and industries, research is being conducted on teaching methods that can improve non-majors' understanding of SW concepts, their computational thinking ability, and convergence with their majors. Non-majors face difficulties in understanding the SW development environment, its relevance to their major, and how to achieve such convergence. We used software education that is relatively easy for non-majors to access and applied a demonstration-oriented model (DMM), suitable for beginners in SW education, to help learners understand the components and logical flow of ideas related to applications used in real life and in their majors. A convergent SW learning method that combines repetitive implementation, through the instructor's demonstration and the learner's modeling, with learning motivation factors was proposed. In an experiment applying the proposed teaching and learning method, meaningful results were obtained in terms of learning motivation and academic achievement in SW education.

A Study on the Data Analysis of Fire Simulation in Underground Utility Tunnel for Digital Twin Application (디지털트윈 적용을 위한 지하공동구 화재 시뮬레이션의 데이터 분석 연구)

  • Jae-Ho Lee;Se-Hong Min
    • Journal of the Society of Disaster Information / v.20 no.1 / pp.82-92 / 2024
  • Purpose: The purpose of this study is to find a solution to the massive data construction that occurs when fire simulation data are linked to augmented reality, and to the resulting data overload problem. Method: An experiment was conducted to determine an appropriate interval between input data points so as to improve the reliability and reduce the computational complexity of linear interpolation, a data estimation technique. In addition, a validity verification was conducted to confirm whether linear interpolation reflects the dynamic changes of fire well. Result: When applied to the underground utility tunnel that is the subject of this study, inputting data at intervals of 10 m gave satisfactory results for both interpolation reliability and simulation processing speed. In addition, evaluation using MAE and R-squared verified that estimating fire simulation data with the interpolation technique has high explanatory power and reliability. Conclusion: This study solved the data overload problem caused by applying digital twin technology to fire simulation through the interpolation technique, and confirmed that fire information prediction and visualization are of great help for real-time fire prevention.
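
As a minimal illustration of the evaluation pipeline the abstract describes, the sketch below downsamples a synthetic temperature curve, reconstructs it with linear interpolation, and scores the estimate with MAE and R-squared. The curve and the sampling interval are made up; only the technique mirrors the abstract.

```python
# Illustration only: linear interpolation of sparse simulation samples plus MAE/R^2.
import numpy as np

dense_x = np.linspace(0, 100, 201)                        # full simulation output positions
dense_y = 20 + 600 / (1 + np.exp(-(dense_x - 50) / 8))    # synthetic temperature curve

sparse_x = dense_x[::20]                                   # keep every 20th sample as input data
sparse_y = dense_y[::20]
estimate = np.interp(dense_x, sparse_x, sparse_y)          # linear interpolation back to dense grid

mae = np.mean(np.abs(estimate - dense_y))
ss_res = np.sum((dense_y - estimate) ** 2)
ss_tot = np.sum((dense_y - dense_y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"MAE = {mae:.2f}, R^2 = {r2:.4f}")
```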

Robust Eye Localization using Multi-Scale Gabor Feature Vectors (다중 해상도 가버 특징 벡터를 이용한 강인한 눈 검출)

  • Kim, Sang-Hoon;Jung, Sou-Hwan;Cho, Seong-Won;Chung, Sun-Tae
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.1 / pp.25-36 / 2008
  • Eye localization means locating the centers of the pupils, and it is necessary for face recognition and related applications. Most eye localization methods reported so far still need improvement in robustness as well as precision for successful application. In this paper, we propose a robust eye localization method using multi-scale Gabor feature vectors without a large computational burden. Eye localization using Gabor feature vectors is already employed in methods such as EBGM, but the approach used in EBGM is known not to be robust with respect to initial values, illumination, and pose, and may need an extensive search range to achieve the required performance, which can cause a heavy computational burden. The proposed method uses a multi-scale approach: it first localizes the eyes in a low-resolution face image using the Gabor jet similarity between the Gabor feature vector at estimated initial eye coordinates and the Gabor feature vectors in the eye model of the corresponding scale. It then localizes the eyes in the next-scale face image in the same way, but with initial eye points estimated from the eye coordinates found in the lower-resolution image. Repeating this process recursively, the proposed method finally localizes the eyes in the original-resolution face image. The proposed method also applies an effective illumination normalization in the preprocessing stage of the multi-scale approach, which makes it more robust to illumination and enhances the eye detection success rate. Experimental results verify that the proposed eye localization method improves the precision rate without large computational overhead compared with other eye localization methods reported in previous research, and that it is robust to variations in pose and illumination.
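
For illustration only, the sketch below shows the standard Gabor-jet similarity (a normalized dot product of Gabor magnitude responses) that EBGM-style localization builds on. The filter bank parameters and image patches are synthetic assumptions, not the paper's implementation.

```python
# Illustration only: Gabor jets and their normalized-dot-product similarity.
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.exp(2j * np.pi * xr / wavelength)

def gabor_jet(patch, wavelengths=(4, 8, 16), orientations=8):
    # Magnitudes of the filter responses at the patch center form the "jet".
    jet = []
    for lam in wavelengths:
        for k in range(orientations):
            kern = gabor_kernel(patch.shape[0], lam, k * np.pi / orientations, lam / 2)
            jet.append(abs((patch * kern).sum()))
    return np.array(jet)

def jet_similarity(j1, j2):
    return float(j1 @ j2 / (np.linalg.norm(j1) * np.linalg.norm(j2) + 1e-8))

rng = np.random.default_rng(2)
patch_a = rng.random((33, 33))
patch_b = patch_a + 0.05 * rng.random((33, 33))     # slightly perturbed patch
print(jet_similarity(gabor_jet(patch_a), gabor_jet(patch_b)))
```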