• Title/Summary/Keyword: training models

Search Results: 1,531

Refractive-index Prediction for High-refractive-index Optical Glasses Based on the B2O3-La2O3-Ta2O5-SiO2 System Using Machine Learning

  • Seok Jin Hong;Jung Hee Lee;Devarajulu Gelija;Woon Jin Chung
    • Current Optics and Photonics / v.8 no.3 / pp.230-238 / 2024
  • The refractive index is a key material-design parameter, especially for high-refractive-index glasses, which are used for precision optics and devices. Increased demand for high-precision optical lenses produced by the glass-mold-press (GMP) process has spurred extensive studies of suitable glass materials. High-refractive-index glasses for GMP are mostly composed of B2O3, SiO2, and heavy-metal oxides such as Ta2O5, Nb2O5, La2O3, and Gd2O3. However, because these glasses can contain up to 10 oxide components, it is difficult to predict the refractive index from the glass composition alone. In this study, the refractive index of optical glasses based on the B2O3-La2O3-Ta2O5-SiO2 system is predicted using machine learning (ML) and compared to experimental data. A dataset of 271 glasses with up to 10 components is collected and used for training. Various ML algorithms (linear-regression, Bayesian-ridge-regression, nearest-neighbor, and random-forest models) are trained on the data. Along with composition, the polarizability and density of the glasses are also considered as independent parameters for predicting the refractive index. The best-fitting model, selected by R2 value, is then examined against the experimentally obtained refractive indices of B2O3-La2O3-Ta2O5-SiO2 quaternary glasses.
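
As a rough illustration of the workflow this abstract describes, the sketch below trains the four regressor families with scikit-learn and compares them by R2; the CSV file and column names are hypothetical placeholders, not the authors' dataset.

```python
# Sketch: compare regressors for refractive-index prediction (assumed column names).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, BayesianRidge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

df = pd.read_csv("glass_dataset.csv")          # hypothetical file: 271 glass compositions
X = df.drop(columns=["refractive_index"])      # oxide fractions + polarizability + density
y = df["refractive_index"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "linear": LinearRegression(),
    "bayesian_ridge": BayesianRidge(),
    "nearest_neighbor": KNeighborsRegressor(n_neighbors=5),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, r2_score(y_te, model.predict(X_te)))   # pick the best model by R^2
```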

Complex nested U-Net-based speech enhancement model using a dual-branch decoder (이중 분기 디코더를 사용하는 복소 중첩 U-Net 기반 음성 향상 모델)

  • Seorim Hwang;Sung Wook Park;Youngcheol Park
    • The Journal of the Acoustical Society of Korea / v.43 no.2 / pp.253-259 / 2024
  • This paper proposes a new speech enhancement model based on a complex nested U-Net with a dual-branch decoder. The proposed model uses a complex nested U-Net to estimate the magnitude and phase components of the speech signal simultaneously, and its decoder has a dual-branch structure that performs spectral mapping in one branch and time-frequency masking in the other. Compared to a single-branch decoder, the dual-branch structure removes noise effectively while minimizing the loss of speech information. Experiments were conducted on the VoiceBank + DEMAND database, commonly used for training speech enhancement models, and evaluated with various objective metrics. The proposed model increased the Perceptual Evaluation of Speech Quality (PESQ) score by about 0.13 over the baseline and achieved higher objective scores than recently proposed speech enhancement models.
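
A minimal PyTorch sketch of the dual-branch idea follows: one branch maps encoder features directly to a complex spectrum, the other estimates a complex mask applied to the noisy input. The layer sizes, tensor shapes, and the way the two branch outputs are combined are assumptions for illustration, not the paper's architecture.

```python
# Minimal sketch of a dual-branch decoder head: spectral mapping + time-frequency masking.
import torch
import torch.nn as nn

class DualBranchDecoder(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # Each branch produces real + imaginary parts (2 channels) of the complex spectrum.
        self.mapping_branch = nn.Conv2d(channels, 2, kernel_size=1)   # spectral mapping
        self.masking_branch = nn.Sequential(
            nn.Conv2d(channels, 2, kernel_size=1), nn.Tanh()          # bounded complex mask
        )

    def forward(self, features, noisy_spec):
        mapped = self.mapping_branch(features)
        masked = self.masking_branch(features) * noisy_spec
        return mapped + masked          # combining by summation is an assumption here

dec = DualBranchDecoder()
feats = torch.randn(1, 64, 257, 100)    # encoder features (batch, channels, freq, time)
noisy = torch.randn(1, 2, 257, 100)     # noisy complex spectrogram (real, imag)
enhanced = dec(feats, noisy)
```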

Transfer Learning-based Generated Synthetic Images Identification Model (전이 학습 기반의 생성 이미지 판별 모델 설계)

  • Chaewon Kim;Sungyeon Yoon;Myeongeun Han;Minseo Park
    • The Journal of the Convergence on Culture Technology / v.10 no.2 / pp.465-470 / 2024
  • The advancement of AI-based image generation technology has produced a wide variety of images, emphasizing the need for technology that can accurately distinguish generated images from real ones. Because the amount of generated-image data is limited, this study proposes a transfer learning-based model for identifying generated images that achieves high performance with a small dataset. We apply models pre-trained on the ImageNet dataset directly to the CIFAKE input dataset to reduce training time, and then add three hidden layers and one output layer to fine-tune the model. The results show that model performance improves when the final layers are adjusted. By using transfer learning and then tuning the layers close to the output, accuracy problems caused by small image datasets can be reduced and generated images can be classified.
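
The transfer-learning recipe described here can be sketched in Keras roughly as follows; the choice of ResNet50 as the ImageNet backbone, the hidden-layer sizes, and the data loaders are assumptions, since the abstract does not specify them.

```python
# Sketch of the transfer-learning setup (backbone choice and layer sizes are assumptions).
import tensorflow as tf

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                       pooling="avg", input_shape=(224, 224, 3))
base.trainable = False                      # reuse ImageNet features, train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(256, activation="relu"),   # three added hidden layers
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # output layer: real vs. generated
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # train_ds/val_ds: CIFAKE loaders
```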

ML-based prediction method for estimating vortex-induced vibration amplitude of steel tubes in tubular transmission towers

  • Jiahong Li;Tao Wang;Zhengliang Li
    • Structural Engineering and Mechanics / v.90 no.1 / pp.27-40 / 2024
  • The prediction of vortex-induced vibration (VIV) amplitude is essential for the design and fatigue-life estimation of steel tubes in tubular transmission towers. Because traditional experimental and computational fluid dynamics (CFD) methods are costly and time-consuming, a machine learning (ML)-based method is proposed to efficiently predict the VIV amplitude of steel tubes in transmission towers. Firstly, by introducing the first-order mode shape into the two-dimensional CFD method, a simplified response analysis method (SRAM) is presented to calculate the VIV amplitude of steel tubes in transmission towers, which makes it possible to build a dataset for training ML models. Then, taking the mass ratio M*, damping ratio ξ, and reduced velocity U* as input variables, a Kriging-based prediction method (KPM) is proposed to estimate the VIV amplitude of steel tubes in transmission towers by combining the SRAM with a Kriging-based ML model. Finally, the feasibility and effectiveness of the proposed methods are demonstrated using three full-scale steel tubes with C-shaped, Cross-shaped, and Flange-plate joints, respectively. The results show that the SRAM calculates the VIV amplitude reasonably well, with relative errors of the maximum VIV amplitude below 6% in all three examples. Meanwhile, the KPM predicts the VIV amplitude of steel tubes in transmission towers well within the studied ranges of M*, ξ, and U*. In particular, the KPM shows excellent capability in estimating the maximum VIV amplitude using the reduced damping parameter SG.
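
Kriging is essentially Gaussian-process regression, so the surrogate step of the KPM can be sketched with scikit-learn as below; the kernel, the training points, and the amplitude values are illustrative placeholders (in the paper they would come from the SRAM).

```python
# Kriging-style surrogate for VIV amplitude: Gaussian-process regression on (M*, xi, U*).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

X_train = np.array([[10.0, 0.01, 5.0],     # [mass ratio M*, damping ratio xi, reduced velocity U*]
                    [10.0, 0.02, 6.0],
                    [20.0, 0.01, 7.0]])    # in practice: cases computed with the SRAM
y_train = np.array([0.12, 0.08, 0.15])     # dimensionless VIV amplitudes (placeholder values)

kernel = ConstantKernel(1.0) * RBF(length_scale=[5.0, 0.01, 1.0])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

amp_mean, amp_std = gp.predict(np.array([[15.0, 0.015, 6.5]]), return_std=True)
print(f"predicted amplitude: {amp_mean[0]:.3f} +/- {amp_std[0]:.3f}")
```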

UAV Path Planning based on Deep Reinforcement Learning using Cell Decomposition Algorithm (셀 분해 알고리즘을 활용한 심층 강화학습 기반 무인 항공기 경로 계획)

  • Kyoung-Hun Kim;Byungsun Hwang;Joonho Seon;Soo-Hyun Kim;Jin-Young Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.3 / pp.15-20 / 2024
  • Path planning for unmanned aerial vehicles (UAV) is crucial for avoiding collisions with obstacles in complex environments that include both static and dynamic obstacles. Path planning algorithms such as RRT and A* handle static obstacle avoidance effectively but suffer from rapidly increasing computational complexity in high-dimensional environments. Reinforcement learning-based algorithms can accommodate complex environments, but, like traditional path planning algorithms, they struggle with training complexity and convergence in higher-dimensional environments. In this paper, we propose a reinforcement learning model that utilizes a cell decomposition algorithm. The proposed model reduces the complexity of the environment by decomposing the learning environment into cells and improves obstacle-avoidance performance by restricting the agent to valid actions. This mitigates the exploration problem of reinforcement learning and improves convergence. Simulation results show that the proposed model improves learning speed and path-planning efficiency compared to reinforcement learning models in general environments.
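
The cell-decomposition and valid-action ideas can be sketched as below; the grid size, obstacle layout, and action set are illustrative assumptions rather than the paper's setup, and the full DRL training loop is omitted.

```python
# Sketch: decompose a fine occupancy map into coarse cells and expose only valid actions.
import numpy as np

ACTIONS = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}   # up, down, left, right

def decompose(obstacle_map, cell=4):
    """Downsample an occupancy map: a coarse cell is blocked if any fine cell in it is."""
    h, w = obstacle_map.shape
    coarse = obstacle_map[:h - h % cell, :w - w % cell]
    return coarse.reshape(h // cell, cell, w // cell, cell).max(axis=(1, 3))

def valid_actions(grid, state):
    """Return the actions that keep the agent inside the grid and off obstacles."""
    valid = []
    for a, (dr, dc) in ACTIONS.items():
        r, c = state[0] + dr, state[1] + dc
        if 0 <= r < grid.shape[0] and 0 <= c < grid.shape[1] and grid[r, c] == 0:
            valid.append(a)
    return valid

fine_map = np.zeros((40, 40), dtype=int)
fine_map[10:20, 12:16] = 1                  # a static obstacle
grid = decompose(fine_map, cell=4)          # 10 x 10 learning environment
print(valid_actions(grid, (2, 2)))          # action mask fed to the DRL agent
```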

A Study on the Evaluation of LLM's Gameplay Capabilities in Interactive Text-Based Games (대화형 텍스트 기반 게임에서 LLM의 게임플레이 기능 평가에 관한 연구)

  • Dongcheul Lee
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.3 / pp.87-94 / 2024
  • We investigated the feasibility of using Large Language Models (LLMs) to play text-based games without prior training on game data. We adopted ChatGPT-3.5 and the more advanced ChatGPT-4 as the LLM-based systems. In addition, we added the persistent-memory feature proposed in this paper to ChatGPT-4, resulting in three game-player agents. We used Zork, one of the most famous text-based games, to see whether the agents could navigate complex locations, gather information, and solve puzzles. The results showed that the agent with persistent memory had the widest range of exploration and the best score among the three agents. However, all three agents were limited in solving puzzles, indicating that LLMs are vulnerable to problems that require multi-level reasoning. Nevertheless, the proposed agent was still able to visit 37.3% of the total locations and collect all the items in the locations it visited, demonstrating the potential of LLMs.
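
A minimal sketch of a persistent-memory game loop of this kind is shown below; call_llm and the game interface are hypothetical placeholders, not the paper's implementation or a specific vendor API.

```python
# Sketch: observations and actions are appended to a memory replayed into every prompt.
from typing import List

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for an LLM API call; returns the next game command."""
    raise NotImplementedError

def play(game, max_turns: int = 100) -> None:
    memory: List[str] = []                       # persistent memory across turns
    observation = game.reset()                   # e.g., Zork's opening description
    for _ in range(max_turns):
        prompt = (
            "You are playing the text adventure Zork. "
            "Past observations and actions:\n" + "\n".join(memory) +
            f"\nCurrent observation:\n{observation}\nNext command:"
        )
        action = call_llm(prompt)
        memory.append(f"OBS: {observation} | ACT: {action}")
        observation, score, done = game.step(action)   # hypothetical game interface
        if done:
            break
```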

Deep learning-based clothing attribute classification using fashion image data (패션 이미지 데이터를 활용한 딥러닝 기반의 의류속성 분류)

  • Hye Seon Jeong;So Young Lee;Choong Kwon Lee
    • Smart Media Journal / v.13 no.4 / pp.57-64 / 2024
  • Attributes such as material, color, and fit in fashion images are important factors in consumers' clothing purchases. However, classifying clothing attributes requires a large amount of manpower and is inconsistent because it relies on the subjective judgment of human operators. To alleviate this problem, research is needed on using artificial intelligence to classify clothing attributes in fashion images. Previous studies have mainly focused on classifying the attributes of either tops or bottoms, so the attributes of both cannot be identified simultaneously in full-body fashion images. In this study, we propose a deep learning model that distinguishes between tops and bottoms in fashion images and classifies the category of each item and the attributes of the clothing material. The deep learning models ResNet and EfficientNet were used, and the training dataset consisted of 1,002,718 fashion images with 125 labels covering clothing categories and material attributes. Based on the weighted F1-score, ResNet achieved 0.800 and EfficientNet 0.781, with ResNet showing the better performance.
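
One plausible reading of this setup is a shared backbone with a 125-way multi-label head; the sketch below shows that variant in PyTorch. The head layout, loss, and dummy batch are assumptions, since the abstract does not detail how the tops/bottoms split is implemented.

```python
# Multi-label sketch: ResNet backbone with a 125-way sigmoid head for categories + materials.
import torch
import torch.nn as nn
from torchvision import models

NUM_LABELS = 125
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_LABELS)   # replace the ImageNet head

criterion = nn.BCEWithLogitsLoss()                        # multi-label objective
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(8, 3, 224, 224)                      # dummy fashion-image batch
targets = torch.zeros(8, NUM_LABELS)
targets[:, 3] = 1.0                                       # e.g., one active attribute label
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
```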

Evaluation of the Feasibility of Deep Learning for Vegetation Monitoring (딥러닝 기반의 식생 모니터링 가능성 평가)

  • Kim, Dong-woo;Son, Seung-Woo
    • Journal of the Korean Society of Environmental Restoration Technology / v.26 no.6 / pp.85-96 / 2023
  • This study proposes a method for forest vegetation monitoring using high-resolution aerial imagery captured by unmanned aerial vehicles (UAV) and deep learning technology. The research site was a forested area of Mt. Dogo, Asan City, Chungcheongnam-do, and the target species for monitoring were Pinus densiflora, Quercus mongolica, and Quercus acutissima. To classify vegetation species at the pixel level in UAV imagery based on characteristics such as leaf shape, size, and color, the study employed semantic segmentation using the well-known U-Net deep learning model. The results indicated that Pinus densiflora Siebold & Zucc., Quercus mongolica Fisch. ex Ledeb., and Quercus acutissima Carruth. could be visually distinguished in 135 aerial images captured by UAV. Of these, 104 images were used as training data for the deep learning model, while 31 images were used for inference. After optimization, the model achieved an overall average pixel accuracy of 92.60%, an mIoU of 0.80, and an FIoU of 0.82, demonstrating that a reliable deep learning model was constructed. This study is significant as a pilot case for applying UAV imagery and deep learning to monitor and manage representative climate-vulnerable species, including Pinus densiflora, Quercus mongolica, and Quercus acutissima. It is expected that UAV and deep learning models can be applied to a wider variety of vegetation species in the future to better support forest management.
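
The reported metrics can be computed from a class confusion matrix as sketched below; the four-class layout (three species plus background) and the reading of FIoU as frequency-weighted IoU are assumptions, and the matrix values are placeholders.

```python
# Sketch: pixel accuracy, mean IoU, and frequency-weighted IoU from a confusion matrix.
import numpy as np

def segmentation_metrics(conf):
    """conf[i, j] = number of pixels of true class i predicted as class j."""
    tp = np.diag(conf).astype(float)
    pixel_acc = tp.sum() / conf.sum()
    iou = tp / (conf.sum(axis=1) + conf.sum(axis=0) - tp)   # per-class IoU
    freq = conf.sum(axis=1) / conf.sum()                    # class pixel frequencies
    return pixel_acc, iou.mean(), (freq * iou).sum()

conf = np.array([[900, 20, 10, 5],      # rows: true class, cols: predicted class
                 [15, 850, 30, 5],
                 [10, 25, 800, 10],
                 [5, 5, 10, 700]])
acc, miou, fwiou = segmentation_metrics(conf)
print(f"pixel acc {acc:.3f}, mIoU {miou:.3f}, FWIoU {fwiou:.3f}")
```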

A Comparative Study of Deep Learning Techniques for Alzheimer's disease Detection in Medical Radiography

  • Amal Alshahrani;Jenan Mustafa;Manar Almatrafi;Layan Albaqami;Raneem Aljabri;Shahad Almuntashri
    • International Journal of Computer Science & Network Security / v.24 no.5 / pp.53-63 / 2024
  • Alzheimer's disease is a brain disorder that worsens over time and affects millions of people around the world. It leads to a gradual deterioration in memory, thinking ability, and behavioral and social skills until the person loses the ability to function in society. Technological progress in medical imaging and the use of artificial intelligence have made it possible to detect Alzheimer's disease from medical images such as magnetic resonance imaging (MRI). Deep learning algorithms, especially convolutional neural networks (CNNs), have shown great success in analyzing medical images for disease diagnosis and classification, since CNNs can recognize patterns and objects in images, which makes them well suited for this study. In this paper, we compare the performance of Alzheimer's disease detection using two deep learning methods: You Only Look Once (YOLO), a CNN-based object detection algorithm, and VGG16 (Visual Geometry Group), a deep convolutional neural network primarily used for image classification. We compare the results of these modern models instead of using a plain CNN as in previous research. The results showed different levels of accuracy for the various versions of YOLO and for the VGG16 model. YOLO v5 reached 56.4% accuracy at 50 epochs and 61.5% accuracy at 100 epochs. YOLO v8, used for classification, reached 84% overall accuracy at 100 epochs, and YOLO v9, used for object detection, reached an overall accuracy of 84.6%. The VGG16 model reached 99% training accuracy after 25 epochs but only 78% testing accuracy. Hence, the best model overall is YOLO v9, with the highest overall accuracy of 86.1%.
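
The two training pipelines being compared can be sketched as follows, using the public Ultralytics and Keras APIs; the dataset path, image size, class count, and head layers are placeholders rather than the authors' configuration.

```python
# Sketch of the two pipelines: YOLO classification vs. VGG16 transfer learning.
from ultralytics import YOLO
import tensorflow as tf

# YOLOv8 classification model trained on an MRI dataset organised in class folders.
yolo_cls = YOLO("yolov8n-cls.pt")
yolo_cls.train(data="alzheimer_mri", epochs=100, imgsz=224)   # "alzheimer_mri" is a placeholder path

# VGG16 transfer learning for the same task (4 severity classes assumed).
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   pooling="avg", input_shape=(224, 224, 3))
base.trainable = False
vgg = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])
vgg.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# vgg.fit(train_ds, validation_data=val_ds, epochs=25)
```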

Predicting restraining effects in CFS channels: A machine learning approach

  • Seyed Mohammad Mojtabaei;Rasoul Khandan;Iman Hajirasouliha
    • Steel and Composite Structures / v.51 no.4 / pp.441-456 / 2024
  • This paper aims to develop Machine Learning (ML) algorithms to predict the buckling resistance of cold-formed steel (CFS) channels with restrained flanges, widely used in typical CFS sheathed wall panels, and to provide practical design tools for engineers. The effects of cross-sectional restraints were first evaluated on the elastic buckling behaviour of CFS channels subjected to pure axial compressive load or bending moment. Feedforward multi-layer Artificial Neural Networks (ANNs) were then trained on different datasets comprising CFS channels with various dimensions, properties, plate thicknesses, and restraining conditions on one or two flanges, while the elastic distortional buckling resistance of the elements was determined according to the Finite Strip Method (FSM). To develop less biased networks and ensure that every observation from the original dataset has a chance of appearing in the training and test sets, a K-fold cross-validation technique was implemented. In addition, the hyperparameters of the ANNs were tuned using a grid search technique to achieve optimum performance. The results demonstrated that the trained ANNs were able to predict the elastic distortional buckling resistance of CFS flange-restrained elements with an average accuracy of 99% in terms of the coefficient of determination. The developed models were then used to propose a simple ANN-based design formula for predicting the elastic distortional buckling stress of CFS flange-restrained elements. Finally, the proposed formula was evaluated on a separate set of unseen data to confirm its accuracy for practical applications.
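
The training scheme (feedforward ANN, K-fold cross-validation, grid search) can be sketched with scikit-learn as below; the synthetic features and the hyperparameter grid are placeholders, and the real inputs would be the channel dimensions, thicknesses, and restraint conditions with FSM buckling stresses as targets.

```python
# Sketch: ANN regressor tuned by grid search with K-fold cross-validation.
import numpy as np
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 6))            # e.g., web/flange/lip dimensions, thickness, restraint flags
y = rng.uniform(size=500)                 # elastic distortional buckling stress (placeholder targets)

pipe = make_pipeline(StandardScaler(), MLPRegressor(max_iter=2000, random_state=0))
param_grid = {
    "mlpregressor__hidden_layer_sizes": [(32,), (64, 32)],
    "mlpregressor__alpha": [1e-4, 1e-3],
}
search = GridSearchCV(pipe, param_grid,
                      cv=KFold(n_splits=5, shuffle=True, random_state=0),
                      scoring="r2")
search.fit(X, y)
print(search.best_params_, search.best_score_)   # coefficient of determination (R^2)
```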