• Title/Summary/Keyword: Improved deep learning

Study on 2D Sprite Generation Using the Impersonator Network

  • Yongjun Choi;Beomjoo Seo;Shinjin Kang;Jongin Choi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.7
    • /
    • pp.1794-1806
    • /
    • 2023
  • This study presents a method for capturing photographs of users as input and converting them into 2D character animation sprites using a generative adversarial network (GAN)-based artificial intelligence network. Traditionally, 2D character animations have been created by manually drawing an entire sequence of sprite images, which incurs high development costs. To address this issue, this study proposes a technique that combines motion videos and sample 2D images. In the 2D sprite generation process that uses the proposed technique, a sequence of images is extracted from real-life footage captured by the user, and these are combined with character images from within the game. Our research aims to leverage cutting-edge deep learning-based image manipulation techniques, such as the GAN-based motion transfer network (impersonator) and background noise removal (U2-Net), to generate a sequence of animation sprites from a single image. The proposed technique enables the creation of diverse animations and motions from just one image. By utilizing these advancements, we focus on enhancing productivity in the game and animation industry through improved efficiency and streamlined production processes. By employing state-of-the-art techniques, our research enables the generation of 2D sprite images with various motions, offering significant potential for boosting productivity and creativity in the industry.
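The final compositing step of such a pipeline, blending the motion-transferred character over a clean background using the soft mask from background removal, can be sketched in plain Python. The pixel tuples and mask values below are hypothetical stand-ins for the network outputs, not the paper's implementation:

```python
def composite(foreground, mask, background):
    """Alpha-composite a character layer over a background using a soft mask.

    foreground, background: lists of (r, g, b) pixel tuples
    mask: list of alpha values in [0, 1], e.g. from a U2-Net-style network
    """
    out = []
    for (fr, fg, fb), (br, bg, bb), a in zip(foreground, background, mask):
        out.append((
            round(a * fr + (1 - a) * br),
            round(a * fg + (1 - a) * bg),
            round(a * fb + (1 - a) * bb),
        ))
    return out

# A 2-pixel toy image: one fully opaque character pixel, one fully transparent.
fg = [(255, 0, 0), (255, 0, 0)]
bg = [(0, 0, 255), (0, 0, 255)]
print(composite(fg, [1.0, 0.0], bg))  # [(255, 0, 0), (0, 0, 255)]
```

In practice the mask would be predicted per pixel by the background-removal network, one alpha value per sprite-frame pixel.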

FGW-FER: Lightweight Facial Expression Recognition with Attention

  • Huy-Hoang Dinh;Hong-Quan Do;Trung-Tung Doan;Cuong Le;Ngo Xuan Bach;Tu Minh Phuong;Viet-Vu Vu
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.9
    • /
    • pp.2505-2528
    • /
    • 2023
  • The field of facial expression recognition (FER) has been actively researched to improve human-computer interaction. In recent years, deep learning techniques have gained popularity for addressing FER, with numerous studies proposing end-to-end frameworks that stack or widen significant convolutional neural network layers. While this has led to improved performance, it has also resulted in larger model sizes and longer inference times. To overcome this challenge, our work introduces a novel lightweight model architecture. The architecture incorporates three key factors: Depth-wise Separable Convolution, Residual Block, and Attention Modules. By doing so, we aim to strike a balance between model size, inference speed, and accuracy in FER tasks. Through extensive experimentation on popular benchmark FER datasets, our proposed method has demonstrated promising results. Notably, it stands out due to its substantial reduction in parameter count and faster inference time, while maintaining accuracy levels comparable to other lightweight models discussed in the existing literature.
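The parameter savings behind depthwise separable convolution, one of the three factors named above, can be verified with simple arithmetic. The layer sizes here are illustrative, not the paper's:

```python
def conv_params(k, c_in, c_out):
    # Standard convolution: one k x k filter per (input, output) channel pair.
    return k * k * c_in * c_out

def ds_conv_params(k, c_in, c_out):
    # Depthwise: one k x k filter per input channel;
    # pointwise: a 1 x 1 convolution that mixes channels.
    return k * k * c_in + c_in * c_out

std = conv_params(3, 128, 256)    # 294912 weights
ds = ds_conv_params(3, 128, 256)  # 1152 + 32768 = 33920 weights
print(std, ds, round(std / ds, 1))  # 294912 33920 8.7
```

This roughly 9x reduction per layer is what lets such architectures cut parameter count and inference time while the residual and attention blocks recover accuracy.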

Analyzing the Influence of Spatial Sampling Rate on Three-dimensional Temperature-field Reconstruction

  • Shenxiang Feng;Xiaojian Hao;Tong Wei;Xiaodong Huang;Pan Pei;Chenyang Xu
    • Current Optics and Photonics
    • /
    • v.8 no.3
    • /
    • pp.246-258
    • /
    • 2024
  • In aerospace and energy engineering, the reconstruction of three-dimensional (3D) temperature distributions is crucial. Traditional methods like algebraic iterative reconstruction and filtered back-projection depend on voxel division for resolution. Our algorithm, blending deep learning with computer graphics rendering, converts 2D projections into light rays for uniform sampling, using a fully connected neural network to represent the 3D temperature field. Although effective in capturing internal details, it demands multiple cameras for projections from varied angles, increasing cost and computational needs. We assess the impact of camera count on reconstruction accuracy and efficiency, conducting butane-flame simulations with different camera setups (6 to 18 cameras). The results show improved accuracy with more cameras, with 12 cameras achieving optimal computational efficiency (1.263) and low error rates. Verification experiments with 9, 12, and 15 cameras, using thermocouples, confirm the 12-camera setup as the best, balancing efficiency and accuracy. This offers a feasible, cost-effective solution for real-world applications like engine testing and environmental monitoring, improving accuracy and resource management in temperature measurement.
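The ray-sampling idea, turning a 2D projection pixel into uniformly spaced 3D sample points along its viewing ray, can be sketched as follows. The camera pose and sampling range are hypothetical, not taken from the paper:

```python
def sample_ray(origin, direction, near, far, n):
    """Uniformly sample n 3D points along a pixel's ray between near and far."""
    pts = []
    for i in range(n):
        t = near + (far - near) * i / (n - 1)
        pts.append(tuple(o + t * d for o, d in zip(origin, direction)))
    return pts

# A ray from a hypothetical camera at the origin looking along +z.
pts = sample_ray((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), near=1.0, far=2.0, n=5)
print(pts[0], pts[-1])  # (0.0, 0.0, 1.0) (0.0, 0.0, 2.0)
```

Each sampled point would then be fed to the fully connected network, which predicts the local temperature at that 3D location.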

Approach to diagnosing multiple abnormal events with single-event training data

  • Ji Hyeon Shin;Seung Gyu Cho;Seo Ryong Koo;Seung Jun Lee
    • Nuclear Engineering and Technology
    • /
    • v.56 no.2
    • /
    • pp.558-567
    • /
    • 2024
  • Diagnostic support systems are being researched to assist operators in identifying and responding to abnormal events in a nuclear power plant. Most studies to date have considered single abnormal events only, for which it is relatively straightforward to obtain data to train the deep learning model of the diagnostic support system. However, cases in which multiple abnormal events occur must also be considered, for which obtaining training data becomes difficult due to the large number of combinations of possible abnormal events. This study proposes an approach to maintain diagnostic performance for multiple abnormal events by training a deep learning model with data on single abnormal events only. The proposed approach is applied to an existing algorithm that can perform feature selection and multi-label classification. We choose an extremely randomized trees classifier to select dedicated monitoring parameters for target abnormal events. In diagnosing each event occurrence independently, two-channel convolutional neural networks are employed as sub-models. The algorithm was tested in a case study with various scenarios, including single and multiple abnormal events. Results demonstrated that the proposed approach maintained diagnostic performance for 15 single abnormal events and significantly improved performance for 105 multiple abnormal events compared to the base model.
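The combination scheme, running one binary sub-model per abnormal event and reporting every event whose score crosses a threshold, is what lets single-event training generalize to simultaneous events. A minimal sketch, where the lambda classifiers and parameter names are hypothetical stand-ins for the two-channel CNN sub-models:

```python
def diagnose(signals, sub_models, threshold=0.5):
    """Diagnose each abnormal event independently with its own binary
    sub-model, so events occurring together are still detected even though
    each model only ever saw single-event training data."""
    return [event for event, model in sub_models.items()
            if model(signals) >= threshold]

# Hypothetical stand-in sub-models keyed by their target abnormal event,
# each watching its own dedicated monitoring parameters.
sub_models = {
    "pump_trip": lambda s: 1.0 if s["flow"] < 0.5 else 0.0,
    "tube_leak": lambda s: 1.0 if s["pressure"] < 0.8 else 0.0,
}
print(diagnose({"flow": 0.2, "pressure": 0.6}, sub_models))
# ['pump_trip', 'tube_leak']  -- both concurrent events flagged
```

Because no sub-model ever needs to see a combination during training, the 105 multiple-event cases are covered by the same 15 single-event models.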

Development of an Optimal Convolutional Neural Network Backbone Model for Personalized Rice Consumption Monitoring in Institutional Food Service using Feature Extraction

  • Young Hoon Park;Eun Young Choi
    • The Korean Journal of Food And Nutrition
    • /
    • v.37 no.4
    • /
    • pp.197-210
    • /
    • 2024
  • This study aims to develop a deep learning model to monitor rice serving amounts in institutional foodservice, enhancing personalized nutrition management. The goal is to identify the best convolutional neural network (CNN) for detecting rice quantities on serving trays, addressing balanced dietary intake challenges. Both a vanilla CNN and 12 pre-trained CNNs were tested, using features extracted from images of varying rice quantities on white trays. Configurations included optimizers, image generation, dropout, feature extraction, and fine-tuning, with top-1 validation accuracy as the evaluation metric. The vanilla CNN achieved 60% top-1 validation accuracy, while pre-trained CNNs significantly improved performance, reaching up to 90% accuracy. MobileNetV2, suitable for mobile devices, achieved a minimum 76% accuracy. These results suggest the model can effectively monitor rice servings, with potential for improvement through ongoing data collection and training. This development represents a significant advancement in personalized nutrition management, with high validation accuracy indicating its potential utility in dietary management. Continuous improvement based on expanding datasets promises enhanced precision and reliability, contributing to better health outcomes.
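The top-1 validation accuracy metric used above is straightforward to compute: a prediction counts as correct only when the highest-probability class matches the true label. The probabilities and labels here are toy values, not the study's data:

```python
def top1_accuracy(probs, labels):
    """Fraction of samples whose highest-probability class matches the label."""
    correct = sum(max(range(len(p)), key=p.__getitem__) == y
                  for p, y in zip(probs, labels))
    return correct / len(labels)

# Toy predictions over 3 rice-quantity classes for 4 tray images.
probs = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4], [0.5, 0.4, 0.1]]
labels = [0, 1, 0, 0]
print(top1_accuracy(probs, labels))  # 0.75
```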

Integration of computer-based technology in smart environment in EFL structures

  • Cao, Yan;AlKubaisy, Zenah M.
    • Smart Structures and Systems
    • /
    • v.29 no.2
    • /
    • pp.375-387
    • /
    • 2022
  • One of the latest teaching strategies is smart classroom teaching. Teaching is carried out with the assistance of smart teaching technologies to improve teacher-student contact, increase students' learning autonomy, and provide fresh ideas for the fulfillment of students' deep learning. Computer-based technology has improved students' language learning and significantly motivated them to continue learning, while also stimulating their creativity and enthusiasm. However, the difficulties and barriers that many EFL instructors face when seeking to integrate information and communication technology (ICT) into their instruction have raised discussions and concerns regarding ICT's real worth in the language classroom. This is a case study that includes classroom observations, field notes, interviews, and written materials. In EFL classrooms, both computer-based and non-computer-based activities were recorded and analyzed. The main instrument in this study was a survey questionnaire comprising 43 items, used to examine the efficiency of ICT integration in teaching and learning in public schools in Kuala Lumpur. A total of 101 questionnaires were delivered, with each respondent requested to read the statements provided. The respondents were 101 teachers from Kuala Lumpur's public secondary schools, and the questionnaire was randomly distributed to respondents with a teaching background. This study demonstrated the accuracy of utilizing Teaching-Learning-Based Optimization (TLBO) in analyzing the survey results and the potential for students to learn English as a foreign language using computers. Also, the use of a foreign language may improve if real computer-based activities are introduced into the lesson.
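For reference, Teaching-Learning-Based Optimization iterates a teacher phase (pulling learners toward the current best solution and away from the class mean) and a learner phase (pairwise learning between students). A minimal sketch on a toy minimization problem, assuming standard TLBO update rules rather than the survey-analysis setup used in the paper:

```python
import random

def tlbo(f, dim, pop=20, iters=100, lo=-5.0, hi=5.0, seed=0):
    """Minimal Teaching-Learning-Based Optimization for minimizing f."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(iters):
        mean = [sum(x[d] for x in X) / pop for d in range(dim)]
        teacher = min(X, key=f)
        for i in range(pop):
            # Teacher phase: move toward the teacher, away from the mean.
            tf = rng.choice([1, 2])  # teaching factor
            cand = [X[i][d] + rng.random() * (teacher[d] - tf * mean[d])
                    for d in range(dim)]
            if f(cand) < f(X[i]):
                X[i] = cand
            # Learner phase: learn from (or move away from) a random peer.
            j = rng.randrange(pop)
            if j != i:
                sign = 1 if f(X[j]) < f(X[i]) else -1
                cand = [X[i][d] + sign * rng.random() * (X[j][d] - X[i][d])
                        for d in range(dim)]
                if f(cand) < f(X[i]):
                    X[i] = cand
    return min(X, key=f)

best = tlbo(lambda x: sum(v * v for v in x), dim=2)
print(sum(v * v for v in best))  # converges near the optimum at the origin
```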

Change Detection for High-resolution Satellite Images Using Transfer Learning and Deep Learning Network (전이학습과 딥러닝 네트워크를 활용한 고해상도 위성영상의 변화탐지)

  • Song, Ah Ram;Choi, Jae Wan;Kim, Yong Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.37 no.3
    • /
    • pp.199-208
    • /
    • 2019
  • As the number of available satellites increases and technology advances, image information outputs are becoming increasingly diverse and a large amount of data is accumulating. In this study, we propose a change detection method for high-resolution satellite images that uses transfer learning and a deep learning network to overcome the limitation caused by insufficient training data via the use of pre-trained information. The deep learning network used in this study comprises convolutional layers to extract the spatial and spectral information and convolutional long short-term memory layers to analyze the time series information. To use the learned information, the two initial convolutional layers of the change detection network are designed to use learned values from 40,000 patches of the ISPRS (International Society for Photogrammetry and Remote Sensing) dataset as initial values. In addition, 2D (two-dimensional) and 3D (three-dimensional) kernels were used to find the optimized structure for the high-resolution satellite images. The experimental results for the KOMPSAT-3A (KOrean Multi-Purpose SATellite-3A) satellite images show that this change detection method can effectively extract changed/unchanged pixels but is less sensitive to changes due to shadow and relief displacements. In addition, the change detection accuracy of two sites was improved by using 3D kernels. This is because a 3D kernel can consider not only the spatial information but also the spectral information. This study indicates that we can effectively detect changes in high-resolution satellite images using the constructed image information and deep learning network. In future work, a pre-trained change detection network will be applied to newly obtained images to extend the scope of the application.
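The transfer-learning initialization described above, seeding the first two convolutional layers with pre-trained values while the remaining layers start fresh, can be sketched with plain dictionaries standing in for framework state dicts. The layer names and values are hypothetical:

```python
def init_from_pretrained(model_state, pretrained_state, layers):
    """Initialize selected layers of a change-detection network from a
    network pre-trained on another dataset (e.g. ISPRS patches); all other
    layers keep their fresh initialization."""
    for name in layers:
        if name in pretrained_state:
            model_state[name] = pretrained_state[name]
    return model_state

# Toy weight dictionaries standing in for real framework state dicts.
fresh = {"conv1": [0.0], "conv2": [0.0], "convlstm": [0.0]}
pretrained = {"conv1": [0.8], "conv2": [0.5], "head": [0.9]}
print(init_from_pretrained(fresh, pretrained, ["conv1", "conv2"]))
# {'conv1': [0.8], 'conv2': [0.5], 'convlstm': [0.0]}
```

Only the layers named in `layers` are transferred, so the convolutional LSTM layers still learn the time-series behavior from the target satellite data.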

DNN-based LTE Signal Propagation Modelling for Positioning Fingerprint DB Generation

  • Kwon, Jae Uk;Cho, Seong Yun
    • Journal of Positioning, Navigation, and Timing
    • /
    • v.10 no.1
    • /
    • pp.55-66
    • /
    • 2021
  • In this paper, we propose a signal propagation modeling technique for generating a positioning fingerprint DB based on Long Term Evolution (LTE) signals. When a DB is created based on the location-based signal information collected in an urban area, gaps occur in the DB due to uncollected areas. The spatial interpolation method for filling these gaps has limitations. In addition, the existing gap-filling technique based on signal propagation modeling considers only the signal attenuation characteristics with distance and so does not reflect the direction-dependent attenuation characteristics that occur in urban areas. To solve this problem, this paper proposes a Deep Neural Network (DNN)-based signal propagation modeling technique that considers distance and direction together. To verify the performance of this technique, an experiment was conducted in Seocho-gu, Seoul. Based on the acquired signals, signal propagation characteristics were modeled for each method, and the Root Mean Squared Error (RMSE) was calculated using the verification data for comparative analysis. As a result, the proposed technique improved on the existing signal propagation model by about 4.284 dBm. This confirms that the DNN-based signal propagation model proposed in this paper performs well, and the positioning performance is expected to improve based on the fingerprint DB generated through it.
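A simple illustration of why direction matters: a log-distance path-loss model can be extended with a per-sector gain term, which is the kind of direction-dependent behavior the DNN learns from data. The transmit power, path-loss exponent, and sector gains below are hypothetical, not measured values from the paper:

```python
import math

def predict_rss(tx_power, n, d, bearing, direction_gain):
    """Log-distance path loss plus a per-sector gain capturing the
    direction-dependent attenuation seen in urban areas."""
    sector = int(bearing // 45) % 8  # 8 coarse direction sectors
    return tx_power - 10 * n * math.log10(d) + direction_gain[sector]

def rmse(pred, obs):
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

# Hypothetical parameters: 30 dBm transmit power, path-loss exponent 3,
# flat direction gains (a distance-only model as the baseline).
gains = [0.0] * 8
pred = [predict_rss(30, 3, d, b, gains) for d, b in [(100, 10), (200, 90)]]
obs = [-31.0, -40.0]
print([round(p, 1) for p in pred], round(rmse(pred, obs), 2))
# [-30.0, -39.0] 0.98
```

Fitting the eight `gains` entries from drive-test data would be the hand-crafted analogue of what the DNN learns jointly over distance and direction.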

Image Enhancement for Visual SLAM in Low Illumination (저조도 환경에서 Visual SLAM을 위한 이미지 개선 방법)

  • Donggil You;Jihoon Jung;Hyeongjun Jeon;Changwan Han;Ilwoo Park;Junghyun Oh
    • The Journal of Korea Robotics Society
    • /
    • v.18 no.1
    • /
    • pp.66-71
    • /
    • 2023
  • As cameras have become primary sensors for mobile robots, vision-based Simultaneous Localization and Mapping (SLAM) has achieved impressive results with the recent development of computer vision and deep learning. However, vision information has the disadvantage that much of it disappears in a low-light environment. To overcome this problem, we propose an image enhancement method for performing visual SLAM in low-light environments. Using deep generative adversarial models and modified gamma correction, the quality of low-light images is improved. The proposed method is less sharp than the existing method, but it can be applied to ORB-SLAM in real time by dramatically reducing the amount of computation. The experimental results prove the validity of the proposed method through application to the public TUM and VIVID++ datasets.
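Plain gamma correction, the starting point for the modified variant used above, lifts dark pixel values far more than bright ones, which is why it helps in low illumination. The pixel values and gamma below are illustrative:

```python
def gamma_correct(pixels, gamma):
    """Brighten an image: normalized intensity raised to 1/gamma with
    gamma > 1 lifts dark values while keeping the range [0, 255]."""
    return [round(255 * (p / 255) ** (1 / gamma)) for p in pixels]

dark = [10, 40, 90, 200]
print(gamma_correct(dark, 2.2))  # dark pixels are lifted the most
```

The power-law transform is cheap per pixel, which is consistent with the real-time constraint: unlike a full enhancement network, it adds almost no computation in front of ORB-SLAM.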

Development of Fire Detection System using YOLOv8 (YOLOv8을 이용한 화재 검출 시스템 개발)

  • Chae Eun Lee;Chun-Su Park
    • Journal of the Semiconductor & Display Technology
    • /
    • v.23 no.1
    • /
    • pp.19-24
    • /
    • 2024
  • It is no exaggeration to say that a single fire can cause enormous damage, so fires are among the disaster situations for which an alert must be raised as soon as possible. Various technologies have been utilized because preventing and detecting fires can never be fully accomplished through individual human effort alone. Recently, deep learning technology has advanced, and fire detection systems using object detection neural networks are being actively studied. In this paper, we propose a new fire detection system that improves upon a previously studied fire detection system. We train the YOLOv8 model using datasets refined through improved labeling methods, derive results, and demonstrate the superiority of the proposed system by comparing it with the results of previous studies.
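When comparing detection results against previous studies, predicted boxes are typically matched to ground-truth labels by Intersection-over-Union. A minimal sketch of that standard criterion, with toy box coordinates:

```python
def iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2),
    the standard criterion for matching predicted fire boxes to labels."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 = 0.142857...
```

A prediction is usually counted as a true positive when its IoU with a label exceeds a threshold such as 0.5, which is how precision and recall for the detector are scored.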
