• Title/Summary/Keyword: CNNs

A Manually Captured and Modified Phone Screen Image Dataset for Widget Classification on CNNs

  • Byun, SungChul;Han, Seong-Soo;Jeong, Chang-Sung
    • Journal of Information Processing Systems / v.18 no.2 / pp.197-207 / 2022
  • The applications and user interfaces (UIs) of smart mobile devices are constantly diversifying, and deep learning offers an innovative way to classify the widgets in screen images and thereby improve user convenience. To this end, the present research combines manually captured screen images with the ReDraw dataset to build deep learning datasets for image classification. First, to validate the datasets, experiments with ResNet50 and EfficientNet show that the dataset composed in this study supports classification according to a widget's functionality. Widget detection and classification are then implemented with RetinaNet and EfficientNet. Finally, the research presents the Widg-C and Widg-D datasets, deep learning datasets for identifying the widgets of smart devices, and applies them to representative convolutional neural network models.
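
As a rough illustration of the validation step this abstract describes (fine-tuning an ImageNet-pretrained classifier on a widget dataset), the following PyTorch sketch shows the usual pattern; the class count, learning rate, and input size are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical widget label count; the actual Widg-C/Widg-D label sets
# are defined in the paper, not here.
NUM_WIDGET_CLASSES = 10

# Start from an ImageNet-pretrained ResNet50 (torchvision >= 0.13) and
# replace the final fully connected layer with a widget-classification head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_WIDGET_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    # images: (N, 3, 224, 224) widget crops; labels: (N,) class indices.
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Swapping the backbone for one of torchvision's EfficientNet variants follows the same replace-the-head pattern.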

Scaling Up Face Masks Classification Using a Deep Neural Network and Classical Method Inspired Hybrid Technique

  • Kumar, Akhil;Kalia, Arvind;Verma, Kinshuk;Sharma, Akashdeep;Kaushal, Manisha;Kalia, Aayushi
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.11 / pp.3658-3679 / 2022
  • Classification of persons wearing and not wearing face masks in images has emerged as a new computer vision problem during the COVID-19 pandemic. To address this problem and scale up research in this domain, this paper proposes a hybrid technique that employs ResNet-101 as a deep feature extractor together with a multi-layer perceptron (MLP) classifier. The proposed technique is tested and validated on a self-created face mask classification dataset and a standard dataset. On the self-created dataset, the proposed technique achieved a classification accuracy of 97.3%. To benchmark the proposed technique, six other state-of-the-art CNN feature extractors paired with six classical machine learning classifiers were tested and compared with it. The proposed technique achieved better classification accuracy and 1-6% higher precision, recall, and F1 score than the other tested deep feature extractors and machine learning classifiers.
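
The hybrid pattern this abstract describes, a frozen deep backbone feeding a classical classifier, can be sketched as follows; the MLP width and iteration budget are illustrative assumptions rather than the paper's settings.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.neural_network import MLPClassifier

# Truncate ResNet-101 before its classification head so it emits a
# 2048-dimensional deep feature vector per image.
backbone = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V2)
feature_extractor = nn.Sequential(*list(backbone.children())[:-1]).eval()

@torch.no_grad()
def extract_features(images):
    # images: (N, 3, 224, 224) tensor -> (N, 2048) feature matrix.
    return feature_extractor(images).flatten(1).numpy()

# Classical stage: an MLP trained on the frozen deep features.
clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=500)
# clf.fit(extract_features(train_images), train_labels)  # mask / no-mask
# preds = clf.predict(extract_features(test_images))
```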

TVM-based Performance Optimization for Image Classification in Embedded Systems

  • Cheonghwan Hur;Minhae Ye;Ikhee Shin;Daewoo Lee
    • IEMEK Journal of Embedded Systems and Applications / v.18 no.3 / pp.101-108 / 2023
  • Optimizing the performance of deep neural networks on embedded systems is a challenging task that requires efficient compilers and runtime systems. We propose a TVM-based approach that consists of three steps: quantization, auto-scheduling, and ahead-of-time compilation. Our approach reduces the computational complexity of models without significant loss of accuracy and generates optimized code for various hardware platforms. We evaluate our approach on three representative CNNs with the ImageNet dataset on an NVIDIA Jetson AGX Xavier board and show that it outperforms baseline methods in terms of processing speed.
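
A minimal sketch of the three-step flow named in this abstract, using TVM's relay and auto_scheduler modules; exact signatures vary across TVM releases, and the calibration scale, trial count, and log path below are placeholder assumptions.

```python
import tvm
from tvm import relay, auto_scheduler

def optimize(mod, params, target="cuda"):
    # Step 1: post-training quantization cuts compute without retraining
    # (calibration mode and global scale are placeholder choices).
    with relay.quantize.qconfig(calibrate_mode="global_scale", global_scale=8.0):
        mod = relay.quantize.quantize(mod, params)

    # Step 2: auto-scheduling searches for fast per-operator schedules.
    tasks, weights = auto_scheduler.extract_tasks(mod["main"], params, target)
    tuner = auto_scheduler.TaskScheduler(tasks, weights)
    tuner.tune(auto_scheduler.TuningOptions(
        num_measure_trials=2000,
        measure_callbacks=[auto_scheduler.RecordToFile("tuning.json")]))

    # Step 3: compile with the tuned schedules applied; the built module
    # can then be exported for ahead-of-time deployment on the board.
    with auto_scheduler.ApplyHistoryBest("tuning.json"):
        with tvm.transform.PassContext(
                opt_level=3,
                config={"relay.backend.use_auto_scheduler": True}):
            return relay.build(mod, target=target, params=params)
```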

FTSnet: A Simple Convolutional Neural Networks for Action Recognition

  • Zhao, Yulan;Lee, Hyo Jong
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.878-879 / 2021
  • Most state-of-the-art CNNs for action recognition are based on a two-stream architecture: an RGB frame stream represents the appearance, while an optical flow stream interprets the motion of the action. However, the cost of optical flow computation is very high, which increases action recognition latency. We introduce a design strategy for action recognition inspired by the two-stream network and the teacher-student architecture. Our neural network has two sub-networks: an optical flow sub-network as the teacher and an RGB frame sub-network as the student. In the training stage, we distill features from the teacher as a baseline to train the student sub-network. In the test stage, we use only the student, so latency is reduced because optical flow is never computed. Our experiments show that it has advantages over the two-stream architecture in both speed and performance.
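
The training-time distillation this abstract outlines can be sketched as a combined loss in PyTorch; the loss weighting and the assumption that teacher and student expose same-shaped features are illustrative choices, not details from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_feat, teacher_feat, student_logits, labels,
                      alpha=0.5):
    # Supervised action-recognition loss on the student's predictions.
    task_loss = F.cross_entropy(student_logits, labels)
    # Feature-matching loss pulls the RGB student's features toward the
    # frozen optical-flow teacher's features.
    feat_loss = F.mse_loss(student_feat, teacher_feat.detach())
    return task_loss + alpha * feat_loss

# Training step (teacher frozen; only the student is updated):
#   with torch.no_grad():
#       teacher_feat = teacher(optical_flow_clip)
#   student_feat, student_logits = student(rgb_clip)
#   loss = distillation_loss(student_feat, teacher_feat, student_logits, labels)
# At test time only student(rgb_clip) runs, so optical flow is never computed.
```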

Impact of Hull Condition and Propeller Surface Maintenance on Fuel Efficiency of Ocean-Going Vessels

  • Tien Anh Tran;Do Kyun Kim
    • Journal of Ocean Engineering and Technology / v.37 no.5 / pp.181-189 / 2023
  • The fuel consumption of marine diesel engines holds paramount importance in contemporary maritime transportation and shapes the energy efficiency strategies of ocean-going vessels. Nonetheless, a noticeable gap in knowledge prevails concerning the influence of ship hull conditions and propeller roughness on fuel consumption. This study bridges this gap by utilizing artificial intelligence techniques in Matlab, particularly convolutional neural networks (CNNs), to comprehensively investigate these factors. We propose a time-series prediction model, built on numerical simulations, for forecasting ship hull and propeller conditions. The model's accuracy was validated through a meticulous comparison of its predictions with actual ship hull and propeller conditions. Furthermore, we compared the predictive outcomes with navigational environmental factors, including wind speed, wave height, and ship loading conditions, using the fuzzy clustering method. This research's significance lies in its pivotal role as a foundation for fostering a more intricate understanding of energy consumption within the realm of maritime transport.
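
The paper works in Matlab, but the CNN-based time-series predictor it describes can be sketched in Python for illustration; the feature count, window length, and layer sizes below are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class HullConditionCNN(nn.Module):
    # Input: a sliding window of past operating measurements;
    # output: a predicted hull/propeller condition indicator.
    def __init__(self, n_features=4, window=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, 1)  # next-step condition estimate

    def forward(self, x):
        # x: (batch, n_features, window) windows of the time series.
        return self.head(self.conv(x).squeeze(-1))
```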

Analysis of Odor Data Based on Mixed Neural Network of CNNs and LSTM Hybrid Model

  • Sang-Bum Kim;Sang-Hyun Lee
    • International Journal of Advanced Culture Technology / v.11 no.4 / pp.464-469 / 2023
  • As modern society develops, the number of diseases caused by bad odors is increasing. Because odors can harm people's health, it is important to predict in advance the extent to which they may occur, inform the public, and take preventive measures. In this paper, we propose a hybrid CNN-LSTM neural network structure that can detect or predict the occurrence of odors, which is most needed in manufacturing and everyday life, using complex odor sensors. The proposed learning model receives four types of data from a complex odor sensor in real time, namely hydrogen sulfide, ammonia, benzene, and toluene, and applies these data to the inference model to detect and predict the odor state. The model's prediction accuracy was evaluated through accuracy-based performance indicators, and the evaluation showed an average performance of more than 94%.
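
A minimal sketch of a CNN-LSTM hybrid over the four sensor channels named in this abstract; the layer sizes and binary output are illustrative assumptions.

```python
import torch
import torch.nn as nn

class OdorCNNLSTM(nn.Module):
    def __init__(self, n_sensors=4, n_classes=2, hidden=64):
        super().__init__()
        # CNN stage extracts local patterns from each sensor window.
        self.conv = nn.Sequential(
            nn.Conv1d(n_sensors, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # LSTM stage models how those patterns evolve over time.
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)  # odor / no-odor state

    def forward(self, x):
        # x: (batch, n_sensors, time) real-time sensor readings
        # (hydrogen sulfide, ammonia, benzene, toluene).
        feats = self.conv(x).transpose(1, 2)   # -> (batch, time, 32)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])           # classify from last step
```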

Comparison of Deep Learning-based Unsupervised Domain Adaptation Models for Crop Classification

  • Kwak, Geun-Ho;Park, No-Wook
    • Korean Journal of Remote Sensing / v.38 no.2 / pp.199-213 / 2022
  • Unsupervised domain adaptation can solve the impractical issue of repeatedly collecting high-quality training data every year for annual crop classification. This study evaluates the applicability of deep learning-based unsupervised domain adaptation models for crop classification. Three unsupervised domain adaptation models, a deep adaptation network (DAN), a deep reconstruction-classification network, and a domain adversarial neural network (DANN), are quantitatively compared via a crop classification experiment using unmanned aerial vehicle images of Hapcheon-gun and Changnyeong-gun, the major garlic and onion cultivation areas in Korea. As source and target baseline models, convolutional neural networks (CNNs) are additionally applied to evaluate the classification performance of the unsupervised domain adaptation models. The three unsupervised domain adaptation models outperformed the source baseline CNN, but different classification performances were observed depending on the degree of inconsistency between the data distributions of the source and target images. The classification accuracy of DAN was higher than that of the other two models when the inconsistency between source and target images was low, whereas DANN had the best classification performance when the inconsistency was high. Therefore, the extent to which the data distributions of the source and target images match should be considered when selecting the best unsupervised domain adaptation model to generate reliable classification results.
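
Of the three compared models, DANN's core mechanism, the gradient reversal layer, is compact enough to sketch; the usage lines at the bottom assume hypothetical feature_net, label_head, and domain_head modules that are not from the paper.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Hypothetical usage inside a DANN-style model (feature_net, label_head,
# and domain_head are assumed nn.Modules, not components from the paper):
#   feats = feature_net(images)                        # shared features
#   class_loss = F.cross_entropy(label_head(feats[src]), crop_labels)
#   domain_loss = F.cross_entropy(domain_head(grad_reverse(feats)), domains)
#   (class_loss + domain_loss).backward()
# The reversed gradient trains the features to fool the domain head,
# aligning the labeled source and unlabeled target distributions.
```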

AI-based stuttering automatic classification method: Using a convolutional neural network

  • Jin Park;Chang Gyun Lee
    • Phonetics and Speech Sciences / v.15 no.4 / pp.71-80 / 2023
  • This study primarily aimed to develop an automated stuttering identification and classification method using artificial intelligence technology. In particular, this study aimed to develop a deep learning-based identification model utilizing the convolutional neural network (CNN) algorithm for Korean speakers who stutter. To this end, speech data were collected from 9 adults who stutter and 9 normally fluent speakers. The data were automatically segmented at the phrasal level using Google Cloud speech-to-text (STT), and labels such as 'fluent', 'blockage', 'prolongation', and 'repetition' were assigned to them. Mel-frequency cepstral coefficients (MFCCs) and a CNN-based classifier were used for detecting and classifying each type of stuttered disfluency. However, in the case of prolongation, only five instances were found, and this type was therefore excluded from the classifier model. Results showed that the accuracy of the CNN classifier was 0.96, and the F1-scores for classification performance were as follows: 'fluent' 1.00, 'blockage' 0.67, and 'repetition' 0.74. Although the effectiveness of the CNN-based automatic classifier for detecting stuttered disfluencies was validated, its performance was found to be inadequate, especially for the blockage and prolongation types. Consequently, establishing a large speech database that collects data by type of stuttered disfluency was identified as a necessary foundation for improving classification performance.
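
The MFCC-plus-CNN pipeline this abstract describes can be sketched as below; the sampling rate, coefficient count, and network layout are illustrative assumptions rather than the study's parameters.

```python
import librosa
import torch
import torch.nn as nn

def mfcc_features(wav_path, n_mfcc=13):
    # Load the segmented utterance and compute its (n_mfcc, frames)
    # time-frequency representation.
    y, sr = librosa.load(wav_path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)

class DisfluencyCNN(nn.Module):
    def __init__(self, n_classes=3):  # fluent / blockage / repetition
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):
        # x: (batch, 1, n_mfcc, frames) MFCC "images" of utterances.
        return self.net(x)
```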

A Parallel Deep Convolutional Neural Network for Alzheimer's disease classification on PET/CT brain images

  • Baydargil, Husnu Baris;Park, Jangsik;Kang, Do-Young;Kang, Hyun;Cho, Kook
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.9 / pp.3583-3597 / 2020
  • In this paper, a parallel deep learning model using a convolutional neural network and a dilated convolutional neural network is proposed to classify Alzheimer's disease with high accuracy in PET/CT images. The developed model consists of two pipelines: a conventional CNN pipeline and a dilated convolution pipeline. An input image is sent through both pipelines, and at the end of both pipelines the extracted features are concatenated and used for classifying Alzheimer's disease. The complementary abilities of the two networks provide better overall accuracy than single conventional CNNs on the dataset. Moreover, instead of performing binary classification, the proposed model performs three-class classification: Alzheimer's disease, mild cognitive impairment, and normal control. Using data received from Dong-a University, the model detects Alzheimer's disease with an accuracy of up to 95.51%.
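
The two-pipeline, concatenate-then-classify structure can be sketched as follows; channel counts and depths are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ParallelCNN(nn.Module):
    def __init__(self, n_classes=3):  # AD / MCI / normal control
        super().__init__()
        # Conventional convolution branch.
        self.conv_branch = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Dilated convolutions enlarge the receptive field without pooling.
        self.dilated_branch = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=2, dilation=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        # x: (batch, 1, H, W) PET/CT slice; both branches see the same input,
        # and their features are concatenated before classification.
        feats = torch.cat([self.conv_branch(x), self.dilated_branch(x)], dim=1)
        return self.classifier(feats)
```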

Artificial Intelligence based Tumor detection System using Computational Pathology

  • Naeem, Tayyaba;Qamar, Shamweel;Park, Peom
    • Journal of the Korean Society of Systems Engineering / v.15 no.2 / pp.72-78 / 2019
  • Pathology is the motor that drives healthcare to understand diseases. The way pathologists diagnose diseases, manual observation of specimens under a microscope, has been used for the last 150 years; it is time for a change. This paper focuses specifically on tumor detection using deep learning techniques. Pathologists take specimen slides from a specific portion of the body (e.g., the liver, breast, or prostate region) and examine them under the microscope to identify the affected cells among all the normal cells. This process is time-consuming and not sufficiently accurate, so there is a need for a system that can detect tumors automatically and in less time. The solution to this problem is computational pathology: an approach that examines tissue data obtained through whole-slide imaging using modern image analysis algorithms and extracts clinically relevant information from these data. Artificial intelligence models such as machine learning and deep learning are used at the molecular level to generate diagnostic inferences and predictions, and this clinically actionable knowledge is presented to pathologists through dynamic and integrated reports, enabling physicians, laboratory personnel, and the wider health care system to make the best possible medical decisions. I discuss techniques for automated tumor detection within the new discipline of computational pathology, which will be useful for the future practice of pathology and, more broadly, for medical practice in general.
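
Whole-slide images are far too large for a single CNN forward pass, so computational-pathology pipelines typically tile them into patches and aggregate per-patch predictions. Here is a hedged sketch of that generic workflow; the model, tile size, and heatmap scheme are illustrative assumptions, not the paper's system.

```python
import torch

@torch.no_grad()
def tumor_heatmap(slide, model, tile=256):
    # slide: (3, H, W) tensor of the scanned tissue region; model is an
    # assumed, already-trained binary (tumor vs. normal) patch classifier.
    _, H, W = slide.shape
    heatmap = torch.zeros(H // tile, W // tile)
    for i in range(H // tile):
        for j in range(W // tile):
            patch = slide[:, i*tile:(i+1)*tile, j*tile:(j+1)*tile]
            logits = model(patch.unsqueeze(0))          # (1, 2)
            heatmap[i, j] = logits.softmax(-1)[0, 1]    # tumor probability
    # High-scoring tiles flag regions for pathologist review.
    return heatmap
```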