• Title/Summary/Keyword: convolutional network


COVID-19 Lung CT Image Recognition (COVID-19 폐 CT 이미지 인식)

  • Su, Jingjie;Kim, Kang-Chul
    • The Journal of the Korea institute of electronic communication sciences, v.17 no.3, pp.529-536, 2022
  • In the past two years, Severe Acute Respiratory Syndrome Coronavirus-2 (SARS-CoV-2) has affected more and more people. This paper proposes a novel U-Net convolutional neural network to classify and segment COVID-19 lung CT images; it contains a Sub Coding Block (SCB), Atrous Spatial Pyramid Pooling (ASPP), and an Attention Gate (AG). Three other models, FCN, U-Net, and U-Net-SCB, are designed for comparison with the proposed model, and the best optimizer and atrous rate are chosen for it. The simulation results show that, with an atrous rate of 12 and the Adam optimizer, the proposed U-Net-MMFE achieves the best Dice segmentation coefficient, 94.79%, on the COVID-19 CT scan image dataset compared with the other segmentation models.
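The Dice coefficient used to score the segmentation above can be illustrated with a minimal numpy sketch; this is the standard binary-mask formula on toy data, not the authors' implementation:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: 3 of the 4 predicted pixels overlap the 3 true pixels.
pred = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
true = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
score = dice_coefficient(pred, true)  # 2*3 / (4+3) = 6/7
```

A perfect prediction gives a Dice of 1.0, so the paper's 94.79% indicates near-complete overlap with the ground-truth lesion masks.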

Age and Gender Classification with Small Scale CNN (소규모 합성곱 신경망을 사용한 연령 및 성별 분류)

  • Jamoliddin, Uraimov;Yoo, Jae Hung
    • The Journal of the Korea institute of electronic communication sciences, v.17 no.1, pp.99-104, 2022
  • Artificial intelligence is becoming a crucial part of our lives thanks to its remarkable benefits. Machines outperform humans in recognizing objects in images, particularly in classifying people into the correct age and gender groups. Accordingly, age and gender classification has been one of the hot topics among computer vision researchers in recent decades, and deep Convolutional Neural Network (CNN) models have achieved state-of-the-art performance. However, most CNN-based architectures are very complex, with large numbers of training parameters, so they require much computation time and many resources. For this reason, we propose a new CNN-based classification algorithm with significantly fewer training parameters and less training time than existing methods. Despite its lower complexity, our model shows better age and gender classification accuracy on the UTKFace dataset.
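The parameter-count argument can be made concrete by counting the trainable weights of a toy network; the layer sizes below are hypothetical placeholders, not the authors' architecture:

```python
def conv2d_params(in_ch, out_ch, k, bias=True):
    """Trainable parameters of a 2-D conv layer: k*k*in_ch*out_ch weights (+ out_ch biases)."""
    return k * k * in_ch * out_ch + (out_ch if bias else 0)

def dense_params(in_f, out_f, bias=True):
    """Trainable parameters of a fully connected layer."""
    return in_f * out_f + (out_f if bias else 0)

# Hypothetical small CNN: two 3x3 conv layers (3→16→32 channels),
# a 128-unit dense layer on 8x8 feature maps, and a 2-way output head.
total = (conv2d_params(3, 16, 3)          # 448
         + conv2d_params(16, 32, 3)       # 4,640
         + dense_params(32 * 8 * 8, 128)  # 262,272
         + dense_params(128, 2))          # 258
```

Even in this sketch the dense head dominates the count, which is why small-scale CNNs typically shrink or replace fully connected layers.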

Electric Power Demand Prediction Using Deep Learning Model with Temperature Data (기온 데이터를 반영한 전력수요 예측 딥러닝 모델)

  • Yoon, Hyoup-Sang;Jeong, Seok-Bong
    • KIPS Transactions on Software and Data Engineering, v.11 no.7, pp.307-314, 2022
  • Recently, research using deep-learning-based models has been actively conducted to replace statistics-based time series forecasting techniques for predicting electric power demand. Analysis of this research shows that the performance of LSTM-based prediction models is acceptable but not sufficient for long-term, region-wide power demand prediction. In this paper, we propose a WaveNet deep learning model that predicts electric power demand 24 hours ahead using temperature data, aiming for prediction accuracy better than the 2% MAPE that statistics-based time series forecasting techniques can achieve. First, we describe WaveNet's dilated causal one-dimensional convolutional neural network architecture and the preprocessing of the electric power demand and temperature input data. Second, we present the training process and walk-forward validation with the modified WaveNet. The performance comparison shows that the prediction model with temperature data achieves a MAPE of 1.33%, better than the MAPE of the same model without temperature data (2.33%).
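A dilated causal 1-D convolution (WaveNet's core operation) and the MAPE metric can be sketched in plain numpy under their usual definitions; this is an illustration, not the paper's implementation:

```python
import numpy as np

def dilated_causal_conv1d(x, weights, dilation):
    """Causal 1-D convolution: output[t] depends only on x[t], x[t-d], x[t-2d], ...
    Zero-pads on the left so the output has the same length as the input."""
    k = len(weights)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([sum(weights[j] * xp[t + pad - j * dilation] for j in range(k))
                     for t in range(len(x))])

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

x = np.array([1.0, 2, 3, 4, 5, 6, 7, 8])
y = dilated_causal_conv1d(x, [1.0, 1.0], dilation=2)  # y[t] = x[t] + x[t-2]
```

Stacking such layers with dilations 1, 2, 4, 8, ... is what gives WaveNet a receptive field long enough for 24-hour-ahead demand forecasting.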

Development and Usability Evaluation of Hand Rehabilitation Training System Using Multi-Channel EMG-Based Deep Learning Hand Posture Recognition (다채널 근전도 기반 딥러닝 동작 인식을 활용한 손 재활 훈련시스템 개발 및 사용성 평가)

  • Ahn, Sung Moo;Lee, Gun Hee;Kim, Se Jin;Bae, So Jeong;Lee, Hyun Ju;Oh, Do Chang;Tae, Ki Sik
    • Journal of Biomedical Engineering Research, v.43 no.5, pp.361-368, 2022
  • The purpose of this study was to develop a hand rehabilitation training system for hemiplegic patients that recognizes five hand postures (WF: Wrist Flexion, WE: Wrist Extension, BG: Ball Grip, HG: Hook Grip, RE: Rest) in real time using multi-channel EMG-based deep learning. We applied a preprocessing method that converts the signals from an 8-channel armband into spider chart image data and classified the hand movements of five test subjects (1,500 data sets in total) using a Convolutional Neural Network (CNN). The recognition accuracy was 92% for WF, 94% for WE, 76% for BG, 82% for HG, and 88% for RE. Ten physical therapists participated in the usability evaluation. The questionnaire consisted of seven items covering acceptance, interest, and satisfaction, each rated on a 5-point scale, and the mean and standard deviation were calculated. High scores were obtained for immersion and interest in the game (4.6±0.43), convenience of the device (4.9±0.30), and satisfaction after treatment (4.1±0.48). On the other hand, conformity with the intention of treatment (3.90±0.49) was relatively low. This is likely because game play may be difficult depending on the degree of spasticity of the hemiplegic patient, and compensation may occur in patients with weakened target muscles. Therefore, a rehabilitation program suitable for the patient's degree of disability needs to be developed.
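The spider (radar) chart conversion can be sketched by mapping each channel's magnitude to a vertex on one of eight evenly spaced axes; the channel values below are hypothetical, and the paper's exact normalization and rendering are not reproduced here:

```python
import math

def spider_chart_points(values, radius=1.0):
    """Map n channel magnitudes (normalized to [0, 1]) to polygon vertices
    on n evenly spaced axes, starting at 12 o'clock and going clockwise."""
    n = len(values)
    pts = []
    for i, v in enumerate(values):
        angle = math.pi / 2 - 2 * math.pi * i / n
        pts.append((v * radius * math.cos(angle), v * radius * math.sin(angle)))
    return pts

# Hypothetical normalized RMS values for the 8 armband channels.
channels = [0.2, 0.8, 0.5, 0.9, 0.1, 0.4, 0.7, 0.3]
pts = spider_chart_points(channels)
```

Rasterizing the resulting polygon yields an image whose shape encodes the inter-channel activation pattern, which is what makes a 2-D CNN applicable to 8-channel EMG.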

Emotional Expression Technique using Facial Recognition in User Review (사용자 리뷰에서 표정 인식을 이용한 감정 표현 기법)

  • Choi, Wongwan;Hwang, Mansoo;Kim, Neunghoe
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.22 no.5, pp.23-28, 2022
  • Today, the online market has grown rapidly due to the development of digital platforms and the pandemic. Unlike in the offline market, users in the online market rely on checking online reviews, and several prior studies have established that reviews play a significant part in influencing users' purchase intentions. However, the current review format conveys the writer's emotions only through elements such as tone and word choice, which makes them difficult for other users to grasp, and when writers want to emphasize something, bolding passages or changing their colors by hand is very cumbersome. Therefore, in this paper, we propose a technique that detects the user's emotions through camera-based facial expression recognition, automatically assigns a color to each emotion based on existing research on emotion and color, and applies those colors according to the user's intention.
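The emotion-to-color step could look like the following sketch; the emotion labels and hex colors here are illustrative placeholders, not the mapping derived in the paper:

```python
# Hypothetical mapping from a recognized facial expression to a highlight
# color; the real pairs would come from the emotion-color studies the paper
# builds on, which are not reproduced here.
EMOTION_COLORS = {
    "happy": "#FFD700",
    "sad": "#4169E1",
    "angry": "#DC143C",
    "neutral": "#808080",
}

def highlight(text, emotion):
    """Wrap a review fragment in an HTML span colored by the detected emotion,
    falling back to the neutral color for unknown labels."""
    color = EMOTION_COLORS.get(emotion, EMOTION_COLORS["neutral"])
    return f'<span style="color:{color}">{text}</span>'
```

The point of the automation is that the writer never picks colors by hand: the recognizer's emotion label drives the styling directly.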

Deep Learning-based Real-time Heart Rate Measurement System Using Mobile Facial Videos (딥러닝 기반의 모바일 얼굴 영상을 이용한 실시간 심박수 측정 시스템)

  • Ji, Yerim;Lim, Seoyeon;Park, Soyeon;Kim, Sangha;Dong, Suh-Yeon
    • Journal of Korea Multimedia Society, v.24 no.11, pp.1481-1491, 2021
  • Since most biosignals rely on contact-based measurement, it is still hard to apply them conveniently in daily life. In this paper, we present a mobile application that estimates heart rate with a deep learning model by capturing real-time face images in a non-contact manner. We trained a three-dimensional convolutional neural network to predict photoplethysmography (PPG) from face images, which were captured during various movements and situations. To evaluate the performance of the proposed system, we used a pulse oximeter to measure a ground-truth PPG. The root mean square error between the heart rate from the remote PPG measured by the proposed system and the heart rate from the ground truth was about 1.14, showing no significant difference. Our findings suggest that heart rate measurement by mobile applications is accurate enough to help manage health in daily life.
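Once a PPG waveform has been predicted, heart rate can be estimated by counting pulse peaks; this is a deliberately naive sketch (real rPPG pipelines band-pass filter first), shown on a synthetic 72 bpm signal rather than the paper's data:

```python
import numpy as np

def heart_rate_bpm(ppg, fs):
    """Estimate heart rate by counting local maxima above the signal mean.
    fs is the sampling rate in Hz (frames per second for camera-based PPG)."""
    peaks = 0
    thresh = ppg.mean()
    for i in range(1, len(ppg) - 1):
        if ppg[i] > thresh and ppg[i] > ppg[i - 1] and ppg[i] >= ppg[i + 1]:
            peaks += 1
    duration_s = len(ppg) / fs
    return 60.0 * peaks / duration_s

# Synthetic 1.2 Hz (72 bpm) pulse sampled at 30 fps for 10 seconds.
fs = 30
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t)
```

Comparing such an estimate against a pulse oximeter's reading is exactly the ground-truth protocol the paper uses.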

A General Acoustic Drone Detection Using Noise Reduction Preprocessing (환경 소음 제거를 통한 범용적인 드론 음향 탐지 구현)

  • Kang, Hae Young;Lee, Kyung-ho
    • Journal of the Korea Institute of Information Security & Cryptology, v.32 no.5, pp.881-890, 2022
  • As individual and group users actively use drones, the risks (intrusion, information leakage, aircraft crashes, and so on) in no-fly zones are also increasing, so a system that can detect drones intruding into no-fly zones is needed. Previous acoustic drone detection research tries to overcome environmental noise by training a deep learning model directly on drone sound that includes that noise, and therefore does not achieve location-independent performance. In this paper, we propose a drone detection system that collects sound including environmental noise and detects drones after removing the noise from the target sound. Once the environmental noise has been removed from the collected sound, the proposed system classifies drone sound using a Mel spectrogram and CNN deep learning. As a result, we confirm that the drone detection performance, which was weak for unseen environmental noises, can be improved by more than 7%.
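One common noise-removal step, magnitude spectral subtraction, can be sketched per frame as below; the paper does not specify its exact noise-reduction algorithm, so this is a generic illustration of the idea:

```python
import numpy as np

def spectral_subtract(frame, noise_mag, floor=0.01):
    """One frame of magnitude spectral subtraction: subtract an estimated
    noise magnitude spectrum from the frame's magnitude spectrum, clamp at a
    small spectral floor, and resynthesize with the original phase."""
    spec = np.fft.rfft(frame)
    mag = np.abs(spec) - noise_mag
    mag = np.maximum(mag, floor * np.abs(spec))
    return np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=len(frame))

# A clean 64-sample tone; with a zero noise estimate it passes through unchanged.
frame = np.cos(2 * np.pi * 5 * np.arange(64) / 64)
denoised = spectral_subtract(frame, np.zeros(len(frame) // 2 + 1))
```

The noise magnitude spectrum would be estimated from drone-free recordings of the same environment, which is what decouples the detector from any one location.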

Deep Learning-based Pes Planus Classification Model Using Transfer Learning

  • Kim, Yeonho;Kim, Namgyu
    • Journal of the Korea Society of Computer and Information, v.26 no.4, pp.21-28, 2021
  • This study proposes a deep-learning-based flat foot (pes planus) classification methodology using transfer learning. We used transfer learning with a pre-trained VGG16 model and a data augmentation technique to build a model with high predictive accuracy from a total of 176 images, consisting of 88 flat feet and 88 normal feet. To evaluate the proposed model, we compared its prediction accuracy with that of a basic CNN-based model. The basic CNN model achieved 77.27% training accuracy, 61.36% validation accuracy, and 59.09% test accuracy, while our proposed model achieved 94.32%, 86.36%, and 84.09%, respectively, indicating that its accuracy is significantly higher than that of the basic CNN model.
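The paper does not state its split ratios, so purely as an assumed 64/16/20 train/validation/test split, the 176 images could be partitioned like this:

```python
import random

def split_dataset(items, train=0.64, val=0.16, seed=42):
    """Shuffle and split items into train/validation/test sets;
    whatever remains after the train and validation cuts becomes the test set."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * train)
    n_val = int(n * val)
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]

# 176 images -> 112 train / 28 validation / 36 test under the assumed ratios.
train_set, val_set, test_set = split_dataset(range(176))
```

With so few images per split, augmentation and a frozen pre-trained backbone are what keep the model from overfitting, which matches the large gap the basic CNN shows between training and test accuracy.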

A deep learning-based approach for feeding behavior recognition of weanling pigs

  • Kim, MinJu;Choi, YoHan;Lee, Jeong-nam;Sa, SooJin;Cho, Hyun-chong
    • Journal of Animal Science and Technology, v.63 no.6, pp.1453-1463, 2021
  • Feeding is the most important behavior representing the health and welfare of weanling pigs. Early detection of feed refusal is crucial for controlling disease in its initial stages and for detecting empty feeders so that feed can be added in a timely manner. This paper proposes a real-time technique for detecting and recognizing small pigs using a deep-learning-based method. The proposed model focuses on detecting pigs at a feeder in a feeding position. Conventional methods detect pigs first and then classify them into different behavior gestures; in contrast, the proposed method combines these two tasks into a single process that detects only feeding behavior, increasing detection speed. Considering the significant differences between pig behaviors at different sizes, adaptive adjustments are introduced into a you-only-look-once (YOLO) model, including an angle optimization strategy between the head and body for detecting a head in a feeder. According to the experimental results, the method detects the feeding behavior of pigs and screens out non-feeding positions with 95.66%, 94.22%, and 96.56% average precision (AP) at an intersection-over-union (IoU) threshold of 0.5 for YOLOv3, YOLOv4, and a model with an additional layer and the proposed activation function, respectively. Drinking behavior was detected with 86.86%, 89.16%, and 86.41% AP at a 0.5 IoU threshold for the same three models. In terms of detection and classification, our results demonstrate that the proposed method yields higher precision and recall than conventional methods.
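The IoU threshold used to score these detections is a standard formula for axis-aligned boxes, independent of which YOLO variant produced them; a minimal sketch:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # zero if the boxes do not overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A detection counts as correct at the 0.5 threshold only if its box overlaps a ground-truth box with IoU ≥ 0.5, which is what the reported AP figures are computed against.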

Question Similarity Measurement of Chinese Crop Diseases and Insect Pests Based on Mixed Information Extraction

  • Zhou, Han;Guo, Xuchao;Liu, Chengqi;Tang, Zhan;Lu, Shuhan;Li, Lin
    • KSII Transactions on Internet and Information Systems (TIIS), v.15 no.11, pp.3991-4010, 2021
  • Question Similarity Measurement of Chinese Crop Diseases and Insect Pests (QSM-CCD&IP) aims to judge the intent behind a user's input question. The measurement is the basis of agricultural knowledge question-and-answering (Q&A) systems, information retrieval, and other tasks. However, the corpora and measurement methods available in this field have deficiencies, and error propagation may occur when general sentence-embedding methods ignore word boundary features and local context information, which makes the task challenging. To address these problems, a corpus of Chinese crop diseases and insect pests (CCDIP) containing 13 categories was established. Taking the CCDIP as the research object, this study then proposes a Chinese agricultural text similarity matching model, AgrCQS, based on mixed information extraction. Specifically, the hybrid embedding layer enriches character information and improves the model's recognition of word boundaries, multi-scale local information is extracted by a multi-weight multi-core convolutional neural network (MM-CNN), and a self-attention mechanism enhances the model's fusion of global information. The performance of AgrCQS is verified on the CCDIP and on three benchmark datasets, AFQMC, LCQMC, and BQ, with accuracy rates of 93.92%, 74.42%, 86.35%, and 83.05%, respectively, which are higher than those of baseline systems that use no external knowledge. Additionally, the modules of the proposed method can be extracted separately and applied to other models, providing a reference for related research.
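Character-level similarity of the kind the hybrid embedding exploits can be approximated, very roughly, with n-gram cosine similarity; this stdlib sketch is only a baseline intuition, not the AgrCQS model:

```python
from collections import Counter
import math

def char_ngrams(text, n=2):
    """Count the overlapping character n-grams of a string."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_similarity(a, b):
    """Cosine similarity between two n-gram count vectors (Counter objects)."""
    common = set(a) & set(b)
    dot = sum(a[g] * b[g] for g in common)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

Character n-grams sidestep Chinese word segmentation entirely, which is the same motivation behind enriching character information at the embedding layer instead of trusting a word segmenter.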