• Title/Summary/Keyword: neural network learning

Search Results: 4,140

A Best Effort Classification Model For Sars-Cov-2 Carriers Using Random Forest

  • Mallick, Shrabani;Verma, Ashish Kumar;Kushwaha, Dharmender Singh
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.1
    • /
    • pp.27-33
    • /
    • 2021
  • The whole world is now dealing with Coronavirus, which has turned out to be one of the most widespread and long-lived pandemics of our times. Reports reveal that the infectious disease has taken a toll on almost 80% of the world's population. Amid extensive research on predicting growth and transmission through symptomatic carriers of the virus, it cannot be ignored that pre-symptomatic and asymptomatic carriers also play a crucial role in spreading the virus. Classification algorithms have been widely used to classify different types of COVID-19 carriers, ranging from simple feature-based classification to Convolutional Neural Networks (CNNs). This paper presents a novel technique using the Random Forest machine learning algorithm with hyper-parameter tuning to classify different types of COVID-19 carriers, so that these carriers can be accurately characterized and dealt with in a timely manner to contain the spread of the virus. The main reason for selecting Random Forest is that it builds on the powerful concept of the "wisdom of the crowd", which produces an ensemble prediction. The results are quite convincing, and the model records an accuracy score of 99.72%. The results have been compared with the same dataset subjected to K-Nearest Neighbour, logistic regression, support vector machine (SVM), and Decision Tree algorithms, whose accuracy scores were 78.58%, 70.11%, 70.38%, and 99%, respectively, establishing the soundness and suitability of our approach.
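The "wisdom of the crowd" idea behind Random Forest, bagging many weak learners and taking a majority vote, can be illustrated with a minimal pure-Python sketch. The 1-D toy data and depth-1 decision stumps below are invented for illustration; they are not the paper's model or data:

```python
import random
from collections import Counter

def train_stump(sample):
    """Fit a depth-1 'decision stump': pick the threshold on the single
    feature that best separates the two classes in this bootstrap sample."""
    best_thr, best_err = None, float("inf")
    for x, _ in sample:
        err = sum((xi >= x) != yi for xi, yi in sample)
        err = min(err, len(sample) - err)  # allow either orientation
        if err < best_err:
            best_thr, best_err = x, err
    # orientation: does class 1 lie at or above the threshold?
    above = sum(yi for xi, yi in sample if xi >= best_thr)
    below = sum(yi for xi, yi in sample if xi < best_thr)
    return best_thr, above >= below

def predict_stump(stump, x):
    thr, class1_above = stump
    return int((x >= thr) == class1_above)

def forest_predict(stumps, x):
    """Majority vote over the ensemble -- the 'wisdom of the crowd'."""
    votes = Counter(predict_stump(s, x) for s in stumps)
    return votes.most_common(1)[0][0]

random.seed(0)
# toy 1-D dataset: (feature value, label)
data = [(0.1, 0), (0.3, 0), (0.4, 0), (0.6, 1), (0.8, 1), (0.9, 1)]
# bagging: each stump sees a bootstrap resample of the data
stumps = [train_stump([random.choice(data) for _ in data]) for _ in range(25)]
print(forest_predict(stumps, 0.2), forest_predict(stumps, 0.85))
```

A real Random Forest additionally subsamples features at each split and grows deeper trees; hyper-parameter tuning then searches over the number of trees, depth, and similar settings.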

A Calf Disease Decision Support Model (송아지 질병 결정 지원 모델)

  • Choi, Dong-Oun;Kang, Yun-Jeong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.10
    • /
    • pp.1462-1468
    • /
    • 2022
  • Among the data used for the diagnosis of calf disease, feces play an important role. In images of calf feces, health status can be identified from shape, color, and texture. For fecal images that can identify health status, data from 207 normal calves and 158 calves with diarrhea were pre-processed according to fecal status and used. In this paper, fecal regions are detected in the collected calf images, and the images are trained by applying GLCM-CNN, which combines the properties of a CNN with GLCM texture features, on a dataset containing disease symptoms using convolutional network technology. There was a significant difference between the CNN's 89.9% accuracy and the GLCM-CNN's 91.7% accuracy, with the GLCM-CNN higher by 1.8 percentage points.
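The GLCM side of such a hybrid model summarizes texture by counting co-occurring gray levels at a fixed pixel offset. A minimal sketch of the matrix and one derived Haralick-style feature (the 4-level toy image is invented; the paper's preprocessing details are not given here):

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Gray-Level Co-occurrence Matrix: counts how often gray level j
    occurs at offset (dx, dy) from gray level i."""
    h, w = len(image), len(image[0])
    m = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                m[image[y][x]][image[ny][nx]] += 1
    return m

def contrast(m):
    """A standard texture feature: intensity difference weighted by counts."""
    total = sum(sum(row) for row in m) or 1
    return sum(m[i][j] * (i - j) ** 2
               for i in range(len(m)) for j in range(len(m))) / total

img = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [2, 2, 3, 3],
    [2, 2, 3, 3],
]
g = glcm(img)
print(g)
print(contrast(g))
```

In a GLCM-CNN, features such as contrast (or the GLCM itself) are fed to the network alongside the raw image, giving it explicit texture information.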

Movement Route Generation Technique through Location Area Clustering (위치 영역 클러스터링을 통한 이동 경로 생성 기법)

  • Yoon, Chang-Pyo;Hwang, Chi-Gon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.355-357
    • /
    • 2022
  • In this paper, we propose a positioning technique for predicting the movement path of a moving object in an indoor environment using a recurrent neural network (RNN) model, a deep learning network: continuous location information is used to predict the path of a vehicle moving along a local route, reducing decision errors. In an indoor environment where GPS information is not available, the dataset must be continuous and sequential for the RNN model to apply. However, Wi-Fi radio fingerprint data cannot be used directly as RNN input because, as characteristic information about a specific location at the time of collection, its continuity is not guaranteed. Therefore, we propose a movement path generation technique for a vehicle moving along a local route in an indoor environment by giving the RNN model the necessary sequential location continuity.
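The sequencing requirement described above, giving fingerprint-derived locations an order before feeding an RNN, can be sketched as a sliding-window preparation step. The location IDs below are hypothetical placeholders for positions already mapped from Wi-Fi fingerprints:

```python
def make_sequences(path, window=3):
    """Turn an ordered list of location IDs into (input sequence, next
    location) pairs suitable for training a sequence model such as an RNN."""
    pairs = []
    for i in range(len(path) - window):
        pairs.append((path[i:i + window], path[i + window]))
    return pairs

# ordered location IDs along one indoor drive
path = ["A", "B", "C", "D", "E", "F"]
for seq, nxt in make_sequences(path):
    print(seq, "->", nxt)
```

Each pair trains the model to predict the next location from the preceding window, which is exactly the continuity that raw, unordered fingerprint samples lack.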


Performance Enhancement of Speech Declipping using Clipping Detector (클리핑 감지기를 이용한 음성 신호 클리핑 제거의 성능 향상)

  • Eunmi Seo;Jeongchan Yu;Yujin Lim;Hochong Park
    • Journal of Broadcast Engineering
    • /
    • v.28 no.1
    • /
    • pp.132-140
    • /
    • 2023
  • In this paper, we propose a method for enhancing the performance of speech declipping using a clipping detector. Clipping occurs when the input speech level exceeds the dynamic range of the microphone, and it significantly degrades speech quality. Recently, many high-performance machine-learning-based speech declipping methods have been developed. However, they often deteriorate the speech signal, because the signal reconstruction process introduces degradation when the degree of clipping is not high. To solve this problem, we propose a new approach that combines a declipping network with a clipping detector, which enables a selective declipping operation depending on the clipping level and provides high-quality speech at all clipping levels. We measured declipping performance using various metrics and confirmed that, compared with conventional methods, the proposed method improves the average performance over all clipping levels and greatly improves performance when the clipping distortion is small.
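A detector-gated pipeline of this kind can be sketched as follows. The detection rule, the 5% threshold, the clipping limit, and the identity stand-in for the declipping network are assumptions for illustration, not the paper's settings:

```python
import math

def clipped_ratio(signal, limit=1.0, eps=1e-6):
    """Fraction of samples sitting at the dynamic-range limit (clipped)."""
    return sum(abs(s) >= limit - eps for s in signal) / len(signal)

def declip_if_needed(signal, declip, threshold=0.05, limit=1.0):
    """Selective declipping: run the (expensive) declipping network only
    when the detector says clipping is severe enough to matter."""
    if clipped_ratio(signal, limit) >= threshold:
        return declip(signal)
    return signal  # leave lightly clipped / clean speech untouched

# toy 440 Hz tone at 16 kHz; the amplified copy saturates at +/-1.0
clean = [0.8 * math.sin(2 * math.pi * 440 * n / 16000) for n in range(160)]
clipped = [max(-1.0, min(1.0, 2.0 * s)) for s in clean]
identity = lambda s: s  # stand-in for a real declipping network
print(clipped_ratio(clean), clipped_ratio(clipped))
```

Gating this way avoids the reconstruction-induced degradation on signals that were barely clipped, which is the failure mode the paper targets.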

Performance Evaluation of Efficient Vision Transformers on Embedded Edge Platforms (임베디드 엣지 플랫폼에서의 경량 비전 트랜스포머 성능 평가)

  • Minha Lee;Seongjae Lee;Taehyoun Kim
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.18 no.3
    • /
    • pp.89-100
    • /
    • 2023
  • Recently, on-device artificial intelligence (AI) solutions using mobile devices and embedded edge devices have emerged in various fields, such as computer vision, to address network traffic burdens, low-energy operations, and security problems. Although vision transformer deep learning models have outperformed conventional convolutional neural network (CNN) models in computer vision, they require more computations and parameters than CNN models. Thus, they are not directly applicable to embedded edge devices with limited hardware resources. Many researchers have proposed various model compression methods or lightweight architectures for vision transformers; however, there are only a few studies evaluating the effects of model compression techniques of vision transformers on performance. Regarding this problem, this paper presents a performance evaluation of vision transformers on embedded platforms. We investigated the behaviors of three vision transformers: DeiT, LeViT, and MobileViT. Each model performance was evaluated by accuracy and inference time on edge devices using the ImageNet dataset. We assessed the effects of the quantization method applied to the models on latency enhancement and accuracy degradation by profiling the proportion of response time occupied by major operations. In addition, we evaluated the performance of each model on GPU and EdgeTPU-based edge devices. In our experimental results, LeViT showed the best performance in CPU-based edge devices, and DeiT-small showed the highest performance improvement in GPU-based edge devices. In addition, only MobileViT models showed performance improvement on EdgeTPU. Summarizing the analysis results through profiling, the degree of performance improvement of each vision transformer model was highly dependent on the proportion of parts that could be optimized in the target edge device. 
In summary, to apply vision transformers in on-device AI solutions, both proper operation composition and optimizations specific to the target edge device must be considered.
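The quantization evaluated in such studies typically maps float weights to int8 plus a scale factor. A minimal symmetric per-tensor sketch (the weight values are invented; real edge runtimes such as EdgeTPU compilers also use per-channel scales and zero points):

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: store weights as small
    integers in [-127, 127] plus one float scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.02, -0.5, 0.31, 0.127, -0.254]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
print(q, s, max_err)
```

The latency gain comes from running matrix multiplies in int8; the accuracy cost is bounded by the rounding error, at most half the scale per weight, which is why quantization hurts some architectures (and some operations within them) more than others.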

Spatio-temporal potential future drought prediction using machine learning for time series data forecast in Abomey-calavi (South of Benin)

  • Agossou, Amos;Kim, Do Yeon;Yang, Jeong-Seok
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2021.06a
    • /
    • pp.268-268
    • /
    • 2021
  • Groundwater is the main source of water in Abomey-Calavi (southern Benin) for domestic, industrial, and agricultural activities. Groundwater abstraction across the region is not fully controlled by a network because of the many private boreholes and traditional wells used by the population. After some decades, this important resource is becoming more and more vulnerable and needs more attention. For better groundwater management in the Abomey-Calavi region, the present study attempts to predict probable future groundwater drought, using a Recurrent Neural Network (RNN) for future groundwater level prediction. The RNN model was implemented in Python in a Jupyter notebook. Six years of monthly groundwater level data were used for model calibration, two years of data for model testing, and the model was finally used to predict two years of future groundwater levels (2020 and 2021). The GRI was calculated for 9 wells across the area from 2012 to 2021. The GRI values in the dry season (by the end of March) showed groundwater drought for the first time during the study period in 2014, as severe and moderate; from 2015 to 2021 they show only moderate drought. The rainy seasons of 2020 and 2021 are relatively wet and near normal. The GRI showed no drought in the rainy season during the study period, but an important diminution of groundwater level between 2012 and 2021. The Pearson correlation coefficient calculated between GRI and rainfall from 2005 to 2020 (using only the three wells with long-period time series data) showed that the groundwater drought mostly observed in the dry season is not mainly caused by rainfall scarcity (correlation values between -0.113 and -0.083); it could instead be the consequence of overexploitation of the resource, which caused the important spatial and temporal diminution observed from 2012 to 2021.
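The Pearson correlation used above to compare GRI with rainfall can be computed as follows (pure-Python sketch; the series you pass in would be the study's monthly data, not shown here):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# perfectly correlated and perfectly anti-correlated toy series
print(pearson([1, 2, 3], [2, 4, 6]))   # close to +1
print(pearson([1, 2, 3], [3, 2, 1]))   # close to -1
```

Values near zero, like the study's -0.113 to -0.083, indicate that rainfall explains almost none of the variation in GRI, supporting the overexploitation interpretation.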


Force-deformation relationship prediction of bridge piers through stacked LSTM network using fast and slow cyclic tests

  • Omid Yazdanpanah;Minwoo Chang;Minseok Park;Yunbyeong Chae
    • Structural Engineering and Mechanics
    • /
    • v.85 no.4
    • /
    • pp.469-484
    • /
    • 2023
  • A deep recursive bidirectional CUDA Deep Neural Network Long Short-Term Memory (Bi-CuDNNLSTM) layer is employed in this paper to predict entire force time histories, and the corresponding hysteresis and backbone curves, of reinforced concrete (RC) bridge piers from experimental fast and slow cyclic tests. The proposed stacked Bi-CuDNNLSTM layers take multiple uncertain input variables, including horizontal actuator displacements, vertical actuator axial loads, the effective height of the bridge pier, the moment of inertia, and mass. The functional application programming interface of the Keras Python library is used to develop a deep learning model considering all the above input attributes. For robust and reliable prediction, the dataset for both the fast and slow cyclic tests is split into three mutually exclusive subsets: training, validation, and testing (unseen). The whole dataset includes 17 RC bridge piers tested experimentally: ten in fast and seven in slow cyclic tests. The results reveal that the mean absolute error, used as the loss function, decreases monotonically toward zero for both the training and validation datasets after 5000 epochs, and a high level of correlation, above 90%, is observed between the predicted and experimentally measured force time histories for all datasets. The maximum mean of the normalized error, obtained through Box-Whisker plots and a Gaussian distribution of the normalized error, associated with the unseen data is about 10% and 3% for the fast and slow cyclic tests, respectively. In conclusion, the stacked Bi-CuDNNLSTM layers implemented in this study can considerably reduce the time and cost of conducting new fast and slow cyclic tests and provide fast and accurate insight into the hysteretic behavior of bridge piers.
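The three-way mutually exclusive split described above can be sketched as follows. The 70/15/15 ratios and the seed are assumptions for illustration; the abstract does not state the paper's split proportions:

```python
import random

def three_way_split(items, ratios=(0.7, 0.15, 0.15), seed=42):
    """Split a dataset into mutually exclusive training, validation, and
    testing (unseen) subsets, as done for the pier test records."""
    assert abs(sum(ratios) - 1.0) < 1e-9
    shuffled = items[:]
    random.Random(seed).shuffle(shuffled)  # deterministic, reproducible split
    n = len(shuffled)
    n_train = round(n * ratios[0])
    n_val = round(n * ratios[1])
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

piers = [f"pier_{i:02d}" for i in range(1, 18)]  # 17 tested piers
train, val, test = three_way_split(piers)
print(len(train), len(val), len(test))
```

Keeping the test subset fully unseen, as the paper does, is what makes the reported ~10% and ~3% normalized errors meaningful generalization estimates rather than training-set fits.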

A Study on Information Expansion of Neighboring Clusters for Creating Enhanced Indoor Movement Paths (향상된 실내 이동 경로 생성을 위한 인접 클러스터의 정보 확장에 관한 연구)

  • Yoon, Chang-Pyo;Hwang, Chi-Gon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.264-266
    • /
    • 2022
  • In order to apply an RNN model to radio fingerprint-based indoor path generation, the dataset must be continuous and sequential. However, Wi-Fi radio fingerprint data is not suitable as RNN input because, as characteristic information about a specific location at the time of collection, its continuity is not guaranteed. Therefore, continuity information for sequential positions must be supplied. For this purpose, clustering is possible through classification of each region based on signal data. However, owing to the limitations of radio signals, the continuity information between clusters does not indicate whether actual movement between them is possible. Correlation information on whether movement between adjacent clusters is possible is therefore required. In this paper, a recurrent neural network (RNN) model, a deep learning network, is used to predict the path of a moving object, and errors that may occur during path prediction are reduced by generating continuous location information for path generation in an indoor environment. We propose a method of assigning correlations between clusters to generate improved movement paths that avoid predicting paths along which movement is impossible.
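The cluster-correlation idea, recording which cluster-to-cluster moves are physically possible and filtering predictions against that record, can be sketched as below. The cluster IDs and observed paths are hypothetical:

```python
def build_adjacency(paths):
    """Learn which cluster-to-cluster moves are physically possible from
    observed movement sequences."""
    adj = {}
    for path in paths:
        for a, b in zip(path, path[1:]):
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)  # assume moves are reversible
    return adj

def filter_prediction(current, candidates, adj):
    """Keep only predicted next clusters actually reachable from `current`."""
    return [c for c in candidates if c in adj.get(current, set())]

# observed drives through clustered regions
paths = [["A", "B", "C"], ["B", "D"], ["C", "E"]]
adj = build_adjacency(paths)
print(filter_prediction("B", ["A", "E", "D"], adj))
```

Post-filtering the RNN's candidate next clusters this way rejects transitions that no real trajectory has ever made, which is the erroneous-path case the paper aims to avoid.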


Prediction of pathological complete response in rectal cancer using 3D tumor PET image (3차원 종양 PET 영상을 이용한 직장암 치료반응 예측)

  • Jinyu Yang;Kangsan Kim;Ui-sup Shin;Sang-Keun Woo
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2023.07a
    • /
    • pp.63-65
    • /
    • 2023
  • In this paper, we conducted a study to predict pathological complete response after treatment in rectal cancer patients using a deep learning network with FDG-PET images. Rectal cancer is one of the common malignant tumors, but the probability of pathological complete response is very low, so predicting the treatment response and selecting an appropriate treatment method is important. Therefore, in this study, we built a deep learning network by applying a convolutional neural network (CNN) model to FDG-PET images and predicted the treatment response of rectal cancer patients. FDG-PET images of 116 rectal cancer patients were acquired. The subjects were patients with a tumor size of 2 cm or larger, and 21 patients achieved complete response after treatment. The FDG-PET images were evaluated over both the whole-body region and the tumor region. The deep learning networks consisted of CNN models taking 2D and 3D image inputs. Using the trained CNN models, we evaluated the performance of predicting complete response after treatment of rectal cancer. In the training results, the mean accuracy and precision were 0.854 and 0.905, respectively, across all CNN models and image regions. In the test results, the 3D CNN model and the network using only the tumor region showed the highest accuracy. This study evaluated the performance of the deep learning network according to the CNN model's input image and the image region, and we expect the deep learning network model to predict rectal cancer treatment response and help determine the appropriate treatment direction.
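The core operation of a 3D CNN applied to PET volumes is 3D convolution. A minimal valid-padding sketch on a toy volume (the actual network architecture is not specified in the abstract):

```python
def conv3d(volume, kernel):
    """Single-channel 3D convolution with valid padding -- the core
    operation a 3D CNN applies to volumetric PET data."""
    D, H, W = len(volume), len(volume[0]), len(volume[0][0])
    d, h, w = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for z in range(D - d + 1):
        plane = []
        for y in range(H - h + 1):
            row = []
            for x in range(W - w + 1):
                s = sum(volume[z + i][y + j][x + k] * kernel[i][j][k]
                        for i in range(d) for j in range(h) for k in range(w))
                row.append(s)
            plane.append(row)
        out.append(plane)
    return out

# 3x3x3 toy "PET" volume with a bright 2x2x2 corner, filtered by an averaging kernel
vol = [[[1.0 if z < 2 and y < 2 and x < 2 else 0.0 for x in range(3)]
        for y in range(3)] for z in range(3)]
avg = [[[1 / 8] * 2 for _ in range(2)] for _ in range(2)]
print(conv3d(vol, avg))
```

Unlike a 2D CNN applied slice by slice, the 3D kernel mixes information across adjacent slices, which is presumably why the 3D model on the tumor region performed best here.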


Parameter Analysis for Super-Resolution Network Model Optimization of LiDAR Intensity Image (LiDAR 반사 강도 영상의 초해상화 신경망 모델 최적화를 위한 파라미터 분석)

  • Seungbo Shim
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.22 no.5
    • /
    • pp.137-147
    • /
    • 2023
  • LiDAR is used in autonomous driving and various industrial fields to measure the size of and distance to objects. The sensor also provides intensity images based on the amount of reflected light, which aids sensor data processing by providing information on object shape. LiDAR delivers higher performance as its resolution increases, but at an increased cost. The same applies to LiDAR intensity images: expensive equipment is essential to acquire high-resolution ones. This study developed an artificial intelligence model to improve low-resolution LiDAR intensity images into high-resolution ones, and performed a parameter analysis to find the optimal super-resolution neural network model. The super-resolution algorithm was trained and verified using 2,500 LiDAR intensity images. As a result, the resolution of the intensity images was improved. These results can be applied to the autonomous driving field and can help improve driving environment recognition and obstacle detection performance.
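A parameter analysis of a super-resolution network usually starts from the layer-wise parameter count, since model size trades off against reconstruction quality. A sketch assuming an SRCNN-style stack of 2D convolutions (the paper's actual architecture and hyper-parameters are not given here):

```python
def conv_params(c_in, c_out, k):
    """Parameters of one 2D conv layer: k*k weights per in/out channel pair,
    plus one bias per output channel."""
    return c_in * c_out * k * k + c_out

def srcnn_param_count(channels, kernels, c_img=1):
    """Total parameter count of an SRCNN-style stack of conv layers,
    e.g. channels=[64, 32], kernels=[9, 5, 5] for the classic SRCNN."""
    chain = [c_img] + channels + [c_img]
    return sum(conv_params(a, b, k)
               for a, b, k in zip(chain, chain[1:], kernels))

# classic 9-5-5 SRCNN on a single-channel (intensity) image
print(srcnn_param_count([64, 32], [9, 5, 5]))
```

Sweeping `channels` and `kernels` and plotting parameter count against validation quality is one concrete way to carry out the kind of optimization-oriented parameter analysis the study describes.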