• Title/Summary/Keyword: Deep learning (DL)


Privacy-Preserving in the Context of Data Mining and Deep Learning

  • Altalhi, Amjaad; AL-Saedi, Maram; Alsuwat, Hatim; Alsuwat, Emad
    • International Journal of Computer Science & Network Security / v.21 no.6 / pp.137-142 / 2021
  • Machine-learning systems have proven their worth in industries such as healthcare and banking by helping to extract valuable inferences. Information in these critical sectors is traditionally stored in databases distributed across multiple environments, which makes accessing and extracting the data difficult. In addition, these data sources contain sensitive information, so the data cannot be shared outside the owning organization. Privacy-Preserving Machine Learning (PPML) addresses this challenge using cryptographic techniques, enabling knowledge discovery while preserving data privacy. This paper discusses privacy preservation in data mining, which has a wide variety of uses including business intelligence, medical diagnostic systems, image processing, web search, and scientific discovery. It also discusses privacy preservation in deep learning (DL), which exhibits exceptional accuracy in image detection, speech recognition, and natural language processing compared with other branches of machine learning, and which can help detect data corruption as well as unauthorized system access and data insertion.
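
The abstract does not name the specific cryptographic primitives used. As a minimal illustration of one common PPML building block, additive secret sharing lets several parties compute a joint sum without any party seeing the raw values; the field modulus and party count below are arbitrary choices, not the paper's:

```python
import random

PRIME = 2**61 - 1  # modulus for the additive sharing field

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def private_sum(values: list[int], n_parties: int = 3) -> int:
    """Each record is split across parties; only the aggregate is revealed."""
    party_totals = [0] * n_parties
    for v in values:
        for i, s in enumerate(share(v, n_parties)):
            party_totals[i] = (party_totals[i] + s) % PRIME
    return sum(party_totals) % PRIME  # reconstruct only the sum

print(private_sum([120, 87, 250]))  # 457, with no party ever holding a raw value
```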

DLDW: Deep Learning and Dynamic Weighing-based Method for Predicting COVID-19 Cases in Saudi Arabia

  • Albeshri, Aiiad
    • International Journal of Computer Science & Network Security / v.21 no.9 / pp.212-222 / 2021
  • Multiple waves of COVID-19 highlighted one crucial aspect of this pandemic worldwide: the factors affecting the spread of COVID-19 infection evolve with regional and local practices and events. The introduction of vaccines in early 2021 was expected to significantly control and reduce cases; however, virus mutations and new variants have challenged these expectations. Several countries that contained the COVID-19 pandemic successfully in the first wave failed to repeat this in the second and third waves. This work focuses on COVID-19 pandemic control and management in Saudi Arabia, aiming to predict new cases with deep learning from a set of important factors. The proposed method is called the Deep Learning and Dynamic Weighing-based (DLDW) COVID-19 case prediction method. Special consideration is given to the evolving factors responsible for recent surges in the pandemic. For this purpose, two weights are assigned to each data instance: one based on feature importance and one based on time, so that older data receives less weight. Feature selection identifies how the factors affecting the rate of new cases evolved over the period. The DLDW method produced 80.39% prediction accuracy, which is 6.54%, 9.15%, and 7.19% higher than three other classifiers: deep learning (DL), Random Forest (RF), and Gradient Boosting Machine (GBM). For Saudi Arabia, our study further implies that lockdowns, vaccination, and self-aware restricted mobility of residents are effective tools for controlling and managing the COVID-19 pandemic.
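
The abstract does not give the exact weighting formulas. One hedged reading of "feature-importance and time-based instance weights" is sketched below; the half-life, the preliminary model, and the way feature importance is mapped onto instances are all assumptions, not the paper's definitions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))          # stand-ins for mobility, vaccination, ... features
y = X @ np.array([3.0, 0.0, 1.5, 0.0, 2.0]) + rng.normal(size=200)
age_days = np.arange(200)[::-1]        # 0 = newest instance, 199 = oldest

# Time-decay weight: newer observations count more (30-day half-life assumed).
time_w = 0.5 ** (age_days / 30.0)

# Feature-importance weight: score each instance by how strongly it expresses
# the features a preliminary model found important (one possible reading).
prelim = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
feat_w = np.abs(X) @ prelim.feature_importances_

# Final model trained with the combined per-instance weights.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, y, sample_weight=time_w * feat_w)
```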

Deep Learning-based system for plant disease detection and classification

  • YuJin Ko; HyunJun Lee; HeeJa Jeong; Li Yu; NamHo Kim
    • Smart Media Journal / v.12 no.7 / pp.9-17 / 2023
  • Plant diseases and pests affect the growth of various plants, so identifying pests at an early stage is very important. Although many machine learning (ML) models have already been used for the inspection and classification of plant pests, advances in deep learning (DL), a subset of machine learning, have driven much of the progress in this research field. In this study, disease and pest inspection was performed on abnormal crops, and maturity classification on normal crops, using a YOLOX detector and a MobileNet classifier. This method can effectively extract a variety of plant pest features. For the experiments, image datasets of various resolutions covering strawberries, peppers, and tomatoes were prepared and used for plant pest classification. The experimental results confirmed an average test accuracy of 84% and a maturity classification accuracy of 83.91% on images with complex backgrounds. The model was able to effectively detect six diseases across the three plants and classify the maturity of each plant under natural conditions.
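
The detector-then-classifier pipeline can be sketched as follows. The MobileNet call uses the real torchvision API, while `detect_regions` is a placeholder for the YOLOX detector, whose interface the abstract does not specify; the class count and (untrained) weights are assumptions:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Maturity classifier: a MobileNet head, here untrained (fine-tuned weights assumed).
classifier = models.mobilenet_v3_small(num_classes=4)  # e.g., four ripeness stages
classifier.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def detect_regions(image):
    """Placeholder for the YOLOX detector: should return (box, is_abnormal) pairs.
    The real detector and its output format are not given in the abstract."""
    return [((10, 10, 200, 200), False)]

def inspect(path: str):
    image = Image.open(path).convert("RGB")
    for (x1, y1, x2, y2), is_abnormal in detect_regions(image):
        crop = preprocess(image.crop((x1, y1, x2, y2))).unsqueeze(0)
        if not is_abnormal:  # normal crops go to maturity classification
            with torch.no_grad():
                stage = classifier(crop).argmax(1).item()
            print(f"box {(x1, y1, x2, y2)}: maturity class {stage}")
        # abnormal crops would instead be routed to disease/pest classification
```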

Smartphone-based structural crack detection using pruned fully convolutional networks and edge computing

  • Ye, X.W.; Li, Z.X.; Jin, T.
    • Smart Structures and Systems / v.29 no.1 / pp.141-151 / 2022
  • In recent years, industry and research communities have focused on developing autonomous crack inspection approaches, which mainly comprise image acquisition and crack detection. In these approaches, mobile devices such as cameras, drones, or smartphones are used as sensing platforms to acquire structural images, and deep learning (DL)-based methods are being developed as important crack detection approaches. However, the process of image acquisition and collection is time-consuming, which delays the inspection. Moreover, present mobile devices such as smartphones can serve not only as sensing platforms but also as computing platforms, embedded with deep neural networks (DNNs) to conduct on-site crack detection. Because of the limited computing resources of mobile devices, the size of the DNNs should be reduced to improve computational efficiency. In this study, an architecture called the pruned crack recognition network (PCR-Net) was developed for the detection of structural cracks. A dataset containing 11,000 images was established from raw bridge inspection images. A pruning method was introduced to reduce the size of the base architecture and optimize the model size. Comparative studies with image processing techniques (IPTs) and other DNNs were conducted to evaluate the performance of the proposed PCR-Net. Furthermore, a modularly designed framework integrating the PCR-Net was developed to realize a DL-based crack detection application for smartphones. Finally, on-site experiments were carried out to validate the performance of the developed smartphone-based system for detecting structural cracks.
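
The abstract does not describe the pruning criterion. A minimal sketch of one standard approach, magnitude-based weight pruning via PyTorch's pruning utilities, applied to a toy stand-in network (the 50% ratio and the tiny architecture are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A tiny stand-in for the (much larger) base crack recognition network.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),  # per-pixel crack probability map
)

# Remove the 50% smallest-magnitude weights in each conv layer (ratio assumed).
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the mask in for deployment

kept = sum((p != 0).sum().item() for p in model.parameters())
total = sum(p.numel() for p in model.parameters())
print(f"{kept}/{total} weights kept after pruning")
```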

Review of medical imaging systems, medical imaging data problems, and XAI in the medical imaging field

  • Sun-Kuk Noh
    • Journal of Internet Computing and Services / v.25 no.5 / pp.53-65 / 2024
  • Currently, artificial intelligence (AI) is applied in the medical field to collect and analyze data such as personal genetic information, medical information, and lifestyle information. In the medical imaging field in particular, AI is applied to analyze patients' medical image data and diagnose diseases. Deep learning (DL) with deep neural networks such as CNNs and GANs has been introduced into medical image analysis and medical data augmentation to facilitate lesion detection, quantification, and classification. In this paper, we examine AI as used in the medical imaging field and review the related medical image acquisition devices, the medical information systems that transmit medical image data, the problems with medical image data, and the current status of explainable artificial intelligence (XAI), which has recently seen active application. The continuous development of AI and information and communication technology (ICT) is expected to make medical image data easier to analyze, enabling disease diagnosis, prognosis prediction, and improvement of patients' quality of life. AI medicine is expected to evolve from the existing treatment-centered medical system toward personalized healthcare through preemptive diagnosis and prevention.
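
As context for the XAI discussion, one widely used explanation technique for imaging classifiers is Grad-CAM, which highlights the regions that most influenced a prediction. The sketch below is a generic illustration, not the review's own method; the backbone, layer choice, and two-class setup are assumptions:

```python
import torch
from torchvision import models

# Grad-CAM over the last conv block of an (untrained) ResNet-18 classifier.
model = models.resnet18(num_classes=2).eval()   # e.g., lesion vs. no lesion
activations, gradients = {}, {}

layer = model.layer4
layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

x = torch.randn(1, 3, 224, 224)        # stand-in for a preprocessed scan
score = model(x)[0].max()              # score of the predicted class
score.backward()

weights = gradients["g"].mean(dim=(2, 3), keepdim=True)    # channel importance
cam = torch.relu((weights * activations["a"]).sum(dim=1))  # (1, H, W) heatmap
cam = cam / (cam.max() + 1e-8)         # normalized map to overlay on the scan
```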

A Novel Spiking Neural Network for ECG signal Classification

  • Rana, Amrita; Kim, Kyung Ki
    • Journal of Sensor Science and Technology / v.30 no.1 / pp.20-24 / 2021
  • The electrocardiogram (ECG) is one of the most extensively employed signals for diagnosing and predicting cardiovascular diseases (CVDs). In recent years, several deep learning (DL) models have been proposed to improve detection accuracy. Among these, deep neural networks (DNNs), in which features are extracted automatically, are the most popular. Despite the gains in classification accuracy, DL models require exorbitant computational resources and power, which makes mapping DNNs slow and challenging for wearable devices. Embedded systems have constrained power and memory resources, so full-precision DNNs are not easily deployable on such devices. To make neural networks faster and more power-efficient, spiking neural networks (SNNs) have been introduced, requiring fewer operations and less complex hardware resources. However, the conventional SNN suffers from low accuracy and high computational cost. This paper therefore proposes a new binarized SNN that constrains the synaptic weights to be binary (+1 and -1). In the simulation results, the paper compares DL models and SNNs to evaluate which is optimal for ECG classification: although the SNN makes a slight compromise in accuracy, it proves to be energy-efficient.
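
A minimal sketch of the core idea, binary synaptic weights driving leaky integrate-and-fire neurons; the threshold, leak, layer sizes, and input spike encoding are assumptions, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Binarize real-valued synaptic weights to {+1, -1}, as the proposed SNN does.
w_real = rng.normal(size=(8, 16))            # 8 inputs -> 16 spiking neurons
w_bin = np.where(w_real >= 0, 1.0, -1.0)

def lif_step(v, spikes_in, threshold=4.0, leak=0.9):
    """One leaky integrate-and-fire update with binary weights (parameters assumed)."""
    v = leak * v + spikes_in @ w_bin         # integrate weighted input spikes
    spikes_out = (v >= threshold).astype(float)
    v = np.where(spikes_out > 0, 0.0, v)     # reset neurons that fired
    return v, spikes_out

v = np.zeros(16)
for t in range(10):                          # toy spike train from an ECG encoder
    spikes_in = (rng.random(8) < 0.3).astype(float)
    v, out = lif_step(v, spikes_in)
```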

Restoration of Ghost Imaging in Atmospheric Turbulence Based on Deep Learning

  • Chenzhe Jiang; Banglian Xu; Leihong Zhang; Dawei Zhang
    • Current Optics and Photonics / v.7 no.6 / pp.655-664 / 2023
  • Ghost imaging (GI) technology is developing rapidly, but it inevitably faces limitations such as the influence of atmospheric turbulence. In this paper, we study a ghost imaging system in atmospheric turbulence and use a gamma-gamma (GG) model to simulate turbulence in the medium-to-strong range. With a compressed sensing (CS) algorithm and a generative adversarial network (GAN), the image can be restored well. We analyze the performance of correlation imaging, the influence of atmospheric turbulence, and the effects of the restoration algorithm. The restored image's peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) increased to 21.9 dB and 0.67, respectively. This shows that deep learning (DL) methods can restore a distorted image well, which is of particular significance for computational imaging in noisy and blurry environments.
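
For reference, both reported metrics can be computed with scikit-image; note that PSNR is expressed in dB while SSIM is a dimensionless index no greater than 1 (the synthetic images below are placeholders for real GI reconstructions):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
ground_truth = rng.random((128, 128))        # stand-in for the target scene
restored = np.clip(ground_truth + 0.02 * rng.normal(size=(128, 128)), 0, 1)

psnr = peak_signal_noise_ratio(ground_truth, restored, data_range=1.0)  # in dB
ssim = structural_similarity(ground_truth, restored, data_range=1.0)    # unitless
print(f"PSNR = {psnr:.1f} dB, SSIM = {ssim:.2f}")
```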

Hybrid model-based and deep learning-based metal artifact reduction method in dental cone-beam computed tomography

  • Jin Hur; Yeong-Gil Shin; Ho Lee
    • Nuclear Engineering and Technology / v.55 no.8 / pp.2854-2863 / 2023
  • Objective: To present a hybrid approach that combines a constrained beam-hardening estimator (CBHE) and deep learning (DL)-based post-refinement for metal artifact reduction in dental cone-beam computed tomography (CBCT). Methods: The constrained beam-hardening estimator (CBHE) is derived from a polychromatic X-ray attenuation model with respect to X-ray transmission length, and its associated parameters are calculated numerically. DL-based post-refinement with an artifact disentanglement network (ADN) is then performed to mitigate the remaining dark shading regions around metal. The ADN supports an unsupervised learning approach in which no paired CBCT images are required; the network consists of an encoder that separates artifacts from content and a decoder that reconstructs the content. Additionally, the ADN with data normalization replaces metal regions with values from bone or soft-tissue regions. Finally, the metal regions obtained from the CBHE are blended into the reconstructed images. The proposed approach is systematically assessed using a dental phantom with two types of metal objects for qualitative and quantitative comparisons. Results: The proposed hybrid scheme provides improved image quality in areas surrounding the metal while preserving native structures. Conclusion: This study may significantly improve the detection of areas of interest in many dentomaxillofacial applications.
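
The final blending step described in the abstract amounts to a masked combination of the two reconstructions. A minimal sketch is below; the hard binary mask is an assumption, and the paper may blend differently (e.g., with feathered transitions):

```python
import numpy as np

def blend_metal(refined: np.ndarray, cbhe: np.ndarray,
                metal_mask: np.ndarray) -> np.ndarray:
    """Blend CBHE-reconstructed metal regions into the ADN-refined image.
    `metal_mask` is 1 inside metal, 0 elsewhere (hard mask assumed)."""
    m = metal_mask.astype(float)
    return (1.0 - m) * refined + m * cbhe

refined = np.random.rand(256, 256)   # stand-in for ADN output (shading mitigated)
cbhe = np.random.rand(256, 256)      # stand-in for CBHE reconstruction (metal kept)
mask = np.zeros((256, 256))
mask[100:120, 100:140] = 1           # toy metal region
final_image = blend_metal(refined, cbhe, mask)
```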

5G Network Resource Allocation and Traffic Prediction based on DDPG and Federated Learning

  • Seok-Woo Park; Oh-Sung Lee; In-Ho Ra
    • Smart Media Journal / v.13 no.4 / pp.33-48 / 2024
  • With the advent of 5G, characterized by Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low Latency Communications (URLLC), and Massive Machine Type Communications (mMTC), efficient network management and service provision are becoming increasingly critical. This paper proposes a novel approach to the key challenges of 5G networks, namely ultra-high speed, ultra-low latency, and ultra-reliability, by dynamically optimizing network slicing and resource allocation with machine learning (ML) and deep learning (DL) techniques. The proposed methodology uses prediction models for network traffic and resource allocation and employs Federated Learning (FL) techniques to optimize network bandwidth and latency while enhancing privacy and security. Specifically, this paper covers in detail the implementation of algorithms and models such as Random Forest and LSTM, presenting methodologies for the automation and intelligence of 5G network operations. Finally, the performance gains achievable by applying ML and DL to 5G networks are validated through performance evaluation and analysis, and solutions for optimizing network slicing and resource management are proposed for various industrial applications.
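
The abstract names FL but not the aggregation rule. A minimal sketch of plain federated averaging (FedAvg) over toy traffic-prediction models follows; the architecture, client count, and the use of FedAvg itself are assumptions:

```python
import copy
import torch
import torch.nn as nn

def fedavg(global_model: nn.Module, client_states: list[dict]) -> nn.Module:
    """Average client model weights into the global model (plain FedAvg;
    the paper's exact aggregation details are not given in the abstract)."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in client_states]).mean(0)
    global_model.load_state_dict(avg)
    return global_model

# Toy traffic predictor shared by three base stations (architecture assumed).
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
clients = [copy.deepcopy(model).state_dict() for _ in range(3)]  # local updates
model = fedavg(model, clients)
```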

A Review of Deep Learning-based Trace Interpolation and Extrapolation Techniques for Reconstructing Missing Near Offset Data

  • Jiho Park; Soon Jee Seol; Joongmoo Byun
    • Geophysics and Geophysical Exploration / v.26 no.4 / pp.185-198 / 2023
  • In marine seismic surveys, trace gaps inevitably occur at near offsets because of the geometric separation between sources and receivers, and they adversely affect subsequent seismic data processing and imaging. The absence of data in the near-offset region hinders accurate seismic imaging, so reconstructing the missing near-offset information is crucial for mitigating the influence of seismic multiples, particularly in offshore surveys, where the impact of multiple reflections is relatively pronounced. Conventionally, various interpolation methods based on the Radon transform have been proposed to address the near-offset data gap. However, these methods have several limitations, which has led to the recent emergence of deep-learning (DL)-based approaches as alternatives. In this study, we conducted an in-depth analysis of two representative DL-based studies to identify the challenges that future work on near-offset reconstruction must address. Furthermore, through field-data experiments, we precisely analyze the limitations encountered when applying previous DL-based trace interpolation techniques to the near-offset setting. Consequently, we suggest that near-offset data gaps should be approached by extrapolation rather than interpolation.
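
The interpolation-versus-extrapolation distinction comes down to whether the missing traces lie inside or outside the recorded offset range. A toy sketch of framing the near-offset gap as an extrapolation target follows; the array sizes and the random gather are placeholders, not field data:

```python
import numpy as np

# Toy shot gather: traces indexed by offset. The nearest offsets are missing,
# so the targets lie *outside* the recorded range (extrapolation, not interpolation).
n_offsets, n_samples, n_missing = 64, 512, 8
gather = np.random.randn(n_offsets, n_samples)  # stand-in for field traces

recorded = gather[n_missing:]   # offsets actually acquired at sea
target = gather[:n_missing]     # near-offset traces to be reconstructed

# A DL model would learn the mapping recorded -> target; the split above,
# not the model itself, is what makes the task extrapolation.
X_train, y_train = recorded[np.newaxis], target[np.newaxis]
print(X_train.shape, y_train.shape)  # (1, 56, 512), (1, 8, 512)
```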