• Title/Summary/Keyword: E-Learning Challenges

Deep Neural Network-Based Critical Packet Inspection for Improving Traffic Steering in Software-Defined IoT

  • Tam, Prohim; Math, Sa; Kim, Seokhoon
    • Journal of Internet Computing and Services, v.22 no.6, pp.1-8, 2021
  • With the rapid growth of intelligent devices and communication technologies, the 5G network environment has become more heterogeneous and complex in terms of service management and orchestration. The 5G architecture requires supporting technologies to handle the existing challenges of improving Quality of Service (QoS) and Quality of Experience (QoE) performance. Among these challenges, traffic steering is a key element that requires an optimal solution for smart guidance, control, and system reliability. Mobile edge computing (MEC), software-defined networking (SDN), network functions virtualization (NFV), and deep learning (DL) play complementary roles in providing flexible computation and extensible flow-rule management in this setting. The proposed system provides accurate flow recommendation, centralized control, and reliable distributed connectivity based on inspection of packet conditions. When deployed, each packet is classified and directed to request from the optimal destination matching its preferences and conditions. To evaluate the proposed scheme, network simulation software was used to capture end-to-end QoS performance metrics. SDN flow-rule installation was tested to illustrate the control actions that follow from the DL-based output. The intelligent steering of network traffic is configured cooperatively in the SDN controller and NFV orchestrator, yielding a range of benefits for massive real-time Internet of Things (IoT) performance.
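
The abstract above does not spell out the classifier itself; the following minimal Keras sketch only illustrates how a small DNN could map per-packet features to a steering decision that an SDN controller then turns into a flow rule. The feature set, destination classes, and synthetic data are assumptions made for illustration, not details from the paper.

```python
# Minimal sketch (not the paper's implementation): a small DNN that maps
# hypothetical per-packet features to a steering decision, whose output an
# SDN controller could translate into flow rules.
import numpy as np
import tensorflow as tf

NUM_FEATURES = 4          # e.g., packet size, inter-arrival time, protocol id, QoS class
NUM_DESTINATIONS = 3      # e.g., edge server, regional MEC node, cloud

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FEATURES,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(NUM_DESTINATIONS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic training data stands in for labelled packet traces.
X = np.random.rand(1000, NUM_FEATURES).astype("float32")
y = np.random.randint(0, NUM_DESTINATIONS, size=1000)
model.fit(X, y, epochs=3, batch_size=32, verbose=0)

# Classify one incoming packet and emit a placeholder "flow rule".
packet = np.random.rand(1, NUM_FEATURES).astype("float32")
destination = int(np.argmax(model.predict(packet, verbose=0)))
print(f"install flow rule: forward matching traffic to destination {destination}")
```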

Real-time RL-based 5G Network Slicing Design and Traffic Model Distribution: Implementation for V2X and eMBB Services

  • WeiJian Zhou; Azharul Islam; KyungHi Chang
    • KSII Transactions on Internet and Information Systems (TIIS), v.17 no.9, pp.2573-2589, 2023
  • As 5G mobile systems carry multiple services and applications, supporting numerous user and application types with varying quality-of-service requirements within a single physical network infrastructure is the primary problem in constructing 5G networks. Radio Access Network (RAN) slicing is introduced as a way to solve these challenges. This research focuses on optimizing RAN slices within a single physical cell for vehicle-to-everything (V2X) and enhanced mobile broadband (eMBB) UEs, highlighting the importance of adept resource management and allocation for the evolving landscape of 5G services. We put forth two distinct strategies: offline network slicing, also referred to as standard network slicing, and online reinforcement learning (RL) network slicing. Both strategies aim to maximize network efficiency by gathering network model characteristics and augmenting radio resources for eMBB and V2X UEs. Compared to traditional network slicing, RL network slicing shows better performance in the allocation and utilization of UE resources. These steps are taken to adapt to fluctuating traffic loads using RL strategies, with the ultimate objective of improving the efficiency of generic 5G services.
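
The paper's RL formulation is not given in the abstract; as a rough illustration of online RL for slice resource allocation, the sketch below uses tabular Q-learning to split a fixed budget of resource blocks between a V2X and an eMBB slice. The state space, reward shaping, and traffic model are invented assumptions.

```python
# Conceptual sketch only (not the paper's formulation): tabular Q-learning
# that picks how many of 10 resource blocks to give the V2X slice each step.
import numpy as np

rng = np.random.default_rng(0)
TOTAL_RB = 10                       # resource blocks shared by the two slices
actions = np.arange(TOTAL_RB + 1)   # RBs assigned to V2X; the rest go to eMBB
n_states = 3                        # low / medium / high V2X traffic load
Q = np.zeros((n_states, len(actions)))
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def reward(state, v2x_rb):
    v2x_demand = [2, 5, 8][state]          # invented demand per load level
    embb_rb = TOTAL_RB - v2x_rb
    v2x_satisfied = min(v2x_rb, v2x_demand) / v2x_demand
    embb_utility = embb_rb / TOTAL_RB
    return 2.0 * v2x_satisfied + embb_utility   # weight latency-critical V2X higher

state = rng.integers(n_states)
for step in range(5000):
    a = rng.integers(len(actions)) if rng.random() < epsilon else int(np.argmax(Q[state]))
    r = reward(state, actions[a])
    next_state = rng.integers(n_states)          # traffic load fluctuates randomly
    Q[state, a] += alpha * (r + gamma * Q[next_state].max() - Q[state, a])
    state = next_state

for s in range(n_states):
    print(f"load level {s}: give {actions[int(np.argmax(Q[s]))]} RBs to V2X")
```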

Enhancing E-commerce Security: A Comprehensive Approach to Real-Time Fraud Detection

  • Sara Alqethami; Badriah Almutanni; Walla Aleidarousr
    • International Journal of Computer Science & Network Security, v.24 no.4, pp.1-10, 2024
  • In the era of big data, the growth of e-commerce transactions brings forth both opportunities and risks, including the threat of data theft and fraud. To address these challenges, an automated real-time fraud detection system leveraging machine learning was developed. Four algorithms (Decision Tree, Naïve Bayes, XGBoost, and Neural Network) were compared using a dataset from a clothing website that encompassed both legitimate and fraudulent transactions. The dataset was imbalanced, with 9.3% fraudulent and 90.07% legitimate transactions. Performance evaluation metrics, including recall, precision, F1 score, and AUC ROC, were employed to assess the effectiveness of each algorithm. XGBoost emerged as the top-performing model, achieving an accuracy score of 95.85%. The proposed system proves to be a robust defense mechanism against fraudulent activities in e-commerce, thereby enhancing security and instilling trust in online transactions.
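
As a hedged illustration of the kind of pipeline the abstract describes, the sketch below trains an XGBoost classifier on a synthetic imbalanced dataset and reports the same metrics the paper lists (recall, precision, F1, AUC ROC). The real clothing-website dataset is not available here, so scikit-learn's make_classification stands in, and the hyperparameters are arbitrary.

```python
# Illustrative sketch (not the authors' pipeline): XGBoost on a synthetic
# imbalanced dataset, evaluated with the metrics named in the abstract.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score, precision_score, f1_score, roc_auc_score
from xgboost import XGBClassifier

# Roughly 10% "fraud" to mimic the imbalance described in the abstract.
X, y = make_classification(n_samples=20000, n_features=20, weights=[0.9, 0.1],
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    test_size=0.2, random_state=42)

# scale_pos_weight counteracts the class imbalance.
model = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1,
                      scale_pos_weight=(y_train == 0).sum() / (y_train == 1).sum())
model.fit(X_train, y_train)

pred = model.predict(X_test)
proba = model.predict_proba(X_test)[:, 1]
print("recall   :", recall_score(y_test, pred))
print("precision:", precision_score(y_test, pred))
print("F1       :", f1_score(y_test, pred))
print("AUC ROC  :", roc_auc_score(y_test, proba))
```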

Thermal imaging and computer vision technologies for the enhancement of pig husbandry: a review

  • Md Nasim Reza; Md Razob Ali; Samsuzzaman; Md Shaha Nur Kabir; Md Rejaul Karim; Shahriar Ahmed; Hyunjin Kyoung; Gookhwan Kim; Sun-Ok Chung
    • Journal of Animal Science and Technology, v.66 no.1, pp.31-56, 2024
  • Pig farming, a vital industry, necessitates proactive measures for early disease detection and crush symptom monitoring to ensure optimum pig health and safety. This review explores advanced thermal sensing technologies and computer vision-based thermal imaging techniques employed for pig disease and piglet crush symptom monitoring on pig farms. Infrared thermography (IRT) is a non-invasive and efficient technology for measuring pig body temperature, providing advantages such as non-destructive, long-distance, and high-sensitivity measurements. Unlike traditional methods, IRT offers a quick and labor-saving approach to acquiring physiological data impacted by environmental temperature, crucial for understanding pig body physiology and metabolism. IRT aids in early disease detection, respiratory health monitoring, and evaluating vaccination effectiveness. Challenges include body surface emissivity variations affecting measurement accuracy. Thermal imaging and deep learning algorithms are used for pig behavior recognition, with the dorsal plane effective for stress detection. Remote health monitoring through thermal imaging, deep learning, and wearable devices facilitates non-invasive assessment of pig health, minimizing medication use. Integration of advanced sensors, thermal imaging, and deep learning shows potential for disease detection and improvement in pig farming, but challenges and ethical considerations must be addressed for successful implementation. This review summarizes the state-of-the-art technologies used in the pig farming industry, including computer vision algorithms such as object detection, image segmentation, and deep learning techniques. It also discusses the benefits and limitations of IRT technology, providing an overview of the current research field. This study provides valuable insights for researchers and farmers regarding IRT application in pig production, highlighting notable approaches and the latest research findings in this field.
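
Purely as a toy illustration of the IRT screening idea summarized above (and not any method from the review), the snippet below flags pixels in a simulated thermal frame whose surface temperature exceeds an arbitrary threshold; the frame, hot region, and 40 °C cut-off are all invented.

```python
# Toy illustration of infrared-thermography screening: given a thermal frame
# expressed directly in degrees Celsius, flag any region above a threshold.
import numpy as np

rng = np.random.default_rng(1)
frame = rng.normal(loc=36.5, scale=0.4, size=(120, 160))   # simulated thermal image
frame[40:60, 70:100] += 4.0                                 # simulated hot region

FEVER_THRESHOLD_C = 40.0                                    # illustrative cut-off
hot_mask = frame > FEVER_THRESHOLD_C
if hot_mask.any():
    ys, xs = np.nonzero(hot_mask)
    print(f"hot region detected: {hot_mask.sum()} pixels, "
          f"max {frame.max():.1f} °C around ({ys.mean():.0f}, {xs.mean():.0f})")
else:
    print("no region above threshold")
```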

Time Series Crime Prediction Using a Federated Machine Learning Model

  • Salam, Mustafa Abdul; Taha, Sanaa; Ramadan, Mohamed
    • International Journal of Computer Science & Network Security, v.22 no.4, pp.119-130, 2022
  • Crime is a common social problem that affects the quality of life. As the number of crimes increases, it is necessary to build a model to predict the number of crimes that may occur in a given period, identify the characteristics of a person who may commit a particular crime, and identify places where a particular crime may occur. Data privacy is the main challenge that organizations face when building this type of predictive model. Federated learning (FL) is a promising approach that overcomes data security and privacy challenges, as it enables organizations to build a machine learning model based on distributed datasets without sharing raw data or violating data privacy. In this paper, a federated long short-term memory (LSTM) model is proposed and compared with a traditional LSTM model. The proposed model is developed using TensorFlow Federated (TFF) and the Keras API to predict the number of crimes, and is applied to the Boston crime dataset. The model's parameters are fine-tuned to obtain minimum loss and maximum accuracy. The federated LSTM model is found to achieve lower loss and better accuracy than the traditional LSTM model, at the cost of a longer training time.
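
The authors implement their model with TensorFlow Federated; since TFF API details vary by release, the conceptual sketch below instead shows the federated averaging idea with plain Keras: each client fine-tunes a local LSTM on its own series and only the averaged weights are shared. The synthetic monthly crime counts and three-client split are assumptions for illustration, not the paper's setup.

```python
# Conceptual FedAvg sketch with a Keras LSTM (the paper uses TensorFlow
# Federated; this plain-Keras loop only illustrates the averaging idea).
import numpy as np
import tensorflow as tf

WINDOW = 12   # time steps of history used to predict the next value

def make_model():
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW, 1)),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(1),
    ])

def make_client_data(seed):
    rng = np.random.default_rng(seed)
    series = rng.poisson(lam=50, size=200).astype("float32")   # synthetic crime counts
    X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])[..., None]
    y = series[WINDOW:]
    return X, y

clients = [make_client_data(s) for s in range(3)]   # e.g., three police districts
global_model = make_model()

for round_num in range(5):                          # federated rounds
    client_weights = []
    for X, y in clients:
        local = make_model()
        local.compile(optimizer="adam", loss="mse")
        local.set_weights(global_model.get_weights())   # start from the global model
        local.fit(X, y, epochs=2, batch_size=16, verbose=0)
        client_weights.append(local.get_weights())
    # FedAvg: average each weight tensor across clients (equal weighting here).
    averaged = [np.mean(layer, axis=0) for layer in zip(*client_weights)]
    global_model.set_weights(averaged)

print("federated rounds complete; global LSTM updated without sharing raw data")
```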

Vision-based Predictive Model on Particulates via Deep Learning

  • Kim, SungHwan; Kim, Songi
    • Journal of Electrical Engineering and Technology, v.13 no.5, pp.2107-2115, 2018
  • In recent years, high concentrations of particulate matter (i.e., fine dust) in South Korea have increasingly evoked considerable public health concerns. It is intractable to track and report PM10 measurements to the public on a real-time basis. Even worse, such records merely amount to averaged particulate concentrations for particular regions. Under these circumstances, people are at risk from rapidly dispersing air pollution. To address this challenge, we build a deep learning model to predict the concentration of particulates (PM10). The proposed method learns a binary decision rule from video sequences to predict in real time whether the level of particulates (PM10) is harmful (> 80 µg/m³) or not. To the best of our knowledge, no vision-based PM10 measurement method has previously been proposed in atmospheric research. In experimental studies, the proposed model is found to outperform other existing algorithms by virtue of convolutional neural networks. In this regard, we believe this vision-based predictive model has strong potential to handle upcoming challenges related to particulate measurement.
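
To make the approach concrete, here is a minimal, assumption-laden Keras sketch of a frame-level binary classifier in the same spirit: a small CNN that outputs the probability that a frame corresponds to a harmful PM10 level (> 80 µg/m³). The architecture, input size, and synthetic frames are illustrative only and not the authors' network.

```python
# Minimal sketch (not the paper's model): a small CNN that labels a camera
# frame as "harmful" (PM10 > 80 µg/m³) or not.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # P(PM10 > 80 µg/m³)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Synthetic stand-in for labelled video frames.
frames = np.random.rand(200, 64, 64, 3).astype("float32")
labels = np.random.randint(0, 2, size=200)
model.fit(frames, labels, epochs=2, batch_size=16, verbose=0)
print("harmful probability:", float(model.predict(frames[:1], verbose=0)[0, 0]))
```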

Using machine learning for anomaly detection on a system-on-chip under gamma radiation

  • Eduardo Weber Wachter; Server Kasap; Sefki Kolozali; Xiaojun Zhai; Shoaib Ehsan; Klaus D. McDonald-Maier
    • Nuclear Engineering and Technology, v.54 no.11, pp.3985-3995, 2022
  • The emergence of new nanoscale technologies has imposed significant challenges on designing reliable electronic systems in radiation environments. Some radiation effects, such as Total Ionizing Dose (TID), can cause permanent damage to such nanoscale electronic devices, and current state-of-the-art approaches to tackling TID rely on expensive radiation-hardened devices. This paper focuses on a novel and different approach: using machine learning algorithms on consumer-grade Field Programmable Gate Arrays (FPGAs) to monitor TID effects so that boards can be replaced before they stop working. The research challenge is to anticipate when a board will fail completely due to TID effects. We observed internal measurements of FPGA boards under gamma radiation and used three different anomaly detection machine learning (ML) algorithms to detect anomalies in the sensor measurements. The statistical results show a highly significant relationship between gamma radiation exposure levels and the board measurements. Moreover, our anomaly detection results show that a One-Class SVM with a Radial Basis Function kernel achieves an average recall score of 0.95. All anomalies can be detected before the boards become entirely inoperative, i.e., before voltages drop to zero, as confirmed with a sanity check.
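
A minimal sketch of the reported detector, under the assumption of simple two-dimensional telemetry features: scikit-learn's OneClassSVM with an RBF kernel is fitted on normal board readings and used to flag degraded ones. The synthetic voltage/current values below replace the actual FPGA measurements.

```python
# Sketch of the reported approach: One-Class SVM with RBF kernel trained on
# "healthy" board measurements only, then used to flag anomalous readings.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
normal = rng.normal(loc=[1.0, 0.8], scale=0.02, size=(500, 2))    # nominal readings
degraded = rng.normal(loc=[0.85, 0.6], scale=0.05, size=(50, 2))  # radiation-degraded

clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
clf.fit(normal)                                    # train on normal operation only

X_test = np.vstack([normal[:100], degraded])
y_true = np.array([0] * 100 + [1] * 50)            # 1 = anomaly
y_pred = (clf.predict(X_test) == -1).astype(int)   # OneClassSVM returns -1 for outliers
print("anomaly recall:", recall_score(y_true, y_pred))
```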

Impact of Online Learning in India: A Survey of University Students during the COVID-19 Crisis

  • Goswami, Manash Pratim; Thanvi, Jyoti; Padhi, Soubhagya Ranjan
    • Asian Journal for Public Opinion Research, v.9 no.4, pp.331-351, 2021
  • The unprecedented situation of COVID-19 caused the government of India to instruct educational institutions to switch to an online mode to mitigate students' losses due to the pandemic. The present study explores the impact of online learning introduced as a stop-gap arrangement during the pandemic in India. A survey (N=289) was conducted via Facebook and WhatsApp from June 1-15, 2020 to understand the accessibility and effectiveness of online learning and the constraints that students of higher education across the country faced during the peak of the pandemic. The analysis and interpretation of the data revealed that students acclimatized to online learning in a short span of time, with only 33.21% saying they were not satisfied with the online learning mode. However, the sudden shift to online education has presented more challenges for socially and economically marginalized groups, including Scheduled Caste (SC), Scheduled Tribe (ST), and Other Backward Class (OBC) students, females, and students in rural areas, due to factors such as the price of high-speed Internet (78.20% identified it as a barrier to online learning), insufficient infrastructure (23.52% needed to share their device frequently or very frequently), and poor Internet connectivity. According to 76.47% of respondents, the future of learning will be in "blended mode." A total of 88.92% of the respondents suggested that the government should provide high-quality video conferencing facilities free to students to mitigate the divide created by online education in an already divided society.

The Role of Information and Communication Technology to Combat COVID-19 Pandemic: Emerging Technologies, Recent Developments and Open Challenges

  • Arshad, Muhammad
    • International Journal of Computer Science & Network Security, v.21 no.4, pp.93-102, 2021
  • The world is facing an unprecedented economic, social, and political crisis with the spread of COVID-19. The coronavirus (COVID-19) and its global spread resulted in the World Health Organization declaring a pandemic. This deadly pandemic of the 21st century has spread its wings across the globe with an exponential increase in the number of cases in many countries. Developing and underdeveloped countries are struggling to counter the rapidly growing and widespread challenge of COVID-19, which has greatly influenced global economies; underdeveloped countries are more affected by its devastating impacts, especially on the lives of low-income populations. Information and communication technology (ICT) was particularly useful in spreading key emergency information and helping to maintain extensive social distancing. Updated information and testing results were published on national and local government websites. Mobile devices were used to support early testing and contact tracing. The government provided free smartphone apps that flagged infection hotspots with text alerts on testing and local cases. The purpose of this research work is to provide an in-depth overview of emerging technologies and recent ICT developments for combating the COVID-19 pandemic. Finally, the author highlights open challenges in order to give future research directions.

SHM data anomaly classification using machine learning strategies: A comparative study

  • Chou, Jau-Yu; Fu, Yuguang; Huang, Shieh-Kung; Chang, Chia-Ming
    • Smart Structures and Systems, v.29 no.1, pp.77-91, 2022
  • Various monitoring systems have been implemented in civil infrastructure to ensure structural safety and integrity. In long-term monitoring, these systems generate a large amount of data in which anomalies are not unusual and can pose unique challenges for structural health monitoring applications such as system identification and damage detection. Developing efficient techniques to recognize anomalies in monitoring data is therefore essential. In this study, several machine learning techniques are explored and implemented to detect and classify various types of data anomalies. A field dataset, consisting of one month of acceleration data obtained from a long-span cable-stayed bridge in China, is employed to examine the machine learning techniques for automated data anomaly detection. These techniques include a statistic-based pattern recognition network, a spectrogram-based convolutional neural network, an image-based time history convolutional neural network, an image-based time-frequency hybrid convolutional neural network (GoogLeNet), and the proposed ensemble neural network model. The ensemble model deliberately combines different machine learning models to enhance anomaly classification performance. The results show that all these techniques can successfully detect and classify six types of data anomalies (i.e., missing, minor, outlier, square, trend, and drift). Moreover, both the image-based time history convolutional neural network and GoogLeNet are further investigated for autonomous online anomaly classification and are found to classify anomalies with decent performance. In terms of accuracy, the proposed ensemble neural network model outperforms the other machine learning techniques. This study also evaluates the proposed ensemble neural network model on a blind test dataset; the results show that the ensemble model is effective for data anomaly detection and remains applicable as signal characteristics change over time.
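
As a rough sketch of the ensemble idea (not the paper's actual member networks), the snippet below soft-votes the class probabilities of several small Keras classifiers over placeholder segment features; the anomaly label set follows the six types listed in the abstract plus a normal class.

```python
# Sketch of the ensemble idea: average class probabilities from several
# independently trained classifiers to label an acceleration segment. The two
# small dense networks and synthetic features are placeholders only.
import numpy as np
import tensorflow as tf

CLASSES = ["normal", "missing", "minor", "outlier", "square", "trend", "drift"]

def make_member(seed):
    tf.keras.utils.set_random_seed(seed)
    m = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(32,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(len(CLASSES), activation="softmax"),
    ])
    m.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return m

X = np.random.rand(500, 32).astype("float32")          # stand-in segment features
y = np.random.randint(0, len(CLASSES), size=500)
members = [make_member(s) for s in (0, 1, 2)]
for m in members:
    m.fit(X, y, epochs=2, batch_size=32, verbose=0)

# Soft voting: average probabilities, then take the most likely class.
probs = np.mean([m.predict(X[:1], verbose=0) for m in members], axis=0)
print("predicted anomaly type:", CLASSES[int(np.argmax(probs))])
```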