• Title/Summary/Keyword: Address Data


The Effect of Chatbot Service Quality on Customer Satisfaction and Continuous Use Intention (챗봇 서비스품질이 고객만족과 지속사용의도에 미치는 영향)

  • Min Jeong KIM
    • Journal of Korea Artificial Intelligence Association / v.2 no.1 / pp.15-24 / 2024
  • This study examines the effect of chatbot service quality on customer satisfaction and continuous use intention. Data were collected over 13 days, from October 23 to November 5, 2023, through a survey of customers who had used chatbot services. Of the 572 questionnaires collected, 545 valid responses were used for analysis after excluding those answered insincerely or not meeting the purpose of the study. The analysis results are as follows. First, chatbot service quality had a partially significant effect on satisfaction. Second, customer satisfaction had a significant effect on continuous use intention. Therefore, to positively influence continuous use intention, marketing strategies should focus on chatbot service quality. In addition, research focusing on data analysis and performance evaluation is crucial for enhancing chatbot services, which calls for studies that address real-time changes. Through sophisticated data analysis and variable measurement, chatbot services can be effectively improved, leading to greater customer satisfaction.
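
To make the analysis concrete, a minimal sketch of the two-step regression implied by the abstract is given below, using simulated Likert-style responses; the dimension names (reliability, responsiveness, ease_of_use) and the data are illustrative assumptions, not the study's instrument.

```python
# Hypothetical sketch: (1) service-quality dimensions -> customer satisfaction,
# (2) customer satisfaction -> continuous use intention. Simulated data only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 545  # number of valid responses reported in the abstract

# Simulated 5-point Likert-style scores for three assumed quality dimensions.
df = pd.DataFrame({
    "reliability":    rng.integers(1, 6, n),
    "responsiveness": rng.integers(1, 6, n),
    "ease_of_use":    rng.integers(1, 6, n),
})
df["satisfaction"] = (0.4 * df["reliability"] + 0.3 * df["ease_of_use"]
                      + rng.normal(0, 0.5, n))
df["continuous_use"] = 0.6 * df["satisfaction"] + rng.normal(0, 0.5, n)

# Step 1: which quality dimensions significantly predict satisfaction?
m1 = sm.OLS(df["satisfaction"],
            sm.add_constant(df[["reliability", "responsiveness", "ease_of_use"]])).fit()
# Step 2: does satisfaction predict continuous use intention?
m2 = sm.OLS(df["continuous_use"], sm.add_constant(df[["satisfaction"]])).fit()

print(m1.summary())  # "partially significant" = only some coefficients significant
print(m2.summary())
```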

Network Intrusion Detection Using Transformer and BiGRU-DNN in Edge Computing

  • Huijuan Sun
    • Journal of Information Processing Systems / v.20 no.4 / pp.458-476 / 2024
  • To address the class imbalance in network traffic data, which degrades network intrusion detection performance, a combined framework based on the Transformer is proposed. First, Tomek Links, SMOTE, and WGAN are used to preprocess the data and mitigate the class-imbalance problem. Second, the Transformer encodes the traffic data to capture correlations within the network traffic. Finally, a hybrid deep learning model combining a bidirectional gated recurrent unit (BiGRU) and a deep neural network (DNN) is proposed: the BiGRU extracts long-range dependency features, the DNN extracts deep-level features, and a softmax layer completes the classification. Experiments were conducted on the NSL-KDD, UNSW-NB15, and CICIDS2017 datasets, where the proposed model achieved detection accuracies of 99.72%, 84.86%, and 99.89%, respectively. Compared with other recent deep learning models, it effectively improves intrusion detection performance and thereby the communication security of network data.
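
A minimal PyTorch sketch of the described pipeline follows, assuming SMOTE + Tomek Links resampling via imbalanced-learn and toy data; layer sizes and the omission of the WGAN step are simplifications, not the authors' exact architecture.

```python
# Sketch: resample imbalanced traffic, encode with a Transformer, extract long-range
# dependencies with a BiGRU, then classify with a DNN head (softmax applied in the loss).
import numpy as np
import torch
import torch.nn as nn
from imblearn.combine import SMOTETomek

class TransformerBiGRUDNN(nn.Module):
    def __init__(self, n_features, n_classes, d_model=32, seq_len=8):
        super().__init__()
        self.seq_len = seq_len
        self.proj = nn.Linear(n_features // seq_len, d_model)  # split features into a short sequence
        enc_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.bigru = nn.GRU(d_model, 64, batch_first=True, bidirectional=True)
        self.dnn = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, x):                      # x: (batch, n_features)
        b = x.size(0)
        x = x.view(b, self.seq_len, -1)        # (batch, seq_len, n_features // seq_len)
        x = self.encoder(self.proj(x))         # correlations across feature groups
        _, h = self.bigru(x)                   # h: (2, batch, 64), both directions
        h = torch.cat([h[0], h[1]], dim=1)
        return self.dnn(h)                     # logits

# Toy imbalanced data standing in for NSL-KDD-style records (41 features padded to 48).
X = np.random.randn(1000, 48).astype(np.float32)
y = np.array([0] * 950 + [1] * 50)
X_res, y_res = SMOTETomek(random_state=0).fit_resample(X, y)   # oversample + clean boundary

model = TransformerBiGRUDNN(n_features=48, n_classes=2)
logits = model(torch.tensor(X_res[:16], dtype=torch.float32))
loss = nn.CrossEntropyLoss()(logits, torch.tensor(y_res[:16], dtype=torch.long))
loss.backward()
print(logits.shape, float(loss))
```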

Hyper-encryption Scheme for Data Confidentiality in Wireless Broadband (WiBro) Networks

  • Hamid, Abdul;Hong, Choong-Seon
    • Proceedings of the Korea Information Processing Society Conference / 2007.05a / pp.1096-1097 / 2007
  • We address data confidentiality for wireless broadband (WiBro) networks. Because the WiBro channel is wireless in nature, it is exposed to both passive and active attacks: a passive attacker, for example, tries to decrypt traffic through statistical analysis, while an active attacker modifies traffic or injects new traffic from unauthorized mobile stations. Due to high mobility, frequent session key distribution becomes a bottleneck for the mobile stations. In WiBro, communication takes place both between mobile stations and the base station and between mobile stations themselves. The goal is to ensure data confidentiality while keeping overhead minimal for resource-constrained mobile stations. In this paper, we propose a security framework based on the concept of hyper-encryption to provide data confidentiality for wireless broadband networks.
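
For readers unfamiliar with hyper-encryption, the sketch below illustrates the underlying idea with Python's standard library: endpoints that share a short secret extract a one-time pad from a long public random stream, avoiding frequent session key redistribution. It is a conceptual illustration only, not the framework proposed in the paper.

```python
# Both endpoints hold a short shared secret and use it to select bytes from a long
# public random stream (e.g., broadcast by the base station); the selection forms a pad.
import hashlib
import secrets

PUBLIC_STREAM = secrets.token_bytes(1 << 16)   # stand-in for the public random broadcast

def derive_pad(shared_key: bytes, session_id: bytes, length: int) -> bytes:
    """Select pad bytes from the public stream at positions derived from the shared key."""
    pad = bytearray()
    counter = 0
    while len(pad) < length:
        digest = hashlib.sha256(shared_key + session_id + counter.to_bytes(4, "big")).digest()
        for i in range(0, len(digest), 4):
            pos = int.from_bytes(digest[i:i + 4], "big") % len(PUBLIC_STREAM)
            pad.append(PUBLIC_STREAM[pos])
            if len(pad) == length:
                break
        counter += 1
    return bytes(pad)

def xor_encrypt(message: bytes, pad: bytes) -> bytes:
    return bytes(m ^ p for m, p in zip(message, pad))

# Mobile station and base station derive the same pad without redistributing session keys.
key, sid = b"shared-secret", b"session-0001"
msg = b"WiBro payload"
ct = xor_encrypt(msg, derive_pad(key, sid, len(msg)))
assert xor_encrypt(ct, derive_pad(key, sid, len(msg))) == msg
```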

Secure and Efficient Storage of Video Data in a CCTV Environment

  • Kim, Won-Bin;Lee, Im-Yeong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.6 / pp.3238-3257 / 2019
  • Closed-circuit television (CCTV) technology continuously captures and stores video streams. Users are typically required by policy to retain all captured video for a certain period, so increasing the number of CCTV operation cycles and camera positions expands the amount of data to be stored. However, expanding the available storage space for video data incurs increased costs. In recent years, this problem has been addressed with cloud storage solutions, which enable multiple users and devices to access and store data simultaneously. Even so, the large amount of data demands a vast storage space, and cloud storage administrators need a way to store data more efficiently. To save storage space, deduplication technology has been proposed to prevent the same data from being stored more than once. Because cloud storage is hosted on remote servers, however, data encryption must also be applied to prevent data exposure. Although deduplication techniques for encrypted data have been studied, they have exhibited various security vulnerabilities. We address this problem by tackling issues such as poison attacks, property forgery, and ownership management while removing redundant data and handling the data more securely.
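
The sketch below illustrates a convergent-encryption-style baseline for deduplicating encrypted video segments: the key is derived from the content itself, so identical segments encrypt identically and are stored once. This is not the authors' scheme; on its own it is exactly the kind of construction whose poison-attack and ownership issues the paper addresses.

```python
# Content-derived keys make encryption deterministic, enabling deduplication of ciphertexts.
import hashlib

STORE = {}    # dedup index: tag -> ciphertext (stored once)
OWNERS = {}   # tag -> set of owners allowed to retrieve the segment

def keystream(key: bytes, length: int) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def upload(owner: str, segment: bytes) -> str:
    key = hashlib.sha256(segment).digest()              # content-derived key
    tag = hashlib.sha256(key).hexdigest()               # dedup tag (hash of the key)
    ct = bytes(a ^ b for a, b in zip(segment, keystream(key, len(segment))))
    if tag not in STORE:                                # store the ciphertext only once
        STORE[tag] = ct
    OWNERS.setdefault(tag, set()).add(owner)
    return tag

def download(owner: str, tag: str, key: bytes) -> bytes:
    assert owner in OWNERS[tag], "ownership check"
    ct = STORE[tag]
    return bytes(a ^ b for a, b in zip(ct, keystream(key, len(ct))))

seg = b"video-segment-bytes" * 100
t1 = upload("camera-A", seg)
t2 = upload("camera-B", seg)                            # duplicate detected, stored once
assert t1 == t2 and len(STORE) == 1
assert download("camera-B", t1, hashlib.sha256(seg).digest()) == seg
```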

Big IoT Healthcare Data Analytics Framework Based on Fog and Cloud Computing

  • Alshammari, Hamoud;El-Ghany, Sameh Abd;Shehab, Abdulaziz
    • Journal of Information Processing Systems / v.16 no.6 / pp.1238-1249 / 2020
  • Throughout the world, aging populations and doctor shortages have driven increasing demand for smart healthcare systems. Recently, these systems have benefited from the evolution of the Internet of Things (IoT), big data, and machine learning. However, these advances generate large amounts of data, making healthcare data analysis a major issue. Such data have complex properties, including high dimensionality, irregularity, and sparsity, which make efficient processing difficult to implement; these challenges are met by big data analytics. In this paper, we propose an analytics framework for big healthcare data collected either from IoT wearable devices or from archived patient medical images. The proposed framework addresses the data heterogeneity problem with middleware placed between the heterogeneous data sources and MapReduce Hadoop clusters. Furthermore, it combines fog computing and cloud platforms to handle the problems faced in online and offline data processing, data storage, and data classification. Additionally, it guarantees robust and secure knowledge extraction from patient medical data.
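
A toy sketch of the middleware idea follows: heterogeneous wearable and imaging records are normalized into one schema and then aggregated with a MapReduce-style map/reduce pair before being pushed to the cluster. Field names and records are illustrative assumptions, not the paper's schema.

```python
# Normalize heterogeneous sources at the fog layer, then run a map/reduce aggregation.
from collections import defaultdict

wearable_records = [{"patient": "p1", "hr": 91}, {"patient": "p1", "hr": 88},
                    {"patient": "p2", "hr": 60}]
image_records = [{"pid": "p1", "modality": "xray", "finding_score": 0.8},
                 {"pid": "p2", "modality": "mri", "finding_score": 0.1}]

def normalize(record):
    """Middleware step: map each source's fields onto a common (patient, metric, value) form."""
    if "hr" in record:
        return {"patient": record["patient"], "metric": "heart_rate", "value": record["hr"]}
    return {"patient": record["pid"], "metric": f"{record['modality']}_score",
            "value": record["finding_score"]}

def map_phase(records):
    for r in map(normalize, records):
        yield (r["patient"], r["metric"]), r["value"]

def reduce_phase(pairs):
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return {key: sum(vals) / len(vals) for key, vals in grouped.items()}   # per-patient averages

print(reduce_phase(map_phase(wearable_records + image_records)))
```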

Big Data Analytics Case Study from the Marketing Perspective : Emphasis on Banking Industry (마케팅 관점으로 본 빅 데이터 분석 사례연구 : 은행업을 중심으로)

  • Park, Sung Soo;Lee, Kun Chang
    • Journal of Information Technology Services / v.17 no.2 / pp.207-218 / 2018
  • Recently, applying big data analytics to extract essential knowledge from customer databases has become a major trend in the banking industry. This trend rests on the capability to analyze big data with powerful analytics software and on recognition of the value of big data analysis results. However, there is still a need for more systematic theory and mechanisms for adopting a big data analytics approach in the banking industry. In particular, few studies present a practical case in which big data analytics is successfully applied from the marketing perspective. Therefore, this study analyzes a target marketing case in the banking industry from the viewpoint of big data analytics. The target database holds about 3.5 million customers and their transaction records accumulated over three years. Practical implications are derived from the marketing perspective, and detailed processes and related field test results are reported. It proved critical for big data analysts to consider Veracity and Value in addition to big data's traditional 3Vs (Volume, Velocity, and Variety) so that more meaningful business insights can be extracted from the results.

RPIDA: Recoverable Privacy-preserving Integrity-assured Data Aggregation Scheme for Wireless Sensor Networks

  • Yang, Lijun;Ding, Chao;Wu, Meng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.12 / pp.5189-5208 / 2015
  • To address the tension between data aggregation and data security in wireless sensor networks (WSNs), a Recoverable Privacy-preserving Integrity-assured Data Aggregation (RPIDA) scheme is proposed based on privacy homomorphism and an aggregate message authentication code. The proposed scheme provides both end-to-end privacy and data integrity for data aggregation in WSNs. In our scheme, the base station can recover the individual sensing data collected by every sensor even after the data have been aggregated, and can therefore verify the integrity of all sensing data. Moreover, with these individual sensing data, the base station can perform any further operations on them, so RPIDA is not limited to particular types of aggregation functions. The security analysis indicates that our proposal is resilient against typical attacks and can detect and locate malicious nodes within a certain range. The performance analysis shows that the proposed scheme has a remarkable advantage over other asymmetric schemes in terms of computation and communication overhead. To evaluate the performance and feasibility of our proposal, a prototype implementation is presented on the TinyOS platform. The experimental results demonstrate that RPIDA is feasible and efficient for resource-constrained sensor nodes.
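
The sketch below illustrates the "recoverable" flavor of additively homomorphic aggregation with a simple digit-packing-plus-masking construction: the aggregator only adds ciphertexts, yet the base station can recover every individual reading. Parameters are toy values and the aggregate MAC step is omitted, so this is not the RPIDA construction itself.

```python
# Each sensor packs its reading into its own digit slot and adds a key-derived mask;
# the base station removes all masks and unpacks the individual readings.
import hashlib

NUM_SENSORS = 4
BASE = 10 ** 4            # each reading must fit below BASE
MODULUS = BASE ** NUM_SENSORS

def mask(key: bytes, epoch: int) -> int:
    return int.from_bytes(hashlib.sha256(key + epoch.to_bytes(4, "big")).digest(), "big") % MODULUS

def sensor_encrypt(sensor_id: int, reading: int, key: bytes, epoch: int) -> int:
    packed = reading * (BASE ** sensor_id)          # own digit slot
    return (packed + mask(key, epoch)) % MODULUS    # additive masking

def aggregate(ciphertexts):                         # aggregator never sees plaintexts
    return sum(ciphertexts) % MODULUS

def base_station_recover(aggregate_ct, keys, epoch):
    total = (aggregate_ct - sum(mask(k, epoch) for k in keys)) % MODULUS
    return [(total // (BASE ** i)) % BASE for i in range(NUM_SENSORS)]

keys = [bytes([i]) * 16 for i in range(NUM_SENSORS)]
readings = [217, 198, 305, 250]
epoch = 7
cts = [sensor_encrypt(i, r, keys[i], epoch) for i, r in enumerate(readings)]
assert base_station_recover(aggregate(cts), keys, epoch) == readings
```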

Mitigating Data Imbalance in Credit Prediction using the Diffusion Model (Diffusion Model을 활용한 신용 예측 데이터 불균형 해결 기법)

  • Sangmin Oh;Juhong Lee
    • Smart Media Journal / v.13 no.2 / pp.9-15 / 2024
  • In this paper, a Diffusion Multi-step Classifier (DMC) is proposed to address the class imbalance in credit prediction. DMC uses a diffusion model to generate continuous numerical data from credit prediction data and produces categorical data through a multi-step classifier. Compared with other synthetic-data generation algorithms, DMC produces data whose distribution is closer to that of the real data. In experiments using the generated data, the probability of predicting delinquencies increased by over 20%, and overall predictive accuracy improved by approximately 4%. These findings are expected to contribute significantly to reducing delinquency rates and increasing profits when applied in actual financial institutions.
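
A compact DDPM-style sketch of generating synthetic minority-class rows for an imbalanced credit dataset is given below, in the spirit of the approach described above. The tiny noise-prediction MLP, timestep handling, and two-feature toy data are assumptions, and the paper's multi-step classifier for categorical columns is not reproduced.

```python
# Train a small denoiser on minority-class rows, then sample new rows to rebalance classes.
import torch
import torch.nn as nn

T = 200
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)

class NoisePredictor(nn.Module):
    def __init__(self, n_features):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features + 1, 64), nn.ReLU(),
                                 nn.Linear(64, 64), nn.ReLU(),
                                 nn.Linear(64, n_features))
    def forward(self, x, t):                       # t in [0, 1] as a crude timestep embedding
        return self.net(torch.cat([x, t], dim=1))

# Toy minority-class rows (e.g., two scaled numeric features of delinquent customers).
minority = torch.randn(64, 2) * 0.3 + torch.tensor([1.0, -0.5])

model = NoisePredictor(2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):                           # forward noising + denoising loss
    t = torch.randint(0, T, (minority.size(0),))
    eps = torch.randn_like(minority)
    ab = alpha_bar[t].unsqueeze(1)
    x_t = ab.sqrt() * minority + (1 - ab).sqrt() * eps
    loss = nn.functional.mse_loss(model(x_t, t.unsqueeze(1) / T), eps)
    opt.zero_grad(); loss.backward(); opt.step()

@torch.no_grad()
def sample(n):                                     # reverse process: denoise pure noise step by step
    x = torch.randn(n, 2)
    for t in reversed(range(T)):
        tt = torch.full((n, 1), t / T)
        eps_hat = model(x, tt)
        x = (x - betas[t] / (1 - alpha_bar[t]).sqrt() * eps_hat) / alphas[t].sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)
    return x

synthetic_minority = sample(500)                   # append to the training set to rebalance classes
print(synthetic_minority.mean(dim=0))
```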

Hot Data Identification For Flash Based Storage Systems Considering Continuous Write Operation

  • Lee, Seung-Woo;Ryu, Kwan-Woo
    • Journal of the Korea Society of Computer and Information / v.22 no.2 / pp.1-7 / 2017
  • Recently, NAND flash memory has been rapidly replacing hard disk drives (HDDs) as a storage medium owing to advantages such as fast access speed, low power consumption, and portability. To use NAND flash memory in a computer system, a Flash Translation Layer (FTL) is indispensable. The FTL provides features such as address mapping, garbage collection, wear leveling, and hot data identification. In particular, hot data identification is an algorithm that identifies the pages whose data are updated frequently; identifying and managing hot data separately helps improve overall performance. The multi-hash framework (MHF), a known hot data identification technique, records the number of write operations in memory and evaluates the recorded counts to decide whether data are hot. However, counting write requests alone is not sufficient to judge a page as a hot data page. In this paper, we propose a hot data identification scheme that considers not only the number of write requests but also their persistence over time.
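
The sketch below combines multi-hash (count-min-style) write counting with a simple persistence check over decay windows, in the spirit of the proposal: a page is classified as hot only if its approximate write count exceeds a threshold and writes have recurred across recent windows. Table sizes, thresholds, and the decay policy are illustrative assumptions.

```python
# Multi-hash write counters plus a per-page persistence history over decay windows.
import hashlib

NUM_HASHES, TABLE_SIZE = 3, 1024
counters = [[0] * TABLE_SIZE for _ in range(NUM_HASHES)]
recent_windows = {}                   # page -> list of booleans, one per recent window
WINDOWS_KEPT, COUNT_THRESHOLD, PERSIST_THRESHOLD = 4, 8, 3

def _slots(page: int):
    for i in range(NUM_HASHES):
        h = hashlib.sha256(f"{i}:{page}".encode()).digest()
        yield i, int.from_bytes(h[:4], "big") % TABLE_SIZE

def record_write(page: int):
    for i, slot in _slots(page):
        counters[i][slot] += 1
    recent_windows.setdefault(page, [False] * WINDOWS_KEPT)[-1] = True

def end_window():
    """Decay: halve counters and shift the persistence history (called periodically)."""
    for table in counters:
        for j in range(TABLE_SIZE):
            table[j] >>= 1
    for page, hist in recent_windows.items():
        recent_windows[page] = hist[1:] + [False]

def estimated_count(page: int) -> int:
    return min(counters[i][slot] for i, slot in _slots(page))   # count-min style estimate

def is_hot(page: int) -> bool:
    frequent = estimated_count(page) >= COUNT_THRESHOLD
    persistent = sum(recent_windows.get(page, [])) >= PERSIST_THRESHOLD
    return frequent and persistent

# Page 7 is written repeatedly across windows; page 42 gets one burst and then goes cold.
for window in range(4):
    for _ in range(10):
        record_write(7)
    if window == 0:
        for _ in range(30):
            record_write(42)
    print(window, is_hot(7), is_hot(42))
    end_window()
```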

A Security-Enhanced Identity-Based Batch Provable Data Possession Scheme for Big Data Storage

  • Zhao, Jining;Xu, Chunxiang;Chen, Kefei
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.9 / pp.4576-4598 / 2018
  • In the big data age, flexible and affordable cloud storage services greatly enhance productivity for enterprises and individuals, but at the same time leave their outsourced data susceptible to integrity breaches. Provable Data Possession (PDP), a critical technology in this setting, enables data owners to efficiently verify the integrity of cloud data without downloading an entire copy. To address the challenging integrity problem of multiple clouds and multiple owners, an identity-based batch PDP scheme was presented at ProvSec 2016, which attempted to eliminate the public key certificate management issue and to reduce computation overheads in a secure, batched manner. In this paper, we first demonstrate that this scheme is insecure: any cloud that has deleted or modified outsourced data can efficiently pass integrity verification simply by using two arbitrary block-tag pairs of one data owner. Specifically, malicious clouds are able to fabricate integrity proofs by (1) universally forging valid tags and (2) recovering data owners' private keys. Second, to enhance security, we propose an improved scheme that withstands these attacks and prove its security under the CDH assumption in the random oracle model. Finally, based on simulations and overhead analysis, our batch scheme demonstrates better efficiency than an identity-based multi-cloud PDP scheme with single-owner effort.
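
As a point of reference, the sketch below shows a toy MAC-based spot-checking audit, the simplest baseline behind the PDP idea: the owner keeps only a key, and the cloud must return challenged blocks with valid tags. The paper's identity-based, pairing-based batch scheme, and the attacks on it, are far more involved and are not reproduced here.

```python
# Owner tags each block before outsourcing, then audits by challenging a random sample.
import hmac
import hashlib
import secrets
import random

def tag_block(key: bytes, index: int, block: bytes) -> bytes:
    # Binding the index into the MAC prevents the cloud from answering with a different block.
    return hmac.new(key, index.to_bytes(8, "big") + block, hashlib.sha256).digest()

# Data owner: split the file, tag every block, outsource blocks + tags, delete the local copy.
key = secrets.token_bytes(32)
blocks = [secrets.token_bytes(256) for _ in range(100)]
outsourced = {i: (b, tag_block(key, i, b)) for i, b in enumerate(blocks)}

def cloud_prove(challenge):                       # honest cloud returns the challenged pairs
    return {i: outsourced[i] for i in challenge}

def owner_verify(key, challenge, proof) -> bool:
    return all(hmac.compare_digest(tag_block(key, i, proof[i][0]), proof[i][1])
               for i in challenge)

challenge = random.sample(range(100), 10)         # audit a random 10% sample
assert owner_verify(key, challenge, cloud_prove(challenge))

# If the cloud silently drops a block, an audit that samples it will fail.
del outsourced[challenge[0]]
try:
    owner_verify(key, challenge, cloud_prove(challenge))
except KeyError:
    print("audit failed: challenged block missing")
```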