• Title/Summary/Keyword: Average Hash

Search Results: 40

SVC and CAS Combining Scheme for Support Multi-Device Watching Environment (다중기기 시청환경을 지원하기 위한 SVC와 CAS 결합 기법)

  • Son, Junggab;Oh, Heekuck;Kim, SangJin
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.23 no.6
    • /
    • pp.1111-1120
    • /
    • 2013
  • CAS, as used in IPTV or DTV, assumes an environment in which a single type of content is delivered over a single stream. By combining CAS with SVC, the same stream can serve users' various viewing devices. For such an environment, efficiency must be considered first, and a hierarchical key management method supporting billing policies by service level should be applied. This study examines the considerations in applying SVC to CAS and proposes an SVC encryption scheme for the CAS environment. The security of the proposed scheme rests on the security of CAS and of a one-way hash function. With the proposed scheme, scalability can be provided efficiently even for encrypted contents, and users can be billed according to picture quality. In addition, the test results show that SVC contents delivered by streaming can be protected against illegal use with an average overhead of less than 10%.
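
The hierarchical key management mentioned above is commonly built from a one-way hash function. The sketch below is a minimal illustration of that general idea, not the paper's exact scheme: the key for each lower SVC layer is derived by hashing the key of the layer above it, so a subscriber holding a given layer's key can derive the keys below it but not those above.

```python
import hashlib

def derive_layer_keys(top_key: bytes, num_layers: int) -> list:
    """Derive one key per SVC layer from the highest-layer key.

    Layer 0 is the base (lowest quality) layer.  A subscriber given keys[k]
    can recompute keys[k-1], ..., keys[0] by hashing, but cannot invert the
    hash to obtain the keys of higher layers.
    """
    keys = [top_key]
    for _ in range(num_layers - 1):
        keys.append(hashlib.sha256(keys[-1]).digest())
    keys.reverse()          # keys[0] = base layer, keys[-1] = top layer
    return keys

keys = derive_layer_keys(b"\x01" * 32, 3)
assert hashlib.sha256(keys[1]).digest() == keys[0]   # derivation works downward only
```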

Security Analysis of the PHOTON Lightweight Cryptosystem in the Wireless Body Area Network

  • Li, Wei;Liao, Linfeng;Gu, Dawu;Ge, Chenyu;Gao, Zhiyong;Zhou, Zhihong;Guo, Zheng;Liu, Ya;Liu, Zhiqiang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.1
    • /
    • pp.476-496
    • /
    • 2018
  • With the advancement and deployment of wireless communication techniques, the wireless body area network (WBAN) has emerged as a promising approach for e-healthcare that collects data on vital body parameters and movements, sensing and communicating wearable or implantable health-related information. To prevent malicious attacks and resource abuse, employing lightweight ciphers is an effective way to implement encryption, decryption, message authentication and digital signatures for WBAN security. As a typical lightweight cryptosystem built on an extended sponge-function framework, the PHOTON family is flexible enough to provide security for RFID and other highly constrained devices. In this paper, we propose a differential fault analysis that successfully breaks three flavors of the PHOTON family. The mathematical analysis and simulation results show that 33, 69 and 86 random faults on average are required to recover each message input for PHOTON-80/20/16, PHOTON-160/36/36 and PHOTON-224/32/32, respectively. This is the first result of breaking PHOTON with differential fault analysis, and it provides a new reference for the security analysis of lightweight hash functions with the same structure in the WBAN.
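
As a rough feel for the differential fault analysis principle applied in this paper (the real attack targets PHOTON's internal sponge permutation and needs the fault counts quoted above), the toy sketch below recovers a 4-bit XORed key by intersecting the key guesses consistent with observed output differences; the S-box and all parameters are illustrative only.

```python
import random

# Toy 4-bit S-box (PRESENT's S-box), used only to illustrate the principle.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def output_difference(x, key, fault):
    """Difference between a correct and a fault-injected S-box evaluation."""
    return SBOX[x ^ key] ^ SBOX[x ^ key ^ fault]

secret = random.randrange(16)
candidates = set(range(16))
for _ in range(8):                              # a handful of random faults
    x, fault = random.randrange(16), random.randrange(1, 16)
    diff = output_difference(x, secret, fault)  # what the attacker observes
    candidates &= {k for k in range(16)
                   if output_difference(x, k, fault) == diff}

assert secret in candidates                     # usually the only survivor
print(candidates)
```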

Patient Adaptive Pattern Matching Method for Premature Ventricular Contraction(PVC) Classification (조기심실수축(PVC) 분류를 위한 환자 적응형 패턴 매칭 기법)

  • Cho, Ik-Sung;Kwon, Hyeog-Soong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.9
    • /
    • pp.2021-2030
    • /
    • 2012
  • Premature ventricular contraction (PVC) is the most common arrhythmia and may lead to serious conditions such as ventricular fibrillation and ventricular tachycardia. In particular, a healthcare system that must continuously monitor a patient's state needs to process the ECG (electrocardiography) signal in real time. In other words, an algorithm is needed that detects the R wave exactly with minimal computation and classifies PVC while taking the person's physical condition and environment into account. This paper therefore presents a patient-adaptive pattern matching algorithm for PVC classification. For this purpose, the R wave is detected through a preprocessing step with an adaptive threshold and window, and a pattern matching method based on a hash function is applied to model each patient's normal cardiac behavior. The performance of R-wave detection and abnormal beat classification is evaluated on the MIT-BIH arrhythmia database, yielding an average R-wave detection rate of 99.33% and an abnormal beat classification error rate of 0.32%.
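
A minimal sketch of the hash-based, patient-adaptive template idea described above, assuming the beat is reduced to a couple of quantized features; the paper's actual features, preprocessing, and thresholds are not reproduced here.

```python
import hashlib

def beat_signature(rr_interval_ms: int, qrs_width_ms: int, bucket: int = 10) -> str:
    """Quantize simple beat features so that similar normal beats hash to the same key."""
    key = f"{rr_interval_ms // bucket}:{qrs_width_ms // bucket}"
    return hashlib.md5(key.encode()).hexdigest()

normal_templates = set()        # hashes of this patient's learned normal beats

def learn_normal(rr, qrs):
    normal_templates.add(beat_signature(rr, qrs))

def classify(rr, qrs):
    """A beat whose signature was never seen among the patient's normal beats
    is flagged as a candidate PVC (premature ventricular contraction)."""
    return "normal" if beat_signature(rr, qrs) in normal_templates else "possible PVC"

learn_normal(800, 90)          # patient-specific normal beat
print(classify(805, 92))       # falls in the same bucket -> "normal"
print(classify(520, 150))      # premature, wide QRS -> "possible PVC"
```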

A Development Study of The VPT for the improvement of Hadoop performance (하둡 성능 향상을 위한 VPT 개발 연구)

  • Yang, Ill Deung;Kim, Seong Ryeol
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.9
    • /
    • pp.2029-2036
    • /
    • 2015
  • Hadoop MR (MapReduce) uses a partition function to pass the outputs of mappers to reducers. The partition function determines the target reducer by computing a hash value from the key and taking it modulo the number of reducers. The legacy partition function does not divide the job effectively because it is sensitive to the key distribution; if the job is not divided evenly, some reducers need more time to finish, which affects the total processing time of the job. This paper proposes the VPT (Virtual Partition Table) and tests it on heavily skewed data. Applying the VPT shortened processing time by three seconds on average, and we expect the improvement to grow as the data volume increases.
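
The default behaviour described above, hashing the key and taking it modulo the number of reducers, can be shown in a few lines alongside a virtual-partition-table-style indirection that remaps overloaded hash slots to other reducers. This is an illustrative sketch of the idea only; the table sizes and remapping values below are invented, not taken from the paper.

```python
def default_partition(key: str, num_reducers: int) -> int:
    """Hadoop-style default: hash the key and take it modulo the reducer count."""
    return (hash(key) & 0x7FFFFFFF) % num_reducers

# A virtual partition table with more slots than reducers.  Slots that receive
# a disproportionate share of keys can be remapped to less loaded reducers
# without changing the hash function itself (illustrative values only).
NUM_REDUCERS = 4
NUM_SLOTS = 16
vpt = [slot % NUM_REDUCERS for slot in range(NUM_SLOTS)]   # initially balanced
vpt[3] = 2          # example remapping decided after observing key skew

def vpt_partition(key: str) -> int:
    slot = (hash(key) & 0x7FFFFFFF) % NUM_SLOTS
    return vpt[slot]

for k in ["seoul", "busan", "seoul", "daegu"]:
    print(k, default_partition(k, NUM_REDUCERS), vpt_partition(k))
```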

Load Balancing Scheme for Machine Learning Distributed Environment (기계학습 분산 환경을 위한 부하 분산 기법)

  • Kim, Younggwan;Lee, Jusuk;Kim, Ajung;Hong, Jiman
    • Smart Media Journal
    • /
    • v.10 no.1
    • /
    • pp.25-31
    • /
    • 2021
  • As machine learning becomes more common, the development of applications using machine learning is increasing rapidly, as is research on machine learning platforms that support such development. However, research on load balancing suited to machine learning platforms remains insufficient. Therefore, in this paper, we propose a load balancing scheme that can be applied to a machine learning distributed environment. The proposed scheme organizes the distributed servers in a level hash table structure and assigns machine learning tasks to servers in consideration of each server's performance. We implemented the distributed servers, ran experiments, and compared the performance with an existing hashing scheme. Compared with the existing hashing scheme, the proposed scheme showed an average 26% speed improvement and reduced the number of tasks waiting to be assigned to a server by more than 38%.
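
A minimal, generic sketch of performance-aware task assignment of the kind described above; the level hash table organization and the paper's exact assignment rule are not given in the abstract, so the rule below (shortest queue relative to capacity) is only an assumption for illustration.

```python
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    performance: float              # relative capacity (e.g., GPUs, cores)
    pending: int = 0                # tasks currently queued on this server

def assign(task_id: str, servers: list) -> Server:
    """Pick the server whose queue is shortest relative to its capacity."""
    target = min(servers, key=lambda s: (s.pending + 1) / s.performance)
    target.pending += 1
    return target

servers = [Server("gpu-a", 4.0), Server("gpu-b", 2.0), Server("cpu-c", 1.0)]
for t in range(10):
    assign(f"task-{t}", servers)
print([(s.name, s.pending) for s in servers])   # faster servers get more tasks
```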

Design of CCTV Enclosure Record Management System based on Blockchain

  • Yu, Kwan Woo;Lee, Byung Mun;Kang, Un Gu
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.12
    • /
    • pp.141-149
    • /
    • 2022
  • In this paper, we propose a design for a CCTV enclosure record management system based on blockchain. Since CCTV video records are transferred to the control center through the enclosure, managing the enclosure so as to prevent tampering with and damage to the video records is very important. Recently, smart enclosure monitoring systems with real-time remote monitoring and open/close state management have been used to manage CCTV enclosures, but they are limited in securing the safety of CCTV video records. The proposed system detects tampered records and recovers them through hash-value comparison against records stored distributively in the blockchain. In addition, an integrity verification API is provided to ensure the integrity of enclosure records received by the management server. To verify the effectiveness of the system, the integrity verification accuracy and elapsed time were measured through experiments. As a result, the integrity of the enclosure records was confirmed (accuracy: 100%), and the elapsed verification time (average: 73 ms) was confirmed not to affect monitoring.
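
The core integrity check described above, comparing the hash of a received enclosure record with the hash distributed at write time, can be sketched as follows; a plain dictionary stands in for the blockchain ledger here.

```python
import hashlib

def record_hash(record: bytes) -> str:
    return hashlib.sha256(record).hexdigest()

# Hashes distributed at write time (illustrative stand-in: a dict;
# the paper stores these in a blockchain, not in local memory).
ledger = {}

def store(record_id: str, record: bytes):
    ledger[record_id] = record_hash(record)

def verify(record_id: str, record: bytes) -> bool:
    """Integrity check performed before trusting an enclosure record."""
    return ledger.get(record_id) == record_hash(record)

original = b"door opened 2022-11-02 13:05"
store("rec-001", original)
print(verify("rec-001", original))                          # True
print(verify("rec-001", b"door opened 2022-11-02 14:05"))   # False -> tampered
```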

Analysis and Evaluation of Frequent Pattern Mining Technique based on Landmark Window (랜드마크 윈도우 기반의 빈발 패턴 마이닝 기법의 분석 및 성능평가)

  • Pyun, Gwangbum;Yun, Unil
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.101-107
    • /
    • 2014
  • With the development of online services, recent databases have shifted from static structures to dynamic stream structures. Earlier data mining techniques have served as decision-making tools in areas such as marketing strategy and DNA analysis, but areas of recent interest such as sensor networks, robotics, and artificial intelligence require the ability to analyze real-time data more quickly. Landmark window-based frequent pattern mining, one of the stream mining approaches, performs mining over parts of the database or over individual transactions instead of over all the data. In this paper, we analyze and evaluate two well-known landmark window-based frequent pattern mining algorithms, Lossy counting and hMiner. When Lossy counting mines frequent patterns from a set of new transactions, it performs union operations between the previous and current mining results. hMiner, a state-of-the-art algorithm based on the landmark window model, conducts a mining operation whenever a new transaction occurs. Since hMiner extracts frequent patterns as soon as a new transaction arrives, it yields the latest mining results reflecting real-time information; for this reason, such algorithms are also called online mining approaches. We evaluate and compare the performance of the earlier algorithm, Lossy counting, and the more recent one, hMiner. As criteria for our performance analysis, we first consider each algorithm's total runtime and average processing time per transaction. To compare the efficiency of their storage structures, we also evaluate maximum memory usage. Lastly, we show how stably the two algorithms mine databases whose number of items gradually increases. With respect to mining time and transaction processing, hMiner is faster than Lossy counting: hMiner stores candidate frequent patterns in a hash-based structure and can access them directly, whereas Lossy counting stores them in a lattice structure and must traverse multiple nodes to reach them. On the other hand, hMiner performs worse than Lossy counting in terms of maximum memory usage. hMiner must keep the full information of each candidate frequent pattern in its hash buckets, while Lossy counting reduces this information through the lattice, whose storage can share items that appear in multiple patterns, so its memory usage is more efficient than hMiner's. However, hMiner shows better scalability for the following reasons: as the number of items increases, fewer items are shared, so Lossy counting's memory efficiency weakens, and as the number of transactions grows, its pruning effect also degrades. From the experimental results, we conclude that landmark window-based frequent pattern mining algorithms are suitable for real-time systems although they require a significant amount of memory; their data structures should therefore be made more efficient so that they can also be used in resource-constrained environments such as WSNs (wireless sensor networks).
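
For reference, a compact sketch of the Lossy counting side of the comparison above, in its single-item form rather than the pattern/itemset form evaluated in the paper: each item's approximate count undercounts the true count by at most epsilon times the stream length, and entries are pruned at bucket boundaries.

```python
import math

def lossy_counting(stream, epsilon=0.1):
    """Classic Lossy counting: approximate counts for items in a data stream."""
    width = math.ceil(1 / epsilon)              # bucket width
    counts, deltas = {}, {}
    for n, item in enumerate(stream, start=1):
        bucket = math.ceil(n / width)
        if item in counts:
            counts[item] += 1
        else:
            counts[item] = 1
            deltas[item] = bucket - 1           # maximum possible undercount
        if n % width == 0:                      # prune at each bucket boundary
            for key in [k for k in counts if counts[k] + deltas[k] <= bucket]:
                del counts[key], deltas[key]
    return counts

stream = ["a", "b", "a", "c", "a", "b", "a", "d", "a", "e"] * 10
print(lossy_counting(stream, epsilon=0.05))     # "a" dominates the counts
```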

A Name-based Service Discovering Mechanism for Efficient Service Delivery in IoT (IoT에서 효율적인 서비스 제공을 위한 이름 기반 서비스 탐색 메커니즘)

  • Cho, Kuk-Hyun;Kim, Jung-Jae;Ryu, Minwoo;Cha, Si-Ho
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.19 no.6
    • /
    • pp.46-54
    • /
    • 2018
  • The Internet of Things (IoT) is an environment in which various devices provide services to users through communication. Because of the nature of the IoT, data are stored and distributed across heterogeneous information systems. In this situation, IoT end applications should be able to access data without knowing where the data reside or what type of storage holds them; this mechanism is called Service Discovery (SD). However, problems arise because current SD architectures search for data on physical devices. First, turnaround time increases because services are searched for based on physical location. Second, a data structure is needed to manage devices and services separately. Both increase the administrator's service configuration complexity, so the device-oriented SD structure is not suitable for the IoT. Therefore, we propose an SD structure called Name-based Service-centric Service Discovery (NSSD). NSSD provides name-based centralized SD and uses the IoT edge gateway as a cache server to speed up service discovery. Simulation results show that NSSD improves average turnaround time by about a factor of two compared to existing domain name system and distributed hash table SD architectures.
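
A minimal sketch of name-based resolution with an edge-gateway cache, the combination NSSD builds on; the names, registry layout, and cache policy are illustrative assumptions, not NSSD's actual message formats.

```python
# Centralized name -> service endpoint table (illustrative entries only).
registry = {
    "/home/livingroom/temperature": "http://10.0.0.21:8080/temp",
    "/home/door/lock":              "http://10.0.0.35:8080/lock",
}

edge_cache = {}                   # cache kept on the IoT edge gateway

def discover(name: str) -> str:
    """Resolve a service name, answering from the gateway cache when possible."""
    if name in edge_cache:
        return edge_cache[name]               # fast path: no registry round trip
    endpoint = registry[name]                 # slow path: ask the central registry
    edge_cache[name] = endpoint
    return endpoint

print(discover("/home/door/lock"))   # first lookup hits the registry
print(discover("/home/door/lock"))   # second lookup is served from the cache
```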

USB Device Authentication Protocol based on OTP (OTP 기반의 USB 디바이스 인증 프로토콜)

  • Jeong, Yoon-Su;Kim, Yong-Tae;Park, Gil-Cheol
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.8
    • /
    • pp.1735-1742
    • /
    • 2011
  • Nowadays, as mass-storage USB devices have become easy to carry, their functions are developing quickly. However, the personal information stored on a USB device can be exposed and misused when no additional authentication process is required. This paper proposes an OTP (One-Time Password)-based authentication protocol for USB devices that securely protects the stored personal information without additional authentication information. Because it performs only simple operations and uses a one-way hash function, the proposed protocol requires little computation, prevents physical access to the USB device from other networks, and does not allow unnecessary service access by the user, thereby improving communication overhead and service delay. In the experiments, the proposed protocol is evaluated in terms of authentication server throughput as the number of USB devices grows and packet authentication delay, comparing a device that only stores data (USB drive) with a device capable of computing on its own (USB token). As a result, authentication delay time is improved by 12.5% on average, and authentication server throughput with respect to the number of USB devices is improved by 10.8%.
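
One standard way to build an OTP from a one-way hash function is a Lamport-style hash chain; the sketch below illustrates that general construction only, not the paper's specific protocol or message flow.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, length: int) -> list:
    """Build a one-way hash chain; the server stores only the last element."""
    chain = [seed]
    for _ in range(length):
        chain.append(h(chain[-1]))
    return chain

# Device side keeps the chain; server side keeps only the anchor chain[-1].
chain = make_chain(b"device-secret", 100)
server_anchor = chain[-1]

def authenticate(otp: bytes) -> bool:
    """Accept an OTP if hashing it once yields the stored anchor, then roll
    the anchor forward so the same OTP cannot be replayed."""
    global server_anchor
    if h(otp) == server_anchor:
        server_anchor = otp
        return True
    return False

print(authenticate(chain[-2]))   # True: first one-time password
print(authenticate(chain[-2]))   # False: replay is rejected
print(authenticate(chain[-3]))   # True: next password in the chain
```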

X-tree Diff: An Efficient Change Detection Algorithm for Tree-structured Data (X-tree Diff: 트리 기반 데이터를 위한 효율적인 변화 탐지 알고리즘)

  • Lee, Suk-Kyoon;Kim, Dong-Ah
    • The KIPS Transactions:PartC
    • /
    • v.10C no.6
    • /
    • pp.683-694
    • /
    • 2003
  • We present X-tree Diff, a change detection algorithm for tree-structured data. Our work is motivated by the need to monitor a massive volume of web documents and detect suspicious changes, called defacement attacks, on web sites. In this context, the algorithm must be very efficient in both speed and memory use. X-tree Diff uses a special ordered labeled tree, the X-tree, to represent XML/HTML documents. X-tree nodes have a special field, tMD, which stores a 128-bit hash value representing the structure and data of the subtree, so that identical subtrees from the old and new versions can be matched. During this process, X-tree Diff applies the rule of delaying ambiguous matchings: it performs exact matching only where a node in the old version has a one-to-one correspondence with a node in the new version, delaying all the others, which drastically reduces the possibility of wrong matchings. X-tree Diff propagates such exact matchings upwards in Step 2 and obtains more matchings downwards from the roots in Step 3. In Step 4, nodes to be inserted or deleted are decided. We also show that X-tree Diff runs in O(n), where n is the number of nodes in the X-trees, in the worst case as well as the average case. This result is better than that of the BULD Diff algorithm, which is O(n log(n)) in the worst case. We evaluated X-tree Diff on real data, about 11,000 home pages from about 20 web sites, instead of synthetic documents manipulated for experimentation. Currently, the X-tree Diff algorithm is used in a commercial hacking detection system called WIDS (Web-Document Intrusion Detection System), which finds changes that have occurred in registered websites and reports suspicious changes to users.
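
The tMD field described above is a 128-bit digest of a subtree's structure and data, computed so that identical subtrees in the old and new versions can be matched with a single comparison. The sketch below shows a bottom-up subtree digest of that kind; MD5 is used here only because it happens to produce 128 bits, and the node fields are illustrative, not the paper's exact definition.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    text: str = ""
    children: list = field(default_factory=list)
    tmd: str = ""                 # subtree digest, analogous to X-tree's tMD field

def compute_tmd(node: Node) -> str:
    """Bottom-up digest over a node's label, text, and its children's digests.
    Two subtrees with equal digests can be matched without further comparison."""
    child_digests = "".join(compute_tmd(c) for c in node.children)
    node.tmd = hashlib.md5((node.label + node.text + child_digests).encode()).hexdigest()
    return node.tmd

old = Node("div", children=[Node("p", "hello"), Node("p", "world")])
new = Node("div", children=[Node("p", "hello"), Node("p", "defaced!")])
compute_tmd(old); compute_tmd(new)
print(old.children[0].tmd == new.children[0].tmd)    # True: identical subtree matched
print(old.tmd == new.tmd)                            # False: a change is detected
```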