• Title/Summary/Keyword: storage node


A Danger Theory Inspired Protection Approach for Hierarchical Wireless Sensor Networks

  • Xiao, Xin;Zhang, Ruirui
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.5 / pp.2732-2753 / 2019
  • With the application of wireless sensor networks in fields such as ecological observation, military defense, architecture, and urban management, security is becoming an increasingly serious problem. Characteristics and constraints of wireless sensor networks, such as limited computing power, storage space, and battery capacity, pose great challenges for protection research. Inspired by the danger theory of the biological immune system, this paper proposes an intrusion detection model for wireless sensor networks. The model abstracts the representations of antigens and antibodies in wireless sensor networks, defines the meaning and function of danger signals and danger areas, and describes the intrusion detection process based on the danger theory. The model is deployed in a distributed manner and does not require an instance at every sensor node. In addition, sensor nodes trigger danger signals according to their own environmental information and do not need to communicate with other nodes, which saves resources. When danger is perceived, the model acquires global knowledge through node cooperation and can perform more accurate real-time intrusion detection. The paper analyzes the complexity and efficiency of the model, and experimental results show that it achieves good detection performance while reducing energy consumption.
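
The abstract describes each node raising a danger signal from its own environmental information, without messaging other nodes, and only then cooperating with neighbors. A rough Python sketch of that flow is given below; the metric names, weights, and threshold are assumptions for illustration, not the paper's actual antigen/antibody model.

```python
# Minimal sketch of danger-signal triggering from purely local measurements.
# The model's actual antigen/antibody encoding and danger-area rules are not
# specified in the abstract; this only shows the "local trigger, then cooperate" flow.

def danger_signal(energy_drop_rate: float, packet_loss_rate: float) -> float:
    # Combine local environmental indicators into a single danger level in [0, 1].
    return min(1.0, 0.6 * energy_drop_rate + 0.4 * packet_loss_rate)

def node_step(local_metrics: dict, threshold: float = 0.5) -> bool:
    # The node decides from its own metrics alone; no inter-node messages needed here.
    level = danger_signal(local_metrics["energy_drop_rate"],
                          local_metrics["packet_loss_rate"])
    if level >= threshold:
        # Only now would the node cooperate with neighbors in the danger area
        # to gather global knowledge and run the actual detection step.
        return True
    return False

print(node_step({"energy_drop_rate": 0.7, "packet_loss_rate": 0.4}))  # True
```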

AS B-tree: A study on the enhancement of the insertion performance of B-tree on SSD (AS B-트리: SSD를 사용한 B-트리에서 삽입 성능 향상에 관한 연구)

  • Kim, Sung-Ho;Roh, Hong-Chan;Lee, Dae-Wook;Park, Sang-Hyun
    • The KIPS Transactions:PartD / v.18D no.3 / pp.157-168 / 2011
  • Recently, flash memory has been used as the main storage device in mobile devices, and flashSSDs are gaining popularity as a major storage device in laptop and desktop computers, and even in enterprise-level servers. Unlike HDDs, flash memory cannot perform an overwrite unless the target block is erased first. To address this, an FTL (Flash memory Translation Layer) is employed: even when a modified data block is logically overwritten at the same address, the FTL writes the updated block to a different physical address and remaps the logical address, so the high cost of block erases is avoided. A flashSSD contains an array of NAND flash memory packages and can access one or more packages in parallel, so DBMSs benefit from issuing I/O requests to sequential logical addresses. However, the B-tree, the representative index structure of current relational DBMSs, produces excessive I/O operations at random addresses when its nodes are updated, which makes the original B-tree unfavorable for flashSSDs. In this paper, we propose the AS (Always Sequential) B-tree, which writes every updated node contiguously after the previously written node in the logical address space. In our experiments, the AS B-tree improved insertion performance by 21% over the original B-tree.
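
The core idea, writing every updated node right after the previously written node in the logical address space so that the flashSSD sees sequential I/O, can be sketched as follows. This is a minimal illustration assuming a simple logical-block counter, not the paper's actual AS B-tree node format.

```python
# Sketch of the "always sequential" write policy: every updated node is written
# at the next logical address instead of in place. The node layout and addressing
# are simplified assumptions, not the paper's actual AS B-tree design.

class SequentialNodeStore:
    def __init__(self):
        self.next_lba = 0          # next logical block address to write
        self.location = {}         # node id -> current logical address

    def write_node(self, node_id, payload):
        lba = self.next_lba        # always the address right after the last write
        self.next_lba += 1
        self.location[node_id] = lba
        # In a real system this would issue a sequential write to the SSD;
        # the FTL then maps the logical address to a fresh physical page.
        return lba

store = SequentialNodeStore()
print(store.write_node("root", b"..."))   # 0
print(store.write_node("leaf1", b"..."))  # 1
print(store.write_node("root", b"..."))   # 2  (update rewritten sequentially)
```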

Segmented Douglas-Peucker Algorithm Based on the Node Importance

  • Wang, Xiaofei;Yang, Wei;Liu, Yan;Sun, Rui;Hu, Jun;Yang, Longcheng;Hou, Boyang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.4 / pp.1562-1578 / 2020
  • Vector data compression algorithms reduce the amount of vector graphics data, and thereby the transmission, processing, and storage overhead, to meet requirements at different levels and scales. Because a large threshold in the Douglas-Peucker vector data compression algorithm leads to comparatively large errors, and the algorithm has difficulty preserving shape features and selecting thresholds, a segmented Douglas-Peucker algorithm based on node importance is proposed. First, the algorithm uses the vertical-to-chord ratio as the main feature to detect and extract critical points that contribute strongly to the shape of the curve, so that its basic shape is preserved. Then, combined with a radial distance constraint, it selects maximum points as critical points and introduces a scale-dependent threshold to merge and adjust them, so that local features between two critical points are extracted to the required accuracy. Finally, the improved algorithm is analyzed and evaluated qualitatively and quantitatively on a large number of different vector data sets. Experimental results indicate that the improved algorithm outperforms the original Douglas-Peucker algorithm in shape retention, compression error, result simplification, and time efficiency.
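
For reference, the baseline this work improves on is the classic Douglas-Peucker recursion, which keeps the point farthest from the chord whenever that distance exceeds the threshold. A standard Python version is sketched below; the paper's additions (vertical-to-chord ratio, radial distance constraint, scale-dependent threshold) are not reproduced.

```python
# Classic Douglas-Peucker simplification, shown only as the baseline the paper builds on.
import math

def point_line_distance(p, a, b):
    # Perpendicular distance from point p to the chord a-b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

def douglas_peucker(points, epsilon):
    if len(points) < 3:
        return points
    # Find the point farthest from the chord between the endpoints.
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = point_line_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax > epsilon:
        # Keep the farthest point and recurse on both halves.
        left = douglas_peucker(points[:index + 1], epsilon)
        right = douglas_peucker(points[index:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]

print(douglas_peucker([(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7)], 1.0))
```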

Context-aware Connectivity Analysis Method using Context Data Prediction Model in Delay Tolerant Networks (Delay Tolerant Networks에서 속성정보 예측 모델을 이용한 상황인식 연결성 분석 기법)

  • Jeong, Rae-Jin;Oh, Young-Jun;Lee, Kang-Whan
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.4 / pp.1009-1016 / 2015
  • In this paper, we propose the EPCM (Efficient Prediction-based Context-awareness Matrix) algorithm, which analyzes connectivity by predicting a cluster's context data such as velocity and direction. In existing DTNs, unrestricted relay node selection increases delay and packet loss, and overhead arises from the nodes' limited storage and processing capability. The EPCM algorithm therefore analyzes predicted context data using a context matrix and an adaptive revision weight, and selects a relay node by considering the connectivity between the cluster and the base station. The proposed algorithm stores context data in the context matrix, analyzes the context according to its variation, and predicts future context data after revision with the adaptive revision weight. Simulation results show that the EPCM algorithm achieves a high packet delivery ratio by selecting relay nodes according to the predicted context data matrix.
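
The abstract does not give the exact matrix layout or weight update rule, but a rough sketch of the general idea, predicting the next context value (for example, velocity) from past samples with a revision weight that adapts to the prediction error, might look like the following. The specific adaptation formula here is an assumption, not the paper's.

```python
# Rough sketch in the spirit of EPCM's context matrix and adaptive revision weight.
# The actual matrix layout, weight update rule, and connectivity score in the paper
# are not specified in the abstract; this is only an illustrative stand-in.

class ContextPredictor:
    def __init__(self, weight=0.5):
        self.weight = weight      # adaptive revision weight (assumed to stay in [0.1, 0.9])
        self.history = []         # one row of the context matrix, e.g. past velocities

    def update(self, observed):
        if self.history:
            error = abs(observed - self.predict())
            # Assumed adaptation: trust recent samples more when prediction error grows.
            self.weight = min(0.9, max(0.1, self.weight + 0.1 * (error - 1.0)))
        self.history.append(observed)

    def predict(self):
        # Exponentially weighted estimate of the next context value.
        estimate = self.history[0]
        for value in self.history[1:]:
            estimate = self.weight * value + (1 - self.weight) * estimate
        return estimate

p = ContextPredictor()
for velocity in [10.0, 12.0, 11.0, 13.0]:
    p.update(velocity)
print(round(p.predict(), 2))   # predicted next velocity
```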

An Efficient Concurrency Control Algorithm for Multi-dimensional Index Structures (다차원 색인구조를 위한 효율적인 동시성 제어기법)

  • 김영호;송석일;유재수
    • Journal of KIISE:Databases / v.30 no.1 / pp.80-94 / 2003
  • In this paper, we propose an enhanced concurrency control algorithm that efficiently minimizes query delay. The factors that delay search operations and deteriorate the concurrency of multi-dimensional index structures are node splits and MBR updates. To reduce the query delay caused by split operations, our algorithm minimizes the time exclusive latches are held on a split node: latches are held not for the whole split but only during the physical node split, which occupies a small part of the whole split process. To avoid query delay caused by MBR updates, we introduce the partial lock coupling (PLC) technique, which increases concurrency by using lock coupling only for MBR-shrinking operations, which are less frequent than MBR-expanding operations. For performance evaluation, we implemented the proposed algorithm and one of the existing link-technique-based algorithms on MIDAS-III, the storage system of the BADA-III DBMS. Various experiments show that our algorithm outperforms the existing algorithm in terms of throughput and response time.
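
The latching idea, an exclusive latch held only for the physical node split rather than the whole split process, can be sketched roughly as below. The node is simplified to a Python list and the latch to a single lock; the real protocol lives inside the storage engine.

```python
# Sketch of the split latching: the exclusive latch covers only the physical split,
# not the preparation or the propagation of the split to the parent node.
import threading

node_latch = threading.Lock()   # exclusive latch for one index node (simplified)

def split_node(node: list):
    # Phase 1: decide the split point without holding the exclusive latch.
    mid = len(node) // 2
    # Phase 2: the physical split itself is the only part under the latch,
    # so searchers are blocked only for this short section.
    with node_latch:
        new_node = node[mid:]
        del node[mid:]
    # Phase 3: installing the new node in the parent would follow here,
    # again without holding the exclusive latch on the split node.
    return new_node

left = [1, 3, 5, 7, 9, 11]
right = split_node(left)
print(left, right)   # [1, 3, 5] [7, 9, 11]
```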

Concurrency Control and Recovery Methods for Multi-Dimensional Index Structures (다차원 색인구조를 위한 동시성제어 기법 및 회복기법)

  • Song, Seok-Il;Yoo, Jae-Soo
    • The KIPS Transactions:PartD / v.10D no.2 / pp.195-210 / 2003
  • In this paper, we propose an enhanced concurrency control algorithm that maximizes the concurrency of multi-dimensional index structures. The factors that deteriorate the concurrency of such index structures are node splits and minimum bounding region (MBR) updates. The proposed algorithm introduces the PLC (Partial Lock Coupling) technique to avoid lock coupling during MBR updates, together with a new MBR update method that allows searchers to access nodes where MBR updates are being performed. To reduce the performance degradation caused by node splits, the algorithm holds exclusive latches not for the whole split but only during the physical node split, which occupies a small part of the whole split process. For performance evaluation, we implemented the proposed concurrency control algorithm and one of the existing link-technique-based algorithms on MIDAS-3, the storage system of the BADA-4 DBMS. Various experiments show that the proposed algorithm outperforms the existing algorithm in terms of throughput and response time. We also propose a recovery protocol for the proposed concurrency control algorithm, designed to ensure high concurrency and fast recovery.
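
The intuition behind PLC, applying lock coupling only when an MBR shrinks (a temporarily oversized MBR is harmless to searchers, while a prematurely shrunk one could hide entries), can be sketched as follows. The MBR representation and latch modes here are simplifications, not the paper's actual protocol.

```python
# Sketch of the PLC decision: couple latches only for MBR-shrinking updates.
# MBRs are (xmin, ymin, xmax, ymax) tuples here; real R-tree latching is omitted.

def union(a, b):
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def mbr_shrinks(old, new):
    # True if the new MBR no longer covers all of the old one on some side.
    return new != union(old, new)

def update_mbr(old, new):
    if mbr_shrinks(old, new):
        # Shrinking: would use lock coupling (parent and child latched together).
        mode = "lock-coupled update"
    else:
        # Pure expansion: searchers may keep reading; no coupling needed.
        mode = "uncoupled update"
    return new, mode

print(update_mbr((0, 0, 10, 10), (0, 0, 12, 10)))  # expansion -> uncoupled
print(update_mbr((0, 0, 10, 10), (2, 0, 10, 10)))  # shrink -> lock-coupled
```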

The Big Data Analysis and Medical Quality Management for Wellness (웰니스를 위한 빅데이터 분석과 의료 질 관리)

  • Cho, Young-Bok;Woo, Sung-Hee;Lee, Sang-Ho
    • Journal of the Korea Society of Computer and Information / v.19 no.12 / pp.101-109 / 2014
  • With the development of medical technology and rising income levels, interest in actively promoting and maintaining health under the idea of a "long and healthy life = wellness" has grown, as has the demand for personalized health care services and for applying big data to disease prevention. This paper focuses on wellness, a main interest of the market, with the aim of supporting big data-driven medical quality management through patient-centered medical services: improving disease prevention and treatment through big data analysis rather than relying on medication-centered treatment. Daily information such as tweets is analyzed for wellness-oriented disease prevention and treatment. We also measured processing time for the big data analysis while increasing the number of nodes. The test results show that, moving from one node to three nodes, total access time improved by 26%, data storage by 63%, and data aggregation by 18%.

A key management scheme for the cluster-based sensor network using polar coordinates (극 좌표를 이용한 클러스터 기반 센서 네트워크의 키 관리 기법)

  • Hong, Seong-Sik;Ryou, Hwang-Bin
    • Journal of the Korea Institute of Information and Communication Engineering / v.12 no.5 / pp.870-878 / 2008
  • Most sensor nodes that make up a sensor network have a low level of security, and because of their low computing power and small storage capacity it is very difficult to apply security algorithms to them efficiently. As a result, it is hard to prevent an illegal node from joining a sensor network, and transmitted information is easily exposed and overheard once the node's transmission algorithm is known. In this paper, we propose a group key management scheme for sensor networks using polar coordinates, so that sensor nodes can deliver information securely inside a cluster and illegal nodes are prevented from joining any cluster in a network composed of many clusters. In the proposed scheme, all sensor nodes in a cluster set up authentication keys based on a pivot value provided by the cluster head (CH). Intensive simulations show that the proposed scheme outperforms the pair-wise scheme in terms of secure key management and the prevention of illegal nodes joining the network.
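
The abstract only states that nodes derive authentication keys from a pivot value supplied by the CH and from polar coordinates; one plausible illustration of such a derivation, using an HMAC over the node's (r, θ) relative to the cluster head, is sketched below. The construction is an assumption, not the paper's actual scheme.

```python
# Illustrative sketch only: deriving key material from a CH-provided pivot value
# and the node's polar coordinates relative to the cluster head.
import hashlib, hmac, math

def polar(node_xy, ch_xy):
    # Convert the node's position relative to the CH into (r, theta).
    dx, dy = node_xy[0] - ch_xy[0], node_xy[1] - ch_xy[1]
    return math.hypot(dx, dy), math.atan2(dy, dx)

def derive_key(pivot: bytes, node_xy, ch_xy) -> bytes:
    r, theta = polar(node_xy, ch_xy)
    material = f"{r:.3f}:{theta:.3f}".encode()
    # HMAC is used here as a generic keyed derivation; the paper's scheme may differ.
    return hmac.new(pivot, material, hashlib.sha256).digest()

key = derive_key(b"cluster-pivot", (3.0, 4.0), (0.0, 0.0))
print(key.hex()[:16])
```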

Study of Improvement of Search Range Compression Method of VP-tree for Video Indexes (영상 색인용 VP-tree의 검색 범위 압축법의 개선에 관한 연구)

  • Park, Gil-Yang;Lee, Samuel Sang-Kon;Hwang, Jea-Jeong
    • Journal of Korea Multimedia Society / v.15 no.2 / pp.215-225 / 2012
  • In multimedia databases, multidimensional space-based indexing has been used to increase search efficiency. However, this approach is limited in generality because it relies on Euclidean distance for distance calculation. In contrast, metric space-based indexing, which only requires the metric axioms to hold, is widely applicable because distance measures other than Euclidean distance can be used. This paper proposes a way to improve the VP-tree, one of the metric space indexing methods. A VP-tree search descends from the root node, following the nodes that match the search range, and finally computes the distances to the objects linked to a leaf node to check whether they fall within the search range. Since search speed decreases as the number of distance calculations at leaf nodes increases, this paper focuses on the triangle-inequality-based range compression method in leaf nodes and proposes using the most recently computed distance to the query object as the base point of the triangle inequality. This improvement narrows the search range and reduces the number of distance calculations. In a system performance test using 10,000 video data items, the new method reduced the search time for similar videos by 5-12% compared to the conventional method.
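
At a leaf, the standard triangle-inequality pruning that this work refines can be sketched as follows: if the precomputed distance from an object to the vantage point differs from the query's distance to that vantage point by more than the search radius, the object cannot be in range and its distance need not be computed. The paper's particular choice of base point is not reproduced here.

```python
# Sketch of triangle-inequality pruning at a VP-tree leaf.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def leaf_range_search(query, vantage, leaf, radius):
    # leaf: list of (object, precomputed distance from object to vantage point)
    d_qv = dist(query, vantage)
    hits, computed = [], 0
    for obj, d_ov in leaf:
        if abs(d_qv - d_ov) > radius:   # triangle inequality: prune without computing d(q, o)
            continue
        computed += 1
        if dist(query, obj) <= radius:
            hits.append(obj)
    return hits, computed

vantage = (0.0, 0.0)
leaf = [((1, 1), dist((1, 1), vantage)), ((9, 9), dist((9, 9), vantage))]
print(leaf_range_search((1.5, 1.0), vantage, leaf, 1.0))  # ([(1, 1)], 1): one object pruned
```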

Performance Enhancement Method Through Science DMZ Data Transfer Node Tuning Parameters (Science DMZ 데이터 전송 노드 튜닝 요소를 통한 성능 향상 방안)

  • Park, Jong Seon;Park, Jin Hyung;Kim, Seung Hae;Noh, Min Ki
    • KIPS Transactions on Computer and Communication Systems / v.7 no.2 / pp.33-40 / 2018
  • In an environment with large network bandwidth, maximizing bandwidth utilization is important for increasing transfer efficiency. End-to-end transfer efficiency is significantly influenced by factors such as the network, the data transfer nodes, and intranet network security policies. Science DMZ is a network architecture that maximizes transfer performance by optimally arranging these complex components. Among them, the data transfer node is a key factor whose storage, network interface, operating system, and transfer application greatly affect transfer performance, so its parameters must be tuned to provide high transfer efficiency. In this paper, we propose a method to enhance performance by tuning the parameters of a 100Gbps data transfer node. Experimental results confirm that transfer efficiency can be improved greatly in a 100Gbps network environment by tuning jumbo frames and the CPU governor: the network performance test with Iperf showed a 300% improvement over the default configuration, and an NVMe SSD showed a 140% performance improvement over a hard disk.
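
The two knobs the abstract highlights, jumbo frames and the CPU governor, correspond to standard host-tuning commands on a Linux data transfer node. A small sketch is shown below; the interface name and buffer sizes are illustrative, not the paper's exact settings, and the commands require root privileges. Measuring throughput with iperf before and after such tuning is the usual way to quantify the gain, as the paper does.

```python
# Sketch of typical data transfer node tuning; values are illustrative assumptions.
import subprocess

def tune_dtn(interface: str = "eth0"):
    commands = [
        ["ip", "link", "set", "dev", interface, "mtu", "9000"],   # enable jumbo frames
        ["cpupower", "frequency-set", "-g", "performance"],       # set CPU governor
        ["sysctl", "-w", "net.core.rmem_max=268435456"],          # larger socket buffers
        ["sysctl", "-w", "net.core.wmem_max=268435456"],
    ]
    for cmd in commands:
        subprocess.run(cmd, check=True)   # requires root privileges

if __name__ == "__main__":
    tune_dtn()
```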