• Title/Summary/Keyword: storage node

Search Results: 235

Integration of flash memory for effective Weather monitoring system (재해예방 모니터링 시스템의 효율적인 데이터 전송을 위한 플래시 메모리의 활용)

  • Yoo, Jae-Ho;Lee, Seung-Chul;Kwon, Tae-Ha;Chung, Wan-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2010.05a
    • /
    • pp.223-225
    • /
    • 2010
  • In order to minimize casualties and damage from natural disasters, local terrain and weather phenomena need to be constantly monitored. Various weather monitoring systems are designed to collect and monitor weather information for disaster prevention. Nowadays, wireless sensor networks are widely used to transmit weather information, which is collected by the base station at regular intervals. In this paper, a disaster prevention monitoring system is designed for efficient transfer of weather data such as temperature, humidity, and illumination. By buffering readings in flash memory, the weather information can be transmitted in bursts rather than sample by sample. TelosB sensor nodes are used in the research, programmed in the nesC language under TinyOS.

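To make the buffering idea concrete, here is a minimal Python sketch (the paper's implementation is nesC on TinyOS; the class and the 32-record threshold below are invented for illustration): readings accumulate in a flash-backed buffer and are pushed to the base station in one burst.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Reading:
    timestamp: float
    temperature: float
    humidity: float
    illumination: float

@dataclass
class FlashBufferedSender:
    """Buffer sensor readings and transmit them in bursts.

    Stands in for the paper's flash-backed buffering on a TelosB node;
    the burst threshold is an illustrative choice, not the paper's.
    """
    burst_size: int = 32
    buffer: list = field(default_factory=list)

    def record(self, reading: Reading) -> None:
        self.buffer.append(reading)            # on hardware: write to a flash page
        if len(self.buffer) >= self.burst_size:
            self.flush()

    def flush(self) -> None:
        # One radio burst instead of per-sample packets saves radio wake-ups.
        print(f"bursting {len(self.buffer)} readings to base station")
        self.buffer.clear()                    # on hardware: erase the flash block

sender = FlashBufferedSender()
for _ in range(70):
    sender.record(Reading(time.time(), 21.5, 40.0, 300.0))
sender.flush()  # send the final partial burst
```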

ESBL: An Energy-Efficient Scheme by Balancing Load in Group Based WSNs

  • Mehmood, Amjad;Nouman, Muhammad;Umar, Muhammad Muneer;Song, Houbing
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.10
    • /
    • pp.4883-4901
    • /
    • 2016
  • Energy efficiency in Wireless Sensor Networks (WSNs) is a very appealing research area due to serious constraints on resources such as the storage, processing, and communication power of the sensor nodes. Because individual nodes have limited capabilities, such networks are composed of a large number of nodes. The higher node count improves overall performance in collecting data from the environment and transmitting packets among nodes. In such networks the nodes sense data and ultimately forward the information to a Base Station (BS). The main issues in WSNs revolve around energy consumption and delay in relaying data. A great deal of work has been published on achieving energy efficiency in such networks, and various techniques have been proposed for dividing them: grid division, group-based division, clustering, logical network layers, variable-size clusters or groups, and so on. In this paper a new group-based WSN technique is proposed that borrows features from the recently published protocols "Energy-Efficient Multi-level and Distance Aware Clustering (EEMDC)" and "Energy-Efficient Unequal Clustering (EEUC)". The proposed work is not only energy-efficient but also minimizes the delay in relaying data from the sensor nodes to the BS. Simulation results show that it outperforms the LEACH protocol by 38%, EEMDC by 10%, and EEUC by 13%.
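
As a rough illustration of the energy/distance trade-off that such group-based protocols balance, here is a toy head-election sketch in Python; the scoring rule and all parameters are invented and do not reproduce ESBL's actual algorithm.

```python
import math
import random

# Hypothetical node model; only the general idea of weighing residual
# energy against distance to the base station is shown here.
class Node:
    def __init__(self, x, y, energy):
        self.x, self.y, self.energy = x, y, energy

    def distance_to(self, bs_x, bs_y):
        return math.hypot(self.x - bs_x, self.y - bs_y)

def elect_group_head(nodes, bs_x, bs_y, alpha=0.7):
    """Pick the node with the best weighted score: high residual energy
    is rewarded, long distance to the base station is penalized."""
    max_e = max(n.energy for n in nodes)
    max_d = max(n.distance_to(bs_x, bs_y) for n in nodes)
    def score(n):
        return alpha * (n.energy / max_e) - (1 - alpha) * (n.distance_to(bs_x, bs_y) / max_d)
    return max(nodes, key=score)

random.seed(1)
group = [Node(random.uniform(0, 100), random.uniform(0, 100), random.uniform(0.2, 1.0))
         for _ in range(20)]
head = elect_group_head(group, bs_x=50, bs_y=150)
print(f"head at ({head.x:.1f}, {head.y:.1f}) with energy {head.energy:.2f}")
```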

Design of Subthreshold SRAM Array utilizing Advanced Memory Cell (개선된 메모리 셀을 활용한 문턱전압 이하 스태틱 램 어레이 설계)

  • Kim, Taehoon;Chung, Yeonbae
    • Journal of IKEEE
    • /
    • v.23 no.3
    • /
    • pp.954-961
    • /
    • 2019
  • This paper suggests an advanced 8T SRAM which can operate properly in the subthreshold voltage regime. The memory cell consists of 8 symmetric transistors, in which the latch storing the data is controlled by a column-wise assist line. During a read, the data storage nodes are temporarily decoupled from the read path, eliminating read disturbance. Additionally, the cell keeps the noise-vulnerable 'low' node close to ground, thereby improving dummy-read stability. During a write, the boosted wordline facilitates changing the contents of the memory bit. At a 0.4 V supply, the advanced 8T cell achieves 65% higher dummy-read stability and 3.7 times better write-ability than the commercialized 8T cell. The proposed cell and circuit techniques have been verified in a 16-kbit SRAM array designed in an industrial 180-nm low-power CMOS process.
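
A behavioral toy model (Python, invented for illustration; the paper works at the transistor level) of why decoupling the read path removes read disturbance: reads never touch the latch nodes.

```python
class Decoupled8TCell:
    """Behavioral sketch of a read-decoupled 8T SRAM cell.

    The latch (q, q_bar) is only touched through write(); read() goes
    through a separate buffer, so it cannot flip the stored value. This
    mirrors the idea in the abstract, not the actual circuit design.
    """
    def __init__(self):
        self.q = 0
        self.q_bar = 1

    def write(self, value: int) -> None:
        # In the paper, a boosted wordline strengthens this path at 0.4 V.
        self.q = value & 1
        self.q_bar = self.q ^ 1

    def read(self) -> int:
        # The read buffer senses q without connecting the bitline to the
        # latch, eliminating disturbance on the storage nodes.
        return self.q

cell = Decoupled8TCell()
cell.write(1)
assert cell.read() == 1 and cell.q_bar == 0
```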

Determination of Harvesting Time and Effect of Diquat Treatment in Sesame Cropped After Winter Barley (맥류작 참깨의 수확기 결정과 건조제 처리의 효과)

  • Lee, H.J.;Kwon, Y.W.
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.25 no.2
    • /
    • pp.64-67
    • /
    • 1980
  • Field experiments were conducted to determine the optimum harvesting time and to evaluate the effect of Diquat spray in late-seeded sesame, cultivar 'Suweon 9'. Sesame seed yield reached a plateau from the Sept. 18 harvest, when seed number was at its maximum. Thousand-seed weight increased until the Sept. 29 harvest. As harvesting was delayed, the moisture content of the capsules decreased and capsule dehiscence increased. Capsule dehiscence did not start until capsule moisture content dropped below 70%. Optimum harvesting might begin from the time when capsule moisture content drops below 70%, leaf senescence reaches the upper nodes, and 50% of capsules have lost their green color. About a 5% increase in seed weight after defoliation was estimated to be translocation from the capsule wall. Diquat spray with 0.3% and 0.5% (v/v) solutions of commercial Reglone (20% a.i.) rapidly decreased capsule moisture content and promoted seed shattering. Dehiscence of 90% of capsules was noted seven days after Diquat spray. Diquat spray as a harvest aid could accelerate sesame desiccation by up to two weeks relative to normal field conditions.


DNA Sequences Compression using Repeat technique and Selective Encryption using modified Huffman's Technique

  • Syed Mahamud Hossein; Debashis De; Pradeep Kumar Das Mohapatra
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.8
    • /
    • pp.85-104
    • /
    • 2024
  • The DNA (deoxyribonucleic acid) database grows tremendously, from millions to billions of records in a year. Therefore, storing and searching DNA databases requires an efficient lossless compression and encryption algorithm for secure communication. Short repeated patterns are a paramount characteristic of biological sequences. The algorithm is based on finding exact repeats, substituting substrings with corresponding ASCII codes, and generating a library file, thereby compressing the data stream. In this technique the data are secured using the ASCII values and the generated library file, which acts as a signature. The security of information is the most challenging question from the communication perspective. A selective encryption method is used for security; it is applied to the compressed data, to the library file, or to both. In selective encryption, a fraction of the message is encrypted while the remaining part is left unchanged, which is the defining property of a selective encryption system. The Huffman algorithm is applied to the output of the first-phase repeat technique, with the Huffman tree's level and node positions altered for encryption. The main requirements are minimal storage and computation cost. The time and space complexity of the repeat algorithm are O(N^2) and O(N); those of the Huffman algorithm are O(n log n) and O(n log n). Artificial data of equivalent length were also tested with this algorithm. The modified Huffman technique reduces the compression rate and ratio. The experimental results show that encrypting only 58% to 100% of the actual file modifies above 99% of it, and the compression rate is 1.97 bits/base.
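
For the second phase, a generic Huffman coder over DNA bases may help fix ideas; this is a standard textbook construction, not the paper's modified tree whose level and node positions are altered for encryption.

```python
import heapq
from collections import Counter

def huffman_codes(text: str) -> dict:
    """Build a Huffman code table for the symbols in text.

    A plain sketch of the second compression phase; the encryption-driven
    tree modifications described in the abstract are not reproduced.
    """
    freq = Counter(text)
    # Heap entries: (frequency, tiebreak, {symbol: code-so-far}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    if len(heap) == 1:                      # degenerate one-symbol input
        return {sym: "0" for sym in heap[0][2]}
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

seq = "ATGGCGTATTAGCGGATCCGATTACA"
codes = huffman_codes(seq)
encoded = "".join(codes[base] for base in seq)
print(codes, f"{len(encoded)} bits vs {8 * len(seq)} bits raw")
```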

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amounts of log data generated by banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing a client's business, a separate log data processing system needs to be established. However, existing computing environments make it difficult to realize flexible storage expansion for massive amounts of unstructured log data and to execute the many functions needed to categorize and analyze the stored data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to handle with the existing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, such as storage space and memory, under conditions such as storage extension or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating the block units of the aggregated log data, the proposed system offers automatic recovery functions so that it continues to operate after a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data, and their strict schemas prevent node expansion when stored data must be distributed across various nodes as the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when the amount of data grows rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data is rapidly increasing, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over each bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analyses of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in a graph according to the user's analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, covering log insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through MongoDB log-insert performance evaluations for various chunk sizes.
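
A minimal sketch of the document-oriented storage idea, using the real pymongo API; the connection string, database name, and field names are hypothetical, since the abstract does not specify the schema.

```python
from datetime import datetime, timezone
from pymongo import ASCENDING, MongoClient

# Hypothetical connection string and schema; the paper's actual shard
# and chunk configuration are not given in the abstract.
client = MongoClient("mongodb://localhost:27017")
logs = client["banklogs"]["transactions"]

# Schema-free insert: each unstructured log line becomes one document,
# with whatever fields the collector module could extract.
logs.insert_one({
    "ts": datetime.now(timezone.utc),
    "branch": "seoul-01",
    "level": "INFO",
    "raw": "client business step completed in 42 ms",
})

# An index on the timestamp keeps per-interval aggregation queries fast.
logs.create_index([("ts", ASCENDING)])

# Aggregated count per level, the kind of summary a log graph
# generator module could plot.
for row in logs.aggregate([{"$group": {"_id": "$level", "n": {"$sum": 1}}}]):
    print(row)
```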

Design and Implementation of DEVSim++ and DiskSim Interface for Interoperation of System-level Simulation and Disk I/O-level Simulation (시스템수준 시뮬레이션과 디스크 I/O수준 시뮬레이션 연동을 위한 DEVSim++과 DiskSim 사이의 인터페이스 설계 및 구현)

  • Song, Hae Sang;Lee, Sun Ju
    • Journal of the Korea Society of Computer and Information
    • /
    • v.18 no.4
    • /
    • pp.131-140
    • /
    • 2013
  • This paper deals with the design and implementation of an interface for interoperation between DiskSim, a well-known disk simulator, and a system-level simulator based on DEVSim++. Such interoperational simulation aims at evaluating the overall performance of storage systems that consist of multiple computer nodes with a variety of I/O-level specifications. The DEVSim++ environment, a well-known system-level simulation framework, is based on the DEVS formalism, which provides a sound semantics of modular and hierarchical modeling for discrete-event systems such as multi-node computer systems. For maintainability, we assume that the source code of the two heterogeneous simulation engines is not changed. Thus, we adopt a notion of simulator interoperation in which there must be a means to synchronize simulation times as well as to exchange messages between the simulators. DiskSimManager is designed and implemented as an interface for such interoperation. Various experiments, comparing the results of standalone DiskSim simulation with those of interoperation simulation through the proposed DiskSimManager interface, proved that DiskSimManager works correctly as an interface between DEVSim++ and DiskSim.
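
The time-synchronization requirement can be pictured as a conservative lockstep loop; the `ToySimulator` class below is invented for illustration and is not the DEVSim++ or DiskSim API.

```python
# A language-agnostic sketch of the synchronization idea behind an
# interface like DiskSimManager: always advance the simulator whose next
# event is earliest, so cross-simulator messages are never received late.

class ToySimulator:
    def __init__(self, name, event_times):
        self.name = name
        self.pending = sorted(event_times)
        self.now = 0.0

    def next_event_time(self):
        return self.pending[0] if self.pending else float("inf")

    def advance(self):
        self.now = self.pending.pop(0)
        print(f"{self.name}: event handled at t={self.now}")

def cosimulate(sim_a, sim_b):
    """Conservative lockstep: neither simulator may run past the other's
    next event time, which keeps both simulation clocks consistent."""
    while min(sim_a.next_event_time(), sim_b.next_event_time()) < float("inf"):
        if sim_a.next_event_time() <= sim_b.next_event_time():
            sim_a.advance()
        else:
            sim_b.advance()

cosimulate(ToySimulator("system-level", [1.0, 4.0, 6.0]),
           ToySimulator("disk-io", [2.5, 4.5]))
```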

A NAND Flash File System for Sensor Nodes to support Data-centric Applications (데이터 중심 응용을 지원하기 위한 센서노드용 NAND 플래쉬 파일 시스템)

  • Sohn, Ki-Rack;Han, Kyung-Hun;Choi, Won-Chul;Han, Hyung-Jin;Han, Ji-Yeon;Lee, Ki-Hyeok
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.45 no.3
    • /
    • pp.47-57
    • /
    • 2008
  • Recently, energy-efficient, large-capacity NAND flash memory has been favored as next-generation storage for sensor nodes. So far, most sensor node file systems are based on NOR flash, and few file systems are applicable to large NAND flash memories. Although new file systems that take the features of NAND flash into account are required, they are difficult to develop, mainly due to the limited SRAM on sensor nodes, which typically offer only 4~10 KB. In this paper, we design and implement a novel file system to support data-centric applications. To do this, we add 1 KB of EEPROM to store persistent file description data efficiently and devise a simple wear-leveling method. This reduces the number of page updates, resulting in lower energy use and a longer sensor-node lifetime.
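
A minimal sketch of one simple wear-leveling policy consistent with the abstract (always write to the least-erased free block); the block count is illustrative and the paper's exact method may differ.

```python
class WearLeveledFlash:
    """Toy wear-leveling: spread erase cycles evenly across blocks."""

    def __init__(self, num_blocks=8):
        self.erase_counts = [0] * num_blocks
        self.erased = set(range(num_blocks))   # blocks ready for writing

    def allocate_block(self):
        """Pick the least-worn erased block so no block wears out early."""
        block = min(self.erased, key=lambda b: self.erase_counts[b])
        self.erased.remove(block)
        return block

    def erase_block(self, block):
        self.erase_counts[block] += 1
        self.erased.add(block)

flash = WearLeveledFlash()
for _ in range(20):                # simulate write/erase churn
    b = flash.allocate_block()
    flash.erase_block(b)
print(flash.erase_counts)          # counts stay within one of each other
```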

Key Update Protocols in Hierarchical Sensor Networks (계층적 센서 네트워크에서 안전한 통신을 위한 키 갱신 프로토콜)

  • Lee, Joo-Young;Park, So-Young;Lee, Sang-Ho
    • The KIPS Transactions:PartC
    • /
    • v.13C no.5 s.108
    • /
    • pp.541-548
    • /
    • 2006
  • A sensor network realizes ubiquitous computing environments: it aggregates data by means of nodes with sensing and communication capabilities deployed in inaccessible places. In such environments, the data that sensor nodes gather from the network are delivered to users, and the data must be encrypted to guarantee secure communication. Therefore, a key management scheme is needed that suits sensor nodes, which feature continual data transfer, limited computation and storage capacity, and battery operation. We propose a key management scheme appropriate for sensor networks organized in a hierarchical architecture. Because sensor nodes send data to their parent node, routing energy is reduced. We assume that sensor nodes have different security levels according to their levels in the hierarchy, and our key management scheme provides different key establishment protocols for each security level. We reduce the number of sensor nodes that share the same encryption key, thereby limiting the damage from key exposure. We also propose key update protocols that use different update periods for each level to refresh established keys efficiently for secure data encryption.
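
One way to picture level-dependent key updates is a one-way hash chain refreshed at a different period per level; this sketch is illustrative only and does not reproduce the paper's establishment or update protocols.

```python
import hashlib

# Illustrative only: higher (more exposed) levels refresh more often.
# The periods and key derivation below are invented for the sketch.

def next_key(key: bytes) -> bytes:
    """One-way update: old keys cannot be recovered from the new one."""
    return hashlib.sha256(b"key-update" + key).digest()

UPDATE_PERIOD = {0: 32, 1: 8, 2: 2}   # rounds between updates, by level

def maybe_update(level: int, round_no: int, key: bytes) -> bytes:
    if round_no % UPDATE_PERIOD[level] == 0:
        return next_key(key)
    return key

keys = {lvl: hashlib.sha256(f"initial-{lvl}".encode()).digest()
        for lvl in range(3)}
for rnd in range(1, 65):
    for lvl in keys:
        keys[lvl] = maybe_update(lvl, rnd, keys[lvl])
print({lvl: k.hex()[:8] for lvl, k in keys.items()})
```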

A Distributed SPARQL Query Processing Scheme Considering Data Locality and Query Execution Path (데이터 지역성 및 질의 수행 경로를 고려한 분산 SPARQL 질의 처리 기법)

  • Kim, Byounghoon;Kim, Daeyun;Ko, Geonsik;Noh, Yeonwoo;Lim, Jongtae;Bok, Kyoungsoo;Lee, Byoungyup;Yoo, Jaesoo
    • KIISE Transactions on Computing Practices
    • /
    • v.23 no.5
    • /
    • pp.275-283
    • /
    • 2017
  • A large amount of RDF data has been generated with the growth of semantic web services, and various distributed storage and query processing schemes have been studied to use these massive amounts of RDF data efficiently. In this paper, we propose a distributed SPARQL query processing scheme that considers the data locality and query execution path of large RDF data in order to reduce join and communication costs. In a distributed environment, a SPARQL query is divided into several sub-queries according to the conditions of its WHERE clause, taking data locality into account. The proposed scheme reduces data communication costs by grouping and processing the sub-queries through an index based on associated nodes. In addition, to avoid unnecessary joins and reduce latency, it creates an efficient query execution path that considers data parsing cost, the amount of data communication at each node, and latency. Various performance evaluations show that the proposed scheme outperforms the existing scheme.
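
A toy decomposition of a WHERE clause into locality-aware sub-query groups; the predicate placement map and triple patterns are hypothetical, and the paper's index over associated nodes is more sophisticated than this bucketing.

```python
from collections import defaultdict

# Triple patterns are bucketed by the node that stores their predicate,
# so each bucket can run as one sub-query without cross-node joins.

PREDICATE_PLACEMENT = {          # predicate -> storage node (hypothetical)
    "foaf:name": "node-1",
    "foaf:knows": "node-1",
    "dc:title": "node-2",
    "dc:creator": "node-2",
}

where_clause = [
    ("?person", "foaf:name", "?name"),
    ("?person", "foaf:knows", "?friend"),
    ("?doc", "dc:creator", "?person"),
    ("?doc", "dc:title", "?title"),
]

subqueries = defaultdict(list)
for s, p, o in where_clause:
    subqueries[PREDICATE_PLACEMENT[p]].append((s, p, o))

for node, patterns in subqueries.items():
    print(node, "executes:", patterns)

# A planner would then order these sub-queries along an execution path
# that joins on shared variables (here ?person) with the least data
# movement between nodes.
```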