• Title/Summary/Keyword: Distributed algorithms


Blockchain for Securing Smart Grids

  • Aldabbagh, Ghadah;Bamasag, Omaimah;Almasari, Lola;Alsaidalani, Rabab;Redwan, Afnan;Alsaggaf, Amaal
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.4
    • /
    • pp.255-263
    • /
    • 2021
  • Smart grid is a fully automated, bi-directional power transmission network built on the physical grid, combining sensor measurement, computing, information communication, and automatic control technology. Blockchain technology, with its security features, can be integrated with smart grids to provide secure and efficient power management and transmission. This paper discusses the deployment of blockchain technology in the smart grid. It presents the application areas and protocols in which blockchain can be applied to secure the smart grid, and explores one application from each area in detail, such as efficient peer-to-peer transactions, lower platform costs, faster processes, and greater flexibility from power generation through transmission, distribution, and consumption across different energy storage systems. It also reviews the current barriers obstructing blockchain adoption, which has reached some maturity in financial services but remains at the concept stage in energy and other sectors. For the wide range of energy applications, the paper suggests a suitable blockchain architecture for smart grid operations, a sample block structure, and the blockchain technicalities it would employ, together with efficient blockchain-based data aggregation schemes to overcome the privacy and security challenges in the smart grid. Consensus algorithms and protocols are then discussed, along with monitoring of usage and statistics in energy distribution systems, which can also be used to remotely control energy flow to a particular area, and blockchain-based frameworks that help in the diagnosis and maintenance of smart grid equipment. Several commercial implementations of blockchain in the smart grid are also discussed. Finally, the challenges of integrating these technologies are examined. Overall, blockchain technology shows considerable potential from a customer perspective as well and should be further developed by market participants. The approaches seen thus far may have a disruptive effect in the future and might require additional regulatory intervention in an already tightly regulated energy market. If blockchains are to deliver benefits for consumers (whether as consumers or prosumers of energy), a strong focus on consumer issues will be needed.
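
For illustration, the following is a minimal sketch of the kind of hash-chained block structure the abstract refers to; the field names (meter_id, kwh, price) and the SHA-256 chaining are assumptions made for this example, not the block structure proposed in the paper.

```python
# Minimal sketch of a hash-chained block holding smart-grid transaction data.
# Field names (meter_id, kwh, price) are illustrative assumptions only.
import hashlib, json, time

def make_block(prev_hash, transactions):
    """Bundle peer-to-peer energy transactions into a block linked to prev_hash."""
    header = {
        "prev_hash": prev_hash,
        "timestamp": time.time(),
        "tx_root": hashlib.sha256(
            json.dumps(transactions, sort_keys=True).encode()).hexdigest(),
    }
    header["hash"] = hashlib.sha256(
        json.dumps(header, sort_keys=True).encode()).hexdigest()
    return {"header": header, "transactions": transactions}

genesis = make_block("0" * 64, [])
block1 = make_block(genesis["header"]["hash"],
                    [{"meter_id": "A17", "kwh": 3.2, "price": 0.41}])
print(block1["header"]["hash"])
```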

An improved regularized particle filter for remaining useful life prediction in nuclear plant electric gate valves

  • Xu, Ren-yi;Wang, Hang;Peng, Min-jun;Liu, Yong-kuo
    • Nuclear Engineering and Technology
    • /
    • v.54 no.6
    • /
    • pp.2107-2119
    • /
    • 2022
  • Accurate remaining useful life (RUL) prediction for critical components of nuclear power equipment is an important way to realize aging management of nuclear power equipment. The electric gate valve is one of the most safety-critical and widely distributed mechanical components in nuclear power installations. However, the electric gate valve's extended service in nuclear installations causes aging and degradation induced by crack propagation and leakage. Hence, it is necessary to develop a robust RUL prediction method to evaluate its operating state. Although the particle filter (PF) algorithm and its variants can deal with this nonlinear problem effectively, they suffer from severe particle degeneracy and depletion, which leads to sub-optimal performance. In this study, we combined the whale algorithm with regularized particle filtering (RPF) to rationalize the particle distribution before resampling, thereby alleviating particle degeneracy, and applied it to valve RUL prediction. The valve's crack propagation is studied using the RPF approach, which takes the Paris law as its condition function. The crack growth is observed and updated using the root-mean-square (RMS) signal collected from an acoustic emission sensor. The proposed method is compared with other optimization algorithms, such as the particle swarm optimization algorithm, and verified against realistic valve aging experimental data. The results show that the proposed method can effectively predict and analyze typical valve degradation patterns.
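
As a rough illustration of the approach described above, the sketch below combines a particle-filter propagate/update step with the Paris law da/dN = C(ΔK)^m as the state-transition model and an RMS acoustic-emission reading as the measurement; the constants, noise levels, and the linear measurement model are illustrative assumptions, not values from the study.

```python
# Sketch of one particle-filter step for crack growth governed by the Paris law
# da/dN = C * (dK)^m, with dK approximated as s * sqrt(pi * a).
# C, m, s, and the noise levels are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)
N = 500                      # number of particles
C, m, s = 1e-10, 3.0, 50.0   # assumed Paris-law constants and stress range

particles = rng.normal(1e-3, 1e-4, N)     # initial crack lengths [m]
weights = np.full(N, 1.0 / N)

def propagate(a, cycles=1000):
    """Advance each particle's crack length by the Paris law plus process noise."""
    dK = s * np.sqrt(np.pi * a)
    return a + C * dK**m * cycles + rng.normal(0, 1e-6, a.shape)

def update(weights, a, rms_obs, sigma=0.05):
    """Reweight particles by the likelihood of the observed RMS AE signal,
    assuming (for illustration only) RMS grows linearly with crack length."""
    rms_pred = 40.0 * a                     # hypothetical measurement model
    like = np.exp(-0.5 * ((rms_obs - rms_pred) / sigma) ** 2)
    w = weights * like
    return w / w.sum()

particles = propagate(particles)
weights = update(weights, particles, rms_obs=0.06)
print("posterior mean crack length:", np.average(particles, weights=weights))
```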

Semantic Object Detection based on LiDAR Distance-based Clustering Techniques for Lightweight Embedded Processors (경량형 임베디드 프로세서를 위한 라이다 거리 기반 클러스터링 기법을 활용한 의미론적 물체 인식)

  • Jung, Dongkyu;Park, Daejin
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.10
    • /
    • pp.1453-1461
    • /
    • 2022
  • The accuracy of surrounding-object recognition algorithms using 3D data sensors such as LiDAR in autonomous vehicles has been improved through many studies, but this requires high-performance hardware and complex structures. Such object recognition algorithms place a large load on the main processor of an autonomous vehicle, which must run and manage many processes while driving. To reduce this load while still exploiting the advantages of 3D sensor data, we propose 2D data-based recognition using ROIs generated by extracting physical properties from the 3D sensor data. In an environment where the brightness of the base image was reduced by 50%, the proposed method showed 5.3% higher accuracy and a 28.57% shorter processing time than the existing 2D-based model. On the base image, it shows 2.46% lower accuracy than the 3D-based model but reduces processing time by 6.25%.
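
The sketch below illustrates the general idea of distance-based clustering of LiDAR points to generate ROIs for a 2D detector; the single-linkage threshold and the projection function are placeholder assumptions, not the parameters used in the paper.

```python
# Minimal sketch: cluster LiDAR points by Euclidean distance, then project each
# cluster's bounding box into image space as a candidate ROI for a 2D detector.
import numpy as np

def euclidean_cluster(points, threshold=0.5):
    """Greedy single-linkage clustering on 3D points (small point sets only)."""
    labels = -np.ones(len(points), dtype=int)
    cluster_id = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        stack, labels[i] = [i], cluster_id
        while stack:
            j = stack.pop()
            near = np.where((np.linalg.norm(points - points[j], axis=1) < threshold)
                            & (labels == -1))[0]
            labels[near] = cluster_id
            stack.extend(near.tolist())
        cluster_id += 1
    return labels

def cluster_to_roi(points, labels, project):
    """Project each cluster's points into image space and take the bounding box."""
    rois = []
    for c in np.unique(labels):
        uv = np.array([project(p) for p in points[labels == c]])
        rois.append((uv[:, 0].min(), uv[:, 1].min(), uv[:, 0].max(), uv[:, 1].max()))
    return rois

pts = np.random.rand(200, 3) * 20.0                 # placeholder point cloud
labels = euclidean_cluster(pts, threshold=2.0)
rois = cluster_to_roi(pts, labels, project=lambda p: (p[0] * 10, p[1] * 10))
print(len(rois), "candidate ROIs")
```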

A Review on Advanced Methodologies to Identify the Breast Cancer Classification using the Deep Learning Techniques

  • Bandaru, Satish Babu;Babu, G. Rama Mohan
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.4
    • /
    • pp.420-426
    • /
    • 2022
  • Breast cancer is among the cancers that can be cured when the disease is diagnosed early, before it spreads to other areas of the body. The Automatic Analysis of Diagnostic Tests (AAT) is an automated assistant for physicians that can deliver reliable findings for analyzing critically dangerous diseases. Deep learning, a family of machine learning methods, has grown at an astonishing pace in recent years and is used to search for and render diagnoses in fields from banking to medicine. We attempt to create a deep learning algorithm that can reliably diagnose breast cancer in mammograms. We want the algorithm to label each image as cancer or not cancer, allowing a full testing dataset to be used with either strong clinical annotations in the training data or the cancer status only, in which only a few cancer or non-cancer images are annotated. Even with this technique, the images would be annotated with the condition; an optional portion of the annotated image then acts as the label. The final stage of the suggested system does not need any such labels to be available during model training. Furthermore, the results of the review suggest that deep learning approaches have surpassed the state of the art in tumor identification, feature extraction, and classification. The paper explains the three ways in which learning algorithms were applied: training the network from scratch, transferring certain deep learning concepts and constraints into a network, and reducing the number of parameters in the trained nets, the latter two of which help expand the scope of the networks. Researchers in economically developing countries have applied deep learning imaging devices to cancer detection, while cancer incidence has risen sharply in Africa. The Convolutional Neural Network (CNN) is a kind of deep learning that can assist with a variety of activities, such as speech recognition, image recognition, and classification. To accomplish this goal, in this article we use a CNN to categorize and identify breast cancer images from databases available from the US Centers for Disease Control and Prevention.
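
As a hedged illustration of the CNN-based classification the review surveys, the following Keras sketch defines a small binary mammogram classifier; the layer sizes and the 224x224 single-channel input are assumptions for this example, not an architecture taken from the reviewed papers.

```python
# Minimal sketch of a CNN mammogram classifier of the kind the review surveys.
# Layer sizes and the 224x224 single-channel input are illustrative assumptions.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # cancer / non-cancer
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, epochs=10)  # requires a labeled mammogram dataset
```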

Evaluation and Comparative Analysis of Scalability and Fault Tolerance for Practical Byzantine Fault Tolerant based Blockchain (프랙티컬 비잔틴 장애 허용 기반 블록체인의 확장성과 내결함성 평가 및 비교분석)

  • Lee, Eun-Young;Kim, Nam-Ryeong;Han, Chae-Rim;Lee, Il-Gu
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.2
    • /
    • pp.271-277
    • /
    • 2022
  • PBFT (Practical Byzantine Fault Tolerance) is a consensus algorithm that can achieve consensus by resolving unintentional and intentional faults in a distributed network environment and can guarantee high performance and absolute finality. However, as the size of the network increases, the network load also increases due to the message broadcasting that occurs repeatedly during the consensus process. Owing to these characteristics, the PBFT algorithm is suitable for small or private blockchains, but there is a limit to its application to large or public blockchains. Because PBFT affects the performance of blockchain networks, industry should test whether PBFT is suitable for its products and services, and academia needs a unified evaluation metric and methodology for research on improving PBFT performance. In this paper, quantitative evaluation metrics and an evaluation framework for PBFT-family consensus algorithms are studied. In addition, the throughput, latency, and fault tolerance of PBFT are evaluated using the proposed PBFT evaluation framework.
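
To make the evaluation metrics concrete, the sketch below computes the standard PBFT fault-tolerance bound f = ⌊(n−1)/3⌋, an approximate per-request message count for the pre-prepare/prepare/commit phases, and throughput over an observation window; the example numbers are assumptions, not measurements from the proposed framework.

```python
# Sketch of quantitative metrics for PBFT-family consensus: fault-tolerance bound,
# per-round message count (the source of the O(n^2) broadcast load), and throughput.
# The formulas are standard PBFT properties; the sample numbers are assumptions.

def pbft_fault_tolerance(n):
    """Maximum number of Byzantine replicas a network of n replicas tolerates."""
    return (n - 1) // 3

def pbft_messages_per_consensus(n):
    """Approximate messages per request: pre-prepare + prepare + commit phases."""
    return (n - 1) + n * (n - 1) + n * (n - 1)

def throughput(committed_tx, elapsed_s):
    """Transactions per second over an observation window."""
    return committed_tx / elapsed_s

for n in (4, 16, 64):
    print(n, pbft_fault_tolerance(n), pbft_messages_per_consensus(n))
print("TPS:", throughput(committed_tx=12_000, elapsed_s=60.0))
```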

Comparison of soil erosion simulation between empirical and physics-based models

  • Yeon, Min Ho;Kim, Seong Won;Jung, Sung Ho;Lee, Gi Ha
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2020.06a
    • /
    • pp.172-172
    • /
    • 2020
  • In recent years, soil erosion has come to be regarded as an essential environmental problem for human life. Soil erosion causes various on- and off-site problems such as ecosystem destruction, decreased agricultural productivity, increased riverbed deposition, and deterioration of water quality in streams. To solve these problems, it is necessary to quantify where, when, and how much soil erosion occurs. Empirical erosion models such as the Universal Soil Loss Equation (USLE) family have been widely used to produce spatially distributed soil erosion vulnerability maps. Although these models detect vulnerable sites relatively well by utilizing large datasets on climate, geography, geology, land use, and so on within the study domain, they do not adequately describe the physical process of soil erosion on the ground surface caused by rainfall or overland flow. In other words, such models remain powerful tools for distinguishing erosion-prone areas at the macro scale, but physics-based models are necessary to better analyze soil erosion, deposition, and eroded particle transport. In this study, the physics-based Surface Soil Erosion Model (SSEM) was upgraded based on field survey information to produce sediment yield at the watershed scale. The modified model (hereafter MoSE) adopts new algorithms for rainfall kinetic energy and surface flow transport capacity to simulate soil erosion more reliably. For model validation, we applied the model to the Doam dam watershed in Gangwon-do and compared the simulation results with the USLE outputs. The results show that the revised physics-based soil erosion model provides more reliable simulation results than the USLE in terms of the spatial distribution of soil erosion and deposition.
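
For reference, the empirical baseline mentioned above can be summarized as the per-cell USLE product A = R·K·LS·C·P; the sketch below applies it to a placeholder factor grid, which is purely illustrative and not data from the Doam dam watershed study.

```python
# Sketch of the empirical USLE computation the physics-based model is compared
# against: per-cell soil loss A = R * K * LS * C * P. The factor grids below are
# illustrative placeholders, not data from the study watershed.
import numpy as np

def usle_soil_loss(R, K, LS, C, P):
    """Annual soil loss per cell as the product of the five USLE factors."""
    return R * K * LS * C * P

shape = (100, 100)                        # hypothetical watershed grid
R = np.full(shape, 4500.0)                # rainfall erosivity
K = np.random.uniform(0.1, 0.4, shape)    # soil erodibility
LS = np.random.uniform(0.5, 8.0, shape)   # slope length/steepness
C = np.random.uniform(0.01, 0.3, shape)   # cover management
P = np.ones(shape)                        # support practice

A = usle_soil_loss(R, K, LS, C, P)
print("mean annual soil loss:", A.mean())
```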


Performance Analysis and Comparison of Stream Ciphers for Secure Sensor Networks (안전한 센서 네트워크를 위한 스트림 암호의 성능 비교 분석)

  • Yun, Min;Na, Hyoung-Jun;Lee, Mun-Kyu;Park, Kun-Soo
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.18 no.5
    • /
    • pp.3-16
    • /
    • 2008
  • A Wireless Sensor Network (WSN) is a wireless network consisting of distributed small devices called sensor nodes or motes. Recently, there has been extensive research on WSNs and their security. For secure storage and secure transmission of the sensed information, sensor nodes should be equipped with cryptographic algorithms. Moreover, these algorithms should be implemented efficiently, since sensor nodes are highly resource-constrained devices. There are already some existing algorithms applicable to sensor nodes, including public key ciphers such as TinyECC and standard block ciphers such as AES. Stream ciphers, however, are still to be analyzed, since they were only recently standardized in the eSTREAM project. In this paper, we implement on the MicaZ platform nine of the ten software-based stream ciphers in the second and final phases of the eSTREAM project, and we evaluate their performance. In particular, we apply several optimization techniques to six ciphers, including SOSEMANUK, Salsa20, and Rabbit, which survived the final phase of the eSTREAM project. We also present the implementation results of hardware-oriented stream ciphers and AES-CFB for reference. According to our experiments, the encryption speeds of these software-based stream ciphers are in the range of 31-406 Kbps, so most of these ciphers are fairly acceptable for sensor nodes. In particular, the survivors SOSEMANUK, Salsa20, and Rabbit show throughputs of 406 Kbps, 176 Kbps, and 121 Kbps using 70 KB, 14 KB, and 22 KB of ROM and 2811 B, 799 B, and 755 B of RAM, respectively. From the viewpoint of encryption speed, these ciphers perform much better than software-based AES, which shows a speed of 106 Kbps.
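
As a loose illustration of the kind of throughput measurement reported above (not the paper's MicaZ C implementations), the sketch below times Salsa20 encryption using PyCryptodome on a PC; absolute numbers on a desktop CPU will be far higher than the mote figures quoted in the abstract.

```python
# Sketch of a stream-cipher throughput measurement, here with PyCryptodome's
# Salsa20 on a PC purely for illustration; the paper measures C implementations
# on the MicaZ mote, so absolute numbers differ greatly.
import time
from Crypto.Cipher import Salsa20  # pip install pycryptodome

key = bytes(32)                # placeholder 256-bit key
plaintext = bytes(64) * 1024   # 64 KiB of zero bytes as test data

cipher = Salsa20.new(key=key)
start = time.perf_counter()
ciphertext = cipher.encrypt(plaintext)
elapsed = time.perf_counter() - start

print(f"throughput: {len(plaintext) * 8 / elapsed / 1e3:.1f} Kbps")
```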

A Dynamic Load Balancing Scheme based on Host Load Information in a Wireless Internet Proxy Server Cluster (무선 인터넷 프록시 서버 클러스터에서 호스트 부하 정보에 기반한 동적 부하 분산 방안)

  • Kwak Hu-Keun;Chung Kyu-Sik
    • Journal of KIISE:Information Networking
    • /
    • v.33 no.3
    • /
    • pp.231-246
    • /
    • 2006
  • A server load balancer is used to accept client requests and distribute them to one of the servers in a wireless internet proxy server cluster. LVS (Linux Virtual Server), a software-based server load balancer, supports several load balancing algorithms in which client requests are distributed to servers in a round-robin way, in a hashing-based way, or by assigning each request to the server with the fewest concurrent connections to LVS. An improved load balancing algorithm that considers server performance was previously proposed, in which upper and lower limits on the number of concurrent connections allowed for each server's maximum performance are determined in advance and these static limits are applied to load balancing. However, that approach does not dynamically apply run-time server load information to load balancing. In this paper, we propose a dynamic load balancing scheme in which the load balancer keeps each server's CPU load information at run time and assigns each new client request to the server with the lowest load. Using a cluster consisting of 16 PCs, we performed experiments with static content (images and HTML). Compared to the existing schemes, the experimental results show performance improvements for client requests requiring CPU-intensive processing and for a cluster consisting of servers with different performance.
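
A minimal sketch of the dispatch rule described above, assuming the per-host CPU loads have already been collected by the load balancer, is shown below; the load values and the per-request load increment are illustrative assumptions.

```python
# Sketch of the dynamic scheduling rule: dispatch each new request to the server
# currently reporting the lowest CPU load. The load-reporting mechanism (periodic
# push from each host) is abstracted into a dictionary here.
cpu_load = {"server1": 0.72, "server2": 0.35, "server3": 0.58}  # illustrative values

def pick_server(loads):
    """Return the server with the lowest reported CPU load."""
    return min(loads, key=loads.get)

def dispatch(request, loads):
    target = pick_server(loads)
    loads[target] += 0.05        # crude assumption: each request adds ~5% load
    return target

for req in range(5):
    print(req, "->", dispatch(req, cpu_load))
```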

A Tunable Transmitter - Tunable Receiver Algorithm for Accessing the Multichannel Slotted-Ring WDM Metropolitan Network under Self-Similar Traffic

  • Sombatsakulkit, Ekanun;Sa-Ngiamsak, Wisitsak;Sittichevapak, Suvepol
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.777-781
    • /
    • 2004
  • This paper presents a medium access control (MAC) algorithm for the multichannel slotted-ring topology used in wavelength division multiplexing (WDM) networks. In multichannel rings, two main architectures have previously been proposed: Tunable Transmitter - Fixed Receiver (TTFR) and Fixed Transmitter - Tunable Receiver (FTTR). With TTFR, nodes can only receive packets on a fixed wavelength and can send packets on any wavelength according to the packet's destination. The disadvantage of this architecture is that it requires as many wavelengths as there are nodes in the network, which is clearly a scalability limitation. In contrast, the FTTR architecture has the advantage that the number of nodes can be much larger than the number of wavelengths: source nodes send packets on a fixed channel (or wavelength) and destination nodes can receive packets on any wavelength. If there are fewer wavelengths than nodes in the network, the nodes also have to share the wavelengths available for transmission. However, the fixed-wavelength approach of TTFR and FTTR leads to low network utilization, because a source node with waiting data has to wait for an incoming empty slot on the corresponding wavelength. This paper therefore presents a Tunable Transmitter - Tunable Receiver (TTTR) approach, in which the transmitting node can send a packet over any wavelength and the receiving node can receive a packet from any wavelength. Moreover, self-similar distributed input traffic is used to evaluate the performance of the proposed algorithm, since self-similar traffic captures long-duration burstiness better than the Poisson distribution. To increase bandwidth efficiency, the destination stripping approach is used: a slot that has reached its destination is marked as empty immediately at the destination node (see the sketch below), so it does not need to travel back to the source node to be emptied as in the source stripping approach. A MATLAB simulator is used to evaluate the performance of FTTR, TTFR, and TTTR over 4- and 16-node ring networks. The simulation results show that the proposed algorithm achieves higher network utilization and average throughput per node, and reduces the average queuing delay. As future work, mathematical analysis of these algorithms will be the main research topic.
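
The destination-stripping rule can be sketched as follows; the slot and node counts and the queue contents are illustrative assumptions, not the 4- and 16-node configurations simulated in the paper.

```python
# Sketch of destination stripping in a slotted ring: when a slot arrives at its
# destination node, the payload is delivered and the slot is marked empty
# immediately, instead of circulating back to the source to be freed.
from collections import deque

NUM_NODES = 4
ring = deque([{"busy": False, "dst": None, "data": None} for _ in range(8)])

def node_action(node_id, slot, tx_queue):
    if slot["busy"] and slot["dst"] == node_id:
        print(f"node {node_id} received {slot['data']!r}; slot stripped")
        slot["busy"] = False                      # destination stripping
    if not slot["busy"] and tx_queue:
        dst, data = tx_queue.pop(0)               # reuse the freed slot at once
        slot.update(busy=True, dst=dst, data=data)
    return slot

queues = {0: [(2, "pkt-A")], 1: [], 2: [(0, "pkt-B")], 3: []}
for step in range(16):                            # rotate slots around the ring
    for node_id in range(NUM_NODES):
        ring[node_id] = node_action(node_id, ring[node_id], queues[node_id])
    ring.rotate(1)
```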


Revision of ART with Iterative Partitioning for Performance Improvement (입력 도메인 반복 분할 기법 성능 향상을 위한 고려 사항 분석)

  • Shin, Seung-Hun;Park, Seung-Kyu;Jung, Ki-Hyun
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.46 no.3
    • /
    • pp.64-76
    • /
    • 2009
  • Adaptive Random Testing through Iterative Partitioning (IP-ART) is one of the Adaptive Random Testing (ART) techniques. IP-ART iteratively partitions the input domain to improve on early versions of ART, which have significant drawbacks in computation time. Another version of IP-ART, named EIP-ART (IP-ART with Enlarged Input Domain), uses a virtually enlarged input domain to remove the unevenly distributed parts near the boundary of the domain. EIP-ART can mitigate the non-uniform test case distribution of IP-ART and achieve relatively high performance in a variety of input domain environments. The EIP-ART algorithm, however, has the drawback of higher computation time for generating test cases, mainly due to the additional workload from the enlarged input domain. For this reason, a revised version of IP-ART without input domain enlargement needs to improve the distribution of test cases in order to remove the additional time cost. We explore three smoothing algorithms that influence the distribution of test cases and analyze whether they yield any performance improvement. The simulation results show that the restriction-area management algorithm achieves better performance than the others.
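
A minimal sketch of the iterative-partitioning idea behind IP-ART on a 2D unit-square input domain is given below; the restriction rule (a cell and its eight neighbours) and the refinement schedule are simplified assumptions, not the paper's exact algorithm or its restriction-area management variant.

```python
# Minimal sketch of iterative partitioning for adaptive random testing: divide the
# domain into a k x k grid, restrict cells holding or adjacent to existing test
# cases, draw the next test case from an unrestricted cell, and refine the grid
# when no unrestricted cell remains. Details are simplified assumptions.
import random

def cell_of(pt, k):
    return (min(int(pt[0] * k), k - 1), min(int(pt[1] * k), k - 1))

def restricted_cells(tests, k):
    """Cells holding a test case plus their eight neighbours."""
    blocked = set()
    for t in tests:
        cx, cy = cell_of(t, k)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                blocked.add((cx + dx, cy + dy))
    return blocked

def next_test(tests, k=2):
    while True:
        blocked = restricted_cells(tests, k)
        free = [(x, y) for x in range(k) for y in range(k) if (x, y) not in blocked]
        if free:
            cx, cy = random.choice(free)
            return ((cx + random.random()) / k, (cy + random.random()) / k)
        k *= 2                       # refine the partition and try again

tests = []
for _ in range(10):
    tests.append(next_test(tests))
print(tests[:3])
```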