• Title/Summary/Keyword: Data Redundancy

Cellular-Automata Based Node Scheduling Scheme for Wireless Sensor Networks (무선 센서 네트워크를 위한 셀룰러 오토마타 기반의 노드 스케줄링 제어)

  • Byun, Heejung;Shon, Sugook
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.39B no.10
    • /
    • pp.708-714
    • /
    • 2014
  • Wireless sensor networks (WSNs) generally consist of densely deployed sensor nodes that depend on batteries for energy. Having a large number of densely deployed sensor nodes causes energy waste and high redundancy in sensor data transmissions. The problems of power limitation and high redundancy in sensing coverage can be solved by appropriate scheduling of node activity among sensor nodes. In this paper, we propose a cellular automata based node scheduling algorithm for prolonging network lifetime with a balance of energy savings among nodes while achieving high coverage quality. Based on a cellular automata framework, we propose a new mathematical model for the node scheduling algorithm. The proposed algorithm uses local interaction based on environmental state signaling for making scheduling decisions. We analyze the system behavior and derive steady states of the proposed system. Simulation results show that the proposed algorithm outperforms existing protocols by providing energy balance with significant energy savings while maintaining sensing coverage quality.
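
To make the scheduling idea concrete, here is a minimal sketch of a cellular-automata node-scheduling step in Python. The torus grid, Moore neighborhood, and coverage threshold are illustrative assumptions standing in for the paper's mathematical model, which is not reproduced here.

```python
import random

# Illustrative CA scheduling sketch (assumed rule, not the paper's model):
# each sensor is a cell on a torus grid with state 1 (active) or 0 (asleep);
# a cell sleeps when enough Moore neighbors already cover its area.
GRID = 10                  # grid side length
COVERING_NEIGHBORS = 3     # active neighbors needed before a node may sleep

def neighbor_states(cells, x, y):
    """States of the 8 Moore neighbors of cell (x, y), wrapping around."""
    return [cells[(x + dx) % GRID][(y + dy) % GRID]
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def step(cells):
    """One synchronous update: sleep if covered by neighbors, else wake."""
    return [[0 if sum(neighbor_states(cells, x, y)) >= COVERING_NEIGHBORS else 1
             for y in range(GRID)] for x in range(GRID)]

cells = [[random.randint(0, 1) for _ in range(GRID)] for _ in range(GRID)]
for _ in range(5):         # iterate the local rule toward a steady state
    cells = step(cells)
print(sum(map(sum, cells)), "of", GRID * GRID, "nodes active")
```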

An Efficient Lossless Compression Algorithm using Arithmetic Coding for Indexed Color Images (산술부호화를 이용한 인덱스 칼라 이미지에서의 효율적인 무손실 압축 방법)

  • You Kang-Soo;Lee Han-Jeong;Jang Euee S.;Kwak Hoon-Sung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.1C
    • /
    • pp.35-43
    • /
    • 2005
  • This paper introduces a new algorithm to improve the compression performance of 256-color images, called palette-based or indexed images. The proposed scheme counts the frequency of the index values that follow the current index value and assigns each index value a rank by sorting these frequencies in descending order. It then produces a ranked index image by replacing each index value in the original image with its rank. In the distribution of the resulting ranked index image, the higher an index value is ranked, the more often its rank appears, so data redundancy is increased and more efficient compression can be expected. Simulation results verify that, owing to a compression ratio higher by up to 22.5, the newly designed algorithm clearly outperforms plain arithmetic coding, intensity-based JPEG-LS, and palette-based GIF.
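
The rank-transform idea can be sketched as follows, simplified to a single whole-image frequency ranking (the paper's scheme appears to rank adaptively by context); the toy pixel data is invented for illustration.

```python
from collections import Counter

# Sketch of the rank transform: replace each palette index by its frequency
# rank (0 = most frequent), skewing the source toward small values so a
# downstream entropy coder such as arithmetic coding compresses it better.
def rank_transform(indices):
    ranks = {idx: r for r, (idx, _) in enumerate(Counter(indices).most_common())}
    return [ranks[i] for i in indices]

pixels = [7, 7, 3, 7, 200, 3, 7, 7, 3, 42]   # toy indexed "image"
print(rank_transform(pixels))                # frequent index 7 becomes rank 0
```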

Response Analysis of MW-Class Floating Offshore Wind Power System using International Standard IEC61400-3-2

  • Yu, Youngjae;Shin, Hyunkyoung
    • Journal of Ocean Engineering and Technology
    • /
    • v.34 no.6
    • /
    • pp.454-460
    • /
    • 2020
  • In 2019, the Korean government announced the 3rd Basic Plan for Energy, which includes raising the share of renewable energy generation to 30-40% by 2040. Hence, offshore wind power generation, which is relatively easy to construct over large areas, should be considered. Off the East Sea coast of Korea, the water depth reaches 50 m only 2.5 km from the coastline, deeper than off the west coast. Therefore, offshore wind power projects on the East Sea coast should consider floating offshore wind turbines rather than fixed ones. In this study, a response analysis was performed by applying the analysis conditions of IEC61400-3-2 to the design of floating offshore wind power generation systems. The newly revised IEC61400-3-2 international standard specifies the design load cases to be considered for floating offshore wind power systems. The superstructure used in the numerical analysis was the 5-MW-class wind turbine developed by the National Renewable Energy Laboratory (NREL), and the marine environmental conditions required for the analysis were based on the Ulsan meteorological buoy data from the Korea Meteorological Administration. FAST v8, developed by NREL, was used for the coupled analysis. From the simulation, the maximum responses of the six degree-of-freedom motions and the maximum load responses of the joints were compared, and redundancy was verified under abnormal conditions. The results indicate that the platform has a maximum displacement radius of approximately 40 m under an extreme sea state, and that when one mooring line breaks, this distance increases to approximately 565 m. In conclusion, redundancy should be verified when determining the design of floating offshore wind farms or the arrangement of their mooring systems.

FASIM: Fragments Assembly Simulation using Biased-Sampling Model and Assembly Simulation for Microbial Genome Shotgun Sequencing

  • Hur Cheol-Goo;Kim Sunny;Kim Chang-Hoon;Yoon Sung-Ho;In Yong-Ho;Kim Cheol-Min;Cho Hwan-Gue
    • Journal of Microbiology and Biotechnology
    • /
    • v.16 no.5
    • /
    • pp.683-688
    • /
    • 2006
  • We have developed a program for generating shotgun data sets from known genome sequences. Generating synthetic data sets by computer program is a useful alternative to real data, to which students and researchers have limited access. The uniformly distributed sampling of clones adopted by previous programs cannot account for the real situation, in which sampled reads tend to come from particular regions of the target genome. To reflect this situation, a probabilistic model of the biased sampling distribution was developed using an experimental data set derived from a microbial genome project. Among the experimental parameters tested (varied fragment or read lengths, chimerism, and sequencing error), the extent of sequencing error was the most critical factor hampering sequence assembly. We propose that an optimal sequencing strategy employing different insert lengths and redundancy levels can be established by performing a variety of simulations.
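
The following sketch illustrates biased read sampling with substitution errors. The hotspot mixture, its parameters, and the error model are assumptions made for illustration, not FASIM's fitted model.

```python
import random

BASES = "ACGT"

def sample_reads(genome, n_reads, read_len, err_rate=0.01,
                 hotspots=((0.2, 0.05), (0.7, 0.05)), bias=0.5):
    """Draw read start positions from a uniform/hotspot mixture and
    apply simple substitution sequencing errors."""
    reads = []
    for _ in range(n_reads):
        if random.random() < bias:            # biased draw near a hotspot
            center, width = random.choice(hotspots)
            pos = int(random.gauss(center, width) * len(genome))
        else:                                 # uniform background draw
            pos = random.randrange(len(genome))
        pos = max(0, min(pos, len(genome) - read_len))
        read = [b if random.random() > err_rate else random.choice(BASES)
                for b in genome[pos:pos + read_len]]
        reads.append("".join(read))
    return reads

genome = "".join(random.choice(BASES) for _ in range(10_000))
print(sample_reads(genome, 3, 50))
```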

A comparative study of filter methods based on information entropy

  • Kim, Jung-Tae;Kum, Ho-Yeun;Kim, Jae-Hwan
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.40 no.5
    • /
    • pp.437-446
    • /
    • 2016
  • Feature selection has become an essential technique for reducing the dimensionality of data sets. Many features are irrelevant or redundant for classification tasks, and the purpose of feature selection is to select relevant features and remove irrelevant and redundant ones. Applications of feature selection range from text processing, face recognition, bioinformatics, speaker verification, and medical diagnosis to financial domains. In this study, we focus on filter methods based on information entropy: IG (Information Gain), FCBF (Fast Correlation Based Filter), and mRMR (minimum Redundancy Maximum Relevance). FCBF has the advantage of a reduced computational burden, achieved by eliminating redundant features that satisfy the condition of an approximate Markov blanket. However, FCBF considers only the relevance between each feature and the class when selecting the best features, failing to take the interaction between features into consideration. In this paper, we propose an improved FCBF that overcomes this shortcoming, and we perform a comparative study to evaluate the performance of the proposed method.
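
The entropy quantities these filters share can be sketched as below for discrete features; FCBF's approximate-Markov-blanket elimination and the paper's interaction-aware extension are omitted.

```python
import math
from collections import Counter

def entropy(xs):
    """Shannon entropy H(X) of a discrete sequence, in bits."""
    n = len(xs)
    return -sum(c / n * math.log2(c / n) for c in Counter(xs).values())

def info_gain(feature, labels):
    """IG(C; F) = H(C) - H(C | F)."""
    n = len(labels)
    cond = 0.0
    for v in set(feature):
        sub = [c for f, c in zip(feature, labels) if f == v]
        cond += len(sub) / n * entropy(sub)
    return entropy(labels) - cond

def symmetric_uncertainty(feature, labels):
    """SU = 2 * IG / (H(F) + H(C)); FCBF ranks features by SU with the class."""
    return 2 * info_gain(feature, labels) / (entropy(feature) + entropy(labels))

f = [0, 0, 1, 1, 0, 1]
c = ["a", "a", "b", "b", "a", "b"]
print(symmetric_uncertainty(f, c))   # 1.0: the feature determines the class
```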

A Study on BMS by BDS for Distribution-Business: Business Model System by Buyer's Decision Step

  • Lim, Heon-Wook;Seo, Dae-Sung
    • Journal of Distribution Science
    • /
    • v.17 no.4
    • /
    • pp.27-32
    • /
    • 2019
  • Purpose - A business model is a method of creating corporate value. Existing business model classifications exhibit limitations and redundancy whenever a new type of model appears, and they do not reflect the five steps of the consumer purchase decision process. This study proposes a new mapping technique by which classification schemes can support more accurate data analysis in the fourth industry. Research design, data, and methodology - Existing business models were reclassified under the proposed scheme (BMS by BDS). By converting existing data into new, useful data, the study redesigns the logistics and marketing methodology for areas where existing classification methods have proven unsuitable, so that the scheme is applicable to fourth-industry systems from the buyer's perspective. Results - The mapping of the consumer purchase decision stages was as follows: 1st stage (interest) 23.73%, 2nd stage (publicity) 33.90%, 3rd stage (sales) 13.56%, 4th stage (decision) 11.86%, and 5th stage (repurchase) 16.95%. This verified that business models can be classified through BMS by BDS. Conclusions - This structural classification forms a basis for logistics marketing in the fourth industry and proposes an innovative and effective model for constructing theory.

Feature Based Decision Tree Model for Fault Detection and Classification of Semiconductor Process (반도체 공정의 이상 탐지와 분류를 위한 특징 기반 의사결정 트리)

  • Son, Ji-Hun;Ko, Jong-Myoung;Kim, Chang-Ouk
    • IE interfaces
    • /
    • v.22 no.2
    • /
    • pp.126-134
    • /
    • 2009
  • As product quality and yield are essential factors in semiconductor manufacturing, monitoring the main manufacturing steps is a critical task. For this purpose, FDC (fault detection and classification) is used to diagnose fault states in the processes by monitoring the data streams collected by equipment sensors. This paper proposes an FDC model based on a decision tree, which provides if-then classification rules for causal analysis of processing results. Unlike previous decision tree approaches, we reflect the structural aspects of the data stream in the FDC model. To do so, we segment the data stream into multiple subregions, define structural features for each subregion, and select the features that have high relevance to the process results and low redundancy with other features. As a result, we can construct a simple but highly accurate FDC model. Experiments using data streams collected from an etching process show that the proposed method classifies normal/abnormal states with high accuracy.
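
A toy sketch of the segmented-feature pipeline follows; the segment count and the mean/slope features are assumptions, and the paper's relevance/redundancy-based feature selection step is omitted.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def segment_features(trace, n_segments=4):
    """Split one sensor trace into subregions, describing each by its
    mean level and linear slope."""
    feats = []
    for seg in np.array_split(np.asarray(trace, dtype=float), n_segments):
        slope = np.polyfit(np.arange(len(seg)), seg, 1)[0]
        feats += [seg.mean(), slope]
    return feats

rng = np.random.default_rng(0)
normal = [segment_features(rng.normal(1.0, 0.05, 80)) for _ in range(30)]
drift = np.linspace(0.0, 0.5, 80)                # synthetic drift-type fault
faulty = [segment_features(rng.normal(1.0, 0.05, 80) + drift) for _ in range(30)]

X, y = normal + faulty, [0] * 30 + [1] * 30      # 0 = normal, 1 = abnormal
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree))                         # if-then rules for causal analysis
```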

Dynamic data validation and reconciliation for improving the detection of sodium leakage in a sodium-cooled fast reactor

  • Sangjun Park;Jongin Yang;Jewhan Lee;Gyunyoung Heo
    • Nuclear Engineering and Technology
    • /
    • v.55 no.4
    • /
    • pp.1528-1539
    • /
    • 2023
  • Since the leakage of sodium in an SFR (sodium-cooled fast reactor) causes an explosion upon reaction with air and water, sodium leakages represent an important safety issue. In this study, a novel technique for improving the reliability of sodium leakage detection applying DDVR (dynamic data validation and reconciliation) is proposed and verified to resolve this technical issue. DDVR is an approach that aims to improve the accuracy of a target system in a dynamic state by minimizing random errors, such as from the uncertainty of instruments and the surrounding environment, and by eliminating gross errors, such as instrument failure, miscalibration, or aging, using the spatial redundancy of measurements in a physical model and the reliability information of the instruments. DDVR also makes it possible to estimate the state of unmeasured points. To validate this approach for supporting sodium leakage detection, this study applies experimental data from a sodium leakage detection experiment performed by the Korea Atomic Energy Research Institute. The validation results show that the reliability of sodium leakage detection is improved by cooperation between DDVR and hardware measurements. Based on these findings, technology integrating software and hardware approaches is suggested to improve the reliability of sodium leakage detection by presenting the expected true state of the system.
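
At its core, steady-state data reconciliation adjusts redundant measurements so that physical balance constraints hold exactly; DDVR extends this with dynamics and gross-error tests. A minimal sketch with a toy flow balance (not the paper's SFR model) is given below.

```python
import numpy as np

def reconcile(measured, sigma, A):
    """Weighted least squares: adjust measurements minimally (weighted by
    sensor variance) so that the linear constraints A @ x = 0 hold."""
    S = np.diag(sigma ** 2)                      # measurement covariance
    K = S @ A.T @ np.linalg.inv(A @ S @ A.T)     # constraint gain
    return measured - K @ (A @ measured)

A = np.array([[1.0, -1.0, -1.0]])                # toy balance: f0 = f1 + f2
measured = np.array([10.3, 6.1, 4.4])            # raw, slightly inconsistent
sigma = np.array([0.2, 0.1, 0.1])                # instrument uncertainties
print(reconcile(measured, sigma, A))             # reconciled; balance holds
```

Here the least-accurate sensor (largest sigma) absorbs most of the correction, which is how spatial redundancy among instruments improves the estimate of the true state.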

A Study on Reducing Data Obesity through Optimized Data Modeling in Research Support Database (연구지원 데이터베이스에서 최적화된 데이터모델링을 통한 데이터 비만도 개선에 관한 연구)

  • Kim, Hee-Wan
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.18 no.1
    • /
    • pp.119-127
    • /
    • 2018
  • The formal data used in business is often managed in table form without normalization, owing to a lack of understanding and application of data modeling. When the balance of the database design is destroyed, the response time of data queries suffers and data obesity grows. This paper investigates how data obesity can be reduced through database design based on optimized data modeling. Starting from a radial, task-oriented isolated design with excessive data obesity, the data query paths were clearly visualized in a square design derived from data modeling of the relationships between objects (data). In terms of data obesity, the obesity of the current research support database was 57.2%, whereas that of the new research support database was 16.2%, a reduction of 40.5%. In addition, by minimizing data redundancy, the database was improved to ensure the accuracy and integrity of the data.
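
To illustrate the redundancy-removal effect of normalization, here is a toy example; the schema is hypothetical, and the paper's data obesity metric is not reproduced.

```python
# One flat, unnormalized table: department facts repeat on every grant row.
denormalized = [
    {"grant_id": 1, "pi": "Kim", "dept": "EE", "dept_phone": "1234"},
    {"grant_id": 2, "pi": "Kim", "dept": "EE", "dept_phone": "1234"},
    {"grant_id": 3, "pi": "Lee", "dept": "CS", "dept_phone": "5678"},
]

# Normalized: each department fact is stored once and referenced by key.
departments = {row["dept"]: row["dept_phone"] for row in denormalized}
grants = [{"grant_id": r["grant_id"], "pi": r["pi"], "dept": r["dept"]}
          for r in denormalized]

print(departments)   # {'EE': '1234', 'CS': '5678'} - no repeated phone values
print(grants)        # grant rows keep only a foreign key to the department
```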

Data hub system based on SQL/XMDR message using Wrapper for distributed data interoperability (분산 데이터 상호운용을 위한 SQL/XMDR 메시지 기반의 Wrapper를 이용한 데이터 허브 시스템)

  • Moon, Seok-Jae;Jung, Gye-Dong;Choi, Young-Keun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.11
    • /
    • pp.2047-2058
    • /
    • 2007
  • In today's business environment, enterprises find it difficult to eliminate redundancy, to filter the data sources involved when data are integrated under standard rules and metadata, and to produce integrated data with a single view across geographically and spatially distributed environments. In particular, exchanging various kinds of data between heterogeneous systems and applications regardless of their types and formats, and keeping the integrated information continuously and exactly synchronized, are of paramount concern. This paper therefore proposes a data hub system based on SQL/XMDR messages to overcome the semantic interoperability problems that arise when data are exchanged or joined between legacy systems. The system uses the message mapping technique of a query transformation system to keep data that are modified in real time consistent across cooperating systems. Because it consistently maintains such data when exchanging or joining data between cooperating legacy systems, it improves the clarity and availability of the data by providing a single interface for data retrieval.
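
A hypothetical wrapper sketch of the message-mapping idea follows: a hub-level query is rewritten into each legacy system's local schema. Every name and mapping here is invented for illustration; XMDR-based metadata registries define such mappings formally.

```python
SCHEMA_MAP = {  # hub-level column -> each legacy system's local column
    "legacyA": {"customer_id": "CUST_NO", "name": "CUST_NM"},
    "legacyB": {"customer_id": "id", "name": "full_name"},
}

def translate_query(system, columns, table="customers"):
    """Rewrite a hub-level SELECT into a legacy system's local schema,
    giving clients a single interface over heterogeneous sources."""
    local = [SCHEMA_MAP[system][col] for col in columns]
    return f"SELECT {', '.join(local)} FROM {table}"

for system in SCHEMA_MAP:
    print(system, "->", translate_query(system, ["customer_id", "name"]))
```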