• Title/Summary/Keyword: redundancy method


Feature Based Decision Tree Model for Fault Detection and Classification of Semiconductor Process (반도체 공정의 이상 탐지와 분류를 위한 특징 기반 의사결정 트리)

  • Son, Ji-Hun;Ko, Jong-Myoung;Kim, Chang-Ouk
    • IE interfaces / v.22 no.2 / pp.126-134 / 2009
  • As product quality and yield are essential factors in semiconductor manufacturing, monitoring the main manufacturing steps is a critical task. For this purpose, FDC (fault detection and classification) is used to diagnose fault states in the processes by monitoring the data streams collected by equipment sensors. This paper proposes a decision-tree-based FDC model that provides if-then classification rules for causal analysis of the processing results. Unlike previous decision tree approaches, we reflect the structural aspects of the data stream in the FDC model. To do so, we segment the data stream into multiple subregions, define structural features for each subregion, and select the features that have high relevance to the process results and low redundancy with the other features. As a result, we can construct a simple but highly accurate FDC model. Experiments using data streams collected from an etching process show that the proposed method classifies normal/abnormal states with high accuracy.
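The "high relevance, low redundancy" selection step described in this abstract can be sketched greedily. This is a simplified, correlation-based stand-in for the paper's criterion (the paper's exact relevance and redundancy measures are not given here); the toy data, penalty weight, and function name are illustrative assumptions.

```python
import numpy as np

def select_features(X, y, k, redundancy_penalty=1.0):
    """Greedily pick k features: prefer high absolute correlation with
    the label (relevance) and low mean absolute correlation with the
    already-selected features (redundancy)."""
    n_features = X.shape[1]
    relevance = np.array(
        [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_features)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best_j, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            redundancy = np.mean(
                [abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected])
            score = relevance[j] - redundancy_penalty * redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected

# Toy data: features 0 and 1 are near-duplicates, feature 2 is an
# independent driver of the label, so the duplicate should be skipped.
rng = np.random.default_rng(0)
x0 = rng.normal(size=200)
x2 = rng.normal(size=200)
X = np.column_stack([x0, x0 + 0.01 * rng.normal(size=200), x2])
y = (x0 + x2 > 0).astype(float)
sel = select_features(X, y, 2)
```

The greedy loop keeps the model simple: each new feature must add label information that the already-selected features do not carry, mirroring the abstract's stated goal.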

Power System Enhanced Monitoring through Strategic PMU Placement Considering Degree of Criticality of Buses

  • Singh, Ajeet Kumar;Fozdar, Manoj
    • Journal of Electrical Engineering and Technology / v.13 no.5 / pp.1769-1777 / 2018
  • This paper proposes a method for optimal placement of phasor measurement units (PMUs) that considers the system configuration and its attributes during the planning phase of PMU deployment. Each bus of the system is assessed on four attributes, namely redundancy of measurements, rotor angle and frequency monitoring of generator buses, reactive power deficiency, and maximum loading limit under transmission line outage contingency, and a consolidated 'degree of criticality' is determined using the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). The major contribution of the proposed work is a modified objective function that incorporates the degree of criticality of the buses. The problem is formulated as maximization of the aggregate degree of criticality of the system. The resultant PMU configuration provides complete observability of the system, and the majority of the PMUs are located on critical buses. Since budgetary restrictions may not allow utilities to install PMUs even at optimal locations in a single phase, multi-horizon deployment of PMUs is also addressed. The proposed approach is tested on the IEEE 14-bus, IEEE 30-bus, New England (NE) 39-bus, IEEE 57-bus, and IEEE 118-bus systems and compared with some existing methods.
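The consolidated "degree of criticality" comes from standard TOPSIS, which this sketch implements generically; the three-bus decision matrix, equal weights, and benefit flags below are hypothetical placeholders, not the paper's data.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Rank alternatives (here, buses) by relative closeness to the
    ideal solution. benefit[j] is True if attribute j is
    better-when-larger. Returns closeness scores in [0, 1]."""
    M = np.asarray(decision_matrix, dtype=float)
    # vector-normalise each attribute column, then weight it
    V = (M / np.linalg.norm(M, axis=0)) * np.asarray(weights, dtype=float)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti_ideal = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)      # distance to ideal
    d_minus = np.linalg.norm(V - anti_ideal, axis=1)  # distance to anti-ideal
    return d_minus / (d_plus + d_minus)

# Three hypothetical buses scored on the four attributes from the abstract
M = [[0.8, 0.2, 0.5, 0.9],
     [0.4, 0.9, 0.3, 0.2],
     [0.6, 0.5, 0.8, 0.6]]
scores = topsis(M, weights=[0.25] * 4, benefit=[True] * 4)
```

Each bus's closeness score would then weight its term in the placement objective, steering PMUs toward critical buses.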

Random Linear Network Coding to Improve Reliability in the Satellite Communication (위성 통신에서 신뢰성 향상을 위한 랜덤 선형 네트워크 코딩 기술)

  • Lee, Kyu-Hwan;Kim, Jae-Hyun
    • The Journal of Korean Institute of Communications and Information Sciences / v.38B no.9 / pp.700-706 / 2013
  • In this paper, we propose a method that applies random linear network coding in satellite communication to improve reliability. In the proposed protocol, network-coded redundancy (NC-R) packets are transmitted by the performance enhancement proxy (PEP), so data packets lost to wireless channel errors can be recovered from the NC-R packets. We also develop a TCP performance model of the proposed protocol and evaluate its performance. The simulation results show that the proposed protocol improves TCP throughput over conventional TCP: the NC-R packets sent by the sender-side PEP allow the receiver-side PEP to recover lost packets, reducing the packet loss seen by TCP.
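The recovery mechanism above rests on random linear network coding: the sender emits random linear combinations of source packets, and the receiver solves the resulting linear system. A minimal sketch over GF(2) (the paper's field size and packet format are not stated here, so binary coefficients are an assumption):

```python
import numpy as np

def rlnc_encode(packets, n_coded, rng):
    """Encode k source packets (bit arrays) into n_coded random
    linear combinations over GF(2). Returns (coefficients, coded)."""
    P = np.array(packets, dtype=np.uint8)
    C = rng.integers(0, 2, size=(n_coded, len(packets)), dtype=np.uint8)
    return C, (C @ P) % 2

def rlnc_decode(C, coded, k):
    """Recover the k source packets from received combinations by
    Gaussian elimination over GF(2); returns None when the received
    coefficient rows do not span GF(2)^k (need more packets)."""
    A = np.concatenate([C.astype(np.uint8), coded.astype(np.uint8)], axis=1)
    row = 0
    for col in range(k):
        pivot = next((r for r in range(row, len(A)) if A[r, col]), None)
        if pivot is None:
            return None  # rank deficient
        A[[row, pivot]] = A[[pivot, row]]
        for r in range(len(A)):
            if r != row and A[r, col]:
                A[r] = (A[r] + A[row]) % 2  # XOR rows
        row += 1
    return A[:k, k:]

rng = np.random.default_rng(1)
source = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]
C, coded = rlnc_encode(source, n_coded=5, rng=rng)
# any 3 linearly independent received rows suffice to decode
```

Sending more coded packets than sources (here 5 for 3) is exactly the NC-R redundancy: losing any two coded packets still usually leaves a decodable system.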

An Innovative Approach of Bangla Text Summarization by Introducing Pronoun Replacement and Improved Sentence Ranking

  • Haque, Md. Majharul;Pervin, Suraiya;Begum, Zerina
    • Journal of Information Processing Systems / v.13 no.4 / pp.752-777 / 2017
  • This paper proposes an automatic method to summarize Bangla news documents. In the proposed approach, pronoun replacement is applied, for the first time, to minimize dangling pronouns in the summary. After pronoun replacement, sentences are ranked using term frequency, sentence frequency, numerical figures, and title words. If two sentences have at least 60% cosine similarity, the frequency of the longer sentence is increased and the shorter sentence is removed to eliminate redundancy. Moreover, the first sentence is always included in the summary if it contains any title word. In Bangla text, numerical figures can be written both in words and in digits, in a variety of forms; all of these forms are identified to assess the importance of sentences. We use a rule-based system in this approach together with a hidden Markov model and a Markov chain model. To derive the rules, we analyzed 3,000 Bangla news documents and studied several Bangla grammar books. A series of experiments was performed on 200 Bangla news documents and 600 summaries (3 summaries for each document). The evaluation results demonstrate the effectiveness of the proposed technique over four recent methods.
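The 60% cosine-similarity redundancy rule can be sketched with term-frequency vectors. This is a simplification: it keeps the longer sentence and drops the shorter one, but does not implement the paper's frequency boost for the survivor, and it tokenizes by whitespace (English example sentences stand in for Bangla).

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def drop_redundant(sentences, threshold=0.6):
    """Keep at most one of any pair of sentences whose cosine
    similarity reaches the threshold; longer sentences survive."""
    kept = []
    for s in sorted(sentences, key=len, reverse=True):  # longer first
        tf = Counter(s.lower().split())
        if all(cosine(tf, Counter(k.lower().split())) < threshold
               for k in kept):
            kept.append(s)
    return kept

sents = ["the committee approved the new budget today",
         "the committee approved the budget",
         "schools will reopen next month"]
kept = drop_redundant(sents)
```

Here the second sentence is nearly a subset of the first (cosine well above 0.6), so only the longer version and the unrelated third sentence remain.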

A Point Clouds Fast Thinning Algorithm Based on Sample Point Spatial Neighborhood

  • Wei, Jiaxing;Xu, Maolin;Xiu, Hongling
    • Journal of Information Processing Systems / v.16 no.3 / pp.688-698 / 2020
  • Point clouds can represent spatial entities, but their redundancy introduces uncertainty into computer recognition and model construction. Point cloud thinning is therefore an indispensable step in point cloud model reconstruction and other applications. To overcome the complex classification indices and long computation times of existing point cloud thinning algorithms, this paper proposes a fast point cloud thinning algorithm. Specifically, a two-dimensional index is established over the planar (x, y) array of the scanned point cloud, and thresholds on the distance difference and height difference between adjacent points are used to decide whether each sampled point is deleted or retained. The index of the sampled points is then traversed forward and backward until the thinning process is complete. The results suggest that the proposed algorithm can be applied to different targets when the thresholds are set in advance. The new method also outperforms the octree thinning algorithm in computation time, modelling accuracy, and feature retention.
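The threshold test can be sketched as a single forward pass in (x, y) scan order: a point survives only if it moves far enough in the plane, or changes height enough, relative to the last retained point. This is a simplified sketch of the distance/height-difference idea (the paper's exact indexing and bidirectional traversal are not reproduced), and the sample points and thresholds are made up.

```python
import numpy as np

def thin_point_cloud(points, d_min, h_min):
    """Keep a point only if it is at least d_min away in the (x, y)
    plane, or differs by at least h_min in height (z), from the last
    kept point; near-duplicates on flat regions are dropped."""
    pts = np.asarray(points, dtype=float)
    order = np.lexsort((pts[:, 1], pts[:, 0]))  # scan order: x, then y
    kept = [pts[order[0]]]
    for i in order[1:]:
        p, last = pts[i], kept[-1]
        plane_dist = np.hypot(p[0] - last[0], p[1] - last[1])
        height_diff = abs(p[2] - last[2])
        if plane_dist >= d_min or height_diff >= h_min:
            kept.append(p)
    return np.array(kept)

# dense flat samples are thinned; the height jump at the end is kept
points = [(0.0, 0, 0), (0.1, 0, 0), (0.2, 0, 0),
          (1.0, 0, 0), (1.1, 0, 0.5)]
thinned = thin_point_cloud(points, d_min=0.5, h_min=0.3)
```

The height threshold is what preserves features: a point close in the plane but far in z (an edge or step) is retained rather than discarded as redundant.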

A Comparison Study for Ordination Methods in Ecology (생태학의 통계적 서열화 방법 비교에 관한 연구)

  • Ko, Hyeon-Seok;Jhun, Myoungshic;Jeong, Hyeong Chul
    • The Korean Journal of Applied Statistics / v.28 no.1 / pp.49-60 / 2015
  • Various ordination methods, such as correspondence analysis and canonical correspondence analysis, are used in community ecology to visualize the relationships among species, sites, and environmental variables. Ter Braak (1986), Jackson and Somers (1991), and Palmer (1993) compared ordination methods using eigenvalues and distance graphs. However, these comparisons did not show the relationship between the population and the biplot because they were based only on surveyed data. In this paper, a method that measures how well a biplot conveys population information is introduced to compare ordination methods objectively.

A Study of Effectiveness of the Improved Security Operation Model Based on Vulnerability Database (취약점 데이터베이스 기반 개선된 보안관제 모델의 효과성 연구)

  • Hyun, Suk-woo;Kwon, Taekyoung
    • Journal of the Korea Institute of Information Security & Cryptology / v.29 no.5 / pp.1167-1177 / 2019
  • In this paper, an improved security operation model based on a vulnerability database is studied. The proposed model consists of information protection equipment, a vulnerability database, and a dashboard that visualizes the results of correlating the database with detected logs. The model is evaluated against a simulated attack scenario in a virtual infrastructure. In contrast to the traditional method, it can respond quickly to attacks that target the specific security vulnerabilities of an asset and, with a secure agent, can find redundancy between detection rules, thereby producing an optimal detection rule set.
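One simple way to read "redundancy between detection rules" is set containment over what each rule actually matches: a rule whose matches are entirely covered by another rule adds nothing. The data model below (rule name mapped to a set of matched event ids) is a hypothetical illustration, not the paper's agent design.

```python
def find_redundant_rules(rule_matches):
    """Flag a detection rule as redundant when every event it matches
    is already matched by some other (non-redundant) rule.
    rule_matches: dict mapping rule name -> set of matched event ids."""
    redundant = set()
    names = list(rule_matches)
    for a in names:
        for b in names:
            if a != b and b not in redundant and rule_matches[a] <= rule_matches[b]:
                redundant.add(a)  # a is subsumed by b
                break
    return redundant

rules = {"broad":  {1, 2, 3, 4},
         "narrow": {2, 3},
         "other":  {5}}
redundant = find_redundant_rules(rules)
```

Dropping subsumed rules shrinks the rule set without losing detection coverage, which is the optimization the abstract describes.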

An efficient reliability analysis strategy for low failure probability problems

  • Cao, Runan;Sun, Zhili;Wang, Jian;Guo, Fanyi
    • Structural Engineering and Mechanics / v.78 no.2 / pp.209-218 / 2021
  • Reliability analysis in engineering faces two major challenges. First, to ensure accurate simulation results, mechanical products are usually defined implicitly by complex, time-consuming numerical models. Second, mechanical products are, fortunately, designed with large safety margins, which leads to low failure probabilities. This paper proposes an efficient, high-precision adaptive active learning algorithm based on the Kriging surrogate model to handle problems with low failure probability and time-consuming numerical models. To deal with multiple failure regions, adaptive kernel-density estimation is introduced and improved. Meanwhile, a new criterion for selecting points based on the current Kriging model is proposed to improve computational efficiency. The criterion for choosing the best sampling points considers not only the probability that the Kriging model misjudges the sign of the response at a point but also the distribution information at that point. To prevent the selected training points from lying too close together, the correlation between training points is limited, avoiding information redundancy and improving the computational efficiency of the algorithm. Finally, the efficiency and accuracy of the proposed method are verified against other algorithms on two academic examples and one engineering application.
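The selection criterion can be sketched as follows: rank candidates by how likely the surrogate is to misjudge the sign of the response (small |mu|/sigma, the classic U-type criterion), weight by local density, and reject candidates too close to existing training points. This is a simplified stand-in for the paper's criterion; in practice mu and sigma come from a fitted Kriging model and density from a kernel-density estimate, which are supplied here as plain arrays.

```python
import numpy as np

def select_next_point(candidates, mu, sigma, density, training, min_dist):
    """Return the index of the next candidate to evaluate: smallest
    |mu|/sigma (highest sign-misjudgement risk), boosted by sample
    density, subject to a minimum distance from training points."""
    cand = np.asarray(candidates, dtype=float)
    U = np.abs(mu) / np.maximum(sigma, 1e-12)     # misclassification risk proxy
    score = U / np.maximum(density, 1e-12)        # lower score = better pick
    for i in np.argsort(score):
        d = np.linalg.norm(np.asarray(training) - cand[i], axis=1).min()
        if d >= min_dist:                         # avoid redundant information
            return int(i)
    return int(np.argsort(score)[0])              # fallback: all too close

# hypothetical surrogate outputs at three candidate points
idx = select_next_point(
    candidates=[[0.0, 0.0], [2.0, 2.0], [5.0, 5.0]],
    mu=np.array([0.1, 3.0, 0.05]),
    sigma=np.array([1.0, 1.0, 1.0]),
    density=np.array([1.0, 1.0, 1.0]),
    training=[[0.05, 0.05]],
    min_dist=0.5)
```

The distance gate is the redundancy control the abstract mentions: a candidate near an existing sample would mostly duplicate information already in the surrogate.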

Adaptive block tree structure for video coding

  • Baek, Aram;Gwon, Daehyeok;Son, Sohee;Lee, Jinho;Kang, Jung-Won;Kim, Hui Yong;Choi, Haechul
    • ETRI Journal / v.43 no.2 / pp.313-323 / 2021
  • The Joint Video Exploration Team (JVET) has studied future video coding (FVC) technologies with a potential compression capability that significantly exceeds that of the high-efficiency video coding (HEVC) standard. The joint exploration test model (JEM), a common platform for exploring FVC technologies in the JVET, employs quadtree-plus-binary-tree block partitioning, which enhances the flexibility of coding unit partitioning. Although separating the luminance and chrominance tree structures in I slices significantly improves coding efficiency for chrominance, this approach has an intrinsic drawback: it makes the block partitioning data redundant. In this paper, an adaptive tree structure that correlates the luminance and chrominance of single and dual trees is presented. The proposed method achieves an average Y Bjontegaard Delta rate of -0.24% relative to the intra coding of JEM 6.0 under common test conditions.

Considerations of the Record Management of the Digital Age While CRMS was Introduced (CRMS 도입을 맞아 생각해보는 디지털 시대의 기록관리)

  • Yim, Jin-Hee
    • Proceedings of Korean Society of Archives and Records Management / 2019.05a / pp.61-67 / 2019
  • Recently, central government organizations have migrated their business management systems to the cloud-based On-nara Document 2.0. Accordingly, the National Archives of Korea is rolling out a cloud-based records management system. As digital technology, including cloud computing, develops, the preservation and utilization of records must be continuously redesigned to remain effective and efficient. The processes and methods of electronic records management need to move beyond the simple digitization of paper-based recordkeeping and embrace digital technology itself. This article offers opinions on the logical transfer, storage, and redundancy elimination of digital objects; machine-readable formats; big-data analysis; templates for official documents; and an authenticity authentication system based on universally unique identifiers (UUIDs) and hash values.
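The UUID/hash authenticity idea mentioned at the end can be sketched in a few lines: assign each digital object a universally unique identifier and store a cryptographic digest, then verify by re-hashing. A minimal sketch of the general technique, not the National Archives' actual scheme; function names and the record format are assumptions.

```python
import hashlib
import uuid

def register_record(payload: bytes) -> dict:
    """Assign a UUID to a digital object and store a SHA-256 digest
    for later authenticity checks."""
    return {"uuid": str(uuid.uuid4()),
            "sha256": hashlib.sha256(payload).hexdigest()}

def verify_record(record: dict, payload: bytes) -> bool:
    """Re-hash the object and compare: any alteration to the bytes
    changes the digest, so a mismatch flags tampering."""
    return hashlib.sha256(payload).hexdigest() == record["sha256"]

record = register_record(b"official document")
```

The same digest can also drive the redundancy elimination the article mentions: two stored objects with identical hashes are byte-identical copies, so one can be deduplicated.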