• Title/Summary/Keyword: 중복 (duplication / redundancy)


LTRE: Lightweight Traffic Redundancy Elimination in Software-Defined Wireless Mesh Networks (소프트웨어 정의 무선 메쉬 네트워크에서의 경량화된 중복 제거 기법)

  • Park, Gwangwoo; Kim, Wontae; Kim, Joonwoo; Pack, Sangheon
    • Journal of KIISE, v.44 no.9, pp.976-985, 2017
  • A wireless mesh network (WMN) is a promising technology for building a cost-effective and easily deployed wireless networking infrastructure. To efficiently utilize the limited radio resources in WMNs, packet transmissions (particularly redundant packet transmissions) should be carefully managed. We therefore propose a lightweight traffic redundancy elimination (LTRE) scheme to reduce redundant packet transmissions in software-defined wireless mesh networks (SD-WMNs). In LTRE, the controller determines the optimal path of each packet to maximize the amount of traffic reduction. In addition, LTRE employs three novel techniques: 1) machine learning (ML)-based information request, 2) ID-based source routing, and 3) popularity-aware cache update. Simulation results show that LTRE can significantly reduce traffic overhead, by 18.34% to 48.89%.
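
The abstract gives only the high-level design, so the sketch below is a minimal illustration of the general redundancy-elimination idea it builds on: fingerprint each chunk of a packet, send a short ID instead of the data whenever the downstream cache already holds the chunk, and evict the least popular entry when the cache fills (a guess at what "popularity-aware cache update" could mean). All names and constants (`CHUNK_SIZE`, `PopularityCache`, the 8-byte fingerprints) are illustrative assumptions, not the paper's implementation.

```python
import hashlib
from collections import Counter

CHUNK_SIZE = 64          # bytes per chunk (illustrative value)
CACHE_CAPACITY = 1024    # fingerprints kept per node (illustrative value)

class PopularityCache:
    """Chunk cache that evicts the least-popular fingerprint when full."""
    def __init__(self, capacity=CACHE_CAPACITY):
        self.capacity = capacity
        self.chunks = {}        # fingerprint -> raw chunk bytes
        self.hits = Counter()   # fingerprint -> access count (popularity)

    def put(self, fp, chunk):
        if fp not in self.chunks and len(self.chunks) >= self.capacity:
            victim = min(self.chunks, key=lambda f: self.hits[f])
            del self.chunks[victim]
        self.chunks[fp] = chunk
        self.hits[fp] += 1

def encode(payload, downstream: PopularityCache):
    """Replace chunks the downstream node already caches with short IDs."""
    out = []
    for i in range(0, len(payload), CHUNK_SIZE):
        chunk = payload[i:i + CHUNK_SIZE]
        fp = hashlib.sha1(chunk).digest()[:8]   # 8-byte fingerprint
        if fp in downstream.chunks:
            out.append(("id", fp))              # send fingerprint only
            downstream.hits[fp] += 1
        else:
            out.append(("raw", chunk))          # send data, cache it
            downstream.put(fp, chunk)
    return out

packet = b"HELLO " * 40
cache = PopularityCache()
first = encode(packet, cache)    # all raw chunks
second = encode(packet, cache)   # all 8-byte IDs: redundancy eliminated
```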

The Consistency Management Using Trees of Replicated Data Items in Partially Replicated Database (부분 중복 데이터베이스에서 중복 데이터의 트리를 이용한 일관성 유지)

  • Bae, Mi-Sook; Hwang, Bu-Hyun
    • The KIPS Transactions:PartD, v.10D no.4, pp.647-654, 2003
  • Replication of data is used to increase its availability and to improve system performance. A distributed database system has to maintain both database consistency and replica consistency. This paper proposes an algorithm that resolves operation conflicts using a mechanism in which the replicas of each data item are organized hierarchically. Each update is propagated along the tree, based on the fact that the root of each data item is the primary replica in a partially replicated database. Using a hierarchy of data can eliminate useless propagation, since updates are propagated only to sites holding replicas; in consequence, the propagation delay of updates may be reduced. By using timestamps and compensating transactions, our algorithm resolves the non-serializability problem caused by operation conflicts that can arise during update propagation due to the lazy propagation. This resolution also guarantees data consistency.
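
As a rough sketch of the propagation scheme the abstract describes — updates entering at the primary (root) replica and flowing down a tree that contains only the sites holding copies — the following is a minimal model under assumed names; the timestamp check stands in loosely for the paper's timestamp-and-compensating-transaction mechanism.

```python
from dataclasses import dataclass, field

@dataclass
class ReplicaNode:
    """A site holding a replica of one data item; the root is the primary."""
    site: str
    value: object = None
    ts: int = 0                      # timestamp of last applied update
    children: list = field(default_factory=list)

def propagate(node: ReplicaNode, value, ts):
    """Push an update down the replica tree, so only sites that actually
    hold a copy ever receive it. Older updates (smaller timestamps) are
    simply discarded here, a crude stand-in for compensation."""
    if ts <= node.ts:
        return                       # stale update: ignore
    node.value, node.ts = value, ts
    for child in node.children:
        propagate(child, value, ts)

# primary replica at the root; an update enters there and flows down
root = ReplicaNode("S1", children=[ReplicaNode("S2"), ReplicaNode("S3")])
propagate(root, value=42, ts=1)
```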

Education Content of Department of Dental Hygiene and Actual Condition of the Overlapping Analytic Syllabus (치위생과 교육내용 및 교수요목 중복실태 분석)

  • Park, Myung-Suk; Kim, Chang-Hee
    • Journal of dental hygiene science, v.7 no.1, pp.49-54, 2007
  • This research was conducted to provide a standardization method for a new dental hygiene curriculum by identifying overlap in the education content and syllabi of the Department of Dental Hygiene. To address this overlapping of education programs, we make the following proposals. First, unified courses should be created by reconciling the specific terms of overlapping subjects, the overlapping curricula covering the skills identified by job analysis of dental hygienists, and the overlapping class time; this would increase the efficiency of class time and required curricula. Second, proactive and continuous research toward a standardized approach to the Department of Dental Hygiene's education content is necessary, and the dental hygiene academic community should build trust.


Server Replication Degree Reducing Location Management Cost in Cellular Networks (셀룰라 네트워크에서 위치 정보 관리 비용을 최소화하는 서버의 중복도)

  • Kim, Jai-Hoon; Lim, Sung-Hwa
    • Journal of KIISE:Information Networking, v.29 no.3, pp.265-275, 2002
  • The default server strategy is a very popular scheme for managing the location and state information of mobile hosts in cellular networks. However, the communication cost increases if call requests are frequent and the distance between the default server and the client is long. Moreover, no connection to a mobile host can be established when the default server of the destination mobile host fails. These problems can be solved by replicating the default server and letting the nearest replicated default server process each query request sent by a client. It is important to allocate replicated default servers efficiently in the network and to determine the number of replicated default servers. In this paper, we suggest and evaluate a default server replication strategy to reduce communication costs and to improve service availability. Furthermore, we propose and evaluate an optimized allocation algorithm and an optimal replication degree for replicating default servers in n×n grid networks and binary tree networks.
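
The paper's cost model is not reproduced in the abstract; as a back-of-the-envelope illustration of the trade-off it optimizes, the toy function below measures how the average query distance to the nearest replica falls as more replicas are placed in an n×n grid (Manhattan hops as a stand-in for communication cost; all names are hypothetical).

```python
from itertools import product

def avg_query_distance(n, replicas):
    """Average Manhattan distance from every cell of an n-by-n grid to its
    nearest replica -- a toy stand-in for the query cost that is traded off
    against update cost when choosing the replication degree."""
    cells = list(product(range(n), range(n)))
    total = sum(min(abs(x - rx) + abs(y - ry) for rx, ry in replicas)
                for x, y in cells)
    return total / len(cells)

# one central default server vs. four spread-out replicas in an 8x8 grid
print(avg_query_distance(8, [(4, 4)]))
print(avg_query_distance(8, [(2, 2), (2, 6), (6, 2), (6, 6)]))
```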

Evaluation of the Redundancy in Decoy Database Generation for Tandem Mass Analysis (탠덤 질량 분석을 위한 디코이 데이터베이스 생성 방법의 중복성 관점에서의 성능 평가)

  • Li, Honglan; Liu, Duanhui; Lee, Kiwook; Hwang, Kyu-Baek
    • KIISE Transactions on Computing Practices, v.22 no.1, pp.56-60, 2016
  • Peptide identification in tandem mass spectrometry is usually done by searching the spectra against target databases consisting of reference protein sequences. To control false discovery rates for high-confidence peptide identification, spectra are also searched against decoy databases constructed by permuting reference protein sequences. In this case, a peptide of the same sequence could be included in both the target and the decoy databases, or multiple entries of the same peptide could exist in the decoy database. These phenomena complicate the protein identification problem; thus, it is important to minimize the number of such redundant peptides for accurate protein identification. In this regard, we examined two popular methods for decoy database generation: 'pseudo-shuffling' and 'pseudo-reversing'. We experimented with target databases of varying sizes and investigated the effect of the maximum number of missed cleavage sites allowed in a peptide (MC), one of the parameters for target and decoy database generation. In our experiments, the level of redundancy in decoy databases was proportional to the target database size and to the value of MC, due to the increase in the number of short peptides (7 to 10 AA). Moreover, 'pseudo-reversing' always generated decoy databases with lower levels of redundancy than 'pseudo-shuffling'.
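
For concreteness, here is one common formulation of the two generation methods, assuming tryptic digestion (cleavage after K/R): 'pseudo-reversing' reverses the residues within each cleavage segment, and 'pseudo-shuffling' permutes them, so decoy peptides keep the target length and cleavage pattern. The details may differ from the paper's exact procedure.

```python
import random
import re

SEGMENT = re.compile(r'[^KR]*[KR]|[^KR]+$')   # tryptic segments (cut after K/R)

def pseudo_reverse(protein):
    """Reverse the residues within each tryptic segment, keeping each
    cleavage residue (K/R) in place."""
    return ''.join(
        seg[:-1][::-1] + seg[-1] if seg[-1] in 'KR' else seg[::-1]
        for seg in SEGMENT.findall(protein))

def pseudo_shuffle(protein, seed=0):
    """Shuffle the residues within each tryptic segment instead."""
    rng = random.Random(seed)
    out = []
    for seg in SEGMENT.findall(protein):
        body = list(seg[:-1]) if seg[-1] in 'KR' else list(seg)
        rng.shuffle(body)
        out.append(''.join(body) + (seg[-1] if seg[-1] in 'KR' else ''))
    return ''.join(out)

# each segment reversed, K/R kept in place -> 'CIGAMKEDITPEPR'
print(pseudo_reverse("MAGICKPEPTIDER"))
```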

Detection of a Large White-Specific Duplication in D-loop Region of the Porcine MtDNA (돼지 mtDNA D-loop 지역의 Large White 특이 중복현상 탐지)

  • Kim, Jae-Hwan; Han, Sang-Hyun; Lee, Sung-Soo; Ko, Moon-Suk; Lee, Jung-Gyu; Jeon, Jin-Tae; Cho, In-Cheol
    • Journal of Life Science, v.19 no.4, pp.467-471, 2009
  • The entire D-loop region of porcine mitochondrial DNA (mtDNA) was amplified from six pig breeds (Landrace, Duroc, Large White, Korean native pig, Berkshire, and Hampshire) using a primer set designed on the basis of reported porcine mtDNA sequences. From analyses using cloning, DNA sequencing, and multiple sequence alignment, an 11-bp (TAAAACACTTA) duplication was observed after the known tandem repeat in the D-loop region, which promoted heteroplasmy in the mtDNA. Although the existence of the 11-bp duplication has previously been reported in Duroc and Japanese native pigs, there have been no attempts so far to characterize this duplication in other breeds. A 150 bp fragment containing the 11-bp duplication was amplified and typed by polyacrylamide gel electrophoresis (PAGE). All Large Whites had two duplication units, and Duroc showed heteromorphic patterns; in total, 11.2% (9/80) of the animals had the 11-bp duplication. On the other hand, Landrace, Berkshire, Hampshire, and Korean native pigs were non-duplicated. This result shows that the 11-bp duplication could be used as a breed-specific DNA marker for distinguishing the pure Landrace and Large White breeds.
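
The paper typed the fragment by PAGE, but given an amplified fragment's sequence, counting tandem copies of the reported 11-bp motif is straightforward; the toy function below (hypothetical names) returns 2 for a duplicated fragment and 1 otherwise.

```python
MOTIF = "TAAAACACTTA"   # the 11-bp unit reported in the abstract

def duplication_units(fragment: str) -> int:
    """Count the longest run of consecutive copies of the 11-bp motif:
    2 copies suggest the Large White-type duplication, 1 copy none."""
    best = 0
    for i in range(len(fragment)):
        n = 0
        while fragment[i + n * len(MOTIF): i + (n + 1) * len(MOTIF)] == MOTIF:
            n += 1
        best = max(best, n)
    return best

print(duplication_units("ACGT" + MOTIF * 2 + "GGTA"))  # -> 2 (duplicated)
print(duplication_units("ACGT" + MOTIF + "GGTA"))      # -> 1 (non-duplicated)
```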

Study on the Overlapping Effect of Certification Policies: Focusing on the ICT Industry (벤처인증정책과 이노비즈인증정책의 중복효과에 대한 연구: ICT산업을 중심으로)

  • Oh, Seunghwan; Shim, Dongnyok; Kim, Kyunam
    • Journal of Korea Technology Innovation Society, v.18 no.2, pp.358-386, 2015
  • The aim of this paper is to evaluate the policy impact of Inno-biz verification and Venture verification, focusing in particular on the complementarity effect of overlapped support in the Korean ICT industry. Alongside the implementation of various government innovation policies, evaluations of such policies have been consistently discussed in economics, because it is very important to assess whether public policies have played a proper role. What distinguishes this research from previous studies is that it covers not only the evaluation of a single policy but also the interaction between different innovation policies. The main result of this paper is that, in the case of overlapping homogeneous policies such as Inno-biz and Venture verification, the complementarity effect is negative. Compared with previous studies, the unique contributions of this research are as follows. First, departing from the view of previous studies that focused on the evaluation of a single policy, this paper considers the interactions and complementarity effects of innovation policies through the economic concept of a "policy mix." Second, based on this concept, the paper suggests an analysis framework for evaluating the interactions and complementarity effects of innovation policies.

The Analysis of Duplicated Contents and Sequence between Science and Technology·Home Economics Curricular and Textbooks in Middle School about 'Digestion' and 'Energy' (중학교 과학 및 기술·가정 교과의 교육과정과 교과서에 제시된 소화와 에너지 단원의 내용 중복 및 연계성 분석)

  • Sim, Wangseop; Lee, Hyundong; Park, Kyungsuk
    • Journal of Science Education, v.41 no.1, pp.1-15, 2017
  • The purpose of this study was to derive implications for the national curriculum and textbooks by analyzing the duplicated contents and sequence between the science and technology·home economics subjects of the 2009 revised middle school curriculum. For the duplication analysis, overlapping achievement standards and themes were identified by comparing the science and technology·home economics curricula. Next, duplicated concepts were analyzed by comparing the science and technology·home economics textbooks using concept maps. The analysis of achievement standards showed that four standards related to 'digestion' and 'energy' were duplicated. The analysis of the textbooks showed that overlapping concepts (terms) existed as follows: digestion (22 items) and energy (9 items). In the science textbook, the duplicated concepts are usually described in detail; in contrast, the technology·home economics textbook explains the duplicated concepts briefly, providing various types of examples and cases. There are also differences in terminology between the two textbooks. The findings of this study may provide educational insights into the teaching of content duplicated between science and technology·home economics.

CORE-Dedup: IO Extent Chunking based Deduplication using Content-Preserving Access Locality (CORE-Dedup: 내용보존 접근 지역성 활용한 IO 크기 분할 기반 중복제거)

  • Kim, Myung-Sik; Won, You-Jip
    • Journal of the Korea Society of Computer and Information, v.20 no.6, pp.59-76, 2015
  • The recent spread of embedded devices and the growth of broadband communication technology have led to a rapid increase in the volume of data created and managed. As a result, data centers have to increase their storage capacity cost-effectively to store the created data. Data deduplication is one way to save storage space by removing redundant data. This work proposes an IO-extent-based deduplication scheme called CORE-Dedup that exploits content-preserving access locality. We acquire IO traces from the block device layer of a virtual machine host and compare the deduplication performance of fixed-size and IO-extent-based chunking. On a multi-user workload of 10 users compiling in a virtual machine environment, 4 KB fixed-size chunking and IO-extent-based chunking use chunk indexes of 14,500 and 1,700 entries, respectively, and the deduplication rates are 60.4% and 57.6%, respectively.
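
The extent-based chunking logic is not detailed in the abstract; the sketch below illustrates only the 4 KB fixed-size baseline that CORE-Dedup is compared against — fingerprint each block and count writes whose content is already indexed. The names and the SHA-256 choice are assumptions.

```python
import hashlib

BLOCK = 4096   # 4 KB fixed-size chunking, the baseline in the paper

def dedup_rate(blocks):
    """Fraction of written blocks whose content was already stored,
    found by keeping a fingerprint index of every unique block."""
    index = set()
    duplicates = 0
    for data in blocks:
        fp = hashlib.sha256(data).digest()
        if fp in index:
            duplicates += 1   # redundant write: store a reference only
        else:
            index.add(fp)     # first occurrence: store the block
    return duplicates / len(blocks) if blocks else 0.0

writes = [b"A" * BLOCK, b"B" * BLOCK, b"A" * BLOCK, b"A" * BLOCK]
print(dedup_rate(writes))     # -> 0.5 (two of four writes deduplicated)
```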

Design and Implementation of Inline Data Deduplication in Cluster File System (클러스터 파일 시스템에서 인라인 데이터 중복제거 설계 및 구현)

  • Kim, Youngchul; Kim, Cheiyol; Lee, Sangmin; Kim, Youngkyun
    • KIISE Transactions on Computing Practices, v.22 no.8, pp.369-374, 2016
  • The growing demand for virtual computing and storage resources in the cloud computing environment has led to deduplication in storage systems for effective reduction and utilization of storage space. In particular, a large reduction in storage space is made possible by preventing data with identical content, such as virtual desktop images, from being stored repeatedly in a virtual desktop infrastructure. However, in order to provide reliable virtual desktop services, the storage system must handle a variety of virtual desktop workloads, such as periodic data I/O storms and frequent random I/O operations, as well as the performance overhead caused by deduplication. In this paper, we design and implement a clustered file system to support virtual desktop and storage services in a cloud computing environment. The proposed clustered file system achieves low storage consumption by means of inline deduplication of virtual desktop images. In addition, it reduces performance overhead by performing deduplication in the data server rather than in the virtual host on which the virtual desktops run.
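
As a minimal sketch of an inline-deduplication write path — assuming, as the abstract suggests, that fingerprinting happens in the data server before a block is stored — the toy class below stores each unique block once and reference-counts duplicates. It is an illustration of the technique, not the system's actual API.

```python
import hashlib
from collections import defaultdict

class DedupStore:
    """Toy data-server-side inline deduplication: each incoming block is
    fingerprinted before being written, and duplicate content is stored
    once and reference-counted."""
    def __init__(self):
        self.blocks = {}                 # fingerprint -> stored bytes
        self.refs = defaultdict(int)     # fingerprint -> reference count

    def write(self, data: bytes) -> bytes:
        fp = hashlib.sha256(data).digest()
        if fp not in self.blocks:        # new content: store it
            self.blocks[fp] = data
        self.refs[fp] += 1               # duplicate: just add a reference
        return fp                        # caller keeps fp in file metadata

    def delete(self, fp: bytes):
        self.refs[fp] -= 1
        if self.refs[fp] == 0:           # last reference gone: reclaim space
            del self.blocks[fp], self.refs[fp]

store = DedupStore()
f1 = store.write(b"desktop image block")   # stored
f2 = store.write(b"desktop image block")   # deduplicated, refcount -> 2
assert f1 == f2 and len(store.blocks) == 1
```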