• Title/Summary/Keyword: de-duplication

Intestinal duplication revealed by posterior reversible encephalopathy syndrome

  • Kerkeni, Yosra;Louati, Hela;Hamzaoui, Mourad
    • Clinical and Experimental Pediatrics
    • /
    • v.61 no.4
    • /
    • pp.132-134
    • /
    • 2018
  • We report a unique case of intestinal duplication revealed by posterior reversible encephalopathy syndrome (PRES) in a 13-year-old girl. She was admitted to the pediatric emergency department because of generalized seizures. Radiological assessment revealed a large, well-defined, thick-walled cystic lesion in the mid abdomen, suggestive of a duplication cyst, associated with PRES. Surgical exploration confirmed the diagnosis of an ileal duplication cyst, and the mass was resected. The postoperative course was uneventful, and both the hypertension and the neurological dysfunction resolved after resection of the mass. Follow-up brain magnetic resonance imaging performed 9 months later showed complete resolution of the cerebellar changes. Although extrinsic compression of retroperitoneal structures has not been reported in the literature as a complication of a duplication cyst, we strongly believe this is the most logical and plausible hypothesis explaining the pathogenesis of PRES in our patient.

De-Duplication Performance Test for Massive Data (대용량 데이터의 중복제거(De-Duplication) 성능 실험)

  • Lee, Choelmin;Kim, Jai-Hoon;Kim, Young Gyu
    • Annual Conference of KIPS
    • /
    • 2012.11a
    • /
    • pp.271-273
    • /
    • 2012
  • De-duplication saves storage space by finding files or block-level chunks with identical content in a storage system and removing the redundant copies, so that only a single instance of each duplicated data unit is kept. In this paper, rather than using synthetic experimental data, we test a de-duplication technique on a large-volume backup scenario representative of real production environments, measure the de-duplication ratio and performance, and propose a visual presentation of the results that makes them easy for evaluators and users to understand.
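
The abstract measures a de-duplication ratio over a large backup workload. As a minimal sketch of that measurement (not the authors' implementation; the chunk size and file list are assumptions), the following Python chunks files into fixed-size blocks, fingerprints each block, and reports the fraction of bytes de-duplication would save:

```python
import hashlib

CHUNK_SIZE = 4096  # assumed fixed chunk size; the paper does not specify one

def dedup_ratio(paths):
    """Chunk each file into fixed-size blocks and measure how much
    storage de-duplication would save (unique bytes vs. total bytes)."""
    seen = set()          # fingerprints of chunks already stored
    total = unique = 0
    for path in paths:
        with open(path, "rb") as f:
            while chunk := f.read(CHUNK_SIZE):
                total += len(chunk)
                fp = hashlib.sha256(chunk).digest()
                if fp not in seen:      # first copy of this chunk: store it
                    seen.add(fp)
                    unique += len(chunk)
    return 1 - unique / total if total else 0.0

# Example: a ratio of 0.3 means 30% of the bytes were duplicates.
# print(dedup_ratio(["backup1.img", "backup2.img"]))
```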

High Available De-Duplication Algorithm (고가용성 중복제거(De-Duplication) 기법)

  • Lee, Choelmin;Kim, Jai-Hoon;Kim, Young Gyu
    • Annual Conference of KIPS
    • /
    • 2012.11a
    • /
    • pp.274-277
    • /
    • 2012
  • The de-duplication technique prevents waste of storage by removing duplicate data blocks or files with identical content within a file system and keeping only the unique copies. Conversely, for fault tolerance, an identical file system or system component can be replicated (mirrored), so that when part of the system fails, the replica can take over, improving reliability and availability. However, replicating a file system for fault tolerance wastes storage and incurs the cost of keeping the replicas consistent. In this paper, we propose and evaluate a de-duplication technique that maintains a given level of availability. The proposed high-availability de-duplication technique removes duplicates only within the range that preserves the required availability, and can selectively retain duplicates as needed.
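
The paper's core idea, removing duplicates only while a required availability is preserved, can be illustrated by choosing how many replicas of each chunk to keep rather than collapsing every duplicate to one copy. The sketch below assumes independent node failures with per-node availability p; this failure model and the function name are our assumptions, not the paper's:

```python
import math

def min_replicas(node_availability: float, target_availability: float) -> int:
    """Smallest replica count r such that the chance at least one replica
    survives, 1 - (1 - p)**r, meets the availability target.
    Assumes independent node failures (our assumption, not the paper's model)."""
    p, a = node_availability, target_availability
    return max(1, math.ceil(math.log(1 - a) / math.log(1 - p)))

# With 95%-available nodes, a 99.99% target needs 4 replicas:
# min_replicas(0.95, 0.9999) -> 4
```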

Data De-duplication and Recycling Technique in SSD-based Storage System for Increasing De-duplication Rate and I/O Performance (SSD 기반 스토리지 시스템에서 중복률과 입출력 성능 향상을 위한 데이터 중복제거 및 재활용 기법)

  • Kim, Ju-Kyeong;Lee, Seung-Kyu;Kim, Deok-Hwan
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.49 no.12
    • /
    • pp.149-155
    • /
    • 2012
  • An SSD is a storage device with a high-performance controller and cache buffer that consists of many NAND flash memories. Because NAND flash memory does not support in-place updates, valid pages are invalidated when update and erase operations are issued by the file system, and the invalid pages are later deleted through garbage collection. However, garbage collection performs many long-latency erase operations, which reduces I/O performance and increases wear in the SSD. In this paper, we propose a new method that de-duplicates valid data and recycles invalid data. By de-duplicating valid data and then recycling invalid data, the method improves the de-duplication ratio, and by reducing the number of writes and garbage collections, it increases I/O performance and decreases wear in the SSD. Experimental results show that it reduces the number of garbage collections by up to 20% and I/O latency by 9% compared with the baseline.
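
One way to picture the paper's recycling idea: if an incoming write's fingerprint matches a page that was invalidated but not yet erased, that page can simply be marked valid again instead of being rewritten, saving a flash write and future garbage collection. The toy flash-translation-layer below is our simplified model; the page states and class names are assumptions:

```python
import hashlib

class DedupFTL:
    """Toy flash translation layer: valid pages are de-duplicated by
    fingerprint, and invalid (not-yet-erased) pages with matching content
    are revalidated instead of rewritten.  A simplified model, not the
    paper's implementation."""

    def __init__(self):
        self.valid = {}    # fingerprint -> physical page number
        self.invalid = {}  # fingerprint -> physical page number
        self.next_page = 0

    def write(self, data: bytes) -> int:
        fp = hashlib.sha1(data).digest()
        if fp in self.valid:               # classic de-duplication hit
            return self.valid[fp]
        if fp in self.invalid:             # recycle: revalidate the old page,
            page = self.invalid.pop(fp)    # avoiding a flash write and
            self.valid[fp] = page          # future garbage collection
            return page
        page = self.next_page              # genuinely new data
        self.next_page += 1
        self.valid[fp] = page
        return page

    def invalidate(self, data: bytes) -> None:
        fp = hashlib.sha1(data).digest()
        if fp in self.valid:
            self.invalid[fp] = self.valid.pop(fp)
```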

A case of de novo duplication of 15q24-q26.3

  • Kim, Eun-Young;Kim, Yu-Kyong;Kim, Mi-Kyoung;Jung, Ji-Mi;Jeon, Ga-Won;Kim, Hye-Ran;Sin, Jong-Beom
    • Clinical and Experimental Pediatrics
    • /
    • v.54 no.6
    • /
    • pp.267-271
    • /
    • 2011
  • Distal duplication, or trisomy 15q, is an extremely rare chromosomal disorder characterized by prenatal and postnatal overgrowth, mental retardation, and craniofacial malformations. Additional abnormalities typically include an unusually short neck, malformations of the fingers and toes, scoliosis and skeletal malformations, genital abnormalities, particularly in affected males, and, in some cases, cardiac defects. The range and severity of symptoms and physical findings may vary from case to case, depending upon the length and location of the duplicated portion of chromosome 15q. Most reported cases of duplication of the long arm of chromosome 15 have more than one segmental imbalance resulting from unbalanced translocations involving chromosome 15 and deletions in another chromosome, as well as other structural chromosomal abnormalities. We report a female newborn with a de novo duplication, 15q24-q26.3, showing intrauterine overgrowth, a narrow asymmetric face with down-slanting palpebral fissures, a large, prominent nose, and micrognathia, arachnodactyly, camptodactyly, congenital heart disease, hydronephrosis, and hydroureter. Chromosomal analysis showed a 46,XX,inv(9)(p12q13),dup(15)(q24q26.3). Array comparative genomic hybridization analysis revealed a gain of 42 clones on 15q24-q26.3. This case represents the only patient reported in Korea with a de novo 15q24-q26.3 duplication that did not result from an unbalanced translocation and had no concomitant monosomic component.

Storage System Performance Enhancement Using Duplicated Data Management Scheme (중복 데이터 관리 기법을 통한 저장 시스템 성능 개선)

  • Jung, Ho-Min;Ko, Young-Woong
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.37 no.1
    • /
    • pp.8-18
    • /
    • 2010
  • Traditional storage servers suffer from duplicated data blocks, which waste storage space and network bandwidth. To address this problem, various de-duplication mechanisms have been proposed. In particular, much of this work is limited to backup servers that exploit Content-Defined Chunking (CDC); in a backup server, duplicated blocks can easily be traced by using anchors, so the CDC scheme is widely used there. In this paper, we propose a new de-duplication mechanism for improving a storage system. We focus on an efficient algorithm that supports general-purpose de-duplication servers, including backup, P2P, and FTP servers. The key idea is to adopt a stride scheme on top of the traditional fixed-block duplication-checking mechanism. Experimental results show that the proposed mechanism minimizes the computation time for detecting duplicated regions of blocks and manages storage systems efficiently.
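
The abstract names a stride scheme layered on fixed-block duplication checking but does not define it. One plausible reading (our interpretation only) is to fingerprint every STRIDE-th fixed block during a scan and fall back to block-by-block checks around any hit:

```python
import hashlib

BLOCK = 4096   # fixed block size (assumed)
STRIDE = 4     # sample every 4th block first (assumed)

def find_dup_blocks(data: bytes, index: set) -> set:
    """Scan fixed blocks with a stride: hash every STRIDE-th block, and on
    a hit also check the skipped neighbours.  Our interpretation of the
    stride idea, not the paper's exact algorithm."""
    nblocks = len(data) // BLOCK
    hits = set()

    def check(i):
        fp = hashlib.sha256(data[i * BLOCK:(i + 1) * BLOCK]).digest()
        hit = fp in index
        if hit:
            hits.add(i)
        return hit

    for i in range(0, nblocks, STRIDE):
        if check(i):                      # sampled block is a duplicate:
            for j in range(max(0, i - STRIDE + 1),
                           min(nblocks, i + STRIDE)):
                if j != i:
                    check(j)              # examine the neighbours we skipped
    return hits
```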

Design and Implementation of SANique Smart Vault Backup System for Massive Data Services (대용량 데이터 서비스를 위한 SANique Smart Vault 백업 시스템의 설계 및 구현)

  • Lee, Kyu Woong
    • The Journal of Korean Association of Computer Education
    • /
    • v.17 no.2
    • /
    • pp.97-106
    • /
    • 2014
  • Interest in data storage and backup systems is growing as data-intensive services and the user data associated with them increase. Backup performance overhead is a critical issue in massive storage systems because traditional incremental backup strategies cause a time-consuming bottleneck in the SAN environment. The SANique Smart Vault system is a high-performance backup solution with data de-duplication technology that meets these requirements. In this paper, we describe the architecture of the SANique Smart Vault system and illustrate its efficient delta incremental backup method based on journaling files. We also present the record-level data de-duplication method used in the proposed backup system. The proposed forever-incremental backup and data de-duplication algorithms are analyzed and compared against other commercial backup solutions through performance evaluation.
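
A forever-incremental backup driven by journaling files can be pictured as follows: each run consumes only the records the journal reports as changed, and record-level de-duplication drops those whose fingerprints are already stored. The journal and store shapes below are assumptions for illustration, not SANique's actual interfaces:

```python
import hashlib

def incremental_backup(journal, store):
    """Forever-incremental backup sketch: `journal` yields the records
    changed since the last run, `store` maps fingerprint -> record.
    Only records with unseen fingerprints are written; the rest are
    de-duplicated.  Record format and names are assumptions."""
    written = skipped = 0
    for record in journal:
        fp = hashlib.sha256(record).hexdigest()
        if fp not in store:
            store[fp] = record   # new data: ship it to the backup target
            written += 1
        else:
            skipped += 1         # duplicate record: keep a reference only
    return written, skipped

# store = {}; incremental_backup([b"row1", b"row2", b"row1"], store) -> (2, 1)
```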

Protection of a Multicast Connection Request in an Elastic Optical Network Using Shared Protection

  • BODJRE, Aka Hugues Felix;ADEPO, Joel;COULIBALY, Adama;BABRI, Michel
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.1
    • /
    • pp.119-124
    • /
    • 2021
  • Elastic Optical Networks (EONs) help meet the high demand for bandwidth caused by the growing number of Internet users and the explosion of multicast applications. To support multicast applications, the network operator computes a tree-shaped path, which is a set of optical channels. The bandwidth demand on an optical channel is generally so large that a single fiber failure could cause a serious interruption in data transmission and a huge loss of data. To avoid such interruptions, the tree-shaped path of a multicast connection may be protected. Several works have proposed methods to do this, but those methods may duplicate some resources after recovery from a link failure, and this duplication can lead to inefficient use of network resources. Our work proposes a protection method that eliminates the link causing the duplication, so that the final backup structure after a link failure is a tree. Evaluations and analyses show that our method uses fewer backup resources than existing methods for protecting a multicast connection.
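
The stated goal, a backup structure that is again a tree after a link failure, amounts to dropping any recovered link that would close a cycle. The union-find sketch below illustrates that goal; it is not the authors' protection algorithm:

```python
def tree_after_recovery(nodes, edges):
    """Keep edges in order, dropping any edge that would close a cycle, so
    the surviving backup structure is a tree (union-find, Kruskal style).
    An illustration of the goal stated in the paper, not the authors'
    protection algorithm."""
    parent = {n: n for n in nodes}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    kept = []
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:          # edge connects two components: keep it
            parent[ru] = rv
            kept.append((u, v))
        # else: the edge duplicates existing connectivity, so drop it
    return kept

# tree_after_recovery("abcd", [("a","b"), ("b","c"), ("a","c"), ("c","d")])
# -> [('a','b'), ('b','c'), ('c','d')]   ('a','c') is the duplicated link
```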

Ab ovo or de novo? Mechanisms of Centriole Duplication

  • Loncarek, Jadranka;Khodjakov, Alexey
    • Molecules and Cells
    • /
    • v.27 no.2
    • /
    • pp.135-142
    • /
    • 2009
  • The centrosome, an organelle comprising centrioles and associated pericentriolar material, is the major microtubule organizing center in animal cells. For the cell to form a bipolar mitotic spindle and ensure proper chromosome segregation at the end of each cell cycle, it is paramount that the cell contains two and only two centrosomes. Because the number of centrosomes in the cell is determined by the number of centrioles, cells have evolved elaborate mechanisms to control centriole biogenesis and to tightly coordinate this process with DNA replication. Here we review key proteins involved in centriole assembly, compare two major modes of centriole biogenesis, and discuss the mechanisms that ensure stringency of centriole number.

A Clustering File Backup Server Using Multi-level De-duplication (다단계 중복 제거 기법을 이용한 클러스터 기반 파일 백업 서버)

  • Ko, Young-Woong;Jung, Ho-Min;Kim, Jin
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.14 no.7
    • /
    • pp.657-668
    • /
    • 2008
  • A traditional off-the-shelf file server has several potential drawbacks when storing data blocks. The first is the lack of practical de-duplication, which wastes storage capacity. The second is the need for a high-performance computer system to process large volumes of data blocks. To address these problems, this paper proposes a clustering backup system that exploits a file-fingerprinting mechanism for block-level de-duplication. Our approach differs from traditional file servers in two ways. First, we avoid data redundancy through multi-level file-fingerprinting technology, which lets us use storage capacity efficiently. Second, we apply clustering technology to the I/O subsystem, which effectively reduces data I/O time and network bandwidth usage. Experimental results show that both the storage capacity requirement and the I/O performance are noticeably improved.
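
Multi-level fingerprinting can be sketched as a two-stage check: a whole-file fingerprint first, and block-level fingerprints only for files not seen before. The block size and index layout below are assumptions, not the paper's design:

```python
import hashlib

def backup_file(data: bytes, file_index: set, block_index: set,
                block_size: int = 4096) -> int:
    """Multi-level de-duplication sketch: a whole-file fingerprint is
    checked first; only new files are chunked and checked block by block.
    Returns the number of bytes actually stored.  Block size and index
    layout are assumptions, not the paper's design."""
    file_fp = hashlib.sha256(data).digest()
    if file_fp in file_index:        # level 1: whole file already stored
        return 0
    file_index.add(file_fp)

    stored = 0
    for off in range(0, len(data), block_size):
        block = data[off:off + block_size]
        block_fp = hashlib.sha256(block).digest()
        if block_fp not in block_index:   # level 2: store unseen blocks only
            block_index.add(block_fp)
            stored += len(block)
    return stored
```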