• Title/Summary/Keyword: data duplication

Search Result 204, Processing Time 0.025 seconds

Alimentary Tract Duplication in Pediatric Patients: Its Distinct Clinical Features and Managements

  • Kim, Soo-Hong; Cho, Yong-Hoon; Kim, Hae-Young
    • Pediatric Gastroenterology, Hepatology & Nutrition / v.23 no.5 / pp.423-429 / 2020
  • Purpose: Alimentary tract duplication (ATD) is a rare congenital condition that may occur anywhere along the intestinal tract. Clinical symptoms generally relate to the involved site, the size of the duplication, or associated ectopic mucosa. This study aimed to identify the clinical implications of ATD by anatomical location and age group, and to suggest appropriate management according to its distinct features. Methods: We retrospectively reviewed the clinical data of pediatric patients who underwent surgical management for ATD. Patients' demographics, the anatomical distribution of the duplications, clinical features according to anatomical variants, and outcomes were compared. Results: A total of 25 patients were included. ATD developed most commonly in the midgut, especially at the ileocecal region. The most common clinical presentation was abdominal pain, resulting from intestinal obstruction, gastrointestinal bleeding, or intussusception. The non-communicating cystic type was the most common pathological feature in all age groups. Prenatal detection was relatively low; however, ATD usually manifested before the infantile period. A laparoscopic procedure was performed in most cases (18/25, 72.0%), predominantly for midgut lesions (p=0.012). Conclusion: ATD occurs most commonly at the ileocecal region, and symptomatic cases are usually detected before early childhood. Surgical management should be considered whether or not symptoms are present, given the risk of symptomatic progression, and a minimally invasive procedure is the preferred approach, especially for midgut lesions.

Storage System Performance Enhancement Using Duplicated Data Management Scheme (중복 데이터 관리 기법을 통한 저장 시스템 성능 개선)

  • Jung, Ho-Min; Ko, Young-Woong
    • Journal of KIISE:Computer Systems and Theory / v.37 no.1 / pp.8-18 / 2010
  • Traditional storage servers suffer from duplicated data blocks, which waste storage space and network bandwidth. To address this problem, various de-duplication mechanisms have been proposed. Most existing work, however, is limited to backup servers that exploit Content-Defined Chunking (CDC). In a backup server, duplicated blocks can easily be traced using anchors, so the CDC scheme is widely used there. In this paper, we propose a new de-duplication mechanism for improving a storage system. We focus on an efficient algorithm for a general-purpose de-duplication server, covering backup, P2P, and FTP servers. The key idea is to apply a stride scheme to the traditional fixed-block duplicate-checking mechanism. Experimental results show that the proposed mechanism minimizes the computation time for detecting duplicated regions of blocks and manages storage systems efficiently.
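The fixed-block duplicate checking that the abstract's stride scheme builds on can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the function name, block size, and hash choice are assumptions for the example.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative fixed block size

def dedup_fixed_blocks(data: bytes, index: dict) -> tuple:
    """Split data into fixed-size blocks and store only blocks whose
    hash is not already in the index. Returns the per-block hash list
    (for reconstructing the file) and the duplicate count."""
    hashes, duplicates = [], 0
    for off in range(0, len(data), BLOCK_SIZE):
        block = data[off:off + BLOCK_SIZE]
        h = hashlib.sha1(block).hexdigest()
        if h in index:
            duplicates += 1      # block content already stored once
        else:
            index[h] = block     # first occurrence: keep the data
        hashes.append(h)
    return hashes, duplicates
```

A stride variant would advance the comparison window by a configurable stride instead of always a full block, trading detection granularity for less hashing work.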

Detection of hydin Gene Duplication in Personal Genome Sequence Data

  • Kim, Jong-Il; Ju, Young-Seok; Kim, Shee-Hyun; Hong, Dong-Wan; Seo, Jeong-Sun
    • Genomics & Informatics / v.7 no.3 / pp.159-162 / 2009
  • Human personal genome sequencing can be done with high efficiency by aligning a huge number of short reads, derived from various next-generation sequencing (NGS) technologies, to the reference genome sequence. One major obstacle is the incompleteness of the human reference genome. We analyzed the effect of hidden gene duplication on NGS data using the known example of the hydin gene. Hydin2, a duplicated copy of hydin on chromosome 16q22, has recently been found to be localized to chromosome 1q21 and is not included in the current version of the standard human genome reference. We found that none of the eight personal genome datasets published so far contain hydin2, and that there is a large number of nsSNPs in hydin. The heterozygosity of those nsSNPs was significantly higher than expected, and the sequence coverage depth in the hydin gene was about twice the average depth. We believe these unique findings for hydin can serve as useful indicators for discovering new hidden duplications in the human genome.
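The depth-based indicator the abstract describes — a locus absorbing reads from an unassembled duplicate copy shows roughly double coverage — could be screened for as below. This is a hypothetical sketch; the function name, threshold, and input shape are assumptions, not from the paper.

```python
def flag_hidden_duplication(mean_depth: float, gene_depths: dict,
                            depth_ratio: float = 1.8) -> list:
    """Return genes whose sequencing depth is roughly double the
    genome-wide average, a possible sign that reads from a hidden
    duplicate copy (like hydin2) are piling onto one locus.
    A real screen would also check for elevated heterozygosity."""
    return [gene for gene, depth in gene_depths.items()
            if depth / mean_depth >= depth_ratio]
```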

Protection of a Multicast Connection Request in an Elastic Optical Network Using Shared Protection

  • BODJRE, Aka Hugues Felix; ADEPO, Joel; COULIBALY, Adama; BABRI, Michel
    • International Journal of Computer Science & Network Security / v.21 no.1 / pp.119-124 / 2021
  • Elastic Optical Networks (EONs) help meet the high demand for bandwidth caused by the growing number of internet users and the explosion of multicast applications. To support multicast applications, a network operator computes a tree-shaped path, which is a set of optical channels. Because the bandwidth demand on an optical channel is generally enormous, a single fiber failure can cause a serious interruption in data transmission and a huge loss of data. To avoid this, the tree-shaped path of a multicast connection may be protected. Several works have proposed methods to do so, but these methods may duplicate some resources after recovery from a link failure, and this duplication can lead to inefficient use of network resources. Our work proposes a protection method that eliminates the link causing the duplication, so that the final backup path structure after a link failure is a tree. Evaluations and analyses show that our method uses fewer backup resources than existing methods for protecting a multicast connection.

Trends and Appropriateness of Outpatient Prescription Drug Use in Veterans (보훈의료지원 대상자의 외래 처방의약품 사용경향과 적정성 평가)

  • Lee, Iyn-Hyang; Shim, Da-Young
    • Korean Journal of Clinical Pharmacy / v.28 no.2 / pp.107-116 / 2018
  • Objective: This study analyzed the national claims data of veterans to generate scientific evidence on the trends and appropriateness of their drug utilization in an outpatient setting. Methods: The claims data were provided by the Health Insurance Review & Assessment Service (HIRA). Through sampling and matching, we selected two comparable groups: veterans vs. National Health Insurance (NHI) patients, and veterans vs. Medical Aid (MAID) patients. Drug use and costs were compared between groups using multivariate gamma regression models to account for the skewed distribution, and therapeutic duplication was analyzed using multivariate logistic regression models. Results: Under equivalent conditions, veteran patients made fewer visits to medical institutions (0.88 vs. 1), used 1.86 times more drugs, and paid 1.4 times more in drug costs than NHI patients (p<0.05); similarly, veteran patients made fewer visits to medical institutions (0.96 vs. 1), used 1.11 times more drugs, and incurred 0.95 times the drug costs of MAID patients (p<0.05). The risk of therapeutic duplication was 1.7 times higher (OR=1.657) in veteran patients than in NHI patients and 1.3 times higher (OR=1.311) than in MAID patients (p<0.0001). Conclusion: Similar patterns of drug use were found in veteran and MAID patients. Drug use behavior raised greater concerns in veteran patients, with longer prescribing durations and a higher rate of therapeutic duplication than in MAID patients. Efforts should be made to determine whether any inefficiency exists in veterans' drug use behavior.

De-Duplication Performance Test for Massive Data (대용량 데이터의 중복제거(De-Duplication) 성능 실험)

  • Lee, Choelmin; Kim, Jai-Hoon; Kim, Young Gyu
    • Proceedings of the Korea Information Processing Society Conference / 2012.11a / pp.271-273 / 2012
  • De-duplication saves storage space by finding files, or block-level chunks, with identical content in a storage system and removing the redundancy, so that only a single copy of each duplicated unit is kept. In this paper, rather than using purely experimental data, we tested a de-duplication technique on a large-volume data backup scenario representative of a real working environment, measured the de-duplication ratio and performance, and propose a visual presentation of the results so that evaluators and users can interpret them easily.
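The de-duplication ratio such an experiment measures — the fraction of stored blocks that turn out to be duplicates — might be computed as in this sketch. The function name and fixed-size chunking are illustrative assumptions; the paper does not specify its chunking method.

```python
import hashlib

def dedup_ratio(files: list, chunk_size: int = 4096) -> float:
    """Chunk every file into fixed-size blocks, hash each block,
    and report the fraction of blocks that were duplicates of an
    already-seen block (0.0 means no savings possible)."""
    seen, total, dup = set(), 0, 0
    for data in files:
        for off in range(0, len(data), chunk_size):
            h = hashlib.sha1(data[off:off + chunk_size]).digest()
            total += 1
            if h in seen:
                dup += 1
            else:
                seen.add(h)
    return dup / total if total else 0.0
```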

A Clustering File Backup Server Using Multi-level De-duplication (다단계 중복 제거 기법을 이용한 클러스터 기반 파일 백업 서버)

  • Ko, Young-Woong; Jung, Ho-Min; Kim, Jin
    • Journal of KIISE:Computing Practices and Letters / v.14 no.7 / pp.657-668 / 2008
  • A traditional off-the-shelf file server has several potential drawbacks in storing data blocks. The first is the lack of practical de-duplication when storing data blocks, which wastes storage capacity. The second is the requirement for a high-performance computer system to process large data blocks. To address these problems, this paper proposes a clustered backup system that exploits a file-fingerprinting mechanism for block-level de-duplication. Our approach differs from traditional file server systems in two ways. First, we avoid data redundancy through multi-level file-fingerprint technology, which enables us to use storage capacity efficiently. Second, we apply cluster technology to the I/O subsystem, which effectively reduces data I/O time and network bandwidth usage. Experimental results show that both the storage capacity requirement and the I/O performance are noticeably improved.
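One plausible reading of multi-level fingerprinting is: check a cheap whole-file fingerprint first, and only fall back to per-block fingerprints when the file as a whole is new. The sketch below illustrates that idea only; the function name, return strings, and two-level structure are assumptions, not the paper's design.

```python
import hashlib

def store_file(data: bytes, file_index: set, block_index: set,
               block_size: int = 4096) -> str:
    """Two-level duplicate check: if the whole-file fingerprint is
    already known, skip block processing entirely; otherwise fall
    back to block-level de-duplication for the file's contents."""
    file_fp = hashlib.sha1(data).digest()
    if file_fp in file_index:
        return "file-level duplicate"          # cheapest exit
    file_index.add(file_fp)
    new_blocks = 0
    for off in range(0, len(data), block_size):
        fp = hashlib.sha1(data[off:off + block_size]).digest()
        if fp not in block_index:
            block_index.add(fp)                # only new blocks stored
            new_blocks += 1
    return f"stored {new_blocks} new blocks"
```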

Technical analysis of Cloud Storage for Cloud Computing (클라우드 컴퓨팅을 위한 클라우드 스토리지 기술 분석)

  • Park, Jeong-Su; Bae, Yu-Mi; Jung, Sung-Jae
    • Journal of the Korea Institute of Information and Communication Engineering / v.17 no.5 / pp.1129-1137 / 2013
  • The cloud storage systems that cloud computing providers offer, supplying large amounts of data storage and processing, are a key component of cloud computing. Large vendors (such as Facebook, YouTube, and Google) let users quickly and easily share photos, videos, documents, and other data over the network from heterogeneous devices such as tablets and smartphones, accessing data stored in cloud storage. With the global growth of data, the cloud storage business model is now emerging. This paper analyzes the concepts and technologies of cloud storage services, a new form of network storage, including data manipulation, storage virtualization, data replication and duplication, and security, which are core to cloud computing.

A Study on Real Time Asynchronous Data Duplication Method for the Combat System (전투체계 시스템을 위한 실시간 환경에서의 비동기 이중화 기법 연구)

  • Lee, Jae-Sung; Ryu, Jon-Ha
    • Journal of the Korea Institute of Military Science and Technology / v.10 no.2 / pp.61-68 / 2007
  • In a naval combat system, the information processing node is a key piece of functional equipment and performs major combat management functions, including controlling sensor and weapon systems. A failure of one of these nodes therefore has a fatal impact on overall combat system capability. Many methodologies, such as fault-tolerant methods, have been proposed to enhance system availability by reducing the impact of system failures. This paper proposes a fault-tolerance mechanism for the information processing node using a replication algorithm with hardware duplication. The mechanism is designed as a generic algorithm and does not require any special hardware, so all applications in the combat system can use this functionality. Its asynchronous characteristic makes the algorithm adaptable to modules with low-performance hardware.
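The essence of asynchronous duplication — the primary never blocks on its mirror — can be sketched with a queue between a primary and a standby node. This is a generic illustration of the pattern; the class and method names are hypothetical and not taken from the paper.

```python
from queue import Queue

class ReplicatedNode:
    """Primary applies an update locally, then enqueues it for the
    standby; the standby drains the queue on its own schedule, so a
    slow mirror never delays the primary's real-time processing."""
    def __init__(self):
        self.state = {}
        self.outbox = Queue()

    def apply(self, key, value):
        self.state[key] = value          # local update completes first
        self.outbox.put((key, value))    # replication happens later

    def drain_to(self, standby):
        # In practice a background thread on the standby side would
        # do this; here it is called explicitly for clarity.
        while not self.outbox.empty():
            k, v = self.outbox.get()
            standby.state[k] = v
```

On failover, the standby's state lags only by whatever is still in the queue, which is the availability/consistency trade-off asynchronous schemes accept.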

A Lightweight HL7 Message Strategy for Real-Time ECG Monitoring (실시간 심전도 모니터링을 위한 HL7 메시지 간소화 전략)

  • Lee, Kuyeon; Kang, Kyungtae; Lee, Jaemyoun; Park, Juyoung
    • KIISE Transactions on Computing Practices / v.21 no.3 / pp.183-191 / 2015
  • Recent developments in IT have made real-time ECG monitoring possible, and this represents a promising application of the emerging HL7 standard for the exchange of clinical information. However, applying the HL7 standard directly to real-time ECG monitoring causes problems, because the partial duplication of data within an HL7 message increases both the amount of data to be transmitted and the time taken to process it. We reduce these overheads by Feature Scaling, standardizing the range of independent variables or features of the data, while still generating HL7-compliant messages. We also use a de-duplication algorithm to eliminate the partial repetition of the OBX field in an HL7 ORU message. Our strategy shortens the time required to create messages by 51% and reduces the size of messages by 1/8, compared to naive HL7 coding.
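Removing partial OBX repetition can be illustrated by merging consecutive OBX segments that carry the same observation identifier, joining their values with HL7's `~` repetition separator so the shared header travels once. This is a simplified sketch, not the authors' algorithm: the field positions assumed here (OBX-3 identifier at index 3, OBX-5 value at index 5 after splitting on `|`) follow the common OBX layout but real ORU messages vary.

```python
def dedup_obx(segments: list) -> list:
    """Merge consecutive OBX segments whose observation identifier
    (field index 3, assumed OBX-3) repeats, appending each value
    (field index 5, assumed OBX-5) with '~' to the kept segment."""
    out = []
    for seg in segments:
        fields = seg.split("|")
        if (fields[0] == "OBX" and out and out[-1].startswith("OBX")
                and out[-1].split("|")[3] == fields[3]):
            out[-1] += "~" + fields[5]   # same observation: batch value
            continue
        out.append(seg)                  # new observation or non-OBX
    return out
```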