• Title/Summary/Keyword: Data Merge

Telemetry Data Recovery Method Using Multiple PCM Data (다중 PCM 데이터를 이용한 텔레메트리 데이터 복구 방법)

  • Jung, Haeseung;Kim, Joonyun
    • Aerospace Engineering and Technology
    • /
    • v.11 no.2
    • /
    • pp.96-102
    • /
    • 2012
  • Recently, interest in frame error reduction methods that use multiple PCM data streams received at several ground stations has been increasing. A simple data merge method has already been applied to the data processing system at the Naro Space Center and was used in the analysis of the first and second flight tests of KSLV-I. This paper focuses on error reduction with an error-correcting merge algorithm and a time-delayed data correction algorithm. Applying the proposed algorithms to the flight test data shows a 1.32% improvement in error rate compared to the simple data merge method. The presented algorithms are expected to be very useful in generating various kinds of merged telemetry data.
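
As a rough illustration of merging PCM streams received at several ground stations, the Java sketch below keeps, byte by byte, the value reported by the majority of stations for frames with the same frame counter. The types and the majority-vote rule are our own assumptions, not the paper's error-correcting or time-delay algorithms.

```java
import java.util.*;

/** Minimal sketch of an error-correcting merge of PCM frames received at
 *  several ground stations (hypothetical types, not the paper's code).
 *  For each byte position, the value reported by the majority of stations
 *  is kept, which suppresses isolated reception errors. */
public class PcmFrameMerger {

    /** Merge frames carrying the same frame counter from multiple stations. */
    public static byte[] mergeByMajority(List<byte[]> framesFromStations) {
        int frameLength = framesFromStations.get(0).length;
        byte[] merged = new byte[frameLength];
        for (int i = 0; i < frameLength; i++) {
            Map<Byte, Integer> votes = new HashMap<>();
            for (byte[] frame : framesFromStations) {
                votes.merge(frame[i], 1, Integer::sum);
            }
            // Pick the byte value seen by the most stations.
            merged[i] = Collections.max(votes.entrySet(),
                    Map.Entry.comparingByValue()).getKey();
        }
        return merged;
    }

    public static void main(String[] args) {
        byte[] stationA = {0x10, 0x22, 0x33};
        byte[] stationB = {0x10, 0x7F, 0x33};  // one corrupted byte
        byte[] stationC = {0x10, 0x22, 0x33};
        System.out.println(Arrays.toString(
                mergeByMajority(Arrays.asList(stationA, stationB, stationC))));
    }
}
```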

An Efficient Log Buffer Management Through Join between Log Blocks (로그 블록 간 병합을 이용한 효율적인 로그 버퍼 관리)

  • Kim, Hak-cheol;Park, Youg-hun;Yun, Jong-hyeon;Seo, Dong-min;Song, Seok-il;Yoo, Jae-soo
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2009.05a
    • /
    • pp.51-56
    • /
    • 2009
  • Flash memory has been rapidly deployed as data storage. However, it has a major disadvantage: recorded data cannot be overwritten in place. To address this "erase-before-write" problem, flash memory file systems use a log block buffer scheme. Among existing log buffer management schemes, BAST suffers from frequent merge operations under random write patterns, while FAST does not take frequently updated data into account when merging; neither considers both the cost of merge operations and the update frequency of the data. In this paper, we propose a new log buffer management scheme, called JBB. The proposed method evaluates the worth of merging each log block: it merges infrequently updated data with its data blocks and postpones the merge of frequently updated data. In this way, we avoid unnecessary merge operations, reduce the number of erase operations, and improve the utilization of the flash memory storage. We show the superiority of the proposed method through a performance evaluation against BAST and FAST.
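
The Java sketch below illustrates, under our own simplifying assumptions, the kind of policy the abstract describes: when the log buffer is full, the block that has absorbed the fewest overwrites is merged first, postponing merges of frequently updated ("hot") blocks. It is not the paper's JBB implementation.

```java
import java.util.*;

/** Hypothetical sketch of a JBB-style victim selection policy: cold log
 *  blocks are merged with their data blocks first, while merges for hot
 *  blocks are postponed so they can keep absorbing overwrites. */
public class LogBufferSketch {
    static class LogBlock {
        final int dataBlockId;
        int updateCount;                       // overwrites absorbed so far
        LogBlock(int dataBlockId) { this.dataBlockId = dataBlockId; }
    }

    private final List<LogBlock> logBuffer = new ArrayList<>();
    private final int capacity;

    LogBufferSketch(int capacity) { this.capacity = capacity; }

    /** Record a page write to the given data block. */
    void write(int dataBlockId) {
        for (LogBlock b : logBuffer) {
            if (b.dataBlockId == dataBlockId) { b.updateCount++; return; }
        }
        if (logBuffer.size() == capacity) {
            // Victim selection: merge the coldest block, i.e. the one whose
            // merge is least likely to be wasted by an imminent rewrite.
            LogBlock victim = Collections.min(logBuffer,
                    Comparator.comparingInt((LogBlock b) -> b.updateCount));
            logBuffer.remove(victim);
            System.out.println("merge data block " + victim.dataBlockId
                    + " (updates absorbed: " + victim.updateCount + ")");
        }
        logBuffer.add(new LogBlock(dataBlockId));
    }

    public static void main(String[] args) {
        LogBufferSketch buf = new LogBufferSketch(2);
        int[] writes = {1, 1, 1, 2, 3, 1, 4};  // data block 1 is hot
        for (int w : writes) buf.write(w);     // block 1 is never merged
    }
}
```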

Merge of VRML Mesh for 3D Shape Data Compression and Transmission (3D 형상 데이터의 압축 및 전송을 위한 VRML 메쉬의 병합에 관한 연구)

  • 장태범;문광원;정재열;김덕수
    • Korean Journal of Computational Design and Engineering
    • /
    • v.7 no.2
    • /
    • pp.89-95
    • /
    • 2002
  • VRML data, which mainly consists of structural elements, is frequently used for modeling and visualizing 3D objects. Although there can be variations, it is common practice to represent 3D shapes in the VRML format. Ever since the advent of the Internet, there has been a strong need to transfer shape data over the network. Because of this need, it is necessary to transform a data file in VRML or a similar format into a form that is more convenient to transfer. In a VRML file, a model is sometimes divided into a set of triangle meshes for several practical reasons. However, this has various drawbacks for fast transmission, so it is more efficient to merge the separate meshes into a single mesh before transmission. In this paper, we present the problems that arise in the merge process and techniques to handle them.
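
A minimal Java sketch of the basic merge step, assuming meshes are given as coordinate and triangle index lists in the style of a VRML IndexedFaceSet: coordinate lists are concatenated and face indices are shifted by the running vertex offset. Welding duplicated boundary vertices, one of the issues the paper addresses, is not handled here.

```java
import java.util.*;

/** Sketch of merging several triangle meshes into a single mesh for
 *  transmission. The Mesh type is our own simplified stand-in for a VRML
 *  IndexedFaceSet: a coordinate list plus triangles as index triples. */
public class MeshMerger {
    static class Mesh {
        final List<float[]> coords = new ArrayList<>(); // {x, y, z}
        final List<int[]> faces = new ArrayList<>();    // 3 vertex indices
    }

    static Mesh merge(List<Mesh> meshes) {
        Mesh out = new Mesh();
        for (Mesh m : meshes) {
            int offset = out.coords.size();     // indices shift by this amount
            out.coords.addAll(m.coords);
            for (int[] f : m.faces) {
                out.faces.add(new int[] {f[0] + offset, f[1] + offset, f[2] + offset});
            }
        }
        return out;
    }
}
```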

Indexing Methods of Splitting XML Documents (XML 문서의 분할 인덱스 기법)

  • Kim, Jong-Myung;Jin, Min
    • Journal of Korea Multimedia Society
    • /
    • v.6 no.3
    • /
    • pp.397-408
    • /
    • 2003
  • Existing indexing mechanisms for XML data based on a numbering scheme have the drawback of rebuilding the entire index structure whenever an insertion, deletion, or update occurs on the data. We propose a new indexing mechanism based on split blocks to cope with this problem. The XML data are split into blocks such that there is at most one relationship between any two blocks, and a numbering scheme is applied to each block separately. This mechanism reduces the overhead of rebuilding index structures when insertions, deletions, and updates occur. We also propose two algorithms, the Parent-Child Block Merge Algorithm and the Ancestor-Descendent Block Merge Algorithm, which retrieve the relationship between two entities in the XML hierarchy using this indexing mechanism. In addition, we propose a variant in which the identifier of a block carries information about its parent block to expedite retrieval of ancestor-descendent relationships, together with versions of the two algorithms that use it.
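
A hypothetical Java sketch of the split-block labeling idea: nodes carry pre/post numbers that are local to their block, and each block records its parent block, so an insertion only renumbers one block. The actual Parent-Child and Ancestor-Descendent Block Merge algorithms of the paper are more involved than the simple tests shown here.

```java
import java.util.*;

/** Split-block node labels: (blockId, pre, post) local to the block, plus a
 *  block-level parent link. All names and tests here are our own sketch. */
public class SplitBlockIndex {
    static class NodeLabel {
        final int blockId, pre, post;   // pre/post order local to the block
        NodeLabel(int blockId, int pre, int post) {
            this.blockId = blockId; this.pre = pre; this.post = post;
        }
    }

    // blockId -> id of the block that contains this block's root's parent
    private final Map<Integer, Integer> parentBlock = new HashMap<>();

    void linkBlocks(int childBlock, int parentBlockId) {
        parentBlock.put(childBlock, parentBlockId);
    }

    /** Ancestor test inside one block via local pre/post numbering. */
    boolean isAncestorWithinBlock(NodeLabel a, NodeLabel d) {
        return a.blockId == d.blockId && a.pre < d.pre && a.post > d.post;
    }

    /** Crossing blocks: walk the block parent links (simplified). */
    boolean blocksRelated(int descendantBlock, int ancestorBlock) {
        Integer b = descendantBlock;
        while (b != null) {
            if (b == ancestorBlock) return true;
            b = parentBlock.get(b);
        }
        return false;
    }
}
```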

External Merge Sorting in Tajo with Variable Server Configuration (매개변수 환경설정에 따른 타조의 외부합병정렬 성능 연구)

  • Lee, Jongbaeg;Kang, Woon-hak;Lee, Sang-won
    • Journal of KIISE
    • /
    • v.43 no.7
    • /
    • pp.820-826
    • /
    • 2016
  • There is a growing requirement for big data processing which extracts valuable information from a large amount of data. The Hadoop system employs the MapReduce framework to process big data. However, MapReduce has limitations such as inflexible and slow data processing. To overcome these drawbacks, SQL query processing techniques known as SQL-on-Hadoop were developed. Apache Tajo, one of the SQL-on-Hadoop systems, was developed by a Korean development group. External merge sort is one of the most heavily used algorithms in Tajo for query processing, and its performance is influenced by two parameters, the sort buffer size and the fanout. In this paper, we analyzed the performance of external merge sort in Tajo with various sort buffer sizes and fanouts. In addition, we found that there are two major causes of performance differences in external merge sort: CPU cache misses, which increase as the sort buffer size grows, and the number of merge passes, which is determined by the fanout.
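
The Java sketch below is a simplified in-memory model, not Tajo's code, of external merge sort. It makes the role of the two parameters explicit: the sort buffer size fixes the length of the initial runs, and the fanout fixes how many runs are merged per pass, which determines the number of merge passes.

```java
import java.util.*;

/** Simplified model of external merge sort: build sorted runs of at most
 *  bufferSize elements, then repeatedly k-way merge up to `fanout` runs per
 *  pass until a single run remains, counting the passes. */
public class ExternalMergeSortSketch {

    static List<List<Integer>> makeRuns(int[] data, int bufferSize) {
        List<List<Integer>> runs = new ArrayList<>();
        for (int start = 0; start < data.length; start += bufferSize) {
            int end = Math.min(start + bufferSize, data.length);
            int[] run = Arrays.copyOfRange(data, start, end);
            Arrays.sort(run);                           // in-buffer sort
            List<Integer> boxed = new ArrayList<>();
            for (int v : run) boxed.add(v);
            runs.add(boxed);
        }
        return runs;
    }

    static List<Integer> mergeRuns(List<List<Integer>> group) {
        // k-way merge with a heap of {value, runIndex, positionInRun}
        PriorityQueue<int[]> heap =
                new PriorityQueue<>(Comparator.comparingInt((int[] e) -> e[0]));
        for (int r = 0; r < group.size(); r++) {
            if (!group.get(r).isEmpty())
                heap.add(new int[] {group.get(r).get(0), r, 0});
        }
        List<Integer> out = new ArrayList<>();
        while (!heap.isEmpty()) {
            int[] top = heap.poll();
            out.add(top[0]);
            List<Integer> run = group.get(top[1]);
            int next = top[2] + 1;
            if (next < run.size()) heap.add(new int[] {run.get(next), top[1], next});
        }
        return out;
    }

    static List<Integer> sort(int[] data, int bufferSize, int fanout) {
        if (fanout < 2) throw new IllegalArgumentException("fanout must be >= 2");
        List<List<Integer>> runs = makeRuns(data, bufferSize);
        int passes = 0;
        while (runs.size() > 1) {                       // one merge pass
            List<List<Integer>> next = new ArrayList<>();
            for (int i = 0; i < runs.size(); i += fanout)
                next.add(mergeRuns(runs.subList(i, Math.min(i + fanout, runs.size()))));
            runs = next;
            passes++;
        }
        System.out.println("merge passes: " + passes);
        return runs.isEmpty() ? Collections.emptyList() : runs.get(0);
    }
}
```

With a fixed input size, a larger buffer or a larger fanout reduces the number of merge passes, which is the effect the paper measures against the competing cost of CPU cache misses.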

A Compact Divide-and-conquer Algorithm for Delaunay Triangulation with an Array-based Data Structure (배열기반 데이터 구조를 이용한 간략한 divide-and-conquer 삼각화 알고리즘)

  • Yang, Sang-Wook;Choi, Young
    • Korean Journal of Computational Design and Engineering
    • /
    • v.14 no.4
    • /
    • pp.217-224
    • /
    • 2009
  • Most divide-and-conquer implementations for Delaunay triangulation utilize a quad-edge or winged-edge data structure, since triangles are frequently deleted and created during the merge process. However, the proposed divide-and-conquer algorithm utilizes an array-based data structure that is much simpler than the quad-edge data structure and requires less memory allocation. The proposed algorithm has two important features. First, the information about the space partitioning is represented as a permutation vector sequence in a vertex array, so no additional data is required for the space partitioning. The permutation vector represents adaptively divided regions in two dimensions, and two-dimensional partitioning of the space is more efficient than one-dimensional partitioning in the merge process. Second, no edges are deleted in the merge process, and thus no bookkeeping of complex intermediate states for topology changes is necessary. The algorithm is described in a compact manner with the proposed data structures and operators so that it can be easily implemented with computational efficiency.
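
One plausible reading of the permutation-vector idea, sketched in Java under our own assumptions: the vertex index array is recursively reordered by alternating median splits in x and y, so each divided region occupies a contiguous index range and no extra partitioning data is stored. The Delaunay merge step itself is not reproduced here.

```java
import java.util.*;

/** Sketch: the ordering of the index array *is* the 2D space partitioning.
 *  Each recursive call sorts a subrange on one axis and splits at the
 *  median, alternating axes, so divided regions are contiguous ranges. */
public class PermutationPartition {

    /** points[i] = {x, y}; order is the permutation vector being built. */
    static void partition(double[][] points, Integer[] order,
                          int lo, int hi, boolean splitOnX) {
        if (hi - lo <= 2) return;                  // small cell: stop dividing
        int axis = splitOnX ? 0 : 1;
        Arrays.sort(order, lo, hi,
                Comparator.comparingDouble((Integer i) -> points[i][axis]));
        int mid = (lo + hi) / 2;                   // median split
        partition(points, order, lo, mid, !splitOnX);
        partition(points, order, mid, hi, !splitOnX);
    }

    public static void main(String[] args) {
        double[][] pts = {{0.3, 0.7}, {0.9, 0.1}, {0.5, 0.5},
                          {0.1, 0.9}, {0.8, 0.8}, {0.2, 0.2}};
        Integer[] order = {0, 1, 2, 3, 4, 5};
        partition(pts, order, 0, pts.length, true);
        System.out.println(Arrays.toString(order)); // the permutation vector
    }
}
```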

Analysis and Comparison of Sorting Algorithms (Insertion, Merge, and Heap) Using Java

  • Khaznah, Alhajri;Wala, Alsinan;Sahar, Almuhaishi;Fatimah, Alhmood;Narjis, AlJumaia;Azza., A.A
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.12
    • /
    • pp.197-204
    • /
    • 2022
  • Sorting is an important operation in many real-world applications, and several sorting algorithms are currently in use for searching and other operations. Sorting algorithms rearrange the elements of an array or list based on a comparison operator, which is used to establish the new order of the elements in the data structure. This report analyzes and compares, both theoretically and experimentally, the time complexity and running time of the insertion, merge, and heap sort algorithms. The algorithms are implemented in Java using the NetBeans tool. The results show that when dealing with already-sorted elements, insertion sort has a faster running time than the merge and heap algorithms, whereas for a large number of elements it is better to use merge sort. In terms of the number of comparisons, insertion sort has the highest count.
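
A small self-contained Java experiment in the spirit of the paper (our own code, not the authors'), counting comparisons for insertion sort and merge sort on already-sorted input: insertion sort does only n-1 comparisons on sorted data, while merge sort still does on the order of n log n.

```java
import java.util.*;

/** Count comparisons made by insertion sort and merge sort. */
public class SortComparison {
    static long comparisons;

    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i], j = i - 1;
            while (j >= 0) {
                comparisons++;
                if (a[j] <= key) break;            // found insertion point
                a[j + 1] = a[j];                   // shift larger element right
                j--;
            }
            a[j + 1] = key;
        }
    }

    static void mergeSort(int[] a, int lo, int hi) {   // sorts a[lo..hi)
        if (hi - lo <= 1) return;
        int mid = (lo + hi) / 2;
        mergeSort(a, lo, mid);
        mergeSort(a, mid, hi);
        int[] tmp = new int[hi - lo];
        int i = lo, j = mid, k = 0;
        while (i < mid && j < hi) {
            comparisons++;
            tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
        }
        while (i < mid) tmp[k++] = a[i++];
        while (j < hi) tmp[k++] = a[j++];
        System.arraycopy(tmp, 0, a, lo, tmp.length);
    }

    public static void main(String[] args) {
        int n = 10_000;
        int[] sorted = new int[n];
        for (int i = 0; i < n; i++) sorted[i] = i;

        comparisons = 0;
        insertionSort(sorted.clone());
        System.out.println("insertion sort, sorted input: " + comparisons);

        comparisons = 0;
        mergeSort(sorted.clone(), 0, n);
        System.out.println("merge sort, sorted input:     " + comparisons);
    }
}
```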

Constructing the Models Estimated for Speed Variation on the Merge Section in the Freeway (고속도로의 합류구간내 속도변화 추정모형 구축에 관한 연구)

  • 신광식;김태곤
    • Journal of Korean Port Research
    • /
    • v.13 no.1
    • /
    • pp.113-122
    • /
    • 1999
  • Congestion and traffic accidents occur on the merge and diverge sections at freeway interchanges. Studies aimed at reducing traffic delay and accidents on freeway merge sections have been conducted since the 1960s, but no study had estimated the speed variation on the merge section, constructed models of that speed variation, and suggested appropriate measures. The purpose of this study was to identify the traffic flow characteristics on the freeway merge section, construct models estimating the speed variation there, and finally establish appropriate measures for reducing traffic delay and accidents on the merge section. The following results were obtained: i) Speed variations on the urban freeway were about 3.2 mph, 6.5 mph, and 7.4 mph for the morning peak period, afternoon peak period, and 24-hour period, respectively, while those on the suburban freeway were about 8.0 mph, 11.1 mph, and 10.1 mph for the same periods. Therefore, different speed reduction signs should be installed, according to area and period, to reduce delay and accidents on the merge section as part of the freeway traffic management system (FTMS). ii) The models estimating speed variation need to be studied further with changeable message sign (CMS) techniques based on real-time data, so that traffic flow can be maximized and traffic delay and accidents on the merge section reduced, as part of a more efficient FTMS in the near future.

Automatic Merging of Distributed Topic Maps based on T-MERGE Operator (T-MERGE 연산자에 기반한 분산 토픽맵의 자동 통합)

  • Kim Jung-Min;Shin Hyo-Pil;Kim Hyoung-Joo
    • Journal of KIISE:Software and Applications
    • /
    • v.33 no.9
    • /
    • pp.787-801
    • /
    • 2006
  • Ontology merging describes the process of integrating two ontologies into a new ontology. How this is best done is a subject of ongoing research in the Semantic Web, data integration, knowledge management systems, and other ontology-related application systems. Earlier research on ontology merging, however, has focused on developing effective ontology matching approaches, while overlooking the analysis and resolution of the problems that arise when merging two ontologies given correspondences between them. In this paper, we propose a specific ontology merging process and a generic operator, T-MERGE, for integrating two source ontologies into a new ontology. We also define a taxonomy of merging conflicts, derived from differing representations between input ontologies, together with a method for detecting and resolving them. Our T-MERGE operator encapsulates the detection and resolution of conflicts and the merging of two entities based on given correspondences between them. We define a data structure, MergeLog, for logging the execution of the T-MERGE operator; MergeLog is used to report detailed merge results to users or to recover from errors. For our experiments, we used oriental philosophy ontologies, western philosophy ontologies, the Yahoo western philosophy dictionary, and the Naver philosophy dictionary as input ontologies. Our experiments show that the automatic merging module has advantages over manual merging by an expert in terms of time and effort.
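
A hedged Java sketch (Java 16+, our own simplified model rather than the paper's topic-map implementation) of a T-MERGE-style operator: two ontologies, modeled naively as topic-to-property maps, are merged given correspondences, and conflicting property values are recorded in a MergeLog-like list so a user can inspect or undo the merge.

```java
import java.util.*;

/** Merge two toy ontologies given correspondences, logging value conflicts.
 *  The paper's conflict taxonomy and topic-map model are far richer. */
public class TMergeSketch {
    record Correspondence(String topicInA, String topicInB) {}
    record LogEntry(String topic, String property, String valueA, String valueB) {}

    static Map<String, Map<String, String>> merge(
            Map<String, Map<String, String>> a,
            Map<String, Map<String, String>> b,
            List<Correspondence> correspondences,
            List<LogEntry> mergeLog) {

        Map<String, Map<String, String>> result = new HashMap<>();
        a.forEach((t, props) -> result.put(t, new HashMap<>(props)));

        Set<String> mappedInB = new HashSet<>();
        for (Correspondence c : correspondences) {
            mappedInB.add(c.topicInB());
            Map<String, String> target = result.get(c.topicInA());
            for (Map.Entry<String, String> e : b.get(c.topicInB()).entrySet()) {
                String existing = target.get(e.getKey());
                if (existing == null) {
                    target.put(e.getKey(), e.getValue());       // no conflict
                } else if (!existing.equals(e.getValue())) {
                    // Conflicting values: keep A's value, log the conflict.
                    mergeLog.add(new LogEntry(c.topicInA(), e.getKey(),
                            existing, e.getValue()));
                }
            }
        }
        // Topics of B with no correspondence are copied over unchanged.
        b.forEach((t, props) -> {
            if (!mappedInB.contains(t)) result.put(t, new HashMap<>(props));
        });
        return result;
    }
}
```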

Inverted Indexing Method for XML Data (XML 데이터의 역 인덱싱 기법)

  • 김종명;진민
    • Proceedings of the Korea Multimedia Society Conference
    • /
    • 2002.11b
    • /
    • pp.343-346
    • /
    • 2002
  • We propose an indexing scheme that, for XML data stored in a relational database, reduces the burden of redefining the index whenever data are inserted, deleted, or updated. The XML data are divided into blocks such that at most one relationship exists between any two blocks, and an index is defined by applying a numbering scheme to each block. We also propose the Parent-Child Block Merge Algorithm and the Ancestor-Descendent Block Merge Algorithm for processing XML queries using the defined indexes.
