• Title/Summary/Keyword: Large-Size Data Processing


BTC employing a Quad Tree Technique for Image Data Compression (QUAD TREE를 이용한 BTC에서의 영상데이타 압축)

  • 백인기;김해수;조성환;이근영
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.13 no.5
    • /
    • pp.390-399
    • /
    • 1988
  • A conventional BTC has the merits of real-time processing and simple computation, but suffers from a low compression ratio. In this paper, a modified BTC using the quad tree, a structure frequently used for binary images, is proposed. The method lowers the data rate by decreasing the total number of subblocks: the size of a subblock is made large in areas of small gray-level variation and small in areas of large gray-level variation. For effective transmission of the bit plane, a Huffman run-length code is used for large subblocks and a lookup table for small subblocks. The proposed BTC method codes 256-level images at an average data rate of about 0.8 bit/pixel. (A minimal sketch of the quad-tree splitting idea follows this entry.)

  • PDF
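
A minimal sketch of the quad-tree BTC idea described above, assuming a grayscale image whose side is a power of two; the variance threshold and block-size limits are illustrative choices, not values from the paper.

```python
import numpy as np

VAR_THRESH = 100.0   # illustrative gray-level variance split threshold
MIN_BLOCK = 4        # smallest subblock side
MAX_BLOCK = 32       # largest subblock side

def btc_encode_block(block):
    """Standard BTC: keep the block mean, standard deviation, and bit plane."""
    mean, std = block.mean(), block.std()
    return mean, std, block >= mean

def btc_decode_block(mean, std, bitplane):
    """Reconstruct the two BTC levels that preserve the mean and variance."""
    q, m = int(bitplane.sum()), bitplane.size
    if q in (0, m):
        return np.full(bitplane.shape, mean)
    a = mean - std * np.sqrt(q / (m - q))   # level for pixels below the mean
    b = mean + std * np.sqrt((m - q) / q)   # level for pixels at/above it
    return np.where(bitplane, b, a)

def quadtree_btc(img, y, x, size, out):
    """Split blocks with large gray-level variation into four quadrants;
    encode uniform blocks directly, which lowers the total subblock count."""
    block = img[y:y + size, x:x + size]
    if size > MIN_BLOCK and block.var() > VAR_THRESH:
        h = size // 2
        for dy, dx in ((0, 0), (0, h), (h, 0), (h, h)):
            quadtree_btc(img, y + dy, x + dx, h, out)
    else:
        out.append((y, x, size, btc_encode_block(block)))

# Drive the encoder over the whole image, one top-level tile at a time.
img = np.random.randint(0, 256, (64, 64)).astype(float)
codes = []
for ty in range(0, img.shape[0], MAX_BLOCK):
    for tx in range(0, img.shape[1], MAX_BLOCK):
        quadtree_btc(img, ty, tx, MAX_BLOCK, codes)
```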

GPU Based Incremental Connected Component Processing in Dynamic Graphs (동적 그래프에서 GPU 기반의 점진적 연결 요소 처리)

  • Kim, Nam-Young;Choi, Do-Jin;Bok, Kyoung-Soo;Yoo, Jae-Soo
    • The Journal of the Korea Contents Association
    • /
    • v.22 no.6
    • /
    • pp.56-68
    • /
    • 2022
  • Recently, as the demand for real-time processing increases, studies on dynamic graphs that change over time have been actively conducted. Connected-component processing is one of the key algorithms for analyzing dynamic graphs. GPUs are suitable for large-scale graph computation due to their high memory bandwidth and computational performance. However, when computing the connected components of a dynamic graph on the GPU, the GPU's limited memory causes frequent data exchange between the CPU and the GPU during real graph processing. The proposed scheme utilizes the Weighted-Quick-Union algorithm to process large-scale graphs on the GPU, and supports fast connected-component computation by attaching the component size to the connected-component label. It computes the connected components by determining the parts that must be recalculated and minimizing the data to be transmitted to the GPU. In addition, we propose a processing structure in which the GPU and the CPU execute asynchronously to reduce the data transfer time between them. We show the superiority of the proposed scheme through performance evaluations using real datasets. (A minimal sketch of the Weighted-Quick-Union structure follows this entry.)
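
A minimal sketch of the Weighted-Quick-Union structure the scheme builds on, in plain Python; the paper's GPU kernels and CPU/GPU transfer logic are not reproduced, and the edges below are illustrative.

```python
class WeightedQuickUnion:
    """Union-find with union by size, usable as connected-component labels."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n          # component size kept at each root label

    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:   # attach smaller tree under larger
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

# Incremental use: only edges that arrived since the last snapshot are replayed.
uf = WeightedQuickUnion(6)
for u, v in [(0, 1), (1, 2), (4, 5)]:    # initial edges
    uf.union(u, v)
for u, v in [(2, 4)]:                    # newly inserted edge
    uf.union(u, v)
print(uf.find(0) == uf.find(5))          # True: 0 and 5 are now connected
```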

Large-Memory Data Processing on a Remote Memory System using Commodity Hardware (대용량 메모리 데이타 처리를 위한 범용 하드웨어 기반의 원격 메모리 시스템)

  • Jung, Hyung-Soo;Han, Hyuck;Yeom, Heon-Y.
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.9
    • /
    • pp.445-458
    • /
    • 2007
  • This article presents a novel infrastructure for large-memory database processing using commodity hardware with operating system support. We exploit inexpensive PCs and a high-speed network capable of Remote Direct Memory Access (RDMA) operations to build a new memory hierarchy between fast volatile memory and slow disk storage. The new memory hierarchy guarantees a reasonable response time, and its storage size enables us to run large-memory database systems with little performance degradation. The proposed architecture has two main components: (1) a remote memory system inside the Linux kernel to manage other computers' memory pages efficiently and (2) a remote memory pager responsible for handling read/write operations on remote memory pages. We argue that the proposed architecture is practical enough to support the rigorous demands of commercial in-memory database systems, and demonstrate this with the performance of publicly available main-memory databases (e.g., MySQL) on our prototype system, which yields very interesting results under the TPC-C benchmark. (A toy model of the pager's page flow follows this entry.)
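
A toy model of the memory hierarchy described above, assuming a small local page cache backed by remote memory; the `remote` dictionary and the class name are hypothetical stand-ins for RDMA reads/writes to another node, not the paper's kernel implementation.

```python
from collections import OrderedDict

class RemoteMemoryPager:
    """Local LRU page cache in front of remote memory: hits are served
    locally; misses fetch from the remote store; evictions write back."""
    def __init__(self, local_pages, remote_store):
        self.cache = OrderedDict()       # page_no -> bytes, in LRU order
        self.capacity = local_pages
        self.remote = remote_store       # dict standing in for a remote node

    def read(self, page_no):
        if page_no in self.cache:
            self.cache.move_to_end(page_no)           # LRU hit
            return self.cache[page_no]
        data = self.remote.get(page_no, bytes(4096))  # stand-in "RDMA read"
        self._install(page_no, data)
        return data

    def write(self, page_no, data):
        self._install(page_no, data)

    def _install(self, page_no, data):
        self.cache[page_no] = data
        self.cache.move_to_end(page_no)
        if len(self.cache) > self.capacity:
            victim, vdata = self.cache.popitem(last=False)
            self.remote[victim] = vdata               # "RDMA write" on eviction

pager = RemoteMemoryPager(local_pages=2, remote_store={})
pager.write(0, b"a" * 4096)
pager.write(1, b"b" * 4096)
pager.write(2, b"c" * 4096)      # evicts page 0 to the remote store
assert pager.read(0) == b"a" * 4096
```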

Domain-Adaptation Technique for Semantic Role Labeling with Structural Learning

  • Lim, Soojong;Lee, Changki;Ryu, Pum-Mo;Kim, Hyunki;Park, Sang Kyu;Ra, Dongyul
    • ETRI Journal
    • /
    • v.36 no.3
    • /
    • pp.429-438
    • /
    • 2014
  • Semantic role labeling (SRL) is a task in natural-language processing that aims to detect predicates in text, choose their correct senses, identify their associated arguments, and predict the semantic roles of those arguments. Developing a high-performance SRL system for a domain requires a large amount of manually annotated training data in that domain; however, such data is available for only a few domains, and constructing SRL training data for a new domain is very expensive. Domain adaptation in SRL is therefore an important problem. In this paper, we show that domain adaptation for SRL systems can achieve state-of-the-art performance when based on structural learning and a prior-model approach. We provide experimental results on three different target domains showing that our method is effective even when only a small amount of training data is available for the target domain; in our experiments, the proposed method outperforms other work by about 2% to 5% in F-score. (A minimal sketch of the prior-model idea follows this entry.)
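
One common way to realize the prior-model idea is to regularize the target-domain weights toward the weights learned on the source domain rather than toward zero. A minimal logistic-regression sketch under that assumption; this is not the paper's structural-learning formulation, and all names are illustrative.

```python
import numpy as np

def train_with_prior(X, y, w_src, lam=1.0, lr=0.1, epochs=200):
    """Binary logistic regression whose L2 penalty (lam/2)*||w - w_src||^2
    pulls the target-domain weights toward the source-domain model w_src."""
    w = w_src.copy()                 # warm start from the source model
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))               # predicted probabilities
        grad = X.T @ (p - y) / len(y) + lam * (w - w_src)
        w -= lr * grad
    return w

# With little target data, a large lam keeps the model close to the prior.
rng = np.random.default_rng(0)
w_src = rng.normal(size=5)           # weights learned on the source domain
X_tgt = rng.normal(size=(20, 5))     # small target-domain sample
y_tgt = (X_tgt @ w_src + rng.normal(scale=0.5, size=20) > 0).astype(float)
w_tgt = train_with_prior(X_tgt, y_tgt, w_src, lam=2.0)
```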

A VLSI Architecture for the Real-Time 2-D Digital Signal Processing (실시간 2차원 디지털 신호처리를 위한 VLSI 구조)

  • 권희훈
    • Information and Communications Magazine
    • /
    • v.9 no.9
    • /
    • pp.72-85
    • /
    • 1992
  • The throughput requirement of many digital signal processing applications is such that multiple processing units are essential for real-time implementation. Advances in VLSI technology make it feasible to design and implement computer systems consisting of a large number of functional units. Research on very-high-throughput VLSI architectures for digital signal processing requires the development of algorithm decomposition schemes that minimize data-communication requirements as well as computational complexity. The objectives of this research are to investigate computationally efficient algorithms for the class of problems that can be modeled as discrete linear shift-invariant (DLSI) or adaptive systems, and to develop VLSI architectures and associated multiprocessor systems that implement these algorithms in real time. A new VLSI architecture for real-time 2-D digital signal processing applications is proposed. This architecture extends the concept of a single processing unit per chip to multiple units. Because its complexity and the number of computations per input do not increase as the size of the input data increases, it can process very large 2-D data in near real time. (A small sketch of the underlying block decomposition follows this entry.)

  • PDF
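
The key claim above, that per-unit work stays constant as the input grows, amounts to a block decomposition of the 2-D input across processing units. A hypothetical Python sketch of that decomposition; the paper's actual VLSI datapath is not reproduced, and `unit_op` is an illustrative stand-in for one unit's fixed-cost local computation.

```python
import numpy as np

def split_into_tiles(image, tile):
    """Decompose a 2-D input into fixed-size tiles, one per processing unit.
    Doubling the image area doubles the number of tiles, while the work
    per tile (per unit) stays constant -- the property claimed above."""
    h, w = image.shape
    return [image[y:y + tile, x:x + tile]
            for y in range(0, h, tile)
            for x in range(0, w, tile)]

def unit_op(tile):
    """Stand-in for one unit's local computation over its own tile."""
    return tile.mean()

tiles = split_into_tiles(np.arange(64.0).reshape(8, 8), tile=4)
results = [unit_op(t) for t in tiles]   # in hardware, these run in parallel
```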

Selecting Target Items and Estimating Volume Size for the Port Hinterland from the Transshipment Containers: Focusing on Trusted Processing (환적화물의 항만배후단지 유치 가능 품목 선정 및 물동량 추정: 수탁가공을 중심으로)

  • Kim, Geun-Sub
    • Journal of Korea Port Economic Association
    • /
    • v.37 no.4
    • /
    • pp.1-11
    • /
    • 2021
  • Port hinterlands have been experiencing difficulty in generating new cargo volume and high value-added activity. If transshipment cargo can be switched to trusted processing and attracted to the port hinterland, it can contribute to creating new cargo volume and high added value. This paper estimates the items and the volume that are appropriate to attract to the port hinterland and that can be switched to trusted processing, based on trade data and the manifests of transshipment containers. Fifty items were classified from the results of trusted-processing trade, and 33 of them were suggested as appropriate to attract to the port hinterland. The results show that a transshipment cargo volume 3.2 times larger than the total volume of trusted-processing trade in Korea is transshipped at Busan port. This study is the first to compare trade data with transshipment-container manifests, and it thus helps port authorities and the government attract firms to the port hinterland.

An Application of the SRTM Dataset in Inland Water Stage Measurement

  • Bhang, Kon Joon;Lee, Jin-Duk
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2014.06a
    • /
    • pp.83-84
    • /
    • 2014
  • For hydrologic applications, lake levels are very important. As a first step toward a remote-sensing-based approach, lake stage estimation was proposed using the SRTM data from February 2000, which provide a one-time snapshot. After several steps using contouring, masking, and CED, it was found that iterative contour fitting to a lake outline, guided by the operator's decision, provided the best result. If the lake is large enough, removing a constant offset due to the bias found by Bhang et al. (2007) might be useful for more accurate estimation. A lake-level snapshot using SRTM data with contouring could provide estimates within 0.5 m accuracy for large lakes (> $10 km^2$), and although the processing algorithm is complex, the accuracy was reliable. Overall, we confirmed that this study provides useful information for improving the quality of SAR-derived DEMs over water areas and, if extended, SAR imagery can bear fruit in water monitoring. (A minimal sketch of the contour-fitting idea follows this entry.)

  • PDF
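
A minimal sketch of the iterative contour-fitting idea, assuming a DEM array and a known lake-outline mask; the Jaccard overlap score and the candidate-level scan are illustrative stand-ins for the paper's operator-guided procedure.

```python
import numpy as np

def estimate_lake_stage(dem, lake_mask, levels):
    """Pick the elevation whose contour (dem <= level) best overlaps the
    known lake outline, scored by Jaccard (intersection over union)."""
    best_level, best_score = None, -1.0
    for level in levels:
        flooded = dem <= level
        inter = np.logical_and(flooded, lake_mask).sum()
        union = np.logical_or(flooded, lake_mask).sum()
        score = inter / union if union else 0.0
        if score > best_score:
            best_level, best_score = level, score
    return best_level

# Example: scan candidate stages in 0.5 m steps over a plausible range.
# stage = estimate_lake_stage(dem, lake_mask, np.arange(100.0, 120.0, 0.5))
```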

The Design and Implementation of a Reusable Viewer Component

  • Kim, Hong-Gab;Lim, Young-Jae;Kim, Kyung-Ok
    • Proceedings of the KSRS Conference
    • /
    • 2002.10a
    • /
    • pp.66-69
    • /
    • 2002
  • This article outlines the capabilities of a viewer component called GridViewer and demonstrates its reusability. GridViewer was designed for the image-display part of GIS or remote-sensing application software, and consequently it is particularly straightforward to couple GridViewer closely with access to very large images. Display is performed through a pyramid structure, which makes it possible to handle very large datasets, up to several gigabytes in size, within the limited capability of a PC. GridViewer is freed from the responsibility of handling various raster file formats by taking a grid coverage, an interface designed by the OGC to promote interoperability between implementations by data vendors and by software vendors providing analysis and grid-processing functionality. GridViewer differs from other such viewers by allowing clients to extend its functions and capabilities using the small set of methods originally implemented in it. We show its reusability and extensibility by applying it to application programs that perform various functions not originally supported by the GridViewer COM component. (A minimal sketch of the pyramid idea follows this entry.)

  • PDF
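
A minimal sketch of the display pyramid idea, assuming a single-band array; the function names are hypothetical, and the real component manages tiles on disk rather than whole arrays in memory.

```python
import numpy as np

def build_pyramid(image, min_side=64):
    """Build a resolution pyramid by repeated 2x2 averaging, so a viewer
    can read a coarse level instead of the full-resolution image."""
    levels = [image]
    while min(levels[-1].shape) > min_side:
        a = levels[-1]
        h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2
        a = a[:h, :w].astype(np.float32)
        down = (a[0::2, 0::2] + a[1::2, 0::2] +
                a[0::2, 1::2] + a[1::2, 1::2]) / 4.0
        levels.append(down.astype(image.dtype))
    return levels

def pick_level(levels, view_h, view_w):
    """Choose the coarsest level that still covers the viewport size."""
    for lvl in reversed(levels):
        if lvl.shape[0] >= view_h and lvl.shape[1] >= view_w:
            return lvl
    return levels[0]

pyramid = build_pyramid(np.zeros((4096, 4096), dtype=np.uint8))
view = pick_level(pyramid, 512, 512)   # display this level, not the original
```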

A Data Mining Approach for Selecting Bitmap Join Indices

  • Bellatreche, Ladjel;Missaoui, Rokia;Necir, Hamid;Drias, Habiba
    • Journal of Computing Science and Engineering
    • /
    • v.1 no.2
    • /
    • pp.177-194
    • /
    • 2007
  • Index selection is one of the most important decisions in the physical design of relational data warehouses. Indices significantly reduce the cost of processing complex OLAP queries, but they incur storage costs and induce maintenance overhead. Two main types of indices are available: mono-attribute indices (e.g., B-tree, bitmap, hash) and multi-attribute indices (join indices, bitmap join indices). To optimize star-join queries, which are characterized by joins between a large fact table and multiple dimension tables together with selections on the dimension tables, bitmap join indices are well adapted; they require less storage owing to their binary representation. However, selecting these indices is a difficult task due to the exponential number of candidate attributes to be indexed. Most index-selection approaches follow two main steps: (1) pruning the search space (i.e., reducing the number of candidate attributes) and (2) selecting indices using the pruned search space. In this paper, we first propose a data-mining-driven approach to prune the search space of the bitmap join index selection problem. As opposed to an existing technique that uses only the frequency of attributes in queries as a pruning metric, our technique also uses other parameters, such as the size of the dimension tables involved in the indexing process, the size of each dimension tuple, and the page size on disk. We then define a greedy algorithm to select bitmap join indices that minimizes processing cost while satisfying the storage constraint. Finally, to evaluate the efficiency of our approach, we compare it with some existing techniques. (A minimal sketch of the greedy step follows this entry.)
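
A minimal sketch of the greedy step under stated assumptions: each candidate index surviving the data-mining pruning carries an estimated query-cost saving and a storage size, and we pick by saving per byte until the storage budget is exhausted. The candidate names and numbers are illustrative, not the paper's cost model.

```python
def select_bji(candidates, storage_limit):
    """Greedy bitmap-join-index selection: candidates is a list of
    (name, estimated_cost_saving, storage_bytes); indices are chosen in
    decreasing order of saving per byte while the budget allows."""
    chosen, used = [], 0
    ranked = sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)
    for name, saving, size in ranked:
        if used + size <= storage_limit:
            chosen.append(name)
            used += size
    return chosen

# Illustrative candidates produced by the pruning step.
picked = select_bji([("idx_city", 900, 40), ("idx_month", 500, 10),
                     ("idx_brand", 300, 60)], storage_limit=60)
print(picked)   # ['idx_month', 'idx_city']
```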

Memory-Efficient Belief Propagation for Stereo Matching on GPU (GPU 에서의 고속 스테레오 정합을 위한 메모리 효율적인 Belief Propagation)

  • Choi, Young-Kyu;Williem, Williem;Park, In Kyu
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2012.11a
    • /
    • pp.52-53
    • /
    • 2012
  • Belief propagation (BP) is a commonly used global energy-minimization algorithm for solving the stereo matching problem in 3D reconstruction. However, it requires large memory bandwidth and a large data size. In this paper, we propose a novel memory-efficient BP algorithm for stereo matching on the graphics processing unit (GPU). The data size and transfer bandwidth are significantly reduced by storing only a part of the whole message; to maintain matching accuracy, the local messages are reconstructed using the shared memory available on the GPU. Experimental results show almost an order-of-magnitude reduction in global memory consumption and a 21% to 46% saving in memory bandwidth compared to the conventional algorithm. The implementation on a recent GPU obtains a 22.8-times speedup in execution time compared to execution on the CPU. (A minimal message-update sketch follows this entry.)

  • PDF
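
For context, a minimal min-sum BP message update as used in stereo matching; the paper's memory-saving trick (storing only part of each message and reconstructing the rest in shared memory) is not reproduced here, and the truncated-linear smoothness term is a common choice rather than necessarily the paper's.

```python
import numpy as np

def send_message(data_cost, incoming, smooth_lambda=1.0, trunc=2.0):
    """One min-sum message over D disparities: data_cost and each array in
    `incoming` (messages from the other neighbors) have length D; the
    smoothness penalty is lambda*|d - d'| truncated at `trunc`."""
    h = data_cost + sum(incoming)        # aggregate belief at the sender
    D = len(h)
    msg = np.empty(D)
    for d in range(D):
        penalty = smooth_lambda * np.minimum(np.abs(np.arange(D) - d), trunc)
        msg[d] = (h + penalty).min()     # minimize over the sender's disparity
    return msg - msg.min()               # normalize to keep values bounded

# Toy usage over D = 8 disparity levels and three neighbor messages.
D = 8
rng = np.random.default_rng(1)
m = send_message(rng.random(D), [np.zeros(D)] * 3)
```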