• Title/Summary/Keyword: Partitioning methods

SOC Verification Based on WGL

  • Du, Zhen-Jun;Li, Min
    • Journal of Korea Multimedia Society / v.9 no.12 / pp.1607-1616 / 2006
  • The growing market for multimedia and digital signal processing requires significant data-path portions in SoCs, yet the common verification models are not well suited to SoCs. A novel model, WGL (Weighted Generalized List), is proposed, based on the general-list decomposition of polynomials, with three different weights and manipulation rules introduced to achieve node sharing and canonicity. Timing parameters and operations on them are also considered. Examples show that the word-level WGL is the only model that linearly represents the common word-level functions, and that the bit-level WGL is especially suitable for arithmetic-intensive circuits. The model is proved to be a uniform and efficient model for both bit-level and word-level functions. Based on the WGL model, a backward-construction logic-verification approach is then presented, which reduces the time and space complexity of multiplier verification to polynomial bounds (time complexity below $O(n^{3.6})$ and space complexity below $O(n^{1.5})$) without hierarchical partitioning. Finally, a construction methodology for word-level polynomials is presented to support complex high-level verification; it combines order computation with coefficient solving and adopts an efficient backward approach. Its construction complexity is much lower than that of existing methods: e.g., the construction time for multipliers grows with a power of less than 1.6 in the input word size, without increasing the maximal space required. The WGL model and the verification methods based on it are of theoretical and practical significance in SoC design.
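
For context, the word-level function of an unsigned $n \times n$ multiplier, the benchmark behind the complexity bounds above, has the standard polynomial form (a textbook identity, not a formula quoted from the paper):

$$F = X \cdot Y = \Big(\sum_{i=0}^{n-1} 2^i x_i\Big)\Big(\sum_{j=0}^{n-1} 2^j y_j\Big) = \sum_{i=0}^{n-1}\sum_{j=0}^{n-1} 2^{i+j}\, x_i y_j,$$

which is linear in word-level terms but expands to $n^2$ bit-level products; this is why a model such as WGL must represent both views compactly to avoid hierarchical partitioning.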

Improvement of SPIHT-based Document Encoding and Decoding System (SPIHT 기반 문서 부호화와 복호화 시스템의 성능 향상)

  • Jang, Joon;Lee, Ho-Suk
    • Journal of KIISE:Software and Applications / v.30 no.7_8 / pp.687-695 / 2003
  • In this paper, we present a document image compression system based on segmentation, quincunx downsampling, (5/3) wavelet lifting, and subband-oriented SPIHT coding. We reduce the coding time by adopting subband-oriented SPIHT coding and quincunx downsampling, and to increase the compression rate further we apply arithmetic coding to the bitstream of the SPIHT coder output. Finally, we present the reconstructed images for visual comparison, together with the compression rates and PSNR values under various scalar quantization methods.
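
As a point of reference for the lifting stage above, here is a minimal 1-D integer (5,3) lifting step in Python, in the style of the JPEG 2000 reversible filter; it illustrates the transform the abstract names, not the paper's full 2-D quincunx pipeline:

```python
# One level of the reversible (5,3) lifting transform on a 1-D signal.
# Illustrative only: the paper applies lifting after quincunx
# downsampling in 2-D; boundary handling here is simple sample reuse.
def lift_53(x):
    s = list(x[0::2])                 # even samples -> approximation band
    d = list(x[1::2])                 # odd samples  -> detail band
    # Predict step: detail = odd sample minus mean of neighbouring evens.
    for i in range(len(d)):
        right = s[i + 1] if i + 1 < len(s) else s[i]   # edge: repeat sample
        d[i] -= (s[i] + right) // 2
    # Update step: even sample absorbs a rounded quarter of the details.
    for i in range(len(s)):
        left = d[i - 1] if i > 0 else d[0]
        cur = d[i] if i < len(d) else d[-1]
        s[i] += (left + cur + 2) // 4
    return s, d                       # approximation, detail coefficients

print(lift_53([10, 12, 14, 13, 11, 9, 8, 8]))
```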

Analysis and Evaluation of Data Partitioning Methods for On-line Scaling in a Shared Nothing Database Cluster (비공유 데이터베이스 클러스터에서 온-라인 확장을 위한 데이터 분할 기법의 분석 및 평가)

  • Jang, Yong-Il;Lee, Chung-Ho;Lee, Jae-Dong;Bae, Hae-Young
    • Proceedings of the Korea Information Processing Society Conference / 2002.11c / pp.1859-1862 / 2002
  • Owing to its architecture, a shared-nothing database cluster faces problems such as dynamic changes in query patterns, load imbalance and hot spots caused by queries concentrating on particular data, and throughput limits as the number of users grows. To solve these problems, database clusters employ the recently proposed on-line scaling technique, whose effectiveness is strongly influenced by the scalability of the database. The data partitioning methods commonly used in cluster systems are round-robin partitioning, which distributes data in order of key values; hash partitioning, which distributes data using a hash function; range partitioning, which assigns data to each node by key range; and condition partitioning, which distributes data according to a conditional expression. This paper summarizes the characteristics of these four partitioning methods and, through a performance evaluation of each, identifies the method that scales best in a shared-nothing database cluster. The evaluation analyzes the volume of data migrated during scaling, the impact on query processing, CPU utilization, and the behavior observed while the on-line scaling technique runs; based on these results, the data partitioning method that is best suited to a shared-nothing database cluster and most scalable for on-line scaling is determined.
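
To make the four partitioning methods concrete, here is a minimal Python sketch; the node count, range boundaries, and record field are illustrative assumptions, not values from the paper:

```python
# Hypothetical 4-node cluster; all constants below are illustrative.
NODES = 4

def round_robin_partition(seq_no):
    """Rotate records across nodes (here keyed by insertion order)."""
    return seq_no % NODES

def hash_partition(key):
    """Place a record on the node selected by hashing its key."""
    return hash(key) % NODES

def range_partition(key, bounds=(250, 500, 750)):
    """Place a record on the node whose key range contains it."""
    for node, upper in enumerate(bounds):
        if key < upper:
            return node
    return len(bounds)                 # last node takes the open range

def condition_partition(record):
    """Place a record according to an arbitrary conditional expression."""
    return 0 if record["region"] == "domestic" else 1

print(hash_partition("cust-42"), range_partition(613))
```

On-line scaling stresses these schemes differently: naively changing NODES forces hash partitioning to migrate most records, whereas range partitioning can grow by splitting a single range, which is exactly the kind of data-movement cost the evaluation above measures.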

A Knowledge-Based Machine Vision System for Automated Industrial Web Inspection

  • Cho, Tai-Hoon;Jung, Young-Kee;Cho, Hyun-Chan
    • International Journal of Fuzzy Logic and Intelligent Systems / v.1 no.1 / pp.13-23 / 2001
  • Most current machine vision systems for industrial inspection were developed with one specific task in mind and are therefore inflexible in the sense that they cannot easily be adapted to other applications. In this paper, a general vision system framework is developed that can easily be adapted to a variety of industrial web inspection problems. The objective of this system is to automatically locate and identify "defects" on the surface of the material being inspected. The framework is designed to be robust, flexible, and as computationally simple as possible. To assure robustness, it employs a combined strategy of top-down and bottom-up control, hierarchical defect models, and uncertain reasoning methods. To make the framework flexible, a modular blackboard architecture is employed. To minimize computational complexity, the system incorporates a simple multi-thresholding segmentation scheme, a fuzzy-logic focus-of-attention mechanism for scene analysis operations, and a partitioning of knowledge that allows concurrent parallel processing during recognition.
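
The multi-thresholding step the abstract mentions is straightforward to illustrate; a minimal sketch with assumed threshold values (the paper's actual thresholds and defect models are not reproduced here):

```python
import numpy as np

def multi_threshold(gray, thresholds=(60, 120, 200)):
    """Label each pixel with the index of the intensity band it falls in;
    bands deviating from the web's nominal band become defect candidates."""
    labels = np.zeros(gray.shape, dtype=np.uint8)
    for t in sorted(thresholds):
        labels += (gray > t).astype(np.uint8)   # +1 per threshold passed
    return labels                               # values in 0..len(thresholds)

print(multi_threshold(np.array([[30, 90], [150, 220]])))
```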

Mesh Decimation for Polygon Rendering Based Real-Time 3-Axis NC Milling Simulation (실시간 3축 NC 밀링 시뮬레이션을 위한 메쉬 간략화 방법)

  • Joo, S.W.;Lee, S.H.;Park, K.H.
    • Korean Journal of Computational Design and Engineering / v.5 no.4 / pp.347-358 / 2000
  • The view dependency of typical spatial-partitioning-based NC simulation methods is overcome by a polygon rendering technique that generates polygons to represent the workpiece, enabling dynamic viewing transformations without reconstruction of the entire data structure. However, the polygon rendering technique still has difficulty achieving real-time simulation due to the unsatisfactory performance of current graphics devices. It is therefore necessary to develop a mesh decimation method that enables rapid rendering without loss of display quality. In this paper, we propose a new mesh decimation algorithm for a workpiece whose shape varies dynamically. In this algorithm, the z-map data for a given workpiece is divided into several regions, and a triangular mesh is first constructed for each region. Then, if any region is cut by the tool, its mesh is regenerated and decimated again. Since the range of mesh decimation is confined to a few regions, the reduced polygons for rendering can be obtained rapidly. Our method enables polygon-rendering-based NC simulation on computers equipped with a wider range of graphics cards.
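
A sketch of the region-limited update idea, under assumed data structures: the z-map is a plain 2-D array here, and triangulate_region()/decimate() are hypothetical placeholders for the paper's meshing and decimation steps:

```python
# Placeholder meshing helpers; the real algorithm builds and decimates a
# triangular mesh per region of the z-map.
def triangulate_region(zmap, key, size):
    return [("triangles-for-region", key)]

def decimate(mesh):
    return mesh

class RegionMesh:
    """Z-map split into square regions; only dirty regions are re-meshed."""
    def __init__(self, zmap, region_size):
        self.zmap = zmap
        self.size = region_size
        self.meshes = {}
        self.dirty = set()

    def cut(self, i, j, depth):
        """Tool removes material at cell (i, j); its region becomes dirty."""
        self.zmap[i][j] = min(self.zmap[i][j], depth)
        self.dirty.add((i // self.size, j // self.size))

    def update(self):
        """Re-mesh and re-decimate only the regions the tool touched."""
        for key in self.dirty:
            self.meshes[key] = decimate(
                triangulate_region(self.zmap, key, self.size))
        self.dirty.clear()
        return self.meshes
```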

Digital Watermarking using Wavelet Packet Transform for Remote Sensing Images (웨이블릿 패킷 변환을 이용한 원격 영상의 워터마킹 기법)

  • 한수영;이두수
    • Journal of the Institute of Electronics Engineers of Korea SP / v.40 no.5 / pp.365-370 / 2003
  • In this paper, a new watermarking algorithm based on the wavelet packet transform is proposed for remote sensing images, which contain many high-frequency components. The watermark is applied over all subbands, including the lowest-frequency band, and is embedded in the original image after the significant wavelet packet coefficients have been selected. To select the significant coefficients that carry the watermark, a zerotree algorithm is applied to the wavelet packet coefficients using a CPSO (Coefficient Partitioning Scanning Order). Experimental results show that the proposed algorithm offers better invisibility and robustness than conventional watermarking methods; in particular, it is more robust under high compression of remote sensing images.
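
Only the embedding rule is easy to show compactly; a sketch that assumes the wavelet packet coefficients are already computed and uses a magnitude threshold as a stand-in for the zerotree/CPSO significance selection:

```python
import numpy as np

def embed_watermark(coeffs, key=42, alpha=0.05, thresh=30.0):
    """Add a keyed pseudo-random watermark to significant coefficients.
    `thresh` stands in for the zerotree/CPSO selection in the paper."""
    rng = np.random.default_rng(key)
    mark = rng.standard_normal(coeffs.shape)
    significant = np.abs(coeffs) > thresh
    out = coeffs.copy()
    out[significant] += alpha * np.abs(coeffs[significant]) * mark[significant]
    return out, significant   # a detector re-derives `mark` from the key
```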

The Application of Quantum Yield of Nitrate Uptake to Estimate New Production in Well-Mixed Waters of the Yellow Sea: A Preliminary Result

  • Park, Myung-Gil;Shim, Jae-Hyung;Yang, Sung-Ryull
    • Journal of the Korean Society of Oceanography / v.37 no.1 / pp.45-50 / 2002
  • New production (NP) values in well-mixed waters of the Yellow Sea were estimated using two different methods and compared with each other: one based on the quantum yield model of nitrate uptake and the chlorophyll $a$-specific light absorption coefficient, and the other on the traditional $^{15}N$-labelled stable isotope uptake technique. The quantum yields of nitrate uptake were highly variable, ranging from 0.0001 to 0.04 mol $NO_3$ $Ein^{-1}$; the small values in this study might result either from only a small portion of the light energy absorbed by phytoplankton being partitioned into nitrate uptake, or from phytoplankton predominantly utilizing N sources other than nitrate (e.g., ammonium and/or urea). The NP estimates from the quantum yield model (0.54-8.47 nM $h^{-1}$) correlated well ($r^2$=0.67, p<0.1) with those obtained using the $^{15}NO_3$ uptake technique (0.01-4.93 nM $h^{-1}$). To improve the ability of this model to estimate NP values in the Yellow Sea, more data need to be accumulated over a variety of time and space scales.
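
For reference, quantum-yield models of this kind generally estimate nitrate uptake as the product of the quantum yield and the light absorbed by phytoplankton; in a generic form (the paper's exact parameterization may differ):

$$\rho_{NO_3} = \phi_{NO_3}\, \bar{a}^{*}_{ph}\, [\mathrm{Chl}\ a]\, E_{PAR},$$

where $\rho_{NO_3}$ is the nitrate uptake rate, $\phi_{NO_3}$ the quantum yield of nitrate uptake (mol $NO_3$ $Ein^{-1}$), $\bar{a}^{*}_{ph}$ the chlorophyll $a$-specific light absorption coefficient, $[\mathrm{Chl}\ a]$ the chlorophyll concentration, and $E_{PAR}$ the irradiance.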

Evaluation of Electrokinetic Removal of Heavy Metals from Tailing Soils

  • Kim, Soon-Oh;Kim, Kyoung-Woong;Yun, Seong-Taek
    • Proceedings of the Korean Society of Soil and Groundwater Environment Conference / 2002.09a / pp.40-43 / 2002
  • Electrokinetic remediation was studied for the removal of toxic heavy metals from tailing soils. This study emphasized the dependency of removal efficiency on heavy metal speciation, as determined by different extraction methods (sequential extraction, total digestion, and 0.1 N HCl extraction). The tailing soils examined showed different physicochemical characteristics with respect to initial pH, particle size distribution, and major mineral constituents, and contained high concentrations of the target metal contaminants in various forms. The electrokinetic removal efficiency of heavy metals was significantly influenced by their partitioning prior to treatment and by the pH of the tailing soils. Mobile and weakly bound fractions of heavy metals, such as the exchangeable fraction, were easily removed by electrokinetic treatment (removal efficiency above 90%), whereas immobile and strongly bound fractions, such as the organically bound and residual fractions, were not effectively removed (removal efficiency below 20%).

A modular function decomposition of multiple-valued logic functions using code assignment (코드할당에 의한 다치논리함수의 모듈러 함수분해에 관한 연구)

  • 최재석;박춘명;성형경;박승용;김형수
    • Journal of the Korean Institute of Telematics and Electronics C / v.35C no.7 / pp.78-91 / 1998
  • This paper presents modular design techniques for multiple-valued logic functions, comprising a function decomposition method and an input variable management method. The function decomposition method exploits the column multiplicity property of single-column variable partitioning. Because the number of identical modules increases, a simpler circuit design can be achieved with a single T-gate, which eliminates some of the control functions among the module library types. The input variable management method reduces the complexity of the input variables through a look-up table that assigns input variables to codes. As the number of sub-functions increases, the code length and the size of the code-assignment table grow; we identify situations in which input variables shared among sub-functions can be further reduced by a simplification technique. Applying the method to an example function demonstrates the superiority of the proposed techniques, with interconnections reduced by about 12% and T-gate counts by about 16% compared with existing approaches for realizing non-symmetric and irregular functions.
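
The column-multiplicity test that drives such decompositions can be sketched directly; a hypothetical ternary example (the radix, example function, and API are illustrative, not taken from the paper):

```python
from itertools import product

def column_multiplicity(f, n_vars, bound, radix=3):
    """Count distinct columns of the decomposition chart when the
    variables in `bound` index columns and the rest index rows."""
    free = [v for v in range(n_vars) if v not in bound]
    columns = set()
    for b in product(range(radix), repeat=len(bound)):
        col = []
        for a in product(range(radix), repeat=len(free)):
            assign = [0] * n_vars
            for v, val in zip(bound, b):
                assign[v] = val
            for v, val in zip(free, a):
                assign[v] = val
            col.append(f(tuple(assign)))
        columns.add(tuple(col))
    return len(columns)   # low multiplicity -> few codes, simpler modules

# Example: a 3-valued function of 3 variables, bound set {x2}.
f = lambda x: (x[0] + x[1] * x[2]) % 3
print(column_multiplicity(f, 3, bound=[2]))
```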

Combining Distributed Word Representation and Document Distance for Short Text Document Clustering

  • Kongwudhikunakorn, Supavit;Waiyamai, Kitsana
    • Journal of Information Processing Systems / v.16 no.2 / pp.277-300 / 2020
  • This paper presents a method for clustering short text documents, such as news headlines, social media statuses, or instant messages. Because such documents are typically short and sparse, an appropriate technique is required to discover their hidden knowledge. The objective of this paper is to identify the combination of document representation, document distance, and document clustering that yields the best clustering quality. Document representations are expanded with external knowledge sources represented by a distributed representation. To cluster documents, a K-means partitioning-based clustering technique is applied, with document similarity measured by word mover's distance. To validate the effectiveness of the proposed method, experiments were conducted comparing its clustering quality against several leading methods. The proposed method produced document clusters with higher precision, recall, F1-score, and adjusted Rand index on both real-world and standard data sets. Furthermore, manual inspection of the clustering results confirmed that the topics of each cluster are clearly reflected by its members.
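
A sketch of the distance-plus-partitioning combination under stated assumptions: pretrained embeddings loaded with gensim (the file name is illustrative), gensim's wmdistance for word mover's distance, and a small k-medoids loop standing in for the paper's K-means-style partitioning, since WMD yields only pairwise distances and no natural centroid:

```python
import numpy as np
from gensim.models import KeyedVectors

# Assumed pretrained vectors; wmdistance also requires gensim's optional
# WMD dependency to be installed.
wv = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)

def wmd_matrix(docs):
    """Pairwise word mover's distances between tokenized documents."""
    n = len(docs)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = wv.wmdistance(docs[i], docs[j])
    return dist

def k_medoids(dist, k, iters=20, seed=0):
    """Partition items around medoids using a precomputed distance matrix."""
    rng = np.random.default_rng(seed)
    medoids = list(rng.choice(len(dist), size=k, replace=False))
    for _ in range(iters):
        labels = dist[:, medoids].argmin(axis=1)
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members):              # keep the old medoid if empty
                within = dist[np.ix_(members, members)].sum(axis=1)
                medoids[c] = members[within.argmin()]
    return labels

docs = [s.lower().split() for s in ["markets rally on fed news",
                                    "stocks climb after rate decision",
                                    "team wins championship final"]]
print(k_medoids(wmd_matrix(docs), k=2))
```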