• Title/Summary/Keyword: embedding ratio

Search Results: 101

An Watermarking Method based on Singular Vector Decomposition and Vector Quantization using Fuzzy C-Mean Clustering (특이치 분해와 Fuzzy C-Mean(FCM) 클러스터링을 이용한 벡터양자화에 기반한 워터마킹 방법)

  • Lee, Byung-Hee;Kang, Hwan-Il;Jang, Woo-Seok
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2007.10d
    • /
    • pp.7-11
    • /
    • 2007
  • In this paper, an image hiding method that achieves a good compression ratio and satisfactory image quality for both the cover image and the embedded image is introduced; it is based on singular value decomposition and vector quantization using fuzzy c-means clustering. Experimental results show that the embedded image is invisible and robust to various serious attacks.

A Study on the Minimization of Layout Area for FPGA

  • Yi, Cheon-Hee
    • Journal of the Semiconductor & Display Technology
    • /
    • v.9 no.2
    • /
    • pp.15-20
    • /
    • 2010
  • This paper deals with minimizing the layout area of FPGA designs. FPGAs are becoming increasingly important in the design of ASICs since they provide both large-scale integration and user-programmability. This paper describes a method to obtain a tight bound on the worst-case increase in area when drivers are introduced along many long wires in a layout. The area occupied by a minimum-area embedding for a circuit can depend on the aspect ratio of the bounding rectangle of the layout. This paper presents separator-based area-optimal embeddings for FPGA graphs in rectangles of several aspect ratios, which solve the longest-path problem in the constraint graph.

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.105-122
    • /
    • 2019
  • Dimensionality reduction is one of the methods used to handle big data in text mining. For dimensionality reduction we should consider the density of the data, which has a significant influence on the performance of sentence classification. Data of higher dimensionality requires more computation and can eventually lead to high computational cost and overfitting in the model. Thus, a dimension-reduction process is necessary to improve the performance of the model. Diverse methods have been proposed, ranging from merely lessening noise in the data, such as misspellings or informal text, to incorporating semantic and syntactic information. In addition, the representation and selection of text features affect the performance of the classifier in sentence classification, one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data from the observation space. Existing methods utilize various algorithms for dimensionality reduction, such as feature extraction and feature selection. In addition to these algorithms, word embeddings, which learn low-dimensional vector-space representations of words that capture semantic and syntactic information, are also utilized. To improve performance, recent studies have suggested modifying the word dictionary according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations: once the feature-selection algorithm identifies unimportant words, we assume that words similar to them also have no impact on sentence classification. This study proposes two ways to achieve more accurate classification: selective word elimination under specific rules, and construction of word embeddings based on Word2Vec.
To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words with comparatively low information gain values from the raw text and form word embeddings. Second, we additionally select words that are similar to the words with low information gain values and build word embeddings without them. Finally, the filtered text and word embeddings are applied to two deep learning models: a Convolutional Neural Network and an Attention-Based Bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets, and classifies each dataset using the deep learning models. Reviews that received more than five helpful votes and whose ratio of helpful votes exceeded 70% were classified as helpful reviews. Yelp only shows the number of helpful votes, so we extracted 100,000 reviews that received more than five helpful votes by random sampling from among 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters from the text data, was applied to each dataset. To evaluate the proposed methods, we compared their performance with that of Word2Vec and GloVe word embeddings that used all the words. One of the proposed methods performed better than the embeddings with all the words: removing unimportant words improved performance, but removing too many words lowered it. Future research should consider diverse preprocessing approaches and an in-depth analysis of word co-occurrence for measuring similarity values among words. Also, we only applied the proposed method with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo can be combined with the proposed elimination methods, making it possible to explore combinations between word-embedding methods and elimination methods.
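The two-stage selection this abstract describes, ranking words by information gain and then also removing words whose vectors are close to the low-gain words, can be sketched in plain Python. The toy corpus, labels, thresholds, and the tiny hand-made embedding table standing in for Word2Vec vectors below are all invented for illustration and are not taken from the paper:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(word, docs, labels):
    """IG(word) = H(class) - weighted H(class | word present / absent)."""
    present = [y for d, y in zip(docs, labels) if word in d]
    absent = [y for d, y in zip(docs, labels) if word not in d]
    n = len(labels)
    cond = sum(len(p) / n * entropy(p) for p in (present, absent) if p)
    return entropy(labels) - cond

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Toy corpus: each document is a set of tokens; labels are sentiment classes.
docs = [{"great", "film"}, {"great", "acting"}, {"great", "movie"},
        {"boring", "plot"}, {"boring", "film"}]
labels = [1, 1, 1, 0, 0]
vocab = set().union(*docs)
ig = {w: information_gain(w, docs, labels) for w in vocab}

# Step 1: drop words whose information gain falls below a (hypothetical) cutoff.
low_ig = {w for w in vocab if ig[w] < 0.1}          # -> {"film"}

# Tiny hand-made vectors standing in for Word2Vec embeddings.
emb = {"great": [0.0, 1.0], "film": [1.0, 0.1], "movie": [0.9, 0.2],
       "acting": [0.6, 0.5], "boring": [0.1, -1.0], "plot": [-0.2, 0.4]}

# Step 2: also drop words whose vectors are close to a low-IG word.
also_drop = {w for w in vocab - low_ig
             if any(cosine(emb[w], emb[u]) > 0.9 for u in low_ig)}  # -> {"movie"}
kept = vocab - low_ig - also_drop
```

Note how "movie" survives the information-gain cutoff on its own but is removed in the second step because its vector is close to the low-gain word "film"; on real data both the cutoff and the similarity threshold would be tuned against validation performance.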

A Benefit-Cost Analysis on the DSM Programs Part I (DSM 프로그램의 비용효과 분석 I)

  • Hwang, Sung-Wook;Kim, Bal-Ho;Kim, Jung-Hoon;Park, Jong-Bae
    • Proceedings of the KIEE Conference
    • /
    • 2000.11a
    • /
    • pp.46-48
    • /
    • 2000
  • This paper presents an approach to B/C analysis amenable to evaluating the impact of DSM programs, especially strategic conservation programs and load management programs. The proposed approach, embedding the existing B/C analyses, is applicable to the new electricity market. Case studies show the B/C ratio and the avoided cost due to the impact of DSM programs.

A Benefit-Cost Analysis on the DSM Programs Part II (DSM 프로그램의 비용효과 분석 II)

  • Park, Jong-Bae;Kim, Jin-Ho;Hwang, Sung-Wook;Kim, Bal-Ho
    • Proceedings of the KIEE Conference
    • /
    • 2000.11a
    • /
    • pp.190-192
    • /
    • 2000
  • This paper presents an approach to B/C analysis amenable to evaluating the impact of DSM programs, especially strategic conservation programs and load management programs. The proposed approach, embedding the existing B/C analyses, is applicable to the new electricity market. Case studies show the B/C ratio and the avoided cost due to the impact of DSM programs.
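The B/C ratio that both parts of this study report is, in essence, discounted benefits (avoided supply costs) over discounted program costs. A minimal sketch under assumed cash flows and a hypothetical 5% discount rate, neither taken from the paper:

```python
def npv(cashflows, rate):
    """Present value of a yearly cash-flow list; year 0 comes first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def bc_ratio(avoided_costs, program_costs, discount_rate=0.05):
    """B/C ratio: discounted avoided costs over discounted program costs."""
    return npv(avoided_costs, discount_rate) / npv(program_costs, discount_rate)

# Hypothetical load-management program: up-front cost, then three years of
# avoided supply cost (arbitrary units).
ratio = bc_ratio([0, 120, 120, 120], [200, 30, 30, 30])
# A ratio above 1 indicates the DSM program is cost-effective.
```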

Area-Optimization for VLSI by CAD (CAD에 의한 VLSI 설계를 위한 면적 최적화)

  • Yi, Cheon-Hee
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.24 no.4
    • /
    • pp.708-712
    • /
    • 1987
  • This paper deals with minimizing the layout area of VLSI designs. A long wire in a VLSI layout causes delay, which can be reduced by using a driver, but there can be a significant area increase when many drivers are introduced in a layout. This paper describes a method to obtain a tight bound on the worst-case increase in area when drivers are introduced along many long wires in a layout. The area occupied by a minimum-area embedding for a circuit can depend on the aspect ratio of the bounding rectangle of the layout. This paper presents separator-based area-optimal embeddings for VLSI graphs in rectangles of several aspect ratios.

Rate-Distortion Optimized Zerotree Image Coding using Wavelet Transform (웨이브렛 변환을 이용한 비트율-왜곡 최적화 제로트리 영상 부호화)

  • Lee, Byung-Ki;Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.3
    • /
    • pp.101-109
    • /
    • 2004
  • In this paper, we propose an efficient algorithm for wavelet-based still image coding that utilizes rate-distortion (R-D) theory. Since conventional tree-structured image coding schemes do not consider rate-distortion theory properly, they show reduced coding performance. In this paper, we apply a rate-distortion optimized embedding (RDE) operation to the set partitioning in hierarchical trees (SPIHT) algorithm. In this algorithm, we use the rate-distortion slope as the criterion for the coding order of wavelet coefficients in the SPIHT lists. We also describe modified set partitioning and rate-distortion optimized list scan methods. Experimental results demonstrate that the proposed method outperforms both the SPIHT algorithm and the rate-distortion optimized embedding algorithm with respect to PSNR (peak signal-to-noise ratio) performance.
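The coding-order criterion this abstract describes, spending bits where the rate-distortion slope (distortion reduction per bit) is largest, can be illustrated with a toy sort. The candidate list and its rate/distortion numbers below are hypothetical, not from the paper:

```python
def rd_order(candidates):
    """Sort coding candidates by rate-distortion slope (distortion drop per
    bit), descending, so the bit budget goes where distortion falls fastest."""
    return sorted(candidates, key=lambda c: c["dD"] / c["dR"], reverse=True)

# Hypothetical wavelet coefficients: bits needed vs. squared-error reduction.
cands = [
    {"id": "c0", "dR": 4, "dD": 1024.0},  # slope 256
    {"id": "c1", "dR": 2, "dD": 900.0},   # slope 450
    {"id": "c2", "dR": 3, "dD": 300.0},   # slope 100
]
order = [c["id"] for c in rd_order(cands)]  # "c1" is coded first
```

In the actual RDE/SPIHT setting the slope is estimated per coefficient at each bit-plane pass rather than known in advance, but the ordering principle is the same.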

A Novel Approach of Feature Extraction for Analog Circuit Fault Diagnosis Based on WPD-LLE-CSA

  • Wang, Yuehai;Ma, Yuying;Cui, Shiming;Yan, Yongzheng
    • Journal of Electrical Engineering and Technology
    • /
    • v.13 no.6
    • /
    • pp.2485-2492
    • /
    • 2018
  • The rapid development of large-scale integrated circuits has brought great challenges to circuit testing and diagnosis, and due to the lack of exact fault models, inaccurate analog component tolerances, and some nonlinear factors, analog circuit fault diagnosis is still regarded as an extremely difficult problem. To cope with the difficulty of extracting fault features effectively from masses of original data in the nonlinear continuous analog circuit output signal, a novel approach to feature extraction and dimension reduction for analog circuit fault diagnosis based on wavelet packet decomposition, the local linear embedding algorithm, and the clone selection algorithm (WPD-LLE-CSA) is proposed. The proposed method can identify faulty components in complicated analog circuits with an accuracy above 99%. Compared with existing feature extraction methods, the proposed method significantly reduces the number of features in less time while maintaining a high diagnosis rate; the ratio of dimensionality reduction is also discussed. Several groups of experiments are conducted to demonstrate the efficiency of the proposed method.

Robust Digital Watermarking for High-definition Video using Steerable Pyramid Transform, Two Dimensional Fast Fourier Transform and Ensemble Position-based Error Correcting

  • Jin, Xun;Kim, JongWeon
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.7
    • /
    • pp.3438-3454
    • /
    • 2018
  • In this paper, we propose a robust blind watermarking scheme for high-definition video. In the embedding process, the luminance component of each frame is transformed by the 2-dimensional fast Fourier transform (2D FFT). A secret key is used to generate a matrix of random numbers for the security of the watermark information. The matrix is transformed by the inverse steerable pyramid transform (SPT). We embed the watermark into the low- and mid-frequency 2D FFT coefficients using the transformed matrix. In the extraction process, the 2D FFT coefficients of each frame and the transformed matrix are each transformed by the SPT to produce two oriented sub-bands, and we extract the watermark from each frame by cross-correlating them. If a video is degraded by attacks, the extracted watermarks contain errors, so we use an ensemble position-based error-correcting algorithm to estimate and correct them. The experimental results show that the proposed watermarking algorithm is imperceptible and robust against various attacks. After embedding 64 bits of watermark into each frame, the average peak signal-to-noise ratio between original and embedded frames is 45.7 dB.
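The PSNR figure quoted above (45.7 dB between original and embedded frames) follows the standard definition, 10·log10(peak²/MSE). A minimal sketch with a hypothetical four-pixel frame, whereas the actual scheme operates on full HD luminance frames:

```python
import math

def psnr(original, embedded, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-size pixel lists."""
    mse = sum((a - b) ** 2 for a, b in zip(original, embedded)) / len(original)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * math.log10(peak ** 2 / mse)

# Hypothetical tiny "frame" before and after watermark embedding.
frame = [100, 150, 200, 250]
marked = [101, 149, 201, 250]
quality = psnr(frame, marked)  # small perturbations give a high PSNR
```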

An Watermarking Method Based on Singular Vector Decomposition and Vector Quantization Using Fuzzy C-Mean Clustering (특이치 분해와 Fuzzy C-Mean(FCM) 군집화를 이용한 벡터양자화에 기반한 워터마킹 방법)

  • Lee, Byung-Hee;Jang, Woo-Seok;Kang, Hwan-Il
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.17 no.7
    • /
    • pp.964-969
    • /
    • 2007
  • In this paper, we propose an image watermarking method that achieves a good compression ratio and satisfactory image quality for both the cover image and the embedded image. The method is based on singular value decomposition and vector quantization using fuzzy c-means clustering. Experimental results show that the embedded watermark is invisible and robust to various serious attacks. The advantage of this watermarking method is that compression and watermarking for copyright protection are achieved simultaneously.
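As a rough illustration of quantization-based embedding in the spirit of this vector-quantization approach (this is not the paper's FCM codebook method): a scalar such as a singular value can carry one watermark bit by being snapped to an even or odd multiple of a quantization step. The step size and sample value below are invented:

```python
def embed_bit(value, bit, step=8.0):
    """Snap value to an even multiple of step for bit 0, odd for bit 1
    (a stand-in for modifying a singular value to carry a watermark bit)."""
    q = round(value / step)
    if q % 2 != bit:
        q += 1  # move to the nearest multiple of the right parity
    return q * step

def extract_bit(value, step=8.0):
    """Recover the bit from the parity of the quantized value."""
    return int(round(value / step)) % 2

marked = embed_bit(103.7, 1)  # the scalar now carries a single bit
```

Larger steps survive stronger attacks (the value can drift further before the parity flips) at the cost of a larger distortion, which mirrors the robustness/invisibility trade-off the abstract describes.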