• Title/Summary/Keyword: Embedding method


Chaos and Correlation Dimension

  • Kim, Hung-Soo
    • Journal of Korea Water Resources Association / v.33 no.S1 / pp.37-47 / 2000
  • The method of delays is widely used for reconstructing chaotic attractors from experimental observations. Many studies have used a fixed delay time ${\tau}_d$ as the embedding dimension m is increased, but this is not necessarily the best choice for obtaining good convergence of the correlation dimension. Recently, some researchers have suggested that it is better to fix the delay time window ${\tau}_w$ instead. Unfortunately, ${\tau}_w$ cannot be estimated using either the autocorrelation function or the mutual information, and no standard procedure for estimating ${\tau}_w$ has yet emerged. However, a new technique, called the C-C method, can be used to estimate either ${\tau}_d$ or ${\tau}_w$. Using this method, we show that, for small data sets, fixing ${\tau}_w$, rather than ${\tau}_d$, does indeed lead to a more rapid convergence of the correlation dimension as the embedding dimension m is increased.

  • PDF
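The method of delays described in this abstract can be sketched in a few lines: reconstruct m-dimensional delay vectors from a scalar series, estimate the correlation sum C(r), and, when fixing the delay window ${\tau}_w$, shrink ${\tau}_d \approx {\tau}_w/(m-1)$ as m grows. The sketch below is generic (a Grassberger–Procaccia-style correlation sum, not the paper's C-C implementation; function names are illustrative):

```python
import numpy as np

def delay_embed(series, m, tau):
    """Reconstruct delay vectors (s_i, s_{i+tau}, ..., s_{i+(m-1)*tau})."""
    n = len(series) - (m - 1) * tau
    return np.array([series[i:i + (m - 1) * tau + 1:tau] for i in range(n)])

def correlation_sum(vectors, r):
    """Fraction of distinct point pairs closer than r (Grassberger-Procaccia C(r))."""
    diff = vectors[:, None, :] - vectors[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    iu = np.triu_indices(len(vectors), k=1)   # each pair counted once
    return float(np.mean(dist[iu] < r))

def tau_d_for_window(tau_w, m):
    """Fixing the delay window: the per-lag delay shrinks as m increases."""
    return max(1, round(tau_w / (m - 1)))
```

The correlation dimension is then estimated from the slope of log C(r) versus log r over increasing m.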

Reversible Watermarking Method Using Optimal Histogram Pair Shifting Based on Prediction and Sorting

  • Hwang, Hee-Joon;Kim, Hyoung-Joong;Sachnev, Vasiliy;Joo, Sang-Hyun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.4 no.4 / pp.655-670 / 2010
  • For a data hiding method to be reversible, both the original content and the hidden message must be completely recoverable. An important objective of this approach is to achieve high embedding capacity with low distortion. Using prediction errors is very effective for increasing the embedding capacity, and sorting the prediction errors helps to decrease distortion. In this paper, we present an improved reversible data hiding scheme using upgraded histogram shifting based on sorting the prediction errors. The new scheme is characterized by an algorithm that finds the optimum threshold values and manages the location map effectively. Experimental results compared with other methods demonstrate the superiority of the proposed method.
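The core histogram-shifting idea behind this abstract can be illustrated on a 1-D row of pixels: predict each pixel from its original left neighbor, let zero prediction errors each carry one bit, and shift positive errors up by one so the change is exactly invertible. This is a minimal generic sketch of prediction-error histogram shifting, not the paper's optimized threshold-search and sorting algorithm:

```python
def embed_bits(row, bits):
    """Embed bits into zero prediction errors; shift positive errors by +1.
    The predictor is the ORIGINAL left neighbor, which the decoder knows
    once the previous pixel is recovered, so the process is invertible."""
    out, k = [row[0]], 0
    for i in range(1, len(row)):
        e = row[i] - row[i - 1]
        if e == 0 and k < len(bits):
            e, k = bits[k], k + 1       # a zero error carries one bit (0 or 1)
        elif e > 0:
            e += 1                      # shift to make room for bit value 1
        out.append(row[i - 1] + e)
    return out, k                       # stego row, number of bits embedded

def extract_bits(stego):
    """Recover the original row and the embedded bits (truncate the bit
    list to the known message length; trailing entries are padding)."""
    rec, bits = [stego[0]], []
    for i in range(1, len(stego)):
        e = stego[i] - rec[-1]
        if e in (0, 1):
            bits.append(e)
            e = 0
        elif e > 1:
            e -= 1
        rec.append(rec[-1] + e)
    return rec, bits
```

The paper improves on this baseline by sorting the prediction errors and searching for optimal histogram-pair thresholds rather than using the single zero bin.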

Digital Watermarking Scheme based on SVD and Triplet (SVD 및 트리플릿 기반의 디지털 워터마킹 기법)

  • Park, Byung-Su;Chu, Hyung-Suk;An, Chong-Koo
    • The Transactions of The Korean Institute of Electrical Engineers / v.58 no.5 / pp.1041-1046 / 2009
  • In this paper, we propose a robust image watermarking scheme based on SVD (Singular Value Decomposition) and triplets. First, the original image is decomposed using a 3-level DWT, and the watermark sequence is embedded and extracted by modifying the singular values of the LL3 band. Since the matrix of singular values is not easily altered by various signal processing noises, the embedded watermark sequence can withstand such noise attacks. Nevertheless, this method is not robust against geometric transformations (such as rotation and cropping), because they change the matrix size; in that case, the watermark sequence cannot be extracted. To compensate for this weakness, a method is proposed that uses triplets to embed a barcode image watermark in the middle-frequency band. The barcode image watermark is generated from the pattern of the watermark sequence embedded in the LL3 band. With this method, the watermark information can be extracted from attacked images.
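Embedding a bit by modifying singular values can be sketched with a generic quantization rule: round the largest singular value of a block to an even or odd multiple of a step size depending on the bit. This is illustrative of the SVD-domain idea only; the paper's actual rule and its triplet construction are not reproduced here:

```python
import numpy as np

def embed_svd(block, bit, alpha=5.0):
    """Embed one bit by quantizing the largest singular value to an
    even/odd multiple of alpha (generic SVD-domain quantization)."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    k = int(np.round(s[0] / alpha))
    if k % 2 != bit:
        k += 1                      # move to the nearest multiple of the right parity
    s[0] = k * alpha
    return U @ np.diag(s) @ Vt      # reconstructed (watermarked) block

def extract_svd(block, alpha=5.0):
    """Read the bit back from the parity of the quantized largest singular value."""
    s = np.linalg.svd(block, compute_uv=False)
    return int(np.round(s[0] / alpha)) % 2
```

Because moderate noise perturbs the largest singular value only slightly, the parity (and hence the bit) survives; geometric transforms, however, change the block size and break the correspondence, which is the weakness the paper's triplet method addresses.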

Chaos and Correlation Dimension

  • Kim, Hung-Soo
    • Proceedings of the Korea Water Resources Association Conference / 2000.05a / pp.37-47 / 2000
  • The method of delays is widely used for reconstructing chaotic attractors from experimental observations. Many studies have used a fixed delay time ${\tau}_d$ as the embedding dimension m is increased, but this is not necessarily the best choice for obtaining good convergence of the correlation dimension. Recently, some researchers have suggested that it is better to fix the delay time window ${\tau}_w$ instead. Unfortunately, ${\tau}_w$ cannot be estimated using either the autocorrelation function or the mutual information, and no standard procedure for estimating ${\tau}_w$ has yet emerged. However, a new technique, called the C-C method, can be used to estimate either ${\tau}_d$ or ${\tau}_w$. Using this method, we show that, for small data sets, fixing ${\tau}_w$, rather than ${\tau}_d$, does indeed lead to a more rapid convergence of the correlation dimension as the embedding dimension m is increased.

  • PDF

A Comparative Study of Text analysis and Network embedding Methods for Effective Fake News Detection (효과적인 가짜 뉴스 탐지를 위한 텍스트 분석과 네트워크 임베딩 방법의 비교 연구)

  • Park, Sung Soo;Lee, Kun Chang
    • Journal of Digital Convergence / v.17 no.5 / pp.137-143 / 2019
  • Fake news is a form of misinformation that spreads rapidly on interactive media platforms such as social media, and its recent increase has caused many social problems. In this paper, we propose a method to detect such fake news. Previous research on fake news detection has mainly focused on text analysis. This research focuses on the network through which social media news spreads: it generates features with DeepWalk, a network embedding method, and classifies fake news using logistic regression. We conducted a fake news detection experiment using 211 news items and 1.2 million news diffusion network records. The results show that the accuracy of fake news detection using network embedding is 10.6% higher than that of text analysis. In addition, fake news detection combining text analysis and network embedding does not show an increase in accuracy over network embedding alone. The results of this study can be effectively applied to the detection of fake news that organizations spread online.
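The DeepWalk step in this pipeline turns the news-diffusion graph into "sentences" of node IDs via truncated random walks; those walks are then fed to a skip-gram model, and the resulting node vectors to a logistic regression classifier. A minimal walk generator is sketched below (adjacency-matrix input; parameter names are illustrative, not the paper's configuration):

```python
import numpy as np

def random_walks(adj, num_walks=10, walk_len=8, seed=0):
    """Generate truncated random walks over an adjacency matrix --
    the corpus that DeepWalk feeds to a skip-gram model."""
    rng = np.random.default_rng(seed)
    n = len(adj)
    walks = []
    for _ in range(num_walks):
        for start in range(n):          # every node starts num_walks walks
            walk = [start]
            for _ in range(walk_len - 1):
                nbrs = np.flatnonzero(adj[walk[-1]])
                if len(nbrs) == 0:      # dead end: stop this walk early
                    break
                walk.append(int(rng.choice(nbrs)))
            walks.append(walk)
    return walks
```

Each walk is then treated like a sentence of tokens (node IDs) for skip-gram training; the learned node vectors become the features for the downstream logistic regression.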

Assignment Semantic Category of a Word using Word Embedding and Synonyms (워드 임베딩과 유의어를 활용한 단어 의미 범주 할당)

  • Park, Da-Sol;Cha, Jeong-Won
    • Journal of KIISE / v.44 no.9 / pp.946-953 / 2017
  • Semantic role decision defines the semantic relationship between a predicate and its arguments in natural language processing (NLP) tasks. Both semantic role information and semantic category information are needed to make semantic role decisions. The Sejong Electronic Dictionary contains frame information that is used to determine semantic roles. In this paper, we propose a method to extend the Sejong Electronic Dictionary using word embedding and synonyms. The same experiment is performed using existing word-embedding vectors and retrofitting vectors. For words that do not appear in the Sejong Electronic Dictionary, the system performance of semantic category assignment is 32.19% and that of extended semantic category assignment is 51.14% using word embedding; with retrofitting vectors, the corresponding figures are 33.33% and 53.88%. We also show that it is helpful to extend the Sejong Electronic Dictionary by assigning semantic categories to new words that do not yet have them.
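At its simplest, assigning a semantic category to an out-of-dictionary word via word embedding is a nearest-neighbor lookup: inherit the category of the most cosine-similar dictionary word. A toy sketch (the vectors and category labels are invented; the paper additionally exploits synonyms and retrofitting):

```python
import numpy as np

def assign_category(vec, dict_vecs, dict_cats):
    """Return the category of the dictionary word most cosine-similar to vec."""
    v = vec / np.linalg.norm(vec)
    M = dict_vecs / np.linalg.norm(dict_vecs, axis=1, keepdims=True)
    return dict_cats[int(np.argmax(M @ v))]   # cosine = dot of unit vectors
```

Retrofitting adjusts the embedding vectors toward synonym neighbors in a lexicon before this lookup, which is why the retrofitted numbers in the abstract are slightly higher.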

A Word Embedding used Word Sense and Feature Mirror Model (단어 의미와 자질 거울 모델을 이용한 단어 임베딩)

  • Lee, JuSang;Shin, JoonChoul;Ock, CheolYoung
    • KIISE Transactions on Computing Practices / v.23 no.4 / pp.226-231 / 2017
  • Word representation, an important area in natural language processing (NLP) using machine learning, is a method that represents a word not as raw text but as a distinguishable symbol. Existing word embedding employs large corpora so that words occurring near each other in text receive similar representations. However, corpus-based word embedding requires multiple corpora because of word-occurrence frequency effects and the increasing number of words. In this paper, word embedding is done using dictionary definitions and semantic relationship information (hypernyms and antonyms). Words are trained using the feature mirror model (FMM), a modified Skip-Gram (Word2Vec). Words with similar senses have similar vectors; furthermore, it was possible to distinguish the vectors of antonyms.
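The dictionary-definition idea can be caricatured in a few lines: represent each word by the bag of content words in its definition, so words with similar senses get similar vectors. This is a drastic simplification of the feature mirror model, which actually trains such definition features with a modified Skip-Gram; the vocabulary and definitions below are invented:

```python
import numpy as np

def definition_vector(definition, vocab):
    """Bag-of-words vector of a word's dictionary definition (a simplified
    stand-in for FMM's trained definition features)."""
    v = np.zeros(len(vocab))
    for tok in definition.split():
        if tok in vocab:
            v[vocab[tok]] += 1
    return v

def cosine(a, b):
    """Cosine similarity between two definition vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Even this crude version shows the mechanism: words whose definitions share content words land close together, without needing any corpus co-occurrence statistics.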

Speaker verification system combining attention-long short term memory based speaker embedding and I-vector in far-field and noisy environments (Attention-long short term memory 기반의 화자 임베딩과 I-vector를 결합한 원거리 및 잡음 환경에서의 화자 검증 알고리즘)

  • Bae, Ara;Kim, Wooil
    • The Journal of the Acoustical Society of Korea / v.39 no.2 / pp.137-142 / 2020
  • Many studies based on I-vectors have been conducted in a variety of environments, from text-dependent short utterances to text-independent long utterances. In this paper, we propose a speaker verification system for far-field and noisy environments that combines an I-vector with Probabilistic Linear Discriminant Analysis (PLDA) and a speaker embedding from a Long Short Term Memory (LSTM) network with an attention mechanism. The Equal Error Rate (EER) of the LSTM model is 15.52% and that of the attention-LSTM model is 8.46%, an improvement of 7.06 percentage points. We show that the proposed method addresses the heuristic embedding definition of the existing extraction process. The EER of the I-vector/PLDA system alone is 6.18%, the best single-system performance; combined with the attention-LSTM based embedding, it is 2.57%, which is 3.61 percentage points below the baseline system, a relative improvement of 58.41%.
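Combining the two systems amounts to fusing their verification scores and re-measuring the Equal Error Rate. A generic sketch follows (equal-weight sum fusion and a threshold-sweep EER approximation; the paper's actual fusion rule and weights are not specified here):

```python
import numpy as np

def fuse(plda_scores, emb_scores, w=0.5):
    """Weighted-sum fusion of I-vector/PLDA scores and embedding scores."""
    return w * np.asarray(plda_scores) + (1 - w) * np.asarray(emb_scores)

def eer(target, impostor):
    """Approximate EER by sweeping thresholds over all observed scores."""
    target, impostor = np.asarray(target), np.asarray(impostor)
    best = 1.0
    for t in np.sort(np.concatenate([target, impostor])):
        frr = np.mean(target < t)        # false rejection rate at threshold t
        far = np.mean(impostor >= t)     # false acceptance rate at threshold t
        best = min(best, max(frr, far))  # EER lies where the two rates cross
    return float(best)
```

In practice the fusion weight would be tuned on a development set; the abstract's 2.57% EER corresponds to the fused system evaluated this way.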

An Investigation of TEM Specimen Preparation Methods from Powders Using a Centrifuge (원심분리기를 이용한 분말시료의 TEM용 시편 준비법 연구)

  • Jeung, Jong-Man;Lee, Young-Boo;Kim, Youn-Joong
    • Applied Microscopy / v.29 no.1 / pp.67-73 / 1999
  • It is practically hard to prepare good TEM specimens from powders embedded in epoxy for ion milling, because the milling rates of the powders and the epoxy differ greatly. In order to overcome this problem, we sought methods to increase the density of powders in the embedding epoxy without losing the adhesive strength between them. Powder density was considerably increased by employing a centrifuge for embedding, compared to conventional vacuum embedding. In addition, mixing powders of different sizes after sieving further enhanced the final density by allowing smaller particles to fill the gaps between larger particles. Ion milling of powders embedded by these methods produced thin specimens good enough for normal TEM work. TEM specimens of spherical, platy, and fibrous powders of submicron size were successfully prepared by this centrifuging method.

  • PDF

A Statistical Approach for Improving the Embedding Capacity of Block Matching based Image Steganography (블록 매칭 기반 영상 스테가노그래피의 삽입 용량 개선을 위한 통계적 접근 방법)

  • Kim, Jaeyoung;Park, Hanhoon;Park, Jong-Il
    • Journal of Broadcast Engineering / v.22 no.5 / pp.643-651 / 2017
  • Steganography is an information hiding technology, distinguished from cryptography in that it focuses on keeping the very existence of the hidden information from being detected by third parties, rather than protecting it from being decoded. In this paper, as an image steganography method that uses images as media, we propose a new block matching method that embeds information into the discrete wavelet transform (DWT) domain. The proposed method, based on a statistical analysis, reduces the loss of embedding capacity caused by unequal use of candidate blocks. It computes the variance of each candidate block, preserves candidate blocks with high frequency components, and reduces candidate blocks with low frequency components by compressing them with the k-means clustering algorithm. Compared with the previous block matching method, the proposed method can reconstruct secret images with similar PSNRs while embedding higher-capacity information.
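The candidate-reduction step can be sketched with a plain k-means: flatten the candidate blocks, cluster them, and keep one representative index per cluster, so many near-identical low-frequency blocks collapse together while distinctive high-variance blocks survive. This is an illustrative sketch (deterministic initialization for brevity; not the paper's exact variance-threshold procedure):

```python
import numpy as np

def reduce_candidates(blocks, k, iters=20):
    """Cluster candidate blocks with k-means and return one representative
    block index per cluster."""
    X = blocks.reshape(len(blocks), -1).astype(float)
    # deterministic init: k evenly spaced blocks (fancy indexing copies)
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)]
    for _ in range(iters):
        # assign each block to its nearest center
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    # representative = the block nearest to each final center
    reps = [int(np.argmin(((X - c) ** 2).sum(-1))) for c in centers]
    return sorted(set(reps))
```

Fewer but more diverse candidate blocks means each match index carries more payload per bit of side information, which is where the capacity gain over plain block matching comes from.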