• Title/Summary/Keyword: 코사인 (cosine)

Steganalysis of Content-Adaptive Steganography using Markov Features for DCT Coefficients (DCT 계수의 마코프 특징을 이용한 내용 적응적 스테가노그래피의 스테그분석)

  • Park, Tae Hee;Han, Jong Goo;Eom, Il Kyu
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.8 / pp.97-105 / 2015
  • Content-adaptive steganography methods embed secret messages in hard-to-model regions of covers, such as complicated textures or noisy areas. Content-adaptive steganalysis methods often need high-dimensional features to capture the more subtle local dependencies among adjacent pixels. However, these methods entail high computational complexity and depend on the location of the hidden message and the distortion metric exploited. In this paper, we propose an improved steganalysis method for content-adaptive steganography that enhances the detection rate with a small number of features. We first show that features derived from the differences between DCT coefficients are useful for analyzing content-adaptive steganography methods, and then present a feature extraction method using first-order Markov transition probabilities of these differences. The extracted features are used as the input of an ensemble classifier. Experimental results show that the proposed method outperforms previous schemes in terms of detection rate and accuracy on various content-adaptive stego images, despite using only a small number of features.
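
A minimal sketch of this type of feature, assuming a 2-D array of quantized block-DCT coefficients is already available; the horizontal-only difference direction and the truncation threshold T are illustrative choices, not the paper's exact design:

```python
import numpy as np

def markov_features(dct_coeffs, T=3):
    """First-order Markov transition probabilities of horizontal
    differences between neighbouring DCT coefficients.

    dct_coeffs : 2-D array of quantized DCT coefficients.
    T          : truncation threshold; differences are clipped to [-T, T].
    Returns a (2T+1) x (2T+1) transition matrix flattened into a feature vector.
    """
    diff = dct_coeffs[:, :-1] - dct_coeffs[:, 1:]          # horizontal differences
    diff = np.clip(diff, -T, T)
    cur, nxt = diff[:, :-1].ravel(), diff[:, 1:].ravel()   # adjacent difference pairs
    M = np.zeros((2 * T + 1, 2 * T + 1))
    for a, b in zip(cur, nxt):
        M[int(a) + T, int(b) + T] += 1
    M /= np.maximum(M.sum(axis=1, keepdims=True), 1)       # row-wise transition probabilities
    return M.ravel()                                       # (2T+1)^2 = 49 features for T = 3

# Random "coefficients" standing in for a real cover or stego image
features = markov_features(np.random.randint(-10, 11, size=(512, 512)))
```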

Evaluation Model for Gap Analysis Between NCS Competence Unit Element and Traditional Curriculum (NCS 능력단위 요소와 기존 교육과정 간 갭 분석을 위한 평가모델)

  • Kim, Dae-kyung;Kim, Chang-Bok
    • Journal of Advanced Navigation Technology / v.19 no.4 / pp.338-344 / 2015
  • The National Competency Standards (NCS) systematize and standardize the skills required to perform a job. The NCS provides learning modules that are specified and standardized by competence unit element, the unit of a specific job competency. Existing curricula are subjected to gap analysis against competence unit elements for use in education and training. Such gap analysis has so far been performed subjectively by experts, which brings subjective decisions, lack of accuracy, and temporal and spatial inefficiency caused by psychological factors. This paper proposes an automated evaluation model to resolve the problems of subjective evaluation. The model uses index term extraction, term frequency-inverse document frequency (TF-IDF) for feature extraction, and the cosine similarity algorithm for gap analysis between the existing curriculum and competence unit elements, and presents a similarity mapping table between them. The evaluation model should be further complemented by algorithms improved in terms of structural characteristics and speed.
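
A minimal sketch of the TF-IDF and cosine-similarity pipeline the abstract describes, using scikit-learn; the example texts are placeholders for the index terms extracted from real curricula and NCS competence unit elements:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical index-term texts: one per curriculum subject and per NCS competence unit element
curriculum = ["data structures and algorithms", "computer networks and protocols"]
ncs_elements = ["design network architecture", "implement data structures"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(curriculum + ncs_elements)    # TF-IDF feature vectors
cur_vecs, ncs_vecs = X[:len(curriculum)], X[len(curriculum):]

# Similarity mapping table: rows = curriculum subjects, columns = competence unit elements
mapping_table = cosine_similarity(cur_vecs, ncs_vecs)
for i, subject in enumerate(curriculum):
    for j, element in enumerate(ncs_elements):
        print(f"{subject!r} vs {element!r}: {mapping_table[i, j]:.2f}")
```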

An Orthogonal Approximate DCT for Fast Image Compression (고속 영상 압축을 위한 근사 이산 코사인 변환)

  • Kim, Seehyun
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.10 / pp.2403-2408 / 2015
  • For image data, the discrete cosine transform (DCT) has energy compaction capability comparable to the optimal Karhunen-Loeve transform (KLT). Hence the DCT has been widely adopted in various image and video compression standards such as JPEG, MPEG-2, and MPEG-4. Recently, several approximate DCTs have been reported that can be computed much faster than the original DCT because their coefficients are either zero or powers of 2. Although their energy compaction is slightly degraded, such approximate DCTs can be utilized in real-time implementations of image and video compression applications. In this paper, an approximate 8-point DCT that contains 17 non-zero power-of-2 coefficients and offers energy compaction comparable to the DCT is proposed. Transform coding experiments with several images show that the proposed transform outperforms previously published works.
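
The paper's specific 17-coefficient transform is not reproduced here; the sketch below only illustrates the general idea of approximating the 8-point DCT with zero or power-of-2 entries and comparing energy compaction (naive rounding, so exact orthogonality is not guaranteed, unlike the proposed transform):

```python
import numpy as np

def dct_matrix(N=8):
    """Exact N-point DCT-II matrix (orthonormal)."""
    n, k = np.meshgrid(np.arange(N), np.arange(N))
    C = np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / N)

def power_of_two_approx(C):
    """Replace each entry with 0 or the nearest signed power of 2,
    then rescale rows to unit norm. This naive rounding does NOT
    guarantee orthogonality, unlike the transform proposed in the paper."""
    T = np.zeros_like(C)
    nz = np.abs(C) > 0.17                                   # small entries become zero (illustrative cut-off)
    T[nz] = np.sign(C[nz]) * 2.0 ** np.round(np.log2(np.abs(C[nz])))
    return T / np.linalg.norm(T, axis=1, keepdims=True)

C = dct_matrix()
T = power_of_two_approx(C)
x = np.cumsum(np.random.randn(8))                           # correlated test signal
yC, yT = C @ x, T @ x
print(np.sum(yC[:4] ** 2) / np.sum(yC ** 2))                # energy fraction in the 4 lowest bands (exact DCT)
print(np.sum(yT[:4] ** 2) / np.sum(yT ** 2))                # same measure for the power-of-2 approximation
```

In practical designs the row-norm scaling is kept as a separate diagonal matrix absorbed into quantization, so the core transform needs only shifts and additions.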

Semantic Document-Retrieval Based on Markov Logic (마코프 논리 기반의 시맨틱 문서 검색)

  • Hwang, Kyu-Baek;Bong, Seong-Yong;Ku, Hyeon-Seo;Paek, Eun-Ok
    • Journal of KIISE: Computing Practices and Letters / v.16 no.6 / pp.663-667 / 2010
  • A simple approach to semantic document retrieval is to measure document similarity based on the bag-of-words representation, e.g., cosine similarity between two document vectors. However, such a syntactic method hardly considers the semantic similarity between documents, often producing semantically unsound search results. We circumvent this problem by combining supervised machine learning techniques with ontology information based on Markov logic. Specifically, Markov logic networks are learned from similarity-tagged documents together with an ontology representing the diverse relationships among words. The learned Markov logic networks, the ontology, and the training documents are applied to the semantic document-retrieval task by inferring similarities between a query document and the training documents. Through experimental evaluation on real-world question-answering data, the proposed method has been shown to outperform the simple cosine-similarity-based approach in terms of retrieval accuracy.
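
The Markov logic inference itself is beyond a short snippet; as a much simpler illustration of how ontology knowledge can influence a cosine-based score (not the paper's method), one might expand each document with ontology-related terms before vectorizing:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy ontology: maps a term to related terms (synonyms/hypernyms); purely illustrative
ontology = {"car": ["automobile", "vehicle"], "price": ["cost"]}

def expand(doc):
    """Append ontology-related terms so that semantically related documents
    share vocabulary even when their surface words differ."""
    words = doc.lower().split()
    extra = [rel for w in words for rel in ontology.get(w, [])]
    return " ".join(words + extra)

query = "car price"
corpus = ["automobile cost estimate", "weather forecast for tomorrow"]

vec = CountVectorizer()
X = vec.fit_transform([expand(query)] + [expand(d) for d in corpus])
print(cosine_similarity(X[0], X[1:]))   # the first document now scores higher than with plain bag-of-words
```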

The Redundancy Reduction Using Fuzzy C-means Clustering and Cosine Similarity on a Very Large Gas Sensor Array for Mimicking Biological Olfaction (생물학적 후각 시스템을 모방한 대규모 가스 센서 어레이에서 코사인 유사도와 퍼지 클러스터링을 이용한 중복도 제거 방법)

  • Kim, Jeong-Do;Kim, Jung-Ju;Park, Sung-Dae;Byun, Hyung-Gi;Persaud, K.C.;Lim, Seung-Ju
    • Journal of Sensor Science and Technology / v.21 no.1 / pp.59-67 / 2012
  • It has been reported that the latest sensor technology allows a 65,536-element conductive polymer sensor array to be made with broad but overlapping selectivity to different families of chemicals, emulating the characteristics found in biological olfaction. However, such excessive redundancy brings considerable error and risk, as well as an inordinate amount of computation time and local minima in signal processing, e.g., in neural networks. In this paper, we propose a new method that reduces the number of sensors used for analysis by removing unstable sensors and reducing redundancy between sensors with the cosine similarity measure, and that selects representative sensors using the FCM (Fuzzy C-Means) algorithm. Only the representative sensors are then used in the analysis. In addition, we introduce the DWT (Discrete Wavelet Transform) for time-domain data compression as a preprocessing step. Through experimental trials, we performed a comparative analysis of gas sensor data with and without redundancy reduction, confirming the feasibility and superiority of the proposed methods.
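
A compact sketch of the two steps described above: cosine-similarity-based redundancy removal followed by a small fuzzy C-means pass that picks a representative sensor per cluster. The array size, similarity threshold, cluster count, and the plain-NumPy FCM are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))         # 200 hypothetical sensors x 50 time samples each

# 1) Redundancy removal: if two response vectors have cosine similarity
#    above a threshold, keep only the first of them.
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
sim = Xn @ Xn.T
keep = []
for i in range(len(X)):
    if all(sim[i, j] < 0.95 for j in keep):
        keep.append(i)
X = X[keep]

# 2) Fuzzy C-means to group the remaining sensors; the sensor closest
#    to each cluster centre serves as that cluster's representative.
def fcm(X, c=5, m=2.0, iters=100):
    U = rng.random((len(X), c)); U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]                 # weighted cluster centres
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))                               # membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

centers, U = fcm(X)
representatives = [int(np.argmin(np.linalg.norm(X - c, axis=1))) for c in centers]
print(len(keep), "sensors kept after redundancy removal; representatives:", representatives)
```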

Comparisons of Single Photo Resection Algorithms for the Determination of Exterior Orientation Parameters (단사진의 외부표정요소 결정을 위한 후방교회법 알고리즘의 비교)

  • Kim, Eui Myoung;Seo, Hong Deok
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.4 / pp.305-315 / 2020
  • The purpose of this study is to compare single photo resection algorithms, which determine the exterior orientation parameters used in fields such as photogrammetry, computer vision, and robotics. To this end, experimental data were generated by simulating terrain based on cameras used in aerial and close-range photogrammetry. In experiments with an aerial camera taking nearly vertical photographs, the exterior orientation parameters could be determined using three ground control points, but the Procrustes algorithm was sensitive to the configuration of the ground control points. In experiments with a close-range amateur camera whose attitude angles change significantly, that algorithm remained sensitive to the configuration of the ground control points, and the other algorithms required at least six ground control points. Through the experiments with the two types of cameras, it was found that cosine-law-based spatial resection performs comparably to the traditional photogrammetric algorithm while requiring few iterations and no explicit initial values.
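
A rough numerical sketch of the cosine-law resection idea: the law of cosines links the unknown camera-to-point ranges to the measured inter-ray angles, and the recovered ranges give back the camera position. The control points, simulated camera, and least-squares solver are illustrative, not the algorithm evaluated in the paper:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical ground control points (world coordinates, metres)
P = np.array([[10.0, 0.0, 0.0],
              [0.0, 15.0, 2.0],
              [-8.0, -5.0, 1.0]])
C_true = np.array([2.0, 3.0, 50.0])            # camera centre used only to simulate the rays

u = P - C_true                                 # unit rays from the camera to each control point
u /= np.linalg.norm(u, axis=1, keepdims=True)  # (in practice derived from image measurements)

pairs = [(0, 1), (1, 2), (0, 2)]
cos_a = {p: u[p[0]] @ u[p[1]] for p in pairs}                 # cosines of inter-ray angles
d2 = {p: np.sum((P[p[0]] - P[p[1]]) ** 2) for p in pairs}     # squared distances between GCPs

def residuals(s):
    # Law of cosines per ray pair: s_i^2 + s_j^2 - 2 s_i s_j cos(a_ij) = |P_i - P_j|^2
    return [s[i] ** 2 + s[j] ** 2 - 2 * s[i] * s[j] * cos_a[(i, j)] - d2[(i, j)]
            for i, j in pairs]

ranges = least_squares(residuals, x0=np.full(3, 40.0)).x      # camera-to-point distances
C_est = np.mean(P - ranges[:, None] * u, axis=0)              # back out the camera centre
print(C_est)                                                  # should be close to C_true
```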

A Proofreader Matching Method Based on Topic Modeling Using the Importance of Documents (문서 중요도를 고려한 토픽 기반의 논문 교정자 매칭 방법론)

  • Son, Yeonbin;An, Hyeontae;Choi, Yerim
    • Journal of Internet Computing and Services / v.19 no.4 / pp.27-33 / 2018
  • When submitting a manuscript to a journal to present research results, researchers often have the manuscript proofread so that it communicates the results more effectively. Currently, most manuscript proofreading companies assign proofreaders manually according to the subjective judgment of a matching manager. Therefore, in this paper, we propose a topic-based proofreader matching method for effective proofreading results. The proposed method consists of two steps. First, topic modeling is performed using Latent Dirichlet Allocation; in this process, the frequency with which each document contributes to a user's representative document is determined by the importance of the document. Second, user similarity is calculated using the cosine similarity measure. Experiments on a real-world dataset confirm that the performance of the proposed method is superior to the comparative method, and the validity of the matching results was verified through qualitative evaluation.
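
A minimal sketch of the two steps with scikit-learn; the documents and importance weights are placeholders, and the importance-weighted average below only approximates the frequency-based construction of the representative document described in the abstract:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical documents and per-document importance weights for one researcher and one proofreader
researcher_docs = ["deep learning for image recognition", "convolutional neural networks"]
researcher_w = np.array([2.0, 1.0])            # e.g., a more important paper counts more
proofreader_docs = ["neural network training", "statistical machine translation"]
proofreader_w = np.array([1.0, 1.0])

docs = researcher_docs + proofreader_docs
X = CountVectorizer().fit_transform(docs)
topics = LatentDirichletAllocation(n_components=3, random_state=0).fit_transform(X)

def profile(topic_rows, weights):
    """A user's representative topic vector: importance-weighted average of the user's document topics."""
    w = weights / weights.sum()
    return (w[:, None] * topic_rows).sum(axis=0, keepdims=True)

r = profile(topics[:2], researcher_w)
p = profile(topics[2:], proofreader_w)
print(cosine_similarity(r, p))                 # matching score between researcher and proofreader
```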

Performance Comparison of DCT Algorithm Implementations Based on Hardware Architecture (프로세서 구조에 따른 DCT 알고리즘의 구현 성능 비교)

  • Lee Jae-Seong;Pack Young-Cheol;Youn Dae-Hee
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.6C / pp.637-644 / 2006
  • This paper presents performance and implementation comparisons of the standard and fast DCT algorithms that are commonly used for the subband filter bank in MPEG audio coders. The comparison is made according to the architectural differences of the implementation hardware. Fast DCT algorithms are known to have much lower computational complexity than the standard method, which computes a vector dot product with the cosine coefficients. However, due to their structural irregularity, fast DCT algorithms require extra cycles to generate operand addresses and to realign intermediate data. When the algorithms are implemented on DSP processors that provide special operations such as single-cycle MAC (multiply-accumulate) and zero-overhead nested loops, the standard algorithm is more advantageous than the fast algorithms. Also, under finite-precision processing, the error performance of the standard method is far superior to that of the fast algorithms. In this paper, truncation errors and algorithmic suitability are analyzed, and implementation results are provided to support the analysis.
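
A small illustration of why the standard DCT maps well onto MAC-oriented DSPs: every output is one regular dot product with a precomputed cosine table. The 32-point size and the absence of the MPEG filter-bank windowing and normalization are simplifying assumptions:

```python
import numpy as np

N = 32                                             # illustrative transform size
n, k = np.meshgrid(np.arange(N), np.arange(N))
C = np.cos(np.pi * (2 * n + 1) * k / (2 * N))      # cosine coefficient table, precomputed once

def dct_standard(x):
    """Standard DCT: each output is a plain dot product of the input with one
    row of the cosine table -- a perfectly regular sequence of multiply-accumulate
    operations, which is why single-cycle MAC units and zero-overhead loops
    on DSPs favour this form over irregular 'fast' butterflies."""
    y = np.zeros(N)
    for kk in range(N):           # outer loop over outputs
        acc = 0.0
        for nn in range(N):       # inner MAC loop: acc += coeff * sample
            acc += C[kk, nn] * x[nn]
        y[kk] = acc
    return y

x = np.random.randn(N)
assert np.allclose(dct_standard(x), C @ x)         # identical to the matrix form
```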

An Exploratory Study of Collective E-Petitions Estimation Methodology Using Anomaly Detection: Focusing on the Voice of Citizens of Changwon City (이상탐지 활용 전자집단민원 추정 방법론에 관한 탐색적 연구: 창원시 시민의 소리 사례를 중심으로)

  • Jeong, Ha-Yeong
    • Informatization Policy / v.26 no.4 / pp.85-106 / 2019
  • Recently, there have been increasing cases of collective petitions filed through electronic petition systems. However, there is no efficient system for managing them, raising concerns about side effects such as increased administrative workload and the mass production of social conflicts. Aimed at suggesting a methodology for estimating electronic collective petitions using anomaly detection and corpus-linguistics-based content analysis, this study conducted the following: i) a theoretical review of the concept of collective petitions, ii) estimation of electronic collective petitions using anomaly detection based on nonparametric unsupervised learning, iii) a content similarity analysis of petitions using the n-gram cosine angle distance, and iv) a case study on the Voice of Citizens of Changwon City, through which the utility of the proposed methodology, policy implications, and future tasks were reviewed.
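
A brief sketch of the n-gram cosine-angle-distance step only (the study's anomaly detection on Korean petition data is not reproduced); texts and n-gram sizes are placeholders:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical petition texts; near-duplicates hint at an organized collective petition
petitions = [
    "please install a traffic light at the school crossing",
    "please install a traffic light at the school crossing immediately",
    "request to repair the broken street lamp on main road",
]

# Character n-grams are robust to small wording changes
vec = CountVectorizer(analyzer="char", ngram_range=(2, 3))
X = vec.fit_transform(petitions)

sim = cosine_similarity(X)
dist = np.degrees(np.arccos(np.clip(sim, -1.0, 1.0)))   # cosine *angle* distance, in degrees
print(np.round(dist, 1))       # small angles flag groups of near-identical petitions
```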

Case Study on Public Document Classification System That Utilizes Text-Mining Technique in BigData Environment (빅데이터 환경에서 텍스트마이닝 기법을 활용한 공공문서 분류체계의 적용사례 연구)

  • Shim, Jang-sup;Lee, Kang-wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2015.10a / pp.1085-1089 / 2015
  • In the past, text-mining techniques had difficulty realizing analysis algorithms due to the complexity of text and the degrees of freedom of its variables. The algorithms demanded a great deal of effort to produce meaningful results, and mechanical text analysis took more time than human analysis. However, with the development of hardware and analysis algorithms, big data technology has emerged; thanks to it, these problems have largely been solved, and analysis through text mining is now recognized as valuable. Nevertheless, applying text mining to Korean text is still at an early stage because of the linguistic characteristics of the Korean language. If not only data searching but also analysis through text mining becomes possible, the savings in the human and material resources required for text analysis will lead to efficient resource utilization in numerous fields of public work. Thus, in this paper, we compare and evaluate manual public document classification against classification that utilizes text-mining-based word frequency (TF-IDF) and cosine similarity between documents in a big data environment.
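
A minimal sketch of classifying a public document by TF-IDF and cosine similarity against labelled documents; the categories and texts are placeholders, not an actual public classification scheme:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical labelled public documents (category, text) and one new document to classify
labelled = [("welfare", "application for childcare support payment"),
            ("construction", "road pavement repair work completion report"),
            ("welfare", "guidelines for elderly care subsidy")]
new_doc = "notice on subsidy payment for childcare"

texts = [t for _, t in labelled] + [new_doc]
X = TfidfVectorizer().fit_transform(texts)                 # TF-IDF vectors for all documents

sim = cosine_similarity(X[-1], X[:-1]).ravel()             # similarity of the new document to each labelled one
best = int(np.argmax(sim))
print(labelled[best][0], sim[best])                        # predicted category and its similarity score
```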
