• Title/Summary/Keyword: Word Detection


A Research of Anomaly Detection Method in MS Office Document (MS 오피스 문서 파일 내 비정상 요소 탐지 기법 연구)

  • Cho, Sung Hye;Lee, Sang Jin
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.6 no.2
    • /
    • pp.87-94
    • /
    • 2017
  • Microsoft Office is an office suite of applications developed by Microsoft. Because MS Office is the most widely used document-processing software, attackers increasingly customize Office files as containers for malware. To compromise a target system, malicious Office files employ a variety of techniques, such as macro functions and shell code hidden in unused areas of the file. Two approaches are commonly used to detect this kind of malware: signature-based detection and sandboxing. However, both have limits given the increasing complexity of malware. This paper therefore proposes a method for detecting malicious MS Office files from a computer-forensics perspective: we perform a structural analysis of the MS Office file format and inspect macros and other potentially problematic areas.
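
The kind of triage this abstract describes can be approximated very roughly in code. Below is a minimal, hedged sketch (standard library only) that flags a file when it looks like a legacy OLE2 container or contains macro- and shell-related byte markers; the marker list is an assumption for illustration and is not the structural analysis used in the paper.

```python
# Hypothetical, simplified triage of an MS Office file: look for the legacy
# OLE2 header and a few macro/shell-related byte markers. This only
# illustrates the idea, not the paper's forensic procedure.
import sys

OLE2_MAGIC = b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1"          # compound-file header
SUSPICIOUS = [b"VBA", b"Macros", b"Shell", b"CreateObject", b"URLDownloadToFile"]

def scan_office_file(path):
    """Return a list of human-readable findings for one document."""
    with open(path, "rb") as f:
        data = f.read()
    findings = []
    if data.startswith(OLE2_MAGIC):
        findings.append("legacy OLE2 container (doc/xls/ppt)")
    findings += ["marker found: " + m.decode() for m in SUSPICIOUS if m in data]
    return findings

if __name__ == "__main__":
    for finding in scan_office_file(sys.argv[1]):
        print(finding)
```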

Keyword Extraction from News Corpus using Modified TF-IDF (TF-IDF의 변형을 이용한 전자뉴스에서의 키워드 추출 기법)

  • Lee, Sung-Jick;Kim, Han-Joon
    • The Journal of Society for e-Business Studies
    • /
    • v.14 no.4
    • /
    • pp.59-73
    • /
    • 2009
  • Keyword extraction is an important and essential technique for text-mining applications such as information retrieval, text categorization, summarization, and topic detection. A set of keywords extracted from a large-scale electronic document collection serves as significant features for text-mining algorithms and contributes to improving the performance of document browsing, topic detection, and automated text classification. This paper presents a keyword extraction technique that can be used to detect topics for each news domain in a large document collection from internet news portal sites. Basically, we use six variants of the traditional TF-IDF weighting model. On top of the TF-IDF model, we propose a word-filtering technique called 'cross-domain comparison filtering'. To prove the effectiveness of our method, we analyze the usefulness of keywords extracted from Korean news articles and present how the keywords of each news domain change over time.
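
The abstract does not spell out the six TF-IDF variants, so the sketch below only illustrates the general idea under stated assumptions: plain TF-IDF scoring per domain, plus a simple "cross-domain" filter that drops keywords surfacing as top keywords in more than one domain. The function names and the filter rule are hypothetical.

```python
# Rough sketch: per-domain TF-IDF keyword scoring plus a simple cross-domain
# filter (keywords that appear in many domains are treated as generic and
# dropped). The paper's actual weighting variants are not reproduced here.
import math
from collections import Counter

def tfidf(docs):
    """docs: list of tokenized documents -> list of {word: tf-idf score} dicts."""
    df = Counter(w for doc in docs for w in set(doc))
    n = len(docs)
    return [{w: (tf / len(doc)) * math.log(n / df[w])
             for w, tf in Counter(doc).items()} for doc in docs]

def cross_domain_filter(domain_keywords, max_domains=1):
    """Keep a keyword only if it is a top keyword in at most `max_domains` domains."""
    counts = Counter(w for kws in domain_keywords.values() for w in kws)
    return {d: {w for w in kws if counts[w] <= max_domains}
            for d, kws in domain_keywords.items()}
```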

A BERT-Based Deep Learning Approach for Vulnerability Detection (BERT를 이용한 딥러닝 기반 소스코드 취약점 탐지 방법 연구)

  • Jin, Wenhui;Oh, Heekuck
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.32 no.6
    • /
    • pp.1139-1150
    • /
    • 2022
  • With the rapid development of the software industry, software is everywhere in our daily life. The number of vulnerabilities also grows with the large amount of newly developed code. Vulnerabilities can be exploited by hackers, resulting in the disclosure of private data and threats to the safety of property and life. In particular, because the volume of code keeps increasing, manual analysis by experts is no longer sufficient. Machine learning has shown high performance in object identification and classification tasks, and vulnerability detection is also well suited to it; as a result, many studies have tried to detect vulnerabilities with RNN-based models. However, RNN models have the limitation that, as the code gets longer, the earlier parts cannot be learned well. In this paper, we propose a novel method that applies BERT to vulnerability detection. The accuracy was 97.5%, an increase of 1.5% over VulDeePecker, and the efficiency also improved by 69%.
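
As a hedged illustration of the approach (not the authors' trained model or dataset), the snippet below scores a code snippet with a BERT-style sequence classifier via Hugging Face transformers; the checkpoint name and the two-label convention are placeholders.

```python
# Minimal sketch: classify a code snippet as vulnerable/benign with a BERT-style
# encoder. "bert-base-uncased" and the label meanings are placeholders; the
# paper's fine-tuned weights and training data are not reproduced here.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)          # assume 0 = benign, 1 = vulnerable

code = "strcpy(buf, user_input);"               # toy example of a risky call
inputs = tokenizer(code, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits
print("vulnerable" if logits.argmax(dim=-1).item() == 1 else "benign")
```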

Development of a Stock Information Retrieval System using Speech Recognition (음성 인식을 이용한 증권 정보 검색 시스템의 개발)

  • Park, Sung-Joon;Koo, Myoung-Wan;Jhon, Chu-Shik
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.6 no.4
    • /
    • pp.403-410
    • /
    • 2000
  • In this paper, the development of a stock information retrieval system using speech recognition is described, together with its features. The system is based on discrete hidden Markov models (DHMMs), and phone-like units (PLUs) are used as the basic recognition unit. End-point detection and echo cancellation are included to facilitate speech input. A continuous-speech recognizer is implemented to allow multi-word utterances. Data collected over several months are analyzed.
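
The DHMM recognizer itself is beyond a short example, but the end-point detection mentioned in the abstract can be sketched with a simple short-time-energy rule; the frame length and threshold below are assumed illustrative values, not the paper's settings.

```python
# Toy short-time-energy end-point detector used to trim silence before
# recognition. Frame length and threshold are arbitrary illustrative values.
import numpy as np

def endpoints(signal, rate, frame_ms=20, thresh=0.02):
    """Return (start, end) sample indices of the detected speech region."""
    frame = int(rate * frame_ms / 1000)
    n_frames = len(signal) // frame
    energy = np.array([np.mean(signal[i * frame:(i + 1) * frame] ** 2)
                       for i in range(n_frames)])
    active = np.where(energy > thresh * energy.max())[0]
    if active.size == 0:                      # no speech found
        return 0, len(signal)
    return active[0] * frame, (active[-1] + 1) * frame
```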

Distinction of Korean and English Characters from Multi-font Images for the Recognition of Mixed Document Composed of Korean and English (한영 혼용문서 인식을 위한 다중 폰트 이미지로부터 한글과 영어의 구별)

  • 전일수
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.4 no.3
    • /
    • pp.52-58
    • /
    • 1999
  • This paper proposes an algorithm for distinguishing Korean and English characters that can be applied to multi-size and multi-font images. The proposed algorithm distinguishes them using the height-to-width ratio of each character, the number of connected components, the presence or absence of a stroke in the upper-left area, and the detection of bars in the input image; bars are detected in the order left, top, right, and bottom. The proposed method was tested and showed good performance for the Myungjo, Sinmyungjo, Gothic, and Kungseo fonts of the Hangul word processor, which is widely used for writing documents.
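
Three of the cues listed in the abstract (aspect ratio, connected-component count, and a stroke in the upper-left region) can be computed from a binary character image as in the sketch below; the bar-detection step and any decision thresholds are omitted, and the function is only an assumed illustration.

```python
# Illustrative feature extraction from a binary character image (1 = ink):
# height-to-width ratio, connected-component count, and upper-left stroke
# presence. Bar detection and classification thresholds are not shown.
import numpy as np
from scipy.ndimage import label

def char_features(img):
    ys, xs = np.nonzero(img)
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    _, n_components = label(img)              # 4-connected components
    upper_left = bool(img[:img.shape[0] // 2, :img.shape[1] // 2].any())
    return {"aspect_ratio": height / width,
            "components": int(n_components),
            "upper_left_stroke": upper_left}
```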

Source Codes Plagiarism Detection By Using Reserved Word Sequence Matching (예약어 시퀀스 탐색을 통한 소스코드 표절검사)

  • Lee Yeong-Ju;Kim Seung;Gang Seok-Ho
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 2006.05a
    • /
    • pp.1198-1206
    • /
    • 2006
  • Existing approaches to detecting plagiarism in program source code fall broadly into fingerprint methods and structure-based methods; they compare two sources mainly by word similarity or word frequency, or by structural features of the source code. This study compares the similarity between source codes using the sequence of programming-language reserved words, and presents a way of interpreting and visualizing the result with Formal Concept Analysis (FCA). Because single-word analysis such as the ordinary Vector Space Model (VSM) cannot capture word adjacency, the algorithm is designed to analyze word sequences. This approach overcomes the difficulty fingerprint methods have in detecting partial plagiarism of source code, and it can detect plagiarism regardless of the order in which functions are called or executed. Finally, the similarity results are visualized as a lattice using FCA, which improves the user's understanding.
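
A bare-bones version of the reserved-word sequence comparison could look like the sketch below; the keyword set, the regex tokenizer, and the use of a longest-matching-subsequence ratio are simplifying assumptions, and the FCA lattice visualization step is not shown.

```python
# Toy reserved-word sequence matching: keep only language keywords from each
# source file and score similarity on the resulting sequences. The keyword set
# is a small C-like subset chosen for illustration.
import re
from difflib import SequenceMatcher

KEYWORDS = {"if", "else", "for", "while", "do", "switch", "case", "break",
            "continue", "return", "int", "void", "struct"}

def reserved_sequence(source):
    return [tok for tok in re.findall(r"[A-Za-z_]\w*", source) if tok in KEYWORDS]

def similarity(src_a, src_b):
    """Similarity of the two reserved-word sequences, between 0.0 and 1.0."""
    return SequenceMatcher(None, reserved_sequence(src_a),
                           reserved_sequence(src_b)).ratio()

print(similarity("for(int i=0;i<n;i++){ if(a) return; }",
                 "int j; for(j=0;j<n;j++){ if(b) return; }"))
```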

Development of English Speech Recognizer for Pronunciation Evaluation (발성 평가를 위한 영어 음성인식기의 개발)

  • Park Jeon Gue;Lee June-Jo;Kim Young-Chang;Hur Yongsoo;Rhee Seok-Chae;Lee Jong-Hyun
    • Proceedings of the KSPS conference
    • /
    • 2003.10a
    • /
    • pp.37-40
    • /
    • 2003
  • This paper presents preliminary results on automatic pronunciation scoring for non-native speakers of English and describes the development of an English speech recognizer for educational and evaluation purposes. The proposed recognizer features two refined acoustic model sets and implements noise-robust data compensation, phonetic alignment, highly reliable rejection, keyword and phrase detection, and an easy-to-use language-modeling toolkit. The developed recognizer achieves an average correlation of 0.725 between the human raters and the machine scores, using the YOUTH speech database for training and K-SEC for testing.
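
The reported figure is a correlation between human ratings and machine scores; a tiny, made-up example of how such a correlation is computed is given below (the numbers are invented, not the paper's data).

```python
# Pearson correlation between human pronunciation ratings and machine scores.
# The values below are invented purely to show the computation.
import numpy as np

human_ratings = np.array([3.5, 4.0, 2.0, 4.5, 3.0])
machine_scores = np.array([3.2, 4.1, 2.4, 4.3, 3.3])
print(np.corrcoef(human_ratings, machine_scores)[0, 1])
```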

Modified SNR-Normalization Technique for Robust Speech Recognition

  • Jung, Hoi-In;Shim, Kab-Jong;Kim, Hyung-Soon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.16 no.3E
    • /
    • pp.14-18
    • /
    • 1997
  • One of the major problems in speech recognition is the mismatch between training and testing environments. Recently, an SNR normalization technique was proposed that normalizes the dynamic range of the frequency channels in a mel-scaled filterbank [1]. While it showed improved robustness against additive noise, it requires a reliable speech-detection mechanism and several adaptation parameters to be optimized. In this paper, we propose a modified SNR normalization technique: for each frequency band, we simply take the maximum of the filterbank output and a predetermined masking constant. In speaker-independent isolated-word recognition experiments in car-noise environments, the proposed modification yields better recognition performance than the original SNR normalization method, with reduced complexity.
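
The modification described above amounts to an element-wise maximum between each filterbank output and a per-band masking constant; a minimal sketch with arbitrary placeholder constants is shown below.

```python
# Sketch of the modified normalization: floor each mel-filterbank output with
# a predetermined per-band masking constant via an element-wise maximum.
import numpy as np

def snr_normalize(filterbank_out, masking_constants):
    """filterbank_out: (frames, bands) array; masking_constants: (bands,) array."""
    return np.maximum(filterbank_out, masking_constants)

frames = np.random.rand(100, 20)             # placeholder filterbank energies
masks = np.full(20, 0.1)                     # placeholder masking constants
normalized = snr_normalize(frames, masks)
```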

Implementation of GSM Full Rate vocoder for the GSM mobile modem chip (GSM방식 단말기용 모뎀칩을 위한 GSM Full Rate 보코더 구현)

  • Lee Dong-Won
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • autumn
    • /
    • pp.9-12
    • /
    • 2001
  • In this paper, the GSM Full Rate (FR) vocoder algorithm [1], adopted by SMG11 of the European telecommunications standards institute ETSI, was implemented in real time on a Teak DSP core. The GSM FR vocoder is the standard codec [2] of the full-rate Traffic Channel (TCH) of the GSM communication system used in Europe and, together with GSM HR, GSM EFR, and GSM AMR, is an essential speech service built into the modem chip. The implemented GSM FR codec has a bit rate of 13.0 kbps and, in addition to the encoder and decoder, also implements auxiliary functions such as the voice activity detection (VAD) block [3] and the DTX block [4]. The Teak [5] used for the implementation is a 16-bit fixed-point DSP core from DSP Group that delivers up to 140 MIPS and is equipped with a 40-bit ALU and two MACs, making it well suited to real-time processing of speech and channel coders. The implemented GSM FR codec shows a complexity of about 2.35 MIPS for the encoder and 1.19 MIPS for the decoder, and the memory used is 3.9K words of program ROM, 396 words of data ROM (tables), and 932 words of RAM.

Weighted QPSK/PCM Speech Signal Detection with the Erasure Zone (가중치를 부여한 QPSK/PCM 음성신호의 소거대역 설정에 의한 신호수신)

  • Ahn, Seung-Choon;Lee, Moon-Ho
    • Proceedings of the KIEE Conference
    • /
    • 1988.07a
    • /
    • pp.179-182
    • /
    • 1988
  • Since the bits in an encoded PCM word differ in importance according to their bit position, a technique is presented in which the encoded signal bits are weighted for the QPSK transmission system in order to improve the signal-to-noise ratio. An erasure zone is also established at the detector: if the detector output falls into the erasure zone, the regenerated sample is replaced by interpolation. Two weighting methods are shown. One applies the same weighting profile to the Q and I dimensions of the QPSK signal constellation; the other applies different weightings to the Q and I dimensions. Compared with a conventional QPSK transmission system, the gains of the new technique in overall signal-to-noise ratio were 5 dB and 2 dB, respectively.
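
The erasure-and-interpolation step can be illustrated with a short sketch: samples whose detection confidence falls inside the erasure zone are discarded and rebuilt by interpolating neighbouring good samples. The confidence measure and threshold here are assumptions, and the QPSK bit weighting itself is not modelled.

```python
# Toy erasure handling for PCM samples: anything detected inside the erasure
# zone (low confidence) is replaced by linear interpolation of good samples.
import numpy as np

def repair_erasures(samples, confidence, zone):
    """Replace samples whose confidence is below `zone` by interpolation."""
    good = confidence >= zone
    out = samples.astype(float).copy()
    idx = np.arange(len(samples))
    out[~good] = np.interp(idx[~good], idx[good], out[good])
    return out

pcm = np.array([10, 12, 0, 14, 15], dtype=float)        # 0 fell in the erasure zone
conf = np.array([0.9, 0.8, 0.1, 0.95, 0.9])
print(repair_erasures(pcm, conf, zone=0.3))             # -> [10. 12. 13. 14. 15.]
```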