• Title/Summary/Keyword: Term Extraction


A Novel RGB Image Steganography Using Simulated Annealing and LCG via LSB

  • Bawaneh, Mohammed J.;Al-Shalabi, Emad Fawzi;Al-Hazaimeh, Obaida M.
    • International Journal of Computer Science & Network Security / v.21 no.1 / pp.143-151 / 2021
  • The enormous prevalence of transferring official confidential digital documents via the Internet shows the urgent need to deliver confidential messages to the recipient without allowing any unauthorized person to learn the contents of the secret messages or detect their existence. Several steganography techniques, such as Least Significant Bit (LSB), Secure Cover Selection (SCS), Discrete Cosine Transform (DCT) and Palette Based (PB), have been applied to prevent an intruder from analyzing and recovering the secret transferred message. The utilized steganography methods should withstand the challenges of steganalysis techniques in terms of analysis and detection. This paper presents a novel and robust framework for color image steganography that combines a Linear Congruential Generator (LCG), simulated annealing (SA), Caesar cryptography and the LSB substitution method in one system, in order to resist steganalysis and deliver data securely to their destination. SA, with the support of the LCG, finds the optimal minimum sniffing path inside a cover color (RGB) image; the confidential message is then encrypted and embedded along that path, with the RGB image as host medium, using the Caesar and LSB procedures. Embedding and extraction of the secret message require common knowledge between sender and receiver; that knowledge is represented by the SA initialization parameters, the LCG seed, the Caesar key agreement and the secret message length. A steganalysis intruder cannot understand or detect the secret message inside the host image without correct knowledge of the manipulation process. The constructed system satisfies the main requirements of image steganography in terms of robustness against confidential message extraction, high-quality visual appearance, low mean square error (MSE) and high peak signal-to-noise ratio (PSNR).
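A minimal sketch of the embedding pipeline this abstract describes, under stated assumptions: a textbook LCG stands in for the SA-optimized path search (whose parameters the abstract does not give), the Caesar shift operates byte-wise, and all constants are illustrative.

```python
# Sketch: Caesar-encrypt the message, choose a pixel path with an LCG,
# then hide the cipher bits in the pixels' least significant bits.

def caesar_encrypt(text, key):
    # Byte-wise Caesar shift (illustrative; mod 256 keeps values in range).
    return "".join(chr((ord(c) + key) % 256) for c in text)

def lcg_positions(seed, n_pixels, count, a=1103515245, c=12345, m=2**31):
    """Generate `count` distinct pixel indices from an LCG stream."""
    seen, out, x = set(), [], seed
    while len(out) < count:
        x = (a * x + c) % m
        idx = x % n_pixels
        if idx not in seen:
            seen.add(idx)
            out.append(idx)
    return out

def embed(pixels, message, caesar_key, seed):
    """Embed message bits into the LSBs of an LCG-chosen pixel path."""
    cipher = caesar_encrypt(message, caesar_key)
    bits = [(ord(ch) >> b) & 1 for ch in cipher for b in range(8)]
    path = lcg_positions(seed, len(pixels), len(bits))
    stego = list(pixels)
    for pos, bit in zip(path, bits):
        stego[pos] = (stego[pos] & ~1) | bit   # LSB substitution
    return stego

def extract(stego, msg_len, caesar_key, seed):
    """Recover the message given the shared seed, key and message length."""
    path = lcg_positions(seed, len(stego), msg_len * 8)
    bits = [stego[p] & 1 for p in path]
    chars = [sum(bits[i * 8 + b] << b for b in range(8)) for i in range(msg_len)]
    return "".join(chr((v - caesar_key) % 256) for v in chars)
```

As the abstract notes, extraction requires the same shared knowledge (seed, key, message length) used for embedding; a different seed yields a different path and garbage output.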

Spontaneous bone regeneration after surgical extraction of a horizontally impacted mandibular third molar: a retrospective panoramic radiograph analysis

  • Kim, Eugene;Eo, Mi Young;Nguyen, Truc Thi Hoang;Yang, Hoon Joo;Myoung, Hoon;Kim, Soung Min
    • Maxillofacial Plastic and Reconstructive Surgery / v.41 / pp.4.1-4.10 / 2019
  • Background: The mandibular third molar (M3) is typically the last permanent tooth to erupt because of insufficient space and thick soft tissues covering its surface. Problems such as alveolar bone loss, development of a periodontal pocket, exposure of cementum, gingival recession, and dental caries can be found in the adjacent second molars (M2) following M3 extraction. The specific aims of the study were to assess the amount and rate of bone regeneration on the distal surface of M2 and to evaluate the aspects of bone regeneration in terms of varying degrees of impaction. Methods: Four series of panoramic radiographic images were obtained from the selected cases, including images from the first visit, immediately after extraction, and 6 weeks and 6 months after extraction. ImageJ software® (NIH, USA) was used to measure the linear distance from the region of interest to the distal root of the adjacent M2. Radiographic infrabony defect (RID) values were calculated from the measured radiographic bone height and cementoenamel junction with distortion compensation. Repeated-measures analysis of variance and one-way analysis of variance were conducted to analyze the statistically significant differences between RID and time, and a Spearman correlation test was conducted to assess the relationship between Pederson's difficulty index (DI) and RID. Results: A large RID (> 6 mm) can be reduced gradually and consistently over time. More than half of the samples recovered nearly to their normal healthy condition (RID ≤ 3 mm) by the 6-month follow-up. DI affected the first 6 weeks of the post-extraction period and only showed a significant positive correlation with respect to the difference between baseline and final RID. Conclusions: Additional treatments on M2 for a minimum of 6 months after an M3 extraction could be recommended. Although DI may affect bone regeneration during the early healing period, further study is required to elucidate any possible factors associated with the healing process. The DI does not cause any long-term adverse effects on bone regeneration after surgical extraction.

A Term Weight Mensuration based on Popularity for Search Query Expansion (검색 질의 확장을 위한 인기도 기반 단어 가중치 측정)

  • Lee, Jung-Hun;Cheon, Suh-Hyun
    • Journal of KIISE:Software and Applications / v.37 no.8 / pp.620-628 / 2010
  • With the use of the Internet pervasive in everyday life, people are now able to retrieve a great deal of information through the web. However, exponential growth in the quantity of information on the web has limited the search performance of online search engines, which return piles of unwanted results. With so much unwanted information, web users now need more time and effort than in the past to find the information they need. This paper suggests a method of using query expansion in order to bring wanted information to web users quickly. In experiments without a change of search subject, the popularity-based Term Weight Mensuration performed better than TF-IDF and the simple popularity Term Weight Mensuration. When the subject changed during a search, the popularity-based Term Weight Mensuration's performance changed less than that of the other methods.
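The abstract compares the proposed popularity-based weight against TF-IDF but does not give the popularity formula itself, so the sketch below shows only the baseline TF-IDF weighting that such query-expansion methods rank candidate terms with; the smoothing-free IDF form is an assumption.

```python
import math
from collections import Counter

def tf_idf(docs):
    """Per-document TF-IDF weights for every term.

    docs: list of token lists. Uses raw tf normalized by document length
    and idf = log(N / df), one common textbook variant.
    """
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))  # document frequency
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights
```

Terms that occur in every document (idf = 0) are weighted out, while rarer terms rise to the top of the expansion candidate list.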

Speaker verification system combining attention-long short term memory based speaker embedding and I-vector in far-field and noisy environments (Attention-long short term memory 기반의 화자 임베딩과 I-vector를 결합한 원거리 및 잡음 환경에서의 화자 검증 알고리즘)

  • Bae, Ara;Kim, Wooil
    • The Journal of the Acoustical Society of Korea / v.39 no.2 / pp.137-142 / 2020
  • Many studies based on the I-vector have been conducted in a variety of environments, from text-dependent short utterances to text-independent long utterances. In this paper, we propose a speaker verification system for far-field and noisy environments that combines an I-vector with Probabilistic Linear Discriminant Analysis (PLDA) and a speaker embedding from a Long Short-Term Memory (LSTM) network with an attention mechanism. The LSTM model's Equal Error Rate (EER) is 15.52 % and the attention-LSTM model's is 8.46 %, an improvement of 7.06 %. We show that the proposed method addresses the problem of the existing extraction process, which defines the embedding heuristically. The EER of the I-vector/PLDA system alone is 6.18 %, the best single-system performance. Combined with the attention-LSTM based embedding, the EER is 2.57 %, which is 3.61 % lower than the baseline system, a performance improvement of 58.41 %.
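A minimal sketch of the attention pooling step the speaker embedding above relies on: a learned vector scores each LSTM frame output, a softmax normalizes the scores, and the weighted mean becomes the utterance-level embedding. NumPy stands in for a trained LSTM here, and the shapes and parameterization are assumptions, not the paper's exact architecture.

```python
import numpy as np

def attention_pool(frames, w):
    """Attention-weighted pooling of frame-level features.

    frames: (T, D) array of LSTM outputs over T frames.
    w:      (D,) learned attention parameter vector.
    Returns a (D,) utterance-level speaker embedding.
    """
    scores = frames @ w                      # (T,) per-frame relevance scores
    alpha = np.exp(scores - scores.max())    # numerically stable softmax
    alpha /= alpha.sum()                     # attention weights sum to 1
    return alpha @ frames                    # weighted mean over frames
```

With a zero attention vector the weights are uniform and the pooling degenerates to a plain average, which is the non-attention baseline the paper improves on.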

Development of u-Health standard terminology and guidelines for terminology standardization (유헬스 표준용어 및 용어 표준화 가이드라인 개발)

  • Lee, Soo-Kyoung
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.6 / pp.4056-4066 / 2015
  • To promote a shared understanding of terminology related to u-Health and to activate the u-Health industry, standard u-Health terminology for communication is required. The purpose of this study is to develop u-Health standard terminology and to provide guidelines for terminology standardization. We developed 187 u-Health standard terms through a process of data acquisition, term extraction, term refinement, term selection and term management, based on reports, glossaries and Telecommunications Technology Association (TTA) standards on u-Health. As a result, standard terminology and standardization guidelines for u-Health optimized to the domestic environment were suggested. They include definitions, classifications, components, and the methods and principles of the process for u-Health standard terminology. The u-Health standard terminology and standardization guidelines presented in this study should reduce the cost of adopting and managing terminology while making information transfer easier, enabling efficient development of the u-Health industry in general.

LSTM(Long Short-Term Memory)-Based Abnormal Behavior Recognition Using AlphaPose (AlphaPose를 활용한 LSTM(Long Short-Term Memory) 기반 이상행동인식)

  • Bae, Hyun-Jae;Jang, Gyu-Jin;Kim, Young-Hun;Kim, Jin-Pyung
    • KIPS Transactions on Software and Data Engineering / v.10 no.5 / pp.187-194 / 2021
  • Behavioral recognition is the recognition of what a person does according to joint movements. To this end, we utilize computer vision techniques used in image processing. Human behavior recognition combined with deep learning and CCTV can serve as a safety accident response service applicable within a safety management site. Existing studies on behavior recognition through deep-learning-based extraction of human joint keypoints are relatively scarce, and it has been difficult to manage workers continuously and systematically at safety management sites. In this paper, to address these problems, we propose a method of recognizing risky behavior using only joint keypoints and joint motion information. AlphaPose, one of the pose estimation methods, was used to extract the joint keypoints of body parts. The extracted joint keypoints were entered sequentially into a Long Short-Term Memory (LSTM) model to be learned from continuous data. After checking the behavioral recognition accuracy, it was confirmed that the accuracy of the "Lying Down" behavioral recognition results was high.
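A sketch of the data-preparation step implied above: per-frame joint keypoints (as produced by a pose estimator such as AlphaPose) are flattened and grouped into fixed-length sliding windows, the sequential input shape an LSTM expects. The window length, stride and joint count are illustrative assumptions, not values from the paper.

```python
import numpy as np

def keypoints_to_windows(keypoints, window=30, stride=1):
    """Turn per-frame pose keypoints into LSTM-ready sequences.

    keypoints: (n_frames, n_joints, 2) array of (x, y) joint positions.
    Returns an (n_windows, window, n_joints * 2) array of sliding windows.
    """
    flat = keypoints.reshape(len(keypoints), -1)      # (n_frames, n_joints*2)
    return np.stack([flat[i:i + window]
                     for i in range(0, len(flat) - window + 1, stride)])
```

Each window would then be fed to the LSTM classifier, whose output label ("Lying Down", etc.) applies to that span of frames.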

Document classification using a deep neural network in text mining (텍스트 마이닝에서 심층 신경망을 이용한 문서 분류)

  • Lee, Bo-Hui;Lee, Su-Jin;Choi, Yong-Seok
    • The Korean Journal of Applied Statistics / v.33 no.5 / pp.615-625 / 2020
  • In text mining, a document-term frequency matrix is built from terms extracted from documents for which group information exists. In this study, we generated a document-term frequency matrix for document classification according to research field. We applied the traditional term weighting function term frequency-inverse document frequency (TF-IDF) to the generated document-term frequency matrix, and also applied term frequency-inverse gravity moment (TF-IGM). We further generated a document-keyword weighted matrix by extracting keywords to improve document classification accuracy. Based on the extracted keyword matrix, we classified documents using a deep neural network. In order to find the optimal model, the accuracy of document classification was verified while changing the number of hidden layers and hidden nodes. Consequently, the model with eight hidden layers showed the highest accuracy, and all TF-IGM document classification accuracies (across parameter changes) were higher than those of TF-IDF. In addition, the deep neural network was confirmed to have better accuracy than the support vector machine. We therefore propose applying TF-IGM and a deep neural network to document classification.
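For readers unfamiliar with TF-IGM, the sketch below shows the commonly cited form of the weight: with a term's class-level frequencies sorted descending as f_1 ≥ f_2 ≥ … and ranked r = 1, 2, …, the inverse gravity moment is f_1 / Σ_r (f_r · r), and the weight is tf · (1 + λ · igm) with λ often set to 7.0. Treat this formula and coefficient as assumptions from the TF-IGM literature, not details confirmed by this abstract.

```python
def igm_factor(class_freqs, lam=7.0):
    """Global weighting factor 1 + lam * IGM for one term.

    class_freqs: frequency of the term in each class. A term concentrated
    in one class gets a large factor; an evenly spread term gets a small one.
    """
    f = sorted(class_freqs, reverse=True)            # f_1 >= f_2 >= ...
    return 1.0 + lam * f[0] / sum(fr * (r + 1) for r, fr in enumerate(f))

def tf_igm(tf, class_freqs, lam=7.0):
    """TF-IGM weight of a term occurrence: tf * (1 + lam * igm)."""
    return tf * igm_factor(class_freqs, lam)
```

Unlike IDF, which ignores class labels, IGM rewards terms whose occurrences concentrate in a single class, which is why it tends to help supervised classification.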

Efficient Parallel TLD on CPU-GPU Platform for Real-Time Tracking

  • Chen, Zhaoyun;Huang, Dafei;Luo, Lei;Wen, Mei;Zhang, Chunyuan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.1 / pp.201-220 / 2020
  • Trackers, especially long-term (LT) trackers, now have a more complex structure and more intensive computation, owing to the endless pursuit of high accuracy and robustness. However, the computing efficiency of LT trackers cannot meet the real-time requirement in various real application scenarios. Considering that heterogeneous CPU-GPU platforms are more popular than ever, it is a challenge to exploit the computing capacity of a heterogeneous platform to improve the efficiency of LT trackers for real-time operation. This paper focuses on TLD, the first LT tracking framework, and proposes an efficient parallel implementation based on OpenCL. We first analyze the TLD tracker and then optimize the compute-intensive kernels, including Fern Feature Extraction, Fern Classification, NCC Calculation, Overlaps Calculation, and Positive and Negative Sample Extraction. Experimental results demonstrate that our efficient parallel TLD tracker outperforms the original TLD, achieving a 3.92× speedup on the CPU-GPU platform. Moreover, the parallel TLD tracker runs at 52.9 frames per second, meeting the real-time requirement.
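One of the kernels named above, NCC Calculation, scores how well a candidate patch matches a stored template. The following is a scalar NumPy sketch of that computation (the paper's OpenCL version evaluates many such scores in parallel); it illustrates the standard normalized cross-correlation formula, not the paper's exact kernel code.

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation in [-1, 1] between equal-size patches.

    Both inputs are mean-subtracted so the score is invariant to
    brightness offsets; the denominator normalizes for contrast.
    """
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0
```

A score near 1 means the patch matches the template; TLD's nearest-neighbor classifier thresholds such scores to accept or reject detections.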

Natural language processing techniques for bioinformatics

  • Tsujii, Jun-ichi
    • Proceedings of the Korean Society for Bioinformatics Conference / 2003.10a / pp.3-3 / 2003
  • With biomedical literature expanding so rapidly, there is an urgent need to discover and organize the knowledge extracted from texts. Although factual databases contain crucial information, the overwhelming amount of new knowledge remains in textual form (e.g. MEDLINE). In addition, new terms are constantly coined, as are the relationships linking new genes, drugs, proteins, etc. As the size of the biomedical literature expands, more systems are applying a variety of methods to automate the process of knowledge acquisition and management. In my talk, I focus on the GENIA project of our group at the University of Tokyo, the objective of which is to construct an information extraction system for protein-protein interactions from MEDLINE abstracts. The talk covers (1) techniques we use for named entity recognition: (1-a) SOHMM (Self-organized HMM), (1-b) Maximum Entropy Model, (1-c) Lexicon-based Recognizer; (2) treatment of term variants and acronym finders; (3) event extraction using a full parser; (4) linguistic resources for text mining (the GENIA corpus): (4-a) semantic tags, (4-b) structural annotations, (4-c) co-reference tags, (4-d) the GENIA ontology. I will also talk about a possible extension of our work that links the findings of molecular biology with clinical findings, and claim that text-based or concept-based biology would be a viable alternative to systems biology, which tends to emphasize the role of simulation models in bioinformatics.


Detection of the morphologic change on tidal flat using intertidal DEMs

  • Lee, Yoon-Kyung;Ryu, Joo-Hyung;Eom, Jin-Ah;Kwak, Joon-Young;Won, Joong-Sun
    • Proceedings of the KSRS Conference / v.1 / pp.247-249 / 2006
  • The objective of this study is to detect inter-tidal topographic change over a decade. Waterline extraction is one of the most widely used methods to generate a digital elevation model (DEM) of a tidal flat from multi-temporal optical data, and it is well known that this method can construct the detailed topographic relief of a tidal flat from waterlines. In this study, we generated two sets of tidal-flat DEMs for southern Ganghwado. The DEMs showed that the northern Yeongjongdo tidal flat has relatively high elevation with steep gradients, while the southern Ganghwado tidal flat has relatively low elevation and gentle gradients. To detect the morphologic change of the tidal flat over a decade, we compared the early-1990s DEM with the early-2000s DEM. Erosion over the decade is dominant in the west of the southern Ganghwado tidal flat, while sedimentation is dominant in the wide channel between the southern Ganghwado and Yeongjongdo tidal flats, an area commonly affected by high current and sedimentation energy. Although we were not able to verify the accuracy of the changes in topography and the absolute volume of sediments, this result shows that DEMs generated by the waterline extraction method are an effective tool for long-term topographic change estimation.
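A minimal sketch of the waterline method's core idea: each scene's extracted waterline pixels are assigned the tide height at acquisition time, and the rest of the flat is filled by interpolating between the labeled waterlines (here, crude nearest-point assignment on a small pixel grid). Real processing works on georeferenced imagery with proper interpolation; the grid, coordinates and heights below are illustrative assumptions.

```python
import numpy as np

def waterline_dem(shape, waterlines):
    """Build a tidal-flat DEM from tide-height-labeled waterlines.

    shape:      (rows, cols) of the output DEM grid.
    waterlines: list of (points, tide_height) pairs, where points is an
                (N, 2) array of (row, col) waterline pixel coordinates.
    Each grid cell takes the tide height of its nearest waterline point.
    """
    pts = np.vstack([p for p, _ in waterlines])
    heights = np.concatenate([np.full(len(p), h) for p, h in waterlines])
    rows, cols = np.indices(shape)
    grid = np.stack([rows.ravel(), cols.ravel()], axis=1)
    # squared distance from every grid cell to every waterline point
    d2 = ((grid[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
    return heights[d2.argmin(axis=1)].reshape(shape)
```

Stacking waterlines from many acquisition times, each at a different tide stage, is what lets the method recover the continuous relief of the flat.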
