• Title/Summary/Keyword: weighted dictionary

Search results: 19

Corpus-Based Ontology Learning for Semantic Analysis (의미 분석을 위한 말뭉치 기반의 온톨로지 학습)

  • 강신재
    • Journal of Korea Society of Industrial Information Systems / v.9 no.1 / pp.17-23 / 2004
  • This paper proposes corpus-based ontology learning to determine word senses in Korean language processing. Our approach is a hybrid method: first, we apply previously secured dictionary information to select the correct senses of some ambiguous words with high precision, and then we use the ontology to disambiguate the remaining ambiguous words. The mutual information between concepts in the ontology is calculated before the ontology is used as knowledge for disambiguating word senses. If mutual information is regarded as a weight between ontology concepts, the ontology can be treated as a graph with weighted edges, and we can then locate the least-weighted path from one concept to another (see the sketch after this entry). In our practical machine translation system, this word sense disambiguation method achieved a 9% improvement over methods that do not use an ontology for Korean translation.

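The least-weighted-path idea described in the abstract above can be made concrete with a short sketch. This is a minimal illustration under assumptions, not the paper's implementation: the concept names, the toy graph, and the convention that a lower weight means a stronger association are all made up for the example.

```python
import heapq

def least_weighted_path(graph, source, target):
    """Dijkstra's algorithm over an ontology treated as a graph with weighted edges.

    graph: dict mapping a concept to {neighbor_concept: edge_weight}, where the
    weights are assumed to be derived from mutual information between concepts
    (here, a lower weight stands for a stronger association).
    Returns (total_weight, [concepts along the least-weighted path]).
    """
    queue = [(0.0, source, [source])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == target:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, {}).items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical toy ontology fragment; the weights are illustrative only.
ontology = {
    "bank": {"finance": 0.5, "river": 1.25},
    "finance": {"money": 0.25},
    "river": {"water": 0.5},
    "money": {},
    "water": {},
}
print(least_weighted_path(ontology, "bank", "money"))  # (0.75, ['bank', 'finance', 'money'])
```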

Object Cataloging Using Heterogeneous Local Features for Image Retrieval

  • Islam, Mohammad Khairul;Jahan, Farah;Baek, Joong Hwan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.11 / pp.4534-4555 / 2015
  • We propose a robust object cataloging method that uses multiple locally distinct heterogeneous features to aid image retrieval. Owing to variations in object size, orientation, illumination, etc., object recognition is an extraordinarily challenging problem. In these circumstances, we adopt a local interest point detection method that locates prototypical local components in object images. For each local component, we extract heterogeneous features such as a gradient-weighted orientation histogram, the sum of wavelet responses, and histograms over different color spaces, and combine them to describe the component from several perspectives. A global signature is formed by adapting the bag-of-features model, which counts the frequencies of local components with respect to the words in a dictionary (a minimal sketch follows this entry). The proposed method demonstrates its strength in classifying objects against various complex backgrounds: our proposed local feature achieves a classification accuracy of 98%, while SURF, SIFT, BRISK and FREAK achieve 81%, 88%, 84% and 87%, respectively.
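As a companion to the bag-of-features signature mentioned above, here is a minimal sketch of the quantize-and-count step under assumed inputs (random descriptors and a precomputed visual dictionary). It is not the paper's pipeline, which combines several heterogeneous local descriptors.

```python
import numpy as np

def bag_of_features_signature(descriptors, dictionary):
    """Quantize local descriptors against a visual dictionary and return a
    normalized word-frequency histogram (the global signature).

    descriptors: (n_points, dim) array of local feature vectors.
    dictionary:  (n_words, dim) array of visual words (e.g., k-means centers).
    """
    # Squared Euclidean distance from every descriptor to every visual word.
    dists = ((descriptors[:, None, :] - dictionary[None, :, :]) ** 2).sum(axis=2)
    words = dists.argmin(axis=1)                       # nearest word per descriptor
    hist = np.bincount(words, minlength=len(dictionary)).astype(float)
    return hist / max(hist.sum(), 1.0)                 # frequency-normalized signature

# Hypothetical example: 200 random 64-d descriptors, 50-word dictionary.
rng = np.random.default_rng(0)
sig = bag_of_features_signature(rng.normal(size=(200, 64)), rng.normal(size=(50, 64)))
print(sig.shape)  # (50,)
```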

Practical Password-Authenticated Three-Party Key Exchange

  • Kwon, Jeong-Ok;Jeong, Ik-Rae;Lee, Dong-Hoon
    • KSII Transactions on Internet and Information Systems (TIIS) / v.2 no.6 / pp.312-332 / 2008
  • Password-authenticated key exchange (PAKE) protocols in the literature typically assume a password that is shared between a client and a server. PAKE has been applied in various environments, especially in the client-server applications of remotely accessed systems such as e-banking. With the rapid development of modern communication environments, such as ad-hoc networks and ubiquitous computing, it has become common to construct a secure peer-to-peer channel, which is quite a different paradigm from the existing client-server one. In such a peer-to-peer channel, it is much more common for users not to share a password with each other. In this paper, we consider password-authenticated key exchange in the three-party setting, where two users do not share a password between themselves but only with one server; the users establish a session key using their different passwords with the help of the server (a purely structural sketch of this setting follows this entry). We propose an efficient password-authenticated key exchange protocol with different passwords that achieves forward secrecy in the standard model. The protocol requires parties to memorize only human-memorable passwords; all other information necessary to run the protocol is public. The protocol is also lightweight: it requires only three rounds and four modular exponentiations per user. This amount of computation and number of rounds are comparable to the most efficient password-authenticated key exchange protocols in the random-oracle model. Dispensing with random oracles does not require the security of any expensive signature schemes or zero-knowledge proofs.
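The three-party setting described above, two users holding different passwords and one server, can be illustrated with a deliberately simplified sketch. This is not the paper's protocol and provides no real security (no forward secrecy, a hand-rolled key wrap, no key confirmation); every name, key-derivation choice, and message in it is an assumption made only to show the roles and rounds of the setting.

```python
# Toy illustration of the three-party setting only: clients A and B each share
# a *different* password with the server S, and S helps them obtain a common
# session key.  NOT the paper's protocol and NOT secure.
import hashlib, hmac, secrets

def kdf(password: str, salt: bytes) -> bytes:
    """Password-derived key (illustrative; a real protocol would not use a bare hash)."""
    return hashlib.sha256(salt + password.encode()).digest()

# Long-term state: each client shares its own password with the server.
passwords = {"A": "alice-pw", "B": "bob-pw"}
salt = secrets.token_bytes(16)
server_keys = {u: kdf(pw, salt) for u, pw in passwords.items()}

# Round 1: each client authenticates a fresh nonce under its password-derived key.
nonces = {u: secrets.token_bytes(16) for u in passwords}
tags = {u: hmac.new(kdf(passwords[u], salt), nonces[u], hashlib.sha256).digest()
        for u in passwords}

# Round 2: the server checks both tags, then wraps one fresh session key per client.
assert all(hmac.compare_digest(
               tags[u],
               hmac.new(server_keys[u], nonces[u], hashlib.sha256).digest())
           for u in passwords)
session_key = secrets.token_bytes(32)
wrapped = {u: bytes(a ^ b for a, b in
                    zip(session_key,
                        hashlib.sha256(server_keys[u] + nonces[u]).digest()))
           for u in passwords}

# Round 3: each client unwraps the session key with its own password-derived key.
recovered = {u: bytes(a ^ b for a, b in
                      zip(wrapped[u],
                          hashlib.sha256(kdf(passwords[u], salt) + nonces[u]).digest()))
             for u in passwords}
assert recovered["A"] == recovered["B"] == session_key
```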

Ontology Construction and Its Application to Disambiguate Word Senses (온톨로지 구축 및 단어 의미 중의성 해소에의 활용)

  • Kang, Sin-Jae
    • The KIPS Transactions: Part B / v.11B no.4 / pp.491-500 / 2004
  • This paper presents an ontology construction method that uses various computational language resources, and an ontology-based word sense disambiguation method. In order to acquire a reasonably practical ontology, the Kadokawa thesaurus is extended by inserting additional semantic relations into its hierarchy; these relations are classified as case relations and other semantic relations (a small data-structure sketch follows this entry). To apply the ontology to word sense disambiguation, we first apply previously secured dictionary information to select the correct senses of some ambiguous words with high precision, and then use the ontology to disambiguate the remaining ambiguous words. The mutual information between concepts in the ontology is calculated before the ontology is used as knowledge for disambiguating word senses. If mutual information is regarded as a weight between ontology concepts, the ontology can be treated as a graph with weighted edges, and we can then locate the least-weighted path from one concept to another. In our practical machine translation system, this word sense disambiguation method achieved a 9% improvement over methods that do not use an ontology for Korean translation.
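To make the step of inserting additional semantic relations into the thesaurus hierarchy concrete, here is a minimal data-structure sketch. The concept names and relation labels are hypothetical; the actual Kadokawa concept codes and the paper's relation inventory are not reproduced.

```python
from collections import defaultdict

class Ontology:
    """A concept hierarchy extended with typed semantic relations.

    The hierarchy keeps the original thesaurus is-a links; extra edges such as
    case relations (agent, instrument, ...) are stored as labeled relations.
    """
    def __init__(self):
        self.parent = {}                       # child concept -> parent concept (is-a)
        self.relations = defaultdict(list)     # concept -> [(relation_label, concept)]

    def add_isa(self, child, parent):
        self.parent[child] = parent

    def add_relation(self, src, label, dst):
        self.relations[src].append((label, dst))

    def related(self, concept):
        """All concepts reachable in one hop, together with the edge label."""
        edges = list(self.relations[concept])
        if concept in self.parent:
            edges.append(("is-a", self.parent[concept]))
        return edges

# Hypothetical fragment: concept names stand in for thesaurus concept codes.
onto = Ontology()
onto.add_isa("knife", "tool")
onto.add_isa("cut", "action")
onto.add_relation("cut", "instrument", "knife")   # a case relation
onto.add_relation("cut", "object", "bread")       # another semantic relation
print(onto.related("cut"))
```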

Object Classification based on Weakly Supervised E2LSH and Saliency map Weighting

  • Zhao, Yongwei;Li, Bicheng;Liu, Xin;Ke, Shengcai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.1 / pp.364-380 / 2016
  • The most popular approach to object classification is based on the bag-of-visual-words model, which has several fundamental problems that restrict its performance, such as low time efficiency, the synonymy and polysemy of visual words, and the lack of spatial information between visual words. In view of this, an object classification method based on weakly supervised E2LSH and saliency map weighting is proposed. First, E2LSH (Exact Euclidean Locality Sensitive Hashing) is employed to generate a group of weakly randomized visual dictionaries by clustering SIFT features of the training dataset, and the selection of hash functions is supervised, inspired by random-forest ideas, to reduce the randomness of E2LSH. Second, the graph-based visual saliency (GBVS) algorithm is applied to compute a saliency map for each image, and the visual words are weighted according to this saliency prior (a minimal sketch follows this entry). Finally, a saliency-map-weighted visual language model is used to perform object classification. Experimental results on the Pascal 2007 and Caltech-256 datasets indicate that the distinguishability of objects is effectively improved and that our method is superior to state-of-the-art object classification methods.
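The saliency-prior weighting of visual words can be sketched as below. This is a generic illustration (nearest-word quantization with per-keypoint saliency weights) using assumed random inputs; it does not reproduce the paper's E2LSH-based dictionaries or the GBVS computation itself.

```python
import numpy as np

def saliency_weighted_bovw(descriptors, keypoints_xy, dictionary, saliency_map):
    """Bag-of-visual-words histogram in which each occurrence of a word is
    weighted by the saliency value at its keypoint location.

    descriptors:  (n, d) local descriptors (e.g., SIFT).
    keypoints_xy: (n, 2) integer (row, col) keypoint positions.
    dictionary:   (k, d) visual words.
    saliency_map: (H, W) saliency values in [0, 1] (e.g., from a saliency detector).
    """
    dists = ((descriptors[:, None, :] - dictionary[None, :, :]) ** 2).sum(axis=2)
    words = dists.argmin(axis=1)
    weights = saliency_map[keypoints_xy[:, 0], keypoints_xy[:, 1]]
    hist = np.zeros(len(dictionary))
    np.add.at(hist, words, weights)            # accumulate saliency-weighted counts
    return hist / max(hist.sum(), 1e-12)

# Hypothetical example with random data.
rng = np.random.default_rng(1)
h = saliency_weighted_bovw(rng.normal(size=(100, 128)),
                           rng.integers(0, 64, size=(100, 2)),
                           rng.normal(size=(200, 128)),
                           rng.random((64, 64)))
print(h.sum())  # ~1.0
```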

Searching for Variants Using Trie-Index (트라이 인덱스를 이용한 이형태 검색)

  • Park, In-Cheol
    • Journal of the Korea Academia-Industrial cooperation Society / v.10 no.8 / pp.1986-1992 / 2009
  • A user often searches for data by entering a variant such as an abbreviation or substring of a word, or a misspelled word. A simple approach to variant search is to build a variant dictionary; however, this entails enormous cost and time and cannot handle variants caused by misspelling. Approximate search, i.e., search by approximate string matching, is a good alternative, but it cannot handle variants produced by abbreviation. This paper proposes a method for searching various variants, including abbreviations and misspelled words, by using a trie index. First, it presents a variant matching method based on the calculation of a path-weighted metric (a minimal sketch follows this entry); in addition, it provides a variant-searching algorithm that reduces the search time.
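A minimal sketch of trie-based variant matching is given below. It scores a candidate by the number of indexed characters skipped while matching the query, which is only a stand-in for the paper's path-weighted metric; the skip budget, the toy word list, and the uniform skip cost of 1 are assumptions.

```python
class TrieNode:
    def __init__(self):
        self.children = {}
        self.word = None          # full word stored at terminal nodes

def insert(root, word):
    node = root
    for ch in word:
        node = node.children.setdefault(ch, TrieNode())
    node.word = word

def match(node, query, i=0, skips=0, max_skips=4, results=None):
    """Match a query against the trie, allowing characters of the indexed word
    to be skipped at a cost of 1 each, so that abbreviations such as 'dept'
    can still reach 'department'.  Collects (word, skip_cost) pairs in budget."""
    if results is None:
        results = []
    if skips > max_skips:
        return results
    if i == len(query):
        if node.word is not None:
            results.append((node.word, skips))
        # remaining characters of a longer stored word also count as skips
        for child in node.children.values():
            match(child, query, i, skips + 1, max_skips, results)
        return results
    for ch, child in node.children.items():
        if ch == query[i]:
            match(child, query, i + 1, skips, max_skips, results)   # consume a query char
        else:
            match(child, query, i, skips + 1, max_skips, results)   # skip an indexed char
    return results

root = TrieNode()
for w in ["department", "depart", "deputy"]:
    insert(root, w)

best = {}
for w, cost in match(root, "dept", max_skips=6):
    best[w] = min(cost, best.get(w, cost))
print(best)  # lower cost = closer variant: 'depart' and 'deputy' cost 2, 'department' costs 6
```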

Developing the Customer Quality Satisfaction Index Using Online Reviews: Case Study of TV (리뷰를 활용한 고객 품질 만족도 지수 개발 : TV 사례연구)

  • Jiye, Shin;Heesoo, Kim;Jaiho, Lee;Hyoungwoo, Jeon;Jeongsik, Ahn;Sunghoon, Hwang
    • Journal of Korean Society for Quality Management / v.50 no.4 / pp.863-876 / 2022
  • Purpose: The purpose of this study is to propose a product quality satisfaction index based on multiple linear regression over customer reviews. Methods: The proposed framework is composed of four steps. First, we collect online reviews and divide them into insight phrases; the phrases are classified using a product attribute dictionary, and sentiment analysis is conducted. Second, the importance of each attribute is calculated from both its regression coefficient and its frequency. Third, the positive rate is calculated from the sentiment analysis results. In the last step, the quality satisfaction index is measured as the weighted sum of importance and positive rate (a minimal sketch follows this entry). Results: We conduct a case study using two years (2020, 2021) of Samsung TV reviews to confirm the effectiveness of the proposed methodology. As a result, we found that Picture quality is the most crucial attribute in TV evaluation. The importance of Gaming and content has grown, and its positive rate has also increased; accordingly, the overall satisfaction with TVs increased in 2021 compared with 2020. Conclusion: The results show that the proposed index captures customer opinion efficiently and can be explained by the importance and positive rate of each attribute. By using the proposed index, companies can identify what to improve and determine the priority of improvements.
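The final aggregation step can be sketched as a weighted sum. The attribute names, the numbers, and the exact way importance combines the regression coefficient with frequency are assumptions for illustration, not the paper's formula.

```python
def satisfaction_index(attributes):
    """Weighted sum of per-attribute positive rates, where each attribute's
    weight (importance) is assumed to combine its regression coefficient
    with its mention frequency.

    attributes: list of dicts with keys 'coef', 'freq', 'positive_rate'.
    """
    importances = [a["coef"] * a["freq"] for a in attributes]     # assumed combination
    total = sum(importances)
    weights = [imp / total for imp in importances]                # normalize to sum to 1
    return sum(w * a["positive_rate"] for w, a in zip(weights, attributes))

# Hypothetical TV attributes (all values are made up).
tv = [
    {"name": "Picture quality", "coef": 0.42, "freq": 900, "positive_rate": 0.81},
    {"name": "Sound",           "coef": 0.18, "freq": 400, "positive_rate": 0.72},
    {"name": "Gaming/content",  "coef": 0.25, "freq": 300, "positive_rate": 0.78},
]
print(round(satisfaction_index(tv), 3))
```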

Infrared Image Sharpness Enhancement Method Using Super-resolution Based on Adaptive Dynamic Range Coding and Fusion with Visible Image (적외선 영상 선명도 개선을 위한 ADRC 기반 초고해상도 기법 및 가시광 영상과의 융합 기법)

  • Kim, Yong Jun;Song, Byung Cheol
    • Journal of the Institute of Electronics and Information Engineers / v.53 no.11 / pp.73-81 / 2016
  • In general, infrared images have less sharpness and fewer image details than visible images, so prior image up-scaling methods are not effective for infrared images. To solve this problem, this paper proposes an algorithm that first up-scales an input infrared (IR) image using an adaptive dynamic range coding (ADRC)-based super-resolution (SR) method and then fuses the result with the corresponding visible image. The proposed algorithm consists of an up-scaling phase and a fusion phase. First, an input IR image is up-scaled by the proposed ADRC-based SR algorithm; in the dictionary learning stage of this up-scaling phase, so-called 'pre-emphasis' processing is applied to the high-resolution training images, which yields better sharpness. In the following fusion phase, high-frequency information is extracted from the visible image corresponding to the IR image and is adaptively weighted according to the complexity of the IR image. Finally, the output IR image is obtained by adding the processed high-frequency information to the up-scaled IR image (a minimal sketch of this fusion step follows this entry). The experimental results show that the proposed algorithm provides better results than the state-of-the-art SR method, i.e., the anchored neighborhood regression (A+) algorithm. For example, in terms of just noticeable blur (JNB), the proposed algorithm scores 0.2184 higher than A+. The proposed algorithm also outperforms previous works in terms of subjective visual quality.
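The fusion step can be sketched as follows: high-frequency detail is taken from the visible image and added to the up-scaled IR image with a weight driven by the IR image's local complexity. The local-standard-deviation weighting, the window size, and the inputs are assumptions for illustration, not the paper's exact adaptive rule, and the ADRC-based SR stage is not reproduced.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_ir_with_visible(ir_up, visible, window=7, strength=1.0):
    """Add visible-image high-frequency detail to an up-scaled IR image,
    weighted by the local complexity (local standard deviation) of the IR image.

    ir_up, visible: float arrays of the same shape, values in [0, 1].
    """
    # High-frequency component of the visible image (detail layer).
    high_freq = visible - uniform_filter(visible, size=window)

    # Local complexity of the IR image: local standard deviation, normalized to [0, 1].
    mean = uniform_filter(ir_up, size=window)
    mean_sq = uniform_filter(ir_up ** 2, size=window)
    local_std = np.sqrt(np.clip(mean_sq - mean ** 2, 0.0, None))
    weight = local_std / (local_std.max() + 1e-8)

    return np.clip(ir_up + strength * weight * high_freq, 0.0, 1.0)

# Hypothetical example with random images of the same size.
rng = np.random.default_rng(2)
fused = fuse_ir_with_visible(rng.random((128, 128)), rng.random((128, 128)))
print(fused.shape)
```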

A Two-Stage Learning Method of CNN and K-means RGB Cluster for Sentiment Classification of Images (이미지 감성분류를 위한 CNN과 K-means RGB Cluster 이-단계 학습 방안)

  • Kim, Jeongtae;Park, Eunbi;Han, Kiwoong;Lee, Junghyun;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.139-156 / 2021
  • The biggest reason for using a deep learning model in image classification is that it can consider the relationships between regions by extracting each region's features from the overall information of the image. However, a CNN model may not be suitable for emotional image data that lack such regional features. To address the difficulty of classifying emotion images, many researchers propose CNN-based architectures suited to emotion images every year. Studies on the relationship between color and human emotion have also been conducted and have shown that different colors induce different emotions. Among deep learning studies, some apply color information to image sentiment classification: using an image's color information in addition to the image itself improves the accuracy of classifying image emotions compared with training the classification model on the image alone. This study proposes two ways to increase accuracy by adjusting the result value after the model classifies an image's emotion; both modify the result value based on statistics over the picture's colors. In the first method, the two-color combinations most prevalent across all training data are found, the two-color combination most prevalent in each test image is found at test time, and the result values are corrected according to the color-combination distribution. The second method weights the result value obtained after the model classifies an image's emotion using an expression based on the log and exponential functions. Emotion6, classified into six emotions, and ArtPhoto, classified into eight categories, were used as image data. The Densenet169, Mnasnet, Resnet101, Resnet152, and Vgg19 architectures were used for the CNN model, and performance was compared before and after applying the two-stage learning to each CNN model. Inspired by color psychology, which deals with the relationship between colors and emotions, we studied how to improve accuracy by modifying the result values based on color when building a model that classifies an image's sentiment. Sixteen colors were used: red, orange, yellow, green, blue, indigo, purple, turquoise, pink, magenta, brown, gray, silver, gold, white, and black. Using scikit-learn's clustering, the seven colors most prevalent in each image are identified; the RGB values of these colors are then compared with the RGB values of the 16 colors above, i.e., each is mapped to the closest named color. If combinations of three or more colors are selected, too many combinations occur and the distribution becomes scattered, so each combination has little influence on the result value; to solve this problem, two-color combinations were used and applied as weights to the model. Before training, the most prevalent color combinations were found for all training images, and the distribution of combinations for each class was stored in a Python dictionary for use during testing. During testing, the two-color combination most prevalent in each test image is found, its distribution in the training data is checked, and the result is corrected. We devised several equations to weight the model's result value based on the extracted colors, as described above (a minimal sketch of the color extraction step follows this entry).
The data set was randomly split 80:20, and the model was verified using 20% of the data as a test set. The remaining 80% was split into five folds for 5-fold cross-validation, and the model was trained five times using different validation sets. Finally, performance was checked using the previously separated test set. Adam was used as the optimizer, and the learning rate was set to 0.01. Training was run for up to 20 epochs, and if the validation loss did not decrease for five consecutive epochs, the experiment was stopped; early stopping was set to load the model with the best validation loss. Classification accuracy was better when the extracted color information was used together with the CNN than when only the CNN architecture was used.
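The dominant-color extraction described above can be sketched as below: pixels are clustered with K-means, the cluster centers are mapped to the nearest of the 16 named colors, and the two most frequent named colors form the combination. The reference RGB values, the cluster count of seven, and the random example image are assumptions; the paper's log/exponential correction equations are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical reference RGB values for the 16 named colors.
NAMED_COLORS = {
    "red": (255, 0, 0), "orange": (255, 165, 0), "yellow": (255, 255, 0),
    "green": (0, 128, 0), "blue": (0, 0, 255), "indigo": (75, 0, 130),
    "purple": (128, 0, 128), "turquoise": (64, 224, 208), "pink": (255, 192, 203),
    "magenta": (255, 0, 255), "brown": (139, 69, 19), "gray": (128, 128, 128),
    "silver": (192, 192, 192), "gold": (255, 215, 0), "white": (255, 255, 255),
    "black": (0, 0, 0),
}

def dominant_color_pair(image, n_clusters=7):
    """Cluster an image's pixels, map each cluster center to the nearest of the
    16 named colors, and return the two most frequent named colors as a pair."""
    pixels = image.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    names = list(NAMED_COLORS)
    refs = np.array(list(NAMED_COLORS.values()), dtype=float)
    counts = {}
    for center, size in zip(km.cluster_centers_, np.bincount(km.labels_)):
        nearest = names[int(((refs - center) ** 2).sum(axis=1).argmin())]
        counts[nearest] = counts.get(nearest, 0) + int(size)
    top_two = sorted(counts, key=counts.get, reverse=True)[:2]
    return tuple(sorted(top_two))

# Hypothetical example with a random 32x32 RGB image.
rng = np.random.default_rng(3)
print(dominant_color_pair(rng.integers(0, 256, size=(32, 32, 3))))
```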