• Title/Summary/Keyword: Goal Similarity

Design of an Extended Fuzzy Information Retrieval System Using User's Preference (사용자의 선호도를 반영한 확장 퍼지 정보 검색 시스템의 설계)

  • 김대원;이광형
    • Journal of the Korean Institute of Intelligent Systems, v.10 no.4, pp.299-303, 2000
  • The goal of an information retrieval system is to search for the documents that a user wants in a fast and efficient way. Many information retrieval models, including Boolean models, vector models, and fuzzy models based on traditional fuzzy set theory, have been proposed to achieve this objective. However, the previous models share a limitation: they do not consider the user's preference when searching documents. In this paper, we propose a new extended fuzzy information retrieval system that remedies the shortcomings of the previous ones. In the proposed model, a new similarity measure that can exploit the user's preference is applied to calculate the similarity degree of documents.
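
The abstract does not give the similarity formula; below is a minimal sketch of how a per-term user-preference weight might extend a fuzzy retrieval score. All names and the max-aggregation rule are assumptions, not the paper's actual model.

```python
# Hypothetical sketch: a fuzzy retrieval score extended with a user-preference
# weight per index term. The combination rule (product + fuzzy OR) is an
# assumption, not the paper's formulation.

def fuzzy_score(doc_weights, query_terms, preference):
    """doc_weights: term -> membership degree of the term in the document (0..1)
    query_terms: terms in the query
    preference: term -> user preference weight (0..1, default 1.0)"""
    scores = []
    for term in query_terms:
        mu = doc_weights.get(term, 0.0)   # fuzzy membership of term in doc
        pref = preference.get(term, 1.0)  # user's preference for the term
        scores.append(mu * pref)          # preference-weighted degree
    # Aggregate with max (fuzzy OR); a fuzzy AND would use min instead.
    return max(scores) if scores else 0.0

doc = {"fuzzy": 0.8, "retrieval": 0.6}
print(fuzzy_score(doc, ["fuzzy", "retrieval"], {"fuzzy": 0.5}))  # 0.6
```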

An Efficient Frame-Level Rate Control Algorithm for High Efficiency Video Coding

  • Lin, Yubei;Zhang, Xingming;Xiao, Jianen;Su, Shengkai
    • KSII Transactions on Internet and Information Systems (TIIS), v.10 no.4, pp.1877-1891, 2016
  • In video coding, the goal of rate control (RC) is not only to avoid undesirable fluctuation in bit allocation, but also to provide good visual perception. In this paper, a novel frame-level rate control algorithm for High Efficiency Video Coding (HEVC) is proposed. First, a model is established that reveals the relationship between bits per pixel (bpp), the bitrate of the intra frame, and the bitrate of the subsequent inter frames in a group of pictures (GOP); based on this model, the target bitrate of the first intra frame is well estimated. Then a novel frame-level bit allocation algorithm is developed, which provides a robust bit balancing scheme between the intra frame and the inter frames in a GOP to achieve visual quality smoothness throughout the whole sequence. Our experimental results show that, compared to the RC scheme of the HEVC reference encoder HM-16.0, the proposed algorithm produces reconstructed frames with more consistent objective video quality. In addition, the objective visual quality of the reconstructed frames can be improved at a lower bitrate.
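
As a rough illustration of frame-level bit allocation within a GOP (not the paper's fitted bpp model), here is a sketch with an assumed fixed intra/inter split; the ratio and function names are placeholders.

```python
# Hypothetical sketch of frame-level bit allocation within a GOP: an intra/inter
# split followed by uniform per-frame inter targets. The intra_ratio is a fixed
# guess; the paper estimates it from a bpp-based model.

def allocate_gop_bits(gop_target_bits, num_inter_frames, intra_ratio=0.35):
    """Split a GOP bit budget between one intra frame and the inter frames."""
    intra_bits = gop_target_bits * intra_ratio
    inter_budget = gop_target_bits - intra_bits
    inter_bits = [inter_budget / num_inter_frames] * num_inter_frames
    return intra_bits, inter_bits

intra, inters = allocate_gop_bits(gop_target_bits=400_000, num_inter_frames=7)
print(intra, round(inters[0], 1))  # 140000.0 37142.9
```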

An Algorithm for Minimizing Exceptional Elements Considering Machine Duplication Cost and Space Constraint in Cellular Manufacturing System (기계중복비용과 공간제약을 고려한 예외적 요소의 최소화 알고리듬)

  • Chang, Ik;Chung, Byung-hee
    • IE interfaces, v.12 no.1, pp.10-18, 1999
  • Job shop manufacturing environments use the concept of cellular manufacturing systems (CMS), which has several advantages in reducing production lead times, setup times, work-in-process, etc. By utilizing the similarities between cell-machine, part-machine, and the shape/size of parts, CMS can group machines and parts, improving the efficiency of the system. However, when grouping machines and parts into machine cells, exceptional elements (EEs) inevitably occur, which cannot be operated in the same machine cell. Minimizing these EEs in CMS is critical to improving production efficiency. Constraints on machine duplication cost, machining process technology, machining capability, and factory space are the main obstacles to maintaining an ideal CMS environment. This paper presents an algorithm that minimizes EEs under the constraints of machine duplication cost and factory space limitation. An exceptional operation similarity (EOS), developed from the cell-machine incidence matrix and the part-machine incidence matrix, indicates which machine cells can and cannot operate the parts. A mathematical model that minimizes machine duplication is developed from the EOS, followed by a heuristic algorithm that reflects the dynamic situation arising from the EE minimization process and the mathematical model. A numerical example is provided to illustrate the algorithm.
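
The paper's exact EOS formulation is not given in the abstract; below is a toy sketch of a part-to-cell similarity computed from incidence matrices, with a coverage-style definition assumed.

```python
import numpy as np

# Hypothetical sketch: a part-to-cell "exceptional operation similarity"
# computed from a part-machine incidence matrix and a cell-machine assignment.
# The coverage-style definition is an assumption, not the paper's EOS.

part_machine = np.array([[1, 0, 1, 0],   # part 0 needs machines 0 and 2
                         [0, 1, 0, 1],   # part 1 needs machines 1 and 3
                         [1, 1, 0, 0]])  # part 2 needs machines 0 and 1
cell_machine = np.array([[1, 0, 1, 0],   # cell 0 contains machines 0 and 2
                         [0, 1, 0, 1]])  # cell 1 contains machines 1 and 3

def eos(part_row, cell_row):
    # Fraction of the part's required machines that are available in the cell.
    required = part_row.sum()
    matched = np.logical_and(part_row, cell_row).sum()
    return matched / required

for p in range(part_machine.shape[0]):
    for c in range(cell_machine.shape[0]):
        print(f"part {p}, cell {c}: EOS = {eos(part_machine[p], cell_machine[c]):.2f}")
# Part 2 scores 0.5 with both cells: an exceptional-element candidate that
# would require machine duplication or intercell movement.
```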

A Study on Ludo-narrative Harmony in the Video Game "Ghost of Tsushima" (비디오 게임 "고스트 오브 쓰시마"의 게임플레이-스토리의 조화성 고찰)

  • Chun, Bumsue
    • Journal of Korea Game Society, v.21 no.5, pp.87-104, 2021
  • Ludo-narrative dissonance is a prevalent problem among open-world video games. However, Ghost of Tsushima (2020) alleviates this issue by designing its characters and narrative structure under the influence of Akira Kurosawa's samurai films. The game's protagonist embodies "Bushido," the samurai code, and the narrative structure closely resembles Joseph Campbell's "Hero's Journey," which heavily influenced Kurosawa's films. The developers also designed the gameplay mechanics, such as the level-up system, map design, and side quests, around these narrative traits, ultimately making the goals of the narrative and the gameplay mechanics cohesive.

Composition and Abundance of Wood-Boring Beetles Inhabiting Pine Trees

  • Park, Yonghwan;Jang, Taewoong;Won, Daesung;Kim, Jongkuk
    • Journal of Forest and Environmental Science, v.35 no.3, pp.189-196, 2019
  • Plants are consumed by a myriad of organisms that compete for resources. Direct interactions among multiple plant-feeding organisms on a single host can range from positive to negative for each species. Wood-boring beetles face a number of biotic and abiotic constraints that interfere with their prospects on the tree. Biotic factors, including arthropod pests and diseases, and abiotic factors, such as drought and water-logging, are the major constraints affecting these species. The present study aimed to provide basic data for analyzing forest health by identifying the wood-boring beetles of the central part of Korea; our second goal was to analyze the species composition and diversity of the regional communities. A total of 10,461 individual wood-boring beetles belonging to 8 families and 50 species, attracted to trap trees in the pine forests, were recorded at the study sites during the study period. Analysis of the collected species showed that the community structure was similar across all study sites. The seasonal occurrence of the dominant wood-boring beetles (5 species) at each study site peaked in May and then gradually declined for all species except Siphalinus gigas, whose numbers were largest in June. The similarity index of species composition between study sites was relatively high, ranging from 0.75 to 0.90.
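
The reported species-composition similarity of 0.75-0.90 suggests a pairwise index such as Sørensen's; a minimal sketch follows. Whether the paper used Sørensen, Jaccard, or another index is an assumption, and the species sets are placeholders.

```python
# Hypothetical sketch: Sørensen similarity between the species lists of two
# sites. The choice of index and the species names are assumptions.

def sorensen(site_a, site_b):
    a, b = set(site_a), set(site_b)
    shared = len(a & b)
    return 2 * shared / (len(a) + len(b))

site1 = {"Monochamus alternatus", "Siphalinus gigas", "Arhopalus rusticus"}
site2 = {"Monochamus alternatus", "Siphalinus gigas", "Spondylis buprestoides"}
print(f"{sorensen(site1, site2):.2f}")  # 0.67
```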

A Comparative Study on the Storm Hydrograph Separation Methods for Baseflow through Field Applications (수문곡선의 기저유출분리 방법에 대한 고찰)

  • Cho, SungHyen;Moon, Sang-Ho
    • Journal of Soil and Groundwater Environment, v.27 no.1, pp.50-59, 2022
  • There are several methods for separating the baseflow from a storm hydrograph, and graphical methods (GMs) have mostly been used. GMs separate the baseflow from the direct flow simply by connecting the rising point of the hydrograph with its inflection point, or with points defined by some fixed duration. The environmental tracer method (ETM) is another tool, researched and developed under several conditions, for estimating groundwater recharge. The goal of this study is to separate the baseflow component from a storm hydrograph by applying various GMs and the ETM, and to compare their results. The baseflow component estimated by the ETM differed from the results of the GMs in the shape of its fluctuation and in flow rates. Another important feature is that the baseflow obtained with the ETM has a form similar to that of the storm hydrograph itself. This similarity is presumed to be due to the selection of tracers that respond quickly to rainfall.
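
As an illustration of the simplest GM, here is a sketch of straight-line baseflow separation on an invented hydrograph; the paper's specific GM variants and the ETM are not reproduced.

```python
import numpy as np

# Hypothetical sketch: the straight-line graphical method connects the rising
# point to the point where direct runoff ends and treats everything below
# that line as baseflow. The hydrograph values are made up.

flow = np.array([5.0, 5.2, 20.0, 45.0, 30.0, 18.0, 10.0, 6.0, 5.5])  # m^3/s
rise, end = 1, 7          # indices of the rising point and end of direct runoff
baseflow = flow.copy()
baseflow[rise:end + 1] = np.linspace(flow[rise], flow[end], end - rise + 1)
direct = flow - baseflow  # direct-runoff component above the separation line
print(baseflow.round(2))
print(direct.round(2))
```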

Change Acceptable In-Depth Searching in LOD Cloud for Efficient Knowledge Expansion (효과적인 지식확장을 위한 LOD 클라우드에서의 변화수용적 심층검색)

  • Kim, Kwangmin;Sohn, Yonglak
    • Journal of Intelligence and Information Systems, v.24 no.2, pp.171-193, 2018
  • LOD (Linked Open Data) cloud is a practical implementation of the semantic web. We suggest a new method that provides identity links conveniently in the LOD cloud and allows changes in an LOD to be reflected in search results without omissions. An LOD publishes detailed descriptions of entities in RDF triple form. An RDF triple is composed of a subject, a predicate, and an object, and presents a detailed description of an entity. Links in the LOD cloud, named identity links, are realized by asserting that entities of different RDF triples are identical. Currently, an identity link is provided by explicitly creating a link triple that associates its subject and object with the source and target entities; link triples are then appended to the LOD. With identity links, knowledge acquired from one LOD can be expanded with knowledge from other LODs. The goal of the LOD cloud is to provide users with this opportunity for knowledge expansion. Appending link triples to an LOD, however, requires discovering identity links between entities one by one, a serious difficulty given the enormous scale of the LOD cloud. Newly added entities cannot be reflected in search results until identity links heading for them are serialized and published to the LOD cloud. Instead of creating enormous numbers of identity links, we propose that each LOD prepare its own link policy. The link policy specifies a set of target LODs to link to and the constraints necessary to discover identity links to entities in those target LODs. During a search, it becomes possible to access newly added entities and reflect them in search results without omissions by referencing the link policies. A link policy specifies a set of predicate pairs for discovering identity between associated entities in the source and target LODs. For the link policy specification, we suggest a set of vocabularies that conform to RDFS and OWL. Identity between entities is evaluated according to the similarity of the source and target entities' objects associated with the predicate pair in the link policy. We implemented a system, "Change Acceptable In-Depth Searching System (CAIDS)". With CAIDS, a user's search request starts from the depth_0 LOD, i.e., surface searching. Referencing the link policies of the LODs, CAIDS proceeds with in-depth searching into the LODs of the next depths. To supplement the identity links derived from the link policies, CAIDS uses explicit link triples as well. Following the identity links, CAIDS's in-depth searching progresses. The content of an entity obtained from the depth_0 LOD expands with the contents of entities in other LODs that have been discovered to be identical to the depth_0 entity. Expanding the content of a depth_0 entity without the user being aware of those other LODs is the implementation of knowledge expansion, the goal of the LOD cloud. The more identity links in the LOD cloud, the wider the content expansion. We have suggested a new way to create identity links abundantly and supply them to the LOD cloud. Experiments on CAIDS were performed against the DBpedia LODs of Korea, France, Italy, Spain, and Portugal. They show that CAIDS provides appropriate expansion and inclusion ratios as long as the degree of similarity between source and target objects is 0.8~0.9. The expansion ratio, for each depth, is the ratio of the entities discovered at that depth to the entities of the depth_0 LOD; the inclusion ratio, for each depth, is the ratio of the entities discovered only with explicit links to the entities discovered only with link policies.
In cases of similarity degrees under 0.8, expansion becomes excessive and contents become distorted; a similarity degree of 0.8~0.9 also yields an appropriate amount of RDF triples. The experiments also evaluated the confidence degree of the contents expanded by in-depth searching. The confidence degree of content is directly coupled with the identity ratio of an entity, which means its degree of identity to the entity of the depth_0 LOD. The identity ratio of an entity is obtained by multiplying the source LOD's confidence by the source entity's identity ratio. By tracing the identity links in advance, an LOD's confidence is evaluated according to the number of identity links incoming to its entities. While evaluating the identity ratio, the concept of identity agreement, meaning that multiple identity links head to a common entity, is considered. With the identity agreement concept, the experimental results show that the identity ratio decreases as the depth deepens, but rebounds as the depth deepens further. For each entity, as the number of identity links increases, the identity ratio rebounds earlier and finally reaches 1. We found that more than 8 identity links per entity would lead users to trust the expanded contents. The proposed link policy based in-depth searching method is expected to contribute abundant identity links to the LOD cloud.
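
The abstract defines an entity's identity ratio recursively as the source LOD's confidence multiplied by the source entity's identity ratio; below is a minimal sketch of that propagation along a chain of identity links. The LOD confidences and entity names are invented.

```python
# Hypothetical sketch of identity-ratio propagation along identity links:
# ratio(target) = confidence(source LOD) * ratio(source entity), with the
# depth_0 entity fixed at 1.0. All confidence values here are invented.

lod_confidence = {"dbpedia-ko": 1.0, "dbpedia-fr": 0.95, "dbpedia-it": 0.9}

def propagate(path):
    """path: list of (lod, entity) hops starting at the depth_0 entity."""
    ratio = 1.0
    for (src_lod, _), (_, tgt_entity) in zip(path, path[1:]):
        ratio *= lod_confidence[src_lod]  # each hop discounts by source LOD confidence
        print(f"{tgt_entity}: identity ratio = {ratio:.3f}")
    return ratio

propagate([("dbpedia-ko", "Seoul@ko"),
           ("dbpedia-fr", "Séoul@fr"),
           ("dbpedia-it", "Seul@it")])
```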

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems, v.25 no.4, pp.105-122, 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. For dimensionality reduction, we should consider the density of the data, which has a significant influence on the performance of sentence classification. Data of higher dimensions require many computations, which can eventually cause high computational cost and overfitting of the model. Thus, a dimension reduction process is necessary to improve the performance of the model. Diverse methods have been proposed, from merely lessening data noise such as misspellings or informal text to incorporating semantic and syntactic information. In addition, the expression and selection of text features affect the performance of the classifier in sentence classification, one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in the observation space. Existing methods utilize various algorithms for dimensionality reduction, such as feature extraction and feature selection. In addition to these algorithms, word embeddings, which learn low-dimensional vector space representations of words that capture semantic and syntactic information, are also utilized. To improve performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations: once a feature selection algorithm identifies unimportant words, we assume that words similar to them also have no impact on sentence classification. This study proposes two ways to achieve more accurate classification: conducting selective word elimination under specific rules, and constructing word embeddings based on Word2Vec. To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words with comparatively low information gain values from the raw text and form the word embedding. Second, we additionally select words that are similar to the words with low information gain values and build the word embedding. Finally, the filtered text and word embeddings are fed to the deep learning models: a Convolutional Neural Network and an Attention-Based Bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets, and classifies each dataset using the deep learning models. Reviews that received more than five helpful votes, with a ratio of helpful votes over 70%, were classified as helpful reviews; Yelp shows only the number of helpful votes, so we extracted 100,000 reviews with more than five helpful votes from 750,000 reviews by random sampling. Minimal preprocessing, such as removing numbers and special characters, was applied to each dataset. To evaluate the proposed methods, we compared their performance with that of Word2Vec and GloVe embeddings that use all the words. We showed that one of the proposed methods outperforms the embeddings with all the words: by removing unimportant words, we can get better performance. However, removing too many words lowered the performance.
For future research, diverse preprocessing methods and an in-depth analysis of word co-occurrence for measuring similarity values among words should be considered. Also, we applied the proposed method only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo can be combined with the proposed methods, and the possible combinations of word embedding and elimination methods can be explored.
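
A rough sketch of the two-step elimination described above, using mutual information as the information-gain measure and cosine similarity over stand-in word vectors; the corpus, thresholds, and random vectors are placeholders for a trained Word2Vec model.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

# Hypothetical sketch: (1) drop words with low information gain, (2) also drop
# words whose vector is close (cosine similarity) to an already-dropped word.
# The toy corpus, labels, thresholds, and random vectors are placeholders.

docs = ["great book loved it", "terrible book waste", "loved the story", "waste of time"]
labels = [1, 0, 1, 0]

vec = CountVectorizer().fit(docs)
counts = vec.transform(docs)
ig = mutual_info_classif(counts, labels, discrete_features=True, random_state=0)
vocab = np.array(vec.get_feature_names_out())
dropped = set(vocab[ig < 0.1])  # step 1: low information gain

rng = np.random.default_rng(0)
emb = {w: rng.normal(size=16) for w in vocab}  # stand-in for Word2Vec vectors

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

for w in vocab:  # step 2: also drop words similar to a dropped word
    if w not in dropped and any(cos(emb[w], emb[d]) > 0.8 for d in dropped):
        dropped.add(w)
print(sorted(dropped))
```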

Fast Heuristic Algorithm for Similarity of Trajectories Using Discrete Fréchet Distance Measure (이산 프레셰 거리 척도를 이용한 궤적 유사도 고속계산 휴리스틱 알고리즘)

  • Park, Jinkwan;Kim, Taeyong;Park, Bokuk;Cho, Hwan-Gue
    • KIISE Transactions on Computing Practices, v.22 no.4, pp.189-194, 2016
  • A trajectory is the motion path of a moving object. Advances in IT have made it possible to collect an immense amount of trajectory data of various types from moving objects using location detection devices such as GPS. The trajectories of moving objects are widely used in many fields of research, including geographic information systems (GIS). In the GIS field, several attempts have been made to automatically generate digital road maps from vehicle trajectory data. To achieve this goal, a method to cluster the trajectories on the same road is needed. Usually, the Fréchet distance measure is used to calculate the distance between a pair of trajectories. However, the Fréchet distance requires prolonged calculation time for a large number of trajectories. In this paper, we present a fast heuristic algorithm that decides whether two trajectories are within a close distance or not using the discrete Fréchet distance measure. The algorithm trades the accuracy of the resulting distance for decreased calculation time. Experiments showed that the algorithm could distinguish trajectories within 10 meters from distant trajectories with 95% accuracy and, at worst, a 65% reduction in calculation, compared with the discrete Fréchet distance.
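
For reference, here is a compact implementation of the standard discrete Fréchet distance (the Eiter-Mannila coupling recursion) that the paper's heuristic approximates; the heuristic's early-exit logic itself is not reproduced.

```python
import math
from functools import lru_cache

# Standard discrete Fréchet distance between two polylines P and Q,
# computed with the classic coupling recursion (Eiter & Mannila, 1994).

def discrete_frechet(P, Q):
    d = lambda i, j: math.dist(P[i], Q[j])  # Euclidean point distance

    @lru_cache(maxsize=None)
    def c(i, j):
        if i == 0 and j == 0:
            return d(0, 0)
        if i == 0:
            return max(c(0, j - 1), d(0, j))
        if j == 0:
            return max(c(i - 1, 0), d(i, 0))
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d(i, j))

    return c(len(P) - 1, len(Q) - 1)

P = [(0, 0), (1, 0), (2, 0)]
Q = [(0, 1), (1, 1), (2, 1)]
print(discrete_frechet(P, Q))  # 1.0: the two parallel paths stay 1 apart
```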

A study on image region analysis and image enhancement using detail descriptor (디테일 디스크립터를 이용한 이미지 영역 분석과 개선에 관한 연구)

  • Lim, Jae Sung;Jeong, Young-Tak;Lee, Ji-Hyeok
    • Journal of the Korea Academia-Industrial cooperation Society, v.18 no.6, pp.728-735, 2017
  • With the proliferation of digital devices, considerable additive white Gaussian noise is generated while the devices acquire digital images. The best-known denoising methods focus on eliminating the noise, so detail components that carry image information are removed in proportion as the image noise is eliminated. The proposed algorithm provides a method that preserves the details while effectively removing the noise. Its goal is to separate meaningful detail information in a noisy image environment using edge strength and edge connectivity. Consequently, even as the noise level increases, it yields better denoising results than the benchmark methods because it extracts connected detail-component information. In addition, the proposed method effectively eliminated the noise at various noise levels; compared to the benchmark algorithms, it shows higher structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) values. As the high SSIM values indicate, the denoising results were confirmed to agree well with the human visual system (HVS).
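
A small sketch of the two reported quality metrics: PSNR is computed directly below, while a full SSIM needs a windowed implementation (e.g., skimage.metrics.structural_similarity), so only PSNR is shown, on toy data.

```python
import numpy as np

# PSNR between a reference image and a degraded image. The images here are
# toy arrays; the paper reports PSNR and SSIM on real denoising results.

def psnr(ref, test, peak=255.0):
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
noisy = np.clip(ref + rng.normal(0, 10, ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(ref, noisy):.2f} dB")  # higher is better; ~28 dB here
```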