• Title/Summary/Keyword: 검색 가중치 (search weighting)

Search Results: 401

Method of Measuring Color Difference Between Images using Corresponding Points and Histograms (대응점 및 히스토그램을 이용한 영상 간의 컬러 차이 측정 기법)

  • Hwang, Young-Bae;Kim, Je-Woo;Choi, Byeong-Ho
    • Journal of Broadcast Engineering / v.17 no.2 / pp.305-315 / 2012
  • Color correction between two or multiple images is crucial for the development of subsequent algorithms and for stereoscopic 3D camera systems. Although various color correction methods have been proposed recently, few methods exist for measuring their performance. In addition, when two images exhibit view variation due to different camera positions, previous performance-measurement methods may not be appropriate. In this paper, we propose a method of measuring the color difference between corresponding images for color correction. The method finds matching points that should have the same color in the two scenes, using correspondence searches to account for the view variation. We then calculate statistics over the neighboring regions of these matching points to measure the color difference. Unlike conventional approaches that align the images with a single homography, this approach can tolerate misalignment of the corresponding points. To handle cases where the matching points do not cover the whole image, we also calculate color-difference statistics over the entire image region. Finally, the color difference is computed as a weighted sum of the correspondence-based and whole-region-based measures, where the weight is determined by the ratio of the image area covered by the correspondence-based comparison.
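
A minimal sketch of the weighted combination described in this abstract, assuming the two images are given as NumPy arrays together with a list of matched point coordinates; the window size, the use of mean colors as the local statistic, and the function name are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def color_difference(img_a, img_b, matches, win=7):
    """Combine correspondence-based and whole-image color-difference
    statistics by a coverage-ratio weight (illustrative sketch).
    img_a, img_b: HxWx3 arrays; matches: list of ((xa, ya), (xb, yb))."""
    h, w, _ = img_a.shape
    half = win // 2
    local_diffs = []
    covered = np.zeros((h, w), dtype=bool)

    for (xa, ya), (xb, yb) in matches:
        # Compare mean colors of small neighborhoods so that slight
        # misalignment of the corresponding points is tolerated.
        pa = img_a[max(ya - half, 0):ya + half + 1, max(xa - half, 0):xa + half + 1]
        pb = img_b[max(yb - half, 0):yb + half + 1, max(xb - half, 0):xb + half + 1]
        local_diffs.append(np.abs(pa.reshape(-1, 3).mean(0) - pb.reshape(-1, 3).mean(0)).mean())
        covered[max(ya - half, 0):ya + half + 1, max(xa - half, 0):xa + half + 1] = True

    corr_diff = float(np.mean(local_diffs)) if local_diffs else 0.0
    global_diff = float(np.abs(img_a.mean(axis=(0, 1)) - img_b.mean(axis=(0, 1))).mean())

    alpha = covered.mean()  # ratio of the image covered by correspondences
    return alpha * corr_diff + (1.0 - alpha) * global_diff
```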

Disproportional Insertion Policy for Improving Query Performance in RFID Tag Data Indices (RFID 태그 데이타 색인의 질의 성능 향상을 위한 불균형 삽입 정책)

  • Kim, Gi-Hong;Hong, Bong-Hee;Ahn, Sung-Woo
    • Journal of KIISE: Databases / v.35 no.5 / pp.432-446 / 2008
  • Queries for tracing tag locations are among the most challenging requirements in RFID-based applications, including automated manufacturing, inventory tracking, and supply chain management. For efficient query processing, a previous study proposed an index scheme for storing tag objects, based on the moving-object index, in a three-dimensional domain whose axes are the tag identifier, the reader identifier, and time. Unlike a moving-object index, however, the coordinate ranges of these domains differ widely, so the distribution of query regions is skewed toward the reader-identifier domain. Previous tag indices do not consider this skewed distribution of query regions, which produces many overlaps between index nodes and query regions and forces traversal of many index nodes. To solve this problem, we propose a new disproportional insertion and split policy for an RFID tag index based on the R*-tree. For efficient insertion of tag data, our method derives a weighted margin for each node from per-axis weights and the node margins. Based on the weighted margin, we choose the subtree and the split method that insert tag data at minimum cost. The proposed insertion method also reduces the cost of region queries by reducing the overlap between query regions and MBRs. Our experiments show that an index built with the proposed insertion and split method considerably improves query performance over indices based on previous methods.
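
A small sketch of the weighted-margin idea under stated assumptions: the three axes are (tag id, reader id, time), the axis weights are hypothetical, and an MBR is represented as a tuple of (low, high) intervals; the paper's actual cost model and split algorithm are not reproduced here.

```python
from typing import Sequence, Tuple

# Hypothetical axis weights for (tag id, reader id, time); the reader-id axis
# gets a larger weight because query regions are skewed toward that domain.
AXIS_WEIGHTS = (1.0, 4.0, 1.0)

MBR = Tuple[Tuple[float, float], Tuple[float, float], Tuple[float, float]]

def weighted_margin(mbr: MBR, weights: Sequence[float] = AXIS_WEIGHTS) -> float:
    """Sum of the MBR's edge lengths, each scaled by its axis weight."""
    return sum(w * (hi - lo) for w, (lo, hi) in zip(weights, mbr))

def choose_subtree(children: Sequence[MBR], entry: MBR) -> int:
    """Pick the child whose weighted margin grows least after enlargement."""
    def enlarged(a: MBR, b: MBR) -> MBR:
        return tuple((min(al, bl), max(ah, bh)) for (al, ah), (bl, bh) in zip(a, b))
    costs = [weighted_margin(enlarged(c, entry)) - weighted_margin(c) for c in children]
    return costs.index(min(costs))
```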

Exploring Issues Related to the Metaverse from the Educational Perspective Using Text Mining Techniques - Focusing on News Big Data (텍스트마이닝 기법을 활용한 교육관점에서의 메타버스 관련 이슈 탐색 - 뉴스 빅데이터를 중심으로)

  • Park, Ju-Yeon;Jeong, Do-Heon
    • Journal of Industrial Convergence / v.20 no.6 / pp.27-35 / 2022
  • The purpose of this study is to analyze metaverse-related issues in news big data from an educational perspective, explore their characteristics, and provide implications for the educational applicability of the metaverse and for future education. To this end, 41,366 metaverse-related articles retrieved from portal sites were collected; the weight of every extracted keyword was calculated and ranked with TF-IDF, a representative term-weighting model, and a word-cloud visualization analysis was performed. In addition, major topics were analyzed using topic modeling (LDA), a probability-based text mining technique. As a result, topics such as the platform industry, future talent, and technological expansion were derived as core issues of the metaverse from an educational perspective. Furthermore, a secondary analysis under the three key themes of technology, jobs, and education found that the metaverse raises issues of education platform innovation, future job innovation, and future competency innovation in future education. This study is meaningful in that it analyzes a vast amount of news big data in stages to draw out issues from an educational perspective and to provide implications for future education.
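
A brief sketch of the two analysis steps named above (TF-IDF keyword ranking and LDA topic modeling) using scikit-learn; the placeholder documents, the number of topics, and the preprocessing are assumptions, since the actual Korean news corpus and morphological analysis are not available here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["metaverse education platform ...", "future talent competency ..."]  # placeholder corpus

# 1) TF-IDF keyword ranking (used for the word-cloud visualization).
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)
scores = X.sum(axis=0).A1                      # aggregate weight per term
top_terms = sorted(zip(tfidf.get_feature_names_out(), scores),
                   key=lambda t: t[1], reverse=True)[:20]

# 2) LDA topic modeling on raw term counts.
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(counts)         # document-topic distribution
```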

Analysis of Research Topics among Library, Archives and Museums using Topic Modeling (토픽 모델링을 활용한 도서관, 기록관, 박물관간의 연구 주제 분석)

  • Kim, Heesop;Kang, Bora
    • Journal of Korean Library and Information Science Society / v.50 no.4 / pp.339-358 / 2019
  • The purpose of this study is to identify research topics that can support the establishment of a cooperative platform between libraries, archives, and museums, institutions that share the broad task of providing knowledge and information. To this end, 637 bibliographic records on the three types of institutions were collected from the Web version of the Scopus database. From these records, 5,218 words were extracted with NetMiner V.4 and analyzed by topic modeling. The results are as follows. First, analyzing word frequencies according to tf-idf weights, 'Preservation' was the most prominent topic. Second, topic modeling with the LDA (Latent Dirichlet Allocation) algorithm yielded 13 topic areas. Third, when the 13 topic areas were expressed as a network, repository construction was the central topic, and topics such as cooperation among institutions, the conservation environment for collections, system and policy discovery, the life cycle of collections, exhibition of information resources, and information retrieval were closely related to it. Fourth, examining the 13 topic areas by year, research up to 1998 was limited to specific subjects such as system and policy discovery, information retrieval, and the life cycle of collections, while studies on the broader set of topics have been carried out since then.
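
A short sketch of expressing LDA topic areas as a network, in the spirit of the study's third result; the inputs `components` (topic-term distributions, e.g. from an LDA fit) and `feature_names` are assumed, and linking topics by shared top terms is a simplification chosen for illustration, not the paper's method.

```python
import numpy as np
import networkx as nx

def topic_network(components, feature_names, top_n=10):
    """Link two topics when their top-N terms overlap; edge weight = overlap size."""
    tops = [np.argsort(row)[::-1][:top_n] for row in components]
    g = nx.Graph()
    for i, idx in enumerate(tops):
        g.add_node(i, label=feature_names[idx[0]])   # most probable term as node label
    for i in range(len(tops)):
        for j in range(i + 1, len(tops)):
            shared = len(set(tops[i]) & set(tops[j]))
            if shared:
                g.add_edge(i, j, weight=shared)
    return g
```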

A Korean Document Sentiment Classification System based on Semantic Properties of Sentiment Words (감정 단어의 의미적 특성을 반영한 한국어 문서 감정분류 시스템)

  • Hwang, Jae-Won;Ko, Young-Joong
    • Journal of KIISE: Software and Applications / v.37 no.4 / pp.317-322 / 2010
  • This paper proposes a method to improve the performance of a Korean document sentiment classification system using the semantic properties of sentiment words. A sentiment word is a word that carries sentiment, and sentiment features are defined by a set of sentiment words, which are an important lexical resource for sentiment classification. A sentiment feature can represent different sentiment intensities in the general domain and in a specific domain. In the general domain, the sentiment intensity can be estimated using snippets from a search engine, while in a specific domain, training data can be used for this estimation. The estimated sentiment intensity of a sentiment feature is called its semantic orientation and is used to estimate the sentiment intensity of the sentences in a document. After estimating the sentiment intensity of the sentences, we apply it to the weights of the sentiment features. We evaluate our system in three cases, using general, domain-specific, and combined general/domain-specific semantic orientation, with a support vector machine classifier. Our experimental results show improved performance in all cases; in particular, with general/domain-specific semantic orientation, the proposed method performs 3.1% better than a baseline system indexed only by content words.
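
A hedged sketch of applying semantic-orientation weights to sentiment features before training an SVM, roughly following the description above; `orientation` is a hypothetical dictionary mapping sentiment words to intensity scores (the paper estimates these from search-engine snippets or domain training data), and the plain term-count features are a simplification.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

def train_sentiment_classifier(docs, labels, orientation):
    vec = CountVectorizer()
    X = vec.fit_transform(docs).astype(float)
    vocab = vec.get_feature_names_out()
    # Scale each sentiment-word column by its semantic-orientation weight;
    # words without an estimated orientation keep weight 1.0.
    w = np.array([orientation.get(term, 1.0) for term in vocab])
    X = X.multiply(w).tocsr()
    clf = LinearSVC().fit(X, labels)
    return vec, w, clf
```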

Analysis of a Compound-Target Network of Oryeong-san (오령산 구성성분-타겟 네트워크 분석)

  • Kim, Sang-Kyun
    • Journal of the Korea Knowledge Information Technology Society / v.13 no.5 / pp.607-614 / 2018
  • Oryeong-san is a prescription widely used for diseases in which water is stagnant in the body, because it circulates the body's water and releases it through the urine. To investigate the mechanisms of oryeong-san, in this paper we construct and analyze the compound-target network of the medicinal materials constituting oryeong-san based on a systems pharmacology approach. First, the targets related to the 475 chemical compounds of oryeong-san were searched in the STITCH database, and the results for the interactions between compounds and targets were downloaded as XML files. The compound-target network of oryeong-san was visualized and explored with Gephi 0.8.2, an open-source tool for graph and network analysis. In the network, nodes are compounds and targets, and edges are interactions between the nodes; each edge is weighted according to the reliability of the interaction. To analyze the compound-target network, it was clustered with the MCL algorithm, which can cluster weighted networks. A total of 130 clusters were created, and the largest cluster contained 32 nodes. In the clustered network, the active compounds of the medicinal materials were found to be associated with targets that regulate blood pressure in the kidney. In the future, we will clarify the mechanisms of oryeong-san by linking information from disease databases with the network built in this research.
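
A simplified sketch of building a weighted compound-target graph and clustering it with a basic Markov Clustering (MCL) iteration; the node names and confidence weights are hypothetical placeholders for the STITCH interaction data, and this plain NumPy MCL loop stands in for the tooling actually used in the study.

```python
import numpy as np
import networkx as nx

# Weighted compound-target graph; edge weight = interaction confidence (placeholder values).
g = nx.Graph()
g.add_edge("compound_A", "target_X", weight=0.9)
g.add_edge("compound_B", "target_X", weight=0.4)
g.add_edge("compound_B", "target_Y", weight=0.7)

def mcl(adj, expansion=2, inflation=2.0, iters=50):
    """Basic MCL: alternate expansion (matrix power) and inflation (elementwise power)."""
    m = adj + np.eye(len(adj))              # add self-loops
    m = m / m.sum(axis=0)                   # column-normalize to a flow matrix
    for _ in range(iters):
        m = np.linalg.matrix_power(m, expansion)
        m = m ** inflation
        m = m / m.sum(axis=0)
    return m

nodes = list(g.nodes)
flow = mcl(nx.to_numpy_array(g, nodelist=nodes, weight="weight"))
# Rows that retain flow act as attractors; their non-zero columns form clusters.
clusters = {nodes[i]: [nodes[j] for j in np.nonzero(flow[i] > 1e-5)[0]]
            for i in range(len(nodes)) if flow[i].max() > 1e-5}
```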

A Single-End-Point DTW Algorithm for Keyword Spotting (핵심어 검출을 위한 단일 끝점 DTW알고리즘)

  • 최용선;오상훈;이수영
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.3 / pp.209-219 / 2004
  • In order to implement real-time hardware for keyword spotting, we propose a Single-End-Point DTW (SEP-DTW) algorithm that is simple and computationally inexpensive. The SEP-DTW algorithm needs only a single end point, which enables efficient application, and it requires a small amount of computation because the global search area is divided into successive local search areas. We also adopt new local constraints and a new distance measure to improve the performance of the SEP-DTW algorithm. In addition, we normalize the feature vectors so that they have the same variance in each frequency bin and each frame has the same energy level. To construct several reference patterns for each keyword, we cluster all training patterns and take the mean vector of each cluster as a reference pattern. To detect a keyword in an input speech stream, we measure the distances between the reference patterns and the input pattern and decide whether they are smaller than a predefined threshold. Isolated speech recognition and keyword spotting experiments verify that the proposed algorithm outperforms other methods.
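
For reference, a minimal classic DTW distance between two feature sequences; the paper's SEP-DTW additionally fixes a single end point, restricts the search to successive local areas, and uses its own local constraints and distance measure, none of which are reproduced in this sketch.

```python
import numpy as np

def dtw_distance(ref, test):
    """ref, test: arrays of shape (n_frames, n_features); returns a length-normalized distance."""
    n, m = len(ref), len(test)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(ref[i - 1] - test[j - 1])   # local frame distance
            cost[i, j] = d + min(cost[i - 1, j],            # insertion
                                 cost[i, j - 1],            # deletion
                                 cost[i - 1, j - 1])        # match
    return cost[n, m] / (n + m)

# Keyword decision: accept if the distance to any reference pattern falls below a threshold.
```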

A New Semantic Distance Measurement Method using TF-IDF in Linked Open Data (링크드 오픈 데이터에서 TF-IDF를 이용한 새로운 시맨틱 거리 측정 기법)

  • Cho, Jung-Gil
    • Journal of the Korea Convergence Society / v.11 no.10 / pp.89-96 / 2020
  • Linked Data allows structured data to be published in a standard way so that datasets from various domains can be interlinked. With the rapid evolution of Linked Open Data (LOD), researchers are exploiting it to solve particular problems such as semantic similarity assessment. In this paper, we propose a method, built on the basic concept of Linked Data Semantic Distance (LDSD), for calculating the semantic distance between Linked Data resources that can be used in LOD-based recommender systems. The proposed semantic distance model is based on a similarity measurement that combines the LOD-based semantic distance with a new link weight using TF-IDF, which is well known in the field of information retrieval. To verify the effectiveness of the approach, performance was evaluated in the context of an LOD-based recommender system using combined DBpedia and MovieLens data. Experimental results show that the proposed method achieves higher accuracy than other similar methods. In addition, it improves the accuracy of the recommender system by expanding the range of resources for which a semantic distance can be calculated.
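
A simplified illustration of combining shared links with a TF-IDF-style weight, in the spirit of the LDSD extension described above; the triples, the resources, and the scoring rule are hypothetical and do not reproduce the paper's exact formula.

```python
import math
from collections import defaultdict

triples = [  # hypothetical (subject, predicate, object) LOD triples
    ("FilmA", "director", "PersonX"),
    ("FilmB", "director", "PersonX"),
    ("FilmA", "genre", "Drama"),
    ("FilmB", "genre", "Drama"),
    ("FilmC", "genre", "Drama"),
]

# Inverse "document frequency" of each (predicate, object) link:
# links shared by many resources contribute less to the similarity.
link_subjects = defaultdict(set)
for s, p, o in triples:
    link_subjects[(p, o)].add(s)
n_resources = len({s for s, _, _ in triples})

def idf(link):
    return math.log(n_resources / len(link_subjects[link]))

def weighted_similarity(r1, r2):
    links1 = {(p, o) for s, p, o in triples if s == r1}
    links2 = {(p, o) for s, p, o in triples if s == r2}
    return sum(idf(link) for link in links1 & links2)

print(weighted_similarity("FilmA", "FilmB"))
```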

An Optimal Path Search Method based on Traffic Information for Telematics Terminals (텔레매틱스 단말기를 위한 교통 정보를 활용한 최적 경로 탐색 기법)

  • Kim, Jin-Deog
    • Journal of the Korea Institute of Information and Communication Engineering / v.10 no.12 / pp.2221-2229 / 2006
  • An optimal path search algorithm, a killer application for location-aware mobile devices, should consider the traffic flow on the roads as well as the distance between the departure point and the destination. Existing path search algorithms, however, are not able to cope efficiently with changes in traffic flow. In this paper, we propose a new optimal path search algorithm that takes the current traffic flow into consideration in order to reduce the cost of reaching the destination. It decomposes the road network into a fixed grid to obtain variable heuristics. We also carry out experiments against the Dijkstra and A* algorithms in terms of execution time, number of node accesses, and path accuracy. The experimental results show that the proposed algorithm outperforms the others. The algorithm is expected to be highly useful in advanced telematics systems.
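
A hedged sketch of A* search over a road graph whose edge costs reflect current traffic (travel time = length / current speed); the graph format, the free-flow-speed heuristic, and the coordinates are assumptions, and the paper's grid-based variable heuristic is simplified to a straight-line estimate.

```python
import heapq, math

def a_star(graph, coords, start, goal, free_flow_speed=100.0):
    """graph: {node: [(neighbor, length_km, current_speed_kmh), ...]}; coords: {node: (x_km, y_km)}."""
    def h(n):  # admissible as long as no road is faster than free_flow_speed
        return math.dist(coords[n], coords[goal]) / free_flow_speed

    open_set = [(h(start), 0.0, start)]
    best = {start: 0.0}
    while open_set:
        _, g, node = heapq.heappop(open_set)
        if node == goal:
            return g                               # travel time in hours
        for nbr, length, speed in graph.get(node, []):
            ng = g + length / max(speed, 1e-3)     # traffic-aware edge cost
            if ng < best.get(nbr, float("inf")):
                best[nbr] = ng
                heapq.heappush(open_set, (ng + h(nbr), ng, nbr))
    return float("inf")
```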

Development of Multimedia Content Usage Analysis Service Platform Utilizing Attention and Understanding Flows (멀티미디어 콘텐츠 응시와 이해도 기반 분석 서비스 플랫폼 기술)

  • Ko, Ginam;Moon, Nammee
    • KIPS Transactions on Software and Data Engineering / v.4 no.8 / pp.315-320 / 2015
  • The purpose of this research is to develop a multimedia content usage analysis service platform. In the proposed platform, the content-gazing behavior of users is monitored and profiled in real time, and a set of quantifiable metrics is provided. These metrics are used to determine how close the users' behavior is to the intent set by the provider. Based on this evaluation, it is also possible to assess the effectiveness of the content itself. The content usage assessment utilizes the intention flow and the intent weights, which are embedded into the content by the content provider. The proposed methodology can be effectively applied in various domains such as education and commercial advertising.
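
A small illustrative sketch of scoring how closely a user's observed gaze coverage matches the provider's intended flow using per-segment intent weights; the segment names, weights, and scoring rule are hypothetical, not the platform's actual metrics.

```python
intent_flow = {"intro": 0.2, "key_scene": 0.5, "summary": 0.3}    # provider's intent weights (sum to 1)
observed_gaze = {"intro": 0.9, "key_scene": 0.4, "summary": 0.7}  # fraction of each segment actually gazed

def closeness(intent, gaze):
    """Weighted sum of gaze coverage, so heavily weighted segments dominate the score."""
    return sum(w * gaze.get(segment, 0.0) for segment, w in intent.items())

print(f"closeness = {closeness(intent_flow, observed_gaze):.2f}")  # 1.0 means fully as intended
```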