• Title/Summary/Keyword: search word

Search Result 384, Processing Time 0.024 seconds

New Fast and Cost effective Chien Search Machine Design Using Galois Subfield Transformation (갈로이스 부분장 변환을 이용한 새로운 고속의 경제적 치엔탐색기의 설계법에 대하여)

  • An, Hyeong-Keon;Hong, Young-Jin;Kim, Jin-Young
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.44 no.3 s.357
    • /
    • pp.61-67
    • /
    • 2007
  • In a Reed-Solomon decoder, when there are more than 4 error symbols, a Chien search machine is usually used to find the error positions. In this case, the classical method requires complex and relatively slow digital circuitry. In this paper we propose a new, fast, and cost-effective Chien search machine design method using Galois subfield transformation. An example is given to show that the method works well. The new design can be applied when there are more than 5 symbol errors in the Reed-Solomon code word.
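For context, the classical Chien search that the paper sets out to accelerate can be sketched in a few lines: it simply evaluates the error-locator polynomial at every field element and reports the positions where it vanishes. The field (GF(16)), the primitive polynomial, and the example locator below are illustrative choices, not taken from the paper.

```python
# Minimal sketch of classical Chien search over GF(16).
# GF(16) is generated here by the primitive polynomial x^4 + x + 1 (0b10011).
EXP, LOG = [0] * 30, [0] * 16
x = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = x
    LOG[x] = i
    x <<= 1
    if x & 0x10:
        x ^= 0b10011

def gf_mul(a, b):
    """Multiply two GF(16) elements via log/antilog tables."""
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def chien_search(locator):
    """Return error positions i where Lambda(alpha^-i) == 0.

    `locator` lists the coefficients of the error-locator polynomial
    Lambda(x), lowest degree first. Classical Chien search brute-forces
    every field element in turn.
    """
    roots = []
    for i in range(15):
        xi = EXP[(15 - i) % 15]  # alpha^(-i)
        acc, power = 0, 1
        for coeff in locator:
            acc ^= gf_mul(coeff, power)  # addition in GF(2^m) is XOR
            power = gf_mul(power, xi)
        if acc == 0:
            roots.append(i)
    return roots
```

For example, the locator `[1, 2, 11]` factors as (1 + α²x)(1 + α⁵x) in this field, so the search reports error positions 2 and 5.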

Recognition Time Reduction Technique for the Time-synchronous Viterbi Beam Search (시간 동기 비터비 빔 탐색을 위한 인식 시간 감축법)

  • 이강성
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.6
    • /
    • pp.46-50
    • /
    • 2001
  • This paper proposes a new recognition-time reduction algorithm, the Score-Cache technique, which is applicable to HMM-based speech recognition systems. Score-Cache is unique in that it greatly reduces search time with no accompanying performance degradation, whereas other search-reduction techniques trade off recognition rate. The technique can be applied to continuous speech recognition systems as well as isolated-word recognition systems. A high degree of recognition-time reduction is obtained simply by replacing the score-calculating function, without changing the architecture of the system, and the technique can be combined with other recognition-time reduction algorithms for further savings. We achieved a time reduction of up to 54%.
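The caching idea can be illustrated with a toy time-synchronous Viterbi beam search: many active hypotheses need the same (state, frame) output score, so computing it once per frame and reusing it leaves the search result unchanged while cutting raw score evaluations. The model, scores, and numbers below are invented for illustration; this is not the paper's Score-Cache implementation.

```python
import math

CALLS = {"raw": 0}  # counts expensive score evaluations

def raw_score(state_mean, observation):
    """Stand-in for an expensive HMM output-probability computation."""
    CALLS["raw"] += 1
    return -((observation - state_mean) ** 2) / 2.0  # log of an unnormalised Gaussian

def viterbi_beam(observations, means, trans, beam=2.0, cache=True):
    """Time-synchronous Viterbi beam search over a toy fully connected HMM."""
    scores = {}  # (state, frame) -> cached output score

    def output(s, t):
        if not cache:
            return raw_score(means[s], observations[t])
        if (s, t) not in scores:
            scores[(s, t)] = raw_score(means[s], observations[t])
        return scores[(s, t)]

    n = len(means)
    active = {s: output(s, 0) for s in range(n)}
    for t in range(1, len(observations)):
        nxt = {}
        for s, sc in active.items():
            for s2 in range(n):
                cand = sc + trans[s][s2] + output(s2, t)
                if cand > nxt.get(s2, -math.inf):
                    nxt[s2] = cand
        best = max(nxt.values())
        active = {s: v for s, v in nxt.items() if v >= best - beam}  # beam pruning
    return max(active.values())
```

Running the same search with `cache=True` and `cache=False` yields the identical best score, but the cached run makes strictly fewer `raw_score` calls whenever more than one hypothesis is active.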


Automatic Generation of a Subject Field Hierarchy for an Intelligent Expert Management Framework (지능형 전문가관리 프레임워크를 위한 주제 분야 계층 자동 생성)

  • Yang, Geun-U;Lee, Sang-Ro
    • Proceedings of the Korea Society of Management Information Systems Conference
    • /
    • 2007.11a
    • /
    • pp.294-299
    • /
    • 2007
  • In this paper, we introduce a methodology for automatically generating the subject field hierarchy for an Intelligent Expert Management Framework using WordNet. The Intelligent Expert Management Framework, proposed as a way to manage valuable tacit knowledge within an organization, defines the expert profile structure and provides an efficient method for automating the collection and updating of expert profile information based on that structure. To increase user satisfaction, additional intelligent search features are defined so that users performing expert searches against the expert database are also given a list of experts in related or similar fields. To enable automatic profiling of organizational experts as well as intelligent expert searches, the subject field hierarchy, against which expert profiles are classified and similar-field searches are performed, must be predefined. We propose a WordNet-based method that first eliminates the ambiguity of the senses of nominal data values, then constructs the subject field hierarchy by overlapping the hypernyms of the remaining senses, and finally adjusts the derived hierarchy to user preferences. With the proposed methodology, we expect to avoid the prohibitive costs of building large subject field hierarchies manually, while maintaining the objectivity of the hierarchies.
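The hypernym-overlap step can be sketched as follows: each disambiguated term contributes its root-first hypernym chain, and chains are merged on their shared prefixes into one subject-field tree. The chains below are hand-written stand-ins for actual WordNet lookups (which the paper performs through a WordNet library).

```python
def build_hierarchy(chains):
    """Merge root-first hypernym chains into a nested dict (a tree).

    Chains that share a prefix (e.g. entity -> abstraction) overlap into
    a single branch; they diverge only where their concepts differ.
    """
    tree = {}
    for chain in chains:
        node = tree
        for concept in chain:
            node = node.setdefault(concept, {})
    return tree

# Illustrative hypernym chains, not real WordNet output.
chains = [
    ["entity", "abstraction", "communication", "language"],
    ["entity", "abstraction", "communication", "signal"],
    ["entity", "physical_entity", "object"],
]
hierarchy = build_hierarchy(chains)
```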


Word Sense Disambiguation based on Concept Learning with a focus on the Lowest Frequency Words (저빈도어를 고려한 개념학습 기반 의미 중의성 해소)

  • Kim Dong-Sung;Choe Jae-Woong
    • Language and Information
    • /
    • v.10 no.1
    • /
    • pp.21-46
    • /
    • 2006
  • This study proposes a Word Sense Disambiguation (WSD) algorithm based on concept learning, with special emphasis on statistically meaningful lowest-frequency words. Previous work on WSD typically makes use of collocation frequencies and their probabilities; such probability-based approaches tend to ignore the lowest-frequency words, which can be meaningful in context. In this paper, we show an algorithm to extract and use meaningful lowest-frequency words in WSD. The learning method is adapted from the Find-Specific algorithm of Mitchell (1997), in which the search proceeds from specific predefined hypothesis spaces to more general ones. In our model, this algorithm is used to find contexts with the most specific classifiers and then move to more general ones. We build up small seed data and apply them to relatively large test data. Following the algorithm in Yarowsky (1995), the classified test data are exhaustively added to the seed data, thus expanding it; however, this can introduce considerable noise into the seed data. We therefore introduce the maximum a posteriori hypothesis, based on Bayes' assumption, to assess whether new seed data are noise. We use the Naive Bayes classifier and show that applying the Find-Specific algorithm improves the correctness of WSD.
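Mitchell's Find-S ("find specific") learner, which the paper adapts, is short enough to sketch directly: start from the most specific hypothesis and generalize each attribute slot just enough to cover every positive example. The context features below (left word, part of speech, right word around an ambiguous target) are illustrative, not the paper's actual feature set.

```python
def find_s(positive_examples):
    """Return the most specific hypothesis covering all positive examples.

    A hypothesis is a tuple whose slots are either a concrete value or
    '?' (matches any value). Before the first example, the hypothesis is
    maximally specific (represented here as None).
    """
    hypothesis = None
    for example in positive_examples:
        if hypothesis is None:
            hypothesis = list(example)       # adopt the first positive example
        else:
            for i, value in enumerate(example):
                if hypothesis[i] != value:
                    hypothesis[i] = "?"      # generalise the mismatching slot
    return tuple(hypothesis)

# Toy contexts for one sense of an ambiguous word.
examples = [
    ("river", "NOUN", "water"),
    ("river", "NOUN", "flow"),
]
```

Here `find_s(examples)` generalizes only the third slot, yielding `("river", "NOUN", "?")`: the classifier stays as specific as the positive data allow, which is the property the paper exploits when moving from specific contexts toward general ones.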


Effects of Spatial Attention for Words on Implicit Memory (단어에 대한 공간적 주의가 암묵기억에 미치는 영향)

  • 심원목;김민식
    • Korean Journal of Cognitive Science
    • /
    • v.11 no.3_4
    • /
    • pp.13-22
    • /
    • 2000
  • The present study examined the role of spatial attention in implicit memory for words when word identity processing was not required. Spatial attention to the identity-irrelevant perceptual features of the words was manipulated using a visual search task (Experiment 1) or a focused attention task (Experiment 2). In both experiments, no significant priming effect was found for either the target words or the distractor words. Implicit memory for words was not affected by spatial attention to the perceptual properties of the words, indicating that word identity processing is required to produce priming.


Construction of Full-Text Database and Implementation of Service Environment for Electronic Theses and Dissertations (학위논문 전문데이터베이스 구축 및 서비스환경 구현)

  • Lee, Kyi-Ho;Kim, Jin-Suk;Yoon, Wha-Muk
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.1
    • /
    • pp.41-49
    • /
    • 2000
  • Since the mid-1990s, most universities in Korea have required students to submit Electronic Theses and Dissertations (ETD) for master's and doctoral degrees in addition to the original printed theses. The ETD submitted by students are typically produced with various word processors such as MS Word, LaTeX, and HWP. Since there is not yet a standard format that merges these different formats, it is difficult to construct an integrated database that provides full-text service. In this paper, we transform the three different ETD formats into a unified one, construct a full-text database, and implement a full-text retrieval system for effective search in the Internet environment.


Research trends related to childhood and adolescent cancer survivors in South Korea using word co-occurrence network analysis

  • Kang, Kyung-Ah;Han, Suk Jung;Chun, Jiyoung;Kim, Hyun-Yong
    • Child Health Nursing Research
    • /
    • v.27 no.3
    • /
    • pp.201-210
    • /
    • 2021
  • Purpose: This study analyzed research trends related to childhood and adolescent cancer survivors (CACS) using word co-occurrence network analysis on studies registered in the Korean Citation Index (KCI). Methods: This word co-occurrence network analysis study explored major research trends by constructing a network based on relationships between keywords (semantic morphemes) in the abstracts of published articles. Research articles published in the KCI over the past 10 years were collected using the Biblio Data Collector tool included in the NetMiner Program (version 4), using "cancer survivors", "adolescent", and "child" as the main search terms. After pre-processing, analyses were conducted on centrality (degree and eigenvector), cohesion (community), and topic modeling. Results: For centrality, the top 10 keywords included "treatment", "factor", "intervention", "group", "radiotherapy", "health", "risk", "measurement", "outcome", and "quality of life". In terms of cohesion and topic analysis, three categories were identified as the major research trends: "treatment and complications", "adaptation and support needs", and "management and quality of life". Conclusion: The keywords from the three main categories reflected interdisciplinary identification. Many studies on adaptation and support needs were identified in our analysis of nursing literature. Further research on managing and evaluating the quality of life among CACS must also be conducted.
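The co-occurrence network and degree-centrality steps can be sketched as follows. The keyword lists are invented stand-ins for the KCI abstracts, and the paper itself used NetMiner rather than hand-rolled code; this is only meant to make the method concrete.

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_network(documents):
    """Build an undirected co-occurrence graph as {node: set(neighbours)}.

    Two keywords are linked whenever they appear in the same document.
    """
    graph = defaultdict(set)
    for keywords in documents:
        for a, b in combinations(sorted(set(keywords)), 2):
            graph[a].add(b)
            graph[b].add(a)
    return graph

def degree_centrality(graph):
    """Degree centrality: neighbour count normalised by (n - 1)."""
    n = len(graph)
    return {node: len(nbrs) / (n - 1) for node, nbrs in graph.items()}

# Invented keyword sets standing in for abstracts.
docs = [
    ["treatment", "complication", "radiotherapy"],
    ["treatment", "quality of life", "intervention"],
    ["quality of life", "intervention", "support"],
]
```

With these toy documents, "treatment" co-occurs with the most distinct keywords and therefore tops the degree-centrality ranking, mirroring how the paper identifies its top-10 keywords.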

Design of environmental technology search system using synonym dictionary (유의어 사전 기반 환경기술 검색 시스템 설계)

  • XIANGHUA, PIAO;HELIN, YIN;Gu, Yeong Hyeon;Yoo, Seong Joon
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2020.07a
    • /
    • pp.582-586
    • /
    • 2020
  • The National Climate Technology Information System is a search system that provides information on domestic environmental technologies and overseas demand technologies. However, the existing system cannot recognize that single-word and multi-word terms may share the same meaning, so entering a synonym yields different search results. To solve this problem, this study proposes an environmental technology search system based on a synonym dictionary. The system builds the synonym dictionary using a Word2vec model and the HDBSCAN (Hierarchical Density-Based Spatial Clustering of Applications with Noise) algorithm. Morphological analysis is performed on Korean and English Wikipedia corpora, after which single-word and multi-word terms are extracted and vectorized with the Word2vec model. The HDBSCAN algorithm then clusters the vectorized words and extracts synonyms; compared with the conventional Word2vec procedure of computing distances between all word pairs to extract synonyms, this shortens processing time. The extracted synonyms are merged to build the synonym dictionary. Finally, we propose an environmental technology search system that can recognize synonyms by loading the domestic and overseas technology information and technology keywords provided by the National Climate Technology Information System, together with the constructed synonym dictionary, into Elasticsearch with a multi-filter.
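The query-expansion side of such a system can be sketched once the synonym clusters exist (the paper derives them by clustering Word2vec vectors with HDBSCAN): each query term is expanded to its whole cluster before the search engine is queried. The clusters and terms below are illustrative, not taken from the system.

```python
def build_synonym_dict(clusters):
    """Map every term to its full synonym set, one entry per cluster member."""
    synonyms = {}
    for cluster in clusters:
        for term in cluster:
            synonyms[term] = set(cluster)
    return synonyms

def expand_query(query_terms, synonyms):
    """Replace each query term by its synonym set (or itself if unknown)."""
    expanded = set()
    for term in query_terms:
        expanded |= synonyms.get(term, {term})
    return expanded

# Invented synonym clusters standing in for HDBSCAN output.
clusters = [
    {"solar power", "photovoltaic", "PV"},
    {"carbon capture", "CCS"},
]
syn = build_synonym_dict(clusters)
```

A query for "PV" would then match documents mentioning "solar power" or "photovoltaic" as well, which is the behaviour the synonym dictionary is built to provide.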


How Google Advertisements Attract Consumers' Call-to-action and Electronic Word-of-mouth

  • Tser-Yieth Chen;Hsueh-Ling Wu;Jiun-Hua Yun
    • Asia Marketing Journal
    • /
    • v.26 no.2
    • /
    • pp.77-89
    • /
    • 2024
  • This study investigated central and peripheral route factors to assess how Google Advertisements contribute to users' call-to-action (CTA) and electronic word-of-mouth (e-WOM) behaviors. We explored the persuasive effects of Google Advertisements on consumers using a dataset of 483 valid empirical samples from Taiwan, employing structural equation modeling (SEM) to examine the hypotheses. The empirical results indicate that both the peripheral route (image appeal) and the central route (information completeness) positively contribute to the persuasion effect. The most effective pathway ran from image appeal to the persuasion effect and, ultimately, to call-to-action; image appeal was also a secondary pathway that enhances the persuasion effect, ultimately leading to e-WOM. These findings have valuable implications for companies seeking to attract customers to purchase electronic products through the Google search engine. The novelty of our study is its inclusion of the peripheral route: our findings were derived from a symbolic-value lens, rather than a central route based solely on utilitarian and hedonistic value perspectives.

A Movie Recommendation System based on Fuzzy-AHP and Word2vec (Fuzzy-AHP와 Word2Vec 학습 기법을 이용한 영화 추천 시스템)

  • Oh, Jae-Taek;Lee, Sang-Yong
    • Journal of Digital Convergence
    • /
    • v.18 no.1
    • /
    • pp.301-307
    • /
    • 2020
  • With the beginning of the 5G era, recommendation systems have been introduced into many fields in recent years, appearing most prominently in books, movies, and music. In such systems, however, user preferences are subjective and uncertain, making it difficult to provide accurate recommendations. Improving the performance of a recommendation system requires large amounts of learning data and more accurate estimation techniques. To address this problem, this study proposes a movie recommendation system based on Fuzzy-AHP and Word2vec. The proposed system uses Fuzzy-AHP to make objective predictions about user preference and Word2vec to classify scraped data. The performance of the system was assessed by measuring the accuracy of the Word2vec results with a grid search and by comparing the movie ratings predicted by the system with those given by the audience. The optimal cross-validation accuracy was 91.4%, indicating excellent performance, and in the comparison of predicted and audience movie ratings the proposed system outperformed a plain Fuzzy-AHP system by approximately 10%.
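The Fuzzy-AHP weighting step can be sketched with Buckley's geometric-mean method over triangular fuzzy numbers. The paper does not specify which Fuzzy-AHP variant it uses, and the pairwise judgements below are invented, so this is only one plausible reading of the technique.

```python
def fuzzy_ahp_weights(matrix):
    """Crisp criterion weights from a fuzzy pairwise-comparison matrix.

    matrix[i][j] is a triangular fuzzy number (l, m, u) expressing how much
    more important criterion i is than criterion j. Buckley's method takes
    the fuzzy geometric mean of each row, normalises, then defuzzifies by
    the centroid (l + m + u) / 3.
    """
    n = len(matrix)
    geo = []
    for row in matrix:
        l = m = u = 1.0
        for (a, b, c) in row:
            l *= a
            m *= b
            u *= c
        geo.append((l ** (1 / n), m ** (1 / n), u ** (1 / n)))
    sum_l = sum(g[0] for g in geo)
    sum_m = sum(g[1] for g in geo)
    sum_u = sum(g[2] for g in geo)
    # Fuzzy weight (l/sum_u, m/sum_m, u/sum_l), centroid-defuzzified.
    crisp = [(g[0] / sum_u + g[1] / sum_m + g[2] / sum_l) / 3 for g in geo]
    total = sum(crisp)
    return [w / total for w in crisp]

# Two criteria: the first judged moderately more important than the second.
pairwise = [
    [(1, 1, 1), (2, 3, 4)],
    [(1 / 4, 1 / 3, 1 / 2), (1, 1, 1)],
]
weights = fuzzy_ahp_weights(pairwise)
```

The resulting weights sum to 1, with the first criterion receiving the larger weight, as the fuzzy judgement "moderately more important" would suggest.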