• Title/Summary/Keyword: 검색 가중치 (search weighting)

Search results: 401

Color Component Analysis For Image Retrieval (이미지 검색을 위한 색상 성분 분석)

  • Choi, Young-Kwan;Choi, Chul;Park, Jang-Chun
    • The KIPS Transactions:PartB
    • /
    • v.11B no.4
    • /
    • pp.403-410
    • /
    • 2004
  • Studies of image analysis as a preprocessing stage for medical image analysis or image retrieval have recently been carried out actively. This paper proposes a way of utilizing color components for image retrieval. Retrieval is based on color components, and for color analysis the CLCM (Color Level Co-occurrence Matrix) and statistical techniques are used. The CLCM proposed in this paper projects color components onto a 3D space through a geometric rotation transform and then interprets the distribution formed by their spatial relationships. The CLCM is a 2D histogram built in a color model created through a geometric rotation transform of the color model, and a statistical technique is used to analyze it. Like the CLCM, the GLCM (Gray Level Co-occurrence Matrix) [1] and Invariant Moments [2,3] use 2D distribution charts interpreted with basic statistical techniques. However, even though the GLCM and Invariant Moments are optimized in their own domains, they cannot fully interpret irregular data on the spatial coordinates: because they rely only on basic statistical techniques, the reliability of the extracted features is low. In order to interpret the spatial relationships and weights of the data, this study uses Principal Component Analysis [4,5] from multivariate statistics. To increase accuracy, it proposes projecting color components onto a 3D space, rotating them, and extracting features of the data from all angles.
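
The core idea of the abstract above, projecting color triples into 3-D space, rotating them, and weighting the axes with PCA, can be sketched as follows. This is a minimal illustration, not the paper's CLCM implementation; the rotation axis, angle, and random pixel data are assumptions.

```python
import numpy as np

def rotate_z(points, angle_rad):
    """Rotate 3-D color points about the z-axis (one possible geometric rotation)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return points @ rot.T

def pca_features(points):
    """Eigenvalues (axis weights) and eigenvectors of the color point cloud."""
    centered = points - points.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]        # largest variance first
    return eigvals[order], eigvecs[:, order]

rng = np.random.default_rng(0)
pixels = rng.uniform(0, 255, size=(1000, 3))   # stand-in for image (R, G, B) pixels
weights, axes = pca_features(rotate_z(pixels, np.pi / 6))
print(weights)  # variance captured by each principal color axis
```

Repeating the PCA step over several rotation angles, as the paper suggests, yields features of the color distribution "from all angles".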

The Effectiveness of Hierarchic Clustering on Query Results in OPAC (OPAC에서 탐색결과의 클러스터링에 관한 연구)

  • Ro, Jung-Soon
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.38 no.1
    • /
    • pp.35-50
    • /
    • 2004
  • This study evaluated the applicability of the static hierarchic clustering model to clustering query results in OPAC. Two clustering methods (Between Average Linkage (BAL) and Complete Linkage (CL)) and two similarity coefficients (Dice and Jaccard) were tested on the query results retrieved from 16 title-based keyword searches. The precision of optimal clusters improved by more than 100% compared with title-word searching. The similarity coefficients made no difference in optimal cluster effectiveness, but the clustering methods did. The CL method is better in precision ratio and BAL is better in recall ratio at the optimal top-level and bottom-level clusters, but the differences are not significant except for the higher recall ratio of BAL at the top-level cluster. The small number of clusters and the long hierarchy chains of the optimal clusters produced by BAL may be neither desirable nor efficient.
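
The two similarity coefficients compared above are standard set-overlap measures on title-word sets; a minimal sketch (not the paper's code, example titles invented):

```python
def dice(a, b):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    if not a and not b:
        return 0.0
    return 2 * len(a & b) / (len(a) + len(b))

def jaccard(a, b):
    """Jaccard coefficient: |A∩B| / |A∪B|."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

t1 = set("hierarchic clustering of query results".split())
t2 = set("clustering query results in opac".split())
print(dice(t1, t2), jaccard(t1, t2))  # 0.6 and 3/7 ≈ 0.429
```

Either coefficient feeds the pairwise similarity matrix that BAL or CL then clusters hierarchically.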

Precision Analysis of the STOMP(FW) Algorithm According to the Spatial Conceptual Hierarchy (공간 개념 계층에 따른 STOMP(FW) 알고리즘의 정확도 분석)

  • Lee, Yon-Sik;Kim, Young-Ja;Park, Sung-Sook
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.11 no.12
    • /
    • pp.5015-5022
    • /
    • 2010
  • Most existing pattern mining techniques can search for patterns according to continuous changes in an object's spatial information, but they place no constraint on the spatial information that must be included in the extracted pattern. Thus, they are not applicable to optimal path search between specific nodes, or to path prediction that considers the nodes a moving object must visit during a unit time. In this paper, the precision of path search according to the spatial hierarchy is analyzed using the Spatial-Temporal Optimal Moving Pattern (with Frequency & Weight) (STOMP(FW)) algorithm, which searches for the optimal moving path by considering the most frequent pattern together with other weighted factors such as time and cost. The analysis shows that database retrieval time is minimized by reducing the retrieval range with the spatial constraints, and that the optimal moving pattern is obtained efficiently by checking whether each moving pattern falls within each hierarchical spatial scope of the spatial hierarchy.
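
The selection step described above, keeping only patterns that contain the required nodes and ranking them by frequency against weighted time and cost, might look like the following. The field names, weights, and data are purely illustrative, not the STOMP(FW) implementation.

```python
def best_pattern(patterns, required_nodes, w_freq=1.0, w_time=0.5, w_cost=0.5):
    """Pick the highest-scoring pattern that satisfies the spatial constraint."""
    required = set(required_nodes)
    def score(p):
        # higher frequency is rewarded; time and cost act as penalties
        return w_freq * p["freq"] - w_time * p["time"] - w_cost * p["cost"]
    candidates = [p for p in patterns if required <= set(p["path"])]
    return max(candidates, key=score) if candidates else None

patterns = [
    {"path": ["A", "B", "C"], "freq": 9,  "time": 4, "cost": 3},
    {"path": ["A", "D", "C"], "freq": 8,  "time": 2, "cost": 1},
    {"path": ["A", "B"],      "freq": 12, "time": 1, "cost": 1},
]
print(best_pattern(patterns, ["A", "C"])["path"])  # ['A', 'D', 'C']
```

The third pattern is the most frequent but is filtered out by the spatial constraint (it never reaches C), which is exactly the behavior the abstract says existing miners lack.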

Efficient Harmonic-CELP Based Low Bit Rate Speech Coder (효율적인 하모닉-CELP 구조를 갖는 저 전송률 음성 부호화기)

  • 최용수;김경민;윤대희
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.5
    • /
    • pp.35-47
    • /
    • 2001
  • This paper describes an efficient harmonic-CELP speech coder that takes advantage of both harmonic and CELP coders. According to a frame voicing decision, the proposed harmonic-CELP coder adopts the RP-VSELP coder as a fast CELP coder for unvoiced frames, or an improved harmonic coder for voiced frames. The proposed coder has the following main features: simple pitch detection, fast harmonic estimation, variable-dimension harmonic vector quantization, perceptual weighting reflecting frequency resolution, fast harmonic synthesis, naturalness control using band voicing, and multi-mode operation. These features make the proposed coder require very low complexity compared with the HVXC coder. To demonstrate its performance, a 2.4 kbps coder was implemented and compared with reference coders. In informal listening tests, the proposed coder showed good quality while requiring low delay and complexity.

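The frame voicing decision that steers such a coder can be illustrated with a classical normalized-autocorrelation test: a voiced frame shows a strong peak at its pitch lag, an unvoiced frame does not. This is a generic sketch, not the paper's RP-VSELP or harmonic coder; the threshold and test signals are assumptions.

```python
import numpy as np

def pitch_and_voicing(frame, fs, fmin=60.0, fmax=400.0, threshold=0.3):
    """Return (pitch in Hz or 0.0, voiced flag) from one speech frame."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    ac = ac / (ac[0] + 1e-12)                  # normalize by frame energy
    lo, hi = int(fs / fmax), int(fs / fmin)    # plausible pitch-lag range
    lag = lo + int(np.argmax(ac[lo:hi]))
    voiced = bool(ac[lag] > threshold)
    return (fs / lag if voiced else 0.0), voiced

fs = 8000
t = np.arange(400) / fs
voiced_frame = np.sin(2 * np.pi * 100 * t)     # 100 Hz tone stands in for voiced speech
rng = np.random.default_rng(1)
unvoiced_frame = rng.standard_normal(400)      # white noise stands in for unvoiced speech
print(pitch_and_voicing(voiced_frame, fs))     # ≈ (100.0, True)
```

A real coder would then route the voiced frame to the harmonic branch and the unvoiced frame to the fast CELP branch.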

Inverse Document Frequency-Based Word Embedding of Unseen Words for Question Answering Systems (질의응답 시스템에서 처음 보는 단어의 역문헌빈도 기반 단어 임베딩 기법)

  • Lee, Wooin;Song, Gwangho;Shim, Kyuseok
    • Journal of KIISE
    • /
    • v.43 no.8
    • /
    • pp.902-909
    • /
    • 2016
  • A question answering (QA) system finds an actual answer to the question posed by a user, whereas a typical search engine only finds links to the relevant documents. Recent work on open-domain QA systems is receiving much attention in the fields of natural language processing, artificial intelligence, and data mining. However, prior QA systems simply replace all words that are not in the training data with a single token, even though such unseen words are likely to play crucial roles in differentiating the candidate answers from the actual answers. In this paper, we propose a method to compute vectors for such unseen words by taking into account the context in which they occur. We also propose a model that utilizes inverse document frequencies (IDF) to efficiently process unseen words by expanding the system's vocabulary. Finally, we validate through experiments that the proposed method and model improve the performance of a QA system.
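
The IDF-weighted idea above can be sketched as building an unseen word's vector from the vectors of its context words, weighted so that rare, discriminative context words dominate. This is an illustrative reconstruction under assumed data, not the paper's model; the smoothed IDF formula is one common variant.

```python
import math

def idf(term, docs):
    """Smoothed inverse document frequency over a collection of word sets."""
    df = sum(1 for d in docs if term in d)
    return math.log((1 + len(docs)) / (1 + df)) + 1.0

def embed_unseen(context, docs, vectors, dim=2):
    """IDF-weighted average of the known context-word vectors."""
    total, vec = 0.0, [0.0] * dim
    for w in context:
        if w in vectors:
            weight = idf(w, docs)
            total += weight
            vec = [v + weight * c for v, c in zip(vec, vectors[w])]
    return [v / total for v in vec] if total else vec

docs = [{"the", "cat", "sat"}, {"the", "dog", "ran"}, {"the", "cat", "ran"}]
vectors = {"cat": [1.0, 0.0], "dog": [0.0, 1.0], "the": [0.5, 0.5]}
vec = embed_unseen(["the", "cat"], docs, vectors)
print(vec)  # pulled toward "cat", the rarer and more informative context word
```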

An efficient Decision-Making using the extended Fuzzy AHP Method(EFAM) (확장된 Fuzzy AHP를 이용한 효율적인 의사결정)

  • Ryu, Kyung-Hyun;Pi, Su-Young
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.6
    • /
    • pp.828-833
    • /
    • 2009
  • The WWW, a massive collection of documents, is a thesaurus of information for users, but search engines spend a lot of time retrieving necessary information and filtering out unnecessary information. In this paper, we propose the EFAM (Extended Fuzzy AHP Method) model to manage Web resources efficiently and to make clear decisions on problems in a specific domain. The EFAM model incorporates emotion analysis based on domain corpus information and is composed of systematic common concept grids built from the knowledge of multiple experts. The proposed EFAM model can therefore extract documents by considering emotion criteria in the semantic context of concepts extracted from the corpus of a specific domain. Experiments confirm that the model provides more efficient decision-making than conventional methods such as AHP and fuzzy AHP, which describe decision-making as a hierarchical structure of alternatives, evaluation criteria, subjective attribute weights, and fuzzy relations between concepts and objects.
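
For background, the AHP step that EFAM extends derives priority weights from a pairwise comparison matrix; the row geometric-mean approximation of the principal eigenvector is a common way to compute them. The matrix values below are an invented example, not from the paper.

```python
import math

def ahp_weights(matrix):
    """Approximate AHP priority vector via normalized row geometric means."""
    n = len(matrix)
    geo = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo)
    return [g / total for g in geo]

# Example: criterion A judged 3x as important as B and 5x as important as C.
pairwise = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
w = ahp_weights(pairwise)
print(w)  # roughly [0.65, 0.23, 0.12]; weights sum to 1
```

Fuzzy AHP replaces the crisp judgments with fuzzy numbers; EFAM additionally folds in the corpus-derived emotion criteria described above.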

Development of Freeway Traffic Incident Clearance Time Prediction Model by Accident Level (사고등급별 고속도로 교통사고 처리시간 예측모형 개발)

  • LEE, Soong-bong;HAN, Dong Hee;LEE, Young-Ihn
    • Journal of Korean Society of Transportation
    • /
    • v.33 no.5
    • /
    • pp.497-507
    • /
    • 2015
  • Nonrecurrent congestion on freeways is primarily caused by incidents, and the main cause of incidents is known to be traffic accidents. Accurate prediction of traffic incident clearance time is therefore very important in accident management. Freeway traffic accident data from 2008 to 2014 were analyzed for this study, and the KNN (K-Nearest Neighbor) algorithm was employed to develop an incident clearance time prediction model from the historical accident data. Analysis of the accident data shows that accident level significantly affects incident clearance time, so clearance time was categorized by accident level. Data were sorted by traffic volume, number of lanes, and time period to take traffic conditions and roadway geometry into account. Factors affecting incident clearance time were analyzed from the extracted data to identify similar types of accidents. Lastly, the weights of the detailed factors were calculated to define the distance metric; the weights were computed using the standard normal distribution method, and incident clearance time was then predicted. The model showed a lower prediction error (MAPE) than models in previous studies. The improved model developed in this study is expected to contribute to efficient highway operation management when an incident occurs.
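
The weighted-KNN prediction step described above might be sketched as follows: the clearance time of a new incident is the mean over the K nearest historical accidents under a per-feature weighted distance. The features, weights, and records here are invented for illustration, not the study's data or its normal-distribution weighting.

```python
import math

def weighted_knn_predict(query, history, weights, k=3):
    """Mean clearance time (minutes) of the k nearest historical accidents."""
    def dist(x, y):
        return math.sqrt(sum(w * (a - b) ** 2
                             for w, a, b in zip(weights, x, y)))
    nearest = sorted(history, key=lambda rec: dist(query, rec[0]))[:k]
    return sum(time for _, time in nearest) / k

def mape(actual, predicted):
    """Mean absolute percentage error, the error measure cited above."""
    return 100.0 * sum(abs(a - p) / a
                       for a, p in zip(actual, predicted)) / len(actual)

# features: (accident level, number of lanes, traffic volume in 1000 veh/h)
history = [
    ((1, 2, 1.0), 30.0), ((1, 3, 1.2), 35.0), ((2, 2, 2.0), 60.0),
    ((2, 3, 2.5), 70.0), ((3, 4, 3.0), 120.0), ((3, 3, 2.8), 110.0),
]
weights = (2.0, 0.5, 1.0)   # accident level weighted most, per the abstract
pred = weighted_knn_predict((2, 3, 2.2), history, weights, k=3)
print(pred)  # 80.0
```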

Fast RSST Algorithm Using Link Classification and Elimination Technique (가지 분류 및 제거기법을 이용한 고속 RSST 알고리듬)

  • Hong, Won-Hak
    • 전자공학회논문지 IE
    • /
    • v.43 no.4
    • /
    • pp.43-51
    • /
    • 2006
  • Segmentation using RSST has many advantages, such as extracting accurate region boundaries and controlling the resolution of the segmented result. In this paper, we propose three fast RSST algorithms for image segmentation. In the first method, links are classified according to weight size for fast link search. In the second, very similar links are eliminated before RSST construction. In the third, the links of very small regions that are unimportant to the human eye are eliminated. As a result, the total segmentation time is reduced by a factor of about 10 to 40, while images reconstructed from the segmentation results show little degradation in PSNR or visual quality.
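
The first two speed-ups above can be sketched generically: bin links into coarse weight classes so the minimum-weight link is found without a full sort, and drop near-duplicate links before the tree is built. The bucket width, threshold, and link data are illustrative assumptions, not the paper's algorithm.

```python
def classify_links(links, bucket_width=10):
    """Group (weight, link) pairs into coarse weight buckets."""
    buckets = {}
    for weight, link in links:
        buckets.setdefault(weight // bucket_width, []).append((weight, link))
    return buckets

def pop_min_link(buckets):
    """Scan only the lowest non-empty bucket instead of every link."""
    key = min(k for k, v in buckets.items() if v)
    weight, link = min(buckets[key])
    buckets[key].remove((weight, link))
    return weight, link

links = [(42, "a-b"), (3, "b-c"), (17, "c-d"), (5, "a-d")]
# eliminate "very similar" links up front (threshold is illustrative)
survivors = [(w, l) for w, l in links if w >= 4]
buckets = classify_links(survivors)
print(pop_min_link(buckets))  # (5, 'a-d')
```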

A Researcher Model based on Ontology and a Social Network Construction Technique (온톨로지 기반의 연구자 모델링 기법과 연구자 네트워크 구축 기법)

  • Mun, Hyeon-Jeong;Jun, In-Ha;Woo, Yong-Tae
    • Journal of Korea Multimedia Society
    • /
    • v.12 no.7
    • /
    • pp.1022-1031
    • /
    • 2009
  • In this paper, we propose a researcher modeling technique based on ontology and construct a social network of researchers using diverse relational properties. The user ontology schema is created by extending the existing HR-XML model, and the schema and instances are written in OWL. We compose a social network model for efficient cooperation between researchers using static relational properties, such as educational background, and dynamic relational properties, such as co-authors and co-workers. Closeness is directional, because the researcher network is configured differently from each researcher's point of view. We define inference rules in SWRL and apply them with the Racer inference engine to derive direct relationships between researchers. The proposed researcher model can support cooperation between researchers by dynamically retrieving common expert groups.

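The directional-closeness point above has a simple intuition: two researchers who share papers can see each other with different closeness, because each one's total output differs. A minimal sketch with invented data (not the paper's ontology machinery):

```python
papers_by = {
    "kim": {"p1", "p2", "p3", "p4"},
    "lee": {"p1", "p2"},
}

def closeness(a, b, papers_by):
    """Fraction of a's papers co-authored with b (directed edge a -> b)."""
    own = papers_by[a]
    return len(own & papers_by[b]) / len(own) if own else 0.0

print(closeness("kim", "lee", papers_by))  # 0.5
print(closeness("lee", "kim", papers_by))  # 1.0
```

All of lee's papers involve kim, but only half of kim's involve lee, so the two directed edges carry different weights.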

A User Authentication System Using Geometric Analysis and Similarity Comparison (얼굴의 기하학적 분석과 유사도 비교를 이용한 사용자 인증 시스템)

  • 최내원;류동엽;지정규
    • Journal of the Korea Computer Industry Society
    • /
    • v.3 no.9
    • /
    • pp.1269-1278
    • /
    • 2002
  • As knowledge grows, so does the need for personal identification techniques. Fingerprint and iris identification are already commercialized and used in various fields, while recognition and authentication based on the human face do not yet achieve high performance; nevertheless, biometric and face recognition applications are expected to become increasingly important. We propose a user recognition system that splits out eye and lip component images, calculates a characteristic ratio for each facial component, applies weights in a scoring formula, and verifies user identity by similarity comparison. Testing the proposed method and analyzing the results, we obtained a high recognition rate.

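The weighted similarity comparison described above might look like the following sketch: geometric ratios measured from facial components are compared against an enrolled template, with per-component weights. The component names, ratios, weights, and acceptance threshold are all illustrative assumptions, not the paper's formula.

```python
def weighted_similarity(measured, enrolled, weights):
    """Score in [0, 1]; 1.0 means every weighted ratio matches exactly."""
    total = sum(weights.values())
    score = 0.0
    for part, w in weights.items():
        diff = abs(measured[part] - enrolled[part])
        score += w * max(0.0, 1.0 - diff)   # closer ratios score higher
    return score / total

enrolled = {"eye_ratio": 0.42, "lip_ratio": 0.30}
weights  = {"eye_ratio": 2.0,  "lip_ratio": 1.0}   # eyes weighted more heavily
measured = {"eye_ratio": 0.40, "lip_ratio": 0.35}
sim = weighted_similarity(measured, enrolled, weights)
print(sim > 0.9)  # accept if similarity exceeds a chosen threshold
```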