• Title/Summary/Keyword: time weight (시간 가중치)


XML Document Keyword Weight Analysis based Paragraph Extraction Model (XML 문서 키워드 가중치 분석 기반 문단 추출 모델)

  • Lee, Jongwon;Kang, Inshik;Jung, Hoekyung
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.11 / pp.2133-2138 / 2017
  • Existing analyses of XML and other documents have centered on words. Word-level analysis can be implemented with a morphological analyzer, but it only classifies the many words in a document and cannot capture the document's core content. For a user to understand a document efficiently, the paragraphs containing its main keywords must be extracted and presented. The proposed system searches for a keyword in a normalized XML document, extracts the paragraphs containing the keyword the user entered, and displays them. In addition, it reports the frequency and weight of the search keyword to the user, and orders the extracted paragraphs and eliminates redundancy so that the effort needed to understand the document is minimized. By letting the user grasp a document without reading the whole text, the proposed system minimizes the time and effort required to understand it.
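The keyword frequency-and-weight idea described above can be sketched as follows; the function name, the relative-frequency weight, and the ranking rule are illustrative assumptions, not the paper's exact model:

```python
import re

def extract_paragraphs(paragraphs, keyword):
    """Return (weight, frequency, paragraph) for each paragraph containing
    the keyword, ranked by a simple relative-frequency weight."""
    hits = []
    for p in paragraphs:
        words = re.findall(r"\w+", p.lower())
        freq = words.count(keyword.lower())
        if freq > 0:
            weight = freq / len(words)           # keyword share of the paragraph
            hits.append((weight, freq, p))
    hits.sort(key=lambda t: t[0], reverse=True)  # heaviest paragraph first
    return hits

docs = ["the cat sat on the mat",
        "dogs bark loudly",
        "a cat and a cat again"]
ranked = extract_paragraphs(docs, "cat")         # two paragraphs match
```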

A sample survey design for service satisfaction evaluation of regional education offices (지역교육청 수요자 만족도조사를 위한 표본설계에 관한 연구)

  • Heo, Sun-Yeong;Chang, Duk-Joon
    • Journal of the Korean Data and Information Science Society / v.21 no.4 / pp.669-679 / 2010
  • A sample survey design is suggested for the service-satisfaction evaluation of regional education offices, based on the sample size of the 2009 Gyeongnam regional education offices' customer satisfaction survey. The design is developed to fit the goal of evaluating individual regional offices and allocates at least a minimum sample size to each city or county in Gyeongnam. The population is stratified by region and school type, and sample schools are selected within each stratum with probability proportional to the number of classes. Finally, sample students are selected by two-stage cluster sampling within each sample school. Weighted averages, weighted totals, and similar statistics can be computed for analysis. Their variances can be estimated with resampling methods such as BRR, the jackknife, and linearization-substitution methods, which are commonly used for data from complex samples.
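A minimal sketch of the survey-weighted estimators mentioned above, with a simplified delete-one jackknife for the variance; the stratified design and BRR replication are not reproduced, and the sampling weights here are assumed inverse inclusion probabilities:

```python
def weighted_mean(values, weights):
    """Survey-weighted mean: sum(w_i * y_i) / sum(w_i)."""
    return sum(w * y for w, y in zip(weights, values)) / sum(weights)

def weighted_total(values, weights):
    """Survey-weighted total: sum(w_i * y_i)."""
    return sum(w * y for w, y in zip(weights, values))

def jackknife_variance(values, weights):
    """Delete-one jackknife variance of the weighted mean (unstratified)."""
    n = len(values)
    theta = weighted_mean(values, weights)
    reps = [weighted_mean(values[:i] + values[i + 1:],
                          weights[:i] + weights[i + 1:]) for i in range(n)]
    return (n - 1) / n * sum((r - theta) ** 2 for r in reps)

scores  = [4.0, 3.0, 5.0]   # satisfaction scores
weights = [10, 20, 10]      # sampling weights
```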

Discovering Frequent Itemsets Reflected User Characteristics Using Weighted Batch based on Data Stream (스트림 데이터 환경에서 배치 가중치를 이용하여 사용자 특성을 반영한 빈발항목 집합 탐사)

  • Seo, Bok-Il;Kim, Jae-In;Hwang, Bu-Hyun
    • The Journal of the Korea Contents Association / v.11 no.1 / pp.56-64 / 2011
  • It is difficult to discover frequent itemsets over the whole of a data stream, since a stream is infinite and continuous. A specialized data-mining method that reflects both the properties of the data and the requirements of users is therefore needed. In this paper, we propose FIMWB, a method that discovers frequent itemsets while reflecting the property that recent events are more important than old ones. The data stream is split into batches according to a given time interval, and our method assigns each batch a weight that reflects the user's interest in recent events. FP-Digraph then discovers the frequent itemsets from the result of FIMWB. Experimental results show that FIMWB reduces the generation of useless itemsets and that FP-Digraph is better suited to real-time environments than a tree-based method (FP-Tree).
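The batch-weighting idea can be sketched as below, assuming a geometric decay per batch age; FIMWB's exact weighting and the FP-Digraph structure are not given in the abstract, so this only illustrates weighted support over batches:

```python
from collections import defaultdict
from itertools import combinations

def weighted_frequent_itemsets(batches, decay=0.5, min_wsup=0.4):
    """Weight each batch so recent batches count more (weight = decay**age,
    age 0 = newest), then keep itemsets whose weighted support >= min_wsup."""
    counts = defaultdict(float)
    total_w = 0.0
    for age, batch in enumerate(reversed(batches)):
        w = decay ** age
        for txn in batch:
            total_w += w
            for size in (1, 2):                      # 1- and 2-itemsets only
                for itemset in combinations(sorted(txn), size):
                    counts[itemset] += w
    return {s: c / total_w for s, c in counts.items() if c / total_w >= min_wsup}

batches = [[{"a", "b"}, {"a"}],      # older batch, weight 0.5
           [{"b"}, {"a", "b"}]]      # newest batch, weight 1.0
result = weighted_frequent_itemsets(batches, decay=0.5, min_wsup=0.6)
```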

Efficient VLSI Architecture for Disparity Calculation based on Geodesic Support-weight (Geodesic Support-weight 기반 깊이정보 추출 알고리즘의 효율적인 VLSI 구조)

  • Ryu, Donghoon;Park, Taegeun
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.9 / pp.45-53 / 2015
  • Adaptive support-weight algorithms produce better disparity maps than generic area-based algorithms and can still be implemented as real-time systems. In this paper, we propose a real-time system based on geodesic support weights, which segment the objects in a matching window more accurately. We analyze the data scheduling for efficient hardware design and better performance, and propose a parallel architecture for the weight update, which is the longest-delay stage. The exponential function is implemented efficiently as a simple step function, justified by a careful error analysis. The proposed architecture is designed in Verilog HDL and synthesized with the Dongbu HiTek 0.18 um standard-cell library. The system shows a 2.22% error rate and runs at operating frequencies up to 260 MHz (25 fps) with 182K gates.
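The step-function replacement for the exponential can be illustrated as follows; the number of steps and the input range are assumptions for illustration, not values from the paper:

```python
import math

def step_exp(x, steps=16, x_max=4.0):
    """Piecewise-constant (step-function) approximation of exp(-x) on
    [0, x_max], the kind of table that maps to a small ROM in hardware."""
    if x >= x_max:
        return 0.0
    idx = int(x / x_max * steps)        # which step x falls into
    mid = (idx + 0.5) * x_max / steps   # evaluate exp at the step midpoint
    return math.exp(-mid)
```

An error analysis in the spirit of the paper's would compare `step_exp` against `math.exp` over the input range and pick the coarsest table that keeps the disparity error acceptable.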

A Contribution Culling Method for Fast Rendering of Complex Urban Scenes (복잡한 도시장면의 고속 렌더링을 위한 기여도 컬링 기법)

  • Lee, Bum-Jong;Park, Jong-Seung
    • Journal of Korea Game Society / v.7 no.1 / pp.43-52 / 2007
  • This article describes a new contribution-culling method for fast rendering of large, complex urban scenes. View-frustum culling is used for fast rendering of complex scenes. To support levels of detail, we subdivide the image region and construct a weighted quadtree: only objects visible from the current camera position contribute to the quadtree, and each object in it is assigned a weight proportional to the image area of its projection, so large buildings far away are less likely to be culled than small buildings nearby. The rendering time is nearly constant, independent of the number of visible objects. The proposed method was applied to a metropolitan region currently under development. Experimental results showed that the rendering quality of the proposed method is barely distinguishable from that of the original method, while the number of polygons is reduced by about 9%, and that the method is appropriate for real-time rendering of large, complex scenes.

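The contribution test described above can be sketched as follows, using size / distance^2 as a stand-in for projected image area; the threshold and the area proxy are illustrative assumptions:

```python
def contribution_cull(objects, camera, threshold=0.01):
    """Keep objects whose approximate projected area exceeds a contribution
    threshold; small objects far from the camera are culled."""
    kept = []
    for name, size, pos in objects:
        d2 = sum((p - c) ** 2 for p, c in zip(pos, camera)) or 1e-9
        weight = size / d2              # proxy for projected image area
        if weight >= threshold:
            kept.append(name)
    return kept

scene = [("big_far",    100.0, (0, 0, 50)),
         ("small_near",   1.0, (0, 0, 2)),
         ("small_far",    1.0, (0, 0, 50))]
visible = contribution_cull(scene, camera=(0, 0, 0))
```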

A Practical RWA Algorithm-based on Lookup Table for Edge Disjoint Paths (EDP들의 참조 테이블을 이용한 실용적 인 경로 설정 및 파장 할당 알고리즘)

  • 김명희;방영철;정민영;이태진;추현승
    • Journal of KIISE:Information Networking / v.31 no.2 / pp.123-130 / 2004
  • The routing and wavelength assignment (RWA) problem is an important issue in optical transport networks based on wavelength-division multiplexing (WDM). It is typically solved with a combination of linear programming and graph coloring, or with path-selection-based graph algorithms; such methods are either complex or rely heavily on heuristics. In this paper, we propose a novel and efficient approach that first obtains the maximal set of edge-disjoint paths (EDPs) for each source-destination demand pair. The EDPs are stored in a lookup table and used to update a weight matrix, and routes for the demand set are then determined in order by that weight matrix. Comprehensive computer simulation shows that the proposed algorithm uses a similar or smaller number of wavelengths, with significantly less execution time, than the bounded greedy approach (BGA) for EDP, which is currently known to be effective in practice.
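Obtaining edge-disjoint paths for a demand pair can be sketched with a greedy shortest-path approach (BFS a path, delete its edges, repeat); the paper's lookup table and weight-matrix update are not reproduced here:

```python
from collections import deque

def edge_disjoint_paths(graph, src, dst):
    """Greedy maximal set of edge-disjoint paths in an undirected graph
    given as {node: set(neighbors)}."""
    adj = {u: set(vs) for u, vs in graph.items()}  # work on a copy
    paths = []
    while True:
        prev = {src: None}                         # BFS for a shortest path
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj.get(u, ()):
                if v not in prev:
                    prev[v] = u
                    queue.append(v)
        if dst not in prev:
            return paths
        path, v = [], dst                          # rebuild the path
        while v is not None:
            path.append(v)
            v = prev[v]
        path.reverse()
        for a, b in zip(path, path[1:]):           # consume its edges
            adj[a].discard(b)
            adj[b].discard(a)
        paths.append(path)

g = {"A": {"B", "C"}, "B": {"A", "D"}, "C": {"A", "D"}, "D": {"B", "C"}}
paths = edge_disjoint_paths(g, "A", "D")           # two disjoint routes
```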

A Study on Quality evaluation Methodology Establishment of Anti-Virus Software based on the Real Test Environment (리얼 테스트 환경 기반의 안티바이러스 소프트웨어의 품질평가 방법론 정립에 관한 연구)

  • Maeng, Doo-Iyel;Park, Jong-Kae;Kim, Sung-Joo
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.3B / pp.440-452 / 2010
  • For the evaluation of software products, national and international organizations and laboratories have studied various quality methodologies based on the ISO/IEC quality assurance standards, but many issues remain in evaluating anti-virus software, which has special characteristics and complexity. In this paper, to establish a quality-evaluation methodology for anti-virus software that fulfills the requirements to a reasonable level, we define a process for deriving and quantifying the evaluation items, and objectify the weight information by analyzing the relative magnitudes of the factors. Based on the defined evaluation items and weights, we evaluated the quality of 70 kinds of publicly available anti-virus software collected from portal sites in a real test environment; the results, together with an empirical analysis of users' long-term experience, justify the chosen evaluation items and weights.
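The weighted aggregation of evaluation items can be sketched as a normalized weighted sum; the item names and weights below are purely hypothetical, since the paper derives its weights from a relative-magnitude analysis:

```python
def quality_score(scores, weights):
    """Aggregate quality score: sum of item scores weighted by
    normalized item weights."""
    total_w = sum(weights.values())
    return sum(weights[k] / total_w * scores[k] for k in scores)

scores  = {"detection": 90, "performance": 70, "usability": 80}   # hypothetical items
weights = {"detection": 0.5, "performance": 0.3, "usability": 0.2}
overall = quality_score(scores, weights)
```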

A Study of Recommending Service Using Mining Sequential Pattern based on Weight (가중치 기반의 순차패턴 탐사를 이용한 추천서비스에 관한 연구)

  • Cho, Young-Sung;Moon, Song-Chul;Ahn, Yeon S.
    • Journal of Digital Contents Society / v.15 no.6 / pp.711-719 / 2014
  • With the advent of the ubiquitous computing environment, accessing the wireless Internet anytime and anywhere through intelligent portable devices such as smartphones and the iPad has become part of everyday life. Recommendation services have become a key technology for presenting users with exactly the information they need, reducing the effort customers spend searching for items with high purchasability in e-commerce. Traditional association-rule mining ignores the differences among transactions. We therefore consider the importance of each type of merchandise or service and propose a new recommendation service using weight-based sequential pattern mining, to reflect purchase trends that change frequently over time as customers' needs shift across extremely diverse merchandise in e-commerce. To verify that the proposed system performs better than previous systems, we run experiments on a dataset collected from a cosmetics Internet shopping mall.
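Weight-based sequential-pattern support can be sketched as below, scaling plain sequence support by the pattern's average item weight; the weighting formula and item weights are assumptions for illustration:

```python
def contains_sequence(seq, pattern):
    """True if pattern occurs as an (order-preserving) subsequence of seq."""
    it = iter(seq)
    return all(item in it for item in pattern)

def weighted_support(sequences, pattern, item_weights):
    """Fraction of customer sequences containing the pattern, scaled by
    the pattern's average item weight."""
    matches = sum(contains_sequence(s, pattern) for s in sequences)
    avg_w = sum(item_weights.get(i, 1.0) for i in pattern) / len(pattern)
    return matches / len(sequences) * avg_w

purchases = [["lotion", "cream", "mask"],
             ["cream", "mask"],
             ["lotion", "mask"]]
support = weighted_support(purchases, ["cream", "mask"],
                           {"cream": 1.2, "mask": 0.8})
```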

Three-Dimensional Image Registration using a Locally Weighted-3D Distance Map (지역적 가중치 거리맵을 이용한 3차원 영상 정합)

  • Lee, Ho;Hong, Helen;Shin, Yeong-Gil
    • Journal of KIISE:Software and Applications / v.31 no.7 / pp.939-948 / 2004
  • In this paper, we propose a robust and fast image-registration technique for motion correction in brain CT-CT angiography of the same patient taken at different times. First, feature points are extracted from both images by 3D edge detection and converted into a locally weighted 3D distance map in the reference image. Second, we search for the optimal position where the cross-correlation of the two edge sets is maximized while the floating image is rigidly transformed toward the reference image; the optimum is declared when the maximum cross-correlation stops changing over a fixed number of iterations. Finally, the two images are registered by transforming the floating image to the optimal position. In the experiments, we evaluate accuracy and robustness on artificial images and perform visual inspection on clinical brain CT-CT angiography datasets. Thanks to the locally weighted 3D distance map, the proposed method registers two images at the optimal position robustly and rapidly, without converging to local maxima, even when only a few feature points are used.
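A locally weighted distance map can be illustrated in 2D (the paper works in 3D): each grid cell stores the smallest feature distance scaled by that feature's local weight, and registration would then maximize edge cross-correlation against such a map. The weighting scheme below is an assumption for illustration:

```python
def weighted_distance_map(shape, features, feature_weights):
    """Brute-force 2D distance map: each cell holds the minimum weighted
    Euclidean distance to a feature point."""
    h, w = shape
    dmap = [[float("inf")] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for (fy, fx), fw in zip(features, feature_weights):
                d = fw * ((y - fy) ** 2 + (x - fx) ** 2) ** 0.5
                dmap[y][x] = min(dmap[y][x], d)
    return dmap

dmap = weighted_distance_map((3, 3), [(0, 0), (2, 2)], [1.0, 2.0])
```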

A Stochastic Word-Spacing System Based on Word Category-Pattern (어절 내의 형태소 범주 패턴에 기반한 통계적 자동 띄어쓰기 시스템)

  • Kang, Mi-Young;Jung, Sung-Won;Kwon, Hyuk-Chul
    • Journal of KIISE:Software and Applications / v.33 no.11 / pp.965-978 / 2006
  • This paper implements an automatic Korean word-spacing system based on word recognition, using morpheme unigrams and the pattern that the categories of those unigrams form within a candidate word. Although previous Korean word-spacing models are easy to construct and time-efficient, problems remain, such as data sparseness and a critical memory footprint, which arise from the morpho-typological characteristics of Korean. To cope with both problems, our implementation uses the stochastic information of morpheme unigrams and their category patterns instead of word unigrams. A word's probability in a sentence is obtained from the morpheme probabilities and the weight of each morpheme's category within the category pattern of the candidate word. The category weights are trained to minimize the mean error between the observed probabilities of words and those estimated from the words' individual morpheme probabilities, weighted according to their categories' powers in the given word's category pattern.
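The weighted word probability described above can be sketched in log space; the morphemes, probabilities, and category weights below are hypothetical placeholders, not trained values:

```python
import math

def word_log_prob(morphemes, morpheme_prob, category_weight):
    """Log-probability of a candidate word: each morpheme's log-probability
    is scaled by the weight of its category in the word's category pattern."""
    return sum(category_weight[cat] * math.log(morpheme_prob[m])
               for m, cat in morphemes)

lp = word_log_prob([("school", "noun"), ("s", "suffix")],
                   {"school": 0.1, "s": 0.2},
                   {"noun": 1.0, "suffix": 0.5})
```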