• Title/Summary/Keyword: Filtering

Search Results: 3,392 (processing time: 0.036 seconds)

A 5.4Gb/s Clock and Data Recovery Circuit for Graphic DRAM Interface (그래픽 DRAM 인터페이스용 5.4Gb/s 클럭 및 데이터 복원회로)

  • Kim, Young-Ran;Kim, Kyung-Ae;Lee, Seung-Jun;Park, Sung-Min
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.44 no.2
    • /
    • pp.19-24
    • /
    • 2007
  • With recent advances in high-speed, multi-gigabit data transmission, serial links have been adopted in industry more widely than parallel links. Since a parallel link forces its transmitter to send both the data and the clock to the receiver at the same time, it leads to hardware complexity at high data rates, large power consumption, and high cost. A serial link, in contrast, allows the transmitter to send data alone, with no synchronized clock information; the clock and data recovery (CDR) circuit therefore becomes a crucial building block. In this paper, a 5.4 Gb/s half-rate bang-bang CDR is designed for high-speed graphic DRAM interface applications. The CDR consists of a half-rate bang-bang phase detector (PD), a current-mirror charge pump, a second-order loop filter, and a 4-stage differential ring-type VCO. The PD automatically retimes and demultiplexes the data, generating two 2.7 Gb/s sequences. The proposed circuit is realized in a 66 nm CMOS process. With a $2^{13}-1$ pseudo-random bit sequence (PRBS) input, post-layout simulations show 10 ps RMS clock jitter and 40 ps peak-to-peak retimed data jitter, with a power dissipation of 80 mW from a single 1.8 V supply.

A Study on Personalized Advertisement System Using Web Mining (웹 마이닝을 이용한 개인 광고기법에 관한 연구)

  • 김은수;송강수;이원돈;송정길
    • Journal of the Korea Society of Computer and Information
    • /
    • v.8 no.4
    • /
    • pp.92-103
    • /
    • 2003
  • Recently, with the development of electronic commerce and the rapid increase in Internet users, a great many advertisements are served online. However, most advertisement services simply push advertisements one way rather than analyzing users' inclinations; many websites therefore want a personalized advertisement service, and pursue it through research on and analysis of their server logs. This paper analyzes a user's preferences and inclinations from the log data of the local system rather than from server log data. It divides the visited sites into categories, assigns a weight to each category, and on that basis proposes a personalized advertisement system. The user-preference analysis, one part of the collaborative filtering family of web personalization techniques, uses the categories of visited sites together with the number of visits to each site to model the user's behavior in a mixed form. The user's degree of preference is expressed as a vector; and since inclinations change over time rather than being a one-off result, the proposed technique feeds newly analyzed data back into the model so that the preference can be continuously updated and applied. Based on this result, a method is presented that selects advertisements from the relevant category and provides a personalized advertisement service by applying the user-inclination analysis to the selected advertisements.


A proper folder recommendation technique using frequent itemsets for efficient e-mail classification (효과적인 이메일 분류를 위한 빈발 항목집합 기반 최적 이메일 폴더 추천 기법)

  • Moon, Jong-Pil;Lee, Won-Suk;Chang, Joong-Hyuk
    • Journal of the Korea Society of Computer and Information
    • /
    • v.16 no.2
    • /
    • pp.33-46
    • /
    • 2011
  • Since e-mail has become an important means of communication and information sharing, there has been much effort to classify e-mails efficiently by their contents. E-mails vary in length and style, the words used in them are often irregular, and the criteria for classifying them are subjective. As a result, it is quite difficult to adapt conventional text classification techniques to e-mail classification. Commercial e-mail programs classify e-mail with simple text filtering in the e-mail client, and most previous studies on automatic e-mail classification, which use the probability-based Naive Bayes technique to improve accuracy, target e-mail in English. This paper proposes a personalized recommendation technique for e-mail in Korean using the data mining technique of frequent patterns. The proposed technique consists of two phases: pre-processing the e-mails in an e-mail folder and generating a profile for the folder. The generated profile is used to classify an incoming e-mail into the most appropriate folder under the user's subjective criteria. An e-mail classification system applying the proposed technique is also implemented.

Application of Residual Statics to Land Seismic Data: traveltime decomposition vs stack-power maximization (육상 탄성파자료에 대한 나머지 정적보정의 효과: 주행시간 분해기법과 겹쌓기제곱 최대화기법)

  • Sa, Jinhyeon;Woo, Juhwan;Rhee, Chulwoo;Kim, Jisoo
    • Geophysics and Geophysical Exploration
    • /
    • v.19 no.1
    • /
    • pp.11-19
    • /
    • 2016
  • Two representative residual statics methods, traveltime decomposition and stack-power maximization, are discussed in terms of their application to land seismic data. For model data with synthetic shot/receiver statics (time shifts) applied and random noise added, the continuity of reflection events is much improved by the stack-power maximization method, and the derived time shifts are approximately equal to the synthetic statics. Optimal parameters for residual statics (maximum allowable shift, correlation window, number of iterations) are effectively chosen using diagnostic displays of the CSP (common shot point) stack and the CRP (common receiver point) stack as well as the CMP gather. In addition to the removal of long-wavelength time shifts by refraction statics prior to residual statics, processing steps of f-k filtering, predictive deconvolution, and time-variant spectral whitening are employed to attenuate noise and thereby minimize error in the correlation process. The reflectors, including the horizontal reservoir layer, appear more clearly in the variable-density section after repicking the velocities following residual statics and inverse NMO correction.

Design and Implementation of Data Distribution Management Module for IEEE 1516 HLA/RTI (IEEE 1516 HLA/RTI 표준을 만족하는 데이터 분산 관리 모듈의 설계 및 구현)

  • Ahn, Jung-Hyun;Hong, Jeong-Hee;Kim, Tag-Gon
    • Journal of the Korea Society for Simulation
    • /
    • v.17 no.2
    • /
    • pp.21-29
    • /
    • 2008
  • The High Level Architecture (HLA) specifies a framework for interoperation between heterogeneous simulators, and the Run-Time Infrastructure (RTI) is an implementation of the HLA Interface Specification. The Data Distribution Management (DDM) services, one category of the IEEE 1516 HLA/RTI management services, control filters for data transmission and reception to reduce the data volume exchanged among simulators. In this paper, we propose a design concept for DDM and show its implementation in a light-weight RTI. The design goal is to minimize the total volume of messages generated by each federate and the federation process, based on the rate at which each RTI service is executed: the data transfer mechanism is applied differently according to each service's execution rate. A federate usually publishes or subscribes data when it starts, then constantly updates the data and modifies the associated regions as it advances its simulation time. The proposed DDM design therefore provides fast update and region modification in exchange for more complex publish and subscribe services. We describe how the proposed DDM is processed in IEEE 1516 HLA/RTI and experiment with various scenarios, modifying regions, changing the overlap ratio, and increasing the data volume.


A Study of Visualizing Relational Information - In Mitologia Project - (관계형 정보의 시각화에 관한 연구 - 미톨로지아 프로젝트를 중심으로 -)

  • Jang, Seok-Hyun;Hwang, Hyo-Won;Lee, Kyung-Won
    • Journal of the HCI Society of Korea
    • /
    • v.1 no.1
    • /
    • pp.73-80
    • /
    • 2006
  • Mitologia is a project on visualizing relations of information in a user-oriented way. Most information encountered in daily life has invisible relations with other information. By analyzing the common attributes and relations of information, we can not only measure its importance but also grasp its overall properties. Human relations in particular are a major concern of social network analysis, which offers several visualization methodologies based on analyzing the relations of individuals in a society. We applied social network theory to grasp the relationships between characters in Greek mythology, which represents a limited society. Current social network analysis tools, however, present information in a one-sided way because they ignore user-oriented design. Mitologia attempts to suggest a visual structure model that is more effective and easier to understand for analyzing such data. We extracted connections among the myth characters by evaluating their classes, frequencies of appearance, and emotional links, and improved users' understanding by furnishing appropriate interactions with the information. The initial interface offers four kinds of indexes that help users access character nodes easily, while a zoom-in function reveals detailed relations. This zoom-in differs from usual filtering methods: it makes irrelevant information invisible so that users can find the characters' relations more easily and quickly. The project suggests a layout that shows the overall information relationships and, at the same time, appropriate interactions for presenting detailed information.


Extraction of the Tree Regions in Forest Areas Using LIDAR Data and Ortho-image (라이다 자료와 정사영상을 이용한 산림지역의 수목영역추출)

  • Kim, Eui Myoung
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.21 no.2
    • /
    • pp.27-34
    • /
    • 2013
  • Due to increased interest in global warming, interest in forest resources for reducing greenhouse gases has also increased. Until now, data on forest resources have been obtained from aerial photographs or satellite images by means of plotting. The use of image data alone is disadvantageous, however, because measurements such as tree height in dense forest areas lack accuracy. In this context, this study presents a data-processing method that isolates individual trees in forested areas using LIDAR data and ortho-images, providing more efficient and accurate tree-height data. For the LIDAR processing, a normalized digital surface model is generated and tree points are extracted via local maxima filtering; to extract the forest areas, object-oriented image classification is applied to the ortho-images. The final tree points are derived by combining the LIDAR and ortho-image results. Based on an experiment conducted in the Yongin area, the merits and demerits of methods using either LIDAR data or ortho-images alone are analyzed, and information on individual trees within forested areas is obtained by combining the two data sources, verifying the efficiency of the presented method.

Improvement of SNPs detection efficient by reuse of sequences in Genotyping By Sequencing technology (유전체 서열 재사용을 이용한 Genotyping By Sequencing 기술의 단일 염기 다형성 탐지 효율 개선)

  • Baek, Jeong-Ho;Kim, Do-Wan;Kim, Junah;Lee, Tae-Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.10
    • /
    • pp.2491-2499
    • /
    • 2015
  • Recently, the most popular technique for determining the genotype, the genetic makeup of an individual organism, is GBS (Genotyping By Sequencing), which is based on SNPs found in sequences determined by NGS. For analyzing GBS sequences, TASSEL is the most widely used program for identifying genotypes, but it has the limitation that it uses only part of the sequences obtained by NGS. We tried to improve the efficiency with which the sequences are used in order to overcome this limitation. To do so, we constructed new data sets by checking quality, filtering the previously unused sequences to those with an error rate below 0.1%, and clipping the sequences according to the locations of the barcode and the restriction-enzyme site. As a result, SNP detection efficiency increased by over 17%. In this paper, we present this method and the programs applied, which detect more SNPs by reusing the previously discarded sequences.

CNVDAT: A Copy Number Variation Detection and Analysis Tool for Next-generation Sequencing Data (CNVDAT : 차세대 시퀀싱 데이터를 위한 유전체 단위 반복 변이 검출 및 분석 도구)

  • Kang, Inho;Kong, Jinhwa;Shin, JaeMoon;Lee, UnJoo;Yoon, Jeehee
    • Journal of KIISE:Databases
    • /
    • v.41 no.4
    • /
    • pp.249-255
    • /
    • 2014
  • Copy number variations (CNVs) are a recently recognized class of human structural variation and are associated with a variety of human diseases, including cancer. To find important cancer genes, researchers identify novel CNVs in patients with a particular cancer and analyze large amounts of genomic and clinical data. We present a tool called CNVDAT, which detects CNVs from NGS data and systematically analyzes the genomic and clinical data associated with the variations. CNVDAT consists of two modules, the CNV Detection Engine and the Sequence Analyser. The CNV Detection Engine extracts CNVs using the multi-resolution system of scale-space filtering, enabling detection of the types and exact locations of CNVs of all sizes even when the coverage of the read data is low. The Sequence Analyser is a user-friendly program for viewing and comparing variation regions between tumor and matched normal samples; it also provides a complete analysis function for refGene and OMIM data, making it possible to discover CNV-gene-phenotype relationships. The CNVDAT source code is freely available from http://dblab.hallym.ac.kr/CNVDAT/.

A Customized Healthy Menu Recommendation Method Using Content-Based and Food Substitution Table (내용 기반 및 식품 교환 표를 이용한 맞춤형 건강식단 추천 기법)

  • Oh, Yoori;Kim, Yoonhee
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.6 no.3
    • /
    • pp.161-166
    • /
    • 2017
  • In recent times, many people suffer from nutritional imbalance, lacking or over-consuming specific nutrients despite the variety of available foods. Accordingly, interest in health and diet has increased, leading to the emergence of various mobile applications. Most of these applications, however, only record the user's diet history, show simple statistics, and provide general information about healthy eating. Users interested in healthy eating need recommendation services that reflect their food interests and provide customized information. Hence, we propose a menu recommendation method that calculates the recommended calorie amount from the user's physical and activity profile and assigns a recommended number of substitution units to each food group. Our method also analyzes the user's food preferences from food intake history, and satisfies the recommended intake units for each food group by exchanging in the user's preferred foods. The merit of the proposed algorithm is demonstrated through precision, recall, a health index, and the harmonic mean of these three measures, in comparison with another method that considers user interest and the recommended substitution units. The proposed method provides menu recommendations reflecting both interest and personal health status, by which users can improve and maintain healthy dietary habits.