• Title/Summary/Keyword: Filtering (필터링)

Search Results: 3,386, Processing Time: 0.028 seconds

Extraction of the Tree Regions in Forest Areas Using LIDAR Data and Ortho-image (라이다 자료와 정사영상을 이용한 산림지역의 수목영역추출)

  • Kim, Eui Myoung
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.21 no.2
    • /
    • pp.27-34
    • /
    • 2013
  • Growing interest in global warming has increased interest in forest resources as a means of reducing greenhouse gases. Until now, data on forest resources have been obtained by plotting from aerial photographs or satellite images. Image data alone, however, cannot accurately measure quantities such as tree height in dense forest areas. In this study, the authors present a data-processing method that isolates individual trees in forested areas using LIDAR data and ortho-images, yielding more efficient and accurate tree-height information. For the LIDAR data, a normalized digital surface model was generated and tree points were extracted via local maxima filtering; for the ortho-images, object-oriented image classification was applied to extract forest areas. The final tree points were then determined by combining the LIDAR and ortho-image results. Based on an experiment conducted in the Yongin area, the authors analyze the merits and demerits of using LIDAR data or ortho-images alone, and show that combining the two data sources yields information on individual trees within forested areas, verifying the efficiency of the presented method.
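The local maxima filtering step on the normalized DSM can be sketched as follows. This is a minimal illustration on a toy grid, not the authors' implementation; the window size and minimum-height threshold are assumptions.

```python
import numpy as np

def local_maxima_filter(ndsm, window=3, min_height=2.0):
    """Return (row, col) cells that are the strict maximum of their
    window x window neighborhood and taller than min_height."""
    r = window // 2
    padded = np.pad(ndsm, r, mode="constant", constant_values=-np.inf)
    peaks = []
    for i in range(ndsm.shape[0]):
        for j in range(ndsm.shape[1]):
            patch = padded[i:i + window, j:j + window]
            if ndsm[i, j] >= min_height and ndsm[i, j] == patch.max():
                # require a strict maximum: no other equally tall cell
                if (patch == ndsm[i, j]).sum() == 1:
                    peaks.append((i, j))
    return peaks

# toy 5x5 nDSM (heights in metres) with one clear tree crown
ndsm = np.array([
    [0.1, 0.2, 0.1, 0.0, 0.1],
    [0.2, 3.0, 4.5, 3.2, 0.2],
    [0.1, 3.1, 6.0, 3.4, 0.1],
    [0.0, 2.9, 3.3, 3.0, 0.2],
    [0.1, 0.1, 0.2, 0.1, 0.0],
])
print(local_maxima_filter(ndsm))  # the crown apex at (2, 2)
```

Each surviving peak is a candidate tree top; the height threshold suppresses ground noise, and the strict-maximum test avoids duplicate detections on flat crown plateaus.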

Improvement of SNPs detection efficient by reuse of sequences in Genotyping By Sequencing technology (유전체 서열 재사용을 이용한 Genotyping By Sequencing 기술의 단일 염기 다형성 탐지 효율 개선)

  • Baek, Jeong-Ho;Kim, Do-Wan;Kim, Junah;Lee, Tae-Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.10
    • /
    • pp.2491-2499
    • /
    • 2015
  • Recently, the most popular technique for determining the genotype, the genetic features of an individual organism, is GBS, which is based on SNPs called from sequences determined by NGS. For analyzing such sequences, TASSEL is the most widely used program for identifying genotypes. However, TASSEL has the limitation that it uses only part of the sequences obtained by NGS. To overcome this limitation, we tried to improve the efficiency with which the sequences are used. We constructed new data sets by checking quality, filtering the previously unused sequences to keep those with an error rate below 0.1%, and clipping the sequences according to the locations of the barcode and the enzyme site. As a result, SNP detection efficiency increased by over 17%. In this paper, we describe the method and the applied programs for detecting more SNPs from the previously discarded sequences.
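The quality-filtering and clipping steps can be sketched roughly like this. The Phred+33 encoding, the error-rate threshold, and the barcode/enzyme handling are illustrative assumptions, not the paper's actual pipeline.

```python
def phred_error_rate(qual_string, offset=33):
    """Mean per-base error probability from a Phred+33 quality string."""
    probs = [10 ** (-(ord(c) - offset) / 10) for c in qual_string]
    return sum(probs) / len(probs)

def clip_read(seq, barcode, enzyme_site):
    """Remove a leading barcode and the enzyme recognition site, if present."""
    if seq.startswith(barcode):
        seq = seq[len(barcode):]
    if seq.startswith(enzyme_site):
        seq = seq[len(enzyme_site):]
    return seq

def filter_reads(reads, barcode, enzyme_site, max_error=0.001):
    """Keep reads below the error-rate cutoff (0.1%), clipped for reuse."""
    kept = []
    for seq, qual in reads:
        if phred_error_rate(qual) <= max_error:
            kept.append(clip_read(seq, barcode, enzyme_site))
    return kept
```

The clipped, quality-checked reads would then be fed back into the genotyping step, which is the "reuse of sequences" the title refers to.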

CNVDAT: A Copy Number Variation Detection and Analysis Tool for Next-generation Sequencing Data (CNVDAT : 차세대 시퀀싱 데이터를 위한 유전체 단위 반복 변이 검출 및 분석 도구)

  • Kang, Inho;Kong, Jinhwa;Shin, JaeMoon;Lee, UnJoo;Yoon, Jeehee
    • Journal of KIISE:Databases
    • /
    • v.41 no.4
    • /
    • pp.249-255
    • /
    • 2014
  • Copy number variations (CNVs) are a recently recognized class of human structural variation and are associated with a variety of human diseases, including cancer. To find important cancer genes, researchers identify novel CNVs in patients with a particular cancer and analyze large amounts of genomic and clinical data. We present a tool called CNVDAT, which detects CNVs from NGS data and systematically analyzes the genomic and clinical data associated with the variations. CNVDAT consists of two modules, the CNV Detection Engine and the Sequence Analyser. The CNV Detection Engine extracts CNVs using the multi-resolution system of scale-space filtering, enabling detection of the types and exact locations of CNVs of all sizes even when the coverage level of the read data is low. The Sequence Analyser is a user-friendly program for viewing and comparing variation regions between tumor and matched normal samples. It also provides a complete analysis function for refGene and OMIM data, making it possible to discover CNV-gene-phenotype relationships. The CNVDAT source code is freely available from http://dblab.hallym.ac.kr/CNVDAT/.
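A single scale of the scale-space filtering idea, applied to a binned read-depth profile, might look like the following sketch. The Gaussian scale, the gain/loss ratio cutoffs, and the median baseline are assumptions for illustration, not CNVDAT's actual parameters (which scan multiple scales).

```python
import numpy as np

def cnv_segments(depth, sigma=2.0, gain=1.5, loss=0.5):
    """Smooth a per-bin read-depth profile at one Gaussian scale and
    report contiguous runs whose depth ratio marks a gain or a loss."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    baseline = np.median(depth)
    # edge-pad so the smoothed profile keeps its length and its ends
    padded = np.pad(depth.astype(float), radius, mode="edge")
    smooth = np.convolve(padded, kernel, mode="valid")
    ratio = smooth / baseline
    state = np.where(ratio >= gain, "gain",
                     np.where(ratio <= loss, "loss", "normal"))
    segments, start = [], 0
    for i in range(1, len(state) + 1):
        if i == len(state) or state[i] != state[start]:
            if state[start] != "normal":
                segments.append((start, i - 1, str(state[start])))
            start = i
    return segments
```

Running the same segmentation at several values of `sigma` and merging the results is what makes the approach multi-resolution: large CNVs survive heavy smoothing while small ones appear only at fine scales.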

A Customized Healthy Menu Recommendation Method Using Content-Based and Food Substitution Table (내용 기반 및 식품 교환 표를 이용한 맞춤형 건강식단 추천 기법)

  • Oh, Yoori;Kim, Yoonhee
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.6 no.3
    • /
    • pp.161-166
    • /
    • 2017
  • In recent times, many people suffer from nutritional imbalance, the deficient or excess intake of specific nutrients, despite the variety of available foods. Interest in health and diet has accordingly increased, leading to the emergence of various mobile applications. Most of these applications, however, only record the user's diet history, show simple statistics, and provide general information for a healthy diet. Users interested in healthy eating need recommendation services that reflect their food preferences and provide customized information. Hence, we propose a menu recommendation method that calculates the recommended calorie amount from the user's physical and activity profile and assigns a substitution unit to each food group. Our method also analyzes the user's food preferences from the food intake history, and satisfies the recommended intake units for each food group by exchanging in the user's preferred foods. The excellence of the proposed algorithm is demonstrated through precision, recall, a health index, and the harmonic average of the three measures, in comparison with a method that considers only the user's interest and the recommended substitution units. The proposed method provides menu recommendations that reflect both interest and personal health status, by which users can build and maintain healthy dietary habits.
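The evaluation described above, the harmonic average of precision, recall, and a health index, can be illustrated with a small sketch. The set-based definitions of the three measures are assumptions for illustration.

```python
def harmonic_mean(values):
    """Harmonic mean of strictly positive scores; it rewards balance,
    so one low score drags the average down sharply."""
    return len(values) / sum(1.0 / v for v in values)

def evaluate(recommended, preferred, healthy):
    """Precision/recall against preferred foods, a health index against
    a healthy-food list, and their harmonic mean."""
    rec, pref, heal = set(recommended), set(preferred), set(healthy)
    precision = len(rec & pref) / len(rec)
    recall = len(rec & pref) / len(pref)
    health = len(rec & heal) / len(rec)
    return precision, recall, harmonic_mean([precision, recall, health])
```

For example, recommending `["salad", "rice", "soup", "cake"]` to a user who prefers `["salad", "rice", "noodle"]` with healthy set `["salad", "soup", "rice"]` gives precision 0.5, recall 2/3, health index 0.75.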

Design of Sensor Middleware Architecture on Multi Level Spatial DBMS with Snapshot (스냅샷을 가지는 다중 레벨 공간 DBMS를 기반으로 하는 센서 미들웨어 구조 설계)

  • Oh, Eun-Seog;Kim, Ho-Seok;Kim, Jae-Hong;Bae, Hae-Young
    • Journal of Korea Spatial Information System Society
    • /
    • v.8 no.1 s.16
    • /
    • pp.1-16
    • /
    • 2006
  • Recently, human-centered computing environments, which allow users to concentrate on their own tasks without being aware of other changes, have been actively researched and developed. In such environments, however, the middleware deletes processed stream data in order to reduce the processing load of the massive information arriving from RFID sensors. This kind of middleware therefore cannot answer requests for the probabilities or statistics needed for data warehousing or data mining, nor repeated requests for important stream data that have already been discarded. In this paper, we design a sensor middleware architecture on a multi-level spatial DBMS with snapshots, and manage repeatedly required stream data to solve the reuse problem of historical stream data in current middleware. The system uses a disk database that manages the historical stream data filtered by the middleware, serving requests such as data mining or data warehousing that need historical stream information, and a memory database that manages highly reusable data as snapshots when stream data stored in the disk database are frequently reused. Furthermore, the system runs its memory-database management policy in cycles to maintain high reusability and fast service for users. The proposed system solves the problems of repeated requests for stream data and of policy-decision services over historical stream data in current middleware, and offers varied and fast data services by keeping highly reusable data in main-memory snapshots.
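The two-level disk/snapshot arrangement with a cyclic management policy might be sketched as follows. The frequency-based promotion rule is an assumption standing in for the paper's actual policy.

```python
from collections import Counter

class SnapshotStore:
    """Toy two-level store: all filtered stream records go to 'disk';
    a periodic policy keeps the most reused records in a memory snapshot."""

    def __init__(self, snapshot_size=2):
        self.disk = {}                 # stands in for the disk database
        self.snapshot = {}             # stands in for the memory database
        self.hits = Counter()
        self.snapshot_size = snapshot_size

    def put(self, key, record):
        self.disk[key] = record

    def get(self, key):
        self.hits[key] += 1
        if key in self.snapshot:       # fast path: memory snapshot
            return self.snapshot[key]
        return self.disk[key]          # slow path: disk database

    def refresh_snapshot(self):
        """The cyclic policy: promote the most frequently read keys."""
        hottest = [k for k, _ in self.hits.most_common(self.snapshot_size)]
        self.snapshot = {k: self.disk[k] for k in hottest}
```

Calling `refresh_snapshot()` on a timer models the paper's cycle-based memory-database management: frequently re-requested historical stream data stay resident in memory, while the rest remain only on disk.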

A Study on the Integration of Airborne LiDAR and UAV Data for High-resolution Topographic Information Construction of Tidal Flat (갯벌지역 고해상도 지형정보 구축을 위한 항공 라이다와 UAV 데이터 통합 활용에 관한 연구)

  • Kim, Hye Jin;Lee, Jae Bin;Kim, Yong Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.38 no.4
    • /
    • pp.345-352
    • /
    • 2020
  • To preserve and restore tidal flats and prevent safety accidents, it is necessary to construct tidal flat topographic information, including the exact location and shape of tidal creeks. In tidal flats, where field surveying is difficult to apply, airborne LiDAR surveying can provide accurate terrain data over a wide area, while UAV (Unmanned Aerial Vehicle) surveying can economically provide relatively high-resolution data. In this study, we propose a methodology for effectively generating high-resolution topographic information of tidal flats by integrating airborne LiDAR and UAV point clouds. To this end, automatic ICP (Iterative Closest Point) registration between the two datasets was conducted, and tidal creeks were extracted by applying the CSF (Cloth Simulation Filtering) algorithm. We then integrated the high-density UAV data for the tidal creeks with the airborne LiDAR data for the flat ground. A DEM (Digital Elevation Model) and tidal flat area and depth were generated from the integrated data to construct high-resolution topographic information for large-scale tidal flat mapping. As a result, the UAV data were registered without GCPs (Ground Control Points), and integrated data containing detailed topographic information of the tidal creeks were generated with a relatively small data size.
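The ICP registration step can be illustrated with a minimal translation-only variant. Real airborne LiDAR/UAV registration would also estimate rotation and use a spatial index for neighbor search, so treat this purely as a sketch of the iterate-match-update loop.

```python
import numpy as np

def icp_translation(source, target, iterations=20):
    """Minimal point-to-point ICP estimating only a translation,
    assuming the two clouds are already roughly rotation-aligned."""
    src = source.astype(float)
    shift = np.zeros(source.shape[1])
    for _ in range(iterations):
        # match: nearest target point for every source point (brute force)
        d = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        nearest = target[d.argmin(axis=1)]
        # update: move the source cloud by the mean residual
        step = (nearest - src).mean(axis=0)
        src += step
        shift += step
    return shift
```

With well-separated points and a small initial offset the correspondences are correct from the first iteration, and the loop converges immediately; in practice convergence depends on a good initial alignment.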

Text Filtering using Iterative Boosting Algorithms (반복적 부스팅 학습을 이용한 문서 여과)

  • Hahn, Sang-Youn;Zang, Byoung-Tak
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.4
    • /
    • pp.270-277
    • /
    • 2002
  • Text filtering is the task of deciding whether a document is relevant to a specified topic. As the Internet and the Web become widespread and the number of documents delivered by e-mail grows explosively, the importance of text filtering increases as well. The aim of this paper is to improve the accuracy of text filtering systems by using machine learning techniques. We apply AdaBoost algorithms to the filtering task. An AdaBoost algorithm generates and combines a series of simple hypotheses, each of which decides the relevance of a document to a topic on the basis of whether or not the document includes a certain word. We begin with an existing AdaBoost algorithm whose weak hypotheses output 1 or -1, and then extend it to use weak hypotheses with real-valued outputs, a recently proposed variant that improves error reduction rates and final filtering performance. Next, we attempt further improvement by setting the initial weights randomly according to the continuous Poisson distribution, executing AdaBoost, repeating these steps several times, and then combining all the hypotheses learned. This mitigates the overfitting that may occur when learning from a small amount of data. Experiments were performed on the real document collections used in TREC-8, a well-established text retrieval contest; the dataset includes Financial Times articles from 1992 to 1994. The experimental results show that AdaBoost with real-valued hypotheses outperforms AdaBoost with binary-valued hypotheses, and that AdaBoost iterated with random weights further improves filtering accuracy. A comparison with all the participants of the TREC-8 filtering task is also provided.
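The iterated boosting scheme might be sketched as below. The weak hypotheses are decision stumps on single word-presence features, and an exponential draw stands in for the paper's continuous Poisson initial weights (an assumption); the combination rule simply sums the weighted votes of every run.

```python
import numpy as np

def adaboost(X, y, rounds=10, rng=None, init="uniform"):
    """Binary AdaBoost with word-presence stumps (labels in {+1, -1}).
    init='poisson' randomizes the starting weights, as in the iterated
    variant (exponential draw used here as a stand-in assumption)."""
    n, d = X.shape
    if init == "poisson" and rng is not None:
        w = rng.exponential(1.0, n)
    else:
        w = np.ones(n)
    w = w / w.sum()
    ensemble = []
    for _ in range(rounds):
        best = None
        for j in range(d):              # try every word feature
            for s in (1, -1):           # and both polarities
                pred = np.where(X[:, j] == 1, s, -s)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, s, pred)
        err, j, s, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)   # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        ensemble.append((alpha, j, s))
        w = w * np.exp(-alpha * y * pred)       # reweight mistakes up
        w = w / w.sum()
    return ensemble

def predict(runs, X):
    """Combine the hypotheses learned in every boosting run."""
    score = np.zeros(len(X))
    for ensemble in runs:
        for alpha, j, s in ensemble:
            score += alpha * np.where(X[:, j] == 1, s, -s)
    return np.where(score >= 0, 1, -1)
```

Repeating `adaboost` with fresh random initial weights and pooling all hypotheses in `predict` is the paper's mitigation for overfitting on small training sets.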

Topic-Specific Mobile Web Contents Adaptation (주제기반 모바일 웹 콘텐츠 적응화)

  • Lee, Eun-Shil;Kang, Jin-Beom;Choi, Joong-Min
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.6
    • /
    • pp.539-548
    • /
    • 2007
  • Mobile content adaptation is a technology for effectively representing content originally built for desktop PCs on wireless mobile devices. Previous approaches to Web content adaptation are mostly device-dependent, and the transformation of content to suit a smaller device is done manually. Furthermore, the same content is provided to all users regardless of their individual preferences. As a result, users have difficulty selecting relevant information from a heavy volume of content, since the contextual information related to the content is not provided. To resolve these problems, this paper proposes an enhanced method of Web content adaptation for mobile devices. In our system, the adaptation process consists of four stages: block filtering, block title extraction, block content summarization, and personalization through learning. Learning is initiated when the user selects the full-content menu from the content summary page; as a result, personalization is realized by showing the relevant block at the top of the content list. A series of experiments evaluating the content adaptation on a number of Web sites, including online newspapers, gave satisfactory results both in block filtering accuracy and in user satisfaction with personalization.
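The block filtering and summarization stages can be sketched as follows. The keyword-overlap relevance test and the length-based truncation are illustrative assumptions, not the paper's learned adaptation.

```python
def adapt_page(blocks, topic_words, summary_len=80):
    """Keep only blocks relevant to the user's topic (block filtering),
    then truncate each block's text to a short summary for a small screen."""
    kept = []
    for title, text in blocks:
        words = set(text.lower().split()) | set(title.lower().split())
        if words & set(topic_words):               # relevance by overlap
            if len(text) <= summary_len:
                summary = text
            else:
                summary = text[:summary_len].rsplit(" ", 1)[0] + "..."
            kept.append((title, summary))
    return kept
```

In the full system, the personalization stage would additionally reorder `kept` so that blocks the user has learned to prefer appear first.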

LiDAR Ground Classification Enhancement Based on Weighted Gradient Kernel (가중 경사 커널 기반 LiDAR 미추출 지형 분류 개선)

  • Lee, Ho-Young;An, Seung-Man;Kim, Sung-Su;Sung, Hyo-Hyun;Kim, Chang-Hun
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.18 no.2
    • /
    • pp.29-33
    • /
    • 2010
  • The purpose of LiDAR ground classification is to achieve two goals at once: acquiring reliable ground points with high precision and describing the ground shape in detail. Despite many studies on optimized algorithms for this task, it remains very difficult to classify ground points and describe the ground shape from airborne LiDAR data, especially in densely forested areas such as Korea. Misclassification is mainly caused by the complex forest canopy hierarchy in Korea and by LiDAR point densities that are relatively coarse for ground classification. Moreover, much LiDAR surveying in South Korea is performed in summer, so the typical LiDAR point distribution differs greatly from that of Europe. This study therefore proposes an enhanced ground classification method that takes Korean land cover characteristics into account. First, highly confident candidate LiDAR points are designated as initial ground points using a big roller classification algorithm. Second, a weighted gradient kernel (WGK) algorithm is applied to find and include highly probable ground points from the remaining candidate points. The method is useful for reconstructing terrain deformed by misclassification, because it detects and includes important terrain model key points that describe the ground shape at the site. In particular, for the deformed bank sides of a river area, the WGK algorithm produced greatly improved classification and reconstruction results.
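The weighted gradient kernel idea, scoring a candidate point by an inverse-distance-weighted slope toward nearby already-classified ground points, might be sketched like this. The neighbor count and slope threshold are assumptions, not the paper's calibrated values.

```python
import math

def weighted_gradient(candidate, ground_points, k=3):
    """Inverse-distance-weighted slope from a candidate point to its
    k nearest already-classified ground points (a WGK-style score)."""
    cx, cy, cz = candidate
    neighbors = sorted(ground_points,
                       key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)[:k]
    num = den = 0.0
    for gx, gy, gz in neighbors:
        dist = math.hypot(gx - cx, gy - cy)
        if dist == 0:
            return 0.0                      # coincides with a ground point
        slope = abs(cz - gz) / dist
        w = 1.0 / dist                      # nearer neighbors weigh more
        num += w * slope
        den += w
    return num / den

def classify(candidates, ground, max_slope=0.3, k=3):
    """Promote candidates whose weighted gradient stays gentle."""
    return [c for c in candidates
            if weighted_gradient(c, ground, k) <= max_slope]
```

A low-lying point on a gentle slope passes the test and is promoted to ground, while a canopy return high above its neighbors produces a steep weighted gradient and stays excluded.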

Implementation of an Efficient Microbial Medical Image Retrieval System Applying Knowledge Databases (지식 데이타베이스를 적용한 효율적인 세균 의료영상 검색 시스템의 구현)

  • Shin Yong Won;Koo Bong Oh
    • Journal of the Korea Society of Computer and Information
    • /
    • v.10 no.1 s.33
    • /
    • pp.93-100
    • /
    • 2005
  • This study designs and implements an efficient microbial medical image retrieval system based on the knowledge and content of the images, which supports more accurate decisions on colonies as well as efficient training of new technicians. For this, we first address overall inference to set up a flexible search path using a rule base, in order to reduce the time required for microbial identification by searching for the fastest path through the identification phases based on heuristic knowledge. Next, we propose a color feature extraction method that extracts color feature vectors of visual content from an input microbial image, in particular a bacteria image, using the HSV color model. In addition, for better retrieval performance on large microbial databases, we present an integrated indexing technique that combines a B+-tree for indexing simple attributes, an inverted file structure for the list of medical keywords, and a scan-based filtering method for the high-dimensional color feature vectors. The implemented system shows that complex microbial images can be managed and retrieved effectively using knowledge and the visual content itself. We expect the proposed system to rapidly decrease the learning time of novice technicians by well organizing the knowledge of clinical fields.
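The integrated indexing idea, an inverted file over medical keywords combined with a scan-based filter on color feature vectors, can be sketched as follows. The B+-tree attribute index is omitted for brevity, and all names and parameters here are hypothetical.

```python
import math
from collections import defaultdict

class ImageIndex:
    """Toy hybrid index: an inverted file over keywords plus a
    scan-based distance filter on color feature vectors."""

    def __init__(self):
        self.inverted = defaultdict(set)   # keyword -> image ids
        self.features = {}                 # image id -> color vector

    def add(self, image_id, keywords, color_vec):
        for kw in keywords:
            self.inverted[kw].add(image_id)
        self.features[image_id] = color_vec

    def search(self, keyword, query_vec, radius):
        """Keyword stage narrows the candidates; the scan stage then
        filters them by Euclidean distance in color-feature space."""
        hits = []
        for image_id in self.inverted.get(keyword, ()):
            v = self.features[image_id]
            dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(v, query_vec)))
            if dist <= radius:
                hits.append(image_id)
        return sorted(hits)
```

Because the inverted file shrinks the candidate set before any vector arithmetic runs, the expensive high-dimensional scan touches only keyword-relevant images, which is the point of combining the two structures.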
