• Title/Summary/Keyword: Automatic Extraction

887 search results

AUTOMATIC ROAD NETWORK EXTRACTION USING LIDAR RANGE AND INTENSITY DATA

  • Kim, Moon-Gie; Cho, Woo-Sug
    • Proceedings of the KSRS Conference / 2005.10a / pp.79-82 / 2005
  • The need for road data keeps growing as roads are repaired and newly constructed in many areas, and as governments, cities, and regions develop, updating and acquiring road data for GIS (Geographical Information System) becomes essential. In this study, range data (3D ground coordinate system data) and intensity data from a single LiDAR sensor are fused for road extraction, after which digital image processing methods are applied. LiDAR intensity data has only recently begun to be studied, and this work demonstrates its potential for road extraction. Because intensity and range data are acquired at the same time, LiDAR avoids the registration problems of multi-sensor data fusion. Intensity data is also already geocoded, at the same scale as the real world, and can be used to produce ortho-photos. Finally, a quantitative and qualitative analysis compares the extracted road image with a 1:1,000 digital map. (A minimal sketch of this kind of range-and-intensity filtering follows this entry.)

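The entry above fuses LiDAR range (height) and intensity data before applying image-processing steps. The sketch below is a minimal illustration of that fusion idea only, assuming both inputs have already been rasterised to co-registered grids; the thresholds, grid sizes, and the `road_mask` helper are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the paper's actual pipeline): fuse LiDAR-derived
# rasters -- a height grid from range data and an intensity grid -- into
# a rough road mask. Grids and thresholds are hypothetical placeholders.
import numpy as np

def road_mask(height, intensity,
              max_slope=0.08,            # roads are nearly flat (height units per cell)
              intensity_range=(5, 40)):  # asphalt returns are typically weak
    """Return a boolean raster that is True where cells look road-like."""
    # Local slope from the height grid (range data rasterised to a surface model).
    gy, gx = np.gradient(height.astype(float))
    slope = np.hypot(gx, gy)

    # Intensity gate: keep weak, asphalt-like returns.
    lo, hi = intensity_range
    flat = slope < max_slope
    dark = (intensity >= lo) & (intensity <= hi)
    return flat & dark

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    height = rng.normal(100.0, 0.02, (200, 200))   # synthetic, nearly flat patch
    intensity = rng.integers(0, 255, (200, 200))   # synthetic intensity raster
    mask = road_mask(height, intensity)
    print(f"road-like cells: {mask.mean():.1%}")
```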

New Framework for Automated Extraction of Key Frames from Compressed Video

  • Kim, Kang-Wook; Kwon, Seong-Geun
    • Journal of Korea Multimedia Society / v.15 no.6 / pp.693-700 / 2012
  • The effective extraction of key frames from a video stream is an essential task for summarizing and representing the content of a video. Accordingly, this paper proposes a new and fast method for extracting key frames from compressed video. In the proposed approach, the video sequence is first segmented into elementary content units, called shots; key frame extraction then assigns a number of key frames to each shot and distributes them over the shot using a probabilistic approach to locate their optimal positions. The main advantage of the proposed method is that no time-consuming computations are needed to distribute the key frames within the shots, and the key frame extraction procedure is completely automatic. Furthermore, the set of key frames is independent of any subjective thresholds or manually set parameters. (A rough sketch of budgeted key-frame placement is given below.)
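
As a rough illustration of budgeted key-frame placement (not the authors' probabilistic formulation), the sketch below allocates a key-frame budget across shots in proportion to their activity and places key frames at even quantiles of each shot's cumulative activity. The activity scores, shot lengths, and function names are assumed for the example.

```python
# Minimal sketch: proportional key-frame allocation across shots, then
# placement at even quantiles of each shot's cumulative activity.
# Activity scores are assumed to be precomputed (e.g. frame differences).
import numpy as np

def allocate_keyframes(shot_activity, total_keyframes):
    """shot_activity: list of 1-D activity arrays, one per shot."""
    weights = np.array([a.sum() for a in shot_activity], dtype=float)
    weights /= weights.sum()
    return np.maximum(1, np.round(weights * total_keyframes).astype(int))

def place_keyframes(activity, count):
    """Pick frame indices where cumulative activity crosses even quantiles."""
    cum = np.cumsum(activity)
    cum /= cum[-1]
    targets = (np.arange(count) + 0.5) / count
    return [int(np.searchsorted(cum, t)) for t in targets]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    shots = [rng.random(n) for n in (120, 45, 300)]   # synthetic shots
    counts = allocate_keyframes(shots, total_keyframes=10)
    for i, (act, c) in enumerate(zip(shots, counts)):
        print(f"shot {i}: {c} key frames at {place_keyframes(act, c)}")
```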

Comparison of Free Amino Acids in Soybean Paste (Doenjang) by Different Extraction Solvents and Analytical Methods (추출 용매와 분석 기법에 따른 된장의 유리아미노산 비교)

  • Kang, Ok-Ju
    • Korean journal of food and cookery science / v.23 no.1 s.97 / pp.150-155 / 2007
  • This work was conducted to establish a rapid, accurate, and precise procedure for free amino acid analysis in Doenjang using HPLC-OPA (high performance liquid chromatography with o-phthalaldehyde derivatization) and AAA (automatic amino acid analyzer) methods. Sample extraction with water, 0.1 M perchloric acid, and 0.1% meta-phosphoric acid was also compared. The optimal extraction solvent was 0.1% meta-phosphoric acid for both the HPLC-OPA and AAA methods. Good recoveries for glycine and methionine were observed with 0.1% meta-phosphoric acid extraction and the HPLC-OPA method. Method precisions (% relative standard deviation) for the free amino acids ranged from 1.62% to 8.27%, with the HPLC-OPA method and water extraction showing the lowest value, 1.62%. Inhibition rates of the free amino acids in Doenjang were greatest when NaCl was added at a 1% concentration.

Extraction Transformation Transportation (ETT) System Design and Implementation for Extracting Heterogeneous Data in a Data Warehouse (데이터웨어하우스에서 이질적 형태를 가진 데이터의 추출을 위한 Extraction Transformation Transportation(ETT) 시스템 설계 및 구현)

  • 여성주; 왕지남
    • Journal of Korean Society of Industrial and Systems Engineering / v.24 no.67 / pp.49-60 / 2001
  • A data warehouse (DW) manages all of an enterprise's information and delivers specific information to users. However, building an effective DW system can be difficult because of the variety of computing facilities, databases, and operating systems involved; such heterogeneous system environments make it harder to extract data and to provide the right information to users in real time. Data inconsistency across non-integrated legacy systems is also common, which calls for effective and efficient control of the data extraction flow as well as data cleansing. We design an integrated, automatic ETT (Extraction Transformation Transportation) system to control the data extraction flow and suggest an implementation methodology. A detailed analysis and design specify the proposed ETT approach, together with a real implementation. (A minimal ETL-style pipeline is sketched below.)

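To make the extraction-transformation-transportation flow concrete, here is a minimal ETL-style sketch under assumed inputs: two heterogeneous sources (an in-memory CSV export and an SQLite "legacy" table) are extracted, cleansed, and loaded into a warehouse table. The table names, columns, and cleansing rules are hypothetical and far simpler than the system described above.

```python
# Minimal ETT-style sketch, not the paper's system: extract from two
# heterogeneous sources, cleanse the records, load into a warehouse table.
import csv
import io
import sqlite3

def extract_csv(fileobj):
    """Extract from a CSV export (one heterogeneous source)."""
    for row in csv.DictReader(fileobj):
        yield {"customer": row["name"], "amount": row["amt"]}

def extract_legacy(conn):
    """Extract from a legacy operational database (another source)."""
    for name, amount in conn.execute("SELECT cust_name, amount FROM orders"):
        yield {"customer": name, "amount": amount}

def transform(record):
    """Cleanse: trim names, coerce amounts, drop unusable rows."""
    name = (record["customer"] or "").strip().upper()
    try:
        amount = float(record["amount"])
    except (TypeError, ValueError):
        return None
    return (name, amount) if name else None

def load(dw, records):
    """Transport the cleansed rows into the warehouse fact table."""
    dw.execute("CREATE TABLE IF NOT EXISTS fact_sales (customer TEXT, amount REAL)")
    dw.executemany("INSERT INTO fact_sales VALUES (?, ?)", records)
    dw.commit()

if __name__ == "__main__":
    legacy = sqlite3.connect(":memory:")
    legacy.execute("CREATE TABLE orders (cust_name TEXT, amount TEXT)")
    legacy.execute("INSERT INTO orders VALUES (' acme ', '19.90')")
    csv_export = io.StringIO("name,amt\n beta ,7.5\n")

    extracted = list(extract_csv(csv_export)) + list(extract_legacy(legacy))
    cleansed = [r for r in map(transform, extracted) if r is not None]
    dw = sqlite3.connect(":memory:")
    load(dw, cleansed)
    print(dw.execute("SELECT * FROM fact_sales").fetchall())
```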

Automatic Extraction of Stable Visual Landmarks for a Mobile Robot under Uncertainty (이동로봇의 불확실성을 고려한 안정한 시각 랜드마크의 자동 추출)

  • Moon, In-Hyuk
    • Journal of Institute of Control, Robotics and Systems / v.7 no.9 / pp.758-765 / 2001
  • This paper proposes a method to automatically extract stable visual landmarks from sensory data. Given a 2D occupancy map, a mobile robot first extracts vertical line features that are distinct and lie on vertical planar surfaces, because such features are expected to be observed reliably from various viewpoints. Since feature information such as position and length is uncertain due to vision and motion errors, the robot then reduces the uncertainty by matching the planar surface containing the features to the map. As a result, the robot obtains modeled, stable visual landmarks from the extracted features. This extraction process is performed on-line to adapt to actual changes in lighting and scene depending on the robot's view. Experimental results in various real scenes show the validity of the proposed method. (A minimal map-matching filter is sketched below.)

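A minimal sketch of the map-matching filter described above (not the paper's implementation): a vertical-line feature is kept as a stable landmark only if its positional uncertainty is small and its estimated position falls on an occupied cell of the 2D occupancy map. The grid, feature format, and thresholds are assumptions for the example.

```python
# Minimal sketch: keep only features that are well localised and lie on a
# mapped planar surface (an occupied cell of a 2-D occupancy grid).
import numpy as np

def stable_landmarks(features, occupancy, cell_size=0.1, sigma_max=0.25):
    """features: list of dicts with 'xy' (metres) and 'sigma' (std. dev., metres)."""
    stable = []
    for f in features:
        if f["sigma"] > sigma_max:          # too uncertain to be a landmark
            continue
        col, row = (np.asarray(f["xy"]) / cell_size).astype(int)
        h, w = occupancy.shape
        if 0 <= row < h and 0 <= col < w and occupancy[row, col]:
            stable.append(f)                # lies on a mapped planar surface
    return stable

if __name__ == "__main__":
    grid = np.zeros((50, 50), dtype=bool)
    grid[:, 20] = True                      # a wall at x = 2.0 m
    feats = [{"xy": (2.0, 1.5), "sigma": 0.1},
             {"xy": (3.3, 1.5), "sigma": 0.1},   # not on any wall
             {"xy": (2.0, 4.0), "sigma": 0.6}]   # too uncertain
    print(len(stable_landmarks(feats, grid)))    # -> 1
```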

A Modified Iterative N-FINDR Algorithm for Fully Automatic Extraction of Endmembers from Hyperspectral Imagery (초분광 영상의 endmember 자동 추출을 위한 수정된 Iterative N-FINDR 기법 개발)

  • Kim, Kwang-Eun
    • Korean Journal of Remote Sensing / v.27 no.5 / pp.565-572 / 2011
  • A modified iterative N-FINDR algorithm is developed for fully automatic extraction of endmembers from hyperspectral image data. The algorithm combines the advantages of the iterative N-FINDR technique and the iterative error analysis technique. Experiments on simulated hyperspectral imagery show that the optimal number of endmembers can be decided automatically, and the extracted endmembers and the resulting abundance-fraction maps demonstrate the potential of the proposed algorithm. Further studies are needed to verify its applicability to real hyperspectral imagery, where the absence of pure pixels is common. (The core volume-maximization step is sketched below.)
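
The sketch below shows only the core N-FINDR idea, iteratively swapping candidate pixels to maximise the simplex volume in a PCA-reduced space; it does not include the paper's modification or the automatic selection of the number of endmembers. The array shapes and parameter names are assumed.

```python
# Minimal N-FINDR-style sketch: volume maximisation by pixel replacement.
import numpy as np

def simplex_volume(points):
    """points: (p, p-1) matrix of endmember candidates in reduced space."""
    p = points.shape[0]
    m = np.vstack([np.ones(p), points.T])          # (p, p) augmented matrix
    return abs(np.linalg.det(m))

def nfindr(pixels, n_endmembers, n_iter=3, seed=0):
    """pixels: (N, bands) array. Returns indices of the extracted endmembers."""
    # Reduce to n_endmembers - 1 dimensions with PCA (via SVD).
    x = pixels - pixels.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    reduced = x @ vt[: n_endmembers - 1].T

    rng = np.random.default_rng(seed)
    idx = list(rng.choice(len(pixels), n_endmembers, replace=False))
    best = simplex_volume(reduced[idx])
    for _ in range(n_iter):                        # iterative replacement passes
        for j in range(n_endmembers):
            for i in range(len(pixels)):
                trial = idx.copy()
                trial[j] = i
                vol = simplex_volume(reduced[trial])
                if vol > best:
                    best, idx = vol, trial
    return idx

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cube = rng.random((500, 10))                   # 500 synthetic pixels, 10 bands
    print(nfindr(cube, n_endmembers=4))
```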

A Study on Design and Implementation of Automatic Product Information Indexing and Retrieval System for Online Comparison Shopping on the Web (웹 상의 온라인 비교 쇼핑을 위한 상품 정보 자동 색인 및 검색 시스템의 설계 및 구현에 대한 연구)

  • 강대기; 이제선; 함호상
    • The Journal of Society for e-Business Studies / v.3 no.2 / pp.57-71 / 1998
  • In this paper, we review shopping-agent and directory-service approaches to online comparison shopping on the web, and propose an information indexing and retrieval system, named InfoEye, with a new method for automatically extracting product information. The method is based on knowledge about how product information is presented on the web: it exploits the fact that online stores display their products in easy-to-browse ways, together with heuristics derived from analyses of the look and feel of product information on domestic online stores. During indexing, the method extracts product information from Hypertext Markup Language (HTML) documents collected from online stores by a mirroring robot. InfoEye has been developed to a readily usable stage, and the technology has been transferred to the Webnara commercial shopping engine. The proposed system acts as a shopping expert for customers by providing information on reasonable product prices from dozens of online stores, saving shopping time, announcing new products, and comparing quality factors of products in the same category. (A toy presentation-heuristic extractor is sketched below.)

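As a toy version of a presentation-based heuristic (not InfoEye itself), the sketch below walks the rows of an HTML product table and treats any row containing a price-looking cell as a product record. The markup, the Korean won price pattern, and the class name are hypothetical.

```python
# Minimal sketch: extract (name, price) pairs from product-table rows by
# recognising price-looking cells, a simple presentation heuristic.
import re
from html.parser import HTMLParser

PRICE = re.compile(r"[\d,]+\s*원")        # e.g. "12,000원"

class ProductTableParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.row, self.cell, self.in_cell = [], [], False
        self.products = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_cell, self.cell = True, []

    def handle_data(self, data):
        if self.in_cell:
            self.cell.append(data.strip())

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False
            self.row.append(" ".join(c for c in self.cell if c))
        elif tag == "tr":
            prices = [c for c in self.row if PRICE.search(c)]
            names = [c for c in self.row if c and not PRICE.search(c)]
            if prices and names:                     # looks like a product row
                self.products.append((names[0], prices[0]))
            self.row = []

if __name__ == "__main__":
    html = "<table><tr><td>USB cable</td><td>3,500원</td></tr></table>"
    parser = ProductTableParser()
    parser.feed(html)
    print(parser.products)                           # [('USB cable', '3,500원')]
```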

Performance Evaluation of an Implicit Referential Integrity Extraction Algorithm for RDB (RDB의 묵시적 참조 무결성 추출 알고리즘에 대한 성능 평가)

  • Kim, Jin-Hyung; Jeong, Dong-Won
    • Proceedings of the Korea Society for Simulation Conference / 2005.11a / pp.71-76 / 2005
  • XML is rapidly becoming one of the most widely adopted technologies for information exchange and representation on the World Wide Web. However, a large part of data is still stored in relational databases, so relational data often needs to be converted into XML documents. The most important point of the conversion is to reflect the referential integrities of the relational schema model in the XML schema model exactly. Existing approaches for converting a relational schema model to an XML schema model, such as FT, NeT, and CoT, only reflect referential integrities that are defined explicitly. In this paper, we suggest an algorithm for automatically extracting implicit referential integrities, such as foreign key constraints that are not defined explicitly in the initial relational schema model. As a comparative evaluation, we present XML documents translated by the existing algorithms and by the suggested algorithm, and we compare their accuracy by simulation. (A minimal inclusion-dependency check in this spirit is sketched below.)

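In the spirit of the idea above, the sketch below reports a column as a candidate implicit foreign key when every non-NULL value it holds also appears in another table's primary key column (an inclusion-dependency check). This is a simplification, not the paper's algorithm, and the sample schema is hypothetical.

```python
# Minimal sketch: detect candidate implicit foreign keys in an SQLite
# database by checking value inclusion against other tables' primary keys.
import sqlite3

def candidate_foreign_keys(conn):
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    pks, cols = {}, {}
    for t in tables:
        info = conn.execute(f"PRAGMA table_info({t})").fetchall()
        cols[t] = [c[1] for c in info]
        pk = [c[1] for c in info if c[5]]            # column 5 = pk flag
        if pk:
            pks[t] = pk[0]
    found = []
    for t in tables:
        for col in cols[t]:
            for ref, pk in pks.items():
                if ref == t:
                    continue
                missing = conn.execute(
                    f"SELECT COUNT(*) FROM {t} WHERE {col} IS NOT NULL "
                    f"AND {col} NOT IN (SELECT {pk} FROM {ref})").fetchone()[0]
                if missing == 0:                     # all values are covered
                    found.append((t, col, ref, pk))
    return found

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE dept (id INTEGER PRIMARY KEY, name TEXT)")
    db.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, dept_id INTEGER)")
    db.executemany("INSERT INTO dept VALUES (?, ?)", [(1, 'A'), (2, 'B')])
    db.executemany("INSERT INTO emp VALUES (?, ?)", [(10, 1), (11, 2)])
    print(candidate_foreign_keys(db))   # emp.dept_id -> dept.id is a candidate
```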

Estimation of Automatic Video Captioning in Real Applications using Machine Learning Techniques and Convolutional Neural Network

  • Vaishnavi, J; Narmatha, V
    • International Journal of Computer Science & Network Security / v.22 no.9 / pp.316-326 / 2022
  • Video has developed rapidly with the rise of online services, which have overtaken television media in popularity within a short period. Online videos are used all the more because the captions displayed alongside the scenes improve understandability, and beyond entertainment, marketing companies and other organizations use captioned videos for product promotion. Captions are also needed in many settings by hearing-impaired and non-native viewers, so research continues on automatically producing appropriate captions for videos uploaded as shows, movies, educational videos, online classes, websites, and so on. This paper addresses two parts: the first uses machine learning, preprocessing the videos into frames, resizing them, and classifying the resized frames into multiple actions after feature extraction, where statistical GLCM and Hu moment features are used; the second uses deep learning, with a CNN architecture, to obtain the results. Finally, the two results are compared, and the CNN gives the best classification accuracy, 96.10%. (The hand-crafted feature step is sketched below.)
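
A minimal sketch of the hand-crafted feature step mentioned above (GLCM texture statistics plus Hu moments), assuming scikit-image is available; the frame source, quantisation level, and downstream classifier are left out, and the parameter choices are assumptions rather than the paper's settings.

```python
# Minimal sketch: per-frame feature vector from GLCM statistics and Hu moments.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.measure import moments_central, moments_hu, moments_normalized
from skimage.transform import resize

def frame_features(frame, size=(64, 64), levels=32):
    """frame: 2-D grayscale array in [0, 1]. Returns a 1-D feature vector."""
    small = resize(frame, size, anti_aliasing=True)
    quant = (small * (levels - 1)).astype(np.uint8)

    # GLCM texture statistics at distance 1, four orientations.
    glcm = graycomatrix(quant, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    texture = [graycoprops(glcm, p).mean()
               for p in ("contrast", "homogeneity", "energy", "correlation")]

    # Hu moments: shape descriptors invariant to translation, scale, rotation.
    hu = moments_hu(moments_normalized(moments_central(small)))
    return np.concatenate([texture, hu])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_frame = rng.random((120, 160))            # stand-in for a video frame
    print(frame_features(fake_frame).shape)        # (4 + 7,) = (11,)
```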

A New Temporal Filtering Method for Improved Automatic Lipreading (향상된 자동 독순을 위한 새로운 시간영역 필터링 기법)

  • Lee, Jong-Seok; Park, Cheol-Hoon
    • The KIPS Transactions: Part B / v.15B no.2 / pp.123-130 / 2008
  • Automatic lipreading recognizes speech by observing the movement of a speaker's lips. It has recently received attention as a way to compensate for the performance degradation of acoustic speech recognition in acoustically noisy environments. One of the important issues in automatic lipreading is to define and extract salient features from the recorded images. In this paper, we propose a feature extraction method that uses a new filtering technique to obtain improved recognition performance. The proposed method applies a band-pass filter to the temporal trajectory of each pixel in the images containing the lip region, eliminating frequency components that are too slow or too fast relative to the relevant speech information, and then extracts features by principal component analysis. Speaker-independent recognition experiments show that the proposed method improves performance in both clean and visually noisy conditions. (A minimal filtering-plus-PCA sketch follows.)
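
A minimal sketch of the idea above, assuming SciPy is available: each pixel's temporal trajectory in the lip-region sequence is band-pass filtered to keep speech-rate motion, and the filtered frames are projected onto principal components. The cut-off band, frame rate, and feature dimension are assumed values, not the authors' design.

```python
# Minimal sketch: temporal band-pass filtering of pixel trajectories,
# followed by PCA (via SVD) to obtain per-frame features.
import numpy as np
from scipy.signal import butter, filtfilt

def lip_features(frames, fps=30.0, band=(0.5, 8.0), n_components=20):
    """frames: (T, H, W) grayscale lip-region sequence."""
    t, h, w = frames.shape
    x = frames.reshape(t, h * w).astype(float)

    # Band-pass filter along the time axis of every pixel trajectory.
    b, a = butter(2, [band[0] / (fps / 2), band[1] / (fps / 2)], btype="band")
    x = filtfilt(b, a, x, axis=0)

    # PCA on the filtered frames.
    x -= x.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return x @ vt[:n_components].T                  # (T, n_components) features

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seq = rng.random((90, 32, 32))                  # 3 s of synthetic frames
    print(lip_features(seq).shape)                  # (90, 20)
```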