• Title/Summary/Keyword: automatic reference extraction


Defect Cell Extraction for TFT-LCD Auto-Repair System (TFT-LCD 자동 수선시스템에서 결함이 있는 셀을 자동으로 추출하는 방법)

  • Cho, Jae-Soo;Ha, Gwang-Sung;Lee, Jin-Wook;Kim, Dong-Hyun;Jeon, Edward
    • Journal of Institute of Control, Robotics and Systems / v.14 no.5 / pp.432-437 / 2008
  • This paper proposes a defect cell extraction algorithm for a TFT-LCD auto-repair system. An automatic defect search algorithm and an automatic defect cell extraction method are both essential for such a system. In previous work [1], we proposed an automatic visual inspection algorithm for TFT-LCD. Based on the information reported by the automatic search algorithm (defect size and defect axis, if a defect exists), defect cells must be extracted from the input image for the auto-repair system. For automatic extraction of defect cells, we used a novel block matching algorithm and a simple filtering process to find a given reference point in the LCD cell. The proposed defect cell extraction algorithm can be applied to all kinds of TFT-LCD devices simply by changing the stored template that contains the reference point. Various experimental results show the effectiveness of the proposed method.
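
A minimal sketch of how the block-matching step described above can be realized, assuming OpenCV and a stored grayscale template that contains the reference point; the file names are placeholders, and this illustrates the general technique rather than the paper's exact algorithm.

```python
# Reference-point localization by template (block) matching with OpenCV.
import cv2

def find_reference_point(cell_image_path: str, template_path: str):
    image = cv2.imread(cell_image_path, cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)

    # Light smoothing stands in for the "simple filtering process".
    image = cv2.GaussianBlur(image, (3, 3), 0)

    # Normalized cross-correlation between the stored template and the input image.
    scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, max_score, _, top_left = cv2.minMaxLoc(scores)

    # Report the center of the best-matching block as the reference point.
    h, w = template.shape
    reference_point = (top_left[0] + w // 2, top_left[1] + h // 2)
    return reference_point, max_score

if __name__ == "__main__":
    point, score = find_reference_point("lcd_cell.png", "template.png")  # placeholder paths
    print(f"reference point {point}, match score {score:.3f}")
```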

Automatic Extraction of References for Research Reports using Deep Learning Language Model (딥러닝 언어 모델을 이용한 연구보고서의 참고문헌 자동추출 연구)

  • Yukyung Han;Wonsuk Choi;Minchul Lee
    • Journal of the Korean Society for Information Management / v.40 no.2 / pp.115-135 / 2023
  • The purpose of this study is to assess the effectiveness of using deep learning language models to extract references automatically and to build a reference database for research reports efficiently. Unlike academic journals, research reports present difficulties for automatic reference extraction because formatting varies across institutions. In this study, we addressed this issue by introducing the task of separating references from non-reference phrases, in addition to the commonly used metadata extraction task. The study employed datasets that included various types of references, such as those from the research reports of a particular institution, academic journals, and a combination of academic journal references and non-reference texts. Two deep learning language models, RoBERTa+CRF and ChatGPT, were compared to evaluate their performance in automatic extraction. They were used to extract metadata, categorize data types, and separate the original text. The findings showed that the deep learning language models were highly effective, achieving maximum F1-scores of 95.41% for metadata extraction and 98.91% for data-type categorization and separation of the original text. These results provide valuable insights into the use of deep learning language models and different types of datasets for constructing reference databases for research reports that include both reference and non-reference texts.
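
As a rough illustration of the metadata-extraction task, the sketch below tags a reference string with a token-classification model. The checkpoint name and label set are placeholders, and the CRF layer used in the paper is omitted; this shows only a plain transformer tagging step.

```python
# Token-level tagging of a reference string with a fine-tuned encoder.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="my-ref-tagger",          # hypothetical fine-tuned RoBERTa-style checkpoint
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

reference = "Han, Y., Choi, W., & Lee, M. (2023). Automatic extraction of references. JKOSIM, 40(2), 115-135."
for entity in tagger(reference):
    # Assumed label set: AUTHOR / TITLE / JOURNAL / VOLUME / PAGES / YEAR.
    print(entity["entity_group"], "->", entity["word"])
```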

Automatic Extraction Method of Control Point Based on Geospatial Web Service (지리공간 웹 서비스 기반의 기준점 자동추출 기법 연구)

  • Lee, Young Rim
    • Journal of Korean Society for Geospatial Information Science / v.22 no.2 / pp.17-24 / 2014
  • This paper proposes an automatic control point extraction method based on a Geospatial Web Service. The proposed method consists of three steps: 1) acquire reference data through the Geospatial Web Service; 2) find candidate control points in the reference data and the target image with the SURF algorithm; 3) filter the candidate control points with the RANSAC algorithm, keeping the correctly matched points as the final control points. By using the Geospatial Web Service, the proposed method improves operational convenience and is more extensible because it follows the OGC standard. The method was tested on SPOT-1, SPOT-5, and IKONOS satellite images, with military standard data used as reference data, and yielded a uniform accuracy below an RMSE of 5 pixels. The experimental results showed that accuracy improves with the resolution of the target image and demonstrated the potential of the proposed method for military purposes.
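
A hedged sketch of steps 2) and 3), using ORB in place of SURF (SURF requires the non-free opencv-contrib build); the image paths are placeholders, and the Geospatial Web Service request of step 1) is not shown.

```python
# Candidate matching plus RANSAC filtering of control points.
import cv2
import numpy as np

ref = cv2.imread("reference_chip.png", cv2.IMREAD_GRAYSCALE)   # placeholder reference data
tgt = cv2.imread("target_scene.png", cv2.IMREAD_GRAYSCALE)     # placeholder target image

# Step 2: detect keypoints and match descriptors between reference and target.
orb = cv2.ORB_create(nfeatures=2000)
kp_ref, des_ref = orb.detectAndCompute(ref, None)
kp_tgt, des_tgt = orb.detectAndCompute(tgt, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_ref, des_tgt)

# Step 3: RANSAC keeps only geometrically consistent matches as control points.
src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_tgt[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
control_points = [(tuple(s[0]), tuple(d[0]))
                  for s, d, ok in zip(src, dst, inlier_mask.ravel()) if ok]
print(f"{len(control_points)} control points retained out of {len(matches)} matches")
```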

Automatic Classification Method for Time-Series Image Data using Reference Map (Reference Map을 이용한 시계열 image data의 자동분류법)

  • Hong, Sun-Pyo
    • The Journal of the Acoustical Society of Korea / v.16 no.2 / pp.58-65 / 1997
  • A new automatic classification method with high and stable accuracy for time-series image data is presented in this paper. The method assumes that a classified map of the target area already exists, or that at least one image in the time series has already been classified. The classified map is used as a reference map to specify training areas for the classification categories. The method consists of five steps: extraction of training data using the reference map, detection of changed pixels based on the homogeneity of the training data, clustering of changed pixels, reconstruction of the training data, and classification with a maximum likelihood classifier. To evaluate the performance of this method qualitatively, four time-series Landsat TM images were classified with this method and with a conventional method that requires a skilled operator. As a result, we obtained classified maps with high reliability and fast throughput, without a skilled operator.
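
The sketch below illustrates the reference-map idea under simplifying assumptions: training pixels are sampled per class from an existing classified map, one Gaussian is fitted per class, and a new image is labeled by maximum likelihood. The change-detection, clustering, and training-data reconstruction steps of the paper are omitted.

```python
# Maximum likelihood classification trained from a reference (already classified) map.
import numpy as np
from scipy.stats import multivariate_normal

def ml_classify(new_image: np.ndarray, reference_map: np.ndarray) -> np.ndarray:
    """new_image: (H, W, bands) float array; reference_map: (H, W) integer class labels."""
    h, w, bands = new_image.shape
    pixels = new_image.reshape(-1, bands)
    labels = reference_map.ravel()

    log_likelihoods = []
    classes = np.unique(labels)
    for c in classes:
        samples = pixels[labels == c]                 # training data taken from the reference map
        mean = samples.mean(axis=0)
        cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(bands)  # regularize for stability
        log_likelihoods.append(multivariate_normal(mean, cov).logpdf(pixels))

    # Assign each pixel to the class with the highest likelihood.
    best = np.argmax(np.stack(log_likelihoods, axis=1), axis=1)
    return classes[best].reshape(h, w)
```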


Automatic Generation of Bibliographic Metadata with Reference Information for Academic Journals (학술논문 내에서 참고문헌 정보가 포함된 서지 메타데이터 자동 생성 연구)

  • Jeong, Seonki;Shin, Hyeonho;Ji, Seon-Yeong;Choi, Sungphil
    • Journal of the Korean Society for Library and Information Science / v.56 no.3 / pp.241-264 / 2022
  • Bibliographic metadata can help researchers effectively utilize the essential publications they need and grasp academic trends in their own fields. Manual creation of such metadata is costly and time-consuming, yet it is also nontrivial to automate metadata construction with rule-based methods because article forms and styles vary widely across publishers and academic societies. This study therefore proposes a two-step extraction process based on rules and deep neural networks for generating the bibliographic metadata of scientific articles. The extraction target areas in an article are first identified by a deep neural network-based model, and the details in those areas are then analyzed and subdivided into the relevant metadata elements. The proposed approach also includes a model for generating reference summary information, which separates the end of the body text from the start of the reference section, extracts individual references with an essential rule set, and identifies all the bibliographic items in each reference with a deep neural network. In addition, to confirm the feasibility of a model that generates the bibliographic information of academic papers without pre- and post-processing, we conducted an in-depth comparative experiment with various settings and configurations. In this experiment, the method proposed in this paper showed higher performance.
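
A minimal rule-based sketch of two sub-steps mentioned above: locating the start of the reference section and splitting it into individual references. The heading patterns and numbering styles are illustrative assumptions, not the paper's actual rule set, and the deep neural network tagging step is omitted.

```python
# Rule-based detection of the reference section and splitting into single references.
import re

HEADINGS = re.compile(r"^\s*(References|Bibliography|참고문헌)\s*$", re.IGNORECASE | re.MULTILINE)
NUMBERED = re.compile(r"^\s*(\[\d+\]|\d+\.)\s+", re.MULTILINE)

def split_references(full_text: str) -> list[str]:
    match = HEADINGS.search(full_text)
    if not match:
        return []                              # no reference section found
    section = full_text[match.end():]

    # Split on leading "[1]" / "1." markers; fall back to line-based splitting.
    starts = [m.start() for m in NUMBERED.finditer(section)]
    if not starts:
        return [line.strip() for line in section.splitlines() if line.strip()]
    starts.append(len(section))
    return [section[a:b].strip() for a, b in zip(starts, starts[1:])]
```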

A Region Based Approach to Surface Segmentation using LIDAR Data and Images

  • Moon, Ji-Young;Lee, Im-Pyeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.25 no.6_1 / pp.575-583 / 2007
  • Surface segmentation aims to represent the terrain as a set of bounded and analytically defined surface patches. Many previous segmentation methods have been developed to extract planar patches from LIDAR data for building extraction. However, most of them are not fully satisfactory for more general applications in terms of the degree of automation and the quality of the segmentation results, mainly because of the limited information that can be derived from LIDAR data alone. The purpose of this study is thus to develop an automatic surface segmentation method that combines LIDAR data with images. A region-based method is proposed to generate a set of planar patches by grouping LIDAR points. The grouping criteria are based on both the coordinates of the points and the corresponding intensity values computed from the images. The method was applied to urban data, and the segmentation results were compared with reference data acquired by manual segmentation: 76% of the test area was correctly segmented. Under-segmentation was rarely found, but over-segmentation still exists. If the over-segmentation is mitigated by merging adjacent patches with similar properties as a post-process, the proposed segmentation method can serve as a reliable intermediate step toward automatic extraction of 3D models of the real world.
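
A hedged sketch of region growing over LIDAR points that uses both geometry and an image-derived intensity value per point. The radius, thresholds, plane test, and the per-point intensity attribute are illustrative assumptions rather than the paper's exact grouping criteria.

```python
# Region growing over LIDAR points using a planarity test and intensity similarity.
import numpy as np
from scipy.spatial import cKDTree

def grow_regions(xyz: np.ndarray, intensity: np.ndarray,
                 radius: float = 1.0, plane_tol: float = 0.15, int_tol: float = 20.0):
    """xyz: (N, 3) point coordinates; intensity: (N,) values sampled from the images."""
    tree = cKDTree(xyz)
    labels = np.full(len(xyz), -1, dtype=int)
    region_id = 0

    for seed in range(len(xyz)):
        if labels[seed] != -1:
            continue
        # Fit a plane to the seed's neighbourhood (least-squares normal via SVD).
        nbrs = tree.query_ball_point(xyz[seed], radius)
        centroid = xyz[nbrs].mean(axis=0)
        normal = np.linalg.svd(xyz[nbrs] - centroid)[2][-1]

        queue, labels[seed] = [seed], region_id
        while queue:
            p = queue.pop()
            for q in tree.query_ball_point(xyz[p], radius):
                if labels[q] != -1:
                    continue
                planar = abs(np.dot(xyz[q] - centroid, normal)) < plane_tol
                similar = abs(intensity[q] - intensity[seed]) < int_tol
                if planar and similar:              # both criteria must hold to join the patch
                    labels[q] = region_id
                    queue.append(q)
        region_id += 1
    return labels
```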

Footprint extraction of urban buildings with LIDAR data

  • Kanniah, Kasturi Devi;Gunaratnam, Kasturi;Mohd, Mohd Ibrahim Seeni
    • Proceedings of the KSRS Conference / 2003.11a / pp.113-119 / 2003
  • Building information is extremely important for many applications within the urban environment, and sufficient techniques and user-friendly tools for information extraction from remotely sensed imagery are urgently needed. This paper presents an automatic and a manual approach for extracting building footprints in urban areas from airborne Light Detection and Ranging (LIDAR) data. First, a digital surface model (DSM) was generated from the LIDAR point data. Then, objects higher than the ground surface were extracted using the generated DSM. Based on general knowledge of the study area and field visits, buildings were separated from other objects. The automatic technique for extracting the building footprints was based on different window sizes and different values of image add-backs, while the manual technique was based on image segmentation. A comparison was then made to see how precisely the two techniques detect and extract building footprints. Finally, the results were compared with manually digitized building reference data for an accuracy assessment, which showed that LIDAR data provide a better shape characterization of each building.
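
A minimal sketch of the "objects higher than the ground" step, assuming gridded DSM and DTM rasters prepared elsewhere: threshold the normalized DSM and keep sizable connected blobs as candidate footprints. The height threshold and minimum blob size are illustrative assumptions.

```python
# Candidate building footprints from a normalized DSM (DSM minus ground surface).
import numpy as np
from scipy import ndimage

def candidate_footprints(dsm: np.ndarray, dtm: np.ndarray,
                         min_height: float = 2.5, min_pixels: int = 50) -> np.ndarray:
    ndsm = dsm - dtm                          # height above ground
    mask = ndsm > min_height                  # keep elevated objects only
    labels, n = ndimage.label(mask)           # connected-component labelling

    # Drop tiny blobs (noise, vegetation fragments); truly separating buildings
    # from other tall objects needs extra cues, as the abstract notes.
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, np.flatnonzero(sizes >= min_pixels) + 1)
    return np.where(keep, labels, 0)
```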


Automatic Coastline Extraction and Change Detection Monitoring using LANDSAT Imagery (LANDSAT 영상을 이용한 해안선 자동 추출과 변화탐지 모니터링)

  • Kim, Mi Kyeong;Sohn, Hong Gyoo;Kim, Sang Pil;Jang, Hyo Seon
    • Journal of Korean Society for Geospatial Information Science / v.21 no.4 / pp.45-53 / 2013
  • Global warming causes sea levels to rise, and global changes, including coastline changes, are clearly taking place. Coastline change due to sea level rise is one of the most significant phenomena associated with global climate change, so coastline change detection can be used as an indicator of that change. In general, coastline change results not only from sea level rise but also from artificial factors such as land development through mud flat reclamation; Arctic coastal areas, however, have experienced serious change mostly due to sea level rise rather than other factors. The purposes of this study are the automatic extraction of coastlines and the identification of their changes. To extract coastlines automatically, the contrast between water and land was maximized using a modified NDWI (Normalized Difference Water Index). Image processing techniques were applied to the modified NDWI imagery so that an appropriate threshold value separating water and land could be found automatically. The coastline was then extracted with an edge detection algorithm, and changes were detected from the extracted coastlines. Automatic extraction of coastlines from LANDSAT imagery was possible without the help of other data, and good agreement was found when comparing against NLCD data as a reference. The results for the study area, a permafrost region that remains frozen below 0°C, showed quantitative coastline changes and verified that the change is accelerating.
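
A hedged sketch of the pipeline outlined above: compute an NDWI from green and near-infrared bands, find a water/land threshold automatically with Otsu's method, and take the boundary with an edge detector. The band inputs and the use of the classic (rather than the paper's modified) NDWI formula are assumptions for illustration.

```python
# Coastline extraction from NDWI via automatic thresholding and edge detection.
import numpy as np
import cv2

def extract_coastline(green: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """green, nir: float32 reflectance bands of the same shape."""
    ndwi = (green - nir) / (green + nir + 1e-6)          # classic NDWI
    ndwi8 = cv2.normalize(ndwi, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Otsu's method picks the water/land threshold automatically.
    _, water = cv2.threshold(ndwi8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # The coastline is the edge of the binary water mask.
    return cv2.Canny(water, 100, 200)
```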

BIM-Based Generation of Free-form Building Panelization Model (BIM 기반 비정형 건축물 패널화 모델 생성 방법에 관한 연구)

  • Kim, Yang-Gil;Lee, Yun-Gu;Ham, Nam-Hyuk;Kim, Jae-Jun
    • Journal of KIBIM / v.12 no.4 / pp.19-31 / 2022
  • With the development of 3D-based CAD (Computer Aided Design), attempts at freeform building design have expanded to small and medium-sized buildings in Korea. However, a standardized system for the continuous utilization of shape data and for the BIM conversion process implemented with 3D-based NURBS is still immature. Without accurate review and management throughout a freeform building project, interference between members occurs and project costs increase, which is very detrimental to the project. To solve this problem, we propose a process for the continuous utilization of 3D shape information based on BIM parameters. Our process includes algorithms and methods such as Auto Split, Panel Optimization, Excel extraction based on shape information, BIM modeling through Adaptive Components, and BIM model utilization using ID codes. The optimal cutting reference point was calculated and the optimal material specification was derived using the Panel Optimization algorithm. With the Adaptive Component design methodology, a BIM model conforming to the standard cross-section details and specifications was uniformly established. The automatic BIM conversion algorithm, driven by Excel extraction of the shape data, created a BIM model without omission of data, based on the optimized panel cutting reference points and cutting lines. Finally, we analyzed how to use the BIM model built through this automatic conversion. In addition to BIM uses in the general construction stage, such as visualization, interference review, quantity calculation, and construction simulation, the analysis derived an individual management plan for each unit panel through ID data input. This study suggests an improvement process that links existing research on freeform panel optimization with research on parameter-based BIM information management, and it shows that the process can solve problems of existing freeform building projects.
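
Purely as an illustration of the kind of bookkeeping this workflow implies (not the paper's Auto Split or Panel Optimization algorithms), the sketch below splits a facade strip into panels no wider than a stock size, assigns ID codes, and exports the cutting reference points to a spreadsheet-style file.

```python
# Hypothetical panel split with ID codes, exported for later BIM re-import.
import csv
import math

def panelize(strip_length_mm: float, max_panel_mm: float = 1200.0):
    n = math.ceil(strip_length_mm / max_panel_mm)
    width = strip_length_mm / n                      # equal panels within the stock size
    return [{"id": f"PNL-{i + 1:03d}",               # ID code per unit panel
             "start_mm": round(i * width, 1),        # cutting reference points
             "end_mm": round((i + 1) * width, 1)} for i in range(n)]

panels = panelize(5230.0)                            # placeholder strip length
with open("panel_schedule.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "start_mm", "end_mm"])
    writer.writeheader()
    writer.writerows(panels)
print(f"{len(panels)} panels written to panel_schedule.csv")
```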

Mean-Shift Blob Clustering and Tracking for Traffic Monitoring System

  • Choi, Jae-Young;Yang, Young-Kyu
    • Korean Journal of Remote Sensing / v.24 no.3 / pp.235-243 / 2008
  • Object tracking is a common vision task that detects and traces objects across consecutive frames, and it is important for a variety of applications such as surveillance and video-based traffic monitoring systems. An efficient moving-vehicle clustering and tracking algorithm suitable for a traffic monitoring system is proposed in this paper. First, an automatic background extraction method is used to obtain a reliable background as a reference, and the moving blob (object) is then separated from the background by the mean shift method. Second, a scale-invariant-feature-based method extracts salient features from the clustered foreground blob; these features are robust to changes in illumination, scale, and affine shape. Simulation results on various road situations demonstrate the good performance achieved by the proposed method.
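
A hedged sketch of the overall loop: a background model separates moving pixels, and mean shift tracks a blob window from frame to frame. OpenCV's MOG2 subtractor stands in for the paper's background extraction, the video path and initial window are placeholders, and the scale-invariant feature step is omitted.

```python
# Background subtraction plus mean shift tracking of a single blob window.
import cv2

cap = cv2.VideoCapture("traffic.mp4")            # placeholder video source
backsub = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
track_window = (300, 200, 80, 60)                # x, y, w, h of an initial blob (placeholder)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = backsub.apply(frame)               # moving pixels vs. learned background
    # Mean shift moves the window toward the densest foreground region nearby.
    _, track_window = cv2.meanShift(fg_mask, track_window, criteria)
    x, y, w, h = track_window
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) == 27:                    # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```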