Title/Summary/Keyword: Automate

RAUT: An end-to-end tool for automated parsing and uploading river cross-sectional survey in AutoCAD format to river information system for supporting HEC-RAS operation (하천정비기본계획 CAD 형식 단면 측량자료 자동 추출 및 하천공간 데이터베이스 업로딩과 HEC-RAS 지원을 위한 RAUT 툴 개발)

  • Kim, Kyungdong; Kim, Dongsu; You, Hojun
    • Journal of Korea Water Resources Association / v.54 no.12 / pp.1339-1348 / 2021
  • In accordance with the River Law, a basic river maintenance plan is established every 5-10 years for domestic rivers with a considerable national budget, and various river surveys, such as the cross-sections required for HEC-RAS flood level simulation, are conducted. However, the survey data are provided to the River Management Geographic Information System (RIMGIS) only as PDF reports, while the original CAD-format data remain scattered among the designers who carried out each river maintenance plan, so their usability for other purposes is considerably reduced. Moreover, when surveyed CAD cross-sectional data are used for HEC-RAS, tools such as 'Dream' are available, but in practice the time and cost involved are nearly the same as manual work. In this study, RAUT (River Information Auto Upload Tool) was developed to solve these problems. First, RAUT automates the complicated steps of manually converting CAD survey data into input for the HEC-RAS one-dimensional model used in establishing the basic river plan in practice. Second, it can directly read CAD survey data, which constitute river spatial information, and automatically upload it to a river spatial information DB based on the standard data model (ArcRiver), enabling management of river survey data from river maintenance plans at the national level. In other words, if RIMGIS adopted a tool such as RAUT, it could systematically manage national river survey data such as cross-sections. As a pilot, the developed RAUT read the river spatial CAD data of the river maintenance master plan for the Hancheon basin in Jeju-do, built it into a MySQL-based spatial DB, and automatically generated terrain data for HEC-RAS one-dimensional simulation from that DB.
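
    The abstract describes a three-stage pipeline: parse cross-section geometry out of CAD files, load it into a MySQL-based spatial DB, and generate HEC-RAS terrain input from that DB. The paper does not publish RAUT's code or schema, so the following Python sketch only illustrates the first two stages; the layer name, table layout, and the choice of ezdxf and mysql-connector-python are assumptions, not the authors' implementation.

    ```python
    # Illustrative sketch only: RAUT's real CAD conventions and DB schema are
    # not given in the abstract; the "XSECTION" layer and table are assumptions.
    import ezdxf                    # DXF/AutoCAD reader
    import mysql.connector          # MySQL client

    def extract_sections(dxf_path, layer="XSECTION"):
        """Collect (station, elevation) vertices of each cross-section polyline."""
        doc = ezdxf.readfile(dxf_path)
        sections = []
        for pline in doc.modelspace().query(f'LWPOLYLINE[layer=="{layer}"]'):
            # get_points() yields (x, y, start_width, end_width, bulge) tuples
            sections.append([(p[0], p[1]) for p in pline.get_points()])
        return sections

    def upload(sections, conn):
        """Store vertices in a simple (section_id, station, elevation) table."""
        cur = conn.cursor()
        cur.execute("""CREATE TABLE IF NOT EXISTS xsection
                       (section_id INT, station DOUBLE, elevation DOUBLE)""")
        for sid, points in enumerate(sections):
            cur.executemany("INSERT INTO xsection VALUES (%s, %s, %s)",
                            [(sid, x, z) for x, z in points])
        conn.commit()

    if __name__ == "__main__":
        conn = mysql.connector.connect(user="river", password="river",
                                       database="river_db")
        upload(extract_sections("survey_plan.dxf"), conn)
    ```

    From such a table, station-elevation pairs can be exported per section in the order a HEC-RAS geometry file expects.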

Regeneration of a defective Railroad Surface for defect detection with Deep Convolution Neural Networks (Deep Convolution Neural Networks 이용하여 결함 검출을 위한 결함이 있는 철도선로표면 디지털영상 재 생성)

  • Kim, Hyeonho; Han, Seokmin
    • Journal of Internet Computing and Services / v.21 no.6 / pp.23-31 / 2020
  • This study was carried out to generate varied images of railroad surfaces with random defects as training data to improve defect detection. Defects on railroad surfaces are caused by various factors, such as friction between track binding devices and adjacent tracks, and can lead to accidents such as broken rails, so railroad maintenance against defects is necessary. Therefore, various studies on defect detection and inspection that apply image processing or machine learning to railway surface images have been conducted to automate railroad inspection and reduce maintenance costs. In general, the performance of image processing analysis methods and machine learning techniques is affected by the quantity and quality of data. For this reason, some studies require dedicated devices or vehicles that acquire images of the track surface at regular intervals in order to build a database of varied railway surface images. In contrast, in this study, to reduce the operating cost of image acquisition, we constructed a 'Defective Railroad Surface Regeneration Model' by applying methods presented in related studies of Generative Adversarial Networks (GAN), aiming to detect defects on railroad surfaces even without a dedicated database. The model is designed to learn to generate railroad surfaces by combining different railroad surface textures with the original surface while taking the ground truth of the railroad defects into account. The generated railroad surface images were then used as training data for a defect detection network based on a Fully Convolutional Network (FCN). To validate performance, we clustered the railroad data and divided it into three subsets: one of original railroad texture images and the remaining two of other railroad surface texture images. In the first experiment, only the original texture images were used as the training set for the defect detection model. In the second experiment, we trained on generated images produced by combining the original images with a few railroad textures from the other subsets. Each defect detection model was evaluated against the ground truth in terms of intersection over union (IoU) and F1-score. As a result, the scores increased by about 10-15% when the generated images were used, compared to using only the original images. This shows that defects can be detected using existing data plus a few different texture images, even for railroad surfaces for which no dedicated training database has been constructed.
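
    The evaluation measures named in the abstract, intersection over union and F1-score against ground-truth masks, are standard; a minimal NumPy sketch for binary defect masks (the 0.5 threshold and random masks below are illustrative only, not the paper's data) is:

    ```python
    import numpy as np

    def iou_and_f1(pred, truth):
        """IoU and F1 for binary defect masks (boolean arrays of equal shape)."""
        inter = np.logical_and(pred, truth).sum()
        union = np.logical_or(pred, truth).sum()
        iou = inter / union if union else 1.0   # both masks empty: perfect match
        precision = inter / pred.sum() if pred.sum() else 0.0
        recall = inter / truth.sum() if truth.sum() else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return iou, f1

    # Example: threshold FCN output probabilities at 0.5 to get a binary mask.
    pred = np.random.rand(64, 64) > 0.5
    truth = np.random.rand(64, 64) > 0.5
    print(iou_and_f1(pred, truth))
    ```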

A fundamental study on the automation of tunnel blasting design using a machine learning model (머신러닝을 이용한 터널발파설계 자동화를 위한 기초연구)

  • Kim, Yangkyun; Lee, Je-Kyum; Lee, Sean Seungwon
    • Journal of Korean Tunnelling and Underground Space Association / v.24 no.5 / pp.431-449 / 2022
  • As many tunnels have been constructed, extensive experience and techniques have accumulated for tunnel design as well as tunnel construction. Hence, for many routine tunnel design tasks it is often sufficient to modify or supplement previous similar design cases, unless the tunnel has a unique structure or unusual geological conditions. In particular, referring to previous similar cases is reasonable for tunnel blast design, since the blast design produced at the design stage is preliminary and additional blast design is generally performed through test blasts before tunnel excavation begins. Meanwhile, entering the Industry 4.0 era, artificial intelligence (AI), whose adoption is surging across the whole industrial sector, is being broadly applied to tunnelling and blasting. For drill-and-blast tunnels, AI has mainly been applied to blast vibration estimation and rock mass classification; there are few cases where it has been applied to blast pattern design. This study therefore attempts to automate tunnel blast design by means of machine learning, a branch of artificial intelligence. To this end, blast design data were collected from 25 tunnel design reports for training and 2 additional reports for testing. From these, 4 design parameters (rock mass class, road type, and the cross-sectional areas of the upper and bench sections) were used as input, and 16 design elements (blast cut type, specific charge, number of drill holes, and spacing and burden for each blast hole group, etc.) as output. Based on this design data, three machine learning models (XGBoost, ANN, and SVM) were tested; XGBoost was chosen as the best model, and its results show a generally similar trend to an actual design when assumed design parameters are input. The results of this study are not yet sufficient to perform a complete blast design, but additional studies are planned to make practical use possible by collecting more blast design data and supplementing the detailed machine learning process.
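
    The described setup, 4 design parameters in and 16 design elements out, is a multi-output regression. Since the paper's feature encodings and hyperparameters are not given in the abstract, the following scikit-learn/XGBoost sketch with made-up values only illustrates the shape of such a model, not the authors' configuration.

    ```python
    import numpy as np
    from sklearn.multioutput import MultiOutputRegressor
    from xgboost import XGBRegressor

    # Hypothetical encoding: the real data come from 25 tunnel design reports.
    # Inputs : rock mass class, road type (encoded), upper/bench section areas.
    X_train = np.array([[3, 1, 72.5, 38.0],
                        [2, 0, 65.0, 30.5],
                        [4, 1, 80.2, 41.3]])
    # Outputs: 16 design elements (cut type, specific charge, hole count,
    # spacing/burden per hole group, ...); only 4 columns shown for brevity.
    y_train = np.array([[1, 0.95, 118, 1.2],
                        [0, 0.80, 102, 1.0],
                        [1, 1.10, 131, 1.3]])

    # One gradient-boosted regressor is fitted per output element.
    model = MultiOutputRegressor(XGBRegressor(n_estimators=200, max_depth=4))
    model.fit(X_train, y_train)

    # Predict design elements for an assumed set of design parameters.
    print(model.predict(np.array([[3, 1, 75.0, 39.0]])))
    ```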

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin; Rho, Sang-Kyu; Yun, Ji-Young Agnes; Park, Jin-Soo
    • Asia Pacific Journal of Information Systems / v.21 no.1 / pp.103-122 / 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some on-line documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. A set of keywords is thus often considered a condensed version of the whole document and plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents that could benefit from keywords, including Web pages, email messages, news reports, magazine articles, and business papers, do not carry them. Although the potential benefit is large, implementation is the obstacle: manually assigning keywords to all documents is a daunting, even impractical, task, being extremely tedious and time-consuming and requiring a certain level of domain knowledge. It is therefore highly desirable to automate the keyword generation process. There are two main approaches to this aim: keyword assignment and keyword extraction. Both use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former, a vocabulary is given, and the aim is to match its entries to the texts; that is, the keyword assignment approach seeks to select, from a controlled vocabulary, the words that best describe a document. Although this approach is domain dependent and not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, the aim is to extract keywords by their relevance in the text without a prior vocabulary. Here, automatic keyword generation is treated as a classification task, and keywords are commonly extracted with supervised learning techniques: extraction algorithms classify candidate keywords in a document as positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as its keywords, so keyword extraction is limited to terms that appear in the document and cannot generate implicit keywords. According to Turney's experimental results, about 64% to 90% of author-assigned keywords can be found in the full text of an article; inversely, this means that 10% to 36% of author-assigned keywords do not appear in the article and thus cannot be generated by keyword extraction algorithms. Our preliminary experiment likewise shows that 37% of author-assigned keywords are not included in the full text. This is why we adopted the keyword assignment approach. In this paper, we propose a new approach to automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculate the vector length of each keyword set based on each keyword's weight; (2) preprocess and parse a target document that has no keywords; (3) calculate the vector length of the target document based on term frequency; (4) measure the cosine similarity between each keyword set and the target document; and (5) generate the keywords with high similarity scores. Two keyword generation systems applying IVSM were implemented: an IVSM system for a Web-based community service and a stand-alone IVSM system. The former is implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone system is dedicated to generating keywords for academic papers and has been tested on a number of papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. In our experiments, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. Both systems perform much better than baseline systems that generate keywords from simple probability, and IVSM shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As electronic documents increase, we expect the IVSM proposed in this paper to be applicable to many electronic documents in Web-based communities and digital libraries.
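
    Steps (1)-(5) amount to ranking stored keyword sets by cosine similarity against the target document's term-frequency vector. A minimal sketch under that reading follows; the toy keyword sets, weights, and tokenizer are illustrative, not the authors' implementation.

    ```python
    import math
    import re
    from collections import Counter

    def vectorize(text):
        """Term-frequency vector via simple word tokenization (steps 2-3)."""
        return Counter(re.findall(r"[a-z]+", text.lower()))

    def cosine(u, v):
        """Cosine similarity between two sparse term vectors (step 4)."""
        dot = sum(u[t] * v[t] for t in u if t in v)
        norm = (math.sqrt(sum(c * c for c in u.values())) *
                math.sqrt(sum(c * c for c in v.values())))
        return dot / norm if norm else 0.0

    # Hypothetical keyword sets with per-term weights (step 1).
    keyword_sets = {
        "logistics": Counter({"port": 3, "shipping": 2, "distribution": 2}),
        "retrieval": Counter({"keyword": 3, "document": 2, "index": 1}),
    }

    def assign_keywords(document, top_n=1):
        """Return the keyword sets most similar to the document (step 5)."""
        doc_vec = vectorize(document)
        ranked = sorted(keyword_sets.items(),
                        key=lambda kv: cosine(doc_vec, kv[1]), reverse=True)
        return [name for name, _ in ranked[:top_n]]

    print(assign_keywords("A study of keyword generation for document indexing"))
    ```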