• Title/Summary/Keyword: automatic generation of 3D model

Search results: 78

Development of Automated 3D Modeling System to Construct BIM for Railway Bridge (철도 교량의 BIM 구축을 위한 3차원 모델 생성 자동화 시스템 개발)

  • Lee, Heon-Min; Kim, Hyun-Seung; Lee, Il-Soo
    • Journal of the Computational Structural Engineering Institute of Korea / v.31 no.5 / pp.267-274 / 2018
  • For BIM to take hold successfully, it is essential that engineers design structures directly in 3-dimensional digital space and produce the related design documents from that model. Many BIM tools have been released recently, each with its own 3-dimensional object libraries, but those libraries are hard to apply to transportation infrastructure, whose components are placed along a route (a 3-dimensional alignment). Moreover, when the design changes, it is difficult to propagate the changes through an integrated model assembled from such libraries, because they were developed without accounting for shared parameters between objects that are placed nearby or are otherwise related. This paper presents a method for developing modules that model and place 3-dimensional objects for transportation infrastructure. The modules are driven by a parametric method and can accommodate design changes. For a railroad bridge, a user interface for the integrated 3-dimensional model assembled from these modules was developed, and their applicability was reviewed.
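A minimal sketch of the parametric idea described above, in Python: component placements are derived from a shared alignment and a few driving parameters, so one design change regenerates every dependent object. The toy alignment and all names are illustrative assumptions, not the authors' implementation.

    # Hypothetical parametric placement of bridge components along a 3D route.
    import numpy as np

    def alignment(s):
        # Toy 3D alignment: station s (m) -> (x, y, z); a real system would
        # use the actual railway geometry here.
        return np.array([s, 20.0 * np.sin(s / 150.0), 0.002 * s])

    def place_piers(span_length, n_spans):
        # Pier positions depend only on shared parameters; re-run on change.
        stations = [i * span_length for i in range(n_spans + 1)]
        return [(s, alignment(s)) for s in stations]

    # Changing one driving parameter (span_length) updates every placement.
    for s, pos in place_piers(span_length=40.0, n_spans=5):
        print(f"pier at station {s:6.1f} m -> {np.round(pos, 2)}")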

Three-dimensional Model Generation for Active Shape Model Algorithm (능동모양모델 알고리듬을 위한 삼차원 모델생성 기법)

  • Lim, Seong-Jae; Jeong, Yong-Yeon; Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.6 s.312 / pp.28-35 / 2006
  • Statistical models of shape variability based on active shape models (ASMs) have been successfully used for segmentation and recognition tasks in two-dimensional (2D) images. Three-dimensional (3D) model-based approaches are more promising than 2D approaches because they impose more realistic shape constraints for recognizing and delineating object boundaries. For 3D model-based approaches, however, building the 3D shape model from a training set of segmented instances of an object is a major challenge and remains an open problem. One essential step in building the 3D shape model is generating a point distribution model (PDM), which requires selecting corresponding landmarks in all training shapes; manual determination of landmark correspondences is time-consuming, tedious, and error-prone. In this paper, we propose a novel automatic method for generating 3D statistical shape models. Given a set of training 3D shapes, we generate a 3D model by (1) building the mean shape from the distance transform of the training shapes, (2) using a tetrahedron method to automatically select landmarks on the mean shape, and (3) subsequently propagating these landmarks to each training shape via a distance labeling method. We investigate the accuracy and compactness of the 3D model of the human liver built from 50 segmented individual CT data sets. The proposed method makes no restrictive assumptions about the training shapes and can be applied to other data sets.
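A hedged sketch of step (1) only: average the signed distance fields of binary training segmentations and take the zero level set as the mean shape. The helper names and the toy cube data are illustrative assumptions, not the authors' code.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def signed_distance(mask):
        # Negative inside the object, positive outside.
        return distance_transform_edt(~mask) - distance_transform_edt(mask)

    def mean_shape(masks):
        # Average the signed distance maps; threshold at the zero level set.
        sdf = np.mean([signed_distance(m) for m in masks], axis=0)
        return sdf < 0  # binary volume of the mean shape

    # Identical toy cubes stand in for segmented training shapes.
    shapes = [np.pad(np.ones((10, 10, 10), dtype=bool), 5) for _ in range(3)]
    print(mean_shape(shapes).sum(), "voxels inside the mean shape")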

Automatic Classification of Bridge Component based on Deep Learning (딥러닝 기반 교량 구성요소 자동 분류)

  • Lee, Jae Hyuk; Park, Jeong Jun; Yoon, Hyungchul
    • KSCE Journal of Civil and Environmental Engineering Research / v.40 no.2 / pp.239-245 / 2020
  • Recently, BIM (Building Information Modeling) has been widely adopted in the construction industry, but most structures built in the past have no BIM. For such structures, SfM (Structure from Motion) techniques applied to 2D camera images can generate 3D point cloud data from which a BIM can be established. However, the generated point clouds carry no semantic information, so which element of the structure each point belongs to must be classified manually. In this study, deep learning was therefore applied to automate the classification of structural components. The network uses Inception-ResNet-v2, a CNN (Convolutional Neural Network) architecture, and the components of bridge structures were learned through transfer learning. When components were classified using data collected to verify the developed system, the bridge components were identified with an accuracy of 96.13%.
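A brief illustration of the transfer-learning setup named above, using the Keras build of Inception-ResNet-v2; the class count, input size, and training call are assumptions for the sketch, not the study's exact configuration.

    import tensorflow as tf

    NUM_CLASSES = 5  # hypothetical number of bridge component classes

    # ImageNet-pretrained backbone, frozen for feature extraction.
    base = tf.keras.applications.InceptionResNetV2(
        include_top=False, weights="imagenet", input_shape=(299, 299, 3))
    base.trainable = False

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=10)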

Generation and Detection of Cranial Landmark

  • Heo, Suwoong; Kang, Jiwoo; Kim, Yong Oock; Lee, Sanghoon
    • Journal of International Society for Simulation Surgery / v.2 no.1 / pp.26-32 / 2015
  • Purpose: When a surgeon examines the morphology of a patient's skull, the locations of craniometric landmarks in a 3D computed tomography (CT) volume are among the most important pieces of information for surgical purposes. These landmarks can be located manually by the surgeon from the 3D rendered volume or from the 2D sagittal, axial, and coronal CT slices. Because there are many landmarks on the skull, locating them manually is time-consuming, exhausting, and occasionally inexact. These inefficiencies create a demand for an automatic localization technique for craniometric landmark points, so in this paper we propose a novel method for automatically finding these surgically useful landmark points.
    Materials and Methods: First, we align the experimental data (CT volumes) using the Frankfurt Horizontal Plane (FHP) and the Mid-Sagittal Plane (MSP), which are defined by 3 and 2 cranial landmark points, respectively. The target landmark of our experiment is the anterior nasal spine. Before constructing the statistical cubic model used to detect the landmark's location in a given CT volume, reference points for the anterior nasal spine were chosen manually by a surgeon in several CT volume sets. The statistical cubic model is constructed by computing weighted intensity means of these CT sets around the reference points. The landmark is then located in any given CT volume by finding the position where the similarity function (a squared-difference function) against this model is minimal.
    Results: We used 5 CT volumes to construct the statistical cubic model, and 20 CT volumes, including those used to build the model, for testing. The subjects' ages range up to 2 years (24 months). The detected points in each data set lie close to the reference points chosen manually by the surgeon, and the similarity function consistently attains its global minimum at the detection point.
    Conclusion: The experiments show that the proposed method performs well in locating the landmark point. This algorithm should help surgeons work efficiently with morphological information about the skull, and we expect it can be extended to other anatomic landmarks beyond cranial ones.
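A minimal sketch of the matching step only: slide a statistical cube over a CT volume and keep the location where the squared-difference score is smallest. Template construction and FHP/MSP alignment are omitted, and the brute-force search and all names are illustrative simplifications.

    import numpy as np

    def detect_landmark(volume, template):
        # Return the corner index of the best-matching cube (exhaustive search).
        dz, dy, dx = template.shape
        best, best_idx = np.inf, None
        for z in range(volume.shape[0] - dz + 1):
            for y in range(volume.shape[1] - dy + 1):
                for x in range(volume.shape[2] - dx + 1):
                    patch = volume[z:z+dz, y:y+dy, x:x+dx]
                    score = np.sum((patch - template) ** 2)  # similarity function
                    if score < best:
                        best, best_idx = score, (z, y, x)
        return best_idx

    vol = np.random.rand(20, 20, 20)
    tpl = vol[5:10, 5:10, 5:10].copy()  # stand-in for the statistical cube
    print(detect_landmark(vol, tpl))    # -> (5, 5, 5)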

Highly Dense 3D Surface Generation Using Multi-image Matching

  • Noh, Myoung-Jong; Cho, Woo-Sug; Bang, Ki-In
    • ETRI Journal / v.34 no.1 / pp.87-97 / 2012
  • This study presents an automatic matching method for generating a dense, accurate, and discontinuity-preserving digital surface model (DSM) from multiple images acquired by an aerial digital frame camera. The proposed method consists of two main procedures: area-based multi-image matching (AMIM) and stereo-pair epipolar line matching (SELM). AMIM evaluates the sum of the normalized cross-correlation of corresponding image points from multiple images to determine the optimal height of an object point. A novel method is introduced for determining the search height range and height increment required by the vertical line locus used in AMIM. This procedure also selects the best reference and target images for each strip, so that multi-image matching can resolve the common problem of occluded areas. SELM extracts densely positioned distinct points along epipolar lines from the multiple images and generates a discontinuity-preserving DSM using geometric and radiometric constraints. The matched points derived by AMIM serve as anchor points between overlapping images for finding conjugate distinct points using epipolar geometry. The performance of the proposed method was evaluated on several different test areas, including urban areas.
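A schematic sketch of the AMIM scoring idea: for each candidate height along the vertical line locus, sum the normalized cross-correlation of the corresponding image patches and keep the height with the best total. The patch extraction (which requires the sensor model) is abstracted away, so this is an assumption-laden outline rather than the paper's algorithm.

    import numpy as np

    def ncc(a, b):
        # Normalized cross-correlation of two equally sized patches.
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        return float(np.mean(a * b))

    def best_height(candidate_heights, patch_sets):
        # patch_sets[i]: patches of the same ground point reprojected into
        # every image at candidate_heights[i] (reference image first).
        totals = [sum(ncc(ps[0], p) for p in ps[1:]) for ps in patch_sets]
        return candidate_heights[int(np.argmax(totals))]

    # Synthetic demo: the patches agree best at the middle height.
    rng = np.random.default_rng(0)
    ref = rng.random((7, 7))
    sets = [[ref, rng.random((7, 7))],               # wrong height
            [ref, ref + 0.01 * rng.random((7, 7))],  # correct height
            [ref, rng.random((7, 7))]]               # wrong height
    print(best_height([10.0, 12.0, 14.0], sets))     # -> 12.0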

Advanced Design Environment with Adaptive and Knowledge-Based Finite Elements

  • Haghighi, Kamyar; Jang, Eun
    • Proceedings of the Korean Society for Agricultural Machinery Conference / 1993.10a / pp.1222-1229 / 1993
  • An advanced design environment based on adaptive and knowledge-based finite elements (INTELMESH) has been developed. Unlike other approaches, INTELMESH incorporates information about the object geometry as well as the boundary and loading conditions to generate an a priori finite element mesh that is more refined around the critical regions of the problem domain. INTELMESH is designed for planar domains and axisymmetric 3D structures in elasticity and heat transfer subjected to mechanical and thermal loading. It intelligently identifies the critical regions and points in the problem domain and uses the new concepts of substructuring and wave propagation to choose the proper mesh size for them. INTELMESH generates well-shaped triangular elements by applying triangulation and Laplacian smoothing procedures. The adaptive analysis involves the initial finite element analysis and an efficient a posteriori error analysis and estimation. Once a problem is defined, the system automatically builds a finite element model and analyzes the problem through an automatic iterative process until the error reaches the desired level. It has been shown that the proposed approach, which initiates the process with an a priori, near-optimum mesh of the object, converges to the desired accuracy in less time and at lower cost. Such an advanced design and analysis environment will provide the capability for rapid product development, reducing design cycle time and cost.
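A small sketch of the Laplacian smoothing step mentioned above: every interior node moves to the centroid of its neighbors while boundary nodes stay fixed. The mesh representation (edge list plus fixed-node set) is an assumption for illustration.

    import numpy as np

    def laplacian_smooth(points, edges, fixed, iterations=10):
        # points: (n, 2) node coordinates; edges: (a, b) index pairs.
        pts = points.astype(float).copy()
        neighbors = {i: set() for i in range(len(pts))}
        for a, b in edges:
            neighbors[a].add(b)
            neighbors[b].add(a)
        for _ in range(iterations):
            new = pts.copy()
            for i, nbrs in neighbors.items():
                if i not in fixed and nbrs:
                    new[i] = pts[list(nbrs)].mean(axis=0)  # move to centroid
            pts = new
        return pts

    # Unit square with one badly placed interior node.
    pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.9, 0.9]])
    edges = [(0, 4), (1, 4), (2, 4), (3, 4)]
    print(laplacian_smooth(pts, edges, fixed={0, 1, 2, 3})[4])  # ~[0.5, 0.5]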


Development of an Automatic Generation Methodology for Digital Elevation Models using a Two-Dimensional Digital Map (수치지형도를 이용한 DEM 자동 생성 기법의 개발)

  • Park, Chan-Soo; Lee, Seong-Kyu; Suh, Yong-Cheol
    • Journal of the Korean Association of Geographic Information Studies / v.10 no.3 / pp.113-122 / 2007
  • The rapid growth of aerial survey and remote sensing technology has enabled the acquisition of very large amounts of geographic data, which should be analyzed with real-time visualization technology. A level-of-detail (LOD) algorithm is one of the most important elements for realizing real-time visualization. We chose the triangulated irregular network (TIN) method to generate normalized digital elevation model (DEM) data. First, we generated TIN data using contour lines obtained from a two-dimensional (2D) digital map and created a 2D grid array fitting the size of the area. Then, we generated normalized DEM data by calculating the intersection points between the TIN data and the points of the 2D grid array. We used constrained Delaunay triangulation (CDT) and ray-triangle intersection algorithms to calculate these intersection points at each step. In addition, we simulated a three-dimensional (3D) terrain model based on the normalized DEM data with real-time visualization, using a Microsoft Visual C++ 6.0 program with the DirectX API and a quad-tree LOD algorithm.
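A compact sketch of the ray-triangle step: drop a vertical ray from each grid node and intersect it with a TIN facet, here via the standard Möller-Trumbore test, which may differ from the authors' exact routine.

    import numpy as np

    def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
        # Moller-Trumbore: returns the intersection point or None.
        e1, e2 = v1 - v0, v2 - v0
        p = np.cross(direction, e2)
        det = e1.dot(p)
        if abs(det) < eps:
            return None                      # ray parallel to triangle plane
        inv = 1.0 / det
        s = origin - v0
        u = s.dot(p) * inv
        if u < 0.0 or u > 1.0:
            return None
        q = np.cross(s, e1)
        v = direction.dot(q) * inv
        if v < 0.0 or u + v > 1.0:
            return None
        t = e2.dot(q) * inv
        return origin + t * direction        # point on the triangle

    # Vertical ray through grid node (0.2, 0.2) against one TIN facet.
    tri = (np.array([0., 0., 10.]), np.array([1., 0., 12.]), np.array([0., 1., 11.]))
    hit = ray_triangle(np.array([0.2, 0.2, 100.]), np.array([0., 0., -1.]), *tri)
    print(hit)  # z component (10.6) is the DEM height at that grid node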


Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan; Han, Nam-Gi; Song, Min
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.109-122 / 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive data generation, greatly influencing society. This is an unmatched phenomenon in history, and we now live in the Age of Big Data. SNS data qualifies as Big Data in that it satisfies the conditions of volume (the amount of data), velocity (data input and output speed), and variety (the diversity of data types). If someone intends to discover the trend of an issue in SNS Big Data, this information can serve as an important new source for creating value, because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the needs of analyzing SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides four functions: (1) a topic keyword set corresponding to a daily ranking; (2) a daily time-series graph of a topic over the duration of a month; (3) the importance of a topic, shown in a treemap based on a scoring system and frequency; and (4) a daily time-series graph of any searched keyword. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, to process various unrefined forms of unstructured data. Such analysis also requires the latest big data technology to rapidly process large amounts of real-time data, such as the Hadoop distributed system or NoSQL, an alternative to relational databases. We built TITS on Hadoop to optimize big data processing, because Hadoop is designed to scale from single-node computing to thousands of machines. We also use MongoDB, an open-source, document-oriented NoSQL database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables; its primary goals are data accessibility and data processing performance. In the Age of Big Data, visualization is attractive to the Big Data community because it helps analysts examine data easily and clearly, so TITS uses the d3.js library as its visualization tool. This library is designed for creating Data-Driven Documents that bind the document object model (DOM) to arbitrary data; the resulting interaction makes it easy to manage a real-time data stream with smooth animation. TITS also uses Bootstrap, a collection of pre-configured style sheets and JavaScript plug-ins, to build the web system. The TITS Graphical User Interface (GUI), designed with these libraries, can detect issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the superiority of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique used in various research areas, including Library and Information Science (LIS), and on this basis confirm the utility of storytelling and time-series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets in Korea during March 2013.
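A toy sketch of the topic-extraction core using LDA in gensim; the real system adds Korean-language preprocessing (stop-word removal, noun extraction), Hadoop/MongoDB storage, and d3.js views, and the token lists below are invented stand-ins for preprocessed tweets.

    from gensim import corpora, models

    tweets = [
        ["election", "vote", "candidate"],
        ["election", "debate", "vote"],
        ["movie", "premiere", "actor"],
        ["actor", "movie", "award"],
    ]
    dictionary = corpora.Dictionary(tweets)
    corpus = [dictionary.doc2bow(t) for t in tweets]

    lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2,
                          passes=10, random_state=1)
    for topic_id, words in lda.print_topics(num_words=3):
        print(topic_id, words)  # topic keyword sets for a daily ranking view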