• Title/Summary/Keyword: Automated Data Analysis

Search Results: 586

An Extraction of Geometric Characteristics Parameters of Watershed by Using Geographic Information System (지형정보시스템을 이용한 하천유역의 형태학적 특성인자의 추출)

  • 안상진;함창학
    • Water for future
    • /
    • v.28 no.2
    • /
    • pp.115-124
    • /
    • 1995
  • A GIS is capable of extracting various hydrological factors from a DEM (digital elevation model). One of the important tasks in hydrological analysis is the delineation of the watershed, which is an essential element among the geometric characteristics of a basin. In this study, the watershed itself and other geometric factors of the watershed are extracted from a DEM using GIS techniques. The manual process of obtaining the geometric characteristics of a watershed is automated using the functions of the ARC/INFO GIS package. Scanned map data were used for this study and converted to DEM data. Various forms of spatial data representation are handled in the main module and the GRID module of ARC/INFO. The GRID module is applied to the stream network to define the watershed boundary, so that the watersheds can be obtained; flow directions, stream networks, and stream orders are also generated. The results show that GIS can aid watershed management, research, and surveillance. The geometric characteristic parameters of a watershed can be quantified with ease using GIS techniques, and the cumbersome manual process can be automated (a flow-direction sketch follows this entry).

  • PDF
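
The watershed extraction above hinges on deriving a flow-direction grid from the DEM, which the study does inside ARC/INFO's GRID module. As an illustration only, the sketch below shows the standard D8 flow-direction rule in plain Python with NumPy; the toy DEM and the function name are hypothetical, not taken from the paper.

```python
import numpy as np

def d8_flow_direction(dem: np.ndarray) -> np.ndarray:
    """Assign each interior cell the D8 direction (0-7) of its steepest
    downslope neighbour; -1 marks pits/flat cells with no lower neighbour."""
    # Neighbour offsets in D8 order: E, SE, S, SW, W, NW, N, NE
    offsets = [(0, 1), (1, 1), (1, 0), (1, -1),
               (0, -1), (-1, -1), (-1, 0), (-1, 1)]
    distances = np.array([1, np.sqrt(2), 1, np.sqrt(2),
                          1, np.sqrt(2), 1, np.sqrt(2)])
    rows, cols = dem.shape
    direction = np.full((rows, cols), -1, dtype=int)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            # Elevation drop per unit distance toward each of the eight neighbours
            drops = np.array([(dem[r, c] - dem[r + dr, c + dc]) / d
                              for (dr, dc), d in zip(offsets, distances)])
            if drops.max() > 0:            # at least one lower neighbour exists
                direction[r, c] = int(drops.argmax())
    return direction

# Toy DEM sloping toward the lower-right corner
dem = np.array([[9., 8., 7.],
                [8., 6., 5.],
                [7., 5., 3.]])
print(d8_flow_direction(dem))
```

Accumulating flow along these directions is what yields the stream network and, upstream of a chosen outlet cell, the watershed boundary.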

The Study of the Roughness of the Pavement on the Bridge Deck and Approach Slab Using 5-Year (2003 to 2007) Pavement Condition Survey Data (HPMS 데이터를 이용한 고속도로 교량 및 뒷채움구간 평탄성 특성 연구)

  • Park, Sang-Wook;Suh, Young-Chan
    • International Journal of Highway Engineering
    • /
    • v.10 no.3
    • /
    • pp.189-197
    • /
    • 2008
  • Using five years (2003 to 2007) of pavement condition survey data from the highway pavement management system (HPMS), the roughness of bridge deck pavement was analyzed. Based on the results of this analysis, this study attempted to identify the factors affecting the deterioration of bridge deck pavement condition. The HPMS data indicate that the roughness of bridge deck pavement is worse than that of general pavement on the roadbed. The poorer roughness of the bridge deck pavement is caused by settlement of the approach slab as well as surface distress on the bridge deck pavement. To improve the roughness of bridge deck pavement effectively, a management system was established in which a regular automated pavement condition survey checks surface distress on the bridge deck pavement and an automated surface profiler checks the degree of approach slab settlement (a small data-comparison sketch follows this entry).

  • PDF
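
As a rough illustration of the kind of comparison the abstract describes, the sketch below aggregates roughness values by section type with pandas. The column names, units (IRI in m/km), and numbers are placeholders, not HPMS data.

```python
import pandas as pd

# Hypothetical HPMS-style records: section type and measured roughness (IRI, m/km).
# Column names and values are illustrative only, not taken from the study.
records = pd.DataFrame({
    "year":         [2003, 2003, 2004, 2004, 2005, 2005],
    "section_type": ["bridge_deck", "roadbed", "bridge_deck",
                     "roadbed", "bridge_deck", "roadbed"],
    "iri_m_per_km": [3.1, 2.0, 3.4, 2.1, 3.6, 2.2],
})

# Compare average roughness of bridge-deck sections with general roadbed sections.
summary = records.groupby("section_type")["iri_m_per_km"].agg(["mean", "std"])
print(summary)
```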

Assessment of Mild Cognitive Impairment in Elderly Subjects Using a Fully Automated Brain Segmentation Software

  • Kwon, Chiheon;Kang, Koung Mi;Byun, Min Soo;Yi, Dahyun;Song, Huijin;Lee, Ji Ye;Hwang, Inpyeong;Yoo, Roh-Eul;Yun, Tae Jin;Choi, Seung Hong;Kim, Ji-hoon;Sohn, Chul-Ho;Lee, Dong Young
    • Investigative Magnetic Resonance Imaging
    • /
    • v.25 no.3
    • /
    • pp.164-171
    • /
    • 2021
  • Purpose: Mild cognitive impairment (MCI) is a prodromal stage of Alzheimer's disease (AD). Brain atrophy in this disease spectrum begins in the medial temporal lobe structure, which can be recognized by magnetic resonance imaging. To overcome the unsatisfactory inter-observer reliability of visual evaluation, quantitative brain volumetry has been developed and widely investigated for the diagnosis of MCI and AD. The aim of this study was to assess the prediction accuracy of quantitative brain volumetry using a fully automated segmentation software package, NeuroQuant®, for the diagnosis of MCI. Materials and Methods: A total of 418 subjects from the Korean Brain Aging Study for Early Diagnosis and Prediction of Alzheimer's Disease cohort were included in our study. Each participant was allocated to either a cognitively normal old group (n = 285) or an MCI group (n = 133). Brain volumetric data were obtained from T1-weighted images using the NeuroQuant software package. Logistic regression and receiver operating characteristic (ROC) curve analyses were performed to investigate relevant brain regions and their prediction accuracies. Results: Multivariate logistic regression analysis revealed that normative percentiles of the hippocampus (P < 0.001), amygdala (P = 0.003), frontal lobe (P = 0.049), medial parietal lobe (P = 0.023), and third ventricle (P = 0.012) were independent predictive factors for MCI. In ROC analysis, normative percentiles of the hippocampus and amygdala showed fair accuracies in the diagnosis of MCI (area under the curve: 0.739 and 0.727, respectively). Conclusion: Normative percentiles of the hippocampus and amygdala provided by the fully automated segmentation software could be used for screening MCI with a reasonable post-processing time. This information might help us interpret structural MRI in patients with cognitive impairment.
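
The study's statistical step combines logistic regression with ROC analysis. The sketch below reproduces that pattern with scikit-learn on synthetic normative percentiles; the variable names and data are illustrative assumptions, not the cohort data or the NeuroQuant output.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical normative percentiles for two regions (hippocampus, amygdala)
# and a binary label (1 = MCI, 0 = cognitively normal); values are synthetic.
n = 400
hippocampus_pct = rng.uniform(0, 100, n)
amygdala_pct = rng.uniform(0, 100, n)
# Lower percentiles (more atrophy) are made more likely to be labelled MCI.
logit = -0.03 * hippocampus_pct - 0.02 * amygdala_pct + 2.0
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([hippocampus_pct, amygdala_pct])
model = LogisticRegression().fit(X, y)

# Area under the ROC curve for the fitted predictor.
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"AUC = {auc:.3f}")
```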

3-D Information Model for High-speed Railway Infrastructures (고속철도시설물을 위한 3차원정보모델)

  • Shim, Chang-Su;Kim, Deok-Won;Youn, Nu-Ri
    • Proceedings of the Computational Structural Engineering Institute Conference
    • /
    • 2008.04a
    • /
    • pp.241-246
    • /
    • 2008
  • The design of a high-speed railway line requires collaboration among heterogeneous application systems and among engineers with different backgrounds. Object-based 3D models with metadata can serve as a shared information model for effective collaborative design. In this paper, a railway infrastructure information model (RIIM) is proposed to enable integrated and interoperable work throughout the life cycle of railway infrastructure, from planning to maintenance. To develop the model, object-based 3-D models were built for a 10 km section of the Korean high-speed railway lines. The model has three information layers, for designers, contractors, and the owner, respectively. Prestressed concrete box girders are the most common bridge superstructure. The design information layer carries metadata on requirements, design codes, geometry, analysis, and so on; the construction layer carries data on drawings, as-built data for materials and products, schedules, and so on; and the maintenance layer for the owner carries the final geometry, material data, products, and their suppliers. This information has its own data architecture, derived from concepts similar to the product breakdown structure (PBS) and work breakdown structure (WBS). The constructed RIIM for the high-speed railway infrastructure was successfully applied to areas such as design checking, structural analysis, automated estimation, construction simulation, virtual viewing, and digital mock-up. The integrated information model can realize a virtual construction system for railway lines and dramatically increase the productivity of the whole engineering process (a small sketch of such a layered object follows this entry).

  • PDF
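
To make the layered structure concrete, the sketch below models a single bridge girder object with separate design, construction, and maintenance layers plus PBS/WBS codes. All field names and values are hypothetical; the paper's actual schema is not reproduced here.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DesignLayer:
    design_code: str
    geometry: Dict[str, float]          # e.g. span length, girder depth
    analysis_results: Dict[str, float]  # e.g. max moment, deflection

@dataclass
class ConstructionLayer:
    drawings: List[str]
    materials: Dict[str, str]
    schedule: Dict[str, str]            # activity -> planned date

@dataclass
class MaintenanceLayer:
    as_built_geometry: Dict[str, float]
    suppliers: Dict[str, str]           # product -> supplier

@dataclass
class BridgeGirder:
    pbs_code: str                       # product breakdown structure code
    wbs_code: str                       # work breakdown structure code
    design: DesignLayer = field(default_factory=lambda: DesignLayer("", {}, {}))
    construction: ConstructionLayer = field(default_factory=lambda: ConstructionLayer([], {}, {}))
    maintenance: MaintenanceLayer = field(default_factory=lambda: MaintenanceLayer({}, {}))

girder = BridgeGirder(pbs_code="BR-01-G03", wbs_code="CIV-BR-ERECTION")
girder.design.geometry["span_m"] = 25.0
print(girder.pbs_code, girder.design.geometry)
```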

A Study on the Character of CALS - From an ILS Point of View - (CALS 성격 규명에 관한 연구 - ILS를 중심으로 -)

  • 손병식;김성권
    • The Journal of Society for e-Business Studies
    • /
    • v.3 no.1
    • /
    • pp.43-67
    • /
    • 1998
  • Logistics is by no means a new subject area. The concept of logistics goes back a long way; systems have become more complex as technology has advanced, and logistics requirements have increased accordingly. In 1964, when the ILS philosophy formally came into being, ILS was defined in general terms that did not describe what actions an ILS program should accomplish. The ILS philosophy was developed from 1964 through 1980. In 1982, the United States Department of Defense formulated a new concept, CALS. CALS is the strategy by which the US defense establishment manages the transition to integrated and automated data interchange in defense system engineering, manufacturing, and logistics support. Its goal is to use the inherent features of digitized data to revolutionize the data-gathering, data-storage, and data-transfer technologies associated with the development of defense systems. The result will be systems that are cheaper, more reliable, and easier to maintain. To define the character of CALS, this paper compares the two concepts of CALS and ILS. The elements of CALS consist of standards and EDI, while the elements of ILS include LCC (Life Cycle Cost), LSA (Logistics Support Analysis), LSAR (Logistics Support Analysis Record), and the acquisition cycle.

  • PDF

Spatiotemporal Analysis of Vessel Trajectory Data using Network Analysis (네트워크 분석 기법을 이용한 항적 데이터의 시공간적 특징 분석)

  • Oh, Jaeyong;Kim, Hye-Jin
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.26 no.7
    • /
    • pp.759-766
    • /
    • 2020
  • In recent years, the maritime traffic environment has been changing in various ways, and traffic volume has been increasing constantly. Accordingly, the requirements for maritime traffic analysis have diversified. To this end, traffic characteristics must first be analyzed using vessel trajectory data. However, as the conventional method is mostly manual, it requires a considerable amount of time and effort, and errors may occur during data processing. In addition, ensuring the reliability of the analysis results is difficult because the method depends on the subjective judgment of the analyst. Therefore, in this paper, we propose an automated method of traffic network generation for maritime traffic analysis. In the experiment, spatiotemporal features are analyzed using data collected at Mokpo Harbor over six months. The proposed method can automatically generate a traffic network reflecting the traffic characteristics of the experimental area, and it can be applied to a large amount of trajectory data. Finally, as spatiotemporal characteristics can be analyzed using the traffic network, the proposed method is expected to be used in various maritime traffic analyses.
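
A minimal sketch of the network-generation idea, assuming trajectories have already been reduced to sequences of named waypoints (for example by clustering AIS positions): consecutive waypoints become directed edges whose weights count transits. The node names and trajectories below are invented, not Mokpo Harbor data.

```python
import networkx as nx

# Hypothetical, simplified trajectories expressed as waypoint sequences.
trajectories = [
    ["anchorage", "channel_1", "berth_A"],
    ["anchorage", "channel_1", "berth_B"],
    ["berth_A", "channel_1", "anchorage"],
]

G = nx.DiGraph()
for traj in trajectories:
    for origin, destination in zip(traj, traj[1:]):
        # Accumulate traffic volume on each directed link.
        if G.has_edge(origin, destination):
            G[origin][destination]["weight"] += 1
        else:
            G.add_edge(origin, destination, weight=1)

for u, v, data in G.edges(data=True):
    print(f"{u} -> {v}: {data['weight']} transits")
```

Edge weights, node degrees, and time-sliced versions of this graph are the kind of spatiotemporal features such a traffic network exposes.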

Comparison of performance of automatic detection model of GPR signal considering the heterogeneous ground (지반의 불균질성을 고려한 GPR 신호의 자동탐지모델 성능 비교)

  • Lee, Sang Yun;Song, Ki-Il;Kang, Kyung Nam;Ryu, Hee Hwan
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.24 no.4
    • /
    • pp.341-353
    • /
    • 2022
  • Pipelines are buried throughout urban areas, and the position (depth and orientation) of a buried pipeline should be clearly identified before ground excavation. Although various geophysical methods can be used to detect buried pipelines, it is not easy to identify their exact position because of heterogeneous ground conditions. Among non-destructive geo-exploration methods, ground penetrating radar (GPR) can explore the subsurface rapidly and at relatively low cost compared with other exploration methods. However, interpreting the data obtained from GPR requires considerable experience because the images are not intuitive. Recently, research on automated detection of GPR data using deep learning has been conducted, but the lack of GPR data, which is essential for training, makes it difficult to build a reliable detection model. To overcome this problem, we conducted a preliminary study to improve the performance of the detection model using finite difference time domain (FDTD)-based numerical analysis. First, numerical analysis was performed with a homogeneous soil medium having a single permittivity; for heterogeneous ground, the analysis considered ground heterogeneity using a fractal technique. Second, deep learning was carried out using a convolutional neural network. Detection Model-A was trained with the data set obtained from homogeneous ground, while Detection Model-B was trained with data sets obtained from both homogeneous and heterogeneous ground. As a result, Detection Model-B, trained with heterogeneous ground included, shows better performance than Detection Model-A, indicating that ground heterogeneity should be considered to increase the performance of automated detection models for GPR exploration.
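
A minimal sketch of a convolutional classifier for GPR B-scan patches, written in PyTorch. The input size, layer sizes, and two-class setup are assumptions for illustration; the paper's actual network architecture and training data are not reproduced here.

```python
import torch
import torch.nn as nn

class GprDetector(nn.Module):
    """Tiny CNN classifying a B-scan patch as 'pipeline present' vs 'no pipeline'."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# One random 64x64 single-channel patch stands in for a simulated B-scan.
model = GprDetector()
dummy_bscan = torch.randn(1, 1, 64, 64)
print(model(dummy_bscan).shape)   # torch.Size([1, 2])
```

In the spirit of the study, Model-A and Model-B would be two instances of such a network trained on different (homogeneous vs. mixed) FDTD-simulated data sets.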

Development of an Automated Model for Selecting Overlapping Areas of Marine Activity Zone using GIS (GIS를 활용한 해양활동공간 중첩구역 산출 자동화 모형개발)

  • KIM, Bum-Kyu;PARK, Yong-Gil;CHOI, Hyun-Woo;KIM, Tae-Hoon
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.25 no.3
    • /
    • pp.59-73
    • /
    • 2022
  • Conflicts between the use and conservation of the ocean are intensifying, so it is essential to introduce an effective method that defines and manages, in advance, the sea areas associated with each core value of the ocean. Although the ocean is divided into nine marine use zones and managed through marine spatial planning, analysis of the sea areas where mutually exclusive activities overlap is insufficient. In this study, an automated model was developed to derive the sea areas where the core values of the ocean conflict. To analyze marine activities, available data on marine activities were collected, and the data necessary for analyzing mutually exclusive marine activities were identified. After classifying the derived data into legal and characteristic data, a conflict matrix was prepared through pairwise comparison between data sets to designate priorities when overlaps occur. Based on the designated priorities, an automated model was developed, and the sea areas where marine activities conflict were derived, visualized, and their areas calculated. By clearly deriving the sea areas where major issues occur, the model is expected to improve the efficiency of decision-making in establishing marine spatial plans.
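
A minimal sketch of the overlap-and-priority idea using geopandas: two activity layers are intersected, and a pairwise priority table decides which activity prevails in the overlapping area. The zones, activity names, and priority matrix are hypothetical.

```python
import geopandas as gpd
from shapely.geometry import Polygon

# Two hypothetical activity zones; geometries and names are illustrative only.
fishery = gpd.GeoDataFrame(
    {"activity": ["fishery"]},
    geometry=[Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])],
)
extraction = gpd.GeoDataFrame(
    {"activity": ["aggregate_extraction"]},
    geometry=[Polygon([(2, 2), (6, 2), (6, 6), (2, 6)])],
)

# Pairwise conflict matrix: which activity takes priority where they overlap.
priority = {("fishery", "aggregate_extraction"): "fishery"}

# Overlay the two layers and keep only the overlapping (conflict) areas.
overlap = gpd.overlay(fishery, extraction, how="intersection")
overlap["winner"] = overlap.apply(
    lambda row: priority[(row["activity_1"], row["activity_2"])], axis=1
)
overlap["area"] = overlap.geometry.area
print(overlap[["activity_1", "activity_2", "winner", "area"]])
```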

Standard-based Integration of Heterogeneous Large-scale DNA Microarray Data for Improving Reusability

  • Jung, Yong;Seo, Hwa-Jeong;Park, Yu-Rang;Kim, Ji-Hun;Bien, Sang Jay;Kim, Ju-Han
    • Genomics & Informatics
    • /
    • v.9 no.1
    • /
    • pp.19-27
    • /
    • 2011
  • Gene Expression Omnibus (GEO) holds the largest collection of gene-expression microarray data, and the collection has grown exponentially. Microarray data in GEO have been generated in many different formats and often lack standardized annotation and documentation, so it is hard to know whether preprocessing has been applied to a dataset and, if so, in what way. Standard-based integration of heterogeneous data formats and metadata is necessary for comprehensive data query, analysis, and mining. We attempted to integrate the heterogeneous microarray data in GEO based on the Minimum Information About a Microarray Experiment (MIAME) standard. We unified the data fields of the GEO data tables and mapped the attributes of GEO metadata onto MIAME elements. We also discriminated non-preprocessed raw datasets from processed ones using a two-step classification method. Most of the procedures were developed as semi-automated algorithms incorporating text-mining techniques. We localized 2,967 Platforms, 4,867 Series, and 103,590 Samples covering 279 organisms, integrated them into a standard-based relational schema, and developed a comprehensive query interface for data extraction. Our tool, GEOQuest, is available at http://www.snubi.org/software/GEOQuest/.
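
In the spirit of the standard-based mapping described above, the sketch below translates a flat GEO-style metadata record into MIAME-style keys, keeping unmapped fields for manual review. The field names on both sides are illustrative assumptions, not the authors' actual schema.

```python
# Hypothetical mapping of a few GEO Series/Sample attributes onto MIAME elements.
GEO_TO_MIAME = {
    "series_title":           "experiment_design.name",
    "series_summary":         "experiment_design.description",
    "sample_source_name":     "sample.biomaterial_source",
    "sample_characteristics": "sample.biomaterial_characteristics",
    "platform_technology":    "array_design.technology_type",
    "data_processing":        "measurement.data_processing_protocol",
}

def map_geo_record(geo_record: dict) -> dict:
    """Translate a flat GEO metadata record into MIAME-style keys,
    keeping unmapped fields under an 'unmapped' bucket for review."""
    mapped, unmapped = {}, {}
    for key, value in geo_record.items():
        if key in GEO_TO_MIAME:
            mapped[GEO_TO_MIAME[key]] = value
        else:
            unmapped[key] = value
    mapped["unmapped"] = unmapped
    return mapped

record = {
    "series_title": "Expression profiling of ...",
    "data_processing": "RMA normalized",
    "submitter_email": "someone@example.org",
}
print(map_geo_record(record))
```

Fields such as "data_processing" are also what a two-step classifier would inspect to flag whether a dataset is raw or already preprocessed.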

Attention-based word correlation analysis system for big data analysis (빅데이터 분석을 위한 어텐션 기반의 단어 연관관계 분석 시스템)

  • Hwang, Chi-Gon;Yoon, Chang-Pyo;Lee, Soo-Wook
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.27 no.1
    • /
    • pp.41-46
    • /
    • 2023
  • With the development of machine learning, big data analysis can employ a variety of techniques. However, big data collected in the real world lack an automated refining technique that treats identical or similar terms consistently based on semantic analysis of the relationships between words. Since most big data are described in ordinary sentences, it is difficult to grasp the meaning and terminology of those sentences. To solve these problems, morphological analysis and an understanding of sentence meaning are necessary. Accordingly, NLP, a set of techniques for analyzing natural language, can capture the relationships between words and sentences. Among NLP techniques, the transformer has been proposed as a way to overcome the disadvantages of RNNs by using self-attention within an encoder-decoder (seq2seq) structure. In this paper, transformers are used to form associations between words in order to understand the words and phrases of sentences extracted from big data.
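
The word-association weights that such an approach builds on come from the transformer's scaled dot-product self-attention, softmax(QK^T / sqrt(d_k)) V. A minimal NumPy sketch, using the same matrix for queries, keys, and values and random placeholder embeddings (not learned parameters from the paper):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Return softmax(Q K^T / sqrt(d_k)) V and the attention weight matrix."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # pairwise word-to-word scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # row-wise softmax
    return weights @ V, weights

# Toy embeddings for a 3-word sentence (d_model = 4).
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
context, attn = scaled_dot_product_attention(X, X, X)
print(np.round(attn, 2))   # 3x3 word-association (attention) matrix
```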