• Title/Summary/Keyword: Deep Space Network


Study on Q-value prediction ahead of tunnel excavation face using recurrent neural network (순환인공신경망을 활용한 터널굴착면 전방 Q값 예측에 관한 연구)

  • Hong, Chang-Ho;Kim, Jin;Ryu, Hee-Hwan;Cho, Gye-Chun
    • Journal of Korean Tunnelling and Underground Space Association, v.22 no.3, pp.239-248, 2020
  • Accurate rock mass classification allows suitable support patterns to be installed. Face mapping is usually conducted to classify the rock mass using RMR (Rock Mass Rating) or Q values. There have been several attempts to predict the rock mass grade with deep learning, using mechanical data from jumbo drills or probe drills and photographs of excavation faces. However, these approaches were time-consuming or could not estimate the rock grade ahead of the tunnel face. In this study, a method to predict the Q value ahead of the excavation face is developed using a recurrent neural network (RNN), and the predictions are compared with Q values from face mapping for verification. Of the Q values from over 4,600 tunnel faces, 70% of the data were used for training and the rest for verification. Training was repeated while varying the number of training iterations and the number of previous excavation faces used as input. Agreement between the predicted and actual Q values was evaluated with the root mean square error (RMSE). The lowest RMSE was obtained with 600 training iterations and 2 prior excavation faces. Although the results may vary with the input data set, they help to show how past ground conditions affect future ground conditions and to predict the Q value ahead of the tunnel excavation face.
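The abstract gives no implementation details, but a minimal sketch of the idea, using a Keras LSTM in place of the unspecified RNN variant, a window of the 2 prior faces reported as best, and a synthetic Q-value log in place of the paper's data, might look like the following (all array shapes, layer sizes, and hyperparameters are assumptions):

```python
import numpy as np
import tensorflow as tf

# Hypothetical Q-value log along the tunnel axis (the paper's real data is not public).
q_values = np.random.rand(4600).astype("float32")

WINDOW = 2  # number of prior excavation faces used as input (best case in the abstract)

# Build (samples, timesteps, features) windows: predict Q at face i from faces i-2, i-1.
X = np.stack([q_values[i:i + WINDOW] for i in range(len(q_values) - WINDOW)])[..., None]
y = q_values[WINDOW:]

split = int(0.7 * len(X))  # 70% for training, the rest for verification
X_train, X_val = X[:split], X[split:]
y_train, y_val = y[:split], y[split:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(WINDOW, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])
model.fit(X_train, y_train, epochs=10, validation_data=(X_val, y_val), verbose=0)

rmse = model.evaluate(X_val, y_val, verbose=0)[1]
print(f"validation RMSE: {rmse:.4f}")
```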

Development of Deep Learning-based Automatic Classification of Architectural Objects in Point Clouds for BIM Application in Renovating Aging Buildings (딥러닝 기반 노후 건축물 리모델링 시 BIM 적용을 위한 포인트 클라우드의 건축 객체 자동 분류 기술 개발)

  • Kim, Tae-Hoon;Gu, Hyeong-Mo;Hong, Soon-Min;Choo, Seoung-Yeon
    • Journal of KIBIM, v.13 no.4, pp.96-105, 2023
  • This study focuses on developing a building object recognition technology for efficient use in the remodeling of buildings constructed without drawings. In the era of the 4th industrial revolution, smart technologies are being developed. This research contributes to the architectural field by introducing a deep learning-based method for automatic object classification and recognition, utilizing point cloud data. We use a TD3D network with voxels, optimizing its performance through adjustments in voxel size and number of blocks. This technology enables the classification of building objects such as walls, floors, and roofs from 3D scanning data, labeling them in polygonal forms to minimize boundary ambiguities, although challenges in classifying object boundaries were still observed. The model facilitates the automatic classification of non-building objects, thereby reducing manual effort in data matching processes. It also distinguishes between elements to be demolished or retained during remodeling. The study minimized lost space in the data set by labeling with the extremities of the x, y, and z coordinates. The research aims to enhance the efficiency of building object classification and to improve the quality of architectural plans by reducing manpower and time during remodeling, in line with its goal of developing an efficient classification technology. Future work can extend to creating classified objects using parametric tools with polygon-labeled datasets, offering meaningful numerical analysis for remodeling processes. Continued research in this direction is anticipated to significantly advance the efficiency of building remodeling techniques.
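The paper's voxel-based pipeline is not reproduced here, but the voxelization step it tunes (voxel size) can be illustrated with a minimal NumPy sketch; the point cloud, voxel size, and deduplication strategy below are illustrative assumptions, not the paper's TD3D pipeline:

```python
import numpy as np

def voxelize(points, voxel_size=0.05):
    """Map 3D points to integer voxel indices; voxel_size is the tuning knob the
    abstract mentions (smaller voxels keep more boundary detail but cost more memory)."""
    origin = points.min(axis=0)                      # shift so indices start at 0
    idx = np.floor((points - origin) / voxel_size).astype(np.int64)
    # Keep one representative point per occupied voxel.
    _, keep = np.unique(idx, axis=0, return_index=True)
    return idx, keep

# Hypothetical scan: N x 3 coordinates (walls, floors, roofs would carry labels alongside).
scan = np.random.rand(10000, 3) * np.array([10.0, 10.0, 3.0])
voxel_idx, keep = voxelize(scan, voxel_size=0.1)
print(f"{len(scan)} points -> {len(keep)} occupied voxels")
```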

FRS-OCC: Face Recognition System for Surveillance Based on Occlusion Invariant Technique

  • Abbas, Qaisar
    • International Journal of Computer Science & Network Security, v.21 no.8, pp.288-296, 2021
  • Automated face recognition in a runtime environment is gaining more and more importance in the fields of surveillance and urban security. This is a difficult task given the constantly changing image landscape with varying features and attributes. For a system to be beneficial in industrial settings, it is pertinent that its efficiency is not compromised when running on roads, intersections, and busy streets. However, recognition in such uncontrolled circumstances is a major problem in real-life applications. This paper addresses the main problem of face recognition in which the full face is not visible (occlusion). This is a common occurrence, as any person can change his features by wearing a scarf or sunglasses, or by merely growing a mustache or beard. Such discrepancies in facial appearance are frequently encountered in uncontrolled circumstances and can undermine security systems based on face recognition. These types of variations are very common in real-life environments; they have been studied relatively little in the literature, but they are now a major focus of research. Existing state-of-the-art techniques suffer from several limitations, the most significant being low usability and poor response time in the event of an incident. In this paper, an improved face recognition system, called FRS-OCC, is developed to solve the problem of occlusion. To build the FRS-OCC system, color and texture features are extracted, and an incremental learning algorithm (Learn++) is then used to select the more informative features. Afterward, a trained stacked autoencoder (SAE) deep learning algorithm is used to recognize a human face. Overall, the FRS-OCC system introduces algorithms that enhance the response time to guarantee a benchmark quality of service in any situation. To test and evaluate the performance of the proposed FRS-OCC system, the AR face dataset is utilized. On average, the FRS-OCC system outperformed other state-of-the-art methods, achieving an SE of 98.82%, SP of 98.49%, AC of 98.76% and AUC of 0.9995. The obtained results indicate that the FRS-OCC system can be used in any surveillance application.
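As a rough illustration of the SAE stage described above, the following hedged Keras sketch pre-trains an autoencoder on hypothetical selected color-texture features and stacks a softmax classifier on the encoder; the feature dimension, layer sizes, identity count, and data are assumptions, and the Learn++ selection step is omitted:

```python
import numpy as np
import tensorflow as tf

N_FEATURES, N_IDENTITIES = 256, 100   # assumed sizes of the selected feature vector and identity gallery

# Hypothetical selected color-texture features and identity labels.
X = np.random.rand(2000, N_FEATURES).astype("float32")
y = np.random.randint(0, N_IDENTITIES, size=2000)

# Stage 1: train an autoencoder to compress the features into a lower-dimensional code.
inp = tf.keras.Input(shape=(N_FEATURES,))
code = tf.keras.layers.Dense(64, activation="relu")(inp)
recon = tf.keras.layers.Dense(N_FEATURES, activation="linear")(code)
autoencoder = tf.keras.Model(inp, recon)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=5, verbose=0)

# Stage 2: reuse the trained encoder and stack a softmax identity classifier on top.
encoder = tf.keras.Model(inp, code)
clf = tf.keras.Sequential([encoder,
                           tf.keras.layers.Dense(N_IDENTITIES, activation="softmax")])
clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
clf.fit(X, y, epochs=5, verbose=0)
```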

Data abnormal detection using bidirectional long-short neural network combined with artificial experience

  • Yang, Kang;Jiang, Huachen;Ding, Youliang;Wang, Manya;Wan, Chunfeng
    • Smart Structures and Systems, v.29 no.1, pp.117-127, 2022
  • Data anomalies seriously threaten the reliability of bridge structural health monitoring systems and may trigger system misjudgment. To overcome this problem, an efficient and accurate data anomaly detection method is needed. Traditional anomaly detection methods extract various abnormal features as key indicators and then set thresholds manually for each feature to identify specific anomalies; this is the artificial-experience method. However, limited by poor generalization ability across sensors, this method often leads to high labor costs. Another approach to anomaly detection is a data-driven one based on machine learning methods. Among these, the bidirectional long short-term memory neural network (BiLSTM), as an effective classification method, excels at finding complex relationships in multivariate time series data. However, training on unprocessed raw signals often leads to low computational efficiency and poor convergence because appropriate features have not been selected. Therefore, this article combines the advantages of the two methods by proposing a deep learning method that is fed statistical features derived from artificial experience. Experimental comparative studies illustrate that the BiLSTM model with appropriate feature input achieves an accuracy of 87-94%. Meanwhile, this paper provides basic principles of data cleaning and discusses the typical features of various anomalies. Furthermore, optimization strategies for feature space selection based on artificial experience are also highlighted.
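A minimal Keras sketch of the proposed combination, feeding windowed statistical features into a BiLSTM classifier, is shown below; the window length, feature count, anomaly-class count, and data are assumptions rather than the paper's configuration:

```python
import numpy as np
import tensorflow as tf

TIMESTEPS, N_FEATURES, N_CLASSES = 60, 6, 7   # assumed window length, statistical features, anomaly types

# Hypothetical input: per-window statistics (mean, std, skewness, ...) computed from raw sensor
# signals, following the abstract's idea of feeding artificial-experience features instead of raw data.
X = np.random.rand(5000, TIMESTEPS, N_FEATURES).astype("float32")
y = np.random.randint(0, N_CLASSES, size=5000)

model = tf.keras.Sequential([
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64),
                                  input_shape=(TIMESTEPS, N_FEATURES)),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, validation_split=0.2, verbose=0)
```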

Predicting Unseen Object Pose with an Adaptive Depth Estimator (적응형 깊이 추정기를 이용한 미지 물체의 자세 예측)

  • Sungho, Song;Incheol, Kim
    • KIPS Transactions on Software and Data Engineering, v.11 no.12, pp.509-516, 2022
  • Accurate pose prediction of objects in 3D space is an important visual recognition technique widely used in many applications such as scene understanding in both indoor and outdoor environments, robotic object manipulation, autonomous driving, and augmented reality. Most previous works on object pose estimation have the limitation that they require an exact 3D CAD model for each object. Unlike such previous works, this paper proposes a novel neural network model that can predict the poses of unknown objects based only on their RGB color images, without the corresponding 3D CAD models. The proposed model obtains the depth maps required for unknown object pose prediction by using an adaptive depth estimator, AdaBins. In this paper, we evaluate the usefulness and performance of the proposed model through experiments using benchmark datasets.
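AdaBins itself is not reproduced here, but the standard step that links its output to pose estimation, back-projecting a predicted depth map into a 3D point cloud with the pinhole camera model, can be sketched as follows (the depth map and camera intrinsics are placeholders):

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Convert a depth map (H x W, metres) into an (H*W) x 3 point cloud using the
    pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Placeholder depth map standing in for an estimator's prediction; intrinsics are assumed.
depth = np.full((480, 640), 2.0)
points = backproject(depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(points.shape)  # (307200, 3)
```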

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung;Song, Min-chae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems, v.24 no.2, pp.59-83, 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In deep learning-based sentiment analysis of English texts, natural language sentences included in the training and test datasets are usually converted into sequences of word vectors before being entered into the deep learning models. In this case, word vectors generally refer to vector representations of words obtained by splitting a sentence on space characters. There are several ways to derive word vectors, one of which is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data. They have been widely used in studies of sentiment analysis of reviews from various fields such as restaurants, movies, laptops, cameras, etc. Unlike in English, morphemes play an essential role in sentiment analysis and sentence structure analysis in Korean, which is a typical agglutinative language with highly developed postpositions and endings. A morpheme can be defined as the smallest meaningful unit of a language, and a word consists of one or more morphemes. For example, for the word '예쁘고', the morphemes are '예쁘' (adjective) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit in Korean sentiment analysis. Therefore, in this study, we use 'morpheme vectors' as input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word vector derivation mechanism to sentences divided into their constituent morphemes. This raises several questions. What is the desirable range of POS (Part-Of-Speech) tags when deriving morpheme vectors for improving the classification accuracy of a deep learning model? Is it proper to apply a typical word vector model, which relies primarily on the form of words, to Korean, which has a high homonym ratio? Will text preprocessing such as correcting spelling or spacing errors affect the classification accuracy, especially when drawing morpheme vectors from Korean product reviews with many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which may be encountered first when applying various deep learning models to Korean texts. As a starting point, we summarized these issues as three central research questions. First, which is more effective as the initial input of a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean regarding the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can we reach a satisfactory level of classification accuracy when applying deep learning to Korean sentiment analysis? As an approach to these research questions, we generate various types of morpheme vectors reflecting the research questions and then compare the classification accuracy using a non-static CNN (Convolutional Neural Network) model that takes the morpheme vectors as input. As for the training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used. To derive morpheme vectors, we use data from the same domain as the target and data from another domain: about 2 million cosmetics product reviews from Naver Shopping and 520,000 Naver News articles, arguably corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ on the following three criteria. First, they come from two types of data source: Naver News, with high grammatical correctness, and Naver Shopping's cosmetics product reviews, with low grammatical correctness. Second, they differ in the degree of data preprocessing, namely whether only sentence splitting is performed or additional spelling and spacing corrections are applied after sentence separation. Third, they vary in the form of the input fed into the word vector model: whether the morphemes themselves are entered or the morphemes with their POS tags attached. The morpheme vectors further vary depending on the range of POS tags considered, the minimum frequency for a morpheme to be included, and the random initialization range. All morpheme vectors are derived with the CBOW (Continuous Bag-Of-Words) model, using a context window of 5 and a vector dimension of 300. The results suggest that using text from the same domain even with a lower degree of grammatical correctness, performing spelling and spacing corrections as well as sentence splitting, and incorporating morphemes of all POS tags, including the incomprehensible category, lead to better classification accuracy. POS tag attachment, which is devised for the high proportion of homonyms in Korean, and the minimum frequency threshold for a morpheme to be included seem to have no definite influence on the classification accuracy.
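As a hedged sketch of the morpheme-vector derivation described above, the following assumes gensim 4.x and uses placeholder morpheme-tokenized reviews; a real pipeline would run a Korean morphological analyzer and optionally attach POS tags, as the study compares:

```python
from gensim.models import Word2Vec

# Each review is pre-split into morphemes (placeholder tokens here; a real pipeline would use
# a Korean morphological analyzer and could append POS tags to each morpheme as the paper tests).
morpheme_sentences = [
    ["예쁘", "고", "배송", "빠르", "다"],
    ["향", "이", "너무", "강하", "다"],
]

# CBOW (sg=0) with the abstract's settings: context window 5, 300-dimensional vectors.
# min_count is one of the knobs the study varies; 1 here is only a placeholder.
model = Word2Vec(sentences=morpheme_sentences, vector_size=300, window=5,
                 sg=0, min_count=1, epochs=10)

vector = model.wv["예쁘"]   # 300-dimensional morpheme vector for one morpheme
print(vector.shape)
```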

Development of Artificial Intelligence Joint Model for Hybrid Finite Element Analysis (하이브리드 유한요소해석을 위한 인공지능 조인트 모델 개발)

  • Jang, Kyung Suk;Lim, Hyoung Jun;Hwang, Ji Hye;Shin, Jaeyoon;Yun, Gun Jin
    • Journal of the Korean Society for Aeronautical & Space Sciences, v.48 no.10, pp.773-782, 2020
  • The development of joint FE models for deep learning neural network (DLNN)-based hybrid FEA is presented. Material models of the bolts and bearings in the front axle of a tractor, which show complex behavior induced by various tightening conditions, were replaced with DLNN models. Bolts are modeled as one-dimensional Timoshenko beam elements with six degrees of freedom, and bearings as three-dimensional solid elements. Stress-strain data were extracted from all elements after finite element analyses under various load conditions, and DLNNs for the bolts and bearings were trained with TensorFlow. The DLNN-based joint models were implemented in ABAQUS user subroutines, where the stresses for the next increment are updated and the algorithmic tangent stiffness matrix is calculated. Generalization of the trained DLNN in the FE model was verified by subjecting it to a new loading condition. Finally, the DLNN-based FEA of the tractor front axle was conducted, and its feasibility was verified by comparison with the results of a static structural experiment on the actual tractor.
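The ABAQUS user-subroutine coupling and tangent-stiffness computation are beyond a short example, but the core surrogate step, training a small dense network on stress-strain pairs with TensorFlow, can be sketched as follows (the data, component counts, and layer sizes are assumptions):

```python
import numpy as np
import tensorflow as tf

# Hypothetical training pairs extracted from FE runs: strain state in, stress state out
# (6 components each for a 3D solid element; the real data would come from the parametric FE analyses).
strain = np.random.uniform(-0.01, 0.01, size=(20000, 6)).astype("float32")
stress = np.random.uniform(-200.0, 200.0, size=(20000, 6)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="tanh", input_shape=(6,)),
    tf.keras.layers.Dense(64, activation="tanh"),
    tf.keras.layers.Dense(6),            # predicted stress components
])
model.compile(optimizer="adam", loss="mse")
model.fit(strain, stress, epochs=20, batch_size=256, verbose=0)

# In the paper, a trained network of this kind is called from ABAQUS user subroutines,
# where an algorithmic tangent could be approximated from the network Jacobian (not shown here).
```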

Semantic Segmentation of Hazardous Facilities in Rural Area Using U-Net from KOMPSAT Ortho Mosaic Imagery (KOMPSAT 정사모자이크 영상으로부터 U-Net 모델을 활용한 농촌위해시설 분류)

  • Sung-Hyun Gong;Hyung-Sup Jung;Moung-Jin Lee;Kwang-Jae Lee;Kwan-Young Oh;Jae-Young Chang
    • Korean Journal of Remote Sensing, v.39 no.6_3, pp.1693-1705, 2023
  • Rural areas, which account for about 90% of the country's land area, are increasing in importance and value as spaces that perform various public functions. However, facilities that adversely affect residents' lives, such as livestock facilities, factories, and solar panels, are being built indiscriminately near residential areas, damaging the rural environment and landscape and lowering residents' quality of life. To prevent disorderly development in rural areas and manage rural space in a planned manner, detection and monitoring of hazardous facilities in rural areas are necessary. Satellite imagery, which can be acquired periodically and provides information on the entire region, is a suitable data source, and effective detection is possible by applying image-based deep learning techniques built on convolutional neural networks. Therefore, the U-Net model, which shows high performance in semantic segmentation, was used to classify potentially hazardous facilities in rural areas. In this study, KOMPSAT ortho-mosaic optical imagery with a spatial resolution of 0.7 m, provided by the Korea Aerospace Research Institute in 2020, was used, and AI training data for livestock facilities, factories, and solar panels were produced by hand for training and inference. After training with U-Net, a pixel accuracy of 0.9739 and a mean Intersection over Union (mIoU) of 0.7025 were achieved. The results of this study can be used for monitoring hazardous facilities in rural areas and are expected to serve as a basis for rural planning.
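The evaluation metrics reported above, pixel accuracy and mean IoU, can be computed from a confusion matrix as in the following NumPy sketch; the four-class label convention used here is an assumption:

```python
import numpy as np

def pixel_accuracy_and_miou(y_true, y_pred, n_classes):
    """Compute pixel accuracy and mean IoU from flattened semantic label maps."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (y_true.ravel(), y_pred.ravel()), 1)        # confusion matrix
    pixel_acc = np.diag(cm).sum() / cm.sum()
    iou = np.diag(cm) / (cm.sum(axis=0) + cm.sum(axis=1) - np.diag(cm))
    return pixel_acc, np.nanmean(iou)

# Assumed label convention: 0 background, 1 livestock facility, 2 factory, 3 solar panel.
y_true = np.random.randint(0, 4, size=(256, 256))
y_pred = np.random.randint(0, 4, size=(256, 256))
print(pixel_accuracy_and_miou(y_true, y_pred, n_classes=4))
```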

Estimation of Displacements Using Artificial Intelligence Considering Spatial Correlation of Structural Shape (구조형상 공간상관을 고려한 인공지능 기반 변위 추정)

  • Seung-Hun Shin;Ji-Young Kim;Jong-Yeol Woo;Dae-Gun Kim;Tae-Seok Jin
    • Journal of the Computational Structural Engineering Institute of Korea, v.36 no.1, pp.1-7, 2023
  • An artificial intelligence (AI) method based on image deep learning is proposed to predict the entire displacement shape of a structure from partial displacement measurements. The performance of the method was investigated through a structural test of a steel frame. An image-to-image regression (I2IR) training method was developed based on the U-Net architecture for image recognition. In the I2IR method, the U-Net is modified to generate images of entire displacement shapes when images of partial displacement shapes of structures are input to the AI network. Furthermore, training on displacements combined with a location feature was developed so that nodal displacement values with their corresponding nodal coordinates could be used in AI training. The proposed training methods can consider correlations between nodal displacements in 3D space, and the accuracy of displacement prediction is improved compared with conventional artificial neural network training methods. Displacements of the steel frame were predicted during the structural tests using the proposed methods and compared with 3D scanning data of the displacement shapes. The results show that the proposed AI prediction properly follows the displacements measured by 3D scanning.
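A minimal sketch of image-to-image regression, a small convolutional encoder-decoder trained with an MSE loss to map partial-displacement images to full-displacement images, is given below; it does not reproduce the paper's modified U-Net or its location-feature encoding, and all sizes and data are assumptions:

```python
import numpy as np
import tensorflow as tf

# Hypothetical "displacement images": partial-displacement fields in, full fields out
# (64 x 64 single-channel grids standing in for the paper's nodal-displacement images).
X = np.random.rand(500, 64, 64, 1).astype("float32")
Y = np.random.rand(500, 64, 64, 1).astype("float32")

inp = tf.keras.Input(shape=(64, 64, 1))
x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
x = tf.keras.layers.UpSampling2D()(x)
out = tf.keras.layers.Conv2D(1, 3, padding="same", activation="linear")(x)  # regression output

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="mse")   # image-to-image regression objective
model.fit(X, Y, epochs=5, verbose=0)
```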

Prediction of the remaining time and time interval of pebbles in pebble bed HTGRs aided by CNN via DEM datasets

  • Mengqi Wu;Xu Liu;Nan Gui;Xingtuan Yang;Jiyuan Tu;Shengyao Jiang;Qian Zhao
    • Nuclear Engineering and Technology, v.55 no.1, pp.339-352, 2023
  • Prediction of the time-related traits of pebble flow inside pebble-bed HTGRs is of great significance for reactor operation and design. In this work, an image-driven approach aided by a convolutional neural network (CNN) is proposed to predict the remaining time of initially loaded pebbles and the time interval between paired flow images of the pebble bed. Two types of strategies are put forward: one adds FC layers to classic classification CNN models and uses regression training, and the other is CNN-based deep expectation (DEX), which treats time prediction as a deep classification task followed by softmax expected-value refinement. The current dataset is obtained from discrete element method (DEM) simulations. Results show that the CNN-aided models generally make satisfactory predictions of the remaining time, with a coefficient of determination larger than 0.99. Among these models, VGG19+DEX performs best, and its CumScore (the proportion of the test set with prediction error within 0.5 s) reaches 0.939. Besides, the remaining time for additional test sets and new cases can also be predicted well, indicating good generalization ability of the model. In the task of predicting the time interval of image pairs, the VGG19+DEX model has also generated satisfactory results. Particularly, the trained model, with promising generalization ability, has demonstrated great potential for accurately and instantaneously predicting the traits of interest without the need for additional computationally intensive DEM simulations. Nevertheless, data diversity and model optimization need to be improved to achieve the full potential of the CNN-aided prediction tool.
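The softmax expected-value refinement that DEX applies can be sketched in a few lines of NumPy; the bin discretisation below is an assumption, not the paper's:

```python
import numpy as np

def dex_expected_time(logits, bin_centers):
    """Deep-expectation style refinement: softmax over the time bins, then the prediction
    is the probability-weighted mean of the bin centers."""
    p = np.exp(logits - logits.max())        # numerically stable softmax
    p /= p.sum()
    return float(np.dot(p, bin_centers))

# Assumed discretisation: remaining time split into 0.5 s bins up to 50 s.
bin_centers = np.arange(0.25, 50.0, 0.5)
logits = np.random.randn(len(bin_centers))   # stand-in for a CNN classification head output
print(f"predicted remaining time: {dex_expected_time(logits, bin_centers):.2f} s")
```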