• Title/Abstract/Keyword: Region-based CNN

Search results: 78 (processing time: 0.022 s)

Automatic Recognition of Symbol Objects in P&IDs using Artificial Intelligence

  • 신호진;전은미;권도경;권준석;이철진
    • 플랜트 저널 / Vol. 17, No. 3 / pp. 37-41 / 2021
  • A P&ID (Piping and Instrument Diagram) is a core engineering drawing that concentrates a plant's equipment and instrumentation information. A single P&ID contains hundreds of pieces of information expressed as symbols, and their digitization is currently performed by hand, consuming considerable manpower and time. Previous studies succeeded in detecting drawing objects with CNN models, but at roughly 30 minutes per drawing and a recognition rate of about 90%, the performance falls short of field deployment. This study therefore proposes a 1-stage object detection algorithm that handles region detection and object recognition simultaneously. Training data are built with an open-source image-labeling tool, and a method for recognizing symbol images in drawings is proposed through deep learning model training.

Visual Explanation of a Deep Learning Solar Flare Forecast Model and Its Relationship to Physical Parameters

  • Yi, Kangwoo;Moon, Yong-Jae;Lim, Daye;Park, Eunsu;Lee, Harim
    • 천문학회보 / Vol. 46, No. 1 / pp. 42.1-42.1 / 2021
  • In this study, we present a visual explanation of a deep learning solar flare forecast model and its relationship to physical parameters of solar active regions (ARs). For this, we use full-disk magnetograms at 00:00 UT from the Solar and Heliospheric Observatory/Michelson Doppler Imager and the Solar Dynamics Observatory/Helioseismic and Magnetic Imager, physical parameters from the Space-weather HMI Active Region Patch (SHARP), and Geostationary Operational Environmental Satellite X-ray flare data. Our deep learning flare forecast model based on the Convolutional Neural Network (CNN) predicts "Yes" or "No" for the daily occurrence of C-, M-, and X-class flares. We interpret the model using two CNN attribution methods (guided backpropagation and Gradient-weighted Class Activation Mapping [Grad-CAM]) that provide quantitative information on explaining the model. We find that our deep learning flare forecasting model is intimately related to AR physical properties that have also been distinguished in previous studies as holding significant predictive ability. Major results of this study are as follows. First, we successfully apply our deep learning models to the forecast of daily solar flare occurrence with TSS = 0.65, without any preprocessing to extract features from data. Second, using the attribution methods, we find that the polarity inversion line is an important feature for the deep learning flare forecasting model. Third, the ARs with high Grad-CAM values produce more flares than those with low Grad-CAM values. Fourth, nine SHARP parameters such as total unsigned vertical current, total unsigned current helicity, total unsigned flux, and total photospheric magnetic free energy density are well correlated with Grad-CAM values.
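The Grad-CAM attribution used above can be sketched compactly: the channel weights are the global-average-pooled gradients of the class score with respect to one convolutional layer's activations, and the heat map is the ReLU of the weighted sum of those activation maps. A minimal NumPy sketch with synthetic data (the function name and toy shapes are illustrative, not the authors' code):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM map from one conv layer.

    activations, gradients: (K, H, W) arrays -- the layer's feature maps
    and the gradients of the class score w.r.t. them.
    Returns an (H, W) map, ReLU-ed and scaled to [0, 1].
    """
    # Channel weights: global-average-pool the gradients.
    weights = gradients.mean(axis=(1, 2))                       # (K,)
    # Weighted sum of activation maps, then ReLU.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy example: 4 channels on an 8x8 map (synthetic data).
rng = np.random.default_rng(0)
heat = grad_cam(rng.random((4, 8, 8)), rng.standard_normal((4, 8, 8)))
```

Regions with high values in the resulting map (e.g., around a polarity inversion line in a magnetogram) are the ones the classifier attends to.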


Vehicle Detection in Dense Area Using UAV Aerial Images

  • 서창진
    • 한국산학기술학회논문지 / Vol. 19, No. 3 / pp. 693-698 / 2018
  • This paper proposes a method for detecting cars parked in dense areas using YOLOv2 (You Only Look Once), an algorithm that has recently drawn attention as a real-time object detector. YOLO's convolutional network directly predicts bounding boxes and per-class probabilities in a single evaluation of the whole image; because detection is a single network, it is fast and can be optimized end-to-end. Conventional sliding-window approaches and R-CNN-family detectors use region proposals to generate many candidate boxes in an image and train each component separately, which makes optimization and real-time application difficult. The proposed study applies YOLOv2 to overcome the real-time processing limitation of existing algorithms and detect cars on the ground in real time. The experiments used the open-source Darknet framework on a deep learning server equipped with four GTX-1080ti GPUs. The results show that the YOLO-based vehicle detection method reduced the detection overhead compared to existing algorithms and detected cars on the ground in real time.
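The single-evaluation prediction described above works by having each grid cell regress box offsets relative to its own position and to an anchor's size. A minimal sketch of that decoding step (the function and toy numbers are illustrative, not Darknet's implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_cell(t, cell_xy, anchor, grid=13):
    """Decode one YOLOv2-style prediction t = (tx, ty, tw, th, to) made by
    grid cell (cx, cy) with anchor box (pw, ph), both in grid units.
    Returns center, size (as fractions of the image), and confidence."""
    tx, ty, tw, th, to = t
    cx, cy = cell_xy
    pw, ph = anchor
    bx = (sigmoid(tx) + cx) / grid     # center offset stays inside the cell
    by = (sigmoid(ty) + cy) / grid
    bw = pw * np.exp(tw) / grid        # anchor scaled by an exponential factor
    bh = ph * np.exp(th) / grid
    return bx, by, bw, bh, sigmoid(to)

# Zero offsets from the center cell of a 13x13 grid with a 1x1 anchor
# yield a box centered in the image, one grid cell wide.
box = decode_cell((0.0, 0.0, 0.0, 0.0, 0.0), (6, 6), (1.0, 1.0))
```

Because every cell's boxes come out of one forward pass, non-maximum suppression over the decoded boxes is the only remaining per-image step, which is what makes the approach real-time.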

A Proposal of Deep Learning Based Semantic Segmentation to Improve Performance of Building Information Models Classification

  • 이고은;유영수;하대목;구본상;이관훈
    • 한국BIM학회 논문집 / Vol. 11, No. 3 / pp. 22-33 / 2021
  • In order to maximize the use of BIM, all data related to individual elements in the model must be correctly assigned, and it is essential to check whether each element corresponds to its IFC entity classification. However, as the BIM modeling process is performed by a large number of participants, it is difficult to achieve complete integrity. To solve this problem, studies on semantic integrity verification are being conducted that examine whether elements in a BIM model are correctly classified or IFC-mapped by applying artificial intelligence algorithms to 2D images of each element. Existing studies could not correctly classify some elements even when the geometric differences in the images were clear; this was found to be because the image region to be learned was not clearly delimited, so geometric characteristics were not properly reflected in the learning process. In this study, CRF-RNN-based semantic segmentation was applied to sharpen the element region within each image, and the result was then fed to the MVCNN algorithm to improve classification performance. Applying semantic segmentation in the MVCNN learning process to 889 data samples covering a total of 8 BIM element types yielded a classification accuracy of 0.92, an improvement of 0.06 over the conventional MVCNN.
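MVCNN's core step, merging the per-view CNN features of one element into a single descriptor, is an element-wise max over views. A minimal sketch (the function name and toy feature vectors are illustrative assumptions, not the paper's network):

```python
import numpy as np

def view_pool(view_features):
    """MVCNN-style view pooling: element-wise max over per-view CNN
    feature vectors, giving one descriptor per BIM element.
    view_features: (num_views, feature_dim) array-like."""
    return np.asarray(view_features).max(axis=0)

# Toy features from two rendered views of one element; each dimension
# keeps the strongest response seen from any view.
pooled = view_pool([[0.1, 0.9, 0.0],
                    [0.5, 0.2, 0.3]])
```

Segmenting the element region before rendering the views, as the study does, keeps background pixels from contributing to these per-view responses.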

Vision-Based Activity Recognition Monitoring Based on Human-Object Interaction at Construction Sites

  • Chae, Yeon;Lee, Hoonyong;Ahn, Changbum R.;Jung, Minhyuk;Park, Moonseo
    • 국제학술발표논문집 / The 9th International Conference on Construction Engineering and Project Management / pp. 877-885 / 2022
  • Vision-based activity recognition has been widely attempted at construction sites to estimate productivity and enhance workers' health and safety. Previous studies have focused on extracting an individual worker's postural information from sequential image frames for activity recognition. However, various trades of workers perform different tasks with similar postural patterns, which degrades the performance of activity recognition based on postural information. To this end, this research exploited a concept of human-object interaction, the interaction between a worker and their surrounding objects, considering the fact that trade workers interact with a specific object (e.g., working tools or construction materials) relevant to their trades. This research developed an approach to understand the context from sequential image frames based on four features: posture, object, spatial features, and temporal feature. Both posture and object features were used to analyze the interaction between the worker and the target object, and the other two features were used to detect movements from the entire region of image frames in both temporal and spatial domains. The developed approach used convolutional neural networks (CNN) for feature extractors and activity classifiers and long short-term memory (LSTM) was also used as an activity classifier. The developed approach provided an average accuracy of 85.96% for classifying 12 target construction tasks performed by two trades of workers, which was higher than two benchmark models. This experimental result indicated that integrating a concept of the human-object interaction offers great benefits in activity recognition when various trade workers coexist in a scene.


Deep Window Detection in Street Scenes

  • Ma, Wenguang;Ma, Wei
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 2 / pp. 855-870 / 2020
  • Windows are key components of building facades. Detecting windows, crucial to 3D semantic reconstruction and scene parsing, is a challenging task in computer vision. Early methods tried to solve window detection using hand-crafted features and traditional classifiers. However, these methods cannot handle the diversity of window instances in real scenes and suffer from heavy computational costs. Recently, convolutional neural network-based object detection algorithms have attracted much attention due to their good performance. Unfortunately, directly training them for challenging window detection does not achieve satisfactory results. In this paper, we propose an approach for window detection. It involves an improved Faster R-CNN architecture featuring a window region proposal network, RoI feature fusion, and a context enhancement module. In addition, a post-optimization process based on the regular distribution of windows refines the detection results obtained by the improved deep architecture. Furthermore, we present a newly collected dataset, the largest to date for window detection in real street scenes. Experimental results on both existing datasets and the new dataset show that the proposed method has outstanding performance.
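The idea of post-optimizing by window regularity can be illustrated with a simple stand-in rule: windows on one facade tend to share rows, so detections with similar vertical centers can be snapped to a common row coordinate. This is a hypothetical sketch of the idea, not the paper's actual procedure; the function name and `tol` threshold are assumptions:

```python
import numpy as np

def snap_rows(boxes, tol=8):
    """Snap window detections with similar vertical centers to a shared
    row coordinate, exploiting the regular row layout of facades.
    boxes: (N, 4) array-like of (x, y, w, h); returns the adjusted array."""
    boxes = np.array(boxes, dtype=float)
    order = np.argsort(boxes[:, 1])
    rows, current = [], [order[0]]
    for idx in order[1:]:
        # Start a new row when the vertical gap exceeds the tolerance.
        if boxes[idx, 1] - boxes[current[-1], 1] <= tol:
            current.append(idx)
        else:
            rows.append(current)
            current = [idx]
    rows.append(current)
    for row in rows:
        boxes[row, 1] = boxes[row, 1].mean()   # align each row's boxes
    return boxes

# Two detections near y=100 form a row; the third stays on its own row.
snapped = snap_rows([(0, 100, 10, 10), (50, 103, 10, 10), (120, 200, 10, 10)])
```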

Image Retrieval Based on the Weighted and Regional Integration of CNN Features

  • Liao, Kaiyang;Fan, Bing;Zheng, Yuanlin;Lin, Guangfeng;Cao, Congjun
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 3 / pp. 894-907 / 2022
  • The features extracted by convolutional neural networks are more descriptive of images than traditional features, and their convolutional layers are more suitable for retrieving images than are fully connected layers. The convolutional layer features will consume considerable time and memory if used directly to match an image. Therefore, this paper proposes a feature weighting and region integration method for convolutional layer features to form global feature vectors and subsequently use them for image matching. First, the 3D feature of the last convolutional layer is extracted, and the convolutional feature is subsequently weighted again to highlight the edge information and position information of the image. Next, we integrate several regional eigenvectors that are processed by sliding windows into a global eigenvector. Finally, the initial ranking of the retrieval is obtained by measuring the similarity of the query image and the test image using the cosine distance, and the final mean Average Precision (mAP) is obtained by using the extended query method for rearrangement. We conduct experiments using the Oxford5k and Paris6k datasets and their extended datasets, Paris106k and Oxford105k. These experimental results indicate that the global feature extracted by the new method can better describe an image.
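The pipeline described above, regional max-pooling over sliding windows, aggregation into one global vector, and cosine-similarity ranking, can be sketched as follows. This is a minimal stand-in: it omits the paper's additional edge/position re-weighting and query expansion, and the parameter names are assumptions:

```python
import numpy as np

def regional_vector(fmap, win=2, stride=1):
    """Aggregate a conv feature map (K, H, W) into one global descriptor:
    max-pool each sliding-window region into a K-dim vector, L2-normalize
    it, sum the region vectors, and L2-normalize the sum."""
    K, H, W = fmap.shape
    out = np.zeros(K)
    for y in range(0, H - win + 1, stride):
        for x in range(0, W - win + 1, stride):
            v = fmap[:, y:y + win, x:x + win].max(axis=(1, 2))
            n = np.linalg.norm(v)
            if n > 0:
                out += v / n
    return out / (np.linalg.norm(out) + 1e-12)

def cosine_rank(query, db):
    """Initial retrieval ranking by cosine similarity
    (rows of db and the query are unit vectors, so a dot product suffices)."""
    return np.argsort(-(db @ query))

# Toy last-conv-layer feature map and a 2-image database.
rng = np.random.default_rng(1)
g = regional_vector(rng.random((3, 4, 4)))
ranking = cosine_rank(g, np.stack([g, -g]))
```

Matching one short global vector per image, instead of the full convolutional tensor, is what keeps the retrieval's time and memory cost manageable.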

Single Image Super Resolution using sub-Edge Extraction based on Hierarchical Structure

  • 한현호
    • 디지털정책학회지 / Vol. 1, No. 2 / pp. 53-59 / 2022
  • This paper proposes a single-image super-resolution method that uses auxiliary edge (sub-edge) features extracted through a hierarchical structure. To improve super-resolution quality, edge regions in the image must be rendered sharply while the shape of each region remains clearly distinguishable. The proposed method uses a structure in which a deep-learning-based super-resolution network exploits edge-region information as an auxiliary input, so that the edge regions, a key factor determining quality, are improved while the structural form of the input image is preserved. In addition to a group-convolution structure for deep-learning-based super-resolution, a separate hierarchical edge-accumulation process based on high-frequency-band information extracts the sub-edges, which are then used as auxiliary features. Experiments showed about a 1% improvement in PSNR and SSIM over existing super-resolution methods.
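The hierarchical sub-edge accumulation can be illustrated with a simple stand-in: take the high-frequency band (image minus a blurred copy) at several pyramid levels and accumulate the upsampled results as an auxiliary edge feature. This is a hypothetical sketch of the idea, not the paper's network; it assumes image dimensions divisible by 2**levels:

```python
import numpy as np

def high_freq(img):
    """High-frequency band: image minus a 3x3 box-blurred copy."""
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    blur = sum(pad[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0
    return img - blur

def hierarchical_edges(img, levels=3):
    """Accumulate high-frequency (sub-edge) maps over a pyramid of
    2x-downsampled copies, upsampled back by nearest-neighbour repeat."""
    acc = np.zeros_like(img, dtype=float)
    cur = img.astype(float)
    scale = 1
    for _ in range(levels):
        e = high_freq(cur)
        acc += np.repeat(np.repeat(e, scale, axis=0), scale, axis=1)
        cur = cur[::2, ::2]      # next (coarser) pyramid level
        scale *= 2
    return acc

edges = hierarchical_edges(np.ones((16, 16)))   # flat image -> no edges
```

Coarser levels contribute broad structural boundaries while the finest level contributes detail, which is why accumulating across the hierarchy helps keep region shapes distinct during upscaling.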

Image-based Soft Drink Type Classification and Dietary Assessment System Using Deep Convolutional Neural Network with Transfer Learning

  • Rubaiya Hafiz;Mohammad Reduanul Haque;Aniruddha Rakshit;Amina khatun;Mohammad Shorif Uddin
    • International Journal of Computer Science & Network Security / Vol. 24, No. 2 / pp. 158-168 / 2024
  • There is hardly anyone in modern times who has not taken soft drinks instead of drinking water. Because the rate of soft drink consumption is surprisingly high, researchers around the world have repeatedly cautioned that these drinks lead to weight gain, raise the risk of non-communicable diseases, and so on. Therefore, in this work an image-based tool is developed to monitor the nutritional information of soft drinks using a deep convolutional neural network with transfer learning. First, visual saliency, mean shift segmentation, thresholding, and noise reduction, collectively known as 'pre-processing', are applied to locate the drink region. After removing the background and segmenting out only the desired area, a Discrete Wavelet Transform (DWT) based resolution enhancement technique is applied to improve image quality. After that, a transfer learning model is employed to classify the drinks. Finally, the nutrition value of each drink is estimated using Bag-of-Features (BoF) based classification and a Euclidean distance-based ratio calculation. To achieve this, a dataset was built of the ten most consumed soft drinks in Bangladesh; the images were collected from the ImageNet dataset as well as the internet. The proposed method detects and recognizes different types of drinks with an accuracy of 98.51%.

Vehicle Headlight and Taillight Recognition in Nighttime using Low-Exposure Camera and Wavelet-based Random Forest

  • 허두영;김상준;곽충섭;남재열;고병철
    • 방송공학회논문지 / Vol. 22, No. 3 / pp. 282-294 / 2017
  • This paper proposes an intelligent headlamp control system that is robust to camera motion caused by a moving vehicle and to light sources on the road. Candidate light sources are detected within an ROI (Region of Interest) based on a model that estimates the camera's perspective range; the ROI is divided into a FROI (Front ROI) and a BROI (Back ROI). Within the ROI, vehicle headlights and taillights, reflections, and surrounding road lights are segmented by two adaptive thresholds. From the segmented candidates, taillights are detected by a random forest classifier based on redness and Haar-like features. For fast training and real-time processing, the headlight/taillight classification uses a random forest classifier rather than an SVM (Support Vector Machine) or a CNN (Convolutional Neural Network). Finally, a pairing step applies predefined rules such as vertical-coordinate similarity and association checks between light sources. Applied to data covering diverse nighttime driving environments, the proposed algorithm showed improved detection performance over recent related work.
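The pairing step's predefined rules, vertical-coordinate similarity plus an association check between light sources, can be sketched as a greedy rule-based matcher. The thresholds `max_dy` and `max_ratio` are illustrative assumptions, not the paper's values:

```python
def pair_lights(boxes, max_dy=10, max_ratio=1.5):
    """Greedily pair lamp candidates into the left/right lights of one
    vehicle using predefined rules: similar vertical position and
    comparable size. boxes: list of (x, y, w, h) detections."""
    pairs, used = [], set()
    for i, (xi, yi, wi, hi) in enumerate(boxes):
        if i in used:
            continue
        for j in range(i + 1, len(boxes)):
            if j in used:
                continue
            xj, yj, wj, hj = boxes[j]
            same_row = abs(yi - yj) <= max_dy                  # vertical-coordinate similarity
            size_ok = max(hi, hj) <= max_ratio * min(hi, hj)   # association check on size
            if same_row and size_ok:
                pairs.append((i, j))
                used.update((i, j))
                break
    return pairs

# Two candidates at similar heights pair up; the distant one is left alone.
pairs = pair_lights([(10, 50, 8, 8), (60, 52, 8, 8), (100, 200, 8, 8)])
```

Such rule-based pairing is cheap enough to run after classification without threatening the system's real-time budget.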