• Title/Summary/Keyword: Fully convolutional network model

Reconstruction of wind speed fields in mountainous areas using a full convolutional neural network

  • Ruifang Shen;Bo Li;Ke Li;Bowen Yan;Yuanzhao Zhang
    • Wind and Structures / v.38 no.4 / pp.231-244 / 2024
  • As wind farms expand into low wind speed areas, an increasing number are being established in mountainous regions. To fully utilize wind energy resources, it is essential to understand the details of mountain flow fields. Reconstructing the wind speed field in complex terrain is crucial for the planning, design, and operation of wind farms, and it affects a wind farm's profits throughout its life cycle. Currently, wind speed reconstruction is achieved primarily through physical and machine learning methods; however, physical methods often incur significant computational costs. We therefore propose a Full Convolutional Neural Network (FCNN)-based reconstruction method for mountain wind velocity fields to evaluate wind resources more accurately and efficiently. The method establishes a mapping relation between terrain, wind angle, height, and the corresponding fields of the three velocity components within a specific terrain range. Guided by this mapping relation, the fields of the three velocity components can be generated for different terrains, wind angles, and heights. The effectiveness of this method was demonstrated by reconstructing the wind speed field over complex terrain in Beijing.
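
The abstract does not give the network layout; a minimal fully convolutional sketch of the terrain-to-velocity mapping it describes (the class name WindFieldFCNN, the encoding of wind angle and height as extra input channels, and all layer sizes are illustrative assumptions, not the authors' architecture) could look like this in PyTorch:

```python
import torch
import torch.nn as nn

class WindFieldFCNN(nn.Module):
    """Minimal fully convolutional sketch: the input channels encode the terrain
    height map plus constant planes for wind angle and evaluation height; the
    output channels are the three velocity components (u, v, w)."""
    def __init__(self, in_ch=3, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1),  # no activation: velocity regression
        )

    def forward(self, x):
        return self.net(x)

# Hypothetical usage: a batch of 128x128 terrain patches with angle/height planes.
terrain_batch = torch.randn(4, 3, 128, 128)
velocity_fields = WindFieldFCNN()(terrain_batch)   # shape (4, 3, 128, 128)
```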

Semantic Segmentation using Convolutional Neural Network with Conditional Random Field (조건부 랜덤 필드와 컨볼루션 신경망을 이용한 의미론적인 객체 분할 방법)

  • Lim, Su-Chang;Kim, Do-Yeon
    • The Journal of the Korea institute of electronic communication sciences / v.12 no.3 / pp.451-456 / 2017
  • Semantic segmentation, one of the most basic yet complicated problems in computer vision, classifies each pixel of an image into a specific object and assigns it a label. MRF and CRF have previously been studied as effective methods for improving the accuracy of pixel-level labeling. In this paper, we propose a semantic segmentation method that combines CNN, a deep learning technique that has recently been in the spotlight, with CRF, a probabilistic model. The Pascal VOC 2012 image database was used for training and performance verification, and testing was performed with arbitrary images not used for training. The results show better segmentation performance than existing semantic segmentation algorithms.
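
The CNN-plus-CRF combination described above is commonly realized by refining the CNN's softmax output with a fully connected CRF. A minimal sketch along those lines, assuming the pydensecrf package and purely illustrative pairwise parameters (not necessarily the authors' settings), might be:

```python
import numpy as np
import pydensecrf.densecrf as dcrf                     # assumed dependency: pydensecrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(softmax_probs, rgb_image, iters=5):
    """Refine per-pixel CNN class probabilities with a fully connected CRF.
    softmax_probs: float32 array of shape (num_classes, H, W);
    rgb_image: uint8 array of shape (H, W, 3)."""
    num_classes, h, w = softmax_probs.shape
    d = dcrf.DenseCRF2D(w, h, num_classes)
    d.setUnaryEnergy(unary_from_softmax(softmax_probs))   # -log(p) unary potentials
    d.addPairwiseGaussian(sxy=3, compat=3)                # spatial smoothness term
    d.addPairwiseBilateral(sxy=60, srgb=13,               # appearance-sensitive term
                           rgbim=np.ascontiguousarray(rgb_image), compat=10)
    q = np.array(d.inference(iters)).reshape(num_classes, h, w)
    return q.argmax(axis=0)                               # refined per-pixel labels
```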

A Deep Learning-based Streetscapes Safety Score Prediction Model using Environmental Context from Big Data (빅데이터로부터 추출된 주변 환경 컨텍스트를 반영한 딥러닝 기반 거리 안전도 점수 예측 모델)

  • Lee, Gi-In;Kang, Hang-Bong
    • Journal of Korea Multimedia Society / v.20 no.8 / pp.1282-1290 / 2017
  • Since mitigating the fear of crime significantly increases consumption in a city, studies focusing on urban safety analysis have received much attention as a means of revitalizing the local economy. In addition, with the development of computer vision and machine learning technologies, efficient and automated analysis methods have been developed. Previous studies have used global features to predict the safety of cities, but such features have limited ability to accurately predict abstract information such as safety assessments. We therefore use a Convolutional Context Neural Network (CCNN) that considers "context" as a decision criterion to accurately predict the safety of cities. The CCNN model is constructed by combining a stacked autoencoder with a fully connected network to find the context, which is then used in the CNN model to predict the score. We analyzed the RMSE and correlation of SVR, AlexNet, and Sharing models to compare them with the performance of the CCNN model. Our results indicate that our model achieves much better RMSE and Pearson/Spearman correlation coefficients.
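
The abstract describes fusing a learned "context" vector with CNN image features before predicting the score; a hedged PyTorch sketch of such a fusion (the class name, layer sizes, and the way the context is injected are assumptions, and the stacked autoencoder that would produce the context vector is not shown) could be:

```python
import torch
import torch.nn as nn

class ContextScoreRegressor(nn.Module):
    """Illustrative sketch of fusing a learned context vector (e.g. produced by a
    stacked autoencoder's encoder, not shown) with CNN image features to regress
    a streetscape safety score. All names and layer sizes are assumptions."""
    def __init__(self, context_dim=64):
        super().__init__()
        self.cnn = nn.Sequential(                         # image feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(                        # fuse image + context features
            nn.Linear(64 + context_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),                            # scalar safety score
        )

    def forward(self, image, context):
        return self.head(torch.cat([self.cnn(image), context], dim=1))
```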

Malaria Cell Image Recognition Based On VGG19 Using Transfer Learning (전이 학습을 이용한 VGG19 기반 말라리아셀 이미지 인식)

  • Peng, Xiangshen;Kim, Kangchul
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.3 / pp.483-490 / 2022
  • Malaria is a disease caused by a parasite and is prevalent all over the world. The usual method for recognizing malaria cells is the examination of thick and thin blood smears, but this method requires a great deal of manual work, so its efficiency and accuracy are very low; in addition, the lack of pathologists in impoverished countries has led to high malaria mortality rates. In this paper, a malaria cell image recognition model using transfer learning is proposed, consisting of a feature extractor, a residual structure, and fully connected layers. The pre-trained parameters of the VGG-19 model are imported into the proposed model, the parameters of some convolutional layers are frozen, and fine-tuning is used to fit the model to the data. We also implement another malaria cell recognition model without the residual structure for comparison. The simulation results show that the model with the residual structure performs better than the one without it, and the proposed model achieves the best accuracy, 97.33%, compared with other recent papers.
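
The transfer-learning step described here (importing pre-trained VGG-19 weights, freezing some convolutional layers, and fine-tuning a new head) can be sketched as follows; the freezing cut-off, head sizes, and use of a recent torchvision are assumptions, and the paper's additional residual structure is omitted:

```python
import torch.nn as nn
from torchvision import models

# Load ImageNet-pretrained VGG-19, freeze the early convolutional layers, and
# replace the classifier for two classes (parasitized / uninfected cells).
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
for layer in list(vgg.features)[:20]:        # freeze early conv blocks (cut-off is illustrative)
    for p in layer.parameters():
        p.requires_grad = False
vgg.classifier = nn.Sequential(              # new head, fine-tuned on cell images
    nn.Linear(512 * 7 * 7, 512), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(512, 2),
)
```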

Deep Learning Similarity-based 1:1 Matching Method for Real Product Image and Drawing Image

  • Han, Gi-Tae
    • Journal of the Korea Society of Computer and Information / v.27 no.12 / pp.59-68 / 2022
  • This paper presents a method for 1:1 verification by comparing the similarity between a given real product image and a drawing image. The proposed method combines two existing CNN-based deep learning models to construct a Siamese Network. After the feature vector of each image is extracted through the FC (Fully Connected) layer of each network and the similarity is compared, the similarity is set to 1 for learning if the real product image and the drawing image (front view, left and right side views, top view, etc.) show the same product, and to 0 if they show different products. The test (inference) model is a deep learning model that queries the real product image and the drawing image in pairs to determine whether the pair is the same product. In the proposed model, if the similarity between the real product image and the drawing image is greater than or equal to a threshold value (0.5), the pair is determined to be the same product; if it is below the threshold, it is determined to be a different product. The proposed model showed an accuracy of about 71.8% for queries with a product whose drawing matches the real product (positive: positive), and an accuracy of about 83.1% for queries with a different product (positive: negative). In the future, we plan to improve the matching accuracy between the real product image and the drawing image by combining parameter optimization with the proposed model and adding processes such as data purification.
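
A hedged sketch of the verification setup described in the abstract, i.e. embedding both images, producing a similarity in [0, 1], and thresholding at 0.5, is shown below; the backbone choice, weight sharing, and layer sizes are assumptions rather than the authors' exact model:

```python
import torch
import torch.nn as nn
from torchvision import models

class SiameseVerifier(nn.Module):
    """Sketch of 1:1 verification: embed the real product photo and the drawing
    with a CNN backbone, then map the combined features to a similarity in [0, 1].
    The backbone choice, shared weights, and layer sizes are assumptions."""
    def __init__(self, dim=256):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, dim)
        self.embed = backbone
        self.head = nn.Sequential(nn.Linear(dim * 2, 64), nn.ReLU(),
                                  nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, photo, drawing):
        features = torch.cat([self.embed(photo), self.embed(drawing)], dim=1)
        return self.head(features).squeeze(1)             # similarity score in [0, 1]

# Decision rule from the abstract: similarity >= 0.5 means "same product".
def is_same_product(model, photo, drawing, threshold=0.5):
    with torch.no_grad():
        return model(photo, drawing) >= threshold
```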

CNN Model for Prediction of Tensile Strength based on Pore Distribution Characteristics in Cement Paste (시멘트풀의 공극분포특성에 기반한 인장강도 예측 CNN 모델)

  • Sung-Wook Hong;Tong-Seok Han
    • Journal of the Computational Structural Engineering Institute of Korea / v.36 no.5 / pp.339-346 / 2023
  • The uncertainties of microstructural features affect the properties of materials. Numerous pores that are randomly distributed in materials make it difficult to predict their properties. The distribution of pores in cementitious materials has a great influence on their mechanical properties. Existing studies focus on analyzing the statistical relationship between pore distribution and material responses, and the correlation between them has not yet been fully determined. In this study, the mechanical response of cementitious materials is predicted through an image-based data approach using a convolutional neural network (CNN), and the correlation between pore distribution and material response is analyzed. The dataset for machine learning consists of high-resolution micro-CT images and the properties (tensile strength) of cementitious materials. The microstructures are characterized, and the mechanical properties are evaluated through 2D direct tension simulations using the phase-field fracture model. The attributes of the input images are analyzed to identify the regions with the greatest influence on the prediction of the material response through the CNN. The correlation between pore distribution characteristics and material response is analyzed by comparing the active regions during the CNN process with the pore distribution.
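
The image-to-property prediction described here is, at its core, a CNN regression from a micro-CT slice to a scalar strength value; a minimal sketch under that assumption (the layer sizes and single-channel input are illustrative, not the authors' architecture) might be:

```python
import torch.nn as nn

class StrengthCNN(nn.Module):
    """Sketch of image-to-scalar regression: a single-channel micro-CT slice of
    the pore structure in, a predicted tensile strength out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.regressor = nn.Linear(64, 1)

    def forward(self, x):
        return self.regressor(self.features(x).flatten(1))

# Training would minimize a regression loss (e.g. nn.MSELoss()) against the
# tensile strengths obtained from the phase-field fracture simulations.
```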

CCTV-Based Multi-Factor Authentication System

  • Kwon, Byoung-Wook;Sharma, Pradip Kumar;Park, Jong-Hyuk
    • Journal of Information Processing Systems / v.15 no.4 / pp.904-919 / 2019
  • Many security systems rely solely on solutions based on Artificial Intelligence, which are inherently weak. These security solutions can be easily manipulated by malicious users who can gain unlawful access. Some security systems suggest using fingerprint-based solutions, but these can be easily deceived by copying fingerprints with clay. Image-based security is undoubtedly easy to manipulate, but it is also a solution that does not require any special training on the part of the user. In this paper, we propose a multi-factor security framework that operates in a three-step process to authenticate the user. The motivation of the research lies in utilizing commonly available and inexpensive devices, such as onsite CCTV cameras and smartphone cameras, to provide fully secure user authentication. We use technologies such as Argon2 for hashing image features and physically unclonable identification for secure device-server communication. We also discuss the methodological workflow of the proposed multi-factor authentication framework and present the service scenario of the proposed model. Finally, we qualitatively analyze the proposed model and compare it with state-of-the-art methods to evaluate its usability in real-world applications.

A Study on Person Re-Identification System using Enhanced RNN (확장된 RNN을 활용한 사람재인식 시스템에 관한 연구)

  • Choi, Seok-Gyu;Xu, Wenjie
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.17 no.2 / pp.15-23 / 2017
  • Person re-identification is one of the most challenging problems in computer vision due to significant changes in human pose and background clutter with occlusions. Pictures from non-overlapping cameras make it even more difficult to distinguish one person from another. To achieve better matching performance, most methods use feature selection and distance metrics separately to obtain discriminative representations and a proper distance describing the similarity between persons, which tends to ignore some significant features. This situation encouraged us to consider a novel method to deal with this problem. In this paper, we propose an enhanced recurrent neural network with a three-tier hierarchical network for person re-identification. Specifically, the proposed recurrent neural network (RNN) model contains an iterative expectation-maximization (EM) algorithm and a three-tier hierarchical network to jointly learn both the discriminative features and the distance metric. The iterative EM algorithm makes full use of the feature extraction ability of the convolutional neural network (CNN) placed in series before the RNN. Through unsupervised learning, the EM framework can change the labels of the patches and train on larger datasets. In the three-tier hierarchical network, the convolutional neural network, recurrent network, and pooling layer jointly act as a feature extractor to better train the network. The experimental results show that, compared with other researchers' approaches in this field, this method achieves competitive accuracy. The influence of the different components of this method will be analyzed and evaluated in future research.

Robust Coronary Artery Segmentation in 2D X-ray Images using Local Patch-based Re-connection Methods (지역적 패치기반 보정기법을 활용한 2D X-ray 영상에서의 강인한 관상동맥 재연결 기법)

  • Han, Kyunghoon;Jeon, Byunghwan;Kim, Sekeun;Jang, Yeonggul;Jung, Sunghee;Shim, Hackjoon;Chang, Hyukjae
    • Journal of Broadcast Engineering / v.24 no.4 / pp.592-601 / 2019
  • For coronary procedures, X-ray angiogram images are useful for diagnosis and for assisting the procedure. It is challenging to accurately segment a coronary artery in 2D X-ray images using only a single segmentation model because of the complex three-dimensional structure of the coronary artery, especially the phenomenon of vessels appearing broken in the middle or at the end of the artery. To solve these problems, an initial segmentation is performed using an existing single model, candidate regions for refined correction are estimated based on the initial segmentation, and a local patch-based correction is performed in the candidate regions. With this approach, broken coronary arteries are not only re-connected, but the very thin distal part of the coronary artery is also correctly detected. Furthermore, performance can be greatly improved by combining the proposed correction method with any existing coronary artery segmentation method. In this paper, U-Net, a fully convolutional network, was chosen as the segmentation method, and the proposed correction method was combined with U-Net to demonstrate a significant improvement in performance on X-ray images from several patients.
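
A hedged sketch of the local patch-based correction idea, assuming the initial U-Net mask, a list of candidate break points, and a patch-level segmentation model are already available (all three names below are hypothetical stand-ins, not the paper's implementation), could be:

```python
import numpy as np

def patch_correct(initial_mask, image, candidate_centers, patch_model, size=64):
    """Sketch of local patch-based re-connection: around each candidate break
    point, re-segment a small patch and merge the result into the initial U-Net
    mask. `candidate_centers` and `patch_model` are hypothetical stand-ins for
    the paper's candidate-region estimation and patch-level segmentation model."""
    corrected = initial_mask.copy()
    half = size // 2
    for cy, cx in candidate_centers:
        y0, y1 = max(cy - half, 0), min(cy + half, image.shape[0])
        x0, x1 = max(cx - half, 0), min(cx + half, image.shape[1])
        patch_pred = patch_model(image[y0:y1, x0:x1])           # local re-segmentation
        corrected[y0:y1, x0:x1] = np.maximum(corrected[y0:y1, x0:x1], patch_pred)
    return corrected
```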

Development of Deep Learning Based Ensemble Land Cover Segmentation Algorithm Using Drone Aerial Images (드론 항공영상을 이용한 딥러닝 기반 앙상블 토지 피복 분할 알고리즘 개발)

  • Hae-Gwang Park;Seung-Ki Baek;Seung Hyun Jeong
    • Korean Journal of Remote Sensing / v.40 no.1 / pp.71-80 / 2024
  • In this study, we propose an ensemble learning technique that aims to enhance the semantic segmentation performance of images captured by Unmanned Aerial Vehicles (UAVs). With the increasing use of UAVs in fields such as urban planning, techniques that apply deep learning segmentation methods to land cover segmentation have been actively developed. The study suggests a method that utilizes prominent segmentation models, namely U-Net, DeepLabV3, and Fully Convolutional Network (FCN), to improve segmentation prediction performance. The proposed approach integrates the training loss, validation accuracy, and class scores of the three segmentation models to enhance overall prediction performance. The method was applied and evaluated on a land cover segmentation problem involving seven classes: buildings, roads, parking lots, fields, trees, empty spaces, and areas with unspecified labels, using images captured by UAVs. The performance of the ensemble model was evaluated by mean Intersection over Union (mIoU), and a comparison of the proposed ensemble model with the three existing segmentation methods showed that mIoU performance was improved. Consequently, the study confirms that the proposed technique can enhance the performance of semantic segmentation models.
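
The score-level ensembling described above can be sketched as a weighted average of the per-class probability maps of the three models followed by a per-pixel argmax; equal weights are used here as a placeholder, whereas the paper combines training loss, validation accuracy, and class scores to weight the models:

```python
import torch

def ensemble_segment(models, image, weights=None):
    """Sketch of score-level ensembling: average the per-class probability maps of
    several segmentation models (here treated as callables returning logits of
    shape (B, C, H, W)) and take the per-pixel argmax."""
    if weights is None:
        weights = [1.0 / len(models)] * len(models)     # placeholder: equal weighting
    with torch.no_grad():
        probs = sum(w * torch.softmax(m(image), dim=1) for m, w in zip(models, weights))
    return probs.argmax(dim=1)                          # (B, H, W) predicted label map
```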