• Title/Summary/Keyword: Deep neural nets


도시 구조물 분류를 위한 3차원 점 군의 구형 특징 표현과 심층 신뢰 신경망 기반의 환경 형상 학습 (Spherical Signature Description of 3D Point Cloud and Environmental Feature Learning based on Deep Belief Nets for Urban Structure Classification)

  • 이세진;김동현 / 로봇학회논문지 / Vol. 11, No. 3 / pp.115-126 / 2016
  • This paper presents a method for the spherical signature description of 3D point clouds acquired with a laser range scanner mounted on a ground vehicle. Based on the spherical signature of each point, an extractor of significant environmental features is learned with Deep Belief Nets for urban structure classification. An arbitrary point in the 3D point cloud can represent its signature over its sky-facing surface using several neighboring points: the unit sphere centered on that point accumulates evidence in each angular tessellation cell. Depending on the kind of area a point belongs to, such as wall, ground, tree, or car, the resulting spherical signatures differ markedly from one another. These descriptors are fed into Deep Belief Nets, one family of deep neural networks, to learn the environmental feature extractor. With this learned feature extractor, 3D points can be classified well according to their urban structure. Experimental results show that the proposed method based on spherical signature description and Deep Belief Nets is suitable for mobile robots in terms of classification accuracy.
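The descriptor below is a minimal NumPy sketch of one way to read the abstract's spherical signature: the neighbors of a point are projected onto a unit sphere centered on it, and their evidence is accumulated into azimuth/elevation bins, yielding a flattened vector that could feed a Deep Belief Net. The bin counts, function name, and binning scheme are assumptions, not the paper's exact formulation.

```python
import numpy as np

def spherical_signature(point, neighbors, n_azimuth=16, n_elevation=8):
    """Accumulate neighboring points into angular bins on the unit sphere
    centered at `point` (one possible reading of the paper's descriptor)."""
    offsets = neighbors - point                        # vectors to neighbors
    dists = np.linalg.norm(offsets, axis=1)
    valid = dists > 1e-9
    directions = offsets[valid] / dists[valid, None]   # project onto unit sphere

    azimuth = np.arctan2(directions[:, 1], directions[:, 0])     # [-pi, pi)
    elevation = np.arcsin(np.clip(directions[:, 2], -1.0, 1.0))  # [-pi/2, pi/2]

    az_bin = np.floor((azimuth + np.pi) / (2 * np.pi) * n_azimuth).astype(int)
    el_bin = np.floor((elevation + np.pi / 2) / np.pi * n_elevation).astype(int)
    az_bin = np.clip(az_bin, 0, n_azimuth - 1)
    el_bin = np.clip(el_bin, 0, n_elevation - 1)

    signature = np.zeros((n_elevation, n_azimuth))
    np.add.at(signature, (el_bin, az_bin), 1.0)        # evidence accumulation per cell
    return signature.ravel()                           # flattened input for the DBN
```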

심층신경망을 이용한 PCB 부품의 인쇄문자 인식 (Recognition of Characters Printed on PCB Components Using Deep Neural Networks)

  • 조태훈 / 반도체디스플레이기술학회지 / Vol. 20, No. 3 / pp.6-10 / 2021
  • Recognition of characters printed or marked on PCB components from images captured with cameras is an important task in PCB component inspection systems. Previous optical character recognition (OCR) of PCB components typically consists of two stages: character segmentation and classification of each segmented character. However, character segmentation often fails due to corrupted characters, low image contrast, etc. Thus, OCR without character segmentation is desirable and is increasingly performed with deep neural networks. A typical segmentation-free implementation based on deep neural nets uses a convolutional neural network followed by a recurrent neural network (RNN). One disadvantage of this approach, however, is slow execution due to the RNN layers. LPRNet is a segmentation-free character recognition network whose accuracy has been proven in license plate recognition. LPRNet uses a wide convolution instead of an RNN, thus enabling fast inference. In this paper, LPRNet was adapted for recognizing characters printed on PCB components with fast execution and high accuracy. Initial training with synthetic images followed by fine-tuning on real text images yielded accurate recognition. The network can be further optimized for Intel CPUs using the OpenVINO toolkit; the optimized version runs in real time, faster even than on a GPU.
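A minimal PyTorch sketch of the segmentation-free idea described above: a small CNN backbone, a wide convolution along the width axis in place of an RNN, and CTC loss over per-column class scores. The layer sizes, character-set size, and kernel width are illustrative assumptions; this is not the actual LPRNet architecture.

```python
import torch
import torch.nn as nn

class WideConvOCR(nn.Module):
    """Segmentation-free text recognizer: CNN features plus a wide convolution
    over the width axis instead of an RNN, decoded with CTC."""
    def __init__(self, num_classes, in_ch=1):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        # a wide kernel along the width axis captures character context without an RNN
        self.context = nn.Conv2d(128, 128, kernel_size=(1, 13), padding=(0, 6))
        self.head = nn.Conv2d(128, num_classes, kernel_size=1)

    def forward(self, x):                   # x: (B, C, H, W)
        f = self.backbone(x)
        f = self.context(f).relu()
        f = f.mean(dim=2)                   # collapse height -> (B, C, W')
        logits = self.head(f.unsqueeze(2)).squeeze(2)    # (B, num_classes, W')
        return logits.permute(2, 0, 1).log_softmax(2)    # (T, B, classes) for CTC

# CTC training step on placeholder data (targets and lengths are illustrative)
model = WideConvOCR(num_classes=37)         # e.g. 36 symbols + CTC blank
ctc = nn.CTCLoss(blank=0, zero_infinity=True)
images = torch.randn(4, 1, 32, 128)
log_probs = model(images)                   # (T, B, classes)
targets = torch.randint(1, 37, (4, 8))
input_lengths = torch.full((4,), log_probs.size(0), dtype=torch.long)
target_lengths = torch.full((4,), 8, dtype=torch.long)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
```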

A Robust Energy Consumption Forecasting Model using ResNet-LSTM with Huber Loss

  • Albelwi, Saleh / International Journal of Computer Science & Network Security / Vol. 22, No. 7 / pp.301-307 / 2022
  • Energy consumption has grown alongside dramatic population increases. Statistics show that buildings in particular consume a significant amount of energy worldwide. Because of this, building energy prediction is crucial for optimizing utilities' energy plans and for creating predictive models for consumers. To improve energy prediction performance, this paper proposes a ResNet-LSTM model that combines residual networks (ResNets) and long short-term memory (LSTM) for energy consumption prediction. ResNets are used to extract complex and rich features, while LSTM learns temporal correlation; a dense layer serves as the regression head that forecasts energy consumption. To make the model more robust, Huber loss is employed during optimization. Huber loss handles small errors quadratically, which is efficient, and uses the absolute error for large errors, which increases robustness and makes the model less sensitive to outliers. The proposed system was trained on historical data to forecast energy consumption for different time series. To evaluate the proposed model, its performance was compared with several popular machine learning and deep learning methods such as linear regression, neural networks, decision trees, and convolutional neural networks. The results show that the proposed model predicted energy consumption most accurately.
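A minimal PyTorch sketch of the described combination, assuming 1D residual blocks over a sliding window of past consumption, an LSTM for temporal correlation, a dense regression head, and Huber loss for robust optimization; the window length, channel widths, and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

class ResBlock1D(nn.Module):
    """Simple 1D residual block for feature extraction from the load series."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv1d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.conv2(self.relu(self.conv1(x)))
        return self.relu(out + x)                    # identity shortcut

class ResNetLSTM(nn.Module):
    def __init__(self, in_features=1, channels=32, hidden=64):
        super().__init__()
        self.stem = nn.Conv1d(in_features, channels, 3, padding=1)
        self.res = nn.Sequential(ResBlock1D(channels), ResBlock1D(channels))
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)
        self.regressor = nn.Linear(hidden, 1)        # dense layer as regression head

    def forward(self, x):                            # x: (B, T, in_features)
        f = self.res(self.stem(x.transpose(1, 2)))   # (B, C, T) residual features
        out, _ = self.lstm(f.transpose(1, 2))        # temporal correlation
        return self.regressor(out[:, -1])            # forecast the next value

model = ResNetLSTM()
criterion = nn.HuberLoss(delta=1.0)                  # quadratic for small errors, linear for large
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

window = torch.randn(8, 24, 1)                       # 8 windows of 24 time steps each
target = torch.randn(8, 1)
loss = criterion(model(window), target)
loss.backward()
optimizer.step()
```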

심층신경망을 이용한 PCB 부품의 검지 및 인식 (Detection of PCB Components Using Deep Neural Nets)

  • 조태훈 / 반도체디스플레이기술학회지 / Vol. 19, No. 2 / pp.11-15 / 2020
  • In a typical initial setup of a PCB component inspection system, operators must manually input various information such as category, position, and inspection area for each component to be inspected, causing much inconvenience and long setup times. Although there are many deep learning based object detectors, RetinaNet is regarded as one of the best object detectors currently available. In this paper, a method using an extended RetinaNet is proposed that automatically detects the category and position of each component mounted on a PCB from a high-resolution color input image. We extended the basic RetinaNet feature pyramid network by adding a feature pyramid layer with higher spatial resolution to the basic feature pyramid. Experiments demonstrated that the extended RetinaNet can successfully detect very small components that would be missed by the basic RetinaNet. Using the proposed method enables automatic generation of inspection areas, considerably reducing the setup time of PCB component inspection systems.
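A minimal PyTorch sketch of the general idea of extending a feature pyramid with an additional, higher-resolution level (a P2 built from a stride-4 backbone feature map) so that very small components remain detectable; the channel sizes and level names are assumptions, not the paper's exact RetinaNet configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExtendedFPN(nn.Module):
    """Feature pyramid with an extra high-resolution level (P2) so that very
    small components still occupy enough pixels on some pyramid level."""
    def __init__(self, c2_ch=256, c3_ch=512, c4_ch=1024, c5_ch=2048, out_ch=256):
        super().__init__()
        self.lat2 = nn.Conv2d(c2_ch, out_ch, 1)   # extra lateral connection for P2
        self.lat3 = nn.Conv2d(c3_ch, out_ch, 1)
        self.lat4 = nn.Conv2d(c4_ch, out_ch, 1)
        self.lat5 = nn.Conv2d(c5_ch, out_ch, 1)
        self.smooth = nn.ModuleList(
            [nn.Conv2d(out_ch, out_ch, 3, padding=1) for _ in range(4)])

    def forward(self, c2, c3, c4, c5):
        p5 = self.lat5(c5)
        p4 = self.lat4(c4) + F.interpolate(p5, scale_factor=2, mode="nearest")
        p3 = self.lat3(c3) + F.interpolate(p4, scale_factor=2, mode="nearest")
        p2 = self.lat2(c2) + F.interpolate(p3, scale_factor=2, mode="nearest")  # added level
        return [s(p) for s, p in zip(self.smooth, (p2, p3, p4, p5))]

# Backbone feature maps for a 512x512 input (strides 4, 8, 16, 32)
c2 = torch.randn(1, 256, 128, 128)
c3 = torch.randn(1, 512, 64, 64)
c4 = torch.randn(1, 1024, 32, 32)
c5 = torch.randn(1, 2048, 16, 16)
p2, p3, p4, p5 = ExtendedFPN()(c2, c3, c4, c5)   # p2 keeps stride-4 resolution
```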

ConvXGB: A new deep learning model for classification problems based on CNN and XGBoost

  • Thongsuwan, Setthanun;Jaiyen, Saichon;Padcharoen, Anantachai;Agarwal, Praveen / Nuclear Engineering and Technology / Vol. 53, No. 2 / pp.522-531 / 2021
  • We describe a new deep learning model, Convolutional eXtreme Gradient Boosting (ConvXGB), for classification problems based on convolutional neural nets and Chen et al.'s XGBoost. In addition to image data, ConvXGB also supports general classification problems via a data preprocessing module. ConvXGB consists of several stacked convolutional layers that learn features of the input automatically, followed by XGBoost in the last layer to predict the class labels. The ConvXGB model is simplified by reducing the number of parameters under appropriate conditions, since it is not necessary to re-adjust the weight values in a backpropagation cycle. Experiments on several data sets from the UCL Repository, including image and general data sets, showed that our model handled the classification problems, for all the tested data sets, slightly better than CNN and XGBoost alone, and was sometimes significantly better.
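A minimal sketch of the ConvXGB idea, assuming a PyTorch feature extractor and the xgboost scikit-learn wrapper: stacked convolutional layers produce feature vectors, and XGBoost replaces the usual softmax output layer for class prediction, so no backpropagation through the extractor is performed here. The tiny CNN, feature dimension, and synthetic data are illustrative.

```python
import numpy as np
import torch
import torch.nn as nn
from xgboost import XGBClassifier

class ConvFeatureExtractor(nn.Module):
    """Stacked convolutional layers used only for feature learning;
    class prediction is delegated to XGBoost in the final stage."""
    def __init__(self, in_ch=1, feat_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.proj = nn.Linear(32 * 4 * 4, feat_dim)

    def forward(self, x):
        return self.proj(self.features(x).flatten(1))

# Synthetic example: 28x28 single-channel images with 3 classes
images = torch.randn(200, 1, 28, 28)
labels = np.random.randint(0, 3, size=200)

extractor = ConvFeatureExtractor()
with torch.no_grad():                     # no backpropagation through the extractor here
    feats = extractor(images).numpy()

clf = XGBClassifier(n_estimators=100, max_depth=4, eval_metric="mlogloss")
clf.fit(feats, labels)                    # XGBoost stands in for the softmax output layer
preds = clf.predict(feats)
```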

Learning Deep Representation by Increasing ConvNets Depth for Few Shot Learning

  • Fabian, H.S. Tan;Kang, Dae-Ki / International journal of advanced smart convergence / Vol. 8, No. 4 / pp.75-81 / 2019
  • Although recent advances in deep learning methods have provided satisfactory results in large-data domains, they yield poor performance on few-shot classification tasks. Training a high-performing model such as a deep convolutional neural network depends heavily on huge datasets, and the number of labeled classes in such datasets can be enormous. The cost of human annotation and the scarcity of data within classes have drastically limited the capability of current image classification models. Humans, on the contrary, are excellent at learning or recognizing new, unseen classes from merely a small set of labeled examples. Few-shot learning aims to train a classification model with limited labeled samples to recognize new classes that were never seen during training. In this paper, we increase the backbone depth of the embedding network in order to learn the variation within each class. By increasing the network depth of the embedding module, we achieve competitive performance due to minimized intra-class variation.
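The abstract only states that the embedding backbone is made deeper; the episode classifier below is an illustrative prototypical-network-style setup (nearest class prototype in the embedding space) and is not necessarily the paper's exact method. The depth parameter is the knob the abstract discusses; image sizes and episode shape are assumptions.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.BatchNorm2d(out_ch), nn.ReLU(), nn.MaxPool2d(2))

class EmbeddingNet(nn.Module):
    """Embedding module whose depth can be increased to better capture
    intra-class variation (the knob discussed in the abstract)."""
    def __init__(self, depth=6, width=64):
        super().__init__()
        blocks = [conv_block(3, width)]
        blocks += [conv_block(width, width) for _ in range(depth - 1)]
        self.encoder = nn.Sequential(*blocks, nn.AdaptiveAvgPool2d(1))

    def forward(self, x):
        return self.encoder(x).flatten(1)

# One 5-way 1-shot episode with 15 query images (illustrative sizes)
embed = EmbeddingNet(depth=6)
support = torch.randn(5, 3, 84, 84)     # one labeled image per class
queries = torch.randn(15, 3, 84, 84)

prototypes = embed(support)             # (5, D): one prototype per class in 1-shot
q = embed(queries)                      # (15, D)
dists = torch.cdist(q, prototypes)      # Euclidean distance to each prototype
pred = dists.argmin(dim=1)              # nearest-prototype classification
```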

딥러닝을 이용한 소규모 지역의 영상분류 적용성 분석 : UAV 영상을 이용한 농경지를 대상으로 (Applicability of Image Classification Using Deep Learning in Small Area : Case of Agricultural Lands Using UAV Image)

  • 최석근;이승기;강연빈;성선경;최도연;김광호 / 한국측량학회지 / Vol. 38, No. 1 / pp.23-33 / 2020
  • Recently, as UAVs (Unmanned Aerial Vehicles) have made it convenient to acquire high-resolution imagery, observation and spatial-information production for small areas have become possible at low cost. In particular, research on generating land-cover maps of crop-production areas for agricultural-environment monitoring has been actively conducted, and comparisons of classification performance using random forests, SVM (Support Vector Machine), and CNN (Convolutional Neural Network) have shown that deep learning is highly useful for image classification. Land-cover classification using satellite imagery, in particular, has advantages in accuracy and processing time because it can draw on satellite image datasets and previously derived parameters. However, UAV images differ from satellite images in characteristics such as spatial resolution, making these difficult to apply directly. To address this problem, this study applied deep learning algorithms to a UAV-based dataset, rather than a satellite image dataset, targeting domestic agricultural land with small, mixed land-cover patterns. In this study, state-of-the-art semantic image-segmentation networks, DeepLab V3+, FC-DenseNet (Fully Convolutional DenseNets), and FRRN-B (Full-Resolution Residual Networks), were applied to the UAV dataset to perform image classification. The classification results for DeepLab V3+ and FC-DenseNet achieved an overall accuracy of 97% and a Kappa coefficient of 0.92, higher than conventional supervised classification, demonstrating the applicability of land-cover classification using UAV imagery of small areas.
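The two reported metrics, overall accuracy and the Kappa coefficient, can be computed from a confusion matrix of predicted versus reference label maps, as in the minimal NumPy sketch below; the label maps and class count are placeholders.

```python
import numpy as np

def overall_accuracy_and_kappa(reference, predicted, n_classes):
    """Compute overall accuracy and Cohen's kappa from two label maps."""
    ref = np.asarray(reference).ravel()
    pred = np.asarray(predicted).ravel()
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (ref, pred), 1)                                # confusion matrix

    total = cm.sum()
    p_o = np.trace(cm) / total                                   # observed agreement (overall accuracy)
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2     # chance agreement
    kappa = (p_o - p_e) / (1 - p_e)
    return p_o, kappa

# Illustrative 100x100 label maps with 4 land-cover classes
ref = np.random.randint(0, 4, size=(100, 100))
pred = ref.copy()
flip = np.random.rand(100, 100) < 0.03                           # perturb ~3% of pixels
pred[flip] = np.random.randint(0, 4, size=flip.sum())
oa, kappa = overall_accuracy_and_kappa(ref, pred, n_classes=4)
```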

A Review on Advanced Methodologies to Identify the Breast Cancer Classification using the Deep Learning Techniques

  • Bandaru, Satish Babu;Babu, G. Rama Mohan / International Journal of Computer Science & Network Security / Vol. 22, No. 4 / pp.420-426 / 2022
  • Breast cancer is among the cancers that can be cured when the disease is diagnosed early, before it spreads throughout the body. Automatic Analysis of Diagnostic Tests (AAT) is an automated assistance for physicians that can deliver reliable findings for analyzing critically dangerous diseases. Deep learning, a family of machine learning methods, has grown at an astonishing pace in recent years and is used to search for and render diagnoses in fields ranging from banking to medicine. We attempt to create a deep learning algorithm that can reliably diagnose breast cancer in mammograms. We want the algorithm to identify whether an image shows cancer or not, allowing the use of a full testing dataset with either strong clinical annotations in the training data or the cancer status only, in which a few cancer or non-cancer images are annotated. Even with this technique, the photographs are annotated with the condition, and an optional portion of the annotated image then acts as the mark; the final stage of the suggested system does not need any labels to be accessible during model training. Furthermore, the results of the review suggest that deep learning approaches have surpassed the state of the art in tumor identification, feature extraction, and classification. The paper explains three ways in which learning algorithms were applied: training the network from scratch, transferring certain deep learning concepts and constraints into a network, and reducing the number of parameters in the trained nets, which help expand the scope of the networks. Researchers in economically developing countries have applied deep learning imaging devices to cancer detection, while cancer rates in Africa have risen sharply. A Convolutional Neural Network (CNN) is a kind of deep learning that can also aid with a variety of other tasks, such as speech recognition, image recognition, and classification. To accomplish this goal, in this article we use a CNN to categorize and identify breast cancer photographs from databases available from the US Centers for Disease Control and Prevention.

Automated detection of corrosion in used nuclear fuel dry storage canisters using residual neural networks

  • Papamarkou, Theodore;Guy, Hayley;Kroencke, Bryce;Miller, Jordan;Robinette, Preston;Schultz, Daniel;Hinkle, Jacob;Pullum, Laura;Schuman, Catherine;Renshaw, Jeremy;Chatzidakis, Stylianos / Nuclear Engineering and Technology / Vol. 53, No. 2 / pp.657-665 / 2021
  • Nondestructive evaluation methods play an important role in ensuring component integrity and safety in many industries. Operator fatigue can play a critical role in the reliability of such methods. This is important for inspecting high-value assets or assets with a high consequence of failure, such as aerospace and nuclear components. Recent advances in convolutional neural networks can support and automate these inspection efforts. This paper proposes using residual neural networks (ResNets) for real-time detection of corrosion, including iron oxide discoloration, pitting, and stress corrosion cracking, in dry storage stainless steel canisters housing used nuclear fuel. The proposed approach crops nuclear canister images into smaller tiles, trains a ResNet on these tiles, and classifies images as corroded or intact using the per-image count of tiles predicted as corroded by the ResNet. The results demonstrate that such a deep learning approach makes it possible to detect the locus of corrosion via the smaller tiles while inferring with high accuracy whether an image comes from a corroded canister. The proposed approach therefore holds promise to automate and speed up nuclear fuel canister inspections, to minimize inspection costs, and to partially replace human-conducted onsite inspections, thus reducing radiation doses to personnel.
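A minimal PyTorch sketch of the tile-and-count pipeline described above: the image is cropped into tiles, each tile is classified by a ResNet, and the image is flagged as corroded when enough tiles are predicted corroded. The ResNet-18 stand-in, tile size, and count threshold are assumptions, not the paper's trained model or settings.

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNet tile classifier with 2 outputs (intact vs. corroded); stands in for the paper's ResNet
tile_net = models.resnet18(weights=None)
tile_net.fc = nn.Linear(tile_net.fc.in_features, 2)
tile_net.eval()

def classify_canister_image(image, tile_size=224, count_threshold=3):
    """Crop the image into tiles, classify each tile, and call the image
    corroded if enough tiles are predicted corroded."""
    _, h, w = image.shape
    tiles, positions = [], []
    for top in range(0, h - tile_size + 1, tile_size):
        for left in range(0, w - tile_size + 1, tile_size):
            tiles.append(image[:, top:top + tile_size, left:left + tile_size])
            positions.append((top, left))
    batch = torch.stack(tiles)
    with torch.no_grad():
        pred = tile_net(batch).argmax(dim=1)              # 1 = corroded tile
    corroded_tiles = [pos for pos, p in zip(positions, pred) if p == 1]
    image_is_corroded = len(corroded_tiles) >= count_threshold
    return image_is_corroded, corroded_tiles              # locus of corrosion via tiles

image = torch.rand(3, 896, 1120)                          # placeholder canister image
is_corroded, locations = classify_canister_image(image)
```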

Two-phase flow pattern online monitoring system based on convolutional neural network and transfer learning

  • Hong Xu;Tao Tang / Nuclear Engineering and Technology / Vol. 54, No. 12 / pp.4751-4758 / 2022
  • Two-phase flow exists in almost every branch of the energy industry. For the corresponding engineering design, it is essential to monitor flow patterns and their transitions accurately. With the rapid development and success of deep learning based on convolutional neural networks (CNNs), recent studies of flow pattern identification have focused almost exclusively on this methodology. Additionally, the photographic technique has attractive implementation features, since it is normally considerably less expensive than other techniques. The development of such a two-phase flow pattern online monitoring system, which has seldom been studied before, is the objective of this work. The ongoing preliminary engineering design of the system (including hardware and software) is introduced. The flow pattern identification method based on CNNs and transfer learning is discussed in detail. Several potential CNN candidates, such as AlexNet, VGGNet16, and ResNets, were introduced and compared with each other on a flow pattern dataset. According to the results, ResNet50 is the most promising CNN for the system owing to its high precision, fast classification, and strong robustness. This work can serve as a reference for online monitoring system design in the energy sector.
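A minimal PyTorch/torchvision sketch of ResNet50 transfer learning as discussed above: start from ImageNet weights, freeze the pretrained feature extractor, and train a new classification head for the flow-pattern classes; the class set, freezing strategy, and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_PATTERNS = 4   # e.g. bubbly, slug, churn, annular (illustrative class set)

# Start from ImageNet weights and adapt the head for flow-pattern classes
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False                  # freeze the pretrained feature extractor
model.fc = nn.Linear(model.fc.in_features, NUM_PATTERNS)   # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on placeholder camera frames
frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_PATTERNS, (8,))
logits = model(frames)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```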