• Title/Summary/Keyword: deep neural networks


Deep Adversarial Residual Convolutional Neural Network for Image Generation and Classification

  • Haque, Md Foysal; Kang, Dae-Seong
    • Journal of Advanced Information Technology and Convergence / v.10 no.1 / pp.111-120 / 2020
  • Generative adversarial networks (GANs) have achieved impressive performance on image generation and visual classification applications. However, adversarial networks face difficulties in constructing the generative model and suffer from an unstable training process. To overcome these problems, we combine a deep residual network with upsampling convolutional layers to construct the generative network. Moreover, the study shows that image generation and classification performance become more prominent when residual layers are included in the generator. The proposed network empirically shows that it can generate images with higher visual accuracy at the cost of a certain amount of additional complexity, given proper regularization techniques. Experimental evaluation shows that the proposed method is superior on image generation and classification tasks.
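A minimal sketch of a generator in this spirit, pairing upsampling convolutional layers with residual blocks; the latent dimension, layer widths, and output resolution are illustrative assumptions, not the paper's configuration:

```python
import tensorflow as tf

def residual_block(x, filters):
    # two 3x3 convolutions with an identity skip connection
    y = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = tf.keras.layers.Conv2D(filters, 3, padding="same")(y)
    return tf.keras.layers.ReLU()(tf.keras.layers.Add()([x, y]))

def build_generator(latent_dim=128):
    z = tf.keras.layers.Input(shape=(latent_dim,))
    x = tf.keras.layers.Dense(4 * 4 * 128)(z)
    x = tf.keras.layers.Reshape((4, 4, 128))(x)
    for _ in range(3):                       # 4x4 -> 32x32 over three upsampling stages
        x = tf.keras.layers.UpSampling2D()(x)
        x = tf.keras.layers.Conv2D(128, 3, padding="same", activation="relu")(x)
        x = residual_block(x, 128)           # residual layers inside the generator
    img = tf.keras.layers.Conv2D(3, 3, padding="same", activation="tanh")(x)
    return tf.keras.Model(z, img)

generator = build_generator()
fake = generator(tf.random.normal((4, 128)))   # four generated 32x32 RGB images
```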

Small Marker Detection with Attention Model in Robotic Applications (로봇시스템에서 작은 마커 인식을 하기 위한 사물 감지 어텐션 모델)

  • Kim, Minjae; Moon, Hyungpil
    • The Journal of Korea Robotics Society / v.17 no.4 / pp.425-430 / 2022
  • As robots are considered one of the mainstream drivers of digital transformation, robots with machine vision have become a main area of study, providing the ability to check what a robot sees and to make decisions based on it. However, it is difficult to find a small object in an image, mainly because most visual recognition networks are convolutional neural networks that consider only local features. Therefore, we build a model that considers not only local features but also global features. In this paper, we propose a method for detecting a small marker on an object using deep learning, together with an algorithm that captures global features by combining the Transformer's self-attention technique with a convolutional neural network. We suggest a self-attention model with a new definition of Query, Key, and Value so that the model can learn global features, and a simplified formulation that removes the position vector and the classification token, which make the model heavy and slow. Finally, we show that our model achieves a higher mAP than the state-of-the-art model YOLOR.
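A minimal sketch of self-attention applied to CNN feature-map positions with no positional encoding and no classification token, in the spirit described above; the single-head layout and dimensions are illustrative assumptions:

```python
import tensorflow as tf

class GlobalSelfAttention(tf.keras.layers.Layer):
    """Single-head self-attention over feature-map positions (no position vector, no class token)."""
    def __init__(self, dim):
        super().__init__()
        self.q = tf.keras.layers.Dense(dim)
        self.k = tf.keras.layers.Dense(dim)
        self.v = tf.keras.layers.Dense(dim)
        self.scale = dim ** -0.5

    def call(self, feat):                            # feat: (batch, H, W, C) CNN features
        h, w, c = feat.shape[1], feat.shape[2], feat.shape[3]
        tokens = tf.reshape(feat, (-1, h * w, c))    # every spatial location becomes a token
        q, k, v = self.q(tokens), self.k(tokens), self.v(tokens)
        attn = tf.nn.softmax(tf.matmul(q, k, transpose_b=True) * self.scale, axis=-1)
        out = tf.matmul(attn, v)                     # each location mixes in global context
        return tf.reshape(out, (-1, h, w, out.shape[-1]))

feat = tf.random.normal((2, 16, 16, 64))             # e.g. a backbone feature map
globally_mixed = GlobalSelfAttention(64)(feat)
```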

Performance Analysis of DNN inference using OpenCV Built in CPU and GPU Functions (OpenCV 내장 CPU 및 GPU 함수를 이용한 DNN 추론 시간 복잡도 분석)

  • Park, Chun-Su
    • Journal of the Semiconductor & Display Technology / v.21 no.1 / pp.75-78 / 2022
  • Deep neural networks (DNNs) have become an essential data processing architecture for the implementation of multiple computer vision tasks. Recently, DNN-based algorithms have achieved much higher recognition accuracy than traditional algorithms based on shallow learning. However, training DNNs and running inference on them demand far greater computational capability than everyday computer use. Moreover, as DNNs grow in size and depth, CPUs may be unsatisfactory since they process data serially by default. GPUs address this by offering greater speed than CPUs thanks to their parallel processing nature. In this paper, we analyze the inference time complexity of DNNs using the well-known computer vision library OpenCV. We measure and analyze the inference time complexity for three cases: CPU, GPU-Float32, and GPU-Float16.
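A minimal timing sketch along these lines with OpenCV's dnn module; the model file and input size are placeholders, and the CUDA targets require an OpenCV build with CUDA support:

```python
import time
import cv2
import numpy as np

net = cv2.dnn.readNetFromONNX("model.onnx")           # placeholder model file
img = np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8)
blob = cv2.dnn.blobFromImage(img, scalefactor=1 / 255.0, size=(224, 224))

configs = {
    "CPU": (cv2.dnn.DNN_BACKEND_OPENCV, cv2.dnn.DNN_TARGET_CPU),
    "GPU-Float32": (cv2.dnn.DNN_BACKEND_CUDA, cv2.dnn.DNN_TARGET_CUDA),
    "GPU-Float16": (cv2.dnn.DNN_BACKEND_CUDA, cv2.dnn.DNN_TARGET_CUDA_FP16),
}

for name, (backend, target) in configs.items():
    net.setPreferableBackend(backend)
    net.setPreferableTarget(target)
    net.setInput(blob)
    net.forward()                                      # warm-up run
    start = time.perf_counter()
    for _ in range(100):
        net.setInput(blob)
        net.forward()
    print(name, (time.perf_counter() - start) / 100, "s per inference")
```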

Wi-Fi RSSI Heat Maps Based Indoor Localization System Using Deep Convolutional Neural Networks

  • Poulose, Alwin; Han, Dong Seog
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.07a / pp.717-720 / 2020
  • An indoor localization system that uses Wi-Fi RSSI signals for localization gives accurate user position results. The conventional Wi-Fi RSSI-based localization system uses raw RSSI signals from access points (APs) to estimate the user position. However, the RSSI values at a particular location are usually not stable because of signal propagation effects in indoor environments. To reduce the RSSI signal fluctuations, shadow fading, multipath effects, and the blockage of Wi-Fi RSSI signals, we propose a Wi-Fi localization system that utilizes the advantages of Wi-Fi RSSI heat maps. The proposed localization system uses a regression model with deep convolutional neural networks (DCNNs) and gives accurate user position results for indoor localization. The experimental results demonstrate the superior performance of the proposed system for indoor localization.
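A minimal sketch of a DCNN regression model of this kind, mapping an RSSI heat map to a 2-D position; the heat-map size, layer widths, and the two-coordinate output are illustrative assumptions:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 1)),                        # one Wi-Fi RSSI heat map
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(2),                                        # regressed (x, y) user position
])
model.compile(optimizer="adam", loss="mse")                          # regression on position labels
```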

Refinement of Ground Truth Data for X-ray Coronary Artery Angiography (CAG) using Active Contour Model

  • Dongjin Han; Youngjoon Park
    • International journal of advanced smart convergence / v.12 no.4 / pp.134-141 / 2023
  • We present a novel method aimed at refining ground truth data through regularization and modification, particularly applicable when working with an original ground truth set. The performance of deep neural networks is enhanced by applying regularization techniques to the existing ground truth data. In many machine learning tasks requiring pixel-level segmentation sets, accurately delineating objects is vital. However, this proves challenging for thin and elongated objects such as blood vessels in X-ray coronary angiography, often resulting in inconsistently generated ground truth data. The method analyzes the quality of training set pairs (images and their ground truth data) to automatically regularize and modify the boundaries of the ground truth segmentation. Employing the active contour model and a recursive ground truth generation approach yields stable and precisely defined boundary contours. Following the regularization and adjustment of the ground truth set, there is a substantial improvement in the performance of deep neural networks.
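A minimal sketch of one boundary-refinement step with an active contour (snake); the smoothing and snake parameters are illustrative and would need tuning for angiographic images:

```python
import numpy as np
from skimage import filters, segmentation

def refine_boundary(image, init_contour):
    """Pull an initial ground-truth contour (N x 2 array of row/col points) onto nearby edges."""
    smoothed = filters.gaussian(image, sigma=2)
    return segmentation.active_contour(smoothed, init_contour,
                                       alpha=0.015, beta=10, gamma=0.001)

# Recursive refinement as a loop: feed each refined contour back in as the next initialization.
def refine_recursively(image, contour, rounds=3):
    for _ in range(rounds):
        contour = refine_boundary(image, contour)
    return contour
```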

Development of Fire Detection System using YOLOv8 (YOLOv8을 이용한 화재 검출 시스템 개발)

  • Chae Eun Lee; Chun-Su Park
    • Journal of the Semiconductor & Display Technology / v.23 no.1 / pp.19-24 / 2024
  • A single fire can cause enormous damage, so fires are among the disaster situations that must be alerted as soon as possible. Various technologies have been employed because fire prevention and detection can never be fully accomplished by individual human effort alone. Recently, deep learning technology has advanced, and fire detection systems using object detection neural networks are being actively studied. In this paper, we propose a new fire detection system that improves upon the previously studied fire detection system. We train the YOLOv8 model using datasets refined through an improved labeling method, derive the results, and demonstrate the superiority of the proposed system by comparing it with the results of previous studies.
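A minimal training-and-inference sketch with the ultralytics YOLOv8 API; the dataset YAML, model size, and hyperparameters are placeholders rather than the settings used in the paper:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                             # pretrained nano model as a starting point
model.train(data="fire.yaml", epochs=50, imgsz=640)    # fire.yaml points at the relabeled dataset
results = model("test_frame.jpg")                      # run detection on a single frame
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)                 # class id, confidence, bounding box
```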

Generation of optical fringe patterns using deep learning (딥러닝을 이용한 광학적 프린지 패턴의 생성)

  • Kang, Ji-Won; Kim, Dong-Wook; Seo, Young-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.12 / pp.1588-1594 / 2020
  • In this paper, we discuss a data balancing method for training a neural network that generates digital holograms using a deep neural network (DNN). The deep neural networks are based on deep learning (DL) technology and use a generative adversarial network (GAN)-family architecture. The fringe pattern, which is the basic unit of the hologram to be created by the deep neural network, has very different characteristics depending on the hologram plane and the position of the object. However, because the criteria for classifying the data are not clear, an imbalance in the training data may occur, and imbalanced training data acts as a destabilizing factor during training. Therefore, this paper presents a method for classifying and balancing data whose classification criteria are not clear, and shows that training is stabilized as a result.
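A minimal sketch of one way to balance data whose class boundaries are unclear: cluster the fringe patches first, then oversample every cluster to the same size. The clustering choice and cluster count are assumptions, not the paper's procedure:

```python
import numpy as np
from sklearn.cluster import KMeans

def balance_by_clustering(patches, n_clusters=8, seed=0):
    """Cluster fringe patches without explicit labels, then oversample clusters to equal size."""
    flat = patches.reshape(len(patches), -1)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(flat)
    target = np.bincount(labels).max()
    rng = np.random.default_rng(seed)
    idx = np.concatenate([
        rng.choice(np.flatnonzero(labels == c), size=target, replace=True)
        for c in range(n_clusters)
    ])
    rng.shuffle(idx)
    return patches[idx]

balanced = balance_by_clustering(np.random.rand(500, 64, 64))   # toy stand-in for fringe patches
```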

Multi channel far field speaker verification using teacher student deep neural networks (교사 학생 심층신경망을 활용한 다채널 원거리 화자 인증)

  • Jung, Jee-weon; Heo, Hee-Soo; Shim, Hye-jin; Yu, Ha-Jin
    • The Journal of the Acoustical Society of Korea / v.37 no.6 / pp.483-488 / 2018
  • Far-field input utterances are one of the major causes of performance degradation in speaker verification systems. In this study, we used a teacher-student learning framework to compensate for the degradation caused by far-field utterances. Teacher-student learning refers to training a student deep neural network under a condition prone to performance degradation using a teacher deep neural network trained without that condition. Here, we use a teacher network trained on near-field utterances to train a student network on far-field utterances. However, experiments showed that performance on near-field utterances deteriorated. To avoid this, we propose using the trained teacher network to initialize the student network and training the student network on both near- and far-field utterances. Experiments were conducted using deep neural networks that take as input raw waveforms of 4-channel utterances recorded at both near and far distances. Results show equal error rates on near- / far-field utterances of 2.55 % / 2.8 % without teacher-student learning, 9.75 % / 1.8 % with conventional teacher-student learning, and 2.5 % / 2.7 % with the proposed techniques.
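A minimal sketch of the proposed variant: the student is initialized from the teacher and then trained to match the teacher's near-field embeddings from both near- and far-field inputs. The network shape, loss, and waveform length are illustrative assumptions:

```python
import tensorflow as tf

def make_encoder():
    # stand-in raw-waveform embedding network over 4-channel audio
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(16000, 4)),
        tf.keras.layers.Conv1D(32, 251, strides=16, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(128),
    ])

teacher = make_encoder()                               # assumed pretrained on near-field utterances
student = make_encoder()
student.set_weights(teacher.get_weights())             # proposed: initialize student from teacher
opt = tf.keras.optimizers.Adam(1e-4)

@tf.function
def distill_step(near_batch, far_batch):
    target = tf.stop_gradient(teacher(near_batch, training=False))
    with tf.GradientTape() as tape:
        # train on both near- and far-field inputs so near-field performance is preserved
        loss = (tf.reduce_mean(tf.square(student(near_batch, training=True) - target)) +
                tf.reduce_mean(tf.square(student(far_batch, training=True) - target)))
    opt.apply_gradients(zip(tape.gradient(loss, student.trainable_variables),
                            student.trainable_variables))
    return loss
```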

A Deep Learning Performance Comparison of R and Tensorflow (R과 텐서플로우 딥러닝 성능 비교)

  • Sung-Bong Jang
    • The Journal of the Convergence on Culture Technology / v.9 no.4 / pp.487-494 / 2023
  • In this study, a performance comparison was carried out between R and TensorFlow, which are free deep learning tools. In the experiment, six types of deep neural networks were built with each tool, and the networks were trained on a 10-year Korean temperature dataset. The input layer of the constructed networks had 10 nodes, the output layer had 5 nodes, and the hidden layers were set to 5, 10, and 20 nodes for the experiments. The dataset comprises 3600 temperature readings collected in Gangnam-gu, Seoul from March 1, 2013 to March 29, 2023. For the performance comparison, the trained networks predicted the temperature 5 days ahead, and the root mean square error (RMSE) between the predicted and actual values was measured. The results show that with one hidden layer the learning error was 0.04731176 for R and 0.06677193 for TensorFlow, and with two hidden layers it was 0.04782134 for R and 0.05799060 for TensorFlow. Overall, R showed better performance. By providing quantitative performance information on the two tools, we aim to ease tool selection for users who are new to machine learning.
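A minimal TensorFlow/Keras sketch of the network shape described (10 input nodes, 5 output nodes, configurable hidden layers); the windowing of the temperature series and the training settings are illustrative assumptions:

```python
import numpy as np
import tensorflow as tf

def build_model(hidden_sizes):
    model = tf.keras.Sequential([tf.keras.layers.Input(shape=(10,))])   # 10 past daily temperatures
    for units in hidden_sizes:
        model.add(tf.keras.layers.Dense(units, activation="relu"))
    model.add(tf.keras.layers.Dense(5))                                  # next 5 days
    model.compile(optimizer="adam", loss="mse")
    return model

series = np.random.rand(3600).astype("float32")         # toy stand-in for the temperature series
x = np.stack([series[i:i + 10] for i in range(len(series) - 15)])
y = np.stack([series[i + 10:i + 15] for i in range(len(series) - 15)])

model = build_model([20])                                # one hidden layer with 20 nodes
model.fit(x, y, epochs=10, verbose=0)
pred = model.predict(x[-1:])
rmse = float(np.sqrt(np.mean((pred - y[-1:]) ** 2)))     # RMSE against the held-back target
```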

Graph Convolutional-Network Architecture Search: Network Architecture Search Using Graph Convolutional Neural Networks (그래프 합성곱-신경망 구조 탐색: 그래프 합성곱 신경망을 이용한 신경망 구조 탐색)

  • Su-Youn Choi; Jong-Youel Park
    • The Journal of the Convergence on Culture Technology / v.9 no.1 / pp.649-654 / 2023
  • This paper proposes the design of a neural architecture search model using graph convolutional neural networks. Because deep learning trains as a black box, it is difficult to verify whether a designed model has a structure with optimized performance. A neural architecture search model consists of a recurrent neural network that creates a model and the convolutional neural network that is generated. Conventional neural architecture search models use recurrent neural networks, but in this paper we propose GC-NAS, which uses graph convolutional neural networks instead of recurrent neural networks to create convolutional neural network models. The proposed GC-NAS uses a Layer Extraction Block to explore depth, and a Hyper Parameter Prediction Block to explore spatial and temporal information (hyperparameters) in parallel based on the depth information. Because the depth information is reflected, the search space is wider, and because the search runs in parallel with the depth information, the purpose of each part of the search space is clear; the model is therefore judged to be superior in theoretical structure to existing neural architecture search models. GC-NAS is expected to solve the problems of the high-dimensional time axis and the limited spatial search range of recurrent neural networks in existing neural architecture search models through its graph convolutional neural network block and graph generation algorithm. In addition, we hope that the GC-NAS proposed in this paper will serve as an opportunity for active research on applying graph convolutional neural networks to neural architecture search.
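A minimal numpy sketch of the graph-convolution step such a controller builds on, H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W); the example architecture graph and dimensions are illustrative, not GC-NAS itself:

```python
import numpy as np

def graph_conv(adjacency, features, weights):
    """One graph-convolution layer: normalize the adjacency with self-loops, then mix and project."""
    a_hat = adjacency + np.eye(len(adjacency))                  # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))      # symmetric degree normalization
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
    return np.maximum(a_norm @ features @ weights, 0.0)         # ReLU activation

# Toy candidate architecture: 4 layers as a chain graph, 8-d node features, 16-d hidden size.
adjacency = np.array([[0, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
features = np.random.rand(4, 8)
weights = np.random.rand(8, 16)
hidden = graph_conv(adjacency, features, weights)               # per-node architecture embedding
```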