• Title/Summary/Keyword: Deep neural networks

Study on Image Compression Algorithm with Deep Learning (딥 러닝 기반의 이미지 압축 알고리즘에 관한 연구)

  • Lee, Yong-Hwan
    • Journal of the Semiconductor & Display Technology / v.21 no.4 / pp.156-162 / 2022
  • Image compression plays an important role in encoding and improving various forms of images in the digital era. Recent research has focused on deep learning as one of the most promising machine learning approaches, showing that it is a good scheme for analyzing, classifying and compressing images. Various neural networks can be adapted for image compression, including deep neural networks, artificial neural networks, recurrent neural networks and convolutional neural networks. In this review paper, we discuss how to apply deep learning to obtain better image compression with high accuracy, low loss and high visual quality of the image. Achieving such performance requires that deep learning methods be applied in a justified manner with careful analysis.
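
As a rough illustration of the learned-compression pipeline this review surveys, the sketch below shows an autoencoder-style encoder/decoder in PyTorch; the layer widths, the 4x spatial downsampling, and the MSE distortion term are assumptions for illustration, not details from the reviewed papers.

```python
# Minimal convolutional autoencoder for lossy image compression (illustrative
# sketch only; layer widths and the 4x downsampling are assumptions).
import torch
import torch.nn as nn

class CompressionAutoencoder(nn.Module):
    def __init__(self, bottleneck_channels=8):
        super().__init__()
        # Encoder: downsample the image into a small latent "code".
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1),   # H/2
            nn.ReLU(),
            nn.Conv2d(32, bottleneck_channels, kernel_size=4, stride=2, padding=1),  # H/4
        )
        # Decoder: reconstruct the image from the latent code.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(bottleneck_channels, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        code = self.encoder(x)             # compressed representation
        return self.decoder(code), code

model = CompressionAutoencoder()
x = torch.rand(1, 3, 64, 64)               # dummy RGB image
reconstruction, code = model(x)
distortion = nn.functional.mse_loss(reconstruction, x)  # distortion term of a rate-distortion trade-off
```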

A Study on Neural Networks Forecast Model of Deep Excavation Wall Movements (인공신경망 기법을 활용한 굴착공사 흙막이 변위량 예측에 관한 연구)

  • Shin, Han-Woo;Kim, Gwang-Hee;Kim, Young-Seok
    • Journal of the Korea Institute of Building Construction / v.7 no.3 / pp.131-137 / 2007
  • Predicting deep excavation wall movements is important in urban areas, considering the cost and safety of construction. Failing to estimate deep excavation wall movements in advance causes many problems in such projects. The purpose of this study is to propose a forecast model for deep excavation wall movements using artificial neural networks. Wall-movement data taken from Long's research were used to train the artificial neural network, and measured data from a real construction project were then applied to the trained model. Applying artificial neural networks to forecast deep excavation wall movements can significantly contribute to identifying and preventing accidents in the overall construction work.
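
For readers unfamiliar with the approach, the following is a minimal sketch of an artificial-neural-network regression model of the kind the study describes, written in PyTorch; the four input features and the network size are hypothetical placeholders, not the variables used in the paper.

```python
# Illustrative feed-forward network for regressing wall displacement from
# excavation parameters; the input features and network size are hypothetical.
import torch
import torch.nn as nn

features = ["excavation_depth", "wall_stiffness", "soil_modulus", "support_spacing"]

model = nn.Sequential(
    nn.Linear(len(features), 16),
    nn.Tanh(),
    nn.Linear(16, 1),                    # predicted wall movement
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(32, len(features))        # dummy training batch
y = torch.rand(32, 1)                    # dummy measured movements

for _ in range(100):                     # training loop on measured field data
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```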

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.127-142 / 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks and recurrent neural networks. These have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised learning models have gained more popularity than unsupervised learning models such as deep belief networks, because supervised learning models have shown successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. Using this architecture makes convolutional networks fast to train. This, in turn, helps us train deep, multi-layer networks, which are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks use three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that we use the same weights and bias for each of the local receptive fields. This means that all the neurons in the hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers. Pooling layers are usually used immediately after convolutional layers. What the pooling layers do is simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks used to take weeks a few years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network where connections between units form a directed cycle. This creates an internal state of the network, which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, i.e. vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through layers, which makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients aren't just propagated backward through layers, they're propagated backward through time. If the network runs for a long time, that can make the gradient extremely unstable and hard to learn from. It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
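
Since the abstract walks through local receptive fields, shared weights, pooling, and training by backpropagation with gradient descent, a compact PyTorch sketch of those ideas may help; the layer sizes, the 28x28 input, and the 10-class output are illustrative assumptions, not a model from the paper.

```python
# Small convolutional classifier illustrating the three ideas described above:
# local receptive fields and shared weights (nn.Conv2d) and pooling
# (nn.MaxPool2d), trained with backpropagation plus gradient descent.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=5),      # 5x5 local receptive fields, weights shared across positions
    nn.ReLU(),
    nn.MaxPool2d(2),                     # pooling layer simplifies the convolutional output
    nn.Flatten(),
    nn.Linear(8 * 12 * 12, 10),          # 10-way classification head
)

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)   # gradient descent
loss_fn = nn.CrossEntropyLoss()

images = torch.rand(16, 1, 28, 28)       # dummy grayscale images
labels = torch.randint(0, 10, (16,))

loss = loss_fn(model(images), labels)
loss.backward()                          # backward propagation of errors (gradients)
optimizer.step()                         # weight update from the gradients

# RNN with LSTM units, as discussed above (input and hidden sizes are arbitrary).
rnn = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
outputs, (h, c) = rnn(torch.rand(4, 20, 32))   # 4 sequences of length 20
```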

Melanoma Classification Using Log-Gabor Filter and Ensemble of Deep Convolution Neural Networks

  • Long, Hoang;Lee, Suk-Hwan;Kwon, Seong-Geun;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.25 no.8 / pp.1203-1211 / 2022
  • Melanoma is a skin cancer that starts in pigment-producing cells (melanocytes). The death rate of skin cancers such as melanoma can be reduced by early detection and diagnosis. It is common for doctors to spend a lot of time trying to distinguish between skin lesions and healthy cells because of their striking similarities. The detection of melanoma lesions can be made easier for doctors with the help of an automated classification system that uses deep learning. This study presents a new approach for melanoma classification based on an ensemble of deep convolutional neural networks and a Log-Gabor filter. First, we create the Log-Gabor representation of the original image. Then, we input the Log-Gabor representation into a new ensemble of deep convolutional neural networks. We evaluated the proposed method on the melanoma dataset collected at Yonsei University and Dongsan Clinic. Based on our numerical results, the proposed framework achieves higher accuracy than other approaches.
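
The sketch below shows one common way to build a radial Log-Gabor filter in the frequency domain, the kind of representation the paper feeds to its CNN ensemble; the centre frequency, bandwidth ratio, and input size are assumed values, and the ensemble itself is not reproduced here.

```python
# Radial Log-Gabor filtering in the frequency domain (illustrative parameters).
import numpy as np

def log_gabor_filter(image, f0=0.1, sigma_ratio=0.55):
    rows, cols = image.shape
    # Normalized frequency radius of each FFT bin.
    u = np.fft.fftfreq(cols)
    v = np.fft.fftfreq(rows)
    radius = np.sqrt(u[None, :] ** 2 + v[:, None] ** 2)
    radius[0, 0] = 1.0                       # avoid log(0) at the DC term
    # Radial Log-Gabor transfer function.
    G = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    G[0, 0] = 0.0                            # Log-Gabor has no DC component
    return np.real(np.fft.ifft2(np.fft.fft2(image) * G))

lesion = np.random.rand(224, 224)             # placeholder grayscale dermoscopy image
log_gabor_response = log_gabor_filter(lesion) # representation that would feed the CNN ensemble
```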

Genetic algorithm based deep learning neural network structure and hyperparameter optimization (유전 알고리즘 기반의 심층 학습 신경망 구조와 초모수 최적화)

  • Lee, Sanghyeop;Kang, Do-Young;Park, Jangsik
    • Journal of Korea Multimedia Society / v.24 no.4 / pp.519-527 / 2021
  • Alzheimer's disease is one of the challenges to tackle in the coming aging era, and attempts are being made to diagnose and predict it through various biomarkers. While the application of deep learning-based technologies as powerful imaging tools has recently expanded across the medical industry, empirical design is not easy because there are various deep learning neural network architectures and categorical hyperparameters that depend on the problem and data to be solved. In this paper, we show the possibility of optimizing the deep learning neural network structure and hyperparameters for Alzheimer's disease classification in amyloid brain images, within a representative deep learning neural network architecture, using genetic algorithms. It was observed that the optimal deep learning neural network structure and hyperparameters were chosen as the experimental values converged.
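
A toy version of the genetic-algorithm search loop described above is sketched below; the hyperparameter space and the fitness function (which in the paper would train and validate a network on amyloid brain images) are placeholders.

```python
# Toy genetic algorithm over a categorical hyperparameter space:
# selection, crossover, and mutation over candidate network configurations.
import random

SEARCH_SPACE = {
    "n_layers":      [2, 3, 4, 5],
    "n_filters":     [16, 32, 64],
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "activation":    ["relu", "elu"],
}

def random_individual():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def fitness(ind):
    # Placeholder: in practice this would be the validation accuracy of a
    # network built and trained with these hyperparameters.
    return random.random()

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(ind, rate=0.1):
    return {k: (random.choice(SEARCH_SPACE[k]) if random.random() < rate else v)
            for k, v in ind.items()}

population = [random_individual() for _ in range(20)]
for generation in range(10):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:10]                                   # selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]                         # crossover + mutation
    population = parents + children

best = max(population, key=fitness)
```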

Analyzing DNN Model Performance Depending on Backbone Network (백본 네트워크에 따른 사람 속성 검출 모델의 성능 변화 분석)

  • Chun-Su Park
    • Journal of the Semiconductor & Display Technology / v.22 no.2 / pp.128-132 / 2023
  • Recently, with the development of deep learning technology, research on pedestrian attribute recognition using deep neural networks has been actively conducted. Existing pedestrian attribute recognition techniques can be categorized as global-based, local-region-based, visual attention-based, sequential prediction-based, and newly designed loss-function-based approaches, depending on how pedestrian attributes are detected. It is known that the performance of these pedestrian attribute recognition technologies varies greatly depending on the type of backbone network that constitutes the deep neural network model. Therefore, in this paper, several backbone networks are applied to a baseline pedestrian attribute recognition model and the resulting performance changes are analyzed. The analysis is conducted using Resnet34, Resnet50, Resnet101, Swin-tiny, and Swinv2-tiny, which are representative backbone networks used in fields such as image classification and object detection. Furthermore, this paper analyzes the change in time complexity when running inference with each backbone network on a CPU and a GPU.
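
A minimal sketch of the kind of backbone-swapping and CPU/GPU timing experiment described above is shown below, assuming the timm library; the model identifiers, the 26-attribute head, and the timing protocol are assumptions for illustration.

```python
# Swap backbones and time inference on CPU vs. GPU (illustrative sketch).
import time
import timm
import torch

BACKBONES = ["resnet34", "resnet50", "resnet101",
             "swin_tiny_patch4_window7_224", "swinv2_tiny_window8_256"]

def time_inference(model, device, n_runs=20):
    model = model.to(device).eval()
    c, h, w = model.default_cfg["input_size"]     # backbone's expected input resolution
    x = torch.rand(1, c, h, w, device=device)
    with torch.no_grad():
        model(x)                                  # warm-up pass
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(n_runs):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / n_runs

for name in BACKBONES:
    model = timm.create_model(name, pretrained=False, num_classes=26)  # 26 attributes (assumed)
    cpu_t = time_inference(model, "cpu")
    gpu_t = time_inference(model, "cuda") if torch.cuda.is_available() else float("nan")
    print(f"{name:32s}  CPU {cpu_t*1000:7.1f} ms   GPU {gpu_t*1000:7.1f} ms")
```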

Sound Event Detection based on Deep Neural Networks (딥 뉴럴네트워크 기반의 소리 이벤트 검출)

  • Chung, Suk-Hwan;Chung, Yong-Joo
    • The Journal of the Korea institute of electronic communication sciences / v.14 no.2 / pp.389-396 / 2019
  • In this paper, various architectures of deep neural networks were applied to sound event detection and their performances were compared on a common audio database. FNN, CNN, RNN and CRNN models were implemented with hyper-parameters optimized for the database as well as for the architecture of each neural network. Among the implemented deep neural networks, the CRNN performed best under all testing conditions, and the CNN followed the CRNN in performance. Although the RNN has merit in tracking the time correlations in audio signals, it showed poor performance compared with the CNN and CRNN.
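
For reference, a minimal CRNN of the kind compared in the paper is sketched below in PyTorch: convolution over a mel-spectrogram, a recurrent layer over time, and frame-wise sigmoid outputs per event class; the feature size, layer widths, and number of event classes are illustrative.

```python
# Minimal CRNN for frame-wise sound event detection (illustrative sizes).
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, n_mels=64, n_classes=10):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(2, 1)),        # pool over frequency, keep time resolution
        )
        self.gru = nn.GRU(input_size=16 * (n_mels // 2), hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, spec):                         # spec: (batch, 1, n_mels, n_frames)
        h = self.conv(spec)                          # (batch, 16, n_mels/2, n_frames)
        h = h.permute(0, 3, 1, 2).flatten(2)         # (batch, n_frames, 16 * n_mels/2)
        h, _ = self.gru(h)
        return torch.sigmoid(self.head(h))           # per-frame event probabilities

model = CRNN()
spectrogram = torch.rand(2, 1, 64, 100)              # 2 clips, 100 frames each
event_probs = model(spectrogram)                     # (2, 100, 10)
```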

An Approximate DRAM Architecture for Energy-efficient Deep Learning

  • Nguyen, Duy Thanh;Chang, Ik-Joon
    • Journal of Semiconductor Engineering / v.1 no.1 / pp.31-37 / 2020
  • We present an approximate DRAM architecture for energy-efficient deep learning. Our key premise is that by bounding memory errors to non-critical information, we can significantly reduce DRAM refresh energy without compromising the recognition accuracy of deep neural networks. To validate this premise, we performed extensive Monte-Carlo simulations for several well-known convolutional neural networks, namely LeNet, ConvNet and AlexNet, with MNIST, CIFAR-10, and ImageNet as inputs, respectively. We assume that the highest-order 8 bits (in single precision) and 4 bits (in half precision) are protected from retention errors under the proposed architecture, and then randomly inject bit errors into the unprotected bits at various bit-error rates. The recognition accuracies of the above convolutional neural networks are successfully maintained up to bit-error rates on the order of 10^-5. We simulated the DRAM energy during inference of the above convolutional neural networks, where the proposed architecture shows the possibility of considerable energy savings of 10-37.5% of the total DRAM energy.
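
The evaluation idea, protecting the highest-order 8 bits of each single-precision value and injecting random flips into the remaining bits, can be mimicked in NumPy as below; this is a sketch of the error-injection model only, not of the DRAM architecture itself.

```python
# Illustrative retention-error simulation: protect the top 8 bits of each
# 32-bit value and flip the remaining 24 bits at a chosen bit-error rate.
import numpy as np

def inject_bit_errors(weights, bit_error_rate=1e-5, protected_bits=8):
    bits = weights.astype(np.float32).view(np.uint32).copy()
    unprotected = 32 - protected_bits                 # bits 0..23 may flip
    for bit in range(unprotected):
        flip = np.random.rand(*bits.shape) < bit_error_rate
        bits[flip] ^= np.uint32(1 << bit)             # flip this bit where an error occurs
    return bits.view(np.float32)

weights = np.random.randn(1000).astype(np.float32)    # dummy layer weights
noisy = inject_bit_errors(weights, bit_error_rate=1e-5)
print("max absolute perturbation:", np.abs(noisy - weights).max())
```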

Acoustic Event Detection in Multichannel Audio Using Gated Recurrent Neural Networks with High-Resolution Spectral Features

  • Kim, Hyoung-Gook;Kim, Jin Young
    • ETRI Journal / v.39 no.6 / pp.832-840 / 2017
  • Recently, deep recurrent neural networks have achieved great success in various machine learning tasks and have also been applied to sound event detection. The detection of temporally overlapping sound events in realistic environments is much more challenging than monophonic detection problems. In this paper, we present an approach to improve the accuracy of polyphonic sound event detection in multichannel audio based on gated recurrent neural networks in combination with auditory spectral features. In the proposed method, spatial and spectral-domain noise-reduced harmonic features based on human hearing perception are extracted from multichannel audio and used as high-resolution spectral inputs to train gated recurrent neural networks. This provides a fast and stable convergence rate compared to long short-term memory recurrent neural networks. Our evaluation reveals that the proposed method outperforms conventional approaches.
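
A minimal sketch of the recurrent stage described above, a gated recurrent (GRU) network over frame-level multichannel spectral features with sigmoid outputs for overlapping events, is given below; the feature dimensions and the number of classes are assumptions.

```python
# GRU over stacked multichannel spectral features, frame-wise sigmoid outputs.
import torch
import torch.nn as nn

n_channels, n_bins, n_classes = 4, 40, 8
gru = nn.GRU(input_size=n_channels * n_bins, hidden_size=128,
             num_layers=2, batch_first=True)
head = nn.Linear(128, n_classes)

features = torch.rand(2, 200, n_channels * n_bins)    # 2 clips, 200 frames of stacked features
hidden, _ = gru(features)
event_probs = torch.sigmoid(head(hidden))              # (2, 200, n_classes); >0.5 marks an active event
```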

Compressed Ensemble of Deep Convolutional Neural Networks with Global and Local Facial Features for Improved Face Recognition (얼굴인식 성능 향상을 위한 얼굴 전역 및 지역 특징 기반 앙상블 압축 심층합성곱신경망 모델 제안)

  • Yoon, Kyung Shin;Choi, Jae Young
    • Journal of Korea Multimedia Society / v.23 no.8 / pp.1019-1029 / 2020
  • In this paper, we propose a novel knowledge distillation algorithm to create a compressed ensemble deep network coupled with the combined use of local and global features of face images. In order to transfer the high recognition performance of the ensemble deep networks to a single deep network, the class-prediction probability, which is the softmax output of the ensemble network, is used as a soft target for training the single deep network. By applying the knowledge distillation algorithm, the local feature information obtained by training the deep ensemble network with facial subregions of the face image as input is transferred to a single deep network, creating a so-called compressed ensemble DCNN. The experimental results demonstrate that our proposed compressed ensemble deep network can maintain the recognition performance of the complex ensemble deep networks and is superior to that of a single deep network. In addition, our proposed method can significantly reduce the storage (memory) space and execution time compared to conventional ensemble deep networks developed for face recognition.
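
The soft-target training step described above can be sketched as follows; the temperature value, the identity count, and the use of averaged ensemble logits are assumptions for illustration rather than the paper's exact settings.

```python
# Knowledge-distillation loss: the ensemble's softmax output serves as a soft
# target for the single (student) network.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)      # ensemble soft target
    log_student = F.log_softmax(student_logits / temperature, dim=1)
    # KL divergence between teacher and student distributions, scaled by T^2
    # as is conventional for distillation.
    return F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature ** 2

# Dummy logits for a batch of 8 face images over 100 identities (assumed).
teacher_logits = torch.randn(8, 100)                   # e.g. averaged logits of the ensemble
student_logits = torch.randn(8, 100, requires_grad=True)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
```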