• Title/Summary/Keyword: Deep-learning Neural Network


Light weight architecture for acoustic scene classification (음향 장면 분류를 위한 경량화 모형 연구)

  • Lim, Soyoung;Kwak, Il-Youp
    • The Korean Journal of Applied Statistics / v.34 no.6 / pp.979-993 / 2021
  • Acoustic scene classification (ASC) categorizes an audio file by the environment in which it was recorded, and has long been studied in the Detection and Classification of Acoustic Scenes and Events (DCASE) challenges. In this study, we consider the constraint that ASC faces in real-world applications: the deployed model must have low complexity. We compared several models that apply light-weight techniques. First, a baseline CNN model was proposed using log mel-spectrogram, delta, and delta-delta features. Second, depthwise separable convolutions and linear bottleneck inverted residual blocks were applied to the convolutional layers, and quantization was applied to the models to obtain low-complexity variants. The low-complexity models performed similarly to, or slightly worse than, the baseline model, but the model size was reduced substantially, from 503 KB to 42.76 KB.
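
A minimal PyTorch sketch of two of the light-weight techniques named in the abstract, depthwise separable convolution and post-training quantization; the channel sizes, the 10-class head, and the three-channel (log mel + delta + delta-delta) input layout are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv followed by a 1x1 pointwise conv (far fewer parameters than a full conv)."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Toy ASC head: input is (batch, 3, mel_bins, frames) for log-mel + delta + delta-delta channels.
model = nn.Sequential(
    DepthwiseSeparableConv(3, 16),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),   # 10 acoustic scene classes (assumed)
)

# Post-training dynamic quantization of the linear layers shrinks the stored model size.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
```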

Image Filtering Method for an Effective Inverse Tone-mapping (효과적인 역 톤 매핑을 위한 필터링 기법)

  • Kang, Rahoon;Park, Bumjun;Jeong, Jechang
    • Journal of Broadcast Engineering / v.24 no.2 / pp.217-226 / 2019
  • In this paper, we propose a filtering method that improves the results of inverse tone-mapping using a guided image filter. Inverse tone-mapping techniques convert LDR images to HDR, and many recent algorithms convert a single LDR image into an HDR image using a CNN. Among them is an algorithm that restores pixel information with a CNN trained to recover saturated regions; however, it neither suppresses noise in the non-saturated region nor restores detail in the saturated region. The proposed algorithm suppresses noise in the non-saturated region and restores detail in the saturated region by applying a weighted guided image filter (WGIF) to the input image before passing it to the CNN, improving the quality of the final image. When HDR quantitative image quality indices were measured, the proposed algorithm scored higher than existing algorithms.
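
The preprocessing step described above can be approximated with the plain guided filter from opencv-contrib (the paper uses a WGIF, which OpenCV does not provide, so this is only a stand-in); the radius, eps, and file name below are assumptions.

```python
import cv2
import numpy as np

# Requires opencv-contrib-python for the ximgproc module; "input_ldr.png" is a placeholder.
ldr = cv2.imread("input_ldr.png").astype(np.float32) / 255.0

# Edge-preserving smoothing: the base layer holds the denoised structure,
# the detail layer holds fine detail to be preserved or re-injected.
base = cv2.ximgproc.guidedFilter(guide=ldr, src=ldr, radius=8, eps=1e-3)
detail = ldr - base

# The filtered image (base + detail decomposition) would then be fed to the
# inverse tone-mapping CNN described in the abstract.
```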

Crack Detection on the Road in Aerial Image using Mask R-CNN (Mask R-CNN을 이용한 항공 영상에서의 도로 균열 검출)

  • Lee, Min Hye;Nam, Kwang Woo;Lee, Chang Woo
    • Journal of Korea Society of Industrial Information Systems / v.24 no.3 / pp.23-29 / 2019
  • Conventional crack detection methods consume a great deal of labor, time, and cost. To solve these problems, an automatic detection system is needed that can find cracks in images obtained from vehicles or UAVs (unmanned aerial vehicles). In this paper, we study road crack detection using unmanned aerial photographs. The aerial images are preprocessed and labeled to generate a data set describing the morphology of cracks. The data set was then used to train a Mask R-CNN model, producing a new model that has learned various kinds of crack information. Experimental results show that cracks in the aerial images were detected with an accuracy of 73.5%, and some were additionally predicted as a particular type of crack region.
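
A hedged sketch of fine-tuning a COCO-pretrained Mask R-CNN for crack segmentation with torchvision; the two-class (background + crack) setup and the input size are assumptions rather than the authors' exact training recipe.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

# Load a COCO-pretrained Mask R-CNN and replace its heads for 2 classes
# (background + crack); treating "crack" as a single class is an assumption.
num_classes = 2
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, num_classes)

# Inference on one dummy aerial tile; in training, (image, target) pairs with
# boxes and instance masks from the labeled data set would be used instead.
model.eval()
with torch.no_grad():
    prediction = model([torch.rand(3, 512, 512)])
```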

Efficient Inference of Image Objects using Semantic Segmentation (시멘틱 세그멘테이션을 활용한 이미지 오브젝트의 효율적인 영역 추론)

  • Lim, Heonyeong;Lee, Yurim;Jee, Minkyu;Go, Myunghyun;Kim, Hakdong;Kim, Wonil
    • Journal of Broadcast Engineering / v.24 no.1 / pp.67-76 / 2019
  • In this paper, we propose an efficient object classification method based on semantic segmentation for multi-labeled image data. In addition to pixel-level cues contained in image data, such as color, contour, contrast, and saturation, the detailed region in which each object is located is extracted as a meaningful unit, and experiments are conducted to reflect this in the inference. We use a neural network proven to perform well in image classification to determine which object is located where in image data containing objects of various classes. Based on this research, we aim to provide artificial intelligence services that can classify, in real time, the detailed regions of complex images containing various objects.
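
A small sketch of pixel-level region inference with a pretrained semantic segmentation network; the abstract does not name the model used, so DeepLabV3 from torchvision is only a stand-in, and "scene.jpg" is a placeholder.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained DeepLabV3 (21 Pascal VOC classes) as an example segmentation backbone.
model = models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open("scene.jpg").convert("RGB")
with torch.no_grad():
    out = model(preprocess(img).unsqueeze(0))["out"]   # (1, num_classes, H, W)

# Per-pixel class labels: the detailed region occupied by each object.
mask = out.argmax(1).squeeze(0)
```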

CNN-Based Toxic Plant Identification System (CNN 기반 독성 식물 판별 시스템)

  • Park, SungHyun;Lim, Byeongyeon;Jung, Hoekyung
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.8 / pp.993-998 / 2020
  • Interior design technology is currently developing around the world, and various studies report that the use of plants in home interiors is increasing. Both in Korea and abroad, plants are used to create environment-friendly home interiors, but unexpected accidents involving them keep occurring. Books and broadcasts have addressed the dangers of specific plants, yet accidents continue because people do not properly recognize which plants are hazardous. Therefore, in this paper, we propose a toxic plant identification system based on a convolutional neural network model that identifies toxic plants commonly found in Korea, and we propose a highly efficient model. Through this, toxic plants can be identified with higher accuracy and safety accidents caused by toxic plants can be prevented.
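
A minimal transfer-learning sketch for a CNN plant classifier; the MobileNetV2 backbone and the 20-class output are assumptions, since the abstract does not describe the actual architecture or class list.

```python
import torch
import torch.nn as nn
from torchvision import models

# Assumed setup: 20 plant classes and an ImageNet-pretrained MobileNetV2 backbone.
num_classes = 20
model = models.mobilenet_v2(weights="DEFAULT")

# Freeze the pretrained feature extractor and retrain only the classification head.
for p in model.features.parameters():
    p.requires_grad = False
model.classifier[1] = nn.Linear(model.last_channel, num_classes)

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```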

Indoor Scene Classification based on Color and Depth Images for Automated Reverberation Sound Editing (자동 잔향 편집을 위한 컬러 및 깊이 정보 기반 실내 장면 분류)

  • Jeong, Min-Heuk;Yu, Yong-Hyun;Park, Sung-Jun;Hwang, Seung-Jun;Baek, Joong-Hwan
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.3 / pp.384-390 / 2020
  • The reverberation applied to sound when producing movies or VR content is a very important factor for realism and liveliness. The recommended reverberation time for a given space is specified by a standard measure called RT60 (Reverberation Time 60 dB). In this paper, we propose a scene recognition technique for automatic reverberation editing. To this end, we devised a classification model that trains on color images and predicted depth images independently within the same model. Indoor scene classification using color information alone is limited because interior structures are similar to one another, so a deep-learning-based depth estimation technique is used to obtain spatial depth information. Ten scene classes were constructed based on RT60, and model training and evaluation were conducted. The proposed SCR+DNet (Scene Classification for Reverb + Depth Net) classifier achieves 92.4% accuracy, outperforming conventional CNN classifiers.
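
A toy two-branch model in the spirit of the color + predicted-depth design described above; the 10 output classes follow the abstract, while the branch layout and layer widths are assumptions.

```python
import torch
import torch.nn as nn

class ColorDepthClassifier(nn.Module):
    """Two CNN branches (RGB, predicted depth) whose features are fused for scene classification."""
    def __init__(self, num_classes=10):   # 10 RT60-based scene classes per the abstract
        super().__init__()
        def branch(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.color_branch = branch(3)   # RGB image
        self.depth_branch = branch(1)   # depth map predicted by a separate network
        self.head = nn.Linear(64 * 2, num_classes)

    def forward(self, rgb, depth):
        feats = torch.cat([self.color_branch(rgb), self.depth_branch(depth)], dim=1)
        return self.head(feats)

model = ColorDepthClassifier()
logits = model(torch.rand(1, 3, 224, 224), torch.rand(1, 1, 224, 224))
```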

Implementation of a Classification System for Dog Behaviors using YOLO-based Object Detection and a Node.js Server (YOLO 기반 개체 검출과 Node.js 서버를 이용한 반려견 행동 분류 시스템 구현)

  • Jo, Yong-Hwa;Lee, Hyuek-Jae;Kim, Young-Hun
    • Journal of the Institute of Convergence Signal Processing / v.21 no.1 / pp.29-37 / 2020
  • This paper implements a method of extracting a dog object through real-time image analysis and classifying the dog's behavior from the extracted images. Darknet YOLO was used to detect dog objects, and the Teachable Machine provided by Google was used to classify behavior patterns from the extracted images. The trained Teachable Machine model is saved in Google Drive and used by ml5.js running on a Node.js server. By implementing an interactive web server with the socket.io module on the Node.js server, the classification results are transmitted to the user's smartphone or PC in real time, so they can be checked anytime, anywhere.
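
The detection stage can be sketched in Python with OpenCV's DNN module loading Darknet YOLO weights; the file names, the confidence threshold, and the standard coco.names class ordering (dog = index 16) are assumptions.

```python
import cv2
import numpy as np

# Placeholder Darknet config/weight file names; any YOLO v3/v4 Darknet model works here.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
layer_names = net.getUnconnectedOutLayersNames()

frame = cv2.imread("frame.jpg")
blob = cv2.dnn.blobFromImage(frame, scalefactor=1 / 255.0, size=(416, 416),
                             swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(layer_names)

# Keep detections of the COCO "dog" class above a confidence threshold.
DOG_CLASS_ID, THRESH = 16, 0.5
h_img, w_img = frame.shape[:2]
for out in outputs:
    for det in out:
        scores = det[5:]
        if np.argmax(scores) == DOG_CLASS_ID and scores[DOG_CLASS_ID] > THRESH:
            cx, cy, w, h = det[:4] * np.array([w_img, h_img, w_img, h_img])
            x, y = int(cx - w / 2), int(cy - h / 2)
            dog_crop = frame[y:y + int(h), x:x + int(w)]
            # dog_crop would then be passed to the behavior classifier
            # (the Teachable Machine model in the paper).
            print("dog detected at", (x, y, int(w), int(h)))
```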

CNN based Complex Spectrogram Enhancement in Multi-Rotor UAV Environments (멀티로터 UAV 환경에서의 CNN 기반 복소 스펙트로그램 향상 기법)

  • Kim, Young-Jin;Kim, Eun-Gyung
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.4 / pp.459-466 / 2020
  • The sound collected through a multi-rotor unmanned aerial vehicle (UAV) includes ego-noise generated by the motors and propellers and wind noise generated during flight, so its quality is greatly impaired. In a multi-rotor UAV environment, both the magnitude and the phase of the target sound are heavily corrupted, so enhancement must consider both; however, the phase is difficult to improve because it exhibits no clear structural characteristics. In this study, we propose a CNN-based complex spectrogram enhancement method that removes noise from a complex spectrogram, which can represent both magnitude and phase. Experimental results show that the proposed method improves enhancement performance by considering both the magnitude and the phase of the complex spectrogram.
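
A short sketch of how a complex spectrogram can be formed and fed to a CNN as real/imaginary channels; the STFT parameters and the 2-channel layout are illustrative assumptions, not the authors' settings.

```python
import torch

# Dummy 1-second waveform at 16 kHz standing in for UAV-recorded audio.
waveform = torch.randn(16000)

# Complex STFT keeps both magnitude and phase information.
spec = torch.stft(waveform, n_fft=512, hop_length=128,
                  window=torch.hann_window(512), return_complex=True)

# Stack real and imaginary parts as two input channels: (1, 2, freq, time).
net_input = torch.stack([spec.real, spec.imag], dim=0).unsqueeze(0)

# An enhancement CNN would predict a clean complex spectrogram (or a complex mask)
# from net_input, so that magnitude and phase are enhanced jointly.
```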

Design of YOLO-based Removable System for Pet Monitoring (반려동물 모니터링을 위한 YOLO 기반의 이동식 시스템 설계)

  • Lee, Min-Hye;Kang, Jun-Young;Lim, Soon-Ja
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.1 / pp.22-27 / 2020
  • Recently, as the number of households raising pets grows with the increase in single-person households, there is a need for a system that monitors a pet's status and behavior. Monitoring pets with home CCTV has spatial limitations: it requires many cameras or restricts the pet's movement. In this paper, we propose a mobile system that detects and tracks cats using deep learning to overcome these spatial limitations. We use YOLO (You Only Look Once), an object detection neural network, to learn the characteristics of pets, and run it on a Raspberry Pi to track objects detected in the video. We designed a mobile monitoring system that connects the Raspberry Pi and a laptop over a wireless LAN so that the cat's movement and condition can be checked in real time.
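
A minimal sketch of the Raspberry Pi pushing per-frame detection results to the laptop over the wireless LAN; the address, port, and message fields are assumptions, and in the real system this would run inside the YOLO detection loop.

```python
import json
import socket

# Assumed laptop address on the local wireless LAN.
LAPTOP_ADDR = ("192.168.0.10", 5000)

def send_status(sock, box, label):
    """Send one detection (bounding box + state label) as a JSON line."""
    msg = json.dumps({"label": label, "box": box}) + "\n"
    sock.sendall(msg.encode("utf-8"))

with socket.create_connection(LAPTOP_ADDR) as sock:
    # One example message; the real loop would send the latest YOLO detection per frame.
    send_status(sock, box=[120, 80, 200, 160], label="cat_moving")
```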

End-to-end non-autoregressive fast text-to-speech (End-to-end 비자기회귀식 가속 음성합성기)

  • Kim, Wiback;Nam, Hosung
    • Phonetics and Speech Sciences / v.13 no.4 / pp.47-53 / 2021
  • Autoregressive text-to-speech (TTS) models suffer from inference instability and slow inference speed. Inference instability occurs when a poorly predicted sample at time step t affects all subsequent predictions, and slow inference arises from a model structure in which the sample at time step t can only be predicted after the samples at time steps 1 to t-1. In this study, an end-to-end non-autoregressive fast text-to-speech model is suggested as a solution to these problems. The results show that the model's Mean Opinion Score (MOS) is close to that of Tacotron 2 - WaveNet, while its inference speed and stability are higher. This study thus aims to offer insight into the improvement of non-autoregressive models.
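
A toy contrast between autoregressive and non-autoregressive decoding that illustrates why the sequential formulation is slower and error-prone; the tiny linear "decoders" below are placeholders, not the structure of any real TTS model.

```python
import torch
import torch.nn as nn

T, dim = 100, 80                              # number of frames and feature size (assumed)
step_decoder = nn.Linear(dim, dim)            # predicts frame t from frame t-1
parallel_decoder = nn.Linear(dim, T * dim)    # predicts all frames at once

# Autoregressive: T sequential steps; an error at step t propagates to steps t+1, ..., T.
frame = torch.zeros(1, dim)
ar_frames = []
for _ in range(T):
    frame = step_decoder(frame)
    ar_frames.append(frame)

# Non-autoregressive: one parallel pass over the whole utterance.
nar_frames = parallel_decoder(torch.zeros(1, dim)).view(1, T, dim)
```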