• Title/Abstract/Keywords: Deep Learning based System

Search results: 1,194 items (processing time: 0.023 s)

Benchmark for Deep Learning based Visual Odometry and Monocular Depth Estimation

  • 최혁두
    • 로봇학회논문지 / Vol. 14, No. 2 / pp. 114-121 / 2019
  • This paper presents a new benchmark system for visual odometry (VO) and monocular depth estimation (MDE). As deep learning has become a key technology in computer vision, many researchers are trying to apply deep learning to VO and MDE. Just a couple of years ago the two problems were studied independently in a supervised way, but they are now coupled and trained together in an unsupervised way. However, before designing fancy models and losses, we have to customize datasets so that they can be used for training and testing, and after training the model has to be compared with existing models, which is also a huge burden. The benchmark provides a ready-to-use input dataset for VO and MDE research in the 'tfrecords' format and an output dataset that includes model checkpoints and inference results of the existing models. It also provides various tools for data formatting, training, and evaluation. In the experiments, the existing models were evaluated to verify the performances presented in the corresponding papers, and we found that the evaluation results are inferior to the reported performances.
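
The benchmark distributes its input data as 'tfrecords'; the sketch below is a minimal, hypothetical reader. The feature keys 'image' and 'intrinsic' and the file name 'train.tfrecords' are assumptions, not the benchmark's documented schema; it only illustrates how such records are typically parsed into a tf.data pipeline for VO/MDE training.

```python
import tensorflow as tf

# Hypothetical feature schema; the actual keys used by the benchmark may differ.
FEATURES = {
    "image": tf.io.FixedLenFeature([], tf.string),        # serialized image snippet
    "intrinsic": tf.io.FixedLenFeature([9], tf.float32),  # flattened 3x3 camera matrix
}

def parse_example(serialized):
    """Decode one tfrecord example into a training sample."""
    parsed = tf.io.parse_single_example(serialized, FEATURES)
    image = tf.io.decode_png(parsed["image"], channels=3)
    intrinsic = tf.reshape(parsed["intrinsic"], [3, 3])
    return image, intrinsic

dataset = (
    tf.data.TFRecordDataset(["train.tfrecords"])  # placeholder path
    .map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)
    .batch(4)
    .prefetch(tf.data.AUTOTUNE)
)
```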

3D Object Generation and Renderer System based on VAE ResNet-GAN

  • Min-Su Yu;Tae-Won Jung;GyoungHyun Kim;Soonchul Kwon;Kye-Dong Jung
    • International Journal of Advanced Smart Convergence / Vol. 12, No. 4 / pp. 142-146 / 2023
  • We present a method for generating 3D structures and rendering objects by combining a VAE (Variational Autoencoder) and a GAN (Generative Adversarial Network). The approach focuses on generating and rendering 3D models of improved quality by using residual learning in the encoder. To reflect image features accurately, the encoder layers are stacked deeply, and residual blocks are applied to counter the problems that arise in deep networks, namely vanishing and exploding gradients, thereby improving encoder performance and allowing the model to learn more detailed information. The generated model has more detailed voxels for a more accurate representation; it is rendered with materials and lighting and finally converted into a mesh model. The resulting 3D models have good visual quality and accuracy, making them useful in fields such as virtual reality, game development, and the metaverse.
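
As a rough illustration of the residual-encoder idea described above (a minimal sketch, not the authors' architecture; the layer sizes and latent dimension are assumptions), a VAE encoder built from residual blocks might look like this:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Conv block with an identity skip connection to ease gradient flow."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))

class ResidualVAEEncoder(nn.Module):
    """Deep encoder mapping an image to the mean/log-variance of a latent code."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1),   # downsample
            ResidualBlock(64),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            ResidualBlock(128),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(128, latent_dim)
        self.fc_logvar = nn.Linear(128, latent_dim)

    def forward(self, x):
        h = self.features(x)
        return self.fc_mu(h), self.fc_logvar(h)
```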

Training Data Sets Construction from Large Data Set for PCB Character Recognition

  • NDAYISHIMIYE, Fabrice;Gang, Sumyung;Lee, Joon Jae
    • Journal of Multimedia Information System / Vol. 6, No. 4 / pp. 225-234 / 2019
  • Deep learning has become increasingly popular in both academia and industry. Various domains, including pattern recognition and computer vision, have witnessed the great power of deep neural networks. However, current studies on deep learning mainly focus on high-quality data sets with balanced class labels, while training on poor-quality and imbalanced data sets remains a great challenge for classification tasks. In this paper we propose a data analysis-based data reduction method for selecting good and diverse samples from a large dataset for a deep learning model. Data sampling techniques can reduce the large volume of raw data by retrieving representative samples that retain its useful knowledge, so instead of dealing with the full raw data we can sample it without losing important information. We group PCB characters into classes and train ResNet56 v2 and SENet models on them to improve the classification performance of the optical character recognition (OCR) character classifier.
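
As one way to picture the kind of data reduction described above (an illustrative sketch only, not the authors' exact procedure; the per-class budget and the use of k-means are assumptions), representative samples can be chosen per class by clustering features and keeping the sample nearest each centroid:

```python
import numpy as np
from sklearn.cluster import KMeans

def reduce_per_class(features, labels, per_class=500, seed=0):
    """Pick a diverse subset per class by clustering feature vectors and keeping
    the sample closest to each cluster centroid."""
    keep = []
    for cls in np.unique(labels):
        idx = np.where(labels == cls)[0]
        if len(idx) <= per_class:
            keep.extend(idx)               # small class: keep everything
            continue
        km = KMeans(n_clusters=per_class, random_state=seed, n_init=10).fit(features[idx])
        dists = km.transform(features[idx])            # (n_samples, per_class)
        keep.extend(idx[np.argmin(dists, axis=0)])     # nearest sample per centroid
    return np.array(sorted(set(keep)))
```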

An Automatic Face Hiding System based on the Deep Learning Technology

  • Yoon, Hyeon-Dham;Ohm, Seong-Yong
    • International Journal of Advanced Culture Technology / Vol. 7, No. 4 / pp. 289-294 / 2019
  • As social network service platforms grow and the one-person media market expands, people upload their own photos and videos through multiple open platforms. However, it can be illegal to upload digital content containing the faces of others to public sites without their permission. Therefore, many people spend much time and effort editing such content so that the faces of others are not exposed to the public. In this paper, we propose an automatic face hiding system called 'autoblur', which detects all unregistered faces and mosaics them automatically. The system has been implemented using the MIT-licensed open-source 'Face Recognition' project on GitHub, which is based on deep learning technology. In this system, about two dozen face images of the user, taken from different angles, are used to register his or her own face. Once the face of the user is learned and registered, the system detects all other faces in a given photo or video and blurs them out. Our experiments show that it produces quick and correct results for the sample photos.
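
The 'Face Recognition' package exposes simple calls for locating and encoding faces; a minimal sketch of the blur-unregistered-faces idea (not the paper's actual 'autoblur' implementation; the blur kernel size and tolerance are assumptions) could look like this:

```python
import cv2
import face_recognition

def blur_unregistered_faces(image_path, registered_encodings, tolerance=0.6):
    """Blur every face in the image that does not match any registered encoding."""
    image = face_recognition.load_image_file(image_path)   # RGB array
    locations = face_recognition.face_locations(image)
    encodings = face_recognition.face_encodings(image, locations)

    output = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
    for (top, right, bottom, left), enc in zip(locations, encodings):
        matches = face_recognition.compare_faces(registered_encodings, enc, tolerance)
        if not any(matches):                                # unregistered face
            roi = output[top:bottom, left:right]
            output[top:bottom, left:right] = cv2.GaussianBlur(roi, (51, 51), 30)
    return output
```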

A Novel RFID Dynamic Testing Method Based on Optical Measurement

  • Zhenlu Liu;Xiaolei Yu;Lin Li;Weichun Zhang;Xiao Zhuang;Zhimin Zhao
    • Current Optics and Photonics / Vol. 8, No. 2 / pp. 127-137 / 2024
  • The distribution of tags is an important factor that affects the performance of radio-frequency identification (RFID). To study RFID performance, it is necessary to obtain the coordinates of the RFID tags; however, RFID-based positioning has large errors and is easily affected by the environment. Therefore, a new method using optical measurement is proposed for RFID performance analysis. First, because images may be blurred during acquisition, the paper derives a new image prior for deblurring, and a nonlocal means-based method for image deconvolution is proposed. Experimental results show that the PSNR and SSIM of our algorithm are better than those of a deep convolutional neural network-based learning method and fast total variation. Second, an RFID dynamic testing system based on photoelectric sensing technology is designed, from which the RFID reading distance and the three-dimensional coordinates of the tags are obtained. Finally, deep learning is used to model the relationship between the RFID reading distance and the tag distribution; the error is 3.02%, better than other algorithms such as a particle-swarm optimization back-propagation neural network, an extreme learning machine, and a deep neural network. The paper proposes using optical methods to measure and collect RFID data and to analyze and predict RFID performance, providing a new method for testing RFID performance.
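
To illustrate the final modeling step (a hypothetical sketch; the paper does not disclose its network layout, and the tag count, layer sizes, and dummy data here are assumptions), a small regressor could map measured 3-D tag coordinates to a predicted reading distance:

```python
import torch
import torch.nn as nn

class ReadingDistanceRegressor(nn.Module):
    """Maps flattened 3-D tag coordinates to a predicted reading distance."""
    def __init__(self, n_tags=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_tags * 3, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, tag_coords):             # tag_coords: (batch, n_tags * 3)
        return self.net(tag_coords)

model = ReadingDistanceRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# In practice, coords and distances would come from the optical measurement system.
coords = torch.rand(32, 12)                    # dummy batch: 4 tags x (x, y, z)
distances = torch.rand(32, 1)
loss = loss_fn(model(coords), distances)
loss.backward()
optimizer.step()
```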

Deep Learning-based Approach for Classification of Tribological Time Series Data for Hand Creams

  • 김지원;이유민;한상헌;김경택
    • 산업경영시스템학회지 / Vol. 44, No. 3 / pp. 98-105 / 2021
  • Until a decade ago, the sensory stimulation of a cosmetic product was deemed an ancillary aspect; that point of view has changed drastically on several levels in just a decade. Nowadays cosmetic formulators must meet the needs of consumers who want sensory satisfaction, even though they do not have much time for new product development. The selection of new products from candidate products largely depends on panels of human sensory experts. As the new product development cycle time decreases, formulators want systematic tools to filter candidate products into a short list. Traditional statistical analysis of most physical property tests for the products, including tribology and rheology tests, does not give a sound foundation for filtering candidate products. In this paper, we suggest a deep learning-based analysis method to identify hand cream products from the raw electric signals of a tribological sliding test. We compare the result of the deep learning-based method using raw data as input with the results of several machine learning-based analysis methods using manually extracted features as input. Among them, ResNet, a deep learning model, proved to be the best at identifying the hand cream used in the test. To the best of our knowledge from the published literature, this is the first attempt to identify a cosmetic product from raw time-series friction data alone, without any manual feature extraction. Automatic product identification without manually extracted features can be used to narrow down the list of newly developed candidate products.
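
As a rough picture of classifying raw friction signals with a residual network (a minimal sketch under assumed channel counts and kernel sizes, not the ResNet variant used in the paper):

```python
import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    """1-D residual block for raw signal input."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=7, padding=3),
            nn.BatchNorm1d(channels), nn.ReLU(inplace=True),
            nn.Conv1d(channels, channels, kernel_size=7, padding=3),
            nn.BatchNorm1d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))

class FrictionResNet(nn.Module):
    """Classifies a raw friction signal into one of n_products hand creams."""
    def __init__(self, n_products):
        super().__init__()
        self.stem = nn.Conv1d(1, 32, kernel_size=9, padding=4)
        self.blocks = nn.Sequential(ResBlock1d(32), ResBlock1d(32))
        self.head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, n_products))

    def forward(self, signal):                 # signal: (batch, 1, n_samples)
        return self.head(self.blocks(self.stem(signal)))
```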

A review of Chinese named entity recognition

  • Cheng, Jieren;Liu, Jingxin;Xu, Xinbin;Xia, Dongwan;Liu, Le;Sheng, Victor S.
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 6 / pp. 2012-2030 / 2021
  • Named Entity Recognition (NER) is used to identify entity nouns in a corpus, such as Location, Person, and Organization. NER is also an important basis for research in various natural language processing fields. Chinese NER has some unique difficulties; for example, there is no obvious segmentation boundary between the characters in a Chinese sentence, so the Chinese NER task is often combined with Chinese word segmentation. In response to these problems, we summarize the recognition methods for Chinese NER. In this review, we first introduce the sequence labeling scheme and evaluation metrics of NER. Then, we divide Chinese NER methods into rule-based methods, statistics-based machine learning methods, and deep learning-based methods. Subsequently, we analyze in detail the deep learning-based model frameworks and typical Chinese NER methods. Finally, we put forward the current challenges and future research directions of Chinese NER technology.
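
To make the sequence labeling setup concrete (a small illustration; the example sentence and BIO tag set are assumptions, not drawn from the review), character-level BIO tags can be decoded into entity spans like this:

```python
# Character-level BIO tagging for Chinese NER: "北京" (Beijing) is a Location entity.
sentence = ["我", "在", "北", "京", "工", "作"]      # "I work in Beijing"
tags     = ["O", "O", "B-LOC", "I-LOC", "O", "O"]

def extract_entities(tokens, tags):
    """Collect (entity_text, type) spans from a BIO tag sequence."""
    entities, current, etype = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                entities.append(("".join(current), etype))
            current, etype = [tok], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(tok)
        else:
            if current:
                entities.append(("".join(current), etype))
            current, etype = [], None
    if current:
        entities.append(("".join(current), etype))
    return entities

print(extract_entities(sentence, tags))   # [('北京', 'LOC')]
```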

CNN Based Real-Time DNS DDoS Attack Detection System

  • 서인혁;이기택;유진현;김승주
    • 정보처리학회논문지:컴퓨터 및 통신 시스템 / Vol. 6, No. 3 / pp. 135-142 / 2017
  • DDoS (Distributed Denial of Service) is an attack in which a large number of zombie PCs access a target server and exhaust its resources so that legitimate users cannot use the server. DDoS attack incidents have been increasing steadily, and because the main targets are IT services, financial institutions, and government agencies, DDoS detection has become an important issue. This paper introduces a method that uses deep learning to detect, in real time, DNS DDoS attacks that amplify packets through DNS servers, i.e., DNS amplification attacks. To overcome the limitations of previous studies, the detection system was trained on a mixture of real-environment data rather than data from a test-bed environment only. In addition, the deep learning model was built with a Convolutional Neural Network (CNN), which is mainly used for image recognition.
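
As a rough sketch of the detection model (illustrative only; the input representation, window size, and layer sizes are assumptions rather than the paper's configuration), DNS traffic features binned into fixed-size windows can be fed to a small CNN for binary classification:

```python
import torch
import torch.nn as nn

class DNSAmplificationCNN(nn.Module):
    """Classifies fixed-size DNS traffic feature 'images' as normal or amplification traffic."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 8 * 8, 64), nn.ReLU(), nn.Linear(64, 2),
        )

    def forward(self, x):                      # x: (batch, 1, 32, 32)
        return self.classifier(self.features(x))

model = DNSAmplificationCNN()
logits = model(torch.rand(4, 1, 32, 32))       # dummy windowed-traffic batch
print(logits.shape)                            # torch.Size([4, 2])
```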

Deep Learning-based Pet Monitoring System and Activity Recognition device

  • Kim, Jinah;Kim, Hyungju;Park, Chan;Moon, Nammee
    • 한국컴퓨터정보학회논문지 / Vol. 27, No. 2 / pp. 25-32 / 2022
  • This paper proposes a deep learning-based pet monitoring system that uses an activity recognition device. The system consists of the pet's activity recognition device, the owner's smart device, and a server. Acceleration and gyroscope data are collected from an Arduino-based activity recognition device, and the pet's step count is computed from them. After preprocessing, the collected data are passed through a hybrid CNN-LSTM deep learning model that recognizes five types of activity (sitting, standing, lying, walking, running) and thereby measures the amount of activity. Finally, the owner's smart device is provided with monitoring of activity changes, such as daily and weekly briefing charts. The performance evaluation confirmed that detailed activity recognition and activity measurement for pets are possible. With further data accumulation, extension to abnormal-behavior detection and pet health-care services can be expected.
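
A hybrid CNN-LSTM over windows of 6-axis IMU data might be sketched as follows (a minimal illustration; the window length, channel sizes, and hidden dimension are assumptions, not the paper's values):

```python
import torch
import torch.nn as nn

class CNNLSTMActivity(nn.Module):
    """Maps windows of 6-axis IMU data (accel + gyro) to five activity classes."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(6, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                      # x: (batch, 6, window_len)
        feats = self.conv(x)                   # (batch, 64, window_len / 2)
        out, _ = self.lstm(feats.transpose(1, 2))
        return self.fc(out[:, -1])             # classify from the last time step

model = CNNLSTMActivity()
print(model(torch.rand(8, 6, 128)).shape)      # torch.Size([8, 5])
```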

Web-based University Classroom Attendance System Based on Deep Learning Face Recognition

  • Ismail, Nor Azman;Chai, Cheah Wen;Samma, Hussein;Salam, Md Sah;Hasan, Layla;Wahab, Nur Haliza Abdul;Mohamed, Farhan;Leng, Wong Yee;Rohani, Mohd Foad
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 2 / pp. 503-523 / 2022
  • Nowadays, many attendance applications utilise biometric techniques such as face, fingerprint, and iris recognition, and biometrics has become ubiquitous in many sectors. Thanks to advances in deep learning algorithms, the accuracy of biometric techniques has improved tremendously. This paper proposes a web-based attendance system that adopts facial recognition using open-source, deep learning pre-trained models. The procedural steps of face recognition using web technology and a database are explained, and the required pre-trained weight files are embedded in the face recognition procedure. The face recognition method includes two important processes: registration of face datasets and face matching. The extracted feature vectors are stored in an online database to create a more dynamic face recognition process. Finally, user testing was conducted, in which users were asked to perform a series of biometric verifications consisting of facial scans from the front, the right (30-45 degrees), and the left (30-45 degrees). The reported face recognition results showed an accuracy of 92%, with a precision of 100% and a recall of 90%.
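
The registration and matching steps can be pictured with a simple nearest-vector lookup (a hypothetical sketch; the 128-dimensional embeddings, Euclidean distance, and threshold are assumptions, and the paper's online database is replaced here by an in-memory dict):

```python
import numpy as np

def register(db, student_id, embedding):
    """Store a student's face embedding (e.g., a 128-d vector from a pre-trained model)."""
    db.setdefault(student_id, []).append(np.asarray(embedding, dtype=np.float32))

def mark_attendance(db, embedding, threshold=0.6):
    """Return the best-matching student id, or None if no stored vector is close enough."""
    embedding = np.asarray(embedding, dtype=np.float32)
    best_id, best_dist = None, float("inf")
    for student_id, vectors in db.items():
        dist = min(np.linalg.norm(v - embedding) for v in vectors)
        if dist < best_dist:
            best_id, best_dist = student_id, dist
    return best_id if best_dist <= threshold else None

db = {}
register(db, "A12345", np.random.rand(128))
print(mark_attendance(db, np.random.rand(128)))   # almost certainly None for random vectors
```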