• Title/Summary/Keyword: Smart Encoder

WiFi CSI Data Preprocessing and Augmentation Techniques in Indoor People Counting using Deep Learning (딥러닝을 활용한 실내 사람 수 추정을 위한 WiFi CSI 데이터 전처리와 증강 기법)

  • Kim, Yeon-Ju;Kim, Seungku
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.12 / pp.1890-1897 / 2021
  • People counting is an important technology for application services such as smart homes, smart buildings, and smart cars. With the social distancing measures of COVID-19, the technology has attracted public attention. A people counting system can be implemented with cameras, sensors, or wireless signals, depending on the service requirements. A system built on a WiFi AP uses WiFi CSI data that reflects multipath information, which makes it an effective low-cost solution for indoor environments. Conventional WiFi CSI-based people counting methods, however, suffer from low accuracy, which prevents high-quality services. This paper proposes a deep learning people counting system based on WiFi CSI data. Data preprocessing with an auto-encoder, data augmentation that transforms the WiFi CSI data, and the proposed deep learning model together improve counting accuracy. In the experiments, the proposed approach achieves 89.29% accuracy with 6 subjects.
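
A concrete reading of the preprocessing and augmentation steps is sketched below: a small denoising auto-encoder reconstructs clean CSI amplitude vectors from randomly scaled, noise-injected copies. The 64-subcarrier input, the layer sizes, and the augmentation itself are illustrative assumptions, not the paper's exact design.

```python
# Hypothetical sketch: auto-encoder preprocessing for WiFi CSI amplitudes,
# with a simple scaling/noise augmentation. Sizes are placeholders.
import torch
import torch.nn as nn

class CSIAutoEncoder(nn.Module):
    def __init__(self, n_subcarriers=64, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_subcarriers, 32), nn.ReLU(),
            nn.Linear(32, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(),
            nn.Linear(32, n_subcarriers))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def augment(csi, noise_std=0.05, scale_range=(0.9, 1.1)):
    """Randomly scale each CSI vector and add Gaussian noise (assumed augmentation)."""
    scale = torch.empty(csi.size(0), 1).uniform_(*scale_range)
    return csi * scale + noise_std * torch.randn_like(csi)

# Train the auto-encoder to reconstruct the clean CSI from its augmented copy.
model = CSIAutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
csi_batch = torch.rand(128, 64)            # placeholder CSI amplitude vectors
for _ in range(10):
    recon = model(augment(csi_batch))
    loss = loss_fn(recon, csi_batch)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```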

Mobile Finger Signature Verification Robust to Skilled Forgery (모바일환경에서 위조서명에 강건한 딥러닝 기반의 핑거서명검증 연구)

  • Nam, Seng-soo;Seo, Chang-ho;Choi, Dae-seon
    • Journal of the Korea Institute of Information Security & Cryptology / v.26 no.5 / pp.1161-1170 / 2016
  • In this paper, we present an authentication technique for verifying dynamic signatures drawn by finger on a smartphone. The proposed method uses an auto-encoder-based one-class model to effectively distinguish skilled forgeries. In addition to basic dynamic signature features such as the shape and velocity of a signature, we use accelerometer values available on most smartphones. The signature data are resampled to a common length and normalized to a constant size. We built a test set for evaluation and conducted experiments in three settings. The results show that combining the acceleration sensor values with the one-class model yields an EER 6.9% lower than that of the previous method.
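
The one-class verification idea can be sketched as follows: resample each signature to a fixed length, train an auto-encoder on genuine signatures only, and reject a test signature when its reconstruction error exceeds a threshold. The resampling length, channel layout, layer sizes, and threshold are assumptions for illustration, not the paper's settings.

```python
# Hedged sketch of a one-class signature verifier based on reconstruction error.
import numpy as np
import torch
import torch.nn as nn

def resample(sig, length=256):
    """Resample a variable-length (T, channels) signature to a fixed length."""
    t_old = np.linspace(0, 1, len(sig))
    t_new = np.linspace(0, 1, length)
    return np.stack([np.interp(t_new, t_old, sig[:, c])
                     for c in range(sig.shape[1])], axis=1)

class SignatureAE(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 8))
        self.dec = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, dim))
    def forward(self, x):
        return self.dec(self.enc(x))

def is_genuine(model, sig_vec, threshold):
    """Accept when the reconstruction error is below a calibrated threshold."""
    with torch.no_grad():
        err = torch.mean((model(sig_vec) - sig_vec) ** 2).item()
    return err < threshold

# Usage: x/y coordinates, velocity, and acceleration as four channels (toy data).
sig = np.random.rand(300, 4)
vec = torch.tensor(resample(sig).flatten(), dtype=torch.float32)
model = SignatureAE(dim=vec.numel())          # in practice, train on genuine data first
print(is_genuine(model, vec, threshold=0.5))
```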

Development of a driver's emotion detection model using auto-encoder on driving behavior and psychological data

  • Eun-Seo, Jung;Seo-Hee, Kim;Yun-Jung, Hong;In-Beom, Yang;Jiyoung, Woo
    • Journal of the Korea Society of Computer and Information / v.28 no.3 / pp.35-43 / 2023
  • Emotion recognition while driving is essential for preventing accidents. Moreover, in the era of autonomous driving, vehicles become the agents of mobility and require richer emotional communication with drivers, and the emotion recognition market is gradually expanding. Accordingly, this study classifies a driver's emotions into seven categories using psychological and behavioral data, which are relatively easy to collect. Latent vectors extracted by an auto-encoder are also used as features in the classification model and are confirmed to improve performance. The proposed framework is also shown to outperform a setting in which the existing EEG data are included. Using only psychological, personal, and behavioral data, the model achieves 81% emotion classification accuracy and an F1-score of 80%.
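
A minimal sketch under assumed dimensions: an auto-encoder is trained to reconstruct the behavioral and psychological features, and its latent vectors are concatenated with the raw features before a seven-class emotion classifier. The layer sizes and feature counts are placeholders.

```python
# Hypothetical sketch: auto-encoder latent vectors as extra classifier features.
import torch
import torch.nn as nn
import torch.nn.functional as F

raw_dim, latent_dim, n_emotions = 40, 8, 7        # placeholder dimensions

# Auto-encoder trained on the driving/psychological features; only the
# encoder is reused afterwards to produce latent features.
encoder = nn.Sequential(nn.Linear(raw_dim, 16), nn.ReLU(), nn.Linear(16, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, raw_dim))

classifier = nn.Sequential(
    nn.Linear(raw_dim + latent_dim, 32), nn.ReLU(),
    nn.Linear(32, n_emotions))                    # logits over 7 emotion classes

x = torch.rand(4, raw_dim)                        # a batch of driver records
recon_loss = F.mse_loss(decoder(encoder(x)), x)   # auto-encoder training objective
with torch.no_grad():
    z = encoder(x)                                # latent vector as an added feature
logits = classifier(torch.cat([x, z], dim=1))     # seven-way emotion prediction
```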

Sentiment Analysis using Robust Parallel Tri-LSTM Sentence Embedding in Out-of-Vocabulary Word (Out-of-Vocabulary 단어에 강건한 병렬 Tri-LSTM 문장 임베딩을 이용한 감정분석)

  • Lee, Hyun Young;Kang, Seung Shik
    • Smart Media Journal / v.10 no.1 / pp.16-24 / 2021
  • Existing word embedding methods such as word2vec represent only the words that occur in the training corpus as fixed-length vectors in a continuous vector space, so the out-of-vocabulary (OOV) problem frequently arises when words of a morphologically rich language are mapped to fixed-length vectors. The same holds for sentence embedding: when the meaning of a sentence is represented as a fixed-length vector by composing the word vectors in the sentence, OOV words make it difficult to represent the sentence meaningfully. In particular, since Korean is an agglutinative language in which lexical and grammatical morphemes are combined, handling OOV words is an important factor for improving performance. In this paper, we propose a parallel Tri-LSTM sentence embedding that is robust to the OOV problem by extending the use of morphological information from the word level to the sentence level. In a sentiment analysis task on a Korean corpus, we empirically found that the character unit is a better embedding unit than the morpheme unit for Korean sentence embedding. The parallel bidirectional Tri-LSTM sentence encoder achieved 86.17% accuracy on the sentiment analysis task.
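
A hedged sketch of the general idea: the sentence is encoded at the character (syllable) level to sidestep OOV words, and several bidirectional LSTMs running in parallel are concatenated into a fixed-length sentence vector. The three branches mirror the "Tri-LSTM" name, but the wiring, vocabulary size, and dimensions shown here are assumptions rather than the paper's exact architecture.

```python
# Assumed sketch of a parallel (tri-branch) bidirectional LSTM sentence encoder
# over character ids, producing a fixed-length sentence embedding.
import torch
import torch.nn as nn

class ParallelTriLSTMEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden=64, n_branches=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.branches = nn.ModuleList(
            [nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
             for _ in range(n_branches)])

    def forward(self, char_ids):                   # char_ids: (batch, seq_len)
        x = self.embed(char_ids)
        sent_vecs = []
        for lstm in self.branches:
            _, (h, _) = lstm(x)                    # h: (2, batch, hidden)
            sent_vecs.append(torch.cat([h[0], h[1]], dim=1))
        return torch.cat(sent_vecs, dim=1)         # fixed-length sentence vector

encoder = ParallelTriLSTMEncoder(vocab_size=12000)
sentence = torch.randint(0, 12000, (2, 30))        # batch of character id sequences
embedding = encoder(sentence)                      # shape: (2, 3 * 2 * 64)
```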

Financial Market Prediction and Improving the Performance Based on Large-scale Exogenous Variables and Deep Neural Networks (대규모 외생 변수 및 Deep Neural Network 기반 금융 시장 예측 및 성능 향상)

  • Cheon, Sung Gil;Lee, Ju Hong;Choi, Bum Ghi;Song, Jae Won
    • Smart Media Journal / v.9 no.4 / pp.26-35 / 2020
  • Predicting future stock prices has long been an active area of study. Unlike general time series, however, financial time series pose several obstacles to prediction, such as non-stationarity, long-term dependence, and non-linearity. Moreover, selecting variables by hand from a very wide range of data is limiting, so the model should be able to extract useful variables automatically. In this paper, we propose 'sliding time step normalization', which normalizes non-stationary data; an LSTM autoencoder, which compresses the full set of variables; and 'moving transfer learning', which divides the data into periods and performs transfer learning on each. The experiments show that feeding as many variables as possible through the neural network outperforms using only 100 major financial variables, and that normalizing the non-stationarity of every section with 'sliding time step normalization' is effective in improving performance. 'Moving transfer learning' also improves performance over long test intervals by evaluating the model and performing transfer learning within the test interval at each step.
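
One plausible reading of 'sliding time step normalization' is sketched below: each time step is rescaled with the mean and standard deviation of a trailing window, so a non-stationary series is normalized locally rather than globally. The window length and the z-score form are assumptions about the method, not details confirmed by the paper.

```python
# Assumed sketch: normalize each time step with trailing-window statistics.
import numpy as np

def sliding_time_step_normalize(series, window=20, eps=1e-8):
    out = np.zeros_like(series, dtype=float)
    for t in range(len(series)):
        window_vals = series[max(0, t - window + 1):t + 1]
        out[t] = (series[t] - window_vals.mean()) / (window_vals.std() + eps)
    return out

prices = np.cumsum(np.random.randn(500)) + 100.0   # a non-stationary toy series
normalized = sliding_time_step_normalize(prices)   # locally rescaled, roughly stationary
```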

A Case Study on Integrated Surveillance System Field Implement with Intelligent Video Analytic Software (지능형 영상 분석 소프트웨어를 탑재한 종합 감시 시스템 현장 구축에 관한 사례 연구)

  • Jeon, Ji-Hye;Ahn, Tae-Ki;Park, Kwang-Young;Park, Goo-Man
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.11 no.6 / pp.255-260 / 2011
  • Security in urban transit systems has become a matter of common concern. Because of the vast number of daily passengers, a safe urban transit system is in high demand, and providing that safety is one of the most challenging tasks. We introduce a test model of an integrated security system for urban transit and build it at a subway station to demonstrate its performance. The system consists of cameras, a sensor network, and central monitoring software. The smart camera functionality is described in more detail: it includes a moving-object recognition module, video analytics, a video encoder, and a server module that transmits video and audio information. The field deployment demonstrates the system's excellent performance.

Ensemble-based deep learning for autonomous bridge component and damage segmentation leveraging Nested Reg-UNet

  • Abhishek Subedi;Wen Tang;Tarutal Ghosh Mondal;Rih-Teng Wu;Mohammad R. Jahanshahi
    • Smart Structures and Systems / v.31 no.4 / pp.335-349 / 2023
  • Bridges constantly undergo deterioration and damage, the most common ones being concrete damage and exposed rebar. Periodic inspection of bridges to identify damages can aid in their quick remediation. Likewise, identifying components can provide context for damage assessment and help gauge a bridge's state of interaction with its surroundings. Current inspection techniques rely on manual site visits, which can be time-consuming and costly. More recently, robotic inspection assisted by autonomous data analytics based on Computer Vision (CV) and Artificial Intelligence (AI) has been viewed as a suitable alternative to manual inspection because of its efficiency and accuracy. To aid research in this avenue, this study performs a comparative assessment of different architectures, loss functions, and ensembling strategies for the autonomous segmentation of bridge components and damages. The experiments lead to several interesting discoveries. Nested Reg-UNet architecture is found to outperform five other state-of-the-art architectures in both damage and component segmentation tasks. The architecture is built by combining a Nested UNet style dense configuration with a pretrained RegNet encoder. In terms of the mean Intersection over Union (mIoU) metric, the Nested Reg-UNet architecture provides an improvement of 2.86% on the damage segmentation task and 1.66% on the component segmentation task compared to the state-of-the-art UNet architecture. Furthermore, it is demonstrated that incorporating the Lovasz-Softmax loss function to counter class imbalance can boost performance by 3.44% in the component segmentation task over the most employed alternative, weighted Cross Entropy (wCE). Finally, weighted softmax ensembling is found to be quite effective when used synchronously with the Nested Reg-UNet architecture by providing mIoU improvement of 0.74% in the component segmentation task and 1.14% in the damage segmentation task over a single-architecture baseline. Overall, the best mIoU of 92.50% for the component segmentation task and 84.19% for the damage segmentation task validate the feasibility of these techniques for autonomous bridge component and damage segmentation using RGB images.
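
Weighted softmax ensembling can be illustrated as follows: per-pixel class probabilities from several segmentation models are combined with scalar weights before taking the argmax. The weights and tensor shapes in this sketch are placeholders rather than the values used in the study.

```python
# Hedged sketch of weighted softmax ensembling for semantic segmentation.
import torch
import torch.nn.functional as F

def weighted_softmax_ensemble(logit_maps, weights):
    """logit_maps: list of (batch, classes, H, W) tensors; weights: list of floats."""
    probs = [w * F.softmax(logits, dim=1) for logits, w in zip(logit_maps, weights)]
    fused = torch.stack(probs).sum(dim=0)          # weighted sum of probability maps
    return fused.argmax(dim=1)                     # (batch, H, W) predicted label map

outputs = [torch.randn(1, 4, 64, 64) for _ in range(3)]   # three model outputs (toy)
labels = weighted_softmax_ensemble(outputs, [0.5, 0.3, 0.2])
```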

Convolutional Autoencoder based Stress Detection using Soft Voting (소프트 보팅을 이용한 합성곱 오토인코더 기반 스트레스 탐지)

  • Eun Bin Choi;Soo Hyung Kim
    • Smart Media Journal / v.12 no.11 / pp.1-9 / 2023
  • Stress is a significant issue in modern society, often triggered by external or internal factors that are difficult to manage. When high stress persists over the long term, it can develop into a chronic condition that harms health and overall well-being. It is difficult, however, for individuals experiencing chronic stress to recognize their own condition, so early detection and management are crucial. Detecting stress from biosignals measured by wearable devices could enable more effective management, but two main problems arise: manually extracting features from the signals can introduce bias, and the performance of classification models can vary greatly across subjects. This paper proposes a model that reduces bias by using convolutional autoencoders, which can represent the key features of the data, and improves generalizability by employing soft voting, an ensemble learning method, to minimize performance variability. To verify the generalization performance of the model, we evaluate it with leave-one-subject-out (LOSO) cross-validation. The proposed model demonstrates higher accuracy than previous studies on the WESAD dataset.
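
The soft-voting step itself is simple: class probabilities from several classifiers are averaged and the class with the highest mean probability wins. The sketch below shows only this step with toy probabilities; the convolutional auto-encoder feature extractor is not reproduced here.

```python
# Soft voting over the probability outputs of several classifiers.
import numpy as np

def soft_vote(prob_list):
    """prob_list: list of (n_samples, n_classes) probability arrays."""
    return np.mean(prob_list, axis=0).argmax(axis=1)

# e.g., three models scoring the same test windows (toy values)
p1 = np.array([[0.7, 0.3], [0.4, 0.6]])
p2 = np.array([[0.6, 0.4], [0.3, 0.7]])
p3 = np.array([[0.5, 0.5], [0.2, 0.8]])
print(soft_vote([p1, p2, p3]))   # -> [0 1] (non-stress, stress)
```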

Development of Progressive Download Video Transmission EDR based RTOS on Wireless LAN (RTOS 기반 무선랜 장치가 연결된 영상기록저장장치의 Progressive Download 방식 영상전송 기술 개발)

  • Nahm, Eui-Seok
    • The Transactions of The Korean Institute of Electrical Engineers / v.66 no.12 / pp.1792-1798 / 2017
  • Event data recorders (car black boxes) with WiFi dongles have been released, and most of them are built on the Linux platform, because taking advantage of open source reduces licensing costs and makes platform development possible with little investment. However, a Linux platform has drawbacks: a long boot-up time, high processing power consumption under a limited battery capacity, and the need to minimize memory use to remain cost-competitive. In this paper, a real-time operating system (RTOS) is used to optimize these aspects. An MP4 encoder and muxer are developed so that the device boots in about ten seconds with minimal memory, and it operates at lower power consumption than a Linux system using a WiFi dongle. The WiFi dongle is used to provide a progressive download feature to smartphones while keeping the product price low. Because an RTOS is weak in WiFi support, the protocol stack is implemented by porting TCP/IP, web, and DHCP servers and combining them with the USB OTG host interface. SPI NOR flash memory is also adopted for faster boot time, cost reduction, and lower processing power. As a result, the developed device achieves a 10-second boot time, a frame rate of 24 frames per second, and 10% lower power consumption.

Crack segmentation in high-resolution images using cascaded deep convolutional neural networks and Bayesian data fusion

  • Tang, Wen;Wu, Rih-Teng;Jahanshahi, Mohammad R.
    • Smart Structures and Systems / v.29 no.1 / pp.221-235 / 2022
  • Manual inspection of steel box girders on long span bridges is time-consuming and labor-intensive, and the quality of inspection relies on the subjective judgment of the inspectors. This study proposes an automated approach to detect and segment cracks in high-resolution images. An end-to-end cascaded framework is proposed to first detect the existence of cracks using a deep convolutional neural network (CNN) and then segment the crack using a modified U-Net encoder-decoder architecture. A Naïve Bayes data fusion scheme is proposed to reduce the false positives and false negatives effectively. To generate the binary crack mask, the original images are first divided into 448 × 448 overlapping image patches, which are classified as crack versus non-crack using a deep CNN. Next, a modified U-Net is trained from scratch using only the crack patches for segmentation. A customized loss function that combines binary cross entropy loss and the Dice loss is introduced to enhance the segmentation performance. Additionally, a Naïve Bayes fusion strategy is employed to integrate the crack score maps from different overlapping crack patches and to decide whether a pixel belongs to a crack or not. Comprehensive experiments demonstrate that the proposed approach achieves an 81.71% mean intersection over union (mIoU) score across 5 different training/test splits, which is 7.29% higher than the baseline reference implemented with the original U-Net.
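
The customized loss can be sketched as a weighted sum of binary cross entropy and a Dice term on the predicted crack mask. The equal weighting and smoothing constant below are assumptions; the abstract does not report the exact weights.

```python
# Assumed sketch of a combined BCE + Dice segmentation loss.
import torch
import torch.nn.functional as F

def bce_dice_loss(logits, target, smooth=1.0, bce_weight=0.5):
    """logits, target: (batch, 1, H, W); target is a binary crack mask."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    probs = torch.sigmoid(logits)
    intersection = (probs * target).sum()
    dice = 1 - (2 * intersection + smooth) / (probs.sum() + target.sum() + smooth)
    return bce_weight * bce + (1 - bce_weight) * dice

pred = torch.randn(2, 1, 448, 448)                  # raw segmentation logits
mask = (torch.rand(2, 1, 448, 448) > 0.5).float()   # placeholder ground truth
loss = bce_dice_loss(pred, mask)
```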