• Title/Summary/Keyword: Training signal

Study on KTX driver assistance terminal's hardware and device development (KTX 운전자지원 단말기의 하드웨어와 장치개발에 대한 연구)

  • Jung, Sung-Youn; Kim, Hyung-In; Kang, Ki-Sok; Kim, Hyun-Sic; Jung, Do-Won
    • Proceedings of the KSR Conference / 2008.11b / pp.498-508 / 2008
  • TECA (Terminal Cabin), a sub-device of the Seoul-Pusan express train's on-board computer system, provides train information, fault display, maintenance functions, and a training function for the driver. KTX operation relies on signal-based driving at high speed, and the long trainset and one-man operation make troubleshooting and the acquisition of basic information difficult. Rapid and correct processing of this information, and a TECA device that supports maintenance, are therefore indispensable. However, there has been no prior study of this device's hardware, its maintenance, or its development against new requirements. This study describes TECA's basic functions and the requirements for device development, supporting the development of maintenance capabilities for the driver assistance terminal. The results are intended to be used in completing the specification needed for developing a new driver assistance terminal device.

Convolutional neural network based traffic sound classification robust to environmental noise (합성곱 신경망 기반 환경잡음에 강인한 교통 소음 분류 모델)

  • Lee, Jaejun; Kim, Wansoo; Lee, Kyogu
    • The Journal of the Acoustical Society of Korea / v.37 no.6 / pp.469-474 / 2018
  • As urban populations increase, research on urban environmental noise is receiving more attention. In this study, we classify abnormal noises occurring in traffic situations using a deep learning algorithm that has shown high performance in recent environmental noise classification studies. Specifically, we classify four classes of sounds, tire skidding, car crashes, car horns, and normal sounds, using convolutional neural networks. In addition, we add three environmental noises, including rain, wind, and crowd noise, to our training data so that the classification model is more robust in real traffic situations with environmental noise. Experimental results show that the proposed traffic sound classification model achieves better performance than existing algorithms, particularly under harsh conditions with environmental noise.
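
A minimal sketch of the kind of noise augmentation the abstract describes, mixing an environmental noise clip into a traffic recording at a chosen SNR; the function name, SNR value, and random stand-in signals are illustrative, not taken from the paper:

    # Minimal sketch (not the authors' code): mix environmental noise into a
    # training clip at a requested signal-to-noise ratio.
    import numpy as np

    def mix_at_snr(clean: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
        """Add `noise` to `clean` so the result has roughly the requested SNR (dB)."""
        noise = np.resize(noise, clean.shape)            # loop/trim noise to match length
        p_clean = np.mean(clean ** 2) + 1e-12
        p_noise = np.mean(noise ** 2) + 1e-12
        scale = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10.0)))
        return clean + scale * noise

    # Example: augment a 1-second clip (16 kHz) with a "rain" noise clip at 5 dB SNR.
    clip = np.random.randn(16000).astype(np.float32)     # stand-in for a traffic recording
    rain = np.random.randn(16000).astype(np.float32)     # stand-in for an environmental noise clip
    augmented = mix_at_snr(clip, rain, snr_db=5.0)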

Auto Thresholding for Efficient Neurofeedback Training (효과적인 뉴로피드백 훈련을 위한 임계값 설정 기법)

  • Shin, Min-Chul; Hwang, Hae-Do; Yoon, Seung-Hyun; Lee, Jieun
    • Journal of the Korea Computer Graphics Society / v.25 no.2 / pp.19-29 / 2019
  • We develop a complete system, including data collection, signal processing, and real-time interaction, for effective neurofeedback training. The system supports a technique for finding threshold values, which are crucial to an effective neurofeedback system. A therapist specifies a target success rate for positive feedback, an allowable error, and an allowable time. The system computes the current success rate and compares it with the target; if the difference between the two rates exceeds the allowable error for longer than the allowable time, an optimal threshold value that attains the target success rate is found by numerical optimization. We conduct several experiments varying the input parameters (target success rate, allowable error, and allowable time) and demonstrate the effectiveness of the technique by showing that the desired target success rate is obtained stably and controlled systematically by the input parameters.
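
A minimal sketch of the threshold-adaptation rule described above, under assumed details: success is counted when the feedback signal exceeds the threshold, and the quantile of a recent window is used as a simple stand-in for the paper's numerical optimization:

    # Minimal sketch (assumed details, not the paper's implementation): adapt the
    # neurofeedback threshold when the observed success rate drifts from the target.
    import numpy as np

    def update_threshold(recent_values, threshold, target_rate, allow_err):
        """recent_values: feedback-signal samples from the recent window.
        Success = sample exceeds threshold. If the observed success rate deviates
        from target_rate by more than allow_err, pick the threshold whose success
        rate on this window equals target_rate (the (1 - target_rate) quantile)."""
        values = np.asarray(recent_values, dtype=float)
        current_rate = np.mean(values > threshold)
        if abs(current_rate - target_rate) > allow_err:
            threshold = np.quantile(values, 1.0 - target_rate)
        return threshold, current_rate

    # Example: target 60 % positive feedback with a 5 % tolerance.
    window = np.random.normal(loc=10.0, scale=2.0, size=500)   # simulated EEG band power
    thr, rate = update_threshold(window, threshold=12.0, target_rate=0.6, allow_err=0.05)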

Active pulse classification algorithm using convolutional neural networks (콘볼루션 신경회로망을 이용한 능동펄스 식별 알고리즘)

  • Kim, Geunhwan; Choi, Seung-Ryul; Yoon, Kyung-Sik; Lee, Kyun-Kyung; Lee, Donghwa
    • The Journal of the Acoustical Society of Korea / v.38 no.1 / pp.106-113 / 2019
  • In this paper, we propose an algorithm to classify received active pulses when the active sonar system operates in a non-cooperative mode. The proposed algorithm uses a CNN (Convolutional Neural Network), which shows good performance in various fields. The input to the CNN is time-frequency data obtained by applying the STFT (Short Time Fourier Transform) to the received signal. The CNN used in this paper consists of two convolution and pooling layers. Depending on the design of the output layer, we designed a database-based neural network and a pulse-feature-based neural network. To verify the performance of the algorithm, data from 3110 CW (Continuous Wave) and LFM (Linear Frequency Modulated) pulses received in the actual ocean were processed to construct training and test data. Simulation results show that the database-based neural network achieved 99.9 % accuracy and the feature-based neural network achieved about 96 % accuracy when a 2-pixel error is allowed.
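
A minimal sketch of the described pipeline (STFT front end followed by two convolution/pooling stages); the layer widths, kernel sizes, and the synthetic test pulse are assumptions, not the paper's configuration:

    # Minimal sketch (architecture details assumed): STFT input and a small CNN
    # with two convolution/pooling stages for active pulse classification.
    import numpy as np
    import torch
    import torch.nn as nn
    from scipy.signal import stft

    class PulseCNN(nn.Module):
        def __init__(self, n_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes))

        def forward(self, x):                     # x: (batch, 1, freq, time)
            return self.head(self.features(x))

    # Example: spectrogram of a synthetic received pulse as the network input.
    fs = 20_000
    signal = np.sin(2 * np.pi * 3_000 * np.arange(fs) / fs).astype(np.float32)  # stand-in CW pulse
    _, _, zxx = stft(signal, fs=fs, nperseg=256)
    spec = torch.tensor(np.abs(zxx), dtype=torch.float32)[None, None]           # (1, 1, F, T)
    logits = PulseCNN()(spec)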

Denoise of Astronomical Images with Deep Learning

  • Park, Youngjun; Choi, Yun-Young; Moon, Yong-Jae; Park, Eunsu; Lim, Beomdu; Kim, Taeyoung
    • The Bulletin of The Korean Astronomical Society / v.44 no.1 / pp.54.2-54.2 / 2019
  • Removing the noise that inevitably occurs when taking image data has been a long-standing concern. Image stacking, averaging or summing the pixel values of multiple exposures of the same area, is regarded as the only way to raise the signal-to-noise ratio; its performance and reliability are unquestioned, but its weaknesses are also evident: objects with fast proper motion can vanish, and above all it takes a long time. If a single-shot image can be processed to achieve similar performance, those weaknesses can be overcome. Recent developments in deep learning have enabled things that were not possible with earlier algorithm-based programming, one of which is generating data with more information from data with less information. As part of that, we reproduced stacked images from single-shot images using a conditional generative adversarial network (cGAN). We used r-band camcol2 south data from SDSS Stripe 82. From all fields, we used image data stacked from only 22 individual exposures and, paired with each stacked image, the single-pass data included in that stack. All fields were cut into 128 x 128 pixel patches, giving 17930 images in total; 14234 pairs were used for training the cGAN and 3696 pairs for verifying the result. As a result, the RMS error of pixel values between the data generated under the best condition and the target data was 7.67 x 10^-4, compared to 1.24 x 10^-3 for the original input data. We also applied the model to a few test galaxy images, and the generated images were qualitatively similar to the stacked images compared with other denoising methods. In addition, in photometry, the number count of stacked-cGAN matched sources is larger than that of single-pass-stacked matches, especially for fainter objects, and magnitude completeness also improved for fainter objects. With this work, it becomes possible to reliably observe objects about 1 magnitude fainter.
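
A minimal sketch of a pix2pix-style conditional GAN objective for mapping a single-pass patch to a stacked-like patch, one common way to realize the cGAN described above; the tiny networks, random stand-in patches, and the L1 weight of 100 are illustrative assumptions:

    # Minimal sketch (not the authors' network): one cGAN objective computation in
    # the pix2pix style, conditioning the discriminator on the input patch.
    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.Conv2d(32, 1, 3, padding=1))
    D = nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(), nn.Conv2d(32, 1, 3, padding=1))

    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
    single = torch.randn(8, 1, 128, 128)        # single-pass patches (stand-in data)
    stacked = torch.randn(8, 1, 128, 128)       # corresponding stacked patches (stand-in data)

    fake = G(single)
    # Discriminator sees (condition, image) pairs, real vs. generated.
    d_real = D(torch.cat([single, stacked], dim=1))
    d_fake = D(torch.cat([single, fake.detach()], dim=1))
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    # Generator is pushed toward fooling D and toward the stacked target (L1 term).
    g_adv = bce(D(torch.cat([single, fake], dim=1)), torch.ones_like(d_fake))
    g_loss = g_adv + 100.0 * l1(fake, stacked)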

Development of deep learning-based holographic ultrasound generation algorithm (딥러닝 기반 초음파 홀로그램 생성 알고리즘 개발)

  • Lee, Moon Hwan; Hwang, Jae Youn
    • The Journal of the Acoustical Society of Korea / v.40 no.2 / pp.169-175 / 2021
  • Recently, ultrasound holograms and their applications have gained attention in the ultrasound research field. However, the technique for determining the transmit signal phases that generate a hologram has not advanced significantly beyond previous algorithms, which are time-consuming iterative methods. We therefore applied the deep learning approach previously adopted for optical hologram generation to ultrasound hologram generation, and examined the resulting Deep-learning-based Holographic Ultrasound Generation algorithm (Deep-HUG). We implement a U-Net-based algorithm and examine its generalizability by training on a dataset of randomly distributed disks and testing on the alphabet (A-Z). Furthermore, we compare Deep-HUG with the previous algorithm in terms of computation time, accuracy, and uniformity. The accuracy and uniformity of Deep-HUG are somewhat lower than those of the previous algorithm, whereas its computation is 190 times faster, demonstrating that Deep-HUG has potential as a useful technique for rapidly generating ultrasound holograms for various applications.
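
A minimal sketch of a small U-Net-style network that maps a target pattern to a per-element phase map, in the spirit of Deep-HUG; the depth, channel counts, and output range constraint are assumptions, not the published architecture:

    # Minimal sketch (assumed shapes, not Deep-HUG itself): a tiny U-Net-style
    # network mapping a target amplitude pattern to a transmit-phase map.
    import torch
    import torch.nn as nn

    class TinyUNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
            self.down = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
            self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
            self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 1))

        def forward(self, x):
            e = self.enc(x)                       # skip-connection features
            d = self.up(self.down(e))
            phase = self.dec(torch.cat([d, e], dim=1))
            return torch.pi * torch.tanh(phase)   # constrain output to [-pi, pi] phases

    target = torch.zeros(1, 1, 64, 64)            # e.g. a letter-shaped target pattern
    target[0, 0, 20:44, 30:34] = 1.0
    phases = TinyUNet()(target)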

BLE-based Indoor Positioning System design using Neural Network (신경망을 이용한 BLE 기반 실내 측위 시스템 설계)

  • Shin, Kwang-Seong; Lee, Heekwon; Youm, Sungkwan
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.1 / pp.75-80 / 2021
  • Positioning technology performs important functions in augmented reality, smart factories, and autonomous driving. Among positioning techniques, positioning with beacons has been considered challenging because of the deviation of the RSSI value. In this study, the position of a moving object is predicted by training a neural network that takes the receiver's RSSI value as input and the distance as the target value. To do this, measured distance-versus-RSSI data were collected, and a neural network was introduced to create synthetic data from the collected real data. Based on this network, the RSSI value versus distance was predicted. Using the values obtained from the synthetic-data network, the coordinates of the object were estimated by training a neural network that tracks the location of a terminal in a virtual environment.
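
A minimal sketch of an RSSI-to-distance regressor of the kind described above; here the training pairs come from a log-distance path-loss model with noise as a stand-in, whereas the paper generates synthetic data with a separate neural network:

    # Minimal sketch (assumed setup): a small regressor from RSSI to distance,
    # trained on pairs from a log-distance path-loss model (stand-in data).
    import torch
    import torch.nn as nn

    dist = torch.rand(2000, 1) * 9.9 + 0.1                        # distances 0.1-10 m
    rssi = -59 - 20 * torch.log10(dist) + torch.randn_like(dist)  # path-loss model + noise

    model = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(rssi), dist)
        loss.backward()
        opt.step()

    print(model(torch.tensor([[-75.0]])))   # predicted distance for an RSSI of -75 dBm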

Design of a Smart Music Learning Device that can interact with each other using a transparent touch panel (투명 터치패널을 이용한 상호작용이 가능한 스마트 음악학습기의 설계)

  • Kim, Hyeong-Gyun; Kim, Yong-Ho
    • Journal of Digital Convergence / v.18 no.12 / pp.127-132 / 2020
  • The Smart Music Learning Device (SMLD) presented in this paper builds its display by attaching a touch panel to each side of a transparent panel. The main processing unit is a Raspberry Pi, and the operating system is Android. Music education content is displayed on the transparent panel, while touch panels 1 and 2 accept input from the learner and the instructor. Signals input from touch panels 1 and 2 control the progress of the music education content after processing in the main unit; this control process is designed and implemented as a two-sided, panel-based interactive training algorithm. The device aims at music education based on mutual understanding: instruction is conducted face to face using the music education content presented through the transparent panel. This allows the instructor to see the learner's responses in real time, improving both comprehension of the material and the quality of the instruction, as well as the learner's concentration.
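
A minimal, hypothetical sketch of routing input from the two touch panels to a shared content controller, roughly the interaction flow the abstract describes; none of the names or behaviors below come from the device firmware:

    # Minimal sketch (hypothetical event routing, not the device firmware): events
    # from the two touch panels are tagged by role and drive shared content state.
    from dataclasses import dataclass

    @dataclass
    class TouchEvent:
        panel: int        # 1 = learner side, 2 = instructor side
        x: int
        y: int

    class ContentController:
        def __init__(self):
            self.page = 0

        def handle(self, ev: TouchEvent) -> None:
            if ev.panel == 2:            # instructor touches advance the lesson
                self.page += 1
            else:                        # learner touches are logged as responses
                print(f"learner touched ({ev.x}, {ev.y}) on page {self.page}")

    ctrl = ContentController()
    for ev in [TouchEvent(1, 120, 80), TouchEvent(2, 0, 0)]:
        ctrl.handle(ev)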

Synthesis of T2-weighted images from proton density images using a generative adversarial network in a temporomandibular joint magnetic resonance imaging protocol

  • Lee, Chena; Ha, Eun-Gyu; Choi, Yoon Joo; Jeon, Kug Jin; Han, Sang-Sun
    • Imaging Science in Dentistry / v.52 no.4 / pp.393-398 / 2022
  • Purpose: This study proposed a generative adversarial network (GAN) model for synthesizing T2-weighted images (WI) from proton density (PD)-WI in a temporomandibular joint (TMJ) magnetic resonance imaging (MRI) protocol. Materials and Methods: MRI scans of the TMJ from January to November 2019 were reviewed and 308 imaging sets were collected. For training, 277 pairs of PD- and T2-WI sagittal TMJ images were used. Transfer learning of the pix2pix GAN model was utilized to generate T2-WI from PD-WI. Model performance was evaluated with the structural similarity index map (SSIM) and peak signal-to-noise ratio (PSNR) indices for 31 predicted T2-WI (pT2). The disc position was clinically diagnosed as anterior disc displacement with or without reduction, and joint effusion as present or absent. The true-T2-WI-based diagnosis was regarded as the gold standard, to which pT2-based diagnoses were compared using Cohen's κ coefficient. Results: The mean SSIM and PSNR values were 0.4781 (±0.0522) and 21.30 (±1.51) dB, respectively. The pT2 protocol showed almost perfect agreement (κ=0.81) with the gold standard for disc position. The number of discordant cases was higher for normal disc position (17%) than for anterior displacement with reduction (2%) or without reduction (10%). The effusion diagnosis also showed almost perfect agreement (κ=0.88), with higher concordance for the presence (85%) than for the absence (77%) of effusion. Conclusion: The application of pT2 images in a TMJ MRI protocol was useful for diagnosis, although the image quality of pT2 was not fully satisfactory. Further research is expected to enhance pT2 quality.
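
A minimal sketch of the evaluation step only (not the GAN): computing SSIM and PSNR between a true and a predicted T2 slice, and Cohen's kappa between the corresponding readings, using stand-in arrays and labels:

    # Minimal sketch (evaluation only, stand-in data): the SSIM/PSNR image metrics
    # and Cohen's kappa agreement score reported in the abstract.
    import numpy as np
    from skimage.metrics import structural_similarity, peak_signal_noise_ratio
    from sklearn.metrics import cohen_kappa_score

    true_t2 = np.random.rand(256, 256).astype(np.float32)     # acquired T2-weighted slice
    pred_t2 = np.clip(true_t2 + 0.05 * np.random.randn(256, 256), 0, 1).astype(np.float32)

    ssim = structural_similarity(true_t2, pred_t2, data_range=1.0)
    psnr = peak_signal_noise_ratio(true_t2, pred_t2, data_range=1.0)

    # Diagnostic agreement between true-T2-based and predicted-T2-based readings.
    true_dx = ["normal", "ADDwR", "ADDwoR", "normal"]
    pred_dx = ["normal", "ADDwR", "normal", "normal"]
    kappa = cohen_kappa_score(true_dx, pred_dx)
    print(ssim, psnr, kappa)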

A ResNet based multiscale feature extraction for classifying multi-variate medical time series

  • Zhu, Junke; Sun, Le; Wang, Yilin; Subramani, Sudha; Peng, Dandan; Nicolas, Shangwe Charmant
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.5 / pp.1431-1445 / 2022
  • We construct a deep neural network model named ECGResNet, which can diagnose eight common cardiovascular diseases from 12-lead ECG data with high accuracy. We chose the 16 blocks of ResNet50 as the main body of the model and added the Squeeze-and-Excitation (SE) module to learn the information between channels adaptively. As our feature extraction method, we modified the first convolutional layer of ResNet50, which has a kernel size of 7, into a superposition of convolutional kernels of sizes 8 and 16. This allows the model to focus on the overall trend of the ECG signal while also noticing subtle changes. The model further improves the accuracy of cardiovascular and cerebrovascular disease classification by using a fully connected layer that integrates factors such as gender and age. ECGResNet adds Dropout layers to both the residual block and the SE module of ResNet50, further avoiding overfitting. The model was trained with five-fold cross-validation and the Flooding training method, reaching 95% accuracy and an F1-score of 0.841 on the test set. In summary, we design a new deep neural network, introduce a multi-scale feature extraction method, and apply the SE module to extract features from ECG data.
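
A minimal sketch of two ingredients the abstract names, a 1-D squeeze-and-excitation block and a first layer that superposes kernel-8 and kernel-16 convolutions; the channel counts and input sizes are assumptions, not ECGResNet's actual configuration:

    # Minimal sketch (assumed layer sizes, not ECGResNet itself): a 1-D SE block and
    # a multi-scale stem with parallel kernel-8 and kernel-16 convolutions.
    import torch
    import torch.nn as nn

    class SEBlock1d(nn.Module):
        def __init__(self, channels: int, reduction: int = 4):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(),
                nn.Linear(channels // reduction, channels), nn.Sigmoid(),
            )

        def forward(self, x):                       # x: (batch, channels, time)
            w = self.fc(x.mean(dim=2))              # squeeze over time, excite per channel
            return x * w.unsqueeze(-1)

    class MultiScaleStem(nn.Module):
        def __init__(self, in_ch: int = 12, out_ch: int = 32):
            super().__init__()
            self.conv8 = nn.Conv1d(in_ch, out_ch, kernel_size=8, padding='same')
            self.conv16 = nn.Conv1d(in_ch, out_ch, kernel_size=16, padding='same')
            self.se = SEBlock1d(2 * out_ch)

        def forward(self, x):                       # x: (batch, 12 leads, samples)
            return self.se(torch.cat([self.conv8(x), self.conv16(x)], dim=1))

    ecg = torch.randn(4, 12, 5000)                  # 10 s of 12-lead ECG at 500 Hz (stand-in)
    features = MultiScaleStem()(ecg)                # (4, 64, 5000)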