• Title/Summary/Keyword: neural net


A Multi-speaker Speech Synthesis System Using X-vector (x-vector를 이용한 다화자 음성합성 시스템)

  • Jo, Min Su;Kwon, Chul Hong
    • The Journal of the Convergence on Culture Technology, v.7 no.4, pp.675-681, 2021
  • With the recent growth of the AI speaker market, the demand for speech synthesis technology that enables natural conversation with users is increasing, so there is a need for a multi-speaker speech synthesis system that can generate voices of various tones. Synthesizing natural speech requires training with a large-capacity, high-quality speech DB. However, collecting a high-quality, large-capacity speech database uttered by many speakers is very difficult in terms of recording time and cost. It is therefore necessary to train the speech synthesis system using a speech DB that covers a very large number of speakers with only a small amount of training data per speaker, and a technique for naturally expressing the tone and prosody of multiple speakers is required. In this paper, we propose a technique for constructing a speaker encoder by applying the deep learning-based x-vector method used in speaker recognition, and for synthesizing a new speaker's tone from a small amount of data through this speaker encoder. In the multi-speaker speech synthesis system, the module that synthesizes a mel-spectrogram from input text is built on Tacotron2, and the vocoder that generates the synthesized speech is a WaveNet with a mixture of logistic distributions. The x-vector extracted from the trained speaker embedding network is added to Tacotron2 as an input to express the desired speaker's tone.
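The conditioning step described above — adding the x-vector to Tacotron2 as an input — is commonly implemented by broadcasting the fixed-length speaker embedding across the encoder's time axis. A minimal sketch; the shapes and the concatenation strategy are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def condition_on_speaker(encoder_outputs, x_vector):
    """Tile a fixed-length speaker embedding across time and concatenate
    it to every encoder frame (hypothetical shapes)."""
    T, D = encoder_outputs.shape            # T frames, D-dim encoder features
    tiled = np.tile(x_vector, (T, 1))       # (T, E): same embedding each frame
    return np.concatenate([encoder_outputs, tiled], axis=1)  # (T, D + E)

enc = np.zeros((100, 256))   # dummy Tacotron2 encoder output
xvec = np.ones(512)          # dummy 512-dim x-vector
out = condition_on_speaker(enc, xvec)
print(out.shape)             # (100, 768)
```

The decoder's attention then sees speaker identity at every time step, which is what lets a single trained model switch tones per utterance.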

Estimation of Displacements Using Artificial Intelligence Considering Spatial Correlation of Structural Shape (구조형상 공간상관을 고려한 인공지능 기반 변위 추정)

  • Seung-Hun Shin;Ji-Young Kim;Jong-Yeol Woo;Dae-Gun Kim;Tae-Seok Jin
    • Journal of the Computational Structural Engineering Institute of Korea, v.36 no.1, pp.1-7, 2023
  • An artificial intelligence (AI) method based on image deep learning is proposed to predict the entire displacement shape of a structure from features of partial displacements. The performance of the method was investigated through a structural test of a steel frame. An image-to-image regression (I2IR) training method was developed based on the U-Net layer for image recognition. In the I2IR method, the U-Net is modified to generate images of entire displacement shapes when images of partial displacement shapes of structures are input to the AI network. Furthermore, a training scheme combining displacements with location features was developed so that nodal displacement values with their corresponding nodal coordinates could be used in AI training. The proposed training methods can consider correlations between nodal displacements in 3D space, and the accuracy of displacement predictions is improved compared with artificial neural network training methods. Displacements of the steel frame were predicted during the structural tests using the proposed methods and compared with 3D scanning data of the displacement shapes. The results show that the proposed AI predictions properly follow the displacements measured by 3D scanning.
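The "displacement combined with location feature" idea resembles appending coordinate channels to the input image so the network can tie displacement values to nodal positions. A minimal sketch under that assumption; the paper's exact encoding is not specified in the abstract:

```python
import numpy as np

def add_coordinate_channels(disp_image):
    """Stack normalized x/y coordinate channels onto a partial-displacement
    image, giving the network explicit location information per pixel
    (a sketch; the paper's actual encoding is assumed)."""
    H, W = disp_image.shape
    ys, xs = np.meshgrid(np.linspace(0.0, 1.0, H),
                         np.linspace(0.0, 1.0, W), indexing="ij")
    return np.stack([disp_image, xs, ys], axis=0)  # (3, H, W) network input

stacked = add_coordinate_channels(np.zeros((8, 8)))
print(stacked.shape)  # (3, 8, 8)
```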

Comparative Analysis of Self-supervised Deephashing Models for Efficient Image Retrieval System (효율적인 이미지 검색 시스템을 위한 자기 감독 딥해싱 모델의 비교 분석)

  • Kim Soo In;Jeon Young Jin;Lee Sang Bum;Kim Won Gyum
    • KIPS Transactions on Software and Data Engineering, v.12 no.12, pp.519-524, 2023
  • In hashing-based image retrieval, the hash code of a manipulated image differs from that of the original, making it difficult to retrieve the same image. This paper proposes and evaluates a self-supervised deep hashing model that generates perceptual hash codes from feature information such as the texture, shape, and color of images. The comparison models are autoencoder-based variational inference models whose encoders are designed with fully connected layers, convolutional neural networks, and transformer modules. The proposed model is a variational inference model that includes a SimAM module for extracting geometric patterns and positional relationships within images. The SimAM module can learn latent vectors that highlight objects or local regions through an energy function using the activation values of neurons and their surrounding neurons. The proposed method is a representation learning model that generates low-dimensional latent vectors from high-dimensional input images, and the latent vectors are binarized into distinguishable hash codes. Experimental results on public datasets such as CIFAR-10, ImageNet, and NUS-WIDE show that the proposed model is superior to the comparison models and performs on par with supervised learning-based deep hashing models. The proposed model can be used in application systems that require low-dimensional representations of images, such as image search or copyright image determination.
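The SimAM energy function mentioned above is parameter-free: each activation's weight is derived from its squared deviation relative to the other activations in its channel. A minimal NumPy sketch of the published SimAM formulation (framework-independent; a real model would apply this inside the encoder):

```python
import numpy as np

def simam(x, lam=1e-4):
    """Parameter-free SimAM attention on a (C, H, W) feature map.
    Inverse energy comes from each activation's squared deviation from
    the channel mean; a sigmoid turns it into attention weights."""
    _, H, W = x.shape
    n = H * W - 1
    mu = x.mean(axis=(1, 2), keepdims=True)
    d = (x - mu) ** 2                          # squared deviation per unit
    v = d.sum(axis=(1, 2), keepdims=True) / n  # channel variance estimate
    e_inv = d / (4.0 * (v + lam)) + 0.5        # inverse energy
    return x / (1.0 + np.exp(-e_inv))          # x * sigmoid(e_inv)

y = simam(np.ones((2, 4, 4)))
print(y.shape)  # (2, 4, 4)
```

Because it adds no learnable parameters, the module highlights salient units without growing the encoder.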

Incremental Image Noise Reduction in Coronary CT Angiography Using a Deep Learning-Based Technique with Iterative Reconstruction

  • Jung Hee Hong;Eun-Ah Park;Whal Lee;Chulkyun Ahn;Jong-Hyo Kim
    • Korean Journal of Radiology, v.21 no.10, pp.1165-1177, 2020
  • Objective: To assess the feasibility of applying a deep learning-based denoising technique to coronary CT angiography (CCTA) along with iterative reconstruction for additional noise reduction. Materials and Methods: We retrospectively enrolled 82 consecutive patients (male:female = 60:22; mean age, 67.0 ± 10.8 years) who had undergone both CCTA and invasive coronary artery angiography from March 2017 to June 2018. All included patients underwent CCTA with iterative reconstruction (ADMIRE level 3, Siemens Healthineers). We developed a deep learning-based denoising technique (ClariCT.AI, ClariPI), based on a modified U-net type convolutional neural network model designed to predict the low-dose noise component in the originals. Denoised images were obtained by subtracting the predicted noise from the originals. Image noise, CT attenuation, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were objectively calculated. The edge rise distance (ERD) was measured as an indicator of image sharpness. Two blinded readers subjectively graded the image quality using a 5-point scale. Diagnostic performance of the CCTA was evaluated based on the presence or absence of significant stenosis (≥ 50% lumen reduction). Results: Objective image quality (original vs. denoised: image noise, 67.22 ± 25.74 vs. 52.64 ± 27.40; SNR [left main], 21.91 ± 6.38 vs. 30.35 ± 10.46; CNR [left main], 23.24 ± 6.52 vs. 31.93 ± 10.72; all p < 0.001) and subjective image quality (2.45 ± 0.62 vs. 3.65 ± 0.60, p < 0.001) improved significantly in the denoised images. The average ERDs of the denoised images were significantly smaller than those of the originals (0.98 ± 0.08 vs. 0.09 ± 0.08, p < 0.001). With regard to diagnostic accuracy, no significant differences were observed among paired comparisons. Conclusion: Application of the deep learning technique along with iterative reconstruction can enhance noise reduction performance, with a significant improvement in objective and subjective image quality of CCTA images.
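The subtraction scheme described above (denoised = original − predicted noise) is a standard residual-denoising pattern. A minimal sketch with a toy stand-in for the network; the smoothing "model" below is an illustration only, not ClariCT.AI:

```python
import numpy as np

def denoise(image, noise_model):
    """Residual denoising: the model predicts the noise component,
    which is subtracted from the original image."""
    return image - noise_model(image)

def toy_noise_model(img):
    """Toy stand-in for the trained CNN: estimate noise as the deviation
    of each pixel from a 5-point neighborhood average."""
    blurred = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1) + img) / 5.0
    return img - blurred

noisy = np.random.default_rng(0).normal(100.0, 10.0, (64, 64))
clean = denoise(noisy, toy_noise_model)
print(clean.std() < noisy.std())  # True: noise level is reduced
```

Predicting the residual rather than the clean image is generally easier for the network to learn, since the noise distribution is much simpler than the anatomy.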

A Comparative Study on the Effective Deep Learning for Fingerprint Recognition with Scar and Wrinkle (상처와 주름이 있는 지문 판별에 효율적인 심층 학습 비교연구)

  • Kim, JunSeob;Rim, BeanBonyka;Sung, Nak-Jun;Hong, Min
    • Journal of Internet Computing and Services, v.21 no.4, pp.17-23, 2020
  • Biometric information, which measures items related to human characteristics, has attracted great attention as a security technology with high reliability, since there is no fear of theft or loss. Among such biometric information, fingerprints are mainly used in fields such as identity verification and identification. When a fingerprint image has a problem that makes authentication difficult, such as a wound, wrinkle, or moisture, a fingerprint expert can identify the problem directly in a preprocessing step and apply an image processing algorithm appropriate to that problem to resolve it. In this case, by implementing artificial intelligence software that distinguishes fingerprint images with cuts and wrinkles, it becomes easy to check whether cuts or wrinkles are present and, by selecting an appropriate algorithm, to improve the fingerprint image. In this study, we built a database of 17,080 fingerprints in total by acquiring all fingerprints of 1,010 students from the Royal University of Cambodia, 600 images from the Sokoto open dataset, and prints from 98 Korean students. Criteria were established to determine whether prints in the built database contain injuries or wrinkles, and the data were validated by experts. The training and test datasets consisted of the Cambodian and Sokoto data at a ratio of 8:2, and the data of the 98 Korean students were set aside as a validation set. Using the constructed dataset, five CNN-based architectures, Classic CNN, AlexNet, VGG-16, ResNet50, and YOLOv3, were implemented, and a study was conducted to find the model that performed best at this discrimination task. Among the five architectures, ResNet50 showed the best performance with 81.51%.

Semantic Segmentation of Drone Imagery Using Deep Learning for Seagrass Habitat Monitoring (잘피 서식지 모니터링을 위한 딥러닝 기반의 드론 영상 의미론적 분할)

  • Jeon, Eui-Ik;Kim, Seong-Hak;Kim, Byoung-Sub;Park, Kyung-Hyun;Choi, Ock-In
    • Korean Journal of Remote Sensing, v.36 no.2_1, pp.199-215, 2020
  • Seagrass, a marine vascular plant, plays an important role in the marine ecosystem, so periodic monitoring of seagrass habitats is performed. Recently, the use of drones, which can easily acquire very high-resolution imagery, has been increasing for efficient monitoring of seagrass habitats, and deep learning based on convolutional neural networks has shown excellent performance in semantic segmentation, so studies applying deep learning models have been actively conducted in remote sensing. However, segmentation accuracy differs depending on the hyperparameters, the deep learning model, and the imagery, and the normalization of the imagery and the tile and batch sizes are not standardized. In this study, seagrass habitats were therefore segmented from drone-borne imagery using a deep learning model, and the results were compared and analyzed with a focus on normalization and tile size. For comparison of results according to normalization, tile, and batch size, grayscale imagery and grayscale imagery converted with the Z-score and Min-Max normalization methods were used; the tile size was increased at a specific interval, while the batch size was set to use as much memory as possible. As a result, the IoU of the Z-score normalized imagery was 0.26 ~ 0.4 higher than that of the other imagery, and a difference of up to 0.09 was found depending on the tile and batch size. Since the results differed according to normalization, tile, and batch size, this experiment found that these factors require a suitable decision process.
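The two normalizations compared above are standard preprocessing transforms; a minimal sketch of both:

```python
import numpy as np

def z_score(img):
    """Z-score normalization: zero mean, unit standard deviation."""
    return (img - img.mean()) / img.std()

def min_max(img):
    """Min-Max normalization: rescale intensities to [0, 1]."""
    return (img - img.min()) / (img.max() - img.min())

tile = np.array([[0.0, 50.0], [100.0, 150.0]])  # toy grayscale tile
print(min_max(tile))   # values rescaled into [0, 1]
print(z_score(tile))   # values centered at 0 with unit spread
```

Z-score keeps relative contrast independent of each tile's intensity range, which may explain its advantage in the reported IoU comparison.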

Evaluation of Transfer Learning in Gastroscopy Image Classification using Convolutional Neural Network (합성곱 신경망을 활용한 위내시경 이미지 분류에서 전이학습의 효용성 평가)

  • Park, Sung Jin;Kim, Young Jae;Park, Dong Kyun;Chung, Jun Won;Kim, Kwang Gi
    • Journal of Biomedical Engineering Research, v.39 no.5, pp.213-219, 2018
  • Stomach cancer is the most commonly diagnosed cancer in Korea. When gastric cancer is detected early, the 5-year survival rate is as high as 90%, and gastroscopy is a very useful method for early diagnosis. However, the false negative rate for gastric cancer in gastroscopy is 4.6~25.8% due to the subjective judgment of the physician. Recently, image classification performance in the image recognition field has advanced through convolutional neural networks, which perform well when diverse and sufficient amounts of data are available. Medical data, however, are not easy to access, and it is difficult to gather enough high-quality data with expert annotations. This paper therefore evaluates the efficacy of transfer learning in gastroscopy image classification and diagnosis. We obtained 787 gastric endoscopy images at Gil Medical Center, Gachon University: 200 normal and 587 abnormal images, which were resized and normalized. For the ResNet50 structure, classification accuracy before and after applying transfer learning improved from 0.9 to 0.947, and the AUC from 0.94 to 0.98. For the InceptionV3 structure, accuracy improved from 0.862 to 0.924, and the AUC from 0.89 to 0.97. For the VGG16 structure, accuracy improved from 0.87 to 0.938, and the AUC from 0.89 to 0.98. The difference in the performance of the CNN models before and after transfer learning was statistically significant by t-test (p < 0.05). As a result, transfer learning is judged to be an effective method for medical data, where good-quality data are difficult to collect.
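Transfer learning as evaluated here typically means initializing the CNN with weights pretrained on a large generic dataset, then training only a new classification head (or fine-tuning) on the small medical set. A minimal NumPy sketch of the frozen-backbone variant; all weights and data below are synthetic illustrations, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained feature extractor: these weights are frozen,
# i.e., never updated on the small target dataset.
W_frozen = rng.normal(size=(2048, 64)) / np.sqrt(2048)

def backbone(x):                          # x: (N, 2048) flattened inputs
    return np.maximum(x @ W_frozen, 0.0)  # frozen ReLU features

def train_head(x, y, lr=0.5, steps=300):
    """Train only the new binary classification head by gradient descent
    on the logistic loss; the backbone stays fixed."""
    feats, w = backbone(x), np.zeros((64, 1))
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-feats @ w))   # sigmoid predictions
        w -= lr * feats.T @ (p - y) / len(y)   # logistic-loss gradient
    return w

x = rng.normal(size=(200, 2048))                            # toy "images"
y = (backbone(x) @ rng.normal(size=(64, 1)) > 0).astype(float)  # toy labels
w = train_head(x, y)
acc = np.mean((backbone(x) @ w > 0) == (y > 0.5))
```

Because only the small head is learned, far fewer labeled examples are needed than when training the whole network from scratch, which is the effect the paper measures.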

An adaptive deviation-resistant neutron spectrum unfolding method based on transfer learning

  • Cao, Chenglong;Gan, Quan;Song, Jing;Yang, Qi;Hu, Liqin;Wang, Fang;Zhou, Tao
    • Nuclear Engineering and Technology, v.52 no.11, pp.2452-2459, 2020
  • The neutron spectrum is essential to the safe operation of reactors, but traditional online neutron spectrum measurement methods still have room to improve in accuracy for wide-energy-range applications. When the artificial neural network (ANN) algorithm is applied to spectrum unfolding, its accuracy is difficult to improve due to the lack of sufficient effective training data. In this paper, an adaptive deviation-resistant neutron spectrum unfolding method based on transfer learning was developed. An ANN model was trained with thousands of neutron spectra generated by Monte Carlo transport calculation to construct a coarse-grained unfolded spectrum. To improve the accuracy of the unfolded spectrum, results of the previous ANN model, combined with specific eigenvalues of the current system, were added to the dataset for training a deeper ANN model, through which a fine-grained unfolded spectrum could be achieved. The method realizes accurate spectrum unfolding while maintaining universality; combined with detectors covering a wide energy range, it can improve the accuracy of spectrum measurement over that range. The method was verified with the fast neutron reactor BN-600. The mean square error (MSE), average relative deviation (ARD), and spectrum quality (Qs) were selected to evaluate the final results, and all demonstrated that the developed method is much more precise than traditional spectrum unfolding methods.
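Two of the evaluation metrics named above have straightforward forms; a sketch, where the ARD definition is one common convention and is not taken from the paper:

```python
import numpy as np

def mse(unfolded, reference):
    """Mean square error between unfolded and reference spectra."""
    unfolded, reference = np.asarray(unfolded), np.asarray(reference)
    return np.mean((unfolded - reference) ** 2)

def ard(unfolded, reference):
    """Average relative deviation (one common definition; the paper's
    exact formula is not given in the abstract)."""
    unfolded, reference = np.asarray(unfolded), np.asarray(reference)
    return np.mean(np.abs(unfolded - reference) / np.abs(reference))
```

Both are computed bin-by-bin over the energy groups of the unfolded spectrum against a reference spectrum.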

Dual CNN Structured Sound Event Detection Algorithm Based on Real Life Acoustic Dataset (실생활 음향 데이터 기반 이중 CNN 구조를 특징으로 하는 음향 이벤트 인식 알고리즘)

  • Suh, Sangwon;Lim, Wootaek;Jeong, Youngho;Lee, Taejin;Kim, Hui Yong
    • Journal of Broadcast Engineering, v.23 no.6, pp.855-865, 2018
  • Sound event detection is a research area that models human auditory cognitive characteristics by recognizing events in an environment with multiple acoustic events and determining the onset and offset times of each event. DCASE, a research community for acoustic scene classification and sound event detection, runs challenges to encourage the participation of researchers and to stimulate sound event detection research. However, the dataset provided by the DCASE challenge is relatively small compared to ImageNet, the representative dataset for visual object recognition, and there are few open acoustic datasets. In this study, sound events that can occur indoors and outdoors were collected on a larger scale and annotated to construct a dataset. Furthermore, to improve performance on the sound event detection task, we developed a dual CNN structured sound event detection system by adding a supplementary neural network to a convolutional neural network to determine the presence of sound events. Finally, we conducted a comparative experiment against the baseline systems of both DCASE 2016 and 2017.
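One plausible way to combine the two networks' outputs — the exact rule is an assumption, as the abstract does not spell it out — is to gate the main CNN's per-class event scores with the supplementary network's event-presence score:

```python
import numpy as np

def dual_cnn_decision(event_probs, presence_prob, threshold=0.5):
    """Gate per-frame, per-class event scores (T, K) from the main CNN
    with the supplementary network's per-frame presence score (T, 1):
    frames judged silent suppress every event class."""
    gated = event_probs * presence_prob
    return gated >= threshold

events = np.array([[0.9, 0.2], [0.8, 0.7]])  # 2 frames, 2 event classes
presence = np.array([[1.0], [0.1]])          # frame 2 judged near-silent
print(dual_cnn_decision(events, presence))
# [[ True False]
#  [False False]]
```

The gating lets a cheap presence detector veto false alarms that the class-wise detector would otherwise raise in silent frames.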

Deep Learning Similarity-based 1:1 Matching Method for Real Product Image and Drawing Image

  • Han, Gi-Tae
    • Journal of the Korea Society of Computer and Information, v.27 no.12, pp.59-68, 2022
  • This paper presents a method for 1:1 verification by comparing the similarity between a given real product image and a drawing image. The proposed method combines two existing CNN-based deep learning models to construct a Siamese network. After extracting the feature vector of each image through the FC (fully connected) layer of each network and comparing the similarity, the similarity label is set to 1 for training if the real product image and the drawing image (front view, left and right side views, top view, etc.) show the same product, and to 0 if they show different products. The test (inference) model is a deep learning model that takes the real product image and the drawing image as a query pair and determines whether the pair shows the same product. In the proposed model, if the similarity between the real product image and the drawing image is greater than or equal to a threshold (0.5), the products are determined to be the same; if it is below the threshold, they are determined to be different. The proposed model showed an accuracy of about 71.8% for queries in which the drawing matched the real product (positive:positive) and about 83.1% for queries of different products (positive:negative). In the future, we plan to improve the matching accuracy between real product images and drawing images by combining parameter optimization with the proposed model and adding processes such as data cleaning.
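The thresholded decision rule above can be sketched as follows; cosine similarity between the two FC-layer feature vectors stands in for the learned similarity head, which is an assumption for illustration:

```python
import numpy as np

def siamese_verify(feat_photo, feat_drawing, threshold=0.5):
    """Decide same/different product from two feature vectors.
    Cosine similarity is a stand-in for the trained similarity score."""
    sim = feat_photo @ feat_drawing / (
        np.linalg.norm(feat_photo) * np.linalg.norm(feat_drawing))
    return sim >= threshold, float(sim)

same, score = siamese_verify(np.array([1.0, 0.0]), np.array([1.0, 0.0]))
print(same, score)  # True 1.0
```

At inference each photo/drawing pair is scored once, so verification cost is a single forward pass per branch plus one dot product.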