• Title/Summary/Keyword: 합성신경망 (convolutional neural network)

Automated Story Generation with Image Captions and Recursive Calls (이미지 캡션 및 재귀호출을 통한 스토리 생성 방법)

  • Isle Jeon;Dongha Jo;Mikyeong Moon
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.24 no.1
    • /
    • pp.42-50
    • /
    • 2023
  • Advances in technology have driven digital innovation throughout the media industry, including production and editing techniques, and the OTT and streaming era has diversified how consumers view content. The convergence of big data and deep learning networks has enabled automatic text generation in formats such as news articles, novels, and scripts, but few studies have generated stories that reflect the author's intention and remain contextually smooth. In this paper, we describe the flow of pictures in a storyboard with image caption generation techniques and automatically generate story-tailored scenarios through a language model. Using image captioning based on a CNN and an attention mechanism, we generate sentences describing the pictures on the storyboard, and feed the generated sentences into the Korean natural language processing model KoGPT-2 to automatically generate scenarios that meet the planning intention. Through this work, scenarios tailored to the author's intention and story can be created in large quantities to ease the burden of content creation, and artificial intelligence can participate in the overall process of digital content production to advance media intelligence.
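
The second stage of the pipeline described above, turning caption sentences into scenario text, can be sketched with the publicly released KoGPT-2 checkpoint. This is a minimal, hedged example: the CNN+attention captioner itself is not shown, the `skt/kogpt2-base-v2` checkpoint and the sampling parameters are assumptions rather than the paper's settings, and the sample captions are invented placeholders.

```python
# Minimal sketch: feed storyboard captions into KoGPT-2 to draft a scenario.
# Assumptions: captions already produced by the CNN+attention captioner;
# checkpoint name and generation settings are illustrative, not the paper's.
from transformers import PreTrainedTokenizerFast, GPT2LMHeadModel

tokenizer = PreTrainedTokenizerFast.from_pretrained(
    "skt/kogpt2-base-v2",
    bos_token="</s>", eos_token="</s>", unk_token="<unk>",
    pad_token="<pad>", mask_token="<mask>")
model = GPT2LMHeadModel.from_pretrained("skt/kogpt2-base-v2")

captions = [
    "한 소년이 바닷가에서 연을 날린다.",    # hypothetical caption for storyboard panel 1
    "소년이 폭풍을 피해 동굴로 들어간다.",  # hypothetical caption for storyboard panel 2
]

story = " ".join(captions)
for _ in range(2):  # recursively feed the growing story back in for the next panel
    input_ids = tokenizer.encode(story, return_tensors="pt")
    output = model.generate(input_ids, max_length=input_ids.shape[1] + 64,
                            do_sample=True, top_p=0.92, repetition_penalty=1.2)
    story = tokenizer.decode(output[0], skip_special_tokens=True)

print(story)
```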

Comparative Analysis of Self-supervised Deephashing Models for Efficient Image Retrieval System (효율적인 이미지 검색 시스템을 위한 자기 감독 딥해싱 모델의 비교 분석)

  • Kim Soo In;Jeon Young Jin;Lee Sang Bum;Kim Won Gyum
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.12
    • /
    • pp.519-524
    • /
    • 2023
  • In hashing-based image retrieval, the hash code of a manipulated image differs from that of the original, making it difficult to retrieve the same image. This paper proposes and evaluates a self-supervised deep hashing model that generates perceptual hash codes from feature information such as the texture, shape, and color of images. The comparison models are autoencoder-based variational inference models whose encoders are built from fully connected layers, a convolutional neural network, and transformer modules. The proposed model is a variational inference model that includes a SimAM module for extracting geometric patterns and positional relationships within images. The SimAM module learns latent vectors that highlight objects or local regions through an energy function based on the activation values of a neuron and its surrounding neurons. The proposed method is a representation learning model that generates low-dimensional latent vectors from high-dimensional input images, and the latent vectors are binarized into distinguishable hash codes. Experimental results on public datasets such as CIFAR-10, ImageNet, and NUS-WIDE show that the proposed model outperforms the comparison models and performs on par with supervised learning-based deep hashing models. The proposed model can be used in application systems that require low-dimensional representations of images, such as image search or copyright image determination.
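
SimAM is a parameter-free attention module defined by a closed-form energy function over each activation and its spatial neighbours. A minimal PyTorch sketch of the module follows; where exactly it sits inside the paper's variational encoder is not specified in the abstract, so only the module itself is shown.

```python
import torch
import torch.nn as nn

class SimAM(nn.Module):
    """Parameter-free attention: each activation is reweighted by an energy term
    computed from its deviation from the channel mean (Yang et al., 2021)."""
    def __init__(self, e_lambda: float = 1e-4):
        super().__init__()
        self.e_lambda = e_lambda

    def forward(self, x):                                # x: (B, C, H, W)
        n = x.shape[2] * x.shape[3] - 1
        d = (x - x.mean(dim=[2, 3], keepdim=True)).pow(2)
        v = d.sum(dim=[2, 3], keepdim=True) / n          # per-channel variance
        e_inv = d / (4 * (v + self.e_lambda)) + 0.5      # inverse of the energy
        return x * torch.sigmoid(e_inv)                  # highlight salient activations

features = SimAM()(torch.rand(2, 32, 16, 16))            # drop-in inside any CNN encoder
```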

A Study on Leakage Detection Technique Using Transfer Learning-Based Feature Fusion (전이학습 기반 특징융합을 이용한 누출판별 기법 연구)

  • YuJin Han;Tae-Jin Park;Jonghyuk Lee;Ji-Hoon Bae
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.2
    • /
    • pp.41-47
    • /
    • 2024
  • When models trained in the time domain and in the frequency domain differ in performance, we observed that even ensembling them yields compromised results because of the imbalance between the individual models. Therefore, this paper proposes a leakage detection technique that improves the accuracy of pipeline leakage detection through a step-wise learning approach that extracts features from both the time and frequency domains and integrates them. The method involves a two-step learning process. In Stage 1, independent models are trained in the time and frequency domains to effectively extract the crucial features of the data in each domain. In Stage 2, the pre-trained models are reused with their classifiers removed; the features from both domains are fused, and a new classifier is added and retrained. The proposed transfer learning-based feature fusion technique thus trains on features extracted from the time and frequency domains together, exploiting the complementary nature of the two domains so that the model can leverage diverse information. As a result, it achieved a high accuracy of 99.88%, demonstrating outstanding performance in pipeline leakage detection.
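
Stage 2 of the approach, removing the Stage-1 classifiers, fusing the two feature vectors, and training a new classifier, can be sketched as below. Whether the backbones stay frozen or are fine-tuned during retraining is not stated in the abstract, so freezing them here is an assumption, as are the backbone architectures and feature dimensions.

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Stage 2 sketch: fuse features from the two Stage-1 backbones
    (time domain and frequency domain) and train only a new classifier."""
    def __init__(self, time_backbone, freq_backbone, feat_dim=128, n_classes=2):
        super().__init__()
        self.time_backbone = time_backbone            # Stage-1 model, classifier removed
        self.freq_backbone = freq_backbone
        for p in list(self.time_backbone.parameters()) + list(self.freq_backbone.parameters()):
            p.requires_grad = False                   # assumption: keep Stage-1 weights fixed
        self.classifier = nn.Sequential(
            nn.Linear(2 * feat_dim, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, x_time, x_freq):
        f = torch.cat([self.time_backbone(x_time), self.freq_backbone(x_freq)], dim=1)
        return self.classifier(f)

# Usage with placeholder backbones mapping raw signals to 128-d features (illustrative only).
time_net = nn.Sequential(nn.Flatten(), nn.Linear(1024, 128))
freq_net = nn.Sequential(nn.Flatten(), nn.Linear(1024, 128))
logits = FusionClassifier(time_net, freq_net)(torch.rand(4, 1024), torch.rand(4, 1024))
```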

A Study on the Efficacy of Edge-Based Adversarial Example Detection Model: Across Various Adversarial Algorithms

  • Jaesung Shim;Kyuri Jo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.2
    • /
    • pp.31-41
    • /
    • 2024
  • Deep learning models show excellent performance on computer vision tasks such as image classification and object detection and are used in various ways at actual industrial sites. Recently, it has been pointed out that these models are vulnerable to adversarial examples, and research on improving robustness has been actively conducted. An adversarial example is an image to which small perturbations have been added to induce misclassification, and it can pose a significant threat when a deep learning model is deployed in a real environment. In this paper, we examine the robustness of an edge-learning classification model, and the performance of an adversarial example detection model built on it, against adversarial examples generated by various algorithms. In the robustness experiments, the baseline classification model showed about 17% accuracy against the FGSM algorithm while the edge-learning models maintained accuracy in the 60-70% range, and the baseline model showed accuracy in the 0-1% range against the PGD, DeepFool, and CW algorithms while the edge-learning models maintained accuracy in the 80-90% range. In the adversarial example detection experiment, a high detection rate of 91-95% was confirmed for all of the FGSM, PGD, DeepFool, and CW algorithms. By demonstrating the possibility of defending against various adversarial attack algorithms, this study is expected to improve the safety and reliability of deep learning models in the many industries that use computer vision.
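
Of the four attacks evaluated, FGSM is the simplest to illustrate. The sketch below generates adversarial examples for an arbitrary PyTorch classifier; the epsilon value is an assumption, and the paper's edge-learning classifier and detector are not reproduced here.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps: float = 8 / 255):
    """Fast Gradient Sign Method: perturb the input along the sign of the
    loss gradient to induce misclassification (one of the attacks in the paper)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()          # one-step perturbation
    return x_adv.clamp(0, 1).detach()        # keep pixels in the valid range
```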

Chart-based Stock Price Prediction by Combining Variational Autoencoder and Attention Mechanisms (변이형 오토인코더와 어텐션 메커니즘을 결합한 차트기반 주가 예측)

  • Sanghyun Bae;Byounggu Choi
    • Information Systems Review
    • /
    • v.23 no.1
    • /
    • pp.23-43
    • /
    • 2021
  • Recently, many studies have attempted to increase the accuracy of stock price prediction by analyzing candlestick charts with artificial intelligence techniques. However, these studies did not consider the time-series characteristics of candlestick charts, nor the emotional state of market participants, when learning from the data. To overcome these limitations, this study produced input data by combining a volatility index with candlestick charts to capture the emotional state of market participants, and used the data as input to a new method that combines a variational autoencoder (VAE) and attention mechanisms to account for the time-series characteristics of the charts. Fifty firms were randomly selected from the S&P 500 index and their stock prices were predicted to evaluate the performance of the method against existing approaches such as convolutional neural networks (CNN) and long short-term memory (LSTM). The results indicate that the proposed method outperforms the existing ones. The study implies that the accuracy of stock price prediction can be improved by considering the emotional state of market participants and the time-series characteristics of candlestick charts.
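
One plausible reading of the combination described above is a VAE encoder per chart image with attention pooling over a window of daily latents. The sketch below is a hypothetical illustration only: layer sizes, the pooling scheme, and the omission of the VAE decoder and reconstruction loss are all assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ChartVAEAttention(nn.Module):
    """Hypothetical sketch: a convolutional VAE encoder turns each daily candlestick
    chart image into a latent vector; attention pools a window of daily latents
    before a prediction head. The decoder/reconstruction loss is omitted."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.mu = nn.Linear(32, latent_dim)
        self.logvar = nn.Linear(32, latent_dim)
        self.attn = nn.Linear(latent_dim, 1)        # scores each day in the window
        self.head = nn.Linear(latent_dim, 1)        # next-day movement score

    def forward(self, charts):                      # charts: (B, T, 3, H, W)
        B, T = charts.shape[:2]
        h = self.encoder(charts.flatten(0, 1))      # (B*T, 32)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
        z = z.view(B, T, -1)
        w = torch.softmax(self.attn(z), dim=1)      # attention weights over the window
        return self.head((w * z).sum(dim=1))        # (B, 1)

scores = ChartVAEAttention()(torch.rand(2, 20, 3, 64, 64))     # 20-day chart window
```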

Spontaneous Speech Emotion Recognition Based On Spectrogram With Convolutional Neural Network (CNN 기반 스펙트로그램을 이용한 자유발화 음성감정인식)

  • Guiyoung Son;Soonil Kwon
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.6
    • /
    • pp.284-290
    • /
    • 2024
  • Speech emotion recognition (SER) is a technique that analyzes the speaker's voice patterns, including vibration, intensity, and tone, to determine their emotional state. Interest in artificial intelligence (AI) techniques has increased, and they are now widely used in medicine, education, industry, and the military. Nevertheless, existing research has attained impressive results mainly by using acted speech recorded by skilled actors in controlled environments for various scenarios. In particular, there is a mismatch between acted and spontaneous speech, since acted speech contains more explicit emotional expressions than spontaneous speech. For this reason, spontaneous speech emotion recognition remains a challenging task. This paper aims to perform emotion recognition and improve performance using spontaneous speech data. To this end, we implement deep learning-based speech emotion recognition using VGG (Visual Geometry Group) networks after converting the 1-dimensional audio signal into a 2-dimensional spectrogram image. The experimental evaluation is performed on the Korean spontaneous emotional speech database from AI-Hub, which covers 7 emotions: joy, love, anger, fear, sadness, surprise, and neutral. Using the time-frequency 2-dimensional spectrogram, we achieved average accuracies of 83.5% for adults and 73.0% for young people. In conclusion, our findings demonstrate that the suggested framework outperforms current state-of-the-art techniques for spontaneous speech and shows promising performance despite the difficulty of quantifying emotional expression in spontaneous speech.
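
The preprocessing step, turning a 1-D waveform into a 2-D spectrogram image and feeding it to a VGG classifier, can be sketched as below. The torchvision VGG-16 stands in for the paper's VGG; the file name, mel-spectrogram settings, and input resizing are assumptions, since the abstract does not give them.

```python
import torch
import torchaudio
import torchvision

# Assumption: a single utterance file; settings below are illustrative defaults.
waveform, sr = torchaudio.load("utterance.wav")              # placeholder path
waveform = waveform.mean(dim=0, keepdim=True)                # force mono

spec = torchaudio.transforms.MelSpectrogram(sample_rate=sr, n_mels=128)(waveform)
spec = torchaudio.transforms.AmplitudeToDB()(spec)           # (1, 128, frames)

img = spec.unsqueeze(0).repeat(1, 3, 1, 1)                   # treat as 3-channel image
img = torch.nn.functional.interpolate(img, size=(224, 224))  # VGG input size

model = torchvision.models.vgg16(weights="IMAGENET1K_V1")
model.classifier[6] = torch.nn.Linear(4096, 7)               # 7 emotion classes
logits = model(img)                                          # fine-tune with cross-entropy
```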

Comparison of rainfall-runoff performance based on various gridded precipitation datasets in the Mekong River basin (메콩강 유역의 격자형 강수 자료에 의한 강우-유출 모의 성능 비교·분석)

  • Kim, Younghun;Le, Xuan-Hien;Jung, Sungho;Yeon, Minho;Lee, Gihae
    • Journal of Korea Water Resources Association
    • /
    • v.56 no.2
    • /
    • pp.75-89
    • /
    • 2023
  • Because the Mekong River basin is shared by several countries, it is difficult to collect precipitation data, and the quantitative and qualitative quality of the datasets differs from country to country, which may increase the uncertainty of hydrological analysis results. Recently, with the development of remote sensing technology, grid-based precipitation products (GPPs) have become easier to obtain, and various hydrological studies have been conducted for ungauged or large watersheds using GPPs. In this study, rainfall-runoff simulation of the Mekong River basin was conducted using the SWAT model, a semi-distributed model, with three satellite GPPs (TRMM, GSMaP, PERSIANN-CDR) and two gauge-based GPPs (APHRODITE, GPCC). Four water level stations on the Mekong mainstream, Luang Prabang, Pakse, Stung Treng, and Kratie, were selected as the main outlets; the SWAT model parameters were calibrated using APHRODITE as the observation for the period 2001 to 2011, and the runoff simulations were verified for the period 2012 to 2013. In addition, spatio-temporal correction of the original satellite precipitation products was performed using ConvAE, a convolutional neural network model, and rainfall-runoff performance was compared before and after correction. The original satellite precipitation products and GPCC were quantitatively under- or over-estimated, or showed very different spatial patterns, compared with APHRODITE, whereas the satellite products corrected with ConvAE showed dramatically improved spatial correlation. For runoff simulation, the results using the ConvAE-corrected satellite precipitation products were significantly more accurate at all outlets than those using the original products. Therefore, the bias correction technique using ConvAE presented in this study can be applied to various hydrological analyses of large watersheds where the rain gauge network is not dense.
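
The bias correction step can be sketched as a convolutional autoencoder trained to map a satellite precipitation grid toward the gauge-based (APHRODITE) grid for the same day. The network depth, channel counts, and grid size below are assumptions; only the general idea of ConvAE-style correction is shown.

```python
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    """Illustrative convolutional autoencoder for precipitation bias correction:
    input = satellite rainfall grid, training target = gauge-based grid."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, in_ch, 4, stride=2, padding=1), nn.ReLU())

    def forward(self, x):                   # x: (B, 1, H, W) daily rainfall grid
        return self.decoder(self.encoder(x))

model, loss_fn = ConvAE(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
sat, gauge = torch.rand(8, 1, 64, 64), torch.rand(8, 1, 64, 64)   # dummy 64x64 grids
opt.zero_grad()
loss = loss_fn(model(sat), gauge)           # corrected grid vs. APHRODITE target
loss.backward()
opt.step()
```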

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, the Go-playing artificial intelligence program developed by Google DeepMind, won a decisive victory against Lee Sedol. Many people thought that machines would not be able to beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning, the core artificial intelligence technique behind the AlphaGo algorithm, drew attention. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled. In contrast, however, deep learning research on traditional business data and structured data analysis is hard to find. In this study, we examined whether the deep learning techniques studied so far can be used not only for recognizing high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared their performance with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal. They include input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer opened an account. To evaluate the applicability of deep learning to binary classification, we compared models using CNN, LSTM, and dropout, which are widely used deep learning algorithms and techniques, with MLP models, a traditional artificial neural network. Since not every network design alternative can be tested, the experiment used restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of dropout. The F1 score was used to evaluate the models, because it shows how well a model classifies the class of interest rather than overall accuracy. The deep learning techniques were applied as follows. A CNN recognizes features by reading values adjacent to a given position, but in business data the distance between fields carries little meaning because each field is usually independent; we therefore set the CNN filter size to the number of fields so that the whole record is read at once, and added a hidden layer so that decisions are made from the resulting features. For the model with two LSTM layers, the second layer reads the input in the reverse direction of the first, to reduce the influence of each field's position. For dropout, each hidden-layer neuron is dropped with probability 0.5. The experimental results show that the model with the highest F1 score was the CNN model with dropout, followed by the MLP model with two hidden layers and dropout. Several findings emerged from the experiments. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models, which is interesting because CNNs performed well on a binary classification problem to which they have rarely been applied, as well as in the fields where their effectiveness is already established. Third, the LSTM algorithm appears unsuitable for these binary classification problems because its training time is too long relative to the performance improvement. From these results, we confirm that some deep learning algorithms can be applied to business binary classification problems.
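
The CNN setting described above, a filter whose width equals the number of fields, an extra hidden layer, and dropout with p=0.5, can be sketched as follows. The number of fields, filter count, and hidden-layer size are illustrative assumptions; the abstract does not give the exact hyperparameters.

```python
import torch
import torch.nn as nn

N_FIELDS = 16   # assumed number of input fields in the bank marketing data

class TabularCNN(nn.Module):
    """Sketch of the paper's CNN setting for tabular binary classification:
    one Conv1d whose kernel spans all fields at once, a hidden layer,
    dropout p=0.5, and a single logit scored with the F1 metric."""
    def __init__(self, n_fields=N_FIELDS, n_filters=32):
        super().__init__()
        self.conv = nn.Conv1d(1, n_filters, kernel_size=n_fields)   # filter width = #fields
        self.net = nn.Sequential(
            nn.ReLU(), nn.Flatten(),
            nn.Linear(n_filters, 16), nn.ReLU(),
            nn.Dropout(p=0.5),
            nn.Linear(16, 1))                                       # logit for "responds"

    def forward(self, x):                  # x: (B, n_fields)
        return self.net(self.conv(x.unsqueeze(1)))

logits = TabularCNN()(torch.rand(4, N_FIELDS))   # apply sigmoid for a probability
```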

A Comparative Study on the Effective Deep Learning for Fingerprint Recognition with Scar and Wrinkle (상처와 주름이 있는 지문 판별에 효율적인 심층 학습 비교연구)

  • Kim, JunSeob;Rim, BeanBonyka;Sung, Nak-Jun;Hong, Min
    • Journal of Internet Computing and Services
    • /
    • v.21 no.4
    • /
    • pp.17-23
    • /
    • 2020
  • Biometric information, which measures items related to human characteristics, has attracted great attention as a highly reliable security technology because it cannot be stolen or lost. Among biometric traits, fingerprints are mainly used in fields such as identity verification and identification. If a fingerprint image has a problem that makes authentication difficult, such as a wound, wrinkle, or moisture, a fingerprint expert can identify the problem directly in a preprocessing step and apply an image processing algorithm appropriate to the problem in order to resolve it. By implementing artificial intelligence software that distinguishes fingerprint images with cuts and wrinkles, it becomes easy to check whether cuts or wrinkles are present and to select an appropriate algorithm to improve the fingerprint image. In this study, we built a database of 17,080 fingerprints in total by acquiring all fingerprints of 1,010 students from the Royal University of Cambodia, 600 images from the Sokoto open dataset, and the fingerprints of 98 Korean students. Criteria were established to determine whether the images in the database contain injuries or wrinkles, and the data were validated by experts. The training and test datasets consisted of the Cambodian and Sokoto data, split at a ratio of 8:2, and the data of the 98 Korean students were used as the validation set. Using the constructed dataset, five CNN-based architectures were implemented: a classic CNN, AlexNet, VGG-16, ResNet50, and YOLOv3, and a study was conducted to find the model that performed best at this discrimination task. Among the five architectures, ResNet50 showed the best performance with 81.51%.
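
The best-performing architecture, ResNet50, can be adapted to this kind of discrimination task by replacing its classification head and fine-tuning on the fingerprint images. The sketch below assumes a binary label (damaged, i.e. scar/wrinkle, vs. clean) and ImageNet-pretrained weights; these details are illustrative rather than the paper's exact training recipe.

```python
import torch
import torchvision

# Replace the 1000-class ImageNet head with a 2-class head (damaged vs. clean).
model = torchvision.models.resnet50(weights="IMAGENET1K_V1")
model.fc = torch.nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

images = torch.rand(8, 3, 224, 224)       # dummy batch standing in for fingerprint images
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```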

Development of Intelligent Severity of Atopic Dermatitis Diagnosis Model using Convolutional Neural Network (합성곱 신경망(Convolutional Neural Network)을 활용한 지능형 아토피피부염 중증도 진단 모델 개발)

  • Yoon, Jae-Woong;Chun, Jae-Heon;Bang, Chul-Hwan;Park, Young-Min;Kim, Young-Joo;Oh, Sung-Min;Jung, Joon-Ho;Lee, Suk-Jun;Lee, Ji-Hyun
    • Management & Information Systems Review
    • /
    • v.36 no.4
    • /
    • pp.33-51
    • /
    • 2017
  • With the advent of the Fourth Industrial Revolution and the growing demand for quality of life driven by economic growth, demand for high-quality medical services is increasing. Artificial intelligence has been introduced into the medical field but is rarely applied to chronic skin diseases, which directly affect quality of life. In addition, atopic dermatitis, a representative chronic skin disease, has the disadvantage that it is difficult to diagnose the severity of its lesions objectively. The aim of this study is to establish an intelligent severity recognition model for atopic dermatitis in order to improve patients' quality of life. The following steps were performed. First, image data of patients with atopic dermatitis were collected from the Catholic University of Korea Seoul Saint Mary's Hospital. The collected images were refined and labeled to obtain training and verification data suitable for an objective intelligent severity recognition model. Second, various CNN algorithms were trained and verified to select an image recognition algorithm suitable for the model. Experimental results showed that 'ResNet V1 101' and 'ResNet V2 50' achieved the highest performance, with over 90% accuracy for Erythema and Excoriation, while 'VGG-NET' achieved a lower accuracy of 89% due to a lack of training data. The proposed methodology demonstrates that image recognition algorithms perform well not only in general object recognition but also in medical fields requiring expert knowledge. In addition, because the study uses image data of actual atopic dermatitis patients, it is expected to be highly applicable in the field of atopic dermatitis.
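
The abstract grades severity per lesion sign (Erythema, Excoriation) with ResNet classifiers. The paper evaluates separate networks, so the shared-backbone, multi-head variant below is only one compact way to sketch the idea; the number of severity grades and the head layout are assumptions.

```python
import torch
import torch.nn as nn
import torchvision

class SeverityNet(nn.Module):
    """Hypothetical multi-head sketch: a shared ResNet backbone with one severity
    head per lesion sign; 4 severity grades (0-3) are assumed for illustration."""
    def __init__(self, n_grades=4):
        super().__init__()
        backbone = torchvision.models.resnet50(weights="IMAGENET1K_V1")
        backbone.fc = nn.Identity()                    # expose 2048-d features
        self.backbone = backbone
        self.erythema_head = nn.Linear(2048, n_grades)
        self.excoriation_head = nn.Linear(2048, n_grades)

    def forward(self, x):                              # x: (B, 3, 224, 224) skin image
        f = self.backbone(x)
        return self.erythema_head(f), self.excoriation_head(f)

erythema_logits, excoriation_logits = SeverityNet()(torch.rand(2, 3, 224, 224))
```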
