• Title/Summary/Keyword: AI 데이터셋 (AI dataset)

Search Results: 224

Generation of Stage Tour Contents with Deep Learning Style Transfer (딥러닝 스타일 전이 기반의 무대 탐방 콘텐츠 생성 기법)

  • Kim, Dong-Min;Kim, Hyeon-Sik;Bong, Dae-Hyeon;Choi, Jong-Yun;Jeong, Jin-Woo
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.11 / pp.1403-1410 / 2020
  • Recently, as interest in non-face-to-face experiences and services increases, the demand for web video content that can be easily consumed on mobile devices such as smartphones and tablets is growing rapidly. To meet this demand, this paper proposes a technique for efficiently producing video content that provides the experience of visiting famous places featured in animations or movies (i.e., a stage tour). To this end, an image dataset was built by collecting images of the stage areas using the Google Maps and Google Street View APIs. A deep learning-based style transfer method was then presented that applies the unique style of the animation to the collected street view images and generates video content from the style-transferred images. Finally, various experiments showed that the proposed method can produce more engaging stage-tour video content.
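
A minimal sketch of how such a street-view image dataset might be collected, assuming a valid Google Street View Static API key; the stage-area coordinates and sampling parameters below are illustrative, not the authors' actual settings:

```python
import requests

# Hypothetical stage-area coordinates (lat, lng); the actual locations are not given in the abstract.
STAGE_POINTS = [(35.0116, 135.7681), (35.6595, 139.7005)]

API_KEY = "YOUR_GOOGLE_API_KEY"  # assumed: a valid Street View Static API key
BASE_URL = "https://maps.googleapis.com/maps/api/streetview"

def fetch_street_view(lat, lng, heading, out_path):
    """Download one street-view image for a given location and camera heading."""
    params = {
        "size": "640x640",            # image resolution
        "location": f"{lat},{lng}",
        "heading": heading,           # camera direction in degrees
        "fov": 90,                    # field of view
        "key": API_KEY,
    }
    resp = requests.get(BASE_URL, params=params, timeout=10)
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)

# Sample several headings per point to cover the surroundings of each stage location.
for i, (lat, lng) in enumerate(STAGE_POINTS):
    for heading in (0, 90, 180, 270):
        fetch_street_view(lat, lng, heading, f"stage_{i}_{heading}.jpg")
```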

A Study on the Generation of Webtoons through Fine-Tuning of Diffusion Models (확산모델의 미세조정을 통한 웹툰 생성연구)

  • Kyungho Yu;Hyungju Kim;Jeongin Kim;Chanjun Chun;Pankoo Kim
    • Smart Media Journal / v.12 no.7 / pp.76-83 / 2023
  • This study proposes a method to assist webtoon artists in the webtoon creation process by using a pretrained Text-to-Image model to generate webtoon images from text. The proposed approach fine-tunes a pretrained Stable Diffusion model on a webtoon dataset converted into the desired webtoon style. The fine-tuning process, using the LoRA technique, completes in approximately 4.5 hours over 30,000 steps. The generated images depict shapes and backgrounds described by the input text, resulting in webtoon-like images. Furthermore, quantitative evaluation using the Inception score shows that the proposed method outperforms DCGAN-based Text-to-Image models. If webtoon artists adopt the proposed Text-to-Image model for webtoon creation, it is expected to significantly reduce the time required for the creative process.
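
As an illustration of the LoRA technique mentioned above, the following is a minimal PyTorch sketch of injecting a low-rank adapter into a single linear layer. It is not the authors' training code; the rank, scaling, and layer dimensions are illustrative assumptions:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update (LoRA)."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze the pretrained weights
            p.requires_grad = False
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Pretrained output plus the low-rank correction: W x + (alpha/r) * B A x
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

# Example: adapt one (hypothetical) attention projection of a diffusion UNet block.
proj = nn.Linear(320, 320)
adapted = LoRALinear(proj, rank=4)
out = adapted(torch.randn(2, 77, 320))
print(out.shape)  # torch.Size([2, 77, 320])
```

Only the small A and B matrices are trained, which is what keeps the fine-tuning time and memory footprint low.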

Performance Evaluation of Pre-trained Language Models in Multi-Goal Conversational Recommender Systems (다중목표 대화형 추천시스템을 위한 사전 학습된 언어모델들에 대한 성능 평가)

  • Taeho Kim;Hyung-Jun Jang;Sang-Wook Kim
    • Smart Media Journal / v.12 no.6 / pp.35-40 / 2023
  • In this paper, we examine pre-trained language models used in Multi-Goal Conversational Recommender Systems (MG-CRS) and compare and analyze their performance. Specifically, we investigate the impact of language model size on MG-CRS performance. The study targets three families of language models, BERT, GPT-2, and BART, and measures and compares their accuracy on two tasks, 'type prediction' and 'topic prediction', using the MG-CRS dataset DuRecDial 2.0. Experimental results show that all models performed well on the type prediction task, but there were significant performance differences among the models and across their sizes on the topic prediction task. Based on these findings, the study suggests directions for improving the performance of MG-CRS.
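
A hedged sketch of how one of the compared models could be framed for the type-prediction task as sequence classification with Hugging Face Transformers. The label set and preprocessing of DuRecDial 2.0 shown here are assumptions, not the authors' exact setup:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical goal-type labels; the actual DuRecDial 2.0 label set may differ.
TYPE_LABELS = ["QA", "chit-chat", "recommendation", "task"]

model_name = "bert-base-uncased"  # one of the compared families (BERT); the size can vary
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=len(TYPE_LABELS)
)

# A dialogue context is encoded and classified into a goal type.
dialogue_context = "User: I feel bored today. Could you recommend a movie?"
inputs = tokenizer(dialogue_context, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
predicted = TYPE_LABELS[logits.argmax(dim=-1).item()]
print(predicted)
```

Swapping `model_name` for GPT-2 or BART checkpoints of different sizes is, in spirit, how the size comparison in the study can be reproduced.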

Cascade Fusion-Based Multi-Scale Enhancement of Thermal Image (캐스케이드 융합 기반 다중 스케일 열화상 향상 기법)

  • Kyung-Jae Lee
    • The Journal of the Korea institute of electronic communication sciences / v.19 no.1 / pp.301-307 / 2024
  • This study introduces a novel cascade fusion architecture aimed at enhancing thermal images across various scale conditions. Processing thermal images at multiple scales has been challenging because existing methods are designed for specific scales. To overcome this limitation, this paper proposes a unified framework that uses cascade feature fusion to effectively learn multi-scale representations. Confidence maps from different image scales are fused in a cascaded manner, enabling scale-invariant learning. The architecture consists of end-to-end trained convolutional neural networks that enhance image quality by reinforcing mutual scale dependencies. Experimental results indicate that the proposed technique outperforms existing methods in multi-scale thermal image enhancement, with consistent improvements in image quality metrics. The cascade fusion design facilitates robust generalization across scales and efficient learning of cross-scale representations.
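
A minimal PyTorch sketch of the cascade idea described above, in which a confidence map estimated at a coarser scale is upsampled and fused into the next finer scale. The layer widths and fusion rule are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleBranch(nn.Module):
    """Predicts an enhancement confidence map at a single scale."""
    def __init__(self, in_ch=1, width=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

class CascadeFusion(nn.Module):
    """Fuses coarse-to-fine confidence maps for multi-scale thermal enhancement."""
    def __init__(self, scales=(0.25, 0.5, 1.0)):
        super().__init__()
        self.scales = scales
        self.branches = nn.ModuleList(ScaleBranch(in_ch=2 if i > 0 else 1)
                                      for i in range(len(scales)))

    def forward(self, thermal):
        conf = None
        for scale, branch in zip(self.scales, self.branches):
            x = F.interpolate(thermal, scale_factor=scale, mode="bilinear",
                              align_corners=False) if scale != 1.0 else thermal
            if conf is not None:
                # Cascade step: carry the previous (coarser) confidence map upward.
                conf_up = F.interpolate(conf, size=x.shape[-2:], mode="bilinear",
                                        align_corners=False)
                x = torch.cat([x, conf_up], dim=1)
            conf = branch(x)
        # Apply the final full-resolution confidence map as a simple enhancement gain.
        return thermal * (1.0 + conf)

model = CascadeFusion()
out = model(torch.rand(1, 1, 128, 160))
print(out.shape)  # torch.Size([1, 1, 128, 160])
```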

Training Performance Analysis of Semantic Segmentation Deep Learning Model by Progressive Combining Multi-modal Spatial Information Datasets (다중 공간정보 데이터의 점진적 조합에 의한 의미적 분류 딥러닝 모델 학습 성능 분석)

  • Lee, Dae-Geon;Shin, Young-Ha;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.2 / pp.91-108 / 2022
  • In most cases, optical images have been used as training data for DL (Deep Learning) models for object detection, recognition, identification, classification, semantic segmentation, and instance segmentation. However, the properties of 3D objects in the real world cannot be fully explored with 2D images. One of the major sources of 3D geospatial information is the DSM (Digital Surface Model), and characteristic information derived from the DSM is effective for analyzing 3D terrain features. In particular, man-made objects such as buildings, which have geometrically distinctive shapes, can be described by geometric elements obtained from 3D geospatial data. The background and motivation of this paper are drawn from the concept of the intrinsic image, which is involved in high-level visual information processing. This paper aims to extract buildings after classifying terrain features by training a DL model with DSM-derived information including slope, aspect, and SRI (Shaded Relief Image). The experiments were carried out with a CNN-based SegNet model using the DSM and label dataset provided by the ISPRS (International Society for Photogrammetry and Remote Sensing). In particular, the experiments focus on progressively combining multi-source information to improve training performance and assess the synergistic effect on the DL model. The results demonstrate that buildings were effectively classified and extracted by the proposed approach.
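
The DSM-derived channels mentioned above (slope, aspect, and shaded relief) can be computed with standard terrain formulas. Below is a hedged NumPy sketch assuming a regular grid with a known cell size; it is not the authors' exact preprocessing, and the aspect/hillshade conventions shown are one common choice among several:

```python
import numpy as np

def dsm_features(dsm, cell_size=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Compute slope, aspect, and a shaded-relief image (SRI) from a DSM grid."""
    # Elevation gradients per map unit (cell_size is the ground sampling distance).
    dz_dy, dz_dx = np.gradient(dsm, cell_size)

    slope = np.arctan(np.hypot(dz_dx, dz_dy))   # radians
    aspect = np.arctan2(-dz_dx, dz_dy)          # radians; sign/zero convention varies

    # Standard single-light-source hillshade model.
    zenith = np.radians(90.0 - altitude_deg)
    azimuth = np.radians(360.0 - azimuth_deg + 90.0)
    sri = (np.cos(zenith) * np.cos(slope)
           + np.sin(zenith) * np.sin(slope) * np.cos(azimuth - aspect))
    sri = np.clip(sri, 0.0, 1.0)

    # Stack as extra input channels alongside the DSM for the segmentation model.
    return np.stack([slope, aspect, sri], axis=-1)

dsm = np.random.rand(256, 256) * 30.0        # synthetic elevations in metres
features = dsm_features(dsm, cell_size=0.5)  # illustrative cell size in metres
print(features.shape)  # (256, 256, 3)
```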

Quantitative Evaluations of Deep Learning Models for Rapid Building Damage Detection in Disaster Areas (재난지역에서의 신속한 건물 피해 정도 감지를 위한 딥러닝 모델의 정량 평가)

  • Ser, Junho;Yang, Byungyun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.5 / pp.381-391 / 2022
  • This paper aims to identify, among prevailing deep learning models (a type of AI, Artificial Intelligence), one that helps rapidly detect damaged buildings where disasters occur. The selected models are SSD-512, RetinaNet, and YOLOv3, which have been widely used for object detection in recent years. These models are based on one-stage detector networks suitable for rapid object detection; they are often used for general object detection because of their structural advantages and high speed, but not yet for damaged-building detection in disaster management. In this study, we first trained each algorithm on the xBD dataset, which provides post-disaster imagery with damage-classification labels. Next, the three models were quantitatively evaluated with mAP (mean Average Precision) and FPS (Frames Per Second). The mAP of YOLOv3 was 34.39% and its FPS reached 46. The mAP of RetinaNet was 36.06%, 1.67 percentage points higher than YOLOv3, but its FPS was one third that of YOLOv3. SSD-512 scored significantly lower than YOLOv3 on both quantitative indicators. In a disaster situation, a rapid and precise investigation of damaged buildings is essential for effective disaster response. Accordingly, the results obtained through this study are expected to be useful for rapid response in disaster management.
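
The two reported metrics can be reproduced in principle as follows; this is a hedged sketch of how FPS is typically timed and how box IoU (the building block of mAP) is computed, not the authors' evaluation harness:

```python
import time
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes (used inside mAP)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def measure_fps(detector, images, warmup=5):
    """Average frames per second of a detector callable over a list of images."""
    for img in images[:warmup]:          # warm-up runs are excluded from timing
        detector(img)
    start = time.perf_counter()
    for img in images:
        detector(img)
    return len(images) / (time.perf_counter() - start)

# Example with a dummy detector on synthetic images.
dummy_detector = lambda img: [(10, 10, 50, 60, 0.9)]
images = [np.zeros((512, 512, 3), dtype=np.uint8) for _ in range(20)]
print(f"IoU: {iou((10, 10, 50, 60), (20, 20, 60, 70)):.3f}")
print(f"FPS: {measure_fps(dummy_detector, images):.1f}")
```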

Bit Operation Optimization and DNN Application using GPU Acceleration (GPU 가속기를 통한 비트 연산 최적화 및 DNN 응용)

  • Kim, Sang Hyeok;Lee, Jae Heung
    • Journal of IKEEE / v.23 no.4 / pp.1314-1320 / 2019
  • In this paper, we propose a new method for optimizing bit operations and applying them to a DNN (Deep Neural Network) in a software environment. To this end, we propose a packing function for bitwise optimization and a masking matrix multiplication operation for application to the DNN. The packing function converts each 32-bit real value into a 2-bit quantized value through a threshold comparison; after this step, four 32-bit real values are packed into one 8-bit value. The masking matrix multiplication is a special operation for multiplying the packed weight values by the ordinary input values. Each operation is processed in parallel on a GPU accelerator. In the experiments, memory usage was reduced by a factor of about 16 compared with the 32-bit DNN model, while the accuracy remained within 1% of the 32-bit model.
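
A small NumPy sketch of the packing step described above, mapping each 32-bit value to a 2-bit code by threshold comparison and packing four codes into one byte. The thresholds themselves are illustrative assumptions, and the paper's GPU kernel is not reproduced here:

```python
import numpy as np

# Illustrative quantization thresholds; the paper's actual thresholds are not given in the abstract.
THRESHOLDS = np.array([-0.5, 0.0, 0.5], dtype=np.float32)

def pack_2bit(values: np.ndarray) -> np.ndarray:
    """Quantize 32-bit floats to 2-bit codes and pack four codes per uint8."""
    assert values.size % 4 == 0, "pad the array to a multiple of 4 first"
    # Threshold comparison: each value becomes a code in {0, 1, 2, 3}.
    codes = np.searchsorted(THRESHOLDS, values).astype(np.uint8)
    codes = codes.reshape(-1, 4)
    # Shift the four 2-bit codes into one byte: c0 | c1<<2 | c2<<4 | c3<<6.
    packed = (codes[:, 0] | (codes[:, 1] << 2) |
              (codes[:, 2] << 4) | (codes[:, 3] << 6))
    return packed.astype(np.uint8)

weights = np.random.randn(8).astype(np.float32)
packed = pack_2bit(weights)
print(weights.nbytes, "->", packed.nbytes, "bytes")   # 32 -> 2 bytes (16x smaller)
```

The 16x reduction shown on the last line matches the memory saving reported in the abstract, since 32-bit values are replaced by 2-bit codes.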

A Study on Lane Detection Based on Split-Attention Backbone Network (Split-Attention 백본 네트워크를 활용한 차선 인식에 관한 연구)

  • Song, In seo;Lee, Seon woo;Kwon, Jang woo;Won, Jong hoon
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.19 no.5 / pp.178-188 / 2020
  • This paper proposes a lane recognition CNN that uses a split-attention network as its feature-extraction backbone. Split-attention assigns a weight to each channel of the feature map during CNN feature extraction, so features can be extracted reliably even in the rapidly changing driving environment of a vehicle. The proposed deep neural networks were trained and evaluated on the TuSimple dataset, and the change in performance with the number of layers in the backbone network was compared and analyzed. The model achieved results comparable to recent work, with an accuracy of up to 96.26%, and showed the best result in the false-negative (FN) rate. Therefore, stable lane recognition without misrecognition is possible with the proposed model even in a real driving environment.
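
A simplified sketch of the split-attention idea (per-channel weighting across feature splits), reduced to its essentials in PyTorch. The actual ResNeSt-style block used as the backbone in the paper is more elaborate; dimensions and the radix here are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleSplitAttention(nn.Module):
    """Weights `radix` feature splits per channel via a softmax over the splits."""
    def __init__(self, channels: int, radix: int = 2, reduction: int = 4):
        super().__init__()
        self.radix = radix
        inner = max(channels // reduction, 8)
        self.fc1 = nn.Conv2d(channels, inner, 1)
        self.fc2 = nn.Conv2d(inner, channels * radix, 1)

    def forward(self, splits):
        # splits: list of `radix` tensors, each of shape (N, C, H, W)
        gap = sum(splits)                                  # aggregate the splits
        gap = F.adaptive_avg_pool2d(gap, 1)                # global per-channel statistics
        attn = self.fc2(F.relu(self.fc1(gap)))             # (N, C*radix, 1, 1)
        n = attn.shape[0]
        attn = attn.view(n, self.radix, -1, 1, 1)
        attn = torch.softmax(attn, dim=1)                  # softmax across splits
        # Channel-wise weighted sum of the splits.
        return sum(attn[:, r] * splits[r] for r in range(self.radix))

sa = SimpleSplitAttention(channels=64, radix=2)
feats = [torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)]
print(sa(feats).shape)  # torch.Size([1, 64, 32, 32])
```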

A Study on the Production of 3D Datasets for Stone Pagodas by Period in Korea

  • Byong-Kwon Lee;Eun-Ji Kim
    • Journal of the Korea Society of Computer and Information / v.28 no.9 / pp.105-111 / 2023
  • Currently, most content restoration using artificial intelligence relies on 2D learning. Learning in 3D remains incomplete because extending from two axes (X, Y) to three axes (X, Y, Z) requires much more computation and slows training. The purpose of this paper is to secure a dataset for artificial intelligence learning by analyzing and 3D-modeling Korean stone pagodas by era, based on two-dimensional information (images) of the cultural assets. In addition, we analyze the differences and characteristics of Korean pagodas in each era and propose a feature modeling method suitable for artificial intelligence learning. Restoration of cultural properties relies on a variety of materials, expert techniques, and historical archives. By recording and managing the information necessary for the restoration of cultural properties through this study, the results are expected to serve as an important documentary heritage for restoring and maintaining Korean traditional pagodas in the future.

An Interpretable Log Anomaly System Using Bayesian Probability and Closed Sequence Pattern Mining (베이지안 확률 및 폐쇄 순차패턴 마이닝 방식을 이용한 설명가능한 로그 이상탐지 시스템)

  • Yun, Jiyoung;Shin, Gun-Yoon;Kim, Dong-Wook;Kim, Sang-Soo;Han, Myung-Mook
    • Journal of Internet Computing and Services / v.22 no.2 / pp.77-87 / 2021
  • With the development of the Internet and personal computers, various and complex attacks have begun to emerge. As attacks become more complex, signature-based detection becomes difficult, which has led to research on behavior-based log anomaly detection. Recent work uses deep learning to learn the order of log events and shows good performance; despite this, it provides no explanation for its predictions. This lack of explanation makes it difficult to find data contamination or vulnerabilities in the model itself, so users lose trust in the model. To address this problem, this work proposes an explainable log anomaly detection system. Log parsing is performed first, and sequential rules are then extracted using Bayesian posterior probability, yielding a rule set of the form 'if condition, then result, with posterior probability'. If a sample matches the rule set it is considered normal; otherwise it is an anomaly. We use the HDFS dataset for the experiments, achieving an F1-score of 92.7% on the test dataset.
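
A toy sketch of the rule form described above: from parsed log-key sequences, the posterior probability of the next key given a preceding window is estimated, and a window that violates a learned high-probability rule is flagged as an anomaly. The window length, probability threshold, and example logs are illustrative assumptions, and the closed sequential pattern mining step is not reproduced here:

```python
from collections import defaultdict

def mine_rules(sequences, window=2, min_posterior=0.9):
    """Extract 'if <window of log keys> then <next key>' rules with posterior probability."""
    cond_counts = defaultdict(int)
    joint_counts = defaultdict(int)
    for seq in sequences:
        for i in range(len(seq) - window):
            cond = tuple(seq[i:i + window])
            cond_counts[cond] += 1
            joint_counts[(cond, seq[i + window])] += 1
    rules = {}
    for (cond, result), joint in joint_counts.items():
        posterior = joint / cond_counts[cond]      # P(result | condition)
        if posterior >= min_posterior:
            rules[cond] = (result, posterior)
    return rules

def is_anomalous(seq, rules, window=2):
    """A sequence is anomalous if any window violates a learned rule."""
    for i in range(len(seq) - window):
        cond = tuple(seq[i:i + window])
        if cond in rules and rules[cond][0] != seq[i + window]:
            return True
    return False

# Toy normal logs and one deviating sequence.
normal_logs = [["open", "read", "close"]] * 50 + [["open", "write", "close"]] * 50
rules = mine_rules(normal_logs, min_posterior=0.9)
print(is_anomalous(["open", "read", "delete"], rules))   # True: violates a learned rule
```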