• Title/Summary/Keyword: CNN Model

Automated Story Generation with Image Captions and Recursive Calls (이미지 캡션 및 재귀호출을 통한 스토리 생성 방법)

  • Isle Jeon;Dongha Jo;Mikyeong Moon
    • Journal of the Institute of Convergence Signal Processing / v.24 no.1 / pp.42-50 / 2023
  • The development of technology has driven digital innovation throughout the media industry, including production and editing techniques, and has diversified how consumers watch content through OTT services and streaming. The convergence of big data and deep learning networks has enabled automatic text generation in formats such as news articles, novels, and scripts, but few studies have reflected the author's intention or produced contextually smooth stories. In this paper, we describe the flow of pictures in a storyboard with image caption generation techniques and automatically generate story-tailored scenarios through a language model. Using image captioning based on a CNN and an attention mechanism, we generate sentences describing the pictures on the storyboard, and feed the generated sentences into the natural language processing model KoGPT-2 to automatically generate scenarios that meet the planning intention. Through this work, scenarios customized to the author's intention and story can be created in large quantities to ease the burden of content creation, and artificial intelligence participates in the overall process of digital content production, advancing media intelligence.
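
A minimal sketch of the second stage described above, assuming the publicly released KoGPT-2 checkpoint on Hugging Face ("skt/kogpt2-base-v2") and illustrative captions; the CNN + attention captioner itself is not shown, and the prompt loop only approximates the paper's recursive-call idea.

```python
# Hypothetical sketch: feeding storyboard captions into KoGPT-2 to draft a scenario.
# The model ID and tokenizer special tokens are assumptions, not taken from the paper.
from transformers import PreTrainedTokenizerFast, GPT2LMHeadModel

tokenizer = PreTrainedTokenizerFast.from_pretrained(
    "skt/kogpt2-base-v2",
    bos_token="</s>", eos_token="</s>", unk_token="<unk>",
    pad_token="<pad>", mask_token="<mask>",
)
model = GPT2LMHeadModel.from_pretrained("skt/kogpt2-base-v2")

# Illustrative captions standing in for the output of the CNN + attention captioner.
captions = [
    "한 남자가 밤에 빈 사무실로 걸어 들어간다.",
    "그는 책상 위에서 편지 한 통을 발견한다.",
]

# Recursive-call idea (approximated): each generated passage becomes part of the
# prompt for the next caption, so the story stays contextually connected.
story = ""
for caption in captions:
    prompt = (story + " " + caption) if story else caption
    ids = tokenizer.encode(prompt, return_tensors="pt")
    out = model.generate(
        ids, max_length=ids.shape[1] + 64,
        do_sample=True, top_p=0.92, pad_token_id=tokenizer.pad_token_id,
    )
    story = tokenizer.decode(out[0], skip_special_tokens=True)

print(story)
```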

Nondestructive Quantification of Corrosion in Cu Interconnects Using Smith Charts (스미스 차트를 이용한 구리 인터커텍트의 비파괴적 부식도 평가)

  • Minkyu Kang;Namgyeong Kim;Hyunwoo Nam;Tae Yeob Kang
    • Journal of the Microelectronics and Packaging Society / v.31 no.2 / pp.28-35 / 2024
  • Corrosion inside electronic packages significantly impacts system performance and reliability, necessitating non-destructive diagnostic techniques for system health management. This study presents a non-destructive method for assessing corrosion in copper interconnects using the Smith chart, a tool that integrates the magnitude and phase of complex impedance for visualization. For the experiment, specimens simulating copper transmission lines were subjected to temperature and humidity cycles according to the MIL-STD-810G standard to induce corrosion. The corrosion level of each specimen was quantitatively assessed and labeled based on color changes in the R channel. The S-parameters and Smith charts showed unique patterns corresponding to five levels of progressing corrosion, confirming the effectiveness of the Smith chart as a tool for corrosion assessment. Furthermore, using data augmentation, 4,444 Smith charts representing various corrosion levels were obtained, and artificial intelligence models were trained to output the corrosion stage of copper interconnects from an input Smith chart. Among CNN and Transformer models specialized for image classification, the ConvNeXt model achieved the highest diagnostic performance with an accuracy of 89.4%. Diagnosing corrosion with the Smith chart enables non-destructive evaluation using electrical signals, and because the chart integrates and visualizes signal magnitude and phase information, an intuitive and noise-robust diagnosis is expected.
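
A hedged sketch of the classification stage: fine-tuning a torchvision ConvNeXt model to map Smith-chart images to one of five corrosion stages. The dataset path, image size, and training hyperparameters are assumptions, not taken from the paper.

```python
# Minimal sketch (not the authors' code): fine-tuning ConvNeXt on Smith-chart images.
# Assumes an ImageFolder-style layout with one sub-directory per corrosion stage.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("smith_charts/train", transform=tfm)  # hypothetical path
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.convnext_tiny(weights=models.ConvNeXt_Tiny_Weights.DEFAULT)
model.classifier[2] = nn.Linear(model.classifier[2].in_features, 5)  # five corrosion stages

opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
for images, labels in loader:          # one fine-tuning epoch
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```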

A Safety Score Prediction Model in Urban Environment Using Convolutional Neural Network (컨볼루션 신경망을 이용한 도시 환경에서의 안전도 점수 예측 모델 연구)

  • Kang, Hyeon-Woo;Kang, Hang-Bong
    • KIPS Transactions on Software and Data Engineering / v.5 no.8 / pp.393-400 / 2016
  • Recently, there has been a variety of research on efficient and automatic analysis of urban environments using computer vision and machine learning technology. Among these, urban safety analysis has received major attention. To predict safety scores more accurately and reflect human visual perception, it is necessary to consider both the generic and local information that matter most to human perception. In this paper, we use a double-column convolutional neural network consisting of generic and local columns to predict urban safety. The generic and local columns take resized and randomly cropped versions of the original images as input, respectively. In addition, a new learning method is proposed to solve the problem of over-fitting in a particular column during training. To compare the performance of our double-column convolutional neural network, we evaluate two Support Vector Regression models and three convolutional neural network models using root mean square error and correlation analysis. Our experimental results demonstrate that the double-column convolutional neural network shows the best performance, with a root mean square error of 0.7432 and Pearson/Spearman correlation coefficients of 0.853/0.840.
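
The double-column idea can be sketched roughly as follows; the column depths, feature sizes, and crop sizes here are assumptions rather than the authors' configuration.

```python
# Rough sketch of a double-column CNN: one column sees the resized full image
# (generic view), the other a random crop (local view); features are fused for
# safety-score regression.
import torch
from torch import nn

def conv_column():
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
    )

class DoubleColumnCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.generic = conv_column()   # resized whole image
        self.local = conv_column()     # random crop
        self.regressor = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, resized, cropped):
        feats = torch.cat([self.generic(resized), self.local(cropped)], dim=1)
        return self.regressor(feats)

model = DoubleColumnCNN()
scores = model(torch.randn(4, 3, 224, 224), torch.randn(4, 3, 128, 128))  # 4 safety scores
```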

Oil Pipeline Weld Defect Identification System Based on Convolutional Neural Network

  • Shang, Jiaze;An, Weipeng;Liu, Yu;Han, Bang;Guo, Yaodan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.3 / pp.1086-1103 / 2020
  • The automatic identification and classification of image-based weld defects is a difficult task due to the complex texture of X-ray images of weld defects. Several deep learning methods for automatically identifying welds were proposed and tested. In this work, four different deep convolutional neural networks were evaluated and compared on a set of 1,631 images. Six types of defects in the weld images are classified: concavity, undercut, bar defects, circular defects, unfused defects, and incomplete penetration. Another contribution of this paper is to train a CNN model, "RayNet", for the dataset from scratch. In the experimental part, the parameters of the convolution operation are compared and analyzed, the effect of the input image size is examined, classification results for each defect are given, and partial feature maps from the feature extraction stage are shown. The classification accuracy reaches 96.5%, which is 6.6% higher than that of existing fine-tuned models and also improves on traditional image processing methods, showing that a model trained from scratch can perform well on small-scale datasets. Our proposed method can assist evaluators in classifying pipeline welding defects.
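
For illustration only, a small CNN trained from scratch for six-class weld-defect classification in the spirit of the "RayNet" described above; its actual layer configuration is not given here, so this architecture is an assumption.

```python
# Illustrative weld-defect classifier trained from scratch (architecture assumed).
import torch
from torch import nn

class SmallWeldNet(nn.Module):
    def __init__(self, num_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):                     # x: grayscale X-ray patches
        return self.classifier(torch.flatten(self.features(x), 1))

logits = SmallWeldNet()(torch.randn(8, 1, 128, 128))  # 8 patches -> 8 x 6 logits
```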

A Proposal of Deep Learning Based Semantic Segmentation to Improve Performance of Building Information Models Classification (Semantic Segmentation 기반 딥러닝을 활용한 건축 Building Information Modeling 부재 분류성능 개선 방안)

  • Lee, Ko-Eun;Yu, Young-Su;Ha, Dae-Mok;Koo, Bon-Sang;Lee, Kwan-Hoon
    • Journal of KIBIM / v.11 no.3 / pp.22-33 / 2021
  • To maximize the use of BIM, all data related to individual elements in the model must be correctly assigned, and it is essential to check whether each element corresponds to the correct IFC entity classification. However, since the BIM modeling process is performed by a large number of participants, complete integrity is difficult to achieve. To solve this problem, studies on semantic integrity verification apply artificial intelligence algorithms to 2D images of each element to examine whether elements are correctly classified or IFC-mapped in the BIM model. Existing studies could not correctly classify some elements even when the geometric differences in the images were clear. This was found to be because the geometric characteristics were not properly reflected during learning, as the region of the image to be learned was not clearly defined. In this study, CRF-RNN-based semantic segmentation was applied to delineate the element region within each image more clearly, and the segmented images were then fed to the MVCNN algorithm to improve classification performance. Applying semantic segmentation in the MVCNN learning process to 889 images covering a total of eight BIM element types yielded a classification accuracy of 0.92, an improvement of 0.06 over the conventional MVCNN.
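
A condensed sketch of the classification stage (MVCNN-style view pooling over multiple 2D views of one BIM element). The CRF-RNN segmentation step is represented only by a placeholder function, and the ResNet-18 backbone is an assumption.

```python
# Sketch: MVCNN view pooling over segmented 2D views of a BIM element.
import torch
from torch import nn
from torchvision import models

def segment_element(view):
    # Placeholder for the CRF-RNN semantic segmentation that isolates the element
    # region in each view before classification.
    return view

class MVCNN(nn.Module):
    def __init__(self, num_classes=8):
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # drop final fc
        self.fc = nn.Linear(512, num_classes)

    def forward(self, views):                      # views: (batch, n_views, 3, H, W)
        b, v = views.shape[:2]
        feats = self.encoder(views.flatten(0, 1)).flatten(1)   # (b*v, 512)
        pooled = feats.view(b, v, -1).max(dim=1).values        # view pooling
        return self.fc(pooled)

views = torch.stack([segment_element(torch.randn(3, 224, 224)) for _ in range(12)])
logits = MVCNN()(views.unsqueeze(0))               # one element, 12 views -> 8 logits
```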

Two person Interaction Recognition Based on Effective Hybrid Learning

  • Ahmed, Minhaz Uddin;Kim, Yeong Hyeon;Kim, Jin Woo;Bashar, Md Rezaul;Rhee, Phill Kyu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.2 / pp.751-770 / 2019
  • Action recognition is an essential task in computer vision due to the variety of prospective applications, such as security surveillance, machine learning, and human-computer interaction. The availability of more video data than ever before and the strong performance of deep convolutional neural networks also make action recognition in video essential. Unfortunately, the limitations of hand-crafted video features and the scarcity of benchmark datasets make multi-person action recognition in video data challenging. In this work, we propose a deep convolutional neural network-based Effective Hybrid Learning (EHL) framework for two-person interaction classification in video data. Our approach exploits a pre-trained network model (VGG16 from the University of Oxford Visual Geometry Group) and extends Faster R-CNN (a state-of-the-art region-based convolutional neural network detector). We combine a semi-supervised learning method with an active learning method to improve overall performance. Numerous types of two-person interactions exist in the real world, which makes this a challenging task. In our experiments, we consider a limited number of actions, such as hugging, fighting, linking arms, talking, and kidnapping, in two environments: simple and complex. We show that our trained model with an active semi-supervised learning architecture gradually improves performance. In a simple environment, using the Intelligent Technology Laboratory (ITLab) dataset from Inha University, accuracy increased to 95.6%, and in a complex environment it reached 81%. Compared to supervised learning methods, our method reduces data-labeling time for the ITLab dataset. We also conduct extensive experiments on human action recognition benchmarks such as the UT-Interaction and HMDB51 datasets and obtain better performance than state-of-the-art approaches.
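
A sketch of the detection step, using an off-the-shelf torchvision Faster R-CNN (ResNet50-FPN) as a stand-in for the paper's VGG16-based detector; the interaction classifier itself is only indicated in comments.

```python
# Sketch only: detect the two persons in a frame before interaction classification.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

frame = torch.rand(3, 480, 640)                  # one video frame in [0, 1]
with torch.no_grad():
    det = detector([frame])[0]

person_boxes = det["boxes"][det["labels"] == 1]  # COCO label 1 = person
# A downstream CNN (not shown) would classify the interaction (hug, fight, talk, ...)
# from the region covering the two highest-scoring person boxes across consecutive frames.
```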

The Evaluation of Denoising PET Image Using Self Supervised Noise2Void Learning Training: A Phantom Study (자기 지도 학습훈련 기반의 Noise2Void 네트워크를 이용한 PET 영상의 잡음 제거 평가: 팬텀 실험)

  • Yoon, Seokhwan;Park, Chanrok
    • Journal of radiological science and technology / v.44 no.6 / pp.655-661 / 2021
  • Positron emission tomography (PET) images are affected by acquisition time; short acquisition times result in low gamma counts, which degrade image quality through statistical noise. Noise2Void (N2V) is a self-supervised denoising model based on a convolutional neural network (CNN). The purpose of this study is to evaluate the denoising performance of N2V for PET images acquired over a short time. The phantom was scanned in list mode for 10 min using a Biograph mCT40 PET/CT (Siemens Healthcare, Erlangen, Germany). We compared PET images of a NEMA image-quality phantom for the standard acquisition time (10 min), a short acquisition time (2 min), and a simulated PET image (S2 min). To evaluate the performance of N2V, the peak signal-to-noise ratio (PSNR), normalized root mean square error (NRMSE), structural similarity index (SSIM), and radioactivity recovery coefficient (RC) were used. The PSNR, NRMSE, and SSIM for the 2 min and S2 min PET images, compared to the 10 min PET image, were 30.983 and 33.936, 9.954 and 7.609, and 0.916 and 0.934, respectively. The RC for the spheres in the S2 min PET image also met the European Association of Nuclear Medicine Research Ltd. (EARL) FDG PET accreditation program. We confirmed that the S2 min PET image generated by the N2V deep learning model showed improved results compared to the 2 min PET image, and on visual analysis the 10 min and S2 min PET images were comparable. In conclusion, the quality of noisy PET images acquired over a short time can be improved by the N2V denoising network without underestimating radioactivity.
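
A toy sketch of the Noise2Void blind-spot idea (not the study's implementation): a few pixels are replaced by neighbouring values and the network is trained to predict the original values only at those masked positions.

```python
# Toy Noise2Void-style training step; network depth, mask rate, and the crude
# neighbour replacement are assumptions for illustration.
import torch
from torch import nn

net = nn.Sequential(                       # small stand-in for the denoising CNN
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

noisy = torch.rand(4, 1, 64, 64)           # noisy short-acquisition patches
mask = (torch.rand_like(noisy) < 0.01).float()        # ~1% blind-spot pixels
shifted = torch.roll(noisy, shifts=1, dims=-1)         # crude neighbour replacement
inputs = noisy * (1 - mask) + shifted * mask

opt.zero_grad()
pred = net(inputs)
loss = ((pred - noisy) ** 2 * mask).sum() / mask.sum()  # loss only at masked pixels
loss.backward()
opt.step()
```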

Deobfuscation Processing and Deep Learning-Based Detection Method for PowerShell-Based Malware (파워쉘 기반 악성코드에 대한 역난독화 처리와 딥러닝 기반 탐지 방법)

  • Jung, Ho-jin;Ryu, Hyo-gon;Jo, Kyu-whan;Lee, Sangkyun
    • Journal of the Korea Institute of Information Security & Cryptology / v.32 no.3 / pp.501-511 / 2022
  • In 2021, ransomware attacks became widespread, and their number is increasing rapidly every year. Since PowerShell is used as the primary ransomware technique, the need for PowerShell-based malware detection is ever increasing. However, existing detection techniques are limited in that they cannot detect obfuscated scripts or require a long processing time for deobfuscation. This paper proposes a simple and fast deobfuscation method and a deep learning-based classification model that can detect PowerShell-based malware. Our technique combines Word2Vec and a convolutional neural network to learn the meaning of a script while extracting important features. We tested the proposed model using 1,400 malicious scripts and 8,600 normal scripts provided by the AI-based PowerShell malicious script detection track of the 2021 Cybersecurity AI/Big Data Utilization Contest. Our method achieved deobfuscation 5.04 times faster than existing methods with a perfect success rate, and high detection performance with an FPR of 0.01 and a TPR of 0.965.
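
A hedged sketch of the pipeline described above: Word2Vec embeddings of tokens from a deobfuscated PowerShell script fed to a small 1D CNN classifier. The tokens, embedding size, and layer sizes are assumptions.

```python
# Sketch: Word2Vec token embeddings -> 1D CNN classifier (malicious vs. benign).
import torch
from torch import nn
from gensim.models import Word2Vec

scripts = [["invoke-expression", "downloadstring", "new-object"],   # illustrative tokens
           ["get-childitem", "write-output"]]
w2v = Word2Vec(scripts, vector_size=64, window=5, min_count=1)

def embed(tokens, max_len=32):
    vecs = [w2v.wv[t].tolist() for t in tokens][:max_len]
    vecs += [[0.0] * 64] * (max_len - len(vecs))                    # pad to fixed length
    return torch.tensor(vecs).T                                     # (64, max_len)

clf = nn.Sequential(
    nn.Conv1d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveMaxPool1d(1), nn.Flatten(), nn.Linear(128, 2),
)
logits = clf(torch.stack([embed(s) for s in scripts]))              # 2 scripts -> 2 x 2 logits
```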

A DCT Learning Combined RRU-Net for the Image Splicing Forgery Detection (DCT 학습을 융합한 RRU-Net 기반 이미지 스플라이싱 위조 영역 탐지 모델)

  • Young-min Seo;Jung-woo Han;Hee-jung Kwon;Su-bin Lee;Joongjin Kook
    • Journal of the Semiconductor & Display Technology / v.22 no.1 / pp.11-17 / 2023
  • This paper proposes a lightweight deep learning network for detecting image splicing forgery. Research on image forgery detection using CNNs and on detecting and localizing forgery at the pixel level is ongoing. Among these, CAT-Net, which learns the discrete cosine transform (DCT) coefficients of images together with the images themselves, was released in 2022. In CAT-Net, the DCT coefficients are handled by a JPEG artifact learning module that is combined with the backbone model through pre-training, after which the weights are fixed. The dataset used for pre-training is not publicly available, and the backbone model has a relatively large number of network parameters, which causes overfitting on small datasets and hinders generalization performance. In this paper, the learning module is designed to learn DCT-domain characteristics in real time during network training, without pre-training. The proposed DCT RRU-Net combines RRU-Net, which detects forgery by learning only from images, with the JPEG artifact learning module. We confirm that it has fewer network parameters than CAT-Net, detects forgery better than RRU-Net, and that its generalization performance on various datasets improves through the network architecture and training method of DCT RRU-Net.
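
A simplified illustration (not DCT RRU-Net itself) of fusing an RGB branch with a DCT-coefficient branch before predicting a pixel-wise forgery mask; the blockwise DCT and branch widths are assumptions.

```python
# Toy two-branch fusion: image features + 8x8 blockwise DCT features -> forgery logits.
import torch
from torch import nn
from scipy.fft import dctn

def block_dct(gray, block=8):
    # 8x8 blockwise 2-D DCT, roughly mirroring the JPEG transform domain.
    h, w = gray.shape
    out = gray.clone()
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            out[i:i+block, j:j+block] = torch.tensor(
                dctn(gray[i:i+block, j:j+block].numpy(), norm="ortho"))
    return out

rgb_branch = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
dct_branch = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
head = nn.Conv2d(32, 1, 1)                        # per-pixel forgery logit

img = torch.rand(1, 3, 64, 64)
dct = block_dct(img[0].mean(dim=0)).unsqueeze(0).unsqueeze(0)
mask_logits = head(torch.cat([rgb_branch(img), dct_branch(dct)], dim=1))
```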

Prediction of Ship Travel Time in Harbour using 1D-Convolutional Neural Network (1D-CNN을 이용한 항만내 선박 이동시간 예측)

  • Sang-Lok Yoo;Kwang-Il Ki;Cho-Young Jung
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2022.06a / pp.275-276 / 2022
  • VTS operators instruct ships to wait for entry and departure so that vessels sail one way at a time, preventing collision accidents in ports with narrow routes. Currently, these instructions are not based on scientific or statistical data, so there is significant deviation depending on the individual capability of the VTS operators. Accordingly, this study built a 1D convolutional neural network model using collected ship and weather data to predict the exact travel time for ships waiting for entry/departure instructions in the port. The proposed model improved by more than 4.5% over other ensemble machine learning models. Through this study, the time required for a vessel to enter and depart in various situations can be predicted, which is expected to help VTS operators provide accurate information to vessels and determine the waiting order.
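
A rough sketch of a 1D-CNN regressor over a short sequence of ship and weather features; the feature names, sequence length, and layer sizes are assumptions, not the paper's configuration.

```python
# Illustrative 1D-CNN travel-time regressor over ship/weather feature sequences.
import torch
from torch import nn

n_features, seq_len = 6, 20        # e.g. speed, course, wind speed/direction, tide, draft
model = nn.Sequential(
    nn.Conv1d(n_features, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(64, 1),              # predicted travel time (e.g. in minutes)
)

batch = torch.randn(16, n_features, seq_len)    # 16 port-transit samples
predicted_minutes = model(batch)
```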
