  • Title/Summary/Keyword: Image Prediction Model (이미지 예측 모델)

A technique for predicting the cutting points of fish for the target weight using AI machine vision

  • Jang, Yong-hun;Lee, Myung-sub
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.4
    • /
    • pp.27-36
    • /
    • 2022
  • In this paper, to improve conditions at fish processing sites, we propose a method to predict the cutting point of a fish for a target weight using AI machine vision. The proposed method first photographs the top and front views of the input fish and performs image-based preprocessing. RANSAC (RANdom SAmple Consensus) is then used to extract the fish contour, and 3D external information about the fish is obtained through 3D modeling. Next, machine learning is performed on the extracted three-dimensional feature information and the measured weight information to generate a neural network model. The fish is then cut at the cutting point predicted by the proposed technique, and the weight of the cut piece is measured. We compared the measured weight with the target weight and evaluated performance using metrics such as MAE (Mean Absolute Error) and MRE (Mean Relative Error). The results indicate that an average error rate of less than 3% was achieved relative to the target weight. The proposed technique is expected to contribute greatly to the development of the fishery industry when integrated with automation systems.
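
To make the final stage of such a pipeline concrete, below is a minimal sketch (not the authors' code) of a regression model that maps 3D shape features plus a target weight to a cutting position and scores it with MAE and MRE. The feature layout and the synthetic data are assumptions for illustration only.

```python
# Minimal sketch of the regression stage: 3-D shape features + target weight
# -> predicted cutting position, scored with MAE and MRE.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical feature layout: [length, height, width, volume, target_weight]
X = rng.uniform(0.2, 1.0, size=(500, 5))
# Hypothetical ground-truth cutting position along the fish body (0..1).
y = 0.3 * X[:, 0] + 0.5 * X[:, 4] + rng.normal(0, 0.01, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
mae = np.mean(np.abs(pred - y_te))                 # Mean Absolute Error
mre = np.mean(np.abs(pred - y_te) / np.abs(y_te))  # Mean Relative Error
print(f"MAE={mae:.4f}  MRE={mre:.2%}")
```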

Research on depth information based object-tracking and stage size estimation for immersive audio panning (이머시브 오디오 패닝을 위한 깊이 정보 기반 객체 추적 및 무대 크기 예측에 관한 연구)

  • Kangeun Lee;Hongjun Park;Sungyoung Kim
    • The Journal of the Acoustical Society of Korea
    • /
    • v.43 no.5
    • /
    • pp.529-535
    • /
    • 2024
  • This paper presents our research on automatic audio panning for media content production. Previously, panning an audio object was done manually; with the advent of the immersive audio era, the need for an automatic audio panning system has increased, yet little substantial research has been conducted to date. We therefore propose a computer vision-based human tracking and depth feature processing system that derives depth features from 2-dimensional coordinates and models a 3-dimensional view transformation for automatic audio panning, ensuring audiovisual congruence. The system also applies a stage size estimation model that takes an image as input and estimates the stage width and depth in meters. Since the system estimates stage sizes and applies them directly to the view transformation, no additional depth data training is required. To validate the proposed system, we conducted a pilot test with a Unity-based sample video. We expect the system to enable automated audio panning and assist many audio engineers.
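
The geometric mapping below is a hedged sketch of one way such a view transformation could work: a tracked horizontal pixel coordinate plus an estimated stage size in meters is converted to an azimuth pan angle. The listener distance and the geometry are illustrative assumptions, not the paper's implementation.

```python
# Sketch: tracked 2-D position + estimated stage size (metres) -> azimuth pan angle.
import math

def pan_angle(x_px: float, img_width: int,
              stage_width_m: float, stage_depth_m: float,
              listener_distance_m: float = 3.0) -> float:
    """Map a horizontal pixel coordinate to an azimuth angle in degrees."""
    # Normalise the pixel position to [-0.5, 0.5] across the image.
    x_norm = x_px / img_width - 0.5
    # Convert to metres using the estimated stage width.
    x_m = x_norm * stage_width_m
    # Distance from listener to the middle of the stage (hypothetical geometry).
    depth_m = listener_distance_m + stage_depth_m / 2.0
    return math.degrees(math.atan2(x_m, depth_m))

# Example: performer tracked at pixel 1500 in a 1920-px frame,
# stage estimated as 10 m wide and 6 m deep.
print(f"azimuth = {pan_angle(1500, 1920, 10.0, 6.0):.1f} degrees")
```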

Generating Test Data for Deep Neural Network Model using Synonym Replacement (동의어 치환을 이용한 심층 신경망 모델의 테스트 데이터 생성)

  • Lee, Min-soo;Lee, Chan-gun
    • Journal of Software Engineering Society
    • /
    • v.28 no.1
    • /
    • pp.23-28
    • /
    • 2019
  • Recently, to effectively test deep neural network models for image processing applications, research has been actively conducted on automatically generating corner-case data that the model does not predict correctly. This paper proposes a test data generation method that selects arbitrary words from the system input and replaces them with synonyms in order to test a bug report automatic assignment system based on a sentence classification deep neural network model. In addition, we compare and evaluate the proposed test data generation against existing difference-inducing test data generation methods based on various neuron coverage metrics.
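
As an illustration of the synonym replacement idea, the following sketch selects random words from an input sentence and substitutes synonyms from a toy lexicon. The synonym table and the bug-report sentence are placeholders, not the paper's data or code.

```python
# Sketch: generate a test sentence by replacing randomly chosen words with synonyms.
import random

SYNONYMS = {
    "crash": ["failure", "breakdown"],
    "screen": ["display", "view"],
    "slow": ["sluggish", "laggy"],
}

def generate_test_sentence(sentence: str, n_replace: int = 1,
                           rng: random.Random = random.Random(0)) -> str:
    words = sentence.split()
    # Indices of words that have an entry in the synonym table.
    candidates = [i for i, w in enumerate(words) if w.lower() in SYNONYMS]
    for i in rng.sample(candidates, min(n_replace, len(candidates))):
        words[i] = rng.choice(SYNONYMS[words[i].lower()])
    return " ".join(words)

bug_report = "app crash when screen rotates and scrolling is slow"
print(generate_test_sentence(bug_report, n_replace=2))
```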

A Bulge Detection Model in Cultural Asset images using Ensemble of Deep Features (심층 특징들의 앙상블을 사용한 목조 문화재 영상에서의 배부름 감지 모델)

  • Kang, Jaeyong;Kim, Inki;Lim, Hyunseok;Gwak, Jeonghwan
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2021.07a
    • /
    • pp.129-131
    • /
    • 2021
  • In this paper, we propose a model that detects bulging, one of the displacement phenomena of wooden cultural assets, using an ensemble of deep features. First, deep features are extracted from the input image using four different pre-trained convolutional neural networks. The four sets of deep features are then combined into a single feature vector. This fused feature vector is fed into a fully connected layer, which finally predicts whether a displacement is present. As the dataset, we used images of wooden cultural assets collected by visiting cultural heritage sites near Chungju, labeled as normal or abnormal. Experimental results confirm that the model using the deep feature ensemble outperforms the model without the ensemble. These results show that the proposed method is well suited for detecting displacement due to bulging in wooden cultural assets.
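
The following PyTorch sketch illustrates the described feature-ensemble structure under stated assumptions: four pretrained torchvision backbones (chosen here only as examples; the abstract does not name the four networks) produce pooled feature vectors that are concatenated and classified by a fully connected head.

```python
# Sketch: deep feature ensemble of four pretrained CNNs + fully connected head.
import torch
import torch.nn as nn
from torchvision import models

class DeepFeatureEnsemble(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Four different pretrained backbones; classifiers replaced by Identity
        # so each backbone returns its pooled feature vector.
        self.backbones = nn.ModuleList([
            models.resnet18(weights="DEFAULT"),
            models.densenet121(weights="DEFAULT"),
            models.mobilenet_v2(weights="DEFAULT"),
            models.efficientnet_b0(weights="DEFAULT"),
        ])
        self.backbones[0].fc = nn.Identity()          # 512-dim features
        self.backbones[1].classifier = nn.Identity()  # 1024-dim features
        self.backbones[2].classifier = nn.Identity()  # 1280-dim features
        self.backbones[3].classifier = nn.Identity()  # 1280-dim features
        self.head = nn.Linear(512 + 1024 + 1280 + 1280, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [b(x) for b in self.backbones]   # extract deep features
        fused = torch.cat(feats, dim=1)          # concatenate into one vector
        return self.head(fused)                  # predict normal vs. bulging

model = DeepFeatureEnsemble()
logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 2])
```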

Transfer Learning-based Multi-Modal Fusion Answer Selection Model for Video Question Answering System (비디오 질의 응답 시스템을 위한 전이 학습 기반의 멀티 모달 퓨전 정답 선택 모델)

  • Park, Gyu-Min;Park, Seung-Bae
    • Annual Conference on Human and Language Technology
    • /
    • 2021.10a
    • /
    • pp.548-553
    • /
    • 2021
  • Video question answering is a representative multi-modal problem that requires processing diverse information such as text and images to provide an appropriate answer to a given video and question. To improve performance, question answering systems often employ multiple different response modules and therefore need an answer selection module that chooses the most appropriate answer from the generated candidates. The answer selection module should make its choice while taking the different perspectives of the response modules into account. However, when a response module is a black-box model, it is difficult for the answer selection module to receive knowledge through the module's parameters and prediction distributions. Moreover, since the training dataset was already used to train the response modules, overfitting makes it hard to learn each module's perspective from it, and the selection module must instead be trained on a relatively small dataset outside the training set. In this paper, we propose a transfer learning-based multi-modal fusion answer selection model to improve answer selection performance. We measured its performance on the DramaQA dataset and experimentally demonstrated the superiority of the proposed model.
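
A hedged sketch of a fusion-based answer selector follows: video and question features are fused and each candidate answer from the response modules is scored, with the highest-scoring candidate selected. The feature dimensions and architecture are hypothetical stand-ins, not the proposed model.

```python
# Sketch: score candidate answers from fused video + question features.
import torch
import torch.nn as nn

class FusionAnswerSelector(nn.Module):
    def __init__(self, video_dim=512, text_dim=768, hidden=256):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(video_dim + text_dim + text_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # one relevance score per candidate
        )

    def forward(self, video_feat, question_feat, candidate_feats):
        # video_feat: (B, video_dim), question_feat: (B, text_dim)
        # candidate_feats: (B, C, text_dim) -- C candidate answers
        B, C, _ = candidate_feats.shape
        context = torch.cat([video_feat, question_feat], dim=-1)
        context = context.unsqueeze(1).expand(B, C, -1)
        scores = self.fuse(torch.cat([context, candidate_feats], dim=-1)).squeeze(-1)
        return scores.argmax(dim=-1), scores  # selected index and raw scores

selector = FusionAnswerSelector()
idx, scores = selector(torch.randn(2, 512), torch.randn(2, 768), torch.randn(2, 5, 768))
print(idx.shape, scores.shape)  # torch.Size([2]) torch.Size([2, 5])
```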

Performance Analysis of Anomaly Area Segmentation in Industrial Products Based on Self-Attention Deep Learning Model (Self-Attention 딥러닝 모델 기반 산업 제품의 이상 영역 분할 성능 분석)

  • Changjoon Park;Namjung Kim;Junhwi Park;Jaehyun Lee;Jeonghwan Gwak
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2024.01a
    • /
    • pp.45-46
    • /
    • 2024
  • In this paper, we apply the Dense Prediction Transformer (DPT), a Self-Attention-based deep learning model, to the MVTec Anomaly Detection (MVTec AD) dataset to segment anomalous regions in real industrial product images. Applying the DPT model mitigates the limitations of existing Convolutional Neural Network (CNN)-based anomaly detection methods, namely local feature extraction and fixed receptive fields. In anomaly segmentation on real industrial product data, it improves performance by 1.14% over the best-performing model built on the previously dominant U-Net architecture, demonstrating that Self-Attention-based deep learning is effective for industrial product anomaly segmentation.
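
The following minimal PyTorch sketch (not the DPT implementation itself) conveys the self-attention idea behind dense prediction: patch tokens attend to each other globally, avoiding the fixed local receptive field of a CNN, and are then upsampled into a per-pixel anomaly mask. All layer sizes are illustrative.

```python
# Sketch: self-attention over image patches, upsampled to a per-pixel mask.
import torch
import torch.nn as nn

class TinyAttentionSegmenter(nn.Module):
    def __init__(self, patch=16, dim=128, heads=4, depth=2, num_classes=2):
        super().__init__()
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.classify = nn.Conv2d(dim, num_classes, kernel_size=1)

    def forward(self, x):
        B, _, H, W = x.shape
        tokens = self.embed(x)                      # (B, dim, H/p, W/p)
        h, w = tokens.shape[2], tokens.shape[3]
        tokens = tokens.flatten(2).transpose(1, 2)  # (B, N, dim) patch tokens
        tokens = self.encoder(tokens)               # global self-attention
        feat = tokens.transpose(1, 2).reshape(B, -1, h, w)
        logits = self.classify(feat)                # per-patch class logits
        # Upsample back to full resolution for a per-pixel segmentation map.
        return nn.functional.interpolate(logits, size=(H, W), mode="bilinear",
                                         align_corners=False)

model = TinyAttentionSegmenter()
mask_logits = model(torch.randn(1, 3, 256, 256))
print(mask_logits.shape)  # torch.Size([1, 2, 256, 256])
```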

Comparison of Seismic Data Interpolation Performance using U-Net and cWGAN (U-Net과 cWGAN을 이용한 탄성파 탐사 자료 보간 성능 평가)

  • Yu, Jiyun;Yoon, Daeung
    • Geophysics and Geophysical Exploration
    • /
    • v.25 no.3
    • /
    • pp.140-161
    • /
    • 2022
  • Seismic data are often acquired with regularly or irregularly missing traces due to environmental and economic constraints, so seismic data interpolation is an essential step in seismic data processing. Recently, research on machine learning-based seismic data interpolation has been flourishing. In particular, the convolutional neural network (CNN) and the generative adversarial network (GAN), algorithms widely used for super-resolution problems in image processing, are also used for seismic data interpolation. In this study, the CNN-based U-Net and the GAN-based conditional Wasserstein GAN (cWGAN) were used as seismic data interpolation methods, and their results and performance were evaluated thoroughly to find an optimal interpolation method that reconstructs missing seismic data with high accuracy. The work process for model training and performance evaluation was divided into two cases (Cases I and II). In Case I, we trained the model using only regularly sampled data with 50% missing traces and evaluated it on six different test datasets consisting of combinations of regular and irregular sampling at different sampling ratios. In Case II, six different models were generated using training datasets sampled in the same way as the six test datasets, and these models were applied to the same test datasets used in Case I for comparison. We found that cWGAN showed better prediction performance than U-Net, with higher PSNR and SSIM. However, cWGAN introduced additional noise into the prediction results, so an ensemble technique was applied to remove the noise and improve accuracy. The cWGAN ensemble model successfully removed the noise and showed improved PSNR and SSIM compared with the individual models.
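
The ensemble step can be sketched as follows: several independent reconstructions of the same gather are averaged to suppress generator noise, and PSNR/SSIM are compared before and after averaging. The reconstructions here are simulated with additive noise rather than produced by actual cWGAN models.

```python
# Sketch: ensemble averaging of noisy reconstructions, scored with PSNR/SSIM.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
true_gather = rng.normal(size=(128, 128))  # synthetic reference seismic section

# Simulated outputs of K independently trained generative models (signal + noise).
K = 5
recons = [true_gather + rng.normal(scale=0.3, size=true_gather.shape)
          for _ in range(K)]

ensemble = np.mean(recons, axis=0)         # ensemble average of the reconstructions

drange = float(true_gather.max() - true_gather.min())
for name, img in [("single model", recons[0]), ("ensemble", ensemble)]:
    psnr = peak_signal_noise_ratio(true_gather, img, data_range=drange)
    ssim = structural_similarity(true_gather, img, data_range=drange)
    print(f"{name:12s}  PSNR={psnr:.2f} dB  SSIM={ssim:.3f}")
```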

Prediction of Music Generation on Time Series Using Bi-LSTM Model (Bi-LSTM 모델을 이용한 음악 생성 시계열 예측)

  • Kwangjin, Kim;Chilwoo, Lee
    • Smart Media Journal
    • /
    • v.11 no.10
    • /
    • pp.65-75
    • /
    • 2022
  • Deep learning is used as a creative tool that can overcome the limitations of existing analysis models and generate various types of output such as text, images, and music. In this paper, we propose a method for preprocessing audio data, using the Niko's MIDI Pack sound source files as the dataset, and for generating music with a Bi-LSTM. Based on the generated root note, multiple hidden layers are stacked to create new notes suitable for the musical composition, and an attention mechanism is applied to the output gate of the decoder to weight the factors that affect the data input from the encoder. Settings such as the loss function and optimization method are applied as parameters to improve the LSTM model. The proposed model is a multi-channel Bi-LSTM with attention that uses the note pitches obtained by separating the treble and bass clefs, note lengths, rests, rest lengths, and chords to improve the efficiency and prediction of the MIDI deep learning process. The learning results generate sound that follows the development of a musical scale, distinct from noise, and we aim to contribute to generating harmonically stable music.
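
A minimal PyTorch sketch of a bidirectional LSTM next-note predictor is given below; the multi-channel inputs, attention mechanism, and MIDI preprocessing of the proposed model are omitted, and the vocabulary size is a placeholder.

```python
# Sketch: Bi-LSTM sequence model that predicts the next note token.
import torch
import torch.nn as nn

class BiLSTMNoteModel(nn.Module):
    def __init__(self, vocab_size=128, embed_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, vocab_size)   # logits over next note

    def forward(self, notes):                          # notes: (B, T) int tokens
        h, _ = self.lstm(self.embed(notes))            # (B, T, 2*hidden)
        return self.out(h)                             # (B, T, vocab_size)

model = BiLSTMNoteModel()
seq = torch.randint(0, 128, (4, 32))                   # 4 sequences of 32 notes
logits = model(seq)
next_note = logits[:, -1].argmax(dim=-1)               # greedy next-note choice
print(logits.shape, next_note.shape)  # torch.Size([4, 32, 128]) torch.Size([4])
```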

Crack Detection on Bridge Deck Using Generative Adversarial Networks and Deep Learning (적대적 생성 신경망과 딥러닝을 이용한 교량 상판의 균열 감지)

  • Ji, Bongjun
    • Journal of the Korean Recycled Construction Resources Institute
    • /
    • v.9 no.3
    • /
    • pp.303-310
    • /
    • 2021
  • Cracks in bridges are important indicators of a bridge's condition and should be monitored periodically. However, visual inspection by human experts has problems with cost, time, and reliability, so in recent years research on applying deep learning models has begun. Deep learning requires sufficient data on the situations to be predicted, but bridge crack data are relatively difficult to obtain. In particular, because the shape of bridge cracks may vary with the bridge's design, location, and construction method, it is difficult to collect a large amount of crack data for a specific situation. This study developed a crack detection model that generates additional crack data with a Generative Adversarial Network (GAN) and trains on them to compensate for insufficient data. The GAN successfully generated data statistically similar to the given crack data, and crack detection accuracy was about 3% higher when the generated images were used than when they were not. This approach is expected to effectively improve detection performance when crack detection on bridges is required but data are insufficient, as well as when there is relatively little or much data for one class (i.e., class imbalance).
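
The augmentation idea can be sketched as follows: GAN-generated crack images are mixed with the real crack images so the detector trains on a larger, more balanced set. The tensors and the generator below are placeholders standing in for real data and a trained GAN.

```python
# Sketch: combine real and GAN-generated crack images into one training set.
import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

# Real crack patches and labels (hypothetical tensors standing in for data).
real_images = torch.rand(200, 3, 128, 128)
real_labels = torch.ones(200, dtype=torch.long)          # 1 = crack

# Synthetic crack patches sampled from a generator (placeholder module here).
generator = torch.nn.Sequential(
    torch.nn.Linear(100, 3 * 128 * 128), torch.nn.Sigmoid())
with torch.no_grad():
    fake_images = generator(torch.randn(300, 100)).view(300, 3, 128, 128)
fake_labels = torch.ones(300, dtype=torch.long)

train_set = ConcatDataset([
    TensorDataset(real_images, real_labels),
    TensorDataset(fake_images, fake_labels),
])
loader = DataLoader(train_set, batch_size=32, shuffle=True)
print(f"training samples: {len(train_set)}")             # 500 = real + generated
```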

Indoor Scene Classification based on Color and Depth Images for Automated Reverberation Sound Editing (자동 잔향 편집을 위한 컬러 및 깊이 정보 기반 실내 장면 분류)

  • Jeong, Min-Heuk;Yu, Yong-Hyun;Park, Sung-Jun;Hwang, Seung-Jun;Baek, Joong-Hwan
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.3
    • /
    • pp.384-390
    • /
    • 2020
  • The reverberation applied to sound when producing movies or VR content is a very important factor for realism and liveliness. The recommended reverberation time for a space is expressed in terms of RT60 (Reverberation Time 60 dB). In this paper, we propose a scene recognition technique for automatic reverberation editing. To this end, we devised a classification model that trains on color images and predicted depth images independently within the same model. Indoor scene classification using only color information is limited by the similarity of internal structures, so a deep learning-based depth estimation technique is used to obtain spatial depth information. Ten scene classes were constructed based on RT60, and model training and evaluation were conducted. Finally, the proposed SCR + DNet (Scene Classification for Reverb + Depth Net) classifier achieves 92.4% accuracy, higher than conventional CNN classifiers.
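
A hedged sketch of the dual-input idea follows: one branch processes the color image and another the predicted depth map, and the two are fused before a 10-way scene classifier (matching the ten RT60-based classes). The branch architecture is illustrative, not the proposed SCR + DNet.

```python
# Sketch: two-branch network fusing color and depth features for scene classification.
import torch
import torch.nn as nn

def small_branch(in_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class ColorDepthSceneNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.color_branch = small_branch(3)   # RGB stream
        self.depth_branch = small_branch(1)   # predicted depth stream
        self.classifier = nn.Linear(64 + 64, num_classes)

    def forward(self, rgb, depth):
        fused = torch.cat([self.color_branch(rgb), self.depth_branch(depth)], dim=1)
        return self.classifier(fused)

model = ColorDepthSceneNet()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 1, 224, 224))
print(logits.shape)  # torch.Size([2, 10])
```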