• Title/Summary/Keyword: 합성신경망 (convolutional neural network)

A Prediction System of Skin Pore Labeling Using CNN and Image Processing (합성곱 신경망 및 영상처리 기법을 활용한 피부 모공 등급 예측 시스템)

  • Tae-Hee, Lee;Woo-Sung, Hwang;Myung-Ryul, Choi
    • Journal of IKEEE / v.26 no.4 / pp.647-652 / 2022
  • In this paper, we propose a prediction system for skin pore labeling based on a CNN (Convolutional Neural Network) model. A data set is constructed by processing skin images taken by users, and pore feature images are generated by the proposed image processing algorithm. The skin image data set was labeled for pore characteristics according to the visual classification criteria of skin beauty experts. The proposed image processing algorithm was applied to generate pore feature images from the skin images and to train a CNN model that predicts pore feature ratings. The ratings predicted by the proposed CNN model are close to the experts' visual classifications, and the model requires less training time and achieves higher prediction accuracy than the comparison model (ResNet-50). We describe the proposed image processing algorithm and CNN model, the results of the prediction system, and future research plans.
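
A minimal sketch of a pore-grade classifier in the spirit of this abstract; the layer sizes, the number of grades, and the 128x128 grayscale input are assumptions, not the architecture reported in the paper.

```python
# Hypothetical pore-grade CNN; NUM_GRADES and all layer sizes are assumptions.
import torch
import torch.nn as nn

NUM_GRADES = 5  # hypothetical number of pore rating classes

class PoreGradeCNN(nn.Module):
    def __init__(self, num_classes: int = NUM_GRADES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        # x: (batch, 1, H, W) pore feature images produced by a preprocessing step
        return self.classifier(self.features(x).flatten(1))

# Example: predict grades for a batch of 128x128 pore feature images
logits = PoreGradeCNN()(torch.randn(4, 1, 128, 128))
pred_grades = logits.argmax(dim=1)
```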

Feature Extraction and Recognition of Myanmar Characters Based on Deep Learning (딥러닝 기반 미얀마 문자의 특징 추출 및 인식)

  • Ohnmar, Khin;Lee, Sung-Keun
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.17 no.5 / pp.977-984 / 2022
  • Recently, with the economic development of Southeast Asia, the use of information devices has spread widely, and demand for application services using intelligent character recognition is increasing. This paper discusses deep-learning-based feature extraction and recognition of the script of Myanmar, one of the Southeast Asian countries. The Myanmar alphabet (33 letters) and Myanmar numerals (10 digits) are used for feature extraction. Nine features are extracted, and more than three new features are proposed. The extracted features of each character and numeral are presented with successful results. In the recognition part, a convolutional neural network is used to assess performance on character discrimination. The algorithm is implemented on captured image data sets and its implementation is evaluated. The model achieves 96% precision on the input data set and operates on real-time input images.
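
The abstract does not list the nine features, so the sketch below only illustrates the general idea of handcrafted features for a binarized character image; the specific measures (aspect ratio, ink density, centroid, hole count) are examples chosen here, not the paper's features.

```python
# Illustrative, hypothetical character features; not the nine features of the paper.
import cv2
import numpy as np

def extract_features(gray: np.ndarray) -> dict:
    # Binarize: character strokes become foreground (255)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    ys, xs = np.nonzero(binary)
    h, w = binary.shape
    # Hole count from the contour hierarchy (inner contours of the glyph)
    contours, hierarchy = cv2.findContours(binary, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
    holes = 0 if hierarchy is None else int(np.sum(hierarchy[0][:, 3] >= 0))
    return {
        "aspect_ratio": (xs.max() - xs.min() + 1) / (ys.max() - ys.min() + 1),
        "ink_density": float(len(xs)) / (h * w),
        "centroid_x": float(xs.mean()) / w,
        "centroid_y": float(ys.mean()) / h,
        "hole_count": holes,
    }
```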

Window Attention Module Based Transformer for Image Classification (윈도우 주의 모듈 기반 트랜스포머를 활용한 이미지 분류 방법)

  • Kim, Sanghoon;Kim, Wonjun
    • Journal of Broadcast Engineering / v.27 no.4 / pp.538-547 / 2022
  • Recently introduced image classification methods using Transformers show remarkable performance improvements over conventional neural-network-based methods. To consider regional features effectively, research has actively explored applying Transformers by dividing the image into multiple window regions, but learning of inter-window relationships remains insufficient. In this paper, to overcome this problem, we propose a Transformer structure that can reflect the relationships between windows during learning. The proposed method computes the importance of each window region through compression and a fully connected layer based on the self-attention operations within each window. The computed importance is applied to each window region as a learned weight of the inter-window relationships to re-calibrate the feature values. Experimental results show that the proposed method effectively improves the performance of existing Transformer-based methods.
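
A minimal sketch of window-level re-calibration in the spirit of this abstract: each window's tokens are compressed to a descriptor, a fully connected layer scores window importance, and the score re-scales that window's features. The dimensions and the sigmoid gating are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class WindowRecalibration(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(dim, dim // 4), nn.ReLU(),
                                nn.Linear(dim // 4, 1), nn.Sigmoid())

    def forward(self, x):
        # x: (batch, num_windows, tokens_per_window, dim) window-partitioned features
        pooled = x.mean(dim=2)                  # compress each window to one descriptor
        weight = self.fc(pooled).unsqueeze(-1)  # (batch, num_windows, 1, 1) importance
        return x * weight                       # re-calibrate window features

# Example: 64 windows of 49 tokens with 96-dim features
out = WindowRecalibration(96)(torch.randn(2, 64, 49, 96))
```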

360 RGBD Image Synthesis from a Sparse Set of Images with Narrow Field-of-View (소수의 협소화각 RGBD 영상으로부터 360 RGBD 영상 합성)

  • Kim, Soojie;Park, In Kyu
    • Journal of Broadcast Engineering / v.27 no.4 / pp.487-498 / 2022
  • A depth map is an image that represents distance information of 3D space on a 2D plane and is used in various 3D vision tasks. Many existing depth estimation studies mainly use narrow-FoV images, in which a significant portion of the entire scene is lost. In this paper, we propose a technique for generating 360° omnidirectional RGBD images from a sparse set of narrow-FoV images. The proposed generative adversarial network based image generation model estimates the relative FoV for the entire panoramic image from a small number of non-overlapping images and produces a 360° RGB image and depth map simultaneously. In addition, it shows improved performance by configuring the network to reflect the spherical characteristics of the 360° image.
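
The sketch below only illustrates the joint-output idea described here, a decoder head that predicts a 360° RGB image and a depth map from a shared feature volume; the backbone, equirectangular handling, and losses are omitted, and the shapes and activations are assumptions.

```python
import torch
import torch.nn as nn

class RGBDHead(nn.Module):
    def __init__(self, in_ch: int = 64):
        super().__init__()
        self.rgb = nn.Sequential(nn.Conv2d(in_ch, 3, 3, padding=1), nn.Tanh())
        self.depth = nn.Sequential(nn.Conv2d(in_ch, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, feat):
        # feat: (batch, in_ch, H, W) decoded panoramic features
        return self.rgb(feat), self.depth(feat)   # RGB in [-1, 1], depth in [0, 1]

rgb, depth = RGBDHead()(torch.randn(1, 64, 256, 512))  # 2:1 equirectangular grid
```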

Blurred Image Enhancement Techniques Using Stack-Attention (Stack-Attention을 이용한 흐릿한 영상 강화 기법)

  • Park Chae Rim;Lee Kwang Ill;Cho Seok Je
    • KIPS Transactions on Software and Data Engineering / v.12 no.2 / pp.83-90 / 2023
  • Blurred images are an important factor in lowering image recognition rates in computer vision. Blur mainly occurs when the camera is unstable or out of focus, or when an object in the scene moves quickly during the exposure time. Blurred images greatly degrade visual quality and weaken visibility, and this phenomenon occurs frequently despite the continuous development of digital camera technology. In this paper, we enhance blurred images by combining a modified building module based on a deep multi-patch neural network, designed with convolutional neural networks to capture details of the input image, with attention techniques that focus on the objects in the blurred image in multiple ways. The method measures and assigns weights at different scales to distinguish varying degrees of blur, and restores the image from coarse to fine levels, adjusting global and local regions sequentially. Through this method, we show excellent results that recover degraded image quality, support efficient object detection and feature extraction, and preserve color constancy.
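
A minimal sketch of weighting features at two scales before fusion, echoing the coarse-to-fine restoration described above; the attention form (a 1x1 convolution with sigmoid) and the two-level pyramid are assumptions, not the paper's stack-attention design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleAttention(nn.Module):
    def __init__(self, ch: int = 32):
        super().__init__()
        self.attn = nn.Sequential(nn.Conv2d(ch, 1, 1), nn.Sigmoid())

    def forward(self, coarse, fine):
        # coarse: features from downsampled patches, fine: full-resolution features
        coarse_up = F.interpolate(coarse, size=fine.shape[-2:], mode="bilinear",
                                  align_corners=False)
        # Per-pixel weights decide how much the coarse estimate contributes
        w = self.attn(coarse_up)
        return w * coarse_up + (1 - w) * fine

fused = ScaleAttention()(torch.randn(1, 32, 64, 64), torch.randn(1, 32, 128, 128))
```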

A Dual-Structured Self-Attention for improving the Performance of Vision Transformers (비전 트랜스포머 성능향상을 위한 이중 구조 셀프 어텐션)

  • Kwang-Yeob Lee;Hwang-Hee Moon;Tae-Ryong Park
    • Journal of IKEEE / v.27 no.3 / pp.251-257 / 2023
  • In this paper, we propose a dual-structured self-attention method that compensates for the lack of local features in the vision transformer's self-attention. Vision Transformers, which are more computationally efficient than convolutional neural networks in object classification, object segmentation, and video recognition, are relatively weak at extracting local features. To solve this problem, many studies build on windows or shifted windows, but these methods weaken the advantages of self-attention-based transformers by increasing computational complexity through multiple levels of encoders. This paper proposes a dual-structured self-attention that combines self-attention with a neighborhood network to improve locality inductive bias over existing methods. The neighborhood network, which extracts local context information, has much lower computational complexity than the window structure. CIFAR-10 and CIFAR-100 were used to compare the proposed dual-structured self-attention transformer with the existing transformer, and the experiments showed improvements of 0.63% and 1.57% in Top-1 accuracy, respectively.
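
A minimal sketch of a dual-structured block: global self-attention runs in parallel with a lightweight neighborhood branch (here a depthwise convolution) that injects local context. The fusion by simple addition and the depthwise kernel size are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class DualAttentionBlock(nn.Module):
    def __init__(self, dim: int = 96, heads: int = 4, grid: int = 14):
        super().__init__()
        self.grid = grid
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.neighborhood = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)  # depthwise

    def forward(self, tokens):
        # tokens: (batch, grid*grid, dim) patch embeddings
        global_out, _ = self.attn(tokens, tokens, tokens)
        b, n, d = tokens.shape
        fmap = tokens.transpose(1, 2).reshape(b, d, self.grid, self.grid)
        local_out = self.neighborhood(fmap).flatten(2).transpose(1, 2)
        return global_out + local_out  # combine global and local context

out = DualAttentionBlock()(torch.randn(2, 14 * 14, 96))
```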

A Study on the Application of ColMap in 3D Reconstruction for Cultural Heritage Restoration

  • Byong-Kwon Lee;Beom-jun Kim;Woo-Jong Yoo;Min Ahn;Soo-Jin Han
    • Journal of the Korea Society of Computer and Information / v.28 no.8 / pp.95-101 / 2023
  • COLMAP is one of the innovative artificial intelligence technologies and is highly effective as a tool for 3D reconstruction tasks. It excels at constructing intricate 3D models from images and their corresponding metadata. COLMAP generates 3D models by merging 2D images, camera position data, depth information, and so on, achieving detailed and precise 3D reconstructions of real-world objects. In addition, COLMAP provides rapid processing by leveraging GPUs, allowing efficient operation even on large data sets. In this paper, we present a method of collecting 2D images of traditional Korean towers and reconstructing them into 3D models using COLMAP. This study applied the technology to the restoration process of traditional stone towers in South Korea. As a result, we confirmed the potential applicability of COLMAP in the field of cultural heritage restoration.
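
A minimal sketch of a sparse-reconstruction pipeline driven from Python by calling the COLMAP command-line tools (feature_extractor, exhaustive_matcher, mapper); the paths are placeholders, COLMAP must already be installed, and the exact options used in the paper are not stated in the abstract.

```python
import subprocess
from pathlib import Path

images = Path("data/stone_tower_images")   # placeholder folder of 2D photographs
work = Path("data/colmap_workspace")
work.mkdir(parents=True, exist_ok=True)
database = work / "database.db"
sparse = work / "sparse"
sparse.mkdir(exist_ok=True)

# 1) Detect and describe local features in every image
subprocess.run(["colmap", "feature_extractor",
                "--database_path", str(database),
                "--image_path", str(images)], check=True)
# 2) Match features between all image pairs
subprocess.run(["colmap", "exhaustive_matcher",
                "--database_path", str(database)], check=True)
# 3) Incremental structure-from-motion: camera poses and a sparse point cloud
subprocess.run(["colmap", "mapper",
                "--database_path", str(database),
                "--image_path", str(images),
                "--output_path", str(sparse)], check=True)
# The resulting sparse model (cameras, images, points3D) can then be densified
# or exported for mesh reconstruction.
```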

Morphological Analysis of Hydraulically Stimulated Fractures by Deep-Learning Segmentation Method (딥러닝 기반 균열 추출 기법을 통한 수압 파쇄 균열 형상 분석)

  • Park, Jimin;Kim, Kwang Yeom ;Yun, Tae Sup
    • Journal of the Korean Geotechnical Society / v.39 no.8 / pp.17-28 / 2023
  • Laboratory-scale hydraulic fracturing experiments were conducted on granite specimens at various viscosities and injection rates of the fracturing fluid. A series of cross-sectional computed tomography (CT) images of the fractured specimens was obtained via three-dimensional X-ray CT imaging. Pixel-level fracture segmentation of the CT images was conducted using a convolutional neural network (CNN)-based Nested U-Net model. Compared with traditional image processing methods, the CNN-based model showed better performance in extracting thin and complex fractures. The extracted fractures were reconstructed in three dimensions and morphologically analyzed in terms of fracture volume, aperture, tortuosity, and surface roughness. The fracture volume and aperture increased with the viscosity of the fracturing fluid, while the tortuosity and roughness of the fracture surface decreased. The findings also confirmed the anisotropy of the fracture surface tortuosity and roughness. In this study, a CNN-based model was used to perform accurate fracture segmentation, and quantitative analysis of hydraulically stimulated fractures was conducted successfully.
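
A minimal sketch of two of the morphological measures mentioned above, computed from a binary 3D fracture mask; the voxel size is a placeholder and the aperture estimate (volume divided by footprint area) is a simplification, not the paper's exact definition.

```python
import numpy as np

def fracture_volume_and_aperture(mask: np.ndarray, voxel_mm: float = 0.05):
    # mask: (nz, ny, nx) boolean array, True where a fracture voxel was segmented
    volume_mm3 = mask.sum() * voxel_mm ** 3
    # Project along the assumed fracture-normal axis (axis 0) to get the fracture
    # footprint, then estimate mean aperture as volume / footprint area
    footprint = mask.any(axis=0)
    area_mm2 = footprint.sum() * voxel_mm ** 2
    mean_aperture_mm = volume_mm3 / area_mm2 if area_mm2 > 0 else 0.0
    return volume_mm3, mean_aperture_mm

vol, aperture = fracture_volume_and_aperture(np.zeros((100, 200, 200), dtype=bool))
```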

Adhesive Area Detection System of Single-Lap Joint Using Vibration-Response-Based Nonlinear Transformation Approach for Deep Learning (딥러닝을 이용하여 진동 응답 기반 비선형 변환 접근법을 적용한 단일 랩 조인트의 접착 면적 탐지 시스템)

  • Min-Je Kim;Dong-Yoon Kim;Gil Ho Yoon
    • Journal of the Computational Structural Engineering Institute of Korea / v.36 no.1 / pp.57-65 / 2023
  • A vibration-response-based detection system was used to investigate the adhesive areas of single-lap joints using a nonlinear transformation approach for deep learning. In industry and engineering, it is difficult to know the condition of an invisible part within a structure that cannot easily be disassembled, including the condition of the adhesive areas of adhesively bonded structures. To address these issues, a detection method was devised that uses nonlinear transformation to determine the adhesive areas of various single-lap-jointed specimens from the vibration response of a reference specimen. In this study, a frequency response function with nonlinear transformation was employed to identify the vibration characteristics, and a virtual spectrogram was used for classification with convolutional neural network based deep learning. Moreover, a vibration experiment, an analytical solution, and a finite-element analysis were performed to verify the developed method on aluminum, carbon fiber composite, and ultra-high-molecular-weight polyethylene specimens.
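
A minimal sketch of turning a vibration time signal into a spectrogram image suitable for a CNN classifier, as the abstract describes; the sampling rate, window length, log scaling, and the synthetic signal are assumptions standing in for the measured response.

```python
import numpy as np
from scipy import signal

fs = 10_000                      # sampling rate [Hz], placeholder
t = np.arange(0, 2.0, 1 / fs)
# Stand-in vibration response: a tone plus noise, replacing a measured signal
vibration = np.sin(2 * np.pi * 500 * t) + 0.1 * np.random.randn(t.size)

freqs, times, Sxx = signal.spectrogram(vibration, fs=fs, nperseg=256, noverlap=128)
spec_db = 10 * np.log10(Sxx + 1e-12)         # log-magnitude "virtual spectrogram"
cnn_input = spec_db[np.newaxis, np.newaxis]  # (batch, channel, freq, time) for a CNN
```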

Implementation of an alarm system with AI image processing to detect whether a helmet is worn or not and a fall accident (헬멧 착용 여부 및 쓰러짐 사고 감지를 위한 AI 영상처리와 알람 시스템의 구현)

  • Yong-Hwa Jo;Hyuek-Jae Lee
    • Journal of the Institute of Convergence Signal Processing / v.23 no.3 / pp.150-159 / 2022
  • This paper presents an implementation that extracts the image objects of several workers active in an industrial field and analyzes each of them in real time to detect whether a helmet is worn and whether a fall accident has occurred. To detect the image objects of workers, YOLO, a deep-learning-based computer vision model, was used, and helmet-wearing detection was applied to the extracted images using 5,000 different helmet training images. For fall detection, the position of the head was tracked using the Pose real-time body-tracking algorithm of MediaPipe, and the movement speed was calculated to determine whether the person had fallen. In addition, to increase the reliability of fall-accident detection, a method of inferring the posture of an object from the size of YOLO's bounding box was proposed and implemented. Finally, a Telegram API bot and a Firebase DB server were implemented to provide a notification service to administrators.
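
A minimal sketch of the fall heuristic described above: track the head landmark with MediaPipe Pose across frames and flag a fall when its downward speed exceeds a threshold. The threshold value, the use of the nose as the head reference, and the video source are assumptions; the YOLO, bounding-box posture, and notification steps are omitted.

```python
import cv2
import mediapipe as mp

FALL_SPEED = 0.8  # normalized image heights per second, placeholder threshold

pose = mp.solutions.pose.Pose()
cap = cv2.VideoCapture("worker_camera.mp4")   # placeholder video source
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
prev_y = None

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.pose_landmarks:
        # y is normalized image height and increases downward
        head_y = result.pose_landmarks.landmark[mp.solutions.pose.PoseLandmark.NOSE].y
        if prev_y is not None and (head_y - prev_y) * fps > FALL_SPEED:
            print("possible fall detected")   # hook for the alarm/notification step
        prev_y = head_y
cap.release()
```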