• Title/Summary/Keyword: Deep Learning Convergence Image Processing (딥러닝 융합 영상처리)


Urban Change Detection for High-resolution Satellite Images using DeepLabV3+ (DeepLabV3+를 이용한 고해상도 위성영상에서의 도시 변화탐지)

  • Song, Chang-Woo;Wahyu, Wiratama
    • Proceedings of the Korea Information Processing Society Conference / 2021.05a / pp.441-442 / 2021
  • In this paper, urban change detection is performed by training a deep learning algorithm on high-resolution time-series satellite images. Services based on high-resolution satellite imagery can be applied to smart cities, one of the new convergence industries of the Fourth Industrial Revolution, to help solve various urban problems such as urban decay, traffic congestion, and crime, and to build more efficient cities. This study therefore uses DeepLabV3+ as the deep learning algorithm for urban change detection. Its encoder-decoder structure progressively recovers spatial information, allowing more accurate object boundaries to be found. The proposed method modifies the layers and loss function of DeepLabV3+ and achieves better results than the original model. For an objective performance evaluation, training on the public LEVIR-CD dataset yielded a mean IoU of 0.87 and a mean Dice score of 0.93.
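The mean IoU of 0.87 and mean Dice of 0.93 reported above are standard overlap metrics for binary change masks. A minimal sketch of how they are computed (illustrative only, not the authors' code):

```python
import numpy as np

def iou_and_dice(pred: np.ndarray, target: np.ndarray):
    """Compute IoU and Dice for binary change masks (1 = changed pixel)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    iou = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return iou, dice

pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(iou_and_dice(pred, target))  # 2 overlapping pixels, 4 in the union
```

In practice these are averaged over all evaluation tiles of the dataset.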

Multi-Sensor-Based Fusion Recognition Technology for the Development of Autonomous Driving YT (자율주행 YT 개발을 위한 다중 센서 기반의 융합 인식기술)

  • Kim, Tae-Geun;Lee, Seong-Ho
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2020.11a / pp.159-160 / 2020
  • Sensors such as cameras, LiDAR, and RTK-GNSS are essential elements for developing autonomous driving YT technology. This study proposes a multi-sensor fusion recognition technique for realizing safe autonomous driving of YTs, one of the core technologies for unmanned port terminals.


Data Augmentation Techniques for Deep Learning-Based Medical Image Analyses (딥러닝 기반 의료영상 분석을 위한 데이터 증강 기법)

  • Mingyu Kim;Hyun-Jin Bae
    • Journal of the Korean Society of Radiology / v.81 no.6 / pp.1290-1304 / 2020
  • Medical image analyses have been widely used to differentiate normal and abnormal cases, detect lesions, segment organs, and so on. Recently, owing to many breakthroughs in artificial intelligence, medical image analyses based on deep learning have been actively studied. However, sufficient medical data are difficult to obtain, and class imbalance hinders improvements in deep learning performance. To resolve these issues, various studies have been performed, and data augmentation has emerged as a solution. In this review, we introduce data augmentation techniques, including image-processing methods such as rotation, shift, and intensity variation, generative adversarial network-based methods, and image-property mixing methods. We then examine various deep learning studies based on these techniques, and finally discuss the necessity and future directions of data augmentation.
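The rotation, shift, and intensity-variation augmentations surveyed in this review can be sketched with plain array operations; the shift range and scaling factors below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img: np.ndarray) -> np.ndarray:
    """Apply a random shift, 90-degree rotation, and intensity scaling."""
    # shift: roll the image by up to +/-2 pixels in each axis
    dy, dx = rng.integers(-2, 3, size=2)
    out = np.roll(img, (dy, dx), axis=(0, 1))
    # rotation: a random multiple of 90 degrees
    out = np.rot90(out, k=rng.integers(0, 4))
    # intensity variation: scale brightness by a factor in [0.8, 1.2]
    out = np.clip(out * rng.uniform(0.8, 1.2), 0, 255)
    return out

img = np.arange(64, dtype=float).reshape(8, 8)
aug = augment(img)
print(aug.shape)
```

Real pipelines typically add arbitrary-angle rotations and elastic deformations via a library such as Albumentations or torchvision.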

LSTM(Long Short-Term Memory)-Based Abnormal Behavior Recognition Using AlphaPose (AlphaPose를 활용한 LSTM(Long Short-Term Memory) 기반 이상행동인식)

  • Bae, Hyun-Jae;Jang, Gyu-Jin;Kim, Young-Hun;Kim, Jin-Pyung
    • KIPS Transactions on Software and Data Engineering / v.10 no.5 / pp.187-194 / 2021
  • Human behavior recognition identifies what a person is doing from the movements of their joints, and draws on computer vision techniques used in image processing. Combined with CCTV and deep learning, it can serve as a safety-accident response service at safety management sites. Existing studies on behavior recognition through deep-learning-based extraction of human joint keypoints are relatively scarce, and it has been difficult to monitor workers continuously and systematically at such sites. To address these problems, this paper proposes a method for recognizing risk behavior using only joint keypoints and joint motion information. AlphaPose, a pose estimation method, was used to extract the joint keypoints of the body. The extracted keypoints were fed sequentially into a Long Short-Term Memory (LSTM) model so that it could learn from continuous data. Evaluation of the recognition accuracy confirmed that accuracy was high for the "Lying Down" behavior.
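Feeding extracted joint keypoints "sequentially" into an LSTM presupposes slicing the per-frame keypoints into fixed-length windows. A minimal sketch of that windowing step (the window and stride values are assumptions, not from the paper):

```python
import numpy as np

def to_sequences(keypoints: np.ndarray, window: int = 16, stride: int = 8) -> np.ndarray:
    """Slice per-frame joint keypoints of shape (T, J, 2) into overlapping
    windows of shape (window, J*2), ready for a sequence model such as an LSTM."""
    T, J, _ = keypoints.shape
    flat = keypoints.reshape(T, J * 2)          # one flat feature vector per frame
    starts = range(0, T - window + 1, stride)   # overlapping window start frames
    return np.stack([flat[s:s + window] for s in starts])

kps = np.zeros((40, 17, 2))   # 40 frames, 17 COCO-style joints from a pose estimator
batch = to_sequences(kps)
print(batch.shape)            # (4, 16, 34): 4 windows of 16 frames x 34 features
```

Each window would then be classified into a behavior label (e.g. "Lying Down") by the trained LSTM.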

GMM-based Moving Pigs Detection under Static Camera-based Video Monitoring (고정 카메라 기반 비디오 모니터링 환경에서 GMM을 활용한 움직인 돼지 탐지)

  • Lee, Sejun;Yu, Seunghyun;Son, Seungwook;Chung, Yongwha;Park, Daihee
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.860-863 / 2021
  • Detecting only moving objects under a static camera is an important application of video monitoring. In this paper, a GMM is applied to motion-containing video frames to roughly distinguish the positions of moving and stationary pigs, and the contours of the moving pigs are then refined using additional image processing techniques together with box-level results from a deep-learning-based object detector. Experiments on video recorded in a pig pen confirmed that the proposed method can detect moving pigs effectively.
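As a rough stand-in for the paper's GMM step, a per-pixel running-Gaussian background model illustrates how moving objects are separated from a static-camera background (a simplified single-Gaussian sketch with assumed parameters, not the authors' implementation):

```python
import numpy as np

class RunningGaussianBG:
    """Per-pixel single-Gaussian background model: a pixel is foreground
    when it deviates from the running mean by more than k standard deviations."""
    def __init__(self, first_frame: np.ndarray, alpha: float = 0.05, k: float = 2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full_like(self.mean, 25.0)   # assumed initial variance
        self.alpha, self.k = alpha, k

    def apply(self, frame: np.ndarray) -> np.ndarray:
        frame = frame.astype(float)
        d2 = (frame - self.mean) ** 2
        fg = d2 > (self.k ** 2) * self.var
        # update statistics only where the pixel matched the background
        self.mean = np.where(fg, self.mean, (1 - self.alpha) * self.mean + self.alpha * frame)
        self.var = np.where(fg, self.var, (1 - self.alpha) * self.var + self.alpha * d2)
        return fg

bg = RunningGaussianBG(np.full((4, 4), 100.0))
mask = bg.apply(np.where(np.eye(4) > 0, 200.0, 100.0))  # diagonal pixels moved
print(mask.sum())  # 4 foreground pixels on the diagonal
```

A full GMM (e.g. OpenCV's `BackgroundSubtractorMOG2`) keeps several Gaussians per pixel to handle multimodal backgrounds.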

Survey of Image Segmentation Algorithms for Extracting Retinal Blood Vessels (망막혈관 검출을 위한 영상분할기법)

  • Kim, Jeong-Hwan;Seo, Seung-Yeon;Song, Chul-Gyu;Kim, Kyeong-Seop
    • Proceedings of the Korean Society of Computer Information Conference / 2019.01a / pp.397-398 / 2019
  • To effectively diagnose the shape of retinal blood vessels or changes in their formation from retinal images, developing an image segmentation technique that automatically isolates the vessels is of great importance. Most approaches first apply preprocessing that suppresses noise in the retinal image and increases vessel contrast, then detect vessels automatically by analyzing local pixel-value variations and directionality; more recently, retinal vessel segmentation algorithms based on convolutional neural network (CNN) deep learning models have been proposed.
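The contrast-enhancement preprocessing described above is commonly realized with histogram equalization; a minimal global-equalization sketch (illustrative only, not from the surveyed papers, and without the local/CLAHE variants often used on fundus images):

```python
import numpy as np

def equalize(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization: spreads the occupied gray levels
    across the full range to raise vessel-to-background contrast."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    lo = cdf[cdf > 0].min()                     # cdf at the darkest occupied level
    cdf = np.clip((cdf - lo) / (cdf[-1] - lo), 0.0, 1.0)
    return (cdf[img] * 255).astype(np.uint8)

# a low-contrast patch: gray levels squeezed into 100..103
img = np.tile(np.array([100, 101, 102, 103], dtype=np.uint8), (4, 1))
out = equalize(img)
print(out.min(), out.max())  # stretched to the full 0..255 range
```

Vessel pipelines usually apply this to the green channel, where vessel contrast is highest.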


Yolo based Light Source Object Detection for Traffic Image Big Data Processing (교통 영상 빅데이터 처리를 위한 Yolo 기반 광원 객체 탐지)

  • Kang, Ji-Soo;Shim, Se-Eun;Jo, Sun-Moon;Chung, Kyungyong
    • Journal of Convergence for Information Technology / v.10 no.8 / pp.40-46 / 2020
  • As interest in traffic safety grows, research on autonomous driving, which can reduce the incidence of traffic accidents, has increased. Object recognition and detection are essential for autonomous driving, so studies that determine road conditions from traffic image big data are being actively conducted. However, because most existing studies use only daytime data, it is difficult to recognize objects on night roads. In particular, for light source objects, daytime features cannot be used as-is because of light smudging and whitening. Therefore, this study proposes YOLO-based light source object detection for traffic image big data processing. The proposed method applies color model transitions to night traffic images, extracts object characteristics through image processing, and determines candidate object groups. A deep learning model trained on these candidate data can increase the recognition rate of light source detection on night roads.
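The color-model transition for night-time light sources can be illustrated with an RGB-to-HSV threshold: light sources tend to be very bright and, due to whitening, weakly saturated. The thresholds below are illustrative assumptions, not the paper's values:

```python
import colorsys
import numpy as np

def light_source_mask(rgb: np.ndarray, v_min: float = 0.9, s_max: float = 0.3) -> np.ndarray:
    """Flag pixels that look like night-time light sources: high value (V)
    and low saturation (S) after converting RGB to the HSV color model."""
    h, w, _ = rgb.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            r, g, b = rgb[y, x] / 255.0
            _, s, v = colorsys.rgb_to_hsv(r, g, b)
            mask[y, x] = v >= v_min and s <= s_max
    return mask

frame = np.zeros((2, 2, 3), dtype=float)
frame[0, 0] = [250, 250, 240]   # headlight: bright and nearly white -> kept
frame[1, 1] = [180, 30, 30]     # dim saturated red -> rejected by these thresholds
print(light_source_mask(frame))
```

Connected bright regions would then form the candidate groups fed to the YOLO-based detector.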

Implementation of an alarm system with AI image processing to detect whether a helmet is worn or not and a fall accident (헬멧 착용 여부 및 쓰러짐 사고 감지를 위한 AI 영상처리와 알람 시스템의 구현)

  • Yong-Hwa Jo;Hyuek-Jae Lee
    • Journal of the Institute of Convergence Signal Processing / v.23 no.3 / pp.150-159 / 2022
  • This paper presents an implementation that extracts the image objects of individual workers active in industrial fields and, through real-time analysis of each one, detects whether a helmet is worn and whether a fall accident has occurred. YOLO, a deep-learning-based computer vision model, was used to detect worker objects, and helmet wearing was determined by training on 5,000 different helmet images. To detect falls, the head position was tracked with MediaPipe's real-time Pose body-tracking algorithm, and its movement speed was computed to decide whether the person had fallen. In addition, to improve the reliability of fall detection, a method of inferring the object's posture from the size of YOLO's bounding box was proposed and implemented. Finally, a Telegram API bot and a Firebase DB server were implemented to provide a notification service for administrators.
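Combining head-movement speed (from pose tracking) with bounding-box shape (from the detector) can be sketched as a simple heuristic; all thresholds and the exact combination rule below are assumed for illustration, not taken from the paper:

```python
def is_fall(head_ys, box_w, box_h, fps=30.0, v_thresh=0.8, ar_thresh=1.2):
    """Heuristic fall check: the head keypoint drops quickly (in normalized
    image units per second; image y grows downward) AND the person's
    bounding box is wider than tall, the posture cue from the detector box."""
    dt = (len(head_ys) - 1) / fps
    speed = (head_ys[-1] - head_ys[0]) / dt if dt > 0 else 0.0
    lying = box_w / box_h >= ar_thresh
    return speed >= v_thresh and lying

# head drops from y=0.2 to y=0.8 across 3 samples at 4 fps, box now wider than tall
print(is_fall([0.2, 0.5, 0.8], box_w=1.6, box_h=0.9, fps=4.0))  # True
```

Requiring both cues to agree is what lends the posture check its role as a reliability filter on the velocity signal.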

Kernel-Based Video Frame Interpolation Techniques Using Feature Map Differencing (특성맵 차분을 활용한 커널 기반 비디오 프레임 보간 기법)

  • Dong-Hyeok Seo;Min-Seong Ko;Seung-Hak Lee;Jong-Hyuk Park
    • KIPS Transactions on Software and Data Engineering / v.13 no.1 / pp.17-27 / 2024
  • Video frame interpolation is an important technique in the video and media field, as it increases the continuity of motion and enables smooth playback. Among deep learning approaches to frame interpolation, kernel-based methods capture local changes well but have limitations in handling global changes. In this paper, we propose a new U-Net structure that applies feature map differencing in two directions to focus on capturing major changes, generating intermediate frames more accurately while reducing the number of parameters. Experimental results show that the proposed structure outperforms the existing model by up to 0.3 in PSNR with about 61% fewer parameters on common datasets such as Vimeo, Middlebury, and a new YouTube dataset. Code is available at https://github.com/Go-MinSeong/SF-AdaCoF.
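The PSNR gain quoted above measures how closely an interpolated frame matches its ground truth; for reference, the metric can be computed as follows (illustrative sketch, not the paper's evaluation code):

```python
import numpy as np

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between an interpolated frame
    and its ground-truth frame; higher means a closer reconstruction."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10 * np.log10(peak ** 2 / mse)

gt = np.full((4, 4), 100.0)
pred = gt + 5.0                     # uniform error of 5 gray levels -> MSE = 25
print(round(psnr(gt, pred), 2))     # 34.15 dB
```

A 0.3 dB PSNR gap corresponds to a roughly 7% reduction in mean squared error.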

Development of an abnormal road object recognition model based on deep learning (딥러닝 기반 불량노면 객체 인식 모델 개발)

  • Choi, Mi-Hyeong;Woo, Je-Seung;Hong, Sun-Gi;Park, Jun-Mo
    • Journal of the Institute of Convergence Signal Processing / v.22 no.4 / pp.149-155 / 2021
  • In this study, we develop a deep-learning-based recognition model that automatically detects defective road surfaces restricting the movement of the transportation handicapped who use electric mobility devices. To this end, road surface information was collected from pedestrian paths and driving routes where electric mobility aids are expected to travel in five areas of the city of Busan. Images were collected with the road surface and its surroundings divided into their constituent objects. From the collected data, recognition items such as the damage level of sidewalk blocks were defined, classified by the degree to which they impede the movement of the transportation handicapped. A deep learning model for road surface object recognition was then implemented. In the final stage of the study, after processing, refining, and annotating the image data collected object by object during actual driving, the model was trained and validated, and its performance in automatically detecting defective road surface objects was verified.