• Title/Summary/Keywords: R-CNN

Search results: 253 items

Design of Pet Behavior Classification Method Based On DeepLabCut and Mask R-CNN

  • 권주영;신민찬;문남미
    • 한국정보처리학회:학술대회논문집 / 한국정보처리학회 2021년도 추계학술발표대회 / pp.927-929 / 2021
  • As more households come to regard pets as family members (the so-called "pet-family" trend), the pet market has been growing rapidly. Against this background, this paper proposes a pet behavior classification method based on instance segmentation through object identification and on body keypoint estimation. The method collects pet video data through CCTV. A Mask R-CNN (Region-based Convolutional Neural Networks) model is applied to the collected video for instance segmentation of the pet, and body keypoint coordinates are estimated with a DeepLabCut model. The resulting video data and estimated keypoint coordinates are then fed into a CNN (Convolutional Neural Networks)-LSTM (Long Short-Term Memory) model to classify the behavior. By analyzing and classifying behavior with this model, we expect to provide a basis for responding appropriately to dangerous situations and unexpected behavior of pets.
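A minimal sketch of the kind of CNN-LSTM fusion described in this abstract, assuming PyTorch: per-frame features from a small image backbone are concatenated with flattened DeepLabCut-style keypoint coordinates and classified over time. The backbone choice, layer sizes, number of keypoints, and class count are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class CNNLSTMBehaviorClassifier(nn.Module):
    """Fuses per-frame image features with keypoint coordinates, then classifies."""
    def __init__(self, num_keypoints=17, hidden_size=256, num_classes=5):
        super().__init__()
        backbone = models.resnet18(weights=None)   # per-frame CNN encoder (assumed)
        backbone.fc = nn.Identity()                # expose 512-d features
        self.cnn = backbone
        # LSTM input = image feature (512) + flattened (x, y) keypoints
        self.lstm = nn.LSTM(512 + num_keypoints * 2, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, frames, keypoints):
        # frames:    (B, T, 3, H, W) pet regions segmented by Mask R-CNN
        # keypoints: (B, T, num_keypoints, 2) coordinates from DeepLabCut
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        seq = torch.cat([feats, keypoints.flatten(2)], dim=-1)
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])               # classify from the last time step

# Toy usage: 2 clips of 16 frames with 17 keypoints each
logits = CNNLSTMBehaviorClassifier()(torch.randn(2, 16, 3, 224, 224),
                                     torch.randn(2, 16, 17, 2))
```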

Implementation of CNN-based Classification Training Model for Unstructured Fashion Image Retrieval using Preprocessing with MASK R-CNN

  • 조승아;이하영;장혜림;김규리;이현지;손봉기;이재호
    • 한국산업정보학회논문지 / Vol. 27, No. 6 / pp.13-23 / 2022
  • This paper proposes a classification algorithm for the detailed component images of individual fashion items, aimed at unstructured data retrieval in the fashion domain. AI-based shopping malls have been on the rise recently due to the COVID-19 environment, but conventional keyword search and personalized style recommendation based on user browsing behavior are limited in accurately retrieving unstructured data. In this study, images crawled from various online shopping sites were preprocessed with Mask R-CNN and then classified by component for each fashion item using a CNN. Classification covered shirt collars and patterns as well as the fit, washing, and color of jeans. After comparing various transfer-learning models, the DenseNet121 model, which showed the highest accuracy, was used and reached an accuracy of 93.28% for shirt collars and 98.10% for shirt patterns. For jeans, the fit reached 91.73% with the three classes Notched, Spread, and Straight and 81.59% with four classes including the Regular fit; color reached 93.91%, washing 91.20%, and damage 92.96%.
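A minimal sketch of the transfer-learning setup reported here, assuming torchvision: a DenseNet121 backbone pretrained on ImageNet with its classifier head replaced for a given component attribute. The class count and the choice to freeze the backbone are illustrative assumptions.

```python
import torch.nn as nn
import torchvision.models as models

def build_component_classifier(num_classes: int) -> nn.Module:
    # ImageNet-pretrained DenseNet121 backbone
    model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
    for p in model.features.parameters():      # freeze pretrained features (assumed)
        p.requires_grad = False
    # replace the 1000-way ImageNet head with a task-specific head
    model.classifier = nn.Linear(model.classifier.in_features, num_classes)
    return model

# e.g. a hypothetical 3-class collar-type classifier
collar_model = build_component_classifier(num_classes=3)
```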

Design of a deep learning model to determine fire occurrence in distribution switchboard using thermal imaging data

  • 박동준;김민영
    • 문화기술의 융합 / Vol. 9, No. 5 / pp.737-745 / 2023
  • This paper describes the development of an artificial intelligence model that detects fires in distribution switchboards from thermal images. The goal of the study is to preprocess the collected thermal images into data suitable for an object detection model and to design a model that determines whether a fire has occurred inside a switchboard. The study uses thermal image training data for industrial complexes from AI-HUB, builds models with Faster R-CNN and RetinaNet, two representative CNN-based deep learning object detection algorithms, and compares and analyzes the two models to propose the more suitable one.
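A minimal sketch of how the two detectors compared here can be instantiated with torchvision and adapted to a small label set such as {background, fire}; the two-class setup and weight choices are illustrative assumptions, not the paper's configuration.

```python
from torchvision.models.detection import fasterrcnn_resnet50_fpn, retinanet_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 2  # background + "fire" (assumed label set)

# Faster R-CNN: swap the box-classification head for the new class count
frcnn = fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = frcnn.roi_heads.box_predictor.cls_score.in_features
frcnn.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# RetinaNet: build directly with the desired number of classes
retina = retinanet_resnet50_fpn(weights=None, num_classes=num_classes)
```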

Improved CNN Algorithm for Object Detection in Large Images

  • Yang, Seong Bong;Lee, Soo Jin
    • 한국컴퓨터정보학회논문지 / Vol. 25, No. 1 / pp.45-53 / 2020
  • Conventional CNN algorithms have the problem that they cannot identify small objects in large images such as satellite imagery. To solve this problem, this study presents an improved CNN approach that applies region-of-interest selection and image partitioning. Experiments were conducted with YOLOv3 and Faster R-CNN models transfer-learned on airfield and aircraft datasets and with large test images: regions of interest were first identified in a large image and then progressively partitioned, and the object detection results of the CNN algorithms were compared. The tile size was set, through experiments, to the optimal patch size that yields the highest detection rate with the fewest partitions. The results verify that the proposed approach makes it entirely feasible for CNN algorithms to detect small objects in large images.
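A minimal sketch of the partitioning idea described here: a large image is split into overlapping tiles, a detector is run on each tile, and the boxes are shifted back into full-image coordinates. The tile size, overlap, and the `detector` callable are illustrative assumptions rather than the paper's settings.

```python
from typing import Callable, List, Tuple
import numpy as np

Box = Tuple[float, float, float, float]  # x1, y1, x2, y2

def detect_on_tiles(image: np.ndarray,
                    detector: Callable[[np.ndarray], List[Box]],
                    tile: int = 608, overlap: int = 100) -> List[Box]:
    """Run `detector` on overlapping tiles and map boxes back to image coordinates."""
    h, w = image.shape[:2]
    step = tile - overlap
    boxes: List[Box] = []
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            patch = image[y:y + tile, x:x + tile]
            for (x1, y1, x2, y2) in detector(patch):
                boxes.append((x1 + x, y1 + y, x2 + x, y2 + y))
    return boxes
```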

Automatic Pancreas Detection on Abdominal CT Images using Intensity Normalization and Faster R-CNN

  • 최시은;이성은;홍헬렌
    • 한국멀티미디어학회논문지 / Vol. 24, No. 3 / pp.396-405 / 2021
  • In surgery to remove pancreatic cancer, it is important to determine the shape of the patient's pancreas. However, previous studies are limited in detecting the pancreas automatically in abdominal CT images because the pancreas varies in shape, size, and location from patient to patient. In this paper, we therefore propose a method that learns the various shapes of the pancreas across patients and adjacent slices using a Faster R-CNN based on Inception V2 and automatically detects the pancreas in abdominal CT images. Model training and testing were performed on the NIH Pancreas-CT dataset, and intensity normalization was applied to all data to improve detection accuracy. Additionally, the test dataset was divided into top, middle, and bottom slices according to the shape of the pancreas to evaluate the model's performance on each subset. The top slices achieved an mAP@.50IoU of 91.7% and the bottom slices 95.4%, while the middle slices showed the highest performance with an mAP@.50IoU of 98.5%. These results confirm that the model can accurately detect the pancreas in CT images.
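A minimal sketch of a CT intensity-normalization step like the one mentioned above, assuming a Hounsfield-unit soft-tissue window clipped and rescaled to [0, 1]; the window bounds are common illustrative values, not the paper's exact settings.

```python
import numpy as np

def normalize_ct(slice_hu: np.ndarray, hu_min: float = -100.0, hu_max: float = 240.0) -> np.ndarray:
    """Clip a CT slice to an assumed soft-tissue window and rescale to [0, 1]."""
    clipped = np.clip(slice_hu, hu_min, hu_max)     # suppress air/bone extremes
    return (clipped - hu_min) / (hu_max - hu_min)
```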

Activity Object Detection Based on Improved Faster R-CNN

  • Zhang, Ning;Feng, Yiran;Lee, Eung-Joo
    • 한국멀티미디어학회논문지 / Vol. 24, No. 3 / pp.416-422 / 2021
  • Because of the large intra-class variation and inter-class similarity of human activities, together with viewing-angle and occlusion problems, features are difficult to extract manually and the detection rate of human behavior is low. To better address these problems, this paper proposes an improved Faster R-CNN-based detection algorithm. It achieves multi-object recognition and localization through a two-stage detection network and replaces the original feature-extraction module with DenseNet, which fuses multi-level feature information, increases network depth, and avoids vanishing gradients. Meanwhile, the proposal-merging strategy is improved with Soft-NMS, in which an attenuation function replaces the conventional NMS algorithm, thereby avoiding missed detections of adjacent or overlapping objects and improving detection accuracy in multi-object scenes. In the experiments, the improved Faster R-CNN method achieves a detection result of 84.7%, an improvement over the compared methods, which indicates that the approach has clear advantages and potential.
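A minimal sketch of the Soft-NMS idea referenced above: rather than removing boxes that overlap a higher-scoring box, their scores are decayed with a Gaussian attenuation function. The sigma and score threshold are illustrative assumptions.

```python
import numpy as np

def _iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def soft_nms(boxes: np.ndarray, scores: np.ndarray, sigma: float = 0.5, score_thr: float = 0.001):
    """Gaussian Soft-NMS; returns the indices of kept boxes in selection order."""
    scores = scores.astype(float)
    idxs = list(range(len(scores)))
    keep = []
    while idxs:
        best = max(idxs, key=lambda i: scores[i])
        keep.append(best)
        idxs.remove(best)
        for i in idxs:
            # decay the score of overlapping boxes instead of discarding them
            scores[i] *= np.exp(-(_iou(boxes[best], boxes[i]) ** 2) / sigma)
        idxs = [i for i in idxs if scores[i] > score_thr]
    return keep
```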

Mask Region-Based Convolutional Neural Network (R-CNN) Based Image Segmentation of Rays in Softwoods

  • Hye-Ji, YOO;Ohkyung, KWON;Jeong-Wook, SEO
    • Journal of the Korean Wood Science and Technology / Vol. 50, No. 6 / pp.490-498 / 2022
  • This study aimed to verify the ability of artificial intelligence technology to segment rays in tangential thin sections of conifers. The applied model was the Mask region-based convolutional neural network (Mask R-CNN), and softwoods (viz. Picea jezoensis, Larix gmelinii, Abies nephrolepis, Abies koreana, Ginkgo biloba, Taxus cuspidata, Cryptomeria japonica, Cedrus deodara, Pinus koraiensis) were selected for the study. To take digital pictures, thin sections 10-15 μm thick were cut using a microtome and then stained with a 1:1 mixture of 0.5% astra blue and 1% safranin. In the digital images, rays were selected as the detection objects, and the Computer Vision Annotation Tool was used to annotate the rays in the training images taken from the tangential sections. The Mask R-CNN applied to detect rays reached a mean average precision as high as 0.837 and required less than half the time needed for ground-truth annotation. During image analysis, however, single rays were sometimes split into two or more segments, which caused errors in the measurement of ray height. To improve the image-processing pipeline, further work is required on merging the fragments of a ray into one segment and on increasing the precision of the boundary between rays and neighboring tissues.
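A minimal sketch of the ray-height measurement where the fragmentation issue noted above appears: each predicted instance mask is thresholded and its vertical extent converted to micrometres. The 0.5 threshold and the pixel scale are illustrative assumptions.

```python
import numpy as np

def ray_heights(masks: np.ndarray, um_per_px: float = 1.0) -> list:
    """masks: (N, H, W) soft instance masks in [0, 1]; returns ray heights in um."""
    heights = []
    for m in masks:
        rows = np.where((m > 0.5).any(axis=1))[0]   # image rows covered by this ray
        if rows.size:
            heights.append(float(rows.max() - rows.min() + 1) * um_per_px)
    return heights
```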

Impacts of label quality on performance of steel fatigue crack recognition using deep learning-based image segmentation

  • Hsu, Shun-Hsiang;Chang, Ting-Wei;Chang, Chia-Ming
    • Smart Structures and Systems / Vol. 29, No. 1 / pp.207-220 / 2022
  • Structural health monitoring (SHM) plays a vital role in the maintenance and operation of structures. In recent years, autonomous inspection has received considerable attention because conventional monitoring methods are relatively inefficient and expensive. Developing autonomous inspection requires a reliable crack-identification approach to locate defects. This study therefore exploits two deep learning-based segmentation models, DeepLabv3+ and Mask R-CNN, for crack segmentation, as these two models outperform similar models on public datasets. In addition, the impact of label quality on model performance is explored to obtain an empirical guideline for preparing image datasets. The influence of image cropping and label refining is also investigated, and different strategies applied to the dataset yield six alternative datasets. In experiments with these datasets, the highest mean Intersection-over-Union (mIoU), 75%, is achieved by Mask R-CNN. Raising the percentage of annotations through image cropping improves model performance, while label refining has opposite effects on the two models. Because label refining removes erroneous crack annotations, it enhances the performance of DeepLabv3+; in contrast, the performance of Mask R-CNN decreases because fragmented annotations can cause a single instance to be mistaken for multiple instances. In summary, both DeepLabv3+ and Mask R-CNN are capable of crack identification, and an empirical guideline on data preparation via image cropping and label refining is presented to improve identification success.
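A minimal sketch of the mean Intersection-over-Union (mIoU) metric reported above, computed for binary crack/background masks; averaging over the two classes is an illustrative choice.

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray) -> float:
    """pred, target: boolean (H, W) masks where True marks crack pixels."""
    ious = []
    for cls in (True, False):                     # crack class and background class
        p, t = (pred == cls), (target == cls)
        union = np.logical_or(p, t).sum()
        if union:
            ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```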

Tack Coat Inspection Using Unmanned Aerial Vehicle and Deep Learning

  • da Silva, Aida;Dai, Fei;Zhu, Zhenhua
    • 국제학술발표논문집 / The 9th International Conference on Construction Engineering and Project Management / pp.784-791 / 2022
  • Tack coat is a thin layer of asphalt between the existing pavement and the asphalt overlay. During construction, insufficient tack coat application can later cause surface defects such as slippage, shoving, and rutting. This paper proposes a method for improving tack coat inspection using an unmanned aerial vehicle (UAV) and a deep learning neural network for automatic assessment of non-uniformity in the applied tack coat area. In this method, drone-captured images are assessed using a combination of Mask R-CNN and the Grey Level Co-occurrence Matrix (GLCM). Mask R-CNN detects the tack coat region and segments the region of interest from its surroundings, and GLCM analyzes the texture of the segmented region to measure the uniformity or non-uniformity of the tack coat on the existing pavement. The results of the field experiment showed that both the intersection over union of Mask R-CNN and the non-uniformity measured by GLCM were promising in terms of accuracy. The proposed method is automatic and cost-efficient, and would be of value to state Departments of Transportation in managing pavement construction and rehabilitation work.

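A minimal sketch of the GLCM texture step described above, assuming scikit-image: a grey-level co-occurrence matrix is computed on the Mask R-CNN-segmented tack coat region and a homogeneity score is used as a proxy for uniformity. The distances, angles, and the use of homogeneity as the uniformity measure are illustrative assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def tack_coat_homogeneity(gray_region: np.ndarray) -> float:
    """gray_region: 2-D uint8 image of the segmented tack coat area."""
    glcm = graycomatrix(gray_region, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return float(graycoprops(glcm, "homogeneity").mean())
```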

Alzheimer's Disease Classification with Automated MRI Biomarker Detection Using Faster R-CNN for Alzheimer's Disease Diagnosis

  • 손주형;김경태;최재영
    • 한국멀티미디어학회논문지 / Vol. 22, No. 10 / pp.1168-1177 / 2019
  • In order to diagnose and prevent Alzheimer's Disease (AD), it is becoming increasingly important to develop a computer-aided diagnosis (CAD) system for AD that supports effective treatment of patients by analyzing 3D MRI images. Powerful deep learning algorithms are essential for automatically classifying the stages of Alzheimer's Disease and for building a diagnosis-support system that can detect the hippocampus and cerebrospinal fluid (CSF), which are important biomarkers in AD diagnosis. In this paper, we classify a given MRI scan into three categories, AD, mild cognitive impairment, and normal control, by applying 3D brain MRI images to a Faster R-CNN model, and we detect the hippocampus and CSF in the MRI images. To do this, we apply Faster R-CNN to 2D slice images extracted from the 3D MRI data and perform the widely used majority-voting algorithm on the resulting bounding-box labels for classification. To verify the proposed method, we used the public ADNI dataset, a standard brain MRI database. Experimental results show that the proposed method achieves impressive classification performance compared with other state-of-the-art methods.
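A minimal sketch of the slice-level majority voting described above: each 2D slice contributes a predicted class label from its detected bounding boxes, and the subject-level diagnosis is the most frequent label. The label strings follow the three categories named in the abstract.

```python
from collections import Counter
from typing import List

def majority_vote(slice_labels: List[str]) -> str:
    """slice_labels: per-slice predicted classes, e.g. AD / MCI / NC strings."""
    return Counter(slice_labels).most_common(1)[0][0]

print(majority_vote(["AD", "AD", "MCI", "AD", "NC"]))  # -> AD
```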