• Title/Abstract/Keyword: Feature pyramid network

Search results: 35 items (processing time: 0.02 sec)

Multi-Human Behavior Recognition Based on Improved Posture Estimation Model

  • Zhang, Ning;Park, Jin-Ho;Lee, Eung-Joo
    • Journal of Korea Multimedia Society / Vol. 24, No. 5 / pp.659-666 / 2021
  • With the continuous development of deep learning, human behavior recognition algorithms have achieved good results. However, in a multi-person recognition environment, the complex behavior environment poses a great challenge to recognition efficiency. To this end, this paper proposes a multi-person pose estimation model. First, the human detectors in top-down frameworks mostly use two-stage target detection models, which run slowly; the single-stage YOLOv3 target detection model is used instead, effectively improving running speed and the generalization of the model, and depthwise separable convolution further improves the speed of target detection and the model's ability to extract target proposal regions. Secondly, based on a feature pyramid network combined with contextual semantic information in the pose estimation model, the OHEM (Online Hard Example Mining) algorithm is used to handle difficult keypoint detection cases, improving the accuracy of multi-person pose estimation. Finally, the Euclidean distance between keypoints is used to measure the spatial similarity of postures in the frame and to eliminate redundant postures.
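
The redundancy-elimination step described above is concrete enough to sketch. Below is a minimal illustration in Python (not the authors' code) of measuring pose similarity as the mean Euclidean distance between corresponding keypoints and greedily discarding near-duplicate poses; the pixel threshold and the 17-keypoint layout are assumptions for the example.

```python
import numpy as np

def pose_distance(pose_a, pose_b):
    """Mean Euclidean distance between corresponding keypoints.

    pose_a, pose_b: (K, 2) arrays of (x, y) keypoint coordinates,
    e.g., K = 17 for COCO-style skeletons.
    """
    return np.linalg.norm(pose_a - pose_b, axis=1).mean()

def suppress_redundant_poses(poses, scores, threshold=10.0):
    """Keep the highest-scoring pose among near-duplicates.

    poses: list of (K, 2) arrays; scores: per-pose confidences.
    threshold: pixel distance below which two poses are treated as
    the same person -- a tunable assumption, not a published value.
    """
    order = np.argsort(scores)[::-1]  # most confident poses first
    kept = []
    for idx in order:
        if all(pose_distance(poses[idx], poses[k]) > threshold for k in kept):
            kept.append(idx)
    return [poses[i] for i in kept]
```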

Multi-scale context fusion network for melanoma segmentation

  • Zhenhua Li;Lei Zhang
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 18, No. 7 / pp.1888-1906 / 2024
  • Aiming at the problems that the edges of melanoma images are fuzzy, the contrast with the background is low, and hair occlusion makes accurate segmentation difficult, this paper proposes MSCNet, a melanoma segmentation model based on the U-Net framework. Firstly, a multi-scale pyramid fusion module is designed to reconstruct the skip connections and transmit global information to the decoder. Secondly, a contextual information conduction module is added to the top of the encoder; the module provides different receptive fields for the segmentation target by using dilated convolutions with different dilation rates, so as to better fuse multi-scale contextual information. In addition, in order to suppress redundant information in the input image and attend more closely to melanoma features, a global channel attention mechanism is introduced into the decoder. Finally, in order to address lesion class imbalance, a combined loss function is used. The algorithm is verified on the ISIC 2017 and ISIC 2018 public datasets. The experimental results indicate that the proposed algorithm achieves better melanoma segmentation accuracy than other CNN-based image segmentation algorithms.
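
The contextual information conduction module lends itself to a short sketch: parallel dilated convolutions whose different dilation rates give the segmentation target different receptive fields. The PyTorch snippet below is a minimal sketch of that idea, assuming illustrative rates (1, 2, 4, 8) and channel sizes rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ContextModule(nn.Module):
    """Parallel dilated 3x3 convolutions fused by a 1x1 convolution.

    A minimal sketch of multi-scale context fusion; the rates and
    channel counts are illustrative assumptions, not MSCNet's.
    """

    def __init__(self, channels, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=r, dilation=r)  # padding=r preserves H x W
            for r in rates
        )
        self.fuse = nn.Conv2d(channels * len(rates), channels, kernel_size=1)

    def forward(self, x):
        # Each branch sees a different receptive field; concatenate, then fuse.
        return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))
```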

Multi-View 3D Human Pose Estimation Based on Transformer

  • 최승욱;이진영;김계영
    • Smart Media Journal / Vol. 12, No. 11 / pp.48-56 / 2023
  • 3D human pose estimation is a technology widely used in fields such as sports, motion recognition, and special effects in visual media. Among the methods for this task, multi-view 3D human pose estimation is essential for precise estimation even in complex real-world environments. However, existing multi-view 3D human pose estimation models have high time complexity because they use 3D feature maps. This paper proposes a method that extends an existing transformer-based single-view multi-frame model with low computational complexity to multi-view 3D human pose estimation. To extend it to multiple views, the method first uses the 2D human pose detector CPN (Cascaded Pyramid Network) to obtain 2D coordinates of 17 joints from four views and concatenates them into 8-dimensional joint coordinates. These are then patch-embedded, converted into 17×32 data, and fed into a transformer model. Finally, an MLP (Multi-Layer Perceptron) block that outputs the human pose is applied at every iteration, so that 3D human poses for the four views are estimated simultaneously. Compared with the method of Zheng [5] using an input frame length of 27, the proposed method reduces the number of model parameters by 48.9% and the MPJPE (Mean Per Joint Position Error) by 20.6 mm (43.8%), and the average training time per epoch is more than 20 times faster.
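
The input construction in this abstract (four views × 17 joints × (x, y) concatenated into 8-dimensional joint coordinates, then embedded into 17×32 tokens) can be illustrated with a short sketch. The snippet below is an assumed wiring of that step, not the authors' implementation; only the 4-view, 17-joint, and 32-dimensional figures come from the abstract.

```python
import torch
import torch.nn as nn

NUM_VIEWS, NUM_JOINTS = 4, 17  # figures taken from the abstract

class JointEmbedding(nn.Module):
    """Concatenate per-view 2D joint coordinates and embed each joint."""

    def __init__(self, dim=32):
        super().__init__()
        # Each joint carries (x, y) from 4 views -> an 8-dim coordinate.
        self.embed = nn.Linear(NUM_VIEWS * 2, dim)

    def forward(self, joints_2d):
        # joints_2d: (batch, views, joints, 2), e.g., from a 2D detector such as CPN.
        b = joints_2d.shape[0]
        x = joints_2d.permute(0, 2, 1, 3).reshape(b, NUM_JOINTS, NUM_VIEWS * 2)
        return self.embed(x)  # (batch, 17, 32) tokens for the transformer

tokens = JointEmbedding()(torch.randn(1, NUM_VIEWS, NUM_JOINTS, 2))
print(tokens.shape)  # torch.Size([1, 17, 32])
```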


Improvement of Mask-RCNN Performance Using a Deep-Learning-Based Arbitrary-Scale Super-Resolution Module

  • 안영필;박현준
    • Journal of the Korea Institute of Information and Communication Engineering / Vol. 26, No. 3 / pp.381-388 / 2022
  • In instance segmentation, Mask-RCNN is often used as the base model, so improving Mask-RCNN's performance is meaningful because it carries over to derived models. Mask-RCNN contains a transform module that resizes input images to a uniform size for batching. In this paper, to improve the performance of Mask-RCNN, a deep-learning-based ASSR (Arbitrary-Scale Super-Resolution) method is applied to the resizing step of the transform module, and the scale information is injected into the model through an IM (Integration Module). When the proposed method was applied to the COCO dataset, instance segmentation performance was 2.5 AP higher than that of Mask-RCNN. In the experiments on optimizing the position of the proposed IM, the best performance was obtained at the 'Top' position, before the FPN (Feature Pyramid Network) and the backbone are combined. The proposed method can therefore improve the performance of models that use Mask-RCNN as their base model.
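
How scale information might be injected by an integration module can be sketched, though the abstract does not give the paper's exact IM architecture. The snippet below is a hypothetical illustration only: the resize factor used by the ASSR step modulates backbone feature channels before they enter the FPN; the layer names and the gating design are assumptions.

```python
import torch
import torch.nn as nn

class IntegrationModule(nn.Module):
    """Hypothetical scale-injection sketch, not the paper's IM.

    Maps the scalar resize factor from the ASSR step to a per-channel
    gate applied to backbone features (the 'Top' position, before the
    backbone output is combined with the FPN).
    """

    def __init__(self, channels):
        super().__init__()
        self.scale_mlp = nn.Sequential(
            nn.Linear(1, channels), nn.ReLU(), nn.Linear(channels, channels)
        )

    def forward(self, features, scale):
        # features: (B, C, H, W); scale: (B, 1) resize factor.
        gate = torch.sigmoid(self.scale_mlp(scale))  # (B, C)
        return features * gate.unsqueeze(-1).unsqueeze(-1)

feats = torch.randn(2, 256, 64, 64)
out = IntegrationModule(256)(feats, torch.tensor([[1.5], [2.0]]))
```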

An active learning method with difficulty learning mechanism for crack detection

  • Shu, Jiangpeng;Li, Jun;Zhang, Jiawei;Zhao, Weijian;Duan, Yuanfeng;Zhang, Zhicheng
    • Smart Structures and Systems / Vol. 29, No. 1 / pp.195-206 / 2022
  • Crack detection is essential for the inspection of existing structures, and crack segmentation based on deep learning is a significant solution. However, datasets are usually one of the key issues: when building a new dataset for deep learning, the laborious and time-consuming annotation of a large number of crack images is an obstacle. The aim of this study is to develop an approach that automatically selects a small portion of the most informative crack images from a large pool for annotation, rather than labeling all crack images. An active learning method with a difficulty learning mechanism for crack segmentation tasks is proposed. Experiments are carried out on a crack image dataset of a steel box girder, which contains 500 images of 320×320 size for training, 100 for validation, and 190 for testing. In the active learning experiments, the 500 training images act as the unlabeled pool. The acquisition function in our method is compared with traditional acquisition functions, i.e., Query-By-Committee (QBC), Entropy, and Core-set. Further, comparisons are made on four common segmentation networks: U-Net, DeepLabV3, Feature Pyramid Network (FPN), and PSPNet. The results show that when training with the 200 (40%) most informative crack images selected by our method, the four segmentation networks achieve 92%-95% of the performance obtained when training with all 500 (100%) crack images. The acquisition function in our method measures the informativeness of unlabeled crack images more accurately than the four traditional acquisition functions at most active learning stages. Our method can automatically and accurately select the most informative images for annotation from many unlabeled crack images, and the dataset built from the selected 40% of images supports crack segmentation networks at more than 92% of full-data performance.
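
As a point of reference for how an acquisition function ranks unlabeled images, the sketch below implements the classic Entropy baseline that the paper compares against (the authors' difficulty-learning acquisition is not reproduced here): each unlabeled crack image is scored by its mean pixel-wise prediction entropy, and the top `budget` images are selected for annotation.

```python
import numpy as np

def entropy_acquisition(prob_maps):
    """Score unlabeled images by mean pixel-wise binary entropy.

    prob_maps: (N, H, W) foreground probabilities predicted by a
    segmentation network for N unlabeled crack images.
    """
    eps = 1e-8
    p = np.clip(prob_maps, eps, 1.0 - eps)
    entropy = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))
    return entropy.reshape(len(prob_maps), -1).mean(axis=1)

def select_for_annotation(prob_maps, budget=200):
    """Pick the `budget` most informative images (40% of 500 in the paper)."""
    scores = entropy_acquisition(prob_maps)
    return np.argsort(scores)[::-1][:budget]  # highest entropy first
```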