• Title/Abstract/Keyword: instance segmentation

Search results: 73 (processing time: 0.031 seconds)

SEL-RefineMask: A Seal Segmentation and Recognition Neural Network with SEL-FPN

  • Dun, Ze-dong;Chen, Jian-yu;Qu, Mei-xia;Jiang, Bin
    • Journal of Information Processing Systems / Vol. 18, No. 3 / pp.411-427 / 2022
  • Digging historical and cultural information from seals in ancient books is of great significance. However, ancient Chinese seal samples are scarce and carving methods are diverse, so traditional digital image processing methods based on greyscale have difficulty achieving superior segmentation and recognition performance. Recently, some deep learning algorithms have been proposed to address this problem; however, current neural networks are difficult to train owing to the lack of datasets. To solve the aforementioned problems, we propose SEL-RefineMask, which combines a selector of feature pyramid network (SEL-FPN) with RefineMask to segment and recognize seals. We designed the SEL-FPN to intelligently select a specific layer among the different scales in the FPN, which reduces the number of anchor frames. We performed experiments on several instance segmentation networks as baseline methods, and the top-1 segmentation result of 64.93% is 5.73% higher than that of humans. The top-1 result of the SEL-RefineMask network reached 67.96%, surpassing the baseline results. After segmentation, a vision transformer was used to recognize the segmentation output, and the accuracy reached 91%. Furthermore, a dataset of seals in ancient Chinese books (SACB) for segmentation and a small seal font (SSF) dataset for recognition were established; both are publicly available online.
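
The abstract's central architectural idea, a selector that keeps only one FPN level per image instead of generating anchors on every level, can be sketched as follows. This is a minimal PyTorch illustration: the module name `FPNLevelSelector`, the pooling/scoring design, and all parameters are our assumptions, not the actual SEL-RefineMask implementation.

```python
import torch
import torch.nn as nn

class FPNLevelSelector(nn.Module):
    """Hypothetical sketch: score each FPN level and keep only the best one."""

    def __init__(self, channels: int):
        super().__init__()
        # One global score per level, computed from globally pooled features.
        self.scorer = nn.Sequential(
            nn.Linear(channels, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, pyramid_feats):            # list of [N, C, Hi, Wi] tensors
        scores = []
        for feat in pyramid_feats:
            pooled = feat.mean(dim=(2, 3))        # [N, C] global average pooling
            scores.append(self.scorer(pooled))    # [N, 1] score for this level
        scores = torch.cat(scores, dim=1)         # [N, num_levels]
        best = scores.argmax(dim=1)               # selected level per image
        # Downstream proposals would be generated only on the chosen level,
        # which reduces the number of anchor boxes to process.
        return best, scores

# Usage with dummy FPN outputs (P2-P5), batch of 2, 256 channels
feats = [torch.randn(2, 256, s, s) for s in (64, 32, 16, 8)]
selector = FPNLevelSelector(channels=256)
best_level, level_scores = selector(feats)
print(best_level)
```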

Segmenting and Classifying Korean Words based on Syllables Using Instance-Based Learning (사례기반 학습을 이용한 음절기반 한국어 단어 분리 및 범주 결정)

  • 김재훈;이공주
    • The KIPS Transactions: Part B / Vol. 10-B, No. 1 / pp.47-56 / 2003
  • Korean, like English, uses white space as word boundaries, but the structure of its words differs somewhat from English. In English a single word generally appears between spaces, whereas in Korean a space-delimited unit contains several words or morphemes. Because of this difference, a space-delimited unit in Korean is commonly called an eojeol (word phrase). This paper proposes a method for segmenting the words contained in one eojeol and determining the appropriate category of each segmented word. We use instance-based machine learning and segment words at the syllable level. The feature set used for instance-based learning consists of the previous syllable, the current syllable, the following two syllables, the final-consonant (batchim) information of the current syllable, and the two previous category labels. The ETRI corpus and the KAIST corpus were used to evaluate the proposed system, and on both corpora the word segmentation F-measure was above 97%, showing relatively good performance.
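
A rough illustration of the described feature set, using scikit-learn's k-nearest-neighbour classifier as a stand-in for the instance-based (memory-based) learner. The tag set, the example eojeol, and the helper names are hypothetical; only the features (previous syllable, current syllable, next two syllables, batchim flag, two previous category labels) follow the abstract.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.neighbors import KNeighborsClassifier

def has_batchim(syllable: str) -> bool:
    """True if the Hangul syllable has a final consonant (batchim)."""
    code = ord(syllable) - 0xAC00
    return 0 <= code <= 11171 and code % 28 != 0

def features(syllables, i, prev_tags):
    """Feature dict for position i: context syllables, batchim, previous tags."""
    pad = lambda j: syllables[j] if 0 <= j < len(syllables) else "<pad>"
    return {
        "prev": pad(i - 1),
        "cur": syllables[i],
        "next1": pad(i + 1),
        "next2": pad(i + 2),
        "batchim": has_batchim(syllables[i]),
        "tag-1": prev_tags[-1] if prev_tags else "<s>",
        "tag-2": prev_tags[-2] if len(prev_tags) > 1 else "<s>",
    }

# Toy training data: each syllable labelled with a (boundary, category) tag.
eojeol = list("학교에서")                # noun "학교" + postposition "에서"
tags = ["B-N", "I-N", "B-J", "I-J"]      # hypothetical BIO-style tag set

X = [features(eojeol, i, tags[:i]) for i in range(len(eojeol))]
vec = DictVectorizer()
clf = KNeighborsClassifier(n_neighbors=1).fit(vec.fit_transform(X), tags)

# At prediction time, tags would be assigned left to right so that the
# previous-tag features come from the model's own earlier decisions.
print(clf.predict(vec.transform([features(eojeol, 0, [])])))
```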

Segmentation and Classification of Lidar data

  • Tseng, Yi-Hsing;Wang, Miao
    • Korean Society of Remote Sensing: Conference Proceedings / Proceedings of ACRS 2003 ISRS / pp.153-155 / 2003
  • Laser scanning has become a viable technique for the collection of a large amount of accurate 3D point data densely distributed on the scanned object surface. The inherent 3D nature of the sub-randomly distributed point cloud provides abundant spatial information. Exploring valuable spatial information from laser-scanned data has become an active research topic, for instance extracting digital elevation models, building models, and vegetation volumes. The sub-randomly distributed point cloud should be segmented and classified before the extraction of spatial information. This paper investigates some existing segmentation methods and then proposes an octree-based split-and-merge segmentation method to divide lidar data into clusters belonging to 3D planes. The classification of lidar data can then be performed based on the derived attributes of the extracted 3D planes. The test results of both ground and airborne lidar data show the potential of applying this method to extract spatial features from lidar data.
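
A simplified sketch of the split half of an octree-based split-and-merge segmentation: a cell is accepted when its points fit a single plane well (measured by the PCA residual) and is otherwise split into eight children. The merge step and the attribute-based classification are omitted, and the thresholds are illustrative only.

```python
import numpy as np

def plane_residual(points: np.ndarray) -> float:
    """RMS distance of points to their best-fit (PCA) plane."""
    centered = points - points.mean(axis=0)
    # The direction of the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return float(np.sqrt(np.mean((centered @ vt[-1]) ** 2)))

def octree_split(points, bounds, max_residual=0.05, min_points=30, depth=0, max_depth=8):
    """Recursively split a cell until its points are roughly coplanar."""
    if len(points) < min_points or depth >= max_depth:
        return [points]
    if plane_residual(points) <= max_residual:
        return [points]                      # accept cell as a planar cluster
    lo, hi = bounds
    mid = (lo + hi) / 2.0
    cells = []
    for octant in range(8):                  # the 8 children of the octree node
        mask = np.ones(len(points), dtype=bool)
        child_lo, child_hi = lo.copy(), hi.copy()
        for axis in range(3):
            if (octant >> axis) & 1:
                mask &= points[:, axis] >= mid[axis]
                child_lo[axis] = mid[axis]
            else:
                mask &= points[:, axis] < mid[axis]
                child_hi[axis] = mid[axis]
        if mask.any():
            cells += octree_split(points[mask], (child_lo, child_hi),
                                  max_residual, min_points, depth + 1, max_depth)
    return cells

# Usage with a synthetic cloud: a ground plane plus a vertical wall, with noise
rng = np.random.default_rng(0)
ground = np.c_[rng.uniform(0, 10, 500), rng.uniform(0, 10, 500), rng.normal(0, 0.01, 500)]
wall = np.c_[rng.normal(5, 0.01, 500), rng.uniform(0, 10, 500), rng.uniform(0, 5, 500)]
cloud = np.vstack([ground, wall])
clusters = octree_split(cloud, (cloud.min(axis=0), cloud.max(axis=0)))
print(len(clusters), "planar cells")
```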

An Analysis on the Properties of Features against Various Distortions in Deep Neural Networks

  • Kang, Jung Heum;Jeong, Hye Won;Choi, Chang Kyun;Ali, Muhammad Salman;Bae, Sung-Ho;Kim, Hui Yong
    • Journal of Broadcast Engineering / Vol. 26, No. 7 / pp.868-876 / 2021
  • Deep neural network models achieve remarkable performance in the fields of object detection and instance segmentation. To train these models, features are first extracted from the input image using a backbone network. The extracted features can be reused by various tasks, and research on serving multiple tasks with these learned features has been actively conducted. In this process, standardization discussions about feature encoding, decoding, and transmission methods are actively proceeding. In this scenario, it is necessary to analyze the response characteristics of features against the various distortions that may occur during data transmission or data compression. In this paper, experiments were conducted in which various distortions were injected into the features of an object recognition task, and the mAP (mean Average Precision) between the predictions output by the neural network and the target values was analyzed as the intensity of each distortion was increased. The experiments show that features are more robust to distortion than images, which indicates that using features as the transmission medium can prevent the loss of information under the various distortions that occur during data transmission and compression.
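
A hedged sketch of this kind of robustness experiment: the same distortion (here, additive Gaussian noise at increasing intensity) is injected into an intermediate backbone feature map through a forward hook, and the detector's outputs are inspected at each intensity. Torchvision's Mask R-CNN is used purely as an illustration; the paper's actual models, distortion types, and mAP evaluation pipeline may differ.

```python
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

noise_sigma = 0.0  # distortion intensity, swept over a range in the experiment

def distort_feature(module, inputs, output):
    """Forward hook: add Gaussian noise to an intermediate feature map."""
    if noise_sigma > 0:
        return output + noise_sigma * torch.randn_like(output)
    return output

# Hook one backbone stage (layer2 of the ResNet body) as an example injection point.
hook = model.backbone.body.layer2.register_forward_hook(distort_feature)

image = [torch.rand(3, 480, 640)]               # dummy input image in [0, 1]
with torch.no_grad():
    for noise_sigma in (0.0, 0.5, 1.0, 2.0):    # increasing distortion intensity
        preds = model(image)[0]
        kept = (preds["scores"] > 0.5).sum().item()
        print(f"sigma={noise_sigma}: {kept} detections above 0.5")
        # In the paper's setting, predictions at each intensity would be scored
        # against ground truth with mAP (e.g., via pycocotools) rather than counted.

hook.remove()
```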

Impacts of label quality on performance of steel fatigue crack recognition using deep learning-based image segmentation

  • Hsu, Shun-Hsiang;Chang, Ting-Wei;Chang, Chia-Ming
    • Smart Structures and Systems / Vol. 29, No. 1 / pp.207-220 / 2022
  • Structural health monitoring (SHM) plays a vital role in the maintenance and operation of constructions. In recent years, autonomous inspection has received considerable attention because conventional monitoring methods are inefficient and expensive to some extent. To develop autonomous inspection, an approach to crack identification is needed to locate defects. Therefore, this study exploits two deep learning-based segmentation models, DeepLabv3+ and Mask R-CNN, for crack segmentation, because these two models outperform other similar models on public datasets. Additionally, the impact of label quality on model performance is explored to obtain an empirical guideline for preparing image datasets. The influences of image cropping and label refining are also investigated, and different strategies are applied to the dataset, resulting in six altered datasets. In experiments with these datasets, the highest mean Intersection-over-Union (mIoU), 75%, is achieved by Mask R-CNN. Raising the percentage of annotations through image cropping improves model performance, while label refining has opposite effects on the two models. Because label refining results in fewer erroneous crack annotations, this modification enhances the performance of DeepLabv3+. In contrast, the performance of Mask R-CNN decreases because fragmented annotations may split one instance into multiple instances. In summary, both DeepLabv3+ and Mask R-CNN are capable of crack identification, and an empirical guideline on data preparation is presented to strengthen identification success via image cropping and label refining.
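
One of the dataset alterations studied above is image cropping, which raises the proportion of annotated (crack) pixels per training sample by discarding tiles without annotations. A hedged sketch of that preprocessing step; the tile size and pixel threshold are arbitrary choices, not the study's settings.

```python
import numpy as np

def crop_annotated_tiles(image: np.ndarray, label: np.ndarray,
                         tile: int = 512, min_crack_pixels: int = 50):
    """Split an image/label pair into tiles, keeping tiles that contain cracks.

    Dropping (mostly) empty tiles raises the percentage of crack annotations
    seen during training, which is the effect the study attributes to cropping.
    """
    tiles = []
    h, w = label.shape
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            lab = label[y:y + tile, x:x + tile]
            if lab.sum() >= min_crack_pixels:          # keep only annotated tiles
                tiles.append((image[y:y + tile, x:x + tile], lab))
    return tiles

# Usage with a dummy 2048x2048 grayscale image and a sparse crack mask
rng = np.random.default_rng(0)
img = rng.integers(0, 255, (2048, 2048), dtype=np.uint8)
mask = np.zeros_like(img)
mask[1000:1005, 200:1800] = 1                          # a thin horizontal crack
pairs = crop_annotated_tiles(img, mask)
print(len(pairs), "of", (2048 // 512) ** 2, "tiles kept")
```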

Car detection area segmentation using deep learning system

  • Dong-Jin Kwon;Sang-hoon Lee
    • International journal of advanced smart convergence / Vol. 12, No. 4 / pp.182-189 / 2023
  • Recently, object detection and segmentation have emerged as crucial technologies widely utilized in various fields such as autonomous driving systems, surveillance, and image editing. This paper proposes a program that utilizes the QT framework to perform real-time object detection and precise instance segmentation by integrating YOLO (You Only Look Once) and Mask R-CNN. The system provides users with a versatile image editing environment, offering features such as selecting specific modes, drawing masks, inspecting detailed image information, and employing various image processing techniques, including deep learning-based ones. The program takes advantage of the efficiency of YOLO to enable fast and accurate object detection, providing bounding-box information. Additionally, it performs precise segmentation using Mask R-CNN, allowing users to accurately distinguish and edit objects within images. The QT interface provides an intuitive and user-friendly environment for controlling the program and enhances accessibility. Through experiments and evaluations, the proposed system has been shown to be effective in various scenarios. The program offers convenient and powerful image processing and editing capabilities to both beginners and experts, smoothly integrating computer vision technology. This paper contributes to the growth of the computer vision application field and shows the potential of integrating various image processing algorithms on a user-friendly platform.
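
As a rough illustration of the detect-then-edit workflow (the QT GUI, the YOLO integration, and the mode selection are not reproduced here), the sketch below runs an off-the-shelf Mask R-CNN from torchvision, keeps the highest-scoring instance, and uses its mask to edit the image by graying out the background. The file names are placeholders.

```python
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

img = convert_image_dtype(read_image("car.jpg"), torch.float)   # placeholder path
with torch.no_grad():
    pred = model([img])[0]

# Keep the most confident instance; its soft mask is thresholded to a binary mask.
best = pred["scores"].argmax()
mask = (pred["masks"][best, 0] > 0.5).float()                    # [H, W]
box = pred["boxes"][best]                                        # [x1, y1, x2, y2]
print("selected box:", box.tolist(), "score:", pred["scores"][best].item())

# Simple mask-based edit: keep the object in colour, gray out the background.
gray = img.mean(dim=0, keepdim=True).expand_as(img)
edited = img * mask + gray * (1 - mask)
torchvision.utils.save_image(edited, "car_edited.jpg")
```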

Comparison of Multi-Label U-Net and Mask R-CNN for panoramic radiograph segmentation to detect periodontitis

  • Rini, Widyaningrum;Ika, Candradewi;Nur Rahman Ahmad Seno, Aji;Rona, Aulianisa
    • Imaging Science in Dentistry / Vol. 52, No. 4 / pp.383-391 / 2022
  • Purpose: Periodontitis, the most prevalent chronic inflammatory condition affecting teeth-supporting tissues, is diagnosed and classified through clinical and radiographic examinations. The staging of periodontitis using panoramic radiographs provides information for designing computer-assisted diagnostic systems. Performing image segmentation in periodontitis is required for image processing in diagnostic applications. This study evaluated image segmentation for periodontitis staging based on deep learning approaches. Materials and Methods: Multi-Label U-Net and Mask R-CNN models were compared for image segmentation to detect periodontitis using 100 digital panoramic radiographs. Normal conditions and 4 stages of periodontitis were annotated on these panoramic radiographs. A total of 1100 original and augmented images were then randomly divided into a training (75%) dataset to produce segmentation models and a testing (25%) dataset to determine the evaluation metrics of the segmentation models. Results: The performance of the segmentation models against the radiographic diagnosis of periodontitis conducted by a dentist was described by evaluation metrics (i.e., the Dice coefficient and the intersection-over-union [IoU] score). Multi-Label U-Net achieved a Dice coefficient of 0.96 and an IoU score of 0.97. Meanwhile, Mask R-CNN attained a Dice coefficient of 0.87 and an IoU score of 0.74. U-Net showed the characteristics of semantic segmentation, and Mask R-CNN performed instance segmentation with accuracy, precision, recall, and F1-score values of 95%, 85.6%, 88.2%, and 86.6%, respectively. Conclusion: Multi-Label U-Net produced superior image segmentation to that of Mask R-CNN. The authors recommend integrating it with other techniques to develop hybrid models for automatic periodontitis detection.
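
For reference, the two evaluation metrics quoted above are straightforward to compute from binary masks; the following is a generic sketch, not code from the study.

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, target: np.ndarray):
    """Dice coefficient and IoU for binary masks (values 0/1)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2 * inter / (pred.sum() + target.sum())
    iou = inter / union
    return dice, iou

# Toy masks: the prediction overlaps ground truth but extends one column further
target = np.zeros((4, 4), dtype=int); target[1:3, 1:3] = 1
pred = np.zeros((4, 4), dtype=int);   pred[1:3, 1:4] = 1
d, i = dice_and_iou(pred, target)
print(f"dice={d:.2f}, iou={i:.2f}")   # dice=0.80, iou=0.67
```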

Building Detection by Convolutional Neural Network with Infrared Image, LiDAR Data and Characteristic Information Fusion (적외선 영상, 라이다 데이터 및 특성정보 융합 기반의 합성곱 인공신경망을 이용한 건물탐지)

  • 조은지;이동천
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / Vol. 38, No. 6 / pp.635-644 / 2020
  • Studies that recognize, detect, and segment objects using deep learning (DL) are being applied in many fields, and images are mainly used as the training data for DL models. The purpose of this paper, however, is to segment objects and detect buildings by training Detectron2, an improved region-based convolutional neural network (R-CNN) model, with multimodal training data that contains not only images but also spatial-information characteristics. For this purpose, several features were extracted, such as the object outlines inherent in the infrared aerial images and the LiDAR data, and Haralick features, which represent statistical texture information. The training performance of a DL model depends not only on the quantity and characteristics of the data but also on the fusion method. Applying hybrid fusion, which combines early fusion and late fusion, detected 33% more buildings. These experimental results are considered to demonstrate the complementary effect achieved by the combined training and fusion of data with different characteristics.
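
A simplified illustration of the early-fusion half of a hybrid scheme: infrared intensity, a LiDAR-derived height raster, and a texture measure are normalized and stacked as input channels before being fed to the network, whereas late fusion would instead combine per-modality predictions. The channel choices and normalization are our assumptions, not the paper's exact configuration.

```python
import numpy as np

def normalize(band: np.ndarray) -> np.ndarray:
    """Scale a single band to [0, 1] for channel stacking."""
    band = band.astype(np.float32)
    return (band - band.min()) / (band.max() - band.min() + 1e-8)

# Dummy co-registered rasters (H x W): infrared image, LiDAR height, texture measure
h, w = 256, 256
infrared = np.random.rand(h, w)
ndsm = np.random.rand(h, w) * 30.0          # height above ground in metres
haralick_contrast = np.random.rand(h, w)    # e.g., GLCM contrast computed per pixel

# Early fusion: stack modalities into one multi-channel array [H, W, C].
fused = np.dstack([normalize(infrared), normalize(ndsm), normalize(haralick_contrast)])
print(fused.shape)   # (256, 256, 3) -- fed to the segmentation model as one input
```

Keeping the stack at three channels conveniently matches backbones pretrained on RGB imagery; with more modalities, the first convolution of the model would need to be adapted.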

A hierarchical semantic segmentation framework for computer vision-based bridge damage detection

  • Jingxiao Liu;Yujie Wei;Bingqing Chen;Hae Young Noh
    • Smart Structures and Systems / Vol. 31, No. 4 / pp.325-334 / 2023
  • Computer vision-based damage detection enables non-contact, efficient, and low-cost bridge health monitoring, which reduces the need for labor-intensive manual inspection or for a large number of on-site sensing instruments. By leveraging recent semantic segmentation approaches, we can detect regions of critical structural components and identify damages at the pixel level in images. However, existing methods perform poorly when detecting small and thin damages (e.g., cracks); the problem is exacerbated by imbalanced samples. To this end, we incorporate domain knowledge and introduce a hierarchical semantic segmentation framework that imposes a hierarchical semantic relationship between component categories and damage types. For instance, certain types of concrete cracks are only present on bridge columns, and therefore the non-column region can be masked out when detecting such damages. In this way, the damage detection model focuses on extracting features from relevant structural components and avoids extracting them from irrelevant regions. We also utilize multi-scale augmentation to preserve the contextual information of each image without losing the ability to handle small and/or thin damages. In addition, our framework employs importance sampling, in which images with rare components are sampled more often, to address sample imbalance. We evaluated our framework on a public synthetic dataset that consists of 2,000 railway bridges. Our framework achieves a 0.836 mean intersection over union (IoU) for structural component segmentation and a 0.483 mean IoU for damage segmentation. These results represent improvements of 5% and 18% for the structural component segmentation and damage segmentation tasks, respectively, compared to the best-performing baseline model.
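
The hierarchical constraint described above can be expressed as masking each damage type's probability map with the segmentation of the components it may occur on. A minimal sketch; the class names, indices, and hierarchy table are hypothetical.

```python
import numpy as np

# Hypothetical hierarchy: each damage type is only valid on certain components.
DAMAGE_ON_COMPONENT = {"concrete_crack": {"column"}, "exposed_rebar": {"column", "deck"}}
COMPONENT_IDS = {"background": 0, "column": 1, "deck": 2}

def apply_hierarchy(damage_probs: dict, component_map: np.ndarray) -> dict:
    """Zero out damage probabilities outside the components they can occur on."""
    constrained = {}
    for damage, probs in damage_probs.items():
        valid = np.isin(component_map,
                        [COMPONENT_IDS[c] for c in DAMAGE_ON_COMPONENT[damage]])
        constrained[damage] = probs * valid      # mask out irrelevant regions
    return constrained

# Toy 4x4 example: the left half is a column, the right half is deck
component_map = np.array([[1, 1, 2, 2]] * 4)
damage_probs = {"concrete_crack": np.full((4, 4), 0.9)}
out = apply_hierarchy(damage_probs, component_map)
print(out["concrete_crack"])   # crack probability kept only on the column pixels
```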

Context- and Shape-Aware Safety Monitoring for Construction Workers

  • Wei-Chih Chern;Kichang Choi;Vijayan Asari;Hongjo Kim
    • International Conference Proceedings / The 10th International Conference on Construction Engineering and Project Management / pp.423-430 / 2024
  • The task of vision-based safety monitoring in construction environments presents a formidable challenge, owing to the dynamic and heterogeneous nature of these settings. Despite advancements in artificial intelligence, the nuanced analysis of small or tiny personal protective equipment (PPE) remains a complex endeavor. In response to this challenge, this paper introduces an innovative safety monitoring system, specifically designed to enhance the safety monitoring of workers both at ground level and at elevated heights. This system integrates a suite of sophisticated technologies: instance segmentation, shape classification, object tracking, a visualization report, and a real-time notification module. Collectively, these components coalesce to deliver a safety monitoring solution that ensures a higher standard of protection for construction workers. The experimental results…..
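
A compressed sketch of how the listed modules could be chained per video frame. All class and function names below are placeholders rather than the system's actual interfaces, and the component implementations (segmenter, shape classifier, tracker, notifier) are omitted.

```python
from dataclasses import dataclass, field

@dataclass
class WorkerState:
    track_id: int
    has_helmet: bool = False
    harness_hooked: bool = False
    violations: list = field(default_factory=list)

def monitor_frame(frame, segmenter, shape_classifier, tracker, notifier):
    """One pass of a hypothetical pipeline: segment PPE instances, classify
    shapes, associate detections with tracks, and notify on violations."""
    instances = segmenter(frame)                       # instance masks + classes
    tracks = tracker.update(instances)                 # object tracking across frames
    states = []
    for track in tracks:
        state = WorkerState(track_id=track.id)
        state.has_helmet = any(d.label == "helmet" for d in track.detections)
        # Shape classification refines thin/small PPE such as harness lanyards.
        state.harness_hooked = shape_classifier(track.crop(frame)) == "hooked"
        if not state.has_helmet or not state.harness_hooked:
            state.violations.append("missing PPE")
            notifier.send(track.id, state.violations)  # real-time notification module
        states.append(state)
    return states
```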