• Title/Summary/Keyword: Over-Segmentation

Deep Learning-based Depth Map Estimation: A Review

  • Abdullah, Jan;Safran, Khan;Suyoung, Seo
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.1
    • /
    • pp.1-21
    • /
    • 2023
  • In this technically advanced era, we are surrounded by smartphones, computers, and cameras, which help us store visual information in 2D image planes. However, such images lack 3D spatial information about the scene, which is very useful for scientists, surveyors, engineers, and even robots. To tackle this problem, depth maps are generated for the respective image planes. A depth map, or depth image, is a single-image representation that carries information along three axes, i.e., xyz coordinates, where z is the object's distance from the camera axis. Depth estimation is a fundamental task for many applications, including augmented reality, object tracking, segmentation, scene reconstruction, distance measurement, autonomous navigation, and autonomous driving. Much work has been done on computing depth maps. We reviewed the status of depth map estimation using different techniques from several papers, study areas, and models applied over the last 20 years, surveying depth-mapping techniques based on both traditional methods and newly developed deep-learning methods. The primary purpose of this study is to present a detailed review of state-of-the-art traditional depth mapping techniques and recent deep learning methodologies. The study covers the critical points of each method from different perspectives, such as datasets, procedures performed, types of algorithms, loss functions, and well-known evaluation metrics, and also discusses the subdomains of each method, namely supervised, unsupervised, and semi-supervised approaches. We also elaborate on the challenges of the different methods. The study concludes with new ideas for future research on depth map estimation. (A minimal sketch of recovering 3D points from a depth map is given below.)
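
The review above describes a depth map as a per-pixel distance z from which xyz coordinates can be recovered. As a minimal illustration of that relationship (not taken from any of the reviewed papers), the sketch below back-projects a depth map to 3D points with the standard pinhole model; the intrinsics fx, fy, cx, cy and the toy depth values are placeholder assumptions.

```python
# Illustrative only: back-project a metric depth map to 3D points with the
# pinhole camera model. Intrinsics and depth values are placeholders.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Convert an HxW metric depth map into an (H*W, 3) array of xyz points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinate grids
    z = depth
    x = (u - cx) * z / fx                           # back-project along x
    y = (v - cy) * z / fy                           # back-project along y
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy 4x4 depth map with every pixel 2.0 m from the camera plane.
points = depth_to_points(np.full((4, 4), 2.0), fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(points.shape)  # (16, 3)
```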

Synthetic data augmentation for pixel-wise steel fatigue crack identification using fully convolutional networks

  • Zhai, Guanghao;Narazaki, Yasutaka;Wang, Shuo;Shajihan, Shaik Althaf V.;Spencer, Billie F. Jr.
    • Smart Structures and Systems
    • /
    • v.29 no.1
    • /
    • pp.237-250
    • /
    • 2022
  • Structural health monitoring (SHM) plays an important role in ensuring the safety and functionality of critical civil infrastructure. In recent years, numerous researchers have conducted studies to develop computer vision and machine learning techniques for SHM purposes, offering the potential to reduce the laborious nature and improve the effectiveness of field inspections. However, high-quality vision data from various types of damaged structures is relatively difficult to obtain because damaged structures occur rarely. The lack of data is particularly acute for fatigue cracks in steel bridge girders. As a result, the lack of training data is one of the main issues that hinders wider application of these powerful techniques for SHM. To address this problem, this article proposes the use of synthetic data to augment real-world datasets used for training neural networks that can identify fatigue cracks in steel structures. First, random textures representing the surface of steel structures with fatigue cracks are created and mapped onto a 3D graphics model. Subsequently, this model is used to generate synthetic images for various lighting conditions and camera angles. A fully convolutional network is then trained for two cases: (1) using only real-world data, and (2) using both synthetic and real-world data. By employing synthetic data augmentation in the training process, the crack identification performance of the neural network on the test dataset improves from 35% to 40% and from 49% to 62% for intersection over union (IoU) and precision, respectively, demonstrating the efficacy of the proposed approach. (A short sketch of the IoU and precision computation is given below.)
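
The abstract reports pixel-wise intersection over union (IoU) and precision. A minimal sketch of how those two metrics are typically computed for binary crack masks is shown below; the toy arrays are placeholders, not data from the study.

```python
# Illustrative only: pixel-wise IoU and precision for binary crack masks.
import numpy as np

def iou_and_precision(pred, target):
    """pred, target: boolean arrays of the same shape (True = crack pixel)."""
    tp = np.logical_and(pred, target).sum()          # crack pixels found correctly
    fp = np.logical_and(pred, ~target).sum()         # false alarms
    fn = np.logical_and(~pred, target).sum()         # missed crack pixels
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return iou, precision

pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
target = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(iou_and_precision(pred, target))  # (0.5, 0.666...)
```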

HMM Based Part of Speech Tagging for Hadith Isnad

  • Abdelkarim Abdelkader
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.3
    • /
    • pp.151-160
    • /
    • 2023
  • The Hadith is the second source of Islamic jurisprudence after the Qur'an. Both sources are indispensable for Muslims to practice Islam. All Ahadith have been collected and written down, but most books of Hadith contain Ahadith that may be weak or rejected. Therefore, for quite a long time, scholars of Hadith have defined laws, rules, and principles of Hadith to distinguish the correct Hadith (Sahih) from the fair (Hassen) and the weak (Dhaif). Unfortunately, the application of these rules, laws, and principles has until now been done manually by specialists or students. The work presented in this paper is part of the automatic treatment of Hadith; more specifically, it aims to automatically process the chain of narrators (Hadith Isnad) to find its different components and to assign each component its own tag using a statistical method: Hidden Markov Models (HMM). This method is a powerful abstraction for time series data and a robust tool for representing probability distributions over sequences of observations. In this paper, we describe an important tool in Hadith Isnad processing: a chunker based on HMM. The role of this tool is to decompose the chain of narrators (Isnad) and determine the tag of each part of the Isnad (POI). First, we compiled a tagset containing 13 tags. Then, we used these tags to manually build a corpus of 100 chains of narrators from "Sahih Alboukhari" and extracted a lexicon from this corpus. This lexicon is a set of XML documents based on HPSG features and contains the information of 134 narrators. After that, we designed and implemented an HMM-based analyzer that assigns each part of the Isnad its proper tag and each narrator its features. The system was tested on 2,661 non-duplicated Isnads from "Sahih Alboukhari" and achieved an F-score of 93%. (A toy sketch of HMM tag decoding is given below.)
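
The chunker above assigns a tag to each part of an Isnad with an HMM. The toy sketch below shows standard Viterbi decoding for such a tagger; the two-tag tagset, the probabilities, and the example tokens are invented for illustration and do not reproduce the paper's 13-tag set, lexicon, or corpus.

```python
# Illustrative only: Viterbi decoding for an HMM tagger with an invented tagset.
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely tag sequence for the observation sequence."""
    V = [{s: (start_p[s] * emit_p[s].get(obs[0], 1e-9), None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s].get(obs[t], 1e-9), p)
                for p in states
            )
            V[t][s] = (prob, prev)
    best = max(states, key=lambda s: V[-1][s][0])   # best final state
    path = [best]
    for t in range(len(obs) - 1, 0, -1):            # backtrack stored predecessors
        best = V[t][best][1]
        path.insert(0, best)
    return path

states = ["NARRATOR", "TRANSMISSION_TERM"]          # toy part-of-Isnad tags
start_p = {"NARRATOR": 0.4, "TRANSMISSION_TERM": 0.6}
trans_p = {"NARRATOR": {"NARRATOR": 0.3, "TRANSMISSION_TERM": 0.7},
           "TRANSMISSION_TERM": {"NARRATOR": 0.8, "TRANSMISSION_TERM": 0.2}}
emit_p = {"NARRATOR": {"Malik": 0.5, "Nafi": 0.5},
          "TRANSMISSION_TERM": {"narrated": 0.6, "from": 0.4}}
print(viterbi(["narrated", "Malik", "from", "Nafi"], states, start_p, trans_p, emit_p))
```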

Development of Robust Feature Recognition and Extraction Algorithm for Dried Oak Mushrooms (건표고의 외관특징 인식 및 추출 알고리즘 개발)

  • Lee, C.H.;Hwang, H.
    • Journal of Biosystems Engineering
    • /
    • v.21 no.3
    • /
    • pp.325-335
    • /
    • 1996
  • Visual features are crucial for monitoring the growth state, indexing the drying performance, and grading the quality of oak mushrooms. A computer vision system with a neural-net information processing technique was utilized to quantize the quality factors of dried oak mushrooms distributed over the cap and gill sides. In this paper, visual feature extraction algorithms were integrated with neural-net processing to deal with the various fuzzy patterns of mushroom shapes and to compensate for the fault sensitivity of the crisp criteria and heuristic rules derived from the image processing results. The proposed algorithm improved the segmentation of the skin features of each side, the identification of cap and gill surfaces, the identification of stipe states, and the removal of the stipe. The visual characteristics of dried oak mushrooms were analyzed, and the primary visual features essential to the quality evaluation were extracted and quantized. In this study, black-and-white grayscale images were captured and used for the algorithm development. (A generic grayscale segmentation sketch is given below.)
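
As a loose, generic illustration of separating a mushroom region from its background in a grayscale image (not the paper's neural-net-assisted feature extraction pipeline), the sketch below applies a simple Otsu threshold; the random image is a placeholder.

```python
# Illustrative only: Otsu thresholding as a generic stand-in for foreground
# segmentation of a grayscale mushroom image.
import numpy as np

def otsu_threshold(gray):
    """gray: 2D uint8 array. Returns the threshold maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = hist[:t].sum(), hist[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * hist[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * hist[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2 / total ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

gray = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # placeholder image
mask = gray > otsu_threshold(gray)                               # foreground mask
```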

Is three-piece maxillary segmentation surgery a stable procedure?

  • Renata Mayumi Kato;Joao Roberto Goncalves;Jaqueline Ignacio;Larry Wolford;Patricia Bicalho de Mello;Julianna Parizotto;Jonas Bianchi
    • The Korean Journal of Orthodontics
    • /
    • v.54 no.2
    • /
    • pp.128-135
    • /
    • 2024
  • Objective: The number of three-piece maxillary osteotomies has increased over the years; however, the literature remains controversial. The objective of this study was to evaluate the skeletal stability of this surgical modality compared with that of one-piece maxillary osteotomy. Methods: This retrospective cohort study included 39 individuals who underwent Le Fort I maxillary osteotomies and were divided into two groups: group 1 (three pieces, n = 22) and group 2 (one piece, n = 17). Three cone-beam computed tomography scans from each patient (T1, pre-surgical; T2, post-surgical; and T3, follow-up) were used to evaluate the three-dimensional skeletal changes. Results: The differences within groups were statistically significant only for group 1 in terms of surgical changes (T2-T1) with a mean difference in the canine region of 3.09 mm and the posterior region of 3.08 mm. No significant differences in surgical stability were identified between or within the groups. The mean values of the differences between groups were 0.05 mm (posterior region) and -0.39 mm (canine region). Conclusions: Our findings suggest that one- and three-piece maxillary osteotomies result in similar post-surgical skeletal stability.

Reversible Watermarking based on Predicted Error Histogram for Medical Imagery (의료 영상을 위한 추정오차 히스토그램 기반 가역 워터마킹 알고리즘)

  • Oh, Gi-Tae;Jang, Han-Byul;Do, Um-Ji;Lee, Hae-Yeoun
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.4 no.5
    • /
    • pp.231-240
    • /
    • 2015
  • Medical imagery requires privacy protection while preserving the quality of the original content; reversible watermarking is a solution for this purpose. Previous research has focused on general imagery and achieved high capacity and high quality. However, such methods introduce distortion over the entire image and hence are not applicable to medical imagery, which requires preserving the quality of the objects. In this paper, we propose a novel reversible watermarking scheme for medical imagery that preserves the quality of the objects and achieves high capacity. First, the object and background regions are segmented, and then predicted-error histogram-based reversible watermarking is applied to each region. To embed the watermark efficiently with small distortion in the object region, the embedding level in the object region is set low while the embedding level in the background region is set high. In experiments, the proposed algorithm is compared with the previous predicted-error histogram-based algorithm in terms of embedding capacity and perceptual quality. The results show that the proposed algorithm outperforms the previous one. (A simplified sketch of prediction-error histogram shifting is given below.)
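
The proposal above applies predicted-error histogram-based reversible watermarking per region with different embedding levels. The sketch below shows only the generic building block, prediction-error histogram shifting on a single row with a left-neighbor predictor; the object/background split, the per-region embedding levels, overflow handling, and payload-length bookkeeping from the paper are omitted.

```python
# Illustrative only: prediction-error histogram shifting on one image row,
# left-neighbor predictor, embedding at the error-histogram peak e = 0.
import numpy as np

def embed_row(row, bits):
    orig = row.astype(int)                 # original values serve as predictor context
    out, payload = orig.copy(), list(bits)
    for i in range(1, len(out)):
        e = orig[i] - orig[i - 1]          # prediction error w.r.t. the left neighbor
        if e == 0 and payload:
            out[i] += payload.pop(0)       # expand the peak bin to carry one bit
        elif e > 0:
            out[i] += 1                    # shift positive errors to keep decoding unambiguous
    return out

def extract_row(marked):
    row, bits = marked.astype(int), []
    for i in range(1, len(row)):
        e = row[i] - row[i - 1]            # the left neighbor is already restored here
        if e in (0, 1):
            bits.append(int(e))            # recover the embedded bit
            row[i] -= e
        elif e > 1:
            row[i] -= 1                    # undo the shift
    return row, bits

row = np.array([100, 100, 102, 101, 101], dtype=np.uint8)
marked = embed_row(row, [1, 0])
restored, recovered = extract_row(marked)
print(recovered, np.array_equal(restored, row))  # [1, 0] True
```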

Contrast Enhancement Using a Density based Sub-histogram Equalization Technique (밀도기반의 분할된 히스토그램 평활화를 통한 대비 향상 기법)

  • Yoon, Hyun-Sup;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.46 no.1
    • /
    • pp.10-21
    • /
    • 2009
  • This paper presents a new histogram equalization scheme for enhancing contrast in regions where pixels have similar intensities. Conventional global equalization schemes over-equalize such regions, producing overly bright or dark pixels, while local equalization schemes produce unexpected discontinuities at block boundaries. The proposed algorithm segments the original histogram into sub-histograms with reference to brightness levels and equalizes each sub-histogram within a limited extent of equalization determined by its mean and variance. The final image is computed as the weighted sum of the equalized images obtained from the sub-histogram equalizations. By limiting the maximum and minimum ranges of the equalization operations on individual sub-histograms, the over-equalization effect is eliminated. The resulting image also does not lose feature information in low-density histogram regions, since these areas are equalized separately. The paper also describes how to determine the segmentation points in the histogram. The proposed algorithm has been tested on more than 100 images with various contrast levels, and the results are compared with conventional approaches to demonstrate its superiority. (A simplified sketch of sub-histogram equalization is given below.)
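
The sketch below illustrates the core idea of sub-histogram equalization in simplified form: the gray range is split at brightness points and each sub-histogram is equalized only within its own interval, bounding the extent of equalization. The density-based selection of split points and the weighted recombination described in the paper are not reproduced; the split points and test image are placeholders.

```python
# Illustrative only: equalize each brightness segment within its own interval.
import numpy as np

def sub_histogram_equalize(gray, splits=(85, 170)):
    """gray: 2D uint8 array; splits: interior segmentation points of [0, 255]."""
    bounds = [0, *splits, 256]
    out = np.zeros_like(gray)
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        mask = (gray >= lo) & (gray < hi)
        if not mask.any():
            continue
        hist = np.bincount(gray[mask], minlength=256)[lo:hi].astype(float)
        cdf = np.cumsum(hist) / hist.sum()
        # Map each sub-histogram back into its own interval [lo, hi - 1],
        # which bounds how far equalization can move a pixel.
        lut = (lo + cdf * (hi - 1 - lo)).astype(np.uint8)
        out[mask] = lut[gray[mask] - lo]
    return out

gray = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # placeholder image
enhanced = sub_histogram_equalize(gray)
```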

Detection Algorithm of Road Damage and Obstacle Based on Joint Deep Learning for Driving Safety (주행 안전을 위한 joint deep learning 기반의 도로 노면 파손 및 장애물 탐지 알고리즘)

  • Shim, Seungbo;Jeong, Jae-Jin
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.20 no.2
    • /
    • pp.95-111
    • /
    • 2021
  • As the population decreases in an aging society, the average age of drivers increases. Accordingly, the elderly, who are at high risk of being in an accident, need autonomous-driving vehicles. In order to secure driving safety on the road, these vehicles require several technologies for responding to various obstacles. Among them, technology is required to recognize static obstacles, such as poor road conditions, as well as dynamic obstacles, such as vehicles, bicycles, and people, that may be encountered while driving. In this study, we propose a deep neural network algorithm capable of simultaneously detecting these two types of obstacle. For this algorithm, we used 1,418 road images and produced annotation data that marks seven categories of dynamic obstacles and labels images to indicate road damage. As a result of training, dynamic obstacles were detected with an average accuracy of 46.22%, and road surface damage was detected with a mean intersection over union (mIoU) of 74.71%. In addition, the average time required to process a single image is 89 ms, so the algorithm is suitable for personal mobility vehicles, which are slower than ordinary vehicles. In the future, driving safety for personal mobility vehicles is expected to improve by utilizing this road-obstacle detection technology. (A short sketch of the mIoU computation is given below.)
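
The road-damage result above is reported as a mean intersection over union (mIoU). A short sketch of a typical per-class mIoU computation from a confusion matrix is given below; the label arrays and class count are toy placeholders.

```python
# Illustrative only: per-class mIoU from a confusion matrix.
import numpy as np

def mean_iou(pred, target, num_classes):
    """pred, target: integer label arrays of the same shape."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(cm, (target.ravel(), pred.ravel()), 1)    # confusion matrix counts
    tp = np.diag(cm).astype(float)
    union = cm.sum(axis=0) + cm.sum(axis=1) - tp        # TP + FP + FN per class
    ious = tp[union > 0] / union[union > 0]
    return ious.mean()

pred = np.array([[0, 1, 1], [2, 2, 0]])
target = np.array([[0, 1, 2], [2, 2, 0]])
print(round(mean_iou(pred, target, num_classes=3), 3))  # 0.722
```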

A Study of Development and Application of an Inland Water Body Training Dataset Using Sentinel-1 SAR Images in Korea (Sentinel-1 SAR 영상을 활용한 국내 내륙 수체 학습 데이터셋 구축 및 알고리즘 적용 연구)

  • Eu-Ru Lee;Hyung-Sup Jung
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_1
    • /
    • pp.1371-1388
    • /
    • 2023
  • Floods are becoming more severe and frequent due to global warming-induced climate change, and water disasters are increasing in Korea due to heavy rainfall and extended wet seasons. This makes preventive measures against climate change and efficient responses to water disasters crucial, and synthetic aperture radar (SAR) satellite imagery can help. This research constructed a water body training dataset of 1,423 samples covering individual water body regions along the Han and Nakdong river systems to reflect the properties of domestic water bodies observed in Sentinel-1 satellite radar imagery. We also prepared a document with precise data annotation criteria for a variety of situations. After the dataset was constructed, water body detection results were analyzed using U-Net, a deep learning model. The results of applying the trained model to water body locations not involved in training were then examined to validate inland water body monitoring on a national scale. The analysis showed that water bodies in the constructed dataset regions were detected accurately (F1-score: 0.987, intersection over union [IoU]: 0.955). Other domestic water body regions not used for training and evaluation showed similar accuracy (F1-score: 0.941, IoU: 0.89). Both results showed that the model accurately detected water bodies in most areas, although small streams and shadowed areas remained problematic. This work should improve the monitoring of water resource changes and disaster damage. Future studies will likely include datasets with more water body attributes; such databases could help manage and monitor water bodies nationwide and shed light on misclassified regions. (A toy encoder-decoder sketch is given below.)
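
The dataset above is evaluated with U-Net. The sketch below is a heavily reduced encoder-decoder in the same spirit, assuming PyTorch is available; the layer sizes, single-channel SAR input, and toy patch are assumptions and are far smaller than the network used in the study.

```python
# Illustrative only: a tiny U-Net-style model for binary water masks (PyTorch assumed).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)          # single-channel SAR backscatter input
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.head = nn.Conv2d(16, 1, 1)        # water / non-water logit per pixel

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d1)

logits = TinyUNet()(torch.randn(1, 1, 64, 64))   # toy SAR patch
print(logits.shape)  # torch.Size([1, 1, 64, 64])
```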

Waterbody Detection for the Reservoirs in South Korea Using Swin Transformer and Sentinel-1 Images (Swin Transformer와 Sentinel-1 영상을 이용한 우리나라 저수지의 수체 탐지)

  • Soyeon Choi;Youjeong Youn;Jonggu Kang;Seoyeon Kim;Yemin Jeong;Yungyo Im;Youngmin Seo;Wanyub Kim;Minha Choi;Yangwon Lee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_3
    • /
    • pp.949-965
    • /
    • 2023
  • In this study, we propose a method to monitor the surface area of agricultural reservoirs in South Korea using Sentinel-1 synthetic aperture radar images and the deep learning model Swin Transformer. Utilizing the Google Earth Engine platform, datasets from 2017 to 2021 were constructed for seven agricultural reservoirs, categorized into 700 K-ton, 900 K-ton, and 1.5 M-ton capacities. For four of the reservoirs, a total of 1,283 images were used for model training with shuffling and 5-fold cross-validation. Upon evaluation, the Swin Transformer Large model, configured with a window size of 12, demonstrated superior semantic segmentation performance, showing an average accuracy of 99.54% and a mean intersection over union (mIoU) of 95.15% across all folds. When the best-performing model was applied to the datasets of the remaining three reservoirs for validation, it achieved an accuracy of over 99% and an mIoU of over 94% for all reservoirs. These results indicate that the Swin Transformer model can effectively monitor the surface area of agricultural reservoirs in South Korea. (A short sketch of the shuffled 5-fold split is given below.)
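
The abstract mentions shuffling and 5-fold cross-validation over 1,283 training images. A short sketch of such a split, assuming scikit-learn is available, is shown below; the image identifiers and random seed are placeholders.

```python
# Illustrative only: a shuffled 5-fold split over hypothetical image identifiers.
from sklearn.model_selection import KFold

image_ids = list(range(1283))                    # stand-in for the 1,283 training images
kfold = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(kfold.split(image_ids), start=1):
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} val samples")
    # ... train the segmentation model on train_idx, evaluate on val_idx ...
```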