• Title/Summary/Keyword: Image Segmentation

Fiber Classification and Detection Technique Proposed for Applying on the PVA-ECC Sectional Image (PVA-ECC단면 이미지의 섬유 분류 및 검출 기법)

  • Kim, Yun-Yong;Lee, Bang-Yeon;Kim, Jin-Keun
    • Journal of the Korea Concrete Institute / v.20 no.4 / pp.513-522 / 2008
  • The fiber dispersion performance in fiber-reinforced cementitious composites is a crucial factor in achieving the desired mechanical performance. However, evaluating the fiber dispersion performance in the composite PVA-ECC (Polyvinyl alcohol-Engineered Cementitious Composite) is extremely challenging because of the low contrast of PVA fibers against the cement-based matrix. In the present work, an enhanced fiber detection technique is developed and demonstrated. Using a fluorescence technique on the PVA-ECC, PVA fibers are observed as green dots in the cross-section of the composite. After the fluorescence image is captured with a Charge-Coupled Device (CCD) camera through a microscope, the fibers are detected more accurately by employing a series of processes based on categorization, watershed segmentation, and morphological reconstruction.
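
A minimal sketch of the watershed-plus-morphological-reconstruction step, using scikit-image rather than the authors' implementation; the file name, channel choice, and parameter values are assumptions.

```python
# Minimal sketch of watershed segmentation with morphological reconstruction
# (illustrative only; file name, channel choice, and thresholds are assumed).
import numpy as np
from scipy import ndimage as ndi
from skimage import io, filters, morphology, segmentation

image = io.imread("pva_ecc_cross_section.png")       # fluorescence image (assumed path)
green = image[..., 1].astype(float)                  # fibers appear as green dots

# Suppress background with morphological reconstruction (opening by reconstruction)
seed = morphology.erosion(green, morphology.disk(2))
reconstructed = morphology.reconstruction(seed, green, method="dilation")

# Threshold and split touching fibers with a distance-transform watershed
binary = reconstructed > filters.threshold_otsu(reconstructed)
distance = ndi.distance_transform_edt(binary)
markers, _ = ndi.label(morphology.h_maxima(distance, h=1))
labels = segmentation.watershed(-distance, markers, mask=binary)

print("Detected fiber candidates:", labels.max())
```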

Extracting curved text lines using the chain composition and the expanded grouping method (체인 정합과 확장된 그룹핑 방법을 사용한 곡선형 텍스트 라인 추출)

  • Bai, Nguyen Noi;Yoon, Jin-Seon;Song, Young-Jun;Kim, Nam;Kim, Yong-Gi
    • The KIPS Transactions:PartB / v.14B no.6 / pp.453-460 / 2007
  • In this paper, we present a method to extract text lines from poorly structured documents. The text lines may have different orientations and considerably curved shapes, and there may be a few wide inter-word gaps within a text line. Such text lines can be found in posters, address blocks, and artistic documents. Our method is based on traditional perceptual grouping, but we develop novel solutions to overcome the problems of insufficient seed points and varied orientations within a single line. In this paper, we assume that text lines consist of connected components, where each connected component is a set of black pixels belonging to a letter or to several touching letters. In our scheme, connected components closer than an iteratively incremented threshold are joined into a chain. Elongated chains are identified as the seed chains of lines. The seed chains are then extended to the left and to the right according to the local orientations, which are re-evaluated at each side of a chain as it is extended. By this process, all text lines are finally constructed. In our experiments, the proposed method extracted considerably curved text lines from logos and slogans well, achieving 98% for straight-line extraction and 94% for curved-line extraction.
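
The chain-building step described above can be sketched as follows; this is not the authors' code, and the distance thresholds, centroid-based distance measure, and stopping criterion are assumptions.

```python
# Illustrative sketch of chaining connected components by an iteratively
# incremented distance threshold (parameters and distance measure assumed).
import numpy as np
import cv2

def component_centroids(binary_image):
    """Return centroids of connected components (letters or touching letters)."""
    n, _, _, centroids = cv2.connectedComponentsWithStats(binary_image.astype(np.uint8))
    return centroids[1:]  # drop the background component

def build_chains(centroids, start=5.0, step=5.0, max_threshold=50.0):
    """Merge components into chains while the distance threshold grows."""
    chains = [[i] for i in range(len(centroids))]
    threshold = start
    while threshold <= max_threshold and len(chains) > 1:
        merged = True
        while merged:
            merged = False
            for a in range(len(chains)):
                for b in range(a + 1, len(chains)):
                    d = min(np.linalg.norm(centroids[i] - centroids[j])
                            for i in chains[a] for j in chains[b])
                    if d < threshold:
                        chains[a] += chains.pop(b)
                        merged = True
                        break
                if merged:
                    break
        threshold += step
    return chains

# Elongated chains (many members spread along one direction) would then be
# kept as seed chains and extended left/right using local orientations.
```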

Deep learning based crack detection from tunnel cement concrete lining (딥러닝 기반 터널 콘크리트 라이닝 균열 탐지)

  • Bae, Soohyeon;Ham, Sangwoo;Lee, Impyeong;Lee, Gyu-Phil;Kim, Donggyou
    • Journal of Korean Tunnelling and Underground Space Association / v.24 no.6 / pp.583-598 / 2022
  • Human-based tunnel inspections are affected by the subjective judgment of the inspector, which makes continuous history management difficult. There has recently been a great deal of research on deep learning-based automatic crack detection. However, the large public crack datasets used in most studies differ significantly from images found in tunnels, and additional work is required to build sophisticated crack labels under current tunnel evaluation practice. Therefore, we present a method to improve crack detection performance by feeding existing datasets into a deep learning model. We evaluate and compare the performance of deep learning models trained on combinations of existing tunnel datasets, high-quality tunnel datasets, and public crack datasets. As a result, DeepLabv3+ with the cross-entropy loss function performed best when trained on the public datasets combined with the patchwise-classified and oversampled tunnel datasets. In the future, we expect this work to contribute to establishing a plan for efficiently utilizing data from the tunnel image acquisition system for deep learning model training.
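
A minimal training sketch for a semantic segmentation model with cross-entropy loss. torchvision ships DeepLabV3 (not the "+" variant named in the paper), so it is used here as a stand-in; the dataset, class count, and hyperparameters are assumptions.

```python
# Sketch of training a semantic segmentation model with cross-entropy loss
# (DeepLabV3 from torchvision as a stand-in for DeepLabv3+; values assumed).
import torch
from torch import nn, optim
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 2  # background, crack (assumed)
model = deeplabv3_resnet50(weights=None, num_classes=NUM_CLASSES)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, masks):
    """One optimization step: images (N,3,H,W) float, masks (N,H,W) long."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)["out"]          # (N, NUM_CLASSES, H, W)
    loss = criterion(logits, masks)
    loss.backward()
    optimizer.step()
    return loss.item()

# Random tensors stand in for patchwise crack images and labels
dummy_images = torch.rand(2, 3, 256, 256)
dummy_masks = torch.randint(0, NUM_CLASSES, (2, 256, 256))
print("loss:", train_step(dummy_images, dummy_masks))
```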

Deep learning algorithm of concrete spalling detection using focal loss and data augmentation (Focal loss와 데이터 증강 기법을 이용한 콘크리트 박락 탐지 심층 신경망 알고리즘)

  • Shim, Seungbo;Choi, Sang-Il;Kong, Suk-Min;Lee, Seong-Won
    • Journal of Korean Tunnelling and Underground Space Association / v.23 no.4 / pp.253-263 / 2021
  • Concrete structures are damaged by aging and external environmental factors. This type of damage first appears in the form of cracks and then proceeds in the form of spalling. Such concrete damage can be a main cause of reducing the original design bearing capacity of the structure and can negatively affect its stability. If such damage continues, it may lead to a safety accident in the future, so proper repair and reinforcement are required. To this end, an accurate and objective condition inspection of the structure must be performed, and this inspection requires a sensing technology capable of detecting the damaged area. For this reason, we propose a deep learning-based image processing algorithm that can detect spalling. To develop it, 298 spalling images were obtained, of which 253 were used for training and the remaining 45 for testing. In addition, an improved loss function and a data augmentation technique were applied to improve detection performance. As a result, the detection of concrete spalling achieved a mean intersection over union of 80.19%. In conclusion, we developed an algorithm to detect concrete spalling through a deep learning-based image processing technique with an improved loss function and data augmentation. This technology is expected to be utilized for accurate inspection and diagnosis of structures in the future.
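
A minimal sketch of a focal loss for segmentation, since the title names it as the improved loss; the paper's exact formulation and its alpha/gamma values are not given here and are assumed.

```python
# Minimal sketch of a focal loss for segmentation (illustrative only; the
# paper's exact formulation, alpha, and gamma values are assumed).
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """
    logits:  (N, C, H, W) raw scores from the segmentation network
    targets: (N, H, W) integer class labels
    Down-weights easy pixels so that rare classes such as spalling
    contribute relatively more to the gradient.
    """
    log_probs = F.log_softmax(logits, dim=1)                 # (N, C, H, W)
    ce = F.nll_loss(log_probs, targets, reduction="none")    # per-pixel cross-entropy
    pt = torch.exp(-ce)                                      # probability of true class
    loss = alpha * (1.0 - pt) ** gamma * ce
    return loss.mean()

# Example usage with random tensors
logits = torch.randn(2, 2, 64, 64)
targets = torch.randint(0, 2, (2, 64, 64))
print("focal loss:", focal_loss(logits, targets).item())
```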

Identification of shear layer at river confluence using (RGB) aerial imagery (RGB 항공 영상을 이용한 하천 합류부 전단층 추출법)

  • Noh, Hyoseob;Park, Yong Sung
    • Journal of Korea Water Resources Association / v.54 no.8 / pp.553-566 / 2021
  • A river confluence is often characterized by a shear layer and the associated strong mixing. In natural rivers, the main channel and its tributary can be visually separated by the shear layer owing to their contrasting colors, and the shear layer can be easily observed in aerial images from satellites or unmanned aerial vehicles. This study proposes a low-cost identification method that extracts geographic features of the shear layer from RGB aerial images. The method consists of three stages. First, in order to identify the shear layer, it performs image segmentation using a Gaussian mixture model and extracts the water bodies of the main channel and the tributary. Next, a self-organizing map simplifies the flow line of the water bodies into a one-dimensional curvilinear grid. After that, a curvilinear coordinate transformation is performed using the water-body pixels and the curve grid. The shear layer identification method was successfully applied to the confluence of the Nakdong River and the Nam River to extract geometric shear layer features (confluence angle, upstream and downstream channel widths, shear layer length, and maximum shear layer thickness).
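
A sketch of the first stage, pixel clustering with a Gaussian mixture model to separate the water bodies; the file name and the number of mixture components are assumptions.

```python
# Sketch of stage 1: pixel clustering with a Gaussian mixture model to separate
# the two water bodies (illustrative; file name and cluster count assumed).
import numpy as np
from skimage import io
from sklearn.mixture import GaussianMixture

image = io.imread("confluence_rgb.png")[..., :3]      # RGB aerial image (assumed path)
pixels = image.reshape(-1, 3).astype(float)

# e.g. 3 components: main-channel water, tributary water, land/vegetation
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
labels = gmm.fit_predict(pixels).reshape(image.shape[:2])

# The two water classes would then be masked out before fitting the
# self-organizing map to the combined water body.
for k in range(3):
    print(f"cluster {k}: {np.mean(labels == k):.2%} of pixels")
```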

Development of Chinese Cabbage Detection Algorithm Based on Drone Multi-spectral Image and Computer Vision Techniques (드론 다중분광영상과 컴퓨터 비전 기술을 이용한 배추 객체 탐지 알고리즘 개발)

  • Ryu, Jae-Hyun;Han, Jung-Gon;Ahn, Ho-yong;Na, Sang-Il;Lee, Byungmo;Lee, Kyung-do
    • Korean Journal of Remote Sensing / v.38 no.5_1 / pp.535-543 / 2022
  • Drones are used in the agricultural field to diagnose crop growth and to provide information through images. When high-spatial-resolution drone images are used, growth information can be produced for each object; however, accurate object detection is required and adjacent objects must be efficiently separated. The purpose of this study is to develop a Chinese cabbage object detection algorithm using multispectral reflectance images observed from a drone and computer vision techniques. Drone images were captured between 7 and 15 days after planting Chinese cabbage in each year from 2018 to 2020. The thresholds of the object detection algorithm were set based on the 2019 data, and the algorithm was evaluated using images from 2018 and 2019. The vegetation area was classified using the characteristics of spectral reflectance. Then, morphological techniques such as dilation, erosion, and image segmentation that considers the size of the object were applied to improve object detection accuracy within the vegetation area. The precision of the developed object detection algorithm was over 95.19%, and the recall and accuracy were over 95.4% and 93.68%, respectively. The F1-score of the algorithm was over 0.967 for both years. The location of the center of each Chinese cabbage object extracted using the developed algorithm will be used to provide decision-making information during the growing season.
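
A sketch of the morphological clean-up applied to a vegetation mask before counting cabbage objects; the kernel sizes and area limits are assumptions, not the paper's values.

```python
# Sketch of morphological clean-up of a vegetation mask before object counting
# (illustrative; kernel sizes and area limits are assumed).
import numpy as np
import cv2

def refine_vegetation_mask(mask, min_area=200, max_area=5000):
    """mask: uint8 binary image where vegetation pixels are 255."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)      # erosion then dilation
    closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)   # fill small gaps

    # Keep only components whose size is plausible for a single cabbage
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(closed)
    centers = [tuple(centroids[i]) for i in range(1, n)
               if min_area <= stats[i, cv2.CC_STAT_AREA] <= max_area]
    return closed, centers

# Example with a random mask standing in for a thresholded reflectance image
dummy = (np.random.rand(256, 256) > 0.7).astype(np.uint8) * 255
_, cabbage_centers = refine_vegetation_mask(dummy)
print("candidate objects:", len(cabbage_centers))
```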

Spatial Replicability Assessment of Land Cover Classification Using Unmanned Aerial Vehicle and Artificial Intelligence in Urban Area (무인항공기 및 인공지능을 활용한 도시지역 토지피복 분류 기법의 공간적 재현성 평가)

  • Geon-Ung, PARK;Bong-Geun, SONG;Kyung-Hun, PARK;Hung-Kyu, LEE
    • Journal of the Korean Association of Geographic Information Studies / v.25 no.4 / pp.63-80 / 2022
  • As technologies for analyzing and predicting issues by reconstructing real space in virtual space have developed, it has become more important to acquire precise spatial information in complex cities. In this study, images of an urban area with a complex landscape were acquired using an unmanned aerial vehicle, and land cover classification was performed with object-based image analysis (OBIA) and semantic segmentation techniques, which are image classification techniques suitable for high-resolution imagery. In addition, based on imagery collected at the same time, the replicability of the land cover classification of each artificial intelligence (AI) model was examined for areas the models did not learn. When the AI models were trained on the training site, the land cover classification accuracy was 89.3% for OBIA-RF, 85.0% for OBIA-DNN, and 95.3% for U-Net. When the AI models were applied to the replicability assessment site, the accuracy of OBIA-RF decreased by 7%, OBIA-DNN by 2.1%, and U-Net by 2.3%. U-Net, which considers both morphological and spectral characteristics, performed well in both land cover classification accuracy and the replicability evaluation. As precise spatial information becomes more important, the results of this study are expected to contribute to urban environment research as a basic data generation method.
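
A much-reduced U-Net-style encoder/decoder illustrating the skip-connection idea behind the semantic segmentation model; the channel widths and class count are assumptions and the real U-Net is considerably deeper.

```python
# Tiny U-Net-style encoder/decoder for land cover classification
# (illustrative sketch, far smaller than a full U-Net; sizes assumed).
import torch
from torch import nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_channels=3, num_classes=6):   # e.g. 6 land cover classes (assumed)
        super().__init__()
        self.enc1 = conv_block(in_channels, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)                   # 64 = 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                # full-resolution features
        e2 = self.enc2(self.pool(e1))                    # half-resolution features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d1)                             # per-pixel class scores

logits = TinyUNet()(torch.rand(1, 3, 128, 128))
print(logits.shape)   # torch.Size([1, 6, 128, 128])
```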

The Application Methods of FarmMap Reading in Agricultural Land Using Deep Learning (딥러닝을 이용한 농경지 팜맵 판독 적용 방안)

  • Wee Seong Seung;Jung Nam Su;Lee Won Suk;Shin Yong Tae
    • KIPS Transactions on Software and Data Engineering / v.12 no.2 / pp.77-82 / 2023
  • The Ministry of Agriculture, Food and Rural Affairs established the FarmMap, a digital map of agricultural land. In this study, using deep learning, we suggest applying FarmMap reading to farmland such as paddy fields, fields, ginseng, fruit trees, facilities, and uncultivated land. The FarmMap, created by digitizing real-world agricultural land from aerial and satellite images, is used as spatial information for planting status and drone operation. A reading manual has been prepared and updated every year by demarcating the boundaries of agricultural land and reading its attributes. Human reading of agricultural land differs depending on reading ability and experience, and reading errors are difficult to verify in practice because of budget limitations. Since the FarmMap carries location and class information for the objects in images of the five types of farmland properties, a suitable AI technique was tested with an instance segmentation model based on ResNet50. The results of attribute reading of agricultural land using deep learning were compared with attribute reading by humans. If technology is developed with a focus on the attribute readings that show differing results, it is expected to play a large role in reducing attribute errors and improving the accuracy of the digital map of agricultural land.
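
The abstract names ResNet50 in an instance segmentation context; one common pairing is Mask R-CNN with a ResNet-50 FPN backbone, sketched here as an assumption rather than the authors' exact setup. The class count is also assumed.

```python
# Hedged sketch: Mask R-CNN with a ResNet-50 FPN backbone as a stand-in
# instance segmentation model (class count and setup assumed).
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 6  # background + five farmland property classes (assumed)

model = maskrcnn_resnet50_fpn(weights=None)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, NUM_CLASSES)

# Inference on a dummy aerial patch; each prediction carries boxes, labels, masks
model.eval()
with torch.no_grad():
    prediction = model([torch.rand(3, 512, 512)])[0]
print(prediction.keys())  # dict_keys(['boxes', 'labels', 'scores', 'masks'])
```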

Detection of Plastic Greenhouses by Using Deep Learning Model for Aerial Orthoimages (딥러닝 모델을 이용한 항공정사영상의 비닐하우스 탐지)

  • Byunghyun Yoon;Seonkyeong Seong;Jaewan Choi
    • Korean Journal of Remote Sensing / v.39 no.2 / pp.183-192 / 2023
  • Remotely sensed data such as satellite imagery and aerial photos can be used to extract and detect objects in an image through image interpretation and processing techniques. In particular, the potential for digital map updating and land monitoring through automatic object detection has grown as the spatial resolution of remotely sensed data has improved and deep learning technologies have developed. In this paper, we extracted plastic greenhouses from aerial orthophotos using a fully convolutional densely connected convolutional network (FC-DenseNet), one of the representative deep learning models for semantic segmentation, and then performed a quantitative analysis of the extraction results. Using the farm map of the Ministry of Agriculture, Food and Rural Affairs in Korea, training data were generated by labeling plastic greenhouses in the Damyang and Miryang areas, and FC-DenseNet was trained on this dataset. To apply the deep learning model to remotely sensed imagery, instance normalization, which can maintain the spectral characteristics of the bands, was used for normalization. In addition, optimal weights for each band were determined by adding attention modules to the deep learning model. The experiments showed that the deep learning model can extract plastic greenhouses, and these results can be applied to digital map updating of the Farm-map and land cover maps.
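
The abstract describes instance normalization plus per-band attention; a squeeze-and-excitation-style channel attention block is one common way to weight bands and is sketched below as an assumption, not as the paper's module.

```python
# Hedged sketch of per-band weighting: instance normalization followed by a
# squeeze-and-excitation style channel attention block (not the paper's module).
import torch
from torch import nn

class BandAttention(nn.Module):
    def __init__(self, channels, reduction=2):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=True)  # per-band statistics
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (N, bands, H, W)
        x = self.norm(x)
        weights = self.fc(x.mean(dim=(2, 3)))   # one weight per band
        return x * weights[:, :, None, None]    # reweight each band

bands = torch.rand(1, 4, 128, 128)              # e.g. R, G, B, NIR orthophoto patch
print(BandAttention(4)(bands).shape)            # torch.Size([1, 4, 128, 128])
```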

Lip Contour Detection by Multi-Threshold (다중 문턱치를 이용한 입술 윤곽 검출 방법)

  • Kim, Jeong Yeop
    • KIPS Transactions on Software and Data Engineering / v.9 no.12 / pp.431-438 / 2020
  • In this paper, a method to extract the lip contour by multiple thresholds is proposed. Spyridonos et al. proposed a method to extract the lip contour as follows. The first step is to obtain the Q image from the transform of RGB into YIQ. The second step is to find the lip corner points by change-point detection and to split the Q image into upper and lower parts at the corner points. Candidate lip contours are obtained by applying thresholds to the Q image. For each candidate contour, a feature variance is calculated, and the contour with the maximum variance is adopted as the final contour. The feature variance 'D' is based on the absolute differences near the contour points. The conventional method has three problems. The first is related to the lip corner points: the calculation of the variance depends on many skin pixels, so the accuracy decreases and the split of the Q image is affected. Second, there is no analysis of color systems other than YIQ; YIQ is a good choice, but other color systems such as HSV, CIELUV, and YCrCb should be considered. The final problem is related to the selection of the optimal contour: in the selection process, the maximum of the average feature variance over the pixels near the contour points is used, which shrinks the extracted contour compared with the ground-truth contours. To solve the first problem, the proposed method excludes some of the skin pixels, yielding a 30% performance increase. For the second problem, the HSV, CIELUV, and YCrCb coordinate systems were tested, and no particular dependency of the conventional method on the color system was found. For the final problem, the maximum of the total sum of the feature variance is adopted rather than the maximum of the average feature variance, yielding a 46% performance increase. Combining all of these solutions, the proposed method achieves twice the accuracy and stability of the conventional method.
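
A sketch of the first steps described above: converting RGB to YIQ, taking the Q channel, and generating candidate contours at multiple thresholds; the file name and the threshold grid are assumptions.

```python
# Sketch of RGB-to-YIQ conversion and multi-threshold candidate contours
# (illustrative only; file name and threshold grid are assumed).
import numpy as np
from skimage import io, color, measure

image = io.imread("face.png")[..., :3]            # RGB face image (assumed path)
yiq = color.rgb2yiq(image / 255.0)
q = yiq[..., 2]                                    # Q channel highlights lip color

candidates = []
for t in np.linspace(q.min(), q.max(), 20)[1:-1]:  # multiple thresholds
    mask = q > t
    contours = measure.find_contours(mask.astype(float), 0.5)
    if contours:
        # keep the longest contour at this threshold as the lip candidate
        candidates.append(max(contours, key=len))

# A feature variance computed near each candidate's points would then be used
# to select the final contour, as described in the abstract.
print("candidate contours:", len(candidates))
```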