• Title/Summary/Keyword: multi-scale features


A Multi-Layer Perceptron for Color Index based Vegetation Segmentation (색상지수 기반의 식물분할을 위한 다층퍼셉트론 신경망)

  • Lee, Moon-Kyu
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.43 no.1
    • /
    • pp.16-25
    • /
    • 2020
  • Vegetation segmentation in a field color image is the process of distinguishing vegetation objects of interest, such as crops and weeds, from a background of soil and/or other residues. The performance of this process is crucial in automated precision agriculture, which includes weed control and crop status monitoring. To facilitate the segmentation, color indices have predominantly been used to transform the color image into a gray-scale image. A thresholding technique such as the Otsu method is then applied to distinguish vegetation parts from the background. An obvious demerit of threshold-based segmentation is that each pixel is classified as vegetation or background solely by the color feature of the pixel itself, without taking into account the color features of its neighboring pixels. This paper presents a new pixel-based segmentation method that employs a multi-layer perceptron neural network to classify the gray-scale image into vegetation and non-vegetation pixels. The input data of the neural network for each pixel are the 2-dimensional gray-level values surrounding the pixel. To generate a gray-scale image from a raw RGB color image, the well-known Excess Green minus Excess Red Index was used. Experimental results using 80 field images of 4 vegetation species demonstrate the superiority of the neural network over existing threshold-based segmentation methods in terms of accuracy, precision, recall, and their harmonic mean.
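The pipeline described above (ExG-ExR gray-scale conversion, then per-pixel classification from a window of surrounding gray levels) can be sketched in NumPy as follows; the function names and the 3x3 window size are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def exgr_index(rgb):
    """Excess Green minus Excess Red index on a float RGB image.

    rgb: H x W x 3 array with channels normalized to [0, 1].
    ExG = 2g - r - b and ExR = 1.4r - g, so ExG - ExR = 3g - 2.4r - b.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 3.0 * g - 2.4 * r - b

def neighborhood_patches(gray, k=3):
    """Extract the k x k gray-level neighborhood of every pixel.

    Returns an (H*W, k*k) matrix, one row per pixel -- the kind of
    2-D neighborhood input the abstract feeds to the perceptron.
    """
    pad = k // 2
    padded = np.pad(gray, pad, mode="edge")
    h, w = gray.shape
    rows = []
    for i in range(h):
        for j in range(w):
            rows.append(padded[i:i + k, j:j + k].ravel())
    return np.asarray(rows)
```

Each row of the patch matrix would then be labeled vegetation or background by the trained network, instead of thresholding the single center value.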

Human Action Recognition Using Pyramid Histograms of Oriented Gradients and Collaborative Multi-task Learning

  • Gao, Zan;Zhang, Hua;Liu, An-An;Xue, Yan-Bing;Xu, Guang-Ping
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.2
    • /
    • pp.483-503
    • /
    • 2014
  • In this paper, human action recognition using pyramid histograms of oriented gradients and collaborative multi-task learning is proposed. First, we accumulate global activities and construct a motion history image (MHI) for the RGB and depth channels respectively to encode the dynamics of an action in different modalities; different action descriptors are then extracted from the depth and RGB MHIs to represent the global textural and structural characteristics of these actions. Specifically, average value in hierarchical block, GIST, and pyramid histograms of oriented gradients descriptors are employed to represent human motion. To demonstrate the superiority of the proposed method, we evaluate the descriptors with KNN, SVM with linear and RBF kernels, SRC, and CRC models on the DHA dataset, a well-known dataset for human action recognition. Large-scale experimental results show that our descriptors are robust, stable, and efficient, and outperform the state-of-the-art methods. In addition, we further investigate the performance of our descriptors by combining them on the DHA dataset, and observe that the combined descriptors perform much better than any single descriptor. With multimodal features, we also propose a collaborative multi-task learning method for model learning and inference based on transfer learning theory. The main contributions lie in four aspects: 1) the proposed encoding scheme can filter the stationary part of the human body and reduce noise interference; 2) different kinds of features and models are assessed, and neighboring gradient information and pyramid layers are very helpful for representing these actions; 3) the proposed model can fuse features from different modalities regardless of the sensor types, the ranges of the values, and the dimensions of the features; 4) the latent common knowledge among different modalities can be discovered by transfer learning to boost performance.
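The motion history image underlying the descriptors above can be illustrated with a minimal per-frame update; the threshold and decay constants here are assumptions for the sketch, not the paper's settings:

```python
import numpy as np

def update_mhi(mhi, frame, prev_frame, tau=255, delta=32, decay=16):
    """One motion-history-image update step.

    Pixels whose absolute frame difference exceeds `delta` are set to
    `tau`; all other pixels decay toward zero, so recently moving
    regions stay bright while stationary ones fade out -- the filtering
    of the stationary body parts mentioned in the abstract.
    """
    motion = np.abs(frame.astype(int) - prev_frame.astype(int)) > delta
    return np.where(motion, tau, np.maximum(mhi - decay, 0))
```

Descriptors such as PHOG or GIST would then be computed on the accumulated MHI rather than on individual frames.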

Text-to-Face Generation Using Multi-Scale Gradients Conditional Generative Adversarial Networks (다중 스케일 그라디언트 조건부 적대적 생성 신경망을 활용한 문장 기반 영상 생성 기법)

  • Bui, Nguyen P.;Le, Duc-Tai;Choo, Hyunseung
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2021.11a
    • /
    • pp.764-767
    • /
    • 2021
  • While Generative Adversarial Networks (GANs) have seen huge success in image synthesis tasks, synthesizing high-quality images from text descriptions remains a challenging problem in computer vision. This paper proposes a method named Text-to-Face Generation Using Multi-Scale Gradients for Conditional Generative Adversarial Networks (T2F-MSGGANs) that combines GANs and a natural language processing model to create human faces that have the features described in the input text. The proposed method addresses two problems of GANs, mode collapse and training instability, by investigating how gradients at multiple scales can be used to generate high-resolution images. We show that T2F-MSGGANs converge stably and generate good-quality images.

Multi-scale Crack Detection Using Scaling (스케일링을 이용한 다중 스케일 균열 검출)

  • Kim, Young-Ro;Oh, Tae-Myung
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.9
    • /
    • pp.194-200
    • /
    • 2013
  • In this paper, we propose a multi-scale crack detection method using scaling. It is based on a morphology algorithm, crack features, and scaling. We use a morphology operator that extracts crack patterns, segmenting cracks from the background with opening and closing operations. Morphology-based segmentation detects cracks of small width better than existing integration methods that use subtraction. However, morphology methods using only one structuring element can detect cracks of only a fixed width; thus, we use a scaling method, with bilinear interpolation for the scaling. Our method calculates property values such as the number of pixels and the maximum length of each segmented region, and decides from these data whether the region belongs to a crack. Experimental results show that our proposed multi-scale crack detection method outperforms existing detection methods.
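The opening operation the abstract relies on (erosion followed by dilation) can be sketched for binary images in plain NumPy; the square structuring element and function names are illustrative, and rescaling the image between passes stands in for varying the element size:

```python
import numpy as np

def erode(img, k=3):
    """Binary erosion with a k x k square structuring element."""
    pad = k // 2
    p = np.pad(img, pad, mode="constant", constant_values=True)
    out = np.ones_like(img)
    for di in range(k):
        for dj in range(k):
            out &= p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out

def dilate(img, k=3):
    """Binary dilation with a k x k square structuring element."""
    pad = k // 2
    p = np.pad(img, pad, mode="constant", constant_values=False)
    out = np.zeros_like(img)
    for di in range(k):
        for dj in range(k):
            out |= p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out

def opening(img, k=3):
    """Opening removes regions thinner than the structuring element."""
    return dilate(erode(img, k), k)

def closing(img, k=3):
    """Closing fills gaps narrower than the structuring element."""
    return erode(dilate(img, k), k)
```

Running the same fixed-size opening on bilinearly rescaled copies of the image makes thin and wide cracks alike fall within the operator's reach, which is the multi-scale idea of the paper.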

Multi-view Image Generation from Stereoscopic Image Features and the Occlusion Region Extraction (가려짐 영역 검출 및 스테레오 영상 내의 특징들을 이용한 다시점 영상 생성)

  • Lee, Wang-Ro;Ko, Min-Soo;Um, Gi-Mun;Cheong, Won-Sik;Hur, Nam-Ho;Yoo, Ji-Sang
    • Journal of Broadcast Engineering
    • /
    • v.17 no.5
    • /
    • pp.838-850
    • /
    • 2012
  • In this paper, we propose a novel algorithm that generates multi-view images by using various image features obtained from given stereoscopic images. In the proposed algorithm, we first create an intensity-gradient saliency map from the given stereo images. Next, we calculate a block-based optical flow that represents the relative movement (disparity) of each block of a certain size between the left and right images, and we also obtain the disparities of feature points extracted by SIFT (Scale-Invariant Feature Transform). We then create a disparity saliency map by combining these extracted disparity features; the disparity saliency map is refined through occlusion detection and removal of false disparities. Third, we extract straight line segments in order to minimize the distortion of straight lines during image warping. Finally, we generate multi-view images with a grid mesh-based image warping algorithm, in which the extracted image features are used as constraints. The experimental results show that the proposed algorithm performs better than the conventional DIBR algorithm in terms of visual quality.

Development of Multi-Ensemble GCMs Based Spatio-Temporal Downscaling Scheme for Short-term Prediction (여름강수량의 단기예측을 위한 Multi-Ensemble GCMs 기반 시공간적 Downscaling 기법 개발)

  • Kwon, Hyun-Han;Min, Young-Mi;Hameed, Saji N.
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2009.05a
    • /
    • pp.1142-1146
    • /
    • 2009
  • A rainfall simulation and forecasting technique that can generate daily rainfall sequences conditional on multi-model ensemble GCMs is developed and applied to data in Korea for the major rainy season. The GCM forecasts are provided by the APEC Climate Center. A Weather State Based Downscaling Model (WSDM) is used to map teleconnections from ocean-atmosphere data, or key state variables from numerical integrations of Ocean-Atmosphere General Circulation Models, to simulate daily sequences at multiple rain gauges. The method presented is general and is applied here to the wet season in Korea, i.e., JJA (June-July-August). The sequences of weather states identified by the EM algorithm are shown to correspond to dominant synoptic-scale features of rainfall-generating mechanisms. Applications of the methodology to seasonal rainfall forecasts using empirical teleconnections and GCM-derived climate forecasts are discussed.


Towards Resource-Generative Skyscrapers

  • Imam, Mohamed;Kolarevic, Branko
    • International Journal of High-Rise Buildings
    • /
    • v.7 no.2
    • /
    • pp.161-170
    • /
    • 2018
  • Rapid urbanization, resource depletion, and limited land are further increasing the need for skyscrapers in city centers; therefore, it is imperative to enhance tall building performance efficiency and energy-generative capability. Potential performance improvements can be explored using parametric multi-objective optimization, aided by evaluation tools, such as computational fluid dynamics and energy analysis software, to visualize and explore skyscrapers' multi-resource, multi-system generative potential. An optimization-centered, software-based design platform can potentially enable the simultaneous exploration of multiple strategies for the decreased consumption and large-scale production of multiple resources. Resource Generative Skyscrapers (RGS) are proposed as a possible solution to further explore and optimize the generative potentials of skyscrapers. RGS can be optimized with waste-energy-harvesting capabilities by capitalizing on passive features of integrated renewable systems. This paper describes various resource-generation technologies suitable for a synergetic integration within the RGS typology, and the software tools that can facilitate exploration of their optimal use.

Multi-resolution Fusion Network for Human Pose Estimation in Low-resolution Images

  • Kim, Boeun;Choo, YeonSeung;Jeong, Hea In;Kim, Chung-Il;Shin, Saim;Kim, Jungho
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.7
    • /
    • pp.2328-2344
    • /
    • 2022
  • 2D human pose estimation still faces difficulty in low-resolution images. Most existing top-down approaches scale the target human bounding-box image up to a large size and feed the scaled image to the network. Due to the up-sampling, artifacts occur in low-resolution target images, and the degraded images adversely affect accurate estimation of the joint positions. To address this issue, we propose a multi-resolution input feature fusion network for human pose estimation. Specifically, the bounding-box image of the target human is rescaled to multiple input images of various sizes, and the features extracted from the multiple images are fused in the network. Moreover, we introduce a guiding channel that induces the multi-resolution input features to affect the network differently according to the resolution of the target image. We conduct experiments on the MS COCO dataset, a representative dataset for 2D human pose estimation, where our method achieves superior performance compared to the strong HRNet baseline and previous state-of-the-art methods.
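The multi-resolution input construction described above can be sketched as follows; nearest-neighbour resizing stands in for the paper's rescaling, and encoding the crop's native resolution as a constant guiding plane is an assumption of this sketch, not the paper's exact design:

```python
import numpy as np

def resize_nn(img, size):
    """Nearest-neighbour resize of a square-ish grayscale crop."""
    h, w = img.shape
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    return img[np.ix_(ys, xs)]

def multi_resolution_stack(crop, sizes=(32, 64, 128)):
    """Rescale one person crop to several sizes, upsample each back to
    the largest size, and stack them with a guiding channel that
    encodes the crop's native resolution relative to the largest size."""
    top = max(sizes)
    planes = [resize_nn(resize_nn(crop, s), top) for s in sizes]
    guide = np.full((top, top), min(crop.shape) / top)
    return np.stack(planes + [guide])
```

A network consuming this stack sees both the degraded low-resolution views and a signal telling it how much to trust each.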

Damage Detection of CFRP-Laminated Concrete based on a Continuous Self-Sensing Technology (셀프센싱 상시계측 기반 CFRP보강 콘크리트 구조물의 손상검색)

  • Kim, Young-Jin;Park, Seung-Hee;Jin, Kyu-Nam;Lee, Chang-Gil
    • Land and Housing Review
    • /
    • v.2 no.4
    • /
    • pp.407-413
    • /
    • 2011
  • This paper reports a novel structural health monitoring (SHM) technique for detecting de-bonding between a concrete beam and CFRP (Carbon Fiber Reinforced Polymer) sheet that is attached to the concrete surface. To achieve this, a multi-scale actuated sensing system with a self-sensing circuit using piezoelectric active sensors is applied to the CFRP laminated concrete beam structure. In this self-sensing based multi-scale actuated sensing, one scale provides a wide frequency-band structural response from the self-sensed impedance measurements and the other scale provides a specific frequency-induced structural wavelet response from the self-sensed guided wave measurement. To quantify the de-bonding levels, the supervised learning-based statistical pattern recognition was implemented by composing a two-dimensional (2D) plane using the damage indices extracted from the impedance and guided wave features.
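The final classification step, placing damage indices from the two sensing scales on a 2-D plane, can be illustrated with a nearest-centroid decision; the abstract does not specify the pattern-recognition model, so this is a stand-in sketch with hypothetical labels and centroids:

```python
import numpy as np

def classify_debonding(sample, centroids):
    """Nearest-centroid decision on the 2-D damage-index plane.

    sample: (impedance index, guided-wave index) for one measurement.
    centroids: dict mapping a de-bonding level label to its mean
    (impedance, guided-wave) index pair from supervised training data.
    """
    labels = list(centroids)
    dists = [np.hypot(*(np.asarray(sample) - np.asarray(centroids[k])))
             for k in labels]
    return labels[int(np.argmin(dists))]
```

In the paper's setting, the centroids would come from labeled measurements at known de-bonding levels between the concrete beam and the CFRP sheet.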

A Multi-Scale Parallel Convolutional Neural Network Based Intelligent Human Identification Using Face Information

  • Li, Chen;Liang, Mengti;Song, Wei;Xiao, Ke
    • Journal of Information Processing Systems
    • /
    • v.14 no.6
    • /
    • pp.1494-1507
    • /
    • 2018
  • Intelligent human identification using face information has been a research hotspot, with applications ranging from Internet of Things (IoT) services, intelligent self-service banking, and intelligent surveillance to public safety and intelligent access control. Since 2D face images are usually captured from a long distance in an unconstrained environment, fully exploiting this advantage and making human recognition appropriate for wider intelligent applications with higher security and convenience faces several key difficulties: gray-scale change caused by illumination variance; occlusion caused by glasses, hair, or a scarf; and self-occlusion and deformation caused by pose or expression variation. Many solutions have been proposed to overcome these, but most of them improve recognition performance under only one influence factor, which still cannot meet real face recognition scenarios. In this paper, we propose a multi-scale parallel convolutional neural network architecture to extract deep, robust facial features with high discriminative ability. Extensive experiments are conducted on the CMU-PIE, extended FERET, and AR databases, and the results show that the proposed algorithm exhibits excellent discriminative ability compared with other existing algorithms.
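The multi-scale parallel idea, running filters of different receptive-field sizes side by side and combining their responses, can be sketched without a deep-learning framework; the averaging kernels, branch pooling, and kernel sizes here are illustrative assumptions rather than the paper's architecture:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Plain valid-mode 2-D convolution of a grayscale image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def multi_scale_features(img, kernel_sizes=(3, 5, 7)):
    """Run averaging kernels of several sizes as parallel branches and
    max-pool each branch to one value, then concatenate -- the
    parallel multi-scale pattern in miniature."""
    feats = []
    for k in kernel_sizes:
        kernel = np.full((k, k), 1.0 / (k * k))
        feats.append(conv2d_valid(img, kernel).max())
    return np.array(feats)
```

In the actual network, each branch would be a stack of learned convolutions rather than a fixed averaging kernel, but the fan-out/concatenate structure is the same.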