• Title/Summary/Keyword: spatial features


2차원 마이크로폰 배열에 의한 능동 청각 시스템 (Active Audition System based on 2-Dimensional Microphone Array)

  • 이창훈;김용호
    • 대한전기학회:학술대회논문집
    • /
    • 대한전기학회 2003년도 학술회의 논문집 정보 및 제어부문 A
    • /
    • pp.175-178
    • /
    • 2003
  • This paper describes an active audition system for a robot-human interface in real environments. We propose a strategy for robust sound localization and for far-talking speech recognition (60-300 cm) based on a 2-dimensional microphone array. We consider spatial features, namely the relation between source position and interaural time differences, and realize a speaker tracking system using a fuzzy inference process based on inference rules generated from these spatial features.

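The abstract above relies on interaural time differences (ITDs) between microphone pairs to localize a speaker. As a rough illustration of that idea only (not the paper's implementation), the following sketch estimates the ITD of a two-microphone pair by cross-correlation and converts it to a bearing angle under a far-field assumption; all names, the microphone spacing, and the test signal are hypothetical.

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Delay (s) of the right channel relative to the left (positive = right lags)."""
    corr = np.correlate(right, left, mode="full")
    lag = int(np.argmax(corr)) - (len(left) - 1)
    return lag / fs

def itd_to_azimuth(itd, mic_distance=0.2, speed_of_sound=343.0):
    """Far-field approximation: map an ITD to a bearing angle in degrees."""
    sin_theta = np.clip(itd * speed_of_sound / mic_distance, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))

if __name__ == "__main__":
    fs = 16000
    rng = np.random.default_rng(0)
    left = rng.standard_normal(fs // 10)   # 100 ms of broadband "speech"
    right = np.roll(left, 5)               # simulate a 5-sample inter-channel delay
    itd = estimate_itd(left, right, fs)
    print(f"ITD = {itd * 1e3:.3f} ms, azimuth ~ {itd_to_azimuth(itd):.1f} deg")
```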

공간정보의 탐색과정에 나타난 시각정보획득특성에 관한 연구 - 지하철 홀 공간의 주시실험을 대상으로 - (A Study on the Features of Visual-Information Acquirement Shown at Searching of Spatial Information - With the Experiment of Observing the Space of Hall in Subway Station -)

  • 김종하
    • 한국실내디자인학회논문집
    • /
    • Vol. 23 No. 2
    • /
    • pp.90-98
    • /
    • 2014
  • This study analyzed the meaning of observation time in the course of information acquisition by subjects who observed the hall space of subway stations, in order to clarify how spatial information is excluded and what characterizes intensive searching. The following results were obtained from the analysis of the searching process, interpreting information acquisition through observation areas and times. First, based on the general definition of observation time, the rationale for analyzing the features of spatial-information acquisition according to the subjects' observation time was established. The decrease in analysis data reflected the decrease in observation time during the process of perceiving and recognizing spatial information, and showed that observation concentrated on the center of the space, with considerable exclusion of the bottom edge (in particular, the bottom right). Second, while observing the hall space, the subjects focused most on the upper left-center area and the signs at the right exit, followed by both sides horizontally and the clock at the top. Third, the analysis of consecutive observation frequency enabled a comparison of changes in observation concentration by area, and the differences in time by area yielded data from which changes in the contents of spatial searching during the search process could be identified. Fourth, as the observation frequency in area I increased from three to six to nine times, the observation time in that area increased, showing the shift from perception to recognition of information through the concentration of attention on visual information. This makes it possible to understand that more time was spent on the information to be acquired while unnecessary surrounding information was excluded.
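As a loose illustration of the area-by-time analysis described above (not the study's actual procedure or data), the sketch below aggregates hypothetical eye-tracking fixations into observation time per screen area using a simple 3x3 grid; column names and values are invented.

```python
import pandas as pd

# Hypothetical fixation log: x/y in normalized screen coordinates, duration in ms.
fixations = pd.DataFrame({
    "x":        [0.12, 0.45, 0.48, 0.81, 0.83, 0.50],
    "y":        [0.20, 0.22, 0.25, 0.18, 0.75, 0.55],
    "duration": [180, 240, 320, 150, 90, 410],
})

# Divide the image into a 3x3 grid of areas (1..9, left-to-right, top-to-bottom).
fixations["col"] = (fixations["x"] * 3).astype(int).clip(0, 2)
fixations["row"] = (fixations["y"] * 3).astype(int).clip(0, 2)
fixations["area"] = fixations["row"] * 3 + fixations["col"] + 1

# Total observation time and fixation count per area.
summary = fixations.groupby("area")["duration"].agg(["sum", "count"])
print(summary)
```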

범죄발생지점의 공간적 특성분석을 통한 인위적 감시지역의 선정 (A Selection of Artificial Surveillance Zone through the Spatial Features Analysis of Crime Occurrence Place)

  • 김동문;박재국
    • 대한공간정보학회지
    • /
    • Vol. 18 No. 3
    • /
    • pp.83-90
    • /
    • 2010
  • In modern society, crimes of all kinds occur frequently as a result of rapid and complex changes in the urban environment, and the demand for protecting people's lives and property is growing. To meet this demand, those responsible for public safety in urban areas are expanding the role and functions of the police for efficient crime prevention and surveillance, despite shortages of police personnel and heavy workloads. Recently, systems for effective crime surveillance and prevention have been introduced using artificial surveillance tools such as CCTV, which can intensively monitor a given area 24 hours a day; however, problems have arisen from the lack of systematic criteria for installing such tools and from privacy infringement. This study therefore selected artificial surveillance zones suitable for crime monitoring by using artificial surveillance tools such as CCTV, the spatial features of crime occurrence places, and GIS spatial analysis techniques. The results showed that the number of installed CCTV cameras is absolutely insufficient and that the existing installation locations do not adequately reflect the spatial distribution of crime.
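The study above combines crime point locations with GIS spatial analysis to rank candidate surveillance zones. A minimal sketch of one such technique, kernel density estimation over synthetic crime points, follows; the coordinates, bandwidth, and grid are illustrative assumptions, not the paper's data or method.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical crime occurrence points (projected x/y coordinates, metres).
rng = np.random.default_rng(0)
cluster_a = rng.normal(loc=(200, 300), scale=30, size=(40, 2))
cluster_b = rng.normal(loc=(700, 650), scale=50, size=(25, 2))
points = np.vstack([cluster_a, cluster_b]).T        # shape (2, n) for gaussian_kde

kde = gaussian_kde(points)

# Evaluate the density on a coarse grid and report the densest cells as candidate zones.
xs, ys = np.meshgrid(np.linspace(0, 1000, 50), np.linspace(0, 1000, 50))
density = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)
for idx in np.argsort(density.ravel())[::-1][:5]:
    i, j = np.unravel_index(idx, density.shape)
    print(f"candidate zone near x={xs[i, j]:.0f} m, y={ys[i, j]:.0f} m")
```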

Depth Images-based Human Detection, Tracking and Activity Recognition Using Spatiotemporal Features and Modified HMM

  • Kamal, Shaharyar;Jalal, Ahmad;Kim, Daijin
    • Journal of Electrical Engineering and Technology
    • /
    • Vol. 11 No. 6
    • /
    • pp.1857-1862
    • /
    • 2016
  • Human activity recognition using depth information is an emerging and challenging topic in computer vision that has attracted considerable attention from practical applications such as smart home/office systems, personal health care, and 3D video games. This paper presents a novel framework for 3D human body detection, tracking, and recognition from depth video sequences using spatiotemporal features and a modified HMM. Human silhouettes are detected from raw depth data by considering spatial continuity and constraints derived from human motion information, and frame differencing is used to track human movements. The feature extraction mechanism combines spatial depth shape features and temporal joint features to improve classification performance; both feature types are fused to recognize different activities using the modified hidden Markov model (M-HMM). The proposed approach is evaluated on two challenging depth video datasets. Moreover, the system can handle rotation and missing body parts, which constitutes a major contribution to human activity recognition.
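Two of the steps named in the abstract, frame differencing for tracking and fusion of spatial and temporal features, can be illustrated with the toy sketch below. It is an assumption-laden illustration, not the authors' M-HMM pipeline: the depth frames are synthetic and the fusion shown is a naive normalized concatenation.

```python
import numpy as np

def motion_mask(prev_depth, curr_depth, threshold=30):
    """Binary mask of pixels whose depth changed by more than `threshold` (depth units)."""
    return np.abs(curr_depth.astype(int) - prev_depth.astype(int)) > threshold

def fuse_features(spatial_feat, temporal_feat):
    """Naive fusion: concatenate L2-normalized spatial and temporal descriptors."""
    s = spatial_feat / (np.linalg.norm(spatial_feat) + 1e-8)
    t = temporal_feat / (np.linalg.norm(temporal_feat) + 1e-8)
    return np.concatenate([s, t])

if __name__ == "__main__":
    prev = np.random.randint(500, 4000, size=(240, 320)).astype(np.uint16)
    curr = prev.copy()
    curr[100:150, 120:180] += 200            # simulate a region that moved closer/farther
    mask = motion_mask(prev, curr)
    print("moving pixels:", int(mask.sum()))
    fused = fuse_features(np.random.rand(64), np.random.rand(60))
    print("fused feature length:", fused.shape[0])
```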

Video Expression Recognition Method Based on Spatiotemporal Recurrent Neural Network and Feature Fusion

  • Zhou, Xuan
    • Journal of Information Processing Systems
    • /
    • Vol. 17 No. 2
    • /
    • pp.337-351
    • /
    • 2021
  • Automatically recognizing facial expressions in video sequences is a challenging task because there is little direct correlation between facial features and subjective emotions in video. To overcome this problem, a video facial expression recognition method using a spatiotemporal recurrent neural network and feature fusion is proposed. First, the video is preprocessed, and a double-layer cascade structure is used to detect the face in each video frame. Two deep convolutional neural networks are then used to extract temporal and spatial facial features: the spatial network extracts spatial information features from each frame of the static expression images, while the temporal network extracts dynamic information features from the optical flow computed over multiple frames of expression images. The spatiotemporal features learned by the two networks are combined by multiplicative fusion. Finally, the fused features are fed to a support vector machine to perform the facial expression classification task. Experimental results on the eNTERFACE, RML, and AFEW6.0 datasets show that the recognition rates obtained by the proposed method reach 88.67%, 70.32%, and 63.84%, respectively. Comparative experiments show that the proposed method obtains higher recognition accuracy than other recently reported methods.
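The fusion-and-classification step described above can be sketched in a few lines: element-wise (multiplicative) fusion of two feature vectors followed by an SVM. The feature vectors below are random stand-ins for the CNN outputs, so the printed accuracy is meaningless; the sketch only shows the data flow.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_samples, n_dim, n_classes = 300, 128, 6

# Stand-ins for features produced by the spatial and temporal CNN streams.
spatial_feats = rng.normal(size=(n_samples, n_dim))
temporal_feats = rng.normal(size=(n_samples, n_dim))
labels = rng.integers(0, n_classes, size=n_samples)

# Element-wise (multiplicative) fusion of the two streams.
fused = spatial_feats * temporal_feats

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
print("toy accuracy:", clf.score(X_test, y_test))
```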

Two-stage Deep Learning Model with LSTM-based Autoencoder and CNN for Crop Classification Using Multi-temporal Remote Sensing Images

  • Kwak, Geun-Ho;Park, No-Wook
    • 대한원격탐사학회지
    • /
    • Vol. 37 No. 4
    • /
    • pp.719-731
    • /
    • 2021
  • This study proposes a two-stage hybrid classification model for crop classification using multi-temporal remote sensing images; the model combines feature embedding by an autoencoder (AE) with a convolutional neural network (CNN) classifier to fully utilize informative temporal and spatial signatures. A long short-term memory (LSTM)-based AE (LAE) is fine-tuned using class label information to extract latent features that contain less noise and useful temporal signatures. The CNN classifier is then applied to effectively account for the spatial characteristics of the extracted latent features. A crop classification experiment with multi-temporal unmanned aerial vehicle images is conducted to illustrate the potential application of the proposed hybrid model. The classification performance of the proposed model is compared with various combinations of conventional deep learning models (CNN, LSTM, and convolutional LSTM) and different inputs (original multi-temporal images and features from a stacked AE). In the crop classification experiment, the best classification accuracy was achieved by the proposed model, which used the latent features extracted by the fine-tuned LAE as input to the CNN classifier. The latent features, which contain useful temporal signatures and are less noisy, could increase the class separability between crops with similar spectral signatures, thereby leading to superior classification accuracy. The experimental results demonstrate the importance of effective feature extraction and the potential of the proposed classification model for crop classification using multi-temporal remote sensing images.
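A compressed sketch of the two-stage idea follows, using Keras/TensorFlow: an LSTM autoencoder compresses each pixel's temporal profile into latent features, which are then passed to a small CNN. Layer sizes, shapes, and the omission of the label-based fine-tuning step are all simplifying assumptions, not the authors' configuration.

```python
import numpy as np
from tensorflow.keras import layers, models

n_pixels, n_dates, n_bands, latent_dim = 1000, 12, 4, 16
x = np.random.rand(n_pixels, n_dates, n_bands).astype("float32")

# Stage 1: LSTM autoencoder over the temporal dimension.
encoder_in = layers.Input(shape=(n_dates, n_bands))
latent = layers.LSTM(latent_dim)(encoder_in)
decoded = layers.RepeatVector(n_dates)(latent)
decoded = layers.LSTM(n_bands, return_sequences=True)(decoded)
autoencoder = models.Model(encoder_in, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x, x, epochs=2, batch_size=64, verbose=0)

encoder = models.Model(encoder_in, latent)
latent_feats = encoder.predict(x, verbose=0)        # (n_pixels, latent_dim)

# Stage 2: the latent features would be re-assembled into spatial patches and fed
# to a CNN classifier; a tiny patch classifier is shown for shape only.
patches = latent_feats[: 64 * 9].reshape(64, 3, 3, latent_dim)
labels = np.random.randint(0, 5, size=64)
cnn = models.Sequential([
    layers.Input(shape=(3, 3, latent_dim)),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(5, activation="softmax"),
])
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
cnn.fit(patches, labels, epochs=2, verbose=0)
```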

EDMFEN: Edge detection-based multi-scale feature enhancement Network for low-light image enhancement

  • Canlin Li;Shun Song;Pengcheng Gao;Wei Huang;Lihua Bi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 18 No. 4
    • /
    • pp.980-997
    • /
    • 2024
  • The main objective of low-light image enhancement (LLIE) is to improve the brightness of images and reveal hidden information in dark areas. LLIE methods based on deep learning show good performance, but they have limitations: complex network models require highly configured environments, deficient enhancement of edge details blurs the target content, and single-scale feature extraction results in insufficient recovery of the hidden content of the enhanced images. This paper proposes an edge detection-based multi-scale feature enhancement network for LLIE (EDMFEN). To reduce the loss of edge details in the enhanced images, an edge extraction module consisting of a Sobel operator is introduced to obtain edge information by computing image gradients. In addition, a multi-scale feature enhancement module (MSFEM), consisting of a multi-scale feature extraction block (MSFEB) and a spatial attention mechanism, is proposed to thoroughly recover the hidden content of the enhanced images and obtain richer features. The MSFEB is introduced to obtain image features with different perceptual fields. Because the fused features may contain some useless information, a spatial attention module is applied after fusing the multi-scale features to retain the key features and improve model performance. Experimental results on two datasets and five baseline datasets show that EDMFEN performs well compared with state-of-the-art LLIE methods.
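The edge extraction module described above is built around the Sobel operator. A minimal sketch of that step, computing horizontal and vertical gradients and combining them into an edge map with OpenCV, is given below; the kernel size, normalization, and test image are illustrative choices, not the paper's settings.

```python
import numpy as np
import cv2

def sobel_edge_map(gray):
    """Gradient magnitude of a grayscale image via horizontal/vertical Sobel filters."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)
    return cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

if __name__ == "__main__":
    img = np.zeros((128, 128), dtype=np.uint8)
    cv2.rectangle(img, (32, 32), (96, 96), 200, thickness=-1)   # synthetic bright square
    edges = sobel_edge_map(img)
    print("edge pixels:", int((edges > 50).sum()))
```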

Quadtree를 사용한 색상-공간 특징과 객체 MBR의 질감 정보를 이용한 영상 검색 (Image Retrieval based on Color-Spatial Features using Quadtree and Texture Information Extracted from Object MBR)

  • 최창규;류상률;김승호
    • 한국정보과학회논문지:컴퓨팅의 실제 및 레터
    • /
    • Vol. 8 No. 6
    • /
    • pp.692-704
    • /
    • 2002
  • This paper proposes a method that extracts color-spatial features from an image using a quadtree and extracts texture information from the MBR (Minimum Boundary Rectangle) of objects contained in the image. The proposed method generates a DC image from each image, converts the color coordinate system, and then partitions the image into regions using a quadtree. Regions are split according to the proposed conditions, and a representative color is extracted from each partitioned region. In addition, image segmentation is used to obtain the MBR of each image's object, of the background including the object, or of part of the background, and the wavelet coefficients of the MBRs found by the proposed algorithm are computed. These coefficients serve as the texture information of the MBR, and retrieval results are produced through the proposed similarity computation using the extracted color-spatial and texture information. Compared with storing the original images, the proposed method reduced the storage space for feature information by 53% while showing similar performance. Furthermore, by adding texture information, the method compensated for the loss of object information, a weakness of color-spatial features, and returned retrieval results containing the object of the query image.
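The quadtree-based color-spatial extraction summarized above can be illustrated with a small recursive split: a region is divided into four quadrants while its color variance exceeds a threshold, and each leaf keeps its mean color. The split condition, thresholds, and test image here are assumptions for illustration, not the paper's proposed conditions.

```python
import numpy as np

def quadtree_colors(img, x, y, w, h, var_thresh=200.0, min_size=8, leaves=None):
    """Recursively split img[y:y+h, x:x+w]; collect (x, y, w, h, mean_color) leaves."""
    if leaves is None:
        leaves = []
    region = img[y:y + h, x:x + w].reshape(-1, img.shape[2])
    if region.var(axis=0).mean() <= var_thresh or min(w, h) <= min_size:
        leaves.append((x, y, w, h, region.mean(axis=0)))
        return leaves
    hw, hh = w // 2, h // 2
    for dx, dy, sw, sh in [(0, 0, hw, hh), (hw, 0, w - hw, hh),
                           (0, hh, hw, h - hh), (hw, hh, w - hw, h - hh)]:
        quadtree_colors(img, x + dx, y + dy, sw, sh, var_thresh, min_size, leaves)
    return leaves

if __name__ == "__main__":
    img = np.zeros((64, 64, 3), dtype=np.float32)
    img[:32, :32] = (255, 0, 0)              # one red quadrant, rest black
    for x, y, w, h, color in quadtree_colors(img, 0, 0, 64, 64):
        print(f"leaf at ({x},{y}) {w}x{h}, mean color {np.round(color, 1)}")
```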

Detection of Hotspots on Multivariate Spatial Data

  • Moon, Sung-Ho
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 17 No. 4
    • /
    • pp.1181-1190
    • /
    • 2006
  • Statistical analysis of spatial data is important in many fields. Spatial data are taken at specific locations or within specific regions, and their relative positions are recorded. Lattice data are synoptic observations covering an entire spatial region, such as cancer rates for each county in a state. Until now, echelon analysis has been applied only to univariate spatial data, so hotspots on multivariate spatial data could not be detected. In this paper, we extend the spatial data to a time-series structure, analyze them over the time space, and detect hotspots. An echelon dendrogram is constructed by stacking the multivariate spatial data over time to form temporal-spatial data, and we perform a structural analysis of these temporal-spatial data.

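Echelon analysis itself is more involved, but the underlying goal mentioned above, flagging lattice cells whose values stand out and grouping contiguous flagged cells into candidate hotspots, can be illustrated with the simple z-score sketch below. This is not the echelon algorithm; the rate matrix and threshold are invented for illustration.

```python
import numpy as np
from scipy import ndimage

# Hypothetical lattice data, e.g. disease rates on a 4x4 grid of regions.
rates = np.array([
    [1.0, 1.2, 1.1, 0.9],
    [1.3, 3.8, 4.1, 1.0],
    [1.1, 3.9, 1.2, 0.8],
    [0.9, 1.0, 1.1, 1.0],
])

z = (rates - rates.mean()) / rates.std()
candidates = z > 1.5                               # cells well above the overall mean
labels, n_hotspots = ndimage.label(candidates)     # contiguous groups of flagged cells
for k in range(1, n_hotspots + 1):
    cells = np.argwhere(labels == k)
    print(f"hotspot {k}: cells {cells.tolist()}, mean rate {rates[labels == k].mean():.2f}")
```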

아돌프 로스 단독주택의 공간구조 분석 연구 (An Analysis of the Spatial Configuration of Adolf Loos' House)

  • 이다연;전병권
    • 한국주거학회논문집
    • /
    • Vol. 27 No. 6
    • /
    • pp.85-93
    • /
    • 2016
  • Spaces vary in size depending on their function and significance as well as their geometric shape. The architect Adolf Loos (1870-1933) incorporated the unconstrained formation of space and the correlations between spaces into his designs; among modern architects, his spatial compositions notably reveal spaces that are unconstrained yet mutually related. This study analyzes the spatial structure of Adolf Loos' detached houses through space syntax, a quantitative method of spatial analysis. These houses emphasize the functional aspects of space without unnecessary decoration. Le Corbusier's Villa Savoye was analyzed alongside them to provide a comparative point of view on the characteristics of Loos' houses. On this basis, the features of the spatial structure of Loos' houses were examined in conjunction with his view of space as essence. The J-graph and VGA analyses of Adolf Loos' detached houses revealed a spatial structure in which the interior space lies deep from the exterior yet is integrated as a whole. They also revealed that the experiments with various spatial structures seen in Adolf Loos' detached houses and in the work of European rationalist architects such as Le Corbusier influenced one another.
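Space syntax measures such as those behind the J-graph analysis mentioned above are derived from depths in a room connectivity graph. The sketch below computes mean depth and the standard relative asymmetry RA = 2(MD - 1)/(k - 2) for a hypothetical house graph (lower RA indicates a more integrated space); the graph is invented and is not one of Loos' plans.

```python
import networkx as nx

# Hypothetical connectivity graph of rooms (edges = direct openings between spaces).
rooms = nx.Graph([
    ("exterior", "hall"), ("hall", "living"), ("hall", "stairs"),
    ("living", "dining"), ("dining", "kitchen"), ("stairs", "bedroom"),
])

k = rooms.number_of_nodes()
for room in rooms.nodes:
    depths = nx.single_source_shortest_path_length(rooms, room)
    mean_depth = sum(depths.values()) / (k - 1)
    ra = 2 * (mean_depth - 1) / (k - 2)
    print(f"{room:9s} mean depth {mean_depth:.2f}  RA {ra:.2f}")
```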