• Title/Summary/Keyword: temporal fusion

Search results: 102

An Efficient Data Fusion Mechanism on Wireless Sensor Networks (센서네트워크 환경에서 효율적인 데이터 퓨전 기법)

  • Choi, Kyung;Park, Kyung-Ran;Chae, Ki-Joon;Park, Jong-Jun;Joo, Seong-Soon
    • Proceedings of the Korea Information Processing Society Conference / 2009.04a / pp.1260-1263 / 2009
  • Various methods have been proposed to reduce energy consumption and improve efficiency in order to extend the lifetime of sensor networks, whose nodes run on limited batteries. Because data transmission is the most energy-consuming activity of a sensor node, one line of research seeks to cut energy consumption by reducing the amount of data transmitted. Reflecting these characteristics, this paper proposes an efficient data fusion scheme that considers temporal and spatial coherency together, building on TiNA (Temporal coherency-aware in-Network Aggregation), a data fusion scheme that considers temporal coherency only.
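The core of a TiNA-style scheme is a per-node temporal coherency filter: a node reports a new reading only when it differs from the last transmitted value by more than a tolerance, so slowly changing data generate little traffic. Below is a minimal sketch of that filter, assuming a relative tolerance parameter tct; the spatial extension the paper proposes is not shown, and all names are illustrative.

```python
class TinaFilter:
    """Suppress a report unless the reading differs from the last
    transmitted value by more than a relative tolerance (tct)."""

    def __init__(self, tct: float = 0.1):
        self.tct = tct          # temporal coherency tolerance (fraction)
        self.last_sent = None   # last value actually transmitted

    def should_transmit(self, reading: float) -> bool:
        if self.last_sent is None or \
           abs(reading - self.last_sent) > self.tct * abs(self.last_sent):
            self.last_sent = reading
            return True
        return False            # within tolerance: skip the transmission

readings = [20.0, 20.1, 20.2, 23.0, 23.1, 19.0]
node = TinaFilter(tct=0.1)
print([r for r in readings if node.should_transmit(r)])  # [20.0, 23.0, 19.0]
```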

A New Covert Visual Attention System by Object-based Spatiotemporal Cues and Their Dynamic Fusioned Saliency Map (객체기반의 시공간 단서와 이들의 동적결합 된돌출맵에 의한 상향식 인공시각주의 시스템)

  • Cheoi, Kyungjoo
    • Journal of Korea Multimedia Society / v.18 no.4 / pp.460-472 / 2015
  • Most previous visual attention systems find attention regions based on a saliency map combined from multiple extracted features; these systems differ in how the features are extracted and combined. This paper presents a new system that improves the feature extraction methods for color and motion and the weight decision method for spatial and temporal features. Our system dynamically extracts the one color with the strongest response among two opponent colors, and detects moving objects rather than moving pixels. To combine the spatial and temporal features, the proposed system sets the weights dynamically according to each feature's relative activity. Comparative results show that the suggested feature extraction and integration methods improve the detection rate of attention regions.
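The dynamic weight decision described above can be pictured as follows: each feature map contributes to the fused saliency map in proportion to its relative activity. The activity measure used here (the map's mean) and the shapes are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def fuse_saliency(feature_maps):
    """Weight each feature map by its relative activity, then sum
    into one saliency map (illustrative sketch)."""
    activities = np.array([m.mean() for m in feature_maps])
    weights = activities / activities.sum()   # dynamic, data-driven weights
    fused = sum(w * m for w, m in zip(weights, feature_maps))
    return fused / fused.max()                # normalize to [0, 1]

color_map = np.random.rand(64, 64)   # toy spatial (color) feature map
motion_map = np.random.rand(64, 64)  # toy temporal (motion) feature map
saliency = fuse_saliency([color_map, motion_map])
```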

Fashion Style and Sensibility Fusion Effect of Fashion Icons in the 21st Century (21세기 패션아이콘의 패션 스타일과 감성적 융합작용)

  • Park, Song-Ae
    • Journal of the Korea Fashion and Costume Design Association / v.15 no.3 / pp.109-118 / 2013
  • Fashion icons of the 21st century are not only fashion leaders who show fashion trends but also typical fashion signs or symbols that visually show changes in sensibility trends. The purpose of this study was to analyze, through these changes, the framework by which the public comes to recognize 21st-century fashion. The background of the emergence of various 21st-century fashion icons and their characteristics were investigated, and the changes in their visible features and symbolic meanings were examined in comparison with those of the 20th century. Twenty-four celebrities who have been called the best fashion icons since 2000 were selected by searching popular search engines such as Daum, Yahoo, and Google, and 13 of them were chosen as the highest in preference and awareness through a survey of 50 fashion-major students. Their fashion styles, backgrounds, and influence on public fashion were then studied. As a result, the 21st-century fashion icons, reflecting cultural characteristics such as convergence and exaggeration and the sensibilities of fusion, collaboration, and hybridity in art, were powerful enough to create innovative styles that broke with era and standard. Their styles have constantly created new looks. The new individual sensibilities exposed in the media, fusing two or more sensibilities and coordination techniques without being tied to the existing anchorage system, were as influential as high fashion and led to imitation and reproduction by dazzling the public. As the media became more powerful, the influence of fashion icons interacted more closely with the public and evolved through sensibilities of reversal and of cultural, economic, visual, or temporal fusion. In sum, the outstanding fashion styles suggested by leading fashion designers have reached the public more closely through fashion icons.


Fine Registration between Very High Resolution Satellite Images Using Registration Noise Distribution (등록오차 분포특성을 이용한 고해상도 위성영상 간 정밀 등록)

  • Han, Youkyung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.35 no.3 / pp.125-132 / 2017
  • Even after image registration, Very High Resolution (VHR) multi-temporal images acquired from different optical satellite sensors such as IKONOS, QuickBird, and Kompsat-2 show local misalignment due to dissimilarities in sensor properties and acquisition conditions. As this local misalignment, also referred to as Registration Noise (RN), is likely to have a negative impact on multi-temporal information extraction, detecting and reducing RN can improve multi-temporal image processing performance. In this paper, an approach to fine registration between VHR multi-temporal images that considers the local distribution of RN is proposed. Since the dominant RN mainly exists along the boundaries of objects, we use edge information in high-frequency regions to identify it. To validate the proposed approach, datasets are built from VHR multi-temporal images acquired by optical satellite sensors. Both qualitative and quantitative assessments confirm the effectiveness of the proposed RN-based fine registration approach compared with manual registration.
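Because the paper localizes RN along object boundaries, a rough way to picture the idea is to compare edge maps of the two registered images and flag boundary pixels where they disagree. The sketch below uses OpenCV's Canny detector; the file names, thresholds, and the disagreement test are assumptions, not the paper's actual procedure.

```python
import cv2
import numpy as np

# Two coarsely registered VHR images (illustrative file names).
ref = cv2.imread("reference.tif", cv2.IMREAD_GRAYSCALE)
tgt = cv2.imread("registered_target.tif", cv2.IMREAD_GRAYSCALE)
edges_ref = cv2.Canny(ref, 100, 200)
edges_tgt = cv2.Canny(tgt, 100, 200)

# Reference edge pixels with no target edge nearby suggest residual
# local misalignment (registration noise) along object boundaries.
near_tgt = cv2.dilate(edges_tgt, np.ones((5, 5), np.uint8))
rn_candidates = cv2.bitwise_and(edges_ref, cv2.bitwise_not(near_tgt))
print("candidate RN pixels:", int(np.count_nonzero(rn_candidates)))
```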

A Study on the Classification of Fault Motors using Sound Data (소리 데이터를 이용한 불량 모터 분류에 관한 연구)

  • Chang, Il-Sik;Park, Gooman
    • Journal of Broadcast Engineering / v.27 no.6 / pp.885-896 / 2022
  • Motor failure in manufacturing plays an important role in future after-sales service and reliability. Motor failure is detected by measuring sound, current, and vibration. The data used in this paper are sounds recorded from the gearbox of a car side-mirror motor, and the motor sounds fall into three classes. The sound data are converted to Mel spectrograms and then input to the network model. To improve fault-motor classification performance, data augmentation was applied, and several methods for handling class imbalance were compared: resampling, reweighting, changing the loss function, and a two-stage scheme of representation learning followed by classification. In addition, curriculum learning and self-paced learning were compared across five network models (Bidirectional LSTM Attention, Convolutional Recurrent Neural Network, Multi-Head Attention, Bidirectional Temporal Convolution Network, and Convolutional Neural Network), and the optimal configuration for motor sound classification was found.
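The Mel-spectrogram front end mentioned in the abstract can be sketched with librosa; the file name, sample rate, and STFT/Mel parameters below are illustrative assumptions rather than the paper's settings.

```python
import librosa
import numpy as np

# Load a motor recording and convert it to a log-Mel spectrogram.
y, sr = librosa.load("motor_recording.wav", sr=16000)
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024,
                                     hop_length=256, n_mels=64)
log_mel = librosa.power_to_db(mel, ref=np.max)  # shape: (64, n_frames)
# log_mel is then fed to the classification network as a 2-D input.
```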

Overexpression of ginseng UGT72AL1 causes organ fusion in the axillary leaf branch of Arabidopsis

  • Nguyen, Ngoc Quy;Lee, Ok Ran
    • Journal of Ginseng Research / v.41 no.3 / pp.419-427 / 2017
  • Background: Glycosylation of natural compounds increases the diversity of secondary metabolites. Glycosylation steps are implicated not only in plant growth and development but also in plant defense responses. Although the activities of uridine-dependent glycosyltransferases (UGTs) have long been recognized, and genes encoding them in several higher plants have been identified, the specific functions of UGTs in planta remain largely unknown. Methods: Spatial and temporal patterns of gene expression were analyzed by quantitative reverse transcription polymerase chain reaction (qRT-PCR) and GUS histochemical assay. Transgenic lines in heterologous Arabidopsis were generated by floral dipping using Agrobacterium tumefaciens (C58C1). Protein localization was analyzed by confocal microscopy via fluorescent protein tagging. Results: PgUGT72AL1 was highly expressed in the rhizome, upper root, and youngest leaf compared with the other organs. GUS staining of the promoter:GUS fusion lines revealed high expression in various organs, including the axillary leaf branch. Overexpression of PgUGT72AL1 resulted in fused organs in the axillary leaf branch. Conclusion: PgUGT72AL1, which is phylogenetically close to PgUGT71A27, is involved in the production of ginsenoside compound K. Considering that compound K is not reported in raw ginseng material, further characterization of this gene may shed light on the biological function of ginsenosides in ginseng plant growth and development. The organ fusion phenotype could be caused by defective growth of cells in the boundary region, which is commonly regulated by phytohormones such as auxins or brassinosteroids, and requires further analysis.

Secured Authentication through Integration of Gait and Footprint for Human Identification

  • Murukesh, C.;Thanushkodi, K.;Padmanabhan, Preethi;Feroze, Naina Mohamed D.
    • Journal of Electrical Engineering and Technology / v.9 no.6 / pp.2118-2125 / 2014
  • Gait recognition is a technique for identifying people by the way they walk. Human gait is a spatio-temporal phenomenon that typifies the motion characteristics of an individual. The proposed method makes a simple but efficient attempt at gait recognition. For each video file, the spatial silhouettes of a walker are extracted by an improved background subtraction procedure using a Gaussian Mixture Model (GMM), where the GMM serves as a parametric probability density function represented as a weighted sum of Gaussian component densities. Relevant features are then extracted from the tracked silhouettes using Principal Component Analysis (PCA). A Fisher Linear Discriminant Analysis (FLDA) classifier is applied to the dimension-reduced features for gait recognition. Although gait images can be easily acquired, gait recognition is affected by clothing, shoes, carrying status, and the specific physical condition of an individual. To overcome this, gait is combined with the footprint in a multimodal biometric system: minutiae are extracted from the footprint and fused with the silhouette image using the Discrete Stationary Wavelet Transform (DSWT). Experimental results show that the proposed fusion algorithm works well and attains better results than other fusion schemes.
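A rough sketch of the silhouette-and-features pipeline follows, with OpenCV's MOG2 subtractor (a per-pixel GMM) standing in for the paper's improved GMM procedure; the file name, resolution, and parameters are assumptions.

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA

cap = cv2.VideoCapture("walker.avi")
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

silhouettes = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)               # GMM foreground mask
    mask = cv2.medianBlur(mask, 5)               # suppress speckle noise
    silhouettes.append(cv2.resize(mask, (64, 64)).flatten())
cap.release()

X = np.asarray(silhouettes, dtype=np.float32)
X_reduced = PCA(n_components=20).fit_transform(X)  # PCA feature extraction
# With labels y, the FLDA step would be, e.g.:
# sklearn.discriminant_analysis.LinearDiscriminantAnalysis().fit(X_reduced, y)
```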

A Video Expression Recognition Method Based on Multi-mode Convolution Neural Network and Multiplicative Feature Fusion

  • Ren, Qun
    • Journal of Information Processing Systems / v.17 no.3 / pp.556-570 / 2021
  • Existing video expression recognition methods mainly focus on spatial feature extraction from video expression images but tend to ignore the dynamic features of video sequences. To solve this problem, a multi-mode convolutional neural network method is proposed to effectively improve the performance of facial expression recognition in video. First, OpenFace 2.0 is used to detect face images in the video, and two deep convolutional neural networks are used to extract spatiotemporal expression features: a spatial convolutional neural network extracts the spatial features of each static expression image, while a temporal convolutional neural network extracts dynamic features from the optical flow of multiple expression images. The spatiotemporal features learned by the two networks are then fused by multiplication. Finally, the fused features are input into a support vector machine to perform facial expression classification. Experimental results show that the recognition accuracy of the proposed method reaches 64.57% and 60.89% on the RML and BAUM-1s datasets, respectively, better than the other compared methods.
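The multiplicative fusion step has a simple form: the spatial and temporal feature vectors are combined by element-wise product before classification. The sketch below stubs the two CNN feature extractors with random vectors; all dimensions and labels are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

n_clips, dim = 100, 256
spatial_feats = np.random.rand(n_clips, dim)   # stand-in for spatial CNN
temporal_feats = np.random.rand(n_clips, dim)  # stand-in for temporal CNN
labels = np.random.randint(0, 6, n_clips)      # e.g., 6 expression classes

fused = spatial_feats * temporal_feats         # multiplicative fusion
clf = SVC(kernel="rbf").fit(fused, labels)     # SVM on fused features
print(clf.predict(fused[:5]))
```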

Multimodal audiovisual speech recognition architecture using a three-feature multi-fusion method for noise-robust systems

  • Sanghun Jeon;Jieun Lee;Dohyeon Yeo;Yong-Ju Lee;SeungJun Kim
    • ETRI Journal / v.46 no.1 / pp.22-34 / 2024
  • Exposure to varied noisy environments impairs the recognition performance of artificial intelligence-based speech recognition technologies. Services with degraded performance can be offered as limited systems that assure good performance only in certain environments, but this impairs the general quality of speech recognition services. This study introduces an audiovisual speech recognition (AVSR) model that is robust to various noise settings, mimicking the elements of human dialogue recognition. The model converts word embeddings and log-Mel spectrograms into feature vectors for audio recognition, and a dense spatial-temporal convolutional neural network model extracts features for visual-based recognition. This approach exhibits improved aural and visual recognition capabilities. We assess performance across signal-to-noise ratios in nine synthesized noise environments, with the proposed model exhibiting lower average error rates. The error rate of the AVSR model using the three-feature multi-fusion method is 1.711%, compared with the general rate of 3.939%. This model is applicable in noise-affected environments owing to its enhanced stability and recognition rate.
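As one way to picture a "three-feature multi-fusion" step, per-frame audio, visual, and word-embedding vectors can be fused, for example by concatenation, before decoding. Everything below (dimensions, the fusion operator) is an assumption for illustration; the paper's fusion method may differ.

```python
import numpy as np

def fuse(audio_feat, visual_feat, word_feat):
    # Simple late fusion by concatenation along the feature axis.
    return np.concatenate([audio_feat, visual_feat, word_feat], axis=-1)

audio = np.random.rand(75, 128)   # log-Mel-derived features per frame
visual = np.random.rand(75, 256)  # lip-region CNN features per frame
words = np.random.rand(75, 64)    # word-embedding features per frame
fused = fuse(audio, visual, words)  # shape (75, 448)
```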

Application of a Deep Learning Method on Aerial Orthophotos to Extract Land Categories

  • Won, Taeyeon;Song, Junyoung;Lee, Byoungkil;Pyeon, Mu Wook;Sa, Jiwon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.5 / pp.443-453 / 2020
  • An automatic land-category extraction method is proposed, and its accuracy is evaluated by learning the aerial-photo characteristics of each land category in a border area subject to various restrictions on the acquisition of geospatial data. As experimental data, this study used four years' worth of published aerial photos together with serial cadastral maps from the same period. Evaluating land-category extraction after learning features from different temporal and spatial ranges of aerial photos showed that extraction accuracy improved as the temporal and spatial ranges increased. Moreover, the greater the diversity and quantity of the training images, the less the results were affected by the quality of the images at the specific time to be extracted, generally demonstrating accurate and practical land-category feature extraction.