• Title/Abstract/Keyword: Spatiotemporal expressions

Search results: 7

시공간 데이터를 위한 공간 및 시간 관계 연산자의 통합 (An Integration of Spatial and Temporal Relationship Operators for Spatiotemporal Data)

  • 이종연;류근호
    • 한국정보처리학회논문지 / Vol. 6, No. 1 / pp.21-31 / 1999
  • This paper studies the combined use of spatial operators and temporal operators for integrated operations on spatiotemporal data. The integration of spatiotemporal operations means the common use of spatiotemporal topological relationship operators through spatiotemporal reference macros. The paper also proposes an integrated algorithm for history operators, which retrieve the history information of geographic objects, and temporal relationship comparison operators. The proposed extended algorithms are implemented on top of an existing GIS (Geographic Information System) spatial database and are evaluated through example spatiotemporal query expressions. The integration of spatiotemporal operators studied here provides a useful infrastructure for supporting unified spatiotemporal queries. (See the sketch following this entry.)

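The abstract above describes integrating spatial topological operators with temporal (history) comparison operators into single spatiotemporal predicates. The Python sketch below is only an illustration of that idea under simplifying assumptions (axis-aligned bounding boxes for the spatial test, an Allen-style OVERLAPS test for the temporal part); the class and function names are invented for the example and are not the paper's actual operators or algorithm.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Interval:
    """Closed time interval, e.g. in years."""
    start: int
    end: int

    def overlaps(self, other: "Interval") -> bool:
        # Temporal relationship operator: do the two intervals intersect?
        return self.start <= other.end and other.start <= self.end

@dataclass
class SpatioTemporalObject:
    """A geographic object with a bounding box and the interval it was valid."""
    name: str
    bbox: Tuple[float, float, float, float]  # (xmin, ymin, xmax, ymax)
    valid: Interval

def bbox_intersects(a, b) -> bool:
    # Spatial topological operator on axis-aligned bounding boxes.
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def st_intersects_during(a: SpatioTemporalObject, b: SpatioTemporalObject) -> bool:
    # Integrated spatiotemporal predicate: the objects intersect spatially
    # AND their histories overlap in time.
    return bbox_intersects(a.bbox, b.bbox) and a.valid.overlaps(b.valid)

road = SpatioTemporalObject("road-7", (0, 0, 10, 2), Interval(1995, 1999))
parcel = SpatioTemporalObject("parcel-12", (8, 1, 12, 5), Interval(1998, 2003))
print(st_intersects_during(road, parcel))  # True: spatial overlap during 1998-1999
```

In the paper's setting such predicates would be exposed inside the query language of an existing GIS spatial database rather than in application code as shown here.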

시공간 데이타 모델 : 이원 시간을 지원하는 삼차원 구조 (A Spatiotemporal Data Model : 3D Supporting BiTemporal Time)

  • 이성종;김동호;류근호
    • 한국정보과학회논문지:소프트웨어및응용 / Vol. 26, No. 10 / pp.1167-1167 / 1999
  • Although spatial databases support efficient spatial management of objects in the real world, they process only the spatial information that is valid at the current time. So when the spatial domain changes, it is very hard to support efficient historical management of time-varying spatial information, because the old value is deleted and replaced with the new value that is valid at the current time. To solve these problems, interest has been rapidly increasing in spatiotemporal databases, which provide historical functions for spatial information as well as spatial management functions for objects. However, most existing work treats time-varying spatial phenomena only abstractly and has not presented a concrete policy for spatiotemporal databases. In this paper, we propose a spatiotemporal data model that supports bitemporal time concepts in a three-dimensional architecture. In the proposed model, not only are the data types and their operations for spatiotemporal database objects classified, but mathematical expressions using formal semantics are also given for them. Then, the data structures and their operations based on the relational database model as well as the object-oriented database model are presented. (See the sketch following this entry.)
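The abstract refers to bitemporal time: each spatial fact carries both a valid-time interval (when it held in the real world) and a transaction-time interval (when the database believed it). The Python sketch below illustrates that idea only; the record layout, the update policy, and the as_of query are assumptions made for the example, not the paper's 3D data structures or operations.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class BitemporalVersion:
    """One version of an object's geometry with valid time and transaction time."""
    geometry: Tuple[float, float, float, float]  # bounding-box placeholder
    valid_from: int
    valid_to: Optional[int]        # None = open-ended in the real world
    tx_from: int
    tx_to: Optional[int] = None    # None = still part of current database belief

@dataclass
class SpatialObject:
    oid: str
    versions: List[BitemporalVersion] = field(default_factory=list)

    def update_geometry(self, new_geom, valid_from: int, tx_time: int) -> None:
        # Instead of overwriting, close the transaction time of the currently
        # believed open-ended version, insert a corrected copy whose valid time
        # ends at `valid_from`, and add the new version. Old states stay queryable.
        current = next((v for v in self.versions
                        if v.tx_to is None and v.valid_to is None), None)
        if current is not None:
            current.tx_to = tx_time
            self.versions.append(BitemporalVersion(
                current.geometry, current.valid_from, valid_from, tx_time, None))
        self.versions.append(BitemporalVersion(new_geom, valid_from, None, tx_time, None))

    def as_of(self, valid_time: int, tx_time: int) -> Optional[BitemporalVersion]:
        # Geometry that was valid at `valid_time`, as the database knew it at `tx_time`.
        for v in self.versions:
            in_valid = v.valid_from <= valid_time and (v.valid_to is None or valid_time < v.valid_to)
            in_tx = v.tx_from <= tx_time and (v.tx_to is None or tx_time < v.tx_to)
            if in_valid and in_tx:
                return v
        return None

parcel = SpatialObject("parcel-1")
parcel.update_geometry((0, 0, 5, 5), valid_from=1995, tx_time=1996)
parcel.update_geometry((0, 0, 8, 5), valid_from=1998, tx_time=1998)
print(parcel.as_of(valid_time=1996, tx_time=1999).geometry)  # (0, 0, 5, 5)
```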

Video Expression Recognition Method Based on Spatiotemporal Recurrent Neural Network and Feature Fusion

  • Zhou, Xuan
    • Journal of Information Processing Systems / Vol. 17, No. 2 / pp.337-351 / 2021
  • Automatically recognizing facial expressions in video sequences is a challenging task because there is little direct correlation between facial features and subjective emotions in video. To overcome this problem, a video facial expression recognition method using a spatiotemporal recurrent neural network and feature fusion is proposed. First, the video is preprocessed. Then, a double-layer cascade structure is used to detect the face in each video frame. In addition, two deep convolutional neural networks are used to extract the temporal-domain and spatial-domain facial features in the video. The spatial convolutional neural network extracts spatial information features from each frame of the static expression images in the video. The temporal convolutional neural network extracts dynamic information features from the optical flow computed over multiple frames of expression images in the video. Multiplicative fusion is then performed on the spatiotemporal features learned by the two deep convolutional neural networks. Finally, the fused features are input to a support vector machine to carry out the facial expression classification task. The experimental results on the eNTERFACE, RML, and AFEW6.0 datasets show that the recognition rates obtained by the proposed method are as high as 88.67%, 70.32%, and 63.84%, respectively. Comparative experiments show that the proposed method obtains higher recognition accuracy than other recently reported methods. (See the sketch following this entry.)
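The pipeline described above ends with multiplicative fusion of the two CNN feature streams followed by an SVM classifier. The sketch below illustrates just those last two steps, with random placeholder vectors standing in for the CNN outputs; the feature dimension, the RBF kernel, and the train/test split are assumptions for the example, not details taken from the paper.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-ins for the per-video feature vectors produced by the two CNN streams
# (spatial CNN on static frames, temporal CNN on optical flow).
n_videos, feat_dim, n_classes = 120, 256, 6
spatial_feats = rng.standard_normal((n_videos, feat_dim))
temporal_feats = rng.standard_normal((n_videos, feat_dim))
labels = rng.integers(0, n_classes, size=n_videos)

# Multiplicative fusion: element-wise product of the two feature streams.
fused = spatial_feats * temporal_feats

# Final facial expression classification with a support vector machine.
clf = SVC(kernel="rbf")
clf.fit(fused[:100], labels[:100])
print("toy accuracy:", clf.score(fused[100:], labels[100:]))
```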

Cloning and Spatiotemporal Expression Analysis of Bombyx mori elav, an Embryonic Lethal Abnormal Visual Gene

  • Wang, Geng-Xian;Liu, Ying;Sim, Yang-Hu;Zhang, Sheng-Xiang;Xu, Shi-Qing
    • International Journal of Industrial Entomology and Biomaterials / Vol. 18, No. 2 / pp.113-120 / 2009
  • Embryonic lethal abnormal visual (elav) is a lethal gene in Drosophila that induces abnormal development and function of the nervous system. We cloned a Bm-elav gene through bioinformatics and biological experiments, based on the ELAV protein sequence and the dbEST of Bombyx mori. The full-length Bm-elav cDNA is 1498 bp and contains a 906 bp open reading frame (ORF) encoding a precursor of 301 amino acid residues with a calculated molecular weight of 34 kDa and a pI of 8.99. The Bm-ELAV protein precursor contains three RNA recognition motifs (RRM) at amino acid residues 24~91, 110~177, and 222~295, respectively, and belongs to the RNA-binding protein family. Bm-ELAV shared varying positives, ranging from 56% to 60% (identities from 41% to 45%), with RRMs from other species, including Xenopus tropicalis, Apis mellifera, Tribolium castaneum, Branchiostoma belcheri, and Drosophila. Gene localization indicated that Bm-elav is a single-copy gene, mapping to chromosome 12 within the 7916.68 knt to 7918.16 knt region of nscaf2993. Spatiotemporal expression pattern analysis revealed that Bm-elav was expressed at higher levels in most tested tissues and developmental stages across the whole generation, such as the silk gland, fat body, midgut, hemopoietic organ, and ovary, but showed almost no expression in terminated diapause eggs. This suggests that the expression of Bm-elav in early embryonic developmental stages might induce abnormal development as in Drosophila. Cloning of the Bm-elav gene enables us to test its potential role in controlling pests by transferring the gene into field lepidopteran insects in the future.

Genome-wide identification of histone lysine methyltransferases and their implications in the epigenetic regulation of eggshell formation-related genes in a trematode parasite Clonorchis sinensis

  • Min-Ji Park;Woon-Mok Sohn;Young-An Bae
    • Parasites, Hosts and Diseases / Vol. 62, No. 1 / pp.98-116 / 2024
  • Epigenetic writers, including DNA and histone lysine methyltransferases (DNMTs and HKMTs, respectively), play an initiating role in the differentiation and development of eukaryotic organisms through the spatiotemporal regulation of functional gene expression. However, such epigenetic mechanisms have long been in question in helminth parasites lacking the major DNA methyltransferases DNMT1 and DNMT3a/3b. Very little information on the evolutionary status of these epigenetic tools and their role in regulating chromosomal genes is currently available for the parasitic trematodes. We previously suggested the probable role of a DNMT2-like protein (CsDNMT2) as a genuine epigenetic writer in the trematode parasite Clonorchis sinensis. Here, we analyzed the phylogeny of HKMT subfamily members in the liver fluke and other platyhelminth species. The platyhelminth genomes examined conserved genes for most of the SET domain-containing HKMT and Disruptor of Telomeric Silencing 1 subfamilies, while some genes were expanded specifically in certain platyhelminth genomes. Consistent with the high gene dosage for HKMT activities covering differential but somewhat overlapping substrate specificities, variously methylated histones were recognized throughout the tissues/organs of C. sinensis adults. The temporal expression of genes involved in eggshell formation gradually decreased to its lowest levels with aging, whereas that of some epigenetic tool genes was re-boosted in the later adult stages of the parasite. Furthermore, these expression levels were significantly affected by treatment with DNMT and HKMT inhibitors. Our data strongly suggest that methylated histones are potent epigenetic markers that modulate the spatiotemporal expression of C. sinensis genes, especially those involved in sexual reproduction.

Performance Analysis of Cellular Networks with D2D communication Based on Queuing Theory Model

  • Xin, Jianfang;Zhu, Qi;Liang, Guangjun;Zhang, Tiaojiao;Zhao, Su
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12, No. 6 / pp.2450-2469 / 2018
  • In this paper, we develop a spatiotemporal model to analyze cellular users in underlay D2D communication by using stochastic geometry and queuing theory. Firstly, by using stochastic geometry to model the user locations, we derive the probability that the SINR of a cellular user falls in a predefined interval, which constrains the corresponding transmission rate of the cellular user. Secondly, in contrast to previous studies with full traffic models, we employ queuing theory to evaluate the performance parameters of a dynamic traffic model and formulate the cellular user transmission mechanism as an M/G/1 queuing model. In the derivation, an embedded Markov chain is introduced to describe the stationary distribution of the cellular user queue status. Thirdly, expressions for the performance metrics in terms of mean queue length, mean throughput, mean delay, and mean dropping probability are obtained, respectively. Simulation results show the validity and rationality of the theoretical analysis under different channel conditions. (See the formulas following this entry.)
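For context, the mean-value results that an M/G/1 analysis of this kind typically builds on are the Pollaczek-Khinchine formulas shown below; the paper's own expressions, which further account for the SINR-dependent transmission rate and the dropping probability, are not reproduced here.

```latex
% Pollaczek-Khinchine mean values for an M/G/1 queue with
% arrival rate \lambda, service time S, and load \rho = \lambda E[S] < 1.
\begin{aligned}
  L &= \rho + \frac{\lambda^{2} E[S^{2}]}{2(1-\rho)} && \text{(mean number in system)} \\
  W &= E[S] + \frac{\lambda E[S^{2}]}{2(1-\rho)}     && \text{(mean delay; } L = \lambda W \text{ by Little's law)}
\end{aligned}
```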

Gesture-Based Emotion Recognition by 3D-CNN and LSTM with Keyframes Selection

  • Ly, Son Thai;Lee, Guee-Sang;Kim, Soo-Hyung;Yang, Hyung-Jeong
    • International Journal of Contents / Vol. 15, No. 4 / pp.59-64 / 2019
  • In recent years, emotion recognition has been an interesting and challenging topic. Compared to facial expressions and the speech modality, gesture-based emotion recognition has not received much attention, with only a few efforts using traditional hand-crafted methods. These approaches incur major computational costs and do not offer many opportunities for improvement, as most of the research community now conducts its work with deep learning techniques. In this paper, we propose an end-to-end deep learning approach for classifying emotions based on bodily gestures. In particular, informative keyframes are first extracted from the raw videos as input to the 3D-CNN deep network. The 3D-CNN exploits the short-term spatiotemporal information of gesture features from the selected keyframes, and convolutional LSTM networks learn long-term features from the outputs of the 3D-CNN. The experimental results on the FABO dataset exceed most traditional methods' results and achieve state-of-the-art results among deep learning-based techniques for gesture-based emotion recognition. (See the sketch following this entry.)
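The abstract states that informative keyframes are extracted from the raw videos before the 3D-CNN, but it does not spell out the selection criterion. The Python sketch below uses a generic motion-based heuristic (largest mean frame-to-frame difference) purely as an illustration; it is an assumption, not the paper's keyframe selection method.

```python
import numpy as np

def select_keyframes(video: np.ndarray, k: int) -> np.ndarray:
    """Pick k 'informative' frames as those with the largest change from the
    previous frame (mean absolute pixel difference).

    video: array of shape (num_frames, height, width, channels).
    Returns the selected frames in their original temporal order.
    """
    diffs = np.abs(np.diff(video.astype(np.float32), axis=0)).mean(axis=(1, 2, 3))
    scores = np.concatenate(([0.0], diffs))      # frame 0 has no predecessor
    keep = np.sort(np.argsort(scores)[-k:])      # top-k by motion, sorted by time
    return video[keep]

# Toy usage: 40 random "frames" of 32x32 RGB, keep 8 keyframes for the 3D-CNN.
rng = np.random.default_rng(0)
clip = rng.integers(0, 256, size=(40, 32, 32, 3), dtype=np.uint8)
keyframes = select_keyframes(clip, k=8)
print(keyframes.shape)  # (8, 32, 32, 3)
```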