• Title/Summary/Keyword: video fusion


Effect of the Electric Field on the Plant Protoplasts During Cell Fusion (세포융합시 전계하에서 식물세포가 받는 영향에 관한연구)

  • Lee, Sang-Hoon;Lee, Yon-Min;Cha, Hyeon-Cheol
    • Journal of Biomedical Engineering Research / v.17 no.2 / pp.173-178 / 1996
  • The objective of this paper is to investigate the effect of an AC field on plant cell protoplasts. The results of this investigation will serve as the basis for the development of an electric cell fusion device. For the experiment, we built the electrodes and an AC/DC pulse generator, and observed the behavior of the protoplasts through an inverted microscope connected to a monitor and video recorder via a CCD camera. As a result, the numbers of rotating, moving, and destroyed protoplasts, as well as protoplast viability, are closely related to the amplitude of the AC field, while the rotation rate is closely related to the frequency of the AC pulse.


Intelligent User Pattern Recognition based on Vision, Audio and Activity for Abnormal Event Detections of Single Households

  • Jung, Ju-Ho;Ahn, Jun-Ho
    • Journal of the Korea Society of Computer and Information / v.24 no.5 / pp.59-66 / 2019
  • According to KT telecommunication statistics, people stay inside their houses for an average of 11.9 hours a day, and according to NSC statistics in the United States, people of all ages are injured in their homes for a variety of reasons. For the purposes of this research, we investigated an abnormal event detection algorithm to classify infrequently occurring behaviors, such as accidents and health emergencies, in daily life. We propose a fusion method that combines three classification algorithms, based on vision patterns, audio patterns, and activity patterns, to detect unusual user events. The vision pattern algorithm identifies people and objects in video data collected through home CCTV. The audio and activity pattern algorithms classify user audio and activity behaviors using data collected from the built-in sensors of smartphones in the house. We evaluated the proposed individual pattern algorithms and the fusion method on multiple scenarios.
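The abstract does not spell out the fusion rule, so the following is a minimal sketch of one common choice, a weighted late fusion of per-modality abnormality scores; the weights, threshold, and function names are illustrative assumptions rather than details from the paper.

```python
import numpy as np

# Hypothetical per-modality scores: each classifier outputs a probability
# that the current time window contains an abnormal event. The weights and
# threshold below are illustrative, not values from the paper.
MODALITY_WEIGHTS = {"vision": 0.5, "audio": 0.3, "activity": 0.2}
ALERT_THRESHOLD = 0.6

def fuse_abnormality_scores(scores: dict) -> tuple:
    """Weighted late fusion of per-modality abnormality scores."""
    fused = sum(MODALITY_WEIGHTS[m] * scores[m] for m in MODALITY_WEIGHTS)
    return fused, fused >= ALERT_THRESHOLD

# Example: vision sees an unusual posture, audio hears a loud noise,
# the activity sensor reports no movement afterwards.
score, is_abnormal = fuse_abnormality_scores(
    {"vision": 0.8, "audio": 0.7, "activity": 0.9}
)
print(f"fused score = {score:.2f}, abnormal = {is_abnormal}")
```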

Efficient Video Retrieval Scheme with Luminance Projection Model (휘도투시모델을 적용한 효율적인 비디오 검색기법)

  • Kim, Sang Hyun
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.12 / pp.8649-8653 / 2015
  • A number of video indexing and retrieval algorithms have been proposed to manage large video databases efficiently. The video similarity measure is one of the most important technical factors in a video content management system. In this paper, we propose a luminance characteristics model to measure video similarity efficiently. Most video indexing algorithms have commonly used histograms, edges, or motion features, whereas the proposed algorithm employs an efficient similarity measure based on luminance projection. To index video sequences effectively and reduce computational complexity, we calculate video similarity using key frames extracted by a cumulative measure, and compare the sets of key frames using the modified Hausdorff distance. Experimental results show that the proposed luminance projection model yields remarkably improved accuracy and performance over conventional algorithms such as the histogram comparison method, with low computational complexity.
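A rough illustration of the comparison step: assuming grayscale key frames of identical resolution, the sketch below builds a luminance projection feature (row and column sums, one plausible reading of the term; the paper's exact formulation may differ) and compares two key-frame sets with the modified Hausdorff distance of Dubuisson and Jain.

```python
import numpy as np

def luminance_projection(frame: np.ndarray) -> np.ndarray:
    """Row and column sums of a grayscale frame, concatenated into one
    feature vector (one plausible reading of 'luminance projection')."""
    return np.concatenate([frame.sum(axis=0), frame.sum(axis=1)]).astype(float)

def modified_hausdorff(key_frames_a, key_frames_b) -> float:
    """Modified Hausdorff distance (Dubuisson & Jain) between two sets
    of key-frame feature vectors."""
    A = np.stack([luminance_projection(f) for f in key_frames_a])
    B = np.stack([luminance_projection(f) for f in key_frames_b])
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())

# Example with random stand-in frames of identical resolution.
rng = np.random.default_rng(0)
clip_a = [rng.random((120, 160)) for _ in range(5)]
clip_b = [rng.random((120, 160)) for _ in range(4)]
print(modified_hausdorff(clip_a, clip_b))
```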

VLSI Architecture for Video Object Boundary Enhancement (비디오객체의 경계향상을 위한 VLSI 구조)

  • Kim, Jinsang
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.11A / pp.1098-1103 / 2005
  • Edge and contour information is highly salient to the human visual system and underlies our perception and recognition. Therefore, if edge information is integrated while extracting video objects, we can generate object boundaries closer to those perceived by the human visual system for multimedia applications such as interaction between video objects, object-based coding, and representation. Most object extraction methods are difficult to implement as real-time systems due to their iterative and complex arithmetic operations. In this paper, we propose a VLSI architecture that integrates edge information to extract video objects with precisely located boundaries. The proposed architecture can be easily implemented in hardware because it uses only simple arithmetic operations, and it can be applied to real-time object extraction for object-oriented multimedia applications.
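The paper's actual VLSI data path is not reproduced here; as a software reference for the idea, the sketch below snaps a coarse boolean object mask to strong image edges using only the kind of arithmetic a hardware pipeline implements cheaply (differences, absolute values, comparisons, no square roots). The mask convention and threshold are assumptions.

```python
import numpy as np

def gradient_magnitude(frame: np.ndarray) -> np.ndarray:
    """Central differences plus |gx| + |gy|: only subtractions, absolute
    values, and additions, the operations a VLSI pipeline favors."""
    f = frame.astype(float)
    gx = np.zeros_like(f); gy = np.zeros_like(f)
    gx[:, 1:-1] = f[:, 2:] - f[:, :-2]
    gy[1:-1, :] = f[2:, :] - f[:-2, :]
    return np.abs(gx) + np.abs(gy)

def enhance_boundary(mask: np.ndarray, frame: np.ndarray,
                     thresh: float = 40.0) -> np.ndarray:
    """Keep only those boundary pixels of a coarse boolean object mask
    that coincide with strong image edges."""
    edges = gradient_magnitude(frame) > thresh
    interior = mask.copy()
    for axis in (0, 1):
        for shift in (1, -1):
            interior &= np.roll(mask, shift, axis)
    boundary = mask & ~interior   # mask pixels with a non-mask neighbor
    return boundary & edges
```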

Fusion of Background Subtraction and Clustering Techniques for Shadow Suppression in Video Sequences

  • Chowdhury, Anuva;Shin, Jung-Pil;Chong, Ui-Pil
    • Journal of the Institute of Convergence Signal Processing / v.14 no.4 / pp.231-234 / 2013
  • This paper introduces a combination of a background subtraction technique and the K-Means clustering algorithm for removing shadows from video sequences. Lighting conditions cause problems for segmentation. The proposed method can successfully eradicate artifacts associated with lighting changes, such as highlights, reflections, and the cast shadows of moving objects, from the segmentation. In this paper, the K-Means clustering algorithm is applied to the foreground, which is initially segmented by the background subtraction technique. The estimated shadow region is then superimposed on the background to eliminate the effects that cause redundancy in object detection. Simulation results show that the proposed approach is capable of removing shadows and reflections from moving objects with an accuracy of more than 95% in every case considered.
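As a hedged sketch of the two-stage idea (the paper's exact features and parameters are not given here), the code below background-subtracts a frame, then splits foreground pixels into two K-Means clusters on their intensity ratio to the background and discards the cluster that looks like mild darkening, i.e. shadow. The thresholds and two-cluster assumption are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def suppress_shadows(frame: np.ndarray, background: np.ndarray,
                     diff_thresh: float = 30.0) -> np.ndarray:
    """Background subtraction followed by K-Means shadow removal on the
    foreground; returns a cleaned boolean foreground mask."""
    gray = frame.mean(axis=2).astype(float)
    bg = background.mean(axis=2).astype(float)
    fg = np.abs(gray - bg) > diff_thresh
    ys, xs = np.nonzero(fg)
    if len(ys) < 2:
        return fg
    # Shadows darken the background proportionally (ratio just below 1),
    # while true object pixels change it arbitrarily.
    ratio = (gray[ys, xs] + 1.0) / (bg[ys, xs] + 1.0)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(ratio.reshape(-1, 1))
    means = np.array([ratio[labels == k].mean() for k in (0, 1)])
    shadow = int(np.argmin(np.abs(means - 1.0)))  # closest to 'unchanged'
    clean = fg.copy()
    clean[ys[labels == shadow], xs[labels == shadow]] = False
    return clean
```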

Face detection enhancement using independent color channels (독립적 컬러채널을 이용한 얼굴검출 성능개선)

  • Lee, Young-Bok;Min, Hyun-Seok;Ro, Yong-Man
    • Proceedings of the Korea Information Processing Society Conference / 2008.05a / pp.95-98 / 2008
  • This paper proposes a method to reduce the false detection rate, a key aspect of performance improvement, by introducing color images into an existing texture-based face detection system. The color components of a face image are characterized by lower spatial frequencies than the luminance component. In texture-based face detection, high-contrast edges can cause non-face regions to be mistaken for faces. To reduce such false detections, this paper proposes applying texture-based face detection to each independent color channel separately and fusing the resulting detections. Experimental results show that the proposed color channel fusion method maintains a face detection rate similar to that of the conventional grayscale approach while significantly reducing the false detection rate.
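A sketch of the channel-fusion idea, with OpenCV's Haar cascade standing in for the paper's texture-based detector and plain B/G/R channels standing in for whatever color space the authors actually used: a detection is kept only when at least two channels agree on it, which suppresses contrast-edge false positives.

```python
import numpy as np
import cv2  # Haar cascade used as a stand-in texture-based detector

CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def iou(a, b) -> float:
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a; bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    return inter / float(aw * ah + bw * bh - inter)

def fused_face_detection(bgr: np.ndarray, min_votes: int = 2):
    """Detect faces on each color channel independently; keep a box only
    if at least `min_votes` channels agree on it (IoU > 0.5)."""
    per_channel = [CASCADE.detectMultiScale(bgr[:, :, c]) for c in range(3)]
    kept = []
    for box in (b for boxes in per_channel for b in boxes):
        votes = sum(any(iou(box, o) > 0.5 for o in boxes)
                    for boxes in per_channel)
        if votes >= min_votes and not any(iou(box, k) > 0.5 for k in kept):
            kept.append(tuple(box))
    return kept
```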

Secured Authentication through Integration of Gait and Footprint for Human Identification

  • Murukesh, C.;Thanushkodi, K.;Padmanabhan, Preethi;Feroze, Naina Mohamed D.
    • Journal of Electrical Engineering and Technology / v.9 no.6 / pp.2118-2125 / 2014
  • Gait recognition is a new technique to identify people by the way they walk. Human gait is a spatio-temporal phenomenon that typifies the motion characteristics of an individual. The proposed method makes a simple but efficient attempt at gait recognition. For each video file, the spatial silhouettes of a walker are extracted by an improved background subtraction procedure using a Gaussian Mixture Model (GMM), where the GMM serves as a parametric probability density function represented as a weighted sum of Gaussian component densities. The relevant features are then extracted from the tracked silhouettes using Principal Component Analysis (PCA). A Fisher Linear Discriminant Analysis (FLDA) classifier is used to classify the dimensionally reduced images derived by PCA for gait recognition. Although gait images can be easily acquired, gait recognition is affected by clothes, shoes, carrying status, and the specific physical condition of an individual. To overcome this problem, gait is combined with the footprint in a multimodal biometric system. Minutiae are extracted from the footprint and then fused with the silhouette image using the Discrete Stationary Wavelet Transform (DSWT). The experimental results show that the proposed fusion algorithm works well and attains better results than other fusion schemes.
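The silhouette extraction and DSWT footprint fusion are beyond a short sketch, but the PCA-then-FLDA classification stage maps directly onto a scikit-learn pipeline. The data below is random and stands in for flattened, size-normalized silhouettes; the sample counts and component number are illustrative.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical data: one flattened, size-normalized silhouette per sample.
# X has shape (n_samples, n_pixels); y holds subject identities.
rng = np.random.default_rng(0)
X = rng.random((60, 64 * 64))
y = np.repeat(np.arange(6), 10)  # 6 subjects, 10 silhouettes each

# PCA for dimensionality reduction followed by an FLDA classifier,
# mirroring the PCA -> FLDA stage described in the abstract.
model = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```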

Audio and Video Bimodal Emotion Recognition in Social Networks Based on Improved AlexNet Network and Attention Mechanism

  • Liu, Min;Tang, Jun
    • Journal of Information Processing Systems / v.17 no.4 / pp.754-771 / 2021
  • In continuous dimensional emotion recognition, the parts that highlight emotional expression differ across modalities, and the influence of each modality on the emotional state also differs. Therefore, this paper studies the fusion of the two most important modalities in emotion recognition (voice and facial expression) and proposes a dual-modal emotion recognition method that combines an improved AlexNet network with an attention mechanism. After simple preprocessing of the audio and video signals, prior knowledge is first used to extract audio features. Facial expression features are then extracted by the improved AlexNet network. Finally, a multimodal attention mechanism is used to fuse the facial expression and audio features, and an improved loss function is used to mitigate the missing-modality problem, improving the robustness of the model and the performance of emotion recognition. Experimental results show that the concordance correlation coefficients of the proposed model in the arousal and valence dimensions were 0.729 and 0.718, respectively, which is superior to several comparative algorithms.
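The AlexNet feature extractors and the improved loss are omitted here; the sketch below shows only the fusion step in generic form: a tiny one-layer attention network scores each modality and the fused vector is the attention-weighted sum, assuming both features have already been projected to a common dimension. All parameter shapes are illustrative, not the paper's.

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max())
    return e / e.sum()

def attention_fuse(audio_feat: np.ndarray, video_feat: np.ndarray,
                   w: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Score each modality with a one-layer attention network, then
    return the attention-weighted sum of the two feature vectors.
    `w` (d_hidden x d_feat) and `v` (d_hidden,) would be learned in a
    real model; here they are placeholders."""
    feats = np.stack([audio_feat, video_feat])   # (2, d_feat)
    scores = np.tanh(feats @ w.T) @ v            # (2,)
    alpha = softmax(scores)                      # attention weights
    return alpha @ feats                         # (d_feat,)

d_feat, d_hidden = 128, 32
rng = np.random.default_rng(1)
fused = attention_fuse(rng.random(d_feat), rng.random(d_feat),
                       rng.random((d_hidden, d_feat)), rng.random(d_hidden))
print(fused.shape)  # (128,)
```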

Development of A Multi-sensor Fusion-based Traffic Information Acquisition System with Robust to Environmental Changes using Mono Camera, Radar and Infrared Range Finder (환경변화에 강인한 단안카메라 레이더 적외선거리계 센서 융합 기반 교통정보 수집 시스템 개발)

  • Byun, Ki-hoon;Kim, Se-jin;Kwon, Jang-woo
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.16 no.2 / pp.36-54 / 2017
  • The purpose of this paper is to develop a multi-sensor fusion-based traffic information acquisition system that is robust to environmental changes. The system combines the characteristics of each sensor, making it more robust to environmental changes than a video detector alone; it is unaffected by the difference between day and night and has lower maintenance costs than an inductive-loop traffic detector. This is accomplished by synthesizing object tracking information from a radar, vehicle classification information from a video detector, and reliable object detections from an infrared range finder. To prove the effectiveness of the proposed system, we conducted experiments for 6 hours over 5 days during the daytime and early evening on a pedestrian-accessible road. According to the experimental results, the system achieves 88.7% classification accuracy and a 95.5% vehicle detection rate. If the parameters of this system are optimized to adapt to changes in the experimental environment, it is expected to contribute to the advancement of ITS.
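The paper does not publish its data model, so the following is a hypothetical sketch of the sensor roles it describes: a radar track provides speed, the nearest camera detection within a range gate provides the vehicle class, and the infrared range finder acts as an existence check. All field names and the gate width are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class RadarTrack:
    track_id: int
    range_m: float      # distance from the sensor head
    speed_mps: float

@dataclass
class CameraObject:
    range_m: float      # range estimated from image geometry
    vehicle_class: str  # e.g. "car", "truck", "bus"

def fuse_traffic_record(radar: RadarTrack, camera_objs: list,
                        ir_confirmed: bool, gate_m: float = 2.0):
    """Attach a vehicle class to a radar track by nearest-range gating
    against camera detections, and accept the record only if the
    infrared range finder also saw an object."""
    if not ir_confirmed or not camera_objs:
        return None
    nearest = min(camera_objs, key=lambda c: abs(c.range_m - radar.range_m))
    if abs(nearest.range_m - radar.range_m) > gate_m:
        return None
    return {"track_id": radar.track_id, "speed_mps": radar.speed_mps,
            "class": nearest.vehicle_class}

# Example: a radar track at 14.2 m matched against two camera detections.
record = fuse_traffic_record(
    RadarTrack(7, 14.2, 11.5),
    [CameraObject(13.6, "car"), CameraObject(25.0, "truck")],
    ir_confirmed=True)
print(record)
```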

Convergence Class for Basic Video Grammar and Communication Ability - Through Interview Video Production - (기초 영상 문법과 소통 능력 향상 융합 수업 -인터뷰 영상 제작을 통해-)

  • Kim, Leo;Huh, Yoon Jung
    • Journal of Digital Convergence / v.17 no.7 / pp.341-349 / 2019
  • Today, art education needs to include media and media image production as part of its extended field. The purpose of this study is to develop basic visual grammar and communication ability. We developed and applied a teaching plan for interview video production as a convergence lesson. Three classes were conducted for 30 middle school students. The results obtained through the analysis of the pre- and post-questionnaires and the student works are as follows. First, the students used appropriate shot sizes and stable composition in their interview videos, so the class was effective for acquiring basic video grammar. Second, the students improved their ability to ask, listen, understand, and organize through the interviews. As a result, the students were able to apply basic video grammar through the interview video production lesson, effectively producing interview videos.