• Title/Summary/Keyword: Frame camera

Speaker Detection System for Video Conference (영상회의를 위한 화자 검출 시스템)

  • Lee, Byung-Sun;Ko, Sung-Won;Kwon, Heak-Bong
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers / v.17 no.5 / pp.68-79 / 2003
  • In this paper, we propose a system that detects the current speaker in a multi-speaker video conference by using lip motion. First, the system detects the face and lip area of each speaker using face color and shape information. Then, to identify the current speaker, it measures the change in each lip area between the current frame and the previous frame. The system uses two CCD cameras: a general CCD camera and a PTZ camera controlled through an RS-232C serial port. The result is a system capable of detecting the face of the current speaker in a video feed containing three or more people, regardless of face orientation. It takes only 4 to 5 seconds to zoom in on the speaker from the initial image. The system also provides more efficient image transmission for applications such as video conferencing and internet broadcasting, because it offers a 320×240 face-area view while simultaneously providing a view of the whole scene.
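To make the lip-motion cue concrete, here is a minimal Python sketch of the frame-difference step described above, assuming the face/lip detector has already produced a bounding box per participant; the function names and the (x, y, width, height) box layout are illustrative, not from the paper.

```python
import numpy as np

def lip_motion_score(prev_frame: np.ndarray, curr_frame: np.ndarray,
                     lip_box: tuple) -> float:
    """Mean absolute intensity change inside a lip bounding box.

    lip_box is (x, y, width, height) in pixels; frames are grayscale
    images of identical size.  A larger score suggests more lip motion
    between consecutive frames.
    """
    x, y, w, h = lip_box
    prev_roi = prev_frame[y:y + h, x:x + w].astype(np.float32)
    curr_roi = curr_frame[y:y + h, x:x + w].astype(np.float32)
    return float(np.mean(np.abs(curr_roi - prev_roi)))

def pick_speaker(prev_frame, curr_frame, lip_boxes):
    """Return the index of the participant whose lip region changed most."""
    scores = [lip_motion_score(prev_frame, curr_frame, box) for box in lip_boxes]
    return int(np.argmax(scores))
```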

Fast Structure Recovery and Integration using Improved Scaled Orthographic Factorization (개선된 직교분해기법을 사용한 빠른 구조 복원 및 융합)

  • Park, Jong-Seung;Yoon, Jong-Hyun
    • Journal of Korea Multimedia Society / v.10 no.3 / pp.303-315 / 2007
  • This paper proposes a 3D structure recovery and registration method that uses four or more common points. For each frame of a given video, a partial structure is recovered from the tracked points. The 3D coordinates, camera positions, and camera directions are computed at once by our improved scaled orthographic factorization method. The partially recovered point sets are parts of a whole model, and registering them produces the complete shape: the recovered subsets are integrated by transforming each local coordinate system into a common basis coordinate system. Shape recovery and integration are performed uniformly and linearly, without any nonlinear iterative process and without loss of accuracy. The execution time for the integration is significantly reduced relative to the conventional ICP method. Owing to this fast recovery and registration framework, the scheme is applicable to various interactive video applications. The processing time per frame is under 0.01 seconds in most cases, and the integration error is under 0.1 mm on average.
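As a rough illustration of the factorization idea (not the paper's improved algorithm), the following Python sketch shows the classical rank-3 orthographic factorization of a tracked-point measurement matrix via SVD; the paper's method additionally handles per-frame scale and metric constraints.

```python
import numpy as np

def orthographic_factorization(tracks: np.ndarray):
    """Rank-3 factorization of tracked points (Tomasi-Kanade style).

    tracks has shape (2*F, P): image coordinates of P points over F
    frames, one coordinate per row.  Returns (motion, shape) with
    shapes (2*F, 3) and (3, P).
    """
    # Translate each row so the point centroid is at the origin.
    centered = tracks - tracks.mean(axis=1, keepdims=True)
    # Best rank-3 approximation via SVD.
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    motion = u[:, :3] * np.sqrt(s[:3])      # camera rows
    shape = np.sqrt(s[:3])[:, None] * vt[:3, :]  # 3D points
    return motion, shape
```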

PIV System for the Flow Pattern Analysis of Artificial Organs; Applied to the In Vitro Test of Artificial Heart Valves

  • Lee, Dong-Hyeok;Seh, Soo-Won;An, Hyuk;Min, Byoung-Goo
    • Journal of Biomedical Engineering Research / v.15 no.4 / pp.489-497 / 1994
  • The most serious problems related to cardiovascular prostheses are thrombosis and hemolysis, and the flow pattern around a prosthesis is known to be highly correlated with both. Laser Doppler Anemometry (LDA) is the usual method for obtaining the flow pattern, but it is difficult to operate and has a narrow measurement region. Particle Image Velocimetry (PIV) can overcome these limitations. Because the flow through the valve is too fast for particles to be captured by an ordinary CCD camera, a high-speed camera (Hyspeed: Holland-Photonics) was used. The estimated maximum flow speed was 5 m/sec and the maximum trackable length 0.5 cm, so the frame rate was set to 1000 frames per second. Several image processing techniques (blurring, segmentation, morphology, etc.) were used for preprocessing. A particle tracking algorithm and a 2-D interpolation technique, needed to build a gridded velocity profile, were applied in this PIV program. A single-pulse multi-frame particle tracking algorithm was used to address common PIV problems, namely eliminating particles that leave the illuminated sheet plane and determining the direction of particle paths. A 1-D relaxation formula was modified to interpolate the 2-D field. A parachute artificial heart valve developed by Seoul National University and a Bjork-Shiley valve were tested. For each valve, different flow patterns, velocity profiles, wall shear stresses, and mean velocities were obtained.
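A minimal Python sketch of two generic steps mentioned in the abstract (nearest-neighbor particle matching between frames, then interpolation of the scattered vectors onto a regular grid) is shown below; it is illustrative only and omits the single-pulse multi-frame direction resolution and sheet-exit rejection that the paper describes.

```python
import numpy as np
from scipy.interpolate import griddata

def track_particles(pts_prev, pts_curr, max_disp):
    """Nearest-neighbor matching of particle centroids between two frames.

    pts_prev, pts_curr: (N, 2) and (M, 2) centroids in pixels.
    Returns matched start points and their displacement vectors.
    """
    starts, disps = [], []
    for p in pts_prev:
        d = np.linalg.norm(pts_curr - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_disp:
            starts.append(p)
            disps.append(pts_curr[j] - p)
    return np.array(starts), np.array(disps)

def velocity_on_grid(starts, disps, grid_x, grid_y, dt):
    """Interpolate scattered displacement vectors onto a regular grid."""
    u = griddata(starts, disps[:, 0] / dt, (grid_x, grid_y), method='linear')
    v = griddata(starts, disps[:, 1] / dt, (grid_x, grid_y), method='linear')
    return u, v
```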

Realtime 3D Human Full-Body Convergence Motion Capture using a Kinect Sensor (Kinect Sensor를 이용한 실시간 3D 인체 전신 융합 모션 캡처)

  • Kim, Sung-Ho
    • Journal of Digital Convergence / v.14 no.1 / pp.189-194 / 2016
  • Recently, demand for image processing technology has been increasing as equipment such as cameras, camcorders, and CCTV has come into wider use. In particular, research and development on 3D imaging technology using depth cameras such as the Kinect sensor has become more active. The Kinect sensor is a high-performance camera that can acquire a 3D human skeleton structure from RGB, skeleton, and depth images frame by frame in real time. In this paper, we develop a system that captures the motion of a 3D human skeleton structure using the Kinect sensor and stores it in the general-purpose motion file formats TRC and BVH, selectable by the user. The system also provides a function that converts a captured TRC motion file into BVH format. Finally, we confirm visually, through a motion capture data viewer, that the motion data captured with the Kinect sensor is captured correctly.
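As an illustration of storing captured joint positions, the following Python sketch writes a simplified TRC-style tab-separated table; the joint layout and column names are assumptions, and the full TRC/BVH headers used by motion-analysis tools contain additional fields not shown here.

```python
import numpy as np

def write_trc_like(filename: str, joint_names, frames: np.ndarray,
                   rate: float = 30.0) -> None:
    """Write captured joint positions as a simplified TRC-style table.

    frames has shape (num_frames, num_joints, 3) in meters.  Only a
    minimal header row and the frame/time/XYZ columns are written.
    """
    with open(filename, "w") as f:
        f.write("Frame#\tTime\t" + "\t".join(
            f"{name}_X\t{name}_Y\t{name}_Z" for name in joint_names) + "\n")
        for i, frame in enumerate(frames):
            row = [str(i + 1), f"{i / rate:.4f}"]
            row += [f"{c:.5f}" for joint in frame for c in joint]
            f.write("\t".join(row) + "\n")
```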

Implementation of the high speed signal processing hardware system for Color Line Scan Camera (Color Line Scan Camera를 위한 고속 신호처리 하드웨어 시스템 구현)

  • Park, Se-hyun;Geum, Young-wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.9 / pp.1681-1688 / 2017
  • In this paper, we implement a high-speed signal processing hardware system for a color line scan camera using an FPGA and NOR flash. Existing hardware systems process the signal mainly in software on a high-speed DSP and detect defects using separate per-channel RGB logic; instead, we propose defect detection hardware consisting of an RGB-to-HSL hardware converter, a FIFO, an HSL full-color defect decoder, and an image frame buffer. The defect detection hardware is built around a hardware look-up table for converting RGB to HSL and a high-resolution 4K HSL full-color defect decoder. In addition, we include an image frame buffer so that comprehensive image processing can be performed on a two-dimensional image accumulated from line data, instead of local processing on individual lines. As a result, the implemented system can be applied effectively to a grain sorting machine for sorting peanuts.
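A software analogue of the RGB-to-HSL look-up table can be sketched as follows in Python; the 4-bit-per-channel quantization and function name are illustrative assumptions, not the FPGA design itself.

```python
import colorsys
import numpy as np

def build_rgb_to_hsl_lut(bits: int = 4) -> np.ndarray:
    """Precompute an RGB -> (H, S, L) look-up table.

    Each channel is quantized to `bits` bits so the table stays small
    (16 x 16 x 16 entries here).  Values are in [0, 1].
    """
    levels = 1 << bits
    lut = np.zeros((levels, levels, levels, 3), dtype=np.float32)
    for r in range(levels):
        for g in range(levels):
            for b in range(levels):
                # colorsys returns (h, l, s); reorder to (h, s, l).
                h, l, s = colorsys.rgb_to_hls(r / (levels - 1),
                                              g / (levels - 1),
                                              b / (levels - 1))
                lut[r, g, b] = (h, s, l)
    return lut

# Usage: quantize an 8-bit pixel to 4 bits per channel and index the table.
lut = build_rgb_to_hsl_lut()
h, s, l = lut[200 >> 4, 120 >> 4, 40 >> 4]
```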

A Fundamental Study on Detection of Weeds in Paddy Field using Spectrophotometric Analysis (분광특성 분석에 의한 논 잡초 검출의 기초연구)

  • 서규현;서상룡;성제훈
    • Journal of Biosystems Engineering / v.27 no.2 / pp.133-142 / 2002
  • This is a fundamental study toward a machine-vision sensor that detects weeds in a paddy field using spectrophotometric analysis, so that herbicide can be sprayed selectively. A set of spectral reflectance data was collected from dry and wet soil and from the leaves of rice and six kinds of weed in order to select suitable wavelengths for classifying soil, rice, and weeds. A stepwise variable selection method of discriminant analysis was applied to the data set; wavelengths of 680 and 802 nm were selected to distinguish plants (rice and weeds) from dry and wet soil, respectively, and wavelengths of 580 and 680 nm were selected to classify rice and weeds. The validity of the wavelengths for distinguishing plants from soil was tested by cross-validation with the fitted discriminant function, and all soil and plant samples were classified correctly. The validity of the wavelengths for classifying rice and weeds was tested in the same way, with 98% of rice and 83% of weeds classified correctly. The feasibility of using a CCD color camera to detect weeds in a paddy field was then tested on the spectral reflectance data with the same statistical method, taking the central wavelengths of the camera's RGB channels as the effective wavelengths for distinguishing plants from soil and weeds from plants. In that trial, 100% and 94% of plants in dry and wet soil, respectively, were classified correctly using the central wavelength of the R channel alone, and 95% of rice and 85% of weeds were classified correctly using the central wavelengths of all three RGB channels. We conclude that a CCD color camera has good potential for detecting weeds in a paddy field.
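For illustration only, the following Python sketch runs a linear discriminant analysis on two reflectance bands in the spirit of the study; the numeric values are synthetic placeholders, not the paper's measurements.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical reflectance samples at the two selected wavelengths
# (680 nm and 802 nm); real values would come from the spectrometer data.
rng = np.random.default_rng(0)
soil = rng.normal(loc=[0.30, 0.35], scale=0.03, size=(50, 2))
plant = rng.normal(loc=[0.05, 0.50], scale=0.03, size=(50, 2))

X = np.vstack([soil, plant])
y = np.array([0] * len(soil) + [1] * len(plant))   # 0 = soil, 1 = plant

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
print("training accuracy:", lda.score(X, y))
```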

Robust 3-D Motion Estimation Based on Stereo Vision and Kalman Filtering (스테레오 시각과 Kalman 필터링을 이용한 강인한 3차원 운동추정)

  • 계영철
    • Journal of Broadcast Engineering / v.1 no.2 / pp.176-187 / 1996
  • This paper deals with the accurate estimation of the 3-D pose (position and orientation) of a moving object with reference to the world frame (or robot base frame), based on a sequence of stereo images taken by cameras mounted on the end-effector of a robot manipulator. This work is an extension of previous work [1]. Emphasis is given to 3-D pose estimation relative to the world (or robot base) frame in the presence of not only measurement noise in the 2-D images [1] but also camera position errors due to random noise in the joint angles of the robot manipulator. To this end, a new set of discrete linear Kalman filter equations is derived, based on the following: 1) the orientation error of the object frame due to measurement noise in the 2-D images is modeled with reference to the camera frame by analyzing the noise propagation through 3-D reconstruction; 2) an extended Jacobian matrix is formulated by combining the result of 1) with the orientation error of the end-effector frame due to joint angle errors, through robot differential kinematics; and 3) the rotational motion of the object, which is nonlinear in nature, is linearized based on quaternions. Motion parameters are computed from the estimated quaternions by the iterated least-squares method. Simulation results show a significant reduction of estimation errors and demonstrate accurate convergence of the estimated motion parameters to the true values.
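The generic recursion underlying such a filter can be sketched as follows; this is only the standard discrete linear Kalman predict/update cycle, not the paper's specific state, Jacobian, or quaternion-based model.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a discrete linear Kalman filter.

    x, P : state estimate and covariance from the previous step
    z    : current measurement
    F, H : state transition and measurement matrices
    Q, R : process and measurement noise covariances
    """
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```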

The Existence of Implicit Frames in VR Movies (VR 영화에서 암묵적 프레임의 존재)

  • Kim, Tae-Eun
    • The Journal of the Korea Contents Association / v.18 no.8 / pp.272-286 / 2018
  • VR movies form a relationship with the audience in a completely different way from conventional movies viewed on a screen. In a VR movie, the audience becomes the camera and also the subject of the camera viewpoint, which calls for a frame theory unique to VR movies that examines the first-person viewpoint and replaces frame editing as the means of delivering a narrative. In VR movies, the frames that deliver the narrative are not revealed and perform a symbolic narrative function; they are therefore called "implicit frames." The study discusses the related theoretical background, including Alexander Sokurov's Russian Ark, made with the one-shot, one-cut method, off-screen elements, and the Fourth Wall. In VR movies, the audience becomes immersed in the narrative through a paradoxical dilemma, existing in reality while being absent on screen at the same time, and experiences hyper-reality. Space in VR movies has attributes that include the blocking of the eyeline in order to move it, and telepresence that binds presence between reality and virtuality.

Design of Image Recognition Module for Face and Iris Area based on Pixel with Eye Blinking (눈 깜박임 화소 값 기반의 안면과 홍채영역 영상인식용 모듈설계)

  • Kang, Mingoo
    • Journal of Internet Computing and Services / v.18 no.1 / pp.21-26 / 2017
  • In this paper, a USB-OTG (Universal Serial Bus On-The-Go) camera interface module is designed to use iris information for personal identification. An image recognition algorithm is proposed that searches for the face and iris areas using pixel differences caused by eye blinking: several facial images are captured and the regions are detected automatically, without any user action such as pressing a button on the smartphone. The pupil and iris region can be localized quickly, and the iris area segmented properly, by computing the pixel-value frame difference between two adjacent open-eye and closed-eye images. The proposed iris recognition can be processed quickly with an appropriate grid size for the eye region, and is designed around the frame difference between adjacent images from the USB-OTG camera module, which restricts the search area for the face and iris locations. As a result, the time needed to detect the iris location is reduced, and the module is expected to eliminate the standby time of waiting for the eyes to open.
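A minimal Python sketch of the blink-based frame-difference idea is given below, assuming two grayscale frames (eyes open and eyes closed) are available; the threshold value and function names are illustrative.

```python
import numpy as np

def blink_difference_map(open_eye: np.ndarray, closed_eye: np.ndarray,
                         threshold: int = 30) -> np.ndarray:
    """Binary map of pixels that change between open- and closed-eye frames.

    The changed region roughly localizes the eye/iris area, which can
    then be cropped for iris segmentation.
    """
    diff = np.abs(open_eye.astype(np.int16) - closed_eye.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

def eye_bounding_box(change_map: np.ndarray):
    """Bounding box (x, y, w, h) of the changed pixels, or None if empty."""
    ys, xs = np.nonzero(change_map)
    if len(xs) == 0:
        return None
    return (int(xs.min()), int(ys.min()),
            int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))
```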

Epipolar Resampling for High Resolution Satellite Imagery Based on Parallel Projection (평행투영 기반의 고해상도 위성영상 에피폴라 재배열)

  • Noh, Myoung-Jong;Cho, Woo-Sug;Chang, Hwi-Jeong;Jeong, Ji-Yeon
    • Journal of Korean Society for Geospatial Information Science / v.15 no.4 / pp.81-88 / 2007
  • The geometry of a satellite image captured by a linear CCD sensor differs from that of a frame camera image: because the exterior orientation parameters of a linear CCD image vary from scan line to scan line, its epipolar geometry also differs from that of a frame camera image. In this paper, we propose a method for resampling linear CCD satellite images into epipolar geometry under the assumption that the image is formed by parallel rather than perspective projection, using a 2D affine sensor model based on parallel projection. For the experiments, IKONOS stereo images, which are high-resolution linear CCD images, were used. As results, the spatial accuracy of the 2D affine sensor model is investigated and the accuracy of the epipolar-resampled image with the RFM is presented.
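For reference, a 2D affine sensor model of the kind assumed here can be fitted by ordinary least squares; the short Python sketch below is a generic illustration, not the paper's implementation.

```python
import numpy as np

def fit_affine_sensor_model(ground_xyz: np.ndarray, image_xy: np.ndarray):
    """Least-squares fit of a 2D affine sensor model (parallel projection).

    Image coordinates are modeled as
        x = a1*X + a2*Y + a3*Z + a4
        y = a5*X + a6*Y + a7*Z + a8
    ground_xyz: (N, 3) object-space points, image_xy: (N, 2) image points,
    with N >= 4.  Returns the coefficient vectors (a1..a4) and (a5..a8).
    """
    A = np.hstack([ground_xyz, np.ones((len(ground_xyz), 1))])
    coeff_x, *_ = np.linalg.lstsq(A, image_xy[:, 0], rcond=None)
    coeff_y, *_ = np.linalg.lstsq(A, image_xy[:, 1], rcond=None)
    return coeff_x, coeff_y
```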
