• Title/Summary/Keyword: movement-image

1,062 search results

Image Analysis for the Simultaneous Measurement of Underwater Flow Velocity and Direction (수중 유속 및 유향의 동시 측정을 위한 이미지 분석 기술에 관한 연구)

  • Dongmin Seo;Sangwoo Oh;Sung-Hoon Byun
    • Journal of Sensor Science and Technology
    • /
    • v.32 no.5
    • /
    • pp.307-312
    • /
    • 2023
  • To measure the flow velocity and direction in the near field of an unmanned underwater vehicle, an optical measurement unit containing an image sensor and a phosphor-integrated pillar that mimics the neuromasts of a fish was constructed. To analyze pillar movement, which changes with fluid flow, fluorescence image analysis was conducted. To analyze the flow velocity, mean force analysis, which could determine the relationship between the light intensity of a fluorescence image and an external force, and length-force analysis, which could determine the distance between the center points of two fluorescence images, were employed. Additionally, angle analysis that can determine the angles at which pixels of a digital image change was selected to analyze the direction of fluid flow. The flow velocity analysis results showed a high correlation of 0.977 between the external force and the light intensity of the fluorescence image, and in the case of direction analysis, omnidirectional movement could be analyzed. Through this study, we confirmed the effectiveness of optical flow sensors equipped with phosphor-integrated pillars.
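The length-force analysis (distance between the centroids of two fluorescence images) and angle analysis described in this abstract could be sketched roughly as follows; this is a minimal illustration assuming the fluorescence images arrive as 2D intensity arrays, and all function names are illustrative rather than from the paper:

```python
import numpy as np

def centroid(img):
    """Intensity-weighted center point of a fluorescence image (2D array)."""
    total = img.sum()
    ys, xs = np.indices(img.shape)
    return np.array([(xs * img).sum() / total, (ys * img).sum() / total])

def pillar_deflection(rest_img, deflected_img):
    """Length-force analysis: distance between the centroids of the rest and
    deflected fluorescence images; angle analysis: direction of displacement."""
    c0, c1 = centroid(rest_img), centroid(deflected_img)
    d = c1 - c0
    length = float(np.hypot(*d))                       # proxy for flow speed
    angle = float(np.degrees(np.arctan2(d[1], d[0])))  # flow direction, degrees
    return length, angle
```

Because the centroid is intensity-weighted, the same code handles diffuse fluorescence spots, not just single bright pixels.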

Gaze Detection in Head Mounted Camera environment (Head Mounted Camera 환경에서 응시위치 추적)

  • 이철한;이정준;김재희
    • Proceedings of the IEEK Conference
    • /
    • 2000.11d
    • /
    • pp.25-28
    • /
    • 2000
  • Gaze detection is to find the position on a monitor screen that a user is looking at by means of computer vision processing. This system can help the handicapped use a computer, substitute for an expensive touch screen, and support navigation in virtual reality. There are two main approaches to gaze detection: the first finds the location from face movement, and the second from eye movement. In gaze detection by eye movement, the position is found either with special devices or by image processing. In this paper, we detect not the iris but the pupil from the image captured by a head-mounted camera with infra-red light, and accurately locate the position the user is looking at by an affine transform.
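The final mapping step (pupil center to screen position via an affine transform) might look like the sketch below; the least-squares calibration from point pairs and the function names are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def fit_affine(pupil_pts, screen_pts):
    """Least-squares affine map from pupil coordinates to screen coordinates,
    estimated from calibration pairs (at least three non-collinear points)."""
    P = np.hstack([pupil_pts, np.ones((len(pupil_pts), 1))])  # rows [x y 1]
    # Solve P @ A = screen_pts for the 3x2 affine parameter matrix A.
    A, *_ = np.linalg.lstsq(P, screen_pts, rcond=None)
    return A

def gaze_position(A, pupil_xy):
    """Map a detected pupil center to the monitor position being looked at."""
    x, y = pupil_xy
    return np.array([x, y, 1.0]) @ A
```

With more than three calibration points the least-squares fit also averages out pupil-detection noise.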


Real-time Implementation of Character Movement by Floating Hologram based on Depth Video

  • Oh, Kyoo-jin;Kwon, Soon-kak
    • Journal of Multimedia Information System
    • /
    • v.4 no.4
    • /
    • pp.289-294
    • /
    • 2017
  • In this paper, we implement character content with a floating hologram. The floating hologram is one of the hologram techniques: it projects a 2D image onto a glass panel to represent a 3D image in the air. The floating hologram technique is easy to apply and is used in exhibitions, corporate events, and advertising events. This paper uses both depth information and the Unreal Engine for the floating hologram. Simulation results show that this method can make the character content follow the movement of the user in real time by capturing depth video.
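One way to drive a character from depth video, as this abstract describes, is to extract the user's position from each depth frame and feed it to the engine; the sketch below is a simplified assumption (nearest-region centroid on one axis), not the paper's actual pipeline, and the range cutoff is illustrative:

```python
import numpy as np

def user_position(depth_frame, max_range_mm=3000):
    """Estimate the user's horizontal position from a depth frame (values in
    mm, 0 meaning no measurement) as the centroid of the near valid region."""
    valid = (depth_frame > 0) & (depth_frame < max_range_mm)
    if not valid.any():
        return None
    ys, xs = np.nonzero(valid)
    # Normalize x to [-1, 1] so the engine can mirror the user's movement.
    cx = xs.mean()
    return 2.0 * cx / (depth_frame.shape[1] - 1) - 1.0
```

Per-frame calls to a function like this are what make the character track the user in real time.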

Improvement of Dynamic Characteristics of an Optical Image Stabilizer in a Compact Camera (초소형 카메라 흔들림 보정장치의 동특성 개선)

  • Song, Myeong-Gyu;Son, Dong-Hun;Park, No-Cheol;Park, Kyoung-Su;Park, Young-Pil
    • Transactions of the Korean Society for Noise and Vibration Engineering
    • /
    • v.21 no.2
    • /
    • pp.178-185
    • /
    • 2011
  • An optical image stabilizer is a device that compensates for camera movement during the exposure time. The compensation is implemented by a movable lens or image sensor that adjusts the optical path according to the camera movement. Generally, the camera is moved by hand shake, so hand shake is considered the external disturbance. However, there are many other vibrations, such as car and train vibration. In this paper, an optical image stabilization system for the high-frequency region is presented. A notch filter and a lead compensator are designed and applied to improve stability without changing the actuator. To verify the performance of the optical image stabilization system in the high-frequency region, experimental equipment with a moving object is established. It is confirmed that the optical image stabilization system does not diverge at the resonance frequency.
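The notch-filter part of such a design can be sketched with standard signal-processing tools; the 120 Hz resonance, 1 kHz sampling rate, and Q factor below are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy import signal

# Suppress an assumed actuator resonance with a digital notch filter.
fs = 1000.0     # control-loop sampling rate [Hz] (assumed)
f_res = 120.0   # assumed resonance frequency [Hz]
b, a = signal.iirnotch(f_res, Q=30.0, fs=fs)

# Gain of the filter at the resonance vs. well inside the passband.
freqs, h = signal.freqz(b, a, worN=[20.0, f_res], fs=fs)
passband_gain, notch_gain = np.abs(h)
```

The notch removes loop gain only in a narrow band around the resonance, which is why it can stabilize the system without replacing the actuator.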

Head motion during cone-beam computed tomography: Analysis of frequency and influence on image quality

  • Moratin, Julius;Berger, Moritz;Ruckschloss, Thomas;Metzger, Karl;Berger, Hannah;Gottsauner, Maximilian;Engel, Michael;Hoffmann, Jurgen;Freudlsperger, Christian;Ristow, Oliver
    • Imaging Science in Dentistry
    • /
    • v.50 no.3
    • /
    • pp.227-236
    • /
    • 2020
  • Purpose: Image artifacts caused by patient motion are problematic in cone-beam computed tomography (CBCT) because they lead to distortion of the 3-dimensional reconstruction. This prospective study was performed to quantify patient movement during CBCT acquisition and its influence on image quality. Materials and Methods: In total, 412 patients receiving CBCT imaging were equipped with a wireless head sensor system that detected inertial, gyroscopic, and magnetometric movements with 6 degrees of freedom. The type and amplitude of movements during CBCT acquisition were evaluated and image quality was rated in 7 different anatomical regions of interest. For continuous variables, significance was calculated using the Student t-test. A linear regression model was applied to identify associations of the type and extent of motion with image quality scores. Kappa statistics were used to assess intra- and inter-rater agreement. Chi-square testing was used to analyze the impact of age and sex on head movement. Results: All CBCT images were acquired in a 10-month period. In 24% of the investigations, movement was recorded (acceleration: >0.10 m/s²; angular velocity: >0.018°/s). In all examined regions of interest, head motion during CBCT acquisition resulted in significant impairment of image quality (P<0.001). Movement in the horizontal and vertical axes was most relevant for image quality (R²>0.7). Conclusion: Relevant head motions during CBCT imaging were frequently detected, leading to image quality loss and potentially impairing diagnosis and therapy planning. The presented data illustrate the need for digital correction algorithms and hardware to minimize motion artifacts in CBCT imaging.
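The study's movement criterion (acceleration above 0.10 m/s² or angular velocity above 0.018 °/s) is simple to express in code; the function name and per-axis check below are illustrative, the threshold values are the ones reported in the abstract:

```python
def motion_detected(accel_mps2, gyro_dps,
                    accel_thresh=0.10, gyro_thresh=0.018):
    """Flag head movement during a CBCT scan using the study's thresholds:
    acceleration > 0.10 m/s^2 or angular velocity > 0.018 deg/s on any axis."""
    return any(abs(a) > accel_thresh for a in accel_mps2) or \
           any(abs(g) > gyro_thresh for g in gyro_dps)
```

Applied per sample of the 6-degree-of-freedom sensor stream, a flag like this marks the acquisitions whose image quality must be inspected.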

HAND GESTURE INTERFACE FOR WEARABLE PC

  • Nishihara, Isao;Nakano, Shizuo
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.664-667
    • /
    • 2009
  • There is strong demand for wearable PC systems that can support the user outdoors. When we are outdoors, our movement makes it impossible to use traditional input devices such as keyboards and mice. We propose a hand gesture interface based on image processing to operate wearable PCs. A semi-transparent PC screen is displayed on the head-mounted display (HMD), and the user makes hand gestures to select icons on the screen. The user's hand is extracted from the images captured by a color camera mounted above the HMD. Since skin color can vary widely due to outdoor lighting effects, a key problem is accurately discriminating the hand from the background. The proposed method does not assume any fixed skin color space. First, the image is divided into blocks, and blocks with similar average color are linked. Contiguous regions are then subjected to hand recognition. Blocks on the edges of the hand region are subdivided for more accurate finger discrimination. A change in hand shape is recognized as hand movement. Our current input interface associates a hand grasp with a mouse click. Tests on a prototype system confirm that the proposed method recognizes hand gestures accurately at high speed. We intend to develop a wider range of recognizable gestures.
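The block-linking stage described in this abstract (divide the image into blocks, link neighbors with similar average color) could be sketched as below; the block size, tolerance, and flood-fill formulation are assumptions, and the edge-block subdivision and hand-recognition stages are omitted:

```python
import numpy as np
from collections import deque

def link_similar_blocks(img, block=8, tol=20.0):
    """Divide an RGB image into blocks, then link 4-neighboring blocks whose
    average colors differ by less than `tol`, returning a region label per
    block. A sketch of the block-linking stage only."""
    h, w, _ = img.shape
    bh, bw = h // block, w // block
    means = img[:bh*block, :bw*block].reshape(bh, block, bw, block, 3) \
               .mean(axis=(1, 3))
    labels = -np.ones((bh, bw), dtype=int)
    region = 0
    for sy in range(bh):
        for sx in range(bw):
            if labels[sy, sx] >= 0:
                continue
            labels[sy, sx] = region       # start a new contiguous region
            q = deque([(sy, sx)])
            while q:                      # flood-fill over similar neighbors
                y, x = q.popleft()
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if 0 <= ny < bh and 0 <= nx < bw and labels[ny, nx] < 0 \
                       and np.linalg.norm(means[ny, nx] - means[y, x]) < tol:
                        labels[ny, nx] = region
                        q.append((ny, nx))
            region += 1
    return labels
```

Because only relative color similarity between neighboring blocks is used, no fixed skin color space is assumed, matching the paper's stated goal.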


Position Estimation of Object Based on Vergence Movement of Cameras (카메라의 vergence 운동에 근거한 물체의 위치 추정)

  • 정남채
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.2 no.4
    • /
    • pp.59-64
    • /
    • 2001
  • In this paper, a method is proposed to solve the problems of segmenting zero-disparity regions, together with an algorithm that extracts binocular disparity to estimate the position of an object from the vergence movement of moving stereo cameras, and experiments were performed to compare them. In previous studies, a single high critical value was kept constant over all small regions, so almost no change of density value was detected in the image, and as a result corresponding points were extracted incorrectly. By evaluating the characteristics of each small region with its autocorrelation and setting the critical value proportional to the autocorrelation value, it was confirmed that corresponding points are almost never extracted by mistake and that binocular disparity can be extracted at high speed.
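The core idea, a matching threshold proportional to each small region's autocorrelation rather than one fixed value, might look like the sketch below; the one-pixel-shift autocorrelation measure, the proportionality constant, and the mean-difference match test are illustrative assumptions:

```python
import numpy as np

def adaptive_threshold(patch, k=0.5):
    """Threshold proportional to the patch's autocorrelation: flat patches
    get a near-zero threshold, textured patches a higher one, instead of a
    single fixed critical value for every small region. `k` is illustrative."""
    p = patch - patch.mean()
    # One-pixel-shift autocorrelation as a simple texture measure.
    ac = np.abs((p[:, 1:] * p[:, :-1]).mean())
    return k * ac

def is_match(patch_a, patch_b, k=0.5):
    """Accept a correspondence only if the mean absolute difference stays
    below the patch-adaptive threshold."""
    diff = np.abs(patch_a.astype(float) - patch_b.astype(float)).mean()
    return diff < adaptive_threshold(patch_a, k)
```

A side effect worth noting: flat, textureless patches yield a zero threshold and are never accepted, which is consistent with the paper's aim of avoiding false correspondences in regions with almost no density change.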


Characteristics of Drone Broadcasting Camera Moving through Content Analysis Method (내용분석을 통해 본 드론 방송영상의 카메라 움직임 특성 연구)

  • Lim, Hyunchan
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.8
    • /
    • pp.1178-1183
    • /
    • 2021
  • Based on camera movement in image expression and grammar, this study analyzes the characteristics of image expression filmed and broadcast by drones. The movement characteristics of existing video cameras were used as a coding scheme to analyze drone images, in order to examine the differences from existing video grammar and their implications. This study conducted a content analysis using the entire population of drone news footage broadcast over four years (2015-2018) by TV Chosun. The size of the screen, camera work, duration of the shot, and camera angle were selected and analyzed. The results show that, for camera movement, drone camera work uses dolly shots most frequently, followed by pan and tilt shots; zoom was used least. In addition, this study analyzed the size of the screen, duration of the shot, and camera angle of drone footage. The analysis shows that drones use certain camera movements most frequently and that, contrary to grandiose modifiers such as "extension of the human gaze," the drone remains a supplementary means of enhancing traditional media expression.

A Motion Adaptive Multi-Frame Interpolation Algorithm (움직임 적응형 멀티프레임 보간 알고리즘)

  • 김희철;채종석;최철호;권병헌;최명렬
    • Proceedings of the IEEK Conference
    • /
    • 2000.06d
    • /
    • pp.54-57
    • /
    • 2000
  • In this paper, we propose a new interpolation method that uses the motion between two moving-image frames. In the proposed method, movement is detected using the neighborhood pixels of the target pixel in the past frame and the present frame. Then an H-shaped pseudomedian filter (hereafter HPMED) is used for the still part of the image and a Delta-shaped interpolation filter (hereafter Δ-shaped) for the moving part. We detect movement by comparing the differences between pixels in a 4×5 window of the past frame and the present frame against a critical value. In computer simulation, the results are assessed both by PSNR (peak signal-to-noise ratio) and by subjective assessment focused on edge characteristics. The results show that the proposed adaptive method is better than conventional methods.
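The motion-detection stage (compare a 4×5 window of the past and present frames against a critical value) might be sketched as follows; the threshold value and the windowing details are illustrative assumptions, and the HPMED and Δ-shaped filters themselves are not reproduced here:

```python
import numpy as np

def motion_mask(past, present, win=(4, 5), thresh=10.0):
    """Classify each pixel as moving or still by comparing the past and
    present frames over a 4x5 neighborhood; True marks a moving pixel.
    Still pixels would then go to HPMED, moving ones to the Δ-shaped filter."""
    diff = np.abs(present.astype(float) - past.astype(float))
    h, w = diff.shape
    wy, wx = win
    out = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            # Clamp the window at the frame borders.
            y0, x0 = max(0, y - wy // 2), max(0, x - wx // 2)
            out[y, x] = diff[y0:y0 + wy, x0:x0 + wx].mean() > thresh
    return out
```

The mask then selects, per pixel, which of the two interpolation filters produces the output frame.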


Man-machine interface using eyeball movement

  • Takami, Osamu;Morimoto, Kazuaki;Ochiai, Tsumoru;Ishimatsu, Takakazu
Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 1995.10a
    • /
    • pp.195-198
    • /
    • 1995
  • In this paper we propose a computer interface device for handicapped people. The input signals of the interface device are the movements of the user's eyeballs and head, which are detected by an image processing system. One feature of our system is that the operator is not obliged to wear any burdensome device such as glasses or a helmet. The sensing performance of the image processing of the eyeballs and head is evaluated through experiments. Experimental results reveal the applicability of our system.
