• Title/Summary/Keyword: Camera Modeling

Head Position Detection Using Omnidirectional Camera (전 방향 카메라 영상에서 사람의 얼굴 위치검출 방법)

  • Bae, Kwang-Hyuk;Park, Kang-Ryoung;Kim, Jai-Hie
    • Proceedings of the IEEK Conference
    • /
    • 2007.07a
    • /
    • pp.283-284
    • /
    • 2007
  • This paper proposes a method for real-time segmentation of moving regions and detection of head position in a single omnidirectional camera image. Segmentation of the moving region uses a background model built with a mixture of Gaussians (MOG) together with a shadow detection method. A circular constraint is proposed for detecting the head position.

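The pipeline sketched in the abstract above, MOG background modeling with shadow suppression followed by a circularity check on the moving regions, can be approximated with standard OpenCV primitives. The Python sketch below is only an illustration of that generic pipeline under assumed thresholds (`min_area`, `min_circularity`); it does not reproduce the paper's omnidirectional geometry or its specific circular constraint.

```python
import cv2
import numpy as np

# MOG-style background model with built-in shadow detection
# (OpenCV's MOG2 subtractor labels shadow pixels with the value 127).
subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                varThreshold=16,
                                                detectShadows=True)

def detect_head_candidates(frame, min_area=200, min_circularity=0.6):
    """Return bounding boxes of moving regions that look roughly circular."""
    mask = subtractor.apply(frame)
    mask[mask == 127] = 0                      # drop detected shadows
    mask = cv2.medianBlur(mask, 5)             # remove speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    heads = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < min_area:
            continue
        perimeter = cv2.arcLength(c, True)
        # circularity = 4*pi*A / P^2 equals 1.0 for a perfect circle
        circularity = 4.0 * np.pi * area / (perimeter ** 2 + 1e-9)
        if circularity >= min_circularity:
            heads.append(cv2.boundingRect(c))
    return heads
```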

Surface Treatment Effect on the Toilet by Numerical Modeling and High Speed CCD Camera (수치모델과 고속 CCD 카메라를 이용한 세변기 표면 처리 효과 특성 해석)

  • Roh, Ji-Hyun;Do, Woo-Ri;Yang, Won-Kyun;Joo, Jung-Hoon
    • Journal of the Korean institute of surface engineering
    • /
    • v.44 no.1
    • /
    • pp.32-37
    • /
    • 2011
  • Numerical analysis was performed to investigate the effect of surface treatment of a toilet on its cleanliness. Plasma surface treatment producing a super-hydrophobic surface is expected to give the toilet seat cover a self-cleaning effect, preventing droplets carrying large quantities of bacteria from adhering during flushing after evacuation. In this study, the fluid flow in the toilet during flushing was analyzed with an ultrahigh-speed CCD camera at 1,000 frames/sec together with numerical modeling. The spattering from the toilet surface during urination was analyzed quantitatively with CFD-ACE+ using a free-surface model and a two-fluid mixture model. When the surface tension at the toilet surface is weak, many urine droplets bounce after collision even when gravity is taken into account. The turbulence generated by changes in the angle and velocity of the urine stream and the resulting variation of the collision behavior at the toilet surface were modeled numerically.
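
The abstract above ties droplet rebound to the surface tension (wettability) of the treated surface. A common first-order indicator of whether an impacting droplet spreads or rebounds is the Weber number, which compares inertia to surface tension; the sketch below only evaluates that dimensionless group with assumed property values and is not the CFD-ACE+ free-surface model used in the paper.

```python
# Generic droplet-impact check: Weber number We = rho * v^2 * d / sigma
# compares inertial forces to surface tension. More hydrophobic surfaces
# favor rebound at a given We. All values here are illustrative
# assumptions, not parameters from the paper.

def weber_number(density, velocity, diameter, surface_tension):
    return density * velocity ** 2 * diameter / surface_tension

rho = 1000.0        # kg/m^3, water-like droplet
v = 2.0             # m/s impact speed
d = 2.0e-3          # m droplet diameter
sigma = 0.072       # N/m surface tension of water

we = weber_number(rho, v, d, sigma)
print(f"We = {we:.1f}")   # ~111 for these values
```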

DEM Extraction from KOMPSAT-1 EOC Stereo Images and Accuracy Assessment (KOMPSAT-1 EOC입체 영상을 이용한 DEM생성과 정확도 검증)

  • 임용조;김태정;김준식
    • Korean Journal of Remote Sensing
    • /
    • v.18 no.2
    • /
    • pp.81-90
    • /
    • 2002
  • We carried out an accuracy assessment of DEM extraction from KOMPSAT-1 EOC stereo images over Daejeon and Nonsan in Korea. DEM generation is divided into two parts: camera modeling and stereo matching. We used Orun & Natarajan's (1994) model and Gupta & Hartley's (1997) model in the camera modeling step and checked whether these models are applicable to EOC stereo pairs. For stereo matching, we used an algorithm developed in-house for SPOT images and showed that it also works with EOC images. Using these algorithms, DEMs were successfully generated from EOC images. A comparison of a DEM from EOC images with a DEM from SPOT images showed that EOC can be used for high-accuracy DEM generation.
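
Both camera models mentioned above relate known ground control points to their image coordinates before stereo matching can proceed. The sketch below shows the generic direct linear transform (DLT) step of estimating a 3x4 projection matrix from such correspondences; it is a standard building block offered for orientation only, not the Orun & Natarajan (1994) or Gupta & Hartley (1997) pushbroom formulation applied to the EOC sensor.

```python
import numpy as np

def estimate_projection_matrix(world_pts, image_pts):
    """Estimate a 3x4 projection matrix P from >= 6 point correspondences
    using the direct linear transform (least squares on the homogeneous
    system A p = 0)."""
    assert len(world_pts) >= 6 and len(world_pts) == len(image_pts)
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    A = np.asarray(rows, dtype=float)
    # The solution is the right singular vector with the smallest
    # singular value of A.
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 4)

def project(P, X, Y, Z):
    """Project a world point with the estimated matrix."""
    u, v, w = P @ np.array([X, Y, Z, 1.0])
    return u / w, v / w
```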

A Study on the Real-Time Vision Control Method for Manipulator's position Control in the Uncertain Circumstance (불확실한 환경에서 매니퓰레이터 위치제어를 위한 실시간 비젼제어기법에 관한 연구)

  • Jang, W.-S.;Kim, K.-S.;Shin, K.-S.;Joo, C.;Yoon, H.-K.
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.16 no.12
    • /
    • pp.87-98
    • /
    • 1999
  • This study concentrates on the development of a real-time estimation model and vision control method, together with experimental tests. The proposed method permits a kind of adaptability not otherwise available, in that the relationship between the camera-space location of manipulable visual cues and the vector of manipulator joint coordinates is estimated in real time. This is done with an estimation model that generalizes the known manipulator kinematics to accommodate unknown relative camera position and orientation as well as uncertainty of the manipulator. This vision control method is robust and reliable, overcoming the difficulties of conventional approaches such as precise calibration of the vision sensor, exact kinematic modeling of the manipulator, and correct knowledge of the position and orientation of the CCD camera with respect to the manipulator base. Finally, evidence of the ability of the real-time vision control method for manipulator position control is provided by performing thin-rod placement in space with a two-cue test model, which is completed without prior knowledge of camera or manipulator positions. This feature opens the door to a range of manipulation applications, including a mobile manipulator with stationary cameras tracking and providing information for control of the manipulator.

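The key point of the abstract above is that the map from manipulator joint coordinates to camera-space cue locations is estimated online instead of being calibrated in advance. A minimal, generic way to do such online estimation is recursive least squares over a parameterized model; the linear-in-parameters feature vector below is an assumption for illustration and is far simpler than the estimation model developed in the paper.

```python
import numpy as np

class RecursiveLeastSquares:
    """Online estimate of theta in y = phi(x) @ theta."""
    def __init__(self, n_params, forgetting=0.98):
        self.theta = np.zeros(n_params)
        self.P = np.eye(n_params) * 1e3    # large initial covariance
        self.lam = forgetting

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)
        self.theta += k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.theta

# Example: predict one camera-space coordinate of a visual cue from two
# joint angles (q1, q2) with an assumed affine-plus-trig feature vector.
def features(q1, q2):
    return [1.0, np.cos(q1), np.sin(q1), np.cos(q1 + q2), np.sin(q1 + q2)]

rls = RecursiveLeastSquares(n_params=5)
# In a control loop, each iteration observes the joint angles and the
# cue's measured pixel coordinate, then refines the mapping:
# rls.update(features(q1, q2), u_pixel)
```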

A Visual Servo Algorithm for Underwater Docking of an Autonomous Underwater Vehicle (AUV) (자율무인잠수정의 수중 도킹을 위한 비쥬얼 서보 제어 알고리즘)

  • 이판묵;전봉환;이종무
    • Journal of Ocean Engineering and Technology
    • /
    • v.17 no.1
    • /
    • pp.1-7
    • /
    • 2003
  • Autonomous underwater vehicles (AUVs) are unmanned underwater vessels used to investigate sea environments in oceanographic studies. Docking systems are required to increase the capability of AUVs, to recharge their batteries, and to transmit data in real time for specific underwater tasks, such as repeated jobs at the sea bed. This paper presents a visual servo control system used to dock an AUV into an underwater station. A camera mounted at the nose center of the AUV is used to guide the AUV into the dock. To create the visual servo control system, this paper derives an optical flow model of the camera, in which the projected motions on the image plane are described in terms of the rotational and translational velocities of the AUV. The optical flow equation of the camera is combined with the AUV's equation of motion to derive a state equation for the visual servo AUV. Further, this paper proposes a discrete-time MIMO controller that minimizes a cost function. The control inputs of the AUV are generated automatically from the projected target position on the CCD plane of the camera and from the AUV's motion. To demonstrate the effectiveness of the modeling and the control law of the visual servo AUV, simulations of docking the AUV to a target station are performed with the 6-DOF nonlinear equations of the REMUS AUV and a CCD camera.
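
The optical flow model referred to above relates the image-plane motion of the projected target to the translational and rotational velocities of the vehicle. For a single normalized image point, that relation is the classical 2x6 interaction matrix of visual servoing; the sketch below shows this generic form with assumed values, not the specific state-space model derived for the REMUS AUV in the paper.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classical 2x6 interaction matrix L for a normalized image point
    (x, y) at depth Z, such that [x_dot, y_dot]^T = L @ [v; omega],
    where v and omega are the camera-frame translational and angular
    velocities."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,     -(1 + x * x),  y],
        [0.0,      -1.0 / Z, y / Z, 1 + y * y, -x * y,       -x],
    ])

# Image-plane velocity of a docking target seen at (x, y) and depth Z
# while the camera surges forward and yaws slowly (assumed values).
L = interaction_matrix(x=0.1, y=-0.05, Z=4.0)
velocity_twist = np.array([0.5, 0.0, 0.0, 0.0, 0.0, 0.1])  # [vx..wz]
image_velocity = L @ velocity_twist
```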

Noncontact 3-dimensional measurement using He-Ne laser and CCD camera (He-Ne 레이저와 CCD 카메라를 이용한 비접촉 3차원 측정)

  • Kim, Bong-chae;Jeon, Byung-cheol;Kim, Jae-do
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.21 no.11
    • /
    • pp.1862-1870
    • /
    • 1997
  • A fast and precise technique for measuring the 3-dimensional coordinates of an object is proposed. Three-dimensional measurements of an object are essential in design and inspection. Using the developed system, a surface model of a complex shape can be constructed. Three-dimensional world coordinates are projected onto the camera plane by the perspective transformation, which plays an important role in this measurement system. Two measuring methods are proposed depending on the shape of the object: rotation of the object, or translation of the measuring unit. The measuring speed, which depends on the image processing time, is 200 points per second. Measurement resolution is examined with respect to two parameters among others: the angle between the laser beam plane and the camera, and the distance between the camera and the object. These experiments showed that the measurement resolution ranges from 0.3 mm to 1.0 mm. The constructed surface model can be used in manufacturing tools such as a rapid prototyping machine.
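
The measurement principle described above is laser triangulation: the camera ray through an illuminated pixel is intersected with the known He-Ne laser plane, and the resolution depends on the laser-camera angle and the object distance. The sketch below performs that intersection under a pinhole model with assumed calibration values; it is illustrative rather than the authors' exact perspective-transformation formulation.

```python
import numpy as np

def pixel_to_ray(u, v, fx, fy, cx, cy):
    """Back-project a pixel into a unit ray in the camera frame."""
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return d / np.linalg.norm(d)

def intersect_laser_plane(ray, plane_n, plane_d):
    """Intersect a camera ray through the origin with the laser plane
    n . X = d, returning the 3D point in the camera frame."""
    t = plane_d / (plane_n @ ray)
    return t * ray

# Assumed calibration: focal lengths, principal point, and a laser
# plane tilted 30 degrees relative to the optical axis.
fx = fy = 800.0
cx, cy = 320.0, 240.0
plane_n = np.array([np.sin(np.radians(30)), 0.0, np.cos(np.radians(30))])
plane_d = 0.25   # metres

ray = pixel_to_ray(u=350.0, v=240.0, fx=fx, fy=fy, cx=cx, cy=cy)
point_3d = intersect_laser_plane(ray, plane_n, plane_d)
```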

3D Environment Perception using Stereo Infrared Light Sources and a Camera (스테레오 적외선 조명 및 단일카메라를 이용한 3차원 환경인지)

  • Lee, Soo-Yong;Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.5
    • /
    • pp.519-524
    • /
    • 2009
  • This paper describes a new sensor system for 3D environment perception using stereo structured infrared light sources and a camera. Environment and obstacle sensing is the key issue for mobile robot localization and navigation. Laser scanners and infrared scanners cover 180° and are accurate but too expensive. Those sensors use rotating light beams, so the range measurements are constrained to a plane. 3D measurements are much more useful in many ways for obstacle detection, map building, and localization. Stereo vision is a very common way of obtaining depth information about a 3D environment. However, it requires that correspondences be clearly identified, and it also depends heavily on the lighting conditions of the environment. Instead of a stereo camera, a monocular camera and two projected infrared light sources are used in order to reduce the effects of ambient light while obtaining a 3D depth map. Modeling of the projected light pattern enables precise estimation of the range. Two successive captures of the image, with left and right infrared light projection respectively, provide several benefits, including a wider area of depth measurement, higher spatial resolution, and visibility perception.
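
A central point of the abstract above is suppressing ambient light by taking successive captures with the left and right infrared sources switched on in turn. A minimal sketch of such a differencing step is shown below; the explicit "sources off" reference frame and the hypothetical capture functions are assumptions for illustration, not the authors' exact acquisition scheme.

```python
import numpy as np

def isolate_pattern(frame_on, frame_off):
    """Subtract an ambient-only frame from a frame taken with one IR
    source on, keeping only the projected pattern (8-bit images)."""
    diff = frame_on.astype(np.int16) - frame_off.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)

# Two successive captures, one per IR source, give two pattern images
# whose modeled geometry yields a wider, denser depth map than a single
# source. The capture_* functions below are hypothetical placeholders.
# pattern_left  = isolate_pattern(capture_with_left_ir(),  capture_ambient())
# pattern_right = isolate_pattern(capture_with_right_ir(), capture_ambient())
```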

3D Head Modeling using Depth Sensor

  • Song, Eungyeol;Choi, Jaesung;Jeon, Taejae;Lee, Sangyoun
    • Journal of International Society for Simulation Surgery
    • /
    • v.2 no.1
    • /
    • pp.13-16
    • /
    • 2015
  • Purpose: We conducted a study on the reconstruction of the head's shape in 3D using a ToF depth sensor. A time-of-flight (ToF) camera is a range imaging camera system that resolves distance based on the known speed of light, measuring the time of flight of a light signal between the camera and the subject for each point of the image. This is the safest way of measuring the head shape of plagiocephaly patients in 3D. The texture, appearance, and size of the head were reconstructed from the measured data, and the SDF method was used for a precise reconstruction. Materials and Methods: To generate a precise model, the mesh was generated using marching cubes and the SDF. Results: The ground truth was determined by measuring 10 experiment participants three times each, and the corresponding region of the 3D model created in this experiment was measured as well. Measurements of the actual head circumference and of the reconstructed model were made according to the layer 3 standard, and measurement errors were calculated. We obtained accurate results with an average error of 0.9 cm, standard deviation of 0.9, minimum of 0.2, and maximum of 1.4. Conclusion: The suggested method was able to complete the 3D model while minimizing errors, and the model is very effective in terms of quantitative and objective evaluation. However, the measurement range somewhat lacks the 3D information needed to manufacture protective helmets, as measurements were made only according to the layer 3 standard. The measurement range will therefore need to be widened, by scanning the entire head circumference, to facilitate production of more precise and effective protective helmets in the future.
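
The reconstruction pipeline above fuses ToF depth measurements into a signed distance field (SDF) and extracts a mesh with marching cubes. The sketch below shows the usual weighted truncated-SDF update for a single voxel; the truncation distance and weight cap are assumed values rather than the paper's settings.

```python
import numpy as np

def tsdf_update(tsdf, weight, sdf_observed, trunc=0.02, max_weight=64.0):
    """Fuse one observed signed distance into a voxel's running TSDF
    value using the usual weighted running average."""
    d = np.clip(sdf_observed / trunc, -1.0, 1.0)   # truncate and normalize
    new_weight = min(weight + 1.0, max_weight)
    new_tsdf = (tsdf * weight + d) / (weight + 1.0)
    return new_tsdf, new_weight

# Example: a voxel observed slightly in front of the measured surface
# in three consecutive depth frames.
tsdf, w = 0.0, 0.0
for observed in (0.012, 0.010, 0.011):       # metres from the surface
    tsdf, w = tsdf_update(tsdf, w, observed)
print(tsdf, w)   # converges toward ~0.55 with weight 3
```

The zero level set of the fused voxel grid is then turned into a triangle mesh with a marching cubes implementation, e.g. skimage.measure.marching_cubes.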

A Water Model Study on Molten Metal Flow in Die Cavity of Die Casting (다이캐스트 주물의 금형공동내에서 탕류에 관한 수모델적 연구)

  • Kim, Myung-Jae;Choi, Hee-Ho;Cho, Nam-Don
    • Journal of Korea Foundry Society
    • /
    • v.14 no.6
    • /
    • pp.576-589
    • /
    • 1994
  • Water modeling experiments and computer simulations for predicting defects in die castings are very important for producing high-quality castings at lower cost. The relation between variations of the air vent system and the characteristics of the fluid flow in the die cavity is studied using water modeling tests, which give guidance for reasonable design of the die cavity, the vent arrangement, and the gating system. For the water modeling tests, injection is done using water containing NaCl. Flow behavior in the cavities is visualized with a high-speed camera and a video tape recorder, and the local filling time is measured with electrode sensors. Special attention is paid to the configuration of the die cavity. Results simulated by computer are examined and compared with the results of the water modeling experiments; there are close correlations between the two.

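Water modeling, as used above, rests on dynamic similarity: the kinematic viscosity of water at room temperature is of the same order as that of common molten casting alloys, so flow patterns observed in the water model are representative when the Reynolds number is matched. The sketch below merely evaluates that similarity check with assumed, order-of-magnitude property values; it is background to the technique, not a calculation from the paper.

```python
def reynolds_number(velocity, length, kinematic_viscosity):
    """Re = v * L / nu."""
    return velocity * length / kinematic_viscosity

# Assumed, order-of-magnitude property values.
nu_water = 1.0e-6        # m^2/s, water at ~20 degC
nu_molten_al = 5.0e-7    # m^2/s, rough literature value for molten Al

gate_velocity = 30.0     # m/s, typical die-casting gate speed (assumed)
gate_thickness = 2.0e-3  # m

print(reynolds_number(gate_velocity, gate_thickness, nu_water))      # ~6.0e4
print(reynolds_number(gate_velocity, gate_thickness, nu_molten_al))  # ~1.2e5
```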

City-Scale Modeling for Street Navigation

  • Huang, Fay;Klette, Reinhard
    • Journal of information and communication convergence engineering
    • /
    • v.10 no.4
    • /
    • pp.411-419
    • /
    • 2012
  • This paper proposes a semi-automatic image-based approach for 3-dimensional (3D) modeling of buildings along streets. Image-based urban 3D modeling techniques are typically based on the use of aerial and ground-level images. The aerial image of the relevant area is extracted from publicly available sources in Google Maps by stitching together different patches of the map. Panoramic images are common for ground-level recording because they have advantages for 3D modeling. A panoramic video recorder is used in the proposed approach for recording sequences of ground-level spherical panoramic images. The proposed approach has two advantages. First, the detected camera trajectories are more accurate and stable (compared to methods using multi-view planar images only) due to the use of spherical panoramic images. Second, the texture of a building facade is extracted from a single panoramic image, so there is no need to deal with the color-blending problems that typically occur when using overlapping textures.
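
Extracting a facade texture from a single spherical panorama, as described above, amounts to resampling the equirectangular image along the rays of a virtual perspective camera aimed at the facade. The sketch below shows such a resampling; the equirectangular layout and the virtual camera parameters are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def panorama_to_perspective(pano, yaw_deg, pitch_deg, fov_deg, out_w, out_h):
    """Resample a rectilinear view (e.g. a building facade) out of an
    equirectangular panorama `pano` of shape (H, W, 3)."""
    H, W = pano.shape[:2]
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2.0)

    # Pixel grid of the virtual perspective camera.
    u, v = np.meshgrid(np.arange(out_w), np.arange(out_h))
    x = (u - out_w / 2.0) / f
    y = (v - out_h / 2.0) / f
    rays = np.stack([x, y, np.ones_like(x)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Rotate the rays toward the facade (yaw about the vertical axis,
    # then pitch about the horizontal axis).
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    Ry = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch), np.cos(pitch)]])
    rays = rays @ (Ry @ Rx).T

    # Ray direction -> spherical angles -> equirectangular pixel.
    lon = np.arctan2(rays[..., 0], rays[..., 2])          # [-pi, pi]
    lat = np.arcsin(np.clip(rays[..., 1], -1.0, 1.0))     # [-pi/2, pi/2]
    px = ((lon / np.pi + 1.0) * 0.5 * (W - 1)).astype(int)
    py = ((lat / (np.pi / 2) + 1.0) * 0.5 * (H - 1)).astype(int)
    return pano[py, px]
```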