• Title/Abstract/Keyword: camera modeling

Search results: 333 articles; processing time 0.036 s

전 방향 카메라 영상에서 사람의 얼굴 위치검출 방법 (Head Position Detection Using Omnidirectional Camera)

  • 배광혁;박강령;김재희
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2007년도 하계종합학술대회 논문집
    • /
    • pp.283-284
    • /
    • 2007
  • This paper proposes a method for real-time segmentation of moving regions and detection of head position with a single omnidirectional camera. Moving regions are segmented using a mixture-of-Gaussians (MOG) background model together with a shadow detection method, and a circular constraint is proposed for detecting the head position.

  • PDF
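The mixture-of-Gaussians background model in the abstract above can be illustrated with a stripped-down, single-Gaussian-per-pixel variant (an illustrative sketch only; the paper uses a full MOG plus shadow detection, and all names here are hypothetical):

```python
import numpy as np

def update_background(frame, mean, var, alpha=0.05, k=2.5):
    """One update step of a per-pixel running-Gaussian background model.

    A pixel is flagged as moving (foreground) when it deviates from the
    background mean by more than k standard deviations; background
    statistics are updated with learning rate alpha wherever the pixel
    still matches the background.
    """
    frame = frame.astype(np.float64)
    foreground = np.abs(frame - mean) > k * np.sqrt(var)
    m = ~foreground                       # background-matching pixels
    mean[m] += alpha * (frame[m] - mean[m])
    var[m] += alpha * ((frame[m] - mean[m]) ** 2 - var[m])
    return foreground, mean, var
```

A full MOG keeps several Gaussians per pixel and replaces the least probable component when none matches, which lets it absorb multi-modal backgrounds such as flickering lights.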

수치모델과 고속 CCD 카메라를 이용한 세변기 표면 처리 효과 특성 해석 (Surface Treatment Effect on the Toilet by Numerical Modeling and High Speed CCD Camera)

  • 노지현;도우리;양원균;주정훈
    • 한국표면공학회지
    • /
    • Vol.44 No.1
    • /
    • pp.32-37
    • /
    • 2011
  • Numerical analysis is performed to investigate the effect of surface treatment of a toilet on its cleanliness. Plasma treatment producing a super-hydrophobic surface is expected to give the toilet seat cover a self-cleaning effect, preventing droplets carrying large quantities of bacteria from adhering during flushing after evacuation. In this study, the fluid flow in the toilet during flushing was analyzed with an ultrahigh-speed CCD camera at 1,000 frames/sec and with numerical modeling. The spattering phenomenon from the toilet surface during urination was analyzed quantitatively with CFD-ACE+ using a free-surface model and a two-fluid mixture model. If the surface tension at the toilet surface is weak, many urine droplets bounce after collision even when gravity is taken into account. The turbulence generated by changes in the angle and velocity of the urine stream, and the resulting variation of the collision behavior at the toilet surface, were modeled numerically.

KOMPSAT-1 EOC입체 영상을 이용한 DEM생성과 정확도 검증 (DEM Extraction from KOMPSAT-1 EOC Stereo Images and Accuracy Assessment)

  • 임용조;김태정;김준식
    • 대한원격탐사학회지
    • /
    • Vol.18 No.2
    • /
    • pp.81-90
    • /
    • 2002
  • In this paper, DEMs were generated from KOMPSAT-1 EOC stereo images of the Daejeon and Nonsan areas and their accuracy was assessed. The DEM generation process was discussed in two major stages, camera modeling and image matching. For camera modeling, the model proposed by Orun and Natarajan (1994) and the DLT model proposed by Gupta and Hartley (1997) were used, and the applicability of both modeling techniques to EOC stereo images was verified. In the image matching stage, we examined whether a matching algorithm developed for SPOT imagery could be applied to EOC stereo images. At each stage, the results obtained from EOC images were compared with those from SPOT images. In this experiment, the DEM generated from the KOMPSAT-1 EOC stereo images by camera modeling and image matching outperformed the DEM generated from SPOT stereo images.
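The DLT camera model mentioned in the abstract can be sketched as follows: each 3D-2D ground-control-point correspondence contributes two linear equations in the 12 entries of the projection matrix, which is recovered from the SVD null space (a generic DLT sketch, not the authors' exact formulation; at least six non-degenerate correspondences are assumed):

```python
import numpy as np

def dlt_camera(points3d, points2d):
    """Estimate a 3x4 projection matrix P by the Direct Linear
    Transform: stack two equations per correspondence and take the
    right singular vector of the smallest singular value."""
    A = []
    for (X, Y, Z), (u, v) in zip(points3d, points2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)          # P is defined up to scale

def project(P, X):
    """Project a 3D point through a homogeneous projection matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]
```

For real imagery the control-point coordinates are usually normalized before building A to improve conditioning; with exact synthetic data the plain version recovers P to machine precision.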

불확실한 환경에서 매니퓰레이터 위치제어를 위한 실시간 비젼제어기법에 관한 연구 (A Study on the Real-Time Vision Control Method for Manipulator's position Control in the Uncertain Circumstance)

  • 정완식;김경석;신광수;주철;김재확;윤현권
    • 한국정밀공학회지
    • /
    • Vol.16 No.12
    • /
    • pp.87-98
    • /
    • 1999
  • This study concentrates on the development of a real-time estimation model and vision control method, together with experimental tests. The proposed method permits a kind of adaptability not otherwise available, in that the relationship between the camera-space location of manipulable visual cues and the vector of manipulator joint coordinates is estimated in real time. This is done based on an estimation model that generalizes known manipulator kinematics to accommodate unknown relative camera position and orientation as well as uncertainty of the manipulator. This vision control method is robust and reliable, overcoming the difficulties of conventional approaches such as precise calibration of the vision sensor, exact kinematic modeling of the manipulator, and correct knowledge of the position and orientation of the CCD camera with respect to the manipulator base. Finally, evidence of the ability of the real-time vision control method for manipulator position control is provided by performing thin-rod placement in space with a two-cue test model, which is completed without prior knowledge of camera or manipulator positions. This feature opens the door to a range of manipulation applications, including a mobile manipulator with stationary cameras tracking and providing information for control of the manipulator.

  • PDF
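The core estimation step described above — fitting, in real time, the relationship between camera-space cue locations and joint coordinates — can be caricatured as a least-squares fit of an affine map (a hypothetical simplification; the paper's model generalizes the known manipulator kinematics rather than fitting a plain linear map):

```python
import numpy as np

def estimate_view_map(joints, cues):
    """Least-squares fit of an affine map from joint coordinates to
    camera-space cue locations: cues ~ [joints, 1] @ C."""
    Q = np.hstack([joints, np.ones((len(joints), 1))])
    C, *_ = np.linalg.lstsq(Q, cues, rcond=None)
    return C

def predict_cue(C, q):
    """Predict the camera-space cue location for joint vector q."""
    return np.append(q, 1.0) @ C
```

In the camera-space manipulation literature this fit is refreshed recursively as new cue observations arrive, which is what gives the method its adaptability to camera placement errors.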

자율무인잠수정의 수중 도킹을 위한 비쥬얼 서보 제어 알고리즘 (A Visual Servo Algorithm for Underwater Docking of an Autonomous Underwater Vehicle (AUV))

  • 이판묵;전봉환;이종무
    • 한국해양공학회지
    • /
    • Vol.17 No.1
    • /
    • pp.1-7
    • /
    • 2003
  • Autonomous underwater vehicles (AUVs) are unmanned underwater vessels used to investigate sea environments in the study of oceanography. Docking systems are required to increase the capability of AUVs, to recharge their batteries, and to transmit data in real time for specific underwater tasks, such as repeated jobs at the sea bed. This paper presents a visual servo control system used to dock an AUV into an underwater station. A camera mounted at the nose center of the AUV is used to guide the AUV into the dock. To create the visual servo control system, this paper derives an optical flow model of the camera, in which the projected motions on the image plane are described by the rotational and translational velocities of the AUV. The paper combines the optical flow equation of the camera with the AUV's equations of motion and derives a state equation for the visual servo AUV. Further, it proposes a discrete-time MIMO controller minimizing a cost function. The control inputs of the AUV are automatically generated from the projected target position on the CCD plane of the camera and from the AUV's motion. To demonstrate the effectiveness of the modeling and the control law of the visual servo AUV, docking simulations are performed with the 6-DOF nonlinear equations of the REMUS AUV and a CCD camera.
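The optical-flow model described above — image-plane motion expressed in terms of the vehicle's translational and rotational velocities — is, for a single point feature, the standard visual-servoing interaction matrix (a textbook sketch under a pinhole model with normalized coordinates; the paper's actual state equation couples this with the AUV's dynamics):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix L for a normalized image
    point (x, y) at depth Z, so that [x_dot, y_dot] = L @ [v; w],
    with v the translational and w the rotational camera velocity."""
    return np.array([
        [-1.0/Z, 0.0, x/Z, x*y, -(1.0 + x*x), y],
        [0.0, -1.0/Z, y/Z, 1.0 + y*y, -x*y, -x],
    ])
```

Stacking one such L per tracked dock feature gives the linear measurement model that a discrete-time MIMO controller can act on.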

He-Ne 레이저와 CCD 카메라를 이용한 비접촉 3차원 측정 (Noncontact 3-dimensional measurement using He-Ne laser and CCD camera)

  • 김봉채;전병철;김재도
    • 대한기계학회논문집A
    • /
    • Vol.21 No.11
    • /
    • pp.1862-1870
    • /
    • 1997
  • A fast and precise technique to measure the 3-dimensional coordinates of an object is proposed. Taking 3-dimensional measurements of an object is essential in design and inspection. Using the developed system, a surface model of a complex shape can be constructed. 3-dimensional world coordinates are projected onto the camera plane by the perspective transformation, which plays an important role in this measurement system. Two measuring methods are proposed according to the shape of the object: one rotates the object, and the other translates the measuring unit. The measuring speed, which depends on the image processing time, is 200 points per second. Measurement resolution was examined with respect to two parameters: the angle between the laser beam plane and the camera, and the distance between the camera and the object. These experiments showed that the measurement resolution ranges from 0.3 mm to 1.0 mm. The constructed surface model can be used in manufacturing tools such as rapid prototyping machines.
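The perspective geometry at the heart of such a system can be inverted for ranging once the laser sheet is calibrated: the 3D point is the intersection of the camera ray through a pixel with the laser plane (a generic ray-plane sketch with hypothetical variable names, not the authors' exact calibration):

```python
import numpy as np

def laser_point(xn, yn, n, d):
    """Intersect the camera ray through normalized pixel (xn, yn)
    with the laser plane n . X = d; the ray is t * (xn, yn, 1)."""
    ray = np.array([xn, yn, 1.0])
    denom = np.dot(n, ray)
    if abs(denom) < 1e-12:
        raise ValueError("ray is parallel to the laser plane")
    return (d / denom) * ray
```

The dependence of resolution on the beam-to-camera angle and the object distance mirrors how `denom` shrinks as the viewing ray grazes the laser plane, amplifying pixel quantization error.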

스테레오 적외선 조명 및 단일카메라를 이용한 3차원 환경인지 (3D Environment Perception using Stereo Infrared Light Sources and a Camera)

  • 이수용;송재복
    • 제어로봇시스템학회논문지
    • /
    • Vol.15 No.5
    • /
    • pp.519-524
    • /
    • 2009
  • This paper describes a new sensor system for 3D environment perception using stereo structured infrared light sources and a camera. Environment and obstacle sensing is the key issue for mobile robot localization and navigation. Laser scanners and infrared scanners cover 180° and are accurate but too expensive. Because those sensors use rotating light beams, their range measurements are constrained to a plane. 3D measurements are much more useful in many ways for obstacle detection, map building, and localization. Stereo vision is a very common way of obtaining depth information about a 3D environment; however, it requires that correspondences be clearly identified, and it also depends heavily on the lighting conditions of the environment. Instead of a stereo camera, a monocular camera and two projected infrared light sources are used in order to reduce the effects of ambient light while obtaining a 3D depth map. Modeling of the projected light pattern enables precise estimation of the range. Two successive captures of the image, with left and then right infrared light projection, provide several benefits, including a wider depth-measurement area, higher spatial resolution, and visibility perception.

3D Head Modeling using Depth Sensor

  • Song, Eungyeol;Choi, Jaesung;Jeon, Taejae;Lee, Sangyoun
    • Journal of International Society for Simulation Surgery
    • /
    • Vol.2 No.1
    • /
    • pp.13-16
    • /
    • 2015
  • Purpose: We conducted a study on the reconstruction of the head's shape in 3D using a ToF depth sensor. A time-of-flight (ToF) camera is a range imaging camera system that resolves distance based on the known speed of light, measuring the time of flight of a light signal between the camera and the subject for each point of the image. This is the safest way of measuring the head shape of plagiocephaly patients in 3D. The texture, appearance, and size of the head were reconstructed from the measured data, and we used the SDF method for a precise reconstruction. Materials and Methods: To generate a precise model, a mesh was generated using marching cubes and an SDF. Results: The ground truth was determined by measuring each of 10 experiment participants three times, and the 3D model created from the same region in this experiment was measured as well. Measurements of the actual head circumference and of the reconstructed model were made according to the layer 3 standard, and measurement errors were calculated. As a result, we obtained accurate results with an average error of 0.9 cm, standard deviation of 0.9, minimum of 0.2, and maximum of 1.4. Conclusion: The suggested method was able to complete the 3D model while minimizing errors. This model is very effective in terms of quantitative and objective evaluation. However, the measurement range somewhat lacks the 3D information needed for the manufacture of protective helmets, as measurements were made according to the layer 3 standard. Accordingly, the measurement range will need to be widened, by scanning the entire head circumference, to facilitate production of more precise and effective protective helmets in the future.
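The SDF-based reconstruction step can be sketched as the standard truncated-signed-distance fusion performed before marching cubes: each depth measurement updates voxels along the viewing ray with a weighted running average (a minimal stand-in with hypothetical names; the full pipeline additionally handles texture and a complete 3D voxel grid):

```python
import numpy as np

def fuse_tsdf(tsdf, weight, voxel_z, depth, trunc=0.05):
    """Fuse one depth measurement into a truncated signed distance
    field sampled at voxel depths voxel_z along a viewing ray.
    Positive values lie in front of the surface, negative behind;
    the zero crossing is the surface marching cubes would extract."""
    sdf = np.clip((depth - voxel_z) / trunc, -1.0, 1.0)
    new_weight = weight + 1.0
    tsdf = (tsdf * weight + sdf) / new_weight
    return tsdf, new_weight
```

Averaging many noisy ToF frames in this way is what lets the fused zero-crossing surface be more accurate than any single depth image.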

다이캐스트 주물의 금형공동내에서 탕류에 관한 수모델적 연구 (A Water Model Study on Molten Metal Flow in Die Cavity of Die Casting)

  • 김명제;최희호;조남돈
    • 한국주조공학회지
    • /
    • Vol.14 No.6
    • /
    • pp.576-589
    • /
    • 1994
  • Water modeling experiments and computer simulations for predicting defects in die castings are very important for producing high-quality castings at lower cost. The relation between variable air vent systems and the characteristics of the fluid flow in the die cavity is studied using water modeling tests, which provide guidance for reasonable design of the die cavity, vent arrangement, and gating system. In the water modeling tests, injection is performed using water containing NaCl. Flow behaviors in the cavities are visualized with a high-speed camera and a video tape recorder, and local filling times are measured with electrode sensors. Special attention is paid to the configuration of the die cavity. Computer-simulated results are examined and compared with the results of the water modeling experiments, and close correlations are found between the two.

  • PDF

City-Scale Modeling for Street Navigation

  • Huang, Fay;Klette, Reinhard
    • Journal of information and communication convergence engineering
    • /
    • Vol.10 No.4
    • /
    • pp.411-419
    • /
    • 2012
  • This paper proposes a semi-automatic image-based approach for 3-dimensional (3D) modeling of buildings along streets. Image-based urban 3D modeling techniques are typically based on the use of aerial and ground-level images. The aerial image of the relevant area is extracted from publicly available sources in Google Maps by stitching together different patches of the map. Panoramic images are common for ground-level recording because they have advantages for 3D modeling. A panoramic video recorder is used in the proposed approach for recording sequences of ground-level spherical panoramic images. The proposed approach has two advantages. First, the detected camera trajectories are more accurate and stable (compared to methods using only multi-view planar images) due to the use of spherical panoramic images. Second, the texture of a building facade is extracted from a single panoramic image, so there is no need to deal with the color blending problems that typically occur when using overlapping textures.