• Title/Summary/Keyword: Camera Modeling

Search results: 334

Rigorous Modeling of the First Generation of the Reconnaissance Satellite Imagery

  • Shin, Sung-Woong; Schenk, Tony
    • Korean Journal of Remote Sensing / v.24 no.3 / pp.223-233 / 2008
  • In the mid-1990s, the U.S. government released images acquired by the first generation of photo reconnaissance satellite missions between 1960 and 1972. The Declassified Intelligence Satellite Photographs (DISP) from the Corona mission are of high quality, with an astounding ground resolution of about 2 m. The KH-4A panoramic camera system employed a scan angle of $70^{\circ}$ that produces film strips with a dimension of $55\;mm\;{\times}\;757\;mm$. Since GPS/INS did not exist at the time of data acquisition, the exterior orientation must be established in the traditional way, using control information and the interior orientation of the camera. Detailed information about the camera is not available, however. For reconstructing points in object space from DISP imagery to an accuracy comparable to its high resolution (a few meters), a precise camera model is essential. This paper is concerned with the derivation of a rigorous mathematical model for the KH-4A/B panoramic camera. The proposed model is compared with generic sensor models, such as affine transformations and rational functions. The paper concludes with experimental results concerning the precision of reconstructed points in object space. The rigorous mathematical model for the KH-4A panoramic camera system is based on extended collinearity equations, assuming that the satellite trajectory during one scan is smooth and the attitude remains unchanged. As a result, the collinearity equations express the perspective center as a function of the scan time. With the known satellite velocity, this translates into a shift along-track. Therefore, the exterior orientation contains seven parameters to be estimated. The reconstruction of object points can then be performed with the exterior orientation parameters, either by intersecting bundle rays with a known surface or by using the stereoscopic KH-4A arrangement with fore and aft cameras mounted at an angle of $30^{\circ}$.
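
  The extended collinearity model described in the abstract can be sketched as follows; this is a hedged reconstruction with illustrative symbol names, and the paper's exact parameterization may differ. With a constant attitude matrix $R = (r_{ij})$ during one scan and a perspective center that drifts along-track with the known satellite velocity $v$, an image coordinate observed at scan time $t$ satisfies

  $$x(t) = -f\,\frac{r_{11}\,(X - X_c(t)) + r_{21}\,(Y - Y_c(t)) + r_{31}\,(Z - Z_c(t))}{r_{13}\,(X - X_c(t)) + r_{23}\,(Y - Y_c(t)) + r_{33}\,(Z - Z_c(t))}, \qquad X_c(t) = X_c^{0} + v\,t,$$

  with an analogous expression for $y(t)$ and analogous time-dependent components $Y_c(t)$ and $Z_c(t)$. The six classical exterior orientation parameters plus the along-track shift yield the seven parameters mentioned in the abstract.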

Compensation for Machining Error Induced by Tool Deflection Using High-Speed Camera (고속카메라를 이용한 절삭공구변형의 보상에 관한 연구)

  • Bae, J.S.; Kim, G.H.; Yoon, G.S.; Seo, T.I.
    • Transactions of Materials Processing / v.16 no.1 s.91 / pp.15-19 / 2007
  • This paper presents an integrated machining error compensation method based on captured images of tool deflection shapes in flat end-milling processes. This approach allows us to avoid modeling machining characteristics (cutting forces, tool deflections, machining errors, etc.) and accumulating the calculation errors induced by chained simulations. For this, a high-speed camera captured images of the real deformed tool shapes while cutting under given machining conditions. Using image processing and a machining error model, it is possible to estimate tool deflection under the modeled cutting conditions and to compensate for machining errors using an iterative algorithm that corrects tool paths. The corrected tool path can effectively reduce machining errors in the flat end-milling process. Experiments were carried out to validate the approaches proposed in this paper. The proposed error compensation method can be effectively implemented in a real machining situation, producing much smaller errors.
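
  The iterative path-correction idea in the abstract can be sketched as follows. This is a toy illustration, not the paper's implementation: `machining_error` stands in for the camera-based error estimate, and the `compliance` constant is an assumption chosen only to make the loop converge.

  ```python
  def machining_error(commanded, compliance=0.3):
      # Stand-in for the image-based error model: deflection error
      # proportional to the commanded depth (placeholder, not the
      # paper's measured model).
      return [compliance * c for c in commanded]

  def correct_tool_path(nominal, tol=1e-4, max_iter=50):
      # Iteratively shift the commanded path so that the machined
      # surface (commanded path + deflection error) matches the
      # nominal geometry.
      path = list(nominal)
      for _ in range(max_iter):
          errors = machining_error(path)
          residual = [p + e - n for p, e, n in zip(path, errors, nominal)]
          if max(abs(r) for r in residual) < tol:
              break
          path = [p - r for p, r in zip(path, residual)]
      return path
  ```

  With a proportional error model the update is a contraction, so a handful of iterations suffices; in the paper the residual would instead come from new camera measurements on each pass.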

A Bump Mapping Method Using Camera Lens (카메라 렌즈를 이용한 범프 맵핑)

  • Koh, Wook
    • Journal of the Korea Computer Graphics Society / v.3 no.1 / pp.9-15 / 1997
  • Most rendering research has concentrated on two sub-problems: modeling the reflection of light sources, and calculating the direct and indirect illumination from light sources and other surfaces. As Kolb wrote in [1], the camera and lens model is an important key component of a rendering system. In this paper we describe a bump-mapping method using a camera lens, which allows another level of control in rendering. We developed an efficient framework to build customized bump-mapping lenses and combine them with other sequences of lens elements to create various effects, as shown in Figures 2-5.


Open Standard Based 3D Urban Visualization and Video Fusion

  • Enkhbaatar, Lkhagva; Kim, Seong-Sam; Sohn, Hong-Gyoo
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.28 no.4 / pp.403-411 / 2010
  • This research demonstrates 3D virtual visualization of an urban environment and video fusion for an effective damage prevention and surveillance system using open standards. We present a visualization and interaction simulation method to increase situational awareness and optimize environmental monitoring through CCTV video and a 3D virtual environment. A new camera prototype was designed based on the camera frustum view model to perspectively project recorded video onto the virtual 3D environment. The demonstration was developed with X3D, a royalty-free open standard and run-time architecture, which offers the ability to represent, control, and share 3D spatial information via internet browsers.
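
  The frustum-based projection underlying this kind of video fusion can be sketched with a minimal pinhole model: a 3D point in the virtual scene maps to pixel coordinates in the video frame, which is how the recorded video is draped onto the urban model. All values here are illustrative assumptions, not parameters from the paper, and the camera is simplified to an identity rotation.

  ```python
  def project(point, focal, center, cam_pos):
      # Pinhole projection: camera at cam_pos looking down +Z
      # (identity rotation assumed for brevity).
      x = point[0] - cam_pos[0]
      y = point[1] - cam_pos[1]
      z = point[2] - cam_pos[2]
      if z <= 0:
          return None  # behind the camera: outside the view frustum
      u = focal * x / z + center[0]
      v = focal * y / z + center[1]
      return (u, v)
  ```

  A real implementation would include the camera's rotation and lens distortion, and would sample the video frame at `(u, v)` to texture the corresponding 3D surface point.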

Stray Light Analysis of High Resolution Camera for a Low-Earth-Orbit Satellite

  • Park, Jun-Oh; Jang, Won-Kweon; Kim, Seong-Hui; Jang, Hong-Sul; Lee, Seung-Hoon
    • Journal of the Optical Society of Korea / v.15 no.1 / pp.52-55 / 2011
  • We discuss the effect of stray light on a high-precision camera in an LEO (Low Earth Orbit) satellite. The critical objects and illumination objects were sorted to identify the stray light sources in the optical system. Scatter modeling was applied to determine the noise effect on the surface of a detector, and the relative fluxes of signal and noise were also calculated. The stable range of reflectivity of the beam splitter was estimated for various scattering models.

Basic Research for Construction Indoor Digital Twin Construction (건설공사 실내 Digital Twin 구축을 위한 기초연구)

  • Kim, Young Hyun
    • Proceedings of the Korean Institute of Building Construction Conference / 2023.05a / pp.349-350 / 2023
  • In domestic construction, 3D modeling mainly targets outdoor construction sites, acquiring outdoor spatial information by operating UAVs and UGVs equipped with cameras. 3D modeling of construction sites tends to expand its scope indoors, along with the increasing demand for site monitoring and management through indoor spatial information. Indoors, it is impossible to shoot with a drone after the framework and outer walls of the building are completed, so it is necessary to collect indoor spatial information and perform 3D modeling using a 360 camera. The purpose of this study is limited to basic research to establish a process that can obtain simple, high-quality indoor 3D modeling results using indoor data collected from 360 cameras.


A 3D Face Modeling Method Using Region Segmentation and Multiple light beams (지역 분할과 다중 라이트 빔을 이용한 3차원 얼굴 형상 모델링 기법)

  • Lee, Yo-Han; Cho, Joo-Hyun; Song, Tai-Kyong
    • Journal of the Institute of Electronics Engineers of Korea CI / v.38 no.6 / pp.70-81 / 2001
  • This paper presents a 3D face modeling method using a CCD camera and a projector (LCD or slide projector). The camera faces the human face and the projector casts white stripe patterns onto it. The 3D shape of the face is extracted from the spatial and temporal locations of the white stripe patterns over a series of image frames. The proposed method employs region segmentation and multi-beam techniques for efficient 3D modeling of the hair region and faster 3D scanning, respectively. Each image is segmented into face, hair, and shadow regions, which are processed independently to obtain the optimum results for each region. The multi-beam method, which uses a number of equally spaced stripe patterns, reduces the total number of image frames and consequently the overall data acquisition time. Light beam calibration is adopted for efficient light plane measurement, which is not influenced by the direction (vertical or horizontal) of the stripe patterns. Experimental results show that the proposed method provides favorable 3D face modeling results, including the hair region.
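
  The triangulation step common to stripe-based systems like this one can be sketched as a ray/plane intersection: a pixel defines a ray from the camera center, and the calibrated light plane of the stripe that illuminated that pixel is intersected with the ray to recover the 3D point. The camera and plane parameters below are illustrative assumptions, not values from the paper.

  ```python
  def pixel_ray(u, v, focal, cx, cy):
      # Direction of the viewing ray for a pinhole camera at the origin.
      return (u - cx, v - cy, focal)

  def intersect_plane(ray, plane):
      # plane = (a, b, c, d) for a*x + b*y + c*z + d = 0;
      # the ray is P(t) = t * ray, starting at the camera center.
      a, b, c, d = plane
      denom = a * ray[0] + b * ray[1] + c * ray[2]
      if abs(denom) < 1e-12:
          return None  # ray parallel to the light plane
      t = -d / denom
      return tuple(t * r for r in ray)
  ```

  In practice the light plane equation for each stripe comes from the light beam calibration the abstract mentions, and the segmentation decides which calibration to apply in the face, hair, and shadow regions.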


A Realistic Modeling and Rendering of Cloth Textures by Photometry (사진 측정에 의한 옷감의 질감 모델링 및 사실적 렌더링)

  • Kim, Min-Soo; Kim, Dae-Hyun; Kim, Myoung-Jun
    • Journal of KIISE: Computer Systems and Theory / v.35 no.2 / pp.84-93 / 2008
  • Modeling and rendering of cloth texture have been regarded as one of the most important factors in enhancing the realism of content in the digital contents industry. Two major approaches to realistically describing cloth texture have been developed: building an analytical reflectance model for the target cloth (and sometimes for the thread itself), and obtaining an overall reflectance model using optical equipment. However, producing a plausible analytic reflectance model that satisfies the many subtle characteristics of a cloth is not an easy task; moreover, fine-detailed modeling of the cloth pattern across the target clothes requires a huge amount of computation. The measured-reflectance approach needs expensive measurement equipment, and the data size becomes huge. Since it ultimately applies the reflectance model obtained at one point of a cloth across the whole visible area of the target clothes, it cannot properly reproduce the pattern of the clothes or the texture. To address these problems, this paper proposes a simple low-cost camera rig and a novel method for realistic modeling and rendering of cloth texture by analyzing photos taken with the proposed rig, which can reproduce even the texture pattern applied to the whole clothes, overcoming the one-point reflectance model.

3D Range Measurement using Infrared Light and a Camera (적외선 조명 및 단일카메라를 이용한 입체거리 센서의 개발)

  • Kim, In-Cheol; Lee, Soo-Yong
    • Journal of Institute of Control, Robotics and Systems / v.14 no.10 / pp.1005-1013 / 2008
  • This paper describes a new sensor system for 3D range measurement using structured infrared light. Environment and obstacle sensing is the key issue for mobile robot localization and navigation. Laser scanners and infrared scanners cover $180^{\circ}$ and are accurate but too expensive. Those sensors use rotating light beams, so the range measurements are constrained to a plane. 3D measurements are much more useful in many ways for obstacle detection, map building, and localization. Stereo vision is a very common way of obtaining the depth information of a 3D environment. However, it requires that the correspondence be clearly identified, and it also depends heavily on the lighting conditions of the environment. Instead of a stereo camera, a monocular camera and projected infrared light are used to reduce the effects of ambient light while obtaining a 3D depth map. Modeling of the projected light pattern enabled precise estimation of the range. Identification of the cells in the pattern is the key issue in the proposed method. Several methods of correctly identifying the cells are discussed and verified with experiments.
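
  The cell-identification problem the abstract highlights can be illustrated with a deliberately simple nearest-cell assignment: given a regular projected grid, a detected spot is attributed to the pattern cell whose nominal position it is closest to. This is only a toy baseline under an assumed grid spacing; the paper's identification methods are more involved precisely because this naive rule fails under large parallax.

  ```python
  def identify_cell(spot, origin, spacing):
      # Assign a detected image spot (x, y) to the nearest cell of a
      # regular grid with the given origin and spacing (assumed known).
      col = round((spot[0] - origin[0]) / spacing)
      row = round((spot[1] - origin[1]) / spacing)
      return (row, col)
  ```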

Automated texture mapping for 3D modeling of objects with complex shapes --- a case study of archaeological ruins

  • Fujiwara, Hidetomo; Nakagawa, Masafumi; Shibasaki, Ryosuke
    • Proceedings of the KSRS Conference / 2003.11a / pp.1177-1179 / 2003
  • Recently, ground-based laser profilers have been used for the acquisition of 3D spatial information of archaeological objects. However, it is very difficult to measure complicated objects because of the relatively low resolution. On the other hand, texture mapping can be a solution to complement the low resolution and to generate a 3D model with higher fidelity. But a huge cost is required for the construction of a textured 3D model, because much labor is demanded and the work depends on the editor's experience and skills. Moreover, the accuracy of the data can be lost during the editing work. In this research, using a laser profiler and a non-calibrated digital camera, a method is proposed for the automatic generation of a 3D model by integrating these data. First, region segmentation is applied to the laser range data to extract geometric features of the object; information such as the normal vectors of planes, distances from the sensor, and the sun direction is used in this processing. Next, image segmentation is also applied to the digital camera images, which include the same object. Then, geometric relations are determined by corresponding the features extracted from the laser range data and the digital camera images. By projecting the digital camera image onto the surface data reconstructed from the laser range image, the textured 3D model is generated automatically.
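
  The normal-based part of the region segmentation step can be sketched as grouping range points whose unit normals agree within a threshold. This is a simplification of the paper's multi-cue segmentation (which also uses sensor distance and sun direction), with an assumed cosine threshold.

  ```python
  def dot(a, b):
      return sum(x * y for x, y in zip(a, b))

  def segment_by_normal(normals, threshold=0.95):
      # Greedy grouping: each unit normal joins the first region whose
      # representative normal it matches (cosine similarity above the
      # threshold), otherwise it starts a new region.
      labels = []
      seeds = []  # one representative normal per region
      for n in normals:
          for i, s in enumerate(seeds):
              if dot(n, s) > threshold:
                  labels.append(i)
                  break
          else:
              seeds.append(n)
              labels.append(len(seeds) - 1)
      return labels
  ```

  A production segmentation would also enforce spatial connectivity; here the grouping is purely by orientation to keep the idea visible.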
