• Title/Abstract/Keyword: 3D Camera

Search results: 1,635

3차원 카메라와 수치표고모델 자료에 따른 기상청 일사관측소의 복사관측환경 분석 (An Analysis of Radiative Observation Environment for Korea Meteorological Administration (KMA) Solar Radiation Stations based on 3-Dimensional Camera and Digital Elevation Model (DEM))

  • 지준범;조일성;이규태;조지영
    • 대기
    • /
    • Vol. 29, No. 5
    • /
    • pp.537-550
    • /
    • 2019
  • To analyze the observation environment of the solar radiation stations operated by the Korea Meteorological Administration (KMA), we examined the skyline, Sky View Factor (SVF), and solar radiation affected by the surrounding topography and artificial structures, using a Digital Elevation Model (DEM), a 3D camera, and a solar radiation model. Solar energy shielding within 25 km of each station was analyzed using 10 m resolution DEM data, and the skyline elevation and SVF of the immediate surroundings were derived from images captured by the 3D camera. The solar radiation model was then used to assess the contribution of the environment to the observed solar radiation. Because the skyline elevation retrieved from the DEM differs from the actual environment, it was compared with the results obtained from the 3D camera. The skyline and SVF calculations showed that some stations are shielded by the surrounding environment at sunrise and sunset. For monthly accumulated solar radiation, the topographic effect captured by the 3D camera is more than 20 times larger than that of the DEM throughout the year. Because solar radiation is relatively low in winter, the shielding effect is largest in that season. For annually accumulated solar radiation, the difference in global solar radiation calculated using the 3D camera averaged 176.70 MJ (equivalent to about 7 days of radiation, assuming a daily accumulation of 26 MJ) and reached a maximum of 439.90 MJ (about 17.5 days).
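
The SVF referenced above has a standard isotropic-sky formulation, SVF = (1/2π) ∫ cos²θ(φ) dφ, where θ(φ) is the skyline elevation angle at azimuth φ. The sketch below is not the authors' code; it assumes a skyline profile sampled at equally spaced azimuths (for example, extracted from a 3D camera image) and shows how the SVF and a sunrise/sunset shielding check could be computed.

```python
import numpy as np

def sky_view_factor(horizon_elev_deg):
    """Isotropic-sky SVF from a skyline profile: the mean of cos^2(theta)
    over equally spaced azimuth samples covering 0..360 degrees."""
    theta = np.radians(np.asarray(horizon_elev_deg, dtype=float))
    return float(np.mean(np.cos(theta) ** 2))

def sun_is_shielded(sun_azimuth_deg, sun_elev_deg, horizon_elev_deg):
    """The direct beam is blocked when the sun sits below the local skyline."""
    azimuths = np.linspace(0.0, 360.0, len(horizon_elev_deg), endpoint=False)
    diff = np.abs((azimuths - sun_azimuth_deg + 180.0) % 360.0 - 180.0)
    return sun_elev_deg < horizon_elev_deg[int(np.argmin(diff))]

# Example: a flat horizon except for a 10-degree ridge toward the east.
profile = np.zeros(360)
profile[60:120] = 10.0
print(sky_view_factor(profile))             # slightly below 1.0
print(sun_is_shielded(90.0, 5.0, profile))  # True: sun behind the ridge
```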

Automated texture mapping for 3D modeling of objects with complex shapes - a case study of archaeological ruins

  • Fujiwara, Hidetomo;Nakagawa, Masafumi;Shibasaki, Ryosuke
    • Korean Society of Remote Sensing: Conference Proceedings
    • /
    • Proceedings of ACRS 2003 ISRS, Korean Society of Remote Sensing, 2003
    • /
    • pp.1177-1179
    • /
    • 2003
  • Recently, ground-based laser profilers have been used to acquire 3D spatial information of archaeological objects. However, it is very difficult to measure complicated objects because of their relatively low resolution. Texture mapping can compensate for the low resolution and generate a 3D model with higher fidelity, but constructing a textured 3D model is costly: it demands a great deal of labor, the work depends on the editor's experience and skill, and data accuracy can be lost during editing. In this research, a method is proposed for automatically generating a 3D model by integrating data from a laser profiler and a non-calibrated digital camera. First, region segmentation is applied to the laser range data to extract geometric features of the object, using information such as plane normal vectors, distances from the sensor, and the sun direction. Next, image segmentation is applied to the digital camera images of the same object. Geometric relations are then determined by matching the features extracted from the laser range data and the digital camera images. By projecting the digital camera image onto the surface reconstructed from the laser range image, a textured 3D model is generated automatically.
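
As a rough illustration of the projection step described above (not the authors' implementation), the sketch below assumes known camera intrinsics K and a world-to-camera pose (R, t), and shows how laser-range surface points could be projected into a camera image to sample texture colors; all names and values are illustrative.

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Pinhole projection of Nx3 world points to pixel coordinates.
    K: 3x3 intrinsics; R, t: world-to-camera rotation and translation."""
    cam = points_3d @ R.T + t        # world frame -> camera frame
    uvw = cam @ K.T                  # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide

def sample_texture(image, uv):
    """Nearest-neighbour colour lookup for projected surface points."""
    h, w = image.shape[:2]
    px = np.clip(np.rint(uv).astype(int), [0, 0], [w - 1, h - 1])
    return image[px[:, 1], px[:, 0]]

# Toy usage: identity pose, random surface points in front of the camera.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
pts = np.random.rand(100, 3) + [0.0, 0.0, 2.0]           # z > 0 (in view)
img = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
colors = sample_texture(img, project_points(pts, K, np.eye(3), np.zeros(3)))
```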

Multi-camera-based 3D Human Pose Estimation for Close-Proximity Human-robot Collaboration in Construction

  • Sarkar, Sajib;Jang, Youjin;Jeong, Inbae
    • International Conference Proceedings
    • /
    • The 9th International Conference on Construction Engineering and Project Management
    • /
    • pp.328-335
    • /
    • 2022
  • With the advance of robot capabilities and functionalities, construction robots assisting workers have been increasingly deployed on construction sites to improve safety, efficiency, and productivity. For close-proximity human-robot collaboration on construction sites, robots need to be aware of the context, especially construction workers' behavior, in real time to avoid collisions. To recognize human behavior, most previous studies obtained 3D human poses using a single camera or an RGB-depth (RGB-D) camera. However, single-camera detection suffers from occlusions, detection failures, and sensor malfunction, and an RGB-D camera may suffer from interference from lighting conditions and surface materials. To address these issues, this study proposes a novel method of 3D human pose estimation that extracts the 2D location of each joint from multiple images captured simultaneously from different viewpoints, fuses each joint's 2D locations, and estimates the 3D joint location. For higher accuracy, a probabilistic representation is used to extract the 2D joint locations, treating each joint location extracted from an image as a noisy partial observation. The 3D human pose is then estimated by fusing the probabilistic 2D joint locations to maximize the likelihood. The proposed method was evaluated in both simulation and laboratory settings, and the results demonstrated its accuracy and practical feasibility. This study contributes to ensuring human safety in close-proximity human-robot collaboration by providing a novel method of 3D human pose estimation.
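
The paper fuses probabilistic 2D joint locations by maximizing likelihood; as a simplified stand-in (not the authors' method), the sketch below implements confidence-weighted linear triangulation (DLT) of one joint from multiple calibrated views, where 2D detector scores are assumed to serve as the per-view weights.

```python
import numpy as np

def triangulate_joint(proj_mats, uv_obs, weights):
    """Confidence-weighted linear (DLT) triangulation of one joint.

    proj_mats: list of 3x4 projection matrices P_i = K_i [R_i | t_i]
    uv_obs:    list of (u, v) detections of the joint, one per view
    weights:   per-view confidences (e.g., 2D detector scores)
    Solves A X = 0 for the homogeneous 3D point via SVD.
    """
    rows = []
    for P, (u, v), w in zip(proj_mats, uv_obs, weights):
        rows.append(w * (u * P[2] - P[0]))   # each view contributes
        rows.append(w * (v * P[2] - P[1]))   # two linear constraints
    _, _, vt = np.linalg.svd(np.stack(rows))
    X = vt[-1]
    return X[:3] / X[3]   # dehomogenize to a 3D point
```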

하나의 카메라를 이용한 인터렉티브 3D 집적 영상 시스템 (Interactive 3D Integral Imaging System using Single Camera)

  • 신동학;김은수
    • 한국통신학회논문지
    • /
    • Vol. 33, No. 10C
    • /
    • pp.829-835
    • /
    • 2008
  • Recently, 3D integral imaging systems, well known as an autostereoscopic (glasses-free) 3D display method, have been actively studied. 3D integral imaging is a promising technology that presents continuous viewpoints, full parallax, and full-color images in space. In this paper, we propose a new type of interactive 3D integral imaging system using a single camera. In this system, a user interface can be implemented by simply adding one camera to a conventional 3D integral imaging display system. To demonstrate the feasibility of the proposed system, we implement an experimental setup and report basic experimental results. To the best of our knowledge, the proposed method is the first to add an interaction capability to a 3D integral imaging system.
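
The abstract does not detail the interaction algorithm; purely as an illustrative sketch, the following shows one common way a single extra camera could supply a user-interface signal: frame differencing tracks a moving hand, and its centroid could then drive the integral-imaging display. The camera index, threshold, and loop length are assumptions.

```python
import cv2

# Frame differencing on the extra user-facing camera (index 0 assumed);
# the centroid of the moving region acts as a crude "pointer" that an
# integral-imaging renderer could consume to update the 3D content.
cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
for _ in range(300):                         # run for a few seconds
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask)
    if m["m00"] > 0:                         # a moving region was found
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        print(f"pointer at ({cx:.0f}, {cy:.0f})")
    prev_gray = gray
cap.release()
```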

TSK 퍼지 시스템을 이용한 카메라 캘리브레이션 (Camera Calibration using the TSK fuzzy system)

  • 이희성;홍성준;오경세;김은태
    • Korean Institute of Intelligent Systems: Conference Proceedings
    • /
    • Proceedings of the 2006 Spring Conference of the Korean Fuzzy and Intelligent Systems Society, Vol. 16, No. 1
    • /
    • pp.56-58
    • /
    • 2006
  • Camera calibration in machine vision is the process of determining the intrinsic camera parameters and the three-dimensional (3D) position and orientation of the camera frame relative to a world coordinate system. The Takagi-Sugeno-Kang (TSK) fuzzy system is a widely used fuzzy model that can approximate any nonlinear function to arbitrary accuracy with only a small number of fuzzy rules, exhibiting both nonlinear behavior and a transparent structure. In this paper, we present a simple, novel technique for machine-vision camera calibration using a TSK fuzzy model. The proposed method divides the world into regions according to the camera view and uses clustered 3D geometric knowledge. The TSK fuzzy system is employed to estimate the camera parameters by combining partial information into complete 3D information. Experiments are performed to verify the proposed camera calibration method.
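
For readers unfamiliar with TSK inference, the sketch below shows a minimal first-order TSK fuzzy system with Gaussian memberships and linear rule consequents. It is a generic illustration, not the calibration model used in the paper, and the rule parameters are invented for the example.

```python
import numpy as np

def tsk_predict(x, centers, sigmas, coeffs):
    """First-order TSK inference for a scalar input x.
    Rule i: IF x is Gaussian(c_i, s_i) THEN y_i = a_i * x + b_i;
    the output is the firing-strength-weighted average of the y_i."""
    w = np.exp(-((x - centers) ** 2) / (2.0 * sigmas ** 2))
    y = coeffs[:, 0] * x + coeffs[:, 1]
    return float(np.sum(w * y) / np.sum(w))

# Three invented rules blending local linear models into one smooth map.
centers = np.array([-2.0, 0.0, 2.0])
sigmas = np.array([1.0, 1.0, 1.0])
coeffs = np.array([[-1.0, -1.5],    # left region:  y = -x - 1.5
                   [0.5, 0.0],      # middle:       y = 0.5 x
                   [-1.0, 1.5]])    # right region: y = -x + 1.5
print(tsk_predict(0.3, centers, sigmas, coeffs))
```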

Markerless camera pose estimation framework utilizing construction material with standardized specification

  • Harim Kim;Heejae Ahn;Sebeen Yoon;Taehoon Kim;Thomas H.-K. Kang;Young K. Ju;Minju Kim;Hunhee Cho
    • Computers and Concrete
    • /
    • Vol. 33, No. 5
    • /
    • pp.535-544
    • /
    • 2024
  • In the rapidly advancing landscape of computer vision (CV) technology, there is growing interest in its integration with the construction industry. Camera calibration is the process of deriving the intrinsic and extrinsic parameters that govern how 3D real-world coordinates are projected onto the 2D image plane; the intrinsic parameters are internal factors of the camera, while the extrinsic parameters are external factors such as the camera's position and rotation. Camera pose estimation, or extrinsic calibration, provides essential information for CV applications in construction, since it can support indoor navigation of construction robots and field monitoring by restoring depth information. Traditionally, camera pose estimation has relied on target objects such as markers or patterns. However, such marker- or pattern-based methods are often time-consuming because a target object must be installed for each estimation. As a solution, this study introduces a novel framework for camera pose estimation using standardized materials commonly found on construction sites, such as concrete forms. The proposed framework obtains 3D real-world coordinates by referring to construction materials with known specifications, extracts the corresponding 2D image-plane coordinates through keypoint detection, and derives the camera pose through the perspective-n-point (PnP) method, which computes the extrinsic parameters by matching 3D-2D coordinate pairs. This framework streamlines the extrinsic calibration process, potentially enhancing the efficiency of CV applications and data collection at construction sites, and holds promise for expediting various construction-related tasks by automating and simplifying the calibration procedure.
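
A minimal sketch of the PnP step the framework relies on, using OpenCV's solvePnP: the 3D coordinates here stand in for a construction material of known specification, but the 0.6 m x 1.2 m panel size, the intrinsics, and the pixel detections are all hypothetical.

```python
import numpy as np
import cv2

# Four corners of a form panel in its own frame (the z = 0 plane); the
# panel size, intrinsics, and keypoint pixels below are assumptions.
obj_pts = np.array([[0.0, 0.0, 0.0], [0.6, 0.0, 0.0],
                    [0.6, 1.2, 0.0], [0.0, 1.2, 0.0]])
img_pts = np.array([[318.0, 410.0], [505.0, 402.0],
                    [512.0, 130.0], [320.0, 122.0]])   # keypoint detector output
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                        # camera intrinsics
ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, None)
R, _ = cv2.Rodrigues(rvec)          # rotation vector -> rotation matrix
cam_pos = (-R.T @ tvec).ravel()     # camera position in the panel's frame
print(ok, cam_pos)
```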

액티브 포커싱을 이용한 3차원 물체의 깊이 계측 (Active Focusing Technique for Extracting Depth Information)

  • 이용수;박종훈;최종수
    • 전자공학회논문지B
    • /
    • Vol. 29B, No. 2
    • /
    • pp.40-49
    • /
    • 1992
  • In this paper, a new approach is proposed for measuring the depth of 3-D objects from several 2-D images, using the linear movement of the lens location in a camera and the focal distance at each location. Sharply focused edges are extracted from images obtained by moving the camera lens, that is, by varying the distance between the lens and the image plane within the range allowed by the lens system. The depth information of the edges is then obtained from the lens location. In our method, an accurate and complicated camera control system and a special algorithm for tracing the exact focus point are not necessary, and the method has the advantage that the depth of all objects in a scene can be measured by only the linear movement of the camera lens. The accuracy of the extracted depth information is approximately 5% for object distances between 1 and 2 m. These results show the potential of the method for depth measurement of 3-D objects.
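
A compact depth-from-focus sketch in the spirit of the method (not the authors' code): given a stack of frames captured at known lens-to-sensor distances, each pixel's depth is estimated from the frame in which it is sharpest, by inverting the thin-lens equation 1/f = 1/u + 1/v. The sharpness measure and inputs are assumptions.

```python
import numpy as np
import cv2

def depth_from_focus(stack, image_dists, focal_len):
    """stack: grayscale frames captured at lens-to-sensor distances
    image_dists (same units as focal_len). Each pixel's depth comes from
    the frame where it is sharpest, via the thin-lens equation
    1/f = 1/u + 1/v  =>  u = f v / (v - f)."""
    sharp = np.stack([cv2.Laplacian(f.astype(np.float64), cv2.CV_64F) ** 2
                      for f in stack])         # per-pixel focus measure
    best = np.argmax(sharp, axis=0)            # index of sharpest frame
    v = np.asarray(image_dists, dtype=float)[best]
    return focal_len * v / (v - focal_len)     # object distance u per pixel
```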

가상 평면 기법을 이용한 3차원 기하 정보 획득 알고리즘 (The 3D Geometric Information Acquisition Algorithm using Virtual Plane Method)

  • 박상범;이찬호;오종규;이상훈;한영준;한헌수
    • 제어로봇시스템학회논문지
    • /
    • Vol. 15, No. 11
    • /
    • pp.1080-1087
    • /
    • 2009
  • This paper presents an algorithm for acquiring 3D geometric information using a virtual plane method. Measuring 3D information on a plane is straightforward because no z-axis value is involved. Since a plane can be defined by any three points in 3D space, the algorithm can construct a number of virtual planes from feature points on the target object. The geometric relations between the origin of each virtual plane and the origin of the target-object coordinates are expressed as known homogeneous matrices. With this idea, the algorithm reduces to a simple matrix formula involving only the unknown geometric relation between the origin of the target object and the origin of the camera coordinates; it is therefore faster and simpler than other methods. The proposed method uses a standard pinhole camera model and a perspective projection matrix defined by the geometric relation between the coordinate systems. In the final part of this paper, we demonstrate the technique in a variety of applications, including measurements of industrial parts and known patch images.
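
A minimal sketch of the virtual-plane idea, under the assumption that the plane frame is anchored at one of three 3D feature points: the resulting homogeneous matrix plays the role of the known relation between the virtual-plane origin and the object coordinates.

```python
import numpy as np

def virtual_plane_frame(p0, p1, p2):
    """Homogeneous transform of a 'virtual plane' frame: origin at p0,
    z-axis normal to the plane through p0, p1, p2, so that any point on
    the plane has z = 0 in this frame."""
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    x = (p1 - p0) / np.linalg.norm(p1 - p0)   # in-plane x-axis
    n = np.cross(p1 - p0, p2 - p0)
    z = n / np.linalg.norm(n)                 # plane normal
    y = np.cross(z, x)                        # completes a right-handed frame
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, p0
    return T   # maps plane coordinates to object coordinates

T = virtual_plane_frame([0, 0, 0], [1, 0, 1], [0, 1, 1])
print(T @ np.array([0.5, 0.5, 0.0, 1.0]))   # a point lying on the plane
```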

3차원 영상을 위한 다초점 방식 영상획득장치 (Multi-Focusing Image Capture System for 3D Stereo Image)

  • 함운철;권혁재;투멘자르갈 엔크바타르
    • 로봇학회논문지
    • /
    • Vol. 6, No. 2
    • /
    • pp.118-129
    • /
    • 2011
  • In this paper, we propose a new camera capturing and synthesizing algorithm that uses multi-captured left and right images for a more comfortable perception of 3D depth, together with a 3D image-capturing hardware system based on this algorithm. We also present a simple control algorithm for calibrating the capture system with zooming, based on a performance-index measure used as feedback for stabilizing the focusing control. We discuss the theoretical projection mapping, based on a pinhole camera model, under the assumption that the viewer sits 50 cm in front of the 3D LCD screen displaying the captured image. We divide each image into 9 segments, propose a method to find the optimal alignment and focusing based on alignment and sharpness measures, and synthesize the 9 optimized segment images by fusion to achieve the best perception of 3D depth.
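
To illustrate the 9-segment selection step (a sketch under assumed inputs, not the published algorithm), the code below splits each capture into a 3x3 grid, scores each segment's sharpness by the variance of the Laplacian, and reassembles a mosaic from the sharpest version of each segment.

```python
import numpy as np
import cv2

def split9(img):
    """Split an image into a 3x3 grid of segments."""
    h, w = img.shape[:2]
    return [img[i * h // 3:(i + 1) * h // 3, j * w // 3:(j + 1) * w // 3]
            for i in range(3) for j in range(3)]

def fuse_best_focus(captures):
    """Keep, for each of the 9 segments, the capture in which that
    segment is sharpest, then reassemble the mosaic."""
    seg_stacks = [split9(c) for c in captures]   # one segment list per capture
    chosen = []
    for k in range(9):
        scores = [cv2.Laplacian(s[k], cv2.CV_64F).var() for s in seg_stacks]
        chosen.append(seg_stacks[int(np.argmax(scores))][k])
    rows = [np.hstack(chosen[r * 3:(r + 1) * 3]) for r in range(3)]
    return np.vstack(rows)
```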

Development of a Camera Self-calibration Method for 10-parameter Mapping Function

  • Park, Sung-Min;Lee, Chang-je;Kong, Dae-Kyeong;Hwang, Kwang-il;Doh, Deog-Hee;Cho, Gyeong-Rae
    • 한국해양공학회지
    • /
    • Vol. 35, No. 3
    • /
    • pp.183-190
    • /
    • 2021
  • Tomographic particle image velocimetry (PIV) is a widely used method that measures a three-dimensional (3D) flow field by reconstructing camera images into voxel images. In 3D measurements, the setup and calibration of the camera's mapping function significantly affect the results. In this study, a camera self-calibration technique is applied to tomographic PIV to reduce errors arising from the mapping function. The measured 3D particles are superimposed on the image to create a disparity map, and self-calibration is performed by feeding the disparity-map error back into the particle center values. Synthetic vortex-ring images were generated and the developed algorithm applied. The optimal result is obtained by applying self-calibration once when the center error is less than 1 pixel, and two to three times when it exceeds 1 pixel; the maximum recovery ratio is 96%. Additional self-calibration beyond this did not improve the results. The algorithm was also evaluated in an actual rotational flow experiment, where the optimal result was likewise obtained when self-calibration was applied once, consistent with the synthetic-image results. The developed algorithm is therefore expected to improve the performance of 3D flow measurements.
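
A toy sketch of the disparity-feedback loop described above (illustrative only, with the correction reduced to a mean 2D shift): the residual between projected particle centers and their detected image positions is applied as a correction, and iteration stops once the residual falls below 1 pixel, mirroring the 1-pixel threshold reported in the abstract.

```python
import numpy as np

def self_calibrate_shift(projected_uv, detected_uv, max_iter=3):
    """Apply the mean disparity (detected - projected) as a correction to
    the mapping function; stop when the residual drops below 1 pixel."""
    shift = np.zeros(2)
    for _ in range(max_iter):
        disparity = detected_uv - (projected_uv + shift)
        mean_disp = disparity.mean(axis=0)
        shift += mean_disp                      # feed the error back
        if np.linalg.norm(mean_disp) < 1.0:     # < 1 px: one pass suffices
            break
    return shift
```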