• Title/Summary/Keyword: 최소 평면 변환 (minimum plane transformation)

A Method of Evaluating the Spatial Difference between Two Numerical Surfaces (두 개의 수치 평면에 대한 공간적 차이의 측정 방법)

  • Lee Jung-Eun;Sadahiro Yukio
    • Journal of the Korean Geographical Society / v.41 no.2 s.113 / pp.212-226 / 2006
  • Surface data generally represent the continuous distribution of geographical or social phenomena over a region in urban analysis; examples include the distribution of temperature, regional population, and various distributions related to human activities. When spatial data are given in the form of a surface, surface comparison is needed to understand how a surface changes or how two surfaces are related. Previous approaches to surface comparison include visualization, quantitative methods, and qualitative methods; all of them, however, capture the difference between two surfaces only in a limited way and, in particular, cannot distinguish the spatial difference between them. To overcome this problem, this paper proposes a method of comparing two surfaces in terms of their spatial structure. The main concept comes from the earth-moving problem, and the method is named minimum surface transformation here: when one surface is transformed into another, the total surface volume moved in the process should be minimal. The quantitative and spatial differences between two surfaces are then evaluated by the total moved surface volume and by the distribution of the volume moved from each cell, respectively. The method is applied to hypothetical and actual data. The former shows that the method explains how two surfaces differ quantitatively and spatially. The latter shows that the total moved surface volume decreases over time, which matches the actual situation in which the rate of population change becomes smaller. As for the other measure of surface difference, the distribution of $X_{ij}$ describes the flow of surface volume in more detail than simply subtracting one surface from the other, by indicating the direction in which the population change occurs.
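
The following is a minimal sketch of the earth-moving idea behind the minimum surface transformation, written as a standard transportation problem between two small raster surfaces of equal total volume. It is not the authors' implementation: the objective here weights each flow by inter-cell distance, and the function name is hypothetical.

```python
# Minimal sketch: the earth-moving / transportation view of comparing two
# equal-volume surfaces. The decision variables X_ij are the volumes moved
# from cell i of surface A to cell j of surface B; the optimal X_ij gives
# the spatial pattern of change, and the optimal objective its overall size.
import numpy as np
from scipy.optimize import linprog

def minimum_surface_transformation(surface_a, surface_b):
    """Return (minimum transport cost, flow matrix X) between two equal-volume grids."""
    a = surface_a.ravel().astype(float)
    b = surface_b.ravel().astype(float)
    assert np.isclose(a.sum(), b.sum()), "surfaces must carry the same total volume"

    # Euclidean distance between cell centres is the cost of moving one unit of volume.
    rows, cols = surface_a.shape
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    cost = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)

    n = a.size
    A_eq = np.zeros((2 * n, n * n))
    for k in range(n):
        A_eq[k, k * n:(k + 1) * n] = 1.0   # sum_j X_kj = a_k (volume leaving cell k)
        A_eq[n + k, k::n] = 1.0            # sum_i X_ik = b_k (volume arriving at cell k)
    b_eq = np.concatenate([a, b])

    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.fun, res.x.reshape(n, n)

# toy example on a 1x2 grid: all volume shifts one cell to the right
moved, flow = minimum_surface_transformation(np.array([[2.0, 0.0]]),
                                             np.array([[0.0, 2.0]]))
print(moved)   # 2.0 volume units moved over a distance of one cell
```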

TLS (Total Least-Squares) within Gauss-Helmert Model: 3D Planar Fitting and Helmert Transformation of Geodetic Reference Frames (가우스-헬머트 모델 전최소제곱: 평면방정식과 측지좌표계 변환)

  • Bae, Tae-Suk;Hong, Chang-Ki;Lim, Soo-Hyeon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.4 / pp.315-324 / 2022
  • The conventional LESS (LEast-Squares Solution) is calculated under the assumption that there are no errors in the independent variables. However, the coordinates of a point, whether from traditional ground surveying (slant distances, horizontal and/or vertical angles) or from GNSS (Global Navigation Satellite System) positioning, cannot be determined independently, and their components are correlated with each other. Therefore, the TLS (Total Least-Squares) adjustment should be applied in all applications involving coordinates. Many approaches have been suggested to solve this problem, yielding equivalent solutions apart from some restrictions. In this study, we calculated the normal vector of the 3D plane determined by the trace of the VLBI targets using TLS within the GHM (Gauss-Helmert Model). Another numerical test was conducted for the estimation of the Helmert transformation parameters. Since the errors in the horizontal components are very small compared to the radius of the circle, the final estimates are almost identical; however, the estimated variance components are significantly reduced and show different characteristics depending on the target location. The Helmert transformation parameters are estimated more precisely than in the conventional LESS case. Furthermore, the residuals can be predicted on both reference frames with much smaller magnitude in an absolute sense.
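
As a simple illustration of the errors-in-all-variables idea (not the paper's full Gauss-Helmert adjustment, which additionally handles correlated observations and variance components), a 3D plane can be fitted in the total-least-squares sense with an SVD; the function name below is hypothetical.

```python
# Minimal TLS sketch: fit a plane n·x = d to 3D points whose x, y and z
# components all carry errors. The unit normal is the right singular vector
# belonging to the smallest singular value of the centred coordinate matrix.
import numpy as np

def tls_plane_fit(points):
    """points: (N, 3) array of 3D coordinates. Returns (unit normal n, offset d)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]                       # direction of least variance
    return normal, normal @ centroid      # plane: normal · x = d

# noisy points near the plane z = 0.1x + 0.2y + 3
rng = np.random.default_rng(0)
xy = rng.uniform(-5.0, 5.0, size=(200, 2))
z = 0.1 * xy[:, 0] + 0.2 * xy[:, 1] + 3.0 + rng.normal(0.0, 0.01, 200)
n, d = tls_plane_fit(np.column_stack([xy, z]))
print(n)   # unit normal, proportional to (0.1, 0.2, -1) up to sign
```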

A 3D Image Measurement Algorithm for the Distance Measurement to the Object on 3D Plane (평면상에 존재하는 물체의 거리계측을 위한 3차원 영상계측 알고리즘)

  • 김용준;서경호;김태효
    • Proceedings of the Korea Institute of Convergence Signal Processing / 2000.12a / pp.65-68 / 2000
  • In this paper, an algorithm is proposed for measuring the actual distance to an object lying on a plane using a camera system. To calibrate the measurement system, the relationship between the three-dimensional real-world coordinate system and the two-dimensional camera coordinate system is first analyzed, and the parameters of the camera coordinate system, including the camera parameters, are obtained. Assuming that the measured surface in three-dimensional space is a plane, the distance to the object is then extracted by applying the Newton-Raphson method to the plane equation and the coordinate-transformation equations to find the approximation corresponding to the minimum. In the actual measurement experiment, a calibration sheet serving as the reference object was placed on the road, the camera was mounted at the rear-view-mirror position of a passenger car, and images were acquired. Distances were measured at 1 m intervals from 4 m to 10 m and at 10 m intervals from 10 m to 30 m. The results showed an error of about 1.4 mm at 4 m and 3.5 m at 30 m, indicating that the error grows exponentially as the measured distance increases.
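
The sketch below is only a generic stand-in for the Newton-Raphson step mentioned above: it solves for the distance at which a point on an assumed ground plane reprojects onto a measured image row, using a hypothetical pinhole model (the focal length, camera height and principal point are made-up numbers, not the paper's calibration).

```python
# Generic Newton-Raphson root finder with a numerical derivative.
def newton_raphson(f, x0, eps=1e-9, max_iter=50):
    x = x0
    for _ in range(max_iter):
        h = 1e-6
        dfdx = (f(x + h) - f(x - h)) / (2.0 * h)   # central-difference derivative
        step = f(x) / dfdx
        x -= step
        if abs(step) < eps:
            break
    return x

# Stand-in pinhole model: a ground point at distance Z in front of a camera
# mounted at height 1.2 m maps to image row v = cy + f_px * height / Z.
def row_of_ground_point(Z, f_px=800.0, cy=240.0, cam_height=1.2):
    return cy + f_px * cam_height / Z

v_measured = 280.0   # hypothetical measured pixel row of the object's base
Z = newton_raphson(lambda Z: row_of_ground_point(Z) - v_measured, x0=5.0)
print(Z)             # ≈ 24 m for these stand-in numbers
```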

An Analysis on Face Recognition system of Housdorff Distance and Hough Transform (Housdorff Distance 와 Hough Transform을 적용한 얼굴인식시스템의 분석)

  • Cho, Meen-Hwan
    • Journal of the Korea Computer Industry Society / v.8 no.3 / pp.155-166 / 2007
  • In this paper, a captured face image is pre-processed and segmented, and features are extracted from a thinned image obtained by a differential operator and minutia delineation. Straight lines in slope-intercept form are transformed into the $r-\theta$ domain using the Hough Transform, whereas the Hausdorff-distance approach extracts the length, rotation, and displacement of the thinned line components as features. This research compares the Hough Transform and the Hausdorff distance as approaches to face recognition: the Hough Transform gives simpler and faster face-recognition processing than the Hausdorff distance, while the recognition accuracy of the Hausdorff-distance method is higher than that of the Hough Transform method.
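
For reference, a minimal r-theta Hough voting scheme of the kind the abstract refers to is sketched below (illustrative only; the paper applies it to thinned facial feature lines, and the function name is hypothetical).

```python
# Each edge pixel (x, y) votes for every line r = x*cos(theta) + y*sin(theta)
# passing through it; peaks in the accumulator correspond to detected lines.
import numpy as np

def hough_lines(edge_image, n_theta=180):
    h, w = edge_image.shape
    diag = int(np.ceil(np.hypot(h, w)))                 # largest possible |r|
    thetas = np.deg2rad(np.arange(n_theta))
    accumulator = np.zeros((2 * diag + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(edge_image)
    for x, y in zip(xs, ys):
        r = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        accumulator[r, np.arange(n_theta)] += 1         # one vote per theta bin
    return accumulator, thetas, diag

# a short horizontal line at y = 3 should peak near r = 3, theta = 90 degrees
img = np.zeros((10, 10), dtype=np.uint8)
img[3, 2:8] = 1
acc, thetas, offset = hough_lines(img)
r_idx, t_idx = np.unravel_index(acc.argmax(), acc.shape)
print(r_idx - offset, np.rad2deg(thetas[t_idx]))  # r ≈ 3, theta near 90 (exact bin depends on rounding)
```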

2D-3D Vessel Registration for Image-guided Surgery based on distance map (영상유도시술을 위한 거리지도기반 2D-3D 혈관영상 정합)

  • 송수민;최유주;김민정;김명희
    • Proceedings of the Korean Information Science Society Conference / 2004.04a / pp.913-915 / 2004
  • The 2D images provided during an intervention give real-time information on the state of the patient and the surgical instruments, but they make it difficult to grasp the lesion three-dimensionally and anatomically. Therefore, an image registered between the 3D image acquired before the procedure, which requires a long acquisition time, and the 2D image obtained during the procedure provides useful information for image-guided surgery. To this end, this paper extracts a vessel model from the volume image and projects it onto a plane. After extracting the vessel skeletons to be registered from the two 2D images, an initial registration is performed that takes the branching characteristics of the vessels into account. The vessel skeleton, matched in scale and initial position, is then repeatedly transformed geometrically until the distance between the skeletons becomes minimal, and the finally transformed skeleton is overlaid and visualized on the 2D image provided during the procedure. In this way, the paper aims to present a procedural road map that can reduce procedure time and improve the procedural success rate.
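
A heavily simplified sketch of distance-map-based registration is given below: the target skeleton is converted into a distance map, and the candidate transform of the source skeleton with the smallest mean distance-map value is kept. Only integer translations are searched here, as a stand-in for the paper's iterative geometric refinement; the function name is hypothetical.

```python
# Distance-map registration sketch: cost of a transform = mean distance from
# the transformed source skeleton points to the nearest target skeleton pixel.
import numpy as np
from scipy.ndimage import distance_transform_edt

def best_translation(source_pts, target_mask, search=5):
    """Try integer shifts within ±search pixels; return the lowest-cost one."""
    dist_map = distance_transform_edt(~target_mask)   # distance to nearest skeleton pixel
    best, best_cost = (0, 0), np.inf
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rows = np.clip(source_pts[:, 0] + dr, 0, dist_map.shape[0] - 1)
            cols = np.clip(source_pts[:, 1] + dc, 0, dist_map.shape[1] - 1)
            cost = dist_map[rows, cols].mean()
            if cost < best_cost:
                best, best_cost = (dr, dc), cost
    return best

# toy test: the source skeleton is the target skeleton shifted by (2, -3) pixels
target = np.zeros((64, 64), dtype=bool)
target[20:40, 30] = True
source = np.argwhere(target) + [2, -3]
print(best_translation(source, target))   # (-2, 3): the shift that realigns the skeletons
```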

The Development of Data Transformation Program for Establishing the Real-Time Database in Underground Utility (실시간 지하시설물 데이터베이스 구축을 위한 자료 변환 프로그램 개발)

  • 최석근;박경식;임인섭
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.16 no.2 / pp.159-168 / 1998
  • In the traditional method, establishing a database of as-built underground utility data frequently incurs losses of time and economy as well as problems of accuracy, because generating transverse- and vertical-section maps involves many steps and the recording of underground utilities is carried out inefficiently. The goals of this study are 1) to acquire digital information and data simultaneously and in real time in the field, and 2) to develop a computer program that generates transverse- and vertical-section maps from the plan data through data transformation. As a result of this study, the database of underground utilities can be established in a way that saves time, improves economy and accuracy, and minimizes the errors introduced when data are re-entered and acquired.

New lithography technology to fabricate arbitrary shapes of patterns in nanometer scale (나노미터 크기의 임의 형상을 제작하기 위한 새로운 리소그래피 기술)

  • 홍진수;김창교
    • Journal of the Korea Academia-Industrial cooperation Society / v.5 no.3 / pp.197-203 / 2004
  • New lithography techniques are employed for patterning arbitrary shapes at the nanometer scale. In photolithography, when electromagnetic waves such as UV or X-rays are incident on a mask patterned at the nanometer scale, diffraction is unavoidable and degrades the mask image printed on the wafer. A convex lens by itself is a well-known Fourier transformer: it can Fourier-transform the mask even when the pattern on the mask is very large compared with the wavelength of the electromagnetic wave. If a mask, modified according to the new technique described in this paper, is placed in front of the lens and illuminated with a laser beam, nanometer-size patterns are formed only on the plane called the Fourier-transform plane. The method presented here uses a quite simple setup and, in attainable minimum linewidth, is comparable with present and next-generation lithographies such as UV/EUV photolithography and electron projection lithography. This paper presents our theoretical work in the field of Fourier optics; in the near future we intend to verify it experimentally.
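
For reference, the standard Fourier-optics relation underlying the abstract (stated here in generic textbook form, not taken from the paper): with a mask of amplitude transmittance $t(x, y)$ placed a distance $d$ in front of a converging lens of focal length $f$ and illuminated by a unit-amplitude plane wave of wavelength $\lambda$, the field in the back focal plane is

$$U_f(u, v) = \frac{1}{i\lambda f}\,\exp\!\left[\frac{i\pi}{\lambda f}\left(1-\frac{d}{f}\right)\left(u^{2}+v^{2}\right)\right]\iint t(x, y)\,\exp\!\left[-\frac{i2\pi}{\lambda f}\left(xu+yv\right)\right]dx\,dy ,$$

so for $d = f$ the quadratic phase factor vanishes and the focal-plane amplitude is an exact scaled Fourier transform of the mask; coarse mask features therefore map to fine structure in that plane, which is the property the abstract exploits.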

Gaze Detection System by IR-LED based Camera (적외선 조명 카메라를 이용한 시선 위치 추적 시스템)

  • 박강령
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.4C / pp.494-504 / 2004
  • Research on gaze detection has advanced considerably and has many applications. Most previous studies rely only on image-processing algorithms, so they take much processing time and are subject to many constraints. In our work, we implement gaze detection with a computer vision system using a single IR-LED-based camera. To detect the gaze position, we locate facial features, which is performed effectively with the IR-LED-based camera and an SVM (Support Vector Machine). When a user gazes at a position on the monitor, we compute the 3D positions of those features based on 3D rotation and translation estimation and an affine transform. Finally, the gaze position determined by the facial movements is computed from the normal vector of the plane defined by the computed 3D feature positions. In addition, we use a trained neural network to detect the gaze position from the eye movements. Experimental results show that the facial and eye gaze position on the monitor can be obtained with an RMS error of about 4.2 cm between the computed and the real gaze positions.
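
As a minimal sketch of the final step described above (not the paper's full pipeline), the facial plane normal can be obtained directly from three estimated 3D feature positions; the feature coordinates below are hypothetical.

```python
# The unit normal of the plane through three 3D facial feature points gives
# the facial direction from which the gaze position on the monitor follows.
import numpy as np

def plane_normal(p1, p2, p3):
    """Unit normal of the plane through three 3D points."""
    n = np.cross(p2 - p1, p3 - p1)
    return n / np.linalg.norm(n)

# hypothetical eye-corner and mouth positions, all lying in the plane z = 0,
# so the facial normal (the gaze direction, up to sign) is the z axis
p_left_eye  = np.array([-3.0,  1.0, 0.0])
p_right_eye = np.array([ 3.0,  1.0, 0.0])
p_mouth     = np.array([ 0.0, -2.0, 0.0])
print(plane_normal(p_left_eye, p_right_eye, p_mouth))   # [0. 0. -1.] (or its negative)
```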

A Study on Three-dimensional Coordinates Analysis Using Mirror Images (거울영상을 이용한 3차원 좌표해석에 관한 연구)

  • 유복모;이현직;정영동;오창수
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.4 no.1 / pp.25-36 / 1986
  • At least three model pairs are normally necessary to determine the absolute coordinates of a three-dimensional object. This paper investigates a method of analyzing the three-dimensional coordinates of all sides of an object by photographing a single stereo model pair through mirrors. The objective is to improve the accuracy and efficiency of the mirror-image method by introducing an error-correction function for mirror distortion. Points projected onto the mirror are transformed onto the object plane through the mirror-plane equation. As a result, the error in the X coordinate is the largest, while the Y and Z coordinate errors are about 1 mm. Accuracy can be further improved by introducing correction functions for the left and right mirrors and correcting the mirror distortion.
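
A minimal sketch of the mirror-plane transformation mentioned above: a point observed as a mirror image is mapped back into object space by reflecting it across the mirror plane $n \cdot x = d$ (the plane and point below are made-up values).

```python
# Reflect a 3D point across the plane normal·x = d (a Householder reflection).
import numpy as np

def reflect_across_plane(point, normal, d):
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)               # unit normal of the mirror plane
    return point - 2.0 * (point @ n - d) * n

# mirror plane x = 2 (normal along x, d = 2): the mirror image of (5, 1, 0) is (-1, 1, 0)
print(reflect_across_plane(np.array([5.0, 1.0, 0.0]), [1.0, 0.0, 0.0], 2.0))
```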

Segmentation of Airborne LIDAR Data: From Points to Patches (항공 라이다 데이터의 분할: 점에서 패치로)

  • Lee Im-Pyeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.24 no.1 / pp.111-121 / 2006
  • Recently, many studies have applied airborne LIDAR data to the extraction of urban models. To model efficiently the man-made objects that are the main components of these urban models, it is important to extract planar patches automatically from the set of measured three-dimensional points. Although some research has been carried out on their automatic extraction, no published method is yet fully satisfactory in terms of the accuracy and completeness of the segmentation results and their computational efficiency. This study therefore aimed at developing an efficient approach to the automatic segmentation of planar patches from the three-dimensional points acquired by an airborne LIDAR system. The proposed method consists of establishing adjacency between the three-dimensional points, grouping small numbers of points into seed patches, and growing the seed patches into surface patches. The core features of this method are to improve the segmentation results by employing a variable threshold that is repeatedly updated through statistical analysis during the patch-growing process, and to achieve high computational efficiency using priority heaps and sequential least-squares adjustment. The proposed method was applied to real LIDAR data to evaluate its performance. Using the proposed method, LIDAR data composed of a huge number of three-dimensional points can be converted into a set of surface patches, which are a more explicit and robust description. This intermediate conversion can be used effectively to solve object-recognition problems such as building extraction.
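
A heavily simplified sketch of the seed-and-grow idea is given below; it omits the paper's adjacency graph, priority heaps and sequential least-squares updates, refits the plane from scratch on each pass, and uses hypothetical data and function names.

```python
# Seed-and-grow planar segmentation sketch: fit a plane to the seed points,
# absorb points whose plane distance is below a data-driven threshold, refit,
# and repeat until no more points are absorbed.
import numpy as np

def fit_plane(pts):
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    n = vt[-1]
    return n, n @ centroid                        # plane: n · x = d

def grow_patch(points, seed_idx, k_sigma=3.0, floor=0.05):
    members = list(seed_idx)
    candidates = [i for i in range(len(points)) if i not in members]
    changed = True
    while changed:
        changed = False
        n, d = fit_plane(points[members])
        resid = np.abs(points[members] @ n - d)
        threshold = max(k_sigma * resid.std(), floor)   # variable, statistics-based threshold
        for i in list(candidates):
            if abs(points[i] @ n - d) < threshold:
                members.append(i)
                candidates.remove(i)
                changed = True
    return members

# toy cloud: a roof-like plane z ≈ 0 plus two off-plane points
rng = np.random.default_rng(1)
plane_pts = np.column_stack([rng.uniform(0, 10, (50, 2)), rng.normal(0, 0.01, 50)])
cloud = np.vstack([plane_pts, [[5.0, 5.0, 2.0], [1.0, 8.0, -1.5]]])
print(len(grow_patch(cloud, seed_idx=[0, 1, 2, 3])))    # ≈ 50: the off-plane points are excluded
```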