Title/Summary/Keyword: precise image

Study on Flood Prediction System Based on Radar Rainfall Data (레이더 강우자료에 의한 홍수 예보 시스템 연구)

  • Kim, Won-Il; Oh, Kyoung-Doo; Ahn, Won-Sik; Jun, Byong-Ho
    • Journal of Korea Water Resources Association, v.41 no.11, pp.1153-1162, 2008
  • The use of radar rainfall for hydrological appraisal has been a challenge because of limitations in raw data generation and the complex analysis needed for precise interpretation. In this study, RAIDOM (RAdar Image DigitalizatiOn Method) was developed to convert synthetic radar CAPPI (Constant Altitude Plan Position Indicator) image data from the Korea Meteorological Administration into digital format, in order to produce more practical and useful radar rainfall data. RAIDOM was used to examine a severe local rainstorm that occurred in July 2006, as well as two other events that caused heavy floods in the upper and middle parts of the Han River basin. A distributed model was developed based on the available radar rainfall data, and the simulated flood hydrographs were found to be consistent with observed values. The results show the potential of RAIDOM and the distributed model as tools for flood prediction, and these findings are expected to extend the usefulness of radar rainfall data in hydrological appraisal.
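
The abstract does not detail how RAIDOM maps CAPPI image pixels to rainfall rates, so the following is only a minimal sketch of one plausible digitization step, assuming a hypothetical colour legend; the `LEGEND` table, the tolerance `tol`, and the intensity classes are illustrative, not taken from the paper.

```python
import numpy as np

# Hypothetical CAPPI colour legend: legend RGB -> rainfall intensity (mm/h).
# The actual RAIDOM legend and intensity classes are not given in the abstract.
LEGEND = {
    (0, 0, 255): 1.0,     # light rain
    (0, 255, 0): 5.0,     # moderate rain
    (255, 255, 0): 20.0,  # heavy rain
    (255, 0, 0): 50.0,    # very heavy rain
}

def digitize_cappi(image_rgb: np.ndarray, tol: float = 30.0) -> np.ndarray:
    """Convert an RGB CAPPI image (H, W, 3) to a rainfall grid (H, W) in mm/h.

    Pixels whose colour is not within `tol` (Euclidean RGB distance) of any
    legend entry are treated as no-rain (0 mm/h).
    """
    h, w, _ = image_rgb.shape
    pixels = image_rgb.reshape(-1, 3).astype(float)
    best_dist = np.full(pixels.shape[0], np.inf)
    best_rate = np.zeros(pixels.shape[0])
    for rgb, rate in LEGEND.items():
        dist = np.linalg.norm(pixels - np.array(rgb, dtype=float), axis=1)
        closer = dist < best_dist
        best_dist[closer] = dist[closer]
        best_rate[closer] = rate
    best_rate[best_dist > tol] = 0.0   # unknown colours -> no rain
    return best_rate.reshape(h, w)
```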

Target Detection Using Texture Features and Neural Network in Infrared Images (적외선영상에서 질감 특징과 신경회로망을 이용한 표적탐지)

  • Sun, Sun-Gu
    • Journal of the Institute of Electronics Engineers of Korea SC, v.47 no.5, pp.62-68, 2010
  • This study identifies target locations with a low false-alarm rate in thermal infrared images obtained from natural environments. The proposed method differs from previous research in that it applies morphology filters to Gabor response images, rather than to an intensity image, in the initial detection stage, and it does not require precise extraction of a target silhouette to distinguish true targets from clutter. It comprises three distinct stages. First, morphological operations and adaptive thresholding are applied to the summation of four Gabor responses of an input image to find salient regions; the extracted regions can then be classified as targets or clutter. Second, local texture features are computed from the salient regions of the input image. Finally, the local texture features are compared with the training data to distinguish between true targets and clutter, using a three-layer multi-layer perceptron as the classifier. The performance of the proposed method is demonstrated on natural infrared images, so it can be applied to real automatic target detection systems.
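
As a rough illustration of the initial detection stage described above (sum of four Gabor responses, morphological filtering, adaptive thresholding), the sketch below uses OpenCV; the kernel sizes, Gabor parameters, and thresholding constants are assumptions, not values from the paper.

```python
import cv2
import numpy as np

def detect_salient_regions(ir_image: np.ndarray) -> np.ndarray:
    """Return a binary mask of candidate target regions in a grayscale IR image."""
    gray = ir_image.astype(np.float32)

    # Sum the responses of four Gabor filters at 0, 45, 90 and 135 degrees.
    summed = np.zeros_like(gray)
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5)
        summed += cv2.filter2D(gray, cv2.CV_32F, kernel)

    # Morphological opening suppresses small, isolated responses (clutter).
    summed = cv2.normalize(summed, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    opened = cv2.morphologyEx(summed, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # Adaptive thresholding keeps only locally salient regions.
    mask = cv2.adaptiveThreshold(opened, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 31, -5)
    return mask
```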

Comparison of the observer reliability of cranial anatomic landmarks based on cephalometric radiograph and three-dimensional computed tomography scans (삼차원 전산화단층촬영사진과 측모두부 방사선규격사진의 계측자에 따른 계측오차에 대한 비교분석)

  • Kim, Jae-Young; Lee, Dong-Keun; Lee, Sang-Han
    • Journal of the Korean Association of Oral and Maxillofacial Surgeons, v.36 no.4, pp.262-269, 2010
  • Introduction: Accurate diagnosis and treatment planning are very important for orthognathic surgery; a small error in diagnosis can cause postoperative functional and esthetic problems. Pre-existing two-dimensional (2-D) cephalogram analysis has a high likelihood of error due to its intrinsic and extrinsic problems. A cephalogram can also be inaccurate because of the limited anatomic points, superimposition of the image, and the considerable time and effort required. Recently, improvements in technology and the popularization of computed tomography (CT) have made 3-D computer-based cephalometric analysis available to patients, which complements traditional analysis in many ways. However, the results are affected by the experience and subjectivity of the investigator. Materials and Methods: The effects of the sources of human error in 2-D cephalogram analysis and 3-D CT cephalometric analysis were compared using the Simplant CMF program. Patients who had undergone CT and anteroposterior and lateral cephalograms from January 2008 to June 2009 were investigated. Results: 1. In the 3-D and 2-D images, 10 out of 93 variables (10.4%) and 11 out of 44 variables (25%), respectively, showed a significant difference. 2. Landmarks that showed a significant difference in the 2-D image were the points frequently superimposed anatomically. 3. The Go, Po, and Orb landmarks, which showed a significant difference in the 3-D images, were found to be artificial points defined for analysis in the 2-D image, and under their current definitions these points cannot be located reproducibly in the 3-D image. Conclusion: In general, 3-D CT images provide more precise identification of the traditional cephalometric landmarks. The greater variability of certain landmarks in the mediolateral direction is probably related to inadequate definition of those landmarks in the third dimension.

3D Building Reconstruction and Visualization by Clustering Airborne LiDAR Data and Roof Shape Analysis

  • Lee, Dong-Cheon; Jung, Hyung-Sup; Yom, Jae-Hong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.25 no.6_1, pp.507-516, 2007
  • Segmentation and organization of LiDAR (Light Detection and Ranging) data of the Earth's surface are difficult tasks because the captured data consist of irregularly distributed point clouds that lack semantic information: the data provide a huge number of spatial coordinates without topological or relational information among the points. This study introduces a LiDAR data segmentation technique that uses histograms of LiDAR height image data and roof-shape analysis for 3D reconstruction and visualization of buildings. One advantage of using LiDAR height image data is that no registration is required, because the LiDAR data are geo-referenced and ortho-projected; consequently, measurements on the image provide absolute reference coordinates. The LiDAR image allows measurement of the initial building boundaries, from which the locations of the side walls are estimated and planar surfaces representing approximate building footprints are formed. LiDAR points close to each side wall were grouped together, and least-squares planar surface fitting on the segmented point clouds was performed to determine the precise location of each wall of a building. Finally, roof shapes were analyzed from accumulated slopes along profiles of the roof top; however, simulated LiDAR data were used for the roof-shape analysis because buildings with various roof shapes do not exist in the test area. The proposed approach was tested on a heavily built-up urban residential area, and a 3D digital vector map produced by digitizing compiled aerial photographs was used to evaluate the accuracy of the results. The experimental results show the efficiency of the proposed methodology for 3D building reconstruction and large-scale digital mapping, especially in urban areas.
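
The wall-estimation step above amounts to fitting a plane to each segmented point cluster by least squares. Below is a generic sketch of such a fit, using an SVD-based total least-squares plane because it also handles near-vertical wall planes; it is not the paper's implementation.

```python
import numpy as np

def fit_plane(points: np.ndarray):
    """Least-squares plane fit to a segmented LiDAR point cluster.

    Fits n . (p - centroid) = 0 by taking the singular vector with the
    smallest singular value. `points` is an (N, 3) array of x, y, z
    coordinates grouped near one wall or roof facet; returns the unit
    normal, the centroid, and the RMS point-to-plane distance.
    """
    centroid = points.mean(axis=0)
    # Rows of vt are principal directions; the last one is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    distances = (points - centroid) @ normal
    rms = float(np.sqrt(np.mean(distances ** 2)))
    return normal, centroid, rms
```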

Automatic Estimation of Geometric Translations Between High-resolution Optical and SAR Images (고해상도 광학영상과 SAR 영상 간 자동 변위량 추정)

  • Han, You Kyung; Byun, Young Gi; Kim, Yong Il
    • Journal of Korean Society for Geospatial Information Science, v.20 no.3, pp.41-48, 2012
  • Using multi-sensor or multi-temporal high-resolution satellite images together is essential for efficient applications in the remote sensing field. The purpose of this paper is to estimate the geometric translation between high-resolution optical and SAR images automatically. Geometric and radiometric pre-processing steps were performed so that the similarity between the optical and SAR images could be calculated with the Mutual Information method. For computational efficiency, coarsest-level pyramid images of each sensor, constructed by the Gaussian pyramid method, were used to estimate the initial translation in the x and y directions. The precise translation was then estimated by applying this method from the coarsest pyramid level down to the original image. Even though only translation between the optical and SAR images was considered, the proposed method showed an RMSE lower than 5 m in all study sites.
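
To make the coarse-to-fine idea concrete, here is a minimal sketch of mutual-information maximisation over a simple image pyramid; the pyramid construction by plain 2x subsampling, the +/- 8-pixel search window, the 64 histogram bins, and the wrap-around shifting via np.roll are simplifying assumptions, not the paper's procedure.

```python
import numpy as np

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 64) -> float:
    """Mutual information between two equally sized grayscale images."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def estimate_translation(optical: np.ndarray, sar: np.ndarray,
                         levels: int = 3, search: int = 8):
    """Coarse-to-fine x/y translation estimate by maximising mutual information.

    Assumes the two images are co-sized. The shift is searched exhaustively
    within +/- `search` pixels at the coarsest level, then refined at each
    finer level after doubling the propagated shift.
    """
    pyr_opt, pyr_sar = [optical], [sar]
    for _ in range(levels):
        pyr_opt.append(pyr_opt[-1][::2, ::2])
        pyr_sar.append(pyr_sar[-1][::2, ::2])

    dx, dy = 0, 0
    for level in reversed(range(levels + 1)):
        o, s = pyr_opt[level], pyr_sar[level]
        dx, dy = dx * 2, dy * 2          # propagate shift to the finer level
        best, best_shift = -np.inf, (dx, dy)
        for sx in range(dx - search, dx + search + 1):
            for sy in range(dy - search, dy + search + 1):
                shifted = np.roll(np.roll(s, sx, axis=1), sy, axis=0)
                mi = mutual_information(o, shifted)
                if mi > best:
                    best, best_shift = mi, (sx, sy)
        dx, dy = best_shift
    return dx, dy
```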

A WWW Images Automatic Annotation Based On Multi-cues Integration (멀티-큐 통합을 기반으로 WWW 영상의 자동 주석)

  • Shin, Seong-Yoon; Moon, Hyung-Yoon; Rhee, Yang-Won
    • Journal of the Korea Society of Computer and Information, v.13 no.4, pp.79-86, 2008
  • With the rapid development of the Internet, embedded images in HTML web pages have become predominant. Because of their power to describe content and attract attention, images have become substantially important in web pages, and together they constitute a considerable database. Moreover, the semantic meaning of an image is well represented by its surrounding text and links. However, only a small minority of these images have precisely assigned keyphrases, and manually assigning keyphrases to existing images is very laborious; it is therefore highly desirable to automate the keyphrase extraction process. In this paper, we first introduce WWW image annotation methods based on low-level features, page tags, overall word frequency, and local word frequency. We then put forward our multi-cue integration method for image annotation and show through an experiment that it is superior to the other methods.
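
The abstract names the cues (page tags, overall word frequency, local word frequency) but not the integration formula, so the sketch below is a hypothetical weighted combination of such cues; the function name, the cue set, and the weights are illustrative only.

```python
import re
from collections import Counter

def score_keyphrases(surrounding_text: str, page_text: str,
                     alt_text: str, weights=(0.5, 0.3, 0.2)):
    """Rank candidate keyphrases for a web image by combining several cues.

    Combines local word frequency (text around the <img> tag), overall page
    word frequency, and words from tag attributes such as alt text, using a
    simple weighted sum of normalised frequencies.
    """
    def freq(text):
        words = re.findall(r"[a-z]+", text.lower())
        counts = Counter(words)
        total = sum(counts.values()) or 1
        return {w: c / total for w, c in counts.items()}

    local, overall, tags = freq(surrounding_text), freq(page_text), freq(alt_text)
    candidates = set(local) | set(overall) | set(tags)
    w_local, w_overall, w_tags = weights
    scores = {w: w_local * local.get(w, 0.0)
                 + w_overall * overall.get(w, 0.0)
                 + w_tags * tags.get(w, 0.0)
              for w in candidates}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```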

Omnidirectional Camera Motion Estimation Using Projected Contours (사영 컨투어를 이용한 전방향 카메라의 움직임 추정 방법)

  • Hwang, Yong-Ho; Lee, Jae-Man; Hong, Hyun-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP, v.44 no.5, pp.35-44, 2007
  • Since an omnidirectional camera system with a very large field of view can capture a great deal of information about the surrounding scene from only a few images, calibration and 3D reconstruction using omnidirectional images have been studied actively. Most line segments of man-made objects are projected to contours under the omnidirectional camera model, so corresponding contours across image sequences are useful for computing the camera transformations, including rotation and translation. This paper presents a novel two-step minimization method to estimate the extrinsic parameters of the camera from the corresponding contours. In the first step, coarse camera parameters are estimated by minimizing an angular error function between the epipolar planes and the back-projected vectors of each corresponding point. The final parameters are then computed by minimizing a distance error between the projected contours and the actual contours. Simulation results on synthetic and real images demonstrate that the algorithm achieves precise contour matching and camera motion estimation.
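
As an illustration of the first step only, the sketch below minimises an angular error between each epipolar plane and the corresponding back-projected ray; the rotation-vector-plus-translation parameterisation and the Nelder-Mead optimiser are assumptions made for the sketch, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def angular_error(params, rays1, rays2):
    """Sum of angular errors between epipolar planes and back-projected rays.

    `rays1` and `rays2` are (N, 3) unit vectors back-projected from
    corresponding points on the two omnidirectional images. `params` packs a
    rotation (3 Rodrigues parameters) and a translation direction (3 values,
    normalised inside).
    """
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    t = t / (np.linalg.norm(t) + 1e-12)
    total = 0.0
    for x1, x2 in zip(rays1, rays2):
        n = np.cross(t, R @ x1)              # epipolar plane normal
        n = n / (np.linalg.norm(n) + 1e-12)
        # Angle between the second ray and the epipolar plane.
        total += np.abs(np.arcsin(np.clip(np.dot(n, x2), -1.0, 1.0)))
    return total

def estimate_motion(rays1, rays2):
    """Coarse rotation/translation estimate by minimising the angular error."""
    x0 = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 1.0])  # identity rotation, +z direction
    res = minimize(angular_error, x0, args=(rays1, rays2), method="Nelder-Mead")
    return res.x
```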

Performance Analysis of Face Recognition by Face Image Resolutions Using CNN without Backpropagation and LDA (역전파가 제거된 CNN과 LDA를 이용한 얼굴 영상 해상도별 얼굴 인식률 분석)

  • Moon, Hae-Min; Park, Jin-Won; Pan, Sung Bum
    • Smart Media Journal, v.5 no.1, pp.24-29, 2016
  • To satisfy the needs of high-level intelligent surveillance systems, a system should be able to extract objects and classify them to obtain precise information about each object. A representative method for establishing a person's identity is face recognition, whose recognition rate changes with environmental factors such as illumination, background, and camera angle. In this paper, we analyze the robustness of face recognition as the camera-to-subject distance changes, through a variety of experiments conducted with real face images captured at distances of 1 m to 5 m. Face recognition based on Linear Discriminant Analysis (LDA) shows the best performance, an average of 75.4%, when a large number of face images per person is used for training, whereas face recognition based on a Convolutional Neural Network (CNN) shows the best performance, an average of 69.8%, when the number of face images per person is fewer than five. In addition, the recognition rate for low-resolution faces decreases rapidly when the face image is smaller than 15×15 pixels.
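
For the LDA side of the comparison, a common baseline is an LDA projection followed by nearest-neighbour matching; the scikit-learn sketch below follows that baseline, with the pipeline, the 1-NN matcher, and the flattening of aligned face crops being assumptions rather than the paper's exact setup.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def train_lda_face_recognizer(train_images: np.ndarray, labels: np.ndarray):
    """Train a simple LDA-based face recognizer.

    `train_images` is an (N, H, W) array of aligned grayscale face crops and
    `labels` an (N,) array of person IDs. Each image is flattened into a
    feature vector, projected by LDA, and matched with 1-nearest-neighbour.
    """
    X = train_images.reshape(len(train_images), -1).astype(float)
    model = make_pipeline(LinearDiscriminantAnalysis(),
                          KNeighborsClassifier(n_neighbors=1))
    model.fit(X, labels)
    return model

# Usage: predicted = model.predict(test_images.reshape(len(test_images), -1))
```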

Image Transfer Using Cellular Phones and Wireless Internet Service

  • Shin, Dong-Ah; Doo, Tae-Hoon; Kim, Hyo-Jun; Kim, Hyoung-Ihl
    • Journal of Korean Neurosurgical Society, v.39 no.6, pp.471-474, 2006
  • Objective: Neuroimaging data are of paramount importance in making a correct diagnosis. We evaluated the clinical usefulness of image transfer using cellular phones to facilitate neurological diagnosis and decision-making. Methods: Selected images from CT and MRI scans and plain films obtained from 50 neurosurgical patients were transferred by cellular phone. A cellular phone with a built-in 1,300,000-pixel digital camera was used to capture and send the images, and a cellular phone with a 262,000-color thin-film transistor liquid crystal display (TFT-LCD) was used to receive them. Communication between the two phones used the same wireless protocol and the same wireless internet service. We compared the concordance of diagnoses and treatment plans between house staff, who could review the full-scale original films, and a consultant, who could review only the transferred images; these findings were later analyzed by a third observer. Results: The mean time for a complete transfer was 2-3 minutes. The quality of all received images was good enough to make a precise diagnosis and to select treatment options. Transferred images were helpful in making the correct diagnosis and decision in 49/50 (98%) cases; the one discordant result was caused by improper selection of images by the house staff. Conclusion: The cellular phone system was useful for image transfer and delivery of patient information, leading to earlier diagnosis and initiation of treatment. This usefulness was due to the sufficient resolution of the built-in camera and the TFT-LCD, the user-friendly features of the devices, and their low cost.

The Cost Optimization Solution for Developing the Image Infra-Red (IIR) Missile Seeker Operated Under Various Environments (정밀 유도무기용 적외선 영상탐색기의 운용환경에 따른 성능대비 개발비용 최적화 연구)

  • Kim, Ho-Yong; Kang, Seok-Joong; Jhee, Ho-Jin
    • Journal of the Korea Institute of Information and Communication Engineering, v.23 no.4, pp.365-373, 2019
  • An Image Infra-Red (IIR) seeker is widely used in precision-guided munitions to provide intelligent and precise target detection with a high kill probability. However, it has been difficult to determine the performance-versus-cost trade-off because the seeker is expensive compared with the other units of the munition. In this paper, performance/cost evaluations are carried out to find the most cost-effective solution for developing IIR seekers. The relationships between the critical parameters and cost are investigated to determine the optimal point that delivers high performance at low cost. The presented approach is expected to serve as a guideline for selecting an appropriate IIR seeker for given operating conditions, and to be useful for estimating the cost effectiveness of precision-guided munitions at an early design stage.