• Title/Summary/Keyword: Automatic Correction

Automatic Text Extraction from News Video using Morphology and Text Shape (형태학과 문자의 모양을 이용한 뉴스 비디오에서의 자동 문자 추출)

  • Jang, In-Young;Ko, Byoung-Chul;Kim, Kil-Cheon;Byun, Hye-Ran
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.8 no.4
    • /
    • pp.479-488
    • /
    • 2002
  • In recent years, the amount of digital video in use has risen dramatically to keep pace with the growing use of the Internet, and consequently an automated method is needed for indexing digital video databases. Textual information appearing in a digital video, both superimposed text and embedded scene text, can be a crucial clue for video indexing. In this paper, a new method is presented to extract both superimposed and embedded scene text from a freeze-frame of news video. The algorithm is summarized in three steps. In the first step, a color image is converted into a gray-level image and contrast stretching is applied to enhance the contrast of the input image; a modified local adaptive thresholding is then applied to the contrast-stretched image. The second step consists of three processes: eliminating text-like components by applying erosion, dilation, and (OpenClose+CloseOpen)/2 morphological operations; maintaining text components using the (OpenClose+CloseOpen)/2 operation with a new Geo-correction method; and subtracting the two resulting images to further eliminate false-positive components. In the third, filtering step, the characteristics of each component are used, such as the ratio of the number of pixels in each candidate component to the number of its boundary pixels and the ratio of the minor to the major axis of each bounding box. Acceptable results have been obtained with the proposed method on 300 news images, with a recognition rate of 93.6%. The proposed method also shows good performance on various kinds of images when the size of the structuring element is adjusted.
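
A minimal sketch of the three-step pipeline described in this abstract, assuming OpenCV and NumPy. The structuring-element size, adaptive-threshold parameters, and component-filter thresholds are illustrative placeholders, and the paper's Geo-correction step is omitted; this is not the authors' implementation.

```python
# Sketch of morphology-based text-candidate extraction (illustrative parameters).
import cv2
import numpy as np

def extract_text_candidates(bgr_image, kernel_size=3):
    # Step 1: gray-level conversion, contrast stretching, local adaptive thresholding.
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    stretched = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)
    binary = cv2.adaptiveThreshold(stretched, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 31, 5)

    # Step 2: (OpenClose + CloseOpen) / 2 averages open-then-close with close-then-open.
    k = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    def open_close_avg(img):
        oc = cv2.morphologyEx(cv2.morphologyEx(img, cv2.MORPH_OPEN, k), cv2.MORPH_CLOSE, k)
        co = cv2.morphologyEx(cv2.morphologyEx(img, cv2.MORPH_CLOSE, k), cv2.MORPH_OPEN, k)
        return ((oc.astype(np.uint16) + co.astype(np.uint16)) // 2).astype(np.uint8)

    # One branch suppresses small text-like structure, the other preserves it;
    # their difference keeps mostly text candidates (Geo-correction not modeled here).
    text_suppressed = open_close_avg(cv2.dilate(cv2.erode(binary, k), k))
    text_preserved = open_close_avg(binary)
    diff = cv2.subtract(text_preserved, text_suppressed)

    # Step 3: filter components by area/boundary ratio and bounding-box axis ratio.
    n, labels, stats, _ = cv2.connectedComponentsWithStats((diff > 0).astype(np.uint8))
    mask = np.zeros_like(diff)
    for i in range(1, n):
        x, y, w, h, area = stats[i]
        boundary = 2 * (w + h)                 # coarse stand-in for boundary pixel count
        axis_ratio = min(w, h) / max(w, h)
        if area / boundary > 1.0 and axis_ratio > 0.05:   # illustrative thresholds
            mask[labels == i] = 255
    return mask
```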

Terrain Shadow Detection in Satellite Images of the Korean Peninsula Using a Hill-Shade Algorithm (음영기복 알고리즘을 활용한 한반도 촬영 위성영상에서의 지형그림자 탐지)

  • Hyeong-Gyu Kim;Joongbin Lim;Kyoung-Min Kim;Myoungsoo Won;Taejung Kim
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_1
    • /
    • pp.637-654
    • /
    • 2023
  • In recent years, the number of users has been increasing with the rapid development of Earth observation satellites. In response, the Committee on Earth Observation Satellites (CEOS) has been striving to provide user-friendly satellite images by introducing the concept of Analysis Ready Data (ARD) and defining its requirements as CEOS ARD for Land (CARD4L). In ARD, a mask called an Unusable Data Mask (UDM), which identifies pixels unusable for land analysis, should be provided with a satellite image. UDMs include clouds, cloud shadows, terrain shadows, etc. Terrain shadows occur in mountainous terrain with large relief, and these areas cause analysis errors because of their low radiation intensity. Previous research on terrain shadow detection focused on detecting terrain shadow pixels in order to correct them; however, that purpose is better served by terrain correction methods, so the purpose of terrain shadow detection needs to be expanded. In this study, to utilize CAS500-4 for forest and agriculture analysis, we extend the scope of terrain shadow detection to shaded areas. This paper aims to analyze the potential of terrain shadow detection for producing a terrain shadow mask over South and North Korea. To detect terrain shadows, we use a hill-shade algorithm that utilizes the position of the sun and surface derivatives such as slope and aspect. Using RapidEye images with a spatial resolution of 5 m and Sentinel-2 images with a spatial resolution of 10 m over the Korean Peninsula, the optimal threshold for shadow determination was found by comparison with the ground truth. Terrain shadow detection was then performed with this optimal threshold and the results were analyzed. Qualitatively, the detected masks were similar in shape to the ground truth overall. In addition, most of the F1 scores for the tested images were between 0.8 and 0.94. Based on these results, it was confirmed that automatic terrain shadow detection performs well throughout the Korean Peninsula.
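
A minimal sketch of hill-shade based terrain shadow masking as outlined in this abstract, assuming a DEM array, sun azimuth/elevation from image metadata, and an empirically chosen threshold. The gradient sign conventions and the threshold value are placeholders, not values from the paper.

```python
# Hill-shade illumination model and threshold-based shadow mask (illustrative).
import numpy as np

def hillshade(dem, sun_azimuth_deg, sun_elevation_deg, cellsize=10.0):
    # Surface derivatives from the DEM; rows are assumed to increase southward.
    dz_drow, dz_dcol = np.gradient(dem, cellsize)
    dz_dx, dz_dy = dz_dcol, -dz_drow
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(dz_dy, -dz_dx)

    zenith = np.deg2rad(90.0 - sun_elevation_deg)
    azimuth = np.deg2rad((360.0 - sun_azimuth_deg + 90.0) % 360.0)

    # Standard hill-shade formulation; values roughly in [-1, 1].
    return (np.cos(zenith) * np.cos(slope) +
            np.sin(zenith) * np.sin(slope) * np.cos(azimuth - aspect))

def terrain_shadow_mask(dem, sun_azimuth_deg, sun_elevation_deg, threshold=0.1):
    # Pixels whose modelled illumination falls below the threshold are flagged
    # as terrain shadow; the paper tunes this threshold against ground truth.
    return hillshade(dem, sun_azimuth_deg, sun_elevation_deg) < threshold
```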

Building Change Detection Methodology in Urban Area from Single Satellite Image (단일위성영상 기반 도심지 건물변화탐지 방안)

  • Seunghee Kim;Taejung Kim
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_4
    • /
    • pp.1097-1109
    • /
    • 2023
  • Urban areas undergo frequent small-scale changes to individual buildings. An existing urban building database therefore requires periodic updating to maintain its usability, but there are limits to collecting data on building changes over a wide urban area. In this study, we examine the possibility of detecting building changes and updating a building database using satellite images, which can capture a wide urban region in a single image. For this purpose, building areas in a satellite image are first extracted by projecting the 3D coordinates of building corners available in a building database onto the image. The building areas are then divided into roof and facade areas. By comparing the textures of the projected roof areas, building changes such as height changes or building removal can be detected. New height values are estimated by adjusting building heights until the projected roofs align with the actual roofs observed in the image. If a building is projected onto the image but no building is observed there, it corresponds to a demolished building. New buildings are identified by checking buildings in the original image whose roof and facade areas are not covered by any projection. Based on these results, the building database is updated in three categories: height update, building deletion, or new building creation. This method was tested with a KOMPSAT-3A image over Incheon Metropolitan City and the publicly available Incheon building database. Building change detection and database updating were carried out, and the updated building corners were then projected onto another KOMPSAT-3 image. The building areas projected with the updated building information agreed very well with the actual buildings in that image. Through this study, the possibility of semi-automatic building change detection and building database updating based on a single satellite image was confirmed. In the future, follow-up research is needed on techniques to increase the computational automation of the proposed method.
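
A minimal sketch of the roof-alignment idea described in this abstract: project building corners at candidate heights and keep the height whose projected roof patch best matches the image texture. The project_to_image() sensor model, sample_patch() helper, reference patch, and NCC similarity measure are hypothetical stand-ins, not the paper's actual implementation.

```python
# Height estimation by roof/texture alignment (hypothetical helpers, illustrative search range).
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation between two equally sized patches.
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def estimate_height(image, footprint_xy, base_height, reference_roof_patch,
                    project_to_image, sample_patch,
                    height_range=(-10.0, 10.0), step=0.5):
    """Adjust the building height until the projected roof aligns with the image.

    footprint_xy: (N, 2) ground coordinates of the roof corners from the database.
    project_to_image: hypothetical sensor model mapping (x, y, z) -> (col, row).
    sample_patch: hypothetical helper cropping a fixed-size patch inside the corners.
    """
    best_h, best_score = base_height, -1.0
    for dh in np.arange(height_range[0], height_range[1] + step, step):
        h = base_height + dh
        corners = [project_to_image(x, y, h) for x, y in footprint_xy]
        score = ncc(sample_patch(image, corners), reference_roof_patch)
        if score > best_score:
            best_h, best_score = h, score
    # A very low best_score suggests no matching roof, i.e. a removed building.
    return best_h, best_score
```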

Evaluation of the Usefulness of Exactrac in Image-guided Radiation Therapy for Head and Neck Cancer (두경부암의 영상유도방사선치료에서 ExacTrac의 유용성 평가)

  • Baek, Min Gyu;Kim, Min Woo;Ha, Se Min;Chae, Jong Pyo;Jo, Guang Sub;Lee, Sang Bong
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.32
    • /
    • pp.7-15
    • /
    • 2020
  • Purpose: In modern radiotherapy, several image guided radiation therapy (IGRT) methods are used to deliver accurate doses to tumor targets and normal organs, including cone beam computed tomography (CBCT) mounted on linear accelerators and other devices such as the ExacTrac system. Previous studies comparing the two systems either analyzed positional errors retrospectively using offline review or evaluated only yaw rotation in addition to the X, Y, and Z axes. In this study, with CBCT and ExacTrac used for 6 degree-of-freedom (DoF) online IGRT in a treatment center equipped with both systems, we evaluate the difference between the set-up correction values produced by each system, the time taken for patient set-up, and the imaging radiation dose of each device. Materials and Methods: Glass dosimeters and a Rando phantom were used to evaluate the differences in positional corrections, the imaging exposure dose, and the time taken from set-up to just before IGRT for 11 head and neck cancer patients treated from March to October 2017. Both CBCT and ExacTrac were used for IGRT in all patients. An average of 10 CBCT and ExacTrac image pairs was obtained per patient over the total treatment period, and the difference in 6D online automatic registration values between the two systems was calculated within the ROI setting. The region of interest in the image obtained from CBCT was fixed to the same anatomical structure as in the image obtained with ExacTrac. The differences in positional values along the six axes (translation group: SI, AP, LR; rotation group: pitch, roll, Rtn), the total time from patient set-up to just before IGRT, and the exposure dose measured with the Rando phantom were compared. Results: The set-up error in the phantom and patients was less than 1 mm in the translation group and less than 1.5° in the rotation group, and the RMS values of all axes except Rtn were less than 1 mm and 1°. The time taken to correct the set-up error was on average 256±47.6 s for IGRT using CBCT and 84±3.5 s for ExacTrac. Among the 7 measurement locations in the head and neck area, the imaging exposure dose per treatment at the oral mucosa was 2.468 mGy for CBCT and 0.066 mGy for ExacTrac, about 37 times higher for CBCT. Conclusion: Through 6D online automatic positioning with the CBCT and ExacTrac systems, the set-up error, including patient movement (random error) as well as the systematic error of the two systems, was found to be less than 1 mm and 1.02°. This error range is considered reasonable given that the PTV margin was 3 mm for the head and neck IMRT treatments in this study. However, considering the changes in the target and organs at risk caused by changes in patient weight during the treatment period, it is considered appropriate to use ExacTrac in combination with CBCT.
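
A small worked example of the summary statistics quoted in the Results above: the per-axis RMS of 6-DoF set-up differences and the CBCT/ExacTrac dose ratio at the oral mucosa. The sample residuals are illustrative, not the study data; only the two dose values are taken from the abstract.

```python
# RMS of per-fraction set-up differences and the reported dose ratio (illustrative data).
import numpy as np

def rms(values):
    # Root-mean-square of per-fraction set-up differences along one axis.
    v = np.asarray(values, dtype=float)
    return float(np.sqrt(np.mean(v ** 2)))

# Illustrative per-fraction differences (mm) along the SI axis for one patient.
si_differences_mm = [0.4, -0.6, 0.8, -0.3, 0.5]
print(f"SI RMS: {rms(si_differences_mm):.2f} mm")

# Dose ratio at the oral mucosa from the values reported in the abstract.
cbct_dose_mGy, exactrac_dose_mGy = 2.468, 0.066
print(f"CBCT / ExacTrac dose ratio: {cbct_dose_mGy / exactrac_dose_mGy:.1f}x")  # ~37x
```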