• Title/Summary/Keyword: Image Resolution


An Introduction to the Study of the Outlook on Highest Ruling Entity in Daesoonjinrohoe (I) - Focusing on Descriptions for Highest Ruling Entity and It's Meanings - (대순진리회 상제관 연구 서설 (I) - 최고신에 대한 표현들과 그 의미들을 중심으로 -)

  • Cha, Seon-keun
    • Journal of the Daesoon Academy of Sciences
    • /
    • v.21
    • /
    • pp.99-156
    • /
    • 2013
  • This paper indicates research tendencies in the study of faith in Daesoonjinrihoe and their controversial points, and considers the outlook on Sangje after defining it as the theological understanding and explanation of Gu-Cheon-Sang-Je (the Highest Ruling Entity, the object of devotion in Daesoonjinrihoe). As a first introduction to this work, the various descriptions of Sangje are arranged and their meanings analyzed. In brief: first, the name Gu-Cheon-Eung-Won-Nweh-Seong-Bo-Hwa-Cheon-Jon expresses the fact that the authority of Sangje (the Supreme Entity) is revealed by the spatial concept that Sangje dwells in the Ninth Heaven. This can be compared with the doctrines that Allah in Islam and Jehovah in Christianity each dwell in the Seventh Heaven. The name also shows that Sangje is the ruler who reigns over the universe by means of yin and yang. Second, the name Gu-Cheon-Eung-Won-Nweh-Seong-Bo-Hwa-Cheon-Jon appears to be imported from Chinese Taoism, since it occurs in Ok-Chu-Gyeong (the Gaoshang shenlei yushu); in fact, however, its root is in Korea, because Buyeo and Goguryeo, the ancient Korean nations, are the source of the name. While the name does not denote the Supreme Entity in Chinese Taoism, it does in Daesoonjinrihoe; this is an important difference. Third, arbitrarily or not, the name Gu-Cheon-Eung-Won-Nweh-Seong-Bo-Hwa-Cheon-Jon carries the image of 'resolution of grievances', because many people in Korea and China have called upon the name for about 1,000 years to improve their fortunes and escape predicaments. Fourth, not only Gu-Cheon-Eung-Won-Nweh-Seong-Bo-Hwa-Cheon-Jon but also the names of the Three Pure Ones and Ok-Cheon-Jin-Wang (Yuqingzhenwang) in Chinese Taoism are used for the Highest Ruling Entity in Daesoonjinrihoe. But the relations among the Three Pure Ones, Ok-Cheon-Jin-Wang, and Gu-Cheon-Eung-Won-Nweh-Seong-Bo-Hwa-Cheon-Jon in Daesoonjinrihoe differ from those in Chinese Taoism. Fifth, Sangje is associated with the Polaris divinity of Tae-Eul, a view of God in Oriental cosmology. The description Tae-Eul, as well as Gu-Cheon-Eung-Won-Nweh-Seong-Bo-Hwa-Cheon-Jon, indicates that Sangje is linked to the faith of Buyeo and Goguryeo. Sixth, Sangje is not only Mugeuk-Sin (the God of the Endless), who supervises the Endless, but also Taegeuk-Ji-Cheon-Jon (the God of the Ultimate Reality), who supervises the Ultimate Reality; these descriptions directly display the fact that Sangje is a creator. Seventh, in explaining Sangje, a point of view is necessary that grasps the whole: Sangje 'was' a hidden God (deus otiosus) and 'is' an unhidden God after the Incarnation. Eighth, Sangje is Cheon-Ju as in Donghak, but different from it: Cheon-Ju in Donghak holds transcendence and immanence in tightrope tension, whereas Cheon-Ju in Daesoonjinrihoe emphasizes transcendence over immanence. That difference results from the fact that Cheon-Ju in Donghak was a being who revealed himself to a man, while Cheon-Ju in Daesoonjinrihoe was a being who incarnated after revealing himself to a man. Ninth, Sangje is Gae-Byeok-Jang, the manager of the transforming and ordering of the Three Realms of the World through the Great Do, which is the mutual beneficence of all life, and Hae-Won-Sin, the God of the resolution of grievances.

Verification of Indicator Rotation Correction Function of a Treatment Planning Program for Stereotactic Radiosurgery (방사선수술치료계획 프로그램의 지시자 회전 오차 교정 기능 점검)

  • Chung, Hyun-Tai;Lee, Re-Na
    • Journal of Radiation Protection and Research
    • /
    • v.33 no.2
    • /
    • pp.47-51
    • /
    • 2008
  • Objective: This study analyzed errors due to rotation or tilt of the magnetic resonance (MR) imaging indicator during image acquisition for stereotactic radiosurgery. The error correction procedure of a commercially available stereotactic neurosurgery treatment planning program was verified. Materials and Methods: Software virtual phantoms were built from stereotactic images generated with a commercial programming language, Interactive Data Language (version 5.5). The image slice thickness was 0.5 mm, the pixel size 0.5 × 0.5 mm, the field of view 256 mm, and the image resolution 512 × 512. The images were generated under the DICOM 3.0 standard so that they could be used with Leksell GammaPlan®. To verify the rotation error correction function of Leksell GammaPlan®, 45 measurement points were arranged in five axial planes. On each axial plane there were nine measurement points along a square of side 100 mm; the center of the square was located on the z-axis, and one measurement point lay on the z-axis as well. The five axial planes were placed at z = -50.0, -30.0, 0.0, 30.0, and 50.0 mm, respectively. The virtual phantom was rotated by 3° around one of the x, y, and z axes, by 3° around two of the three axes, and by 3° around all three axes. The positional errors of the rotated measurement points were measured with Leksell GammaPlan® and the correction function was verified. Results: The image registration error of the virtual phantom images was 0.1 ± 0.1 mm, within the requirement for stereotactic images. The maximum theoretical positional errors of the measurement points were 2.6 mm for a rotation around one axis, 3.7 mm for a rotation around two axes, and 4.5 mm for a rotation around three axes. The measured positional errors were 0.1 ± 0.1 mm for a rotation around a single axis and 0.2 ± 0.2 mm for rotations around two or three axes. These small errors verify that the rotation error correction function of Leksell GammaPlan® works correctly. Conclusion: A virtual phantom was built to verify software functions of a stereotactic neurosurgery treatment planning program. The error correction function of a commercial treatment planning program worked within the nominal error range. The virtual phantom of this study can be applied in many other fields to verify various functions of treatment planning programs.
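The theoretical displacement of a measurement point under such a rotation can be reproduced with a few lines of linear algebra. This is a minimal sketch, not the authors' IDL code; the sample point below is an illustrative choice on the edge of the 100 mm square:

```python
import math

def rotate_z(p, deg):
    """Rotate a 3D point about the z-axis by `deg` degrees."""
    a = math.radians(deg)
    x, y, z = p
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a),
            z)

def displacement(p, q):
    """Euclidean distance between a point and its rotated image."""
    return math.dist(p, q)

# A point at in-plane radius 50 mm (half the square's side) rotated by 3°.
# For a rotation by angle θ about z, such a point moves by 2·r·sin(θ/2);
# here 2·50·sin(1.5°) ≈ 2.62 mm, consistent with the 2.6 mm single-axis
# maximum reported above.
p = (50.0, 0.0, 0.0)
d = displacement(p, rotate_z(p, 3.0))
print(round(d, 2))  # 2.62
```

Stacking two or three such 3° rotations moves corner points farther, which is why the multi-axis maxima (3.7 mm and 4.5 mm) are larger.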

A Study on Mechanical Errors in Cone Beam Computed Tomography(CBCT) System (콘빔 전산화단층촬영(CBCT) 시스템에서 기계적 오류에 관한 연구)

  • Lee, Yi-Seong;Yoo, Eun-Jeong;Kim, Seung-Keun;Choi, Kyoung-Sik;Lee, Jeong-Woo;Suh, Tae-Suk;Kim, Joeng-Koo
    • Journal of radiological science and technology
    • /
    • v.36 no.2
    • /
    • pp.123-129
    • /
    • 2013
  • This study investigated the rate of setup variance caused by rotating unbalance of the gantry in image-guided radiation therapy. The equipment used was a linear accelerator (Elekta Synergy™, UK) with a three-dimensional volume imaging mode (3D Volume View) in a cone beam computed tomography (CBCT) system. 2D images obtained over 360° and 180° rotations were reconstructed into 3D images. A Catphan503 phantom and a homogeneous phantom were used to measure the setup errors, and a ball-bearing phantom was used to check the rotation axis of the CBCT. The volume images from CBCT using the Catphan503 phantom and the homogeneous phantom were analyzed and compared to images from conventional CT in six dimensions (X, Y, Z, roll, pitch, and yaw). The setup error differed by 0.6 mm in X, 0.5 mm in Y, and 0.5 mm in Z when the gantry rotated 360° in orthogonal coordinates, whereas for a 180° rotation the errors measured 0.9 mm, 0.2 mm, and 0.3 mm in X, Y, and Z, respectively. In rotating coordinates, the greater the rotating unbalance, the higher the average setup error. The resolution of the CBCT images showed a difference of two levels from the recommended table. CBCT showed good agreement with the recommended values for mechanical safety, geometric accuracy, and image quality. The rotating unbalance of the gantry hardly varied in orthogonal coordinates; in the rotating coordinates of the gantry, however, it exceeded the recommended value of ±1°. Therefore, six-dimensional correction is needed for sophisticated radiation therapy.

Characteristics of the Electro-Optical Camera(EOC) (다목적실용위성탑재 전자광학카메라(EOC)의 성능 특성)

  • Seunghoon Lee;Hyung-Sik Shim;Hong-Yul Paik
    • Korean Journal of Remote Sensing
    • /
    • v.14 no.3
    • /
    • pp.213-222
    • /
    • 1998
  • The Electro-Optical Camera (EOC) is the main payload of the KOrea Multi-Purpose SATellite (KOMPSAT), with the mission of cartography to build up a digital map of Korean territory, including a Digital Terrain Elevation Map (DTEM). This instrument, which comprises the EOC Sensor Assembly and the EOC Electronics Assembly, produces panchromatic images of 6.6 m GSD with a swath wider than 17 km by push-broom scanning and spacecraft body pointing, in a visible wavelength range of 510~730 nm. The high-resolution panchromatic image is collected for 2 minutes during the 98-minute orbit cycle, covering about 800 km along the ground track, over a mission lifetime of 3 years, with functions for programmable gain/offset and on-board image data storage. The image, digitized to 8 bits and collected by a fully reflective F8.3 triplet without obscuration, is transmitted to the Ground Station at a rate below 25 Mbps. The EOC was elaborated to have performance that meets or surpasses its design-phase requirements. The spectral response, the modulation transfer function (MTF), and the uniformity of all 2,592 CCD pixels of the EOC are illustrated as measured, for the convenience of the end user. The spectral response was measured for each gain setup of the EOC, which is expected to give users of EOC data the capability of generating more accurate panchromatic images. The MTF of the EOC was measured as greater than 16% at the Nyquist frequency over the entire field of view, exceeding its requirement of larger than 10%. The uniformity, which shows the relative response of each CCD pixel, was measured at every pixel of the EOC Focal Plane Array and is illustrated for data processing.
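The MTF quoted above can be illustrated numerically: the MTF is the normalized magnitude of the Fourier transform of the line spread function (LSF). The Gaussian blur width and the sample frequency below are hypothetical, not EOC measurements:

```python
import numpy as np

dx = 0.001                    # sample spacing in mm
x = np.arange(-5.0, 5.0, dx)  # support wide enough that the LSF tails vanish
sigma = 0.5                   # hypothetical blur width in mm
lsf = np.exp(-x**2 / (2 * sigma**2))  # Gaussian line spread function

mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                           # normalize so MTF(0) = 1
freqs = np.fft.rfftfreq(len(x), d=dx)   # spatial frequency in cycles/mm

# For a Gaussian LSF the analytic MTF is exp(-2·pi²·sigma²·f²);
# sampling the numerical MTF at 0.2 cycles/mm:
i = int(np.argmin(np.abs(freqs - 0.2)))
print(round(float(mtf[i]), 3))  # 0.821
```

In practice the LSF is measured from a slit or edge target and the MTF is read off at the detector's Nyquist frequency, which is set by the pixel pitch.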

Very short-term rainfall prediction based on radar image learning using deep neural network (심층신경망을 이용한 레이더 영상 학습 기반 초단시간 강우예측)

  • Yoon, Seongsim;Park, Heeseong;Shin, Hongjoon
    • Journal of Korea Water Resources Association
    • /
    • v.53 no.12
    • /
    • pp.1159-1172
    • /
    • 2020
  • This study applied deep convolutional neural networks based on U-Net and SegNet, trained on a long record of weather radar data, to very-short-term rainfall prediction, and compared and evaluated the results against a translation model. For training and validation of the deep neural networks, Mt. Gwanak and Mt. Gwangdeoksan radar data were collected from 2010 to 2016 and converted to gray-scale image files in HDF5 format with a 1-km spatial resolution. The deep neural network model was trained to predict precipitation 10 minutes ahead from four consecutive radar images, and a recursive method of repeated forecasts was applied to reach a lead time of 60 minutes with the pretrained model. To evaluate the performance of the deep neural network prediction model, 24 rain cases in 2017 were forecast up to 60 minutes in advance. Evaluating the predictions by mean absolute error (MAE) and critical success index (CSI) at thresholds of 0.1, 1, and 5 mm/hr, the deep neural network model performed better in terms of MAE at the 0.1 and 1 mm/hr thresholds, and better than the translation model in terms of CSI up to a lead time of 50 minutes. In particular, although the deep neural network model generally outperformed the translation model for weak rainfall of 5 mm/hr or less, the evaluation at the 5 mm/hr threshold showed its limitation in predicting distinct high-intensity precipitation features: as the lead time grows, spatial smoothing increases, reducing the accuracy of the rainfall prediction. The translation model turned out to be superior in predicting exceedance of higher intensity thresholds (> 5 mm/hr) because it preserves distinct precipitation features, but its rainfall positions tend to shift incorrectly. This study is expected to help improve radar rainfall prediction models using deep neural networks in the future. In addition, the massive weather radar dataset established in this study will be provided through open repositories for use in subsequent studies.
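The two verification scores used above are simple to compute. A minimal sketch with toy arrays (the rain-rate values are illustrative, not data from the study):

```python
import numpy as np

def mae(obs, pred):
    """Mean absolute error between observed and predicted rain fields."""
    return float(np.mean(np.abs(np.asarray(obs) - np.asarray(pred))))

def csi(obs, pred, threshold):
    """Critical success index: hits / (hits + misses + false alarms)."""
    o = np.asarray(obs) >= threshold
    p = np.asarray(pred) >= threshold
    hits = int(np.sum(o & p))
    misses = int(np.sum(o & ~p))
    false_alarms = int(np.sum(~o & p))
    denom = hits + misses + false_alarms
    return hits / denom if denom else float("nan")

obs = [0.0, 2.0, 6.0, 0.5]   # observed rain rates in mm/hr (toy values)
pred = [0.2, 1.5, 4.0, 0.0]  # predicted rain rates (toy values)
print(round(mae(obs, pred), 3))  # (0.2 + 0.5 + 2.0 + 0.5) / 4 = 0.8
print(csi(obs, pred, 1.0))       # same 2 cells exceed 1 mm/hr in both -> 1.0
```

At the 5 mm/hr threshold the prediction above misses the single observed heavy-rain cell, so CSI drops to 0, mirroring how smoothed forecasts lose high-intensity peaks.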

Radiation Dose Reduction in Digital Mammography by Deep-Learning Algorithm Image Reconstruction: A Preliminary Study (딥러닝 알고리즘을 이용한 저선량 디지털 유방 촬영 영상의 복원: 예비 연구)

  • Su Min Ha;Hak Hee Kim;Eunhee Kang;Bo Kyoung Seo;Nami Choi;Tae Hee Kim;You Jin Ku;Jong Chul Ye
    • Journal of the Korean Society of Radiology
    • /
    • v.83 no.2
    • /
    • pp.344-359
    • /
    • 2022
  • Purpose To develop a denoising convolutional neural network-based image processing technique and investigate its efficacy in diagnosing breast cancer using low-dose mammography imaging. Materials and Methods A total of 6 breast radiologists were included in this prospective study. All radiologists independently evaluated low-dose images for lesion detection and rated them for diagnostic quality using a qualitative scale. After application of the denoising network, the same radiologists evaluated lesion detectability and image quality. For clinical application, a consensus on lesion type and localization on preoperative mammographic examinations of breast cancer patients was reached after discussion. Thereafter, coded low-dose, reconstructed full-dose, and full-dose images were presented and assessed in a random order. Results Lesions on 40% reconstructed full-dose images were better perceived when compared with low-dose images of mastectomy specimens as a reference. In clinical application, as compared to 40% reconstructed images, higher values were given on full-dose images for resolution (p < 0.001); diagnostic quality for calcifications (p < 0.001); and for masses, asymmetry, or architectural distortion (p = 0.037). The 40% reconstructed images showed comparable values to 100% full-dose images for overall quality (p = 0.547), lesion visibility (p = 0.120), and contrast (p = 0.083), without significant differences. Conclusion Effective denoising and image reconstruction processing techniques can enable breast cancer diagnosis with substantial radiation dose reduction.

Analysis of Applicability of RPC Correction Using Deep Learning-Based Edge Information Algorithm (딥러닝 기반 윤곽정보 추출자를 활용한 RPC 보정 기술 적용성 분석)

  • Jaewon Hur;Changhui Lee;Doochun Seo;Jaehong Oh;Changno Lee;Youkyung Han
    • Korean Journal of Remote Sensing
    • /
    • v.40 no.4
    • /
    • pp.387-396
    • /
    • 2024
  • Most very high-resolution (VHR) satellite images provide rational polynomial coefficients (RPC) data to facilitate the transformation between ground coordinates and image coordinates. However, the initial RPC often contains geometric errors, necessitating correction through matching with ground control points (GCPs). A GCP chip is a small image patch extracted from an orthorectified image together with height information of its center point, which can be directly used for geometric correction. Many studies have focused on area-based matching methods to accurately align GCP chips with VHR satellite images. In cases with seasonal differences or changed areas, edge-based algorithms are often used for matching because of the difficulty of relying solely on pixel values. However, traditional edge extraction algorithms, such as the Canny edge detector, require appropriate threshold settings tailored to the spectral characteristics of satellite images. Therefore, this study utilizes deep learning-based edge information, which is insensitive to the regional characteristics of satellite images, for matching. Specifically, we use a pretrained pixel difference network (PiDiNet) to generate the edge maps for both satellite images and GCP chips. These edge maps are then used as input for normalized cross-correlation (NCC) and relative edge cross-correlation (RECC) to identify the peak points with the highest correlation between the two edge maps. To remove mismatched pairs and thus obtain the bias-compensated RPC, we iteratively apply data snooping. Finally, we compare the results qualitatively and quantitatively with those obtained from traditional NCC and RECC methods. The PiDiNet approach achieved high matching accuracy, with root mean square error (RMSE) values ranging from 0.3 to 0.9 pixels. However, the PiDiNet-generated edges were thicker than those from the Canny method, leading to slightly lower registration accuracy in some images. Nevertheless, PiDiNet consistently produced characteristic edge information, allowing successful matching even in challenging regions. This study demonstrates that improving the robustness of edge-based registration methods can facilitate effective registration across diverse regions.
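The NCC step described above amounts to searching for the offset at which a chip's edge map best correlates with the image's edge map. A toy brute-force sketch (the arrays are hypothetical; RECC, PiDiNet inference, and data snooping are omitted):

```python
import numpy as np

def ncc_match(image, template):
    """Return the (row, col) offset maximizing zero-mean NCC, plus the score."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    best_pos, best_score = (0, 0), -2.0
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.linalg.norm(wz) * t_norm
            if denom == 0:
                continue  # flat window: correlation undefined
            score = float((wz * t).sum() / denom)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Toy edge maps: the "chip" is cut directly from the "image",
# so the correlation peak should land exactly at the cut position.
rng = np.random.default_rng(0)
image = rng.random((32, 32))
chip = image[5:13, 7:15]
pos, score = ncc_match(image, chip)
print(pos)           # (5, 7)
print(score > 0.99)  # True
```

Production implementations use FFT-based correlation rather than this double loop, but the peak-finding logic is the same.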

The Effect of PET/CT Images on SUV with the Correction of CT Image by Using Contrast Media (PET/CT 영상에서 조영제를 이용한 CT 영상의 보정(Correction)에 따른 표준화섭취계수(SUV)의 영향)

  • Ahn, Sha-Ron;Park, Hoon-Hee;Park, Min-Soo;Lee, Seung-Jae;Oh, Shin-Hyun;Lim, Han-Sang;Kim, Jae-Sam;Lee, Chang-Ho
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.13 no.1
    • /
    • pp.77-81
    • /
    • 2009
  • Purpose: The PET component of PET/CT (Positron Emission Tomography/Computed Tomography) quantitatively shows the biological and chemical information of the body but is limited in presenting clear anatomic structure. Combining PET with CT thus not only offers higher resolution but also effectively shortens scanning time and reduces noise by using the CT data for attenuation correction. Because contrast media at CT scanning makes it easier to determine the exact extent of a lesion and to distinguish normal organs, its use is increasing. In the case of using contrast media, however, it affects the semi-quantitative measures of PET/CT images. In this study, therefore, we aim to establish the reliability of the SUV (Standardized Uptake Value) with CT data correction so that it can support more accurate diagnosis. Materials and Methods: A total of 30 subjects (ages 27 to 72; mean age 49.6) were included, and a DSTe scanner (General Electric Healthcare, Milwaukee, MI, USA) was used. ¹⁸F-FDG 370~555 MBq was injected into the subjects depending on their weight and, after about 60 minutes in a resting position, a whole-body scan was taken. The CT scan was set to 140 kV and 210 mA, and the injected amount of contrast media was 2 cc per kg of patient weight. From the raw data, we obtained images showing the effect of the contrast media through attenuation correction with both corrected and uncorrected CT data. We then marked out an ROI (Region of Interest) in each area to measure the SUV and analyzed the differences. Results: According to the analysis, with contrast media correction the SUV decreased in the liver and heart, which have more bloodstream than the other organs, whereas there was no difference in the lungs. Conclusions: Whereas CT images with contrast media in PET/CT increase the contrast of the targeted region and thus improve diagnostic efficiency, they also increase the SUV, a semi-quantitative analytical measure. In this study, we measured the variation of the SUV through correction of the influence of contrast media and compared the differences. By revising the SUV that is inflated in contrast-enhanced attenuation-corrected images, we can expect high-resolution anatomical images, and this trusted semi-quantitative method should enhance diagnostic value.
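For reference, the SUV discussed above is conventionally the body-weight-normalized uptake, with the injected dose decay-corrected to scan time. A minimal sketch of that standard formula (the patient numbers are illustrative, not data from this study):

```python
F18_HALF_LIFE_MIN = 109.77  # physical half-life of fluorine-18 in minutes

def suv_bw(conc_kbq_ml, injected_mbq, weight_kg, minutes_post_injection):
    """Body-weight SUV: tissue concentration over decay-corrected dose per gram."""
    dose_kbq = injected_mbq * 1000.0
    # Decay-correct the injected dose to the time of the scan.
    dose_at_scan = dose_kbq * 0.5 ** (minutes_post_injection / F18_HALF_LIFE_MIN)
    weight_g = weight_kg * 1000.0
    return conc_kbq_ml / (dose_at_scan / weight_g)

# A 70 kg patient injected with 370 MBq, imaged 60 min later, with a
# region measuring 4.0 kBq/mL:
print(round(suv_bw(4.0, 370.0, 70.0, 60.0), 2))
```

Overestimated attenuation in contrast-enhanced regions inflates the reconstructed concentration term in this ratio, which is why the SUV drops once the CT data are corrected.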


CAS 500-1/2 Image Utilization Technology and System Development: Achievement and Contribution (국토위성정보 활용기술 및 운영시스템 개발: 성과 및 의의)

  • Yoon, Sung-Joo;Son, Jonghwan;Park, Hyeongjun;Seo, Junghoon;Lee, Yoojin;Ban, Seunghwan;Choi, Jae-Seung;Kim, Byung-Guk;Lee, Hyun jik;Lee, Kyu-sung;Kweon, Ki-Eok;Lee, Kye-Dong;Jung, Hyung-sup;Choung, Yun-Jae;Choi, Hyun;Koo, Daesung;Choi, Myungjin;Shin, Yunsoo;Choi, Jaewan;Eo, Yang-Dam;Jeong, Jong-chul;Han, Youkyung;Oh, Jaehong;Rhee, Sooahm;Chang, Eunmi;Kim, Taejung
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_2
    • /
    • pp.867-879
    • /
    • 2020
  • As the era of space technology utilization approaches, the launch of the CAS (Compact Advanced Satellite) 500-1/2 satellites is scheduled for 2021 to acquire high-resolution images. Accordingly, increased image usability and processing efficiency have been emphasized as key design concepts of the CAS 500-1/2 ground station. In this regard, the "CAS 500-1/2 Image Acquisition and Utilization Technology Development" project has been carried out to develop core technologies and processing systems for collecting, processing, managing, and distributing CAS 500-1/2 data. In this paper, we introduce the results of this project. We developed an operation system that automatically generates precision images using a GCP (Ground Control Point) chip DB (database) and a DEM (Digital Elevation Model) DB covering the entire Korean peninsula. We also developed a system to produce ortho-rectified images indexed to 1:5,000 map grids, laying a foundation for an ARD (Analysis Ready Data) system. In addition, we linked various application software to the operation system to systematically produce mosaic images, DSM (Digital Surface Model)/DTM (Digital Terrain Model) products, spatial feature thematic maps, and change detection thematic maps. The major contribution of the developed system and technologies is that precision images are automatically generated using a GCP chip DB for the first time in Korea and that the various utilization product technologies are incorporated into the operation system of a satellite ground station. The developed operation system has been installed at the Korea Land Observation Satellite Information Center of the NGII (National Geographic Information Institute). We expect the system to contribute greatly to the center's work and to provide a standard for future ground station systems of earth observation satellites.

Parallel Processing of Satellite Images using CUDA Library: Focused on NDVI Calculation (CUDA 라이브러리를 이용한 위성영상 병렬처리 : NDVI 연산을 중심으로)

  • LEE, Kang-Hun;JO, Myung-Hee;LEE, Won-Hee
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.19 no.3
    • /
    • pp.29-42
    • /
    • 2016
  • Remote sensing allows the acquisition of information across a large area without contacting objects and has thus been rapidly developed through application to different areas. With this development, satellite image resolution has advanced rapidly, and satellites using remote sensing have been applied in research across many areas of the world. However, while research on remote sensing is being conducted in various areas, research on data processing remains insufficient; that is, as satellite resources develop further, data processing continues to lag behind. Accordingly, this paper discusses plans to maximize the performance of satellite image processing by utilizing NVIDIA's CUDA (Compute Unified Device Architecture) library, a parallel processing technique. The discussion proceeds as follows. First, standard KOMPSAT (Korea Multi-Purpose Satellite) images of various sizes are subdivided into five types, and the NDVI (Normalized Difference Vegetation Index) is computed on the subdivided images. Next, ArcMap and two implementations, one based on the CPU and one on the GPU, are used to compute the NDVI. The histograms of each image are then compared after each run to analyze the processing speeds of the CPU and GPU. The results indicate that both the CPU-version and GPU-version images are identical to the ArcMap images; the histogram comparison confirms that the NDVI code was implemented correctly. In terms of processing speed, the GPU was five times faster than the CPU. Accordingly, this research shows that a parallel processing technique using the CUDA library can enhance the processing speed of satellite image data, and that this benefits many advanced remote sensing techniques beyond a simple per-pixel computation like the NDVI.
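The per-pixel computation that both the CPU and GPU versions implement reduces to one vectorized expression. A short NumPy sketch (the band arrays are hypothetical, and the CUDA kernel itself is not reproduced here):

```python
import numpy as np

def ndvi(red, nir):
    """NDVI = (NIR - Red) / (NIR + Red), with zero-sum pixels set to 0."""
    red = np.asarray(red, dtype=np.float64)
    nir = np.asarray(nir, dtype=np.float64)
    total = nir + red
    out = np.zeros_like(total)
    # Divide only where the denominator is nonzero; other pixels stay 0.
    np.divide(nir - red, total, out=out, where=total != 0)
    return out

# Toy reflectance values: healthy vegetation has high NIR and low red,
# so the first pixel should come out near 0.5.
red = np.array([0.2, 0.4, 0.0])
nir = np.array([0.6, 0.4, 0.0])
print(ndvi(red, nir))
```

Because every output pixel depends only on the matching input pixels, this map parallelizes trivially, which is what makes NDVI a natural benchmark for a CUDA port.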