• Title/Summary/Keyword: image acquisition techniques


Automated Geometric Correction of Geostationary Weather Satellite Images (정지궤도 기상위성의 자동기하보정)

  • Kim, Hyun-Suk;Lee, Tae-Yoon;Hur, Dong-Seok;Rhee, Soo-Ahm;Kim, Tae-Jung
    • Korean Journal of Remote Sensing / v.23 no.4 / pp.297-309 / 2007
  • The first Korean geostationary weather satellite, the Communications, Oceanography and Meteorology Satellite (COMS), will be launched in 2008. The COMS ground station needs to perform geometric correction to improve the accuracy of satellite image data and to broadcast geometrically corrected images to users within 30 minutes of image acquisition. To meet this requirement, we developed automated and fast geometric correction techniques. Control points were generated automatically by matching images against coastline data and applying a robust estimation technique, RANSAC. We used the GSHHS (Global Self-consistent Hierarchical High-resolution Shoreline) database to construct 211 landmark chips. Clouds were detected within the images and matching was applied only to cloud-free sub-images; for visible channels, only sub-images located in day-time were selected. We tested the algorithm with GOES-9 images. Control points were generated by matching GOES channel 1 and channel 2 images against the 211 landmark chips, and RANSAC correctly prevented outliers from being selected as control points. The accuracy of the sensor models established from the automated control points was in the range of 1~2 pixels. Geometric correction was performed and its quality was visually inspected by projecting the coastline onto the corrected images. The total processing time for matching, RANSAC, and geometric correction was around 4 minutes.
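
The abstract only summarizes the matching-plus-RANSAC step. As a hedged illustration, the following minimal Python sketch shows how RANSAC can screen matched point pairs (landmark chip coordinates versus image coordinates) while fitting a simple affine model; the function names, the affine model choice, and the 1.5-pixel tolerance are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src (N,2) points to dst (N,2) points."""
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])              # (N,3) design matrix
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)   # (3,2) affine coefficients
    return coeffs

def ransac_affine(src, dst, n_iter=500, tol=1.5, seed=None):
    """Keep only matches consistent with a single affine model (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    ones = np.ones((len(src), 1))
    for _ in range(n_iter):
        sample = rng.choice(len(src), size=3, replace=False)   # minimal set for an affine fit
        model = fit_affine(src[sample], dst[sample])
        residual = np.linalg.norm(np.hstack([src, ones]) @ model - dst, axis=1)
        inliers = residual < tol                               # tolerance in pixels
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit the model on all inliers; outlier matches are excluded from the control points
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers
```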

Parallel Processing of Satellite Images using CUDA Library: Focused on NDVI Calculation (CUDA 라이브러리를 이용한 위성영상 병렬처리 : NDVI 연산을 중심으로)

  • LEE, Kang-Hun;JO, Myung-Hee;LEE, Won-Hee
    • Journal of the Korean Association of Geographic Information Studies / v.19 no.3 / pp.29-42 / 2016
  • Remote sensing allows information to be acquired over a large area without contact with the observed objects and has therefore been rapidly developed and applied in many fields. With this development, satellite image resolution has advanced rapidly, and remote sensing satellites are now used for research across many areas of the world. However, while remote sensing research is being carried out in various domains, research on data processing remains insufficient; as satellite resources improve, data processing continues to lag behind. Accordingly, this paper discusses how to maximize the performance of satellite image processing by utilizing NVIDIA's CUDA (Compute Unified Device Architecture) library, a parallel processing technique. The discussion proceeds as follows. First, standard KOMPSAT (Korea Multi-Purpose Satellite) images of various sizes are subdivided into five types, and the NDVI (Normalized Difference Vegetation Index) is computed for the subdivided images. Next, NDVI is implemented in ArcMap and in two versions, one based on the CPU and one on the GPU. The histograms of the resulting images are then compared to verify correctness and to analyze the difference in processing speed between the CPU and the GPU. The results show that both the CPU and GPU images match the ArcMap images, and the histogram comparison confirms that the NDVI code was implemented correctly. In terms of processing speed, the GPU was about five times faster than the CPU. This research therefore shows that a parallel processing technique using the CUDA library can enhance the processing speed of satellite images, and that the benefit should be even greater for more advanced remote sensing computations than for a simple per-pixel calculation like NDVI.
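
For reference, the NDVI computation that the paper parallelizes is a per-pixel formula, NDVI = (NIR - Red) / (NIR + Red). The sketch below is an illustration only: a CPU version with NumPy and a GPU version with CuPy, where CuPy is used as a convenient stand-in for a CUDA-based implementation and the band array names are hypothetical.

```python
import numpy as np

def ndvi_cpu(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel on the CPU."""
    red = red.astype(np.float32)
    nir = nir.astype(np.float32)
    denom = nir + red
    out = np.zeros_like(denom)
    np.divide(nir - red, denom, out=out, where=denom != 0)  # avoid division by zero
    return out

def ndvi_gpu(red, nir):
    """Same computation offloaded to the GPU via CuPy (assumes CuPy is installed)."""
    import cupy as cp
    red_d = cp.asarray(red, dtype=cp.float32)
    nir_d = cp.asarray(nir, dtype=cp.float32)
    denom = nir_d + red_d
    ndvi = cp.where(denom != 0, (nir_d - red_d) / denom, 0)
    return cp.asnumpy(ndvi)  # copy the result back to host memory
```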

Effects of Iterative Reconstruction Algorithm, Automatic Exposure Control on Image Quality, and Radiation Dose: Phantom Experiments with Coronary CT Angiography Protocols (반복적 재구성 알고리즘과 관전류 자동 노출 조정 기법의 CT 영상 화질과 선량에 미치는 영향: 관상동맥 CT 조영 영상 프로토콜 기반의 팬텀 실험)

  • Ha, Seongmin;Jung, Sunghee;Chang, Hyuk-Jae;Park, Eun-Ah;Shim, Hackjoon
    • Progress in Medical Physics / v.26 no.1 / pp.28-35 / 2015
  • In this study, we investigated the effects of an iterative reconstruction algorithm and an automatic exposure control (AEC) technique on image quality and radiation dose through phantom experiments with coronary computed tomography (CT) angiography protocols. We scanned the AAPM CT performance phantom using a 320-detector-row CT scanner. At tube voltages of 80, 100, and 120 kVp, scanning was repeated with two settings of the AEC technique, i.e., target standard deviation (SD) values of 33 (the higher tube current) and 44 (the lower tube current). The projection data were reconstructed in two ways, with filtered back projection (FBP) and with an iterative reconstruction technique (AIDR-3D). Image quality was evaluated quantitatively with the noise standard deviation, the modulation transfer function, and the contrast-to-noise ratio (CNR). More specifically, we analyzed how the choice of tube voltage and reconstruction algorithm influences tube current modulation and, consequently, radiation dose. The reduction of image noise by the iterative reconstruction algorithm compared with FBP was most evident with the lower tube current protocols: noise decreased by 46% and 38% when the AEC was set to the lower dose (target SD=44) and the higher dose (target SD=33), respectively. As a side effect of iterative reconstruction, spatial resolution decreased slightly, but not by enough to offset the substantial gains in noise reduction. Consequently, if coronary CT angiography is scanned and reconstructed using both the automatic exposure control and iterative reconstruction techniques, image noise can be reduced significantly compared with a conventional acquisition method, with only a slight decrease in spatial resolution, implying the clinical advantage of radiation dose reduction in keeping with the ALARA principle.
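
As a hedged illustration of the quantitative metrics named above (noise SD and CNR), the sketch below computes both from rectangular ROIs on a reconstructed slice; the ROI coordinates and array names are hypothetical and not taken from the study.

```python
import numpy as np

def roi_stats(image: np.ndarray, rows: slice, cols: slice):
    """Mean and standard deviation of pixel values inside a rectangular ROI."""
    roi = image[rows, cols]
    return roi.mean(), roi.std()

def contrast_to_noise_ratio(image, signal_roi, background_roi):
    """CNR = |mean_signal - mean_background| / noise SD of the background ROI."""
    mean_s, _ = roi_stats(image, *signal_roi)
    mean_b, sd_b = roi_stats(image, *background_roi)
    return abs(mean_s - mean_b) / sd_b

# Example with hypothetical ROI positions on a reconstructed slice (HU values):
# cnr = contrast_to_noise_ratio(slice_hu,
#                               (slice(100, 130), slice(100, 130)),
#                               (slice(200, 230), slice(200, 230)))
```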

Design and Implementation of Medical Information System using QR Code (QR 코드를 이용한 의료정보 시스템 설계 및 구현)

  • Lee, Sung-Gwon;Jeong, Chang-Won;Joo, Su-Chong
    • Journal of Internet Computing and Services / v.16 no.2 / pp.109-115 / 2015
  • New medical device technologies for acquiring bio-signal and medical information have been developed in various forms and are steadily increasing in number, and bio-signal devices and the information they gather are being used as primary inputs to medical services in everyday life. However, the growing use of such bio-signals raises security issues that are often not addressed, and in the medical field the patient's medical image information and bio-signals are generated by separate devices, so they cannot be managed and integrated together. To solve this problem, in this paper we integrate, through QR codes, the medical image information (including the doctor's findings) with the bio-signal information. The system implementation environment for medical imaging and bio-signal acquisition was configured with bio-signal measurement devices, smart devices, and a PC. For ROI extraction from bio-signals and for receiving image information transferred from the medical equipment or bio-signal measurement devices, the .NET Framework was used to run a QR server module on the Windows Server 2008 operating system. The main function of the QR server module is to parse the DICOM files generated by the medical imaging devices and to extract the identified ROI information, which is stored and managed in a database. In addition, patient health information such as EMR and OCS data, the extracted ROI information, and basic information needed in emergency situations are managed via QR codes. The QR codes, ROI data, and bio-signal information files are stored and managed with a PID (patient identification) used by the bio-signal device, depending on the size of the received bio-signal information. If the received information exceeds the maximum size that can be converted into a QR code, the QR code instead carries URL information through which the bio-signal information can be accessed on the server. Likewise, the .NET Framework is used to provide the information in QR code form, so that clients can look up the relevant information on a PC or an Android-based smart device. Finally, the existing medical image information, bio-signal information, and patient health information are integrated by the application service in order to provide a medical information service suitable for the medical field.
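
As an illustrative sketch only (not the authors' implementation), the snippet below shows the QR-or-URL fallback logic described above using the third-party `qrcode` package; the payload size threshold, field names, and server URL are assumptions.

```python
import json
import qrcode  # third-party package: pip install qrcode[pil]

MAX_QR_PAYLOAD = 2000  # rough byte budget; real QR capacity depends on version and EC level

def make_patient_qr(pid: str, roi_info: dict, server_url: str):
    """Encode ROI/bio-signal metadata directly in a QR code, or fall back to a
    server URL when the payload is too large to fit (hypothetical sketch)."""
    payload = json.dumps({"pid": pid, "roi": roi_info})
    if len(payload.encode("utf-8")) > MAX_QR_PAYLOAD:
        payload = f"{server_url}/biosignal?pid={pid}"  # client fetches the full record
    return qrcode.make(payload)

# img = make_patient_qr("P-001", {"lead": "II", "window": [0, 5000]}, "https://example.org")
# img.save("patient_qr.png")
```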

Quality Assurance of Multileaf Collimator Using Electronic Portal Imaging (전자포탈영상을 이용한 다엽시준기의 정도관리)

  • ;Jason W Sohn
    • Progress in Medical Physics / v.14 no.3 / pp.151-160 / 2003
  • The application of more complex radiotherapy techniques using multileaf collimation (MLC), such as 3D conformal radiation therapy and intensity-modulated radiation therapy (IMRT), has increased the importance of verifying leaf position and motion. Portal films have traditionally been used for quality assurance (QA) of the MLC because of their reliability and empirical robustness. However, the ease of use of electronic portal imaging devices (EPIDs) and their ability to provide digital data have attracted attention to EPIDs as an alternative to films for routine quality assurance, despite concerns about their clinical feasibility, efficacy, and cost-to-benefit ratio. In this study, we developed a method for daily QA of the MLC using electronic portal images (EPIs). The suitability of the EPID for routine QA was verified by comparing EPIs with portal films obtained simultaneously during delivery and with the known prescription input to the MLC controller. Two specially designed dynamic MLC test patterns were used for image acquisition. Quantitative off-line analysis using an edge detection algorithm supplemented the on-line qualitative visual assessment and strengthened the verification procedure. In conclusion, EPIs were sufficient for daily QA of MLC leaf positions, with an accuracy comparable to that of portal films.
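
The edge-detection step is not specified in detail in the abstract; as a hedged sketch, the snippet below estimates a field-edge position per leaf row from the steepest intensity gradient of an EPI profile and compares it against the prescribed position. All names and the 1 mm tolerance are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def leaf_edge_positions(epi: np.ndarray, leaf_rows):
    """For each leaf row of an electronic portal image, estimate the field-edge
    column as the location of the steepest intensity gradient."""
    edges = []
    for r0, r1 in leaf_rows:                            # pixel-row span covered by one leaf pair
        profile = epi[r0:r1].mean(axis=0)               # averaged intensity profile along the leaf
        gradient = np.gradient(profile)
        edges.append(int(np.argmax(np.abs(gradient))))  # column of the sharpest transition
    return edges

def compare_to_prescription(measured_cols, prescribed_cols, pixel_size_mm, tol_mm=1.0):
    """Flag leaves whose measured edge deviates from the prescribed position by more than tol_mm."""
    deviation = (np.asarray(measured_cols) - np.asarray(prescribed_cols)) * pixel_size_mm
    return np.abs(deviation) > tol_mm
```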


An Implementation of Mobile Platform using Location Data Index Techniques (위치 데이터 인덱스 기법을 적용한 모바일 플랫폼구현)

  • Park, Chang-Hee;Kang, Jin-Suk;Sung, Mee-Young;Park, Jong-Song;Kim, Jang-Hyung
    • Journal of the Korea Institute of Information and Communication Engineering / v.10 no.11 / pp.1960-1972 / 2006
  • In this thesis, GPS and electronic mapping were used to realize a system that recognizes license plate numbers and identifies the location of moving objects, with their movement displayed synchronously on the electronic map. Throughout the study, a camera attached to a PDA, one of the mobile devices, automatically recognized and confirmed license plate numbers acquired from the front and back of each car. Using this mobile technique over a wireless network, searches for specific plate numbers are carried out and information about the location of the car is transmitted to a remote server. Such a GPS-based system allows the topography to be measured and a car's location to be acquired effectively. The information is then transmitted to a central control center and stored as text, to be reproduced later in the form of diagrams. Obtaining positional information through GPS and performing image processing on the PDA make it possible to estimate a car's location correctly and to transmit the car's specific information to the control center at the same time, so that the center receives information such as the type of car and possible defects, and can offer assistance based on it. With this information, a mobile system can be established that recognizes cars and accurately traces their locations.

A Study of Location Based Services Using Location Data Index Techniques (위치데이터인덱스 기법을 적용한 위치기반서버스에 관한 연구)

  • Park Chang-Hee;Kim Jang-Hyung;Kang Jin-Suk
    • Journal of Korea Multimedia Society / v.9 no.5 / pp.595-605 / 2006
  • In this thesis, GPS and electronic mapping were used to realize a system that recognizes license plate numbers and identifies the location of moving objects, with their movement displayed synchronously on the electronic map. Throughout the study, a camera attached to a PDA, one of the mobile devices, automatically recognized and confirmed license plate numbers acquired from the front and back of each car. Using this mobile technique over a wireless network, searches for specific plate numbers are carried out and information about the location of the car is transmitted to a remote server. Such a GPS-based system allows the topography to be measured and a car's location to be acquired effectively. The information is then transmitted to a central control center and stored as text, to be reproduced later in the form of diagrams. Obtaining positional information through GPS and performing image processing on the PDA make it possible to estimate a car's location correctly and to transmit the car's specific information to the control center at the same time, so that the center receives information such as the type of car and possible defects, and can offer assistance based on it. With this information, a mobile system can be established that recognizes cars and accurately traces their locations.
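
The two abstracts above do not describe the index structure itself; as a loose illustration of a location data index, the sketch below buckets GPS fixes into a simple latitude/longitude grid so that nearby vehicles can be retrieved quickly. The class, cell size, and method names are hypothetical and not the papers' technique.

```python
import math
from collections import defaultdict

class GridLocationIndex:
    """Minimal grid-based index over GPS fixes (illustrative sketch only):
    positions are bucketed by rounded latitude/longitude so nearby vehicles
    can be looked up without scanning every record."""

    def __init__(self, cell_deg=0.01):          # roughly 1 km cells at mid latitudes
        self.cell_deg = cell_deg
        self.cells = defaultdict(list)

    def _key(self, lat, lon):
        return (math.floor(lat / self.cell_deg), math.floor(lon / self.cell_deg))

    def insert(self, plate: str, lat: float, lon: float):
        self.cells[self._key(lat, lon)].append((plate, lat, lon))

    def nearby(self, lat, lon):
        """Return candidate vehicles in the query cell and its eight neighbours."""
        ci, cj = self._key(lat, lon)
        hits = []
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                hits.extend(self.cells.get((ci + di, cj + dj), []))
        return hits
```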


Basic Research on the Possibility of Developing a Landscape Perceptual Response Prediction Model Using Artificial Intelligence - Focusing on Machine Learning Techniques - (인공지능을 활용한 경관 지각반응 예측모델 개발 가능성 기초연구 - 머신러닝 기법을 중심으로 -)

  • Kim, Jin-Pyo;Suh, Joo-Hwan
    • Journal of the Korean Institute of Landscape Architecture / v.51 no.3 / pp.70-82 / 2023
  • The recent surge in IT and data acquisition is shifting paradigms in all aspects of life, and these advances are also affecting academic fields, where research topics and methods are being improved through academic exchange and connections. In particular, data-based research methods are being employed in various academic fields, including landscape architecture, where continuous research is needed. Therefore, this study investigates the possibility of developing a landscape preference evaluation and prediction model using machine learning, a branch of artificial intelligence. To achieve this goal, machine learning techniques were applied to the landscape field to build a landscape preference evaluation and prediction model and to verify its simulation accuracy. Wind power facility landscapes, which have recently attracted attention as a renewable energy source, were selected as the research objects. Images of wind power facility landscapes were collected using web crawling techniques, and an analysis dataset was built. Orange version 3.33, a program from the University of Ljubljana, was used for the machine learning analysis to derive a prediction model with good performance. A model that integrates the evaluation criteria and a structure with a separate model per evaluation criterion were used, and models were generated with the kNN, SVM, Random Forest, Logistic Regression, and Neural Network algorithms, which are suitable for machine learning classification. The performance of the generated models was then evaluated to derive the most suitable prediction model. The prediction model derived in this study separately evaluates three criteria, classification by type of landscape, classification by the distance between the landscape and the target, and classification by preference, and then synthesizes the results into a prediction. The resulting model achieved a high accuracy of 0.986 for classification by landscape type, 0.973 for classification by distance, and 0.952 for classification by preference, and the verification of the predicted results exceeded the required performance values for the model. As an experimental attempt to investigate the possibility of developing a prediction model using machine learning in landscape-related research, this study confirmed that a high-performance prediction model can be created by building a dataset through the collection and refinement of image data and applying it to landscape-related research fields. Based on the results, implications, and limitations of this study, it should be possible to develop various types of landscape prediction models, including those for wind power facility, natural, and cultural landscapes. Machine learning techniques can become more useful and valuable in landscape architecture by exploring and applying research methods appropriate to each topic, for example by reducing the time spent on data classification with a model that classifies images by landscape type, or by analyzing the importance of landscape planning factors through machine-learning analysis of landscape prediction factors.
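
The study itself used the Orange data-mining tool; purely as a hedged illustration of comparing the algorithm families named above, the sketch below runs cross-validated accuracy for each classifier with scikit-learn, assuming a feature matrix has already been extracted from the landscape images (all parameter values are illustrative).

```python
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

def compare_classifiers(X, y, cv=5):
    """Cross-validated accuracy for the algorithm families named in the abstract.
    X is a feature matrix extracted from the landscape images, y the class labels."""
    models = {
        "kNN": KNeighborsClassifier(n_neighbors=5),
        "SVM": SVC(kernel="rbf", gamma="scale"),
        "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
        "Logistic Regression": LogisticRegression(max_iter=1000),
        "Neural Network": MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0),
    }
    return {name: cross_val_score(model, X, y, cv=cv).mean()
            for name, model in models.items()}
```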

Assessment of Attenuation Correction Techniques with a 137Cs Point Source (137Cs 점선원을 이용한 감쇠 보정기법들의 평가)

  • Bong, Jung-Kyun;Kim, Hee-Joung;Son, Hye-Kyoung;Park, Yun-Young;Park, Hae-Joung;Yun, Mi-Jin;Lee, Jong-Doo;Jung, Hae-Jo
    • The Korean Journal of Nuclear Medicine / v.39 no.1 / pp.57-68 / 2005
  • Purpose: The objective of this study was to assess attenuation correction algorithms using a 137Cs point source for brain positron emission tomography (PET) imaging. Materials & Methods: Four different types of phantoms were used to test the various attenuation correction techniques. Transmission data with the 137Cs point source were acquired after infusing the emission source into the phantoms, and the emission data were subsequently acquired in 3D acquisition mode. Scatter correction was performed with a background tail-fitting algorithm. The emission data were then reconstructed using an iterative reconstruction method with measured (MAC), elliptical (ELAC), segmented (SAC), and remapping (RAC) attenuation correction, respectively. The reconstructed images were assessed both qualitatively and quantitatively. In addition, reconstructed images of a normal subject were assessed by nuclear medicine physicians, and subtracted images were also compared. Results: ELAC, SAC, and RAC provided uniform images with little noise for the cylindrical phantom. In contrast, a decrease in intensity at the central portion of the attenuation map was noticed with MAC. Reconstructed images of the Jaszczak and Hoffman phantoms showed better quality with RAC and SAC. The attenuation of the skull was clearly visible in the images of the normal subject, and attenuation correction that did not account for the skull resulted in artificial defects in the brain images. Conclusion: More sophisticated attenuation correction methods are needed to obtain better accuracy in quantitative brain PET images.
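
For context, transmission-based attenuation correction rests on the relation T = I/I0 = exp(-∫μ dl) along each line of response, so the emission data are scaled by 1/T. The sketch below is a simplified illustration of this single step, not the reconstruction pipeline used in the study; array and function names are hypothetical.

```python
import numpy as np

def attenuation_correction_factors(blank_scan: np.ndarray, transmission_scan: np.ndarray):
    """Measured attenuation correction (simplified): for each line of response,
    transmission T = I / I0 = exp(-integral of mu), and the emission data are
    multiplied by 1 / T to compensate for attenuation."""
    transmission = np.clip(transmission_scan / np.maximum(blank_scan, 1e-6), 1e-6, 1.0)
    return 1.0 / transmission

def correct_emission(emission_sinogram, blank_scan, transmission_scan):
    """Apply the correction factors to the emission sinogram before reconstruction."""
    return emission_sinogram * attenuation_correction_factors(blank_scan, transmission_scan)
```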

Development of deep learning network based low-quality image enhancement techniques for improving foreign object detection performance (이물 객체 탐지 성능 개선을 위한 딥러닝 네트워크 기반 저품질 영상 개선 기법 개발)

  • Ki-Yeol Eom;Byeong-Seok Min
    • Journal of Internet Computing and Services / v.25 no.1 / pp.99-107 / 2024
  • Along with economic growth and industrial development, demand is increasing for the production of various electronic components and devices, such as semiconductors, SMT components, and electric battery products. However, these products may contain foreign substances introduced during the manufacturing process, such as iron, aluminum, or plastic, which can lead to serious problems or malfunction of the product and even fires in electric vehicles. To address this, it is necessary to determine whether foreign materials are present inside the product, and many tests have been performed using non-destructive testing methods such as ultrasound or X-ray inspection. Nevertheless, there are technical challenges and limitations in acquiring X-ray images and determining the presence of foreign materials. In particular, small or low-density foreign materials may not be visible even with X-ray equipment, and noise can also make foreign objects difficult to detect. Moreover, to meet manufacturing speed requirements, the X-ray acquisition time must be reduced, which can result in a very low signal-to-noise ratio (SNR) and lower foreign-material detection accuracy. Therefore, in this paper, we propose a five-step approach to overcome the limitations of low-quality images that make it challenging to detect foreign substances. First, the global contrast of the X-ray images is increased through histogram stretching. Second, to strengthen high-frequency signals and local contrast, a local contrast enhancement technique is applied. Third, to improve edge clearness, unsharp masking is applied, making objects more visible. Fourth, the Residual Dense Block (RDB) super-resolution method is used for noise reduction and image enhancement. Finally, the YOLOv5 algorithm is trained and used to detect foreign objects. Using the proposed method, experimental results show an improvement of more than 10% in performance metrics such as precision compared with the original low-quality images.
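
Steps one and three of the pipeline described above (global histogram stretching and unsharp masking) are standard operations; the sketch below illustrates them with NumPy and OpenCV. The percentile range, sigma, and amount parameters are illustrative assumptions, not the authors' settings.

```python
import numpy as np
import cv2  # OpenCV, used here only for Gaussian blurring

def histogram_stretch(img: np.ndarray, low_pct=1, high_pct=99) -> np.ndarray:
    """Step 1: global contrast stretch, mapping the chosen percentile range to 0-255."""
    lo, hi = np.percentile(img, (low_pct, high_pct))
    stretched = np.clip((img.astype(np.float32) - lo) / max(hi - lo, 1e-6), 0, 1)
    return (stretched * 255).astype(np.uint8)

def unsharp_mask(img: np.ndarray, sigma=2.0, amount=1.0) -> np.ndarray:
    """Step 3: sharpen edges by adding back the difference from a blurred copy."""
    blurred = cv2.GaussianBlur(img, (0, 0), sigma).astype(np.float32)
    sharp = img.astype(np.float32) + amount * (img.astype(np.float32) - blurred)
    return np.clip(sharp, 0, 255).astype(np.uint8)

# enhanced = unsharp_mask(histogram_stretch(xray_image))   # xray_image: 8-bit grayscale array
```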