• Title/Summary/Keyword: Image pixel

A Study to Improve the Image Quality of Low-quality Public CCTV (저화질 공공 CCTV의 영상 화질 개선 방안 연구)

  • Young-Woo Kwon;Sung-hyun Baek;Bo-Soon Kim;Sung-Hoon Oh;Young-Jun Jeon;Seok-Chan Jeong
    • The Journal of Bigdata / v.6 no.2 / pp.125-137 / 2021
  • More than 1.3 million CCTV cameras are installed in Korea, a number growing by over 15% annually. However, because the budget is limited relative to installation demand, much of the infrastructure consists of low-quality 500,000-pixel (0.5-megapixel) CCTV, which limits the identification of objects in the video. Public CCTV is highly useful in fields such as crime prevention, traffic information collection and control, facility management, and fire prevention. In particular, because cameras are installed at height, they play a substantial role in solving diverse crimes, and their use is increasing. Nevertheless, public CCTV currently operates with latent problems: environmental factors such as fog, snow, and rain can make footage unidentifiable, and installed low-quality cameras yield low-quality images. Therefore, to remove the typical low-quality elements of public CCTV, this study proposes a method for attenuating scattered light in the image caused by dust, water droplets, fog, and similar factors, together with a deep-learning-based method for upscaling the input video to 4K or higher quality; one plausible dehazing step is sketched below.
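
The abstract does not name its scattered-light attenuation algorithm. As a hedged illustration, the Python sketch below applies the dark channel prior, a widely used dehazing technique for fog and haze; the patch size, omega, and transmission floor are illustrative defaults, not values from the paper.

```python
# Dark channel prior dehazing (He et al.): estimate the haze veil from the
# darkest local channel values, then invert the scattering model.
# Illustrative only; the paper does not specify its exact algorithm.
import numpy as np
import cv2

def dehaze_dark_channel(img, patch=15, omega=0.95, t_min=0.1):
    """img: HxWx3 float32 in [0, 1]; returns the dehazed image."""
    kernel = np.ones((patch, patch), np.uint8)
    # Dark channel: per-pixel channel minimum, then a patch-wise minimum.
    dark = cv2.erode(img.min(axis=2), kernel)
    # Atmospheric light: mean color of the brightest 0.1% dark-channel pixels.
    n = max(1, dark.size // 1000)
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)
    # Transmission estimate, clipped to keep the recovery well-conditioned.
    t = 1.0 - omega * cv2.erode((img / A).min(axis=2), kernel)
    t = np.clip(t, t_min, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)
```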

Analysis and Orange Utilization of Training Data and Basic Artificial Neural Network Development Results of Non-majors (비전공자 학부생의 훈련데이터와 기초 인공신경망 개발 결과 분석 및 Orange 활용)

  • Kyeong Hur
    • Journal of Practical Engineering Education / v.15 no.2 / pp.381-388 / 2023
  • Through artificial neural network education using spreadsheets, non-major undergraduate students can understand the operating principle of artificial neural networks and develop their own artificial neural network software. Training on the operating principle begins with generating training data and assigning correct-answer labels. Students then learn how the output value is calculated from the firing and activation function of each artificial neuron and from the parameters of the input, hidden, and output layers. Finally, they learn how to calculate the error between the correct label of each training datum and the output computed by the network, and how to find the input-, hidden-, and output-layer parameters that minimize the total sum of squared errors. This spreadsheet-based training on the operating principles of artificial neural networks was conducted for non-major undergraduate students, and their image training data and basic artificial neural network development results were collected. In this paper, we analyze two types of collected training data, built from small 12-pixel images, and the corresponding artificial neural network software, and we present methods and execution results for using the collected training data with the Orange machine learning and analysis tool.
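
As a hedged illustration of the training loop the abstract walks through (forward pass, activation, total sum of squared errors, and parameter updates), here is a minimal numpy sketch. The 12-pixel inputs follow the paper; the hidden-layer size, random data, initialization, and learning rate are assumptions.

```python
# Forward pass, sum of squared errors, and gradient-descent updates, the
# same steps students trace in the spreadsheet. Hidden size (4),
# initialization, and learning rate (0.5) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(8, 12)).astype(float)  # eight 12-pixel images
y = rng.integers(0, 2, size=(8, 1)).astype(float)   # correct-answer labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))                 # activation function

W1 = rng.normal(size=(12, 4))                       # input -> hidden
W2 = rng.normal(size=(4, 1))                        # hidden -> output

for _ in range(2000):
    h = sigmoid(X @ W1)                             # hidden-layer firing
    out = sigmoid(h @ W2)                           # output value
    err = out - y
    sse = (err ** 2).sum()                          # total sum of squared errors
    d_out = err * out * (1 - out)                   # output-layer gradient
    d_hid = (d_out @ W2.T) * h * (1 - h)            # hidden-layer gradient
    W2 -= 0.5 * (h.T @ d_out)                       # updates that shrink sse
    W1 -= 0.5 * (X.T @ d_hid)
```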

Development of Stream Cover Classification Model Using SVM Algorithm based on Drone Remote Sensing (드론원격탐사 기반 SVM 알고리즘을 활용한 하천 피복 분류 모델 개발)

  • Jeong, Kyeong-So;Go, Seong-Hwan;Lee, Kyeong-Kyu;Park, Jong-Hwa
    • Journal of Korean Society of Rural Planning / v.30 no.1 / pp.57-66 / 2024
  • This study aimed to develop a precise vegetation cover classification model for small streams by combining drone remote sensing with support vector machine (SVM) techniques. The chosen study area was the Idong stream in Goesan-gun, Chungbuk, South Korea. The initial stage involved image acquisition with an eBee fixed-wing drone carrying two sensors: a S.O.D.A visible camera for capturing detailed visuals and a Sequoia+ multispectral sensor for gathering rich spectral data. The survey captured the stream's features on August 18, 2023. From the multispectral images, a range of vegetation indices were calculated, including the widely used normalized difference vegetation index (NDVI), the soil-adjusted vegetation index (SAVI), which factors in soil background, and the normalized difference water index (NDWI) for identifying water bodies. The third stage was the development of an SVM model based on the calculated vegetation indices; the RBF kernel was chosen, and optimal values for the cost (C) and gamma hyperparameters were determined. The results are as follows. (a) High-resolution imaging: the drone-based acquisition delivered high-resolution images (1 cm/pixel) of the Idong stream that effectively captured the stream's morphology, including its width, variations in the streambed, and the vegetation cover patterns on the stream banks and bed. (b) Vegetation insights through indices: the calculated vegetation indices revealed distinct spatial patterns in vegetation cover and moisture content; NDVI emerged as the strongest indicator of vegetation cover, while SAVI and NDWI provided insight into moisture variations. (c) Accurate classification with SVM: the SVM model, driven by the combination of NDVI, SAVI, and NDWI, achieved an accuracy of 0.903, calculated from the confusion matrix, translating to precise classification of vegetation, soil, and water within the stream area. These findings demonstrate the effectiveness of drone remote sensing and SVM techniques for developing accurate vegetation cover classification models for small streams. Such models hold great potential for applications including stream monitoring, informed management practices, and effective stream restoration. By incorporating images and additional details about the specific drone and sensor technology, we can gain a deeper understanding of small streams and develop effective strategies for stream protection and management.
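
A hedged sketch of the classification stage: the three indices computed from band arrays, then an RBF-kernel SVM tuned over a C/gamma grid. The band variable names, the SAVI soil factor L = 0.5, and the candidate grids are assumptions; the paper states only that the RBF kernel was used and that C and gamma were optimized.

```python
# Vegetation indices from multispectral bands, then an RBF-kernel SVM with
# a grid search over C and gamma. Band names and grids are assumptions.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def index_features(nir, red, green, L=0.5):
    """Per-pixel NDVI/SAVI/NDWI features from band arrays of equal shape."""
    ndvi = (nir - red) / (nir + red)
    savi = (1 + L) * (nir - red) / (nir + red + L)  # soil-adjusted (L assumed)
    ndwi = (green - nir) / (green + nir)            # water index
    return np.stack([ndvi, savi, ndwi], axis=-1).reshape(-1, 3)

def fit_rbf_svm(X, y):
    """X: (n_pixels, 3) index features; y: vegetation/soil/water labels."""
    grid = {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1, 10]}
    search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5)
    search.fit(X, y)
    return search.best_estimator_                   # tuned C and gamma
```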

Red Tide Detection through Image Fusion of GOCI and Landsat OLI (GOCI와 Landsat OLI 영상 융합을 통한 적조 탐지)

  • Shin, Jisun;Kim, Keunyong;Min, Jee-Eun;Ryu, Joo-Hyung
    • Korean Journal of Remote Sensing / v.34 no.2_2 / pp.377-391 / 2018
  • To monitor red tide efficiently over a wide range, the need for red tide detection using remote sensing is increasing. Previous studies, however, have focused on developing red tide detection algorithms for ocean colour sensors. In this study, we propose using multiple sensors to improve the inaccuracy of red tide detection and remote sensing data in coastal areas with high turbidity, which has been pointed out as a limitation of satellite-based red tide monitoring. The study area was selected based on red tide information provided by the National Institute of Fisheries Science, and spatial fusion and spectral-based fusion were attempted using GOCI images from an ocean colour sensor and Landsat OLI images from a terrestrial sensor. Through spatial fusion of the two images, improved detection results were obtained both for coastal red tide, which could not be observed in GOCI images, and for outer sea areas, where the quality of the Landsat OLI image was low. Spectral-based fusion performed at the feature level and at the raw-data level produced no significant difference in the derived red tide distribution patterns. However, in the feature-level method, the red tide area tends to be overestimated as the spatial resolution of the image decreases. Pixel segmentation by linear spectral unmixing showed that the difference in red tide area grows as the number of pixels with a low red tide ratio increases. At the raw-data level, the Gram-Schmidt sharpening method estimated a somewhat larger area than the PC spectral sharpening method, but no significant difference was observed. This study shows that coastal red tide in highly turbid water, as well as red tide in outer sea areas, can be detected through spatial fusion of ocean colour and terrestrial sensors. By presenting various spectral-based fusion methods, it also suggests a more accurate way to estimate red tide area. These results are expected to provide more precise detection of red tide around the Korean peninsula and the accurate red tide area information needed to determine countermeasures for effective red tide control.
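
A minimal sketch of the linear spectral unmixing step used for pixel segmentation, under the usual linear mixture model: each pixel spectrum is a non-negative combination of endmember spectra, and the red tide fraction is read off the normalized abundances. The endmember matrix is a placeholder, since the abstract does not list the endmembers used.

```python
# Linear spectral unmixing sketch: solve pixel = endmembers @ abundances
# with non-negative least squares, then normalize the abundances.
# The endmember spectra are placeholders, not the paper's.
import numpy as np
from scipy.optimize import nnls

def unmix(pixel_spectrum, endmembers):
    """pixel_spectrum: (n_bands,); endmembers: (n_bands, n_endmembers).
    Returns abundance fractions summing to ~1 (e.g., red tide vs. seawater)."""
    abundances, _ = nnls(endmembers, pixel_spectrum)
    total = abundances.sum()
    return abundances / total if total > 0 else abundances
```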

Analysis of Respiratory Motional Effect on the Cone-beam CT Image (Cone-beam CT 영상 획득 시 호흡에 의한 영향 분석)

  • Song, Ju-Young;Nah, Byung-Sik;Chung, Woong-Ki;Ahn, Sung-Ja;Nam, Taek-Keun;Yoon, Mi-Sun
    • Progress in Medical Physics / v.18 no.2 / pp.81-86 / 2007
  • Cone-beam CT (CBCT) acquired with an on-board imager (OBI) attached to a linear accelerator is widely used for image-guided radiation therapy. In this study, the effect of respiratory motion on CBCT image quality was evaluated. A phantom system was constructed to simulate respiratory motion. One part of the system is a moving plate with a motor-driving component that controls the motion cycle and motion range; the other is a solid water phantom containing a small cubic phantom (2×2×2 cm³) surrounded by air, simulating a small tumor volume in a lung air cavity. CBCT images of the phantom were acquired in 20 different cases and compared with the image in the static state. The 20 cases combine 4 motion ranges (0.7 cm, 1.6 cm, 2.4 cm, 3.1 cm) with 5 motion cycles (2, 3, 4, 5, 6 sec). The difference in CT number in the coronal image was evaluated as the degree of image degradation. Relative to the CT number of the static CBCT image, the average pixel intensity values were 71.07% at the 0.7 cm motion range, 48.88% at 1.6 cm, 30.60% at 2.4 cm, and 17.38% at 3.1 cm. The apparent tumor phantom size, defined as the length over which the CT number differed from air, increased with motion range (2.1 cm with no motion; 2.66 cm at 0.7 cm motion; 3.06 cm at 1.6 cm; 3.62 cm at 2.4 cm; 4.04 cm at 3.1 cm). This study shows that respiratory motion in regions of inhomogeneous structure can degrade CBCT image quality, which must be considered when correcting setup errors using CBCT images.
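
A hedged sketch of the two reported metrics: mean ROI pixel intensity relative to the static scan, and apparent tumor length as the extent over which the CT number differs from air. ROI extraction and the air tolerance are assumptions.

```python
# Two simple metrics matching the study's analysis of the coronal image.
# ROI extraction and the air tolerance are assumptions.
import numpy as np

def relative_intensity(moving_roi, static_roi):
    """Mean pixel value of the moving-phantom ROI as a percentage of the
    static CBCT's mean (e.g., ~71% at the 0.7 cm motion range)."""
    return 100.0 * moving_roi.mean() / static_roi.mean()

def apparent_length_mm(profile, air_value, tol, pixel_mm):
    """Length along a profile where the CT number differs from air,
    i.e., the apparent (motion-blurred) tumor phantom size."""
    return np.count_nonzero(np.abs(profile - air_value) > tol) * pixel_mm
```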

Image Analysis of Electrophoresis Gels by Using Region Growing with Multiple Peaks (다중 피크의 영역 성장 기법에 의한 전기영동 젤의 영상 분석)

  • 김영원;전병환
    • Journal of KIISE: Software and Applications / v.30 no.5_6 / pp.444-453 / 2003
  • With the recent surge of interest in bio-technology (BT), image analysis techniques for electrophoresis gels are in high demand for analyzing genetic information and for discovering new bio-active materials. For this purpose, the location and quantity of each band in a lane must be measured. Most existing techniques search for peaks in a lane profile, but a peak is a poor representative of a band because its location corresponds neither to the brightest pixel nor to the center of gravity. These approaches are also ill-suited to measuring band quantity, because various enhancement processes are commonly applied to the original images to make peaks easier to extract. In this paper, we measure accumulated brightness as the band quantity in each band region, extracted without any process that changes relative brightness, and take the gravity center of the region as the band location. We first extract lanes using an entropy-based threshold calculated on the gel-image histogram, and then propose and apply three methods to extract bands. In the MER method, peaks and valleys are searched along a vertical line that bisects each lane, and the minimum enclosing rectangle of each band is set between successive valleys. In the RG-1 method, each band is extracted by region growing with a peak as a seed, separating overlapping neighbor bands. In the RG-2 method, peaks and valleys are searched along two vertical lines that trisect each lane; left and right peaks that appear to belong to the same band are paired, and each band region is grown from one peak, or from both if both exist. Comparing the three methods on band location and quantity, the average location errors of MER, RG-1, and RG-2 were 6%, 3%, and 1%, respectively, with the lane length normalized to a unit value, and the average quantity errors were 8%, 5%, and 2%, respectively, with the total band quantity normalized to a unit value. In conclusion, RG-2 proved the most reliable for measuring band location and quantity.
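
A hedged sketch of the RG-1 step: grow a band region from a peak seed over 4-connected pixels whose brightness stays above a threshold, then report accumulated brightness as the band quantity and the brightness-weighted center of gravity as the band location. The membership rule is an assumption; the paper does not spell out its exact growing criterion.

```python
# Region growing from a peak seed (the RG-1 idea). The brightness
# threshold as the stopping rule is an assumed membership criterion.
import numpy as np
from collections import deque

def grow_band(img, seed, thresh):
    """img: 2D brightness array; seed: (row, col) peak.
    Returns (mask, band quantity, band location)."""
    mask = np.zeros(img.shape, dtype=bool)
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if not (0 <= r < img.shape[0] and 0 <= c < img.shape[1]):
            continue
        if mask[r, c] or img[r, c] < thresh:
            continue
        mask[r, c] = True
        queue.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    weights = img[mask]                      # row-major, matches nonzero()
    quantity = weights.sum()                 # accumulated brightness
    rows, cols = np.nonzero(mask)
    center = ((rows * weights).sum() / weights.sum(),
              (cols * weights).sum() / weights.sum())  # gravity center
    return mask, quantity, center
```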

The Usefulness of LEUR Collimator for 1-Day Basal/Acetazolamide Brain Perfusion SPECT (1-Day Protocol을 사용하는 Brain Perfusion SPECT에서 LEUR 콜리메이터의 유용성)

  • Choi, Jin-Wook;Kim, Soo-Mee;Lee, Hyung-Jin;Kim, Jin-Eui;Kim, Hyun-Joo;Lee, Jae-Sung;Lee, Dong-Soo
    • The Korean Journal of Nuclear Medicine Technology / v.15 no.1 / pp.94-100 / 2011
  • Purpose: Basal/acetazolamide-challenged brain perfusion SPECT is very useful for assessing cerebral perfusion and vascular reserve. However, because collimator selection trades off sensitivity against spatial resolution, choosing the optimal collimator is crucial. In this study, we examined three collimators to select the optimal one for 1-day brain perfusion SPECT. Materials and Methods: Three collimators, low energy high resolution-parallel beam (LEHR-par), ultra resolution-fan beam (LEUR-fan), and super fine-fan beam (LESFR-fan), were tested for 1-day imaging using a Triad XLT 9 (TRIONIX). SPECT images of a Hoffman 3D brain phantom filled with 170 MBq of 99mTc, and of a normal volunteer, were acquired with a protocol of 50 kcts/frame and 3-degree detector rotation. Filtered backprojection (FBP) reconstruction with a Butterworth filter (cut-off frequencies 0.3 to 0.5) was performed, and quantitative and qualitative assessments of the three collimators were carried out. Results: Blind tests showed that LESFR-fan provided the best image quality for the Hoffman brain phantom and the volunteer; however, images from all three collimators were rated 'acceptable'. On the other hand, to reach an equivalent signal-to-noise ratio (SNR), the total acquisition time or radioactivity dose for LESFR-fan would have to be increased to almost twice that of LEUR-fan and LEHR-par. The volunteer test indicated that, compared with LESFR-fan, total acquisition time in clinical practice could be reduced by approximately 10 to 14 min using LEUR-fan or LEHR-par without significant loss of image quality. Conclusion: Although LESFR-fan provides the best image quality, it requires significantly more acquisition time than LEUR-fan and LEHR-par to reach a reasonable SNR. Since there is no significant clinical difference between the three collimators, LEUR-fan and LEHR-par can be recommended as optimal collimators for 1-day brain perfusion imaging with respect to image quality and SNR.
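
The "almost twice" figure is consistent with counting statistics, where SNR scales with the square root of collected counts; a back-of-the-envelope sketch follows, with illustrative numbers rather than the paper's measurements.

```python
# SNR in counting statistics scales as sqrt(counts), and counts scale with
# acquisition time (or dose). Numbers here are illustrative.
def time_factor(snr_target, snr_current):
    """Factor by which time/dose must grow for snr_current to reach
    snr_target, assuming SNR ~ sqrt(counts)."""
    return (snr_target / snr_current) ** 2

# A collimator delivering ~70% of the reference SNR needs ~2x the time or
# dose, in line with the 'almost twice' reported for LESFR-fan.
print(time_factor(1.0, 0.71))
```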

Verification of Indicator Rotation Correction Function of a Treatment Planning Program for Stereotactic Radiosurgery (방사선수술치료계획 프로그램의 지시자 회전 오차 교정 기능 점검)

  • Chung, Hyun-Tai;Lee, Re-Na
    • Journal of Radiation Protection and Research / v.33 no.2 / pp.47-51 / 2008
  • Objective: This study analyzed errors due to rotation or tilt of the magnetic resonance (MR) imaging indicator during image acquisition for stereotactic radiosurgery, and verified the error correction procedure of a commercially available stereotactic neurosurgery treatment planning program. Materials and Methods: Software virtual phantoms were built from stereotactic images generated with a commercial programming language, Interactive Data Language (version 5.5). The image slice thickness was 0.5 mm, the pixel size 0.5×0.5 mm, the field of view 256 mm, and the image resolution 512×512. The images were generated under the DICOM 3.0 standard so that they could be used with Leksell GammaPlan®. To verify the rotation error correction function of Leksell GammaPlan®, 45 measurement points were arranged in five axial planes. Each axial plane held nine measurement points along a square of side 100 mm; the center of the square lay on the z-axis, with one measurement point on the z-axis itself. The five axial planes were placed at z = -50.0, -30.0, 0.0, 30.0, and 50.0 mm. The virtual phantom was rotated by 3° around one of the x, y, and z-axes, around two of the three axes, and around all three axes. The positional errors of the rotated measurement points were measured with Leksell GammaPlan® and the correction function was verified. Results: The image registration error of the virtual phantom images was 0.1±0.1 mm, within the requirement for stereotactic images. The maximum theoretical positional errors of the measurement points were 2.6 mm for rotation around one axis, 3.7 mm for rotation around two axes, and 4.5 mm for rotation around three axes. The measured positional errors were 0.1±0.1 mm for rotation around a single axis and 0.2±0.2 mm for double and triple axes. These small errors verify that the rotation error correction function of Leksell GammaPlan® works correctly. Conclusion: A virtual phantom was built to verify software functions of a stereotactic neurosurgery treatment planning program. The error correction function of a commercial treatment planning program worked within the nominal error range. The virtual phantom of this study can be applied in many other fields to verify various functions of treatment planning programs.
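
A sketch of the geometry behind the verification: build the 45 measurement points described above, rotate them by 3 degrees about an axis, and measure each point's displacement. The point layout and axis conventions are reconstructions from the abstract, so the printed maximum need not match the paper's quoted values exactly.

```python
# Rotate the 45 measurement points by 3 degrees about the x-axis and print
# the largest displacement. Point layout is reconstructed from the text.
import numpy as np

def rot_x(deg):
    a = np.deg2rad(deg)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(a), -np.sin(a)],
                     [0.0, np.sin(a), np.cos(a)]])

# Nine points per plane on a 100 mm square (corners, edge midpoints,
# center) at z = -50, -30, 0, 30, 50 mm; the center lies on the z-axis.
xy = [(x, y) for x in (-50.0, 0.0, 50.0) for y in (-50.0, 0.0, 50.0)]
points = np.array([(x, y, z) for z in (-50.0, -30.0, 0.0, 30.0, 50.0)
                   for x, y in xy])

displaced = points @ rot_x(3.0).T
print(np.linalg.norm(displaced - points, axis=1).max())  # max shift, mm
```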

Suggested Protocol for Efficient Medical Image Information Exchange in Korea: Breast MRI (효율적 의료영상정보교류를 위한 프로토콜 제안: 유방자기공명영상)

  • Park, Ji Hee;Choi, Seon-Hyeong;Kim, Sungjun;Yong, Hwan Seok;Woo, Hyunsik;Jin, Kwang Nam;Jeong, Woo Kyoung;Shin, Na-Young;Choi, Moon Hyung;Jung, Seung Eun
    • Journal of the Korean Society of Radiology / v.79 no.5 / pp.254-258 / 2018
  • Purpose: To establish an appropriate protocol for breast magnetic resonance imaging (MRI) as part of a study of image quality standards intended to enhance the effectiveness of medical image information exchange, a component of building and activating clinical information exchange for healthcare informatization. Materials and Methods: The recommended protocols for breast MRI scans were reviewed and a questionnaire was prepared by the responsible researcher. A panel of 9 radiologists specializing in breast imaging was then assembled in Korea, and the expert panel conducted three rounds of the Delphi process to reach consensus on the breast MRI protocol. Results: The agreed breast MRI recommendation protocol requires a 1.5 Tesla or higher device acquiring images in the prone position with a dedicated breast coil, and includes T2-weighted and pre-contrast T1-weighted images. Contrast-enhanced images are acquired at least twice, including acquisitions at 60-120 seconds and after 4 minutes. The contrast-enhanced T1-weighted image should have a slice thickness of less than 3 mm, a temporal resolution of less than 120 seconds, and an in-plane pixel resolution of less than 1.5 mm². Conclusion: The Delphi agreement of the domestic breast imaging specialist group established a recommended protocol for effective breast MRI.
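
A compact, checkable encoding of the agreed protocol. The numeric limits and requirements come straight from the abstract; the key names and validation logic are illustrative.

```python
# The agreed protocol as data, plus a checker for the contrast-enhanced
# T1 limits. Key names and the checking logic are illustrative.
PROTOCOL = {
    "min_field_strength_T": 1.5,
    "position": "prone",
    "coil": "dedicated breast coil",
    "required_sequences": ["T2-weighted", "pre-contrast T1-weighted"],
    "min_post_contrast_acquisitions": 2,  # at 60-120 s and after 4 min
    "ce_t1_max_slice_thickness_mm": 3.0,
    "ce_t1_max_temporal_resolution_s": 120.0,
    "ce_t1_max_inplane_pixel_mm2": 1.5,
}

def ce_t1_ok(thickness_mm, temporal_s, pixel_mm2):
    """True if a contrast-enhanced T1 series meets the agreed limits."""
    return (thickness_mm < PROTOCOL["ce_t1_max_slice_thickness_mm"]
            and temporal_s < PROTOCOL["ce_t1_max_temporal_resolution_s"]
            and pixel_mm2 < PROTOCOL["ce_t1_max_inplane_pixel_mm2"])
```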

Deep Learning Approaches for Accurate Weed Area Assessment in Maize Fields (딥러닝 기반 옥수수 포장의 잡초 면적 평가)

  • Hyeok-jin Bak;Dongwon Kwon;Wan-Gyu Sang;Ho-young Ban;Sungyul Chang;Jae-Kyeong Baek;Yun-Ho Lee;Woo-jin Im;Myung-chul Seo;Jung-Il Cho
    • Korean Journal of Agricultural and Forest Meteorology / v.25 no.1 / pp.17-27 / 2023
  • Weeds are one of the factors that reduce crop yield through competition for nutrients and photosynthesis. Quantifying weed density is an important part of making accurate decisions for precision weeding. In this study, we quantified the density of weeds in images of maize fields taken by an unmanned aerial vehicle (UAV). UAV image data were collected in maize fields from May 17 to June 4, 2021, when the maize was in its early growth stage. UAV images were labeled into maize and non-maize pixels and then cropped for use as input data to a semantic segmentation network for the maize detection model. We trained models to separate maize from background using the deep learning segmentation networks DeepLabV3+, U-Net, LinkNet, and FPN. All four models showed a pixel accuracy of 0.97; the mIOU scores were 0.76 and 0.74 for DeepLabV3+ and U-Net, higher than the 0.69 of LinkNet and FPN. Weed density was calculated as the difference between the green area classified by ExGR (excess green minus excess red) and the maize area predicted by the model, as sketched below. Each image evaluated for weed density was recombined to quantify and visualize the distribution and density of weeds across a wide range of maize fields. We propose a method to quantify weed density for accurate weeding by effectively separating weeds, maize, and background in UAV images of maize fields.
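
A hedged sketch of that weed-density step: ExGR flags green vegetation, the model's predicted maize mask is subtracted, and the remainder counts as weeds. The ExGR formulation below is the standard one (ExG = 2g - r - b, ExR = 1.4r - g on normalized channels); thresholding at zero is an assumption.

```python
# Weed density from an RGB image and a maize segmentation mask: ExGR
# (ExG - ExR) selects green pixels, then maize pixels are removed.
# The zero threshold on ExGR is an assumption.
import numpy as np

def weed_density(rgb, maize_mask):
    """rgb: HxWx3 float in [0, 1]; maize_mask: HxW bool from the
    segmentation model. Returns the weed fraction of the image."""
    s = rgb.sum(axis=2) + 1e-8
    r, g, b = (rgb[..., i] / s for i in range(3))   # normalized channels
    exgr = (2 * g - r - b) - (1.4 * r - g)          # ExG - ExR
    green = exgr > 0                                # vegetation pixels
    weeds = green & ~maize_mask                     # green but not maize
    return weeds.sum() / weeds.size
```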