• Title/Summary/Keyword: Pixel error


Analysis and Orange Utilization of Training Data and Basic Artificial Neural Network Development Results of Non-majors (비전공자 학부생의 훈련데이터와 기초 인공신경망 개발 결과 분석 및 Orange 활용)

  • Kyeong Hur
    • Journal of Practical Engineering Education
    • /
    • v.15 no.2
    • /
    • pp.381-388
    • /
    • 2023
  • Through artificial neural network education using spreadsheets, non-major undergraduate students can understand the operating principle of artificial neural networks and develop their own artificial neural network software. Training on the operating principle begins with generating training data and assigning correct-answer labels. Students then learn how the output value is calculated from the firing and activation functions of the artificial neurons and from the parameters of the input, hidden, and output layers. Finally, they learn how to calculate the error between the correct label of each training datum and the output computed by the network, and how to find the input-, hidden-, and output-layer parameters that minimize the total sum of squared errors. This spreadsheet-based training on the operating principles of artificial neural networks was conducted for non-major undergraduate students, and their image training data and basic artificial neural network development results were collected. In this paper, we analyze two types of collected training data, built from small 12-pixel images, together with the corresponding artificial neural network software, and present methods and execution results for using the collected training data in the Orange machine learning and analysis tool.
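
The sum-of-squared-errors minimization described above can be sketched in Python. The 12-pixel patterns, labels, and single-neuron model here are illustrative assumptions, not the paper's actual spreadsheet data:

```python
import math

# Illustrative 12-pixel training data with correct-answer labels (assumed patterns).
data = [
    ([1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0], 1.0),  # hypothetical "stripe" image
    ([0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1], 0.0),  # hypothetical inverse image
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(weights, bias, pixels):
    # Firing: weighted sum of the inputs; activation: sigmoid.
    return sigmoid(sum(w * x for w, x in zip(weights, pixels)) + bias)

def sse(weights, bias):
    # Total sum of squared errors between labels and network outputs.
    return sum((label - forward(weights, bias, px)) ** 2 for px, label in data)

weights, bias, lr = [0.0] * 12, 0.0, 0.5
for _ in range(200):  # gradient descent on the sum of squared errors
    for px, label in data:
        out = forward(weights, bias, px)
        grad = -2.0 * (label - out) * out * (1.0 - out)  # d(error^2)/dz for this sample
        weights = [w - lr * grad * x for w, x in zip(weights, px)]
        bias -= lr * grad

print(round(sse(weights, bias), 4))  # total SSE shrinks as training proceeds
```

The same forward pass and error sum can be laid out cell-by-cell in a spreadsheet, which is the form the students actually built.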

Matching Points Filtering Applied Panorama Image Processing Using SURF and RANSAC Algorithm (SURF와 RANSAC 알고리즘을 이용한 대응점 필터링 적용 파노라마 이미지 처리)

  • Kim, Jeongho;Kim, Daewon
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.4
    • /
    • pp.144-159
    • /
    • 2014
  • Techniques for combining multiple pictures into a single panoramic image are widely studied in fields such as computer vision and computer graphics. Panoramic images are useful in areas that require wide-angle shots, such as virtual reality and robot vision, as a way to overcome the limits on picture angle, resolution, and internal information of an image taken by a single camera, and a panorama usually provides a better sense of immersion than a plain image. Although there are many ways to build a panoramic image, most of them extract feature points and matching points from each image, then apply the RANSAC (RANdom SAmple Consensus) algorithm to the matching points and use a homography matrix to transform the images. The SURF (Speeded Up Robust Features) algorithm, used in this paper to extract feature points, relies on an image's grayscale and local spatial information. SURF is widely used because it is robust to changes in image scale and viewpoint and is faster than the SIFT (Scale Invariant Feature Transform) algorithm. However, SURF can produce erroneous feature points, which slows the RANSAC stage and can increase CPU usage, and errors in the detected matching points are a critical cause of degraded panorama accuracy and clarity. In this paper, in order to minimize matching-point extraction errors, we apply an intermediate filtering step that uses the RGB pixel values in the 3×3 region around each matching point's coordinates to remove wrong matching points. We also present analysis and evaluation results on the improved panorama production speed, CPU usage rate, reduction rate of extracted matching points, and accuracy.
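
The 3×3 neighborhood filtering idea can be sketched as follows: a candidate match is kept only if the RGB values around the two matched coordinates agree. The images, coordinates, and threshold below are illustrative assumptions, not the paper's actual filtering rule:

```python
# Sketch of the 3×3 region check: compare the RGB neighborhoods around a pair
# of matched coordinates and drop the match if they disagree too much.

def patch(img, x, y):
    """Collect the 3×3 block of RGB pixels centered on (x, y)."""
    return [img[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

def patch_distance(img_a, pt_a, img_b, pt_b):
    """Mean absolute RGB difference between the two 3×3 neighborhoods."""
    pa, pb = patch(img_a, *pt_a), patch(img_b, *pt_b)
    diffs = [abs(ca - cb) for qa, qb in zip(pa, pb) for ca, cb in zip(qa, qb)]
    return sum(diffs) / len(diffs)

def filter_matches(img_a, img_b, matches, threshold=30.0):
    """Keep only matches whose surrounding pixels are similar enough."""
    return [(pa, pb) for pa, pb in matches
            if patch_distance(img_a, pa, img_b, pb) <= threshold]

# Tiny 7×7 test images: identical except one corrupted corner region.
img1 = [[(10 * r + c, 0, 0) for c in range(7)] for r in range(7)]
img2 = [row[:] for row in img1]
for r in range(4, 7):
    for c in range(4, 7):
        img2[r][c] = (255, 255, 255)

good = ((1, 1), (1, 1))  # neighborhoods agree -> kept
bad = ((5, 5), (5, 5))   # neighborhood corrupted in img2 -> removed
print(filter_matches(img1, img2, [good, bad]))  # → [((1, 1), (1, 1))]
```

In the actual pipeline this filter would run between SURF matching and RANSAC, so fewer wrong correspondences reach the homography estimation.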

Formulation of a reference coordinate system of three-dimensional head & neck images: Part II. Reproducibility of the horizontal reference plane and midsagittal plane (3차원 두부영상의 기준좌표계 설정을 위한 연구: II부 수평기준면과 정중시상면의 재현성)

  • Park, Jae-Woo;Kim, Nam-Kug;Chang, Young-Il
    • The korean journal of orthodontics
    • /
    • v.35 no.6 s.113
    • /
    • pp.475-484
    • /
    • 2005
  • This study was performed to investigate the reproducibility of the horizontal and midsagittal planes, and to suggest a stable coordinate system for three-dimensional (3D) cephalometric analysis. Eighteen CT scans were taken, and the coordinate system was established using 7 reference points marked on a volume model, with no more than 4 points on the same plane. The 3D landmarks were selected in V works (Cybermed Inc., Seoul, Korea), then exported to V surgery (Cybermed Inc., Seoul, Korea) to calculate the coordinate values. All the landmarks were digitized twice with a lapse of 2 weeks. The horizontal and midsagittal planes were constructed and their reproducibility was evaluated. There was no significant difference in the reproducibility of the horizontal reference planes, but FH planes were more reproducible than the other horizontal planes. FH planes showed no difference between the planes constructed with 3 out of the 4 points. The angle of intersection made by 2 FH planes composed of both Po and one Or showed less than a 1° difference; the same held when the 2 FH planes were composed of both Or and one Po, but the latter cases showed a significantly smaller error. The reproducibility of the midsagittal plane was reliable, with an error range of 0.61° to 1.93°, except for 5 constructions (FMS-Nc, Na-Rh, Na-ANS, Rh-ANS, and FR-PNS). The 3D coordinate system may be constructed with 3 planes: the horizontal plane constructed by both Po and right Or; the midsagittal plane perpendicular to the horizontal plane, including the midpoint of the foramen spinosum and Nc; and the coronal plane perpendicular to the horizontal and midsagittal planes, including point clinoidale, or sella, or PNS.
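
The angle of intersection between two landmark-defined planes follows from the cross-product normals of the point triples. The coordinates below are illustrative values, not actual cephalometric landmark positions:

```python
import math

def normal(p, q, r):
    """Unit normal of the plane through 3D points p, q, r (cross product)."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = math.sqrt(sum(c * c for c in n))
    return [c / length for c in n]

def plane_angle_deg(plane_a, plane_b):
    """Angle of intersection between two planes, in degrees."""
    na, nb = normal(*plane_a), normal(*plane_b)
    cos = abs(sum(a * b for a, b in zip(na, nb)))  # abs(): planes have no sign
    return math.degrees(math.acos(min(1.0, cos)))

fh1 = [(0, 0, 0), (10, 0, 0), (0, 10, 0)]      # reference plane (illustrative)
fh2 = [(0, 0, 0), (10, 0, 0), (0, 10, 0.15)]   # slightly tilted variant
print(round(plane_angle_deg(fh1, fh2), 2))  # → 0.86
```

Repeating this computation for each pair of candidate FH planes is how sub-1° intersection angles like those reported above can be quantified.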

Development of an Offline Based Internal Organ Motion Verification System during Treatment Using Sequential Cine EPID Images (연속촬영 전자조사 문 영상을 이용한 오프라인 기반 치료 중 내부 장기 움직임 확인 시스템의 개발)

  • Ju, Sang-Gyu;Hong, Chae-Seon;Huh, Woong;Kim, Min-Kyu;Han, Young-Yih;Shin, Eun-Hyuk;Shin, Jung-Suk;Kim, Jing-Sung;Park, Hee-Chul;Ahn, Sung-Hwan;Lim, Do-Hoon;Choi, Doo-Ho
    • Progress in Medical Physics
    • /
    • v.23 no.2
    • /
    • pp.91-98
    • /
    • 2012
  • Verification of internal organ motion during treatment and its feedback are essential for accurate dose delivery to a moving target. We developed an offline internal organ motion verification system (IMVS) using cine EPID images and evaluated its accuracy and availability through a phantom study. For verification of organ motion using live cine EPID images, the self-developed analysis software employs a pattern-matching algorithm based on an internal surrogate, such as the diaphragm, that is easily distinguishable and represents organ motion in the treatment field. For the system performance test, we developed a linear motion phantom consisting of a human-body-shaped phantom with a fake tumor in the lung, a linear motion cart, and control software. The phantom was operated with a motion of 2 cm at 4 sec per cycle, and cine EPID images were obtained at rates of 3.3 and 6.6 frames per sec (2 MU/frame) with 1,024×768 pixels on a linear accelerator (10 MV X-rays). Organ motion of the target was tracked using the self-developed analysis software. Results were compared with the planned data of the motion phantom and with data from a video-based tracking system using an external surrogate (RPM, Varian, USA) in order to evaluate accuracy. For quantitative analysis, we analyzed the correlation between the two data sets in terms of average cycle (peak to peak), amplitude, and pattern (RMS, root mean square) of motion. The average cycles of motion from the IMVS and RPM systems were 3.98±0.11 sec (IMVS, 3.3 fps), 4.005±0.001 sec (IMVS, 6.6 fps), and 3.95±0.02 sec (RPM), respectively, in good agreement with the real value (4 sec/cycle). The average amplitude of motion tracked by our system was 1.85±0.02 cm (3.3 fps) and 1.94±0.02 cm (6.6 fps), differing from the actual value (2 cm) by 0.15 cm (7.5% error) and 0.06 cm (3% error), respectively, due to the time resolution of image acquisition. In the analysis of motion pattern, the RMS value from the cine EPID images at 3.3 fps (0.1044) was slightly larger than that from 6.6 fps (0.0480). The organ motion verification system using sequential cine EPID images with an internal surrogate represented the motion within 3% error in this preliminary phantom study. The system can be implemented for clinical purposes, including verification of organ motion during treatment, comparison with 4D treatment planning data, and feedback for accurate dose delivery to the moving target.
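
The quantitative comparison above (peak-to-peak amplitude and RMS pattern difference versus frame rate) can be sketched with a simulated trace. The 2 cm, 4 sec/cycle sinusoid matches the phantom description, but the tracking itself is simulated here, and the 0.1 cm quantization is an illustrative pixel-size assumption:

```python
import math

def sample_motion(fps, duration=20.0, amp_cm=1.0, period_s=4.0):
    """Sample a 2 cm peak-to-peak, 4 s/cycle sinusoid at the given frame rate."""
    return [amp_cm * math.sin(2 * math.pi * (i / fps) / period_s)
            for i in range(int(duration * fps))]

def peak_to_peak(trace):
    return max(trace) - min(trace)

def rms_diff(trace_a, trace_b):
    """Root-mean-square difference between two equally sampled traces."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(trace_a, trace_b))
                     / len(trace_a))

coarse, fine = sample_motion(3.3), sample_motion(6.6)
# A lower frame rate can miss the true peaks, shrinking the tracked amplitude:
print(round(peak_to_peak(coarse), 3), round(peak_to_peak(fine), 3))
# Quantizing positions to 0.1 cm leaves a small residual RMS pattern error:
tracked = [round(v, 1) for v in fine]
print(round(rms_diff(fine, tracked), 4))
```

This reproduces the trend reported above: the 6.6 fps trace recovers the amplitude better than the 3.3 fps trace, and the residual RMS quantifies the pattern mismatch.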

Stand Volume Estimation of Pinus Koraiensis Using Landsat TM and Forest Inventory (Landsat TM 영상과 현장조사를 이용한 잣나무림 재적 추정)

  • Park, Jin-Woo;Lee, Jung-Soo
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.17 no.1
    • /
    • pp.80-90
    • /
    • 2014
  • The objective of this research is to estimate the stand volume of Pinus koraiensis in the research forest of Kangwon National University, using field-surveyed volume and remote sensing (RS) information. The average volume of the research forest was 307.7 m³/ha with a standard deviation of 168.4 m³/ha. Eleven indices were extracted from the TM image both before and after applying a 3×3 majority filter: the six bands (excluding the thermal band), NDVI, three band ratios (BR1: Band4/Band3, BR2: Band5/Band4, BR3: Band7/Band4), and Tasseled Cap Greenness. Independent variables for the linear regression equation were selected using the mean pixel values of the indices. As a result, NDVI and TC Greenness were chosen as the most suitable indices for regression before and after filtering, with high R-squared values of 0.736 before filtering and 0.753 after filtering. In the error verification, RMSE before and after filtering was about 69.1 m³/ha and 67.5 m³/ha, respectively, and bias was -12.8 m³/ha and 9.7 m³/ha, respectively. Therefore, the regression with filtering was selected as the appropriate model because of its lower RMSE and bias. The stand volume estimated by applying this regression was 160,758 m³, with an average volume of 314 m³/ha. This estimate was 1.2 times the actual stand volume of Pinus koraiensis.
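
The regression and error-verification steps above can be sketched in Python. The NDVI and volume values are illustrative samples, not the paper's plot data:

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def rmse_and_bias(observed, predicted):
    """Error verification: root-mean-square error and mean bias."""
    errors = [p - o for o, p in zip(observed, predicted)]
    rmse = (sum(e * e for e in errors) / len(errors)) ** 0.5
    bias = sum(errors) / len(errors)
    return rmse, bias

ndvi = [0.55, 0.60, 0.65, 0.70, 0.75]         # illustrative mean NDVI per stand
volume = [180.0, 240.0, 290.0, 360.0, 410.0]  # illustrative volume, m^3/ha
a, b = fit_line(ndvi, volume)
predictions = [a * x + b for x in ndvi]
rmse, bias = rmse_and_bias(volume, predictions)
print(round(rmse, 2), round(bias, 2))
```

Note that on the fitting sample the OLS residual bias is zero by construction; the RMSE and bias reported above come from comparing predictions against independently surveyed plots.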

The Evaluation of Meteorological Inputs retrieved from MODIS for Estimation of Gross Primary Productivity in the US Corn Belt Region (MODIS 위성 영상 기반의 일차생산성 알고리즘 입력 기상 자료의 신뢰도 평가: 미국 Corn Belt 지역을 중심으로)

  • Lee, Ji-Hye;Kang, Sin-Kyu;Jang, Keun-Chang;Ko, Jong-Han;Hong, Suk-Young
    • Korean Journal of Remote Sensing
    • /
    • v.27 no.4
    • /
    • pp.481-494
    • /
    • 2011
  • Investigating CO₂ exchange between the biosphere and the atmosphere at regional, continental, and global scales can be approached by combining remote sensing with carbon-cycle process models to estimate vegetation productivity. The NASA Earth Observing System (EOS) currently produces regular global estimates of gross primary productivity (GPP) and annual net primary productivity (NPP) of the entire terrestrial surface at 1 km spatial resolution. The MODIS GPP algorithm uses meteorological data provided by the NASA Data Assimilation Office (DAO), but the coarse spatial resolution of the DAO data (1° × 1.25°) cannot capture sub-pixel heterogeneity or complex terrain. In this study, we estimated meteorological inputs from MODIS products of the AQUA and TERRA satellites at 5 km spatial resolution for the purpose of finer GPP and/or NPP determinations. The derived variables included temperature, VPD, and solar radiation. Data from seven AmeriFlux towers located in the Corn Belt region were used to evaluate the MODIS-derived inputs. MODIS-derived air temperature values showed good agreement with ground-based observations: the mean error (ME) ranged from -0.9°C to +5.2°C and the coefficient of correlation (R) from 0.83 to 0.98. VPD agreed more coarsely with tower observations (ME = -183.8 Pa to +382.1 Pa; R = 0.51 to 0.92). MODIS-derived shortwave radiation showed a good correlation with observations but was slightly overestimated (ME = -0.4 MJ day⁻¹ to +7.9 MJ day⁻¹; R = 0.67 to 0.97). Our results indicate that inputs derived from MODIS atmosphere and land products can provide a useful tool for estimating crop GPP.
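
The two evaluation metrics used above, mean error (ME) and the coefficient of correlation (R), can be sketched directly. The paired values below are illustrative, not AmeriFlux records:

```python
import math

def mean_error(estimated, observed):
    """ME: average of (estimate - observation)."""
    return sum(e - o for e, o in zip(estimated, observed)) / len(observed)

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

tower = [12.0, 15.5, 18.2, 21.0, 24.3]  # observed air temperature, °C (illustrative)
modis = [13.1, 16.0, 19.5, 22.4, 25.0]  # satellite-derived estimate, °C (illustrative)
print(round(mean_error(modis, tower), 2), round(pearson_r(modis, tower), 3))
```

A positive ME with R near 1, as in this toy sample, corresponds to the warm-but-well-correlated behavior the paper reports for MODIS-derived air temperature.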

Evaluation of the Satellite-based Air Temperature for All Sky Conditions Using the Automated Mountain Meteorology Station (AMOS) Records: Gangwon Province Case Study (산악기상관측정보를 이용한 위성정보 기반의 전천후 기온 자료의 평가 - 강원권역을 중심으로)

  • Jang, Keunchang;Won, Myoungsoo;Yoon, Sukhee
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.19 no.1
    • /
    • pp.19-26
    • /
    • 2017
  • Surface air temperature (Tair) is a key variable in meteorology and climatology and a fundamental driver of terrestrial ecosystem functions. Satellite remote sensing with the Moderate Resolution Imaging Spectroradiometer (MODIS) provides an opportunity to monitor Tair. However, problems such as frequent cloud cover and mountainous terrain can cause substantial retrieval error and signal loss in MODIS Tair. In this study, satellite-based Tair was estimated under both clear- and cloudy-sky conditions in Gangwon Province using the Aqua MODIS temperature-profile product (MYD07_L2) and GCOM-W1 Advanced Microwave Scanning Radiometer 2 (AMSR2) brightness temperature (Tb) at the 37 GHz frequency, and was compared with measurements from the Automated Mountain Meteorology Stations (AMOS). Applying an ambient temperature lapse rate improved the retrieval accuracy in the mountainous region, reducing RMSE by approximately 4%. A simple pixel-wise regression method combining the synergetic information from MYD07_L2 Tair and AMSR2 Tb was applied to estimate surface Tair for all sky conditions. The Tair retrievals showed favorable agreement with the AMOS data (r=0.80, RMSE=7.9 K), though underestimation appeared in winter. 61.4% of the Tair retrievals (n=2,657) were obtained under cloudy-sky conditions. These results indicate that satellite remote sensing can produce surface Tair over complex mountainous regions under all sky conditions.
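
The lapse-rate adjustment mentioned above can be sketched as a simple elevation correction. The lapse rate, elevations, and temperature below are illustrative assumptions (the standard environmental lapse rate, not the ambient rate the paper derived):

```python
# Correct a coarse-pixel air temperature to a station elevation along an
# assumed lapse rate, as in the mountainous-region adjustment described above.

LAPSE_RATE_K_PER_M = -0.0065  # standard environmental lapse rate, K/m (assumption)

def adjust_to_elevation(t_pixel_k, pixel_elev_m, station_elev_m,
                        lapse=LAPSE_RATE_K_PER_M):
    """Shift the pixel temperature from the pixel's mean elevation to the
    station elevation along the lapse rate."""
    return t_pixel_k + lapse * (station_elev_m - pixel_elev_m)

# A 5 km pixel with mean elevation 600 m, read at 288.0 K, adjusted to a
# mountain station at 1100 m (illustrative values):
print(adjust_to_elevation(288.0, 600.0, 1100.0))  # → 284.75
```

In the study itself, a pixel-wise regression then blends this elevation-adjusted MODIS Tair with AMSR2 Tb so that cloudy pixels still receive an estimate.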

Assembly and Testing of a Visible and Near-infrared Spectrometer with a Shack-Hartmann Wavefront Sensor (샤크-하트만 센서를 이용한 가시광 및 근적외선 분광기 조립 및 평가)

  • Hwang, Sung Lyoung;Lee, Jun Ho;Jeong, Do Hwan;Hong, Jin Suk;Kim, Young Soo;Kim, Yeon Soo;Kim, Hyun Sook
    • Korean Journal of Optics and Photonics
    • /
    • v.28 no.3
    • /
    • pp.108-115
    • /
    • 2017
  • We report the assembly procedure and performance evaluation of a visible and near-infrared spectrometer for the 400-900 nm wavelength region, which is later to be combined with fore-optics (a telescope) to form an f/2.5 imaging spectrometer with a field of view of ±7.68°. The detector at the final image plane is a 640×480 charge-coupled device with a 24 μm pixel size. The spectrometer is in an Offner relay configuration consisting of two concentric spherical mirrors, the secondary of which is replaced by a convex grating mirror. A double-pass test with an interferometer is often applied when assembling precision optics, but it was excluded from our study because of the large residual wavefront error (WFE) in the optical design: 210 nm (0.35λ at 600 nm) root mean square (RMS). We therefore used a single-pass test method with a Shack-Hartmann wavefront sensor. The final assembly was tested to have an RMS WFE increase of less than 90 nm over the entire field of view, a keystone of 0.08 pixels, a smile of 1.13 pixels, and a spectral resolution of 4.32 nm. During the procedure, we confirmed the validity of using a Shack-Hartmann wavefront sensor to monitor alignment in the assembly of an Offner-like spectrometer.

Creation of Actual CCTV Surveillance Map Using Point Cloud Acquired by Mobile Mapping System (MMS 점군 데이터를 이용한 CCTV의 실질적 감시영역 추출)

  • Choi, Wonjun;Park, Soyeon;Choi, Yoonjo;Hong, Seunghwan;Kim, Namhoon;Sohn, Hong-Gyoo
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_3
    • /
    • pp.1361-1371
    • /
    • 2021
  • Among smart city services, the crime and disaster prevention sector accounted for the highest share, 24%, in 2018. The most important platform for providing real-time situational information is CCTV (Closed-Circuit Television), so it is essential to map the actual CCTV surveillance coverage in order to maximize the usability of CCTV. However, more than one million CCTV units are installed in Korea, including those operated by local governments, and manually identifying the coverage of each is time-consuming and inefficient. This study proposes a method to efficiently construct the actual CCTV surveillance coverage and reduce the time required for decision-makers to manage a situation. First, the exterior orientation parameters and focal lengths of the pre-installed, hard-to-access CCTV cameras were calculated using MMS (Mobile Mapping System) point cloud data, and the FOV (Field of View) was calculated accordingly. Second, using the calculated FOV, the actual surveillance coverage was constructed on grids with 1 m, 2 m, 3 m, 5 m, and 10 m intervals, considering the regions occluded by buildings. Applying our approach to five CCTVs located in Uljin-gun, Gyeongsangbuk-do, the average re-projection error was about 9.31 pixels, and the difference between the calculated CCTV position and the position obtained from the MMS was about 1.688 m on average. With a 3 m grid interval, the surveillance coverage calculated by our method matched the actual coverage obtained from visual inspection with a minimum of 70.21% and a maximum of 93.82%.
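
Once a camera's focal length is recovered, its field of view follows from pinhole geometry and the sensor width. The pixel pitch and focal length below are illustrative, not calibrated values from the study:

```python
import math

def field_of_view_deg(image_width_px, pixel_pitch_mm, focal_length_mm):
    """Horizontal FOV in degrees: 2*atan(sensor_width / (2*focal_length))."""
    sensor_width_mm = image_width_px * pixel_pitch_mm
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# A 1920-pixel-wide image, 2.8 µm pixel pitch, 4 mm lens (illustrative values):
print(round(field_of_view_deg(1920, 0.0028, 4.0), 1))
```

Sweeping this FOV wedge from the estimated camera position over a terrain grid, and clipping it against building footprints, yields the occlusion-aware coverage map described above.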

Analysis of Respiratory Motional Effect on the Cone-beam CT Image (Cone-beam CT 영상 획득 시 호흡에 의한 영향 분석)

  • Song, Ju-Young;Nah, Byung-Sik;Chung, Woong-Ki;Ahn, Sung-Ja;Nam, Taek-Keun;Yoon, Mi-Sun
    • Progress in Medical Physics
    • /
    • v.18 no.2
    • /
    • pp.81-86
    • /
    • 2007
  • The cone-beam CT (CBCT), acquired using the on-board imager (OBI) attached to a linear accelerator, is widely used for image-guided radiation therapy. In this study, the effect of respiratory motion on CBCT image quality was evaluated. A phantom system was constructed to simulate respiratory motion. One part of the system is a moving plate with a motor-driven component that controls the motion cycle and motion range; the other part is a solid water phantom containing a small cubic phantom (2×2×2 cm³) surrounded by air, which simulates a small tumor volume in a lung air cavity. CBCT images of the phantom were acquired for 20 different cases and compared with the image in the static state. The 20 cases combine 4 motion ranges (0.7 cm, 1.6 cm, 2.4 cm, 3.1 cm) with 5 motion cycles (2, 3, 4, 5, 6 sec). The difference in CT number in the coronal image was evaluated as a measure of image-quality degradation. Relative to the static CBCT image, the average pixel intensity values were 71.07% at a 0.7 cm motion range, 48.88% at 1.6 cm, 30.60% at 2.4 cm, and 17.38% at 3.1 cm. The tumor phantom size, defined as the length with a CT number different from air, increased with the motion range (2.1 cm: no motion; 2.66 cm: 0.7 cm motion; 3.06 cm: 1.6 cm motion; 3.62 cm: 2.4 cm motion; 4.04 cm: 3.1 cm motion). This study shows that respiratory motion in a region of inhomogeneous structures can degrade CBCT image quality and must be considered when correcting setup errors using CBCT images.
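
The two trends measured above, a lower peak intensity and a larger apparent size with increasing motion range, can be illustrated with a 1D blur model: acquisition effectively averages the object over its motion. This rectangular-profile model is an illustrative assumption, not a CBCT reconstruction:

```python
def blurred_profile(motion_cm, object_cm=2.0, step_cm=0.1, span_cm=8.0):
    """Average shifted copies of a 1D rectangular object over a motion range."""
    n = int(round(span_cm / step_cm))
    shifts = [i * step_cm for i in range(int(round(motion_cm / step_cm)) + 1)]
    profile = [0.0] * n
    for s in shifts:
        for i in range(n):
            x = i * step_cm - span_cm / 2  # position along the motion axis, cm
            if s - object_cm / 2 <= x <= s + object_cm / 2:
                profile[i] += 1.0 / len(shifts)
    return profile

static = blurred_profile(0.0)
moving = blurred_profile(2.4)  # one of the motion ranges used in the study
peak_ratio = max(moving) / max(static)             # relative peak intensity
apparent_cm = sum(1 for v in moving if v > 1e-9) * 0.1  # apparent object length
print(round(peak_ratio, 2), round(apparent_cm, 1))
```

As in the phantom measurements, the moving object's peak intensity drops below the static value once the motion range exceeds the object size, while its apparent extent grows by roughly the motion range.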
