• Title/Summary/Keyword: an error model


A Study on the Digital Filter Design for Radio Astronomy Using FPGA (FPGA를 이용한 전파천문용 디지털 필터 설계에 관한 기본연구)

  • Jung, Gu-Young;Roh, Duk-Gyoo;Oh, Se-Jin;Yeom, Jae-Hwan;Kang, Yong-Woo;Lee, Chang-Hoon;Chung, Hyun-Soo;Kim, Kwang-Dong
    • Journal of the Institute of Convergence Signal Processing / v.9 no.1 / pp.62-74 / 2008
  • In this paper, we propose the design of a symmetric digital filter core for use in radio astronomy. The FIR filter core is described in VHDL for the Data Acquisition System (DAS) of the Korean VLBI Network (KVN), targeting the Xilinx Virtex-4 SX55 FPGA. The designed digital filter has a symmetric structure that shares filter coefficients to increase hardware efficiency, and the SFFU (Symmetric FIR Filter Unit) uses parallel processing to handle the data throughput within the constrained system clock. For an effective SFFU design, the ISE Foundation unified synthesis software and Core Generator, which provide an excellent GUI environment, were used for IP core synthesis and experiments. The synthesis results show that resource usage, such as slice LUTs, is below 40% and that the maximum operating frequency exceeds 260 MHz. Simulation with Mentor Graphics ModelSim 6.1a confirmed that the SFFU operates without error. To further verify its function, additional simulations were carried out by applying pseudo-signals generated in Matlab. Comparison of the simulation results with the designed digital FIR filter confirmed that the filter performs its basic filtering function correctly, verifying the effectiveness of the symmetric FIR digital filter designed with FPGA and VHDL.
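The coefficient-sharing idea behind the symmetric structure can be sketched in software (a minimal Python sketch, not the paper's VHDL; the function name and the even-length assumption are illustrative): an N-tap FIR whose impulse response satisfies h[k] = h[N-1-k] needs only N/2 stored coefficients and N/2 multiplies per output sample, because mirrored input samples can be pre-added before multiplication.

```python
def symmetric_fir(samples, half_taps):
    """Direct-form symmetric FIR sketch: an N-tap filter with
    h[k] == h[N-1-k] stores only N/2 coefficients and performs N/2
    multiplies per output, because mirrored samples are pre-added
    (the 'pre-adder' stage of a hardware symmetric FIR)."""
    n = 2 * len(half_taps)          # even-length symmetric impulse response
    out = []
    for i in range(len(samples) - n + 1):
        window = samples[i:i + n]
        # pre-adder: fold the window so each coefficient is used once
        acc = sum(h * (window[k] + window[n - 1 - k])
                  for k, h in enumerate(half_taps))
        out.append(acc)
    return out
```

In hardware this halves the multiplier count, which is why coefficient sharing reduces FPGA resource usage.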


Data processing system and spatial-temporal reproducibility assessment of GloSea5 model (GloSea5 모델의 자료처리 시스템 구축 및 시·공간적 재현성평가)

  • Moon, Soojin;Han, Soohee;Choi, Kwangsoon;Song, Junghyun
    • Journal of Korea Water Resources Association / v.49 no.9 / pp.761-771 / 2016
  • GloSea5 (Global Seasonal forecasting system version 5) is operated and provided by the KMA (Korea Meteorological Administration). GloSea5 provides forecast (FCST) and hindcast (HCST) data, and its horizontal resolution is about 60 km ($0.83^{\circ}{\times}0.56^{\circ}$) in the mid-latitudes. To use these data in watershed-scale water management, GloSea5 requires spatial-temporal downscaling; statistical downscaling was therefore applied to correct systematic biases in the variables and improve data reliability. The HCST data are provided in ensemble format, and the highest statistical correlation of ensemble precipitation ($R^2=0.60$, RMSE = 88.92, NSE = 0.57) was obtained for the Yongdam Dam watershed on grid #6. The original GloSea5 (600.1 mm) showed the greatest difference (-26.5%) from observations (816.1 mm) during the summer flood season, whereas the downscaled GloSea5 had an error rate of only -3.1%. Most of the underestimation occurred in flood-season precipitation, and the downscaled GloSea5 substantially restored these precipitation levels. Spatial autocorrelation analysis using seasonal Moran's I showed that the spatial distribution was statistically significant. These results reduce the uncertainty of the original GloSea5 data and substantiate their spatial-temporal accuracy and validity, and this spatial-temporal reproducibility assessment will serve as important basic data for watershed-scale water management.
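The bias-correction step can be illustrated with a toy linear-scaling scheme together with the NSE metric reported above (a sketch under the assumption that simple mean scaling is representative; the paper does not state its exact statistical downscaling formula, and the function names are illustrative):

```python
def linear_scaling(forecast, observed):
    """Toy linear-scaling bias correction: rescale forecasts by the
    ratio of observed to forecast means so long-term totals match."""
    factor = sum(observed) / sum(forecast)
    return [f * factor for f in forecast]

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the
    simulation is no better than predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - num / den
```

In practice such corrections are fitted per month or season so that seasonal biases, like the flood-season underestimation noted above, are corrected separately.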

A study on traffic signal control at signalized intersections in VANETs (VANETs 환경에서 단일 교차로의 교통신호 제어방법에 관한 연구)

  • Chang, Hyeong-Jun;Park, Gwi-Tae
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.10 no.6 / pp.108-117 / 2011
  • The Seoul metropolitan government has operated a traffic signal control system named COSMOS since 2001. COSMOS uses degrees of saturation and congestion calculated from installed loop detectors. At present, inductive loop detectors are generally used for detecting vehicles, but they are inconvenient and costly to maintain because they are buried in the road. In addition, the estimated queue length can be distorted by errors in speed measurement, because it relies only on the speed of vehicles passing the detector. This paper proposes a traffic signal control algorithm that enables smooth traffic flow at an intersection. The proposed algorithm assigns vehicles to a group for each lane and calculates the traffic volume and congestion degree from the traffic information of each group, using VANET (Vehicular Ad-hoc Network) inter-vehicle communication. It does not require the installation of additional devices such as cameras, sensors, or image processing units. The algorithm is evaluated in terms of AJWT (Average Junction Waiting Time) and TQL (Total Queue Length) on a single-intersection model based on the GLD (Green Light District) simulator, and the results are better than those of the random control method and the best-first control method. If real-time control with VANETs becomes widespread, the traffic control technology for signalized intersections using wireless communication suggested in this research will be highly useful.
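The lane-grouping step can be sketched as follows (hypothetical Python with an assumed report format of (vehicle_id, lane, speed); the paper's actual VANET message set and congestion formula are not specified, so the capacity-ratio congestion degree here is illustrative):

```python
def lane_group_stats(reports, capacity):
    """Aggregate vehicle reports (vehicle_id, lane, speed) received over
    inter-vehicle communication into per-lane-group traffic volume, mean
    speed, and a simple congestion degree = volume / lane capacity."""
    groups = {}
    for vid, lane, speed in reports:
        groups.setdefault(lane, []).append(speed)
    stats = {}
    for lane, speeds in groups.items():
        volume = len(speeds)
        stats[lane] = {
            "volume": volume,
            "mean_speed": sum(speeds) / volume,
            "congestion": volume / capacity,
        }
    return stats
```

A signal controller could then allocate green time in proportion to each group's congestion degree, with no roadside detectors required.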

Comparative Analysis of Export Behaviors of Pyeongtaek-Dangjin Port and Daesan Port (평택.당진항과 대산항의 수출행태의 비교분석)

  • Mo, Soowon
    • Journal of Korea Port Economic Association / v.29 no.3 / pp.25-37 / 2013
  • This study investigates the export behaviors of the ports of Pyeongtaek-Dangjin and Daesan, using monthly data covering the period from January 2002 to December 2012. The paper tests whether the exchange rate and industrial production are stationary, rejecting the null hypothesis of a unit root in each of the level variables and of a unit root in the residuals from the cointegrating regression at the 5 percent significance level. An error-correction model is estimated, showing that Daesan port adjusts to short-run disequilibrium faster than Pyeongtaek-Dangjin port. The exchange-rate coefficient of Daesan port is higher than that of Pyeongtaek-Dangjin port, while its industrial-production coefficient is much smaller; in both ports, however, the industrial-production coefficient is much larger than the exchange-rate coefficient. Rolling regressions show that the influence of the exchange rate and industrial production tends to increase for Pyeongtaek-Dangjin port but to decrease for Daesan port. Impulse response functions indicate that export volumes respond much more strongly to positive shocks in industrial production than in the exchange rate, and that the exchange-rate shock decays very quickly while the industrial-production shock is very long-lasting.
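The error-correction estimation can be sketched with a generic two-step Engle-Granger procedure (a minimal numpy sketch, not the authors' exact specification; variable names are illustrative). The adjustment coefficient on the lagged residual measures how fast short-run disequilibrium is corrected, which is the quantity the study compares across the two ports.

```python
import numpy as np

def error_correction_model(y, x):
    """Two-step Engle-Granger sketch: (1) regress y on x in levels and
    keep the residual as the disequilibrium term; (2) regress dy on dx
    and the lagged residual.  A more negative adjustment coefficient
    means faster correction of short-run disequilibrium."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    # step 1: long-run (cointegrating) regression y = a + b*x + u
    A = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    # step 2: short-run dynamics with the lagged error-correction term
    dy, dx, ect = np.diff(y), np.diff(x), resid[:-1]
    B = np.column_stack([np.ones_like(dx), dx, ect])
    (const, short_run, adjustment), *_ = np.linalg.lstsq(B, dy, rcond=None)
    return short_run, adjustment
```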

A Study of Web Application Attack Detection extended ESM Agent (통합보안관리 에이전트를 확장한 웹 어플리케이션 공격 탐지 연구)

  • Kim, Sung-Rak
    • Journal of the Korea Society of Computer and Information / v.12 no.1 s.45 / pp.161-168 / 2007
  • Web attacks exploit structural, logical, and coding errors in web applications rather than vulnerabilities in the web server itself. The Open Web Application Security Project (OWASP) publishes a list of about ten web application vulnerability types, which has made the causes of hacking, its risks, and the severity of the resulting damage well known. The ability to detect and respond to web hacking is therefore important. Filtering methods such as pattern matching and code modification are used for defense, but they cannot detect new types of attacks. Dedicated security products such as an IDS or a web application firewall can also be used, but they require considerable money and effort to operate and maintain, and they tend to generate false positive detections. This research uses a profiling method that extracts the structure of the web application and the attributes of its input parameters, such as type and length. By building a database of the application's structure in advance and checking illegal requests against the profiling identifiers in this database, attacks that exploit missing validation of user input values can be detected. Because integrated security management (ESM) systems are already used in most institutions, the paper presents a model combining the security-monitoring log-collection agent of the integrated security management system with a web application attack detection function, so that attacks against web applications can be detected without deploying additional unit security products.
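The profiling idea, learning each parameter's type and maximum length from normal traffic and flagging requests that violate the learned structure, can be sketched as follows (a toy illustration; the function names, in-memory "database", and slack threshold are assumptions, not the paper's implementation):

```python
def build_profile(observed_requests):
    """Learn, per parameter name, the value type and maximum length seen
    during normal traffic (a toy stand-in for the profiling database)."""
    profile = {}
    for params in observed_requests:
        for name, value in params.items():
            kind = "numeric" if value.isdigit() else "string"
            entry = profile.setdefault(name, {"kind": kind, "max_len": 0})
            entry["max_len"] = max(entry["max_len"], len(value))
            if entry["kind"] != kind:
                entry["kind"] = "string"   # widen on mixed observations
    return profile

def is_suspicious(params, profile, slack=2):
    """Flag a request whose parameters violate the learned structure:
    an unknown name, an unexpected type, or an excessive length."""
    for name, value in params.items():
        entry = profile.get(name)
        if entry is None:
            return True
        if entry["kind"] == "numeric" and not value.isdigit():
            return True
        if len(value) > entry["max_len"] * slack:
            return True
    return False
```

Unlike signature matching, this flags an injection payload in a numeric parameter even if the payload's pattern has never been seen before.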


A Study on High-Precision DEM Generation Using ERS-Envisat SAR Cross-Interferometry (ERS-Envisat SAR Cross-Interferomety를 이용한 고정밀 DEM 생성에 관한 연구)

  • Lee, Won-Jin;Jung, Hyung-Sup;Lu, Zhong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.28 no.4 / pp.431-439 / 2010
  • The cross-interferometric synthetic aperture radar (CInSAR) technique using ERS-2 and Envisat images is capable of generating a submeter-accuracy digital elevation model (DEM). However, it is very difficult to produce a high-quality CInSAR-derived DEM because of the difference in azimuth and range pixel size between ERS-2 and Envisat images, as well as the small height ambiguity of the CInSAR interferogram. In this study, we propose an efficient method to overcome these problems, produce a high-quality DEM over northern Alaska, and compare the CInSAR-derived DEM with the national elevation dataset (NED) DEM from the U.S. Geological Survey. In the proposed method, azimuth common-band filtering is applied during radar raw data processing to mitigate the mis-registration caused by the difference in azimuth and range pixel size, and a differential SAR interferogram (DInSAR) is used to reduce the unwrapping error caused by the high fringe rate of the CInSAR interferogram. Using the CInSAR DEM, we identified and corrected man-made artifacts in the NED DEM. A wavenumber analysis further confirms that the CInSAR DEM contains valid signal at high frequencies above 0.08 radians/m (about 40 m), while the NED DEM does not. Our results indicate that the CInSAR DEM is superior to the NED DEM in terms of both height precision and ground resolution.

Production of Reactive Diluent for Epoxy Resin with High Chemical Resistance from Natural Oil : Optimization Using CCD-RSM (천연오일로부터 내화학성이 향상된 에폭시계 수지용 반응성 희석제의 제조 : CCD-RSM을 이용한 최적화)

  • Yoo, Bong-Ho;Jang, Hyun Sik;Lee, Seung Bum
    • Applied Chemistry for Engineering / v.31 no.2 / pp.147-152 / 2020
  • In this study, we optimized the process for producing a reactive diluent for epoxy resins with improved chemical resistance from cardanol, a component of cashew nut shell liquid (CNSL), a natural oil. The central composite design (CCD) model of response surface methodology (RSM) was used for the optimization. The quantitative factors for CCD-RSM were the cardanol/ECH mole ratio, reaction time, and reaction temperature, and the yield, epoxy equivalent, and viscosity were selected as response values. Basic experiments were performed to design the response surface analysis, and the ranges of the quantitative factors were set to 2~4, 4~8 h, and 100~140 ℃ for the cardanol/ECH mole ratio, reaction time, and reaction temperature, respectively. From the CCD-RSM results, the optimum conditions were determined to be a cardanol/ECH mole ratio of 3.33, a reaction time of 6.18 h, and a reaction temperature of 120 ℃; under these conditions, the yield, epoxy equivalent, and viscosity were estimated as 100%, 429.89 g/eq., and 41.65 cP, respectively. In addition, the experimental results showed an error rate of less than 0.3%, demonstrating the validity of the optimization.
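The RSM step can be illustrated by fitting a generic second-order response-surface model and solving for its stationary point (a numpy sketch; the design points and quadratic in the test are illustrative, not the paper's data, and a real CCD adds axial and center points around the factorial design):

```python
import numpy as np

def fit_quadratic_surface(X, y):
    """Fit the second-order RSM model
    y = b0 + sum_i bi*xi + sum_i bii*xi^2 + sum_{i<j} bij*xi*xj
    by least squares, then return the stationary point obtained by
    solving grad(y) = b + 2*B*x = 0 for the fitted coefficients."""
    X = np.asarray(X, float); y = np.asarray(y, float)
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)] \
         + [X[:, i] ** 2 for i in range(k)] \
         + [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    beta, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
    b = beta[1:1 + k]                      # linear coefficients
    # assemble the symmetric second-order coefficient matrix B
    B = np.diag(beta[1 + k:1 + 2 * k])
    idx = 1 + 2 * k
    for i in range(k):
        for j in range(i + 1, k):
            B[i, j] = B[j, i] = beta[idx] / 2
            idx += 1
    return np.linalg.solve(-2 * B, b)      # stationary point x*
```

Whether the stationary point is a maximum (as for yield here) depends on B being negative definite; otherwise the optimum lies on the boundary of the design region.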

GIUH Variation by Estimating Locations (단위도 산정지점에 따른 GIUH 형상 변화에 관한 연구)

  • Joo, Jin-Gul;Yang, Jae-Mo;Kim, Joong-Hoon
    • Journal of the Korean Society of Hazard Mitigation / v.11 no.1 / pp.85-91 / 2011
  • RV-GIUH must be applied at the outlet or at a junction of the highest-order stream of a subbasin, because the model was derived for basins following Horton's ordering system. In practice, however, hydrographs are calculated at various locations that do not coincide with these desirable points, so a guideline is required for applying RV-GIUH. This study suggests outlet-location criteria for applying RV-GIUH to ungauged basins. Estimation locations were selected by moving upstream from the outlet of the Sanganmi basin, and unit hydrographs from both the derived and the simple RV-GIUH were estimated at each location. The results show that the RV-GIUH peaks at upstream locations were exaggerated because of the distortion of the length ratio and the total stream length. To avoid this error, the location must be selected at 60% downstream of the highest-order stream length. Equations correcting the distortion of the total stream length are also suggested so that RV-GIUH can be applied at various locations; with these correcting equations, RV-GIUH can be applied at 20% downstream of the highest-order stream length. This research will improve the applicability and precision of RV-GIUH.

Conversion of Camera Lens Distortions between Photogrammetry and Computer Vision (사진측량과 컴퓨터비전 간의 카메라 렌즈왜곡 변환)

  • Hong, Song Pyo;Choi, Han Seung;Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.4 / pp.267-277 / 2019
  • Photogrammetry and computer vision are identical in that both determine the three-dimensional coordinates of objects from images taken with a camera, but the two fields are not directly compatible with each other because of differences in camera lens distortion modeling and camera coordinate systems. In general, drone images are processed by bundle block adjustment using computer vision-based software, and the plotting for mapping is then performed using photogrammetry-based software. In this case, the camera lens distortion model must be converted into the formulation used in photogrammetry. Therefore, this study describes the differences between the coordinate systems and lens distortion models used in photogrammetry and computer vision, and proposes a methodology for converting between them. To verify the conversion formulas for the camera lens distortion models, lens distortions were first added to distortion-free virtual coordinates using the computer vision-based lens distortion models. The distortion coefficients were then determined using the photogrammetry-based lens distortion models, the lens distortions were removed from the photo coordinates, and the results were compared with the original distortion-free virtual coordinates. The root mean square distance was within 0.5 pixels. In addition, epipolar images were generated by applying the photogrammetric lens distortion coefficients to assess accuracy; the calculated root mean square error of the y-parallax was within 0.3 pixels.
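The add-then-remove verification loop described above can be sketched with a simplified radial-only distortion model (an illustrative Brown-style model; real conversions must also handle tangential terms, principal-point offsets, and the opposite y-axis and sign conventions of the two fields, which is the paper's subject):

```python
def apply_radial_distortion(x, y, k1, k2):
    """Computer-vision-style forward model: distorted coordinates are
    the ideal ones scaled by (1 + k1*r^2 + k2*r^4) in normalized
    camera coordinates."""
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

def remove_radial_distortion(xd, yd, k1, k2, iterations=10):
    """Invert the forward model by fixed-point iteration; no closed-form
    inverse exists, so the scale is re-evaluated at each estimate."""
    x, y = xd, yd
    for _ in range(iterations):
        r2 = x * x + y * y
        scale = 1 + k1 * r2 + k2 * r2 * r2
        x, y = xd / scale, yd / scale
    return x, y
```

Verifying that distort-then-undistort returns the original coordinates to sub-pixel accuracy mirrors the 0.5-pixel check reported in the abstract.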

Different penalty methods for assessing interval from first to successful insemination in Japanese Black heifers

  • Setiaji, Asep;Oikawa, Takuro
    • Asian-Australasian Journal of Animal Sciences / v.32 no.9 / pp.1349-1354 / 2019
  • Objective: The objective of this study was to determine the best approach for handling missing records of the interval from first to successful insemination (FS) in Japanese Black heifers. Methods: Of the 2,367 records of heifers born between 2003 and 2015, 206 (8.7%) were from open heifers and therefore missing. Four penalty methods based on the number of inseminations were defined as follows: C1, the FS average according to the number of inseminations; C2, a constant number of days, 359; C3, the maximum number of FS days at each insemination; and C4, the average of the FS at the last insemination and the FS of C2. C5 was generated by adding a constant number (21 d) to the highest number of FS days in each contemporary group. The bootstrap method was used to compare the five methods in terms of bias, mean squared error (MSE), and the coefficient of correlation between estimated breeding values (EBV) from non-censored and censored data. Three missing-record percentages (5%, 10%, and 15%) were investigated using a random censoring scheme, and a univariate animal model was used for the genetic analysis. Results: The heritability of FS in the non-censored data was $0.012{\pm}0.016$, slightly lower than the average estimate from the five penalty methods. C1, C2, and C3 showed lower standard errors of estimated heritability but gave inconsistent results across different percentages of missing records. C4 showed moderate standard errors that were stable across all percentages of missing records, whereas C5 showed the highest standard errors compared with the non-censored data. The MSE of the C4 heritability was $0.633{\times}10^{-4}$, $0.879{\times}10^{-4}$, $0.876{\times}10^{-4}$, and $0.866{\times}10^{-4}$ for 5%, 8.7%, 10%, and 15% missing records, respectively. Thus, C4 showed the lowest and most stable MSE of heritability, and its coefficients of correlation for EBV were 0.88, 0.93, and 0.90 for heifers, sires, and dams, respectively.
Conclusion: C4 demonstrated the highest positive correlation with the non-censored data set and was consistent across different percentages of missing records. We conclude that C4 was the best penalty method for missing records because of the stability of its estimated parameters and its highest coefficient of correlation.
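Two of the penalty rules can be sketched on toy records (an illustrative reading of C2 and C4 as summarized above; the record layout, with each record holding the observed FS or None plus the FS implied by the last insemination, is an assumption for the sketch):

```python
def penalize_missing_fs(records, constant=359):
    """Toy version of two of the penalty rules for open (missing)
    first-to-successful-insemination records.
    C2: replace a missing FS with a constant number of days (359).
    C4: average the FS at the animal's last insemination and the C2 value.
    Each record is (fs_days or None, fs_at_last_insemination)."""
    c2, c4 = [], []
    for fs, fs_last in records:
        if fs is not None:            # observed record, kept as-is
            c2.append(fs)
            c4.append(fs)
        else:
            c2.append(constant)
            c4.append((fs_last + constant) / 2)
    return c2, c4
```

Blending the animal's own last-insemination information with the constant, as C4 does, is what gives it penalties that track each heifer while staying bounded, consistent with its stable MSE across missing-record percentages.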