• Title/Summary/Keyword: kernel density analysis

117 search results

Analysis of Radiation Characteristics on Offset Gregorian Antenna Using Jacobi-Bessel Series (Jacobi-Bessel 급수를 이용한 옵셋 그레고리안 안테나의 복사특성 해석)

  • Ryu, Hwang
    • The Journal of Engineering Research
    • /
    • v.1 no.1
    • /
    • pp.5-14
    • /
    • 1997
  • The purpose of this thesis is to analyze the radiation characteristics of an offset Gregorian antenna in order to design a satellite-mounted antenna. To compute the radiation pattern of the sub-reflector, the reflected wave at the arbitrarily shaped sub-reflector is obtained by GO (geometrical optics). The total radiated EM field is then obtained by summing the diffracted fields, computed by the UTD (uniform geometrical theory of diffraction), with the GO fields. To calculate the far-field radiation pattern of the main reflector, the radiation integral equation is derived from the current density induced on the reflector surface using PO (physical optics). The kernel is expanded in a Jacobi-Bessel series to increase computational efficiency, so that the modified radiation integral is represented as a double integral independent of the observation points. Assuming x- or y-polarized incident fields, the radiation patterns of the Gregorian antenna are analyzed for a main reflector with a focal length of 62.4$\lambda$, a diameter of 100$\lambda$, and an offset height of 75$\lambda$, and a sub-reflector with an eccentricity of 0.501, an inter-focal length of 32.8$\lambda$, a horn axis angle of $9^{\circ}$, and a half-aperture angle of $15.89^{\circ}$. Compared with an offset parabolic antenna, the cross-polarization level and side-lobe level of the offset Gregorian reflector are reduced by 30 dB and 10 dB, respectively.
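The key step the abstract describes, expanding the PO radiation-integral kernel in a Jacobi-Bessel series, can be sketched in general form (a Rahmat-Samii-style formulation; the symbols and normalization here are illustrative, not taken from the paper):

```latex
% Expansion of the induced current over the projected circular
% aperture, 0 <= \rho' <= 1, in azimuthal harmonics and modified
% Jacobi polynomials F_n^m orthogonal on [0,1]:
\vec{J}(\rho',\phi') \approx \sum_{m=0}^{M}\sum_{n=0}^{N}
  \left(\vec{C}_{nm}\cos m\phi' + \vec{D}_{nm}\sin m\phi'\right) F_n^m(\rho')
% The payoff: the radial integrals against the Bessel kernel admit
% closed forms of the type
\int_0^1 F_n^m(\rho')\,J_m(u\rho')\,\rho'\,d\rho'
  \;\propto\; \frac{J_{m+2n+1}(u)}{u}
```

Because the expansion coefficients $\vec{C}_{nm}$, $\vec{D}_{nm}$ are computed once from the surface current, the remaining integral no longer depends on the observation point, which is what the abstract means by a representation "independent of observation points".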


Selection Method for Installation of Reduction Facilities to Prevent Roe Deer (Capreolus pygargus) Road-kill in Jeju Island (제주도 노루 로드킬 방지를 위한 저감시설 대상지 선정방안 연구)

  • Kim, Min-Ji;Jang, Rae-ik;Yoo, Young-jae;Lee, Jun-Won;Song, Eui-Geun;Oh, Hong-Shik;Sung, Hyun-Chan;Kim, Do-kyung;Jeon, Seong-Woo
    • Journal of the Korean Society of Environmental Restoration Technology
    • /
    • v.26 no.5
    • /
    • pp.19-32
    • /
    • 2023
  • The fragmentation of habitats by human activities isolates wildlife and causes wildlife-vehicle collisions (road-kill). Road-kill involving large mammals in particular threatens human safety and causes financial losses, so it is important to predict the potential habitats of the species involved by considering geographic, environmental, and transportation variables. We therefore studied the roe deer (Capreolus pygargus tianschanicus), a large mammal frequently involved in road-kill on Jeju Island, aiming to identify high-priority restoration sites that are both potential habitats and road-kill hotspots, combining predicted habitat with the known locations of road-kill records. First, we defined the environmental variables, collected occurrence records of roe deer, and generated a potential habitat map using a Random Forest model. Second, to analyze road-kill hotspots, we generated a hotspot map using kernel density estimation. Third, to define high-priority restoration sites, each map was normalized and the two were overlaid. As a result, three roads in the northern region and two roads in the southern region of Jeju Island were identified as high-priority restoration sites. In the Random Forest model, the most important environmental variables were, in order, distance from the nearest oreum (volcanic cone), elevation, distance from the forest edge (outside), and distance from the nearest waterbody. The AUC (area under the curve), which measures discrimination capacity, was 0.973, supporting the statistical accuracy of the prediction. The predicted habitat of C. pygargus was mainly distributed across forests, agricultural lands, and grasslands, consistent with the results of previous studies.
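As an illustration of the workflow this abstract describes (a KDE hotspot surface, min-max normalization, and overlay with a habitat map), here is a minimal sketch. The coordinates and the habitat surface are synthetic stand-ins, and SciPy's `gaussian_kde` stands in for whatever GIS KDE tool the authors used:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical road-kill coordinates (km); the study used Jeju records.
rng = np.random.default_rng(0)
roadkill_xy = np.vstack([
    rng.normal(10, 1.0, 200),   # x: clustered near a road segment
    rng.normal(5, 0.5, 200),    # y
])

# Kernel density estimation over a regular grid -> hotspot surface.
kde = gaussian_kde(roadkill_xy)
xs, ys = np.meshgrid(np.linspace(0, 20, 50), np.linspace(0, 10, 50))
grid = np.vstack([xs.ravel(), ys.ravel()])
hotspot = kde(grid).reshape(xs.shape)

# Hypothetical habitat-suitability surface (stand-in for the Random
# Forest habitat map).
habitat = np.exp(-((xs - 11) ** 2 + (ys - 5) ** 2) / 8.0)

def minmax(a):
    """Min-max normalize a layer to [0, 1]."""
    return (a - a.min()) / (a.max() - a.min())

# Normalize each layer, then overlay by averaging, as the abstract describes.
priority = (minmax(hotspot) + minmax(habitat)) / 2.0
print("priority surface:", priority.shape, priority.min(), priority.max())
```

High values of `priority` mark cells that are both likely habitat and road-kill hotspots, i.e. candidate sites for reduction facilities.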

Analysis of PM2.5 Impact and Human Exposure from Worst-Case of Mt. Baekdu Volcanic Eruption (백두산 분화 Worst-case로 인한 우리나라 초미세먼지(PM2.5) 영향분석 및 노출평가)

  • Park, Jae Eun;Kim, Hyerim;Sunwoo, Young
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_4
    • /
    • pp.1267-1276
    • /
    • 2020
  • To quantitatively predict the impact of a large-scale volcanic eruption of Mt. Baekdu on air quality and the resulting damage around the Korean Peninsula, a three-dimensional chemistry-transport modeling system (Weather Research & Forecasting - Sparse Matrix Operator Kernel Emissions - Community Multiscale Air Quality) was adopted. A worst-case meteorology scenario was selected to estimate the direct impact on Korea. Among the worst-case eruptions of Mt. Baekdu over the past decade (2005~2014), this study applied the typical worst-case scenarios likely to cause significant damage to Korea, assuming a massive VEI 4 eruption on May 16, 2012, and analyzed the PM2.5 concentrations caused by the eruption. The effects on air quality in each region (cities, counties, and boroughs) were estimated, and vulnerable areas were derived by conducting an exposure assessment reflecting vulnerable groups. The effects were analyzed at high resolution (9 km × 9 km) to derive vulnerable areas within each region. The analysis of the typical worst cases showed a discrepancy between the areas of high PM2.5 concentration, high population density, and concentrated vulnerable groups. The PM2.5 peak concentration was about 24,547 ㎍/㎥, estimated to be a more serious situation than the eruption of Mt. St. Helens in 1980, which is known to have produced 540 million tons of volcanic ash. Paju, Gimpo, Goyang, Ganghwa, Sancheong, and Hadong were shown to have high PM2.5 concentrations, and Paju appeared to be the most vulnerable area in the exposure assessment. While areas with high estimated pollutant concentrations are important, it is also necessary to develop plans and measures that consider densely populated areas and areas with high concentrations of susceptible or vulnerable groups. Measures should also be established for each vulnerable area by selecting high-concentration areas within cities, counties, and boroughs, rather than applying uniform measures to all regions. This study will provide a foundation for developing standards for disaster declarations and preemptive response systems for volcanic eruptions.
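The exposure-assessment step described above, combining modeled concentrations with population and vulnerable-group layers to rank areas, can be sketched as follows. All grids and the vulnerable-group weight of 2 are illustrative assumptions, not values from the study:

```python
import numpy as np

# Hypothetical gridded fields on a 9 km x 9 km style grid (values illustrative).
rng = np.random.default_rng(1)
pm25 = rng.gamma(2.0, 50.0, size=(40, 40))          # modeled PM2.5 (ug/m^3)
population = rng.gamma(1.5, 1000.0, size=(40, 40))  # persons per cell
vulnerable_frac = rng.uniform(0.1, 0.3, size=(40, 40))  # children + elderly share

# A simple exposure index: concentration weighted by the exposed population,
# with vulnerable groups counted at a higher weight (weight 2 is an assumption).
exposure = pm25 * population * (1.0 + 2.0 * vulnerable_frac)

# Rank cells to find the most vulnerable areas (the analogue of Paju
# emerging as the most vulnerable area in the study).
worst = np.unravel_index(np.argmax(exposure), exposure.shape)
print("highest-exposure cell:", worst)
```

Ranking by `exposure` rather than by `pm25` alone is what produces the discrepancy the abstract reports between high-concentration areas and high-vulnerability areas.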

Analysis of Traffic Accidents Injury Severity in Seoul using Decision Trees and Spatiotemporal Data Visualization (의사결정나무와 시공간 시각화를 통한 서울시 교통사고 심각도 요인 분석)

  • Kang, Youngok;Son, Serin;Cho, Nahye
    • Journal of Cadastre & Land InformatiX
    • /
    • v.47 no.2
    • /
    • pp.233-254
    • /
    • 2017
  • The purpose of this study is to analyze the main factors influencing the severity of traffic accidents and to visualize the spatiotemporal characteristics of traffic accidents in Seoul. To do this, we collected data on the traffic accidents that occurred in Seoul over the four years from 2012 to 2015, and classified them as slight, serious, or fatal according to severity. The spatiotemporal characteristics of the accidents were analyzed by kernel density analysis, hotspot analysis, space-time cube analysis, and emerging hotspot analysis. The factors affecting severity were analyzed using a decision tree model. The results show that traffic accidents in Seoul are more frequent in the suburbs than in central areas. In particular, accidents were concentrated in some commercial and entertainment areas of Seocho and Gangnam, and became increasingly intense over time. For fatal accidents, there were statistically significant hotspot areas in Yeongdeungpo-gu, Guro-gu, Jongno-gu, Jung-gu, and Seongbuk, although the hotspots showed different patterns by time of day. Regarding severity, the type of accident is the most important factor, followed in order of importance by the type of road, the type of vehicle, the time of the accident, and the type of regulation violated. As for the decision rules leading to serious accidents: for vans and trucks, a serious accident is highly likely where the road is wide and vehicle speeds are high; for bicycles, cars, motorcycles, and other vehicles, a serious accident is highly likely under the same circumstances during the dawn hours.
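A minimal sketch of the decision tree step on synthetic data, encoding the van/truck rule the abstract reports; the features, encodings, and thresholds are illustrative assumptions, not the study's data:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the Seoul accident table.
rng = np.random.default_rng(42)
n = 2000
vehicle = rng.integers(0, 4, n)      # 0=car 1=van 2=truck 3=bicycle
road_width = rng.uniform(3, 30, n)   # meters
speed = rng.uniform(20, 100, n)      # km/h
hour = rng.integers(0, 24, n)        # time of accident

# Encode the abstract's reported pattern: vans/trucks on wide, fast
# roads tend to produce serious accidents.
severe = (np.isin(vehicle, [1, 2]) & (road_width > 15) & (speed > 60)).astype(int)

X = np.column_stack([vehicle, road_width, speed, hour])
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, severe)

# Feature importances play the role of the study's severity-factor ranking.
print(dict(zip(["vehicle", "road_width", "speed", "hour"],
               clf.feature_importances_)))
```

Reading the fitted tree's split rules (e.g. via `sklearn.tree.export_text`) recovers decision rules of the same shape as those quoted in the abstract.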

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) as the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering observed in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and exhibit considerable noise. Recent studies have begun to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH estimation process and compares it with MLE-based estimation for GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial, and radial. We analyzed the suggested models on the KOSPI 200 Index, which is composed of 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric showed better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH, consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility, although the polynomial kernel shows markedly lower forecasting accuracy. We propose an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility increases, buy volatility today; if it decreases, sell volatility today; if the forecasted direction does not change, hold the existing position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because we cannot trade historical volatility values themselves, but the simulation results are meaningful since the Korea Exchange introduced a tradable volatility futures contract in November 2014. The trading systems with SVR-based GARCH models show higher returns than the MLE-based systems in the testing period: the profitable-trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of SVR-based models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return versus +526.4% for SVR-based S-GARCH; MLE-based asymmetric E-GARCH shows -72% versus +245.6% for SVR-based E-GARCH; and MLE-based asymmetric GJR-GARCH shows -98.7% versus +126.3% for SVR-based GJR-GARCH. The linear kernel shows higher trading returns than the radial kernel. The best performance of the SVR-based IVTS is +526.4%, against +150.2% for the MLE-based IVTS, and the SVR-based GARCH IVTS trades more frequently. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be examined for better performance. We do not consider trading costs, including brokerage commissions and slippage, and the IVTS trading performance is not fully realistic since historical volatility values are used as the traded object. Accurate forecasting of stock market volatility is essential in real trading as well as in asset pricing models, and further studies on other machine-learning-based GARCH models can give better information to stock market investors.
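The core idea of SVR-based GARCH estimation, learning the variance recursion by regression on lagged volatility proxies instead of by MLE, can be sketched on simulated GARCH(1,1) data. The parameters, lag structure, and SVR hyperparameters below are illustrative, not the paper's:

```python
import numpy as np
from sklearn.svm import SVR

# Simulate a GARCH(1,1) return series (parameters are illustrative).
rng = np.random.default_rng(7)
omega, alpha, beta = 0.05, 0.1, 0.85
n = 1500
h = np.empty(n)  # conditional variance
r = np.empty(n)  # returns
h[0] = omega / (1 - alpha - beta)
r[0] = np.sqrt(h[0]) * rng.standard_normal()
for t in range(1, n):
    h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    r[t] = np.sqrt(h[t]) * rng.standard_normal()

# SVR-GARCH idea: regress a volatility proxy (squared return) on its
# lags, learning the variance recursion nonparametrically.
proxy = r ** 2
X = np.column_stack([proxy[1:-1], proxy[:-2]])  # lagged squared returns
y = proxy[2:]

split = 1200  # train/test split, echoing the paper's 1187/300 design
svr = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:split], y[:split])
forecast = svr.predict(X[split:])
mse = np.mean((forecast - y[split:]) ** 2)
print("out-of-sample MSE:", mse)
```

Swapping `kernel="rbf"` for `"linear"` or `"poly"` reproduces the three kernel variants the abstract compares.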

Landslide Susceptibility Mapping Using Deep Neural Network and Convolutional Neural Network (Deep Neural Network와 Convolutional Neural Network 모델을 이용한 산사태 취약성 매핑)

  • Gong, Sung-Hyun;Baek, Won-Kyung;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_2
    • /
    • pp.1723-1735
    • /
    • 2022
  • Landslides are among the most prevalent natural disasters, threatening both people and property, and they can cause damage at the national level, so effective prediction and prevention are essential. Research on producing landslide susceptibility maps with high accuracy is steadily being conducted, and various models have been applied to landslide susceptibility analysis. Pixel-based machine learning models such as frequency ratio models, logistic regression models, ensemble models, and artificial neural networks have mainly been applied. Recent studies have shown that the kernel-based convolutional neural network (CNN) technique is effective and that the spatial characteristics of the input data have a significant effect on the accuracy of landslide susceptibility mapping. For this reason, the purpose of this study is to analyze landslide susceptibility using a pixel-based deep neural network model and a patch-based convolutional neural network model. The study area was set in Gangwon-do, including Inje, Gangneung, and Pyeongchang, where landslides have occurred frequently and caused damage. The landslide-related factors used were slope, curvature, stream power index (SPI), topographic wetness index (TWI), topographic position index (TPI), timber diameter, timber age, lithology, land use, soil depth, soil parent material, lineament density, fault density, normalized difference vegetation index (NDVI), and normalized difference water index (NDWI). These factors were built into a spatial database through data preprocessing, and landslide susceptibility maps were predicted using the deep neural network (DNN) and CNN models. The models and susceptibility maps were verified through average precision (AP) and root mean square error (RMSE); the patch-based CNN model showed 3.4% better performance than the pixel-based DNN model. The results of this study can be used to predict landslides and are expected to serve as a scientific basis for establishing land-use and landslide management policies.
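The pixel-based versus patch-based distinction the abstract draws comes down to how training samples are cut from the stacked factor grids: the DNN sees one cell's factor vector, while the CNN sees a neighborhood that preserves spatial context. A minimal sketch (the grid size, patch size, and factor values are synthetic):

```python
import numpy as np

# A stack of landslide-conditioning factors (channels x rows x cols);
# 15 factors as in the abstract, values synthetic.
rng = np.random.default_rng(3)
factors = rng.random((15, 64, 64))

def pixel_sample(stack, i, j):
    """Pixel-based DNN input: the factor vector at one cell."""
    return stack[:, i, j]                                  # shape (15,)

def patch_sample(stack, i, j, half=3):
    """Patch-based CNN input: a 7x7 neighborhood keeping spatial context."""
    return stack[:, i - half:i + half + 1, j - half:j + half + 1]  # (15, 7, 7)

px = pixel_sample(factors, 32, 32)
patch = patch_sample(factors, 32, 32)
print(px.shape, patch.shape)
```

The patch's center cell carries the same information the DNN gets; the surrounding cells are the extra spatial signal the abstract credits for the CNN's 3.4% improvement.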

COATED PARTICLE FUEL FOR HIGH TEMPERATURE GAS COOLED REACTORS

  • Verfondern, Karl;Nabielek, Heinz;Kendall, James M.
    • Nuclear Engineering and Technology
    • /
    • v.39 no.5
    • /
    • pp.603-616
    • /
    • 2007
  • Roy Huddle, having invented the coated particle at Harwell in 1957, stated in the early 1970s that we now knew everything about particles and coatings and should move on to other problems. This was on the occasion of the Dragon fuel performance information meeting in London in 1973: how wrong even a genius can be! It took until 1978 before really good particles were made in Germany, then during the Japanese HTTR production in the 1990s, and finally in the Chinese 2000-2001 campaign for HTR-10. Here, we present a review of the history and present status. Today, good fuel is measured by different standards than in the seventies: where an initial free heavy-metal fraction of $9\times10^{-4}$ was typical for early AVR carbide fuel and $3\times10^{-4}$ was acceptable for oxide fuel in THTR, we now insist on values more than an order of magnitude lower. Half a percent particle failure at end-of-irradiation, another old standard, is no longer acceptable today, even for the most severe accidents. While legislation and licensing have not changed, one reason we insist on these improvements is the present preference for passive systems rather than the active controls of earlier times. Following the renewed interest in HTGRs, we report on the start of new or reactivated coated particle work in several parts of the world, considering designs, traditional and new materials, manufacturing technologies, quality control and quality assurance, irradiation and accident performance, modeling and performance prediction, and fuel cycle aspects and spent fuel treatment. In very general terms, the coated particle should be strong, reliable, retentive, and affordable. These properties have to be quantified and will eventually be optimized for a specific application system. Results obtained so far indicate that the same particle can be used for steam cycle applications with $700-750^{\circ}C$ helium coolant gas exit, for gas turbine applications at $850-900^{\circ}C$, and for process heat/hydrogen generation applications with $950^{\circ}C$ outlet temperatures. There is a clear set of standards for modern high-quality fuel in terms of low levels of heavy-metal contamination, manufacture-induced particle defects during fuel body and fuel element making, irradiation- and accident-induced particle failures, and limits on fission product release from intact particles. While gas-cooled reactor design is still open-ended, with prismatic block fuel elements and spherical fuel elements for the pebble-bed design, there is near-worldwide agreement on high-quality fuel: a $500{\mu}m$ diameter $UO_2$ kernel of 10% enrichment is surrounded by a $100{\mu}m$ thick sacrificial buffer layer, followed by a dense inner pyrocarbon layer, a high-quality silicon carbide layer of $35{\mu}m$ thickness and theoretical density, and an outer pyrocarbon layer. Good performance has been demonstrated under both operational and accident conditions, i.e. to 10% FIMA and a maximum of $1600^{\circ}C$ afterwards, and it is this wide-ranging demonstration experience that makes the particle superior. Recommendations are made for further work: 1. Generation of data for presently manufactured materials, e.g. SiC strength and strength distribution, PyC creep and shrinkage, and many more material data sets. 2. A renewed start of irradiation and accident testing of modern coated particle fuel. 3. Analysis of existing and newly created data with a view to demonstrating satisfactory performance at burnups beyond 10% FIMA and complete fission product retention even in accidents that exceed $1600^{\circ}C$ for a short period of time. This work should proceed at both the national and international level.