Title/Summary/Keyword: performance estimation


Estimation of the major sources for organic aerosols at the Anmyeon Island GAW station (안면도에서의 초미세먼지 유기성분 주요 영향원 평가)

  • Han, Sanghee;Lee, Ji Yi;Lee, Jongsik;Heo, Jongbae;Jung, Chang Hoon;Kim, Eun-Sill;Kim, Yong Pyo
    • Particle and aerosol research
    • /
    • v.14 no.4
    • /
    • pp.135-144
    • /
    • 2018
  • Based on two years of measurement data, the major sources of ambient carbonaceous aerosols at the Anmyeon Global Atmosphere Watch (GAW) station were identified using the Positive Matrix Factorization (PMF) model. Particulate matter with an aerodynamic diameter of 2.5 μm or less (PM2.5) was sampled between June 2015 and May 2017, and carbonaceous species including ~80 organic compounds were analyzed. When the number of factors was 5 or 6, the performance evaluation parameters showed the best results; with the 6-factor solution, the characteristics of the transported factors were clearer. The 6 factors were identified through various analyses, including chemical characteristics and air parcel movement analysis. The 6 factors and their relative contributions were (1) anthropogenic Secondary Organic Aerosols (SOA) (10.3%), (2) biogenic sources (24.8%), (3) local biomass burning (26.4%), (4) transported biomass burning (7.3%), (5) combustion-related sources (12.0%), and (6) transported sources (19.2%). The air parcel movement analysis and the seasonal variation of the factor contributions also supported this identification. Thus, the Anmyeon Island GAW station has been affected by both regional and local sources of carbonaceous aerosols.
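
As a rough illustration of the factorization step, the sketch below uses scikit-learn's NMF as a stand-in for the EPA PMF model: both decompose a non-negative samples-by-species matrix into factor contributions and factor profiles. The matrix dimensions and the random data are hypothetical, and PMF's per-value uncertainty weighting is not reproduced.

```python
# Minimal PMF-style source apportionment sketch, assuming a samples-by-species
# matrix X (rows: daily samples, columns: ~80 organic compounds).
# NMF stands in for PMF: X ~= G @ F with non-negative G (contributions)
# and F (factor profiles); PMF's uncertainty weights are omitted.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = rng.random((730, 80))  # hypothetical two years of daily samples

for k in (5, 6):           # compare 5- and 6-factor solutions, as in the paper
    model = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
    G = model.fit_transform(X)        # factor contributions per sample
    F = model.components_             # factor profiles (species signatures)
    share = G.sum(axis=0) / G.sum()   # rough relative contribution per factor
    print(k, round(model.reconstruction_err_, 2), np.round(share * 100, 1))
```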

Study on Compensation Method of Anisotropic H-field Antenna (Loran H-field 안테나의 지향성 보상 기법 연구)

  • Park, Sul-Gee;Son, Pyo-Woong
    • Journal of Navigation and Port Research
    • /
    • v.43 no.3
    • /
    • pp.172-178
    • /
    • 2019
  • Although the need for resilient PNT information is increasing, threats such as intentional RFI or space weather changes remain difficult to resolve. eLoran, a terrestrial navigation system that uses a high-power signal, is considered the best back-up navigation system. Depending on the operating environment, an eLoran user may use either an E-field or an H-field antenna. The H-field antenna, which requires no stable ground plane and is relatively resistant to noise from general electronic equipment, is composed of two loops and shows an anisotropic gain pattern due to differences between the measurements of the two loops. Therefore, the H-field antenna's phase estimate of the signal varies with its direction even in a static environment. This direction-dependent error must be eliminated if the user wants to estimate position more precisely. In this paper, a method is proposed to compensate for the error according to the geometric relationship between the H-field antenna and the transmitting station. A model was developed to compensate for the directional error of the H-field antenna based on signals generated by an eLoran signal simulator. The model was then applied to survey measurements performed on land to verify its performance.
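
The compensation idea can be sketched as a calibration fit: measure the phase error as a function of the azimuth to the transmitting station (for example, from simulator runs), fit a smooth directional model, and subtract it from live measurements. The harmonic form below (terms in 2θ, plausible for a two-loop antenna) and all constants are assumptions, not the authors' published model.

```python
# Sketch of direction-dependent phase-error compensation, assuming calibration
# data of (azimuth to station, measured phase error). The 2*theta harmonic
# model is an assumed form, not taken from the paper.
import numpy as np

def fit_compensation(azimuth_rad, phase_err):
    # Least-squares fit: err(theta) ~ a + b*cos(2*theta) + c*sin(2*theta)
    A = np.column_stack([np.ones_like(azimuth_rad),
                         np.cos(2 * azimuth_rad),
                         np.sin(2 * azimuth_rad)])
    coef, *_ = np.linalg.lstsq(A, phase_err, rcond=None)
    return coef

def compensate(raw_phase, azimuth_rad, coef):
    a, b, c = coef
    return raw_phase - (a + b * np.cos(2 * azimuth_rad)
                          + c * np.sin(2 * azimuth_rad))

# Hypothetical usage: calibrate on simulator output, apply to field data.
theta = np.linspace(0, 2 * np.pi, 36)
err = 0.1 + 0.5 * np.cos(2 * theta) \
      + 0.02 * np.random.default_rng(1).standard_normal(36)
coef = fit_compensation(theta, err)
print(compensate(1.0, np.pi / 4, coef))
```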

A comparison of imputation methods using nonlinear models (비선형 모델을 이용한 결측 대체 방법 비교)

  • Kim, Hyein;Song, Juwon
    • The Korean Journal of Applied Statistics
    • /
    • v.32 no.4
    • /
    • pp.543-559
    • /
    • 2019
  • Data often include missing values for various reasons. If the missing-data mechanism is not MCAR, analysis based only on fully observed cases may cause estimation bias and decrease the precision of the estimates, since partially observed cases are excluded. When data include many variables, missing values cause even more serious problems. Many imputation techniques have been suggested to overcome this difficulty. However, imputation methods based on parametric models may fit poorly to real data that do not satisfy the model assumptions. In this study, we review imputation methods based on nonlinear models such as kernel, resampling, and spline methods, which are robust to model assumptions. In addition, we suggest utilizing imputation classes to improve imputation accuracy, and adding random errors to correctly estimate the variance of the estimates in nonlinear imputation models. The performance of these imputation methods is compared under various simulated data settings. Simulation results indicate that the performance of the imputation methods varies with the data setting. However, imputation based on kernel regression or the penalized spline performs better in most situations. Utilizing imputation classes or adding random errors further improves the performance of imputation methods based on nonlinear models.
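
A minimal sketch of one of the reviewed approaches, kernel-regression imputation with added random errors, is given below, assuming a single fully observed covariate x and a response y whose missing entries are coded as NaN. The Gaussian kernel and the fixed bandwidth are illustrative choices, not the paper's settings.

```python
# Nadaraya-Watson kernel-regression imputation with residual resampling,
# restoring variability in the imputed values as suggested in the abstract.
import numpy as np

def nw_kernel_impute(x, y, bandwidth=0.3, add_noise=True, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    miss = np.isnan(y)
    xo, yo = x[~miss], y[~miss]

    def fit(x0):  # Gaussian-kernel weighted mean of observed responses
        w = np.exp(-0.5 * ((x0 - xo) / bandwidth) ** 2)
        return np.sum(w * yo) / np.sum(w)

    fitted = np.array([fit(v) for v in xo])
    resid = yo - fitted            # residuals used to restore variability
    y_imp = y.copy()
    for i in np.where(miss)[0]:
        y_imp[i] = fit(x[i]) + (rng.choice(resid) if add_noise else 0.0)
    return y_imp

# Hypothetical usage on a noisy sine curve with knocked-out responses.
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.1 * np.random.default_rng(2).standard_normal(200)
y[::7] = np.nan
y_completed = nw_kernel_impute(x, y, bandwidth=0.05)
```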

Classical testing based on B-splines in functional linear models (함수형 선형모형에서의 B-스플라인에 기초한 검정)

  • Sohn, Jihoon;Lee, Eun Ryung
    • The Korean Journal of Applied Statistics
    • /
    • v.32 no.4
    • /
    • pp.607-618
    • /
    • 2019
  • A new and interesting task in statistics is to effectively analyze functional data, which increasingly arises from advances in modern science and technology in areas such as meteorology and the biomedical sciences. Functional linear regression with a scalar response is a popular functional data analysis technique, and a common problem is to determine whether a functional predictor affects the scalar response, that is, to test for a functional association. Recently, Kong et al. (Journal of Nonparametric Statistics, 28, 813-838, 2016) established classical testing methods for this based on functional principal component analysis of the functional predictor, using the resulting eigenfunctions as a basis. However, the eigenbasis functions are not generally suitable for regression purposes because they capture only the variability of the functional predictor, not the functional association of interest in the testing problem. Additionally, the eigenfunctions must be estimated from the data, so estimation error can degrade the performance of the testing procedures. To circumvent these issues, we propose a testing method based on a fixed basis such as B-splines and show via simulations that it works well. Simulated and real data examples also illustrate that the proposed method provides more effective and intuitive results thanks to the localization properties of B-splines.
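
The idea can be sketched as follows: project each predictor curve onto a fixed B-spline basis so that the null hypothesis of no functional effect reduces to a standard F-test that all basis coefficients are zero. This is a simplified stand-in under assumed data layouts (curves observed on a common grid), not the paper's exact procedure.

```python
# Fixed-basis test in a functional linear model, assuming curves X_i(t)
# observed on a common grid t and a scalar response y. H0 (no functional
# effect) becomes an F-test that all B-spline coefficients vanish.
import numpy as np
from scipy.interpolate import BSpline
from scipy.stats import f as f_dist

def no_effect_test(Xcurves, y, t, n_basis=8, degree=3):
    # Full knot vector with (degree+1)-fold boundary knots.
    inner = np.linspace(t[0], t[-1], n_basis - degree + 1)
    knots = np.r_[[t[0]] * degree, inner, [t[-1]] * degree]
    B = BSpline.design_matrix(t, knots, degree).toarray()  # grid x n_basis
    Z = Xcurves @ B * (t[1] - t[0])       # approximate integral scores
    n, p = Z.shape
    Z1 = np.column_stack([np.ones(n), Z])
    beta, *_ = np.linalg.lstsq(Z1, y, rcond=None)
    rss1 = np.sum((y - Z1 @ beta) ** 2)   # full model
    rss0 = np.sum((y - y.mean()) ** 2)    # intercept-only model under H0
    F = ((rss0 - rss1) / p) / (rss1 / (n - p - 1))
    return F, f_dist.sf(F, p, n - p - 1)  # statistic and p-value
```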

Development of a new system for measurement of total effluent load of water quality

  • Keiji, Takase;Akira, Ogura
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2015.05a
    • /
    • pp.221-221
    • /
    • 2015
  • Sustainable use of water resources and conservation of water quality are essential problems worldwide. Water quality problems in particular are serious for human health as well as for the ecosystems of all creatures on Earth. Recently, the importance of the total effluent load, in addition to the concentrations of pollutant materials, has been recognized not only for the conservation of water quality but also for sustainable water use in watersheds. However, measuring or estimating the total effluent load from a non-point source area such as farmland or forest is difficult, because both the concentration and the discharge change greatly depending on various factors, especially meteorological conditions such as rainfall, whereas measurement from a point source is comparatively easy because the pollutant concentration and discharge are relatively steady. Therefore, the total effluent load from a non-point source is often estimated from a statistical relationship between concentration and discharge, called the L-Q equation. However, considerable work and time are required to collect and analyze water samples and to obtain an accurate regression equation. We therefore propose a new system for direct measurement of the total effluent load from non-point source areas. In this system, the overflow depth at a hydraulic weir is measured with a pressure gauge at hourly intervals to calculate the hourly discharge. The operating time of a small electric pump is then calculated so that an amount of water proportional to the discharge is drawn into a storage tank. The stored water is taken out a few days later in the case of a storm event, or several weeks later in the case of a non-rainfall period, and water quality concentrations such as total nitrogen and phosphorus are analyzed in a laboratory. Finally, the total load can be calculated by multiplying the concentration by the total volume of discharge, as sketched below. The system was installed in a small experimental forestry watershed to check its performance and to determine the total load of water quality from the forest. The system collected an amount of water exactly proportional to the actual discharge, and the total load was analyzed accurately. As a result, the system is expected to be very useful for determining the total load from non-point source areas.
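
The core arithmetic of the system can be sketched as follows, assuming a 90-degree V-notch weir rating (Q ≈ 1.38·h^2.5 in SI units) and a pump with a fixed intake rate; the rating formula choice, the pump rate, and the sampling fraction are illustrative, not taken from the paper.

```python
# Flow-proportional sampling arithmetic for a total effluent load system.
# All constants below are assumed values for illustration.
PUMP_RATE = 0.1 / 3600.0   # m^3 of water per second of pump operation (assumed)
SAMPLE_FRACTION = 1e-6     # fraction of discharge diverted to the tank (assumed)

def hourly_discharge(overflow_depth_m):
    # 90-degree V-notch weir: Q [m^3/s] ~= 1.38 * h^2.5, scaled to m^3/hour.
    return 1.38 * overflow_depth_m ** 2.5 * 3600.0

def pump_seconds(q_hour_m3):
    # Run the pump long enough to store a volume proportional to discharge.
    return q_hour_m3 * SAMPLE_FRACTION / PUMP_RATE

def total_load_kg(hourly_depths_m, concentration_mg_per_L):
    total_volume = sum(hourly_discharge(h) for h in hourly_depths_m)  # m^3
    # 1 mg/L == 1 g/m^3, so load [kg] = C [g/m^3] * V [m^3] / 1000
    return concentration_mg_per_L * total_volume / 1000.0

# Example: one day at 5 cm overflow depth, 1.2 mg/L total nitrogen.
print(total_load_kg([0.05] * 24, 1.2))
```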


Panamax Second-hand Vessel Valuation Model (파나막스 중고선가치 추정모델 연구)

  • Lim, Sang-Seop;Lee, Ki-Hwan;Yang, Huck-Jun;Yun, Hee-Sung
    • Journal of Navigation and Port Research
    • /
    • v.43 no.1
    • /
    • pp.72-78
    • /
    • 2019
  • The second-hand ship market provides shipping investors with immediate access to the freight market. When acquiring second-hand vessels, a precise estimate of the price is crucial to the decision-making process because it directly affects the investor's future capital cost burden. Previous studies on the second-hand market have mainly focused on market efficiency; papers on the estimation of second-hand vessel values are very limited. This study proposes an artificial neural network model that has not been attempted in previous studies. Six factors that affect the second-hand ship price were identified through a literature review: freight, new-building price, orderbook, scrap price, age, and vessel size. The data comprise 366 actual trading records of Panamax second-hand vessels reported to Clarkson between January 2016 and December 2018. Statistical filtering was carried out through correlation analysis and stepwise regression analysis, and three parameters were selected: freight, age, and size. Ten-fold cross-validation was used to select the hyper-parameters of the artificial neural network model. The results confirm that the artificial neural network model performs better than simple stepwise regression analysis. The application of a statistical verification process together with an artificial neural network model differentiates this paper from others. A scientific model that satisfies both statistical rationality and accuracy of results is expected to contribute to real-life practice.
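
A minimal sketch of the modelling step is shown below, using scikit-learn's MLPRegressor with 10-fold cross-validated hyper-parameter selection on the three retained inputs (freight, age, size). The grid values and the synthetic stand-in data are assumptions; the paper does not specify its network architecture here.

```python
# ANN valuation sketch: scale inputs, grid-search network size and
# regularization with 10-fold CV, score by mean absolute error.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((366, 3))   # hypothetical stand-in for the 366 trade records
y = 10 + 20 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 1, 366)

pipe = make_pipeline(StandardScaler(),
                     MLPRegressor(max_iter=5000, random_state=0))
grid = GridSearchCV(pipe,
                    {"mlpregressor__hidden_layer_sizes": [(4,), (8,), (8, 4)],
                     "mlpregressor__alpha": [1e-4, 1e-2]},
                    cv=10, scoring="neg_mean_absolute_error")
grid.fit(X, y)
print(grid.best_params_, -grid.best_score_)   # chosen settings and CV MAE
```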

Nonlinear Impact Analysis for Eco-Pillar Debris Barrier with Hollow Cross-Section (중공트랙단면 에코필라 사방댐의 비선형 충돌해석)

  • Kim, Hyun-Gi;Kim, Bum-Joon
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.20 no.7
    • /
    • pp.430-439
    • /
    • 2019
  • In this study, a nonlinear impact analysis was performed to evaluate the safety and damage of an eco-pillar debris barrier with a hollow cross-section, which was proposed to improve constructability and economic efficiency. The construction of concrete eco-pillar debris barriers has increased recently. However, there are no design standards for debris barriers in Korea, and studies on their performance in extreme conditions are scarce. Thus, the eco-pillar debris barrier was analyzed using a rock impact speed estimated from the debris-flow velocity. The rock diameters were determined according to ETAG 27. The impact position, angle, and rock diameter were treated as variables. A nonlinear concrete material model was applied, and the damage was estimated with ABAQUS software. As a result, the damage ratio was less than 1.0 for rock diameters of 0.3 m and 0.5 m, but reached 1.39 for a diameter of 0.7 m. This study can serve as basic data on impact force for the cross-sectional design of eco-pillar debris barriers.
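
For orientation, the sketch below computes the impact energies implied by the three rock diameters, assuming spherical rocks of density 2,650 kg/m³ and an illustrative impact speed; the paper's actual damage evaluation is a nonlinear finite-element analysis in ABAQUS, which this arithmetic does not replace.

```python
# Back-of-the-envelope impact-loading inputs; both constants are assumptions.
import math

ROCK_DENSITY = 2650.0   # kg/m^3 (typical rock density, assumed)
IMPACT_SPEED = 10.0     # m/s (illustrative stand-in for the estimated speed)

for d in (0.3, 0.5, 0.7):   # rock diameters per ETAG 27, from the abstract
    mass = ROCK_DENSITY * math.pi * d ** 3 / 6.0   # sphere of diameter d
    energy_kj = 0.5 * mass * IMPACT_SPEED ** 2 / 1000.0
    print(f"d = {d} m: mass = {mass:,.0f} kg, energy = {energy_kj:.1f} kJ")
```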

Dense-Depth Map Estimation with LiDAR Depth Map and Optical Images based on Self-Organizing Map (라이다 깊이 맵과 이미지를 사용한 자기 조직화 지도 기반의 고밀도 깊이 맵 생성 방법)

  • Choi, Hansol;Lee, Jongseok;Sim, Donggyu
    • Journal of Broadcast Engineering
    • /
    • v.26 no.3
    • /
    • pp.283-295
    • /
    • 2021
  • This paper proposes a method for generating a dense depth map from a color image pair and a LiDAR-based depth map, using a self-organizing map. The proposed depth map upsampling method consists of an initial depth prediction step for areas not captured by the LiDAR and a depth map filtering step. In the initial depth prediction step, stereo matching is performed on two color images to predict initial depth values. In the depth map filtering step, to reduce the error of the predicted initial depth values, a self-organizing map technique is applied to each predicted depth pixel using the measured depth pixels around it. In the self-organizing map process, a weight is determined by the spatial distance between the predicted and measured depth pixels and by the difference between their corresponding color values. For performance comparison, the proposed method was compared with the bilateral filter and the k-nearest-neighbor method, which are widely used for depth map upsampling. Relative to the bilateral filter and the k-nearest-neighbor method, the proposed method reduced MAE by about 6.4% and 8.6%, and RMSE by about 10.8% and 14.3%, respectively.
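
A simplified sketch of the filtering step is given below: each sparse LiDAR measurement pulls the stereo-predicted depths in its neighbourhood toward the measured value, with update strength combining spatial and color proximity, in the spirit of a SOM weight update. The single-pass update, the grayscale color image, and all parameters are assumptions; a full SOM would iterate with a shrinking neighbourhood and decaying learning rate.

```python
# SOM-style refinement of a predicted depth map, assuming:
#   pred  - dense stereo-predicted depth map (H x W floats)
#   lidar - sparse measured depth map (NaN where unmeasured)
#   color - grayscale image aligned with the depth maps
import numpy as np

def refine_depth(pred, lidar, color, radius=4, sigma_s=2.0, sigma_c=10.0,
                 lr=0.5):
    out = pred.copy()
    H, W = pred.shape
    ys, xs = np.where(~np.isnan(lidar))           # measured depth pixels
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - radius), min(H, y + radius + 1)
        x0, x1 = max(0, x - radius), min(W, x + radius + 1)
        yy, xx = np.mgrid[y0:y1, x0:x1]
        # Weights from spatial distance and color difference, as per abstract.
        w_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
        w_c = np.exp(-(color[y0:y1, x0:x1] - color[y, x]) ** 2
                     / (2 * sigma_c ** 2))
        w = lr * w_s * w_c                        # combined update strength
        # Pull neighbouring predictions toward the measured depth.
        out[y0:y1, x0:x1] += w * (lidar[y, x] - out[y0:y1, x0:x1])
    return out
```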

A Study on the Effect of Technology Readiness Level and Commercialization Activities on the Success of Technology Commercialization: Focusing on Public Technology (기술사업화 성공에 대한 기술성숙도 및 사업화 활동의 영향에 관한 연구: 공공기술을 중심으로)

  • Shin, Yoonmi;Bong, Kang Ho;Park, Jaemin
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.22 no.6
    • /
    • pp.197-206
    • /
    • 2021
  • There is growing interest in the function and role of public research institutes as "entrepreneurial actors" that can contribute to industrial development by commercializing excellent research outputs. On the other hand, their performance in the commercialization phase has been insufficient because of low technology readiness levels or limited repeatability. This study conducted a probit model analysis to examine the effect of the technology readiness level (TRL) and commercialization activities on the success of technology commercialization. The results showed that the probability of success in technology commercialization increases with the TRL at the time of acquisition. The difference between the TRL at acquisition and the current TRL (the TRL gap) does not affect technology commercialization on its own; it generates additional effects in conjunction with the TRL at acquisition. Finally, marginal-effect estimation shows that technology commercialization is most likely to succeed when a technology at TRL 4-6 is improved to TRL 9.
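
A minimal sketch of a probit specification with an interaction term and marginal effects, using statsmodels, is shown below. The variable names, the synthetic data, and the regressor set are hypothetical; the paper's model likely includes additional commercialization-activity covariates.

```python
# Probit with a TRL-by-gap interaction and average marginal effects.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
trl_acq = rng.integers(3, 8, n).astype(float)   # hypothetical TRL at acquisition
trl_gap = rng.integers(0, 5, n).astype(float)   # hypothetical TRL improvement
lin = -4 + 0.5 * trl_acq + 0.15 * trl_acq * trl_gap
success = (lin + rng.standard_normal(n) > 0).astype(int)

X = sm.add_constant(np.column_stack([trl_acq, trl_gap, trl_acq * trl_gap]))
fit = sm.Probit(success, X).fit(disp=False)
print(fit.summary(xname=["const", "trl_acq", "trl_gap", "acq_x_gap"]))
print(fit.get_margeff().summary())   # average marginal effects
```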

Statistical Analysis of Extreme Values of Financial Ratios (재무비율의 극단치에 대한 통계적 분석)

  • Joo, Jihwan
    • Knowledge Management Research
    • /
    • v.22 no.2
    • /
    • pp.247-268
    • /
    • 2021
  • Investors mainly use PER and PBR among financial ratios for valuation and investment decision-making. I analyze these two basic financial ratios from a statistical perspective. Financial ratios contain key accounting numbers that reflect firm fundamentals and are useful for valuation and risk analysis, such as enterprise credit evaluation and default prediction. The distribution of financial data tends to be extremely heavy-tailed: PER and PBR show exceedingly high kurtosis, and their extreme cases often contain significant information on financial risk. In this respect, Extreme Value Theory is required to fit their right tails more precisely. I introduce not only the GPD, the conventional model in Extreme Value Theory, but also the exGPD, the log-transformed distribution of the GPD, which was recently proposed as an alternative (Lee and Kim, 2019). First, I conduct a simulation comparing the performance of the two distributions using goodness-of-fit measures and the estimation of the 90th-99th percentiles. I also conduct an empirical analysis of information technology firms in Korea. The exGPD shows better performance, especially for PBR, suggesting that it could be an alternative to the GPD for the analysis of financial ratios.
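
A minimal sketch of the baseline GPD tail fit via peaks-over-threshold is shown below, assuming a one-dimensional array of PBR values and a 90th-percentile threshold (an illustrative choice). The exGPD is not available in scipy and is omitted here.

```python
# Peaks-over-threshold GPD fit and tail percentile estimation with scipy.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
pbr = np.exp(rng.standard_normal(2000))   # hypothetical heavy-tailed PBR values

u = np.quantile(pbr, 0.90)                # tail threshold (illustrative)
exceed = pbr[pbr > u] - u                 # exceedances over the threshold
shape, loc, scale = genpareto.fit(exceed, floc=0)

# Tail percentile of the original variable via the POT representation:
# for p above the threshold level, q_p = u + GPD_inv((p - 0.90) / (1 - 0.90)).
for p in (0.95, 0.99):
    q = u + genpareto.ppf((p - 0.90) / 0.10, shape, loc=0, scale=scale)
    print(f"{p:.0%} percentile estimate: {q:.2f}")
```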