• Title/Summary/Keyword: modified regression model


Use of bivariate gamma function to reconstruct dynamic behavior of laminated composite plates containing embedded delamination under impact loads

  • Lee, Sang-Youl;Jeon, Jong-Su
    • Structural Engineering and Mechanics
    • /
    • v.70 no.1
    • /
    • pp.1-11
    • /
    • 2019
  • This study presents a method based on a modified bivariate gamma function for reconstructing the dynamic behavior of delaminated composite plates subjected to impact loads. The proposed bivariate gamma function is combined with micro-genetic algorithms, which are capable of solving inverse problems to determine the stiffness reduction associated with delamination. Once the unknown parameters have been computed, a prediction model of the entire dynamic response can be developed through a regression analysis based on the measurement data. The validity of the proposed method was verified by comparison with results from a higher-order finite element model. Parametric results revealed that the proposed method can reconstruct dynamic responses, and that the stiffness reduction of delaminated composite plates can be investigated for different measurement and loading locations.
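As an illustration of the inverse-problem step, the sketch below uses a tiny genetic algorithm to recover a stiffness-reduction factor from a measured response. The one-parameter forward model (a natural frequency that drops with stiffness loss) is a made-up stand-in for the paper's higher-order finite element model, not the authors' formulation.

```python
import random

# Hypothetical forward model: first natural frequency of a plate whose
# stiffness is reduced by a delamination factor d (0 <= d < 1). This is
# a stand-in for the higher-order finite element response in the paper.
def frequency(d, k0=200.0, m=2.0):
    return (k0 * (1.0 - d) / m) ** 0.5

def micro_ga(measured, pop_size=20, generations=60, seed=1):
    """Tiny elitist genetic algorithm recovering the reduction factor d."""
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 0.9) for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness: squared mismatch with the measured frequency (lower is better).
        pop.sort(key=lambda d: (frequency(d) - measured) ** 2)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = 0.5 * (a + b) + rng.gauss(0.0, 0.02)  # crossover + mutation
            children.append(min(max(child, 0.0), 0.9))
        pop = parents + children
    return min(pop, key=lambda d: (frequency(d) - measured) ** 2)

d_true = 0.30
d_est = micro_ga(frequency(d_true))  # recover d from the "measured" frequency
```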

Model Independent Statistics in Cosmology

  • Keeley, Ryan E.;Shafieloo, Arman
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.45 no.1
    • /
    • pp.49.1-49.1
    • /
    • 2020
  • In this talk, I will discuss a few different techniques for reconstructing cosmological functions, such as the primordial power spectrum and the expansion history. These model-independent techniques are useful because they can discover surprising results in a way that nested modeling cannot. For instance, we can use the modified Richardson-Lucy algorithm to reconstruct a novel primordial power spectrum from the Planck data that can resolve the "Hubble tension". This novel primordial power spectrum has regular oscillatory features that would be difficult to find using parametric methods. Further, we can use Gaussian process regression to reconstruct the expansion history of the Universe from low-redshift distance datasets. We can also use this technique to test whether these datasets are consistent with one another, which essentially allows the technique to serve as a systematics finder.
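A minimal sketch of the Gaussian process regression step described above, with a smooth synthetic function standing in for real low-redshift distance data; the squared-exponential kernel and its length scale are assumptions, not the talk's actual setup.

```python
import numpy as np

def rbf(x1, x2, length=0.5, amp=1.0):
    """Squared-exponential (RBF) kernel matrix between two 1-D point sets."""
    return amp * np.exp(-0.5 * ((x1[:, None] - x2[None, :]) / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-4):
    """Gaussian process regression posterior mean (zero prior mean)."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf(x_test, x_train)
    return K_s @ np.linalg.solve(K, y_train)

# Illustrative stand-in for an expansion history: a smooth function
# sampled at a handful of "redshifts".
x = np.linspace(0.0, 1.0, 11)
y = 1.0 + 0.5 * x**2
x_new = np.array([0.55])
y_new = gp_predict(x, y, x_new)  # nonparametric reconstruction at x = 0.55
```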


Vehicle Classification and Tracking based on Deep Learning (딥러닝 기반의 자동차 분류 및 추적 알고리즘)

  • Hyochang Ahn;Yong-Hwan Lee
    • Journal of the Semiconductor & Display Technology
    • /
    • v.22 no.3
    • /
    • pp.161-165
    • /
    • 2023
  • One of the difficult tasks in an autonomous driving system is detecting road lanes or objects within the road boundaries. Detecting and tracking vehicles can play an important role in providing key information within the framework of advanced driver assistance systems, such as identifying road traffic conditions and crime situations. This paper proposes a vehicle detection scheme based on deep learning to classify and track vehicles in a complex and diverse environment. We use a modified YOLO as the object detector and polynomial regression as the object tracker in the driving video. Experimental results show that, with the YOLO model as the deep learning model, robust vehicle tracking can be performed quickly and accurately in various environments compared to traditional methods.
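The polynomial-regression tracking step can be sketched as follows; the centroid track and the fit degree are illustrative choices, not the paper's configuration.

```python
import numpy as np

def predict_next(centroids, degree=2):
    """Fit a polynomial to a vehicle's recent centroid track (one axis)
    and extrapolate one frame ahead; a stand-in for the tracking step
    that follows the YOLO detections."""
    t = np.arange(len(centroids))
    coeffs = np.polyfit(t, centroids, degree)
    return float(np.polyval(coeffs, len(centroids)))

# x-coordinates of a detected vehicle over 5 frames (constant acceleration).
xs = [10.0, 12.0, 16.0, 22.0, 30.0]
x_next = predict_next(xs)  # quadratic fit extrapolates the next position
```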


Assessment of radiopacity of restorative composite resins with various target distances and exposure times and a modified aluminum step wedge

  • Mir, Arash Poorsattar Bejeh;Mir, Morvarid Poorsattar Bejeh
    • Imaging Science in Dentistry
    • /
    • v.42 no.3
    • /
    • pp.163-167
    • /
    • 2012
  • Purpose: ANSI/ADA has established standards for adequate radiopacity. This study aimed to assess the changes in radiopacity of composite resins under various tube-target distances and exposure times. Materials and Methods: Five 1-mm thick samples of Filtek P60 and Clearfil composite resins were prepared and exposed with six tube-target distance/exposure time setups (i.e., 40 cm, 0.2 seconds; 30 cm, 0.2 seconds; 30 cm, 0.16 seconds; 30 cm, 0.12 seconds; 15 cm, 0.2 seconds; 15 cm, 0.12 seconds) operating at 70 kVp and 7 mA, along with a 12-step aluminum stepwedge (1 mm incremental steps), using a PSP digital sensor. Thereafter, the radiopacities measured with Digora for Windows software 2.5 were converted to absorbencies (i.e., A = -log(1 - G/255), where A is the absorbency and G is the measured gray value). Furthermore, a linear regression model of aluminum thickness against absorbency was developed and used to convert the radiopacity of dental materials to the equivalent aluminum thickness. In addition, all calculations were compared with those obtained from a modified 3-step stepwedge (i.e., using data for the 2nd, 5th, and 8th steps). Results: The radiopacities of the composite resins differed significantly among the various setups (p<0.001) and between the materials (p<0.001). The best prediction model was obtained for the 30 cm, 0.2 seconds setup ($R^2$=0.999). Data from the reduced modified stepwedge were remarkably comparable with those from the 12-step stepwedge. Conclusion: Within the limits of the present study, our findings support that various setups might influence the radiopacity of dental materials on digital radiographs.
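The gray-value-to-absorbency conversion and the aluminum-equivalence step can be sketched as follows. A base-10 logarithm is assumed (the abstract writes only "log"), and the stepwedge gray readings below are invented for illustration, not the study's measurements.

```python
import math

def absorbency(gray):
    """Convert an 8-bit gray value to absorbency, A = -log10(1 - G/255)."""
    return -math.log10(1.0 - gray / 255.0)

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical calibration: aluminum step thickness (mm) vs. gray reading,
# using the 2nd, 5th, and 8th steps of the modified 3-step wedge.
steps_mm = [2.0, 5.0, 8.0]
grays = [60.0, 120.0, 170.0]     # illustrative readings
a, b = fit_line(steps_mm, [absorbency(g) for g in grays])

# Equivalent aluminum thickness of a composite sample reading gray = 140.
al_equiv = (absorbency(140.0) - b) / a
```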

A Comparison of Models for Predicting Discretionary Accruals: A Cross-Country Analysis

  • ACAR, Goksel;COSKUN, Ali
    • The Journal of Asian Finance, Economics and Business
    • /
    • v.7 no.9
    • /
    • pp.315-328
    • /
    • 2020
  • In this study, we examined various aspects of discretionary accruals. We compared the explanatory power of the Jones Model (JM), the Modified Jones Model (MJM), and the Performance Matched Model (PMM). Furthermore, we tested whether accruals derived from the cash flow approach or the balance sheet approach provide better results, and we investigated the significance of country and industry control variables in the models. To perform these tests, we constructed thirty equations. The data consist of 319 non-financial companies over five years in the GCC region. We used panel data regression models, and specification testing indicated that the random effects model was the most suitable. The results show that the PMM has the highest explanatory power among the models, followed by the JM and the MJM, consecutively. Secondly, the results reveal that accruals derived from the cash flow approach provide more accurate results. Moreover, country dummies are significant in models using the cash flow approach but lose significance under the balance sheet approach. We distinguished industries using two different classifications: the first group, with a higher number of industries, is more precise than the second group, which has a narrower scope and a lower number of industries. The model including both industry and country dummies scores highest in significance.
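A sketch of the Modified Jones Model regression on synthetic data: scaled total accruals are regressed on 1/A, (ΔREV − ΔAR)/A, and PPE/A, and the residuals are taken as discretionary accruals. The coefficients and variable scales below are invented, not the study's estimates.

```python
import numpy as np

# Synthetic firm-year panel (200 observations), all variables scaled by
# lagged total assets as in the Modified Jones Model.
rng = np.random.default_rng(0)
n = 200
inv_assets = rng.uniform(1e-4, 1e-3, n)   # 1 / lagged assets
rev_ar = rng.normal(0.05, 0.02, n)        # (dREV - dAR) / lagged assets
ppe = rng.uniform(0.2, 0.6, n)            # PPE / lagged assets
X = np.column_stack([inv_assets, rev_ar, ppe])
true_beta = np.array([50.0, 0.3, -0.1])
total_accruals = X @ true_beta + rng.normal(0.0, 0.01, n)

# Fit the accruals equation; residuals = discretionary accruals.
beta, *_ = np.linalg.lstsq(X, total_accruals, rcond=None)
discretionary = total_accruals - X @ beta
```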

Do Industry 4.0 & Technology Affect Carbon Emission: Analyse with the STIRPAT Model?

  • Asha SHARMA
    • Fourth Industrial Review
    • /
    • v.3 no.2
    • /
    • pp.1-10
    • /
    • 2023
  • Purpose - The main purpose of the paper is to examine the variables affecting carbon emissions in different nations around the world. Research design, data, and methodology - To measure their impact on carbon emissions, secondary data for the top 50 countries have been taken. The Stochastic Impacts by Regression on Population, Affluence, and Technology (STIRPAT) model has been used to quantify the factors that affect carbon emissions. A modified version of the basic STIRPAT model incorporating Industry 4.0 and region has been applied with the ordinary least squares approach. The outcome has been measured using both the basic and the extended STIRPAT models. Result - Technology was found to be a positive and statistically significant determinant at the 0.001 alpha level, indicating that technological innovation helps reduce carbon emissions. In total, four models have been derived to test for the best fit and to find the highest explained variance. Model 3 is found to be the best fit in explanatory power, with the highest adjusted $R^2$ (97.95%). Conclusion - It can be concluded that the selected explanatory variables, population and Industry 4.0, are important indicators and causal factors for carbon emissions, and are found to be consistent across all four models for total CO2 and CO2 per capita.
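The STIRPAT estimation reduces to an ordinary least squares fit in logarithms, ln(I) = a + b·ln(P) + c·ln(A) + d·ln(T), so the coefficients read directly as elasticities. The sketch below uses synthetic country-level data; the scales and coefficients are assumptions, not the paper's results.

```python
import numpy as np

# Synthetic cross-section of 50 "countries" for a basic STIRPAT fit.
rng = np.random.default_rng(42)
n = 50
lnP = rng.normal(16, 1, n)      # ln population
lnA = rng.normal(9, 1, n)       # ln affluence (GDP per capita)
lnT = rng.normal(2, 0.5, n)     # ln technology proxy
X = np.column_stack([np.ones(n), lnP, lnA, lnT])
lnI = X @ np.array([-5.0, 1.0, 0.6, 0.4]) + rng.normal(0, 0.1, n)

# OLS in logs: coefficients are the emission elasticities.
coef, *_ = np.linalg.lstsq(X, lnI, rcond=None)
pop_elasticity = coef[1]   # % change in emissions per 1 % change in population
```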

A SVR Based-Pseudo Modified Einstein Procedure Incorporating H-ADCP Model for Real-Time Total Sediment Discharge Monitoring (실시간 총유사량 모니터링을 위한 H-ADCP 연계 수정 아인슈타인 방법의 의사 SVR 모형)

  • Noh, Hyoseob;Son, Geunsoo;Kim, Dongsu;Park, Yong Sung
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.43 no.3
    • /
    • pp.321-335
    • /
    • 2023
  • Monitoring sediment loads in natural rivers is a key process in river engineering, but it is costly and dangerous. In practice, suspended loads are measured directly, and total loads, which are the sum of suspended loads and bed loads, are estimated. This study proposes a real-time sediment discharge monitoring system using a horizontal acoustic Doppler current profiler (H-ADCP) and support vector regression (SVR). The proposed system comprises an SVR model for suspended sediment concentration (SVR-SSC) and one for total loads (SVR-QTL). SVR-SSC estimates the SSC, and SVR-QTL mimics the modified Einstein procedure. Grid search with K-fold cross validation (Grid-CV) and recursive feature elimination (RFE) were employed to determine the SVR hyperparameters and input variables. The two SVR models showed reasonable cross-validation scores ($R^2$) of 0.885 (SVR-SSC) and 0.860 (SVR-QTL). During the time-series sediment load monitoring period, we successfully detected various sediment transport phenomena in natural streams, such as hysteresis loops and sensitive sediment fluctuations. The newly proposed sediment monitoring system depends only on the features gauged by the H-ADCP, without additional assumptions about hydraulic variables (e.g., friction slope and suspended sediment size distribution). This method can be applied economically to any ADCP-equipped discharge monitoring station and is expected to enhance the temporal resolution of sediment monitoring.
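The Grid-CV tuning loop can be sketched as follows. To keep the sketch dependency-free, ridge regression stands in for SVR, and the data are synthetic; the grid search and K-fold R² scoring have the same shape as the procedure described.

```python
import numpy as np

def kfold_score(X, y, alpha, k=5):
    """Mean K-fold cross-validation R^2 of a ridge regression with
    regularization strength alpha (the tunable hyperparameter)."""
    idx = np.arange(len(y))
    scores = []
    for fold in range(k):
        test = idx[fold::k]                  # every k-th sample as the fold
        train = np.setdiff1d(idx, test)
        A = X[train].T @ X[train] + alpha * np.eye(X.shape[1])
        w = np.linalg.solve(A, X[train].T @ y[train])
        pred = X[test] @ w
        ss_res = np.sum((y[test] - pred) ** 2)
        ss_tot = np.sum((y[test] - y[test].mean()) ** 2)
        scores.append(1.0 - ss_res / ss_tot)  # R^2, as reported in the paper
    return float(np.mean(scores))

# Synthetic gauged features and target load.
rng = np.random.default_rng(7)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.1, 100)

grid = [0.01, 0.1, 1.0, 10.0]                 # candidate hyperparameters
best_alpha = max(grid, key=lambda a: kfold_score(X, y, a))
```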

Effects of Ovarian Status at the Time of Initiation of the Modified Double-Ovsynch Program on the Reproductive Performance in Dairy Cows

  • Jaekwan Jeong;Illhwa Kim
    • Journal of Veterinary Clinics
    • /
    • v.40 no.3
    • /
    • pp.238-241
    • /
    • 2023
  • This study determined the effect of ovarian status at the beginning of the modified Double-Ovsynch program on reproductive performance in dairy cows. In the study, 1,302 cows were treated with a modified Double-Ovsynch program at 56 days after calving. This program comprises administering gonadotropin-releasing hormone (GnRH), prostaglandin F (PGF) 10 days later, GnRH 3 days later, GnRH 7 days later, and GnRH 56 h later, followed by timed artificial insemination (TAI) 16 h later. At the beginning of the program, cows were categorized according to the size of the largest follicle and the presence of a corpus luteum (CL) in the ovaries as follows: 1) small follicle (<5 mm, SF group, n = 100), 2) medium follicle (8-20 mm, MF group, n = 538), and 3) large follicle (≥25 mm, LF group, n = 354) without a CL, or 4) the presence of a CL (CL group, n = 310). Pregnancies per AI after the first TAI were analyzed by logistic regression using the LOGISTIC procedure, and the logistic model included the fixed effects of herd size, parity, body condition score (BCS) at the first TAI, TAI period, and ovarian status. A larger herd size, a higher BCS at the first TAI, and a TAI period without heat stress increased (p < 0.05) the probability of pregnancy per AI after the first TAI. However, ovarian status at the beginning of the program did not affect (p > 0.05) pregnancies per AI (range, 37.9% to 42.9%). These results show that the modified Double-Ovsynch program can be used effectively, maintaining good fertility regardless of ovarian status, in dairy herds.

Comparison of the Estimation-Before-Modeling Technique with the Parameter Estimation Method Using the Extended Kalman Filter in the Estimation of Manoeuvring Derivatives of a Ship (선박 조종미계수 식별 시 모델링 전 추정기법과 확장 Kalman 필터에 의한 계수추정법의 비교에 관한 연구)

  • 윤현규;이기표
    • Journal of the Society of Naval Architects of Korea
    • /
    • v.40 no.5
    • /
    • pp.43-52
    • /
    • 2003
  • Two methods for estimating the manoeuvring derivatives in the model of the hydrodynamic force and moment acting on a manoeuvring ship were compared using sea trial data. One is the widely used parameter estimation method based on the Extended Kalman Filter (EKF), which treats the coefficients as augmented state variables and estimates the state variables of the linearized state-space model at every instant. The other is the Estimation-Before-Modeling (EBM) technique, the so-called two-step method. In the first step, the hydrodynamic force, whose dynamic model is assumed to be a third-order Gauss-Markov process, is estimated along with the motion variables by the EKF and the modified Bryson-Frazier smoother. In the next step, the manoeuvring derivatives are identified through regression analysis. If the exact structure of the hydrodynamic force were known, which is an ideal case, the EKF method would be regarded as superior to the EBM technique. However, the EBM technique was more robust than the EKF method from a realistic point of view, where the assumed model structure differs slightly from the real one.

Impact of climate variability and change on crop Productivity (기후변화에 따른 작물 생산성반응과 기술적 대응)

  • Shin Jin Chul;Lee Chung Geun;Yoon Young Hwan;Kang Yang Soon
    • Proceedings of the Korean Society of Crop Science Conference
    • /
    • 2000.11a
    • /
    • pp.12-27
    • /
    • 2000
  • During recent decades, the problem of climate variability and change has been at the forefront of scientific problems. The objective of this study was to assess the impact of climate variability on crop growth and yield. The change in growth duration was the main impact of climate variability on crop yield. The phyllochron interval was shortened under global warming conditions. A simple model describing developmental traits was derived from the heading data of directly seeded rice cultivars and temperature data. The daily mean development rate could be explained by the average temperature during the growth stage. A simple regression equation between the daily mean development rate (x) and the average temperature (y) during the growth period was fitted as y = ax + b, which can be rearranged as x = (y - b)/a. The parameters of the model could depict the thermo-sensitivity of the cultivars. On the basis of this model, three doubled-CO2 GCM scenarios were assessed. Their average would suggest a decline in rice production of about 11% if the current cultivars were maintained. The developmental traits of future cultivars could be suggested by the two model parameters.
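The rate-temperature regression and its inversion can be sketched as follows; the temperatures and development rates below are illustrative values, not the paper's data.

```python
# Fit y = a*x + b between daily mean development rate x (1/days to heading)
# and average temperature y, then invert it as x = (y - b)/a to predict the
# development rate at a new temperature. Illustrative data only.
temps = [20.0, 22.0, 24.0, 26.0]          # y: average temperature (deg C)
rates = [0.010, 0.0125, 0.015, 0.0175]    # x: 1 / (days from seeding to heading)

n = len(temps)
mx, my = sum(rates) / n, sum(temps) / n
a = sum((x - mx) * (y - my) for x, y in zip(rates, temps)) / \
    sum((x - mx) ** 2 for x in rates)
b = my - a * mx

rate_at_25 = (25.0 - b) / a               # inverted model: x = (y - b)/a
days_to_heading = 1.0 / rate_at_25        # predicted duration at 25 deg C
```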
