• Title/Summary/Keyword: E-Metrics


Comparative study of an integrated QoS in WLAN and WiMAX (WLAN과 WiMAX에서의 연동 서비스 품질 비교 연구)

  • Wang, Ye;Zhang, Xiao-Lei;Chen, Weiwei;Ki, Jang-Geun;Lee, Kyu-Tae
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.10 no.3
    • /
    • pp.103-110
    • /
    • 2010
  • This paper presented a systematic performance analysis of Quality of Service (QoS), using the OPNET simulator, in an interworking architecture of IEEE 802.16e (mobile WiMAX) and IEEE 802.11e (WLAN) wireless networks. Four simulation cases were configured in OPNET, and voice traffic was evaluated against several performance metrics: Mean Opinion Score (MOS), end-to-end delay, and packet transmission ratio. The simulation results showed that the MOS value was better in the WiMAX-to-WiMAX case than in the others, in both the static and the mobile scenarios. End-to-end delay was not greatly affected by mobility in any of the four cases; however, mobility degraded the MOS value and packet transmission ratio more in the WLAN-to-WLAN case than in the others.

Estimation of TCP Throughput Fairness Ratio under Various Background Traffic (다양한 백그라운드 트래픽이 존재하는 경우의 TCP 공정성 비율 측정)

  • Lee, Jun-Soo;Kim, Ju-Kyun
    • Journal of Korea Multimedia Society
    • /
    • v.11 no.2
    • /
    • pp.197-205
    • /
    • 2008
  • TCP packets occupy over 90% of current Internet traffic, so an understanding of TCP throughput is crucial to understanding the Internet. Under the TCP congestion-control regime, heterogeneous flows, i.e., flows with different round-trip times (RTTs), that share the same bottleneck link do not attain equal portions of the available bandwidth. In fact, according to the TCP-friendly formula, the throughput ratio of two flows is inversely proportional to the ratio of their RTTs. It has also been shown that TCP's unfairness to flows with longer RTTs is accentuated under loss synchronization. In this paper, we show that injecting bursty background traffic may actually lead to a new type of synchronization and result in unfairness to foreground TCP flows with longer RTTs. We propose three different metrics to characterize traffic burstiness and show that these metrics are reliable predictors of TCP unfairness.
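The inverse-RTT relationship from the TCP-friendly formula can be illustrated with a short sketch. Under the simplified rate model T ≈ (MSS/RTT)·C/√p, two flows sharing a bottleneck with the same segment size and loss rate have a throughput ratio equal to the inverse of their RTT ratio. The MSS, RTT, and loss-rate values below are illustrative, not taken from the paper:

```python
import math

def tcp_friendly_throughput(mss_bytes, rtt_s, loss_rate, c=math.sqrt(3 / 2)):
    """Simplified TCP-friendly rate: T ~ (MSS / RTT) * C / sqrt(p), in bytes/s."""
    return (mss_bytes / rtt_s) * c / math.sqrt(loss_rate)

# Two flows sharing a bottleneck: same MSS and loss rate, different RTTs.
t_short = tcp_friendly_throughput(1460, 0.020, 0.01)  # 20 ms RTT
t_long = tcp_friendly_throughput(1460, 0.080, 0.01)   # 80 ms RTT

# Throughput ratio equals the inverse RTT ratio: 80 ms / 20 ms = 4.
print(t_short / t_long)  # 4.0
```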


Handling Of Sensitive Data With The Use Of 3G In Vehicular Ad-Hoc Networks

  • Mallick, Manish;Shakya, Subarna;Shrestha, Surendra;Shrestha, Bhanu;Cho, Seongsoo
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.8 no.2
    • /
    • pp.49-54
    • /
    • 2016
  • Data delivery is very challenging in VANETs because of their unique characteristics, such as fast topology change, frequent disruptions, and rare contact opportunities. This paper explores the scope of 3G-assisted data delivery in a VANET within a budget constraint on 3G traffic, starting from the simple S_Random (Srand) algorithm and arriving at 3GSDD, the proposed algorithm. The performance of the different algorithms is evaluated through two metrics, delivery ratio and average delay; a third function, a utility, is defined to reflect these two metrics and is used to identify the best algorithm. A packet can be delivered either via multihop transmissions in the VANET or via 3G. The main challenge is to decide which set of packets should be selected for 3G transmission and when to deliver them via 3G. The aim is to select and send through 3G those packets that are most sensitive and require immediate attention. Through an appropriate communication mechanism, this sensitive information is forwarded within the VANET for 3G transmission. In this way, sensitive information that could not be transmitted through the normal VANET will still reach its destination through 3G transmission, unconditionally and with top priority. The system can also maximize the delivery ratio of the packets.
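The abstract does not give the paper's actual utility function, so the following is only a hedged illustration of the idea: a utility that rewards delivery ratio and penalizes normalized average delay can rank the candidate algorithms. The weight, delay scale, and metric values below are all hypothetical:

```python
def utility(delivery_ratio, avg_delay_s, w=0.5, delay_scale_s=60.0):
    """Hypothetical utility: reward delivery ratio, penalize normalized delay."""
    return w * delivery_ratio - (1 - w) * (avg_delay_s / delay_scale_s)

# Made-up (delivery ratio, average delay in seconds) per algorithm:
algorithms = {
    "Srand": (0.62, 48.0),
    "3GSDD": (0.81, 30.0),
}

# Pick the algorithm with the highest utility.
best = max(algorithms, key=lambda name: utility(*algorithms[name]))
print(best)  # 3GSDD under these example numbers
```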

Regional land cover patterns, changes and potential relationships with scaled quail (Callipepla squamata) abundance

  • Rho, Paikho;Wu, X. Ben;Smeins, Fred E.;Silvy, Nova J.;Peterson, Markus J.
    • Journal of Ecology and Environment
    • /
    • v.38 no.2
    • /
    • pp.185-193
    • /
    • 2015
  • A dramatic decline in the abundance of the scaled quail (Callipepla squamata) has been observed across most of its geographic range. To evaluate the influence of land cover patterns and their changes on scaled quail abundance, we examined landscape patterns and their changes from the 1970s to the 1990s in two large ecoregions with contrasting population trends: (1) the Rolling Plains ecoregion, with a significantly decreased scaled quail population, and (2) the South Texas Plains ecoregion, with a relatively stable scaled quail population. The National Land Cover Database (NLCD) and the U.S. Geological Survey's (USGS) Land Use/Land Cover data were used to quantify landscape patterns and their changes based on 80 randomly located 20 × 20 km windows in each ecoregion. We found that landscapes in the Rolling Plains and the South Texas Plains differed considerably in composition and in spatial characteristics related to scaled quail habitat. The landscapes of the South Texas Plains had significantly more shrubland and less grassland-herbaceous rangeland, and, except for shrublands, they were more fragmented, with greater interspersion among land cover classes. Correlation analysis between the landscape metrics and the quail abundance survey data showed that shrublands appeared to be more important for scaled quail in the South Texas Plains, while grassland-herbaceous rangelands and pasture-croplands were essential to scaled quail habitat in the Rolling Plains. The decrease in the amount of grassland-herbaceous rangeland and the spatial aggregation of pasture-croplands have likely contributed to the population decline of the scaled quail in the Rolling Plains ecoregion.

A comparison of three performance-based seismic design methods for plane steel braced frames

  • Kalapodis, Nicos A.;Papagiannopoulos, George A.;Beskos, Dimitri E.
    • Earthquakes and Structures
    • /
    • v.18 no.1
    • /
    • pp.27-44
    • /
    • 2020
  • This work presents a comparison of three performance-based seismic design (PBSD) methods as applied to plane steel frames having eccentric braces (EBFs) and buckling restrained braces (BRBFs). The first method uses equivalent modal damping ratios (ξk), referring to an equivalent multi-degree-of-freedom (MDOF) linear system that retains the mass and elastic stiffness of, and responds in the same way as, the original non-linear MDOF system. The second method employs modal strength reduction factors (q̄k) derived from the corresponding modal damping ratios. Contrary to the behavior factors of code-based design methods, both ξk and q̄k account for the first few modes of significance and incorporate target deformation metrics, such as the inter-storey drift ratio (IDR) and local ductility, as well as structural characteristics such as natural period and soil type. Explicit empirical expressions for ξk and q̄k, recently presented by the authors elsewhere, are also provided here for completeness and easy reference. The third method, developed here by the authors, is based on a hybrid force/displacement (HFD) seismic design scheme, since it combines the force-based design (FBD) method with the displacement-based design (DBD) method. According to this method, seismic design is accomplished using a behavior factor (qh), empirically expressed in terms of the global ductility of the frame, which takes into account both structural and non-structural deformation metrics. The expressions for qh are obtained through extensive parametric studies involving non-linear dynamic analysis (NLDA) of 98 frames subjected to 100 far-fault ground motions corresponding to the four soil types of Eurocode 8. These factors can be used in conjunction with an elastic acceleration design spectrum for seismic design purposes. Finally, the three seismic design methods above and the Eurocode 8 method are compared, with the aid of non-linear dynamic analyses, via representative numerical examples involving plane steel EBFs and BRBFs.
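A rough sketch of the modal-reduction idea behind the second method (not the authors' actual empirical expressions): each elastic modal response quantity is divided by its modal strength reduction factor, and the reduced modal quantities are combined, e.g., by SRSS. All numbers below are hypothetical:

```python
import math

def reduced_modal_forces(elastic_modal_forces, q_bars):
    """Divide each elastic modal force by its modal strength reduction factor."""
    return [f / q for f, q in zip(elastic_modal_forces, q_bars)]

def srss_base_shear(modal_forces):
    """Combine modal base shears via the square root of the sum of squares."""
    return math.sqrt(sum(v * v for v in modal_forces))

elastic = [1200.0, 400.0, 150.0]  # kN, hypothetical first three modal base shears
q_bars = [4.0, 3.0, 2.0]          # hypothetical modal strength reduction factors

print(srss_base_shear(reduced_modal_forces(elastic, q_bars)))  # design base shear, kN
```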

Quantifications of Intensity-Modulated Radiation Therapy Plan Complexities in Magnetic Resonance Image Guided Radiotherapy Systems

  • Chun, Minsoo;Kwon, Ohyun;Park, Jong Min;Kim, Jung-in
    • Journal of Radiation Protection and Research
    • /
    • v.46 no.2
    • /
    • pp.48-57
    • /
    • 2021
  • Background: In this study, the complexities of step-and-shoot intensity-modulated radiation therapy (IMRT) plans in magnetic resonance-guided radiation therapy systems were evaluated. Materials and Methods: Overall, 194 verification plans from the abdomen, prostate, and breast sites were collected using a 60Co-based ViewRay radiotherapy system (ViewRay Inc., Cleveland, OH, USA). Various plan complexity metrics (PCMs) were calculated for each verification plan, including the modulation complexity score (MCS), plan-averaged beam area (PA), plan-averaged beam irregularity, plan-averaged edge (PE), plan-averaged beam modulation, number of segments, average area among all segments (AA/Seg), and total beam-on time (TBT). The plan deliverability was quantified in terms of gamma passing rates (GPRs) with a 1 mm/2% criterion, and the Pearson correlation coefficients between GPRs and the various PCMs were analyzed. Results and Discussion: For the abdomen, prostate, and breast groups, the average GPRs with the 1 mm/2% criterion were 77.8 ± 6.0%, 79.8 ± 4.9%, and 84.7 ± 7.3%; MCSs were 0.263, 0.271, and 0.386; PAs were 15.001, 18.779, and 35.683; PEs were 1.575, 1.444, and 1.028; AA/Segs were 15.37, 19.89, and 36.64; and TBTs were 18.86, 19.33, and 5.91 minutes, respectively. The various PCMs, i.e., MCS, PA, PE, AA/Seg, and TBT, showed statistically significant Pearson correlation coefficients of 0.416, 0.627, -0.541, 0.635, and -0.397, respectively, with GPRs. Conclusion: The area-related metrics exhibited strong correlations with GPRs. Moreover, the AA/Seg metric can be used to estimate the IMRT plan accuracy without beam delivery in the 60Co-based ViewRay radiotherapy system.
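The Pearson correlation between a plan complexity metric and the gamma passing rate can be computed as below. The toy PA/GPR values are made up for illustration; they are not the study's data:

```python
import math
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical plan-averaged beam areas vs. gamma passing rates (%):
pa = [12.0, 16.5, 21.0, 30.2, 38.4]
gpr = [74.1, 78.0, 80.5, 83.2, 86.9]
print(round(pearson_r(pa, gpr), 3))
```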

Application of a Statistical Interpolation Method to Correct Extreme Values in High-Resolution Gridded Climate Variables (고해상도 격자 기후자료 내 이상 기후변수 수정을 위한 통계적 보간법 적용)

  • Jeong, Yeo min;Eum, Hyung-Il
    • Journal of Climate Change Research
    • /
    • v.6 no.4
    • /
    • pp.331-344
    • /
    • 2015
  • A long-term gridded historical dataset at 3 km spatial resolution has been generated for practical regional applications such as hydrologic modelling. However, overly high or low values have been found at some grid points with complex topography or a sparse observational network. In this study, the Inverse Distance Weighting (IDW) method was applied to smooth the overly predicted values of the Improved GIS-based Regression Model (IGISRM), producing what we call the IDW-IGISRM grid data, at the same resolution for daily precipitation, maximum temperature, and minimum temperature from 2001 to 2010 over South Korea. We tested various effective distances in the IDW method to detect the optimal distance that provides the highest performance. IDW-IGISRM was compared with IGISRM in terms of spatial patterns and quantitative performance metrics over 243 AWS observational points and four selected stations showing the largest biases. Regarding the spatial pattern, IDW-IGISRM reduced irrationally over-predicted values, i.e., it produced smoother spatial maps than IGISRM for all variables. In addition, all quantitative performance metrics were improved by IDW-IGISRM: the correlation coefficient (CC) and Index of Agreement (IOA) increased by up to 11.2% and 2.0%, respectively, while the Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) decreased by up to 5.4% and 15.2%, respectively. At the four selected stations, the improvement was even more considerable. These results indicate that IDW-IGISRM can improve the predictive performance of IGISRM, consequently providing more reliable high-resolution gridded data for assessment, adaptation, and vulnerability studies of climate change impacts.
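A minimal sketch of the IDW smoothing step, assuming the common power-2 weighting (the study's exact weighting exponent and effective-distance handling are not specified in the abstract):

```python
import math

def idw_interpolate(target, stations, power=2.0, max_distance=None):
    """Inverse Distance Weighting: estimate a value at `target` from
    (coordinate, value) pairs, weighting each station by 1 / d**power.
    Stations farther than `max_distance` (the effective distance) are ignored."""
    num = den = 0.0
    for (x, y), value in stations:
        d = math.hypot(x - target[0], y - target[1])
        if d == 0.0:
            return value          # target coincides with a station
        if max_distance is not None and d > max_distance:
            continue              # outside the effective distance
        w = d ** -power
        num += w * value
        den += w
    return num / den

# Two equidistant stations: the estimate is their average.
print(idw_interpolate((1.0, 0.0), [((0.0, 0.0), 10.0), ((2.0, 0.0), 20.0)]))  # 15.0
```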

Safe Discharge Criteria After Curative Gastrectomy for Gastric Cancer

  • Guner, Ali;Kim, Ki Yoon;Park, Sung Hyun;Cho, Minah;Kim, Yoo Min;Hyung, Woo Jin;Kim, Hyoung-Il
    • Journal of Gastric Cancer
    • /
    • v.22 no.4
    • /
    • pp.395-407
    • /
    • 2022
  • Purpose: This study aimed to investigate the relationship between clinical and laboratory parameters and complication status to predict which patients can be safely discharged from the hospital on the third postoperative day (POD). Materials and Methods: Data from a prospectively maintained database of 2,110 consecutive patients with gastric adenocarcinoma who underwent curative surgery were reviewed. Vital signs and laboratory data on the third POD, along with details of the postoperative course, were collected. Patients with grade II or higher complications after the third POD were considered unsuitable for early discharge. Performance metrics were calculated for all algorithm parameters. The proposed algorithm was tested on a validation dataset of consecutive patients from the same center. Results: Of the 1,438 patients in the study cohort, 142 (9.9%) were considered unsuitable for early discharge. C-reactive protein level, body temperature, pulse rate, and neutrophil count had good performance metrics and were determined to be independent prognostic factors. An algorithm consisting of these 4 parameters had a negative predictive value (NPV) of 95.9% (95% confidence interval [CI], 94.2-97.3), a sensitivity of 80.3% (95% CI, 72.8-86.5), and a specificity of 51.1% (95% CI, 48.3-53.8). Only 28 (1.9%) patients in the study cohort were classified as false negatives. In the validation dataset, the NPV was 93.7%, the sensitivity was 66%, and 3.3% (17/512) of patients were classified as false negatives. Conclusions: Simple clinical and laboratory parameters obtained on the third POD can be used when making decisions regarding the safe early discharge of patients who underwent gastrectomy.
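The reported figures follow the standard 2×2 screening definitions. The counts below are reconstructed from the abstract's percentages (1,438 patients, 142 positives, 28 false negatives), so treat them as approximate:

```python
def screening_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and negative predictive value (NPV) from a
    2x2 confusion table; here 'positive' = unsuitable for early discharge."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "npv": tn / (tn + fn),
    }

# Counts reconstructed from the study cohort's reported percentages:
m = screening_metrics(tp=114, fp=634, tn=662, fn=28)
print({k: round(v, 3) for k, v in m.items()})
# sensitivity 0.803, specificity 0.511, NPV 0.959 -- matching the abstract
```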

Comparison of Multi-Label U-Net and Mask R-CNN for panoramic radiograph segmentation to detect periodontitis

  • Rini, Widyaningrum;Ika, Candradewi;Nur Rahman Ahmad Seno, Aji;Rona, Aulianisa
    • Imaging Science in Dentistry
    • /
    • v.52 no.4
    • /
    • pp.383-391
    • /
    • 2022
  • Purpose: Periodontitis, the most prevalent chronic inflammatory condition affecting the tooth-supporting tissues, is diagnosed and classified through clinical and radiographic examinations. Staging periodontitis from panoramic radiographs provides information for designing computer-assisted diagnostic systems, and image segmentation is required for image processing in such diagnostic applications. This study evaluated image segmentation for periodontitis staging based on deep learning approaches. Materials and Methods: Multi-Label U-Net and Mask R-CNN models were compared for image segmentation to detect periodontitis using 100 digital panoramic radiographs. Normal conditions and the 4 stages of periodontitis were annotated on these radiographs. A total of 1,100 original and augmented images were then randomly divided into a training (75%) dataset, used to produce the segmentation models, and a testing (25%) dataset, used to determine the models' evaluation metrics. Results: The performance of the segmentation models against a dentist's radiographic diagnosis of periodontitis was described by evaluation metrics (i.e., the Dice coefficient and the intersection-over-union [IoU] score). Multi-Label U-Net achieved a Dice coefficient of 0.96 and an IoU score of 0.97, while Mask R-CNN attained a Dice coefficient of 0.87 and an IoU score of 0.74. U-Net performed semantic segmentation, and Mask R-CNN performed instance segmentation with accuracy, precision, recall, and F1-score values of 95%, 85.6%, 88.2%, and 86.6%, respectively. Conclusion: Multi-Label U-Net produced superior image segmentation to Mask R-CNN. The authors recommend integrating it with other techniques to develop hybrid models for automatic periodontitis detection.
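The Dice coefficient and IoU score used to evaluate the segmentation models have standard definitions for binary masks; a minimal sketch over sets of pixel indices:

```python
def dice_and_iou(mask_a, mask_b):
    """Dice coefficient and intersection-over-union for two binary masks,
    represented as sets of (row, col) pixel indices."""
    inter = len(mask_a & mask_b)
    union = len(mask_a | mask_b)
    dice = 2.0 * inter / (len(mask_a) + len(mask_b))
    iou = inter / union
    return dice, iou

# Two tiny overlapping 3-pixel masks:
a = {(0, 0), (0, 1), (1, 0)}
b = {(0, 1), (1, 0), (1, 1)}
dice, iou = dice_and_iou(a, b)
print(round(dice, 3), round(iou, 3))  # 0.667 0.5
```

Note that the two metrics are linked by Dice = 2·IoU / (1 + IoU), so for the same pair of masks the Dice coefficient is never smaller than the IoU score.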

EDNN based prediction of strength and durability properties of HPC using fibres & copper slag

  • Gupta, Mohit;Raj, Ritu;Sahu, Anil Kumar
    • Advances in concrete construction
    • /
    • v.14 no.3
    • /
    • pp.185-194
    • /
    • 2022
  • The use of industrial waste or secondary materials in producing cement and concrete has been encouraged in the construction field because it reduces the consumption of natural resources. At the same time, to ensure quality, the strength and durability properties of such cement and concrete must be analyzed. Existing research has focused on predicting the strength and other properties of High-Performance Concrete (HPC) with optimization and machine learning algorithms; however, these methods suffer from error and accuracy issues. This study therefore uses an Enhanced Deep Neural Network (EDNN) to predict the strength and durability of HPC. First, the data are gathered. Pre-processing then removes missing data and normalizes the remainder, after which features are extracted from the pre-processed data and fed to the EDNN, which predicts the strength and durability properties of the given mix designs. The EDNN's weights are initialized using the Switched Multi-Objective Jellyfish Optimization (SMOJO) algorithm, and a Gaussian radial basis function is used as the activation function. In the experimental analysis, the proposed EDNN is compared with existing DNN, CNN, ANN, and SVM methods using the RMSE, MAE, MAPE, and R2 metrics, and it performs better on all of them. Its effectiveness is also examined in terms of the accuracy, precision, recall, and F-measure metrics. Finally, the fitness of the proposed SMOJO algorithm is compared with that of existing algorithms (JO, GWO, PSO, and GA), and SMOJO achieves a higher fitness value.
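The four regression metrics used in the comparison (RMSE, MAE, MAPE, R²) have standard definitions; a self-contained sketch, with illustrative sample values rather than the paper's data:

```python
import math

def regression_metrics(y_true, y_pred):
    """RMSE, MAE, MAPE (%), and R^2 for paired observations and predictions."""
    n = len(y_true)
    errs = [t - p for t, p in zip(y_true, y_pred)]
    rmse = math.sqrt(sum(e * e for e in errs) / n)
    mae = sum(abs(e) for e in errs) / n
    mape = 100.0 * sum(abs(e) / abs(t) for e, t in zip(errs, y_true)) / n
    mean_t = sum(y_true) / n
    ss_res = sum(e * e for e in errs)
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot
    return rmse, mae, mape, r2

# Illustrative compressive strengths (MPa) vs. model predictions:
y_true = [52.0, 48.5, 61.2, 55.0]
y_pred = [50.5, 49.0, 60.0, 56.1]
rmse, mae, mape, r2 = regression_metrics(y_true, y_pred)
print(round(rmse, 3), round(mae, 3), round(mape, 2), round(r2, 3))
```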