• Title/Summary/Keyword: size metrics

106 search results

Sample Size Calculations for the Development of Biosimilar Products Based on Binary Endpoints

  • Kang, Seung-Ho; Jung, Ji-Yong; Baik, Seon-Hye
    • Communications for Statistical Applications and Methods / v.22 no.4 / pp.389-399 / 2015
  • It is important not to overcalculate sample sizes for clinical trials, for economic, ethical, and scientific reasons. Kang and Kim (2014) investigated the accuracy of a well-known sample size formula based on the approximate power for continuous endpoints in equivalence trials, which has been widely used in the development of biosimilar products. They concluded that this formula is overly conservative and that sample size should instead be calculated from the exact power. This paper extends those results to binary endpoints for three popular metrics: the risk difference, the log of the relative risk, and the log of the odds ratio. We conclude that the sample size formulae based on the approximate power for binary endpoints in equivalence trials are likewise overly conservative; in many cases, sample sizes chosen to achieve 80% power under the approximate power actually attain 90% exact power. We propose that sample size be computed numerically from the exact power.
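As an illustration of the approximate approach the paper critiques, here is a minimal sketch of the textbook large-sample formula for the per-group sample size of an equivalence trial on the risk difference. The function name, the default 5%/80% settings, and the example rates are assumptions for illustration; the paper's point is that the exact power should be computed numerically instead.

```python
from math import ceil
from statistics import NormalDist

def n_equivalence_rd(p1, p2, margin, alpha=0.05, beta=0.2):
    """Approximate per-group sample size for an equivalence trial on the
    risk difference (textbook large-sample formula; the paper argues
    such approximations are overly conservative vs. the exact power)."""
    z_a = NormalDist().inv_cdf(1 - alpha)       # one-sided alpha quantile
    z_b = NormalDist().inv_cdf(1 - beta / 2)    # power term for TOST
    var = p1 * (1 - p1) + p2 * (1 - p2)         # summed binomial variances
    return ceil((z_a + z_b) ** 2 * var / (margin - abs(p1 - p2)) ** 2)
```

With equal true rates of 0.8 and a 0.15 equivalence margin, this yields roughly 120 subjects per group; widening the margin shrinks the required size.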

An In-depth Analysis and Performance Improvement of a Container Relocation Algorithm

  • Lee, Hyung-Bong; Kwon, Ki-Hyeon
    • Journal of the Korea Society of Computer and Information / v.22 no.9 / pp.81-89 / 2017
  • CRP (Container Relocation Problem) algorithms, which pursue efficient container relocation in wharf container terminals, cannot be deterministic because of the enormous number of layout cases. They must therefore rely on intuition, trial and error, and experimental heuristic techniques, and because no heuristic can be best for every individual case, it is necessary to find metrics that perform well on average. In this study, we analyze in detail the GLAH (greedy look-ahead heuristic) algorithm, one of the recent approaches, and propose a heuristic metric, HOB (the sum of the height differences between a badly placed container and the containers blocked by it), to improve the algorithm. The experimental results show that the improved algorithm, GLAH', achieves a stable performance gain of up to 3.8% on our test data, and that the gain widens as the layout size grows.
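A minimal sketch of how a metric like HOB could be computed over a yard layout. The exact meaning of "prohibited" containers is a reading of the abstract, so treat the details (and the data layout) as assumptions:

```python
def hob(yard):
    """HOB sketch: for each badly placed container, add the height
    difference to every container below it in the same stack that must
    leave earlier (i.e., that it blocks).  `yard` is a list of stacks;
    each stack lists retrieval priorities bottom-up (smaller number =
    retrieved earlier).  Interpretation of the abstract, not the
    authors' code."""
    total = 0
    for stack in yard:
        for height, container in enumerate(stack):
            for below in range(height):
                if stack[below] < container:   # container blocks an earlier one
                    total += height - below
    return total
```

For the single stack `[1, 3, 2]` (bottom to top), container 3 blocks container 1 (height difference 1) and container 2 blocks container 1 (height difference 2), giving HOB = 3; a stack already in descending priority order scores 0.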

A Novel Filtered Bi-Histogram Equalization Method

  • Sengee, Nyamlkhagva; Choi, Heung-Kook
    • Journal of Korea Multimedia Society / v.18 no.6 / pp.691-700 / 2015
  • Here, we present a new framework for histogram equalization in which both local and global contrast are enhanced using neighborhood metrics. While neighborhood information is being gathered, filters can simultaneously improve image quality; the filter is chosen according to image properties and the desired effect, such as noise removal or smoothing. Our experimental results confirmed that this does not increase the computational cost, because the filtering is folded into our proposed arrangement of building the histogram while checking neighborhood metrics. If the two methods, histogram equalization and filtering, are performed sequentially, the first uses the original image data and the second uses data altered by the first; with combined histogram equalization and filtering, the original data can be used by both. The proposed method is fully automated, and any spatial neighborhood filter type and size can be used. Our experiments confirmed that the proposed method is more effective than other similar techniques reported previously.
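To make the single-pass idea concrete, here is a small sketch that builds the equalization histogram from filtered pixel values (a 3x3 mean, one arbitrary filter choice; the paper supports any filter type and size) so the original data is never overwritten before the mapping is derived. All names and the filter choice are illustrative assumptions, not the authors' code:

```python
def filtered_equalize(img, levels=256):
    """Histogram equalization driven by filtered values; `img` is a
    list of rows of gray levels.  The original pixels stay intact
    while the histogram is built, mirroring the combined scheme."""
    h, w = len(img), len(img[0])

    def filt(y, x):
        # 3x3 mean filter with edge clamping (one simple filter choice)
        vals = [img[j][i]
                for j in range(max(0, y - 1), min(h, y + 2))
                for i in range(max(0, x - 1), min(w, x + 2))]
        return sum(vals) // len(vals)

    # Histogram of *filtered* values, computed from the untouched image
    hist = [0] * levels
    for y in range(h):
        for x in range(w):
            hist[filt(y, x)] += 1

    # Standard equalization mapping from the cumulative histogram
    cdf, run = [], 0
    for count in hist:
        run += count
        cdf.append(run)
    n = h * w
    return [[cdf[filt(y, x)] * (levels - 1) // n for x in range(w)]
            for y in range(h)]
```

The filter is evaluated twice per pixel here for brevity; caching the filtered image would restore the single-scan cost the paper describes.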

A Memory-Efficient Block-wise MAP Decoder Architecture

  • Kim, Sik; Hwang, Sun-Young; Kang, Moon-Jun
    • ETRI Journal / v.26 no.6 / pp.615-621 / 2004
  • Next-generation mobile communication systems, such as IMT-2000, adopt Turbo codes for their powerful error-correction capability. This paper presents a block-wise maximum a posteriori (MAP) Turbo decoding structure with a low memory requirement. During this research, it was observed that the training size and block size determine both the memory required and the bit-error rate (BER) performance of a block-wise MAP decoder, and that comparable BER performance can be obtained with much shorter blocks when the training size is sufficient. Based on this observation, a new decoding structure is proposed in which the training size is set to N times the block size, reducing the memory requirement. The memory for storing the branch and state metrics can be reduced by 30% to 45%, and synthesis results show that the overall memory area can be reduced by 5.27% to 7.29% compared to previous MAP decoders, while decoder throughput is maintained without degrading BER performance.


Determinants of Liquidity in Manufacturing Firms

  • VU, Thu Minh Thi; TRUONG, Tu Van; DINH, Dung Thuy
    • The Journal of Asian Finance, Economics and Business / v.7 no.12 / pp.11-19 / 2020
  • This study examines the factors that affect liquidity in manufacturing companies listed in Vietnam. The factors studied include board size, board independence, firm size, firm age, and return. We use several metrics to measure a firm's solvency status: the cash ratio, the quick ratio, and the cash conversion cycle. Accordingly, three econometric models are built to test the proposed hypotheses and explain the relationship between the five factors above and the liquidity measures. The study uses a data set of manufacturing companies listed on the Ho Chi Minh City Stock Exchange from 2015 to 2019; the final sample comprises 139 firms with 633 observations. The results show that in manufacturing firms, the cash ratio and the quick ratio are positively associated with board size, board independence, and profitability, while the net operating cycle is negatively correlated with board size, firm size, board independence, and profitability. Larger firms with larger boards and more independent members can therefore improve capital-management efficiency. There is no evidence of a relationship between firm age and the solvency measures, or between the cash conversion cycle and profitability.
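The three solvency metrics used in the study have standard textbook definitions, sketched below (the variable names are illustrative):

```python
def cash_ratio(cash_and_equivalents, current_liabilities):
    """Most conservative liquidity measure: cash alone vs. short-term debt."""
    return cash_and_equivalents / current_liabilities

def quick_ratio(current_assets, inventory, current_liabilities):
    """Acid-test ratio: current assets excluding inventory."""
    return (current_assets - inventory) / current_liabilities

def cash_conversion_cycle(days_inventory, days_receivables, days_payables):
    """Days between paying suppliers and collecting cash from customers."""
    return days_inventory + days_receivables - days_payables
```

A shorter cash conversion cycle (the "net operating cycle" in the abstract) indicates faster recovery of cash tied up in operations.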

A Study of Estimation for Web Software Size (웹 소프트웨어 규모 예측에 관한 연구)

  • KIM JeeHyun; YOO HaeYoung
    • The KIPS Transactions: Part D / v.12D no.3 s.99 / pp.409-416 / 2005
  • Although the software development paradigm has been changing very rapidly at the start of the 21st century, there are few studies of quality and estimation metrics appropriate for the Web environment. In this study, after analyzing the correlation between final code size and object properties, three real-world industrial projects written in ASP were used to identify programs with a high likelihood of faults. The size of these programs was then analyzed, via linear regression, for correlation with the number of classes and the number of methods. Among web software with a complex architecture of server and client components, the form files written in JavaScript for the client show a high correlation, and the number of methods correlates well with final code size.

A Study of Estimation for Web Application Complexity (웹 어플리케이션의 복잡도 예측에 관한 연구)

  • Oh Sung-Kyun; Kim Mi-Jin
    • Journal of the Korea Society of Computer and Information / v.9 no.3 / pp.27-34 / 2004
  • As the software development paradigm shifts toward the more complex Web environment, research on complexity has become active, yet no general agreement seems to have been reached on the architecture or complexity measures of Web applications. Traditional complexity metrics, such as program size (LOC) and cyclomatic complexity, can only be derived from source code after implementation, so they are of little help in the early phases of the software development life cycle, namely analysis and design. In this study, 6 Web projects were used to identify applications with possible errors flagged by a complexity indicator. Using the 61 programs derived, a linear correlation between complexity, the number of classes, and the number of methods is proposed. Since Web application complexity can thus be estimated before implementation, effort and cost can be managed more effectively.
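The kind of early-phase estimate described here can be sketched as a simple least-squares fit of measured complexity against a design-time count such as the number of methods. The data below are made up for illustration; the paper's regressors and coefficients are not reproduced:

```python
def fit_line(x, y):
    """Ordinary least squares for y = a + b*x (illustrative sketch of
    the regression step, not the paper's model)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b   # intercept, slope

# Hypothetical design-time counts vs. measured complexity
methods = [3, 5, 8, 12, 20]
complexity = [7, 11, 17, 25, 41]
a, b = fit_line(methods, complexity)
```

Once fitted, `a + b * n_methods` predicts complexity for a new application before any code exists, which is the point of estimating at analysis/design time.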


Efficiency Measurement Method and Simplification of Program (프로그램의 효율성 측정 방법과 간소화)

  • Yang, Hae-Sool
    • The Transactions of the Korea Information Processing Society / v.5 no.1 / pp.49-62 / 1998
  • Software is being developed with many functions to satisfy user requirements, but users generally use only a subset of them. If we could build software that keeps the useful functions and removes the unused ones, we could enhance execution efficiency, by reducing program size, and improve software quality. The international standard ISO/IEC 9126 defines six software quality characteristics: functionality, reliability, usability, efficiency, maintainability, and portability. In this study, we propose metrics for measuring efficiency and a simplification method for source code, describe the product evaluation results, and indicate problems and directions for improvement when applying the proposed efficiency metrics to practical development projects.


A Case for Using Service Availability to Characterize IP Backbone Topologies

  • Keralapura, Ram; Moerschell, Adam; Chuah, Chen-Nee; Iannaccone, Gianluca; Bhattacharyya, Supratik
    • Journal of Communications and Networks / v.8 no.2 / pp.241-252 / 2006
  • Traditional service-level agreements (SLAs), defined by average delay or packet loss, often camouflage the instantaneous performance perceived by end-users. We define a set of metrics for service availability to quantify the performance of Internet protocol (IP) backbone networks and capture the impact of routing dynamics on packet forwarding. Given a network topology and its link weights, we propose a novel technique to compute the associated service availability by taking into account transient routing dynamics and operational conditions, such as border gateway protocol (BGP) table size and traffic distributions. Even though there are numerous models for characterizing topologies, none of them provide insights on the expected performance perceived by end customers. Our simulations show that the amount of service disruption experienced by similar networks (i.e., with similar intrinsic properties such as average out-degree or network diameter) could be significantly different, making it imperative to use new metrics for characterizing networks. In the second part of the paper, we derive goodness factors based on service availability viewed from three perspectives: Ingress node (from one node to many destinations), link (traffic traversing a link), and network-wide (across all source-destination pairs). We show how goodness factors can be used in various applications and describe our numerical results.
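A toy sketch of the availability bookkeeping behind such metrics. Aggregating per-destination availabilities into an ingress-node goodness factor as a plain mean is an assumption made here for illustration; the paper derives its factors from transient routing dynamics and traffic distributions:

```python
def availability(disruption_seconds, horizon_seconds):
    """Fraction of time service was up for one source-destination pair,
    given the durations of its service disruptions over the horizon."""
    return 1 - sum(disruption_seconds) / horizon_seconds

def ingress_goodness(per_destination_disruptions, horizon_seconds):
    """Illustrative ingress-node goodness factor: mean availability
    from one node to all its destinations (assumed aggregation)."""
    avs = [availability(d, horizon_seconds)
           for d in per_destination_disruptions]
    return sum(avs) / len(avs)
```

The same bookkeeping could be grouped per link (traffic traversing a link) or across all source-destination pairs for a network-wide view, matching the three perspectives in the abstract.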

Can the Skewed Student-t Distribution Assumption Provide Accurate Estimates of Value-at-Risk?

  • Kang, Sang-Hoon; Yoon, Seong-Min
    • The Korean Journal of Financial Management / v.24 no.3 / pp.153-186 / 2007
  • It is well known that the distributions of financial asset returns exhibit fatter tails and more skewness than the normal distribution assumes. A correct assumption about the return distribution may therefore improve the estimated performance of Value-at-Risk (VaR) models in financial markets. In this paper, we estimate and compare VaR performance using the RiskMetrics, GARCH, and FIGARCH models under normal and skewed Student-t distributions for two daily return series: the Korea Composite Stock Price Index (KOSPI) and the Korean won-US dollar (KRW-USD) exchange rate. We also compute the expected shortfall to assess the size of the expected loss, in terms of the estimated empirical failure rate. The empirical VaR analysis finds that the presence of long memory in the volatility of the sample returns is not important for accurate VaR estimation; it is more important to adopt a skewed Student-t distribution for the innovations. In short, an appropriate assumption about the return distribution provides more accurate VaR models for portfolio managers and investors.
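For reference, a minimal sketch of the RiskMetrics-style VaR that the paper benchmarks against, using an EWMA volatility and a normal quantile (lambda = 0.94 is the usual daily RiskMetrics setting; the skewed Student-t variants the paper favors would replace the normal quantile with a heavier-tailed, asymmetric one):

```python
from math import sqrt
from statistics import NormalDist

def ewma_vol(returns, lam=0.94):
    """RiskMetrics-style EWMA volatility sketch: exponentially weighted
    average of squared returns, seeded with the first observation."""
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2
    return sqrt(var)

def var_normal(sigma, level=0.95):
    """One-day VaR under a normal assumption, as a positive loss
    quantile; fat-tailed distributions give larger quantiles."""
    return NormalDist().inv_cdf(level) * sigma
```

A backtest then counts days on which the realized loss exceeds `var_normal(ewma_vol(history))`; the paper's empirical failure rate is this count relative to its nominal level.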
