• Title/Summary/Keyword: 산술 평균 (arithmetic mean)

Development of Stem Analysis Program(Stemwin1.0) for Windows (Windows용 수간석해(樹幹析解) 프로그램(Stemwin1.0)의 개발(開發))

  • Lee, Joon-Hak;Lee, Woo-Kyun;Seo, Jeong-Ho
    • Journal of Korean Society of Forest Science / v.90 no.3 / pp.331-337 / 2001
  • This study was performed to develop a stem analysis program (Stemwin1.0) that runs on PCs with the MS-Windows operating system. Stemwin1.0 uses annual tree-ring widths measured in 1/100 mm units and calculates increments of several growth factors, such as DBH, height, and volume, with various methods. Mean DBH can be calculated by the arithmetic and quadratic mean methods. Height can be estimated by the parallel-line, line-extending, and height-curve methods. Volume can be estimated by the Huber, Smalian, and spline functions. Not only total growth, mean annual increment (MAI), and current annual increment (CAI) of the growth factors, but also merchantable volume and height, form factor, growth rate, and merchantable volume rate are calculated automatically. Stemwin1.0 can also output an accurate stem taper curve at various scales and prepare stem taper data (diameters at different disk heights) for the statistical analysis used to derive a stem taper model. Stemwin1.0 can export output data and graphs to Excel for broader use.
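As a rough illustration of the mean-diameter and sectional-volume formulas named in the abstract, here is a minimal Python sketch; the function names and sample numbers are our own, not Stemwin1.0's.

```python
import math

def arithmetic_mean_dbh(dbhs):
    """Arithmetic mean of the measured diameters (cm)."""
    return sum(dbhs) / len(dbhs)

def quadratic_mean_dbh(dbhs):
    """Quadratic mean: the diameter of the tree of mean basal area."""
    return math.sqrt(sum(d * d for d in dbhs) / len(dbhs))

def huber_volume(mid_diameter, length):
    """Huber's formula: cross-sectional area at mid-length times length (m)."""
    return (math.pi / 4) * mid_diameter ** 2 * length

def smalian_volume(d_base, d_top, length):
    """Smalian's formula: mean of the two end areas times length (m)."""
    area = lambda d: (math.pi / 4) * d ** 2
    return (area(d_base) + area(d_top)) / 2 * length

dbhs = [18.2, 20.5, 19.1, 22.4]        # hypothetical diameters (cm)
print(arithmetic_mean_dbh(dbhs))        # 20.05
print(quadratic_mean_dbh(dbhs))         # slightly larger, ~20.11
print(huber_volume(0.19, 2.0))          # volume in m^3 of a 2 m section
```

The quadratic mean is always at least the arithmetic mean, which is why it is the usual choice when the mean diameter must preserve total basal area.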

Determinants of Leverage for Manufacturing Firms Listed in the KOSDAQ Stock Market (한국 KOSDAQ 상장기업들의 자본구조 결정요인 분석)

  • Kim, Han-Joon
    • Journal of the Korea Academia-Industrial cooperation Society / v.13 no.5 / pp.2096-2109 / 2012
  • This study investigates empirical issues that have received little attention in previous research on the Korean capital market. It seeks to identify financial determinants of the capital structure of firms listed on the KOSDAQ (Korea Securities Dealers Automated Quotation). A further test is performed, using a robust methodology, to find factors that may discriminate between firms belonging to the 'prime section' and the 'venture section' in terms of their financial characteristics. Moreover, the null hypothesis that the movement of a firm's capital structure with respect to its industry mean (or median) is random is also tested. For the book-value based debt ratios, size (INSIZE), growth (GROWTH), market-to-book value of equity (MVBV), volatility (VOLATILITY), market value of equity (MVE), and a section dummy (SECTION) showed statistically significant effects, while size (INSIZE), growth (GROWTH), market value of equity (MVE), beta (BETA), and the section dummy (SECTION) showed statistically significant effects on the market-value based leverage ratios. The study also found the interesting result that firms in each industry tend to revert toward their industry's mean and median leverage ratios over the five-year test period.
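The abstract names the regressors but not the estimator; purely as an illustrative sketch (made-up numbers, plain OLS rather than the paper's robust methodology), the book-value leverage regression might look like this:

```python
import numpy as np

# Hypothetical firm observations; columns follow the abstract's variable
# names (INSIZE, GROWTH, MVBV, VOLATILITY, MVE, SECTION, with SECTION = 1
# for the venture section). All numbers are invented.
X = np.array([
    [11.2, 0.08, 1.9, 0.31, 10.8, 0],
    [10.5, 0.15, 2.4, 0.42, 10.1, 1],
    [12.0, 0.04, 1.2, 0.25, 11.6, 0],
    [ 9.8, 0.22, 3.1, 0.55,  9.4, 1],
    [11.6, 0.11, 1.6, 0.35, 11.0, 0],
    [10.1, 0.18, 2.8, 0.48,  9.7, 1],
    [11.9, 0.06, 1.4, 0.28, 11.3, 0],
    [10.8, 0.13, 2.1, 0.40, 10.4, 1],
])
y = np.array([0.42, 0.55, 0.33, 0.61, 0.38, 0.58, 0.35, 0.50])  # debt ratios

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(dict(zip(["const", "INSIZE", "GROWTH", "MVBV",
                "VOLATILITY", "MVE", "SECTION"], coef.round(3))))
```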

Evaluation of Video Codec for AI-based Multiple Tasks (인공지능 기반 멀티태스크를 위한 비디오 코덱의 성능평가 방법)

  • Kim, Shin;Lee, Yegi;Yoon, Kyoungro;Choo, Hyon-Gon;Lim, Hanshin;Seo, Jeongil
    • Journal of Broadcast Engineering / v.27 no.3 / pp.273-282 / 2022
  • MPEG-VCM (Video Coding for Machines) aims to standardize video codecs for machines. VCM provides datasets and anchors, which serve as reference data for comparison, for several machine vision tasks, including object detection, object segmentation, and object tracking. An evaluation template can be used to compare compression and machine vision task performance between the anchor data and proposed video codecs. However, performance comparison is currently carried out separately for each machine vision task, and no method is provided for evaluating multiple machine vision tasks on a single bitstream. In this paper, we propose a performance evaluation method for a video codec serving AI-based multi-tasks. Based on bits per pixel (BPP), which measures the size of a single bitstream, and mean average precision (mAP), which measures the accuracy of each task, we define three criteria for multi-task performance evaluation, namely the arithmetic average, the weighted average, and the harmonic average, and calculate the multi-task performance results from the mAP values. In addition, since the dynamic range of mAP can differ greatly from task to task, the multi-task results are calculated and evaluated on normalized mAP values to keep these range differences from distorting the comparison.
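To make the three combination rules concrete, here is a small sketch with made-up mAP values and weights; the anchor-based normalization is our reading of the abstract, not an official VCM formula.

```python
import numpy as np

def normalize(map_scores, anchor_scores):
    """Normalize each task's mAP by its anchor so that dynamic-range
    differences between tasks do not dominate the combined score."""
    return np.asarray(map_scores) / np.asarray(anchor_scores)

def arithmetic_avg(scores):
    return float(np.mean(scores))

def weighted_avg(scores, weights):
    w = np.asarray(weights, dtype=float)
    return float(np.sum(np.asarray(scores) * w) / np.sum(w))

def harmonic_avg(scores):
    s = np.asarray(scores, dtype=float)
    return float(len(s) / np.sum(1.0 / s))

# Hypothetical mAP values for detection, segmentation, and tracking decoded
# from one bitstream, with the anchors' mAP used for normalization.
m = normalize([0.48, 0.36, 0.52], [0.50, 0.40, 0.55])
print(arithmetic_avg(m), weighted_avg(m, [2, 1, 1]), harmonic_avg(m))
```

The harmonic average penalizes a codec that sacrifices one task for the others, which is why it is a natural third criterion alongside the two means.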

The Optimization of Ensembles for Bankruptcy Prediction (기업부도 예측 앙상블 모형의 최적화)

  • Myoung Jong Kim;Woo Seob Yun
    • Information Systems Review / v.24 no.1 / pp.39-57 / 2022
  • This paper proposes the GMOPTBoost algorithm to improve the performance of the AdaBoost algorithm for bankruptcy prediction, a task in which the class imbalance problem is inherent. The AdaBoost algorithm has the advantage of providing robust learning opportunities for misclassified samples, but it is limited in addressing class imbalance because the concept of arithmetic mean accuracy is embedded in it. GMOPTBoost instead optimizes the geometric mean accuracy by applying Gaussian gradient descent, and thereby handles class imbalance effectively. The experimental datasets are constructed in two phases. First, five class-imbalanced datasets are built to verify both the effect of class imbalance on prediction performance and the improvement achieved by GMOPTBoost. Second, class-balanced datasets are constructed through data sampling techniques to verify the improvement achieved by GMOPTBoost on balanced data. The main results of 30 repetitions of cross-validation are as follows. First, the class imbalance problem degrades the performance of ensembles. Second, GMOPTBoost improves AdaBoost ensembles trained on imbalanced datasets. Third, data sampling techniques have a positive impact on performance. Finally, GMOPTBoost also yields significant performance improvements for AdaBoost ensembles trained on balanced datasets.
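The contrast between arithmetic and geometric mean accuracy, which motivates the paper, can be seen in a few lines of Python; the data are a toy example, and GMOPTBoost itself is not reproduced here.

```python
import math

def class_accuracies(y_true, y_pred):
    """Per-class recall for a binary problem (0 = solvent, 1 = bankrupt)."""
    acc = {}
    for c in (0, 1):
        idx = [i for i, y in enumerate(y_true) if y == c]
        acc[c] = sum(y_pred[i] == c for i in idx) / len(idx)
    return acc

# Hypothetical imbalanced test set: 90 solvent firms, 10 bankrupt firms.
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 90 + [1] * 2 + [0] * 8   # classifier misses most bankruptcies

acc = class_accuracies(y_true, y_pred)
arith = (acc[0] + acc[1]) / 2            # looks acceptable: 0.6
geo   = math.sqrt(acc[0] * acc[1])       # exposes the failure: ~0.447
print(arith, geo)
```

Because the geometric mean collapses toward zero whenever the minority class is neglected, optimizing it forces the ensemble to attend to the rare bankrupt class.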

Recursive SPIHT(Set Partitioning in Hierarchy Trees) Algorithm for Embedded Image Coding (내장형 영상코딩을 위한 재귀적 SPIHT 알고리즘)

  • 박영석
    • Journal of the Institute of Convergence Signal Processing / v.4 no.4 / pp.7-14 / 2003
  • A number of embedded wavelet image coding methods have been proposed since the introduction of the EZW (Embedded Zerotree Wavelet) algorithm. A common characteristic of these methods is that they build on fundamental ideas found in the EZW algorithm. One of them, the SPIHT (Set Partitioning in Hierarchical Trees) algorithm, became very popular because it achieved performance equal to or better than EZW without requiring an arithmetic encoder. In this paper, we propose a recursive set partitioning in hierarchical trees (RSPIHT) algorithm for embedded image coding and evaluate its effectiveness experimentally. The proposed RSPIHT algorithm has a simple and regular form and a worst-case time complexity of O(n). In terms of processing time, RSPIHT improves on SPIHT by about 16.4% on average at T-layers above 4 on the experimental images. In terms of coding rate, RSPIHT gives similar results at T-layers below 7 and improved results at the other T-layers.
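The recursive significance test at the heart of set partitioning can be sketched as follows; this is an illustrative fragment of the general idea, with our own tree encoding, not the authors' RSPIHT implementation.

```python
def flatten(tree):
    """Collect all coefficients in a (root, children) tree."""
    root, children = tree
    out = [root]
    for child in children:
        out.extend(flatten(child))
    return out

def significant(coeffs, n):
    """A set is significant at bit-plane n if any coefficient
    reaches the threshold 2**n."""
    return max(abs(c) for c in coeffs) >= (1 << n)

def partition(tree, n, output):
    """Emit a 0 for an insignificant set; otherwise emit a 1,
    code the root, and recurse into the child subtrees."""
    if not significant(flatten(tree), n):
        output.append(0)          # the whole set costs a single bit
        return
    output.append(1)
    root, children = tree
    output.append(1 if abs(root) >= (1 << n) else 0)
    for child in children:
        partition(child, n, output)

# A toy spatial-orientation tree: (root coefficient, list of subtrees).
tree = (34, [(3, []), (17, []), (-2, [(1, []), (0, [])])])
bits = []
partition(tree, n=4, output=bits)     # threshold 2**4 = 16
print(bits)                           # [1, 1, 0, 1, 1, 0]
```

Coding whole insignificant subtrees with single bits is what lets set partitioning reach embedded coding efficiency without an arithmetic coder.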

Conceptual Cost Estimation Model Using a Parametric Method for High-speed Railroad (매개변수기법을 이용한 고속철도 노반공사의 개략공사비 예측모델)

  • Lee, Young Joo;Jang, Seong Yong
    • KSCE Journal of Civil and Environmental Engineering Research / v.31 no.4D / pp.595-601 / 2011
  • The conceptual cost of civil work at the basic planning stage of a high-speed railroad is currently estimated from a unit cost per distance (KRW/km). This unit cost is an arithmetic average of historical data, which can produce large errors, and it is difficult to explain the discrepancy against the estimate derived at the subsequent basic design stage. This study provides a conceptual estimation model based on a parametric method and regression analysis. The independent variables are the distance and the geological materials (earth, weathered rock, soft rock, hard rock), extracted from the actual data of 36 contracts. The deviation between the unit costs estimated by the developed model and the actual cost data falls in the range of -0.4% to +31%, which is acceptable compared with the typical range of -30% to +50%. The model improves the accuracy of the existing method and is expected to contribute to effective total cost management and, on the economic side, to reduced financial expenditure.
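A minimal sketch of a parametric model in this spirit, regressing cost on the length of each geological material instead of using a single KRW/km average; all figures are invented, not the paper's 36-contract data.

```python
import numpy as np

# Hypothetical records from past contracts: the length (km) of each
# geological material along the section and the actual roadbed cost
# (billion KRW). Values are illustrative only.
data = np.array([
    # earth, weathered, soft rock, hard rock, cost
    [2.1, 0.8, 0.9, 0.4, 310.0],
    [2.0, 2.1, 1.7, 1.0, 545.0],
    [1.9, 0.8, 0.3, 0.1, 205.0],
    [1.7, 2.6, 2.5, 1.7, 760.0],
    [2.0, 1.5, 1.0, 0.5, 395.0],
])
X, y = data[:, :4], data[:, 4]

# Least-squares fit of cost = sum(unit_cost_i * length_i): one unit cost
# per material replaces the single arithmetic-average KRW/km figure.
unit_costs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["earth", "weathered", "soft", "hard"], unit_costs.round(1))))

new_section = np.array([1.8, 1.2, 1.5, 0.8])   # a planned 5.3 km section
print(f"estimated cost: {new_section @ unit_costs:.1f} billion KRW")
```

Splitting the unit cost by geology is what lets the parametric model explain the variance that a single distance-based average smears out.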

Comparison of Customer Satisfaction Indices Using Different Methods of Weight Calculation (가중치 산출방법에 따른 고객만족도지수의 비교)

  • Lee, Sang-Jun;Kim, Yong-Tae;Kim, Seong-Yoon
    • Journal of Digital Convergence / v.11 no.12 / pp.201-211 / 2013
  • This study compares the Customer Satisfaction Index (CSI) and the weight of each dimension obtained by various methods of weight calculation, and suggests some implications. For this purpose, the methods of weight calculation were classified into subjective and statistical methods. A constant sum scale was used for the subjective method, and the statistical methods were further segmented into correlation analysis, principal component analysis, factor analysis, and a structural equation model. The findings showed differences between the weights from the subjective method and those from the statistical methods. The orderings of the weights produced by the analysis methods fell into similar patterns. Moreover, the weight of each dimension varied considerably across the weight calculation methods, revealing differences in discrimination and stability among the dimensions. Lastly, the CSI was highest when calculated with the structural equation model, followed in order by regression analysis, correlation analysis, the arithmetic mean, principal component analysis, the constant sum scale, and factor analysis. The CSIs calculated by the different methods showed statistically significant differences.
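To show how the weighting scheme changes the index, a small sketch with hypothetical scores; the correlation-based weights stand in for the statistical methods, and the 100-point scaling is our own convention.

```python
import numpy as np

# Hypothetical 5-point satisfaction scores (rows = respondents,
# columns = three quality dimensions).
scores = np.array([
    [4, 5, 3],
    [3, 4, 4],
    [5, 4, 3],
    [4, 3, 4],
])

def csi(scores, weights):
    """Weighted customer satisfaction index on a 100-point scale."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize the weights
    dim_means = scores.mean(axis=0)       # mean score per dimension
    return float((dim_means @ w) / 5 * 100)

# Subjective weights (constant sum scale) vs. statistical weights, e.g.
# each dimension's correlation with overall satisfaction.
print(csi(scores, [40, 35, 25]))          # 77.5
print(csi(scores, [0.62, 0.55, 0.48]))    # slightly different index
```

Because the dimension means differ, any shift of weight between dimensions moves the index, which is exactly the sensitivity the study measures.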

A Study on the Improvement of a Lecture Evaluation Tool in Higher Education -A case of improvement of a lecture evaluation questionnaire in "A" university- (대학 강의평가 도구 개선 방안 연구 -"A" 대학의 강의평가 문항 개선 사례-)

  • Park, Hye-Rim
    • Journal of the Korea Academia-Industrial cooperation Society / v.13 no.11 / pp.5033-5043 / 2012
  • The purpose of this study is to remedy the problems of existing lecture evaluation items and to improve them so that they fit the original purpose of lecture evaluation: enhancing lecture quality. To this end, the meanings of good teaching, the evaluation domains and elements in preceding studies, and the contents and problems of the lecture evaluation tool at university A were examined, and on this foundation an improved lecture evaluation tool was proposed. The important features of the improved tool are as follows. First, the composition of evaluation domains, evaluation elements, and evaluation items was restructured. Second, to obtain information useful for better lectures, items were devised according to the features of good teaching in universities. Third, items covering the evaluation elements commonly suggested by the lecture evaluation tools of preceding studies were developed. Fourth, items were developed for information needed to enhance lecture quality even when the results cannot be presented as arithmetic means. Fifth, evaluation items were developed to remedy the problems of the lecture evaluation tool previously used at university A.

An Adaptive Background Formation Algorithm Considering Stationary Object (정지 물체를 고려한 적응적 배경생성 알고리즘)

  • Jeong, Jongmyeon
    • Journal of the Korea Society of Computer and Information / v.19 no.10 / pp.55-62 / 2014
  • In intelligent video surveillance systems, moving objects are generally detected by computing the difference between a background image and the input image. However, forming a reliable background remains a challenging task because it is hard to cope with complicated scenes. In this paper, we propose an adaptive background formation algorithm that takes stationary objects into account. First, the initial background is formed by averaging the initial N frames. Object detection is then performed by comparing the current input image with the background. If an object remains at rest for a long time, we regard it as a stationary object and replace the corresponding background region with it. On the other hand, if the object is moving, its pixels are not reflected in the background update. Because the proposed algorithm accounts for gradual illuminance changes, slowly moving objects, and stationary objects, it forms the background adaptively and robustly, as shown by the experimental results.
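A minimal sketch of the update rule described above; the constants ALPHA and STATIONARY_LIMIT are illustrative placeholders, not values from the paper.

```python
import numpy as np

ALPHA = 0.02              # blending rate for gradual illuminance change
STATIONARY_LIMIT = 300    # frames before a still object joins the background

def update_background(background, frame, fg_mask, still_count):
    """One background-maintenance step: moving-object pixels are kept out
    of the running average, while an object that has stood still for
    STATIONARY_LIMIT frames replaces the background outright."""
    bg = background.astype(float)
    frame = frame.astype(float)
    # Count how long each pixel has stayed foreground; reset elsewhere.
    still_count = np.where(fg_mask, still_count + 1, 0)
    stationary = still_count >= STATIONARY_LIMIT
    # Running average on background pixels absorbs slow illuminance change.
    bg[~fg_mask] = (1 - ALPHA) * bg[~fg_mask] + ALPHA * frame[~fg_mask]
    # A long-stationary object is written into the background directly.
    bg[stationary] = frame[stationary]
    return bg, still_count

# Usage: form the initial background by averaging the first N frames, then
# call update_background(bg, frame, fg_mask, still_count) once per frame.
```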

SIMD Instruction-based Fast HEVC RExt Decoder (SIMD 명령어 기반 HEVC RExt 복호화기 고속화)

  • Mok, Jung-Soo;Ahn, Yong-Jo;Ryu, Hochan;Sim, Donggyu
    • Journal of Broadcast Engineering / v.20 no.2 / pp.224-237 / 2015
  • In this paper, we introduce a fast decoding method using SIMD (Single Instruction, Multiple Data) instructions for HEVC RExt (High Efficiency Video Coding Range Extensions). Several RExt tools, such as the intra prediction, interpolation, inverse quantization, inverse transform, and clipping modules, are well suited to SIMD instructions. Taking the increased bit depth of RExt into account, these modules are accelerated with SSE (Streaming SIMD Extensions) instructions. In addition, we propose efficient implementations of the interpolation filter, inverse quantization, and clipping modules using the AVX2 (Advanced Vector Extensions 2) instruction set, which operates on 256-bit registers. The proposed methods were evaluated on a private HEVC RExt decoder developed on the basis of HM 16.0. The experimental results show that the developed RExt decoder reduces the average decoding time by 12% compared with the conventional sequential implementation.
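The paper's speedups come from hand-written SSE/AVX2 intrinsics, which are not reproduced here; the sketch below only illustrates the clipping operation itself and the data-parallel pattern those intrinsics exploit, using NumPy as a stand-in.

```python
import numpy as np

BIT_DEPTH = 10                       # RExt supports extended bit depths
LO, HI = 0, (1 << BIT_DEPTH) - 1     # valid sample range: 0..1023

# Residual-added predictions can fall outside the valid range and must be
# clipped back, sample by sample, after the inverse transform.
samples = np.array([-14, 512, 1030, 77, 2048], dtype=np.int32)

# np.clip applies the same operation to many samples in one call, the
# data-parallel pattern that SSE/AVX2 instructions execute in hardware
# (e.g., sixteen 16-bit samples per 256-bit AVX2 register).
clipped = np.clip(samples, LO, HI)
print(clipped)                        # [   0  512 1023   77 1023]
```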