• Title/Summary/Keyword: minimum value

Search Results: 2,421

Understanding of a Rate of Return Analysis using an IRR (내부수익률을 이용한 수익률분석법에 대한 이해)

  • 김진욱;이현주;차동수
    • Journal of Korean Society of Industrial and Systems Engineering / v.25 no.5 / pp.9-14 / 2002
  • A capital investment problem is essentially one of determining whether the anticipated cash inflows from a proposed project are sufficiently attractive to invest funds in the project. The net present value (NPV) criterion and the internal rate of return (IRR) criterion are widely used as means of making investment decisions. A positive NPV means the equivalent worth of the inflows is greater than the equivalent worth of the outflows, so the project makes a profit. Business people are familiar with rates of return because they all borrow money to finance ventures, even if the money they borrow is their own. Thus they are apt to use the IRR in preference to the NPV. The IRR can be defined as the discount rate that causes the net present value of a cash flow to equal zero. Why is a project accepted if its IRR is greater than the investor's minimum attractive rate of return? Unlike the NPV, this definition cannot distinctly explain the concept of the IRR as a decision criterion. We present a new definition of the IRR as the ratio of profit on the invested capital.
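As an illustrative aside (not taken from the paper), the sketch below computes the NPV of a cash-flow series and finds the IRR as the discount rate at which that NPV crosses zero, here by simple bisection; the cash-flow values are hypothetical.

```python
# Minimal sketch (not from the paper): NPV of a cash-flow series and the IRR
# as the discount rate that drives the NPV to zero, found by bisection.

def npv(rate, cash_flows):
    """Net present value of cash_flows[t] received at the end of period t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-8):
    """Internal rate of return: the rate r with npv(r, cash_flows) == 0."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

# Hypothetical project: invest 1000 now, receive 400 per year for 3 years.
flows = [-1000, 400, 400, 400]
r = irr(flows)
print(f"IRR = {r:.4%}, NPV at 5% = {npv(0.05, flows):.2f}")
```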

A Preprocessing Algorithm for Layered Depth Image Coding (계층적 깊이영상 정보의 압축 부호화를 위한 전처리 방법)

  • 윤승욱;김성열;호요성
    • Journal of Broadcast Engineering / v.9 no.3 / pp.207-213 / 2004
  • The layered depth image (LDI) is an efficient approach to represent three-dimensional objects with complex geometry for image-based rendering (IBR). The LDI contains several attribute values together with multiple layers at each pixel location. In this paper, we propose an efficient preprocessing algorithm to compress the depth information of LDI. Considering each depth value as a point in the two-dimensional space, we compute the minimum distance between a straight line passing through the previous two values and the current depth value. Finally, the minimum distance replaces the current attribute value. The proposed algorithm reduces the variance of the depth information; therefore, it improves the transform and coding efficiency.
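For illustration only (the authors' exact procedure and boundary handling are not given in the abstract), here is a minimal sketch of the residual described above: each depth value is replaced by its perpendicular distance to the line through the previous two samples, with sample index versus depth treated as 2-D points.

```python
# Illustrative sketch (not the authors' code): replace each depth value with
# its minimum (perpendicular) distance to the straight line through the
# previous two samples, treating sample index vs. depth as 2-D points.
import math

def residual_depths(depths):
    out = list(depths[:2])                      # first two values kept as-is (assumption)
    for i in range(2, len(depths)):
        x1, y1 = i - 2, depths[i - 2]           # previous two points
        x2, y2 = i - 1, depths[i - 1]
        x0, y0 = i, depths[i]                   # current point
        # perpendicular distance from (x0, y0) to the line through (x1, y1), (x2, y2)
        num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
        den = math.hypot(y2 - y1, x2 - x1)
        out.append(num / den)
    return out

print(residual_depths([10, 12, 14, 15, 30]))    # hypothetical depth values
```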

Mean Heat Flux at the Port of Yeosu (여수항의 평균 열플럭스)

  • Choi Yong-Kyu;Yang Jun-Hyuk
    • Journal of Environmental Science International / v.15 no.7 / pp.653-657 / 2006
  • Based on the monthly weather report of the Korea Meteorological Administration (KMA) and daily sea surface temperature (SST) data from the National Fisheries Research and Development Institute (NFRDI) (1995-2004), mean heat fluxes were estimated at the port of Yeosu. Net heat flux was transported from the air to the sea surface during February to September, amounting to $205 Wm^{-2}$ as a daily average value in May. During October to January, the transfer of net heat flux was reversed, from the sea surface to the air, with a minimum daily average value of $-70 Wm^{-2}$ in December. Short wave radiation ranged from $167 Wm^{-2}$ in December to $300 Wm^{-2}$ in April. Long wave radiation (sensible heat) ranged from $27 (-14) Wm^{-2}$ in July to $90 (79) Wm^{-2}$ in December. Latent heat showed a daily-average minimum of $42 Wm^{-2}$ in July and a maximum of $104 Wm^{-2}$ in October.
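For reference, the net heat flux discussed above is conventionally the balance of the four components the abstract reports; this standard decomposition is not spelled out in the abstract itself.

```latex
% Conventional surface heat budget (sign convention: positive into the sea);
% the abstract reports each of these four components separately.
Q_{net} = Q_s - Q_b - Q_h - Q_e
% Q_s: short-wave radiation, Q_b: long-wave (back) radiation,
% Q_h: sensible heat flux, Q_e: latent heat flux
```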

An Adaptive Occluded Region Detection and Interpolation for Robust Frame Rate Up-Conversion

  • Kim, Jin-Soo;Kim, Jae-Gon
    • Journal of information and communication convergence engineering / v.9 no.2 / pp.201-206 / 2011
  • The FRUC (Frame Rate Up-Conversion) technique needs an effective frame interpolation algorithm that uses motion information between adjacent frames. In order to achieve good visual quality in the interpolated frames, it is necessary to develop effective detection and interpolation algorithms for occluded regions. For this aim, this paper proposes an effective occluded-region detection algorithm based on adaptive forward and backward motion searches and on the minimum value of the normalized cross-correlation coefficient (NCCC). That is, the proposed scheme looks for the location with the minimum sum of absolute differences (SAD), and this value is compared to that of the location with the maximum value of the NCCC, based on the statistics of their relation. These results are then compared with the size of the motion vector, and the proposed algorithm decides whether the given block is an occluded region. Furthermore, once the occluded regions are classified, this paper proposes an adaptive interpolation algorithm for the occluded regions that still exist in the merged frame, using the neighboring pixel information and the available data in the occluded block. Computer simulations show that the proposed algorithm classifies occluded regions more effectively than the conventional SAD-based method and that the proposed interpolation algorithm achieves better PSNR than the conventional algorithms.
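Below is a minimal sketch (my own, not the paper's code) of the two block-matching measures the abstract contrasts: the SAD, which is minimized, and the normalized cross-correlation coefficient, which is maximized; block extraction and the adaptive search itself are omitted.

```python
# Illustrative sketch of the two block-matching measures the abstract contrasts:
# SAD (lower is better) and the normalized cross-correlation coefficient
# (higher is better). The search strategy and occlusion decision are omitted.
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return float(np.abs(block_a.astype(np.float64) - block_b.astype(np.float64)).sum())

def nccc(block_a, block_b):
    """Normalized cross-correlation coefficient, in [-1, 1]."""
    a = block_a.astype(np.float64) - block_a.mean()
    b = block_b.astype(np.float64) - block_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

rng = np.random.default_rng(0)
cur = rng.integers(0, 256, (16, 16))            # hypothetical 16x16 block
ref = cur + rng.integers(-5, 6, (16, 16))       # slightly perturbed candidate
print(sad(cur, ref), nccc(cur, ref))
```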

An Adaptive Hexagon Based Search for Fast Motion Estimation (고속 움직임 추정을 위한 적응형 육각 탐색 방법)

  • 전병태;김병천
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.7A / pp.828-835 / 2004
  • An adaptive hexagon-based search (AHBS) algorithm is proposed in this paper to perform block motion estimation in video coding. The AHBS evaluates the value of a given objective function starting from a diamond-shaped checking block and then continues its process using two hexagon-shaped checking blocks until the minimum value is found at the center of the checking block. The choice of which checking block to use depends on the position of the minimum value found in the previous search step. The AHBS is compared with other fast search algorithms, including full search (FS). Experimental results show that the proposed algorithm provides competitive performance with slightly reduced computational complexity.
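The skeleton below is a generic hexagon-pattern search, not the authors' AHBS (the adaptive switching between checking blocks is omitted); it only illustrates the recentering-until-the-minimum-is-at-the-center idea described in the abstract.

```python
# Generic hexagon-pattern search skeleton (not the authors' AHBS): repeatedly
# evaluate a cost function on a hexagon of candidate offsets around the current
# best point, recentering until the minimum stays at the center, then refine
# with a small final pattern.
HEXAGON = [(0, 0), (2, 0), (-2, 0), (1, 2), (-1, 2), (1, -2), (-1, -2)]
SMALL   = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]

def hexagon_search(cost, start=(0, 0), max_steps=32):
    """cost(dx, dy) -> matching error (e.g. SAD); returns the best offset."""
    cx, cy = start
    for _ in range(max_steps):
        best = min(((cost(cx + dx, cy + dy), (cx + dx, cy + dy)) for dx, dy in HEXAGON),
                   key=lambda t: t[0])
        if best[1] == (cx, cy):                 # minimum at the center: stop coarse stage
            break
        cx, cy = best[1]
    # final refinement with the small pattern
    return min(((cost(cx + dx, cy + dy), (cx + dx, cy + dy)) for dx, dy in SMALL),
               key=lambda t: t[0])[1]

# Hypothetical cost surface with its minimum at (3, -2):
print(hexagon_search(lambda x, y: (x - 3) ** 2 + (y + 2) ** 2))
```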

Categorizing tumor size as a prognostic factor for risk of relapse of hepatocellular carcinoma (간세포암종의 재발 위험과 관련된 한 예후인자로서의 종양의 크기의 범주화)

  • 김선우;박철근
    • The Korean Journal of Applied Statistics / v.15 no.1 / pp.1-8 / 2002
  • Categorizing prognostic factors is very useful for disease diagnosis, determination of treatment, and study eligibility criteria. Methods often used to categorize factors are to select a cutpoint by biological theory, by graphical examination, or by the minimum p-value approach. The last method involves multiple testing, and several methods for adjusting p-values have been developed. This study determines the cutpoint of tumor size that separates patients at high risk of relapse after hepatic resection of hepatocellular carcinoma.
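As a rough illustration of the minimum p-value approach (the paper works with survival-type tests and adjusted p-values; the two-proportion test and data below are stand-ins), candidate cutpoints are scanned and the one giving the smallest p-value is kept.

```python
# Illustrative sketch of the minimum p-value approach to choosing a cutpoint:
# scan candidate cutpoints of tumor size, test relapse rates above vs. below
# each cutpoint (two-proportion z-test here, as a stand-in for the paper's
# survival-type tests), and keep the cutpoint with the smallest p-value.
import math

def two_prop_pvalue(k1, n1, k2, n2):
    """Two-sided normal-approximation p-value for equality of two proportions."""
    p = (k1 + k2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (k1 / n1 - k2 / n2) / se
    return math.erfc(abs(z) / math.sqrt(2))

def min_pvalue_cutpoint(sizes, relapsed):
    best = (1.1, None)
    for c in sorted(set(sizes))[1:-1]:          # interior candidate cutpoints
        lo = [r for s, r in zip(sizes, relapsed) if s < c]
        hi = [r for s, r in zip(sizes, relapsed) if s >= c]
        pv = two_prop_pvalue(sum(hi), len(hi), sum(lo), len(lo))
        best = min(best, (pv, c))
    return best                                 # (minimum p-value, cutpoint)

# Hypothetical data: tumor sizes (cm) and relapse indicators.
sizes    = [1.2, 2.0, 2.5, 3.1, 4.0, 4.8, 5.5, 6.2, 7.0, 8.4]
relapsed = [0,   0,   0,   0,   1,   0,   1,   1,   1,   1]
print(min_pvalue_cutpoint(sizes, relapsed))
```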

Reconstruction of Myocardial Current Distribution Using Magnetocardiogram and its Clinical Use (심자도를 이용한 심근 전류분포 복원과 임상적 응용)

  • 권혁찬;정용석;이용호;김진목;김기웅;김기영;박기락;배장호
    • Journal of Biomedical Engineering Research / v.24 no.5 / pp.459-464 / 2003
  • The source current distribution in a heart was reconstructed from the magnetocardiogram (MCG) and its clinical usefulness was demonstrated. The MCG was measured using 40-channel superconducting quantum interference device (SQUID) gradiometers for a patient with Wolff-Parkinson-White (WPW) syndrome, which involves an accessory pathway between the atria and the ventricles. Reconstruction of the source current distribution in a plane below the chest surface was performed using a minimum norm estimation (MNE) algorithm and truncated singular value decomposition (SVD). In the simulation, we confirmed that the current distributions, which were computed for the test dipoles, represented well the essential features of the test current configurations. In the current map of the WPW syndrome, we observed abnormal currents that would bypass the atrioventricular junction at a delta wave. However, we could not observe such currents after the surgery. These results show that the current distribution obtained from MCG signals is consistent with the electrical activity in the heart and has clinical usefulness.
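A minimal sketch (not the authors' implementation) of minimum norm estimation with a truncated SVD: the source vector of smallest norm that fits the measurements, with small singular values of the lead-field matrix discarded for stability; the matrix sizes and data below are hypothetical.

```python
# Minimal sketch of minimum norm estimation with a truncated SVD: find the
# source distribution x of smallest norm fitting b = L @ x, keeping only the
# leading singular components of the lead-field matrix L.
import numpy as np

def mne_truncated_svd(L, b, keep=10):
    """Return the truncated-SVD minimum-norm solution of L @ x = b."""
    U, s, Vt = np.linalg.svd(L, full_matrices=False)
    k = min(keep, int(np.sum(s > 0)))
    # x = V_k diag(1/s_k) U_k^T b
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

# Hypothetical example: 40 sensors, 200 source grid points.
rng = np.random.default_rng(1)
L = rng.standard_normal((40, 200))              # lead-field (forward) matrix
x_true = np.zeros(200); x_true[17] = 1.0        # a single active source
b = L @ x_true + 0.01 * rng.standard_normal(40) # noisy measurements
x_hat = mne_truncated_svd(L, b, keep=40)
print(int(np.argmax(np.abs(x_hat))))            # location of the estimate's peak
```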

An Experimental Study on Ventilated Supercavitation of the Disk Cavitator (원판 캐비테이터의 환기 초공동에 대한 실험적 연구)

  • Kim, Byeung-Jin;Choi, Jung-Kyu;Kim, Hyoung-Tae
    • Journal of the Society of Naval Architects of Korea / v.52 no.3 / pp.236-247 / 2015
  • In this paper, experimental equipment for ventilated supercavitation is constructed in a cavitation tunnel, and basic data on ventilated supercavitation with regard to the entrainment coefficient and the Froude number are obtained. The experiments are conducted on a disk cavitator with air injection, and the pressure inside the cavity and the shape of the cavity are measured. As the entrainment coefficient increases while the Froude number is kept constant, the ventilated cavitation number decreases to a minimum value that does not decrease further even as the air entrainment increases. The minimum value of the ventilated cavitation number, caused by the blockage effect, decreases as the diameter ratio of the test section to the cavitator increases. The cavity length grows rapidly near the minimum cavitation number. At low Froude numbers, the cavity tail floats up due to buoyancy and the air inside the cavity escapes from its rear end through twin-vortex hollow tubes. However, at high Froude numbers, the buoyancy effect is almost negligible and the twin-vortex tubes disappear, so the cavity shape becomes close to axisymmetric. In order to measure the cavity length and width, two methods are applied: one based on the cavity shapes and the other based on the maximum width of the cavity. As the entrainment coefficient increases after the ventilated cavitation number reaches its minimum, the cavity length still increases gradually. This phenomenon can be confirmed by the measurement using the method based on the cavity shapes. On the other hand, when the method based on the maximum width of the cavity is used, the length and width of the cavity agree well with a semi-empirical formula for a natural cavity. Thus the method based on the maximum width of the cavity can be a valid method for cavitator design.
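For reference, the dimensionless quantities discussed above are conventionally defined as below in the supercavitation literature; the abstract does not spell out the exact forms used in the paper.

```latex
% Standard definitions (the exact forms used in the paper are assumptions):
\sigma_c = \frac{p_\infty - p_c}{\tfrac{1}{2}\rho U^2}, \qquad
C_Q = \frac{\dot{Q}}{U d_n^2}, \qquad
Fr = \frac{U}{\sqrt{g\, d_n}}
% \sigma_c: ventilated cavitation number (p_c = cavity pressure),
% C_Q: air entrainment coefficient (\dot{Q} = volumetric air injection rate),
% Fr: Froude number based on the cavitator diameter d_n.
```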

A Study on the Estimation of the Minimum Buoyancy for the Respiration of a Drowning Person (익수자의 호흡이 가능한 최소 부력 추정에 관한 연구)

  • Yim, Jeong-Bin;Park, Deuk-Jin;Kang, Yu Mi
    • Journal of the Korean Society of Marine Environment & Safety / v.23 no.7 / pp.820-828 / 2017
  • Tools and equipment that can provide buoyancy for a drowning person are important for saving lives. The purpose of this study was to estimate the minimum amount of gas needed, and the corresponding buoyancy value in newtons, to generate the minimum buoyancy sufficient for keeping the head of a drowning person above the water's surface and allowing respiration for at least 1 minute. A buoyancy experiment was carried out with a long rubber balloon injected with carbon dioxide gas, and a buoyancy measurement experiment was performed on six college students. The degree of buoyancy was measured using a 5-point scale, and the statistics of the measured data were analyzed to estimate the minimum buoyancy. As a result, 8 grams of carbon dioxide were determined to satisfy the minimum buoyancy conditions with a confidence level of 72%, and the buoyancy was calculated to be 44.66 newtons. Twelve grams of carbon dioxide met the minimum buoyancy conditions with a confidence level of 100%, and the buoyancy was calculated to be 66.99 newtons. This study is expected to contribute to the development of low-cost, easy-to-carry minimum buoyancy aids.
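As a back-of-the-envelope check (the temperature, pressure, and seawater density here are my assumptions, not the paper's), the reported buoyancy values can be reproduced from the gas mass via the ideal gas law and Archimedes' principle.

```python
# Back-of-the-envelope check (not the paper's calculation; temperature,
# pressure and water density are assumptions): buoyant force produced by a
# given mass of CO2 gas, via the ideal gas law and Archimedes' principle.
R = 8.314          # J/(mol K), gas constant
M_CO2 = 44.01      # g/mol, molar mass of CO2
T = 298.15         # K, assumed gas temperature (~25 C)
P = 101_325.0      # Pa, assumed atmospheric pressure
RHO_SEA = 1025.0   # kg/m^3, assumed seawater density
G = 9.81           # m/s^2

def buoyancy_newtons(grams_co2):
    """Buoyant force of the gas volume displaced by grams_co2 of CO2."""
    volume = (grams_co2 / M_CO2) * R * T / P    # m^3, ideal gas law
    return RHO_SEA * volume * G                 # N, Archimedes' principle

for g in (8, 12):
    print(f"{g} g CO2 -> about {buoyancy_newtons(g):.1f} N")
# The abstract reports 44.66 N and 66.99 N, consistent with this estimate.
```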

Estimation of b-value for Earthquakes Data Recorded on KSRS (KSRS 관측자료에 의한 b-값 평가)

  • 신진수;강익범;김근영
    • Proceedings of the Earthquake Engineering Society of Korea Conference / 2002.09a / pp.28-34 / 2002
  • The b-value in the magnitude-frequency relationship $\log N(m) = \alpha - bm$, where N(m) is the number of earthquakes exceeding magnitude m, is an important seismicity parameter in hazard analysis. Estimation of the b-value for earthquake data observed on the KSRS array network is performed using the maximum likelihood technique. Assuming the whole Korean Peninsula to be a single seismic source area, the b-value is computed to be 0.9. The estimate for the KMA earthquake data is similar. Since the estimate is a function of the minimum magnitude, we can inspect the completeness of the earthquake catalog in the process of fitting the b-value. The KSRS and KMA data lists are probably incomplete for magnitudes less than 2.0 and 3.0, respectively. Examples of probabilistic seismic hazard assessments calculated for a range of b-values show that a small change in the b-value has a serious effect on the prediction of ground motion.
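For illustration, the standard maximum likelihood b-value estimator (Aki, 1965) is sketched below; whether the paper applies exactly this form or a binned variant is not stated in the abstract, and the catalog values are hypothetical.

```python
# Sketch of the standard maximum likelihood b-value estimator (Aki, 1965);
# magnitudes below the completeness threshold m_min are discarded, and
# m_min - dm/2 corrects for magnitude binning of width dm.
import math

def b_value_mle(magnitudes, m_min, dm=0.1):
    """Maximum likelihood b-value for events with magnitude >= m_min."""
    m = [x for x in magnitudes if x >= m_min]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (m_min - dm / 2.0))

# Hypothetical catalog, for illustration only.
catalog = [2.1, 2.3, 2.0, 2.8, 3.4, 2.2, 2.6, 2.0, 4.1, 2.4, 3.0, 2.1]
print(round(b_value_mle(catalog, m_min=2.0), 2))
```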
