• Title/Summary/Keyword: static method

A Study on the Calculation of Optimal Compensation Capacity of Reactive Power for Grid Connection of Offshore Wind Farms (해상풍력단지 전력계통 연계를 위한 무효전력 최적 보상용량 계산에 관한 연구)

  • Seong-Min Han;Joo-Hyuk Park;Chang-Hyun Hwang;Chae-Joo Moon
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.19 no.1
    • /
    • pp.65-76
    • /
    • 2024
  • With the recent activation of the offshore wind power industry, power plants exceeding 400 MW in scale, comparable to traditional thermal power plants, have been developed. Renewable energy, characterized by intermittency that depends on the energy source, is a prominent feature of modern renewable power generation facilities, which are structured around controllable inverter technology. As the integration of renewable energy sources into the grid expands, the grid codes for power system connection are progressively becoming more defined, leading to active discussions and evaluations in this area. In this paper, we propose a method for selecting the optimal reactive power compensation capacity when multiple offshore wind farms are integrated and connected through a shared interconnection facility to comply with grid codes. Based on the requirements of the grid code, we analyze the reactive power compensation and stability of the 400 MW wind power generation site under development in the southwest sea of Jeonbuk. This analysis involves constructing a generation site database using PSS/E (Power System Simulator for Engineering), incorporating turbine layouts and cable data. The study calculates the reactive power due to charging current in the internal and external network cables and determines the reactive power compensation capacity at the interconnection point. Additionally, static and dynamic stability assessments are conducted by integrating with the power system database.
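
The core quantity in this sizing step is the charging reactive power generated by the cable capacitance. As a minimal illustration of the underlying formula only, not of the authors' PSS/E workflow, the sketch below computes the three-phase charging Mvar of a cable section; the voltage, capacitance, and length values are hypothetical:

```python
import math

def cable_charging_mvar(v_kv: float, c_uf_per_km: float, length_km: float,
                        freq_hz: float = 60.0) -> float:
    """Three-phase charging reactive power of a cable section, in Mvar.

    Q_c = 2*pi*f * C * V_LL^2, where C is the total per-phase capacitance
    of the section and V_LL is the line-to-line voltage.
    """
    c_farads = c_uf_per_km * 1e-6 * length_km
    v_volts = v_kv * 1e3
    return 2 * math.pi * freq_hz * c_farads * v_volts**2 / 1e6

# Illustrative values only (not from the paper): a 154 kV export cable,
# 0.2 uF/km per phase, 20 km long.
q_total = cable_charging_mvar(154, 0.2, 20)
print(f"Charging reactive power: {q_total:.1f} Mvar")
# A shunt reactor at the interconnection point would be sized to offset
# this surplus, net of the turbines' own reactive capability.
```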

Dosimetric Comparison of Left-sided Whole Breast Irradiation using a Virtual Bolus with VMAT and static IMRT (좌측 유방의 세기변조 방사선치료 시 Virtual Bolus 적용에 따른 선량 변화 비교 평가)

  • Lim, Kyeong Jin;Kim, Tae Woan;Jang, Yo Jong;Yang, Jin Ho;Lee, Seong Hyeon;Yeom, Du Seok;Kim, Seon Yeong
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.31 no.2
    • /
    • pp.51-63
    • /
    • 2019
  • Purpose: Radiation therapy for breast cancer should account for changes in breast shape due to breathing and swelling. In this study, we evaluate the benefit of using a virtual bolus for IMRT of left breast cancer. Materials and methods: For 10 patients with early breast cancer who received radiation therapy after breast-conserving surgery, we compared VMAT and IMRT plans created with and without the virtual bolus method. The first analysis, assuming no change in the breast, compared the V95%, HI, and CI of the treatment volume, the Dmean, V5, V20, and V30 of the ipsilateral lung, and the Dmean of the heart between the VMAT plan made with the virtual bolus method (VMAT_VB) and the plan made without it (VMAT_NoVB); the same comparison was made for IMRT. The second analysis compared TCP and NTCP for each treatment plan assuming a 1 cm expansion of the treatment volume. Result: With no change in the breast, V95% exceeded 99% in both the VB plans (VMAT_VB, IMRT_VB) and the NoVB plans (VMAT_NoVB, IMRT_NoVB): 99.80±0.17% for VMAT_NoVB versus 99.75±0.12% for VMAT_VB, and 99.67±0.26% for IMRT_NoVB versus 99.51±0.15% for IMRT_VB. Differences in HI and CI were within 3%. OAR doses in the VB plans were slightly higher than in the NoVB plans but did not exceed guidelines. With a 1 cm change in the breast, VMAT_NoVB and IMRT_NoVB lost treatment effectiveness, whereas VMAT_VB and IMRT_VB maintained a treatment effect similar to the case with no breast variation. Conclusion: This study confirms the benefit of using a virtual bolus during VMAT and IMRT to compensate for potential changes in breast shape.
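
The plan comparison rests on standard dose-volume metrics. Below is a minimal sketch of how V95% and one common homogeneity index (HI = (D2% - D98%) / D50%, an assumed definition since the abstract does not state its formula) can be computed from sampled voxel doses; the dose values are synthetic:

```python
import numpy as np

def v95(dose: np.ndarray, prescription: float) -> float:
    """Percentage of the target volume receiving at least 95% of the prescription."""
    return 100.0 * np.mean(dose >= 0.95 * prescription)

def homogeneity_index(dose: np.ndarray) -> float:
    """HI = (D2% - D98%) / D50%; lower means more homogeneous.

    D2% (near-maximum dose) is the 98th percentile of the voxel doses,
    D98% (near-minimum dose) is the 2nd percentile.
    """
    d2, d50, d98 = np.percentile(dose, [98, 50, 2])
    return (d2 - d98) / d50

# Hypothetical PTV voxel doses (Gy) around a 50 Gy prescription.
rng = np.random.default_rng(0)
ptv_dose = rng.normal(50.5, 0.8, 10_000)
print(f"V95% = {v95(ptv_dose, 50):.2f}%, HI = {homogeneity_index(ptv_dose):.3f}")
```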

Usefulness of Tc-99m-labeled RBC Scan and SPECT in the Diagnosis of Head and Neck Hemangiomas (두경부 혈관종 진단시 Tc-99m-RBC Scan and SPECT 검사의 유용성)

  • Oh, Shin-Hyun;Roh, Dong-Wook;Ahn, Sha-Ron;Park, Hoon-Hee;Lee, Seung-Jae;Kang, Chun-Goo;Kim, Jae-Sam;Lee, Chang-Ho
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.12 no.1
    • /
    • pp.39-43
    • /
    • 2008
  • Purpose: Various methods exist to diagnose hemangioma, such as ultrasonography (US), computed tomography (CT), magnetic resonance imaging (MRI), and nuclear medicine. With the development of SPECT imaging, the blood-pool scan using Tc-99m-labeled red blood cells has been used because it is non-invasive and the most economical method. In this study, we evaluated the usefulness of Tc-99m-RBC scan and SPECT of the head and neck for diagnosing hemangiomas of uncertain location. Materials and Methods: Tc-99m-RBC scan and SPECT were performed on 6 patients with suspected hemangioma (head, 4; neck, 1; other site, 1). We labeled the radiopharmaceutical using a modified in vivo method and then centrifuged it to remove plasma. After a bolus injection of the tracer, dynamic perfusion flow images were acquired; anterior, posterior, and both lateral static blood-pool images were then obtained early and at 4 hours delayed. SPECT was acquired with 64 projections at 30 seconds per projection. Each image was interpreted by physicians, a nuclear medicine specialist, and a technologist blinded to the patient's data. Results: In 5 of the 6 patients, the radioactivity of the suspected site did not change in the flow images but increased in the blood-pool, delayed, and SPECT images, a typical hemangioma finding. Lesions over 2 cm in size could be discriminated by comparing the delayed and SPECT images. In the remaining patient, radioactivity increased in the blood-pool images but not in the delayed and SPECT images, so hemangioma was excluded. Conclusion: Using Tc-99m-RBC scan and SPECT, we could diagnose hemangiomas in the head and neck, as well as the liver, in a non-invasive, economical, and simple way. Tc-99m-RBC scan and SPECT are therefore considered to offer more useful information for the diagnosis of hemangioma than other imaging modalities such as US, CT, and MRI.


Commissioning of Dynamic Wedge Field Using Conventional Dosimetric Tools (선량 중첩 방식을 이용한 동적 배기 조사면의 특성 연구)

  • Yi Byong Yong;Nha Sang Kyun;Choi Eun Kyung;Kim Jong Hoon;Chang Hyesook;Kim Mi Hwa
    • Radiation Oncology Journal
    • /
    • v.15 no.1
    • /
    • pp.71-78
    • /
    • 1997
  • Purpose: To collect beam data for dynamic wedge fields using conventional measurement tools, without a multi-detector system such as linear diode detectors or ionization chamber arrays. Materials and Methods: The CL 2100 C/D accelerator has two photon energies, 6 MV and 15 MV, with dynamic wedge angles of 15°, 30°, 45°, and 60°. Wedge transmission factors, percentage depth doses (PDDs), and dose profiles were measured. Wedge transmission factors were measured for field sizes ranging from 4×4 cm² to 20×20 cm² in 1-2 cm steps. Various rectangular field sizes were also measured for each photon energy, in combination with each dynamic wedge angle of 15°, 30°, 45°, and 60°. These factors were compared to wedge factors calculated from the STT (Segmented Treatment Table) values. PDDs were measured with film and an ionization chamber in a water phantom for a fixed square field, and parameters for converting film data to chamber data were obtained from this procedure. With these converting parameters, PDDs for dynamic wedged fields could be obtained from film dosimetry without using the ionization chamber. Dose profiles were obtained by interpolation and STT-weighted superposition of data from selected asymmetric static field measurements made with the ionization chamber. Results: The measured wedge transmission factors agreed well with the calculated values. The wedge factors of rectangular fields with constant V-field were equal to those of square fields. The differences between the PDDs of open fields and those of dynamic wedge fields were insignificant. Dose profiles from the superposition method showed an acceptable range of accuracy (maximum 2% error) when compared with film dosimetry. Conclusion: These results show that commissioning of dynamic wedges can be done with conventional dosimetric tools, such as a point detector system and film dosimetry, within a maximum 2% error.
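
The profile reconstruction described above is a weighted superposition: each asymmetric static field profile measured with the ionization chamber contributes in proportion to its Segmented Treatment Table (STT) weight. A minimal sketch of that superposition step, with hypothetical array shapes and weights:

```python
import numpy as np

def superpose_profiles(static_profiles: np.ndarray,
                       stt_weights: np.ndarray) -> np.ndarray:
    """Dose profile of a dynamic wedge field as an STT-weighted sum of
    measured static (asymmetric) field profiles.

    static_profiles: shape (n_segments, n_points), one measured profile per
                     jaw position sampled from the STT.
    stt_weights:     fractional monitor units delivered in each segment.
    """
    w = np.asarray(stt_weights, dtype=float)
    return (w / w.sum()) @ np.asarray(static_profiles, dtype=float)

# Hypothetical example: 5 static segments, 11 off-axis measurement points.
profiles = np.random.default_rng(1).uniform(0.5, 1.0, size=(5, 11))
weights = np.array([0.40, 0.25, 0.15, 0.12, 0.08])  # MU fractions from an STT
wedge_profile = superpose_profiles(profiles, weights)
```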


Comparison of Acting Style Between 2D Hand-drawn Animation and 3D Computer Animation : Focused on Expression of Emotion by Using Close-up (2D 핸드 드로운 애니메이션과 3D 컴퓨터 애니메이션에서의 액팅(acting) 스타일 비교 -클로즈-업을 이용한 감정표현을 중심으로-)

  • Moon, Jaecheol;Kim, Yumi
    • Cartoon and Animation Studies
    • /
    • s.36
    • /
    • pp.147-165
    • /
    • 2014
  • Around the turn of the 21st century, there was a major technological shift in the animation industry. With the development of reality-based computer graphics, major American animation studios replaced hand-drawn methods with 3D computer graphics. Traditional animation was known for simplified shapes such as circles and triangles that make characters' movements distinct from those in non-animated feature films. Computer-generated animation has largely replaced it, but remains under continuous criticism that automated movements and reality-like graphics devalue the aesthetics of animation. Although hand-drawn animation is still produced, 3D computer graphics have taken the commercial lead, and there have been many changes to the acting of animated characters, which calls for detailed investigation. Firstly, changes in the acting of 3D characters can be traced to human-like rigging methods that mimic human movement mechanisms. Also, while hair and clothing were once part of hand-drawn characters' acting, they are now hidden inside the mathematical simulation of 3D graphics, leaving only the body to be used in acting. Secondly, looking at the "stretch and squash" method, which represents the distinctive movements of animation, through the lens of media, a paradox arises. Hand-drawn animation is produced frame by frame, and subtle changes make the animated frames shiver. This slight shivering acts as an aesthetic distinction of animated feature films, but it can also require exaggerated movements to hide the shivering. By contrast, the acting of 3D animation uses calculated movements that may seem exaggerated compared to human acting, yet seem much more moderate and static compared to hand-drawn acting. Moreover, 3D computer graphics add a third dimension that allows more intuitive movements: animators may no longer need fine drawing skills, but rather directing skills to animate characters in 3D space intuitively. On the assumption that technological advancement and change in artistic expression are inseparable, this paper compares the acting of the 3D animation studio Pixar and the classical drawing studio Disney to investigate character acting styles and movements.

The Need for Paradigm Shift in Semantic Similarity and Semantic Relatedness : From Cognitive Semantics Perspective (의미간의 유사도 연구의 패러다임 변화의 필요성-인지 의미론적 관점에서의 고찰)

  • Choi, Youngseok;Park, Jinsoo
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.111-123
    • /
    • 2013
  • Semantic similarity/relatedness measures between two concepts play an important role in research on system integration and database integration. Moreover, current research on keyword recommendation or tag clustering depends strongly on such semantic measures. For this reason, many researchers in various fields, including computer science and computational linguistics, have tried to improve methods for calculating semantic similarity/relatedness. The study of similarity between concepts seeks to discover how a computational process can model how a human determines the relationship between two concepts. Most research on calculating semantic similarity uses ready-made reference knowledge, such as a semantic network or dictionary, to measure concept similarity. The topological method calculates relatedness or similarity between concepts based on various forms of a semantic network, including a hierarchical taxonomy. This approach assumes that the semantic network reflects human knowledge well. The nodes in a network represent concepts, and ways to measure the similarity between two nodes are also regarded as ways to determine the conceptual similarity of two words (i.e., two nodes in a network). Topological methods can be categorized as node-based or edge-based, also called the information content approach and the conceptual distance approach, respectively. The node-based approach calculates similarity between concepts based on how much information the two concepts share in terms of a semantic network or taxonomy, while the edge-based approach estimates the distance between the nodes that correspond to the concepts being compared. Both approaches assume that the semantic network is static; that is, the topological approach has not considered changes in the semantic relations between concepts in a semantic network. However, as information and communication technologies make it easier to share knowledge among people, semantic relations between concepts in a semantic network may change. To explain this change, we adopt cognitive semantics. The basic assumption of cognitive semantics is that humans judge semantic relations based on their cognition and understanding of concepts, called "world knowledge." World knowledge can be categorized as personal knowledge and cultural knowledge. Personal knowledge comes from personal experience, and everyone can have different personal knowledge of the same concept. Cultural knowledge is shared by people living in the same culture or using the same language, who have a common understanding of specific concepts. Cultural knowledge can be the starting point for discussing changes in semantic relations: if the culture shared by people changes for some reason, their cultural knowledge may also change. Today's society and culture are changing at a fast pace, and the change of cultural knowledge is not a negligible issue in research on semantic relationships between concepts. In this paper, we propose future directions for research on semantic similarity; in other words, we discuss how such research can reflect changes in semantic relations caused by changes in cultural knowledge. We suggest three directions for future research on semantic similarity.
First, research should include versioning and update methodology for semantic networks. Second, dynamically generated semantic networks can be used for calculating semantic similarity between concepts; if researchers can develop a methodology to extract a semantic network from a given knowledge base in real time, this approach can solve many problems related to changing semantic relations. Third, statistical approaches based on corpus analysis can be an alternative to methods using a semantic network. We believe these proposed research directions can be a milestone for research on semantic relations.
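
The edge-based (conceptual distance) approach discussed above can be made concrete with a toy taxonomy. The sketch below scores similarity as an inverse function of shortest-path length over a static semantic network, which is exactly the static assumption the authors argue future research should relax; the taxonomy and scoring function are illustrative, not from the paper:

```python
import networkx as nx

# Toy taxonomy (hypothetical); edges link hypernyms to hyponyms.
taxonomy = nx.Graph([
    ("entity", "animal"), ("animal", "dog"), ("animal", "cat"),
    ("entity", "artifact"), ("artifact", "car"),
])

def edge_based_similarity(g: nx.Graph, a: str, b: str) -> float:
    """Conceptual-distance similarity: inverse of (shortest-path length + 1)."""
    return 1.0 / (nx.shortest_path_length(g, a, b) + 1)

print(edge_based_similarity(taxonomy, "dog", "cat"))  # 1/(2+1) ~ 0.33
print(edge_based_similarity(taxonomy, "dog", "car"))  # 1/(4+1) = 0.20
```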

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung;Song, Min-chae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.59-83
    • /
    • 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In deep learning sentiment analysis of English texts, the natural language sentences in the training and test datasets are usually converted into sequences of word vectors before being entered into the models. In this case, word vectors generally refer to vector representations of words obtained by splitting a sentence on space characters. There are several ways to derive word vectors, one of which is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data; these have been widely used in studies of sentiment analysis of reviews from various fields such as restaurants, movies, laptops, and cameras. Unlike in English, the morpheme plays an essential role in sentiment analysis and sentence structure analysis in Korean, a typical agglutinative language with developed postpositions and endings. A morpheme can be defined as the smallest meaningful unit of a language, and a word consists of one or more morphemes. For example, the word '예쁘고' consists of the morphemes '예쁘' (adjective) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit in Korean sentiment analysis. Therefore, in this study, we use 'morpheme vectors' as input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word vector derivation mechanism to sentences divided into their constituent morphemes. Several questions then arise. What is the desirable range of POS (part-of-speech) tags when deriving morpheme vectors to improve the classification accuracy of a deep learning model? Is it proper to apply a typical word vector model, which relies primarily on the form of words, to Korean, with its high ratio of homonyms? Will text preprocessing such as correcting spelling or spacing errors affect classification accuracy, especially when drawing morpheme vectors from Korean product reviews with many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which are likely to be encountered first when applying deep learning models to Korean texts. As a starting point, we summarize them as three central research questions. First, which is more effective as the initial input of a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean with respect to the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can we achieve a satisfactory level of classification accuracy when applying deep learning to Korean sentiment analysis? To address these questions, we generate various types of morpheme vectors reflecting them and compare classification accuracy using a non-static CNN (Convolutional Neural Network) model that takes the morpheme vectors as input. As training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used.
To derive morpheme vectors, we use data from the same domain as the target and data from another domain: about 2 million cosmetics product reviews from Naver Shopping and 520,000 Naver News articles, arguably corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ along three criteria. First, they come from two data sources: Naver News, with high grammatical correctness, and Naver Shopping's cosmetics product reviews, with low grammatical correctness. Second, they differ in the degree of preprocessing: only sentence splitting, or additional spelling and spacing corrections after sentence separation. Third, they vary in the form of input fed into the word vector model: the morphemes alone, or the morphemes with their POS tags attached. The morpheme vectors further vary in the range of POS tags considered, the minimum frequency of morphemes included, and the random initialization range. All morpheme vectors are derived with the CBOW (Continuous Bag-Of-Words) model using a context window of 5 and a vector dimension of 300. The results suggest that using same-domain text even with lower grammatical correctness, performing spelling and spacing corrections as well as sentence splitting, and incorporating morphemes of all POS tags, including the incomprehensible category, lead to better classification accuracy. POS tag attachment, devised for the high proportion of homonyms in Korean, and the minimum frequency threshold for including a morpheme did not appear to have any definite influence on classification accuracy.
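
Morpheme vectors as described here are ordinary word vectors trained on morpheme-tokenized sentences. A minimal gensim sketch using the settings reported in the abstract (CBOW, context window 5, 300 dimensions); the sample sentences, tokenization, and min_count value are hypothetical:

```python
from gensim.models import Word2Vec

# Hypothetical review sentences already split into morphemes (e.g. via a
# Korean morphological analyzer); the paper also explores attaching POS
# tags to each morpheme, such as "예쁘/Adjective".
morpheme_sentences = [
    ["배송", "이", "빠르", "고", "제품", "도", "좋", "아요"],
    ["향", "이", "너무", "강하", "고", "가격", "이", "비싸", "요"],
]

# CBOW (sg=0) with the abstract's settings: window 5, 300 dimensions.
model = Word2Vec(
    sentences=morpheme_sentences,
    vector_size=300,   # vector dimension
    window=5,          # context window
    sg=0,              # 0 selects CBOW
    min_count=1,       # minimum morpheme frequency (a studied hyperparameter)
)
vec = model.wv["좋"]   # a 300-dimensional morpheme vector
```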

Medical Information Dynamic Access System in Smart Mobile Environments (스마트 모바일 환경에서 의료정보 동적접근 시스템)

  • Jeong, Chang Won;Kim, Woo Hong;Yoon, Kwon Ha;Joo, Su Chong
    • Journal of Internet Computing and Services
    • /
    • v.16 no.1
    • /
    • pp.47-55
    • /
    • 2015
  • Recently, hospital information system environments have tended to combine various smart technologies. Accordingly, various smart devices, such as smartphones and tablet PCs, are utilized in the medical information system. These environments consist of various applications executing on heterogeneous sensors, devices, systems, and networks. In such hospital information system environments, applying security services through traditional access control methods causes problems. Most existing security systems use an access control list structure: access is only permitted as defined by an access control matrix of entries such as client name and service object method name. The major problem with this static approach is that it cannot quickly adapt to changing situations. Hence, we need new security mechanisms that are more flexible and can be easily adapted to various environments with very different security requirements. In addition, research is needed to address changes in the patient's course of medical treatment. In this paper, we suggest a dynamic approach to medical information systems in smart mobile environments. We focus on how to access medical information systems according to dynamic access control methods based on the hospital's information system environments. The physical environment consists of mobile X-ray imaging devices, dedicated mobile and general smart devices, PACS, an EMR server, and an authorization server. The software environment for the synchronization and monitoring services of the mobile X-ray imaging equipment was developed on the .NET Framework under the Windows 7 OS; for the dedicated smart device application, we implemented dynamic access services through JSP and the Java SDK on the Android OS. Medical information exchanged between PACS, the mobile X-ray imaging devices, and the dedicated smart devices is based on the DICOM medical image standard, and EMR information is based on HL7. To provide dynamic access control services, we classify the patient's context according to bio-information such as oxygen saturation, heart rate, blood pressure, and body temperature, and we present event trace diagrams divided into two cases: general and emergency situations. We also designed dynamic access to medical care information through an authentication method. The authentication information contains ID/PWD, roles, position and working hours, and emergency certification codes for emergency patients. In general situations, the dynamic access control method grants access to medical information according to the values of the authentication information; in an emergency, access is granted by an emergency code, without the authentication information. We also constructed an integrated medical information database scheme consisting of medical information, patient, medical staff, and medical image information according to medical information standards. Finally, we show the usefulness of the dynamic access application service based on smart devices through execution results of the proposed system under patient contexts such as general and emergency situations. In particular, the proposed system provides effective medical information services with smart devices in emergency situations through dynamic access control methods. We expect the proposed system to be useful for u-hospital information systems and services.
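
The two access paths, general and emergency, can be summarized as a small decision function. The sketch below only mirrors the flow described in the abstract; the vital-sign thresholds, field names, and authentication stub are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Vitals:
    spo2: float       # oxygen saturation (%)
    heart_rate: int   # beats per minute

@dataclass
class Credentials:
    user_id: str = ""
    password: str = ""
    emergency_code: str = ""

def is_emergency(v: Vitals) -> bool:
    # Hypothetical thresholds; the paper classifies context from bio-signals
    # such as SpO2, heart rate, blood pressure, and body temperature.
    return v.spo2 < 90 or v.heart_rate > 130

def grant_access(cred: Credentials, vitals: Vitals, valid_codes: set[str]) -> bool:
    if is_emergency(vitals):
        # Emergency path: an emergency certification code alone suffices.
        return cred.emergency_code in valid_codes
    # General path: normal authentication (ID/PWD, role, working hours, ...).
    return authenticate(cred.user_id, cred.password)

def authenticate(user_id: str, password: str) -> bool:
    # Placeholder for the authorization server lookup.
    return user_id == "dr.kim" and password == "secret"
```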

Analysis and Evaluation of Frequent Pattern Mining Technique based on Landmark Window (랜드마크 윈도우 기반의 빈발 패턴 마이닝 기법의 분석 및 성능평가)

  • Pyun, Gwangbum;Yun, Unil
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.101-107
    • /
    • 2014
  • With the development of online services, databases have shifted from static structures to dynamic stream structures. Previous data mining techniques have been used as decision-making tools for tasks such as establishing marketing strategies and DNA analyses. However, the capability to analyze real-time data more quickly is necessary in areas of recent interest such as sensor networks, robotics, and artificial intelligence. Landmark window-based frequent pattern mining, one of the stream mining approaches, performs mining operations on parts of the database or on individual transactions, instead of on all the data. In this paper, we analyze and evaluate two well-known landmark window-based frequent pattern mining algorithms, Lossy counting and hMiner. When Lossy counting mines frequent patterns from a set of new transactions, it performs union operations between the previous and current mining results. hMiner, a state-of-the-art algorithm based on the landmark window model, conducts mining operations whenever a new transaction occurs. Since hMiner extracts frequent patterns as soon as a new transaction arrives, we can obtain the latest mining results reflecting real-time information; for this reason, such algorithms are also called online mining approaches. We evaluate and compare the performance of the primitive algorithm, Lossy counting, and the latest one, hMiner. As criteria for our performance analysis, we first consider each algorithm's total runtime and average processing time per transaction. In addition, to compare the efficiency of their storage structures, maximum memory usage is also evaluated. Lastly, we show how stably the two algorithms perform on databases featuring a gradually increasing number of items. With respect to mining time and transaction processing, hMiner is faster than Lossy counting: since hMiner stores candidate frequent patterns in a hash structure, it can access them directly, whereas Lossy counting stores them in a lattice and must search multiple nodes to reach a candidate pattern. On the other hand, hMiner shows worse maximum memory usage than Lossy counting: hMiner must keep the complete information of each candidate frequent pattern in its hash buckets, while Lossy counting reduces this information by using the lattice method. Since Lossy counting's storage can share items concurrently included in multiple patterns, its memory usage is more efficient than hMiner's. However, hMiner presents better scalability for the following reasons: as the number of items increases, fewer items are shared, which weakens Lossy counting's memory efficiency, and as the number of transactions grows, its pruning effect worsens. From the experimental results, we conclude that landmark window-based frequent pattern mining algorithms are suitable for real-time systems, although they require a significant amount of memory; their data structures therefore need to be made more efficient before they can also be used in resource-constrained environments such as WSNs (wireless sensor networks).
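
For readers unfamiliar with Lossy counting, the sketch below implements the classic single-item version of the algorithm. The paper's algorithms mine frequent patterns over transactions, so this is a simplification for intuition, not the authors' implementation:

```python
import math

def lossy_counting(stream, epsilon: float = 0.01, support: float = 0.05):
    """Classic Lossy Counting over a stream of single items.

    Guarantees: every item with true frequency >= support*N is reported,
    and stored counts underestimate true counts by at most epsilon*N.
    """
    bucket_width = math.ceil(1 / epsilon)
    counts, deltas = {}, {}   # observed count and maximum undercount per item
    n = 0
    for item in stream:
        n += 1
        current_bucket = math.ceil(n / bucket_width)
        if item in counts:
            counts[item] += 1
        else:
            counts[item] = 1
            deltas[item] = current_bucket - 1
        if n % bucket_width == 0:  # prune infrequent items at bucket boundaries
            for key in [k for k in counts
                        if counts[k] + deltas[k] <= current_bucket]:
                del counts[key], deltas[key]
    threshold = (support - epsilon) * n
    return {k: v for k, v in counts.items() if v >= threshold}

# 'a' occurs in ~45% of positions and is the only item above 20% support.
print(lossy_counting("abracadabra" * 100, epsilon=0.01, support=0.2))
```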

Analyses of the Efficiency in Hospital Management (병원 단위비용 결정요인에 관한 연구)

  • Ro, Kong-Kyun;Lee, Seon
    • Korea Journal of Hospital Management
    • /
    • v.9 no.1
    • /
    • pp.66-94
    • /
    • 2004
  • The objective of this study is to examine how to maximize the efficiency of hospital management by minimizing the unit cost of hospital operation. For this purpose, this paper develops a profit-maximization model based on the cost-minimization dictum, using maximum likelihood statistical tools. The preliminary survey data are collected from the annual statistics and analyses published by the Korea Health Industry Development Institute and the Korean Hospital Association. The maximum likelihood analyses are conducted on the cost (function) information of each of 36 hospitals, selected by stratified random sampling according to hospital size and location (urban or rural). We believe that, although the sample is relatively small, the sampling method used and the high response rate make the estimation power of the statistical analyses of the sample hospitals acceptable. The conceptual framework of the analyses is adopted from the various models of hospital cost determinants used in previous studies. Following this framework, the study postulates that the unit cost of hospital operation is determined by size, scope of service, technology (the production function) as measured by capacity utilization, the labor-capital ratio, and labor input-mix variables, and by exogenous variables. The variables representing these cost determinants are selected by step-wise regression so that only statistically significant variables are used in analyzing how they affect hospital unit cost. The results show that the adopted models of hospital cost determinants are well chosen: the various models analyzed all have significant overall coefficients of determination (R²), regardless of the variables chosen to represent the cost determinants. Specifically, size and scope of service, however measured (number of admissions per bed, number of ambulatory visits per bed, adjusted inpatient days, and adjusted outpatients), reduce hospital unit costs as measured by cost per admission, per inpatient day, or per office visit, implying economies of scale in hospital operation. Thirdly, the technology used in operating a hospital has ramifications for unit cost similar to those postulated in the static theory of the firm. For example, capacity utilization, as represented by inpatient days per employee, turned out to have a statistically significant negative impact on unit cost, while payroll expenses per inpatient cost had a positive effect. The input mix of hospital operation, represented by the ratio of doctors, nurses, or medical staff to general employees, supports the known thesis that specialized manpower costs more than general employees. The labor/capital ratio, represented by employees per 100 beds, has a positive effect on cost, as expected. As for the impact of the exogenous variable, when it is represented by the percentage of urban population at the location of the hospital, the regression analysis shows that hospitals located in urban areas have higher costs than those in rural areas.
Finally, the case study of the sample hospitals offers hospital administrators specific information about where they stand in terms of cost compared to other hospitals. For example, the administrator of a small hospital located in a city can compare that hospital's various operating costs with those of other similar hospitals, and may thereby be able to identify which cost determinants make the hospital's operation more or less costly than similar hospitals.
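
The kind of model the study estimates is a regression of unit cost on size, technology, and location variables. A toy sketch with synthetic illustrative data (none of these numbers are from the paper); the coefficient signs merely mirror the reported directions, negative for capacity utilization and positive for the labor/capital ratio and urban location:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data in the spirit of the study: unit cost regressed on
# capacity utilization, labor/capital ratio, and an urban dummy.
rng = np.random.default_rng(42)
n = 36  # the study's sample size
inpatient_days_per_employee = rng.uniform(20, 80, n)   # capacity utilization
employees_per_100_beds = rng.uniform(80, 160, n)       # labor/capital ratio
urban = rng.integers(0, 2, n)                          # exogenous location dummy
unit_cost = (500
             - 2.0 * inpatient_days_per_employee
             + 1.5 * employees_per_100_beds
             + 30 * urban
             + rng.normal(0, 20, n))

X = sm.add_constant(np.column_stack(
    [inpatient_days_per_employee, employees_per_100_beds, urban]))
fit = sm.OLS(unit_cost, X).fit()
print(fit.summary())  # coefficient signs mirror the paper's reported findings
```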
