• Title/Summary/Keyword: Performance Information Use

A Study for the Methodology of Analyzing the Operation Behavior of Thermal Energy Grids with Connecting Operation (열 에너지 그리드 연계운전의 운전 거동 특성 분석을 위한 방법론에 관한 연구)

  • Im, Yong Hoon;Lee, Jae Yong;Chung, Mo
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.1 no.3
    • /
    • pp.143-150
    • /
    • 2012
  • A simulation methodology, and a program implementing it, are discussed for analyzing the effects of networking an existing district heating and cooling (DHC) system with an on-site CHP system. Practical simulations for arbitrary areas with various building compositions are carried out to analyze the operational features of both systems, and the various aspects of thermal energy grids under connecting operation are highlighted through detailed assessment of the predicted results. The intrinsic operational features of CHP prime movers (gas engines, gas turbines, etc.) are implemented from measured performance data, i.e. actual operating efficiency over the full- and part-load range. For simplicity, a mathematical correlation model is proposed to capture the various changes on the existing DHC system side due to the connecting operation, instead of performing separate cycle simulations. The empirical correlations are developed using hourly annual operation data from a branch of the Korea District Heating Corporation (KDHC) and implicitly relate the main operation parameters, such as fuel consumption by use, heat production, and power production. The simulation can consider a variety of system configurations, combining candidate CHP prime movers with absorption or turbo chillers of any type and capacity. The thermal network operation simulations show that the proposed correlation-based model of the existing DHC system effectively reflects the operational variations caused by connecting operation of the thermal energy grids. The effects of the intrinsic features of the CHP prime movers (e.g. different heat-to-power production ratios) and of various combinations of chiller types (absorption and turbo) on overall system operation are discussed in detail, together with the operation schemes and corresponding simulation algorithms.
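
The abstract replaces separate cycle simulations of the DHC side with empirical correlations fitted to hourly annual operation data. The sketch below is illustrative only (not the authors' program): it fits a linear-in-parameters correlation between fuel consumption and heat/power production; the toy data and the functional form are assumptions.

```python
# Illustrative sketch: fitting an empirical correlation between DHC-side
# fuel consumption and heat/power production from hourly annual data.
import numpy as np

rng = np.random.default_rng(1)
n = 8760                                    # one year of hourly records
heat = rng.uniform(50, 200, n)              # hourly heat production
power = rng.uniform(10, 60, n)              # hourly power production
fuel = 0.9 * heat + 1.4 * power + 25 + rng.normal(0, 3, n)  # synthetic "measurements"

# Least-squares fit of fuel = a*heat + b*power + c
X = np.column_stack([heat, power, np.ones(n)])
(a, b, c), *_ = np.linalg.lstsq(X, fuel, rcond=None)

def fuel_model(h, p):
    """Correlation model standing in for a separate DHC cycle simulation."""
    return a * h + b * p + c

print(f"fitted: fuel = {a:.2f}*heat + {b:.2f}*power + {c:.1f}")
```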

A Case Study: Improvement of Wind Risk Prediction by Reclassifying the Detection Results (풍해 예측 결과 재분류를 통한 위험 감지확률의 개선 연구)

  • Kim, Soo-ock;Hwang, Kyu-Hong
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.23 no.3
    • /
    • pp.149-155
    • /
    • 2021
  • Early warning systems for weather risk management in the agricultural sector have been developed to predict potential wind damage to crops. These systems compare the daily maximum wind speed against the critical wind speed that causes fruit drop, and provide the risk information to farmers. To increase the accuracy of wind risk predictions, an artificial neural network for binary classification was implemented. In the present study, the daily wind speed and other weather data measured in 2019 at weather stations at sites of interest in Jeollabuk-do and Jeollanam-do, as well as Gyeongsangbuk-do and part of Gyeongsangnam-do provinces, were used to train the neural network. These comprise 210 synoptic and automated weather stations operated by the Korea Meteorological Administration (KMA). Wind speed data collected at the same locations between January 1 and December 12, 2020 were used to validate the model, and data collected from December 13, 2020 to February 18, 2021 were used to evaluate the wind risk prediction performance before and after applying the neural network. The critical wind speed for damage risk was set to 11 m/s, the speed reported to cause fruit drop and damage. The maximum wind speeds were modeled with a Weibull probability density function for issuing wind damage warnings. The accuracy of wind damage risk prediction improved from 65.36% to 93.62% after re-classification by the artificial neural network; however, the error rate also increased, from 13.46% to 37.64%. The machine learning approach used in the present study is therefore likely to benefit applications in which a warning system's failure to detect risk is the relatively more serious problem.
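
A minimal sketch of the Weibull-based warning idea described above: fit a Weibull distribution to daily maximum wind speeds and report the probability of exceeding the 11 m/s damage threshold. The toy data array and the fixed location parameter (floc=0) are assumptions, not the authors' exact procedure.

```python
# Weibull exceedance probability for the 11 m/s damage threshold.
import numpy as np
from scipy import stats

daily_max_wind = np.array([4.2, 6.8, 9.5, 12.1, 7.7, 10.3, 5.9, 13.4])  # toy data, m/s

# Fit a two-parameter Weibull (location fixed at 0)
shape, loc, scale = stats.weibull_min.fit(daily_max_wind, floc=0)

# Exceedance probability P(V > 11 m/s) = 1 - CDF(11)
p_risk = stats.weibull_min.sf(11.0, shape, loc=loc, scale=scale)
print(f"P(daily max wind > 11 m/s) = {p_risk:.3f}")
```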

Development of Deep-Learning-Based Models for Predicting Groundwater Levels in the Middle-Jeju Watershed, Jeju Island (딥러닝 기법을 이용한 제주도 중제주수역 지하수위 예측 모델개발)

  • Park, Jaesung;Jeong, Jiho;Jeong, Jina;Kim, Ki-Hong;Shin, Jaehyeon;Lee, Dongyeop;Jeong, Saebom
    • The Journal of Engineering Geology
    • /
    • v.32 no.4
    • /
    • pp.697-723
    • /
    • 2022
  • Data-driven models to predict groundwater levels 30 days in advance were developed for 12 groundwater monitoring stations in the middle-Jeju watershed, Jeju Island. Stacked long short-term memory (stacked LSTM), a deep learning technique suitable for time series forecasting, was used for model development. Daily time series data from 2001 to 2022 for precipitation, groundwater usage, and groundwater level were considered. Various models were proposed that used different combinations of the input data types and varying lengths of past time series for each input variable. A general procedure for deep-learning-based model development is suggested based on the comparative validation results of the tested models. A model using precipitation, groundwater usage, and previous groundwater level data as input variables outperformed any model neglecting one or more of these data categories. Using longer sequences of past data improved the predictions, possibly owing to the long delay between precipitation and groundwater recharge that results from the deep water table on Jeju Island. However, restricting the groundwater usage inputs to those that significantly affect groundwater level fluctuation (rather than using all the groundwater usage data) improved the performance of the predictive model. The developed models can predict the future groundwater level from the current precipitation and groundwater use, and thereby provide information on the soundness of the aquifer system, which will help in preparing management plans to maintain appropriate groundwater quantities.
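
A minimal stacked-LSTM sketch in the spirit of the models described above: three input variables (precipitation, groundwater usage, past groundwater level) over a past window, predicting the level 30 days ahead. The layer sizes, the 180-day window, and the random toy data are illustrative assumptions, not the paper's configuration.

```python
# Stacked LSTM for 30-day-ahead groundwater level prediction (sketch).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

window, n_features = 180, 3  # 180 days of (precipitation, usage, level)

model = keras.Sequential([
    layers.Input(shape=(window, n_features)),
    layers.LSTM(64, return_sequences=True),  # first stacked layer keeps sequences
    layers.LSTM(32),                         # second layer summarizes the sequence
    layers.Dense(1),                         # groundwater level 30 days ahead
])
model.compile(optimizer="adam", loss="mse")

# Toy shapes only; real training would use the 2001-2022 daily series
X = np.random.rand(256, window, n_features).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```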

Biochemical Assessment of Deer Velvet Antler Extract and its Cytotoxic Effect including Acute Oral Toxicity using an ICR Mice Model (ICR 마우스 모델을 이용한 녹용 추출물의 생화학적 평가 및 급성 경구 독성을 포함한 세포 독성 효과)

  • Ramakrishna Chilakala;Hyeon Jeong Moon;Hwan Lee;Dong-Sung Lee;Sun Hee Cheong
    • Journal of Food Hygiene and Safety
    • /
    • v.38 no.6
    • /
    • pp.430-441
    • /
    • 2023
  • Velvet antler is widely used as a traditional medicine, and numerous studies have demonstrated its substantial nutritional and medicinal value, including immunity-enhancing effects. This study investigated three deer velvet extracts (Sample 1: raw extract, Sample 2: dried extract, and Sample 3: freeze-dried extract) for proximate composition, uronic acid, sulfated glycosaminoglycan, sialic acid, and collagen levels, and for chemical components, using ultra-performance liquid chromatography-quadrupole-time-of-flight mass spectrometry. In addition, we evaluated the cytotoxic effect of the extracts on BV2 microglia, HT22 hippocampal cells, HaCaT keratinocytes, and RAW264.7 macrophages using the MTT cell viability assay. Furthermore, we evaluated acute toxicity of the extracts at different doses (0, 500, 1000, and 2000 mg/kg) administered orally to both male and female ICR mice for 14 days (five mice per group). After treatment, we assessed general toxicity, survival rate, body weight changes, mortality, clinical signs, and necropsy findings in the experimental mice based on OECD guidelines. In vitro, the extracts showed no cytotoxic effect on HaCaT keratinocytes, whereas Sample 2 was cytotoxic at 500 and 1000 µg/mL to HT22 hippocampal cells and RAW264.7 macrophages, and Sample 3 was cytotoxic at 500 and 1000 µg/mL to RAW264.7 macrophages and BV2 microglia. However, mice treated in vivo with the extracts at doses of 500-2000 mg/kg body weight showed no clinical signs, mortality, or necropsy findings, indicating that the LD50 is higher than the highest dose tested. These findings indicate no toxicological abnormalities connected with the deer velvet extract treatment in mice. However, further human and animal studies are needed before sufficient safety information is available to justify its use in humans.
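
For readers unfamiliar with the MTT readout behind the cytotoxicity statements above, this is the standard percent-viability calculation relative to an untreated control; the absorbance values here are made up, not the paper's data.

```python
# Percent cell viability from MTT assay absorbances (illustrative only).
def mtt_viability(od_treated, od_control, od_blank=0.0):
    """Viability (%) relative to untreated control."""
    return 100.0 * (od_treated - od_blank) / (od_control - od_blank)

print(round(mtt_viability(od_treated=0.42, od_control=0.81, od_blank=0.05), 1))  # ~48.7%
```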

A Study for Improvement of Nursing Service Administration (병원 간호행정 개선을 위한 연구)

  • 박정호
    • Journal of Korean Academy of Nursing
    • /
    • v.3 no.1
    • /
    • pp.13-40
    • /
    • 1972
  • Much has been changed in the field of hospital administration in the wake of the rapid development of sciences, techniques and systematic hospital management. However, we still have a long way to go in organization, in the quality of hospital employees and hospital equipment and facilities, and in financial support in order to achieve proper hospital management. The above factors greatly affect the ability of hospitals to fulfill their obligation in patient care and nursing services. The purpose of this study is to determine the optimal methods of standardization and quality nursing so as to improve present nursing services through investigations and analyses of various problems concerning nursing administration. This study has been undertaken during the six-month period from October 1971 to March 1972. The 41 comprehensive hospitals have been selected from amongst the 139 in the whole country. These have been categorized according to the specific purposes of their establishment, such as 7 university hospitals, 18 national or public hospitals, 12 religious hospitals and 4 enterprise ones. The following conclusions have been acquired thus far from information obtained through interviews with nursing directors who are in charge of the nursing administration in each hospital, and further investigations concerning the purposes of establishment, the organization, personnel arrangements, working conditions, practices of service, and budgets of the nursing service department. 1. The nursing administration along with its activities in this country has been uncritically adopted from that of the developed countries. It is necessary for us to re-establish a new medical and nursing system which is adequate for our social environments through continuous study and research. 2. The survey shows that the 7 university hospitals were chiefly concerned with education, medical care and research; the 18 national or public hospitals with medical care, public health and charity work; the 12 religious hospitals with medical care, charity and missionary works; and the 4 enterprise hospitals with public health, medical care and charity works. In general, the main purposes of the hospitals were those of charity organizations in the pursuit of medical care, education and public benefits. 3. The survey shows that in general hospital facilities rate 64 per cent and medical care 60 per cent against a 100 per cent optimum basis in accordance with the medical treatment law and approved criteria for training hospitals. In these respects, university hospitals have achieved the highest standards, followed by religious ones, enterprise ones, and national or public ones in that order. 4. The ages of nursing directors range from 30 to 50. The level of education achieved by most of the directors is that of graduation from a nursing technical high school or a three-year nursing junior college; a very few have graduated from college or have taken graduate courses. 5. As for the career tenure of nurses in the hospitals: one-third of the nurses, or 38 per cent, have worked less than one year; those in the category of one to two years represent 24 per cent. This means that a total of 62 per cent of the career nurses have been practicing their profession for less than two years. Career nurses with over 5 years experience number only 16 per cent; therefore the efficiency of nursing services has been rated very low. 6. As for the standard of education of the nurses: 62 per cent of them have taken a three-year course of nursing in junior colleges, and 22 per cent in nursing technical high schools. College graduate nurses come up to only 15 per cent, and those with graduate courses only 0.4 per cent. This indicates that most of the nurses are from nursing technical high schools and three-year nursing junior colleges. Accordingly, it is advisable that nursing services be divided according to their functions, such as professional nurses, technical nurses and nurse's aides. 7. The survey also shows that the purpose of nursing service administration has been regulated in writing in 74 per cent of the hospitals and not regulated in writing in 26 per cent. The general purposes of nursing are as follows: patient care, assistance in medical care and education. The main purpose of these nursing services is to establish proper operational and personnel management which focus on in-service education. 8. The nursing service departments belong to the medical departments in almost 60 per cent of the hospitals. Even though the nursing service department is formally separated, about 24 per cent of the hospitals regard it as a functional unit in the medical department. Only 5 per cent of the hospitals keep the department as a separate one. To the contrary, approximately 12 per cent of the hospitals have not established a nursing service department at all but subordinate it to another department. In this respect, it is required that a new hospital organization be made to acknowledge the independent function of the nursing department. In 76 per cent of the hospitals they have advisory committees under the nursing department, such as a dormitory self-regulating committee, an in-service education committee and a nursing procedure and policy committee. 9. Personnel arrangement and working conditions of nurses 1) The ratio of nurses to patients is as follows: In university hospitals, 1 to 2.9 for hospitalized patients and 1 to 4.0 for out-patients; in religious hospitals, 1 to 2.3 for hospitalized patients and 1 to 5.4 for out-patients. Grouped together, this indicates that one nurse covers 2.2 hospitalized patients and 4.3 out-patients on a daily basis. The current medical treatment law stipulates that one nurse should care for 2.5 hospitalized patients or 30.0 out-patients. Therefore the statistics indicate that nursing services are being performed with an insufficient number of nurses to cover out-patients. The current law concerns only the minimum number of nurses and disregards the number required for operating rooms, recovery rooms, delivery rooms, new-born baby rooms, central supply rooms and emergency rooms. Accordingly, the medical treatment law has been requested to be amended. 2) The ratio of doctors to nurses: In university hospitals, the ratio is 1 to 1.1; in national or public hospitals, 1 to 0.8; in religious hospitals 1 to 0.5; and in private hospitals 1 to 0.7. The average ratio is 1 to 0.8; generally the ideal ratio is 3 to 1. Since the number of doctors working in hospitals has recently been increasing, the nursing services have consequently been overloaded, sacrificing the services to the patients. 3) The ratio of nurses to clerical staff is 1 to 0.4. However, the ideal ratio is 5 to 1, that is, 1 to 0.2. This means that clerical personnel are over-represented relative to the nursing staff. 4) The ratio of nurses to nurse's aides: the average of 2.5 to 1 indicates that much of the nursing service is delegated to nurse's aides owing to the shortage of registered nurses. This is the main cause of the deterioration in the quality of nursing services. It is a real problem in the quest for better nursing services that certain hospitals employ a disproportionate number of nurse's aides in order to meet financial requirements. 5) As for the working conditions, most of the hospitals employ a three-shift day with 8 hours of duty each. However, certain hospitals still use two shifts a day. 6) As for the working environment, most of the hospitals lack welfare and hygienic facilities. 7) The salary basis is the highest in the private university hospitals, with enterprise hospitals next, and religious hospitals and national or public ones lowest. 8) Employment is made through paper screening, and the appointment of nurses is conditional upon the favorable opinion of the nursing directors. 9) The turnover ratio for one year in 1971 averaged 29 per cent. The most common reason for leaving is marriage, at up to 40 per cent, followed by overseas employment. This high turnover further causes deterioration of efficiency in nursing services and supplementary activities. The hospital authorities concerned should take this matter into deep consideration in order to reduce turnover. 10) The importance of in-service education is well recognized and established. It has been noted that on-the-job nurses' training has been most active, with nursing directors taking charge of the orientation programs of newly employed nurses. However, it is most necessary that a comprehensive study be made of instructors, contents and methods of education, with a separate section for in-service education. 10. Nursing service activities 1) Division of services and job descriptions are urgently required. 81 per cent of the hospitals keep written regulations of services in accordance with nursing service manuals; 19 per cent do not. Most of the hospitals delegate to the nursing directors or certain supervisors the power of stipulating service regulations. In 21 per cent of the hospitals they have policy committees, standardization committees and advisory committees to proceed with the stipulation of regulations. 2) Approximately 81 per cent of the hospitals have service channels in which directors, supervisors, head nurses and staff nurses perform their appropriate services according to the service plans and make up the service reports. In approximately 19 per cent of the hospitals the staff perform their nursing services without utilizing these channels. 3) In the performance of nursing services, a ward manual is considered the most important aid, utilized in about 32 per cent of the hospitals; 25 per cent indicate they use a kardex; 17 per cent use ward rounding; and others take advantage of work sheets or coordination with other departments through conferences. 4) In about 78 per cent of the hospitals they have records which indicate the status of personnel, and in 22 per cent they have not. 5) It has been advised that morale among nurses may be increased, ensuring more efficient services, by their being able to exchange opinions and views with each other. 6) The satisfactory performance of nursing services relies on the following factors to the degree indicated: approximately 32 per cent on systematic nursing activities and services; 27 per cent on the head nurses' ability for nursing diagnosis; 22 per cent on an effective supervisory system; 16 per cent on the hospital facilities and proper supply; and 3 per cent on effective in-service education. This means that nurses, supervisors, head nurses and directors play the most important roles in the performance of nursing services. 11. About 87 per cent of the hospitals do not have separate budgets for their nursing departments, and only 13 per cent have separate budgets. It is recommended that the planning and execution of the nursing administration be delegated to the pertinent administrators in order to bring about improved performance and activities in nursing services.

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.71-88
    • /
    • 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but an n-gram model cannot capture the correlation between input units efficiently since it is a probabilistic model based on the frequency of each unit in the training set. Recently, with the development of deep learning algorithms, recurrent neural network (RNN) models and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependency between objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can generate only the vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish or Korean, morpheme analyzers are more likely to introduce errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that composes Korean texts. We construct the language model using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm and more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was done with Old Testament texts using the deep learning package Keras with a Theano backend. After pre-processing, the dataset included 74 unique characters including vowels, consonants, and punctuation marks. We then constructed an input vector of 20 consecutive characters and the following 21st character as output. In total, 1,023,411 input-output pairs were included in the dataset, divided into training, validation, and test sets in a 70:15:15 proportion. All the simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss function evaluated on the validation set, the perplexity evaluated on the test set, and the time taken to train each model. All the optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm, which also took the longest to train for both the 3- and 4-layer LSTM models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model, yet its validation loss and perplexity were not significantly improved, and became even worse under specific conditions. On the other hand, when comparing the automatically generated sentences, the 4-LSTM-layer model tended to generate sentences closer to natural language than the 3-LSTM-layer model. Although there were slight differences in the completeness of the generated sentences between the models, the sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost perfect grammatically. The results of this study are expected to be widely used in Korean language processing and speech recognition, which are the basis of artificial intelligence systems.
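
A sketch (under stated assumptions, not the authors' code) of the architecture described above: 20 one-hot phonemes in, a softmax over the 74-symbol vocabulary for the 21st, with three stacked LSTM layers. The hidden size of 512 is an assumption; the paper does not state it here.

```python
# Phoneme-level LSTM language model: 20 symbols in, next symbol out.
from tensorflow import keras
from tensorflow.keras import layers

seq_len, vocab = 20, 74

model = keras.Sequential([
    layers.Input(shape=(seq_len, vocab)),        # one-hot encoded phoneme window
    layers.LSTM(512, return_sequences=True),
    layers.LSTM(512, return_sequences=True),
    layers.LSTM(512),
    layers.Dense(vocab, activation="softmax"),   # distribution over the next phoneme
])
# The paper compares plain SGD against Adagrad, RMSprop, Adadelta, Adam, etc.
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()
```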

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult to realize flexible storage expansion for a massive amount of unstructured log data and to execute the considerable number of functions needed to categorize and analyze the stored data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the existing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, such as storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that allow it to continue operating after recovering from a malfunction. Finally, by establishing a distributed database using NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data, and their strict schemas prevent expanding nodes to distribute stored data when the amount of data increases rapidly. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when data grows rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, the representative document-oriented database, MongoDB, which has a schema-free structure, is used in the proposed system. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data increases rapidly, and it provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to the type of log data and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation of log data insertion and query performance is carried out against a log data processing system that uses only MySQL, demonstrating the proposed system's superiority. Moreover, an optimal chunk size is confirmed through the log data insert performance evaluation of MongoDB for various chunk sizes.
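
A hedged sketch of the MongoDB side of such a pipeline: schema-free insertion of heterogeneous log documents and a per-hour aggregation of the kind a log graph generator might plot. The field names, database layout, and connection string are assumptions, not the paper's configuration.

```python
# Schema-free log insertion and hourly aggregation with PyMongo (sketch).
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # hypothetical endpoint
logs = client["bank_logs"]["raw"]

# Unstructured logs: documents need not share a schema
logs.insert_one({
    "ts": datetime.now(timezone.utc),
    "type": "transaction",
    "branch": "A-103",
    "elapsed_ms": 42,
})

# Aggregate log counts per hour and type, e.g. for graphing
pipeline = [
    {"$group": {
        "_id": {"hour": {"$hour": "$ts"}, "type": "$type"},
        "count": {"$sum": 1},
    }},
    {"$sort": {"_id.hour": 1}},
]
for row in logs.aggregate(pipeline):
    print(row)
```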

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination (OD) surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network, providing the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement; consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS cannot project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the OD survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed and zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are compared with comparable TLFs from the Gravity Model (GM). The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" and "micro-scale" calibrations are performed. The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the micro-scale calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These partial GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. For this research, however, the information available for the development of the GM model is limited to ground counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that selected link. Selected link based analyses are conducted using both 16 and 32 selected links. The SELINK analysis using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. More importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of the %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the least value while that for the STH shows the largest. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results, with no specific patterns in the LV/GC ratio by area. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class and travel speed, are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model under four scenarios. For better forecasting, ground count-based segment adjustment factors are developed and applied, with ISH 90 & 94 and USH 41 used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
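
An illustrative sketch (toy numbers, not WisDOT data) of two computations the study describes: a singly-constrained gravity-model trip table built from productions, attractions, and friction factors, and a SELINK link adjustment factor computed as ground count over assigned volume.

```python
# Gravity-model trip distribution and a SELINK adjustment factor (sketch).
import numpy as np

P = np.array([400.0, 250.0, 150.0])          # zonal truck trip productions
A = np.array([300.0, 300.0, 200.0])          # zonal attractions
F = np.array([[1.0, 0.6, 0.3],               # friction factors by zone pair,
              [0.6, 1.0, 0.5],               # standing in for the calibrated curves
              [0.3, 0.5, 1.0]])

# Singly-constrained gravity model: T_ij = P_i * A_j * F_ij / sum_k(A_k * F_ik)
denom = (F * A).sum(axis=1, keepdims=True)
T = P[:, None] * (F * A) / denom
print(T.round(1))

# SELINK: scale the O-D zones of a selected link's users by ground_count / assigned_volume
ground_count, assigned_volume = 1150.0, 1320.0
adj = ground_count / assigned_volume
print(f"link adjustment factor = {adj:.3f}")
```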

The Effect of Common Features on Consumer Preference for a No-Choice Option: The Moderating Role of Regulatory Focus (재몰유선택적정황하공동특성대우고객희호적영향(在没有选择的情况下共同特性对于顾客喜好的影响): 조절초점적조절작용(调节焦点的调节作用))

  • Park, Jong-Chul;Kim, Kyung-Jin
    • Journal of Global Scholars of Marketing Science
    • /
    • v.20 no.1
    • /
    • pp.89-97
    • /
    • 2010
  • This study researches the effects of common features on a no-choice option with respect to regulatory focus theory. The primary interest is in three factors and their interrelationship: common features, the no-choice option, and regulatory focus. Prior studies have compiled a vast body of research in these areas. First, the "common features effect" has been observed by many noted marketing researchers. Tversky (1972) proposed the seminal theory, the EBA model: elimination by aspects. According to this theory, consumers are prone to focus only on unique features during comparison processing, thereby dismissing any common features as redundant information. Recently, however, more provocative ideas have attacked the EBA model by asserting that common features really do affect consumer judgment. Chernev (1997) first reported that adding common features mitigates the choice gap because of the increasing perception of similarity among alternatives. Later, however, Chernev (2001) published a critical follow-up against his earlier perspective, proposing that common features may be a cognitive load to consumers, who may therefore be prone to prefer heuristic processing over systematic processing. This brings one question to the forefront: Do common features affect consumer choice? If so, what are the concrete effects? This study tries to answer the question with respect to the no-choice option and regulatory focus. Second, some researchers hold that the no-choice option is another best alternative for consumers, who are likely to avoid having to choose in the context of knotty trade-off settings or mental conflicts. Hope for the future may also increase the no-choice option, in the context of optimism or the expectation that a more satisfactory alternative will appear later. Other issues reported in this domain are time pressure, consumer confidence, and the number of alternatives (Dhar and Nowlis 1999; Lin and Wu 2005; Zakay and Tsal 1993). This study casts the no-choice option in yet another perspective: the interactive effects between common features and regulatory focus. Third, regulatory focus theory is a very popular theme in recent marketing research. It suggests that consumers have two opposing focal goals: promotion vs. prevention. A promotion focus deals with the concepts of hope, inspiration, achievement, or gain, whereas a prevention focus involves duty, responsibility, safety, or loss-aversion. Thus, while consumers with a promotion focus tend to take risks for gain, the same does not hold true for a prevention focus. Regulatory focus theory predicts consumers' emotions, creativity, attitudes, memory, performance, and judgment, as documented in a vast field of marketing and psychology articles. Exploring consumer choice and common features from this perspective is a somewhat creative viewpoint in the area of regulatory focus. These reviews inspire this study of the possible interaction between regulatory focus and common features with a no-choice option. Specifically, adding common features rather than omitting them may increase the no-choice ratio for prevention-focused consumers, but the reverse for promotion-focused consumers. The reasoning is that when prevention-focused consumers encounter common features, they may perceive higher similarity among the alternatives, and this conflict among similar options would increase the no-choice ratio. Promotion-focused consumers, however, may perceive common features as a cue for confirmation bias; their confirmatory processing would make their prior preference more robust, so the no-choice ratio may shrink. This logic is verified in two experiments. The first is a 2×2 between-subjects design (common features present or absent × regulatory focus) using digital cameras as the stimulus, a product very familiar to young subjects. The regulatory focus variable was median-split from an eleven-item measure. Common features included zoom, weight, memory, and battery, whereas the other two attributes (pixels and price) were unique features. Results supported our hypothesis that adding common features enhanced the no-choice ratio only for prevention-focused consumers, not for those with a promotion focus. These results confirm our hypothesis of interactive effects between regulatory focus and common features. Prior research had suggested that including common features affects consumer choice, but this study shows that common features affect choice differently by consumer segment. The second experiment was used to replicate the results of the first. It was identical to the first except for two elements: a priming manipulation and a different stimulus. For the promotion focus condition, subjects had to write an essay using words such as profit, inspiration, pleasure, achievement, development, hedonic, change, pursuit, etc.; for prevention, they had to use the words persistence, safety, protection, aversion, loss, responsibility, stability, etc. The room for rent had common features (sunshine, facilities, ventilation) and unique features (travel time and building condition). These attributes implied various levels and valences for replication of the first experiment. Our hypothesis was supported repeatedly in the results, and the interaction effects between regulatory focus and common features were significant. Thus, these studies showed the dual effects of common features on consumer choice with a no-choice option: adding common features may enhance or mitigate no-choice, contradictory as it may sound. Under a prevention focus, adding common features is likely to enhance the no-choice ratio because of increasing mental conflict; under a promotion focus, it is prone to shrink the ratio, perhaps because of confirmation bias. The research has practical and theoretical implications for marketers, who may need to consider common features carefully in display contexts according to consumer segmentation (i.e., promotion vs. prevention focus). Theoretically, the results suggest a meaningful moderator variable between common features and no-choice, in that the effect on the no-choice option is partly dependent on regulatory focus. This variable corresponds not only to a chronic perspective but also to a situational perspective in our hypothesis domain. Finally, in light of some shortcomings of the research, such as overlooked attribute importance, the low ratio of no-choice, or external validity issues, we hope it influences future studies to explore the little-known world of the no-choice option.
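
A minimal sketch of one way the reported interaction (regulatory focus × common features) on the binary no-choice outcome could be tested; the data here are simulated under the hypothesized pattern, not the paper's, and the paper does not specify this exact model.

```python
# Logistic regression with an interaction term on a simulated 2x2 design.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
focus = rng.integers(0, 2, n)        # 0 = promotion, 1 = prevention
common = rng.integers(0, 2, n)       # 0 = common features omitted, 1 = added
# Simulate: common features raise no-choice only under a prevention focus
logit = -1.0 + 1.2 * focus * common
no_choice = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(np.column_stack([focus, common, focus * common]))
model = sm.Logit(no_choice.astype(float), X).fit(disp=0)
print(model.summary(xname=["const", "focus", "common", "focus_x_common"]))
```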

Preliminary Report of the 1998~1999 Patterns of Care Study of Radiation Therapy for Esophageal Cancer in Korea (식도암 방사선 치료에 대한 Patterns of Care Study (1998~1999)의 예비적 결과 분석)

  • Hur, Won-Joo;Choi, Young-Min;Lee, Hyung-Sik;Kim, Jeung-Kee;Kim, Il-Han;Lee, Ho-Jun;Lee, Kyu-Chan;Kim, Jung-Soo;Chun, Mi-Son;Kim, Jin-Hee;Ahn, Yong-Chan;Kim, Sang-Gi;Kim, Bo-Kyung
    • Radiation Oncology Journal
    • /
    • v.25 no.2
    • /
    • pp.79-92
    • /
    • 2007
  • Purpose: For the first time, a nationwide survey was conducted in the Republic of Korea to determine the basic parameters of esophageal cancer treatment and to offer a solid cooperative system for the Korean Patterns of Care Study database. Materials and Methods: During 1998~1999, 246 biopsy-confirmed esophageal cancer patients who received radiotherapy were enrolled from 23 institutions in South Korea. Random sampling was based on the power allocation method. Patient parameters and specific information regarding tumor characteristics and treatment methods were collected and registered through the web-based PCS system. The data were analyzed using the Chi-squared test. Results: The median age of the enrolled patients was 62 years. The male to female ratio was about 91 to 9, an absolute male predominance. The performance status ranged from ECOG 0 to 1 in 82.5% of the patients. Diagnostic procedures included an esophagogram (228 patients, 92.7%), endoscopy (226 patients, 91.9%), and a chest CT scan (238 patients, 96.7%). Squamous cell carcinoma was diagnosed in 96.3% of the patients; mid-thoracic esophageal cancer was most prevalent (110 patients, 44.7%) and 135 patients presented with clinical stage III disease. Fifty-seven patients received radiotherapy alone and 37 patients received surgery with adjuvant postoperative radiotherapy. Half of the patients (123 patients) received chemotherapy together with RT, and 70 of them (56.9%) received it as concurrent chemoradiotherapy. The most frequently used chemotherapeutic regimen was a combination of cisplatin and 5-FU. Most patients received radiotherapy with either 6 MV (116 patients, 47.2%) or 10 MV photons (87 patients, 35.4%). Radiotherapy was delivered through a conventional AP-PA field for 206 patients (83.7%) without a CT-based plan, and the median delivered dose was 3,600 cGy. The median total dose of postoperative radiotherapy was 5,040 cGy, while for the non-operative patients the median total dose was 5,970 cGy. Thirty-four patients received intraluminal brachytherapy with high-dose-rate Iridium-192. Brachytherapy was delivered with a median dose of 300 cGy per fraction, typically in 3~4 fractions. The most frequent complication during radiotherapy was esophagitis, in 155 patients (63.0%). Conclusion: This study will provide guidelines and benchmark data for the evaluation and treatment of esophageal cancer patients at radiation facilities in Korea within the solid cooperative system of the Korean PCS. Although some differences were noted between institutions, there was no major difference in treatment modalities and RT techniques.