• Title/Summary/Keyword: distribution data


Determination of Cost and Measurement of Nursing Care Hours for Hospice Patients Hospitalized in One University Hospital (일 대학병원 호스피스 병동 입원 환자의 간호활동시간 측정과 원가산정)

  • Kim, Kyeong-Uoon
    • Journal of Korean Academy of Nursing Administration / v.6 no.3 / pp.389-404 / 2000
  • This study was designed to determine the cost and measure the nursing care hours for hospice patients hospitalized in one university hospital. 314 inpatients in the hospice unit and 11 nursing staff were enrolled. The study took place in C University Hospital from November 8 to 28, 1999. The researcher and an investigator conducted a pilot study to select compatible hospice patient classification indicators. After the patient classification indicators and nursing care details for the general ward were modified, content validity was approved by specialists. Using the hospice patient classification indicators and a continuous observation method at 5-minute intervals, the researcher and investigator recorded direct nursing care hours, indirect nursing care hours, and personnel time on a hospice nursing care activities sheet. All of the patients were classified into Class I (mildly ill), Class II (moderately ill), Class III (acutely ill), and Class IV (critically ill) by a patient classification system (PCS) which had been carefully developed to be suitable for the Korean hospice ward. The elements of the nursing care cost were then investigated. Based on the data from an accounting section (Riccolo, 1988), nursing care hours per patient per day in each class and nursing care cost per patient per hour were multiplied, and the mean nursing care cost per patient per day in each class was calculated. Using SAS, percentages, means, and standard deviations were calculated for the number of patients in each class and for the nursing activities per duty. Direct nursing care hours per patient per day for each class were analyzed by ANOVA and the Scheffé test. The results of this study were summarized as follows: 1. Distribution of patient class: class IV (33.5%) was the largest; the rest were class II (26.1%), class III (22.6%), and class I (17.8%).
Nursing care requirements of the inpatients in the hospice ward were greater than those of the inpatients in the general ward. 2. Direct nursing care activities: measurement·observation 41.7%, medication 16.6%, exercise·safety 12.5%, education·communication 7.2%, etc. The mean hours of direct nursing care per patient per day by duty were 69.3 min for day duty, 64.7 min for evening duty, 88.2 min for night duty, and 38.7 min for shift duty. The mean hours of direct nursing care on night duty were longer than those of the other duties. Direct nursing care hours per patient per day in each class were 3.1 hrs for class I, 3.9 hrs for class II, 4.7 hrs for class III, and 5.2 hrs for class IV. The mean hours of direct nursing care per patient per day without the PCS were 4.1 hours. The mean hours of direct nursing care per patient per day by class increased significantly with increasing nursing care requirements of the inpatients (F=49.04, p=.0001), and each class was significantly different (p<0.05). The mean hours of several direct nursing care activities in each class increased with increasing nursing care requirements of the inpatients (p<0.05): class III and class IV for medication and education·communication; class I, class III, and class IV for measurement·observation; class I, class II, and class IV for elimination·irrigation; all classes for exercise·safety. 3. Indirect nursing care activities and personnel time: recognition 24.2%, housekeeping activity 22.7%, charting 17.2%, personnel time 11.8%, etc. The mean hours of indirect nursing care and personnel time per nursing staff member were 4.7 hrs. The mean hours of indirect nursing care and personnel time per duty were 294.8 min for day duty, 212.3 min for evening duty, 387.9 min for night duty, and 143.3 min for shift duty.
The mean indirect nursing care hours and personnel time on night duty were longer than those of the other duties. 4. The mean hours of indirect nursing care and personnel time per patient per day were 2.5 hrs. 5. The mean hours of nursing care per patient per day in each class were 5.6 hrs for class I, 6.4 hrs for class II, 7.2 hrs for class III, and 7.7 hrs for class IV. 6. The elements of the nursing care cost comprised 2,212 won for direct nursing care cost, 267 won for direct material cost, and 307 won for indirect cost; the sum of the elements was 2,786 won. 7. The mean cost of nursing care per patient per day in each class was 15,601.6 won for class I, 17,830.4 won for class II, 20,259.2 won for class III, and 21,452.2 won for class IV. As shown above, using the modified hospice patient classification indicators and nursing care activity details, many critically ill patients were hospitalized in the hospice unit, reflecting that the greater the nursing care requirements of the patients, the more direct nursing care hours were needed. The major nursing care activities, emotional·spiritual care, pain·symptom control, terminal care, education·communication, narcotics management and delivery, and attending funeral ceremonies, were also independent hospice services, but they are not compensated by the present medical insurance system. Exercise·safety and elimination·irrigation needed nursing care hours comparable to those of intensive care units. The present nursing management fee in the medical insurance system compensates only a part of the nursing care services in the hospice unit, reimbursing at a lower cost than that of the nursing care actually provided.
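
The cost arithmetic in items 5-7 is simply the hours per class multiplied by the hourly cost. A minimal Python sketch of that calculation, using only the figures reported above (note that class III computes to 20,059.2 won rather than the reported 20,259.2 won, which suggests a typo in one of the two reported figures):

```python
# Reconstructing the per-day nursing cost per class as
# (mean nursing care hours per class) x (cost per hour),
# from the figures reported in the abstract.
HOURLY_COST_WON = 2212 + 267 + 307  # direct + direct material + indirect = 2,786 won

hours_per_class = {"I": 5.6, "II": 6.4, "III": 7.2, "IV": 7.7}
cost_per_class = {c: round(h * HOURLY_COST_WON, 1) for c, h in hours_per_class.items()}
# e.g. class I: 5.6 h x 2,786 won/h = 15,601.6 won, matching the reported value
```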


Comparison of Health Status and Activities for the Pain and No-pain Groups in the Elderly (노인의 만성동통 유무에 따른 건강상태 및 일상활동장애 비교)

  • Kim, Hyo-Jung;Kim, Myung-Ae;Park, Kyung-Min
    • Journal of agricultural medicine and community health / v.24 no.1 / pp.79-89 / 1999
  • The purpose of this study is to compare health status and activities between the pain and no-pain groups in the elderly. The study subjects included 189 elderly people (65 years and older) living in an urban area. They were surveyed at their homes through interviews using a closed-ended questionnaire from November 6 to 16, 1997. The instrument used in the study was selected after carefully reviewing pain-related articles and records that well described the characteristics of the elderly. The data were analyzed using descriptive statistics and chi-square tests. The findings were as follows: Of the 189 subjects, 83.6% reported experiencing pain during the last year. By age, there were significant differences between the pain and no-pain groups (χ²=9.572, p=.023). The percentage of pain complainers was highest in those 80 years and older (100.0%), followed by 70~74 (89.1%), 75~79 (81.3%), and 65~69 (76.8%), showing a crude increase with age. By sex, men had a lower pain prevalence (69.5%) than women (90.0%); the number of pain complainers was higher in women than men (χ²=12.448, p=.023). There were significant differences between the pain and no-pain groups by spouse distribution (χ²=10.736, p=.001), educational state (χ²=13.020, p=.000), and occupation (χ²=18.807, p=.000). Pain prevalence in subjects having no spouse (59.3%) was higher than in those having a spouse (40.7%). The illiteracy rate was higher in the pain group (49.0%) than the no-pain group (13.3%). The number of subjects having an occupation (full time or part time) was lower in the pain group than the no-pain group. By health status, there were significant differences between the two groups (χ²=40.055, p=.000): the pain group reported poor (61.4%), followed by moderate (22.1%) and good (16.5%), while the no-pain group reported good (64.5%), moderate (29.0%), and poor (6.5%). By activities, there were significant differences between the pain and no-pain groups.
The pain group was disturbed more severely than the no-pain group in movement (χ²=57.829, p=.000), sleep (χ²=12.785, p=.000), usual activities (χ²=39.196, p=.000), receiving guests (χ²=13.163, p=.000), and hobbies and recreation (χ²=28.177, p=.000).
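
The group comparisons above rest on chi-square tests of contingency tables. A sketch of that procedure with SciPy; the 2×2 table below is invented for illustration and is not the study's raw data:

```python
# Chi-square test of independence on a hypothetical pain/no-pain
# contingency table (counts are made up, not the study's data).
import numpy as np
from scipy.stats import chi2_contingency

# rows: pain / no-pain; columns: e.g. men / women
table = np.array([[57, 101],
                  [25, 6]])
chi2, p, dof, expected = chi2_contingency(table)
# chi2: test statistic, p: significance, dof: degrees of freedom,
# expected: cell counts expected under independence
```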


The Continuous Monitoring of Oxygen Saturation During Fiberoptic Bronchoscopy (기관지내시경 검사시 지속적인 동맥혈 산소포화도 감시의 필요성)

  • Kang, Hyun Jae;Kim, Yeon Jae;Chyun, Jae Hyun;Do, Yun Kyung;Lee, Byung Ki;Kim, Won Ho;Park, Jae Yong;Jung, Tae Hoon
    • Tuberculosis and Respiratory Diseases / v.52 no.4 / pp.385-394 / 2002
  • Background: Flexible fiberoptic bronchoscopy (FFB) has become a widely performed technique for diagnosing and managing pulmonary disease because of its low complication and mortality rates. Since the use of FFB in patients with severely depressed cardiorespiratory function is increasing, and hypoxemia during FFB can induce significant cardiac arrhythmias, the early detection and adequate management of hypoxemia during FFB is clinically important. Method: To evaluate the necessity of continuous monitoring of oxygen saturation (SaO₂) during FFB, the SaO₂ was continuously monitored from the fingertip using pulse oximetry before, during, and after FFB in 379 patients. The patients were then divided into two groups, those with and without hypoxemia (SaO₂<90%). The baseline pulmonary function data and the clinical characteristics of the two groups were compared. Results: The mean baseline SaO₂ was 96.9±2.85%. An SaO₂<90% was recorded at some point in 62 (16.4%) of the 379 patients, with 12 of the 62 experiencing this prior to FFB, 37 during FFB, and 13 after FFB. No differences were observed in the smoking and sex distribution between those with and without hypoxemia. The mean age was older in those with hypoxemia than in those without. Significant differences were observed in the mean baseline SaO₂ and the mean procedure time between the two groups. The FEV₁ was significantly lower in those with hypoxemia, and both the FVC and FEV₁/FVC also tended to decrease in this group. Management of hypoxemia included deep breathing in 20 patients, supplemental oxygen in 39 patients, and aborting the procedure in 3 patients.
Conclusion: These results suggest that continuous monitoring of oxygen saturation is necessary during fiberoptic bronchoscopy, and it should be performed in patients with depressed pulmonary function for the early detection and adequate management of hypoxemia.

Quantitative Assessment Technology of Small Animal Myocardial Infarction PET Image Using Gaussian Mixture Model (다중가우시안혼합모델을 이용한 소동물 심근경색 PET 영상의 정량적 평가 기술)

  • Woo, Sang-Keun;Lee, Yong-Jin;Lee, Won-Ho;Kim, Min-Hwan;Park, Ji-Ae;Kim, Jin-Su;Kim, Jong-Guk;Kang, Joo-Hyun;Ji, Young-Hoon;Choi, Chang-Woon;Lim, Sang-Moo;Kim, Kyeong-Min
    • Progress in Medical Physics / v.22 no.1 / pp.42-51 / 2011
  • Nuclear medicine images (SPECT, PET) are widely used tools for assessment of myocardial viability and perfusion. However, it is difficult to define the accurate myocardial infarct region with them. The purpose of this study was to investigate a methodological approach for automatic measurement of rat myocardial infarct size using a polar map with an adaptive threshold. A rat myocardial infarction model was induced by ligation of the left circumflex artery. PET images were obtained after intravenous injection of 37 MBq ¹⁸F-FDG. After 60 min of uptake, each animal was scanned for 20 min with ECG gating. PET data were reconstructed using ordered subset expectation maximization (OSEM) 2D. To automatically delineate the myocardial contour and generate the polar map, we used QGS software (Cedars-Sinai Medical Center). The reference infarct size was defined as the infarction area percentage of the total left myocardium using TTC staining. We used three threshold methods (predefined threshold, Otsu, and the multi-Gaussian mixture model, MGMM). The predefined threshold method is commonly used in other studies; we applied threshold values from 10% to 90% in steps of 10%. The Otsu algorithm calculates the threshold that maximizes the between-class variance. The MGMM method estimates the distribution of image intensity using multiple Gaussian mixture models (MGMM2, …, MGMM5) and calculates an adaptive threshold. The infarct size in the polar map was calculated as the percentage of the area below the threshold relative to the total polar map area. The infarct size measured using the different threshold methods was evaluated by comparison with the reference infarct size. The mean differences between the polar map defect size by predefined thresholds (20%, 30%, and 40%) and the reference infarct size were 7.04±3.44%, 3.87±2.09%, and 2.15±2.07%, respectively. Otsu versus the reference infarct size was 3.56±4.16%; MGMM versus the reference infarct size was 2.29±1.94%.
The predefined threshold (30%) showed the smallest mean difference from the reference infarct size. However, MGMM was more accurate than the predefined threshold in cases with a reference infarct size under 10% (MGMM: 0.006%, predefined threshold: 0.59%). In this study, we evaluated myocardial infarct size in the polar map using a multiple Gaussian mixture model. The MGMM method provides an adaptive threshold for each subject and will be useful for automatic measurement of infarct size.
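
The MGMM idea described above can be sketched as fitting a Gaussian mixture to the intensity histogram and cutting between the component means. A toy two-component version on simulated polar-map intensities (the cut point used here is one simple choice; the paper's MGMM2-MGMM5 variants are not reproduced):

```python
# Adaptive thresholding via a two-component Gaussian mixture on
# simulated polar-map intensities (a dim "defect" mode and a bright
# "viable myocardium" mode). All data here are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
intensities = np.concatenate([rng.normal(0.25, 0.05, 200),   # defect region
                              rng.normal(0.75, 0.08, 800)])  # viable myocardium

gmm = GaussianMixture(n_components=2, random_state=0).fit(intensities.reshape(-1, 1))
means = np.sort(gmm.means_.ravel())
threshold = means.mean()  # simple adaptive cut between the two modes

# defect size as the fraction of the map below the adaptive threshold
infarct_fraction = (intensities < threshold).mean()
```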

How Enduring Product Involvement and Perceived Risk Affect Consumers' Online Merchant Selection Process: The 'Required Trust Level' Perspective (지속적 관여도 및 인지된 위험이 소비자의 온라인 상인선택 프로세스에 미치는 영향에 관한 연구: 요구신뢰 수준 개념을 중심으로)

  • Hong, Il-Yoo B.;Lee, Jung-Min;Cho, Hwi-Hyung
    • Asia pacific journal of information systems / v.22 no.1 / pp.29-52 / 2012
  • Consumers differ in the way they make a purchase. An audio enthusiast would willingly make a bold, yet serious, decision to buy a top-of-the-line home theater system, while he is not interested in replacing his two-decade-old shabby car. On the contrary, an automobile enthusiast wouldn't mind spending forty thousand dollars to buy a new Jaguar convertible, yet cares little about his junky component system. It is product involvement that helps us explain such differences among individuals in purchase style. Product involvement refers to the extent to which a product is perceived to be important to a consumer (Zaichkowsky, 2001). Product involvement is an important factor that strongly influences a consumer's purchase decision-making process, and thus has been of prime interest to consumer behavior researchers. Furthermore, researchers found that involvement is closely related to perceived risk (Dholakia, 2001). While abundant research exists addressing how product involvement relates to overall perceived risk, little attention has been paid to the relationship between involvement and different types of perceived risk in an electronic commerce setting. Given that perceived risk can be a substantial barrier to online purchase (Jarvenpaa, 2000), research addressing this issue will offer useful implications on what specific types of perceived risk an online firm should focus on mitigating if it is to increase sales to the fullest potential. Meanwhile, past research has focused on such consumer responses as information search and dissemination as a consequence of involvement, neglecting other behavioral responses like online merchant selection. For example, will a consumer seriously considering the purchase of a pricey Gucci bag perceive a great degree of risk associated with online buying and therefore choose to buy it from a digital storefront rather than from an online marketplace to mitigate risk?
Will a consumer require greater trust on the part of the online merchant when the perceived risk of online buying is rather high? We intend to find answers to these research questions through an empirical study. This paper explores the impact of enduring product involvement and perceived risks on the required trust level, and further on online merchant choice. For the purpose of the research, five types or components of perceived risk are taken into consideration: financial, performance, delivery, psychological, and social risks. A research model has been built around the constructs under consideration, and 12 hypotheses have been developed based on the research model to examine the relationships between enduring involvement and the five components of perceived risk, between the five components of perceived risk and the required trust level, between enduring involvement and the required trust level, and finally between the required trust level and preference toward an e-tailer. To attain our research objectives, we conducted an empirical analysis consisting of two phases of data collection: a pilot test and a main survey. The pilot test was conducted using 25 college students to ensure that the questionnaire items were clear and straightforward. The main survey was then conducted using 295 college students at a major university over nine days between December 13, 2010 and December 21, 2010. The measures employed to test the model included eight constructs: (1) enduring involvement, (2) financial risk, (3) performance risk, (4) delivery risk, (5) psychological risk, (6) social risk, (7) required trust level, and (8) preference toward an e-tailer. The statistical package SPSS 17.0 was used to test the internal consistency among the items within the individual measures. Based on the Cronbach's α coefficients of the individual measures, the reliability of all the variables is supported.
Meanwhile, the Amos 18.0 package was employed to perform a confirmatory factor analysis designed to assess the unidimensionality of the measures. The goodness of fit for the measurement model was satisfactory. Unidimensionality was tested using convergent, discriminant, and nomological validity, and the statistical evidence showed that all three types of validity were satisfied. The structural equation modeling technique was then used to analyze the individual paths among the research constructs. The results indicated that enduring involvement has significant positive relationships with all five components of perceived risk, while only performance risk is significantly related to the trust level required by consumers for purchase. It can be inferred from the findings that product performance problems are most likely to occur when a merchant behaves in an opportunistic manner. Positive relationships were also found between involvement and the required trust level and between the required trust level and online merchant choice. Enduring involvement is concerned with the pleasure a consumer derives from a product class and/or with the desire for knowledge of the product class, and thus is likely to motivate the consumer to look for ways of mitigating perceived risk by requiring a higher level of trust on the part of the online merchant. Likewise, a consumer requiring a high level of trust in the merchant will choose a digital storefront rather than an e-marketplace, since a digital storefront is believed to be more trustworthy than an e-marketplace, as it fulfills orders by itself rather than acting as an intermediary. The findings of the present research provide both academic and practical implications. The first academic implication is that enduring product involvement is a strong motivator of consumer responses, especially the selection of a merchant, in the context of electronic shopping.
Secondly, academicians are advised to pay attention to the finding that an individual component or type of perceived risk can be used as an important research construct, since it allows one to pinpoint the specific types of risk that are influenced by antecedents or that influence consequents. Meanwhile, our research provides implications useful for online merchants (both digital storefronts and e-marketplaces). Merchants may develop strategies to attract consumers by managing the perceived performance risk involved in purchase decisions, since it was found to have a significant positive relationship with the level of trust required of the merchant by a consumer. One way to manage performance risk would be to thoroughly examine the product before shipping to ensure that it has no deficiencies or flaws. Secondly, digital storefronts are advised to focus on symbolic goods (e.g., cars, cell phones, fashion outfits, and handbags) in which consumers are relatively more involved, whereas e-marketplaces should put their emphasis on non-symbolic goods (e.g., drinks, books, MP3 players, and bike accessories).
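
The internal-consistency check mentioned above (Cronbach's α, computed in SPSS in the study) can be reproduced directly from an item-score matrix. A from-scratch sketch on simulated responses, not the survey data:

```python
# Cronbach's alpha computed from scratch on simulated scale items.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the scale total
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(300, 1))                      # shared construct
items = latent + rng.normal(scale=0.5, size=(300, 4))   # 4 correlated items
alpha = cronbach_alpha(items)
```

With items this strongly driven by one construct, α comes out well above the conventional 0.7 cutoff.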


Study on Health Behavior of Hypertensive Patients and Compliance for Treatment of Antihypertensive Medication (고혈압 환자들의 순응도와 건강행태의 관계)

  • Kim, Joo-Yeon;Lee, Dong-Bae;Cho, Young-Chae;Lee, Sok-Goo;Chang, Seong-Sil;Kwon, Yun-Hyung;Lee, Tae-Yong
    • Journal of agricultural medicine and community health / v.25 no.1 / pp.29-49 / 2000
  • Objectives: To estimate the prevalence rate of hypertension, the changes in health behavior, and compliance with drug treatment after a diagnosis of hypertension. Methods: 7,030 persons living in Cheonan City of Chungnam Province were selected by the cluster sampling method, and 5,372 persons were surveyed by questionnaire and health examination. The data were analyzed by chi-square tests on each variable. Results: 49.8% of men and 38.8% of women had been diagnosed with hypertension, and the prevalence rate of hypertension increased significantly with age in both genders. The prevalence rate tended to decrease in the highly educated women's group. Unemployed or obese persons showed relatively higher prevalence rates. The prevalence rate of hypertension increased in groups with total cholesterol levels over 240 mg/dl and in groups with glucose levels over 200 mg/dl. 53.1% of male patients and 66.6% of female patients showed compliance with antihypertensive treatment. Compliance with treatment was higher in the aged group and the lower-educated group in both genders. Among men, the proportion of compliant subjects was higher in the unemployed group (49.3%) and lower in labor or primary industry than the others, but among women there was no significant difference. Men with compliance for treatment had higher monthly income than the others, but women did not show any such difference. Conclusion: This population had a high prevalence rate of hypertension, which may lead to cardiovascular disease. Therefore, health education programs and distribution of information must be emphasized in order to increase compliance with treatment and encourage changes in health behavior to promote health.


Performance Evaluation of Radiochromic Films and Dosimetry CheckTM for Patient-specific QA in Helical Tomotherapy (나선형 토모테라피 방사선치료의 환자별 품질관리를 위한 라디오크로믹 필름 및 Dosimetry CheckTM의 성능평가)

  • Park, Su Yeon;Chae, Moon Ki;Lim, Jun Teak;Kwon, Dong Yeol;Kim, Hak Joon;Chung, Eun Ah;Kim, Jong Sik
    • The Journal of Korean Society for Radiation Therapy / v.32 / pp.93-109 / 2020
  • Purpose: The radiochromic film (Gafchromic EBT3, Ashland Advanced Materials, USA) and the 3-dimensional analysis system Dosimetry Check™ (DC, MathResolutions, USA) were evaluated for patient-specific quality assurance (QA) of helical tomotherapy. Materials and Methods: Depending on the tumor positions, three types of targets, the abdominal tumor (130.6 ㎤), the retroperitoneal tumor (849.0 ㎤), and the whole abdominal metastasis tumor (3131.0 ㎤), were applied to the humanoid phantom (Anderson Rando Phantom, USA). We established a total of 12 comparative treatment plans using four geometric beam conditions: field widths (FW) of 2.5 cm and 5.0 cm, and pitches of 0.287 and 0.43. Ionization measurements (1D) and EBT3 measurements with the inserted cheese phantom (2D) were compared to DC measurements of the 3D dose reconstruction on CT images from beam fluence log information. For the clinical feasibility evaluation of the DC, dose reconstruction was performed using the same cheese phantom as in the EBT3 method. The recalculated dose distributions revealed the dose error information during the actual irradiation on the same CT images, quantitatively compared to the treatment plan. The thread effect, which may appear in helical tomotherapy, was analyzed by ripple amplitude (%). We also performed gamma index analysis (DD: 3%/DTA: 3 mm, pass threshold limit: 95%) to check the pattern of the dose distribution. Results: Ripple amplitude measurement resulted in the highest average of 23.1% in the peritoneum tumor. In the radiochromic film analysis, the absolute dose was on average 0.9±0.4%, and gamma index analysis was on average 96.4±2.2% (passing rate: >95%), which could be limited for large target sizes such as the whole abdominal metastasis tumor. In the DC analysis with the humanoid phantom for a FW of 5.0 cm, the three regions' average was 91.8±6.4% in the 2D and 3D plans.
The three planes (axial, coronal, and sagittal) and the dose profile could be analyzed for the entire peritoneum tumor and the whole abdominal metastasis target, with the planned dose distributions. The dose errors based on the dose-volume histogram in the DC evaluations increased depending on FW and pitch. Conclusion: The DC method could implement dose error analysis on the 3D patient image data using only the measured beam fluence log information, without any dosimetry tools, for patient-specific quality assurance. Also, there may be no limitation on tumor location and size; therefore, the DC could be useful in patient-specific QA during helical tomotherapy treatment of large and irregular tumors.
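
The gamma index analysis above (3% dose difference, 3 mm distance-to-agreement, 95% pass threshold) can be illustrated with a toy one-dimensional implementation; the dose profiles below are simulated, and a clinical analysis would of course operate on the measured 2D/3D distributions:

```python
# Toy 1-D global gamma index: for each reference point, minimise the
# combined dose-difference / distance-to-agreement metric over all
# measured points. Profiles are simulated Gaussians, not clinical data.
import numpy as np

def gamma_1d(ref, meas, positions, dta_mm=3.0, dd_pct=3.0):
    dd = dd_pct / 100.0 * ref.max()  # global dose-difference criterion
    gammas = []
    for x, d in zip(positions, ref):
        dist_term = ((positions - x) / dta_mm) ** 2
        dose_term = ((meas - d) / dd) ** 2
        gammas.append(np.sqrt((dist_term + dose_term).min()))
    return np.array(gammas)

x = np.linspace(0, 50, 251)                     # positions in mm
ref = np.exp(-((x - 25) / 10) ** 2)             # reference profile
meas = 1.01 * np.exp(-((x - 25.5) / 10) ** 2)   # 1% scaled, 0.5 mm shifted
gamma = gamma_1d(ref, meas, x)
pass_rate = (gamma <= 1.0).mean() * 100         # percent of points with gamma <= 1
```

A small scaling and sub-millimetre shift stay well inside the 3%/3 mm criteria, so the pass rate exceeds the 95% threshold.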

Varietal and Locational Variation of Grain Quality Components of Rice Produced in Middle and Southern Plain Areas in Korea (중ㆍ남부 평야지산 발 형태 및 이화학적 특성의 품종 및 산지간 변이)

  • Choi, Hae-Chune;Chi, Jeong-Hyun;Lee, Chong-Seob;Kim, Young-Bae;Cho, Soo-Yeon
    • KOREAN JOURNAL OF CROP SCIENCE / v.39 no.1 / pp.15-26 / 1994
  • To understand the relative contributions of varietal and environmental variation to various grain quality components in rice, grain appearance, milling recovery, several physicochemical properties of the rice grain, and the texture or palatability of cooked rice were evaluated for milled rice samples of seven cultivars (five japonica and two Tongil-type) produced at six locations in the middle and southern plain areas of Korea in 1989, and the obtained data were analyzed. Highly significant varietal variations were detected in all grain quality components of the rice materials, and marked locational variations, accounting for about 14-54% of the total variation, were recognized in grain appearance, milling recovery, alkali digestibility, protein content, K/Mg ratio, gelatinization temperature, and breakdown and setback viscosities. Variety × location interaction variations were especially large in the overall palatability score of cooked rice and the consistency or setback viscosities of the amylograph. Tongil-type cultivars showed poorer marketing quality, lower milling recovery, slightly lower alkali digestibility and amylose content, a little higher protein content and K/Mg ratio, relatively higher peak, breakdown, and consistency viscosities, significantly lower setback viscosity, and less desirable palatability of cooked rice compared with japonica rices. The japonica rice varieties possessing good palatability of cooked rice were slightly low in protein content and a little high in K/Mg ratio and stickiness/hardness ratio of cooked rice. Rice 1000-kernel weight was significantly heavier in rice produced in the Iri lowland compared with other locations. Milling recovery from rough to brown rice and ripening quality were lowest in Milyang late-planted rice and highest in Iri lowland and Gyehwa reclaimed-land rice. Amylose content of milled rice was about 1% lower in Gyehwa rice compared with other locations.
Protein content of polished rice was about 1% lower in rice from the middle plain area than in that from the southern plain regions. The K/Mg ratio of milled rice was lowest in Iri rice and highest in Milyang rice. Alkali digestibility was highest in Milyang rice and lowest in Honam plain rice, but the gelatinization initiation temperature of rice flour in the amylograph was lowest in Suwon and Iri rices and highest in Milyang rice. Breakdown viscosity was lowest in Milyang rice, next lowest in Ichon lowland rice, and highest in Gyehwa and Iri rices; setback viscosity showed the contrary tendency. The stickiness/hardness ratio of cooked rice was slightly lower in southern-plain rices than in middle-plain ones, and the palatability of cooked rice was best in Namyang reclaimed-land rice, followed in order by Suwon ≥ Iri ≥ Ichon ≥ Gyehwa ≥ Milyang rices. The rice materials can be classified genotypically into two ecotypes, the japonica and Tongil-type rice groups, and environmentally into three regions, Milyang, middle, and Honam lowland, by their distribution on the plane of the 1st and 2nd principal components extracted from eleven grain quality properties closely associated with palatability of cooked rice by principal component analysis.
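
The principal component step described above projects the eleven quality properties onto a PC1/PC2 plane on which the sample groups separate. A sketch with scikit-learn on simulated stand-in data (two artificial "ecotype" clusters in an 11-variable space, not the paper's measurements):

```python
# PCA projection of multi-variable quality measurements onto the
# first two principal components; data are simulated stand-ins for
# the eleven grain-quality properties.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
group_a = rng.normal(0.0, 1.0, size=(30, 11))   # simulated "ecotype" A
group_b = rng.normal(2.0, 1.0, size=(30, 11))   # simulated "ecotype" B, offset
X = StandardScaler().fit_transform(np.vstack([group_a, group_b]))

pca = PCA(n_components=2)
scores = pca.fit_transform(X)                # coordinates on the PC1/PC2 plane
explained = pca.explained_variance_ratio_    # variance captured by each PC
```

Because the group offset dominates the variance, PC1 separates the two clusters, mirroring how the ecotypes separate on the PC1/PC2 plane in the study.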


The Characteristics and Performances of Manufacturing SMEs that Utilize Public Information Support Infrastructure (공공 정보지원 인프라 활용한 제조 중소기업의 특징과 성과에 관한 연구)

  • Kim, Keun-Hwan;Kwon, Taehoon;Jun, Seung-pyo
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.1-33 / 2019
  • Small and medium sized enterprises (hereinafter SMEs) are already at a competitive disadvantage compared to large companies with more abundant resources. Manufacturing SMEs not only need a great deal of information for new product development for sustainable growth and survival, but also seek networking to overcome the limitations of their resources; yet they face constraints due to their size. In a new era in which connectivity increases the complexity and uncertainty of the business environment, SMEs are increasingly urged to find information and solve networking problems. To address these problems, government-funded research institutes play an important role in solving the information asymmetry problem of SMEs. The purpose of this study is to identify the differentiating characteristics of SMEs that utilize the public information support infrastructure provided to enhance their innovation capacity, and how it contributes to corporate performance. We argue that an infrastructure for providing information support to SMEs is needed as part of the effort to strengthen the role of government-funded institutions; in this study, we specifically identify the target of such a policy and empirically demonstrate the effects of such policy-based efforts. Our goal is to help establish strategies for building the information supporting infrastructure. To achieve this purpose, we first classified the characteristics of SMEs that have been found to utilize the information supporting infrastructure provided by government-funded institutions. This allows us to verify whether selection bias appears in the analyzed group, which helps us clarify the interpretative limits of our study results.
Next, we performed mediator and moderator effect analyses for multiple variables to analyze the process through which the use of the information supporting infrastructure led to an improvement in external networking capabilities and resulted in enhanced product competitiveness. This analysis helps identify the key factors we should focus on when offering indirect support to SMEs through the information supporting infrastructure, which in turn helps us more efficiently manage research related to SME supporting policies implemented by government-funded institutions. The results of this study showed the following. First, SMEs that used the information supporting infrastructure were found to differ significantly in size from domestic R&D SMEs, but there was no significant difference in the cluster analysis that considered various variables. Based on these findings, we confirmed that SMEs that use the information supporting infrastructure are larger and include a relatively higher proportion of companies that transact to a greater degree with large companies, compared to the general group of SMEs. We also found that companies that already receive support from the information infrastructure include a high concentration of companies that need collaboration with government-funded institutions. Secondly, among the SMEs that use the information supporting infrastructure, we found that increasing external networking capabilities contributed to enhancing product competitiveness; this was not the effect of direct assistance, but an indirect contribution made by increasing open marketing capabilities: in other words, an indirect-only mediator effect.
In addition, the number of times a company received supplementary mentoring on information utilization was found to have a mediated moderation effect on the path from improved external networking capabilities to strengthened product competitiveness. These results offer several policy insights. The profile of KISTI's information support infrastructure users may suggest that the infrastructure intentionally supports groups already positioned to perform well, since their marketing is already well underway. The government should therefore set clear priorities: whether to support underdeveloped companies or to further assist strong performers. Through this research, we have identified how public information infrastructure contributes to product competitiveness, from which we draw the following policy implications. First, the public information support infrastructure should strengthen firms' ability to interact with, or locate, experts who can provide the required information. Second, if utilization of the public (online) information support infrastructure is effective, continuous informational mentoring as a parallel offline support is unnecessary; rather, offline support such as mentoring should serve as a device for monitoring abnormal symptoms. Third, SMEs should improve their own utilization capabilities, because the effects of enhancing networking capacity and product competitiveness through the public information support infrastructure appear across most types of companies rather than in specific SMEs.
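The indirect-only mediation logic described above can be illustrated numerically. The snippet below is a minimal sketch on synthetic data, not the study's dataset or model: X, M, and Y are hypothetical stand-ins for infrastructure use, open marketing capability, and product competitiveness, and the indirect effect a*b is assessed with a percentile bootstrap.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
# Hypothetical synthetic data (illustrative names, not the paper's variables):
# X = infrastructure use, M = open marketing capability, Y = product competitiveness.
X = rng.normal(size=n)
M = 0.5 * X + rng.normal(size=n)   # mediator driven by X
Y = 0.4 * M + rng.normal(size=n)   # X affects Y only through M (indirect-only)

def fit(y, cols):
    """OLS coefficients of y on an intercept plus the given columns."""
    A = np.column_stack([np.ones(len(y))] + cols)
    return np.linalg.lstsq(A, y, rcond=None)[0]

def indirect_effect(X, M, Y):
    a = fit(M, [X])[1]        # path X -> M
    coef = fit(Y, [M, X])     # paths M -> Y (b) and X -> Y (c') jointly
    return a * coef[1], coef[2]

ab, direct = indirect_effect(X, M, Y)

# Percentile bootstrap confidence interval for the indirect effect a*b.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(X[idx], M[idx], Y[idx])[0])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect a*b = {ab:.3f}, 95% CI [{lo:.3f}, {hi:.3f}], direct c' = {direct:.3f}")
```

An indirect-only mediation pattern, as reported in the abstract, corresponds to a bootstrap CI for a*b that excludes zero while the direct path c' is not distinguishable from zero.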

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and volatility clustering observed in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. But since the 1987 Black Monday crash, stock market prices have become very complex and noisy. Recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH estimation process and compares it with the MLE-based process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1487 sample observations. We used 1187 days to train the suggested GARCH models, and the remaining 300 days served as test data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. 
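The Gaussian MLE baseline described above can be sketched in a few lines. The following is an illustrative GARCH(1,1) estimation on simulated returns, not the KOSPI 200 sample; the true parameters omega=0.05, alpha=0.10, beta=0.85 are assumptions chosen for the simulation, and a generic optimizer stands in for the dedicated routines an econometrics package would use.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Simulate a GARCH(1,1) return series (illustrative; not KOSPI 200 data).
omega_t, alpha_t, beta_t = 0.05, 0.10, 0.85
T = 2000
r = np.empty(T)
h = np.empty(T)
h[0] = omega_t / (1 - alpha_t - beta_t)   # unconditional variance
r[0] = np.sqrt(h[0]) * rng.standard_normal()
for t in range(1, T):
    h[t] = omega_t + alpha_t * r[t - 1] ** 2 + beta_t * h[t - 1]
    r[t] = np.sqrt(h[t]) * rng.standard_normal()

def neg_loglik(params, r):
    """Negative Gaussian conditional log-likelihood of a GARCH(1,1) model."""
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf  # reject non-stationary or invalid parameter sets
    h = np.empty_like(r)
    h[0] = r.var()
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return 0.5 * np.sum(np.log(h) + r ** 2 / h)

res = minimize(neg_loglik, x0=[0.1, 0.05, 0.8], args=(r,), method="Nelder-Mead")
omega_hat, alpha_hat, beta_hat = res.x
print(f"omega={omega_hat:.3f}, alpha={alpha_hat:.3f}, beta={beta_hat:.3f}")
```

With 2000 observations the estimates typically land close to the simulated values, and the fitted alpha + beta below one reflects the stationarity constraint imposed inside the likelihood.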
Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility, although the polynomial kernel shows markedly lower forecasting accuracy. We propose an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility is higher, buy volatility today; if it is lower, sell volatility today; if the forecasted direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because historical volatility values themselves cannot be traded, but our simulation results remain meaningful because the Korea Exchange introduced a tradable volatility futures contract in November 2014. The trading systems with SVR-based GARCH models show higher returns than the MLE-based ones in the testing period. Profitable-trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return while SVR-based symmetric S-GARCH shows +526.4%; MLE-based asymmetric E-GARCH shows -72% while SVR-based E-GARCH shows +245.6%; and MLE-based asymmetric GJR-GARCH shows -98.7% while SVR-based GJR-GARCH shows +126.3%. The linear kernel yields higher trading returns than the radial kernel. The best performance of the SVR-based IVTS is +526.4%, versus +150.2% for the MLE-based IVTS. The SVR-based GARCH IVTS also shows higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We also do not consider trading costs such as brokerage commissions and slippage. 
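The SVR-based estimation compared above can be approximated with a small sketch. The code below is a simplified stand-in for the paper's SVR-GARCH procedure, not its exact specification: it regresses next-day squared returns on lagged squared returns using the three kernel families the paper names (linear, polynomial, radial), on simulated rather than KOSPI 200 data, with default hyperparameters that are assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)

# Simulate a GARCH(1,1)-like return series (illustrative; not KOSPI 200 data).
T = 1500
r = np.empty(T)
h = np.empty(T)
h[0], r[0] = 1.0, rng.standard_normal()
for t in range(1, T):
    h[t] = 0.05 + 0.10 * r[t - 1] ** 2 + 0.85 * h[t - 1]
    r[t] = np.sqrt(h[t]) * rng.standard_normal()

# Features: lagged squared returns; target: next squared return (variance proxy).
lags = 5
X = np.column_stack([r[i:T - lags + i] ** 2 for i in range(lags)])
y = r[lags:] ** 2
split = 1200
Xtr, Xte, ytr, yte = X[:split], X[split:], y[:split], y[split:]

results = {}
for kernel in ("linear", "poly", "rbf"):  # the paper's linear/polynomial/radial kernels
    model = SVR(kernel=kernel, C=1.0, epsilon=0.1).fit(Xtr, ytr)
    results[kernel] = mean_squared_error(yte, model.predict(Xte))
print(results)
```

Comparing out-of-sample MSE across the three kernels mirrors the paper's evaluation design, where the same metric is used to rank the MLE- and SVR-based variants.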
The IVTS trading performance is unrealistic because we use historical volatility values as the traded object. Accurate forecasting of stock market volatility is essential for real trading as well as for asset pricing models. Further studies on other machine-learning-based GARCH models can provide better information for stock market investors.
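The IVTS entry rules quoted above can be sketched as a toy backtest. The snippet below assumes a synthetic volatility series and, for illustration only, a perfect-foresight forecast of tomorrow's value; it is not the authors' system, and, as they note for their own simulations, it trades the volatility level directly and ignores transaction costs.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic daily volatility series (illustrative stand-in for forecasted
# KOSPI 200 volatility; not real market data).
T = 300
vol = np.clip(20 + np.cumsum(rng.normal(0, 0.5, T)), 5, None)

# Perfect-foresight forecast for illustration: tomorrow's actual value.
forecast = np.roll(vol, -1)

position = np.zeros(T)  # +1 = long volatility, -1 = short volatility
for t in range(T - 1):
    if forecast[t] > vol[t]:
        position[t] = 1                   # forecast rise -> buy volatility today
    elif forecast[t] < vol[t]:
        position[t] = -1                  # forecast fall -> sell volatility today
    else:
        position[t] = position[t - 1]     # no directional change -> hold

# Daily P&L from holding the position over each day's volatility change.
pnl = position[:-1] * np.diff(vol)
print(f"total P&L: {pnl.sum():.2f}")
```

With a perfect-foresight forecast every trade is on the right side of the move, so the cumulative P&L is an upper bound; plugging in the MLE- or SVR-based GARCH forecasts instead is what separates the systems the paper compares.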