• Title/Summary/Keyword: Probability Score


Technology Innovation Activity and Default Risk (기술혁신활동이 부도위험에 미치는 영향 : 한국 유가증권시장 및 코스닥시장 상장기업을 중심으로)

  • Kim, Jin-Su
    • Journal of Technology Innovation
    • /
    • v.17 no.2
    • /
    • pp.55-80
    • /
    • 2009
  • Technology innovation activity plays a pivotal role in building entry barriers against other firms, improving processes, and creating new products, and these activities bring profit growth to firms. Thus, technology innovation activity can reduce a firm's default risk. However, it can also increase default risk, because it requires a large investment of the firm's resources and its success is uncertain. The purpose of this study is to examine the effect of technology innovation activity on the default risk of firms. The sample consists of manufacturing firms listed on the Korea Securities Market and the Kosdaq Market from January 1, 2000 to December 31, 2008. This study uses R&D intensity as a proxy variable for technology innovation activity. The default probability, which proxies the default risk of firms, is measured by Merton's (1974) debt pricing model. The main empirical results are as follows. First, technology innovation activity has a negative and significant effect on the default risk of firms in both the Korea Securities Market and the Kosdaq Market; in other words, technology innovation activity reduces the default risk of firms. Second, technology innovation activity reduces default risk regardless of firm size, firm age, and credit score. Third, the robustness analysis also shows that technology innovation activity is an important factor in decreasing the default risk of firms. These results imply that managers should maintain continuous interest and investment in their firms' technology innovation activity, and that policymakers should design economic policies to promote the technology innovation activity of firms.
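The default probability in this abstract is measured with Merton's (1974) debt pricing model, in which default occurs when lognormal firm value falls below the face value of debt at the horizon. A minimal sketch of that probability (the asset value, drift, and volatility figures are illustrative; in practice, asset value and asset volatility are backed out from equity data rather than observed directly):

```python
import math

def merton_default_probability(V, D, mu, sigma, T=1.0):
    """Probability that firm value V falls below debt D within horizon T,
    under the Merton (1974) model (lognormal firm value)."""
    dd = (math.log(V / D) + (mu - 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    # Standard normal CDF of -dd, via the error function.
    return 0.5 * (1.0 + math.erf(-dd / math.sqrt(2.0)))

# A firm with assets twice its debt, at moderate vs. high asset volatility:
pd_low_vol = merton_default_probability(V=200.0, D=100.0, mu=0.05, sigma=0.2)
pd_high_vol = merton_default_probability(V=200.0, D=100.0, mu=0.05, sigma=0.6)
```

Higher asset volatility raises the default probability at the same leverage, which is why resource-intensive, uncertain innovation activity can in principle cut either way, as the abstract notes.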


Comparison of the Mid-term Evaluation of Distance Lectures for the First Semester of 2020 and the First Semester of 2021: Targeting D Colleges in the Daegu Area (2020년도 1학기와 2021년도 1학기 원격수업에 대한 중간 강의평가 비교: 대구지역 D 전문대학을 대상으로)

  • Park, Jeong-Kyu
    • Journal of the Korean Society of Radiology
    • /
    • v.15 no.5
    • /
    • pp.675-681
    • /
    • 2021
  • Recently, the Ministry of Education stipulated in its distance-class operation regulations that student lecture evaluations for distance learning subjects should be conducted at least twice per semester and that the results should be disclosed to students. Accordingly, the mid-term lecture evaluations of D college for the first semester of 2020 and the first semester of 2021 were compared. For the multiple-choice items of the distance-learning mid-course evaluation, the overall average rose from 4.1819 in the first semester of 2020 to 4.4000 in the first semester of 2021. In the first semester of 2020, all classes were held non-face-to-face because of COVID-19, whereas in the first semester of 2021 face-to-face classes increased. Overall satisfaction rose from 4.18 points in the first semester of 2020 to 4.39 points in the first semester of 2021. Scores for screen composition, sound and picture quality, playback time, on-screen appearance of the instructor, provision of lecture materials, and frequency of use in the Best 3% and Worst 3% groups also increased. Despite the changes caused by the LMS replacement, which had been a concern, student attendance, assignment submission, and test submission rates also increased compared to the previous year. For the Best 3% scores, the significance probability of the test comparing the two semesters' mid-course evaluations was 0.000, less than 0.05, so the null hypothesis that 'the difference between the two scores is the same' was rejected: the Best score for the first semester of 2021 increased significantly compared to the first semester of 2020. Likewise, for the Worst 3% scores the significance probability was 0.000, less than 0.05, so the null hypothesis that 'the difference between the two scores is the same' was rejected, indicating that the Worst score for the first semester of 2021 was also significantly higher than that for the first semester of 2020.
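The comparisons above reject the null hypothesis "the difference between the two scores is the same" at p < 0.05. The abstract does not state which test was used; as a hedged sketch, here is a two-sample comparison of means using a normal approximation (the score lists are made up for illustration and are not the study's data):

```python
import math
from statistics import mean, stdev

def two_sample_z_test(a, b):
    """Two-sample test of equal means using a normal approximation
    (a rough stand-in; the abstract does not name its exact test)."""
    se = math.sqrt(stdev(a)**2 / len(a) + stdev(b)**2 / len(b))
    z = (mean(b) - mean(a)) / se
    # Two-sided p-value from the standard normal CDF.
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

# Hypothetical evaluation scores (5-point scale) for the two semesters:
s2020 = [4.1, 4.2, 4.3, 4.0, 4.2, 4.1, 4.3, 4.2]
s2021 = [4.4, 4.5, 4.3, 4.4, 4.6, 4.4, 4.5, 4.4]
z, p = two_sample_z_test(s2020, s2021)
# Reject H0 ("the difference between the two scores is the same") when p < 0.05.
```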

Development of High-Resolution Fog Detection Algorithm for Daytime by Fusing GK2A/AMI and GK2B/GOCI-II Data (GK2A/AMI와 GK2B/GOCI-II 자료를 융합 활용한 주간 고해상도 안개 탐지 알고리즘 개발)

  • Ha-Yeong Yu;Myoung-Seok Suh
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_3
    • /
    • pp.1779-1790
    • /
    • 2023
  • Satellite-based fog detection algorithms are being developed to detect fog in real time over a wide area, with a focus on the Korean Peninsula (KorPen). The GEO-KOMPSAT-2A/Advanced Meteorological Imager (GK2A/AMI, GK2A) satellite offers excellent temporal resolution (10 min) at a 500 m spatial resolution, while GEO-KOMPSAT-2B/Geostationary Ocean Color Imager-II (GK2B/GOCI-II, GK2B) provides excellent spatial resolution (250 m) but poor temporal resolution (1 h), with only visible channels. To enhance the level of fog detection (10 min, 250 m), we developed a fused GK2AB fog detection algorithm (FDA) combining GK2A and GK2B. The GK2AB FDA comprises three main steps. First, the Korea Meteorological Satellite Center's GK2A daytime fog detection algorithm is used to detect fog, considering various optical and physical characteristics. In the second step, GK2B data are extrapolated to 10-min intervals by matching GK2A pixels based on the closest time and location when GK2B observes the KorPen. For reflectance, GK2B normalized visible (NVIS) is corrected using GK2A NVIS of the same time, considering the difference in wavelength range and observation geometry, and is then extrapolated at 10-min intervals using the 10-min changes in GK2A NVIS. In the final step, the extrapolated GK2B NVIS, solar zenith angle, and outputs of the GK2A FDA are used as input data for machine learning (a decision tree) to develop the GK2AB FDA, which detects fog at a 250 m resolution and a 10-min interval based on geographical location. Six and four cases were used for training and validation of the GK2AB FDA, respectively. Quantitative verification of the GK2AB FDA used ground observations of visibility, wind speed, and relative humidity. Compared to the GK2A FDA, the GK2AB FDA exhibited a fourfold increase in spatial resolution, resulting in more detailed discrimination between fog and non-fog pixels. In general, irrespective of the validation method, its probability of detection (POD) and Hanssen-Kuipers skill score (KSS) are higher than or similar to those of the GK2A FDA, indicating that it better detects previously undetected fog pixels. However, compared to the GK2A FDA, the GK2AB FDA tends to over-detect fog, with a higher false alarm ratio and bias.
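The second step above carries the hourly GK2B reflectance forward in 10-min steps using the changes observed in the collocated GK2A NVIS. A minimal sketch of that extrapolation, assuming the 10-min change is applied as a relative (ratio) change (the abstract does not specify ratio versus difference; the function and variable names are hypothetical):

```python
def extrapolate_gk2b_nvis(gk2b_nvis_hourly, gk2a_nvis_10min):
    """Extrapolate hourly GK2B NVIS to 10-min steps by applying the
    10-min relative changes observed in the collocated GK2A NVIS.

    gk2b_nvis_hourly : GK2B reflectance at the hourly observation time
    gk2a_nvis_10min  : GK2A reflectances at 10-min intervals, starting
                       at the same observation time
    """
    series = [gk2b_nvis_hourly]
    for prev, curr in zip(gk2a_nvis_10min, gk2a_nvis_10min[1:]):
        # Scale GK2B by the same relative change GK2A shows in this step.
        series.append(series[-1] * curr / prev)
    return series

# GK2A brightening over the hour drives the extrapolated GK2B values:
gk2a = [0.50, 0.52, 0.54, 0.55, 0.56, 0.58, 0.60]
gk2b = extrapolate_gk2b_nvis(0.45, gk2a)
```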

Physicochemical Characteristics and Varietal Improvement Related to Palatability of Cooked Rice or Suitability to Food Processing in Rice (쌀 식미 및 가공적성에 관련된 이화학적 특성)

Choi, Hae-Chune (최해춘)
    • Proceedings of the Korean Journal of Food and Nutrition Conference
    • /
    • 2001.12a
    • /
    • pp.39-74
    • /
    • 2001
  • Efforts to enhance the grain quality of high-yielding japonica rice continued steadily through the 1980s and 1990s, along with self-sufficiency in rice production and increasing demand for high-quality rice. During this time, considerable progress and success were achieved in the development of high-quality japonica cultivars and quality evaluation techniques, including the elucidation of the interrelationship between the physicochemical properties of rice grain and the physical or palatability components of cooked rice. In the 1990s, some high-quality japonica rice cultivars and special rices suitable for food processing, such as large-kernel, chalky-endosperm, aromatic, and colored rices, were developed, and their objective preference and utility were examined with a palatability meter, rapid visco analyzer, and texture analyzer. The water uptake rate and the maximum water absorption ratio showed significantly negative correlations with the K/Mg ratio and alkali digestion value (ADV) of milled rice. Rice materials with higher hot water absorption exhibited larger volume expansion of cooked rice. Harder rices with lower moisture content showed a higher rate of water uptake at twenty minutes after soaking and a higher maximum water uptake ratio at room temperature. These water uptake characteristics were not associated with the protein and amylose contents of milled rice or with the palatability of cooked rice. The water/rice ratio (w/w) for optimum cooking averaged 1.52 in dry milled rices (12% wet basis), with a varietal range from 1.45 to 1.61, and the expansion ratio of milled rice after proper boiling averaged 2.63 (v/v). 
The major physicochemical components of rice grain associated with the palatability of cooked rice were examined using japonica rice materials showing narrow varietal variation in grain size and shape, alkali digestibility, gel consistency, and amylose and protein contents, but considerable differences in the appearance and texture of cooked rice. The glossiness or gross palatability score of cooked rice was closely associated with the peak, hot paste, and consistency viscosities of the viscogram, with year-to-year differences. The high-quality rice variety "Ilpumbyeo" showed a smaller portion of amylose in the outer layer of the milled rice grain and a smaller, slower change in the iodine blue value of the extracted paste during twenty minutes of boiling. Compared with a poorly palatable rice, this highly palatable rice also exhibited a very fine net structure in the outer layer and fine, spongy, well-swollen gelatinized starch granules in the inner layer and core of the cooked rice kernel, as seen in scanning electron microscope images. The gross sensory score of cooked rice could be estimated by a multiple linear regression formula, derived from the relationship between the rice quality components mentioned above and the eating quality of cooked rice, with a high coefficient of determination. The α-amylose-iodine method was adopted for checking varietal differences in the retrogradation of cooked rice. The cultivars showing relatively slow retrogradation in aged cooked rice were Ilpumbyeo, Chucheongbyeo, Sasanishiki, Jinbubyeo, and Koshihikari. A Tongil-type rice, Taebaegbyeo, and a japonica cultivar, Seomjinbyeo, showed relatively fast deterioration of cooked rice. Generally, cultivars with better eating quality showed less retrogradation and more sponginess in cooled cooked rice. Also, varieties exhibiting less retrogradation in cooled cooked rice showed higher hot viscosity and lower cool viscosity of rice flour in the amylogram. 
The sponginess of cooled cooked rice was closely associated with the magnesium content and volume expansion of cooked rice. The hardness change ratio of cooked rice on cooling was negatively correlated with the amount of solids extracted during boiling and with the volume expansion of cooked rice. The major physicochemical properties of rice grain closely related to the palatability of cooked rice may thus be directly or indirectly associated with its retrogradation characteristics. Softer gel consistency and lower amylose content in milled rice yielded a higher ratio of popped rice and larger bulk density of popping. Rice grains with stronger hardness showed a relatively higher popping ratio, and more chalky or less translucent rice exhibited a lower ratio of intact popped brown rice. The potassium and magnesium contents of milled rice were negatively associated with the gross score for noodle making (mixed half-and-half with wheat flour), and rices better suited for noodle making released relatively less solid extract during boiling. Greater volume expansion of the batter for brown rice bread resulted in better loaf formation and more springiness in the bread. Higher-protein rices produced relatively moister white rice bread. The springiness of rice bread was also significantly correlated with high amylose content and hard gel consistency. Completely chalky and large-grain rices showed better suitability for fermentation and brewing. Future breeding efforts for rice quality improvement should focus on enhancing the palatability of cooked rice and marketing quality, as well as on diversifying the morphological and physicochemical characteristics of rice grain for various value-added rice food processes.


A Study on the Application of Outlier Analysis for Fraud Detection: Focused on Transactions of Auction Exception Agricultural Products (부정 탐지를 위한 이상치 분석 활용방안 연구 : 농수산 상장예외품목 거래를 대상으로)

  • Kim, Dongsung;Kim, Kitae;Kim, Jongwoo;Park, Steve
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.93-108
    • /
    • 2014
  • To support business decision making, interest in and efforts to analyze and use transaction data from different perspectives are increasing. Such efforts are not limited to customer management or marketing; they are also used for monitoring and detecting fraudulent transactions. Fraudulent transactions are evolving into various patterns by taking advantage of information technology. To reflect this evolution, there are many efforts to improve fraud detection methods and advanced application systems in terms of both accuracy and ease of detection. As a case of fraud detection, this study aims to provide effective fraud detection methods for auction-exception agricultural products in the largest Korean agricultural wholesale market. The auction-exception products policy exists to complement auction-based trades in the agricultural wholesale market. That is, most trades of agricultural products are performed by auction; however, specific products are designated as auction-exception products when the total volume of a product is relatively small, the number of wholesalers is small, or wholesalers have difficulty purchasing the product. However, the auction-exception products policy causes several problems for the fairness and transparency of transactions, which calls for fraud detection. In this study, to generate fraud detection rules, real trade transaction data for agricultural products in the market from 2008 to 2010 are analyzed, comprising more than 1 million transactions and more than 1 billion US dollars in transaction volume. Agricultural transaction data have unique characteristics, such as frequent changes in supply volumes and turbulent time-dependent changes in price. Since this was the first attempt to identify fraudulent transactions in this domain, there was no training data set for supervised learning, so the fraud detection rules are generated using an outlier detection approach. 
We assume that outlier transactions are more likely to be fraudulent than normal transactions. Outlier transactions are identified by comparing the daily, weekly, and quarterly average unit prices of product items. Quarterly average unit prices of product items for specific wholesalers are also used to identify outlier transactions. The reliability of the generated fraud detection rules is confirmed by domain experts. To determine whether a transaction is fraudulent, the normal distribution and the normalized Z-value concept are applied. That is, the unit price of a transaction is transformed into a Z-value to calculate its occurrence probability, approximating the distribution of unit prices by a normal distribution. A modified Z-value of the unit price is used rather than the original Z-value, because in the case of auction-exception agricultural products the Z-values are influenced by the outlier fraud transactions themselves, as the number of wholesalers is small. The modified Z-values are called Self-Eliminated Z-scores because they are calculated excluding the unit price of the specific transaction being checked for fraud. To show the usefulness of the proposed approach, a prototype fraud transaction detection system was developed in Delphi. The system consists of five main menus and related submenus. The first function of the system is to import transaction databases. The next important functions set up the fraud detection parameters; by changing these parameters, system users can control the number of potential fraud transactions. Execution functions provide the fraud detection results found under the given parameters. The potential fraud transactions can be viewed on screen or exported as files. This study is an initial attempt to identify fraudulent transactions in auction-exception agricultural products. 
Many research topics remain. First, the scope of the analyzed data was limited by data availability; it is necessary to include more data on transactions, wholesalers, and producers to detect fraudulent transactions more accurately. Next, the scope of fraud detection should be extended to fishery products. There are also many possibilities for applying other data mining techniques to fraud detection; for example, a time series approach is a potential technique for this problem. Finally, although outlier transactions are currently detected based on unit prices, it is also possible to derive fraud detection rules based on transaction volumes.
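The Self-Eliminated Z-score described above can be sketched as a leave-one-out statistic: each transaction's unit price is standardized against the mean and standard deviation of all the other transactions, so an outlier cannot dilute the statistics used to judge itself. (The price data and the 3.0 cutoff below are illustrative; the actual system exposes such thresholds as tunable parameters.)

```python
from statistics import mean, stdev

def self_eliminated_z_scores(unit_prices):
    """Z-score of each unit price computed against the mean and standard
    deviation of all OTHER transactions (leave-one-out)."""
    scores = []
    for i, p in enumerate(unit_prices):
        others = unit_prices[:i] + unit_prices[i + 1:]
        scores.append((p - mean(others)) / stdev(others))
    return scores

# One suspiciously priced transaction among otherwise similar prices:
prices = [100.0, 102.0, 98.0, 101.0, 99.0, 250.0]
z = self_eliminated_z_scores(prices)
suspects = [i for i, s in enumerate(z) if abs(s) > 3.0]
```

With an ordinary Z-score, the 250.0 transaction would inflate the mean and standard deviation it is measured against; excluding it makes the outlier stand out sharply.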

PARK Index for Preventable Major Trauma Death Rate (중증외상환자에서 TRISS를 활용한 예방가능 중증외상사망률 지표: PARK Index)

  • Park, Chan Yong;Yu, Byungchul;Kim, Ho Hyun;Hwang, Jung Joo;Lee, Jungnam;Cho, Hyun Min;Park, Han Na
    • Journal of Trauma and Injury
    • /
    • v.28 no.3
    • /
    • pp.115-122
    • /
    • 2015
  • Purpose: To calculate the Preventable Trauma Death Rate (PTDR), the Trauma and Injury Severity Score (TRISS) is the most widely used evaluation index for trauma centers in South Korea. However, this method can show large variation because the denominator at each trauma center is small. We therefore sought to develop a new indicator that can easily be used in quality improvement activities by increasing the denominator. Methods: The medical records of 1,005 major trauma (ISS >15) patients who visited two regional trauma centers (center A and center B) in 2014 were analyzed retrospectively. The PTDR and the PARK Index (Preventable Major Trauma Death Rate, PMTDR) were calculated for the 731 patients meeting the inclusion criteria. We devised the PARK Index to minimize variation in assessing the preventability of trauma death. In the PTDR the denominator is the total number of deaths, whereas in the PARK Index the denominator is the number of all patients with a survival probability (Ps) greater than 0.25, and the numerator is the number of deaths among those patients. Results: The denominator of the PTDR was 40 in center A and 49 in center B, 89 overall. The denominator of the PARK Index was significantly larger: 287 (7.2-fold) in center A, 422 (8.6-fold) in center B, and 709 (8.0-fold) overall. The PARK Index was 12.9% in center A, 8.3% in center B, and 10.2% overall. Conclusion: The PARK Index is calculated as the mortality rate among all major trauma patients with Ps greater than 0.25. It increases the denominator 8.0-fold relative to the PTDR and is therefore able to compensate for the PTDR's major disadvantage. The PARK Index is expected to be helpful in evaluating mortality outcomes and to serve as a new index applicable to trauma center quality improvement activities.
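The PARK Index definition above reduces to a simple ratio over TRISS survival probabilities. A minimal sketch (the cohort data are hypothetical, invented purely to exercise the formula):

```python
def park_index(patients):
    """PARK Index (PMTDR): deaths among major-trauma patients with
    survival probability Ps > 0.25, divided by all patients with Ps > 0.25.

    patients : list of (ps, died) pairs, ps from TRISS, died as bool
    """
    eligible = [(ps, died) for ps, died in patients if ps > 0.25]
    deaths = sum(1 for _, died in eligible if died)
    return deaths / len(eligible)

# Hypothetical cohort: 9 patients with Ps > 0.25 (one died), 1 excluded.
cohort = [(0.9, False)] * 7 + [(0.5, False), (0.3, True), (0.1, True)]
park = park_index(cohort)  # 1 death / 9 eligible patients
```

Because every patient with Ps > 0.25 enters the denominator, not just the deaths, the index is computed over a far larger base than the PTDR, which is the stabilizing effect the abstract reports.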


Factors Associated with Burnout of Nurses Working for Cancer Patients (말기 암 환자 간호사의 직무소진 관련 요인 분석)

  • Leou, Chung-Soon;Kim, Kwang-Kee;Kim, Jeoung-Hee
    • Journal of Hospice and Palliative Care
    • /
    • v.8 no.1
    • /
    • pp.45-51
    • /
    • 2005
  • Purpose: The purpose of this study is to examine the factors surrounding burnout in nurses caring for cancer patients. Methods: The sample was a convenience sample of nurses with hospice care experience working in general hospitals located in Seoul. The study was conducted with a self-administered questionnaire. Two hundred forty-four questionnaires were retrieved, for a response rate of 81.3%. Data were collected from February 25th to March 5th, 1994. Means, standard deviations, t-tests, ANOVA, and multiple regression analysis were used for statistical analysis. Results: Respondents reported a mean burnout score of 2.71 out of 5.0. Bivariate analyses indicated that those with hospice education reported lower burnout than those without it. Multivariate regression analyses revealed the factors associated with the nurses' burnout, including being a Christian, higher job satisfaction, and hospice education experience. The burnout-reducing effect of hospice education was observed in hierarchical multiple regression analyses after controlling for the effects of coping methods, sociodemographic characteristics, job satisfaction, and job-related stresses on burnout. This held not for physical and psychological burnout, but for general and emotional burnout; it was also not confirmed among nurses with a type A personality. Conclusion: The findings of this study have limited generalizability because of the sampling method used. For better hospice care, further research using a probability sampling method is necessary.


Presence and characteristics of dysphagia in stroke patients without awareness of dysphagia (연하장애에 대한 병식이 없는 뇌졸중 환자들의 연하장애 유무와 양상)

  • Shin, Joong-Il;Kam, Kyung-Yoon
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.12 no.1
    • /
    • pp.294-300
    • /
    • 2011
  • This study was performed to examine the presence of dysphagia and analyze the characteristics of its symptoms in cerebrovascular accident (CVA) patients without awareness of dysphagia. A questionnaire was given to CVA patients who had visited the P rehabilitation medical center in Busan. Eleven patients (4 males and 7 females) who reported no awareness of dysphagia underwent a videofluoroscopic swallowing study (VFSS), the functional dysphagia scale, and the NCSE. Descriptive statistics and Pearson correlation analysis were performed with SPSS 12.0. All of the subjects without awareness of dysphagia showed characteristic dysphagia symptoms. The prominent dysfunctions were problems in the oral phase and a delayed swallowing reflex in the pharyngeal phase. Regarding cognition, the patients scored lower on the construction, memory, and similarity items than on other NCSE items. There were highly significant correlations between orientation or judgment and the delay of the swallowing reflex, and verbal comprehension was closely correlated with residual material in the oral cavity. CVA patients without awareness of dysphagia are thus highly likely to have dysphagia. Early evaluation of dysphagia should be performed in CVA patients to prevent complications of CVA and to increase the effectiveness of rehabilitation therapy.

Automated Detecting and Tracing for Plagiarized Programs using Gumbel Distribution Model (굼벨 분포 모델을 이용한 표절 프로그램 자동 탐색 및 추적)

  • Ji, Jeong-Hoon;Woo, Gyun;Cho, Hwan-Gue
    • The KIPS Transactions:PartA
    • /
    • v.16A no.6
    • /
    • pp.453-462
    • /
    • 2009
  • Studies on software plagiarism detection, prevention, and judgment have become widespread owing to the growing interest in, and importance of, protecting and authenticating software intellectual property. Many previous studies focused on comparing all pairs of submitted codes using attribute counting, token patterns, program parse trees, and similarity-measuring algorithms. It is important to provide a clear-cut model for distinguishing plagiarism from collaboration. This paper proposes a source code clustering algorithm using a probability model on an extreme value distribution. First, we propose an asymmetric distance measure pdist($P_a$, $P_b$) to measure the similarity of $P_a$ and $P_b$. Then, we construct the Plagiarism Direction Graph (PDG) for a given program set using pdist($P_a$, $P_b$) as edge weights, and transform the PDG into a Gumbel Distance Graph (GDG) model, since we found that the pdist($P_a$, $P_b$) score distribution is similar to the well-known Gumbel distribution. Second, we newly define pseudo-plagiarism, a kind of virtual plagiarism forced by a very strong functional requirement in the specification. We conducted experiments with 18 groups of programs (more than 700 source codes) collected from the ICPC (International Collegiate Programming Contest) and KOI (Korean Olympiad for Informatics) programming contests. The experiments showed that most plagiarized codes could be detected with high sensitivity and that our algorithm successfully separated real plagiarism from pseudo-plagiarism.
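The key modeling step above is treating the pdist score distribution as Gumbel, so that an unusually high pairwise similarity can be flagged as extreme under the fitted model. A generic sketch of that idea using a method-of-moments Gumbel fit (the scores, the fitting method, and the 0.999 cutoff are illustrative assumptions, not the paper's actual estimator):

```python
import math
from statistics import mean, stdev

EULER_GAMMA = 0.5772156649015329

def fit_gumbel(scores):
    """Method-of-moments fit of a Gumbel distribution to similarity scores."""
    beta = stdev(scores) * math.sqrt(6.0) / math.pi
    mu = mean(scores) - EULER_GAMMA * beta
    return mu, beta

def gumbel_cdf(x, mu, beta):
    return math.exp(-math.exp(-(x - mu) / beta))

# Background pdist scores between unrelated programs, plus one suspect pair:
background = [0.10, 0.12, 0.15, 0.11, 0.13, 0.14, 0.12, 0.16, 0.13, 0.11]
mu, beta = fit_gumbel(background)
# Flag a pair as potential plagiarism if its score is extreme under the fit.
suspect_score = 0.45
is_suspect = gumbel_cdf(suspect_score, mu, beta) > 0.999
```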

Time-series Mapping and Uncertainty Modeling of Environmental Variables: A Case Study of PM10 Concentration Mapping (시계열 환경변수 분포도 작성 및 불확실성 모델링: 미세먼지(PM10) 농도 분포도 작성 사례연구)

  • Park, No-Wook
    • Journal of the Korean earth science society
    • /
    • v.32 no.3
    • /
    • pp.249-264
    • /
    • 2011
  • A multi-Gaussian kriging approach extended to the space-time domain is presented for uncertainty modeling as well as time-series mapping of environmental variables. Within the multi-Gaussian framework, normal-score-transformed environmental variables are first decomposed into deterministic trend and stochastic residual components. After local temporal trend models are constructed, the parameters of the models are estimated and interpolated in space. The space-time correlation structure of the stationary residual components is quantified using a product-sum space-time variogram model. The conditional cumulative distribution function (ccdf) is then modeled at all grid locations using this space-time variogram model and space-time kriging. Finally, E-type estimates and conditional variances are computed from the ccdf models for spatial mapping and uncertainty analysis, respectively. The proposed approach is illustrated with a case study of time-series particulate matter 10 ($PM_{10}$) concentration mapping in Incheon Metropolitan City, using monthly $PM_{10}$ concentrations at 13 stations over 3 years. It is shown that the proposed approach generates reliable time-series $PM_{10}$ concentration maps with less mean bias and better prediction capability than conventional spatial-only ordinary kriging. It is also demonstrated that the conditional variances and the probability of exceeding a certain threshold value are useful information sources for interpretation.
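In the multi-Gaussian framework above, the local ccdf at each grid node is Gaussian with the kriging mean and variance as its parameters, so the exceedance probability mentioned at the end follows directly from them. A minimal sketch of that final step (values are in normal-score units and are illustrative; the preceding trend modeling, variogram fitting, and kriging are not reproduced here):

```python
import math

def exceedance_probability(kriging_mean, kriging_variance, threshold):
    """Probability that the normal-score variable exceeds a threshold,
    given the kriging mean and variance defining the local Gaussian ccdf."""
    sd = math.sqrt(kriging_variance)
    z = (threshold - kriging_mean) / sd
    # 1 - Phi(z), with the standard normal CDF via the error function.
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# At a grid node whose local ccdf is N(0.2, 0.25) in normal-score units,
# the chance of exceeding a normal-score threshold of 1.0:
p_exceed = exceedance_probability(0.2, 0.25, 1.0)
```

Mapping these probabilities over all grid nodes gives the exceedance maps the abstract cites as an information source alongside the conditional variances.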