• Title/Summary/Keyword: Prior

Search Result 11,837, Processing Time 0.044 seconds

Scalable Collaborative Filtering Technique based on Adaptive Clustering (적응형 군집화 기반 확장 용이한 협업 필터링 기법)

  • Lee, O-Joun;Hong, Min-Sung;Lee, Won-Jin;Lee, Jae-Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.73-92
    • /
    • 2014
  • An Adaptive Clustering-based Collaborative Filtering Technique was proposed to solve the fundamental problems of collaborative filtering, such as the cold-start problem, the scalability problem, and the data sparsity problem. Previous collaborative filtering techniques make recommendations based on the predicted preference of a user for a particular item, using a similar-item subset and a similar-user subset composed from the preferences of users for items. For this reason, if the density of the user preference matrix is low, the reliability of the recommendation system decreases rapidly, and creating a similar-item subset and a similar-user subset becomes more difficult. In addition, as the scale of the service increases, the time needed to create these subsets grows geometrically, and the response time of the recommendation system increases accordingly. To solve these problems, this paper suggests a collaborative filtering technique that actively adapts to the conditions of the model and adopts concepts from context-based filtering. The technique consists of four major methodologies. First, the items and users are clustered according to their feature vectors, and an inter-cluster preference between each item cluster and user cluster is then estimated. With this method, the run-time for creating a similar-item or similar-user subset can be reduced, the reliability of the recommendation system is higher than when only user preference information is used to create those subsets, and the cold-start problem can be partially solved. Second, recommendations are made using the previously composed item and user clusters and the inter-cluster preferences between them. 
In this phase, a list of items is made for each user by examining the item clusters in decreasing order of the inter-cluster preference of the cluster to which the user belongs, and selecting and ranking the items according to the predicted or recorded user preference information. With this method, the recommendation-model creation phase bears the highest load of the recommendation system, which minimizes the load at run-time. Therefore, the scalability problem is mitigated, and a large-scale recommendation system can be operated with highly reliable collaborative filtering. Third, the missing user preference information is predicted using the item and user clusters. With this method, the problem caused by the low density of the user preference matrix can be mitigated. Existing studies used either an item-based prediction or a user-based prediction. In this paper, Hao Ji's idea, which uses both an item-based prediction and a user-based prediction, was improved. The reliability of the recommendation service can be improved by combining the predictive values of both techniques according to the conditions of the recommendation model. By predicting the user preference based on the item or user clusters, the time required to predict the user preference is reduced, and missing user preferences can be predicted at run-time. Fourth, the item and user feature vectors are updated by learning from subsequent user feedback. This phase applies normalized user feedback to the item and user feature vectors. This method mitigates the problems caused by adopting concepts from context-based filtering, such as basing the item and user feature vectors on the user profile and item properties. The problems with using the item and user feature vectors stem from the limitation of quantifying the qualitative features of items and users. 
Therefore, the elements of the user and item feature vectors are matched one to one, and when user feedback for a particular item is obtained, it is applied to the corresponding element of the opposite feature vector. Verification of this method was accomplished by comparing its performance with existing hybrid filtering techniques. Two measures were used for verification: MAE (Mean Absolute Error) and response time. By MAE, this technique was confirmed to improve the reliability of the recommendation system. By response time, this technique was found to be suitable for a large-scale recommendation system. This paper suggested an Adaptive Clustering-based Collaborative Filtering Technique with high reliability and low time complexity, but it had some limitations. Because this technique focused on reducing the time complexity, an improvement in reliability was not expected. Future work will improve this technique with rule-based filtering.
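The third step above, predicting missing preferences by combining an item-based and a user-based estimate, can be sketched roughly as follows. This is a minimal illustration, not the authors' exact model: the cluster assignments, the blending weight `alpha`, and the fallback to the global mean are assumptions made for the sketch, and MAE is computed as in the paper's verification.

```python
import numpy as np

def blended_prediction(R, user_clusters, item_clusters, alpha=0.5):
    """Fill missing ratings (np.nan) in R by blending a user-based estimate
    (same user cluster, same item) with an item-based estimate (same user,
    same item cluster). Falls back to the global mean if a cluster is empty."""
    n_users, n_items = R.shape
    pred = R.copy()
    global_mean = np.nanmean(R)
    for u in range(n_users):
        for i in range(n_items):
            if not np.isnan(R[u, i]):
                continue
            same_users = [v for v in range(n_users) if user_clusters[v] == user_clusters[u]]
            same_items = [j for j in range(n_items) if item_clusters[j] == item_clusters[i]]
            col = R[same_users, i]          # ratings of item i within u's user cluster
            row = R[u, same_items]          # u's ratings within i's item cluster
            user_based = np.nanmean(col) if not np.all(np.isnan(col)) else global_mean
            item_based = np.nanmean(row) if not np.all(np.isnan(row)) else global_mean
            pred[u, i] = alpha * user_based + (1 - alpha) * item_based
    return pred

def mae(actual, predicted, mask):
    """Mean Absolute Error over the masked (originally missing) entries."""
    return float(np.mean(np.abs(actual[mask] - predicted[mask])))

# Tiny example: user 0's missing rating for item 1 is blended from
# user 1 (same user cluster) and user 0's own rating in item 1's cluster.
R = np.array([[5.0, np.nan, 1.0],
              [4.0, 2.0,    1.0],
              [1.0, 5.0,    5.0]])
pred = blended_prediction(R, user_clusters=[0, 0, 1], item_clusters=[0, 1, 1])
```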

Determination of Optimal Concentration of LPE (Lysophosphatidylethanolamine) for Postharvest Stability and Quality of Strawberry Fruit (딸기 수확 후 저장기간 연장 및 품질 개선을 위한 LPE (Lysophosphatidylethanolamine) 적정 처리농도 구명)

  • Choi, Ki-Young;Kim, Il-Seop;Yun, Young-Sik;Choi, Eun-Young
    • Journal of Bio-Environment Control
    • /
    • v.25 no.3
    • /
    • pp.153-161
    • /
    • 2016
  • This study aims to determine the optimal maturity of strawberry fruits as affected by the application of lysophosphatidylethanolamine (LPE) and its optimal concentration for postharvest stability and quality. Prior to application of treatments, fruits classified by maturity level (0%, 50%, 70% and 100%) were air-dried for 40 minutes and stored in a refrigerator at $4^{\circ}C$ for 12 days. Fruits at 70% maturity were dipped into 0, 10, 50 and $100mg{\cdot}L^{-1}$ LPE solutions for 1 minute. A lower range of concentrations (0, 2.5, 5, 10 and $25mg{\cdot}L^{-1}$) was applied to fruits at different maturity levels. Data on fresh weight, hardness at vertical and horizontal loading positions, color index and sugar content during storage were collected. For fruits with 70% maturity dipped in the LPE concentrations, no significant differences were found in fresh weight, color index or sugar content. However, the application of $10mg{\cdot}L^{-1}$ LPE gave the highest hardness at the vertical loading position, while $100mg{\cdot}L^{-1}$ gave the lowest average. At the lower range of LPE concentrations, fresh weight was not significantly affected by LPE application or maturity level. Hardness of fruits depended mainly on their maturity. Increased hardness was observed in fruits with 70% maturity dipped into the $5mg{\cdot}L^{-1}$ LPE solution. The hardness and Hunter's $L^*$ and $b^*$ values of 100% matured fruits were the lowest despite the application of $25mg{\cdot}L^{-1}$ LPE after 12 days of storage.

Comparison of Early Germinating Vigor, Germination Speed and Germination Rate of Varieties in Poa pratensis L., Lolium perenne L. and Festuca arundinacea Schreb. Grown Under Different Growing Conditions (생육환경에 따른 Poa pratensis L., Lolium perenne L. 및 Festuca arundinacea Schreb.의 초종 및 품종별 발아세, 발아속도 및 발아율 비교)

  • 김경남;남상용
    • Asian Journal of Turfgrass Science
    • /
    • v.17 no.1
    • /
    • pp.1-12
    • /
    • 2003
  • Research was initiated to investigate germination characteristics of cool-season grasses (CSG). Several turfgrasses were tested in different experiments. Experiments I and III were conducted under a room temperature condition of 16$^{\circ}C$ to 23$^{\circ}C$ and under a constant temperature condition at 25$^{\circ}C$, respectively. An alternating environment condition, which is a requirement for a CSG germination test by the International Seed Testing Association (ISTA), was applied in Experiment II, consisting of 8-hr light at 25$^{\circ}C$ and 16-hr dark at 15$^{\circ}C$. In each experiment, data such as early germinating vigor, germination speed and germination rate were evaluated. Six turfgrass entries comprised two varieties each of Kentucky bluegrass (KB, Poa pratensis L.), perennial ryegrass (PR, Lolium perenne L.), and tall fescue (TF, Festuca arundinacea Schreb.). Significant differences were observed in early germinating vigor, germination speed and germination rate. Early germinating vigor, as measured by days to 70% seed germination, varied according to environment conditions, turfgrasses and varieties. It was less than 6 days in PR and 6 to 9 days in TF. However, KB required 11 to 13 days under the alternating condition and 11 to 28 days under the room temperature condition. Germination speed was fastest in PR at 7 to 10 days and slowest in KB at 14 to 21 days, with an intermediate speed of 10 to 14 days in TF. There were considerable variations in germination rate among turfgrasses under the different conditions. Generally, PR and TF germinated well, regardless of environment conditions. However, a great difference was observed among KB varieties when compared with the others. Under the room temperature condition, total germination rate was 71.0% in Midnight and 77.7% in Award. It increased under the alternating condition, to 81.7% and 91.7% in Award and Midnight, respectively. 
However, the poorest rate was found under the constant temperature condition: 18.0% in Award and 15.3% in Midnight. These results suggest that an intensive germination test, covering early germinating vigor and germination speed as well as total germination rate, is needed as required by ISTA prior to deciding the seeding rate. KB is very sensitive to environment conditions, and its variety selection should therefore be based on careful expertise.
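The three quantities evaluated in each experiment can be computed from a daily series of germination counts. The sketch below is a generic illustration: the paper measures vigor as days to 70% seed germination, while the mean germination time used here as the "speed" value is only an assumed proxy for the authors' exact definition.

```python
def germination_metrics(daily_counts, seeds_sown, vigor_fraction=0.7):
    """Summarize a germination test from daily counts of newly germinated seeds.

    Returns a tuple of:
      - days to reach the vigor threshold (None if never reached),
      - mean germination time in days (a common speed index),
      - final germination rate in %.
    """
    cumulative = 0
    days_to_threshold = None
    weighted_days = 0  # sum of (day * seeds germinated that day)
    for day, count in enumerate(daily_counts, start=1):
        cumulative += count
        weighted_days += day * count
        if days_to_threshold is None and cumulative >= vigor_fraction * seeds_sown:
            days_to_threshold = day
    rate = 100.0 * cumulative / seeds_sown
    mean_time = weighted_days / cumulative if cumulative else None
    return days_to_threshold, mean_time, rate

# Example: 100 seeds sown, counts of newly germinated seeds per day.
vigor, speed, rate = germination_metrics([0, 10, 40, 30, 10, 5], seeds_sown=100)
```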

Evaluation of Tuberculosis Activity in Patients with Anthracofibrosis by Use of Serum Levels of IL-2 $sR{\alpha}$, IFN-${\gamma}$ and TBGL(Tuberculous Glycolipid) Antibody (Anthracofibrosis의 결핵활동성 지표로서 혈청 IL-2 $sR{\alpha}$, IFN-${\gamma}$, 그리고 TBGL(tuberculous glycolipid) antibody 측정의 의의)

  • Jeong, Do Young;Cha, Young Joo;Lee, Byoung Jun;Jung, Hye Ryung;Lee, Sang Hun;Shin, Jong Wook;Kim, Jae-Yeol;Park, In Won;Choi, Byoung Whui
    • Tuberculosis and Respiratory Diseases
    • /
    • v.55 no.3
    • /
    • pp.250-256
    • /
    • 2003
  • Background: Anthracofibrosis, a descriptive term for multiple black pigmentation with fibrosis on bronchoscopic examination, has a close relationship with active tuberculosis (TB). However, in some cases of anthracofibrosis, TB activity is determined only at a later stage by the TB culture results. Therefore, it is necessary to identify early markers of TB activity in anthracofibrosis. Several reports have investigated the serum levels of IL-2 $sR{\alpha}$, IFN-${\gamma}$ and TBGL antibody for the evaluation of TB activity. In the present study, we measured the above serologic markers to evaluate TB activity in patients with anthracofibrosis. Methods: Anthracofibrosis was defined as deep pigmentation (in more than two lobar bronchi) with fibrotic stenosis of the bronchi on bronchoscopic examination. The serum of patients with anthracofibrosis was collected and stored under refrigeration before the start of anti-TB medication. The serum of healthy volunteers (N=16) and of patients with active TB before (N=22) and after (N=13) 6 months of medication was also collected and stored. Serum IL-2 $sR{\alpha}$ and IFN-${\gamma}$ were measured with an ELISA kit (R&D Systems, USA), and serum TBGL antibody was measured with a TBGL EIA kit (Kyowa Inc., Japan). Results: Serum levels of IL-2 $sR{\alpha}$ in healthy volunteers, active TB patients before and after medication, and patients with anthracofibrosis were $640{\pm}174$, $1,611{\pm}2,423$, $953{\pm}562$, and $863{\pm}401$ pg/ml, respectively. The serum IFN-${\gamma}$ levels were 0, $8.16{\pm}17.34$, $0.70{\pm}2.53$, and $2.33{\pm}6.67$ pg/ml, and TBGL antibody levels were $0.83{\pm}0.80$, $5.91{\pm}6.71$, $6.86{\pm}6.85$, and $3.22{\pm}2.59$ U/ml, respectively. The serum level of TBGL antibody in healthy volunteers was lower than in the other groups (p<0.05). There was no significant difference in serum IL-2 $sR{\alpha}$ and IFN-${\gamma}$ levels among the four groups. 
Conclusion: The serum levels of IL-2 $sR{\alpha}$, IFN-${\gamma}$ and TBGL antibody were not useful for evaluating TB activity in patients with anthracofibrosis. More useful methods need to be developed to identify active TB in patients with anthracofibrosis.

A Survey on Child Battering among Elementary School Children and Related Factors in Urban and Rural Areas (도시 및 농어촌 아동의 가정내 구타발생률 및 관련요인 조사)

  • Jeon, Kae-Soon;Park, Jung-Han
    • Journal of Preventive Medicine and Public Health
    • /
    • v.24 no.2 s.34
    • /
    • pp.232-242
    • /
    • 1991
  • To determine the incidence rate of child battering and related factors, a questionnaire survey was conducted on 1,255 children in the 4th and 5th grades of two elementary schools in Taegu (one in an upper-economic-class area with 519 students and the other in a lower-economic-class area with 504 students) and two schools in rural areas of Kyungpook province (120 and 112 students, respectively) from 1 May to 10 May 1990. The total number of children who were battered during the one-month period (1-30 April 1990) prior to the survey was 918 (73.1%). Among the battered children, 87 (6.9%) were severely battered (twice or more in a month by kicking or a more severe method) and 831 (66.2%) were moderately battered (all battering other than severe battering). The percentage of battered children and the degree of battering were not significantly different between the two schools in Taegu or between urban and rural areas. Common reasons for battering were disobedience (61.9%), making trouble (34.9%), and poor school performance (33.3%). However, 16.1% of the severely battered children responded that the perpetrators battered them to vent their anger, and 5.7% did not know the reason why they were battered. A majority of the battered children (65%) regretted their fault after being battered, but 20.7% of the severely battered children wanted to run away and 9.2% had an urge to commit suicide. Most of the physical injuries due to battering were minor, such as bruises (52.7%), but some were severe, e.g., bone fracture (2.5%), skin laceration (1.5%), and loss of consciousness (0.2%). The common psycho-behavioral complaints of the severely battered children were unwillingness to study (31%), unwillingness to live (17.2%), and reluctance to go home (13.8%). 
The incidence rate of severe battering was significantly higher (p=0.018) among children living in a quarter attached to a store (14.0%) than among children living in an apartment (6.6%) or an individual house (6.2%). The incidence rate of severe battering was higher among children living in a rental house (8.4%) than among children living in their own house (6.3%) (p=0.005). Children of households where only the father worked (5.1%) or only the mother worked (4.5%) had a lower incidence rate of severe battering than children of households where both parents worked (9.1%) or both parents were unemployed (20.7%) (p=0.006). More children were battered when there was a sick family member (80.8%) compared with children without a sick family member (71.4%) (p=0.001). The incidence rates of severe and moderate battering increased as the frequency of quarreling between mother and father increased (p<0.001). The percentage of unbattered children was higher among children whose father's occupation was professional (39.4%) than among the total study subjects (26.9%) (p<0.001).


A Study on the Meaning and Future of the Moon Treaty (달조약의 의미와 전망에 관한 연구)

  • Kim, Han-Taek
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.21 no.1
    • /
    • pp.215-236
    • /
    • 2006
  • This article focused on the meaning of the 1979 Moon Treaty and its future. Although the Moon Treaty is one of the 5 major space-related treaties, it has been accepted by only 11 member states, all of which are non-space powers, and it thus has the least influence in the field of space law. This article analysed the relationship between the 1979 Moon Treaty and the 1967 Space Treaty, which was the first principal treaty, and examined the meaning of the "Common Heritage of Mankind" (hereinafter CHM) stipulated in the Moon Treaty in terms of international law. This article also dealt with the present and future problems arising from the Moon Treaty. As far as the 1967 Space Treaty is concerned, the main standpoint is that outer space, including the moon and the other celestial bodies, is res extra commercium, an area not subject to national appropriation, like the high seas. It proclaims the principle of non-appropriation concerning the celestial bodies in outer space. But the concept of CHM stipulated in the Moon Treaty created an entirely new category of territory in international law. This concept basically conveys the idea that the management, exploitation and distribution of natural resources of the area in question are matters to be decided by the international community and are not to be left to the initiative and discretion of individual states or their nationals. A similar provision is found in the 1982 Law of the Sea Convention, which operates the International Sea-bed Authority created under the concept of CHM. According to the Moon Treaty, an international regime will be established once the exploitation of the natural resources of celestial bodies other than the Earth is about to become feasible. Before the establishment of such an international regime, one could imagine a moratorium upon the exploitation of the natural resources of the celestial bodies. 
But the drafting history of the Moon Treaty indicates that no moratorium on the exploitation of natural resources was intended prior to the setting up of the international regime. So each State Party could exploit the natural resources, bearing in mind that those resources are CHM. In this respect it would be better for Korea, which is not now a party to the Moon Treaty, to become a member state in the near future. According to the Moon Treaty, the efforts of those countries which have contributed either directly or indirectly to the exploitation of the moon shall be given special consideration. The Moon Treaty, which, although criticised by some space law experts, represents a solid basis upon which further space exploration can continue, expresses the common collective wisdom of all member states of the United Nations and responds to the needs and possibilities of those that have already put their technologies into outer space.


Effects of firm strategies on customer acquisition of Software as a Service (SaaS) providers: A mediating and moderating role of SaaS technology maturity (SaaS 기업의 차별화 및 가격전략이 고객획득성과에 미치는 영향: SaaS 기술성숙도 수준의 매개효과 및 조절효과를 중심으로)

  • Chae, SeongWook;Park, Sungbum
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.151-171
    • /
    • 2014
  • Firms today have sought management effectiveness and efficiency by utilizing information technologies (IT). Numerous firms outsource specific information systems functions to cope with their shortage of information resources or IT experts, or to reduce their capital cost. Recently, Software-as-a-Service (SaaS), a new type of information system, has become one of the powerful outsourcing alternatives. SaaS is software deployed as a hosted service and accessed over the internet. It embodies the ideas of on-demand, pay-per-use, utility computing and is now being applied to support the core competencies of clients in areas ranging from individual productivity to vertical industries and e-commerce. In this study, therefore, we seek to quantify the value that SaaS has for business performance by examining the relationships among firm strategies, SaaS technology maturity, and the business performance of SaaS providers. We begin by drawing from prior literature on SaaS, technology maturity and firm strategy. SaaS technology maturity is classified into three phases: application service providing (ASP), Web-native application, and Web-service application. Firm strategies are operationalized as the low-cost strategy and the differentiation strategy. Finally, we consider customer acquisition as the business performance measure. In this sense, the specific objectives of this study are as follows. First, we examine the relationships between customer acquisition performance and both the low-cost strategy and the differentiation strategy of SaaS providers. Second, we investigate the mediating and moderating effects of SaaS technology maturity on those relationships. For this purpose, the study collects data from SaaS providers and their lines of applications registered in the database of CNK (Commerce net Korea), using a questionnaire administered by a professional research institution. 
The unit of analysis in this study is the SBU (strategic business unit) within the software provider. A total of 199 SBUs was used for analyzing and testing our hypotheses. With regard to the measurement of firm strategy, we take three measurement items for the differentiation strategy, namely application uniqueness (whether an application aims to differentiate within just one or a small number of target industries), supply channel diversification (whether the SaaS vendor has a diversified supply chain), and the number of specialized experts, and two items for the low-cost strategy, namely subscription fee and initial set-up fee. We employ a hierarchical regression analysis technique for testing the moderation effects of SaaS technology maturity, and follow Baron and Kenny's procedure for determining whether firm strategies affect customer acquisition through technology maturity. Empirical results revealed that, first, when the differentiation strategy is applied to attain business performance such as customer acquisition, the effect of the strategy is moderated by the technology maturity level of the SaaS provider. In other words, securing a higher level of SaaS technology maturity is essential for higher business performance. For instance, given that firms implement application uniqueness or distribution channel diversification as a differentiation strategy, they can acquire more customers when their level of SaaS technology maturity is higher rather than lower. Second, the results indicate that pursuing a differentiation strategy or a low-cost strategy works effectively for SaaS providers in obtaining customers: continuously differentiating their service from others, or keeping their service fees (subscription fee or initial set-up fee) low, helps their business succeed in acquiring customers. Lastly, the results show that the level of SaaS technology maturity mediates the relationship between the low-cost strategy and customer acquisition. 
That is, based on our research design, customers usually perceive the real value of a low subscription fee or initial set-up fee only through the SaaS service provided by the vendor, and this in turn affects their decision on whether or not to subscribe.
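The mediation logic of Baron and Kenny's procedure mentioned above can be sketched with a small simulation. This is an illustration only: the data are synthetic, generated so that maturity fully mediates the strategy effect, and the variable names are stand-ins for the paper's constructs, not its actual measurements.

```python
import numpy as np

def ols(y, *predictors):
    """OLS coefficients [intercept, slope1, slope2, ...] via least squares."""
    A = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

rng = np.random.default_rng(0)
n = 500
strategy = rng.normal(size=n)                                 # X: e.g. low-cost intensity
maturity = 0.8 * strategy + rng.normal(scale=0.5, size=n)     # M: SaaS technology maturity
acquisition = 0.9 * maturity + rng.normal(scale=0.5, size=n)  # Y: customer acquisition

c = ols(acquisition, strategy)[1]                     # step 1: total effect of X on Y
a = ols(maturity, strategy)[1]                        # step 2: X on the mediator M
_, c_prime, b = ols(acquisition, strategy, maturity)  # step 3: direct effect and M on Y

# For nested OLS models the in-sample decomposition c = c' + a*b holds
# exactly; full mediation appears as a near-zero direct effect c'.
```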

Calculation of Unit Hydrograph from Discharge Curve, Determination of Sluice Dimension and Tidal Computation for Determination of the Closure curve (단위유량도와 비수갑문 단면 및 방조제 축조곡선 결정을 위한 조속계산)

  • 최귀열
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.7 no.1
    • /
    • pp.861-876
    • /
    • 1965
  • During my stay in the Netherlands, I studied the following, primarily in relation to the Mokpo Yong-san project, which had been studied by NEDECO for a feasibility report. 1. Unit hydrograph at Naju. There are many ways to make a unit hydrograph, but I explain here how to make a unit hydrograph from the actual runoff curve at Naju. A discharge curve made from one rain storm depends on the rainfall intensity per hour. After finding the hydrograph every two hours, we obtain the two-hour unit hydrograph by dividing each ordinate of the two-hour hydrograph by the rainfall intensity. I used one storm from June 24 to June 26, 1963, recording an average rainfall intensity of 9.4 mm per hour for 12 hours. If several rain gage stations had already been established in the catchment area above Naju prior to this storm, I could have gathered accurate data on rainfall intensity throughout the catchment area. As it was, I used the automatic rain gage record of the Mokpo meteorological station to determine the rainfall intensity. In order to develop the unit hydrograph at Naju, I subtracted the basic flow from the total runoff flow. I also tried to keep the difference between the calculated discharge amount and the measured discharge to less than 10%. The discharge period of a unit graph depends on the length of the catchment area. 2. Determination of sluice dimension. According to the principles of design presently used in our country, a one-day storm with a frequency of 20 years must be discharged in 8 hours. These design criteria are not adequate, and several dams have washed out in past years. The design of the spillway and sluice dimensions must be based on the maximum peak discharge flowing into the reservoir, to avoid damage to crops and structures. The total flow into the reservoir is the sum of the flow described by the Mokpo hydrograph, the basic flow from all the catchment areas, and the rainfall on the reservoir area. 
To calculate the amount of water discharged through the sluice (per half hour), the average head during that interval must be known. This can be calculated from the known water level outside the sluice (determined by the tide) and from an estimated water level inside the reservoir at the end of each time interval. The total amount of water discharged through the sluice can be calculated from this average head, the time interval and the cross-sectional area of the sluice. From the inflow into the reservoir and the outflow through the sluice gates, I calculated the change in the volume of water stored in the reservoir at half-hour intervals. From the stored volume of water and the known storage capacity of the reservoir, I was able to calculate the water level in the reservoir. The calculated water level in the reservoir must be the same as the estimated water level. A mean standard tide is adequate for determining the sluice dimension, because spring tide is the worst case and neap tide the best condition for the result of the calculation. 3. Tidal computation for determination of the closure curve. During the construction of a dam, whether by building up a succession of horizontal layers or by building in from both sides, the velocity of the water flowing through the closing gap will increase because of the gradual decrease in the cross-sectional area of the gap. I calculated the velocities in the closing gap during flood and ebb for the first-mentioned method of construction until the cross-sectional area had been reduced to about 25% of the original area, the change in tidal movement within the reservoir being negligible. Up to that point, the increase of the velocity is more or less hyperbolic. During the closing of the last 25% of the gap, less water can flow out of the reservoir. This causes a rise of the mean water level of the reservoir. The difference in hydraulic head is then no longer negligible and must be taken into account. 
When, during the course of construction, the submerged weir becomes a free weir, critical flow occurs. The critical flow is the point, during either ebb or flood, at which the velocity reaches a maximum. When the dam is raised further, the velocity decreases because of the decrease in the height of the water above the weir. The calculation of the currents and velocities for a stage in the closure of the final gap is done in the following manner. Using an average tide with a negligible daily inequality, I estimated the water level on the upstream side of the dam (inner water level). I determined the current through the gap for each hour by multiplying the storage area by the increment of the rise in water level. The velocity at a given moment can be determined from the calculated current in m$^3$/sec and the cross-sectional area at that moment. At the same time, from the difference between the inner water level and the tidal level (outer water level), the velocity can be calculated with the formula $h= \frac{V^2}{2g}$, and it must be equal to the velocity determined from the current. If there is a difference in velocity, a new estimate of the inner water level must be made and the entire procedure repeated. When the higher water level is equal to or more than 2/3 of the difference between the lower water level and the crest of the dam, we speak of a "free weir." The flow over the weir is then dependent upon the higher water level and not on the difference between the high and low water levels. The weir is "submerged" when the higher water level is less than 2/3 of the difference between the lower water level and the crest of the dam, the difference between the high and low levels then being decisive. The free weir normally occurs first during ebb, due to the fact that the mean level in the estuary is higher than the mean level of the tide. In building dams with barges, the maximum velocity in the closing gap may not be more than 3 m/sec. 
As the maximum velocities are higher than this limit, we must use other construction methods in closing the gap. This can be done with dump-cars from each side or by using a cable way.
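The velocity check at the heart of the iterative tidal computation above follows directly from the abstract's relation $h = V^2/2g$. A small sketch, with the head for the 3 m/sec barge-construction limit derived here rather than stated in the abstract:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def gap_velocity(head_m):
    """Velocity through the closing gap implied by the head difference,
    using h = V^2 / (2g), i.e. V = sqrt(2*g*h)."""
    return math.sqrt(2 * G * head_m)

def head_for_velocity(v_mps):
    """Inverse relation: the head difference that produces a given velocity."""
    return v_mps ** 2 / (2 * G)

# The 3 m/sec limit for dam closure with barges corresponds to a head
# difference of roughly 0.46 m between outer (tidal) and inner water levels.
limit_head = head_for_velocity(3.0)
```

In the iteration described in the abstract, this velocity is compared with the one obtained from the current (storage area times rise in water level) divided by the gap's cross-sectional area; the inner water level is re-estimated until the two agree.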


Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risks. The data used in the analysis totaled 10,545 rows and 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used the default event as the basis for learning about default risk, this study calculated default risk using the market capitalization and stock price volatility of each company based on the Merton model. Through this, it was able to solve the problem of data imbalance due to the scarcity of default events, which had been pointed out as a limitation of the existing methodology, as well as the problem of reflecting the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information that is also available for unlisted companies, the default risks of unlisted companies without stock price information can be appropriately derived. This makes it possible to provide stable default risk assessment services to companies, such as small and medium-sized companies and startups, whose proper default risk is difficult to determine with traditional credit rating models. Although the prediction of corporate default risks using machine learning has recently been studied actively, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for the calculation of default risk, given that an entity's default risk information is very widely utilized in the market and sensitivity to differences in default risk is high. Strict standards are also required for the methods of calculation. 
The credit rating method stipulated by the Financial Services Commission in the Financial Investment Business Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience on credit ratings and of changes in future market conditions. This study reduced individual models' bias by using stacking ensemble techniques that synthesize various machine learning models. This makes it possible to capture complex nonlinear relationships between default risk and corporate information while retaining the advantage of machine learning-based default risk prediction models, which take little time to compute. To produce the sub-model forecasts used as input data for the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce forecasts. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which performed best among the single models. Next, to check for statistically significant differences between the stacking ensemble model and each individual model, pairs of forecasts were constructed between the stacking ensemble model and each individual model. Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank-sum test was used to check whether the two forecasts making up each pair differed to a statistically significant degree. The analysis showed that the forecasts of the stacking ensemble model differed significantly from those of the MLP model and the CNN model. 
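The stacking scheme described above, where sub-model forecasts produced on data splits become the input features of a meta-learner, can be sketched with scikit-learn's `StackingRegressor` (here `cv=7` mirrors the seven-way split; the synthetic data, the two sub-models, and the Ridge meta-learner are illustrative assumptions, not the paper's exact configuration, which also included a CNN sub-model):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the corporate feature matrix and Merton-based risk target.
X, y = make_regression(n_samples=600, n_features=20, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
        ("mlp", make_pipeline(
            StandardScaler(),
            MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
        )),
    ],
    final_estimator=Ridge(),
    cv=7,  # sub-model forecasts come from a 7-way split, as in the study
)
stack.fit(X_tr, y_tr)
print(round(stack.score(X_te, y_te), 3))
```

Internally, each sub-model predicts the held-out fold of every split, and those out-of-fold forecasts (never in-sample ones) train the meta-learner, which keeps the ensemble from simply memorizing the strongest sub-model's training fit.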
In addition, this study provides a methodology that allows existing credit rating agencies to apply machine learning-based default risk prediction, given that traditional credit rating models can also be included as sub-models when calculating the final default probability. The stacking ensemble techniques proposed in this study can also help design models that meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope this research will be used as a resource to increase practical adoption by overcoming the limitations of existing machine learning-based models.

Changes in Immunogenicity of Preserved Aortic Allograft (보존된 동종동맥편 조직의 면역성 변화에 관한 연구)

  • 전예지;박영훈;강영선;최희숙;임창영
    • Journal of Chest Surgery
    • /
    • v.29 no.11
    • /
    • pp.1173-1181
    • /
    • 1996
  • The causes of degenerative changes in allograft cardiac valves are still not well known. Today's preserved allografts possess highly viable endothelial cells, and degeneration of allografts can be facilitated by an immune reaction mediated by these viable cells. To test the antigenicity of endothelial cells, pieces of aortic wall were obtained from fresh and cryopreserved rat allografts. Samples were taken prior to sterilization, after sterilization, and after 1, 2, 7, and 14 days of fresh preservation and cryopreservation. Endothelial cells were tested by immunohistochemical methods using monoclonal antibodies to MHC class I (MRC OX-18), class II (MRC OX-6), and ICAM-1 antigens. After transplantation of each group of aortic allografts into the subcutaneous layer of rats, the populations of CD4+ T cells and CD8+ T cells were analyzed with monoclonal antibodies after 1, 2, 3, 4, 6, and 8 weeks. MHC class I expression was 23.95% before preservation and increased to 35.53~48.08% after preservation (p=0.0183). MHC class II expression was 9.72% before preservation and 10.13~13.39% after preservation (p=0.1599). ICAM-1 expression was 15.02% before preservation and increased to 19.85~35.33% after preservation (p=0.001). The proportion of CD4+ T cells was 42.13% before transplantation; it was 49.23~36.8% after transplantation in the untreated group (p=0.955) and decreased to 29.56~32.80% in the other groups (p=0.0001~0.008). In all groups, the proportion of CD8+ T cells increased from 25.57% before transplantation to 42.32~58.92% after transplantation (p=0.0001~0.0002). The CD4+/CD8+ ratio decreased from 1.22~2.28 at the first week to 0.47~0.95 at the eighth week (p=0.0001). The results revealed that the expression of MHC class I and ICAM-1 in aortic allograft endothelium increased but that of MHC class II did not change, regardless of the preservation method. 
During the 8 weeks after transplantation of the aortic allograft, the subpopulation of CD4+ T cells was unchanged or only slightly decreased, but that of CD8+ T cells progressively increased.
