• Title/Summary/Keyword: case study

Search Results: 26,601

Image Watermarking for Copyright Protection of Images on Shopping Mall (쇼핑몰 이미지 저작권보호를 위한 영상 워터마킹)

  • Bae, Kyoung-Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.147-157
    • /
    • 2013
  • With the advent of a digital environment accessible anytime and anywhere over high-speed networks, digital content can now be distributed and used freely. Ironically, this environment has also given rise to a variety of copyright infringements, and product images used in online shopping malls are pirated frequently. Whether shopping mall images qualify as creative works is controversial. According to a 2001 Supreme Court decision, advertising photographs of ham products merely reproduced the appearance of the objects and conveyed nothing more, and were therefore ruled not to be creative expression; however, the photographer's losses were recognized, and damages were estimated from the typical cost of an advertising photo shoot. According to a 2003 Seoul District Court precedent, if the photographer's personality and creativity are evident in the selection of the subject, the composition of the set, the direction and amount of light, the camera angle, the shutter speed and shutter chance, other shooting methods, and the developing and printing process, the work should be protected by copyright law. For shopping mall images to receive copyright protection under the law, they must do more than simply convey the state of the product; effort is required so that the photographer's personality and creativity can be recognized. Accordingly, the cost of producing mall images increases, and the need for copyright protection grows. The product images of online shopping malls have a very distinctive composition, unlike general pictures such as portraits and landscape photographs, so general image watermarking techniques cannot satisfy their watermarking requirements. 
Because the backgrounds of product images commonly used in shopping malls are white, black, or a gray-scale gradient, there is little space in which to embed a watermark, and these areas are very sensitive to even slight changes. In this paper, the characteristics of images used in shopping malls are analyzed and a watermarking technique suitable for them is proposed. The proposed technique divides a product image into small blocks, transforms each block with the DCT (Discrete Cosine Transform), and then inserts the watermark information by quantizing the DCT coefficients. Because uniform quantization of the DCT coefficients causes visible blocking artifacts, the proposed algorithm uses a weighted mask that quantizes the coefficients near block boundaries finely and the coefficients in the center of the block coarsely. This mask improves the subjective visual quality as well as the objective quality of the images. In addition, to improve the security of the algorithm, the blocks in which the watermark is embedded are selected randomly, and a turbo code is used to reduce the BER when extracting the watermark. The PSNR (Peak Signal-to-Noise Ratio) of a shopping mall image watermarked by the proposed algorithm is 40.7~48.5 dB, and the BER (Bit Error Rate) after JPEG compression with QF = 70 is 0. This means the watermarked image is of high quality and the algorithm is robust to the JPEG compression generally used at online shopping malls. The BER is also 0 for a 40% change in size and a 40-degree rotation. In general, shopping malls use images compressed with a QF higher than 90. Because a pirated image is replicated from the original image, the proposed algorithm can identify copyright infringement in most cases. As the experimental results show, the proposed algorithm is suitable for shopping mall images with simple backgrounds. 
However, future work should enhance the robustness of the proposed algorithm, because some robustness is lost after the masking process.
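The core embedding step, quantizing a DCT coefficient of a block so that its quantizer level carries one watermark bit, can be sketched as follows. This is a minimal quantization-index-modulation (QIM) sketch, not the paper's exact algorithm: the weighted mask, random block selection, and turbo coding are omitted, and the coefficient position and step size are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_bit(block, bit, coeff=(3, 2), step=24.0):
    # Transform the 8x8 block to the DCT domain and force the chosen
    # mid-frequency coefficient onto a quantizer level whose parity equals the bit.
    c = dctn(block.astype(float), norm="ortho")
    q = int(np.round(c[coeff] / step))
    if q % 2 != bit:
        q += 1
    c[coeff] = q * step
    return idctn(c, norm="ortho")

def extract_bit(block, coeff=(3, 2), step=24.0):
    # Re-quantize the same coefficient; its parity is the embedded bit.
    c = dctn(np.asarray(block, dtype=float), norm="ortho")
    return int(np.round(c[coeff] / step)) % 2

rng = np.random.default_rng(1)
# a bright, low-contrast block, like the white background of a product photo
original = rng.integers(235, 256, size=(8, 8)).astype(float)
bits = [1, 0, 1, 1]
marked = [embed_bit(original, b) for b in bits]
recovered = [extract_bit(m) for m in marked]
print(recovered)  # matches bits
```

A larger quantization step makes the watermark more robust to compression at the cost of visual quality, which is the trade-off the paper's weighted mask manages per coefficient.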

A Study for Improvement of Erythropoietin Responsiveness in Hemodialysis Patients (혈액 투석 환자에서 조혈 호르몬 치료 효과 향상에 대한 연구)

  • Park, Jong-Won;Do, Jun-Yeung;Yoon, Kyung-Woo
    • Journal of Yeungnam Medical Science
    • /
    • v.18 no.2
    • /
    • pp.226-238
    • /
    • 2001
  • Background: Anemia in chronic renal failure plays an important role in increasing the morbidity of dialysis patients. The causes of the anemia are multifactorial. With the use of erythropoietin (EPO), most uremia-induced anemia can be overcome; however, about 10% of renal failure patients show EPO-resistant anemia. Hyporesponsiveness to EPO has been related to many factors: iron deficiency, aluminum intoxication, inflammation, malignancy, and secondary hyperparathyroidism. We therefore evaluated the improvement in EPO responsiveness after correction of several of these factors. Materials and Methods: Seventy-two patients on hemodialysis for more than 6 months were treated with intravenous ascorbic acid (IVAA, 300 mg t.i.w. for 12 weeks). After administration of IVAA for 12 weeks, patients were classified into several groups according to iron status, serum aluminum levels, and i-PTH levels. Individualized treatments were then performed: increased iron supplementation for absolute iron deficiency, active vitamin D3 for secondary hyperparathyroidism, and desferrioxamine (DFO, 5 mg/kg t.i.w.) for aluminum intoxication or hyperferritinemia. Results: 1) Results of IVAA therapy for 12 weeks in all patients (n=72): hemoglobin levels at weeks 2, 4, and 6 were significantly elevated compared to baseline, but hemoglobin at weeks 8, 10, and 12 was not significantly different. 2) Results of IVAA therapy for 20 weeks in patients with 100 ${\mu}g/l$ ${\leq}$ ferritin < 500 ${\mu}g/l$ and transferrin saturation (Tsat) below 30% (n=30): after 12 weeks of IVAA, the response to therapy was evaluated according to iron status. Patients with 100 ${\mu}g/l$ ${\leq}$ ferritin < 500 ${\mu}g/l$ and Tsat below 30% showed the most effective response; these patients were treated for a further 8 weeks. Hemoglobin levels at weeks 2 and 4 were significantly increased compared to baseline, with significantly reduced doses of EPO at weeks 2, 4, 6, 10, 12, 16, and 20. 
Concomitantly, a significant improvement in Tsat at weeks 2, 6, 16, and 20 compared to baseline was identified. 3) Results of IVAA therapy for 12 weeks followed by DFO therapy for 8 weeks in patients with serum aluminum above 4 ${\mu}g/l$ (n=12): hemoglobin levels were not significantly increased during the 12 weeks of IVAA therapy, but dosages of EPO were significantly decreased at weeks 2, 4, 6, and 8 of DFO therapy compared to the pre-treatment status. Conclusion: IVAA can be helpful in treating anemia caused by functional iron deficiency and can reduce the dosage of EPO needed for anemia correction. In cases of increased serum aluminum level, administration of low-dose DFO can also reduce the EPO requirement.


Evaluation of Tuberculosis Activity in Patients with Anthracofibrosis by Use of Serum Levels of IL-2 $sR{\alpha}$, IFN-${\gamma}$ and TBGL(Tuberculous Glycolipid) Antibody (Anthracofibrosis의 결핵활동성 지표로서 혈청 IL-2 $sR{\alpha}$, IFN-${\gamma}$, 그리고 TBGL(tuberculous glycolipid) antibody 측정의 의의)

  • Jeong, Do Young;Cha, Young Joo;Lee, Byoung Jun;Jung, Hye Ryung;Lee, Sang Hun;Shin, Jong Wook;Kim, Jae-Yeol;Park, In Won;Choi, Byoung Whui
    • Tuberculosis and Respiratory Diseases
    • /
    • v.55 no.3
    • /
    • pp.250-256
    • /
    • 2003
  • Background: Anthracofibrosis, a descriptive term for multiple black pigmentations with fibrosis on bronchoscopic examination, has a close relationship with active tuberculosis (TB). However, in some cases of anthracofibrosis, TB activity is determined only at a later stage by TB culture results. Therefore, it is necessary to identify early markers of TB activity in anthracofibrosis. Several reports have investigated the serum levels of IL-2 $sR{\alpha}$, IFN-${\gamma}$ and TBGL antibody for the evaluation of TB activity. In the present study, we measured these serologic markers to evaluate TB activity in patients with anthracofibrosis. Methods: Anthracofibrosis was defined as deep pigmentation (in more than two lobar bronchi) with fibrotic stenosis of the bronchi on bronchoscopic examination. Serum from patients with anthracofibrosis was collected and stored under refrigeration before the start of anti-TB medication. Serum from healthy volunteers (N=16) and from patients with active TB before (N=22) and after (N=13) 6 months of medication was also collected and stored. Serum IL-2 $sR{\alpha}$ and IFN-${\gamma}$ were measured with an ELISA kit (R&D Systems, USA), and serum TBGL antibody was measured with a TBGL EIA kit (Kyowa Inc., Japan). Results: Serum levels of IL-2 $sR{\alpha}$ in healthy volunteers, active TB patients before and after medication, and patients with anthracofibrosis were $640{\pm}174$, $1,611{\pm}2,423$, $953{\pm}562$, and $863{\pm}401$ pg/ml, respectively. Serum IFN-${\gamma}$ levels were 0, $8.16{\pm}17.34$, $0.70{\pm}2.53$, and $2.33{\pm}6.67$ pg/ml, and TBGL antibody levels were $0.83{\pm}0.80$, $5.91{\pm}6.71$, $6.86{\pm}6.85$, and $3.22{\pm}2.59$ U/ml, respectively. The serum level of TBGL antibody was lower than that of the other groups (p<0.05). There were no significant differences in serum IL-2 $sR{\alpha}$ and IFN-${\gamma}$ levels among the four groups. 
Conclusion: The serum levels of IL-2 $sR{\alpha}$, IFN-${\gamma}$ and TBGL antibody were not useful for evaluating TB activity in patients with anthracofibrosis. More useful methods need to be developed to differentiate active TB in patients with anthracofibrosis.

Clinical Applications and Efficacy of Korean Ginseng (고려인삼의 주요 효능과 그 임상적 응용)

  • Nam, Ki-Yeul
    • Journal of Ginseng Research
    • /
    • v.26 no.3
    • /
    • pp.111-131
    • /
    • 2002
  • Korean ginseng (Panax ginseng C.A. Meyer) has received a great deal of attention in the Orient and the West as a tonic agent, health food, and/or alternative herbal therapeutic agent. However, controversy remains over the scientific evidence for its pharmacological effects, especially the evaluation of clinical efficacy and the methodological approach. The author reviewed articles published since 1980, when pharmacodynamic studies of ginseng began in earnest. Special attention was paid to metabolic disorders including diabetes mellitus, circulatory disorders, malignant tumors, sexual dysfunction, and physical and mental performance, to give clear information to those interested in the pharmacological study of ginseng and to promote its clinical use. With respect to chronic diseases such as diabetes mellitus, atherosclerosis, high blood pressure, malignant disorders, and sexual disorders, ginseng seems to play a preventive and restorative role rather than a therapeutic one. In particular, ginseng plays a significant role in ameliorating subjective symptoms and in preventing quality of life from deteriorating under long-term exposure to chemotherapeutic agents. Its potency also seems mild, so it may be more effective when used concomitantly with conventional therapy. Clinical studies on the tonic effect of ginseng on work performance have demonstrated that physical and mental dysfunction induced by various stresses is improved by increasing the adaptability of the physical condition. However, the results obtained from clinical studies vary with the scientists who performed them and cannot yet be stated as indications. In this respect, standardized ginseng products and systematically planned clinical research with double-blind randomized controlled trials are needed to assess the real efficacy of ginseng and to propose indications. 
The pharmacological mode of action of ginseng has not yet been fully elucidated. Pharmacodynamic and pharmacokinetic research reveals that the role of ginseng does not seem to be confined to a single organ. Ginseng is known to play a beneficial role in such general systems as the central nervous, endocrine, metabolic, and immune systems, which means that it improves general physical and mental conditions. Such a multivalent effect can be attributed to the main active components of ginseng, the ginsenosides, or to non-saponin compounds, which have also recently been suggested to be active ingredients. As with other herbal medicines, the effects of ginseng cannot be attributed to a single compound or group of components; its diverse ingredients act synergistically or antagonistically with one another and work in a harmonized manner. A few cases of adverse effects in clinical use have been reported; however, they are not observed when standardized ginseng products are used and the recommended dose is administered. Unfavorable interactions with other drugs have also been suggested, although information on the products and dosages involved is not available. Nevertheless, efficacy, safety, and interactions or contraindications with other medicines must be investigated more intensively in order to promote the clinical application of ginseng. For example, recommended daily doses are not in agreement: 1-2 g in the West and 3-6 g in the Orient. The duration of administration also seems to vary according to the purpose; two to three months are generally recommended to feel the benefit, but the time- and dose-dependent effects of ginseng still need to be clarified. Furthermore, the effects of ginsenosides transformed by the intestinal microflora, and the differential effects associated with ginsenoside content and composition, should also be clinically evaluated in the future. 
In conclusion, the more widespread use of ginseng as an herbal medicine or nutraceutical supplement warrants more rigorous investigation to assess its efficacy and safety. In addition, careful quality control of ginseng preparations should be carried out to ensure acceptable standardization of commercial products.

Memory Organization for a Fuzzy Controller

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy-logic-based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve, in a very natural way, many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to pre-defined shapes. These kinds of functions are able to cover a large spectrum of applications with a limited usage of memory, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to the computation of the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions at those points [3,10,14,15]. Such a solution provides a satisfying computational speed, a very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, a significant waste of memory can also occur: it is indeed possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the points shared among fuzzy sets, i.e. points with non-null membership values in several sets, are very few. 
More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on this hypothesis. Moreover, we use a technique that, even though it does not restrict the shapes of the membership functions, strongly reduces the computation time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (so as to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 elements of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by: Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values on any element of the universe of discourse, dm(m) is the dimension (in bits) of a membership value m, and dm(fm) is the dimension of the word representing the index of the membership function. In our case, then, Length = 3 × (5 + 3) = 24, and the memory dimension is therefore 128 × 24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of every fuzzy set: the word dimension would be 8 × 5 bits, and the memory would have been 128 × 40 bits. Coherently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets. 
Focusing on the elements 32, 64, and 96 of the universe of discourse, they will be memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then the corresponding non-null weight derives from the rule and is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value on each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space; and weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method. 
The number of non-null membership values on any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
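The sizing arithmetic in the abstract can be reproduced in a short script. This is purely an illustration of the word-length formula; the variable names mirror the abstract's notation, and Python stands in for what is, in the paper, a hardware design calculation.

```python
def word_length(nfm, dm_m, dm_fm):
    # Length = nfm * (dm(m) + dm(fm)): for each of the at-most-nfm non-null
    # memberships of an element, store the value and the set index
    return nfm * (dm_m + dm_fm)

universe = 128   # elements in the universe of discourse (memory rows)
n_sets = 8       # fuzzy sets in the term set
dm_m = 5         # bits per membership value (32 discretization levels)
dm_fm = 3        # bits to index one of the 8 membership functions
nfm = 3          # at most 3 non-null memberships per element

compact = universe * word_length(nfm, dm_m, dm_fm)  # 128 * 24 bits
vectorial = universe * n_sets * dm_m                # 128 * 40 bits
print(compact, vectorial)  # 3072 5120
```

The compact representation thus needs 24-bit words instead of 40-bit ones, the 40% memory saving the paper claims for this term set.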


Structure of Export Competition between Asian NIEs and Japan in the U.S. Import Market and Exchange Rate Effects (한국(韓國)의 아시아신흥공업국(新興工業國) 및 일본(日本)과의 대미수출경쟁(對美輸出競爭) : 환율효과(換率效果)를 중심(中心)으로)

  • Jwa, Sung-hee
    • KDI Journal of Economic Policy
    • /
    • v.12 no.2
    • /
    • pp.3-49
    • /
    • 1990
  • This paper analyzes U.S. demand for imports from the Asian NIEs and Japan, utilizing the Almost Ideal Demand System (AIDS) developed by Deaton and Muellbauer, with an emphasis on the effect of changes in the exchange rate. The empirical model assumes a two-stage budgeting process in which the first stage represents the allocation of total U.S. demand among three groups: the Asian NIEs and Japan, six Western developed countries, and the U.S. domestic non-tradables and import-competing sector. The second stage represents the allocation of total U.S. imports from the Asian NIEs and Japan among them, by country. Following the AIDS model, the share equation for the Asian NIEs and Japan in U.S. nominal GNP is estimated as a single equation for the first stage. The share equations for those five countries in total U.S. imports are estimated as a system with the general demand restrictions of homogeneity, symmetry, and adding-up, together with polynomially distributed lag restrictions. The negativity condition is also satisfied in all cases. The overall results of these complicated estimations, using quarterly data from the first quarter of 1972 to the fourth quarter of 1989, are quite promising in terms of the significance of individual estimators and other statistics. The conclusions drawn from the estimation results and the derived demand elasticities can be summarized as follows. First, the exports of each Asian NIE to the U.S. are competitive with (substitutes for) Japan's exports, while complementary to the exports of fellow NIEs, with the exception of the competitive relation between Hong Kong and Singapore. Second, the exports of each Asian NIE and of Japan to the U.S. are competitive with those of the Western developed countries, while they are complementary to the U.S. non-tradables and import-competing sector. 
Third, as far as both the first and second stages of budgeting are considered, the imports from each Asian NIE and Japan are luxuries in total U.S. consumption. However, when only the second budgeting stage is considered, the imports from Japan and Singapore are luxuries in U.S. imports from the NIEs and Japan, while those of Korea, Taiwan, and Hong Kong are necessities. Fourth, the above results may be evidenced more concretely in their implied exchange rate effects. It appears that, in general, a change in the yen-dollar exchange rate will have at least as great an impact on an NIE's share and volume of exports to the U.S., though in the opposite direction, as a change in the exchange rate of the NIE's own currency $vis-{\grave{a}}-vis$ the dollar. Asian NIEs, therefore, should counteract yen-dollar movements in order to stabilize their exports to the U.S. More specifically, Korea should depreciate the value of the won relative to the dollar by approximately the same proportion as the depreciation rate of the yen $vis-{\grave{a}}-vis$ the dollar, in order to maintain the volume of Korean exports to the U.S. In the worst-case scenario, Korea should devalue the won by three times the magnitude of the yen's depreciation rate, in order to keep its market share in the aforementioned five countries' total exports to the U.S. Finally, this study provides additional information which may support the empirical findings on the competitive relations among the Asian NIEs and Japan. The correlation matrices among the structures of those five countries' exports to the U.S. during the 1970s and 1980s were estimated, with the export structure constructed as the shares of each of the 29 industrial sectors' exports, as defined by the 3-digit KSIC, in total exports to the U.S. from each individual country. 
In general, the correlations between each of the four Asian NIEs and Japan, and that between Hong Kong and Singapore, are all far below .5, while the ones among the Asian NIEs themselves (except for the one between Hong Kong and Singapore) all greatly exceed .5. If there exists a tendency on the part of the U.S. to import goods in each specific sector from different countries in a relatively constant proportion, the export structures of those countries will probably exhibit a high correlation. To take this hypothesis to the extreme, if the U.S. maintained an absolutely fixed ratio between its imports from any two countries for each of the 29 sectors, the correlation between the export structures of those two countries would be perfect. Therefore, since any two goods purchased in a fixed proportion can be classified as close complements, a high correlation between export structures implies a complementary relationship between them; conversely, a low correlation implies a competitive relationship. Under this interpretation, the pattern formed by the correlation coefficients among the five countries' export structures to the U.S. is consistent with the empirical findings of the regression analysis.
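The correlation-based reading of export structures can be illustrated with a small numerical example. The share vectors below are invented for illustration and are not the paper's KSIC data; each row is a country's hypothetical share of exports to the U.S. across a handful of sectors.

```python
import numpy as np

# hypothetical sectoral export-share vectors (each sums to 1)
korea  = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
taiwan = np.array([0.28, 0.27, 0.18, 0.17, 0.10])  # sectoral mix similar to Korea's
japan  = np.array([0.05, 0.10, 0.15, 0.30, 0.40])  # concentrated in different sectors

# high correlation -> complementary structures under the fixed-proportion hypothesis
r_similar = np.corrcoef(korea, taiwan)[0, 1]
# low (here negative) correlation -> competitive structures
r_distinct = np.corrcoef(korea, japan)[0, 1]
```

With vectors shaped like these, `r_similar` lands well above .5 and `r_distinct` well below it, mirroring the NIE-NIE versus NIE-Japan pattern the paper reports.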


A Study on the Improvement Plans of Police Fire Investigation (경찰화재조사의 개선방안에 관한 연구)

  • SeoMoon, Su-Cheol
    • Journal of Korean Institute of Fire Investigation
    • /
    • v.9 no.1
    • /
    • pp.103-121
    • /
    • 2006
  • We are living in more comfortable circumstances thanks to social development and an improved standard of living, but, on the other hand, we are exposed to an increasing number of fires on account of larger, taller, and deeper-underground buildings and the use of various energy resources. The flooring materials of modern residences have gone through various alterations according to the use of the residence and are now used as finished goods for the floors of apartments, houses, and shops. There are many kinds of such materials in everyday contact, but, first of all, we need to experiment on the spread of fire over the heated floors used in apartments and over the floor coverings that are readily available. As scientific investigators, we encounter accidents caused by arson, or accidental fires closely connected with petroleum products, on flooring materials that give rise to many problems. On this account, I propose that we conduct experiments on the fire patterns produced by each petroleum product and that we use them to discriminate accidental fires from arson. In an investigation, finding a live ember may seem essential to clearing up the cause of a fire, but it is not, by itself, the cause of the fire. Besides, all kinds of fire cases and accidents are subject to legislation and standards intended to minimize the damage from fires and to cope with them at an early stage. That is to say, we are supposed to install various kinds of electrical apparatus, automatic alarm equipment, and automatic fire extinguishers in order to protect ourselves from the danger of fire, to check them at any time, to escape urgently when a fire breaks out, and to use fireproof construction to prevent flames from spreading to neighboring areas. 
Namely, several factors should be taken into consideration when investigating the cause of a fire-related case or accident, which means it is unreasonable for a single investigator or a single investigative team to clarify both the area of origin and the cause of a fire. Accordingly, this thesis limits its explanations to the judgment and verification of the cause of a fire and of the concrete fire-spread area through on-site investigation of the place where the fire broke out. Fire discernment also focuses on the early fire-spread area and the sources of ignition, and I think the realities of police fire investigation and its problems are still a matter of debate. The cause of a fire must be examined by logical judgment on the basis of abundant scientific knowledge and experience covering the whole of fire phenomena. The judgment of the cause should center on the fire-spread situation at the scene, and verification should proceed by situational proof from the traces of fire spread back to the sources of ignition. The causal relation of a fire outbreak should not be established by arbitrary opinion divorced from concrete facts, and there is a great chance of error in drawing deductions from coincidence. It is absolutely necessary to observe with an objective attitude and to grasp the situation of the fire during the investigation of its cause; looking at the scene with prejudice is not allowed. The source of ignition itself is likely to be regarded as the cause of the fire, and that makes the results doubtful according to the interests of the individual investigators. 
So to speak, each party sets about its investigation with its own hopes: the police hoping it is not arson, the fire department hoping it is not a problem of installations or equipment, insurance companies hoping it is arson, the electrical fields hoping it is not an electrical defect, and the gas-related parties hoping it is not a gas problem. Under these conditions one cannot look forward to fair investigation or allay such misgivings, because the source of ignition itself is taken as the cause of the fire and civil or criminal responsibility attaches to it. For this reason, the investigation of the cause of a fire should be conducted with independent research, investigation, and appraisal, and finally the cause should be clarified with all the results put together.


Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung;Yang, Jin Yong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.35-48
    • /
    • 2014
  • According to the 2013 construction market outlook report, the liquidation of construction companies is expected to continue due to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact than those in other industries, yet, because of the different nature of their capital structure and debt-to-equity ratio, they are harder to forecast than the bankruptcies of companies in other industries. The construction industry operates on greater leverage, with high debt-to-equity ratios and project cash flow concentrated in the second half. The economic cycle greatly influences construction companies, so downturns tend to rapidly increase their bankruptcy rates. High leverage, coupled with increased bankruptcy rates, could place a greater burden on banks providing loans to construction companies. Nevertheless, bankruptcy prediction research has concentrated mainly on financial institutions, and construction-specific studies are rare. Bankruptcy prediction models based on corporate financial data have been studied for some time in various ways. However, these models are intended for companies in general and may not be appropriate for forecasting bankruptcies of construction companies, which typically have high liquidity risks. The construction industry is capital-intensive, operates on long timelines with large-scale investment projects, and has comparatively longer payback periods than other industries. With its unique capital structure, a model used to judge the financial risk of companies in general can be difficult to apply to the construction industry. Diverse studies of bankruptcy forecasting models based on companies' financial statements have been conducted for many years. 
The Altman Z-score, first published in 1968, is commonly used as a bankruptcy forecasting model. It forecasts the likelihood of a company going bankrupt using a simple formula, classifying the result into three categories and evaluating the corporate status as dangerous, moderate, or safe. A company in the "dangerous" category has a high likelihood of bankruptcy within two years, while those in the "safe" category have a low likelihood of bankruptcy. For companies in the "moderate" category, the risk is difficult to forecast. Many of the construction firms in this study fell into the "moderate" category, which made it difficult to forecast their risk. Along with the development of machine learning, recent studies of corporate bankruptcy forecasting have adopted this technology. Pattern recognition, a representative application area of machine learning, is applied to forecasting corporate bankruptcy: patterns are analyzed based on a company's financial information and then judged as belonging to either the bankruptcy-risk group or the safe group. The representative machine learning models previously used in bankruptcy forecasting are Artificial Neural Networks, Adaptive Boosting (AdaBoost), and the Support Vector Machine (SVM). There are also many hybrid studies combining these models.
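The three-zone classification described above can be sketched with the classic Altman (1968) coefficients for public manufacturing firms; the input values in the usage check below are invented for illustration, not data from the paper.

```python
def altman_z_score(working_capital, retained_earnings, ebit,
                   market_value_equity, sales, total_assets, total_liabilities):
    """Classic Altman (1968) Z-score: a weighted sum of five financial ratios."""
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = market_value_equity / total_liabilities
    x5 = sales / total_assets
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

def z_zone(z):
    """Map a Z-score to the three categories the abstract describes."""
    if z > 2.99:
        return "safe"
    if z < 1.81:
        return "dangerous"
    return "moderate"  # risk hard to forecast, per the abstract

# Hypothetical firm: all figures in the same currency unit
z = altman_z_score(100, 100, 100, 100, 100, 1000, 100)
print(z, z_zone(z))
```

A score in the wide "moderate" band (1.81 to 2.99) is exactly the indeterminate case the abstract notes many construction firms fall into.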
Existing studies using the traditional Z-score technique or machine-learning bankruptcy prediction focus on companies in non-specific industries, so the industry-specific characteristics of companies are not considered. In this paper, we confirm that adaptive boosting (AdaBoost) is the most appropriate forecasting model for construction companies, analyzed by company size. We classified construction companies into three groups (large, medium, and small) based on capital, and analyzed the predictive ability of AdaBoost for each group. The experimental results showed that AdaBoost has greater predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
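A minimal sketch of the AdaBoost classification setup the abstract describes, using scikit-learn. The features and labels here are synthetic stand-ins (the paper's financial dataset is not available), and the feature names are hypothetical.

```python
# Sketch: AdaBoost for binary bankruptcy classification on synthetic data.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
# Hypothetical standardized financial ratios: debt ratio, current ratio, ROA
X = rng.normal(size=(n, 3))
# Synthetic label: "bankrupt" when leverage is high and profitability is low
y = ((X[:, 0] - X[:, 2]) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# AdaBoost combines many weak learners (decision stumps by default),
# reweighting misclassified samples at each boosting round.
clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
print(accuracy_score(y_te, clf.predict(X_te)))
```

In practice one would compare this against the other models the abstract names (neural networks, SVM) on each capital-size group.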

Analysis on Factors Influencing Welfare Spending of Local Authority : Implementing the Detailed Data Extracted from the Social Security Information System (지방자치단체 자체 복지사업 지출 영향요인 분석 : 사회보장정보시스템을 통한 접근)

  • Kim, Kyoung-June;Ham, Young-Jin;Lee, Ki-Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.141-156
    • /
    • 2013
  • Research on the welfare services of local governments in Korea has largely focused on isolated issues such as disability, childcare, and aging (Kang, 2004; Jung et al., 2009). Lately, however, local officials have realized that they need more comprehensive welfare services for all residents, not just for the above-mentioned groups. Still, the focused-group approach has remained the main research stream for various reasons (Jung et al., 2009; Lee, 2009; Jang, 2011). The Social Security Information System is an information system that comprehensively manages 292 welfare benefits provided by 17 ministries and 40 thousand welfare services provided by 230 local authorities in Korea. The purpose of the system is to improve the efficiency of the social welfare delivery process. Studies of local government expenditure have been on the rise over the last few decades since the restoration of local autonomy, but these studies have limitations in data collection. Measurement of a local government's welfare effort (spending) has primarily relied on per-capita expenditures or budgets set aside for welfare. This practice of using monetary value per capita as a "proxy value" for welfare effort is based on the assumption that expenditure is directly linked to welfare effort (Lee et al., 2007). This expenditure/budget approach commonly uses the total welfare amount or a percentage figure as the dependent variable (Wildavsky, 1985; Lee et al., 2007; Kang, 2000). However, the current practice of using the actual amount spent or a percentage figure as a dependent variable has limitations: since budget and expenditure are greatly influenced by a local government's total budget, relying on such monetary values may inflate or deflate the true "welfare effort" (Jang, 2012). In addition, government budgets usually contain a large amount of administrative cost, e.g., salaries for local officials, which is largely unrelated to actual welfare expenditure (Jang, 2011).
This paper used local government welfare service data from the detailed data sets linked to the Social Security Information System. Its purpose is to analyze the factors that affected the self-funded social welfare spending of 230 local authorities in 2012. The paper applied a multiple regression model to the pooled financial data from the system, and based on the regression analysis, the factors affecting self-funded welfare spending were identified. In our research model, we use a local government's welfare budget as a percentage of its total budget as the true measurement of its welfare effort (spending). In doing so, we exclude central government subsidies or support used for local welfare services, because central government welfare support does not truly reflect the welfare effort of a local government. The dependent variable of this paper is the volume of welfare spending, and the independent variables fall into three categories: socio-demographic factors, the local economy, and the financial capacity of the local government. This paper categorized local authorities into three groups: districts, cities, and suburban areas. The model used a dummy variable as the control variable (local political factor). This paper demonstrated that the volume of spending on welfare services is commonly influenced by the ratio of the welfare budget to the total local budget, the infant population, the financial self-reliance ratio, and the level of unemployment. Interestingly, the influential factors differ by the size of the local government. In the analysis of the determinants of self-funded welfare spending, we found significant effects of local government financial characteristics (the degree of financial independence and the ratio of the social welfare budget), of the regional economy (the job openings-to-applications ratio), and of population characteristics (the proportion of infants).
These results mean that local authorities should adopt differentiated welfare strategies according to their conditions and circumstances. This paper is meaningful in that it has identified the significant factors influencing the welfare spending of local governments in Korea.
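The regression setup the abstract describes can be sketched as follows; the predictor names mirror the described variables, but all values are synthetic stand-ins (the system's actual data is not public), and the coefficients used to generate `y` are invented.

```python
# Sketch: multiple regression of welfare-spending share on synthetic predictors.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 230  # one row per local authority, as in the study

X = np.column_stack([
    rng.uniform(0, 1, n),    # financial independence rate
    rng.uniform(0, 0.3, n),  # social welfare budget / total budget
    rng.uniform(0, 2, n),    # job openings-to-applications ratio
    rng.uniform(0, 0.1, n),  # proportion of infants in the population
])
# Synthetic dependent variable: self-funded welfare spending share,
# generated with invented coefficients plus noise.
y = 0.5 * X[:, 1] + 0.2 * X[:, 3] + rng.normal(0, 0.01, n)

model = LinearRegression().fit(X, y)
print(model.coef_)        # estimated effect of each predictor
print(model.score(X, y))  # R^2 of the fitted model
```

The study additionally splits authorities into districts, cities, and suburban areas with a dummy control variable, which would enter `X` as extra 0/1 columns.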

Surgical Treatment for Isolated Aortic Endocarditis: a Comparison with Isolated Mitral Endocarditis (대동맥 판막만을 침범한 감염성 심내막염의 수술적 치료: 승모판막만을 침범한 경우와 비교 연구)

  • Hong, Seong-Beom;Park, Jeong-Min;Lee, Kyo-Seon;Ryu, Sang-Woo;Yun, Ju-Sik;CheKar, Jay-Key;Yun, Chi-Hyeong;Kim, Sang-Hyung;Ahn, Byoung-Hee
    • Journal of Chest Surgery
    • /
    • v.40 no.9
    • /
    • pp.600-606
    • /
    • 2007
  • Background: Infective endocarditis shows high surgical mortality and morbidity rates, especially for aortic endocarditis. This study investigates the clinical characteristics and operative results of isolated aortic endocarditis. Material and Method: From July 1990 to May 2005, 25 patients with isolated aortic endocarditis (Group I, male : female = 18 : 7, mean age 43.2±18.6 years) and 23 patients with isolated mitral endocarditis (Group II, male : female = 10 : 13, mean age 43.2±17.1 years) underwent surgical treatment in our hospital. All patients in Group I had native valve endocarditis, and 7 showed a bicuspid aortic valve. In Group II, two patients had prosthetic valve endocarditis and one patient developed mitral endocarditis after a mitral valvuloplasty. Positive blood cultures were obtained from 11 (44.0%) patients in Group I and 10 (43.3%) patients in Group II. The pre-operative left ventricular ejection fractions were 60.8±8.7% and 62.1±8.1% (p=0.945), respectively. In Group I, there was moderate to severe aortic regurgitation in 18 patients and vegetations were detected in 17 patients. In Group II, there was moderate to severe mitral regurgitation in 19 patients and vegetations were found in 18 patients. One patient had a ventricular septal defect and another underwent a Maze operation with microwaves due to atrial fibrillation. We performed echocardiography before discharge and each year during follow-up. The mean follow-up period was 37.2±23.5 (range 9~123) months. Result: Postoperative complications included three cases of low cardiac output in Group I, and one case each of re-operation for bleeding and of low cardiac output in Group II. One patient in Group I died from an intra-cranial hemorrhage on the first day after surgery, but there were no early deaths in Group II.
The 1-, 3-, and 5-year valve-related event-free rates were 92.0%, 88.0%, and 88.0% for Group I, and 91.3%, 76.0%, and 76.0% for Group II, respectively. The 1-, 3-, and 5-year survival rates were 96.0%, 96.0%, and 96.0% for Group I, and 100%, 84.9%, and 84.9% for Group II, respectively. Conclusion: Acceptable surgical and mid-term clinical results were obtained for aortic endocarditis.