• Title/Summary/Keyword: medium

Development of Standard Process for Private Information Protection of Medical Imaging Issuance (개인정보 보호를 위한 의료영상 발급 표준 업무절차 개발연구)

  • Park, Bum-Jin;Yoo, Beong-Gyu;Lee, Jong-Seok;Jeong, Jae-Ho;Son, Gi-Gyeong;Kang, Hee-Doo
    • Journal of radiological science and technology
    • /
    • v.32 no.3
    • /
    • pp.335-341
    • /
    • 2009
  • Purpose : With the development of IT, medical image issuance has shifted from conventional film to digital media such as CD and DVD. Other medical records departments verify an applicant's identity thoroughly, but medical imaging departments often cannot. We therefore examined applicants' awareness of personal information protection and the current state of CD/DVD medical image issuance at various medical facilities, performed a comparative analysis of relevant domestic and foreign laws and recommendations, and finally proposed a standard procedure for medical image issuance suited to the domestic environment. Materials and methods : First, we surveyed by telephone the issuance procedures and required documents for medical image issuance at metropolitan medical facilities from June 1 to July 1, 2008. In accordance with Article 21-2 of the Medical Service Act, we proposed a standard for the documents required of each class of applicant: (1) the patient in person → verification of identification; (2) a family member → verification of the applicant's identification and a document proving the family relationship (health insurance card, certified copy, etc.); (3) a third party or representative → verification of the applicant's identification, a letter of attorney, and a certificate of seal impression. Second, we checked applicants' documents against this standard for medical image issuance at Kyung Hee University Medical Center over three months, from May 1 to July 31, 2008. Third, we developed a work process for verifying the required documents and for handling incomplete applications.
Results : Among the surveyed hospitals, 4 (12%) required all of the specified documents, 4 (12%) issued images to any applicant, and 9 (27%) handled issuance in the clinical departments rather than in a dedicated medical imaging issuance office, so their document requirements could not be determined. In the three-month survey of applicants' documents, 629 cases (49%) satisfied all conditions, 416 cases (33%) presented only part of the required documents, and 226 cases (18%) presented none. Based on these results, we established a service model for the objective reception of image export requests, reducing friction with patients and promoting patient convenience. Conclusion : PACS is classified as a medical device, which indicates the high importance of the medical information it holds. Therefore, only medical information administrators who have received professional training should perform the issuance process, and they must carry out identification checks thoroughly.
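
The three applicant cases in the proposed standard can be sketched as a simple document-check rule. This is a minimal illustration; the function and document names are hypothetical, not part of the hospital's actual system:

```python
# Sketch of the proposed issuance standard: the documents that must be
# verified for each class of applicant. Names are illustrative only.
REQUIRED_DOCUMENTS = {
    "self":           {"id_card"},
    "family":         {"id_card", "family_relation_document"},
    "representative": {"id_card", "letter_of_attorney", "seal_certificate"},
}

def may_issue(applicant_type: str, presented: set) -> bool:
    """Issue the medical image only if every required document was verified."""
    required = REQUIRED_DOCUMENTS.get(applicant_type)
    if required is None:
        raise ValueError(f"unknown applicant type: {applicant_type}")
    return required <= presented  # subset check

# A family member presenting only an ID card is refused:
print(may_issue("family", {"id_card"}))                              # False
print(may_issue("family", {"id_card", "family_relation_document"}))  # True
```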

Determination of optimum fertilizer rates for barley reflecting the effect of soil and climate on the response to NPK fertilizers (기상(氣象) 및 토양조건(土壤條件)으로 본 대맥(大麥)의 NPK 시비적량결정(施肥適量決定))

  • Park, Nae Joung;Lee, Chun Soo;Ryu, In Soo;Park, Chun Sur
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.7 no.3
    • /
    • pp.177-184
    • /
    • 1974
  • An attempt was made to determine a simple and reasonable fertilizer recommendation for barley, utilizing present knowledge about the effects of soil and climatic factors on the barley response to NPK fertilizer in Korea and establishing the critical contents of available nutrients in soils. The results are summarized as follows. 1. The relationships between relative yields, or fertilizer rates for maximum yield derived from quadratic response curves, and the contents of organic matter, available P2O5, and exchangeable K in soils were examined. The trend was clearer with relative yields, because of their smaller variation, than with fertilizer rates. 2. Since the relationship between N relative yields and organic matter content in soils was almost linear over the practical range, it was difficult to determine the critical content for nitrogen response by the quadrant method. However, 2.6%, the country average of organic matter content in upland soils, was recommended as the critical point. 3. The average optimum nitrogen rate tended to be higher in heavy-textured soils and in colder regions. 4. The critical P2O5 contents in soil were 96 and 118 ppm in two different years, very close to the country average of 114 ppm in upland soils. The critical K content in soil was 0.32 me/100g, exactly coincident with the country average of exchangeable K in upland soils. 5. According to the contents of available P2O5 and exchangeable K, several ranges were established for convenience in fertilizer recommendation: Very Low, Low, Medium, High, and Very High. 6. More phosphate was recommended in the northern region, clayey soils, and paddy soils, and less in the southern region and sandy soils. More potash was recommended in the northern region and sandy soils, and less in the southern region and clayey soils. 7. The lower the pH, the more fertilizer was recommended; however, liming was considered more effective than increasing the amount of fertilizer.
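
The five soil-test ranges used in the recommendation can be illustrated with a small classification sketch. The cutoff ratios below are hypothetical; only the critical contents (about 114 ppm available P2O5 and 0.32 me/100g exchangeable K, the country averages) come from the text:

```python
# Illustrative classification of a soil-test value into the five ranges
# (Very Low .. Very High) relative to its critical content. The ratio
# cutoffs are hypothetical inventions for demonstration.
def classify(value: float, critical: float) -> str:
    """Place a soil-test value into a range relative to its critical content."""
    ratio = value / critical
    if ratio < 0.5:
        return "Very Low"
    if ratio < 0.8:
        return "Low"
    if ratio < 1.2:
        return "Medium"
    if ratio < 1.5:
        return "High"
    return "Very High"

P2O5_CRITICAL = 114.0  # ppm, country average of available P2O5
K_CRITICAL = 0.32      # me/100g, country average of exchangeable K

print(classify(40.0, P2O5_CRITICAL))  # Very Low
print(classify(0.32, K_CRITICAL))     # Medium
```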

Actual Results on the Control of Illegal Fishing in Adjacent Sea Area of Korea (한국 연근해 불법어업의 지도 단속 실태)

  • Lee, Sang-Jo;Kim, Jin-Kun
    • Journal of Fisheries and Marine Sciences Education
    • /
    • v.10 no.2
    • /
    • pp.139-161
    • /
    • 1998
  • This thesis studies the legal regulations, the system, and the formalities governing the control of illegal fishing. The author analyzed the records of illegal fishing controlled by fishing patrol vessels of the Ministry of Maritime Affairs and Fisheries from 1994 to 1996 in the adjacent sea area of Korea. The results are summarized as follows: 1. The fishing patrol vessels controlled a total of 826 cases over 2,726 days in 292 voyages by 17 vessels in 1994; 1,086 cases over 3,060 days in 333 voyages by 18 vessels in 1995; and 933 cases over 3,126 days in 330 voyages by 19 vessels in 1996. 2. Illegal fishing was generally concentrated from April to September, but year after year it spread more evenly throughout the year. 3. The sea area with the most controlled cases was the south-central area near the Port of Tongyeong, which accounted for about 36~51% of the total, with cases gradually increasing every year. The second was the south-western area near the Port of Yosu, which accounted for about 18~27%, with cases increasing slightly every year. The third was the south-eastern area near Pusan, which accounted for about 13~23%, with cases gradually decreasing year by year. 4. The most controlled kind of illegal fishing was small-size bottom trawling, which accounted for about 81~95% of the total, with cases gradually increasing year by year. The second was medium-size bottom trawling, about 4~7%, with cases gradually decreasing. The third was coastal trawling, about 2~4%, with cases decreasing slightly every year. 5. The most common registered address of illegal fishing operators was Pusan, which accounted for about 33~51% of the total. The second was Cheonnam, about 24~29%, and the third was Kyungnam, about 16~35%. 6. The most frequently violated regulation was Article 57 of the Fisheries Act, which accounted for about 56~64% of the total. The second was Article 23 of the Protectorate for Fisheries Resources, about 21~36%, with cases under it gradually increasing every year.

A THREE-DIMENSIONAL FEM COMPARISON STUDY ABOUT THE FORCE, DISPLACEMENT AND INITIAL STRESS DISTRIBUTION ON THE MAXILLARY FIRST MOLARS BY THE APPLICATION OF VARIOUS ASYMMETRIC HEAD-GEAR (비대칭 헤드기어의 적용시 상악제 1 대구치에 나타나는 힘과 변위 및 초기 응력분포에 관한 3차원 유한요소법적 연구)

  • Kim, Jong-Soo;Cha, Dyung-Suk;Ju, Jin-Won;Lee, Jin-Woo
    • The korean journal of orthodontics
    • /
    • v.31 no.1 s.84
    • /
    • pp.25-38
    • /
    • 2001
  • The purpose of this study was to compare the force, displacement, and stress distribution on the maxillary first molars under various asymmetric head-gear. Finite element models of a unilateral Class II maxillary dental arch and of asymmetric face-bows were made. Three types of asymmetric face-bow were modeled, with the right side 15 mm, 25 mm, or 35 mm shorter than the left. We compared the forces, displacements, and stress distributions generated by each head-gear. The results were as follows. 1. The total force received by the two maxillary first molars was similar in all groups, but the force on the mesially positioned tooth increased as the outer-bow shortened, while the force on the normally positioned tooth decreased. 2. In the lateral force comparison, the buccal force on the normally positioned tooth increased as the outer-bow shortened, while the buccal force on the mesially positioned tooth decreased. Although the net lateral force moved toward the buccal side of the normally positioned tooth as the outer-bow shortened, both maxillary first molars received buccal force, showing an arch expansion effect. 3. The distal forces, extrusion forces, and crown distal tipping on the mesially positioned tooth increased as the outer-bow shortened, while those on the normally positioned tooth decreased. 4. The distal-in rotation of the normally positioned tooth increased as the outer-bow was shortened, but the mesially positioned tooth showed two different results: with the outer-bow shortened 15 mm it showed distal-in rotation, whereas with the outer-bow shortened 25 mm or 35 mm it showed distal-out rotation. Thus, a turning point exists between 15 mm and 25 mm. 5. Examination of the initial stress distribution in the periodontal ligament slightly inferior to the furcation area revealed that the compressive stress in the distobuccal root of the normally positioned tooth moved from the palatal side to the distal side and then to the buccal side as the outer-bow shortened. 6. The same examination revealed that, in the mesiobuccal and palatal roots of the normally positioned tooth and in all three roots of the mesially positioned tooth, the magnitudes of stress changed but the overall stress distributions did not as the outer-bow shortened.

The Effect of Glucocorticoid on the Change of Nitric Oxide and Cytokine Levels in Induced Sputum from Patients with Bronchial Asthma (기관지 천식 환자에서 부신피질 스테로이드 투여 전후 유도객담내 Nitric Oxide 및 Cytokine의 변화)

  • Kim, Tae-Yon;Yoon, Hyeong-Kyu;Choi, Young-Mee;Lee, Sook-Young;Kwon, Soon-Seog;Kim, Young-Kyoon;Kim, Kwan-Hyoung;Moon, Hwa-Sik;Park, Sung-Hak;Song, Jeong-Sup
    • Tuberculosis and Respiratory Diseases
    • /
    • v.48 no.6
    • /
    • pp.922-931
    • /
    • 2000
  • Background : Bronchial asthma is well known to be a chronic airway inflammatory disorder. Recently, sputum induced with hypertonic saline was introduced as a simple and useful noninvasive medium for investigating airway inflammation and symptom severity in patients with asthma. We examined eosinophils, eosinophil cationic protein (ECP), interleukin (IL)-3, IL-5, granulocyte-macrophage colony-stimulating factor (GM-CSF), and nitric oxide (NO) derivatives in induced sputum from patients with bronchial asthma, in order to determine the role of NO and various inflammatory cytokines as markers of airway inflammation and of changes in pulmonary function and symptoms. Methods : A total of 30 patients with bronchial asthma received oral prednisolone 30 mg daily for 2 weeks. Forced expiratory volume in one second (FEV1), total blood eosinophil count, and induced-sputum eosinophil count, ECP, IL-3, IL-5, GM-CSF, and NO derivatives were determined before and after the administration of prednisolone. Results : Of the 30 patients, 13 (43.3%) were male and 17 (56.7%) were female; the mean age was 41.8 years (range 19-64 years). Two patients could not produce sputum at the second visit and 3 were lost to follow-up after their first visit. Two weeks after prednisolone administration, FEV1 (% of predicted) increased significantly from 78.1±20.6% to 90.3±18.3% (P<0.001). The eosinophil percentage in induced sputum decreased significantly after treatment, from 56.1±27.2% to 29.6±21.3% (P<0.001), as did ECP, from 134.5±68.1 μg/L to 41.5±42.4 μg/L (P<0.001). After prednisolone treatment, the eotaxin concentration also showed a decreasing tendency, from 26.7±12.8 pg/ml to 21.7±8.7 pg/ml. The total blood eosinophil count showed a decreasing tendency without statistical significance (425.7±265.9 vs 287.7±294.7), and the concentration of NO derivatives did not change significantly (70.4±44.6 μmol/L vs 91.5±48.3 μmol/L). IL-3, IL-5, and GM-CSF were undetectable in the sputum of most subjects both before and after prednisolone treatment. Before treatment, a significant inverse correlation was observed between FEV1 and sputum ECP (r=-0.364, P<0.05), and there was a significant correlation between sputum eosinophils and eotaxin (r=0.369, P<0.05). Conclusion : The eotaxin and ECP concentrations in induced sputum may be used as markers of airway inflammation after treatment in bronchial asthma. In addition, measuring the sputum eosinophil percentage is a simple method of gauging the degree of airway inflammation and airway obstruction before and after prednisolone treatment. However, unlike exhaled NO, measurement of NO derivatives in induced sputum by the Griess reaction appears to be an ineffective marker of changes in airway inflammation and obstruction.
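
The pre/post comparisons reported above rest on paired statistics; a minimal paired t statistic can be sketched as follows. The numbers are invented for illustration and are NOT the study's data:

```python
import math
import statistics

# Paired t statistic of the kind behind the pre/post comparisons above
# (e.g. sputum eosinophil % before and after prednisolone).
def paired_t(pre, post):
    """t statistic for paired (before, after) measurements."""
    diffs = [a - b for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)          # sample standard deviation
    return mean_d / (sd_d / math.sqrt(n))   # mean diff / its standard error

pre  = [70.0, 55.0, 62.0, 48.0, 66.0, 59.0]  # hypothetical pre-treatment %
post = [35.0, 28.0, 30.0, 22.0, 40.0, 25.0]  # hypothetical post-treatment %
t = paired_t(pre, post)
print(f"paired t = {t:.2f}")
```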

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy-logic-based control theory has gained much interest in the industrial world thanks to its ability to formalize and solve, in a very natural way, many problems that are difficult to quantify at an analytical level. This paper shows a solution for handling membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. Memorizing fuzzy sets, i.e., their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to triangular or trapezoidal shapes, or to pre-defined shapes. These kinds of functions cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation of the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e., by fixing a finite number of points and memorizing the values of the membership functions at those points [3,10,14,15]. Such a solution provides satisfying computational speed and very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can still occur: for each fuzzy set, many elements of the universe of discourse may have a membership value equal to zero, and it has been observed that in almost all cases the points shared among fuzzy sets, i.e., points with non-null membership values, are very few.
More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on this hypothesis. Moreover, we use a technique that, without restricting the shapes of the membership functions, strongly reduces the computation time for membership values and optimizes their memorization. Figure 1 shows a term set whose characteristics are typical of fuzzy controllers and to which we will refer in the following. The term set has a universe of discourse with 128 elements (for good resolution), 8 fuzzy sets describing the term set, and 32 discretization levels for the membership values. The numbers of bits needed are therefore 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 elements of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and corresponds to the memory rows. The length of a memory word is defined by: Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values on any element of the universe of discourse, dm(m) is the number of bits for a membership value, and dm(fm) is the number of bits for the index of the membership function. In our case, Length = 3 × (5 + 3) = 24 and the memory dimension is therefore 128 × 24 bits. Had we chosen to memorize all values of the membership functions, each memory row would have to store the membership value of every fuzzy set, i.e., 8 × 5 bits per row, for a memory dimension of 128 × 40 bits. Consistently with our hypothesis, in Fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets.
Focusing on elements 32, 64, and 96 of the universe of discourse, they are memorized as shown. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word from the program memory. The output bus of the program memory (μCOD) is given as input to a comparator (combinatory net). If the index equals the bus value, then one of the non-null weights derives from the rule and is produced as output; otherwise the output is zero (Fig. 2). Clearly, the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized; moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important is the maximum number of membership functions (nfm) having a non-null value on each element of the universe of discourse. From our study of fuzzy systems, typically nfm ≤ 3 and there are at most 16 membership functions; in any case, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in optimizing the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions of specific shapes; the number of fuzzy sets and the resolution of the vertical axis have very small influence on memory space; and weight computation is done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method.
The number of non-null membership values on any element of the universe of discourse is limited. This constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
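
The word-length formula and the memory comparison can be checked with a short sketch, using the values from the paper's example configuration:

```python
# Checking Length = nfm * (dm(m) + dm(fm)) for the example term set:
# a 128-element universe of discourse, 8 fuzzy sets, 32 truth levels,
# and at most 3 non-null membership values per element.
def word_length(nfm: int, dm_m: int, dm_fm: int) -> int:
    """Bits per memory row: Length = nfm * (dm(m) + dm(fm))."""
    return nfm * (dm_m + dm_fm)

UNIVERSE = 128  # elements of U -> memory rows
N_SETS = 8      # fuzzy sets in the term set
LEVELS = 32     # discretization levels for membership values

nfm = 3                            # max non-null memberships per element
dm_m = (LEVELS - 1).bit_length()   # 5 bits to encode 32 truth levels
dm_fm = (N_SETS - 1).bit_length()  # 3 bits to index 8 membership functions

sparse_bits = UNIVERSE * word_length(nfm, dm_m, dm_fm)  # 128 * 24
vector_bits = UNIVERSE * N_SETS * dm_m                  # 128 * 40
print(sparse_bits, vector_bits)  # 3072 5120
```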

Analysis of Twitter for 2012 South Korea Presidential Election by Text Mining Techniques (텍스트 마이닝을 이용한 2012년 한국대선 관련 트위터 분석)

  • Bae, Jung-Hwan;Son, Ji-Eun;Song, Min
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.141-156
    • /
    • 2013
  • Social media is a representative form of Web 2.0 that shapes users' information behavior by allowing them to produce their own content without any expert skills. In particular, as a new communication medium, it has a profound impact on social change by enabling users to communicate their opinions and thoughts to the masses and to acquaintances. Social media data plays a significant role in the emerging Big Data arena, and research areas such as social network analysis and opinion mining have therefore paid attention to discovering meaningful information from the vast amounts of data buried in social media. Social media has recently become a main focus of Information Retrieval and Text Mining, because it produces massive unstructured textual data in real time and serves as an influential channel for opinion leading. Most previous studies, however, have adopted broad-brush and limited approaches, which make it difficult to find and analyze new information. To overcome these limitations, we developed a real-time Twitter trend mining system that captures trends by processing big stream datasets from Twitter. The system offers term co-occurrence retrieval, visualization of Twitter users by query, similarity calculation between two users, topic modeling to track changes in topical trends, and mention-based user network analysis. In addition, we conducted a case study on the 2012 Korean presidential election. We collected 1,737,969 tweets containing the candidates' names and election-related terms on Twitter in Korea (http://www.twitter.com/) for one month in 2012 (October 1 to October 31). The case study shows that the system provides useful information and detects societal trends effectively. The system also retrieves the list of terms that co-occur with given query terms.
We compared the results of term co-occurrence retrieval using the influential candidates' names 'Geun Hae Park', 'Jae In Moon', and 'Chul Su Ahn' as query terms. General terms related to the presidential election, such as 'Presidential Election', 'Proclamation in Support', and 'Public opinion poll', appear frequently. The results also show specific terms that differentiate each candidate: 'Park Jung Hee' and 'Yuk Young Su' for the query 'Geun Hae Park'; 'a single candidacy agreement' and 'Time of voting extension' for the query 'Jae In Moon'; and 'a single candidacy agreement' and 'down contract' for the query 'Chul Su Ahn'. Our system not only extracts 10 topics along with related terms but also shows the topics' dynamic changes over time by employing the multinomial Latent Dirichlet Allocation technique. Each topic can show one of two patterns, a rising tendency or a falling tendency, depending on the change in its probability distribution. To determine the relationship between topic trends on Twitter and social issues in the real world, we compared topic trends with related news articles, and found that Twitter can track issues faster than other media such as newspapers. The user network on Twitter differs from those of other social media because of Twitter's distinctive way of forming relationships: users build relationships by exchanging mentions. We visualized and analyzed the mention-based networks of 136,754 users, using the three candidates' names, 'Geun Hae Park', 'Jae In Moon', and 'Chul Su Ahn', as query terms. The results show that Twitter users mention all candidates' names regardless of their political tendencies. This case study discloses that Twitter can be an effective tool to detect and predict dynamic changes in social issues, and that mention-based user networks can show aspects of user behavior unique to Twitter.
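
The term co-occurrence retrieval the system offers can be sketched in a few lines. The toy tweets below are invented; the real system ran over the 1,737,969 collected tweets:

```python
from collections import Counter

# Minimal sketch of term co-occurrence retrieval: given a query term,
# rank the terms that appear in the same tweets as the query.
def cooccurring(query, docs):
    """Return (term, count) pairs co-occurring with `query`, most frequent first."""
    counts = Counter()
    for doc in docs:
        if query in doc:
            counts.update(t for t in doc if t != query)
    return counts.most_common()

tweets = [  # toy tokenized tweets, invented for illustration
    ["election", "poll", "candidate"],
    ["election", "debate", "candidate"],
    ["poll", "turnout"],
    ["election", "poll", "turnout"],
]
print(cooccurring("election", tweets))
```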

An Ontology Model for Public Service Export Platform (공공 서비스 수출 플랫폼을 위한 온톨로지 모형)

  • Lee, Gang-Won;Park, Sei-Kwon;Ryu, Seung-Wan;Shin, Dong-Cheon
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.149-161
    • /
    • 2014
  • The export of domestic public services to overseas markets faces many potential obstacles, stemming from differences in export procedures, target services, and socio-economic environments. In order to alleviate these problems, a business incubation platform, as an open business ecosystem, can be a powerful instrument to support the decisions taken by participants and stakeholders. In this paper, we propose an ontology model and its implementation processes for a business incubation platform with an open and pervasive architecture to support public service exports. For the conceptual model of the platform ontology, export case studies are used for requirements analysis. The conceptual model shows the basic structure, with vocabulary and its meaning, the relationships between ontologies, and key attributes. For the implementation and testing of the ontology model, the logical structure is edited using the Protégé editor. The core engine of the business incubation platform is the simulator module, where the various contexts of export businesses are captured, defined, and shared with other modules through ontologies. It is well known that an ontology, in which concepts and their relationships are represented using a shared vocabulary, is an efficient and effective tool for organizing meta-information to develop structural frameworks in a particular domain. The proposed model consists of five ontologies derived from a requirements survey of major stakeholders and their operational scenarios: service, requirements, environment, enterprise, and country. The service ontology contains several components that find and categorize public services through a case analysis of public service exports. Key attributes of the service ontology are composed of categories including objective, requirements, activity, and service.
The objective category, which has sub-attributes including operational body (organization) and user, acts as a reference to search and classify public services. The requirements category relates to the functional needs at a particular phase of system (service) design or operation. Sub-attributes of requirements are user, application, platform, architecture, and social overhead. The activity category represents business processes during the operation and maintenance phase. The activity category also has sub-attributes including facility, software, and project unit. The service category, with sub-attributes such as target, time, and place, acts as a reference to sort and classify the public services. The requirements ontology is derived from the basic and common components of public services and target countries. The key attributes of the requirements ontology are business, technology, and constraints. Business requirements represent the needs of processes and activities for public service export; technology represents the technological requirements for the operation of public services; and constraints represent the business law, regulations, or cultural characteristics of the target country. The environment ontology is derived from case studies of target countries for public service operation. Key attributes of the environment ontology are user, requirements, and activity. A user includes stakeholders in public services, from citizens to operators and managers; the requirements attribute represents the managerial and physical needs during operation; the activity attribute represents business processes in detail. The enterprise ontology is introduced from a previous study, and its attributes are activity, organization, strategy, marketing, and time. 
The country ontology is derived from the demographic and geopolitical analysis of the target country, and its key attributes are economy, social infrastructure, law, regulation, customs, population, location, and development strategies. The priority list of target services for a certain country, and/or the priority list of target countries for a certain public service, is generated by a matching algorithm. These lists are used as input seeds to simulate consortium partners and the government's policies and programs. In the simulation, the environmental differences between Korea and the target country can be customized through a gap analysis and work-flow optimization process. When the process gap between Korea and the target country is too large for a single corporation to cover, a consortium is considered an alternative, with candidate consortia derived from the capability index of enterprises. For financial packages, a mix of various foreign aid funds can be simulated during this stage. It is expected that the proposed ontology model and the business incubation platform can be used by various participants in the public service export market. They could be especially beneficial to small and medium businesses that have relatively fewer resources and less experience with public service export. We also expect that the open and pervasive service architecture in a digital business ecosystem will help stakeholders find new opportunities through information sharing and collaboration on business processes.
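
The matching step that generates the priority lists might be sketched as below. The attribute names and the simple coverage score are hypothetical illustrations of the idea, not the paper's actual algorithm:

```python
# Hypothetical sketch of the matching step: score each public service
# against a target country's attributes and emit a priority list.
def priority_list(services, country_attrs):
    """Rank services by the fraction of their requirements the country meets."""
    scored = [
        (sum(req in country_attrs for req in reqs) / len(reqs), name)
        for name, reqs in services.items()
    ]
    return [name for _, name in sorted(scored, reverse=True)]

services = {  # illustrative service -> required country attributes
    "e-government portal": {"broadband", "digital_id", "data_center"},
    "postal service":      {"road_network", "addressing_system"},
}
country = {"broadband", "road_network", "addressing_system"}
print(priority_list(services, country))  # postal service ranks first (2/2 met)
```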

Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung;Yang, Jin Yong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.35-48
    • /
    • 2014
  • According to the 2013 construction market outlook report, the liquidation of construction companies is expected to continue due to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact compared to other industries. However, due to the different nature of the capital structure and debt-to-equity ratio, it is more difficult to forecast construction companies' bankruptcies than that of companies in other industries. The construction industry operates on greater leverage, with high debt-to-equity ratios, and project cash flow focused on the second half. The economic cycle greatly influences construction companies. Therefore, downturns tend to rapidly increase the bankruptcy rates of construction companies. High leverage, coupled with increased bankruptcy rates, could lead to greater burdens on banks providing loans to construction companies. Nevertheless, the bankruptcy prediction model concentrated mainly on financial institutions, with rare construction-specific studies. The bankruptcy prediction model based on corporate finance data has been studied for some time in various ways. However, the model is intended for all companies in general, and it may not be appropriate for forecasting bankruptcies of construction companies, who typically have high liquidity risks. The construction industry is capital-intensive, operates on long timelines with large-scale investment projects, and has comparatively longer payback periods than in other industries. With its unique capital structure, it can be difficult to apply a model used to judge the financial risk of companies in general to those in the construction industry. Diverse studies of bankruptcy forecasting models based on a company's financial statements have been conducted for many years. 
The Altman Z-score, first published in 1968, is commonly used as a bankruptcy forecasting model. It forecasts the likelihood of a company going bankrupt using a simple formula, classifying the result into three categories and evaluating the corporate status as dangerous, moderate, or safe. A company in the "dangerous" category has a high likelihood of bankruptcy within two years, while those in the "safe" category have a low likelihood of bankruptcy. For companies in the "moderate" category, the risk is difficult to forecast. Many of the construction firms in this study fell into the "moderate" category, which made it difficult to forecast their risk. Along with the development of machine learning, recent studies of corporate bankruptcy forecasting have used this technology. Pattern recognition, a representative application area of machine learning, is applied to forecasting corporate bankruptcy: patterns are analyzed based on a company's financial information and then judged as belonging to either the bankruptcy-risk group or the safe group. The representative machine learning models previously used in bankruptcy forecasting are Artificial Neural Networks, Adaptive Boosting (AdaBoost), and the Support Vector Machine (SVM). There are also many hybrid studies combining these models.
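The three-zone classification described above can be sketched with the original 1968 Altman coefficients for public manufacturers and the standard cut-offs (Z < 1.81 dangerous, Z > 2.99 safe); the input figures in the usage note are illustrative, not data from the study.

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_equity, sales, total_assets, total_liabilities):
    """Classic Altman (1968) Z-score for publicly traded manufacturers."""
    x1 = working_capital / total_assets      # liquidity
    x2 = retained_earnings / total_assets    # cumulative profitability
    x3 = ebit / total_assets                 # operating efficiency
    x4 = market_equity / total_liabilities   # market leverage
    x5 = sales / total_assets                # asset turnover
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

def zone(z):
    # Standard cut-offs: above 2.99 "safe", below 1.81 "dangerous",
    # in between the grey ("moderate") zone the abstract describes.
    if z > 2.99:
        return "safe"
    if z < 1.81:
        return "dangerous"
    return "moderate"
```

For example, `altman_z(10, 20, 15, 50, 100, 100, 40)` gives Z = 2.645, which lands in the "moderate" zone where, as the abstract notes, the risk is hard to call.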
Existing studies, whether using the traditional Z-score technique or machine-learning-based bankruptcy prediction, focus on companies in non-specific industries, so the industry-specific characteristics of companies are not considered. In this paper, we confirm that adaptive boosting (AdaBoost) is the most appropriate forecasting model for construction companies based on company size. We classified construction companies into three groups (large, medium, and small) based on each company's capital and analyzed the predictive ability of AdaBoost for each group. The experimental results showed that AdaBoost has more predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risks. The data used in the analysis totaled 10,545 rows and 160 columns: 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used the default event itself as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This solved the problem of data imbalance caused by the scarcity of default events, which had been pointed out as a limitation of the existing methodology, as well as the problem of reflecting the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, default risks of unlisted companies without stock price information can also be derived appropriately. This makes it possible to provide stable default risk assessment services to companies that are difficult to rate properly with traditional credit rating models, such as small and medium-sized companies and startups. Although corporate default risk prediction using machine learning has recently been studied actively, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for the calculation of default risk, given that an entity's default risk information is very widely utilized in the market and sensitivity to differences in default risk is high. Strict standards are also required for the calculation methods.
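The Merton-model step (backing a default probability out of market capitalization and equity volatility) can be sketched as follows. The fixed-point inversion, the risk-free rate `r`, the one-year horizon `T`, and all figures are illustrative assumptions, not the paper's actual calibration.

```python
import math
from statistics import NormalDist

N = NormalDist().cdf  # standard normal CDF

def merton_pd(equity, sigma_e, debt, r=0.02, T=1.0, iters=200):
    """Treat equity as a call option on firm assets (Merton model):
    back out asset value V and asset volatility sigma_v from the
    observed equity value and equity volatility, then return the
    risk-neutral default probability N(-d2)."""
    # heuristic starting guesses
    V, sigma_v = equity + debt, sigma_e * equity / (equity + debt)
    for _ in range(iters):
        d1 = (math.log(V / debt) + (r + 0.5 * sigma_v**2) * T) / (sigma_v * math.sqrt(T))
        d2 = d1 - sigma_v * math.sqrt(T)
        # invert E = V*N(d1) - D*exp(-rT)*N(d2)  and  sigma_e*E = N(d1)*sigma_v*V
        V = (equity + debt * math.exp(-r * T) * N(d2)) / N(d1)
        sigma_v = sigma_e * equity / (N(d1) * V)
    d2 = (math.log(V / debt) + (r - 0.5 * sigma_v**2) * T) / (sigma_v * math.sqrt(T))
    return N(-d2)
```

A highly leveraged firm (`merton_pd(20, 0.6, 80)`) comes out with a visibly larger default probability than a lightly leveraged one (`merton_pd(80, 0.6, 20)`), which is the graded risk signal the study trains against instead of rare binary default events.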
The credit rating method stipulated by the Financial Services Commission in the Financial Investment Business Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience on credit ratings as well as changes in future market conditions. This study reduced individual models' bias by utilizing stacking ensemble techniques that synthesize various machine learning models. This captures complex nonlinear relationships between default risk and various corporate information while keeping the short calculation time that is an advantage of machine-learning-based default risk prediction models. To produce the sub-model forecasts used as input data for the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets. To compare predictive power, Random Forest, MLP, and CNN models were trained with the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which had the best performance among the single models. Next, to check for statistically significant differences between the stacking ensemble model and each individual model, pairs of forecasts were constructed between the stacking ensemble model and each individual model. Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank sum test was used to check whether the two sets of forecasts in each pair showed statistically significant differences. The analysis showed that the forecasts of the stacking ensemble model differed with statistical significance from those of the MLP and CNN models.
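The stacking setup and the paired comparison can be sketched as below. The dataset is a synthetic stand-in, the sub-models (Random Forest and logistic regression rather than the paper's MLP/CNN) are illustrative, and `cv=7` mirrors the paper's seven-way split of the training data for out-of-fold sub-model forecasts. Note that SciPy's `wilcoxon` is the paired signed-rank test; the abstract names the rank sum test, but for paired forecasts the signed-rank variant is the paired analogue shown here.

```python
import numpy as np
from scipy.stats import wilcoxon
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for the financial-statement dataset in the study.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Sub-model forecasts come from 7-fold out-of-fold predictions (cv=7),
# then a meta-model learns to combine them.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(), cv=7)
stack.fit(X_tr, y_tr)

# Single-model baseline trained on the full training data for comparison.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Pair the per-sample forecast probabilities and test for a significant
# difference, as in the study's normality check + nonparametric test step.
p_stack = stack.predict_proba(X_te)[:, 1]
p_rf = rf.predict_proba(X_te)[:, 1]
stat, pval = wilcoxon(p_stack, p_rf)
```

The design point is that the meta-model only ever sees out-of-fold sub-model forecasts, so it combines them without leaking the training labels each sub-model already memorized.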
In addition, this study provides a methodology that allows existing credit rating agencies to apply machine-learning-based default risk prediction, given that traditional credit rating models can also serve as sub-models in calculating the final default probability. The stacking ensemble technique proposed in this study can also help design models that meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will be used as a resource to increase practical use by overcoming the limitations of existing machine-learning-based models.