• Title/Summary/Keyword: low-K


Fatty Acid Composition of Achatina fulica Bowdich and Ampullarius insularus (식용달팽이와 왕우렁이의 지방산 조성)

  • Park, Il-Woong;Kim, Choong-Ki
    • Journal of the Korean Society of Food Science and Nutrition
    • /
    • v.21 no.1
    • /
    • pp.36-42
    • /
    • 1992
  • The lipid compositions of the total lipid extracted from the flesh of the cultured shellfish Achatina fulica Bowdich and Ampullarius insularus, each divided into albinic and melanic types, were compared. Total lipid contents of the shellfish were $1.11{\sim}3.25%$; the levels were higher in Ampullarius insularus than in Achatina fulica Bowdich, and higher in the albinic type than in the melanic type. The contents of neutral lipids $(31.79{\sim}40.60%)$ and phospholipids $(50.95{\sim}62.12%)$ were high, while that of glycolipids $(4.84{\sim}9.47%)$ was low. The major fatty acids in the total lipid of each sample were $C_{18:2}(11.92{\sim}14.37%)$, $C_{18:1}(12.34{\sim}13.64%)$, $C_{20:4}(11.03{\sim}13.74%)$, $C_{16:0}(7.45{\sim}15.39%)$ and $C_{18:0}(7.34{\sim}11.80%)$, together with $C_{20:2}(9.62{\sim}10.19%)$ in Achatina fulica Bowdich, and the major fatty acid composition of the total lipids showed no significant differences between the albinic and melanic types. In particular, the content of $C_{16:0}$ in total lipids was more abundant in Ampullarius insularus, and those of $C_{18:0}$ and $C_{20:2}$ in Achatina fulica Bowdich. The content of polyene acids in total lipids was higher in Achatina fulica Bowdich, but $C_{22:6}$ was scarcely detected there, being observed in relatively higher amounts in Ampullarius insularus. The main fatty acids in the neutral lipid of Achatina fulica Bowdich were $C_{18:2}(16.80{\sim}17.74%)$, $C_{20:2}(12.15{\sim}12.59%)$, $C_{18:1}(9.79{\sim}10.37%)$, $C_{18:0}(7.71{\sim}12.43%)$, $C_{16:0}$ and $C_{20:4}$; additionally, $C_{18:3}(20.90%)$ was predominant in the melanic type, in which the level of polyene acids in the neutral lipids was highest. The neutral lipids of each type of Ampullarius insularus were mainly composed of $C_{16:0}(16.96{\sim}17.46%)$, $C_{18:1}(13.79{\sim}13.95%)$ and $C_{18:2}(12.90{\sim}15.70%)$, together with $C_{18:1}$, $C_{20:4}$ and $C_{22:6}$. The major fatty acids in the glycolipids of each type were $C_{18:2}(19.01{\sim}19.72%)$, $C_{16:0}(12.89{\sim}18.76%)$ and $C_{18:0}(12.68{\sim}17.52%)$, together with $C_{18:1}$, in Achatina fulica Bowdich, but $C_{22:1}$ was detected at a relatively high level of 6.95% in the albinic type only. The major fatty acids in the glycolipids of Ampullarius insularus were $C_{18:2}(12.46{\sim}18.21%)$, $C_{16:0}(10.43{\sim}18.48%)$, $C_{20:1}(10.51{\sim}14.59%)$ and $C_{20:4}(8.24{\sim}12.34%)$, together with $C_{18:0}$ and $C_{18:1}$. The fatty acid composition of the phospholipids of each sample closely resembled that of the corresponding total lipid.

  • PDF

Effects of Molecular Weight of Polyethylene Glycol on the Dimensional Stabilization of Wood (Polyethylene Glycol의 분자량(分子量)이 목재(木材)의 치수 안정화(安定化)에 미치는 영향(影響))

  • Cheon, Cheol;Oh, Joung Soo
    • Journal of Korean Society of Forest Science
    • /
    • v.71 no.1
    • /
    • pp.14-21
    • /
    • 1985
  • This study was carried out to develop dimensional stabilization techniques for wood using PEG, which is cheap, non-toxic, and easy to impregnate, in order to prevent the devaluation of wood and wood products caused by anisotropy, hygroscopicity, shrinkage and swelling (properties inherent to wood) and to improve the utility of wood by emphasizing its natural, beautiful figure. The effects of PEG molecular weight (200, 400, 600, 1000, 1500, 2000, 4000, 6000) and species (Pinus densiflora S. et Z., Larix leptolepis Gordon., Cryptomeria japonica D. Don., Cornus controversa Hemsl., Quercus variabilis Blume., Prunus sargentii Rehder.) were examined. The results were as follows: 1) PEG loading showed its maximum value (137.22%, Pinus densiflora, in PEG 400), while the other treatments showed a relatively slow decrease. The lower the specific gravity, the higher the polymer loading. 2) The bulking coefficient showed no particular correlation with specific gravity and, for the most part, reached its maximum values in PEG 600, except that the bulking coefficient of Quercus variabilis was distributed in the range of 12-18% over PEG 400-2000. In general, the bulking coefficient of hardwood was higher than that of softwood. 3) Although there were some exceptions depending on species, volumetric swelling reduction was greatest in PEG 400; the value for Cryptomeria japonica was the greatest at 95.0%, and the others exceeded 80% except for Prunus sargentii, while volumetric swelling reduction decreased to less than 70% as the molecular weight increased above 1000. 4) The relative effectiveness of hardwoods with high specific gravity was markedly higher than that of softwoods. In general, the relative effectiveness of low molecular weight PEG was superior to that of high molecular weight PEG, except that Quercus variabilis showed values above 1.6 over the whole molecular weight range, and there was no significant difference once the molecular weight exceeded 4000. 5) From the analysis of the results above, the dimensional stabilization of hardwood was more effective than that of softwood. Although volumetric swelling reduction was greatest at a molecular weight of 400, in view of polymer loading, bulking coefficient, reduction of swelling and relative effectiveness, it is desirable to use a mixture of PEGs with molecular weights in the range of 200-1500. For practical use, further study of the effect of the mixing ratio on the bulking coefficient, reduction of swelling and relative effectiveness is recommended.
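
For reference, the abstract does not define the bulking coefficient or the volumetric swelling reduction; the formulas below are the conventional wood-science definitions and are given here only as an assumption about the authors' usage.

$$B = \frac{V_t - V_u}{V_u} \times 100\,(\%), \qquad S = \frac{V_{swollen} - V_{ovendry}}{V_{ovendry}} \times 100\,(\%), \qquad ASE = \frac{S_u - S_t}{S_u} \times 100\,(\%)$$

where $V_t$ and $V_u$ are the oven-dry volumes of PEG-treated and untreated wood, $S$ is volumetric swelling, and $ASE$ (anti-swelling efficiency) corresponds to the volumetric swelling reduction, with subscripts $t$ and $u$ again denoting treated and untreated specimens.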

  • PDF

Design of Client-Server Model For Effective Processing and Utilization of Bigdata (빅데이터의 효과적인 처리 및 활용을 위한 클라이언트-서버 모델 설계)

  • Park, Dae Seo;Kim, Hwa Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.109-122
    • /
    • 2016
  • Recently, big data analysis has developed into a field of interest to individuals and non-experts as well as companies and professionals. Accordingly, it is utilized for marketing and social problem solving by analyzing data that is currently open or collected directly. In Korea, various companies and individuals are attempting big data analysis, but it is difficult from the initial stage of analysis due to limited big data disclosure and collection difficulties. Nowadays, system improvements for big data activation and big data disclosure services are being carried out in Korea and abroad in various ways, and services for opening public data, such as the domestic Government 3.0 portal (data.go.kr), are the main implementations. In addition to the efforts made by the government, services that share data held by corporations or individuals are running, but it is difficult to find useful data because of the lack of shared data. In addition, big data traffic problems can occur because it is necessary to download and examine the entire data set in order to grasp the attributes of and simple information about the shared data. Therefore, a new system for big data processing and utilization is needed. First, big data pre-analysis technology is needed as a way to solve the big data sharing problem. Pre-analysis is a concept proposed in this paper in order to solve the problem of sharing big data, and it means providing users with results generated by analyzing the data in advance. Through preliminary analysis, it is possible to improve the usability of big data by providing information that conveys the properties and characteristics of big data when a data user searches for it. In addition, by sharing the summary data or sample data generated through the pre-analysis, it is possible to solve the security problem that may occur when the original data is disclosed, thereby enabling big data sharing between the data provider and the data user. Second, it is necessary to quickly generate appropriate preprocessing results according to the level of disclosure or the network status of the raw data and to provide the results to users through big data distributed processing using Spark. Third, in order to solve the big traffic problem, the system monitors the network traffic in real time. When preprocessing the data requested by the user, the data should be preprocessed to a size available in the current network and transmitted to the user so that no big traffic occurs. In this paper, we present various data sizes according to the level of disclosure through pre-analysis. This method is expected to show a low traffic volume compared with the conventional method of sharing only raw data across a large number of systems. In this paper, we describe how to solve the problems that occur when big data is released and used, and how to facilitate its sharing and analysis. The client-server model uses Spark for fast analysis and processing of user requests, and consists of a Server Agent and a Client Agent, deployed on the server side and the client side respectively. The Server Agent is the agent required by the data provider; it performs preliminary analysis of the big data to generate a Data Descriptor containing information on the Sample Data, Summary Data, and Raw Data. In addition, it performs fast and efficient big data preprocessing through big data distributed processing and continuously monitors network traffic. The Client Agent is an agent placed on the data user side.
It can search the big data through the Data Descriptor, which is the result of the pre-analysis, and can therefore find data quickly. The desired data can be requested from the server for download according to the level of disclosure. The Server Agent and the Client Agent are separated so that data published by the data provider can be used by the user. In particular, we focus on big data sharing, distributed big data processing, and the big traffic problem, construct the detailed modules of the client-server model, and present the design method of each module. In a system designed on the basis of the proposed model, the user who acquires the data analyzes it in the desired direction or preprocesses it into new data. By pre-analyzing the newly processed data through the Server Agent, the data user changes roles and becomes a data provider. The data provider can also obtain useful statistical information from the Data Descriptor of the data it discloses and become a data user performing new analysis using the sample data. In this way, raw data is processed and the processed big data is utilized by users, forming a natural shared environment. The roles of data provider and data user are not rigidly distinguished, providing an ideal shared service in which everyone can be both a provider and a user. The client-server model solves the problem of sharing big data, provides a free sharing environment in which big data can be securely disclosed, and offers an ideal shared service that makes it easy to find big data.
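
A minimal sketch of what the Server Agent's pre-analysis step might look like is given below, using the PySpark API since the paper names Spark for distributed processing. The CSV source, descriptor fields, file paths, and sampling fraction are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of the Server Agent's pre-analysis step (not the paper's code).
# Assumes PySpark is installed and the raw data is a CSV file; paths and field
# names are illustrative only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pre-analysis").getOrCreate()

raw = spark.read.csv("raw_data.csv", header=True, inferSchema=True)  # raw (closed) data

# Summary Data: per-column statistics that can be shared without exposing raw records.
summary = raw.describe()            # count, mean, stddev, min, max per column

# Sample Data: a small random fraction, sized to keep network traffic low.
sample = raw.sample(withReplacement=False, fraction=0.01, seed=42)

# Data Descriptor: lightweight metadata the Client Agent can search.
descriptor = {
    "columns": raw.columns,
    "row_count": raw.count(),
    "summary_path": "descriptor/summary.csv",
    "sample_path": "descriptor/sample.csv",
}

summary.write.mode("overwrite").csv(descriptor["summary_path"], header=True)
sample.write.mode("overwrite").csv(descriptor["sample_path"], header=True)
```

In this reading, only the descriptor, summary, and sample leave the provider's side by default, while the raw data is transmitted (or further preprocessed) only when the disclosure level and current network traffic allow it.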

The Effect of Lime Application after Cultivating Winter Forage Crops on the Change of Major Characters and Yield of Peanut (동계사료작물 재배후 석회물질 시용이 땅콩의 주요 형질 및 수량에 미치는 영향)

  • Kim, Dae-Hyang;Chim, Jae-Seong
    • The Journal of Natural Sciences
    • /
    • v.7
    • /
    • pp.103-114
    • /
    • 1995
  • These experiments were conducted to decrease the injury caused by continuous cropping in the peanut fields of the Wangkung area of Chonbuk. A field that had been continuously cropped for four years was used in this experiment. Italian ryegrass and rye were cultivated and lime materials were applied to improve soil fertility. The results were as follows: 1. Forage crops were cultivated and lime materials were applied on the continuously cropped peanut field. The organic matter content of the experimental plot cultivated with Italian ryegrass only was 1.25%. The organic matter content of soil cultivated with Italian ryegrass after magnesium lime application was 1.37%, and that of soil cultivated with Italian ryegrass after gypsum application was 1.30%. These contents were high compared with that of soil receiving lime materials only. The organic matter content of soil cultivated with rye after gypsum application was 1.77%. 2. The phosphate content of soil cultivated with Italian ryegrass was 332 ppm. The phosphate content of soil cultivated with Italian ryegrass after magnesium lime application was 340 ppm, and that of soil cultivated with Italian ryegrass after gypsum application was 312 ppm. The phosphate content of soil cultivated with rye only was 386 ppm. The phosphate content of soil cultivated with rye after gypsum application was 418 ppm. These phosphate contents were lower than that of soil receiving lime materials only. 3. The phytotoxin content of soil cultivated with Italian ryegrass after magnesium lime application was decreased by 17.7%, and that of soil cultivated with Italian ryegrass after gypsum application was decreased by 25.3%. The phytotoxin content of soil cultivated with rye after magnesium lime application was decreased by 12.0%, and that of soil cultivated with rye after gypsum application was decreased by 12.8%, compared with the phytotoxin content of soil receiving lime materials only. Italian ryegrass was the most effective of the forage crops at decreasing phytotoxins, and gypsum was the most effective of the lime materials. 4. Bacterial wilt and late leaf spot of peanut, which are known as main causes of continuous cropping failure, were surveyed. The incidence of bacterial wilt was 3.4% in the plot cultivated with Italian ryegrass only and 2.9% in the plot cultivated with rye only. The incidence of bacterial wilt was 2.5% in the plot cultivated with Italian ryegrass after magnesium lime application and 2.3% in the plot cultivated with rye after gypsum application. The incidence in plots cultivated with forage crops was lower than that in plots receiving lime materials only. 5. The incidence of late leaf spot was high in the plots cultivated with forage crops only, but it was low in the plots cultivated with forage crops after lime application compared with the control plot. 6. The growth and yield of peanut were poor in the plots cultivated with forage crops only compared with the control plot receiving lime materials only. The results were the same in the plot cultivated with rye after lime application, but growth and yield improved in the plot cultivated with Italian ryegrass after lime application.

  • PDF

APPLICATION OF FUZZY SET THEORY IN SAFEGUARDS

  • Fattah, A.;Nishiwaki, Y.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1051-1054
    • /
    • 1993
  • The International Atomic Energy Agency's Statute in Article III.A.5 allows it “to establish and administer safeguards designed to ensure that special fissionable and other materials, services, equipment, facilities and information made available by the Agency or at its request or under its supervision or control are not used in such a way as to further any military purpose; and to apply safeguards, at the request of the parties, to any bilateral or multilateral arrangement, or at the request of a State, to any of that State's activities in the field of atomic energy”. Safeguards are essentially a technical means of verifying the fulfilment of political obligations undertaken by States and given legal force in international agreements relating to the peaceful uses of nuclear energy. The main political objectives are: to assure the international community that States are complying with their non-proliferation and other peaceful undertakings; and to deter (a) the diversion of safeguarded nuclear materials to the production of nuclear explosives or for military purposes and (b) the misuse of safeguarded facilities with the aim of producing unsafeguarded nuclear material. It is clear that no international safeguards system can physically prevent diversion. The IAEA safeguards system is basically a verification measure designed to provide assurance in those cases in which diversion has not occurred. Verification is accomplished by two basic means: material accountancy, and containment and surveillance measures. Nuclear material accountancy is the fundamental IAEA safeguards mechanism, while containment and surveillance serve as important complementary measures. Material accountancy refers to a collection of measurements and other determinations which enable the State and the Agency to maintain a current picture of the location and movement of nuclear material into and out of material balance areas, i.e. areas where all material entering or leaving is measurable. A containment measure is one designed to take advantage of structural characteristics, such as containers, tanks or pipes, to establish the physical integrity of an area or item by preventing the undetected movement of nuclear material or equipment. Such measures involve the application of tamper-indicating or surveillance devices. Surveillance refers to both human and instrumental observation aimed at indicating the movement of nuclear material. The verification process consists of three overlapping elements: (a) Provision by the State of information such as - design information describing nuclear installations; - accounting reports listing nuclear material inventories, receipts and shipments; - documents amplifying and clarifying reports, as applicable; - notification of international transfers of nuclear material. (b) Collection by the IAEA of information through inspection activities such as - verification of design information - examination of records and reports - measurement of nuclear material - examination of containment and surveillance measures - follow-up activities in case of unusual findings. (c) Evaluation of the information provided by the State and of that collected by inspectors to determine the completeness, accuracy and validity of the information provided by the State and to resolve any anomalies and discrepancies.
To design an effective verification system, one must identify possible ways and means by which nuclear material could be diverted from peaceful uses, including means to conceal such diversions. These theoretical ways and means, which have become known as diversion strategies, are used as one of the basic inputs for the development of safeguards procedures, equipment and instrumentation. For the analysis of implementation strategy, it is assumed that non-compliance cannot be excluded a priori and that consequently there is a low but non-zero probability that a diversion could be attempted in all safeguards situations. An important element of diversion strategies is the identification of various possible diversion paths: the amount, type and location of nuclear material involved, the physical route and conversion of the material that may take place, the rate of removal, and concealment methods, as appropriate. With regard to the physical route and conversion of nuclear material, the following main categories may be considered: - unreported removal of nuclear material from an installation or during transit - unreported introduction of nuclear material into an installation - unreported transfer of nuclear material from one material balance area to another - unreported production of nuclear material, e.g. enrichment of uranium or production of plutonium - undeclared uses of the material within the installation. With respect to the amount of nuclear material that might be diverted in a given time (the diversion rate), the continuum between the following two limiting cases is considered: - one significant quantity or more in a short time, often known as abrupt diversion; and - one significant quantity or more per year, for example by accumulation of smaller amounts each time to add up to a significant quantity over a period of one year, often called protracted diversion. Concealment methods may include: - restriction of access of inspectors - falsification of records, reports and other material balance data - replacement of nuclear material, e.g. use of dummy objects - falsification of measurements or of their evaluation - interference with IAEA installed equipment. As a result of diversion and its concealment or other actions, anomalies will occur. All reasonable diversion routes, scenarios/strategies and concealment methods have to be taken into account in designing safeguards implementation strategies so as to provide sufficient opportunities for the IAEA to observe such anomalies. The safeguards approach for each facility will make a different use of these procedures, equipment and instrumentation according to the various diversion strategies which could be applicable to that facility and according to the detection and inspection goals which are applied. Postulated pathway sets or scenarios comprise those elements of diversion strategies which might be carried out at a facility or across a State's fuel cycle with declared or undeclared activities. All such factors, however, contain a degree of fuzziness that requires human judgment to reach the ultimate conclusion that all material is being used for peaceful purposes. Safeguards have traditionally been based on verification of declared material and facilities using material accountancy as a fundamental measure. The strength of material accountancy lies in the fact that it allows the detection of any diversion independent of the diversion route taken.
Material accountancy detects a diversion after it has actually happened and thus is powerless to physically prevent it; it can only deter contemplation by State authorities of carrying out a diversion through the risk of early detection. Recently the IAEA has been faced with new challenges. To deal with these, various measures are being considered to strengthen the safeguards system, such as enhanced assessment of the completeness of the State's initial declaration of nuclear material and installations under its jurisdiction, and enhanced monitoring and analysis of open information that may indicate inconsistencies with the State's safeguards obligations. Precise information vital for such enhanced assessments and analyses is normally not available or, if available, would require difficult and expensive collection. Above all, a realistic appraisal of truth needs sound human judgment.
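
To make the fuzziness point concrete, the toy Python sketch below (not taken from the paper) shows how a fuzzy membership function could grade a single safeguards indicator, here the material unaccounted for (MUF) expressed as a fraction of one significant quantity; the category names and breakpoints are invented for illustration only.

```python
# Toy illustration (not from the paper) of grading a safeguards indicator with
# fuzzy membership functions: how "significant" is an observed material
# unaccounted for (MUF), relative to one significant quantity (SQ)?

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 below a, rises over a..b, 1 on b..c, falls over c..d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if a < x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def significance(muf_fraction_of_sq):
    """Membership degrees of the observed MUF in three fuzzy categories."""
    x = muf_fraction_of_sq
    return {
        "negligible": trapezoid(x, -1.0, -0.5, 0.0, 0.3),
        "borderline": trapezoid(x, 0.1, 0.3, 0.5, 0.8),
        "significant": trapezoid(x, 0.5, 0.9, 10.0, 11.0),
    }

print(significance(0.2))  # partly 'negligible', partly 'borderline'
```

An inspector-style judgment would then combine such graded indicators rather than applying a single crisp threshold, which is the role the abstract assigns to human judgment.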

  • PDF

Importance-Performance Analysis of Quality Attributes of Coffee Shops and a Comparison of Coffee Shop Visits between Koreans and Mongolians (한국인과 몽골인의 커피전문점 품질 속성에 대한 중요도-수행도 분석 및 커피전문점 이용 현황 비교)

  • Jo, Mi-Na;Purevsuren, Bolorerdene
    • Journal of the Korean Society of Food Science and Nutrition
    • /
    • v.42 no.9
    • /
    • pp.1499-1512
    • /
    • 2013
  • The purpose of this study was to compare the coffee shop visits of Koreans and Mongolians, and to determine the quality attributes that should be managed, by Importance-Performance Analysis (IPA). The survey was conducted in Seoul and the Gyeonggi Province of Korea, and in Ulaanbaatar, Mongolia, from April to May 2012. The questionnaire was distributed to 380 Koreans and 380 Mongolians, with 253 and 250 responses from the Koreans and Mongolians, respectively, used for statistical analyses. From the results, Koreans visited coffee shops more frequently than Mongolians, with both groups mainly visiting a coffee shop with friends. Koreans also spent more time in a coffee shop than Mongolians and generally used a coffee shop regardless of the time of day. In terms of coffee preference, Koreans preferred Americano and Mongolians preferred Espresso. The most frequently stated purpose of Koreans for visiting a coffee shop was to rest, while Mongolians typically visited to drink coffee. The price range respondents generally spent on coffee was 4~8 thousand won for the Koreans and 2~4 thousand won for the Mongolians. Both Koreans and Mongolians obtained information about coffee shops from recommendations. According to the IPA results for 20 quality attributes of coffee shops, the selection attributes with high importance but low satisfaction were quality, price, and kindness for Koreans, while no such attributes were found for Mongolians.
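
As a reminder of how IPA places attributes into quadrants such as the "high importance, low performance" group mentioned above, here is a minimal Python sketch; the attribute names and scores are invented and are not the survey data from this study.

```python
# Illustrative Importance-Performance Analysis (IPA): classify each quality
# attribute into a quadrant by comparing its mean importance and performance
# scores with the grand means. Attribute names and scores are made up.

attributes = {
    # name: (mean importance, mean performance)
    "taste":    (4.6, 4.1),
    "price":    (4.4, 3.2),
    "kindness": (4.3, 3.4),
    "interior": (3.5, 4.0),
}

imp_mean = sum(i for i, _ in attributes.values()) / len(attributes)
perf_mean = sum(p for _, p in attributes.values()) / len(attributes)

for name, (imp, perf) in attributes.items():
    if imp >= imp_mean and perf >= perf_mean:
        quadrant = "keep up the good work"
    elif imp >= imp_mean and perf < perf_mean:
        quadrant = "concentrate here"       # high importance, low performance
    elif imp < imp_mean and perf >= perf_mean:
        quadrant = "possible overkill"
    else:
        quadrant = "low priority"
    print(f"{name}: {quadrant}")
```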

Impact of Semantic Characteristics on Perceived Helpfulness of Online Reviews (온라인 상품평의 내용적 특성이 소비자의 인지된 유용성에 미치는 영향)

  • Park, Yoon-Joo;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.29-44
    • /
    • 2017
  • In Internet commerce, consumers are heavily influenced by product reviews written by other users who have already purchased the product. However, as product reviews accumulate, it takes a lot of time and effort for consumers to individually check the massive number of reviews. Moreover, product reviews that are written carelessly actually inconvenience consumers. Thus many online vendors provide mechanisms to identify the reviews that customers perceive as most helpful (Cao et al. 2011; Mudambi and Schuff 2010). For example, some online retailers, such as Amazon.com and TripAdvisor, allow users to rate the helpfulness of each review, and use this feedback information to rank and re-order them. However, many reviews have only a few feedbacks or no feedback at all, making it hard to identify their helpfulness. It also takes time to accumulate feedback, so newly authored reviews do not have enough of it. For example, only 20% of the reviews in the Amazon Review Dataset (Mcauley and Leskovec, 2013) have more than 5 feedbacks (Yan et al, 2014). The purpose of this study is to analyze the factors affecting the usefulness of online product reviews and to derive a forecasting model that selectively provides product reviews that can be helpful to consumers. To do this, we extracted the various linguistic, psychological, and perceptual elements included in product reviews by using text-mining techniques and identified the determinants among these elements that affect the usefulness of product reviews. In particular, considering that the characteristics of product reviews and the determinants of usefulness can differ between apparel products (which are experiential products) and electronic products (which are search goods), the characteristics of the product reviews were compared within each product group and the determinants were established for each. This study used 7,498 apparel product reviews and 106,962 electronic product reviews from Amazon.com. In order to understand a review text, we first extracted linguistic and psychological characteristics from the review texts, such as the word count and the level of emotional tone and analytical thinking embedded in the text, using the widely adopted text analysis software LIWC (Linguistic Inquiry and Word Count). Then, we explored the descriptive statistics of the review texts for each category and statistically compared their differences using t-tests. Lastly, we performed regression analysis using the data mining software RapidMiner to find the determinant factors. As a result of comparing and analyzing the product review characteristics of electronic products and apparel products, it was found that reviewers used more words as well as longer sentences when writing product reviews for electronic products. As for the content characteristics, the electronic product reviews included many analytic words, carried more clout, and related more to cognitive processes (CogProc) than the apparel product reviews, in addition to including many words expressing negative emotions (NegEmo). On the other hand, the apparel product reviews included more personal, authentic, positive emotions (PosEmo) and perceptual processes (Percept) compared to the electronic product reviews. Next, we analyzed the determinants of the usefulness of the product reviews in the two product groups.
As a result, it was found that in both product groups the reviews perceived as useful carried high product ratings from their reviewers, contained a larger number of total words and many expressions involving perceptual processes, and contained fewer negative emotions. In addition, apparel product reviews with a large number of comparative expressions, a low expertise index, and concise content with fewer words per sentence were perceived to be useful. In the case of electronic product reviews, those that were analytical, with a high expertise index, and that contained many authentic expressions, cognitive processes, and positive emotions (PosEmo) were perceived to be useful. These findings are expected to help consumers effectively identify useful product reviews in the future.
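
As a rough illustration of the modelling idea (not the authors' code), the Python sketch below derives a few simple text features from each review and regresses a helpfulness score on them. The paper uses LIWC variables and RapidMiner; here word count, sentence length, and a naive negative-word ratio stand in for those variables, and the sample reviews and scores are invented.

```python
# Rough sketch: regress perceived helpfulness on simple review-text features.
# The features below are crude stand-ins for LIWC variables; the data is invented.

from sklearn.linear_model import LinearRegression

reviews = [
    ("Fits well and the fabric feels durable, better than my old jacket.", 0.9),
    ("Terrible. Broke after two days, very disappointed.", 0.4),
    ("ok", 0.1),
]

NEGATIVE_WORDS = {"terrible", "broke", "disappointed", "bad", "worst"}

def features(text):
    words = text.lower().split()
    word_count = len(words)
    avg_sentence_len = word_count / max(text.count(".") + text.count("!"), 1)
    neg_ratio = sum(w.strip(".,!") in NEGATIVE_WORDS for w in words) / max(word_count, 1)
    return [word_count, avg_sentence_len, neg_ratio]

X = [features(t) for t, _ in reviews]
y = [h for _, h in reviews]

model = LinearRegression().fit(X, y)
print(dict(zip(["word_count", "avg_sentence_len", "neg_ratio"], model.coef_)))
```

With real data, the fitted coefficients would play the role of the determinants the study reports (e.g., more words and fewer negative emotions associated with higher helpfulness).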

Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung;Yang, Jin Yong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.35-48
    • /
    • 2014
  • According to the 2013 construction market outlook report, the liquidation of construction companies is expected to continue due to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact compared to other industries. However, due to the different nature of their capital structure and debt-to-equity ratio, it is more difficult to forecast construction companies' bankruptcies than those of companies in other industries. The construction industry operates on greater leverage, with high debt-to-equity ratios and project cash flow concentrated in the second half of projects. The economic cycle greatly influences construction companies, so downturns tend to rapidly increase their bankruptcy rates. High leverage, coupled with increased bankruptcy rates, could lead to greater burdens on banks providing loans to construction companies. Nevertheless, bankruptcy prediction models have concentrated mainly on financial institutions, and construction-specific studies are rare. Bankruptcy prediction models based on corporate finance data have been studied for some time in various ways. However, such models are intended for companies in general, and may not be appropriate for forecasting bankruptcies of construction companies, which typically have high liquidity risks. The construction industry is capital-intensive, operates on long timelines with large-scale investment projects, and has comparatively longer payback periods than other industries. With its unique capital structure, it can be difficult to apply a model used to judge the financial risk of companies in general to those in the construction industry. Diverse studies of bankruptcy forecasting models based on a company's financial statements have been conducted for many years. The subjects of these models, however, were general firms, and the models may not accurately forecast companies with disproportionately large liquidity risks, such as construction companies. The construction industry is capital-intensive, requiring significant investment in long-term projects that take time to realize returns. Because of this unique capital structure, the criteria used for other industries cannot be applied directly to evaluate the financial risk of construction firms. The Altman Z-score, first published in 1968, is commonly used as a bankruptcy forecasting model. It forecasts the likelihood of a company going bankrupt using a simple formula, classifying the results into three categories and evaluating the corporate status as dangerous, moderate, or safe. When a company falls into the "dangerous" category, it has a high likelihood of bankruptcy within two years, while those in the "safe" category have a low likelihood of bankruptcy. For companies in the "moderate" category, it is difficult to forecast the risk. Many of the construction firms in this study fell into the "moderate" category, which made it difficult to forecast their risk. Along with the development of machine learning using computers, recent studies of corporate bankruptcy forecasting have used this technology. Pattern recognition, a representative application area of machine learning, is applied to forecasting corporate bankruptcy: patterns are analyzed based on a company's financial information and then judged as to whether they belong to the bankruptcy risk group or the safe group.
The representative machine learning models previously used in bankruptcy forecasting are Artificial Neural Networks, Adaptive Boosting (AdaBoost), and the Support Vector Machine (SVM), and there are also many hybrid studies combining these models. Existing studies using the traditional Z-score technique or bankruptcy prediction using machine learning focus on companies in non-specific industries, so the industry-specific characteristics of companies are not considered. In this paper, we confirm that adaptive boosting (AdaBoost) is the most appropriate forecasting model for construction companies based on company size. We classified construction companies into three groups (large, medium, and small) based on the company's capital and analyzed the predictive ability of AdaBoost for each group. The experimental results showed that AdaBoost has more predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
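
For reference, the "simple formula" behind the Altman Z-score mentioned above is, in its original 1968 form for manufacturing firms,

$$Z = 1.2\,X_1 + 1.4\,X_2 + 3.3\,X_3 + 0.6\,X_4 + 1.0\,X_5$$

where $X_1$ is working capital / total assets, $X_2$ retained earnings / total assets, $X_3$ EBIT / total assets, $X_4$ market value of equity / total liabilities, and $X_5$ sales / total assets; scores below 1.81 fall in the distress ("dangerous") zone, scores above 2.99 in the safe zone, and the range in between is the grey ("moderate") zone. The Python sketch below then shows, on invented data, what fitting an AdaBoost classifier per company-size group could look like; the financial ratios and all numbers are placeholders rather than the authors' variables, with only the 50 billion won large-company cutoff taken from the abstract.

```python
# Hedged sketch (not the authors' code): fit an AdaBoost classifier per
# company-size group and compare cross-validated accuracy on toy data.

import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy data: [debt_ratio, current_ratio, operating_margin]; label 1 = bankrupt.
X = rng.normal(size=(300, 3))
y = (X[:, 0] - X[:, 2] + rng.normal(scale=0.5, size=300) > 0).astype(int)
capital = rng.lognormal(mean=3.0, sigma=1.0, size=300)  # hypothetical capital (billion won)

for name, mask in {"large": capital > 50, "small/medium": capital <= 50}.items():
    clf = AdaBoostClassifier(n_estimators=100, random_state=0)
    scores = cross_val_score(clf, X[mask], y[mask], cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```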

Open Digital Textbook for Smart Education (스마트교육을 위한 오픈 디지털교과서)

  • Koo, Young-Il;Park, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.177-189
    • /
    • 2013
  • In smart education, the role of the digital textbook as the medium that directly faces learners is very important. Standardization of the digital textbook will promote the industrialization of digital textbooks for content providers and distributors as well as for learners and instructors. In this study, we look for ways to standardize digital textbooks oriented toward the following three objectives: (1) digital textbooks should play the role of media for blended learning that supports both online and offline classes, should operate on a common EPUB viewer without a special dedicated viewer, and should utilize the existing frameworks for e-learning contents and learning management. The reason to consider EPUB as the standard for digital textbooks is that digital textbooks then do not need to specify another standard for the form of books and can take advantage of the industrial base built on the EPUB standard's rich content and distribution structure; (2) digital textbooks should provide a low-cost open market service using currently available standard open software; (3) to provide appropriate learning feedback to students, digital textbooks should provide a foundation that accumulates and manages all learning activity information on a standard infrastructure for educational Big Data processing. In this study, the digital textbook in a smart education environment is referred to as the open digital textbook. The components of the open digital textbook service framework are (1) digital textbook terminals such as smart pads, smart TVs, smart phones, PCs, etc., (2) a digital textbook platform to display and run digital contents on the digital textbook terminals, (3) a learning contents repository, which exists in the cloud and maintains accredited learning contents, (4) an App Store providing and distributing secondary learning contents and learning tools made by learning contents developers, and (5) an LMS as a learning support/management tool which on-site class teachers use for creating classroom instruction materials. In addition, all of the hardware and software implementing a smart education service should be located within the cloud to take advantage of cloud computing for efficient management and reduced expense. The open digital textbook for smart education can be regarded as providing an e-book style interface to the LMS for learners. In open digital textbooks, the representation of text, images, audio, video, equations, etc. is a basic function, but painting, writing, problem solving, etc. are beyond the capabilities of a simple e-book. Teacher-to-student, learner-to-learner, and team-to-team communication is required through the open digital textbook. To represent student demographics, portfolio information, and class information, the standards used in e-learning are desirable. To pass learner tracking information about the activities of the learner to the LMS (Learning Management System), the open digital textbook must have a recording function and a function for communicating with the LMS. DRM is a function for protecting various copyrights. Currently, the DRM of an e-book is controlled by the corresponding book viewer; if the open digital textbook admits the DRM schemes used by a variety of different e-book viewers, the implementation of redundant features can be avoided. Security/privacy functions are required to protect information about study or instruction from third parties. UDL (Universal Design for Learning) is a learning support function for those with disabilities who have difficulty in learning courses.
The open digital textbook, which is based on the e-book standard EPUB 3.0, must (1) record learning activity log information and (2) communicate with the server to support the learning activity. While the recording and communication functions, which are not determined by current standards, can be implemented in JavaScript and used in current EPUB 3.0 viewers, a strategy of proposing such recording and communication functions as part of the next-generation e-book standard, or as a special standard (EPUB 3.0 for education), is needed. Future research following this study will implement an open-source program based on the proposed open digital textbook standard and present new educational services including Big Data analysis.
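
Purely as an illustration of the recording/communication idea (the paper specifies no concrete API), the Python sketch below shows a hypothetical LMS-side endpoint that a textbook viewer's JavaScript could post learning-activity records to; the route, field names, and in-memory storage are all assumptions.

```python
# Hypothetical sketch (not from the paper) of an LMS-side endpoint receiving
# learning-activity records from an open digital textbook viewer.

from flask import Flask, request, jsonify

app = Flask(__name__)
activity_log = []  # in a real LMS this would be persistent storage

@app.route("/api/activity", methods=["POST"])
def record_activity():
    record = request.get_json(force=True)
    # Assumed fields: learner id, textbook id, page, event type, timestamp.
    activity_log.append(record)
    return jsonify({"stored": len(activity_log)}), 201

if __name__ == "__main__":
    app.run(port=8080)
```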

A Hybrid SVM Classifier for Imbalanced Data Sets (불균형 데이터 집합의 분류를 위한 하이브리드 SVM 모델)

  • Lee, Jae Sik;Kwon, Jong Gu
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.125-140
    • /
    • 2013
  • We call a data set in which the number of records belonging to a certain class far outnumbers the number of records belonging to the other class an 'imbalanced data set'. Most classification techniques perform poorly on imbalanced data sets. When we evaluate the performance of a certain classification technique, we need to measure not only 'accuracy' but also 'sensitivity' and 'specificity'. In a customer churn prediction problem, 'retention' records account for the majority class, and 'churn' records account for the minority class. Sensitivity measures the proportion of actual retentions which are correctly identified as such. Specificity measures the proportion of churns which are correctly identified as such. The poor performance of classification techniques on imbalanced data sets is due to the low value of specificity. Many previous studies on imbalanced data sets employed the 'oversampling' technique, in which members of the minority class are sampled more than those of the majority class in order to make a relatively balanced data set. When a classification model is constructed using this oversampled balanced data set, specificity can be improved but sensitivity will be decreased. In this research, we developed a hybrid model of support vector machine (SVM), artificial neural network (ANN) and decision tree that improves specificity while maintaining sensitivity. We named this hybrid model the 'hybrid SVM model'. The process of construction and prediction of our hybrid SVM model is as follows. By oversampling from the original imbalanced data set, a balanced data set is prepared. SVM_I model and ANN_I model are constructed using the imbalanced data set, and SVM_B model is constructed using the balanced data set. SVM_I model is superior in sensitivity and SVM_B model is superior in specificity. For a record on which both SVM_I model and SVM_B model make the same prediction, that prediction becomes the final solution. If they make different predictions, the final solution is determined by the discrimination rules obtained from ANN and decision tree. For records on which SVM_I model and SVM_B model make different predictions, a decision tree model is constructed using the ANN_I output value as input and actual retention or churn as the target. We obtained the following two discrimination rules: 'IF ANN_I output value <0.285, THEN Final Solution = Retention' and 'IF ANN_I output value ${\geq}0.285$, THEN Final Solution = Churn.' The threshold 0.285 is the value optimized for the data used in this research. The result we present in this research is the structure or framework of our hybrid SVM model, not a specific threshold value such as 0.285; therefore, the threshold value in the above discrimination rules can be changed to any value depending on the data. In order to evaluate the performance of our hybrid SVM model, we used the 'churn data set' in the UCI Machine Learning Repository, which consists of 85% retention customers and 15% churn customers. The accuracy of the hybrid SVM model is 91.08%, which is better than that of SVM_I model or SVM_B model. The points worth noticing here are its sensitivity, 95.02%, and specificity, 69.24%. The sensitivity of SVM_I model is 94.65%, and the specificity of SVM_B model is 67.00%. Therefore the hybrid SVM model developed in this research improves the specificity of SVM_B model while maintaining the sensitivity of SVM_I model.
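
The decision logic of the hybrid SVM model lends itself to a compact sketch. The Python code below (using scikit-learn) mirrors the procedure described above; the toy data, the naive oversampling step, and the MLP settings are placeholders, and the 0.285 threshold is reused only because the authors report it as optimal for their data.

```python
# Sketch of the hybrid SVM decision logic described in the abstract.
# Toy data and model settings are placeholders, not the authors' setup.

import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.utils import resample

rng = np.random.default_rng(1)

# Toy imbalanced data: roughly 85% retention (0), 15% churn (1).
X = rng.normal(size=(1000, 4))
y = (rng.random(1000) < 0.15).astype(int)

# Balanced set via naive oversampling of the minority class.
minority = np.flatnonzero(y == 1)
extra = resample(minority, n_samples=(y == 0).sum() - minority.size, random_state=0)
Xb, yb = np.vstack([X, X[extra]]), np.concatenate([y, y[extra]])

svm_i = SVC().fit(X, y)                    # SVM_I: trained on the imbalanced data
svm_b = SVC().fit(Xb, yb)                  # SVM_B: trained on the balanced data
ann_i = MLPClassifier(max_iter=500, random_state=0).fit(X, y)  # ANN_I

def hybrid_predict(x):
    x = x.reshape(1, -1)
    p_i, p_b = svm_i.predict(x)[0], svm_b.predict(x)[0]
    if p_i == p_b:                         # agreement: that prediction is final
        return p_i
    # Disagreement: apply the discrimination rule on the ANN_I output value.
    return int(ann_i.predict_proba(x)[0, 1] >= 0.285)

print(hybrid_predict(X[0]))
```

As the abstract notes, the threshold in the discrimination rule should be re-optimized for each data set; only the overall structure (two SVMs plus an ANN-based tie-breaker) is the contribution.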