• Title/Summary/Keyword: One Time Key


Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.69-94
    • /
    • 2017
  • The recent increase in demand for big data analysis has driven vigorous development of related technologies and tools. At the same time, advances in IT and the growing penetration of smart devices are producing vast amounts of data, data analysis technology is rapidly becoming popular, and attempts to gain insight from data continue to grow, so big data analysis is expected to become more important across industries. Big data analysis has generally been performed by a small number of experts and delivered to those who request it. However, growing interest in big data analysis has spurred computer programming education and the development of many analysis tools, so the entry barriers are gradually falling and analysis technology is spreading; as a result, big data analysis is increasingly expected to be performed by the demanders of the analysis themselves. Along with this, interest in various kinds of unstructured data, and especially text data, keeps increasing. The emergence of new web-based platforms and techniques has brought about mass production of text data and active attempts to analyze it, and the results of text analysis are being used in many fields. Text mining is a concept that embraces various theories and techniques for text analysis; among the many text mining techniques used for research, topic modeling is one of the most widely applied and studied. Topic modeling extracts the major issues from a large set of documents, identifies the documents corresponding to each issue, and returns the identified documents as clusters; it is regarded as very useful because it reflects the semantic elements of the documents. Traditional topic modeling is based on the distribution of key terms across the entire document collection, so the whole collection must be analyzed at once to identify the topic of each document. This makes the analysis slow when topic modeling is applied to a large number of documents, and it raises a scalability problem: processing time increases exponentially with the number of analysis objects. The problem is particularly noticeable when documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large document collection is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method allows topic modeling on a large number of documents with limited system resources and improves processing speed; it can also significantly reduce analysis time and cost because documents can be analyzed in each location without first combining them. Despite these advantages, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified for each document, but global topics cannot. Second, a method for measuring the accuracy of such an approach must be established; that is, taking the global topics as the ideal answer, the deviation of the local topics from the global topics needs to be measured.
Because of these difficulties, this approach has not been studied sufficiently compared with other work on topic modeling. In this paper, we propose a topic modeling approach that addresses the two problems above. First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by checking whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology; an additional experiment confirms that it yields results similar to topic modeling on the entire collection. We also propose a reasonable method for comparing the results of the two approaches.
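A minimal sketch of the divide-and-conquer idea described above, not the authors' code: fit LDA separately on each local set, then map each local topic to the most similar topic of a reduced "global" model via cosine similarity of the topic-word distributions. The corpus, topic counts, and mapping rule are illustrative assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

docs = ["stock market falls on rate fears", "team wins the championship final",
        "central bank raises interest rates", "star striker signs new contract"]

# Shared vocabulary so topic-word vectors are comparable across models.
vec = CountVectorizer()
X = vec.fit_transform(docs)

# "Global" model on a reduced representative set (here: all docs for brevity).
global_lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Local models on sub-clusters (here: a simple split of the toy corpus).
local_sets = [X[:2], X[2:]]
for i, X_local in enumerate(local_sets):
    local_lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X_local)
    # Map each local topic to its nearest global topic.
    sim = cosine_similarity(local_lda.components_, global_lda.components_)
    mapping = sim.argmax(axis=1)
    print(f"local set {i}: local topic -> global topic {mapping}")
```

Agreement between such a mapping and full-corpus topic assignments is one way to operationalize the accuracy check the abstract describes.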

Delineating Transcription Factor Networks Governing Virulence of a Global Human Meningitis Fungal Pathogen, Cryptococcus neoformans

  • Jung, Kwang-Woo;Yang, Dong-Hoon;Maeng, Shinae;Lee, Kyung-Tae;So, Yee-Seul;Hong, Joohyeon;Choi, Jaeyoung;Byun, Hyo-Jeong;Kim, Hyelim;Bang, Soohyun;Song, Min-Hee;Lee, Jang-Won;Kim, Min Su;Kim, Seo-Young;Ji, Je-Hyun;Park, Goun;Kwon, Hyojeong;Cha, Sooyeon;Meyers, Gena Lee;Wang, Li Li;Jang, Jooyoung;Janbon, Guilhem;Adedoyin, Gloria;Kim, Taeyup;Averette, Anna K.;Heitman, Joseph;Cheong, Eunji;Lee, Yong-Hwan;Lee, Yin-Won;Bahn, Yong-Sun
    • 한국균학회소식:학술대회논문집
    • /
    • 2015.05a
    • /
    • pp.59-59
    • /
    • 2015
  • Cryptococcus neoformans causes life-threatening meningoencephalitis in humans, but the treatment of cryptococcosis remains challenging. To develop novel therapeutic targets and approaches, signaling cascades controlling the pathogenicity of C. neoformans have been extensively studied, but the underlying biological regulatory circuits remain elusive, particularly because of an evolutionarily divergent set of transcription factors (TFs) in this basidiomycetous fungus. In this study, we constructed a high-quality library of 322 signature-tagged gene deletion strains for 155 putative TF genes, which were previously predicted using the DNA-binding domain TF database (http://www.transcriptionfactor.org/). Using these 322 TF gene deletion strains, we tested in vivo and in vitro phenotypic traits under 32 distinct growth conditions. At least one phenotypic trait was exhibited by 145 of the 155 TF mutants (93%), and approximately 85% of the TFs (132/155) were functionally characterized for the first time in this study. Through this high-coverage phenome analysis, we discovered numerous novel TFs that play critical roles in growth, differentiation, virulence-factor (melanin, capsule, and urease) formation, stress responses, antifungal drug resistance, and virulence. Large-scale virulence and infectivity assays in insect (Galleria mellonella) and mouse host models identified 34 novel TFs that are critical for pathogenicity. The genotypic and phenotypic data for each TF are available in the C. neoformans TF phenome database (http://tf.cryptococcus.org). In conclusion, our phenome-based functional analysis of the C. neoformans TF mutant library provides key insights into the transcriptional networks of basidiomycetous fungi and this ubiquitous human fungal pathogen.


Development of Systematic Process for Estimating Commercialization Duration and Cost of R&D Performance (기술가치 평가를 위한 기술사업화 기간 및 비용 추정체계 개발)

  • Jun, Seoung-Pyo;Choi, Daeheon;Park, Hyun-Woo;Seo, Bong-Goon;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.139-160
    • /
    • 2017
  • Technology commercialization creates economic value by linking a company's R&D processes and outputs to the market, and it matters because it allows a company to gain and maintain a sustained competitive advantage. For a specific technology to be commercialized, it passes through technology planning, research and development, and commercialization stages, a process that requires considerable time and money. The duration and cost of technology commercialization are therefore important decision information for determining a market entry strategy, and they are even more important for a technology investor trying to evaluate the value of the technology rationally. Scientifically estimating the duration and cost of commercialization is thus essential, yet research on technology commercialization is insufficient and related methodologies are lacking. In this study, we propose an evaluation model that estimates the duration and cost of commercializing R&D technology for small and medium-sized enterprises. To this end, we collected public data from the National Science & Technology Information Service (NTIS) and survey data provided by the Small and Medium Business Administration, and we developed an estimation model of commercialization duration and cost based on the market approach, one of the standard technology valuation methods. Specifically, we defined the commercialization process as consisting of development planning, development progress, and commercialization. From the NTIS database and the Small and Medium Business Administration's survey of SME technical statistics, we derived key variables such as stage-wise R&D costs and durations, factors of the technology itself, factors of the technology development, and environmental factors. We first estimated the costs and duration at each technology readiness level (basic research, applied research, development research, prototype production, commercialization) for each industry classification, and then developed and verified a research model for each industry classification. The results of this study can be summarized as follows. First, the results can be reflected in a technology valuation model and used to estimate the objective economic value of technology. The duration and cost from the technology development stage to the commercialization stage strongly influence how future sales generated by the technology are discounted, so estimating them scientifically from past data contributes to more reliable technology valuation. Second, we verified models of various kinds, including statistical models and data mining models. The statistical model helps identify the important factors for estimating the duration and cost of technology commercialization, and the data mining model yields rules or algorithms that can be applied in an advanced technology valuation system. Finally, this study reaffirms the importance of commercialization costs and durations, which have not been actively studied in previous research. The results confirm the significant factors affecting commercialization costs and duration and show that these factors differ across industry classifications.
Practically, the results of this study can be incorporated into the technology valuation systems provided by national research institutes, giving R&D staff more sophisticated technology valuations. The relevant logic or algorithms can be implemented independently and reflected directly in such systems, so practitioners can use the results immediately. In conclusion, this study makes both theoretical and practical contributions.
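An illustrative sketch only, not the NTIS/SMBA data or the authors' model: it estimates commercialization duration from a few hypothetical predictors with a statistical model and a data-mining model, mirroring the paper's two-model comparison. All feature names and figures are invented.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.uniform(0.1, 5.0, n),   # development cost (hypothetical, in 100M KRW)
    rng.integers(1, 10, n),     # technology readiness level at project start
    rng.integers(0, 5, n),      # industry classification code (encoded)
])
# Hypothetical ground truth: commercialization duration in months.
y = 12 + 4 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0, 2, n)

stat_model = LinearRegression().fit(X, y)                   # interpretable coefficients
tree_model = DecisionTreeRegressor(max_depth=3).fit(X, y)   # extractable decision rules

new_case = [[2.5, 4, 1]]
print("linear estimate (months):", stat_model.predict(new_case)[0])
print("tree estimate (months):  ", tree_model.predict(new_case)[0])
```

The regression coefficients play the role of the "important factors" the abstract mentions, while the tree's split rules are the kind of algorithmic output a valuation system could embed directly.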

A Study of the Alternative Broadcasting System: The Case of Channel 4 in Britain (대안적 방송제작시스템 연구 : 영국 채널4의 외주제작시스템을 중심으로)

  • Eun, Hye-Chung
    • Korean journal of communication and information
    • /
    • v.17
    • /
    • pp.85-111
    • /
    • 2001
  • This article focuses on Channel 4 in Britain, because its alternative broadcasting system can shed light on the Korean case. Korea is entering the multimedia era, and including webcasting there are over a thousand channels available, yet the infrastructure for broadcasting content has not matured to match this need. Instead, Korean production is vertically integrated into the networks (KBS, MBC and SBS), which oligopolize broadcasting on the supply side. Even though a program quota regulation was established under the new Broadcasting Act (1999), old habits die hard and independent producers still have unfair relationships with the networks. Under these circumstances, Channel 4 is a good example of how an alternative system can serve the diversity of broadcasting and the tastes of minorities. Channel 4 took almost 20 years to establish, because there were enormous debates among all the social sectors, including the independent producers, about its public mission, the ideal broadcasting system, and whom it should serve. Social agreement was reached on the point that the new broadcaster should not produce but publish, which is why it is called a 'publishing broadcaster'. In this sense, it can be managed effectively with comparatively little funding, while freely securing all sorts of content and genres through its commissioning process or by buying programs from even the most innovative producers. The commissioning process is one of the key features that make Channel 4 so unique: it is literally open to anybody, in particular to small-scale producers with innovative ideas. Once a program idea is accepted by a commissioning editor, Channel 4 supports production financially as well as with facilities and human resources. Another strength of Channel 4 is its financial success. From the beginning, the 'funding formula' helped a great deal in allowing Channel 4 to carry out all sorts of innovative experiments; the history of the funding formula and its contribution are also explained in the article. With all this, the article is intended to stimulate discussion about alternative broadcasting systems that may help prepare for the new era of broadcasting.


Using the METHONTOLOGY Approach to a Graduation Screen Ontology Development: An Experiential Investigation of the METHONTOLOGY Framework

  • Park, Jin-Soo;Sung, Ki-Moon;Moon, Se-Won
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.125-155
    • /
    • 2010
  • Ontologies have been adopted in various business and scientific communities as a key component of the Semantic Web. Despite the increasing importance of ontologies, ontology developers still perceive construction tasks as a challenge. A clearly defined and well-structured methodology can reduce the time required to develop an ontology and increase the probability of success of a project. However, no reliable knowledge-engineering methodology for ontology development currently exists; every methodology has been tailored toward the development of a particular ontology. In this study, we developed a Graduation Screen Ontology (GSO). The graduation screen domain was chosen for several reasons. First, the graduation screen process is a complicated task requiring a complex reasoning process. Second, GSO may be reused by other universities because the graduation screen process is similar at most universities. Finally, GSO can be built within a given period because the size of the selected domain is reasonable. Since no standard ontology development methodology exists, one of the existing methodologies had to be chosen. The most important considerations for selecting the methodology for GSO were whether it can be applied to a new domain, whether it covers a broad set of development tasks, and whether it gives a sufficient explanation of each development task. We evaluated various ontology development methodologies based on the evaluation framework proposed by Gómez-Pérez et al. and concluded that METHONTOLOGY was the most applicable to building GSO for this study. METHONTOLOGY was derived from the experience of developing the Chemical Ontology at the Polytechnic University of Madrid by Fernández-López et al. and is regarded as the most mature ontology development methodology. It describes a very detailed approach for building an ontology in a centralized development environment at the conceptual level. The methodology consists of three broad processes, each containing specific sub-processes: management (scheduling, control, and quality assurance); development (specification, conceptualization, formalization, implementation, and maintenance); and support (knowledge acquisition, evaluation, documentation, configuration management, and integration). An ontology development language and an ontology development tool for GSO construction also had to be selected. We adopted OWL-DL as the ontology development language because of its computational guarantees for consistency checking and classification, which are crucial for developing coherent and useful ontological models of very complex domains. In addition, Protégé-OWL was chosen as the ontology development tool because it is supported by METHONTOLOGY and is widely used thanks to its platform-independent characteristics. Based on the researchers' experience developing GSO, some issues relating to METHONTOLOGY, OWL-DL, and Protégé-OWL were identified. We focus on presenting the drawbacks of METHONTOLOGY and discussing how each weakness could be addressed. First, METHONTOLOGY asserts that domain experts without ontology construction experience can easily build ontologies. However, it is still difficult for such domain experts to develop a sophisticated ontology, especially if they have insufficient background knowledge related to the ontology.
Second, METHONTOLOGY does not include a development stage called the "feasibility study." This pre-development stage helps developers ensure not only that a planned ontology is necessary and sufficiently valuable to begin an ontology building project, but also whether the project will be successful. Third, METHONTOLOGY excludes an explanation of how to use and integrate existing ontologies; if an additional stage for considering reuse were introduced, developers could share the benefits of reuse. Fourth, METHONTOLOGY fails to address the importance of collaboration. The methodology needs to explain how to allocate specific tasks to different developer groups, and how to combine those tasks once the assigned jobs are completed. Fifth, METHONTOLOGY does not sufficiently describe the methods and techniques to apply in the conceptualization stage; introducing methods for extracting concepts from multiple informal sources or for identifying relations could enhance the quality of ontologies. Sixth, METHONTOLOGY does not provide an evaluation process to confirm whether WebODE correctly transforms a conceptual ontology into a formal ontology, nor does it guarantee that the outcomes of the conceptualization stage are completely reflected in the implementation stage. Seventh, METHONTOLOGY needs criteria for user evaluation of the constructed ontology in actual use under user environments. Eighth, although METHONTOLOGY allows continual knowledge acquisition while working through the ontology development process, consistent updates can be difficult for developers. Ninth, METHONTOLOGY demands that developers complete various documents during the conceptualization stage and can therefore be considered a heavy methodology; adopting an agile methodology would reinforce active communication among developers and reduce the documentation burden. Finally, this study concludes with contributions and practical implications. No previous research has addressed issues related to METHONTOLOGY from empirical experience; this study is an initial attempt. In addition, several lessons learned from the development experience are discussed. This study also offers insights for ontology methodology researchers who want to design a more advanced ontology development methodology.
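A toy fragment of a graduation-screen ontology, written with rdflib rather than Protégé-OWL, purely to illustrate the kind of OWL-DL classes and properties the conceptualization stage produces. The class and property names are hypothetical, not taken from the actual GSO.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS, XSD

GSO = Namespace("http://example.org/gso#")  # hypothetical namespace
g = Graph()
g.bind("gso", GSO)

# Classes
for cls in (GSO.Student, GSO.Course, GSO.GraduationRequirement):
    g.add((cls, RDF.type, OWL.Class))

# Object property: a student has completed a course.
g.add((GSO.hasCompleted, RDF.type, OWL.ObjectProperty))
g.add((GSO.hasCompleted, RDFS.domain, GSO.Student))
g.add((GSO.hasCompleted, RDFS.range, GSO.Course))

# Datatype property: credits earned for a course.
g.add((GSO.creditHours, RDF.type, OWL.DatatypeProperty))
g.add((GSO.creditHours, RDFS.domain, GSO.Course))
g.add((GSO.creditHours, RDFS.range, XSD.integer))

print(g.serialize(format="turtle"))
```

In practice the formalization and implementation stages would express such axioms in OWL-DL within Protégé-OWL, where a reasoner can check consistency and classify the graduation requirements.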

Effects of Coenzyme Q10 on the Expression of Genes involved in Lipid Metabolism in Laying Hens (Coenzyme Q10 첨가 급여가 산란계의 지방대사 연관 유전자 발현에 미치는 영향)

  • Jang, In Surk;Moon, Yang Soo
    • Korean Journal of Poultry Science
    • /
    • v.43 no.1
    • /
    • pp.47-54
    • /
    • 2016
  • The aim of this study was to investigate the expression patterns of key genes involved in lipid metabolism in response to dietary coenzyme Q10 (CoQ10) in laying hens. A total of 36 forty-week-old Lohmann Brown hens were randomly allocated into 3 groups, each consisting of 4 replicates of 3 birds. The hens received one of the following treatments: control (BD, basal diet), T1 (BD + CoQ10 100 mg/kg diet), and T2 (BD + micellar CoQ10 100 mg/kg diet). Birds were fed ad libitum the basal diet or the basal diet supplemented with CoQ10 for 5 weeks. Total RNA was extracted from the liver for quantitative RT-PCR. The mRNA levels of HMG-CoA reductase (HMGCR) and sterol regulatory element-binding protein 2 (SREBP2) were decreased by more than 30~50% in the liver of birds fed the CoQ10-supplemented diet (p<0.05). These findings suggest that dietary CoQ10 can reduce cholesterol levels through suppression of the hepatic HMGCR and SREBP2 genes. The expression of liver X receptor (LXR) and SREBP1 was also downregulated by the addition of CoQ10 to the feed (p<0.05); cholesterol homeostasis can be regulated by LXR and SREBP1 under low-cholesterol conditions. CoQ10 supplementation also decreased the expression of lipid metabolism-related genes including PPARγ, XBP1, FASN, and GLUTs in the liver (p<0.05). These data suggest that CoQ10 might be used as a dietary supplement to reduce cholesterol levels and to regulate lipid homeostasis in laying hens.
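The abstract reports relative mRNA levels from quantitative RT-PCR; a common way to obtain such values is the Livak 2^-ΔΔCt method, sketched below with invented Ct values (the paper's actual data and reference gene are not shown, so this is only a worked illustration of the calculation).

```python
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Return the fold change of a target gene (treated vs. control) by 2^-ΔΔCt."""
    delta_ct_treated = ct_target_treated - ct_ref_treated
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_treated - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Hypothetical HMGCR Ct values against a housekeeping gene:
fold = relative_expression(ct_target_treated=26.1, ct_ref_treated=18.0,
                           ct_target_control=25.0, ct_ref_control=18.0)
print(f"HMGCR fold change: {fold:.2f}")  # a value below 1 indicates reduced expression
```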

Development of an Automatic 3D Coregistration Technique of Brain PET and MR Images (뇌 PET과 MR 영상의 자동화된 3차원적 합성기법 개발)

  • Lee, Jae-Sung;Kwark, Cheol-Eun;Lee, Dong-Soo;Chung, June-Key;Lee, Myung-Chul;Park, Kwang-Suk
    • The Korean Journal of Nuclear Medicine
    • /
    • v.32 no.5
    • /
    • pp.414-424
    • /
    • 1998
  • Purpose: Cross-modality coregistration of positron emission tomography (PET) and magnetic resonance (MR) images can enhance their clinical information. In this study we propose a refined technique to improve the robustness of registration and to implement more realistic visualization of the coregistered images. Materials and Methods: Using the sinogram of the PET emission scan, we extracted a robust head boundary and used the boundary-enhanced PET to coregister PET with MR. Pixels at 10% of the maximum pixel value were considered the boundary of the sinogram, and boundary pixel values were replaced with the maximum sinogram value. One hundred eighty boundary points were extracted at intervals of about 2 degrees from each slice of the MR images using a simple threshold method. The best affine transformation between the two point sets was found by least-squares fitting, minimizing the sum of Euclidean distances between the point sets, and calculation time was reduced using a pre-defined distance map. Finally, we developed an automatic coregistration program based on this boundary detection and surface matching technique, and designed a new weighted normalization technique to display the coregistered PET and MR images simultaneously. Results: Using the newly developed method, robust extraction of the head boundary was possible and spatial registration was performed successfully; the mean displacement error was less than 2.0 mm. In visualization of the coregistered images using the weighted normalization method, structures shown in the MR image could be represented realistically. Conclusion: The refined technique can practically enhance the performance of automated three-dimensional coregistration.
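A minimal sketch, on synthetic points, of the surface-matching step described above: fit the affine transform that best maps MR boundary points onto PET boundary points in the least-squares sense. The real method works slice by slice on threshold-extracted boundaries and uses a precomputed distance map; both are simplified away here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "MR" boundary: 180 points at 2-degree intervals on a circle,
# standing in for the threshold-extracted head boundary of one slice.
angles = np.deg2rad(np.arange(0, 360, 2))
mr_pts = np.column_stack([100 * np.cos(angles), 100 * np.sin(angles)])

# Synthetic "PET" boundary: the same shape after a known rotation, scale and shift.
theta, scale, shift = np.deg2rad(5), 0.95, np.array([3.0, -2.0])
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
pet_pts = scale * mr_pts @ R.T + shift + rng.normal(0, 0.1, mr_pts.shape)

# Least-squares affine fit: [x, y, 1] @ A ≈ corresponding PET point.
A_design = np.hstack([mr_pts, np.ones((len(mr_pts), 1))])
affine, *_ = np.linalg.lstsq(A_design, pet_pts, rcond=None)

residual = A_design @ affine - pet_pts
print("mean displacement error:", np.linalg.norm(residual, axis=1).mean())
```

The printed mean displacement error is the same kind of quantity the paper reports (below 2.0 mm for their data).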


Implementing a Cervical Cancer Awareness Program in Low-income Settings in Western China: a Community-based Locally Affordable Intervention for Risk Reduction

  • Simayi, Dilixia;Yang, Lan;Li, Feng;Wang, Ying-Hong;Amanguli, A.;Zhang, Wei;Mohemaiti, Meiliguli;Tao, Lin;Zhao, Jin;Jing, Ming-Xia;Wang, Wei;Saimaiti, Abudukeyoumu;Zou, Xiao-Guang;Maimaiti, Ayinuer;Ma, Zhi-Ping;Hao, Xiao-Ling;Duan, Fen;Jing, Fang;Bai, Hui-Li;Liu, Zhao;Zhang, Lei;Chen, Cheng;Cong, Li;Zhang, Xi;Zhang, Hong-Yan;Zhan, Jin-Qiong;Zhang, Wen Jie
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.14 no.12
    • /
    • pp.7459-7466
    • /
    • 2013
  • Background: Some 60 years after the worldwide introduction of the Papanicolaou smear, cervical cancer remains a burden in developing countries, where more than 85% of the world's new cases and deaths occur, suggesting a failure to establish comprehensive cervical-cancer control programs. Effective interventions are available to control cervical cancer, but not all of them are affordable in low-income settings. Disease awareness saves lives through risk reduction, as witnessed in the reduced mortality of HIV/AIDS and smoking-related cancers. Subjects and Methods: We initiated a community-based awareness program on cervical cancer in two low-income Muslim Uyghur townships in Kashi (Kashgar) Prefecture, Xinjiang, China in 2008. The education involved more than 5,000 women from the two rural townships, and awareness was then evaluated in 2010 and 2011, respectively, using a questionnaire with 10 basic knowledge questions on cervical cancer. Demographic information was also collected and included in an EpiData database. A 10-point scoring system was used to score awareness. Results: The effectiveness and feasibility of the program were evaluated among 4,475 women aged 19-70 years, of whom more than 92% lived on or below US$1.00/day. Women without prior education showed a poor average awareness rate of 6.4% (164/2,559). A one-time education intervention, however, sharply raised the awareness rate four-fold to 25.5% (493/1,916). Importantly, low income and illiteracy were two reliable factors affecting awareness before and after the education intervention. Conclusions: Education intervention can significantly raise awareness of cervical cancer in low-income women. Economic development and compulsory education are two important solutions for raising general disease awareness. We propose that implementing community-based awareness programs against cervical cancer is realistic, locally affordable and sustainable in low-income countries; such programs may save many lives over time and, importantly, will facilitate the integration of comprehensive programs when feasible. In this context, adopting this strategy may provide one good example of how to achieve "good health at low cost".

The Roles of Shop Owners in Boosting Conventional Markets (상권활성화에 있어서 상업자의 역할에 관한 연구)

  • Park, Seung-Je
    • Journal of Distribution Science
    • /
    • v.9 no.4
    • /
    • pp.93-102
    • /
    • 2011
  • With growing interest in boosting conventional markets, many authors have paid considerable attention to the roles of shop owners, store image improvement, and how to attract or retain customers. Nevertheless, it is not easy to find academic papers on the relationship between shop owners and their direct or indirect contribution to their trading areas. Accordingly, this research set out to answer the following question: what kinds of roles should shopkeepers play in revitalizing poor conventional markets? Building on previous studies of enhancing traditional markets, this research approaches the question from a new angle, namely the perspectives of the shop owner and the trader. It aims to identify resolutions associated with the roles of shop owners in enhancing a shopping district in a specific area, classifying their business roles into a few categories depending on the degree of their participation in improving the shopping environment. Compared with previous studies that emphasize improving customer service from a shopkeeper's perspective, this research provides a new insight into how to boost conventional markets. It is also necessary to note how market participants, particularly shop owners as the key players, can contribute to rebuilding their business area together with their customers. As a research technique for achieving this goal, the author adopted a documentation methodology based on the extensive existing literature on rebuilding traditional markets. Many authors concerned with revitalizing conventional markets have proposed a variety of strategies and suggested detailed action plans from a practitioner's perspective; the research aim is accomplished by analyzing these results. Rather than simply identifying the roles of shop owners, the author found that shop owners must understand their social contribution to enhancing their trading areas, as well as their functional roles in forming a regional society; the conventional market should thus be regarded as a place where regional culture is shared. The author draws the following conclusions. The roles of shop owners are among the most important levers for revitalizing traditional markets. Their business activities are closely related to the improvement of the trading area in terms of sociality, regional development, and market revitalization, achieved by selling products or services to the customers visiting that area. In short, shop owners must actively take part in boosting conventional markets as core players. Although the research aim was achieved, this study, like most documentation-based research, has a limitation: because it relies on the results of prior research conducted some time ago, whether the findings apply to contemporary markets should be re-examined in future research from a practitioner's rather than an academic's perspective.


Development of a Detection Model for the Companies Designated as Administrative Issue in KOSDAQ Market (KOSDAQ 시장의 관리종목 지정 탐지 모형 개발)

  • Shin, Dong-In;Kwahk, Kee-Young
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.157-176
    • /
    • 2018
  • The purpose of this research is to develop a model that detects companies designated as administrative issues in the KOSDAQ market using financial data. Administrative issue designation flags companies at high risk of delisting and gives them time to resolve the reasons for the potential delisting under certain restrictions of the Korean stock market. It acts as an alarm that informs investors and market participants which companies are likely to be delisted and warns them to invest safely. Despite this importance, there are relatively few studies on administrative issue prediction models compared with the many studies on bankruptcy prediction models. This study therefore develops and verifies a detection model for companies designated as administrative issues using the financial data of KOSDAQ companies. Logistic regression and decision trees are proposed as the data mining models for detecting administrative issues. According to the results of the analysis, the logistic regression model predicted designation using three variables - ROE (earnings before tax), cash flows/shareholder's equity, and asset turnover ratio - and its overall accuracy was 86% on the validation dataset. The decision tree (Classification and Regression Trees, CART) model applied classification rules using cash flows/total assets and ROA (net income), and its overall accuracy reached 87%. The implications of the financial indicators selected by the logistic regression and decision tree models are as follows. First, ROE (earnings before tax) in the logistic model reflects the profit and loss of the business segments that will continue, excluding the revenue and expenses of discontinued businesses. A decline in this variable therefore means that the competitiveness of the core business is weakening, and if a large part of the profit comes from one-off gains, the deterioration of business management is likely to intensify further. As the ROE of a KOSDAQ company decreases significantly, the likelihood that the company will be delisted rises. Second, cash flows to shareholder's equity represents the firm's ability to generate cash flow when the financial condition of its subsidiaries is excluded. In other words, a weakening of the parent company's own management capacity, excluding the subsidiaries' competence, can be a main driver of administrative issue designation. Third, a low asset turnover ratio means that current and non-current assets are used ineffectively, or that asset investment is excessive. If the asset turnover ratio of a KOSDAQ-listed company decreases, corporate activities need to be examined in detail from various perspectives, such as weakening sales or changes in inventories. Cash flows/total assets, the variable selected by the decision tree model, is a key indicator of the company's cash condition and its ability to generate cash from operating activities. Cash flow indicates whether a firm can carry out its main activities (maintaining its operating capability, repaying debts, paying dividends, and making new investments) without relying on external financing. A negative value of this indicator therefore signals that the company may have serious problems in its business activities.
If the cash flow from operating activities of a company is smaller than its net profit, the net profit has not been converted into cash, indicating serious problems in managing the company's trade receivables and inventory. Therefore, as cash flows/total assets decreases, the probability of administrative issue designation and the probability of delisting increase. In summary, the logistic regression-based detection model in this study is driven by the company's financial performance, including ROE (earnings before tax), whereas the decision tree-based model predicts designation based on the company's cash flows.
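A schematic sketch, on synthetic data rather than the KOSDAQ dataset, of the two detection models the paper compares: logistic regression on ROE, cash flows/shareholder's equity and asset turnover, and a CART decision tree on cash flows/total assets and ROA. The generated labels, thresholds and accuracies carry no empirical meaning.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([
    rng.normal(0.05, 0.1, n),   # ROE (earnings before tax)
    rng.normal(0.10, 0.2, n),   # cash flows / shareholder's equity
    rng.normal(1.00, 0.4, n),   # asset turnover ratio
    rng.normal(0.05, 0.1, n),   # cash flows / total assets
    rng.normal(0.03, 0.1, n),   # ROA (net income)
])
# Hypothetical label: designation risk rises as the ratios deteriorate.
risk = -5 * X[:, 0] - 3 * X[:, 3] - 1 * X[:, 2] + rng.normal(0, 0.5, n)
y = (risk > np.quantile(risk, 0.9)).astype(int)  # ~10% designated

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

logit = LogisticRegression().fit(X_tr[:, :3], y_tr)                 # three-ratio model
cart = DecisionTreeClassifier(max_depth=3).fit(X_tr[:, 3:], y_tr)   # CART on two ratios

print("logistic regression accuracy:", logit.score(X_te[:, :3], y_te))
print("decision tree accuracy:      ", cart.score(X_te[:, 3:], y_te))
```

On the real data, the signs of the logistic coefficients and the tree's split thresholds would be read off as the economic implications the abstract discusses.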