• Title/Summary/Keyword: research methodologies

Development of Evaluation Model for ITS Project using the Probabilistic Risk Analysis (확률적 위험도분석을 이용한 ITS사업의 경제성평가모형)

  • Lee, Yong-Taeck;Nam, Doo-Hee;Lim, Kang-Won
    • Journal of Korean Society of Transportation
    • /
    • v.23 no.3 s.81
    • /
    • pp.95-108
    • /
    • 2005
  • The purpose of this study is to develop an ITS evaluation model using the Probabilistic Risk Analysis (PRA) methodology and to demonstrate its goodness-of-fit for large ITS projects through a comparative analysis of the DEA and PRA models. The results are summarized below. First, an evaluation model using PRA with Monte Carlo Simulation (MCS) and Latin Hypercube Sampling (LHS) is developed and applied to an ITS project initiated by a local government. The risk factors are categorized into cost, benefit, and socio-economic factors, and PDF (Probability Density Function) parameters for these factors are estimated. The log-normal, beta, and triangular distributions fit the market and delivered prices well, while the triangular and uniform distributions are valid for the benefit data from the simulation analysis based on several deployment scenarios. Second, decision-making rules for the risk analysis of project cost and economic feasibility are suggested. The developed PRA model is applied to the Daejeon metropolitan ITS model deployment project for validation. The cost analysis shows that the Deterministic Project Cost (DPC) and Deterministic Total Project Cost (DTPC) are biased percentile values of the CDF produced by the PRA model, and that the project needs a Contingency Budget (CB) because these values turn out to be less than the Target Value (TV; the 85% value). The project also carries high risk in DTPC and DPC, whose coefficients of variation (C.V.) are 4 and 15, compared with 19-28 for DTPC and 22-107 for DPC in construction and transportation projects. The economic analysis shows that the total system and the subsystems of this project fall into type II, meaning the project is economically feasible but carries high risk. Third, the goodness-of-fit of the PRA model is verified by comparing the results of the PRA and DEA models.
The difference in evaluation indices reaches up to 68%, and consequently the deployment priority of the ITS subsystems changes between the two models. In conclusion, the ITS evaluation model using PRA, which accounts for project risk through probability distributions, is superior to DEA: it supports proper decision making, and the risk factors estimated by the PRA model can be controlled through the risk management program suggested in this paper. Further research is needed, both to build a database of deployment data and to develop methodologies for estimating ITS effects with the PRA model, in order to broaden the usage of PRA in the evaluation of ITS projects.
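The cost-risk procedure described in the abstract, propagating factor-level probability distributions through Monte Carlo simulation and then reading a Target Value off the resulting CDF, can be sketched as follows. The factor names, triangular parameters, and units are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical cost factors with triangular PDFs (min, mode, max), in
# arbitrary units; the paper fits log-normal, beta, and triangular
# distributions to actual market and delivered prices.
cost_factors = {
    "equipment": (80, 100, 140),
    "communication": (30, 40, 60),
    "civil_works": (50, 70, 110),
}

N = 100_000  # Monte Carlo trials
total_cost = sum(
    rng.triangular(lo, mode, hi, size=N) for lo, mode, hi in cost_factors.values()
)

# Target Value: the 85th-percentile cost read off the simulated CDF
tv_85 = np.percentile(total_cost, 85)
# Deterministic cost estimate: the sum of the most likely (mode) values
deterministic = sum(mode for _, mode, _ in cost_factors.values())
# Contingency Budget: gap between the target value and the deterministic cost
contingency = tv_85 - deterministic
# Coefficient of variation (%) as a dispersion-based risk measure
cv = 100 * total_cost.std() / total_cost.mean()

print(f"TV(85%)={tv_85:.1f}, deterministic={deterministic:.1f}, "
      f"CB={contingency:.1f}, C.V={cv:.1f}%")
```

Because each triangular distribution is right-skewed here, the simulated 85th percentile exceeds the sum of the modes, which is why a contingency budget emerges from the analysis.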

Current Status and Prospects of Various Methods used for Screening Probiotic Microorganisms (Probiotic 미생물 검사에 사용되는 다양한 방법들에 대한 현황과 향후 전망)

  • Kim, Dong-Hyeon;Kim, Hong-Seok;Jeong, Dana;Chon, Jung-Whan;Kim, Hyunsook;Kim, Young-Ji;Kang, Il-Byung;Lee, Soo-Kyung;Song, Kwang-Young;Park, Jin-Hyeong;Chang, Ho-Seok;Seo, Kun-Ho
    • Journal of Dairy Science and Biotechnology
    • /
    • v.34 no.4
    • /
    • pp.203-216
    • /
    • 2016
  • Probiotic microorganisms are thought to provide health benefits when consumed. In 2001, the World Health Organization defined probiotics as "live microorganisms which confer a health benefit on the host, when administered in adequate amounts." Three kinds of methods for screening potential probiotics are currently widely available. (1) In vitro assays of potential probiotics are preferred because of their simplicity and low cost. (2) In vivo approaches for exploring potential probiotics reflect the enormous diversity of biological models with various complex mechanisms. (3) Potential probiotics have been analyzed using several genetic and omics technologies to identify gene expression or protein production patterns under various conditions. However, there is no better procedure for selecting potential probiotics than testing candidate strains on the target population. Hence, in this review, we provide an overview of the different methodologies used to identify new probiotic strains, and we describe future perspectives for the use of in vitro, in vivo, and omics approaches in probiotic research.

An Investigation of the Relationships among College Backgrounds in Science, Attitudes toward Teaching Science, Science Teaching Self-Efficacy Beliefs, and Instructional Strategies of Elementary School Teachers (I) - Based on a Quantitative Data Analysis - (초등학교 교사들의 과학 교수 방법에 영향을 미치는 과학에 대한 학문적 배경, 과학 교수에 대한 태도, 과학 교수 효능에 대한 신념의 상호 관계성 조사 (I) - 양적 연구를 중심으로 -)

  • Park, Sung-Hye
    • Journal of The Korean Association For Science Education
    • /
    • v.20 no.4
    • /
    • pp.542-561
    • /
    • 2000
  • The purpose of this study was to investigate the relationships among elementary school teachers' high school and college backgrounds in science, their attitudes toward teaching science, their science teaching self-efficacy beliefs, and their instructional strategies. Both quantitative and qualitative research methodologies were utilized; this paper, however, presents only the results of the quantitative data analysis, with the qualitative analysis to be reported separately. Four instruments were used to gather information on teachers' backgrounds in science (the number and grades of high school science courses, college science courses, and college science methods courses), their attitudes toward teaching science, their science teaching self-efficacy beliefs (personal science teaching efficacy and science teaching outcome expectancy), and their instructional strategies (indirect, direct, and mixed methods). A sample of 340 practicing elementary school teachers participated in the study. Pearson's correlation coefficients were computed to relate teachers' backgrounds in science, attitudes toward teaching science, science teaching self-efficacy beliefs, and instructional strategies. The correlation coefficients were statistically significant for all four variables investigated. These results suggest that teacher preparation and training programs that include science and science methods courses should help prospective and practicing teachers change their attitudes and beliefs toward science teaching.
It is expected that future studies of teachers' attitudes, beliefs, and behaviors toward teaching science can help improve science teacher education in Korea.
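The quantitative analysis described above is essentially a matrix of pairwise Pearson correlations across the four constructs. A minimal sketch with synthetic data follows; the variable relationships and effect sizes are invented for illustration and do not reproduce the study's instruments or scores:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical survey scores for n teachers (the study sampled 340);
# the four columns stand in for science background, attitude toward
# teaching science, self-efficacy beliefs, and instructional strategy.
n = 340
background = rng.normal(50, 10, n)
attitude = 0.5 * background + rng.normal(0, 10, n)   # correlated by construction
efficacy = 0.4 * attitude + rng.normal(0, 10, n)
strategy = 0.3 * efficacy + rng.normal(0, 10, n)

# Pearson correlation matrix over the four variables
data = np.vstack([background, attitude, efficacy, strategy])
corr = np.corrcoef(data)
print(np.round(corr, 3))
```

Each off-diagonal entry is the Pearson r for one pair of variables; in a real analysis each coefficient would be accompanied by a significance test against the n = 340 sample.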

A Study on Act on Certified Detective and Certified Detective Business (공인탐정 관련 법률(안)의 문제점과 개선방안에 관한 연구)

  • Kim, Bong-Soo;Choo, Bong-Jo
    • Korean Security Journal
    • /
    • no.61
    • /
    • pp.285-305
    • /
    • 2019
  • In the bill of the [Act on Certified Detective and Certified Detective Business] (hereinafter the Certified Detective Act), proposed by National Assembly member Lee Wan-Yong in 2017, the legislative rationale was that various incidents and accidents, including new types of crime, are increasing as society develops and becomes more complex; that the state's investigative forces alone cannot resolve them all, given limits of manpower and budget; and that certified detectives or private investigators are therefore required. According to the June 2018 decision of the Constitutional Court, Article 40 (4) of the Act on the Use and Protection of Credit Information prohibits 'finding the location and contact information of a specific person, or investigating privacy matters other than commercial relations such as financial transactions.' The prohibition is intended to prevent illegal acts in the course of investigating a specific person's location, contact information, and private life, and to protect personal privacy and tranquility from the misuse and abuse of personal information. Such 'privacy investigation businesses' currently operate as self-employed businesses; this has become a social issue as some companies illegally collect and provide such private information using hidden cameras or vehicle location trackers, and they have accordingly become targets of crackdowns by investigative agencies.
Considering this reality, the position of the Constitutional Court is that 'the ban on investigations of privacy' does not infringe the claimant's freedom to choose a job, because it is difficult to realize the legislative purpose of the Act on the Use and Protection of Credit Information other than by prohibiting 'investigation businesses involving privacy,' and because it remains possible to run a similar business, such as a detective business, within the scope permitted by the laws on credit research and security service businesses. The Court has also consistently held that, absent special circumstances, the legislative regulation of a particular occupation is, in principle, a matter of legislative policy determined by the legislator's political, economic, and social considerations, and it has broadly recognized legislators' discretion in designing qualification systems related to the freedom of occupation. In this regard, this study examines the problems of the certified detective system and plans for its improvement, focusing on the certified detective bill recently under discussion, and seeks to establish a legal basis for certified detectives and the certified detective business, in order to cultivate and institutionalize the business, suggest methodologies for its development, and protect the rights of the people.

Potential Contamination Sources on Fresh Produce Associated with Food Safety

  • Choi, Jungmin;Lee, Sang In;Rackerby, Bryna;Moppert, Ian;McGorrin, Robert;Ha, Sang-Do;Park, Si Hong
    • Journal of Food Hygiene and Safety
    • /
    • v.34 no.1
    • /
    • pp.1-12
    • /
    • 2019
  • The health benefits associated with consumption of fresh produce have been clearly demonstrated and encouraged by international nutrition and health authorities. However, since fresh produce is usually minimally processed, increased consumption of fresh fruits and vegetables has also led to a simultaneous escalation of foodborne illness cases. According to a report by the World Health Organization (WHO), 1 in 10 people suffer from foodborne diseases and 420,000 die every year globally. In comparison to other processed foods, fresh produce can be easily contaminated by various routes at different points in the supply chain from farm to fork. This review focuses on the identification and characterization of possible sources of foodborne illness from chemical, biological, and physical hazards, and on the applicable methodologies for detecting potential contaminants. Agro-chemicals (pesticides, fungicides, and herbicides), natural toxins (mycotoxins and plant toxins), and heavy metals (mercury and cadmium) are the main sources of chemical hazards, which can be detected by several methods including chromatography and nano-techniques based on nanostructured materials such as noble metal nanoparticles (NMPs), quantum dots (QDs), and magnetic nanoparticles or nanotubes. However, the diversity of chemical structures complicates the establishment of one standard method for differentiating the variety of chemical compounds. In addition, fresh fruits and vegetables contain high nutrient contents and moisture, which promote the growth of unwanted microorganisms including bacterial pathogens (Salmonella, E. coli O157:H7, Shigella, Listeria monocytogenes, and Bacillus cereus) and non-bacterial pathogens (norovirus and parasites). In order to detect specific pathogens in fresh produce, methods based on molecular biology, such as PCR, and on immunology are commonly used.
Finally, physical hazards including contamination by glass, metal, and gravel in food can cause serious injuries to customers. In order to decrease physical hazards, vision systems such as X-ray inspection have been adopted to detect physical contaminants in food, while exceptional handling skills by food production employees are required to prevent additional contamination.

Comparison of the Physicochemical Properties of Meat and Viscera of Dried Abalone (Haliotis discus hannai) Prepared using Different Drying Methods (건조방법에 따른 건조 전복 (Haliotis discus hannai)의 이화학적 특성 비교)

  • Park, Jeong-Wook;Lee, Young-Jae;Park, In-Bae;Shin, Gung-Won;Jo, Yeong-Cheol;Koh, So-Mi;Kang, Seong-Gook;Kim, Jeong-Mok;Kim, Hae-Seop
    • Food Science and Preservation
    • /
    • v.16 no.5
    • /
    • pp.686-698
    • /
    • 2009
  • We sought basic data for product development and storage improvement of abalone. We explored drying methodologies, such as shade drying, cold air drying, and vacuum freeze drying, and examined various physicochemical features of both meat and viscera. Raw abalone meat had 78.88±1.01% moisture, 9.24±0.27% crude protein, and 10.05±0.81% carbohydrate (all w/w). The moisture level of dried abalone meat was highest after cold air drying, at 18.38±0.91%, and lowest after vacuum freeze drying, at 1.05±0.05%. The total amino acid content of raw abalone meat was 17,124.05±493.18 mg%, falling to 12,969.92±583.65 mg% after shade drying and 13,328.78±653.11 mg% after cold air drying. The total free amino acid content of raw abalone meat was 4,261.99±106.55 mg%, rising to 6,336.50±285.15 mg% after shade drying, 5,072.04±248.53 mg% after cold air drying, and 4,638.85±218.03 mg% after vacuum freeze drying. The fatty acid proportions in raw abalone meat were 47.00±0.99% saturated, 22.18±1.05% monounsaturated, and 30.82±1.45% polyunsaturated; in the viscera, the proportions were 36.72±0.74% saturated, 25.44±1.12% monounsaturated, and 37.84±1.67% polyunsaturated. The chondroitin sulfate contents of raw abalone were 11.95±0.35% in meat and 7.71±0.19% in viscera (both w/w). After shade drying, the chondroitin sulfate content was 16.57±0.90% in meat and 9.24±0.50% in viscera; the figures after cold air drying were 16.17±0.79% and 12.44±0.61%, and after vacuum freeze drying 25.17±1.16% and 15.22±0.70%, the highest meat content. The level of collagen in raw abalone was 69.80±3.07 mg/g in meat and 40.62±1.79 mg/g in viscera.
Meat and viscera dried in the shade had 144.05±7.78 mg/g and 44.16±2.39 mg/g collagen, respectively, whereas the figures after cold air drying were 133.29±6.53 mg/g and 69.20±3.39 mg/g, and after vacuum freeze drying 137.51±6.33 mg/g and 60.61±2.79 mg/g. Volatile basic nitrogen values of raw abalone were higher in viscera, at 19.01±0.84 mg%, than in meat (10.10±0.44 mg%). The value for shade-dried abalone meat was 136.77±7.37 mg% and that of viscera 197.97±10.69 mg%. After cold air drying, the meat and visceral values were 27.32±1.34 mg% and 71.37±3.50 mg%, respectively.

The Abuse and Invention of Tradition from Maintenance Process of Historic Site No.135 Buyeo Gungnamji Pond (사적 제135호 부여 궁남지의 정비과정으로 살펴본 전통의 남용과 발명)

  • Jung, Woo-Jin
    • Journal of the Korean Institute of Traditional Landscape Architecture
    • /
    • v.35 no.2
    • /
    • pp.26-44
    • /
    • 2017
  • Regarded as Korea's traditional pond, Gungnamji Pond was surmised to be the "Gungnamji" of the historical records because of its position south of Hwajisan (花枝山) and because the nearby Gwanbuk-ri (官北里) relics were suspected of corresponding to King Muwang (武王)'s pond in The Chronicles of the Three States [三國史記] and to the Sabi Palace, respectively; it was then subjected to restoration following its designation as a national historic site. This study focuses on the distortion of authenticity identified in the course of the "Gungnamji Pond" restoration and on the invention of tradition. The conclusions are summarized as follows. 1. Once called Maraebangjuk (마래방죽) or Macheonji (馬川池) Pond, Gungnamji Pond existed during the Japanese colonial period as a vast low-lying swamp of some 30,000 pyeong. Hong Sa-jun, who played a leading role in the restoration of "Gungnamji Pond," said that even during the 1940s the remains of an island and stone facilities suspected of being relics of the Baekje period were found, along with traces of a royal palace and garden above them. Hong also proposed linking "Gungnamji Pond" with "Maraebangjuk" through the 'tale of Seodong [薯童說話]' and the detached palace of Hwajisan, which ultimately served as the theoretical ground for the restoration of Gungnamji Pond. Judging from Hong's sketch, the form and scale of Maraebangjuk were discernible, and the form closely matched that photographed during the Japanese colonial period. 2. The restoration of Gungnamji Pond was minimized because of the land redevelopment project implemented in the 1960s, as the remaining land area attests.
The fundamental problem with the restorations of Gungnamji Pond attempted repeatedly from 1964 through 1967 was that the work was based not on archaeological fact but on the perspective of later generations, ultimately yielding a replica of Hyangwonji Pond of Gyeongbok Palace. More specifically, the methods employed, such as setting an island and a pavilion within the pond and bridging the island to the land, show how Gungnamji Pond was modeled after Hyangwonji Pond. Furthermore, Chihyanggyo (醉香橋) Bridge, referenced in the design of the bridge, can hardly be regarded as a form indigenous to the Joseon Dynasty, and the motivations and ideas behind the misguided restoration design further devalued Gungnamji Pond. Such outright replication of a design widely known as an ingredient of the traditional landscape aimed at the aesthetic symbolism and preference attached to Gyeongbok Palace, intending to grant Gungnamji Pond a physical status on par with it. 3. Detached from authenticity as a historical site from the outset, Gungnamji Pond suffered distortions of landscape beauty and tradition throughout the restoration process. The restoration of such a historical monument, devoid of constructive use and certain of distortion, is intimately tied to the nationalistic cultural policy promoted by the Park Jeong-hee regime through the 1960s and 1970s. In the context of "manipulated discussions of tradition," that cultural policy transformed citizens' recollections into an idealized form of the past, magnifying it at best.
Consequently, many historic sites across the nation were made as fancy and grand as possible, beyond their actual state, and "Gungnamji Pond" fell victim to this monopolistic, government-led cultural policy, which incrementally swept away original spaces, and hence their value, with new buildings and structures.

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risks. The data used in the analysis totaled 10,545 rows and 160 columns: 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used default events as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility, based on the Merton model. This solved the data imbalance problem caused by the scarcity of default events, which had been pointed out as a limitation of the existing methodology, as well as the problem of reflecting the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, the default risks of unlisted companies without stock price information can also be derived appropriately. The model can therefore provide stable default risk assessment services to unlisted companies, such as small and medium-sized companies and startups, whose default risk is difficult to determine with traditional credit rating models. Although the prediction of corporate default risk using machine learning has been actively studied recently, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for calculating default risk, given that an entity's default risk information is used very widely in the market and sensitivity to differences in default risk is high. Strict standards are also required for the calculation methods.
The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and of changes in future market conditions. This study reduced the bias of individual models by utilizing stacking ensemble techniques that synthesize various machine learning models. This makes it possible to capture complex nonlinear relationships between default risk and various corporate information while retaining the advantages of machine-learning-based default risk prediction models, which take less time to calculate. To produce the sub-model forecasts used as input data for the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which had performed best among the single models. Next, to check for statistically significant differences between the stacking ensemble model and each individual model, pairs of forecasts were constructed. Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank-sum test was used to check whether the two forecasts in each pair differed significantly. The analysis showed that the forecasts of the stacking ensemble model differed significantly from those of the MLP and CNN models.
In addition, this study provides a methodology that allows existing credit rating agencies to apply machine-learning-based bankruptcy risk prediction, given that traditional credit rating models can also be included as sub-models in calculating the final default probability. The stacking ensemble techniques proposed in this study can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will be used as a resource for increasing practical use by overcoming and improving the limitations of existing machine-learning-based models.
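A stacking ensemble of the kind described, where sub-model forecasts feed a meta-learner, can be sketched with scikit-learn. The synthetic features, sub-model choices, and hyperparameters below are illustrative assumptions, not the paper's configuration; only the cv=7 split mirrors the abstract's seven-way division of the training data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the firm-level features; the study used 160
# financial statement and ratio columns with a Merton-model-derived label.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Sub-models produce out-of-fold predictions (cv=7 echoes the paper's
# seven-piece training split); a meta-learner combines them.
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                              random_state=0)),
    ],
    final_estimator=LogisticRegression(),
    cv=7,
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
print(f"stacking test accuracy: {acc:.3f}")
```

Because a traditional credit rating model can be wrapped as another estimator in the `estimators` list, this structure matches the paper's suggestion that rating agencies reflect existing models as sub-models.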

Predicting the Direction of the Stock Index by Using a Domain-Specific Sentiment Dictionary (주가지수 방향성 예측을 위한 주제지향 감성사전 구축 방안)

  • Yu, Eunji;Kim, Yoosin;Kim, Namgyu;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.95-110
    • /
    • 2013
  • Recently, the amount of unstructured data being generated through a variety of social media has been increasing rapidly, resulting in an increasing need to collect, store, search for, analyze, and visualize this data. Such data cannot be handled appropriately by the traditional methodologies used for analyzing structured data, because of its vast volume and unstructured nature. In this situation, many attempts are being made to analyze unstructured data, such as text files and log files, with various commercial or noncommercial analytical tools. Among the contemporary issues dealt with in the literature on unstructured text data analysis, the concepts and techniques of opinion mining have been attracting much attention from pioneering researchers and business practitioners. Opinion mining, or sentiment analysis, refers to a series of processes that analyze participants' opinions, sentiments, evaluations, attitudes, and emotions about selected products, services, organizations, social issues, and so on. In other words, many attempts based on various opinion mining techniques are being made to resolve complicated issues that could not have been solved by existing traditional approaches. One of the most representative attempts using opinion mining may be the recent research that proposed an intelligent model for predicting the direction of the stock index. This model works mainly on the basis of opinions extracted from an overwhelming number of economic news reports. News content published in various media is a classic example of unstructured text data: every day, a large volume of new content is created, digitized, and distributed via online or offline channels. Many studies have revealed that we make better decisions on political, economic, and social issues by analyzing news and other related information.
In this sense, we expect to predict the fluctuation of stock markets partly by analyzing the relationship between economic news reports and the pattern of stock prices. So far, in the literature on opinion mining, most studies, including ours, have utilized a sentiment dictionary to elicit sentiment polarity or sentiment values from a large number of documents. A sentiment dictionary consists of pairs of selected words and their sentiment values; sentiment classifiers refer to the dictionary to determine the sentiment polarity of words, of sentences, and of whole documents. However, most traditional approaches share a common limitation: they do not consider the flexibility of sentiment polarity. That is, in a traditional sentiment dictionary the sentiment polarity or sentiment value of a word is fixed and cannot be changed. In the real world, however, the sentiment polarity of a word can vary depending on the time, situation, and purpose of the analysis, and can even be contradictory in nature. This flexibility of sentiment polarity motivated our study. In this paper, we argue that sentiment polarity should be assigned not merely on the basis of the inherent meaning of a word but on the basis of its ad hoc meaning within a particular context. To implement this idea, we present an intelligent investment decision-support model based on opinion mining that scrapes and parses massive volumes of economic news on the web, tags sentiment words, classifies the sentiment polarity of the news, and finally predicts the direction of the next day's stock index. In addition, we applied a domain-specific sentiment dictionary, instead of a general-purpose one, to classify each piece of news as either positive or negative. For performance evaluation, we conducted intensive experiments and investigated the prediction accuracy of our model.
For the experiments to predict the direction of the stock index, we gathered and analyzed 1,072 articles about stock markets published by "M" and "E" media between July 2011 and September 2011.
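The dictionary-based scoring step at the core of this approach can be sketched as follows. The words, polarity values, and headlines are invented for illustration; a real domain-specific dictionary would be far larger and built from labeled market news, which is the paper's point about context-dependent polarity:

```python
# A hypothetical domain-specific dictionary for stock-market news.
# In a general-purpose dictionary a word like "rise" may be neutral,
# but in this domain it carries a bullish (positive) value.
domain_dictionary = {
    "surge": 2, "rise": 1, "rally": 2,     # bullish terms
    "fall": -1, "plunge": -2, "fear": -1,  # bearish terms
}

def score_news(text, dictionary):
    """Sum the sentiment values of dictionary words found in the text."""
    tokens = text.lower().split()
    return sum(dictionary.get(tok, 0) for tok in tokens)

def classify(text, dictionary):
    """Map the summed score to a positive/negative/neutral label."""
    s = score_news(text, dictionary)
    return "positive" if s > 0 else "negative" if s < 0 else "neutral"

headlines = [
    "stocks rally as exports surge",
    "markets plunge on inflation fear",
]
for h in headlines:
    print(h, "->", classify(h, domain_dictionary))
```

Aggregating these per-article labels over a day's news yields the signal from which the next day's index direction is predicted.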

A study on the prediction of korean NPL market return (한국 NPL시장 수익률 예측에 관한 연구)

  • Lee, Hyeon Su;Jeong, Seung Hwan;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.123-139
    • /
    • 2019
  • The Korean NPL market was formed by the government and foreign capital shortly after the 1997 IMF crisis. The market's history is short, however; bad debt began to increase again after the global financial crisis of 2009, owing to the recession in the real economy. NPL has become a major investment in the market in recent years, as domestic capital market investors have begun to enter the NPL market in earnest. Although the domestic NPL market has received considerable attention due to its recent overheating, research on it remains scarce because the history of capital market investment in the domestic NPL market is short. In addition, more scientific and systematic analysis is required for decision making, given declining profitability and price fluctuations driven by swings in the real estate business. In this study, we propose a prediction model that can determine whether the benchmark yield is achieved, using NPL market data, in accordance with market demand. To build the model, we used Korean NPL data covering about four years, from December 2013 to December 2017, comprising 2,291 property records. As independent variables, from the 11 variables describing the characteristics of the real estate, only those related to the dependent variable were selected, using one-to-one t-tests, stepwise logistic regression, and decision trees. Seven independent variables were chosen: purchase year, SPC (Special Purpose Company), municipality, appraisal value, purchase cost, OPB (Outstanding Principal Balance), and HP (Holding Period). The dependent variable is a binary variable indicating whether the benchmark rate of return is reached.
This is because a model predicting a binary variable is more accurate than one predicting a continuous variable, and this accuracy is directly related to the model's effectiveness. Moreover, for a special purpose company the main concern is whether or not to purchase a property, so knowing whether a certain level of return will be achieved is sufficient for the decision. For the dependent variable, we constructed and compared predictive models with the threshold adjusted, to ascertain whether 12%, the standard rate of return used in the industry, is a meaningful reference value. The average hit ratio of the predictive model built with the dependent variable defined by the 12% standard rate of return was the best, at 64.60%. To propose an optimal prediction model based on the chosen dependent variable and the seven independent variables, we constructed prediction models using five methodologies (discriminant analysis, logistic regression, decision tree, artificial neural network, and a genetic algorithm linear model) and compared them. Ten sets of training and testing data were extracted using the 10-fold validation method. After building each model, the hit ratio of each set was averaged and the performance compared. The average hit ratios of the models built with discriminant analysis, logistic regression, decision tree, artificial neural network, and the genetic algorithm linear model were 64.40%, 65.12%, 63.54%, 67.40%, and 60.51%, respectively, confirming that the artificial neural network model is the best. This study shows that it is effective to use the seven independent variables and an artificial neural network prediction model in the future NPL market.
The proposed model predicts in advance whether the 12% return on new properties will be achieved, which will help special purpose companies make investment decisions. Furthermore, we anticipate that the NPL market will become more liquid as transactions proceed at appropriate prices.
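The model comparison described above, averaging hit ratios over 10 folds for several classifiers, can be sketched as follows. The synthetic data and hyperparameters are illustrative assumptions; the discriminant analysis and genetic algorithm sub-models are omitted for brevity:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the 2,291 NPL property records; the real model used
# seven features (purchase year, SPC, municipality, appraisal value, purchase
# cost, OPB, holding period) and a binary "12% benchmark reached" label.
X, y = make_classification(n_samples=2291, n_features=7, n_informative=5,
                           random_state=1)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(random_state=1),
    "ann": MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=1),
}

# 10-fold cross-validation: average hit ratio (accuracy) per model,
# mirroring the paper's 10-set train/test comparison.
hit_ratios = {}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
    hit_ratios[name] = scores.mean()
    print(f"{name}: mean hit ratio = {scores.mean():.4f}")
```

Selecting the model with the highest averaged hit ratio corresponds to the paper's choice of the artificial neural network at 67.40%.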