• Title/Summary/Keyword: optimal algorithm


The effects of physical factors in SPECT (물리적 요소가 SPECT 영상에 미치는 영향)

  • 손혜경;김희중;나상균;이희경
    • Progress in Medical Physics
    • /
    • v.7 no.1
    • /
    • pp.65-77
    • /
    • 1996
  • Using the 2-D and 3-D Hoffman brain phantoms, the 3-D Jaszczak phantom, and Single Photon Emission Computed Tomography (SPECT), the effects of data acquisition parameters, attenuation, noise, scatter, and the reconstruction algorithm on image quantitation and image quality were studied. For the data acquisition parameters, images were acquired while varying the angular increment of rotation and the radius of rotation. A smaller angular increment resulted in superior image quality. A smaller radius from the center of rotation also gave better image quality, since resolution degrades as the distance from the detector to the object increases. Using flood data from the Jaszczak phantom, an optimal attenuation coefficient of 0.12 cm⁻¹ was derived for all collimators, and all images were subsequently corrected for attenuation using this coefficient. The flood data from the Jaszczak phantom showed a concave line profile without attenuation correction and a flat profile with it, and attenuation correction improved both image quality and image quantitation. To study the effects of noise, images were acquired for 1, 2, 5, 10, and 20 minutes. The 20-minute image showed much better noise characteristics than the 1-minute image, indicating that increasing the counting time reduces noise, which follows a Poisson distribution. Images were also acquired using dual-energy windows, one on the main photopeak and one on the scatter peak, and compared with and without scatter correction. Scatter correction improved image quality so that the cold spheres and bar patterns in the Jaszczak phantom were clearly visualized; applied to the 3-D Hoffman brain phantom, it likewise improved image quality. In conclusion, SPECT images are significantly affected by data acquisition parameters, attenuation, noise, scatter, and the reconstruction algorithm, and these factors must be optimized or corrected to obtain useful SPECT data in clinical applications.
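
As a rough illustration of two of the corrections discussed above, the Python sketch below shows a dual-energy-window scatter subtraction and a first-order (Chang-type) attenuation-correction factor. The scatter multiplier k = 0.5 is the conventional value for the dual-window method and only the 0.12 cm⁻¹ coefficient comes from the abstract; this is illustrative, not the authors' implementation.

```python
import numpy as np

def dual_window_scatter_correction(photopeak, scatter_window, k=0.5):
    """Subtract a scaled scatter-window estimate from the photopeak counts.

    k = 0.5 is the conventional dual-energy-window multiplier (an assumption,
    not a value reported in the paper)."""
    corrected = np.asarray(photopeak) - k * np.asarray(scatter_window)
    return np.clip(corrected, 0, None)  # counts cannot be negative

def chang_correction_factor(path_lengths_cm, mu=0.12):
    """First-order Chang attenuation-correction factor for one pixel.

    path_lengths_cm: path lengths from the pixel to the body outline at each
    projection angle; mu = 0.12 cm^-1 is the coefficient derived in the study."""
    return 1.0 / np.mean(np.exp(-mu * np.asarray(path_lengths_cm)))

# toy example: a short count profile and a pixel 5-9 cm from the boundary at 4 angles
print(dual_window_scatter_correction([100, 120, 90], [30, 40, 20]))
print(chang_correction_factor([5.0, 7.0, 9.0, 6.0]))
```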


Financial Fraud Detection using Text Mining Analysis against Municipal Cybercriminality (지자체 사이버 공간 안전을 위한 금융사기 탐지 텍스트 마이닝 방법)

  • Choi, Sukjae;Lee, Jungwon;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.119-138
    • /
    • 2017
  • Recently, SNS has become an important channel for marketing as well as personal communication. However, cybercrime has also evolved with the development of information and communication technology, and illegal advertising is distributed on SNS in large quantities. As a result, personal information is leaked and monetary damages occur more frequently. In this study, we propose a method to determine which sentences and documents posted on SNS are related to financial fraud. First, as a conceptual framework, we developed a matrix of the conceptual characteristics of cybercriminality on SNS and emergency management, and suggested an emergency-management process consisting of pre-cybercriminality steps (e.g., risk identification) and post-cybercriminality steps; this paper focuses on risk identification. The main process consists of data collection, preprocessing, and analysis. First, we selected the two words 'daechul (loan)' and 'sachae (private loan)' as seed words and collected data containing these words from SNS such as Twitter. The collected data were given to two researchers, who judged whether each item was related to cybercriminality, particularly financial fraud. We then selected keywords from the related vocabulary, restricted to nouns and symbols. With the selected keywords, we searched and collected data from web sources such as Twitter, news sites, and blogs, gathering more than 820,000 articles. The collected articles were refined through preprocessing and turned into learning data. Preprocessing consists of three steps: morphological analysis, stop-word removal, and selection of valid parts of speech. In the morphological analysis step, a complex sentence is decomposed into morpheme units to enable mechanical analysis. In the stop-word removal step, non-lexical elements such as numbers, punctuation marks, and double spaces are removed from the text. In the part-of-speech selection step, only nouns and symbols are retained: nouns refer to things and therefore express the intent of a message better than other parts of speech, and the more illegal a text is, the more frequently symbols are used. To turn the selected data into learning data, each item was labeled 'legal' or 'illegal'. The processed data were then converted into a corpus and a document-term matrix. Finally, the 'legal' and 'illegal' files were mixed and randomly divided into a learning set and a test set; we used 70% of the data for learning and 30% for testing. SVM was used as the discrimination algorithm. Since SVM requires gamma and cost values as its main parameters, we set gamma to 0.5 and cost to 10 based on the optimal value function; the cost is set higher than in general cases. To show the feasibility of the proposed idea, we compared the proposed method with MLE (Maximum Likelihood Estimation), Term Frequency, and a Collective Intelligence method, using overall accuracy as the metric. As a result, the overall accuracy of the proposed method was 92.41% for illegal loan advertisements and 77.75% for illegal door-to-door sales, clearly superior to Term Frequency, MLE, and the other baselines. Hence, the results suggest that the proposed method is valid and practically usable.
In this paper, we propose a framework for crisis management caused by anomalies in unstructured data sources such as SNS. We hope this study contributes to academia by identifying what to consider when applying an SVM-like discrimination algorithm to text analysis, and to practitioners in the fields of brand management and opinion mining.
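
For readers who want a concrete picture of the classification stage, the sketch below builds a document-term matrix and trains an SVM with the gamma (0.5) and cost (10) values reported above, using a 70/30 split. The toy documents and labels are placeholders, not the paper's SNS corpus.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# tiny placeholder corpus; real input would be the preprocessed SNS posts
docs = ["daechul immediate approval call now",
        "weekend hiking photos with family",
        "sachae low interest no credit check",
        "company picnic schedule announcement"]
labels = ["illegal", "legal", "illegal", "legal"]

dtm = CountVectorizer().fit_transform(docs)                # document-term matrix
X_tr, X_te, y_tr, y_te = train_test_split(
    dtm, labels, test_size=0.3, stratify=labels, random_state=42)  # 70/30 split

clf = SVC(kernel="rbf", gamma=0.5, C=10).fit(X_tr, y_tr)   # gamma/cost as in the paper
print(accuracy_score(y_te, clf.predict(X_te)))
```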

Comparison between REML and Bayesian via Gibbs Sampling Algorithm with a Mixed Animal Model to Estimate Genetic Parameters for Carcass Traits in Hanwoo(Korean Native Cattle) (한우의 도체형질 유전모수 추정을 위한 REML과 Bayesian via Gibbs Sampling 방법의 비교 연구)

  • Roh, S.H.;Kim, B.W.;Kim, H.S.;Min, H.S.;Yoon, H.B.;Lee, D.H.;Jeon, J.T.;Lee, J.G.
    • Journal of Animal Science and Technology
    • /
    • v.46 no.5
    • /
    • pp.719-728
    • /
    • 2004
  • The aims of this study were to estimate genetic parameters for carcass traits in Hanwoo (Korean Native Cattle) and to compare two statistical algorithms for estimating them. Data from 1,526 steers collected at the Hanwoo Improvement Center and the Hanwoo Improvement Complex Area from 1996 to 2001 were used for the analyses. The carcass traits considered were carcass weight, dressing percent, eye muscle area, backfat thickness, and marbling score. Genetic parameters estimated with the EM-REML algorithm were compared with those obtained by Bayesian inference via Gibbs sampling to examine their statistical properties. The estimated heritabilities of the carcass traits by the REML method were 0.28, 0.25, 0.35, 0.39, and 0.51, respectively, and those by the Gibbs sampling method were 0.29, 0.25, 0.40, 0.42, and 0.54, respectively. These estimates were not significantly different, although the heritabilities estimated by Gibbs sampling were slightly higher than those from REML. Since the statistics estimated by the two methods did not differ significantly in this study, both methods can be applied efficiently to the analysis of carcass traits in cattle. However, further studies are needed to determine an optimal statistical method for handling large-scale performance data.
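
As a small worked example of the quantity both methods estimate, the sketch below computes narrow-sense heritability from additive-genetic and residual variance components, once as a point estimate (as REML would return) and once as the mean of hypothetical posterior draws from a Gibbs chain; the numbers are illustrative, not the study's estimates.

```python
import statistics

def heritability(sigma2_a: float, sigma2_e: float) -> float:
    """h^2 = additive genetic variance / total phenotypic variance."""
    return sigma2_a / (sigma2_a + sigma2_e)

# REML-style point estimate from illustrative variance components
print(round(heritability(sigma2_a=120.0, sigma2_e=310.0), 2))   # about 0.28

# Gibbs-style posterior mean from illustrative posterior draws of h^2
gibbs_h2_draws = [0.27, 0.31, 0.29, 0.30, 0.28]
print(round(statistics.mean(gibbs_h2_draws), 2))
```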

A User Optimal Traffic Assignment Model Reflecting Route Perceived Cost (경로인지비용을 반영한 사용자최적통행배정모형)

  • Lee, Mi-Yeong;Baek, Nam-Cheol;Mun, Byeong-Seop;Gang, Won-Ui
    • Journal of Korean Society of Transportation
    • /
    • v.23 no.2
    • /
    • pp.117-130
    • /
    • 2005
  • In both the deterministic User Optimal Traffic Assignment Model (UOTAM) and the stochastic UOTAM, travel time, the major criterion for loading traffic onto the network, is defined as the sum of link travel time and turn delay at intersections. In this assignment method, drivers' actual route perception processes and choice behaviors, which can be major explanatory factors, are not sufficiently considered, and the resulting traffic loading may therefore be biased. Although stochastic UOTAM has tried to reflect drivers' route perception cost by assuming a cumulative distribution function of link travel time, these attempts have not produced fundamental solutions; they rest on questionable assumptions, namely the truncated travel-time distribution of the Probit model and the Logit model's assumption that inter-link congestion is independent. The critical reason why deterministic UOTAM has not been able to reflect route perception cost is that the perception cost takes a different value for each origin, each destination, and each path connecting them. Finding the optimum route between an O-D pair therefore runs into a route enumeration problem, in which all routes connecting the pair must be compared; this causes computational failure because the number of paths explodes as the network grows. The purpose of this study is to propose a method that enables UOTAM to reflect route perception cost without enumerating routes between O-D pairs. To this end, this study treats a link as the smallest unit of a path. Since each link can then be handled as a path, the two-link scanning process of the link-label-based optimum path algorithm reduces route enumeration between an O-D pair to finding optimum labels over all links. The computational burden of this method is no greater than that of the link-label-based optimum path algorithm itself. Each perception cost is embedded as a quantitative value obtained by comparing the sub-path from the origin to the link being scanned with the scanned link.
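
The link-label idea can be sketched in a few lines: labels are kept on links rather than nodes, so a perception cost that depends on a pair of consecutive links can be added during the two-link scan without enumerating whole O-D paths. The network, travel times, and perception costs below are made-up values for illustration only.

```python
import heapq
from collections import defaultdict

# link name -> (tail node, head node, travel time)
links = {"a": (1, 2, 4.0), "b": (2, 3, 3.0), "c": (1, 3, 9.0), "d": (3, 4, 2.0)}
# extra perceived cost of moving from one link onto the next (assumed values)
perception = defaultdict(float, {("a", "b"): 1.5})

def link_label_shortest(origin, dest):
    # seed the heap with every link leaving the origin
    heap = [(t, name) for name, (u, v, t) in links.items() if u == origin]
    heapq.heapify(heap)
    best = {}
    while heap:
        cost, lk = heapq.heappop(heap)
        if lk in best:
            continue
        best[lk] = cost                      # label on the link is now final
        u, v, t = links[lk]
        if v == dest:
            return cost
        for nxt, (u2, v2, t2) in links.items():   # two-link scan of successors
            if u2 == v and nxt not in best:
                heapq.heappush(heap, (cost + t2 + perception[(lk, nxt)], nxt))
    return float("inf")

print(link_label_shortest(1, 4))   # 4 + 1.5 + 3 + 2 = 10.5 via a -> b -> d
```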

Development of Estimation Equation for Minimum and Maximum DBH Using National Forest Inventory (국가산림자원조사 자료를 이용한 최저·최고 흉고직경 추정식 개발)

  • Kang, Jin-Taek;Yim, Jong-Su;Lee, Sun-Jeoung;Moon, Ga-Hyun;Ko, Chi-Ung
    • Journal of agriculture & life science
    • /
    • v.53 no.6
    • /
    • pp.23-33
    • /
    • 2019
  • Following an amendment of the relevant regulation (the national forest management planning and methods of the Korea Forest Service), the management information system that holds the management records and plans for the entire national forest of South Korea now requires the average, maximum, and minimum DBH values, whereas only average values were required before the amendment. An estimation method is therefore needed to convert the DBH values recorded before the revision into maximum and minimum values. The purpose of this study is to develop estimation equations that automatically produce the minimum and maximum DBH for 12 main tree species from the data in the national forest management information system. To develop the equations, 6,858 fixed sample plots from the fifth and sixth National Forest Inventory, surveyed between 2006 and 2015, were used. Two model forms, DBH-age and DBH-height, were fitted using the growth variables DBH, tree age, and height. The most suitable models for estimating the minimum and maximum DBH were Dmin = a + bD + cH and Dmax = a + bD + cH, using DBH and height as predictors. Based on these optimal models, estimation equations for the minimum and maximum DBH were derived for the 12 main tree species.
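
The selected model form can be fitted by ordinary least squares; the sketch below fits Dmin = a + bD + cH on a few illustrative plot values (the NFI data themselves are not reproduced here), and the same code applies unchanged to Dmax.

```python
import numpy as np

# illustrative plot-level values, not the NFI measurements
D = np.array([18.0, 22.0, 26.0, 30.0, 35.0])      # mean DBH (cm)
H = np.array([12.0, 14.0, 15.0, 17.0, 19.0])      # mean height (m)
Dmin = np.array([10.0, 12.5, 15.0, 17.0, 20.0])   # observed minimum DBH (cm)

X = np.column_stack([np.ones_like(D), D, H])      # design matrix [1, D, H]
coef, *_ = np.linalg.lstsq(X, Dmin, rcond=None)   # ordinary least squares
a, b, c = coef
print(f"Dmin = {a:.2f} + {b:.2f}*D + {c:.2f}*H")
```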

Utilizing the Idle Railway Sites: A Proposal for the Location of Solar Power Plants Using Cluster Analysis (철도 유휴부지 활용방안: 군집분석을 활용한 태양광발전 입지 제안)

  • Eunkyung Kang;Seonuk Yang;Jiyoon Kwon;Sung-Byung Yang
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.1
    • /
    • pp.79-105
    • /
    • 2023
  • Due to unprecedented extreme weather events driven by global warming and climate change, many parts of the world are suffering severe damage, and economic losses are snowballing. To address these problems, the Paris Agreement was signed in 2016 and an intergovernmental consultative body was formed to keep the rise in the Earth's average temperature below 1.5℃. Korea also declared 'Carbon Neutrality by 2050' to prevent climate catastrophe. In particular, the temperature increase caused by greenhouse gas emissions hurts the environment and society as a whole, as well as Korea's export-dependent economy. In addition, as transportation types diversify, the choice of means of transport is also changing. As the development paradigm of the low-growth era shifts to urban regeneration, interest in idle railway sites is rising owing to reduced demand for routes, alignment improvements, and the relocation of urban railways. Meanwhile, the solar power generation target of 'Renewable Energy 3020' can be partially achieved by utilizing already developed but idle railway sites, which are largely free from the environmental-damage and resident-acceptance issues that surround new locations; yet concrete plans for using these sites for solar power facilities are still lacking. Therefore, in this study, using big data provided by the Korea National Railway and the Renewable Energy Cloud Platform, we develop an algorithm that discovers and analyzes idle sites suitable for solar power generation facilities and identifies potentially applicable areas according to conditions specified by users. By searching for and selecting these idle sites, enormous facility and expansion costs can be saved in the early stages of development. This study uses various cluster analyses to develop an optimal algorithm that derives solar power plant locations on idle railway sites and, as a result, suggests 202 'actively recommended areas.' These results should help decision-makers make rational decisions that consider the economy and the environment simultaneously.
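
As an illustration of the clustering step, the sketch below groups candidate idle sites by a few attributes with k-means and inspects the resulting clusters; the features and the number of clusters are assumptions for illustration, not the study's actual variables or settings.

```python
import numpy as np
from sklearn.cluster import KMeans

# assumed columns: [site area (m^2), annual irradiation (kWh/m^2), distance to grid (km)]
sites = np.array([[1200, 1350, 0.4], [300, 1200, 2.1], [5000, 1400, 0.2],
                  [800, 1300, 1.0], [4200, 1380, 0.5], [150, 1180, 3.0]])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(sites)
print(km.labels_)           # cluster assignment of each candidate site
print(km.cluster_centers_)  # centroids, used to characterize "recommended" clusters
```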

Development and assessment of pre-release discharge technology for response to flood on deteriorated reservoirs dealing with abnormal weather events (이상기후대비 노후저수지 홍수 대응을 위한 사전방류 기술개발 및 평가)

  • Moon, Soojin;Jeong, Changsam;Choi, Byounghan;Kim, Seungwook;Jang, Daewon
    • Journal of Korea Water Resources Association
    • /
    • v.56 no.11
    • /
    • pp.775-784
    • /
    • 2023
  • With the increasing trend of extreme rainfall exceeding the design frequency of man-made structures due to extreme weather, the safety of agricultural reservoirs designed in the past needs to be reviewed. However, apart from reservoirs over a certain size under the jurisdiction of the Korea Rural Affairs Corporation, the 13,685 reservoirs managed by local governments have no facility for emergency discharge. In such cases it is important to quickly deploy a mobile siphon to the site for pre-release, and this study evaluated the applicability of a mobile siphon that can serve both pre-release and emergency-discharge functions (200 mm diameter, minimum head difference of 6 m, about 420 m³/h and 10,000 m³/day) to the Yugum Reservoir in Gyeongju City. The test bed, Yugum Reservoir, was completed in 1945 and has been in use for about 78 years. According to the hydrological stability analysis, the lowest point of the current dam crest is 27.15 EL.m, which is 0.29 m lower than the reviewed flood level of 27.44 EL.m, indicating a possibility of overtopping of the embankment; the freeboard is also insufficient by 1.72 m, so the dam was judged not to secure hydrological safety. Because water level and flow are not measured regularly, a clear water level-flow relationship could not be established, so a water level-volume curve was derived instead. Based on this curve, an operation algorithm for small and medium-sized aging reservoirs was developed that considers the pre-release time and the spillway discharge and predicts the time to overtopping for floods of each frequency, thereby securing evacuation time in advance and reducing the risk of collapse. Based on one line of 200 mm mobile siphon, the optimal pre-release start time that secures evacuation time (about 1 hour) while maintaining 80% of the upper-limit water level (about 30,000 m³) during a 30-year flood was analyzed to be 12 hours in advance. If this pre-release technology using siphons and the reservoir operation algorithm are applied ahead of abnormal weather and support managers' decision-making, the safety of residents in areas at risk from reservoir collapse can be secured, residents' anxiety can be relieved by establishing an evacuation support system, and risk factors can be reduced by providing risk-avoidance measures in the event of a reservoir emergency.
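
The operating logic can be sketched as a simple hourly water balance: drain with the siphon for some hours before the flood, route a design hydrograph, and report when storage would exceed an overtopping threshold. Apart from the 420 m³/h siphon capacity and the roughly 30,000 m³ upper-limit storage mentioned above, all numbers below are assumed for illustration.

```python
SIPHON_M3_PER_H = 420.0      # one 200 mm siphon line, capacity as in the study
UPPER_LIMIT_M3 = 30_000.0    # storage at the upper-limit water level (from the abstract)
OVERTOP_M3 = 36_000.0        # storage at which overtopping begins (assumed)

def overtopping_hour(start_storage, pre_release_hours, flood_inflow_m3_per_h):
    """Drain for pre_release_hours before the flood, then route the hydrograph
    hour by hour; return the first hour storage exceeds OVERTOP_M3, else None."""
    storage = max(0.0, start_storage - SIPHON_M3_PER_H * pre_release_hours)
    for hour, inflow in enumerate(flood_inflow_m3_per_h):
        storage += inflow - SIPHON_M3_PER_H
        if storage > OVERTOP_M3:
            return hour
    return None

flood = [500, 1500, 3000, 2500, 1200, 600]          # hypothetical hourly hydrograph (m^3/h)
print(overtopping_hour(UPPER_LIMIT_M3, 12, flood))  # 12 h pre-release: no overtopping here
print(overtopping_hour(UPPER_LIMIT_M3, 0, flood))   # no pre-release: overtops during the flood
```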

An Intelligence Support System Research on KTX Rolling Stock Failure Using Case-based Reasoning and Text Mining (사례기반추론과 텍스트마이닝 기법을 활용한 KTX 차량고장 지능형 조치지원시스템 연구)

  • Lee, Hyung Il;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.47-73
    • /
    • 2020
  • A KTX rolling stock is a system consisting of many machines, electrical devices, and components, and its maintenance requires considerable expertise and experience. When a failure occurs, the knowledge and experience of the maintainer determine how quickly and how well the problem is solved, and hence how available the vehicle is. Although problem solving is generally based on fault manuals, experienced and skilled professionals can diagnose faults and take action quickly by applying personal know-how. Since this knowledge exists in tacit form, it is difficult to pass on completely to a successor, and earlier studies have developed case-based rolling stock expert systems to turn it into a data-driven resource. Nonetheless, research on the KTX rolling stock most commonly used on the main line, and on systems that extract the meaning of text and search for similar cases, is still lacking. This study therefore proposes an intelligent support system that provides an action guide for newly occurring failures by using the know-how of rolling stock maintenance experts as problem-solving cases. To this end, a case base was constructed from the rolling stock failure data generated from 2015 to 2017, and an integrated dictionary was built from the case base to cover the essential terminology and failure codes specific to the railway rolling stock sector. Given the deployed case base, a new failure is matched against past cases, the three most similar failure cases are retrieved, and the actual actions taken in those cases are proposed as a diagnostic guide. To overcome the limitation of keyword-matching case retrieval in previous case-based expert system studies on rolling stock failures, this study applied various dimensionality reduction techniques so that similarity reflects the meaningful relationships among failure descriptions, and verified their usefulness through experiments. Similar cases were retrieved by applying three algorithms, Non-negative Matrix Factorization (NMF), Latent Semantic Analysis (LSA), and Doc2Vec, to extract the characteristics of each failure and measure the cosine distance between the resulting vectors. Precision, recall, and the F-measure were used to assess the quality of the proposed actions. To compare the dimensionality reduction techniques, an analysis of variance confirmed that the performance differences among five algorithms were statistically significant; the comparison included a baseline that randomly extracts failure cases with identical failure codes and a baseline that applies cosine similarity directly to the word vectors. In addition, by examining how performance varies with the number of reduced dimensions, settings suitable for practical application were derived. The analysis showed that direct cosine similarity outperformed NMF and LSA, and that the algorithm using Doc2Vec performed best.
Furthermore, within an appropriate range, performance improved as the number of reduced dimensions increased. Through this study, we confirmed the usefulness of methods for extracting features from, and transforming, unstructured data when applying case-based reasoning in the specialized field of KTX rolling stock, where most attributes are recorded as text. Text mining is being studied for use in many areas, but such studies are still rare in environments like ours, with many specialized terms and limited access to data. In this regard, it is significant that this study is the first to present an intelligent diagnostic system that suggests actions by retrieving cases with text mining techniques that extract the characteristics of a failure, complementing keyword-based case search. We expect this to serve as a basic study for developing diagnostic systems that can be used immediately in the field.
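
The retrieval step can be illustrated compactly: reduce TF-IDF vectors of past failure descriptions with LSA (or NMF), then rank stored cases by cosine similarity to a new failure and return the top three. The toy failure texts below are placeholders, not KTX maintenance records, and Doc2Vec (the best performer in the study) would be substituted in the same pattern via a separate library such as gensim.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

case_texts = ["traction motor overheating alarm on bogie two",
              "pantograph contact strip abnormal wear detected",
              "brake cylinder pressure drop during deceleration",
              "traction converter overheating and power limitation"]
new_failure = ["overheating warning from traction system"]

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(case_texts)          # TF-IDF vectors of the case base
q = tfidf.transform(new_failure)             # vector of the newly reported failure

svd = TruncatedSVD(n_components=2, random_state=0).fit(X)   # LSA; NMF(...) is a drop-in alternative
sims = cosine_similarity(svd.transform(q), svd.transform(X))[0]
top3 = sims.argsort()[::-1][:3]              # indices of the three most similar cases
print([(case_texts[i], round(float(sims[i]), 3)) for i in top3])
```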

A Study on the Prediction Model of Stock Price Index Trend based on GA-MSVM that Simultaneously Optimizes Feature and Instance Selection (입력변수 및 학습사례 선정을 동시에 최적화하는 GA-MSVM 기반 주가지수 추세 예측 모형에 관한 연구)

  • Lee, Jong-sik;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.147-168
    • /
    • 2017
  • Accurate stock market forecasting has long been studied in academia, and various forecasting models using diverse techniques now exist. Recently, many attempts have been made to predict stock indices with machine learning methods, including deep learning. Although both fundamental analysis and technical analysis are used in traditional stock investment, technical analysis is more suitable for short-term prediction and for statistical and mathematical techniques. Most studies using technical indicators have modeled the problem as a binary classification of future market movement (usually the next trading day) into rising or falling. However, such binary classification has clear drawbacks for predicting trends, identifying trading signals, or signaling portfolio rebalancing. In this study, we extend the binary scheme to a multi-class one and predict the stock index trend as upward, boxed (sideways), or downward. To solve this multi-class problem, rather than relying on techniques such as Multinomial Logistic Regression (MLOGIT), Multiple Discriminant Analysis (MDA), or Artificial Neural Networks (ANN), we propose an optimization model that uses a Genetic Algorithm as a wrapper around Multi-class Support Vector Machines (MSVM), which have shown superior prediction performance. In particular, the proposed model, named GA-MSVM, is designed to maximize performance by optimizing not only the kernel parameters of the MSVM but also the selection of input variables (feature selection) and the selection of training instances (instance selection). To verify the performance of the proposed model, we applied it to real data. The results show that the proposed method outperforms the conventional multi-class SVM, which has been known to give the best prediction performance so far, as well as existing artificial intelligence and data mining techniques such as MDA, MLOGIT, and CBR. In particular, instance selection was confirmed to play a very important role in predicting the stock index trend, contributing more to the improvement of the model than the other factors. To verify the usefulness of GA-MSVM, we applied it to forecasting the trend of Korea's KOSPI200 stock index. Our research primarily aims at predicting trend segments in order to capture signal acquisition or short-term trend transition points. The experimental data set includes technical indicators such as price and volatility indices of the KOSPI200 index (2004-2017) and macroeconomic data (interest rate, exchange rate, S&P 500, etc.). Using a variety of statistical methods, including one-way ANOVA and stepwise MDA, 15 indicators were selected as candidate independent variables. The dependent variable, the trend class, takes three states: 1 (upward trend), 0 (boxed), and -1 (downward trend). For each class, 70% of the data was used for training and the remaining 30% for verification. To benchmark the proposed model, comparative experiments with MDA, MLOGIT, CBR, ANN, and MSVM were conducted.
The MSVM adopts the One-Against-One (OAO) approach, known as the most accurate among the various MSVM approaches. Although there are some limitations, the final experimental results demonstrate that the proposed GA-MSVM model performs at a significantly higher level than all comparative models.
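
A minimal sketch of the GA-as-wrapper idea follows: evolve binary feature masks and score each mask by the cross-validated accuracy of a one-against-one multi-class SVM on the selected columns. The data, population size, and number of generations are toy values, and the kernel-parameter and instance-selection dimensions of GA-MSVM are omitted here for brevity.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# synthetic stand-in for the 15 candidate indicators and 3 trend classes
X, y = make_classification(n_samples=200, n_features=15, n_informative=5,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
rng = np.random.default_rng(0)

def fitness(mask):
    """Cross-validated accuracy of an OAO multi-class SVM on the masked features."""
    if mask.sum() == 0:
        return 0.0
    clf = SVC(kernel="rbf", decision_function_shape="ovo")
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(10, X.shape[1]))              # initial population of feature masks
for _ in range(5):                                           # a few generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[scores.argsort()[::-1][:4]]                # truncation selection
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(4)], parents[rng.integers(4)]
        cut = rng.integers(1, X.shape[1])                    # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.05                 # bit-flip mutation
        child = np.where(flip, 1 - child, child)
        children.append(child)
    pop = np.array(children)

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```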

A study on the prediction of Korean NPL market return (한국 NPL시장 수익률 예측에 관한 연구)

  • Lee, Hyeon Su;Jeong, Seung Hwan;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.123-139
    • /
    • 2019
  • The Korean NPL (non-performing loan) market was formed by the government and foreign capital shortly after the 1997 IMF crisis. The market's history is nevertheless short, and bad debt began to increase again after the 2009 global financial crisis due to the recession in the real economy. NPLs have become a major investment in recent years, as capital from the domestic capital market has begun to enter the NPL market in earnest. Although the domestic NPL market has received considerable attention due to its recent overheating, research on it has been scarce because the history of capital market investment in this market is short. In addition, declining profitability and price fluctuations caused by swings in the real estate business call for decision-making based on more scientific and systematic analysis. In this study, we propose a prediction model that determines whether the benchmark yield will be achieved, using NPL market data in line with market demand. To build the model, we used about four years of Korean NPL data, from December 2013 to December 2017, comprising 2,291 assets. From 11 variables describing the characteristics of the real estate, only those related to the dependent variable were selected as independent variables; one-to-one t-tests, stepwise logistic regression, and decision trees were used for the selection. Seven independent variables were chosen: purchase year, SPC (Special Purpose Company), municipality, appraisal value, purchase cost, OPB (Outstanding Principal Balance), and HP (Holding Period). The dependent variable is a binary variable indicating whether the benchmark rate of return is reached. This is because a model predicting a binary variable is more accurate than one predicting a continuous variable, and this accuracy is directly related to the usefulness of the model; moreover, for a special purpose company the main concern is whether or not to purchase a property, so knowing whether a certain level of return will be achieved is sufficient for the decision. For the dependent variable, we built and compared predictive models while adjusting the threshold to check whether 12%, the standard rate of return used in the industry, is a meaningful reference value. As a result, the predictive model built with the dependent variable defined by the 12% standard rate of return had the best average hit ratio, 64.60%. To propose an optimal prediction model based on the chosen dependent variable and the 7 independent variables, we built and compared prediction models using five methodologies: discriminant analysis, logistic regression, decision tree, artificial neural network, and a genetic algorithm linear model. To do this, 10 sets of training and testing data were extracted using 10-fold cross-validation. After building the models, the hit ratio of each set was averaged and the performance was compared. The average hit ratios of the models built with discriminant analysis, logistic regression, decision tree, artificial neural network, and the genetic algorithm linear model were 64.40%, 65.12%, 63.54%, 67.40%, and 60.51%, respectively.
The model using the artificial neural network was confirmed to be the best. This study shows that using the 7 independent variables with an artificial neural network prediction model is effective for the NPL market. The proposed model predicts in advance whether a new asset will achieve the 12% return, which will help special purpose companies make investment decisions. Furthermore, we anticipate that the NPL market will become more liquid as transactions proceed at appropriate prices.
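
The best-performing setup can be sketched as follows: an artificial neural network classifier scored by average hit ratio (accuracy) over 10-fold cross-validation on seven features. The synthetic data below merely stand in for the NPL records, which are not public, and the network size is an assumption.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# 2,291 assets x 7 features (purchase year, SPC, municipality, appraisal value,
# purchase cost, OPB, holding period) -- synthetic stand-ins
X = rng.normal(size=(2291, 7))
# 1 if the 12% benchmark return is reached, 0 otherwise (synthetic labels)
y = (X @ rng.normal(size=7) + rng.normal(scale=0.5, size=2291) > 0).astype(int)

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0))
hit_ratios = cross_val_score(model, X, y, cv=10, scoring="accuracy")  # 10-fold validation
print(f"average hit ratio: {hit_ratios.mean():.4f}")
```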