• Title/Summary/Keyword: Calculating Method


Comparative Analysis of Environmental Ecological Flow Based on Habitat Suitability Index (HSI) in Miho stream of Geum river system (서식지적합도지수(HSI)에 따른 환경생태유량 비교 분석 : 미호천을 중심으로)

  • Lee, Jong Jin;Hur, Jun Wook
    • Ecology and Resilient Infrastructure
    • /
    • v.9 no.1
    • /
    • pp.68-76
    • /
    • 2022
  • In this study, the Habitat Suitability Index (HSI) was calculated for the Miho stream of the Geum river system, and the environmental ecological flow at each point was evaluated. Two points (St.3 and St.8) representing the upstream and downstream reaches of Miho Stream were selected. To calculate the Habitat Suitability Index, the depth and velocity at the points where each species appeared were investigated. The HSI was calculated by the Washington Department of Fish and Wildlife (WDFW) method, using the number of individuals collected in each water-depth and velocity interval together with the results of the flow rate survey. Two target species were selected in this study: the dominant species and swimming species sensitive to flow. For the single species Zacco platypus, the suitable water depth was 0.1 - 0.5 m and the suitable velocity was 0.2 - 0.5 m/s. For the swimming fish group, the water depth was 0.2 - 0.5 m and the velocity was 0.2 - 0.5 m/s. The discharge-Weighted Usable Area (WUA) relationship curve and habitat suitability distribution were simulated at the Miho Stream points St.3 and St.8. At the upstream point St.3, the optimal discharge was simulated as 4.0 m3/s for swimming fishes and 2.7 m3/s for Zacco platypus. At the downstream point St.8, the optimal discharge was simulated as 8.8 m3/s for swimming fishes and 7.6 m3/s for Zacco platypus. At both points, the optimal discharge for swimming fishes was estimated to be higher, because the Habitat Suitability Index for swimming fishes requires a faster flow than the habitat conditions of Zacco platypus. In the calculation of the minimum discharge, the discharge for Zacco platypus is smaller and is evaluated to provide more Weighted Usable Area. For swimming fishes, the narrow range of suitable depth and velocity increases the required discharge and relatively decreases the Weighted Usable Area. Therefore, when calculating the Habitat Suitability Index for swimming fishes, it is more advantageous to include the habitats of all fish species than to narrow the range.
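The WUA calculation the abstract describes can be sketched as follows. This is an illustrative example, not the authors' code: the HSI breakpoint curves and the cell data are hypothetical (chosen to be consistent with the reported 0.1 - 0.5 m depth and 0.2 - 0.5 m/s velocity optima), and the product rule for combining depth and velocity suitability is one common PHABSIM-style convention.

```python
# Illustrative sketch (not the authors' code): computing Weighted Usable Area
# (WUA) for one simulated discharge from per-cell depth/velocity suitability.

def suitability(value, curve):
    """Linearly interpolate a habitat suitability index (0..1) from
    (value, index) breakpoints sorted by value."""
    if value <= curve[0][0]:
        return curve[0][1]
    if value >= curve[-1][0]:
        return curve[-1][1]
    for (x0, y0), (x1, y1) in zip(curve, curve[1:]):
        if x0 <= value <= x1:
            return y0 + (y1 - y0) * (value - x0) / (x1 - x0)

# Hypothetical HSI curves consistent with the abstract's reported optima
# for Zacco platypus (depth 0.1-0.5 m, velocity 0.2-0.5 m/s).
depth_hsi = [(0.0, 0.0), (0.1, 1.0), (0.5, 1.0), (1.0, 0.0)]
vel_hsi   = [(0.0, 0.0), (0.2, 1.0), (0.5, 1.0), (1.0, 0.0)]

def wua(cells):
    """cells: list of (area_m2, depth_m, velocity_ms) for one simulated flow.
    Composite suitability of each cell = product of depth and velocity
    indices (one common PHABSIM convention); WUA = sum of area * composite."""
    return sum(a * suitability(d, depth_hsi) * suitability(v, vel_hsi)
               for a, d, v in cells)

# Three hypothetical hydraulic-model cells at one discharge.
cells = [(10.0, 0.3, 0.35), (10.0, 0.8, 0.1), (5.0, 0.05, 0.6)]
print(round(wua(cells), 2))
```

Repeating this over a range of simulated discharges yields the discharge-WUA curve from which the optimal discharge is read off.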

Application of Remote Sensing Techniques to Survey and Estimate the Standing-Stock of Floating Debris in the Upper Daecheong Lake (원격탐사 기법 적용을 통한 대청호 상류 유입 부유쓰레기 조사 및 현존량 추정 연구)

  • Youngmin Kim;Seon Woong Jang ;Heung-Min Kim;Tak-Young Kim;Suho Bak
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_1
    • /
    • pp.589-597
    • /
    • 2023
  • Large quantities of floating debris flowing in from land during heavy rainfall have adverse social, economic, and environmental impacts, but monitoring of where the debris concentrates and in what amounts is insufficient. In this study, we proposed an efficient monitoring method for floating debris entering the river during heavy rainfall in Daecheong Lake, the largest water supply source in the central region, and applied remote sensing techniques to estimate the standing-stock of floating debris. To investigate the status of floating debris in the upper reaches of Daecheong Lake, we used tracking buoys equipped with low-orbit satellite communication terminals to identify movement routes and behavior characteristics, and used a drone to estimate the potential concentration areas and standing-stock of floating debris. The location-tracking buoys moved rapidly during the period when the 3-day cumulative rainfall exceeded 200 to 300 mm. The buoy released at Hotan Bridge, which traveled the longest distance, moved about 72.8 km in one day, with a maximum speed of 5.71 km/h. Calculating the standing-stock of floating debris with a drone after heavy rainfall yielded 658.8 to 9,165.4 tons, with the largest amount occurring in the Seokhori area. In this study, we were able to identify the main concentration areas of floating debris by using location-tracking buoys and drones. Remote sensing-based monitoring methods, which are more mobile and quicker than traditional monitoring methods, should help reduce the cost of collecting and processing the large amounts of floating debris that flow in during future heavy rain periods.
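Travel distance and speed figures like those above are derived from timestamped GPS fixes. A minimal sketch of that derivation, assuming haversine great-circle distances between fixes (the buoy coordinates below are hypothetical, placed roughly in the Daecheong Lake area):

```python
# Illustrative sketch: estimating travel distance and maximum segment speed
# of a tracking buoy from timestamped GPS fixes (haversine formula).
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lon points (degrees)."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def track_stats(fixes):
    """fixes: list of (hours_since_start, lat, lon), in time order.
    Returns (total distance in km, maximum segment speed in km/h)."""
    total, vmax = 0.0, 0.0
    for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:]):
        d = haversine_km(la0, lo0, la1, lo1)
        total += d
        if t1 > t0:
            vmax = max(vmax, d / (t1 - t0))
    return total, vmax

# Hypothetical fixes (not real buoy data).
fixes = [(0.0, 36.40, 127.55), (6.0, 36.45, 127.58), (12.0, 36.47, 127.62)]
dist, vmax = track_stats(fixes)
print(f"{dist:.1f} km, max {vmax:.2f} km/h")
```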

Sorghum Field Segmentation with U-Net from UAV RGB (무인기 기반 RGB 영상 활용 U-Net을 이용한 수수 재배지 분할)

  • Kisu Park;Chanseok Ryu ;Yeseong Kang;Eunri Kim;Jongchan Jeong;Jinki Park
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_1
    • /
    • pp.521-535
    • /
    • 2023
  • When rice paddies are converted into upland fields, sorghum (Sorghum bicolor L. Moench) has excellent moisture resistance, enabling stable production alongside soybeans. It is therefore a crop expected to improve the self-sufficiency rate of domestic food crops and ease the rice supply-demand imbalance. However, fundamental statistics such as the extent of cultivation fields, required for estimating yields, are lacking because the traditional survey method takes a long time even with substantial manpower. In this study, U-Net was applied to RGB images acquired from an unmanned aerial vehicle (UAV) to confirm the possibility of non-destructive segmentation of sorghum cultivation fields. RGB images were acquired on July 28, August 13, and August 25, 2022. For each acquisition date, the data were divided into 6,000 training and 1,000 validation images of 512 × 512 pixels. Classification models were developed for three classes, consisting of sorghum fields (sorghum), rice and soybean fields (others), and non-agricultural fields (background), and for two classes, consisting of sorghum and non-sorghum (others + background). The classification accuracy for sorghum cultivation fields was higher than 0.91 with the three-class models on all acquisition dates, but learning confusion occurred in the other classes on the August datasets. In contrast, the two-class models showed an accuracy of 0.95 or better in all classes, with stable learning on the August datasets. As a result, a two-class model trained on August imagery will be advantageous for calculating the area of sorghum cultivation fields.
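Preparing 512 × 512 training patches from a large orthomosaic, as described above, can be sketched as follows. This is an illustrative example under assumed conventions, not the authors' pipeline: the class ids (0 = background, 1 = sorghum, 2 = other crops) and the non-overlapping tiling are assumptions.

```python
# Illustrative sketch: tiling a large UAV orthomosaic and its label mask into
# 512 x 512 patches for U-Net training. Class ids here are assumptions:
# 0 = background, 1 = sorghum, 2 = other crops.
import numpy as np

def tile(image, mask, size=512):
    """Split an (H, W, C) image and (H, W) mask into non-overlapping
    size x size patch pairs; edge remainders are dropped."""
    h, w = mask.shape
    patches = []
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            patches.append((image[y:y + size, x:x + size],
                            mask[y:y + size, x:x + size]))
    return patches

# Synthetic stand-ins for an orthomosaic and its label mask.
rng = np.random.default_rng(0)
img = rng.random((1200, 1600, 3), dtype=np.float32)
msk = rng.integers(0, 3, (1200, 1600), dtype=np.uint8)
pairs = tile(img, msk)
print(len(pairs), pairs[0][0].shape, pairs[0][1].shape)
```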

Resolving the 'Gray sheep' Problem Using Social Network Analysis (SNA) in Collaborative Filtering (CF) Recommender Systems (소셜 네트워크 분석 기법을 활용한 협업필터링의 특이취향 사용자(Gray Sheep) 문제 해결)

  • Kim, Minsung;Im, Il
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.137-148
    • /
    • 2014
  • Recommender systems have become one of the most important technologies in e-commerce these days. The ultimate reason to shop online, for many consumers, is to reduce the effort of information search and purchase, and recommender systems are a key technology serving these needs. Many past studies of recommender systems have been devoted to developing and improving recommendation algorithms, and collaborative filtering (CF) is known to be the most successful approach. Despite its success, however, CF has several shortcomings, such as the cold-start, sparsity, and gray sheep problems. In order to generate recommendations, ordinary CF algorithms require evaluations or preference information directly from users; for new users who do not have any such information, CF cannot produce recommendations (the cold-start problem). As the numbers of products and customers increase, the scale of the data increases exponentially and most of the data cells are empty; this sparse dataset makes computation for recommendation extremely hard (the sparsity problem). Since CF is based on the assumption that there are groups of users sharing common preferences or tastes, CF becomes inaccurate if there are many users with rare and unique tastes (the gray sheep problem). This study proposes a new algorithm that utilizes Social Network Analysis (SNA) techniques to resolve the gray sheep problem. We utilize 'degree centrality' in SNA to identify users with unique preferences (gray sheep). Degree centrality in SNA refers to the number of direct links to and from a node. In a network of users who are connected through common preferences or tastes, those with unique tastes have fewer links to other users (nodes) and are isolated from them. Therefore, gray sheep can be identified by calculating the degree centrality of each node. We divide the dataset into two parts, gray sheep and others, based on the degree centrality of the users.
Then, different similarity measures and recommendation methods are applied to these two datasets. The detailed algorithm is as follows. Step 1: Convert the initial data, which is a two-mode network (user to item), into a one-mode network (user to user). Step 2: Calculate the degree centrality of each node and separate those nodes whose degree centrality is lower than a pre-set threshold; the threshold value is determined by simulations such that the accuracy of CF on the remaining dataset is maximized. Step 3: Apply an ordinary CF algorithm to the remaining dataset. Step 4: Since the separated dataset consists of users with unique tastes, an ordinary CF algorithm cannot generate recommendations for them; a 'popular item' method is used instead. The F measures of the two datasets are weighted by the numbers of nodes and summed to give the final performance metric. In order to test the performance improvement from this new algorithm, an empirical study was conducted using a publicly available dataset, the MovieLens data from the GroupLens research team. We used 100,000 evaluations by 943 users on 1,682 movies. The proposed algorithm was compared with an ordinary CF algorithm utilizing the 'best-N-neighbors' and 'cosine' similarity methods. The empirical results show that the F measure improved by about 11% on average when the proposed algorithm was used. Past studies to improve CF performance typically used additional information beyond users' evaluations, such as demographic data, and some applied SNA techniques as a new similarity metric. This study is novel in that it uses SNA to separate the dataset, and it shows that CF performance can be improved, without any additional information, when SNA techniques are used as proposed. The study has several theoretical and practical implications. It empirically shows that the characteristics of a dataset can affect the performance of CF recommender systems, which helps researchers understand the factors affecting CF performance, and it opens a door for future studies applying SNA to CF to analyze dataset characteristics. In practice, it provides guidelines for improving the performance of CF recommender systems with a simple modification.
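Steps 1 and 2 of the algorithm above can be sketched in a few lines. This is an illustrative toy example: the ratings, the normalized-centrality formula, and the fixed threshold of 0.5 are assumptions (the paper tunes the threshold by simulation).

```python
# Illustrative sketch of Steps 1-2: project the two-mode user-item network
# onto a one-mode user-user network (users linked when they co-rate an
# item), compute degree centrality, and split off low-degree "gray sheep".
from itertools import combinations

ratings = {  # user -> set of rated items (hypothetical data)
    "u1": {"a", "b", "c"},
    "u2": {"a", "b"},
    "u3": {"b", "c"},
    "u4": {"z"},          # unique taste: shares no item with anyone
}

# Step 1: two-mode (user-item) -> one-mode (user-user) projection.
links = {u: set() for u in ratings}
for u, v in combinations(ratings, 2):
    if ratings[u] & ratings[v]:
        links[u].add(v)
        links[v].add(u)

# Step 2: normalized degree centrality = degree / (n - 1), then threshold.
n = len(ratings)
centrality = {u: len(nb) / (n - 1) for u, nb in links.items()}
threshold = 0.5  # assumed; the paper tunes this by simulation
gray_sheep = {u for u, c in centrality.items() if c < threshold}
print(sorted(gray_sheep))  # u4 is isolated, so it falls below the threshold
```

Ordinary CF would then be run on the remaining users, with a popular-item fallback for the gray sheep set.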

Application of Westgard Multi-Rules for Improving Nuclear Medicine Blood Test Quality Control (핵의학 검체검사 정도관리의 개선을 위한 Westgard Multi-Rules의 적용)

    • Jung, Heung-Soo;Bae, Jin-Soo;Shin, Yong-Hwan;Kim, Ji-Young;Seok, Jae-Dong
      • The Korean Journal of Nuclear Medicine Technology
      • /
      • v.16 no.1
      • /
      • pp.115-118
      • /
      • 2012
    • Purpose: The Levey-Jennings chart flags measurement values that deviate from a tolerance range (mean ±2SD or ±3SD). The upgraded Westgard multi-rules, on the other hand, are actively recommended as a more efficient, specialized form of internal quality control for hospital certification. To apply Westgard multi-rules in quality control, a credible quality control material and target value are required. However, as laboratory tests commonly use the quality control materials provided within the test kit, frequent changes in concentration and the insufficient credibility of the control material make calculation of the target value difficult. This study attempts to improve the professionalism and credibility of quality control by applying Westgard multi-rules and calculating a credible target value using a commercial quality control material. Materials and Methods: This study used Immunoassay Plus Control Levels 1, 2, and 3 from Company B as the quality control material for Total T3, the thyroid test performed at the hospital. The target value was established as the mean of 295 measurements collected over one month, excluding values deviating by more than ±2SD, and was entered into the hospital's quality control program. The 12s, 22s, 13s, 2 of 32s, R4s, 41s, 10x̄, and 7T Westgard rules were applied to the Total T3 test, which was run 194 times over 20 days in August. Based on the applied rules, the data were classified into random error and systematic error for analysis. Results: For quality control materials 1, 2, and 3, the target values of Total T3 were established as 84.2 ng/dL, 156.7 ng/dL, and 242.4 ng/dL, with standard deviations of 11.22 ng/dL, 14.52 ng/dL, and 14.52 ng/dL, respectively. In the error-type analysis after applying the Westgard multi-rules to the established target values, the following results were obtained: for random error, 12s occurred 48 times, 13s 13 times, and R4s 6 times; for systematic error, 22s occurred 10 times, 41s 11 times, 2 of 32s 17 times, and 10x̄ 10 times, while 7T was not triggered. For uncontrollable random errors, the entire experimental process was rechecked and greater emphasis was placed on re-testing. For controllable systematic errors, the cause was investigated, recorded in the action form, and reported to the internal quality control committee when necessary. Conclusions: This study applied Westgard multi-rules using a commercial control material and established target values. As a result, precise analysis of random and systematic error was achieved through the 12s, 22s, 13s, 2 of 32s, R4s, 41s, 10x̄, and 7T rules, and ideal quality control was achieved by analyzing all data within the ±3SD range. The quality control method based on the systematic application of Westgard multi-rules is therefore more effective than the Levey-Jennings chart and can maximize error detection.
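A subset of the Westgard checks above can be sketched as rule functions over z-scores. This is an illustrative sketch, not the hospital's QC program: only five of the eight rules are shown, the R4s check here is a simplified consecutive-pair formulation, and the sample run is invented (only the level-1 target mean and SD come from the abstract).

```python
# Illustrative sketch (not the hospital's QC program): checking a run of
# control measurements against a few Westgard rules, given a target mean
# and SD established from a commercial control material.
def z_scores(values, mean, sd):
    return [(v - mean) / sd for v in values]

def westgard(z):
    """Return names of violated rules for a chronological list of z-scores
    (subset of the rules named in the abstract)."""
    hits = set()
    if any(abs(x) > 2 for x in z):
        hits.add("1-2s")                  # warning rule
    if any(abs(x) > 3 for x in z):
        hits.add("1-3s")                  # random error
    if any((a > 2 and b > 2) or (a < -2 and b < -2) for a, b in zip(z, z[1:])):
        hits.add("2-2s")                  # systematic error
    if any(abs(a - b) > 4 for a, b in zip(z, z[1:])):
        hits.add("R-4s")                  # random error (simplified form)
    for i in range(len(z) - 3):
        w = z[i:i + 4]
        if all(x > 1 for x in w) or all(x < -1 for x in w):
            hits.add("4-1s")              # systematic error
    return hits

# Target value and SD from the abstract's control level 1 (Total T3);
# the run of measurements is hypothetical.
mean, sd = 84.2, 11.22
run = [86.0, 95.0, 110.0, 109.0, 60.0]
print(sorted(westgard(z_scores(run, mean, sd))))
```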


    A Study on the Observation of Soil Moisture Conditions and its Applied Possibility in Agriculture Using Land Surface Temperature and NDVI from Landsat-8 OLI/TIRS Satellite Image (Landsat-8 OLI/TIRS 위성영상의 지표온도와 식생지수를 이용한 토양의 수분 상태 관측 및 농업분야에의 응용 가능성 연구)

    • Chae, Sung-Ho;Park, Sung-Hwan;Lee, Moung-Jin
      • Korean Journal of Remote Sensing
      • /
      • v.33 no.6_1
      • /
      • pp.931-946
      • /
      • 2017
    • The purpose of this study is to observe and analyze soil moisture conditions at high resolution and to evaluate the feasibility of applying them to agriculture. For this purpose, we used three Landsat-8 OLI (Operational Land Imager)/TIRS (Thermal Infrared Sensor) optical and thermal infrared satellite images taken in May to June of 2015, 2016, and 2017, covering the rural areas of Jeollabuk-do, where 46% of the land is agricultural. The soil moisture condition on each date was characterized by the SPI3 (Standardized Precipitation Index) drought index; the three images correspond to near normal, moderately wet, and moderately dry conditions. The Temperature Vegetation Dryness Index (TVDI) was calculated to observe soil moisture status from the Landsat-8 OLI/TIRS images under these different conditions and to compare the results with the soil moisture conditions obtained from the SPI3 drought index. TVDI is estimated from the relationship between LST (Land Surface Temperature) and NDVI (Normalized Difference Vegetation Index) calculated from the Landsat-8 OLI/TIRS satellite images. The maximum and minimum values of LST for each NDVI value are extracted from the distribution of pixels in the LST-NDVI feature space, and the dry and wet edges of LST as functions of NDVI are determined by linear regression; the TVDI value is then obtained as the ratio of the LST value's position between the two edges. We classified the relative soil moisture conditions from the TVDI values into five stages: very wet, wet, normal, dry, and very dry, and compared them to the soil moisture conditions obtained from SPI3. Because May to June is the rice-planting season, 62% of each image was classified as wet or very wet, owing to the paddy fields that occupy the largest proportion of the image, while the pixels classified as normal mainly reflect the upland field areas. The TVDI classification results for the whole image roughly corresponded to the SPI3 soil moisture condition, but did not match the finer subdivisions of very dry, wet, and very wet. In addition, after separating the agricultural areas into paddy fields and upland fields, the paddy field areas did not correspond to the SPI3 drought index in the very dry, normal, and very wet classes, and the upland field areas did not correspond in the normal class. This is considered to result from problems in dry/wet edge estimation caused by outliers such as extremely dry bare soil, very wet paddy fields, water, clouds, and mountain topography effects (shadow). Nevertheless, in the agricultural areas, especially the upland fields, it was possible in May to June to observe soil moisture conditions effectively at a subdivided level. By observing the temporal and spatial changes of soil moisture status in agricultural areas with high-spatial-resolution optical satellites, this method is expected to be applicable to forecasting agricultural production.
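The TVDI computation described above can be sketched directly: fit dry and wet edges as linear functions of NDVI from per-bin LST extremes, then normalize each pixel's LST between the two edges. This is an illustrative sketch on synthetic data; the bin count and the least-squares edge fit are assumptions about implementation detail.

```python
# Illustrative sketch: TVDI = (LST - LSTmin) / (LSTmax - LSTmin), where the
# dry (max) and wet (min) edges are linear fits of per-NDVI-bin LST extremes.
import numpy as np

def edges(ndvi, lst, nbins=20):
    """Fit dry and wet edges LST = a*NDVI + b by least squares over
    per-NDVI-bin maxima and minima of LST."""
    bins = np.linspace(ndvi.min(), ndvi.max(), nbins + 1)
    centers, lo, hi = [], [], []
    for b0, b1 in zip(bins, bins[1:]):
        m = (ndvi >= b0) & (ndvi <= b1)
        if m.any():
            centers.append((b0 + b1) / 2)
            lo.append(lst[m].min())
            hi.append(lst[m].max())
    dry = np.polyfit(centers, hi, 1)  # (slope, intercept) of dry edge
    wet = np.polyfit(centers, lo, 1)
    return dry, wet

def tvdi(ndvi, lst):
    dry, wet = edges(ndvi, lst)
    lst_max = np.polyval(dry, ndvi)
    lst_min = np.polyval(wet, ndvi)
    return np.clip((lst - lst_min) / (lst_max - lst_min), 0, 1)

# Synthetic LST-NDVI feature space: LST decreases with NDVI plus scatter.
rng = np.random.default_rng(1)
ndvi = rng.uniform(0.1, 0.8, 5000)
lst = 320 - 25 * ndvi + rng.uniform(0, 10, 5000)
t = tvdi(ndvi, lst)
print(float(t.min()), float(t.max()))
```

The five-stage classification then amounts to thresholding the resulting 0-1 TVDI values.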

    Construction of Event Networks from Large News Data Using Text Mining Techniques (텍스트 마이닝 기법을 적용한 뉴스 데이터에서의 사건 네트워크 구축)

    • Lee, Minchul;Kim, Hea-Jin
      • Journal of Intelligence and Information Systems
      • /
      • v.24 no.1
      • /
      • pp.183-203
      • /
      • 2018
    • News articles are the most suitable medium for examining events occurring at home and abroad. In particular, as the development of information and communication technology has brought various kinds of online news media, news about events occurring in society has increased greatly, so automatically summarizing key events from massive amounts of news data will help users survey many events at a glance. In addition, an event network built on the relevance between events can greatly help readers understand current events. In this study, we propose a method for extracting event networks from large news text data. To this end, we first collected Korean political and social articles from March 2016 to March 2017, and in preprocessing kept only meaningful words and integrated synonyms using NPMI and Word2Vec. Latent Dirichlet allocation (LDA) topic modeling was used to calculate the topic distribution by date, and events were detected by finding peaks in each topic's distribution. A total of 32 topics were extracted, and the occurrence point of each event was deduced from the point at which its topic distribution surged. As a result, a total of 85 events were detected, of which a final 16 were retained after filtering with a Gaussian smoothing technique. We then calculated a relevance score between the detected events to construct the event network: using the cosine coefficient between co-occurring events, we calculated the relevance between events and connected related events. Finally, we set up the event network with each event as a vertex and the relevance score between events on the connecting edges. The event network constructed by our method allowed us to sort out the major events in the political and social fields in Korea over the past year in chronological order and, at the same time, to identify which events are related to which. Our approach differs from existing event detection methods in that LDA topic modeling makes it possible to analyze large amounts of data easily and to identify relations between events that were difficult to detect with existing methods. We also applied various text mining techniques and Word2vec in preprocessing to improve the accuracy of extracting proper nouns and compound nouns, which has been difficult in analyzing Korean texts. The event detection and network construction techniques of this study have the following advantages in practical application. First, LDA topic modeling, which is unsupervised learning, can easily extract topics, topic words, and their distributions from a huge amount of data, and by using the date information of the collected news articles, the distribution of each topic can be expressed as a time series. Second, by calculating relevance scores and constructing an event network from the co-occurrence of topics, the connections between events, which are difficult to grasp with existing event detection, can be presented in summarized form. This is supported by the fact that the relevance-based event network proposed in this study was in fact constructed in order of occurrence time; the event network also makes it possible to identify the event that served as the starting point of a series of events. A limitation of this study is that LDA topic modeling produces different results depending on the initial parameters and the number of topics, and the topic and event names in the analysis results must be assigned by the subjective judgment of the researcher. Also, since each topic is assumed to be exclusive and independent, the relevance between topics is not taken into account. Subsequent studies need to calculate the relevance between events not covered in this study or between events belonging to the same topic.
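The edge-building step above — scoring event pairs with a cosine coefficient and keeping edges over a threshold — can be sketched as follows. This is a toy illustration: the events, their co-occurrence vectors, and the 0.8 threshold are all hypothetical.

```python
# Illustrative sketch: linking detected events by the cosine coefficient of
# their topic co-occurrence vectors, keeping edges above a threshold.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical events in a toy 4-topic space: each vector counts how often
# the event's topic co-occurred with each topic around the event's peak.
events = {
    "E1": [5, 1, 0, 0],
    "E2": [4, 2, 0, 0],
    "E3": [0, 0, 3, 5],
}

threshold = 0.8  # assumed edge-inclusion threshold
edges = [(a, b, round(cosine(events[a], events[b]), 3))
         for a in events for b in events
         if a < b and cosine(events[a], events[b]) >= threshold]
print(edges)  # E1-E2 share topic mass; E3 stays disconnected
```

Each surviving tuple becomes a weighted edge of the event network, with the events themselves as vertices.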

    The Current Status of the Warsaw Convention and Subsequent Protocols in Leading Asian Countries (아시아 주요국가(主要國家)들에 있어서의 바르샤바 체제(體制)의 적용실태(適用實態)와 전망(展望))

    • Lee, Tae-Hee
      • The Korean Journal of Air & Space Law and Policy
      • /
      • v.1
      • /
      • pp.147-162
      • /
      • 1989
    • The current status of the application and interpretation of the Warsaw Convention and its subsequent Protocols in Asian countries is in its fledgling stages compared to the developed countries of Europe and North America, and there is thus little published information about the various Asian governments' treatment and courts' views of the Warsaw System. Due to that limitation, the accent of this paper will be on Korea and Japan. As one will be aware, the so-called 'Warsaw System' is made up of the Warsaw Convention of 1929, the Hague Protocol of 1955, the Guadalajara Convention of 1961, the Guatemala City Protocol of 1971 and the Montreal Additional Protocols Nos. 1, 2, 3 and 4 of 1975. Among these instruments, most of the countries in Asia are parties to both the Warsaw Convention and the Hague Protocol. However, the Republic of Korea and Mongolia are parties only to the Hague Protocol, while Burma, Indonesia and Sri Lanka are parties only to the Warsaw Convention; Thailand and Taiwan are parties to neither. Among Asian states, Indonesia, the Philippines and Pakistan are also parties to the Guadalajara Convention, but no country in Asia has signed the Guatemala City Protocol of 1971 or the Montreal Additional Protocols, which have not yet been put into force. The People's Republic of China has declared that the Warsaw Convention shall apply to the entire Chinese territory, including Taiwan. The application of the Warsaw Convention to one-way air carriage between a state which is a party only to the Warsaw Convention and a state which is a party only to the Hague Protocol is of particular importance in Korea, as Korea is a signatory only to the Hague Protocol yet is involved in a great deal of air transportation to and from the United States, which in turn is a party only to the Warsaw Convention.
The opinion of the Supreme Court of Korea appears to be, that parties to the Warsaw Convention were intended to be parties to the Hague Protocol, whether they actually signed it or not. The effect of this decision is that in Korea the United States and Korea will be considered by the courts to be in a treaty relationship, though neither State is a signatory to the same instrument as the other State. The first wrongful death claim in Korea related to international carriage by air under the Convention was made in Hyun-Mo Bang, et al v. Korean Air Lines Co., Ltd. case. In this case, the plaintiffs claimed for damages based upon breach of contract as well as upon tort under the Korean Civil Code. The issue in the case was whether the time limitation provisions of the Convention should be applicable to a claim based in tort as well as to a claim based in contract. The Appellate Court ruled on 29 August 1983 that 'however founded' in Article 24(1) of the Convention should be construed to mean that the Convention should be applicable to the claim regardless of whether the cause of action was based in tort or breach of contract, and that the plaintiffs' rights to damages had therefore extinguished because of the time limitation as set forth in Article 29(1) of the Convention. The difficult and often debated question of what exactly is meant by the words 'such default equivalent to wilful misconduct' in Article 25(1) of the Warsaw Convention, has also been litigated. The Supreme Court of Japan dealt with this issue in the Suzuki Shinjuten Co. v. Northwest Airlines Inc. case. The Supreme Court upheld the Appellate Court's ruling, and decided that 'such default equivalent to wilful misconduct' under Article 25(1) of the Convention was within the meaning of 'gross negligence' under the Japanese Commercial Code. 
The issue of the conversion of the 'franc' into national currencies, as provided in Article 22 of the Warsaw Convention as amended by the Hague Protocol, has also been raised in a Korean court case now before the District Court of Seoul. In this case, the plaintiff argues that the gold franc equivalent must be converted into Korean Won in accordance with the free market price of gold in Korea, as Korea has not enacted any law, order or regulation prescribing the proper method of calculating the equivalent in its national currency. It is unclear whether the court will accept this position or will instead apply the last official price of gold of the United States (as in the famous Franklin Mint case), the Special Drawing Right (SDR), or the current French franc; Korean Air Lines has argued in favor of the last official price of gold of the United States, by which the airline converted such francs into US Dollars in its General Conditions of Carriage. It is my understanding that in India an appellate court adopted the free market price valuation, and there is a report as well saying that if a lawsuit concerning this issue were brought in Pakistan, the free market cost of gold would be applied there too. Speaking specifically about the future of the Warsaw System in Asia, although I have been informed that Thailand is actively considering acceding to the Warsaw Convention, the attitudes of most Asian governments towards the Warsaw System are still not well known. There is little evidence that Asian countries are moving to deal concretely with the conversion of the franc into their own local currencies, nor can it be said that they are on the move to adhere to the Montreal Additional Protocols Nos. 3 & 4, which attempt to solve many of the current problems with the Warsaw System by adopting the SDR as the unit of currency, establishing the carrier's absolute liability and an unbreakable limit, increasing the carrier's passenger limit of liability to SDR 100,000, and permitting the domestic introduction of supplemental compensation. To summarize my own sentiments regarding the future: given that Asian airlines are now world leaders in both overall size and rate of growth, and that Asian individuals and governments alike are becoming ever more reliant on global civil aviation networks as their economies grow stronger, I am hopeful that Asian nations will henceforth play a bigger role in ensuring the orderly and speedy development of a workable unified system of rules governing international commercial air carriage.


    A study on the prediction of korean NPL market return (한국 NPL시장 수익률 예측에 관한 연구)

    • Lee, Hyeon Su;Jeong, Seung Hwan;Oh, Kyong Joo
      • Journal of Intelligence and Information Systems
      • /
      • v.25 no.2
      • /
      • pp.123-139
      • /
      • 2019
    • The Korean NPL market was formed by the government and foreign capital shortly after the 1997 IMF crisis. However, this market is short-lived, as the bad debt has started to increase after the global financial crisis in 2009 due to the real economic recession. NPL has become a major investment in the market in recent years when the domestic capital market's investment capital began to enter the NPL market in earnest. Although the domestic NPL market has received considerable attention due to the overheating of the NPL market in recent years, research on the NPL market has been abrupt since the history of capital market investment in the domestic NPL market is short. In addition, decision-making through more scientific and systematic analysis is required due to the decline in profitability and the price fluctuation due to the fluctuation of the real estate business. In this study, we propose a prediction model that can determine the achievement of the benchmark yield by using the NPL market related data in accordance with the market demand. In order to build the model, we used Korean NPL data from December 2013 to December 2017 for about 4 years. The total number of things data was 2291. As independent variables, only the variables related to the dependent variable were selected for the 11 variables that indicate the characteristics of the real estate. In order to select the variables, one to one t-test and logistic regression stepwise and decision tree were performed. Seven independent variables (purchase year, SPC (Special Purpose Company), municipality, appraisal value, purchase cost, OPB (Outstanding Principle Balance), HP (Holding Period)). The dependent variable is a bivariate variable that indicates whether the benchmark rate is reached. 
This is because the accuracy of the model predicting the binomial variables is higher than the model predicting the continuous variables, and the accuracy of these models is directly related to the effectiveness of the model. In addition, in the case of a special purpose company, whether or not to purchase the property is the main concern. Therefore, whether or not to achieve a certain level of return is enough to make a decision. For the dependent variable, we constructed and compared the predictive model by calculating the dependent variable by adjusting the numerical value to ascertain whether 12%, which is the standard rate of return used in the industry, is a meaningful reference value. As a result, it was found that the hit ratio average of the predictive model constructed using the dependent variable calculated by the 12% standard rate of return was the best at 64.60%. In order to propose an optimal prediction model based on the determined dependent variables and 7 independent variables, we construct a prediction model by applying the five methodologies of discriminant analysis, logistic regression analysis, decision tree, artificial neural network, and genetic algorithm linear model we tried to compare them. To do this, 10 sets of training data and testing data were extracted using 10 fold validation method. After building the model using this data, the hit ratio of each set was averaged and the performance was compared. As a result, the hit ratio average of prediction models constructed by using discriminant analysis, logistic regression model, decision tree, artificial neural network, and genetic algorithm linear model were 64.40%, 65.12%, 63.54%, 67.40%, and 60.51%, respectively. It was confirmed that the model using the artificial neural network is the best. Through this study, it is proved that it is effective to utilize 7 independent variables and artificial neural network prediction model in the future NPL market. 
The proposed model predicts in advance whether a newly offered loan will achieve the 12% benchmark return, which will help special purpose companies make investment decisions. Furthermore, we anticipate that the NPL market will become more liquid as transactions proceed at appropriate prices.
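The evaluation procedure the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the data are synthetic stand-ins for the 7 NPL features and the binary benchmark-return label, scikit-learn classifiers stand in for the paper's methods, and the genetic-algorithm linear model is omitted.

```python
# Sketch of the abstract's comparison: 10-fold cross-validation over
# several classifiers, scored by hit ratio (classification accuracy).
# Synthetic data with a linear signal; not the paper's actual dataset.
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2291, 7))  # stand-in for the 7 independent variables
y = (X @ rng.normal(size=7) + rng.normal(size=2291) > 0).astype(int)

models = {
    "discriminant analysis": LinearDiscriminantAnalysis(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "neural network": MLPClassifier(hidden_layer_sizes=(16,),
                                    max_iter=500, random_state=0),
}
cv = KFold(n_splits=10, shuffle=True, random_state=0)
for name, model in models.items():
    hit = cross_val_score(model, X, y, cv=cv, scoring="accuracy").mean()
    print(f"{name}: mean hit ratio {hit:.2%}")
```

Averaging the per-fold accuracies, as `cross_val_score(...).mean()` does here, mirrors the paper's "hit ratio average" comparison across the 10 train/test sets.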

    Four-year change and tracking of serum lipids in Korean adolescents (강화지역 청소년의 4년간 혈청 지질의 변화와 지속성)

    • Lee, Kang-Hee;Suh, Il;Jee, Sun-Ha;Nam, Chung-Mo;Kim, Sung-Soon;Shim, Won-Heum;Ha, Jong-Won;Kim, Suk-Il;Kang, Hyung-Gon
      • Journal of Preventive Medicine and Public Health
      • /
      • v.30 no.1 s.56
      • /
      • pp.45-59
      • /
      • 1997
    • It has been known that there is a tracking phenomenon in serum lipid levels. However, no study has examined the change and tracking of serum lipids in Korean adolescents. The purpose of this study is to examine the changes in serum lipids in Korean adolescents from 12 to 16 years of age, and to examine whether there is a tracking phenomenon in serum lipid levels during that period. In 1992, serum lipids (total cholesterol (TC), triglyceride (TG), LDL cholesterol (LDL-C), and HDL cholesterol (HDL-C)) were measured in 318 males and 365 females who were 12 years of age in Kangwha county, Korea. These participants were followed up to 1996, with serum lipid levels examined again in 1994 and 1996. Among them, 162 males and 147 females completed all three examinations in a fasting state. To examine the effect of eliminating adolescents with incomplete data, we compared baseline serum lipids, blood pressure, and anthropometric measures between adolescents with complete follow-up and those who withdrew. To examine the change in serum lipids, we compared mean serum lipid values by age in males and females, using repeated-measures analysis of variance to test the change with age. We used three methods to examine the existence of tracking. First, we analyzed the trends in serum lipids over the 4-year period within quartile groups formed on the basis of the first-year serum lipid levels, to see whether the relative ranking of mean serum lipids among the quartile groups remained the same over the 4 years. Second, we quantified the degree of tracking by calculating Spearman's rank correlation coefficient between every pair of examinations. Third, the persistence extreme quartile method was used: the population is divided into quartile groups according to the initial lipid level, and the percent of subjects who stay in the same group at the follow-up measurement is calculated. 
Decreases in TC and LDL-C were noted over the 4 years, primarily in boys. HDL-C decreased between baseline and the first follow-up in both sexes. Tracking, as measured by both correlation coefficients and persistence in extreme quartiles, was evident for all of the lipids. The correlation coefficients of TC between baseline and 4 years later were 0.55 in boys and 0.68 in girls; the corresponding values for HDL-C were 0.58 and 0.69. More than 50% of adolescents who belonged to the highest quartile group for TC, HDL-C, and LDL-C at baseline remained in the same group at the examination performed 2 years later, in both sexes, and the probabilities of remaining in the same group were more than 35% when examined 4 years later. The tracking of TG was less evident than that of the other lipids: the percentages of girls who stayed in the same group 2 years and 4 years later were 42.9% and 25.7%, respectively. It is evident that serum lipid levels track in Korean adolescents. Research with longer follow-up is needed to investigate the long-term change in lipids from adolescence to adulthood.
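The two quantitative tracking measures described above can be sketched as follows. This is an illustration on synthetic cholesterol values, not the study's data; the value ranges and sample size are assumptions chosen only to make the example runnable.

```python
# Two tracking measures from the abstract, on synthetic TC values:
# (1) Spearman rank correlation between baseline and follow-up;
# (2) persistence extreme quartile: the percent of subjects in the
#     top baseline quartile who stay in the top quartile at follow-up.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
baseline = rng.normal(160, 25, size=160)                  # TC at age 12 (mg/dL)
followup = 0.8 * baseline + rng.normal(30, 15, size=160)  # TC 4 years later

rho, _ = spearmanr(baseline, followup)
print(f"Spearman rho: {rho:.2f}")

q3_base = np.quantile(baseline, 0.75)      # cutoff for top baseline quartile
q3_follow = np.quantile(followup, 0.75)    # cutoff for top follow-up quartile
top_at_base = baseline >= q3_base
persisted = (followup[top_at_base] >= q3_follow).mean()
print(f"Persistence in top quartile: {persisted:.1%}")
```

With no tracking, about 25% of the top quartile would remain there by chance, which is why the study's observed persistence above 35-50% indicates tracking.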
