• Title/Summary/Keyword: Hybrid-model


A Study on Public Interest-based Technology Valuation Models in Water Resources Field (수자원 분야 공익형 기술가치평가 시스템에 대한 연구)

  • Ryu, Seung-Mi;Sung, Tae-Eung
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.177-198 / 2018
  • Recently, as the character of water resources shifts toward that of a public good, it has become necessary to establish a framework for measuring the economic value of water-resource technologies and managing their performance. To date, water technologies have been evaluated through feasibility studies or technology assessments based on net present value (NPV) or benefit-cost (B/C) analysis; however, no systematic valuation model yet exists for objectively assessing the economic value of a technology-based business so that research outcomes can be diffused and fed back. K-water (a government-supported public company in Korea) therefore needs to establish a technology valuation framework suited to the technical characteristics of the water-resources field and to verify it with an applied case. The valuation approach examined in this study treats K-water technologies as public-interest goods and can be used to measure and manage the value they contribute to society. By calculating the value a given technology contributes to society as a public resource, the results can serve as base information for communicating the benefits achieved or the necessity of cost inputs, and for securing the legitimacy of large-scale R&D investment, which is characteristic of public technology. K-water, a Korean public corporation dealing with the public good of water resources, can then establish commercialization strategies for business operation and prepare a basis for calculating the performance of its R&D expenditures. In this study, K-water developed a web-based technology valuation model for public-interest water-resources technologies, built on an evaluation scheme suited to the characteristics of such technologies. In particular, by adapting the evaluation methodology of the Institute of Advanced Industrial Science and Technology (AIST) in Japan to match expense items to expense accounts on the basis of the related benefit items, we propose a so-called 'K-water proprietary model' combining a cost-benefit approach with free cash flow (FCF), build a pipeline into the K-water research performance management system, and verify a practical case for a desalination technology. We analyze the design logic and evaluation process embedded in the web-based valuation system, the reference information, and the database (D/B) logic used by each model to calculate public-interest-based and profit-based technology values within the integrated technology management system. We also review the hybrid evaluation module, which quantifies the qualitative evaluation indices reflecting the unique characteristics of water resources, and the visualized user interface (UI) of the web-based evaluation; both were added to existing web-based technology valuation systems in other fields to calculate business value from financial data. The K-water valuation model distinguishes between public-interest and profit-oriented water technologies. Evaluation modules in the profit-oriented model are designed around the profitability of the technology; for example, K-water's technology inventory includes a number of profit-oriented technologies such as water-treatment membranes. The public-interest model, in contrast, is designed to evaluate public-interest-oriented technologies such as dams, reflecting the characteristics of public benefits and costs. To examine the appropriateness of the cost-benefit-based public-interest valuation model (i.e., the K-water-specific technology valuation model) presented in this study, we applied it to a practical case, calculating a benefit-cost analysis for a water-resource technology with a 20-year lifetime. In future work we will further verify the K-water public-interest valuation model for each business model reflecting different business environments.
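
As a rough illustration of the cost-benefit and discounted cash-flow logic this abstract describes, the following minimal sketch computes an NPV and a B/C ratio for a hypothetical 20-year public-interest technology. The discount rate and all cash-flow figures are illustrative assumptions, not parameters of the K-water model.

```python
# Minimal sketch of the NPV / benefit-cost logic described above.
# All figures and the discount rate are illustrative assumptions,
# not values from the K-water model.

def npv(cash_flows, rate):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def benefit_cost_ratio(benefits, costs, rate):
    """B/C ratio: discounted benefits divided by discounted costs."""
    return npv(benefits, rate) / npv(costs, rate)

# Hypothetical 20-year technology: large up-front cost, steady social benefits.
years = 20
costs = [1000.0] + [50.0] * years      # initial R&D + yearly O&M (illustrative)
benefits = [0.0] + [180.0] * years     # yearly social benefit (illustrative)
rate = 0.045                           # assumed social discount rate

print("NPV:", round(npv([b - c for b, c in zip(benefits, costs)], rate), 1))
print("B/C:", round(benefit_cost_ratio(benefits, costs, rate), 2))
```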

Scalable Collaborative Filtering Technique based on Adaptive Clustering (적응형 군집화 기반 확장 용이한 협업 필터링 기법)

  • Lee, O-Joun;Hong, Min-Sung;Lee, Won-Jin;Lee, Jae-Dong
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.73-92 / 2014
  • An adaptive clustering-based collaborative filtering technique is proposed to address the fundamental problems of collaborative filtering: the cold-start problem, the scalability problem, and data sparsity. Conventional collaborative filtering makes recommendations by predicting a user's preference for an item from a similar-item subset and a similar-user subset, both composed from users' recorded preferences for items. When the density of the user-preference matrix is low, the reliability of the recommender system therefore drops rapidly, and composing similar-item and similar-user subsets becomes harder. In addition, as the scale of the service grows, the time needed to build those subsets increases geometrically, lengthening the response time of the recommender system. To solve these problems, this paper proposes a collaborative filtering technique that adapts its conditions to the model and adopts concepts from context-based filtering. The technique consists of four major steps. First, items and users are clustered according to their feature vectors, and an inter-cluster preference between each item cluster and each user cluster is estimated. This reduces the run time for creating similar-item and similar-user subsets, makes the recommender more reliable than one built only from user-preference information, and partially alleviates the cold-start problem. Second, recommendations are made from the previously composed item and user clusters and the inter-cluster preferences: an item list is generated for a user by visiting the item clusters in decreasing order of the inter-cluster preference of the user's cluster, then selecting and ranking items according to predicted or recorded preferences. With this method, the model-building phase carries the highest load of the recommender system, minimizing the load at run time; the scalability problem is thus eased and large-scale recommendation can be performed with highly reliable collaborative filtering. Third, missing user-preference information is predicted from the item and user clusters, which mitigates the problem caused by the low density of the user-preference matrix. Previous studies used either item-based or user-based prediction; this paper improves on Hao Ji's idea of using both, combining the two predicted values according to the conditions of the recommendation model to improve the reliability of the recommendation service. Predicting preferences from item or user clusters also reduces prediction time and allows missing preferences to be predicted at run time. Fourth, the item and user feature vectors are updated by learning from subsequent user feedback: normalized feedback is applied to both vectors. This mitigates the problems caused by borrowing concepts from context-based filtering, namely that item and user feature vectors built from user profiles and item properties are limited by the difficulty of quantifying qualitative features of items and users. The elements of the user and item feature vectors are therefore matched one to one, and when feedback on a particular item is obtained it is applied to the corresponding element of the opposite feature vector. The method was verified by comparing its performance with existing hybrid filtering techniques on two measures: MAE (mean absolute error) and response time. The MAE results confirm that the technique improves the reliability of the recommender system, and the response-time results show that it is suitable for large-scale recommendation. This paper thus proposes an adaptive clustering-based collaborative filtering technique with high reliability and low time complexity, but it has limitations: because the technique focuses on reducing time complexity, a large improvement in reliability was not expected. Future work will improve the technique with rule-based filtering.
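
A minimal sketch of the cluster-to-cluster preference idea described above: users and items are clustered, and a missing rating is filled with the mean observed rating between the corresponding user and item clusters. The toy rating matrix, cluster counts, and use of k-means are illustrative assumptions, not the paper's implementation.

```python
# Sketch of cluster-based preference prediction; data and cluster counts are illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
ratings = rng.integers(0, 6, size=(50, 40)).astype(float)   # 0 = unrated

# Cluster users and items by their (sparse) rating profiles.
user_clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(ratings)
item_clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(ratings.T)

# Inter-cluster preference: mean observed rating between each user/item cluster pair.
inter = np.zeros((5, 4))
for uc in range(5):
    for ic in range(4):
        block = ratings[np.ix_(user_clusters == uc, item_clusters == ic)]
        observed = block[block > 0]
        inter[uc, ic] = observed.mean() if observed.size else 0.0

def predict(user, item):
    """Fill a missing rating with the inter-cluster preference of the user's and item's clusters."""
    return inter[user_clusters[user], item_clusters[item]]

print(predict(3, 7))
```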

Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung;Yang, Jin Yong
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.35-48 / 2014
  • According to the 2013 construction market outlook report, liquidations of construction companies were expected to continue because of the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact than those in other industries, yet, because of differences in capital structure and debt-to-equity ratio, they are also harder to forecast. The construction industry operates with high leverage and high debt-to-equity ratios, and project cash flows are concentrated in the second half of a project. Because the industry is strongly influenced by the economic cycle, downturns tend to raise construction bankruptcy rates rapidly, and high leverage combined with rising bankruptcy rates increases the burden on the banks that lend to construction companies. Nevertheless, bankruptcy prediction research has concentrated mainly on financial institutions, and construction-specific studies are rare. Bankruptcy prediction models based on corporate financial data have been studied for many years in various ways, but these models target companies in general and may not be appropriate for construction companies, which typically carry disproportionately large liquidity risks. The construction industry is capital-intensive, invests in large, long-term projects, and has comparatively long payback periods; with such a distinctive capital structure, criteria used to judge the financial risk of companies in general are difficult to apply to construction firms. The Altman Z-score, first published in 1968, is a commonly used bankruptcy forecasting model. It estimates the likelihood of bankruptcy with a simple formula and classifies a company's status into three categories: dangerous, moderate, or safe. A company in the "dangerous" category has a high likelihood of bankruptcy within two years, while those in the "safe" category have a low likelihood; for companies in the "moderate" category the risk is difficult to forecast. Many of the construction firms in this study fell into the "moderate" category, which made their risk hard to forecast. With the development of machine learning, recent studies of corporate bankruptcy forecasting have adopted it: pattern recognition, a representative application area of machine learning, is applied by analyzing patterns in a company's financial information and judging whether the pattern belongs to the bankruptcy-risk group or the safe group. The machine learning models most often used in previous bankruptcy forecasting are artificial neural networks, adaptive boosting (AdaBoost), and the support vector machine (SVM), and many hybrid studies combine these models. Existing studies, whether based on the traditional Z-score or on machine learning, focus on companies in non-specific industries and therefore do not consider industry-specific characteristics. In this paper, we show that AdaBoost is the most appropriate forecasting model for construction companies when company size is taken into account. We classified construction companies into three groups, large, medium, and small, according to capital, and analyzed the predictive ability of AdaBoost for each group. The experimental results show that AdaBoost has higher predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
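
A minimal sketch of the kind of AdaBoost classifier the abstract describes, evaluated separately for firms above and below a capital threshold. The synthetic "financial ratio" features, labels, and the 50-billion-won split are illustrative assumptions; the paper's actual financial variables and data are not reproduced here.

```python
# Sketch: AdaBoost bankruptcy classifier on synthetic "financial ratio" features.
# Features, labels, and the capital threshold are illustrative only.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 600
X = rng.normal(size=(n, 5))                            # e.g. leverage, liquidity, profitability ratios
capital = rng.lognormal(mean=3.5, sigma=1.0, size=n)   # hypothetical capital (billion won)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0.8).astype(int)  # 1 = bankrupt

# Evaluate AdaBoost separately for "large" firms (capital > 50) and the rest.
for name, idx in [("large", capital > 50), ("small/medium", capital <= 50)]:
    Xtr, Xte, ytr, yte = train_test_split(X[idx], y[idx], test_size=0.3, random_state=0)
    clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
    print(name, "accuracy:", round(accuracy_score(yte, clf.predict(Xte)), 3))
```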

Development and application of prediction model of hyperlipidemia using SVM and meta-learning algorithm (SVM과 meta-learning algorithm을 이용한 고지혈증 유병 예측모형 개발과 활용)

  • Lee, Seulki;Shin, Taeksoo
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.111-124 / 2018
  • This study develops a classification model for predicting the occurrence of hyperlipidemia, one of the chronic diseases. Prior studies applying data mining techniques to disease prediction can be divided into model-design studies for predicting cardiovascular disease and studies comparing disease-prediction results. In the foreign literature, studies predicting cardiovascular disease with data mining techniques were predominant; domestic studies were not much different but focused mainly on hypertension and diabetes. Because hyperlipidemia is a chronic disease of comparable importance to hypertension and diabetes, this study selected it as the disease to be analyzed, and developed a prediction model using SVM and meta-learning algorithms, which are known for their predictive power. We used the Korea Health Panel 2012 data set; the Korea Health Panel produces basic data on health expenditure, health level, and health behavior and has conducted an annual survey since 2008. From the 2012 inpatient, outpatient, emergency, and chronic-disease records, 1,088 patients with hyperlipidemia and 1,088 non-patients were randomly selected, for a total of 2,176 subjects. Three methods were used to select input variables for predicting hyperlipidemia. First, a stepwise method was applied using logistic regression: among the 17 variables, the categorical variables (except length of smoking) were expressed as dummy variables relative to a reference group, and six variables (age, BMI, education level, marital status, smoking status, gender), excluding income level and smoking period, were selected at the 0.1 significance level. Second, the C4.5 decision tree algorithm was used, for which the significant input variables were age, smoking status, and education level. Finally, a genetic algorithm was used: the input variables selected for the SVM were six (age, marital status, education level, economic activity, smoking period, and physical activity status), and those selected for the artificial neural network were three (age, marital status, and education level). Based on the selected variables, we compared SVM, the meta-learning algorithm, and other prediction models for hyperlipidemia, and compared classification performance using TP rate and precision. The main results are as follows. First, the accuracy of the SVM was 88.4% and that of the artificial neural network was 86.7%. Second, classification models using the input variables selected by the stepwise method were slightly more accurate than models using all variables. Third, when only the three variables selected by the decision tree were used as inputs, the precision of the artificial neural network was higher than that of the SVM. With the input variables selected by the genetic algorithm, the classification accuracy of the SVM was 88.5% and that of the artificial neural network was 87.9%. Finally, the study shows that stacking, the meta-learning algorithm proposed here, performs best when it uses the predicted outputs of SVM and MLP as inputs to an SVM meta-classifier. The purpose of this study was to predict hyperlipidemia, one of the representative chronic diseases, using SVM and meta-learning algorithms, which are known for high accuracy. The classification accuracy of stacking as a meta-learner was higher than that of the other meta-learning algorithms, although the predictive performance of the proposed meta-learning algorithm equals that of the best-performing single model, the SVM (88.6%). The limitations of this study are as follows. Although various variable-selection methods were tried, most variables used were categorical dummy variables; with many categorical variables, the results may differ if continuous variables are used, because models such as decision trees can fit categorical variables better than models such as neural networks. Despite these limitations, this study is significant in predicting hyperlipidemia with hybrid models such as meta-learning algorithms, which had not been studied previously, and in improving model accuracy by applying various variable-selection techniques. Our proposed model is expected to be useful for the prevention and management of hyperlipidemia.
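
A minimal sketch of the stacking arrangement the abstract describes, with an SVM and an MLP as base learners and an SVM as the meta-classifier. The synthetic data, hyperparameters, and scikit-learn StackingClassifier wrapper are illustrative assumptions, not the study's settings.

```python
# Sketch: stacking with SVM + MLP base learners and an SVM meta-classifier.
# Synthetic data and hyperparameters are illustrative, not the study's settings.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2176, n_features=6, n_informative=4, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

stack = StackingClassifier(
    estimators=[("svm", SVC(probability=True, random_state=0)),
                ("mlp", MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0))],
    final_estimator=SVC(random_state=0),   # meta-classifier
    cv=5,
)
stack.fit(Xtr, ytr)
print("stacking accuracy:", round(accuracy_score(yte, stack.predict(Xte)), 3))
```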

Study on the Variation of Optical Properties of Asian Dust Plumes according to their Transport Routes and Source Regions using Multi-wavelength Raman LIDAR System (다파장 라만 라이다 시스템을 이용한 발원지 및 이동 경로에 따른 황사의 광학적 특성 변화 연구)

  • Shin, Sung-Kyun;Noh, Youngmin;Lee, Kwonho;Shin, Dongho;Kim, KwanChul;Kim, Young J.
    • Korean Journal of Remote Sensing / v.30 no.2 / pp.241-249 / 2014
  • Continuous observations of atmospheric aerosol were carried out over three years (2009-2011) with a multi-wavelength Raman lidar at the Gwangju Institute of Science and Technology (GIST), Korea (35.11°N, 126.54°E). Particle depolarization ratios were retrieved from the observations in order to distinguish Asian dust layers. The vertical extent of each Asian dust layer was used as an input parameter to the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model for backward-trajectory analysis, from which the source regions and transport pathways of the dust layers were identified. During the observation period, the most frequent source region of Asian dust reaching Korea was the Gobi Desert. A statistical analysis of the particle depolarization ratio of Asian dust was conducted according to transport route in order to quantify how the optical properties of the dust change during long-range transport. The routes were classified into dust transported directly from the source regions to the observation site and dust that passed over polluted regions of China. The particle depolarization ratios of Asian dust transported via industrial regions of China ranged from 0.07 to 0.1, whereas those of dust transported directly from the source regions were comparatively higher, ranging from 0.11 to 0.15. This suggests that pure Asian dust particles from the source regions mix during transport with pollution particles, which are likely to be spherical, so that the particle depolarization ratio of the pollution-mixed dust decreases.

Inhibitory Effect of Purple Corn 'Seakso 1' Husk and Cob Extracts on Lipid Accumulation in Oleic Acid-Induced Non-Alcoholic Fatty Liver Disease HepG2 Model (올레산 유도 비알코올성 지방간세포에서 자색옥수수 색소 1호 포엽과 속대 추출물의 지질 축적 억제 효과)

  • Lee, Ki Yeon;Kim, Tae hee;Kim, Jai Eun;Bae, Son wha;Park, A-Reum;Lee, Hyo Young;Choi, Sun jin;Park, Jong yeol;Kwon, Soon bae;Kim, Hee Yeon
    • Journal of Food Hygiene and Safety / v.35 no.1 / pp.93-101 / 2020
  • Seakso 1, a maize hybrid, was developed in 2008 by the Gangwon Agricultural Research and Extension Services in Korea and registered in 2011. It is a single-cross, semi-flint, deep-purple corn variety; the kernels are yellow, while the husks and cobs are purple. Because Seakso 1 is sensitive to excess moisture after seeding, the water supply should be carefully managed, and it should be harvested at the appropriate time to obtain the highest anthocyanin content. This study investigated the hepatoprotective effect of Saekso 1 corn husk and cob extracts (EHCS) in an oleic acid-induced non-alcoholic fatty liver disease (NAFLD) model in HepG2 cells. EHCS strongly inhibited lipid accumulation, suppressed triglyceride accumulation, and inhibited the expression of lipid marker genes such as sterol regulatory element-binding protein-1c (SREBP-1c) and sterol regulatory element-binding protein-1a (SREBP-1a). Western blot analysis of p-AMPK, p-SREBP1, PPARα, and FAS proteins showed that the expression of SREBP1 protein, a major factor in hepatic lipid metabolism, decreased significantly after treatment with the extracts. Moreover, the expression of FAS, a major enzyme in the fatty acid biosynthetic pathway, decreased significantly at all concentrations. These results suggest that EHCS is a potent candidate agent for the treatment of NAFLD.

Reverse engineering technique on the evaluation of impression accuracy in angulated implants (경사진 임플란트에서 임플란트 인상의 정확도 평가를 위한 역공학 기법)

  • Jung, Hong-Taek;Lee, Ki-Sun;Song, So-Yeon;Park, Jin-Hong;Lee, Jeong-Yol
    • The Journal of Korean Academy of Prosthodontics / v.59 no.3 / pp.261-270 / 2021
  • Purpose. The aim of this study was (1) to compare the reverse engineering technique with other existing measurement methods and (2) to analyze the effect of implant angulation and impression coping type on implant impression accuracy using the reverse engineering technique. Materials and methods. Three different master models were fabricated, and the distance between the two implant center points in the parallel master model was measured with three different methods: digital caliper measurement (Group DC), optical measurement (Group OM), and reverse engineering (Group RE). Ninety experimental models were fabricated with three types of impression copings for the three implant angulations, and the angular and distance error rates were calculated. One-way ANOVA was used for comparison among the evaluation methods (P < .05), and the error rates of the experimental groups were analyzed by two-way ANOVA (P < .05). Results. While there was a significant difference between Groups DC and RE (P < .05), Group OM showed no significant difference from the other groups (P > .05). The standard deviations of the reverse engineering measurements were much lower than those of the digital caliper and optical measurements. The hybrid groups showed no significant difference from the pick-up groups in distance error rates (P > .05). Conclusion. The reverse engineering technique demonstrated its potential as a technique for evaluating the 3D accuracy of impression techniques.
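
A minimal sketch of the one-way ANOVA comparison the abstract describes among the three measurement methods. The distance values below are illustrative placeholders, not the study's measurements.

```python
# Sketch: one-way ANOVA comparing the three measurement methods (DC, OM, RE).
# The distance values are illustrative placeholders, not the study's data.
from scipy import stats

group_dc = [10.02, 10.05, 9.98, 10.07, 10.01]   # digital caliper (mm)
group_om = [10.01, 10.03, 10.00, 10.04, 10.02]  # optical measurement (mm)
group_re = [10.00, 10.01, 10.00, 10.02, 10.01]  # reverse engineering (mm)

f_stat, p_value = stats.f_oneway(group_dc, group_om, group_re)
print(f"F = {f_stat:.3f}, P = {p_value:.3f}, significant at P < .05: {p_value < 0.05}")
```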

A Study on the Effect of Network Centralities on Recommendation Performance (네트워크 중심성 척도가 추천 성능에 미치는 영향에 대한 연구)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.23-46 / 2021
  • Collaborative filtering, which is widely used in personalized recommendation, is recognized as a very useful technique for finding similar customers and recommending products to them based on their purchase histories. However, traditional collaborative filtering has difficulty computing similarities for new customers or products, because similarities are calculated from direct connections and common features among customers. For this reason, hybrid techniques that also use content-based filtering have been designed. In parallel, efforts have been made to solve these problems by exploiting the structural characteristics of social networks, that is, by calculating similarities indirectly through other similar customers placed between two customers. A customer network is created from purchase data, and the similarity between two customers is computed from features of the network that indirectly connect them; this similarity can then be used to predict whether the target customer will accept a recommendation. Network centrality metrics can be used to compute these similarities, and different centrality metrics may affect recommendation performance differently. Furthermore, the effect of a centrality metric on recommendation performance may vary with the recommender algorithm. In addition, recommendation techniques based on network analysis can be expected to improve recommendation performance not only for new customers or products but for all customers and products. By regarding a customer's purchase of an item as a link created between the customer and the item in the network, predicting user acceptance of a recommendation becomes predicting whether a new link will be created between them. Because classification models fit this binary link-prediction problem, decision tree, k-nearest neighbors (KNN), logistic regression, artificial neural network, and support vector machine (SVM) models were selected for the study. Performance was evaluated on order data collected from an online shopping mall over four years and two months: the first three years and eight months were used to construct the social network, and the remaining four months of records were used to train and evaluate the recommender models. Experiments applying the centrality metrics to each model show that recommendation acceptance rates differ meaningfully between metrics for each algorithm. This work analyzed four commonly used centrality metrics: degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. Eigenvector centrality recorded the lowest performance in all models except the support vector machine. Closeness centrality and betweenness centrality showed similar performance across all models. Degree centrality ranked in the middle across the models, while betweenness centrality always ranked higher than degree centrality. Finally, closeness centrality was characterized by distinct differences in performance depending on the model: it ranked first, with numerically high performance, in logistic regression, artificial neural network, and decision tree, but recorded very low rankings and low performance in the support vector machine and k-nearest neighbors. As the experimental results show, in a classification model, network centrality metrics computed over the subnetwork connecting two nodes can effectively predict the connectivity between those nodes in a social network, and each metric performs differently depending on the type of classification model. This implies that choosing appropriate metrics for each algorithm can lead to higher recommendation performance. In general, betweenness centrality can guarantee a high level of performance in any model, and closeness centrality could be introduced to obtain higher performance for certain models.
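
A minimal sketch of computing the four centrality metrics discussed above on a toy customer-item purchase network, whose node scores could then feed a link-prediction classifier. The graph data and the use of networkx are illustrative assumptions, not the paper's code.

```python
# Sketch: the four centrality metrics discussed above on a toy
# customer-item purchase graph; the data and library choice are illustrative.
import networkx as nx

# Purchase links: (customer, item)
purchases = [("c1", "i1"), ("c1", "i2"), ("c2", "i2"),
             ("c2", "i3"), ("c3", "i1"), ("c3", "i3"), ("c4", "i3")]
G = nx.Graph()
G.add_edges_from(purchases)

metrics = {
    "degree": nx.degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
}

# Centrality scores of the customer nodes, usable as features for a
# link-prediction (recommendation acceptance) classifier.
for name, scores in metrics.items():
    print(name, {n: round(s, 3) for n, s in scores.items() if n.startswith("c")})
```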

Developmental Competence and Effects of Coculture after Cryopreservation of Blastomere-Biopsied Mouse Embryos as a Preclinical Model for Preimplantation Genetic Diagnosis (착상 전 유전진단 기술 개발의 동물실험 모델로서 할구 생검된 생쥐 배아에서 동결보존 융해 후 배아 발생 양상과 공배양 효과에 관한 연구)

  • Kim, Seok-Hyun;Kim, Hee-Sun;Ryu, Buom-Yong;Choi, Sung-Mi;Pang, Myung-Geol;Oh, Sun-Kyung;Jee, Byung-Chul;Suh, Chang-Suk;Choi, Young-Min;Kim, Jung-Gu;Moon, Shin-Yong;Lee, Jin-Yong;Chae, Hee-Dong;Kim, Chung-Hoon
    • Clinical and Experimental Reproductive Medicine / v.27 no.1 / pp.47-57 / 2000
  • Objective: The effects of cryopreservation, with or without coculture, on the in vitro development of blastomere-biopsied 8-cell mouse embryos were investigated. This experimental study was designed to set up a preclinical mouse model for preimplantation genetic diagnosis (PGD) in humans. Methods: Eight-cell embryos were obtained by in vitro fertilization (IVF) from F1 hybrid mice (C57BL female × CBA male). Using micromanipulation, one to four blastomeres were aspirated through a hole made in the zona pellucida by zona drilling (ZD) with acid Tyrode's solution (ATS). A slow-freezing, rapid-thawing protocol with 1.5 M dimethyl sulfoxide (DMSO) and 0.1 M sucrose as cryoprotectants was used for cryopreservation of the blastomere-biopsied 8-cell embryos. After thawing, embryos were cultured for 110 hours in Ham's F-10 supplemented with 0.4% bovine serum albumin (BSA); in the coculture group, embryos were cultured for 110 hours on a monolayer of Vero cells in the same medium. Blastocyst formation was recorded, and embryos that developed beyond the blastocyst stage were stained with 10% Giemsa to count the total number of nuclei in each embryo. Results: The survival rate after cryopreservation was significantly lower in the blastomere-biopsied (7/8, 6/8, 5/8, and 4/8) groups than in the non-biopsied, zona-intact (ZI) group. Without coculture, the blastocyst formation rate after cryopreservation did not differ significantly among the ZI, zona-drilling-only (ZD), and blastomere-biopsied groups, but it was significantly lower than in the non-cryopreserved control group. The mean number of cells in embryos beyond the blastocyst stage was significantly higher in the control group (50.2±14.0) than in the 6/8 (26.5±6.2), 5/8 (25.0±5.5), and 4/8 (17.8±7.8) groups. With coculture on Vero cells, the blastocyst formation rate after cryopreservation was significantly lower in the 5/8 and 4/8 groups than in the control, 7/8, and 6/8 groups, and the mean number of cells beyond the blastocyst stage was also significantly lower in the 4/8 group (25.9±10.2) than in the control (50.2±14.0), 7/8 (56.0±22.2), and 6/8 (55.3±25.5) groups. Conclusion: After cryopreservation, blastomere-biopsied mouse embryos show significantly impaired developmental competence in vitro, but this detrimental effect may be prevented by coculture with Vero cells for 8-cell embryos from which one or two blastomeres were biopsied. Biopsy of mouse embryos after ZD with ATS is a safe and highly efficient preclinical model for PGD of human embryos.


Effects of Coculture on Development of Biopsied Mouse Embryos as a Preclinical Model for Preimplantation Genetic Diagnosis of Human Embryos (생쥐 모델을 이용한 배아의 할구 생검법과 할구가 생검된 배아의 배양시 공배양 효과에 관한 연구: 인간에서의 착상 전 유전진단 기술 개발을 위한 동물실험 모델의 개발)

  • Kim, S.H.;Ryu, B.Y.;Jee, B.C.;Choi, S.M.;Kim, H.S.;Pang, M.G.;Oh, S.K.;Suh, C.S.;Choi, Y.M.;Kim, J.G.;Moon, S.Y.;Lee, J.Y.;Chae, H.D.;Kim, C.H.
    • Clinical and Experimental Reproductive Medicine / v.26 no.1 / pp.9-20 / 1999
  • Genetic defects in human gametes and embryos can adversely affect overall reproductive events. Biopsy of embryos for preimplantation genetic diagnosis (PGD) offers a new possibility of having children free of genetic disease, and advanced embryo culture methods may enhance the effectiveness of embryo biopsy for the practical application of PGD. This experimental study was undertaken to evaluate the effects of coculture on the in vitro development of biopsied mouse embryos as a preclinical model for PGD of human embryos. Embryos were obtained by in vitro fertilization (IVF) from F1 hybrid mice (C57BL female × CBA male). Using micromanipulation, 1, 2, 3, or 4 blastomeres of 8-cell-stage embryos were aspirated through a hole made in the zona pellucida by zona drilling (ZD) with acidic Tyrode's solution (ATS). After biopsy of blastomeres, embryos were cultured in vitro for 110 hours in Ham's F-10 supplemented with 0.4% BSA or cocultured on a monolayer of Vero cells in the same medium. The frequency of blastocyst formation was recorded, and embryos beyond the blastocyst stage were stained with 10% Giemsa to count the total number of nuclei in each embryo. There was no significant difference in blastocyst formation between the zona-intact control group and the zona-drilling-only (ZD) or biopsied groups. The hatching rate of all treatment groups except the 4/8 group was significantly higher than that of the control group. In all treatment groups there was a significant reduction in the mean cell number of embryos beyond the blastocyst stage (50.2±14.0 in the control group vs. 41.2±7.9 in ZD, 39.3±8.8 in 7/8, 29.7±6.4 in 6/8, 25.1±5.7 in 5/8, and 22.1±4.3 in 4/8 groups, p<0.05). When the same treatments were followed by coculture with Vero cells, a similar pattern was seen in blastocyst formation and hatching rate; however, in all treatment groups the mean cell number of embryos beyond the blastocyst stage was significantly higher with coculture than in the parallel groups without coculture. In the cleavage rate of biopsied blastomeres cultured for 110 hours after IVF, there was no significant difference between the coculture and non-coculture groups (87.2% vs. 78.7%), but the mean cell number of embryos developed from the biopsied blastomeres was significantly higher in the coculture group (11.5±4.7 vs. 5.9±1.9, p<0.05). In conclusion, biopsy of mouse embryos after ZD with ATS is a safe and highly efficient method for PGD, and coculture with Vero cells had a positive effect on the in vitro development of biopsied mouse embryos and blastomeres as a preclinical model for PGD of human embryos.
