Genetic Relationship between Populations and Analysis of Genetic Structure in Hanwoo Proven and Regional Area Populations (한우 종모우와 지역별 한우 집단의 유연관계와 유전적 구조 분석)
Journal of Life Science, v.18 no.10, pp.1442-1446, 2008
Seven populations comprising 586 Hanwoo were characterized using 10 microsatellite DNA markers. Marker sizes were determined with GeneMapper software (v.4.0) after analysis on an ABI 3130 genetic analyzer. Allele frequencies of the microsatellite markers were used to estimate heterozygosities and genetic distances, the latter obtained with Nei's DA distance method. Expected heterozygosities of the populations were estimated to be very similar. Genetic distances (0.0413) between Kangwon (KW) and the Gyonggi (GG) and Jeonpuk (JP) populations were closer than those between other populations by 0.021, while the distance between Gyonggi (GG) and Kyongpuk (KP) was farther than the others by 0.032. In the UPGMA tree constructed from the DA distance matrix, individuals from the regional Hanwoo populations did not branch into separate groups and were spread evenly across the phylogenetic dendrogram, whereas the Hanwoo proven population branched into a distinct group.
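The two quantities this abstract relies on can be written down directly from allele frequencies. The sketch below (plain Python, with illustrative allele frequencies only, not the paper's marker data) computes expected heterozygosity at a locus and Nei's DA distance between two populations:

```python
import math

def expected_heterozygosity(freqs):
    """Expected heterozygosity at one locus: He = 1 - sum(p_i^2)."""
    return 1.0 - sum(p * p for p in freqs)

def nei_da_distance(pop_x, pop_y):
    """Nei's DA distance between two populations.

    pop_x, pop_y: lists of per-locus allele-frequency lists (same shape).
    DA = 1 - (1/L) * sum over loci of sum over alleles of sqrt(x_i * y_i)
    """
    loci = len(pop_x)
    total = 0.0
    for fx, fy in zip(pop_x, pop_y):
        total += sum(math.sqrt(x * y) for x, y in zip(fx, fy))
    return 1.0 - total / loci

# Toy example: two loci, two alleles each (illustrative frequencies).
pop_a = [[0.5, 0.5], [0.25, 0.75]]
pop_b = [[0.6, 0.4], [0.30, 0.70]]
distance = nei_da_distance(pop_a, pop_b)
```

Identical populations give a distance of 0, and populations sharing no alleles give the maximum of 1, which is why DA is convenient for building UPGMA dendrograms from a pairwise distance matrix.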
Many information and communication technology companies release their internally developed AI technology to the public, for example Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software, they strengthen their relationship with the developer community and the artificial intelligence (AI) ecosystem, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers use and reproduce various learning algorithms in each field. Although open source software has been analyzed in various ways, there is a lack of studies that help develop or use deep learning open source software in industry. This study therefore attempts to derive a strategy for adopting such a framework through case studies of deep learning open source frameworks. Based on the technology-organization-environment (TOE) framework and a literature review on open source software adoption, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We conducted a case study analysis of three companies' adoption cases (two successes and one failure) and found that seven of the eight TOE factors, together with several factors regarding company, team, and resources, are significant for the adoption of a deep learning open source framework. By organizing the case study results, we provide five important success factors for adopting a deep learning framework: the knowledge and expertise of developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework work tool service.
In order for an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, it is necessary to support research developers' use of deep learning frameworks by collecting and managing data inside and outside the company with a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures in the usage stage, companies will increase the number of deep learning research developers, the ability to use the deep learning framework, and the support of GPU resources. In the proliferation stage of the deep learning framework, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of the developers, for example by automatically optimizing the hardware (GPU) environment. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by activating developer retraining and seminars. To implement the identified five success factors, a step-by-step enterprise procedure for adoption of the deep learning framework is proposed: defining the project problem, confirming whether the deep learning methodology is the right method, confirming whether the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework across the enterprise. The first three steps (i.e., defining the project problem, confirming whether the deep learning methodology is the right method, and confirming whether the deep learning framework is the right tool) are pre-considerations for adopting a deep learning open source framework. After these three pre-consideration steps are cleared, the next two steps (i.e., using the deep learning framework in the enterprise and spreading the framework across the enterprise) can proceed. In the fourth step, the knowledge and expertise of the developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, the five important factors are realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.
The purpose of this study was to identify, in vitro, the preventive and progression-inhibiting effects on enamel demineralization of fluoride-releasing light- and self-cured orthodontic sealants (FluoroBond), under the polarizing light microscope and the scanning electron microscope. The polarizing light microscopic group was subdivided into seven groups (Group A-Group G), and the scanning electron microscopic group was likewise subdivided into seven groups (Group A'-Group G'). For polarizing light microscopic evaluation, longitudinal sections were made with a Maruto cutter (Maruto Co., Japan) and a Maruto grinding machine (Maruto Co., Japan). Sections were examined and photographed under the polarizing light microscope (Olympus Optical Co., Japan) using crossed polars and with the enamel rod longitudinal axis oriented at
Fama asserted that in an efficient market, no trading rule can consistently outperform the average stock market return. This study suggests a machine learning algorithm to improve the trading performance of an intraday short volatility strategy that exploits the asymmetric volatility spillover effect, and analyzes the resulting improvement. In general, stock market volatility has a negative relation with stock market returns, and Korean stock market volatility is influenced by US stock market volatility. This volatility spillover effect is asymmetric: upward and downward moves in US stock market volatility influence the next day's volatility of the Korean stock market differently. We collected the S&P 500 index, VIX, KOSPI 200 index, and V-KOSPI 200 from 2008 to 2018. We found a negative relation between the S&P 500 and the VIX, and between the KOSPI 200 and the V-KOSPI 200. We also documented a strong volatility spillover effect from the VIX to the V-KOSPI 200. Interestingly, the spillover was asymmetric: whereas a VIX rise is fully reflected in the opening volatility of the V-KOSPI 200, a VIX fall is only partially reflected at the open and its influence lasts until the Korean market close. If the stock market were efficient, there would be no reason for an asymmetric volatility spillover effect to exist; it is a counterexample to the efficient market hypothesis. To exploit this anomalous spillover pattern, we analyzed an intraday volatility selling strategy that sells the Korean volatility market short in the morning after US stock market volatility closes down, and takes no position after the VIX closes up. It produced a profit every year between 2008 and 2018, with a percent profitable of 68%.
The strategy showed a higher average annual return of 129%, relative to the benchmark's 33%. The maximum drawdown (MDD) is -41%, smaller in magnitude than the benchmark's -101%. The Sharpe ratio of the SVS strategy, 0.32, is much greater than the benchmark's 0.08. The Sharpe ratio considers return and risk simultaneously and is calculated as return divided by risk; a high Sharpe ratio therefore indicates high performance when comparing strategies with different risk-return structures. Real-world trading incurs trading costs, including brokerage and slippage; when these costs are considered, the performance difference between the 76% and -10% average annual returns becomes clear. To improve the performance of the suggested volatility trading strategy, we used the well-known SVM algorithm. The input variables are the VIX close-to-close return at day t-1, the VIX open-to-close return at day t-1, and the VK open return at day t; the output is the up/down classification of the VK open-to-close return at day t. The training period is 2008 to 2014 and the testing period is 2015 to 2018. The kernel functions are the linear, radial basis, and polynomial functions. We suggest a modified short volatility (m-SVS) strategy that sells the VK in the morning when the SVM output is Down and takes no position when the output is Up. The trading performance was remarkably improved: over the 5-year testing period, the m-SVS strategy showed very high profit and low risk relative to the benchmark SVS strategy. The annual return of the m-SVS strategy is 123%, higher than that of the SVS strategy, and the risk factor, MDD, was also significantly improved from -41% to -29%.
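The SVM setup described above can be sketched as a small classification pipeline. The snippet below uses scikit-learn with synthetic, randomly generated features standing in for the three volatility inputs; the variable names, toy labels, and data are assumptions for illustration, not the paper's dataset or results:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic features standing in for:
#   [VIX close-to-close return (t-1),
#    VIX open-to-close return (t-1),
#    V-KOSPI 200 open return (t)]
X = rng.normal(0.0, 1.0, size=(500, 3))
# Toy label: 1 ("Up") when yesterday's VIX rose, else 0 ("Down"),
# standing in for the V-KOSPI 200 open-to-close direction.
y = (X[:, 0] > 0).astype(int)

# Chronological split, as in a train/test period split.
X_train, X_test = X[:350], X[350:]
y_train, y_test = y[:350], y[350:]

# The paper compares linear, RBF, and polynomial kernels; RBF shown here.
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)

# 0 = Down -> sell volatility in the morning; 1 = Up -> take no position.
signal = clf.predict(X_test)
accuracy = (signal == y_test).mean()
```

Swapping `kernel="rbf"` for `"linear"` or `"poly"` reproduces the kernel comparison; in practice the features would be the actual lagged VIX and V-KOSPI returns over the 2008-2014 training window.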
Maintenance and failure prevention through anomaly detection of ICT infrastructure are becoming important. System monitoring data is multidimensional time series data, which makes it difficult to consider the characteristics of multidimensional data and of time series data at the same time. When dealing with multidimensional data, correlations between variables should be considered, and existing probability- and linear-based and distance-based methods degrade because of the curse of dimensionality. In addition, time series data is commonly preprocessed with sliding windows and time series decomposition for autocorrelation analysis; these techniques further increase the dimensionality of the data, so they need to be supplemented. Anomaly detection is an old research field in which statistical methods and regression analysis were used early on; currently, there are active studies applying machine learning and artificial neural network technology. Statistically based methods are difficult to apply when data is non-homogeneous and do not detect local outliers well. The regression analysis method learns a regression formula based on parametric statistics and detects abnormality by comparing predicted and actual values. Its performance degrades when the model is not solid or when the data contains noise or outliers, imposing the restriction that training data be free of noise and outliers. An autoencoder using artificial neural networks is trained to produce output as similar as possible to its input. It has many advantages over existing probabilistic and linear models, cluster analysis, and supervised learning, and can be applied to data that does not satisfy probability distribution or linearity assumptions.
In addition, it can learn in an unsupervised manner, without labeled training data. However, autoencoders have limitations in identifying local outliers in multidimensional data, and the characteristics of time series data greatly increase the dimensionality. In this study, we propose a Conditional Multimodal Autoencoder (CMAE) that improves anomaly detection performance by considering local outliers and time series characteristics. First, we applied a Multimodal Autoencoder (MAE) to address the limitations in identifying local outliers in multidimensional data. Multimodal models are commonly used to learn different types of inputs, such as voice and images; the different modals share the bottleneck of the Autoencoder and thereby learn correlations. In addition, a Conditional Autoencoder (CAE) was used to learn the characteristics of time series data effectively without increasing the dimensionality. Conditional inputs are usually category variables, but in this study time was used as the condition so that periodicity could be learned. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The reconstruction performance over 41 variables was checked for the proposed and comparison models. Reconstruction performance differs by variable: the loss values for the Memory, Disk, and Network modals are small in all three Autoencoder models, indicating normal reconstruction. The Process modal showed no significant difference across the three models, and the CPU modal showed excellent performance in CMAE. ROC curves were prepared to evaluate anomaly detection performance, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators, performance ranked in the order CMAE, MAE, and UAE.
In particular, the recall was 0.9828 for CMAE, confirming that it detects almost all abnormalities. The accuracy of the model was also improved, to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has an additional advantage beyond the performance improvement. Techniques such as time series decomposition and sliding windows have the disadvantage of requiring extra procedures to manage, and the dimensional increase they cause can slow inference. The proposed model is therefore easy to apply to practical tasks with respect to inference speed and model management.
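The core scoring rule behind autoencoder-based anomaly detection — flag observations whose reconstruction error exceeds a threshold learned from normal data — can be sketched without a neural network. The snippet below uses a linear (PCA-style) encoder/decoder on synthetic 41-dimensional monitoring data as a stand-in; the data and the choice of bottleneck size are assumptions, and the paper's CMAE replaces this linear bottleneck with a conditional multimodal neural autoencoder:

```python
import numpy as np

def fit_linear_autoencoder(X, k):
    """Fit a k-dimensional linear bottleneck (principal directions via SVD)."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:k]

def reconstruction_error(X, mu, comps):
    """Per-row L2 distance between input and its encode/decode round trip."""
    Z = (X - mu) @ comps.T        # encode into the bottleneck
    X_hat = Z @ comps + mu        # decode back to 41 dimensions
    return np.linalg.norm(X - X_hat, axis=1)

rng = np.random.default_rng(1)
# Synthetic "normal" monitoring data: 41 correlated metrics
# (a shared load factor plus small per-metric noise).
normal = rng.normal(0, 0.1, size=(200, 41)) + rng.normal(0, 1, size=(200, 1))
mu, comps = fit_linear_autoencoder(normal, k=3)

# Threshold = 99th percentile of reconstruction error on normal data.
threshold = np.percentile(reconstruction_error(normal, mu, comps), 99)

# An anomaly that breaks the learned correlation structure.
anomaly = rng.normal(0, 1.0, size=(1, 41))
is_anomaly = reconstruction_error(anomaly, mu, comps)[0] > threshold
```

Because the model is fit only on normal data, anything that violates the learned inter-metric correlations reconstructs poorly and scores above the threshold, which is exactly the property the ROC/AUC comparison in the abstract measures.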
Purpose : Brain stereotactic radiosurgery can non-invasively treat diseases that carry high complication rates under surgical operation. However, because it uses radiation, it may be accompanied by radiation-induced side effects, as in fractionated radiation therapy. The effects of Coplanar Volumetric Modulated Arc Therapy (C-VMAT) and Non-Coplanar Volumetric Modulated Arc Therapy (NC-VMAT) on surrounding normal tissue have been analyzed for fractionated radiation therapy sites such as the head and neck in order to reduce such side effects, but they have not been analyzed for brain stereotactic radiosurgery. In this study, we evaluated the usefulness of NC-VMAT by comparing and analyzing C-VMAT and NC-VMAT in patients who underwent brain stereotactic radiosurgery. Methods and materials : Thirteen brain stereotactic radiosurgery treatment plans were established with both C-VMAT and NC-VMAT. The planning target volume ranged from a minimum of 0.78 cc to a maximum of 12.26 cc, and prescription doses were between 15 and 24 Gy. The treatment machine was a TrueBeam STx (Varian Medical Systems, USA), and the energy used was 6 MV flattening filter free (6FFF) X-rays. The C-VMAT plans used a half 2-arc or full 2-arc arrangement, and the NC-VMAT plans used 3 to 7 arcs of 40 to 190 degrees, with couch angles planned at 3 to 7 positions. Results : The mean value of the maximum dose was
This study surveyed direct rice seeding in the Honam province of Korea to identify problems and seek countermeasures for weed control in direct-seeded rice. The total area of direct rice seeding in the south-western part of Korea (Chonbuk, Chonnam, and Chungnam) was 1650.8 ha in 1992 (732.1 ha direct-seeded in dry fields and 918.7 ha direct-seeded in flooded fields). The findings are summarized as follows. 1. For direct seeding in dry fields, butachlor EC and G at 3 to 5 DAS was most often selected by farmers to control weeds; benthiocarb or chlornitrofen was also used in a few cases. At 10 to 14 DAS, just before rice emergence, a tank mixture of butachlor EC and paraquat was applied by some farmers. At 35 to 40 days, after flooding, a mixture of sulfonylurea derivatives was applied sequentially. Surviving weeds, including barnyardgrass, were finally controlled by foliar application of a bentazone+quinclorac WP mixture. 2. For direct seeding in flooded fields, weed control was mostly unsuccessful, partly due to wrong herbicide selection and missed optimum application times. The three relatively successful weed control cases in the survey are summarized as follows. 1) Oxadiazon EC, butachlor, or benthiocarb was applied just after puddling (5 to 7 days before seeding); then a mixture of bentazone+quinclorac WP or sulfonylurea derivatives was applied sequentially to control remaining weeds at 20 days after seeding. 2) Mixtures of bensulfuronmethyl+dimepiperate G, pyrazosulfuronethyl+molinate G, or bensulfuronmethyl+mefenacet+dymron G were applied at 11 days after puddling, when barnyardgrass was at the 2.0-leaf stage. Phytotoxicity was not found with the bensulfuronmethyl+dimepiperate G mixture; it appeared in the other two cases but disappeared later.
3) Mixtures of bensulfuronmethyl+quinclorac G, pyrazosulfuronethyl+quinclorac G, or bentazone+quinclorac G were applied 18 to 20 days after puddling, when barnyardgrass was within the 3.0-leaf stage. These showed good control of both annuals and perennials without phytotoxicity. By contrast, other sulfonylurea derivatives, such as mid-period herbicides, showed poor control of barnyardgrass, so a sequential treatment with the bentazone+quinclorac WP mixture was required. 3. The herbicidal characteristics and optimum application times of 45 herbicides registered in Korea were analyzed to find a new substitute for the quinclorac mixtures, which show excellent control of barnyardgrass at the 3-leaf stage or older. The analysis revealed that 70% of the herbicides were for preemergence use and the rest were post-period herbicides. Most farmers prefer to apply herbicide once rice seedlings are completely rooted, at which time barnyardgrass is at the 2.5- to 3.0-leaf stage; a herbicide with a long optimum application window is therefore required. In this study, 6 mid-period herbicides among the sulfonylurea derivatives and 2 quinclorac mixtures were selected, and their weed-control spectra were evaluated at different leaf stages of barnyardgrass under both soil application in flooded conditions and foliar application in dry paddy fields. The order of weed-control spectrum, from strongest to weakest, was: bentazone+quinclorac WP > bentazone+quinclorac G > bensulfuronmethyl+quinclorac G > pyrazosulfuronethyl+quinclorac G > pyrazosulfuronethyl+molinate G > bensulfuronmethyl+mefenacet+dymron G > bensulfuronmethyl+mefenacet G > bensulfuronmethyl+benthiocarb G. These results coincided with those of the survey. In conclusion, there is no proper substitute for the quinclorac mixtures, which can control barnyardgrass at the 3.0-leaf stage or even older; quinclorac should therefore be supplied continuously to farmers in order to anchor direct rice seeding in Korea.
The author suggests the following to establish direct rice seeding technology effectively and quickly: 1) A tentatively named "research committee for direct rice seeding", composed of farmers, researchers, and government, should be established for effective cooperation. 2) Development of a precise direct rice seeding machine for both dry and flooded paddy fields, workable regardless of seed condition and variety. 3) Study on protecting rice seeds and seedlings from sparrows. 4) Systematic studies of weed control techniques in direct-seeded rice to standardize herbicide application. 5) Studies on farmland reformation, techniques of precise land preparation, and direct rice seeding using aircraft.
Purpose : To evaluate the usefulness and reproducibility of
Recently, channels like social media and SNS create an enormous amount of data. Among all kinds of data, the portion of unstructured data, represented as text, has increased geometrically. Because it is difficult to read all text data, it is important to access it rapidly and grasp its key points, and many studies on text summarization for handling and using tremendous amounts of text data have been proposed. In particular, many summarization methods using machine learning and artificial intelligence algorithms have been proposed lately to generate summaries objectively and effectively, so-called "automatic summarization". However, most text summarization methods proposed to date construct summaries based on the frequency of contents in the original documents. Such summaries struggle to contain low-weight subjects that are mentioned less often in the original text. If a summary includes only the major subjects, bias occurs and information is lost, making it hard to ascertain every subject the documents contain. To avoid this bias, one can summarize with an even balance across the topics of a document so that every subject is represented, but an unbalanced distribution among subjects still remains. To retain the balance of subjects in a summary, it is necessary to consider the proportion of every subject the documents originally have and to allocate portions to subjects equally, so that even sentences of minor subjects are sufficiently included. In this study, we propose a "subject-balanced" text summarization method that secures balance between all subjects and minimizes the omission of low-frequency subjects. For subject-balanced summaries, we use two summary evaluation metrics, "completeness" and "succinctness".
Completeness is the property that a summary should fully cover the contents of the original documents, and succinctness means the summary contains minimal internal duplication. The proposed method has three phases. The first phase constructs subject term dictionaries. Topic modeling is used to calculate topic-term weights, which indicate how strongly each term is related to each topic. From these weights, highly related terms for every topic can be identified, and the subjects of documents can be found from topics composed of terms with similar meanings. A few terms that represent each subject well are then selected; in this method they are called "seed terms". However, these terms are too few to describe each subject fully, so enough terms similar to the seed terms are needed for a well-constructed subject dictionary. Word2Vec is used for this word expansion, finding terms similar to the seed terms: word vectors are created by Word2Vec modeling, and the similarity between any two terms is derived from their vectors using cosine similarity. The higher the cosine similarity between two terms, the stronger their relationship. Terms with high similarity to the seed terms of each subject are selected, and after filtering these expanded terms the subject dictionary is finally constructed. The next phase allocates a subject to every sentence in the original documents. To grasp the contents of all sentences, frequency analysis is first conducted with the terms composing the subject dictionaries. TF-IDF weights for each subject are then calculated, indicating how much each sentence explains each subject. Because TF-IDF weights can grow without bound, the weights for every subject in each sentence are normalized to values between 0 and 1.
Then, by allocating to each sentence the subject with the maximum TF-IDF weight across all subjects, sentence groups are finally constructed for each subject. The last phase is summary generation. Sen2Vec is used to compute the similarity between subject sentences, forming a similarity matrix, and through repeated sentence selection a summary is generated that fully covers the contents of the original documents while minimizing internal duplication. For evaluation of the proposed method, 50,000 TripAdvisor reviews were used to construct the subject dictionaries and 23,087 reviews were used to generate summaries. A comparison between the proposed method's summaries and frequency-based summaries verified that summaries from the proposed method better retain the balance of all subjects the documents originally have.
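The allocate-then-balance step can be illustrated with a toy version: score each sentence against small subject dictionaries by term overlap, group sentences by their dominant subject, and select round-robin across subjects so minor subjects still reach the summary. The dictionaries and sentences below are invented stand-ins; the paper builds its dictionaries with topic modeling plus Word2Vec expansion and scores sentences with normalized TF-IDF rather than raw overlap:

```python
# Toy subject dictionaries (the paper derives these from seed terms
# expanded with Word2Vec).
subject_terms = {
    "room":    {"room", "bed", "clean"},
    "food":    {"breakfast", "dinner", "menu"},
    "service": {"staff", "friendly", "desk"},
}

sentences = [
    "the room was clean and the bed was huge",
    "breakfast menu had great variety",
    "staff at the desk were friendly",
    "another clean room with a view",
    "dinner was average",
]

def best_subject(sentence):
    """Assign the subject whose dictionary overlaps the sentence most."""
    words = set(sentence.split())
    return max(subject_terms, key=lambda s: len(words & subject_terms[s]))

# Group sentences by their dominant subject.
groups = {s: [] for s in subject_terms}
for sent in sentences:
    groups[best_subject(sent)].append(sent)

def balanced_summary(groups, n):
    """Pick up to n sentences, cycling over subjects (round-robin).

    Note: consumes the group lists as it selects.
    """
    out = []
    while len(out) < n and any(groups.values()):
        for subj in list(groups):
            if groups[subj] and len(out) < n:
                out.append(groups[subj].pop(0))
    return out

summary = balanced_summary(groups, 3)
```

With three sentences requested, the round-robin pass takes one sentence from each subject, so the minor "service" and "food" subjects are represented even though "room" sentences dominate the source, which is the bias the proposed method is designed to avoid.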
The wall shear stress in the vicinity of end-to-end anastomoses under steady flow conditions was measured using a flush-mounted hot-film anemometer (FMHFA) probe. The experimental measurements were in good agreement with numerical results except in flows with low Reynolds numbers. The wall shear stress increased proximal to the anastomosis in flow from the Penrose tubing (simulating an artery) to the PTFE graft. In flow from the PTFE graft to the Penrose tubing, low wall shear stress was observed distal to the anastomosis. Abnormal distributions of wall shear stress in the vicinity of the anastomosis, resulting from the compliance mismatch between the graft and the host artery, might be an important factor in anastomotic neointimal fibrous hyperplasia (ANFH) formation and graft failure. The present study suggests a correlation between regions of low wall shear stress and the development of ANFH in end-to-end anastomoses.
Air pressure decay (APD) rate and ultrafiltration rate (UFR) tests were performed on new and saline-rinsed dialyzers as well as those reused in patients several times. C-DAK 4000 (Cordis Dow) and CF IS-11 (Baxter Travenol) reused dialyzers obtained from the dialysis clinic were used in the present study. The new dialyzers exhibited a relatively flat APD, whereas saline-rinsed and reused dialyzers showed a considerable amount of decay. C-DAK dialyzers had a larger APD (11.70