• Title/Summary/Keyword: SELECT method


Protoplast Fusion of Nicotiana glauca and Solanum tuberosum Using Selectable Marker Genes (표식유전자를 이용한 담배와 감자의 원형질체 융합)

  • Park, Tae-Eun;Chung, Hae-Joun
    • The Journal of Natural Sciences
    • /
    • v.4
    • /
    • pp.103-142
    • /
    • 1991
  • These studies were carried out to select somatic hybrids using selectable marker genes of Nicotiana glauca transformed by the NPTII gene and Solanum tuberosum transformed by T-DNA, and to study the characteristics of the transformants. The results are summarized as follows. 1. Crown gall tumors and hairy roots were formed on potato tuber discs infected by A. tumefaciens Ach5 and A. rhizogenes ATCC15834. These tumors and roots could be grown on phytohormone-free media. 2. Callus formation from hairy roots was promoted on medium containing 2,4-D 2 mg/l with casein hydrolysate 1 g/l. 3. The survival ratio of crown gall tumor callus derived from potato increased on medium containing 0.5-2.0 mg/l activated charcoal owing to its preventive effect; on the other hand, hairy roots became necrotic on the same medium. 4. Callus derived from hairy roots grew well within a short time in suspension culture on liquid medium containing 2,4-D 2 mg/l and casein hydrolysate 1 g/l. 5. The binary vector pGA643 was mobilized from E. coli MC1000 into wild-type Agrobacterium tumefaciens Ach5, A. tumefaciens $A_4T$ and disarmed A. tumefaciens LBA4404 using a triparental mating method with E. coli HB101/pRK2013. Transconjugants were obtained on minimal media containing tetracycline and kanamycin. The pGA643 vectors were confirmed by electrophoresis on 0.7% agarose gel. 6. Kanamycin-resistant calli were selected on media supplemented with 2,4-D 0.5 mg/l and kanamycin 100 µg/ml after co-cultivating tobacco stem explants with A. tumefaciens LBA4404/pGA643, and the selected calli were propagated on the same medium. 7. Multiple shoots were regenerated from kanamycin-resistant calli on MS medium containing BA 2 mg/l. 8. Leaf segments of transformed shoots were able to grow vigorously on medium supplemented with a high concentration of kanamycin (1000 µg/ml). 9. Kanamycin-resistant shoots rooted and elongated on medium containing kanamycin 100 µg/ml, but normal shoots did not. 10. For the production of protoplasts from potato calli transformed by T-DNA and mesophyll tissue transformed by the NPTII gene, the former were isolated in an enzyme mixture of 2.0% cellulase Onozuka R-10, 1.0% driselase, 1.0% macerozyme, and 0.5 M mannitol, while the latter were isolated in an enzyme mixture of 1.0% cellulase Onozuka R-10, 0.3% macerozyme, and 0.7 M mannitol. 11. The optimal concentration of mannitol in the enzyme mixture for high protoplast yield was 0.8 M for both transformed tobacco mesophyll and potato callus. The viabilities of the protoplasts were above 90% in both cases. 12. Both tobacco mesophyll and potato callus protoplasts were fused using PEG solution. Cell walls were regenerated on hormone-free media supplemented with kanamycin after 5 days, and colonies were observed after 4 weeks of culture.


Suggestion of Urban Regeneration Type Recommendation System Based on Local Characteristics Using Text Mining (텍스트 마이닝을 활용한 지역 특성 기반 도시재생 유형 추천 시스템 제안)

  • Kim, Ikjun;Lee, Junho;Kim, Hyomin;Kang, Juyoung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.149-169
    • /
    • 2020
  • "The Urban Renewal New Deal project", one of the government's major national projects, is about developing underdeveloped areas by investing 50 trillion won in 100 locations on the first year and 500 over the next four years. This project is drawing keen attention from the media and local governments. However, the project model which fails to reflect the original characteristics of the area as it divides project area into five categories: "Our Neighborhood Restoration, Housing Maintenance Support Type, General Neighborhood Type, Central Urban Type, and Economic Base Type," According to keywords for successful urban regeneration in Korea, "resident participation," "regional specialization," "ministerial cooperation" and "public-private cooperation", when local governments propose urban regeneration projects to the government, they can see that it is most important to accurately understand the characteristics of the city and push ahead with the projects in a way that suits the characteristics of the city with the help of local residents and private companies. In addition, considering the gentrification problem, which is one of the side effects of urban regeneration projects, it is important to select and implement urban regeneration types suitable for the characteristics of the area. In order to supplement the limitations of the 'Urban Regeneration New Deal Project' methodology, this study aims to propose a system that recommends urban regeneration types suitable for urban regeneration sites by utilizing various machine learning algorithms, referring to the urban regeneration types of the '2025 Seoul Metropolitan Government Urban Regeneration Strategy Plan' promoted based on regional characteristics. There are four types of urban regeneration in Seoul: "Low-use Low-Level Development, Abandonment, Deteriorated Housing, and Specialization of Historical and Cultural Resources" (Shon and Park, 2017). In order to identify regional characteristics, approximately 100,000 text data were collected for 22 regions where the project was carried out for a total of four types of urban regeneration. Using the collected data, we drew key keywords for each region according to the type of urban regeneration and conducted topic modeling to explore whether there were differences between types. As a result, it was confirmed that a number of topics related to real estate and economy appeared in old residential areas, and in the case of declining and underdeveloped areas, topics reflecting the characteristics of areas where industrial activities were active in the past appeared. In the case of the historical and cultural resource area, since it is an area that contains traces of the past, many keywords related to the government appeared. Therefore, it was possible to confirm political topics and cultural topics resulting from various events. Finally, in the case of low-use and under-developed areas, many topics on real estate and accessibility are emerging, so accessibility is good. It mainly had the characteristics of a region where development is planned or is likely to be developed. Furthermore, a model was implemented that proposes urban regeneration types tailored to regional characteristics for regions other than Seoul. Machine learning technology was used to implement the model, and training data and test data were randomly extracted at an 8:2 ratio and used. 
In order to compare the performance between various models, the input variables are set in two ways: Count Vector and TF-IDF Vector, and as Classifier, there are 5 types of SVM (Support Vector Machine), Decision Tree, Random Forest, Logistic Regression, and Gradient Boosting. By applying it, performance comparison for a total of 10 models was conducted. The model with the highest performance was the Gradient Boosting method using TF-IDF Vector input data, and the accuracy was 97%. Therefore, the recommendation system proposed in this study is expected to recommend urban regeneration types based on the regional characteristics of new business sites in the process of carrying out urban regeneration projects."
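The ten-model comparison described in this abstract (two text representations crossed with five classifiers on an 8:2 split) can be sketched roughly as follows. This is not the authors' code; the scikit-learn components, data layout, and default hyperparameters are illustrative assumptions only.

```python
from sklearn.base import clone
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def compare_models(texts, labels):
    """texts: one document per region; labels: one of the four regeneration types."""
    X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, test_size=0.2, random_state=42)
    vectorizers = {"count": CountVectorizer(), "tfidf": TfidfVectorizer()}
    classifiers = {
        "svm": LinearSVC(),
        "decision_tree": DecisionTreeClassifier(),
        "random_forest": RandomForestClassifier(),
        "logistic_regression": LogisticRegression(max_iter=1000),
        "gradient_boosting": GradientBoostingClassifier(),
    }
    results = {}
    for vec_name, vec in vectorizers.items():
        for clf_name, clf in classifiers.items():
            pipe = make_pipeline(clone(vec), clone(clf))  # vectorize, then classify
            pipe.fit(X_tr, y_tr)
            results[(vec_name, clf_name)] = accuracy_score(y_te, pipe.predict(X_te))
    return results  # the paper reports TF-IDF + Gradient Boosting as best (97% accuracy)
```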

A Methodology to Develop a Curriculum based on National Competency Standards - Focused on Methodology for Gap Analysis - (국가직무능력표준(NCS)에 근거한 조경분야 교육과정 개발 방법론 - 갭분석을 중심으로 -)

  • Byeon, Jae-Sang;Ahn, Seong-Ro;Shin, Sang-Hyun
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.43 no.1
    • /
    • pp.40-53
    • /
    • 2015
  • To train manpower that meets the requirements of the industrial field, the introduction of the National Qualification Framework (hereinafter referred to as NQF), centered on National Competency Standards (hereinafter referred to as NCS), was decided in 2001 under the Office for Government Policy Coordination. For landscape architecture in the construction field, the "NCS - Landscape Architecture" pilot was developed in 2008 and test-operated for 3 years starting in 2009. In particular, as the 'realization of a competence-based society, not by educational background' was adopted as one of the major government projects of the Park Geun-hye government (inaugurated in 2013), the NCS system was constructed on a nationwide scale as a detailed method for realizing this goal. However, because the NCS developed by the nation specifies ideal job-performing abilities, it has weaknesses: it cannot reflect actual operational problems such as differences in student level between universities, difficulties in securing equipment and professors, and limits on the number of current curricula. For a soft landing into a practical curriculum, the gap between the current curriculum and the NCS must first be clearly analyzed. Gap analysis is the initial-stage methodology for reorganizing an existing curriculum into an NCS-based curriculum: based on the ability unit elements and performance standards of each NCS ability unit, the discrepancy with, or level of coincidence of, the existing departmental curriculum is rated and analyzed on a Likert scale of 1 to 5. Thus, universities wishing to operate NCS in the future can, by measuring the level of coincidence and the gap between the current university curriculum and NCS, secure a basic tool to verify the applicability of NCS and the effectiveness of further development and operation. The advantages of reorganizing the curriculum through gap analysis are, first, that government financial support projects can be linked to a quantitative index of the NCS adoption rate for each department, and, second, that an objective standard is provided on what is insufficient or sufficient when reorganizing into an NCS-based curriculum. In other words, when introducing the relevant NCS subdivision, the insufficient ability units and ability unit elements can be extracted, and the supplementary matters for each ability unit element in each existing subject can be extracted at the same time, providing direction for detailed class programs and the opening of basic subjects. The Ministry of Education and the Ministry of Employment and Labor must gather people from industry to actively develop and supply NCS standards at a practical level so that the requirements of the industrial field are systematically reflected in education, training, and qualification, and universities wishing to apply NCS must reorganize their curricula to connect work and qualification based on NCS. To enable this, universities must consider the relevant industrial prospects and the relation between the faculty resources within the university and local industry in order to clearly select the NCS subdivision to be applied. Afterwards, gap analysis must be used for the NCS-based curriculum reorganization to establish the direction of the reorganization more objectively and rationally, in order to participate efficiently in the process-evaluation type qualification system.
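A minimal sketch of the Likert-scale gap computation described in this abstract is given below; the rating layout, reviewer counts, and the 3.0 threshold are illustrative assumptions, not the paper's exact procedure.

```python
# Each NCS ability unit element is rated 1-5 for how well the existing courses
# cover it; a low mean "coincidence" score flags a gap that needs supplementation.
from statistics import mean

ratings = {  # ability unit element -> Likert ratings from faculty reviewers (hypothetical)
    "landscape planting design": [4, 5, 4],
    "landscape construction supervision": [2, 3, 2],
}

def gap_report(ratings, threshold=3.0):
    report = {}
    for element, scores in ratings.items():
        coincidence = mean(scores)                 # level of coincidence with current curriculum
        report[element] = {
            "coincidence": coincidence,
            "gap": coincidence < threshold,        # True -> ability unit element needs new content
        }
    return report

print(gap_report(ratings))
```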

A Mobile Landmarks Guide : Outdoor Augmented Reality based on LOD and Contextual Device (모바일 랜드마크 가이드 : LOD와 문맥적 장치 기반의 실외 증강현실)

  • Zhao, Bi-Cheng;Rosli, Ahmad Nurzid;Jang, Chol-Hee;Lee, Kee-Sung;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.1
    • /
    • pp.1-21
    • /
    • 2012
  • In recent years, the mobile phone has experienced an extremely fast evolution. It is equipped with high-quality color displays, high-resolution cameras, and real-time accelerated 3D graphics, and other features include a GPS sensor and a digital compass. This evolution significantly helps application developers use the power of smart-phones to create a rich environment that offers a wide range of services and exciting possibilities. In outdoor mobile AR research to date, there are many popular location-based AR services, such as Layar and Wikitude. These systems have a major limitation: the AR contents are hardly ever overlaid precisely on the real target. Another line of research is context-based AR services using image recognition and tracking, in which the AR contents are precisely overlaid on the real target; however, real-time performance is restricted by the retrieval time, and such systems are hard to implement over a large-scale area. In our work, we combine the advantages of location-based AR with those of context-based AR. The system can easily find surrounding landmarks first and then perform recognition and tracking on them. The proposed system mainly consists of two major parts: the landmark browsing module and the annotation module. In the landmark browsing module, the user can view augmented virtual information (information media), such as text, pictures and video, on their smart-phone viewfinder when they point the smart-phone at a certain building or landmark. For this, a landmark recognition technique is applied. SURF point-based features are used in the matching process due to their robustness. To ensure the image retrieval and matching processes are fast enough for real-time tracking, we exploit the contextual device information (GPS and digital compass). This is necessary to select the nearest landmarks in the pointed orientation from the database; the queried image is only matched against this selected data, so the matching speed is significantly increased. The second part is the annotation module. Instead of viewing only the augmented information media, the user can create virtual annotations based on linked data. Full knowledge about the landmark is not required; the user can simply look for the appropriate topic by searching for it with a keyword in linked data. This helps the system find the target URI in order to generate correct AR contents. On the other hand, in order to recognize target landmarks, images of the selected building or landmark are captured from different angles and distances. This procedure resembles building a connection between the real building and the virtual information existing in the Linked Open Data. In our experiments, the search range in the database is reduced by clustering images into groups according to their coordinates. A grid-based clustering method and user location information are used to restrict the retrieval range. Compared to existing research using clustering and GPS information, where the retrieval time is around 70~80 ms, experimental results show that our approach reduces the retrieval time to around 18~20 ms on average. Therefore the total processing time is reduced from 490~540 ms to 438~480 ms. The performance improvement will be more obvious as the database grows. This demonstrates that the proposed system is efficient and robust in many cases.
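The two-stage idea described here (restrict candidate landmarks by GPS distance and compass bearing, then run local-feature matching only against that reduced set) could be sketched as follows. This is not the authors' implementation: ORB is used as a freely available stand-in for SURF, which requires a non-free OpenCV build, and the flat-earth distance/bearing math and thresholds are simplifying assumptions.

```python
import math
import cv2

def candidate_landmarks(landmarks, user_lat, user_lon, heading_deg,
                        max_dist_m=300.0, max_angle_deg=45.0):
    """landmarks: list of dicts with 'lat', 'lon', and precomputed 'descriptors'."""
    selected = []
    for lm in landmarks:
        # approximate metric offsets from the user (flat-earth approximation)
        dx = (lm["lon"] - user_lon) * 111_320 * math.cos(math.radians(user_lat))
        dy = (lm["lat"] - user_lat) * 110_540
        dist = math.hypot(dx, dy)
        bearing = math.degrees(math.atan2(dx, dy)) % 360          # 0 deg = north
        diff = abs((bearing - heading_deg + 180) % 360 - 180)     # angular difference
        if dist <= max_dist_m and diff <= max_angle_deg:
            selected.append(lm)                                   # nearby and roughly pointed at
    return selected

def match_query(query_gray, candidates):
    """Score each pre-filtered landmark by number of feature matches; return the best."""
    orb = cv2.ORB_create()
    _, q_desc = orb.detectAndCompute(query_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return max(candidates,
               key=lambda lm: len(matcher.match(q_desc, lm["descriptors"])),
               default=None)
```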

A study on the prediction of korean NPL market return (한국 NPL시장 수익률 예측에 관한 연구)

  • Lee, Hyeon Su;Jeong, Seung Hwan;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.123-139
    • /
    • 2019
  • The Korean NPL market was formed by the government and foreign capital shortly after the 1997 IMF crisis. However, the history of this market is short, as bad debt began to increase again after the global financial crisis in 2009 due to the real economic recession. NPL has become a major investment in the market in recent years, as investment capital from the domestic capital market began to enter the NPL market in earnest. Although the domestic NPL market has received considerable attention due to its recent overheating, research on the NPL market remains scarce, since the history of capital market investment in the domestic NPL market is short. In addition, decision-making through more scientific and systematic analysis is required because of declining profitability and price fluctuations driven by fluctuations in the real estate business. In this study, we propose a prediction model that can determine whether the benchmark yield is achieved, using NPL market related data in accordance with market demand. In order to build the model, we used Korean NPL data from December 2013 to December 2017, covering about 4 years. The total number of items was 2,291. As independent variables, only the variables related to the dependent variable were selected from the 11 variables that indicate the characteristics of the real estate. To select the variables, one-to-one t-tests, stepwise logistic regression, and decision trees were performed, and seven independent variables were chosen: purchase year, SPC (Special Purpose Company), municipality, appraisal value, purchase cost, OPB (Outstanding Principal Balance), and HP (Holding Period). The dependent variable is a bivariate variable that indicates whether the benchmark rate of return is reached. This is because the accuracy of a model predicting a binomial variable is higher than that of a model predicting a continuous variable, and this accuracy is directly related to the effectiveness of the model. In addition, for a special purpose company, whether or not to purchase the property is the main concern, so knowing whether a certain level of return will be achieved is enough to make a decision. For the dependent variable, we constructed and compared predictive models while adjusting the threshold value, to ascertain whether 12%, the standard rate of return used in the industry, is a meaningful reference value. As a result, the average hit ratio of the predictive model constructed using the dependent variable calculated with the 12% standard rate of return was the best, at 64.60%. In order to propose an optimal prediction model based on the determined dependent variable and 7 independent variables, we constructed prediction models by applying five methodologies, namely discriminant analysis, logistic regression analysis, decision tree, artificial neural network, and a genetic algorithm linear model, and compared them. To do this, 10 sets of training data and testing data were extracted using a 10-fold validation method. After building the models using these data, the hit ratio of each set was averaged and the performance was compared. As a result, the average hit ratios of the prediction models constructed using discriminant analysis, the logistic regression model, the decision tree, the artificial neural network, and the genetic algorithm linear model were 64.40%, 65.12%, 63.54%, 67.40%, and 60.51%, respectively.
It was confirmed that the model using the artificial neural network is the best. Through this study, it is shown that it is effective to utilize the 7 independent variables and an artificial neural network prediction model in the future NPL market. The proposed model predicts in advance whether the 12% return on new items will be achieved, which will help special purpose companies make investment decisions. Furthermore, we anticipate that the NPL market will become more liquid as transactions proceed at appropriate prices.
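The five-model, 10-fold comparison reported in this abstract could be set up roughly as below. This is a sketch under assumptions, not the authors' code: the genetic-algorithm linear model is replaced by a plain linear SVM stand-in, and the network size and other hyperparameters are placeholders.

```python
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import LinearSVC

def compare_hit_ratios(X, y):
    """X: the 7 independent variables per NPL item; y: 1 if the 12% benchmark return is reached."""
    models = {
        "discriminant_analysis": LinearDiscriminantAnalysis(),
        "logistic_regression": LogisticRegression(max_iter=1000),
        "decision_tree": DecisionTreeClassifier(),
        "artificial_neural_network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000),
        "linear_svm_standin": LinearSVC(),  # placeholder for the paper's GA linear model
    }
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    # average the per-fold hit ratios (accuracy) for each methodology
    return {name: cross_val_score(model, X, y, cv=cv, scoring="accuracy").mean()
            for name, model in models.items()}
```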

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.63-86
    • /
    • 2010
  • Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensory networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet the danger of personal information overflow is increasing, because the data retrieved by the sensors usually contain private information. Various technical characteristics of context-aware applications have more troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns, such as the unrestricted availability of context information, have also increased. Those privacy concerns are consistently regarded as a critical issue facing context-aware personalized service success. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of a need for a better understanding of information privacy concepts. In particular, the factors of information privacy need to be revised according to the characteristics of new technologies. However, previous information privacy factors of context-aware applications have at least two shortcomings. First, there has been little overview of the technology characteristics of context-aware computing. Existing studies have only focused on a small subset of the technical characteristics of context-aware computing. Therefore, there has not been a mutually exclusive set of factors that uniquely and completely describe information privacy for context-aware applications. Second, user surveys have been widely used to identify factors of information privacy in most studies, despite the limitations of users' knowledge of and experience with context-aware computing technology. To date, since context-aware services have not been widely deployed on a commercial scale, only very few people have prior experience with context-aware personalized services. It is difficult to build users' knowledge about context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animation, etc. Nevertheless, conducting a survey under the assumption that the participants have sufficient experience or understanding of the technologies shown in the survey may not be entirely valid. Moreover, some surveys are based solely on simplifying and hence unrealistic assumptions (e.g., they only consider location information as context data). A better understanding of information privacy concern in context-aware personalized services is highly needed. Hence, the purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-order list of information privacy concern factors. We consider overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from experts and to produce a rank-order list. It therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priority.
An international panel of researchers and practitioners with expertise in the privacy and context-aware system fields was involved in our research. The Delphi round format faithfully follows the procedure for the Delphi study proposed by Okoli and Pawlowski. This involves three general rounds: (1) brainstorming for important factors; (2) narrowing down the original list to the most important ones; and (3) ranking the list of important factors. For this round only, experts were treated as individuals, not panels. Adapting Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of exclusive factors for information privacy concern in context-aware personalized services. In the first round, the respondents were asked to provide at least five main factors for the most appropriate understanding of the information privacy concern. To do so, some of the main factors found in the literature were presented to the participants. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors. Respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factors to determine the final sub-factors from the candidate factors. The candidate sub-factors were found from the literature survey, and the final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to obtain the final result. While analyzing the data, we focused on group consensus rather than individual insistence. To do so, a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, experts reported that context data collection and a highly identifiable level of identical data are the most important main factor and sub-factor, respectively. Additional important sub-factors included diverse types of context data collected, tracking and recording functionalities, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors have the following differences compared to those proposed in other studies. First, the concern factors differ from existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information. Our study helped to clarify these sometimes vague issues by determining which privacy concern issues are viable based on specific technical characteristics in context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected specific characteristics that had a higher potential to increase users' privacy concerns. Second, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it correlated the level of importance with professionals' opinions as to what extent users have privacy concerns.
The reason a traditional questionnaire method was not selected is the absolute lack of user understanding of and experience with the new technology underlying context-aware personalized services. For understanding users' privacy concerns, professionals in the Delphi questionnaire process selected context data collection, tracking and recording, and sensory networks as the most important factors among the technological characteristics of context-aware personalized services. In the creation of context-aware personalized services, this study demonstrates the importance and relevance of determining an optimal methodology, and of deciding which technologies are needed and in what sequence, to acquire particular types of users' context information. Most studies focus on which services and systems should be provided and developed by utilizing context information, presupposing the development of context-aware technology. However, the results of this study show that, in terms of users' privacy, it is necessary to pay greater attention to the activities that acquire context information. Looking at the results of the sub-factor evaluation, additional studies would be necessary on approaches to reducing users' privacy concerns toward technological characteristics such as a highly identifiable level of identical data, diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked at the next highest level of importance after input is context-aware service delivery, which is related to output. The results show that the delivery and display of services to users in context-aware personalized services, oriented toward the anywhere-anytime-any device concept, are regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help increase the service success rate and, hopefully, user acceptance of those services. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
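The concordance analysis mentioned in this abstract is commonly computed as Kendall's coefficient of concordance (W), which measures how consistently the expert panel ranks the factors (1 = perfect agreement, 0 = none). The sketch below assumes the no-ties form of the statistic and uses a purely illustrative ranking matrix, not the study's data.

```python
import numpy as np

def kendalls_w(ranks):
    """ranks: (n_experts, n_factors) array; each row is one expert's ranking of the factors."""
    ranks = np.asarray(ranks, dtype=float)
    m, n = ranks.shape                                   # m experts ranking n factors
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()      # spread of the column rank sums
    return 12.0 * s / (m ** 2 * (n ** 3 - n))            # W = 12S / (m^2 (n^3 - n)), no ties

# hypothetical example: three experts ranking five privacy-concern factors (1 = most important)
example = [[1, 2, 3, 4, 5],
           [1, 3, 2, 4, 5],
           [2, 1, 3, 5, 4]]
print(kendalls_w(example))   # close to 1 -> strong consensus across the panel
```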

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • Convolutional Neural Network (ConvNet) is one class of powerful Deep Neural Networks that can analyze and learn hierarchies of visual features. The first such neural network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in either industry or academia because of the shortage of large-scale datasets and low computational power. However, a few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network. That breakthrough revived people's interest in neural networks. The success of the Convolutional Neural Network rests on two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation. The second is the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, it is difficult and requires a lot of effort to gather a large-scale dataset to train a ConvNet. Moreover, even if we have a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be addressed by using transfer learning. Transfer learning is a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases. The first uses the ConvNet as a fixed feature extractor, and the second fine-tunes the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (such as one trained on ImageNet) is used to compute feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are then fine-tuned with backpropagation. In this paper, we focus only on using multiple ConvNet layers as a fixed feature extractor. However, applying features of high dimensional complexity extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from multiple ConvNet layers capture different characteristics of the image, which means a better representation can be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Overall, our primary pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and the activation features of the three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple ConvNet layers representation, because it carries more information about the image. When the three fully connected layer features are concatenated, the resulting image representation has 9192 (4096+4096+1000) dimensions. However, features extracted from multiple ConvNet layers are redundant and noisy, since they are extracted from the same ConvNet. Thus, in a third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. When salient features are obtained, the classifier can classify images more accurately, and the performance of transfer learning can be improved.
To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple ConvNet layer representations against a single ConvNet layer representation, using PCA for feature selection and dimension reduction. Our experiments demonstrated the importance of feature selection for the multiple ConvNet layer representation. Moreover, our proposed approach achieved 75.6% accuracy compared to 73.9% achieved by the FC7 layer on the Caltech-256 dataset, 73.1% compared to 69.2% achieved by the FC8 layer on the VOC07 dataset, and 52.2% compared to 48.7% achieved by the FC7 layer on the SUN397 dataset. We also showed that our proposed approach achieved superior performance, with 2.8%, 2.1% and 3.1% accuracy improvements on the Caltech-256, VOC07, and SUN397 datasets respectively compared to existing work.
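The three-step pipeline in this abstract (extract the three fully connected layer activations, concatenate them into a 9192-dimensional vector, then apply PCA before a classifier) could be sketched roughly as below. This is not the authors' code: torchvision's pretrained AlexNet (recent versions, which expose AlexNet_Weights) stands in for the paper's network, the activations are taken directly after each Linear layer, and the PCA dimensionality and linear SVM are assumptions.

```python
import torch
from torchvision import models
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

def multi_layer_features(image_tensor):
    """image_tensor: preprocessed (1, 3, 224, 224) batch; returns a 9192-dim feature vector."""
    with torch.no_grad():
        x = alexnet.avgpool(alexnet.features(image_tensor)).flatten(1)  # conv features
        feats = []
        for layer in alexnet.classifier:            # Dropout / Linear / ReLU sequence
            x = layer(x)
            if isinstance(layer, torch.nn.Linear):  # FC6, FC7, FC8 activations
                feats.append(x)
    return torch.cat(feats, dim=1).squeeze(0).numpy()   # 4096 + 4096 + 1000 = 9192 dims

def train_on_salient_features(feature_matrix, labels, n_components=512):
    """Reduce the concatenated features with PCA, then fit a simple linear classifier."""
    pca = PCA(n_components=n_components)             # keep only the salient components
    reduced = pca.fit_transform(feature_matrix)
    clf = LinearSVC().fit(reduced, labels)
    return pca, clf
```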

Studies on the selection in soybean breeding. -II. Additional data on heritability, genotypic correlation and selection index- (대두육종에 있어서의 선발에 관한 실험적연구 -속보 : 유전력ㆍ유전상관, 그리고 선발지수의 재검토-)

  • Kwon-Yawl Chang
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.3
    • /
    • pp.89-98
    • /
    • 1965
  • These experimental studies were intended to clarify the effects of selection, to estimate the heritabilities and genotypic correlations among some agronomic characters, and to calculate selection indexes on some selective characters for the selection of desirable lines under different climatic conditions. Finally, practical implications of these studies, especially regarding the selection index, are discussed. Twenty-two varieties of determinate growth habit type, selected at random from the 138 soybean varieties cultivated the year before, were grown in a randomized block design with three replicates at Chinju, Korea, under May and June sowing conditions. Heritabilities for the eleven agronomic characters (flowering date, maturity date, stem length, branch numbers per plant, stem diameter, plant weight, pod numbers per plant, grain numbers per plant and 100 grain weight, shown in Table 3) were estimated by the variance-components procedure in a replicated varietal trial. Analysis of covariance was used to obtain the genotypic and phenotypic correlations among the eight characters, and the selection indexes for some agronomic characters were calculated by Robinson's method. The results are summarized as follows. Heritabilities: The experiment on the genotype-environment interaction revealed that for almost all of the characters investigated the interaction was too large to be neglected and materially affected the estimates of various genotypic parameters. The variation in heritability due to the change of environments was larger in the characters of low heritability than in those of high heritability. Heritability values of flowering date, fruiting period (days from flowering to maturity), stem length and 100 grain weight were the highest in both environments, while those of yield (grain weight) and the other characters showed lower values (Table 3). These heritability values showed a decreasing trend with delayed sowing. Further, all calculated heritability values were higher than anticipated. This was expected, since these values, which are broad-sense heritabilities, contain the variance due to dominance and epistasis in addition to the additive genetic variance. Genotypic correlations: Genotypic correlations were slightly higher than the corresponding phenotypic correlations in both environments, but variation in the values due to the change of environment appeared between grain weight and some other characters, especially an increase between grain weight and flowering date and the total growing period (Table 6). Genotypic correlations between grain weight and other characters indicated that high seed yield was genetically correlated with late flowering, late maturity, and five other characters, namely branch numbers per plant, stem diameter, plant weight, pod numbers per plant and grain numbers per plant, but not with 100 grain weight. Pod numbers and grain numbers per plant were more closely correlated with seed yield than the other characters. Selection index: For the comparison and use of selection indexes in selection, two kinds of selection indexes were calculated; the former is called selection index A and the latter selection index B, as shown in Table 7.
Selection index A was calculated using grain weight per plant as the yield character (character Y), while selection index B was calculated using pod numbers per plant, instead of grain weight per plant, as the yield character (character Y'). These results suggest that the selection index technique is useful in soybean breeding. In reality, however, as the selection index varies with population and environment, it must be calculated for each population to which selection is applied and for each environment in which the population is located. In spite of the expected usefulness of the selection index technique in soybean breeding, unsolved problems such as the expense, time and labor involved in calculating the selection index remain. For these reasons, and from these experimental studies, it was recognized that in the breeding of self-fertilized soybean plants, selection for yield should be based on a simpler selection index such as selection index B of these experiments rather than on a complex selection index such as selection index A. Furthermore, it was realized that the selection index should be calculated on the basis of the data of some 3-4 agronomic characters: maturity date (X$_1$), branch numbers per plant (X$_2$), stem diameter (X$_3$) and pod numbers per plant. It should be noted that selection for maturity date (X$_1$), which has high heritability, should be successful, and that the selection index can be calculated easily on the basis of the data of branch numbers per plant (X$_2$), stem diameter (X$_3$) and pod numbers per plant directly after harvest, before drying and threshing. These should be very useful agronomic characters in the selection of Korean soybeans of determinate growth habit type, as they can be measured or counted easily, saving time and expense in the period from harvest to drying and threshing, and they affect soybean yields more than the other agronomic characters.
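A selection index of the kind the study computes can be expressed numerically as a Smith-Hazel-type index: the weights b solve P b = G a, where P is the phenotypic covariance matrix, G the genotypic covariance matrix, and a the vector of relative economic weights, and the index value for a plant is I = b'x. The matrices and weights below are illustrative placeholders, not the paper's estimates.

```python
import numpy as np

P = np.array([[4.0, 1.2, 0.8],     # phenotypic (co)variances of X1, X2, X3 (hypothetical)
              [1.2, 3.0, 0.5],
              [0.8, 0.5, 2.5]])
G = np.array([[2.5, 1.0, 0.6],     # genotypic (co)variances of X1, X2, X3 (hypothetical)
              [1.0, 1.8, 0.4],
              [0.6, 0.4, 1.5]])
a = np.array([1.0, 1.0, 2.0])      # relative economic weights of the characters (hypothetical)

b = np.linalg.solve(P, G @ a)      # index coefficients: solve P b = G a

def selection_index(x):
    """x: observed values of the selected characters (e.g. maturity date, branch number, stem diameter)."""
    return float(b @ x)            # I = b'x; rank lines by this value and keep the highest

print(selection_index(np.array([120.0, 5.0, 8.0])))
```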


A study of the plan dosimetic evaluation on the rectal cancer treatment (직장암 치료 시 치료계획에 따른 선량평가 연구)

  • Jeong, Hyun Hak;An, Beom Seok;Kim, Dae Il;Lee, Yang Hoon;Lee, Je hee
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.28 no.2
    • /
    • pp.171-178
    • /
    • 2016
  • Purpose: In order to minimize the dose to the femoral heads in radiation therapy planning for rectal cancer, we compare and evaluate the usefulness of 3-field 3D conformal radiation therapy (below, 3fCRT), which is the conventional treatment method, 5-field 3D conformal radiation therapy (below, 5fCRT), and Volumetric Modulated Arc Therapy (VMAT). Materials and Methods: Ten cases of rectal cancer treated on a 21EX were enrolled. The cases were planned with Eclipse (Ver. 10.0.42, Varian, USA), PRO3 (Progressive Resolution Optimizer 10.0.28) and AAA (Anisotropic Analytic Algorithm Ver. 10.0.28). The 3fCRT and 5fCRT plans used gantry angles of $0^{\circ}$, $270^{\circ}$, $90^{\circ}$ and $0^{\circ}$, $95^{\circ}$, $45^{\circ}$, $315^{\circ}$, $265^{\circ}$, respectively. The VMAT plan parameters consisted of one coplanar 15 MV $360^{\circ}$ arc. The treatment prescription delivered 54 Gy to the rectum in 30 fractions. To minimize the dose differences that show up randomly during optimization, VMAT plans were optimized and calculated twice, and normalized to the target V100% = 95%. The evaluation indexes were D of both femoral heads and acetabular fossae, total MU, and the H.I. (Homogeneity Index) and C.I. (Conformity Index) of the PTV. All VMAT plans were verified by gamma test with portal dosimetry using EPID. Results: D of the right femoral head was 53.08 Gy, 50.27 Gy, and 30.92 Gy in the order of the 3fCRT, 5fCRT, and VMAT plans. Likewise, the left femoral head showed averages of 53.68 Gy, 51.01 Gy and 29.23 Gy in the same order. D of the right acetabular fossa was 54.86 Gy, 52.40 Gy, and 30.37 Gy in the same order. The maximum dose to both femoral heads and acetabular fossae decreased in the order of 3fCRT, 5fCRT, and VMAT. The C.I. was lowest for the VMAT plan, with averages of 1.64, 1.48, and 0.99 in the order of 3fCRT, 5fCRT, and VMAT. There was no significant difference in the H.I. of the PTV among the three plans. Total MU showed that the VMAT plan used 124.4 MU and 299 MU more than the 3fCRT and 5fCRT plans, respectively. IMRT verification gamma test results for the VMAT plans passed over 90.0% at 2 mm/2%. Conclusion: In rectal cancer treatment, the VMAT plan was shown to be advantageous in most of the evaluation indexes compared to the 3D plans, and it greatly reduced the dose to the femoral heads. However, because of practical limitations there may be cases in which it is difficult to select a VMAT plan. 5fCRT has the advantage of reducing the dose to the femoral heads compared to the existing 3fCRT without additional problems. Therefore, selecting the treatment plan in accordance with each hospital's situation would improve radiation therapy efficiency, extending not only survival time but also quality of life in general.
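The abstract does not state which H.I./C.I. definitions it uses, so the sketch below applies two commonly used ones (H.I. = D2% / D98% within the PTV; C.I. = volume receiving the prescription dose / PTV volume) to per-voxel dose arrays; both the formulas and the synthetic dose values are assumptions for illustration only.

```python
import numpy as np

def homogeneity_index(ptv_doses):
    """ptv_doses: 1D array of voxel doses inside the PTV (Gy)."""
    d2 = np.percentile(ptv_doses, 98)    # dose received by the hottest 2% of the PTV
    d98 = np.percentile(ptv_doses, 2)    # dose covering 98% of the PTV
    return d2 / d98                      # closer to 1.0 -> more homogeneous

def conformity_index(all_doses, ptv_mask, prescription=54.0):
    """all_doses: voxel doses over the whole body; ptv_mask: boolean array marking PTV voxels."""
    treated_volume = np.count_nonzero(all_doses >= prescription)
    ptv_volume = np.count_nonzero(ptv_mask)
    return treated_volume / ptv_volume   # closer to 1.0 -> prescription isodose conforms to the PTV

# synthetic example for a 54 Gy / 30 fraction prescription
rng = np.random.default_rng(0)
ptv = rng.normal(54.0, 1.0, size=10_000)
print(homogeneity_index(ptv))
```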


An Exploratory Study on the Components of Visual Merchandising of Internet Shopping Mall (인터넷쇼핑몰의 VMD 구성요인에 대한 탐색적 연구)

  • Kim, Kwang-Seok;Shin, Jong-Kuk;Koo, Dong-Mo
    • Journal of Global Scholars of Marketing Science
    • /
    • v.18 no.2
    • /
    • pp.19-45
    • /
    • 2008
  • This study empirically examines the primary dimensions of visual merchandising (VMD) of an internet shopping mall, namely store design, merchandise, and merchandising cues, that make it an attractive virtual store to shoppers. The authors reviewed the literature on the major components of VMD from the perspective of the AIDA model, which has mainly been applied to offline store settings. The major purposes of the study are as follows: first, to derive the variables related to the components of visual merchandising from the existing literature, establish hypotheses, and test them empirically; second, to examine the relationships between the components of VMD and the attitude toward the VMD, with more emphasis on finding out the component structure of the VMD. VMD needs to be examined from the perspective that an online shopping mall is a virtual self-service or clerkless store, which reduces the number of employees and helps shoppers search, evaluate and purchase by themselves, and it needs to be explored in terms of customers' in-store persuasion processes. This study reviewed the literature related to store design, merchandise, and merchandising cues, which are relevant to the store, product, and promotion respectively. VMD is a total communication tool, and the AIDA model can explain the in-store consumer behavior of online shopping. Store design has to do with triggering a consumer's attention to the online mall, merchandise with product-related interest, and merchandising cues with promotions such as recommendations and links that induce the desire to purchase; these three steps can be seen as the processes leading to purchase actions. The theoretical rationale for the relationship between VMD and AIDA can be found in Tyagi (2005), where the three steps of consumer-oriented merchandising are a store, a product assortment, and placement; in Omar (1999), where the three types of interior display are an architectural design display, a commodity display, and a point-of-sale (POS) display; and in Davies and Ward (2005), where the retail store interior image is related to atmosphere, merchandise, and in-store promotion. Lee et al. (2000) suggested as web merchandising components merchandising cues, a shopping metaphor which serves as an assistant tool for search, a store design, a layout (web design), and a product assortment. Store design, which includes differentiation, simplicity and navigation, is supposed to be related to attention to the virtual store. Second, the merchandise dimensions comprising product assortment, visual information and product reputation have to do with interest in the product offerings. Finally, the merchandising cues, which refer to the merchandiser (MD)'s recommendation of products and the provision of hyperlinks to relevant goods, are concerned with the attempt to induce the desire to purchase. A questionnaire survey was carried out to collect data from consumers who shop at internet shopping malls frequently. To select the subject malls, mall ranking data announced by a mall rating agency were used to pick the five most popular and the five least popular malls. The subjects were instructed to answer the questions after navigating the designated mall for five minutes. The questionnaire was distributed to 300 consumers, and 166 samples were used in the final analysis.
The empirical testing focused on identifying and confirming the dimensionality of VMD and its subdimensions using a structural equation modeling method. The confirmatory factor analysis for the endogenous and exogenous variables was carried out in four parts. Second-order factor analysis was done for store design, merchandise, and merchandising cues, and first-order confirmatory factor analysis for the attitude toward the VMD. The model test results show that the chi-square value of the structural equation is 144.39 (d.f. 49), significant at the 0.01 level, which means the proposed model would be rejected on that criterion alone. But, judging from the ratio of the chi-square value to the degrees of freedom, the ratio is 2.94, smaller than the acceptable level of 3.0; RMR is 0.087, which is higher than the generally acceptable level of 0.08. GFI and AGFI turned out to be 0.90 and 0.84 respectively. Both NFI and NNFI are 0.94, and CFI is 0.95. The major test results are as follows. First, the second-order factor analysis and structural equation modeling reveal that differentiation, simplicity and the ease of identifying the current status of the transaction are confirmed to be subdimensions of store design and significant predictors of the dependent variable. This result implies that, when designing an online shopping mall, it is necessary to differentiate it visually from other malls to improve the effectiveness of the communication of store design. That is, a differentiated store design raises the contrast stimulus to the sensory organs, promoting memory of the store and fostering a favorable attitude toward the VMD of the store. The result that navigation, meaning the ease of identifying the current status of shopping, affects the attitude toward VMD can be interpreted as follows: navigating via hyperlinks, which is characteristic of internet shopping, is a complex and cognitive process, and shoppers are likely to lack a sense of the overall structure of the store. Consequently, shoppers are likely to get lost amid shopping, not knowing where to go. An orientation tool enhances the accessibility of information and raises perceptive power about the store environment (Titus & Everett 1995). Second, the primary dimension of merchandise and its subdimensions were each confirmed to be unidimensional, to have construct validity, and to have nomological validity, in that the VMD dimensions show a positive correlation with the dependent variable. The subdimensions of product assortment, brand fame and information provision proved to have a positive effect on the attitude toward the VMD. This can be interpreted to mean that the more plentiful the product and brand assortment of the mall, the more likely shoppers are to favor it. Brand fame and information provision affect the VMD attitude as well, which means that the more famous the brand, the more likely shoppers are to trust and feel familiar with the mall, and that plentifully and visually presented information can lead the shopper to a favorable attitude toward the store VMD. Third, it turned out that the merchandising cues of product recommendation and hyperlinks affect the VMD attitude. This can be interpreted to mean that recommended products reduce the uncertainty related to the purchase decision, and that hyperlinks to relevant products help the shopper save the cognitive effort exerted on information search and gathering, which can lead to a favorable attitude toward the VMD.
This study tried to shed some new light on the VMD of the online store by reviewing the variables mentioned as relevant to offline VMD in the existing literature, and tried to link the VMD components from the perspective of the AIDA model. The effect size of the VMD dimensions on the attitude was in the order of the merchandise, the store design and the merchandising cues. It is said that the internet offers unlimited display space; however, the virtual store is not unlimited, since the consumer has a limited amount of cognitive ability to process external information and limited internal memory. In particular, shoppers are likely to face difficulties in decision making on account of too many alternatives and information overload. Therefore, the internet shopping mall manager should take into consideration the cost of information search on the part of the consumer in order to establish optimal product placements and search routes. An efficient store composition becomes possible by reducing the psychological burdens and cognitive efforts exerted on information search and alternative evaluation. The store image is for the most part determined by the product categories and brands the mall deals in. The results of this study support the proposition that merchandise matters more to the VMD attitude than the other components, so the manager is required to take a strategic approach to VMD. Internet users are becoming more accustomed to and more knowledgeable about the internet medium and more likely to accept the internet as a shopping channel as the period of time during which they use the internet to shop becomes longer. The web merchandiser should be aware that product introduction using moving pictures and bulletin boards becomes more important in order to present interactive product information visually and communicate with customers more actively, thereby enriching the quantity and quality of product information.
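The fit-index screening reported in this abstract can be summarized as a simple checklist against the conventional rules of thumb the text cites (chi-square/df below 3, GFI/NFI/NNFI/CFI of 0.90 or above, RMR near or below 0.08); the thresholds below are those general conventions, not output of any SEM package.

```python
def sem_fit_summary(chi2, df, rmr, gfi, agfi, nfi, nnfi, cfi):
    """Return which conventional fit criteria a structural equation model satisfies."""
    return {
        "chi2/df < 3.0": chi2 / df < 3.0,
        "RMR <= 0.08": rmr <= 0.08,
        "GFI >= 0.90": gfi >= 0.90,
        "AGFI >= 0.80": agfi >= 0.80,
        "NFI >= 0.90": nfi >= 0.90,
        "NNFI >= 0.90": nnfi >= 0.90,
        "CFI >= 0.90": cfi >= 0.90,
    }

# values reported in the study above
print(sem_fit_summary(chi2=144.39, df=49, rmr=0.087,
                      gfi=0.90, agfi=0.84, nfi=0.94, nnfi=0.94, cfi=0.95))
```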
