• Title/Summary/Keyword: Goal Similarity

Search results: 94

A Study on the Factors Promoting the Use of Memory-based Emotion in Evaluating the Brand (상표평가에서 기억감정의 이용을 촉진하는 요인에 관한 연구)

  • Choi, Nak-Hwan;Na, Ji-Eun
    • Journal of Global Scholars of Marketing Science, v.13, pp.49-70, 2004
  • Recently, consumer researchers have suggested that consumers evaluate brands using subjective, memory-based emotion formed through their own experiences. However, relatively little consumer research has explored the conditions under which memory-based emotion is used in brand evaluation. This study therefore examined the conditions that facilitate the use of memory-based emotion, such as accessibility, representativeness, and relevance to the consumption goal, when consumers evaluate a brand. In addition, we investigated factors that influence responses of memory-based emotion, such as the similarity between encoding and retrieval conditions and the level of organization. The results suggest that the level of organization was positively related to accessibility and representativeness, whereas the similarity between encoding and retrieval conditions was not. Accessibility and representativeness influenced the use of memory-based emotion in consumer brand evaluation, but relevance to the consumption goal did not.

Modeling of Bank Asset Management System based on Intelligent Agent

  • Kim, Dae-Su;Kim, Chang-Suk
    • International Journal of Fuzzy Logic and Intelligent Systems, v.1 no.1, pp.81-86, 2001
  • In this paper, we investigated the modeling of a Bank Asset Management System (BAMS) based on intelligent agents. To achieve this goal, we introduced several kinds of agents with intelligent features. BAMS is a user-friendly system that adopts a fuzzy converting system and a fuzzy matching system returning reasonable similarity-matching results. A generation function for the proximity degree is suggested, fuzzification of investment-type categories and feature values is defined, and the generation of the proximity degree is derived. An example bank asset management system is introduced and simulated. Investment-type matching using a fuzzy measure was tested and produced quite reasonable similarity-matching results.

Video Content Indexing using Kullback-Leibler Distance

  • Kim, Sang-Hyun
    • International Journal of Contents, v.5 no.4, pp.51-54, 2009
  • Huge video databases require an effective video content indexing method. While manual indexing is the most effective approach to this goal, it is slow and expensive. Automatic indexing is therefore desirable, and various indexing tools for video databases have recently been developed. For efficient video content indexing, the similarity measure is an important factor. This paper presents new similarity measures between frames and proposes a new algorithm to index video content using the Kullback-Leibler distance defined between two histograms. Experimental results show that the proposed algorithm gives remarkably higher accuracy ratios than several conventional video content indexing algorithms.
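
As a generic illustration of the histogram comparison the abstract describes, the Kullback-Leibler distance between two frame histograms can be sketched as follows; the symmetrized form and the smoothing constant are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-10):
    """D(p || q) for two normalized histograms; eps avoids log(0)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def kl_distance(h1, h2):
    """Symmetrized KL distance between two frame histograms."""
    return kl_divergence(h1, h2) + kl_divergence(h2, h1)

# Two 4-bin gray-level histograms from consecutive frames (made-up values)
frame_a = [10, 20, 30, 40]
frame_b = [12, 18, 33, 37]
print(kl_distance(frame_a, frame_b))  # small value -> similar frames
```

A shot boundary would then be declared where this distance between consecutive frames exceeds a threshold.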

Improvement of Relevance Feedback for Image Retrieval (영상 검색을 위한 적합성 피드백의 개선)

  • Yoon, Su-Jung;Park, Dong-Kwon;Won, Chee-Sun
    • Journal of the Institute of Electronics Engineers of Korea CI, v.39 no.4, pp.28-37, 2002
  • In this paper, we present an image retrieval method that improves retrieval performance by fusing a probabilistic method with query point movement. In the proposed algorithm, the similarity from the probabilistic method and the similarity from query point movement are fused when computing the similarity between a query image and a database image. The probabilistic method used in this paper is suitable for handling negative examples, while query point movement exploits the statistical properties of positive examples. By combining the two methods, our goal is to overcome their respective shortcomings. Experimental results show that the proposed method outperforms both the probabilistic method and query point movement used alone.

Dual-loss CNN: A separability-enhanced network for current-based fault diagnosis of rolling bearings

  • Lingli Cui;Gang Wang;Dongdong Liu;Jiawei Xiang;Huaqing Wang
    • Smart Structures and Systems, v.33 no.4, pp.253-262, 2024
  • Current-based mechanical fault diagnosis is more convenient and lower in cost since no additional sensors are required. However, it remains challenging because the fault information in current signals is weak. In this paper, a dual-loss convolutional neural network (DLCNN) is proposed to implement intelligent bearing fault diagnosis from current signals. First, a novel similarity loss (SimL) function is developed, designed to maximize intra-class similarity and minimize inter-class similarity during model optimization. A weight parameter is further introduced into the loss function to balance and leverage the performance of the SimL function. Second, the DLCNN model is constructed using the presented SimL together with the cross-entropy loss. Finally, the two-phase current signals are fused and fed into the DLCNN to provide more fault information. The proposed DLCNN is tested on experimental data, and the results confirm that it achieves higher accuracy than a conventional CNN. Feature visualization also shows that samples of different classes are well separated.
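
The abstract does not give the exact form of SimL, but the idea of a weighted loss that rewards intra-class similarity and penalizes inter-class similarity can be sketched roughly in numpy; the cosine-similarity formulation and the placement of the weight parameter here are assumptions, not the paper's definition:

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def similarity_loss(features, labels, weight=0.5):
    """Penalize low intra-class and high inter-class cosine similarity.

    features: (n, d) array of embeddings; labels: length-n class ids.
    `weight` balances the two terms, echoing the SimL weight parameter.
    """
    intra, inter = [], []
    n = len(labels)
    for i in range(n):
        for j in range(i + 1, n):
            s = cosine_sim(features[i], features[j])
            (intra if labels[i] == labels[j] else inter).append(s)
    intra_term = 1.0 - np.mean(intra) if intra else 0.0  # want intra sim high
    inter_term = np.mean(inter) if inter else 0.0        # want inter sim low
    return weight * intra_term + (1.0 - weight) * inter_term
```

In training this term would be added to the cross-entropy loss, as the abstract's dual-loss construction describes.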

Image Data Classification using a Similarity Function based on Second Order Tensor (2차 텐서 기반 유사도 함수를 이용한 영상 데이터 분류)

  • Yoon, Dong-Woo;Lee, Kwan-Yong;Park, Hye-Young
    • Journal of KIISE:Software and Applications, v.36 no.8, pp.664-672, 2009
  • Recently, studies utilizing tensor expressions in image data analysis and processing have attracted much interest. The purpose of this study is to develop an efficient system for classifying image patterns using a second order tensor expression. To achieve this goal, we propose a data generation model expressed by class factors and environment factors in a second order tensor representation. Based on this model, we define a function for measuring the similarity between two images, obtained by estimating the probability density of the environment factors with a matrix normal distribution. Through computational experiments on a number of benchmark data sets, we confirm that the second order tensor representation improves classification rates and that the proposed similarity function is more appropriate for image data than conventional similarity measures.

A deep learning model based on triplet losses for a similar child drawing selection algorithm (Triplet Loss 기반 딥러닝 모델을 통한 유사 아동 그림 선별 알고리즘)

  • Moon, Jiyu;Kim, Min-Jong;Lee, Seong-Oak;Yu, Yonggyun
    • Journal of Korea Society of Industrial Information Systems, v.27 no.1, pp.1-9, 2022
  • The goal of this paper is to create a triplet-loss-based deep learning model for a similar child drawing selection algorithm. To assess the similarity of children's drawings, the distance between feature vectors of the same class should be small and the distance between feature vectors of different classes should be large. Therefore, this study developed a similar child drawing selection algorithm by building a deep learning model that combines triplet loss with a residual network (ResNet), which has the advantage of measuring image similarity regardless of the number of classes. Using this algorithm, the similarity between a target child drawing and other drawings can be measured, and drawings with high similarity can be selected.
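
The triplet-loss objective mentioned above can be illustrated with a minimal numpy sketch; the margin value and the Euclidean distance metric are generic defaults, not the paper's settings:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss on embedding vectors.

    Pulls the anchor toward the positive (same class) and pushes it
    at least `margin` farther from the negative (different class).
    """
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(d_pos - d_neg + margin, 0.0)
```

During training the loss is minimized over the embedding network's parameters; at selection time, drawings whose embeddings lie closest to the target drawing's embedding are returned.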

A Study on Updating of Analytic Model of Dynamics for Aircraft Structures Using Optimization Technique (최적화 기법을 이용한 비행체 구조물 동특성 해석 모델의 최신화 연구)

  • Lee, Ki-Du;Lee, Young-Shin;Kim, Dong-Soo
    • Journal of the Korean Society for Aeronautical & Space Sciences, v.37 no.2, pp.131-138, 2009
  • Analytical modal verification is the process of providing an acceptable description of the subject structure's behaviour. In general, the results of the original analytical model differ from those of the actual structure because of uncertainties such as material non-linearity, boundary conditions, and shape modifications. In this paper, the dynamic model of a glider's wing is correlated with static deformation and vibration test results by the goal-attainment method, a multi-objective optimization technique. The structural responses are predicted using the finite element method, and the optimization is carried out with SQP (sequential quadratic programming), which is widely used for constrained nonlinear optimization problems. The MAC (Modal Assurance Criterion) is used to modify the mode shapes and quantify their similarity.
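
The MAC cited in the abstract is a standard measure of correlation between an analytical and an experimental mode shape, MAC = |φ_a^T φ_e|² / ((φ_a^T φ_a)(φ_e^T φ_e)); a minimal sketch for real-valued mode shapes:

```python
import numpy as np

def mac(phi_a, phi_e):
    """Modal Assurance Criterion between an analytical mode shape
    phi_a and an experimental one phi_e: 1.0 means the shapes are
    perfectly correlated (scale-invariant), 0.0 means orthogonal."""
    num = abs(np.dot(phi_a, phi_e)) ** 2
    den = np.dot(phi_a, phi_a) * np.dot(phi_e, phi_e)
    return float(num / den)
```

Because MAC is insensitive to scaling, it isolates shape agreement from amplitude differences, which is why it is a common correlation target in model updating.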

Process Evaluation Model based on Goal-Scenario for Business Activity Monitoring

  • Baek, Su-Jin;Song, Young-Jae
    • Journal of information and communication convergence engineering, v.9 no.4, pp.379-384, 2011
  • The scope of the problems that can be solved by monitoring, and the improvement of recognition time, are directly correlated with the performance of the management function of a business process. However, current business activity monitoring decides whether to raise warnings by assuming a fixed environment and evaluating expressions based on design rules, and warnings are raised by running the measuring process whenever event attribute values arrive. There is therefore a limit to distinguishing the range of occurrence and the level of severity of new external problems arising in a complicated environment, and since the range of problems that can be evaluated is difficult to expand, unexpected situations arising at execution time cannot be evaluated. In this paper, a process evaluation model based on goal scenarios is suggested to provide continuous services through the existing monitoring process in response to service demands from new, externally occurring scenarios. New demands from the external situation are analyzed according to the goal scenario of the process activities. In addition, using a meta-heuristic algorithm, a similar process model is found and identified by combining similarity and interrelationship, so that the process can be stopped in advance or adjusted in the desired direction.

Semantic Process Retrieval with Similarity Algorithms (유사도 알고리즘을 활용한 시맨틱 프로세스 검색방안)

  • Lee, Hong-Joo;Klein, Mark
    • Asia pacific journal of information systems, v.18 no.1, pp.79-96, 2008
  • One of the roles of Semantic Web services is to execute dynamic intra-organizational services, including the integration and interoperation of business processes. Since different organizations design their processes differently, the retrieval of similar semantic business processes is necessary to support inter-organizational collaboration. Most approaches for finding services that have certain features and support certain business processes have relied on some type of logical reasoning and exact matching. This paper presents our approach of using imprecise matching to expand the results of an exact matching engine when querying the OWL (Web Ontology Language) MIT Process Handbook. The MIT Process Handbook is an electronic repository of best-practice business processes, intended to help people (1) redesign organizational processes, (2) invent new processes, and (3) share ideas about organizational practices. To use the MIT Process Handbook for process retrieval experiments, we exported it into an OWL-based format: we model the Process Handbook meta-model in OWL and export the processes in the Handbook as instances of that meta-model. Next, we need a sizable number of queries and their corresponding correct answers in the Process Handbook. Many previous studies devised artificial data sets composed of randomly generated numbers without real meaning and used subjective ratings for correct answers and similarity values between processes. To generate a semantics-preserving test data set, we create 20 variants of each target process that are syntactically different but semantically equivalent, using mutation operators; these variants serve as the correct answers for the target process. We devise diverse similarity algorithms based on the values of process attributes and the structures of business processes.
We use simple text-retrieval similarity algorithms such as TF-IDF and Levenshtein edit distance, and utilize a tree edit distance measure because semantic processes appear to have a graph structure. We also design similarity algorithms that consider the similarity of process structure, such as part processes, goals, and exceptions. Since we can identify relationships between a semantic process and its subcomponents, this information can be utilized to calculate similarities between processes. Dice's coefficient and the Jaccard similarity measure are utilized to calculate the portion of overlap between processes in diverse ways. We perform retrieval experiments to compare the performance of the devised similarity algorithms, measuring retrieval performance in terms of precision, recall, and the F measure, the harmonic mean of precision and recall. The tree edit distance shows the poorest performance on all measures. TF-IDF and the method incorporating both TF-IDF and Levenshtein edit distance perform better than the other devised methods; these two measures focus on the similarity of process names and descriptions. In addition, we calculate the rank correlation coefficient, Kendall's tau-b, between the number of process mutations and the ranking of similarity values among the mutation sets. In this experiment, similarity measures based on process structure, such as Dice's, Jaccard, and their derivatives, show greater coefficients than measures based on the values of process attributes. However, the Lev-TFIDF-JaccardAll measure, which considers process structure and attribute values together, performs reasonably well in both experiments. For retrieving semantic processes, it therefore seems better to consider diverse aspects of process similarity, such as process structure and the values of process attributes.
In summary, we generate semantic process data and a retrieval-experiment data set from the MIT Process Handbook repository, suggest imprecise query algorithms that expand the retrieval results of an exact matching engine such as SPARQL, and compare the retrieval performance of the similarity algorithms. As limitations and future work, experiments with data sets from other domains are needed, and since diverse measures yield many similarity values, better ways of identifying relevant processes may be found by applying these values simultaneously.
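
The set-overlap and edit-distance measures named above are standard; minimal reference implementations are sketched below (the attribute sets in the tests are illustrative, not drawn from the Handbook data):

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of process attributes."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def dice(a, b):
    """Dice's coefficient: counts the overlap twice in the numerator."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b)) if a or b else 1.0

def levenshtein(s, t):
    """Edit distance between two process names (dynamic programming,
    keeping only the previous row of the DP table)."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (cs != ct)))  # substitution
        prev = cur
    return prev[-1]
```

Combined measures such as the Lev-TFIDF-JaccardAll mentioned above would then blend scores like these over names, descriptions, and subcomponent sets.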