• Title/Summary/Keyword: K-Means Similarity Clustering


Classification in Different Genera by Cytochrome Oxidase Subunit I Gene Using CNN-LSTM Hybrid Model

  • Meijing Li;Dongkeun Kim
    • Journal of information and communication convergence engineering
    • /
    • v.21 no.2
    • /
    • pp.159-166
    • /
    • 2023
  • The COI barcode is a sequence of approximately 650 bp at the 5' end of the mitochondrial Cytochrome c Oxidase subunit I (COI) gene. As an effective DeoxyriboNucleic Acid (DNA) barcode, it is widely used for the taxonomic identification and evolutionary analysis of species. We created a CNN-LSTM hybrid model by combining gene features partially extracted by a Long Short-Term Memory (LSTM) network with the feature maps obtained by a Convolutional Neural Network (CNN). After training on 278 samples covering 15 genera from two orders, the CNN-LSTM hybrid model achieved 94% accuracy on a test set of 118 samples, outperforming K-Means clustering, Support Vector Machines (SVM), and a single CNN classification model. When we augmented the training set samples and expanded the coverage to four genera from four orders, classification accuracy on the test set reached 100%. This study also proposes calculating the cosine similarity between the training and test sets to provide an initial assessment of the reliability of the predicted results and to help discover new species.
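The reliability check proposed at the end of the abstract can be sketched with plain cosine similarity. The k-mer count encoding below is a hypothetical stand-in for the paper's sequence features, which the abstract does not specify:

```python
from itertools import product
from math import sqrt

def kmer_counts(seq, k=3):
    """Encode a DNA sequence as a vector of k-mer counts (hypothetical features)."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = {km: 0 for km in kmers}
    for i in range(len(seq) - k + 1):
        km = seq[i:i + k]
        if km in counts:
            counts[km] += 1
    return [counts[km] for km in kmers]

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# A query whose best similarity to all training samples is low may signal
# an unreliable prediction, possibly a sequence from an unseen species.
train = [kmer_counts("ATGCGTACGTTAGC"), kmer_counts("ATGCGTACGTTAGG")]
query = kmer_counts("ATGCGTACGTAAGC")
reliability = max(cosine_similarity(query, t) for t in train)
```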

A Study on the Safety Navigational Width of Bridges Across Waterways Considering Optimal Traffic Distribution (최적 교통분포를 고려한 해상교량의 안전 통항 폭에 관한 연구)

  • Son, Woo-Ju;Mun, Ji-Ha;Gu, Jung-Min;Cho, Ik-Soon
    • Journal of Navigation and Port Research
    • /
    • v.46 no.4
    • /
    • pp.303-312
    • /
    • 2022
  • Bridges across waterways act as interference factors that reduce the navigable water area from the perspective of navigation safety. To analyze the safe navigational width for ships passing bridges across waterways, the optimal traffic distribution was investigated based on AIS data, and ships were classified by size through k-means clustering. Goodness-of-fit analysis of the clustered data showed that the lognormal distribution was close to the optimal distribution for the Incheon Bridge and Busan Harbor Bridge, while the normal distribution fit the Mokpo Bridge and Machang Bridge. Assuming that the safe passage range of a vessel is the 95% confidence interval under each distribution, the difference between the normal and lognormal distributions was largest for the Incheon Bridge, at 64 m to 98 m, and smallest for the Machang Bridge, at 10 m. Accordingly, for the Incheon Bridge it is more appropriate to derive the safe navigational width from the lognormal distribution than from the normal distribution. For the other bridges, either distribution yields similar results because the widths under the normal and lognormal distributions are close. Based on these results, presenting a safe navigational range is expected to contribute to the safe operation of ships as well as the prevention of accidents.
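The distribution-fitting step can be illustrated with a minimal sketch: estimate normal and lognormal parameters from lateral ship positions and take the central 95% interval under each distribution. The positions and the 1.96-sigma interval are illustrative assumptions, not the paper's AIS data:

```python
from math import exp, log, sqrt

def mean_std(xs):
    """Sample mean and standard deviation."""
    m = sum(xs) / len(xs)
    return m, sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def normal_95(xs):
    """Central 95% interval under a fitted normal distribution."""
    m, s = mean_std(xs)
    return m - 1.96 * s, m + 1.96 * s

def lognormal_95(xs):
    """Central 95% interval under a fitted lognormal distribution."""
    m, s = mean_std([log(x) for x in xs])
    return exp(m - 1.96 * s), exp(m + 1.96 * s)

# Hypothetical lateral passage positions (metres) of one cluster of ships.
positions = [120, 135, 150, 160, 170, 185, 200, 230, 260, 310]
lo_n, hi_n = normal_95(positions)
lo_l, hi_l = lognormal_95(positions)
```

Comparing the widths of the two intervals, as the paper does per bridge, shows how skewed traffic makes the lognormal fit diverge from the normal one.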

Combined Image Retrieval System using Clustering and Condensation Method (클러스터링과 차원축약 기법을 통합한 영상 검색 시스템)

  • Lee Se-Han;Cho Jungwon;Choi Byung-Uk
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.43 no.1 s.307
    • /
    • pp.53-66
    • /
    • 2006
  • This paper proposes a combined image retrieval system that achieves the same relevance as exhaustive search while considerably improving performance. The system combines two different retrieval methods, each of which returns the same results as a full exhaustive search. Both are two-stage methods: one uses condensation of feature vectors, and the other uses binary-tree clustering. At the first stage, each method extracts candidate images that are guaranteed to include the correct answers; at the second stage, it filters out the incorrect images. Because both stages use the same similarity measure, the final result is identical to that of a full exhaustive search. The first method condenses the dimension of the feature vectors and uses the condensed vectors to compute the similarity between the query and the images in the database. There is an optimal condensation ratio that minimizes overall retrieval time, and this ratio is applied at the first stage. The binary-tree clustering method, which searches with recursive 2-means clustering, dynamically partitions each cluster with the same radius; to preserve relevance, the query range must be compensated at the first stage. After candidate clusters are selected, the final results are retrieved by recomputing similarities at the second stage. The proposed system combines these two methods; because they are independent of each other, the combined retrieval system achieves a remarkable improvement in performance.
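The candidate-then-verify idea behind the condensation method can be sketched as follows. Truncating each vector to its first dimensions stands in for the paper's condensation step (an assumption); since a truncated Euclidean distance never exceeds the full distance, the first stage cannot discard the true nearest neighbour:

```python
from math import sqrt

def dist(a, b):
    """Euclidean distance."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def two_stage_search(query, db, keep_dims=2):
    """Stage 1 ranks by distance on condensed (here: truncated) vectors;
    stage 2 verifies with the full vectors. Truncation never overestimates
    the Euclidean distance, so the true nearest neighbour survives stage 1."""
    order = sorted(db, key=lambda v: dist(query[:keep_dims], v[:keep_dims]))
    best, best_d = None, float("inf")
    for v in order:
        if dist(query[:keep_dims], v[:keep_dims]) > best_d:
            break  # the lower bound already exceeds the best full distance
        d = dist(query, v)
        if d < best_d:
            best, best_d = v, d
    return best

db = [(1.0, 2.0, 3.0, 4.0), (2.0, 2.0, 2.0, 2.0), (9.0, 9.0, 9.0, 9.0)]
result = two_stage_search((1.1, 2.1, 3.0, 4.0), db)
```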

The Need for Paradigm Shift in Semantic Similarity and Semantic Relatedness : From Cognitive Semantics Perspective (의미간의 유사도 연구의 패러다임 변화의 필요성-인지 의미론적 관점에서의 고찰)

  • Choi, Youngseok;Park, Jinsoo
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.111-123
    • /
    • 2013
  • Measures of semantic similarity/relatedness between two concepts play an important role in research on system integration and database integration. Moreover, current research on keyword recommendation and tag clustering depends strongly on such measures. For this reason, many researchers in various fields, including computer science and computational linguistics, have tried to improve methods for calculating semantic similarity/relatedness. The study of similarity between concepts seeks to discover how a computational process can model the way a human determines the relationship between two concepts. Most research on calculating semantic similarity uses ready-made reference knowledge, such as a semantic network or dictionary, to measure concept similarity. The topological method calculates relatedness or similarity between concepts based on various forms of a semantic network, including a hierarchical taxonomy. This approach assumes that the semantic network reflects human knowledge well. The nodes in a network represent concepts, and ways to measure the similarity between two nodes are regarded as ways to determine the conceptual similarity of two words (i.e., two nodes in a network). Topological methods can be categorized as node-based or edge-based, also called the information content approach and the conceptual distance approach, respectively. The node-based approach calculates similarity based on how much information the two concepts share in terms of a semantic network or taxonomy, while the edge-based approach estimates the distance between the nodes that correspond to the concepts being compared. Both approaches assume that the semantic network is static; that is, the topological approach has not considered changes in the semantic relations between concepts in the network.
However, as information and communication technologies advance knowledge sharing among people, the semantic relations between concepts in a semantic network may change. To explain this change, we adopt cognitive semantics. Its basic assumption is that humans judge semantic relations based on their cognition and understanding of concepts, called 'world knowledge.' World knowledge can be categorized as personal knowledge and cultural knowledge. Personal knowledge comes from personal experience, so everyone can have different personal knowledge of the same concept. Cultural knowledge is shared by people living in the same culture or using the same language, who have a common understanding of specific concepts. Cultural knowledge can be the starting point for a discussion of changing semantic relations: if the culture shared by people changes for some reason, their cultural knowledge may also change. Today's society and culture are changing at a fast pace, and the change of cultural knowledge is not a negligible issue in research on semantic relationships between concepts. In this paper, we propose future directions for research on semantic similarity; in other words, we discuss how such research can reflect changes in semantic relations caused by changes in cultural knowledge. We suggest three directions. First, research should include versioning and update methodologies for semantic networks. Second, dynamically generated semantic networks can be used to calculate semantic similarity between concepts; if researchers can develop a methodology to extract a semantic network from a given knowledge base in real time, this approach can solve many problems related to changing semantic relations.
Third, a statistical approach based on corpus analysis can be an alternative to methods using a semantic network. We believe these proposed directions can serve as milestones for research on semantic relations.
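The edge-based (conceptual distance) approach described above can be sketched on a toy is-a taxonomy; the hierarchy below is invented for illustration:

```python
# Parent links of a tiny invented is-a taxonomy.
parent = {"dog": "canine", "wolf": "canine", "canine": "mammal",
          "cat": "feline", "feline": "mammal", "mammal": "animal"}

def ancestors(concept):
    """Chain from a concept up to the taxonomy root."""
    chain = [concept]
    while concept in parent:
        concept = parent[concept]
        chain.append(concept)
    return chain

def path_similarity(a, b):
    """Edge-based measure: inverse of the shortest is-a path between nodes."""
    anc_a, anc_b = ancestors(a), ancestors(b)
    common = set(anc_a) & set(anc_b)
    shortest = min(anc_a.index(c) + anc_b.index(c) for c in common)
    return 1.0 / (1.0 + shortest)
```

Because the measure is computed from a fixed `parent` table, it also illustrates the static-network assumption the paper criticizes: any change in cultural knowledge would require rebuilding the taxonomy itself.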

Integrated Clustering Method based on Syntactic Structure and Word Similarity for Statistical Machine Translation (문장구조 유사도와 단어 유사도를 이용한 클러스터링 기반의 통계기계번역)

  • Kim, Hankyong;Na, Hwi-Dong;Li, Jin-Ji;Lee, Jong-Hyeok
    • Annual Conference on Human and Language Technology
    • /
    • 2009.10a
    • /
    • pp.44-49
    • /
    • 2009
  • In statistical machine translation, performance gains can be obtained by attempting domain-specific translation. To this end, sentences are clustered by type or genre. However, no previous study has used sentence-type information and genre information at the same time. This paper presents a new technique that classifies sentences by type using the grammatical-structure similarity between sentences, and integrates the two existing techniques by distinguishing document genre with word-similarity information. Statistical machine translation performance was improved by interpolating a model extracted from the corpus classified in this way with a model extracted from the entire corpus. A kernel and cosine similarity were applied to compute syntactic-structure similarity and word similarity, respectively, and the corpus was partitioned using a machine learning procedure similar to the K-Means algorithm that applies both similarities. Experiments on Japanese-English patent documents showed a relative performance improvement of about 2.5% in the best case.
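The K-Means-like partitioning driven by cosine similarity can be sketched as follows; the deterministic initialization and the toy feature vectors are assumptions for illustration:

```python
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def kmeans_cosine(vectors, k, iters=10):
    """K-Means-style partitioning that assigns by maximum cosine similarity."""
    # Deterministic initialization with the first k vectors (an assumption;
    # a real system would use random or k-means++ style seeding).
    centroids = [list(v) for v in vectors[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            best = max(range(k), key=lambda i: cosine(v, centroids[i]))
            clusters[best].append(v)
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = [sum(col) / len(members) for col in zip(*members)]
    return clusters

# Toy sentence feature vectors forming two obvious groups.
sentence_vecs = [(1.0, 0.0), (0.0, 1.0), (0.9, 0.1), (0.1, 0.9)]
clusters = kmeans_cosine(sentence_vecs, 2)
```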


An Object Recognition Method Based on Depth Information for an Indoor Mobile Robot (실내 이동로봇을 위한 거리 정보 기반 물체 인식 방법)

  • Park, Jungkil;Park, Jaebyung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.10
    • /
    • pp.958-964
    • /
    • 2015
  • In this paper, an object recognition method based on depth information from the RGB-D camera Xtion is proposed for an indoor mobile robot. First, the RANdom SAmple Consensus (RANSAC) algorithm is applied to the point cloud obtained from the RGB-D camera to detect and remove the floor points. Next, the remaining point cloud is partitioned into each object's point cloud by the k-means clustering method, and the normal vector of each point is obtained using a k-d tree search. The obtained normal vectors are classified into 18 classes by a trained multi-layer perceptron and used as features for object recognition. To distinguish one object from another, the similarity between them is measured using the Levenshtein distance. To verify the effectiveness and feasibility of the proposed object recognition method, experiments were carried out with several similar boxes.
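The Levenshtein-distance comparison of objects can be sketched directly; the integer sequences below, standing in for per-point normal-vector class labels, are hypothetical:

```python
def levenshtein(a, b):
    """Edit distance between two sequences via dynamic programming."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

# Hypothetical per-point normal-vector class labels (18 classes -> integers).
obj_a = [3, 3, 7, 7, 12, 12, 12]
obj_b = [3, 7, 7, 12, 12, 12, 15]
distance = levenshtein(obj_a, obj_b)
```

A small distance between two label sequences marks the objects as similar, which is why the paper needs it to tell near-identical boxes apart.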

A SNA Based Loads Analysis of Naval Submarine Maintenance

  • Song, Ji-Seok;Kang, Dongsu;Lee, Sang-Hoon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.11
    • /
    • pp.201-210
    • /
    • 2020
  • Navy submarines have developed into complex weapon systems with various equipment, which directly leads to difficulties in submarine maintenance. In addition, the current method of establishing a maintenance plan for submarines limits efficient maintenance because it relies on statistical summaries of personnel numbers, the number of target ships, and consumption time. For efficient maintenance, it is necessary to derive and manage the major maintenance factors based on an understanding of the target. In this paper, the maintenance load rate is defined as a key maintenance factor, and the submarine maintenance data are analyzed using a Social Network Analysis (SNA) scheme to identify phenomena by focusing on the relationships between the analysis targets. Through this, maintenance load characteristics that were not previously revealed by quantitative analysis are derived to identify the areas on which a maintenance manager should focus.

Recommender Systems using Structural Hole and Collaborative Filtering (구조적 공백과 협업필터링을 이용한 추천시스템)

  • Kim, Mingun;Kim, Kyoung-Jae
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.107-120
    • /
    • 2014
  • This study proposes a novel recommender system that uses structural hole analysis to reflect qualitative and emotional information in the recommendation process. Although collaborative filtering (CF) is the most popular recommendation algorithm, it has some limitations, including scalability and sparsity problems. The scalability problem arises when the numbers of users and items become quite large: CF cannot scale up because of the large computation time required to find neighbors in the user-item matrix as users and items increase in real-world e-commerce sites. Sparsity is a common problem of most recommender systems because users generally evaluate only a small portion of all items, and the cold-start problem is the special case of sparsity in which users or items newly added to the system have no ratings at all. When user preference data are sparse, two users or items are unlikely to have common ratings, and CF ends up predicting ratings from a very limited number of similar users. Moreover, it may produce biased recommendations because similarity weights may be estimated from only a small portion of the rating data. In this study, we point out a further limitation of conventional CF: it does not consider qualitative and emotional information about users in the recommendation process because it utilizes only the preference scores of the user-item matrix. To address this limitation, this study proposes a cluster-indexing CF model with structural hole analysis. In general, a structural hole is a location that connects two separate actors without any redundant connections in the network; the actor who occupies a structural hole can easily access non-redundant, varied, and fresh information.
Therefore, the actor who occupies the structural hole may be an important person in the focal network and may represent the focal subgroup, so his or her characteristics may reflect the general characteristics of the users in that subgroup. In this sense, structural hole analysis lets us distinguish the friends and strangers of the focal user. This study uses structural hole analysis to select structural holes in subgroups as initial seeds for a cluster analysis. First, we gather data on users' preference ratings for items and their social network information, using a data collection system developed for this research. Then we perform structural hole analysis to find the structural holes of the social network and use them as cluster centroids for the clustering algorithm. Finally, this study makes recommendations using CF within each user's cluster and compares the recommendation performance of comparative models. The experiments for the proposed model consist of two parts. The first is the structural hole analysis, for which this study employs UCINET version 6, a software package for the analysis of social network data. The second performs the modified clustering and CF using the result of the cluster analysis, for which we developed an experimental system in VBA (Visual Basic for Applications) in Microsoft Excel 2007. The clustering analysis uses the Pearson correlation between user preference rating vectors as the similarity measure, and the CF experiment uses the 'all-but-one' approach. To validate the effectiveness of the proposed model, we apply three comparative types of CF models to the same dataset.
The experimental results show that the proposed model outperforms the comparative models. In particular, the proposed model performs significantly better than the two comparative models with cluster analysis according to the statistical significance test, although the difference between the proposed model and the naive model is not statistically significant.
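The Pearson-correlation similarity and the within-cluster CF prediction used in the second experiment can be sketched as follows; the rating dictionaries are invented for illustration:

```python
from math import sqrt

def pearson(u, v):
    """Pearson correlation over the items two users co-rated."""
    common = [i for i in u if i in v]
    if len(common) < 2:
        return 0.0
    mu = sum(u[i] for i in common) / len(common)
    mv = sum(v[i] for i in common) / len(common)
    num = sum((u[i] - mu) * (v[i] - mv) for i in common)
    den = (sqrt(sum((u[i] - mu) ** 2 for i in common))
           * sqrt(sum((v[i] - mv) ** 2 for i in common)))
    return num / den if den else 0.0

def predict(target, cluster_users, item):
    """Similarity-weighted rating prediction within the target's cluster."""
    pairs = [(pearson(target, o), o[item]) for o in cluster_users if item in o]
    wsum = sum(abs(w) for w, _ in pairs)
    return sum(w * r for w, r in pairs) / wsum if wsum else None

alice = {"i1": 5, "i2": 3, "i3": 4}
cluster = [{"i1": 5, "i2": 3, "i4": 4}, {"i1": 2, "i2": 5, "i4": 1}]
pred = predict(alice, cluster, "i4")
```

In the paper's pipeline the `cluster` members would come from the cluster seeded by a structural hole rather than, as here, a hand-picked list.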

Computational Approaches for Structural and Functional Genomics

  • Brenner, Steven-E.
    • Proceedings of the Korean Society for Bioinformatics Conference
    • /
    • 2000.11a
    • /
    • pp.17-20
    • /
    • 2000
  • Structural genomics aims to provide a good experimental structure or computational model of every tractable protein in a complete genome. Underlying this goal is the immense value of protein structure, especially in permitting recognition of distant evolutionary relationships for proteins whose sequence analysis has failed to find any significant homolog. A considerable fraction of the genes in all sequenced genomes have no known function, and structure determination provides a direct means of revealing homology that may be used to infer their putative molecular function. The solved structures will be similarly useful for elucidating the biochemical or biophysical roles of proteins that have previously been ascribed only phenotypic functions. More generally, knowledge of an increasingly complete repertoire of protein structures will aid structure prediction methods, improve understanding of protein structure, and ultimately lend insight into molecular interactions and pathways. We use computational methods to select families whose structures cannot be predicted and which are likely to be amenable to experimental characterization. The methods employed include modern sequence analysis and clustering algorithms. A critical component is consultation of the presage database for structural genomics, which records the community's experimental work underway and computational predictions. The protein families are ranked according to several criteria, including taxonomic diversity and known functional information. Individual proteins, often homologs from hyperthermophiles, are selected from these families as targets for structure determination. The solved structures are examined for structural similarity to other proteins of known structure, and homologous proteins in sequence databases are computationally modeled to provide a resource of protein structure models complementing the experimentally solved structures.


Personalized insurance product based on similarity (유사도를 활용한 맞춤형 보험 추천 시스템)

  • Kim, Joon-Sung;Cho, A-Ra;Oh, Hayong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.11
    • /
    • pp.1599-1607
    • /
    • 2022
  • The data mainly used for the models include personal information and insurance-product information. With these data, we propose three types of models: a content-based filtering model, a collaborative filtering model, and a classification-based model. The content-based filtering model computes the cosine of the angle between users and items and recommends items based on this cosine similarity; before computing the similarity, users are divided into several groups by their features. Segmentation is performed by the K-means clustering algorithm and by a manually designed algorithm. The collaborative filtering model uses the interactions users have with items. The classification-based model uses decision tree and random forest classifiers to recommend items. According to the results of the research, the content-based filtering model performs best. Since this model recommends items based on demographic and user features, the result indicates that demographic and user features are key to offering more appropriate items.
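The content-based step, ranking products by the cosine of the angle between user and item feature vectors, can be sketched as follows; the feature names and vectors are hypothetical:

```python
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user_vec, products, top_n=2):
    """Rank products by cosine similarity to the user's feature vector."""
    ranked = sorted(products.items(),
                    key=lambda kv: cosine(user_vec, kv[1]), reverse=True)
    return [name for name, _ in ranked[:top_n]]

# Hypothetical feature vectors: (age_norm, dependents_norm, risk_score).
user = (0.3, 0.8, 0.2)
products = {"whole_life": (0.4, 0.9, 0.1),
            "term_life": (0.2, 0.7, 0.3),
            "accident": (0.9, 0.1, 0.9)}
recs = recommend(user, products)
```

In the paper's setting, the K-means segmentation would first split users into groups so that similarity is computed only within each group.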