• Title/Summary/Keyword: Indexing Model


Analysis of Eukaryotic Sequence Patterns using GenScan (GenScan을 이용한 진핵생물의 서열 패턴 분석)

  • Jung, Yong-Gyu;Lim, I-Suel;Cha, Byung-Heun
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.11 no.4
    • /
    • pp.113-118
    • /
    • 2011
  • Sequence homology analysis of the substances underlying life phenomena builds databases by sorting and indexing and demonstrates the usefulness of informatics. In this paper, the Markov models used in the GenScan program are applied to analyze the patterns of complex eukaryotic protein sequences. An exact minimum-distance search becomes infeasible because the complexity of exact calculation increases exponentially. A scoring matrix is used for amino acid substitutions so that substitutions between similar amino acids receive differentiated scores, and hidden Markov models based on transition probabilities are applied. As a result, the Markov models provide a method superior to homologous-sequence analysis using BLASTp for translating sequences and inferring the protein structure of the translated sequences.
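The transition-probability scoring that the abstract's Markov models rely on can be illustrated with a toy first-order chain (the two-letter alphabet and probabilities below are invented for the example; this is not the GenScan implementation):

```python
import math

# Toy two-letter alphabet with invented transition/initial probabilities.
TRANS = {"A": {"A": 0.9, "B": 0.1},
         "B": {"A": 0.5, "B": 0.5}}
INIT = {"A": 0.5, "B": 0.5}

def markov_log_score(seq, trans=TRANS, init=INIT):
    """Log-probability of a sequence under a first-order Markov chain."""
    score = math.log(init[seq[0]])
    for prev, cur in zip(seq, seq[1:]):
        score += math.log(trans[prev][cur])
    return score
```

A sequence that follows the likely transitions scores higher than one that does not, which is the basis for ranking candidate gene structures.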

3D Model Retrieval Using Geometric Information (기하학 정보를 이용한 3차원 모델 검색)

  • Lee Kee-Ho;Kim Nac-Woo;Kim Tae-Yong;Choi Jong-Soo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.10C
    • /
    • pp.1007-1016
    • /
    • 2005
  • This paper presents a feature extraction method for shape-based retrieval of 3D models. Since the feature descriptor of a 3D model should be invariant to translation, rotation and scaling, it is necessary to preprocess the 3D models to represent them in a canonical coordinate system. We use PCA (Principal Component Analysis) to preprocess the 3D models, and also apply it to construct an MBR (Minimum Boundary Rectangle) and a circumsphere. The proposed algorithm is as follows. We generate a circumsphere around each 3D model, where the radius equals 1 (r=1), and locate each model at the center of the circumsphere. We produce concentric spheres with different radii ($r_i=i/n,\;i=1,2,{\ldots},n$). After finding the meshes intersected by the concentric spheres, we compute the curvature of those meshes and use these curvatures as the model descriptor. Experimental results numerically show a performance improvement of the proposed algorithm from min. 0.1 to max. 0.6 in comparison with conventional methods by ANMRR, although our method uses relatively small bins. This paper uses an $R^*$-tree as the indexing structure.
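The PCA preprocessing step described above can be sketched as follows (a minimal sketch over a point cloud; the paper operates on mesh models and additionally builds an MBR, which is omitted here):

```python
import numpy as np

def canonicalize(points):
    """PCA-based normalization: translate to the centroid, rotate onto the
    principal axes, and scale so the circumsphere radius equals 1."""
    pts = np.asarray(points, dtype=float)
    pts = pts - pts.mean(axis=0)                    # translation invariance
    cov = np.cov(pts.T)                             # 3x3 covariance matrix
    _, vecs = np.linalg.eigh(cov)                   # eigenvectors, ascending
    pts = pts @ vecs[:, ::-1]                       # rotation invariance
    pts = pts / np.linalg.norm(pts, axis=1).max()   # scaling invariance (r=1)
    return pts
```

The concentric spheres $r_i=i/n$ would then be intersected with the normalized model to sample the curvatures used as the descriptor.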

A Study of Developing Variable-Scale Maps for Management of Efficient Road Network (효율적인 네트워크 데이터 관리를 위한 가변-축척 지도 제작 방안)

  • Joo, Yong Jin
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.21 no.4
    • /
    • pp.143-150
    • /
    • 2013
  • The purpose of this study is to suggest a methodology for developing a variable-scale network model, which can derive a large-scale road network at a detailed level corresponding to small-scale linear objects at various levels of abstraction. For this purpose, first, the definition of terms, the benefits, and the specific procedures related to a variable-scale model were examined. Second, the representation levels and the layer components for designing the variable-scale map were presented. In addition, a rule-based data generating method and an indexing structure for higher LoD were defined. Finally, the model was implemented and verified on the road network of the study area (Jeju-do) so that the proposed algorithm can be put to practical use. That is, the generated variable-scale road network was saved and managed in a spatial database (Oracle Spatial), and a performance analysis was carried out to assess the effectiveness and feasibility of the model.

A Semantic Text Model with Wikipedia-based Concept Space (위키피디어 기반 개념 공간을 가지는 시멘틱 텍스트 모델)

  • Kim, Han-Joon;Chang, Jae-Young
    • The Journal of Society for e-Business Studies
    • /
    • v.19 no.3
    • /
    • pp.107-123
    • /
    • 2014
  • Current text mining techniques suffer from the problem that conventional text representation models cannot express the semantic or conceptual information of textual documents written in natural languages. The conventional text models represent textual documents as bags of words; these include the vector space model, Boolean model, statistical model, and tensor space model. Such models express documents only with term literals for indexing and frequency-based weights for the corresponding terms; that is, they ignore the semantic information, sequential order information, and structural information of terms. Most text mining techniques have been developed assuming that the given documents are represented with 'bag-of-words' based text models. However, confronting the big data era, a new paradigm of text representation model is required which can analyze huge amounts of textual documents more precisely. Our text model regards the 'concept' as an independent space on a par with the 'term' and 'document' spaces used in the vector space model, and it expresses the relatedness among the three spaces. To develop the concept space, we use Wikipedia data, in which each article defines a single concept. Consequently, a document collection is represented as a 3-order tensor with semantic information, and the proposed model is therefore called the text cuboid model in our paper. Through experiments using the popular 20NewsGroup document corpus, we prove the superiority of the proposed text model in terms of document clustering and concept clustering.
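The 3-order tensor (document x term x concept) at the heart of the text cuboid model can be sketched on toy data (the documents, vocabulary, and concept-term relatedness weights below are invented for illustration; in the paper the concepts and weights come from Wikipedia):

```python
import numpy as np

docs = [["apple", "fruit"], ["apple", "computer"]]
vocab = ["apple", "fruit", "computer"]
# Hypothetical concept-term relatedness, as if derived from Wikipedia articles.
concepts = {"Apple_(fruit)": {"apple": 0.8, "fruit": 1.0},
            "Apple_Inc.":    {"apple": 0.7, "computer": 1.0}}

# T[d, t, c] = term frequency of term t in doc d, weighted by its
# relatedness to concept c.
T = np.zeros((len(docs), len(vocab), len(concepts)))
for d, doc in enumerate(docs):
    for t, term in enumerate(vocab):
        tf = doc.count(term)
        for c, rel in enumerate(concepts.values()):
            T[d, t, c] = tf * rel.get(term, 0.0)
```

Each slice of the tensor along the concept axis gives a concept-weighted view of the term-document matrix, which is what enables clustering in concept space as well as document space.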

Recommender Systems using Structural Hole and Collaborative Filtering (구조적 공백과 협업필터링을 이용한 추천시스템)

  • Kim, Mingun;Kim, Kyoung-Jae
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.107-120
    • /
    • 2014
  • This study proposes a novel recommender system using structural hole analysis to reflect qualitative and emotional information in the recommendation process. Although collaborative filtering (CF) is known as the most popular recommendation algorithm, it has some limitations, including scalability and sparsity problems. The scalability problem arises when the volume of users and items becomes quite large: CF cannot scale up due to the large computation time for finding neighbors in the user-item matrix as the number of users and items increases in real-world e-commerce sites. Sparsity is a common problem of most recommender systems because users generally evaluate only a small portion of all items. In addition, the cold-start problem is a special case of the sparsity problem, arising when users or items are newly added to the system with no ratings at all. When the user preference data is sparse, two users or items are unlikely to have common ratings, so CF will predict ratings using a very limited number of similar users. Moreover, it may produce biased recommendations because similarity weights may be estimated using only a small portion of the rating data. In this study, we point out a further limitation of conventional CF: it does not consider qualitative and emotional information about users in the recommendation process because it only utilizes the preference scores of the user-item matrix. To address this limitation, this study proposes a cluster-indexing CF model that uses structural hole analysis for recommendations. In general, a structural hole is a location that connects two otherwise separate actors without any redundant connections in the network. The actor who occupies a structural hole can easily access non-redundant, varied and fresh information. Therefore, such an actor may be an important person in the focal network, representative of the focal subgroup, and his or her characteristics may represent the general characteristics of the users in that subgroup. In this sense, we can distinguish friends and strangers of the focal user using structural hole analysis. This study uses structural hole analysis to select structural holes in subgroups as initial seeds for a cluster analysis. First, we gather data about users' preference ratings for items and their social network information; for this, we developed a data collection system. Then, we perform structural hole analysis to find the structural holes of the social network. Next, we use these structural holes as cluster centroids for the clustering algorithm. Finally, this study makes recommendations using CF within each user's cluster and compares the recommendation performance against comparative models. The experiments consist of two parts. The first is the structural hole analysis, for which this study employs UCINET version 6, a software package for the analysis of social network data. The second performs the modified clustering and CF using the result of the cluster analysis; for this we developed an experimental system using VBA (Visual Basic for Applications) in Microsoft Excel 2007. The modified clustering experiment uses the Pearson correlation between user preference rating vectors as the similarity measure, and the CF experiment uses the 'all-but-one' approach. To validate the effectiveness of our proposed model, we apply three comparative types of CF models to the same dataset. The experimental results show that the proposed model outperforms the comparative models. In particular, the proposed model performs significantly better than the two comparative models with cluster analysis according to the statistical significance test, although the difference between the proposed model and the naive model is not statistically significant.
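The Pearson-correlation similarity and neighborhood prediction used in the CF stage can be sketched as follows (a minimal user-based sketch; the structural-hole seeding and cluster indexing of the proposed model are not reproduced):

```python
import numpy as np

def pearson_sim(u, v):
    """Pearson correlation computed over co-rated items (0 means unrated)."""
    mask = (u > 0) & (v > 0)
    if mask.sum() < 2:
        return 0.0
    du, dv = u[mask] - u[mask].mean(), v[mask] - v[mask].mean()
    denom = np.sqrt((du ** 2).sum() * (dv ** 2).sum())
    return float(du @ dv / denom) if denom else 0.0

def predict(ratings, user, item, k=2):
    """Predict a rating from the k most similar users who rated the item,
    keeping only positively correlated neighbors."""
    sims = [(pearson_sim(ratings[user], ratings[v]), v)
            for v in range(len(ratings)) if v != user and ratings[v, item] > 0]
    top = [(s, v) for s, v in sorted(sims, reverse=True)[:k] if s > 0]
    den = sum(s for s, _ in top)
    if not den:  # no useful neighbors: fall back to the user's own mean
        return ratings[user][ratings[user] > 0].mean()
    return sum(s * ratings[v, item] for s, v in top) / den
```

The sparsity problem described above shows up directly in `pearson_sim`: with few co-rated items the correlation is estimated from very little data, which is what motivates the cluster-based neighbor selection.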

Spatial Analysis to Capture Person-Environment Interactions through Spatio-Temporally Extended Topology (시공간적으로 확장된 토폴로지를 이용한 개인 환경간 상호작용 파악 공간 분석)

  • Lee, Byoung-Jae
    • Journal of the Korean Geographical Society
    • /
    • v.47 no.3
    • /
    • pp.426-439
    • /
    • 2012
  • The goal of this study is to propose a new method to capture qualitative personal spatial behavior. Beyond tracking or indexing changes in a person's location, the changes in the relationships between a person and the environment are treated as the main source for the formal model of this study. Specifically, this paper focuses on the movement behavior of a person near the boundary of a region. To capture such behavior, a new formal approach integrating an object's scope of influence is described. Such an object, a spatio-temporally extended point (STEP), is modeled by treating its scope of influence as a potential event or interaction area in conjunction with its location. The formalism presented is based on a topological data model and introduces a 12-intersection model to represent the topological relations between a region and a STEP in 2-dimensional space. Based on the STEP concept, prototype analysis results are provided using GPS tracking data from the real world.
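The 12-intersection formalism itself is not reproduced here, but the core idea, classifying a point together with its circular scope of influence against a region, can be sketched for the simplest circular-region case (the relation names and geometry below are simplified assumptions, not the paper's model):

```python
import math

def step_relation(p, r_scope, center, r_region):
    """Classify a spatio-temporally extended point (a point plus a circular
    scope of influence of radius r_scope) against a circular region,
    using distances between the two boundaries."""
    d = math.dist(p, center)
    if d + r_scope < r_region:   # scope entirely inside the region
        return "inside"
    if d - r_scope > r_region:   # scope entirely outside the region
        return "disjoint"
    if d < r_region:             # point inside, scope crosses the boundary
        return "inside-overlapping-boundary"
    return "outside-overlapping-boundary"
```

The extra "overlapping-boundary" states are exactly what a point-only model cannot express: they capture a person lingering near a region boundary before crossing it.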


Interactive Collision Detection for Deformable Models using Streaming AABBs

  • Zhang, Xinyu;Kim, Young-J.
    • The HCI Society of Korea: Conference Proceedings
    • /
    • 2007.02c
    • /
    • pp.306-317
    • /
    • 2007
  • We present an interactive and accurate collision detection algorithm for deformable, polygonal objects based on the streaming computational model. Our algorithm can detect all possible pairwise primitive-level intersections between two severely deforming models at highly interactive rates. In our streaming computational model, we consider a set of axis-aligned bounding boxes (AABBs) that bound each of the given deformable objects as an input stream and perform massively parallel pairwise overlap tests on the incoming streams. As a result, we are able to prevent performance stalls in the streaming pipeline that can be caused by the expensive indexing mechanisms required by bounding-volume-hierarchy-based streaming algorithms. At run time, as the underlying models deform, we employ a novel streaming algorithm to update the geometric changes in the AABB streams. Moreover, in order to obtain only the computed results (i.e., collision results between AABBs) without reading back the entire output streams, we propose a streaming encoding/decoding strategy that can be performed in a hierarchical fashion. After determining the overlapping AABBs, we perform a primitive-level (e.g., triangle) intersection check on a serial computational model such as a CPU. We implemented the entire pipeline of our algorithm using off-the-shelf graphics processors (GPUs), such as the nVIDIA GeForce 7800 GTX, for streaming computations, and Intel Dual Core 3.4G processors for serial computations. We benchmarked our algorithm with models of varying complexity, ranging from 15K up to 50K triangles, under various deformation motions, and obtained timings of 30~100 FPS depending on the complexity of the models and their relative configurations. Finally, we made comparisons with a well-known GPU-based collision detection algorithm, CULLIDE [4], and observed roughly a threefold performance improvement over the earlier approach. We also made comparisons with a SW-based AABB culling algorithm [2] and observed roughly a twofold improvement.
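The per-pair AABB overlap test that the streaming pipeline parallelizes is the standard interval test on each axis; here is a serial CPU sketch (the GPU streaming, encoding/decoding, and update stages of the paper are not reproduced):

```python
from dataclasses import dataclass

@dataclass
class AABB:
    lo: tuple  # (xmin, ymin, zmin)
    hi: tuple  # (xmax, ymax, zmax)

def overlaps(a, b):
    """Two axis-aligned boxes overlap iff their intervals overlap on every axis."""
    return all(a.lo[i] <= b.hi[i] and b.lo[i] <= a.hi[i] for i in range(3))

def broad_phase(boxes_a, boxes_b):
    """Brute-force pairwise culling between two box streams; the paper performs
    these independent tests in parallel on the GPU instead."""
    return [(i, j) for i, a in enumerate(boxes_a)
                   for j, b in enumerate(boxes_b) if overlaps(a, b)]
```

Only the surviving pairs proceed to the exact triangle-level intersection check on the CPU.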


Implementation of an Efficient Requirements Analysis Support System using Similarity Measure Techniques (유사도 측정 기법을 이용한 효율적인 요구 분석 지원 시스템의 구현)

  • Kim, Hark-Soo;Ko, Young-Joong;Park, Soo-Yong;Seo, Jung-Yun
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.1
    • /
    • pp.13-23
    • /
    • 2000
  • As software becomes more complicated and large-scaled, users' demands become more varied and their expectations of software products rise. Therefore, it is very important that a software engineer analyzes users' requirements precisely and applies them effectively in the development step. This paper presents a requirements analysis system that effectively reduces and revises errors in requirements specification analysis. By measuring the similarity among requirements documents and sentences, the system assists users in analyzing the dependency among requirements specifications and in finding traceability, redundancy, inconsistency and incompleteness among requirements sentences. It also extracts sentences that contain ambiguous words. The indexing method for the similarity measurement combines a sliding window model and a dependency structure model, so that each model can compensate for the other's weaknesses. This paper verifies the efficiency of the similarity measure techniques through experiments and presents a process of requirements specification analysis using the implemented system.
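The sliding-window part of the indexing can be sketched as follows (a toy illustration; the dependency-structure model and the paper's actual similarity formula are not reproduced, and cosine similarity is used here as an assumption):

```python
from collections import Counter
import math

def window_terms(tokens, size=2):
    """Sliding-window indexing: index every contiguous n-gram up to `size`,
    so adjacent-word context survives into the term vector."""
    terms = Counter(tokens)
    for n in range(2, size + 1):
        for i in range(len(tokens) - n + 1):
            terms[" ".join(tokens[i:i + n])] += 1
    return terms

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0
```

Requirement sentences whose similarity exceeds a threshold would then be flagged as candidates for redundancy or inconsistency checks.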


Degradation Quantification Method and Degradation and Creep Life Prediction Method for Nickel-Based Superalloys Based on Bayesian Inference (베이지안 추론 기반 니켈기 초합금의 열화도 정량화 방법과 열화도 및 크리프 수명 예측의 방법)

  • Junsang, Yu;Hayoung, Oh
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.27 no.1
    • /
    • pp.15-26
    • /
    • 2023
  • The purpose of this study is to determine an artificial-intelligence-based degradation index from cross-sectional microstructure images, taken with a scanning electron microscope, of specimens obtained from creep tests of DA-5161 SX, a nickel-based superalloy used as a material for high-temperature parts. It proposes a new quantification method, a model that predicts degradation based on Bayesian inference without destroying components of the high-temperature parts of operating equipment, and a creep life prediction model that predicts the Larson-Miller Parameter (LMP). The new degradation indexing method infers a consistent representative value from a small number of images based on the geometric characteristics of the gamma prime phase, a microstructural constituent of nickel-based superalloys, and the degradation index and LMP are predicted from information on the environmental conditions of the material without destroying the high-temperature parts.
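The Larson-Miller Parameter that the second model predicts is the standard creep-life measure LMP = T(C + log10 t_r), with temperature in kelvin, rupture time in hours, and the material constant C commonly taken as 20:

```python
import math

def larson_miller(T_kelvin, rupture_hours, C=20.0):
    """Larson-Miller parameter: LMP = T * (C + log10(t_r)).

    T_kelvin: absolute temperature; rupture_hours: time to creep rupture;
    C: material constant, conventionally about 20 for many alloys.
    """
    return T_kelvin * (C + math.log10(rupture_hours))
```

Because LMP collapses temperature and time into one variable, predicting it lets the model estimate remaining creep life at service conditions other than those tested.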

An Integrated Ontological Approach to Effective Information Management in Science and Technology (과학기술 분야 통합 개념체계의 구축 방안 연구)

  • 정영미;김명옥;이재윤;한승희;유재복
    • Journal of the Korean Society for information Management
    • /
    • v.19 no.1
    • /
    • pp.135-161
    • /
    • 2002
  • This study presents a multilingual integrated ontological approach that enables linking classification systems, thesauri, and terminology databases in science and technology for more effective indexing and information retrieval online. In this integrated system, we designed a thesaurus model with the concept as its unit and designated essential data elements for a terminology database on the basis of the ISO 12620 standard. The classification system for science and technology adopted in this study provides subject access channels from other existing classification systems through its mapping table. A prototype system was implemented with the field of nuclear energy as the application area.