• Title/Summary/Keyword: Structure-based search (구조기반 검색)


A Study on Refined Information Generation through Classes Composition Based on Reengineering (재공학 기반의 클래스 합성을 통한 정련화된 정보 생성에 관한 연구)

  • 김행곤;한은주
    • Journal of Korea Multimedia Society / v.1 no.2 / pp.239-248 / 1998
  • Software reengineering research seeks solutions to the problem of maintaining existing systems. Reengineering means developing software from existing systems through reverse engineering and forward engineering. It extracts classes from existing software to increase comprehension of the system and to improve the maintainability of the software. One of the most important concepts in reengineering is composition, the restructuring of existing objects from other components. The classes and clusters in storage hold structural relationships with the system's main components so that they can be reused at a higher level, and they are referenced as dynamic information by building an architecture for each of them. Classes are created by an extractor, a searcher, and a composer that represent the existing object-oriented source code, and each class and cluster yields refined information through optimization. A new architecture is created from a cluster based on the relationships among its classes in storage, and this information can later be used as executable code. In this paper, we propose a tool that, based on reengineering, analyzes object-oriented information and applies a composition methodology to present new, refined information to users. The resulting composite classes increase reusability and produce higher-comprehension information that supports the maintainability of existing code.

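For the extractor/searcher/composer pipeline described in the abstract above, the following is a minimal, hypothetical Python sketch of how extracted class descriptors might be composed into a cluster and refined; the `ClassInfo` structure, the merging rule, and all names are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class ClassInfo:
    """A class descriptor extracted from existing object-oriented source code."""
    name: str
    attributes: Set[str] = field(default_factory=set)
    methods: Set[str] = field(default_factory=set)
    collaborators: Set[str] = field(default_factory=set)  # structurally related classes

def search_related(classes: List[ClassInfo], seed: str) -> List[ClassInfo]:
    """Searcher: collect the seed class and every class it references."""
    by_name = {c.name: c for c in classes}
    picked, frontier = {}, [seed]
    while frontier:
        name = frontier.pop()
        cls = by_name.get(name)
        if cls is None or name in picked:
            continue
        picked[name] = cls
        frontier.extend(cls.collaborators)
    return list(picked.values())

def compose_cluster(members: List[ClassInfo]) -> ClassInfo:
    """Composer: merge related classes into one refined cluster descriptor,
    dropping duplicated attributes/methods (a crude stand-in for 'optimization')."""
    cluster = ClassInfo(name="+".join(sorted(c.name for c in members)))
    for c in members:
        cluster.attributes |= c.attributes
        cluster.methods |= c.methods
        cluster.collaborators |= c.collaborators - {c.name}
    cluster.collaborators -= {c.name for c in members}   # keep only external links
    return cluster

if __name__ == "__main__":
    extracted = [
        ClassInfo("Order", {"id", "date"}, {"total"}, {"Item"}),
        ClassInfo("Item", {"sku", "price"}, {"cost"}, set()),
    ]
    print(compose_cluster(search_related(extracted, "Order")))
```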

PreSPI: Design and Implementation of Protein-Protein Interaction Prediction Service System (PreSPI:단백질 상호작용 예측 서비스 시스템 설계 및 구현)

  • Kim, Hong-Soog;Jang, Woo-Hyuk;Lee, Sung-Doke;Han, Dong-Soo
    • Proceedings of the Korean Society for Bioinformatics Conference / 2004.11a / pp.86-100 / 2004
  • As the importance of computational protein-protein interaction prediction has been recognized, many prediction techniques have been proposed. However, these techniques are rarely offered as services that ordinary users can easily access. In this paper, we design and implement the domain combination-based protein interaction prediction technique, known to be one of the most mature and relatively accurate prediction methods, as a service system called PreSPI (Prediction System for Protein Interaction). The implemented system offers three groups of functions: core functions that expose the domain combination-based prediction technique as a service, centered on interaction prediction for input protein pairs; supplementary functions derived from the core functions; and general functions useful to researchers studying protein interactions, such as domain information retrieval for a given protein. Because computational prediction of protein interactions often requires large-scale computation, good performance is essential. The PreSPI system implemented in this paper improves performance by parallelizing processing as appropriate for each service, and supports openness by deploying its functions as web service APIs. The system is also designed in a layered architecture so that protein interaction and domain information changing on the Internet can be flexibly incorporated. Finally, several representative PreSPI services are described in detail with a focus on the user interface, so that new PreSPI users can understand and use the services.

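As a rough illustration of the domain-combination idea behind PreSPI (not the authors' actual scoring formula), the sketch below estimates an interaction score for a protein pair from how often its domain pairs appear among known interacting pairs; the data layout, the scoring rule, and all names are assumptions.

```python
from itertools import product
from collections import Counter
from typing import Dict, List, Tuple

def domain_pair_counts(interactions: List[Tuple[str, str]],
                       domains: Dict[str, List[str]]) -> Counter:
    """Count how often each (domain, domain) combination occurs
    among known interacting protein pairs."""
    counts = Counter()
    for p1, p2 in interactions:
        for d1, d2 in product(domains[p1], domains[p2]):
            counts[tuple(sorted((d1, d2)))] += 1
    return counts

def interaction_score(p1: str, p2: str,
                      domains: Dict[str, List[str]],
                      counts: Counter) -> float:
    """Score a candidate pair by the fraction of its domain combinations
    seen in the interacting set (a naive stand-in for the probabilistic
    score used by domain-combination methods)."""
    pairs = [tuple(sorted((d1, d2)))
             for d1, d2 in product(domains[p1], domains[p2])]
    if not pairs:
        return 0.0
    return sum(1 for dp in pairs if counts[dp] > 0) / len(pairs)

if __name__ == "__main__":
    domains = {"P1": ["SH3", "PDZ"], "P2": ["SH3"], "P3": ["Kinase"]}
    counts = domain_pair_counts([("P1", "P2")], domains)
    print(interaction_score("P2", "P3", domains, counts))  # 0.0: no shared evidence
    print(interaction_score("P1", "P2", domains, counts))  # > 0
```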

Efficient Path Finding in 3D Games by Using Visibility Tests (가시성 검사를 이용한 3차원 게임에서의 효율적인 경로 탐색)

  • Kim, Hyung-Il;Jung, Dong-Min;Um, Ky-Hyun;Cho, Hyung-Je;Kim, Jun-Tae
    • Journal of Korea Multimedia Society / v.9 no.11 / pp.1483-1495 / 2006
  • A navigation mesh represents a terrain as a set of triangles on which characters may move. It can be generated automatically, it is flexible in representing 3D surfaces, and the number of triangles varies with the structure of the terrain. Because characters move on the navigation mesh, path planning can be performed more easily by projecting the 3D surfaces into 2D space. However, when the terrain is represented with an elaborate mesh containing a large number of triangles to achieve more realistic movement, graph-based path finding becomes very inefficient because there are too many states (polygons) to search. In this paper, we propose an efficient path-finding method for 3D games in which the terrain is represented by navigation meshes. Our method uses visibility tests to reduce the search space, so that the search remains fast even on detailed terrain with a large number of polygons. We first find the visible vertices of the obstacles and define the heuristic function as the distance to the goal through those vertices. In this way, the number of states visited by the graph-based search is substantially reduced compared with a plain search that uses the straight-line distance heuristic.

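To make the visibility-test heuristic from the abstract above concrete, here is a minimal A* sketch in which the heuristic is the shortest distance to the goal routed through a given set of visible obstacle vertices; the graph representation, the `visible` vertex set, and all function names are illustrative assumptions rather than the paper's implementation.

```python
import heapq
import math
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def dist(a: Point, b: Point) -> float:
    return math.hypot(a[0] - b[0], a[1] - b[1])

def visibility_heuristic(n: Point, goal: Point, visible: List[Point]) -> float:
    """Estimated cost to the goal routed through a visible obstacle vertex.
    Falls back to the straight-line distance when no vertex is given."""
    if not visible:
        return dist(n, goal)
    return min(dist(n, v) + dist(v, goal) for v in visible)

def a_star(graph: Dict[Point, List[Point]], start: Point, goal: Point,
           visible: List[Point]) -> List[Point]:
    """Plain A* over a point graph using the visibility-based heuristic."""
    open_heap = [(visibility_heuristic(start, goal, visible), 0.0, start, [start])]
    best = {start: 0.0}
    while open_heap:
        _, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        for nb in graph.get(node, []):
            ng = g + dist(node, nb)
            if ng < best.get(nb, math.inf):
                best[nb] = ng
                f = ng + visibility_heuristic(nb, goal, visible)
                heapq.heappush(open_heap, (f, ng, nb, path + [nb]))
    return []

if __name__ == "__main__":
    graph = {(0, 0): [(1, 1)], (1, 1): [(0, 0), (2, 0)], (2, 0): [(1, 1)]}
    print(a_star(graph, (0, 0), (2, 0), visible=[(1, 1)]))
```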

Pre-aggregation Index Method Based on the Spatial Hierarchy in the Spatial Data Warehouse (공간 데이터 웨어하우스에서 공간 데이터의 개념계층기반 사전집계 색인 기법)

  • Jeon, Byung-Yun;Lee, Dong-Wook;You, Byeong-Seob;Kim, Gyoung-Bae;Bae, Hae-Young
    • Journal of Korea Multimedia Society / v.9 no.11 / pp.1421-1434 / 2006
  • Spatial data warehouses provide analytical information for decision support using SOLAP (Spatial On-Line Analytical Processing) operations. Many studies have tried to reduce the analysis cost of SOLAP operations using pre-aggregation methods. These methods use an index composed of fixed-size nodes to support the concept hierarchy, so they leave many unused entries in sparse data areas and cannot support the concept hierarchy in dense data areas. In this paper, we propose a dynamic pre-aggregation index method based on the spatial hierarchy. The proposed method uses the level of the index to support the concept hierarchy. In sparse data areas, if sibling nodes have only a few used entries, those entries are merged into a single node and the parent entries share that node. In dense data areas, if a node holds many objects, the node is connected to a linked list of several nodes and the data is stored in the linked nodes. The proposed method therefore saves the space of unused entries by merging nodes, and it can still support the concept hierarchy because a node is never split across levels, only chained. Experimental results show that the proposed method reduces both space and aggregation search cost while its building cost is similar to that of other methods.

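The two structural tricks in the abstract above, merging sparsely used sibling nodes and chaining overflowing nodes into a linked list instead of splitting them, can be sketched roughly as follows; the node layout, the capacity and merge thresholds, and the names are assumptions for illustration only.

```python
from typing import List, Optional

CAPACITY = 4          # assumed fixed node size
MERGE_THRESHOLD = 2   # merge siblings when each holds this many entries or fewer

class Node:
    """A fixed-capacity index node; 'overflow' chains extra nodes
    instead of splitting, so the concept-hierarchy level is preserved."""
    def __init__(self) -> None:
        self.entries: List[object] = []
        self.overflow: Optional["Node"] = None

    def insert(self, entry: object) -> None:
        node = self
        while len(node.entries) >= CAPACITY:       # dense area: follow/extend the chain
            if node.overflow is None:
                node.overflow = Node()
            node = node.overflow
        node.entries.append(entry)

    def count(self) -> int:
        return len(self.entries) + (self.overflow.count() if self.overflow else 0)

def merge_sparse_siblings(siblings: List[Node]) -> List[Node]:
    """Sparse area: if every sibling is mostly empty, pack their entries
    into one shared node so unused entry slots are reclaimed."""
    if len(siblings) > 1 and all(n.count() <= MERGE_THRESHOLD for n in siblings):
        shared = Node()
        for n in siblings:
            for e in n.entries:
                shared.insert(e)
        return [shared]                            # parent entries now share this node
    return siblings

if __name__ == "__main__":
    a, b = Node(), Node()
    a.insert("r1"); b.insert("r2")
    print(len(merge_sparse_siblings([a, b])))      # 1: merged into a shared node
    dense = Node()
    for i in range(10):
        dense.insert(i)
    print(dense.count())                           # 10, stored across linked nodes
```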

A DNA Index Structure using Frequency and Position Information of Genetic Alphabet (염기문자의 빈도와 위치정보를 이용한 DNA 인덱스구조)

  • Kim Woo-Cheol;Park Sang-Hyun;Won Jung-Im;Kim Sang-Wook;Yoon Jee-Hee
    • Journal of KIISE:Databases / v.32 no.3 / pp.263-275 / 2005
  • In a large DNA database, indexing techniques are widely used for rapid approximate sequence searching. However, most indexing techniques require more space than the original database and are difficult to integrate seamlessly with a DBMS. In this paper, we propose a space-efficient, disk-based indexing and query processing algorithm for approximate DNA sequence searching, in particular for exact match queries, wildcard match queries, and k-mismatch queries. Our indexing method places a sliding window at every possible location of a DNA sequence and extracts its signature by considering the occurrence frequency of each nucleotide. It then stores the set of signatures in a multi-dimensional index such as the R*-tree. In particular, by assigning a weight to each position of a window, it prevents signatures from being concentrated around a few spots in the index space. Our query processing algorithm converts a query sequence into a multi-dimensional rectangle and searches the index for the signatures that overlap the rectangle. Experiments with real biological data sets showed that the proposed method is at least three times, twice, and several orders of magnitude faster than the suffix-tree-based method for exact match, wildcard match, and k-mismatch queries, respectively.
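A rough Python sketch of the signature idea described above, position-weighted nucleotide frequencies over a sliding window mapped to a 4-dimensional vector, is given below; the weighting scheme and function names are assumptions, and the R*-tree step is replaced by simply collecting the vectors in a list.

```python
from typing import Dict, List, Tuple

NUCLEOTIDES = "ACGT"

def window_signature(window: str) -> Tuple[float, float, float, float]:
    """Position-weighted frequency of each nucleotide in one window.
    Later positions get larger weights so that windows with the same
    composition but different layouts map to different points."""
    weights = [(i + 1) / len(window) for i in range(len(window))]
    sig: Dict[str, float] = {c: 0.0 for c in NUCLEOTIDES}
    for w, ch in zip(weights, window.upper()):
        if ch in sig:
            sig[ch] += w
    return tuple(sig[c] for c in NUCLEOTIDES)

def build_signatures(sequence: str,
                     window_len: int) -> List[Tuple[int, Tuple[float, ...]]]:
    """Slide a window over every position and record (offset, signature).
    In the paper the signatures go into a multi-dimensional index such as
    an R*-tree; here they are just collected in a list."""
    return [(i, window_signature(sequence[i:i + window_len]))
            for i in range(len(sequence) - window_len + 1)]

if __name__ == "__main__":
    sigs = build_signatures("ACGTACGGTA", window_len=4)
    for offset, sig in sigs[:3]:
        print(offset, sig)
```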

Character-based Subtitle Generation by Learning of Multimodal Concept Hierarchy from Cartoon Videos (멀티모달 개념계층모델을 이용한 만화비디오 컨텐츠 학습을 통한 등장인물 기반 비디오 자막 생성)

  • Kim, Kyung-Min;Ha, Jung-Woo;Lee, Beom-Jin;Zhang, Byoung-Tak
    • Journal of KIISE / v.42 no.4 / pp.451-458 / 2015
  • Previous multimodal learning methods focus on problem-solving aspects, such as image and video search and tagging, rather than on knowledge acquisition via content modeling. In this paper, we propose the Multimodal Concept Hierarchy (MuCH), a content modeling method that uses a cartoon video dataset, together with a character-based subtitle generation method based on the learned model. The MuCH model has a multimodal hypernetwork layer, in which the patterns of words and image patches are represented, and a concept layer, in which each concept variable is represented by a probability distribution over the words and image patches. The model can learn the characteristics of the characters as concepts from the video subtitles and scene images using a Bayesian learning method, and it can also generate character-based subtitles from the learned model when text queries are provided. In our experiment, the MuCH model learned concepts from 'Pororo' cartoon videos totaling 268 minutes in length and generated character-based subtitles. Finally, we compare the results with those of other multimodal learning models. The experimental results indicate that, given the same text query, our model generates more accurate and more character-specific subtitles than the other models.
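As a loose illustration of the "concept as a probability distribution over words" idea mentioned above (and not the MuCH model itself), the sketch below builds a per-character word distribution from subtitle lines and ranks candidate words for a queried character; the data format, the add-one smoothing, and all names are assumptions.

```python
from collections import Counter, defaultdict
from typing import Dict, List, Tuple

def learn_character_word_dists(subtitles: List[Tuple[str, str]]) -> Dict[str, Counter]:
    """Count word occurrences per character from (character, subtitle line) pairs."""
    counts: Dict[str, Counter] = defaultdict(Counter)
    for character, line in subtitles:
        counts[character].update(line.lower().split())
    return counts

def rank_words(character: str, counts: Dict[str, Counter], top_k: int = 5) -> List[str]:
    """Return the character's most probable words under a simple
    add-one-smoothed unigram distribution (a crude stand-in for the
    concept layer's word distribution)."""
    vocab = {w for c in counts.values() for w in c}
    total = sum(counts[character].values())
    dist = {w: (counts[character][w] + 1) / (total + len(vocab)) for w in vocab}
    return [w for w, _ in sorted(dist.items(), key=lambda kv: kv[1], reverse=True)[:top_k]]

if __name__ == "__main__":
    data = [("Pororo", "let's go play outside"),
            ("Crong", "crong crong hungry"),
            ("Pororo", "let's fly together")]
    model = learn_character_word_dists(data)
    print(rank_words("Pororo", model, top_k=3))
```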

An Exploratory Study on the Big Data Convergence-based NCS Homepage : focusing on the Use of Splunk (빅데이터 융합 기반 NCS 홈페이지에 관한 탐색적 연구: 스플렁크 활용을 중심으로)

  • Park, Seong-Taek;Lee, Jae Deug;Kim, Tae Ung
    • Journal of Digital Convergence / v.16 no.7 / pp.107-116 / 2018
  • One of the key missions is to develop and promote the use of the National Competency Standards (NCS), which are defined as the systemization, by the nation, of the competencies (knowledge, skills, and attitudes) required to perform duties in the workplace for each industrial sector and level. The NCS provides the basis for the design of training programs and detailed specifications for workplace assessment. To promote data-driven service improvement, the commercial product Splunk was introduced; it has proven to be an extremely useful platform because it enables users to search, collect, and organize data in a far more comprehensive and far less labor-intensive way than traditional databases. Leveraging Splunk's built-in data visualization and analytical features, HRD Korea has built custom tools to gain new insight and operational intelligence that the organization never had before. This paper analyzes the NCS homepage; concretely, it applies Splunk to create visualizations and dashboards and to perform various functional, statistical, and structural analyses without web development skills. We present practical uses and implications through case studies.

An Improvement in K-NN Graph Construction using re-grouping with Locality Sensitive Hashing on MapReduce (MapReduce 환경에서 재그룹핑을 이용한 Locality Sensitive Hashing 기반의 K-Nearest Neighbor 그래프 생성 알고리즘의 개선)

  • Lee, Inhoe;Oh, Hyesung;Kim, Hyoung-Joo
    • KIISE Transactions on Computing Practices / v.21 no.11 / pp.681-688 / 2015
  • The k-nearest neighbor (k-NN) graph construction is an important operation in many web-related applications, including collaborative filtering, similarity search, and many other tasks in data mining and machine learning. Despite the many elegant properties of the k-NN graph, the brute-force construction method has a computational complexity of $O(n^2)$, which is prohibitive for large-scale data sets. Thus the (key, value)-based distributed framework MapReduce is gaining increasingly widespread use together with Locality Sensitive Hashing, which is efficient for high-dimensional and sparse data. Following this two-stage strategy, we use locality sensitive hashing to divide users into small subsets and then calculate the similarity between pairs within each subset using a brute-force method on MapReduce. The stage that generates candidate groups is important, since brute-force calculation is performed in the following step, but existing methods do not prevent large candidate groups. In this paper, we propose an efficient algorithm for approximate k-NN graph construction that regroups candidate groups. Experimental results show that our approach is more effective than existing methods in terms of graph accuracy and scan rate.
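A single-machine Python sketch of the two-stage idea above, LSH bucketing followed by brute-force similarity inside each bucket, with oversized buckets split ("regrouped") so that no candidate group grows too large, is shown below; the random-projection hash, the bucket-size cap, and the names are assumptions, and the MapReduce distribution is omitted.

```python
import random
from itertools import combinations
from typing import Dict, List, Tuple

random.seed(0)

def lsh_bucket(vec: List[float], planes: List[List[float]]) -> Tuple[int, ...]:
    """Random-projection LSH: one sign bit per hyperplane."""
    return tuple(1 if sum(v * p for v, p in zip(vec, plane)) >= 0 else 0
                 for plane in planes)

def regroup(bucket: List[int], max_size: int) -> List[List[int]]:
    """Split an oversized candidate group into chunks so that the
    following brute-force step stays cheap (the 'regrouping' step)."""
    return [bucket[i:i + max_size] for i in range(0, len(bucket), max_size)]

def knn_graph(data: List[List[float]], k: int, n_planes: int = 4,
              max_group: int = 50) -> Dict[int, List[int]]:
    dim = len(data[0])
    planes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_planes)]
    buckets: Dict[Tuple[int, ...], List[int]] = {}
    for idx, vec in enumerate(data):
        buckets.setdefault(lsh_bucket(vec, planes), []).append(idx)

    neighbors: Dict[int, Dict[int, float]] = {i: {} for i in range(len(data))}
    for bucket in buckets.values():
        for group in regroup(bucket, max_group):
            for i, j in combinations(group, 2):      # brute force inside the group
                d = sum((a - b) ** 2 for a, b in zip(data[i], data[j]))
                neighbors[i][j] = d
                neighbors[j][i] = d
    return {i: sorted(cand, key=cand.get)[:k] for i, cand in neighbors.items()}

if __name__ == "__main__":
    points = [[random.random() for _ in range(8)] for _ in range(200)]
    graph = knn_graph(points, k=3)
    print(graph[0])   # approximate 3-NN list for the first point
```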

The Construction of Multiform User Profiles Based on Transaction for Effective Recommendation and Segmentation (효과적인 추천과 세분화를 위한 트랜잭션 기반 여러 형태 사용자 프로파일의 구축)

  • Koh, Jae-Jin;An, Hyoung-Keun
    • The KIPS Transactions:PartD / v.13D no.5 s.108 / pp.661-670 / 2006
  • With the development of e-Commerce and the proliferation of easily accessible information, information filtering systems such as recommender and SDI systems have become popular for pruning large information spaces so that users are directed toward the items that best meet their needs and preferences. Many information filtering methods have been proposed to support such systems. XML is emerging as a new standard for information, and filtering systems therefore need new approaches for dealing with XML documents. In this paper, our system provides a method for creating multiform user profiles using XML's ability to represent structure. The system consists of two parts: an administrator profile definition part, in which an administrator defines profiles for analyzing users' purchase patterns before a transaction such as a purchase occurs, and a user profile creation module, which applies the defined profiles. Administrator profiles are built from DTD information and point to specific parts of documents conforming to the DTD. Based on such profiles, the proposed system builds user profiles more accurately, adapts to users' buying behavior, and provides useful product information without inefficient searching.
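A minimal sketch of the idea above, an administrator-defined profile that points at specific parts of XML transaction documents, with the user profile accumulated from the values found there, might look like this in Python; the element paths, the sample XML, and the function names are illustrative assumptions, not the paper's DTD or schema.

```python
import xml.etree.ElementTree as ET
from collections import Counter
from typing import Dict, List

# Administrator profile: which parts of a purchase document to track,
# expressed here as simple element paths (the paper derives these from the DTD).
ADMIN_PROFILE: List[str] = ["item/category", "item/brand"]

def update_user_profile(profile: Dict[str, Counter], xml_transaction: str) -> None:
    """Pull the administrator-selected fields out of one transaction document
    and accumulate them into the user's multiform profile."""
    root = ET.fromstring(xml_transaction)
    for path in ADMIN_PROFILE:
        for element in root.findall(path):
            if element.text:
                profile.setdefault(path, Counter())[element.text.strip()] += 1

if __name__ == "__main__":
    doc = """<purchase>
               <item><category>books</category><brand>Acme</brand></item>
               <item><category>books</category><brand>Foo</brand></item>
             </purchase>"""
    user_profile: Dict[str, Counter] = {}
    update_user_profile(user_profile, doc)
    print(user_profile)   # e.g. {'item/category': Counter({'books': 2}), ...}
```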

Analysis of Magnetic Flux Leakage based Local Damage Detection Sensitivity According to Thickness of Steel Plate (누설자속 기반 강판 두께별 국부 손상 진단 감도 분석)

  • Kim, Ju-Won;Yu, Byoungjoon;Park, Sehwan;Park, Seunghee
    • Journal of Korean Society of Disaster and Security / v.11 no.2 / pp.53-60 / 2018
  • To diagnose local damage in steel plates, the magnetic flux leakage (MFL) method, known as a non-destructive evaluation (NDE) method well suited to continuous ferromagnetic members, was applied in this study. To analyze the sensitivity of MFL-based damage diagnosis according to steel plate thickness, several steel plate specimens of different thicknesses were prepared, and artificial damage of three depths was formed in each specimen. To measure the MFL signals, an MFL sensor head with constant magnetization intensity was fabricated using a hall sensor and a magnetization yoke made of permanent magnets. The magnetic flux signals obtained with the MFL sensor head were improved through a series of signal processing methods. The capability of local damage detection was verified from the MFL signals measured at each damage point. Finally, the peak-to-peak values (P-P values) extracted from the detected MFL signals for each specimen thickness were compared with one another to analyze the sensitivity of MFL-based local damage detection according to the thickness of the steel plate.
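As a simple illustration of the peak-to-peak feature used above for comparing sensitivity across plate thicknesses, the sketch below extracts P-P values from signal segments around given damage positions; the window width, the synthetic signal, and the names are assumptions, not the authors' processing chain.

```python
from typing import List, Sequence

def peak_to_peak(segment: Sequence[float]) -> float:
    """Peak-to-peak amplitude of one signal segment."""
    return max(segment) - min(segment)

def pp_values(signal: Sequence[float], damage_indices: List[int],
              half_window: int = 25) -> List[float]:
    """Cut a window around each damage position in the MFL signal and
    return its peak-to-peak value."""
    values = []
    for idx in damage_indices:
        lo, hi = max(0, idx - half_window), min(len(signal), idx + half_window)
        values.append(peak_to_peak(signal[lo:hi]))
    return values

if __name__ == "__main__":
    import math
    # Synthetic MFL-like signal: a flat baseline with two bump-shaped leak signatures.
    signal = [0.02 * math.sin(i / 7.0) for i in range(600)]
    for center, amp in [(150, 1.0), (400, 0.4)]:
        for i in range(center - 20, center + 20):
            signal[i] += amp * math.exp(-((i - center) / 8.0) ** 2)
    print(pp_values(signal, [150, 400]))   # the larger bump yields the larger P-P value
```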