• Title/Summary/Keyword: Node Comparison

Search Result 1,563, Processing Time 0.024 seconds

Rough Computational Annotation and Hierarchical Conserved Area Viewing Tool for Genomes Using Multiple Relation Graph. (다중 관계 그래프를 이용한 유전체 보존영역의 계층적 시각화와 개략적 전사 annotation 도구)

  • Lee, Do-Hoon
    • Journal of Life Science
    • /
    • v.18 no.4
    • /
    • pp.565-571
    • /
    • 2008
  • Owing to the rapid development of bioinformatics technologies, a wide variety of biological data has been produced in silico, and complex, large-scale biodata are now used to meet researchers' requirements. Developing visualization and annotation tools for such data remains an active topic even after a decade of study, because the diversity of data and of user requirements makes a general-purpose tool difficult to build. In this paper, I propose a novel system, the Genome Viewer and Annotation tool (GenoVA), to annotate and visualize relationships among genomes using known information and a multiple relation graph. Several multiple-alignment tools exist, but the complexity of their constraints causes them to lose conserved areas. GenoVA extracts all associated information between every pair of genomes by extending pairwise alignment. A high-frequency conserved area with a high BLAST score forms a block node of the relation graph, and the system represents the multiple relation graph by connecting associated block nodes. The system also shows known information such as COG, genes, and the hierarchical path of each block node, so it can annotate missed areas and unknown genes by navigating a block node's cluster. I experimented with ten bacterial genomes to extract the features for visualizing and annotating among them. GenoVA also supports simple, rough computational annotation of a new genome.
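The block-node idea above can be sketched in a few lines: conserved areas from pairwise alignments become nodes once they pass a score cutoff, and nodes referring to the same conserved region are connected. This is a minimal illustrative sketch; the function names, tuple layout, and threshold are our assumptions, not GenoVA's actual data structures.

```python
from collections import defaultdict

SCORE_THRESHOLD = 50.0  # assumed BLAST-score cutoff for forming a block node

def build_relation_graph(hits):
    """hits: list of (genome_a, genome_b, region_id, score) pairwise results."""
    # Keep only high-scoring conserved areas as block nodes.
    blocks = [h for h in hits if h[3] >= SCORE_THRESHOLD]
    graph = defaultdict(set)
    # Connect block nodes that refer to the same conserved region.
    for i, a in enumerate(blocks):
        for b in blocks[i + 1:]:
            if a[2] == b[2]:
                graph[a].add(b)
                graph[b].add(a)
    return graph

hits = [
    ("g1", "g2", "regA", 80.0),
    ("g2", "g3", "regA", 65.0),
    ("g1", "g3", "regB", 30.0),  # below threshold, dropped
]
g = build_relation_graph(hits)
```

Navigating such a graph along connected block nodes is what lets shared conserved regions in one genome suggest annotations for unannotated regions in another.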

Power Consumption Analysis of Routing Protocols using Sensor Network Simulator (센서 네트워크 시뮬레이터를 이용한 라우팅 프로토콜의 전력소모량 분석)

  • Kim, Bang-Hyun;Jung, Yong-Doc;Kim, Tea-Kyu;Kim, Jong-Hyun
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2006.10a
    • /
    • pp.414-418
    • /
    • 2006
  • A sensor network, the infrastructure of ubiquitous computing, consists of a large number of sensor nodes built from very small hardware. The topology and routing scheme of such a network must be chosen according to its purpose, and the hardware and software must be modified when necessary. Designing an optimal sensor network therefore requires a sensor network simulator that can verify system behavior and predict performance. Existing sensor network simulators were developed for specific hardware or operating systems and can only be used for those systems; moreover, they provide no means of estimating power consumption or program execution time, which are key issues in system design. In this study, we developed a simulator that resolves these problems and used it to analyze the power consumption of LEACH, TEEN, and APTEEN, hierarchical routing protocols for sensor networks. As the simulation workload, instruction traces were taken from execution images generated by a cross-compiler for the ATmega128L microcontroller: each routing protocol was implemented as an application that runs on an actual sensor board, and the compiled execution image was used as the workload. The routing programs were implemented on Nano-Q+ 1.6.1, ETRI's sensor network operating system, and the hardware platform was Octacomm's Nano-24 sensor board. The simulation results show that the routing protocol should be chosen to match the network's purpose: LEACH suits applications that periodically check the network's status, TEEN suits applications that must detect environmental changes at any time, and APTEEN is the most effective routing protocol when both power consumption and functionality are considered.
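Power comparisons of protocols such as LEACH, TEEN, and APTEEN are often reasoned about with a first-order radio energy model. The sketch below uses textbook illustrative constants, not values measured on the Nano-24 board or produced by the paper's simulator.

```python
# First-order radio energy model (assumed constants, free-space path loss).
E_ELEC = 50e-9      # J/bit consumed by transmit/receive electronics
EPS_AMP = 100e-12   # J/bit/m^2 consumed by the transmit amplifier

def tx_energy(bits, distance_m):
    """Energy to transmit `bits` over `distance_m` meters."""
    return E_ELEC * bits + EPS_AMP * bits * distance_m ** 2

def rx_energy(bits):
    """Energy to receive `bits`."""
    return E_ELEC * bits

# Sending a 2000-bit packet 50 m and receiving it on the other side:
total = tx_energy(2000, 50) + rx_energy(2000)
```

Because the amplifier term grows with the square of the distance, hierarchical protocols that route through nearby cluster heads can save substantial energy over direct long-range transmission.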

PBFiltering: An Energy Efficient Skyline Query Processing Method using Priority-based Bottom-up Filtering in Wireless Sensor Networks (PBFiltering: 무선 센서 네트워크에서 우선순위 기반 상향식 필터링을 이용한 에너지 효율적인 스카이라인 질의 처리 기법)

  • Seong, Dong-Ook;Park, Jun-Ho;Kim, Hak-Sin;Park, Hyoung-Soon;Roh, Kyu-Jong;Yeo, Myung-Ho;Yoo, Jae-Soo
    • Journal of KIISE:Databases
    • /
    • v.36 no.6
    • /
    • pp.476-485
    • /
    • 2009
  • In sensor networks, many methods have been proposed to perform in-network aggregation effectively. Unlike general aggregation queries, skyline query processing must compare multi-dimensional data to produce its result, which makes skyline queries very difficult to process in sensor networks; filtering out unnecessary data is therefore essential for energy-efficient skyline query processing. Existing approaches such as MFTAC restrict unnecessary data transmissions by deploying filters to all sensors, but network lifetime is reduced by the energy consumed in transmitting the filters and the many false-positive data. In this paper, we propose a bottom-up, filtering-based in-network skyline query processing algorithm that reduces the energy consumed by filter transmission, together with a PBFiltering technique that improves filtering performance. The proposed algorithm creates a skyline filter table (SFT) during the data-gathering process, as data are sent from the sensor nodes to the base station, and uses it to filter out unnecessary transmissions. Experimental results show that our algorithm reduces false positives and improves network lifetime over the existing method.
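At the heart of any skyline filter is the dominance test: a reading need not be transmitted if some filter tuple is no worse in every dimension and strictly better in at least one. The sketch below (our own function names, assuming smaller values are better) illustrates that test, not the paper's actual SFT protocol.

```python
def dominates(p, q):
    """True if p dominates q (minimization in every dimension)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def filter_readings(readings, filters):
    """Drop readings dominated by any entry of the skyline filter table."""
    return [r for r in readings if not any(dominates(f, r) for f in filters)]

sft = [(2, 3), (4, 1)]                # hypothetical skyline filter table entries
readings = [(3, 4), (1, 5), (5, 2)]   # multi-dimensional sensor readings
survivors = filter_readings(readings, sft)
```

Only non-dominated readings travel up the routing tree, which is where the energy savings over transmit-everything aggregation come from.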

Characterization of Ecological Networks on Wetland Complexes by Dispersal Models (분산 모형에 따른 습지경관의 생태 네트워크 특성 분석)

  • Kim, Bin;Park, Jeryang
    • Journal of Wetlands Research
    • /
    • v.21 no.1
    • /
    • pp.16-26
    • /
    • 2019
  • Wetlands provide diverse ecosystem services, such as habitat provision and hydrological control for flora and fauna, and the wetlands within a wetlandscape constitute an ecosystem through their interactions. Therefore, to evaluate wetland functions such as resilience, it is necessary to analyze the ecological connectivity formed between wetlands, which also exhibit hydrologically dynamic behaviors. In this study, by defining wetlands as ecological nodes, we generated ecological networks by connecting wetlands according to dispersal models of wetland species, and then analyzed the characteristics of these networks using various network metrics. Dispersal based on a threshold distance produced high local clustering compared to the exponential dispersal kernel and the heavy-tailed dispersal model, but low efficiency of movement between wetlands; the stochastic dispersal models, in contrast, produced low local clustering with high movement efficiency. Our results confirm that the characteristics of the ecological network differ completely depending on which dispersal model is chosen, and that the model must be selected carefully, since the network properties strongly affect the interpretation of network structure and function.
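The two families of models contrasted above can be sketched directly: a threshold model connects every pair of wetlands closer than a cutoff distance, while an exponential dispersal kernel connects a pair with probability decaying in their distance. The coordinates, cutoff, and kernel scale below are illustrative assumptions.

```python
import math
import random

def threshold_edges(coords, d0):
    """Connect wetlands closer than a threshold distance d0 (deterministic)."""
    edges = set()
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            if math.dist(coords[i], coords[j]) <= d0:
                edges.add((i, j))
    return edges

def kernel_edges(coords, alpha, rng):
    """Connect each pair with probability exp(-d/alpha) (stochastic kernel)."""
    edges = set()
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            d = math.dist(coords[i], coords[j])
            if rng.random() < math.exp(-d / alpha):
                edges.add((i, j))
    return edges

wetlands = [(0, 0), (1, 0), (0, 1), (5, 5)]          # toy wetland coordinates
det = threshold_edges(wetlands, 1.5)                  # deterministic network
sto = kernel_edges(wetlands, 2.0, random.Random(42))  # stochastic network
```

In the threshold network the three nearby wetlands form a triangle (high local clustering) while the distant one is isolated; the kernel model can occasionally bridge long distances, which is what drives its higher movement efficiency.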

Exploration of Knowledge Hiding Research Trends Using Keyword Network Analysis (키워드 네트워크 분석을 활용한 지식은폐 연구동향 분석)

  • Joo, Jaehong;Song, Ji Hoon
    • Knowledge Management Research
    • /
    • v.22 no.1
    • /
    • pp.217-242
    • /
    • 2021
  • The purpose of this study is to examine research trends in the field of individual knowledge hiding through keyword network analysis. As individuals in organizations go beyond not sharing their knowledge and intentionally hide it, and as research on knowledge hiding steadily spreads, it is necessary to examine research trends regarding knowledge-hiding behaviors. For the keyword network analyses, we collected 578 keywords of 346 distinct kinds from 120 articles on knowledge-hiding behaviors, transformed them into 86 nodes and 667 links using data-standardization criteria, and analyzed the resulting keyword network. Moreover, this study scrutinized knowledge-hiding trends by comparing a conceptual model of knowledge hiding based on the literature review with the network structure obtained from the keyword network analysis. As a result, first, the keywords knowledge sharing, creativity, and performance ranked higher than the others in degree, betweenness, and closeness centrality. Second, the study analyzed ego networks around psychological ownership and individual emotion, which are theoretically associated with knowledge hiding, and explored the relationships between variables by comparison with the conceptual model. Finally, the study suggests theoretical and practical implications and presents limitations and suggestions for future research based on the findings.
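The pipeline described above, keywords to links to centrality, can be sketched with a toy corpus. The articles and keywords below are illustrative, not drawn from the 120 analyzed papers, and degree centrality stands in for the full set of centrality measures.

```python
from collections import Counter
from itertools import combinations

# Toy stand-in for the article keyword lists (assumed, for illustration).
articles = [
    ["knowledge hiding", "knowledge sharing", "creativity"],
    ["knowledge hiding", "performance"],
    ["knowledge sharing", "performance"],
]

# Co-occurrence within one article creates a link between two keyword nodes.
links = Counter()
for keywords in articles:
    for a, b in combinations(sorted(set(keywords)), 2):
        links[(a, b)] += 1

# Degree centrality: number of distinct neighbors of each keyword node.
degree = Counter()
for a, b in links:
    degree[a] += 1
    degree[b] += 1
```

Ego networks, as used for psychological ownership and individual emotion, are then just the subgraph of `links` restricted to one node and its neighbors.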

Content Diversity Analysis of Elementary Science Authorized Textbooks according to the 2015 Revised Curriculum: Focusing on the "Weight of an Object" Unit (2015 개정 교육과정에 따른 초등 과학 검정 교과서 내용 다양성 분석 - '물체의 무게' 단원을 중심으로 -)

  • Shin, Jung-Yun;Park, Sang-Woo;Jeong, Hyeon-Ji;Hong, Mi-Na;Kim, Hyeon-Jae
    • Journal of Korean Elementary Science Education
    • /
    • v.41 no.2
    • /
    • pp.307-324
    • /
    • 2022
  • This study examined the content diversity of seven authorized science textbooks by comparing the characteristics of their science-concept descriptions and the contents of their inquiry activities in the "weight of objects" unit. For each textbook, the flow and uniqueness of the concept-description process were analyzed, and the number of nodes and links and the most highly connected words were determined using language network analysis. In addition, the inquiry subject, inquiry type, science process skills, and uniqueness of the inquiry activities in each textbook were investigated. The results showed that the authorized textbooks displayed less diversity than expected in both their methods of describing scientific concepts and the composition of their inquiry activities: the learning elements, the inclusion of subconcepts, and the central words were similar across textbooks, and the inquiry activities were similar in content, inquiry type, and science process skills. In particular, these textbooks introduced no research topics or experimental methods absent from previous textbooks. Nevertheless, although the authorized-textbook system was developed from the same curriculum, some efforts were made to exploit its strengths: the sequence of subconcepts used to explain the core contents differed across textbooks, so the explanation process could be divided into several types, and although the contents of the inquiry activities were the same, the materials used differed across textbooks in order to improve on difficulties in the existing experiments. These findings call for continued efforts to utilize the strengths of authorized textbooks.

Estimation of Displacements Using Artificial Intelligence Considering Spatial Correlation of Structural Shape (구조형상 공간상관을 고려한 인공지능 기반 변위 추정)

  • Seung-Hun Shin;Ji-Young Kim;Jong-Yeol Woo;Dae-Gun Kim;Tae-Seok Jin
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.36 no.1
    • /
    • pp.1-7
    • /
    • 2023
  • An artificial intelligence (AI) method based on image deep learning is proposed to predict the entire displacement shape of a structure from partial displacement features. The performance of the method was investigated through a structural test of a steel frame. An image-to-image regression (I2IR) training method was developed based on the U-Net layer for image recognition; in the I2IR method, the U-Net is modified so that, given images of partial displacement shapes of a structure, the AI network generates images of the entire displacement shape. Furthermore, training of displacements combined with location features was developed so that nodal displacement values could be used in AI training together with their corresponding nodal coordinates. The proposed training methods can consider correlations between nodal displacements in 3D space, and the accuracy of displacement prediction is improved compared with conventional artificial neural network training. Displacements of the steel frame predicted during the structural tests using the proposed methods were compared with 3D-scanned displacement shapes, and the results show that the AI predictions properly follow the measured displacements.

Semantic Access Path Generation in Web Information Management (웹 정보의 관리에 있어서 의미적 접근경로의 형성에 관한 연구)

  • Lee, Wookey
    • Journal of the Korea Society of Computer and Information
    • /
    • v.8 no.2
    • /
    • pp.51-56
    • /
    • 2003
  • Structuring Web information supports a strong user-side viewpoint: a user pursues his or her own needs when browsing a specific Web site. Beyond the depth-first and breadth-first algorithms, the Web information is abstracted into a hierarchical structure, and a prototype system is suggested to visualize it and represent its semantic significance. As a motivating example, a Web test site is presented and analyzed with respect to several keywords. As future research, the Web site model should be extended to the whole WWW, and an accurate assessment function should be devised with which the suggested models can be evaluated.
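Abstracting a site's link graph into a hierarchy is naturally done with breadth-first search: each page's level is the depth of its shortest path from the root. The toy site map below is an assumption for illustration, not the paper's test site.

```python
from collections import deque

# Hypothetical site map: page -> pages it links to.
site = {
    "/": ["/about", "/products"],
    "/about": ["/"],
    "/products": ["/products/a", "/products/b"],
    "/products/a": [],
    "/products/b": ["/"],
}

def bfs_levels(graph, root):
    """Assign each page the depth of its shortest path from the root."""
    level = {root: 0}
    queue = deque([root])
    while queue:
        page = queue.popleft()
        for nxt in graph.get(page, []):
            if nxt not in level:
                level[nxt] = level[page] + 1
                queue.append(nxt)
    return level

levels = bfs_levels(site, "/")
```

The resulting levels give the hierarchical access paths: back-links such as `/products/b -> /` do not pull a page upward, because only the shortest path from the root counts.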

Automatic Target Recognition Study using Knowledge Graph and Deep Learning Models for Text and Image data (지식 그래프와 딥러닝 모델 기반 텍스트와 이미지 데이터를 활용한 자동 표적 인식 방법 연구)

  • Kim, Jongmo;Lee, Jeongbin;Jeon, Hocheol;Sohn, Mye
    • Journal of Internet Computing and Services
    • /
    • v.23 no.5
    • /
    • pp.145-154
    • /
    • 2022
  • Automatic Target Recognition (ATR) technology is emerging as a core technology of Future Combat Systems (FCS). Conventional ATR is performed on IMINT (image intelligence) collected from SAR sensors, using various image-based deep learning models. However, although advances in IT and sensing technology have expanded the data related to ATR to HUMINT (human intelligence) and SIGINT (signal intelligence), ATR still uses image-oriented IMINT data only. In complex and diversified battlefield situations, it is difficult to guarantee high ATR accuracy and generalization performance with image data alone. Therefore, this paper proposes a knowledge graph-based ATR method that can utilize image and text data simultaneously. The main idea is to convert ATR images and text into graphs according to the characteristics of each data type, align them to a knowledge graph, and thereby connect the heterogeneous ATR data. To convert an ATR image into a graph, an object-tag graph whose nodes are object tags is generated from the image using a pre-trained image object recognition model and the vocabulary of the knowledge graph. The ATR text, in turn, is converted into a word graph whose nodes are the key vocabulary for ATR, using a pre-trained language model, TF-IDF, a co-occurrence word graph, and the vocabulary of the knowledge graph. The two generated graphs are connected to the knowledge graph using an entity alignment model to improve ATR performance on images and text. To demonstrate the superiority of the proposed method, 227 web documents and 61,714 RDF triples from DBpedia were collected, and comparison experiments were performed on precision, recall, and F1-score from an entity-alignment perspective.
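One building block of the text pipeline above is TF-IDF scoring, which ranks a document's words before the co-occurrence word graph is built from the top-scoring ones. The toy corpus and function below are our own illustration, not the paper's pipeline.

```python
import math
from collections import Counter

# Toy tokenized corpus (assumed, for illustration).
docs = [
    ["radar", "target", "image"],
    ["target", "signal", "report"],
    ["image", "sensor", "target"],
]

def tf_idf(doc, corpus):
    """Score each word of `doc`: term frequency times inverse document frequency."""
    n = len(corpus)
    tf = Counter(doc)
    scores = {}
    for word, count in tf.items():
        df = sum(1 for d in corpus if word in d)  # documents containing the word
        scores[word] = (count / len(doc)) * math.log(n / df)
    return scores

scores = tf_idf(docs[0], docs)
# "target" appears in every document, so its IDF (hence its score) is zero.
```

Words that survive this filter become word-graph nodes, which the entity alignment model then matches against knowledge-graph entities.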

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer-system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive log data of banks. Most of the log data generated during banking operations come from handling clients' business; therefore, to gather, store, categorize, and analyze these log data, a separate log-processing system needs to be established. However, in existing computing environments it is difficult to realize the flexible storage expansion required for massive amounts of unstructured log data and to execute the many functions needed to categorize and analyze them. Thus, in this study, we use cloud computing technology to realize a cloud-based log processing system for unstructured log data that are difficult to handle with the analysis tools and management systems of existing computing infrastructures. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment and can flexibly expand computing resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive log data.
Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that allow it to continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for unstructured log data; moreover, their strict schemas prevent node expansion when the stored data must be distributed across nodes as the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand the database through node dispersion when data grow rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the proposed system uses MongoDB, the representative document-oriented data model with a free schema structure. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates flexible node expansion when data grow rapidly, and it provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module.
When the log data generated over each bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analyses of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions; they are also parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation against a log processing system that uses only MySQL, measuring log-insert and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is identified through a MongoDB log-insert performance evaluation over various chunk sizes.
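The log collector module's routing decision, real-time log types to the relational store, bulk types to the document store, can be sketched as below. The type names and the routing rule itself are illustrative assumptions, not the paper's actual classification scheme.

```python
# Assumed set of log types that require real-time analysis.
REALTIME_TYPES = {"transaction", "error"}

def route(record):
    """Return the destination store for one log record."""
    return "mysql" if record["type"] in REALTIME_TYPES else "mongodb"

logs = [
    {"type": "transaction", "msg": "transfer ok"},
    {"type": "access", "msg": "login page hit"},
    {"type": "error", "msg": "timeout"},
]
destinations = [route(r) for r in logs]
```

Records routed to the document store need no fixed schema, which is exactly the property that makes MongoDB's free schema structure a fit for heterogeneous, unstructured log data.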