• Title/Summary/Keyword: competition graph

Graph-based modeling for protein function prediction (단백질 기능 예측을 위한 그래프 기반 모델링)

  • Hwang Doosung;Jung Jae-Young
    • The KIPS Transactions:PartB
    • /
    • v.12B no.2 s.98
    • /
    • pp.209-214
    • /
    • 2005
  • Protein interaction data are a highly reliable basis for predicting the functions of unannotated proteins in proteomics. Computational studies on protein function prediction are mostly built on the concept of guilt-by-association and utilize large-scale interaction maps derived from revealed protein-protein interactions. This study compares graph-based approaches such as the neighbor-counting and $\chi^2$-statistics methods on protein-protein interaction data and proposes an approach that is effective for analyzing large-scale protein interaction data. The proposed approach is also based on the protein interaction map, but additionally uses sequence similarity and heuristic knowledge to make the prediction results more reliable. Test results of the proposed approach on the KDD Cup 2001 competition data are reported along with those of the neighbor-counting and $\chi^2$-statistics methods.
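A minimal sketch of the neighbor-counting (guilt-by-association) idea compared in this paper, under a toy interaction map: an unannotated protein is assigned the function that occurs most often among its interaction partners. The protein names and function labels below are made up for illustration; this is not the authors' implementation.

```python
# Toy neighbor-counting function prediction (illustrative data, not the paper's).
from collections import Counter

# Protein-protein interaction map as an adjacency list.
interactions = {
    "P1": ["P2", "P3", "P4"],
    "P2": ["P1", "P3"],
    "P3": ["P1", "P2", "P4"],
    "P4": ["P1", "P3"],
}

# Known annotations; P4 is the unannotated protein whose function we predict.
annotations = {
    "P1": {"kinase"},
    "P2": {"kinase", "transport"},
    "P3": {"kinase"},
}

def neighbor_counting(protein, k=1):
    """Count each function among the protein's neighbors and return the top k."""
    counts = Counter()
    for neighbor in interactions.get(protein, []):
        counts.update(annotations.get(neighbor, set()))
    return counts.most_common(k)

print(neighbor_counting("P4"))  # [('kinase', 2)] with these toy data
```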

A Study on the generator Planning of GENCO in the competitive power markets (경쟁시장에서 발전업자의 발전설비계획에 관한 연구)

  • Kim, Tae-Young;Kim, Kang-Won;Han, Seok-Man;Kang, Dong-Joo;H. Kim, Bal-Ho
    • Proceedings of the KIEE Conference
    • /
    • 2008.11a
    • /
    • pp.418-420
    • /
    • 2008
  • This study examines how a GENCO operating in a competitive market should build its generation facility plan so as to maximize profit. Cumulative profit is calculated by deducting total cost from total revenue, and the profit-and-loss results over time are used to identify the most advantageous generator to build. The model is implemented as a Fortran program, and the results are presented as graphs so that the most efficient power plant to build can be seen at a glance.
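A toy sketch of the profit screening described above, assuming made-up candidate generators and cost figures (the paper's actual model is a Fortran program): cumulative profit is total revenue minus total cost over a planning horizon, and the candidate with the largest cumulative profit is selected.

```python
# Illustrative generation-planning screening; all numbers are invented.
candidates = {
    # name: (annual_revenue, annual_cost, build_cost) in arbitrary currency units
    "coal":    (120.0, 70.0, 400.0),
    "lng_cc":  (100.0, 60.0, 250.0),
    "nuclear": (150.0, 60.0, 900.0),
}

def cumulative_profit(annual_revenue, annual_cost, build_cost, years):
    """Cumulative profit after `years` of operation (no discounting, for simplicity)."""
    return years * (annual_revenue - annual_cost) - build_cost

horizon = 10  # years
profits = {name: cumulative_profit(*p, horizon) for name, p in candidates.items()}
print(profits)                               # cumulative profit per candidate
print("best candidate:", max(profits, key=profits.get))
```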

19 years of change in community structure of Quercus acutissima dominant stand on Mt. Danseok-san in Gyeongju national park, South Korea (경주 단석산 상수리나무 우점식분 군집구조의 19년간의 변화)

  • Ko, Jae Ki
    • Journal of Wetlands Research
    • /
    • v.20 no.3
    • /
    • pp.243-248
    • /
    • 2018
  • This study was carried out to clarify changes in the community structure of a Quercus acutissima dominant stand on the south slope of Mt. Danseok-san using twenty fixed quadrats. Five field surveys were conducted from August 1999 to May 2018. During this period the density declined from 0.33 in 1999 to 0.20 in 2012, but the most recent survey in 2018 showed a moderate recovery to 0.24. In 1999 the DBH class distribution of all trees formed a reverse-J curve. That reverse-J curve subsequently broke down into a bell curve, and in 2018 the distribution was again close to a reverse-J shape for the young trees while remaining bell-shaped for the mature trees. This suggests that a DBH of about 13 cm marks the threshold of the competition trend at which the decline among the trees turns upward. The most dominant species, Q. acutissima, formed a bell curve whose peak shifts to the right of the graph and becomes lower year by year. In the case of Q. mongolica, the distribution curve takes a low bell shape and is increasing. The oak stand in this study is changing from the initial stage of secondary forest succession to the intermediate stage. The most dominant tree is Q. acutissima and the sub-dominant tree is Q. mongolica at present. Considering the age distributions of the two competing species, the succession of this stand is expected to proceed toward a Q. mongolica-dominant community.

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.1-17
    • /
    • 2017
  • A deep learning framework is software designed to help develop deep learning models. Two of its most important functions are automatic differentiation and utilization of GPUs. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, Microsoft Cognitive Toolkit, was released under an open-source license, following Google's Tensorflow a year earlier. Early deep learning frameworks were developed mainly for research at universities. Beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given this trend, Google and other companies are expected to keep investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in some sense a predecessor of the other two. The most common and important function of deep learning frameworks is automatic differentiation. Essentially all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives along each edge of a computational graph can then be obtained, and with these partial derivatives the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First, the convenience of coding is, in order, CNTK, Tensorflow, and Theano. This criterion is based simply on the length of the code; the learning curve and the ease of coding are not the main concern. By this criterion, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, weight variables and biases must be defined explicitly. The reason CNTK and Tensorflow are easier to implement with is that these frameworks provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives flexibility. With low-level coding such as in Theano, we can implement and test any new deep learning model or any new search method that we can think of. As for execution speed, there is no meaningful difference. In our experiment, the execution speeds of Theano and Tensorflow were very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment was not identical: the CNTK code had to be run on a PC without a GPU, where code executes up to 50 times slower than with a GPU. We nevertheless concluded that the difference in execution speed was within the range of variation caused by the different hardware setups. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate them. Important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNN, RNN, and DBN. For a user implementing a large-scale deep learning model, support for multiple GPUs or multiple servers will also be important, and for someone learning deep learning, the availability of sufficient examples and references matters as well.
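The abstract's account of automatic differentiation (partial derivatives on the edges of a computational graph, combined via the chain rule) can be made concrete with a few lines of code. The sketch below is a minimal toy implementation; the Node class and helper functions are illustrative assumptions and not the real API of Theano, Tensorflow, or CNTK, which build equivalent graphs automatically from high-level layer definitions.

```python
# A minimal, self-contained sketch of reverse-mode automatic differentiation on a
# computational graph. Illustrative only; not any framework's actual API.

class Node:
    """One node of a computational graph: its forward value, its parent nodes
    (incoming edges), and the partial derivative along each of those edges."""
    def __init__(self, value, parents=(), local_grads=()):
        self.value = value
        self.parents = parents
        self.local_grads = local_grads
        self.grad = 0.0  # accumulated d(output)/d(this node)

def add(a, b):
    # d(a+b)/da = 1, d(a+b)/db = 1
    return Node(a.value + b.value, (a, b), (1.0, 1.0))

def mul(a, b):
    # d(a*b)/da = b, d(a*b)/db = a
    return Node(a.value * b.value, (a, b), (b.value, a.value))

def backward(output):
    """Push gradients from the output back along every edge via the chain rule.
    For clarity this traversal assumes each node feeds into only one other node
    (a tree-shaped graph), which holds for the small example below."""
    output.grad = 1.0
    stack = [output]
    while stack:
        node = stack.pop()
        for parent, local in zip(node.parents, node.local_grads):
            parent.grad += node.grad * local  # chain rule along the edge
            stack.append(parent)

# Example: f(x, w, b) = x * w + b, differentiated with respect to every input.
x, w, b = Node(2.0), Node(3.0), Node(1.0)
f = add(mul(x, w), b)
backward(f)
print(f.value, x.grad, w.grad, b.grad)  # 7.0  df/dx=3.0  df/dw=2.0  df/db=1.0
```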

Structural features and Diffusion Patterns of Gartner Hype Cycle for Artificial Intelligence using Social Network analysis (인공지능 기술에 관한 가트너 하이프사이클의 네트워크 집단구조 특성 및 확산패턴에 관한 연구)

  • Shin, Sunah;Kang, Juyoung
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.107-129
    • /
    • 2022
  • Preempting new technology is important because technology competition is getting much tougher, and stakeholders continuously conduct exploration activities so as to occupy new technologies at the right time. Gartner's Hype Cycle has significant implications for such stakeholders. The Hype Cycle is an expectation graph for new technologies that combines the technology life cycle (S-curve) with the hype level. Stakeholders such as R&D investors, CTOs (Chief Technology Officers), and technical personnel are very interested in Gartner's Hype Cycle for new technologies, because high expectations for a new technology can help maintain investment by securing the legitimacy of R&D spending. Contrary to this high industry interest, however, preceding research has faced limitations in its empirical methods and source data (news, academic papers, search traffic, patents, etc.). In this study, we focused on two research questions. The first was 'Is there a difference in the characteristics of the network structure at each stage of the hype cycle?'; to answer it, the structural characteristics of each stage were examined through the component cohesion size. The second was 'Is there a pattern of diffusion at each stage of the hype cycle?'; this was addressed through the centralization index and network density. The centralization index is a variance-like concept: a higher value means that a small number of nodes dominate the network, i.e., a star-like structure. A star network is a centralized structure and shows better diffusion performance than a decentralized (circle) structure, because the nodes at the center of information transfer can judge useful information and deliver it to other nodes the fastest. We therefore computed the out-degree and in-degree centralization indices for each stage. For this purpose, we examined the structural features of the community and the expectation diffusion patterns using social network service (SNS) data for the 'Gartner Hype Cycle for Artificial Intelligence, 2021'. Twitter data for 30 technologies (excluding four) listed in the hype cycle were analyzed. The analysis was performed with R (version 4.1.1) and Cyram NetMiner. From October 31, 2021 to November 9, 2021, 6,766 tweets were collected through the Twitter API and converted into relationships between a user's tweet (source) and other users' retweets (target), yielding 4,124 edges for analysis. As a result, we characterized the structural features and diffusion patterns by analyzing the component cohesion size, degree centralization, and density. We found that the groups at each stage gained components over time while their density decreased. Also, the 'Innovation Trigger' group, which corresponds to early adopters interested in new technologies in innovation diffusion theory, had a high out-degree centralization index, whereas the other groups had higher in-degree than out-degree centralization. It can be inferred that the 'Innovation Trigger' group has the biggest influence and that diffusion gradually slows down in the subsequent groups. Unlike previous studies, this study conducted network analysis using social network service data. This is significant in that it suggests a way to expand the methods used to analyze Gartner's Hype Cycle in the future. In addition, the application of innovation diffusion theory to the stages of the Gartner Hype Cycle for artificial intelligence can be viewed positively, because the Hype Cycle's theoretical weakness has been repeatedly discussed. This study is also expected to offer stakeholders a new perspective on decision-making about technology investment.
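The measures named in the abstract (density, in/out-degree centralization, component sizes) can be computed on a toy retweet network as sketched below. This uses Python and networkx rather than the authors' R/NetMiner pipeline, the edge list and user names are fabricated, and the (n-1)^2 normalization for directed degree centralization is one common convention, not necessarily the one used in the paper.

```python
# Toy sketch of the network measures described above (illustrative data only).
import networkx as nx

# Edges run from the tweeting user (source) to the retweeting user (target).
edges = [("alice", "bob"), ("alice", "carol"), ("alice", "dave"),
         ("bob", "carol"), ("erin", "alice")]
G = nx.DiGraph(edges)

def degree_centralization(degrees):
    """Freeman-style centralization: how close the network is to a star.
    Uses the (n-1)^2 maximum for directed degree as the normalization."""
    vals = list(degrees.values())
    n = len(vals)
    d_max = max(vals)
    return sum(d_max - d for d in vals) / ((n - 1) ** 2)

print("density:", nx.density(G))
print("out-degree centralization:", degree_centralization(dict(G.out_degree())))
print("in-degree centralization:", degree_centralization(dict(G.in_degree())))
print("component sizes:", [len(c) for c in nx.weakly_connected_components(G)])
```

On this toy graph the out-degree centralization is higher than the in-degree centralization because one user ("alice") originates most of the diffusion, the star-like pattern the abstract associates with the 'Innovation Trigger' stage.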