• Title/Abstract/Keywords: Artificial Intelligence-Based Language Model


Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit

  • 정여진;안성만;양지헌;이재준
    • 지능정보연구 / Vol. 23, No. 2 / pp.1-17 / 2017
  • Representative features of deep learning frameworks include 'automatic differentiation' and 'GPU utilization'. Among the frameworks available as Python libraries, this paper compares Google's TensorFlow, Microsoft's CNTK, and Theano, which can be regarded as TensorFlow's predecessor. We briefly explain the concept of automatic differentiation and how GPUs are used, then examine each framework's syntax through an example that runs logistic regression (a minimal sketch follows this entry), and finally run a CNN example, a representative deep learning application, to check coding convenience and execution speed. In terms of convenience, Theano is the hardest to code in, while CNTK and TensorFlow are abstracted similarly in many respects and therefore yield similar code, differing mainly in whether weights and biases have to be defined explicitly. As for execution speed, our assessment is that there is no large difference among the frameworks. TensorFlow has been reported to be slower than Theano, but in our experiments, although limited to the CNN model, TensorFlow turned out to be slightly faster. CNTK, although tested in a different environment, was judged to fall within the range of speed deviation attributable to that difference in experimental environments. This study examined only three deep learning frameworks; according to Wikipedia, there are twelve kinds of deep learning frameworks, whose differences are characterized by fifteen attributes. Among those many attributes, the ones that matter from a user's point of view are which languages (Python, C++, Java, etc.) a framework can be used from and for which deep learning models its libraries are well implemented. For users building large-scale deep learning models, support for multiple GPUs or multiple servers will also be important, and for those learning deep learning models for the first time, the amount of documentation and example programs will be an important criterion as well.
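
As referenced in the abstract above, the comparison hinges on how each framework expresses the same model while automatic differentiation supplies the gradients. The sketch below is an illustration of that workflow only, not the authors' benchmark code: a logistic regression written in the TensorFlow 1.x style that was current in 2017, with weights and biases defined explicitly as variables (the syntactic point the abstract highlights). The data shapes and hyperparameters are assumptions made for this example, and the placeholder/Session API shown here no longer exists in TensorFlow 2.x.

    # Logistic regression in the TensorFlow 1.x style (illustrative sketch only).
    import numpy as np
    import tensorflow as tf  # assumes the 1.x API in use around 2017

    # Toy data: 100 samples with 2 features (assumed shapes for illustration).
    X_data = np.random.randn(100, 2).astype(np.float32)
    y_data = (X_data[:, :1] + X_data[:, 1:] > 0).astype(np.float32)

    x = tf.placeholder(tf.float32, [None, 2])
    y = tf.placeholder(tf.float32, [None, 1])

    # Weights and bias are defined explicitly as variables, the detail the
    # paper cites as a syntactic difference between the frameworks.
    W = tf.Variable(tf.zeros([2, 1]))
    b = tf.Variable(tf.zeros([1]))

    logits = tf.matmul(x, W) + b
    loss = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits))

    # Automatic differentiation: the optimizer derives d(loss)/dW and d(loss)/db
    # from the forward computation; the user never writes the gradients.
    train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for _ in range(200):
            sess.run(train_op, feed_dict={x: X_data, y: y_data})
        print("final loss:", sess.run(loss, feed_dict={x: X_data, y: y_data}))

Roughly speaking, Theano would express the same model with shared variables for W and b, gradients obtained from theano.grad, and an update loop compiled with theano.function; that contrast in syntax, with the gradients always derived automatically, is the kind of difference the paper measures.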

Semantic Process Retrieval with Similarity Algorithms

  • 이홍주
    • Asia pacific journal of information systems / Vol. 18, No. 1 / pp.79-96 / 2008
  • One of the roles of Semantic Web services is to execute dynamic intra-organizational services, including the integration and interoperation of business processes. Since different organizations design their processes differently, retrieving similar semantic business processes is necessary in order to support inter-organizational collaboration. Most approaches for finding services that have certain features and support certain business processes have relied on some type of logical reasoning and exact matching. This paper presents our approach of using imprecise matching to expand the results of an exact matching engine when querying the OWL (Web Ontology Language) version of the MIT Process Handbook. The MIT Process Handbook is an electronic repository of best-practice business processes, intended to help people (1) redesign organizational processes, (2) invent new processes, and (3) share ideas about organizational practices. In order to use the MIT Process Handbook for process retrieval experiments, we had to export it into an OWL-based format: we model the Process Handbook meta-model in OWL and export the processes in the Handbook as instances of the meta-model. Next, we need a sizable number of queries and their corresponding correct answers from the Process Handbook. Many previous studies devised artificial datasets composed of randomly generated numbers without real meaning and used subjective ratings as correct answers and similarity values between processes. To generate a semantics-preserving test data set, we create, using mutation operators, 20 variants of each target process that are syntactically different but semantically equivalent; these variants serve as the correct answers for the target process. We devise diverse similarity algorithms based on the values of process attributes and the structure of business processes. We use simple text-retrieval similarity measures such as TF-IDF and Levenshtein edit distance to devise our approaches, and utilize a tree edit distance measure because semantic processes have a graph-like structure. We also design similarity algorithms that consider the similarity of process structure, such as part processes, goals, and exceptions. Since we can identify the relationships between a semantic process and its subcomponents, this information can be used to calculate similarities between processes. Dice's coefficient and the Jaccard similarity measure are used to calculate the proportion of overlap between processes in diverse ways (a minimal sketch of these basic measures follows this entry). We perform retrieval experiments to compare the performance of the devised similarity algorithms, measuring retrieval performance in terms of precision, recall, and F-measure, the harmonic mean of precision and recall. The tree edit distance shows the poorest performance on all measures. TF-IDF and the method combining TF-IDF with Levenshtein edit distance show better performance than the other devised methods; these two measures focus on the similarity between process names and descriptions. In addition, we calculate the rank correlation coefficient, Kendall's tau-b, between the number of process mutations and the ranking of similarity values among the mutation sets. In this experiment, similarity measures based on process structure, such as Dice's, Jaccard, and their derivatives, show higher coefficients than measures based on the values of process attributes. However, the Lev-TFIDF-JaccardAll measure, which considers process structure and attribute values together, shows reasonably good performance in both experiments. For retrieving semantic processes, it therefore appears better to consider diverse aspects of process similarity, such as process structure and the values of process attributes. We generate semantic process data and a dataset for retrieval experiments from the MIT Process Handbook repository, suggest imprecise query algorithms that expand the retrieval results of an exact matching engine such as SPARQL, and compare the retrieval performance of the similarity algorithms. As limitations and future work, we need to perform experiments with other datasets from other domains, and since diverse measures yield many similarity values, we may find better ways to identify relevant processes by applying these values simultaneously.
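
As noted in the abstract above, the devised algorithms build on a handful of basic measures: TF-IDF and Levenshtein edit distance over process names and descriptions, Dice's coefficient and Jaccard similarity over overlapping subcomponents, and evaluation via precision, recall, and F-measure. The fragment below is a minimal, illustrative sketch of those basic measures only, not the paper's implementation against the MIT Process Handbook; representing a process as a plain token set, the example processes, and the tokenization are all assumptions made for the example, and combined measures such as Lev-TFIDF-JaccardAll are not reproduced here.

    # Basic similarity measures named in the abstract (illustrative sketch only).
    # Processes are represented here simply as token sets, an assumption for
    # the example; the paper applies these measures to names, descriptions,
    # and structural subcomponents (part processes, goals, exceptions).

    def jaccard(a, b):
        """Proportion of overlap: |A ∩ B| / |A ∪ B| (1.0 if both sets are empty)."""
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 1.0

    def dice(a, b):
        """Dice's coefficient: 2|A ∩ B| / (|A| + |B|) (1.0 if both sets are empty)."""
        a, b = set(a), set(b)
        return 2 * len(a & b) / (len(a) + len(b)) if a or b else 1.0

    def levenshtein(s, t):
        """Edit distance between two strings, via row-by-row dynamic programming."""
        prev = list(range(len(t) + 1))
        for i, cs in enumerate(s, 1):
            curr = [i]
            for j, ct in enumerate(t, 1):
                curr.append(min(prev[j] + 1,               # deletion
                                curr[j - 1] + 1,           # insertion
                                prev[j - 1] + (cs != ct))) # substitution
            prev = curr
        return prev[-1]

    def f_measure(precision, recall):
        """Harmonic mean of precision and recall, used to score retrieval."""
        return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

    # Hypothetical example: two process descriptions compared as token sets.
    p1 = "approve purchase order".split()
    p2 = "approve order for purchase".split()
    print(jaccard(p1, p2), dice(p1, p2))       # overlap-based similarities
    print(levenshtein("approve", "approval"))  # string-level edit distance
    print(f_measure(0.8, 0.6))                 # evaluation metric

The paper's contribution lies in how such primitives are weighted and combined across process attributes and structure; the sketch shows only the underlying computations.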