• Title/Abstract/Keyword: network power


소셜 뉴스를 위한 시간 종속적인 메타데이터 기반의 컨텍스트 공유 프레임워크 (Context Sharing Framework Based on Time Dependent Metadata for Social News Service)

  • 가명현;오경진;홍명덕;조근식
    • 지능정보연구 / Vol. 19, No. 4 / pp.39-53 / 2013
  • The development of the Internet and the advent of SNS have greatly changed the way information flows. With this change, social media has risen rapidly, and the importance of social TV and social news, which fuse social media with video content, is being emphasized. In this environment, users do not merely browse content: they share information about and experiences of content with friends and acquaintances who are using the same content, and they even create new content. Existing social news services, however, fail to reflect these user characteristics. In particular, they consider only user participation, which makes it difficult to differentiate between services and difficult to share context when sharing information or experiences about news content. To address this, this paper proposes a framework that segments news by content and provides time-dependent metadata extracted from the segmented news. The proposed framework segments the news transcript by content using a story segmentation method. It also provides tags representing the whole news item, sub-tags representing each segment, and the position in the video at which each segment begins, i.e., time-dependent metadata. Given time-dependent metadata, social news users can browse only the parts of a news item they want and leave opinions on those parts. When forwarding news or sharing opinions, delivering the metadata along with it allows direct access to the intended content. The performance of the framework is determined by how well the extracted sub-tags represent the actual content of the news, and the extracted sub-tags vary with the accuracy of the story segmentation and the sub-tag extraction method. With this in mind, a story segmentation method based on semantic similarity was applied to the framework, its performance was compared experimentally against benchmark algorithms, and the accuracy of the sub-tags was analyzed by comparing the sub-tags extracted from the segmented news with the actual news content. As a result, the story segmentation method considering semantic similarity showed better performance, and the extracted sub-tags were words related to the context.
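As a rough illustration of the time-dependent metadata described above, a segmented news item can be modeled as whole-item tags plus per-segment sub-tags and start times; a time lookup then lets a shared comment jump straight to the part it refers to. This is a sketch under assumed names (`NewsSegment`, `SocialNewsItem` and all field names are illustrative, not from the paper):

```python
from dataclasses import dataclass, field

@dataclass
class NewsSegment:
    start_time: float        # seconds into the video where this segment begins
    sub_tags: list[str]      # tags describing only this segment

@dataclass
class SocialNewsItem:
    title: str
    tags: list[str]          # tags representing the news item as a whole
    segments: list[NewsSegment] = field(default_factory=list)

    def segment_at(self, t: float) -> NewsSegment:
        """Return the segment playing at time t, so that a shared opinion
        can link directly to the part of the news it refers to."""
        current = self.segments[0]
        for seg in self.segments:
            if seg.start_time <= t:
                current = seg
            else:
                break
        return current

# Hypothetical example item with three content-based segments.
item = SocialNewsItem(
    title="Evening News",
    tags=["economy", "election"],
    segments=[
        NewsSegment(0.0, ["opening"]),
        NewsSegment(42.5, ["economy", "interest rates"]),
        NewsSegment(95.0, ["election", "polls"]),
    ],
)
print(item.segment_at(60.0).sub_tags)  # → ['economy', 'interest rates']
```

Sharing the metadata together with a timestamp, rather than the whole video, is what lets a recipient land on the exact segment being discussed.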

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Korean Institute of Intelligent Systems: Conference Proceedings / Fifth International Fuzzy Systems Association World Congress '93 (Korea Fuzzy Logic and Intelligent Systems Society, 1993) / pp.1041-1043
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to predefined shapes. These kinds of functions can cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to the computation of intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions at those points [3,10,14,15]. Such a solution provides a satisfying computational speed and a very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can also occur: it is indeed possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few.
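The two storage strategies contrasted above — few parameters per function versus one memorized value per point of the discretized universe — can be sketched in a few lines (names and values are illustrative, not from the paper):

```python
def triangular(a: float, b: float, c: float):
    """A triangular membership function stored by three parameters only:
    left foot a, peak b, right foot c. Compact, but mu(x) must be
    recomputed for every intermediate point."""
    def mu(x: float) -> float:
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# Parametric storage: 3 numbers define the whole fuzzy set.
warm = triangular(10.0, 20.0, 30.0)

# Discretized storage: fix a finite universe of discourse and memorize
# mu at every point. Any shape is allowed, at the cost of memory —
# note how many of the stored values are simply zero.
U = range(0, 41)                          # 41-point universe of discourse
warm_table = [warm(float(x)) for x in U]  # one value memorized per point

print(warm(20.0), warm_table[20])  # → 1.0 1.0
```

The many zeros in `warm_table` are exactly the memory waste the paper's sparse scheme is designed to avoid.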
More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on this hypothesis. Moreover, we use a technique that, even though it does not restrict the shapes of the membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. The term set has a universe of discourse with 128 elements (so as to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions, and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and will be represented by the memory rows. The length of a memory word is defined by: Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values for any element of the universe of discourse, dm(m) is the number of bits for a membership value m, and dm(fm) is the number of bits needed to represent the index of a membership function. In our case, then, Length = 3 × (5 + 3) = 24, and the memory dimension is therefore 128 × 24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of every fuzzy set for each element; the fuzzy-set word dimension would be 8 × 5 bits, and the dimension of the memory would therefore have been 128 × 40 bits. Coherently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value for at most three fuzzy sets.
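The word-length arithmetic above can be checked directly, using the figures given in the text (a sketch; the function name is illustrative):

```python
def word_length(nfm: int, dm_m: int, dm_fm: int) -> int:
    """Bits per memory row when only non-null memberships are stored:
    nfm entries, each holding a membership value (dm_m bits) plus the
    index of its membership function (dm_fm bits)."""
    return nfm * (dm_m + dm_fm)

U = 128        # elements in the universe of discourse
n_sets = 8     # membership functions in the term set
dm_m = 5       # bits for 32 truth levels
dm_fm = 3      # bits to index 8 membership functions
nfm = 3        # at most 3 non-null memberships per element

sparse_bits = U * word_length(nfm, dm_m, dm_fm)  # 128 * 24 bits
full_bits = U * n_sets * dm_m                    # 128 * 40 bits, full vectorial storage
print(sparse_bits, full_bits)  # → 3072 5120
```

So the sparse organization stores 24-bit rows instead of 40-bit rows, a 40% memory saving at this term-set size.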
Focusing on the elements 32, 64, and 96 of the universe of discourse, they will be memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then the corresponding non-null weight for the rule is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to the performance of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value for each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization process of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space; and weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method.
The number of non-null membership values for any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
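A software analogue of the comparator scheme can make the lookup concrete (a sketch of the idea only, not the authors' hardware: each memory row holds up to nfm (function-index, value) pairs, and a match on the index yields the stored weight, otherwise zero — mirroring the combinatory net of fig. 2):

```python
# One memory row per element of the universe of discourse; each row holds
# at most nfm (membership-function index, membership value) pairs.
# The values below are hypothetical, for illustration only.
row_for_u32 = [(2, 7), (3, 21)]   # non-null memberships at element u = 32

def membership(row, mf_index):
    """Emulate the combinatory net: compare the requested membership-function
    index against each stored index; a match outputs its stored value,
    and a miss outputs zero."""
    for idx, value in row:
        if idx == mf_index:
            return value
    return 0

print(membership(row_for_u32, 3))  # → 21 (index 3 is stored for this element)
print(membership(row_for_u32, 5))  # → 0  (no non-null membership for index 5)
```

Because every row holds a fixed, small number of pairs, the comparison runs in constant time per row, which is why the scheme matches the speed of full vectorial memorization while storing far fewer bits.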
