• Title/Summary/Keyword: I-graph


NUMERICAL STUDY OF A CENTRIFUGAL PUMP PERFORMANCE WITH VARIOUS VOLUTE SHAPE (볼루트의 형상 변화가 원심펌프 성능에 미치는 영향에 대한 수치해석)

  • Lee, J.H.;Hur, N.;Yoon, I.S.
    • Journal of computational fluids engineering
    • /
    • v.20 no.3
    • /
    • pp.35-40
    • /
    • 2015
  • Centrifugal pumps consume considerable amounts of energy in various industrial applications, so improving pump efficiency is a crucial industrial challenge. This paper presents a numerical investigation of the flow characteristics in the volutes of centrifugal pumps in order to compare their energy consumption. A wide range of volumetric flow rates was investigated for each case. The standard k-ε model is adopted as the turbulence model, and the impeller rotation is simulated using the Multiple Reference Frames (MRF) method. First, two conventional design methods, constant angular momentum (CAM) and constant mean velocity (CMV), are studied and compared to a baseline volute model. The CAM volute profile is a logarithmic spiral; the CMV volute profile is an Archimedean spiral. The modified volute models show lower head than the baseline volute model, but in terms of efficiency the CAM volute has higher values than the others, so the CAM profile is selected for the simulations of different cross-section shapes. Two cross-section types are generated: a simple rectangular shape and a fan shape. Of the two, the simple rectangular geometry produces higher head and efficiency. Overall, the simulation results show that the volute designed with the constant angular momentum (CAM) method has better characteristic performance than the CMV volute.
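The two volute profiles compared in the abstract have simple closed forms, sketched below in Python. The curve shapes (logarithmic spiral for CAM, Archimedean spiral for CMV) come from the abstract; the base radius `r0` and the growth constants `alpha` and `b` are illustrative assumptions, not values from the paper.

```python
import math

# CAM (constant angular momentum) volute: logarithmic spiral,
# radius grows exponentially with wrap angle theta.
def cam_profile(theta, r0=0.1, alpha=0.12):
    return r0 * math.exp(alpha * theta)

# CMV (constant mean velocity) volute: Archimedean spiral,
# radius grows linearly with wrap angle theta.
def cmv_profile(theta, r0=0.1, b=0.012):
    return r0 + b * theta

# Sample both profiles over one full wrap (0 to 2*pi).
thetas = [2 * math.pi * t / 8 for t in range(9)]
cam = [round(cam_profile(t), 4) for t in thetas]
cmv = [round(cmv_profile(t), 4) for t in thetas]
print(cam[-1] > cmv[-1])  # True: with these constants the CAM profile opens faster
```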

Collapse of steel cantilever roof of tribune induced by snow loads

  • Altunisik, Ahmet C.;Ates, Sevket;Husem, Metin;Genc, Ali F.
    • Steel and Composite Structures
    • /
    • v.23 no.3
    • /
    • pp.273-283
    • /
    • 2017
  • This paper presents a detailed investigation of the structural behavior of a laterally unrestrained steel cantilever tribune roof with a slender cross-section. The structure, located in Tutak town in Ağrı in eastern Turkey, collapsed on October 25, 2015 and is considered as a case study. This mildly sloped roof was built from variable-depth I-beams supported on steel columns 5.5 m in height, covering a 240 m² enclosed area in plan. The roof collapsed completely, without any warning, during the first snowfall after construction, at midnight on a winter day and fortunately before opening hours. Meteorological records and observations from local residents are combined to estimate the intensity of the snow load in the region, which is compared with the code-specified values. The width/thickness and height/thickness ratios of the flange and web are also evaluated against the design codes. A three-dimensional finite element model of the existing steel tribune roof is generated from project drawings and site investigations using the commercially available software ANSYS. The displacements, principal stresses, and strains along the cantilever length and column height are given as contour diagrams and graphs. Together with the site investigation, the numerical and analytical work conducted in this study indicates that the unequivocal causes of the collapse were the overloading action of the snow load, mistakes made in the design of the steel cantilever beams, insufficient strength and rigidity of the main structural elements, and construction workmanship errors.

The Minimum number of Mobile Guards Algorithm for Art Gallery Problem (화랑 문제의 최소 이동 경비원 수 알고리즘)

  • Lee, Sang-Un;Choi, Myeong-Bok
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.12 no.3
    • /
    • pp.63-69
    • /
    • 2012
  • Given an art gallery P with n vertices, the sufficient (maximum) number of mobile guards is ⌊n/4⌋ for a simple polygon and ⌊(3n+4)/16⌋ for a simple orthogonal polygon. However, no polynomial-time algorithm is known for the minimum number of mobile guards. This paper suggests a polynomial-time algorithm for the minimum number of mobile guards. First, we obtain the visibility graph G, which connects two vertices by an edge whenever they can see each other. Second, we select a vertex u of maximum degree Δ(G) and a vertex v of maximum degree in the neighborhood N_G(u), and delete the edges visible from u and v together with their incident edges. Third, we select vertices w_i in the remaining partial graphs and select the edges that give the positions of the mobile guards. The algorithm is applied to various art gallery problems with simple polygons and simple orthogonal polygons. As a result, the proposed algorithm runs in linear time and obtains the minimum number of mobile guards.
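The abstract only summarizes the selection rules, so the following is a rough Python sketch of the greedy max-degree idea it describes: repeatedly pick a maximum-degree vertex u of the visibility graph, a maximum-degree neighbor v, place a mobile guard on the edge (u, v), and delete what the pair covers. The data structures and the toy gallery are illustrative assumptions, not the paper's exact procedure.

```python
def greedy_mobile_guards(visibility):
    """visibility: dict vertex -> set of vertices it can see.
    Returns a list of (u, v) edges on which mobile guards patrol."""
    g = {v: set(nbrs) for v, nbrs in visibility.items()}  # work on a copy
    guards = []
    while g:
        u = max(g, key=lambda v: len(g[v]))               # max-degree vertex
        if g[u]:
            v = max(g[u], key=lambda w: len(g.get(w, ())))  # max-degree neighbor
        else:
            v = u                                          # isolated: point guard
        guards.append((u, v))
        covered = {u, v} | g[u] | g.get(v, set())          # all vertices the pair sees
        for w in covered:
            g.pop(w, None)
        for nbrs in g.values():                            # drop dangling references
            nbrs -= covered
    return guards

# Tiny example: a star-shaped gallery abstraction where vertex 0 sees everything.
vis = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
print(greedy_mobile_guards(vis))  # a single guard anchored at vertex 0 suffices
```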

A Development of Expected Loss Control Chart Using Reflected Normal Loss Function (역정규 손실함수를 이용한 기대손실 관리도의 개발)

  • Kim, Dong-Hyuk;Chung, Young-Bae
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.39 no.2
    • /
    • pp.37-45
    • /
    • 2016
  • The control chart is a representative tool of statistical process control (SPC): a graph on which characteristic values from the process are plotted. Its use has two steps (phases). The first step, Phase I, is a procedure for estimating the process parameters using data obtained from an in-control process, before any standard values have been determined. The second step, Phase II, monitors the process using the parameters already estimated in Phase I. The control chart plots the quality characteristic values under management and checks whether each point falls within the control limits. However, it provides no information about the economic loss that occurs when a product's characteristic value does not match the target value. To meet customer needs, a company must not only keep the process variation stable but also produce products that meet the target value. Taguchi's quadratic loss function incorporates the economic loss caused by mismatch with the target value, but it is a very simple quadratic curve: it is difficult for it to realistically reflect how the loss increases with deviation from the target, and it is well suited only to normal processes. Spiring proposed an alternative loss function called the reflected normal loss function (RNLF). In this paper, we design a new control chart that overcomes these disadvantages by using Spiring's RNLF, and we demonstrate the effectiveness of the new control chart by comparing its average run length (ARL) with those of the x̄-R control chart and the expected loss control chart (ELCC).
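For context, Spiring's RNLF inverts a normal density so that the loss rises from zero at the target toward a finite maximum K, unlike Taguchi's unbounded quadratic. A minimal sketch, assuming the standard form L(y) = K·[1 − exp(−(y − T)²/(2γ²))] with illustrative parameter values:

```python
import math

def rnlf(y, target, K, gamma):
    """Spiring's reflected normal loss for an observed value y.
    K is the maximum loss; gamma is a shape parameter (Spiring relates it
    to the distance Delta at which loss is essentially maximal)."""
    return K * (1.0 - math.exp(-((y - target) ** 2) / (2.0 * gamma ** 2)))

# Loss is zero at the target and saturates at K far from it,
# unlike the quadratic loss, which grows without bound.
print(rnlf(10.0, target=10.0, K=5.0, gamma=0.5))            # 0.0
print(round(rnlf(12.0, target=10.0, K=5.0, gamma=0.5), 4))  # 4.9983, near the cap K
```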

Analysis of Acoustic Emission Signals during Long-Term Strength Tests of Brittle Materials (취성재료의 장기 강도시험 중 미소파괴음 신호 분석)

  • Cheon, Dae-Sung;Jung, Yong-Bok
    • Tunnel and Underground Space
    • /
    • v.27 no.3
    • /
    • pp.121-131
    • /
    • 2017
  • We studied the time-dependent behavior of rock and concrete materials by conducting static and dynamic long-term strength tests. In particular, the acoustic emission (AE) signals generated during the tests were analyzed and used for long-term stability evaluation. In the static subcritical crack growth tests, the long-term behavior and AE characteristics under Mode I and Mode II loading were investigated. In the dynamic long-term strength tests, the fatigue limit and the characteristics of AE generation were analyzed through cyclic four-point bending tests. The graph of cumulative AE hits versus time showed a shape similar to a creep curve, with first, second, and third stages. The possibility of evaluating the static and dynamic long-term stability of rock and concrete is presented, based on the log-log relationship between the slope of the second stage of the cumulative AE hit curve and the delayed failure time.

Allocation Techniques for NVM-Based Fast Storage Considering Application Characteristics (응용의 특성을 고려한 NVM 기반 고속 스토리지의 배치 방안)

  • Kim, Jisun;Bahn, Hyokyung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.19 no.4
    • /
    • pp.65-69
    • /
    • 2019
  • This paper presents an optimized adoption of NVM in the storage system that considers application characteristics. To do so, we first characterize the storage access patterns of different application types and make two prominent observations that can be exploited in allocating NVM storage efficiently. The first observation is that the bulk of I/O does not occur on a single storage partition; rather, the dominant partition varies significantly across application categories. The second observation is that a large proportion of storage data is accessed only once. Based on these observations, we show that storage performance with NVM is maximized not by fixing it to a specific storage partition but by allocating it adaptively per application. Specifically, for graph, database, and web applications, using NVM as the swap, journal, and file system partition, respectively, performs well.

A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu;Park, Hyun-Jung;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.17 no.4
    • /
    • pp.31-59
    • /
    • 2007
  • We frequently use search engines to find relevant information on the Web but still end up with too much information. In order to solve this problem of information overload, ranking algorithms have been applied to various domains. As more information becomes available, ranking search results effectively and efficiently will become ever more critical. In this paper, we propose a ranking algorithm for Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page is estimated based on the number of keywords found in the page, which is subject to manipulation. In contrast, link analysis methods such as Google's PageRank capitalize on the information inherent in the link structure of the Web graph. PageRank considers a page highly important if it is referred to by many other pages, and the degree of importance also increases if the importance of the referring pages is high. Kleinberg's algorithm is another link-structure-based ranking algorithm for Web pages. Unlike PageRank, Kleinberg's algorithm uses two kinds of scores: the authority score and the hub score. A page with a high authority score is an authority on a given topic and is referred to by many pages; a page with a high hub score links to many authoritative pages. As mentioned above, link-structure-based ranking has been playing an essential role in the World Wide Web (WWW), and its effectiveness and efficiency are now widely recognized. On the other hand, as the Resource Description Framework (RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed as an RDF graph, making ranking algorithms for RDF knowledge bases highly important. The RDF graph consists of nodes and directional links, similar to the Web graph; as a result, the link-structure-based ranking method seems highly applicable to ranking Semantic Web resources.
However, the information space of the Semantic Web is more complex than that of the WWW. For instance, the WWW can be considered one huge class, i.e., a collection of Web pages, which has only a recursive property, a 'refers to' property corresponding to the hyperlinks. The Semantic Web, in contrast, encompasses various kinds of classes and properties; consequently, ranking methods used in the WWW should be modified to reflect the complexity of its information space. Previous research addressed the ranking problem of query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm in order to rank Semantic Web resources. They defined the objectivity score and the subjectivity score of a resource, corresponding to Kleinberg's authority score and hub score, respectively. They concentrated on the diversity of properties and introduced property weights to control the influence of one resource on another depending on the characteristic of the property linking the two resources. A node with a high objectivity score is the object of many RDF triples, and a node with a high subjectivity score is the subject of many RDF triples. They developed several kinds of Semantic Web systems to validate their technique and showed experimental results verifying the applicability of their method to the Semantic Web. Despite their efforts, however, some limitations remained, which they reported in their paper. First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a certain domain; in other words, the ratio of links to nodes should be high, or overall resources should be described in a certain degree of detail, for their algorithm to work properly.
Second, the Tightly-Knit Community (TKC) effect, the phenomenon that pages which are less important but densely connected score higher than ones that are more important but sparsely connected, remains problematic. Third, a resource may have a high score not because it is actually important, but simply because it is very common and consequently has many links pointing to it. In this paper, we examine these ranking problems from a novel perspective and propose a new algorithm that can solve the problems left by the previous studies. Our proposed method is based on a class-oriented approach. In contrast to the predicate-oriented approach taken by previous research, under our approach a user determines the weight of a property by comparing its relative significance to the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are supposed to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. It closely reflects the way people evaluate things in the real world and turns out to be superior to the predicate-oriented approach for the Semantic Web. Our proposed algorithm resolves the TKC (Tightly Knit Community) effect and further sheds light on other limitations of the previous research. In addition, we propose two ways to incorporate data-type properties, which had not been employed even when they have some significance for resource importance. We designed an experiment to show the effectiveness of our proposed algorithm and the validity of the ranking results, which had not been attempted in previous research. We also conducted a comprehensive mathematical analysis, overlooked in previous research, which enabled us to simplify the calculation procedure.
Finally, we summarize our experimental results and discuss further research issues.
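To make the link-analysis idea in the abstract concrete, here is a minimal power-iteration PageRank over a tiny directed graph. This is the standard textbook formulation, not the paper's class-oriented algorithm; the node names are illustrative.

```python
def pagerank(links, damping=0.85, iters=100):
    """links: dict node -> list of nodes it links to."""
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}     # teleport term
        for v, outs in links.items():
            if outs:
                share = damping * rank[v] / len(outs)     # split rank over out-links
                for w in outs:
                    new[w] += share
            else:                                          # dangling node:
                for w in nodes:                            # spread rank uniformly
                    new[w] += damping * rank[v] / n
        rank = new
    return rank

# 'hub' links to both others; 'authority' is referred to by both others.
g = {"hub": ["authority", "page"], "page": ["authority"], "authority": []}
r = pagerank(g)
print(max(r, key=r.get))  # prints "authority": the most-linked-to node ranks highest
```

Kleinberg's HITS differs only in maintaining the two mutually reinforcing scores (authority and hub) per node instead of a single rank.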

A Study on Searching for Export Candidate Countries of the Korean Food and Beverage Industry Using Node2vec Graph Embedding and Light GBM Link Prediction (Node2vec 그래프 임베딩과 Light GBM 링크 예측을 활용한 식음료 산업의 수출 후보국가 탐색 연구)

  • Lee, Jae-Seong;Jun, Seung-Pyo;Seo, Jinny
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.4
    • /
    • pp.73-95
    • /
    • 2021
  • This study uses the Node2vec graph embedding method and LightGBM link prediction to explore untapped export candidate countries for Korea's food and beverage industry. Node2vec improves on the structural-equivalence representation of a network, which is known to be relatively weak in existing link prediction methods based on counting common neighbors of the network, and is therefore known to perform well in both community detection and structural equivalence. The vector obtained by embedding the network in this way has a fixed length from an arbitrarily designated starting node, so node sequences are easy to apply as input to downstream models such as logistic regression, support vector machines, and random forests. Based on these features of the Node2vec graph embedding method, this study applies it to international trade data for the Korean food and beverage industry, aiming to contribute to extensive-margin diversification in the industry's global value chain relationships for Korea. The optimal predictive model derived in this study recorded a precision of 0.95, a recall of 0.79, and an F1 score of 0.86, showing excellent performance, superior to the logistic-regression binary classifier set as the baseline model, which recorded a precision of 0.95, a recall of 0.73, and an F1 score of 0.83. In addition, the LightGBM-based optimal prediction model derived in this study outperformed the link prediction model of a previous study, set as the benchmark model here.
The predictive model of the previous study recorded a recall of only 0.75, while the proposed model reached 0.79. This performance difference between the benchmark model and ours is due to the model training strategy. In this study, trades were grouped by trade value, and prediction models were trained differently for these groups. The specific strategies were (1) randomly masking some trades and training the model over all trades, without any condition on trade value; (2) randomly masking some of the trades with an above-average trade value and training the model; and (3) randomly masking some of the trades in the top 25% by trade value and training the model. The experiments confirmed that the model trained by randomly masking some of the above-average trades performed best and most stably. Additional investigation found that most of the potential export candidates for Korea derived through this model were appropriate. Taken together, this study demonstrates the practical utility of link prediction with Node2vec and LightGBM and derives useful implications for weight-update strategies that improve link prediction during training. The study also has policy utility because it applies graph-embedding-based link prediction to trade transactions, a setting where such research has been rare. The results support a rapid response to changes in the global value chain, such as the recent US-China trade conflict or Japan's export regulations, and the approach is sufficiently useful as a tool for policy decision-making.
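The core of Node2vec is its biased second-order random walk, sketched below in pure Python. A real pipeline would feed these walks to a skip-gram model (e.g. gensim's Word2Vec) and then train a LightGBM classifier on edge embeddings; those stages and the toy trade network below are assumptions for illustration, not the study's data or code.

```python
import random

def node2vec_walk(graph, start, length, p=1.0, q=1.0, rng=random):
    """graph: dict node -> list of neighbors (all nodes present as keys).
    p is the return parameter, q the in-out parameter:
    q > 1 biases toward BFS-like walks, q < 1 toward DFS-like walks."""
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        nbrs = graph[cur]
        if not nbrs:
            break
        if len(walk) == 1:                    # first step: uniform choice
            walk.append(rng.choice(nbrs))
            continue
        prev = walk[-2]
        weights = []
        for x in nbrs:
            if x == prev:                     # step back to the previous node
                weights.append(1.0 / p)
            elif x in graph[prev]:            # distance 1 from prev
                weights.append(1.0)
            else:                             # moving outward, distance 2
                weights.append(1.0 / q)
        walk.append(rng.choices(nbrs, weights=weights)[0])
    return walk

# Toy trade network: countries linked when bilateral trade occurred.
g = {"KR": ["US", "VN"], "US": ["KR", "VN"], "VN": ["KR", "US", "TH"], "TH": ["VN"]}
print(node2vec_walk(g, "KR", 5))
```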

Modeling for Nuclear Energy for IoT Systems as Green Fuels in Mitigating COVID-19 (COVID-19 완화를 위한 녹색 연료로서 IoT 시스템용 원자력 에너지 모델링)

  • Jang, Kyung Bae;Baek, Chang Hyun;Woo, Tae Ho
    • Journal of Internet of Things and Convergence
    • /
    • v.7 no.2
    • /
    • pp.13-19
    • /
    • 2021
  • It is found that the energy pattern is affected by social conditions during the disease outbreak: energy consumption fell as the national economy declined. Social distancing has been practiced by the public, voluntarily or by law, due to the epidemic of Coronavirus Disease 2019 (COVID-19), and economic stimulus policies have been enacted in several countries, including the United States and South Korea. A susceptible-infectious-recovered (SIR) model is applied using system dynamics (SD), with the logical model constructed from the S, I, and R stocks. In particular, I is connected with Society, including Population, Race, and Maturity; in addition, Economy and Politics are connected to Income, GDP, Resources, President, Popularity, Ruling Government, and Leadership. The graph shows a large jump in April 2020, the starting month of the multiplication of the S value, reflecting the effect of COVID-19 and the related post-pandemic trend. The trends for OECD and non-OECD countries are very similar, and the virus hazard contributes significantly to the economic depressions.
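The stock-and-flow structure of the SIR model the abstract describes can be sketched as a discrete-time Euler integration. The paper's system-dynamics model adds social and economic feedback loops not reproduced here, and the `beta` and `gamma` values below are illustrative assumptions.

```python
def sir(s, i, r, beta, gamma, days, dt=1.0):
    """Euler-integrated SIR model. Returns the (S, I, R) trajectory."""
    n = s + i + r                             # total population (conserved)
    history = [(s, i, r)]
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / n * dt       # flow from S stock to I stock
        new_rec = gamma * i * dt              # flow from I stock to R stock
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append((s, i, r))
    return history

hist = sir(s=9990.0, i=10.0, r=0.0, beta=0.3, gamma=0.1, days=120)
peak_i = max(h[1] for h in hist)
print(round(peak_i))  # peak of the infectious stock under these illustrative parameters
```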

Proof Algorithm of Erdös-Faber-Lovász Conjecture (Erdös-Faber-Lovász 추측 증명 알고리즘)

  • Lee, Sang-Un
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.15 no.1
    • /
    • pp.269-276
    • /
    • 2015
  • This paper proposes a proof algorithm for the Erdős-Faber-Lovász conjecture of the vertex coloring problem, which has so far been unresolved. The Erdős-Faber-Lovász conjecture states that "the union of k copies of k-cliques intersecting pairwise in at most one vertex is k-chromatic," i.e., χ(G) = k. In a bid to prove this conjecture, this paper employs a method that determines the number of intersecting vertices and the number of cliques that intersect at one vertex, so as to count a vertex of minimum degree δ(G) into the minimum independent set (MIS) if both numbers are even, and a vertex of maximum degree Δ(G) otherwise. As a result of this algorithm, the number of MISs obtained is χ(G) = k. When applied to K_k-clique sum intersecting graphs with 3 ≤ k ≤ 8, the proposed method succeeded in obtaining χ(G) = k in all of them. In conclusion, the Erdős-Faber-Lovász conjecture, implying that "the K_k-clique sum intersecting graph is k-chromatic," is proven.
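The conjecture's setting can be illustrated in a few lines: build k copies of K_k that all share one vertex and check that k colors suffice. The greedy coloring below stands in for the paper's MIS-based procedure, which is only summarized in the abstract; the construction and ordering heuristic are assumptions for illustration.

```python
from itertools import combinations

def greedy_coloring(adj):
    """adj: dict vertex -> set of neighbors. Returns vertex -> color index,
    coloring vertices in order of decreasing degree."""
    color = {}
    for v in sorted(adj, key=lambda v: -len(adj[v])):
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:          # smallest color not used by a neighbor
            c += 1
        color[v] = c
    return color

# Build k = 3 triangles (copies of K_3), all sharing the single vertex 0,
# so every pair of cliques intersects in at most one vertex.
k = 3
adj = {}
for copy in range(k):
    clique = [0] + [f"{copy}.{j}" for j in range(1, k)]
    for u, v in combinations(clique, 2):
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

coloring = greedy_coloring(adj)
print(len(set(coloring.values())))  # 3: chi(G) = k colors suffice
```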