• Title/Summary/Keyword: Complexity model


A Review on Ultimate Lateral Capacity Prediction of Rigid Drilled Shafts Installed in Sand (사질토에 설치된 강성현장타설말뚝의 극한수평지지력 예측에 관한 재고)

  • Cho Nam Jun;Kulhawy F.H
    • Journal of the Korean Geotechnical Society / v.21 no.2 / pp.113-120 / 2005
  • An understanding of soil-structure interaction is the key to rational and economical design of laterally loaded drilled shafts. Although extensive research on the behavior of deep foundations under lateral loads has been conducted for several decades, it is very difficult to express the ultimate lateral capacity as a general equation because of inherent soil nonlinearity, nonhomogeneity, and the complexity added by the three-dimensional and asymmetric nature of the problem. This study reviews the four best-known design methods (Reese, Broms, Hansen, and Davidson) with respect to specific site conditions, drilled-shaft geometric characteristics (D/B ratios), and loading conditions. The hyperbolic lateral capacities (H_h), interpreted through the hyperbolic transformation of load-displacement curves obtained from model tests carried out as part of this research, were compared with the ultimate lateral capacities (H_u) predicted by the four methods. The H_u/H_h ratios from Reese's and Hansen's methods are 0.966 and 1.015, respectively, showing that both methods yield results very close to the test results. Although the H_u predicted by Davidson's method is larger than H_h by about 30%, its C.O.V. is the smallest among the four. Broms' method, the simplest of the four, gives H_u/H_h = 0.896; it estimates the ultimate lateral capacity lower than the others because it neglects some sources of resistance against lateral loading, yet it proves to be one of the most reliable methods, with the smallest S.D. in the predictions. In conclusion, none of the four methods is clearly superior to the others in terms of prediction accuracy, and regardless of how sophisticated or complicated the calculation procedures are, the reliability of the lateral capacity predictions appears to be a separate issue.
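
As a concrete illustration of the hyperbolic transformation mentioned above: if the load-displacement curve is assumed to follow H = y/(a + b·y), then plotting y/H against y gives a straight line whose slope b yields the asymptotic capacity H_h = 1/b. The following is a minimal Python sketch under that standard assumption; the sample data are invented, not the paper's model-test measurements.

```python
import numpy as np

def hyperbolic_capacity(y, H):
    """Estimate the hyperbolic lateral capacity H_h from a load-displacement
    curve, assuming H = y / (a + b*y): fit the transformed straight line
    y/H = a + b*y and return the asymptote H_h = 1/b."""
    y, H = np.asarray(y, float), np.asarray(H, float)
    b, a = np.polyfit(y, y / H, 1)   # slope b, intercept a
    return 1.0 / b

# Invented model-test data: displacement y (mm) vs. lateral load H (kN)
y = [1, 2, 4, 8, 16, 32]
H = [12, 20, 30, 40, 48, 53]
print(f"H_h ~ {hyperbolic_capacity(y, H):.1f} kN")  # asymptotic capacity
```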

Intelligent Optimal Route Planning Based on Context Awareness (상황인식 기반 지능형 최적 경로계획)

  • Lee, Hyun-Jung;Chang, Yong-Sik
    • Asia Pacific Journal of Information Systems / v.19 no.2 / pp.117-137 / 2009
  • Recently, intelligent traffic information systems have enabled people to forecast traffic conditions before hitting the road. These convenient systems operate on the basis of data reflecting current road and traffic conditions as well as distance-based data between locations. Thanks to the rapid development of ubiquitous computing, tremendous amounts of context data have become readily available, making vehicle route planning easier than ever. Previous research on the optimization of vehicle route planning focused merely on finding the optimal distance between locations. Contexts reflecting road and traffic conditions were not seriously treated as a way to resolve optimal routing problems based on distance-based route planning, because this kind of information does not have a significant impact on routing until a complex traffic situation arises. Further, it was not easy to take full account of traffic contexts in optimal routing problems, because predicting dynamic traffic situations was regarded as a daunting task. However, with the rapid increase in traffic complexity, the importance of contexts reflecting data related to moving costs has emerged. Hence, this research proposes a framework designed to resolve the optimal route planning problem by taking full account of additional moving costs such as road traffic cost and weather cost, among others. Recent technological developments, particularly in the ubiquitous computing environment, have facilitated the collection of such data. The framework is based on the contexts of time, traffic, and environment, and addresses the following issues. First, we clarify and classify the diverse contexts that affect a vehicle's velocity, and estimate the optimal moving cost using dynamic programming that accounts for the context cost as the contexts vary. Second, the velocity reduction rate is applied to find the optimal route (shortest path) using context data on the current traffic conditions. The velocity reduction rate expresses how far a moving vehicle's attainable velocity falls under the relevant road and traffic contexts, and it is derived from statistical or experimental data. Knowledge generated in this paper can be referenced by organizations that deal with road and traffic data. Third, in experimentation, we evaluate the effectiveness of the proposed context-based optimal route (shortest path) between locations by comparing it with the previously used distance-based shortest path. A vehicle's optimal route may change as its velocity varies under unexpected but possible dynamic situations depending on road conditions. This study includes the context variables 'road congestion', 'work', 'accident', and 'weather', which can alter traffic conditions and affect a moving vehicle's velocity on the road. Since these context variables, except for 'weather', are related to road conditions, the relevant data were provided by the Korea Expressway Corporation; the 'weather' data were obtained from the Korea Meteorological Administration. The recognized contexts are classified as contexts that reduce vehicle velocity, and this classification determines the velocity reduction rate. To find the optimal route (shortest path), we introduced the velocity reduction rate for calculating a vehicle's velocity under composite contexts, when one event synchronizes with another.
We then propose a context-based optimal route (shortest path) algorithm based on dynamic programming. The algorithm is composed of three steps. In the first, initialization step, the departure and destination locations are given, and the path step is initialized to 0. In the second step, as the path step increases, moving costs between locations on the path are estimated with the context-specific velocity reduction rates, taking composite contexts into account. In the third step, the optimal route (shortest path) is retrieved through back-tracking. In the proposed research model, we designed a framework that accounts for context awareness, moving cost estimation (for both composite and single contexts), and the optimal route (shortest path) algorithm (based on dynamic programming). Through illustrative experimentation using the Wilcoxon signed rank test, we showed that context-based route planning is much more effective than distance-based route planning. In addition, we found that the optimal solution (shortest path) obtained through distance-based route planning may not be optimal in real situations, because road conditions are highly dynamic and unpredictable and affect most vehicles' moving costs. Although more information is needed for a more accurate estimation of moving vehicles' costs, this study remains applicable to reducing moving costs through effective route planning. For instance, it could support deliverers' decision making, enhancing their decision satisfaction when they meet unpredictable dynamic situations on the road. Overall, we conclude that taking contexts into account as a part of costs is a meaningful and sensible approach to resolving the optimal route problem.
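
As a compact illustration of the three-step procedure, the sketch below runs a dynamic-programming (Bellman-Ford-style) relaxation in which each edge's travel time is computed from a free-flow speed scaled down by the velocity reduction rates of its active contexts, with composite contexts combined multiplicatively. The network, rates, and speed are invented; this is a sketch of the idea, not the authors' implementation.

```python
from math import inf

# Hypothetical road network: edge distances (km) and per-edge lists of
# velocity reduction rates for the active contexts (congestion, weather, ...).
EDGES = {
    ("A", "B"): 10.0, ("A", "C"): 15.0, ("B", "C"): 4.0,
    ("B", "D"): 12.0, ("C", "D"): 5.0,
}
CONTEXTS = {
    ("A", "B"): [0.5],        # e.g. heavy congestion
    ("C", "D"): [0.2, 0.3],   # e.g. rain + roadwork (a composite context)
}
BASE_SPEED = 60.0             # illustrative free-flow speed, km/h

def edge_time(e):
    """Travel time on edge e with the composite context-reduced velocity."""
    v = BASE_SPEED
    for r in CONTEXTS.get(e, []):
        v *= (1.0 - r)        # composite contexts combined multiplicatively
    return EDGES[e] / v

def context_shortest_path(src, dst):
    nodes = {n for e in EDGES for n in e}
    # Step 1: initialization at path step 0.
    cost = {n: inf for n in nodes}
    cost[src], prev = 0.0, {}
    # Step 2: relax moving costs as the path step increases (DP).
    for _ in range(len(nodes) - 1):
        for (u, v) in EDGES:
            c = cost[u] + edge_time((u, v))
            if c < cost[v]:
                cost[v], prev[v] = c, u
    # Step 3: retrieve the optimal route by back-tracking.
    path, n = [dst], dst
    while n != src:
        n = prev[n]
        path.append(n)
    return list(reversed(path)), cost[dst]

route, hours = context_shortest_path("A", "D")
print(route, f"{hours:.2f} h")  # context-aware shortest path and travel time
```

With these invented rates the context-aware optimum is A-C-D, whereas a purely distance-based plan would prefer A-B-D; that divergence is exactly the effect the abstract's experiment measures.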

A Study on the Abstract Types of the Contemporary Landscape Design (현대조경디자인의 추상유형에 관한 연구)

  • Kim, Jun-Yon;Lee, Haeung-Yul;Bang, Kwang-Ja
    • Journal of the Korean Institute of Landscape Architecture / v.36 no.6 / pp.1-11 / 2009
  • This study focuses on abstract types in contemporary landscape design. The formation and artistry of contemporary landscape design reveal many possibilities that previously could not be expressed in scenic landscape, thanks to the deviation from genre boundaries in contemporary landscape and the hybridization that has occurred among the architecture, landscape, and art genres. The focus of this study is basic research concerning "the abstract", which is used as a creative artistic theory in a variety of art fields such as landscape, architecture, and painting. Through a theoretical establishment of "the abstract", its process of change, and the discovery of its contemporary principles, the relationships between each art field in landscape and the formation of the abstract, abstract language, and abstract properties have been studied. The use of the abstract in contemporary landscape design can be classified in three ways. First, the inductive abstract represents conceptual, transcendental symbols not through logic but through intuition and transcendental cognition, displaying the inner expressions, ideas, and minds of the artists. Second, the deductive abstract represents an expansive, logical model of the simplification, distortion, and exaggeration of objects, based on knowledge and logical reasoning about objective fact rooted in traditional realism. Third, the complex abstract is a concept bound to both the deductive and the inductive abstract. As a major trend, the concept of "the abstract" in contemporary landscape has been putting down ever-deeper roots. New trends such as abstract works and landscape architecture reflecting the artist's inner expression will, in particular, provide fertile soil for landscape in the future. Further research on the concept of "the abstract" will also be necessary in the time to come.

Numerical and Experimental Study on the Coal Reaction in an Entrained Flow Gasifier (습식분류층 석탄가스화기 수치해석 및 실험적 연구)

  • Kim, Hey-Suk;Choi, Seung-Hee;Hwang, Min-Jung;Song, Woo-Young;Shin, Mi-Soo;Jang, Dong-Soon;Yun, Sang-June;Choi, Young-Chan;Lee, Gae-Goo
    • Journal of Korean Society of Environmental Engineers / v.32 no.2 / pp.165-174 / 2010
  • The numerical modeling of the coal gasification reactions occurring in an entrained flow coal gasifier is presented in this study. The purpose is to develop a reliable CFD (Computational Fluid Dynamics) evaluation method for the coal gasifier, not only for basic design but also for the optimization of system operation. The coal gasification reaction consists of a series of processes such as water evaporation, coal devolatilization, heterogeneous char reactions, and gas-phase reactions of the coal off-gas, in a two-phase, turbulent, radiatively participating medium. Both numerical and experimental studies were made for the 1.0 ton/day entrained flow coal gasifier installed at the Korea Institute of Energy Research (KIER). The comprehensive computer program in this study builds on a commercial CFD program by implementing several subroutines required for the gasification process, including an Eddy-Breakup model together with a harmonic-mean approach for the turbulent reaction rate. Further, a Lagrangian approach to particle trajectories is adopted, considering the turbulence effects caused by the non-linearity of the drag force. The program was evaluated successfully against experimental data such as profiles of temperature and gaseous species concentrations, together with the cold gas efficiency. Intensive investigation was then made of the size distribution of the pulverized coal particles, the slurry concentration, and the design parameters of the gasifier. These parameters were compared and evaluated against each other through the calculated syngas production rate and cold gas efficiency, and appear to directly affect gasification performance. Considering the complexity of entrained coal gasification, even though the results of this study look physically reasonable and consistent in the parametric study, more modeling effort, together with systematic evaluation against experimental data, is necessary to develop a reliable CFD-based design tool.
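
For reference, the cold gas efficiency used above as an evaluation metric is conventionally the chemical energy of the cold syngas divided by that of the coal feed. The sketch below computes it under that standard definition; the heating values are approximate textbook figures and the operating point is invented, not KIER data.

```python
# Cold gas efficiency (CGE): chemical energy in the cold syngas divided by
# the chemical energy in the coal feed. All numbers below are illustrative.
LHV = {"CO": 12.6, "H2": 10.8, "CH4": 35.8}   # approximate LHVs, MJ/Nm^3

def cold_gas_efficiency(syngas_flow, fractions, coal_feed, coal_lhv):
    """syngas_flow in Nm^3/h, fractions as mole fractions of the combustible
    species, coal_feed in kg/h, coal_lhv in MJ/kg."""
    syngas_energy = syngas_flow * sum(LHV[s] * x for s, x in fractions.items())
    return syngas_energy / (coal_feed * coal_lhv)

# A 1.0 ton/day gasifier corresponds to ~41.7 kg/h coal feed (illustrative).
cge = cold_gas_efficiency(
    syngas_flow=90.0,                    # Nm^3/h
    fractions={"CO": 0.35, "H2": 0.25},  # mole fractions of CO and H2
    coal_feed=41.7, coal_lhv=29.0)       # MJ/kg
print(f"Cold gas efficiency: {cge:.1%}")
```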

Research on Earthquake Occurrence Characteristics Through the Comparison of the Yangsan-ulsan Fault System and the Futagawa-Hinagu Fault System (양산-울산 단층계와 후타가와-히나구 단층계의 비교를 통한 지진발생특성 연구)

  • Lee, Jinhyun;Gwon, Sehyeon;Kim, Young-Seog
    • The Journal of the Petrological Society of Korea / v.25 no.3 / pp.195-209 / 2016
  • An understanding of the geometric complexity of a strike-slip fault system can be an important factor controlling fault reactivation and surface rupture propagation under a regional stress regime. The 2016 Kumamoto earthquakes were caused by dextral reactivation of the Futagawa-Hinagu Fault system under E-W maximum horizontal principal stress. They comprise a foreshock of magnitude 6.2 at the northern tip of the Hinagu Fault on April 14, 2016 and a magnitude 7.0 mainshock generated at the intersection of the two faults on April 16, 2016. The hypocenters of the mainshock and aftershocks migrated toward the NE along the Futagawa Fault and terminated in the Mt. Aso area. The intersection of the two faults has a configuration similar to a λ-fault. The geometries and kinematics of these faults are comparable to those of the Yangsan-Ulsan Fault system in SE Korea, although the slip rates differ somewhat. Age-dating results show that the Quaternary faults distributed along the northern segment of the Yangsan Fault and along the Ulsan Fault are younger than those along the southern segment of the Yangsan Fault. This result is consistent with a previous study based on a Coulomb stress model. Thus, seismic activity along the middle and northern segments of the Yangsan Fault and the Ulsan Fault may be relatively high compared with that of the southern segment of the Yangsan Fault. Therefore, more detailed seismic hazard and paleoseismic studies should be carried out in this area.

Prioritization of Species Selection Criteria for Urban Fine Dust Reduction Planting (도시 미세먼지 저감 식재를 위한 수종 선정 기준의 우선순위 도출)

  • Cho, Dong-Gil
    • Korean Journal of Environment and Ecology / v.33 no.4 / pp.472-480 / 2019
  • Selection of plant material for planting to reduce fine dust should comprehensively consider visual characteristics, such as the shape and texture of the leaves and the form of the bark, which affect the adsorption function of the plant. However, previous studies on fine dust reduction through plants have focused on the absorption function rather than the adsorption function, and on foliage plants, which are indoor plants, rather than outdoor plants. In particular, the criteria for selecting fine-dust-reducing species are not specific, so research on selection criteria for plant materials for fine dust reduction in urban areas is needed. The purpose of this study is to identify the priorities of eight indicators that affect fine dust reduction by using a fuzzy multi-criteria decision-making (MCDM) model, and to establish tree selection criteria for urban planting to reduce fine dust. To that end, we conducted a questionnaire survey of people who majored in fine-dust-related academic fields or had experience researching fine dust. The survey showed that leaf area and tree species received the highest scores as factors affecting fine dust reduction, followed by leaf surface roughness, tree height, growth rate, leaf complexity, leaf edge shape, and bark features, in that order. When selecting species whose leaves have coarse surfaces, it is better to choose trees with woolly, glossy, or waxy layers on the leaves. When considering leaf shape, it is better to choose doubly or triply compound leaves and palmate leaves rather than simple leaves, and serrated rather than smooth-edged leaves, to increase the leaf surface area available for adsorbing airborne fine dust. When considering bark characteristics, it is better to choose trees that have cork layers or that show, or are likely to develop, bark loosening or cracks, rather than those with lenticels or patterned bark. This study is significant in that it presents priorities among plant-material selection criteria based on the visual characteristics that affect fine dust adsorption, for use in planting plans to reduce fine dust in urban areas. The results can serve as basic data for tree selection in urban plantation planning.
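
The abstract does not spell out the fuzzy MCDM computation, but one common scheme is to have each expert rate an indicator with a triangular fuzzy number, average the ratings component-wise, and defuzzify by the centroid to obtain a priority score. The sketch below follows that generic scheme with the eight indicators taken from the abstract; the expert ratings, and the scheme itself, are illustrative assumptions rather than the study's survey data or method.

```python
# Generic fuzzy-MCDM weighting sketch: each expert rates an indicator with a
# triangular fuzzy number (low, mode, high); ratings are averaged and then
# defuzzified by the centroid (l + m + u) / 3 to rank the indicators.
RATINGS = {  # invented ratings from three hypothetical experts, 0-10 scale
    "leaf area":              [(7, 9, 10), (8, 9, 10), (7, 8, 9)],
    "tree species":           [(7, 8, 10), (6, 8, 9),  (7, 9, 10)],
    "leaf surface roughness": [(6, 8, 9),  (6, 7, 9),  (5, 7, 8)],
    "tree height":            [(5, 7, 8),  (5, 6, 8),  (4, 6, 7)],
    "growth rate":            [(4, 6, 7),  (4, 5, 7),  (5, 6, 8)],
    "leaf complexity":        [(3, 5, 7),  (4, 5, 6),  (3, 4, 6)],
    "leaf edge shape":        [(3, 4, 6),  (2, 4, 5),  (3, 5, 6)],
    "bark feature":           [(2, 3, 5),  (2, 4, 5),  (1, 3, 4)],
}

def fuzzy_priority(ratings):
    """Average the experts' triangular fuzzy numbers component-wise,
    then defuzzify by the centroid."""
    avg = tuple(sum(t[i] for t in ratings) / len(ratings) for i in range(3))
    return sum(avg) / 3.0

scores = {name: fuzzy_priority(r) for name, r in RATINGS.items()}
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:24s} {s:.2f}")
```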

A Study on World University Evaluation Systems: Focusing on U-Multirank of the European Union (유럽연합의 세계 대학 평가시스템 '유-멀티랭크' 연구)

  • Lee, Tae-Young
    • Korean Journal of Comparative Education / v.27 no.4 / pp.187-209 / 2017
  • The purpose of this study is to highlight the necessity of conceptually reestablishing world university evaluations. The hitherto best-known and most validated world university evaluation systems, such as Times Higher Education (THE), Quacquarelli Symonds (QS), and the Academic Ranking of World Universities (ARWU), primarily assess big universities with quantitative evaluation indicators and rank them by performance results. Those systems have instigated a kind of elitism in higher education and neglect numerous small or local institutions of higher education, instead of providing stakeholders with comprehensive information about the real range of tertiary-education options so that they can choose an institution tailored to their individual needs. Also, the management boards of universities and policymakers in higher education have partly been manipulated by, and partly taken advantage of, the elitist ranking systems with their economic emphasis, as indicated by research-centered evaluations and industry-university cooperation. To remedy these educational defects and the shortcomings of existing world university evaluation systems, a new system called 'U-Multirank' has been implemented since 2012 with the financial support of the European Commission. U-Multirank was designed and is operated by an international team of project experts led by CHE (Centre for Higher Education, Germany), CHEPS (Center for Higher Education Policy Studies, Netherlands), and CWTS (Centre for Science and Technology Studies at Leiden University, Netherlands). The distinctive features of U-Multirank, compared with, e.g., THE and ARWU, are its qualitative, multidimensional, user-oriented, and individualized assessment methods. Above all, its website and assessment results, based on a mobile operating system and designed simply for international users, present a self-organized and evolutionary model of world university evaluation in the digital and global era. To estimate the universal validity of this redefinition of world university evaluation through U-Multirank, an epistemological approach is used that relies on Edgar Morin's Complexity Theory and Karl Popper's Philosophy of Science.

A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu;Park, Hyun-Jung;Park, Jin-Soo
    • Asia Pacific Journal of Information Systems / v.17 no.4 / pp.31-59 / 2007
  • We frequently use search engines to find relevant information on the Web but still end up with too much information. To solve this problem of information overload, ranking algorithms have been applied in various domains. As more information becomes available in the future, ranking search results effectively and efficiently will become ever more critical. In this paper, we propose a ranking algorithm for Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page has been estimated from the number of keywords found in the page, which is subject to manipulation. In contrast, link-analysis methods such as Google's PageRank capitalize on the information inherent in the link structure of the Web graph. PageRank considers a page highly important if it is referred to by many other pages, and the degree of importance also increases if the importance of the referring pages is high. Kleinberg's algorithm is another link-structure-based ranking algorithm for Web pages. Unlike PageRank, Kleinberg's algorithm utilizes two kinds of scores: the authority score and the hub score. If a page has a high authority score, it is an authority on a given topic and many pages refer to it; a page with a high hub score links to many authoritative pages. As mentioned above, link-structure-based ranking has played an essential role in the World Wide Web (WWW), and its effectiveness and efficiency are now widely recognized. On the other hand, since the Resource Description Framework (RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed as an RDF graph, making ranking algorithms for RDF knowledge bases greatly important. The RDF graph consists of nodes and directed links, similar to the Web graph, so link-structure-based ranking seems highly applicable to ranking Semantic Web resources. However, the information space of the Semantic Web is more complex than that of the WWW. For instance, the WWW can be considered one huge class, i.e., a collection of Web pages, with only a single recursive property, a 'refers to' property corresponding to the hyperlinks. The Semantic Web, in contrast, encompasses various kinds of classes and properties, and consequently, ranking methods used in the WWW must be modified to reflect the complexity of its information space. Previous research addressed the ranking of query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm to rank Semantic Web resources. They defined the objectivity score and the subjectivity score of a resource, corresponding to Kleinberg's authority score and hub score, respectively. They concentrated on the diversity of properties and introduced property weights to control the influence of one resource on another depending on the characteristics of the property linking the two resources. A node with a high objectivity score is the object of many RDF triples, and a node with a high subjectivity score is the subject of many RDF triples. They developed several kinds of Semantic Web systems to validate their technique and reported experimental results verifying the applicability of their method to the Semantic Web. Despite their efforts, however, some limitations remained, which they reported in their paper.
First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a certain domain; in other words, the ratio of links to nodes should be high, or the resources should be described in sufficient detail, for their algorithm to work properly. Second, the Tightly-Knit Community (TKC) effect, the phenomenon that pages which are less important yet densely connected receive higher scores than pages which are more important but sparsely connected, remains problematic. Third, a resource may receive a high score not because it is actually important, but simply because it is very common and consequently has many links pointing to it. In this paper, we examine these ranking problems from a novel perspective and propose a new algorithm that solves the problems identified in previous studies. Our proposed method is based on a class-oriented approach. In contrast to the predicate-oriented approach adopted in previous research, under our approach a user determines the weight of a property by comparing its relative significance with that of the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are meant to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. It closely reflects the way people evaluate things in the real world and turns out to be superior to the predicate-oriented approach for the Semantic Web. Our proposed algorithm resolves the TKC effect and can further shed light on the other limitations posed by previous research. In addition, we propose two ways to incorporate data-type properties, which had not previously been employed even when they bear on resource importance. We designed an experiment to show the effectiveness of the proposed algorithm and the validity of its ranking results, which had not been attempted in previous research. We also conducted a comprehensive mathematical analysis, which was overlooked in previous research and which enabled us to simplify the calculation procedure. Finally, we summarize our experimental results and discuss further research issues.
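
To make the link-analysis machinery concrete, the sketch below runs a property-weighted PageRank over a toy RDF graph: a user-assigned weight per property scales how much importance flows along each triple, in the spirit of the class-oriented weighting described above. The triples, weights, and names are invented; this is a generic sketch, not the authors' actual algorithm.

```python
import numpy as np

# Toy RDF triples (subject, property, object) and user-assigned property
# weights for the class of interest; all names here are invented.
TRIPLES = [
    ("paperA", "cites", "paperB"), ("paperA", "cites", "paperC"),
    ("paperB", "cites", "paperC"), ("paperA", "authoredBy", "lee"),
    ("paperB", "authoredBy", "kim"), ("paperC", "authoredBy", "kim"),
]
WEIGHTS = {"cites": 1.0, "authoredBy": 0.3}  # relative property significance

def weighted_pagerank(triples, weights, d=0.85, iters=50):
    """PageRank-style power iteration in which the importance flowing from
    a triple's subject to its object is scaled by the property's weight."""
    nodes = sorted({n for s, _, o in triples for n in (s, o)})
    idx = {n: i for i, n in enumerate(nodes)}
    W = np.zeros((len(nodes), len(nodes)))
    for s, p, o in triples:
        W[idx[o], idx[s]] += weights[p]   # importance flows subject -> object
    out = W.sum(axis=0)
    W[:, out > 0] /= out[out > 0]         # normalize each node's out-weight
    r = np.full(len(nodes), 1.0 / len(nodes))
    for _ in range(iters):
        r = (1 - d) / len(nodes) + d * (W @ r)
    return dict(zip(nodes, r))

for node, score in sorted(weighted_pagerank(TRIPLES, WEIGHTS).items(),
                          key=lambda kv: -kv[1]):
    print(f"{node:8s} {score:.3f}")
```

Down-weighting 'authoredBy' relative to 'cites' is the kind of per-class judgment the class-oriented approach asks of the user; it keeps a very common node (an author linked by many triples) from outranking genuinely important resources, which is the paper's third criticism of prior work.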