• Title/Summary/Keyword: Computational power

Wind Field Change Simulation before and after the Regional Development of the Eunpyeong Area at Seoul Using a CFD_NIMR_SNU Model (CFD_NIMR_SNU 모형을 활용한 은평구 건설 전후의 바람환경 변화 모사 연구)

  • Cho, Kyoungmi;Koo, Hae-Jung;Kim, Kyu Rang;Choi, Young-Jean
    • Journal of Environmental Impact Assessment / v.20 no.4 / pp.539-555 / 2011
  • Newly constructed, densely packed high-rise building areas created by urban development can change local wind fields. To assess the impact of land-use changes on local meteorology during the urban redevelopment known as "Eunpyeong new town" in north-western Seoul, wind fields were analyzed using the CFD_NIMR_SNU (Computational Fluid Dynamics, National Institute of Meteorological Research, Seoul National University) model. The initial wind speed and direction were derived from five years of AWS (Automatic Weather Station) observations (a minimal averaging sketch follows below). Before construction, when the site was a low-rise built-up area, the simulated spatial distribution of the horizontal wind field depended mainly on the topography and the direction of the initial inflow. After construction, the wind field was affected by the high-rise buildings as well as the terrain: high-rise buildings generate new circulations among buildings, and small vortices formed around the terrain and the high-rise buildings. Because the high-rise buildings act as a barrier, the horizontal wind flow separated and the wind speed was reduced behind the buildings. CFD_NIMR_SNU was thus able to analyze the impact of high-rise buildings during urban development. With the support of high-performance computing, sophisticated numerical models such as CFD_NIMR_SNU will become more common tools for evaluating the impact of urban development on wind flow and wind corridors.
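To make the initialization step concrete, the sketch below vector-averages a handful of hypothetical AWS wind records into a single inflow speed and direction. The function name and sample values are illustrative assumptions; the paper itself uses five years of AWS analysis data.

```python
# Minimal sketch (not the paper's code): deriving an initial wind speed and
# direction for a CFD run by vector-averaging AWS records. The sample records
# are hypothetical stand-ins for multi-year AWS observations.
import numpy as np

def vector_average_wind(speeds_ms, directions_deg):
    """Return (mean speed, mean direction) from scalar speeds and
    meteorological directions (degrees the wind blows FROM)."""
    speeds = np.asarray(speeds_ms, dtype=float)
    theta = np.deg2rad(directions_deg)
    # u: east-west component, v: north-south component (met. convention)
    u = -speeds * np.sin(theta)
    v = -speeds * np.cos(theta)
    u_mean, v_mean = u.mean(), v.mean()
    mean_speed = float(np.hypot(u_mean, v_mean))
    mean_dir = np.rad2deg(np.arctan2(-u_mean, -v_mean)) % 360.0
    return mean_speed, mean_dir

if __name__ == "__main__":
    # Hypothetical hourly AWS samples: wind speed (m/s) and direction (deg)
    speed, direction = vector_average_wind([2.1, 3.4, 2.8], [270, 290, 300])
    print(f"initial inflow: {speed:.2f} m/s from {direction:.0f} deg")
```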

Performance Analysis of Co- and Cross-tier Device-to-Device Communication Underlaying Macro-small Cell Wireless Networks

  • Li, Tong;Xiao, Zhu;Georges, Hassana Maigary;Luo, Zhinian;Wang, Dong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.4 / pp.1481-1500 / 2016
  • Device-to-Device (D2D) communication underlaying macro-small cell networks, one of the promising technologies of the 5G era, can improve spectral efficiency and increase system capacity. In this paper, we model cross- and co-tier D2D communications in two-tier macro-small cell networks. To avoid the complicated interference caused by cross-tier D2D, we propose a mode selection scheme with a dedicated resource sharing strategy. For co-tier D2D, we formulate a joint power control and resource reuse optimization problem aimed at maximizing the overall outage capacity. To solve this non-convex problem, we devise a heuristic algorithm that obtains a suboptimal solution with reduced computational complexity (a simplified greedy sketch follows below). System-level simulations demonstrate the effectiveness of the proposed method, which provides enhanced system performance and guarantees the quality of service (QoS) of all devices in two-tier macro-small cell networks. In addition, our study reveals the high potential of introducing cross- and co-tier D2D in small cell networks: i) cross-tier D2D outperforms co-tier D2D at low and medium small cell densities, and ii) co-tier D2D achieves a steady performance improvement as small cell density increases.
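The abstract describes a heuristic for the joint power control and resource reuse problem. The sketch below is a much simpler greedy stand-in under synthetic channel gains: for each D2D pair it picks the resource block and power level that maximize that pair's rate while protecting the co-channel cellular user, and it ignores interference between D2D pairs. All parameter names and values are assumptions, not the authors' algorithm.

```python
# Greedy sketch (assumptions only): per-pair resource-block and power
# selection with a cellular SINR constraint; D2D-to-D2D interference is
# deliberately ignored to keep the illustration short.
import itertools
import numpy as np

rng = np.random.default_rng(0)
NOISE = 1e-13                    # noise power (W), assumed
SINR_MIN_CELL = 1.0              # minimum SINR for the cellular user, assumed
P_LEVELS = [0.001, 0.01, 0.1]    # candidate D2D transmit powers (W), assumed
P_CELL = 0.1                     # cellular transmit power (W), assumed
N_RB, N_D2D = 4, 3

# Synthetic link gains: g_dd D2D link, g_cb cellular->base on each RB,
# g_db D2D->base interference, g_cd cellular->D2D interference.
g_dd = rng.uniform(1e-9, 1e-7, N_D2D)
g_cb = rng.uniform(1e-9, 1e-7, N_RB)
g_db = rng.uniform(1e-11, 1e-9, N_D2D)
g_cd = rng.uniform(1e-11, 1e-9, (N_RB, N_D2D))

for d in range(N_D2D):
    best = None
    for r, p in itertools.product(range(N_RB), P_LEVELS):
        sinr_cell = P_CELL * g_cb[r] / (NOISE + p * g_db[d])
        if sinr_cell < SINR_MIN_CELL:
            continue                          # would violate cellular QoS
        sinr_d2d = p * g_dd[d] / (NOISE + P_CELL * g_cd[r, d])
        rate = np.log2(1.0 + sinr_d2d)
        if best is None or rate > best[0]:
            best = (rate, r, p)
    if best is None:
        print(f"D2D pair {d}: no feasible RB/power (stay in cellular mode)")
    else:
        print(f"D2D pair {d}: RB={best[1]}, power={best[2]} W, "
              f"rate={best[0]:.2f} b/s/Hz")
```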

Semantic Computing for Big Data: Approaches, Tools, and Emerging Directions (2011-2014)

  • Jeong, Seung Ryul;Ghani, Imran
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.6 / pp.2022-2042 / 2014
  • The term "big data" has recently gained widespread attention in the field of information technology (IT). One of the key challenges in making use of big data lies in finding ways to uncover relevant and valuable information. The high volume, velocity, and variety of big data hinder the use of solutions designed for smaller datasets, which rely on manual interpretation of the data. Semantic computing technologies have been proposed as a means of dealing with these issues and, with the advent of linked data in recent years, have become central to mainstream semantic computing. This paper surveys state-of-the-art semantics-based approaches and tools that can be leveraged to enrich and enhance today's big data, reviewing 61 studies published between 2011 and 2014. It also highlights the key challenges that semantic approaches must address in the near future, presenting cutting-edge work on ontology engineering, ontology evolution, searching and filtering relevant information, extraction and reasoning, distributed (web-scale) reasoning, and representing big data (a small linked-data sketch follows below). It further makes recommendations that may encourage researchers to explore more deeply the applications of semantic technology for improving the processing of big data. The findings contribute to the existing body of knowledge on semantics and the computational issues related to big data, and may trigger further research in the field. Our analysis shows that more effort is needed to propose new approaches and to create tools that support researchers and practitioners in realizing the true power of semantic computing and in solving the crucial issues of big data.
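As a concrete illustration of the linked-data representation that the survey identifies as central to semantic computing, the sketch below builds a tiny RDF graph with the rdflib library and runs a SPARQL query over it. The namespace, resources, and query are illustrative assumptions, not taken from the surveyed studies.

```python
# Minimal linked-data sketch using rdflib; everything here is illustrative.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, FOAF

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.alice, RDF.type, FOAF.Person))
g.add((EX.alice, FOAF.name, Literal("Alice")))
g.add((EX.alice, FOAF.knows, EX.bob))
g.add((EX.bob, RDF.type, FOAF.Person))
g.add((EX.bob, FOAF.name, Literal("Bob")))

# A SPARQL query that uncovers information by following links in the graph.
query = """
    SELECT ?name WHERE {
        ?person foaf:knows ?friend .
        ?friend foaf:name ?name .
    }
"""
for row in g.query(query, initNs={"foaf": FOAF}):
    print(row.name)   # -> Bob
```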

RADIOLOGICAL DOSE ASSESSMENT ACCORDING TO METHODOLOGIES FOR THE EVALUATION OF ACCIDENTAL SOURCE TERMS

  • Jeong, Hae Sun;Jeong, Hyo Joon;Kim, Eun Han;Han, Moon Hee;Hwang, Won Tae
    • Journal of Radiation Protection and Research / v.39 no.4 / pp.176-181 / 2014
  • The objective of this paper is to evaluate the fission product inventories and radiological doses for a non-LOCA event, based on the U.S. NRC regulatory methodologies recommended in TID-14844 and RG 1.195. For the non-LOCA event, one fuel assembly was assumed to melt following a channel blockage accident. The Hanul nuclear power reactor unit 6 and the CE 16×16 fuel assembly were selected as the computational models. The burnup cross-section library for the depletion calculations was produced using the TRITON module of the SCALE6.1 computer code system, and the source term calculation was performed with the ORIGEN-ARP module based on recently licensed values of fuel enrichment and burnup. The fission product inventories released into the environment were obtained under the assumptions of TID-14844 and RG 1.195. Using the two kinds of source terms, the radiological doses to the public were evaluated for a normal environment reflecting realistic circumstances, applying average conditions of meteorology, inhalation rate, and shielding factor. A statistical analysis was first carried out using three consecutive years of meteorological data measured at the Hanul site. The annual-averaged atmospheric dispersion factors were evaluated at the shortest representative distance from the reactor core at which residents can actually live, 1,000 m, according to the methodology recommended in RG 1.111 (a basic Gaussian-plume sketch follows below). Korea-specific values of the inhalation rate and the building shielding factor were used in the dose calculations.
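The annual-averaged atmospheric dispersion factors mentioned above follow the RG 1.111 methodology; the sketch below shows only the basic ground-level, plume-centerline Gaussian form of chi/Q with hypothetical inputs, omitting the sector averaging and stability-class statistics that a real assessment requires.

```python
# Minimal sketch (not the RG 1.111 procedure): ground-level, plume-centerline
# Gaussian atmospheric dispersion factor chi/Q for an elevated release.
import math

def chi_over_q(u, sigma_y, sigma_z, release_height):
    """chi/Q in s/m^3 for wind speed u (m/s), dispersion coefficients
    sigma_y, sigma_z (m), and release height H (m)."""
    return (1.0 / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-release_height**2 / (2.0 * sigma_z**2)))

# Hypothetical values roughly representative of a 1,000 m downwind distance.
print(f"{chi_over_q(u=2.0, sigma_y=80.0, sigma_z=40.0, release_height=50.0):.2e} s/m^3")
```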

Time-domain Equalization Algorithm for a DMT-based xDSL Modem (DMT 방식의 xDSL 모뎀을 위한 시간영역 등화 알고리듬)

  • Kim, Jae-Gwon;Yang, Won-Yeong;Jeong, Man-Yeong;Jo, Yong-Su;Baek, Jong-Ho;Yu, Yeong-Hwan;Song, Hyeong-Gyu
    • The Journal of Korean Institute of Communications and Information Sciences / v.25 no.1A / pp.167-177 / 2000
  • In this paper, a new algorithm for designing a time-domain equalizer (TEQ) for an xDSL system employing discrete multitone (DMT) modulation is proposed. The proposed algorithm, derived by neglecting the terms that do not affect the performance of a DMT system in ARMA modeling, is shown to achieve performance similar to previous TEQ algorithms such as the matrix inverse algorithm, the fast algorithm, the iterative algorithm, and the inverse power method, with significantly lower computational complexity. In addition, since the proposed algorithm requires only the received signal, no information on the channel impulse response or a training sequence is needed. It is also shown that, when no bridged tap is present, the number of TEQ taps required can be halved (from 16 to 8) without affecting the overall performance. The performances of the proposed and previous TEQ algorithms are compared in an ADSL environment (a minimal shortening-metric sketch follows below).
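TEQ designs are typically compared by how well they shorten the effective channel. The sketch below computes the usual shortening metric, the ratio of effective-channel energy inside a window of cyclic-prefix length to the energy outside it, for a hypothetical channel and equalizer; it illustrates the evaluation criterion, not the proposed blind design algorithm.

```python
# Minimal sketch: shortening SNR of a TEQ w applied to a channel h.
# Channel and equalizer taps below are hypothetical.
import numpy as np

def shortening_snr_db(h, w, cp_len):
    """Best-case ratio of in-window to out-of-window energy of conv(h, w)."""
    c = np.convolve(h, w)
    win = cp_len + 1
    best = -np.inf
    for delay in range(len(c) - win + 1):
        inside = np.sum(c[delay:delay + win] ** 2)
        outside = np.sum(c ** 2) - inside
        best = max(best, 10 * np.log10(inside / max(outside, 1e-30)))
    return best

h = np.array([0.1, 1.0, 0.7, 0.4, 0.2, 0.1, 0.05])   # hypothetical channel
w = np.array([1.0, -0.45, 0.1])                      # hypothetical 3-tap TEQ
print(f"shortening SNR: {shortening_snr_db(h, w, cp_len=2):.1f} dB")
```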

A Study on Utilization Ratio and Operation of Transmission Lines (송전선로의 이용률 평가 및 합리적 운영에 관한 연구)

  • Kim, Dong-Min;Bae, In-Su;Cho, Jong-Man;Kim, Jin-O
    • The Transactions of the Korean Institute of Electrical Engineers A / v.55 no.10 / pp.426-432 / 2006
  • This paper describes the concepts of Static Line Rating (SLR) and Dynamic Line Rating (DLR) and the computational methods used to determine them. Calculating line capacity requires the heat balance equation, which is also used to compute the tension reduction caused by line aging (a minimal ampacity sketch based on this balance follows below). SLR is calculated from the worst weather conditions of the year, and even now the utilization ratio in Korea is obtained from SLR data. DLR improves on SLR by providing not only higher but also more accurate allowable line ratings based on line aging and real-time weather conditions. To reflect overhead transmission line aging in DLR, this paper proposes a method that accounts for the tension lost since the lines were installed, from which a new continuously allowable temperature for the remaining lifetime is obtained. To forecast DLR, this paper uses weather forecast models and applies the concept of Thermal Overload Risk Probability (TORP). A new index, the Dynamic Utilization Ratio (DUR), is then defined to replace the Static Utilization Ratio (SUR). In the case study, the two main transmission lines carrying the northbound power flow into the Seoul metropolitan area are chosen; their line ratings and utilization ratios are computed so that the present and estimated utilization ratios can be compared. Finally, the validity of the predictive DUR as an objective index is demonstrated through simulations of emergency states caused by system outages, overloads, and so on.
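The line ratings discussed above rest on the steady-state heat balance of the conductor. The sketch below solves that balance for the current rating, with the convective, radiative, and solar terms supplied as assumed per-metre inputs rather than computed from weather data as a full SLR/DLR procedure would.

```python
# Minimal sketch (assumed inputs, not the paper's model): with convective loss
# q_c, radiative loss q_r, solar gain q_s (all W/m) and AC resistance R
# (ohm/m) at the maximum allowable conductor temperature, the rating follows
# from the balance I^2 * R + q_s = q_c + q_r.
import math

def line_rating_amps(q_c, q_r, q_s, r_ac):
    """Ampacity (A) from the steady-state conductor heat balance."""
    return math.sqrt(max(q_c + q_r - q_s, 0.0) / r_ac)

# Hypothetical per-metre values: a calm, sunny "worst case" (SLR-like) versus
# a windy, cooler real-time case (DLR-like) for the same conductor.
print(f"static-like rating : {line_rating_amps(30.0, 25.0, 15.0, 8e-5):.0f} A")
print(f"dynamic-like rating: {line_rating_amps(80.0, 25.0, 10.0, 8e-5):.0f} A")
```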

Design and Implementation of Sensor Network Actors Supporting Ptolemy Tool (Ptolemy Tool을 지원하는 무선 센서네트워크 Actor의 설계 및 구현)

  • An, Ki-Jin;Joo, Hyun-Chul;Oh, Hyung-Rae;Kim, Young-Duk;Yang, Yeon-Mo;Song, Hwang-Jun
    • Journal of the Korea Society for Simulation / v.17 no.4 / pp.113-122 / 2008
  • The emergence of wireless sensor networks raises several modeling and design issues for network simulation. In previous work, researchers have relied on three evaluation approaches, analytical methods, computer simulation, and real test-bed measurement, to verify the performance of wireless sensor networks. Among these, analytical methods are widely used because the other approaches have significant drawbacks, such as limited network power, the computational difficulty of distributed processing, and complicated debugging. However, analytical methods also perform poorly when dealing with complex operations in large wireless sensor networks. Thus, in this paper, we design and implement SMAC and AODV protocols using the Ptolemy tool in order to improve simulation performance. The developed Ptolemy actors can easily be adapted to compare and evaluate various protocols for wireless sensor networks.

A Design of Measuring impact of Distance between a mobile device and Cloudlet (모바일 장치와 클라우드 사이 거리의 영향 측정에 대한 연구)

  • Eric, Niyonsaba;Jang, Jong-Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2015.10a / pp.232-235 / 2015
  • In recent years, mobile devices have been equipped with functionalities comparable to those of computers. However, mobile devices have limited resources due to constraints such as low processing power, limited memory, unpredictable connectivity, and limited battery life. To enhance the capacity of mobile devices, an attractive idea is to use cloud computing and virtualization techniques to shift the workload from mobile devices to a computational infrastructure. These techniques migrate resource-intensive computations from a mobile device to the resource-rich cloud or to a nearby server infrastructure. To achieve this, researchers have designed mobile cloud application models such as CloneCloud, Cloudlet, and Weblet. In this paper, we highlight the cloudlet architecture (infrastructure near the mobile device) and its methodology, and we discuss how to measure the impact of the distance between the cloudlet and the mobile device in our design (a minimal latency-probe sketch follows below).
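A simple way to observe the distance effect discussed here is to time round trips from the device to a nearby cloudlet and to a distant cloud endpoint. The sketch below does this with plain TCP connections; the host names are placeholders, not the paper's test bed.

```python
# Minimal sketch (hypothetical hosts): average TCP connect round-trip time
# from a device to a nearby cloudlet versus a distant cloud server.
import socket
import time

def avg_rtt_ms(host, port=80, samples=5, timeout=2.0):
    """Average TCP connect round-trip time to host:port, in milliseconds."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=timeout):
            pass
        total += (time.perf_counter() - start) * 1000.0
    return total / samples

if __name__ == "__main__":
    # "cloudlet.local" and "cloud.example.com" are placeholder names.
    for name in ("cloudlet.local", "cloud.example.com"):
        try:
            print(f"{name}: {avg_rtt_ms(name):.1f} ms")
        except OSError as err:
            print(f"{name}: unreachable ({err})")
```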

FIRE PROPAGATION EQUATION FOR THE EXPLICIT IDENTIFICATION OF FIRE SCENARIOS IN A FIRE PSA

  • Lim, Ho-Gon;Han, Sang-Hoon;Moon, Joo-Hyun
    • Nuclear Engineering and Technology / v.43 no.3 / pp.271-278 / 2011
  • When performing a fire PSA for a nuclear power plant, an event mapping method based on the internal event PSA model is widely used to reduce the resources required for fire PSA model development. With this method, feasible initiating events and fire-induced component failure events are identified to transform the fault tree (FT) of the internal event PSA into one for the fire PSA. A surrogate event or damage term method is used to condition the FT of the internal PSA: the surrogate event or damage term flags whether the system or component in a fire compartment is damaged, depending on whether the fire initiates in a specified compartment. These methods usually require the states of all compartments in a fire area to be modeled explicitly. Fire event scenarios identified explicitly with surrogate or damage terms have two problems: (1) propagation beyond a single adjacent compartment is not considered, and (2) simultaneous propagation of an initiating fire along multiple paths is not considered. The present paper proposes a fire propagation equation that identifies all possible fire event scenarios for explicitly treated fire event scenarios in the fire PSA. A method for separating fire events was also developed to make all fire events mutually exclusive, which facilitates arithmetic summation in fire risk quantification. A simple example is given to confirm the applicability of the method for a 2×3 rectangular fire area (an enumeration sketch follows below). A feasible asymptotic approach is also discussed to reduce the computational burden of fire risk quantification.
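As a rough illustration of how explicit scenarios multiply in the 2×3 example area, the sketch below enumerates every connected set of compartments containing an assumed initiating compartment and treats each set as one mutually exclusive damage state. It illustrates the bookkeeping only, not the paper's fire propagation equation.

```python
# Minimal sketch: enumerate connected propagation scenarios in a 2x3 fire
# area starting from a hypothetical initiating compartment.
from itertools import combinations

ROWS, COLS = 2, 3
cells = [(r, c) for r in range(ROWS) for c in range(COLS)]

def adjacent(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1

def connected(subset):
    """True if the compartments in subset form one connected region."""
    subset = set(subset)
    stack, seen = [next(iter(subset))], set()
    while stack:
        cur = stack.pop()
        if cur in seen:
            continue
        seen.add(cur)
        stack.extend(n for n in subset if n not in seen and adjacent(cur, n))
    return seen == subset

origin = (0, 0)   # hypothetical initiating compartment
scenarios = [set(s) | {origin}
             for k in range(len(cells))
             for s in combinations([c for c in cells if c != origin], k)
             if connected(set(s) | {origin})]
print(f"{len(scenarios)} mutually exclusive propagation scenarios from {origin}")
```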

Development of Auto Tracking System for Baseball Pitching (투구된 공의 실시간 위치 자동추적 시스템 개발)

  • Lee, Ki-Chung;Bae, Sung-Jae;Shin, In-Sik
    • Korean Journal of Applied Biomechanics / v.17 no.1 / pp.81-90 / 2007
  • Identifying the position of a moving object in real time has been an issue not only in sports biomechanics but also in other academic areas. To address it, this study tracks a pitched ball, whose clear appearance and simple motion make prediction easier. Machine learning has led research on extracting information from image sequences, such as object tracking. Although rule-based methods in artificial intelligence prevailed for decades, the field has evolved toward statistical approaches that find the maximum a posteriori location in the image. This development, together with advances in recording technology and the computational power of computers, has made it possible to extract the trajectory of a pitched baseball from recorded images. We present a baseball tracking method based on object tracking techniques from machine learning, introducing three state-of-the-art studies and showing how they can be combined into a novel engine that finds the trajectory from continuous pitching images. The first concerns the mean shift method, which finds the mode of an assumed continuous distribution from a set of data (a minimal sketch follows below). The second explains how to find the mode and the object region efficiently when the object's location and region in the previous frame are given. The third concerns representing the data as tractable features, from which a distribution can be established to generate data for mean shift. We combine the three works to track the ball's location in successive image frames; from the locations obtained in two camera views, the real 3-D trajectory of the pitched ball can be reconstructed. We demonstrate how this works on real pitching images.
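The first building block named in the abstract is mean shift mode seeking. The sketch below implements the textbook Gaussian-kernel version on synthetic 2-D samples standing in for ball-like pixel positions; the data and parameters are assumptions, not the authors' engine.

```python
# Minimal sketch: mean shift mode seeking with a Gaussian kernel. Starting
# from a guess, each step moves the estimate to the kernel-weighted mean of
# the samples until it converges on a mode.
import numpy as np

def mean_shift_mode(points, start, bandwidth=1.0, tol=1e-5, max_iter=200):
    x = np.asarray(start, dtype=float)
    pts = np.asarray(points, dtype=float)
    for _ in range(max_iter):
        w = np.exp(-np.sum((pts - x) ** 2, axis=1) / (2.0 * bandwidth ** 2))
        x_new = (w[:, None] * pts).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

rng = np.random.default_rng(1)
# Synthetic 2-D samples clustered around (5, 5), standing in for pixel
# positions weighted by how "ball-like" they look.
samples = rng.normal(loc=[5.0, 5.0], scale=0.5, size=(200, 2))
print("estimated mode:", mean_shift_mode(samples, start=[3.0, 3.0]))
```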