• Title/Summary/Keyword: computational power

Search Results: 1,946

Sensitivity Analysis of Wake Diffusion Patterns in Mountainous Wind Farms according to Wake Model Characteristics on Computational Fluid Dynamics (전산유체역학 후류모델 특성에 따른 산악지형 풍력발전단지 후류확산 형태 민감도 분석)

  • Kim, Seong-Gyun;Ryu, Geon Hwa;Kim, Young-Gon;Moon, Chae-Joo
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.17 no.2 / pp.265-278 / 2022
  • The global energy paradigm is rapidly shifting toward carbon neutrality, and wind energy is establishing itself as a leading renewable power source. The success of onshore and offshore wind projects hinges on economic feasibility, which in turn depends on securing high-quality wind resources and on the optimal arrangement of wind turbines. When laying out a wind farm, arranging the turbines with respect to the prevailing wind direction is critical, because it minimizes the wake effect caused by flow passing through upwind structures. How accurately the wake effect can be predicted is determined by the wake model and the modeling technique used to simulate it. In this paper, therefore, the commercial CFD code WindSim is used to analyze the wake diffusion pattern of a proposed onshore wind farm in mountainous complex terrain in South Korea through a sensitivity study of each wake model, with the aim of providing basic reference data for future wind energy projects in complex terrain.
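The wake models compared in such CFD sensitivity studies are typically analytical engineering models. As background, here is a minimal sketch of the classic Jensen (Park) model, one of the wake models commonly offered by packages such as WindSim; the turbine parameters and the wake decay constant below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def jensen_wake_speed(u_inf, ct, rotor_d, x, k=0.075):
    """Wind speed at distance x downstream of a turbine under the
    Jensen (Park) wake model. k is the wake decay constant; 0.075 is
    a common onshore default, and mountainous complex terrain often
    calls for a larger, site-specific value."""
    r0 = rotor_d / 2.0                 # rotor radius
    rw = r0 + k * x                    # linearly expanding wake radius
    deficit = (1.0 - np.sqrt(1.0 - ct)) * (r0 / rw) ** 2
    return u_inf * (1.0 - deficit)

# Illustrative case: 8 m/s inflow, thrust coefficient 0.8, 100 m rotor.
for x in (200.0, 500.0, 1000.0):
    print(f"x = {x:6.0f} m -> u = {jensen_wake_speed(8.0, 0.8, 100.0, x):.2f} m/s")
```

A sensitivity study like the paper's would swap this deficit law for alternative wake formulations and compare the resulting diffusion patterns over the terrain.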

Development of transient Monte Carlo in a fissile system with β-delayed emission from individual precursors using modified open source code OpenMC(TD)

  • J. Romero-Barrientos;F. Molina;J.I. Marquez Damian;M. Zambra;P. Aguilera;F. Lopez-Usquiano;S. Parra
    • Nuclear Engineering and Technology / v.55 no.5 / pp.1593-1603 / 2023
  • In deterministic and Monte Carlo transport codes, β-delayed neutron emission is traditionally included using a group structure in which all precursors are lumped into 6 groups or families; given the increase in computational power, there is no longer any reason to keep this structure. Furthermore, there have been recent efforts to compile and evaluate all available β-delayed neutron emission data and to measure new, improved data on individual precursors. To perform a transient Monte Carlo simulation, data from individual precursors must be implemented in a transport code. This work is a first step toward a tool for exploring the effect of individual precursors in a fissile system. Specifically, individual precursor data are included by expanding the capabilities of the open-source Monte Carlo code OpenMC. In the modified code, named Time Dependent OpenMC or OpenMC(TD), the time dependence associated with β-delayed neutron emission is handled by forced decay of precursors and combing of the particle population. Continuous-energy neutron cross sections were taken from the JEFF-3.1.1 library. For the individual precursors, cumulative yields were taken from JEFF-3.1.1, while delayed neutron emission probabilities and delayed neutron spectra were taken from ENDF/B-VIII.0. OpenMC(TD) was tested in a monoenergetic system; in an energy-dependent, unmoderated system where the precursors were treated individually or in a group structure; and in a light-water-moderated, energy-dependent system using 6 groups and 50 or 40 individual precursors. Neutron flux as a function of time was obtained for each system studied. These results show the potential of OpenMC(TD) as a tool to study the impact of individual precursor data on fissile systems, motivating further research on more complex fissile systems.
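The "forced decay of precursors" mentioned in the abstract is a variance-reduction idea: every sampled precursor is made to emit its delayed neutron inside the current time window, with its statistical weight scaled by the true decay probability so the estimator stays unbiased. Below is a minimal single-precursor sketch of that idea; the function names and the Br-87-like half-life are illustrative assumptions, not the OpenMC(TD) implementation.

```python
import math
import random

def forced_decay(t0, t1, lam, weight):
    """Force a precursor with decay constant lam (1/s) to decay inside
    the time window [t0, t1]. The weight is multiplied by the true
    probability of decaying in that window (inverse-CDF sampling of
    the truncated exponential keeps the estimate unbiased)."""
    p_window = math.exp(-lam * t0) - math.exp(-lam * t1)
    u = random.random()
    t_decay = -math.log(math.exp(-lam * t0) - u * p_window) / lam
    return t_decay, weight * p_window

# Illustrative: a Br-87-like precursor (half-life ~55.6 s) forced to
# emit its delayed neutron within the first 0.1 s of a transient.
lam = math.log(2.0) / 55.6
t, w = forced_decay(0.0, 0.1, lam, 1.0)
print(f"decay at t = {t:.4f} s, weight = {w:.3e}")
```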

Study of Small Craft Resistance under Different Loading Conditions using Model Test and Numerical Simulations (모형시험과 수치해석을 이용한 하중조건 변화에 따른 소형선박의 저항성능 변화에 관한 연구)

  • Lim, Jun-Taek;Michael;Im, Nam-Kyun;Seo, Kwang-Cheol
    • Journal of the Korean Society of Marine Environment & Safety / v.29 no.6 / pp.672-680 / 2023
  • Weight is a critical factor in the ship design process, given its substantial impact on the hydrodynamic performance of ships. Typically, ships are optimized for specific conditions with a fixed draft and displacement. In reality, however, weight and draft vary within a certain range owing to operational factors such as fuel consumption, ballast adjustments, and loading conditions. We therefore investigated how the resistance of a small craft changes under three loading conditions, namely overload, design load, and lightship, using both model experiments and numerical simulations. We also examined the sensitivity of resistance to weight changes as a path to improving ship performance and ultimately reducing power requirements, in support of the International Maritime Organization's (IMO) goal of cutting CO2 emissions by 50% by 2050. We found that weight changes have a more significant impact at low Froude numbers. Operating under the overload condition, which corresponds to a 5% increase in draft and an 11.1% increase in displacement, can increase total resistance by up to 15.97% in towing tests and 14.31% in CFD simulations.
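Because the results are reported against the Froude number, a quick sketch of how it is computed, and how a towing-tank model speed is matched to the full-scale ship, may help; the hull length, speed, and scale ratio below are illustrative assumptions, not the paper's values.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def froude_number(speed, length_wl):
    """Fn = V / sqrt(g * Lwl), with speed in m/s and waterline length in m."""
    return speed / math.sqrt(G * length_wl)

def model_speed(ship_speed, scale):
    """Towing speed that matches the ship's Froude number for a
    geometric scale ratio lambda = L_ship / L_model."""
    return ship_speed / math.sqrt(scale)

# Illustrative: a 10 m small craft at 4 m/s, tested as a 1:5 model.
print(f"Fn          = {froude_number(4.0, 10.0):.3f}")
print(f"model speed = {model_speed(4.0, 5.0):.3f} m/s")
```

Equal Froude numbers make the wave-making behavior of model and ship comparable, which is why resistance changes are reported per Froude number rather than per absolute speed.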

An Analysis of Big Video Data with Cloud Computing in Ubiquitous City (클라우드 컴퓨팅을 이용한 유시티 비디오 빅데이터 분석)

  • Lee, Hak Geon;Yun, Chang Ho;Park, Jong Won;Lee, Yong Woo
    • Journal of Internet Computing and Services / v.15 no.3 / pp.45-52 / 2014
  • The Ubiquitous City (U-City) is a smart, intelligent city that satisfies people's desire to enjoy IT services with any device, anytime, anywhere. It is a future city model based on the Internet of Everything or Things (IoE or IoT), and it includes many networked video cameras. These cameras, together with sensors, supply the main input data for many U-City services, and they continuously generate huge volumes of video: genuinely big data for the U-City. The U-City is usually required to manipulate this big data in real time, which is far from easy. It is also often necessary to analyze the accumulated video to detect an event or find a person, which demands considerable computational power and time. Current research aims to reduce the processing time for such big video data, and cloud computing is a good way to address this problem. Among the many applicable cloud computing methodologies, MapReduce is an attractive one: it has many advantages and is gaining popularity in many areas. As video cameras evolve, their resolution improves sharply, leading to exponential growth in the data produced by networked cameras; with high-quality cameras we are dealing with truly big data. Video surveillance systems were of limited use before cloud computing, but they are now spreading widely in U-Cities thanks to these methodologies. Because video data are unstructured, good research results on analyzing them with MapReduce are hard to find. This paper presents an analysis system for video surveillance, a cloud-computing-based video data management system that is easy to deploy, flexible, and reliable. It consists of a video manager, video monitors, storage for the video images, a storage client, and a streaming-IN component. The video monitor consists of a video translator and a protocol manager, and the storage contains the MapReduce analyzer; all components were designed according to the functional requirements of a video surveillance system. The streaming-IN component receives video data from the networked cameras, delivers it to the storage client, and manages network bottlenecks to smooth the data stream. The storage client receives video data from the streaming-IN component, stores it, and helps the other components access the storage. The video monitor component streams the video smoothly and manages the protocols: its video translator sub-component lets users manage the resolution, codec, and frame rate of the video, while its protocol sub-component handles the Real Time Streaming Protocol (RTSP) and the Real Time Messaging Protocol (RTMP). We use the Hadoop Distributed File System (HDFS) as the cloud storage: Hadoop stores the data in HDFS and provides a platform that can process it with the simple MapReduce programming model. We propose our own methodology for analyzing video images with MapReduce; the workflow of the video analysis is presented and explained in detail in this paper. The performance evaluation was carried out experimentally, and the proposed system worked well. The evaluation results are presented with analysis: on our cluster we used compressed 1920×1080 (FHD) video, the H.264 codec, and HDFS as the video storage, and we measured the processing time as a function of the number of frames per mapper. By tracing the optimal input split size and the processing time against the number of nodes, we confirmed that system performance scales linearly.
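To make the MapReduce workflow concrete, here is a minimal Hadoop Streaming-style mapper/reducer sketch in the spirit of the paper's pipeline: the mapper runs a detector stub over frame records and the reducer aggregates event counts per camera. The record layout, keys, and the detect_event() stub are illustrative assumptions, not the authors' implementation.

```python
#!/usr/bin/env python3
"""Hadoop Streaming-style sketch: the mapper analyzes frame records,
the reducer sums detected events per camera."""
import sys

def detect_event(frame_path):
    # Stub: a real mapper would decode the H.264 frame from HDFS and
    # run image analysis here.
    return 1 if frame_path.endswith("_motion.jpg") else 0

def mapper():
    for line in sys.stdin:                      # "frame_id\tframe_path"
        frame_id, frame_path = line.rstrip("\n").split("\t", 1)
        camera = frame_id.split("_")[0]         # key results by camera
        print(f"{camera}\t{detect_event(frame_path)}")

def reducer():
    current, total = None, 0
    for line in sys.stdin:                      # sorted "camera\tcount"
        camera, count = line.rstrip("\n").split("\t")
        if camera != current and current is not None:
            print(f"{current}\t{total}")        # flush previous camera
            total = 0
        current = camera
        total += int(count)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    role = sys.argv[1] if len(sys.argv) > 1 else "map"
    mapper() if role == "map" else reducer()
```

Under Hadoop Streaming the same script could be supplied as both the -mapper ("analyze.py map") and the -reducer ("analyze.py reduce"), with HDFS holding the frame records as in the paper's storage design.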

Two-dimensional Velocity Measurements of Campbell Glacier in East Antarctica Using Coarse-to-fine SAR Offset Tracking Approach of KOMPSAT-5 Satellite Image (KOMPSAT-5 위성영상의 Coarse-to-fine SAR 오프셋트래킹 기법을 활용한 동남극 Campbell Glacier의 2차원 이동속도 관측)

  • Chae, Sung-Ho;Lee, Kwang-Jae;Lee, Sungu
    • Korean Journal of Remote Sensing / v.37 no.6_3 / pp.2035-2046 / 2021
  • Glacier flow velocity is the most basic measurement in glacier dynamics research and a very important indicator for predicting sea-level rise due to climate change. In this study, two-dimensional velocity measurements of Campbell Glacier, located at Terra Nova Bay in East Antarctica, were obtained with the SAR offset tracking technique, using Korean KOMPSAT-5 SAR satellite images acquired on July 9, 2021 and August 6, 2021. The multi-kernel SAR offset tracking proposed in previous studies obtains an optimal result that satisfies both resolution and precision; however, because offset tracking is repeated for each kernel size, it demands intensive computational power and time. In this study we therefore propose a coarse-to-fine offset tracking approach. Coarse-to-fine SAR offset tracking yields results with improved observation precision (about 4 times better in the azimuth direction) while maintaining resolution, compared with general offset tracking. Using the proposed technique, two-dimensional velocity measurements of Campbell Glacier were generated. Analysis of the two-dimensional velocity field shows that the grounding line of Campbell Glacier lies at approximately 74.56°S. The flow velocity of the Campbell Glacier Tongue analyzed here (185-237 m/yr) is higher than that of 1988-1989 (140-240 m/yr). Compared with the 2010-2012 flow velocity (181-268 m/yr), the velocity near the grounding line was similar, but the velocity at the end of the Campbell Glacier Tongue has decreased. However, this may be an artifact of extrapolating an annual rate from only 28 days of glacier motion; an accurate comparison will require extending the data in time series and computing the annual rate precisely. Through this study, the two-dimensional velocity of a glacier was observed for the first time using KOMPSAT-5, a Korean X-band SAR satellite, confirming that the coarse-to-fine SAR offset tracking approach applied to KOMPSAT-5 images is very useful for observing two-dimensional glacier motion.
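The following is a minimal sketch of the coarse-to-fine idea, using phase cross-correlation from scikit-image as a stand-in for the paper's intensity offset tracking: a cheap integer-pixel pass on downsampled patches is refined by a sub-pixel pass at full resolution. The downscale factor, upsampling factor, and synthetic test are illustrative assumptions, not the authors' multi-kernel implementation.

```python
import numpy as np
from skimage.registration import phase_cross_correlation
from skimage.transform import downscale_local_mean

def coarse_to_fine_offset(ref, sec, factor=4):
    """Two-stage offset estimate between co-registered SAR intensity
    patches: an integer-pixel coarse pass on downsampled patches,
    then a sub-pixel fine pass at full resolution."""
    # Coarse stage: correlate cheap, downsampled versions.
    shift_lo, _, _ = phase_cross_correlation(
        downscale_local_mean(ref, (factor, factor)),
        downscale_local_mean(sec, (factor, factor)))
    coarse = shift_lo * factor
    # Fine stage: remove the coarse offset, then estimate the
    # sub-pixel residual (1/16-pixel upsampled correlation).
    sec_aligned = np.roll(sec, tuple(int(s) for s in coarse), axis=(0, 1))
    residual, _, _ = phase_cross_correlation(ref, sec_aligned,
                                             upsample_factor=16)
    return coarse + residual

# Synthetic check: a random patch shifted by (3, -2) pixels should be
# recovered as an offset of about (-3, 2) (registration convention).
rng = np.random.default_rng(0)
ref = rng.random((128, 128))
sec = np.roll(ref, (3, -2), axis=(0, 1))
print(coarse_to_fine_offset(ref, sec))
```

The efficiency gain comes from running the expensive, finely upsampled correlation only once per patch, after the cheap coarse pass has absorbed the large displacement.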

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.205-225 / 2018
  • The Convolutional Neural Network (ConvNet) is a powerful class of deep neural network that can analyze and learn hierarchies of visual features. The first such network, the Neocognitron, was introduced in the 1980s, but at that time neural networks saw little use in industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, and that breakthrough revived interest in neural networks. The success of ConvNets rests on two main factors: the emergence of advanced hardware (GPUs) for sufficient parallel computation, and the availability of large-scale datasets such as ImageNet (ILSVRC) for training. Unfortunately, many new domains are bottlenecked by exactly these factors. In most domains it is difficult and labor-intensive to gather a large-scale dataset for training a ConvNet, and even with such a dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. Both obstacles can be overcome with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning settings: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on the new dataset. In the first, a pre-trained ConvNet (e.g., trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper we focus only on using multiple ConvNet layers as a fixed feature extractor. However, applying high-dimensional features extracted directly from multiple ConvNet layers remains challenging. We observe that features from different ConvNet layers capture different characteristics of an image, which means a better representation can be obtained by finding the optimal combination of multiple layers. Based on this observation, we propose employing multiple ConvNet layer representations for transfer learning instead of a single-layer representation. Our pipeline has three steps. First, an image from the target task is fed forward through a pre-trained AlexNet, and activation features are extracted from its three fully connected layers. Second, the activation features of the three layers are concatenated to gain more information about the image; the concatenated representation has 9192 (4096+4096+1000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy, so in the third step we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning improves.
  To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple-ConvNet-layer representations against single-layer representations, using PCA for feature selection and dimensionality reduction. The experiments demonstrate the importance of feature selection for the multiple-layer representation. Our approach achieved 75.6% accuracy versus 73.9% for the FC7 layer on Caltech-256, 73.1% versus 69.2% for the FC8 layer on VOC07, and 52.2% versus 48.7% for the FC7 layer on SUN397. It also outperformed existing work by 2.8%, 2.1%, and 3.1% accuracy on Caltech-256, VOC07, and SUN397, respectively.
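Below is a minimal sketch of the three-step pipeline described above, using torchvision's pre-trained AlexNet and scikit-learn's PCA as stand-ins: extract fc6/fc7/fc8 activations, concatenate them into a 9192-dimensional representation, then reduce with PCA before training a linear classifier. The layer hook indices, PCA dimension, and random stand-in data are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torchvision.models as models
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

# Pre-trained AlexNet as a fixed feature extractor (no fine-tuning).
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

acts = {}
def save_to(name):
    def hook(module, inputs, output):
        acts[name] = output.detach()
    return hook

# In torchvision's AlexNet, the three fully connected layers sit at
# classifier indices 1, 4, and 6 (4096-, 4096-, and 1000-dimensional).
alexnet.classifier[1].register_forward_hook(save_to("fc6"))
alexnet.classifier[4].register_forward_hook(save_to("fc7"))
alexnet.classifier[6].register_forward_hook(save_to("fc8"))

def extract(batch):  # batch: (N, 3, 224, 224), ImageNet-normalized
    with torch.no_grad():
        alexnet(batch)
    return torch.cat([acts["fc6"], acts["fc7"], acts["fc8"]], dim=1)

# Random stand-in data; real use would load Caltech-256/VOC07/SUN397.
X = extract(torch.randn(64, 3, 224, 224)).numpy()   # (64, 9192)
y = torch.randint(0, 4, (64,)).numpy()
X_pca = PCA(n_components=32).fit_transform(X)       # keep salient components
clf = LinearSVC().fit(X_pca, y)
print("train accuracy:", clf.score(X_pca, y))
```

PCA here plays the role the abstract assigns to it: it discards the redundant, correlated directions that arise because all three feature sets come from the same network.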