
Forecasting Hourly Demand of City Gas in Korea (국내 도시가스의 시간대별 수요 예측)

  • Han, Jung-Hee;Lee, Geun-Cheol
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.17 no.2
    • /
    • pp.87-95
    • /
    • 2016
  • This study examined the characteristics of the hourly demand of city gas in Korea and proposed multiple regression models to obtain precise estimates of that demand. Forecasting the hourly demand of city gas accurately is essential in terms of both safety and cost. If demand is underestimated, the pipeline pressure must be raised sharply to meet it, which is a safety concern; if it is overestimated, unnecessary inventory and operating costs are incurred. Data analysis showed that the hourly demand of city gas has very high autocorrelation and that the 24-hour demand pattern of a day follows the 24-hour pattern of the same weekday one week earlier; that is, demand exhibits a weekly cycle. In addition, temperature was found to affect the hourly demand level under certain conditions: the absolute value of the correlation coefficient between hourly demand and temperature is about 0.853 on average, while on individual days it ranges from 0.861 at worst to 0.965 at best. Based on this analysis, the paper proposes one multiple regression model incorporating the demand 24 hours earlier and the demand 168 hours earlier, and another multiple regression model with temperature as an additional independent variable. To show the performance of the proposed models, computational experiments were carried out using real domestic city gas demand data from 2009 to 2013. The first regression model achieved a forecasting accuracy of about 4.5% in terms of MAPE (mean absolute percentage error) over those five years, while the second model achieved a MAPE of 5.13% for the same period.
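
The model structure described above — ordinary least squares on the demand 24 hours and 168 hours earlier, scored by MAPE — can be sketched as follows. This is a minimal illustration on synthetic hourly data with daily and weekly cycles, since the actual Korean city-gas series is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hourly demand with daily and weekly cycles plus noise
# (a stand-in for the real city-gas series, which is not public here).
t = np.arange(24 * 7 * 60)  # 60 weeks of hourly observations
demand = (100 + 20 * np.sin(2 * np.pi * t / 24)
          + 10 * np.sin(2 * np.pi * t / 168)
          + rng.normal(0, 2, t.size))

# Regressors: demand 24 hours earlier and 168 hours earlier.
lag24 = demand[168 - 24:-24]   # aligned so lag24[i] = demand[(168 + i) - 24]
lag168 = demand[:-168]         # lag168[i] = demand[(168 + i) - 168]
y = demand[168:]
X = np.column_stack([np.ones_like(y), lag24, lag168])

# Ordinary least squares fit on the first 50 weeks, test on the rest.
split = 24 * 7 * 50
beta, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
pred = X[split:] @ beta

mape = np.mean(np.abs((y[split:] - pred) / y[split:])) * 100
print(f"MAPE: {mape:.2f}%")
```

On this toy series the lagged regressors capture both cycles, so the MAPE lands in the low single digits, consistent in spirit with the ~4.5% the paper reports on real data.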

A Comparative Study on the Improvement of Curriculum in the Junior College for the Industrial Design Major (2년제 대학 산업디자인전공의 교육과정 개선방안에 관한 비교연구)

  • 강사임
    • Archives of design research
    • /
    • v.13 no.1
    • /
    • pp.209-218
    • /
    • 2000
  • The purpose of this study was to improve the curriculum of industrial design departments in junior colleges. Two methods were used: first, a job analysis of industrial designers working in small and medium-sized manufacturing companies; second, a survey of the opinions of junior college professors. The results were as follows: 1. The junior college program for industrial designers lasts two years at present, but an optional one-year advanced course could be established. 2. Practical subjects, such as the computational formative techniques needed for product development, should be increased; in addition, elective subjects such as foreign languages, manufacturing processes, new product information, and consumer behavior research should be expanded. 3. The following subjects need adjustments to their titles, contents, and hours: (1) The need for 3D-related subjects such as computer modeling, computer rendering, and 3D modeling was high, and the use of computers is required in design presentation subjects. (2) The need for advertising- and sales-related subjects such as printing, merchandising, packaging, typography, and photography was low, while the need for presentation techniques for new product development was high. (3) The need for field practice, special lectures on practice, and reading original texts was at the current level, though these should not be overly formalized. As designers keenly feel the necessity of using foreign languages, the need for language subjects was high.


Multi-task Learning Based Tropical Cyclone Intensity Monitoring and Forecasting through Fusion of Geostationary Satellite Data and Numerical Forecasting Model Output (정지궤도 기상위성 및 수치예보모델 융합을 통한 Multi-task Learning 기반 태풍 강도 실시간 추정 및 예측)

  • Lee, Juhyun;Yoo, Cheolhee;Im, Jungho;Shin, Yeji;Cho, Dongjin
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_3
    • /
    • pp.1037-1051
    • /
    • 2020
  • Accurate monitoring and forecasting of tropical cyclone (TC) intensity can effectively reduce the overall costs of disaster management. In this study, we proposed a multi-task learning (MTL) based deep learning model for real-time TC intensity estimation and forecasting with lead times of 6 and 12 hours, based on the fusion of geostationary satellite images and numerical forecast model output. A total of 142 TCs that developed in the Northwest Pacific from 2011 to 2016 were used. Communication, Ocean and Meteorological Satellite (COMS) Meteorological Imager (MI) data were used to extract typhoon images, and the Climate Forecast System version 2 (CFSv2) provided by the National Centers for Environmental Prediction (NCEP) was employed to extract atmospheric and oceanic forecast data. Two schemes with different input variables were examined: scheme 1 used only satellite-based input data, while scheme 2 used both satellite images and numerical forecast model output. For real-time TC intensity estimation, both schemes exhibited similar performance. For TC intensity forecasting with lead times of 6 and 12 hours, scheme 2 improved on scheme 1 by 13% and 16%, respectively, in terms of root mean squared error (RMSE). Relative root mean squared errors (rRMSE) for most intensity levels were less than 30%, and lower mean absolute error (MAE) and RMSE were found for the lower TC intensity levels. In the test on typhoon HALONG in 2014, scheme 1 tended to overestimate the intensity by about 20 kts at the early development stage; scheme 2 reduced this to an overestimation of about 5 kts. The MTL models reduced the computational cost by about a factor of three compared to single-task models, suggesting the feasibility of rapid production of TC intensity forecasts.
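
The multi-task idea above — one shared representation feeding separate estimation and forecasting heads, trained with a joint loss so both tasks shape the shared features — can be sketched in a minimal linear form. The paper's model is a deep network over satellite and NWP inputs; everything below (data, sizes, learning rate) is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy inputs standing in for satellite/NWP features; two related targets
# standing in for current intensity and a short-lead forecast intensity.
X = rng.normal(size=(256, 8))
w_true = rng.normal(size=8)
y_now = X @ w_true + rng.normal(0, 0.1, 256)
y_fut = 0.9 * (X @ w_true) + rng.normal(0, 0.1, 256)

# Shared linear "encoder" W plus one small head per task.
W = rng.normal(size=(8, 4)) * 0.1
h_now = rng.normal(size=4) * 0.1
h_fut = rng.normal(size=4) * 0.1

def losses():
    Z = X @ W
    return (np.mean((Z @ h_now - y_now) ** 2),
            np.mean((Z @ h_fut - y_fut) ** 2))

lr = 0.01
loss0 = sum(losses())
for _ in range(500):
    Z = X @ W
    e_now = Z @ h_now - y_now           # per-task residuals
    e_fut = Z @ h_fut - y_fut
    # Joint loss = MSE_now + MSE_fut; both gradients flow into the shared W.
    gW = (X.T @ np.outer(e_now, h_now) + X.T @ np.outer(e_fut, h_fut)) * 2 / len(X)
    g_now = Z.T @ e_now * 2 / len(X)
    g_fut = Z.T @ e_fut * 2 / len(X)
    W -= lr * gW
    h_now -= lr * g_now
    h_fut -= lr * g_fut

loss1 = sum(losses())
print(f"joint loss: {loss0:.3f} -> {loss1:.3f}")
```

The point of the sketch is the shared-gradient structure: one forward pass serves both heads, which is also why the paper's MTL setup cuts computation relative to running two single-task models.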

A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems (방출단층촬영 시스템을 위한 GPU 기반 반복적 기댓값 최대화 재구성 알고리즘 연구)

  • Ha, Woo-Seok;Kim, Soo-Mee;Park, Min-Jae;Lee, Dong-Soo;Lee, Jae-Sung
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.43 no.5
    • /
    • pp.459-467
    • /
    • 2009
  • Purpose: Maximum likelihood-expectation maximization (ML-EM) is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Materials and Methods: Using a GeForce 9800 GTX+ graphics card and NVIDIA's CUDA (compute unified device architecture), the projection and backprojection steps of the ML-EM algorithm were parallelized. The computation times per iteration for projection, for the errors between measured and estimated data, and for backprojection were measured; total time included the latency of data transfers between RAM and GPU memory. Results: The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively, an approximately 15-fold speedup on the GPU. When the number of iterations increased to 1,024, the CPU- and GPU-based computations took 18 min and 8 sec in total, respectively. This roughly 135-fold improvement was caused by slowdowns in the CPU-based computation after a certain number of iterations, whereas the GPU-based computation showed very little variation in time per iteration owing to the use of shared memory. Conclusion: GPU-based parallel computation significantly improved the speed and stability of ML-EM reconstruction, and the developed GPU-based ML-EM algorithm can easily be modified for other imaging geometries.
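
The ML-EM update that the paper parallelizes (forward projection, measured/estimated ratio, backprojection) has a compact standard form: x ← x / (Aᵀ1) · Aᵀ(y / Ax). A minimal CPU sketch on a small random system matrix, with noiseless data for clarity (sizes and data are illustrative, not the paper's geometry):

```python
import numpy as np

rng = np.random.default_rng(2)

# Small system matrix A (detector bins x image pixels) and a true image.
n_pix, n_det = 16, 32
A = rng.uniform(0.0, 1.0, size=(n_det, n_pix))
x_true = rng.uniform(1.0, 5.0, size=n_pix)
y = A @ x_true                     # noiseless measured projections

# ML-EM update: x <- x / (A^T 1) * A^T (y / (A x)).
# Multiplicative form: the estimate stays nonnegative automatically.
sens = A.T @ np.ones(n_det)        # sensitivity image A^T 1
x = np.ones(n_pix)                 # uniform initial estimate
for _ in range(200):
    proj = A @ x                               # forward projection
    ratio = y / np.maximum(proj, 1e-12)        # measured / estimated
    x = x / sens * (A.T @ ratio)               # backproject and update

err = np.linalg.norm(A @ x - y) / np.linalg.norm(y)
print(f"relative projection error after 200 iterations: {err:.2e}")
```

The projection (`A @ x`), ratio, and backprojection (`A.T @ ratio`) lines are exactly the three per-iteration stages the paper times and maps onto CUDA kernels.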

External Gravity Field in the Korean Peninsula Area (한반도 지역에서의 상층중력장)

  • Jung, Ae Young;Choi, Kwang-Sun;Lee, Young-Cheol;Lee, Jung Mo
    • Economic and Environmental Geology
    • /
    • v.48 no.6
    • /
    • pp.451-465
    • /
    • 2015
  • The free-air anomalies are computed using a data set of various types of gravity measurements in the Korean Peninsula area; gravity values extracted from the Earth Gravitational Model 2008 are used in the surrounding region. The upward continuation technique suggested by Dragomir is used to compute the external free-air anomalies at various altitudes, with an integration radius of 10 times the altitude chosen to balance the accuracy of the results against computational resources. The direct geodesic formula developed by Bowring is employed in the integration. At 1-km altitude, the free-air anomalies vary from -41.315 to 189.327 mgal with a standard deviation of 22.612 mgal; at 3-km altitude, from -36.478 to 156.209 mgal with a standard deviation of 20.641 mgal; and at 1,000-km altitude, from 3.170 to 5.864 mgal with a standard deviation of 0.670 mgal. The predicted free-air anomalies at 3-km altitude are compared to published free-air anomalies reduced from airborne gravity measurements at the same altitude. The rms difference is 3.88 mgal; considering the reported 2.21-mgal airborne-gravity cross-over accuracy, this difference is not serious. Possible causes of the difference are external free-air anomaly simulation errors in this work and/or gravity reduction errors in the airborne data. The external gravity field is predicted by adding the external free-air anomaly to the normal gravity computed using the closed-form formula for gravity above and below the surface of the ellipsoid. The predicted external gravity field is expected to represent the real external gravity field reasonably well. This appears to be the first structured study of the external free-air anomaly in the Korean Peninsula area, and the resulting external gravity field can be used to improve the accuracy of inertial navigation systems.
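
The last step described above adds the free-air anomaly to closed-form normal gravity. On the ellipsoid surface the standard closed form is Somigliana's formula; a sketch with GRS80 constants follows (the paper's formula additionally handles points above and below the surface, which this sketch omits, and the 20-mgal anomaly is an arbitrary example value):

```python
import math

# GRS80 constants for Somigliana's closed-form normal gravity.
A_GRS80 = 6378137.0          # semi-major axis [m]
B_GRS80 = 6356752.3141       # semi-minor axis [m]
GAMMA_E = 9.7803267715       # normal gravity at the equator [m/s^2]
GAMMA_P = 9.8321863685       # normal gravity at the pole [m/s^2]

def normal_gravity(lat_deg):
    """Somigliana's formula: normal gravity on the GRS80 ellipsoid surface."""
    s = math.sin(math.radians(lat_deg))
    c = math.cos(math.radians(lat_deg))
    num = A_GRS80 * GAMMA_E * c * c + B_GRS80 * GAMMA_P * s * s
    den = math.sqrt(A_GRS80**2 * c * c + B_GRS80**2 * s * s)
    return num / den

# Predicted gravity = normal gravity + free-air anomaly (1 mgal = 1e-5 m/s^2).
gamma = normal_gravity(37.5)             # roughly central-peninsula latitude
g_pred = gamma + 20.0 * 1e-5             # illustrative 20-mgal anomaly
print(f"normal gravity at 37.5N: {gamma:.5f} m/s^2")
```

The formula reduces exactly to the equatorial and polar values at 0° and 90°, which is a convenient sanity check on the constants.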

Analysis of Fluid Flows in a High Rate Spiral Clarifier and the Evaluation of Field Applicability for Improvement of Water Quality (고속 선회류 침전 장치의 유동 해석 및 수질 개선을 위한 현장 적용 가능성 평가)

  • Kim, Jin Han;Jun, Se Jin
    • Journal of Wetlands Research
    • /
    • v.16 no.1
    • /
    • pp.41-50
    • /
    • 2014
  • The purpose of this study is to evaluate the availability of a High Rate Spiral Clarifier (HRSC) for improving the water quality of a polluted retention pond. Lab-scale and pilot-scale tests were performed. The fluid flow patterns in the HRSC were studied using Fluent, a computational fluid dynamics (CFD) program, varying the inlet velocity and inlet diameter, the body length (L_B) and lower cone length (L_C), the angle of and gap between the inverted sloping cones, and the presence or absence of a lower exit hole. A pilot-scale experimental apparatus was built on the basis of the flow analysis and lab-scale test results, and a field test was then executed at the retention pond. In the study of the internal fluid flow of the apparatus, we found that the inlet velocity had a greater effect on forming spiral flow than the inlet flow rate and inlet diameter. No observable effect on spiral flow formation was found for L_B in the range of 1.2 to 1.6 D_B (body diameter) or L_C in the range of 0.35 to 0.5 L_B, but the spiral flow weakened at the high ratios L_B/D_B = 2.0 and L_C/L_B = 0.75. As the angle of the inverted sloping cone increased, the velocity gradually dropped and became evenly distributed over the cone. A 10 cm gap between the inverted sloping cones was better than 20 cm for preventing turbulent flow, and omitting the lower exit hole was better for preventing channeling and distributing the effluent flow evenly. The pilot-scale field test confirmed that particulate matter was effectively removed; therefore, this apparatus could serve as one means of improving water quality in large water bodies such as retention ponds.

Numerical and Experimental Study on the Coal Reaction in an Entrained Flow Gasifier (습식분류층 석탄가스화기 수치해석 및 실험적 연구)

  • Kim, Hey-Suk;Choi, Seung-Hee;Hwang, Min-Jung;Song, Woo-Young;Shin, Mi-Soo;Jang, Dong-Soon;Yun, Sang-June;Choi, Young-Chan;Lee, Gae-Goo
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.32 no.2
    • /
    • pp.165-174
    • /
    • 2010
  • The numerical modeling of the coal gasification reactions occurring in an entrained flow coal gasifier is presented in this study. The purpose is to develop a reliable evaluation method for coal gasifiers, useful not only for basic design but also for system operation optimization, using a CFD (computational fluid dynamics) method. The coal gasification reaction consists of a series of processes, such as water evaporation, coal devolatilization, heterogeneous char reactions, and gas-phase reactions of the coal off-gas, in a two-phase, turbulent, radiation-participating medium. Both numerical and experimental studies are made for the 1.0 ton/day entrained flow coal gasifier installed at the Korea Institute of Energy Research (KIER). The comprehensive computer program in this study is built on a commercial CFD program by implementing several subroutines necessary for the gasification process, including an Eddy-Breakup model together with a harmonic-mean approach for turbulent reaction. A Lagrangian approach to particle trajectories is further adopted, with consideration of turbulence effects caused by the nonlinearity of the drag force. The developed program is successfully evaluated against experimental data such as profiles of temperature and gaseous species concentration, together with the cold gas efficiency. Further investigation is made of the size distribution of the pulverized coal particles, the slurry concentration, and the design parameters of the gasifier; these parameters are compared and evaluated through the calculated syngas production rate and cold gas efficiency, and appear to directly affect gasification performance. Considering the complexity of entrained coal gasification, even though the results of this study look physically reasonable and consistent across the parametric study, more elaborate modeling, together with systematic evaluation against experimental data, is necessary to develop a reliable CFD-based design tool.
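
Cold gas efficiency, one of the evaluation metrics quoted above, is the ratio of the chemical energy in the cold syngas to that in the feed coal. A minimal sketch of the calculation follows; the heating values and flow rates are textbook-style illustrative numbers, not KIER measurements:

```python
# Cold gas efficiency = chemical energy of cold syngas / chemical energy of coal.
# Heating values and flow rates below are illustrative, not the paper's data.
LHV_CO = 12.63     # MJ/Nm^3, lower heating value of CO (typical value)
LHV_H2 = 10.78     # MJ/Nm^3, lower heating value of H2 (typical value)
LHV_COAL = 25.0    # MJ/kg, assumed lower heating value of the feed coal

def cold_gas_efficiency(co_flow, h2_flow, coal_flow):
    """co_flow and h2_flow in Nm^3/h; coal_flow in kg/h. Returns percent."""
    syngas_energy = co_flow * LHV_CO + h2_flow * LHV_H2   # MJ/h
    coal_energy = coal_flow * LHV_COAL                    # MJ/h
    return syngas_energy / coal_energy * 100.0

cge = cold_gas_efficiency(co_flow=45.0, h2_flow=30.0, coal_flow=50.0)
print(f"cold gas efficiency: {cge:.1f}%")
```

This is why the syngas production rate and cold gas efficiency move together in the parametric study: CGE is a direct function of the CO and H2 yields per unit coal fed.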

A Fluid Analysis Study on Centrifugal Pump Performance Improvement by Impeller Modification (원심펌프 회전차 Modification시 성능개선에 관한 유동해석 연구)

  • Lee, A-Yeong;Jang, Hyun-Jun;Lee, Jin-Woo;Cho, Won-Jeong
    • Journal of the Korean Institute of Gas
    • /
    • v.24 no.2
    • /
    • pp.1-8
    • /
    • 2020
  • A centrifugal pump transfers energy to a fluid through the centrifugal force generated by rotating an impeller at high speed. It is a major process facility at LNG production bases, used in vaporization seawater pumps, industrial water pumps, and seawater fire-extinguishing pumps. Pumps at LNG plant sites are subject to operating conditions that vary over long periods with the amount of supply requested by customers. Pumps also account for a large share of energy consumption at the plant site, and if they cannot be run at the optimum operating condition, enormous energy losses can be incurred over long-term plant operation. To address this, it is necessary to identify performance-deterioration factors through flow analysis under the pump's fluctuating operating conditions and to determine the optimal operating efficiency. Evaluating operating efficiency experimentally incurs considerable time and cost, for example in reproducing on-site operating conditions and manufacturing experimental equipment. If the performance of a pump is not suitable for the site and needs to be reduced, methods such as changing the rotation speed or pumping a special liquid of high viscosity or solids content are used. In particular, to prevent disruptions in the operation of LNG production bases, a technology is required that satisfies the required performance conditions by machining the pump's existing impeller within a short time. Therefore, in this study, the modified 3D-modeled impeller shape was analyzed with the ANSYS CFX program, and the flow analysis results were evaluated numerically using the Curve Fitting Toolbox of MATLAB to verify the impeller outer-diameter correction theory.
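
Outer-diameter trimming of the kind the paper verifies with CFD is usually screened first with the classical pump affinity laws. A sketch follows; the exponents are the textbook approximation and all numbers are illustrative, not taken from the paper:

```python
# Classical affinity (similarity) laws for trimming a centrifugal pump
# impeller from diameter D1 to D2 at constant rotational speed:
#   Q2/Q1 = (D2/D1),  H2/H1 = (D2/D1)^2,  P2/P1 = (D2/D1)^3
# Real trims deviate from these ideal relations, which is why the paper
# verifies the outer-diameter correction with CFD and curve fitting.

def trim_performance(q1, h1, p1, d1, d2):
    """Predict flow, head, and power after trimming d1 -> d2."""
    r = d2 / d1
    return q1 * r, h1 * r**2, p1 * r**3

# Example: trim a 400 mm impeller to 380 mm (illustrative duty point).
q2, h2, p2 = trim_performance(q1=500.0, h1=60.0, p1=110.0, d1=400.0, d2=380.0)
print(f"Q: 500 -> {q2:.0f} m^3/h, H: 60 -> {h2:.2f} m, P: 110 -> {p2:.2f} kW")
```

A 5% diameter cut thus drops head by about 10% and shaft power by about 14%, which is the lever that makes impeller machining attractive for matching an oversized pump to site conditions.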

Detection with a SWNT Gas Sensor and Diffusion of SF6 Decomposition Products by Corona Discharges (탄소나노튜브 가스센서의 SF6 분해생성물 검출 및 확산현상에 관한 연구)

  • Lee, J.C.;Jung, S.H.;Baik, S.H.
    • Journal of the Korean Vacuum Society
    • /
    • v.18 no.1
    • /
    • pp.66-72
    • /
    • 2009
  • Detection methods are required to monitor and diagnose abnormalities in the insulation condition inside a gas-insulated switchgear (GIS). Owing to its good sensitivity to the products decomposed by partial discharges (PDs) in SF6 gas, the development of a single-walled carbon nanotube (SWNT) gas sensor is actively in progress; however, only a few numerical studies on the diffusion mechanism of the SF6 decomposition products generated by PD have been reported. In this study, we modeled the SF6 decomposition process in a chamber by calculating the temperature, pressure, and concentration of the decomposition products using a commercial CFD program in conjunction with experimental data. It was assumed that the mass production rate of the decomposition products was 5.04×10⁻¹⁰ g/s and that their generation temperature was over 773 K. To solve the concentration equation, the Schmidt number was specified so that the diffusion coefficient is obtained as a function of the viscosity and density of the SF6 gas, rather than being set directly. The results showed that the driving potential for diffusion is governed mainly by the gradient of the decomposition-product concentration. A lower concentration of decomposition products was observed as the sensors were placed farther from the discharge region, and the concentration increased with discharge time. By installing multiple sensors, the location of a PD is expected to be identifiable by monitoring the response times of the sensors, and this information should be very useful for the diagnosis and maintenance of GIS.
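
The Schmidt-number closure above means the diffusion coefficient is derived as D = μ/(ρ·Sc) rather than prescribed. A minimal sketch of that relation, followed by an explicit 1D diffusion step reproducing the qualitative result that sensors farther from the source see lower concentration; the SF6 property values and Sc here are illustrative assumptions, not the paper's inputs:

```python
import numpy as np

# Diffusion coefficient from the Schmidt number: Sc = mu / (rho * D).
mu_sf6 = 1.5e-5     # dynamic viscosity of SF6 [Pa*s], illustrative value
rho_sf6 = 6.1       # density of SF6 near 1 atm [kg/m^3], illustrative value
Sc = 1.0            # assumed Schmidt number
D = mu_sf6 / (rho_sf6 * Sc)    # resulting diffusion coefficient [m^2/s]

# Explicit 1D diffusion of decomposition products from a source at x = 0.
nx, dx = 100, 0.01             # 1 m domain, 1 cm grid
dt = 0.4 * dx**2 / D           # satisfies the explicit stability limit (<= 0.5)
c = np.zeros(nx)               # normalized concentration field
for _ in range(2000):
    c[0] = 1.0                                          # discharge-region source
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])

# Concentration falls off monotonically with distance from the source.
print(c[10], c[30], c[60])
```

The monotone profile is the mechanism behind the paper's PD-location idea: sensors at different distances respond at different times and levels.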

Game Theoretic Optimization of Investment Portfolio Considering the Performance of Information Security Countermeasure (정보보호 대책의 성능을 고려한 투자 포트폴리오의 게임 이론적 최적화)

  • Lee, Sang-Hoon;Kim, Tae-Sung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.37-50
    • /
    • 2020
  • Information security has become an important issue worldwide. As various information and communication technologies such as the Internet of Things, big data, cloud computing, and artificial intelligence develop, the need for information security is increasing, yet interest in information security investment remains insufficient. In general, measuring the effect of information security investment is difficult, so appropriate investment is not being practiced, and organizations are decreasing their information security investment. In addition, since the types and specifications of information security measures are diverse, it is difficult to compare and evaluate countermeasures objectively, and decision-making methods for information security investment are lacking. For an organization to develop, policies and decisions related to information security are essential, and measuring the effect of information security investment is necessary. Therefore, this study proposes a method of constructing an investment portfolio for information security countermeasures using game theory and derives an optimal defence probability. Using a two-person game model, the information security manager and the attacker are assumed to be the players, and the information security countermeasures and information security threats are taken as their respective strategies. A zero-sum game, in which the sum of the players' payoffs is zero, is assumed, and we derive the solution of a mixed-strategy game in which each strategy is selected according to a probability distribution. In the real world, various types of information security threats exist, so multiple information security measures should be considered to maintain the appropriate information security level of information systems. 
We assume that the defence ratio of each information security countermeasure is known, and we derive the optimal solution of the mixed-strategy game using linear programming. The contributions of this study are as follows. First, we conduct the analysis using real performance data of information security measures; information security managers can use the suggested methodology to make practical decisions when establishing an investment portfolio for countermeasures. Second, the investment weight of each countermeasure is derived. Since we derive the weight of each measure, rather than merely whether it has been invested in, it is easy to construct a portfolio in situations where investment decisions must consider a number of countermeasures. Finally, the optimal defence probability can be found after constructing the portfolio, so managers can measure the specific investment effect by selecting countermeasures that fit the organization's information security budget. Numerical examples are also presented and the computational results are analyzed. Based on the performance of three information security countermeasures, Firewall, IPS, and Antivirus, data are collected to construct a portfolio of countermeasures; the defence ratios are generated using a uniform distribution, and the performance coverage is derived from the report of each countermeasure. 
In the numerical examples with Firewall, IPS, and Antivirus as countermeasures, the investment weights are optimized to 60.74%, 39.26%, and 0%, respectively, and the defence probability of the organization is maximized at 83.87%. When the methodology and examples of this study are used in practice, information security managers can consider various types of information security measures, and the appropriate investment level of each measure can be reflected in the organization's budget.
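
The defender's optimal mixed strategy in a zero-sum game of this kind is a standard linear program: maximize the guaranteed value v subject to every pure attacker strategy yielding at least v. A sketch using scipy's LP solver follows; the payoff matrix (defence ratios) is made up for illustration and is not the paper's measured data:

```python
import numpy as np
from scipy.optimize import linprog

# Rows: countermeasures (Firewall, IPS, Antivirus); columns: threat types.
# Entry [i, j] = probability that countermeasure i defends against threat j.
# These defence ratios are illustrative, not the paper's collected data.
payoff = np.array([
    [0.90, 0.30, 0.20],   # Firewall
    [0.40, 0.85, 0.30],   # IPS
    [0.20, 0.35, 0.80],   # Antivirus
])
m, n = payoff.shape

# Maximize v subject to: for every threat j, sum_i p_i * payoff[i, j] >= v,
# with sum_i p_i = 1 and p_i >= 0.  Variables: [p_1..p_m, v].
c = np.zeros(m + 1)
c[-1] = -1.0                                       # linprog minimizes, so -v
A_ub = np.hstack([-payoff.T, np.ones((n, 1))])     # v - p^T payoff[:, j] <= 0
b_ub = np.zeros(n)
A_eq = np.array([[1.0] * m + [0.0]])               # probabilities sum to 1
b_eq = np.array([1.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * m + [(None, None)])

weights, value = res.x[:m], res.x[m]
print("investment weights:", np.round(weights, 4))
print("guaranteed defence probability:", round(value, 4))
```

The weights play the same role as the paper's 60.74%/39.26%/0% portfolio split, and v is the maximized defence probability; with real defence-ratio data in `payoff`, the same LP reproduces that style of result.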