• Title/Summary/Keyword: computational cost


Dehumidification and Temperature Control for Green Houses using Lithium Bromide Solution and Cooling Coil (리튬브로마이드(LiBr) 용액의 흡습성질과 냉각코일을 이용한 온실 습도 및 온도 제어)

  • Lee, Sang Yeol;Lee, Chung Geon;Euh, Seung Hee;Oh, Kwang Cheol;Oh, Jae Heun;Kim, Dea Hyun
    • Journal of Bio-Environment Control / v.23 no.4 / pp.337-341 / 2014
  • Due to the high ambient air temperature in summer in Korea, growing crops in greenhouses normally requires cooling and dehumidification. Although various cooling and dehumidification methods have been presented, many obstacles to practical application remain, such as excessive energy use, cost, and performance. To overcome these problems, lab-scale experiments on dehumidification and cooling for greenhouses were performed using a lithium bromide (LiBr) solution and a cooling coil. In this study, preliminary dehumidification and cooling experiments were conducted with the LiBr solution as the dehumidifying material and with the cooling coil separately, and then the combined system was tested. Hot and humid air was dehumidified from 85% to 70% by passing through a pad soaked with LiBr, and cooled from 308 K to 299 K through the cooling coil. Computational Fluid Dynamics (CFD) analysis and an analytical solution were used to model the change of air temperature by heat transfer. The simulations gave final air temperatures of 299.7 K and 299.9 K, respectively, a deviation of 0.7 K from the experimental value and thus good agreement. These results indicate that a LiBr solution with a cooling coil system could be applicable to greenhouses.
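
The abstract does not give the form of the analytical solution. A common single-stream approximation treats the coil surface as isothermal, so the air outlet temperature approaches the coil temperature exponentially with NTU = UA/(m_dot * cp). The sketch below is illustrative only: the coil temperature, UA, and mass flow rate are assumed values chosen to land near the reported 299 K outlet, not parameters from the paper.

```python
import math

def coil_outlet_temp(T_in, T_coil, UA, m_dot, cp=1006.0):
    """Outlet air temperature for a single air stream over an (assumed)
    isothermal coil surface, in effectiveness-NTU form:
    T_out = T_coil + (T_in - T_coil) * exp(-NTU), NTU = UA / (m_dot * cp)."""
    ntu = UA / (m_dot * cp)
    return T_coil + (T_in - T_coil) * math.exp(-ntu)

# Illustrative values only (not from the paper): air enters at 308 K,
# coil surface assumed at 295 K; UA and mass flow chosen so the outlet
# lands near the 299 K reported in the experiment.
print(coil_outlet_temp(T_in=308.0, T_coil=295.0, UA=119.0, m_dot=0.1))
```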

Feasibility of Automated Detection of Inter-fractional Deviation in Patient Positioning Using Structural Similarity Index: Preliminary Results (Structural Similarity Index 인자를 이용한 방사선 분할 조사간 환자 체위 변화의 자동화 검출능 평가: 초기 보고)

  • Youn, Hanbean;Jeon, Hosang;Lee, Jayeong;Lee, Juhye;Nam, Jiho;Park, Dahl;Kim, Wontaek;Ki, Yongkan;Kim, Donghyun
    • Progress in Medical Physics / v.26 no.4 / pp.258-266 / 2015
  • Modern radiotherapy techniques, which deliver large doses to patients, require the positions of patients or tumors to be confirmed more accurately using high-definition X-ray projection images. However, the rapid increase in patient exposure and image data required for CT image acquisition may place an additional burden on the patient. In this study, by introducing the structural similarity (SSIM) index, which can effectively extract the structural information of an image, we analyze the differences between daily acquired X-ray images of a patient to verify the accuracy of patient positioning. First, to simulate a moving target, spherical computational phantoms of varying sizes and positions were created and their projected images acquired. Differences between the images were automatically detected and analyzed by extracting their SSIM values. In addition, as a clinical test, differences between the daily X-ray images of a patient acquired over 12 days were detected in the same way. As a result, we confirmed that the SSIM index varied in the range of 0.85~1 (0.006~1 when a region of interest (ROI) was applied) as the size or position of the phantom changed. The SSIM was more sensitive to changes in the phantom when the ROI was limited to the phantom itself. In the clinical test, the daily change of patient position produced SSIM values of 0.799~0.853, which described the differences among the images well. Therefore, we expect that the SSIM index can provide an objective and quantitative technique for verifying patient position using simple X-ray images, instead of time- and cost-intensive three-dimensional X-ray imaging.
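
As a minimal sketch of the comparison described above, the SSIM between a reference projection and a daily image can be computed with scikit-image; the optional ROI mirrors the paper's finding that restricting the comparison to the target makes the index more sensitive. The function and the toy shifted-phantom check are illustrative, not the authors' code.

```python
import numpy as np
from skimage.metrics import structural_similarity

def positioning_ssim(ref_img, daily_img, roi=None):
    """SSIM between a reference x-ray projection and a daily image.
    roi is an optional (row_slice, col_slice) pair restricting the
    comparison, e.g. to the target region."""
    if roi is not None:
        ref_img, daily_img = ref_img[roi], daily_img[roi]
    score, ssim_map = structural_similarity(
        ref_img, daily_img,
        data_range=ref_img.max() - ref_img.min(), full=True)
    return score, ssim_map

# Toy check: shifting a synthetic "phantom" image lowers the score below 1.
rng = np.random.default_rng(0)
ref = rng.random((128, 128))
score, _ = positioning_ssim(ref, np.roll(ref, 2, axis=0))
print(score)  # 1.0 only when the two images are identical
```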

Forecasting Hourly Demand of City Gas in Korea (국내 도시가스의 시간대별 수요 예측)

  • Han, Jung-Hee;Lee, Geun-Cheol
    • Journal of the Korea Academia-Industrial cooperation Society / v.17 no.2 / pp.87-95 / 2016
  • This study examined the characteristics of the hourly demand for city gas in Korea and proposed multiple regression models to obtain precise estimates of that demand. Forecasting the hourly demand of city gas accurately is essential in terms of safety and cost. If demand is underestimated, the pipeline pressure must be increased sharply to meet it, which raises safety concerns; in the opposite case, unnecessary inventory and operating costs are incurred. Data analysis showed that the hourly demand of city gas has a very high autocorrelation and that the 24-hour demand pattern of a day follows the 24-hour demand pattern of the same day one week earlier; that is, there is a weekly cycle. In addition, conditions under which temperature affects the hourly demand were identified: the absolute value of the correlation coefficient between hourly demand and temperature is about 0.853 on average, while on specific days it ranges from 0.861 at worst to 0.965 at best. Based on this analysis, this paper proposes a multiple regression model incorporating the hourly demand 24 hours earlier and the hourly demand 168 hours earlier, and another multiple regression model with temperature as an additional independent variable. To show the performance of the proposed models, computational experiments were carried out using real data on domestic city gas demand from 2009 to 2013. The test results showed that the first regression model achieves a forecasting accuracy of around 4.5% MAPE (Mean Absolute Percentage Error) over the five years from 2009 to 2013, while the second regression model achieves a MAPE of 5.13% for the same period.
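
Both models reduce to ordinary least squares on lagged demand, so a compact sketch is possible. The sketch below assumes an hourly pandas Series and illustrative column names; the paper's second model is obtained by passing a temperature series.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

def fit_lag_model(demand, temperature=None):
    """Regress hourly demand on its values 24 h and 168 h earlier (the
    paper's first model); optionally add temperature as an extra
    regressor (the second model). Column names are illustrative."""
    X = pd.DataFrame({"lag24": demand.shift(24),
                      "lag168": demand.shift(168)})
    if temperature is not None:
        X["temp"] = temperature
    data = pd.concat([X, demand.rename("y")], axis=1).dropna()
    return LinearRegression().fit(data[X.columns], data["y"])

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, the accuracy measure reported."""
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
```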

Multi-task Learning Based Tropical Cyclone Intensity Monitoring and Forecasting through Fusion of Geostationary Satellite Data and Numerical Forecasting Model Output (정지궤도 기상위성 및 수치예보모델 융합을 통한 Multi-task Learning 기반 태풍 강도 실시간 추정 및 예측)

  • Lee, Juhyun;Yoo, Cheolhee;Im, Jungho;Shin, Yeji;Cho, Dongjin
    • Korean Journal of Remote Sensing / v.36 no.5_3 / pp.1037-1051 / 2020
  • Accurate monitoring and forecasting of tropical cyclone (TC) intensity can effectively reduce the overall costs of disaster management. In this study, we proposed a multi-task learning (MTL) based deep learning model for real-time TC intensity estimation and for forecasting with lead times of 6 and 12 hours, based on the fusion of geostationary satellite images and numerical forecast model output. A total of 142 TCs that developed in the Northwest Pacific from 2011 to 2016 were used. Communication, Ocean and Meteorological Satellite (COMS) Meteorological Imager (MI) data were used to extract typhoon images, and the Climate Forecast System version 2 (CFSv2) provided by the National Centers for Environmental Prediction (NCEP) was used to extract atmospheric and oceanic forecast data. This study examined two schemes with different input variables for the MTL models: scheme 1 used only satellite-based input data, while scheme 2 used both satellite images and numerical forecast model output. For real-time TC intensity estimation, both schemes exhibited similar performance. For TC intensity forecasting with lead times of 6 and 12 hours, scheme 2 improved the performance by 13% and 16%, respectively, in terms of root mean squared error (RMSE) compared to scheme 1. Relative root mean squared errors (rRMSE) for most intensity levels were less than 30%, and lower mean absolute error (MAE) and RMSE values were found for lower TC intensity levels. In the test on Typhoon HALONG in 2014, scheme 1 tended to overestimate the intensity by about 20 kts at the early development stage; scheme 2 reduced this error to an overestimation of about 5 kts. The MTL models also reduced the computational cost by a factor of about three compared to single-task models, suggesting the feasibility of rapid production of TC intensity forecasts.
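
The abstract describes the architecture only at a high level, so the Keras sketch below is one plausible shape for it, not the authors' model: a shared convolutional trunk over the satellite imagery, fusion with a numerical-forecast feature vector (scheme 2), and three regression heads for current intensity and the 6- and 12-hour leads. All layer sizes, input shapes, and names are assumptions. Sharing one trunk across the three tasks is also what makes an MTL model roughly three times cheaper than three single-task models, as the abstract notes.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_mtl_model(img_shape=(128, 128, 4), nwp_dim=16):
    """Shared CNN encoder over satellite imagery, fused with a
    numerical-forecast feature vector, feeding three intensity heads.
    Shapes and layer sizes are illustrative, not the paper's."""
    img_in = layers.Input(img_shape, name="satellite")
    nwp_in = layers.Input((nwp_dim,), name="nwp")
    x = img_in
    for f in (32, 64, 128):                    # shared convolutional trunk
        x = layers.Conv2D(f, 3, activation="relu", padding="same")(x)
        x = layers.MaxPooling2D()(x)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Concatenate()([x, nwp_in])      # fusion step (scheme 2)
    x = layers.Dense(128, activation="relu")(x)
    outs = [layers.Dense(1, name=n)(x) for n in ("now", "lead6", "lead12")]
    model = Model([img_in, nwp_in], outs)
    model.compile(optimizer="adam", loss="mse")  # one trunk, three losses
    return model
```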

A Road Luminance Measurement Application based on Android (안드로이드 기반의 도로 밝기 측정 어플리케이션 구현)

  • Choi, Young-Hwan;Kim, Hongrae;Hong, Min
    • Journal of Internet Computing and Services / v.16 no.2 / pp.49-55 / 2015
  • According to traffic accident statistics for the past five years, more traffic accidents occurred at night than during the day. Among the various causes of these accidents, one major factor is inappropriate or missing street lighting, which impairs the driver's vision. In this paper, we designed and implemented a smartphone application that measures lane luminance and stores the driver's location, driving information, and lane luminance in a database in real time, in order to identify inadequate street light facilities and areas without street lights. The application is implemented in native C/C++ using the Android NDK, which gives faster execution than code written in Java or other languages. To measure road luminance, the input image is converted from the RGB color space to the YCbCr color space, and the Y channel gives the luminance of the road. The application detects the road lanes and stores the computed lane luminance in the database server. It captures road video through the smartphone camera and reduces the computational cost by restricting processing to a region of interest (ROI) in each input image. The ROI is converted to a grayscale image, and the Canny edge detector is applied to extract the outlines of lanes; the Hough line transform is then applied to obtain a group of candidate lanes, and both lane boundaries are selected by a lane detection algorithm that uses the gradients of the candidate lanes. Once both lanes are detected, a triangular area extending 20 pixels down from the intersection of the lanes is set up, and the road luminance is estimated from this area: the Y value is computed from the R, G, B values of each pixel in the triangle, the average Y value is scaled to a range of 0 to 100 to report the road luminance, and pixel values are visualized with colors between black and green. After analyzing the lane video, the car's location from the smartphone's GPS sensor, together with the luminance of the road about 60 meters ahead, is sent to the database server over a wireless connection every 10 minutes. We expect that the collected road luminance information can warn drivers to drive safely and help improve renovation plans for road lighting management.
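
The app itself runs in native C/C++ through the NDK, but the processing chain the abstract walks through (ROI, YCbCr conversion, Canny, Hough, triangle averaging) maps directly onto OpenCV calls. The Python sketch below is illustrative only: the thresholds and the stand-in triangle apex are assumptions, and the gradient-based lane selection step is left as a comment.

```python
import cv2
import numpy as np

def road_luminance(frame_bgr, roi):
    """Sketch of the pipeline described above: restrict to an ROI, detect
    lane candidates with Canny + Hough, then average the Y (luma) channel
    inside a small triangle below the lane intersection point."""
    x, y, w, h = roi
    patch = frame_bgr[y:y + h, x:x + w]
    ycrcb = cv2.cvtColor(patch, cv2.COLOR_BGR2YCrCb)   # Y = luminance
    gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                   # assumed thresholds
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=10)
    # Gradient-based selection of the two lane boundaries from `lines`
    # would go here; the ROI centre stands in as the apex for this sketch.
    apex = (w // 2, h // 2)
    tri = np.array([apex, (apex[0] - 20, apex[1] + 20),
                    (apex[0] + 20, apex[1] + 20)])
    mask = np.zeros(gray.shape, np.uint8)
    cv2.fillPoly(mask, [tri], 255)
    mean_y = cv2.mean(ycrcb[:, :, 0], mask=mask)[0]
    return mean_y * 100.0 / 255.0   # scale to the 0-100 range in the text
```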

A Fluid Analysis Study on Centrifugal Pump Performance Improvement by Impeller Modification (원심펌프 회전차 Modification시 성능개선에 관한 유동해석 연구)

  • Lee, A-Yeong;Jang, Hyun-Jun;Lee, Jin-Woo;Cho, Won-Jeong
    • Journal of the Korean Institute of Gas / v.24 no.2 / pp.1-8 / 2020
  • A centrifugal pump transfers energy to a fluid through the centrifugal force generated by rotating an impeller at high speed, and it is a major process facility at LNG production bases, used for example as a vaporization seawater pump or as an industrial water and firefighting pump using seawater. Pumps at LNG plant sites are subject to operating conditions that vary over long periods depending on the amount of supply the customer requires. Pumps account for a large portion of energy consumption at a plant site, and if they cannot be run at their optimum operating conditions, enormous energy losses can accrue over long-term plant operation. To address this problem, it is necessary to identify the factors behind performance deterioration through flow analysis and analysis of the results under fluctuating pump operating conditions, and to determine the optimal operating efficiency. Evaluating operating efficiency experimentally incurs considerable time and cost, for example in reproducing on-site operating conditions and manufacturing test equipment. If the pump's performance is not suitable for the site and needs to be reduced, methods such as changing the rotation speed or using a special liquid of high viscosity or containing solids are employed. In particular, to avoid disruptions in the operation of LNG production bases, a technology is required that satisfies the required performance conditions by machining the pump's existing impeller within a short time. Therefore, in this study, the modified impeller geometry of the pump was applied as a 3D model in the ANSYS CFX program. In addition, the results obtained from the flow analysis were analyzed numerically with the curve fitting toolbox of MATLAB to verify the outer diameter correction theory.
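
The "outer diameter correction theory" the study verifies is commonly taken to be the affinity relations for impeller trimming at constant speed: flow scales with the diameter ratio, head with its square, and shaft power with its cube. These are approximations (real trims deviate, which is why the study checks them against CFD and curve fitting). A minimal sketch with made-up numbers:

```python
def trimmed_performance(Q, H, P, D_orig, D_trim):
    """Affinity (outer-diameter correction) relations for impeller
    trimming at constant speed: Q ~ D, H ~ D^2, P ~ D^3. Approximate."""
    r = D_trim / D_orig
    return Q * r, H * r ** 2, P * r ** 3

# Illustrative only: a 5% trim on a pump at 100 m3/h, 50 m head, 20 kW.
print(trimmed_performance(100.0, 50.0, 20.0, D_orig=0.40, D_trim=0.38))
# -> (95.0, 45.125, 17.1475): flow, head, and power all drop.
```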

Three-Dimensional High-Frequency Electromagnetic Modeling Using Vector Finite Elements (벡터 유한 요소를 이용한 고주파 3차원 전자탐사 모델링)

  • Son Jeong-Sul;Song Yoonho;Chung Seung-Hwan;Suh Jung Hee
    • Geophysics and Geophysical Exploration / v.5 no.4 / pp.280-290 / 2002
  • A three-dimensional (3-D) electromagnetic (EM) modeling algorithm has been developed using the finite element method (FEM) to provide more efficient interpretation techniques for EM data. When FEM based on nodal elements is applied to EM problems, spurious solutions, the so-called 'vector parasite', occur due to the discontinuity of the normal electric field and may lead to completely erroneous results. Among the methods for curing this problem, this study adopts vector elements, whose basis functions have both amplitude and direction. To reduce computational cost and required core memory, the complex bi-conjugate gradient (CBCG) method is applied to solve the complex symmetric FEM matrix, and the point Jacobi method is used to accelerate the convergence rate. To verify the developed 3-D EM modeling algorithm, its electric and magnetic fields for a layered-earth model are compared with the layered-earth solution. As expected, the vector-based FEM developed in this study does not suffer from the vector parasite problem, while conventional nodal-based FEM produces large errors due to the discontinuity of field variables. To test applicability at high frequencies, 100 MHz is used as the operating frequency for the layered structure. Fields calculated with the developed code also match the layered-earth solution well for models with a dielectric anomaly as well as a conductive anomaly. For a vertical electric dipole source, however, the discontinuity of field variables causes the conventional nodal-based FEM to produce large errors from the vector parasite; even in this case, the vector-based FEM gives almost the same results as the layered-earth solution. The magnetic fields induced by a dielectric anomaly at high frequencies show behavior distinctly different from that induced by a conductive anomaly. Since this 3-D EM modeling code can reflect the effect of a dielectric anomaly as well as a conductive anomaly, it may serve as groundwork both for applying high-frequency EM methods to field surveys and for analyzing the field data obtained by them.
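
For the solver stage, the idea is a Krylov iteration on the complex symmetric FEM matrix with a point-Jacobi (diagonal) preconditioner. SciPy does not ship the CBCG variant the paper uses, so the sketch below substitutes bicgstab purely to illustrate the preconditioned solve; the tridiagonal toy matrix stands in for the FEM system.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, bicgstab

def jacobi_preconditioned_solve(A, b):
    """Krylov solve of a sparse complex system A x = b with a point-Jacobi
    (diagonal) preconditioner. The paper uses CBCG; SciPy has no CBCG/COCG,
    so bicgstab stands in here to illustrate the preconditioning idea."""
    d = A.diagonal()
    M = LinearOperator(A.shape, matvec=lambda v: v / d, dtype=A.dtype)
    x, info = bicgstab(A, b, M=M)
    if info != 0:
        raise RuntimeError(f"solver did not converge (info={info})")
    return x

# Toy complex symmetric tridiagonal system standing in for the FEM matrix.
n = 200
off = np.full(n - 1, -1.0 + 0.1j)
A = sp.diags([off, np.full(n, 4.0 + 1.0j), off], [-1, 0, 1], format="csr")
x = jacobi_preconditioned_solve(A, np.ones(n, dtype=complex))
```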

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.105-122 / 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. In dimensionality reduction, the density of the data must be considered, since it has a significant influence on the performance of sentence classification: higher-dimensional data requires more computation and can lead to high computational cost and overfitting in the model. Thus, a dimension reduction process is necessary to improve model performance. Diverse methods have been proposed, from merely reducing the noise in the data, such as misspellings or informal text, to incorporating semantic and syntactic information. In addition, the representation and selection of text features affect the performance of the classifier in sentence classification, one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in the observation space. Existing methods use various algorithms for dimensionality reduction, such as feature extraction and feature selection. Beyond these algorithms, word embeddings, which learn low-dimensional vector space representations of words that capture semantic and syntactic information, are also utilized. To improve performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations: once a feature selection algorithm identifies unimportant words, we assume that words similar to them also have no impact on sentence classification. This study proposes two ways to achieve more accurate classification: conducting selective word elimination under specific rules and constructing word embeddings based on Word2Vec. To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words with comparatively low information gain values from the raw text and form word embeddings. Second, we additionally select words that are similar to the low-information-gain words, remove them as well, and build word embeddings. Finally, the filtered text and word embeddings are fed to the deep learning models: a Convolutional Neural Network and an attention-based bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets and classifies each dataset using these models. Reviews that received more than five helpful votes with a helpful-vote ratio over 70% were classified as helpful reviews; since Yelp shows only the number of helpful votes, we extracted 100,000 reviews with more than five helpful votes by random sampling from 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters, was applied to each dataset. To evaluate the proposed methods, we compared their performance against Word2Vec and GloVe embeddings built on all the words, and showed that one of the proposed methods outperforms them: removing unimportant words improves performance, although removing too many words lowers it again. For future research, diverse preprocessing methods and in-depth analysis of word co-occurrence should be considered for measuring similarity values among words. Also, the proposed method was applied only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo could be combined with the proposed elimination methods, and the possible combinations of embedding and elimination methods could be explored.
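
A compact sketch of the two-step elimination described above: rank words by mutual information (the information gain criterion) and take the lowest-ranked ones, then expand the drop list with each word's Word2Vec nearest neighbours above a cosine-similarity threshold. The quantile and threshold values, and training Word2Vec on the raw corpus itself, are illustrative choices, not the paper's settings.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif
from gensim.models import Word2Vec

def words_to_drop(texts, labels, ig_quantile=0.2, sim_threshold=0.7):
    """Step 1: words in the lowest quantile of information gain
    (mutual information). Step 2: expand with Word2Vec neighbours
    whose cosine similarity exceeds the threshold."""
    vec = CountVectorizer()
    X = vec.fit_transform(texts)
    vocab = np.array(vec.get_feature_names_out())
    ig = mutual_info_classif(X, labels, discrete_features=True)
    low = set(vocab[ig <= np.quantile(ig, ig_quantile)])     # step 1

    w2v = Word2Vec([t.split() for t in texts], vector_size=100,
                   min_count=1, seed=0)
    expanded = set(low)
    for w in low:                                            # step 2
        if w in w2v.wv:
            expanded.update(s for s, sim in w2v.wv.most_similar(w, topn=5)
                            if sim >= sim_threshold)
    return expanded
```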

A Study on the Digital Drawing of Archaeological Relics Using Open-Source Software (오픈소스 소프트웨어를 활용한 고고 유물의 디지털 실측 연구)

  • LEE Hosun;AHN Hyoungki
    • Korean Journal of Heritage: History & Science / v.57 no.1 / pp.82-108 / 2024
  • With the transition of archaeological recording methods from analog to digital, 3D scanning technology has been actively adopted in the field, and research on the digital archaeological data gathered from 3D scanning and photogrammetry is continuously being conducted. However, due to cost and manpower issues, most buried cultural heritage organizations hesitate to adopt such digital technology. This paper presents a digital recording method for relics utilizing open-source software and photogrammetry, which is believed to be the most efficient of the 3D scanning approaches. The digital recording process consists of three stages: acquiring a 3D model, creating a joining map with the edited 3D model, and creating a digital drawing. To enhance accessibility, the method uses only open-source software throughout the entire process. The results confirm that, in quantitative evaluation, the deviation between measurements of the actual artifact and of the 3D model was minimal, and the quantitative quality analyses of the open-source and commercial software showed high similarity. However, data processing was overwhelmingly faster with the commercial software, presumably owing to the higher computational speed of its improved algorithms. In qualitative evaluation, some differences in mesh and texture quality occurred: in 3D models generated by open-source software, noise and roughness appeared on the mesh surface, and the production marks and patterns on the relics were difficult to confirm. Nevertheless, some of the open-source software produced quality comparable to the commercial software in both quantitative and qualitative evaluations. Open-source software for editing 3D models was able not only to post-process, match, and merge the 3D models, but also to adjust scale, produce joining surfaces, and render the images necessary for the actual measurement of relics. The final drawing was traced in a CAD program that is also open-source software. In archaeological research, photogrammetry is applicable to many processes, including excavation, report writing, and research on numerical data from 3D models. With the breakthrough development of computer vision, the types of open-source software have diversified and their performance has significantly improved. Given the high accessibility of such digital technology, 3D model data acquired in archaeology will serve as basic data for the preservation of, and active research on, cultural heritage.
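
The abstract does not name the specific open-source packages used, so the snippet below is purely illustrative of the post-processing steps it describes (cleaning, merging, and scale adjustment before tracing in CAD), using Open3D as one open-source option; the file names and measurements are made up.

```python
import open3d as o3d

# Illustrative only: Open3D stands in for the unnamed open-source tools.
mesh = o3d.io.read_triangle_mesh("relic.ply")   # photogrammetry output
mesh.remove_duplicated_vertices()               # basic mesh cleaning
mesh.remove_degenerate_triangles()
mesh.compute_vertex_normals()

# Scale adjustment: ratio of a caliper-measured distance on the real
# artifact to the same distance on the model (made-up values).
real_mm, model_units = 84.0, 0.84
mesh.scale(real_mm / model_units, center=mesh.get_center())

o3d.io.write_triangle_mesh("relic_scaled.ply", mesh)
```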