• Title/Summary/Keyword: computational results


Development of Agent-based Platform for Coordinated Scheduling in Global Supply Chain (글로벌 공급사슬에서 경쟁협력 스케줄링을 위한 에이전트 기반 플랫폼 구축)

  • Lee, Jung-Seung;Choi, Seong-Woo
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.213-226 / 2011
  • In a global supply chain, scheduling problems for large products such as ships, airplanes, space shuttles, assembled constructions, and automobiles are inherently complicated. New scheduling systems are often developed to reduce this computational complexity: a problem is decomposed into small sub-problems, each handled by an independent small scheduling system that is later integrated back into the original problem. As one of the authors experienced, DAS (Daewoo Shipbuilding Scheduling System) adopted such a two-layered hierarchical architecture, in which a high-level dock scheduler, DAS-ERECT, and low-level assembly-plant schedulers, DAS-PBS, DAS-3DS, DAS-NPS, and DAS-A7, each search for the best schedule under their own constraints. Moreover, the rapid growth of communication technology and logistics has enabled distributed multi-nation production, in which different parts are produced by designated plants, so vertical and lateral coordination among the decomposed scheduling systems is necessary. Although many scheduling systems exist in the scheduling literature, there is no standard mechanism for coordinating multiple scheduling systems. Previous research on coordination mechanisms has mainly focused on external conversation without a capacity model; prior agent research has focused heavily on agent-based coordination but has not developed a scheduling domain; and previous work on agent-based scheduling has paid attention mainly to internal coordination of the scheduling process, which has not been efficient. In this study, we suggest a general framework for agent-based coordination of multiple scheduling systems in a global supply chain, with the goal of designing a standard coordination mechanism.
To do so, we first define individual scheduling agents, each responsible for its own plant, and a meta-level coordination agent that interacts with each individual scheduling agent. We then suggest variables and values describing both kinds of agents, represented in Backus-Naur Form. Second, we suggest scheduling-agent communication protocols for each agent topology, classified by system architecture, the existence of a coordinator, and the direction of coordination. If a coordinating agent exists, an individual scheduling agent communicates with another individual agent indirectly through the coordinator; otherwise, it communicates with the other agent directly. To apply an agent communication language specifically to the scheduling-coordination domain, we additionally define an inner language that suitably expresses scheduling coordination; the agent communication language itself serves communication among agents independent of domain. We adopt three message layers: the ACL layer, the scheduling-coordination layer, and the industry-specific layer. The ACL layer is a domain-independent outer layer, the scheduling-coordination layer holds the terms necessary for scheduling coordination, and the industry-specific layer expresses the industry specification. Third, to improve the efficiency of communication among scheduling agents and avoid possible infinite loops, we suggest a look-ahead load-balancing model that monitors participating agents and analyzes their status. To build this model, the status of participating agents must be monitored and, above all, the amount of shared information must be considered.
If complete information is collected, the cost of updating and maintaining the shared information increases even though the frequency of communication decreases; the level of detail and the updating period of shared information should therefore be decided contingently. By means of this standard coordination mechanism, coordination processes of multiple scheduling systems can easily be modeled into a supply chain. Finally, we apply the mechanism to the shipbuilding domain and develop a prototype system consisting of a dock-scheduling agent, four assembly-plant-scheduling agents, and a meta-level coordination agent. A series of experiments using real-world data examines the mechanism empirically. The results show that the effect of the agent-based platform on coordinated scheduling is evident in terms of the number of tardy jobs, tardiness, and makespan.
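The three message layers and the coordinator/direct routing alternatives described above can be sketched as follows. This is a hypothetical illustration, not the paper's protocol: the field names (`performative`, `job_id`, `due_date`) and the `route` helper are assumptions chosen to mirror the layered structure.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SchedulingMessage:
    # ACL layer: domain-independent envelope
    performative: str              # e.g. "request", "inform"
    sender: str
    receiver: str
    # Scheduling-coordination layer: terms shared by all scheduling agents
    job_id: str
    due_date: int
    # Industry-specific layer: free-form payload (e.g. shipbuilding terms)
    payload: dict = field(default_factory=dict)

def route(msg: SchedulingMessage, coordinator: Optional[str]) -> list:
    """With a coordinator present, agents talk indirectly through it;
    without one, the sender contacts the receiver directly."""
    if coordinator is not None:
        return [msg.sender, coordinator, msg.receiver]
    return [msg.sender, msg.receiver]

msg = SchedulingMessage("request", "DAS-PBS", "DAS-ERECT", "job-42", 120)
print(route(msg, coordinator="meta-agent"))  # indirect path via coordinator
print(route(msg, coordinator=None))          # direct path
```

The routing switch corresponds to the topology classification above: the same message content travels either path, only the conversation partners change.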

Dehumidification and Temperature Control for Green Houses using Lithium Bromide Solution and Cooling Coil (리튬브로마이드(LiBr) 용액의 흡습성질과 냉각코일을 이용한 온실 습도 및 온도 제어)

  • Lee, Sang Yeol;Lee, Chung Geon;Euh, Seung Hee;Oh, Kwang Cheol;Oh, Jae Heun;Kim, Dea Hyun
    • Journal of Bio-Environment Control / v.23 no.4 / pp.337-341 / 2014
  • Due to the nature of the ambient air temperature in summer in Korea, growing crops in a greenhouse normally requires cooling and dehumidification. Although various cooling and dehumidification methods have been presented, many obstacles remain in practical application, such as excessive energy use, cost, and performance. To overcome these problems, lab-scale experiments on dehumidification and cooling for greenhouses were performed using lithium bromide (LiBr) solution and a cooling coil. In this study, a preliminary experiment first used the LiBr solution as the dehumidifying material and the cooling coil separately, and the combined system was then tested as well. Hot and humid air was dehumidified from 85% to 70% relative humidity by passing through a pad soaked with LiBr, and cooled from 308 K to 299 K through the cooling coil. Computational fluid dynamics (CFD) analysis and an analytical solution were applied to the change of air temperature by heat transfer. The simulation results gave final air temperatures of 299.7 K and 299.9 K, respectively, within 0.7 K of the experimental value, in good agreement. From these results, a LiBr solution with a cooling coil system could be applicable in greenhouses.
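The coil-side temperature drop reported above (308 K in, about 299 K out) can be illustrated with a minimal lumped-parameter model, assuming a constant coil surface temperature and an NTU-style exponential approach. All numbers here are illustrative assumptions, not the paper's CFD inputs.

```python
import math

def coil_outlet_temp(T_in, T_coil, UA, m_dot, cp=1005.0):
    """Outlet air temperature [K] for inlet air T_in [K], coil surface
    temperature T_coil [K], overall conductance UA [W/K], air mass flow
    m_dot [kg/s], and specific heat cp [J/(kg K)]."""
    ntu = UA / (m_dot * cp)                      # number of transfer units
    return T_coil + (T_in - T_coil) * math.exp(-ntu)

# Assumed values chosen so the outlet lands near the paper's ~299 K result.
T_out = coil_outlet_temp(T_in=308.0, T_coil=296.0, UA=60.0, m_dot=0.05)
print(round(T_out, 1))
```

With these assumed parameters the outlet temperature falls in the same range as the experiment; the real system would be sized from the coil geometry and air-side heat-transfer coefficients.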

Feasibility of Automated Detection of Inter-fractional Deviation in Patient Positioning Using Structural Similarity Index: Preliminary Results (Structural Similarity Index 인자를 이용한 방사선 분할 조사간 환자 체위 변화의 자동화 검출능 평가: 초기 보고)

  • Youn, Hanbean;Jeon, Hosang;Lee, Jayeong;Lee, Juhye;Nam, Jiho;Park, Dahl;Kim, Wontaek;Ki, Yongkan;Kim, Donghyun
    • Progress in Medical Physics / v.26 no.4 / pp.258-266 / 2015
  • Modern radiotherapy techniques that deliver a large dose to the patient require the positions of patients or tumors to be confirmed more accurately using high-definition X-ray projection images. However, the rapid increase in patient exposure and image information required for CT image acquisition may place an additional burden on the patient. In this study, by introducing the structural similarity (SSIM) index, which can effectively extract the structural information of an image, we analyze the differences between daily acquired X-ray images of a patient to verify the accuracy of patient positioning. First, to simulate a moving target, spherical computational phantoms of varying sizes and positions were created and projected images were acquired. Differences between the images were automatically detected and analyzed by extracting their SSIM values. In addition, as a clinical test, differences between daily X-ray images of a patient acquired over 12 days were detected in the same way. As a result, we confirmed that the SSIM index varied over the range 0.85~1 (0.006~1 when a region of interest (ROI) was applied) as the size or position of the phantom changed; the SSIM was more sensitive to the change when the ROI was limited to the phantom itself. In the clinical test, the daily change of patient position gave SSIM values of 0.799~0.853, which described the differences among images well. Therefore, we expect that the SSIM index can provide an objective and quantitative technique for verifying patient position using simple X-ray images, instead of time- and cost-intensive three-dimensional X-ray images.
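The SSIM index used above follows the standard Wang et al. formulation. A minimal global sketch (a single window over the whole image, rather than the usual sliding window) looks like this; identical images score exactly 1, and any structural or intensity change lowers the score:

```python
import numpy as np

def ssim(x: np.ndarray, y: np.ndarray, L: float = 255.0) -> float:
    """Global SSIM between two images with dynamic range L."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2    # standard stabilizers
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, (64, 64))
print(ssim(img, img))                 # identical images -> 1.0
print(ssim(img, img + 10.0) < 1.0)    # intensity shift -> score below 1
```

Restricting `x` and `y` to an ROI before calling `ssim`, as the paper does for the phantom, makes the score more sensitive to changes inside that region.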

Fast Full Search Block Matching Algorithm Using The Search Region Subsampling and The Difference of Adjacent Pixels (탐색 영역 부표본화 및 이웃 화소간의 차를 이용한 고속 전역 탐색 블록 정합 알고리듬)

  • Cheong, Won-Sik;Lee, Bub-Ki;Lee, Kyeong-Hwan;Choi, Jung-Hyun;Kim, Kyeong-Kyu;Kim, Duk-Gyoo;Lee, Kuhn-Il
    • Journal of the Korean Institute of Telematics and Electronics S / v.36S no.11 / pp.102-111 / 1999
  • In this paper, we propose a fast full-search block-matching algorithm that uses search-region subsampling and the differences between adjacent pixels in the current block. The proposed algorithm calculates a lower bound on the mean absolute difference (MAD) at each search point from the MAD of a neighboring search point and the adjacent-pixel differences in the current block, and then performs the block-matching process only at the search points that require it. Because the lower bound at a search point needs the MAD of a neighboring point, the search points are subsampled by a factor of 4, and the MADs at the subsampled points are computed by full block matching. The lower bounds at the remaining search points are then calculated from the MAD of the neighboring subsampled point and the adjacent-pixel differences in the current block. Finally, search points whose lower bound exceeds the reference MAD, the minimum of the MADs at the subsampled points, are discarded, and block matching is performed only at the remaining points. In this way the computational complexity is reduced drastically while the motion-compensated error performance is kept the same as that of the full-search block-matching algorithm (FSBMA). The experimental results show that the proposed method has much lower computational complexity than FSBMA while keeping the same motion-compensated error performance.
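The pruning idea, skipping search points whose MAD provably cannot beat the current best, can be sketched as below. Note this sketch substitutes the classic block-sum (successive-elimination-style) bound for the paper's adjacent-pixel-difference bound; both are valid lower bounds on MAD, so the result still matches an exhaustive search exactly.

```python
import numpy as np

def mad(a, b):
    """Mean absolute difference between two equal-sized blocks."""
    return np.abs(a.astype(float) - b.astype(float)).mean()

def full_search(cur_block, ref, search=4):
    """Exhaustive search over +/-`search` pixels with lower-bound pruning;
    returns the best motion vector (dy, dx) and its MAD."""
    H, W = cur_block.shape
    n = H * W
    block_sum = cur_block.sum(dtype=float)
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = ref[search + dy: search + dy + H,
                       search + dx: search + dx + W]
            # |sum(block) - sum(candidate)| / n <= MAD, so this candidate
            # cannot beat the current best: skip the full matching.
            if abs(block_sum - cand.sum(dtype=float)) / n >= best:
                continue
            m = mad(cur_block, cand)
            if m < best:
                best, best_mv = m, (dy, dx)
    return best_mv, best

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (24, 24))
cur = ref[4 + 2: 4 + 2 + 16, 4 - 1: 4 - 1 + 16]   # true motion (2, -1)
print(full_search(cur, ref))
```

Because pruning only discards points whose bound already exceeds the best MAD found, the returned motion vector is identical to FSBMA's, which is the property the abstract emphasizes.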


Forecasting Hourly Demand of City Gas in Korea (국내 도시가스의 시간대별 수요 예측)

  • Han, Jung-Hee;Lee, Geun-Cheol
    • Journal of the Korea Academia-Industrial cooperation Society / v.17 no.2 / pp.87-95 / 2016
  • This study examined the characteristics of the hourly demand for city gas in Korea and proposed multiple regression models to obtain precise estimates of that demand. Forecasting the hourly demand of city gas accurately is essential in terms of safety and cost: if demand is underestimated, the pipeline pressure must be increased sharply to meet it, which raises safety concerns, while overestimation incurs unnecessary inventory and operation costs. Data analysis showed that the hourly demand of city gas has a very high autocorrelation and that the 24-hour demand pattern of a day follows the 24-hour demand pattern of the same day one week earlier; that is, there is a weekly cycle. In addition, conditions under which temperature affects the hourly demand level were found: the absolute value of the correlation coefficient between hourly demand and temperature is about 0.853 on average, and for a specific day ranges from 0.861 at worst to 0.965 at best. Based on this analysis, the paper proposes a multiple regression model incorporating the demand 24 hours earlier and the demand 168 hours earlier, and another multiple regression model with temperature as an additional independent variable. To show the performance of the proposed models, computational experiments were carried out using real data on domestic city-gas demand from 2009 to 2013. The test results showed that the first regression model exhibits a forecasting accuracy of around 4.5% MAPE (mean absolute percentage error) over those five years, while the second regression model exhibits 5.13% MAPE for the same period.
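The first model above regresses demand at hour t on the demand 24 hours and 168 hours earlier. A minimal sketch on synthetic data (the real series is not public here, so the daily/weekly sinusoids below are stand-ins):

```python
import numpy as np

rng = np.random.default_rng(42)
hours = np.arange(24 * 7 * 8)                        # 8 synthetic weeks
demand = (100
          + 20 * np.sin(2 * np.pi * hours / 24)      # daily cycle
          + 10 * np.sin(2 * np.pi * hours / 168)     # weekly cycle
          + rng.normal(0, 1, hours.size))            # noise

t = np.arange(168, demand.size)                      # need a full week of lags
X = np.column_stack([np.ones(t.size),                # intercept
                     demand[t - 24],                 # demand 24 h earlier
                     demand[t - 168]])               # demand 168 h earlier
y = demand[t]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)         # [b0, b_24, b_168]

pred = X @ beta
mape = np.mean(np.abs((y - pred) / y)) * 100
print(beta, round(mape, 2))
```

On this synthetic series the lagged regressors capture both cycles, so the in-sample MAPE is small; the paper reports about 4.5% out-of-sample on real demand data, and its second model adds temperature as a further column of `X`.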

A Comparative Study on the Improvement of Curriculum in the Junior College for the Industrial Design Major (2년제 대학 산업디자인전공의 교육과정 개선방안에 관한 비교연구)

  • 강사임
    • Archives of design research / v.13 no.1 / pp.209-218 / 2000
  • The purpose of this study was to improve the curriculum of industrial design departments in junior colleges. To achieve this, two methods were used: first, a job analysis of industrial designers working in small and medium manufacturing companies; second, a survey of the opinions of junior-college professors. The results were as follows: 1. The junior-college program for industrial designers currently lasts 2 years, but an optional 1-year advanced course can be established. 2. Practice subjects such as the computational formative techniques needed for product development should be increased; in addition, elective subjects such as foreign language, manufacturing processes, new-product information, and consumer-behavior investigation should be extended. 3. The following subjects need adjustments to their titles, contents, and hours: (1) The need for 3-D-related subjects such as computer modeling, computer rendering, and 3-D modeling was high, and the use of computers is required in design-presentation subjects. (2) The need for advertising- and sales-related subjects such as printing, merchandising, packaging, typography, and photography was low, while the need for presentation techniques for new-product development was high. (3) The need for field practice, special lectures on practice, and reading original texts was the same as at present, but these subjects should not be emphasized in form only. As designers keenly feel the necessity of using a foreign language, the need for language subjects was high.


Multi-task Learning Based Tropical Cyclone Intensity Monitoring and Forecasting through Fusion of Geostationary Satellite Data and Numerical Forecasting Model Output (정지궤도 기상위성 및 수치예보모델 융합을 통한 Multi-task Learning 기반 태풍 강도 실시간 추정 및 예측)

  • Lee, Juhyun;Yoo, Cheolhee;Im, Jungho;Shin, Yeji;Cho, Dongjin
    • Korean Journal of Remote Sensing / v.36 no.5_3 / pp.1037-1051 / 2020
  • Accurate monitoring and forecasting of the intensity of tropical cyclones (TCs) can effectively reduce the overall cost of disaster management. In this study, we proposed a multi-task learning (MTL) based deep learning model for real-time TC intensity estimation and forecasting with lead times of 6-12 hours, based on the fusion of geostationary satellite images and numerical forecast model output. A total of 142 TCs that developed in the Northwest Pacific from 2011 to 2016 were used. Communication, Ocean and Meteorological Satellite (COMS) Meteorological Imager (MI) data were used to extract typhoon images, and the Climate Forecast System version 2 (CFSv2) provided by the National Centers for Environmental Prediction (NCEP) was employed to extract air and ocean forecast data. Two schemes with different input variables to the MTL models were examined: scheme 1 used only satellite-based input data, while scheme 2 used both satellite images and numerical forecast model output. For real-time TC intensity estimation, both schemes exhibited similar performance. For TC intensity forecasting with lead times of 6 and 12 hours, scheme 2 improved the root mean squared error (RMSE) by 13% and 16%, respectively, compared to scheme 1. Relative root mean squared errors (rRMSE) for most intensity levels were less than 30%, and lower mean absolute error (MAE) and RMSE were found for lower TC intensity levels. In a test on typhoon HALONG in 2014, scheme 1 tended to overestimate the intensity by about 20 kts in the early development stage; scheme 2 slightly reduced the error, overestimating by about 5 kts. The MTL models also reduced the computational cost by about 300% compared to single-task models, suggesting the feasibility of rapid production of TC intensity forecasts.
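The computational saving claimed above comes from the defining property of MTL: one shared trunk is evaluated once and feeds several task heads. A schematic forward pass, with illustrative shapes and random weights standing in for the paper's deep CNN:

```python
import numpy as np

rng = np.random.default_rng(0)
n_features = 16                        # stand-in for fused satellite + NWP inputs

W_shared = rng.normal(size=(n_features, 8))
W_estimate = rng.normal(size=(8, 1))   # head 1: current TC intensity (kts)
W_forecast = rng.normal(size=(8, 1))   # head 2: intensity 6-12 h ahead (kts)

def mtl_forward(x):
    """One shared representation, two task-specific heads."""
    h = np.tanh(x @ W_shared)          # shared trunk, computed once
    return (h @ W_estimate).ravel(), (h @ W_forecast).ravel()

x = rng.normal(size=(4, n_features))   # a batch of 4 samples
est, fcst = mtl_forward(x)
print(est.shape, fcst.shape)           # both outputs from one shared pass
```

Two single-task models would each recompute their own trunk; sharing it is what cuts the cost (and, in training, lets the estimation and forecasting tasks regularize each other).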

A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems (방출단층촬영 시스템을 위한 GPU 기반 반복적 기댓값 최대화 재구성 알고리즘 연구)

  • Ha, Woo-Seok;Kim, Soo-Mee;Park, Min-Jae;Lee, Dong-Soo;Lee, Jae-Sung
    • Nuclear Medicine and Molecular Imaging / v.43 no.5 / pp.459-467 / 2009
  • Purpose: Maximum likelihood-expectation maximization (ML-EM) is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Materials and Methods: Using a GeForce 9800 GTX+ graphics card and CUDA (compute unified device architecture), NVIDIA's technology, the projection and backprojection in the ML-EM algorithm were parallelized. The time spent per iteration computing the projection, the errors between measured and estimated data, and the backprojection was measured; the total time included the latency of data transmission between RAM and GPU memory. Results: The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively, a speed-up of about 15 times on the GPU. When the number of iterations increased to 1024, the CPU- and GPU-based computations took 18 min and 8 sec in total, respectively, an improvement of about 135 times, caused by delays in the CPU-based computation after a certain number of iterations. The GPU-based computation, by contrast, showed very small variation in time per iteration owing to the use of shared memory. Conclusion: GPU-based parallel computation significantly improved the computing speed and stability of ML-EM, and the developed GPU-based ML-EM algorithm could easily be modified for other imaging geometries.
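The ML-EM iteration itself is a short multiplicative update; the projection and backprojection inside it are the steps the paper parallelizes on the GPU. A minimal CPU sketch for a linear emission model y = A·λ, with a small random system matrix standing in for a real scanner geometry (noiseless data for simplicity):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.uniform(0.1, 1.0, (40, 10))       # system (projection) matrix
lam_true = rng.uniform(1.0, 5.0, 10)      # true activity distribution
y = A @ lam_true                          # measured projections (noiseless)

def ml_em(A, y, n_iter):
    """ML-EM: lam <- lam * A^T(y / A lam) / A^T 1."""
    lam = np.ones(A.shape[1])             # uniform initial estimate
    sens = A.sum(axis=0)                  # sensitivity term, A^T 1
    for _ in range(n_iter):
        proj = A @ lam                    # forward projection
        lam = lam * (A.T @ (y / proj)) / sens   # multiplicative EM update
    return lam

lam0 = np.ones(10)
lam = ml_em(A, y, 500)
print(np.abs(A @ lam - y).sum() < np.abs(A @ lam0 - y).sum())
```

Each iteration is dominated by the two matrix-vector products (`A @ lam` and `A.T @ ...`); on a real scanner these are large and embarrassingly parallel, which is why mapping them to CUDA threads yields the speed-ups reported above.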

External Gravity Field in the Korean Peninsula Area (한반도 지역에서의 상층중력장)

  • Jung, Ae Young;Choi, Kwang-Sun;Lee, Young-Cheol;Lee, Jung Mo
    • Economic and Environmental Geology / v.48 no.6 / pp.451-465 / 2015
  • Free-air anomalies are computed using a data set from various types of gravity measurements in the Korean Peninsula area, with gravity values extracted from the Earth Gravitational Model 2008 used for the surrounding region. The upward-continuation technique suggested by Dragomir is used to compute the external free-air anomalies at various altitudes. An integration radius of 10 times the altitude is used to balance the accuracy of the results against computational resources, and the direct geodesic formula developed by Bowring is employed in the integration. At 1-km altitude, the free-air anomalies vary from -41.315 to 189.327 mGal with a standard deviation of 22.612 mGal; at 3-km altitude, from -36.478 to 156.209 mGal with a standard deviation of 20.641 mGal; and at 1,000-km altitude, from 3.170 to 5.864 mGal with a standard deviation of 0.670 mGal. The predicted free-air anomalies at 3-km altitude are compared to published free-air anomalies reduced from airborne gravity measurements at the same altitude. The rms difference is 3.88 mGal; considering the reported 2.21-mGal airborne gravity cross-over accuracy, this difference is not serious. Possible causes of the difference are external free-air-anomaly simulation errors in this work and/or gravity-reduction errors in the airborne data. The external gravity field is predicted by adding the external free-air anomaly to the normal gravity computed using the closed-form formula for gravity above and below the surface of the ellipsoid. The predicted external gravity field is expected to represent the real external gravity field reasonably well. This appears to be the first structured research on the external free-air anomaly in the Korean Peninsula area, and the external gravity field can be used to improve the accuracy of inertial navigation systems.
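The upward continuation used above can be illustrated with a flat-Earth Poisson-integral sketch: surface anomalies are smoothed and attenuated as they are propagated to altitude h. This is a simplified stand-in for the paper's method (Dragomir's formulation with Bowring's geodesic formula on the ellipsoid); the grid and anomaly values are synthetic.

```python
import numpy as np

def upward_continue(grid, dx, h):
    """Continue a surface anomaly grid [mGal] on spacing dx [m] up to
    altitude h [m] via the flat-Earth Poisson integral."""
    n = grid.shape[0]
    xs = (np.arange(n) - n // 2) * dx
    X, Y = np.meshgrid(xs, xs)
    out = np.empty_like(grid, dtype=float)
    for i in range(n):
        for j in range(n):
            rx, ry = X - xs[j], Y - xs[i]
            r3 = (rx ** 2 + ry ** 2 + h ** 2) ** 1.5
            kernel = h / (2 * np.pi * r3)        # Poisson kernel
            out[i, j] = (kernel * grid).sum() * dx * dx
    return out

surface = np.zeros((41, 41))
surface[20, 20] = 100.0                          # point-like surface anomaly
aloft = upward_continue(surface, dx=1000.0, h=3000.0)
print(aloft.max() < surface.max())               # attenuated aloft
```

The attenuation with altitude is why the anomaly range shrinks from roughly -41 to 189 mGal at 1 km down to 3.17-5.86 mGal at 1,000 km; the truncation of the integral to a finite radius (10x the altitude in the paper) trades a small error for tractable computation.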

Analysis of Fluid Flows in a High Rate Spiral Clarifier and the Evaluation of Field Applicability for Improvement of Water Quality (고속 선회류 침전 장치의 유동 해석 및 수질 개선을 위한 현장 적용 가능성 평가)

  • Kim, Jin Han;Jun, Se Jin
    • Journal of Wetlands Research / v.16 no.1 / pp.41-50 / 2014
  • The purpose of this study is to evaluate the applicability of a High Rate Spiral Clarifier (HRSC) for improving the water quality of a polluted retention pond. Lab-scale and pilot-scale tests were performed. The fluid flow patterns in the HRSC were studied using Fluent, one of the computational fluid dynamics (CFD) programs, varying the inlet velocity and inlet diameter, the body length ($L_B$) and lower-cone length ($L_C$), the angle of and gap between the inverted sloping cones, and whether a lower exit hole was installed. A pilot-scale experimental apparatus was built on the basis of the flow analysis and the lab-scale test, and a field test was then executed on the retention pond. In the study of the internal flow of the apparatus, we found that inlet velocity had a greater effect on forming spiral flow than inlet flow rate or inlet diameter. No observable effect on spiral flow was found for $L_B$ in the range of 1.2 to 1.6$D_B$ (body diameter) and $L_C$ in the range of 0.35 to 0.5$L_B$, but spiral flow decreased at the higher ratios of $L_B/D_B$ = 2.0 and $L_C/L_B$ = 0.75. As the angle of the inverted sloping cone increased, the velocity gradually dropped and became evenly distributed within the cone. A 10-cm gap between the inverted sloping cones was better than 20 cm for preventing turbulent flow, and omitting the lower exit hole was better for preventing channeling and distributing the effluent flow evenly. The pilot-scale field test confirmed that particulate matter was effectively removed; therefore, this apparatus could serve as one way to improve water quality in a large water body such as a retention pond.