• Title/Summary/Keyword: Computation Time


Software Development for Optimal Productivity and Service Level Management in Ports (항만에서 최적 생산성 및 서비스 수준 관리를 위한 소프트웨어 개발)

  • Park, Sang-Kook
    • Journal of Navigation and Port Research
    • /
    • v.41 no.3
    • /
    • pp.137-148
    • /
    • 2017
  • Port service level is a metric of competitiveness among ports for operating and managing bodies such as the terminal operation company (TOC), the Port Authority, or the government, and is an important indicator for shipping companies and freight haulers when selecting a port. Considering the importance of these metrics, we developed software to objectively define and manage six key service indicators for container and bulk terminals: berth occupancy rate, ship's waiting ratio, berth throughput, number of berths, average number of vessels waiting, and average waiting time. We computed the six service indicators for berths 1 through 5 in the container terminals and berths 1 through 4 in the bulk terminals. The software model allows easy computation of the expected ship's waiting ratio over the berth occupancy rate, berth throughput, number of berths, average number of vessels waiting, and average waiting time. Further, the software predicts yearly throughput from the ship's waiting ratio and other productivity indicators, making its calculations from the arrival patterns of ship traffic. As a result, a TOC is able to make strategic decisions on the trade-offs in the optimal operating level of the facility with better predictors of the service factor (ship's waiting ratio) and the productivity factor (yearly throughput). Successful implementation of the software would attract more shipping companies and shippers and maximize TOC profits.
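The indicators named in this abstract correspond to standard queueing quantities; a minimal sketch that derives occupancy, average queue length, average waiting time, and a waiting ratio from an M/M/c queue (an illustrative modelling assumption on my part — the paper computes its indicators from observed ship arrival patterns, not necessarily this model):

```python
import math

def berth_service_indicators(arrival_rate, service_rate, berths):
    """Estimate terminal service indicators by modelling the berths as an
    M/M/c queue: arrival_rate = ships per unit time, service_rate = ships
    handled per berth per unit time, berths = number of berths."""
    offered = arrival_rate / service_rate          # offered load in "berths"
    occupancy = offered / berths                   # berth occupancy rate
    if occupancy >= 1.0:
        raise ValueError("unstable system: occupancy >= 100%")
    # Erlang C: probability that an arriving ship has to wait for a berth.
    head = sum(offered ** k / math.factorial(k) for k in range(berths))
    tail = offered ** berths / (math.factorial(berths) * (1.0 - occupancy))
    p_wait = tail / (head + tail)
    avg_queue = p_wait * occupancy / (1.0 - occupancy)   # vessels waiting
    avg_wait = avg_queue / arrival_rate                  # Little's law
    return {
        "occupancy": occupancy,
        "avg_queue": avg_queue,
        "avg_wait": avg_wait,
        "waiting_ratio": avg_wait * service_rate,        # wait / mean service time
    }
```

For example, four berths each turning around one ship per day under two arrivals per day run at 50% occupancy with roughly 0.17 vessels waiting on average.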

A Tree-structured XPath Query Reduction Scheme for Enhancing XML Query Processing Performance (XML 질의의 수행성능 향상을 위한 트리 구조 XPath 질의의 축약 기법에 관한 연구)

  • Lee, Min-Soo;Kim, Yun-Mi;Song, Soo-Kyung
    • The KIPS Transactions:PartD
    • /
    • v.14D no.6
    • /
    • pp.585-596
    • /
    • 2007
  • XML data generally has a hierarchical tree structure, which is reflected in the mechanisms used to store and retrieve it. Therefore, when storing XML data in a database, the hierarchical relationships among the XML elements are taken into consideration during the restructuring and storing of the data. Also, in order to support user search queries, a mechanism is needed to compute the hierarchical relationships between the element structures specified by a query. The structural join operation is one solution to this problem and is an efficient method for computing hierarchical relationships in a database based on the node numbering scheme. However, processing a tree-structured XML query that contains complex nested hierarchical relationships still requires multiple structural joins, which results in a high query execution cost. Therefore, in this paper we provide a preprocessing mechanism that effectively reduces the cost of multiple nested structural joins by applying the concept of equivalence classes, and we suggest a query path reduction algorithm to shorten path queries expressed as regular expressions. The mechanism is especially devised to reduce path queries containing branch nodes. The experimental results show that the proposed algorithm can reduce the time required for processing path queries to 1/3 of the original execution time.
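The node-numbering principle behind the structural join mentioned above can be sketched with interval labels (a generic illustration of the underlying idea, not the paper's reduction algorithm; the tree encoding and tag names are made up for the example):

```python
from itertools import count

def label_tree(node, ctr=None, out=None):
    """Assign (start, end) interval labels in document order.
    A node is a (tag, children) pair; returns [(tag, start, end), ...]."""
    if ctr is None:
        ctr, out = count(1), []
    tag, children = node
    entry = [tag, next(ctr), None]
    out.append(entry)
    for child in children:
        label_tree(child, ctr, out)
    entry[2] = next(ctr)          # close the interval after the subtree
    return [tuple(e) for e in out]

def structural_join(labels, anc_tag, desc_tag):
    """Ancestor A contains descendant D iff A.start < D.start and D.end < A.end."""
    anc = [(s, e) for t, s, e in labels if t == anc_tag]
    desc = [(s, e) for t, s, e in labels if t == desc_tag]
    return [(a, d) for a in anc for d in desc if a[0] < d[0] and d[1] < a[1]]
```

With this labelling, the ancestor test is a pure interval comparison, so no tree traversal is needed at join time — which is exactly why nesting many such joins, as a branched XPath query does, dominates the query cost.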

A Study on Improvement of the DDHV Estimating Method (설계시간교통량 산정방법 개선)

  • 문미경;장명순;강재수
    • Journal of Korean Society of Transportation
    • /
    • v.21 no.5
    • /
    • pp.61-71
    • /
    • 2003
  • The existing DDHV method derives the K and D coefficients from the sum of the two-directional hourly traffic volumes. This leads to mismatches between the design rank and the actually observed rank, to errors in the DDHV estimate, and to irregular variation of DDHV. In this study, we determine the design target rank directly within the directional traffic volumes of the two mutually independent directions (up and down), and present a method that applies it without separating the K and D coefficients. Analyzing the 30th-rank DDHV estimates of the existing (separated) method at 360 permanent count points on national highways gave the following results: the design rank and the observed rank disagreed at 357 points (99.2%); the observed rank averaged 80, with a maximum of 1,027 and a minimum of 2; the error between design and observed rank fell within ±10 ranks (30±10) at 106 points (29.4%), while 254 points (70.6%) showed errors of more than ±10 ranks; and the DDHV estimation error averaged 8.4%, with a maximum of 46.7%. In contrast, because the non-separated method makes the observed rank agree with the design rank at every point, the rank error of its DDHV estimate is zero; assuming a correct AADT, the non-separated method was therefore analyzed to improve the design hourly volume computation by an average of 8.4% in DDHV error, or about 50 ranks.
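The contrast between the two estimates can be illustrated with the basic design-hour arithmetic (a hedged reconstruction from the abstract; the numbers and the ranking logic are illustrative, not the study's data):

```python
def ddhv_separated(aadt, k, d):
    """Conventional estimate: DDHV = AADT * K * D, where K is the 30th-highest
    two-directional hourly volume as a fraction of AADT and D is the
    directional split observed in that hour."""
    return aadt * k * d

def ddhv_direct(directional_hourly_volumes, rank=30):
    """Non-separated estimate: take the rank-th highest *directional* hourly
    volume directly, so the design rank always matches the observed rank."""
    return sorted(directional_hourly_volumes, reverse=True)[rank - 1]
```

The separated form can point at an hour whose directional volume is not actually the 30th-highest directional volume of the year, which is the design-rank/observed-rank mismatch the study measures; the direct form cannot, by construction.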

A Convergence Study on the 5-axis Machining Technology using the DICOM Image of the Humerus Bone (위팔뼈 의료용 디지털 영상 및 통신 표준 영상을 이용한 5축 가공기술의 융합적 연구)

  • Yoon, Jae-Ho;Ji, Tae-Jeong;Yoon, Joon;Kim, Hyeong-Gyun
    • Journal of the Korea Convergence Society
    • /
    • v.8 no.11
    • /
    • pp.115-121
    • /
    • 2017
  • The present study aimed to obtain basic knowledge for a customized artificial joint through convergence research combining Digital Imaging and Communications in Medicine (DICOM) and 5-axis machining technology. As the research method, a three-dimensional model was generated from the medical image of the humerus bone, and the shape was machined from a chemical wood material. Then, the anatomical characteristics and the modeling and machining computation times were compared. The results showed that the Stereolithography (STL) modeling required twice as much time for semi-finishing and 10 times as much time for finishing as the Initial Graphics Exchange Specification (IGES) modeling. For the 5-axis machined humerus bone, the anatomical structures of the anatomic neck, greater tubercle, lesser tubercle, and intertubercular groove were similar to those in the three-dimensional medical image. In the future, the convergence machining technology described above, in which various structures (e.g., the surgical neck undercut of the humerus bone) are machined on a 5-axis machine, can be efficiently applied to the manufacture of a customized joint that pursues a precise model of the human body.

Fast Motion Estimation for Variable Motion Block Size in H.264 Standard (H.264 표준의 가변 움직임 블록을 위한 고속 움직임 탐색 기법)

  • 최웅일;전병우
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.6
    • /
    • pp.209-220
    • /
    • 2004
  • The main features of the H.264 standard, compared with conventional video standards, are its high coding efficiency and network friendliness. In spite of these outstanding features, it is not easy to implement an H.264 codec as a real-time system, due to its high memory bandwidth requirement and intensive computation. Although variable block size motion compensation using multiple reference frames is one of the key coding tools behind its performance gain, it demands substantial computational complexity, since the SAD (Sum of Absolute Differences) must be calculated over all possible combinations of coding modes to find the best motion vector. To speed up the motion estimation process, this paper therefore proposes fast algorithms for both integer-pel and fractional-pel motion search. Since many conventional fast integer-pel motion estimation algorithms are not suitable for H.264 with its variable motion block sizes, we propose a motion-field-adaptive search using a hierarchical block structure based on the diamond search, applicable to variable motion block sizes. Besides, we also propose a fast fractional-pel motion search using a small diamond search centered on the predicted motion vector, based on the statistical characteristics of motion vectors.
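The diamond search referenced in this abstract can be sketched for a single block as follows (a generic large/small-diamond pattern, not the paper's hierarchical, block-size-adaptive variant; the frames, block size, and search range in the example are illustrative):

```python
def sad(cur, ref, cx, cy, mx, my, bs):
    """Sum of absolute differences between the bs x bs current-frame block at
    (cx, cy) and the reference-frame block displaced by candidate (mx, my)."""
    return sum(abs(cur[cy + j][cx + i] - ref[cy + my + j][cx + mx + i])
               for j in range(bs) for i in range(bs))

def diamond_search(cur, ref, cx, cy, bs=4, max_range=7):
    """Integer-pel diamond search: walk the large diamond pattern until the
    best point stays at the centre, then refine once with the small diamond."""
    LDSP = [(0, 0), (2, 0), (-2, 0), (0, 2), (0, -2),
            (1, 1), (1, -1), (-1, 1), (-1, -1)]
    SDSP = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    best, best_cost = (0, 0), sad(cur, ref, cx, cy, 0, 0, bs)
    while True:
        centre = best
        for dx, dy in LDSP:
            mx, my = centre[0] + dx, centre[1] + dy
            if abs(mx) > max_range or abs(my) > max_range:
                continue
            cost = sad(cur, ref, cx, cy, mx, my, bs)
            if cost < best_cost:
                best, best_cost = (mx, my), cost
        if best == centre:          # large diamond converged
            break
    centre = best
    for dx, dy in SDSP:             # small-diamond refinement
        mx, my = centre[0] + dx, centre[1] + dy
        if abs(mx) > max_range or abs(my) > max_range:
            continue
        cost = sad(cur, ref, cx, cy, mx, my, bs)
        if cost < best_cost:
            best, best_cost = (mx, my), cost
    return best, best_cost
```

Compared with an exhaustive search over the (2·max_range+1)² candidates, the diamond walk evaluates only a handful of SADs per step, which is the source of the speed-up the paper builds on.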

A Study on the Improvement of Digital Periapical Images using Image Interpolation Methods (영상보간법을 이용한 디지털 치근단 방사선영상의 개선에 관한 연구)

  • Song Nam-Kyu;Koh Kawng-Joon
    • Journal of Korean Academy of Oral and Maxillofacial Radiology
    • /
    • v.28 no.2
    • /
    • pp.387-413
    • /
    • 1998
  • Image resampling is of particular interest in digital radiology. When resampling an image to a new set of coordinates, blocking artifacts and image changes appear. To enhance image quality, interpolation algorithms have been used. Resampling is used to increase the number of points in an image to improve its appearance for display; the process of interpolation fits a continuous function to the discrete points in the digital image. The purpose of this study was to determine the effects of seven interpolation functions when resampling digital periapical images. The images were obtained with Digora, CDR, and scanning of Ektaspeed Plus periapical radiograms of a dry skull and a human subject. The subjects were exposed with an intraoral X-ray machine at 60 kVp and 70 kVp, with exposure times varying between 0.01 and 0.50 second. To determine which interpolation method would provide the better image, seven functions were compared: (1) nearest neighbor (2) linear (3) non-linear (4) facet model (5) cubic convolution (6) cubic spline (7) gray segment expansion. The resampled images were compared in terms of SNR (Signal to Noise Ratio) and MTF (Modulation Transfer Function) coefficient values. The obtained results were as follows: 1. The highest SNR value (75.96 dB) was obtained with the cubic convolution method and the lowest SNR value (72.44 dB) with the facet model method among the seven interpolation methods. 2. There were significant differences in SNR values among CDR, Digora, and film scan (P<0.05). 3. There were significant differences in SNR values between 60 kVp and 70 kVp for the seven interpolation methods: the SNR values of the facet model method differed significantly from those of the other methods at 60 kVp (P<0.05), but there were no significant differences among the seven interpolation methods at 70 kVp (P>0.05). 4. There were significant differences in MTF coefficient values between the linear interpolation method and the other six interpolation methods (P<0.05). 5. Computation was fastest with the nearest-neighbor method and slowest with the non-linear method. 6. The better image was obtained with the cubic convolution, cubic spline, and gray segment methods in ROC analysis. 7. The better sharpness of edges was obtained with the gray segment expansion method among the seven interpolation methods.
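The two simplest of the seven functions compared above — nearest neighbor and (bi)linear interpolation — can be sketched as follows (an illustrative reimplementation for an integer upsampling factor, not the study's code; it assumes grayscale images of at least 2 × 2 pixels):

```python
def upsample(img, factor, method="bilinear"):
    """Resample a 2-D grayscale image (list of rows) by an integer factor
    using nearest-neighbor or bilinear interpolation."""
    h, w = len(img), len(img[0])
    out = [[0.0] * (w * factor) for _ in range(h * factor)]
    for Y in range(h * factor):
        for X in range(w * factor):
            # Map the output pixel back to source coordinates, clamped to the grid.
            y = min(Y / factor, h - 1.0)
            x = min(X / factor, w - 1.0)
            if method == "nearest":
                out[Y][X] = img[round(y)][round(x)]
            else:
                # Bilinear: blend the four surrounding samples by area weights.
                y0, x0 = min(int(y), h - 2), min(int(x), w - 2)
                dy, dx = y - y0, x - x0
                out[Y][X] = (img[y0][x0] * (1 - dy) * (1 - dx)
                             + img[y0][x0 + 1] * (1 - dy) * dx
                             + img[y0 + 1][x0] * dy * (1 - dx)
                             + img[y0 + 1][x0 + 1] * dy * dx)
    return out
```

Nearest neighbor copies samples and so is fastest but blocky; bilinear fits a piecewise-linear surface, trading a little speed for smoother output — the same speed/quality trade-off the study quantifies across all seven functions.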


Efficient Structure-Oriented Filter-Edge Preserving (SOF-EP) Method using the Corner Response (모서리 반응을 이용한 효과적인 Structure-Oriented Filter-Edge Preserving (SOF-EP) 기법)

  • Kim, Bona;Byun, Joongmoo;Seol, Soon Jee
    • Geophysics and Geophysical Exploration
    • /
    • v.20 no.3
    • /
    • pp.176-184
    • /
    • 2017
  • To interpret a seismic image precisely, random noise should be suppressed and the continuity of the image should be enhanced using appropriate smoothing techniques. The Structure-Oriented Filter-Edge Preserving (SOF-EP) technique is one of the methods that have been actively researched and used to efficiently smooth seismic data while preserving the continuity of signals. This technique is based on the principle that diffusion occurs from large amplitudes to small ones. In a continuous structure such as a horizontal layer, diffusion or smoothing operates along the layer, thereby increasing the continuity of layers and eliminating random noise. In addition, diffusion or smoothing across boundaries at discontinuous structures such as faults can be avoided by employing a continuity decision factor, so the precision of the smoothing technique can be improved. However, the structure-oriented semblance technique, which has been used to calculate the continuity factor, takes a long time depending on the size of the filter and the data. In this study, we first implemented the SOF-EP method and confirmed its effectiveness by applying it step by step to field data. Next, we proposed and applied the corner response method, which can efficiently calculate the continuity decision factor instead of structure-oriented semblance. As a result, we confirmed that the computation time can be reduced by a factor of about 6,000 or more by applying the corner response method.
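The corner-response idea can be illustrated with the classic Harris response R = det(M) − k·trace(M)², computed from a windowed structure tensor of image gradients (a generic sketch with an unweighted 3 × 3 window; the paper's seismic-specific formulation may differ):

```python
def corner_response(img, k=0.04):
    """Per-pixel Harris corner response of a 2-D image (list of rows).
    Edges give R < 0, corners R > 0, flat regions R near 0."""
    h, w = len(img), len(img[0])
    ix = [[0.0] * w for _ in range(h)]
    iy = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ix[y][x] = (img[y][x + 1] - img[y][x - 1]) / 2.0   # central differences
            iy[y][x] = (img[y + 1][x] - img[y - 1][x]) / 2.0
    R = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            sxx = syy = sxy = 0.0
            for dy in (-1, 0, 1):                 # 3x3 structure-tensor window
                for dx in (-1, 0, 1):
                    gx, gy = ix[y + dy][x + dx], iy[y + dy][x + dx]
                    sxx += gx * gx
                    syy += gy * gy
                    sxy += gx * gy
            R[y][x] = (sxx * syy - sxy * sxy) - k * (sxx + syy) ** 2
    return R
```

Because the response needs only gradient products summed over a small window, it is far cheaper than scanning semblance over a large filter aperture, which is consistent with the speed-up the study reports.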

A Study on Design of Reference Stations and Integrity Monitors for Maritime DGPS Recapitalization (해양용 DGPS 구조개선을 위한 RSIM 설계에 관한 연구)

  • Park, Sang-Hyun;Seo, Ki-Yeol;Cho, Deuk-Jae;Suh, Sang-Hyun
    • Journal of Navigation and Port Research
    • /
    • v.33 no.10
    • /
    • pp.691-697
    • /
    • 2009
  • Hardware-dedicated, off-the-shelf maritime differential GPS RSIM lacks the open architecture needed to meet all the minimum maritime user requirements and to accommodate future GNSS improvements after recapitalization. This paper studies replacing the existing hardware-dedicated differential GPS RSIM with a software differential GPS RSIM in order to make up for this weak point. The architecture of a software RSIM is proposed for maritime DGPS recapitalization, and a feasibility analysis of the proposed software differential GPS RSIM is performed as the first phase of realizing the proposed architecture. For the feasibility analysis, prototype RF and DSP modules are implemented with properties such as wide RF bandwidth, high sampling frequency, and a high-speed transmission interface. Through an analysis of computation time, this paper shows that the proposed architecture makes real-time operation of the software RSIM functionality possible on a PC-based platform. Finally, this paper verifies that the L1/L2 dual-frequency software differential RSIM designed according to the proposed method satisfies the performance specifications set out in RTCM paper 221-2006-SC104-STD.

A Study on the application of Critical Rainfall Duration for the Estimation of Design Flood (설계홍수량 산정에 따른 임계지속시간의 적용성에 관한 연구)

  • Chang, Seong Mo;Kang, In Joo;Lee, Eun Tae
    • Journal of Wetlands Research
    • /
    • v.6 no.3
    • /
    • pp.119-126
    • /
    • 2004
  • Recently, the critical rainfall duration concept has come into wide use, but clear criteria for it are not yet available. The critical rainfall duration is usually calculated considering the time of concentration, a runoff model using effective rainfall, and a unit hydrograph for the estimation of the design flood. This study derives regression equations between the critical rainfall duration and hydrologic components such as the basin area, slope, length, CN, and so on. We use a GIS tool, ArcView, for the estimation of the hydrologic components, and the HEC-1 module provided in the WMS model for the runoff computation. As the results show, the basin area, basin slope, and basin length had a great influence on the estimates of peak runoff and critical rainfall duration. We also investigated the sensitivities of the peak runoff and critical rainfall duration through a correlation analysis of the components involved in the runoff estimation.
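As a worked illustration of how basin length and slope enter a concentration-time estimate, the classic Kirpich relation can be written in a few lines (one common empirical formula, shown only for context; the study derives its own regression equations rather than using this one):

```python
def kirpich_tc(length_m, slope):
    """Kirpich time of concentration in minutes, from channel length L in
    metres and average slope S in m/m: tc = 0.0195 * L**0.77 * S**-0.385."""
    if length_m <= 0 or slope <= 0:
        raise ValueError("length and slope must be positive")
    return 0.0195 * length_m ** 0.77 * slope ** -0.385
```

A 1 km channel at a 1% slope gives a concentration time on the order of 23 minutes; the strong exponents on length and slope mirror the study's finding that these two components dominate the critical-duration regressions.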


Analysis of RTM Process Using the Extended Finite Element Method (확장 유한 요소 법을 적용한 RTM 공정 해석)

  • Jung, Yeonhee;Kim, Seung Jo;Han, Woo-Suck
    • Composites Research
    • /
    • v.26 no.6
    • /
    • pp.363-372
    • /
    • 2013
  • Numerical simulation of the Resin Transfer Molding manufacturing process is attempted by using the eXtended Finite Element Method (XFEM) combined with the level set method. XFEM makes it possible to obtain good numerical precision for the pressure near the resin flow front, where its gradient is discontinuous. The enriched shape functions of XFEM are derived using the level set values so as to correctly describe the interpolation at the resin flow front. In addition, the level set method is used to transport the resin flow front at each time step during mold filling. The level set values are calculated by an implicit characteristic Galerkin FEM. The multi-frontal solver of IPSAP is adopted to solve the system. This work is validated by comparing the obtained results with analytic solutions. Moreover, a localization method for the XFEM and level set computations is proposed to increase computing efficiency: the computation domain is reduced to a small region near the resin flow front, which strongly reduces the total computing time. The efficiency test is made with a simple channel flow model, and several application examples are analyzed to demonstrate the ability of this method.
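The level-set transport step can be illustrated in one dimension with a first-order explicit upwind update (a deliberately minimal sketch; the paper instead solves the transport with an implicit characteristic Galerkin FEM and couples it to XFEM enrichment, both omitted here):

```python
def advect_level_set(phi, speed, dx, dt, steps):
    """Transport a 1-D level-set function phi (negative = resin-filled region,
    zero crossing = flow front) at constant front speed > 0, using a
    first-order upwind scheme; requires CFL = speed*dt/dx <= 1."""
    c = speed * dt / dx
    phi = list(phi)
    for _ in range(steps):
        new = phi[:]
        for i in range(1, len(phi)):
            new[i] = phi[i] - c * (phi[i] - phi[i - 1])   # upwind difference
        phi = new                                          # inflow value phi[0] held
    return phi

def front_position(phi, dx):
    """Locate the zero crossing of phi by linear interpolation."""
    for i in range(len(phi) - 1):
        if phi[i] <= 0.0 <= phi[i + 1] or phi[i] >= 0.0 >= phi[i + 1]:
            t = phi[i] / (phi[i] - phi[i + 1])
            return (i + t) * dx
    return None
```

Because only the cells near the zero crossing matter for locating the front, restricting the update to a narrow band around it — the 1-D analogue of the localization the paper proposes — cuts the work per time step roughly in proportion to the band width.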