• Title/Summary/Keyword: optimization of experiments


Development of a Daily Pattern Clustering Algorithm using Historical Profiles (과거이력자료를 활용한 요일별 패턴분류 알고리즘 개발)

  • Cho, Jun-Han;Kim, Bo-Sung;Kim, Seong-Ho;Kang, Weon-Eui
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.10 no.4
    • /
    • pp.11-23
    • /
    • 2011
  • The objective of this paper is to develop a daily pattern clustering algorithm using historical traffic data that works reliably under the various traffic flow conditions of urban streets. The developed algorithm consists of two major parts: a macroscopic and a microscopic analysis. First, the macroscopic analysis process deduces daily peak/non-peak hours and analysis time zones of interest based on the speed time series. The microscopic analysis process then clusters daily patterns by comparing the similarity between individual days, or between an individual day and a group. The algorithm developed for the microscopic analysis process is called the "Two-step speed clustering (TSC) algorithm". The TSC algorithm improves the accuracy of daily pattern clustering based on time-series speed variation data. The algorithm was tested with point-detector data collected in Ansan city and verified through comparison with clustering techniques in SPSS. The results of this study are expected to contribute to pattern-based information processing, operations management of daily recurrent congestion, and improvement of daily signal optimization based on TOD plans.
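The microscopic step of comparing an individual day against existing groups can be sketched as follows. This is only a minimal illustration of that idea, not the actual TSC algorithm; the RMS-difference criterion, the threshold, and the speed values are assumptions.

```python
import numpy as np

def cluster_daily_profiles(profiles, threshold=5.0):
    """Group daily speed profiles (days x time intervals) by similarity.

    A day joins an existing group if the RMS difference between its speed
    time series and the group's mean profile is below `threshold` (km/h);
    otherwise it starts a new group.  This mimics the idea of comparing an
    individual day against existing groups, not the published TSC algorithm.
    """
    groups = []      # each group: list of day indices
    centroids = []   # running mean profile per group
    for i, p in enumerate(profiles):
        best, best_d = None, threshold
        for g, c in enumerate(centroids):
            d = np.sqrt(np.mean((p - c) ** 2))   # RMS speed difference
            if d < best_d:
                best, best_d = g, d
        if best is None:
            groups.append([i])
            centroids.append(p.astype(float))
        else:
            groups[best].append(i)
            # update the centroid as a running mean over the group's days
            centroids[best] += (p - centroids[best]) / len(groups[best])
    return groups

# two weekday-like profiles (congested bins) and one weekend-like profile
weekday1 = np.array([60, 30, 55, 25])   # speeds (km/h) per time bin
weekday2 = np.array([58, 32, 54, 27])
weekend  = np.array([70, 68, 69, 71])
print(cluster_daily_profiles(np.array([weekday1, weekday2, weekend])))
```

The two weekday profiles fall into one group and the weekend profile into another, which is the kind of daily-pattern separation the abstract describes.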

Inverse Estimation of Geoacoustic Parameters in Shallow Water Using Light Bulb Sound Source (천해환경에서 전구음원을 이용한 지음향인자의 역추정)

  • 한주영;이성욱;나정열;김성일
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.1
    • /
    • pp.8-16
    • /
    • 2004
  • An inversion method is presented for determining the compressional wave speed, compressional wave attenuation, sediment layer thickness, and density as functions of depth for a horizontally stratified ocean bottom. An experiment for estimating these properties was conducted in the shallow water of the South Sea of Korea. In the experiment, a light bulb implosion and the propagating sound were measured using a VLA (vertical line array). To estimate the geoacoustic properties, coherent broadband matched-field processing combined with a genetic algorithm was employed. When a time-dependent signal is very short, Fourier transform results are not accurate, since the frequency components cannot be located in time and the windowed Fourier transform is limited by the window length. However, this is possible with the wavelet transform, a transform that yields a time-frequency representation of a signal. In this study, the wavelet transform is used to identify and extract the acoustic components from multipath time series. The inversion is formulated as an optimization problem that maximizes a cost function defined as the normalized correlation between the measured and modeled signals in the wavelet-transform coefficient vector. The experiments and procedures for deploying the light bulbs and the coherent broadband inversion method are described, and the estimated geoacoustic profile in the vicinity of the VLA site is presented.
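The optimization loop the abstract describes, a genetic algorithm maximizing a correlation-style cost over geoacoustic parameters, can be sketched as below. The toy cost function, parameter bounds, and GA settings are assumptions for illustration, not the authors' inversion code.

```python
import random

def genetic_search(cost, bounds, pop_size=30, generations=60, mut=0.1, seed=0):
    """Minimal real-coded genetic algorithm, maximizing `cost`.

    `bounds` is a list of (lo, hi) search ranges, one per parameter
    (e.g. sediment sound speed, thickness).  Truncation selection keeps
    the top half, arithmetic crossover averages two parents, and uniform
    mutation occasionally resamples a gene from its full range.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=cost, reverse=True)
        parents = scored[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # arithmetic crossover
            for j, (lo, hi) in enumerate(bounds):
                if rng.random() < mut:                    # uniform mutation
                    child[j] = rng.uniform(lo, hi)
            children.append(child)
        pop = parents + children
    return max(pop, key=cost)

# toy cost peaking at sound speed 1600 m/s, sediment thickness 20 m
best = genetic_search(lambda p: -((p[0] - 1600) ** 2 + 40 * (p[1] - 20) ** 2),
                      bounds=[(1450, 1800), (1, 50)])
print(best)  # parameters maximizing the toy cost
```

In the paper the cost would instead be the normalized correlation of measured and modeled signals in the wavelet-coefficient vector; the search loop itself is the same idea.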

Greedy Heuristic Algorithm for the Optimal Location Allocation of Pickup Points: Application to the Metropolitan Seoul Subway System (Pickup Point 최적입지선정을 위한 Greedy Heuristic Algorithm 개발 및 적용: 서울 대도시권 지하철 시스템을 대상으로)

  • Park, Jong-Soo;Lee, Keum-Sook
    • Journal of the Economic Geographical Society of Korea
    • /
    • v.14 no.2
    • /
    • pp.116-128
    • /
    • 2011
  • Some subway passengers may want to pick up fresh vegetables, purchased over the internet, at a service facility within a station of the Metropolitan Seoul subway system on the way home, which raises the questions of which stations should host the service facilities and how many passengers can use them. This problem is well known as the pickup problem, and solving it on a traffic network requires the traffic flows from origin stations to destination stations. Since the flows of subway passengers can be derived from the transaction database of the Metropolitan Seoul smart card system, the pickup problem in the Metropolitan Seoul subway system is to select the subway stations for service facilities such that the captured passenger flows are maximized. In this paper, we formulate a model of the pickup problem on the Metropolitan Seoul subway system with subway passenger flows, and propose a fast heuristic algorithm that, in each step, selects the pickup station capturing the most passenger flows from an origin-destination matrix representing those flows. We apply the heuristic algorithm to a large traffic network, the Metropolitan Seoul subway system, with about 400 subway stations and five million passenger transactions daily. We not only obtain experimental results with fast response times, but also display the top 10 pickup stations on a subway guide map. In addition, a few supplementary experiments show that the resulting solution is nearly optimal.
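The greedy selection step can be sketched on a toy origin-destination matrix. The station names and flow counts are hypothetical, and for simplicity a flow is counted as captured when either endpoint hosts a facility, which is a simplification of the path-based capture used in the paper.

```python
def greedy_pickup(od_flows, k):
    """Greedily pick k stations that capture the most passenger flow.

    `od_flows` maps (origin, destination) pairs to passenger counts; a
    flow counts as captured once either endpoint hosts a facility.  Each
    step chooses the station covering the most still-uncaptured flow,
    then removes the flows it captures -- the greedy idea of the paper,
    on a toy OD matrix rather than the smart-card data.
    """
    remaining = dict(od_flows)
    chosen = []
    for _ in range(k):
        gain = {}
        for (o, d), f in remaining.items():
            gain[o] = gain.get(o, 0) + f
            gain[d] = gain.get(d, 0) + f
        if not gain:
            break                       # every flow already captured
        best = max(gain, key=gain.get)
        chosen.append(best)
        remaining = {pair: f for pair, f in remaining.items()
                     if best not in pair}
    return chosen

flows = {("A", "B"): 500, ("A", "C"): 300, ("B", "C"): 200, ("C", "D"): 100}
print(greedy_pickup(flows, 2))  # ['A', 'C']
```

Station A captures 800 passengers in the first step; with the A flows removed, C then captures the most of what remains, illustrating why the greedy result is good but not guaranteed optimal.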


Two-phases Hybrid Approaches and Partitioning Strategy to Solve Dynamic Commercial Fleet Management Problem Using Real-time Information (실시간 정보기반 동적 화물차량 운용문제의 2단계 하이브리드 해법과 Partitioning Strategy)

  • Kim, Yong-Jin
    • Journal of Korean Society of Transportation
    • /
    • v.22 no.2 s.73
    • /
    • pp.145-154
    • /
    • 2004
  • The growing demand for customer-responsive, made-to-order manufacturing is stimulating the need for improved dynamic decision-making in commercial fleet operations. Moreover, the rapid growth of electronic commerce over the internet also requires advanced and precise real-time operation of vehicle fleets. Accompanying these demand-side pressures, the growing availability of technologies such as AVL (Automatic Vehicle Location) systems and continuous two-way communication devices is driving developments on the supply side. These technologies enable the dispatcher to identify the current locations of trucks and to communicate with drivers in real time, affording the carrier fleet dispatcher the opportunity to respond dynamically to changes in demand, driver and vehicle availability, and traffic network conditions. This research investigates key aspects of real-time dynamic routing and scheduling in fleet operation, particularly a truckload pickup-and-delivery problem under various settings in which information on stochastic demands is revealed on a continuous basis, i.e., as the scheduled routes are executed. The most promising solution strategies for this real-time problem are analyzed and integrated. Furthermore, this research develops, analyzes, and implements hybrid algorithms that combine a fast local heuristic approach with an optimization-based approach. In addition, various partitioning algorithms able to handle large fleets of vehicles are developed based on a divide-and-conquer technique. Simulation experiments are developed and conducted to evaluate the performance of these algorithms.

Adaptive Row Major Order: a Performance Optimization Method of the Transform-space View Join (적응형 행 기준 순서: 변환공간 뷰 조인의 성능 최적화 방법)

  • Lee Min-Jae;Han Wook-Shin;Whang Kyu-Young
    • Journal of KIISE:Databases
    • /
    • v.32 no.4
    • /
    • pp.345-361
    • /
    • 2005
  • A transform-space index indexes objects represented as points in the transform space. An advantage of a transform-space index is that optimizing join algorithms that use such indexes becomes relatively simple. The disadvantage, however, is that these algorithms cannot be applied to original-space indexes such as the R-tree. To overcome this disadvantage, the authors earlier proposed the transform-space view join algorithm, which joins two original-space indexes in the transform space through the notion of a transform-space view. A transform-space view is a virtual transform-space index that allows us to perform the join in the transform space using original-space indexes. In a transform-space view join, the order of accessing disk pages, for which various space-filling curves can be used, has a significant impact on join performance. In this paper, we propose a new space-filling curve called the adaptive row major order (ARM order). The ARM order adaptively controls the order of accessing pages and significantly reduces the one-pass buffer size (the minimum buffer size required to guarantee one disk access per page) and the number of disk accesses for a given buffer size. Through analysis and experiments, we verify the effectiveness of the ARM order when used with the transform-space view join. The transform-space view join with the ARM order always outperforms existing ones in both measures used: the one-pass buffer size and the number of disk accesses for a given buffer size. Compared to other conventional space-filling curves used with the transform-space view join, it reduces the one-pass buffer size by up to 21.3 times and the number of disk accesses by up to 74.6%. Compared to existing spatial join algorithms that use R-trees in the original space, it reduces the one-pass buffer size by up to 15.7 times and the number of disk accesses by up to 65.3%.
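As background on space-filling page orders, a Z-order (Morton) curve over a grid of pages can be computed as below. The ARM order itself is not reproduced here; this only illustrates the general mechanism the abstract relies on, mapping 2-D page coordinates to a 1-D access sequence.

```python
def morton_order(n):
    """Return coordinates of an n x n page grid in Z-order (Morton order).

    A space-filling curve like this decides the order in which a join
    visits disk pages, which in turn determines how large a buffer is
    needed to avoid re-reading pages.  Z-order interleaves the bits of
    the two coordinates so nearby pages tend to be visited close together.
    """
    def interleave(x, y, bits):
        z = 0
        for i in range(bits):
            z |= ((x >> i) & 1) << (2 * i + 1)   # x bits go to odd positions
            z |= ((y >> i) & 1) << (2 * i)       # y bits go to even positions
        return z

    bits = max(1, (n - 1).bit_length())
    cells = [(x, y) for x in range(n) for y in range(n)]
    return sorted(cells, key=lambda c: interleave(c[0], c[1], bits))

print(morton_order(2))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```

Plain row-major order would visit pages row by row; the paper's contribution is adapting that ordering to the data so the one-pass buffer stays small.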

Design and Environmental/Economic Performance Evaluation of Wastewater Treatment Plants Using Modeling Methodology (모델링 기법을 이용한 하수처리 공정 설계와 환경성 및 경제성 평가)

  • Kim, MinHan;Yoo, ChangKyoo
    • Korean Chemical Engineering Research
    • /
    • v.46 no.3
    • /
    • pp.610-618
    • /
    • 2008
  • It is not easy to compare treatment processes and find optimum operating conditions experimentally, owing to varying influent conditions, treatment processes, operational conditions, and the complex factors of real wastewater treatment systems; such experiments also require much time and cost. In this paper, activated sludge models are applied to four principal biological wastewater treatment processes, A2O (anaerobic/anoxic/oxic), Bardenpho (4-stage), VIP (Virginia Initiative Plant), and UCT (University of Cape Town), and used to compare the environmental and economic performance of the four processes. To evaluate each process, a new assessment index is proposed that can compare treatment performance across processes while considering both environmental and economic cost. The results show that the proposed index can be used to select the optimum process among candidate treatment processes, as well as to find the optimum condition within each process. It can also track the change of the economic and environmental index under changes of influent flow rate and aerobic reactor size, and predict the optimum index under various operating conditions.

Performance Analysis of Cache and Internal Memory of a High Performance DSP for an Optimal Implementation of Motion Picture Encoder (고성능 DSP에서 동영상 인코더의 최적화 구현을 위한 캐쉬 및 내부 메모리 성능 분석)

  • Lim, Se-Hun;Chung, Sun-Tae
    • The Journal of the Korea Contents Association
    • /
    • v.8 no.5
    • /
    • pp.72-81
    • /
    • 2008
  • High-performance DSPs usually support both cache and internal memory. For an optimal implementation of a multimedia streaming application on such a DSP, one needs to utilize the cache and internal memory efficiently. In this paper, we analyze the cache and internal-memory configuration and placement necessary for an optimal implementation of multimedia streaming applications, such as a motion picture encoder, on the high-performance TMS320C6000-series DSP, and propose strategies to improve cache and internal-memory placement performance. The results of analysis and experiments verify that a 2-way L2 cache configuration, with the remaining memory configured as internal memory, shows relatively good performance. They also show that the L1P cache hit rate is enhanced when frequently called routines, and routines having caller-callee relationships with them, are placed contiguously in internal memory, and that the L1D cache hit rate is enhanced by a simple change of data size. These results are expected to contribute to the optimal implementation of multimedia streaming applications on high-performance DSPs.

Single-Channel Seismic Data Processing via Singular Spectrum Analysis (특이 스펙트럼 분석 기반 단일 채널 탄성파 자료처리 연구)

  • Woodon Jeong;Chanhee Lee;Seung-Goo Kang
    • Geophysics and Geophysical Exploration
    • /
    • v.27 no.2
    • /
    • pp.91-107
    • /
    • 2024
  • Single-channel seismic exploration has proven effective in delineating subsurface geological structures using small-scale survey systems. Seismic data acquired with zero- or near-offset methods directly capture subsurface features along the vertical axis, facilitating the construction of corresponding seismic sections. However, substantial noise in single-channel seismic data hampers precise interpretation because of the low signal-to-noise ratio. This study introduces a novel approach that integrates noise reduction and signal enhancement via matrix rank optimization to address this issue. Unlike conventional rank-reduction methods, which retain selected singular values to mitigate random noise, our method optimizes the entire singular-value spectrum, effectively tackling both the random and erratic noise commonly found in low signal-to-noise environments. Additionally, to enhance the horizontal continuity of seismic events and mitigate signal loss during noise reduction, we introduce an adaptive weighting factor computed from the eigenimage of the seismic section. To assess the robustness of the proposed method, we conducted numerical experiments using single-channel Sparker seismic data from the Chukchi Plateau in the Arctic Ocean. The results demonstrate significantly improved signal-to-noise ratios in the seismic sections with minimal signal loss. These advancements hold promise for enhancing single-channel, high-resolution seismic surveys and for aiding the identification of marine development sites and submarine geological hazards in domestic coastal areas.
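The conventional rank-reduction SSA that the paper improves upon can be sketched as follows: Hankel embedding, truncated SVD, and anti-diagonal averaging back to a trace. The window length, rank, and synthetic trace are illustrative choices; the paper's full singular-spectrum optimization and adaptive weighting are not reproduced.

```python
import numpy as np

def ssa_denoise(signal, window, rank):
    """Denoise a 1-D trace by classical singular spectrum analysis.

    Embed the trace in a Hankel matrix, keep the `rank` largest singular
    values, and average the anti-diagonals back into a trace.  This is
    the conventional rank-reduction baseline the paper contrasts with
    its full singular-value-spectrum optimization.
    """
    n = len(signal)
    k = n - window + 1
    # Hankel (trajectory) matrix: column j is signal[j : j + window]
    H = np.column_stack([signal[j:j + window] for j in range(k)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]      # rank-r approximation
    out = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):                                 # anti-diagonal averaging
        out[j:j + window] += H_low[:, j]
        counts[j:j + window] += 1
    return out / counts

t = np.linspace(0, 1, 200)
clean = np.sin(2 * np.pi * 5 * t)                      # synthetic reflection-like signal
noisy = clean + 0.5 * np.random.default_rng(0).standard_normal(200)
denoised = ssa_denoise(noisy, window=40, rank=2)
print(np.std(noisy - clean), np.std(denoised - clean))
```

A single sinusoid occupies rank 2 in the Hankel embedding, so keeping two singular values recovers it well; real seismic sections need the more careful treatment of the singular-value spectrum the paper proposes.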

Medium Optimization for Fibrinolytic Enzyme Production by Bacillus subtilis MG410 Isolated (Bacillus subtilis MG410에 의한 Fibrin 분해효소 생산배지의 최적화)

  • Lee Ju-Youn;Paek Nam-Soo;Kim Young-Man
    • The Korean Journal of Food And Nutrition
    • /
    • v.18 no.1
    • /
    • pp.39-47
    • /
    • 2005
  • Bacillus subtilis MG410, a strain with excellent fibrinolytic enzyme activity, was isolated from Chungkookjang. To increase production of the fibrinolytic enzyme from Bacillus subtilis MG410, the effects of various carbon sources, nitrogen sources, inorganic sources, and the initial pH of the medium were investigated. The most effective carbon and nitrogen sources were found to be 0.5% (w/v) cellobiose and 2% (w/v) soybean meal, respectively. None of the inorganic sources examined had any detectable stimulating effect on fibrinolytic enzyme production except Na₂HPO₄·12H₂O. The optimum initial pH for fibrinolytic enzyme production ranged from 5 to 6, and the most effective agitation speed was 150 rpm. In jar fermentor experiments under the optimal culture conditions, fibrinolytic enzyme activity reached about 5.050 units after 48 hours.

Subsequence Matching Under Time Warping in Time-Series Databases : Observation, Optimization, and Performance Results (시계열 데이터베이스에서 타임 워핑 하의 서브시퀀스 매칭 : 관찰, 최적화, 성능 결과)

  • Kim Man-Soon;Kim Sang-Wook
    • The KIPS Transactions:PartD
    • /
    • v.11D no.7 s.96
    • /
    • pp.1385-1398
    • /
    • 2004
  • This paper discusses an effective processing of subsequence matching under time warping in time-series databases. Time warping is a trans-formation that enables finding of sequences with similar patterns even when they are of different lengths. Through a preliminary experiment, we first point out that the performance bottleneck of Naive-Scan, a basic method for processing of subsequence matching under time warping, is on the CPU processing step. Then, we propose a novel method that optimizes the CPU processing step of Naive-Scan. The proposed method maximizes the CPU performance by eliminating all the redundant calculations occurring in computing the time warping distance between the query sequence and data subsequences. We formally prove the proposed method does not incur false dismissals and also is the optimal one for processing Naive-Scan. Also, we discuss the we discuss to apply the proposed method to the post-processing step of LB-Scan and ST-Filter, the previous methods for processing of subsequence matching under time warping. Then, we quantitatively verify the performance improvement ef-fects obtained by the proposed method via extensive experiments. The result shows that the performance of all the three previous methods im-proves by employing the proposed method. Especially, Naive-Scan, which is known to show the worst performance, performs much better than LB-Scan as well as ST-Filter in all cases when it employs the proposed method for CPU processing. This result is so meaningful in that the performance inversion among Nive- Scan, LB-Scan, and ST-Filter has occurred by optimizing the CPU processing step, which is their perform-ance bottleneck.