• Title/Summary/Keyword: Optimal Algorithm

Search Results: 6,866

Development of a Freeway Travel Time Forecasting Model for Long Distance Section with Due Regard to Time-lag (시간처짐현상을 고려한 장거리구간 통행시간 예측 모형 개발)

  • 이의은;김정현
    • Journal of Korean Society of Transportation
    • /
    • v.20 no.4
    • /
    • pp.51-61
    • /
    • 2002
  • In this paper, we develop a travel time forecasting model for a multi-section freeway that takes drivers' behavior into account. The forecasted travel times currently furnished, being based on expected travel time data and prior experiments, cannot reflect the time-lag phenomenon, especially for long-distance trips, so drivers no longer trust the forecasts; this is why the effects of ATIS (Advanced Traveler Information System) are reduced. Therefore, to forecast multi-section freeway travel time while reflecting the time-lag phenomenon and tollgate delay, we used traffic volume data and TCS data collected by the Korea Highway Corporation. Data including unusual conditions were retained so that the model can be applied to a real system. The forecasting model is a feed-forward network with three input units and two output units, trained by back-propagation. The optimal alternative was chosen from twelve alternatives composed of different numbers of hidden-layer units and training iterations, which affect training speed and forecasting capability. To evaluate the forecasting capability of the developed ANN model, it was compared with the algorithm currently used as the information source for freeway travel time. In the comparison with the reference model, MSE, MARE, MAE and a T-test were computed; the artificial neural network model showed superior forecasting capability on the comparison indices. Moreover, the results reflect the particular data structure used in this experiment.
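The network described in the abstract can be sketched in pure Python. The 3-input / 2-output shape follows the abstract; the hidden-layer size, sigmoid activation, learning rate, and the toy training sample are illustrative assumptions, not the paper's configuration.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyNet:
    """Feed-forward net (3 inputs -> hidden layer -> 2 outputs) trained by
    back-propagation on a squared-error loss. Purely a sketch of the model
    class the abstract names."""
    def __init__(self, n_hidden=4, lr=0.5):
        self.lr = lr
        self.w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(n_hidden)]
        self.b1 = [0.0] * n_hidden
        self.w2 = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(2)]
        self.b2 = [0.0] * 2

    def forward(self, x):
        self.h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
                  for ws, b in zip(self.w1, self.b1)]
        self.o = [sigmoid(sum(w * hi for w, hi in zip(ws, self.h)) + b)
                  for ws, b in zip(self.w2, self.b2)]
        return self.o

    def train_step(self, x, target):
        o = self.forward(x)
        # Output-layer deltas: error times sigmoid derivative o*(1-o)
        do = [(o[k] - target[k]) * o[k] * (1 - o[k]) for k in range(2)]
        # Hidden-layer deltas back-propagated through the output weights
        dh = [sum(do[k] * self.w2[k][j] for k in range(2)) * self.h[j] * (1 - self.h[j])
              for j in range(len(self.h))]
        for k in range(2):
            for j in range(len(self.h)):
                self.w2[k][j] -= self.lr * do[k] * self.h[j]
            self.b2[k] -= self.lr * do[k]
        for j in range(len(self.h)):
            for i in range(3):
                self.w1[j][i] -= self.lr * dh[j] * x[i]
            self.b1[j] -= self.lr * dh[j]
        return sum((o[k] - target[k]) ** 2 for k in range(2))

# Hypothetical sample: inputs and two section travel times, scaled to [0, 1]
sample = ([0.4, 0.1, 0.7], [0.3, 0.6])
net = TinyNet()
losses = [net.train_step(*sample) for _ in range(500)]
```

The paper's alternatives would vary `n_hidden` and the number of training iterations, then compare the resulting forecast errors.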

Isolated Word Recognition Using k-clustering Subspace Method and Discriminant Common Vector (k-clustering 부공간 기법과 판별 공통벡터를 이용한 고립단어 인식)

  • Nam, Myung-Woo
    • Journal of the Institute of Electronics Engineers of Korea TE
    • /
    • v.42 no.1
    • /
    • pp.13-20
    • /
    • 2005
  • In this paper, Korean isolated words are recognized using the common vector extraction method (CVEM) suggested by M. Bilginer et al. CVEM easily extracts the common properties of training voice signals, requires no complex calculation, and shows high recognition accuracy. However, CVEM has two problems: it cannot be used with many training voices, and the extracted common vectors carry no discriminant information. To obtain optimal common vectors for a given voice class, various voices should be used for training, but CVEM's limit on the number of training voices makes consistently high recognition accuracy impossible, and the absence of discriminant information among common vectors can be the source of critical errors. To solve these problems and improve the recognition rate, a k-clustering subspace method and DCVEM are proposed, and various experiments using a voice signal database made by ETRI were carried out to verify the validity of the proposed methods. The experimental results show improved performance, and the proposed methods solve all of CVEM's problems without computational difficulty.
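The common-vector idea behind CVEM can be sketched as follows: subtract from one training sample its projection onto the span of the difference vectors, leaving the part all samples share. The toy 4-dimensional vectors are illustrative; real use would start from cepstral feature vectors.

```python
import math

def dot(u, v): return sum(a * b for a, b in zip(u, v))
def sub(u, v): return [a - b for a, b in zip(u, v)]
def scale(u, c): return [a * c for a in u]

def common_vector(samples):
    """Sketch of the common-vector step: project the first sample onto the
    orthogonal complement of the difference subspace span{x_i - x_0}.
    The result is the same whichever sample is projected."""
    x0 = samples[0]
    diffs = [sub(x, x0) for x in samples[1:]]
    # Gram-Schmidt orthonormal basis of the difference subspace
    basis = []
    for d in diffs:
        for b in basis:
            d = sub(d, scale(b, dot(d, b)))
        n = math.sqrt(dot(d, d))
        if n > 1e-12:
            basis.append(scale(d, 1.0 / n))
    v = x0
    for b in basis:
        v = sub(v, scale(b, dot(v, b)))
    return v

# Toy "voice class" of three feature vectors
samples = [[1.0, 2.0, 3.0, 4.0],
           [2.0, 2.0, 3.0, 5.0],
           [1.0, 3.0, 3.0, 4.0]]
v = common_vector(samples)
```

The k-clustering extension in the paper would, roughly, split a large training set into k clusters and extract one common vector per cluster; that step is not reproduced here.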

A Study on Reliability Based Design Criteria for Reinforced Concrete Bridge Superstructures (철근(鐵筋)콘크리트 도로교(道路橋) 상부구조(上部構造) 신뢰성(信賴性) 설계규준(設計規準)에 관한 연구(研究))

  • Cho, Hyo Nam
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.2 no.3
    • /
    • pp.87-99
    • /
    • 1982
  • This study proposes reliability-based design criteria for the R.C. superstructures of highway bridges. Uncertainties associated with the resistance of T or rectangular sections are investigated, and a set of appropriate uncertainties associated with the bridge dead and traffic live loads is proposed, reflecting the domestic level of practice. The major second-moment reliability analysis and design theories, including both Cornell's MFOSM (Mean First-Order Second-Moment) method and Lind-Hasofer's AFOSM (Advanced First-Order Second-Moment) method, are summarized and compared, and it is found that Ellingwood's algorithm and an approximate log-normal type reliability formula are well suited for the proposed reliability study. A target reliability index (${\beta}_0=3.5$) is selected as an optimal value based on calibration with the current R.C. bridge design safety provisions. A set of load and resistance factors corresponding to the target reliability is derived from the proposed uncertainties and methods. Furthermore, a set of nominal safety factors and allowable stresses is proposed for the current W.S.D. design provisions. It is asserted that the proposed L.R.F.D. reliability-based design criteria for R.C. highway bridges should be incorporated into the current R.C. bridge design codes as a design provision corresponding to the U.S.D. provisions of the current R.C. design code.
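For reference, the approximate log-normal second-moment format named in the abstract takes, in its standard form (a sketch using mean resistance $\bar{R}$, mean load effect $\bar{Q}$, and coefficients of variation $V_R$, $V_Q$ — the abstract does not give the paper's exact expression), $\beta = \ln(\bar{R}/\bar{Q}) / \sqrt{V_R^2 + V_Q^2}$. With illustrative values $V_R = 0.12$ and $V_Q = 0.25$, meeting the target ${\beta}_0 = 3.5$ would require a central safety factor of $\bar{R}/\bar{Q} = e^{3.5\sqrt{0.12^2 + 0.25^2}} \approx 2.64$.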


Magnetic Resonance Elastography (자기 공명 탄성법)

  • Kim, Dong-Hyun;Yang, Jae-Won;Kim, Myeong-Jin
    • Investigative Magnetic Resonance Imaging
    • /
    • v.11 no.1
    • /
    • pp.10-19
    • /
    • 2007
  • Conventional MRI methods using T1-, T2-, diffusion-, and perfusion-weighting and functional imaging rely on characterizing the physical and functional properties of tissue. In this review, we introduce an imaging modality based on measuring the mechanical properties of soft tissue, namely magnetic resonance elastography (MRE). The use of palpation to identify the stiffness of tissue remains a fundamental diagnostic tool; MRE can quantify tissue stiffness, thereby providing an objective means to measure mechanical properties. To make MRE successful in a clinical setting, hardware and software techniques in the areas of the transducer, pulse sequence, and image processing algorithms need to be developed. The transducer, a mechanical vibrator, is the core of an MRE application, making waves propagate in vivo. For this reason, the frame of the human body, pressure and friction at the interface, and the high magnetic field of the MRI system need to be taken into account when designing a transducer. Given that the waves propagate through the human body effectively, developing an appropriate pulse sequence is another important issue in obtaining an optimal image. In this review paper, we introduce the technical aspects needed for MRE experiments and several applications of this new field.


A Classified Space VQ Design for Text-Independent Speaker Recognition (문맥 독립 화자인식을 위한 공간 분할 벡터 양자기 설계)

  • Lim, Dong-Chul;Lee, Hanig-Sei
    • The KIPS Transactions:PartB
    • /
    • v.10B no.6
    • /
    • pp.673-680
    • /
    • 2003
  • In this paper, we study an enhanced VQ (Vector Quantization) design for text-independent speaker recognition. Concretely, we present a non-iterative method for building a vector quantization codebook; because learning is non-iterative, the computational complexity is dramatically reduced. The proposed Classified Space VQ (CSVQ) design method for text-independent speaker recognition generalizes the semi-non-iterative VQ design method for text-dependent speaker recognition, in contrast to the existing design method, which runs an iterative learning algorithm for every training speaker. The characteristics of the CSVQ design are as follows. First, the method performs non-iterative learning using a Classified Space Codebook (CSC). Second, each speaker's quantization regions coincide with the quantization regions of the Classified Space Codebook, and each speaker's quantization point is the optimal point for that speaker's statistical distribution within a region of the Classified Space Codebook. Third, the Classified Space Codebook is constructed through a sample vector formation method (CSVQ1, 2) and a hyper-lattice formation method (CSVQ3). In the numerical experiments, we use 12th-order mel-cepstrum feature vectors from 10 speakers and compare the proposed method with the existing one, varying the codebook size from 16 to 128 for each Classified Space Codebook. The recognition rate of the proposed method is 100% for CSVQ1 and 2, equal to that of the existing method. Therefore the proposed CSVQ design method is a new alternative that reduces computational complexity while maintaining the recognition rate, and CSVQ with a CSC can be applied to general-purpose recognition.
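The basic VQ step the paper builds on, quantizing a feature vector to its nearest codeword, can be sketched as below; the CSVQ-specific codebook construction (classified space codebook, hyper-lattice formation) is not reproduced here.

```python
def quantize(x, codebook):
    """Return the index of the nearest codeword (squared Euclidean distance)
    and the resulting distortion. This is plain nearest-neighbor VQ, the
    common substrate of the methods compared in the paper."""
    best, best_d = None, float("inf")
    for idx, c in enumerate(codebook):
        d = sum((a - b) ** 2 for a, b in zip(x, c))
        if d < best_d:
            best, best_d = idx, d
    return best, best_d

# Hypothetical 2-D codebook and feature vector, purely for illustration
idx, distortion = quantize([0.1, 0.1], [[0.0, 0.0], [1.0, 1.0]])
```

A real codebook here would have 16 to 128 codewords of 12-dimensional mel-cepstrum features, per the abstract's experiment.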

Fast Generation of Elliptic Curve Base Points Using Efficient Exponentiation over $GF(p^m)$ (효율적인 $GF(p^m)$ 멱승 연산을 이용한 타원곡선 기저점의 고속 생성)

  • Lee, Mun-Kyu
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.3
    • /
    • pp.93-100
    • /
    • 2007
  • Since Koblitz and Miller suggested the use of elliptic curves in cryptography, there has been an extensive literature on elliptic curve cryptosystems (ECC). The use of ECC is based on the observation that the points on an elliptic curve form an additive group under the point addition operation. To realize secure cryptosystems using these groups, it is very important to find an elliptic curve whose group order is divisible by a large prime, and also to find a base point whose order equals this prime. While there have been many dramatic improvements in finding an elliptic curve and computing its group order efficiently, there are few results on finding an adequate base point for a given curve. In this paper, we propose an efficient method to find a random base point on an elliptic curve defined over $GF(p^m)$. We first show that the critical operation in finding a base point is exponentiation. Then we present efficient algorithms to accelerate exponentiation in $GF(p^m)$. Finally, we implement our algorithms and give experimental results on various practical elliptic curves, which show that the new algorithms make the process of searching for a base point 1.62-6.55 times faster than a searching algorithm based on binary exponentiation.
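The baseline the paper measures against, binary (square-and-multiply) exponentiation, can be sketched as follows. For simplicity this works over the prime field $GF(p)$, i.e. the $m = 1$ case; the paper's contribution is faster exponentiation in the extension field $GF(p^m)$, which is not reproduced here.

```python
def binary_exp(base, exp, mod):
    """Left-to-right square-and-multiply: scan the exponent's bits from the
    most significant end, squaring at every bit and multiplying by the base
    when the bit is 1."""
    result = 1
    for bit in bin(exp)[2:]:
        result = (result * result) % mod
        if bit == "1":
            result = (result * base) % mod
    return result
```

This does one squaring per exponent bit plus one multiplication per set bit, which is why reducing the cost of the underlying field multiplications speeds up the whole base-point search.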

Development of a Flood Disaster Evacuation Map Using Two-dimensional Flood Analysis and BIM Technology (2차원 침수해석과 BIM 기술을 활용한 홍수재난 대피지도 작성)

  • Jeong, Changsam
    • Journal of Korean Society of Disaster and Security
    • /
    • v.13 no.2
    • /
    • pp.53-63
    • /
    • 2020
  • In this study, the two-dimensional flow analysis model Hydro_AS-2D was used to simulate flooding in Seongsan-gu and Uichang-gu in Changwon in the event of rising sea levels and extreme flooding; the results were expressed on three-dimensional topography and the optimal evacuation path was derived using BIM technology. Climate change significantly affects two factors in terms of flood damage: rising sea levels and increasingly extreme rainfall. A rise in sea level not only floods coastal areas directly, but also raises the base flood level of a stream, raising the flood level throughout the stream. In this study, sea level rise due to climate change, sea level rise due to a typhoon storm surge, and extreme typhoon rainfall were set as simulation conditions. Three-dimensional spatial information for the entire basin was constructed using topographical data for Changwon and river cross-section data from the basic river refurbishment plan. Using BIM technology, the target area was built as a three-dimensional urban information model carrying information such as building heights and shelter locations on top of the three-dimensional topography; the numerical model results were expressed on this model and used in analysis for evacuation planning. In the event of flooding, the escape route is determined by an algorithm that sets the path to the shelter according to changes in the inundation range over time, and the resulting path is expressed on intuitive three-dimensional spatial information and provided to the user.
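The route-setting step can be illustrated with a breadth-first search on a grid where flooded cells are impassable. This is a hypothetical stand-in for the paper's algorithm: the actual system works on a 3-D urban model and re-routes as the inundation range changes over time.

```python
from collections import deque

def evacuation_route(grid, start, shelter):
    """Shortest 4-connected path from start to shelter, avoiding flooded
    cells (marked 1). Returns the list of cells, or None if the shelter is
    unreachable. Illustrative sketch only."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        r, c = q.popleft()
        if (r, c) == shelter:
            path, node = [], shelter
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                q.append((nr, nc))
    return None

# 1 = inundated cell; the direct route south is blocked, forcing a detour
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = evacuation_route(grid, (0, 0), (2, 0))
```

Re-running the search against each time step's inundation map would approximate the time-varying routing the abstract describes.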

Development of a Control Law to Pneumatic Control for an Anti-G Suit (Anti-G 슈트 공압 제어를 위한 제어법칙 개발)

  • Kim, Chong-sup;Hwang, Byung-moon
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.43 no.6
    • /
    • pp.548-556
    • /
    • 2015
  • Highly maneuverable fighter aircraft such as the F-22, F-16 and F-15 have high maneuverability to maximize combat performance, but these high-maneuver characteristics can degrade the pilot's mission efficiency through increased fatigue from exposure to high gravity, and in the worst case the pilot could experience GLOC (Gravity-induced Loss Of Consciousness). Advanced aerospace companies have applied various technologies to improve the pilot's tolerance of gravitational acceleration, in order to keep the pilot from losing consciousness. In particular, the Anti-G Suit (AGS), which protects the pilot against high gravity in flight, can improve the mission success rate by decreasing pilot fatigue during combat maneuvers as well as prevent GLOC. In this paper, a control algorithm is developed and verified to provide optimal air pressure to the AGS as gravity increases during high-performance maneuvers. This result is expected to contribute, as a key technology, to the KF-X (Korean Fighter eXperimental) project in the near future.
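A pneumatic control law of this kind is typically a pressure schedule over load factor: no inflation below an onset G level, then pressure rising with G up to a relief-valve limit. The sketch below uses that generic shape with illustrative numbers; the onset, slope and limit are assumptions, not the paper's control law.

```python
def ags_pressure(gz, onset_g=2.0, slope_psi_per_g=1.5, max_psi=10.0):
    """Illustrative anti-G suit pressure schedule: zero below the onset G,
    linear in (gz - onset_g) above it, clamped at a maximum suit pressure.
    All three parameters are hypothetical example values."""
    if gz <= onset_g:
        return 0.0
    return min((gz - onset_g) * slope_psi_per_g, max_psi)
```

In a real system this command would drive a pneumatic valve, with the schedule tuned and verified against pilot physiology, which is the subject of the paper.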

Rapid Optimization of Multiple Isocenters Using Computer Search for Linear Accelerator-based Stereotactic Radiosurgery (Multiple isocenter를 이용한 뇌정위적 방사선 수술시 컴퓨터 자동 추적 방법에 의한 고속의 선량 최적화)

  • Suh Tae-suk;Park Charn Il;Ha Sung Whan;Yoon Sei Chul;Kim Moon Chan;Bahk Yong Whee;Shinn Kyung Sub
    • Radiation Oncology Journal
    • /
    • v.12 no.1
    • /
    • pp.109-115
    • /
    • 1994
  • The purpose of this paper is to develop an efficient method for quickly determining multiple-isocenter plans that provide an optimal dose distribution in stereotactic radiosurgery. A spherical dose model was developed through a fit to exact dose data calculated in an 18 cm diameter spherical head phantom. It computes dose quickly for each spherical component and is useful for estimating the dose distribution for multiple isocenters. An automatic computer search algorithm was developed using the relationship between isocenter moves and changes in dose shape, and combined with the spherical dose model to determine isocenter separations and collimator sizes quickly and automatically. The spherical dose model shows isodose distributions comparable to the exact dose data and permits rapid calculation of 3-D isodoses. The computer search can provide reasonable isocenter settings more quickly than trial-and-error planning, while producing a steep dose gradient around the target boundary. The spherical dose model can thus be used for quick determination of multiple-isocenter plans with the automatic computer search, and our guideline is useful for determining initial multiple-isocenter plans.
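The multiple-isocenter idea, summing each isocenter's spherically symmetric dose contribution at a point, can be sketched as below. The linear fall-off profile and unit dose scale are purely illustrative; the paper's model is fitted to exact phantom dose data.

```python
import math

def dose_at(point, isocenters, radius):
    """Toy spherical dose model: each isocenter delivers full relative dose
    inside a sphere of the collimator radius and falls off linearly outside,
    reaching zero at twice the radius. Contributions from all isocenters sum.
    Illustrative profile only, not the paper's fitted model."""
    total = 0.0
    for iso in isocenters:
        d = math.dist(point, iso)
        if d <= radius:
            total += 1.0
        else:
            total += max(0.0, 1.0 - (d - radius) / radius)
    return total
```

A computer search in this spirit would move the isocenter coordinates and collimator radii to steepen the dose gradient at the target boundary, re-evaluating `dose_at` on a grid at each step.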


Solution Algorithms for Logit Stochastic User Equilibrium Assignment Model (확률적 로짓 통행배정모형의 해석 알고리듬)

  • 임용택
    • Journal of Korean Society of Transportation
    • /
    • v.21 no.2
    • /
    • pp.95-105
    • /
    • 2003
  • Because the basic assumptions of deterministic user equilibrium assignment, that all network users have perfect information about network conditions and choose their routes without error, are known to be unrealistic, several stochastic assignment models have been proposed to relax them. However, such stochastic assignment models are not easy to solve because of the probability distributions they assume. Also, in order to avoid full path enumeration they restrict the feasible path set, so they cannot precisely describe travel behavior when travel costs vary during a network loading step. Another problem of stochastic assignment models stems from their use of heuristic approaches to determine the optimal step size, due to the difficulty of evaluating their objective functions. This paper presents a logit-based stochastic assignment model and a solution algorithm that copes with the problems above, and also provides the stochastic user equilibrium condition of the model. The model is path-based, with all feasible paths enumerated in advance. This kind of method demands more computation than a link-based one, but it has advantages: it describes travel behavior more exactly, and it does not require as much computing time as one might expect, because the path set is calculated only once, in the initial step. Two numerical examples are given to assess the model and to compare it with other methods.
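The loading rule behind a logit-based model of this kind is the multinomial logit choice probability over enumerated paths, $P_k = e^{-\theta c_k} / \sum_j e^{-\theta c_j}$. A minimal sketch, with the dispersion parameter $\theta$ as an assumed value:

```python
import math

def logit_path_probs(costs, theta=0.5):
    """Multinomial logit route-choice probabilities over a set of path costs.
    Larger theta concentrates flow on cheaper paths; theta -> 0 spreads flow
    uniformly. The value 0.5 here is illustrative."""
    weights = [math.exp(-theta * c) for c in costs]
    s = sum(weights)
    return [w / s for w in weights]

# Three enumerated paths with hypothetical generalized costs
probs = logit_path_probs([10.0, 12.0, 15.0])
```

In an equilibrium algorithm these probabilities would split the origin-destination demand over the pre-enumerated path set at each iteration, with path costs updated from the resulting link flows.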