• Title/Summary/Keyword: Fast computation

Search results: 748

A Study on Dynamic Behaviour of Cable-Stayed Bridge by Vehicle Load (차량하중에 의한 사장교의 동적거동에 관한 연구)

  • Park, Cheun Hyek;Han, Jai Ik
    • KSCE Journal of Civil and Environmental Engineering Research / v.14 no.6 / pp.1299-1308 / 1994
  • This paper considers the dynamic behavior and the dynamic impact coefficient of a cable-stayed bridge under vehicle load. A static analysis method, the transfer matrix method, is used to obtain influence values for the displacements, girder section forces, and cable forces; these influence values serve as the basic data for the dynamic analysis. The transfer matrix method was chosen because it is considerably simpler than the finite element method, computes quickly, and achieves high precision. In the dynamic analysis, the uncoupled equations of motion are derived from the simultaneous equations of motion of the cable-stayed bridge and the traveling vehicle, using the mode shapes obtained from the undamped free-vibration system. The solution of the uncoupled equations of motion, i.e., the time histories of deflection, velocity, and acceleration in the reference coordinate system, is found by the Newmark-$\beta$ method, a direct integration method. The resulting time histories are then transformed into the dynamic response of the cable-stayed bridge by a linear coordinate transformation. The numerical analysis shows that the dynamic behavior of the cable-stayed bridge under vehicle load varies with the design parameters investigated: the span ratio, the main-span length ratio, the tower height, the flexural rigidity of the longitudinal girder, the flexural rigidity of the tower, and the cable stiffness.

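The Newmark-$\beta$ direct integration scheme named in the abstract above can be sketched for a single-degree-of-freedom system as follows. The incremental formulation and the average-acceleration parameters ($\gamma = 1/2$, $\beta = 1/4$) are standard textbook choices, not details taken from the paper:

```python
import math

def newmark_beta(m, c, k, p, u0, v0, dt, n_steps, gamma=0.5, beta=0.25):
    """Integrate m*u'' + c*u' + k*u = p(t) by the incremental Newmark-beta method."""
    u, v = u0, v0
    a = (p(0.0) - c * v - k * u) / m                  # initial acceleration from equilibrium
    keff = k + gamma * c / (beta * dt) + m / (beta * dt * dt)
    hist = [(0.0, u)]
    for i in range(n_steps):
        t0, t1 = i * dt, (i + 1) * dt
        dp = p(t1) - p(t0)
        # effective incremental load (Chopra-style formulation)
        dpe = (dp + (m / (beta * dt) + gamma * c / beta) * v
               + (m / (2 * beta) + dt * (gamma / (2 * beta) - 1) * c) * a)
        du = dpe / keff
        dv = gamma * du / (beta * dt) - gamma * v / beta + dt * (1 - gamma / (2 * beta)) * a
        da = du / (beta * dt * dt) - v / (beta * dt) - a / (2 * beta)
        u, v, a = u + du, v + dv, a + da
        hist.append((t1, u))
    return hist
```

For undamped free vibration with natural period 1 s, the displacement returns to its initial value after one period, which is a quick sanity check on the scheme.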

Content-based Image Retrieval Using Color Adjacency and Gradient (칼라 인접성과 기울기를 이용한 내용 기반 영상 검색)

  • Jin, Hong-Yan;Lee, Ho-Young;Kim, Hee-Soo;Kim, Gi-Seok;Ha, Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP / v.38 no.1 / pp.104-115 / 2001
  • A new content-based color image retrieval method that integrates color adjacency and gradient features is proposed in this paper. The color histogram, the most widely used feature for color images, has the advantages of being invariant to changes in viewpoint and to image rotation, and of being simple and fast to compute. However, histogram-based retrieval has difficulty distinguishing different images with similar color distributions, because the histogram is built on uniformly quantized colors and contains no spatial information. Another shortcoming of histogram-based retrieval is that the feature storage is usually very large. To avoid these drawbacks, the proposed method computes the gradient, defined as the largest color difference between neighboring pixels, instead of applying the uniform quantization commonly used in histogram-based methods. In addition, color adjacency information, which captures the major color composition of an image, is extracted and represented in binary form to reduce the amount of feature storage. The two features are integrated to make retrieval more robust to changes in various external conditions.

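A minimal sketch of the two features discussed above: a quantized color histogram compared by histogram intersection, and a gradient taken as the largest color difference between neighboring pixels. The bin count, the 4-neighborhood, and the intersection similarity are illustrative assumptions, not the paper's exact formulation:

```python
def color_histogram(pixels, bins=4):
    """Quantize (r, g, b) pixels (0-255 each) into bins**3 buckets; return a normalized histogram."""
    hist = [0.0] * (bins ** 3)
    for r, g, b in pixels:
        idx = (r * bins // 256) * bins * bins + (g * bins // 256) * bins + (b * bins // 256)
        hist[idx] += 1
    n = len(pixels)
    return [h / n for h in hist]

def intersection(h1, h2):
    """Histogram intersection similarity: 1.0 for identical normalized histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def max_gradient(img):
    """Largest color difference between 4-neighboring pixels (the abstract's gradient notion)."""
    rows, cols = len(img), len(img[0])
    best = 0
    for y in range(rows):
        for x in range(cols):
            for dy, dx in ((0, 1), (1, 0)):        # right and down neighbors cover all pairs
                if y + dy < rows and x + dx < cols:
                    d = sum(abs(c1 - c2) for c1, c2 in zip(img[y][x], img[y + dy][x + dx]))
                    best = max(best, d)
    return best
```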

Determination of the Gravity Anomaly in the Ocean Area of Korean Peninsula using Satellite Altimeter Data (위성 고도자료를 이용한 한반도 해상지역에서의 중력이상의 결정)

  • 김광배;최재화;윤홍식;이석배
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.13 no.2 / pp.177-185 / 1995
  • Gravity anomalies were recovered on a $5' \times 5'$ grid around the Korean Peninsula, bounded by latitudes $30^\circ$N to $50^\circ$N and longitudes $120^\circ$E to $140^\circ$E, using sea surface height (SSH) data obtained from a combination of Geosat, ERS-1, and Topex/Poseidon altimeter data. An inverse FFT technique was applied to recover the gravity anomalies from the SSH. The estimated gravity anomalies were compared with shipboard gravity measurements around the Korean Peninsula. The differences between the measured and altimeter-derived anomalies had a mean of -0.51 mGal and a standard deviation of 13.48 mGal. Comparing the measured data with the OSU91A geopotential model gave a mean of 11.93 mGal and a standard deviation of 19.19 mGal, and comparing the OSU91A model with the altimeter data gave a mean of 5.30 mGal and a standard deviation of 19.62 mGal. From these results, we conclude that gravity anomalies computed from altimeter data can be used for geoid computation in place of measured data.

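The comparison statistics quoted above (mean and standard deviation of the differences between measured and estimated anomalies) amount to the following; the sample (n-1) standard deviation convention is an assumption, since the abstract does not say which was used:

```python
import math

def compare_anomalies(measured, estimated):
    """Mean and sample standard deviation of the point-wise differences (e.g. in mGal)."""
    diffs = [m - e for m, e in zip(measured, estimated)]
    n = len(diffs)
    mean = sum(diffs) / n
    std = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean, std
```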

A Comparison of the Gravimetric Geoid and the Geometric Geoid Using GPS/Leveling Data (GPS/Leveling 데이터를 이용한 기하지오이드와 중력지오이드의 비교 분석)

  • Kim, Young-Gil;Choi, Yun-Soo;Kwon, Jay-Hyoun;Hong, Chang-Ki
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.28 no.2 / pp.217-222 / 2010
  • The geoid is the level surface that closely approximates mean sea level and is usually used as the origin of the vertical datum. Various sources of gravity measurements are used for geoid computation in South Korea and, as a consequence, the geoid models may show different results. However, only limited analysis has been performed because of a lack of control data, namely GPS/Leveling data. Therefore, in this study, gravimetric geoids are compared with the geometric geoid obtained through GPS/Leveling procedures. The gravimetric geoids are categorized as the geoid from airborne gravimetry, the geoid from terrestrial gravimetry, the NGII geoid (published by the National Geographic Information Institute), and the NORI geoid (published by the National Oceanographic Research Institute). For the analysis, the geometric geoid is obtained at each unified national control point and the difference between the geometric and gravimetric geoids is computed. The geoid height data are also gridded on a regular $10 \times 10$ km grid so that the FFT method can be applied to analyze the geoid height differences in the frequency domain. The results show no significant differences in standard deviation when the geoids from airborne and terrestrial gravimetry are compared with the geometric geoid, while relatively large differences appear when the NGII and NORI geoids are compared with it. The frequency-domain analysis of the NGII and NORI geoids shows that the deviations occur in the long-wavelength domain.

Primary Solution Evaluations for Interpreting Electromagnetic Data (전자탐사 자료 해석을 위한 1차장 계산)

  • Kim, Hee-Joon;Choi, Ji-Hyang;Han, Nu-Ree;Song, Yoon-Ho;Lee, Ki-Ha
    • Geophysics and Geophysical Exploration / v.12 no.4 / pp.361-366 / 2009
  • Layered-earth Green's functions play a key role in modeling the response of exploration targets in electromagnetic (EM) surveys. They are computed through Hankel transforms of analytic kernels, and computational precision depends on the choice among the algebraically equivalent forms in which these kernels can be expressed. Since three-dimensional (3D) modeling can require a huge number of Green's function evaluations, the total computational time can be dominated by the Hankel transform evaluations. Linear digital filters have proven to be a fast and accurate method of computing these transforms. In EM modeling for 3D inversion, electric fields are generally evaluated with the secondary-field formulation to avoid the singularity problem. In this study, the three components of the electric field for five different sources on the surface of a homogeneous half-space were derived as primary-field solutions. Moreover, reflection coefficients in the TE and TM modes were derived to calculate EM responses accurately for a two-layer model with a sea layer. Accurate primary fields should substantially improve accuracy and decrease computation times for Green's function-based problems such as magnetotelluric (MT) problems and marine EM surveys.

Improvement of Address Pointer Assignment in DSP Code Generation (DSP용 코드 생성에서 주소 포인터 할당 성능 향상 기법)

  • Lee, Hee-Jin;Lee, Jong-Yeol
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.1 / pp.37-47 / 2008
  • Exploiting the address generation units typically provided in DSPs plays an important role in DSP code generation, since they perform fast address computation in parallel with the central data path. Offset assignment, which optimizes the memory layout of program variables to take advantage of the address generation units, consists of memory layout generation and address pointer assignment steps. In this paper, we propose an effective address pointer assignment method that minimizes the number of address calculation instructions in DSP code generation. The proposed approach reduces the time complexity of a conventional address pointer assignment algorithm with fixed memory layouts by breaking minimum-cost nodes. To reduce memory size and processing time, we employ a powerful pruning technique. Moreover, because the memory layout affects the result of the address pointer assignment algorithm, the proposed approach improves the initial solution iteratively by changing the memory layout at each iteration. We applied the approach to about 3,000 sequences from the OffsetStone benchmarks to demonstrate its effectiveness. Experimental results on these benchmarks show an average improvement of 25.9% in the address codes over previous work.
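The cost that offset assignment tries to minimize — explicit address arithmetic not covered by the address generation unit's free post-increment/decrement — can be sketched as follows. The single address register and the ±1 auto-modify range are simplifying assumptions (the paper's algorithm handles more general settings), but they show why the memory layout matters:

```python
def address_cost(layout, accesses):
    """Count explicit address-arithmetic instructions for one address register,
    assuming post-increment/decrement by 1 is free (classic offset-assignment cost)."""
    pos = {v: i for i, v in enumerate(layout)}
    cost = 0
    cur = None
    for v in accesses:
        # a jump of more than one memory slot needs an explicit pointer-update instruction
        if cur is not None and abs(pos[v] - pos[cur]) > 1:
            cost += 1
        cur = v
    return cost
```

With the access sequence `a b d a`, the layout `[a, b, c, d]` costs 2 extra instructions while `[a, d, b, c]` costs only 1, which is exactly the kind of layout-dependent gain the iterative method above exploits.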

Clustering of Web Objects with Similar Popularity Trends (유사한 인기도 추세를 갖는 웹 객체들의 클러스터링)

  • Loh, Woong-Kee
    • The KIPS Transactions:PartD / v.15D no.4 / pp.485-494 / 2008
  • Huge amounts of various web items such as keywords, images, and web pages are widely available on the Web. The popularities of such web items continuously change over time, and mining temporal patterns in the popularities of web items is an important problem useful for several web applications. For example, temporal patterns in the popularities of search keywords help web search enterprises predict future popular keywords, enabling them to make pricing decisions when marketing search keywords to advertisers. However, the presence of millions of web items makes it difficult to scale up previous techniques for this problem. This paper proposes an efficient method for mining temporal patterns in the popularities of web items. We treat the popularities of web items as time series and propose the gap measure to quantify the similarity between the popularities of two web items. To reduce the computational overhead of this measure, an efficient method using the Fast Fourier Transform (FFT) is presented. We do not assume that the popularities of web items follow any probability distribution or are periodic. To find clusters of web items with similar popularity trends, we use a density-based clustering algorithm built on the gap measure. Our experiments using the popularity trends of search keywords obtained from the Google Trends web site illustrate the scalability and usefulness of the proposed approach in real-world applications.
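The abstract does not define the gap measure itself, so as an illustrative stand-in, here is circular cross-correlation of two popularity series computed via a radix-2 FFT — the kind of O(n log n) frequency-domain trick the paper uses to cut the computation overhead of a pairwise similarity. All function names and the power-of-two length requirement are assumptions:

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return x[:]
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def ifft(x):
    """Inverse FFT via the conjugation trick."""
    n = len(x)
    y = fft([v.conjugate() for v in x])
    return [v.conjugate() / n for v in y]

def circular_correlation(a, b):
    """Circular cross-correlation c[k] = sum_n a[n+k] * b[n] of two equal-length series,
    computed in O(n log n) by the correlation theorem."""
    fa = fft([complex(v) for v in a])
    fb = fft([complex(v) for v in b])
    prod = [x * y.conjugate() for x, y in zip(fa, fb)]
    return [v.real for v in ifft(prod)]
```

The lag that maximizes the correlation indicates how one popularity trend is shifted relative to the other; a density-based clustering algorithm can then group series whose best-lag similarity exceeds a threshold.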

A study on the discriminant analysis of node deployment based on cable type Wi-Fi in indoor (케이블형 Wi-Fi 기반 실내 공간의 노드 배치 판별 분석에 관한 연구)

  • Zin, Hyeon-Cheol;Kim, Won-Yeol;Kim, Jong-Chan;Kim, Yoon-Sik;Seo, Dong-Hoan
    • Journal of Advanced Marine Engineering and Technology / v.40 no.9 / pp.836-841 / 2016
  • An indoor positioning system using Wi-Fi requires a radio map that combines the two- or higher-dimensional indoor space, node position information, and other data. In constructing the radio map, measuring the received signal strength indicator (RSSI) and confirming the node placement information consume substantial time. In particular, when the installed wireless environment changes or a new space is created, easy node installation and fast indoor radio mapping are needed to provide indoor location-based services. In this paper, to reduce this time consumption, we propose an algorithm that distinguishes the straight and curved sections of a corridor by RSSI visualization and Sobel filter-based edge detection, enabling accurate node deployment and space analysis using cable-type Wi-Fi nodes installed at 3 m intervals. Because the cable-type Wi-Fi nodes are connected to the same power line, the installation order of the regularly spaced nodes can be confirmed accurately. To analyze specific sections of the space based on this advantage, the signal distribution was confirmed and analyzed by Sobel filter-based edge detection and total RSSI distribution (TRD) computation through a visualization process based on the measured RSSI. Compared with the raw data, the proposed algorithm improves the signal intensity by 13.73% in the curved sections, and the characteristics of the straight and curved sections are enhanced as the signal intensity of the straight sections decreases by an average of 34.16%.
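The Sobel filter-based edge detection applied to the visualized RSSI map can be sketched as follows. Treating the RSSI grid as a plain 2-D array and combining the two gradients as |gx| + |gy| are illustrative choices, not details from the paper:

```python
def sobel_magnitude(grid):
    """Approximate gradient magnitude |gx| + |gy| of a 2-D map (e.g. RSSI values)
    using the standard 3x3 Sobel kernels; border cells are left at zero."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal-change kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical-change kernel
    rows, cols = len(grid), len(grid[0])
    out = [[0] * cols for _ in range(rows)]
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            gx = sum(kx[j][i] * grid[y - 1 + j][x - 1 + i] for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * grid[y - 1 + j][x - 1 + i] for j in range(3) for i in range(3))
            out[y][x] = abs(gx) + abs(gy)
    return out
```

A uniform map yields zero response everywhere, while a step change in signal strength (such as the transition at a corridor bend) produces a strong edge response, which is what lets the algorithm separate straight from curved sections.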

Topology of High Speed System Emulator and Its Software (초고속 시스템 에뮬레이터의 구조와 이를 위한 소프트웨어)

  • Kim, Nam-Do;Yang, Se-Yang
    • The KIPS Transactions:PartA / v.8A no.4 / pp.479-488 / 2001
  • As the complexity of SoC designs constantly increases, simulation using their software models simply takes too much time. To solve this problem, FPGA-based logic emulators have been developed and are commonly used in industry. However, FPGA-based logic emulators face two problems: very low FPGA resource usage rates due to the limited number of FPGA pins, and emulation speeds that drop drastically as design complexity increases. In this paper, we propose a new emulation architecture and accompanying software that achieve a high FPGA resource usage rate and very fast emulation. The proposed system overcomes the FPGA pin limitation with a pipelined ring that transfers multiple logic signals through a single physical pin, and its intelligent ring topology makes a high-speed system clock possible. In this topology, all signal transfer channels among FPGAs are completely separated from the user logic so that a high-speed system clock can be used, and the depth of combinational paths is kept as shallow as possible; both contribute to high-speed emulation. For pipelined signal transfer among FPGAs, we adopt a few heuristic scheduling methods with low computational complexity. Experimental results with a 12-bit microcontroller show that high-speed emulation is possible even with these simple heuristic scheduling algorithms.


Feature Selection to Predict Very Short-term Heavy Rainfall Based on Differential Evolution (미분진화 기반의 초단기 호우예측을 위한 특징 선택)

  • Seo, Jae-Hyun;Lee, Yong Hee;Kim, Yong-Hyuk
    • Journal of the Korean Institute of Intelligent Systems / v.22 no.6 / pp.706-714 / 2012
  • The Korea Meteorological Administration provided the most recent four years of weather records for our very short-term heavy rainfall prediction. We divided the dataset into three parts: training, validation, and test sets. Through feature selection, we select only the important features among the 72 available, to avoid the explosion of the solution space that grows exponentially with dimensionality. We used a differential evolution algorithm with two classifiers as the fitness function of the evolutionary computation to select a more accurate feature subset: Support Vector Machine (SVM), which shows high performance, and k-Nearest Neighbor (k-NN), which is generally fast. In our experiments, the test results of SVM were more prominent than those of k-NN. We also processed the weather data using undersampling and normalization techniques. The test results of our differential evolution algorithm were about five times better than those using all features, and about 1.36 times better than those using a genetic algorithm, the best previously known method. Running times with the genetic algorithm were about twenty times longer than with the differential evolution algorithm.
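A minimal sketch of differential evolution (the common DE/rand/1/bin variant) applied to feature selection in the spirit of the abstract above. The toy fitness function stands in for the SVM/k-NN validation accuracy the paper uses, and all parameter values, names, and the threshold decoding are illustrative assumptions:

```python
import random

def differential_evolution(fitness, dim, pop_size=20, F=0.5, CR=0.9, gens=100, seed=1):
    """Maximize fitness over [0, 1]^dim with DE/rand/1/bin;
    a vector is decoded to a feature mask by thresholding at 0.5."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(dim)] for _ in range(pop_size)]
    scores = [fitness(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)           # guarantee at least one mutated gene
            trial = [min(1.0, max(0.0, pop[a][j] + F * (pop[b][j] - pop[c][j])))
                     if (rng.random() < CR or j == jrand) else pop[i][j]
                     for j in range(dim)]
            s = fitness(trial)
            if s >= scores[i]:                   # greedy one-to-one selection
                pop[i], scores[i] = trial, s
    best = max(range(pop_size), key=lambda i: scores[i])
    return pop[best], scores[best]

# Hypothetical stand-in fitness: reward selecting a known relevant-feature set and
# lightly penalize extra features (the paper uses classifier accuracy instead).
RELEVANT = {0, 2, 5}

def toy_fitness(x):
    mask = {j for j, v in enumerate(x) if v > 0.5}
    return len(mask & RELEVANT) - 0.1 * len(mask - RELEVANT)
```

On this toy problem, DE reliably converges to a mask containing all three relevant features, illustrating how the continuous DE operators can drive a discrete feature-subset search.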