• Title/Summary/Keyword: Multiple theorem

Non-linear dynamic assessment of low-rise RC building model under sequential ground motions

  • Haider, Syed Muhammad Bilal;Nizamani, Zafarullah;Yip, Chun Chieh
    • Structural Engineering and Mechanics / v.74 no.6 / pp.789-807 / 2020
  • Multiple earthquakes occurring within short seismic intervals affect the inelastic behavior of structures. Compared with a single earthquake event, sequential ground motions cause a building structure to lose stiffness and strength. Although numerous studies have been conducted in this area, significant limitations remain, such as: 1) the use of traditional design procedures that usually consider a single seismic excitation; and 2) the selection of seismic excitation data based on earthquake events that occurred at another place and time. Therefore, it is important to study the effects of successive ground motions on framed structures. The objective of this study is to overcome these limitations by testing a two-storey RC building structural model scaled down to a 1/10 ratio through a similitude relation. The scaled model is examined on a shaking table, and the experimental results are validated against simulated results from the ETABS software. The test specimen is subjected to sequences of five artificial and four real earthquake motions. Dynamic response history analysis is conducted to investigate i) the observed response and crack pattern; ii) the maximum displacement; iii) the residual displacement; and iv) the interstorey drift ratio and damage limitation. The results show that the low-rise building model has sufficient strength to resist the successive artificial ground motions. Sequential artificial ground motions cause each storey of the framed structure to displace twice as much as under the first artificial seismic vibration. The displacement parameters show that the real successive ground motions have a limited impact on the low-rise reinforced concrete model. The findings indicate that the traditional seismic design procedure of EC8 needs to be reconsidered.
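
As a rough, hedged illustration of how the displacement quantities in items ii)-iv) can be computed from recorded storey displacement histories, the sketch below assumes a two-storey record, a placeholder storey height, and a fixed base; it is not the paper's processing code.

```python
import numpy as np

def response_metrics(storey_disp, storey_height_mm=300.0):
    """Response quantities from displacement histories
    (rows = storeys from bottom to top, columns = time steps)."""
    d = np.asarray(storey_disp, dtype=float)

    # ii) Maximum absolute displacement of each storey over the record.
    max_disp = np.max(np.abs(d), axis=1)

    # iii) Residual displacement: what remains at the end of shaking.
    residual_disp = d[:, -1]

    # iv) Interstorey drift ratio: relative displacement between consecutive
    #     storeys (base assumed fixed) divided by the storey height.
    relative = np.diff(np.vstack([np.zeros(d.shape[1]), d]), axis=0)
    drift_ratio = np.max(np.abs(relative), axis=1) / storey_height_mm

    return max_disp, residual_disp, drift_ratio

# Hypothetical two-storey displacement record (mm), for illustration only.
disp = [[1.0, 2.5, -1.8, 0.6, 0.4],
        [2.0, 4.8, -3.5, 1.1, 0.9]]
print(response_metrics(disp))
```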

A High Efficiency Data Compression Scheme Based on Deletion of Bit-planes in Wireless Multimedia Sensor Networks (무선 멀티미디어 센서 네트워크에서 비트-평면 삭제를 통한 고효율 데이터 압축 기법)

  • Park, Junho;Ryu, Eunkyung;Son, Ingook;Yoo, Jaesoo
    • The Journal of the Korea Contents Association / v.13 no.10 / pp.37-45 / 2013
  • In recent years, the demand for multimedia data in wireless sensor networks has increased significantly for high-quality environment monitoring applications that utilize sensor nodes. However, since the amount of multimedia data is very large, the network lifetime is significantly reduced by excessive energy consumption at particular nodes. To overcome this problem, this paper proposes a high-efficiency data compression scheme for wireless multimedia sensor networks. The proposed scheme reduces the packet size by a two-stage compression technique: a primary compression that deletes lower-priority bits, considering the characteristics of multimedia data, and a secondary compression based on the Chinese Remainder Theorem. To show the superiority of our scheme, we compare it with an existing compression scheme. Our experimental results show that the proposed scheme reduces the amount of transmitted data by about 55% and increases the network lifetime by about 16% on average over the existing scheme.
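
A minimal sketch of the two ideas named in the abstract: a primary step that deletes low-order bit-planes and a secondary encoding based on the Chinese Remainder Theorem. The moduli, the number of deleted bit-planes, and the single-byte example are illustrative assumptions, not the paper's parameters; actual compression gains would only arise when the encoding is applied across larger data blocks.

```python
from math import prod

MODULI = (5, 7, 8)  # pairwise coprime, product 280 > 255; illustrative choice

def drop_low_bitplanes(byte_value, n_planes=2):
    """Primary step: delete the lowest-priority bit-planes (low-order bits)."""
    return byte_value & ~((1 << n_planes) - 1)

def crt_encode(value, moduli=MODULI):
    """Secondary step: represent the value by its residues (CRT encoding)."""
    return tuple(value % m for m in moduli)

def crt_decode(residues, moduli=MODULI):
    """Reconstruct the value from its residues with the standard CRT formula."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m) is the modular inverse
    return x % M

sample = 173                      # one hypothetical pixel byte
reduced = drop_low_bitplanes(sample)
residues = crt_encode(reduced)
assert crt_decode(residues) == reduced
print(reduced, residues)
```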

Design and ultimate behavior of RC plates and shells: two case studies

  • Min, Chang-Shik
    • Structural Engineering and Mechanics / v.14 no.2 / pp.171-190 / 2002
  • Two design cases are performed, for the hyperbolic paraboloid saddle shell (Lin-Scordelis saddle shell) and the hyperbolic cooling tower (Grand Gulf cooling tower), to check the design strength against a consistent design load and thereby verify the adequacy of the design algorithm. An iterative numerical algorithm is developed for combined membrane and flexural forces, based on equilibrium considerations for the limit state of the reinforcement and cracked concrete. The design algorithm is implemented in a finite element analysis computer program developed by Mahmoud and Gupta. The amount of reinforcement is then determined at the center of each element by an elastic finite element analysis with the design ultimate load. Based on ultimate nonlinear analyses performed with the designed saddle shell, the analytically calculated ultimate load exceeded the design ultimate load by 7% to 34% for analyses with various magnitudes of tension stiffening. For the cooling tower problem the calculated ultimate load exceeded the design ultimate load by 26% to 63% with similar types of analyses. Since the effective tension stiffening would vary over the life of the shells due to environmental factors, a degree of uncertainty seems inevitable in calculating the actual failure load by means of numerical analysis. Even though the ultimate loads are strongly dependent on the tensile properties of the concrete, the calculated ultimate loads are higher than the design ultimate loads for both design cases. For the cases designed, the design algorithm gives a lower bound on the design ultimate load with respect to the lower-bound theorem. This shows the adequacy of the design algorithm, at least for the shells studied. The presented design algorithm for combined membrane and flexural forces can evolve into a general design method for reinforced concrete plates and shells through further studies involving multiple designs and the analysis of differing shell configurations.
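
The abstract does not reproduce the algorithm's equations. As a loose illustration of the kind of equilibrium-based, limit-state membrane design step such algorithms build on, the sketch below applies the classical lower-bound reinforcement equations for a cracked RC membrane element; this is a standard textbook formulation, not necessarily the combined membrane-flexure iteration implemented by Mahmoud and Gupta, and the yield strength and element forces are arbitrary.

```python
def membrane_reinforcement(nx, ny, nxy, fy=500.0):
    """Reinforcement areas (mm^2/mm) and concrete force demand (N/mm) for a
    cracked RC membrane element, from the classical equilibrium (lower-bound)
    design equations. nx, ny, nxy are forces per unit width; tension positive."""
    a = abs(nxy)
    if nx + a > 0.0 and ny + a > 0.0:
        # Steel needed in both directions; concrete carries a diagonal strut.
        ftx, fty, fc = nx + a, ny + a, -2.0 * a
    elif nx + a <= 0.0 and ny + a <= 0.0:
        # Biaxial compression: no reinforcement, check principal compression.
        ftx, fty = 0.0, 0.0
        fc = 0.5 * (nx + ny) - ((0.5 * (nx - ny)) ** 2 + nxy ** 2) ** 0.5
    elif nx + a <= 0.0:
        # Only y-direction reinforcement required.
        shear = nxy * nxy / abs(nx) if nxy else 0.0
        ftx, fty, fc = 0.0, max(ny + shear, 0.0), nx - shear
    else:
        # Only x-direction reinforcement required.
        shear = nxy * nxy / abs(ny) if nxy else 0.0
        ftx, fty, fc = max(nx + shear, 0.0), 0.0, ny - shear
    return ftx / fy, fty / fy, fc

# Illustrative element forces (N/mm): x tension combined with in-plane shear.
print(membrane_reinforcement(nx=200.0, ny=-50.0, nxy=120.0))
```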

Unsupervised one-class classification for condition assessment of bridge cables using Bayesian factor analysis

  • Wang, Xiaoyou;Li, Lingfang;Tian, Wei;Du, Yao;Hou, Rongrong;Xia, Yong
    • Smart Structures and Systems / v.29 no.1 / pp.41-51 / 2022
  • Cables are critical components of cable-stayed bridges. A structural health monitoring system provides real-time cable tension records for cable health monitoring. However, the measurement data involve multiple sources of variability, i.e., varying environmental and operational factors, which increase the complexity of cable condition monitoring. In this study, a one-class classification method is developed for cable condition assessment using Bayesian factor analysis (FA). The single-peaked vehicle-induced cable tension is assumed to be related to vehicle positions and weights. Bayesian FA is adopted to establish the correlation model between cable tensions and vehicles. Vehicle weights are treated as latent variables, and the influences of different transverse positions are quantified by coefficient parameters. Bayes' theorem is employed to estimate the parameters and variables automatically, and the damage index is defined on the basis of the well-trained model. The proposed method is applied to a cable-stayed bridge for cable damage detection. Significant deviations in the damage indices of Cable SJS11 were observed, indicating a damaged condition in 2011. This study develops a novel method to evaluate the health condition of individual cables using FA in the Bayesian framework. Only vehicle-induced cable tensions are used, and there is no need to monitor the vehicles. The entire process, including data pre-processing, model training and damage index calculation for one cable, takes only 35 s, which is highly efficient.
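
A hedged sketch of the general workflow described here: a factor model is fitted to baseline vehicle-induced cable tensions and a damage index is computed from the likelihood of new samples. scikit-learn's maximum-likelihood FactorAnalysis stands in for the paper's Bayesian formulation, and the data shapes, random baseline, and threshold rule are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Hypothetical training data: each row is one vehicle passage, each column the
# peak tension induced in one cable (arbitrary units, for illustration only).
n_events, n_cables = 500, 10
latent = rng.normal(size=(n_events, 2))            # e.g., vehicle weight/position
loadings = rng.normal(size=(2, n_cables))
baseline = latent @ loadings + 0.1 * rng.normal(size=(n_events, n_cables))

# Fit a factor model to the healthy-state data (one-class training).
fa = FactorAnalysis(n_components=2).fit(baseline)

# Damage index: negative log-likelihood of a sample under the trained model.
def damage_index(samples):
    return -fa.score_samples(np.atleast_2d(samples))

# Threshold taken from the training distribution (e.g., 99th percentile).
threshold = np.percentile(damage_index(baseline), 99)

new_event = baseline[0] * 1.5      # a distorted sample standing in for damage
print(damage_index(new_event) > threshold)
```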

An Efficient Algorithm for Streaming Time-Series Matching that Supports Normalization Transform (정규화 변환을 지원하는 스트리밍 시계열 매칭 알고리즘)

  • Loh, Woong-Kee;Moon, Yang-Sae;Kim, Young-Kuk
    • Journal of KIISE:Databases / v.33 no.6 / pp.600-619 / 2006
  • According to recent technical advances in sensors and mobile devices, processing of the data streams generated by such devices is becoming an important research issue. A data stream of real values obtained at continuous time points is called a streaming time-series. Due to the unique features of streaming time-series, which differ from those of traditional time-series, the similarity matching problem on streaming time-series must be solved in a new way. In this paper, we propose an efficient algorithm for the streaming time-series matching problem that supports the normalization transform. While existing algorithms compare streaming time-series without any transform, the proposed algorithm compares them after they are normalization-transformed. The normalization transform is useful for finding time-series that have similar fluctuation trends even though they consist of distant element values. The major contributions of this paper are as follows. (1) By using a theorem presented in the context of subsequence matching that supports the normalization transform [4], we propose a simple algorithm for solving the problem. (2) To improve search performance, we extend the simple algorithm to use k (≥ 1) indexes. (3) For a given k, to achieve optimal search performance of the extended algorithm, we present an approximation method for choosing the k window sizes used to construct the k indexes. (4) Based on the notion of continuity [8] on streaming time-series, we further extend our algorithm so that it can simultaneously obtain the search results for m (≥ 1) time points, from the present time point t0 to a time point (t0+m-1) in the near future, by retrieving the index only once. (5) Through a series of experiments, we compare the search performances of the algorithms proposed in this paper and show their performance trends according to the k and m values. To the best of our knowledge, there has been no algorithm that solves the problem presented in this paper, so we compare the search performance of our algorithms with that of the sequential scan algorithm. The experimental results showed that our algorithms outperformed the sequential scan algorithm by up to 13.2 times, and their performance improves further as k increases.
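
A minimal sketch of the core operation, comparing the most recent window of a streaming time-series with a query after both are normalization-transformed (z-normalized). It corresponds to the sequential-scan baseline rather than the paper's index-based algorithms, and the tolerance value is an assumption.

```python
import numpy as np

def z_normalize(x):
    """Normalization transform: zero mean, unit standard deviation."""
    x = np.asarray(x, dtype=float)
    std = x.std()
    return (x - x.mean()) / std if std > 0 else np.zeros_like(x)

def streaming_match(stream, query, tolerance=1.0):
    """Naive (sequential-scan style) matching: at each arriving data point,
    z-normalize the most recent window and compare it with the query."""
    query_n = z_normalize(query)
    w = len(query)
    buffer, matches = [], []
    for t, value in enumerate(stream):
        buffer.append(value)
        if len(buffer) < w:
            continue
        window_n = z_normalize(buffer[-w:])
        if np.linalg.norm(window_n - query_n) <= tolerance:
            matches.append(t)          # match ending at time point t
    return matches

# Two subsequences with the same fluctuation trend but distant element values.
query = [1.0, 2.0, 3.0, 2.0, 1.0]
stream = [0.0, 0.0, 10.0, 20.0, 30.0, 20.0, 10.0, 0.0]
print(streaming_match(stream, query, tolerance=0.5))
```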

Limit Pricing by Noncooperative Oligopolists (과점산업(寡占産業)에서의 진입제한가격(進入制限價格))

  • Nam, Il-chong
    • KDI Journal of Economic Policy / v.12 no.1 / pp.127-148 / 1990
  • A Milgrom-Roberts style signalling model of limit pricing is developed to analyze the possibility and the scope of limit pricing in general, noncooperative oligopolies. The model contains multiple incumbent firms facing a potential entrant and assumes an information asymmetry between the incumbents and the potential entrant about market demand. There are two periods in the model. In period 1, n incumbent firms simultaneously and noncooperatively choose quantities. At the end of period 1, the potential entrant observes the market price and makes an entry decision. In period 2, depending on the entry decision, n or (n+1) firms choose quantities again before the game terminates. Since the choices of the incumbent firms in period 1 depend on their information about demand, the market price in period 1 conveys information about market demand. Thus, there is a systematic link between the market price and the profitability of entry. Using Bayes-Nash equilibrium as the solution concept, we find that there exist demand conditions under which incumbent firms will limit price. In symmetric equilibria, incumbent firms each produce an output that is greater than the Cournot output and induce a price that is below the Cournot price. In doing so, each incumbent firm refrains from maximizing short-run profit and supplies a public good, namely entry deterrence. Entry is deterred by such a reduced price because it conveys information about the demand of the industry that is unfavorable to the entrant. This establishes the possibility of limit pricing by noncooperative oligopolists in a fully rational setting, and it also generalizes the result of Milgrom and Roberts to general oligopolies, confirming Bain's intuition. Limit pricing by incumbents, as explained above, can be interpreted as a form of credible collusion in which each firm voluntarily deviates from myopic optimization in order to deter entry using their superior information. This type of implicit collusion differs from Folk-theorem type collusion in many ways and suggests that collusion can be credible even in finite games as long as there is information asymmetry. Another important result is that as the number of incumbent firms approaches infinity, or as the industry approaches a competitive one, the probability that limit pricing occurs converges to zero and the probability of entry converges to that under complete information. This limit result confirms the intuition that as the number of agents sharing the same private information increases, the value of the private information decreases and the probability that the information gets revealed increases. It also supports the conventional belief that there is no entry problem in a competitive market. Considering that limit pricing is generally believed to occur at an early stage of an industry and that many industries in Korea are oligopolies in their infant stages, the theoretical results of this paper suggest that we should pay attention to the possibility of implicit collusion by incumbent firms aimed at deterring new entry using superior information. The long-term loss to the Korean economy from limit pricing can be very large if the industry in question is part of the world market and the domestic potential entrant whose entry is deterred could have developed into a competitor in the world market. In that case, the long-term loss to the Korean economy should include the lost opportunity in the world market in addition to the domestic long-run welfare loss.
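
A small numerical illustration of the Cournot benchmark that the symmetric limit-pricing equilibrium is compared against; the linear inverse demand and cost parameters are arbitrary assumptions, not values from the paper.

```python
def cournot(n, a, b, c):
    """Symmetric Cournot equilibrium with inverse demand P = a - b*Q and
    constant marginal cost c: per-firm output and market price."""
    q = (a - c) / (b * (n + 1))
    price = a - b * n * q
    return q, price

# With 3 incumbents and illustrative demand/cost parameters, the Cournot
# outcome is the benchmark: in a limit-pricing equilibrium each firm produces
# more than q and the market price falls below this value.
q, p = cournot(n=3, a=100.0, b=1.0, c=20.0)
print(q, p)   # 20.0, 40.0
```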

Korean Word Sense Disambiguation using Dictionary and Corpus (사전과 말뭉치를 이용한 한국어 단어 중의성 해소)

  • Jeong, Hanjo;Park, Byeonghwa
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.1-13 / 2015
  • As opinion mining in big data applications has been highlighted, a great deal of research on unstructured data has been carried out. Social media on the Internet generate unstructured or semi-structured data every second, and these data are often written in the natural languages we use in daily life. Many words in human languages have multiple meanings or senses, and as a result it is very difficult for computers to extract useful information from such datasets. Traditional web search engines are usually based on keyword search, resulting in incorrect search results that are far from users' intentions. Even though much progress has been made over the last years in enhancing the performance of search engines in order to provide users with appropriate results, there is still much room for improvement. Word sense disambiguation plays a very important role in natural language processing and is considered one of the most difficult problems in this area. Major approaches to word sense disambiguation can be classified as knowledge-based, supervised corpus-based, and unsupervised corpus-based approaches. This paper presents a method that automatically generates a corpus for word sense disambiguation by taking advantage of the examples in existing dictionaries, avoiding expensive sense-tagging processes. It evaluates the effectiveness of the method, which is based on the Naïve Bayes model, a supervised learning algorithm, using the Korean standard unabridged dictionary and the Sejong Corpus. The Korean standard unabridged dictionary has approximately 57,000 sentences. The Sejong Corpus has about 790,000 sentences tagged with both part-of-speech and senses. For the experiments of this study, the Korean standard unabridged dictionary and the Sejong Corpus were evaluated both as a combination and as separate entities using cross validation. Only nouns, the targets of word sense disambiguation, were selected. 93,522 word senses among 265,655 nouns and 56,914 sentences from related proverbs and examples were additionally combined into the corpus. The Sejong Corpus was easily merged with the Korean standard unabridged dictionary because the Sejong Corpus was tagged based on the sense indices defined by the Korean standard unabridged dictionary. Sense vectors were formed after the merged corpus was created. The terms used in creating the sense vectors were added to the named-entity dictionary of the Korean morphological analyzer. Using the extended named-entity dictionary, term vectors were extracted from the input sentences, and term vectors for the sentences were created. Given the extracted term vector and the sense vector model built during the pre-processing stage, the sense-tagged terms were determined by vector-space-model-based word sense disambiguation. In addition, this study shows the effectiveness of the corpus merged from the examples in the Korean standard unabridged dictionary and the Sejong Corpus: the experiments show that better precision and recall are obtained with the merged corpus. This study suggests that the approach can practically enhance the performance of Internet search engines and help us understand the meaning of a sentence more accurately in natural language processing tasks pertinent to search engines, opinion mining, and text mining. The Naïve Bayes classifier used in this study is a supervised learning algorithm that applies Bayes' theorem under the assumption that all senses are independent. Even though this assumption is not realistic and ignores the correlations between attributes, the Naïve Bayes classifier is widely used because of its simplicity, and in practice it is known to be very effective in many applications such as text classification and medical diagnosis. However, further research needs to be carried out to consider all possible combinations and/or partial combinations of all senses in a sentence. Also, the effectiveness of word sense disambiguation may be improved if rhetorical structures or morphological dependencies between words are analyzed through syntactic analysis.
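
A hedged sketch of the supervised corpus-based approach described here: a Naïve Bayes classifier over bag-of-words context features. The tiny English training sentences and sense labels are invented for illustration and merely stand in for the dictionary and Sejong Corpus training examples.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented training corpus: context sentences tagged with the sense of
# the ambiguous noun "bank" (financial institution vs. river bank).
contexts = [
    "deposited money at the bank account",
    "the bank approved the loan interest",
    "fishing from the grassy bank of the river",
    "the river overflowed its bank after rain",
]
senses = ["bank/finance", "bank/finance", "bank/river", "bank/river"]

# Bag-of-words features + multinomial Naive Bayes (Bayes' theorem with the
# conditional-independence assumption mentioned in the abstract).
wsd = make_pipeline(CountVectorizer(), MultinomialNB())
wsd.fit(contexts, senses)

print(wsd.predict(["she opened a savings account at the bank"]))
print(wsd.predict(["they sat on the bank watching the river"]))
```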