• Title/Summary/Keyword: (Max,+)-Linear Systems

44 search results

The controllable fluid dash pot damper performance

  • Samali, Bijan;Widjaja, Joko;Reizes, John
    • Smart Structures and Systems
    • /
    • v.2 no.3
    • /
    • pp.209-224
    • /
    • 2006
  • The use of smart dampers to optimally control the response of structures is increasing. To maximize the potential of such damper systems, accurate modeling and assessment of their performance are of vital interest. In this study, the performance of a controllable fluid dashpot damper, in terms of damper forces, damper dynamic range, and damping-force hysteresis loops, is studied mathematically. The study employs a Bingham-Maxwell (BingMax) damper model whose mathematical formulation is developed using a Fourier series technique. The technique treats the one-dimensional Navier-Stokes momentum equation as a linear superposition of initial-boundary value problems (IBVPs): boundary conditions, the viscous term, the constant direct current (DC) induced fluid plug, and the fluid inertial term. To keep the formulation applicable, the DC current supplied to the damper is held at discrete constant levels. The formulation and subsequent simulation are validated against experimental results for a commercially available magnetorheological (MR) dashpot damper (Lord model no. RD-1005-3) subjected to sinusoidal stroke motion on a 'SCHENK' material testing machine in the Materials Laboratory at the University of Technology, Sydney.
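The paper develops the full BingMax formulation via Fourier series; as a rough companion, here is a minimal sketch of the simpler Bingham damper force model often used for MR dampers. The function name and all parameter values are illustrative assumptions, not the paper's:

```python
import numpy as np

def bingham_damper_force(velocity, c0=1000.0, fc=300.0, f0=50.0):
    """Bingham model damper force (illustrative parameters, not the paper's).

    F = c0 * v + fc * sgn(v) + f0
    c0: viscous damping coefficient [N*s/m]
    fc: frictional (yield) force, which grows with the DC current [N]
    f0: force offset (e.g., due to an accumulator) [N]
    """
    return c0 * velocity + fc * np.sign(velocity) + f0

# Sinusoidal stroke motion, as in the reported experiment:
# x(t) = 0.02 sin(2*pi*t)  ->  v(t) = 2*pi*0.02 cos(2*pi*t)
t = np.linspace(0.0, 1.0, 1000)
v = 2 * np.pi * 0.02 * np.cos(2 * np.pi * t)
F = bingham_damper_force(v)
```

Plotting F against the displacement reproduces the characteristic hysteretic loop discussed in the abstract.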

Effect of Nitric Oxide on the Sinusoidal Uptake of Organic Cations and Anions by Isolated Hepatocytes

  • Song, Im-Sook;Lee, In-Kyoung;Chung, Suk-Jae;Kim, Sang-Geon;Lee, Myung-Gull;Shim, Chang-Koo
    • Archives of Pharmacal Research
    • /
    • v.25 no.6
    • /
    • pp.984-988
    • /
    • 2002
  • Whether the presence of NOx (NO and its oxidized metabolites) in hepatocytes at pathological levels affects the functional activity of transport systems in the sinusoidal membrane was investigated. For this purpose, the effect of pretreating isolated hepatocytes with sodium nitroprusside (SNP), a spontaneous NO donor, on the sinusoidal uptake of tributylmethylammonium (TBuMA) and triethylmethylammonium (TEMA), representative substrates of the organic cation transporter (OCT), and of taurocholate, a representative substrate of the $Na^+$/taurocholate cotransporting polypeptide (NTCP), was measured. The uptake of TBuMA and TEMA was not affected by the pretreatment, as demonstrated by nearly identical kinetic parameters for the uptake (i.e., $V_{max}$, $K_m$, and $CL_{linear}$). The uptake of mannitol into hepatocytes was also unaffected, demonstrating that membrane integrity remained intact regardless of the SNP pretreatment. In contrast, the uptake of taurocholate was significantly inhibited by the pretreatment, resulting in a significant decrease in $V_{max}$, a clear demonstration that NOx preferentially affects the function of NTCP rather than OCT in the sinusoidal membrane. A direct interaction between NOx and NTCP, or a decrease in $Na^+/K^+$-ATPase activity as a result of SNP pretreatment, may be responsible for this selective effect of NOx.
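As a companion to the kinetic parameters named above, a minimal sketch of the standard saturable-plus-linear uptake model (the parameter values are illustrative assumptions, not the paper's fitted estimates):

```python
def uptake_rate(C, Vmax, Km, CL_linear):
    """Sinusoidal uptake rate: saturable (Michaelis-Menten) plus linear term.

    v = Vmax * C / (Km + C) + CL_linear * C
    C: substrate concentration; Vmax, Km: saturable-component parameters;
    CL_linear: clearance of the non-saturable (linear) component.
    """
    return Vmax * C / (Km + C) + CL_linear * C

# Illustrative values only (the paper reports the fitted parameters)
v = uptake_rate(C=10.0, Vmax=100.0, Km=10.0, CL_linear=0.5)
```

Comparing fitted (Vmax, Km, CL_linear) with and without SNP pretreatment is the comparison the abstract describes.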

A Novel Differential Equal Gain Transmission Technique using M-PSK Constellations in MIMO System (MIMO 시스템에서 M-PSK 성운을 이용한 새로운 차분 동 이득 전송 기술)

  • Kim, Young-Ju;Seo, Chang-Won;Park, Noeyoon
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.1
    • /
    • pp.24-31
    • /
    • 2015
  • A differential codebook using M-ary phase shift keying (M-PSK) constellation points as its codeword elements is proposed for Long Term Evolution (LTE), LTE-Advanced (LTE-A), and WiMAX systems. Owing to the temporal correlation of the channel, consecutive precoding matrices are likely to be similar. This approach quantizes only the differential information of the channel instead of the whole channel subspace, which virtually increases the codebook size and realizes more accurate quantization of the channel. In particular, the proposed codebook retains the key properties of the LTE release-8 codebook: constant modulus, reduced complexity, and the nested property. Exploiting the constant-modulus property, the mobile station can be designed with a less expensive nonlinear amplifier. Computer simulations show that the proposed codebook achieves higher capacity than the LTE release-8 codebook with the same amount of feedback information.
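The constant-modulus property mentioned above can be illustrated with a small sketch: every codeword element drawn from an M-PSK alphabet has unit magnitude. The 4x4 codeword below is a hypothetical example, not a codeword from the proposed codebook:

```python
import numpy as np

def mpsk_alphabet(M):
    """M-PSK constellation points exp(j*2*pi*m/M), m = 0..M-1; all unit modulus."""
    return np.exp(2j * np.pi * np.arange(M) / M)

def constant_modulus(codebook, tol=1e-12):
    """Check the constant-modulus property: every entry satisfies |c| = 1."""
    return bool(np.allclose(np.abs(codebook), 1.0, atol=tol))

# Hypothetical 4x4 codeword built only from QPSK (M = 4) elements
alphabet = mpsk_alphabet(4)
rng = np.random.default_rng(0)
codeword = alphabet[rng.integers(0, 4, size=(4, 4))]
```

Because every entry has unit magnitude, the per-antenna transmit power is constant, which is what permits the cheaper nonlinear amplifier.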

Classification Algorithms for Human and Dog Movement Based on Micro-Doppler Signals

  • Lee, Jeehyun;Kwon, Jihoon;Bae, Jin-Ho;Lee, Chong Hyun
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.6 no.1
    • /
    • pp.10-17
    • /
    • 2017
  • We propose classification algorithms for human and dog movement. The proposed algorithms use micro-Doppler signals obtained from humans and dogs moving in four different directions. A two-stage classifier based on a support vector machine (SVM) is proposed, which uses a radial basis function (RBF) kernel and $16^{th}$-order linear predictive coding (LPC) coefficients as feature vectors. With the proposed algorithms, we obtain the best classification results when a first-level SVM classifies the type of movement and a second-level SVM then classifies the moving object. The average probability of correct classification is 95.54%. Next, to deal with the difficult problem of distinguishing a running human from a running dog, we propose a two-layer convolutional neural network (CNN). The proposed CNN is composed of six ($6{\times}6$) convolution filters at the first and second layers, with ($5{\times}5$) max pooling for the first layer and ($2{\times}2$) max pooling for the second layer. The proposed CNN-based classifier adopts an autoregressive spectrogram as the feature image, obtained from the $16^{th}$-order LPC vectors over a specific time duration. The proposed CNN exhibits 100% classification accuracy and outperforms the SVM-based classifier. These results show that the proposed classifiers can be used for human and dog classification systems, and also for classification problems using data obtained from an ultra-wideband (UWB) sensor.
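The two-stage structure described above can be sketched as follows, assuming scikit-learn and random stand-ins for the LPC feature vectors. The data, labels, and the toy labeling rule are illustrative, not the paper's radar measurements:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for 16th-order LPC feature vectors (the paper
# extracts these from micro-Doppler returns; this data is random)
n = 200
X = rng.normal(size=(n, 16))
movement = rng.integers(0, 4, size=n)          # four movement directions
subject = (X[:, 0] + X[:, 1] > 0).astype(int)  # 0 = human, 1 = dog (toy rule)

# Stage 1: an RBF-SVM classifies the type of movement
stage1 = SVC(kernel="rbf").fit(X, movement)

# Stage 2: one RBF-SVM per movement type classifies the moving object
stage2 = {m: SVC(kernel="rbf").fit(X[movement == m], subject[movement == m])
          for m in range(4)}

def classify(x):
    """Return (movement type, object class) for one feature vector."""
    m = stage1.predict(x.reshape(1, -1))[0]
    return m, stage2[m].predict(x.reshape(1, -1))[0]
```

The key design point from the abstract is the ordering: classifying the movement first, then conditioning the object classifier on that movement, gave the best results.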

Analysis of Polishing Mechanism and Characteristics of Aspherical Lens with MR Polishing (MR Polishing을 이용한 비구면 렌즈의 연마 메커니즘 및 연마 특성 분석)

  • Lee, Jung-Won;Cho, Myeong-Woo;Ha, Seok-Jae;Hong, Kwang-Pyo;Cho, Yong-Kyu;Lee, In-Cheol;Kim, Byung-Min
    • Journal of the Korean Society of Manufacturing Process Engineers
    • /
    • v.14 no.3
    • /
    • pp.36-42
    • /
    • 2015
  • The aspherical lens is designed to bring rays to a single focal point; for this reason, it has a highly curved surface. Aspherical lenses are fabricated by injection molding or with a diamond turning machine. For an aspherical lens, tool marks and surface roughness affect optical characteristics such as transmissivity. However, it is difficult to polish free-form surface shapes uniformly with conventional methods. Therefore, in this paper, an ultra-precision polishing method with MR fluid was used to polish an aspherical lens on a 4-axis position control system. A tool path and a polishing mechanism were developed to polish the aspherical lens shape. An MR polishing experiment was performed using the generated tool path on a PMMA aspherical lens after the turning process. As a result, surface roughness was improved from $R_a=40.99nm$, $R_{max}=357.1nm$ to $R_a=4.54nm$, $R_{max}=35.72nm$. The MR polishing system can therefore be applied to the finishing step in aspherical lens fabrication.

Centralized Channel Allocation Schemes for Incomplete Medium Sharing Systems with General Channel Access Constraints (불완전매체공유 시스템을 위한 집중방식 채널할당기법)

  • Kim Dae-Woo;Lee Byoung-Seok;Choe Jin-Woo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.31 no.3B
    • /
    • pp.183-198
    • /
    • 2006
  • We define an incomplete medium sharing system as a multi-channel shared-medium communication system in which constraints are imposed on the set of channels that may be allocated to some transmitter-receiver node pairs. To derive a centralized MAC scheme for an incomplete medium sharing system, we address the problem of optimal channel allocation. The optimal channel allocation problem is translated into a max-flow problem on a multi-commodity flow graph, and it is shown that the optimal solution can be obtained by solving a linear programming problem. In addition, two suboptimal channel allocation schemes are proposed to bring the computational complexity down to a practical level: one is a modified iSLIP channel allocation scheme, and the other is a sequential channel allocation scheme. An extensive set of numerical experiments shows that the suboptimal schemes achieve channel utilization close to that of the optimal scheme while requiring far less computation. In particular, the sequential channel allocation scheme achieves higher channel utilization with lower computational complexity than the modified iSLIP channel allocation scheme.
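The flow-graph idea above can be illustrated on a single-commodity toy instance: pairs and channels become nodes, and the access constraints simply delete pair-channel edges. The paper's actual formulation is a multi-commodity flow/LP; the pairs, channels, and constraints below are hypothetical:

```python
import networkx as nx

# Toy incomplete medium sharing system: 3 node pairs, 3 channels,
# where each pair may use only its listed channels (the access constraints)
allowed = {"p0": ["c0", "c1"], "p1": ["c1"], "p2": ["c1", "c2"]}

G = nx.DiGraph()
for pair, channels in allowed.items():
    G.add_edge("s", pair, capacity=1)        # each pair requests one channel
    for ch in channels:
        G.add_edge(pair, ch, capacity=1)     # only allowed pair-channel edges
for ch in ["c0", "c1", "c2"]:
    G.add_edge(ch, "t", capacity=1)          # each channel serves one pair

flow_value, flow = nx.maximum_flow(G, "s", "t")   # max simultaneous allocations
```

Here the maximum flow is 3 (p0 on c0, p1 on c1, p2 on c2), i.e., the constraints still admit a full allocation.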

The Cardinality Constrained Multi-Period Linear Programming Knapsack Problem (선수제약 다기간 선형계획 배낭문제)

  • Won, Joong-Yeon
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.38 no.4
    • /
    • pp.64-71
    • /
    • 2015
  • In this paper, we present a multi-period 0-1 knapsack problem with cardinality constraints. Theoretically, the presented problem can be regarded as an extension of the multi-period 0-1 knapsack problem, in which there are n jobs to be performed during m periods. Each job has an execution time, and its completion yields a profit. All n jobs are partitioned into m periods, and a job belonging to the i-th period may be performed no later than the i-th period, i = 1, ${\cdots}$, m. The total production time for periods 1 through i is given by $b_i$ for each i = 1, ${\cdots}$, m, and the objective is to maximize the total profit. In the extended problem, a specified number of jobs may be selected from each period, subject to the corresponding cardinality constraint. As the extended problem is NP-hard, the branch and bound method is preferable for solving it, so efficient procedures for solving its linear programming (LP) relaxation are important. We therefore explore the LP relaxed problem intensively and suggest a polynomial time algorithm. We first decompose the LP relaxed problem into m subproblems associated with the cardinality constraints. We then identify some new properties based on parametric analysis. Finally, by exploiting the special structure of the LP relaxed problem, we develop an efficient algorithm for it. The developed algorithm has a worst-case computational complexity of order $\max[O(n^2\log n), O(mn^2)]$, where m is the number of periods and n is the total number of jobs. A numerical example is also illustrated.
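The LP relaxation described above can be sketched on a toy instance. The instance data are hypothetical, and a generic LP solver stands in for the paper's specialized polynomial-time algorithm; the LP value is the upper bound a branch-and-bound method would use:

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: m = 2 periods; jobs 0-2 belong to period 1, jobs 3-5 to period 2
times  = np.array([3.0, 4.0, 2.0, 5.0, 3.0, 4.0])
profit = np.array([6.0, 7.0, 3.0, 8.0, 5.0, 6.0])
period = np.array([1, 1, 1, 2, 2, 2])
b = [6.0, 14.0]   # cumulative production-time limits b_1, b_2
k = [2, 2]        # cardinality limit per period

A_ub, b_ub = [], []
for i, bi in enumerate(b, start=1):
    A_ub.append(np.where(period <= i, times, 0.0))  # time of periods 1..i <= b_i
    b_ub.append(bi)
for i, ki in enumerate(k, start=1):
    A_ub.append(np.where(period == i, 1.0, 0.0))    # at most k_i jobs in period i
    b_ub.append(ki)

# LP relaxation: 0 <= x_j <= 1 instead of x_j in {0, 1}; maximize profit.x
res = linprog(-profit, A_ub=np.array(A_ub), b_ub=b_ub, bounds=[(0, 1)] * 6)
lp_upper_bound = -res.fun
```

For this instance the LP optimum is 24.25, with at most one fractional job per period, reflecting the continuous-knapsack structure of each subproblem.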

Characteristics of Input-Output Spaces of Fuzzy Inference Systems by Means of Membership Functions and Performance Analyses (소속 함수에 의한 퍼지 추론 시스템의 입출력 공간 특성 및 성능 분석)

  • Park, Keon-Jun;Lee, Dong-Yoon
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.4
    • /
    • pp.74-82
    • /
    • 2011
  • Fuzzy modeling of a nonlinear process requires analyzing the input-output characteristics of fuzzy inference systems according to the division of the entire input space and the fuzzy reasoning method. To this end, the fuzzy model is expressed by identifying the structure and parameters of the system by means of input variables, fuzzy partition of the input spaces, and consequence polynomial functions. In the premise part of the fuzzy rules, the Min-Max method, which uses the minimum and maximum values of the input data set, and the C-Means clustering algorithm, which forms the input data into clusters, are used for identification of the fuzzy model, with triangular, Gaussian-like, and trapezoidal membership functions. In the consequence part of the fuzzy rules, fuzzy reasoning is conducted by two types of inference, simplified and linear. The consequence parameters of each rule, namely the polynomial coefficients, are identified by the standard least squares method. Finally, using the gas furnace process, which is widely used as a nonlinear benchmark process, we evaluate the performance and the system characteristics.
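The premise/consequence split above can be sketched with triangular membership functions and simplified (constant-consequent) inference fitted by least squares. The toy sine process below stands in for the gas furnace data:

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Min-Max style partition of the input space [0, 1] into three fuzzy sets
x = np.linspace(0.0, 1.0, 101)
y = np.sin(2 * np.pi * x)                      # toy nonlinear process
centers = [0.0, 0.5, 1.0]
W = np.column_stack([triangular(x, c - 0.5, c, c + 0.5) for c in centers])
W = W / W.sum(axis=1, keepdims=True)           # normalized firing strengths

# Simplified inference: y_hat = sum_i w_i(x) * p_i, with the constant
# consequent parameters p identified by standard least squares
p, *_ = np.linalg.lstsq(W, y, rcond=None)
y_hat = W @ p
```

Replacing the constants p_i with first-order polynomials in x gives the linear-inference variant the abstract contrasts with the simplified one.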

Studies on the Derivation of the Instantaneous Unit Hydrograph for Small Watersheds of Main River Systems in Korea (한국주요빙계의 소유역에 대한 순간단위권 유도에 관한 연구 (I))

  • 이순혁
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.19 no.1
    • /
    • pp.4296-4311
    • /
    • 1977
  • This study was conducted to derive an instantaneous unit hydrograph (IUH) yielding the accurate and reliable unitgraph needed for the estimation and control of floods, the development of agricultural water resources, and the rational design of hydraulic structures. Eight small watersheds were selected as study basins from the Han, Geum, Nakdong, Yeongsan, and Inchon River systems, which may be considered the main river systems in Korea. The areas of the small watersheds range from 85 to 470 $\textrm{km}^2$. The IUH is derived under the condition of a short duration of heavy rain and uniform rainfall intensity, using basic and reliable rainfall records, pluviographs, and river stage records from the main river systems mentioned above. The relations between the measurable unitgraph and watershed characteristics such as watershed area A, river length L, and centroid distance of the watershed area Lca were investigated. In particular, this study emphasized the derivation and application of the IUH by applying Nash's conceptual model with the aid of an electronic computer. The IUH by Nash's conceptual model and the IUH by flood routing, both applicable to ungaged small watersheds, were derived and each compared with the observed unitgraph; the IUH for each small watershed can be computed by electronic computer. The results of these studies are summarized as follows.

1. The temporal rainfall patterns of the selected heavy rainfall events show a distribution of uniform rainfall intensity.

2. The mean value of the recession constant $K_1$ is 0.931 over all watersheds observed.

3. The time to peak discharge $T_p$ occurs at about 0.02 $T_b$, the base length of the hydrograph, which is lower than in larger watersheds.

4. The peak discharge $Q_p$ in relation to the watershed area A and effective rainfall R is found to be $Q_p = \frac{0.895}{A^{0.145}}AR$, with a highly significant correlation coefficient of 0.927 between peak discharge $Q_p$ and effective rainfall R. A design chart for the peak discharge (refer to Fig. 15) in terms of watershed area and effective rainfall was established by the author.

5. The mean slopes of the main streams range from 1.46 to 13.6 meters per kilometer, higher than in larger watersheds. The lengths of the main streams range from 9.4 to 41.75 kilometers, which can be regarded as short distances. It is remarkable that flood concentration is more rapid in the small watersheds than in the larger ones.

6. The length of the main stream L in relation to the watershed area A is found to be $L = 2.044A^{0.48}$, with a highly significant correlation coefficient of 0.968.

7. The watershed lag $L_g$ in hours, in relation to the watershed area A and the length of the main stream L, was derived as $L_g = 3.228A^{0.904}L^{-1.293}$ with high significance. On the other hand, the watershed lag could also be expressed as $L_g = 0.247\left(\frac{LL_{ca}}{\sqrt{S}}\right)^{0.604}$ in terms of $LL_{ca}$, the product of the main stream length and the centroid length of the basin, which together with the slope S can be regarded as a measure of the shape and size of the watershed apart from the area A. The latter, however, showed a lower correlation than the former in the significance test. Therefore, it can be concluded that in small watersheds the watershed lag $L_g$ is more closely related to such characteristics as watershed area and main stream length. The empirical formula for the peak discharge per unit area $q_p$ (㎥/sec/$\textrm{km}^2$) was derived as $q_p = 10^{-0.389-0.0424L_g}$ with high significance, r = 0.91. This indicates that the peak discharge per unit area of the unitgraph is inversely proportional to the watershed lag time.

8. The base length of the unitgraph $T_b$ in relation to the watershed lag $L_g$ was defined, with high significance, as $T_b = 1.14 + 0.564\left(\frac{L_g}{24}\right)$.

9. For the derivation of the IUH by the linear conceptual model, the storage constant K in terms of the main stream length L and slope S was adopted as $K = 0.1197\left(\frac{L}{\sqrt{S}}\right)$, with a highly significant correlation coefficient of 0.90. The Gamma function argument N, derived from such watershed characteristics as area A, river length L, centroid distance $L_{ca}$, and slope S, was found to be $N = 49.2A^{1.481}L^{-2.202}L_{ca}^{-1.297}S^{-0.112}$ with high significance, having an F value of 4.83 in the analysis of variance.

10. According to the linear conceptual model, the formulas for the time distribution, peak discharge, and time to peak discharge of the IUH, when the unit effective rainfall and the watershed area are given in 10 mm and $\textrm{km}^2$ respectively, are as follows.

Time distribution of the IUH: $u(0,t) = \frac{2.78A}{K\Gamma(N)}e^{-t/K}\left(\frac{t}{K}\right)^{N-1}$ (㎥/sec)

Peak discharge of the IUH: $u(0,t)_{max} = \frac{2.78A}{K\Gamma(N)}e^{-(N-1)}(N-1)^{N-1}$ (㎥/sec)

Time to peak discharge of the IUH: $t_p = (N-1)K$ (hrs)

11. Through mathematical analysis of the recession curve of the hydrograph, it was confirmed that the empirical formula for the Gamma function argument N is connected with the recession constant $K_1$, peak discharge $Q_p$, and time to peak discharge $t_p$ by $\frac{K'}{t_p} = \frac{1}{N-1} - \frac{\ln(t/t_p)}{\ln(Q/Q_p)}$, where $K' = \frac{1}{\ln K_1}$.

12. Linking the empirical formulas for the storage constant K and the Gamma function argument N, the unit hydrograph for ungaged small watersheds can be established from the following formulas for the time distribution and peak discharge of the IUH.

Time distribution of the IUH: $u(0,t) = 23.2AL^{-1}S^{1/2}F(N,K,t)$ (㎥/sec), where $F(N,K,t) = \frac{e^{-t/K}(t/K)^{N-1}}{\Gamma(N)}$

Peak discharge of the IUH: $u(0,t)_{max} = 23.2AL^{-1}S^{1/2}F(N)$ (㎥/sec), where $F(N) = \frac{e^{-(N-1)}(N-1)^{N-1}}{\Gamma(N)}$

13. The base length of the time-area diagram for the IUH is given by $C = 0.778\left(\frac{LL_{ca}}{\sqrt{S}}\right)^{0.423}$, with a correlation coefficient of 0.85, indicating its relation to the main stream length L, the centroid distance $L_{ca}$, and the slope S.

14. The relative errors in the peak discharge of the IUH by the linear conceptual model and the IUH by routing are 2.5 and 16.9 percent, respectively, with respect to the peak of the observed unitgraph. This confirms that, in the small watersheds, the IUH by the linear conceptual model approximates the observed unitgraph more closely than the IUH by flood routing.
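The Nash-model IUH in item 10 of the abstract can be evaluated directly; the sketch below checks that the peak falls at $t_p = (N-1)K$. The parameter values are illustrative, not fitted to any of the study watersheds:

```python
import math

def nash_iuh(t, A, K, N):
    """Nash-model IUH ordinate u(0, t) in m^3/sec, for watershed area A in
    km^2, storage constant K in hours, and Gamma-function argument N:
    u(0, t) = 2.78 * A / (K * Gamma(N)) * exp(-t/K) * (t/K)^(N-1)
    """
    return 2.78 * A / (K * math.gamma(N)) * math.exp(-t / K) * (t / K) ** (N - 1)

# Illustrative parameters: A = 100 km^2, K = 2 h, N = 3
A, K, N = 100.0, 2.0, 3.0
tp = (N - 1) * K   # time to peak discharge, per item 10
```

Evaluating the ordinate on either side of tp confirms it is the maximum, consistent with the closed-form peak discharge in the abstract.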


A Desirability Function-Based Multi-Characteristic Robust Design Optimization Technique (호감도 함수 기반 다특성 강건설계 최적화 기법)

  • Jong Pil Park;Jae Hun Jo;Yoon Eui Nahm
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.46 no.4
    • /
    • pp.199-208
    • /
    • 2023
  • The Taguchi method is one of the most popular approaches to design optimization, seeking performance characteristics that are robust to uncontrollable noise variables. However, most previous applications of the Taguchi method have addressed single-characteristic problems, while problems with multiple characteristics are more common in practice. The multi-criteria decision making (MCDM) problem is to select the optimal alternative among several by integrating a number of criteria that may conflict with each other. Representative MCDM methods include TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution), GRA (Grey Relational Analysis), PCA (Principal Component Analysis), fuzzy logic systems, and so on. Accordingly, numerous approaches to the multi-characteristic design problem combine the original Taguchi method with MCDM methods. In an MCDM problem, the criteria generally have different measurement units, so their physical values may differ greatly, which ultimately makes it difficult to integrate the measurements across criteria. A normalization technique is therefore usually employed to convert the criteria to a common scale. Four normalization techniques are commonly used in MCDM problems: vector normalization and linear scale transformation (max-min, max, or sum). However, these normalization techniques have several shortcomings and do not adequately reflect practical considerations. For example, if a certain alternative has the maximum data value for a certain criterion, that alternative is taken as the solution in the original process; but if the maximum data value does not satisfy the degree of fulfillment required by the designer or customer, the alternative should not be considered the solution. To solve this problem, this paper employs the desirability function proposed in our previous research. 
The desirability function uses upper and lower limits in the normalization process. The threshold points establishing the upper and lower limits express the degree of fulfillment required by the designer or customer. This paper proposes a new design optimization technique for the multi-characteristic design problem by integrating the Taguchi method with our desirability functions. Finally, the proposed technique is able to obtain an optimal solution that is robust across multiple performance characteristics.
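A minimal sketch of a one-sided desirability function with lower and upper limits, in the spirit described above. This is the standard Derringer-style larger-the-better form; the authors' own desirability function from their previous research may differ:

```python
def desirability(y, lower, upper, r=1.0):
    """One-sided (larger-the-better) desirability with lower/upper limits.

    Values at or below `lower` are unacceptable (d = 0); values at or above
    `upper` fully satisfy the designer/customer (d = 1); in between, the
    response is scaled with shape exponent r.
    """
    if y <= lower:
        return 0.0
    if y >= upper:
        return 1.0
    return ((y - lower) / (upper - lower)) ** r

def overall_desirability(ds):
    """Geometric mean of individual desirabilities (the usual composite
    score for a multi-characteristic problem)."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))
```

Note how the limits encode the required degree of fulfillment: a response at the raw maximum of the data still scores below 1 if it falls short of `upper`, which is exactly the shortcoming of max-based normalization that the abstract points out.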