• Title/Summary/Keyword: Constraint method

Search results: 1,685

Mechanical model for analyzing the water-resisting key stratum to evaluate water inrush from goaf in roof

  • Ma, Kai;Yang, Tianhong;Zhao, Yong;Hou, Xiangang;Liu, Yilong;Hou, Junxu;Zheng, Wenxian;Ye, Qiang
    • Geomechanics and Engineering
    • /
    • v.28 no.3
    • /
    • pp.299-311
    • /
    • 2022
  • The water-resisting key stratum (WKS) between coal seams is an important barrier that prevents water inrush from the goaf in the roof under multi-seam mining. The occurrence of water inrush can be evaluated effectively by analyzing the fracture of the WKS in multi-seam mining. A "long beam" water-inrush mechanical model was established, taking the multi-seam mining of the No. 2+3 and No. 8 coal seams in Xiqu Mine as the research case. The model comprehensively considers the pressure from the goaf, the gravity of the overburden rock, the gravity of the accumulated water, and the constraint conditions. The stress distribution expression of the WKS was obtained for different mining distances in the No. 8 coal seam, and a criterion for breakage at any point of the WKS was derived by introducing linear Mohr strength theory. Using the mechanical model, the fracture of the WKS in Xiqu Mine was examined, its breaking position was calculated, and the risk of water inrush was evaluated. Moreover, the breaking process of the WKS was reproduced with FLAC3D numerical software and analyzed against on-site microseismic monitoring data. The results showed that when the coal face of the No. 8 coal seam in Xiqu Mine advances to about 80 m ~ 100 m, the WKS is stretched and broken at a position 60 m ~ 70 m away from the open-off cut, increasing the risk of water inrush from the goaf in the roof. This finding matched the result of the microseismic analysis, confirming the reliability of the water-inrush mechanical model. This study therefore provides a theoretical basis for the prevention of water inrush from the goaf in the roof in Xiqu Mine, as well as a method for evaluating and monitoring such water inrush.
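
The breakage criterion step (linear Mohr strength theory) amounts to comparing the principal stresses at a point of the WKS against a linear strength envelope. The sketch below is a generic linear Mohr (Mohr-Coulomb) check with illustrative numbers, not the paper's actual stress expressions or rock parameters:

```python
import math

def mohr_coulomb_fails(sigma1, sigma3, cohesion, friction_deg, tensile_strength):
    """Check failure of a rock element under a linear Mohr envelope.

    sigma1/sigma3: major/minor principal stresses (compression positive, MPa).
    Fails in tension if sigma3 < -tensile_strength, or in shear if sigma1
    exceeds the Mohr-Coulomb limit for the given sigma3.
    """
    phi = math.radians(friction_deg)
    if sigma3 < -tensile_strength:          # tensile breakage
        return True
    limit = (sigma3 * (1 + math.sin(phi)) / (1 - math.sin(phi))
             + 2 * cohesion * math.cos(phi) / (1 - math.sin(phi)))
    return sigma1 > limit                    # shear breakage

# Illustrative values only (MPa, degrees), not from the paper
print(mohr_coulomb_fails(30.0, -2.5, cohesion=3.0, friction_deg=35.0,
                         tensile_strength=2.0))  # tensile failure -> True
```

Sweeping such a check over the stress distribution along the beam is what locates the breaking position.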

Analysis of a CubeSat Magnetic Cleanliness for the Space Science Mission (우주과학임무를 위한 큐브위성 자기장 청결도 분석)

  • Jo, Hye Jeong;Jin, Ho;Park, Hyeonhu;Kim, Khan-Hyuk;Jang, Yunho;Jo, Woohyun
    • Journal of Space Technology and Applications
    • /
    • v.2 no.1
    • /
    • pp.41-51
    • /
    • 2022
  • The CubeSat is a satellite platform that is widely used not only for Earth observation but also for space exploration, including magnetic field investigation missions that observe space physics phenomena with various configurations of the magnetometer instrument unit. For magnetic field measurement, the magnetometer should be placed far from the satellite body to minimize magnetic disturbances from the satellite itself, but its accommodation is limited by the volume constraints of a small satellite such as a CubeSat. In this paper, we analyzed how much the magnetic interference generated by the CubeSat can affect the reliability of the magnetic field measurement. For this analysis, we used a reaction wheel and torque rods, which have relatively high power consumption, as the major noise sources; the magnetic dipole moments of these parts were derived from the manufacturers' data sheets. We confirmed that the residual moment of the magnetic torquer located in the middle of a 3U CubeSat can produce a field reaching 36,000 nT at the outermost end of the CubeSat body in a space free of external magnetic fields. For accurate magnetic field measurements below 1 nT, we found that the magnetometer should be at least 0.6 m away from the CubeSat body. We expect this analysis method to play an important role in magnetic cleanliness analysis when designing a CubeSat for magnetic field measurement missions.
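
The standoff-distance reasoning in the abstract rests on the cubic falloff of a dipole field. A minimal sketch, treating a noise source as a point dipole with on-axis field B(r) = μ₀m/(2πr³); the moment value below is a placeholder, not a figure from the paper:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (T*m/A)

def on_axis_dipole_field(moment, r):
    """On-axis field magnitude (T) of a point magnetic dipole of moment (A*m^2)."""
    return MU0 * moment / (2 * math.pi * r ** 3)

def standoff_distance(moment, b_limit):
    """Distance (m) at which the on-axis dipole field falls to b_limit (T)."""
    return (MU0 * moment / (2 * math.pi * b_limit)) ** (1 / 3)

m = 0.1  # hypothetical residual dipole moment, A*m^2
print(on_axis_dipole_field(m, 0.17))   # field at the end of a ~0.17 m lever arm
print(standoff_distance(m, 1e-9))      # boom length needed for a 1 nT requirement
```

Because the field falls as 1/r³, halving the allowed disturbance only requires about a 26% longer boom.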

An Estimation on Average Service Life of Public Buildings in South Korea: In Case of RCC (우리나라 공공건물의 내용연수 추정: RCC를 중심으로)

  • Jung-Hoon Kwon;Jin-Hyung Cho;Hyun-Seung Oh;Sae-Jae Lee
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.46 no.1
    • /
    • pp.84-90
    • /
    • 2023
  • ASL (average service life) estimation of public buildings depends on how appropriately the maximum age of the asset is derived from the age records in statistical data held by public institutions, because that number yields a 'constrained' ASL. This is especially true because other studies have assumed that buildings follow an Iowa R3 curve. In this study, a survival rate of 1% is taken as the threshold at which the survival curve and the predicted-life curve almost coincide. Rather than resting on a theoretical basis, this setting follows the national statistical survey, in which the residual value of an asset is recognized as 10% of the acquisition value once the average service life has elapsed, and 1% once double the average service life has elapsed. The biggest constraint in fitting the statistical data to an Iowa curve is that the maximum ASL is set at 150% of R3, and the 'constrained' ASL is calculated by a proportional expression under the assumption that the Iowa curve is followed. Several further constraints were considered. First, the R3 disposal curve for RCC (reinforced cement concrete) buildings was prepared by the discarding method in the 2000 work, carried out jointly with the National Statistical Office to secure the maximum amount of vintage data, but the small sample size must be acknowledged. Since then, the National Statistical Office and the Bank of Korea have continued estimating Iowa curves for each asset class in the I-O table; a further limitation is that the asset classification uses the broad classification of buildings as a subcategory. Second, an asset with a lifespan of 115 years, acquired in 1905 and disposed of in 2020, would be omitted from this ASL calculation. Third, it is difficult to estimate the correct Iowa curve from the stub curve even when disposal data exist, because Korea has a relatively short construction history, with economic wealth accumulated only since the 1980s. In other words, the 'constrained' ASL under-estimates the true ASL. Given that Korea was a developing country in the past and underwent rapid economic development, environmental factors such as asset accumulation and economic capacity should be considered. Korea's period of accumulated economic wealth is short, and the history of 'proper' buildings faithful to building regulations and principles is likewise short; as a result, buildings 'not built properly' and 'proper' buildings are mixed in the data. In this study, the ASL of RCC public buildings was estimated at 70 years.

Three-dimensional Model Generation for Active Shape Model Algorithm (능동모양모델 알고리듬을 위한 삼차원 모델생성 기법)

  • Lim, Seong-Jae;Jeong, Yong-Yeon;Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.6 s.312
    • /
    • pp.28-35
    • /
    • 2006
  • Statistical models of shape variability based on active shape models (ASMs) have been successfully utilized to perform segmentation and recognition tasks in two-dimensional (2D) images. Three-dimensional (3D) model-based approaches are more promising than 2D approaches since they can bring in more realistic shape constraints for recognizing and delineating the object boundary. For 3D model-based approaches, however, building the 3D shape model from a training set of segmented instances of an object is a major challenge and currently remains an open problem. In building the 3D shape model, one essential step is to generate a point distribution model (PDM). Corresponding landmarks must be selected in all training shapes to generate the PDM, and manual determination of landmark correspondences is very time-consuming, tedious, and error-prone. In this paper, we propose a novel automatic method for generating 3D statistical shape models. Given a set of training 3D shapes, we generate a 3D model by 1) building the mean shape from the distance transform of the training shapes, 2) utilizing a tetrahedron method for automatically selecting landmarks on the mean shape, and 3) subsequently propagating these landmarks to each training shape via a distance labeling method. We investigate the accuracy and compactness of the 3D model for the human liver built from 50 segmented individual CT data sets. The proposed method is general, requires no shape-specific assumptions, and can be applied to other data sets.
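
Once corresponding landmarks exist on every training shape, the statistical shape model itself is a standard PCA over the stacked landmark coordinates. A minimal sketch of that final step on toy data (the paper's contribution, the tetrahedron/distance-labeling correspondence pipeline, is not reproduced here):

```python
import numpy as np

def build_pdm(shapes):
    """Build a point distribution model from corresponded landmark sets.

    shapes: array (n_shapes, n_landmarks, 3); landmarks are assumed already
    in correspondence across shapes (the hard part the paper automates).
    Returns the mean shape and the principal modes of variation.
    """
    n_shapes, n_landmarks, _ = shapes.shape
    X = shapes.reshape(n_shapes, n_landmarks * 3)
    mean = X.mean(axis=0)
    Xc = X - mean
    # PCA via SVD of the centered data matrix
    _, s, vt = np.linalg.svd(Xc, full_matrices=False)
    eigenvalues = (s ** 2) / (n_shapes - 1)
    return mean.reshape(n_landmarks, 3), vt, eigenvalues

# Toy data: 10 training "shapes" of 5 landmarks each, jittered around a base
rng = np.random.default_rng(0)
base = rng.normal(size=(5, 3))
shapes = base + 0.01 * rng.normal(size=(10, 5, 3))
mean_shape, modes, eigvals = build_pdm(shapes)
```

Model compactness, as studied in the paper, is read off from how quickly the eigenvalues decay.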

Optimum Design of Two Hinged Steel Arches with I Sectional Type (SUMT법(法)에 의(依)한 2골절(滑節) I형(形) 강재(鋼材) 아치의 최적설계(最適設計))

  • Jung, Young Chae
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.12 no.3
    • /
    • pp.65-79
    • /
    • 1992
  • This study concerns the optimal design of two-hinged steel arches with an I-type cross section and aims at the exact analysis of the arches and a safe, economical structural design. An analysis method that introduces the finite difference method, accounting for the displacements of the structure during analysis, is used to eliminate analysis error and determine the sectional forces of the structure. The optimization problems are formulated with objective functions and constraints that take the sectional dimensions (B, D, $t_f$, $t_w$) as design variables. The objective function is the total weight of the arch, and the constraints are derived from criteria on the working stress and the minimum dimensions of the flange and web based on the steel bridge part of the Korean standard code for road bridges, together with an economic-depth constraint for the I-type section, an upper limit on the web depth, and a lower limit on the flange breadth. The SUMT method, using the modified Newton-Raphson direction method, is introduced to solve the resulting nonlinear programming problems and is tested through numerical examples. The developed optimal design program is examined through numerical examples for various arches, and the results are compared and analyzed to examine the possibility of optimization and the applicability and convergence of this algorithm, including comparison with the numerical examples of reference (30). Correlative equations between the optimal sectional areas and moments of inertia are derived from the various numerical optimal design results of this study.
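
SUMT solves a constrained problem as a sequence of unconstrained penalized minimizations, each handled here by a Newton-Raphson inner loop. A one-dimensional toy sketch with an interior barrier, purely to show the mechanism (not the paper's arch formulation or its modified direction method):

```python
def sumt_minimize(f, fprime, f2prime, x0, r=1.0, shrink=0.1, tol=1e-8):
    """SUMT with an interior (barrier) penalty and a 1-D Newton inner loop.

    Minimizes f(x) subject to x >= 1 via phi(x) = f(x) + r/(x - 1),
    driving the barrier weight r -> 0.  Toy illustration only.
    """
    x = x0
    while r > 1e-12:
        for _ in range(100):                 # Newton-Raphson on phi'(x) = 0
            g1 = fprime(x) - r / (x - 1) ** 2
            g2 = f2prime(x) + 2 * r / (x - 1) ** 3
            step = g1 / g2
            x -= step
            if x <= 1:                        # keep iterate strictly interior
                x = 1 + 1e-9
            if abs(step) < tol:
                break
        r *= shrink
    return x

# minimize f(x) = x^2 subject to x >= 1; the constrained optimum is x = 1
x_opt = sumt_minimize(lambda x: x * x, lambda x: 2 * x, lambda x: 2.0, x0=2.0)
```

As the barrier weight shrinks, the unconstrained minimizers approach the constrained optimum from inside the feasible region.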


Prediction of Expected Residual Useful Life of Rubble-Mound Breakwaters Using Stochastic Gamma Process (추계학적 감마 확률과정을 이용한 경사제의 기대 잔류유효수명 예측)

  • Lee, Cheol-Eung
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.31 no.3
    • /
    • pp.158-169
    • /
    • 2019
  • A probabilistic model that can predict the residual useful lifetime of a structure is formulated using the gamma process, one of the stochastic processes. The formulated stochastic model can take into account both the sampling uncertainty associated with damage measured up to the present and the temporal uncertainty of cumulative damage over time. A method for estimating the parameters of the stochastic model is additionally proposed, introducing the least squares method and the method of moments, so that the age of a structure, its operational environment, and the evolution of damage with time can be considered. Some features related to the residual useful lifetime are first investigated through a sensitivity analysis on the parameters under a simple setting of a single damage measurement at the current age. The stochastic model is then applied directly to a rubble-mound breakwater. The parameters of the gamma process are estimated from several sets of experimental data on the damage processes of the armor rocks of a rubble-mound breakwater. The expected damage levels over time, numerically simulated with the estimated parameters, agree very well with those from the flume tests. Various numerical calculations show that the probabilities of exceeding the failure limit converge, after a sufficiently long time, to the constraint that the model must satisfy. Meanwhile, the expected residual useful lifetimes evaluated from the failure probabilities differ according to the behavior of the damage history. In particular, as the coefficient of variation of cumulative damage becomes large, the expected residual useful lifetimes deviate significantly from those of the deterministic regression model, mainly because the sampling and temporal uncertainties associated with damage cause the first time to failure to be widely distributed. Therefore, the stochastic model presented in this paper for predicting the residual useful lifetime of a structure can properly implement a probabilistic assessment of the current damage state as well as account for the temporal uncertainty of future cumulative damage.
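
The gamma-process machinery can be illustrated with a small simulation. Assuming a stationary gamma process whose increments over a step dt are Gamma(shape = a·dt, scale = b), so that E[X(t)] = a·b·t, the parameters are recovered by the method of moments from the observed increments; the parameter values below are illustrative only, not the paper's fitted values:

```python
import random

def simulate_gamma_damage(a, b, dt, n_steps, rng):
    """Cumulative damage path of a stationary gamma process.

    Each increment over dt is Gamma(shape=a*dt, scale=b), so the
    expected damage grows linearly: E[X(t)] = a*b*t.
    """
    x, path = 0.0, [0.0]
    for _ in range(n_steps):
        x += rng.gammavariate(a * dt, b)
        path.append(x)
    return path

def fit_method_of_moments(path, dt):
    """Recover (a, b) from the sample mean/variance of the increments."""
    incs = [path[i + 1] - path[i] for i in range(len(path) - 1)]
    mean = sum(incs) / len(incs)
    var = sum((x - mean) ** 2 for x in incs) / (len(incs) - 1)
    b_hat = var / mean            # scale
    a_hat = mean / (b_hat * dt)   # shape rate
    return a_hat, b_hat

rng = random.Random(42)
path = simulate_gamma_damage(a=2.0, b=0.5, dt=1.0, n_steps=5000, rng=rng)
a_hat, b_hat = fit_method_of_moments(path, dt=1.0)
```

The residual-lifetime distribution then follows from the first time such simulated paths cross the failure limit.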

Integrity evaluation of grouting in umbrella arch methods by using guided ultrasonic waves (유도초음파를 이용한 강관보강다단 그라우팅의 건전도 평가)

  • Hong, Young-Ho;Yu, Jung-Doung;Byun, Yong-Hoon;Jang, Hyun-Ick;You, Byung-Chul;Lee, Jong-Sub
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.15 no.3
    • /
    • pp.187-199
    • /
    • 2013
  • The umbrella arch method (UAM), used to improve the stability of the ground around a tunnel, has been widely applied in tunnel construction projects because it provides both reinforcement and waterproofing. The purpose of this study is to develop a non-destructive technique for evaluating the integrity of UAM boreholes and to assess its applicability in the field. To investigate the variation of frequency with grouted length, specimens with different grouted ratios were prepared under two constraint conditions (free boundary and embedded). The hammer impact reflection method, in which excitation and reception occur simultaneously at the head of the pipe, was used: guided waves generated by striking the pipe with a hammer are reflected at the tip and return to the head, where the signals are received by an acoustic emission (AE) sensor. For the laboratory experiments, specimens were prepared with grouted ratios of 25%, 50%, 75%, and 100%; field tests were also performed to verify the applicability of the technique. The fast Fourier transform and the wavelet transform were applied to analyze the measured waves. The experiments show that the grouted ratio has little effect on the velocities of the guided waves, while the main frequencies of the reflected waves tend to decrease with increasing grouted length in the time-frequency domain. This study suggests that non-destructive tests using guided ultrasonic waves are effective for evaluating the borehole integrity of the UAM in the field.
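
The frequency-domain step (picking the main frequency of the reflected wave) can be sketched with a plain discrete Fourier transform. The signal below is synthetic; the paper's actual signals come from the AE sensor on the pipe head:

```python
import cmath
import math

def dft_peak_frequency(signal, fs):
    """Return the dominant frequency (Hz) of a real signal via a naive DFT."""
    n = len(signal)
    best_k, best_mag = 1, 0.0           # skip the DC bin k = 0
    for k in range(1, n // 2):
        s = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        if abs(s) > best_mag:
            best_k, best_mag = k, abs(s)
    return best_k * fs / n

fs = 1000.0                              # sampling rate (Hz), illustrative
sig = [math.sin(2 * math.pi * 50 * t / fs) for t in range(200)]
peak = dft_peak_frequency(sig, fs)       # dominant frequency of the test tone
```

In practice an FFT (and, as in the paper, a wavelet transform for the time-frequency view) replaces the naive loop; tracking how `peak` shifts downward is the proposed indicator of grouted length.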

The Optimal Configuration of Arch Structures Using Force Approximate Method (부재력(部材力) 근사해법(近似解法)을 이용(利用)한 아치구조물(構造物)의 형상최적화(形狀最適化)에 관한 연구(研究))

  • Lee, Gyu Won;Ro, Min Lae
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.13 no.2
    • /
    • pp.95-109
    • /
    • 1993
  • In this study, the optimal configuration of arch structures is determined by a decomposition technique. The objective is to provide a method for optimizing the shapes of both two-hinged and fixed arches. The optimal-configuration problem includes interaction formulas and working stress and buckling stress constraints, on the assumption that the arch rib can be approximated by a finite number of straight members. On the first level, buckling loads are calculated from the relation between the stiffness matrix and the geometric stiffness matrix using the Rayleigh-Ritz method, and the number of structural analyses is decreased by approximating member forces through a sensitivity analysis using the design space approach. The objective function is formulated as the total weight of the structure, and the constraints include the working stress, the buckling stress, and the side limits. On the second level, the nodal point coordinates of the arch structure are used as design variables, with the weight again taken as the objective function. By treating the nodal coordinates as design variables, the optimization can be reduced to an unconstrained optimal design problem, which is easy to solve. Numerical comparisons with results obtained for several arch structures with various shapes and constraints show that the convergence rate is very fast regardless of the constraint types and the configuration of the arch structures, and the optimal configurations obtained in this study are almost identical to those of other studies. The total weight could be decreased by 17.7%-91.7% when an optimal configuration is achieved.


Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy-logic-based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve, in a very natural way, many problems that are very difficult to quantify at an analytical level. This paper presents a solution for handling membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. Memorizing fuzzy sets, i.e., their membership functions, has always been one of the more problematic issues for hardware implementation, owing to the rather large memory space needed. To simplify the implementation, the membership functions are commonly [1,2,8,9,10,11] limited to triangular, trapezoidal, or other pre-defined shapes. Such functions cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.); however, this results in a loss of computational power due to computation of the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e., by fixing a finite number of points and memorizing the values of the membership functions at those points [3,10,14,15]. Such a solution provides satisfactory computational speed and very high precision of definition, and gives users the opportunity to choose membership functions of any shape; however, significant memory waste can result, since for each fuzzy set many elements of the universe of discourse may have a membership value of zero. It has also been observed that in almost all cases the common points among fuzzy sets, i.e., points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, without restricting the shapes of the membership functions, strongly reduces the computation time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common in fuzzy controllers and to which we refer in the following. This term set has a universe of discourse with 128 elements (for good resolution), 8 fuzzy sets describing the term set, and 32 discretization levels for the membership values. The numbers of bits needed for these specifications are 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a memory word is defined by: Length = nfm * (dm(m) + dm(fm)), where nfm is the maximum number of non-null values on any element of the universe of discourse, dm(m) is the bit width of a membership value, and dm(fm) is the bit width of the index of the corresponding membership function. In our case Length = 24, and the memory dimension is therefore 128*24 bits. Had we chosen to memorize all values of the membership functions, each memory row would store the membership value of every fuzzy set: the word dimension would be 8*5 bits, and the memory would occupy 128*40 bits. Consistently with our hypothesis, in Fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets. Focusing on elements 32, 64, and 96 of the universe of discourse, they are memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the program memory (μCOD) is given as input to a comparator (combinatory net): if the index equals the bus value, one of the non-null weights derived from the rule is produced as output; otherwise the output is zero (Fig. 2). Clearly, the memory dimension of the antecedent is thereby reduced, since only non-null values are memorized; moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important is the maximum number of membership functions (nfm) having a non-null value on any element of the universe of discourse. From our studies of fuzzy systems, typically nfm ≤ 3 and there are at most 16 membership functions; in any case, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions of specific shapes; the number of fuzzy sets and the resolution of the vertical axis have very little influence on memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values on any element of the universe of discourse is limited, a constraint that is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
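
The word-length arithmetic in the abstract can be written out directly. With nfm non-null memberships per element, dm(m) bits per membership value, and dm(fm) bits for the function index (bit widths taken as ceil(log2) of the counts, which matches the abstract's numbers):

```python
import math

def word_length(nfm, truth_levels, n_functions):
    """Bits per memory word: nfm * (dm(m) + dm(fm))."""
    dm_m = math.ceil(math.log2(truth_levels))    # bits per membership value
    dm_fm = math.ceil(math.log2(n_functions))    # bits for the function index
    return nfm * (dm_m + dm_fm)

def memory_bits(universe_size, nfm, truth_levels, n_functions):
    """Total antecedent memory for the compact scheme."""
    return universe_size * word_length(nfm, truth_levels, n_functions)

# The paper's example: |U| = 128, 8 fuzzy sets, 32 truth levels, nfm = 3
compact = memory_bits(128, 3, 32, 8)              # 128 * 24 bits
vectorial = 128 * 8 * math.ceil(math.log2(32))    # memorize every value: 128 * 40 bits
```

The comparison reproduces the abstract's 128*24 vs. 128*40 bit figures: a 40% memory saving at equal time performance.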


Index-based Searching on Timestamped Event Sequences (타임스탬프를 갖는 이벤트 시퀀스의 인덱스 기반 검색)

  • 박상현;원정임;윤지희;김상욱
    • Journal of KIISE:Databases
    • /
    • v.31 no.5
    • /
    • pp.468-478
    • /
    • 2004
  • It is essential in various application areas of data mining and bioinformatics to effectively retrieve the occurrences of interesting patterns from sequence databases. For example, consider a network event management system that records the types and timestamp values of events occurring in a specific network component (e.g., a router). A typical query for finding temporal causal relationships among network events is: 'Find all occurrences of CiscoDCDLinkUp that are followed by MLMStatusUP, subsequently followed by TCPConnectionClose, under the constraint that the interval between the first two events is not larger than 20 seconds and the interval between the first and third events is not larger than 40 seconds.' This paper proposes an indexing method that enables such queries to be answered efficiently. Unlike previous methods that rely on inefficient sequential scans or on data structures not easily supported by DBMSs, the proposed method uses a multi-dimensional spatial index, proven to be efficient both in storage and in search, to find the answers quickly without false dismissals. Given a sliding window W, the input to the multi-dimensional spatial index is an n-dimensional vector whose i-th element is the interval between the first event of W and the first occurrence of event type Ei in W. Here, n is the number of event types that can occur in the system of interest. The problem of the 'dimensionality curse' may arise when n is large; therefore, we use dimension selection or event-type grouping to avoid it. The experimental results reveal that the proposed technique can be a few orders of magnitude faster than the sequential scan and ISO-Depth index methods.
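
The vector construction described above can be sketched directly: for a window W, coordinate i is the interval between W's first event and the first occurrence of event type Ei inside W, and the example query becomes a range predicate on that vector. The event data below are illustrative:

```python
def window_vector(window, event_types, missing=float("inf")):
    """Map a window of (timestamp, type) events to an interval vector.

    Coordinate i is the gap between the window's first event and the
    first occurrence of event_types[i]; 'missing' marks absent types.
    """
    t0 = window[0][0]
    vec = []
    for etype in event_types:
        first = next((t for t, e in window if e == etype), None)
        vec.append(first - t0 if first is not None else missing)
    return vec

types = ["CiscoDCDLinkUp", "MLMStatusUP", "TCPConnectionClose"]
w = [(0, "CiscoDCDLinkUp"), (15, "MLMStatusUP"), (38, "TCPConnectionClose")]
v = window_vector(w, types)        # [0, 15, 38]

# The abstract's example query as a range predicate on the vector:
matches = v[1] <= 20 and v[2] <= 40
```

Each such vector is a point in n-dimensional space, so the temporal query maps to a rectangular range search that a spatial index answers without false dismissals.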