• Title/Summary/Keyword: Finite-Field


Stability Analysis of Open Pit Slopes in the Pasir Coal Field, Indonesia (인도네시아 Pasir 탄전에서의 노천채탄장 사면의 안전성해석)

  • 정소걸;선우춘;한공창;신희순;박연준
    • Proceedings of the Korean Society for Rock Mechanics Conference
    • /
    • 2000.09a
    • /
    • pp.183-193
    • /
    • 2000
  • A series of studies including geological logging data analysis, detailed geological survey, rock mass evaluation, and in-situ and laboratory tests was carried out to determine the strength and mechanical properties of the rock. Stability analyses of the slopes were performed in order to design the pit slope and individual benches in the Roto Pit of the Pasir coal field, using stereographic projection analysis and numerical methods. The bedding plane is one of the major discontinuities in the Roto Pit; its dip is about $60^{\circ}$ in the northern part and $83^{\circ}$ in the southern part, steepening from north to south. Plane and toppling failures are present on many slopes. In laboratory tests, the average uniaxial compressive strength of mudstone was 9 MPa and that of weak sandstone was 10 MPa. In-situ tests showed that the rocks of the Roto north mining area are mostly weak enough to be classified in grades from R2 (weak) to R3 (medium strong), and the coal is classified in grades from R1 (very weak) to R2 (weak). Detailed stability analyses were carried out on four areas of Roto north (east, west, south and north) and two areas of Roto south (east and west). In this paper, the minimum factor of safety was set to 1.2, a general criterion for open pit mines. Using stereographic projection analysis and the limit equilibrium method, slope angles of 30~$36^{\circ}$ were calculated for a factor of safety greater than 1.2. These results were then re-evaluated by numerical analysis using FLAC, and the final slope angles were determined by the rationale described above. A final slope of 34 degrees can guarantee stability for the eastern part of the Roto north area, 33 degrees for the western part, 35 degrees for the northern part and 35 degrees for the southern part. For the Roto south area, 36 degrees was suggested for both sides of the pit. Once the pit slope is designed based on the stability analysis and safety measures, the stability of the slope should be checked periodically during the mining operations. Because the slope face will be exposed to rainfall for a long time, a study of preventive measures against weathering and erosion is highly recommended.
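As a rough illustration of the limit-equilibrium screening step mentioned above, the factor of safety for a dry planar slide can be sketched as the ratio of resisting to driving forces along the failure plane. The function and all inputs below are hypothetical illustrations, not the paper's data or its actual analysis (which also used stereographic projection and FLAC):

```python
import math

# Minimal limit-equilibrium sketch for a dry planar slide (hypothetical).
def planar_fos(c, phi_deg, psi_deg, W, A):
    """Factor of safety = resisting / driving forces.
    c: cohesion (kPa), phi_deg: friction angle, psi_deg: failure-plane dip,
    W: block weight (kN), A: sliding-plane area (m^2)."""
    psi, phi = math.radians(psi_deg), math.radians(phi_deg)
    resisting = c * A + W * math.cos(psi) * math.tan(phi)
    driving = W * math.sin(psi)
    return resisting / driving
```

With c = 0 and phi equal to the plane dip, the expression reduces to tan(phi)/tan(psi) = 1, the classical limiting case; designs are then checked against a criterion such as the 1.2 used in the paper.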


A Joint Application of DRASTIC and Numerical Groundwater Flow Model for The Assessment of Groundwater Vulnerability of Buyeo-Eup Area (DRASTIC 모델 및 지하수 수치모사 연계 적용에 의한 부여읍 일대의 지하수 오염 취약성 평가)

  • Lee, Hyun-Ju;Park, Eun-Gyu;Kim, Kang-Joo;Park, Ki-Hoon
    • Journal of Soil and Groundwater Environment
    • /
    • v.13 no.1
    • /
    • pp.77-91
    • /
    • 2008
  • In this study, we developed a technique that jointly applies DRASTIC, the most widely used tool for estimating groundwater vulnerability to aqueous-phase contaminants infiltrating from the surface, and a groundwater flow model to assess groundwater contamination potential. The technique was then applied to the Buyeo-eup area in Buyeo-gun, Chungcheongnam-do, Korea. The depth-to-water thematic layer required by the DRASTIC model is known to be the input to which the output is most sensitive, yet generally only a few observations at a few times are available. To overcome this practical shortcoming, both steady-state and transient groundwater level distributions were simulated using MODFLOW, a finite difference numerical model. In the vulnerability assessment, the results based on numerically simulated groundwater levels proved much more practical than those based on cokriging. First, the simulation results allow a practitioner to examine temporally comprehensive vulnerabilities. Second, the method incorporates a wide variety of data, such as field-observed hydrogeologic parameters as well as topographic relief. A depth-to-water layer generated through conventional geostatistical methods cannot incorporate temporally variable data such as the seasonal variation of the recharge rate. We found that the vulnerabilities obtained from the geostatistical method and from the steady-state groundwater flow simulation show similar patterns. By applying the transient simulation results to the DRASTIC model, we also found that the vulnerability shows sharp seasonal variation due to changes in groundwater recharge; the change is most pronounced between summer, with the highest recharge rate, and winter, with the lowest. Our research indicates that numerical modeling can be a useful tool for temporal as well as spatial interpolation of the depth to water when the number of observations is inadequate for vulnerability assessment through conventional techniques.
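The DRASTIC index underlying the assessment above is a weighted sum of seven hydrogeologic ratings. A minimal sketch using the conventional weights follows; the paper's actual ratings, rating maps, and any local weight modifications are not reproduced here:

```python
# Standard DRASTIC weights (conventional, non-pesticide version).
DRASTIC_WEIGHTS = {
    "D": 5,  # Depth to water
    "R": 4,  # net Recharge
    "A": 3,  # Aquifer media
    "S": 2,  # Soil media
    "T": 1,  # Topography
    "I": 5,  # Impact of the vadose zone
    "C": 3,  # hydraulic Conductivity
}

def drastic_index(ratings):
    """ratings: dict mapping each parameter letter to a rating (1-10).
    Higher index = higher vulnerability; range is 23 to 230."""
    return sum(DRASTIC_WEIGHTS[p] * ratings[p] for p in DRASTIC_WEIGHTS)
```

In the joint approach described in the abstract, the "D" rating would be re-derived per season from MODFLOW-simulated water levels rather than from a static interpolated surface.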

Development of Three-Dimensional Trajectory Model for Detecting Source Region of the Radioactive Materials Released into the Atmosphere (대기 누출 방사성물질 선원 위치 추적을 위한 3차원 궤적모델 개발)

  • Suh, Kyung-Suk;Park, Kihyun;Min, Byung-Il;Kim, Sora;Yang, Byung-Mo
    • Journal of Radiation Protection and Research
    • /
    • v.41 no.1
    • /
    • pp.31-39
    • /
    • 2016
  • Background: With the increase of nuclear facilities such as nuclear power and reprocessing plants in neighboring countries including China, Taiwan, North Korea and Japan, as well as in South Korea, comprehensive countermeasures for the analysis of nuclear activities are necessary. South Korea and the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) operate monitoring instruments to detect radionuclides released into the air. Estimating the origin of the measured radionuclides is as important as the monitoring analysis itself for the investigation and verification of nuclear activities in neighboring countries. Materials and methods: A three-dimensional forward/backward trajectory model was developed to estimate the origin of radionuclides from a covert nuclear activity. The model consists of forward and backward modules that track particle positions using a finite difference method. Results and discussion: The three-dimensional trajectory model was validated using measured data from the Chernobyl accident. The calculated results agreed well when high-concentration measurements and locations near the release point were used. The trajectory model showed some uncertainty depending on the release time, release height and time interval of the trajectory at each release point. An atmospheric dispersion model, the long-range accident dose assessment system (LADAS), based on the field-of-regard (FOR) technique, was applied to reduce the uncertainties of the trajectory model and to improve the estimation of the radioisotope emission area. Conclusion: The technique developed in this study can estimate the release area and origin of covert nuclear activities from radioisotopes measured at monitoring stations, and it could serve as a critical tool for improving capability in the nuclear safety field.
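The forward/backward trajectory idea can be sketched as a single finite-difference advection step whose time direction is reversed for backtracking from a monitoring station toward a candidate release point. The uniform analytic wind field below is an illustrative assumption; the actual model ingests gridded meteorological fields:

```python
# Toy forward/backward trajectory step (illustrative, not the paper's model).
def advect(pos, wind, dt, direction=+1):
    """One finite-difference step of dx/dt = u(x).
    pos: (x, y, z); wind: callable returning (u, v, w) at a point;
    direction=+1 steps forward in time, -1 steps backward."""
    x, y, z = pos
    u, v, w = wind(x, y, z)
    return (x + direction * u * dt,
            y + direction * v * dt,
            z + direction * w * dt)
```

For a steady wind field, stepping forward and then backward with the same dt returns the starting position exactly, which is the property backtracking relies on; real fields vary in time, which is one source of the uncertainty the abstract mentions.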

About Short-stacking Effect of Illite-smectite Mixed Layers (일라이트-스멕타이트 혼합층광물의 단범위적층효과에 대한 고찰)

  • Kang, Il-Mo
    • Economic and Environmental Geology
    • /
    • v.45 no.2
    • /
    • pp.71-78
    • /
    • 2012
  • Illite-smectite mixed layers (I-S), occurring authigenically in diagenetic and hydrothermal environments, react toward more illite-rich phases as temperature and potassium ion concentration increase. For that reason, I-S is often used as a geothermometer and/or geochronometer in hydrocarbon or ore mineral exploration. Generally, I-S shows X-ray powder diffraction (XRD) patterns of ultra-thin lamellar structures, which consist of a restricted number of silicate layers (normally 5 ~ 15) stacked parallel to the a-b plane. This ultra-thinness is known to decrease the I-S expandability (%S) below the theoretically expected value (the short-stacking effect). We attempt here to quantify the short-stacking effect of I-S using the difference between two types of expandability: the maximum expandability ($%S_{Max}$) of infinite stacks of fundamental particles (physically inseparable smallest units), and the expandability of finite particle stacks normally measured by XRD ($%S_{XRD}$). Eleven I-S samples from the Geumseongsan volcanic complex, Uiseong, Gyeongbuk, were analyzed for $%S_{XRD}$ and average coherent scattering thickness (CST) after size separation under 1 ${\mu}m$. Average fundamental particle thickness ($N_f$) and $%S_{Max}$ were determined from $%S_{XRD}$ and CST using inter-parameter relationships of I-S layer structures. The discrepancy between $%S_{Max}$ and $%S_{XRD}$ (${\Delta}%S$) suggests that the maximum short-stacking effect occurs at approximately 20 $%S_{XRD}$, which corresponds to I-S layer structures consisting of, on average, ca. 3-layered fundamental particles ($N_f{\approx}3$). By inferring the $%S_{XRD}$ range of each Reichweite using the $%S_{XRD}$ vs. $N_f$ diagram of Kang et al. (2002), we confirm that fundamental particle thickness is a determinant factor for I-S Reichweite, and that the short-stacking effect shifts the $%S_{XRD}$ range of each Reichweite toward smaller $%S_{XRD}$ values than those theoretically predicted using junction probability.

The Contact Metamorphism Due to the Intrusion of the Ogcheon and Boeun granites (옥천화강암과 보은화강암 관입에 의한 접촉변성작용)

  • 오창환;김창숙;박영도
    • The Journal of the Petrological Society of Korea
    • /
    • v.6 no.2
    • /
    • pp.133-149
    • /
    • 1997
  • In the metapelites around the Ogcheon granite, the metamorphic grade increases from the biotite zone through the andalusite zone to the sillimanite zone towards the intrusion contact. In the metabasites around the Boeun granite, the metamorphic grade increases from the transitional zone between the greenschist and amphibolite facies through the amphibolite facies to the upper amphibolite facies towards the intrusion contact. In the Doiri area, located near the intrusion contact of the Boeun granite, sillimanite- and andalusite-bearing metapelites are found within 500 m of the contact. This evidence indicates that the Ogcheon and Boeun granites caused low-P/T type contact metamorphism in the country rocks. The P-T condition of contact metamorphism due to the intrusion of the Ogcheon granite is $540{\pm}40^{\circ}C$, $2.8{\pm}0.9$ kb. The temperature condition of contact metamorphism due to the intrusion of the Boeun granite is $698{\pm}28^{\circ}C$. The wide compositional range of amphibole and plagioclase in the metabasites around the Boeun granite is due to the immiscibility gap of amphibole and plagioclase and to unstable relict compositions resulting from incomplete metamorphic reactions. The compositional range of stable amphibole and plagioclase decreases as metamorphic grade increases, owing to closing of the immiscibility gap. The thermal effect of contact metamorphism due to the intrusion of the Ogcheon and Boeun granites was calculated using the CONTACT2 program, based on a two-dimensional finite difference method. To estimate the thermal effect of an intruded pluton, a circle 10 km in diameter and a triangle with 20 km sides were used for the intrusion geometries of the Ogcheon granite and the Boeun granite, respectively. The results from the field and modeling studies suggest that the intrusion temperature of the Ogcheon granite is close to $800^{\circ}C$ and that of the Boeun granite is higher than $1000^{\circ}C$. However, the intrusion temperatures could be lower than suggested if the geothermal gradient prior to the intrusion of the Ogcheon and Boeun granites was higher than the normal continental geothermal gradient.
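The CONTACT2-style calculation rests on a two-dimensional finite difference solution of the heat conduction equation around a hot pluton. A minimal explicit-scheme sketch follows; the grid, diffusivity, temperatures and geometry are illustrative assumptions, not the paper's inputs or the CONTACT2 code:

```python
import numpy as np

# One explicit finite-difference step of 2-D heat conduction,
# dT/dt = alpha * (d2T/dx2 + d2T/dy2), with fixed-temperature boundaries.
def diffuse(T, alpha, dx, dt):
    """Five-point Laplacian update; stable for alpha*dt/dx**2 <= 0.25."""
    Tn = T.copy()
    Tn[1:-1, 1:-1] += alpha * dt / dx**2 * (
        T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:] + T[1:-1, :-2]
        - 4.0 * T[1:-1, 1:-1])
    return Tn
```

Initializing a hot region (the pluton, e.g. 800 °C) inside cooler country rock and iterating this step traces how the contact aureole heats and then decays, which is how modeled peak temperatures are compared with the field-derived P-T estimates.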


A Fast Algorithm for Computing Multiplicative Inverses in GF(2$^{m}$) using Factorization Formula and Normal Basis (인수분해 공식과 정규기저를 이용한 GF(2$^{m}$ ) 상의 고속 곱셈 역원 연산 알고리즘)

  • 장용희;권용진
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.30 no.5_6
    • /
    • pp.324-329
    • /
    • 2003
  • Public-key cryptosystems such as Diffie-Hellman key distribution and elliptic curve cryptosystems are built on the operations defined in GF(2$^{m}$): addition, subtraction, multiplication and multiplicative inversion. These operations should be computed at high speed in order to implement such cryptosystems efficiently. Among them, multiplicative inversion is the most time-consuming and has therefore been the object of much investigation. Fermat's theorem says $\beta^{-1}=\beta^{2^m-2}$, where $\beta^{-1}$ is the multiplicative inverse of $\beta\in$GF(2$^{m}$). Therefore, to compute the multiplicative inverse of arbitrary elements of GF(2$^{m}$), it is most important to reduce the number of multiplications by decomposing 2$^{m}$-2 efficiently. Among the many algorithms on this subject, the algorithm proposed by Itoh and Tsujii [2] reduced the required number of multiplications to O(log m) by using a normal basis. Furthermore, a few papers have presented algorithms improving on Itoh and Tsujii's, but they have demerits such as complicated decomposition processes [3,5]. In this paper, for the case of 2$^{m}$-2 that is mainly used in practical applications, an efficient algorithm is proposed for computing the multiplicative inverse at high speed by using both the factorization formula x$^3$-y$^3$=(x-y)(x$^2$+xy+y$^2$) and a normal basis. The number of multiplications in the proposed algorithm is smaller than in the algorithm of Itoh and Tsujii, and the algorithm decomposes 2$^{m}$-2 more simply than other proposed algorithms.
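Fermat-based inversion can be sketched with a plain square-and-multiply chain that evaluates $\beta^{2^m-2}=\beta^2\cdot\beta^4\cdots\beta^{2^{m-1}}$. The example below uses a polynomial basis for GF(2$^8$) with the AES reduction polynomial x$^8$+x$^4$+x$^3$+x+1 as a concrete assumption; the paper instead works in a normal basis, where squaring is a free cyclic shift, and with a cleverer decomposition of 2$^{m}$-2 that needs fewer general multiplications:

```python
# Inversion in GF(2^m) via Fermat's theorem: beta^(2^m - 2).
# Illustrative parameters: m = 8, reduction polynomial 0x11B (AES field).
def gf_mul(a, b, poly=0x11B, m=8):
    """Carry-less multiplication modulo the reduction polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> m:        # degree reached m: reduce
            a ^= poly
    return r

def gf_inv(beta, poly=0x11B, m=8):
    """beta^(2^m - 2) = product of beta^(2^k) for k = 1 .. m-1."""
    result, base = 1, gf_mul(beta, beta, poly, m)  # base = beta^2
    for _ in range(m - 1):
        result = gf_mul(result, base, poly, m)
        base = gf_mul(base, base, poly, m)         # square each round
    return result
```

This naive chain costs m-1 general multiplications plus squarings; the point of Itoh-Tsujii and of the factorization-based decomposition in the paper is to cut the general multiplications to O(log m).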

Truncation Artifact Reduction Using Weighted Normalization Method in Prototype R/F Chest Digital Tomosynthesis (CDT) System (프로토타입 R/F 흉부 디지털 단층영상합성장치 시스템에서 잘림 아티팩트 감소를 위한 가중 정규화 접근법에 대한 연구)

  • Son, Junyoung;Choi, Sunghoon;Lee, Donghoon;Kim, Hee-Joung
    • Journal of the Korean Society of Radiology
    • /
    • v.13 no.1
    • /
    • pp.111-118
    • /
    • 2019
  • Chest digital tomosynthesis has become a practical imaging modality because it can solve the problem of anatomical overlap in conventional chest radiography. However, because of both the limited scan angle and the finite-size detector, a portion of the chest cannot be represented in some or all of the projections. This causes a discontinuity in intensity across the field-of-view boundaries in the reconstructed slices, which we refer to as truncation artifacts. The purpose of this study was to reduce truncation artifacts using a weighted normalization approach and to investigate the performance of this approach on our prototype chest digital tomosynthesis system. The source-to-image distance of the system was 1100 mm, and the center of rotation of the X-ray source was located 100 mm above the detector surface. After obtaining 41 projection views over ${\pm}20^{\circ}$, tomosynthesis slices were reconstructed with the filtered back projection algorithm. For quantitative evaluation, peak signal-to-noise ratio and structural similarity index values were evaluated against a reference image reconstructed in simulation, and mean profile values along a specific direction were evaluated using real data. Simulation results showed that both the peak signal-to-noise ratio and the structural similarity index were improved. The experimental results showed that the effect of the artifacts on the mean profile of the reconstructed image was reduced. In conclusion, the weighted normalization method improves image quality by reducing truncation artifacts. These results suggest that the weighted normalization method could improve the image quality of chest digital tomosynthesis.

A Study on the Structural Reinforcement of the Modified Caisson Floating Dock (개조된 케이슨 플로팅 도크의 구조 보강에 대한 연구)

  • Kim, Hong-Jo;Seo, Kwang-Cheol;Park, Joo-Shin
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.27 no.1
    • /
    • pp.172-178
    • /
    • 2021
  • In the ship repair market, interest in maintenance and repair is steadily increasing owing to stricter regulations for preventing environmental pollution from ships and reinforced safety standards for ship structures. Reflecting this trend, repair requests from foreign shipping companies to repair shipyards on the southwest coast are increasing. However, because most repair shipbuilders in the southwestern area are small and medium-sized companies, it is difficult to achieve an integrated synergy effect among them. Moreover, their infrastructure is not integrated, so joint use of infrastructure is a challenge, which acts as an obstacle to the activation of the repair shipbuilding industry. Floating docks are indispensable for operating the repair shipbuilding business; most are operated after importing aging caisson docks from overseas and renovating/repairing them. However, their service life exceeds 30 years and there is no structural inspection standard, so they are vulnerable in terms of safety. In this study, the finite element analysis program ANSYS was used to evaluate the structural safety of the modified caisson dock and to derive additional structural reinforcement schemes for the identified problems. Classification regulations exist for floating docks, but with respect to structural strength they are insufficient and of limited applicability. These insufficiently covered areas were supplemented through a detailed structural FE-analysis. The reinforcement plan considered strengthening the pontoon deck and the side tanks, taking the conditions of the repair shipyard into account. Based on the structural analysis, the final plan selected was to reinforce the side wing tanks, and the actual structure was fabricated to reflect this reinforcement plan. Our results can be used as reference data for improving the structural strength of similar facilities; we believe that an optimal solution can be found quickly if this method is used during renovation/repair.

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy-logic-based control theory has gained much interest in the industrial world thanks to its ability to formalize and solve, in a very natural way, many problems that are very difficult to quantify at an analytical level. This paper presents a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to triangular or trapezoidal shapes, or to pre-defined shapes. These kinds of functions cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). However, this results in a loss of computational power due to computation of the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions at those points [3,10,14,15]. Such a solution provides satisfying computational speed, very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can occur: for each fuzzy set, many elements of the universe of discourse may have a membership value equal to zero, and it has been observed that in almost all cases the points common to several fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, without restricting the shapes of membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 represents a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (to have a good resolution), 8 fuzzy sets describing the term set, and 32 levels of discretization for the membership values. Clearly, the number of bits necessary for these specifications is 5 for the 32 truth levels, 3 for the 8 membership functions and 7 for the 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a memory word is defined by: Length = nfm * (dm(m) + dm(fm)), where nfm is the maximum number of non-null values for any element of the universe of discourse, dm(m) is the dimension of a membership value, and dm(fm) is the dimension of the word representing the index of the membership function. In our case, Length = 24. The memory dimension is therefore 128*24 bits. Had we chosen to memorize all values of the membership functions, each memory row would need to store the membership value of every fuzzy set; the fuzzy-set word dimension would be 8*5 bits and the memory dimension would have been 128*40 bits. Consistently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value for at most three fuzzy sets.
Focusing on elements 32, 64 and 96 of the universe of discourse, they are memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word from the program memory. The output bus of the program memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derived from the rule is produced as output; otherwise the output is zero (fig. 2). Clearly, the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important is the maximum number of membership functions (nfm) having a non-null value at each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. In any case, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values for any element of the universe of discourse is limited. This last constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
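The word-length bookkeeping stated in the abstract (|U| = 128, at most three non-null sets per element, 32 truth levels, 8 fuzzy sets) can be checked in a few lines; the function names below are just labels for the quantities in the text:

```python
# Memory sizing for the compressed vs. vectorial membership storage schemes.
def word_length(nfm, bits_mu, bits_idx):
    """Length = nfm * (dm(m) + dm(fm)): bits per memory row, compressed scheme."""
    return nfm * (bits_mu + bits_idx)

def memory_bits(U=128, nfm=3, truth_levels=32, n_sets=8):
    bits_mu = (truth_levels - 1).bit_length()   # 5 bits for 32 truth levels
    bits_idx = (n_sets - 1).bit_length()        # 3 bits to index 8 fuzzy sets
    compressed = U * word_length(nfm, bits_mu, bits_idx)  # 128 * 24 bits
    vectorial = U * n_sets * bits_mu                      # 128 * 40 bits
    return compressed, vectorial
```

With the abstract's figures this reproduces the stated comparison: 128*24 = 3072 bits for the compressed layout against 128*40 = 5120 bits for full vectorial memorization.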
