• Title/Summary/Keyword: DUMP

Search Results: 344

Mobility of Transition Metals by Change of Redox Condition in Dump Tailings from the Dukum Mine, Korea (덕음광산 광미의 산화·환원 조건에 따른 전이원소의 이동성)

  • 문용희;문희수;박영석;문지원;송윤구;이종천
    • Economic and Environmental Geology
    • /
    • v.36 no.4
    • /
    • pp.285-293
    • /
    • 2003
  • Tailings of the Dukum mine in the vadose and saturated zones were investigated to reveal the mobility of metal elements and the mineralogical solubility conditions under different redox environments, through geochemical analysis, thermodynamic modelling, and mineralogical study of solid and water samples (vadose zone: distilled water reacted with tailings at 5:1; saturated zone: extracted pore water). In the vadose zone, sulfide oxidation has generated low-pH conditions (2.72∼6.91) and high concentrations of $SO_4^{2-}$ (561∼1430 mg/L) and other metals (Zn: 0.12∼157 mg/L, Pb: 0.06∼0.83 mg/L, Cd: 0.06∼1.35 mg/L). Jarosite $(KFe_3(SO_4)_2(OH)_6)$ and gypsum $(CaSO_4{\cdot}2H_2O)$ were identified by XRD patterns and thermodynamic modelling. In the saturated zone, the concentrations of metal ions decreased because pH values were neutral (7.25∼8.10). However, Fe and Mn, which are sensitive to redox potential, increased with depth as pe values decreased (7.40∼3.40). Rhodochrosite $(MnCO_3)$, identified by XRD and thermodynamic modelling, suggests that $Mn^{4+}$ or $Mn^{3+}$ was reduced to $Mn^{2+}$. Across pH conditions, dissolved metal ion concentrations were highest in the vadose zone throughout the borehole samples; pH was observed to have more effect on metal solubilities than redox potential. However, the release of co-precipitated heavy metals following the dissolution of Fe-Mn oxyhydroxides could be the mechanism by which reduced conditions affected heavy metal solubility, considering the decrease of pe with depth in the saturated zone.
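The thermodynamic identification of minerals such as gypsum mentioned above rests on comparing an ion activity product with a solubility constant. A minimal saturation-index sketch follows; the ion activities and the log Ksp value are illustrative assumptions, not the paper's measured data:

```python
import math

# Saturation index SI = log10(IAP / Ksp): SI ~ 0 means equilibrium with
# the mineral, SI > 0 supersaturation, SI < 0 undersaturation.
# Activities and Ksp below are illustrative, not the paper's data.

def saturation_index(activity_product, ksp):
    return math.log10(activity_product / ksp)

# Gypsum: CaSO4.2H2O <-> Ca2+ + SO4^2-, with log Ksp ~ -4.58 at 25 C
ksp_gypsum = 10 ** -4.58
a_ca = 10 ** -2.3   # assumed Ca2+ activity
a_so4 = 10 ** -2.2  # assumed SO4^2- activity
si = saturation_index(a_ca * a_so4, ksp_gypsum)
print(f"SI(gypsum) = {si:.2f}")
# SI(gypsum) = 0.08  (slightly supersaturated: gypsum may precipitate)
```

A full speciation model (as in the paper's thermodynamic modelling) would also correct concentrations to activities and account for complexation, which this sketch omits.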

A Study on the Legislative Conception of Terror of the Advanced European Nations (유럽 선진국의 법제적 테러 개념에 관한 고찰)

  • Kwon, Jeong-Hun;Kim, Tae-Hwan
    • Korean Security Journal
    • /
    • no.15
    • /
    • pp.29-50
    • /
    • 2008
  • Many countries throughout the world have enacted laws on terrorism, reflecting changes over time, geographical features, cultural values, and environmental factors. In particular, some advanced European nations prescribe the definition of terrorism, its purposes, its behaviors, and the types of crimes related to it, because it is vital for the authorities concerned to be able to investigate and punish terrorists after an act of terrorism. In this regard, this paper analyzes the legislative countermeasures against terrorism in advanced countries such as France, Germany, and England, and through this sheds light on the need for future anti-terrorism bills. The basic legislative guidelines for responding to future terrorism, derived from this study, can be summarized as follows. In the first place, laws providing direct investigative power and harsher punishment for those involved in terrorism are a prerequisite for social security; the presidential directive on state anti-terrorism action guidelines deals only with administrative measures, without any effective response to terrorism. It is therefore urgent to enact an anti-terrorism bill covering the investigation and punishment of terrorists. In the second place, regarding the objectives of terrorism: the expression "all sorts of" in Korean law is too unclear to fulfill the conditions required to define a crime. Provisions covering crimes that meet the purposes of terrorists are necessary in order to investigate and punish them, so it is advisable to establish specific and precise principles in the bill, such as the political, social, ideological, and religious purposes of terrorists.
In the third place, to keep pace with technological advancement and informatization, provisions such as the destruction of electronic data systems, crimes related to nuclear materials, the purchase of weapons by terrorists, tax administration for the prohibition of sales, and arson should be considered in the anti-terrorism bill. In the fourth place, indiscriminate attacks on unspecified individuals have become a serious issue in our society: terrorists may leave poisonous foods or beverages in crowded places or intentionally dump toxic chemicals into rivers. Therefore, stricter regulations must be included in the anti-terrorism bill to prevent possible terrorist attacks.


Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.43-61
    • /
    • 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. Due to recent interest in the technology and research on various algorithms, the field of artificial intelligence has advanced more than ever. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions by using machine-readable and processable knowledge constructed from the complex and informal human knowledge and rules of various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used together with statistical artificial intelligence such as machine learning. More recently, the purpose of a knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. These knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question-answering systems of smart speakers. However, building a useful knowledge base is a time-consuming task that still requires considerable expert effort. In recent years, much research and technology in knowledge-based artificial intelligence has used DBpedia, one of the largest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present user-created summaries of an article's unifying aspects.
This knowledge is created by mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can achieve high reliability in terms of knowledge accuracy by generating knowledge from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in the Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: document classification into ontology classes, classification of the appropriate sentences from which to extract triples, and value selection and transformation into an RDF triple structure. The structures of Wikipedia infoboxes are defined as infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these infobox templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into the form of triples. To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, and trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF models for the knowledge extraction process.
Through the proposed process, structured knowledge can be utilized by extracting knowledge according to the ontology schema from text documents. In addition, this methodology can significantly reduce the expert effort required to construct instances according to the ontology schema.
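The final step of the pipeline described above, collapsing BIO-tagged value spans into attribute values and emitting RDF-style triples, can be sketched as follows. The tag names, example sentence, and function names are illustrative assumptions, not the paper's actual tag set or code:

```python
# Minimal sketch of the triple-extraction step: collapse BIO-tagged
# token spans into attribute values, then emit (subject, predicate,
# object) triples. Labels and the example are illustrative only.

def spans_from_bio(tokens, tags):
    """Collect (label, text) spans from BIO-tagged tokens."""
    spans, current_label, current_tokens = [], None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current_label:
                spans.append((current_label, " ".join(current_tokens)))
            current_label, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_label == tag[2:]:
            current_tokens.append(token)
        else:  # "O" tag or an I- tag that does not continue the span
            if current_label:
                spans.append((current_label, " ".join(current_tokens)))
            current_label, current_tokens = None, []
    if current_label:
        spans.append((current_label, " ".join(current_tokens)))
    return spans

def to_triples(subject, tokens, tags):
    """Turn each labelled span into a (subject, predicate, object) triple."""
    return [(subject, label, value) for label, value in spans_from_bio(tokens, tags)]

tokens = ["Albert", "Einstein", "was", "born", "in", "Ulm", "."]
tags   = ["O", "O", "O", "O", "O", "B-birthPlace", "O"]
print(to_triples("Albert_Einstein", tokens, tags))
# [('Albert_Einstein', 'birthPlace', 'Ulm')]
```

In the paper's setting, the tags themselves would come from the trained CRF or Bi-LSTM-CRF tagger, and the subject and predicates from the document- and sentence-level classifiers.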

Calculation of Unit Hydrograph from Discharge Curve, Determination of Sluice Dimension and Tidal Computation for Determination of the Closure Curve (단위유량도와 비수갑문 단면 및 방조제 축조곡선 결정을 위한 조속계산)

  • 최귀열
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.7 no.1
    • /
    • pp.861-876
    • /
    • 1965
  • During my stay in the Netherlands, I studied the following topics, primarily in relation to the Mokpo Yong-san project, which had been studied by NEDECO for a feasibility report. 1. Unit hydrograph at Naju. There are many ways to construct a unit hydrograph, but here I explain how to construct one from the actual runoff curve at Naju. A discharge curve made from one rain storm depends on the rainfall intensity per hour. After finding the hydrograph at two-hour intervals, we obtain the two-hour unit hydrograph by dividing each ordinate of the two-hour hydrograph by the rainfall intensity. I used one storm from June 24 to June 26, 1963, recording an average rainfall intensity of 9.4 mm per hour for 12 hours. If several rain gage stations had already been established in the catchment area above Naju prior to this storm, I could have gathered accurate data on rainfall intensity throughout the catchment area. As it was, I used the automatic rain gage record of the Mokpo meteorological station to determine the rainfall intensity. In order to develop the unit hydrograph at Naju, I subtracted the base flow from the total runoff flow. I also tried to keep the difference between the calculated discharge amount and the measured discharge to less than 10%. The discharge period of a unit graph depends on the length of the catchment area. 2. Determination of sluice dimension. According to the principles of design presently used in our country, a one-day storm with a frequency of 20 years must be discharged in 8 hours. These design criteria are not adequate, and several dams have washed out in past years. The design of the spillway and sluice dimensions must be based on the maximum peak discharge flowing into the reservoir, to avoid crop and structure damage. The total flow into the reservoir is the summation of the flow described by the Mokpo hydrograph, the base flow from all the catchment areas, and the rainfall on the reservoir area.
To calculate the amount of water discharged through the sluice (per half hour), the average head during that interval must be known. This can be calculated from the known water level outside the sluice (determined by the tide) and from an estimated water level inside the reservoir at the end of each time interval. The total amount of water discharged through the sluice can be calculated from this average head, the time interval, and the cross-sectional area of the sluice. From the inflow into the reservoir and the outflow through the sluice gates, I calculated the change in the volume of water stored in the reservoir at half-hour intervals. From the stored volume of water and the known storage capacity of the reservoir, I was able to calculate the water level in the reservoir. The calculated water level in the reservoir must be the same as the estimated water level. A mean tide will be adequate for determining the sluice dimension, because the spring tide is the worst case and the neap tide the best case for the result of the calculation. 3. Tidal computation for determination of the closure curve. During the construction of a dam, whether by building up a succession of horizontal layers or by building in from both sides, the velocity of the water flowing through the closing gap will increase because of the gradual decrease in the cross-sectional area of the gap. I calculated the velocities in the closing gap during flood and ebb for the first-mentioned method of construction until the cross-sectional area had been reduced to about 25% of the original area, the change in tidal movement within the reservoir being negligible. Up to that point, the increase of the velocity is more or less hyperbolic. During the closing of the last 25% of the gap, less water can flow out of the reservoir. This causes a rise in the mean water level of the reservoir. The difference in hydraulic head is then no longer negligible and must be taken into account.
When, during the course of construction, the submerged weir becomes a free weir, critical flow occurs. The critical flow is that point, during either ebb or flood, at which the velocity reaches a maximum. When the dam is raised further, the velocity decreases because of the decrease in the height of the water above the weir. The calculation of the currents and velocities for a stage in the closure of the final gap is done in the following manner. Using an average tide with a negligible daily inequality, I estimated the water level on the upstream side of the dam (the inner water level). I determined the current through the gap for each hour by multiplying the storage area by the increment of the rise in water level. The velocity at a given moment can be determined from the calculated current in m³/sec and the cross-sectional area at that moment. At the same time, from the difference between the inner water level and the tidal level (the outer water level), the velocity can be calculated with the formula $h=\frac{V^2}{2g}$, and this must equal the velocity determined from the current. If there is a difference between the two velocities, a new estimate of the inner water level must be made and the entire procedure repeated. When the higher water level is equal to or more than 2/3 of the difference between the lower water level and the crest of the dam, we speak of a "free weir." The flow over the weir is then dependent upon the higher water level and not on the difference between the high and low water levels. When the weir is "submerged", that is, when the higher water level is less than 2/3 of the difference between the lower water level and the crest of the dam, the difference between the high and low levels is decisive. The free weir normally occurs first during ebb, due to the fact that the mean level in the estuary is higher than the mean level of the tide. In building dams with barges, the maximum velocity in the closing gap may not be more than 3 m/sec.
As the maximum velocities are higher than this limit, other construction methods must be used to close the gap. This can be done with dump cars working from each side, or by using a cableway.
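The unit-hydrograph construction in section 1, subtracting the base flow from the observed discharge and dividing each ordinate by the rainfall intensity, can be sketched as follows. The discharge figures and the constant-base-flow assumption are illustrative, not the 1963 Naju data:

```python
# Sketch of deriving a unit hydrograph: subtract a (here constant)
# base flow from the observed discharge ordinates, then divide each
# ordinate by the effective rainfall depth. Numbers are illustrative,
# not the 1963 Naju storm data.

def unit_hydrograph(discharge, base_flow, rainfall_mm):
    """Discharge ordinates (m^3/s), constant base flow (m^3/s), rainfall depth (mm)."""
    direct_runoff = [q - base_flow for q in discharge]
    return [q / rainfall_mm for q in direct_runoff]

observed = [50, 170, 290, 230, 140, 80, 50]  # m^3/s at 2-hour intervals
uh = unit_hydrograph(observed, base_flow=50, rainfall_mm=12)
print([round(u, 1) for u in uh])
# [0.0, 10.0, 20.0, 15.0, 7.5, 2.5, 0.0]
```

Each ordinate of the result is the discharge produced per millimetre of effective rainfall, so scaling it by the depth of any other storm of the same duration estimates that storm's direct-runoff hydrograph.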
