• Title/Abstract/Keywords: loss system


Summer Environmental Evaluation of Water and Sediment Quality in the South Sea and East China Sea (남해 및 동중국해의 하계 수질 및 저질 환경평가)

  • Lee, Dae-In;Cho, Hyeon-Seo;Yoon, Yang-Ho;Choi, Young-Chan;Lee, Jeong-Hoon
    • Journal of the Korean Society for Marine Environment & Energy
    • /
    • Vol. 8 No. 2
    • /
    • pp.83-99
    • /
    • 2005
  • To evaluate the environmental characteristics of the South Sea and East China Sea in summer, water and sediment quality were measured in June of 2001-2003. The surface layer was affected by warm water originating from the high-temperature, high-salinity Tsushima Warm Current; on the other hand, Yellow Sea Cold Water spread along the bottom layer in the southwestern part of Jeju Island, and salinity at stations near the Yangtze River fell below 29 psu because of enormous freshwater discharge. The thermocline formed at a depth of about 10 m, and a chlorophyll maximum layer existed in and below the thermocline. COD (Chemical Oxygen Demand), TN (Total Nitrogen), and TP (Total Phosphorus) concentrations corresponded to seawater quality grade II in the surface layer of most of the area, but concentrations of COD, Chl. a, TSS (Total Suspended Solids), and nutrients were greatly increased in the area affected by Yangtze River discharge. Correlations of dissolved inorganic nitrogen and Chl. a with salinity were strongly negative, whereas those of inorganic phosphorus and COD with Chl. a were positive, which indicates that phytoplankton biomass and phosphorus are important factors for organic matter distribution and algal growth, respectively, in the study area. The ignition loss, COD, and $H_2S$ of surface sediment were in the ranges of 2.61-8.81%, $0.64-11.86mgO_2/g-dry$, and ND-0.25 mgS/g-dry, respectively, with relatively high concentrations in the eastern part of the study area. Therefore, for effective and sustainable use and management of this area, continuous monitoring of the major input sources to the water and sediment, countermeasures against them, and prediction of environmental variation are necessary.
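The salinity-nutrient relationships described above rest on Pearson correlation; a minimal sketch of the computation with hypothetical station values (not the survey data):

```python
import math

# Hypothetical station values (NOT the survey data): as salinity drops
# near the Yangtze River plume, dissolved inorganic nitrogen (DIN) rises.
salinity = [33.8, 32.5, 31.0, 30.1, 29.2, 28.7]   # psu
din      = [1.2, 3.5, 6.1, 8.0, 10.4, 12.9]       # assumed umol/L

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(salinity, din)
print(f"r = {r:.3f}")  # strongly negative, consistent with river dilution
```

With real survey data, the same function applied to (inorganic phosphorus, Chl. a) or (COD, Chl. a) pairs would be expected to yield the positive correlations reported.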


Comparative Analysis of ViSCa Platform-based Mobile Payment Service with other Cases (스마트카드 가상화(ViSCa) 플랫폼 기반 모바일 결제 서비스 제안 및 타 사례와의 비교분석)

  • Lee, June-Yeop;Lee, Kyoung-Jun
    • Journal of Intelligence and Information Systems
    • /
    • Vol. 20 No. 2
    • /
    • pp.163-178
    • /
    • 2014
  • The following research proposes "Virtualization of Smart Cards (ViSCa)," a security system that aims to provide a multi-device platform for the deployment of services requiring a strong security protocol, both for access and authentication and for the execution of applications, and focuses on analyzing a ViSCa platform-based mobile payment service by comparing it with other similar cases. At present, the appearance of new ICT, the diffusion of new user devices (such as smartphones and tablet PCs) and the growth of the internet penetration rate are creating many world-shaking services, yet in most of these applications private information has to be shared, which means that security breaches and illegal access to that information are real threats that have to be solved. Mobile payment service, one of these innovative services, faces the same threats because it sometimes requires user identification, an authentication procedure and confidential data sharing. Thus, an extra layer of security is needed in its communication and execution protocols. The ViSCa concept is a holistic approach and centralized management for a security system that seeks to provide a ubiquitous multi-device platform for the deployment of mobile payment services that demand a powerful security protocol, both for access and authentication and for the execution of applications. In this sense, ViSCa offers full interoperability and full access from any user device without any loss of security. The concept prevents possible attacks by third parties, guaranteeing the confidentiality of personal data, bank accounts and private financial information.
The ViSCa concept is split into two phases: the execution of the user authentication protocol on the user device, and the cloud architecture that executes the secure application. Thus, secure service access is guaranteed at any time, anywhere, and through any device supporting the required security mechanisms. The security level is improved by using virtualization technology in the cloud: terminal virtualization is used to virtualize the smart card hardware, and the virtualized smart cards are managed as a whole through mobile cloud technology in the ViSCa platform-based mobile payment service. This entire process is referred to as Smart Card as a Service (SCaaS). The ViSCa platform-based mobile payment service virtualizes the smart card, which is used as the payment means, and loads it into the mobile cloud. Authentication takes place through the application, which logs the user on to the mobile cloud, where one of the virtualized smart cards is chosen as the payment method. To define the scope of the research, which compares the ViSCa platform-based mobile payment service with other similar cases, we categorized the mobile payment services from prior research into groups by distinguishing feature and service type. Both groups store credit card data in the mobile device and settle the payment at the offline market. By the location where the electronic financial transaction data are stored, the services can be categorized into two main types. The first is the "App Method," which loads the data into a server connected to the application. The second, the "Mobile Card Method," stores its data in an Integrated Circuit (IC) chip holding the financial transaction data, built into the mobile device's secure element (SE).
Through prior research on acceptance factors of mobile payment services and their market environment, we derived six key factors for the comparative analysis: economy, generality, security, convenience (ease of use), applicability and efficiency. Within the chosen group, we compared and analyzed the selected cases and the ViSCa platform-based mobile payment service.

CCD Photometric Observations and Light Curve Synthesis of the Near-Contact Binary XZ Canis Minoris (근접촉쌍성 XZ CMi의 CCD 측광관측과 광도곡선 분석)

  • Kim, Chun-Hwey;Park, Jang-Ho;Lee, Jae-Woo;Jeong, Jang-Hae;Oh, Jun-Young
    • Journal of Astronomy and Space Sciences
    • /
    • Vol. 26 No. 2
    • /
    • pp.141-156
    • /
    • 2009
  • Through photometric observations of the near-contact binary XZ CMi, new BV light curves were secured and seven times of minimum light were determined. An intensive period study with all published timings, including ours, confirms that the period of XZ CMi has shown a cyclic variation superposed on a secular period decrease over the last 70 years. Assuming the cyclic change of period to be caused by a light-time effect due to a third body, a light-time orbit with a semi-amplitude of 0.0056 d, a period of 29 y and an eccentricity of 0.71 was calculated. The observed secular period decrease of $-5.26{\times}10^{-11}d/P$ was interpreted as the result of the simultaneous occurrence of a period decrease of $-8.20{\times}10^{-11}d/P$ by angular momentum loss (AML) due to a magnetically braked stellar wind and a period increase of $2.94{\times}10^{-11}d/P$ by mass transfer from the less massive secondary to the primary component. In absolute value, the rate of period decrease due to AML is thus about 3 times larger than the rate of increase due to mass transfer. The latter implies a mass transfer of $\dot{M}_s=3.21{\times}10^{-8}M_{\odot}y^{-1}$ from the less massive secondary to the primary. The BV light curves were analyzed with the latest Wilson-Devinney binary code for two separate models with 8200 K and 7000 K as the photospheric temperature of the primary component. Both models confirm that XZ CMi is truly a near-contact binary, with the less massive secondary completely filling its Roche lobe and the primary inside its inner Roche lobe, and that there is a third light corresponding to about 15-17% of the total system light. However, the third-light source cannot be the same as the third body suggested by the period study. At present we cannot determine which of the two models better fits the observations, because the difference in $\sum(O-C)^2$ between them is negligible.
The diversity of mass ratios, on which previous investigators disagreed, remains one of the unsolved problems of the XZ CMi system. Spectroscopic observations for a radial velocity curve and high-resolution spectra, as well as high-precision photometry, are needed to resolve some of the remaining problems.
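The period-change budget quoted above can be verified by simple addition of the two rates; a quick arithmetic check (values taken directly from the abstract):

```python
# Consistency check of the period-change budget (units: d/P).
dP_aml = -8.20e-11   # secular decrease from magnetic-braking AML
dP_mt  = +2.94e-11   # increase from mass transfer, secondary -> primary
dP_obs = -5.26e-11   # observed net secular decrease

# The two mechanisms should sum to the observed rate
assert abs((dP_aml + dP_mt) - dP_obs) < 1e-14

ratio = abs(dP_aml / dP_mt)
print(f"AML term is {ratio:.1f}x the mass-transfer term")  # ~2.8, i.e. "about 3 times"
```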

Object Tracking Based on Exactly Reweighted Online Total-Error-Rate Minimization (정확히 재가중되는 온라인 전체 에러율 최소화 기반의 객체 추적)

  • JANG, Se-In;PARK, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • Vol. 25 No. 4
    • /
    • pp.53-65
    • /
    • 2019
  • Object tracking is one of the important steps in achieving video-based surveillance systems, and is considered an essential task alongside object detection and recognition. To perform object tracking, various machine learning methods (e.g., least squares, perceptron and support vector machine) can be applied in different designs of tracking systems. Generative methods (e.g., principal component analysis) have traditionally been utilized for their simplicity and effectiveness; however, they focus only on modeling the target object. Due to this limitation, discriminative methods (e.g., binary classification) were adopted to distinguish the target object from the background. Among the machine learning methods for binary classification, total error rate minimization is one of the successful ones. Total error rate minimization can achieve a global minimum thanks to a quadratic approximation of the step function, whereas other methods (e.g., support vector machine) seek local minima using nonlinear functions (e.g., the hinge loss function). Due to this quadratic approximation, total error rate minimization obtains appropriate properties for solving optimization problems in binary classification. However, total error rate minimization was originally formulated in a batch-mode setting, which restricts it to offline learning in several applications; with limited computing resources, offline learning cannot handle large-scale data sets. Compared to offline learning, online learning can update its solution without storing all training samples during the learning process. With the growth of large-scale data sets, online learning has become an essential property for various applications.
Since object tracking needs to handle data samples in real time, online learning based total error rate minimization methods are necessary to efficiently address object tracking problems. For this reason, an online learning based total error rate minimization method was previously developed, but it relied on an approximately reweighted technique. Although an approximation is used, this online version of total error rate minimization achieved good performance in biometric applications. However, the method assumes that total error rate minimization is achieved only asymptotically, as the number of training samples goes to infinity. Under this assumption, the approximation can continuously accumulate learning errors as training samples increase, so the approximated online solution can drift toward a wrong solution, which can cause significant errors when applied to surveillance systems. In this paper, we propose an exactly reweighted technique that recursively updates the solution of total error rate minimization in an online manner. In contrast to approximately reweighted online total error rate minimization, exactly reweighted online total error rate minimization is achieved. The proposed exact online learning method based on total error rate minimization is then applied to object tracking problems. Our object tracking system adopts particle filtering, in which the observation model consists of both generative and discriminative methods to leverage the advantages of both. In our experiments, the proposed object tracking system achieves promising performance on 8 public video sequences compared with competing object tracking systems. A paired t-test is also reported to evaluate the quality of the results.
Our proposed online learning method can be extended to deep learning architectures covering both shallow and deep networks. Moreover, online learning methods that need an exact reweighting process can use our proposed reweighting technique. In addition to object tracking, the proposed online learning method can easily be applied to object detection and recognition. Therefore, our proposed methods can contribute to the online learning community as well as the object tracking, detection and recognition communities.
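The batch-versus-exact-online distinction at the heart of the paper can be illustrated with a generic quadratic objective: for ridge-regularized least squares, maintaining the sufficient statistics incrementally makes the online solution identical to the batch solution after every sample, with no accumulating approximation error. This is a hedged sketch of the general idea, not the authors' total-error-rate formulation:

```python
import random

def batch_solve(xs, ys, lam=1.0):
    """Ridge-regularized least squares for scalar inputs: w = Sxy / (lam + Sxx)."""
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return sxy / (lam + sxx)

# Exact online update: maintain the sufficient statistics incrementally,
# so the solution after n samples equals the batch solution on those n samples.
lam, sxx, sxy = 1.0, 0.0, 0.0
xs, ys = [], []
random.seed(0)
for _ in range(100):
    x = random.uniform(-1, 1)
    y = 2.0 * x + random.gauss(0, 0.1)   # noisy linear target, true slope 2.0
    xs.append(x)
    ys.append(y)
    sxx += x * x
    sxy += x * y
    w_online = sxy / (lam + sxx)

# The online solution matches the batch solution exactly, not asymptotically
assert abs(w_online - batch_solve(xs, ys, lam)) < 1e-12
print(f"w = {w_online:.3f}")
```

An approximately reweighted scheme would instead carry a per-step approximation whose error can compound as samples arrive; exact reweighting avoids that drift by construction.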

Ecological Studies on the Burned Forest -On the Productivity System of the Burned Forest- (산화적지(山火跡地)의 생태학적(生態學的) 연구(硏究) -산화후(山火後) 임지(林地)의 생산구조(生産構造)에 대(對)하여-)

  • Kim, Ok Kyung;Chong, Hyon Pae
    • Journal of Korean Society of Forest Science
    • /
    • Vol. 12 No. 1
    • /
    • pp.45-54
    • /
    • 1971
  • Ecological studies on the effect of an accidental fire on the composition of post-fire vegetation in relation to the productivity system were made at the burned site on Mt. Samak, located at Duckduwon-Ri, Sumyun, Chun Sung-Kun, Kangwon-Do, using the same plots as the previous study carried out in 1967. The results are summarized as follows. 1. In the productivity system, the standing crop was measured as follows: Carex lanceolata var. nana, Miscanthus purpurascens, etc. were among the herbs, and their individual number was larger than that of the woody plants (Table 1). Among the woody plants, Quercus acutissima was the most abundant, showing a larger number of trees than Quercus dentata. The S.D.R. value of the family Poaceae was the highest among the herbs, and in the test plots it was 4 times larger than in the control plots (Table 3, Fig. 4, 5). 2. In the unburned sites, 5 dominant species were selected, and calculation of their S.D.R. showed a value of 4.43 for woody plants and 11.52 for herbs (Table 4, Fig. 6). 3. Comparing the standing crop on the higher ground, the test plots had 522.45 gm more than the control plots and 1470.53 gm more than those on lower ground. These results were considered to indicate that the high temperature caused by the fire increased the germination rate of seeds, as seen in the previous study, and further stimulated the growth of the perennial plants (Tables 6, 7). 4. In number of species, the standing crop increased in the order of the genus Miscanthus and the genus Carex, and among the woody plants the genus Lespedeza increased in standing crop. 5. In the test plots, the total summed height was greater by about 6000 cm than in the control plots. 6. In conclusion, the forest fire caused a great loss of tall trees and woody plants, burning them together with unmatured seeds.
In the second-year succession, the growth of the perennial plants was considered to have been stimulated on the burned sites.


The Effect of Nicotine-Contaminated Mulberry Leaf in the Vicinity of Tobacco Drying Plant on Cocoon Crop (연초건조장 부근의 뽕잎이 잠작에 미치는 영향)

  • 양성열;이상풍;김계명;이상욱
    • Journal of Sericultural and Entomological Science
    • /
    • Vol. 20 No. 2
    • /
    • pp.26-31
    • /
    • 1978
  • The objective of the present study was to investigate the effect of nicotine-contaminated mulberry leaf, grown in the vicinity of a tobacco drying plant (TDP), on cocoon crop. Mulberry leaf harvested from the field at the Sericultural Experiment Station (SES), Suweon, Korea, supposedly nicotine-free, was used as the control. Leaf harvested from fields at distances of 30-50 m, 300-400 m and 700-800 m from the TDP was fed during the whole larval stage of the silkworm at SES. The effect of leaf at each treatment level on the quantitative characters of the silkworm is summarized as follows. 1. Larval duration from the 4th instar on was significantly longer in the TDP-leaf treatments than in the control. 2. The duration of matured-silkworm appearance became longer as the distance of the mulberry field from the TDP got shorter, because the larval duration and growth of the silkworm were not uniform in the TDP-leaf treatments. 3. Mortality rates during the late larval, cocoon spinning, and pupal stages were highest for the 30-50 m leaf; mortality during the late larval and pupal stages was especially serious. 4. The pupation rate was lowest for the 30-50 m leaf, and those for the 300-400 m and 700-800 m leaf were not significantly different from the control. 5. Nicotine damage to cocoon weight and cocoon shell weight was significant at each TDP-leaf level. The cocoon shell ratio was reduced to the same extent at each level, compared with the control. 6. The ratio among cocoon classes was significantly different between treatments, compared with the best-cocoon ratio of 87.1% for the control. Cocoons were not uniform for the 30-50 m leaf, while those for the 300-400 m and 700-800 m leaf were almost as uniform as the control. 7. The loss of fresh cocoon yield became greater as the distance of the mulberry field from the TDP got shorter.
In conclusion, the critical distance of the mulberry field that influences larval health, cocoon quality and yield appeared to be 800 m from the TDP. Other factors such as wind direction and topographic location may be involved in the critical distance. 8. From the present experiment we could obtain only the effect of nicotine on the silkworm through the digestive system, since the silkworms were raised at SES in Suweon. If the silkworms were raised in the vicinity of the TDP, a poisoning effect of nicotine on the silkworm could be expected through the exoskeleton and tracheal system as well as through the digestive system.


The Theory of Chen tuan's Internal Alchemy and Intermixture of Taoism, Buddhism and Neo-Confucianism (진단의 내단이론과 삼교회통론)

  • Kim, Kyeong-Soo
    • The Journal of Korean Philosophical History
    • /
    • No. 31
    • /
    • pp.53-86
    • /
    • 2011
  • Taoism exercised its influence and made much progress under the aegis of the Tang dynasty. But when external alchemy, the traditional way to eternal life that Taoists had pursued, reached its limits, they needed to seek a new discipline. From this period to the early Northern Song dynasty, the three religions each established unique theoretical systems of ascetic practice: Neo-Confucianism established a theory of moral training, Buddhism a theory of ascetic practice, and Taoism a theory of discipline. By this time, those who advocated the intermixture of the three religions composed new systems of ascetic practice by taking advantage of the other religions' ideas and incorporating them into their own views. Chen tuan established the theory of internal alchemy in Taoism and was the most influential figure in the world of thought from the Northern Song dynasty onward. He clearly declared that he accepted the merits of other religions in his theory. He added the I Ching of Confucianism to the esoteric I Ching of Taoism to close the logical gaps in the Taoist process of discipline, and took the Buddhist ascetic practice of the mind into his system while seeking a way to integrate the dual structure of body and mind. Chen tuan's theory of internal alchemy was a training schema with the stages 'YeonJeongHwaGi', 'YeonGiHwaSin', and 'YeonSinHwanHeo', based on the concepts of vitality, energy and spirit. The internal alchemy practice Chen tuan described started from the practice of Zen, keeping the mind calm on the basis of the Taoist interpretation of the Book of Changes. When a person reached the state of being in concert with all changes at the end of silence and full of wisdom, he finally returned to the state of BokGwiMuGeuk by taking the flow of the subtle mind and transforming it into energy.
He expressed this process by drawing the 'MuGeukDo'. Oriental philosophy categorizes humans into 'phenomenal existence' and 'original existence', and the logic of the theory of ascetic practice has been established from these categories of existence: whether one returns to 'original existence' or steps up from 'phenomenal existence' is determined by how the concept of 'self' or 'I' is formed. In this respect, Chen tuan, who established the theory of internal alchemy in Taoism, built a unique theory of internal alchemy discipline and a system of intermixture of the three religions. Today is called an 'era of self-loss' or an 'era of incurable diseases' caused by environmental pollution. It is still meaningful to review Chen tuan's theory of discipline, which connects the body and the soul to heal the self and keep life healthy, and to pursue a new way of discipline based on it.

Optimal Monetary Policy System for Both Macroeconomics and Financial Stability (거시경제와 금융안정을 종합 고려한 최적 통화정책체계 연구)

  • Joonyoung Hur;Hyoung Seok Oh
    • KDI Journal of Economic Policy
    • /
    • Vol. 46 No. 1
    • /
    • pp.91-129
    • /
    • 2024
  • The Bank of Korea, through a legal amendment in 2011 following the financial crisis, was entrusted with the additional responsibility of financial stability beyond its existing mandate of price stability. Since then, concerns have been raised about the prolonged increase in household debt relative to income conditions, which could constrain consumption and growth and increase the possibility of a crisis in the event of negative economic shocks. The current accumulation of financial imbalances suggests a critical period in which the government and central bank must be more vigilant, ensuring that it does not impede the stable flow of the financial and economic systems. This study examines the applicability of the Integrated Inflation Targeting (IIT) framework proposed by the Bank for International Settlements (BIS) for macro-financial stability in promoting long-term economic stability. Using VAR models, the study reveals a clear increase in risk appetite following interest rate cuts after the financial crisis, leading to a rise in household debt. Additionally, analysis of the central bank's conduct of monetary policy from 2000 to 2021 through DSGE models indicates that the Bank of Korea has operated with a form of IIT, considering both inflation and growth in its policy decisions, with some responsiveness to the increase in household debt. However, the estimation of a high interest rate smoothing coefficient suggests a cautious approach to interest rate adjustments. Furthermore, estimating the optimal interest rate rule that minimizes the central bank's loss function reveals that a policy considering inflation and growth while remaining mindful of household credit conditions is superior. In other words, actively adjusting the benchmark interest rate in response to changes in economic conditions, and paying attention to household credit when household debt is increasing rapidly relative to income, is a desirable policy approach.
Based on these findings, we conclude that the integrated inflation targeting framework proposed by the BIS could be considered as an alternative policy system that supports the stable growth of the economy in the medium to long term.
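The kind of rule the estimation favors can be sketched as a quadratic central-bank loss plus a Taylor-type reaction function with interest-rate smoothing and a household-credit term; all coefficients and function names below are illustrative placeholders, not the paper's DSGE estimates:

```python
def policy_rate(r_neutral, inflation, target, output_gap, credit_gap,
                phi_pi=1.5, phi_y=0.5, phi_c=0.3, rho=0.8, r_prev=None):
    """Taylor-type rule with a household-credit term and optional smoothing.
    Coefficients are illustrative, not estimated values from the paper."""
    r_star = (r_neutral + phi_pi * (inflation - target)
              + phi_y * output_gap + phi_c * credit_gap)
    if r_prev is None:
        return r_star
    # A high smoothing coefficient rho means cautious, gradual adjustment
    return rho * r_prev + (1 - rho) * r_star

def loss(inflation, target, output_gap, w_pi=1.0, w_y=0.5):
    """Quadratic central-bank loss in inflation and output deviations."""
    return w_pi * (inflation - target) ** 2 + w_y * output_gap ** 2

# Inflation 1pp above target, positive output gap, rapid credit growth:
r = policy_rate(2.0, inflation=3.0, target=2.0, output_gap=0.5, credit_gap=1.0)
print(f"unsmoothed rate: {r:.2f}%")   # 2.0 + 1.5 + 0.25 + 0.3 = 4.05
r_smoothed = policy_rate(2.0, 3.0, 2.0, 0.5, 1.0, r_prev=3.0)
print(f"smoothed rate:   {r_smoothed:.2f}%")  # moves only part-way toward 4.05
```

The credit-gap term is what distinguishes an integrated (IIT-style) rule from a plain inflation-targeting rule; the smoothing term captures the cautious adjustment implied by the high estimated smoothing coefficient.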

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • Korean Institute of Intelligent Systems, 1993 Fifth International Fuzzy Systems Association World Congress '93
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to a pre-defined shape. These kinds of functions are able to cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This however results in a loss of computational power due to computation on the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions on those points [3,10,14,15]. Such a solution provides a satisfying computational speed and a very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can also occur: it is indeed possible that for each of the given fuzzy sets many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few.
More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 represents a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (so as to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for the 32 truth levels, 3 for the 8 membership functions and 7 for the 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by: Length = nfm * (dm(m) + dm(fm)), where nfm is the maximum number of non-null values for any element of the universe of discourse, dm(m) is the dimension of the values of the membership function m, and dm(fm) is the dimension of the word representing the index of the membership function. In our case, then, Length = 24. The memory dimension is therefore 128*24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of each element: the fuzzy-set word dimension is 8*5 bits, so the dimension of the memory would have been 128*40 bits. Coherently with our hypothesis, in Fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets.
Focusing on the elements 32, 64 and 96 of the universe of discourse, they will be memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the program memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derived from the rule is produced as output; otherwise the output is zero (Fig. 2). It is clear that the memory dimension of the antecedent is in this way reduced, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to the performance of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value in each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization process of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on the increase of memory space; and weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method.
The number of non-null membership values on any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
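The memory arithmetic in the passage above can be reproduced directly; a short check using the figures stated in the paper (128 universe elements, 8 fuzzy sets, 32 truth levels, at most 3 non-null memberships per element):

```python
import math

U = 128       # elements in the universe of discourse
n_sets = 8    # fuzzy sets in the term set
levels = 32   # discretization levels for membership values
nfm = 3       # max non-null membership values per element

bits_value = math.ceil(math.log2(levels))   # 5 bits per membership value
bits_index = math.ceil(math.log2(n_sets))   # 3 bits per fuzzy-set index

# Proposed method: store only the (up to) nfm non-null values plus their indices
word_len = nfm * (bits_value + bits_index)  # Length = nfm * (dm(m) + dm(fm)) = 24
proposed = U * word_len                     # 128 * 24 = 3072 bits

# Vectorial method: store one value for every fuzzy set in every row
vectorial = U * n_sets * bits_value         # 128 * 40 = 5120 bits

print(f"word length: {word_len} bits; proposed {proposed} vs vectorial {vectorial} bits")
```

The saving grows with the number of fuzzy sets, since the proposed word length depends on nfm rather than on the size of the term set.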


The Spatio-temporal Distribution of Organic Matter on the Surface Sediment and Its Origin in Gamak Bay, Korea (가막만 표층퇴적물중 유기물량의 시.공간적 분포 특성)

  • Noh Il-Hyeon;Yoon Yang-Ho;Kim Dae-Il;Park Jong-Sick
    • Journal of the Korean Society for Marine Environment & Energy
    • /
    • Vol. 9 No. 1
    • /
    • pp.1-13
    • /
    • 2006
  • A field survey of the spatio-temporal distribution characteristics and origins of organic matter in surface sediments was carried out monthly at six stations in Gamak Bay, South Korea, from April 2000 to March 2002. The range of ignition loss (IL) was $4.6{\sim}11.6%(7.1{\pm}1.6%)$, while chemical oxygen demand (CODs) ranged over $12.25{\sim}99.26mgO_2/g-dry(30.98{\pm}19.09mgO_2/g-dry)$, acid volatile sulfide (AVS) from not detected (ND)${\sim}10.29mgS/g-dry(1.02{\pm}0.58mgS/g-dry)$, and phaeopigment over $6.84{\sim}116.18{\mu}g/g-dry(23.72{\pm}21.16{\mu}g/g-dry)$. The ranges of particulate organic carbon (POC) and particulate organic nitrogen (PON) were $5.45{\sim}23.24mgC/g-dry(10.34{\pm}4.40mgC/g-dry)$ and $0.71{\sim}2.99mgN/g-dry(1.37{\pm}0.58mgN/g-dry)$, respectively. Water content was in the range of $43.1{\sim}77.6%(55.8{\pm}5.6%)$, and mud content (silt+clay) was higher than 95% at all stations. The spatial distribution of organic matter in surface sediments divided sharply into the northwestern, central and eastern areas and the southern entrance area according to the distribution characteristics of the organic matter. The concentrations of almost all items were greater in the northwestern and southern entrance areas than in the other areas of Gamak Bay. In particular, sedimentary pollution was very serious in the northwestern area, because that area received an excessive supply of organic matter due to aquaculture activity and the inflow of sewage from land. These materials stayed longer because of topographical characteristics such as the basin shape and the anoxic conditions in the bottom seawater caused by the thermocline in summer. The temporal change was most prominent during the period of high water temperatures, rather than low ones, in the northwestern and southern entrance areas.
On the other hand, the central and eastern areas did not show a regular trend in the concentrations of each item, but mainly showed a higher tendency during the low-water-temperature period. This was observed for all items except AVS, whose concentrations were higher during the high-water-temperature period at all stations. In particular, the central and eastern areas showed a large temporal increase in AVS concentration during those high-water-temperature periods in which the concentration of CODs exceeded $20mgO_2/g-dry$. The results show that the organic matter in the surface sediments of Gamak Bay originated from autochthonous organic matter, with an average C/N ratio of eight or less, including the organic matter generated by use of the ocean, rather than from terrigenous organic matter. However, the POC/phaeopigment ratio indicates that the autochthonous organic matter was derived mainly from detritus rather than from living phytoplankton. In addition, the CODs/IL ratio demonstrates that the detritus was the product of artificial activities such as waste feed and fecal pellets of farmed organisms caused by aquaculture, rather than of natural ocean dynamics.
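The C/N screening logic used in the origin argument can be expressed as a tiny helper; the threshold of 8 follows the abstract, while the function names are our own illustration:

```python
def cn_ratio(poc_mg_c, pon_mg_n):
    """C/N weight ratio from POC and PON (both in mg/g-dry)."""
    return poc_mg_c / pon_mg_n

def likely_origin(ratio, threshold=8.0):
    """Rule of thumb used in the abstract: C/N <= ~8 suggests autochthonous
    (marine) organic matter; higher ratios suggest a terrigenous contribution."""
    return "autochthonous" if ratio <= threshold else "terrigenous-influenced"

# Mean values reported in the abstract: POC 10.34 mgC/g-dry, PON 1.37 mgN/g-dry
r = cn_ratio(10.34, 1.37)
print(f"C/N = {r:.2f} -> {likely_origin(r)}")  # ~7.55, autochthonous
```

The same kind of ratio screening underlies the POC/phaeopigment (detritus vs. living phytoplankton) and CODs/IL (artificial vs. natural input) arguments that follow.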
