• Title/Summary/Keyword: DISTANCE


The Characteristic of Laws on the Kind of Urban Green Spaces and the Legal Requirements for the Green Spaces of Urban Habitat in China (중국의 도시녹지 종류와 도시거주구 녹지의 설치 기준에 관한 법제도의 현황과 특성)

  • Shin, Ick-Soon
    • Journal of the Korean Institute of Landscape Architecture, v.41 no.3, pp.1-11, 2013
  • This study investigated Chinese laws on the kinds of urban green spaces and on the legal requirements for green spaces in urban residential districts, and analyzed their distinctive features, with the aim of providing basic data for deciding whether to introduce the relevant Chinese provisions into Korea. The study can be summarized as follows. First, Chinese urban green spaces are classified into five kinds (park green spaces, production green spaces, protection green spaces, attached green spaces, and other green spaces), so that parks and green spaces fall under a single concept, unlike the Korean system, which distinguishes parks from green spaces. Chinese urban parks are further classified into four kinds (composite parks, community parks, special parks, and linear parks) in the 'Standard for Urban Green Space Classification', which ranks below the statutes in the legal hierarchy. Second, when the green space ratio of urban green spaces is calculated in China, green rooftop landscaping is not counted as green space area, except on the roofs of basement or semi-basement buildings to which residents have easy access. The green space requirements, and the ratios that must be secured, are regulated for the three kinds of residential district (district, small district, and minimum district) applied in residential planning. Third, residential green space plans are required to establish a green space system and to conserve and improve existing trees and green spaces. The green space ratio required for the reconstruction of old districts is relaxed below that for new districts, and, notably, the permissible gradient of green spaces is specified. The details and requirements for establishment, and the minimum area, are regulated for each class of central green space (district park, children's park, and minimum-district green space).
Especially for garden-style minimum-district green spaces, the intervals between houses to the south and north are specified, and the green space area to be secured is regulated for two groups (closed-type and open-type green spaces) distinguished by adjacency to mid- or high-rise buildings. A method for calculating green space area is presented in a special regulation. Fourth, a general index (area per person) of public green space within residential districts is set for each of the three district kinds; for the reconstruction of deteriorated zones, this index may be relaxed by a specified amount. The location and scale, minimum width, and minimum area per site of public green spaces are regulated, as are site-planning principles for public green spaces, including adjacency to roads, the ratio of greened area to total area, the securing of open space, and the shadow-line boundary for sunshine. Fifth, the minimum horizontal distances between underground cables and the surrounding trees are regulated as items to be considered when laying underground cables, and principles for establishing green spaces within public service facilities are regulated according to the kind of service provided. Whether to import the special regulations that exist only in Chinese law and not in Korean law should be examined. The results of this study should help domestic landscape architects recognize the urgency of research on Chinese urban green space laws and provide a good opportunity for the internationalization of Korean landscape architectural law.
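The rooftop-counting rule summarized in the second point can be illustrated with a toy calculation; all area figures below are hypothetical, chosen only to show how inaccessible rooftop greening drops out of the ratio.

```python
# Toy sketch of the Chinese green-space-ratio rule described above:
# rooftop greening counts only on accessible basement/semi-basement roofs.
# All areas (m^2) are hypothetical.
def green_space_ratio(ground_green, accessible_basement_roof_green,
                      other_rooftop_green, site_area):
    """Percentage of the site counted as green space under the rule."""
    counted = ground_green + accessible_basement_roof_green
    # other_rooftop_green is deliberately excluded from the numerator
    return 100 * counted / site_area

print(green_space_ratio(3000, 500, 800, 10000))  # the 800 m^2 does not count
</imports>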

Methodology for Identifying Issues of User Reviews from the Perspective of Evaluation Criteria: Focus on a Hotel Information Site (사용자 리뷰의 평가기준 별 이슈 식별 방법론: 호텔 리뷰 사이트를 중심으로)

  • Byun, Sungho;Lee, Donghoon;Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.22 no.3, pp.23-43, 2016
  • As Internet data has grown and Internet technology has developed rapidly, "big data" analysis has gained prominence as a major approach for evaluating and mining enormous volumes of data for various purposes. In recent years, in particular, people tend to share their leisure-activity experiences and to consult others' reviews of theirs; by referring to others' experiences, they can gather information that may lead to better leisure activities in the future. This phenomenon appears across many kinds of leisure activity, such as movies, travel, accommodation, and dining. Apart from blogs and social networking sites, many other websites provide a wealth of information on leisure activities, presenting each product in various formats depending on purpose and perspective. Most such websites provide average ratings and detailed reviews by users who actually used the products or services, and these ratings and reviews can support potential customers' purchase decisions. However, existing websites on leisure activities provide ratings and reviews at only a single level of a set of evaluation criteria. Therefore, to identify the main issues for each evaluation criterion, as well as the characteristics of the specific elements comprising it, users have to read a large number of reviews. In particular, since most users search for the detailed characteristics of one or more evaluation criteria according to their own priorities, they must spend a great deal of time and effort reading and digesting reviews to obtain the desired information.
Although some websites break down the evaluation criteria and direct users to enter reviews at different levels of criteria, the excessive number of input sections makes the process inconvenient, and problems arise when a user skips input sections or fills in the wrong ones. Moreover, treating such a breakdown as a realistic alternative is difficult, because identifying all the detailed criteria under each evaluation criterion is itself a challenging task. For example, reviews of a hotel tend to be written at a single level for components such as accessibility, rooms, service, or food; they may touch on frequently asked questions, such as the distance to the nearest subway station or the condition of the bathroom, but still lack detailed information on them. In addition, given a breakdown of the criteria with various input sections, a user might fill in only the accessibility criterion, or enter room-related information under accessibility, greatly reducing the reliability of the segmented reviews. In this study, we propose an approach to overcome two limitations of existing leisure-activity information websites: (1) the reliability of reviews under each evaluation criterion and (2) the difficulty of identifying the detailed contents that make up each criterion. In the proposed methodology, we first identify the review contents and construct a lexicon for each evaluation criterion from the terms frequently used for that criterion. Next, the sentences in the review documents containing terms from the constructed lexicons are decomposed into review units, which are then reorganized by evaluation criterion.
Finally, the issues of the reorganized review units are derived for each evaluation criterion, and summary results are provided together with the review units themselves. The approach thus aims to save users time and effort, because they read only the information relevant to each evaluation criterion rather than the entire review text. The proposed methodology is based on topic modeling, which is widely used in text analysis; however, rather than treating a whole review as the document unit, each review is decomposed into sentence-level review units, which are reorganized by evaluation criterion before the subsequent analysis. This differs substantially from existing topic modeling-based studies. In this paper, we collected 423 reviews from hotel information websites, decomposed them into 4,860 review units, and reorganized the units according to six evaluation criteria. By applying these review units in our methodology, we present the analysis results and demonstrate the utility of the proposed approach.
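The lexicon-matching and reorganization step described above can be sketched as follows. The criteria, lexicon terms, and sample reviews here are hypothetical; in the actual methodology the lexicons are built from terms frequently used for each criterion.

```python
# Sketch: split reviews into sentence-level units and regroup them
# by evaluation criterion via per-criterion lexicons (all data hypothetical).
import re
from collections import defaultdict

lexicons = {
    "accessibility": {"subway", "station", "distance", "airport"},
    "room": {"bed", "bathroom", "clean", "view"},
    "service": {"staff", "check-in", "friendly"},
}

reviews = [
    "The subway station is close. The bed was comfortable and the bathroom clean.",
    "Friendly staff at check-in. Distance to the airport is short.",
]

def reorganize(reviews, lexicons):
    """Decompose reviews into sentence units, grouped by matched criterion."""
    units = defaultdict(list)
    for review in reviews:
        for sentence in re.split(r"(?<=[.!?])\s+", review.strip()):
            tokens = {t.lower().strip(".,!?") for t in sentence.split()}
            for criterion, lexicon in lexicons.items():
                if tokens & lexicon:  # sentence mentions a lexicon term
                    units[criterion].append(sentence)
    return dict(units)

units = reorganize(reviews, lexicons)
print(units["accessibility"])
```

A reader interested only in accessibility would then see just these two sentences instead of both full reviews.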

Spatio-Temporal Monitoring of Soil CO2 Fluxes and Concentrations after Artificial CO2 Release (인위적 CO2 누출에 따른 토양 CO2 플럭스와 농도의 시공간적 모니터링)

  • Kim, Hyun-Jun;Han, Seung Hyun;Kim, Seongjun;Yun, Hyeon Min;Jun, Seong-Chun;Son, Yowhan
    • Journal of Environmental Impact Assessment, v.26 no.2, pp.93-104, 2017
  • CCS (Carbon Capture and Storage) is a technology for capturing CO₂ from industrial and energy-related sources and transporting and sequestering the compressed CO₂ in geological formations, the ocean, or mineral carbonates. However, potential CO₂ leakage exists and can cause environmental problems. This study was therefore conducted to analyze the spatial and temporal variations of CO₂ fluxes and concentrations after an artificial CO₂ release. The Environmental Impact Evaluation Test Facility (EIT) was built in Eumseong, Korea in 2015. Approximately 34 kg CO₂/day/zone was injected into Zones 2, 3, and 4 of the five zones from October 26 to 30, 2015. CO₂ fluxes were measured every 30 minutes at the surface at 0 m, 1.5 m, 2.5 m, and 10 m from the CO₂ release well using an LI-8100A until November 13, 2015, and CO₂ concentrations were measured once a day at depths of 15 cm, 30 cm, and 60 cm at 0 m, 1.5 m, 2.5 m, 5 m, and 10 m from the well using a GA5000 until November 28, 2015. The CO₂ flux at 0 m from the well began to increase on the fifth day after the release started and continued to increase until November 13, even after the artificial release had stopped. CO₂ fluxes measured at 2.5 m, 5.0 m, and 10 m from the well did not differ significantly from one another. In contrast, the soil CO₂ concentration reached 38.4% at 60 cm depth at 0 m from the well in Zone 3 on the day after the release started. Soil CO₂ spread horizontally over time and was detected up to 5 m from the well in all zones before the release stopped. On the final day of the release period, soil CO₂ concentrations at depths of 30 cm and 60 cm at 0 m from the well were similar, at 50.6 ± 25.4% and 55.3 ± 25.6%, respectively, followed by the 15 cm depth (31.3 ± 17.2%), which was significantly lower than the other depths.
Soil CO₂ concentrations at all depths in all zones decreased gradually for about one month after the release stopped, but remained higher than on the first day after the release started. In conclusion, the closer to the well and the deeper the measurement, the higher the CO₂ fluxes and concentrations. Long-term monitoring is also required, because leaked CO₂ can remain in the soil for a long time even after the leakage has stopped.

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems, v.23 no.1, pp.95-108, 2017
  • Recently, AlphaGo, the Go-playing artificial intelligence program developed by Google DeepMind, won a resounding victory against Lee Sedol. Many people had thought machines could not beat a human at Go because, unlike in chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what was predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains, and deep learning in particular drew interest as the core technique behind the AlphaGo algorithm. Deep learning is already being applied to many problems. It performs especially well in image recognition, and more generally on high-dimensional data such as speech, images, and natural language, where existing machine learning techniques struggled to achieve good performance. In contrast, deep learning research on traditional business data and structured data analysis is hard to find. In this study, we investigated whether the deep learning techniques studied so far can be used not only for recognizing high-dimensional data but also for the binary classification problems of traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared their performance with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account.
To evaluate the applicability of deep learning algorithms and techniques to binary classification, we compared the performance of various models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, against MLP models, a traditional artificial neural network. Since not all network design alternatives can be tested, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the conditions for applying dropout. The F1 score was used to evaluate the models, since it shows how well a model classifies the class of interest rather than overall accuracy. The deep learning techniques were applied as follows. The CNN algorithm reads adjacent values around a specific position and recognizes local features, but business data fields are usually independent, so the distance between fields does not matter; we therefore set the CNN filter size to the number of fields, so that the network learns the characteristics of the whole record at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first to reduce the influence of field position. For dropout, neurons in each hidden layer were dropped with probability 0.5. The experimental results show that the model with the highest F1 score was the CNN model with dropout, followed by the MLP model with two hidden layers and dropout.
Several findings emerged from the experiment. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models, which is interesting because CNNs have rarely been applied to binary classification problems of this kind, beyond the fields where their effectiveness is already proven. Third, the LSTM algorithm appears unsuitable for these binary classification problems, because the training time is too long relative to the performance improvement. From these results, we confirm that some deep learning algorithms can be applied to solve business binary classification problems.
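The role of the F1 score emphasized above — rewarding correct classification of the interesting class rather than overall accuracy — can be seen in a small sketch; the labels below are hypothetical, not from the Portuguese bank dataset.

```python
# F1 from scratch on a toy imbalanced example (hypothetical labels).
def f1_score(y_true, y_pred, positive=1):
    """F1 = harmonic mean of precision and recall for the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# 10 customers, only 2 actually open an account (the class of interest)
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0, 0, 1, 1, 0]   # the model finds one of the two

print(f1_score(y_true, y_pred))   # 0.5, even though plain accuracy is 0.8
```

Accuracy looks acceptable here only because the negative class dominates; F1 exposes that half the interesting cases were missed.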

Steel Plate Faults Diagnosis with S-MTS (S-MTS를 이용한 강판의 표면 결함 진단)

  • Kim, Joon-Young;Cha, Jae-Min;Shin, Junguk;Yeom, Choongsub
    • Journal of Intelligence and Information Systems, v.23 no.1, pp.47-67, 2017
  • Steel plate faults are an important factor affecting the quality and price of steel plates. To date, many steelmakers have relied on visual inspection, in which an inspector checks for faults by looking at the surface of the plates based on intuition or experience. However, the accuracy of this method is critically low, with judgment error rates above 30%, so the industry has continuously required an accurate steel plate faults diagnosis system. To meet this need, this study proposes a new steel plate faults diagnosis system using Simultaneous MTS (S-MTS), an advanced form of the Mahalanobis Taguchi System (MTS) algorithm, to classify various surface defects of steel plates. MTS has generally been used for binary classification problems in various fields but not for multi-class classification, where its accuracy is low because only one Mahalanobis space is established. In contrast, S-MTS is suited to multi-class classification: it establishes an individual Mahalanobis space for each class, and 'simultaneous' refers to comparing the Mahalanobis distances at the same time. The proposed diagnosis system was developed in four main stages. In the first stage, after the reference groups and related variables are defined, steel plate fault data are collected and used to establish an individual Mahalanobis space per reference group and to construct the full measurement scale. In the second stage, the Mahalanobis distances of the test groups are calculated against the established spaces of the reference groups, and the appropriateness of the spaces is verified by examining the separability of the Mahalanobis distances. In the third stage, orthogonal arrays and the dynamic-type Signal-to-Noise (SN) ratio are applied for variable optimization.
The overall SN ratio gain is then derived from the SN ratio and SN ratio gain: a negative overall gain means the variable should be removed, while a variable with a positive gain may be worth keeping. Finally, in the fourth stage, the measurement scale is reconstructed from the selected useful variables, and an experimental test verifies the multi-class classification ability, yielding the classification accuracy; if the accuracy is acceptable, the diagnosis system can be used in future applications. This study also compared the accuracy of the proposed system with that of other popular classification algorithms, including Decision Tree, Multilayer Perceptron Neural Network (MLPNN), Logistic Regression (LR), Support Vector Machine (SVM), Tree Bagger Random Forest, Grid Search (GS), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO). The steel plate faults dataset used in the study is taken from the University of California, Irvine (UCI) machine learning repository. The proposed S-MTS-based system achieved 90.79% classification accuracy, 6-27% higher than MLPNN, LR, GS, GA, and PSO. Given that the accuracy of commercial systems is only about 75-80%, the proposed system has sufficient classification performance for industrial application. In addition, the variable optimization process allows the system to reduce the number of measurement sensors installed in the field. These results show that the proposed system not only diagnoses steel plate faults well but can also reduce operation and maintenance costs.
In future work, the system will be applied in the field to validate its actual effectiveness, and we plan to improve its accuracy based on the results.
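The core S-MTS idea — one Mahalanobis space per reference class, with distances compared simultaneously — can be sketched as follows. The two defect classes and the synthetic data are illustrative only, not the UCI steel-plates dataset, and the sketch omits the measurement-scale and variable-optimization stages.

```python
# Sketch: per-class Mahalanobis spaces and nearest-space classification.
import numpy as np

def mahalanobis_space(samples):
    """Mean vector and inverse covariance defining one class's space."""
    mean = samples.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(samples, rowvar=False))
    return mean, cov_inv

def classify(x, spaces):
    """Assign x to the class whose Mahalanobis distance is smallest."""
    def dist(mean, cov_inv):
        d = x - mean
        return float(np.sqrt(d @ cov_inv @ d))
    return min(spaces, key=lambda c: dist(*spaces[c]))

rng = np.random.default_rng(0)
spaces = {
    "scratch": mahalanobis_space(rng.normal([0.0, 0.0], 1.0, (50, 2))),
    "stain":   mahalanobis_space(rng.normal([5.0, 5.0], 1.0, (50, 2))),
}
print(classify(np.array([4.8, 5.1]), spaces))
```

Because each class keeps its own covariance, the comparison accounts for the shape of each reference group rather than using a single shared space as in plain MTS.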

An Expert System for the Estimation of the Growth Curve Parameters of New Markets (신규시장 성장모형의 모수 추정을 위한 전문가 시스템)

  • Lee, Dongwon;Jung, Yeojin;Jung, Jaekwon;Park, Dohyung
    • Journal of Intelligence and Information Systems, v.21 no.4, pp.17-35, 2015
  • Demand forecasting is the activity of estimating the quantity of a product or service that consumers will purchase over a certain period. Developing precise forecasting models is considered important, since corporations make strategic decisions about new markets based on the future demand the models estimate. Many studies have developed market growth curve models, such as the Bass, Logistic, and Gompertz models, which estimate future demand while a market is in its early stage. Among them, the Bass model, which explains demand through two types of adopters, innovators and imitators, has been widely used in forecasting. Such models require sufficient demand observations to produce reliable results; at the beginning of a new market, however, observations are too few for the models to estimate future demand precisely. As an alternative, demand inferred from the most adjacent markets is often used as a reference in such cases. Reference markets can be those whose products are based on the same category of technology: a market's demand can be expected to follow a pattern similar to a reference market's when the adoption of a product is driven mainly by its underlying technology. However, this process does not always yield satisfactory results, because the similarity between markets is judged by intuition and experience. Two major drawbacks limit what human experts can handle in this approach: the abundance of candidate reference markets to consider, and the difficulty of calculating the similarity between markets. First, there can be too many markets to consider when selecting references. Markets in the same category of an industrial hierarchy are natural candidates because they are usually based on similar technologies.
However, markets can fall into different categories even when based on the same generic technologies, so markets in other categories must also be considered as potential candidates. Second, even domain experts cannot consistently calculate inter-market similarity with their own qualitative standards. This inconsistency means adjacent reference markets may be missed, which can lead to imprecise demand estimates; and even when no references are missing, the new market's parameters can hardly be estimated from them without quantitative standards. This study therefore proposes a case-based expert system that helps experts overcome these drawbacks in discovering reference markets. First, the study proposes the Euclidean distance measure to calculate the similarity between markets. Based on their similarities, markets are grouped into clusters, and markets missing from a cluster but sharing its characteristics are searched for; potential candidate reference markets are then extracted and recommended to users. After iterating these steps, the definitive reference markets are determined by the user's selection among the candidates, and finally the new market's parameters are estimated from them. Two techniques are used in the model for this procedure: the clustering data mining technique and the content-based filtering of recommender systems. The implemented system determines the most adjacent markets based on whether the user accepts the candidate markets. Experiments involving five ICT experts were conducted to validate the system's usefulness. The experts were given a list of 16 ICT markets whose parameters were to be estimated; for each market, they first estimated the growth curve parameters by intuition, and then with the system.
A comparison of the experimental results shows that the parameter estimates are more accurate when the experts use the system than when they guess without it.
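The Bass model whose parameters the system helps estimate has a closed form for cumulative adoption; the sketch below uses illustrative textbook-scale parameter values (p, q, m), not values from the paper.

```python
# Closed-form Bass diffusion curve: cumulative adopters over time.
import math

def bass_cumulative(t, p, q, m):
    """Cumulative adopters at time t for market potential m,
    innovation coefficient p (innovators) and imitation coefficient q
    (imitators)."""
    e = math.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

# Illustrative magnitudes only (assumed, not from the paper)
m, p, q = 100_000, 0.03, 0.38
demand = [bass_cumulative(t, p, q, m) for t in range(11)]
print(round(demand[10]))  # cumulative adopters after 10 periods
```

With too few early observations, fitting p, q, and m directly is unstable, which is why the paper borrows parameters from similar reference markets.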

Study on the Difference in Intake Rate by Kidney in Accordance with whether the Bladder is Shielded and Injection method in 99mTc-DMSA Renal Scan for Infants (소아 99mTc-DMSA renal scan에서 방광차폐유무와 방사성동위원소 주입방법에 따른 콩팥섭취율 차이에 관한 연구)

  • Park, Jeong Kyun;Cha, Jae Hoon;Kim, Kwang Hyun;An, Jong Ki;Hong, Da Young;Seong, Hyo Jin
    • The Korean Journal of Nuclear Medicine Technology, v.20 no.2, pp.27-31, 2016
  • Purpose: A 99mTc-DMSA renal scan images the renal parenchyma via the renal cortex and compares kidney function by computing the ratio of radiation intake between the left and right kidneys. Because the distance between the kidneys and the bladder is short in an infant's body, the bladder is included in the examination field. This research was carried out on the presumption that bladder counts influence the kidney counts in such scans. Considering that only a trace amount of radioisotope (RI) is injected in pediatric examinations, the method of injection was also studied concurrently. Materials and Methods: In 34 infants aged 1 to 12 months who underwent a 99mTc-DMSA renal scan, a post-injection image was acquired at the scheduled time after injecting the same 0.5 mCi dose of DMSA. An additional image was then acquired with the bladder shielded by a circular lead plate, and the two were compared by computing the percentage (left kidney counts + right kidney counts) / total counts within identically sized ROIs (55.2 mm long × 70.0 mm wide). In addition, three RI injection methods were compared: via a 3-way stopcock, via a heparin cap, and direct injection into the patient; for the 3-way stopcock and heparin cap, an additional 2 cc of saline was injected and the resulting count changes were compared. Results: The image before bladder shielding showed a kidney intake rate of 70.9 ± 3.18%, while the image after shielding showed 79.4 ± 5.19%, a difference of approximately 6.5-8.5%.
By injection method, the 3-way stopcock showed 68.9 ± 2.80% before shielding and 78.1 ± 5.14% after; the heparin cap showed 71.3 ± 5.14% before and 79.8 ± 3.26% after; and direct injection showed 75.1 ± 4.30% before and 82.1 ± 2.35% after, giving kidney intake rates in the order of direct injection, heparin cap, and 3-way stopcock. Conclusion: Because a far smaller quantity of radiopharmaceutical is injected in infants than in adults, shielding the bladder to remove its radiation improved the kidney intake rates over the unshielded cases. Although securing a blood vessel is difficult, direct injection is considered more helpful for acquiring better images, since it showed a higher kidney intake rate than the other methods.
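The intake-rate computation above is a simple count ratio, which makes the shielding effect easy to see: removing bladder counts shrinks the denominator. The counts below are hypothetical, chosen only to mimic the reported before/after magnitudes.

```python
# Toy illustration of the kidney intake-rate ratio (hypothetical counts).
def kidney_intake_rate(lt_counts, rt_counts, total_counts):
    """(left + right kidney counts) / total ROI counts, as a percentage."""
    return 100 * (lt_counts + rt_counts) / total_counts

lt, rt, other, bladder = 42_000, 38_000, 20_000, 12_000

unshielded = kidney_intake_rate(lt, rt, lt + rt + other + bladder)
shielded = kidney_intake_rate(lt, rt, lt + rt + other)  # bladder removed
print(round(unshielded, 1), round(shielded, 1))  # shielded rate is higher
```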


Environmental Management of Marine Cage Fish Farms using Numerical Modelling (수치모델을 이용한 해상어류가두리양식장의 환경관리 방안)

  • Kwon, Jung-No;Jung, Rae-Hong;Kang, Yang-Soon;An, Kyoung-Ho;Lee, Won-Chan
    • The Sea: Journal of the Korean Society of Oceanography, v.10 no.4, pp.181-195, 2005
  • To study the effects of marine cage fish farming on the marine environment, field surveys covering hydrography, sediment, benthos, and a sediment trap experiment were carried out at marine cage fish farms (Site A) near the estuaries of Tongyeong city on June 26-27, 2003. A simulation using the numerical model DEPOMOD was conducted to predict and assess the solid deposition from the fish cages, and the efficiency of environmental management of marine cage fish farms was studied. The farms mainly cultured common sea bass (Lateolabrax japonicus), red seabream (Pagrus major), striped beakperch (Oplegnathus fasciatus), and black rockfish (Sebastes schlegeli), and the total amount of cultured fish at Site A was 23.1 MT. The stock of husbanded fish per unit area (and volume) of cage was 43.0 kg/m² (6.1 kg/m³). The daily mean amounts of feed per unit biomass and per unit cage area at Site A were 30.8 g/kg/day and 1.32 kg/m²/day, respectively. The ORP of the sediment below the center of Site A was -334.6 mV, and the concentrations of AVS, COD, carbon, and nitrogen were 0.43, 17.75, 10.19, and 3.49 mg/g dry, respectively. Capitella capitata was the dominant benthic species, accounting for 57.8% of the total, and the Infaunal Trophic Index (ITI) was below 20 within 20 m of the edge of Site A. In the trap experiment, solid deposition from Site A was 34,485 g/m²/yr at 0 m from the center of the cage and 18,915 g/m²/yr at 42 m. The model simulation estimated that the proportion of uneaten feed was 40% at Site A and that the annual total solid deposition was 63,401, accounting for 24.4% of the annual total feed at Site A.
The area over which solids settled was estimated at 8,450 m², about 16 times the total cage area of Site A. Considering the ITI and the abundance of benthos, the model predicted that the sustainable solid flux at Site A was below 10,000 g/m²/yr. Wasted feed was the main component of solid deposition at the farms, so minimizing deposition requires increasing the efficiency of feed uptake. Based on the model simulation, if the percentage of wasted feed decreased from the current 40% to 10%, solid deposition could be halved; it was also predicted that if farmers used EP pellets as feed instead of MP pellets and trash fish, solid deposition could decrease by 57%. This study also proposes that the ratio of cage facilities to the licensed area be reduced to less than 5% to minimize sediment pollution.

Studies on the Assumption of the Locations and Formational Characteristics in Yigye-gugok, Mt. Bukhansan (북한산 이계구곡(耳溪九曲)의 위치비정과 집경(集景) 특성)

  • Jung, Woo-Jin;Rho, Jae-Hyun;Lee, Hee-Young
    • Journal of the Korean Institute of Traditional Landscape Architecture
    • /
    • v.35 no.3
    • /
    • pp.41-66
    • /
    • 2017
  • The purpose of this research is to empirically trace the junctures of Yigye-gugok, managed by Gwan-am Hong Gyeong-mo, a grandson of Yigye Hong Yang-ho who originally designed Yigye-gugok, while reviewing the form and pattern of the gugok. The results of the research are as follows. 1. Ui-dong was part of the domain of the capital during the Chosun dynasty, and it remains within the administrative boundaries of present-day Seoul. Yigye-gugok therefore takes on a special meaning as the one and only gugok there. Starting with Mangyeong Waterfall as the $1^{st}$ gok, Yigye-gugok continues through the $2^{nd}$ gok of Jeokchwibyeong Rock, the $3^{rd}$ gok of Chanunbong Peak, the $4^{th}$ gok of Jinuigang Rock, the $5^{th}$ gok of Okkyeongdae Rock, the $6^{th}$ gok of Wolyeongdam Pond, the $7^{th}$ gok of Tagyeongam Rock, the $8^{th}$ gok of Myeongoktan Stream, and the $9^{th}$ gok of Jaeganjeong Pavilion. Of these, Mangyeong Waterfall, Chanunbong Peak, and Okkyeongdae Rock are clearly identifiable by location as well as by their features, and estimated locations were found for Jinuigang Rock, Wolyeongdam Pond, Myeongoktan Stream, and Jaeganjeong Pavilion. For Jeokchwibyeong Rock and Tagyeongam Rock, however, multiple candidate locations closely matching the documentary records lay in close proximity, so geography, scenery, and sighted objects were weighed to select the most likely location. Through these efforts it was possible to identify the course and structure of the gugok over a total distance of 2.1 km from the $1^{st}$ gok to the $9^{th}$ gok, which approximates Gwanam's description of the gugok as 5 ri(里), or about 1.96 km. 2. Set towards the end of the $18^{th}$ century, Yigye-gugok originated from a series of works shaping the space around Hong Yang-ho's tomb into a space for the family. 
Compared with other gugok, Yigye-gugok departs from the more general format in numerous respects, such as its course being set from the upper reaches of the stream toward the lower reaches, ending at the $8^{th}$ gok. This gives rise to the interpretation that Yigye-gugok was positioned to separate the family's domain from those of the other families in power, thereby taking possession of Ui-dong. The pattern of spatial possession also suggests that placing the $8^{th}$ gok above Mangyeongpok Waterfall, the landmark of Wooyi-dong, was a consequence of efforts to create a centrifugal space. 3. Although prose and poetry were produced in large quantities at Yigye-gugok, whose setters and managers appear to have intended gugok-do paintings and letters carved on the rocks among other works, visual media of that kind remain strikingly scarce. The 'Yigye-gugok Daejacheop' specimens of handwriting offer traces of Gwanam's attempts to engrave gakja at the foot of Yigye-gugok. This research was able to ascertain that the 'Yigye-gugok Daejacheop' specimens of handwriting, renowned for Song Shi-yeol's penmanship, came from Hong Yang-ho's collections maintained under the auspices of the National Central Museum.
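The comparison between the measured 2.1 km course and Gwanam's "5 ri" description rests on a traditional unit conversion. A quick check, assuming the modern standard equivalent of one Korean ri (里) at roughly 392.7 m (an assumption on our part; historical ri varied):

```python
RI_IN_KM = 0.3927  # assumed modern equivalent of one Korean ri (里), ~392.7 m

def ri_to_km(ri):
    """Convert a distance in ri to kilometers."""
    return ri * RI_IN_KM

five_ri = ri_to_km(5)  # ~1.96 km, close to the surveyed 2.1 km course
```

The ~0.14 km gap between 5 ri and the surveyed distance is small enough to support identifying the surveyed course with Gwanam's description.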

STUDIES ON THE PROPAGATION OF ABALONE (전복의 증식에 관한 연구)

  • PYEN Choong-Kyu
    • Korean Journal of Fisheries and Aquatic Sciences
    • /
    • v.3 no.3
    • /
    • pp.177-186
    • /
    • 1970
  • Spawning of the abalone, Haliotis discus hannai, was induced in October 1969 by air exposure for about 30 minutes. At temperatures from 14.0 to $18.8^{\circ}C$, the earliest trochophore stage was reached within 22 hours after the eggs were laid. The trochophore transformed into the veliger stage within 34 hours after fertilization. For $7\~9$ days after oviposition the veliger floated in sea water and then settled to the bottom. The peristomal shell was secreted along the outer lip of the aperture of the larval shell, and the first respiratory pore appeared at about 110 days after fertilization. The shell attained a length of 0.40 mm in 15 days, 1.39 mm in 49 days, 2.14 mm in 110 days, 5.20 mm in 170 days and 10.00 mm in 228 days. Monthly growth of shell length is expressed by the following equation: $L=0.9981\;e^{0.18659M}$, where L is shell length and M is time in months. The density of floating larvae in the culture tank was about 10 larvae per 100 cc. The number of larvae attached to a polyethylene collector ($30\times20\;cm$) ranged from 10 to 600. Mortality of the settled larvae on the polyethylene collectors was about $87.0\%$ during the 170 days following settlement. Navicula sp. was cultured on rough polyethylene collectors hung at three depths, namely 5 cm, 45 cm and 85 cm. At each depth the highest cell concentration appeared after $15\~17$ days, with cell counts as follows: $34.3\times10^4\;cells/cm^2$ at 5 cm, $27.2\times10^4\;cells/cm^2$ at 45 cm, and $26.3\times10^4\;cells/cm^2$ at 85 cm. At temperatures from 13.0 to $14.3^{\circ}C$, the distance travelled by the larvae (3.0 mm in shell length) averaged 11.36 mm over a period of 30 days. Their locomotion was relatively active between 6 p.m. and 9 p.m., when $52.2\%$ of them moved. When larvae (2.0 mm in shell length) were kept in water at $0\~1.8^{\circ}C$, they moved 1.15 cm between 4 p.m. and 8 p.m. and 0.10 cm between midnight and 8 a.m. The relationships between shell length and body weight of abalone sampled from three localities are as follows: Dolsan-do, $W=0.2479\;L^{2.5721}$; Huksan-do, $W=0.1001\;L^{3.1021}$; Pohang, $W=0.9632\;L^{2.0611}$.
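The fitted growth curve and the per-locality length-weight relationships can be evaluated directly. The sketch below implements the equations exactly as reported; note that the abstract does not state the units of W and L in the length-weight fits, so the functions return values in whatever units the original regressions used.

```python
import math

def shell_length_mm(months):
    """Shell length (mm) after `months` of growth,
    from the reported fit L = 0.9981 * e^(0.18659 * M)."""
    return 0.9981 * math.exp(0.18659 * months)

# Length-weight relationships W = a * L^b fitted per sampling locality.
LOCALITIES = {
    "Dolsan-do": (0.2479, 2.5721),
    "Huksan-do": (0.1001, 3.1021),
    "Pohang":    (0.9632, 2.0611),
}

def body_weight(length, locality):
    """Body weight from shell length via the locality's fitted W = a * L^b."""
    a, b = LOCALITIES[locality]
    return a * length ** b
```

For example, the higher exponent for Huksan-do ($b=3.1021$) means its abalone gain weight faster with length than those from Dolsan-do or Pohang once the shell is large.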
