• Title/Summary/Keyword: monitoring


A Study on the Seawater Filtration Characteristics of Single and Dual-filter Layer Well by Field Test (현장실증시험에 의한 단일 및 이중필터층 우물의 해수 여과 특성 연구)

  • Song, Jae-Yong;Lee, Sang-Moo;Kang, Byeong-Cheon;Lee, Geun-Chun;Jeong, Gyo-Cheol
    • The Journal of Engineering Geology / v.29 no.1 / pp.51-68 / 2019
  • This study evaluates the applicability of a shore-filtration seawater-intake method that uses a dual-filter well as an alternative to direct seawater intake. Full-scale dual-filter and single-filter wells were installed in a coastal free-surface aquifer composed of sand, and permeability and proper pumping rates were evaluated under varying filter conditions. In step-drawdown tests, the dual-filter well showed a synergy effect of 110.3% in the permeability coefficient relative to the single-filter well; at the same pumping rate the dual filter had a higher permeability coefficient, indicating a greater permeability improvement than the single filter. In continuous aquifer tests, analysis of the permeability coefficient using monitoring and gauging wells showed that the dual-filter well (SD1200) had higher permeability than the single-filter well (SS800), with a synergy effect of 110.7%. From the analysis of drawdown rates, the dual-filter well delivered 122.8% of the single-filter well's pumping rate at a drawdown of 2.0 m. When the proper pumping rate was calculated from the drawdown rate, the dual-filter well yielded 136.0% of the single-filter value; overall, the proper pumping rate improved by 122.8~160% relative to the single filter, with an average improvement rate of 139.6%. In other words, simply installing a dual filter improves intake efficiency by about 40% compared with a conventional well. The proper pumping rate of the dual-filter well determined from the inflection point is 2,843.3 L/min, equivalent to a daily seawater intake of about 4,100 m³/day (≈4,094.3 m³/day) from a single dual-filter well. Because a large volume can be drawn from one well, high applicability is anticipated. A dual-filter intake is also free from facility damage caused by natural disasters such as severe weather or typhoons, and the coastal sand layer acts as a filter, so reduced pollution is expected. It can therefore serve as an environmentally sound alternative to existing seawater-intake techniques, save installation and maintenance costs, and offer excellent economic applicability. The results of this study will be used as basic data for field demonstration tests of riverbank-filtered water using dual-filter wells and are expected to provide design and construction standards for wells in riverbank- and shore-filtration applications.
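A quick check of the figures above, as a minimal sketch: the conversion from the reported proper pumping rate (2,843.3 L/min) to the quoted daily intake (≈4,094.3 m³/day) is pure unit arithmetic. Variable names are illustrative, not from the paper.

```python
# Unit-conversion check for the reported dual-filter pumping figures.
L_PER_M3 = 1000
MINUTES_PER_DAY = 24 * 60  # 1,440

def lpm_to_m3_per_day(q_l_per_min: float) -> float:
    """Convert a pumping rate from L/min to m^3/day."""
    return q_l_per_min * MINUTES_PER_DAY / L_PER_M3

dual_q = 2843.3  # L/min, proper pumping rate from the inflection point
print(lpm_to_m3_per_day(dual_q))  # 4094.352 -> "about 4,100 m^3/day"
```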

The Effect of Shading on Pedestrians' Thermal Comfort in the E-W Street (동-서 가로에서 차양이 보행자의 열적 쾌적성에 미치는 영향)

  • Ryu, Nam-Hyong;Lee, Chun-Seok
    • Journal of the Korean Institute of Landscape Architecture / v.46 no.6 / pp.60-74 / 2018
  • This study investigated pedestrians' thermal environments on the north sidewalk of an E-W street during a summer heat wave. We carried out detailed measurements with four human-biometeorological stations on Dongjin Street, Jinju, Korea (N35°10.73′~10.75′, E128°55.90′~58.00′, elevation 50 m). Two stations stood under street trees and hedges, one beneath a single row of street trees (One-Tree) and one beneath a double row (Two-Tree); one station stood under a shelter and awning (Shelter); and the last stood in the sun (Sunlit). Each spot was instrumented with a microclimate monitoring station that continuously measured the microclimate, including radiation from the six cardinal directions at a height of 1.1 m, so that the Universal Thermal Climate Index (UTCI) could be calculated from 24 July to 21 August 2018. The radiant temperatures of the sidewalk's surface elements were measured with a reflective sphere and a thermal camera on 29 July 2018. Analysis of nine days of 1-minute human-biometeorological data for a person in standing position from 10 a.m. to 4 p.m., and of one day's radiant temperatures of sidewalk elements from 1:16 p.m. to 1:35 p.m., showed the following. The shading of the street trees and the shelter mitigated heat stress by lowering the UTCI during mid- and late-summer daytime: One-Tree and Two-Tree lowered the heat stress by 0.4~0.5 and 0.5~0.8 levels respectively, and Shelter by 0.3~1.0 levels, compared with Sunlit. During the heat wave, however, the thermal environments at One-Tree, Two-Tree, and Shelter still exposed users to "very strong heat stress", while Sunlit exposed them to "very strong heat stress" and "extreme heat stress". The main heat-load surface temperatures, relative to body temperature (37°C), were 7.4~21.4°C higher for the pavement, 14.7~15.8°C for the road, 12.7°C for the shelter canopy, 7.0°C for street furniture, and 3.5~6.4°C for building facades. The corresponding heat-load percentages were 34.9%~81.0% (pavement), 9.6%~25.2% (road), 24.8% (shelter canopy), 14.1%~15.4% (building facade), and 5.7% (street facility). Reducing the radiant temperature of the pavement, road, and building surfaces by shading is the most effective means of achieving outdoor thermal comfort for pedestrians on sidewalks. Therefore, increasing the projected canopy area and LAI of street trees through minimal training and pruning, and building dense roadside hedges, are essential for pedestrians' thermal comfort. In addition, thermal liners, highly reflective materials, greening, and the like should be introduced to reduce the surface temperature of shelter and awning canopies; retro-reflective materials should be applied to building facades to control reflected solar radiation; and pavement watering should be used more aggressively to reduce the surface temperature of sidewalk pavements.
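For readers unfamiliar with the UTCI labels used above ("very strong heat stress", "extreme heat stress"), the sketch below maps a UTCI value to its category on the standard UTCI assessment scale. This is the generic scale, not code from the study.

```python
def utci_heat_category(utci_c: float) -> str:
    """Map a UTCI value (°C) to the standard heat-stress category."""
    if utci_c > 46:
        return "extreme heat stress"
    if utci_c > 38:
        return "very strong heat stress"
    if utci_c > 32:
        return "strong heat stress"
    if utci_c > 26:
        return "moderate heat stress"
    if utci_c >= 9:
        return "no thermal stress"
    return "cold stress (subdivided further below 9°C)"

print(utci_heat_category(39.0))  # very strong heat stress
print(utci_heat_category(47.5))  # extreme heat stress
```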

Characteristics of Vegetation Structure of Burned Area in Mt. Geombong, Samcheok-si, Kangwon-do (강원도 삼척 검봉산 일대 산불 피해복원지 식생 구조 특성)

  • Sung, Jung Won;Shim, Yun Jin;Lee, Kyeong Cheol;Kweon, Hyeong keun;Kang, Won Seok;Chung, You Kyung;Lee, Chae Rim;Byun, Se Min
    • Journal of Practical Agriculture & Fisheries Research / v.24 no.3 / pp.15-24 / 2022
  • In 2000, a total of 23,794 ha of forest was lost in the East Coast forest fire, and about 70% of the damaged area was concentrated in Samcheok. Artificial and natural restoration were implemented in the damaged area in 2001. This study was conducted to characterize the vegetation structure 21 years after restoration of the fire-damaged area around Mt. Geombong, Samcheok. Classification of the vegetation yielded three communities: a Quercus variabilis-Pinus densiflora community, a Pinus densiflora-Quercus mongolica community, and a Pinus thunbergii community. Quercus variabilis, Pinus densiflora, and Pinus thunbergii planted at the artificial restoration sites were found to have continued growing as dominant species in the local vegetation after restoration. The species diversity index was highest in the Quercus variabilis-Pinus densiflora community, dominated by deciduous broad-leaved trees, and lowest in the coniferous Pinus thunbergii community. Vegetation in fire-damaged areas is strongly influenced by the reforestation species, and after 21 years it has shown a tendency to recover toward the pre-fire forest type. To establish a database for effective restoration and to prepare monitoring data, continuous vegetation surveys of fire-damaged areas are needed.
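The "species diversity index" compared across communities is conventionally the Shannon index H′; a minimal sketch follows, assuming that convention and using made-up abundance counts rather than the study's data.

```python
import math

def shannon_index(counts: list[int]) -> float:
    """Shannon diversity H' = -sum(p_i * ln p_i) over species proportions."""
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

# Hypothetical stands: an evenly mixed stand scores higher than a stand
# dominated by a single species.
print(round(shannon_index([30, 25, 20, 15, 10]), 2))  # ~1.54 (more diverse)
print(round(shannon_index([85, 5, 5, 3, 2]), 2))      # ~0.62 (dominated)
```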

Soil Physical Properties of Arable Land by Land Use Across the Country (토지이용별 전국 농경지 토양물리적 특성)

  • Cho, H.R.;Zhang, Y.S.;Han, K.H.;Cho, H.J.;Ryu, J.H.;Jung, K.Y.;Cho, K.R.;Ro, A.S.;Lim, S.J.;Choi, S.C.;Lee, J.I.;Lee, W.K.;Ahn, B.K.;Kim, B.H.;Kim, C.Y.;Park, J.H.;Hyun, S.H.
    • Korean Journal of Soil Science and Fertilizer / v.45 no.3 / pp.344-352 / 2012
  • Soil physical properties determine soil quality with respect to root growth, infiltration, and water- and nutrient-holding capacity. Although monitoring soil physical properties is important for sustainable agricultural production, few such studies exist. This study investigated the soil physical properties of arable land by land use across the country. Plastic-film-house soils, upland soils, orchard soils, and paddy soils were surveyed from 2008 to 2011 for topsoil depth, bulk density, hardness, soil texture, and organic matter. The average values were as follows. In plastic-film-house soils, topsoil depth was 16.2 cm; for the topsoil, hardness was 9.0 mm, bulk density 1.09 Mg m⁻³, and organic matter content 29.0 g kg⁻¹, and for the subsoil, hardness was 19.8 mm, bulk density 1.32 Mg m⁻³, and organic matter content 29.5 g kg⁻¹. In upland soils, topsoil depth was 13.3 cm; for the topsoil, hardness was 11.3 mm, bulk density 1.33 Mg m⁻³, and organic matter content 20.6 g kg⁻¹, and for the subsoil, hardness was 18.8 mm, bulk density 1.52 Mg m⁻³, and organic matter content 13.0 g kg⁻¹. By crop type, soil physical property values were high for deep-rooted and short-rooted vegetable soils but low for leafy-vegetable soils. In orchard soils, topsoil depth was 15.4 cm; for the topsoil, hardness was 16.1 mm, bulk density 1.25 Mg m⁻³, and organic matter content 28.5 g kg⁻¹, and for the subsoil, hardness was 19.8 mm, bulk density 1.41 Mg m⁻³, and organic matter content 15.9 g kg⁻¹. In paddy soils, topsoil depth was 17.5 cm; for the topsoil, hardness was 15.3 mm, bulk density 1.22 Mg m⁻³, and organic matter content 23.5 g kg⁻¹, and for the subsoil, hardness was 20.3 mm, bulk density 1.47 Mg m⁻³, and organic matter content 17.5 g kg⁻¹. Average bulk density by land use increased in the order plastic-film-house soils < paddy soils < orchard soils < upland soils. Topsoil bulk density was mainly distributed in the range 1.0~1.25 Mg m⁻³; subsoil bulk density was mostly above 1.50 Mg m⁻³ for upland and paddy soils, 1.35~1.50 Mg m⁻³ for orchard soils, and 1.0~1.50 Mg m⁻³ for plastic-film-house soils. By soil textural family, bulk density was lower in clayey soils and higher in fine-silty and sandy soils. Because soil physical properties and topographic distribution differed by land use and crop, land use and crop type should be considered for appropriate soil management.

Cohort Observation of Blood Lead Concentration of Storage Battery Workers (축전지공장 근로자들의 혈중 연농도에 대한 코호트 관찰)

  • Kim, Chang-Yoon;Kim, Jung-Man;Han, Gu-Wung;Park, Jung-Han
    • Journal of Preventive Medicine and Public Health / v.23 no.3 s.31 / pp.324-337 / 1990
  • To assess the effectiveness of interventions in the working environment and personal hygiene against occupational lead exposure, 156 workers (116 exposed subjects and 40 controls) of a newly established battery factory were examined for blood lead concentration (Pb-B) every 3 months for up to 18 months. The air lead concentration (Pb-A) of the workplaces was also checked three times at 6-month intervals from August 1987. The environmental intervention included local exhaust ventilation and vacuum cleaning of the floor. The personal hygiene intervention included a daily change of clothes, a compulsory shower after work, hand washing before meals, prohibition of cigarette smoking and food consumption at the work site, and wearing masks. The mean Pb-B of the controls was 21.97±3.36 μg/dl at the pre-employment examination and increased slightly to 22.75±3.38 μg/dl after 6 months. The mean Pb-B of workers employed before the factory went into operation (Group A) was 20.49±3.84 μg/dl on employment and increased to 23.90±5.30 μg/dl after 3 months (p<0.01). Pb-B rose to 28.84±5.76 μg/dl 6 months after employment, 1 month after the initiation of the intervention program; it did not increase thereafter, ranging between 26.83 and 28.28 μg/dl in the subsequent four tests. The mean Pb-B of workers employed after the factory was in operation but before the intervention program began (Group B) was 16.58±4.53 μg/dl before exposure and increased to 28.82±5.66 μg/dl (p<0.01) 3 months later (1 month after the intervention); the values in the subsequent four tests remained between 26.46 and 28.54 μg/dl. The mean Pb-B of workers employed after the intervention program had started (Group C) was 19.45±3.44 μg/dl at the pre-employment examination and increased gradually to 22.70±4.55 μg/dl after 3 months (p<0.01), 23.68±4.18 μg/dl after 6 months, and 24.42±3.60 μg/dl after 9 months. Work stations were classified into four parts according to Pb-A. The Pb-A of Part I, the highest area, was 0.365 mg/m³, which decreased to 0.216 and 0.208 mg/m³ in the follow-up tests after the intervention. The Pb-A of Part II decreased from 0.232 mg/m³ to 0.148 and then 0.120 mg/m³ after the intervention. Pb-A of Parts III and IV was tested only after the intervention: Part III measured 0.124 mg/m³ in January 1988 and 0.081 mg/m³ in August 1988, while Part IV, whose workers were not stationed at one place but moved around, measured 0.110 mg/m³ in August 1988. There was no consistent relationship between Pb-B and Pb-A: the Pb-B of Group A and B workers in the part with the highest Pb-A was lower than that of workers in parts with lower Pb-A, the Pb-B of workers in the part with the lowest Pb-A increased more rapidly, and the Pb-B of Group C workers was highest in Part I and lowest in Part IV. These findings suggest that Pb-B is a more valid measure than Pb-A for monitoring the health of lead workers and that intervention in personal hygiene is more effective than environmental intervention.


Clinical Application of Serum CEA, SCC, Cyfra21-1, and TPA in Lung Cancer (폐암환자에서 혈청 CEA, SCC, Cyfra21-1, TPA-M 측정의 의의)

  • Lee, Jun-Ho;Kim, Kyung-Chan;Lee, Sang-Jun;Lee, Jong-Kook;Jo, Sung-Jae;Kwon, Kun-Young;Han, Sung-Beom;Jeon, Young-June
    • Tuberculosis and Respiratory Diseases / v.44 no.4 / pp.785-795 / 1997
  • Background: Tumor markers have been used in diagnosis, in predicting the extent of disease, in monitoring recurrence after therapy, and in predicting prognosis, but their utility in lung cancer has been limited by low sensitivity and specificity. TPA-M is a recently developed marker using combined monoclonal antibodies against cytokeratins 8, 18, and 19. This study evaluated the efficacy of the new tumor marker TPA-M by comparing it with the established markers SCC, CEA, and Cyfra21-1 in lung cancer. Method: Immunoradiometric assays of serum CEA, SCC, Cyfra21-1, and TPA-M were performed in 49 pathologically confirmed lung cancer patients who visited Keimyung University Hospital from April 1996 to August 1996 and in 29 patients with benign lung diseases. Commercially available kits were used: Ab bead CEA (Eiken) for CEA, SCC RIA BEAD (DAINABOT) for SCC, CA2H (TFB) for Cyfra21-1, and TPA-M (DAIICHI) for TPA-M. Results: The mean serum values of the lung cancer and control groups were 10.05±38.39 vs. 1.59±0.94 μg/L for CEA, 3.04±5.79 vs. 1.58±2.85 μg/L for SCC, 8.27±11.96 vs. 1.77±2.72 μg/L for Cyfra21-1, and 132.02±209.35 vs. 45.86±75.86 U/L for TPA-M. Serum values of Cyfra21-1 and TPA-M were higher in the lung cancer group than in the control group (p<0.05). Using the cutoff values recommended by the manufacturers (2.5 μg/L for CEA, 3.0 μg/L for Cyfra21-1, 70.0 U/L for TPA-M, and 2.0 μg/L for SCC), sensitivity and specificity for lung cancer were 33.3% and 78.6% for CEA, 50.0% and 89.7% for Cyfra21-1, 52.3% and 89.7% for TPA-M, and 23.8% and 89.3% for SCC. For non-small cell lung cancer, sensitivity and specificity were 36.1% and 78.1% (CEA), 50.1% and 89.7% (Cyfra21-1), 53.1% and 89.7% (TPA-M), and 33.8% and 89.3% (SCC). For small cell lung cancer, they were 25.0% and 78.5% (CEA), 50.0% and 89.6% (Cyfra21-1), 50.0% and 89.6% (TPA-M), and 0% and 89.2% (SCC). Cutoff values derived from ROC (receiver operating characteristic) curves were 1.25 μg/L for CEA, 1.5 μg/L for Cyfra21-1, 35 U/L for TPA-M, and 0.6 μg/L for SCC; with these cutoffs, the sensitivity, specificity, accuracy, and kappa index of Cyfra21-1 and TPA-M were better than those of CEA and SCC. Only SCC was significantly related to TNM stage when stages were divided into operable (I to IIIA) and inoperable (IIIB and IV) groups (p<0.05), and no tumor marker showed a significant correlation with tumor size (p>0.05). Conclusion: Serum TPA-M and Cyfra21-1 show higher sensitivity and specificity than CEA and SCC in pathologically confirmed lung cancer overall and in non-small cell lung cancer. SCC has higher specificity in non-small cell lung cancer, and serum SCC levels are significantly related to TNM stage.
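As a reminder of how the cutoff-based figures above arise, here is a minimal sketch of computing sensitivity and specificity at a given cutoff and of picking an ROC-style cutoff by Youden's J, one common rule. The values are invented, not the study's measurements.

```python
def sens_spec(values, labels, cutoff):
    """labels: True = cancer, False = benign; test positive if value >= cutoff."""
    tp = sum(v >= cutoff and y for v, y in zip(values, labels))
    fn = sum(v < cutoff and y for v, y in zip(values, labels))
    tn = sum(v < cutoff and not y for v, y in zip(values, labels))
    fp = sum(v >= cutoff and not y for v, y in zip(values, labels))
    return tp / (tp + fn), tn / (tn + fp)

def youden_cutoff(values, labels):
    """Cutoff maximizing J = sensitivity + specificity - 1."""
    return max(set(values), key=lambda c: sum(sens_spec(values, labels, c)) - 1)

vals = [4.2, 1.1, 3.8, 0.9, 2.6, 1.4]           # hypothetical marker levels
labs = [True, False, True, False, True, False]  # hypothetical diagnoses
print(sens_spec(vals, labs, 2.0))  # (1.0, 1.0) on this toy data
print(youden_cutoff(vals, labs))   # 2.6
```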


A Study on the Meaning and Strategy of Keyword Advertising Marketing

  • Park, Nam Goo
    • Journal of Distribution Science / v.8 no.3 / pp.49-56 / 2010
  • At the initial stage of Internet advertising, banner advertising came into fashion. As the Internet developed into a central part of daily life and competition in the on-line advertising market grew fierce, there was not enough space for banner advertising, which rushed to portal sites only; all these factors were responsible for an upsurge in advertising prices. Consequently, the high-cost, low-efficiency problems of banner advertising were raised, which led to the emergence of keyword advertising as a new type of Internet advertising to replace its predecessor. In the early 2000s, when Internet advertising became active, display advertising, including banner advertising, dominated the Net. However, display advertising showed signs of gradual decline and registered negative growth in 2009, whereas keyword advertising grew rapidly and began to outdo display advertising from 2005. Keyword advertising is the technique of exposing relevant advertisements at the top of search sites when a user searches for a keyword. Instead of exposing advertisements to unspecified individuals as banner advertising does, keyword advertising, a targeted advertising technique, shows advertisements only when customers search for a desired keyword, so that only highly prospective customers see them; in this context it is also referred to as search advertising. It is regarded as more aggressive advertising with a higher hit rate than earlier forms in that, instead of the seller discovering customers and running advertisements at them as with TV, radio, or banner advertising, it exposes advertisements to customers who are already visiting. Keyword advertising makes it possible for a company to seek publicity on line simply by making use of a single word, achieving maximum efficiency at minimum cost. Its strong point is that customers can directly reach the products in question, making it more efficient than advertising in mass media such as TV and radio. Its weak point is that a company must register its advertisement on each and every portal site and finds it hard to exercise substantial supervision over the advertisement, so advertising expenses may exceed profits. Keyword advertising serves as the most appropriate advertising method for the sales and publicity of small and medium enterprises, which need maximum advertising effect at low cost. At present, keyword advertising is divided into CPC advertising and CPM advertising. The former, known as the most efficient technique and also referred to as metered-rate advertising, has the company pay for the number of clicks on a searched keyword; it is representatively adopted by Overture, Google's Adwords, Naver's Clickchoice, and Daum's Clicks. CPM advertising depends on a flat-rate payment system, making a company pay on the basis of the number of exposures rather than the number of clicks; it fixes a price per 1,000 exposures and is mainly adopted by Naver's Timechoice, Daum's Speciallink, and Nate's Speedup. At present, the CPC method is the most frequently adopted; a simple cost comparison between the two billing models is sketched below.
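A minimal sketch of the billing difference just described, with invented rates and traffic figures (not numbers from the article):

```python
def cpc_cost(clicks: int, price_per_click: float) -> float:
    """CPC (metered rate): pay per click on the searched keyword."""
    return clicks * price_per_click

def cpm_cost(impressions: int, price_per_1000: float) -> float:
    """CPM (flat rate): pay per 1,000 exposures, regardless of clicks."""
    return impressions / 1000 * price_per_1000

impressions = 100_000
clicks = int(impressions * 0.02)      # assume a 2% click-through rate
print(cpc_cost(clicks, 300.0))        # 600,000 (e.g. 300 won per click)
print(cpm_cost(impressions, 2000.0))  # 200,000 (e.g. 2,000 won per mille)
```

Which model is cheaper depends entirely on the click-through rate, which is why the choice of keyword and ad format discussed next matters.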
The weak point of the CPC method is that advertising costs can rise through repeated clicks from the same IP. If a company makes good use of strategies that maximize the strong points of keyword advertising and complement its weak points, it is highly likely to turn visitors into prospective customers. Accordingly, an advertiser should analyze customers' behavior and approach them in a variety of ways, trying hard to find out what they want, and should use multiple keywords when running ads. When first running an ad, the advertiser should give priority to keyword selection, considering how many search-engine users will click the keyword in question and how much the advertisement will cost. Since the popular keywords that search-engine users frequently enter are expensive in terms of cost per click, advertisers without a large budget at the initial phase should pay attention to detailed keywords suited to their budget. Detailed keywords, also referred to as peripheral or extension keywords, are combinations of major keywords. Most keyword ads are in text form. The biggest strength of text-based advertising is that it looks like search results, arousing little antipathy; however, it fails to attract much attention precisely because most keyword advertising is text-based. Image-embedded advertising is easier to notice because of its images, but it is exposed on the lower part of a web page and is clearly recognizable as an advertisement, which leads to a low click-through rate; its strength is that its prices are lower than those of text-based advertising. If a company owns a logo or a product that people easily recognize, it is well advised to use image-embedded advertising to attract Internet users' attention. Advertisers should analyze their logs and examine customers' responses based on site events and product composition as a means of monitoring behavior in detail. Keyword advertising also lets them analyze the advertising effect of exposed keywords through log analysis. Log analysis refers to a close analysis of the current state of a site based on information about visitors, such as visitor counts, page views, and cookie values. A user's IP, the pages used, the time of use, and cookie values are stored in the log files generated by each Web server. Because log files contain a huge amount of data and direct analysis is practically impossible, they are analyzed with log-analysis solutions. The generic information that such tools extract includes total page views, average page views per day, basic page views, page views per visit, total hits, average hits per day, hits per visit, number of visits, average visits per day, net number of visitors, average visitors per day, one-time visitors, repeat visitors, and average usage hours; the sketch below illustrates this kind of aggregation.
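A minimal sketch of that kind of log aggregation, assuming a simplified record format (ip, timestamp, page, cookie) rather than any real server-log specification:

```python
from collections import defaultdict

records = [  # hypothetical parsed log lines
    ("1.2.3.4", "2010-03-01 10:00", "/home", "c1"),
    ("1.2.3.4", "2010-03-01 10:02", "/item", "c1"),
    ("5.6.7.8", "2010-03-01 11:30", "/home", "c2"),
]

page_views = len(records)                                  # total page views
unique_visitors = len({cookie for *_, cookie in records})  # net visitors
views_per_visitor = defaultdict(int)                       # views per visitor
for *_, cookie in records:
    views_per_visitor[cookie] += 1

print(page_views, unique_visitors, dict(views_per_visitor))
# 3 2 {'c1': 2, 'c2': 1}
```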
Such data are also useful for analyzing the situation and current status of rival companies and for benchmarking. As keyword advertising exposes advertisements exclusively on search-result pages, competition among advertisers attempting to preoccupy popular keywords is very fierce. Some portal sites keep giving priority to existing advertisers, whereas others give all advertisers a chance to purchase the keywords in question once the advertising contract expires. On sites that give priority to established advertisers, an advertiser relying on keywords sensitive to season and timing may as well purchase a vacant advertising slot lest the appropriate timing for advertising be missed. Naver, however, does not give priority to existing advertisers for any keyword advertisements; in this case one can preoccupy keywords by entering into a contract after confirming the contract period. This study takes a look at keyword advertising marketing and presents effective strategies for it. At present, the Korean CPC advertising market is virtually monopolized by Overture, whose strengths are its CPC charging model and the registration of advertisements at the top of the most representative portal sites in Korea; these advantages make it the most appropriate medium for small and medium enterprises. However, the CPC method of Overture has weaknesses too: it is not the one perfect advertising model among the search advertisements in the on-line market. It is therefore absolutely necessary that small and medium enterprises, including independent shopping malls, complement the weaknesses of the CPC method and make good use of strategies that maximize its strengths so as to increase their sales and create points of contact with customers.


Changes in Agricultural Extension Services in Korea (한국농촌지도사업(韓國農村指導事業)의 변동(變動))

  • Fujita, Yasuki;Lee, Yong-Hwan;Kim, Sung-Soo
    • Journal of Agricultural Extension & Community Development / v.7 no.1 / pp.155-166 / 2000
  • When the researcher visited Korea in fall 1994, he was struck by the high-rise apartment buildings around the capital region, including Seoul and Suwon, which resulted from the rising demand for housing driven by urban migration following secondary and tertiary industrial development. Six years later, in March 2000, the researcher saw still more apartment buildings and vinyl-house complexes, evidence of continued economic progress in Korea. Korea had to receive rescue financing from the International Monetary Fund (IMF) because of the financial crisis in 1997; however, signs of recovery appeared within a year, and the growth rate of Gross Domestic Product (GDP) in 1999 was as high as 10.7 percent. During this period, the Korean government worked on restructuring banks, enterprises, labour, and the public sector. The government's major directions were localization, reducing administrative manpower, limiting agricultural budgets, privatizing public enterprises, integrating agricultural organizations, and easing various regulations. Power thus shifted from the central government to local governments, increasing the authority of city mayors and county chiefs. The agricultural extension service was one target of government restructuring and was transferred from the central government to local governments. At the same time, the number of extension offices was reduced by 64 percent, extension personnel were reduced by 24 percent, and extension budgets were cut. In the restructuring, the basic direction of extension services was set by the central Rural Development Administration; personnel management, technology development, and support were transferred to the provincial Rural Development Administrations; and operational responsibilities were transferred to city/county governments. Agricultural extension services at the local level were renamed Agricultural Technology Extension Centers, established under the jurisdiction of the city mayor or county chief. A technology development function was added while the number of educators for agriculture and rural life was reduced. Observations of rural areas and of agricultural extension services at various levels showed that the functional responsibilities of extension were not well recognized at the central, provincial, or local level. The central agricultural extension service should be more concerned with effective rural development by monitoring provincial- and local-level extension activities more thoroughly. At the county level, it may be desirable to add a research function reflecting local agricultural technology needs. Adding administrative tasks for extension educators may sometimes be helpful for farmers; however, tasks such as inspection and investigation should be avoided, since they may hinder the effectiveness of extension education. The major contents of the agricultural extension service in Korea appeared to focus on saving agricultural materials, developing new agricultural technology, enhancing agricultural exports, increasing production, and establishing market-oriented farming; however, such efforts may lead to non-sustainable agriculture, and more emphasis should be put on sustainable agriculture in the future. Agricultural extension methods in Korea may be better classified into two approaches or functions: a consultation function for advanced farmers and a technology-transfer or educational function for small farmers.

Advanced farmers were more interested in technology and management information, while small farmers were more concerned with information on farm management directions and the timely diffusion of agricultural technology information. The agricultural extension service should put more emphasis on small-farmer groups and on farmers' active participation in these groups. Providing information and moderate advice in selecting alternatives should be the main consultation activities for advanced farmers, while problem-solving processes may be the main educational function for small farmers. Systems such as the Internet and e-mail should be utilized for information exchange. These activities may not be an easy task for the reduced number of extension educators facing increased administrative tasks; a one-to-one approach may be difficult to practice, but group guidance may improve the situation to a certain degree.


Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.127-148 / 2020
  • A data center is a physical facility that accommodates computer systems and related components, and it is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment and may cause enormous damage. IT facilities in particular behave irregularly because of their interdependence, which makes causes difficult to identify. Previous studies predicting failure in data centers treated each server as a single, isolated state, without assuming that devices interact. In this study, therefore, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the analysis focused on complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center facility construction, various solutions are already being developed. The causes of failures occurring inside servers, on the other hand, are difficult to determine, and adequate prevention has not yet been achieved, particularly because server failures do not occur singly: a failure may cause failures in other servers or be triggered by another server's failure. In other words, whereas existing studies analyzed failures on the assumption that a single server does not affect other servers, this study assumes that failures propagate between servers. To define complex failure situations in the data center, failure history data for each piece of equipment in the data center was used. Four major failure types were considered: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring for each device were sorted in chronological order, and when a failure in one piece of equipment was followed by a failure in another within 5 minutes, the failures were defined as occurring simultaneously (this rule is sketched below). After configuring sequences of devices that failed at the same time, the five devices that most frequently failed together within the configured sequences were selected, and the cases in which the selected devices failed simultaneously were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states. In addition, unlike the single-server case, a Hierarchical Attention Network deep learning model structure was used, considering that each server contributes differently to a complex failure; this algorithm increases prediction accuracy by giving more weight to servers with greater impact on the failure. The study began by defining the failure types and selecting the analysis targets.
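A minimal sketch of the 5-minute simultaneity rule described above; the event tuples and device names are hypothetical, not the study's data.

```python
from datetime import datetime, timedelta

events = [  # (device, failure type, time of occurrence)
    ("server-01", "Server Down",       datetime(2020, 1, 6, 3, 12)),
    ("db-02",     "DBMS Service Down", datetime(2020, 1, 6, 3, 14)),
    ("net-03",    "Network Node Down", datetime(2020, 1, 6, 9, 40)),
]

WINDOW = timedelta(minutes=5)
events.sort(key=lambda e: e[2])  # chronological order, as in the study

groups, current = [], [events[0]]
for ev in events[1:]:
    if ev[2] - current[-1][2] <= WINDOW:  # within 5 min of the previous failure
        current.append(ev)                # -> same simultaneous-failure group
    else:
        groups.append(current)
        current = [ev]
groups.append(current)

print([[device for device, *_ in g] for g in groups])
# [['server-01', 'db-02'], ['net-03']]
```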
In the first experiment, the same collected data were treated once as single-server states and once as multi-server states, and the results were compared. The second experiment improved prediction accuracy for complex failures by optimizing a threshold for each server. In the first experiment, under the single-server assumption, three of the five servers were predicted not to have failed even though failures actually occurred; under the multi-server assumption, all five servers were correctly predicted to have failed. This result supports the hypothesis that servers affect one another, and it confirms that prediction performance is superior when multiple servers are assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, on the assumption that each server's influence differs, improved the analysis, and applying a different threshold to each server further improved prediction accuracy. This study shows that failures whose causes are hard to determine can be predicted from historical data, and it presents a model that predicts failures occurring on servers in data centers. The results are expected to help prevent failures in advance.
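As a rough illustration of the modeling idea, not the authors' exact architecture or hyperparameters, the sketch below encodes each server's resource time series with a shared LSTM and pools the per-server states with learned attention weights, so servers with greater influence on the failure receive more weight:

```python
import torch
import torch.nn as nn

class ServerAttentionNet(nn.Module):
    """Per-server LSTM encoder + attention pooling over servers (illustrative)."""
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)  # one attention score per server
        self.head = nn.Linear(hidden, 1)  # failure logit for the group

    def forward(self, x):
        # x: (batch, n_servers, seq_len, n_features)
        b, s, t, f = x.shape
        _, (h, _) = self.lstm(x.reshape(b * s, t, f))
        h = h[-1].reshape(b, s, -1)             # last hidden state per server
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over servers
        pooled = (w * h).sum(dim=1)             # weighted summary of the group
        return torch.sigmoid(self.head(pooled)).squeeze(-1)

model = ServerAttentionNet(n_features=4)
x = torch.randn(2, 5, 60, 4)  # 2 samples, 5 servers, 60 time steps, 4 metrics
print(model(x).shape)         # torch.Size([2]) -> one failure score per sample
```

Per-server thresholds, as in the second experiment, would be applied to scores produced per server rather than to this single pooled score.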

Review of the Korean Indigenous Species Investigation Project (2006-2020) by the National Institute of Biological Resources under the Ministry of Environment, Republic of Korea (한반도 자생생물 조사·발굴 연구사업 고찰(2006~2020))

  • Bae, Yeon Jae;Cho, Kijong;Min, Gi-Sik;Kim, Byung-Jik;Hyun, Jin-Oh;Lee, Jin Hwan;Lee, Hyang Burm;Yoon, Jung-Hoon;Hwang, Jeong Mi;Yum, Jin Hwa
    • Korean Journal of Environmental Biology / v.39 no.1 / pp.119-135 / 2021
  • Korea has stepped up efforts to investigate and catalog its flora and fauna to conserve the biodiversity of the Korean Peninsula and secure biological resources since the ratification of the Convention on Biological Diversity (CBD) in 1992 and the Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits (ABS) in 2010. Accordingly, after its establishment in 2007, the National Institute of Biological Resources (NIBR) of the Ministry of Environment of Korea initiated the Korean Indigenous Species Investigation Project to investigate indigenous species on the Korean Peninsula. Over the 15 years since its beginning in 2006, the project has been carried out in five phases: Phase 1, 2006-2008; Phase 2, 2009-2011; Phase 3, 2012-2014; Phase 4, 2015-2017; and Phase 5, 2018-2020. Before the project, in 2006, the number of indigenous species surveyed was 29,916. The cumulative totals at the end of each phase were 33,253 species for Phase 1 (2008), 38,011 for Phase 2 (2011), 42,756 for Phase 3 (2014), 49,027 for Phase 4 (2017), and 54,428 for Phase 5 (2020). The number of indigenous species surveyed thus grew rapidly, an approximately 1.8-fold increase over the project period, with an annual average of 2,320 newly recorded species. Among the recorded species, a total of 5,242 new species were reported in scientific publications, a great scientific achievement. The species newly recorded on the Korean Peninsula during the project, identified using recent taxonomic classifications, were: 4,440 insect species (including 988 new species), 4,333 invertebrate species other than insects (including 1,492 new species), 98 vertebrate (fish) species (including nine new species), 309 plant species (including 176 vascular plant species, 133 bryophyte species, and 39 new species), 1,916 algae species (including 178 new species), 1,716 fungi and lichen species (including 309 new species), and 4,812 prokaryotic species (including 2,226 new species). The numbers of biological specimens collected were 247,226 for Phase 1 (2008), 207,827 for Phase 2 (2011), 287,133 for Phase 3 (2014), 244,920 for Phase 4 (2017), and 144,333 for Phase 5 (2020), for a total of 1,131,439 specimens and an annual average of 75,429. More specifically, 281,054 insect specimens, 194,667 invertebrate specimens (other than insects), 40,100 fish specimens, 378,251 plant specimens, 140,490 algae specimens, 61,695 fungi specimens, and 35,182 prokaryotic specimens were collected. The cumulative number of researchers involved in the project, nearly all professional taxonomists and graduate students majoring in taxonomy across the country, was around 5,000, an annual average of 395. The numbers of researchers/assistant researchers (mainly graduate students) were 597/268 in Phase 1, 522/191 in Phase 2, 939/292 in Phase 3, 575/852 in Phase 4, and 601/1,097 in Phase 5. During the project period, 3,488 papers were published in major scientific journals, of which 2,320 appeared in domestic journals and 1,168 in Science Citation Index (SCI) journals.

During the project period, a total of 83.3 billion won (an annual average of 5.5 billion won), approximately US $75 million (an annual average of US $5 million), was invested in investigating indigenous species and collecting specimens. The project was a large-scale research effort led by the Korean government and is considered a successful example of Korea's compressed development, as it attracted almost all of the taxonomists in Korea and made remarkable achievements with a massive budget in a short time. Its results fed into the National List of Species of Korea, in which all species are organized by taxonomic classification; the list is available to experts, students, and the general public (https://species.nibr.go.kr/index.do). The information, including descriptions, DNA sequences, habitats, distributions, ecological aspects, images, and multimedia, has been digitized, contributing to scientific advancement in research fields such as phylogenetics and evolution. The species information also serves as a basis for projects on species distribution and biological monitoring, such as climate-sensitive biological indicator species, and helps bio-industries search for useful biological resources. The most meaningful achievement of the project may be its support for nurturing young taxonomists such as graduate students. The project has continued for the past 15 years and is still ongoing. Issues such as species misidentification and invalid synonyms still need to be addressed to enhance taxonomic research, and further research is needed to investigate the remaining 50,000 or so of the estimated 100,000 indigenous species on the Korean Peninsula.