• Title/Summary/Keyword: Filtering (필터링)

Search results: 3,386

The Effective Approach for Non-Point Source Management (효과적인 비점오염원관리를 위한 접근 방향)

  • Park, Jae Hong;Ryu, Jichul;Shin, Dong Seok;Lee, Jae Kwan
    • Journal of Wetlands Research
    • /
    • v.21 no.2
    • /
    • pp.140-146
    • /
    • 2019
  • In order to manage non-point pollution sources, the paradigm of the system should be changed so that non-point source management is systematized from the beginning of land use and development. The method of national subsidy support and the operation plan for non-point source management areas also need to change. To increase the effectiveness of non-point source reduction projects, a minimum support ratio should be guaranteed, with additional support provided according to the performance of each local government. A new system should be established to evaluate the performance of non-point source reduction projects and to monitor their operational effectiveness. Related rules should be established that lead local governments to administer these projects responsibly, so that they faithfully carry out the reduction projects, achieve the planned results, and sustain maintenance over time. Alternative solutions are also needed for problems such as the use of a $100{\mu}m$ filter in automatic sampling and analysis, the timely collection and analysis of water samples during rainfall, and the effective operation of the non-point source monitoring network. As alternatives, improving the performance of sampling and analysis equipment and operating a base station can be considered. In addition, countermeasures are needed if the pollutant load reduced by nationally subsidized non-point source reduction facilities is to be counted as development load under the TMDLs. As one alternative, part of the maintenance cost of a reduction facility could be supported as an incentive in proportion to the amount of pollutants reduced.

Trend Analysis of Vegetation Changes of Korean Fir (Abies koreana Wilson) in Hallasan and Jirisan Using MODIS Imagery (MODIS 시계열 위성영상을 이용한 한라산과 지리산 구상나무 식생 변동 추세 분석)

  • Minki Choo;Cheolhee Yoo;Jungho Im;Dongjin Cho;Yoojin Kang;Hyunkyung Oh;Jongsung Lee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.3
    • /
    • pp.325-338
    • /
    • 2023
  • Korean fir (Abies koreana Wilson) is one of the most important environmental indicator tree species for assessing climate change impacts on coniferous forests in the Korean Peninsula. However, owing to the nature of alpine and subalpine regions, it is difficult to conduct regular field surveys of Korean fir, which is mainly distributed at altitudes above 1,000 m. Therefore, this study analyzed the vegetation change trend of Korean fir using regularly observed remote sensing data. Specifically, the normalized difference vegetation index (NDVI) and land surface temperature (LST) from the Moderate Resolution Imaging Spectroradiometer (MODIS), and precipitation data from the Global Precipitation Measurement (GPM) Integrated Multi-satellitE Retrievals for GPM, from September 2003 to 2020 over Hallasan and Jirisan, were used to analyze vegetation changes and their association with environmental variables. We identified a decrease in NDVI in 2020 compared to 2003 at both sites. Based on the NDVI difference maps, areas of healthy vegetation and of high Korean fir mortality were selected. Long-term NDVI time-series analysis demonstrated that both Hallasan and Jirisan showed a decrease in NDVI in the high-mortality areas (Hallasan: -0.46, Jirisan: -0.43). Furthermore, when the long-term fluctuations of Korean fir vegetation were analyzed through Hodrick-Prescott-filtered NDVI, LST, and precipitation (a minimal sketch of this filtering step follows below), the NDVI difference between the healthy and high-mortality sites in Hallasan increased with increasing LST and decreasing precipitation. This suggests that the increase in LST and the decrease in precipitation contribute to the decline of Korean fir in Hallasan. In contrast, Jirisan showed a long-term declining NDVI trend in the mortality areas, but no significant correlation was found between the NDVI changes and the environmental variables (LST and precipitation). Further analyses of environmental factors such as soil moisture, insolation, and wind, which previous studies have linked to Korean fir habitats, should be conducted. This study demonstrated the feasibility of using satellite data for long-term monitoring of Korean fir ecosystems and for investigating their changes in conjunction with environmental conditions, and showed the potential of satellite-based monitoring to improve our understanding of the ecology of Korean fir.
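
The trend analysis above rests on the Hodrick-Prescott filter. Below is a minimal sketch of that decomposition step using statsmodels on a synthetic monthly NDVI series; the real MODIS/GPM inputs and the study's site selection are not reproduced, and the smoothing parameter is the conventional choice for monthly data, not necessarily the authors' setting.

```python
# Hedged sketch: decomposing a monthly NDVI series into trend and cycle
# with the Hodrick-Prescott filter, as the paper applies to MODIS NDVI.
# The series below is synthetic; the paper's real inputs are assumed.
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = pd.date_range("2003-09", "2020-12", freq="MS")
# Synthetic NDVI: seasonal cycle plus a slow decline, stand-in for real data.
t = np.arange(len(rng))
ndvi = (0.6 + 0.15 * np.sin(2 * np.pi * t / 12) - 0.0004 * t
        + np.random.default_rng(0).normal(0, 0.02, len(rng)))
series = pd.Series(ndvi, index=rng, name="NDVI")

# lamb=129600 is the conventional smoothing parameter for monthly data.
cycle, trend = hpfilter(series, lamb=129600)

# The long-term trend component is what the study compares between
# healthy-vegetation and high-mortality Korean fir sites.
print(trend.iloc[[0, -1]])          # trend at start and end of the record
print("trend change:", trend.iloc[-1] - trend.iloc[0])
```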

Development of Intelligent Job Classification System based on Job Posting on Job Sites (구인구직사이트의 구인정보 기반 지능형 직무분류체계의 구축)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.123-139
    • /
    • 2019
  • The job classification systems of major job sites differ from site to site and also differ from the classification system of the SQF (Sectoral Qualifications Framework) proposed for the SW field. Therefore, a new job classification system that SW companies, SW job seekers, and job sites can all understand is needed. The purpose of this study is to establish a standard job classification system that reflects market demand by analyzing the SQF against the job posting information of major job sites and the NCS (National Competency Standards). For this purpose, an association analysis between the occupations of major job sites is conducted, and association rules between the SQF and those occupations are derived. Using these association rules, we propose an intelligent job classification system based on data, mapping the job classification systems of the major job sites to the SQF. First, major job sites are selected to obtain information on the job classification systems used in the SW market. We then identify how to collect job postings from each site and collect the data through open APIs. Focusing on the relationships between the data, only job postings published on multiple job sites at the same time are kept, and all other postings are deleted. Next, the job classification systems of the sites are mapped to one another using the association rules derived from the association analysis. After completing the mapping between these market classifications, we discuss the results with experts, further map them to the SQF, and finally propose a new job classification system. As a result, more than 30,000 job postings were collected in XML format through the open APIs of 'WORKNET', 'JOBKOREA', and 'saramin', the main job sites in Korea. After filtering down to about 900 postings published simultaneously on multiple job sites, 800 association rules were derived by applying the Apriori algorithm, a frequent-pattern mining method (a minimal sketch of this step follows below). Based on these 800 rules, the job classification systems of WORKNET, JOBKOREA, and saramin and the SQF classification system were mapped and organized into first through fourth classification levels. In the new job taxonomy, the first primary class, covering IT consulting, computer systems, networks, and security, consists of three secondary, five tertiary, and five quaternary classifications. The second primary class, covering databases and system operation, consists of three secondary, three tertiary, and four quaternary classifications. The third primary class, covering web planning, web programming, web design, and games, consists of four secondary, nine tertiary, and two quaternary classifications. The last primary class, covering ICT management and computer and communication engineering technology, consists of three secondary and six tertiary classifications. In particular, the new job classification system has relatively flexible classification depth, unlike existing systems: WORKNET divides jobs into three levels, JOBKOREA into two levels with keyword-level subdivisions, and saramin likewise into two levels with keyword-level subdivisions. The newly proposed standard classification accepts some keyword-based jobs and treats some product names as jobs.
In the new system, some jobs stop at the second classification level while others are subdivided down to the fourth level, reflecting the idea that not all jobs can be broken down to the same depth. We also combined the rules derived from the collected market data with experts' opinions. Therefore, the newly proposed job classification system can be regarded as a data-based intelligent classification system that reflects market demand, unlike existing systems. This study is meaningful in that it suggests a new job classification system reflecting market demand by mapping occupations based on data, through association analysis between occupations, rather than on the intuition of a few experts. However, the study is limited in that the data were collected at a single point in time and thus cannot fully reflect market demand as it changes over time. As market demand shifts with seasonal factors and the timing of major corporate recruitment, continuous data monitoring and repeated experiments are needed for more accurate matching. The results of this study can be used to suggest directions for improving the SQF in the SW industry, and the approach is expected to transfer to other industries building on its success in the SW field.
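
The mapping step hinges on Apriori-derived association rules between site categories. Here is a minimal sketch using mlxtend on toy postings; the category labels, support, and confidence thresholds are invented for illustration, not taken from the paper.

```python
# Hedged sketch: deriving association rules between job-site categories
# with the Apriori algorithm, as the paper does across WORKNET, JOBKOREA,
# and saramin. The postings below are toy stand-ins.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# Each "transaction" holds the categories a single posting received on
# the different sites (only postings listed on multiple sites are kept).
postings = [
    ["worknet:web_dev", "jobkorea:web_programming", "saramin:web"],
    ["worknet:web_dev", "jobkorea:web_programming"],
    ["worknet:db_admin", "jobkorea:dba", "saramin:database"],
    ["worknet:db_admin", "jobkorea:dba"],
    ["worknet:security", "jobkorea:infosec", "saramin:security"],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(postings).transform(postings), columns=te.columns_)

frequent = apriori(onehot, min_support=0.3, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)

# High-confidence rules such as {worknet:db_admin} -> {jobkorea:dba}
# are the basis for mapping one site's category onto another's.
print(rules[["antecedents", "consequents", "support", "confidence"]])
```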

Reducing error rates in general nuclear medicine imaging to increase patient satisfaction (핵의학 일반영상 검사업무 오류개선 활동에 따른 환자 만족도)

  • Kim, Ho-Sung;Im, In-Chul;Park, Cheol-Woo;Lim, Jong-Duek;Kim, Sun-Geun;Lee, Jae-Seung
    • Journal of the Korean Society of Radiology
    • /
    • v.5 no.5
    • /
    • pp.295-302
    • /
    • 2011
  • In the field of nuclear medicine, in the course of examining patients, from the moment they register up to the doctor's reading, the person in charge of the examination may find errors in the work, requiring re-examination, re-analysis of the results, or re-saving of images to PACS. Through this process, readings are delayed by the checks and additional tests that occur in the hospital, reducing patient satisfaction and affecting reliability. Accordingly, the purpose of this work is to include visual inspection of the results to minimize errors, improve efficiency, and increase patient satisfaction. General nuclear medicine imaging examinations performed at Asan Medical Center, Seoul, from March 2008 to December 2008 were analyzed for errors. The first stage, from January 2009 to December 2009, established procedures and know-how; the second stage, from January 2010 to June 2010, conducted pre- and post-filtering assessment; and the third stage, from July 2010 to October 2010, consisted of cross-checks, attaching stickers, and comparing error cases. Of 92 errors, 32 cases occurred across the first through third stages and 46 cases after the fourth stage, with overall errors reduced by 74.3% from 94.6%. In general nuclear medicine, where various kinds of examinations are performed according to the patient's needs, analysis, image composition, and discrepant images in PACS all carry the potential for mistakes. To decrease error rates, images should be continuously cross-checked and diagnoses confirmed.

Increasing Accuracy of Classifying Useful Reviews by Removing Neutral Terms (중립도 기반 선택적 단어 제거를 통한 유용 리뷰 분류 정확도 향상 방안)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.3
    • /
    • pp.129-142
    • /
    • 2016
  • Customer product reviews have become one of the important factors in purchase decision making. Customers believe that reviews written by others who have already experienced a product offer more reliable information than that provided by sellers. However, when there are too many products and reviews, the advantage of e-commerce can be overwhelmed by increasing search costs: reading all the reviews to find out the pros and cons of a certain product can be exhausting. To help users find the most useful information about products without much difficulty, e-commerce companies provide various ways for customers to write and rate product reviews, and online stores have devised various ways to present useful customer reviews. Different methods have been developed to classify and recommend useful reviews to customers, primarily using feedback provided by customers about the helpfulness of reviews. Most shopping websites provide customer reviews and offer the following information: the average preference for a product, the number of customers who have participated in preference voting, and the preference distribution. Most information on the helpfulness of product reviews is collected through a voting system. Amazon.com asks customers whether a review of a certain product is helpful, and places the most helpful favorable and the most helpful critical reviews at the top of the list of product reviews. Some companies also predict the usefulness of a review based on attributes including length, author, and the words used, publishing only reviews that are likely to be useful. Text mining approaches have been used to classify useful reviews in advance. To apply a text mining approach to all reviews of a product, we need to build a term-document matrix, extracting all words from the reviews and recording the number of occurrences of each term in each review. Since there are many reviews, the term-document matrix becomes very large, which makes it difficult to apply text mining algorithms. Researchers therefore delete some terms on the basis of sparsity, since sparse words have little effect on classification or prediction. The purpose of this study is to suggest a better way of building the term-document matrix by deleting useless terms for review classification. We propose a neutrality index to select words to be deleted. Many words appear in both classes (useful and not useful), and these words have little or even negative effect on classification performance. We therefore define such words as neutral terms and delete those that appear similarly in both classes. After deleting sparse words, we select further words to delete in terms of neutrality (an illustrative sketch follows below). We tested our approach with Amazon.com review data from five product categories: Cellphones & Accessories, Movies & TV, Automotive, CDs & Vinyl, and Clothing, Shoes & Jewelry. We used reviews that received more than four votes, with a 60% ratio of helpful votes among total votes as the threshold between useful and not-useful reviews. We randomly selected 1,500 useful and 1,500 not-useful reviews for each product category, then applied Information Gain and Support Vector Machine algorithms to classify the reviews and compared the classification performances in terms of precision, recall, and F-measure.
Although the performances vary across product categories and data sets, deleting terms by both sparsity and neutrality showed the best performance in terms of F-measure for the two classification algorithms. However, deleting terms by sparsity only showed the best performance in terms of recall for Information Gain, and using all terms showed the best performance in terms of precision for SVM. Thus, term deletion methods and classification algorithms should be selected carefully based on the data set.
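
The abstract does not reproduce the paper's exact neutrality formula, so the balance score below is an illustrative stand-in: a term equally frequent in useful and not-useful reviews scores 1.0 and is a candidate for deletion before the term-document matrix is built.

```python
# Hedged sketch of neutrality-based term removal. The cutoff of 0.8 and
# the score itself are assumptions, not the authors' definition.
from collections import Counter

useful = ["great battery and screen", "battery life is great"]
not_useful = ["bought it for my son", "arrived on friday and friday"]

def term_freqs(docs):
    """Relative frequency of each term within one review class."""
    counts = Counter(w for d in docs for w in d.split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

fu, fn = term_freqs(useful), term_freqs(not_useful)
vocab = set(fu) | set(fn)

def neutrality(w):
    pu, pn = fu.get(w, 0.0), fn.get(w, 0.0)
    # 1.0 when a term is equally frequent in both classes,
    # 0.0 when it occurs in only one class.
    return 1.0 - abs(pu - pn) / (pu + pn)

# Drop highly neutral terms before building the term-document matrix.
kept = [w for w in vocab if neutrality(w) < 0.8]
print(sorted(kept))
```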

A Study on the Selection of Parameter Values of FUSION Software for Improving Airborne LiDAR DEM Accuracy in Forest Area (산림지역에서의 LiDAR DEM 정확도 향상을 위한 FUSION 패러미터 선정에 관한 연구)

  • Cho, Seungwan;Park, Joowon
    • Journal of Korean Society of Forest Science
    • /
    • v.106 no.3
    • /
    • pp.320-329
    • /
    • 2017
  • This study aims to evaluate whether the accuracy of a LiDAR DEM is affected by changes across five input levels ('1', '3', '5', '7', and '9') of the median parameter ($F_{md}$) and mean parameter ($F_{mn}$) of the Filtering Algorithm (FA) in the GroundFilter module, and of the median parameter ($I_{md}$) and mean parameter ($I_{mn}$) of the Interpolation Algorithm (IA) in the GridSurfaceCreate module of FUSION, in order to present the combination of parameter levels producing the most accurate LiDAR DEM. Accuracy is measured by the residuals calculated as the difference between field-surveyed elevation values and their corresponding DEM elevation values. A multi-way ANOVA is used to statistically examine whether parameter level changes affect the means of the residuals, and the Tukey HSD is conducted as a post-hoc test (a minimal sketch of this testing step is given below). The results of the multi-way ANOVA show that changes in the levels of $F_{md}$, $F_{mn}$, and $I_{mn}$ have significant effects on DEM accuracy, with a significant interaction effect between $F_{md}$ and $F_{mn}$. Therefore, the levels of $F_{md}$ and $F_{mn}$ and their interaction, as well as the level of $I_{mn}$, are considered factors affecting the accuracy of the LiDAR DEM. According to the Tukey HSD test on the combined levels of $F_{md}{\ast}F_{mn}$, the mean of the residuals of the '$9{\ast}3$' combination provides the highest accuracy, while the '$1{\ast}1$' combination provides the lowest. Regarding $I_{mn}$ levels, the means of the residuals at both '3' and '1' provide the highest accuracy. This study can contribute to improving the accuracy of forest attributes as well as topographic information extracted from LiDAR data.
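
The statistical workflow (multi-way ANOVA with an interaction term, then Tukey HSD on the combined levels) can be sketched with statsmodels. The residuals below are simulated and the effect sizes invented; only the testing structure mirrors the paper.

```python
# Hedged sketch: testing whether FUSION parameter levels affect DEM
# residuals with a two-factor ANOVA and a Tukey HSD post-hoc test.
# Column names and the data-generating process are assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
levels = ["1", "3", "5", "7", "9"]
rows = [(f_md, f_mn, rng.normal(0.1 * int(f_md) - 0.05 * int(f_mn), 0.3))
        for f_md in levels for f_mn in levels for _ in range(10)]
df = pd.DataFrame(rows, columns=["F_md", "F_mn", "residual"])

# Two-factor ANOVA with interaction, mirroring the F_md * F_mn test.
model = ols("residual ~ C(F_md) * C(F_mn)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey HSD on the combined F_md*F_mn levels, as in the post-hoc step.
df["combo"] = df["F_md"] + "*" + df["F_mn"]
print(pairwise_tukeyhsd(df["residual"], df["combo"]))
```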

Evaluation to Obtain the Image According to the Spatial Domain Filtering of Various Convolution Kernels in the Multi-Detector Row Computed Tomography (MDCT에서의 Convolution Kernel 종류에 따른 공간 영역 필터링의 영상 평가)

  • Lee, Hoo-Min;Yoo, Beong-Gyu;Kweon, Dae-Cheol
    • Journal of radiological science and technology
    • /
    • v.31 no.1
    • /
    • pp.71-81
    • /
    • 2008
  • Our objective was to evaluate spatial domain filtering as an alternative to additional image reconstruction using different kernels in MDCT. Source images derived from thin collimation were generated using a water phantom and abdomen scans with the B10 (very smooth), B20 (smooth), B30 (medium smooth), B40 (medium), B50 (medium sharp), B60 (sharp), B70 (very sharp), and B80 (ultra sharp) kernels. MTF and spatial resolution were measured for the various convolution kernels, and quantitative CT attenuation coefficient and noise measurements provided comparable HU (Hounsfield) units. CT attenuation coefficient (mean HU) values were $1.1{\sim}1.8\;HU$ in water and $-998{\sim}-1000\;HU$ in air, with noise of $5.4{\sim}44.8\;HU$ in water and $3.6{\sim}31.4\;HU$ in air. In abdominal fat, the CT attenuation coefficient was $-2.2{\sim}0.8\;HU$ with noise of $10.1{\sim}82.4\;HU$. In abdominal muscle, the CT attenuation coefficient was $53.3{\sim}54.3\;HU$ with noise of $10.4{\sim}70.7\;HU$, and in the liver parenchyma it was $60.4{\sim}62.2\;HU$ with noise of $7.6{\sim}63.8\;HU$. Images reconstructed or scanned with a sharper convolution kernel (e.g., B80) showed increased noise, whereas the CT attenuation coefficients remained comparable. Modifying image sharpness and noise in the spatial domain could therefore eliminate the need for reconstruction with different kernels in the future (a minimal filtering sketch is given below). Adjusting the various kernels, chosen to suit the examination being performed, can control CT image quality and increase diagnostic accuracy.
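
The core idea, post-processing an already reconstructed slice in the spatial domain instead of re-reconstructing with another kernel, can be illustrated with simple convolutions. The Gaussian and unsharp-mask filters below are generic stand-ins, not the vendor's B10-B80 kernels, and the toy slice is synthetic.

```python
# Hedged sketch: spatial-domain filtering of a reconstructed CT slice as a
# stand-in for re-reconstructing with a different kernel.
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

rng = np.random.default_rng(0)
# Toy "CT slice" in HU: water-like background with noise.
slice_hu = rng.normal(0.0, 5.0, (128, 128))

# Smoothing (toward a "smooth" kernel look): Gaussian low-pass.
smoothed = gaussian_filter(slice_hu, sigma=1.5)

# Sharpening (toward a "sharp" kernel look): 3x3 unsharp-mask kernel.
sharpen = np.array([[0, -1, 0],
                    [-1, 5, -1],
                    [0, -1, 0]], dtype=float)
sharpened = convolve(slice_hu, sharpen, mode="nearest")

# Mean HU stays comparable while noise (std) rises with sharpening,
# matching the pattern reported in the paper.
for name, img in [("original", slice_hu), ("smoothed", smoothed),
                  ("sharpened", sharpened)]:
    print(f"{name}: mean={img.mean():.2f} HU, noise(std)={img.std():.2f} HU")
```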


Development of an Automatic Seed Marker Registration Algorithm Using CT and kV X-ray Images (CT 영상 및 kV X선 영상을 이용한 자동 표지 맞춤 알고리듬 개발)

  • Cheong, Kwang-Ho;Cho, Byung-Chul;Kang, Sei-Kwon;Kim, Kyoung-Joo;Bae, Hoon-Sik;Suh, Tae-Suk
    • Radiation Oncology Journal
    • /
    • v.25 no.1
    • /
    • pp.54-61
    • /
    • 2007
  • Purpose: The purpose of this study is to develop a practical method for determining accurate marker positions for prostate cancer radiotherapy using CT images and kV x-ray images obtained from the on-board imager (OBI). Materials and Methods: Three gold seed markers were implanted into the reference position inside a prostate gland by a urologist. Multiple digital image processing techniques were used to determine seed marker positions, and the center-of-mass (COM) technique was employed to determine a representative reference seed marker position. A setup discrepancy can be estimated by comparing a computed $COM_{OBI}$ with the reference $COM_{CT}$ (a minimal COM sketch is given below). The proposed algorithm was applied to a seed phantom and to four prostate cancer patients with seed implants treated in our clinic. Results: In the phantom study, the calculated $COM_{CT}$ and $COM_{OBI}$ agreed with $COM_{actual}$ to within a millimeter. The algorithm could also localize each seed marker correctly and calculated $COM_{CT}$ and $COM_{OBI}$ for all CT and kV x-ray image sets, respectively. Discrepancies between the setup errors from 2D-2D matching with the OBI application and those from the proposed algorithm were less than one millimeter on each axis. The setup error was in the range of $0.1{\pm}2.7{\sim}1.8{\pm}6.6\;mm$ in the AP direction, $0.8{\pm}1.6{\sim}2.0{\pm}2.7\;mm$ in the SI direction, and $-0.9{\pm}1.5{\sim}2.8{\pm}3.0\;mm$ in the lateral direction, although the setup error was quite patient dependent. Conclusion: As it took less than 10 seconds to evaluate a setup discrepancy, the method can help reduce setup correction time while minimizing user-dependent subjective factors. However, the on-line correction process should be integrated into the treatment machine control system for a more reliable procedure.
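
The COM comparison at the heart of the method reduces to a few lines once the seeds have been segmented. The marker coordinates below are invented stand-ins for the segmented seed positions; the paper's image processing pipeline is not reproduced.

```python
# Hedged sketch: estimating a setup discrepancy as the difference between
# the center of mass (COM) of seed markers seen on CT and on OBI kV images.
import numpy as np

# (x, y, z) positions of the three gold seeds, in mm (illustrative values).
seeds_ct = np.array([[12.1, 45.3, 30.2],
                     [15.4, 47.9, 28.8],
                     [13.7, 44.1, 33.5]])
seeds_obi = np.array([[13.0, 46.1, 30.9],
                      [16.2, 48.8, 29.4],
                      [14.6, 44.9, 34.2]])

def com(points: np.ndarray) -> np.ndarray:
    """Unweighted center of mass of marker coordinates."""
    return points.mean(axis=0)

com_ct, com_obi = com(seeds_ct), com(seeds_obi)
setup_error = com_obi - com_ct   # shift needed to restore alignment
print("COM_CT :", com_ct)
print("COM_OBI:", com_obi)
print("setup error (mm):", setup_error)
```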

Analysis of Interactions in Multiple Genes using IFSA(Independent Feature Subspace Analysis) (IFSA 알고리즘을 이용한 유전자 상호 관계 분석)

  • Kim, Hye-Jin;Choi, Seung-Jin;Bang, Sung-Yang
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.3
    • /
    • pp.157-165
    • /
    • 2006
  • Changes in the external and internal factors of the cell require specific biological functions to maintain life. Such functions encourage particular genes to interact with and regulate each other in multiple ways. Accordingly, we applied a linear decomposition model, IFSA, which derives hidden variables, called 'expression modes', that correspond to these functions. To interpret gene interaction and regulation, we used a cross-correlation method given an expression mode (a minimal sketch of this step is given below). Linear decomposition models such as principal component analysis (PCA) and independent component analysis (ICA) have been shown to be useful in analyzing high-dimensional DNA microarray data, compared to clustering methods. These methods assume that gene expression is controlled by a linear combination of uncorrelated/independent latent variables. However, they have difficulty grouping similar patterns that are slightly time-delayed or asymmetric, since only exactly matched patterns are considered. To overcome this, we employ the IFSA method of [1] to locate phase- and shift-invariant features. Membership scoring functions play an important role in classifying genes, since linear decomposition models basically aim at data reduction rather than at grouping data; we propose a new scoring function essential to the IFSA method. In this paper we show that IFSA is useful for grouping functionally related genes in the presence of time shifts and expression phase variance. Ultimately, we propose a new approach to investigate the multiple-interaction information of genes.
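
The cross-correlation step that handles time-delayed patterns can be sketched directly. The profiles below are synthetic, and IFSA's own subspace estimation is not reimplemented; only the shift-tolerant matching idea is shown.

```python
# Hedged sketch: using cross-correlation to group gene expression profiles
# that match an expression mode up to a time shift.
import numpy as np

t = np.linspace(0, 4 * np.pi, 40)
mode = np.sin(t)                       # latent "expression mode"
gene_a = np.sin(t - 0.6)               # same pattern, time-delayed
gene_b = np.random.default_rng(2).normal(0, 1, t.size)  # unrelated gene

def max_xcorr(x, y):
    """Peak normalized cross-correlation over all lags."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    c = np.correlate(x, y, mode="full") / x.size
    return c.max()

# The delayed gene scores near 1 despite the shift; the noise gene does not.
print("gene_a vs mode:", round(max_xcorr(gene_a, mode), 3))
print("gene_b vs mode:", round(max_xcorr(gene_b, mode), 3))
```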

Finding Influential Users in the SNS Using Interaction Concept : Focusing on the Blogosphere with Continuous Referencing Relationships (상호작용성에 의한 SNS 영향유저 선정에 관한 연구 : 연속적인 참조관계가 있는 블로고스피어를 중심으로)

  • Park, Hyunjung;Rho, Sangkyu
    • The Journal of Society for e-Business Studies
    • /
    • v.17 no.4
    • /
    • pp.69-93
    • /
    • 2012
  • Various influence-related relationships in Social Network Services (SNS), among users, among posts, and between users and posts, can be expressed using links. The current research evaluates the influence of specific users or posts by analyzing the link structure of the relevant social network graphs to identify influential users. We applied the concept of mutual interactions proposed for ranking semantic web resources, rather than the voting notion of PageRank or HITS, to the blogosphere, one of the early SNS (a generic sketch of link-based influence propagation is given below). Through experiments with network models, in which the performance and validity of each alternative approach can be analyzed, we showed the applicability and strengths of our approach. The weight-tuning processes for the links of these network models enabled us to control experimental errors from link weight differences and to compare how easily the alternatives can be implemented. An additional example of how to enter the content scores of commercial or spam posts into the graph-based method is also given on a small network model. This research, as a starting point for the study of identifying influential users in SNS, is distinctive from previous research in the following points. First, various influence-related properties that are deemed important but usually disregarded, such as scraping, commenting, subscribing to RSS feeds, and trusting friends, can be considered simultaneously. Second, the framework reflects the general phenomenon whereby objects interacting with more influential objects increase their own influence. Third, regarding the extent to which a blogger causes other bloggers to act after him or her as the most important factor of influence, we treated sequential referencing relationships from a viewpoint different from that of PageRank or HITS (Hypertext Induced Topic Selection).
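
To make the "interacting with influential objects raises your own influence" idea concrete, here is a generic iterative scoring scheme on a tiny blog network. This is an illustrative propagation sketch, not the paper's mutual-interaction formulation; the node names, weights, and damping constant are all invented.

```python
# Hedged sketch: iterative link-based influence scoring on a toy blog
# network. Interaction links (scraping, commenting, subscribing) are
# collapsed into a single weighted adjacency matrix for illustration.
import numpy as np

nodes = ["blogger_a", "blogger_b", "blogger_c", "blogger_d"]
# W[i, j]: weight of the interaction link from j to i
# (e.g., j scraps, comments on, or subscribes to i).
W = np.array([
    [0.0, 0.6, 0.3, 0.5],
    [0.2, 0.0, 0.4, 0.0],
    [0.5, 0.2, 0.0, 0.3],
    [0.1, 0.0, 0.2, 0.0],
])
W = W / W.sum(axis=0, keepdims=True)    # column-normalize outgoing weight

score = np.full(len(nodes), 1.0 / len(nodes))
for _ in range(50):                     # power iteration to a fixed point
    score = 0.15 / len(nodes) + 0.85 * (W @ score)
    score = score / score.sum()

# Nodes receiving heavy interaction from influential nodes rank highest.
for name, s in sorted(zip(nodes, score), key=lambda p: -p[1]):
    print(f"{name}: {s:.3f}")
```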