• Title/Summary/Keyword: Filtering (필터링)


Improved Trend Estimation of Non-monotonic Time Series Through Increased Homogeneity in Direction of Time-variation (시변동의 동질성 증가에 의한 비단조적 시계열자료의 경향성 탐지력 향상)

  • Oh, Kyoung-Doo;Park, Soo-Yun;Lee, Soon-Cheol;Jun, Byong-Ho;Ahn, Won-Sik
    • Journal of Korea Water Resources Association / v.38 no.8 s.157 / pp.617-629 / 2005
  • In this paper, the hypothesis is tested that dividing a non-monotonic time series into monotonic parts improves trend estimation through increased homogeneity in the direction of time-variation, using LOWESS smoothing and the seasonal Kendall test. Trend analysis of generated time series and of the water temperature, discharge, air temperature, and solar radiation of Lake Daechung shows that the hypothesis is supported by improved estimation of trends and slopes. Characteristics of the homogeneity variation of seasonal changes also appear more clearly as the homogeneity in the direction of time-variation is increased; this should help in understanding the effects of human intervention on natural processes and warrants more in-depth study. The proposed method can be used in trend analysis to detect monotonic trends and is expected to improve understanding of long-term changes in the natural environment.
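
The core idea above (split a non-monotonic series where its direction changes, then test each monotonic part) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses the plain Mann-Kendall S statistic in place of the seasonal Kendall test, and the splitting rule and all function names are hypothetical.

```python
import numpy as np

def mann_kendall_s(x):
    """Mann-Kendall S statistic: sum of the signs of all pairwise forward
    differences. S >> 0 suggests an upward trend, S << 0 a downward one."""
    x = np.asarray(x, dtype=float)
    return int(sum(np.sign(x[i + 1:] - x[i]).sum() for i in range(len(x) - 1)))

def split_at_turning_point(x):
    """Split a series at the point farthest from the straight line joining its
    endpoints, so each part is closer to monotonic (more homogeneous in its
    direction of time-variation)."""
    x = np.asarray(x, dtype=float)
    line = np.linspace(x[0], x[-1], len(x))
    k = int(np.argmax(np.abs(x - line)))
    return x[:k + 1], x[k:]

# Toy example: a series that rises and then falls. The whole-series S is near
# zero, while each monotonic part shows a clear trend.
t = np.arange(60)
y = np.where(t < 30, t, 60 - t) + np.random.default_rng(0).normal(0.0, 1.0, 60)
left, right = split_at_turning_point(y)
print(mann_kendall_s(y), mann_kendall_s(left), mann_kendall_s(right))
```

On such a rise-then-fall series the whole-series statistic washes out while each segment shows a strong trend, which illustrates the homogeneity effect described in the abstract.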

Leak Location Detection of Underground Water Pipes using Acoustic Emission and Acceleration Signals (음향방출 및 가속도 신호를 이용한 지하매설 상수도배관의 누수지점 탐지연구)

  • Lee, Young-Sup;Yoon, Dong-Jin;Jeong, Jung-Chae
    • Journal of the Korean Society for Nondestructive Testing / v.23 no.3 / pp.227-236 / 2003
  • Leaks in underground pipelines can cause social, environmental, and economic problems. One relevant countermeasure is to find and repair the leak points of the pipes. Leak noise is a good source for identifying the location of leak points in pipelines. Although there have been several methods of detecting leak locations from leak noise, such as listening rods, hydrophones, or ground microphones, they have not been very efficient tools. In this paper, acoustic emission (AE) sensors and accelerometers are used to detect leak locations, which provides an easier and more efficient method. The filtering, signal processing, and algorithm applied to the raw sensor data for leak location detection are described. A 120 m-long experimental pipeline system was installed, and results with this system show that the algorithm with AE sensors and accelerometers pinpoints leaks accurately. A theoretical analysis of the propagation speed of sound waves in water in underground pipes, which is critically important in leak locating, is also described.
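
Leak location of this kind is conventionally done by cross-correlating the two sensor signals to estimate the arrival-time difference. The sketch below shows that standard relation in Python; it is not the paper's specific algorithm, the band-pass filtering of the raw signals is omitted, and the wave speed is assumed known.

```python
import numpy as np

def leak_distance_from_a(sig_a, sig_b, fs, sensor_spacing, wave_speed):
    """Estimate the leak position between sensors A and B mounted
    `sensor_spacing` metres apart on the pipe.

    fs         : sampling rate of both signals [Hz]
    wave_speed : propagation speed of the leak noise along the pipe [m/s]
    Returns the estimated distance of the leak from sensor A [m].
    """
    a = np.asarray(sig_a, dtype=float) - np.mean(sig_a)
    b = np.asarray(sig_b, dtype=float) - np.mean(sig_b)
    corr = np.correlate(a, b, mode="full")
    lag = np.argmax(corr) - (len(b) - 1)   # samples by which A lags behind B
    dt = lag / fs                          # t_A - t_B
    # Travel times: t_A = d_A / v, t_B = (D - d_A) / v
    # => t_A - t_B = (2 * d_A - D) / v  =>  d_A = (D + v * dt) / 2
    return (sensor_spacing + wave_speed * dt) / 2.0
```

In practice the wave speed must be known or measured, which is why the paper also analyzes the propagation speed of sound in the buried water pipe theoretically.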

Segmentation of Multispectral MRI Using Fuzzy Clustering (퍼지 클러스터링을 이용한 다중 스펙트럼 자기공명영상의 분할)

  • 윤옥경;김현순;곽동민;김범수;김동휘;변우목;박길흠
    • Journal of Biomedical Engineering Research / v.21 no.4 / pp.333-338 / 2000
  • In this paper, an automated segmentation algorithm is proposed for MR brain images that uses T1-weighted, T2-weighted, and PD images complementarily. The proposed segmentation algorithm consists of three steps. In the first step, cerebrum images are extracted by applying a cerebrum mask to the three input images. In the second step, outstanding clusters that represent the inner tissues of the cerebrum are chosen among three-dimensional (3D) clusters; the 3D clusters are determined by intersecting the densely distributed parts of the 2D histograms in the 3D space formed by three optimal scale images. An optimal scale image is obtained by applying scale-space filtering to each 2D histogram and searching the resulting graph structure, and it best describes the shape of the densely distributed parts of the pixels in the 2D histogram. In the final step, the cerebrum images are segmented using the FCM algorithm, with the centroids of the outstanding clusters as its initial centroid values. The proposed method estimates the cluster centroids accurately, and the multispectral analysis gives better segmentation results than single-spectrum analysis.
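
For the final step, a plain fuzzy c-means (FCM) iteration over per-voxel feature vectors, seeded with given initial centroids, can be sketched as follows. This is the textbook FCM update, not the paper's full pipeline (the histogram and scale-space steps that produce the initial centroids are omitted), and the function and variable names are illustrative.

```python
import numpy as np

def fuzzy_c_means(features, init_centroids, m=2.0, n_iter=50, eps=1e-9):
    """Plain fuzzy c-means on voxel feature vectors.

    features       : (N, D) array, e.g. stacked [T1, T2, PD] values per voxel
    init_centroids : (C, D) array of starting centroids
    m              : fuzziness exponent (m > 1)
    Returns (memberships u of shape (C, N), centroids of shape (C, D)).
    """
    x = np.asarray(features, dtype=float)
    c = np.asarray(init_centroids, dtype=float).copy()
    for _ in range(n_iter):
        # squared distance from every voxel to every centroid, shape (C, N)
        d2 = ((x[None, :, :] - c[:, None, :]) ** 2).sum(axis=-1) + eps
        inv = d2 ** (-1.0 / (m - 1.0))
        u = inv / inv.sum(axis=0, keepdims=True)      # membership update
        um = u ** m
        c = um @ x / um.sum(axis=1, keepdims=True)    # centroid update
    return u, c

# Hypothetical usage: hard labels from the highest membership per voxel.
# u, c = fuzzy_c_means(voxel_features, outstanding_centroids)
# labels = u.argmax(axis=0)
```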

Object Tracking Based on Centroids Shifting with Scale Adaptation (중심 이동 기반의 스케일 적응적 물체 추적 알고리즘)

  • Lee, Suk-Ho;Choi, Eun-Cheol;Kang, Moon-Gi
    • Journal of Korea Multimedia Society / v.14 no.4 / pp.529-537 / 2011
  • In this paper, we propose a stable scale adaptive tracking method that uses centroids of the target colors. Most scale adaptive tracking methods have utilized histograms to determine target window sizes. However, in certain cases, histograms fail to provide good estimates of target sizes, for example, in the case of occlusion or the appearance of colors in the background that are similar to the target colors. This is due to the fact that histograms are related to the numbers of pixels that correspond to the target colors. Therefore, we propose the use of centroids that correspond to the target colors in the scale adaptation algorithm, since centroids are less sensitive to changes in the number of pixels that correspond to the target colors. Due to the spatial information inherent in centroids, a direct relationship can be established between centroids and the scale of target regions. Generally, after the zooming factors that correspond to all the target colors are calculated, the unreliable zooming factors are filtered out to produce a reliable zooming factor that determines the new scale of the target. Combined with the centroid based tracking algorithm, the proposed scale adaptation method results in a stable scale adaptive tracking algorithm. It tracks objects in a stable way, even when the background colors are similar to the colors of the object.
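
One way to realize the centroid-based zooming factors described above is sketched below. This is an illustrative reading of the abstract, not the authors' algorithm: the color-matching tolerance, the reference-frame bookkeeping, and the median used to filter out unreliable factors are all assumptions.

```python
import numpy as np

def color_centroids(frame, window, target_colors, tol=20.0):
    """Centroid (row, col) of the pixels inside `window` that match each
    target colour within an L1 tolerance. window = (r0, r1, c0, c1)."""
    r0, r1, c0, c1 = window
    patch = frame[r0:r1, c0:c1].astype(float)
    centroids = []
    for colour in target_colors:
        mask = np.abs(patch - np.asarray(colour, dtype=float)).sum(axis=-1) < tol
        ys, xs = np.nonzero(mask)
        if ys.size:
            centroids.append((ys.mean() + r0, xs.mean() + c0))
        else:
            centroids.append((np.nan, np.nan))
    return np.array(centroids)

def zoom_factor(ref_centroids, cur_centroids):
    """One zooming factor per target colour: how far its centroid sits from
    the mean centroid now, relative to the reference frame. Unreliable
    factors (NaN, division by zero) are filtered out with a median."""
    ref_d = np.linalg.norm(ref_centroids - np.nanmean(ref_centroids, axis=0), axis=1)
    cur_d = np.linalg.norm(cur_centroids - np.nanmean(cur_centroids, axis=0), axis=1)
    z = cur_d / np.where(ref_d == 0, np.nan, ref_d)
    z = z[np.isfinite(z)]
    return float(np.median(z)) if z.size else 1.0
```

The current window size could then be scaled by the returned factor before the next centroid-shifting step.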

A Study of Intelligent Recommendation System based on Naive Bayes Text Classification and Collaborative Filtering (나이브베이즈 분류모델과 협업필터링 기반 지능형 학술논문 추천시스템 연구)

  • Lee, Sang-Gi;Lee, Byeong-Seop;Bak, Byeong-Yong;Hwang, Hye-Kyong
    • Journal of Information Management / v.41 no.4 / pp.227-249 / 2010
  • Scholarly information has increased tremendously with the development of IT, especially the Internet. At the same time, however, people have to spend more time and effort because of information overload. There have been many research efforts in the fields of expert systems, data mining, and information retrieval concerning systems that recommend the information items a user is expected to want. Recently, hybrid systems that combine a content-based recommendation system with collaborative filtering, or that combine recommendation systems from other domains, have been developed. In this paper we address the problems of current recommendation systems and suggest a new system combining collaborative filtering and Naive Bayes classification. In this way, we resolve the over-specialization problem through collaborative filtering, and the lack of assessment information for, and recommendation of, new content through Naive Bayes classification. For verification, we applied the new model to the NDSL paper service of KISTI, specifically to papers from journals about Sitology and Electronics, and observed high satisfaction from the 4 experimental participants.
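
A minimal sketch of such a hybrid score, blending a user-based collaborative-filtering estimate with a Naive Bayes probability computed from paper abstracts, is shown below. The data layout, the equal-weight blend, and the scikit-learn components are assumptions for illustration, not the system described in the paper.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

def hybrid_scores(ratings, user, abstracts, liked, alpha=0.5):
    """Blend a user-based collaborative-filtering score with a Naive Bayes
    content score over paper abstracts.

    ratings  : (n_users, n_items) matrix of usage counts (0 = unseen)
    user     : index of the target user
    abstracts: list of n_items abstract texts
    liked    : 0/1 numpy array with the target user's feedback; only the
               entries for items the user has seen are used for training,
               and both classes must be present among them
    """
    ratings = np.asarray(ratings, dtype=float)

    # Collaborative filtering: cosine similarity between users.
    unit = ratings / (np.linalg.norm(ratings, axis=1, keepdims=True) + 1e-12)
    sim = unit @ unit.T
    cf = sim[user] @ ratings / (np.abs(sim[user]).sum() + 1e-12)

    # Content score: Naive Bayes trained on the abstracts the user has seen.
    seen = ratings[user] > 0
    X = TfidfVectorizer().fit_transform(abstracts)
    nb = MultinomialNB().fit(X[seen], liked[seen])
    content = nb.predict_proba(X)[:, 1]          # P(liked | abstract text)

    # Min-max normalise both scores before blending.
    def mm(v):
        return (v - v.min()) / (v.max() - v.min() + 1e-12)
    return alpha * mm(cf) + (1 - alpha) * mm(content)
```

The content term gives new papers a score even when they have no usage history, which is the cold-start gap the collaborative-filtering term alone cannot cover.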

Co-registration of PET-CT Brain Images using a Gaussian Weighted Distance Map (가우시안 가중치 거리지도를 이용한 PET-CT 뇌 영상정합)

  • Lee, Ho;Hong, Helen;Shin, Yeong-Gil
    • Journal of KIISE:Software and Applications / v.32 no.7 / pp.612-624 / 2005
  • In this paper, we propose a surface-based registration using a Gaussian weighted distance map for PET-CT brain image fusion. Our method is composed of three main steps: extraction of feature points, generation of the Gaussian weighted distance map, and measurement of similarity based on the weights. First, we segment the head using inverse region growing and remove noise from the segmented head using region-growing-based labeling in the PET and CT images, respectively; we then extract the feature points of the head using a sharpening filter. Second, a Gaussian weighted distance map is generated from the feature points of the CT images, which leads the feature points to converge robustly on the optimal location even under large geometric displacement. Third, a weight-based cross-correlation searches for the optimal location using the Gaussian weighted distance map of the CT images and the corresponding feature points extracted from the PET images. In our experiments, we generated a software phantom dataset to evaluate the accuracy and robustness of our method and used clinical datasets for computation time and visual inspection. The accuracy test evaluates the root-mean-square error on arbitrarily transformed software phantom datasets. The robustness test evaluates whether the weight-based cross-correlation reaches its maximum at the optimal location on software phantom datasets with large geometric displacement and noise. Experimental results show that our method gives more accurate and robust convergence than conventional surface-based registration.
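
The Gaussian weighted distance map and a weight-based similarity can be sketched roughly as below. The σ value, the nearest-neighbor sampling, and the omission of the actual search over rigid transformations are simplifications, so this is an illustration of the idea rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def gaussian_weighted_distance_map(ct_feature_mask, sigma=10.0):
    """Distance from every voxel to the nearest CT feature point, turned into
    a Gaussian weight that is 1.0 on a feature point and decays smoothly."""
    dist = distance_transform_edt(~np.asarray(ct_feature_mask, dtype=bool))
    return np.exp(-(dist ** 2) / (2.0 * sigma ** 2))

def weighted_similarity(weight_map, pet_points, transform):
    """Mean weight sampled at the transformed PET feature points; larger
    values mean the transformed PET surface lies closer to the CT surface."""
    pts = np.rint(transform(np.asarray(pet_points, dtype=float))).astype(int)
    inside = np.all((pts >= 0) & (pts < np.array(weight_map.shape)), axis=1)
    if not inside.any():
        return 0.0
    return float(weight_map[tuple(pts[inside].T)].mean())
```

An optimizer would then search over candidate rigid transforms for the one that maximizes `weighted_similarity`; the smooth weight map is what keeps that search stable under large initial displacement.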

Technical Trend on the Recycling Technologies for Stripping Process Waste Solution by the Patent and Paper Analysis (특허(特許)와 논문(論文)으로 본 스트리핑 공정폐액(工程廢液) 재활용(再活用) 기술(技術) 동향(動向))

  • Lee, Ho-Kyung;Lee, In-Gyoo;Park, Myung-Jun;Koo, Kee-Kahb;Cho, Young-Ju;Cho, Bong-Gyoo
    • Resources Recycling / v.22 no.4 / pp.81-90 / 2013
  • Since the 1990s, with the rapid development of the information and communication industry, the demand for semiconductors and LCDs has continued to increase. Accordingly, the demand for thinner and stripper liquor, which are used in the formation of fine circuit patterns to remove the photoresist (the core sensitizer) and to dilute it, and which are among the most expensive process chemicals, has been increasing dramatically, and the need to recycle waste thinner and stripper liquor has emerged. Recently, recycling technologies for stripping process waste solution have been widely studied from economic and environmental standpoints as well as in terms of the efficiency of the stripping process. In this study, papers and patents on recycling technologies for waste solution from the stripping process were analyzed. The search was limited to open patents of the USA (US), the European Union (EP), Japan (JP), and Korea (KR), and to SCI journals, from 1981 to 2010. Patents and journal papers were collected using keyword searches and screened by filtering criteria. The trends of the patents and papers were analyzed by year, country, company, and technology.

A Novel Method for Automated Honeycomb Segmentation in HRCT Using Pathology-specific Morphological Analysis (병리특이적 형태분석 기법을 이용한 HRCT 영상에서의 새로운 봉와양폐 자동 분할 방법)

  • Kim, Young Jae;Kim, Tae Yun;Lee, Seung Hyun;Kim, Kwang Gi;Kim, Jong Hyo
    • KIPS Transactions on Software and Data Engineering / v.1 no.2 / pp.109-114 / 2012
  • Honeycombs are dense structures in which small cysts, generally about 2~10 mm in diameter, are surrounded by walls of fibrosis. When honeycombing is found in a patient, the incidence of acute exacerbation is generally very high, so the observation and quantitative measurement of honeycombing are considered a significant marker for clinical diagnosis. From this point of view, we propose an automatic segmentation method using morphological image processing and cluster-analysis techniques. First, image noise was removed by Gaussian filtering, and a morphological dilation method was then applied to segment the lung regions. Second, honeycomb cyst candidates were detected through 8-neighborhood pixel exploration, and non-cyst regions were removed using region growing and wall-pattern testing. Lastly, the final honeycomb regions were segmented by extracting, through cluster analysis, the dense regions that consist of two or more cysts. The proposed method was applied to 80 high-resolution computed tomography (HRCT) images and achieved a sensitivity of 89.4% and a positive predictive value (PPV) of 72.2%.
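
A rough sketch of such a pipeline using scipy.ndimage is given below. The thresholds, dilation radii, and the dark-blob criterion for cyst candidates are illustrative assumptions rather than the paper's parameters, and the region-growing/wall-pattern test is reduced to a simple size filter.

```python
import numpy as np
from scipy import ndimage as ndi

EIGHT = np.ones((3, 3), dtype=bool)   # 8-neighbourhood connectivity

def honeycomb_regions(hrct_slice, lung_mask, cyst_max_area=200,
                      min_cysts_per_cluster=2, cluster_gap=15):
    """Denoise, restrict to the (dilated) lung, find small dark blobs as
    cyst candidates, and keep only candidates grouped into clusters of two
    or more."""
    img = ndi.gaussian_filter(np.asarray(hrct_slice, dtype=float), sigma=1.0)
    lung = ndi.binary_dilation(np.asarray(lung_mask, dtype=bool), iterations=3)

    # Cyst candidates: pixels clearly darker than the lung average.
    dark = (img < img[lung].mean() - img[lung].std()) & lung
    labels, n = ndi.label(dark, structure=EIGHT)
    sizes = ndi.sum(dark, labels, index=np.arange(1, n + 1))
    cysts = np.isin(labels, np.flatnonzero(sizes <= cyst_max_area) + 1)

    # Cluster analysis: keep cysts whose dilated footprints merge with others.
    grouped, gn = ndi.label(ndi.binary_dilation(cysts, iterations=cluster_gap),
                            structure=EIGHT)
    keep = np.zeros_like(cysts)
    for g in range(1, gn + 1):
        region = grouped == g
        _, n_cysts = ndi.label(cysts & region, structure=EIGHT)
        if n_cysts >= min_cysts_per_cluster:
            keep |= cysts & region
    return keep
```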

Super Resolution Algorithm Based on Edge Map Interpolation and Improved Fast Back Projection Method in Mobile Devices (모바일 환경을 위해 에지맵 보간과 개선된 고속 Back Projection 기법을 이용한 Super Resolution 알고리즘)

  • Lee, Doo-Hee;Park, Dae-Hyun;Kim, Yoon
    • KIPS Transactions on Software and Data Engineering / v.1 no.2 / pp.103-108 / 2012
  • Recently, as high-performance mobile devices have become widespread and multimedia content applications have expanded, super resolution (SR), which reconstructs low-resolution images into high-resolution images, is becoming important. On mobile devices, SR algorithms must take the amount of computation and memory into account because resources are limited. In this paper, we propose a new single-frame fast SR technique suitable for mobile devices. To prevent color distortion, we convert the RGB color domain to the HSV color domain and process the brightness information V (Value), considering the characteristics of human visual perception. First, the low-resolution image is enlarged by an improved fast back projection that takes noise elimination into account. At the same time, a reliable edge map is extracted using LoG (Laplacian of Gaussian) filtering. Finally, the high-resolution image is reconstructed using the edge information and the improved back projection result. The proposed technique effectively removes the unnatural artifacts generated during super-resolution restoration, and edge information that could otherwise be lost is recovered and emphasized. The experimental results indicate that the proposed algorithm performs better than conventional back projection and interpolation methods.
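
The two building blocks can be sketched in their classic forms as below; this is the standard iterative back projection and a plain LoG edge map applied to a single (e.g. V) channel, not the paper's improved fast variant, and the blur model, scale factor, and threshold are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi

def back_projection_sr(lr, scale=2, n_iter=10, blur_sigma=1.0):
    """Classic iterative back projection: start from an interpolated guess,
    simulate the low-resolution image, and add the upsampled error back
    until the simulated and observed LR images agree."""
    lr = np.asarray(lr, dtype=float)
    hr = ndi.zoom(lr, scale, order=3)                       # initial upsampling
    for _ in range(n_iter):
        simulated = ndi.zoom(ndi.gaussian_filter(hr, blur_sigma),
                             1.0 / scale, order=1)          # imaging model
        hr += ndi.zoom(lr - simulated, scale, order=1)      # back-project error
    return hr

def log_edge_map(image, sigma=1.5, rel_thresh=0.05):
    """Edge map from a Laplacian-of-Gaussian filter: pixels whose LoG
    response magnitude exceeds a fraction of the maximum response."""
    log = ndi.gaussian_laplace(np.asarray(image, dtype=float), sigma)
    return np.abs(log) > rel_thresh * np.abs(log).max()
```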

Geographical Name Denoising by Machine Learning of Event Detection Based on Twitter (트위터 기반 이벤트 탐지에서의 기계학습을 통한 지명 노이즈제거)

  • Woo, Seungmin;Hwang, Byung-Yeon
    • KIPS Transactions on Software and Data Engineering / v.4 no.10 / pp.447-454 / 2015
  • This paper proposes geographical-name denoising by machine learning for Twitter-based event detection. Recently, the increasing number of smartphone users has led to growing use of SNS. In particular, the short message format (fewer than 140 characters) and the follow service give Twitter the power to convey and diffuse information quickly. These characteristics and its mobile-optimized design give Twitter a fast information-conveying speed, which allows it to play a role in reporting disasters or events. Related research used individual Twitter users as sensors to detect events that occur in reality, and employed geographical names as keywords, exploiting the fact that an event occurs in a specific place. However, it ignored the noise arising from homographs of geographical names, which became an important factor lowering the accuracy of event detection. In this paper, we apply a denoising technique based on two methods, removal and forecasting. First, after a filtering step based on a purpose-built database of noise terms, we determine whether a term actually refers to a geographical name using Naive Bayesian classification. Finally, using the experimental data, we obtain the probability values from the machine-learning model. Based on the forecasting technique proposed in this paper, the reliability supporting the need for the denoising technique turned out to be 89.6%.
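
A minimal sketch of the Naive Bayes step, deciding whether a tweet's candidate place-name token really refers to the place, could look like the following; the training tweets, labels, and threshold are made up for illustration and are not the paper's data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Made-up training tweets that contain a candidate place-name token:
# label 1 = the token really refers to the place, 0 = homograph noise.
tweets = [
    "heavy rain is flooding the streets downtown right now",
    "power outage reported across the whole district tonight",
    "just finished a novel named after that city, great read",
    "my cat is named after the town and she is adorable",
]
labels = [1, 1, 0, 0]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(tweets, labels)

# P(real place reference | tweet); tweets below a chosen threshold would be
# removed as noise before the event-detection step.
new_tweet = ["flood warning issued for the area near the river"]
print(model.predict_proba(new_tweet)[:, 1])
```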