• Title/Summary/Keyword: Filtering (필터 링)


Object Tracking Based on Centroids Shifting with Scale Adaptation (중심 이동 기반의 스케일 적응적 물체 추적 알고리즘)

  • Lee, Suk-Ho;Choi, Eun-Cheol;Kang, Moon-Gi
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.4
    • /
    • pp.529-537
    • /
    • 2011
  • In this paper, we propose a stable scale adaptive tracking method that uses centroids of the target colors. Most scale adaptive tracking methods have utilized histograms to determine target window sizes. However, in certain cases, histograms fail to provide good estimates of target sizes, for example, in the case of occlusion or the appearance of colors in the background that are similar to the target colors. This is due to the fact that histograms are related to the numbers of pixels that correspond to the target colors. Therefore, we propose the use of centroids that correspond to the target colors in the scale adaptation algorithm, since centroids are less sensitive to changes in the number of pixels that correspond to the target colors. Due to the spatial information inherent in centroids, a direct relationship can be established between centroids and the scale of target regions. Generally, after the zooming factors that correspond to all the target colors are calculated, the unreliable zooming factors are filtered out to produce a reliable zooming factor that determines the new scale of the target. Combined with the centroid based tracking algorithm, the proposed scale adaptation method results in a stable scale adaptive tracking algorithm. It tracks objects in a stable way, even when the background colors are similar to the colors of the object.
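A minimal sketch of the centroid-based scale cue this abstract describes: centroids of the pixels matching each target color are computed, per-color zooming factors are taken as ratios of centroid distances from a reference point, and unreliable factors are filtered out. Function names, the reference-point formulation, and the use of the median as the reliability filter are illustrative assumptions, not the paper's exact algorithm.

```python
import math

def centroid(points):
    """Mean position of the pixels belonging to one target color."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def zoom_factor(prev_centroids, curr_centroids, prev_ref, curr_ref):
    """Per-color zooming factors from centroid distances to a reference point
    (e.g. the window center); the median discards unreliable factors."""
    def dist(c, r):
        return math.hypot(c[0] - r[0], c[1] - r[1])
    factors = sorted(dist(c, curr_ref) / dist(p, prev_ref)
                     for p, c in zip(prev_centroids, curr_centroids)
                     if dist(p, prev_ref) > 0)
    return factors[len(factors) // 2]  # median as a simple reliability filter
```

Because centroids carry spatial information, the ratio relates directly to the target scale, whereas histogram counts change whenever similar background colors or occlusion alter the pixel counts.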

A Study of Intelligent Recommendation System based on Naive Bayes Text Classification and Collaborative Filtering (나이브베이즈 분류모델과 협업필터링 기반 지능형 학술논문 추천시스템 연구)

  • Lee, Sang-Gi;Lee, Byeong-Seop;Bak, Byeong-Yong;Hwang, Hye-Kyong
    • Journal of Information Management
    • /
    • v.41 no.4
    • /
    • pp.227-249
    • /
    • 2010
  • Scholarly information has increased tremendously with the development of IT, especially the Internet; at the same time, however, people must spend more time and effort because of information overload. There have been many research efforts in expert systems, data mining, and information retrieval concerning systems that recommend the information items a user is expected to want. Recently, hybrid systems have been developed that combine a content-based recommendation system with collaborative filtering, or that combine recommendation systems from other domains. In this paper, we address the problems of current recommendation systems and suggest a new system combining collaborative filtering and Naive Bayes classification: collaborative filtering resolves the over-specialization problem, while Naive Bayes classification handles the lack of assessment information and the recommendation of new contents. For verification, we applied the new model to NDSL's paper service at KISTI, specifically papers from journals on Sitology and Electronics, and observed high satisfaction from 4 experimental participants.
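The combination described above can be sketched as a simple back-off: score an item collaboratively when it has enough ratings, and fall back to a Naive Bayes text score (which needs no ratings) for new or sparsely rated papers. The threshold, function names, and the mean-rating CF score are illustrative assumptions, not the paper's implementation.

```python
import math

def nb_score(words, class_word_counts, vocab_size):
    """Log-probability of a document under one class, with add-one smoothing."""
    total = sum(class_word_counts.values())
    return sum(math.log((class_word_counts.get(w, 0) + 1) / (total + vocab_size))
               for w in words)

def hybrid_score(item, ratings, words, class_word_counts, vocab_size,
                 min_ratings=3):
    """Collaborative score when enough ratings exist; otherwise back off to
    the content-based Naive Bayes score for new/unrated items."""
    votes = ratings.get(item, [])
    if len(votes) >= min_ratings:
        return sum(votes) / len(votes)          # collaborative part
    return nb_score(words, class_word_counts, vocab_size)  # content back-off
```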

Co-registration of PET-CT Brain Images using a Gaussian Weighted Distance Map (가우시안 가중치 거리지도를 이용한 PET-CT 뇌 영상정합)

  • Lee, Ho;Hong, Helen;Shin, Yeong-Gil
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.7
    • /
    • pp.612-624
    • /
    • 2005
  • In this paper, we propose a surface-based registration method that uses a Gaussian weighted distance map for PET-CT brain image fusion. Our method consists of three main steps: extraction of feature points, generation of a Gaussian weighted distance map, and measurement of similarity based on the weights. First, we segment the head in the PET and CT images using inverse region growing and remove noise segmented with the head using region-growing-based labeling; we then extract feature points of the head using a sharpening filter. Second, a Gaussian weighted distance map is generated from the feature points of the CT images, which leads the feature points to converge robustly on the optimal location even under a large geometrical displacement. Third, weight-based cross-correlation searches for the optimal location using the Gaussian weighted distance map of the CT images at the feature points extracted from the PET images. In our experiments, we generated a software phantom dataset to evaluate the accuracy and robustness of our method, and we used a clinical dataset for computation time and visual inspection. The accuracy test evaluates the root mean square error on arbitrarily transformed software phantom datasets. The robustness test evaluates whether the weight-based cross-correlation reaches its maximum at the optimal location on software phantom datasets with large geometrical displacement and noise. Experimental results show that our method yields more accurate and robust convergence than conventional surface-based registration.
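The second and third steps can be sketched on a small grid: each cell gets a Gaussian weight of its distance to the nearest CT feature point, and the similarity of a candidate alignment is the sum of weights at the PET feature locations, which peaks when PET points fall on CT features. Grid size, sigma, and names are illustrative assumptions.

```python
import math

def gaussian_weighted_distance_map(shape, feature_points, sigma=2.0):
    """Weight in (0, 1] per cell: Gaussian of the distance to the nearest
    CT feature point, so the map decays smoothly away from the surface."""
    h, w = shape
    gmap = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d = min(math.hypot(x - fx, y - fy) for fx, fy in feature_points)
            gmap[y][x] = math.exp(-d * d / (2 * sigma * sigma))
    return gmap

def weighted_similarity(gmap, pet_points):
    """Sum of CT weights at the (transformed) PET feature locations;
    maximal when PET points coincide with CT feature points."""
    return sum(gmap[y][x] for x, y in pet_points)
```

The smooth decay is what gives the wide capture range: even a badly displaced initial pose still produces a nonzero gradient toward the optimum.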

Technical Trend on the Recycling Technologies for Stripping Process Waste Solution by the Patent and Paper Analysis (특허(特許)와 논문(論文)으로 본 스트리핑 공정폐액(工程廢液) 재활용(再活用) 기술(技術) 동향(動向))

  • Lee, Ho-Kyung;Lee, In-Gyoo;Park, Myung-Jun;Koo, Kee-Kahb;Cho, Young-Ju;Cho, Bong-Gyoo
    • Resources Recycling
    • /
    • v.22 no.4
    • /
    • pp.81-90
    • /
    • 2013
  • Since the 1990s, with the rapid development of the information and communication industry, demand for semiconductors and LCDs has continued to increase. Consequently, in the formation of fine circuit patterns, consumption of thinner and stripper solutions, which are used to remove photoresist, the core of the sensitizer, and its dilution, has grown dramatically, creating a need for recycling of waste thinner and stripper solutions. Recently, recycling technologies for stripping-process waste solution have been widely studied from economic and environmental perspectives and in terms of the efficiency of the stripping process. In this study, we analyzed papers and patents on recycling technologies for waste solution from the stripping process. The search was limited to open patents of the USA (US), European Union (EP), Japan (JP), and Korea (KR), and to SCI journals, from 1981 to 2010. Patents and journal papers were collected by keyword searching and screened by filtering criteria. Trends in the patents and papers were analyzed by year, country, company, and technology.

A Novel Method for Automated Honeycomb Segmentation in HRCT Using Pathology-specific Morphological Analysis (병리특이적 형태분석 기법을 이용한 HRCT 영상에서의 새로운 봉와양폐 자동 분할 방법)

  • Kim, Young Jae;Kim, Tae Yun;Lee, Seung Hyun;Kim, Kwang Gi;Kim, Jong Hyo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.1 no.2
    • /
    • pp.109-114
    • /
    • 2012
  • Honeycombs are dense structures in which small cysts, generally about 2~10 mm in diameter, are surrounded by fibrotic walls. When honeycombing is found in a patient, the incidence of acute exacerbation is generally very high; the observation and quantitative measurement of honeycombing are therefore considered a significant marker for clinical diagnosis. From this point of view, we propose an automatic segmentation method using morphological image processing and assessment of the degree of clustering. First, image noise is removed by Gaussian filtering, and a morphological dilation method is applied to segment the lung regions. Second, honeycomb cyst candidates are detected through 8-neighborhood pixel exploration, and non-cyst regions are removed using the region growing method and wall-pattern testing. Lastly, the final honeycomb regions are segmented by extracting, through cluster analysis, the dense regions consisting of two or more cysts. The proposed method was applied to 80 high-resolution computed tomography (HRCT) images and achieved a sensitivity of 89.4% and a positive predictive value (PPV) of 72.2%.
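The final clustering step can be sketched as grouping cyst candidate centers and keeping only groups of two or more nearby cysts. The greedy single-link grouping, the distance threshold, and the names are illustrative assumptions, not the paper's exact cluster analysis.

```python
import math

def dense_cyst_groups(centers, max_gap=10.0):
    """Greedy single-link grouping of cyst centers (pixel coordinates);
    only groups of >= 2 cysts are kept as honeycomb regions."""
    groups = []
    for c in centers:
        for g in groups:
            if any(math.hypot(c[0] - p[0], c[1] - p[1]) <= max_gap for p in g):
                g.append(c)
                break
        else:
            groups.append([c])   # no nearby group: start a new one
    return [g for g in groups if len(g) >= 2]
```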

Super Resolution Algorithm Based on Edge Map Interpolation and Improved Fast Back Projection Method in Mobile Devices (모바일 환경을 위해 에지맵 보간과 개선된 고속 Back Projection 기법을 이용한 Super Resolution 알고리즘)

  • Lee, Doo-Hee;Park, Dae-Hyun;Kim, Yoon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.1 no.2
    • /
    • pp.103-108
    • /
    • 2012
  • Recently, as high-performance mobile devices have become widespread and multimedia content applications have expanded, Super Resolution (SR) techniques, which reconstruct low-resolution images into high-resolution images, are becoming important. On mobile devices, SR algorithms must account for computational load and memory because resources are restricted. In this paper, we propose a new fast single-frame SR technique suitable for mobile devices. To prevent color distortion, we convert the RGB color domain to the HSV color domain and process the brightness information V (Value), reflecting the characteristics of human visual perception. First, the low-resolution image is enlarged by an improved fast back projection that accounts for noise elimination; at the same time, a reliable edge map is extracted using LoG (Laplacian of Gaussian) filtering. Finally, the high-resolution image is reconstructed using the edge information and the improved back projection result. The proposed technique effectively removes the unnatural artifacts generated during super-resolution restoration, and edge information that could otherwise be lost is recovered and emphasized. Experimental results indicate that the proposed algorithm performs better than conventional back projection and interpolation methods.
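The LoG edge-map step can be sketched as building a discrete Laplacian-of-Gaussian kernel and convolving it with the brightness channel; zero-crossings of the response mark edges. Kernel size and sigma are illustrative choices, not values from the paper.

```python
import math

def log_kernel(size=5, sigma=1.0):
    """Discrete LoG kernel: negative center, positive surround."""
    half = size // 2
    s2 = sigma * sigma
    k = []
    for y in range(-half, half + 1):
        k.append([(x * x + y * y - 2 * s2) / (s2 * s2)
                  * math.exp(-(x * x + y * y) / (2 * s2))
                  for x in range(-half, half + 1)])
    return k

def convolve(img, kernel):
    """Valid-mode 2-D convolution used to get the LoG response."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    return [[sum(img[y + i][x + j] * kernel[i][j]
                 for i in range(kh) for j in range(kw))
             for x in range(w - kw + 1)]
            for y in range(h - kh + 1)]
```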

Geographical Name Denoising by Machine Learning of Event Detection Based on Twitter (트위터 기반 이벤트 탐지에서의 기계학습을 통한 지명 노이즈제거)

  • Woo, Seungmin;Hwang, Byung-Yeon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.4 no.10
    • /
    • pp.447-454
    • /
    • 2015
  • This paper proposes geographical-name denoising by machine learning for Twitter-based event detection. Recently, the growing number of smartphone users has driven the growth of SNS. In particular, Twitter's short messages (at most 140 characters) and follow function let it convey and diffuse information quickly; these characteristics, together with mobile optimization, give Twitter a fast information-delivery speed that can serve to report disasters or events. Related research has used individual Twitter users as sensors to detect events that occur in reality, employing geographical names as keywords on the premise that an event occurs at a specific place. However, that work ignored the denoising of homographs of geographical names, which became an important factor lowering the accuracy of event detection. In this paper, we apply a denoising technique with two methods: removal and forecasting. After a filtering step that uses a purpose-built noise database, we determine whether a term is really a geographical name using Naive Bayesian classification. Finally, using the experimental data, we obtained the probability values through machine learning. Based on the forecasting technique proposed in this paper, the reliability of the denoising technique turned out to be 89.6%.
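The Naive Bayesian step above can be sketched as a two-class classifier over a tweet's context words, deciding whether an occurrence of a place name is a real location mention or homograph noise. The training examples and names below are invented for illustration; they are not the paper's data or code.

```python
import math
from collections import Counter

def train(docs):
    """docs: list of (words, label). Returns per-class word counts and priors."""
    counts, priors = {}, Counter()
    for words, label in docs:
        priors[label] += 1
        counts.setdefault(label, Counter()).update(words)
    return counts, priors

def classify(words, counts, priors):
    """Pick the class with the highest smoothed log-posterior."""
    vocab = {w for c in counts.values() for w in c}
    best, best_lp = None, -math.inf
    for label, prior in priors.items():
        total = sum(counts[label].values())
        lp = math.log(prior / sum(priors.values()))
        lp += sum(math.log((counts[label][w] + 1) / (total + len(vocab)))
                  for w in words)
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```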

Development of an AIDA(Automatic Incident Detection Algorithm) for Uninterrupted Flow Based on the Concept of Short-term Displaced Flow (연속류도로 단기 적체 교통량 개념 기반 돌발상황 자동감지 알고리즘 개발)

  • Lee, Kyu-Soon;Shin, Chi-Hyun
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.15 no.2
    • /
    • pp.13-23
    • /
    • 2016
  • Many traffic centers are highly hesitant to employ existing Automatic Incident Detection Algorithms because of high false alarm rates, low detection rates, and the enormous effort required to maintain algorithm parameters, together with complex algorithm structures and filtering/smoothing processes. Concern over this situation is growing, particularly in the field of freeway incident management. This study proposes a new algorithm built on a novel concept, the Displaced Flow Index (DiFI), which is similar to a product of relative speed and relative occupancy for every execution period. The algorithm structure is very simple and easy to understand, has a minimum of parameters, and can use raw data without any additional pre-processing. To evaluate the performance of the DiFI algorithm, a validation test was conducted using detector data from the Naebu Expressway in Seoul, followed by transferability tests with Gyeongbu Expressway detector data. The performance tests used indices such as DR (Detection Rate), FAR (False Alarm Rate), MTTD (Mean Time To Detect), CR (Classification Rate), CI (Composite Index), and PI (Performance Index). The DR reached up to 100%, the MTTD was a little over 1.0 minute, and the FAR was as low as 2.99%. This newly designed algorithm outperformed SAO and the most popular AIDAs, such as APID and DELOS, showing the best performance in every category.
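A minimal sketch of the DiFI idea as the abstract states it: a product of relative speed (how far speed has dropped below free flow) and relative occupancy (how far occupancy has risen above its baseline) per execution period. The exact formula, baselines, and alarm threshold below are an interpretation for illustration, not the paper's definition.

```python
def difi(speed, occ, free_speed, base_occ):
    """Displaced Flow Index sketch: relative speed drop times relative
    occupancy rise, clamped at zero so free flow scores 0."""
    rel_speed = (free_speed - speed) / free_speed
    rel_occ = (occ - base_occ) / max(base_occ, 1e-9)
    return max(rel_speed, 0.0) * max(rel_occ, 0.0)

def incident_alarm(speed, occ, free_speed=100.0, base_occ=10.0, threshold=0.5):
    """Raise an alarm when the index exceeds a threshold; raw detector data
    (speed, occupancy) is used directly, with no smoothing step."""
    return difi(speed, occ, free_speed, base_occ) >= threshold
```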

Automatic Extraction of Buildings using Aerial Photo and Airborne LIDAR Data (항공사진과 항공레이저 데이터를 이용한 건물 자동추출)

  • 조우석;이영진;좌윤석
    • Korean Journal of Remote Sensing
    • /
    • v.19 no.4
    • /
    • pp.307-317
    • /
    • 2003
  • This paper presents an algorithm that automatically extracts buildings from among the many different features on the earth's surface by fusing LIDAR data with panchromatic aerial images. The proposed algorithm consists of three stages: a point-level process, a polygon-level process, and a parameter-space-level process. In the first stage, we eliminate gross errors and apply a local maxima filter to detect building candidate points in the raw laser scanning data. A grouping procedure then segments the raw LIDAR data, and the segmented LIDAR data is polygonized by the encasing polygon algorithm developed in this research. In the second stage, we eliminate non-building polygons using several constraints, such as area and circularity. In the last stage, all the polygons generated in the second stage are projected onto the aerial stereo images through collinearity condition equations. Finally, we fuse the projected encasing polygons with edges detected by image processing to refine the building segments. The experimental results showed that the RMSEs of building corners in X, Y, and Z were 8.1 cm, 24.7 cm, and 35.9 cm, respectively.
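The point-level stage can be sketched as a local maxima filter over a gridded height raster: a cell is kept as a building-candidate seed if it dominates its window and rises sufficiently above the lowest neighbour. The window size, rise threshold, and gridded-input assumption are illustrative, not the paper's parameters.

```python
def local_maxima(heights, win=1, min_rise=3.0):
    """Return (x, y) cells that are local height maxima in a (2*win+1)^2
    window and stand at least min_rise metres above the lowest neighbour."""
    h, w = len(heights), len(heights[0])
    seeds = []
    for y in range(h):
        for x in range(w):
            nb = [heights[j][i]
                  for j in range(max(0, y - win), min(h, y + win + 1))
                  for i in range(max(0, x - win), min(w, x + win + 1))
                  if (i, j) != (x, y)]
            if nb and heights[y][x] >= max(nb) \
                  and heights[y][x] - min(nb) >= min_rise:
                seeds.append((x, y))
    return seeds
```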

Development of Android Smart Phone App for Analysis of Remote Sensing Images (위성영상정보 분석을 위한 안드로이드 스마트폰 앱 개발)

  • Kang, Sang-Goo;Lee, Ki-Won
    • Korean Journal of Remote Sensing
    • /
    • v.26 no.5
    • /
    • pp.561-570
    • /
    • 2010
  • The purpose of this study is to develop an Android smartphone app that provides analysis capabilities for remote sensing images, using the open-source mobile browsing framework gvSIG, the open-source remote sensing software OTB, and the open-source DBMS PostgreSQL. The app implements five kinds of remote sensing algorithms for filtering, segmentation, and classification, and the processed results are stored and managed in an image database for retrieval. Smartphone users can easily access these functions through the app's graphical user interfaces, which are internally linked to an application server for image analysis processing and to the external DBMS. In addition, a practical tiling method for smartphone environments is implemented to reduce the delay between a user's request and the processing server's response. To date, most apps for remotely sensed image data have been mainly concerned with image visualization, unlike this approach, which provides analysis capabilities. As smartphone apps with remote sensing analysis functions for general users and experts become widely used, remote sensing images can be regarded as information resources capable of producing actual mobile content, not merely potential resources. We expect this study to trigger technological progress and other unique attempts to develop a variety of smartphone apps for remote sensing images.