• Title/Summary/Keyword: Automatic Extraction Algorithm

Search Results: 298

Adaptive Image Content-Based Retrieval Techniques for Multiple Queries (다중 질의를 위한 적응적 영상 내용 기반 검색 기법)

  • Hong Jong-Sun;Kang Dae-Seong
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.3 s.303 / pp.73-80 / 2005
  • Recently there have been many efforts to support searching and browsing based on the visual content of image and multimedia data. Most existing approaches to content-based image retrieval rely on query by example or on user-specified low-level features such as color, shape, and texture, but these query methods are neither easy to use nor flexible. In this paper we propose a method for automatic color object extraction and labelling to support multiple queries in a content-based image retrieval system. The approach simplifies the regions within images using a single colorizing algorithm and extracts color objects using the proposed Color and Spatial based Binary tree map (CSB tree map). Then, by searching over a large number of processed regions, an index for the database is created using the proposed labelling method. This allows very fast indexing of the image by its color contents and spatial attributes. Furthermore, information about the labelled regions, such as the color set, size, and location, enables varied multiple queries that combine both the color content and the spatial relationships of regions. We demonstrated the high performance of the proposed system through experiments comparing it with another algorithm on the 'Washington' image database.
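The CSB tree map itself is not specified in the abstract, but the underlying indexing idea, quantizing colors, labelling connected regions, and recording each region's color, size, and location, can be sketched as follows. This is a minimal sketch assuming 8-bit RGB input; all names and parameters are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def index_color_regions(image, levels=4, min_size=100):
    """Quantize colors, label connected regions per color, and build a
    simple (color, size, location) index for spatial color queries."""
    # Coarse color quantization: map each 8-bit channel to `levels` bins.
    quantized = (image // (256 // levels)).astype(np.int32)
    # Encode the quantized (R, G, B) triple as a single code per pixel.
    codes = (quantized[..., 0] * levels**2
             + quantized[..., 1] * levels
             + quantized[..., 2])

    index = []
    for code in np.unique(codes):
        mask = codes == code
        labeled, n = ndimage.label(mask)      # connected components per color
        for region in range(1, n + 1):
            size = int((labeled == region).sum())
            if size < min_size:               # skip tiny noise regions
                continue
            cy, cx = ndimage.center_of_mass(labeled == region)
            index.append({"color": int(code), "size": size,
                          "location": (float(cx), float(cy))})
    return index
```

Each index entry carries exactly the attributes the abstract names (color set, size, location), so a multiple query can filter on color and then on spatial relationships between the returned regions.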

RPC Correction of KOMPSAT-3A Satellite Image through Automatic Matching Point Extraction Using Unmanned Aerial Vehicle Imagery (무인항공기 영상 활용 자동 정합점 추출을 통한 KOMPSAT-3A 위성영상의 RPC 보정)

  • Park, Jueon;Kim, Taeheon;Lee, Changhui;Han, Youkyung
    • Korean Journal of Remote Sensing / v.37 no.5_1 / pp.1135-1147 / 2021
  • In order to geometrically correct high-resolution satellite imagery, a sensor modeling process that restores the geometric relationship between the satellite sensor and the ground surface at the image acquisition time is required. In general, high-resolution satellites provide RPC (Rational Polynomial Coefficient) information, but the vendor-provided RPC includes geometric distortion caused by the position and orientation of the satellite sensor. GCPs (Ground Control Points) are generally used to correct the RPC errors. The representative method of acquiring GCPs is a field survey to obtain accurate ground coordinates. However, it is difficult to find GCPs in the satellite image due to the quality of the image, land cover change, relief displacement, etc. By using image maps acquired from various sensors as reference data, it is possible to automate the collection of GCPs through an image matching algorithm. In this study, the RPC of a KOMPSAT-3A satellite image was corrected using matching points extracted from UAV (Unmanned Aerial Vehicle) imagery. We propose a pre-processing method for the extraction of matching points between the UAV imagery and the KOMPSAT-3A satellite image. To this end, we compared the characteristics of matching points extracted by independently applying SURF (Speeded-Up Robust Features) and phase correlation, which are representative feature-based and area-based matching methods, respectively. The RPC adjustment parameters were calculated using the matching points extracted by each algorithm. To verify the performance and usability of the proposed method, it was compared with the GCP-based RPC correction result. The GCP-based method improved the correction accuracy by 2.14 pixels for the sample and 5.43 pixels for the line compared to the vendor-provided RPC. The proposed method using SURF and phase correlation improved the accuracy of the sample by 0.83 and 1.49 pixels, and that of the line by 4.81 and 5.19 pixels, respectively, compared to the vendor-provided RPC. The experimental results show that the proposed method using UAV imagery is a possible alternative to the GCP-based method for RPC correction.
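The two matching strategies the authors compare can be illustrated roughly as below. This is a minimal sketch, not the paper's pipeline: `cv2.phaseCorrelate` stands in for the area-based method, and ORB stands in for SURF (which requires an opencv-contrib build); all parameters are assumptions.

```python
import cv2
import numpy as np

def area_based_shift(reference, target):
    """Area-based matching: estimate the translation between two
    co-registered grayscale patches via phase correlation."""
    (dx, dy), response = cv2.phaseCorrelate(np.float32(reference),
                                            np.float32(target))
    return dx, dy, response   # shift in pixels and the correlation peak

def feature_based_matches(reference, target, max_matches=50):
    """Feature-based matching with ORB as a stand-in for SURF."""
    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(reference, None)
    k2, d2 = orb.detectAndCompute(target, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
    # Candidate matching points usable as pseudo-GCPs for RPC adjustment.
    return [(k1[m.queryIdx].pt, k2[m.trainIdx].pt)
            for m in matches[:max_matches]]
```

In a workflow like the paper's, the resulting point pairs would replace surveyed GCPs when estimating the RPC adjustment parameters.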

Automated Algorithm for Super Resolution (SR) using Satellite Images (위성영상을 이용한 Super Resolution(SR)을 위한 자동화 알고리즘)

  • Lee, S-Ra-El;Ko, Kyung-Sik;Park, Jong-Won
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.18 no.2 / pp.209-216 / 2018
  • High-resolution satellite imagery is used in diverse fields such as meteorological observation, topographic observation, remote sensing (RS), military facility monitoring, and protection of cultural heritage. Low-resolution imagery can result from hardware conditions (e.g., optical system, satellite operation altitude, image sensor) even when the images were obtained from the same satellite imaging system. Once a satellite is launched, the imaging system cannot be adjusted to improve the resolution of the degraded images. Therefore, there should be a way to improve resolution using the satellite imagery itself. In this study, a super resolution (SR) algorithm was adopted to improve the resolution of such low-resolution satellite imagery. The SR algorithm enhances image resolution by registering multiple low-resolution images. In satellite imagery, however, it is difficult to obtain several images of the same region. To address this problem, this study performed the SR algorithm by correcting geometric changes between images after automatically extracting feature points and applying a projection transform. As a result, a clear edge was obtained, just as in SR results for which feature points were selected manually.
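A rough sketch of the registration step described above: automatic feature points, a projection (homography) transform, and a simple average of the aligned frames. It assumes grayscale inputs, ORB features, and mean fusion; the paper's actual feature detector and SR fusion step are not specified here.

```python
import cv2
import numpy as np

def register_and_fuse(reference, images):
    """Align each low-resolution frame to the reference with an
    automatically estimated projective transform, then average."""
    orb = cv2.ORB_create(nfeatures=3000)
    k_ref, d_ref = orb.detectAndCompute(reference, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    stack = [np.float32(reference)]
    for img in images:
        k, d = orb.detectAndCompute(img, None)
        matches = sorted(matcher.match(d, d_ref), key=lambda m: m.distance)[:200]
        src = np.float32([k[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        # Projection transform estimated robustly from the matched points.
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        warped = cv2.warpPerspective(img, H, reference.shape[::-1])
        stack.append(np.float32(warped))
    # Simple fusion: pixel-wise mean of the geometrically corrected stack.
    return np.uint8(np.clip(np.mean(stack, axis=0), 0, 255))
```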

Diagnosis Parameter Extraction by Correlation Analysis of Blood Pressure (BP) and Head Blood Pressure (HBP), and Development of a Multi-Function Automatic Blood Pressure Monitor (상완혈압과 두부혈압의 상관성 분석에 의한 진단요소 추출과 다기능 전자혈압계의 개발)

  • 이용흠;고수복;정동명
    • Journal of the Institute of Electronics Engineers of Korea SC / v.40 no.6 / pp.58-67 / 2003
  • Many adult diseases (cerebral apoplexy, athymiait, etc.) result from hypertension, blood circulation disturbance, and increased HBP. For the early diagnosis of these diseases, MRI, X-ray, and PET have been used, aiming at treatment rather than prevention. Since cerebral apoplexy and athymiait occur in both healthy and unhealthy persons, it is very important to measure HBP, which is connected with the state of cerebral blood flow. HBP carries more diagnostic elements than BP, so hypertension can be diagnosed accurately by measuring HBP. However, existing sphygmomanometers and automatic BP monitors cannot measure HBP, nor can they perform complex functions (measuring both BP and HBP, improving blood flow). The purpose of this paper is to develop a system and algorithm that can measure both BP and HBP for accurate diagnosis. We also extracted diagnostic factors through a correlation analysis of BP and HBP: the maximum pressure of HBP corresponds to 62% of that of BP, and the minimum pressure of HBP corresponds to 46% of that of BP. Based on these results, we developed a multi-function automatic blood pressure monitor which can measure both BP and HBP and improve the cerebral blood flow state.
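The reported ratios can be reproduced in form (not in data) with a simple least-squares fit through the origin. The measurement values below are hypothetical placeholders, not the paper's data.

```python
import numpy as np

# Hypothetical paired systolic readings: brachial BP and head BP (mmHg).
bp_sys  = np.array([120.0, 135.0, 150.0, 110.0, 142.0])
hbp_sys = np.array([ 74.0,  84.0,  93.0,  68.0,  88.0])

# Least-squares ratio through the origin: hbp ≈ r * bp.
r = float(np.dot(bp_sys, hbp_sys) / np.dot(bp_sys, bp_sys))
corr = float(np.corrcoef(bp_sys, hbp_sys)[0, 1])   # Pearson correlation

print(f"estimated HBP/BP ratio: {r:.2f} (paper reports ~0.62 for maximum pressure)")
print(f"correlation: {corr:.2f}")
```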

Automatic Generation of Snort Content Rule for Network Traffic Analysis (네트워크 트래픽 분석을 위한 Snort Content 규칙 자동 생성)

  • Shim, Kyu-Seok;Yoon, Sung-Ho;Lee, Su-Kang;Kim, Sung-Min;Jung, Woo-Suk;Kim, Myung-Sup
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.4 / pp.666-677 / 2015
  • The importance of application traffic analysis for efficient network management has been emphasized continuously. Snort is a popular traffic analysis system which detects traffic matching pre-defined signatures and performs various actions based on its rules. However, it is very difficult to obtain highly accurate signatures that meet various analysis purposes, because searching the entire traffic data manually or semi-automatically is tedious and time-consuming. In this paper, we propose a novel method to generate signatures in a fully automatic manner, in the form of Snort rules, from raw packet data captured from a network link or end-host. We use a sequence pattern algorithm to generate common substrings that satisfy a minimum support over the traffic flow data. We also extract the location and header information of the signature, which are components of the Snort content rule. When we applied the proposed method to several application traffic datasets, the generated rules detected more than 97 percent of the traffic data.
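A toy version of the common-substring step might look like the following: find a substring shared by at least a minimum fraction of flows and wrap it in a Snort content rule. The support threshold, length bounds, and rule fields are illustrative assumptions, not the paper's settings.

```python
from collections import Counter

def common_substrings(payloads, min_support=0.9, min_len=4, max_len=16):
    """Find the longest substring shared by at least `min_support` of
    the flows, as a candidate `content` pattern for a Snort rule."""
    threshold = int(len(payloads) * min_support)
    for length in range(max_len, min_len - 1, -1):
        counts = Counter()
        for p in payloads:
            # Count each substring at most once per flow (flow support).
            counts.update({p[i:i + length] for i in range(len(p) - length + 1)})
        frequent = [s for s, c in counts.items() if c >= threshold]
        if frequent:
            return max(frequent, key=len)
    return b""

def to_snort_rule(pattern, sid=1000001):
    """Render a minimal Snort rule; offset/depth tuning is omitted."""
    content = pattern.decode("latin-1")
    return (f'alert tcp any any -> any any (msg:"auto-generated"; '
            f'content:"{content}"; sid:{sid};)')

payloads = [b"GET /app HTTP/1.1", b"GET /img HTTP/1.1", b"GET /css HTTP/1.1"]
print(to_snort_rule(common_substrings(payloads)))   # shared " HTTP/1.1" tail
```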

A Study of the Automatic Extraction of Hypernyms and Hyponyms from the Corpus (코퍼스를 이용한 상하위어 추출 연구)

  • Pang, Chan-Seong;Lee, Hae-Yun
    • Korean Journal of Cognitive Science / v.19 no.2 / pp.143-161 / 2008
  • The goal of this paper is to extract the hyponymy relation between words in a corpus. Adopting the basic algorithm of Hearst (1992), I propose a method for the pattern-based extraction of semantic relations from the corpus. To this end, I set up a list of hypernym-hyponym pairs from the Sejong Electronic Dictionary. This list is supplemented with the superordinate-subordinate terms of CoreNet. Then, I extracted all the sentences from the corpus that include hypernym-hyponym pairs from the list. From these extracted sentences, I collected all the sentences that contain meaningful constructions occurring systematically in the corpus. As a result, we obtained 21 generalized patterns. Using a Perl program, we collected sentences for each of the 21 patterns; 57% of the sentences turned out to have a hyponymy relation. The method proposed in this paper is simpler and more advanced than that of Cederberg and Widdows (2003), in that using a wordnet or an electronic dictionary is generally considered efficient for information retrieval. The patterns extracted by this method are helpful when we look for appropriate documents during information retrieval, and they can be used to expand concept networks such as ontologies and thesauruses. However, the word order of Korean is relatively free, and it is difficult to capture its various expressions with a fixed pattern. In the future, we should investigate more semantic relations beyond hyponymy, so that we can extract more varied patterns from the corpus.
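The paper's 21 patterns are Korean, but an English Hearst-style analogue of the pattern-matching step might look like this; the two patterns and the example sentence are illustrative, not taken from the paper.

```python
import re

# English analogues of Hearst (1992)-style lexico-syntactic patterns.
PATTERNS = [
    re.compile(r"(\w+(?: \w+)?) such as ((?:\w+(?:, | and )?)+)"),
    re.compile(r"(\w+(?: \w+)?) including ((?:\w+(?:, | and )?)+)"),
]

def extract_hyponym_pairs(text):
    """Return (hypernym, hyponym) pairs matched by the patterns."""
    pairs = []
    for pattern in PATTERNS:
        for hyper, hypo_list in pattern.findall(text):
            # The second group is a coordinated list of hyponyms.
            for hypo in re.split(r",\s*|\s+and\s+", hypo_list):
                if hypo:
                    pairs.append((hyper.strip(), hypo.strip()))
    return pairs

text = "The site sells musical instruments such as guitars, pianos and drums."
print(extract_hyponym_pairs(text))
# [('musical instruments', 'guitars'), ('musical instruments', 'pianos'),
#  ('musical instruments', 'drums')]
```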


Comparative Study of GDPA and Hough Transformation for Linear Feature Extraction using Space-borne Imagery (위성 영상정보를 이용한 선형 지형지물 추출에서의 GDPA와 Hough 변환 처리결과 비교연구)

  • Lee Kiwon;Ryu Hee-Young;Kwon Byung-Doo
    • Korean Journal of Remote Sensing / v.20 no.4 / pp.261-274 / 2004
  • Feature extraction using remotely sensed imagery has been recognized as one of the important tasks in remote sensing applications. As high-resolution imagery is widely used for engineering purposes, the need for more accurate feature information is also increasing. In particular, for the automatic extraction of linear features such as roads from mid- or low-resolution imagery, several techniques have been developed and applied. However, quantitative comparative analyses of these techniques, and case studies on high-resolution imagery, are rare. In this study, we implemented a computer program to run and compare the GDPA (Gradient Direction Profile Analysis) algorithm and the Hough transform. The results of applying the two techniques to several images were compared with the road centerline and boundary layers of a digital map. For quantitative comparison, a ranking method based on commission and omission errors was used. As a result, the Hough transform showed higher accuracy, by over 20% on average. As for execution speed, GDPA holds the main advantage over the Hough transform. However, the difference in accuracy between GDPA and the Hough transform was not remarkable when noise removal was applied to the GDPA result. In conclusion, GDPA is expected to be more advantageous than the Hough transform on the application side.
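The Hough side of the comparison reduces to edge detection plus line-segment extraction. A minimal sketch follows, with assumed Canny and Hough parameters (the paper does not report its settings).

```python
import cv2
import numpy as np

def extract_linear_features(image, canny_low=50, canny_high=150):
    """Detect line segments (e.g. road candidates) on a Canny edge map
    using the probabilistic Hough transform."""
    edges = cv2.Canny(image, canny_low, canny_high)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=80, minLineLength=40, maxLineGap=5)
    # Each entry is an (x1, y1, x2, y2) segment in pixel coordinates.
    return [] if lines is None else [tuple(l[0]) for l in lines]
```

Evaluation in the style of the study would then rasterize these segments and compare them against the digital map's road centerline layer to count commission and omission errors.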

Automated Analyses of Ground-Penetrating Radar Images to Determine Spatial Distribution of Buried Cultural Heritage (매장 문화재 공간 분포 결정을 위한 지하투과레이더 영상 분석 자동화 기법 탐색)

  • Kwon, Moonhee;Kim, Seung-Sep
    • Economic and Environmental Geology / v.55 no.5 / pp.551-561 / 2022
  • Geophysical exploration methods are very useful for generating high-resolution images of underground structures, and they can be applied to the investigation of buried cultural properties and the determination of their exact locations. In this study, image feature extraction and image segmentation methods were applied to automatically distinguish the structures of buried relics in high-resolution ground-penetrating radar (GPR) images obtained at the center of the Silla Kingdom, Gyeongju, South Korea. The major purpose of the image feature extraction analyses is to identify the circular features from building remains and the linear features from ancient roads and fences. Feature extraction is implemented by applying the Canny edge detection and Hough transform algorithms. We applied the Hough transform to the edge image resulting from the Canny algorithm in order to determine the locations of the target features; however, the Hough transform requires different parameter settings for each survey sector. For image segmentation, we applied the connected-component labeling algorithm and object-based image analysis using the Orfeo Toolbox (OTB) in QGIS. The connected-component labeled image shows that the signals associated with the target buried relics are effectively connected and labeled, although we often find multiple labels assigned to a single structure in the given GPR data. Object-based image analysis was conducted using Large-Scale Mean-Shift (LSMS) image segmentation. In this analysis, a vector layer containing pixel values for each segmented polygon was estimated first and then used to build a train-validation dataset by assigning the polygons to one class associated with the buried relics and another class for the background field. With a Random Forest classifier, we find that the polygons of the LSMS segmentation layer can be successfully classified into polygons of buried relics and polygons of background. Thus, we propose that the automatic classification methods applied to the GPR images of buried cultural heritage in this study can be useful for obtaining consistent analysis results for planning excavation processes.
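The connected-component labeling step can be sketched as below. The threshold rule (mean + k·std) and minimum region size are assumptions, since the abstract does not give the actual criteria.

```python
import numpy as np
from scipy import ndimage

def label_gpr_anomalies(gpr_slice, k=2.0, min_pixels=25):
    """Threshold a GPR depth-slice at mean + k*std of the amplitude and
    label the strong reflections as connected components."""
    threshold = gpr_slice.mean() + k * gpr_slice.std()
    mask = np.abs(gpr_slice) > threshold
    labeled, n = ndimage.label(mask)
    # Drop components smaller than `min_pixels` (likely noise), which
    # also mitigates spurious labels around a single structure.
    sizes = ndimage.sum(mask, labeled, range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_pixels]
    return np.where(np.isin(labeled, keep), labeled, 0)
```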

Export Control System based on Case Based Reasoning: Design and Evaluation (사례 기반 지능형 수출통제 시스템 : 설계와 평가)

  • Hong, Woneui;Kim, Uihyun;Cho, Sinhee;Kim, Sansung;Yi, Mun Yong;Shin, Donghoon
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.109-131 / 2014
  • As the demand for nuclear power plant equipment grows continuously worldwide, the importance of handling nuclear strategic materials is also increasing. While the number of cases submitted for the export of nuclear-power commodities and technology is increasing dramatically, the preadjudication (or prescreening, for short) of strategic materials has so far been done by experts with long experience and extensive field knowledge. However, there is a severe shortage of experts in this domain, not to mention that it takes a long time to develop an expert. Because human experts must manually evaluate all the documents submitted for export permission, the current practice of nuclear material export control is neither time-efficient nor cost-effective. To alleviate the problem of relying solely on costly human experts, our research proposes a new system designed to help field experts make their decisions more effectively and efficiently. The proposed system is built upon case-based reasoning, which in essence extracts key features from existing cases, compares these features with the features of a new case, and derives a solution for the new case by referencing similar cases and their solutions. Our research proposes a framework for a case-based reasoning system, designs a case-based reasoning system for the control of nuclear material exports, and evaluates the performance of alternative keyword extraction methods (fully automatic, fully manual, and semi-automatic). A keyword extraction method is an essential component of the case-based reasoning system, as it is used to extract the key features of the cases. The fully automatic method was implemented using TF-IDF, a widely used de facto standard method for representative keyword extraction in text mining. TF (Term Frequency) is based on the frequency count of a term within a document, showing how important the term is within that document, while IDF (Inverse Document Frequency) is based on the infrequency of the term across the document set, showing how uniquely the term represents the document. The results show that the semi-automatic approach, which is based on the collaboration of machine and human, is the most effective solution regardless of whether the human is a field expert or a student majoring in nuclear engineering. Moreover, we propose a new approach to computing nuclear document similarity along with a new framework for document analysis. The proposed algorithm considers both document-to-document similarity (α) and document-to-nuclear-system similarity (β) in order to derive the final score (γ) used to decide whether the presented case concerns strategic material. The final score (γ) represents the document similarity between the past cases and the new case; it is induced not only by exploiting conventional TF-IDF but also by utilizing a nuclear system similarity score, which takes the context of the nuclear system domain into account. Finally, the system retrieves the top-3 documents stored in the case base that are considered the most similar to the new case, and provides them together with a degree of credibility. With this final score and the credibility score, it becomes easier for a user to see which documents in the case base are worth looking up, so that the user can make a proper decision at relatively lower cost. The evaluation of the system was conducted by developing a prototype and testing it with field data. The system workflows and outcomes were verified by field experts. This research is expected to contribute to the growth of the knowledge service industry by proposing a new system that can effectively reduce the burden of relying on costly human experts for the export control of nuclear materials, and that can be considered a meaningful example of a knowledge service application.
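A minimal sketch of the retrieval step: TF-IDF cosine similarity for the document-to-document term (α), combined with a precomputed system-similarity term (β) into a final score (γ), returning the top-3 cases. The linear combination and its weight are assumptions, since the paper does not fully specify how α and β are combined.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_similar_cases(case_texts, new_text, system_scores,
                           weight=0.5, top_k=3):
    """Score past cases against a new case and return the top-k.
    `system_scores` holds one document-to-nuclear-system similarity
    (beta) per past case; `weight` is an assumed mixing parameter."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(case_texts + [new_text])
    # alpha: TF-IDF cosine similarity of the new case to each past case.
    alpha = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    # gamma: assumed linear blend of alpha and beta.
    gamma = weight * alpha + (1 - weight) * np.asarray(system_scores)
    top = np.argsort(gamma)[::-1][:top_k]
    return [(int(i), float(gamma[i])) for i in top]
```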

Rule Acquisition Using Ontology Based on Graph Search (그래프 탐색을 이용한 웹으로부터의 온톨로지 기반 규칙습득)

  • Park, Sangun;Lee, Jae Kyu;Kang, Juyoung
    • Journal of Intelligence and Information Systems / v.12 no.3 / pp.95-110 / 2006
  • To enhance the rule-based reasoning capability of the Semantic Web, the XRML (eXtensible Rule Markup Language) approach embraces the meta-information necessary for the extraction of explicit rules from Web pages and their maintenance. To effectuate the automatic identification of rules from unstructured texts, this research develops a framework that uses a rule ontology. The ontology can first be acquired from a similar site and can then be used for multiple sites in the same domain. The procedure of ontology-based rule identification is regarded as a graph search problem with incomplete nodes, and an A* algorithm is devised to solve the problem. The procedure is demonstrated in the domain of a shipping-rate and return-policy comparison portal, which needs rule-based reasoning capability to answer customers' inquiries. An example ontology is created from Amazon.com and applied to many online retailers in the same domain. The experimental results show the high performance of this approach.
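The A* search over the ontology graph can be sketched generically as below. The graph encoding and heuristic are illustrative stand-ins; the paper's cost model for incomplete nodes is not given here.

```python
import heapq

def a_star(graph, start, goal, heuristic):
    """Generic A* search. `graph` maps a node to [(neighbor, cost), ...]
    and `heuristic(node)` estimates the remaining cost to `goal`."""
    frontier = [(heuristic(start), 0, start, [start])]
    visited = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost          # cheapest rule-identification path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, step in graph.get(node, []):
            if neighbor not in visited:
                g = cost + step
                heapq.heappush(frontier,
                               (g + heuristic(neighbor), g, neighbor,
                                path + [neighbor]))
    return None, float("inf")          # goal unreachable

# Toy usage: nodes stand in for ontology concepts matched in a page.
graph = {"rule": [("condition", 1), ("action", 2)],
         "condition": [("action", 1)]}
print(a_star(graph, "rule", "action", heuristic=lambda n: 0))
```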
