• Title/Summary/Keyword: automatic processing


An Accuracy Evaluation of Algorithm for Shoreline Change by using RTK-GPS (RTK-GPS를 이용한 해안선 변화 자동추출 알고리즘의 정확도 평가)

  • Lee, Jae One;Kim, Yong Suk;Lee, In Su
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.32 no.1D
    • /
    • pp.81-88
    • /
    • 2012
  • This research was carried out in two parts, field surveying and data processing, in order to analyze changing patterns of a shoreline. Firstly, shoreline information measured by precise GPS positioning over a long duration was collected. Secondly, an algorithm for automatic boundary detection of shoreline change from multi-image data was developed, and a comparative analysis was then conducted. Haeundae beach, one of the most famous beaches in Korea, was selected as the test site. RTK-GPS surveying was performed eight times in total from September 2005 to September 2009, and field tests by aerial LiDAR were conducted twice, in December 2006 and March 2009. The results estimated from the two sensors differ slightly: the average shoreline length analyzed by RTK-GPS is approximately 1,364.6 m, while that from aerial LiDAR is about 1,402.5 m. In this investigation, the shoreline detection algorithm was implemented in Visual C++ with MFC (Microsoft Foundation Class). The length estimated from aerial photos and satellite imagery was 1,391.0 m. The reliability of the automatic boundary detection was 98.1% when compared with the real surveying data.
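The shoreline-length comparison above reduces to summing segment distances along an ordered sequence of surveyed points. A minimal sketch of that step, assuming planar (easting, northing) coordinates; the function name and sample points are illustrative, not from the paper:

```python
import math

def shoreline_length(points):
    """Total length of a polyline through ordered (easting, northing) points."""
    return sum(
        math.hypot(x2 - x1, y2 - y1)
        for (x1, y1), (x2, y2) in zip(points, points[1:])
    )

# Example: a 3-4-5 leg followed by a 6 m straight run.
pts = [(0.0, 0.0), (3.0, 4.0), (3.0, 10.0)]
length = shoreline_length(pts)  # 5.0 + 6.0 = 11.0
```

In practice the surveyed shoreline would be a dense RTK-GPS point sequence, and the same summation yields the averaged lengths reported above.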

Eye Region Detection Method in Rotated Face using Global Orientation Information (전역적인 에지 오리엔테이션 정보를 이용한 기울어진 얼굴 영상에서의 눈 영역 추출)

  • Jang, Chang-Hyuk;Park, An-Jin;Kurata Takeshi;Jain Anil K.;Park, Se-Hyun;Kim, Eun-Yi;Yang, Jong-Yeol;Jung, Kee-Chul
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.11 no.4
    • /
    • pp.82-92
    • /
    • 2006
  • In the field of image recognition, research on face recognition has recently attracted much attention. The most important step in face recognition is automatic eye detection, studied as a prerequisite stage. Existing eye detection methods, which focus on frontal faces, can be classified into two main categories: active infrared (IR)-based approaches and image-based approaches. This paper proposes an eye region detection method for non-frontal (rotated) faces. The proposed method builds on the edge-based method, which shows the fastest computation time. To extract the eye region in non-frontal faces, the method uses an edge orientation histogram of the global face region. Problems caused by noise and unfavorable ambient light are handled by using the width-to-height proportion as local information and the relationship between components as global information within the approximately extracted region. In experiments, the proposed method improved precision rates by solving three problems caused by edge information, achieving a detection accuracy of 83.5% and a computation time of 0.5 s per face image on 300 face images provided by the Weizmann Institute of Science.
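The global edge orientation histogram the method relies on can be sketched as follows: gradients are taken with central differences and their orientations, folded to [0, π), are binned; the dominant bin indicates the face's tilt. This is a generic reconstruction under those assumptions, not the authors' code, and the bin count and test image are illustrative:

```python
import math

def orientation_histogram(img, bins=8):
    """Histogram of gradient orientations (folded to [0, pi)) over interior pixels."""
    h, w = len(img), len(img[0])
    hist = [0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0  # central differences
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            if gx == 0 and gy == 0:
                continue  # flat region: no edge orientation to vote
            angle = math.atan2(gy, gx) % math.pi  # fold opposite gradients together
            hist[min(int(angle / math.pi * bins), bins - 1)] += 1
    return hist

# A vertical edge: every gradient lies along the x-axis, so bin 0 dominates.
img = [[1.0 if x < 5 else 0.0 for x in range(10)] for _ in range(10)]
hist = orientation_histogram(img)
```

For a rotated face, the dominant bin shifts accordingly, which lets the detector normalize the tilt before searching for the eye region.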


Automatic Text Categorization Using Passage-based Weight Function and Passage Type (문단 단위 가중치 함수와 문단 타입을 이용한 문서 범주화)

  • Joo, Won-Kyun;Kim, Jin-Suk;Choi, Ki-Seok
    • The KIPS Transactions:PartB
    • /
    • v.12B no.6 s.102
    • /
    • pp.703-714
    • /
    • 2005
  • Research in text categorization has been confined to whole-document-level classification, probably due to the lack of full-text test collections. However, the full-length documents available today in large quantities pose renewed interest in text classification. A document is usually written in an organized structure to present its main topic(s). This structure can be expressed as a sequence of sub-topic text blocks, or passages. To reflect the sub-topic structure of a document, we propose a new passage-level (passage-based) text categorization model, which segments a test document into several passages, assigns categories to each passage, and merges the passage categories into document categories. Compared with traditional document-level categorization, two additional steps, passage splitting and category merging, are required in this model. Using four subsets of the Reuters text categorization test collection and a full-text test collection whose documents vary from tens to hundreds of kilobytes, we evaluated the proposed model, especially the effectiveness of various passage types and the importance of passage location in category merging. Our results show that simple windows are best for all test collections in these experiments. We also found that passages contribute to the main topic(s) to different degrees, depending on their location in the test document.
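The two extra steps, passage splitting and category merging, can be sketched with fixed word windows (the "simple windows" found best above) and keyword scoring. The categories, keywords, and location weight here are illustrative assumptions, not the paper's trained classifier:

```python
def split_passages(words, window=4):
    """Split a token list into fixed-size 'simple window' passages."""
    return [words[i:i + window] for i in range(0, len(words), window)]

def classify(text, categories, window=4):
    """Score each passage per category, then merge passage scores per document."""
    words = text.lower().split()
    scores = {cat: 0.0 for cat in categories}
    for idx, passage in enumerate(split_passages(words, window)):
        for cat, keywords in categories.items():
            hits = sum(1 for w in passage if w in keywords)
            # Illustrative location weight: earlier passages count slightly more.
            scores[cat] += hits / (1 + 0.1 * idx)
    return max(scores, key=scores.get)

categories = {
    "sports": {"match", "goal", "team", "player"},
    "finance": {"stock", "market", "bank", "profit"},
}
doc = "the team scored a late goal and the player won the match for the team"
label = classify(doc, categories)  # "sports"
```

A real implementation would replace the keyword sets with a learned per-passage classifier; the merging step, weighting passages by location, is the part the paper evaluates.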

A Study of Statistical Learning as a CRM's Classifier Functions (CRM의 기능 분류를 위한 통계적 학습에 관한 연구)

  • Jang, Geun;Lee, Jung-Bae;Lee, Byung-Soo
    • The KIPS Transactions:PartB
    • /
    • v.11B no.1
    • /
    • pp.71-76
    • /
    • 2004
  • Recent ERP and CRM systems have mostly focused on conventional functions. However, the business environment has changed with the rapid progress of the internet and e-commerce: business is becoming e-business, spreading through the development of relationships with cooperating companies and with customers, and through competitiveness strengthened by improving business processes within the organization. CRM (customer relationship management) is a marketing process that forms, manages, and intensifies the relationship between customers and the company, in order to manage acquired customers and increase their worth to the company. It requires a system base that analyzes customer information, since it operates on various information about customers and is linked to business areas such as production, marketing, and decision making. As ERP extends its functions to SCM, CRM, and SEM (Strategic Enterprise Management), twenty-first-century ERP is developing into a strategic tool for e-business and, toward this end, will subdivide the functions of CRM effectively through analogical study of data. In addition, to automate the file classification work that the user previously had to perform manually, an agent based on machine learning is proposed as a system feature, allowing the work to be accomplished more efficiently.

Tracking and Interpretation of Moving Object in MPEG-2 Compressed Domain (MPEG-2 압축 영역에서 움직이는 객체의 추적 및 해석)

  • Mun, Su-Jeong;Ryu, Woon-Young;Kim, Joon-Cheol;Lee, Joon-Hoan
    • The KIPS Transactions:PartB
    • /
    • v.11B no.1
    • /
    • pp.27-34
    • /
    • 2004
  • This paper proposes a method to track and interpret a moving object based on information obtained directly from an MPEG-2 compressed video stream, without a decoding process. In the proposed method, the motion flow is constructed from the motion vectors included in the compressed video. The amounts of pan, tilt, and zoom associated with camera operations are calculated using the generalized Hough transform. The local object motion can then be extracted from the motion flow after compensating with the global camera motion parameters. Initially, the moving object to be tracked is designated by the user via a bounding box; thereafter, automatic tracking is performed based on motion flows accumulated according to area contributions. Also, to reduce cumulative tracking error, the object area is reshaped in the first I-frame of each GOP by matching DCT coefficients. The proposed method improves computation speed because the information is obtained directly from the MPEG-2 compressed video, but the object boundary is limited to macro-blocks rather than pixels. Accordingly, the proposed method is suited to approximate object tracking rather than accurate tracing, because of the limited information available in the compressed video data.
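The global/local motion separation can be sketched with a Hough-style accumulator: every macroblock motion vector votes for a translation, the peak is taken as the global (pan) component, and local object motion is the residual after compensation. This sketch covers translation only (the paper also estimates tilt and zoom), and the vectors are synthetic:

```python
from collections import Counter

def estimate_global_motion(vectors):
    """Hough-style voting: the most frequent motion vector is taken as the camera pan."""
    votes = Counter(vectors)
    return votes.most_common(1)[0][0]

def local_motion(vectors, global_mv):
    """Compensate each macroblock vector by the global motion to expose object motion."""
    gx, gy = global_mv
    return [(vx - gx, vy - gy) for vx, vy in vectors]

# Synthetic frame: most blocks follow a pan of (2, 0); one object moves differently.
mvs = [(2, 0)] * 12 + [(5, 1)] * 3
pan = estimate_global_motion(mvs)   # (2, 0)
residual = local_motion(mvs, pan)   # object blocks stand out as (3, 1)
```

After compensation, the background blocks have near-zero residual motion, so the accumulated nonzero residuals trace the moving object.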

A Region-based Comparison Algorithm of k sets of Trapezoids (k 사다리꼴 셋의 영역 중심 비교 알고리즘)

  • Jung, Hae-Jae
    • The KIPS Transactions:PartA
    • /
    • v.10A no.6
    • /
    • pp.665-670
    • /
    • 2003
  • In applications such as automatic mask generation for semiconductor production, a drawing consists of many polygons that are partitioned into trapezoids. Adding a polygon to, or deleting one from, the drawing is performed through geometric operations such as insertion, deletion, and search of trapezoids. Depending on the partitioning algorithm used, a polygon can be partitioned differently in terms of shape, size, and so on. It is therefore necessary to devise an algorithm that compares sets of trapezoids in which each set represents the regions of interest of a drawing. Such a comparison algorithm may, for example, be used to verify a software program handling geometric objects composed of trapezoids. In this paper, given k sets of trapezoids in which each set forms the regions of interest of a drawing, we present how to compare the k sets to determine whether they all represent the same geometric scene. When each input set has the same number n of trapezoids, the proposed algorithm has O(2^(k-2) n^2 (log n + k)) time complexity. It is also shown that the proposed algorithm has the same time complexity, O(n^2 log n), as the sweeping-based algorithm when the number k (<< n) of input sets is small. Furthermore, the proposed algorithm can be kn times faster than the sweeping-based algorithm when all the trapezoids in the k input sets are almost the same.
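A brute-force way to check whether sets of trapezoids cover the same region, far slower than the paper's algorithm but usable as a verification oracle, is to rasterize each set onto a sample grid and compare coverage. The grid resolution and example trapezoids below are illustrative:

```python
def point_in_poly(x, y, poly):
    """Ray-casting point-in-polygon test for a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray at height y
            if x1 + (y - y1) * (x2 - x1) / (y2 - y1) > x:
                inside = not inside
    return inside

def coverage(trapezoids, width, height, step=0.5):
    """Boolean coverage sampled at cell centers over a width x height extent."""
    grid = []
    y = step / 2
    while y < height:
        x = step / 2
        while x < width:
            grid.append(any(point_in_poly(x, y, t) for t in trapezoids))
            x += step
        y += step
    return grid

def same_region(sets, width, height):
    """True if every trapezoid set covers the same sampled region."""
    grids = [coverage(s, width, height) for s in sets]
    return all(g == grids[0] for g in grids[1:])

# A 4x2 rectangle, once as a single trapezoid and once split into two.
whole = [[(0, 0), (4, 0), (4, 2), (0, 2)]]
split = [[(0, 0), (2, 0), (2, 2), (0, 2)], [(2, 0), (4, 0), (4, 2), (2, 2)]]
```

Sampling at cell centers avoids ambiguous boundary hits; the paper's algorithm achieves the same equivalence test exactly and asymptotically faster.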

Automatic Method for Extracting Homogeneity Threshold and Segmenting Homogeneous Regions in Image (영상의 동질성 문턱 값 추출과 영역 분할 자동화 방법)

  • Han, Gi-Tae
    • The KIPS Transactions:PartB
    • /
    • v.17B no.5
    • /
    • pp.363-374
    • /
    • 2010
  • In this paper, we propose a method for extracting a homogeneity threshold (H_T) and for segmenting homogeneous regions by USRG (Unseeded Region Growing) with H_T. The H_T is a criterion for judging homogeneity between neighboring pixels and is computed automatically from the original image by the proposed method. The theoretical background of the proposed method is Otsu's single-level threshold method, which is used to divide a small local part of the original image into two classes; the sum (σ_c) of the standard deviations of the two classes, satisfying special conditions for distinguishing them as different regions, is used to compute H_T. To validate the proposed method, we compare the original image with the image regenerated using only the segmented homogeneous regions, show that no difference between the two images is visible, and present the steps for regenerating the image in order of the size of the segmented homogeneous regions and in order of the intensity of the included pixels. We also demonstrate the validity of the proposed method with various segmentation results obtained using homogeneity thresholds (H*_T) to which a coefficient α is added to adjust the scope of H_T. We expect that the proposed method can be applied in various fields, such as visualization and animation of natural images, anatomy, and biology.
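The single-level Otsu threshold underpinning the H_T computation can be sketched as maximizing between-class variance over a grey-level histogram. This is the standard Otsu method only, not the authors' extension to local parts and σ_c:

```python
def otsu_threshold(hist):
    """Grey level t maximizing between-class variance for the split [0, t) vs [t, levels)."""
    total = sum(hist)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0      # pixel count below threshold
    sum0 = 0.0  # intensity mass below threshold
    for t in range(1, len(hist)):
        w0 += hist[t - 1]
        sum0 += (t - 1) * hist[t - 1]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue  # one class empty: split undefined
        m0 = sum0 / w0
        m1 = (total_sum - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2  # between-class variance (scaled)
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal histogram: peaks at grey levels 10 and 200.
hist = [0] * 256
hist[10], hist[200] = 50, 50
t = otsu_threshold(hist)  # first maximizing split, t = 11
```

The paper applies this split to small local windows and derives H_T from the class standard deviations rather than using the threshold directly.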

The Cost and Adjustment Factors Estimation Method from the Perspective of Provider for Information System Maintenance Cost (공급자 관점의 정보시스템 유지보수 비용항목과 조정계수 산정방안)

  • Lee, ByoungChol;Rhew, SungYul
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.11
    • /
    • pp.757-764
    • /
    • 2013
  • The estimation of information system maintenance costs has so far been conducted from the perspective of the ordering body, so the problem of the provider having to absorb costs, because payments are small relative to the amount of work, remains unsolved. This study is a base study for estimating information system maintenance costs from the provider's perspective: it derives maintenance cost items and suggests adjustment factors to close the gap between the ordering body and the provider regarding maintenance costs. To derive the cost items, this study adds the provider's maintenance activities to existing base studies of cost factors for maintenance activities, then divides and classifies them into fixed and variable costs. To adjust the gap between the ordering body and the provider, this study identifies adjustment factors, such as code, utilities, and components created by automatic tools, that were not included when estimating maintenance costs from the ordering body's perspective. Examination and analysis of Company K's maintenance performance data over three years confirmed that the gap attributable to the adjustment factors was about 13% in that case.

Ontology Modeling and Rule-based Reasoning for Automatic Classification of Personal Media (미디어 영상 자동 분류를 위한 온톨로지 모델링 및 규칙 기반 추론)

  • Park, Hyun-Kyu;So, Chi-Seung;Park, Young-Tack
    • Journal of KIISE
    • /
    • v.43 no.3
    • /
    • pp.370-379
    • /
    • 2016
  • Recently, personal media have been produced in a variety of ways as smart devices have spread widely, and services using these data are in demand. Research on media analysis and recognition technology has therefore been conducted actively, making it possible to recognize meaningful objects in media. Existing systems using a media ontology have the disadvantage that they cannot classify the media appearing in a video, because they rely on the video title, tags, and script information. In this paper, we propose a system that automatically classifies videos using the objects shown in the media data. To do this, we use description-logic-based reasoning together with rule-based inference for event processing, where the order of events may vary. The description-logic-based reasoning system proposed in this paper represents the relations among objects in the media as an activity ontology. We also describe how a rule-based reasoning system defines an event according to the order of inferred activities, and how this order-based reasoning automatically classifies the event into the appropriate category. To evaluate the efficiency of the proposed approach, we conducted an experiment using media data classified into valid categories by analysis of YouTube videos.
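The order-sensitive, rule-based event classification can be sketched as matching ordered activity subsequences against event rules. The activities, rules, and category names below are invented for illustration; the paper performs this over an activity ontology with description-logic reasoning:

```python
def is_ordered_subsequence(pattern, activities):
    """True if the pattern's activities occur in `activities` in the same order."""
    it = iter(activities)
    return all(step in it for step in pattern)  # `in` consumes the iterator

def classify_event(activities, rules):
    """Return the first event whose rule pattern matches the activity order."""
    for pattern, event in rules:
        if is_ordered_subsequence(pattern, activities):
            return event
    return "unknown"

# Hypothetical rules: the same activities in a different order mean a different event.
rules = [
    (["person_holds_ball", "person_throws_ball", "dog_catches_ball"], "playing_fetch"),
    (["dog_catches_ball", "person_holds_ball"], "dog_returns_ball"),
]
video = ["person_holds_ball", "person_throws_ball", "dog_catches_ball"]
event = classify_event(video, rules)  # "playing_fetch"
```

Because matching respects order, the same set of recognized activities can map to different event categories, which is the behavior the paper's order-based reasoning targets.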

Development of Android Smartphone App for Corner Point Feature Extraction using Remote Sensing Image (위성영상정보 기반 코너 포인트 객체 추출 안드로이드 스마트폰 앱 개발)

  • Kang, Sang-Goo;Lee, Ki-Won
    • Korean Journal of Remote Sensing
    • /
    • v.27 no.1
    • /
    • pp.33-41
    • /
    • 2011
  • In information communication technology, the trend is clearly moving worldwide from the web to smartphone apps, driven by user demand and the developer environment, and the geo-spatial domain needs appropriate technological responses to this trend. However, most smartphone apps offer map services or location recognition services, and uses of geo-spatial contents remain somewhat limited or at the prototype stage. In this study, an app for extracting corner point features from geo-spatial imagery and linking them to a database system is developed. Corner extraction is based on the Harris algorithm, and all processing modules composing the app, in the database server, application server, and client interface, are designed and implemented with open source software. An LOD (Level of Detail) process is applied to the extracted corner points to optimize them for the display panel. A further useful function superimposes the geo-spatial imagery on the digital map of the same area. It is expected that this app can be utilized for the automatic establishment of POIs (Points of Interest) or for point-based land change detection.
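The Harris corner measure the app is based on can be sketched directly from its definition: gradient products are summed over a window and the response R = det(M) - k·trace(M)² is positive at corners, negative on edges, and near zero in flat regions. The image and parameters below are illustrative, not the app's implementation:

```python
def harris_response(img, y, x, k=0.04, win=1):
    """Harris corner response at (y, x): central-difference gradients,
    products summed over a (2*win+1)^2 window."""
    sxx = syy = sxy = 0.0
    for j in range(y - win, y + win + 1):
        for i in range(x - win, x + win + 1):
            ix = (img[j][i + 1] - img[j][i - 1]) / 2.0
            iy = (img[j + 1][i] - img[j - 1][i]) / 2.0
            sxx += ix * ix
            syy += iy * iy
            sxy += ix * iy
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

# A bright 5x5 block in a 10x10 image: its corner responds positively,
# a point on its edge responds negatively, and a flat region gives zero.
img = [[1.0 if y < 5 and x < 5 else 0.0 for x in range(10)] for y in range(10)]
r_corner = harris_response(img, 4, 4)
r_edge = harris_response(img, 2, 4)
r_flat = harris_response(img, 7, 7)
```

In the app, thresholding and non-maximum suppression over this response would yield the corner points that the LOD process then thins for display.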