• Title/Summary/Keyword: Semantic-Based Information Extraction

Extraction method of Stay Point using a Statistical Analysis (통계적 분석방법을 이용한 Stay Point 추출 연구)

  • Park, Jin Gwan; Oh, Soo Lyul
    • Smart Media Journal, v.5 no.4, pp.26-40, 2016
  • Since the advent of mobile devices, research on acquiring and analyzing user position data has grown steadily. Trajectory data mining extracts meaningful information from a user's GPS trajectory, and it must be preceded by a Stay Point extraction step. Conventional Stay Point extraction algorithms have low confidence because the user must set the threshold values arbitrarily, and they do not distinguish between staying indoors and outdoors, which increases positional ambiguity. In this paper we propose a Stay Point extraction method based on statistical analysis. The proposed algorithm improves position accuracy by extracting indoor and outdoor stay points using a Gaussian distribution, and it improves reliability because it does not depend on arbitrarily set thresholds.
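
The abstract replaces hand-tuned thresholds with statistics of the trajectory itself. A minimal sketch of that idea, assuming metre-space coordinates and a mean-plus-one-sigma spatial bound; the function shape and the remaining minimum-duration parameter are illustrative, not the authors' algorithm:

```python
import numpy as np

def extract_stay_points(points, times, min_stay_s=300):
    """Group consecutive GPS fixes into stay points. The spatial
    bound is derived from the Gaussian statistics of the step
    distances (mean + one standard deviation) rather than a
    hand-picked constant."""
    steps = np.linalg.norm(np.diff(points, axis=0), axis=1)
    bound = steps.mean() + steps.std()   # data-driven spatial bound
    stay_points, i = [], 0
    while i < len(points) - 1:
        j = i + 1
        while j < len(points) and np.linalg.norm(points[j] - points[i]) <= bound:
            j += 1
        if times[j - 1] - times[i] >= min_stay_s:
            stay_points.append(points[i:j].mean(axis=0))  # stay centroid
        i = j
    return stay_points
```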

Feature Generation of Dictionary for Named-Entity Recognition based on Machine Learning (기계학습 기반 개체명 인식을 위한 사전 자질 생성)

  • Kim, Jae-Hoon; Kim, Hyung-Chul; Choi, Yun-Soo
    • Journal of Information Management, v.41 no.2, pp.31-46, 2010
  • Named-entity recognition (NER), a component of information extraction, is now used in information retrieval as well as question-answering systems. Unlike ordinary words, named entities (NEs) are continually created and changed in documents on the Web, in newspapers, and so on. This steady generation of NEs causes an unknown-word problem and hampers many application systems that rely on NER. To alleviate this problem, this paper proposes a new feature generation method for machine-learning-based NER. In general, the features in machine-learning-based NER are associated with individual words, whereas the entries in named-entity dictionaries are phrases, so dictionary entries cannot be used directly as features of an NER system. This paper proposes an encoding scheme that converts phrase entries into word-level features. Furthermore, the same scheme allows entities carrying semantic information in WordNet to be converted into NER features. Our experiments show that the method increases F1 score by about 6% and reduces errors by about 38%.
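
The abstract does not spell out the encoding scheme, but a common way to convert phrase dictionary entries into word-level features is BIO-style tagging of longest dictionary matches. A hedged sketch; the tag names and the greedy matching strategy are assumptions:

```python
def dictionary_features(tokens, entity_dict):
    """Mark each token with a word-level feature derived from a
    phrase dictionary: B-DIC for the first word of a matched
    entry, I-DIC for the rest, O otherwise (longest match wins)."""
    feats = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        # try the longest phrase starting at position i
        for j in range(len(tokens), i, -1):
            if " ".join(tokens[i:j]) in entity_dict:
                feats[i] = "B-DIC"
                for k in range(i + 1, j):
                    feats[k] = "I-DIC"
                i = j
                break
        else:
            i += 1
    return feats

# e.g. dictionary_features("New York is big".split(), {"New York"})
# -> ['B-DIC', 'I-DIC', 'O', 'O']
```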

Major Character Extraction using Character-Net (Character-Net을 이용한 주요배역 추출)

  • Park, Seung-Bo; Kim, Yoo-Won; Jo, Geun-Sik
    • Journal of Internet Computing and Services, v.11 no.1, pp.85-102, 2010
  • In this paper, we propose Character-Net, a novel method for analyzing video and representing the relationships among characters based on their contexts in the video sequences. As a huge amount of video content is generated every day, technologies for searching and summarizing that content have become pressing concerns, and a number of studies on extracting semantic information from videos or scenes have been proposed. Stories in video, such as TV serials or commercial movies, generally progress through their characters, so summarizing a video requires identifying the characters and their contexts. To address these issues, we propose Character-Net, which supports the extraction of major characters from video. We first identify the characters appearing in a group of video shots and then extract the speaker and listeners in those shots. Finally, the characters are represented as a network of graphs capturing the relationships among them. We present empirical experiments that demonstrate Character-Net and evaluate its performance in extracting major characters.
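
A minimal sketch of the kind of speaker-listener network the abstract describes, using networkx and a toy utterance log; the shot data and the degree-centrality ranking are illustrative assumptions, not the paper's exact construction:

```python
import networkx as nx

# Each shot yields one speaker and the listeners present in it;
# the utterances below are illustrative placeholders.
shot_utterances = [
    ("Alice", ["Bob", "Carol"]),
    ("Bob", ["Alice"]),
    ("Alice", ["Bob"]),
    ("Carol", ["Alice", "Bob"]),
]

G = nx.DiGraph()
for speaker, listeners in shot_utterances:
    for listener in listeners:
        # accumulate how often the speaker addresses each listener
        w = G.get_edge_data(speaker, listener, {"weight": 0})["weight"]
        G.add_edge(speaker, listener, weight=w + 1)

# rank characters: the most-connected nodes are the major roles
ranking = sorted(G.degree(weight="weight"), key=lambda kv: -kv[1])
print(ranking)  # [('Alice', 5), ('Bob', 4), ('Carol', 3)]
```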

Fuzzy AHP and FCM-driven Hybrid Group Decision Support Mechanism (퍼지 AHP와 퍼지인식도 기반의 하이브리드 그룹 의사결정지원 메커니즘)

  • Kim, Jin-Sung; Lee, Kun-Chang
    • Proceedings of the Korea Society for Industrial Systems Conference, 2003.11a, pp.239-250, 2003
  • In this research, we propose a hybrid group decision support mechanism (H-GDSM) based on the Fuzzy AHP (Analytic Hierarchy Process) and FCM (Fuzzy Cognitive Map). AHP elicits a priority vector that captures the preferences of the decision makers; the vector is computed from pairwise comparison values over a set of objects, each value being a judgment expressed on a semantic scale. However, AHP cannot represent the causal relationships among the pieces of information used by decision makers. In contrast, an FCM can represent causal relationships among variables, and FCMs have been successfully developed and applied in several ill-structured domains such as strategic decision making, policy making, and simulation. Nonetheless, many researchers feed FCM simulations with subjective, ad hoc inputs, which draws criticism from practitioners. To overcome these limitations, we incorporate fuzzy membership functions, AHP, and FCM into a single H-GDSM. In contrast to existing AHP methods and FCMs, the H-GDSM can handle pairwise comparisons involving causal relationships in a group decision-making environment. The strengths and contributions of our mechanism are 1) handling of qualitative knowledge and causal relationships, 2) extraction of objective input values for simulating the FCM, and 3) multi-phase group decision support based on the H-GDSM. To validate the proposed mechanism, we developed a simple prototype system supporting negotiation-based decisions in electronic commerce (EC).
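
A minimal sketch of the coupling the abstract describes, where an AHP priority vector supplies the objective input of an FCM simulation; the geometric-mean estimator, the sigmoid activation, and all matrix values are illustrative assumptions:

```python
import numpy as np

def ahp_priority(pairwise):
    """Priority vector from a pairwise comparison matrix via the
    geometric-mean method (one of several standard AHP estimators)."""
    gm = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[0])
    return gm / gm.sum()

def fcm_simulate(weights, state, steps=20):
    """Iterate an FCM: new activation = sigmoid of weighted inputs.
    weights[i, j] is the causal influence of concept i on concept j."""
    for _ in range(steps):
        state = 1.0 / (1.0 + np.exp(-(state @ weights)))
    return state

# Hypothetical 3-criterion comparison matrix (reciprocal, 1-9 scale).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
priorities = ahp_priority(A)          # objective FCM input, not a guess
causal = np.array([[0.0, 0.6, 0.0],   # assumed causal-influence matrix
                   [0.0, 0.0, 0.7],
                   [0.3, 0.0, 0.0]])
print(fcm_simulate(causal, priorities))
```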

Rule Acquisition Using Ontology Based on Graph Search (그래프 탐색을 이용한 웹으로부터의 온톨로지 기반 규칙습득)

  • Park, Sangun; Lee, Jae Kyu; Kang, Juyoung
    • Journal of Intelligence and Information Systems, v.12 no.3, pp.95-110, 2006
  • To enhance the rule-based reasoning capability of the Semantic Web, the XRML (eXtensible Rule Markup Language) approach embraces the meta-information necessary for extracting explicit rules from Web pages and maintaining them. To enable the automatic identification of rules from unstructured texts, this research develops a framework that uses a rule ontology. The ontology can first be acquired from a similar site and then reused across multiple sites in the same domain. The procedure of ontology-based rule identification is cast as a graph search problem with incomplete nodes, and an A* algorithm is devised to solve it. The procedure is demonstrated in the domain of a comparison portal for shipping rates and return policies, which needs rule-based reasoning to answer customers' inquiries. An example ontology is created from Amazon.com and applied to many online retailers in the same domain. The experimental results show the high performance of this approach.
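
The abstract casts rule identification as graph search with incomplete nodes solved by A*; the ontology-specific costs and heuristic are not given, so the sketch below shows only the generic A* skeleton such a framework would plug into (neighbors and h are assumed callables):

```python
import heapq
from itertools import count

def a_star(start, goal, neighbors, h):
    """Generic A* search. neighbors(n) yields (next_node, edge_cost)
    pairs and h(n) is an admissible estimate of the remaining cost."""
    tie = count()                       # tiebreaker so nodes never compare
    frontier = [(h(start), 0.0, next(tie), start, [start])]
    best_g = {}
    while frontier:
        f, g, _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path                 # cheapest path found
        if best_g.get(node, float("inf")) <= g:
            continue                    # already expanded more cheaply
        best_g[node] = g
        for nxt, cost in neighbors(node):
            heapq.heappush(frontier,
                           (g + cost + h(nxt), g + cost, next(tie),
                            nxt, [*path, nxt]))
    return None                         # goal unreachable
```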

Video Scene Detection using Shot Clustering based on Visual Features (시각적 특징을 기반한 샷 클러스터링을 통한 비디오 씬 탐지 기법)

  • Shin, Dong-Wook; Kim, Tae-Hwan; Choi, Joong-Min
    • Journal of Intelligence and Information Systems, v.18 no.2, pp.47-60, 2012
  • Video data is unstructured and complex in form. As efficient management and retrieval of video data grow in importance, studies on video parsing based on the visual features of the content have been conducted to reconstruct video data into a meaningful structure. Early studies on video parsing focused on splitting video data into shots, but detecting shot boundaries, which are defined by physical boundaries, does not consider the semantic associations within video data. Recently, studies that use clustering methods to organize semantically associated shots into video scenes, defined by semantic boundaries, have progressed actively. Previous studies on video scene detection typically apply clustering algorithms with a similarity measure between shots that depends mainly on color features. However, correctly identifying a shot or scene and detecting gradual transitions such as dissolves, fades, and wipes is difficult, because the color features of video data are noisy and change abruptly when an unexpected object intervenes. In this paper, to solve these problems, we propose the Scene Detector using Color histogram, corner Edge, and Object color histogram (SDCEO), which detects video scenes by clustering similar shots that make up the same event based on visual features, namely the color histogram, the corner edge, and the object color histogram. SDCEO is noteworthy in that it uses the edge feature together with the color feature, and as a result it effectively detects gradual as well as abrupt transitions. SDCEO consists of the Shot Bound Identifier and the Video Scene Detector. The Shot Bound Identifier comprises a Color Histogram Analysis step and a Corner Edge Analysis step. In the Color Histogram Analysis step, SDCEO uses the color histogram feature to organize shot boundaries. The color histogram, which records the percentage of each quantized color among all pixels in a frame, was chosen for its good performance, as also reported in other work on content-based image and video analysis. To organize shot boundaries, SDCEO joins associated sequential frames into shots by measuring the similarity of the color histograms between frames. In the Corner Edge Analysis step, SDCEO identifies the final shot boundaries using the corner edge feature, comparing the corner edges of the last frame of the previous shot and the first frame of the next shot. In the Key-frame Extraction step, SDCEO compares each frame with every other frame in the shot, measures similarity with the Euclidean distance between histograms, and selects the frame most similar to all frames in the shot as the key-frame. The Video Scene Detector clusters associated shots that make up the same event using hierarchical agglomerative clustering based on visual features, namely the color histogram and the object color histogram. After detecting video scenes, SDCEO organizes the final scenes by repeated clustering until the similarity distance between shots falls below a threshold h.
In this paper, we construct a prototype of SDCEO and carry out experiments on manually constructed baseline data. The experimental results, a precision of 93.3% for shot boundary detection and 83.3% for video scene detection, are satisfactory.
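
A minimal sketch of the color-histogram half of the Shot Bound Identifier, assuming 8-bins-per-channel RGB quantization, Euclidean histogram distance, and a hand-picked cut-off, none of which the abstract fixes:

```python
import numpy as np

def color_histogram(frame, bins=8):
    """Quantized RGB histogram, normalized so each bin holds the
    fraction of pixels with that quantized color. frame is a
    (H, W, 3) uint8 array."""
    q = (frame // (256 // bins)).reshape(-1, 3)
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    hist = np.bincount(idx, minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def shot_boundaries(frames, cut=0.4):
    """Flag a boundary wherever consecutive frame histograms diverge;
    the Euclidean distance and the 0.4 cut-off are assumptions."""
    hists = [color_histogram(f) for f in frames]
    return [i for i in range(1, len(hists))
            if np.linalg.norm(hists[i] - hists[i - 1]) > cut]
```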

A Study of Relationship Derivation Technique using object extraction Technique (개체추출기법을 이용한 관계성 도출기법)

  • Kim, Jong-hee; Lee, Eun-seok; Kim, Jeong-su; Park, Jong-kook; Kim, Jong-bae
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2014.05a, pp.309-311, 2014
  • Despite increasing demand for big-data applications based on the analysis of scattered unstructured data, few relevant studies have been reported. Accordingly, the present study suggests a technique that enables sentence-based semantic analysis by extracting objects from collected web information and automatically analyzing the relationships between those objects using collective intelligence and language processing technology. Specifically, collected information is stored in a DBMS in structured form, and then morpheme and feature information is analyzed. The obtained morphemes are classified into objects of interest, marginal objects, and objects of non-interest. Then, with an inter-object attribute recognition technique, the relationships between objects are analyzed in terms of their degree, scope, and nature. The analysis of relevance between pieces of information is based on selected keywords and uses an inter-object relationship extraction technique that can determine positivity and negativity. The study also suggests a system design suited to real-time, large-capacity processing and applicable to high-value-added services.
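
A toy sketch of signed inter-object relationship extraction from sentence-level co-occurrence, which is one plausible reading of the technique the abstract outlines; the polarity lexicon, object list, and sentences are all illustrative assumptions:

```python
from collections import Counter
from itertools import combinations

# Toy sentences standing in for morpheme-analyzed web text.
sentences = [
    ["BrandA", "battery", "excellent"],
    ["BrandA", "screen", "poor"],
    ["BrandB", "battery", "poor"],
]
objects = {"BrandA", "BrandB", "battery", "screen"}
polarity = {"excellent": +1, "poor": -1}   # assumed sentiment lexicon

relations = Counter()
for tokens in sentences:
    objs = [t for t in tokens if t in objects]
    tone = sum(polarity.get(t, 0) for t in tokens)  # sentence polarity
    for a, b in combinations(objs, 2):
        relations[(a, b)] += tone  # signed co-occurrence strength

print(relations)
# Counter({('BrandA', 'battery'): 1, ('BrandA', 'screen'): -1,
#          ('BrandB', 'battery'): -1})
```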

Training Performance Analysis of Semantic Segmentation Deep Learning Model by Progressive Combining Multi-modal Spatial Information Datasets (다중 공간정보 데이터의 점진적 조합에 의한 의미적 분류 딥러닝 모델 학습 성능 분석)

  • Lee, Dae-Geon; Shin, Young-Ha; Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.40 no.2, pp.91-108, 2022
  • In most cases, optical images have been used as the training data of DL (Deep Learning) models for object detection, recognition, identification, classification, semantic segmentation, and instance segmentation. However, the properties of 3D objects in the real world cannot be fully explored with 2D images. One of the major sources of 3D geospatial information is the DSM (Digital Surface Model), and characteristic information derived from a DSM is effective for analyzing 3D terrain features. In particular, man-made objects such as buildings, which have geometrically distinctive shapes, can be described by geometric elements obtained from 3D geospatial data. The background and motivation of this paper are drawn from the concept of the intrinsic image, which is involved in high-level visual information processing. This paper aims to extract buildings after classifying terrain features by training a DL model with DSM-derived information, including slope, aspect, and SRI (Shaded Relief Image). The experiments were carried out with the CNN-based SegNet model using the DSM and label dataset provided by the ISPRS (International Society for Photogrammetry and Remote Sensing). In particular, the experiments focus on combining multi-source information to improve the training performance and exploit the synergistic effect of the DL model. The results demonstrate that buildings are effectively classified and extracted by the proposed approach.
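
A minimal sketch of deriving the slope, aspect, and SRI channels from a DSM grid; the central-difference gradients and the standard hillshade formula are assumptions, not necessarily the paper's exact recipe:

```python
import numpy as np

def dsm_derivatives(dsm, cell=1.0, sun_az=315.0, sun_alt=45.0):
    """Derive slope, aspect, and a shaded relief image (SRI) from a
    DSM height grid with the given cell size in metres. These arrays
    can then be stacked as extra input channels for a segmentation model."""
    dz_dy, dz_dx = np.gradient(dsm, cell)          # height gradients
    slope = np.arctan(np.hypot(dz_dx, dz_dy))      # radians from horizontal
    aspect = np.arctan2(-dz_dx, dz_dy)             # downslope direction
                                                   # (convention varies by toolkit)
    az, alt = np.radians(sun_az), np.radians(sun_alt)
    sri = (np.sin(alt) * np.cos(slope)
           + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return slope, aspect, np.clip(sri, 0, 1)
```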

Automation of Building Extraction and Modeling Using Airborne LiDAR Data (항공 라이다 데이터를 이용한 건물 모델링의 자동화)

  • Lim, Sae-Bom; Kim, Jung-Hyun; Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.27 no.5, pp.619-628, 2009
  • LiDAR enables rapid data acquisition and provides useful information for reconstructing the surface of the Earth. However, extracting information from LiDAR data is not an easy task, because the data consist of irregularly distributed 3D point clouds that lack semantic and visual information. This paper proposes methods for the automatic extraction of buildings and detailed 3D modeling using airborne LiDAR data. In preprocessing, noise and unnecessary data were removed by iterative surface fitting, and ground and non-ground data were separated by histogram analysis. Building footprints were extracted by tracing points on the building boundaries, and refined footprints were obtained by regularization based on building hypotheses. The accuracy of the building footprints was evaluated against 1:1,000 digital vector maps; the horizontal RMSE was 0.56m for the test areas. Finally, a method for 3D modeling of roof superstructures was developed. Statistical and geometric information of the LiDAR data on the building roofs was analyzed to segment the data and determine roof shapes. The superstructures on the roofs were modeled by 3D analytic functions derived by the least-squares method. The accuracy of the 3D modeling was estimated using simulation data; the RMSEs were 0.91m, 1.43m, 1.85m, and 1.97m for flat, sloped, arch, and dome shapes, respectively. The methods developed in this study show that the 3D building modeling process can be effectively automated.
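
A minimal sketch of the least-squares roof fitting step for the simplest analytic model; a plane covers the flat and sloped roof shapes the paper mentions, while arches and domes would need curved analytic functions:

```python
import numpy as np

def fit_roof_plane(points):
    """Least-squares fit of z = ax + by + c to segmented roof
    points given as an (N, 3) array, returning the coefficients
    and the RMSE of the residuals as a fit-quality measure."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    residuals = points[:, 2] - A @ coeffs
    rmse = np.sqrt(np.mean(residuals ** 2))
    return coeffs, rmse
```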

MEDU-Net+: a novel improved U-Net based on multi-scale encoder-decoder for medical image segmentation

  • Zhenzhen Yang; Xue Sun; Yongpeng Yang; Xinyi Wu
    • KSII Transactions on Internet and Information Systems (TIIS), v.18 no.7, pp.1706-1725, 2024
  • The unique U-shaped structure of the U-Net network yields good performance in image segmentation, and U-Net is a lightweight network with a small number of parameters, suited to small image segmentation datasets. However, when the medical image to be segmented contains a large amount of detailed information, its segmentation results cannot fully meet practical requirements. To achieve higher accuracy in medical image segmentation, a novel improved U-Net architecture called multi-scale encoder-decoder U-Net+ (MEDU-Net+) is proposed in this paper. We adopt GoogLeNet-style modules to capture more information in the encoder of the proposed MEDU-Net+, and present multi-scale feature extraction for fusing semantic information of different scales in the encoder and decoder. Meanwhile, we introduce layer-by-layer skip connections that link the information of each layer, so there is no need to encode the last layer and pass its information back. The proposed MEDU-Net+ divides the network of unknown depth into deconvolution-layer parts that replace the direct connection between the encoder and decoder of U-Net. In addition, a new combined loss function is proposed to extract more edge information by combining the advantages of the generalized Dice and focal loss functions. Finally, we validate the proposed MEDU-Net+ against other classic medical image segmentation networks on three medical image datasets. The experimental results show that MEDU-Net+ has prominently superior performance compared with other medical image segmentation networks.
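
A minimal PyTorch sketch of a Dice-plus-focal combination for binary segmentation, the kind of loss the abstract names; the equal weighting, gamma=2, and the plain (rather than generalized, class-weighted) Dice term are assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, target, alpha=0.5, gamma=2.0, eps=1e-6):
    """Weighted sum of a soft Dice loss and a focal loss.
    logits and target are float tensors of the same shape."""
    prob = torch.sigmoid(logits)
    # soft Dice: penalizes low overlap between prediction and mask
    inter = (prob * target).sum()
    dice = 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)
    # focal: cross-entropy down-weighted on easy, well-classified pixels
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    pt = torch.exp(-bce)                 # model confidence in the true class
    focal = ((1 - pt) ** gamma * bce).mean()
    return alpha * dice + (1 - alpha) * focal
```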