• Title/Summary/Keyword: Data Searching

Search Results: 1,574 (processing time: 0.037 seconds)

Implementation of Tile Searching and Indexing Management Algorithms for Mobile GIS Performance Enhancement

  • Lee, Kang-Won;Choi, Jin-Young
    • Journal of Internet of Things and Convergence
    • /
    • v.1 no.1
    • /
    • pp.11-19
    • /
    • 2015
  • The mobile and ubiquitous environment is seeing rapid development of information and communications technology and provides an ever-increasing flow of information. In particular, GIS is now widely applied in daily life due to its high accuracy and functionality. GIS information is utilized through the tiling method, which divides and manages large-scale map information; the tiling method manages map information and additional information so that they can be overlaid, facilitating quick access to tiled data. Unlike past studies, this paper proposes a new architecture and algorithms for tile searching and indexing management to optimize map information and additional information for mobile GIS applications. Since this involves processing large-scale information and continuous information changes, the information is clustered for rapid processing. In addition, data size is minimized to overcome the constrained performance of mobile devices. Our system has been deployed in actual services, yielding a twofold increase in performance in terms of processing speed and mobile bandwidth.
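The abstract gives no implementation details, so the following is only a minimal sketch of the kind of tile indexing it describes, assuming a standard Web Mercator z/x/y tile scheme; the function and class names are hypothetical.

```python
import math

def lonlat_to_tile(lon, lat, zoom):
    """Convert WGS84 lon/lat to standard z/x/y tile coordinates (Web Mercator)."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return zoom, x, y

class TileIndex:
    """In-memory index mapping (z, x, y) keys to tile payloads and overlay data."""
    def __init__(self):
        self._tiles = {}

    def put(self, z, x, y, payload):
        self._tiles[(z, x, y)] = payload

    def get(self, lon, lat, zoom):
        # Direct key lookup instead of scanning geometry keeps per-request work
        # small on a constrained mobile device.
        return self._tiles.get(lonlat_to_tile(lon, lat, zoom))
```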

The Low Cost Implementation of Speech Recognition System for the Web (웹에서의 저가 음성인식 시스템의 구현)

  • Park, Yong-Beom;Park, Jong-Il
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.4
    • /
    • pp.1129-1135
    • /
    • 1999
  • Isolated word recognition using the Dynamic Time Warping (DTW) algorithm has shown good recognition rates in speaker-dependent environments. In practice, however, it is hard to deploy because the search time of DTW grows rapidly as the amount of search data increases. In a context-dependent short-query system such as an educational children's workbook on the Web, the number of valid responses to a specific question is limited, so the search space for the answers can be reduced according to the question. In this paper, a low-cost implementation method using DTW for the Web is proposed. To compensate for this weakness of DTW, the search space is reduced by context: the search space for each specific question is restricted to the relevant candidate answers. In a real implementation, the proposed method shows better performance in both search time and recognition rate.
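As a rough illustration of the approach described above (not the authors' exact system), the sketch below computes a classic DTW distance and restricts matching to the answer candidates allowed for the current question; all names and the data layout are assumptions.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two sequences of feature vectors
    (e.g. per-frame acoustic features)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def recognize(query, templates, candidates_for_question, question_id):
    """Match only against the templates allowed for the current question (context pruning)."""
    best_word, best_cost = None, float("inf")
    for word in candidates_for_question[question_id]:
        c = dtw_distance(query, templates[word])
        if c < best_cost:
            best_word, best_cost = word, c
    return best_word
```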

Implementation of Image Compression and Searching System using Wavelet Transform (Wavelet 변환을 이용한 영상압축 및 검색 시스템의 구현)

  • Yoon, Jung-Mo;Kim, Sang-Yeon
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.38 no.4
    • /
    • pp.50-58
    • /
    • 2001
  • Image information, the type of information used most frequently in multimedia, is visual and spatial. It has several characteristics, including diverse storage and output methods, large capacity, the expression of spatial relationships, and irregularity. Various studies are therefore under way on methods for efficiently storing, managing, and searching such image data, and the MPEG-7 international standardization effort for content-based searching in multimedia environments has recently emerged. Implementing a more effective image database search system is an especially important subject, because practical image search systems that can store large amounts of image information in a database and then query and search it have not yet become widespread. Text-based image search systems have been studied extensively, but they have many shortcomings, so research is now focused on content-based search systems. This research uses the wavelet transform, which is widely used in image processing, instead of the DCT method used in existing systems, and thereby achieves comparable or more precise results than prior methods in image compression and feature vector extraction.
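A minimal sketch of content-based retrieval with wavelet-derived feature vectors, in the spirit of the abstract rather than its exact method; it assumes the PyWavelets package and grayscale images stored as NumPy arrays, and the subband-energy feature is an illustrative choice.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_feature(image, wavelet="haar", level=2):
    """Mean absolute coefficient of each wavelet subband as a compact feature vector."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    feats = [np.mean(np.abs(coeffs[0]))]            # approximation band
    for (cH, cV, cD) in coeffs[1:]:                 # detail bands per level
        feats.extend([np.mean(np.abs(cH)), np.mean(np.abs(cV)), np.mean(np.abs(cD))])
    return np.array(feats)

def search(query_img, database):
    """Rank stored images (name -> feature vector) by distance to the query's features."""
    q = wavelet_feature(query_img)
    return sorted(database.items(), key=lambda kv: np.linalg.norm(q - kv[1]))
```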

A Study on Related Variables of University Students' Coping Behavior Concerning Job-searching Problems (대학생의 취업대처행동에 영향을 미치는 관련 변인의 탐색 - 사회인구학적 변인과 개인내적 변인을 중심으로)

  • Kim Kyung-Hwa;Min Ha-Yeoung
    • Journal of Families and Better Life
    • /
    • v.24 no.3 s.81
    • /
    • pp.73-82
    • /
    • 2006
  • The purpose of this study was to investigate the variables related to university students' coping behavior concerning job-searching problems. The subjects were 436 senior students (212 men and 224 women) enrolled in a university in Gyeongbuk Province. Survey questionnaires were used to measure the students' coping behavior concerning job-searching problems, work commitment, willingness to accept downward employment, sex role identity, grade, sex, perceived SES, and major. Data were analyzed using means, standard deviations, t-tests, one-way ANOVA, the Scheffé test, and regression. Results are summarized as follows: (1) Male students' level of active and supportive coping behavior was higher than female students', and their level of evasive coping behavior was lower. Students who perceived their economic condition as negative were higher in active and supportive coping behavior and lower in evasive coping behavior than students who perceived their economic condition as positive. (2) Students with strong work commitment were higher in active coping behavior and lower in evasive coping behavior than those without. (3) Students willing to accept downward employment were higher in active coping behavior than those who were not. (4) The students' coping behavior concerning job-searching problems differed according to their sex role identity. (5) Work commitment and sex role identity were influential variables in university students' job-coping behavior.

Development of Prototype and Model about the Moving Picture Searching System based on MPEG-7 and KEM (MPEG-7과 KEM 기반의 동영상 검색 시스템 모델 및 프로토타입의 개발)

  • Choe, HyunJong
    • The Journal of Korean Association of Computer Education
    • /
    • v.12 no.3
    • /
    • pp.75-83
    • /
    • 2009
  • Moving pictures have become an important medium in education as the e-learning paradigm has expanded, but the Korea Educational Metadata (KEM) has limitations in representing information about the many events and objects within a moving picture. With the announcement of the MPEG-7 specification, information about these events and objects can be represented through semantic and structural descriptions of moving pictures. In this paper, a moving picture search system model that integrates the two metadata specifications, KEM and MPEG-7, is proposed. In this model, one ontology is designed to combine the two metadata specifications, and a second ontology covering the knowledge of a subject matter is added so that the search system can search efficiently. With some moving picture data selected from Edunet and stored on our server, our prototype search system using MPEG-7 and KEM produces the expected results.
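For illustration only: a tiny rdflib/SPARQL sketch of querying object-level descriptions of video segments, loosely in the spirit of the integrated-metadata search model; the namespace and property names are hypothetical stand-ins, not actual KEM or MPEG-7 terms.

```python
from rdflib import Graph, Namespace, Literal

# Hypothetical namespace standing in for the combined KEM / MPEG-7 description vocabulary.
EX = Namespace("http://example.org/video#")

g = Graph()
g.add((EX.clip1, EX.title, Literal("Photosynthesis experiment")))
g.add((EX.clip1, EX.depictsObject, Literal("leaf")))
g.add((EX.clip1, EX.segmentStart, Literal("00:02:10")))

# Find clips whose segment descriptions mention a given object.
q = """
SELECT ?clip ?start WHERE {
  ?clip <http://example.org/video#depictsObject> ?obj ;
        <http://example.org/video#segmentStart> ?start .
  FILTER(CONTAINS(LCASE(STR(?obj)), "leaf"))
}
"""
for row in g.query(q):
    print(row.clip, row.start)
```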

An Improved Interpolation Method using Pixel Difference Values for Effective Reversible Data Hiding (효과적인 가역 정보은닉을 위한 픽셀의 차이 값을 이용한 개선된 보간법)

  • Kim, Pyung Han;Jung, Ki Hyun;Yoon, Eun-Jun;Ryu, Kwan-Woo
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.6
    • /
    • pp.768-788
    • /
    • 2021
  • The reversible data hiding technique safely transmits secret data to the recipient, protecting it from malicious attacks by third parties, and can completely restore the image used as the transmission medium. Reversible data hiding schemes have been proposed in various forms, and interpolation-based reversible data hiding schemes have recently been actively researched. An interpolation-based reversible data hiding scheme expands the original image into a cover image and then embeds secret data. However, existing interpolation-based schemes do not embed secret data during the interpolation process itself. To address this, this paper proposes embedding the first set of secret data during the image interpolation process and embedding the second set of secret data into the interpolated cover image. In the embedding process, the original image is divided into non-overlapping blocks, and the maximum and minimum values are determined within each block. Three-way searching based on the maximum value and two-way searching based on the minimum value are performed, and image interpolation is carried out while embedding the first secret data using the PVD (pixel value differencing) scheme. A stego image is then created by embedding the second secret data into the interpolated cover image using the maximum difference value and a log function. As a result, the proposed scheme embeds secret data twice; in particular, it can embed secret data even during the interpolation process, which previous schemes did not exploit. Experimental results show that the proposed scheme can transmit more secret data to the receiver while maintaining image quality similar to other interpolation-based reversible data hiding schemes.
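The following is a much-simplified toy of interpolation-based embedding, not the authors' two-stage PVD scheme: one pixel is interpolated between two reference pixels and a few bits are hidden in it, with capacity driven by the local pixel difference. All names are hypothetical, and the receiver is assumed to know the reference pixels.

```python
import math

def embed_between(p0, p1, bitstream):
    """Interpolate one pixel between two reference pixels and hide a few bits in it.

    The larger the local difference, the more bits fit; the receiver can recover
    both the bits and the clean interpolated value because it also knows p0 and p1.
    """
    base = (p0 + p1) // 2
    diff = abs(p0 - p1)
    k = int(math.log2(diff)) if diff >= 2 else 0   # bits this gap can hold
    bits, rest = bitstream[:k], bitstream[k:]
    value = base + int(bits, 2) if bits else base
    return value, rest                             # stego pixel, remaining bits

def extract_between(p0, p1, stego_value):
    """Recover the hidden bits and the original interpolated value."""
    base = (p0 + p1) // 2
    diff = abs(p0 - p1)
    k = int(math.log2(diff)) if diff >= 2 else 0
    bits = format(stego_value - base, "b").zfill(k) if k else ""
    return bits, base
```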

Implementation of Policy based In-depth Searching for Identical Entities and Cleansing System in LOD Cloud (LOD 클라우드에서의 연결정책 기반 동일개체 심층검색 및 정제 시스템 구현)

  • Kim, Kwangmin;Sohn, Yonglak
    • Journal of Internet Computing and Services
    • /
    • v.19 no.3
    • /
    • pp.67-77
    • /
    • 2018
  • This paper suggests that each LOD establish its own link policy and publish it to the LOD cloud to provide identity among entities in different LODs. For specifying the link policy, we also propose a vocabulary set founded on the RDF model. We implemented a Policy-based In-depth Searching and Cleansing (PISC) system that carries out in-depth searching across LODs by referencing the link policies; PISC has been published on GitHub. Because LODs participate in the LOD cloud voluntarily, the degree of entity identity needs to be evaluated. PISC therefore evaluates the identities and cleanses the searched entities, keeping only those that exceed the user's criterion for the entity identity level. As search results, PISC provides an entity's detailed contents collected from diverse LODs, together with an ontology customized to the content. A simulation of PISC was performed on five of DBpedia's LODs. We found that a similarity of 0.9 between the objects of source and target RDF triples provided an appropriate expansion ratio and inclusion ratio for the search results, and that three or more target LODs need to be specified in the link policy to obtain sufficient identity of the searched entities.
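A minimal sketch of the kind of policy-driven identity check the abstract describes, assuming entities are represented as predicate-to-object dictionaries and using a simple string similarity with the 0.9 threshold mentioned above; this is not the PISC implementation.

```python
from difflib import SequenceMatcher

def objects_similarity(src_obj, tgt_obj):
    """String similarity between two RDF object values, in [0.0, 1.0]."""
    return SequenceMatcher(None, str(src_obj).lower(), str(tgt_obj).lower()).ratio()

def identical_by_policy(src_entity, tgt_entity, predicate_pairs, threshold=0.9):
    """Treat two entities as identical if every paired predicate's objects are similar enough."""
    for src_pred, tgt_pred in predicate_pairs:
        src_obj = src_entity.get(src_pred)
        tgt_obj = tgt_entity.get(tgt_pred)
        if src_obj is None or tgt_obj is None:
            return False
        if objects_similarity(src_obj, tgt_obj) < threshold:
            return False
    return True
```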

Change Acceptable In-Depth Searching in LOD Cloud for Efficient Knowledge Expansion (효과적인 지식확장을 위한 LOD 클라우드에서의 변화수용적 심층검색)

  • Kim, Kwangmin;Sohn, Yonglak
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.171-193
    • /
    • 2018
  • The LOD (Linked Open Data) cloud is a practical implementation of the semantic web. We suggest a new method that provides identity links conveniently in the LOD cloud and allows changes in an LOD to be reflected in search results without omissions. An LOD publishes detailed descriptions of entities in RDF triple form; an RDF triple is composed of a subject, predicate, and object and presents a detailed description of an entity. Links in the LOD cloud, called identity links, are realized by asserting that entities of different RDF triples are identical. Currently, an identity link is provided by explicitly creating a link triple that associates its subject and object with the source and target entities, and such link triples are appended to the LOD. With identity links, knowledge obtained from one LOD can be expanded with knowledge from other LODs; providing this opportunity for knowledge expansion to users is the goal of the LOD cloud. Appending link triples to an LOD, however, runs into serious difficulty, since identity links between entities must be discovered one by one despite the enormous scale of LODs, and newly added entities cannot be reflected in search results until identity links pointing to them are serialized and published to the LOD cloud. Instead of creating an enormous number of identity links, we propose that each LOD prepare its own link policy. The link policy specifies a set of target LODs to link to and the constraints necessary to discover identity links to entities in those target LODs. During a search, it then becomes possible to access newly added entities and reflect them in the search results without omissions by referencing the link policies. A link policy specifies a set of predicate pairs for discovering identity between associated entities in the source and target LODs; for the link policy specification, we have suggested a set of vocabularies that conform to RDFS and OWL. Identity between entities is evaluated according to the similarity of the objects of the source and target entities that are associated with a predicate pair in the link policy. We implemented a system called the Change Acceptable In-Depth Searching System (CAIDS). With CAIDS, a user's search request starts from the depth_0 LOD, i.e., surface searching. Referencing the link policies of LODs, CAIDS then proceeds with in-depth searching into the LODs of the next depths, and it uses explicit link triples as well to supplement the identity links derived from the link policies. As the in-depth search follows the identity links, the content of an entity obtained from the depth_0 LOD is expanded with the contents of entities in other LODs that have been discovered to be identical to it. Expanding the content of a depth_0 entity without the user having to be aware of those other LODs realizes knowledge expansion, the goal of the LOD cloud: the more identity links in the LOD cloud, the wider the content expansion. We have thus suggested a new way to create identity links abundantly and supply them to the LOD cloud. Experiments with CAIDS were performed against the DBpedia LODs of Korea, France, Italy, Spain, and Portugal. They show that CAIDS provides appropriate expansion and inclusion ratios as long as the degree of similarity between source and target objects is 0.8 to 0.9, where the expansion ratio at each depth is the ratio of the entities discovered at that depth to the entities of the depth_0 LOD, and the inclusion ratio is the ratio of the entities discovered only with explicit links to the entities discovered only with link policies. For similarity degrees under 0.8, expansion becomes excessive and the contents become distorted, while a similarity degree of 0.8 to 0.9 also yields an appropriate number of searched RDF triples. The experiments also evaluated the confidence degree of the contents expanded through in-depth searching. The confidence degree of content is directly coupled with the identity ratio of an entity, which is its degree of identity to the entity of the depth_0 LOD. The identity ratio of an entity is obtained by multiplying the source LOD's confidence by the source entity's identity ratio, and an LOD's confidence is evaluated, by tracing the identity links in advance, according to the number of identity links incoming to the entities in that LOD. While evaluating the identity ratio, the concept of identity agreement, meaning that multiple identity links head to a common entity, is also considered. With the identity agreement concept, the experimental results show that the identity ratio decreases as depth deepens but rebounds as the depth deepens further; for each entity, as the number of identity links increases, the identity ratio rebounds earlier and finally reaches 1. We found that more than 8 identity links per entity would lead users to trust the expanded contents. The link-policy-based in-depth searching method we propose is expected to contribute abundant identity links to the LOD cloud.
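The abstract states that an entity's identity ratio is the product of the source LOD's confidence and the source entity's identity ratio, and that agreeing identity links push the ratio toward 1. A small sketch of that calculation follows; the noisy-OR style aggregation for identity agreement is an assumption, since the exact rule is not given.

```python
def propagated_identity(source_identity_ratio, source_lod_confidence):
    """Identity ratio carried over a single identity link, as described in the abstract."""
    return source_identity_ratio * source_lod_confidence

def combine_agreeing_links(ratios):
    """Aggregate ratios when several identity links point at the same entity.

    A noisy-OR style combination is assumed here so that more agreeing links push
    the ratio toward 1, matching the qualitative behaviour reported in the abstract.
    """
    result = 1.0
    for r in ratios:
        result *= (1.0 - r)
    return 1.0 - result
```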

Real-Time Indexing Performance Optimization of Search Platform Based on Big Data Cluster (빅데이터 클러스터 기반 검색 플랫폼의 실시간 인덱싱 성능 최적화)

  • Nayeon Keum;Dongchul Park
    • Journal of Platform Technology
    • /
    • v.11 no.6
    • /
    • pp.89-105
    • /
    • 2023
  • With the development of information technology, most information has been converted into digital form, leading to the Big Data era. The demand for search platforms has increased to enhance the accessibility and usability of information in databases. Big data search software platforms consist of two main components: (1) an indexing component that generates and stores data indices for fast and efficient data search, and (2) a searching component that looks up the given data quickly. As the amount of data has increased explosively, data indexing performance has become a key bottleneck of big data search platforms. Although many companies have adopted big data search platforms, relatively little research has been done on improving indexing performance. This study employs Elasticsearch, one of the most popular enterprise big data search platforms, and builds a physical cluster of 3 nodes to investigate optimal indexing-performance configurations. Our comprehensive experiments demonstrate that the proposed optimal Elasticsearch configuration improves indexing performance by an average factor of 3.13.
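The paper's specific optimal configuration is not reproduced in the abstract; the sketch below only shows commonly tuned Elasticsearch indexing settings (replica count and refresh interval during a bulk load) via the REST API, with a hypothetical endpoint and index name.

```python
import json
import requests

ES = "http://localhost:9200"          # assumed local Elasticsearch endpoint
INDEX = "logs-demo"                   # hypothetical index name

# Create the index with settings commonly tuned for bulk-indexing throughput:
# no replicas and a disabled refresh while loading, restored afterwards.
requests.put(f"{ES}/{INDEX}", json={
    "settings": {
        "number_of_shards": 3,        # e.g. one primary shard per node in a 3-node cluster
        "number_of_replicas": 0,      # add replicas after the initial load
        "refresh_interval": "-1",     # do not refresh during bulk indexing
    }
})

# Bulk-load documents through the _bulk API (newline-delimited JSON).
docs = [{"id": i, "msg": f"event {i}"} for i in range(1000)]
lines = []
for d in docs:
    lines.append(json.dumps({"index": {"_index": INDEX}}))
    lines.append(json.dumps(d))
requests.post(f"{ES}/_bulk", data="\n".join(lines) + "\n",
              headers={"Content-Type": "application/x-ndjson"})

# Restore settings for serving searches.
requests.put(f"{ES}/{INDEX}/_settings",
             json={"index": {"refresh_interval": "1s", "number_of_replicas": 1}})
```

Disabling refresh and replicas during a bulk load and restoring them afterwards is a widely documented throughput optimization; the 3.13x gain reported above refers to the paper's own tuned configuration, not this snippet.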

Design of Data Generating for Fast Searching and Customized Service for Underground Utility Facilities (지하공동구 관리를 위한 고속 검색 데이터 생성 및 사용자 맞춤형 서비스 방안 설계)

  • Park, Jonghwa;Jeon, Jihye;Park, Gooman
    • Journal of Broadcast Engineering
    • /
    • v.26 no.4
    • /
    • pp.390-397
    • /
    • 2021
  • As digital twin technology is applied to various industrial fields, technologies for effectively processing large amounts of data are required. In this paper, we discuss a customized service method for the fast searching and effective delivery of large-scale data for the management of underground facilities for public utilities. The proposed scheme has two parts: a fast-search data generation method and a customized information service segmentation method, intended to efficiently search and condense vast amounts of data. For fast-search data generation, we discuss how to configure the synchronization process for time-series analysis of the sensor data collected in the underground facility, and how to attach additional information according to the degree of data reduction. For the user-customized service method, we define the types of users in normal and disaster situations and discuss how to serve each of them. Through this study, we expect to be able to develop a systematic data generation and service model for the management of underground utilities that can effectively search and deliver large-scale data in a disaster situation.
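As a rough sketch of the time-series synchronization step described above (the paper does not specify a schema), the following resamples each sensor's readings onto a common clock with pandas; the data layout and sampling frequency are assumptions.

```python
import pandas as pd

def synchronize(sensor_frames, freq="1min"):
    """Resample each sensor's readings to a common clock and join them.

    sensor_frames: dict of sensor name -> DataFrame with a DatetimeIndex and a
    'value' column (hypothetical layout).
    """
    aligned = {
        name: df["value"].resample(freq).mean()   # aggregate onto the shared time grid
        for name, df in sensor_frames.items()
    }
    # Joining on the shared index yields one row per time step across all sensors,
    # which is the synchronized table a fast-search index can be built over.
    return pd.DataFrame(aligned)
```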