• Title/Summary/Keyword: Processing Map

Search Results: 1,473

Automatic Face Tracking based on Active Contour Model using Two-Level Composite Gradient Map (두 단계 합성 기울기 맵을 이용한 활성 외곽선 모델 기반 자동 얼굴 추적)

  • Kim, Soo-Kyung;Jang, Yo-Jin;Hong, Helen
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.11
    • /
    • pp.901-911
    • /
    • 2009
  • In this paper, we propose a two-level composite gradient map construction technique to automatically track a face with large movement across successive frames. Our method is composed of three main steps. First, gradient maps at two resolution levels are generated for fast convergence of the active contour. Second, to capture facial variation between successive frames and remove the neighboring background, a weighted composite gradient map is generated by combining the composite gradient map with a difference mask of the previous and current frames. Third, to prevent the active contour from converging to local minima, an energy slope is generated using a closing operation. In addition, a fast closing operation is proposed to reduce the processing time of the closing step. For performance evaluation, we compare our method with previous active contour model-based face tracking methods using visual inspection, a robustness test, and processing time. Experimental results show that our method can effectively track a face with large movement and robustly converge to the optimal position even in frames with a complicated background.
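The multi-resolution gradient map and closing-based energy slope described in the abstract could be sketched roughly as below. This is a minimal Python illustration under my own assumptions (toy data, central-difference gradients, a 0.5 blend weight, a 1-D closing); it is not the authors' implementation.

```python
# Sketch: two-level gradient map + closing-based energy slope (illustrative only).

def gradient_magnitude(img):
    """Central-difference gradient magnitude of a 2D list of floats."""
    h, w = len(img), len(img[0])
    g = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            g[y][x] = (gx * gx + gy * gy) ** 0.5
    return g

def downsample(img):
    """2x2 average pooling -> coarse resolution level."""
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(len(img[0]) // 2)]
            for y in range(len(img) // 2)]

def composite_gradient_map(img, w_coarse=0.5):
    """Blend fine gradients with upsampled coarse gradients."""
    fine = gradient_magnitude(img)
    coarse = gradient_magnitude(downsample(img))
    return [[fine[y][x] + w_coarse * coarse[y // 2][x // 2]
             for x in range(len(img[0]))]
            for y in range(len(img))]

def closing_1d(row, k=3):
    """Grayscale closing (dilation then erosion) along one row:
    fills small local minima so the contour is not trapped in them."""
    half = k // 2
    def dilate(r):
        return [max(r[max(0, i - half):i + half + 1]) for i in range(len(r))]
    def erode(r):
        return [min(r[max(0, i - half):i + half + 1]) for i in range(len(r))]
    return erode(dilate(row))
```

The closing step is what produces a monotone "slope" toward strong edges: narrow dips in the energy are filled, so gradient descent on the contour does not stop early.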

Image Coding Using DCT Map and Binary Tree-structured Vector Quantizer (DCT 맵과 이진 트리 구조 벡터 양자화기를 이용한 영상 부호화)

  • Jo, Seong-Hwan;Kim, Eung-Seong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.1 no.1
    • /
    • pp.81-91
    • /
    • 1994
  • A DCT map and a new codebook design algorithm based on the two-dimensional discrete cosine transform (2D-DCT) are presented for image vector quantizer coding. We divide the image into smaller sub-blocks and, using the 2D DCT, separate them into blocks that are hard to code but carry most of the visual information and blocks that are easy to code but carry little visual information; from this, a DCT map is made. According to this map, the significant features of the training image are extracted using the 2D DCT. A codebook is generated by partitioning the training set into a binary tree. Each training vector at a nonterminal node of the binary tree is directed to one of the two descendants by comparing a single feature associated with that node to a threshold. Compared with the pairwise nearest neighbor (PNN) and classified VQ (CVQ) algorithms on the 'Lenna' and 'Boat' images, the new algorithm reduces computation time and shows better picture quality, with gains of 0.45 dB and 0.33 dB over PNN and 0.05 dB and 0.1 dB over CVQ, respectively.
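The hard/easy block classification behind such a DCT map could look roughly like this; the block size, AC-energy criterion, and threshold here are illustrative assumptions, not the paper's actual design.

```python
import math

def dct2(block):
    """Naive 2D-DCT-II of an NxN block (O(N^4), fine for illustration)."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for y in range(n):
                for x in range(n):
                    s += (block[y][x]
                          * math.cos((2 * y + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * x + 1) * v * math.pi / (2 * n)))
            cu = math.sqrt(1.0 / n) if u == 0 else math.sqrt(2.0 / n)
            cv = math.sqrt(1.0 / n) if v == 0 else math.sqrt(2.0 / n)
            out[u][v] = cu * cv * s
    return out

def dct_map(image, bs=4, threshold=10.0):
    """Mark each bs x bs block 1 ('hard to code': high AC energy,
    most of the visual information) or 0 ('easy to code')."""
    grid = []
    for by in range(0, len(image), bs):
        row = []
        for bx in range(0, len(image[0]), bs):
            blk = [r[bx:bx + bs] for r in image[by:by + bs]]
            coef = dct2(blk)
            ac = sum(coef[u][v] ** 2 for u in range(bs) for v in range(bs)
                     if (u, v) != (0, 0))
            row.append(1 if ac > threshold else 0)
        grid.append(row)
    return grid
```

A flat block has all its energy in the DC coefficient and is marked 0; a block containing an edge spreads energy into AC coefficients and is marked 1, steering it to the feature-based binary-tree codebook.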


Data Fragmentation Protection Technique for the Performance Enhancement of DB-Based Navigation Supporting Incremental Map Update (점증적인 맵 갱신을 지원하는 DB 기반 내비게이션의 성능 향상을 위한 데이터 단편화 방지 기법)

  • Kim, Yong Ho;Kim, Jae Kwang;Jin, Seongil
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.3
    • /
    • pp.77-82
    • /
    • 2020
  • Most in-vehicle navigation systems have been developed on top of a complex structure of PSF (Physical Storage Format) files, making it difficult to support incremental map updates. DB-based navigation is drawing attention as a next-generation navigation method to solve this problem. In DB-based navigation that supports incremental map updates, data fragmentation caused by continuous map data updates can increase data access costs, which can degrade search performance. In this paper, as one of the performance enhancement methods for DB-based navigation supporting incremental map updates, we present a data fragmentation prevention technique and verify its performance improvement through an actual implementation.
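The fragmentation problem can be illustrated with a toy storage model: appending each incremental update scatters a map tile's records across storage, while rewriting the tile keeps it in one contiguous run. The tile/page model below is my own simplification, not the paper's actual storage layout.

```python
# Sketch: fragmentation under append-style incremental map updates
# vs. a compacting update that rewrites the tile contiguously.

class TileStore:
    def __init__(self):
        self.pages = []          # flat storage: (tile_id, record)
        self.index = {}          # tile_id -> list of page offsets

    def append_update(self, tile_id, record):
        """Naive incremental update: append at the end (fragments)."""
        self.index.setdefault(tile_id, []).append(len(self.pages))
        self.pages.append((tile_id, record))

    def compact_update(self, tile_id, record):
        """Anti-fragmentation: rewrite the tile's records as one run."""
        old = [self.pages[o][1] for o in self.index.get(tile_id, [])]
        self.pages = [(t, r) for t, r in self.pages if t != tile_id]
        self.pages.extend((tile_id, r) for r in old + [record])
        # rebuild offsets after the rewrite
        self.index = {}
        for off, (t, _) in enumerate(self.pages):
            self.index.setdefault(t, []).append(off)

    def extents(self, tile_id):
        """Contiguous runs holding the tile (1 = unfragmented)."""
        offs = sorted(self.index.get(tile_id, []))
        return sum(1 for i, o in enumerate(offs)
                   if i == 0 or o != offs[i - 1] + 1)
```

Fewer extents means fewer seeks/page reads per tile, which is the access-cost effect the abstract attributes to fragmentation.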

Designation of a Road in Urban Area Using Hough Transform

  • Kim, Joon-Cheol;Park, Sung-Mo;Lee, Joon-whoan;Jeong, Soo
    • Proceedings of the KSRS Conference
    • /
    • 2002.10a
    • /
    • pp.766-771
    • /
    • 2002
  • Automatic change detection based on vector-to-raster comparison is hard, especially in high-resolution imagery. This paper proposes a method to designate roads in a high-resolution image in a sequential manner using information from a vector map, with the Hough transform used for reliability. Because of its linearity, a road in an urban-area vector map can be easily parameterized. After some pre-processing to remove undesirable objects, we obtain the edge map of the raster image. The edge map is then transformed to a parameter space to find the road selected from the vector map. The comparison is done in the parameter space to find the best match, with the road's parameter set from the vector map treated as constraints on the matching. After designating the road, we may overlay it on the raster image for precise monitoring. The results can be used to detect changes in road objects in a semi-automatic fashion.
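Matching a vector-map road against the raster edge map in (rho, theta) space could be sketched as follows; the discretization, tolerances, and toy edge points are my own illustrative choices, not the paper's settings.

```python
import math

def hough_peak(edge_points, n_theta=180, rho_step=1.0):
    """Vote each edge point into (rho, theta) space; return the peak."""
    acc = {}
    for x, y in edge_points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            cell = (round(rho / rho_step), t)
            acc[cell] = acc.get(cell, 0) + 1
    (rho_i, t_i), votes = max(acc.items(), key=lambda kv: kv[1])
    return rho_i * rho_step, math.pi * t_i / n_theta, votes

def matches_vector_road(edge_points, road_rho, road_theta,
                        rho_tol=2.0, theta_tol=math.radians(5)):
    """Treat the vector-map line parameters as constraints on the peak."""
    rho, theta, _ = hough_peak(edge_points)
    return (abs(rho - road_rho) <= rho_tol
            and abs(theta - road_theta) <= theta_tol)
```

The vector map supplies (road_rho, road_theta), so the search degenerates to checking whether the edge evidence peaks near those constrained parameters, which is what makes the designation reliable.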


A Study on the Small-scale Map Production using Automatic Map Generalization in a Digital Environment and Accuracy Assessment (일반화 기법을 이용한 소축척 지도의 자동생성 및 정확도 평가에 관한 연구)

  • 김감래;이호남
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.14 no.1
    • /
    • pp.27-38
    • /
    • 1996
  • Unlike conventional maps, which are restricted by scale and information content, scale-independent digital maps have recently come to play an important role in GIS and other application areas that use geographical data. The main objective of this study is to develop an automated production system for small-scale maps using generalization techniques in a digital environment. We develop algorithms and programs for each generalization operator based on specific terrain features in vector data. The study addresses the development of a data model for the generalization process, focusing on the priority and processing sequence needed to maintain vector topology, and on error analysis for the generalized digital data.
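One widely used line-generalization operator of the kind such a system would include is Douglas-Peucker point reduction; the sketch below is a generic example of that operator, not the specific algorithms developed in the paper.

```python
# Sketch: Douglas-Peucker polyline simplification, a standard
# generalization operator for deriving small-scale vector data.

def _point_line_dist(p, a, b):
    """Perpendicular distance from p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dy * (px - ax) - dx * (py - ay)) / (dx * dx + dy * dy) ** 0.5

def douglas_peucker(points, tol):
    """Simplify a polyline, keeping vertices deviating more than tol."""
    if len(points) < 3:
        return list(points)
    # find the vertex farthest from the anchor-floater line
    dists = [_point_line_dist(p, points[0], points[-1])
             for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= tol:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:i + 1], tol)
    right = douglas_peucker(points[i:], tol)
    return left[:-1] + right
```

Because the operator only drops vertices (never reorders them), it preserves the line's topology with its endpoints, which matters for the topology-maintenance requirement the abstract mentions.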


Reduction in Sample Size for Efficient Monte Carlo Localization (효율적인 몬테카를로 위치추정을 위한 샘플 수의 감소)

  • Yang Ju-Ho;Song Jae-Bok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.12 no.5
    • /
    • pp.450-456
    • /
    • 2006
  • Monte Carlo localization (MCL) is known to be one of the most reliable methods for pose estimation of a mobile robot. Although MCL can estimate the robot pose even from a completely unknown initial pose in a known environment, it takes considerable time to produce an initial pose estimate because the number of random samples is usually very large, especially for a large-scale environment. For practical implementation of MCL, therefore, a reduction in sample size is desirable. This paper presents a novel approach to reducing the number of samples used in the particle filter for efficient implementation of MCL. To this end, topological information generated through the thinning technique, which is commonly used in image processing, is employed. A global topological map is first created from the given grid map of the environment. The robot then scans the local environment using a laser rangefinder and generates a local topological map, and it navigates only along this local topological edge, which is likely to be similar to the one obtained off-line from the given grid map. Since the robot traverses along the edge, random samples are drawn near the topological edge instead of being taken with a uniform distribution over the whole environment. Experimental results using the proposed method show that the number of samples can be reduced considerably, and the time required for robot pose estimation can also be substantially decreased without adverse effects on the performance of MCL.
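The core sampling idea, drawing particles near the thinned topological edge rather than uniformly over the environment, could be sketched as below; the toy edge, Gaussian spread, and sample counts are illustrative assumptions, not the paper's parameters.

```python
import random

def samples_near_edge(edge_cells, n, sigma=0.3, seed=0):
    """Draw n (x, y) samples clustered around thinned-edge cells,
    since the robot is assumed to travel along the edge."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        cx, cy = rng.choice(edge_cells)        # pick an edge cell
        samples.append((cx + rng.gauss(0.0, sigma),
                        cy + rng.gauss(0.0, sigma)))
    return samples

def samples_uniform(width, height, n, seed=0):
    """Baseline: uniform samples over the whole environment."""
    rng = random.Random(seed)
    return [(rng.uniform(0, width), rng.uniform(0, height))
            for _ in range(n)]
```

Because nearly all probability mass is placed where the robot can actually be, the same localization quality needs far fewer particles than the uniform baseline, which is the sample-size reduction the abstract reports.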

Clutter Rejection Method using Background Adaptive Threshold Map (배경 적응적 문턱치 맵(Threshold Map)을 이용한 클러터 제거 기법)

  • Kim, Jieun;Yang, Yu Kyung;Lee, Boo Hwan;Kim, Yeon Soo
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.17 no.2
    • /
    • pp.175-181
    • /
    • 2014
  • In this paper, we propose a robust clutter pre-thresholding method using a background-adaptive threshold map for clutter rejection in complex coastal environments. The proposed algorithm consists of the use of the threshold map and a method for its calculation. We also suggest a method for automatically deciding when to update the threshold map. Experimental results on several sets of real infrared image sequences show that the proposed method removes clutter effectively without any loss of detection rate for the target of interest and reduces processing time dramatically.
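A background-adaptive threshold map in this spirit can be sketched as a per-block threshold built from local background statistics; the block size, the mean-plus-k-times-deviation rule, and the toy frames are my own assumptions, not the paper's formulation.

```python
def threshold_map(frame, bs=2, k=3.0):
    """Per-block thresholds of mean + k * mean absolute deviation,
    computed from a background (clutter-only) reference frame."""
    h, w = len(frame), len(frame[0])
    tmap = {}
    for by in range(0, h, bs):
        for bx in range(0, w, bs):
            vals = [frame[y][x]
                    for y in range(by, min(by + bs, h))
                    for x in range(bx, min(bx + bs, w))]
            mean = sum(vals) / len(vals)
            mad = sum(abs(v - mean) for v in vals) / len(vals)
            tmap[(by // bs, bx // bs)] = mean + k * mad
    return tmap

def reject_clutter(frame, tmap, bs=2):
    """Keep only pixels exceeding their block's adaptive threshold."""
    return [(y, x)
            for y, row in enumerate(frame)
            for x, v in enumerate(row)
            if v > tmap[(y // bs, x // bs)]]
```

Pre-thresholding with a spatially varying map suppresses bright coastal clutter regions (where the local threshold is high) while still passing a dim target over a dark background, and the detector then only processes the surviving pixels, which is where the processing-time saving comes from.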

Multimodal layer surveillance map based on anomaly detection using multi-agents for smart city security

  • Shin, Hochul;Na, Ki-In;Chang, Jiho;Uhm, Taeyoung
    • ETRI Journal
    • /
    • v.44 no.2
    • /
    • pp.183-193
    • /
    • 2022
  • Smart cities are expected to provide residents with convenience via various agents such as CCTV, delivery robots, security robots, and unmanned shuttles. Environmental data collected by these agents can be used for various purposes, including advertising and security monitoring. This study suggests a surveillance map data framework for efficient, integrated representation of multimodal data from multiple agents. The suggested surveillance map is a multilayered global information grid integrated from the multimodal data of each agent. To validate it, we collected surveillance map data for 4 months and analyzed the behavior patterns of humans and vehicles and the distribution changes of elevation and temperature. Moreover, we present a two-stage anomaly detection algorithm based on the surveillance map for security services. With it, abnormal situations such as unusual crowds and pedestrians, vehicle movement, unusual objects, and temperature changes were detected. Because the surveillance map enables efficient, integrated processing of large multimodal data from multiple agents, the suggested data framework can be used for various applications in the smart city.
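A two-stage anomaly check on one layer of such a grid could look roughly like this: stage 1 flags cells whose current value deviates from that cell's history, and stage 2 confirms only flags whose neighborhood also deviates. The per-cell statistics, the 2-sigma rule, and the 4-neighbor confirmation are illustrative assumptions, not the paper's algorithm.

```python
def cell_stats(history):
    """Per-cell (mean, std) over a list of past 2D grids."""
    h, w = len(history[0]), len(history[0][0])
    stats = [[(0.0, 0.0)] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [g[y][x] for g in history]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            stats[y][x] = (mean, var ** 0.5)
    return stats

def detect_anomalies(current, stats, k=2.0):
    h, w = len(current), len(current[0])
    def deviates(y, x):
        mean, std = stats[y][x]
        return abs(current[y][x] - mean) > k * max(std, 1e-6)
    # stage 1: per-cell deviation from that cell's own history
    flagged = [(y, x) for y in range(h) for x in range(w) if deviates(y, x)]
    # stage 2: confirm with at least one deviating 4-neighbor,
    # so isolated single-cell noise is discarded
    return [(y, x) for y, x in flagged
            if any(0 <= ny < h and 0 <= nx < w and deviates(ny, nx)
                   for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)))]
```

Running the same check per layer (occupancy, elevation, temperature, …) and unioning the confirmed cells gives a layer-wise view of "unusual situations" over the shared grid.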

A Basic Study for the Development of Multidisciplinary Intervention Guide Map of Auditory Processing Disorders (청각처리장애의 다학문적 중재 안내도 개발을 위한 기초 연구)

  • Kim, Soo-Jin
    • Journal of Digital Convergence
    • /
    • v.13 no.12
    • /
    • pp.259-268
    • /
    • 2015
  • People with auditory processing disorders (APD) do not accurately understand what they hear, despite normal hearing levels, because of difficulties in processing auditory information in the auditory nervous system. The purposes of this study are to investigate intervention strategies suggested in the current literature and to develop a guide map for APD intervention. Problem-based intervention strategies are customized to the specific deficits of the subtypes of the Buffalo model and the Bellis/Ferre model, and general intervention strategies are recommended, including compensatory strategies, auditory training, and environmental modification. A multidisciplinary team should determine and provide various intervention strategies to improve the auditory capabilities of a child with APD intensively and persistently. The APD intervention guide map is organized in four steps. It helps clinicians and teachers involved in APD intervention find appropriate intervention strategies and processes to reduce the difficulties of a child with APD or suspected APD.

A Hot-Data Replication Scheme Based on Data Access Patterns for Enhancing Processing Speed of MapReduce (맵-리듀스의 처리 속도 향상을 위한 데이터 접근 패턴에 따른 핫-데이터 복제 기법)

  • Son, Ingook;Ryu, Eunkyung;Park, Junho;Bok, Kyoungsoo;Yoo, Jaesoo
    • The Journal of the Korea Contents Association
    • /
    • v.13 no.11
    • /
    • pp.21-27
    • /
    • 2013
  • In recent years, with the growth of social media and the development of mobile devices, data volumes have increased significantly. Hadoop has been widely utilized as a typical distributed storage and processing framework. Tasks in MapReduce, which runs on the Hadoop distributed file system, are allocated to map tasks as close to the data as possible by considering data locality. However, some data are requested frequently by MapReduce data analysis tasks. In this paper, we propose a hot-data replication mechanism to improve the processing speed of MapReduce according to data access patterns. The proposed scheme reduces task processing time and improves data locality by applying a replica optimization algorithm to frequently accessed hot data. Performance evaluation shows that the proposed scheme outperforms the existing scheme in terms of processing time under high access frequencies.
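Access-frequency-driven replication of this kind could be sketched as below: a block whose access count crosses a hot threshold gains an extra replica on the least-loaded node, improving the odds that a map task can run data-locally. The threshold, replica cap, and placement policy are illustrative assumptions, not the paper's algorithm or HDFS's actual replica placement.

```python
class HotDataReplicator:
    def __init__(self, nodes, hot_threshold=3, max_replicas=3):
        self.nodes = list(nodes)
        self.hot_threshold = hot_threshold
        self.max_replicas = max_replicas
        self.access = {}                 # block -> access count
        self.placement = {}              # block -> set of holding nodes

    def _load(self, node):
        """Number of replicas currently stored on a node."""
        return sum(1 for held in self.placement.values() if node in held)

    def add_block(self, block):
        """Initial placement: one replica on the least-loaded node."""
        self.placement[block] = {min(self.nodes, key=self._load)}
        self.access[block] = 0

    def record_access(self, block):
        """Count an access; replicate once the block turns hot."""
        self.access[block] += 1
        if (self.access[block] >= self.hot_threshold
                and len(self.placement[block]) < self.max_replicas):
            candidates = [n for n in self.nodes
                          if n not in self.placement[block]]
            if candidates:
                self.placement[block].add(min(candidates, key=self._load))
```

With more replicas of hot blocks spread across nodes, the scheduler has more chances to place a map task on a node that already holds the data, which is the locality improvement the abstract describes.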