• Title/Summary/Keyword: Occlusion Robust

Combining an Edge-Based Method and a Direct Method for Robust 3D Object Tracking

  • Lomaliza, Jean-Pierre;Park, Hanhoon
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.2
    • /
    • pp.167-177
    • /
    • 2021
  • In the field of augmented reality, edge-based methods have been widely used for tracking textureless 3D objects. However, edge-based methods are inherently vulnerable to cluttered backgrounds. Another way to track textureless or poorly textured 3D objects is to directly align the image intensities of the 3D object between consecutive frames. Although such direct methods enable more reliable and stable tracking than local features such as edges, they are more sensitive to occlusion and less accurate than edge-based methods. We therefore propose a method that combines an edge-based method and a direct method to leverage the advantages of each approach. Experimental results show that the proposed method is much more robust to both fast camera (or object) movement and occlusion, while still working in real time at a frame rate of 18 Hz. The tracking success rate and tracking accuracy were improved by up to 84% and 1.4 pixels, respectively, compared to using either the edge-based method or the direct method alone.
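For readers who want a feel for how the two cues can be fused, below is a minimal Python sketch that combines an edge-distance term and a direct (intensity-difference) term into a single cost over a small 2D translation search. It is only a toy stand-in for the paper's full 3D pose tracker; the grayscale inputs, Canny thresholds, and the weight w_edge are illustrative assumptions.

```python
# Toy fusion of an edge-based cost and a direct (photometric) cost.
# Assumes grayscale uint8 images and a pure 2D translation search; the paper's
# method operates on full 3D object poses and weights the cues adaptively.
import numpy as np
import cv2
from scipy.ndimage import distance_transform_edt

def combined_cost(template, frame, dx, dy, w_edge=0.5):
    h, w = template.shape
    patch = frame[dy:dy + h, dx:dx + w]

    # Direct term: normalized mean squared intensity difference.
    direct = np.mean((patch.astype(np.float32) -
                      template.astype(np.float32)) ** 2) / 255.0 ** 2

    # Edge term: mean distance from template edge pixels to the nearest patch edge.
    dist = distance_transform_edt(cv2.Canny(patch, 50, 150) == 0)
    tmpl_edges = cv2.Canny(template, 50, 150) > 0
    edge = dist[tmpl_edges].mean() if tmpl_edges.any() else 0.0

    return w_edge * edge + (1.0 - w_edge) * direct

def track_translation(template, frame, search=10):
    """Brute-force search over small translations for the lowest combined cost."""
    costs = {(dx, dy): combined_cost(template, frame, dx, dy)
             for dy in range(search) for dx in range(search)}
    return min(costs, key=costs.get)
```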

Multi-mode Kernel Weight-based Object Tracking (멀티모드 커널 가중치 기반 객체 추적)

  • Kim, Eun-Sub;Kim, Yong-Goo;Choi, Yoo-Joo
    • Journal of the Korea Computer Graphics Society
    • /
    • v.21 no.4
    • /
    • pp.11-17
    • /
    • 2015
  • As the need for real-time visual object tracking grows in various application fields such as surveillance and entertainment, kernel-based mean-shift tracking has received increasing interest. One of the major issues in kernel-based mean-shift tracking is remaining robust under partial or full occlusion. This paper presents a real-time mean-shift tracker that is robust to partial occlusion by applying multi-mode local kernel weights. In the proposed method, a kernel is divided into multiple sub-kernels, and each sub-kernel has a weight determined according to its location. Experimental results show that the proposed method is more stable than previous multi-mode kernel methods under partial occlusion.
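The idea of location-dependent sub-kernel weights can be sketched as follows; the 2x2 grid, gray-level histograms, and Bhattacharyya-based weighting are illustrative assumptions rather than the paper's exact formulation. Sub-kernels whose appearance no longer matches the reference (e.g., because they are occluded) receive low weights and contribute less to the mean-shift update.

```python
# Per-sub-kernel histograms and occlusion-aware weights (illustrative sketch).
import numpy as np

def sub_histograms(patch, grid=(2, 2), bins=16):
    """Normalized gray-level histogram for each sub-kernel of the target patch."""
    h, w = patch.shape
    gh, gw = h // grid[0], w // grid[1]
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            sub = patch[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw]
            hist, _ = np.histogram(sub, bins=bins, range=(0, 256))
            hists.append(hist / (hist.sum() + 1e-12))
    return np.array(hists)

def sub_kernel_weights(ref_hists, cur_hists):
    """Bhattacharyya similarity per sub-kernel; occluded sub-kernels get low weight."""
    sims = np.array([np.sum(np.sqrt(r * c)) for r, c in zip(ref_hists, cur_hists)])
    return sims / (sims.sum() + 1e-12)
```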

Active Fusion Model with Robustness against Partial Occlusions (부분적 폐색에 강건한 활동적 퓨전 모델)

  • Lee Joong-Jae;Lee Geun-Soo;Kim Gye-Young
    • The KIPS Transactions:PartB
    • /
    • v.13B no.1 s.104
    • /
    • pp.35-46
    • /
    • 2006
  • Dynamic changes in the background and in moving objects are an important cause of occlusion when tracking moving objects, and tracking accuracy decreases markedly in the presence of occlusion. We therefore propose an active fusion model that is robust against partial occlusions caused by the background and by other objects. The active fusion model consists of a contour-based snake and a region-based snake: the former is a conventional snake model that uses the contour features of a moving object, and the latter is a regional snake model that considers the region features inside the boundary. The model first classifies the total occlusion into contour occlusion and region occlusion. It then adjusts the confidence of each model based on the calculated location and amount of occlusion, allowing it to overcome the occlusion problem. Experimental results show that the proposed method successfully tracks a moving object under partial occlusion where previous methods fail.
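The confidence adjustment described above can be illustrated with a very small sketch; treating each cue's occlusion as a scalar ratio in [0, 1] is an assumption made only for brevity, and the snake energies themselves are left abstract.

```python
# Confidence-weighted fusion of the two snake energies (illustrative sketch).
def fused_energy(e_contour, e_region, occ_contour, occ_region):
    """Weight each snake energy by the confidence of its cue (1 - occlusion ratio)."""
    w_c = 1.0 - occ_contour      # contour cue confidence
    w_r = 1.0 - occ_region       # region cue confidence
    return (w_c * e_contour + w_r * e_region) / (w_c + w_r + 1e-12)
```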

Occlusion-Robust Marker-Based Augmented Reality Using Particle Swarm Optimization (파티클 집단 최적화를 이용한 가려짐에 강인한 마커 기반 증강현실)

  • Park, Hanhoon;Choi, Junyeong;Moon, Kwang-Seok
    • Journal of the HCI Society of Korea
    • /
    • v.11 no.1
    • /
    • pp.39-45
    • /
    • 2016
  • Effective and efficient estimation of the camera pose is a core step in implementing augmented reality systems or applications. The most common approach uses markers, e.g., ARToolkit. However, marker-based methods suffer from a notorious problem: they are vulnerable to occlusion. To overcome this, this paper proposes a top-down method that iteratively estimates the current camera pose using particle swarm optimization. Experiments confirmed that the proposed method enables augmented reality to be implemented on severely occluded markers.
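A generic particle swarm optimizer over a 6-DoF pose vector gives the flavor of the top-down search; the placeholder fitness below is an assumption standing in for the paper's appearance-based comparison between the rendered marker and the (possibly occluded) input image.

```python
# Generic PSO over a 6-DoF pose [rx, ry, rz, tx, ty, tz] (illustrative sketch).
import numpy as np

def pso(fitness, dim=6, n_particles=30, iters=100, lo=-1.0, hi=1.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))            # candidate poses
    v = np.zeros_like(x)                                   # particle velocities
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([fitness(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest

# Placeholder fitness: distance to a hypothetical "true" pose (for demonstration only).
true_pose = np.array([0.1, -0.2, 0.05, 0.3, 0.1, 0.8])
print(pso(lambda p: float(np.sum((p - true_pose) ** 2))))
```

In the tracking setting, the swarm would be re-initialized around the previous frame's pose each frame, which is what makes the iterative top-down estimation robust to partially occluded markers.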

Face Recognition Robust to Occlusion via Dual Sparse Representation

  • Shin, Hyunhye;Lee, Sangyoun
    • Journal of International Society for Simulation Surgery
    • /
    • v.3 no.2
    • /
    • pp.46-48
    • /
    • 2016
  • Purpose: In the face recognition area, handling occlusion in face images is an increasingly important problem. In this paper, we propose a new face recognition algorithm based on dual sparse representation to solve this problem. Method: Each face image is partitioned into several pieces, and sparse representation is performed on each part. Then, the parts with a large sparsity concentration index are combined, and sparse representation is performed once more. Each test sample is classified using the final sparse coefficients, taking into account the correlation between the test sample and the training samples. Results: The recognition rate of the proposed algorithm is higher than that of basic sparse representation classification. Conclusion: The proposed method can be applied in real-world settings where a person must be identified reliably even when the face is disguised.
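A minimal sketch of the patch-wise sparse coding and the sparsity concentration index (SCI) used to select reliable patches is given below; the Lasso solver, patch layout, and thresholding are assumptions, not the authors' exact pipeline.

```python
# Patch-wise sparse coding and SCI-based patch selection (illustrative sketch).
import numpy as np
from sklearn.linear_model import Lasso

def sparse_code(D, y, alpha=0.01):
    """Sparse coefficients of query vector y over dictionary D (columns = training patches)."""
    return Lasso(alpha=alpha, max_iter=5000).fit(D, y).coef_

def sci(coef, labels):
    """Sparsity concentration index: close to 1 when coefficient energy concentrates on one class."""
    classes = np.unique(labels)
    k = len(classes)
    per_class = np.array([np.abs(coef[labels == c]).sum() for c in classes])
    total = np.abs(coef).sum() + 1e-12
    return (k * per_class.max() / total - 1.0) / (k - 1.0)
```

Patches whose SCI exceeds a threshold would then be concatenated and coded a second time, which is the "dual" step the abstract describes.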

Visual tracking based Discriminative Correlation Filter Using Target Separation and Detection

  • Lee, Jun-Haeng
    • Journal of the Korea Society of Computer and Information
    • /
    • v.22 no.12
    • /
    • pp.55-61
    • /
    • 2017
  • In this paper, we propose a novel tracking method using target separation and detection, based on the discriminative correlation filter (DCF), which has been widely studied recently. 'Retainability' is one of the most important factors in tracking, and several factors degrade it. In particular, fast movement and occlusion of the target frequently occur in image data, and when they do, the target may be lost and tracking cannot be retained. To maintain robust tracking, the target is separated into parts so that normal tracking continues even when part of the target is occluded. A detection algorithm is executed to find the new location of the target when it leaves the tracking range due to full occlusion or fast movement. A variety of experiments on various image data sets were conducted, and the proposed algorithm showed better performance than conventional algorithms when fast movement and occlusion of the target occur.
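A single-channel MOSSE-style correlation filter conveys the DCF core; the peak-to-sidelobe ratio check is shown only as a plausible trigger for the paper's re-detection step, and the part-based target separation is not reproduced here.

```python
# Minimal MOSSE-like correlation filter with a PSR-based occlusion check (sketch).
import numpy as np

def gaussian_peak(shape, sigma=2.0):
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))

def train_filter(patch, response, lam=1e-2):
    """Closed-form DCF in the Fourier domain: H = G * conj(F) / (F * conj(F) + lam)."""
    F, G = np.fft.fft2(patch), np.fft.fft2(response)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def detect(filter_hat, patch):
    """Correlate the filter with a new patch; a low PSR suggests occlusion or drift."""
    resp = np.real(np.fft.ifft2(filter_hat * np.fft.fft2(patch)))
    psr = (resp.max() - resp.mean()) / (resp.std() + 1e-12)   # peak-to-sidelobe ratio
    dy, dx = np.unravel_index(resp.argmax(), resp.shape)
    return (dy, dx), psr
```

When the PSR falls below a threshold, a separate detector would be run over the full frame to re-acquire the target, matching the re-detection behaviour described in the abstract.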

Techniques for Background Updating under PTZ Camera Based Surveillance

  • Jung, Sung-Hoon;Kim, Min-Hwan
    • Journal of Korea Multimedia Society
    • /
    • v.12 no.12
    • /
    • pp.1745-1754
    • /
    • 2009
  • PTZ (pan-tilt-zoom) camera-based surveillance systems are expanding their field of application thanks to their wide observable area. We aim to detect both static and moving objects in an automated workspace using a PTZ camera. For object detection we use the background difference method because of its high-quality segmentation. However, the method suffers from a 'hole' problem caused by the non-continuous surveillance of the PTZ camera and its intrinsic characteristics. Moreover, the occlusion that occurs when a moving object overlaps a static object must be handled for robust object detection. In this paper, we propose a region-based technique for updating background images, thereby overcoming the hole and occlusion problems. Experiments with real scenes verified that meaningful static and/or moving objects were detected well.
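A simplified version of region-wise background updating is sketched below; the fixed tile grid, running-average update, and thresholds are assumptions, and the paper's handling of the PTZ mosaic background is considerably more involved.

```python
# Region-based background update via background differencing (illustrative sketch).
import numpy as np

def update_background(bg, frame, diff_thresh=25, tile=32, alpha=0.05):
    """bg: float32 background image; frame: current grayscale frame (same size)."""
    fg = np.abs(frame.astype(np.float32) - bg) > diff_thresh   # foreground mask
    h, w = bg.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            region = (slice(y, min(y + tile, h)), slice(x, min(x + tile, w)))
            if fg[region].mean() < 0.05:       # tile looks like pure background
                bg[region] = (1 - alpha) * bg[region] + alpha * frame[region]
    return bg, fg
```

Updating only tiles that contain no detected foreground prevents static or occluding objects from being absorbed into the background, which is the intuition behind the region-based scheme.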

Robust Face Recognition under Limited Training Sample Scenario using Linear Representation

  • Iqbal, Omer;Jadoon, Waqas;ur Rehman, Zia;Khan, Fiaz Gul;Nazir, Babar;Khan, Iftikhar Ahmed
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.7
    • /
    • pp.3172-3193
    • /
    • 2018
  • Recently, several studies have shown that linear-representation-based approaches are very effective and efficient for image classification. One of these approaches is the collaborative representation (CR) method. Existing CR-based algorithms have two major problems that degrade their classification performance. The first problem arises from the limited number of available training samples: large variations between query and training samples, caused by illumination and expression changes, lead to poor classification performance. The second problem occurs when an image is partially corrupted (contiguous occlusion); when part of the image is corrupt, classification performance also degrades. We aim to extend the collaborative representation framework to the face recognition problem with limited training samples. Our proposed solution generates virtual samples and intra-class variations from the training data to effectively model the variations between query and training samples. For robust classification, image patches are used to compute the representation, which addresses partial occlusion and leads to more accurate classification results. The proposed method computes the representation from local regions of the images, as opposed to CR, which computes a global representation from entire images. Furthermore, the proposed solution integrates locality structure into CR using the Euclidean distance between the query and training samples. Intuitively, if the query sample can be represented by its nearest neighbours, which lie on the same linear subspace, then the resulting representation will be more discriminative and will classify the query sample accurately. Hence, our framework converts the limited-sample face recognition problem into a sufficient-sample problem using virtual samples and intra-class variations generated from the training samples, which results in improved classification accuracy, as is evident from the experimental results. Moreover, it computes the representation from local image patches for robust classification and is expected to greatly increase classification performance for the face recognition task.
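The locality-weighted collaborative representation can be written compactly as a regularized least-squares problem; the diagonal distance weighting below is one common way to encode locality and is offered as an assumption, with the virtual-sample generation and patch-based coding of the paper omitted.

```python
# Locality-weighted collaborative representation classifier (illustrative sketch).
import numpy as np

def locality_cr_classify(D, labels, y, lam=0.5):
    """D: (n_features, n_train) training matrix, labels: per-column classes, y: query vector."""
    dist = np.linalg.norm(D - y[:, None], axis=0)        # Euclidean distance to each training sample
    W = np.diag(dist / (dist.max() + 1e-12))             # far samples are penalized more
    coef = np.linalg.solve(D.T @ D + lam * (W.T @ W), D.T @ y)   # min ||y - Dc||^2 + lam ||Wc||^2
    classes = np.unique(labels)
    residuals = [np.linalg.norm(y - D[:, labels == c] @ coef[labels == c]) for c in classes]
    return classes[int(np.argmin(residuals))]
```

The query is assigned to the class whose training columns reconstruct it with the smallest residual, as in standard CR classification.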

Ship Number Recognition Method Based on An improved CRNN Model

  • Wenqi Xu;Yuesheng Liu;Ziyang Zhong;Yang Chen;Jinfeng Xia;Yunjie Chen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.3
    • /
    • pp.740-753
    • /
    • 2023
  • Text recognition in natural scene images is a challenging problem in computer vision. Accurate identification of ship number characters can effectively improve the level of ship traffic management. However, due to motion blur and text occlusion, ship number recognition accuracy struggles to meet practical requirements. To solve these problems, this paper proposes a dual-branch network based on the CRNN recognition network that couples image restoration and character recognition. A CycleGAN module is used for the blur restoration branch, and a Pix2pix module is used for the character de-occlusion branch; the two are coupled to reduce the impact of image blur and occlusion. The recovered image is then fed into the text recognition branch to improve recognition accuracy. Extensive experiments show that the model is robust and easy to train. Experiments on the CTW dataset and real ship images illustrate that our method achieves more accurate results.
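The dual-branch coupling can be illustrated with placeholder modules; the identity stand-ins below are assumptions in place of trained CycleGAN, Pix2pix, and CRNN networks, and only the routing of the image through restoration and then recognition is shown.

```python
# Dual-branch routing: restoration branches feed the recognizer (illustrative sketch).
import torch
import torch.nn as nn

class DualBranchRecognizer(nn.Module):
    def __init__(self, deblur: nn.Module, deocclude: nn.Module, recognizer: nn.Module):
        super().__init__()
        self.deblur = deblur            # e.g. a CycleGAN generator for blur restoration
        self.deocclude = deocclude      # e.g. a Pix2pix generator for occlusion recovery
        self.recognizer = recognizer    # e.g. a CRNN over the restored image

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        restored = self.deocclude(self.deblur(image))    # couple the two restoration branches
        return self.recognizer(restored)                 # character-sequence output

# Toy usage with identity placeholders; a real system plugs in trained networks.
model = DualBranchRecognizer(nn.Identity(), nn.Identity(), nn.Flatten())
print(model(torch.rand(1, 1, 32, 100)).shape)
```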