• Title/Summary/Keyword: Multiple feature detection


Dual Detection-Guided Newborn Target Intensity Based on Probability Hypothesis Density for Multiple Target Tracking

  • Gao, Li;Ma, Yongjie
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.10
    • /
    • pp.5095-5111
    • /
    • 2016
  • The Probability Hypothesis Density (PHD) filter is a suboptimal approximation and tractable alternative to the multi-target Bayesian filter based on random finite sets. However, the PHD filter fails to track newborn targets when the target birth intensity is unknown prior to tracking. In this paper, a dual detection-guided newborn target intensity PHD algorithm is developed to solve the problem, where two schemes, namely a newborn target intensity estimation scheme and an improved measurement-driven scheme, are proposed. First, the newborn target intensity estimation scheme, consisting of the Dirichlet distribution with a negative exponent parameter and the target velocity feature, is used to recursively estimate the target birth intensity. Then, an improved measurement-driven scheme is introduced to reduce the error in the estimated number of targets and the computational load. Simulation results demonstrate that the proposed algorithm achieves good performance in terms of target states, target number, and computational load when the newborn target intensity is not predefined in multi-target tracking systems.
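The core difficulty the abstract describes is deciding where to place birth components when the birth intensity is unknown. A minimal, generic measurement-driven birth sketch (not the paper's Dirichlet-based scheme; `gate` and `birth_weight` are illustrative parameters) would place a Gaussian birth component at each measurement far from every surviving track:

```python
import numpy as np

def measurement_driven_birth(measurements, survivors, gate=10.0, birth_weight=0.1):
    """Place a Gaussian birth component at each measurement that falls
    outside the gate of every surviving target estimate. This is a generic
    measurement-driven birth heuristic, not the paper's Dirichlet scheme."""
    births = []
    for z in measurements:
        if all(np.linalg.norm(np.asarray(z, dtype=float) - np.asarray(x, dtype=float)) > gate
               for x in survivors):
            births.append({"mean": np.asarray(z, dtype=float),
                           "cov": np.eye(2) * 25.0,   # broad prior on position
                           "weight": birth_weight})
    return births
```

The measurement near the surviving track at (1, 1) is gated out; only the distant measurement spawns a birth component.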

Face Recognition Using a Facial Recognition System

  • Almurayziq, Tariq S;Alazani, Abdullah
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.9
    • /
    • pp.280-286
    • /
    • 2022
  • A facial recognition system is a biometric application. It is simpler to apply, and its working range is broader, than fingerprints, iris scans, signatures, etc. The system combines two technologies: face detection and face recognition. This study aims to develop a facial recognition system that recognizes people's faces. A facial recognition system maps facial characteristics from photos or videos and compares the information with a given facial database to find a match, which helps identify a face. The developed system records several images, processes the recorded images, checks for any match in the database, and returns the result. It can recognize multiple faces in live recordings.
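The matching step the abstract describes (compare extracted facial characteristics against a database) can be sketched as nearest-neighbor matching of face embeddings; the embedding front end, the `threshold` value, and the labels here are illustrative assumptions, not the paper's system:

```python
import numpy as np

def match_face(query_embedding, database, threshold=0.6):
    """Return the database label whose stored embedding is closest to the
    query (Euclidean distance), or None if no entry is within `threshold`.
    Embeddings are assumed to come from some face-encoding front end."""
    best_label, best_dist = None, float("inf")
    for label, emb in database.items():
        d = float(np.linalg.norm(np.asarray(query_embedding, dtype=float)
                                 - np.asarray(emb, dtype=float)))
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label if best_dist <= threshold else None
```

Returning `None` when no entry is close enough is what lets the system report "no match in the database" rather than forcing the nearest label.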

Classification Method of Harmful Image Content Rates in Internet (인터넷에서의 유해 이미지 컨텐츠 등급 분류 기법)

  • Nam, Taek-Yong;Jeong, Chi-Yoon;Han, Chi-Moon
    • Journal of KIISE:Information Networking
    • /
    • v.32 no.3
    • /
    • pp.318-326
    • /
    • 2005
  • This paper presents an image feature extraction method and an image classification technique that grade harmful images from the Internet into four content classes according to the characteristics of the image: harmless, sex-appealing, harmful (nude), and seriously harmful (adult). We suggest a skin-area detection technique to recognize whether an input image is harmful. We also propose an ROI detection algorithm that establishes a region of interest to reduce noise, extracts the degree of harmfulness effectively, and defines the characteristics inside the ROI. This paper then suggests a multiple-SVM training method that creates an image classification model for the four classes defined above, and a multiple-SVM classification algorithm that categorizes the harmful grade of input data with the suggested model. In particular, we build a skin likelihood image from the shape information of the skin-area image and the color information of the skin-ratio image, and propose an image feature vector, used in the training stage, obtained by resizing the skin likelihood image. Finally, the paper presents a performance evaluation of the experimental results and demonstrates the suitability of grading images with the proposed feature classification algorithm.
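The skin-area detection step can be sketched with a standard color-space rule: the fraction of pixels whose chroma falls inside a commonly used skin cluster in YCbCr. The bounds below are a textbook heuristic, not the paper's trained model:

```python
import numpy as np

def skin_ratio(ycbcr_image):
    """Fraction of pixels whose (Cb, Cr) values fall inside a commonly used
    skin cluster (Cb in [77, 127], Cr in [133, 173]). The exact bounds are
    a widely cited heuristic, not the paper's model."""
    cb = ycbcr_image[..., 1]
    cr = ycbcr_image[..., 2]
    mask = (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
    return float(mask.mean())
```

A high skin ratio alone does not make an image harmful, which is why the paper feeds richer shape and color features into the multiple-SVM stage rather than thresholding this value directly.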

Convolutional Neural Network-based System for Vehicle Front-Side Detection (컨볼루션 신경망 기반의 차량 전면부 검출 시스템)

  • Park, Young-Kyu;Park, Je-Kang;On, Han-Ik;Kang, Dong-Joong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.11
    • /
    • pp.1008-1016
    • /
    • 2015
  • This paper proposes a method for detecting the front side of vehicles. The method can find the side of a car bearing a license plate even against complicated and cluttered backgrounds. A convolutional neural network (CNN) is used to solve the detection problem as a unified framework combining feature detection, classification, searching, and localization estimation, improving the reliability of the system while remaining simple to use. The proposed CNN structure avoids a sliding-window search for vehicle locations and reduces the computing time to achieve real-time processing. Multiple network responses for the vehicle position are further processed by a weighted clustering and probabilistic threshold decision method. Experiments using real images of parking lots show the reliability of the method.
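The post-processing step (weighted clustering of multiple network responses plus a threshold decision) can be sketched greedily: take the strongest response, merge nearby responses into a score-weighted mean, and keep the cluster only if its total score passes a threshold. This is a generic NMS-style sketch, not the authors' exact procedure; `radius` and `threshold` are illustrative:

```python
import numpy as np

def cluster_responses(positions, scores, radius=20.0, threshold=0.5):
    """Greedy weighted clustering of detector responses: repeatedly take the
    highest-scoring unused response, average all responses within `radius`
    (weighted by score), and keep the cluster if its summed score passes
    `threshold`."""
    pts = np.asarray(positions, dtype=float)
    sc = np.asarray(scores, dtype=float)
    used = np.zeros(len(pts), dtype=bool)
    detections = []
    for i in np.argsort(-sc):
        if used[i]:
            continue
        near = (np.linalg.norm(pts - pts[i], axis=1) <= radius) & ~used
        used |= near
        w = sc[near]
        if w.sum() >= threshold:
            detections.append(tuple((pts[near] * w[:, None]).sum(0) / w.sum()))
    return detections
```

Weak isolated responses (total score below the threshold) are dropped, which plays the role of the probabilistic threshold decision in the abstract.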

Enhancing the Reliability of Wi-Fi Network Using Evil Twin AP Detection Method Based on Machine Learning

  • Seo, Jeonghoon;Cho, Chaeho;Won, Yoojae
    • Journal of Information Processing Systems
    • /
    • v.16 no.3
    • /
    • pp.541-556
    • /
    • 2020
  • Wireless networks have become integral to society as they provide mobility and scalability advantages. However, their disadvantage is that they cannot control the media, which makes them vulnerable to various types of attacks. One example of such attacks is the evil twin access point (AP) attack, in which an authorized AP is impersonated by mimicking its service set identifier (SSID) and media access control (MAC) address. Evil twin APs are a major source of deception in wireless networks, facilitating message forgery and eavesdropping. Hence, it is necessary to detect them rapidly. To this end, numerous methods using clock skew have been proposed for evil twin AP detection. However, clock skew is difficult to calculate precisely because wireless networks are vulnerable to noise. This paper proposes an evil twin AP detection method that uses a multiple-feature-based machine learning classification algorithm. The features used in the proposed method are clock skew, channel, received signal strength, and duration. The results of experiments conducted indicate that the proposed method has an evil twin AP detection accuracy of 100% using the random forest algorithm.
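The classification step can be sketched with scikit-learn's RandomForestClassifier on the four features the abstract names. The feature rows below are synthetic values for illustration only; real features come from captured beacon frames:

```python
from sklearn.ensemble import RandomForestClassifier

# Toy feature rows: [clock_skew, channel, rssi, duration]; label 1 = evil twin.
# These values are invented for illustration, not taken from the paper.
X = [[0.01, 6, -40, 120], [0.02, 6, -42, 118],
     [0.90, 11, -70, 300], [0.85, 11, -68, 310]]
y = [0, 0, 1, 1]

clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
pred = clf.predict([[0.88, 11, -69, 305]])  # an unseen AP fingerprint
```

Combining several features is what makes the approach robust: clock skew alone is noisy, but the forest can fall back on channel, signal strength, and duration when skew estimates are imprecise.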

Forgery Detection Scheme Using Enhanced Markov Model and LBP Texture Operator in Low Quality Images (저품질 이미지에서 확장된 마르코프 모델과 LBP 텍스처 연산자를 이용한 위조 검출 기법)

  • Agarwal, Saurabh;Jung, Ki-Hyun
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.31 no.6
    • /
    • pp.1171-1179
    • /
    • 2021
  • Image forensics is performed to check image authenticity. In this paper, a robust scheme to detect median filtering in low-quality images is discussed. Detecting median filtering assists overall image forensics. Improved spatial statistical features are extracted from the image to classify pristine and median-filtered images. The image array data is rescaled to enhance the spatial statistical information. Features are extracted using a Markov model on the enhanced spatial statistics, and multiple difference arrays in different directions are considered for a robust feature set. Further, texture operator features are combined to increase the detection accuracy, and an SVM binary classifier is applied to train the classification model. Experimental results are promising for low-quality JPEG-compressed images.
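The LBP texture operator the title refers to can be sketched in its basic 3x3 form: each interior pixel is encoded as an 8-bit pattern of neighbor-versus-center comparisons. This is the standard operator, not the paper's enhanced variant:

```python
import numpy as np

def lbp_8neighbor(img):
    """Basic 3x3 LBP: each interior pixel becomes an 8-bit code, one bit
    per neighbor, set when neighbor >= center. Returns an array two pixels
    smaller than the input in each dimension."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code
```

Histograms of these codes give the texture features that the paper concatenates with Markov-model statistics before SVM training.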

Cascaded-Hop For DeepFake Videos Detection

  • Zhang, Dengyong;Wu, Pengjie;Li, Feng;Zhu, Wenjie;Sheng, Victor S.
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.5
    • /
    • pp.1671-1686
    • /
    • 2022
  • Face manipulation tools, represented by Deepfake, have threatened the security of people's biometric identity information. In particular, manipulation tools based on deep learning have brought great challenges to Deepfake detection. There are many solutions for Deepfake detection based on traditional machine learning and advanced deep learning, but most of these detectors perform poorly when evaluated on datasets of different quality. In this paper, to build high-quality Deepfake datasets, we provide a preprocessing method based on image pixel-matrix features to eliminate similar images, and use the residual channel attention network (RCAN) to resize the images. Notably, we also describe a Deepfake detector named Cascaded-Hop, which is based on the PixelHop++ system and the successive subspace learning (SSL) model. Fed with the preprocessed datasets, Cascaded-Hop achieves good classification results on different manipulation types and multiple quality datasets. In experiments on FaceForensics++ and Celeb-DF, the AUC (area under the curve) results of our proposed method are comparable to state-of-the-art models.
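The similar-image elimination step can be sketched as a simple pixel-matrix comparison: keep a frame only when it differs enough from the last kept frame. The mean-absolute-difference criterion and `tol` value here are a stand-in for the paper's pixel-matrix feature, chosen for illustration:

```python
import numpy as np

def drop_near_duplicates(frames, tol=2.0):
    """Keep a frame only if its mean absolute pixel difference from the
    last kept frame exceeds `tol`; consecutive near-identical frames are
    discarded. A simple stand-in for pixel-matrix-based dedup."""
    kept = []
    for f in frames:
        f = np.asarray(f, dtype=float)
        if not kept or np.abs(f - kept[-1]).mean() > tol:
            kept.append(f)
    return kept
```

Removing near-duplicate video frames keeps the training set from being dominated by redundant samples of the same face pose.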

Video Expression Recognition Method Based on Spatiotemporal Recurrent Neural Network and Feature Fusion

  • Zhou, Xuan
    • Journal of Information Processing Systems
    • /
    • v.17 no.2
    • /
    • pp.337-351
    • /
    • 2021
  • Automatically recognizing facial expressions in video sequences is a challenging task because there is little direct correlation between facial features and subjective emotions in video. To overcome the problem, a video facial expression recognition method using a spatiotemporal recurrent neural network and feature fusion is proposed. Firstly, the video is preprocessed. Then, a double-layer cascade structure is used to detect a face in a video image. In addition, two deep convolutional neural networks are used to extract the temporal and spatial facial features in the video. The spatial convolutional neural network extracts the spatial information features from each frame of the static expression images in the video. The temporal convolutional neural network extracts the dynamic information features from the optical flow information across multiple frames of expression images in the video. A multiplication fusion is performed on the spatiotemporal features learned by the two deep convolutional neural networks. Finally, the fused features are input to a support vector machine to perform the facial expression classification task. The experimental results on the eNTERFACE, RML, and AFEW6.0 datasets show that the recognition rates obtained by the proposed method are as high as 88.67%, 70.32%, and 63.84%, respectively. Comparative experiments show that the proposed method obtains higher recognition accuracy than other recently reported methods.
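"Multiplication fusion" of the two feature streams can be read as an element-wise (Hadamard) product of the spatial and temporal feature vectors; the L2 normalization below is an added assumption to keep the two streams on a comparable scale, not a detail stated in the abstract:

```python
import numpy as np

def fuse_multiplicative(spatial_feat, temporal_feat):
    """Element-wise product of the spatial and temporal feature vectors
    after L2 normalization; the fused vector would then feed the SVM
    classifier. Normalization is an illustrative assumption."""
    s = np.asarray(spatial_feat, dtype=float)
    t = np.asarray(temporal_feat, dtype=float)
    s = s / (np.linalg.norm(s) + 1e-12)
    t = t / (np.linalg.norm(t) + 1e-12)
    return s * t
```

Unlike concatenation, the product keeps the fused dimensionality equal to that of each stream and emphasizes dimensions where both streams respond.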

Investigation of Airborne LIDAR Intensity data

  • Chang Hwijeong;Cho Woosug
    • Proceedings of the KSRS Conference
    • /
    • 2004.10a
    • /
    • pp.646-649
    • /
    • 2004
  • A LiDAR (Light Detection and Ranging) system can record intensity data as well as range data. Recently, LiDAR intensity data have been widely used for land-cover classification, as ancillary data for feature extraction, for vegetation species identification, and so on. Since the intensity return value is associated with several factors, the same feature is not consistent within a single flight or across multiple flights. This paper investigated the correlation between intensity and range data. Once the effect of range was determined, single-flight-line and multiple-flight-line normalization were performed with an empirical function derived from the relationship between range and return intensity.
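A range normalization of the kind the abstract describes can be sketched with the standard radiometric assumption that received power falls off roughly with the square of range, so intensities are scaled to a common reference range. The paper fits an empirical function whose exponent may differ; 2.0 and the reference range of 1000 m are illustrative defaults:

```python
import numpy as np

def normalize_intensity(intensity, rng, ref_range=1000.0, exponent=2.0):
    """Scale LiDAR return intensities to a common reference range,
    assuming intensity ~ 1/range**exponent. The exponent of the paper's
    empirically fitted function may differ from the default 2.0."""
    intensity = np.asarray(intensity, dtype=float)
    rng = np.asarray(rng, dtype=float)
    return intensity * (rng / ref_range) ** exponent
```

After this correction, returns from the same surface type at different ranges (or from overlapping flight lines flown at different altitudes) become directly comparable.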


Real-Time Panoramic Video Streaming Technique with Multiple Virtual Cameras (다중 가상 카메라의 실시간 파노라마 비디오 스트리밍 기법)

  • Ok, Sooyol;Lee, Suk-Hwan
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.4
    • /
    • pp.538-549
    • /
    • 2021
  • In this paper, we introduce a technique for real-time 360-degree panoramic video streaming with multiple virtual cameras. The proposed technique consists of generating 360-degree panoramic video data by ORB feature point detection, texture transformation, panoramic video data compression, and RTSP-based video streaming. In particular, the generation of 360-degree panoramic video data and the texture transformation are accelerated by CUDA for complex processing such as camera calibration, stitching, blending, and encoding. Our experiments evaluated the frames per second (fps) of the transmitted 360-degree panoramic video. The results verified that our technique achieves at least 30 fps at 4K output resolution, which indicates that it can both generate and transmit 360-degree panoramic video data in real time.
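The blending step of the stitching pipeline can be sketched in its simplest form: two horizontally adjacent views are merged over an overlap region with a linear alpha ramp. This is a CPU/NumPy illustration of the operation the paper accelerates with CUDA, with a fixed known overlap rather than one recovered from ORB feature matches:

```python
import numpy as np

def linear_blend(left, right, overlap):
    """Blend two horizontally adjacent views over `overlap` columns with a
    linear alpha ramp (weight 1->0 for the left view across the seam).
    Assumes the views are already aligned and share `overlap` columns."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    alpha = np.linspace(1.0, 0.0, overlap)          # weight for the left view
    seam = left[:, -overlap:] * alpha + right[:, :overlap] * (1.0 - alpha)
    return np.hstack([left[:, :-overlap], seam, right[:, overlap:]])
```

The per-pixel multiply-add across the seam is embarrassingly parallel, which is why this stage maps so well onto a CUDA kernel in the real system.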