• Title/Summary/Keyword: Recognition of Researchers


Reflection-type Finger Vein Recognition for Mobile Applications

  • Zhang, Congcong;Liu, Zhi;Liu, Yi;Su, Fangqi;Chang, Jun;Zhou, Yiran;Zhao, Qijun
    • Journal of the Optical Society of Korea / v.19 no.5 / pp.467-476 / 2015
  • Finger vein recognition, a promising biometric method for identity authentication, has attracted significant attention. Considerable research focuses on transmission-type finger vein recognition, but that type of authentication is difficult to implement in mobile consumer devices; reflection-type finger vein recognition should therefore be developed. In the reflection-type vein recognition field, the majority of researchers concentrate on palm and palm-dorsa patterns, and only a few pay attention to reflection-type finger vein recognition. Thus, this paper presents a reflection-type finger vein recognition method for biometric applications that can be integrated into mobile consumer devices. A database is built to test the proposed algorithm, a novel method of region-of-interest localization for finger vein images is introduced, and a scheme for effectively extracting finger vein features is proposed. Experiments demonstrate the feasibility of reflection-type finger vein recognition.
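The abstract does not detail the paper's region-of-interest localization scheme. Purely as an illustration of what such a step does, here is a minimal sketch of one generic approach (threshold the bright finger region, then crop its bounding box); the function name and thresholding rule are assumptions, not the authors' method.

```python
import numpy as np

def locate_finger_roi(image):
    """Crop a rough region of interest: threshold at the mean
    intensity and take the bounding box of the bright (finger)
    pixels. A simplified stand-in for an ROI localization step."""
    mask = image > image.mean()
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return image[r0:r1 + 1, c0:c1 + 1]

# Toy image: dark background with a bright 4x6 "finger" patch.
img = np.zeros((10, 10))
img[3:7, 2:8] = 1.0
roi = locate_finger_roi(img)
print(roi.shape)  # (4, 6)
```

A real pipeline would refine this with edge detection along the finger contour, but the crop-to-foreground idea is the same.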

A policy study for the voice recognition technology based on elderly health care (음성인식기술의 노인간병 적용을 위한 정책연구)

  • Cho, Byung-Chul;Cheon, Sooyoung;Kim, Kab-Nyun;Yuk, Hyun-Seung
    • Journal of Digital Convergence / v.16 no.2 / pp.9-17 / 2018
  • The purpose of this study is to examine how voice recognition technology can be used to address the problems of Korea's rapidly aging population. Public support services and civilian nursing services for the elderly are expected to expand in Korea, and voice recognition technology can serve in various ways elderly users who are not familiar with media interfaces. To this end, our researchers visited Japan and examined the achievements of voice recognition technology in elderly care. In particular, caregivers greatly reduced their working hours by replacing handwritten reports with ones produced using voice recognition technology, a method that can be easily implemented in Korea. In addition, the social cost of elderly support can be gradually reduced through the development of robots equipped with voice recognition technology. Consequently, we find that voice recognition technology, when combined with artificial intelligence, offers various emotion recognition functions and opens up various policy possibilities as well.

Representative Batch Normalization for Scene Text Recognition

  • Sun, Yajie;Cao, Xiaoling;Sun, Yingying
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.7 / pp.2390-2406 / 2022
  • Scene text recognition has important application value and has attracted the interest of many researchers. Many methods have achieved good results, but most existing approaches attempt to improve scene text recognition at the image level. They work well on regular scene text; however, many obstacles remain in recognizing text in low-quality images, such as curved, occluded, or blurred text, and the uneven image quality exacerbates the difficulty of feature extraction. In addition, model test results depend heavily on the training data, so there is still room for improvement in scene text recognition methods. In this work, we present a natural scene text recognizer that improves recognition performance at the feature level, comprising feature representation and feature enhancement. For feature representation, we propose an efficient feature extractor that combines Representative Batch Normalization with ResNet; it reduces the model's dependence on the training data and improves the feature representation of different instances. For feature enhancement, we use a feature enhancement network to expand the receptive field of the feature maps so that they contain rich feature information. The enhanced representation capability helps to improve the model's recognition performance. Experiments on 7 benchmarks show that the method is highly competitive in recognizing both regular and irregular text, achieving top-1 recognition accuracy on four benchmarks: IC03, IC13, IC15, and SVTP.
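Representative Batch Normalization, proposed in prior work outside this paper, augments plain batch normalization with instance-specific centering and scaling calibration so each sample contributes its own representative statistics. The NumPy sketch below is a heavily simplified illustration of that idea under our own parameter assumptions, not the authors' implementation.

```python
import numpy as np

def representative_batch_norm(x, w_c=0.1, w_s=1.0, b_s=0.0, eps=1e-5):
    """Simplified Representative Batch Normalization for a batch of
    feature vectors x of shape (batch, features).

    1) Centering calibration: add a weighted per-instance mean.
    2) Standard batch normalization over the batch dimension.
    3) Scaling calibration: gate the normalized output with a
       sigmoid of a per-instance statistic.
    """
    inst_mean = x.mean(axis=1, keepdims=True)     # per-instance statistic
    x = x + w_c * inst_mean                       # centering calibration
    mu = x.mean(axis=0, keepdims=True)            # batch statistics
    var = x.var(axis=0, keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)         # plain batch norm
    inst_var = x.var(axis=1, keepdims=True)
    gate = 1.0 / (1.0 + np.exp(-(w_s * inst_var + b_s)))
    return x_hat * gate                           # scaling calibration

rng = np.random.default_rng(0)
out = representative_batch_norm(rng.normal(size=(8, 4)))
print(out.shape)  # (8, 4)
```

In a trained network the calibration weights would be learnable parameters per channel rather than the scalars assumed here.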

Position Recognition and User Identification System Using Signal Strength Map in Home Healthcare Based on Wireless Sensor Networks (WSNs) (무선 센서네트워크 기반 신호강도 맵을 이용한 재택형 위치인식 및 사용자 식별 시스템)

  • Yang, Yong-Ju;Lee, Jung-Hoon;Song, Sang-Ha;Yoon, Young-Ro
    • Journal of Biomedical Engineering Research / v.28 no.4 / pp.494-502 / 2007
  • Ubiquitous location-based services (u-LBS) are expected to become an important class of services, able to recognize an object's position easily at any time and anywhere, and many researchers are currently studying position recognition and tracking. This paper presents a position recognition and user identification system. Position recognition is based on a signal strength map, a database built in advance from empirically measured received signal strength indicator (RSSI) values. The user identification system automatically controls instruments located in the home, and users can freely take measurements of their body signals. We implemented a multi-hop routing method using star-mesh networks, with sensor devices that comply with the IEEE 802.15.4 specification, namely the Nano-24 modules from Octacomm Co., Ltd. The RSSI is a very important factor in position recognition analysis and is used to decide position recognition and user identification in a narrow indoor space. In experiments, we analyze the properties of the RSSI and derive the parameters for position recognition. The results show that the RSSI value is attenuated as distance increases, which also characterizes the radio frequency (RF) signal. Moreover, we implemented a monitoring program in Microsoft C#. The proposed methods are expected to help prevent sudden death and accidents in the home.
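A signal strength map is, in essence, a fingerprint database: a live RSSI vector is matched against previously measured vectors to decide the position. The abstract does not give the exact matching rule, so the sketch below uses a plain nearest-neighbor lookup with hypothetical positions and anchor-node readings.

```python
import numpy as np

# Hypothetical signal-strength map: position -> RSSI (dBm) from 3 anchor nodes.
rssi_map = {
    "living_room": np.array([-40.0, -65.0, -70.0]),
    "bedroom":     np.array([-70.0, -45.0, -60.0]),
    "kitchen":     np.array([-60.0, -70.0, -42.0]),
}

def locate(rssi_sample):
    """Return the map position whose stored fingerprint is closest
    (Euclidean distance) to the measured RSSI vector."""
    return min(rssi_map,
               key=lambda pos: np.linalg.norm(rssi_map[pos] - rssi_sample))

print(locate(np.array([-42.0, -63.0, -71.0])))  # living_room
```

Because RSSI attenuates with distance, as the experiments in the paper confirm, nearby measurements produce similar fingerprints and the lookup is stable in a small indoor space.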

Arabic Words Extraction and Character Recognition from Picturesque Image Macros with Enhanced VGG-16 based Model Functionality Using Neural Networks

  • Ayed Ahmad Hamdan Al-Radaideh;Mohd Shafry bin Mohd Rahim;Wad Ghaban;Majdi Bsoul;Shahid Kamal;Naveed Abbas
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.7 / pp.1807-1822 / 2023
  • Innovation and rapidly increasing functionality in user-friendly smartphones have encouraged shutterbugs to capture picturesque image macros at work or during travel. Formal signboards are placed with marketing objectives and are enriched with text to attract people. Extracting and recognizing text from natural images is an emerging research issue that needs consideration. Compared to conventional optical character recognition (OCR), the complex backgrounds, implicit noise, lighting, and orientation of these scenic text photos make the problem more difficult, and Arabic scene text extraction and recognition adds further complications. The method described in this paper uses a two-phase methodology to extract Arabic text, with word-boundary awareness, from scenic images with varying text orientations. The first stage uses a convolutional autoencoder, and the second uses Arabic Character Segmentation (ACS) followed by traditional two-layer neural networks for recognition. This study also presents how an Arabic training and synthetic dataset can be created to exemplify superimposed text in different scene images. For this purpose, a dataset of 10k cropped images containing Arabic text was created for the detection phase, and a 127k Arabic character dataset for the recognition phase. The phase-1 labels were generated from an Arabic corpus of 15k quotes and sentences. The Arabic Word Awareness Region Detection (AWARD) approach offers high flexibility in identifying complex Arabic text scene images, such as texts that are arbitrarily oriented, curved, or deformed. Our experiments show that the system achieves 91.8% word segmentation accuracy and 94.2% character recognition accuracy. We believe that in the future researchers will excel in this field, treating text images in any language to improve quality or reduce noise by enhancing the functionality of the VGG-16 based model using neural networks.
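The abstract does not detail the Arabic Character Segmentation (ACS) step. A classic baseline that segmentation stages of this kind often resemble is vertical projection profiling, where a binary text line is cut at empty columns; the sketch below shows that baseline, not the paper's exact method (Arabic's connected script would need additional cut heuristics).

```python
import numpy as np

def segment_by_projection(binary_line):
    """Split a binary text-line image (rows x cols, 1 = ink) into
    character candidates at runs of empty columns. A generic
    projection-profile baseline, shown for illustration only."""
    profile = binary_line.sum(axis=0)      # ink count per column
    segments, start = [], None
    for col, ink in enumerate(profile):
        if ink > 0 and start is None:
            start = col                    # a segment begins
        elif ink == 0 and start is not None:
            segments.append(binary_line[:, start:col])
            start = None
    if start is not None:
        segments.append(binary_line[:, start:])
    return segments

# Toy line with two ink blobs separated by a blank column.
line = np.array([[1, 1, 0, 1],
                 [1, 1, 0, 1]])
print(len(segment_by_projection(line)))  # 2
```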

A Method for Finger Vein Recognition using a New Matching Algorithm (새로운 정합 알고리즘을 이용한 손가락 정맥 인식 방법)

  • Kim, Hee-Sung;Cho, Jun-Hee
    • Journal of KIISE:Software and Applications / v.37 no.11 / pp.859-865 / 2010
  • In this paper, a new method for finger vein recognition is proposed. Researchers have recently become interested in finger vein recognition, since it avoids the forgery problems of fingerprint recognition and the inconvenience of capturing iris images for iris recognition. The vein images are processed to obtain line-shaped vein images through local histogram equalization and a thinning process. These thinned vein images are then matched using a new matching algorithm, named the HS (HeeSung) matching algorithm, which yields an excellent recognition rate when applied to curvilinear images produced by thinning or edge detection. In our experiment, the recognition rate reached 99.20% on 650 finger vein images (130 persons × 5 images each), and matching one pair of images takes only about 60 milliseconds.
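The HS matching algorithm itself is not specified in the abstract. To make the matching step concrete, here is a generic overlap score for thinned binary images that counts skeleton pixels having a counterpart within a small shift tolerance; this is an illustrative assumption, not the authors' algorithm.

```python
import numpy as np

def overlap_score(a, b, tol=1):
    """Fraction of skeleton pixels in `a` that find a matching pixel
    in `b` within +/-tol pixel shifts. A generic tolerance-based
    matcher for thinned binary images, shown for illustration."""
    hits = np.zeros_like(a, dtype=bool)
    for dy in range(-tol, tol + 1):
        for dx in range(-tol, tol + 1):
            shifted = np.roll(np.roll(b, dy, axis=0), dx, axis=1)
            hits |= (a == 1) & (shifted == 1)
    return hits.sum() / max(a.sum(), 1)

a = np.eye(5, dtype=int)       # a diagonal skeleton line
b = np.roll(a, 1, axis=1)      # the same skeleton shifted one pixel
print(overlap_score(a, b))     # 1.0
```

The shift tolerance is what makes such scores robust to the small misalignments that remain after finger placement, which is one reason curve-like thinned images match reliably.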

Development of Electrocardiogram Identification Algorithm for a Biometric System (생체 인식 시스템을 위한 심전도 개인인식 알고리즘 개발)

  • Lee, Sang-Joon;Kim, Jin-Kwon;Lee, Young-Bum;Lee, Myoung-Ho
    • Journal of Biomedical Engineering Research / v.31 no.5 / pp.365-374 / 2010
  • This paper presents a personal identification algorithm using the ECG, which has recently been studied by a few researchers. Previously published algorithms can be classified into two approaches: one analyzes ECG features, and the other performs morphological analysis of the ECG. The main characteristic of the proposed algorithm is that it combines the two. The algorithm consists of training and testing procedures. In the training procedure, the features of all subjects' ECGs are extracted, and PCA is performed for morphological analysis. In the testing procedure, 6 candidate ECGs are chosen by morphological analysis, and feature analysis among the candidates is then performed for final recognition. We chose 18 ECG files from the MIT-BIH Normal Sinus Rhythm Database to estimate the algorithm's performance. The algorithm extracts 100 heartbeats from each ECG file, using 40 heartbeats for training and 60 for testing. The proposed algorithm shows clearly superior performance on all ECG data, achieving a 90.96% heartbeat recognition rate and a 100% ECG recognition rate.
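The two-stage scheme described above (morphological PCA screening, then feature-level comparison among the candidates) can be sketched for its screening half: project heartbeats onto principal components learned from training beats and keep the nearest subjects as candidates. The dimensions, component count, and candidate count below are illustrative assumptions.

```python
import numpy as np

def pca_fit(beats, k=2):
    """Learn a k-component PCA basis from training heartbeats (rows)."""
    mean = beats.mean(axis=0)
    _, _, vt = np.linalg.svd(beats - mean, full_matrices=False)
    return mean, vt[:k]

def candidate_subjects(templates, query, mean, basis, n_candidates=2):
    """Project per-subject template beats and a query beat into PCA
    space and return the n nearest subject ids: the morphological
    screening stage (feature-level comparison would follow)."""
    q = basis @ (query - mean)
    dists = {sid: np.linalg.norm(basis @ (t - mean) - q)
             for sid, t in templates.items()}
    return sorted(dists, key=dists.get)[:n_candidates]

rng = np.random.default_rng(1)
train = rng.normal(size=(40, 8))          # 40 training beats, 8 samples each
mean, basis = pca_fit(train)
templates = {"s1": train[0], "s2": train[1], "s3": train[2]}
cands = candidate_subjects(templates, train[0] + 0.01, mean, basis)
print(cands)
```

Screening in a low-dimensional PCA space keeps the expensive feature comparison restricted to a handful of plausible identities.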

A Study on Visual Perception based Emotion Recognition using Body-Activity Posture (사용자 행동 자세를 이용한 시각계 기반의 감정 인식 연구)

  • Kim, Jin-Ok
    • The KIPS Transactions:PartB / v.18B no.5 / pp.305-314 / 2011
  • Research into the visual perception of human emotion for intention recognition has traditionally focused on facial expressions. Recently, researchers have turned to the more challenging field of emotional expression through body posture or activity. The proposed work approaches recognition of basic emotional categories from body postures using a neural model based on the visual perception mechanisms of neurophysiology. In keeping with information-processing models of the visual cortex, this work constructs a biologically plausible hierarchy of neural detectors that can discriminate 6 basic emotional states from static views of the associated body postures. The proposed model, which is tolerant of parameter variations, demonstrates its feasibility in an evaluation against human test subjects on a set of body postures and activities.

Multimodal Attention-Based Fusion Model for Context-Aware Emotion Recognition

  • Vo, Minh-Cong;Lee, Guee-Sang
    • International Journal of Contents / v.18 no.3 / pp.11-20 / 2022
  • Human emotion recognition is an exciting topic that has attracted many researchers for a long time. In recent years, there has been increasing interest in exploiting contextual information for emotion recognition. Previous explorations in psychology show that emotional perception is affected by facial expressions as well as by contextual information from the scene, such as human activities, interactions, and body poses. Those explorations initiated a trend in computer vision of exploring the critical role of context by treating contextual cues as modalities for inferring the predicted emotion along with facial expressions. However, contextual information has not been fully exploited: the scene emotion created by the surrounding environment can shape how people perceive emotion. Moreover, additive fusion in multimodal training is not practical, because the contributions of the modalities to the final prediction are not equal. The purpose of this paper is to contribute to this growing area of research by exploring the effectiveness of the emotional scene gist in the input image for inferring the emotional state of the primary target. The emotional scene gist includes emotion, emotional feelings, and actions or events that directly trigger emotional reactions in the input image. We also present an attention-based fusion network that combines multimodal features based on their impact on the target emotional state. We demonstrate the effectiveness of the method through a significant improvement on the EMOTIC dataset.
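The attention-based fusion idea above, weighting each modality by its estimated impact instead of adding features uniformly, can be sketched as follows. The score computation here is a plain dot product with a fixed vector followed by softmax, a simplifying assumption standing in for the paper's learned attention network.

```python
import numpy as np

def attention_fuse(modality_feats, score_vec):
    """Fuse modality feature vectors (rows of `modality_feats`) via
    a softmax over per-modality attention scores, rather than an
    unweighted additive fusion."""
    scores = modality_feats @ score_vec       # one score per modality
    w = np.exp(scores - scores.max())
    w /= w.sum()                              # softmax attention weights
    return w @ modality_feats, w              # weighted sum + weights

feats = np.array([[1.0, 0.0],    # e.g. face features
                  [0.0, 1.0],    # e.g. body-pose features
                  [0.5, 0.5]])   # e.g. scene-context features
fused, weights = attention_fuse(feats, score_vec=np.array([2.0, 0.0]))
print(weights.argmax())  # 0  (the face modality dominates here)
```

The point of the softmax weights is exactly the paper's critique of additive fusion: modalities contribute unequally, and the network can learn how much each one should count.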

Detecting Complex 3D Human Motions with Body Model Low-Rank Representation for Real-Time Smart Activity Monitoring System

  • Jalal, Ahmad;Kamal, Shaharyar;Kim, Dong-Seong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.3 / pp.1189-1204 / 2018
  • Detecting and capturing 3D human structures from intensity-based image sequences is an inherently challenging problem that has attracted the attention of several researchers, especially in real-time activity recognition (Real-AR). Real-AR systems have been significantly enhanced by depth intensity sensors, which provide richer information than the RGB video sensors used in conventional systems. This study proposes a depth-based, routine-logging Real-AR system that identifies daily human activity routines and makes the surroundings an intelligent living space. Our system comprises two stages: data collection with a depth camera and feature extraction based on joint information, followed by training and recognition of each activity. In addition, the recognition mechanism locates and pinpoints the learned activities and produces routine logs. Evaluation on depth datasets (a self-annotated dataset and MSRAction3D) demonstrated that the proposed system achieves better recognition rates and robustness compared to state-of-the-art methods. Our Real-AR system should be feasibly accessible and permanently usable in behavior-monitoring applications, humanoid-robot systems, and e-medical therapy systems.
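Joint-based feature extraction from depth-camera skeletons is commonly done with pairwise joint distances, which are invariant to where the person stands. The sketch below shows that common choice under our own assumptions; the paper's exact features and low-rank body-model representation are not reproduced here.

```python
import numpy as np

def pairwise_joint_features(joints):
    """Build a translation-invariant feature vector from 3D joint
    positions (n_joints x 3) using all pairwise distances. A common
    choice for skeleton features, shown for illustration."""
    n = len(joints)
    feats = [np.linalg.norm(joints[i] - joints[j])
             for i in range(n) for j in range(i + 1, n)]
    return np.array(feats)

# Toy 3-joint skeleton; translating it leaves the features unchanged.
skel = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 2.0, 0.0]])
f1 = pairwise_joint_features(skel)
f2 = pairwise_joint_features(skel + np.array([5.0, -3.0, 2.0]))
print(np.allclose(f1, f2))  # True
```

Translation invariance matters in a routine-logging setting because the same activity must produce the same features anywhere in the room.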