• Title/Summary/Keyword: Character network

Jungian Character Network in Growing Other Character Archetypes in Films

  • Han, Youngsue
    • International Journal of Contents / v.15 no.2 / pp.13-19 / 2019
  • This research presents a clear visual outline of character influence relations in the creation of Jungian character archetypes in films, using R-based computational tools. It contributes to the integration of Jungian analytical psychology into film studies by revealing character network relations in film. The paper examines character archetypes and their influence on the development of other character archetypes in films, applying network analysis to Lynn Schmidt's analysis of 45 master characters in films. In addition, it conducts a character network analysis and visualization experiment with the open-source R software, offering an easily reproducible tutorial for scholars in the humanities. This pioneering work could encourage academic communities in the humanities to adopt data science actively in their research and education.
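The influence-network idea described above can be sketched without R. A minimal pure-Python version (using an invented cast of archetypes, not Schmidt's actual 45 master characters) builds a directed influence graph and ranks archetypes by how many others they help develop:

```python
# Minimal sketch of a character influence network (hypothetical data,
# not Schmidt's 45 master characters). Each edge points from an
# influencing archetype to the archetype it helps develop.
influences = [
    ("Mentor", "Hero"),
    ("Shadow", "Hero"),
    ("Trickster", "Hero"),
    ("Mentor", "Ally"),
    ("Herald", "Hero"),
]

def out_degree(edges):
    """Count outgoing influence links per archetype."""
    counts = {}
    for src, _dst in edges:
        counts[src] = counts.get(src, 0) + 1
    return counts

ranking = sorted(out_degree(influences).items(), key=lambda kv: -kv[1])
print(ranking[0])  # ('Mentor', 2): the most influential archetype here
```

A real analysis would load the full influence table and use a graph library for layout and visualization; the counting logic stays the same.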

An Improved Method of Character Network Analysis for Literary Criticism: A Case Study of

  • Kwon, Ho-Chang;Shim, Kwang-Hyun
    • International Journal of Contents / v.13 no.3 / pp.43-48 / 2017
  • As a computational approach to literary criticism, character network analysis has attracted attention. A character network is composed of nodes representing characters and links representing relationships between them, and has been used to analyze literary works systematically. However, existing approaches have two limitations: the modeled relationships between characters are so superficial that they cannot reflect intimate relationships, and the quantitative data derived from the network are not interpreted in depth with respect to the meaning of literary works. In this study, we propose an improved method of character network analysis through a case study of a play. First, we segment the character network into a dialogue network focused on speaker-to-listener relationships and an opinion network focused on subject-to-object relationships. We analyze these networks in various ways and discuss how the results reflect the structure and meaning of the work. Through these studies, we seek an organic and meaningful connection between literary criticism in the humanities and network analysis in computer science.
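The dialogue network described above can be sketched in a few lines: nodes are characters, and a weighted directed link counts how often a speaker addresses a listener. The utterance list below is invented for illustration, not taken from the play in the study:

```python
# Sketch of a dialogue network: weighted directed speaker->listener
# edges aggregated from a list of utterances. Names are invented.
utterances = [
    ("A", "B"), ("B", "A"),
    ("A", "B"), ("C", "A"),
]

def dialogue_network(pairs):
    """Aggregate utterances into weighted speaker->listener edges."""
    weights = {}
    for speaker, listener in pairs:
        weights[(speaker, listener)] = weights.get((speaker, listener), 0) + 1
    return weights

net = dialogue_network(utterances)
print(net[("A", "B")])  # 2: A addressed B twice
```

An opinion network would be built the same way, with subject-to-object pairs extracted from the content of each utterance instead of its speaker and listener.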

A Study on the Estimation of Character Value in Media Works: Based on Network Centralities and Web-Search Data (미디어 작품 캐릭터 가치 측정 연구: 네트워크 중심성 척도와 검색 데이터를 활용하여)

  • Cho, Seonghyun;Lee, Minhyung;Choi, HanByeol Stella;Lee, Heeseok
    • Knowledge Management Research / v.22 no.4 / pp.1-26 / 2021
  • The measurement of intangible assets has been studied vigorously because of its importance. In particular, the value of a character in the media industry is difficult to evaluate quantitatively despite the industry's rapid growth. Recently, social network analysis (SNA) has been actively applied to understand human usage patterns in the media field. Using SNA methodology, this study investigates how the character network characteristics of media works are linked to human search behavior. Our analysis reveals a positive correlation, and a causal relationship, between character network centralities and character search data. This result implies that the character network can serve as a clue for the valuation of character assets.
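The core comparison in the study (network centrality versus search volume) can be sketched with plain Python. The co-appearance edges and search counts below are invented for illustration; a real replication would use the paper's media works and web-search data:

```python
import math

# Sketch: compare characters' degree centrality with their web-search
# volume. Edges and search counts are invented for illustration.
edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C")]
searches = {"A": 900, "B": 500, "C": 450, "D": 200}

def degree_centrality(edge_list):
    """Degree of each node divided by (n - 1), as in standard SNA."""
    deg = {}
    for u, v in edge_list:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    n = len(deg)
    return {node: d / (n - 1) for node, d in deg.items()}

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

cent = degree_centrality(edges)
names = sorted(cent)
r = pearson([cent[n] for n in names], [searches[n] for n in names])
```

With these toy numbers the correlation is strongly positive, mirroring the paper's qualitative finding; establishing causality, as the study does, requires additional time-series analysis.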

An Empirical Study on the Sub-factors of Middle School Character Education using Social Network Analysis (사회 네트워크 분석을 이용한 중등 인성 교육의 세부요인에 관한 실증 연구)

  • Kim, Hyojung
    • Journal of Korea Society of Digital Industry and Information Management / v.13 no.2 / pp.87-98 / 2017
  • Advances in scientific technology and information networks in the 21st century allow us to acquire desired knowledge easily. Amid today's informatization, globalization, and cultural diversification, adolescents experience emotional confusion while accommodating diverse cultures and information. This study examined the three aspects of character suggested by the Ministry of Education (ethics, sociality, and emotion) and the actual sub-factors required for character education. To that end, a survey was conducted with adolescents at a character-building age, and social network analysis (SNA) was performed to determine the effect of character education on the sub-factors. The statistics program SPSS was used to investigate the general traits of the subjects and the validity of the research variables. The finally selected 2-mode data were converted to 1-mode data using the network analysis tool NetMiner 4, and a data network was established based on a quasi-network representing the relationships among ethics, sociality, and emotion. The results showed that the subjects considered honesty and justice to be the sub-domains of the ethics domain; sympathy, communication, consideration for others, and cooperation to be the sub-domains of the sociality domain; and self-understanding and self-control to be the sub-domains of the emotion domain.
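The 2-mode-to-1-mode conversion that a tool like NetMiner performs can be illustrated directly: a 2-mode respondent-by-factor table is projected onto a 1-mode factor network, where two sub-factors are linked whenever the same respondent selected both. The responses below are invented, not the survey's data:

```python
# Sketch of projecting 2-mode (respondent x sub-factor) data onto a
# 1-mode sub-factor network: two factors are tied when one respondent
# selected both. Responses are invented for illustration.
responses = {
    "r1": {"honesty", "justice", "sympathy"},
    "r2": {"honesty", "communication"},
    "r3": {"justice", "honesty"},
}

def project_one_mode(two_mode):
    """Count co-selections of factor pairs across respondents."""
    ties = {}
    for factors in two_mode.values():
        for a in factors:
            for b in factors:
                if a < b:  # each unordered pair counted once
                    ties[(a, b)] = ties.get((a, b), 0) + 1
    return ties

net = project_one_mode(responses)
print(net[("honesty", "justice")])  # 2: co-selected by two respondents
```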

Recognition of Virtual Written Characters Based on Convolutional Neural Network

  • Leem, Seungmin;Kim, Sungyoung
    • Journal of Platform Technology / v.6 no.1 / pp.3-8 / 2018
  • This paper proposes a technique, based on a convolutional neural network (CNN), for recognizing online handwritten cursive data obtained by tracing the motion trajectory of a user writing in 3D space. Recognizing virtual characters entered in 3D space is difficult because the input includes both character strokes and movement strokes. In this paper, we divide each syllable into consonant and vowel units using a labeling technique, building on the localization of character strokes and movement strokes from our previous study. The coordinate information of the separated consonants and vowels is converted into image data, and Korean handwriting recognition is performed with a convolutional neural network. After training the network on 1,680 syllables written by five writers, accuracy is evaluated on new writers who did not contribute training data. The accuracy of phoneme-based recognition with the convolutional neural network is 98.9%. The proposed method has the advantage of drastically reducing the amount of training data required compared to syllable-based learning.
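The consonant/vowel split that makes phoneme-based recognition possible can be illustrated independently of the stroke-tracing pipeline: a precomposed Hangul syllable decomposes into its jamo by Unicode arithmetic, since the Hangul Syllables block enumerates all initial/vowel/final combinations in order:

```python
# Decompose a precomposed Hangul syllable into consonant and vowel
# units (jamo) using the layout of the Unicode Hangul Syllables block.
CHOSEONG = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"
JUNGSEONG = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"
JONGSEONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")

def split_jamo(syllable):
    """Return (initial consonant, vowel, final consonant or '')."""
    code = ord(syllable) - 0xAC00
    assert 0 <= code < 11172, "not a precomposed Hangul syllable"
    return (CHOSEONG[code // 588],
            JUNGSEONG[(code % 588) // 28],
            JONGSEONG[code % 28])

print(split_jamo("한"))  # ('ㅎ', 'ㅏ', 'ㄴ')
```

The paper works in the opposite direction (labeling raw strokes as consonants and vowels before recognition), but the same syllable structure is what makes training on phonemes, rather than on all 11,172 possible syllables, so economical.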

Recognition of Characters Printed on PCB Components Using Deep Neural Networks (심층신경망을 이용한 PCB 부품의 인쇄문자 인식)

  • Cho, Tai-Hoon
    • Journal of the Semiconductor & Display Technology / v.20 no.3 / pp.6-10 / 2021
  • Recognizing characters printed or marked on PCB components from camera images is an important task in PCB component inspection systems. Previous optical character recognition (OCR) of PCB components typically consists of two stages: character segmentation and classification of each segmented character. However, character segmentation often fails due to corrupted characters, low image contrast, and similar problems, so segmentation-free OCR via deep neural networks is desirable and increasingly used. A typical segmentation-free implementation combines a convolutional neural network with a recurrent neural network (RNN), but the RNN layers make execution slow. LPRNet is a segmentation-free character recognition network whose accuracy has been proven in license plate recognition; it uses wide convolutions instead of an RNN, enabling fast inference. In this paper, LPRNet was adapted to recognize characters printed on PCB components with fast execution and high accuracy. Initial training with synthetic images, followed by fine-tuning on real text images, yielded accurate recognition. The network can be further optimized for Intel CPUs using the OpenVINO toolkit; the optimized version runs in real time, faster even than on a GPU.
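Segmentation-free recognizers such as LPRNet emit a per-timestep label sequence that is collapsed by CTC-style greedy decoding: repeated labels are merged and blank labels dropped, so no explicit character segmentation is needed. A toy version of that decoding step (the label sequence is invented):

```python
# Toy CTC-style greedy decoding, as used after segmentation-free
# recognizers such as LPRNet: collapse repeats, then drop blanks.
BLANK = "-"

def ctc_collapse(labels):
    """Merge consecutive duplicates and remove blank symbols."""
    out = []
    prev = None
    for lab in labels:
        if lab != prev and lab != BLANK:
            out.append(lab)
        prev = lab
    return "".join(out)

print(ctc_collapse("--RR10--334--"))  # R1034
```

Note how the blank separates genuine repeated characters: "AA-A" decodes to "AA", while "AAA" would decode to a single "A".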

A Study on Input Pattern Generation of Neural-Networks for Character Recognition (문자인식 시스템을 위한 신경망 입력패턴 생성에 관한 연구)

  • Shin, Myong-Jun;Kim, Sung-Jong;Son, Young-Ik
    • Proceedings of the KIEE Conference / 2006.04a / pp.129-131 / 2006
  • The performance of a neural network mainly depends on the kind and number of input patterns used for its training. Hence, both the kind and the number of input patterns are very important for a character recognition system using a back-propagation network. The more input patterns are used, the better the system recognizes various characters; however, training does not always succeed as the number of input patterns increases. Moreover, there is a limit to how many input patterns a recognition system for cursive script characters can consider. In this paper, we present a new character recognition system using back-propagation neural networks. By using an additional neural network, an input pattern generation method is provided to increase the recognition ratio and ensure successful training. We first introduce the structure of the proposed system, and then investigate the character recognition system through experiments.
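To make the idea of input-pattern generation concrete, here is a rule-based stand-in: each binary character bitmap yields shifted copies for back-propagation training. Note that the paper itself generates patterns with an auxiliary neural network, not with this fixed rule; the sketch only shows why generated patterns enlarge the training set:

```python
# Hypothetical input-pattern generation by translation: each 2D binary
# character pattern yields one-pixel shifted copies for training.
# (The paper uses an auxiliary neural network instead of this rule.)
def shift(pattern, dr, dc):
    """Translate a 2D binary pattern by (dr, dc), zero-filling edges."""
    rows, cols = len(pattern), len(pattern[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                out[nr][nc] = pattern[r][c]
    return out

def generate_patterns(pattern):
    """Original plus four one-pixel translations."""
    return [pattern] + [shift(pattern, dr, dc)
                        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]

base = [[0, 1, 0],
        [0, 1, 0],
        [0, 1, 0]]
variants = generate_patterns(base)  # 5 training patterns from 1
```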

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.1-25 / 2020
  • In this paper, we propose an application system architecture that provides accurate, fast, and efficient automatic gasometer reading. The system captures a gasometer image with a mobile device camera, transmits the image over a private LTE network to a cloud server, and analyzes the image to extract the device ID and gas usage amount using selective optical character recognition based on deep learning. In general, an image contains many types of characters, and optical character recognition extracts all of them; some applications, however, need to ignore characters that are not of interest and focus only on specific types. For example, an automatic gasometer reading system needs to extract only the device ID and gas usage amount from gasometer images in order to bill users; strings such as the device type, manufacturer, manufacturing date, and specification are not valuable to the application. Thus, the application has to analyze only the region of interest and specific character types to extract the valuable information. We adopted CNN (convolutional neural network) based object detection and CRNN (convolutional recurrent neural network) technology for selective optical character recognition, analyzing only the region of interest. We built three neural networks for the application system.
The first is a convolutional neural network that detects the regions of interest containing the gas usage amount and device ID character strings; the second is another convolutional neural network that transforms the spatial information of a region of interest into spatial sequential feature vectors; and the third is a bi-directional long short-term memory network that converts the spatial sequential information into character strings through time-series analysis mapping feature vectors to characters. In this research, the strings of interest are the device ID, which consists of 12 Arabic numerals, and the gas usage amount, which consists of 4-5 Arabic numerals. All system components are implemented on the Amazon Web Services cloud with Intel Xeon E5-2686 v4 CPUs and NVIDIA Tesla V100 GPUs. The architecture adopts a master-slave processing structure for efficient, fast parallel processing, coping with about 700,000 requests per day. The mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes each reading request into an input queue with a FIFO (first in, first out) structure. The slave process, which consists of the three deep neural networks performing character recognition, runs on the NVIDIA GPU module and continually polls the input queue for recognition requests. When a request arrives, the slave process converts the image into the device ID string, the gas usage amount string, and the strings' position information, returns this information to the output queue, and switches back to polling the input queue. The master process takes the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three deep neural networks.
Of these, 22,985 images were used for training and validation and 4,135 for testing. For each training epoch, we randomly split the 22,985 images 8:2 into training and validation sets. The 4,135 test images were categorized into five types: normal (clean images), noise (images with noise), reflex (images with light reflection in the gasometer region), scale (images with small objects due to long-distance capture), and slant (images that are not horizontally flat). The final character string recognition accuracies for the device ID and gas usage amount on normal data are 0.960 and 0.864, respectively.
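The master-slave FIFO flow described above can be sketched with the standard library: a single worker thread stands in for the GPU slave, and a stub function stands in for the three recognition networks (neither is the paper's actual implementation):

```python
import queue
import threading

# Sketch of the master-slave FIFO structure: the master enqueues
# reading requests, a slave worker polls the input queue, runs a
# (stubbed) recognizer, and posts results to the output queue.
input_q = queue.Queue()
output_q = queue.Queue()

def recognize(image_name):
    """Stand-in for the three-network device ID / usage recognizer."""
    return image_name, "device-id+usage"

def slave():
    while True:
        item = input_q.get()   # poll the FIFO input queue
        if item is None:       # shutdown sentinel
            break
        output_q.put(recognize(item))
        input_q.task_done()

worker = threading.Thread(target=slave)
worker.start()
for img in ["meter1.jpg", "meter2.jpg"]:  # master pushes requests
    input_q.put(img)
input_q.put(None)
worker.join()
results = [output_q.get() for _ in range(2)]  # master collects answers
```

In the real system the "slave" would be a GPU-resident process and the queues would live in the cloud, but the FIFO handoff pattern is the same.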

Study on Implementation of a neural Coprocessor for Printed Hangul-Character Recognition (한글 인쇄체 문자인식 전용 신경망 Coprocessor의 구현에 관한 연구)

  • Kim, Young-Chul;Lee, Tae-Won
    • The Transactions of the Korea Information Processing Society / v.5 no.1 / pp.119-127 / 1998
  • In this paper, we present the design of a VLSI-based multilayer neural network that can serve as dedicated hardware for character-type segmentation and character-element recognition, tasks that consume much of the processing time in conventional software-based printed-Hangul recognition systems. We also present the architecture and design of a neural coprocessor that interfaces the neural network with a host computer and controls the network. The architecture, behavior, and performance of the proposed coprocessor are verified using VHDL modeling and simulation. Experimental results show that the success rates of character-type segmentation and character-element recognition are competitive with those of software-based printed-Hangul recognition systems while retaining high speed.

Low-Quality Banknote Serial Number Recognition Based on Deep Neural Network

  • Jang, Unsoo;Suh, Kun Ha;Lee, Eui Chul
    • Journal of Information Processing Systems / v.16 no.1 / pp.224-237 / 2020
  • Recognition of banknote serial numbers is an important function for intelligent banknote counters and can serve various purposes. However, previous character recognition methods are of limited use because of the font types used in banknote serial numbers, the variation caused by soiling, and recognition speed. In this paper, we propose an aspect-ratio-based character region segmentation method and a convolutional neural network (CNN) based serial number recognition method. To detect the character region, the character area is determined from the aspect ratio of each character in the serial number candidate area, after banknote area detection and de-skewing are performed. We then designed and compared four CNN models and selected the best one for serial number recognition. Experimental results showed a per-character recognition accuracy of 99.85%, and data augmentation was confirmed to improve performance further. The banknotes used in the experiment are Indian rupees, which are badly soiled and use an unusual font, so the performance can be regarded as good. The recognition speed was also sufficient to run in real time on a device that counts 800 banknotes per minute.
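The aspect-ratio-based segmentation step can be sketched crudely: given the serial-number region's bounding box and an assumed per-character aspect ratio, split the region into equal-width character boxes. The ratio and box dimensions below are invented, and the paper's actual rule may differ in detail:

```python
# Crude sketch of aspect-ratio-based character segmentation: split a
# serial-number region into boxes whose width follows an assumed
# character aspect ratio (width / height). Numbers are invented.
def split_by_aspect_ratio(x, width, height, char_aspect=0.5):
    """Return (x0, x1) spans of estimated character regions."""
    char_w = height * char_aspect            # expected character width
    n_chars = max(1, round(width / char_w))  # estimated character count
    step = width / n_chars
    return [(x + i * step, x + (i + 1) * step) for i in range(n_chars)]

boxes = split_by_aspect_ratio(x=0, width=90, height=20, char_aspect=0.5)
print(len(boxes))  # 9 character boxes, each 10 units wide
```

Each resulting box would then be cropped and passed to the CNN classifier for per-character recognition.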