• Title/Summary/Keyword: Text Input Method

A graphical user interface for stand-alone and mixed-type modelling of reinforced concrete structures

  • Sadeghian, Vahid;Vecchio, Frank
    • Computers and Concrete
    • /
    • v.16 no.2
    • /
    • pp.287-309
    • /
    • 2015
  • FormWorks-Plus is a generalized, public-domain, user-friendly preprocessor developed to facilitate the creation of finite element models for structural analysis programs. The lack of a graphical user interface in most academic analysis programs forces users to enter the structural model information into standard text files, a time-consuming and error-prone process. FormWorks-Plus enables engineers to conveniently set up the finite element model in a graphical environment, eliminating the problems associated with conventional input text files and improving the user's perception of the application. In this paper, a brief overview of the FormWorks-Plus structure is presented, followed by a detailed explanation of the main features of the program. In addition, the application of FormWorks-Plus in combination with the VecTor programs, advanced nonlinear analysis tools for reinforced concrete structures, is demonstrated. Finally, aspects relating to the modelling and analysis of three case studies are discussed: a reinforced concrete beam-column joint, a steel-concrete composite shear wall, and an SFRC shear panel. The unique mixed-type frame-membrane modelling procedure implemented in FormWorks-Plus can address the limitations associated with most frame-type analyses.

Short Text Classification for Job Placement Chatbot by T-EBOW (T-EBOW를 이용한 취업알선 챗봇용 단문 분류 연구)

  • Kim, Jeongrae;Kim, Han-joon;Jeong, Kyoung Hee
    • Journal of Internet Computing and Services
    • /
    • v.20 no.2
    • /
    • pp.93-100
    • /
    • 2019
  • Recently, in various business fields, companies have concentrated on providing chatbot services across diverse environments by adding artificial intelligence to existing messenger platforms. Organizations in the field of job placement also require chatbot services to improve the quality of employment counseling and to ease the burden of agent management. A text-based chatbot classifies input user sentences against learned sentences and provides appropriate answers to users. With the spread of social network services, the sentences users type into chatbots are increasingly short texts. Improving short text classification can therefore contribute to improving chatbot service performance. In this paper, we propose T-EBOW (Translation-Extended Bag Of Words), a method that strengthens short text classification for an employment chatbot by adding translation information as well as the concept information used in existing research. The performance evaluation results of T-EBOW applied to machine learning classification models are superior to those of the conventional method.
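
The translation-extended bag-of-words idea can be sketched in a few lines: count vocabulary terms in the original short text, then append counts from its translation. This is a toy illustration only; the vocabulary, tokenization, and example sentences below are assumptions, not the paper's actual T-EBOW construction.

```python
from collections import Counter

def ebow_vector(text, translation, vocab):
    # Bag-of-words counts over the original short text...
    orig = Counter(text.lower().split())
    # ...extended with counts over a translated version of the text,
    # which adds signal that a very short text alone may lack.
    trans = Counter(translation.lower().split())
    return [orig[w] for w in vocab] + [trans[w] for w in vocab]

# Hypothetical employment-domain vocabulary and a short user query.
vocab = ["job", "resume", "interview"]
vec = ebow_vector("Resume tips for a job interview",
                  "job interview resume advice", vocab)
```

The resulting extended vector would then be fed to an ordinary machine learning classifier, as the paper does with its enriched representation.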

Identifying Social Relationships using Text Analysis for Social Chatbots (소셜챗봇 구축에 필요한 관계성 추론을 위한 텍스트마이닝 방법)

  • Kim, Jeonghun;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.85-110
    • /
    • 2018
  • A chatbot is an interactive assistant that utilizes many communication modes: voice, images, video, or text. It is an artificial intelligence-based application that responds to users' needs or solves problems through user-friendly conversation. However, current chatbots focus on understanding and performing tasks requested by the user; their ability to generate personalized conversation suitable for relationship-building is limited. Recognizing the need to build a relationship and conducting suitable conversation matter more for social chatbots, which require social skills, than for problem-solving chatbots such as intelligent personal assistants. The purpose of this study is to propose a text analysis method that evaluates the relationship between chatbot and user from the content the user inputs, adapted to the communication situation, so that the chatbot can conduct suitable conversations. To evaluate the performance of this method, we trained on and verified the results against actual SNS conversation records. The results of the analysis will aid implementation of social chatbots, as the method yields excellent results even when the user's private profile information is excluded for privacy reasons.

ACT-R Predictive Model of Korean Text Entry on Touchscreen

  • Lim, Soo-Yong;Jo, Seong-Sik;Myung, Ro-Hae;Kim, Sang-Hyeob;Jang, Eun-Hye;Park, Byoung-Jun
    • Journal of the Ergonomics Society of Korea
    • /
    • v.31 no.2
    • /
    • pp.291-298
    • /
    • 2012
  • Objective: The aim of this study is to predict Korean text entry on touchscreens using the ACT-R cognitive architecture. Background: Touchscreen use in devices such as satellite navigation devices, PDAs, and mobile phones has been increasing, and the market is expanding. Accordingly, there is growing interest in developing and evaluating interfaces that enhance the user experience and increase satisfaction in the touchscreen environment. Method: In this study, Korean text entry performance in the touchscreen environment was analyzed using ACT-R. An ACT-R model was established that considers the characteristics of the Korean language, whose syllables are composed of vowels and consonants. Further, this study analyzed whether Korean text entry can be predicted through the ACT-R cognitive model. Results: The analysis found no significant difference in performance time between the model prediction and the empirical data. Conclusion: The proposed model can accurately predict physical movement time as well as cognitive processing time. Application: This study is useful for conducting model-based evaluation of touchscreen text entry interfaces and enables quantitative and effective evaluation of diverse types of Korean text input interfaces through cognitive models.
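
ACT-R's motor module conventionally times pointing movements with Fitts' law, so the kind of keystroke-time prediction the abstract describes might be sketched as follows. The coefficients and key targets here are illustrative assumptions, not the study's fitted model.

```python
import math

def fitts_time(distance, width, a=0.1, b=0.1):
    # Fitts'-law movement time (seconds): the motor-timing rule of the
    # kind ACT-R applies to aimed key presses. Coefficients a and b are
    # placeholders, not values fitted in the paper.
    return a + b * math.log2(distance / width + 0.5)

def entry_time(key_targets):
    # Sum predicted movement times over a sequence of (distance, width)
    # key targets, e.g. the consonant/vowel taps of one Korean syllable.
    return sum(fitts_time(d, w) for d, w in key_targets)

# Three hypothetical taps on 10 mm keys at varying distances.
t = entry_time([(40, 10), (20, 10), (60, 10)])
```

A full ACT-R model would add cognitive and perceptual steps (retrieving the next jamo, finding the key) on top of this motor time.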

Improved Text Recognition using Analysis of Illumination Component in Color Images (컬러 영상의 조명성분 분석을 통한 문자인식 성능 향상)

  • Choi, Mi-Young;Kim, Gye-Young;Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information
    • /
    • v.12 no.3
    • /
    • pp.131-136
    • /
    • 2007
  • This paper proposes a new approach to eliminating the reflectance component for the detection of text in color images. Color images, produced by color printing technology, normally contain an illumination component as well as a reflectance component. It is well known that the reflectance component usually hinders detecting and recognizing objects such as text in a scene, since it blurs the overall image. We have developed an approach that efficiently removes the reflectance component while preserving the illumination component. To characterize the lighting environment, we classify an input image as Normal or Polarized using a histogram built from the red component. By removing the reflectance component introduced by illumination changes, the blurring of text caused by light is reduced and the text can be extracted. The experimental results show superior performance even when an image has a complex background. Text detection and recognition performance is influenced by changes in illumination, and our method is robust to images with different illumination conditions.
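
The Normal-versus-Polarized decision from the red-component histogram can be sketched as a simple saturation test: label the lighting Polarized when a large share of pixels is pushed toward high red values. The threshold and ratio below are illustrative assumptions, not the paper's values.

```python
def classify_lighting(red_values, threshold=170, ratio=0.3):
    # Count pixels whose red component is saturated toward the bright
    # end of the histogram; a glare-heavy (Polarized) image has many.
    bright = sum(1 for r in red_values if r >= threshold)
    return "Polarized" if bright / len(red_values) >= ratio else "Normal"

# Toy red-channel samples: one evenly lit image, one with strong glare.
normal_img = [100, 120, 90, 130, 110, 95, 105, 115]
glare_img = [250, 240, 100, 245, 235, 90, 230, 238]
```

The classification would then select how aggressively the reflectance component is removed before text extraction.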

A Study on Key Arrangement of Virtual Keyboard based on Eyeball Input system (안구 입력 시스템 기반의 화상키보드 키 배열 연구)

  • Sa Ya Lee;Jin Gyeong Hong;Joong Sup Lee
    • Smart Media Journal
    • /
    • v.13 no.4
    • /
    • pp.94-103
    • /
    • 2024
  • The eyeball input system is a text input system based on eye-tracking technology and virtual-keyboard character-input technology. The virtual keyboard structure currently in use is a rectangular QWERTY array optimized for a multi-input method that uses all ten fingers of both hands simultaneously. However, since eye tracking is a single-input method that relies solely on eye movement, with only one focal point available for input, problems arise when it is combined with a rectangular virtual keyboard designed for multi-input. To address this, we first reviewed previous studies on the shape, type, and movement of the muscles attached to the eyeball, which showed that eye movement is fundamentally circular rather than linear. This study therefore proposes a new key arrangement in which the keys are placed in a circular structure suited to rotational motion, rather than the rectangular structure of current virtual keyboards optimized for two-handed input. In addition, a performance verification experiment compared the circular key arrangement with the existing rectangular arrangement, confirming that the circular arrangement would be a good replacement for the rectangular one on the virtual keyboard.
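
Arranging keys on a circle, as the proposed layout does, reduces to placing them at evenly spaced angles around a center. A minimal sketch follows; the radius, center, and key set are arbitrary choices, not the paper's layout parameters.

```python
import math

def circular_layout(keys, radius=100.0, center=(0.0, 0.0)):
    # Place each key at an evenly spaced angle on a circle, matching
    # the rotational eye movements the study identifies.
    n = len(keys)
    positions = {}
    for i, key in enumerate(keys):
        angle = 2 * math.pi * i / n
        positions[key] = (center[0] + radius * math.cos(angle),
                          center[1] + radius * math.sin(angle))
    return positions

# Eight hypothetical keys placed at 45-degree intervals.
pos = circular_layout(list("ABCDEFGH"))
```

Every key then sits at the same radial distance from the gaze's resting point, so each selection is a single rotational saccade rather than a row-and-column scan.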

Synthesis of Expressive Talking Heads from Speech with Recurrent Neural Network (RNN을 이용한 Expressive Talking Head from Speech의 합성)

  • Sakurai, Ryuhei;Shimba, Taiki;Yamazoe, Hirotake;Lee, Joo-Ho
    • The Journal of Korea Robotics Society
    • /
    • v.13 no.1
    • /
    • pp.16-25
    • /
    • 2018
  • A talking head (TH) is an utterance face animation generated from text and voice input. In this paper, we propose a method for generating a TH with facial expression and intonation from speech input only. The problem of generating a TH from speech can be regarded as a regression from the acoustic feature sequence to the facial code sequence, a low-dimensional vector representation that can efficiently encode and decode a face image. This regression was modeled by a bidirectional RNN and trained on the SAVEE database of frontal utterance face animations. The proposed method generates a TH with facial expression and intonation using acoustic features such as MFCC, the dynamic elements of MFCC, energy, and F0. According to the experiments, a configuration with BLSTM layers as the first and second layers of the bidirectional RNN predicted the face codes best. For the evaluation, a questionnaire survey was conducted with 62 people who watched TH animations generated by the proposed method and by the previous method. As a result, 77% of the respondents answered that the TH generated by the proposed method matched the speech well.
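
As a drastically simplified stand-in for the paper's bidirectional-RNN regression, the frame-wise mapping from an acoustic feature to a facial code can be illustrated with a one-dimensional least-squares fit. The data below are toy values; the actual model is a BLSTM over multi-dimensional feature sequences.

```python
def fit_linear_map(features, codes):
    # Least-squares fit of code ~ w * feature + b: a minimal stand-in
    # for regressing facial codes from acoustic features (MFCC,
    # energy, F0) as the paper does with a bidirectional RNN.
    n = len(features)
    mx = sum(features) / n
    my = sum(codes) / n
    w = sum((x - mx) * (y - my) for x, y in zip(features, codes)) / \
        sum((x - mx) ** 2 for x in features)
    b = my - w * mx
    return w, b

# One toy acoustic feature per frame and its facial-code target.
feats = [0.0, 1.0, 2.0, 3.0]
codes = [1.0, 3.0, 5.0, 7.0]
w, b = fit_linear_map(feats, codes)
```

What the BLSTM adds over such a frame-wise map is context: each frame's prediction depends on past and future acoustic frames, which is why it captures intonation-driven expression.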

Natural Scene Text Binarization using Tensor Voting and Markov Random Field (텐서보팅과 마르코프 랜덤 필드를 이용한 자연 영상의 텍스트 이진화)

  • Choi, Hyun Su;Lee, Guee Sang
    • Smart Media Journal
    • /
    • v.4 no.4
    • /
    • pp.18-23
    • /
    • 2015
  • In this paper, we propose a method for detecting the number of clusters using tensor voting, which improves the Gaussian mixture model used in the conventional Markov random field method. The key idea is to extract the number of cluster centers from the continuity of the saliency map computed from the tensor voting tokens of the input data. First, we separate candidate foreground and background regions in a given natural image. We then extract an appropriate cluster number for each candidate region by applying tensor voting. With the detected number of clusters, the Gaussian mixture model can be estimated accurately. The binarized text image is obtained by computing the unary and pairwise terms of the Markov random field. Experiments confirm that the proposed method finds the optimal cluster number and improves text binarization results.
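
The core idea of reading the cluster count off the saliency map can be illustrated with a one-dimensional toy profile, counting salient local maxima. The real method operates on the 2-D tensor-voting saliency map, and the height threshold here is an assumption.

```python
def count_clusters(saliency, min_height=0.5):
    # Estimate the number of clusters as the number of sufficiently
    # salient local maxima in a (toy, 1-D) saliency profile; each peak
    # stands in for one cluster center of the Gaussian mixture model.
    peaks = 0
    for i in range(1, len(saliency) - 1):
        if saliency[i] > saliency[i - 1] and saliency[i] > saliency[i + 1] \
                and saliency[i] >= min_height:
            peaks += 1
    return peaks

# Toy saliency profile with three clear peaks.
profile = [0.1, 0.8, 0.2, 0.1, 0.9, 0.3, 0.1, 0.7, 0.2]
k = count_clusters(profile)
```

The detected count `k` would then fix the number of Gaussian components before the MRF's unary and pairwise terms are evaluated.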

Text-driven Speech Animation with Emotion Control

  • Chae, Wonseok;Kim, Yejin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.8
    • /
    • pp.3473-3487
    • /
    • 2020
  • In this paper, we present a new approach to creating speech animation with emotional expressions using a small set of example models. To generate realistic facial animation, two sets of example models, called key visemes and key expressions, are used for lip-synchronization and facial expressions, respectively. The key visemes represent the lip shapes of phonemes such as vowels and consonants, while the key expressions represent the basic emotions of a face. Our approach utilizes a text-to-speech (TTS) system to create a phonetic transcript for the speech animation. Based on the phonetic transcript, a sequence of speech animation is synthesized by interpolating the corresponding sequence of key visemes. Using an input parameter vector, the key expressions are blended by a method of scattered data interpolation. During the synthesis process, an importance-based scheme is introduced to combine both lip-synchronization and facial expressions into one animation sequence in real time (over 120 Hz). The proposed approach can be applied to diverse types of digital content and applications that use facial animation, with high accuracy (over 90%) in speech recognition.
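
The viseme interpolation step can be sketched as a linear blend between two key-viseme vectors. The vectors below are toy lip-shape values; the paper's exact interpolant and face representation are not specified here.

```python
def interpolate_visemes(v_a, v_b, t):
    # Linear blend between two key-viseme vectors at time t in [0, 1];
    # sliding t from 0 to 1 morphs the mouth from one phoneme's lip
    # shape to the next along the phonetic transcript.
    return [(1 - t) * a + t * b for a, b in zip(v_a, v_b)]

# Toy lip-shape vectors for two phonemes' key visemes.
viseme_ah = [0.0, 2.0, 4.0]
viseme_oo = [1.0, 0.0, 2.0]
mid = interpolate_visemes(viseme_ah, viseme_oo, 0.5)
```

The expression blend works the same way but over several key expressions at once, weighted by the emotion parameter vector via scattered data interpolation.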

Toward Sentiment Analysis Based on Deep Learning with Keyword Detection in a Financial Report (재무 보고서의 키워드 검출 기반 딥러닝 감성분석 기법)

  • Jo, Dongsik;Kim, Daewhan;Shin, Yoojin
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.5
    • /
    • pp.670-673
    • /
    • 2020
  • Recent advances in artificial intelligence have made sentiment analysis (e.g. positive or negative forecasts) of documents such as financial reports easier. In this paper, we investigate a method that applies text mining techniques to extract keywords from financial reports using deep learning, and propose an accounting model for the effects of sentiment values in financial information. For sentiment analysis with keyword detection in the financial report, we suggest an input layer of extracted keywords, hidden layers with learned weights, and an output layer producing sentiment scores. Our approach can serve as a professional guideline that helps potential investors build more effective strategies using sentiment values.
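
The described network layout (extracted keywords in, hidden layers of learned weights, sentiment scores out) corresponds to an ordinary feedforward pass. A minimal sketch follows; the weights and keyword features are illustrative, not learned values from the paper.

```python
import math

def sentiment_score(keyword_feats, w_hidden, w_out):
    # One hidden layer of sigmoid units over the keyword features,
    # then a weighted sum producing a single sentiment score.
    hidden = [1 / (1 + math.exp(-sum(w * x for w, x in zip(ws, keyword_feats))))
              for ws in w_hidden]
    return sum(w * h for w, h in zip(w_out, hidden))

# Presence/absence of three hypothetical report keywords.
feats = [1.0, 0.0, 1.0]
w_hidden = [[0.5, -0.2, 0.3], [-0.4, 0.1, 0.2]]
w_out = [1.0, -1.0]
score = sentiment_score(feats, w_hidden, w_out)
```

In the paper's setting, the score would feed the proposed accounting model; positive and negative scores stand for positive and negative forecasts, respectively.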