• Title/Summary/Keyword: text input

Search results: 354

A Study on Key Arrangement of Virtual Keyboard based on Eyeball Input system (안구 입력 시스템 기반의 화상키보드 키 배열 연구)

  • Sa Ya Lee;Jin Gyeong Hong;Joong Sup Lee
    • Smart Media Journal / v.13 no.4 / pp.94-103 / 2024
  • The eyeball input system is a text input system based on eye-tracking technology and virtual-keyboard character-input technology. The virtual keyboard structure currently in use is a rectangular QWERTY array optimized for a multi-input method that uses all ten fingers of both hands simultaneously. However, since eye tracking is a single-input method that relies solely on eye movement and provides only one focal point for input, problems arise when it is combined with a rectangular virtual keyboard designed for multi-input. To solve this problem, previous studies on the shape, type, and movement of the muscles connected to the eyeball were first reviewed, and it was identified that eye movement is fundamentally rotational rather than linear. This study therefore proposes a new key arrangement in which the keys are placed in a circular structure suited to rotational motion, instead of the current rectangular arrangement optimized for two-handed input. A performance-verification experiment comparing the circular key arrangement against the existing rectangular arrangement confirmed that the circular arrangement would be a good replacement for the rectangular arrangement on virtual keyboards.
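The geometric idea behind the proposed arrangement is simple to sketch: keys are placed at equal angles on a circle so a single gaze point can reach them by rotation. The routine below is a hypothetical illustration, not the paper's actual layout algorithm; the key set and radius are invented.

```python
import math

def circular_layout(keys, radius=1.0):
    """Place each key at an equal angle on a circle of the given radius,
    a natural fit for rotational eye movement."""
    n = len(keys)
    layout = {}
    for i, key in enumerate(keys):
        angle = 2 * math.pi * i / n          # equal angular spacing
        layout[key] = (radius * math.cos(angle), radius * math.sin(angle))
    return layout

positions = circular_layout(list("ABCDEFGH"))
```

Every key ends up at the same distance from the center, so the dwell target for each key is reached by the same class of rotational saccade.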

Development of a Gridded Simulation Support System for Rice Growth Based on the ORYZA2000 Model (ORYZA2000 모델에 기반한 격자형 벼 생육 모의 지원 시스템 개발)

  • Hyun, Shinwoo;Yoo, Byoung Hyun;Park, Jinyu;Kim, Kwang Soo
    • Korean Journal of Agricultural and Forest Meteorology / v.19 no.4 / pp.270-279 / 2017
  • Regional assessment of crop productivity using a gridded simulation approach could aid policy making and crop management. Still, little effort has been made to develop systems that allow gridded simulation of crop growth with the ORYZA2000 model, which has been used for predicting rice yield in Korea. The objectives of this study were to develop a series of data-processing modules for creating input data files, running the crop model, and aggregating output files over a region of interest using gridded data files. These modules were implemented in C++ and R to make the best use of the features provided by these programming languages. In a case study, 13,000 plain-text input files were prepared from daily gridded weather data with spatial resolutions of 1 km and 12.5 km for the period 2001-2010. Using these text files as inputs to the ORYZA2000 model, crop yield simulations were performed for each grid cell under a single scenario of crop management practices. After output files were created for the grid cells representing paddy rice fields in South Korea, the output files were aggregated into a single file in the netCDF format. The spatial pattern of simulated crop yield was relatively similar to the actual distribution of yields in Korea, although yield biases varied by region. These differences seemed to result from uncertainties in the input data, e.g., transplanting date and regional cultivar, as well as in the weather data. Our results indicate that the set of tools developed in this study would be useful for gridded simulation with other crop models. In further work, it would be worthwhile to take into account compatibility with a modeling-interface library for integrated simulation of an agricultural ecosystem.
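The aggregation step described above, collecting one model output per grid cell into a single gridded array before writing it out, can be sketched as follows. The grid size, missing-value sentinel, and the toy per-cell yields are assumptions for illustration; the study's actual modules (in C++ and R) also handle the netCDF writing, which is omitted here.

```python
# Collect per-cell simulated yields, keyed by (row, col) grid indices,
# into one 2-D grid; cells without a paddy-field simulation stay missing.
NROWS, NCOLS = 3, 4
MISSING = -999.0

def aggregate(cell_outputs):
    grid = [[MISSING] * NCOLS for _ in range(NROWS)]
    for (row, col), yield_t_ha in cell_outputs.items():
        grid[row][col] = yield_t_ha
    return grid

outputs = {(0, 0): 6.2, (1, 2): 5.8, (2, 3): 7.1}  # toy per-cell yields (t/ha)
grid = aggregate(outputs)
```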

A Study on Implementation of Emotional Speech Synthesis System using Variable Prosody Model (가변 운율 모델링을 이용한 고음질 감정 음성합성기 구현에 관한 연구)

  • Min, So-Yeon;Na, Deok-Su
    • Journal of the Korea Academia-Industrial cooperation Society / v.14 no.8 / pp.3992-3998 / 2013
  • This paper presents a method of adding an emotional speech corpus to a high-quality, large-corpus-based speech synthesizer in order to generate a variety of synthesized speech. We built the emotional speech corpus in a form usable by a waveform-concatenation synthesizer and implemented a synthesizer that can generate varied speech through the same unit-selection process as a normal synthesizer. A markup language is used for the emotional input text. Emotional speech is generated when the input text matches an intonation phrase in the emotional speech corpus; otherwise, normal speech is generated. The BIs (Break Indices) of emotional speech are more irregular than those of normal speech, so the BIs generated by the synthesizer cannot be used as they are. To solve this problem, we applied Variable Break [3] modeling. A Japanese speech synthesizer was used for the experiments. As a result, we obtained natural emotional synthesized speech using the break-prediction module of the normal speech synthesizer.
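The selection rule described above, emotional units when an intonation phrase matches the emotional corpus, normal units otherwise, reduces to a lookup with a fallback. The corpus contents below are invented examples; the real system matches at the intonation-phrase level within a unit-selection search.

```python
# Minimal sketch of the corpus-selection rule: phrases found in the
# emotional corpus are synthesized from emotional units, all others
# fall back to the normal corpus.
EMOTIONAL_CORPUS = {"I am so happy", "that is terrible"}

def select_corpus(intonation_phrase):
    if intonation_phrase in EMOTIONAL_CORPUS:
        return "emotional"
    return "normal"
```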

Analysis of error data generated by prospective teachers in programming learning (예비교사들이 프로그래밍 학습 시 발생시키는 오류 데이터 분석)

  • Moon, Wae-shik
    • Journal of The Korean Association of Information Education / v.22 no.2 / pp.205-212 / 2018
  • As a way to improve the software education ability of pre-service teachers, we conducted programming learning with two programming tools (Python and Scratch) during regular course hours. The various types of errors that continually hinder interest, achievement, and creativity in programming learning were collected and analyzed by type. The analyzed data can help pre-service teachers cope with the errors that arise in the software education they will teach in elementary school, and thereby improve the learning effect. In this study, logic errors (37.63%) were the most frequent type in both the text-input language and the block-assembly language. In addition, the detailed error type showing the largest difference between the two languages was errors due to insufficient command of grammar: Python (14.3%) versus Scratch (3.5%).
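The analysis above is essentially a tally of error occurrences by type, converted to percentages. A minimal sketch, with an invented toy error log rather than the study's data:

```python
from collections import Counter

# Toy error log; the category names mirror those in the study,
# but the counts are invented for illustration.
errors = ["logic", "syntax", "logic", "runtime", "logic", "syntax"]

def error_rates(error_list):
    """Percentage of each error type in the collected log."""
    counts = Counter(error_list)
    total = len(error_list)
    return {kind: round(100 * n / total, 2) for kind, n in counts.items()}

rates = error_rates(errors)
```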

A Study on Equation Recognition Using Tree Structure (트리 구조를 이용한 수식 인식 연구)

  • Park, Byung-Joon;Kim, Hyun-Sik;Kim, Wan-Tae
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.11 no.4 / pp.340-345 / 2018
  • Compared to general sentences, equations use complex structures and a wide range of characters and symbols, so the full character set cannot be entered simply from a keyboard. Equation editors are therefore implemented inside text editors such as Hangul or Word. To represent an equation properly, additional information is needed so that its syntax can be interpreted meaningfully: even when the same character is input, it can represent a different expression depending on its size and position relative to other elements. In other words, the form of an expression is represented as a tree model that captures the positional and size relationships between characters and symbols. Techniques that recognize individual characters or symbols (codes) are a well-known application of character recognition, but inputting and interpreting an equation requires a more complicated analysis than general text. In this paper, we implemented an equation recognizer that recognizes the characters in an expression and quickly analyzes their positions and sizes.
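The core idea, that the same character becomes a different expression depending on relative position and size, and that the result is stored as a tree, can be sketched as below. The node layout, the vertical-offset and size-ratio thresholds, and the single superscript relation are all assumptions for illustration; the paper's recognizer handles many more relations.

```python
# Decide, from relative geometry, whether a symbol is a superscript of a
# base symbol (e.g. the "2" in x^2), and record the result as a tree node.
class Node:
    def __init__(self, symbol):
        self.symbol = symbol
        self.superscript = None

def attach(base, other, dy, size_ratio):
    """dy > 0 means `other` sits above `base`'s baseline;
    size_ratio is other's glyph size relative to base's."""
    if dy > 0.3 and size_ratio < 0.8:   # smaller and raised -> exponent
        base.superscript = other
        return base
    raise ValueError("relation not handled in this sketch")

expr = attach(Node("x"), Node("2"), dy=0.5, size_ratio=0.6)
```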

A Study on Edit Order of Text Cells on the MS Excel Files (MS 엑셀 파일의 텍스트 셀 입력 순서에 관한 연구)

  • Lee, Yoonmi;Chung, Hyunji;Lee, Sangjin
    • Journal of the Korea Institute of Information Security & Cryptology / v.24 no.2 / pp.319-325 / 2014
  • Since smartphones and tablet PCs have become widely used, users can create and edit documents anywhere in real time. If the input and edit flow of a document can be traced, it can serve as evidence in a digital forensic investigation. The typical document application is MS (Microsoft) Office. MS Office applications use two file formats: the Compound Document File Format, used from version 97 to 2003, and the OOXML (Office Open XML) File Format, used from version 2007 onward. Previous studies on MS Office files focused on deciding whether a file has been tampered with, through detection of concealed items or analysis of document properties. This paper analyzes the input order of text cells in MS Excel files and shows how to determine, from a digital forensic perspective, which cell was edited last.
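One artifact this kind of analysis can draw on: in the OOXML format, text cells reference entries in `xl/sharedStrings.xml`, and those entries are appended as text is entered, so reading the shared-string table back recovers an ordering of the cell texts. The sketch below builds a tiny in-memory stand-in for an `.xlsx` archive rather than opening a real file, and is an illustration of the general approach, not the paper's exact procedure.

```python
import io
import zipfile
import xml.etree.ElementTree as ET

SHARED = (
    '<sst xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main">'
    "<si><t>first</t></si><si><t>second</t></si><si><t>third</t></si></sst>"
)

def shared_string_order(xlsx_bytes):
    """Return shared-string entries in stored (i.e. entry) order."""
    with zipfile.ZipFile(io.BytesIO(xlsx_bytes)) as zf:
        root = ET.fromstring(zf.read("xl/sharedStrings.xml"))
    ns = "{http://schemas.openxmlformats.org/spreadsheetml/2006/main}"
    return [si.find(f"{ns}t").text for si in root.findall(f"{ns}si")]

# Build a minimal zip containing only the shared-strings part.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("xl/sharedStrings.xml", SHARED)
order = shared_string_order(buf.getvalue())
```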

A Study on the Effect of Using Sentiment Lexicon in Opinion Classification (오피니언 분류의 감성사전 활용효과에 대한 연구)

  • Kim, Seungwoo;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.133-148 / 2014
  • Recently, with the advent of various information channels, the amount of available information has continued to grow. The main cause of this phenomenon is the significant increase of unstructured data, as smart devices enable users to create data in the form of text, audio, images, and video. Among the various types of unstructured data, users' opinions and a variety of information are clearly expressed in text data such as news, reports, papers, and articles. Thus, active attempts have been made to create new value by analyzing these texts. The representative techniques used in text analysis are text mining and opinion mining. They share certain important characteristics; for example, both use text documents as input data and rely on many natural language processing techniques such as filtering and parsing. Therefore, opinion mining is usually recognized as a sub-concept of text mining, or, in many cases, the two terms are used interchangeably in the literature. Suppose that the purpose of a certain classification analysis is to predict a positive or negative opinion contained in some documents. If we focus on the classification process, the analysis can be regarded as a traditional text mining case; however, if we observe that the target of the analysis is a positive or negative opinion, the analysis can be regarded as a typical example of opinion mining. In other words, two methods (i.e., text mining and opinion mining) are available for opinion classification, and a precise definition of each is needed to distinguish between them. In this paper, we found it very difficult to distinguish the two methods clearly with respect to the purpose of analysis and the type of results. We conclude that the most definitive criterion distinguishing text mining from opinion mining is whether the analysis utilizes any kind of sentiment lexicon.
We first established two prediction models, one based on opinion mining and the other on text mining, then compared the main processes used by the two models and their prediction accuracy on 2,000 movie reviews. The results revealed that the opinion mining model showed higher average prediction accuracy than the text mining model. Moreover, in the lift chart generated by the opinion mining model, the prediction accuracy for documents with strong certainty was higher than that for documents with weak certainty. Most of all, opinion mining has a meaningful advantage in that it can reduce learning time dramatically, because a sentiment lexicon generated once can be reused in a similar application domain; additionally, the classification results can be clearly explained by the sentiment lexicon. This study has two limitations. First, the experimental results cannot be generalized, mainly because the experiment is limited to a small number of movie reviews. Second, various parameters in the parsing and filtering steps of the text mining may have affected the accuracy of the prediction models. Nevertheless, this research contributes a performance comparison of text mining and opinion mining for opinion classification. In future research, a more precise evaluation of the two methods should be made through intensive experiments.
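The distinguishing element the study settles on, a sentiment lexicon, classifies a document by summing the polarity of its tokens. A minimal sketch of such a lexicon-based classifier; the lexicon entries, scores, and decision threshold are invented for illustration:

```python
# Toy sentiment lexicon mapping tokens to polarity scores.
LEXICON = {"great": 1, "good": 1, "boring": -1, "awful": -2}

def classify(review_tokens):
    """Sum token polarities; positive total -> positive opinion."""
    score = sum(LEXICON.get(tok, 0) for tok in review_tokens)
    return "positive" if score > 0 else "negative"
```

Because the lexicon is built once and reused, no per-domain training pass is needed, which is the learning-time advantage noted above.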

Text-Independent Speaker Identification System Based On Vowel And Incremental Learning Neural Networks

  • Heo, Kwang-Seung;Lee, Dong-Wook;Sim, Kwee-Bo
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 2003.10a / pp.1042-1045 / 2003
  • In this paper, we propose a speaker identification system that uses vowels, which carry speaker-specific characteristics. The system is divided into a speech-feature-extraction part and a speaker-identification part. Voiced speech has characteristics that distinguish speakers; for vowel extraction, formants obtained through frequency analysis of the voiced speech are used, and the vowel /a/, whose formants differ between speakers, is extracted from the text. Pitch, formants, intensity, log area ratios, LP coefficients, and cepstral coefficients are candidate features; the cepstral coefficients, which showed the best speaker-identification performance among these methods, are used. The speaker-identification part distinguishes speakers using a neural network, with 12th-order cepstral coefficients as the learning input data. The network structure is an MLP trained with BP (backpropagation), and hidden and output nodes are added incrementally. The nodes in the incremental learning neural network are interconnected via weighted links; each node in a layer is generally connected to each node in the succeeding layer, with the output nodes providing the output of the network. Through vowel extraction and incremental learning, the proposed system needs little training data, reduces learning time, and improves the identification rate.
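The identification step, an MLP forward pass over cepstral features with the speaker chosen by the largest output, can be sketched as below. The weights are fixed toy values and the feature dimension is 2 rather than the 12 cepstral coefficients used in the paper; incremental node growth and BP training are omitted.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(features, w_hidden, w_out):
    """One MLP forward pass; returns the index of the predicted speaker."""
    hidden = [sigmoid(sum(w * f for w, f in zip(row, features)))
              for row in w_hidden]
    scores = [sum(w * h for w, h in zip(row, hidden)) for row in w_out]
    return scores.index(max(scores))

W_H = [[1.0, -1.0], [-1.0, 1.0]]   # 2 hidden nodes (toy weights)
W_O = [[2.0, 0.0], [0.0, 2.0]]     # 2 speakers
speaker = forward([3.0, 0.0], W_H, W_O)
```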


Construction of Retrieval-Based Medical Database

  • Shin Yong-Won;Koo Bong-Oh;Park Byung-Rae
    • Biomedical Science Letters / v.10 no.4 / pp.485-493 / 2004
  • In the current field of medical informatics, information grows and changes fast, and the accessible data range from text to images. Digitizing these data to build databases has been the work of a small number of technicians and requires a great deal of money and time; to manage the growing information effectively, digitization by the many end users who actually handle the data, and the construction of searchable databases, are needed. New data and information must be taken in quickly to support quality of care and diagnosis, the basic work of medicine, and a medical database is also needed for private study and novice education, as a tool for turning various data into knowledge. However, current medical databases are developed and used only for hospital work management. In this study, text input, file import, and image objects are digitized into a database by people who work in the medical field but have no programming expertise. Data are constructed hierarchically and knowledge is built using a tree-type database construction method. Consequently, data can be retrieved quickly and accurately through search, applied to study through subject-oriented classification, applied to diagnosis through time-dependent views of the data, and applied to education and prevention through the publishing of questions and the reuse of data.
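The hierarchical, tree-type storage described above amounts to filing records under a path of categories and retrieving them by walking the tree. A minimal sketch; the node layout and the example records are assumptions, not the system's actual schema.

```python
# File a record under a category path, and retrieve by walking the same path.
def insert(tree, path, record):
    node = tree
    for key in path:
        node = node.setdefault(key, {})
    node.setdefault("_records", []).append(record)

def search(tree, path):
    node = tree
    for key in path:
        node = node.get(key, {})
    return node.get("_records", [])

db = {}
insert(db, ["anatomy", "heart"], "mitral valve image")
```

Subject-oriented classification then falls out of the path structure: every record filed under `["anatomy", ...]` can be enumerated by walking that subtree.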


Prosodic Contour Generation for Korean Text-To-Speech System Using Artificial Neural Networks

  • Lim, Un-Cheon
    • The Journal of the Acoustical Society of Korea / v.28 no.2E / pp.43-50 / 2009
  • To make the synthetic speech generated by a Korean TTS (Text-To-Speech) system more natural, we need to know all the possible prosodic rules of spoken Korean. These rules must be found in linguistic and phonetic information or in real speech. In general, all of these rules are integrated into the prosody-generation algorithm of a TTS system; but such an algorithm cannot cover all the prosodic rules of a language, so the naturalness of the synthesized speech is not as good as we would expect. ANNs (Artificial Neural Networks) can instead be trained to learn the prosodic rules of spoken Korean. To train and test the ANNs, we prepared the prosodic patterns of all phonemic segments in a prosodic corpus. The corpus includes meaningful sentences constructed to represent all possible prosodic rules; the sentences were made by picking series of words from a list of PB (Phonetically Balanced) isolated words, then read by speakers, recorded, and collected as a speech database. By analyzing the recorded speech, we extracted the prosodic pattern of each phoneme and assigned the patterns as target and test patterns for the ANNs. The ANNs learn prosody from natural speech and, given the phoneme string of a sentence as input stimuli, generate the prosodic pattern of the central phonemic segment of the string as their output response.
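The input stimulus described above, the phoneme string around a central segment, is typically encoded for an ANN as a one-hot context window. The sketch below illustrates that data-preparation step; the phoneme inventory, padding symbol, and window size of one neighbour on each side are assumptions, not the paper's configuration.

```python
# Encode the previous, current, and next phoneme as concatenated one-hot
# vectors: the input pattern for the prosody network's central segment.
PHONES = ["pad", "a", "n", "t"]   # toy inventory; "pad" marks boundaries

def one_hot(phone):
    vec = [0] * len(PHONES)
    vec[PHONES.index(phone)] = 1
    return vec

def context_window(phones, i):
    prev = phones[i - 1] if i > 0 else "pad"
    nxt = phones[i + 1] if i < len(phones) - 1 else "pad"
    return one_hot(prev) + one_hot(phones[i]) + one_hot(nxt)

pattern = context_window(["a", "n", "t"], 1)
```

The matching target pattern would be the prosodic values (e.g. pitch and duration) measured for that central phoneme in the recorded corpus.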