• Title/Summary/Keyword: source text

267 search results

UNIQUENESS AND MULTIPLICITY OF SOLUTIONS FOR THE NONLINEAR ELLIPTIC SYSTEM

  • Jung, Tacksun;Choi, Q-Heung
    • Journal of the Chungcheong Mathematical Society
    • /
    • v.21 no.1
    • /
    • pp.139-146
    • /
    • 2008
  • We investigate the uniqueness and multiplicity of solutions for the nonlinear elliptic system with Dirichlet boundary condition $$\begin{cases}-\Delta u+g_1(u,v)=f_1(x) & \text{in }\Omega,\\ -\Delta v+g_2(u,v)=f_2(x) & \text{in }\Omega,\end{cases}$$ where $\Omega$ is a bounded set in $\mathbb{R}^n$ with smooth boundary $\partial\Omega$. Here $g_1$, $g_2$ are nonlinear functions of $u$, $v$, and $f_1$, $f_2$ are source terms.


Objective Material analysis to the device with IoT Framework System

  • Lee, KyuTae;Ki, Jang Geun
    • International Journal of Advanced Culture Technology
    • /
    • v.8 no.2
    • /
    • pp.289-296
    • /
    • 2020
  • Software is written as text documents and stored as files, so it is easily exposed to illegal copying. The IoT framework configuration and service environment are likewise embodied in software structures that are open to replication, and an illegal copy can be created simply by intelligently modifying the program code in the framework system. This paper deals with similarity comparison to determine whether illegal copying is suspected. In general, the original source code of both parties should be provided for such a comparison; recently, however, suspected developers have refused to provide their source code, and comparative evaluation must be performed with executable code only. This study addresses how to analyze similarity using the executable code together with the circuit configuration and interface state of the system, without the original source code. We propose a method of analyzing the target without source code and verify the similarity-comparison results through evaluation examples.
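
The comparison described in the abstract can be illustrated with a toy sketch (not the paper's actual procedure): Python's difflib measures how similar two opcode sequences are, standing in for features extracted from executables. The opcode lists here are hypothetical.

```python
from difflib import SequenceMatcher

def opcode_similarity(a, b):
    """Ratio of matching elements between two instruction sequences (0.0-1.0)."""
    return SequenceMatcher(None, a, b).ratio()

# Hypothetical opcode sequences disassembled from two executables.
original = ["push", "mov", "call", "add", "ret"]
suspect  = ["push", "mov", "call", "sub", "ret"]

score = opcode_similarity(original, suspect)  # 4 of 5 opcodes align -> 0.8
```

In practice a real comparison would normalize register names and addresses before matching; this sketch only shows the sequence-alignment idea.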

Visualization Techniques for Massive Source Code (대용량 소스코드 시각화기법 연구)

  • Seo, Dong-Su
    • The Journal of Korean Association of Computer Education
    • /
    • v.18 no.4
    • /
    • pp.63-70
    • /
    • 2015
  • Program source code is a body of complex syntactic information expressed in text form and containing intricate logical structures. When a program grows beyond ten thousand lines of code, this structural and logical complexity becomes a barrier to applying the visualization techniques used in traditional big-data approaches. This paper suggests a procedure for visualizing the structural characteristics of source code. For this purpose, it defines internal data structures as well as inter-procedural relationships among functions, and it proposes a way to outline the structural characteristics of source code by rendering it in network form. The results of this work can be used as a means of controlling and understanding massive volumes of source code.
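
As a loose illustration of the network view the abstract describes (not the paper's tool), inter-procedural call relationships can be extracted from source text and treated as graph edges; the sketch below uses Python's ast module on a hypothetical toy module.

```python
import ast

def call_graph(source):
    """Map each top-level function name to the set of functions it calls."""
    tree = ast.parse(source)
    graph = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            graph[node.name] = {
                c.func.id
                for c in ast.walk(node)
                if isinstance(c, ast.Call) and isinstance(c.func, ast.Name)
            }
    return graph

# Hypothetical toy module; each edge is one call relationship in the network.
code = """
def load(): pass
def parse(): load()
def main(): parse(); load()
"""
edges = call_graph(code)  # {'load': set(), 'parse': {'load'}, 'main': {'parse', 'load'}}
```

The resulting adjacency structure could then be handed to any graph-drawing library to produce the network visualization.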

Urdu News Classification using Application of Machine Learning Algorithms on News Headline

  • Khan, Muhammad Badruddin
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.2
    • /
    • pp.229-237
    • /
    • 2021
  • Our modern, information-hungry age demands delivery of information at unprecedented rates, and timely delivery of noteworthy information about recent events can help people from many segments of life. As the world has become a global village, the volume and speed of news flow demand that machines help humans handle the enormous amount of data. News is presented to the public as video, audio, images, and text, and news text available on the internet is a source of knowledge for billions of internet users. The Urdu language is spoken and understood by millions of people from the Indian subcontinent, and the availability of online Urdu news enables them to improve their understanding of the world and inform their decisions. This paper uses available online Urdu news data to train machines to automatically categorize news. Various machine learning algorithms were trained on news headlines, and the results demonstrate that the Bernoulli Naïve Bayes (Bernoulli NB) and Multinomial Naïve Bayes (Multinomial NB) algorithms outperformed the others on all performance measures. The maximum accuracy achieved on the dataset was 94.278%, by the Multinomial NB classifier, followed by the Bernoulli NB classifier at 94.274%, when Urdu stop words were removed from the dataset. The results suggest that the short text of news headlines can serve as input for text categorization.
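
A minimal, self-contained sketch of the multinomial Naïve Bayes approach the abstract reports (the headlines and labels below are invented placeholders, not the paper's Urdu dataset):

```python
import math
from collections import Counter, defaultdict

def train_multinomial_nb(docs):
    """docs: list of (tokens, label). Returns class counts, per-class word counts, vocab."""
    priors, word_counts, vocab = Counter(), defaultdict(Counter), set()
    for tokens, label in docs:
        priors[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return priors, word_counts, vocab

def predict(tokens, priors, word_counts, vocab):
    """Log-space multinomial NB with Laplace (add-one) smoothing."""
    n_docs = sum(priors.values())
    best, best_score = None, float("-inf")
    for label in priors:
        total = sum(word_counts[label].values())
        score = math.log(priors[label] / n_docs)
        for t in tokens:
            score += math.log((word_counts[label][t] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

# Hypothetical headline tokens; labels stand in for news categories.
train = [
    (["team", "wins", "match"], "sports"),
    (["player", "scores", "goal"], "sports"),
    (["minister", "announces", "policy"], "politics"),
    (["parliament", "passes", "bill"], "politics"),
]
model = train_multinomial_nb(train)
label = predict(["player", "wins"], *model)  # -> "sports"
```

A stop-word filter, as used in the paper, would simply drop listed tokens before both training and prediction.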

Weibo Disaster Rumor Recognition Method Based on Adversarial Training and Stacked Structure

  • Diao, Lei;Tang, Zhan;Guo, Xuchao;Bai, Zhao;Lu, Shuhan;Li, Lin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.10
    • /
    • pp.3211-3229
    • /
    • 2022
  • To address the problems in Weibo disaster rumor recognition, such as the lack of a corpus, poorly standardized text, difficulty in learning semantic information, and the simple semantic features of disaster rumor text, this paper takes Sina Weibo as the data source, constructs a dataset for Weibo disaster rumor recognition, and proposes a deep learning model, BERT_AT_Stacked LSTM. First, an adversarial perturbation is added to the embedding vector of each word to generate adversarial samples that enrich the features of rumor text, and adversarial training is carried out to compensate for the relatively limited text features of disaster rumors. Second, the BERT component obtains word-level semantic information for each Weibo text and generates a hidden vector containing sentence-level feature information. Finally, the hidden, complex semantic information of poorly standardized Weibo texts is learned with a Stacked Long Short-Term Memory (Stacked LSTM) structure. The experimental results show that, compared with the baseline models, the proposed model is better at recognizing disaster rumors on Weibo, with an F1_Score of 97.48%; on an open general-domain dataset it achieves an F1_Score of 94.59%, indicating good generalization.
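
The first step the abstract describes, adding an adversarial perturbation to each word embedding, is commonly done FGM-style with an L2-normalized gradient step; a minimal numpy sketch under that assumption (the vectors and epsilon are illustrative, not the paper's settings):

```python
import numpy as np

def fgm_perturb(embedding, grad, eps=1.0):
    """Shift an embedding along the L2-normalized loss gradient (FGM-style)."""
    norm = np.linalg.norm(grad)
    if norm == 0:
        return embedding  # no gradient signal, leave the embedding unchanged
    return embedding + eps * grad / norm

emb = np.array([1.0, 0.0])          # toy word embedding
g = np.array([0.0, 2.0])            # toy gradient of the loss w.r.t. emb
adv = fgm_perturb(emb, g, eps=0.5)  # -> [1.0, 0.5]
```

Training then minimizes the loss on both the clean and the perturbed embeddings, which is what makes the rumor features harder to overfit.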

Synthesis of β-Galactooligosaccharide Using Bifidobacterial β-Galactosidase Purified from Recombinant Escherichia coli

  • Oh, So Young;Youn, So Youn;Park, Myung Soo;Kim, Hyoung-Geun;Baek, Nam-In;Li, Zhipeng;Ji, Geun Eog
    • Journal of Microbiology and Biotechnology
    • /
    • v.27 no.8
    • /
    • pp.1392-1400
    • /
    • 2017
  • Galactooligosaccharides (GOSs) are known to be selectively utilized by Bifidobacterium, which can bring about healthy changes in the composition of the intestinal microflora. In this study, β-GOSs were synthesized using a bifidobacterial β-galactosidase (G1) purified from recombinant E. coli, with a high GOS yield, high productivity, and enhanced bifidogenic activity. The purified recombinant G1 showed maximum production of β-GOSs at pH 8.5 and 45°C. Matrix-assisted laser desorption ionization time-of-flight mass spectrometry analysis of the major peaks of the produced β-GOSs showed molecular weights of 527 and 689, indicating synthesis of β-GOSs with degrees of polymerization (DP) of 3 and 4, respectively. The trisaccharide was identified as β-D-galactopyranosyl-(1→4)-O-β-D-galactopyranosyl-(1→4)-O-β-D-glucopyranose, and the tetrasaccharide as β-D-galactopyranosyl-(1→4)-O-β-D-galactopyranosyl-(1→4)-O-β-D-galactopyranosyl-(1→4)-O-β-D-glucopyranose. The maximal production yield of GOSs was as high as 25.3% (w/v) using the purified recombinant β-galactosidase and 36% (w/v) lactose as substrate at pH 8.5 and 45°C; after 140 min under these conditions, 268.3 g/l of GOSs was obtained. Regarding the prebiotic effect, all of the tested Bifidobacterium strains except B. breve grew well in BHI medium containing β-GOS as the sole carbon source, whereas lactobacilli and Streptococcus thermophilus scarcely grew in the same medium. Among the 17 pathogens tested, only Bacteroides fragilis, Clostridium ramosum, and Enterobacter cloacae grew in BHI medium containing β-GOS as the sole carbon source; the remaining pathogens did not. Consequently, β-GOSs are expected to contribute to a beneficial change in the intestinal microbial flora.

A study on unstructured text mining algorithm through R programming based on data dictionary (Data Dictionary 기반의 R Programming을 통한 비정형 Text Mining Algorithm 연구)

  • Lee, Jong Hwa;Lee, Hyun-Kyu
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.20 no.2
    • /
    • pp.113-124
    • /
    • 2015
  • Unlike structured data, which are gathered and saved in a predefined structure, unstructured text data, mostly written in natural language, have found wider application recently with the emergence of Web 2.0. Text mining is one of the most important big-data analysis techniques for extracting meaningful information from text, both because the amount of text data has grown and because human emotion is expressed in it directly. In this study, we used R, an open-source software environment for statistical analysis, and studied algorithm implementations for analyses such as frequency analysis, cluster analysis, word clouds, and social network analysis. In particular, to focus our research scope, we used a keyword-extraction method based on a data dictionary. By applying the approach to real cases, we found that R is very useful as statistical analysis software that runs on a variety of operating systems and interfaces with other languages.
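
The paper works in R; a rough Python analogue of the dictionary-based keyword frequency counting it describes (the data dictionary and text below are made-up examples) might look like:

```python
from collections import Counter

def keyword_frequencies(text, dictionary):
    """Count only the tokens that appear in a predefined data dictionary."""
    tokens = text.lower().split()
    return Counter(t for t in tokens if t in dictionary)

# Hypothetical data dictionary restricting the analysis to known keywords.
data_dict = {"mining", "cluster", "network"}
freq = keyword_frequencies("Text mining and cluster analysis network mining", data_dict)
# -> Counter({'mining': 2, 'cluster': 1, 'network': 1})
```

The dictionary filter is what keeps noise words out of the downstream frequency, clustering, and word-cloud steps.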

Implementation of the Voice Conversion in the Text-to-speech System (Text-to-speech 시스템에서의 화자 변환 기능 구현)

  • Hwang Cholgyu;Kim Hyung Soon
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • autumn
    • /
    • pp.33-36
    • /
    • 1999
  • To overcome the limitation that conventional text-to-speech (TTS) synthesis produces monotonous output from a single predetermined speaker, this paper implements a voice conversion capability that can express the timbre of an arbitrary speaker. The implemented method models a speaker's acoustic space with a Gaussian Mixture Model (GMM), enabling voice conversion based on continuous probability distributions. Using the joint density function of the feature vectors of the source and target speakers, a conversion function is obtained that minimizes the squared error between the target speaker's acoustic feature vectors and the converted vectors; this function is then used to transform the spectral envelope by vector mapping. For prosody conversion, the speech signal is modeled with a sinusoidal model, and the analyzed prosodic information (pitch, duration) is converted with its mean values taken into account. For performance evaluation, a VQ-mapping method was also implemented, and the normalized cepstral distance of each method was computed for comparison. At synthesis time, adopting ABS/OLA-based sinusoidal modeling produced natural-sounding synthesized speech.

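
The conversion function in the abstract, the minimum-squared-error mapping from source to target features under a joint density model, reduces in the single-Gaussian special case to a linear regression; a numpy sketch of that special case (synthetic 2-D features, not real spectral data):

```python
import numpy as np

def fit_linear_conversion(X, Y):
    """Single-Gaussian joint-density mapping: F(x) = mu_y + Syx Sxx^-1 (x - mu_x)."""
    mu_x, mu_y = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mu_x, Y - mu_y
    Sxx = Xc.T @ Xc / len(X)          # source covariance
    Syx = Yc.T @ Xc / len(X)          # target/source cross-covariance
    A = Syx @ np.linalg.inv(Sxx)
    return lambda x: mu_y + A @ (x - mu_x)

# Hypothetical feature pairs: target features are an exact linear map of source.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
Y = X @ np.array([[2.0, 0.0], [0.0, 0.5]]) + 1.0
convert = fit_linear_conversion(X, Y)  # convert(X[i]) recovers Y[i] here
```

The GMM version of the paper applies one such regressor per mixture component, weighted by the posterior probability of each component given the source frame.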

A Study of Main Contents Extraction from Web News Pages based on XPath Analysis

  • Sun, Bok-Keun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.20 no.7
    • /
    • pp.1-7
    • /
    • 2015
  • Data on the internet can be used in many fields, for example as a source for information retrieval (IR), data mining, and knowledge-information services, but it also contains much unnecessary information. Removing this unnecessary data is a problem to be solved before studying knowledge-based information services built on web-page data; in this paper we solve it through the implementation of XTractor (XPath Extractor). Since XPath is used to navigate the elements and attribute data of an XML document, the XPath analysis is carried out through XTractor. XTractor extracts the main text by parsing the HTML, grouping XPaths, and detecting the XPath that contains the main data. As a result, recognition and precision rates of 97.9% and 93.9% were achieved on a large amount of experimental data, except for a few cases, confirming that the main text of news pages can be properly extracted.
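
A simplified sketch of XPath-based main-text extraction in the spirit of XTractor (this uses Python's stdlib ElementTree on well-formed markup; real HTML parsing and XTractor's actual XPath-grouping logic are beyond this toy example):

```python
import xml.etree.ElementTree as ET

def extract_main_text(markup, xpath):
    """Return the text content of every node matched by a (limited) XPath."""
    root = ET.fromstring(markup)
    return ["".join(node.itertext()).strip() for node in root.findall(xpath)]

# Hypothetical news page reduced to well-formed XML for the sketch;
# the XPath selects the article container and skips the ad block.
page = """
<html><body>
  <div class="ad">Buy now!</div>
  <div class="article"><p>Main story text.</p></div>
</body></html>
"""
texts = extract_main_text(page, ".//div[@class='article']")  # -> ['Main story text.']
```

Detecting *which* XPath holds the main content, rather than hard-coding it as here, is the part the paper automates via grouping and frequency analysis.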