• Title/Summary/Keyword: contents filtering


Subject-Balanced Intelligent Text Summarization Scheme (주제 균형 지능형 텍스트 요약 기법)

  • Yun, Yeoil;Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.141-166
    • /
    • 2019
  • Recently, channels such as social media and SNS have been generating enormous amounts of data, and the portion of unstructured data represented as text has grown geometrically. Because it is impractical to read all of this text, it is important to access it rapidly and grasp its key points. To meet this need for efficient understanding, many studies on text summarization for handling and using tremendous amounts of text data have been proposed. In particular, many summarization methods using machine learning and artificial intelligence algorithms, collectively called "automatic summarization," have recently been proposed to generate summaries objectively and effectively. However, most text summarization methods proposed to date construct summaries focused on the frequency of contents in the original documents. Such summaries tend to omit low-weight subjects that are mentioned less often in the original text. If a summary covers only the major subjects, bias occurs and information is lost, making it difficult to ascertain every subject the documents contain. To avoid this bias, a document can be summarized so as to balance the topics it contains, but an unbalanced distribution across subjects may still remain. To retain subject balance in a summary, it is necessary to consider the proportion of each subject in the original documents and to allocate summary space equally across subjects, so that even sentences on minor subjects are sufficiently included. In this study, we propose a "subject-balanced" text summarization method that secures balance between all subjects and minimizes the omission of low-frequency subjects. For subject-balanced summarization, we use two summary evaluation metrics, "completeness" and "succinctness". Completeness means the summary should fully cover the contents of the original documents; succinctness means the summary contains minimal internal duplication. The proposed method has three phases. The first phase constructs subject term dictionaries. Topic modeling is used to calculate topic-term weights, which indicate how strongly each term relates to each topic. From these weights, highly related terms for every topic can be identified, and the subjects of the documents emerge as topics composed of terms with similar meanings. A few terms that represent each subject well, called "seed terms," are then selected. Because the seed terms alone are too few to explain each subject sufficiently, similar terms must be added to build a well-constructed subject dictionary. Word2Vec is used for this word expansion: after training, the similarity between any two terms can be derived from their word vectors using cosine similarity, where a higher value indicates a stronger relationship. Terms with high similarity to the seed terms of each subject are selected, and after filtering these expanded terms, the subject dictionary is constructed. The next phase allocates a subject to every sentence of the original documents. To grasp the contents of each sentence, frequency analysis is first conducted over the terms in the subject dictionaries; a TF-IDF weight is then calculated for each subject, measuring how much each sentence explains it. Because TF-IDF weights can grow without bound, the per-sentence subject weights are normalized to values between 0 and 1. Each sentence is then allocated to the subject with the maximum normalized TF-IDF weight, producing a sentence group for each subject. The last phase is summary generation. Sen2Vec is used to measure the similarity between the sentences of each subject, forming a similarity matrix; by repeatedly selecting sentences, a summary is generated that fully covers the contents of the original documents while minimizing internal duplication. For evaluation, 50,000 TripAdvisor reviews were used to construct the subject dictionaries and 23,087 reviews to generate summaries. A comparison with a frequency-based summary verified that summaries from the proposed method better retain the subject balance the documents originally have.
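The allocation phase described in the abstract (per-subject TF-IDF from the subject dictionaries, normalized to [0, 1], then argmax assignment) can be sketched as follows. The tokenization, the dictionary contents, and all names are illustrative assumptions, not the authors' implementation:

```python
import math
from collections import Counter

def allocate_subjects(sentences, subject_dicts):
    """Assign each sentence to the subject whose dictionary terms
    give it the highest normalized TF-IDF weight."""
    n = len(sentences)
    tokenized = [s.lower().split() for s in sentences]
    # document frequency of every term across the sentence collection
    df = Counter(t for toks in tokenized for t in set(toks))
    labels = []
    for toks in tokenized:
        tf = Counter(toks)
        weights = {}
        for subject, terms in subject_dicts.items():
            weights[subject] = sum(
                tf[t] * math.log(n / df[t]) for t in terms if t in tf
            )
        # normalize the subject weights to [0, 1] before taking the argmax
        total = sum(weights.values()) or 1.0
        weights = {s: w / total for s, w in weights.items()}
        # note: a sentence matching no dictionary term falls to the first subject
        labels.append(max(weights, key=weights.get))
    return labels
```

A usage sketch: with a "room" dictionary {bed, clean} and a "food" dictionary {breakfast, tasty}, the sentence "the bed was clean" is allocated to "room" and "tasty breakfast served" to "food".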

Personalized Recommendation System for IPTV using Ontology and K-medoids (IPTV환경에서 온톨로지와 k-medoids기법을 이용한 개인화 시스템)

  • Yun, Byeong-Dae;Kim, Jong-Woo;Cho, Yong-Seok;Kang, Sang-Gil
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.147-161
    • /
    • 2010
  • As broadcasting and communication have recently converged, communication has been joined to TV, and TV viewing has changed in many ways. IPTV (Internet Protocol Television) provides information services, movie contents, and broadcasts over the Internet, combining live programs with VOD (Video on Demand). Delivered over communication networks, it has become a new business issue, and new technical issues have arisen as well: imaging technology for the service, networking technology that avoids video interruptions, security technologies to protect copyright, and so on. Through the IPTV network, users can watch their desired programs whenever they want. However, IPTV makes it difficult to find programs through either search or menu navigation. The menu approach takes a long time to reach a desired program, and the search approach fails when the title, genre, or actors' names are unknown; entering letters through a remote control is also cumbersome. The bigger problem is that users are often unaware of the services available to them. To resolve these difficulties in selecting a VOD service in IPTV, a personalized service is recommended, which enhances users' satisfaction and uses their time efficiently. This paper addresses IPTV's shortcomings through a filtering and recommendation system that provides programs fitted to individual users, saving their time. The proposed recommendation system collects TV program information and, from each user's IPTV viewing records, the user's preferred genres and sub-genres, channels, watched programs, and viewing times. To find similarities between programs, an ontology for TV programs is used, because the distance between programs can be measured by similarity comparison. The TV program ontology we use is extracted from TV-Anytime metadata, which represents semantic structure. The ontology also expresses contents and features numerically. Vocabulary similarity is determined through WordNet: all words describing the programs are expanded into their upper and lower classes (hypernyms and hyponyms) for word similarity decisions, and the average over the describing keywords is measured. Using this distance criterion, similar programs are grouped by the K-medoids partitioning method, which divides objects into clusters with similar characteristics. K-medoids sets K representative objects (medoids), assigns each object to its nearest medoid to form temporary clusters, and, dividing the initial n objects into K clusters, repeatedly searches for the optimal representative object after selecting one temporarily. Through this process, similar programs are clustered. After selecting programs through cluster analysis, weights are assigned to the recommendations as follows. Each cluster recommends to users the programs nearest its medoid, using the same distance measure as the similarity computation; this distance is the basic figure that determines the ranking of recommended programs. A weight is also computed from the watching lists: the more programs a cluster contains, the higher its weight, which we define as the cluster weight. From this, the representative TV programs of each cluster are selected and the final program ranks are determined. However, the cluster-representative programs include errors, so weights reflecting the user's program viewing preference are added to determine the final ranks, on the basis of which preferred contents are recommended to users. Based on the proposed method, an experiment was carried out in a controlled environment, and it shows the superiority of the proposed method compared to existing approaches.
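The K-medoids partitioning described above can be sketched as follows. This is a minimal textbook variant operating on a precomputed distance matrix, not the authors' implementation; the program-distance function itself (ontology plus WordNet similarity) is assumed to be given:

```python
import random

def k_medoids(dist, k, iters=100, seed=0):
    """Minimal K-medoids: choose k medoids, assign each point to its
    nearest medoid, then swap in the member that minimizes the
    within-cluster distance sum, repeating until the medoids stabilize."""
    rng = random.Random(seed)
    n = len(dist)
    medoids = rng.sample(range(n), k)
    for _ in range(iters):
        # assignment step: colonize points around the current medoids
        clusters = {m: [] for m in medoids}
        for p in range(n):
            nearest = min(medoids, key=lambda m: dist[p][m])
            clusters[nearest].append(p)
        # update step: pick the best representative object per cluster
        new_medoids = []
        for members in clusters.values():
            best = min(members, key=lambda c: sum(dist[c][x] for x in members))
            new_medoids.append(best)
        if set(new_medoids) == set(medoids):
            break
        medoids = new_medoids
    return medoids, clusters
```

On four points at positions 0, 1, 10, 11 on a line with k=2, the method converges to the two natural clusters {0, 1} and {10, 11} regardless of the random start.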

Reduction of Salt Concentration in Food Waste by Salt Reduction Process with a Rotary Reactor (로터리식 저염화 공정설비에 의한 음식물 쓰레기의 염분농도 저감)

  • Kim, Wi-sung;Seo, Young-Hwa
    • Journal of the Korea Organic Resources Recycling Association
    • /
    • v.13 no.1
    • /
    • pp.61-70
    • /
    • 2005
  • In order to reduce the salt (NaCl) content of food waste and to improve the quality of the wastewater discharged while food waste is recycled into compost and feedstuff, a salt reduction process that adds water to the food waste was developed. A pilot plant with rotary salt reduction equipment capable of continuously treating 0.5 tons of food waste per hour was constructed and its efficiency tested. The amount of added water was calculated from the water content of the food waste and the efficiency of the dewatering process. Approximately 0.8 liters of water per kilogram of food waste were injected into the reactor while the food waste was being fed in, then diluted and mixed in the rotary reactor. About 1.1 liters of leachate, including the added water, were generated; because the leachate contained a very high content of organic particles, most of the particles were recovered by a two-step solid-liquid separation: the first step was gravitational filtering through screens with a pore diameter of 1 mm, and the second was centrifugation. The organic quality of the desalted food waste was maintained by returning all of the recovered organic particles. The salt reduction efficiency was estimated by titrating the chloride anion and by measuring salinity with a probe; the results of the two measuring methods consistently exceeded 50%. The quality of the final wastewater was improved to 200 mg/L as TS (total solids) by an additional settling process after the two-step solid-liquid separation.
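The reported figures (about 0.8 L of added water and 1.1 L of leachate per kilogram of waste) allow a rough ideal-dilution estimate of the salt removal fraction. The 0.8 L of free water assumed per kilogram of waste and the complete-mixing assumption are illustrative, not values from the paper:

```python
def salt_removal_fraction(water_in_waste_l, added_water_l, leachate_l):
    """Ideal mixed-dilution estimate: salt is dissolved uniformly in the
    combined liquid, and the fraction leaving with the leachate is removed."""
    total_liquid = water_in_waste_l + added_water_l
    return min(leachate_l / total_liquid, 1.0)

# hypothetical 0.8 L free water/kg + 0.8 L added, 1.1 L leachate out
# -> about 69% removal, consistent with the >50% the paper reports
fraction = salt_removal_fraction(0.8, 0.8, 1.1)
```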


Development of a Gene's Functional Classifying System for a Microarray Data using a Gene Ontology (유전자 온톨로지를 이용한 마이크로어레이 데이터의 유전자 기능 분석 시스템의 개발)

  • Lee, Jong-Keun;Park, S.S.;Hong, D.W.;Yoon, J.H.
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2006.10c
    • /
    • pp.246-251
    • /
    • 2006
  • Microarray experiments can measure the expression of thousands to tens of thousands of genes simultaneously, and are therefore widely used for tasks such as classifying disease expression phenotypes. However, microarray experiments always contain errors: even experiments on the same platform yield different results depending on the environment. Moreover, microarray experiments are still classified as expensive, so it is difficult to obtain repeated results over many samples. A new approach is therefore needed that efficiently integrates data from heterogeneous platforms, data formats, and normalization techniques to extract useful information. This paper presents the results of a preliminary study toward solving this problem. We show the design and implementation of a gene function analysis system that extracts informative genes from microarray data using statistical methods and, by linking them to the Gene Ontology (GO), provides users with a functional classification of the gene information. In our experimental procedure, genes with large expression differences were extracted by 3-fold filtering, ranked by a t-test, and the top 100 genes were taken as informative genes. The t-test values of these informative genes are then imposed as weights on the GO terms representing gene functions, and the terms with high functional association to each gene are extracted. To validate this work, the microarray data analysis results produced by our system are compared with gene function analyses performed by an expert.
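The informative-gene selection described above (3-fold filtering followed by t-test ranking, keeping the top 100) can be sketched as follows. The use of a Welch t-statistic and all data structures here are illustrative assumptions, not the paper's exact procedure:

```python
import math

def rank_informative_genes(case, control, fold=3.0, top_n=100):
    """Keep genes whose group means differ by at least `fold`,
    rank the survivors by |t| (Welch), return the top_n gene ids."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    scored = []
    for gene in case:
        a, b = case[gene], control[gene]
        ma, mb = mean(a), mean(b)
        # fold-change filtering step (3-fold by default)
        ratio = max(ma, mb) / max(min(ma, mb), 1e-9)
        if ratio < fold:
            continue
        # Welch t-statistic for ranking the surviving genes
        t = (ma - mb) / math.sqrt(var(a) / len(a) + var(b) / len(b))
        scored.append((abs(t), gene))
    return [g for _, g in sorted(scored, reverse=True)[:top_n]]
```

A gene whose expression barely changes between groups is dropped by the fold filter before any t-test is computed, which is the point of the two-stage design.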


Stimulation of Nitric Oxide Production in RAW 264.7 Macrophages by the Peptides Derived from Silk Fibroin. (실크 피브로인 유래 펩타이드에 의한 RAW 264.7 Macrophage의 Nitric Oxide 생성 촉진)

  • 박금주;현창기
    • Microbiology and Biotechnology Letters
    • /
    • v.30 no.1
    • /
    • pp.39-45
    • /
    • 2002
  • Peptides derived from hydrolysates of silk fibroin were found to have in vitro immunostimulating effects on murine macrophage RAW 264.7 cells. The stimulation of nitric oxide (NO) production by treatment with acid or enzymatic hydrolysates was measured. The silk fibroin preparation isolated from cocoons was most efficiently digested by acid hydrolysis. Although treatment with the acid hydrolysate alone stimulated NO production in a dose-dependent pattern, part of its activity was found to be caused by contaminating endotoxin (LPS). When each endotoxin-free hydrolysate, obtained by filtration through an ultrafiltration membrane with a molecular weight (MW) cut-off of 10,000 to eliminate LPS, was used, the peptic hydrolysate with the lowest degree of hydrolysis showed the highest activity. The fractions of the peptic hydrolysate with MW ranges of 1,000∼10,000, 500∼1,000, and below 500 likewise showed that higher MW correlated with higher activity. Analyses of the amino acid composition of each hydrolysate showed that the contents of arginine, lysine, alanine, and glycine residues affected the activity level of the hydrolysate. These results suggest the possibility of utilizing fibroin as a source of immunostimulating (chemopreventive) functional peptides.

NEAR REAL-TIME IONOSPHERIC MODELING USING A REGIONAL GPS NETWORK (지역적 GPS 관측망을 이용한 준실시간 전리층 모델링)

  • Choi, Byung-Kyu;Park, Jong-Uk;Chung, Jeong-Kyun;Park, Phil-Ho
    • Journal of Astronomy and Space Sciences
    • /
    • v.22 no.3
    • /
    • pp.283-292
    • /
    • 2005
  • The ionosphere is deeply coupled to the space environment and, because of its electromagnetic characteristics, introduces perturbations into radio signals. The status of the ionosphere can therefore be estimated by analyzing the errors of GPS signals penetrating it, which is a key to understanding global circulation and change in the upper atmosphere and the characteristics of space weather. We used 9 GPS Continuously Operating Reference Stations (CORS) operated by the Korea Astronomy and Space Science Institute (KASI) to determine high-precision Total Electron Content (TEC), using pseudorange data phase-leveled by a linear combination with the carrier phase to reduce the inherent noise. We developed a method to model the regional ionosphere over South Korea in grid form with a 0.25° by 0.25° spatial resolution. To improve the precision of the ionospheric TEC values, we applied IDW (Inverse Distance Weighting) and Kalman filtering. The regional ionospheric model developed in this research was compared for 8 days with the GIMs (Global Ionosphere Maps) produced by the Ionosphere Working Group, and the results show a difference of 3∼4 TECU in RMS.
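The IDW step used to grid the TEC values can be sketched as follows. Treating station coordinates as planar and using an exponent of 2 are simplifying assumptions for illustration, not details from the paper:

```python
def idw(stations, query, power=2):
    """Inverse Distance Weighting: estimate TEC at `query` (lat, lon)
    as the distance-weighted average of station measurements."""
    num = den = 0.0
    for lat, lon, tec in stations:
        d2 = (lat - query[0]) ** 2 + (lon - query[1]) ** 2
        if d2 == 0.0:
            return tec  # the grid point coincides with a station
        w = 1.0 / d2 ** (power / 2)  # weight falls off as 1/distance^power
        num += w * tec
        den += w
    return num / den
```

Evaluating this at every node of a 0.25° by 0.25° grid over the network yields a gridded TEC map of the kind the paper describes, before the Kalman filtering step.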

A Study on Spam Document Classification Method using Characteristics of Keyword Repetition (단어 반복 특징을 이용한 스팸 문서 분류 방법에 관한 연구)

  • Lee, Seong-Jin;Baik, Jong-Bum;Han, Chung-Seok;Lee, Soo-Won
    • The KIPS Transactions:PartB
    • /
    • v.18B no.5
    • /
    • pp.315-324
    • /
    • 2011
  • In the Web environment, a flood of spam causes serious social problems such as personal information leaks, monetary losses from phishing, and the distribution of harmful contents. Moreover, the types and techniques of spam distribution that must be controlled vary as the days go by. The learning-based spam classification method using the Bag-of-Words model has been the most widely used method until now. However, this method is vulnerable to the anti-spam avoidance techniques that recent spam commonly employs, because it classifies spam documents using only keyword occurrence information from the classification model training process. In this paper, we propose a spam document detection method that uses the characteristic repetition of words in spam documents as a countermeasure to anti-spam avoidance techniques. Most recent spam documents tend to repeat the key phrases they are designed to spread, and this tendency can serve as a measure for classifying spam documents. We define six variables that represent this word repetition characteristic and use them as the feature set for constructing a classification model. The effectiveness of the proposed method is evaluated in an experiment with blog posts and e-mail data; the results show that the proposed method outperforms other approaches.
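The abstract does not enumerate the six repetition variables, so the features below are hypothetical stand-ins illustrating the general idea of word-repetition features for a classifier's feature set:

```python
from collections import Counter

def repetition_features(text):
    """Illustrative word-repetition features (the paper defines six
    variables; these three are hypothetical examples in the same spirit)."""
    words = text.lower().split()
    counts = Counter(words)
    return {
        # occurrence count of the single most repeated word
        "max_repetition": max(counts.values()),
        # share of vocabulary items that occur more than once
        "repetition_ratio": sum(1 for c in counts.values() if c > 1) / len(counts),
        # vocabulary diversity: distinct words over total words
        "type_token_ratio": len(counts) / len(words),
    }
```

A spam-like string such as "buy now buy now buy now" scores high on repetition and low on diversity, which is exactly the signal a repetition-based classifier exploits.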

A Mobile Newspaper Application Interface to Enhance Information Accessibility of the Visually Impaired (시각장애인의 정보 접근성 향상을 위한 모바일 신문 어플리케이션 인터페이스)

  • Lee, Seung Hwan;Hong, Seong Ho;Ko, Seung Hee;Choi, Hee Yeon;Hwang, Sung Soo
    • Journal of the HCI Society of Korea
    • /
    • v.11 no.3
    • /
    • pp.5-12
    • /
    • 2016
  • The number of visually impaired people using a smartphone is currently increasing with the help of Text-to-Speech (TTS). TTS converts the text data in a mobile application into sound, but it only allows sequential search; for this reason, the locations of buttons and contents inside an application must be determined carefully. However, little attention has been paid to the TTS service environment during the development of mobile newspaper applications, which makes these applications difficult for visually impaired people to use. Furthermore, a mobile application interface that also reflects the needs of people with low vision is necessary. Therefore, this paper presents a mobile newspaper interface that considers the accessibility and needs of various visually impaired people. To this end, the proposed interface places buttons with the TTS service environment in mind and provides search functionality. It also enables visually impaired people to use the application smoothly by filtering out words that are pronounced improperly and by providing a proper explanation for every button. Finally, several features for low vision, such as font enlargement and color reversal, are implemented. Simulation results show that the proposed interface achieves better performance than other applications in terms of search speed and usability.

Detecting near-duplication Video Using Motion and Image Pattern Descriptor (움직임과 영상 패턴 서술자를 이용한 중복 동영상 검출)

  • Jin, Ju-Kyong;Na, Sang-Il;Jenong, Dong-Seok
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.4
    • /
    • pp.107-115
    • /
    • 2011
  • In this paper, we propose a fast and efficient algorithm for detecting near-duplicates, based on content-based retrieval, in a large-scale video database. To handle large amounts of video easily, we split each video into small segments using scene change detection. For video services and copyright-related business models, a technology is needed that detects near-duplicates of longer matched video, beyond searching for videos containing a short part or a single frame of the original. To detect near-duplicate videos, we propose a motion distribution descriptor and a frame descriptor for each video segment. The motion distribution descriptor is constructed from the motion vectors of macroblocks obtained during the video decoding process. When matching descriptors, the motion distribution descriptor is used as a filter to improve matching speed; however, it has low discriminability. To improve discrimination, identification is performed with frame descriptors extracted from representative frames selected within each scene segment. The proposed algorithm shows a high success rate and a low false alarm rate. In addition, matching with this descriptor is very fast, and we confirm that the algorithm is useful for practical applications.
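The two-stage matching described above (a cheap motion-distribution filter followed by frame-descriptor confirmation) can be sketched as follows. The L1 distance, the thresholds, and the descriptor layout are illustrative assumptions, not the paper's specification:

```python
def match_segments(query, database, motion_thresh=0.2, frame_thresh=0.1):
    """Two-stage near-duplicate search: a cheap motion-distribution
    distance prunes candidates, a costlier frame descriptor confirms."""
    def l1(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    hits = []
    for seg_id, (motion, frame) in database.items():
        # stage 1: coarse filter on the motion distribution descriptor
        if l1(query["motion"], motion) > motion_thresh:
            continue
        # stage 2: confirm with the more discriminative frame descriptor
        if l1(query["frame"], frame) <= frame_thresh:
            hits.append(seg_id)
    return hits
```

The design choice mirrors the abstract: the motion descriptor is fast but weakly discriminative, so it only prunes, while the frame descriptor makes the final identification.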

A proper folder recommendation technique using frequent itemsets for efficient e-mail classification (효과적인 이메일 분류를 위한 빈발 항목집합 기반 최적 이메일 폴더 추천 기법)

  • Moon, Jong-Pil;Lee, Won-Suk;Chang, Joong-Hyuk
    • Journal of the Korea Society of Computer and Information
    • /
    • v.16 no.2
    • /
    • pp.33-46
    • /
    • 2011
  • Since e-mail has become an important means of communication and information sharing, much effort has gone into classifying e-mails efficiently by their contents. E-mails vary in length and style, the words used in them are often irregular, and the criteria for classifying them are subjective. As a result, it is quite difficult to adapt conventional text classification techniques to e-mail classification efficiently. The e-mail classification in commercial e-mail programs uses a simple text filtering technique in the e-mail client. Previous studies on the automatic classification of e-mail have used the probability-based Naive Bayesian technique to improve classification accuracy, and most address e-mail in English. This paper proposes a personalized recommendation technique for e-mail in Korean using the data mining of frequent patterns. The proposed technique consists of two phases: pre-processing the e-mails in each e-mail folder, and generating a profile for the folder. The generated profile is used to classify an e-mail into the most appropriate folder according to the user's subjective criteria. An e-mail classification system adopting the proposed technique is also implemented.
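The two-phase idea above (mining a frequent-itemset profile per folder, then recommending the folder whose profile best covers a new mail) can be sketched as follows. Restricting the itemsets to term pairs and the support threshold of 2 are simplifying assumptions for illustration:

```python
from collections import Counter
from itertools import combinations

def folder_profiles(folders, min_support=2):
    """Phase 1: per folder, keep the term pairs (2-itemsets) that occur
    together in at least `min_support` of the folder's e-mails."""
    profiles = {}
    for folder, mails in folders.items():
        pair_counts = Counter()
        for mail in mails:
            terms = sorted(set(mail.lower().split()))
            pair_counts.update(combinations(terms, 2))
        profiles[folder] = {p for p, c in pair_counts.items() if c >= min_support}
    return profiles

def recommend_folder(mail, profiles):
    """Phase 2: recommend the folder whose frequent pairs best cover the mail."""
    terms = sorted(set(mail.lower().split()))
    pairs = set(combinations(terms, 2))
    return max(profiles, key=lambda f: len(profiles[f] & pairs))
```

Because the profile is mined from the user's own folders, the recommendation follows the user's subjective filing criteria rather than a fixed global taxonomy, which is the point the abstract makes against conventional classifiers.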