• Title/Summary/Keyword: Fully automatic

Search Results: 252

Subject-Balanced Intelligent Text Summarization Scheme (주제 균형 지능형 텍스트 요약 기법)

  • Yun, Yeoil;Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.141-166
    • /
    • 2019
  • Recently, channels such as social media and SNS have been generating enormous amounts of data, and the portion of unstructured data represented as text has grown geometrically. Because it is impractical to examine all text data, it is important to access the data rapidly and grasp the key points of the text. To meet this need for efficient understanding, many studies on text summarization for handling and using huge volumes of text have been proposed. In particular, many recent methods use machine learning and artificial intelligence algorithms to generate summaries objectively and effectively; this is called "automatic summarization". However, most text summarization methods proposed to date construct summaries based on the frequency of contents in the original documents. Such summaries tend to omit low-weight subjects that are mentioned less often in the original text. If a summary covers only the major subject, bias occurs and information is lost, so it becomes hard to ascertain every subject the documents contain. To avoid this bias, a document can be summarized with its topics balanced so that every subject can be ascertained, but an imbalance in the distribution of those subjects may still remain. To retain a balance of subjects in the summary, it is necessary to consider the proportion of every subject the documents originally have and to allocate the subjects' portions equally, so that even sentences on minor subjects are sufficiently included. In this study, we propose a "subject-balanced" text summarization method that secures balance among all subjects and minimizes the omission of low-frequency subjects. For subject-balanced summaries, we use two summary evaluation metrics, "completeness" and "succinctness": completeness means that the summary should fully cover the contents of the original documents, and succinctness means that the summary contains minimal internal duplication. The proposed method has three phases. The first phase constructs subject term dictionaries. Topic modeling is used to calculate topic-term weights, which indicate how strongly each term is related to each topic. From the derived weights, highly related terms for each topic can be identified, and the subjects of the documents can be found from topics composed of terms with similar meanings. A few terms that represent each subject well are then selected; these are called "seed terms". However, the seed terms alone are too few to describe each subject adequately, so additional terms similar to the seed terms are needed for a well-constructed subject dictionary. Word2Vec is used for this word expansion: word vectors are created by Word2Vec modeling, and the similarity between all pairs of terms is derived from those vectors using cosine similarity. The higher the cosine similarity between two terms, the stronger their relationship is taken to be. Terms with high similarity to the seed terms of each subject are selected, and after filtering these expanded terms the subject dictionary is finally constructed. The next phase allocates a subject to every sentence of the original documents. To grasp the contents of each sentence, frequency analysis is first conducted with the terms composing the subject dictionaries. TF-IDF weights for each subject are then calculated, making it possible to measure how much each sentence explains each subject. Because TF-IDF weights can grow without bound, the weights for every subject in each sentence are normalized to values between 0 and 1. Each sentence is then assigned the subject with the maximum TF-IDF weight, so that a sentence group is finally constructed for each subject. The last phase is summary generation. Sen2Vec is used to measure the similarity between subject sentences, forming a similarity matrix. By repeatedly selecting sentences, a summary can be generated that fully covers the contents of the original documents while minimizing internal duplication. To evaluate the proposed method, 50,000 TripAdvisor reviews were used to construct the subject dictionaries and 23,087 reviews were used to generate summaries. A comparison between the proposed method's summaries and frequency-based summaries verified that the proposed method better retains the balance of subjects that the documents originally have.
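
The subject-allocation step (phase 2) can be sketched in a few lines. This is a minimal illustration, assuming a toy subject dictionary and plain term frequency in place of the paper's TF-IDF weighting; all subjects and terms here are hypothetical.

```python
from collections import Counter

# Toy subject dictionaries (the paper builds these from topic modeling
# followed by Word2Vec expansion of seed terms; these are made up).
SUBJECT_DICTS = {
    "room":    {"room", "bed", "clean", "bathroom"},
    "service": {"staff", "friendly", "service", "helpful"},
}

def subject_scores(sentence, subject_dicts):
    """Score a sentence against each subject by dictionary-term frequency,
    then normalize the scores into the 0..1 range (the paper normalizes
    per-sentence TF-IDF weights the same way)."""
    tokens = Counter(sentence.lower().split())
    raw = {s: sum(tokens[t] for t in terms) for s, terms in subject_dicts.items()}
    total = sum(raw.values()) or 1  # avoid division by zero
    return {s: v / total for s, v in raw.items()}

def allocate_subject(sentence, subject_dicts):
    """Assign the sentence to the subject with the maximum normalized score."""
    scores = subject_scores(sentence, subject_dicts)
    return max(scores, key=scores.get)

sent = "The staff were friendly and the service was helpful"
print(allocate_subject(sent, SUBJECT_DICTS))  # -> service
```

Grouping every sentence by its allocated subject yields the per-subject sentence groups from which the final summary is drawn.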

Film Line Scratch Detection using a Neural Network based Texture Classifier (신경망 기반의 텍스처 분류기를 이용한 스크래치 검출)

  • Kim, Kyung-Tai;Kim, Eun-Yi
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.43 no.6 s.312
    • /
    • pp.26-33
    • /
    • 2006
  • Film restoration detects the location and extent of defective regions in a given movie film and, if any are present, reconstructs the lost information of each region. It has gained increasing attention from many researchers as a way to support high-quality multimedia services. In general, an old film is degraded by dust, scratches, flicker, and so on; among these, the most frequent degradation is the scratch. Techniques for scratch restoration have been developed, but they have limited applicability when dealing with all kinds of scratches. To fully support automatic scratch restoration, a system is needed that can detect all kinds of scratches in a given frame of an old film. This paper presents a neural network (NN)-based texture classifier that automatically detects all kinds of scratches in frames of old films. To facilitate the detection of scratches of various sizes, we use a pyramid of images generated from the original frames at three resolution levels. The image at each level is scanned by the NN-based classifier, which divides the input image into scratch and non-scratch regions. To reduce the computational cost, the classifier is applied only to edge pixels. To assess the validity of the proposed method, experiments were performed on old films and animations containing all kinds of scratches, and the results show the effectiveness of the proposed method.
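
The multi-resolution scanning strategy can be sketched as follows. The pyramid construction and the restriction to edge pixels follow the abstract, but the trained NN texture classifier is replaced by a caller-supplied stand-in predicate, and the image, threshold, and edge test are illustrative only.

```python
def downsample(img):
    """Halve resolution by 2x2 averaging (one pyramid level)."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4
             for x in range(w)] for y in range(h)]

def build_pyramid(img, levels=3):
    """Three resolution levels, as in the paper."""
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(downsample(pyr[-1]))
    return pyr

def is_edge(img, x, y, thresh=40):
    """Crude horizontal-gradient edge test (a stand-in for a real detector)."""
    return x + 1 < len(img[0]) and abs(img[y][x+1] - img[y][x]) > thresh

def detect_scratches(img, classify):
    """Apply the classifier only at edge pixels, as the paper does,
    to cut computational cost."""
    return [(x, y) for y in range(len(img)) for x in range(len(img[0]))
            if is_edge(img, x, y) and classify(img, x, y)]
```

A caller would run `detect_scratches` on each pyramid level in turn, so that both thin and wide scratches fall within the classifier's window at some scale.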

A DESIGN AND DEVELOPMENT OF MULTI-PURPOSE CCD CAMERA SYSTEM WITH THERMOELECTRIC COOLING II. SOFTWARE (열전냉각방식의 범용 CCD 카메라 시스템 개발 II. 소프트웨어)

  • Oh, S.H.;Kang, Y.W.;Byun, Y.I.
    • Journal of Astronomy and Space Sciences
    • /
    • v.24 no.4
    • /
    • pp.367-378
    • /
    • 2007
  • We present software that we developed for a multi-purpose CCD camera. The software supports all three types of KODAK CCDs: the KAF-0401E ($768{\times}512$), KAF-1602E ($1536{\times}1024$), and KAF-3200E ($2184{\times}1472$). For efficient camera control, the software runs as two independent processes: a CCD control program and a temperature/shutter operation program. It is designed for fully automatic as well as manual operation under the Linux system, and it is controlled through the Linux user-signal mechanism. We plan to use this software for an all-sky survey system as well as for night-sky monitoring and sky observation. The read-out times of the CCDs are about 15 s, 64 s, and 134 s for the KAF-0401E, KAF-1602E, and KAF-3200E, respectively, because they are limited by the data transmission speed of the parallel port. Larger-format CCDs require higher-speed data transmission, so we are considering porting the control software to the USB port for high-speed data transfer.
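
The user-signal control between the two cooperating processes can be illustrated in miniature. This is a sketch in Python rather than the original implementation, and the "start exposure" command bound to SIGUSR1 is a hypothetical example of such a signal-triggered action; on a Unix-like system, one process would `kill()` the other to drive it.

```python
import os
import signal

events = []  # record of actions triggered by signals

def start_exposure(signum, frame):
    """Handler invoked when the companion process sends SIGUSR1
    (a hypothetical 'start exposure' command)."""
    events.append("exposure_started")

# Register the handler for the Linux user signal.
signal.signal(signal.SIGUSR1, start_exposure)

# In the real system the companion process would send the signal;
# here we signal our own process for demonstration.
os.kill(os.getpid(), signal.SIGUSR1)
```

Splitting the CCD control and temperature/shutter duties into two processes keeps a long read-out from blocking time-critical shutter and cooling actions.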

An Intelligent Display Scheme of Soccer Video for Multimedia Mobile Devices (멀티미디어 이동형 단말을 위한 축구경기 비디오의 지능적 디스플레이 방법)

  • Seo Kee-Won;Kim Chang-Ick
    • Journal of Broadcast Engineering
    • /
    • v.11 no.2 s.31
    • /
    • pp.207-221
    • /
    • 2006
  • A fully automatic and computationally efficient method is proposed for the intelligent display of soccer video on small multimedia mobile devices. The rapid progress of multimedia signal processing has contributed to the extensive use of multimedia devices with small LCD panels. On these emerging small mobile devices, video sequences captured for standard- or HDTV broadcasting may leave small-display viewers struggling to understand what is happening in a scene. For instance, in a soccer video sequence taken with a long-shot camera technique, tiny objects (e.g., the soccer ball and players) may not be clearly visible on the small LCD panel. Thus, an intelligent display technique is needed for small-display viewers. To this end, one of the key technologies is determining the region of interest (ROI), the part of the scene to which viewers pay more attention than to other regions. In this paper, the focus is on soccer video display for mobile devices. Instead of relying on visual saliency, we take a domain-specific approach that exploits the characteristics of soccer video. The proposed scheme includes three modules: ground color learning, shot classification, and ROI determination. The experimental results show that the proposed scheme is capable of intelligent video display on mobile devices.
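
The shot-classification idea can be sketched as follows, assuming the ground color has already been learned. The color value, tolerance, and ratio threshold are illustrative placeholders, not values from the paper: the point is that frames dominated by ground-colored pixels are long shots, which are exactly the ones that need ROI cropping on a small display.

```python
GROUND = (60, 140, 60)  # hypothetical learned ground color (RGB)

def is_ground(pixel, tol=40):
    """Test whether a pixel falls within a tolerance box around
    the learned ground color."""
    return all(abs(p - g) <= tol for p, g in zip(pixel, GROUND))

def classify_shot(frame, long_shot_ratio=0.5):
    """Label a frame 'long' when ground pixels dominate, else 'close-up'."""
    pixels = [px for row in frame for px in row]
    ratio = sum(is_ground(px) for px in pixels) / len(pixels)
    return "long" if ratio >= long_shot_ratio else "close-up"
```

In the full scheme, only frames classified as long shots would be passed on to the ROI-determination module for cropping and enlargement.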

Pipelining Semantically-operated Services Using Ontology-based User Constraints (온톨로지 기반 사용자 제시 조건을 이용한 시맨틱 서비스 조합)

  • Jung, Han-Min;Lee, Mi-Kyoung;You, Beom-Jong
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.10
    • /
    • pp.32-39
    • /
    • 2009
  • Semantically-operated services, which differ both from Web services and from semantic Web services with semantic markup, can be defined as services that provide search or reasoning functions using ontologies. They perform pre-defined tasks by exploiting URIs, ontology classes, and ontology properties. This study introduces a method for pipelining semantically-operated services based on a semantic broker, which consults the ontologies and service descriptions stored in a service manager and is invoked with user constraints. The constraints consist of input instances, an output class, a visualization type, service names, and properties. The method provides the user with automatically generated service pipelines that include composite services and a simple workflow. The pipelines provided by the semantic broker can be executed fully automatically to find a set of meaningful semantic pipelines. Ultimately, this study contributes to the development of portal services by supporting human service planners who want to find specific composite services pipelined from distributed semantically-operated services.
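
The broker's chaining step can be illustrated with a toy sketch, assuming each service is described by its input and output ontology classes; the broker links services whose output class feeds the next service's input class. The service names and classes below are entirely hypothetical.

```python
# Hypothetical service descriptions, as a broker might read them
# from a service manager.
SERVICES = [
    {"name": "AuthorSearch",  "input": "Topic",  "output": "Author"},
    {"name": "PaperLookup",   "input": "Author", "output": "Paper"},
    {"name": "TrendAnalysis", "input": "Paper",  "output": "Chart"},
]

def build_pipeline(input_class, output_class, services):
    """Greedily chain services from the user's input class to the
    requested output class; return None if the chain breaks."""
    pipeline, current = [], input_class
    while current != output_class:
        nxt = next((s for s in services if s["input"] == current), None)
        if nxt is None:
            return None  # no service accepts the current class
        pipeline.append(nxt["name"])
        current = nxt["output"]
    return pipeline
```

A real broker would also match the user's visualization type and property constraints, and could return several candidate pipelines rather than one greedy chain.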

Effective Picture Search in Lifelog Management Systems using Bluetooth Devices (라이프로그 관리 시스템에서 블루투스 장치를 이용한 효과적인 사진 검색 방법)

  • Chung, Eun-Ho;Lee, Ki-Yong;Kim, Myoung-Ho
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.16 no.4
    • /
    • pp.383-391
    • /
    • 2010
  • A lifelog management system provides users with services to store, manage, and search their life logs. This paper proposes a fully automatic method for collecting real-world social contacts and a lifelog search engine that uses the collected social contact information as keywords. The wireless short-distance network devices in mobile phones are used to detect the social contacts of their users. A human-Bluetooth relationship matrix is built from the frequency with which a person and a Bluetooth device are observed at the same time. The results show that when only 20% of the full social contact information from the observation period is used for the calculation, 90% of the human-Bluetooth relationships can still be correctly acquired. A lifelog search engine is proposed that takes human names as keywords and, using the vector model of information retrieval, compares two vectors: a row of the human-Bluetooth matrix and the vector of Bluetooth devices scanned when a lifelog was created. This search engine returns more lifelogs than an existing text-matching search engine and, unlike the existing engine, ranks its results.
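
The vector-model ranking can be sketched as follows, assuming a toy human-Bluetooth relationship matrix; each row counts how often a person co-occurred with each Bluetooth device, and each lifelog carries the binary scan vector recorded at capture time. All names, devices, and counts are invented for illustration.

```python
import math

# Hypothetical human-Bluetooth relationship matrix:
# row = person, columns = co-occurrence counts with devices d0, d1, d2.
MATRIX = {
    "alice": [5, 0, 2],
    "bob":   [0, 4, 1],
}

def cosine(u, v):
    """Cosine similarity between two count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_lifelogs(person, scans):
    """Rank lifelogs by similarity between the person's device profile
    and the device list scanned when each lifelog was created."""
    row = MATRIX[person]
    return sorted(scans, key=lambda item: cosine(row, item[1]), reverse=True)

logs = [("photo1", [1, 0, 1]), ("photo2", [0, 1, 0])]
print(rank_lifelogs("alice", logs)[0][0])  # "photo1" ranks first for alice
```

Because ranking uses graded similarity rather than exact matching, a photo is still retrievable even when the person's own phone was not among the scanned devices.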

A Comparative Study on Productivity Analysis of Automated Pavement Crack Sealing Machines (도로면 크랙실링 자동화 장비의 작업 생산성 분석에 관한 비교 연구)

  • Seo, Won-Jung;Yoo, Hyun-Seok;Kim, Young-Suk
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.34 no.4
    • /
    • pp.1289-1298
    • /
    • 2014
  • The pavement crack sealing method, one of the methods used to maintain and repair roads, prevents cracks from extending by repairing them soon after they occur, and it has long been applied to many roadworks in technologically advanced countries. In the conventional crack sealing method, however, traffic accidents occur frequently during repairs because the work is commonly performed on heavily trafficked roads or highways, and it is difficult to protect workers from the risk of burns caused by heated sealant. To solve these problems, automated pavement crack sealing machines such as ARMM, OCCSM, and TTLS have been developed abroad since the early 1990s, and APCS (2004) and ACSTM (2013) have been developed domestically. However, because these automated crack sealers were developed by different research institutions under different test-bed conditions and productivity measurement models, it is difficult to compare and evaluate them objectively. In this study, the image processing time of each machine and the movement time of each motion in the work process were estimated using fully autonomous mapping and semi-automatic mapping, in order to measure productivity under the same environmental conditions. In addition, a productivity measurement test-bed reflecting domestic road characteristics was designed to estimate and compare the productivity of the automated crack sealing machines.

Consecutive automated production of carbon-11 labeled radiopharmaceuticals by sharing 11C-methylation reagent from one 11C-synthetic module

  • Park, Hyun Sik;Lee, Hong Jin;An, Hyun Ho;Moon, Byung Seok;Lee, Byung Chul;Kim, Sang Eun
    • Journal of Radiopharmaceuticals and Molecular Probes
    • /
    • v.2 no.2
    • /
    • pp.123-131
    • /
    • 2016
  • Increasing clinical demand for carbon-11 labeled radiopharmaceuticals has triggered technological advances in the fields of radiochemistry and automated modules. Even though carbon-11 has a short half-life ($t_{1/2}=20.4min$), the consecutive second production of a carbon-11 labeled radiopharmaceutical in one $^{11}C$-synthetic module must be delayed at least 4 h to avoid high radiation exposure. We herein aimed to produce two different carbon-11 labeled radiopharmaceuticals ([$^{11}C$]PIB and [$^{11}C$]methionine) by sharing the [$^{11}C$]methylation source in one $^{11}C$-synthetic module. The synthesis of the $^{11}C$-labeling reagents ($[^{11}C]CH_3I$ or $[^{11}C]CH_3OTf$) is fully automated using the commercial TRACERlab $FX_{C-pro}$ module and is readily adaptable to the $^{11}C$-labeling reactor for [$^{11}C$]PIB as well as to another $^{11}C$-labeling apparatus for [$^{11}C$]methionine via a three-way valve. After the [$^{11}C$]PIB production was complete, the re-synthesized $[^{11}C]CH_3I$ was passed through the three-way valve connected to the polyetheretherketone (PEEK) line and loaded onto the C18 Sep-Pak cartridge containing the methionine precursor. The labeled product [$^{11}C$]methionine was purified by a simple cartridge separation and reformulated in saline. The radiochemical yields of [$^{11}C$]PIB and [$^{11}C$]methionine were $5.3{\pm}0.6%$ and $18.7{\pm}0.8%$ (n.d.c.), respectively, with over 97% radiochemical purity. The specific activity of [$^{11}C$]PIB was over $110GBq/{\mu}mol$. The total production of the two radiopharmaceuticals takes about 2 h from the first beam irradiation, including quality control tests. The final [$^{11}C$]PIB and [$^{11}C$]methionine satisfied all quality control standards.

A Study on the Application of SAW Process for Thin Plate of 3.2 Thickness in Ship Structure (선체외판부 3.2T 박판에 대한 SAW 용접 적용에 관한 연구)

  • Oh, Chong-In;Yun, Jin-Oh;Lim, Dong-Young;Jeong, Sang-Hoon;Lee, Jeong-Soo
    • Proceedings of the KWS Conference
    • /
    • 2010.05a
    • /
    • pp.51-51
    • /
    • 2010
  • Recently, just as in the automobile industry, shipbuilders have been trying to reduce material consumption and weight in order to keep operating costs as low as possible and to improve the speed of production. Industry is constantly searching for welding techniques offering higher power, higher productivity, and better quality. It is therefore important to conduct detailed research on the various welding processes applied to steel and other materials, and to be able both to advise interested companies and to evaluate the feasibility of implementing these processes. The submerged-arc welding (SAW) process accounts for about 20% of shipbuilding welding. Similar to gas metal arc welding (GMAW), SAW involves the formation of an arc between a continuously fed bare wire electrode and the workpiece. The process uses a flux to generate protective gases and slag and to add alloying elements to the weld pool; a shielding gas is not required. Prior to welding, a thin layer of flux powder is placed on the workpiece surface. The arc moves along the joint line, and as it does so, excess flux is recycled via a hopper. The remaining fused slag layers can easily be removed after welding. Because the arc is completely covered by the flux layer, heat loss is extremely low, which produces a thermal efficiency as high as 60% (compared with 25% for manual metal arc welding). The SAW process offers many advantages over the conventional CO2 welding process: higher welding speed, easier work for operators, less deformation, and better bead shape and strength of the welded joint, since there is no visible arc light, the welding is spatter-free, the process is fully mechanized or automatic with high travel speed, and the depth of penetration and the chemical composition of the deposited weld metal are well controlled. However, its high heat input makes it difficult to apply to thin plate. This paper therefore focuses on the field application of the SAW process to thin plate in ship structures. For this purpose, the welding conditions were optimized through experiments on the relationship between welding parameters and bead shapes, together with mechanical tests such as tensile and bending tests. A finite element (FE)-based numerical comparison of the thermal history and welding residual stress in A-grade 3.2 mm thick steel welded by SAW was also made in this study. The results yield substantial savings in time and manufacturing cost and raise the quality of the product.


Proposed Landslide Warning System Based on Real-time Rainfall Data (급경사지 붕괴위험 판단을 위한 강우기반의 한계영역 설정 기법 연구)

  • Kim, Hong Gyun;Park, Sung Wook;Yeo, Kang Dong;Lee, Moon Se;Park, Hyuck Jin;Lee, Jung Hyun;Hong, Sung Jin
    • The Journal of Engineering Geology
    • /
    • v.26 no.2
    • /
    • pp.197-205
    • /
    • 2016
  • Rainfall-induced landslide disaster case histories are typically required to establish the critical lines, based on the decrease coefficient, that are used to judge the likelihood of slope collapse or failure; however, reliably setting critical lines is difficult because the number of nationwide disaster case histories is insufficient and they are not well distributed across regions. In this study, we propose a method for setting the critical area for judging the risk of slope collapse without disaster case history information. Rainfall data from the past 10 years, transformed using the decrease coefficient, are plotted as points, and a reference line is established by connecting the outermost points. When the real-time working rainfall crosses the reference line, the warning system is activated, and the system can be utilized nationwide by setting a reference line for each AWS (Automatic Weather Station). Warnings were effectively predicted at 10 of the sites, and at eight of the sites warnings could have been issued 30 min prior to the landslide movement. These results indicate a reliability of about 67%. To utilize this model more fully, it is necessary to establish nationwide rainfall databases and to conduct further studies that develop regional critical areas for landslide disaster prevention.
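
The reference-line idea can be sketched in simplified form, assuming past rainfall observations are (working rainfall, effective rainfall) pairs: the outermost past value in each working-rainfall bin forms the reference, and a warning fires when a real-time point crosses it. The bin width and the data values are illustrative, not taken from the paper.

```python
def build_reference(points, bin_width=10.0):
    """Keep the maximum effective rainfall observed in each
    working-rainfall bin (the 'outermost' historical points)."""
    ref = {}
    for working, effective in points:
        b = int(working // bin_width)
        ref[b] = max(ref.get(b, 0.0), effective)
    return ref

def check_warning(ref, working, effective, bin_width=10.0):
    """Warn when a real-time point exceeds the historical outermost value
    for its bin (unseen bins default to zero, i.e. always warn)."""
    b = int(working // bin_width)
    return effective > ref.get(b, 0.0)

# Hypothetical past data and a real-time check.
past = [(5, 2), (15, 8), (12, 6)]
ref = build_reference(past)
print(check_warning(ref, 14, 9))  # exceeds the bin's historical maximum
```

In the proposed system, one such reference would be maintained per AWS, so the critical area adapts to each station's local rainfall history.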