• Title/Summary/Keyword: GUI based language


A Study on the Work Types of Chinese Bibliographic Records Based FRBR Model in the National Library of China (FRBR 모형에 의한 중국어 서지레코드의 저작유형 분석 - 중국국가도서관을 중심으로 -)

  • Dong, Gui-Cun;Kim, Jeong-Hyen
    • Journal of the Korean BIBLIA Society for Library and Information Science / v.24 no.1 / pp.269-286 / 2013
  • This study analyzed the work types of Chinese bibliographic records based on the FRBR model, to identify how much useful data the records contain and how their usefulness differs by subject. Targeting the Chinese books held by the National Library of China, the study randomly extracted a sample of 2,200 titles through the library's OPAC, 100 from each of the 22 classes of the "Chinese Library Classification", and analyzed the work types of the records and the usefulness of applying the FRBR model by subject and data type. The results are summarized as follows. First, when the FRBR model was applied to the Chinese bibliographic records, 18.6% were judged useful works, counting simple works together with complex works. Second, although usefulness rises as bibliographic relationships become more complex, only the works of famous authors in 'Marxism-Leninism' (A) and some classics and modern masterpieces in 'Literature' (I) exist in diverse versions such as sequels, revisions, reproductions, adaptations, and critical editions. However, when criticism, reviews, explanations, and bibliographic introductions are included, the usefulness of specific subjects in 'Military' (E), 'Language and Words' (H), 'Literature' (I), and 'Comprehensive Books' (Z) is relatively high.
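The sampling design described above (100 records from each of 22 classes, 2,200 in total) can be sketched in a few lines. This is a hypothetical reconstruction, not the study's actual code: the catalogue, class codes, and record IDs are invented.

```python
import random

random.seed(0)

# Hypothetical catalogue: (record_id, class_code) pairs; the 22 codes
# stand in for the top-level classes of the Chinese Library Classification
classes = [f"class_{i + 1:02d}" for i in range(22)]
catalogue = [(f"rec{i:05d}", random.choice(classes)) for i in range(50_000)]

def stratified_sample(records, per_class=100):
    """Draw a fixed-size random sample from every class, mirroring the
    study's 100-records-per-class design."""
    by_class = {}
    for rec_id, cls in records:
        by_class.setdefault(cls, []).append(rec_id)
    return {cls: random.sample(ids, per_class) for cls, ids in by_class.items()}

sample = stratified_sample(catalogue)
print(len(sample), sum(len(ids) for ids in sample.values()))  # 22 classes, 2200 records
```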

A Dose Volume Histogram Analyzer Program for External Beam Radiotherapy (방사선치료 관련 연구를 위한 선량 체적 히스토그램 분석 프로그램 개발)

  • Kim, Jin-Sung;Yoon, Myong-Geun;Park, Sung-Yong;Shin, Jung-Suk;Shin, Eun-Hyuk;Ju, Sang-Gyu;Han, Young-Yih;Ahn, Yong-Chan
    • Radiation Oncology Journal / v.27 no.4 / pp.240-248 / 2009
  • Purpose: To provide a simple research tool for analyzing dose volume histograms from different radiation therapy planning systems for NTCP (Normal Tissue Complication Probability), OED (Organ Equivalent Dose), and related quantities. Materials and Methods: A high-level computing language was chosen to implement Niemierko's EUD, the Lyman-Kutcher-Burman model's NTCP, and OED. The requirements for treatment planning analysis were defined, and the procedure, using the developed GUI-based program, was described with figures. The calculated data, including volume at a dose, dose at a volume, EUD, and NTCP, were compared against a commercial radiation therapy planning system, Pinnacle (Philips, Madison, WI, USA). Results: The volume at a specific dose and the dose absorbed in a volume were successfully extracted from the DVH data of several radiation planning systems. EUD, NTCP, and OED were successfully calculated using the DVH data and the required parameters from the literature. Conclusion: A simple DVH analyzer program was developed and has proven to be a useful research tool for radiation therapy.
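The quantities named above follow closed-form definitions, so the core of such an analyzer is small. A minimal sketch in Python (the paper used a commercial high-level language; the DVH bins and tissue parameters below are hypothetical, not values from the paper):

```python
import math

def eud(doses, volumes, a):
    """Niemierko's generalized EUD from differential DVH bins.
    doses: bin doses in Gy; volumes: bin volumes (normalized internally);
    a: tissue-specific parameter."""
    total = sum(volumes)
    return sum((v / total) * d ** a for d, v in zip(doses, volumes)) ** (1.0 / a)

def ntcp_lkb(eud_gy, td50, m):
    """Lyman-Kutcher-Burman NTCP: the standard normal CDF evaluated at
    t = (EUD - TD50) / (m * TD50)."""
    t = (eud_gy - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Hypothetical DVH bins and parameters for one organ at risk
doses = [10.0, 20.0, 30.0, 40.0]   # Gy
volumes = [0.4, 0.3, 0.2, 0.1]     # fractional volume per bin
e = eud(doses, volumes, a=10.0)    # a large 'a' weights the hot spots
print(f"EUD = {e:.1f} Gy, NTCP = {ntcp_lkb(e, td50=45.0, m=0.15):.3f}")
```

For a uniform dose the EUD reduces to that dose regardless of `a`, and the NTCP is 0.5 when the EUD equals TD50, which gives two easy sanity checks.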

The Construction of QoS Integration Platform for Real-time Negotiation and Adaptation Stream Service in Distributed Object Computing Environments (분산 객체 컴퓨팅 환경에서 실시간 협약 및 적응 스트림 서비스를 위한 QoS 통합 플랫폼의 구축)

  • Jun, Byung-Taek;Kim, Myung-Hee;Joo, Su-Chong
    • The Transactions of the Korea Information Processing Society / v.7 no.11S / pp.3651-3667 / 2000
  • Recently, in Internet-based distributed multimedia environments, most researchers have focused on two rapidly growing technologies: streaming and distributed objects. In particular, studies attempting to integrate streaming services with distributed object technology have been progressing, and these technologies are applied to various stream service managements and protocols. However, the stream service management models proposed by existing research are insufficient for supporting the QoS of stream services. Moreover, the existing models cannot support extensibility and reusability when QoS-related functions are developed as sub-modules tailored to specific-purpose application services. To solve these problems, this paper proposes a QoS integration platform that can be extended and reused through distributed object technologies while guaranteeing the QoS of stream services. The suggested platform consists of three components: a User Control Module (UCM), a QoS Management Module (QoSM), and Stream Objects. A Stream Object has Send/Receive operations for transmitting RTP packets over TCP/IP. The User Control Module (UCM) controls Stream Objects via CORBA service objects. The QoS Management Module (QoSM) maintains the QoS of a stream service between the UCMs on client and server. As QoS control methodologies, the procedures of resource monitoring, negotiation, and resource adaptation are executed through interactions among the components mentioned above. To construct this QoS integration platform, we first implemented the modules independently and then defined the interfaces among them in IDL, so as to support platform independence, interoperability, and portability based on CORBA.
This platform was constructed using OrbixWeb 3.1c, following the CORBA specification, on Solaris 2.5/2.7, with the Java language, the Java Media Framework API 2.0, Mini-SQL 1.0.16, and multimedia equipment. To verify the platform functionally, we present the execution results of each module described above, together with numerical data obtained from the QoS control procedures on the client and server GUIs while a stream service runs on the platform.
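The negotiate-then-adapt control loop the abstract describes can be illustrated without any CORBA machinery. The sketch below is a simplified stand-in for the UCM/QoSM interaction, not the paper's implementation; the `QoS` fields, the 80% floor, and the halving policy are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class QoS:
    frame_rate: float   # frames per second
    resolution: int     # vertical lines

def negotiate(requested: QoS, available: QoS) -> QoS:
    """Contract the client's request down to what the server can supply
    (a stand-in for the negotiation between client and server QoSMs)."""
    return QoS(min(requested.frame_rate, available.frame_rate),
               min(requested.resolution, available.resolution))

def adapt(agreed: QoS, measured_fps: float, floor: float = 0.8) -> QoS:
    """Resource adaptation: if monitoring shows throughput below 80% of
    the agreed rate, degrade the stream rather than violate the contract."""
    if measured_fps < agreed.frame_rate * floor:
        return QoS(measured_fps, agreed.resolution // 2)
    return agreed

agreed = negotiate(QoS(30.0, 720), QoS(24.0, 1080))
print(agreed)                             # contract limited by server capacity
print(adapt(agreed, measured_fps=12.0))   # adaptation after congestion
```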

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.109-122 / 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive data generation, greatly influencing society. This is an unmatched phenomenon in history, and we now live in the Age of Big Data. SNS data satisfies the defining conditions of Big Data: the amount of data (volume), data input and output speed (velocity), and the variety of data types (variety). Discovering the trend of an issue in SNS Big Data yields an important new source for creating value, because this information covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the needs of analyzing SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides four functions: (1) a topic keyword set corresponding to a daily ranking; (2) a daily time-series graph of a topic over the duration of a month; (3) the importance of a topic, shown as a treemap based on a scoring system and frequency; (4) a daily time-series graph of keywords found by keyword search. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, to process the many unrefined forms of unstructured data. It also requires the latest big data technology to process a large amount of real-time data rapidly, such as the Hadoop distributed system or NoSQL, an alternative to relational databases. We built TITS on Hadoop to optimize big data processing, because Hadoop is designed to scale from single-node computing to thousands of machines.
Furthermore, we use MongoDB, classified as a NoSQL database. MongoDB is an open-source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables, and its most important goals are data accessibility and data-processing performance. In the Age of Big Data, visualization is especially attractive to the Big Data community because it helps analysts examine data easily and clearly, so TITS uses the d3.js library as its visualization tool. This library is designed for creating Data-Driven Documents that bind the document object model (DOM) to data; it makes interaction with data easy and is useful for managing real-time data streams with smooth animation. In addition, TITS uses Bootstrap, a set of pre-configured style sheets and JavaScript plug-ins, to build the web system. The TITS Graphical User Interface (GUI) is designed with these libraries and can detect issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the superiority of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique used in various research areas, including Library and Information Science (LIS); based on this, we confirm the utility of storytelling and time-series analysis. Third, we develop a web-based system and make it available for real-time topic discovery. The present study conducted experiments with nearly 150 million tweets in Korea during March 2013.