• Title/Summary/Keyword: Open Source Library

Search Result 129

A Study of High Performance WebKit Mobile Web Browser (WebKit 모바일 웹 브라우저의 성능 향상을 위한 기법 연구)

  • Kim, Cheong-Ghil
    • Journal of Satellite, Information and Communications
    • /
    • v.7 no.1
    • /
    • pp.48-52
    • /
    • 2012
  • With the growing popularity of smartphones, mobile web browsing has become one of the most important and popular applications on mobile devices, and the demand for PC-like full-browser performance on mobile devices is clearly increasing. WebKit is an open source web browser engine adopted by Google Android. This paper proposes a technique for increasing the performance of WebKit by parallelizing its libraries. The method was applied to the JPEG library, and the performance evaluation was conducted in a PC environment. The results were used to estimate the expected performance on multi-core mobile embedded architectures and to show the feasibility of the proposed method for estimating the performance gain on heterogeneous multi-core embedded architectures.
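The core idea of library-level parallelization can be sketched as follows. This is a minimal illustration, not the paper's WebKit/C implementation: it assumes the image has already been split into independently decodable blocks, and uses a placeholder `decode_block` function in place of real JPEG work.

```python
from concurrent.futures import ThreadPoolExecutor

def decode_block(block):
    # Placeholder for per-block decode work (e.g., IDCT on one MCU row);
    # the paper parallelizes WebKit's native JPEG library instead.
    return [b * 2 for b in block]

def decode_image_serial(blocks):
    return [decode_block(b) for b in blocks]

def decode_image_parallel(blocks, workers=4):
    # Independent blocks can be decoded concurrently once the bitstream
    # has been partitioned; results must come back in order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(decode_block, blocks))

blocks = [[1, 2], [3, 4], [5, 6]]
assert decode_image_parallel(blocks) == decode_image_serial(blocks)
```

The key property the paper relies on is that the parallel path must produce output identical to the serial path, so the speedup comes only from distributing the block-level work across cores.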

Implementation and Performance Analysis of Hadoop MapReduce over Lustre Filesystem (러스터 파일 시스템 기반 하둡 맵리듀스 실행 환경 구현 및 성능 분석)

  • Kwak, Jae-Hyuck;Kim, Sangwan;Huh, Taesang;Hwang, Soonwook
    • KIISE Transactions on Computing Practices
    • /
    • v.21 no.8
    • /
    • pp.561-566
    • /
    • 2015
  • Hadoop is becoming widely adopted in scientific and commercial areas as an open-source distributed data processing framework. Recently, attempts have been made to apply high-performance computing technologies to Hadoop for real-time processing and analysis of data. In this paper, we extended the Hadoop Filesystem library to support Lustre, a popular high-performance parallel distributed filesystem, and implemented a Hadoop MapReduce execution environment over the Lustre filesystem. We analysed Hadoop MapReduce over Lustre using Hadoop's standard benchmark tools and found that it performs 2-13 times better than a typical Hadoop MapReduce execution.
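The paper swaps the storage layer underneath MapReduce; the programming model itself is unchanged. As a hedged, toy illustration of that model (word count, with an explicit shuffle step; this is not the paper's Lustre integration):

```python
from collections import defaultdict
from itertools import groupby
from operator import itemgetter

def map_phase(records):
    # map: emit (word, 1) pairs for each input line
    for line in records:
        for word in line.split():
            yield (word, 1)

def shuffle(pairs):
    # group intermediate pairs by key, as the framework does between phases;
    # this shuffle I/O is where the underlying filesystem matters most
    pairs = sorted(pairs, key=itemgetter(0))
    for key, group in groupby(pairs, key=itemgetter(0)):
        yield key, [v for _, v in group]

def reduce_phase(grouped):
    # reduce: sum the counts per word
    return {key: sum(values) for key, values in grouped}

counts = reduce_phase(shuffle(map_phase(["a b a", "b c"])))
# counts == {"a": 2, "b": 2, "c": 1}
```

Because the map/shuffle/reduce contract is filesystem-agnostic, replacing HDFS with a Lustre-backed `Filesystem` implementation leaves user jobs unchanged while altering the I/O performance profile.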

River Water Level Prediction Method based on LSTM Neural Network

  • Le, Xuan Hien;Lee, Giha
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2018.05a
    • /
    • pp.147-147
    • /
    • 2018
  • In this article, we use TensorFlow, an open source software library developed for conducting very complex machine learning and deep neural network applications, although it is general enough to be applicable in a wide variety of other domains as well. The proposed model is based on LSTM (Long Short-Term Memory), a deep neural network model, and predicts the river water level at Okcheon Station on the Geum River without using rainfall-forecast information. The input data for LSTM modeling are hourly water level records for 15 years, from 2002 to 2016, at four stations: three upstream stations (Sutong, Hotan, and Songcheon) and the forecasting-target station (Okcheon). The data are subdivided into a training set, a testing set, and a validation set. The model was formulated to predict the Okcheon Station water level for lead times from 3 hours to 12 hours. Although the model does not require input data such as climate, geography, and land use, as rainfall-runoff simulation does, the prediction is very stable and reliable up to a 9-hour lead time, with a Nash-Sutcliffe efficiency (NSE) higher than 0.90 and a root mean square error (RMSE) lower than 12 cm. The results indicate that the method can reproduce the river water level time series and is applicable to practical flood forecasting in place of hydrologic modeling approaches.
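The two evaluation metrics the abstract reports have standard closed forms, which can be sketched in plain Python (the names `obs`/`sim` for observed and simulated water levels are illustrative):

```python
import math

def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    # NSE = 1 is a perfect fit; NSE = 0 means no better than the mean.
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

def rmse(obs, sim):
    # root mean square error, in the units of the series (here, water level)
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

obs = [1.0, 2.0, 3.0]
assert nse(obs, obs) == 1.0 and rmse(obs, obs) == 0.0
```

By these definitions, NSE > 0.90 together with RMSE < 12 cm is a strong fit: the model explains over 90% of the observed variance with typical errors around a decimetre.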


A Low-Cost Speech to Sign Language Converter

  • Le, Minh;Le, Thanh Minh;Bui, Vu Duc;Truong, Son Ngoc
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.3
    • /
    • pp.37-40
    • /
    • 2021
  • This paper presents the design of a speech-to-sign-language converter for deaf and hard-of-hearing people. The device is low-cost and low-power, and it can work entirely offline. Speech recognition is implemented using an open-source library, Pocketsphinx. In this work, we propose a context-oriented language model that measures the similarity between the recognized speech and predefined speech to decide the output. The output is selected from the recommended sentences stored in the database as the best match to the recognized speech. The proposed context-oriented language model improves the speech recognition rate by 21% while working entirely offline. A decision module that measures the similarity between the two texts using the Levenshtein distance decides the output sign language. The output sign language corresponding to the recognized speech is generated as a set of sequential images. The speech-to-sign-language converter is deployed on a Raspberry Pi Zero board as a low-cost deaf assistive device.
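The decision module the abstract describes reduces to a nearest-neighbour search under edit distance. A minimal sketch (the sentence database here is a made-up example, not the paper's):

```python
def levenshtein(a, b):
    # classic dynamic-programming edit distance, one row at a time
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (ca != cb))) # substitution
        prev = curr
    return prev[-1]

def best_match(recognized, predefined):
    # pick the stored sentence closest to the recognizer's noisy output
    return min(predefined, key=lambda s: levenshtein(recognized, s))

sentences = ["turn on the light", "turn off the light", "open the door"]
assert best_match("opn the door", sentences) == "open the door"
```

Snapping the recognizer output to the closest predefined sentence is what lets a small offline model absorb recognition errors: any output within edit distance of a database entry still maps to a valid sign sequence.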

A STUDY OF USING CKKS HOMOMORPHIC ENCRYPTION OVER THE LAYERS OF A CONVOLUTIONAL NEURAL NETWORK MODEL

  • Castaneda, Sebastian Soler;Nam, Kevin;Joo, Youyeon;Paek, Yunheung
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2022.05a
    • /
    • pp.161-164
    • /
    • 2022
  • Homomorphic encryption (HE) schemes have recently been growing as a reliable way to preserve users' information, owing to their ability to maintain and operate on user data in encrypted form. In addition, several neural network models merged with HE schemes have been developed as prospective tools for privacy-preserving machine learning. These works demonstrated that it is possible to match the accuracy of non-encrypted models, but there is always a trade-off in computation time. In this work, we evaluate the implementation of CKKS HE scheme operations over the layers of a LeNet5 convolutional inference model; owing to the limitations of the evaluation environment, however, the scope of this work is not to develop a complete encrypted LeNet5 model. The evaluation was performed on the MNIST dataset with the Microsoft SEAL (MSEAL) open-source homomorphic encryption library through its Python port (Pyfhel). The behavior of the encrypted model, the limitations faced, and a brief description of related and future work are also provided.
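The property HE provides, computing on ciphertexts so that the result decrypts correctly, can be illustrated without CKKS itself. The toy scheme below is a one-time-pad-style additive masking, not CKKS and not secure practice; it only demonstrates the additive-homomorphism idea that real schemes like CKKS extend to approximate arithmetic on real-valued vectors.

```python
import random

P = 2**61 - 1  # toy modulus; real CKKS parameters are chosen very differently

def encrypt(m, key):
    # mask the message additively; knowing the key inverts the masking
    return (m + key) % P

def decrypt(c, key):
    return (c - key) % P

# Additive homomorphism: the sum of two ciphertexts decrypts, under the
# sum of the keys, to the sum of the plaintexts -- without ever exposing
# the individual messages to whoever performs the addition.
k1, k2 = random.randrange(P), random.randrange(P)
c_sum = (encrypt(5, k1) + encrypt(7, k2)) % P
assert decrypt(c_sum, (k1 + k2) % P) == 12
```

A convolution layer is dominated by exactly such additions (plus multiplications), which is why layer-by-layer evaluation under an HE scheme is feasible but pays the computation-time trade-off the abstract mentions.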

Vocabulary Analyzer Based on CEFR-J Wordlist for Self-Reflection (VACSR) Version 2

  • Yukiko Ohashi;Noriaki Katagiri;Takao Oshikiri
    • Asia Pacific Journal of Corpus Research
    • /
    • v.4 no.2
    • /
    • pp.75-87
    • /
    • 2023
  • This paper presents a revised version of the Vocabulary Analyzer for Self-Reflection (VACSR), called VACSR v.2.0. The initial version of the VACSR automatically analyzes the occurrences and levels of vocabulary items in transcribed texts, indicating their frequency, the unused vocabulary items, and those not belonging to either scale. However, it overlooked words with multiple parts of speech because of their identical headword representations, and its result tables comparing different corpora lacked clarity. VACSR v.2.0 overcomes both limitations of its predecessor. First, unlike VACSR v.1, VACSR v.2.0 distinguishes words with different parts of speech by syntactic parsing using Stanza, an open-source Python library, which enables the categorization of identical lexical items with multiple parts of speech. Second, VACSR v.2.0 provides precise result output tables. The updated software compares the occurrence of vocabulary items in classroom corpora against each level of the Common European Framework of Reference-Japan (CEFR-J) wordlist. In a pilot study using VACSR v.2.0, after two English classes taught by a preservice English teacher were converted into corpora, the headwords used mostly corresponded to CEFR-J level A1. In practice, VACSR v.2.0 will promote users' reflection on their vocabulary usage and can be applied to teacher training.
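The fix for the v.1 limitation amounts to counting tokens keyed by the pair (headword, part of speech) rather than by headword alone. A hedged sketch of that keying, assuming tokens have already been tagged by a parser such as Stanza (the tagger itself is omitted here):

```python
from collections import Counter

def count_by_pos(tagged_tokens):
    # tagged_tokens: (headword, part-of-speech) pairs, e.g. parser output.
    # Keying on the pair keeps 'play/NOUN' distinct from 'play/VERB',
    # which a headword-only count (VACSR v.1's behavior) would merge.
    return Counter((word.lower(), pos) for word, pos in tagged_tokens)

tokens = [("Play", "VERB"), ("play", "NOUN"), ("play", "VERB")]
counts = count_by_pos(tokens)
assert counts[("play", "VERB")] == 2 and counts[("play", "NOUN")] == 1
```

With this keying, a CEFR-J lookup can also be performed per (headword, POS) entry, which is what lets identical lexical forms map to different wordlist levels.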

Study On Receiving and Processing Method about Utilization of Near Real-time Satellite Data (준실시간 활용을 위한 위성자료 수신, 가공 방안 연구)

  • Kim, Soon Yeon;Jung, Young Sim;An, Joo Young;Park, Sang Hoon;Won, Young Jin
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2017.05a
    • /
    • pp.467-467
    • /
    • 2017
  • Satellite data are used for efficient wide-area analysis in studies of soil moisture and Asian dust events. Some usage scenarios require near real-time data reception and processing, and to study methods for this we examined ASCAT (Advanced Scatterometer) Metop-A data from EUMETSAT (European Organisation for the Exploitation of Meteorological Satellites). For data reception protocols, we reviewed the status of traditional methods such as FTP and HTTP, together with support for the relatively recent OGC (Open Geospatial Consortium) WMS (Web Map Service) and WCS (Web Coverage Service) approaches. For data formats, we looked at EPS Native and BUFR (Binary Universal Form for the Representation of meteorological data), focusing on NetCDF (network Common Data Form), which most data providers have adopted. For software to automate the processing of received data, we focused on GDAL (Geospatial Data Abstraction Library) from OSGeo (The Open Source Geospatial Foundation) and NCL (NCAR Command Language) from the US NCAR (National Center for Atmospheric Research). Data processing techniques were examined with a focus on checking the basic metadata of raster data, coordinate reference system transformation, and resolution and format conversion. Meanwhile, OGC WMS and WCS are data transfer protocols that also provide server-side data conversion: for example, an HTTP request can specify the extent, output format, and coordinate reference system. For the EUMETSAT pilot OGC WMS service, we used the actual methods to examine the spatial extent of the returned data, the availability of multiple time steps, and the supported return formats, and we discussed future directions.
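The server-side conversion described for OGC WMS is driven entirely by request parameters. A sketch of building a standard WMS 1.3.0 GetMap request in Python (the base URL and layer name below are hypothetical placeholders, not EUMETSAT endpoints):

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, size, crs="EPSG:4326",
                   fmt="image/png", time=None):
    # Standard OGC WMS 1.3.0 GetMap parameters: the extent (BBOX), output
    # FORMAT, and coordinate reference system (CRS) are chosen by the
    # client, and the server performs the corresponding conversion.
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": size[0],
        "HEIGHT": size[1],
        "FORMAT": fmt,
    }
    if time is not None:  # optional TIME dimension for multi-temporal data
        params["TIME"] = time
    return base_url + "?" + urlencode(params)

url = wms_getmap_url("https://example.org/wms", "ascat_soil_moisture",
                     (33.0, 124.0, 39.0, 132.0), (800, 600))
```

Automating near real-time reception then reduces to issuing such requests on a schedule and handing the returned rasters to tools like GDAL for reprojection or format conversion.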


Development of a user-friendly training software for pharmacokinetic concepts and models

  • Han, Seunghoon;Lim, Byounghee;Lee, Hyemi;Bae, Soo Hyun
    • Translational and Clinical Pharmacology
    • /
    • v.26 no.4
    • /
    • pp.166-171
    • /
    • 2018
  • Although there are many commercially available training software programs for pharmacokinetics, they lack flexibility and convenience. In this study, we developed simulation software to facilitate pharmacokinetics education. General formulas for the time courses of drug concentrations after single and multiple dosing were used to build source code that allows users to simulate situations tailored to their learning objectives. The mathematical relationships of a 1-compartment model were implemented in the form of differential equations. The concept of population pharmacokinetics was also taken into consideration for further applications. The source code was written in R. For the convenience of users, two types of software were developed: a web-based simulator and a standalone application. The application was built in the Java language; we used the JAVA/R Interface library and the 'eval()' method from Java for the R/Java interface. The final product has an input window with fields for parameter values, the dosing regimen, and population pharmacokinetics options. When a simulation is performed, the resulting drug concentration time course is shown in the output window. Simulation results are obtained within 1 minute even when the population pharmacokinetics option is selected and many parameters are considered, so the user can quickly explore a variety of situations. Such software is an excellent candidate for development as an open tool intended for wide use in Korea. Pharmacokinetics experts will be able to use this tool to teach various audiences, including undergraduates.
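The general single- and multiple-dosing formulas the abstract refers to have simple closed forms for the 1-compartment IV-bolus case. A hedged Python sketch (the paper's software is written in R and solves differential equations; the parameter values below are illustrative):

```python
import math

def conc_single(dose, V, k, t):
    # one-compartment IV bolus: C(t) = (Dose / V) * exp(-k * t)
    # dose in mg, volume of distribution V in L, elimination rate k in 1/h
    return (dose / V) * math.exp(-k * t) if t >= 0 else 0.0

def conc_multiple(dose, V, k, tau, t):
    # multiple dosing by superposition: sum the contributions of every
    # dose administered at 0, tau, 2*tau, ... up to time t
    n_doses = int(t // tau) + 1
    return sum(conc_single(dose, V, k, t - i * tau) for i in range(n_doses))

# 100 mg into V = 10 L with k = 0.1 /h gives C(0) = 10 mg/L
assert abs(conc_single(100, 10, 0.1, 0) - 10.0) < 1e-9
```

Superposition is what makes a multiple-dosing simulator cheap: each regimen is just a sum of shifted single-dose curves, so even population-level runs over many parameter sets stay fast.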

Occupational Demands and Educational Needs in Korean Librarianship (한국적 도서관학교육과정 연구)

  • Choi Sung Jin;Yoon Byong Tae;Koo Bon Young
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.12
    • /
    • pp.269-327
    • /
    • 1985
  • This study was undertaken to meet more fully the demands for improved training of library personnel, occasioned by the rapidly changing roles and functions of libraries as they try to adapt to the vast social, economic, and technological changes currently in progress in Korean society. The specific purpose of this research is to develop a standard curriculum at the bachelor's level that will properly equip professional personnel in Korean libraries for the changes confronting them. This study started with the premise that, to establish a sound base for curriculum development, it was first necessary to determine what concepts, knowledge, and techniques professional library personnel require in order to perform at an optimal level of efficiency. Explicitly, it was felt that for the development of useful curricula and courses at the bachelor's level, a prime source of knowledge should be the functional behaviours necessary in the job situation. To determine specifically what these terminal performance behaviours should be, so that the learning experience provided could be rooted in reality, the decision was reached to use a systems approach to curriculum development, which attempts to break the mold of traditional concepts and to approach interaction from an open, innovative, and product-oriented perspective. This study was designed to: (1) identify what knowledge and techniques professional library personnel require to perform the job activities in which they are actually engaged, (2) evaluate the educational needs of the knowledge and techniques that the professional librarian respondents indicated, and (3) categorise the knowledge and techniques into teaching subjects and present those subjects by their educational importance.
The main data-gathering instrument for the study, a questionnaire containing 254 items, was sent to a randomly selected sample of library school graduates working in libraries and related institutions in Korea. Eighty-three librarians completed and returned the questionnaire. After analysing the returned questionnaires, the following conclusions were reached: (A) To develop a rational curriculum rooted in the real situation of Korean libraries, compulsory subjects should be chosen from those ranked highest in importance by the respondents. The character and educational policies of the institution to which a given library school belongs, and the other teaching subjects it offers, should also be taken into account in determining compulsory subjects. (B) It is traditionally assumed that education in librarianship should be more concerned with theoretical foundations, on which any solution can be developed, than with professional needs involving the particulars and techniques used in existing library environments. However, the respondents gave the former a surprisingly low rating, so this traditional assumption must be reviewed. (C) It is universally accepted in developing library school curricula that compulsory subjects cover the areas of knowledge all students generally need to learn, while optional subjects cover the areas needed only by those who require them. Since no such clear line of demarcation exists in librarianship, it may be a realistic approach to designate subjects in the areas rated high by the respondents as compulsory and those in the areas rated low as optional. (D) Optional subjects ranked considerably higher in importance by the respondents should be given more credits than others, and those ranked lower might be given fewer credits, offered infrequently, or combined.
(E) A standard list of compulsory and optional subjects, with weekly teaching hours, for a Korean library school is presented in the fourth chapter of this report.


A Study on Automated Input of Attribute for Referenced Objects in Spatial Relationships of HD Map (정밀도로지도 공간관계 참조객체의 속성 입력 자동화에 관한 연구)

  • Dong-Gi SUNG;Seung-Hyun MIN;Yun-Soo CHOI;Jong-Min OH
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.27 no.1
    • /
    • pp.29-40
    • /
    • 2024
  • Recently, autonomous driving technology, one of the cores of the fourth industrial revolution, has been advancing, but sensor-based autonomous driving has shown limitations, such as accidents in unexpected situations. To compensate for this, HD maps are being used as core infrastructure for autonomous driving, interest in them is increasing in both the public and private sectors, and various studies and technology developments are being conducted to keep HD maps up to date and accurate. Currently, the NGII is newly building HD maps in urban areas and on major roads nationwide, including the metropolitan area, where self-driving cars are expected to operate, and is working to minimize data error rates through quality verification. This study therefore analyzes the spatial relationships of reference objects in the attribute structuring process, to support rapid and accurate renewal and production of the HD maps under construction by the NGII. Applying an attribute-input automation methodology for reference objects with established spatial relationships, implemented with the open source PyQGIS library, target sites were selected for each road type: expressways, general national highways, and C-ITS demonstration sections. Using the attribute automation tool developed in this study, automatically inputting the attributes of the spatial-relationship reference objects took about 2 to 5 minutes per target site, and the resulting attribute input accuracy was 86.4% for expressways, 79.7% for general national highways, and 82.4% for C-ITS, an average of 82.8%.
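The core of such attribute-input automation is a spatial join: for each map feature, find the reference object it relates to and copy that object's identifier into the feature's attribute. A hedged pure-Python sketch of the idea (the actual tool uses PyQGIS spatial predicates on real HD-map geometries; the centroid-distance rule, tolerance, and feature fields below are illustrative assumptions):

```python
def distance2(p, q):
    # squared Euclidean distance between two 2-D points
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def fill_reference_attributes(features, ref_objects, tol=5.0):
    # For each feature, find the nearest reference object; if it lies
    # within `tol` map units, copy its ID into the feature's attribute.
    for feat in features:
        nearest = min(ref_objects,
                      key=lambda r: distance2(feat["centroid"], r["centroid"]))
        if distance2(feat["centroid"], nearest["centroid"]) <= tol ** 2:
            feat["ref_id"] = nearest["id"]
    return features

lanes = [{"centroid": (0.0, 0.0), "ref_id": None}]
refs = [{"id": "SIGN-01", "centroid": (1.0, 1.0)}]
assert fill_reference_attributes(lanes, refs)[0]["ref_id"] == "SIGN-01"
```

Running this kind of join once per target site, instead of editing each feature by hand, is what turns attribute input into a 2-to-5-minute batch step; the residual error rate then comes from features whose true reference object is not the geometrically nearest one.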