• Title/Summary/Keyword: Frequency of library use

Search results: 109

Analysis of the Utilization of Mobile Applications by Generation Z using Topic Modeling: Focusing on Users' Essay Data (토픽모델링을 활용한 Z세대의 애플리케이션 효용성에 대한 분석: 이용자의 에세이 데이터를 중심으로)

  • Park, Ju-Yeon;Jeong, Do-Heon
    • Journal of Industrial Convergence
    • /
    • v.20 no.1
    • /
    • pp.43-51
    • /
    • 2022
  • The purpose of this study is to provide basic information necessary for establishing mobile service marketing strategies, developing educational services, and shaping engineering education for Generation Z by analyzing how Gen Z uses various applications. To this end, 177 essays on mobile service usage experience were collected, major topics were analyzed using topic modeling, and the results were visualized through word cloud analysis. The main topics were related to 'transportation', such as movement and public transportation; 'personal management', such as schedule, financial, and food management; 'transaction', such as checkout, meeting, and purchase; and 'leisure', such as eating out, travel, study, and culture. Additionally, words such as time, thought, people, life, bus, information, confirmation, payment, and KakaoTalk were found to have a high frequency of use, and topics were found to differ by college. This study is meaningful in that it collected essays, which are unstructured data, and analyzed them through topic modeling.
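
The abstract outlines a concrete pipeline (essays → LDA topic modeling → word cloud). As a rough illustration, here is a minimal Python sketch; the inline corpus, whitespace tokenization, and topic count are assumptions for demonstration, not the paper's actual data or settings:

```python
# Minimal sketch of the pipeline described above: essays -> LDA topics
# -> word cloud. The toy English corpus stands in for the paper's 177
# Korean essays, which would need a Korean tokenizer in practice.
from gensim import corpora
from gensim.models import LdaModel
from wordcloud import WordCloud

essays = [
    "bus subway schedule payment app saves time",
    "schedule management finance app and food delivery",
    "payment app for purchase and meeting checkout",
    "travel study culture apps for leisure time",
]
tokenized = [e.lower().split() for e in essays]

dictionary = corpora.Dictionary(tokenized)
corpus = [dictionary.doc2bow(doc) for doc in tokenized]

# Four topics, mirroring the four themes reported in the abstract
lda = LdaModel(corpus, num_topics=4, id2word=dictionary, passes=20, random_state=0)
for topic_id, words in lda.print_topics(num_words=5):
    print(topic_id, words)

# Word cloud over overall term frequencies
freqs = {dictionary[tid]: cnt for tid, cnt in dictionary.cfs.items()}
WordCloud(width=800, height=400).generate_from_frequencies(freqs).to_file("cloud.png")
```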

Design of an H.264 Deblocking Filter for Low-Power Portable Multimedia (저전력 휴대용 멀티미디어를 위한 H.264 디블록킹 필터 설계)

  • Park, Sang Woo;Heo, Jeong Hwa;Park, Sang Bong
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.8 no.4
    • /
    • pp.59-65
    • /
    • 2008
  • This paper proposes an H.264 deblocking filter for low-power portable multimedia. In the H.264 deblocking filter, each of the 8 input pixels involved in a filtering operation requires its own filtering process, and these processes share a common structure. By sharing the common filter coefficients and registers, we designed and implemented a smaller, gated module; moreover, filtering is skipped on some or all pixels when a specific condition is used to gate the filtering modules, which otherwise require many operations. In the core filtering modules, we achieve gate-count reductions of 33.31% and 10.85% compared with the filtering modules of conventional deblocking filter designs. The proposed low-power deblocking filter was implemented using a Samsung 0.35 um standard cell library; the maximum operating frequency is 108 MHz, and the maximum throughput is 33.03 frames/s for the CCIR601 image format.
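
The power saving described above hinges on skipping the edge filter when its activation condition fails. A simplified software model of that skip test follows; the threshold values are illustrative (the standard derives them from QP-indexed tables), and the strong-filter path and clipping are omitted:

```python
# Simplified model of the H.264 deblocking skip test exploited above:
# the edge filter only runs when the sample gradients fall below the
# alpha/beta thresholds, so gating the shared datapath off in the
# skip case saves power.
def should_filter(p1, p0, q0, q1, alpha, beta):
    """True when the 4-sample edge segment needs filtering."""
    return (abs(p0 - q0) < alpha and
            abs(p1 - p0) < beta and
            abs(q1 - q0) < beta)

def filter_edge(p1, p0, q0, q1, alpha=15, beta=7):
    if not should_filter(p1, p0, q0, q1, alpha, beta):
        return p0, q0                                # skip: no filter arithmetic at all
    delta = ((q0 - p0) * 4 + (p1 - q1) + 4) >> 3     # weak-filter delta (tC clipping omitted)
    return p0 + delta, q0 - delta

print(filter_edge(100, 102, 110, 111))   # -> (105, 107): blocking artifact smoothed
print(filter_edge(100, 102, 140, 141))   # -> (102, 140): real edge, filtering skipped
```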

A Design and Implementation of an IEEE 802.11a Modem for a High-Speed Home Network (고속 홈네트워크를 위한 IEEE 802.11a 모뎀 설계와 구현)

  • Seo Jung-Hyun;Lee Je-Hoon;Cho Kyoung-Rok;Park Kwang-Roh
    • Journal of The Institute of Information and Telecommunication Facilities Engineering
    • /
    • v.1 no.2
    • /
    • pp.4-18
    • /
    • 2002
  • In this paper, we propose a new design method for an OFDM-based modem, which is considered the standard for wireless communication in indoor environments. We designed an improved FFT/IFFT to satisfy the 6-54 Mbps data rates required for high-speed home networking, and an improved channel equalization circuit that uses pilot signals for mobile environments. We also designed a carrier offset estimator that uses a tan⁻¹ circuit organized around a memory structure. The performance of every stage was verified on an FPGA, and the design was implemented as an ASIC using a standard cell library.
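
The tan⁻¹-based carrier offset estimator can be illustrated with a standard textbook technique: correlate two repetitions of a training symbol and read the offset from the phase angle. A small numpy sketch under assumed parameters (not the paper's circuit):

```python
# Illustrative model of arctangent-based carrier frequency offset (CFO)
# estimation, in the spirit of the tan^-1 estimator mentioned above.
# Sample rate, symbol length, and the offset are assumed values.
import numpy as np

fs = 20e6          # 802.11a sample rate
N = 64             # period of the repeated training symbol
cfo_true = 50e3    # simulated carrier frequency offset (Hz)

sym = np.exp(2j * np.pi * np.random.rand(N))   # stand-in training symbol
tx = np.tile(sym, 2)                           # two identical repetitions
n = np.arange(2 * N)
rx = tx * np.exp(2j * np.pi * cfo_true * n / fs)

# Correlate the two repetitions; the CFO appears as a phase rotation
corr = np.sum(rx[N:] * np.conj(rx[:N]))
cfo_est = np.angle(corr) * fs / (2 * np.pi * N)  # arctan recovers the phase
print(f"estimated CFO: {cfo_est / 1e3:.1f} kHz")
```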

An Analysis of the Information Seeking Behavior and Needs of Hearing-Impaired College Students (청각장애 대학생의 도서관 이용행태와 정보요구에 대한 연구)

  • Jang, Bo Seong
    • Journal of the Korean Society for Information Management
    • /
    • v.32 no.1
    • /
    • pp.297-316
    • /
    • 2015
  • This study examines how hearing-impaired college students use libraries and what their information needs are, in order to prepare basic materials for developing library service programs suited to these students. To achieve this goal, the study gathered data from a total of 155 hearing-impaired college students through a survey and interviews, and analyzed the data using frequency analysis, cross-tabulation analysis, t-tests, and one-way ANOVA. The study confirmed that the students' gender, year, degree of disability, school, major, and prosthetic appliance make significant differences in how they use libraries. The study also examined differences in the students' information needs by type of prosthetic appliance, school, and degree of disability, and found that the type of prosthetic appliance significantly affects every category of information needs. Both school and degree of disability make significant differences in a few categories of the information needs: the former influences user education and promotion and the arrangement of sign language interpreters, while the latter affects user education and promotion and improvements in the browsing environment.
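
For readers unfamiliar with the reported tests, here is a minimal sketch of an independent t-test and a one-way ANOVA on hypothetical survey scores; the groups and values are invented for illustration, not the study's data:

```python
# Illustrative sketch of the analyses named above (t-test, one-way
# ANOVA) on made-up library-use scores. Group sizes and means are
# arbitrary assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
male = rng.normal(3.4, 0.8, 60)       # e.g., library-use scores by gender
female = rng.normal(3.7, 0.8, 95)

t, p = stats.ttest_ind(male, female)
print(f"t-test: t={t:.2f}, p={p:.3f}")

# One-way ANOVA across, e.g., three degrees of disability
g1, g2, g3 = (rng.normal(m, 0.8, 50) for m in (3.2, 3.6, 3.9))
f, p = stats.f_oneway(g1, g2, g3)
print(f"ANOVA: F={f:.2f}, p={p:.3f}")
```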

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.109-122
    • /
    • 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, thereby greatly influencing society. This is an unmatched phenomenon in history, and we now live in the Age of Big Data. SNS data satisfies the defining conditions of Big Data: the amount of data (volume), the speed of data input and output (velocity), and the variety of data types (variety). Trends of issues discovered in SNS Big Data can serve as an important new source for creating value, because this information covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the needs of analyzing SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides four functions: (1) provide the topic keyword set that corresponds to the daily ranking; (2) visualize the daily time-series graph of a topic over the duration of a month; (3) convey the importance of a topic through a treemap based on a score system and frequency; and (4) visualize the daily time-series graph of keywords retrieved by keyword search. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, to process various unrefined forms of unstructured data. In addition, such analysis requires the latest big data technology to rapidly process large amounts of real-time data, such as the Hadoop distributed system or NoSQL databases, which are alternatives to relational databases. We built TITS on Hadoop to optimize the processing of big data, because Hadoop is designed to scale from single-node computing up to thousands of machines. Furthermore, we use MongoDB, which is classified as a NoSQL database. MongoDB is an open source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables, and its most important goals are data accessibility and data processing performance. In the Age of Big Data, visualization is especially attractive to the Big Data community because it helps analysts examine data easily and clearly. Therefore, TITS uses the d3.js library as a visualization tool. This library is designed for creating Data-Driven Documents that bind the document object model (DOM) to arbitrary data; the interaction between data is easy, and it is useful for managing a real-time data stream with smooth animation. In addition, TITS uses Bootstrap, a set of pre-configured style sheets and JavaScript plug-ins, to build the web system. The TITS Graphical User Interface (GUI) is designed using these libraries and is capable of detecting issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the superiority of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique that is used in various research areas, including Library and Information Science (LIS). Based on this, we confirm the utility of storytelling and time series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets collected in Korea during March 2013.
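
A heavily simplified sketch of a TITS-style back end (LDA topic extraction over a day's tweets, with the keyword sets stored in MongoDB for the web front end). The toy tweets, topic count, and database/collection names are assumptions, and a local MongoDB server is assumed to be running:

```python
# Sketch: extract daily topics from tweet text with LDA and store the
# keyword sets in MongoDB for visualization. All names and data here
# are illustrative, not the paper's actual pipeline.
from gensim import corpora
from gensim.models import LdaModel
from pymongo import MongoClient

tweets = [                               # toy stand-ins for one day's tweets
    "subway line opening draws big crowds",
    "new subway line opening today downtown",
    "commuters praise the new subway line",
]
docs = [t.split() for t in tweets]       # real pipeline: stop-word removal, noun extraction

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = LdaModel(corpus, num_topics=3, id2word=dictionary, random_state=0)

topics = [{"topic_id": tid,
           "keywords": [w for w, _ in lda.show_topic(tid, topn=5)]}
          for tid in range(lda.num_topics)]

# Store the daily keyword sets for the front end (assumes a local
# MongoDB instance; "tits"/"daily_topics" are hypothetical names)
db = MongoClient("mongodb://localhost:27017")["tits"]
db.daily_topics.insert_one({"date": "2013-03-01", "topics": topics})
```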

Systematic Review about Occupational Therapy Interventions Applied to the Improvement of Activities of Daily Living in Dementia Patients (치매 환자의 일상생활활동 능력향상에 적용된 작업치료 중재에 관한 체계적 고찰)

  • Kwag, Sung-Won;Na, Hyun-Jun;Kwang, Shin-Wok;Nam, Ju-Hyun
    • Journal of Korean Clinical Health Science
    • /
    • v.2 no.1
    • /
    • pp.35-45
    • /
    • 2014
  • Purpose: This study analyzes occupational therapy interventions used to improve activities of daily living (ADL) in dementia patients, and the instruments used to verify their effects, through a systematic review, and attempts to use the results as preliminary data for selecting further interventions and instruments. Method: The databases searched included NDSL, DBpia, RISS, KISS, and the National Assembly Library, with search words including 'Alzheimer's disease', 'Alzheimer', 'daily living', and 'ADL'. A total of 7 studies were analyzed, and frequency analysis was used to count how often each intervention was used. To provide evidence, the PICO method was used for sorting. Result: There were 7 occupational therapy interventions applied to improve ADL in dementia patients, used 7 times in total. As for the instruments used to validate the effects of the interventions on ADL, AMPS was used most often, in 3 studies (42.9%), followed by the Allen Cognitive Level Screen (ACLS) and the Functional Independence Measure (FIM), each used in 2 studies (28.6%), and the Modified Barthel Index (MBI) and the Philadelphia Geriatric Center IADL (PGC IADL), each used in 1 study. Regarding the qualitative level of evidence, 4 studies were Level III (57.1%), followed by 2 studies at Level IV (28.6%) and 1 study at Level I (14.3%). Conclusion: This study presents the kinds and frequencies of use of occupational therapy interventions and instruments for improving ADL in dementia patients, with the evidence organized using the PICO method. The results can be used as preliminary data for selecting interventions and instruments to improve ADL in dementia patients. In the future, studies should examine ADL in other areas related to dementia.

Systematic Review of Assessment Tools for the Housing Environment of the Older Adult Population (노년 인구의 주거환경 평가도구에 관한 체계적 고찰)

  • Lim, Young-Myoung
    • Therapeutic Science for Rehabilitation
    • /
    • v.13 no.2
    • /
    • pp.27-40
    • /
    • 2024
  • Objective : This study aimed to conduct a systematic review of the assessment tools used to evaluate the housing environment of older adults. Methods : Data were collected for the period from January 2015 to August 31, 2023, by searching databases including the Cochrane Library, PubMed, and ProQuest. From 267 articles, nine assessment tools were selected for analysis based on their original instruments. These tools were categorized and systematically organized for analysis based on their frequency of use, assessment purposes, sub-domains, scales, and other relevant criteria. Results : Among the nine tools, HOME FAST and IPAQ-E were the most frequently used (20% each). The objectives of these tools are to assess friendliness, physical barriers, fall prevention, dementia-friendly environments, physical activity, and accessibility. The measurement scope encompassed factors such as outdoor spaces, buildings, transportation, housing, and community support. Conclusion : When considering the suitability of housing for the older adult population, it is important to provide foundational data for the rational selection of logically valid evaluation tools, including the objectives and measurement scopes of housing environment assessment tools.

A Study on the Method of Scholarly Paper Recommendation Using Multidimensional Metadata Space (다차원 메타데이터 공간을 활용한 학술 문헌 추천기법 연구)

  • Miah Kam;Jee Yeon Lee
    • Journal of the Korean Society for Information Management
    • /
    • v.40 no.1
    • /
    • pp.121-148
    • /
    • 2023
  • The purpose of this study is to propose a scholarly paper recommendation system, based on metadata attribute similarity, with excellent performance. The study suggests a recommendation method that combines techniques from two sub-fields of Library and Information Science: metadata use from Information Organization, and co-citation analysis, author bibliographic coupling, co-occurrence frequency, and cosine similarity from Bibliometrics. For the experiments, metadata for a total of 9,643 papers related to "inequality" and "divide" were collected and refined to derive relative coordinate values between author, keyword, and title attributes using cosine similarity. The study then conducted experiments to select the weight conditions and numbers of dimensions that resulted in good performance. The results were presented to and evaluated by users, and on this basis the study discussed the research questions through analysis of reference-node and recommendation-combination characteristics, conjoint analysis, and comparative analysis. Overall, the study showed that performance was excellent when author-related attributes were used alone or in combination with title-related attributes. If the proposed technique is utilized and a wide range of samples is secured, it could help improve the performance of recommendation techniques not only in literature recommendation for information services but also in various other fields of society.
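
The attribute-wise cosine-similarity idea can be sketched as follows: vectorize each metadata field separately, combine the per-field similarity matrices with weights, and rank candidates against a reference paper. The toy records and weights are illustrative assumptions, not the paper's tuned conditions:

```python
# Hedged sketch of metadata-attribute cosine similarity for paper
# recommendation. Field names, records, and weights are made up for
# demonstration.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

papers = [
    {"title": "digital divide and inequality", "authors": "kim lee", "keywords": "divide inequality"},
    {"title": "income inequality measurement", "authors": "park",    "keywords": "inequality income"},
    {"title": "broadband access gap",          "authors": "kim",     "keywords": "divide access"},
]
weights = {"title": 0.4, "authors": 0.3, "keywords": 0.3}  # assumed weighting

# Weighted sum of per-field cosine-similarity matrices
sim = np.zeros((len(papers), len(papers)))
for field, w in weights.items():
    X = TfidfVectorizer().fit_transform(p[field] for p in papers)
    sim += w * cosine_similarity(X)

seed = 0                                  # reference paper
ranked = np.argsort(-sim[seed])
print([papers[i]["title"] for i in ranked if i != seed])
```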

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.21 no.1
    • /
    • pp.103-122
    • /
    • 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some online documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents do not benefit from the use of keywords, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, the implementation itself is the obstacle: manually assigning keywords to all documents is a daunting, even impractical, task in that it is extremely tedious and time-consuming and requires a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are two main approaches to achieving this aim: keyword assignment and keyword extraction. Both approaches use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, there is a given vocabulary, and the aim is to match its terms to the texts. In other words, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document. Although this approach is domain dependent and is not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, the aim is to extract keywords with respect to their relevance in the text, without a prior vocabulary. Here, automatic keyword generation is treated as a classification task, and keywords are commonly extracted based on supervised learning techniques: keyword extraction algorithms classify candidate keywords in a document as positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as its keywords, so extraction is limited to terms that appear in the document and cannot generate implicit keywords. According to the experimental results of Turney, about 64% to 90% of keywords assigned by authors can be found in the full text of an article. Conversely, this also means that 10% to 36% of author-assigned keywords do not appear in the article and cannot be generated through keyword extraction algorithms. Our preliminary experiment also shows that 37% of author-assigned keywords are not included in the full text. This is why we adopt the keyword assignment approach. In this paper, we propose a new approach to automatic keyword assignment, IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculate the vector length of each keyword set based on each keyword weight; (2) preprocess and parse a target document that does not have keywords; (3) calculate the vector length of the target document based on term frequency; (4) measure the cosine similarity between each keyword set and the target document; and (5) generate the keywords that have high similarity scores. Two keyword generation systems were implemented applying IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. The first is implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone system is dedicated to generating keywords for academic papers and has been tested on a number of papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. According to our experiment, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability, and IVSM shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As electronic documents increase, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
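
The five-step assignment process lends itself to a compact sketch: score each weighted keyword set against the target document's term-frequency vector by cosine similarity and keep the best matches. The keyword sets and weights below are toy assumptions, not the systems' actual vocabularies:

```python
# Sketch of the IVSM assignment steps described above, on toy data.
import math
from collections import Counter

keyword_sets = {                       # step (1): weighted keyword vectors
    "logistics": {"port": 0.9, "shipping": 0.8, "cargo": 0.6},
    "retail":    {"distribution": 0.9, "store": 0.7, "price": 0.5},
}

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def assign_keywords(text, top_n=1):
    tf = Counter(text.lower().split())           # steps (2)-(3): parse, TF vector
    scores = {name: cosine(tf, vec)              # step (4): cosine similarity
              for name, vec in keyword_sets.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]  # step (5): top matches

print(assign_keywords("the port handles shipping cargo from the port"))
```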