• Title/Summary/Keyword: Search Item

Search Result 182, Processing Time 0.024 seconds

Fermentation of Cucurbita maxima Extracts with Microorganisms from Kimchi (김치 유래 유산균을 이용한 단호박 발효음료 제조 기술 개발)

  • Roh, Hyun-Ji;Kim, Gi-Eun
    • KSBB Journal
    • /
    • v.24 no.2
    • /
    • pp.149-155
    • /
    • 2009
  • Nineteen strains identifiable as Lactobacillus sp. were isolated. Cucurbita maxima is known as a traditional health food, and various positive effects on the human body have already been reported. In this study we tried to develop a production process for a healthy fermented drink using Cucurbita maxima and strains originating from Kimchi. Many lactobacilli species present in fermented foods cannot survive the acidic conditions of the stomach, so we searched for and selected a strain that can reach the small intestine. A Lactobacillus strain designated C332 was identified as Lactobacillus plantarum and selected for the fermentation process. By treatment with artificial gastric juice and artificial bile, the survival rate of the cells was calculated, and their physiological characteristics under various conditions were tested. After the fermentation process, sensory tests on the product were conducted with panels. Most of the cells survived the acidic conditions and behaved as facultative anaerobes. In particular, some antibacterial effects against E. coli were also found. Based on all of these results, the fermented Cucurbita maxima drink can be a successful item in the market.

Genre Pattern based User Clustering for Performance Improvement of Collaborative Filtering System (협업적 여과 시스템의 성능 향상을 위한 장르 패턴 기반 사용자 클러스터링)

  • Choi, Ja-Hyun;Ha, In-Ay;Hong, Myung-Duk;Jo, Geun-Sik
    • Journal of the Korea Society of Computer and Information
    • /
    • v.16 no.11
    • /
    • pp.17-24
    • /
    • 2011
  • In a collaborative filtering system, a clustering of users is built, and based on the clustering results preferred items are recommended to each user. However, building the user clustering is time-consuming, and once users evaluate films and give feedback, rebuilding the clustering is not simple. In this paper, genre patterns are used in a movie recommendation system to simplify and shorten the rebuilding of user clustering. A frequent pattern network is used to extract each user's preferred genre patterns, and user clusters are built from the extracted patterns. Collaborative filtering is then applied to the neighboring users within each cluster to recommend movies. When user feedback is received, traditional collaborative filtering must search all neighboring users again and rebuild the clustering. By applying collaborative filtering to genre-pattern-based user clusters built with a frequent pattern network, however, the search time required to rebuild the user clustering can be limited. With the proposed genre-pattern-based user clustering, the time spent re-establishing user clusters after feedback is reduced, while recommendation performance similar to that of traditional collaborative filtering systems is maintained.
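The pattern-extraction step described above can be sketched in miniature. This is a hypothetical pure-Python illustration, not the authors' implementation: the frequent pattern network is replaced by simple pattern counting, and all user and genre data are invented.

```python
from collections import Counter, defaultdict

# Hypothetical ratings: user -> genre tuples of movies they rated highly.
ratings = {
    "u1": [("Action", "SF"), ("Action", "SF"), ("Drama",)],
    "u2": [("Action", "SF"), ("Action",)],
    "u3": [("Drama", "Romance"), ("Drama", "Romance"), ("Drama",)],
}

def preferred_pattern(genre_lists):
    """Return the user's most frequent genre pattern (their preference signature)."""
    counts = Counter(frozenset(g) for g in genre_lists)
    return counts.most_common(1)[0][0]

def cluster_by_pattern(ratings):
    """Group users whose dominant genre pattern is identical."""
    clusters = defaultdict(list)
    for user, genre_lists in ratings.items():
        clusters[preferred_pattern(genre_lists)].append(user)
    return dict(clusters)

clusters = cluster_by_pattern(ratings)
```

In a full system, collaborative filtering would then be run only within each cluster, which is what limits the search time when the clustering has to be rebuilt.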

A Study on Factors of Internet Overdependence for Adults Using the Decision Tree Analysis Model (성인층의 인터넷 과의존 영향요인: 의사결정나무분석을 활용하여)

  • Seo, Hyung-Jun;Shin, Ji-Woong
    • Informatization Policy
    • /
    • v.25 no.2
    • /
    • pp.20-45
    • /
    • 2018
  • This study aims to identify the factors of Internet overdependence among adults through decision tree analysis, a data mining method, using the National Information Society Agency's raw data from the 2016 survey on Internet overdependence. The decision tree analysis identified a total of 16 nodes of Internet overdependence risk groups. The main predictor variables, in order of impact on the risk groups, were: time spent per smart media use on weekdays; time spent per smart media use on weekends; experience of purchasing cash items; percentage of smart media use for leisure; negative personality; percentage of smart media use for information search and utilization; and awareness of the good functions of the Internet. Users in the highest-risk node used smart media for between 5 and 10 minutes per use on weekdays, had experience of cash item purchases, and had a lower level of awareness of the good functions of the Internet. The analysis led to the following recommendations: First, even short-time use carries a high chance of Internet overdependence; therefore, guidelines should be developed from research on usage behavior rather than usage time. Second, self-regulation is required because factors that affect overindulgence in games, such as cash items, increase Internet overdependence. Third, using the Internet for leisure carries a higher risk of overdependence, so other means of leisure should be recommended.
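The core operation of the decision tree method used above, splitting a risk group on a predictor variable, can be illustrated with a minimal Gini-impurity split search. This is a generic sketch with invented data, not the study's model or its survey dataset.

```python
def gini(labels):
    """Gini impurity of a list of 0/1 labels (1 = overdependence risk)."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 1.0 - p * p - (1 - p) * (1 - p)

def best_split(values, labels):
    """Find the threshold on one predictor that minimizes weighted Gini impurity."""
    best = (None, float("inf"))
    for t in sorted(set(values)):
        left = [l for v, l in zip(values, labels) if v <= t]
        right = [l for v, l in zip(values, labels) if v > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best[1]:
            best = (t, score)
    return best

# Hypothetical data: minutes of smart media use per session vs. risk label.
minutes = [2, 3, 4, 6, 8, 9]
risk    = [0, 0, 0, 1, 1, 1]
threshold, impurity = best_split(minutes, risk)
```

A decision tree repeats this split search recursively over all predictors, which is how the 16 terminal risk nodes in the study arise.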

Efficient Collaboration Method Between CPU and GPU for Generating All Possible Cases in Combination (조합에서 모든 경우의 수를 만들기 위한 CPU와 GPU의 효율적 협업 방법)

  • Son, Ki-Bong;Son, Min-Young;Kim, Young-Hak
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.7 no.9
    • /
    • pp.219-226
    • /
    • 2018
  • One systematic way to generate all possible cases of a combination is to construct a combination tree, whose time complexity is O($2^n$). Combination trees are used for various purposes, such as the graph isomorphism problem and building initial models for calculating frequent item sets. However, algorithms that must search all cases of a combination are difficult to use realistically due to their high time complexity. Nevertheless, as data grow larger and various studies seek to utilize the data, the need to search all cases keeps increasing. Recently, as GPU environments have become popular and easily accessible, various attempts have been made to reduce time by parallelizing algorithms that have high time complexity in a serial environment. Because generating all cases of a combination is sequential and the sizes of its sub-tasks are biased, it is not well suited to parallel implementation; the efficiency of a parallel algorithm is maximized when all threads have tasks of similar size. In this paper, we propose a method for efficient collaboration between the CPU and GPU to parallelize the problem of generating all cases. To evaluate the performance of the proposed algorithm, we analyze its theoretical time complexity and compare its experimental running time with that of other algorithms in CPU and GPU environments. Experimental results show that the proposed CPU-GPU collaboration algorithm maintains a balance between the execution times of the CPU and GPU compared to previous algorithms, and that its execution time improves remarkably as the number of elements increases.
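The load-balancing idea, giving every worker a similarly sized share of the C(n, k) combinations instead of the biased subtrees of a combination tree, can be sketched with combinatorial unranking. This is a CPU-only illustration of the partitioning principle, not the paper's CPU-GPU algorithm.

```python
from math import comb
from itertools import combinations

def split_ranks(n, k, workers):
    """Split the C(n, k) combination ranks into near-equal contiguous chunks,
    so each worker (e.g., CPU thread or GPU block) gets similar work."""
    total = comb(n, k)
    bounds = [total * w // workers for w in range(workers + 1)]
    return [(bounds[w], bounds[w + 1]) for w in range(workers)]

def unrank(n, k, rank):
    """Return the k-combination of {0..n-1} at `rank` in lexicographic order."""
    result, x = [], 0
    while k > 0:
        c = comb(n - x - 1, k - 1)   # combinations whose smallest element is x
        if rank < c:
            result.append(x)
            k -= 1
        else:
            rank -= c
        x += 1
    return tuple(result)

chunks = split_ranks(6, 3, 4)          # C(6,3) = 20 ranks over 4 workers
first = [unrank(6, 3, lo) for lo, _ in chunks]
```

Because each worker can jump directly to its starting combination via `unrank`, no worker inherits the oversized subtree that makes a naive combination-tree split unbalanced.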

A Study on Automated Fake News Detection Using Verification Articles (검증 자료를 활용한 가짜뉴스 탐지 자동화 연구)

  • Han, Yoon-Jin;Kim, Geun-Hyung
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.12
    • /
    • pp.569-578
    • /
    • 2021
  • Thanks to today's web development, we can easily access online news through various media. Yet as easy as it is to access online news, we just as often encounter fake news pretending to be true. As fake news has become a global problem, fact-checking services are now provided domestically as well. However, these rely on expert-based manual detection, so research on technologies that automate fake news detection is being actively conducted. Existing research detects fake news based on the contextual characteristics of an article or a comparison between the title and the main text, but such approaches struggle when the manipulation is precise. Therefore, this study suggests using a verification article to decide whether a news item is genuine, so that the decision is not affected by manipulation of the article itself. To improve detection precision, the study also adds a step that summarizes both the subject article and the verification article with a summarization model. To validate the suggested algorithm, this study conducted experiments on the document summarization method, on the search method for verification articles, and on the precision of fake news detection in the final algorithm. The suggested algorithm can help identify the truth of an article before it is published and distributed online through various media sources.
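The verification-article comparison can be sketched as a similarity check between two already-summarized texts. This is a hypothetical bag-of-words illustration; the threshold and all text are invented, and the paper's summarization model and search step are omitted.

```python
import math
import re
from collections import Counter

def tf_vector(text):
    """Bag-of-words term-frequency vector for a (summarized) article."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def looks_genuine(subject_summary, verification_summary, threshold=0.5):
    """Label the subject article genuine if it agrees with the verification
    article. The threshold is illustrative, not from the paper."""
    return cosine(tf_vector(subject_summary),
                  tf_vector(verification_summary)) >= threshold

claim  = "the city will open the new bridge in march"
verify = "officials confirmed the new bridge will open in march"
fake   = "aliens landed on the bridge last night"
```

A real pipeline would replace the raw texts with model-generated summaries, which is the step the study adds to sharpen this comparison.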

Effectiveness of Acupuncture in the Treatment of Post-Disaster Musculoskeletal Pain: A Systematic Review (재난 후 근골격계 통증에 침치료의 유효성: 체계적 문헌고찰)

  • Ka-Hyun Kim;Sung-Won Choi;Hae-Won Hong;Ju-Young Yoon;Yong-Jun Kim;Jung-Hyun Kim
    • Journal of Korean Medicine Rehabilitation
    • /
    • v.33 no.3
    • /
    • pp.135-148
    • /
    • 2023
  • Objectives To investigate the effectiveness of acupuncture in the treatment of post-disaster musculoskeletal pain by reviewing relevant clinical studies. Methods A systematic search was conducted across 10 electronic databases to identify clinical studies on acupuncture treatment for post-disaster musculoskeletal pain published up to May 2023. Methodological quality was evaluated using the Cochrane Risk of Bias 2 tool and the Risk of Bias Assessment tool for non-randomized studies. Results Six articles were analyzed, including two randomized controlled trials (RCTs), two before-after studies, one qualitative study, and one case series. Overall, acupuncture therapy showed some improvement in pain scale scores among disaster survivors with musculoskeletal pain. However, no significant improvement was observed on the Short-Form McGill Pain Questionnaire (SF-MPQ-2). Subgroup analysis of participants who completed at least four acupuncture sessions revealed a significant effect on the SF-MPQ-2. Additionally, a significant improvement in the 36-Item Short Form Survey (SF-36P) was observed after 6 months of treatment, but the 2-month treatment period did not show statistically significant effects on SF-36P improvement. Evaluation of the methodological quality of the RCTs identified some concerns of bias. Conclusions The results suggest that acupuncture is effective in alleviating post-disaster musculoskeletal pain. However, considering the limited number of selected studies and the inclusion of subjective evaluation measures, caution should be exercised in interpreting the results. Further large-scale follow-up studies are needed to determine the optimal frequency and duration of acupuncture treatment, and well-designed controlled trials should be conducted to provide more robust evidence regarding the effectiveness of acupuncture for post-disaster musculoskeletal pain.

The Relationship between Using Both Hands Keyboard Input and Hand Function Among the Lifestyles of University Student (대학생의 라이프스타일 중 양손사용 스마트폰 자판 입력과 손 기능과의 관계)

  • Bae, Seong-Hwan;Kang, Woo-Jin;Kim, Na-Yeong;Kim, Ji-Hyeon;Jo, June-Hyeok;Baek, Ji-Young
    • Journal of Korea Entertainment Industry Association
    • /
    • v.15 no.1
    • /
    • pp.221-228
    • /
    • 2021
  • This study aims to provide basic data for developing hand-function training programs using a keyboard by examining whether there is a relationship between two-handed smartphone keyboard input speed, hand dexterity, and eye-hand coordination ability. Smartphone keyboard input speed, the Purdue Pegboard Test, the Grooved Pegboard Test, and the Korean Developmental Test of Visual Perception-Adolescent (K-DTVP-A) were administered to 40 university students. An independent-sample t-test and one-way ANOVA were conducted to identify differences in two-handed smartphone keyboard input speed, dexterity, eye-hand coordination, and visual-motor ability according to the general characteristics of the subjects. Pearson correlation analysis was also conducted to examine the relationships among these measures. As a result, two-handed smartphone keyboard input speed was correlated with the dominant-hand score on the Purdue Pegboard Test (r=-.313, p<.05). In addition, smartphone keyboard input speed was correlated with the Copying (r=-.333, p<.05), Visual-Motor Search (r=.455, p<.01), Visual-Motor Speed (r=-.453, p<.01), and Form Constancy (r=-.341, p<.05) items of the K-DTVP-A. These findings should be helpful for developing treatment programs that use a smartphone, and the effectiveness of such programs is expected to be demonstrated through additional experimental studies in the future.
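The Pearson correlations reported above (e.g., r = -.313 between input speed and the dominant-hand Purdue Pegboard score) follow the standard product-moment formula, shown here with invented data (the study's raw scores are not available):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: faster typing (fewer seconds) alongside better pegboard
# scores yields a negative r, matching the sign of the paper's r = -.313.
typing_seconds = [30, 35, 40, 45, 50]
pegboard_score = [18, 16, 15, 13, 12]
r = pearson_r(typing_seconds, pegboard_score)
```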

A CF-based Health Functional Recommender System using Extended User Similarity Measure (확장된 사용자 유사도를 이용한 CF-기반 건강기능식품 추천 시스템)

  • Sein Hong;Euiju Jeong;Jaekyeong Kim
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.3
    • /
    • pp.1-17
    • /
    • 2023
  • With the recent rapid development of ICT (Information and Communication Technology) and the popularization of digital devices, the online market continues to grow, and we live in a flood of information. Customers face information-overload problems that cost considerable time and money when selecting products, so personalized recommender systems have become an essential methodology for addressing such issues. Collaborative Filtering (CF) is the most widely used recommender system. Traditional recommender systems mainly utilize quantitative data such as rating values, which cannot fully reflect user preferences, resulting in poor recommendation accuracy. To solve this problem, studies that also reflect qualitative data, such as review contents, are being actively conducted. In this study, text mining was used to quantify user review contents. General CF consists of three steps: user-item matrix generation, Top-N neighborhood group search, and Top-K recommendation list generation. We propose a recommendation algorithm that applies an extended similarity measure, which utilizes quantified review contents in addition to user rating values. After calculating review similarity by applying TF-IDF, Word2Vec, and Doc2Vec techniques to the review contents, the extended similarity is created by combining user rating similarity with the quantified review similarity. To verify this, we used user ratings and review data from the "Health and Personal Care" category of the e-commerce site Amazon. The proposed recommendation model using the extended similarity measure showed performance superior to the traditional model using only rating-based similarity. In addition, among the various text mining techniques, the similarity obtained using TF-IDF performed best in the neighborhood group search and recommendation list generation steps.
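The extended similarity measure, blending rating similarity with review-text similarity, can be sketched as follows. This assumes a simple linear blend with weight alpha and a plain TF-IDF review representation; the paper's exact combination scheme and its Word2Vec/Doc2Vec variants are not reproduced here.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF vectors, one per user's concatenated review text."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(docs)
    df = Counter(t for doc in tokenized for t in set(doc))
    vecs = []
    for doc in tokenized:
        tf = Counter(doc)
        vecs.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def extended_similarity(rating_sim, review_sim, alpha=0.5):
    """Blend rating-based and review-based similarity; alpha is illustrative."""
    return alpha * rating_sim + (1 - alpha) * review_sim

sim = extended_similarity(0.9, 0.5, alpha=0.7)
```

In the CF pipeline, `extended_similarity` would replace the plain rating similarity in the Top-N neighborhood search, so neighbors must agree in both what they rate and how they write about it.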

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets the interests and needs of users from the overflowing contents becomes more important as content keeps being generated. In this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft also focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is one of the fields where text data analysis is expected to be useful and promising, because new information is constantly generated and the earlier information is obtained, the more valuable it is. Automatic knowledge extraction can be effective in areas like the financial sector, where the information flow is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and it is difficult to extract good-quality triples. Second, producing labeled text data manually becomes harder as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome these limits and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike other references, the purpose of this study is to extract knowledge entities related to individual stock items.
Various but relatively simple data-processing methods are applied in the presented model to solve the problems of previous research and to enhance the model's effectiveness. From these processes, this study has three significances. First, it offers a practical and simple automatic knowledge extraction method that can be applied directly. Second, it presents the possibility of performance evaluation through a simple problem definition. Finally, it increases the expressiveness of the knowledge by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports about 30 individual stocks, the top 30 items by frequency of publication from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, one score function per stock is trained. Thus, when a new entity from the testing set appears, its score can be calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to the entity. To evaluate the presented models, we confirm their predictive power and whether the score functions are well constructed by calculating the hit ratio for all reports in the testing set.
As a result of the empirical study, the presented model shows 69.3% hit accuracy on a testing set of 2,526 reports. This hit ratio is meaningfully high despite some constraints on conducting the research. Looking at the prediction performance for each stock, only three stocks, LG Electronics, Kia Motors, and Mando, show performance far below average; this may be due to interference effects with other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, needed to search related information in accordance with the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network, without learning a field-specific corpus or word vectors. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain; most notably, the especially poor performance on only some stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.
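The per-stock score functions described above are Neural Tensor Network scores. A minimal sketch of the standard NTN scoring form, u · tanh(e1ᵀ W[k] e2 + V [e1; e2] + b), follows; all weights are toy values and no training is shown, so this only illustrates how one entity pair is scored.

```python
import math

def ntn_score(e1, e2, W, V, b, u):
    """Neural Tensor Network score: u . tanh(e1' W[s] e2 + V [e1;e2] + b).
    W is a list of k matrices (the tensor slices); all weights here are toy values."""
    k = len(W)
    hidden = []
    for s in range(k):
        bilinear = sum(e1[i] * W[s][i][j] * e2[j]
                       for i in range(len(e1)) for j in range(len(e2)))
        linear = sum(V[s][i] * v for i, v in enumerate(e1 + e2))
        hidden.append(math.tanh(bilinear + linear + b[s]))
    return sum(u[s] * hidden[s] for s in range(k))

# Toy 2-d one-hot entity vectors and k = 2 slices; purely illustrative weights.
e_entity = [1.0, 0.0]
e_stock  = [0.0, 1.0]
W = [[[0.5, 1.0], [0.0, -0.5]], [[0.2, -1.0], [0.3, 0.1]]]
V = [[0.1, 0.1, -0.2, 0.0], [0.0, 0.2, 0.1, 0.1]]
b = [0.0, 0.0]
u = [1.0, -1.0]
score = ntn_score(e_entity, e_stock, W, V, b, u)
```

In the paper's setup there is one such score function per stock; a new entity is fed to all of them, and the stock whose function returns the highest score is predicted as related.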

Resolving the 'Gray sheep' Problem Using Social Network Analysis (SNA) in Collaborative Filtering (CF) Recommender Systems (소셜 네트워크 분석 기법을 활용한 협업필터링의 특이취향 사용자(Gray Sheep) 문제 해결)

  • Kim, Minsung;Im, Il
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.137-148
    • /
    • 2014
  • Recommender systems have become one of the most important technologies in e-commerce these days. For many consumers, the ultimate reason to shop online is to reduce the effort of information search and purchase, and recommender systems are a key technology serving these needs. Many past studies on recommender systems have been devoted to developing and improving recommendation algorithms, and collaborative filtering (CF) is known to be the most successful. Despite its success, however, CF has several shortcomings, such as the cold-start, sparsity, and gray sheep problems. To generate recommendations, ordinary CF algorithms require evaluations or preference information directly from users; for new users without any such information, CF cannot produce recommendations (cold-start problem). As the numbers of products and customers increase, the scale of the data grows exponentially and most data cells are empty; this sparse dataset makes computing recommendations extremely hard (sparsity problem). Since CF is based on the assumption that there are groups of users sharing common preferences or tastes, CF becomes inaccurate when there are many users with rare and unique tastes (gray sheep problem). This study proposes a new algorithm that utilizes Social Network Analysis (SNA) techniques to resolve the gray sheep problem. We use 'degree centrality' in SNA to identify users with unique preferences (gray sheep). Degree centrality refers to the number of direct links to and from a node. In a network of users connected through common preferences or tastes, those with unique tastes have fewer links to other users (nodes) and are isolated from them. Therefore, gray sheep can be identified by calculating the degree centrality of each node. We divide the dataset into two parts, gray sheep and others, based on the users' degree centrality.
Then, different similarity measures and recommendation methods are applied to the two datasets. In more detail, the algorithm is as follows. Step 1: Convert the initial data, a two-mode network (user to item), into a one-mode network (user to user). Step 2: Calculate the degree centrality of each node and separate the nodes whose degree centrality is lower than a pre-set threshold; the threshold is determined by simulations such that the accuracy of CF on the remaining dataset is maximized. Step 3: An ordinary CF algorithm is applied to the remaining dataset. Step 4: Since the separated dataset consists of users with unique tastes, an ordinary CF algorithm cannot generate recommendations for them, so a 'popular item' method is used for these users instead. The F measures of the two datasets are weighted by the numbers of nodes and summed to form the final performance metric. To test the performance improvement of this new algorithm, an empirical study was conducted using a publicly available dataset, the MovieLens data by the GroupLens research team, comprising 100,000 evaluations by 943 users on 1,682 movies. The proposed algorithm was compared with an ordinary CF algorithm using 'Best-N-neighbors' and 'Cosine' similarity. The empirical results show that the F measure improved by about 11% on average with the proposed algorithm. Past studies to improve CF performance typically used additional information beyond users' evaluations, such as demographic data, and some applied SNA techniques as a new similarity metric. This study is novel in that it uses SNA to separate the dataset, and it shows that CF performance can be improved without any additional information when SNA techniques are applied as proposed. The study has several theoretical and practical implications. It empirically shows that the characteristics of a dataset can affect the performance of CF recommender systems, helping researchers understand the factors affecting CF performance, and it opens a door for future studies applying SNA to CF to analyze dataset characteristics. In practice, it provides guidelines for improving the performance of CF recommender systems with a simple modification.
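Steps 1 and 2 of the algorithm, projecting the user-item network to a user-user network and separating low-degree-centrality users, can be sketched as follows. The data and threshold are hypothetical, and the CF and popular-item recommendation steps are omitted.

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical user -> liked items; users are linked when they share an item.
likes = {
    "u1": {"A", "B", "C"},
    "u2": {"A", "B"},
    "u3": {"B", "C"},
    "u4": {"X"},          # unique taste: shares nothing with anyone
}

def degree_centrality(likes):
    """Project the two-mode (user-item) network to one-mode (user-user),
    then count each user's direct links."""
    degree = defaultdict(int)
    for u, v in combinations(likes, 2):
        if likes[u] & likes[v]:
            degree[u] += 1
            degree[v] += 1
    return {u: degree[u] for u in likes}

def split_gray_sheep(likes, threshold=1):
    """Users whose degree falls below the threshold are treated as gray sheep."""
    deg = degree_centrality(likes)
    gray = {u for u, d in deg.items() if d < threshold}
    return gray, set(likes) - gray

gray, ordinary = split_gray_sheep(likes)
```

Ordinary CF would then run only on `ordinary`, while members of `gray` receive popular-item recommendations, mirroring Steps 3 and 4 of the algorithm.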


  • (34141) Korea Institute of Science and Technology Information, 245, Daehak-ro, Yuseong-gu, Daejeon
    Copyright (C) KISTI. All Rights Reserved.