• Title/Summary/Keyword search: "single"

Search results: 689

Systematic review on interprofessional education for pre-licensure nursing student in East Asia (예비 간호인력 대상 다학제 전문직 간 교육 중재 연구의 체계적 문헌고찰: 동아시아권 국가 연구를 중심으로)

  • Heejin Lim;Hwa In Kim;Minji Kim;Seung Eun Lee
    • Quality Improvement in Health Care
    • /
    • v.30 no.1
    • /
    • pp.132-152
    • /
    • 2024
  • Purpose: This study aimed to identify and evaluate interprofessional education (IPE) interventions for healthcare professional students in East Asian countries. Methods: The reporting of this study followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A literature search was conducted using seven electronic databases: PubMed, EMBASE, CINAHL, Scopus, Web of Science, ERIC, and ProQuest Dissertations & Theses Global. The Joanna Briggs Institute Critical Appraisal Checklists were used to appraise the quality of the included studies. The outcomes of the IPE interventions were classified according to a modified Kirkpatrick model. Results: This review included 30 studies, predominantly conducted in Singapore, South Korea, and Taiwan. The most prevalent research design was a one-group pre-posttest design, and most IPE interventions occurred as single events. Approximately 70% of the studies involved students from two healthcare professions, mainly nursing and medicine. Simulations, group discussions, and lectures emerged as the most common teaching methods, with almost half of the studies combining these techniques. The IPE content focused primarily on interprofessional teamwork, communication, and clinical patient-care situations, including the management of septic shock. The effectiveness of the IPE interventions was evaluated mainly through self-reported measures, which indicated improvements in attitudes, perceptions, knowledge, and skills, corresponding to Level 2 of the modified Kirkpatrick model. However, the reviewed studies did not assess changes in participants' behavior or patient outcomes. Conclusion: IPE interventions show promise for enhancing interprofessional collaboration and communication skills among health professional students. Future studies should employ rigorous designs to assess the effectiveness of IPE interventions.
Moreover, when designing IPE interventions, researchers and educators should consider the role of cultural characteristics in East Asian countries.

Segmentation of Mammography Breast Images using Automatic Segmen Adversarial Network with Unet Neural Networks

  • Suriya Priyadharsini.M;J.G.R Sathiaseelan
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.12
    • /
    • pp.151-160
    • /
    • 2023
  • Breast cancer is the most dangerous and deadly form of cancer, and it is the second most common cancer among Indian women in rural areas. Early detection can significantly improve treatment effectiveness: recognizing symptoms and signs early raises the odds of receiving specialist care sooner, and thus the chances of survival, by delaying progression or eliminating the cancer entirely. Mammography is a high-resolution radiographic technique that plays an important role in preventing and diagnosing cancer at an early stage. Automatic segmentation of the breast region in mammography images can narrow the area to be searched for cancer while saving time and effort compared with manual segmentation. Previous studies used autoencoder-like convolutional and deconvolutional neural networks (CN-DCNN) to automatically segment the breast region in mammography images. In this paper, we present Automatic SegmenAN, a novel end-to-end adversarial neural network for medical image segmentation. Because image segmentation requires dense, pixel-level labelling, the single scalar real/fake output of a standard GAN discriminator may be inefficient at providing stable and informative gradient feedback to the networks. Instead of using a fully convolutional neural network as the segmentor, we propose a new adversarial critic network with a multi-scale L1 loss function that forces the critic and segmentor to learn both global and local features, capturing long- and short-range spatial relations among pixels. We demonstrate that Automatic SegmenAN is more reliable for segmentation tasks than the state-of-the-art U-net segmentation technique.
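The multi-scale L1 idea described above can be illustrated independently of the paper's architecture: compare a predicted mask with the ground truth at several resolutions, so that both fine local disagreements and coarse long-range structure contribute to the loss. The sketch below is a minimal NumPy illustration of that idea, not the paper's implementation; the pooling factors and toy masks are assumptions.

```python
import numpy as np

def avg_pool(img, factor):
    """Downsample a square 2D map by average pooling (size divisible by factor)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def multiscale_l1(pred_mask, true_mask, scales=(1, 2, 4)):
    """Mean absolute difference between masks, averaged over several resolutions.

    Stands in for the critic's multi-scale L1 objective: coarse scales capture
    long-range structure, fine scales capture local pixel agreement.
    """
    losses = [np.abs(avg_pool(pred_mask, s) - avg_pool(true_mask, s)).mean()
              for s in scales]
    return float(np.mean(losses))

truth = np.zeros((8, 8))
truth[2:6, 2:6] = 1.0                      # toy ground-truth breast region
perfect = truth.copy()                     # ideal segmentor output
shifted = np.roll(truth, 1, axis=0)        # slightly mis-placed mask

loss_perfect = multiscale_l1(perfect, truth)
loss_shifted = multiscale_l1(shifted, truth)
```

A perfect mask scores zero at every scale, while even a one-pixel shift is penalized at all scales, which is the stabilizing signal a single scalar real/fake output lacks.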

Sixteen years progress in recanalization of chronic carotid artery occlusion: A comprehensive review

  • Stanishevskiy Artem;Babichev Konstantin;Savello Alexander;Gizatullin Shamil;Svistov Dmitriy;Davydov Denis
    • Journal of Cerebrovascular and Endovascular Neurosurgery
    • /
    • v.25 no.1
    • /
    • pp.1-12
    • /
    • 2023
  • Objective: Although chronic carotid artery occlusion appears to be associated with a significant risk of ischemic stroke, revascularization techniques are neither well established nor widespread. In contrast, extracranial-intracranial bypass is common despite the lack of evidence of neurological improvement or prevention of ischemic events. The aim of the current review is to evaluate the effectiveness of various methods of recanalization of chronic carotid artery occlusion. Methods: A comprehensive literature search of the PubMed, Scopus, Cochrane, and Web of Science databases was performed. Various parameters were assessed among patients who underwent surgical, endovascular, and hybrid recanalization for chronic carotid artery occlusion. Results: 40 publications from 2005 to 2021, covering more than 1,300 cases of revascularization of chronic carotid artery occlusion, were reviewed. The following parameters were assessed among patients who underwent surgical, endovascular, and hybrid recanalization: mean age, male-to-female ratio, mean duration of occlusion before treatment, rate of successful recanalization, frequency of restenosis and reocclusion, prevalence of postoperative ischemic stroke, improvement of neurological or other symptoms, and complications. Based on the indications for revascularization and the predictive factors of various recanalizing procedures proposed in the reviewed literature, an algorithm for clinical decision making has been formulated. Conclusions: Although treatment of chronic carotid artery occlusion remains challenging, the current literature suggests revascularization as the single option for verified neurological improvement and prevention of ischemic events. Surgical and endovascular procedures should be taken into account when treating patients with symptomatic chronic carotid artery occlusion.

A BRIEF REVIEW OF PREDATOR-PREY MODELS FOR AN ECOLOGICAL SYSTEM WITH A DIFFERENT TYPE OF BEHAVIORS

  • Kuldeep Singh;Teekam Singh;Lakshmi Narayan Mishra;Ramu Dubey;Laxmi Rathour
    • Korean Journal of Mathematics
    • /
    • v.32 no.3
    • /
    • pp.381-406
    • /
    • 2024
  • The logistic growth model was developed with a single population in mind. We now analyze the growth of two interdependent populations, moving beyond the one-dimensional model. Interdependence between two species of animals can arise when one (the "prey") acts as a food supply for the other (the "predator"). Models of this type are called predator-prey models. While social scientists are mostly concerned with human communities (where dependency hopefully takes other forms), predator-prey models are interesting for a variety of reasons. Some variations of this model produce limit cycles, an interesting sort of equilibrium found in dynamical systems with two (or more) dimensions. In terms of substance, predator-prey models have a number of useful social science applications when the state variables are reinterpreted. Based on a survey of research publications from the last ten years, this paper provides a quick overview of numerous predator-prey models with various types of behaviour that can be applied to ecological systems. The primary source for learning about predator-prey models used in ecological systems is historical research undertaken in various circumstances by various researchers. The review aids in the search for literature that investigates the impact of various parameters on ecological systems. Comparisons with traditional models are also made, and the results are double-checked. Several older predator-prey models, such as the Beddington-DeAngelis predator-prey model, the stage-structured predator-prey model, and the Lotka-Volterra predator-prey model, prove stable and remain popular among academics. For each of these scenarios, the results are thoroughly checked.
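As a concrete instance of the class of models surveyed, the classic Lotka-Volterra system can be integrated numerically to exhibit its closed-orbit behavior. This is a generic textbook sketch, not code from any reviewed study; the parameter values and initial conditions are arbitrary illustrative choices.

```python
import math

# Illustrative parameters: prey growth a, predation rate b,
# predator death c, conversion efficiency d.
a, b, c, d = 1.0, 0.5, 0.75, 0.25

def deriv(x, y):
    """dx/dt, dy/dt of the Lotka-Volterra predator-prey equations."""
    return a * x - b * x * y, d * b * x * y - c * y

def rk4_step(x, y, h):
    """One fourth-order Runge-Kutta step of size h."""
    k1 = deriv(x, y)
    k2 = deriv(x + h / 2 * k1[0], y + h / 2 * k1[1])
    k3 = deriv(x + h / 2 * k2[0], y + h / 2 * k2[1])
    k4 = deriv(x + h * k3[0], y + h * k3[1])
    return (x + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def invariant(x, y):
    """Quantity conserved along Lotka-Volterra orbits (hence closed cycles)."""
    return d * b * x - c * math.log(x) + b * y - a * math.log(y)

x, y, h = 4.0, 2.0, 0.01
v0 = invariant(x, y)
for _ in range(5000):            # integrate 50 time units
    x, y = rk4_step(x, y, h)
drift = abs(invariant(x, y) - v0)  # small drift => trajectory stays on its cycle
```

The near-zero drift of the invariant is the numerical signature of the neutrally stable cycles that distinguish Lotka-Volterra from the limit-cycle variants mentioned in the abstract.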

A Study on the Meaning and Strategy of Keyword Advertising Marketing

  • Park, Nam Goo
    • Journal of Distribution Science
    • /
    • v.8 no.3
    • /
    • pp.49-56
    • /
    • 2010
  • At the initial stage of Internet advertising, banner advertising came into fashion. As the Internet developed into a central part of daily life and competition in the online advertising market grew fierce, there was not enough space for banner advertising, which rushed to portal sites only. All these factors were responsible for an upsurge in advertising prices. Consequently, the high-cost, low-efficiency problems of banner advertising were raised, which led to the emergence of keyword advertising as a new type of Internet advertising to replace its predecessor. In the early 2000s, when Internet advertising became activated, display advertising, including banner advertising, dominated the Net. However, display advertising showed signs of gradual decline and registered negative growth in 2009, whereas keyword advertising grew rapidly and started to outdo display advertising as of 2005. Keyword advertising refers to the technique of exposing relevant advertisements at the top of search sites when a user searches for a keyword. Instead of exposing advertisements to unspecified individuals like banner advertising, keyword advertising, a targeted advertising technique, shows advertisements only when customers search for a desired keyword, so that only highly prospective customers are given a chance to see them. In this context, it is also referred to as search advertising. It is regarded as more aggressive advertising with a higher hit rate than earlier forms in that, instead of the seller discovering customers and running advertisements for them as with TV, radio, or banner advertising, it exposes advertisements to visiting customers. Keyword advertising makes it possible for a company to seek publicity online simply by making use of a single word, and to achieve maximum efficiency at minimum cost.
The strong point of keyword advertising is that customers can directly contact the products in question, making it more efficient than advertising in mass media such as TV and radio. Its weak point is that a company must register its advertisement on each and every portal site and finds it hard to exercise substantial supervision over its advertisements, so advertising expenses may exceed profits. Keyword advertising serves as the most appropriate method of advertising for the sales and publicity of small and medium enterprises, which need maximum advertising effect at a low advertising cost. At present, keyword advertising is divided into CPC advertising and CPM advertising. The former is known as the most efficient technique and is also referred to as advertising based on the metered-rate system: a company pays according to the number of clicks a searched keyword receives. This model is representatively adopted by Overture, Google's AdWords, Naver's Clickchoice, and Daum's Clicks. CPM advertising depends on a flat-rate payment system, making a company pay for its advertisement on the basis of the number of exposures rather than the number of clicks. This method fixes the price of an advertisement on the basis of 1,000 exposures, and is mainly adopted by Naver's Timechoice, Daum's Speciallink, and Nate's Speedup. At present, the CPC method is most frequently adopted. Its weak point is that advertising costs can rise through repeated clicks from the same IP. If a company makes good use of strategies for maximizing the strong points of keyword advertising and complementing its weak points, it is highly likely to turn its visitors into prospective customers.
Accordingly, an advertiser should analyze customers' behavior and approach them in a variety of ways, trying hard to find out what they want. With this in mind, he or she has to put multiple keywords into use when running ads. When first running an ad, the advertiser should give priority to keyword selection, considering how many search-engine users will click the keyword in question and how much the advertisement will cost. As the popular keywords that search-engine users frequently enter carry a high unit cost per click, advertisers without much money for advertising at the initial phase should pay attention to detailed keywords suitable to their budget. Detailed keywords, also referred to as peripheral keywords or extension keywords, can be thought of as combinations of major keywords. Most keywords are in the form of text. The biggest strong point of text-based advertising is that it looks like search results, causing little antipathy. But it fails to attract much attention precisely because most keyword advertising is text. Image-embedded advertising is easier to notice thanks to its images, but it is exposed on the lower part of a web page and is recognized as an advertisement, which leads to a low click-through rate. However, its strong point is that its prices are lower than those of text-based advertising. If a company owns a logo or a product that people can easily recognize, it is well advised to make good use of image-embedded advertising to attract Internet users' attention. Advertisers should analyze their logs and examine customers' responses based on site events and product composition as a vehicle for monitoring their behavior in detail.
Besides, keyword advertising allows advertisers to analyze the advertising effects of exposed keywords through log analysis. Log analysis refers to a close analysis of the current situation of a site based on information about visitors, drawn from the number of visitors, page views, and cookie values. A user's IP, the pages used, the time of use, and cookie values are stored in the log files generated by each web server. The log files contain a huge amount of data, and since it is almost impossible to analyze them directly, one analyzes them using log-analysis solutions. The generic information that can be extracted from log-analysis tools includes the total number of page views, average page views per day, the number of basic page views, page views per visit, the total number of hits, average hits per day, hits per visit, the number of visits, average visits per day, the net number of visitors, average visitors per day, one-time visitors, visitors who have come more than twice, and average usage hours. Such data are useful for analyzing the situation and current status of rival companies as well as for benchmarking. As keyword advertising exposes advertisements exclusively on search-result pages, competition among advertisers attempting to preoccupy popular keywords is very fierce. Some portal sites keep giving priority to existing advertisers, whereas others give all advertisers a chance to purchase the keywords in question after the advertising contract is over.
On sites that give priority to established advertisers, an advertiser who relies on keywords sensitive to season and timeliness may as well purchase a vacant advertising slot, lest he or she miss the appropriate timing for advertising. However, Naver does not give priority to existing advertisers for any keyword advertisements; in this case, one can preoccupy keywords by entering into a contract after confirming the contract period for advertising. This study is designed to take a look at marketing for keyword advertising and to present effective strategies for keyword advertising marketing. At present, the Korean CPC advertising market is virtually monopolized by Overture. Its strong points are that Overture is based on the CPC charging model and that advertisements are registered at the top of the most representative portal sites in Korea. These advantages make it the most appropriate medium for small and medium enterprises. However, the CPC method of Overture has its weak points, too: it is not the only perfect advertising model among search advertisements in the online market. So it is absolutely necessary that small and medium enterprises, including independent shopping malls, complement the weaknesses of the CPC method and make good use of strategies for maximizing its strengths so as to increase their sales and create a point of contact with customers.
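The CPC vs. CPM distinction drawn above reduces to simple arithmetic. The sketch below uses hypothetical campaign numbers (the rates, impressions, and click counts are illustrative, not market data):

```python
def cpc_cost(clicks, bid_per_click):
    """Pay-per-click: cost scales with the number of clicks received."""
    return clicks * bid_per_click

def cpm_cost(impressions, rate_per_thousand):
    """Flat-rate: cost scales with exposures, billed per 1,000 impressions."""
    return impressions / 1000 * rate_per_thousand

# Hypothetical campaign: 50,000 exposures yielding 1,000 clicks (a 2% CTR).
impressions = 50_000
clicks = 1_000

cost_cpc = cpc_cost(clicks, 0.30)        # assumed bid of $0.30 per click
cost_cpm = cpm_cost(impressions, 5.00)   # assumed rate of $5 per 1,000 exposures
```

With these assumed numbers CPM comes out cheaper, but a lower click-through rate flips the comparison, which is why the abstract frames CPC as paying only for prospective customers.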


Design and Implementation of a Similarity based Plant Disease Image Retrieval using Combined Descriptors and Inverse Proportion of Image Volumes (Descriptor 조합 및 동일 병명 이미지 수량 역비율 가중치를 적용한 유사도 기반 작물 질병 검색 기술 설계 및 구현)

  • Lim, Hye Jin;Jeong, Da Woon;Yoo, Seong Joon;Gu, Yeong Hyeon;Park, Jong Han
    • The Journal of Korean Institute of Next Generation Computing
    • /
    • v.14 no.6
    • /
    • pp.30-43
    • /
    • 2018
  • Many studies have been carried out on retrieving images using colors, shapes, and textures, which are characteristic features of images, and research related to crop disease images has also progressed. In this paper, to help identify diseases occurring in crops grown in agricultural fields, we propose a similarity-based crop disease search system that uses disease images of horticultural crops. The proposed system improves similarity retrieval performance over existing ones through a combined descriptor rather than a single descriptor, and applies a weight-based calculation method to provide users with highly readable similarity search results. A total of 13 descriptors were used in combination. We retrieved diseases of six crops using combined descriptors, and for each crop the combined descriptor with the highest average accuracy was selected as that crop's combined descriptor. The retrieved results were expressed as percentages using two calculation methods: one based on the ratio of disease names and one based on weights. The calculation method based on the ratio of disease names has the problem that the number of stored images sharing a disease name dominates the top-ranked output of the similarity search. To solve this problem, we used a calculation method based on weights. We applied test images of each disease name to both calculation methods to measure the classification performance of the retrieval results, and compared the average retrieval performance of the two methods for each crop. For red pepper and apple, the performance of the calculation method based on the ratio of disease names was about 11.89% higher on average than that of the calculation method based on weights.
For chrysanthemum, strawberry, pear, and grape, the performance of the calculation method based on weights was about 20.34% higher on average than that of the calculation method based on the ratio of disease names. In addition, the UI/UX of the proposed system was configured for convenience based on feedback from actual users. Each system screen has a title and a description at the top, and is configured so that users can conveniently view information on the disease. The disease information retrieved with the proposed calculation method displays the images and disease names of similar diseases. The system is implemented for use with web browsers in both PC and mobile environments.
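One way to read the "inverse proportion of image volumes" weighting named in the title is: when the top-k retrieved images vote for a disease, divide each vote by how many images of that disease the database holds, so over-represented diseases cannot win by volume alone. The sketch below is that interpretation only, with hypothetical disease names, counts, and similarities, not the paper's actual descriptors or data.

```python
from collections import Counter

def weighted_disease_scores(retrieved, db_counts):
    """Score candidate diseases from a similarity-retrieval list.

    retrieved: list of (disease_name, similarity) for the top-k nearest images.
    db_counts: stored images per disease; each vote is divided by this count
    so diseases with many stored images don't dominate by volume alone.
    Returns scores normalized to sum to 1 (displayable as percentages).
    """
    scores = Counter()
    for disease, sim in retrieved:
        scores[disease] += sim / db_counts[disease]
    total = sum(scores.values())
    return {d: s / total for d, s in scores.items()}

# Hypothetical database: one disease heavily over-represented.
db_counts = {"powdery_mildew": 400, "leaf_spot": 50}
retrieved = [("powdery_mildew", 0.90), ("powdery_mildew", 0.80),
             ("leaf_spot", 0.85), ("leaf_spot", 0.80)]

scores = weighted_disease_scores(retrieved, db_counts)
best = max(scores, key=scores.get)
```

A raw count of retrieved images would tie the two diseases here, but the inverse-volume weights let the under-represented disease win, which is the bias the weight-based method is meant to correct.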

HW/SW Partitioning Techniques for Multi-Mode Multi-Task Embedded Applications (멀티모드 멀티태스크 임베디드 어플리케이션을 위한 HW/SW 분할 기법)

  • Kim, Young-Jun;Kim, Tae-Whan
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.8
    • /
    • pp.337-347
    • /
    • 2007
  • An embedded system is called a multi-mode embedded system if it performs multiple applications by dynamically reconfiguring the system functionality. Further, it is called a multi-mode multi-task embedded system if it additionally supports multiple tasks to be executed within a mode. In this paper, we address a HW/SW partitioning problem, namely the HW/SW partitioning of multi-mode multi-task embedded applications with timing constraints on tasks. The objective of the optimization problem is to find a minimal total system cost of allocating/mapping processing resources to the functional modules in tasks, together with a schedule that satisfies the timing constraints. Success in solving the problem is closely related to how much of the potential parallelism among module executions can be utilized. However, due to the inherently large search space of that parallelism, and to make schedulability analysis easy, prior HW/SW partitioning methods have not been able to fully exploit the potential parallel execution of modules. To overcome this limitation, we propose a set of comprehensive HW/SW partitioning techniques that solve the three subproblems of the partitioning problem simultaneously: (1) allocation of processing resources, (2) mapping of the processing resources to the modules in tasks, and (3) determination of an execution schedule of modules. Specifically, based on a precise measurement of the parallel execution and schedulability of modules, we develop a stepwise-refinement partitioning technique for single-mode multi-task applications. The proposed technique is then extended to solve the HW/SW partitioning problem of multi-mode multi-task applications.
From experiments with a set of real-life applications, it is shown that the proposed techniques reduce the implementation cost by 19.0% and 17.0% for single- and multi-mode multi-task applications, respectively, compared with the conventional method.
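The cost-versus-deadline trade-off at the heart of HW/SW partitioning can be illustrated with a much-simplified greedy sketch. It assumes strictly sequential execution, ignoring the inter-module parallelism the proposed techniques exploit, and the module names, times, and costs are hypothetical.

```python
def greedy_partition(modules, deadline):
    """Keep every module in SW unless moving it to HW is needed for the deadline.

    modules: name -> (sw_time, hw_time, hw_cost). Execution is assumed strictly
    sequential; the real problem also schedules modules in parallel.
    Returns (assignment, total_hw_cost), or None if the deadline is infeasible.
    """
    assign = {m: "SW" for m in modules}
    total_time = sum(sw for sw, _, _ in modules.values())
    hw_cost = 0
    # Move the module with the best time saving per unit HW cost first.
    order = sorted(modules,
                   key=lambda m: (modules[m][0] - modules[m][1]) / modules[m][2],
                   reverse=True)
    for m in order:
        if total_time <= deadline:
            break
        sw, hw, cost = modules[m]
        if sw > hw:                    # only move if HW is actually faster
            assign[m] = "HW"
            total_time -= sw - hw
            hw_cost += cost
    return (assign, hw_cost) if total_time <= deadline else None

# Hypothetical modules: (sw_time, hw_time, hw_cost).
modules = {"fft": (10, 2, 5), "filter": (6, 3, 2), "ctrl": (4, 3, 4)}
result = greedy_partition(modules, deadline=12)
```

Here only `fft` needs to move to hardware to meet the deadline; a greedy heuristic like this can be far from the minimal-cost solution, which is why the paper formulates allocation, mapping, and scheduling as one joint optimization.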

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets users' interests and needs from the overflow of content is becoming ever more important. In this flood of information, attempts are being made to better reflect the user's intention in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft also focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is one of the fields expected to benefit from text data analysis, because it constantly generates new information, and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the information flow is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is difficult to build corpora from different fields with the same algorithm and to extract good-quality triples. Second, producing labeled text data manually becomes harder as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome these limits and improve the semantic performance of stock-related information search, this study attempts to extract knowledge entities using a neural tensor network and to evaluate the performance of the approach. Unlike other references, the purpose of this study is to extract knowledge entities related to individual stock items.
Various but relatively simple data-processing methods are applied in the presented model to solve the problems of previous research and to enhance the model's effectiveness. This study has three contributions. First, it offers a practical and simple automatic knowledge extraction method that can be applied directly. Second, it shows how performance evaluation becomes possible through a simple problem definition. Finally, it increases the expressiveness of the knowledge by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports about 30 individual stocks, the top 30 items by frequency of publication from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, one score function per stock is trained. Thus, when a new entity from the testing set appears, we can calculate its score with every score function, and the stock whose function yields the highest score is predicted as the item related to the entity. To evaluate the presented model, we confirm its predictive power, and whether the score functions are well constructed, by calculating the hit ratio over all reports in the testing set.
As a result of the empirical study, the presented model shows 69.3% hit accuracy on a testing set of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at the model's prediction performance for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show performance far below average; this may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, that are necessary to search for related information in accordance with the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain; most notably, the model's especially poor performance on a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.
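The per-stock score function described above has the standard neural tensor network form s(e1, e2) = uᵀ tanh(e1ᵀ W e2 + V[e1; e2] + b), with a slice of the tensor W per hidden unit. The NumPy sketch below uses random, untrained parameters purely to show the shapes involved; the dimensions and the toy one-hot entity are assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 4                      # entity dimension, number of tensor slices

# One such score function would be trained per stock; here the parameters
# are random placeholders.
W = rng.normal(size=(k, d, d))   # bilinear tensor, one d x d slice per unit
V = rng.normal(size=(k, 2 * d))  # linear term over the concatenated pair
b = rng.normal(size=k)           # bias
u = rng.normal(size=k)           # output weights

def ntn_score(e1, e2):
    """Neural tensor network score for an (entity, stock) embedding pair."""
    bilinear = np.array([e1 @ W[i] @ e2 for i in range(k)])
    linear = V @ np.concatenate([e1, e2])
    return float(u @ np.tanh(bilinear + linear + b))

# A one-hot "entity" scored against a dense stock embedding, mirroring the
# paper's scheme of scoring a new entity with every stock's function.
entity = np.zeros(d); entity[3] = 1.0
stock = rng.normal(size=d)
score = ntn_score(entity, stock)
```

At prediction time one would evaluate `ntn_score` for every stock's parameter set and pick the argmax, which is exactly the hit-ratio procedure the abstract describes.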

Importance of Strain Improvement and Control of Fungal cells Morphology for Enhanced Production of Protein-bound Polysaccharides(β-D-glucan) in Suspended Cultures of Phellinus linteus Mycelia (Phellinus linteus의 균사체 액상배양에서 단백다당체(β-D-glucan)의 생산성 향상을 위한 균주 개량과 배양형태 조절의 중요성)

  • Shin, Woo-Shik;Kwon, Yong Jung;Jeong, Yong-Seob;Chun, Gie-Taek
    • Korean Chemical Engineering Research
    • /
    • v.47 no.2
    • /
    • pp.220-229
    • /
    • 2009
  • Strain improvement and morphology investigation in bioreactor cultures were undertaken in suspended cultures of Phellinus linteus mycelia for mass production of protein-bound polysaccharides (soluble β-D-glucan), a powerful immuno-stimulating agent. The Phellinus sp. screened for this research was identified as Phellinus linteus through ITS rDNA sequencing and a BLAST search, showing 99.7% similarity to other Phellinus linteus strains. An intensive strain-improvement program was carried out by obtaining large numbers of protoplasts for the isolation of single-cell colonies. Rapid, large-scale screening of high-yielding producers was possible because the large numbers of protoplasts (1×10⁵~10⁶ protoplasts/ml) formed using the banding filtration method with cell wall-disrupting enzymes could be regenerated at a relatively high regeneration frequency (10⁻²~10⁻³) in the newly developed regeneration medium. It was demonstrated that strains showing high performance in the protoplast regeneration and solid growth media were able to produce 5.8~6.4% (w/w) of β-D-glucan and 13~15 g/L of biomass in a stable manner in suspended shake-flask cultures of P. linteus mycelia. In addition, increasing cell mass was observed to be the most important factor for enhancing β-D-glucan productivity during the strain-improvement program, since the amount of β-D-glucan extracted from the cell wall of P. linteus mycelia was almost constant per unit biomass. We therefore fully investigated the fungal cell morphology, generally known as one of the key factors affecting the extent of cell growth in bioreactor cultures of mycelial fungi.
It was found that, to obtain as high a cell mass as possible in the final production bioreactor cultures, the producing cells should be proliferated in condensed filamentous forms in the growth cultures, and optimal amounts of these filamentous cells should be transferred as active inoculum to the production bioreactor. In this case, ideal morphologies consisting of compact pellets less than 0.5 mm in diameter were successfully induced in the production cultures, resulting in a shorter lag phase, a 1.5-fold higher specific cell growth rate, and a 3.3-fold increase in final biomass production compared with parallel bioreactor cultures of different morphological forms. It was concluded that not only the high-yielding strain but also its good morphological characteristics led to the significantly higher biomass production and β-D-glucan productivity in the final production cultures.

A Lifelog Management System Based on the Relational Data Model and its Applications (관계 데이터 모델 기반 라이프로그 관리 시스템과 그 응용)

  • Song, In-Chul;Lee, Yu-Won;Kim, Hyeon-Gyu;Kim, Hang-Kyu;Haam, Deok-Min;Kim, Myoung-Ho
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.15 no.9
    • /
    • pp.637-648
    • /
    • 2009
  • As the cost of disks decreases, PCs are soon expected to be equipped with a disk of 1TB or more. Assuming that a single person generates 1GB of data per month, 1TB is enough to store data for the entire lifetime of a person. This has led to the growth of research on lifelog management, which manages what people see and listen to in everyday life. Although many different lifelog management systems have been proposed, based variously on the relational data model, on ontology, and on file systems, each has advantages and disadvantages: those based on the relational data model provide good query processing performance but do not properly support complex queries; those based on ontology handle more complex queries but their performance is not satisfactory; those based on file systems support only keyword queries. Moreover, these systems lack support for lifelog group management and do not provide a convenient user interface for modifying and adding tags (metadata) to lifelogs for effective lifelog search. To address these problems, we propose a lifelog management system based on the relational data model. The proposed system models lifelogs using the relational data model and transforms queries on lifelogs into SQL statements, which results in good query processing performance. It also supports a simplified relationship query that finds a lifelog based on other lifelogs directly related to it, to overcome the disadvantage of improper support for complex queries. In addition, the proposed system supports the management of lifelog groups by providing ways to create, edit, search, play, and share them. Finally, it is equipped with a tagging tool that helps the user modify and add tags conveniently through the selection of various tags. This paper describes the design and implementation of the proposed system and its various applications.
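The core idea of modeling lifelogs relationally and translating lifelog queries into SQL can be sketched with an in-memory store. The table layout and the "related lifelog" join below are simplified illustrations under assumed column names, not the system's actual schema.

```python
import sqlite3

# In-memory relational store for lifelogs; lifelog queries become plain SQL.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE lifelog (
    id          INTEGER PRIMARY KEY,
    kind        TEXT,    -- photo, audio, note, ...
    captured_at TEXT,    -- ISO timestamp
    tag         TEXT)""")

rows = [(1, "photo", "2009-05-01T10:00", "beach"),
        (2, "audio", "2009-05-01T10:05", "beach"),
        (3, "photo", "2009-06-02T09:00", "office")]
conn.executemany("INSERT INTO lifelog VALUES (?, ?, ?, ?)", rows)

# A simplified relationship query: find audio directly related (here, by a
# shared tag) to a photo tagged 'beach'.
related = conn.execute("""
    SELECT a.id
    FROM lifelog AS a
    JOIN lifelog AS p ON a.tag = p.tag
    WHERE a.kind = 'audio' AND p.kind = 'photo' AND p.tag = 'beach'
""").fetchall()
```

Because the relationship query compiles to an ordinary self-join, it inherits the relational engine's indexing and optimization, which is the performance argument the abstract makes for this design.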