Title/Summary/Keyword: Deleted Data


Study on Family Caregiving Burden Scale of Dementia-Korea(FCBSD-K) (치매환자 가족부담감의 한국형 도구개발)

  • Cho, Nam Ok
    • Korean Journal of Adult Nursing / v.12 no.4 / pp.629-640 / 2000
  • The purpose of this study was to develop and validate a scale to measure the burden of Korean caregivers of dementia patients. In the first phase of the study, 15 caregivers of dementia patients were interviewed to provide narrative data from which items were developed; 65 items were initially generated from these interview data. Content validity was judged by two separate panels, comprising 27 professionals and 30 family caregivers. The items were analyzed using the Index of Content Validity (CVI), and the 33 items with a CVI of .80 or higher were selected. This preliminary FCBSD-K was then tested with 207 adult caregivers for reliability and construct validity, including item analysis and orthogonal (Varimax) factor analysis. Eight items were deleted because of high or low item-item correlations. A second factor analysis produced six factors that coincided with the conceptual framework posed for the scale: 'physio-social factor', 'emotional factor', 'family cultural factor', 'role obligation', 'guilt feeling', and 'financial & supportive system factor'. The alpha coefficient for internal consistency was .9264. In conclusion, cultural factors are related to the burden of dementia patients' caregivers, and the FCBSD-K is useful for assessing that burden in Korea.
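The reported internal-consistency coefficient (.9264) is a Cronbach's alpha. A minimal pure-Python sketch of that computation, using a tiny hypothetical two-item dataset (the study's actual item data are not reproduced here):

```python
def cronbach_alpha(items):
    # items: one inner list per scale item, each holding the scores
    # that all respondents gave that item.
    k = len(items)            # number of items
    n = len(items[0])         # number of respondents

    def var(xs):              # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(var(it) for it in items)
    totals = [sum(it[j] for it in items) for j in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))
```

Two perfectly correlated items give an alpha of exactly 1.0, which is a quick sanity check on the formula.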


A Study on the Typology of Agricultural Reservoir for Effective Safety Inspection Systems (효율적인 안전진단 체계 수립을 위한 농업용 저수지 유형화 연구)

  • Lee, Chang Beom;Jung, Nam Su;Park, Seong Ki;Jeon, Sang Ok
    • Journal of The Korean Society of Agricultural Engineers / v.57 no.5 / pp.89-99 / 2015
  • In this research, 1,032 precise safety inspection records from 2004 to 2013 were gathered and organized to design an effective safety inspection system. Items were extracted from the constructed data, and factors for typology were decided with statistical methods such as principal component analysis and cluster analysis. For factor selection, we extracted independent characteristics, such as morphological and geographical ones, and deleted items that can be expressed as combinations of those independent characteristics. Four factors were extracted in this process: total storage, watershed ratio, levee length ratio, and spillway length ratio. In the cluster analysis, levee length ratio was excluded because it did not separate into clusters. Finally, nine types of agricultural reservoir were derived from total storage, watershed ratio, and spillway length ratio through frequency analysis.
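The cluster-analysis step can be illustrated with a minimal pure-Python k-means over hypothetical factor vectors; the paper's actual factor values, standardization, and cluster count are not reproduced here:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    # Minimal k-means over factor vectors (e.g., standardized total storage,
    # watershed ratio, spillway length ratio per reservoir).
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        # Recompute each center as the mean of its cluster.
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    labels = [min(range(k),
                  key=lambda c: sum((a - b) ** 2
                                    for a, b in zip(p, centers[c])))
              for p in points]
    return labels, centers
```

With two well-separated groups of reservoirs, the two groups receive distinct labels regardless of the random initialization.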

Digital Forensics Investigation of Redis Database (Redis 데이터베이스에 대한 디지털 포렌식 조사 기법 연구)

  • Choi, Jae Mun;Jeong, Doo Won;Yoon, Jong Seong;Lee, Sang Jin
    • KIPS Transactions on Computer and Communication Systems / v.5 no.5 / pp.117-126 / 2016
  • Recently, the increasing use of Big Data and Social Network Services has raised demand for NoSQL databases that overcome the limitations of existing relational databases. Forensic examination of relational databases has been steadily researched in digital forensics; in contrast, forensic examination of NoSQL databases is rarely studied. In this paper, we introduce Redis, a key-value store NoSQL database, research the collection and analysis of its forensic artifacts, and propose a method for recovering deleted data. We also developed a recovery tool to verify our recovery algorithm.
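The abstract does not describe the authors' recovery algorithm; as a generic illustration only, a common first pass in this kind of investigation is carving printable-string runs (candidate residual keys and values) out of a raw memory or file dump:

```python
import re

def carve_ascii_strings(dump: bytes, min_len: int = 4):
    # Carve runs of printable ASCII from a raw byte dump -- a generic
    # first pass for spotting residual key/value strings in freed pages.
    # This is NOT the paper's Redis-specific recovery algorithm.
    pattern = re.compile(rb"[\x20-\x7e]{%d,}" % min_len)
    return [m.group().decode("ascii") for m in pattern.finditer(dump)]
```

For example, a dump fragment containing a deleted key and field yields both strings while skipping the surrounding binary bytes.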

Improving the Quality of Response Surface Analysis of an Experiment for Coffee-Supplemented Milk Beverage: I. Data Screening at the Center Point and Maximum Possible R-Square

  • Rheem, Sungsue;Oh, Sejong
    • Food Science of Animal Resources / v.39 no.1 / pp.114-120 / 2019
  • Response surface methodology (RSM) is a useful set of statistical techniques for modeling and optimizing responses in food science research. As a design for a response surface experiment, a central composite design (CCD) with multiple runs at the center point is frequently used. However, situations sometimes arise in which some of the responses at the center point are outliers, and these outliers are overlooked. Since the responses from center runs come from the same experimental conditions, there should be no outliers at the center point, and outliers there ruin the statistical analysis. Thus, the responses at the center point need to be inspected, and any outliers examined. If an outlier is not due to an error in measurement or data entry, it should be deleted; if it is due to such an error, it should be corrected. Through a re-analysis of a dataset published in the Korean Journal for Food Science of Animal Resources, we show that outlier elimination increased the maximum possible R-square that modeling of the data can attain, which enables us to improve the quality of response surface analysis.
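The abstract does not specify the authors' screening rule for center-point replicates; as one simple illustration, Tukey's IQR fences can flag replicates that fall implausibly far from the rest:

```python
def screen_center_points(responses, k=1.5):
    # Flag center-point replicates outside the Tukey fences
    # [Q1 - k*IQR, Q3 + k*IQR]. This is an illustrative rule,
    # not necessarily the one used in the paper.
    xs = sorted(responses)
    n = len(xs)

    def quantile(q):  # linear-interpolation quantile
        pos = q * (n - 1)
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        return xs[lo] + (pos - lo) * (xs[hi] - xs[lo])

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    lo_fence, hi_fence = q1 - k * iqr, q3 + k * iqr
    return [x for x in responses if x < lo_fence or x > hi_fence]
```

A replicate set of 9.9-10.2 with a stray 15.0 flags only the stray value, which would then be checked against the raw records before deletion or correction.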

The Advantage of an Ethical Supply Chain to Increase Consumer's Attention

  • Namim NA
    • The Journal of Industrial Distribution & Business / v.15 no.1 / pp.31-39 / 2024
  • Purpose: Through an ethical supply chain, brands not only catch the eye but also win over consumers who prize credibility and consistency in what they purchase. The ethical supply chain is no longer just a manufacturing process; it has become a compelling story that draws people's attention and wins their loyalty. This study examines the benefits of an ethical supply chain in attracting consumer attention and building brand loyalty. Research design, data and methodology: A detailed method was used to search and analyze relevant articles. Initial searches used set terms in selected databases. Screening involved thorough scrutiny of titles and abstracts to decide their relevance to the study. To enhance data quality, duplicate entries were deleted. Results: Based on the analysis of the prior literature, the results highlight the power of ethical saliency, showing that consumers actively look for and reward products that meet their ethical standards. This attention to ethically transparent brands, in turn, encourages more interest in and interaction with them. Conclusions: Practitioners must therefore transmit the firm's ethical standards through all channels of communication, investor relations materials and financial reports alike.
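The duplicate-removal step in the screening process can be sketched as a normalized-title de-duplication over hypothetical article records (the study's actual databases and screening fields are not specified):

```python
def dedupe_records(records):
    # Drop duplicate articles by a case- and spacing-insensitive
    # title key, keeping the first occurrence of each.
    seen = set()
    unique = []
    for rec in records:
        key = " ".join(rec["title"].lower().split())
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique
```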

Increasing Accuracy of Classifying Useful Reviews by Removing Neutral Terms (중립도 기반 선택적 단어 제거를 통한 유용 리뷰 분류 정확도 향상 방안)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.22 no.3 / pp.129-142 / 2016
  • Customer product reviews have become an important factor in purchase decisions. Customers believe that reviews written by others who have already experienced a product offer more reliable information than that provided by sellers. However, with so many products and reviews, the advantage of e-commerce can be overwhelmed by rising search costs: reading all of the reviews to find the pros and cons of a certain product is exhausting. To help users find the most useful information about products without much difficulty, e-commerce companies provide various ways for customers to write and rate product reviews, and different methods have been developed to classify and recommend useful reviews, primarily using feedback from customers about the helpfulness of reviews. Most shopping websites provide customer reviews together with the average preference for a product, the number of customers who participated in preference voting, and the preference distribution, with most helpfulness information collected through a voting system. Amazon.com asks customers whether a review of a certain product is helpful, and places the most helpful favorable and the most helpful critical reviews at the top of the list of product reviews. Some companies also predict the usefulness of a review from attributes such as its length, author, and the words used, publishing only reviews that are likely to be useful. Text mining approaches have been used to classify useful reviews in advance. To apply a text mining approach to all reviews of a product, we need to build a term-document matrix: all words are extracted from the reviews, and a matrix is built from the number of occurrences of each term in each review.
Because there are many reviews, the term-document matrix becomes very large, which makes it difficult to apply text mining algorithms. Researchers therefore delete some terms on the basis of sparsity, since sparse words have little effect on classification or prediction. The purpose of this study is to suggest a better way of building the term-document matrix by deleting useless terms before review classification. We propose a neutrality index for selecting the words to be deleted. Many words appear in both classes - useful and not useful - and such words have little or even a negative effect on classification performance. We define these as neutral terms and delete the terms that appear similarly in both classes. After deleting sparse words, we selected additional words to delete in terms of neutrality. We tested our approach with Amazon.com review data from five product categories: Cellphones & Accessories, Movies & TV, Automotive, CDs & Vinyl, and Clothing, Shoes & Jewelry. We used reviews that received more than four votes, with a 60% ratio of useful votes among total votes as the threshold for classifying useful and not-useful reviews, and randomly selected 1,500 useful and 1,500 not-useful reviews for each product category. We then applied Information Gain and Support Vector Machine algorithms to classify the reviews and compared classification performance in terms of precision, recall, and F-measure. Although performance varies by product category and data set, deleting terms by both sparsity and neutrality showed the best F-measure for both classification algorithms. However, deleting terms by sparsity alone showed the best recall for Information Gain, and using all terms showed the best precision for SVM. Term-deletion methods and classification algorithms therefore need to be chosen carefully for each data set.
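The abstract does not give the exact formula for the neutrality index, but one plausible formulation — scoring a term by how evenly it appears across the useful and not-useful classes — can be sketched as:

```python
def neutrality_index(useful_count, not_useful_count):
    # 1.0 when a term appears equally often in both classes,
    # 0.0 when it appears in only one. Illustrative formulation,
    # not necessarily the paper's exact definition.
    total = useful_count + not_useful_count
    if total == 0:
        return 0.0
    return 1.0 - abs(useful_count - not_useful_count) / total

def drop_neutral_terms(term_counts, threshold=0.9):
    # term_counts: {term: (count_in_useful, count_in_not_useful)}.
    # Keep only terms whose class distribution is sufficiently skewed.
    return {t: c for t, c in term_counts.items()
            if neutrality_index(*c) < threshold}
```

A function word like "the" that occurs almost evenly in both classes is dropped, while a class-skewed word like "great" survives for the term-document matrix.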

Data De-duplication and Recycling Technique in SSD-based Storage System for Increasing De-duplication Rate and I/O Performance (SSD 기반 스토리지 시스템에서 중복률과 입출력 성능 향상을 위한 데이터 중복제거 및 재활용 기법)

  • Kim, Ju-Kyeong;Lee, Seung-Kyu;Kim, Deok-Hwan
    • Journal of the Institute of Electronics and Information Engineers / v.49 no.12 / pp.149-155 / 2012
  • An SSD is a storage device with a high-performance controller and cache buffer, consisting of many NAND flash memories. Because NAND flash memory does not support in-place updates, valid pages are invalidated when update and erase operations are issued by the file system, and the invalid pages are later completely deleted via garbage collection. However, garbage collection performs many long-latency erase operations, which reduces I/O performance and increases wear in the SSD. In this paper, we propose a new method that de-duplicates valid data and recycles invalid data, thereby improving the de-duplication ratio. By reducing the number of writes and garbage collections, the method increases I/O performance and decreases wear in the SSD. Experimental results show that it can reduce the number of garbage collections by up to 20% and I/O latency by 9% compared with the general case.
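The de-duplication step can be illustrated with a generic content-addressed write path, where identical blocks are detected by fingerprint and stored only once; the paper's recycling of invalid pages is not modeled in this sketch:

```python
import hashlib

def write_block(storage, refcount, data):
    # Content-addressed write: fingerprint the block and store it only
    # once; duplicates just bump a reference count, so no extra flash
    # write (and no future garbage-collection work) is generated.
    fp = hashlib.sha256(data).hexdigest()
    if fp in storage:
        refcount[fp] += 1
    else:
        storage[fp] = data
        refcount[fp] = 1
    return fp
```

Writing the same block twice produces one stored copy with a reference count of two, which is the mechanism behind the reduced write and garbage-collection counts reported above.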

A Study on Restoration and Utilization of Recorded Archaeological Data (기록화된 고고자료의 복원과 활용방안에 대한 연구)

  • Heo, Ui-Haeng
    • Journal of Digital Contents Society / v.18 no.4 / pp.723-731 / 2017
  • The restoration of archaeological data was carried out using photographs and drawings left as past records; it can be divided into ruins and artifacts. Ruins were restored by modeling the individual parts recorded in the photographs, aligning and synthesizing them, and reconstructing them three-dimensionally as one object. Artifacts were restored from both photographs and drawings. After modeling from the photographs, the original form can be restored either by modifying the texture image of a damaged part of the modeled artifact or by modeling and synthesizing the deleted part of the artifact. Restoration from drawings was carried out by three-dimensional modeling and reconstruction through real mapping of the images. The reconstructed archaeological data can be used in various ways; in particular, it is possible to verify and compare the results of numerical analysis and interpretation of past 2D data, and to provide more accurate analysis plans in the future.

Genome Analysis Pipeline I/O Workload Analysis (유전체 분석 파이프라인의 I/O 워크로드 분석)

  • Lim, Kyeongyeol;Kim, Dongoh;Kim, Hongyeon;Park, Geehan;Choi, Minseok;Won, Youjip
    • KIPS Transactions on Software and Data Engineering / v.2 no.2 / pp.123-130 / 2013
  • As the size of genomic data increases rapidly, so does the need for high-performance computing systems to process and store it. In this paper, we captured the I/O trace of a system that analyzed 500 million sequence reads in a genome analysis pipeline over 86 hours. The workload created 630 files totaling 1,031.7 GByte and deleted 535 files totaling 91.4 GByte. Notably, 80% of all accesses were to only two of the 654 files in the system. Read and write requests in the workload were larger than 512 KByte and 1 MByte, respectively. The majority of read operations show random patterns, while write operations show sequential patterns. The throughput and bandwidth observed differed across processing phases.
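The access-concentration observation (80% of accesses hitting two files) can be computed from such a trace with a small helper; the trace format and file names below are hypothetical:

```python
from collections import Counter

def access_concentration(trace, top_n=2):
    # trace: list of (operation, filename) events from an I/O trace.
    # Returns the fraction of all accesses that hit the top_n
    # most-accessed files.
    counts = Counter(name for _, name in trace)
    top = counts.most_common(top_n)
    return sum(c for _, c in top) / len(trace)
```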

Face Recognition Using Automatic Face Enrollment and Update for Access Control in Apartment Building Entrance (아파트 공동현관 출입 통제를 위한 자동 얼굴 등록 및 갱신 기반 얼굴인식)

  • Lee, Seung Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.9 / pp.1152-1157 / 2021
  • This paper proposes a face recognition method for access control in apartment buildings. Unlike most existing face recognition methods, the proposed one does not require any manual face enrollment. When a person exits through the main entrance door, his or her face data (i.e., face image and face feature) are automatically extracted from the captured video and registered in the database. When the person returns to the building, the face data are extracted again and the face feature is compared with the features registered in the database. If a matching person exists, the entrance door opens and access is allowed. The matching person's face data are then immediately deleted, so the database always holds the latest face data of outgoing persons, and a higher recognition accuracy can be expected. To verify the feasibility of the proposed method, Python-based face recognition was implemented using a cloud service provided by a web portal.
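The enroll-on-exit / delete-on-entry cycle can be sketched with a hypothetical in-memory feature database and cosine-similarity matching; the paper's actual matcher, feature extractor, and threshold are not specified in the abstract:

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u))
                  * math.sqrt(sum(b * b for b in v)))

def try_enter(db, feature, threshold=0.8):
    # db: {person_id: enrolled_feature}. On a match the door opens and
    # the enrolled feature is deleted, mirroring the paper's
    # enroll-on-exit / delete-on-entry cycle. Threshold is hypothetical.
    for pid, enrolled in list(db.items()):
        if cosine(feature, enrolled) >= threshold:
            del db[pid]
            return pid
    return None
```

A feature close to an enrolled one matches and is removed from the database; a dissimilar feature is rejected and the database is left unchanged.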