• Title/Summary/Keyword: sensitive


Helping Health Care Providers Recognize and Respond to Sensitive Issues

  • Choi, Hee-Seung;Mayahara, Masako;Rasamimari, Amnuayporn;Norr, Kathleen F.
    • Perspectives in Nursing Science / v.8 no.2 / pp.121-128 / 2011
  • Sensitive issues are both common and problematic for health care providers because they may interfere with the future provider-client relationship and with effective care. Most current training for providers focuses on a particular issue, but this is inadequate because many issues may be sensitive, and which issues will be sensitive is unpredictable. We argue that issues become sensitive when they activate one or more of three common triggers: fear, stigma, and taboo. A cycle of negative internal and interpersonal responses to the sensitive issue often leads to unresolved health issues for clients and to stress and feelings of inadequacy for providers. We recommend integrated pre-service and in-service skill building to help individual health care providers respond appropriately to a wide variety of sensitive issues. We also identify specific policies and procedures that strengthen organizational support for caregivers so that providers can address sensitive issues effectively with their clients.


A Strategy Study on Sensitive Information Filtering for Personal Information Protect in Big Data Analyze

  • Koo, Gun-Seo
    • Journal of the Korea Society of Computer and Information / v.22 no.12 / pp.101-108 / 2017
  • This study proposes a system that filters sensitive data entered during big data analysis of sources such as SNS and blogs. Beyond directly identifying personal information, there is information that reveals an individual's religious affiliation, feelings, thoughts, or beliefs; we define such information as sensitive information. To protect it, Article 23 of the Privacy Act contains clauses on its collection and use. The proposed system is structured in two stages, a Big Data Processing Process and a Sensitive Information Filtering Process, with big data processing applied in four steps during collection. The Big Data Processing Process comprises data collection and storage, lexical analysis, parsing, and semantic analysis. The Sensitive Information Filtering Process comprises sensitive-information questionnaires, establishing a sensitive information DB, qualifying information, filtering sensitive information, and reliability analysis. In the experiment, ontology generation was achieved for 7,553 of 8,978 items (84.13%), and it is a significant result that the sensitive information filtering phase achieved 98%.
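As an illustration of the two-stage structure this abstract describes, here is a minimal Python sketch: a processing stage that tokenizes collected posts, and a filtering stage that masks tokens found in a sensitive-information dictionary. The tokenizer, the tiny sensitive-term DB, and all names are hypothetical stand-ins, not the authors' implementation.

```python
# Stage 1 (processing): tokenize a collected post. Stage 2 (filtering): mask
# tokens listed in a sensitive-information DB and report the masking ratio.
# The term list below is an illustrative assumption.
import re

SENSITIVE_DB = {"church", "depression", "union"}  # hypothetical sensitive-term DB

def process(post: str) -> list[str]:
    """Stage 1: simple lexical analysis (lowercase word tokens)."""
    return re.findall(r"[a-z']+", post.lower())

def filter_sensitive(tokens: list[str]) -> tuple[list[str], float]:
    """Stage 2: mask sensitive tokens and return the fraction masked."""
    masked = [("*" * len(t)) if t in SENSITIVE_DB else t for t in tokens]
    hits = sum(1 for t in tokens if t in SENSITIVE_DB)
    return masked, hits / max(len(tokens), 1)

tokens = process("She attends a church and wrote about her depression")
masked, ratio = filter_sensitive(tokens)
print(masked, round(ratio, 2))
```

A production pipeline would replace the word-set lookup with the morphological analysis and ontology matching the paper describes, but the two-stage shape is the same.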

Learning fair prediction models with an imputed sensitive variable: Empirical studies

  • Kim, Yongdai;Jeong, Hwichang
    • Communications for Statistical Applications and Methods / v.29 no.2 / pp.251-261 / 2022
  • As AI has a wide range of influence on human social life, issues of the transparency and ethics of AI are emerging. In particular, it is widely known that, because data contain historical bias against ethical or regulatory frameworks for fairness, AI models trained on such biased data can also impose bias or unfairness on a certain sensitive group (e.g., non-white people, women). Demographic disparities due to AI, which refer to socially unacceptable bias in which an AI model favors certain groups (e.g., white people, men) over other groups (e.g., black people, women), have been observed frequently in many applications of AI, and many recent studies have developed AI algorithms that remove or alleviate such demographic disparities in trained models. In this paper, we consider the problem of using the information in the sensitive variable for fair prediction when using the sensitive variable as an input variable is prohibited by laws or regulations to avoid unfairness. As a way of reflecting the information in the sensitive variable in prediction, we consider a two-stage procedure. First, the sensitive variable is fully included in the learning phase to obtain a prediction model that depends on the sensitive variable; then an imputed sensitive variable is used in the prediction phase. The aim of this paper is to evaluate this procedure by analyzing several benchmark datasets. We illustrate that using an imputed sensitive variable helps improve prediction accuracy without hampering the degree of fairness much.
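The two-stage procedure can be sketched on synthetic data: (1) train a predictor that uses the sensitive variable S, (2) train an imputer for S from the other features, (3) substitute the imputed S at prediction time. The least-squares learner, the data-generating process, and all names below are illustrative assumptions, not the paper's benchmarks or models.

```python
# Two-stage fair prediction sketch: the learning phase sees the true sensitive
# variable S; the prediction phase only sees an imputation of S from X.
import numpy as np

def fit_linear(A, t):
    """Least-squares linear classifier (a stand-in for any learner)."""
    w, *_ = np.linalg.lstsq(np.column_stack([A, np.ones(len(A))]), t, rcond=None)
    return w

def predict(A, w):
    return (np.column_stack([A, np.ones(len(A))]) @ w > 0.5).astype(int)

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))                              # non-sensitive features
S = (X[:, 0] + 0.3 * rng.normal(size=n) > 0).astype(int)  # sensitive variable
y = (X @ np.array([1.0, -0.5, 0.2]) + 0.8 * S > 0).astype(int)

w_clf = fit_linear(np.column_stack([X, S]), y)  # stage 1: may legally use S
w_imp = fit_linear(X, S)                        # imputer for S from X
S_hat = predict(X, w_imp)                       # stage 2: impute S ...
y_hat = predict(np.column_stack([X, S_hat]), w_clf)  # ... and predict with it
print("accuracy with imputed S:", (y_hat == y).mean())
```

The point of the construction is that S never needs to be supplied at prediction time, which is what the legal restriction forbids.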

Comparison of Endonuclease-Sensitive Sites by T4 Endonuclease V and UvrABC Nuclease Treatments Followed by Formamide or Sodium Hydroxide Denaturation

  • Chang, Yung-Jin
    • BMB Reports / v.31 no.4 / pp.405-408 / 1998
  • Endonuclease-sensitive sites detected by T4 endonuclease V or UvrABC nuclease treatment were compared in the dihydrofolate reductase gene of UV-irradiated Chinese hamster ovary B-11 cells. The number of endonuclease-sensitive sites detected by T4 endonuclease V treatment followed by NaOH denaturation was twice that detected with formamide denaturation. Repeated treatment of damaged genomic DNA with T4 endonuclease V resulted in no further increase in the number of endonuclease-sensitive sites detected. The numbers of endonuclease-sensitive sites detected by UvrABC nuclease were similar under both denaturation conditions. Sequential treatment with the two endonucleases using formamide denaturation yielded twice the number of endonuclease-sensitive sites detected by treatment with each nuclease alone. Given the lack of AP endonuclease activity, these results suggest the presence of T4 endonuclease V-sensitive sites that could be complemented by alkaline gel separation or by UvrABC nuclease treatment.


A New Test Algorithm for Bit-Line Sensitive Faults in High-Density Memories (고집적 메모리에서 BLSFs(Bit-Line Sensitive Faults)를 위한 새로운 테스트 알고리즘)

  • Kang, Dong-Chual;Cho, Sang-Bock
    • Journal of IKEEE / v.5 no.1 s.8 / pp.43-51 / 2001
  • As memory density increases, unwanted interference between cells and coupling noise between bit-lines increase. Testing high-density memories for a high degree of fault coverage can require either a relatively large number of test vectors or a significant amount of additional test circuitry. Conventional test algorithms have so far focused on faults between neighboring cells, not neighboring bit-lines. In this paper, a new test algorithm for neighborhood bit-line sensitive faults (NBLSFs), based on neighborhood pattern sensitive faults (NPSFs), is proposed. The proposed algorithm requires no additional circuitry. Instead of the conventional five-cell or nine-cell physical neighborhood layouts for testing memory cells, a three-cell layout, the minimum size for NBLSF detection, is used. Furthermore, to account for the maximum coupling noise from neighboring bit-lines, a refresh operation is added after the write operation in the test procedure (i.e., write → refresh → read). We also show that the proposed algorithm can detect stuck-at faults, transition faults, coupling faults, conventional pattern sensitive faults, and neighborhood bit-line sensitive faults.


Improvement of Test Method for t-ws Fault Detection (t-ws 고장 검출을 위한 테스트 방법의 개선)

  • 김철운;김영민;김태성
    • Electrical & Electronic Materials / v.10 no.4 / pp.349-354 / 1997
  • This paper studies an improved test method for t-weight sensitive fault (t-wsf) detection. Advances in RAM fabrication technology have increased device density on chips and decreased line widths in VLSI, while chip size and complexity have shrunk and cost has stayed at the present level or even fallen. Test patterns for RAM fault detection, which tend to be complicated, therefore need to be simplified first of all. The new test method uses a Local Lower Bound (LLB) with a beginning pattern of 0(1) and a finishing pattern of 0(1) in memory. The proposed test patterns can detect all RAM faults including stuck-at faults and coupling faults. The number of operations is 6N for 1-weight sensitive faults, 9.5N for 2-weight sensitive faults, 7N for 3-weight sensitive faults, and 3N for 4-weight sensitive faults. This technique reduces the number of test patterns for memory cells, saving considerable test time, and the patterns can detect all static weight sensitive faults and pattern sensitive faults in RAM.


Performance Analysis of a Dynamic Priority Control Scheme for Delay-Sensitive Traffic (음성 트래픽을 위한 동적우선권제어방식의 성능분석)

  • 김도규;김용규;조석팔
    • The Journal of the Acoustical Society of Korea / v.19 no.8 / pp.3-11 / 2000
  • This paper analyzes the performance of the dynamic priority control function (DPCF) of a threshold-based Bernoulli priority jump (TBPJ) scheme, a general state-dependent Bernoulli scheduling scheme, applied to a system carrying loss-sensitive and delay-sensitive traffic (voice and data, respectively). Under the TBPJ scheme, the first packet in the loss-sensitive traffic buffer moves into the delay-sensitive traffic buffer with Bernoulli probability p, according to system states defined by the buffer thresholds and the number of packets waiting for scheduling. Performance analysis shows that the TBPJ scheme yields a large performance gain for the delay-sensitive traffic without degrading performance for the loss-sensitive traffic, and that it outperforms the HOL scheme.
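The TBPJ control step described above can be sketched in a few lines: when the loss-sensitive buffer exceeds its threshold, its head packet jumps into the delay-sensitive buffer with Bernoulli probability p, and the delay-sensitive buffer is served first. The queue contents, threshold, and probability below are illustrative assumptions, not the paper's traffic model.

```python
# One scheduling slot of a threshold-based Bernoulli priority jump (TBPJ):
# a state-dependent Bernoulli jump followed by priority service.
import random
from collections import deque

def tbpj_step(loss_q, delay_q, threshold, p, rng):
    if len(loss_q) > threshold and rng.random() < p:
        delay_q.append(loss_q.popleft())  # head packet jumps buffers
    if delay_q:
        return delay_q.popleft()          # delay-sensitive traffic served first
    if loss_q:
        return loss_q.popleft()
    return None                           # both buffers empty

rng = random.Random(1)
loss_q = deque(f"L{i}" for i in range(6))   # loss-sensitive backlog
delay_q = deque(["D0"])                      # delay-sensitive backlog
served = [tbpj_step(loss_q, delay_q, threshold=3, p=0.5, rng=rng)
          for _ in range(5)]
print(served)
```

The jump probability p and the threshold are the control knobs the paper's DPCF tunes against the buffer state.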


Hiding Sensitive Frequent Itemsets by a Border-Based Approach

  • Sun, Xingzhi;Yu, Philip S.
    • Journal of Computing Science and Engineering / v.1 no.1 / pp.74-94 / 2007
  • Nowadays, sharing data among organizations is often required during business collaboration. Data mining technology has enabled efficient extraction of knowledge from large databases. This, however, increases the risk of disclosing sensitive knowledge when the database is released to other parties. To address this privacy issue, one may sanitize the original database so that the sensitive knowledge is hidden. The challenge is to minimize the side effect on the quality of the sanitized database so that non-sensitive knowledge can still be mined. In this paper, we study such a problem in the context of hiding sensitive frequent itemsets by judiciously modifying the transactions in the database. Unlike previous work, we consider the quality of the sanitized database, especially the preservation of non-sensitive frequent itemsets. To preserve them, we propose a border-based approach to efficiently evaluate the impact of any modification to the database during the hiding process. The quality of the database can be well maintained by greedily selecting the modifications with minimal side effect. Experimental results are also reported to show the effectiveness of the proposed approach.
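A much-simplified sketch of itemset hiding by sanitization follows: the support of a sensitive itemset is pushed below the mining threshold by deleting one of its items from supporting transactions. The greedy victim choice here (the item appearing in the fewest non-sensitive itemsets) is a crude stand-in for the paper's border-based impact evaluation, and all data are illustrative.

```python
# Hide a sensitive frequent itemset by lowering its support below minsup,
# while trying to disturb the listed non-sensitive itemsets as little as possible.
def support(db, itemset):
    return sum(1 for t in db if itemset <= t)

def hide(db, sensitive, minsup, nonsensitive):
    # Greedy victim: the sensitive item whose deletion touches the fewest
    # non-sensitive itemsets (sorted() makes the tie-break deterministic).
    victim = min(sorted(sensitive),
                 key=lambda i: sum(i in s for s in nonsensitive))
    for t in db:
        if support(db, sensitive) < minsup:
            break  # sensitive itemset is no longer frequent
        if sensitive <= t:
            t.discard(victim)  # sanitize this supporting transaction
    return db

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "b", "d"}, {"b", "c"}]
hide(db, {"a", "b"}, minsup=2, nonsensitive=[{"b", "c"}, {"a"}])
print(support(db, {"a", "b"}), support(db, {"b", "c"}))
```

After sanitization the sensitive itemset {a, b} falls below the threshold while the non-sensitive itemset {b, c} keeps its original support, which is the side-effect trade-off the border-based method optimizes.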

Analyses of centrifuge modelling for artificially sensitive clay slopes

  • Park, Dong Soon
    • Geomechanics and Engineering / v.16 no.5 / pp.513-525 / 2018
  • The slope stability of sensitive clayey soils is particularly important when they are subjected to strength loss and deformation. Except in cases of progressive failure, for most sensitive and insensitive slopes it is important to review the feasibility of conventional analysis methods based on peak strength, since peak strength governs slope stability before yielding. In this study, as part of an effort to understand the behavior of sensitive clay slopes, a total of 12 centrifuge tests were performed on artificially sensitive and insensitive clay slopes using San Francisco Bay Mud (PI = 50) and Yolo Loam (PI = 10). In terms of slope stability, the results were analyzed using the updated instability factor ($N_I$). $N_I$ computed with the equivalent unit weight required to cause failure is in reasonable agreement with Taylor's chart ($N_I$ ~ 5.5). In terms of dynamic deformation, two-way sliding is shown to be a more accurate approach than conventional one-way sliding; two-way sliding may relate to diffused shear surfaces. The outcome of this study contributes to analyzing the stability and deformation of steep sensitive clay slopes.

Locality-Sensitive Hashing Techniques for Nearest Neighbor Search

  • Lee, Keon Myung
    • International Journal of Fuzzy Logic and Intelligent Systems / v.12 no.4 / pp.300-307 / 2012
  • When the volume of data grows large, even simple tasks can become a significant concern. Nearest neighbor search is such a task: finding, in a data set, the k data points nearest to a query. Locality-sensitive hashing techniques have been developed for approximate but fast nearest neighbor search. This paper introduces the notion of locality-sensitive hashing and surveys locality-sensitive hashing techniques, categorizing them by several criteria, presenting their characteristics, and comparing their performance.
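One family typically covered in such surveys, random-hyperplane hashing (SimHash) for cosine similarity, can be sketched as follows: each bit of the signature records which side of a random hyperplane a vector falls on, so nearby vectors collide in the same bucket far more often than distant ones. The dimensions, number of hyperplanes, and seed below are arbitrary illustrative choices.

```python
# Random-hyperplane locality-sensitive hashing: an 8-bit signature over R^5.
import numpy as np

rng = np.random.default_rng(42)
planes = rng.normal(size=(8, 5))  # 8 random hyperplanes through the origin

def signature(x):
    """8-bit hash: which side of each hyperplane x falls on."""
    return tuple((planes @ x > 0).astype(int))

q = rng.normal(size=5)
near = q + 1e-6 * rng.normal(size=5)  # tiny perturbation of q
far = -q                              # opposite direction to q
print(signature(q) == signature(near), signature(q) == signature(far))
```

In a full index, points sharing a signature land in the same bucket, and several independent hash tables are used so that true near neighbors are retrieved with high probability while most of the data set is never examined.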