• Title/Abstract/Keyword: AI Privacy Policy

18 search results (processing time: 0.024 seconds)

A Legal and Technical Analysis for Establishing Privacy Policies on Artificial Intelligence Systems (인공지능 시스템에서 개인정보 처리방침 수립을 위한 법적·기술적 요구사항 분석 연구)

  • Ju-Hyun Jeon;Kyung-Hyune Rhee
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.34 no.5
    • /
    • pp.1115-1133
    • /
    • 2024
  • With the rapid development of AI technology, AI systems increasingly collect, process, and utilize personal information in large quantities. As a result, transparency and accountability in personal information processing by AI systems, guarantees of data subjects' rights, and minimization of the risk of personal information infringement are becoming important issues. However, existing privacy policies disclose personal information processing only in general terms, and there is no privacy policy tailored to AI systems. To address these problems, and in response to the privacy policy evaluation system introduced by the revision of the Personal Information Protection Act, we propose a new way to establish and disclose an AI system privacy policy covering personal information in the design, development, and operation of AI systems. This study is expected to play a complementary role to the provisions of the current Personal Information Protection Act on data subjects' rights to request an explanation of, and to refuse, automated decisions.

Effects on the continuous use intention of AI-based voice assistant services: Focusing on the interaction between trust in AI and privacy concerns (인공지능 기반 음성비서 서비스의 지속이용 의도에 미치는 영향: 인공지능에 대한 신뢰와 프라이버시 염려의 상호작용을 중심으로)

  • Jang, Changki;Heo, Deokwon;Sung, WookJoon
    • Informatization Policy
    • /
    • v.30 no.2
    • /
    • pp.22-45
    • /
    • 2023
  • In research on the use of AI-based voice assistant services, problems related to users' trust and privacy protection arising from the experience of using the services are constantly being raised. The purpose of this study was to empirically investigate the effects of individual trust in AI and online privacy concerns on the continued use of AI-based voice assistants, and specifically the impact of their interaction. Question items were constructed based on previous studies, and an online survey was conducted among 405 respondents. The effects of users' trust in AI and privacy concerns on the adoption and continuous-use intention of AI-based voice assistant services were analyzed using the Heckman selection model. As the main findings of the study, first, AI-based voice assistant service usage behavior was positively influenced by factors that promote technology acceptance, such as perceived usefulness, perceived ease of use, and social influence. Second, trust in AI had no statistically significant effect on usage behavior but had a positive effect on continuous-use intention. Third, the level of privacy concern was confirmed to suppress continuous-use intention through its interaction with trust in AI. These results suggest the need to strengthen the user experience by collecting user opinions and taking action to improve trust in the technology and alleviate users' privacy concerns, as part of governance for realizing digital government. When introducing AI-based policy services, the scope of application of AI technology should be disclosed transparently through a public deliberation process, and a system that can track and evaluate privacy issues ex post, together with algorithms that take privacy protection into account, is required.
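The Heckman selection model used in the study above corrects for the fact that continuous-use intention is observed only for respondents who adopted the service in the first place. A minimal two-step sketch on synthetic data (variable names and coefficients are illustrative, not the study's survey items; the study additionally models privacy concern and its interaction with trust, which is omitted here for brevity):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 405  # matches the study's respondent count

# Illustrative covariates (not the study's actual survey items)
usefulness = rng.normal(size=n)
ease = rng.normal(size=n)
trust = rng.normal(size=n)

# Correlated errors, so adoption and intention share unobservables
e = rng.normal(size=(n, 2))
e_sel, e_out = e[:, 0], 0.5 * e[:, 0] + np.sqrt(0.75) * e[:, 1]

# Selection: whether a respondent adopts the voice assistant at all
adopted = (0.8 * usefulness + 0.5 * ease + e_sel > 0).astype(float)
# Outcome: continuous-use intention, observed for adopters only
y = 1.0 + 0.6 * trust + e_out

# Step 1: probit MLE for the selection equation on the full sample
X_sel = np.column_stack([np.ones(n), usefulness, ease])

def neg_loglik(beta):
    p = np.clip(norm.cdf(X_sel @ beta), 1e-10, 1 - 1e-10)
    return -np.sum(adopted * np.log(p) + (1 - adopted) * np.log(1 - p))

beta = minimize(neg_loglik, np.zeros(3)).x
mills = norm.pdf(X_sel @ beta) / norm.cdf(X_sel @ beta)  # inverse Mills ratio

# Step 2: OLS of intention on trust plus the Mills ratio, adopters only
m = adopted == 1
X_out = np.column_stack([np.ones(int(m.sum())), trust[m], mills[m]])
coef, *_ = np.linalg.lstsq(X_out, y[m], rcond=None)
print(coef)  # [intercept, trust effect, selection-correction (lambda)]
```

Including the inverse Mills ratio in the second stage is what removes the bias that ordinary regression on adopters alone would suffer.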

A Study on the Process of Policy Change of Hyper-scale Artificial Intelligence: Focusing on the ACF (초거대 인공지능 정책 변동과정에 관한 연구 : 옹호연합모형을 중심으로)

  • Seok Won, Choi;Joo Yeoun, Lee
    • Journal of the Korean Society of Systems Engineering
    • /
    • v.18 no.2
    • /
    • pp.11-23
    • /
    • 2022
  • Although artificial intelligence (AI) is a key technology of the digital transformation among emerging technologies, there are concerns about its use, so many countries have been trying to set up proper regulatory systems. This study analyzes the AI regulation policies of the USA, the EU, and Korea, with the aim of establishing and improving proper AI policies and strategies in Korea. In the USA, the establishment of a code of ethics for the use of AI is led by the private sector. Europe, on the other hand, is strengthening the competitiveness of its AI industry by consolidating regulations dispersed across EU members. Korea has also prepared and promoted policies for AI ethics, copyright, and privacy protection at the national level, and is trying to shift to a negative regulation system and improve regulations to close the gap with the leading countries in AI. Moreover, this study analyzed the course of AI regulation policy change using Sabatier's Advocacy Coalition Framework (ACF). Based on this analysis, it proposes hyper-scale AI regulation policy recommendations for improving competitiveness and commercialization in Korea. The study is significant in that it can help increase the predictability of policy makers facing uncertainty and ambiguity in establishing regulatory policies caused by the emergence of hyper-scale artificial intelligence.

Perceptions of Benefits and Risks of AI, Attitudes toward AI, and Support for AI Policies (AI의 혜택 및 위험성 인식과 AI에 대한 태도, 정책 지지의 관계)

  • Lee, Jayeon
    • The Journal of the Korea Contents Association
    • /
    • v.21 no.4
    • /
    • pp.193-204
    • /
    • 2021
  • Based on risk-benefit theory, this study examined a structural equation model accounting for the mechanisms through which affective perceptions of AI predict individuals' support for the government's AI policies. Four perceived characteristics of AI (usefulness, entertainment value, privacy concern, and threat of human replacement) were investigated in relation to perceived benefits/risks, attitudes toward AI, and AI policy support, based on a nationwide South Korean sample (N=352). The hypothesized model was well supported by the data: perceived usefulness was a strong predictor of perceived benefit, which in turn predicted attitude and support. Perceived benefit and attitude played significant roles as mediators. Perceived entertainment value, along with perceived usefulness and privacy concern, predicted attitude but not perceived benefit. Neither attitude nor support was significantly associated with perceived risk, which was predicted by privacy concern. Theoretical and practical implications of the results are discussed.

Safety Verification Techniques of Privacy Policy Using GPT (GPT를 활용한 개인정보 처리방침 안전성 검증 기법)

  • Hye-Yeon Shim;MinSeo Kweun;DaYoung Yoon;JiYoung Seo;Il-Gu Lee
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.34 no.2
    • /
    • pp.207-216
    • /
    • 2024
  • As big data has accumulated with the Fourth Industrial Revolution, personalized services have increased rapidly. As a result, the amount of personal information collected by online services has grown, and concerns about personal information leakage and privacy infringement have increased. Online service providers publish privacy policies to address these concerns, but because the policies are long and complex, it is difficult for users to identify risky items directly, and privacy policies are often misused. A method that can automatically check whether a privacy policy is safe is therefore needed. However, conventional blacklist- and machine-learning-based safety verification techniques for privacy policies are difficult to extend and have low accessibility. In this paper, to solve these problems, we propose a safety verification technique for privacy policies using the GPT-3.5 API, a generative artificial intelligence. Classification can be performed even in a new environment, showing that the general public, without expertise, can easily inspect a privacy policy. In the experiment, we measured how accurately the blacklist-based and GPT-based techniques classify safe and unsafe sentences, and the time spent on classification. According to the experimental results, the proposed technique showed 10.34% higher accuracy on average than the conventional blacklist-based sentence safety verification technique.
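The core of such a pipeline is sending each policy sentence to GPT-3.5 and mapping the reply to a safe/unsafe label. A minimal sketch (the prompt wording and the helper names are illustrative assumptions, not the authors' actual implementation; `classify_sentence` requires an `OPENAI_API_KEY` and is not executed here):

```python
def build_prompt(sentence: str) -> str:
    """Prompt asking the model to judge one privacy-policy sentence.

    The wording is illustrative; the paper's actual prompt is not reproduced.
    """
    return (
        "You are auditing a privacy policy. Classify the following sentence "
        "as SAFE (respects user rights) or UNSAFE (risky for the user, e.g. "
        "broad third-party sharing, indefinite retention).\n"
        f"Sentence: {sentence}\n"
        "Answer with exactly one word: SAFE or UNSAFE."
    )

def parse_label(model_output: str) -> str:
    """Map the model's free-text reply onto a binary label."""
    text = model_output.strip().upper()
    return "UNSAFE" if "UNSAFE" in text else "SAFE"

def classify_sentence(sentence: str, model: str = "gpt-3.5-turbo") -> str:
    """Call the GPT API for one sentence (requires OPENAI_API_KEY)."""
    from openai import OpenAI  # imported lazily so the helpers work offline
    client = OpenAI()
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_prompt(sentence)}],
        temperature=0,  # deterministic labels for auditing
    )
    return parse_label(resp.choices[0].message.content)
```

Unlike a blacklist, no keyword list has to be maintained: new risk patterns are handled by rewording the prompt alone, which is what gives the approach its extensibility.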

Current Status and Solutions for Ethical Issues of AI in the Financial Sector (금융분야 AI의 윤리적 문제 현황과 해결방안)

  • Lee, Su Ryeon;Lee, Hyun Jung;Lee, Aram;Choi, Eun Jung
    • Review of KIISC
    • /
    • v.32 no.3
    • /
    • pp.57-64
    • /
    • 2022
  • As the use of AI becomes more widespread in our society, social demand for trustworthy AI has also grown. In particular, the recent conversational AI 'Iruda' incident heated up the discussion of AI ethics. In the financial sector, AI is used in various ways such as robo-advisors and insurance underwriting, but AI ethics issues have become a major obstacle to AI adoption. This paper divides the ethical problems that can arise from AI according to the application domain and the data analysis pipeline. We classify ethics issues by financial AI technology area, present ethics cases for each area, and propose countermeasures for each class of issue, introducing responses overseas and in Korea. Through this study, we expect to raise awareness of ethical issues alongside the development of financial AI technology. We hope that financial AI will develop in harmony with AI ethics, and that keeping AI ethical issues in mind when establishing financial AI policies will reduce problems derived from non-compliance with AI ethical norms, such as discrimination and personal information leakage, and further invigorate the use of AI in finance.

Research on the evaluation model for the impact of AI services

  • Soonduck Yoo
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.15 no.3
    • /
    • pp.191-202
    • /
    • 2023
  • This study proposes a framework for evaluating the impact of artificial intelligence (AI) services, based on the concept of AI service impact; it suggests a model for evaluating this impact and identifies relevant factors and measurement approaches for each item of the model. The study classifies the impact of AI services into five categories: ethics, safety and reliability, compliance, user rights, and environmental friendliness. It discusses these five categories from a broad perspective and provides 21 detailed factors for evaluating each category. In terms of ethics, the study introduces three additional factors (accessibility, openness, and fairness) to the ten items initially developed by KISDI. In the safety and reliability category, the study excludes factors such as dependability, policy, compliance, and awareness improvement, as they can be better addressed from a technical perspective. The compliance category includes factors such as human rights protection, privacy protection, non-infringement, publicness, accountability, safety, transparency, policy compliance, and explainability. For the user rights category, the study excludes factors such as publicness, data management, policy compliance, awareness improvement, recoverability, openness, and accuracy. The environmental friendliness category encompasses diversity, publicness, dependability, transparency, awareness improvement, recoverability, and openness. This study lays the foundation for further related research and contributes to the establishment of relevant policies by providing a model for evaluating the impact of AI services. Future research is required to assess the validity of the developed indicators and, based on expert evaluations, to provide specific evaluation items for practical use.
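The five categories above map naturally onto a small scoring structure. A minimal sketch (the category names follow the paper; the factor subsets, the example scores, and the equal-weight averaging are illustrative assumptions, not the paper's measurement approach):

```python
from dataclasses import dataclass, field

@dataclass
class Category:
    """One evaluation category with factor scores in [0, 1]."""
    name: str
    factors: dict[str, float] = field(default_factory=dict)

    def score(self) -> float:
        # Unweighted mean of factor scores (aggregation rule is an assumption)
        return sum(self.factors.values()) / len(self.factors) if self.factors else 0.0

def impact_score(categories: list[Category]) -> float:
    """Overall AI-service impact as the mean of the category scores."""
    return sum(c.score() for c in categories) / len(categories)

# The paper's five categories, with a few of its example factors each
model = [
    Category("ethics", {"accessibility": 0.8, "openness": 0.7, "fairness": 0.9}),
    Category("safety and reliability", {"safety": 0.85, "transparency": 0.6}),
    Category("compliance", {"privacy protection": 0.9, "accountability": 0.7}),
    Category("user rights", {"explainability": 0.75}),
    Category("environmental friendliness", {"diversity": 0.65, "recoverability": 0.7}),
]
print(round(impact_score(model), 3))  # → 0.75
```

In practice each factor score would come from the measurement approaches the paper identifies, and the categories could carry unequal weights.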

A Study on Smart Energy's Privacy Policy (스마트 에너지 개인정보 보호정책에 대한 연구)

  • Noh, Jong-ho;Kwon, Hun-yeong
    • Convergence Security Journal
    • /
    • v.18 no.2
    • /
    • pp.3-10
    • /
    • 2018
  • The existing smart grid, centered on the power grid, is rapidly expanding into new and renewable energy such as heat and gas, collectively referred to as smart energy. Smart energy interacts with electric energy: connected over wired/wireless networks through IoT sensors and analyzed with AI, it is rapidly expanding its ecosystem across various energy carriers and customers. However, IoT-based smart energy lacks the technological and institutional preparation for security, compared with the efforts of the government and business operators to activate the market. In this study, we present a smart energy privacy policy in terms of the value system (CPND) of convergence ICT.
Meta's Metaverse Platform Design in the Pre-launch and Ignition Life Stage

  • Song, Minzheong
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.14 no.4
    • /
    • pp.121-131
    • /
    • 2022
  • We look at the initial stage of Meta (previously Facebook)'s new metaverse platform and investigate its platform design in the pre-launch and ignition life stage. Following the theoretical logic of the Rocket Model (RM), the results reveal that Meta first focused on investing in key content developers over the last three years (2019~2021), acquiring virtual reality (VR), video, and music content firms and offering 'Spark AR', a production support platform for augmented reality (AR) content, to attract high-potential developers and users. In terms of the three matching criteria, Meta develops AI-powered translation software, partners with Microsoft (MS) for cloud computing and AI, and develops MyoSuite, an AI platform for realistic avatars. In the 'connect' function, Meta curates game concepts submitted by game developers, welcomes other game- and SNS-based metaverse apps, and expands Horizon Worlds (HW) from VR devices to PCs and mobile devices. In the 'transact' function, Meta offers the 'HW Creator Funding' program for the metaverse and launches the first commercialized Meta Avatar Store on its conventional SNS and messaging apps, inviting all fashion creators to design and sell clothing in the store. Meta also launches an initial test of non-fungible token (NFT) display on Instagram and expands it to Facebook in the US. Lastly, regarding optimization, especially in the face of recent data privacy issues that have adversely affected corporate key performance indicators (KPIs), Meta assures users that it will not collect any new data, will make its privacy policy easier to understand, and will update its terms of service to be more user friendly.

Artificial Intelligence and Blockchain Convergence Trend and Policy Improvement Plan (인공지능과 블록체인 융합 동향 및 정책 개선방안)

  • Yang, Hee-Tae
    • Informatization Policy
    • /
    • v.27 no.2
    • /
    • pp.3-19
    • /
    • 2020
  • Artificial intelligence (AI) and blockchain are developing into the core technologies leading the Fourth Industrial Revolution. However, AI still shows limitations in securing and verifying data and in explaining the evidence for its results, and blockchain has drawbacks such as excessive energy consumption and a lack of flexibility in data management. This study analyzed the technological limitations of AI and blockchain and the convergence trends aimed at overcoming them, and finally suggested ways to improve Korea's related policies. Specifically, in terms of reinforcing R&D, we proposed 1) mid- and long-term AI/blockchain convergence research at the national level and 2) development of a blockchain-based AI data platform. In terms of creating an innovative ecosystem, we suggested 3) development of AI/blockchain convergence applications by industry and 4) start-up support for developing AI/blockchain convergence business models. Lastly, in terms of improving the legal system, we argued that 5) widening the application of regulatory sandboxes and 6) improving regulations related to privacy protection are necessary.