• Title/Summary/Keyword: 사전 기반 모델 (dictionary-based model)


A study on the selection of the target scope for destruction of personal credit information of customers whose financial transaction effect has ended (금융거래 효과가 종료된 고객의 개인신용정보 파기 대상 범위 선정에 관한 연구)

  • Baek, Song-Yi;Lim, Young-Bin;Lee, Chang-Gil;Chun, Sam-Hyun
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.22 no.3
    • /
    • pp.163-169
    • /
    • 2022
  • Under the Credit Information Act, to protect customer information, personal credit information is destroyed or stored separately in two stages, depending on the period elapsed after the financial transaction relationship has ended. However, depending on the nature of the financial product and transaction, the personal credit information of customers whose financial transactions have ended cannot always be destroyed in bulk at termination. To handle this, the IT staff in charge investigate the business relationships by transaction type in advance and develop a computerized program that follows the resulting destruction targets and order. In this process, if the parent-child relations between tables are unclear, the decision depends on the subjective judgment of the IT staff, creating a compliance risk that personal credit information is not destroyed, or that information that should be retained is destroyed instead. Therefore, in this paper, we propose and implement a model and algorithm that identifies the referenced tables from the SQL executed by the program, analyzes the parent-child relations between tables using their primary-key information, and visualizes and objectively selects the scope of data to be destroyed.
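The table-relation step the abstract describes can be illustrated with a small sketch: find the tables a program's SQL references, then infer parent-child links when one table carries another's primary key. This is not the paper's implementation; the regex, the subset heuristic, and the table/column names are hypothetical:

```python
import re

def referenced_tables(sql: str) -> set:
    """Collect table names appearing after FROM/JOIN/UPDATE/INTO."""
    pattern = r"\b(?:FROM|JOIN|UPDATE|INTO)\s+([A-Za-z_][A-Za-z0-9_]*)"
    return set(re.findall(pattern, sql, flags=re.IGNORECASE))

def parent_child_links(primary_keys: dict, columns: dict) -> list:
    """primary_keys: table -> set of PK columns; columns: table -> all columns.
    Treat a table as a child of another when it contains every column of the
    other table's primary key (a common heuristic, not the paper's exact rule)."""
    links = []
    for parent, pk in primary_keys.items():
        for child, cols in columns.items():
            if child != parent and pk and pk <= cols:
                links.append((parent, child))
    return links

sql = "SELECT a.cust_id FROM CUST a JOIN LOAN_ACCT b ON a.cust_id = b.cust_id"
print(referenced_tables(sql))         # {'CUST', 'LOAN_ACCT'} (set order varies)
pks = {"CUST": {"cust_id"}}
cols = {"CUST": {"cust_id", "name"}, "LOAN_ACCT": {"acct_no", "cust_id"}}
print(parent_child_links(pks, cols))  # [('CUST', 'LOAN_ACCT')]
```

The resulting links give the destruction order: child rows referencing a customer's key must be destroyed before (or together with) the parent row.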

Design and Forensic Analysis of a Zero Trust Model for Amazon S3 (Amazon S3 제로 트러스트 모델 설계 및 포렌식 분석)

  • Kyeong-Hyun Cho;Jae-Han Cho;Hyeon-Woo Lee;Jiyeon Kim
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.2
    • /
    • pp.295-303
    • /
    • 2023
  • As the cloud computing market grows, a variety of cloud services are now reliably delivered. Administrative agencies and public institutions of South Korea are transferring all their information systems to cloud systems. It is essential to develop security solutions in advance in order to safely operate cloud services, as protecting cloud services from misuse and malicious access by insiders and outsiders over the Internet is challenging. In this paper, we propose a zero trust model for cloud storage services that store sensitive data. We then verify the effectiveness of the proposed model by operating a cloud storage service. Memory, web, and network forensics are also performed to track access and usage of cloud users depending on the adoption of the zero trust model. As a cloud storage service, we use Amazon S3(Simple Storage Service) and deploy zero trust techniques such as access control lists and key management systems. In order to consider the different types of access to S3, furthermore, we generate service requests inside and outside AWS(Amazon Web Services) and then analyze the results of the zero trust techniques depending on the location of the service request.
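The access-control side of such a model can be sketched as an S3 bucket policy that denies every principal except an explicitly allowed role and rejects uploads not encrypted with a designated KMS key. This is an illustrative configuration in the spirit of zero trust, not the paper's exact deployment; all ARNs and names are hypothetical:

```python
import json

def zero_trust_bucket_policy(bucket: str, allowed_role_arn: str, kms_key_arn: str) -> str:
    """Build a least-privilege S3 bucket policy as a JSON string."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {   # Deny every principal except the allowed role.
                "Sid": "DenyAllButAllowedRole",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
                "Condition": {"ArnNotEquals": {"aws:PrincipalArn": allowed_role_arn}},
            },
            {   # Deny uploads not encrypted with the designated KMS key.
                "Sid": "DenyUnencryptedUploads",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::{bucket}/*",
                "Condition": {
                    "StringNotEquals": {
                        "s3:x-amz-server-side-encryption-aws-kms-key-id": kms_key_arn
                    }
                },
            },
        ],
    }
    return json.dumps(policy, indent=2)

print(zero_trust_bucket_policy(
    "sensitive-data-bucket",
    "arn:aws:iam::123456789012:role/DataAnalyst",
    "arn:aws:kms:ap-northeast-2:123456789012:key/example-key-id",
))
```

Explicit Deny statements like these override any Allow elsewhere, which is why a deny-by-default policy is the usual way to enforce zero-trust access to a bucket.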

An Evaluation Technique for the Path-following Control Performance of Autonomous Surface Ships (자율운항선박의 항로추정성능 평가기법 개발에 관한 연구)

  • Daejeong Kim;ChunKi Lee;Jeongbin Yim
    • Journal of Navigation and Port Research
    • /
    • v.47 no.1
    • /
    • pp.10-17
    • /
    • 2023
  • A series of studies on the development of autonomous surface ships has been carried out at home and abroad. One of the main technologies for the development of autonomous ships is path-following control, which is closely related to securing the safety of ships at sea. In this regard, the path-following performance of an autonomous ship should first be evaluated at the design stage. The main aim of this study was to develop a visual and quantitative evaluation method for the path-following control performance of an autonomous ship at the design stage. This evaluation technique was developed using a computational fluid dynamics (CFD)-based path-following control model together with a line-of-sight (LOS) guidance algorithm. CFD software was utilized to visualize the waves around the ship while it performed path-following control, enabling visual evaluation. In addition, a quantitative evaluation was carried out using the difference between the desired and estimated yaw angles, as well as the distance between the planned and estimated trajectories. The results demonstrated that the ship experienced large deviations from the planned path near the waypoints while changing course. It was also found that the fluid phenomena around the ship could be easily identified by visualizing the flow it generated. The evaluation method proposed in this study is expected to contribute to the visual and quantitative evaluation of the path-following performance of autonomous ships at the design stage.
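The LOS guidance law mentioned above is a standard formulation and can be sketched as follows; this minimal version omits the paper's CFD coupling, and the waypoints and lookahead distance are illustrative:

```python
import math

def los_heading(x, y, wp_prev, wp_next, lookahead):
    """Desired heading (rad) steering the ship toward a lookahead point on
    the path segment between two waypoints."""
    xk, yk = wp_prev
    xk1, yk1 = wp_next
    alpha = math.atan2(yk1 - yk, xk1 - xk)          # path tangent angle
    # signed cross-track error of the ship relative to the path
    e = -(x - xk) * math.sin(alpha) + (y - yk) * math.cos(alpha)
    return alpha + math.atan2(-e, lookahead)        # LOS guidance law

# On the path: head straight along it.
print(los_heading(0.0, 0.0, (0.0, 0.0), (100.0, 0.0), 20.0))             # 0.0
# Offset 10 m to port of an eastward path: corrective (negative) heading.
print(round(los_heading(0.0, 10.0, (0.0, 0.0), (100.0, 0.0), 20.0), 3))  # -0.464
```

The lookahead distance trades responsiveness for smoothness, which is one reason deviations concentrate near waypoints where the course changes.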

What We Want for Virtual Humans: Classification of Consumer Expectation Value on Virtual Influencer by Age Based on Q-methodology (가상 인간에 대한 우리들이 원하는 모습: Q방법론을 기반으로 한 연령대에 따른 소비자 기대 가치 분류)

  • Ji-Chan Yun;Do-Hyung Park
    • Knowledge Management Research
    • /
    • v.24 no.2
    • /
    • pp.137-159
    • /
    • 2023
  • This study focuses on consumers' perceptions of virtual influencers, which many companies have recently used for marketing. It uses the Q methodology to derive what kinds of perceptions consumers hold about virtual influencers, who operate with various appearances, background stories, and worldviews as their components. In addition, we examine how the expected value of virtual influencers differs by age group. To this end, 34 statements were produced through preliminary interviews and literature reviews. The study showed that some consumers preferred appearances similar to humans, despite recognizing that virtual influencers are fictional characters, while other consumers preferred virtual influencers to retain their virtuality and feel like fictional characters, confirming that both opposing groups exist. In addition, consumers expect virtual influencers to show consistency and expertise in the content field they cover, and some consumers dislike an overly commercial appearance. This study should provide implications for companies considering which virtual influencers to use for target customers in their marketing activities.

AI Security Plan for Public Safety Network App Store (재난안전통신망 앱스토어를 위한 AI 보안 방안 마련)

  • Jung, Jae-eun;Ahn, Jung-hyun;Baik, Nam-kyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.458-460
    • /
    • 2021
  • Security responses for the mobile apps of Korea's public safety network remain insufficient across the development, initial construction, demonstration, and initial service stages. The terminals available on the public safety network (PS-LTE) are open, Android-based, dedicated terminals that potentially carry vulnerabilities exploitable by a variety of mobile malware, requiring preemptive responses similar to FirstNet Certified in the U.S. and Google's Google Play Protect. In this paper, before an application service app is listed on the public safety network mobile app store, we construct a data set of malicious and normal apps, extract features, select the most effective AI model, and perform static and dynamic analysis; based on the results, an app is listed in the app store only if it is not malicious. Since blocking malicious apps from being listed in advance is essential, providing authorized certification minimizes the security blind spots of the public safety network, and supplying certified apps for disaster safety and application services helps secure the safety of the public safety network.
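The static-analysis feature-extraction step described above can be illustrated with a toy sketch that turns an app's requested permissions into a fixed feature vector an ML model could score. The permission list, apps, and labels are hypothetical, not the paper's data set:

```python
# Ordered list of permissions treated as risk signals (illustrative only).
DANGEROUS = ["SEND_SMS", "READ_CONTACTS", "RECORD_AUDIO", "ACCESS_FINE_LOCATION"]

def permission_vector(requested):
    """Map the set of permissions an app requests to a fixed 0/1 vector."""
    return [1 if p in requested else 0 for p in DANGEROUS]

# Toy labeled data set: (feature vector, label) with 1 = malicious.
dataset = [
    (permission_vector({"SEND_SMS", "READ_CONTACTS"}), 1),
    (permission_vector({"ACCESS_FINE_LOCATION"}), 0),
]
print(dataset[0][0])  # [1, 1, 0, 0]
```

Fixed-length vectors like these are what candidate models would be trained and compared on before the best one is chosen for the listing pipeline.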


Ship Collision Risk Assessment for Bridges (교량의 선박충돌위험도 평가)

  • Lee, Seong Lo;Bae, Yong Gwi
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.26 no.1A
    • /
    • pp.1-9
    • /
    • 2006
  • An analysis of the annual frequency of collapse (AF) is performed for each bridge pier exposed to ship collision, from which the required lateral impact resistance can be determined for each pier. The pier impact resistance is selected using a probability-based analysis procedure in which the predicted annual frequency of bridge collapse from the ship collision risk assessment is compared to an acceptance criterion. The procedure is iterative: a trial impact resistance is selected for a bridge component, the computed AF is compared to the acceptance criterion, and the analysis variables are revised as necessary to achieve compliance. The distribution of the AF acceptance criterion among the exposed piers is generally based on the designer's judgment. In this study, the acceptance criterion is allocated to each pier using allocation weights based on the previous predictions. To determine the design lateral impact resistance of bridge components such as the pylon and piers, the numerical analysis is performed iteratively with the ratio of pylon-to-pier impact resistance as the analysis variable. The design impact resistance can vary greatly among components of the same bridge, depending on the waterway geometry, available water depth, bridge geometry, and vessel traffic characteristics. Further research on the allocation model for AF and the determination of impact resistance is required.
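The iterative acceptance check can be sketched with the usual decomposition AF = N · PA · PG · PC (vessel count, probability of aberrancy, geometric probability, probability of collapse). The linear PC model and all numbers below are simplified illustrations, not the paper's values:

```python
def annual_frequency(n_vessels, p_aberrancy, p_geometric, p_collapse):
    """AF = N * PA * PG * PC."""
    return n_vessels * p_aberrancy * p_geometric * p_collapse

def p_collapse(resistance, demand):
    """Toy PC model: collapse probability falls linearly as the ratio of
    impact resistance to impact demand rises, reaching zero at ratio 1."""
    ratio = resistance / demand
    return 1.0 - ratio if ratio < 1.0 else 0.0

def required_resistance(n, pa, pg, demand, af_limit, step=100.0):
    """Raise the trial impact resistance until AF meets the criterion."""
    r = 0.0
    while annual_frequency(n, pa, pg, p_collapse(r, demand)) > af_limit:
        r += step
    return r

# Illustrative numbers only: 5000 transits/yr, PA = 1e-4, PG = 0.3,
# impact demand 10000 kN, acceptance criterion AF <= 1e-4 per year.
print(required_resistance(n=5000, pa=1e-4, pg=0.3, demand=10000.0, af_limit=1e-4))  # 10000.0
```

In practice the same loop runs over all exposed components at once, with the pylon-to-pier resistance ratio as the variable being revised.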

Developing and Implementing a Secondary Teacher Training Program to Build TPACK in Entrepreneurship Education (기업가정신 교육에서의 TPACK 강화를 위한 중등 교사 연수 프로그램 개발 및 적용)

  • Seonghye Yoon;Seyoung Kim
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.18 no.4
    • /
    • pp.51-63
    • /
    • 2023
  • The purpose of this study is to develop and implement a secondary teacher training program based on the TPACK model to strengthen the capacity of teachers of youth entrepreneurship education in the context of the increasing importance of entrepreneurship as a future competency, and to provide theoretical and practical implications based on it. To this end, a teacher training program was developed through the process of analysis, design, development, implementation, and evaluation based on the ADDIE model, and 22 secondary school teachers in Gangwon Province were trained and the effectiveness and validity were analyzed. First, the results of the paired sample t-test of TPACK in entrepreneurship education conducted before and after the program showed statistically significant improvements in all sub-competencies. Second, the satisfaction survey of the training program showed that the overall satisfaction was high with M=4.83. Third, the validity of the program was reviewed by three experts, and it was found to be highly valid with a validity of M=5.0, usefulness of M=4.7, and universality of M=5.0. Based on the results, it is suggested that in order to expand entrepreneurship education, opportunities for teachers' holistic capacity building such as TPACK should be expanded, teachers' understanding and practice of backward design should be promoted, and access to various resources that can be utilized in entrepreneurship education should be improved.
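The pre/post comparison reported above is a paired-samples t-test; a stdlib sketch on hypothetical pre- and post-training TPACK scores (not the study's data) looks like this:

```python
import math
import statistics

def paired_t(pre, post):
    """Paired-samples t statistic: mean difference over its standard error."""
    diffs = [b - a for a, b in zip(pre, post)]
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)            # sample standard deviation
    return mean_d / (sd_d / math.sqrt(len(diffs)))

# Hypothetical 5-point-scale scores for six teachers before and after training.
pre = [2.8, 3.1, 2.5, 3.4, 2.9, 3.0]
post = [3.9, 4.2, 3.6, 4.1, 4.0, 3.8]
t = paired_t(pre, post)
print(round(t, 2))
```

A t value this far from zero on n-1 = 5 degrees of freedom corresponds to the kind of statistically significant improvement the study reports.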


An Investigation on the Periodical Transition of News related to North Korea using Text Mining (텍스트마이닝을 활용한 북한 관련 뉴스의 기간별 변화과정 고찰)

  • Park, Chul-Soo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.63-88
    • /
    • 2019
  • The goal of this paper is to investigate changes in North Korea's domestic and foreign policies through automated text analysis of the coverage of North Korea in South Korean mass media. Based on that data, we analyze the status of text mining research, using text mining techniques to find its topics, methods, and trends, and we examine the characteristics of the analysis methods as confirmed by the data. The R program, free software for statistical computing and graphics, was used to apply the text mining techniques. Text mining methods highlight the most frequently used keywords in a body of text, from which one can create a word cloud (also called a text cloud or tag cloud). This study proposes a procedure to find meaningful tendencies based on a combination of word clouds and co-occurrence networks. It aims to explore the images of North Korea represented in South Korean newspapers more objectively by quantitatively reviewing patterns of language use related to North Korea in newspaper big data from November 1, 2016 to May 23, 2019. Considering recent inter-Korean relations, we divided this span into three periods. The period before January 1, 2018 was set as the Before Phase of Peace Building. From January 1, 2018 to February 24, 2019, we set the Peace Building Phase: Kim Jong-un's New Year's message and the PyeongChang Olympics formed an atmosphere of peace on the Korean peninsula. After the Hanoi summit, the third period was marked by silence in the relationship between North Korea and the United States, and was therefore called the Depression Phase of Peace Building. This study analyzes news articles related to North Korea in the Korea Press Foundation database (www.bigkinds.or.kr) through text mining, to investigate the characteristics of the Kim Jong-un regime's South Korea policy and unification discourse.
The main results show that trends in the North Korean national policy agenda can be discovered based on clustering and visualization algorithms. In particular, the co-occurrence word analysis examines changes in international circumstances, domestic conflicts, the living conditions of North Korea, the South's aid projects for the North, inter-Korean conflicts, the North Korean nuclear issue, and the North Korean refugee problem. The study also analyzes the South Korean mentality toward North Korea in terms of semantic prosody. In the Before Phase of Peace Building, the analysis extracted, in order, 'Missile', 'North Korea Nuclear', 'Diplomacy', 'Unification', and 'South-North Korean'. The Peace Building Phase yielded 'Panmunjom', 'Unification', 'North Korea Nuclear', 'Diplomacy', and 'Military'. The Depression Phase of Peace Building yielded 'North Korea Nuclear', 'North and South Korea', 'Missile', 'State Department', and 'International'. There are 16 words adopted in all three periods, in the following order: 'Missile', 'North Korea Nuclear', 'Diplomacy', 'Unification', 'North and South Korea', 'Military', 'Kaesong Industrial Complex', 'Defense', 'Sanctions', 'Denuclearization', 'Peace', 'Exchange and Cooperation', and 'South Korea'. We expect the results of this study to contribute to analyzing trends in news content associated with North Korea's provocations, and future research on North Korean trends will build on these results. We will continue developing a model for measuring North Korea risk that can anticipate and respond to North Korea's behavior in advance, and we expect the text mining and scientific data analysis techniques used here to be applied to the North Korea and unification research field.
Through such academic studies, we hope to see many works that make important contributions to the nation.
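The study performed its analysis in R; an equivalent Python sketch of the two core steps, period-wise keyword frequencies (the word-cloud input) and within-article co-occurrence counts, is shown below. The articles are illustrative stand-ins, not the BigKinds data:

```python
from collections import Counter
from itertools import combinations

# Each "article" is its list of extracted keywords for one period.
articles = [
    ["missile", "North Korea nuclear", "diplomacy"],
    ["Panmunjom", "unification", "North Korea nuclear"],
    ["missile", "North Korea nuclear", "sanctions"],
]

# Keyword frequency across the period (feeds a word cloud).
freq = Counter(word for doc in articles for word in doc)

# Co-occurrence: count keyword pairs appearing in the same article
# (sorted so (a, b) and (b, a) collapse into one edge of the network).
cooc = Counter(pair for doc in articles for pair in combinations(sorted(doc), 2))

print(freq.most_common(2))
print(cooc[("North Korea nuclear", "missile")])  # 2 (co-occur in two articles)
```

Running this per period and comparing the top keywords reproduces the kind of phase-by-phase ranking reported above.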

A Study on the Design of Sustainable App Services for Medication Management and Disposal of Waste Drugs (약 복용 관리와 폐의약품 처리를 위한 지속 가능한 앱 서비스 디자인 연구)

  • Lee, Ri-Na;Hwang, Jeong-Un;Shin, Ji-Yoon;Hwang, Jin-Do
    • Journal of Service Research and Studies
    • /
    • v.14 no.2
    • /
    • pp.48-68
    • /
    • 2024
  • In the aftermath of the global coronavirus pandemic, the importance of health care is being emphasized more strongly in society. Under the influence of these changes, domestic pharmaceutical companies have introduced regular drug delivery services, that is, subscription services for drugs and health functional foods, and this market is growing continuously. However, these subscription services are causing a new environmental problem: the amount of waste drugs increases because of unused medicines. Therefore, this study proposes a service that promotes health management through regular medication adherence, reducing the amount of pharmaceutical waste, and that also aims to improve awareness and practice of proper medication disposal. As a preliminary step for the service design, a survey of 51 adults was conducted to assess their drug use habits and their awareness of waste drug collection. A guideline for the service design was created based on the Honeycomb model, and a prototype was produced by specifying the service using the survey results and service design methodology. To verify the prototype, a first user task survey was conducted to identify its problems; after improvements, a second usability test was conducted with 49 adults to confirm the versatility of the service. Usability verification was conducted using SPSS for Mac version 29.0, and Spearman correlation analysis was performed on the questionnaire results to confirm the relationship between the frequency analysis and the evaluation items. This study presents concrete solutions to the waste drug problem arising from the spread of drug subscription services.
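The correlation analysis above was run in SPSS; a stdlib sketch of the same Spearman rank correlation (tie-aware ranking followed by Pearson correlation of the ranks) on hypothetical questionnaire scores:

```python
import math

def ranks(xs):
    """1-based ranks with ties assigned the average of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1            # average rank of the tied run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = math.sqrt(sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry))
    return num / den

print(spearman([1, 2, 3, 4, 5], [2, 3, 4, 5, 6]))  # 1.0 (perfectly monotonic)
```

Rank-based correlation suits Likert-style usability items because it assumes only ordinal, not interval, measurement.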

Design of a Client-Server Model for Effective Processing and Utilization of Bigdata (빅데이터의 효과적인 처리 및 활용을 위한 클라이언트-서버 모델 설계)

  • Park, Dae Seo;Kim, Hwa Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.109-122
    • /
    • 2016
  • Recently, big data analysis has developed into a field of interest to individuals and non-experts as well as companies and professionals. Accordingly, it is used for marketing and social problem solving by analyzing openly available or directly collected data. In Korea, various companies and individuals are attempting big data analysis, but many struggle from the initial stage because of limited big data disclosure and collection difficulties. System improvements for big data activation and disclosure services are being carried out in Korea and abroad, mainly services for opening public data such as the Korean government 3.0 portal (data.go.kr). Beyond these government efforts, services that share data held by corporations or individuals are running, but useful data is hard to find because little data is actually shared. In addition, big traffic problems can occur because the entire data set must be downloaded and examined just to grasp the attributes of, and basic information about, the shared data. Therefore, a new system for big data processing and utilization is needed. First, big data pre-analysis technology is needed to solve the sharing problem. Pre-analysis is a concept proposed in this paper: the results generated by analyzing the data in advance are provided to users. Through pre-analysis, the usability of big data improves because a user searching for big data is given information that conveys its properties and characteristics. In addition, by sharing the summary data or sample data generated through pre-analysis, the security problems that may occur when original data is disclosed can be avoided, enabling big data sharing between the data provider and the data user.
Second, it is necessary to quickly generate appropriate preprocessing results according to the disclosure level or network status of the raw data and to provide these results to users through distributed big data processing with Spark. Third, to solve the big traffic problem, the system monitors network traffic in real time; when preprocessing the data requested by a user, it preprocesses to a size transferable on the current network before transmission, so that no big traffic occurs. In this paper, we present various data sizes according to the disclosure level determined through pre-analysis. This method is expected to show a low traffic volume compared with the conventional approach of sharing only raw data across many systems. We describe how to solve the problems that occur when big data is released and used, and how to facilitate sharing and analysis. The client-server model uses Spark for fast analysis and processing of user requests, and consists of a Server Agent and a Client Agent, deployed on the server and client sides respectively. The Server Agent is required by the data provider: it performs the pre-analysis of big data to generate a Data Descriptor containing information on the Sample Data, Summary Data, and Raw Data, performs fast and efficient big data preprocessing through distributed processing, and continuously monitors network traffic. The Client Agent is placed on the data user's side: it searches big data quickly through the Data Descriptor produced by the pre-analysis, and the desired data can be requested from the server and downloaded according to its disclosure level. The Server Agent and Client Agent are separated so that data published by a provider can be used directly by users.
In particular, we focus on big data sharing, distributed big data processing, and the big traffic problem, construct the detailed modules of the client-server model, and present the design of each module. In a system built on the proposed model, a user who acquires data analyzes it in the desired direction or preprocesses it into new data; by disclosing the newly processed data through the Server Agent, the data user takes on the role of data provider. A data provider can likewise obtain useful statistical information from the Data Descriptor of the data it discloses and become a data user performing new analysis on the sample data. In this way, raw data is processed and the processed big data is utilized by users, naturally forming a shared environment in which the roles of data provider and data user are not fixed: everyone can be both a provider and a user. The client-server model thus solves the big data sharing problem, provides a free and secure environment for big data disclosure, and makes big data easy to find.
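In the proposed model, the Server Agent's pre-analysis runs on Spark; below is a minimal single-machine sketch of the idea, building a Data Descriptor (summary statistics plus a small sample) so raw data need not be shared for a user to judge its usefulness. Field and function names are illustrative, not the paper's API:

```python
import json
import statistics

def build_descriptor(name, rows, sample_size=3):
    """Pre-analyze a data set into a shareable Data Descriptor:
    Summary Data (statistics) plus Sample Data (a small preview)."""
    values = [r["value"] for r in rows]
    return {
        "name": name,
        "summary": {                      # Summary Data: safe to publish
            "count": len(values),
            "mean": statistics.mean(values),
            "min": min(values),
            "max": max(values),
        },
        "sample": rows[:sample_size],     # Sample Data: a small preview
    }

rows = [{"id": i, "value": v} for i, v in enumerate([10, 20, 30, 40])]
desc = build_descriptor("sensor_readings", rows)
print(json.dumps(desc["summary"]))
```

A Client Agent would search over descriptors like this one and request the raw data only when the summary and sample look relevant, which is how the model avoids shipping full data sets for every query.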