• Title/Summary/Keyword: Information Disclosure

Search Results: 2,914 / Processing Time: 0.026 seconds

A Hardware Implementation of the Underlying Field Arithmetic Processor based on Optimized Unit Operation Components for Elliptic Curve Cryptosystems (타원곡선을 암호시스템에 사용되는 최적단위 연산항을 기반으로 한 기저체 연산기의 하드웨어 구현)

  • Jo, Seong-Je;Kwon, Yong-Jin
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.8 no.1
    • /
    • pp.88-95
    • /
    • 2002
  • In recent years, the security of hardware and software systems has become one of the most essential factors of a safe network community. As Elliptic Curve Cryptosystems, proposed independently by N. Koblitz and V. Miller in 1985, require fewer bits for the same security as existing cryptosystems such as RSA, there is a net reduction in cost, size, and time. In this thesis, we propose an efficient hardware architecture of an underlying field arithmetic processor for Elliptic Curve Cryptosystems, and a very useful method for implementing the architecture, especially the multiplicative inverse operator over $GF(2^m)$, on an FPGA and furthermore in VLSI, where the method is based on optimized unit operation components. We optimize the arithmetic processor for speed so that it has a reasonable number of gates to implement. The proposed architecture can be applied to any finite field $GF(2^m)$. According to the simulation results, though the number of gates is increased by a factor of 8.8, the multiplication speed and inversion speed have been improved 150 times and 480 times, respectively, compared with the design presented by Sarwono Sutikno et al. [7]. The designed underlying arithmetic processor can also be applied to implementing other crypto-processors and various finite field applications.
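The field operations such a processor accelerates can be modeled in software: polynomial-basis shift-and-add multiplication with reduction by an irreducible polynomial, and inversion via Fermat's little theorem ($a^{-1} = a^{2^m-2}$ in $GF(2^m)$). The sketch below is an illustrative Python model for a toy field ($m = 4$, irreducible polynomial $x^4 + x + 1$), not the paper's hardware design:

```python
# Software model of GF(2^m) arithmetic for a small example field.
M = 4                 # field degree (the paper targets arbitrary m)
IRRED = 0b10011       # x^4 + x + 1, irreducible over GF(2)

def gf_mul(a, b):
    """Carry-less shift-and-add multiply of a and b, reduced mod IRRED."""
    acc = 0
    while b:
        if b & 1:
            acc ^= a          # addition in GF(2^m) is XOR
        b >>= 1
        a <<= 1
        if a & (1 << M):      # degree reached m: reduce
            a ^= IRRED
    return acc

def gf_inv(a):
    """Multiplicative inverse via square-and-multiply: a^(2^m - 2)."""
    result, e = 1, (1 << M) - 2
    while e:
        if e & 1:
            result = gf_mul(result, a)
        a = gf_mul(a, a)
        e >>= 1
    return result

x = 0b0110
assert gf_mul(x, gf_inv(x)) == 1   # inverse check
```

In hardware, the inversion is far more expensive than multiplication (it chains many multiplications), which is why the abstract reports separate speedups for the two operations.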

A Comparative Study on Quantifying Uncertainty of Vitamin A Determination in Infant Formula by HPLC (HPLC에 의한 조제분유 중 비타민 A 함량 분석의 측정불확도 비교산정)

  • Lee, Hong-Min;Kwak, Byung-Man;Ahn, Jang-Hyuk;Jeon, Tae-Hong
    • Korean Journal of Food Science and Technology
    • /
    • v.40 no.2
    • /
    • pp.152-159
    • /
    • 2008
  • The purpose of this study was to determine the accurate quantification of vitamin A in infant formula by comparing two different standard stock solutions as well as various sample weights using high performance liquid chromatography. The sources of uncertainty in measurement, such as sample weight, final sample volume, and the instrumental results, were identified and used as parameters to determine the combined standard uncertainty based on the GUM (Guide to the Expression of Uncertainty in Measurement) and the Draft EURACHEM/CITAC Guide. The uncertainty components in measurement were identified as standard weight, purity, molecular weight, dilution of the standard solution, calibration curve, recovery, reproducibility, sample weight, and final sample volume. Each uncertainty component was evaluated as type A or type B and included in calculating the combined uncertainty. The analytical results and combined standard uncertainties of vitamin A according to the two different methods of stock solution preparation were 627 ± 33 μg R.E./100 g for the 1,000 mg/L stock solution and 627 ± 49 μg R.E./100 g for the 100 mg/L stock solution. The analytical results and combined standard uncertainties of vitamin A according to the various sample weights were 622 ± 48 μg R.E./100 g, 627 ± 33 μg R.E./100 g, and 491 ± 23 μg R.E./100 g for 1 g, 2 g, and 5 g of sample, respectively. These data indicate that the preparation method of the standard stock solution and the sample amount were the main sources of uncertainty in the analysis results for vitamin A. Preparing a 1,000 mg/L stock solution of the standard material rather than a 100 mg/L solution, and sampling not more than 2 g of infant formula, would be effective in reducing differences in the results as well as uncertainty.
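For a multiplicative measurement model, the GUM combination of uncertainty components the abstract describes reduces to a root-sum-of-squares of relative standard uncertainties. A minimal sketch with made-up component values (not the paper's data):

```python
# GUM-style combination for a model of the form y = x1 * x2 * ... / (...):
# relative standard uncertainties add in quadrature.
from math import sqrt

def combined_relative_uncertainty(components):
    """components: list of (value, standard_uncertainty) pairs."""
    return sqrt(sum((u / x) ** 2 for x, u in components))

# hypothetical components: sample weight, standard purity, calibration
# curve, recovery, final volume (values invented for illustration)
components = [(2.0, 0.001), (0.998, 0.002), (1.0, 0.015),
              (0.97, 0.02), (50.0, 0.05)]

result = 627.0                               # measured vitamin A, ug R.E./100 g
u_c = result * combined_relative_uncertainty(components)
U = 2 * u_c   # expanded uncertainty at ~95 % confidence, coverage factor k = 2
```

Reported intervals like 627 ± 33 μg R.E./100 g are of exactly this form: the result plus or minus an uncertainty derived from the combined components.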

Host-Based Intrusion Detection Model Using Few-Shot Learning (Few-Shot Learning을 사용한 호스트 기반 침입 탐지 모델)

  • Park, DaeKyeong;Shin, DongIl;Shin, DongKyoo;Kim, Sangsoo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.7
    • /
    • pp.271-278
    • /
    • 2021
  • As current cyber attacks become more intelligent, existing Intrusion Detection Systems have difficulty detecting intelligent attacks that deviate from stored patterns. In an attempt to solve this, deep learning-based intrusion detection models that analyze the patterns of intelligent attacks through data learning have emerged. Intrusion detection systems are divided into host-based and network-based depending on the installation location. Unlike network-based intrusion detection systems, host-based intrusion detection systems have the disadvantage of having to observe the system as a whole, both inside and outside. However, they have the advantage of being able to detect intrusions that a network-based intrusion detection system cannot. Therefore, this study focuses on a host-based intrusion detection system. To evaluate and improve the performance of the host-based intrusion detection model, we used the host-based Leipzig Intrusion Detection Data Set (LID-DS) published in 2018. In the performance evaluation of the model using this data set, 1D vector data was converted into 3D image data so that the similarity of samples could be measured and normal data distinguished from abnormal data. In addition, deep learning models have the drawback of having to re-learn every time a new cyber attack method appears; this is inefficient because learning a large amount of data takes a long time. To solve this problem, this paper proposes a Siamese Convolutional Neural Network (Siamese-CNN) using the Few-Shot Learning method, which shows excellent performance by learning from a small amount of data. Siamese-CNN determines whether attacks are of the same type by the similarity score of each sample of cyber attacks converted into images.
The accuracy was calculated using the Few-Shot Learning technique, and the performance of a Vanilla Convolutional Neural Network (Vanilla-CNN) and Siamese-CNN was compared to confirm the performance of Siamese-CNN. As a result of measuring the Accuracy, Precision, Recall, and F1-Score indices, it was confirmed that the recall of the Siamese-CNN model proposed in this study increased by about 6% over the Vanilla-CNN model.
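The Siamese similarity scoring described above can be sketched in miniature: two inputs pass through the same embedding, and a distance between the embeddings becomes a similarity score. The toy Python model below uses a random shared linear embedding and an exponential distance score; the paper's actual model is a CNN over 3D image data, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))        # shared embedding weights (hypothetical)

def embed(x):
    """Both inputs pass through the SAME weights -- the defining
    property of a Siamese network."""
    return np.tanh(W @ x)

def similarity(a, b):
    """Score in (0, 1]; 1 means identical embeddings."""
    return float(np.exp(-np.linalg.norm(embed(a) - embed(b))))

x = rng.normal(size=16)
assert similarity(x, x) == 1.0      # identical inputs score 1
```

Classification then amounts to comparing a new attack sample against a few labeled support samples per attack type and picking the type with the highest similarity, which is what makes the few-shot setting workable.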

A Study on the Current Preservation and Management of the Korean B and C War Criminal Records in Japan (일본의 한국인 BC급 전범관련 자료 현황에 관한 연구)

  • Lee, Young-hak
    • The Korean Journal of Archival Studies
    • /
    • no.54
    • /
    • pp.111-150
    • /
    • 2017
  • This paper examines the current situation of sources on Korean Class B and C war criminals, attached as civilians to the Japanese military during the Asia-Pacific War and charged with cruelly treating Allied POWs in Japanese POW camps, and also explores the possibility of a joint Korean-Japanese archive of these sources. The Japanese government agreed to the judgement of war crimes by accepting the terms of the Potsdam Declaration, and the Allied troops carried out the judgement of Class B and C war crimes in each region of Asia as well as the International Military Tribunal for the Far East (also known as the Tokyo Trials). However, many non-Japanese, such as Koreans and Taiwanese from the Japanese colonies, were prosecuted for war crimes. The issues of reparations and restoring their reputations were ignored by both the Korean and Japanese governments, and public access to their records was restricted. Most records on Korean Class B and C war criminals were transferred from each ministry to the National Archives of Japan. The majority are copies of the judgements of war crimes by the Allied nations, or records that each department of the Japanese government independently prepared in an effort to erase Japanese war crimes. In the case of the Diplomatic Archives of the Ministry of Foreign Affairs, such records focus mostly on the war crimes themselves, the transfer of Class B and C war criminals within Japan, and the diplomatic situation. In the cases of Korea and Taiwan, these records relate to the negotiations on the repatriation of Class B and C war criminals. In addition, the founding purpose of the Japan Center for Asian Historical Records and its activities demonstrate its tremendous utility as a facility for building a joint Korea-Japan colonial archive. Thus, the current flaws of the Japan Center for Asian Historical Records should be improved in order to build such a joint archive in the future.

A Study on Development of Education Program Using Presidential Archives for the Free Learning Semester (자유학기제에 적용가능한 대통령기록물 활용 교육프로그램 개발)

  • Song, Na-Ra;Lee, Sung Min;Kim, Yong;Oh, Hyo-Jung
    • The Korean Journal of Archival Studies
    • /
    • no.51
    • /
    • pp.89-132
    • /
    • 2017
  • The presidential records reflect their era and hold valuable evidence supporting the administrative transparency and accountability of government operations. People's interest in the presidential records increased in response to their recent leak. The presidential archives were moved to Sejong in line with the desire to provide public-friendly services. This study will help users access the archives and utilize archival information. The Ministry of Education introduced the free learning semester, which all middle schools have been conducting since 2016. The free learning semester provides an environment where education can be provided by external organizations. As middle school students are still unfamiliar with archives, the free learning semester provides a good opportunity for accessing archives and records. Although it serves as an opportunity to publicize archives, existing related studies are insufficient. This study aims to develop a free learning semester program using the presidential archives and records for middle school students, based on an analysis of domestic and foreign archival education programs. This study shows the development of an education program using presidential archives and records through literature research, domestic and foreign case analysis, and expert interviews. First, through literature research, this research clarified the definition of the free learning semester as well as its types. In addition, this research identified the four types of free learning semester education programs that can be linked to the presidential archives. Second, through website analysis and the information disclosure system, this research investigated domestic and foreign cases of education programs.
A total of 46 education programs of institutions were analyzed, focusing on student-led education programs in foreign archives as well as the free learning semester education programs in domestic libraries and archives. Third, based on these results, this study proposed four types of free learning semester education programs using the presidential archives and records, and provided concrete examples.

Comparison of ESG Evaluation Methods: Focusing on the K-ESG Guideline (ESG 평가방법 비교: K-ESG 가이드라인을 중심으로)

  • Chanhi Cho;Hyoung-Yong Lee
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.1
    • /
    • pp.1-25
    • /
    • 2023
  • ESG management is becoming a necessity of the times, but there are about 600 ESG evaluation indicators worldwide, causing confusion in the market as different ESG ratings have been assigned to individual companies by different evaluation agencies. In addition, since the method of applying ESG was not disclosed, there were not many ways for companies that wanted to introduce ESG management to get help. Accordingly, the Ministry of Trade, Industry and Energy announced the K-ESG guideline jointly with other ministries. In previous studies, there was little research comparing evaluation grades across ESG evaluation companies or applying the evaluation diagnostic items. Therefore, this study examined the ease of application of the K-ESG guideline, and possible improvements to it, by applying the guideline to companies that already have ESG ratings. The position of the K-ESG guideline is also confirmed by comparing the scores calculated through the K-ESG guideline for companies that have ESG ratings from global and domestic ESG evaluation agencies. As a result of the analysis, first, the K-ESG guideline provides clear and detailed standards for individual companies to set their own ESG goals and the direction of their ESG practice. Second, the K-ESG guideline is compatible with domestic and global ESG evaluation standards, as its 61 diagnostic items and 12 additional diagnostic items cover the evaluation indicators of globally representative ESG evaluation agencies and KCGS in Korea. Third, the ESG rating under the K-ESG guideline was higher than that of a global ESG rating company and lower than or similar to that of a domestic ESG rating company. Fourth, the ease of application of the K-ESG guideline is judged to be high. Fifth, a point to be improved in the K-ESG guideline is that the government needs to compile industry-average statistics on the diagnostic items in the K-ESG environment area and publish them on the government's dedicated ESG site.
In addition, the applied weights of E, S, and G by industry should be determined and disclosed. This study will help ESG evaluation agencies, corporate management, and ESG managers interested in ESG management to establish ESG management strategies, and contributes improvements to be referenced when the K-ESG guideline is revised in the future.

Semantic Visualization of Dynamic Topic Modeling (다이내믹 토픽 모델링의 의미적 시각화 방법론)

  • Yeon, Jinwook;Boo, Hyunkyung;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.131-154
    • /
    • 2022
  • Recently, research on unstructured data analysis has been actively conducted with the development of information and communication technology. In particular, topic modeling is a representative technique for discovering core topics from massive text data. In the early stages of topic modeling, most studies focused only on topic discovery. As the topic modeling field matured, studies on the change of topics over time began to be carried out. Accordingly, interest in dynamic topic modeling, which handles changes in the keywords constituting a topic, is also increasing. Dynamic topic modeling identifies major topics from the data of the initial period and manages the change and flow of topics by utilizing topic information of the previous period to derive topics in subsequent periods. However, it is very difficult to understand and interpret the results of dynamic topic modeling. The results of traditional dynamic topic modeling simply reveal changes in keywords and their rankings, and this information is insufficient to represent how the meaning of a topic has changed. Therefore, in this study, we propose a method to visualize topics by period by reflecting the meaning of the keywords in each topic. In addition, we propose a method for intuitively interpreting changes in topics and the relationships between topics. The detailed method of visualizing topics by period is as follows. In the first step, dynamic topic modeling is implemented to derive the top keywords of each period and their weights from text data. In the second step, we derive vectors of the top keywords of each topic from a pre-trained word embedding model and perform dimension reduction on the extracted vectors. We then formulate a semantic vector of each topic by calculating the weighted sum of the keyword vectors, using the topic weight of each keyword.
In the third step, we visualize the semantic vector of each topic using matplotlib and analyze the relationships among the topics based on the visualized result. The change of a topic can be interpreted as follows: from the result of dynamic topic modeling, we identify the top five rising keywords and top five descending keywords for each period to show the change of the topic. Many existing topic visualization studies visualize the keywords of each topic, but the approach proposed in this study differs from previous studies in that it attempts to visualize each topic itself. To evaluate the practical applicability of the proposed methodology, we performed an experiment on 1,847 abstracts of artificial intelligence-related papers, divided into three periods (2016-2017, 2018-2019, 2020-2021). We selected seven topics based on the coherence score and utilized a pre-trained Word2vec word embedding model trained on Wikipedia, an Internet encyclopedia. Based on the proposed methodology, we generated a semantic vector for each topic and, by reflecting the meaning of the keywords, visualized and interpreted the topics by period. Through these experiments, we confirmed that the rise and descent of the topic weight of a keyword can be usefully used to interpret the semantic change of the corresponding topic and to grasp the relationships among topics. In this study, to overcome the limitations of dynamic topic modeling results, we used word embedding and dimension reduction techniques to visualize topics by period. The results of this study are meaningful in that they broaden the scope of topic understanding through the visualization of dynamic topic modeling results.
In addition, an academic contribution can be acknowledged in that this study lays the foundation for follow-up studies using various word embeddings and dimensionality reduction techniques to improve the performance of the proposed methodology.
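The semantic-vector step described above (topic-weighted sum of keyword embeddings, followed by dimension reduction for plotting) can be sketched with toy data. The embedding values and topic weights below are invented for illustration; the paper uses Word2vec vectors trained on Wikipedia and plots the result with matplotlib.

```python
import numpy as np

embedding = {                       # hypothetical pre-trained word vectors
    "learning": np.array([0.9, 0.1, 0.0, 0.2]),
    "network":  np.array([0.8, 0.3, 0.1, 0.0]),
    "privacy":  np.array([0.0, 0.9, 0.7, 0.1]),
    "policy":   np.array([0.1, 0.8, 0.6, 0.3]),
}

topics = {                          # topic -> {keyword: topic weight}
    "T1": {"learning": 0.7, "network": 0.3},
    "T2": {"privacy": 0.6, "policy": 0.4},
}

def semantic_vector(keyword_weights):
    """Topic-weighted sum of keyword embeddings, normalized by total weight."""
    total = sum(keyword_weights.values())
    return sum(w * embedding[k] for k, w in keyword_weights.items()) / total

vectors = np.vstack([semantic_vector(kw) for kw in topics.values()])

# dimension reduction to 2D (PCA via SVD of the centered vectors);
# each topic becomes one (x, y) point ready for plotting
centered = vectors - vectors.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords_2d = centered @ vt[:2].T
```

Plotting the per-period `coords_2d` points for the same topic then shows its semantic drift as a trajectory, which is the interpretation the study builds on.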

Effects on the continuous use intention of AI-based voice assistant services: Focusing on the interaction between trust in AI and privacy concerns (인공지능 기반 음성비서 서비스의 지속이용 의도에 미치는 영향: 인공지능에 대한 신뢰와 프라이버시 염려의 상호작용을 중심으로)

  • Jang, Changki;Heo, Deokwon;Sung, WookJoon
    • Informatization Policy
    • /
    • v.30 no.2
    • /
    • pp.22-45
    • /
    • 2023
  • In research on the use of AI-based voice assistant services, problems related to users' trust and privacy protection arising from the experience of service use are constantly being raised. The purpose of this study was to empirically investigate the effects of individual trust in AI and online privacy concerns on the continued use of AI-based voice assistants, specifically the impact of their interaction. In this study, question items were constructed based on previous studies, with an online survey conducted among 405 respondents. The effects of the user's trust in AI and privacy concerns on the adoption and continuous use intention of AI-based voice assistant services were analyzed using the Heckman selection model. As the main findings of the study, first, AI-based voice assistant service usage behavior was positively influenced by factors that promote technology acceptance, such as perceived usefulness, perceived ease of use, and social influence. Second, trust in AI had no statistically significant effect on AI-based voice assistant service usage behavior but had a positive effect on continuous use intention. Third, the privacy concern level was confirmed to suppress continuous use intention through its interaction with trust in AI. These research results suggest the need to strengthen user experience through the collection of user opinions and action to improve trust in technology and alleviate users' concerns about privacy, as governance for realizing digital government. When introducing artificial intelligence-based policy services, it is necessary to transparently disclose the scope of application of artificial intelligence technology through a public deliberation process, and the development of a system that can track and evaluate privacy issues ex post, as well as algorithms that consider privacy protection, is required.
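The Heckman selection model mentioned above corrects for the fact that continuous-use intention is only observed for adopters. A two-step sketch on simulated data follows (probit for selection, then OLS augmented with the inverse Mills ratio); this is an illustration of the estimator, not the authors' survey data or code.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 500
Z = np.column_stack([np.ones(n), rng.normal(size=n)])   # selection covariates
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # outcome covariates

# correlated errors induce selection bias in a naive OLS on adopters
e = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n)
adopt = (Z @ np.array([0.2, 1.0]) + e[:, 0]) > 0        # observed adoption
y = X @ np.array([1.0, 0.8]) + e[:, 1]                  # observed only if adopt

# step 1: probit for adoption
def neg_loglik(g):
    p = np.clip(norm.cdf(Z @ g), 1e-10, 1 - 1e-10)
    return -np.sum(np.where(adopt, np.log(p), np.log(1 - p)))

gamma = minimize(neg_loglik, np.zeros(2)).x

# step 2: OLS on adopters, augmented with the inverse Mills ratio,
# whose coefficient absorbs the selection bias
zg = Z[adopt] @ gamma
imr = norm.pdf(zg) / norm.cdf(zg)
X2 = np.column_stack([X[adopt], imr])
beta = np.linalg.lstsq(X2, y[adopt], rcond=None)[0]
```

In the study's setting, the first stage corresponds to the adoption decision and the second to continuous-use intention, so effects like trust in AI can differ between the two stages, as the findings report.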

A Study on the Types of Dispute and its Solution through the Analysis on the Disputes Case of Franchise (프랜차이즈 분쟁사례 분석을 통한 분쟁의 유형과 해결에 관한 연구)

  • Kim, Kyu Won;Lee, Jae Han;Lim, Hyun Cheol
    • The Korean Journal of Franchise Management
    • /
    • v.2 no.1
    • /
    • pp.173-199
    • /
    • 2011
  • A franchisee has to depend on the franchisor's overall system, such as its know-how and management support, and in the franchise system the two parties do not start from the same position in economic or informational power, because the franchisor controls or supports selling and management styles. As a result, unfair trade practices such as the franchisor's excessive control and restriction of the franchisee may occur, and other side effects, such as fraudulent deals imposed on franchisees, have negatively influenced the development of the franchise industry and the national economy. The purpose of this study is therefore to prevent unfair trade against franchisees by understanding the causes and problems of disputes between franchisors and franchisees, focusing on the dispute cases submitted to the Korea Fair Trade Mediation Agency, and to seek ways to secure the transparency of the recruitment process and the fairness of the franchise management process. The results of the case analysis are as follows: first, affiliation contracts should be based on the franchisor's accurate public information statement and the franchisee's full understanding of it. Secondly, the franchisor needs to use its past experience and investigated data when recruiting franchisees. Thirdly, when making a contract with a franchisee, the franchisor has to confirm the business area by checking it with the franchisee in person. Fourthly, contracts are central to affiliation, so stipulating in the contract how potential disputes will be handled reduces disputes. Fifthly, from the government, substantial investigation and interest are needed to protect the rights and interests of both franchisor and franchisee and to prevent disputes by identifying their causes and more practical solutions.

Analysis of the Impact of Generative AI based on Crunchbase: Before and After the Emergence of ChatGPT (Crunchbase를 바탕으로 한 Generative AI 영향 분석: ChatGPT 등장 전·후를 중심으로)

  • Nayun Kim;Youngjung Geum
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.19 no.3
    • /
    • pp.53-68
    • /
    • 2024
  • Generative AI is receiving a lot of attention around the world, and ways to utilize it effectively in the business environment are being explored. In particular, since the public release of the ChatGPT service, which applies GPT-3.5, a large language model developed by OpenAI, it has attracted even more attention and has had a significant impact on the entire industry. This study focuses on the emergence of generative AI, especially ChatGPT, to investigate its impact on the startup industry and compare the changes that occurred before and after its emergence. This study aims to shed light on the actual application and impact of generative AI in the business environment by examining in detail how generative AI is being used in the startup industry and analyzing the impact of ChatGPT's emergence on the industry. To this end, we collected company information on generative AI-related startups that appeared before and after the ChatGPT announcement and analyzed changes in industry, business content, and investment information. Through keyword analysis, topic modeling, and network analysis, we identified trends in the startup industry and how the introduction of generative AI has transformed it. As a result of the study, we found that the number of startups related to generative AI has increased since the emergence of ChatGPT, and in particular, that the total and average amounts of funding for generative AI-related startups have increased significantly. We also found that various industries are attempting to apply generative AI technology, and that the development of services and products such as enterprise applications and SaaS using generative AI has been actively promoted, influencing the emergence of new business models.
The findings of this study confirm the impact of generative AI on the startup industry and contribute to our understanding of how the emergence of this innovative new technology can change the business ecosystem.
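The keyword-level before/after comparison described above can be illustrated minimally with `collections.Counter`: count terms in company descriptions from each period and subtract to find rising keywords. The startup descriptions below are invented; the study's actual data comes from Crunchbase.

```python
from collections import Counter

# hypothetical company descriptions before and after the ChatGPT release
before = ["ai chatbot for retail", "computer vision analytics",
          "nlp document search"]
after = ["generative ai copilot for sales", "llm saas for legal documents",
         "generative ai enterprise application"]

def keyword_counts(descriptions):
    """Flat word-frequency count over a list of descriptions."""
    return Counter(word for d in descriptions for word in d.split())

# Counter subtraction keeps only terms whose count rose after the release
delta = keyword_counts(after) - keyword_counts(before)
top_rising = delta.most_common(3)
```

In practice this step would follow tokenization and stop-word removal, and would feed the topic modeling and network analysis stages the abstract mentions.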
