• Title/Summary/Keyword: big data

Search results: 6,156, processing time: 0.033 seconds

Analysis and suggestion of research trends related to NLL -Focused on academic papers from 1998 to 2023- (북방한계선(Northern Limit Line : NLL)관련 연구 경향 분석 및 제언 -1998년~2023년 학술논문을 중심으로-)

  • Hyeon-Sik Kim;Jeong-Hoon Lee
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.6
    • /
    • pp.25-31
    • /
    • 2023
  • The Northern Limit Line (NLL) in the West Sea has been sharply disputed between the two Koreas since the U.N. commander established it in August 1953 to prevent accidental armed conflict in the waters around the Korean Peninsula. In 2022, for the first time since the division, North Korea launched a missile provocation across the NLL. The purpose of this study is to identify how research on the NLL, spurred by North Korea's actual provocations, has been conducted and to suggest directions for future work. Focusing on academic papers on the NLL published from 1998 to 2023, this study examined research trends using a total of five academic information databases, including RISS and Scholar. Examining the status of research by year, field, and method revealed significant differences in research volume depending on each government's relationship with North Korea. The most common research topic was the introduction of the concept of the NLL and its historical background, confirming the need to expand into more diverse fields so that the NLL can retain international legal justification in an international environment that shifts with the logic of power. In terms of research methods, most studies were literature reviews, indicating a need for quantitative research using interviews, surveys, and big data. It is hoped that the results of this analysis will play a positive role in setting the research direction for an international response regarding the NLL amid the still-ongoing interests of the international political environment.

Text Mining Analysis of Media Coverage of Maritime Sports: Perceptions of Yachting, Rowing, and Canoeing (텍스트마이닝을 활용한 해양스포츠에 대한 언론 보도기사 분석: 요트, 조정, 카누를 중심으로)

  • Ji-Hyeon Kim;Bo-Kyeong Kim
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.29 no.6
    • /
    • pp.609-619
    • /
    • 2023
  • This study aimed to investigate how social perceptions of domestic maritime sports are formed, using text mining analysis of the keywords and topics in ten years of domestic media coverage of representative maritime sports: yachting, rowing, and canoeing. The results are as follows. First, term frequency (TF) and word cloud analyses identified the top keywords: "maritime," "competition," "experience," "tourism," "world," "yachting," "canoeing," "leisure," and "participation." Second, semantic network analysis revealed that yachting was correlated with terms such as "maritime," "industry," "competition," "leisure," "tourism," "boat," "facilities," and "business"; rowing with terms such as "competition" and "Chungju"; and canoeing with terms such as "maritime," "competition," "experience," "leisure," and "tourism." Third, topic modeling analysis indicated that yachting, rowing, and canoeing are perceived both as elite sports and as maritime leisure sports, though these perceptions were shown to have little impact on society, public opinion, and social change. Taken together, these results suggest that yachting and canoeing have gradually shifted from being perceived as elite sports to being seen as essential elements of the maritime leisure industry. In contrast, rowing remains primarily associated with elite sport, and its popularization as a maritime leisure sport appears limited at this time.
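
The term-frequency (TF) step the abstract describes can be sketched with Python's standard library; the mini-corpus and token lists below are illustrative stand-ins, not the study's data:

```python
from collections import Counter

def term_frequencies(articles, top_n=5):
    """Count term frequencies (TF) across tokenized articles,
    as in the word-cloud step described above."""
    counts = Counter()
    for tokens in articles:
        counts.update(tokens)
    return counts.most_common(top_n)

# Hypothetical mini-corpus standing in for the news articles
corpus = [
    ["maritime", "competition", "yachting", "leisure"],
    ["maritime", "tourism", "canoeing", "experience"],
    ["competition", "rowing", "maritime"],
]
print(term_frequencies(corpus, top_n=2))  # [('maritime', 3), ('competition', 2)]
```

In the study, the same counts would feed a word cloud, with font size proportional to frequency.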

Automated Story Generation with Image Captions and Recursive Calls (이미지 캡션 및 재귀호출을 통한 스토리 생성 방법)

  • Isle Jeon;Dongha Jo;Mikyeong Moon
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.24 no.1
    • /
    • pp.42-50
    • /
    • 2023
  • Technological development has driven digital innovation throughout the media industry, including production and editing technologies, and has diversified how consumers view content through OTT services and streaming. The convergence of big data and deep learning networks has enabled the automatic generation of text in formats such as news articles, novels, and scripts, but studies that reflect the author's intention and generate contextually smooth stories have been insufficient. In this paper, we describe the flow of pictures in a storyboard using image caption generation techniques, and automatically generate story-tailored scenarios through a language model. Using image captioning based on a CNN and an attention mechanism, we generate sentences describing the pictures in the storyboard, and feed the generated sentences into the natural language processing model KoGPT-2 to automatically generate scenarios that meet the planning intention. Through this approach, scenarios tailored to the author's intention and story can be produced in large quantities to ease the burden of content creation, and artificial intelligence can participate in the overall process of digital content production, advancing media intelligence.
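
The recursive-call idea in the title can be sketched as a loop in which each caption, together with the story so far, is fed back into the language model; the `generate` stub below is a hypothetical stand-in for a real KoGPT-2 call:

```python
def generate_story(captions, generate):
    """Recursively extend a story: each caption is appended to the running
    context, and the model's continuation is fed back in as the new context.
    `generate` stands in for a real language-model call (e.g. KoGPT-2)."""
    story = ""
    for caption in captions:
        prompt = (story + " " + caption) if story else caption
        story = prompt + " " + generate(prompt)
    return story

# Stub model: returns a fixed continuation (a real system would call KoGPT-2)
stub = lambda prompt: "Then the scene shifts."
captions = ["A boy walks on the beach.", "A storm gathers."]
print(generate_story(captions, stub))
```

Each round conditions the model on everything generated so far, which is what keeps successive scenario passages contextually connected.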

Comparing Corporate and Public ESG Perceptions Using Text Mining and ChatGPT Analysis: Based on Sustainability Reports and Social Media (텍스트마이닝과 ChatGPT 분석을 활용한 기업과 대중의 ESG 인식 비교: 지속가능경영보고서와 소셜미디어를 기반으로)

  • Jae-Hoon Choi;Sung-Byung Yang;Sang-Hyeak Yoon
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.4
    • /
    • pp.347-373
    • /
    • 2023
  • As the significance of ESG (environmental, social, and governance) management grows in driving sustainable growth, this study examines and compares ESG trends and their interrelationships from both corporate and public viewpoints. Employing a combination of Latent Dirichlet Allocation (LDA) topic modeling and semantic network analysis, we analyzed sustainability reports alongside corresponding social media datasets. In addition, an in-depth examination of social media content was conducted using Joint Sentiment Topic modeling (JST), further enriched by semantic network analysis (SNA). Complementing the text mining analysis with ChatGPT, this study identified 25 distinct ESG topics. It highlighted differences between companies, which aim to avoid risks and build trust, and the general public, whose concerns are more diverse, such as investment options and working conditions. Key terms like 'greenwashing,' 'serious accidents,' and 'boycotts' show that many people doubt how companies handle ESG issues. The findings set the foundation for a plan that serves key ESG stakeholders, including businesses, government agencies, customers, and investors. The study also provides guidance for creating more trustworthy and effective ESG strategies, helping to direct the discussion on ESG effectiveness.
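
The semantic network analysis (SNA) step can be sketched as counting keyword co-occurrences within documents, where each co-occurring pair becomes a weighted edge; the keyword sets below are hypothetical, not the study's data:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_edges(documents):
    """Build weighted edges for a semantic network: two keywords are
    linked each time they appear in the same document."""
    edges = Counter()
    for keywords in documents:
        for pair in combinations(sorted(set(keywords)), 2):
            edges[pair] += 1
    return edges

# Hypothetical keyword sets standing in for ESG report / social-media posts
docs = [
    {"greenwashing", "trust", "risk"},
    {"greenwashing", "boycott"},
    {"trust", "risk"},
]
edges = cooccurrence_edges(docs)
print(edges[("risk", "trust")])  # 2
```

Edge weights like these are what an SNA tool visualizes as the thickness of links between terms.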

Factors Affecting Individual Effectiveness in Metaverse Workplaces and Moderating Effect of Metaverse Platforms: A Modified ESP Theory Perspective (메타버스 작업공간의 개인적 효과에 영향 및 메타버스 플랫폼의 조절효과에 대한 연구: 수정된 ESP 이론 관점으로)

  • Jooyeon Jeong;Ohbyung Kwon
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.4
    • /
    • pp.207-228
    • /
    • 2023
  • After COVID-19, organizations widely adopted platforms such as Zoom, or developed their own proprietary online real-time systems, for remote work, with recent forays into using the metaverse for meetings and publicity. While ongoing studies investigate the impact of avatar customization, expansive virtual environments, and past virtual experiences on participant satisfaction in virtual reality or metaverse settings, the use of the metaverse as a dedicated workspace is still an evolving area. In particular, there is a notable gap in research on the factors influencing the performance of the metaverse as a workspace in non-immersive, work-type metaverses. Unlike studies focusing on immersive virtual reality or on metaverses emphasizing immersion and presence, the majority of contemporary work-oriented metaverses are non-immersive, so understanding the factors that contribute to their success is crucial. Hence, this paper empirically analyzes the factors affecting personal outcomes in the non-immersive metaverse workspace and derives implications from the results. To this end, the study adopts the Embodied Social Presence (ESP) model as a theoretical foundation and proposes a research model modified for the non-immersive metaverse workspace. Following interviews with participants working in non-immersive metaverse workplaces (specifically Gather Town and Ifland), a survey was conducted to gather comprehensive insights. The findings validate that the impact of presence on task engagement and task involvement is moderated by the metaverse platform used.

An Analysis of IPA for the Improvement of University Start-up Support System: Focusing on the Case of the D University (대학 창업지원제도 개선방안 도출을 위한 IPA분석: D대학 사례를 중심으로)

  • Nam Jung-Min;You, Hyun-Kyung;Kim, Yun-Hee;Kang, Eun-Jeong;Lee, Hyun-Seok;Jang, Kyoung-Hwa;Kim, Su-Jin
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.17 no.2
    • /
    • pp.53-64
    • /
    • 2022
  • The purpose of this study is to analyze the difference between the importance and performance of a university start-up support system, focusing on D University students, in order to grasp how the support system provided by the university is perceived by the students who actually use it. From this, a plan for the qualitative growth and advancement of the university start-up system was derived using importance-performance analysis (IPA). The findings are as follows. The importance of every element of university start-up education and the start-up support system is rated higher than its performance, which means that the start-up education and support programs currently implemented by the university are recognized as important but do not play a large role in terms of performance for students. In addition, the highest-priority factors for improvement in the importance-performance matrix were funding and investment support, start-up space and facilities support, management advisory services, patent and intellectual property support, and entrepreneurship field practice. This study can therefore serve as objective data for identifying the factors universities should focus on, establishing a start-up support system from a long-term perspective, and building and operating a start-up support system that reflects students' needs.
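
The IPA matrix places each item in one of four standard quadrants by comparing its mean importance and performance scores against grand-mean cut-offs; a minimal sketch, with hypothetical scores and cut-offs:

```python
def ipa_quadrant(importance, performance, imp_mean, perf_mean):
    """Classify a support-program item into the standard IPA matrix
    quadrants using grand-mean cut-offs."""
    if importance >= imp_mean and performance < perf_mean:
        return "Concentrate here"        # high importance, low performance
    if importance >= imp_mean:
        return "Keep up the good work"   # high importance, high performance
    if performance >= perf_mean:
        return "Possible overkill"       # low importance, high performance
    return "Low priority"                # low importance, low performance

# Hypothetical 5-point scores for one item, with assumed grand means 3.5 / 3.0
print(ipa_quadrant(4.6, 2.4, 3.5, 3.0))  # Concentrate here
```

Items such as "funding and investment support" in the study fall in the "Concentrate here" quadrant: rated important, but underperforming.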

Prediction of Decompensation and Death in Advanced Chronic Liver Disease Using Deep Learning Analysis of Gadoxetic Acid-Enhanced MRI

  • Subin Heo;Seung Soo Lee;So Yeon Kim;Young-Suk Lim;Hyo Jung Park;Jee Seok Yoon;Heung-Il Suk;Yu Sub Sung;Bumwoo Park;Ji Sung Lee
    • Korean Journal of Radiology
    • /
    • v.23 no.12
    • /
    • pp.1269-1280
    • /
    • 2022
  • Objective: This study aimed to evaluate the usefulness of quantitative indices obtained from deep learning analysis of gadoxetic acid-enhanced hepatobiliary phase (HBP) MRI and their longitudinal changes in predicting decompensation and death in patients with advanced chronic liver disease (ACLD). Materials and Methods: We included patients who underwent baseline and 1-year follow-up MRI from a prospective cohort that underwent gadoxetic acid-enhanced MRI for hepatocellular carcinoma surveillance between November 2011 and August 2012 at a tertiary medical center. Baseline liver condition was categorized as non-ACLD, compensated ACLD, and decompensated ACLD. The liver-to-spleen signal intensity ratio (LS-SIR) and liver-to-spleen volume ratio (LS-VR) were automatically measured on the HBP images using a deep learning algorithm, and their percentage changes at the 1-year follow-up (ΔLS-SIR and ΔLS-VR) were calculated. The associations of the MRI indices with hepatic decompensation and a composite endpoint of liver-related death or transplantation were evaluated using a competing risk analysis with multivariable Fine and Gray regression models, including baseline parameters alone and both baseline and follow-up parameters. Results: Our study included 280 patients (153 male; mean age ± standard deviation, 57 ± 7.95 years) with non-ACLD, compensated ACLD, and decompensated ACLD in 32, 186, and 62 patients, respectively. Patients were followed for 11-117 months (median, 104 months). In patients with compensated ACLD, baseline LS-SIR (sub-distribution hazard ratio [sHR], 0.81; p = 0.034) and LS-VR (sHR, 0.71; p = 0.01) were independently associated with hepatic decompensation. The ΔLS-VR (sHR, 0.54; p = 0.002) was predictive of hepatic decompensation after adjusting for baseline variables. ΔLS-VR was an independent predictor of liver-related death or transplantation in patients with compensated ACLD (sHR, 0.46; p = 0.026) and decompensated ACLD (sHR, 0.61; p = 0.023). 
Conclusion: MRI indices automatically derived from the deep learning analysis of gadoxetic acid-enhanced HBP MRI can be used as prognostic markers in patients with ACLD.
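
The longitudinal indices ΔLS-SIR and ΔLS-VR are percentage changes of the baseline value at the 1-year follow-up; a minimal sketch with hypothetical ratio values:

```python
def pct_change(baseline, follow_up):
    """Percentage change of an MRI index at follow-up,
    e.g. delta LS-VR = (follow_up - baseline) / baseline * 100."""
    return (follow_up - baseline) / baseline * 100.0

# Hypothetical liver-to-spleen volume ratios at baseline and 1-year follow-up
delta_ls_vr = pct_change(3.2, 2.8)
print(round(delta_ls_vr, 1))  # -12.5
```

A negative value indicates a declining liver-to-spleen ratio, the direction the sub-distribution hazard ratios above associate with higher risk (sHR below 1 per unit increase).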

Defining Competency for Developing Digital Technology Curriculum (디지털 신기술 교육과정 개발을 위한 역량 정의)

  • Ho Lee;Juhyeon Lee;Junho Bae;Woosik Shin;Hee-Woong Kim
    • Knowledge Management Research
    • /
    • v.25 no.1
    • /
    • pp.135-154
    • /
    • 2024
  • As the digital transformation accelerates, industry demand for professionals with competencies in digital technologies such as artificial intelligence and big data is increasing. In response, the government is developing various educational programs to nurture talent in these emerging technology fields. However, the lack of a clear definition of competency, which is the foundation of curriculum development and operation, has made it difficult to design digital technology education programs effectively. This study systematically reviews the definitions and characteristics of competency presented in prior research through a literature review. In-depth interviews were then conducted with 30 experts in emerging technology fields to derive a definition of competency suitable for technology education programs. This research defines competency for the development of technology education programs as 'a set of one or more knowledge and skills required to perform effectively at the expected level of a given task.' The study also identifies the elements of competency, including knowledge and skills, as well as the principles of competency construction. The definition and characteristics of competency provided in this study can be used to create more systematic and effective educational programs in emerging technology fields and to bridge the gap between education and industry practice.

Domain Knowledge Incorporated Local Rule-based Explanation for ML-based Bankruptcy Prediction Model (머신러닝 기반 부도예측모형에서 로컬영역의 도메인 지식 통합 규칙 기반 설명 방법)

  • Soo Hyun Cho;Kyung-shik Shin
    • Information Systems Review
    • /
    • v.24 no.1
    • /
    • pp.105-123
    • /
    • 2022
  • Thanks to the remarkable success of artificial intelligence (AI) techniques, new possibilities for applying them to real-world problems have emerged. One prominent application is the bankruptcy prediction model, as it is often used as a basic knowledge base for credit-scoring models in the financial industry. As a result, there has been extensive research on how to improve the prediction accuracy of such models. However, despite their impressive performance, machine learning (ML)-based models are difficult to deploy because of their intrinsic opacity, especially in fields that require or value an explanation of the results the model produces. The financial domain is one of the areas where explanation matters to stakeholders such as domain experts and customers. In this paper, we propose a novel approach that incorporates financial domain knowledge into local rule generation to provide instance-level explanations for a bankruptcy prediction model. The results show that the proposed method successfully selects and classifies the extracted rules based on their feasibility and the information they convey to users.
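
A local, rule-based explanation of the kind the paper proposes can be sketched as reporting which domain-knowledge rules a single instance satisfies; the financial-ratio rules, thresholds, and firm below are hypothetical illustrations, not the paper's rule set:

```python
def explain_instance(instance, rules):
    """Return the names of the domain rules an instance satisfies,
    as a local, rule-based explanation of a model's prediction."""
    fired = []
    for name, condition in rules:
        if condition(instance):
            fired.append(name)
    return fired

# Hypothetical financial-ratio rules encoding domain knowledge
rules = [
    ("low liquidity: current_ratio < 1.0", lambda x: x["current_ratio"] < 1.0),
    ("high leverage: debt_ratio > 0.8",    lambda x: x["debt_ratio"] > 0.8),
]
firm = {"current_ratio": 0.7, "debt_ratio": 0.9}
print(explain_instance(firm, rules))
```

The fired rules form a human-readable rationale that can accompany the opaque model's bankruptcy prediction for that one firm.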

Safety Verification Techniques of Privacy Policy Using GPT (GPT를 활용한 개인정보 처리방침 안전성 검증 기법)

  • Hye-Yeon Shim;MinSeo Kweun;DaYoung Yoon;JiYoung Seo;Il-Gu Lee
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.34 no.2
    • /
    • pp.207-216
    • /
    • 2024
  • As big data infrastructure has been built up through the Fourth Industrial Revolution, personalized services have increased rapidly. As a result, the amount of personal information collected by online services has grown, raising concerns about leakage of users' personal information and privacy infringement. Online service providers publish privacy policies to address these concerns, but because the policies are long and complex, it is difficult for users to identify risky clauses directly, and the policies are often misused. Therefore, a method is needed that can automatically check whether a privacy policy is safe. However, conventional blacklist- and machine learning-based verification techniques are difficult to extend and have low accessibility. To solve these problems, this paper proposes a safety verification technique for privacy policies using the GPT-3.5 API, a generative artificial intelligence service. Classification can be performed even in a new environment, showing that the general public, without expertise, can easily inspect a privacy policy. In the experiments, we measured how accurately the blacklist-based and GPT-based techniques classify safe and unsafe sentences, and how much time the classification takes. According to the experimental results, the proposed technique showed 10.34% higher accuracy on average than the conventional blacklist-based sentence safety verification technique.
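
The blacklist baseline used in the comparison, and the shape of a prompt for the GPT path, can be sketched as follows; the blacklist phrases and prompt wording are hypothetical, and a real system would send the prompt to the GPT-3.5 API and parse the reply rather than build the string locally:

```python
def blacklist_classify(sentence, blacklist):
    """Baseline from the paper's comparison: flag a privacy-policy
    sentence as unsafe if it contains any blacklisted phrase."""
    lowered = sentence.lower()
    return "unsafe" if any(term in lowered for term in blacklist) else "safe"

def build_gpt_prompt(sentence):
    """Prompt sketch for the GPT-3.5 path; the real system would submit
    this to the API and read back a one-word classification."""
    return ("Classify the following privacy-policy sentence as 'safe' or "
            f"'unsafe' for the user:\n\n{sentence}")

# Hypothetical blacklist and policy sentence
blacklist = {"third parties without consent", "indefinitely retain"}
s = "We may share your data with third parties without consent."
print(blacklist_classify(s, blacklist))  # unsafe
```

The baseline's weakness is visible in the sketch: any unsafe phrasing absent from the fixed blacklist slips through, whereas the generative model can judge unseen wording.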