• Title/Summary/Keyword: python


Enhancing the Text Mining Process by Implementation of Average-Stochastic Gradient Descent Weight Dropped Long-Short Memory

  • Annaluri, Sreenivasa Rao; Attili, Venkata Ramana
    • International Journal of Computer Science & Network Security / v.22 no.7 / pp.352-358 / 2022
  • Text mining is an important process for analyzing data collected from sources such as video, audio, and social media. Tools such as Natural Language Processing (NLP) are widely used in real-time applications. In earlier research, text mining approaches were implemented using long short-term memory (LSTM) networks. In this paper, text mining is performed using the average-stochastic gradient descent weight-dropped (AWD)-LSTM technique to obtain better accuracy and performance. The proposed model is demonstrated on internet movie database (IMDB) reviews. The Python language was used for the implementation because of its adaptability and flexibility when dealing with massive data sets and databases. The results show that the proposed LSTM plus weight-drop plus embedding model achieved an accuracy of 88.36%, compared with 85.64% for the previous AWD-LSTM model and 85.16% for a plain LSTM model. Finally, the loss decreased from 0.341 to 0.299 with the proposed model.
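The abstract describes a weight-dropped LSTM trained with averaged SGD on IMDB reviews. As a minimal sketch only (not the authors' code), the PyTorch fragment below shows an embedding + LSTM sentiment classifier trained with the ASGD optimizer; the full AWD-LSTM additionally applies DropConnect to the hidden-to-hidden weights, and all dimensions and dropout rates here are illustrative.

```python
import torch
import torch.nn as nn

class LSTMSentiment(nn.Module):
    """Illustrative IMDB-style sentiment model; not the paper's exact architecture."""
    def __init__(self, vocab_size=25000, embed_dim=300, hidden_dim=256, dropout=0.4):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.drop = nn.Dropout(dropout)            # dropout on embeddings and outputs
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, 1)         # binary positive/negative logit

    def forward(self, token_ids):
        emb = self.drop(self.embedding(token_ids))
        _, (hidden, _) = self.lstm(emb)
        return self.fc(self.drop(hidden[-1]))

model = LSTMSentiment()
# ASGD (averaged SGD) supplies the "average stochastic gradient descent" part of AWD-LSTM.
optimizer = torch.optim.ASGD(model.parameters(), lr=0.01)
```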

Capital Structure of Malaysian Companies: Are They Different During the COVID-19 Pandemic?

  • MOHD AZHARI, Nor Khadijah; MAHMUD, Radziah; SHAHARUDDIN, Sara Naquia Hanim
    • The Journal of Asian Finance, Economics and Business / v.9 no.4 / pp.239-250 / 2022
  • This study examined the level of capital structure and its determinants for publicly traded companies in Malaysia before and during the COVID-19 pandemic. The data were examined using the Python programming language and time-series financial data from 2,784 quarterly observations in 2019 and 2020. According to the findings, the maximum debt was larger before the COVID-19 period. During the COVID-19 period, short-term debts and total debts both decreased slightly, while long-term debts increased marginally. This research therefore demonstrates that the capital structure changed slightly during the COVID-19 period. The findings imply that, independent of the capital structure proxies, tangibility, liquidity, and business size had an impact on capital structure in both periods. Profitability had a significant impact on total debts both before and during the COVID-19 crisis. While higher-profit enterprises appear to have had lower short-term debts before the COVID-19 period, they were also more likely to have lower long-term debts during the COVID-19 period. Even though growing companies tended to have higher short-term debts, and thus total debts, during those periods, long-term debts were unaffected by potential growth.
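The abstract does not specify the exact estimation procedure, but the determinant analysis it describes is commonly run in Python with pandas and statsmodels. The sketch below is a hedged illustration of that style of regression; the column names and synthetic data are hypothetical stand-ins for the quarterly observations.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic firm-quarter observations standing in for the real data set
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "total_debt_ratio": rng.uniform(0.0, 0.8, n),
    "tangibility": rng.uniform(0.1, 0.9, n),
    "liquidity": rng.uniform(0.5, 3.0, n),
    "firm_size": rng.normal(15, 2, n),        # e.g. log of total assets
    "profitability": rng.normal(0.05, 0.03, n),
})

# OLS: leverage explained by the determinants named in the abstract
model = smf.ols(
    "total_debt_ratio ~ tangibility + liquidity + firm_size + profitability",
    data=df,
).fit()
print(model.summary())
```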

Analysis of Social Media Utilization based on Big Data-Focusing on the Chinese Government Weibo

  • Li, Xiang; Guo, Xiaoqin; Kim, Soo Kyun; Lee, Hyukku
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.8 / pp.2571-2586 / 2022
  • The rapid popularity of government social media has generated huge amounts of text data, and the analysis of these data has gradually become a focus of digital government research. This study uses the Python language to analyze big data from Chinese provincial government Weibo accounts. First, a web crawler approach is used to collect and statistically describe over 360,000 posts from 31 provincial government microblogs in China, covering the period from January 2018 to April 2022. Second, a word segmentation engine is constructed and the text data are analyzed using word-cloud word frequencies as well as semantic relationships. Finally, the text data are analyzed for sentiment using natural language processing methods, and the text topics are studied using the LDA algorithm. The results show that, first, the number and scale of posts on Chinese government Weibo have grown rapidly. Second, government Weibo has certain social attributes, and epidemics, people's livelihood, and services have become its focus. Third, negative sentiment accounts for more than 30% of government Weibo content. The classified topics show that the epidemic and epidemic prevention and control overshadowed the other topics, which inhibits the diversification of government Weibo.
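For the word-frequency and topic-modeling steps mentioned above, a minimal scikit-learn sketch is shown below (not the authors' pipeline). Real Weibo posts would first be segmented into words, for example with jieba; the toy documents here stand in for segmented post text.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-ins for segmented Weibo posts
posts = [
    "epidemic prevention control vaccine notice",
    "livelihood employment service support policy",
    "epidemic vaccine testing prevention notice",
    "culture tourism service livelihood activity",
]

vectorizer = CountVectorizer()
dtm = vectorizer.fit_transform(posts)          # document-term matrix (word frequencies)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

# Print the top words of each discovered topic
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-3:][::-1]]
    print(f"topic {k}: {top}")
```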

Real-Time Stock Price Prediction using Apache Spark (Apache Spark를 활용한 실시간 주가 예측)

  • Dong-Jin Shin; Seung-Yeon Hwang; Jeong-Joon Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.4 / pp.79-84 / 2023
  • Apache Spark, which offers among the fastest processing speeds of recent distributed and parallel processing technologies, provides both real-time streaming and machine learning functions. Although official documentation describes each of these functions, it does not show how to combine them to predict a specific value in real time. Therefore, in this paper, we conducted a study to predict data values in real time by fusing these functions. In the overall configuration, stock price data are first downloaded using the Python programming language. A regression model is then created with the machine learning function, and the adjusted closing price in the stock price data is predicted in real time by fusing the real-time streaming function with the machine learning model.
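As a rough illustration of such a fusion (not the paper's implementation), the PySpark sketch below fits a regression model on historical prices and applies it to a Structured Streaming source. The file paths, schema, and feature columns are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, DoubleType
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("realtime-stock").getOrCreate()

schema = (StructType()
          .add("open", DoubleType()).add("high", DoubleType())
          .add("low", DoubleType()).add("volume", DoubleType())
          .add("adj_close", DoubleType()))

# 1) Fit a regression model on historical batch data
hist = spark.read.schema(schema).csv("history/")              # hypothetical path
assembler = VectorAssembler(inputCols=["open", "high", "low", "volume"],
                            outputCol="features")
model = LinearRegression(featuresCol="features",
                         labelCol="adj_close").fit(assembler.transform(hist))

# 2) Apply the fitted model to a streaming source of newly arriving rows
stream = spark.readStream.schema(schema).csv("incoming/")     # hypothetical path
pred = model.transform(assembler.transform(stream))

query = (pred.select("prediction").writeStream
         .outputMode("append").format("console").start())
query.awaitTermination()
```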

Welfare Policy Visualization Analysis using Big Data -Chungcheong- (빅데이터를 활용한 복지정책 시각화분석 -충청도 중심으로-)

  • Dae-Yu Kim; Won-Shik Na
    • Advanced Industrial SCIence / v.2 no.1 / pp.15-20 / 2023
  • The purpose of this study is to analyze the changes and importance of welfare policies in Chungcheong Province using big data analysis technology in the era of the Fourth Industrial Revolution, and to propose stable welfare policies for all generations, including the socially underprivileged. Big data related to Chungcheong-do policy is processed with Python code, and stable government policies are proposed based on the results of the visualization analysis. As a result of the study, the keywords of Chungcheong-do government policy were confirmed to be, in order, region, society, government and support, education, and women, and welfare policy should be strengthened with a focus on improving local health policy and social welfare. As a future research direction, it will be necessary to compare overseas cases and to make policy proposals on the stable impact of national welfare policies.
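The abstract does not detail the visualization code, so the following is only a minimal sketch of a keyword-frequency chart in Python. The tokens are illustrative English stand-ins; real Korean policy text would first need a morphological analyzer such as KoNLPy.

```python
from collections import Counter
import matplotlib.pyplot as plt

# Illustrative tokens standing in for terms extracted from policy documents
tokens = ["region", "society", "government", "support", "education",
          "women", "region", "society", "government", "region"]

labels, counts = zip(*Counter(tokens).most_common(5))   # top keywords by frequency

plt.bar(labels, counts)                                 # simple bar-chart visualization
plt.title("Policy keyword frequency (illustrative)")
plt.tight_layout()
plt.show()
```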

Development of Autonomous Navigation System Using Simulation Based on Unity-ROS (Unity-ROS 시뮬레이터 기반의 자율운항 시스템 개발 및 검증)

  • Kiwon Kim; Hyuntae Bang; Jeonghwa Seo; Wonkeun Youn
    • Journal of the Society of Naval Architects of Korea / v.60 no.6 / pp.406-415 / 2023
  • In this study, we focused on developing and verifying ship collision avoidance algorithms using the Unity simulator and ROS (Robot Operating System). ROS is used to establish an environment in which communication between different operating systems is possible, and a dynamic model of a ship is constructed within the Unity simulator. The LiDAR data collected in the Unity environment are passed through ROS to a Python-based system. In the Python-based system, control commands are generated from these data by the logic of the collision avoidance algorithm and transferred back to Unity to control the movement of the virtual ship. Through the developed simulation system, the reliability of the collision avoidance algorithm was confirmed for ships of two different forms in an environment similar to the actual physical world. As a result, it was confirmed on the simulator that collisions could be avoided even in environments with various types of obstacles, and that the avoidance characteristics depending on the dynamics of the ship could be analyzed.
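A hedged sketch of the Python-side ROS node described above is given below (not the authors' code): it subscribes to LiDAR scans coming from the Unity simulator and publishes a steering command back. The topic names and the avoidance rule are hypothetical.

```python
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

cmd_pub = None

def on_scan(scan):
    # Very naive avoidance rule: turn away if any obstacle is closer than 20 m
    cmd = Twist()
    cmd.linear.x = 1.0                      # keep moving forward
    if min(scan.ranges) < 20.0:
        cmd.angular.z = 0.5                 # yaw away from the nearby obstacle
    cmd_pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("collision_avoidance")
    cmd_pub = rospy.Publisher("/ship/cmd_vel", Twist, queue_size=1)   # hypothetical topic
    rospy.Subscriber("/ship/lidar", LaserScan, on_scan)               # hypothetical topic
    rospy.spin()
```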

FPGA-Based Post-Quantum Cryptography Hardware Accelerator Design using High Level Synthesis (HLS 를 이용한 FPGA 기반 양자내성암호 하드웨어 가속기 설계)

  • Haesung Jung; Hanyoung Lee; Hanho Lee
    • Transactions on Semiconductor Engineering / v.1 no.1 / pp.1-8 / 2023
  • This paper presents the design and implementation of CRYSTALS-Kyber, a next-generation post-quantum cryptography algorithm, as a hardware accelerator on an FPGA using High-Level Synthesis (HLS). We optimized the CRYSTALS-Kyber algorithm using various directives provided by Vitis HLS, configured the AXI interface, and designed a hardware accelerator that can be implemented on an FPGA. We then used the Vivado tool to design the IP block and implement it on the ZYNQ ZCU106 FPGA. Finally, video was recorded and H.264-compressed with Python code in the PYNQ framework, and video encryption and decryption were accelerated using the CRYSTALS-Kyber hardware accelerator implemented on the FPGA.
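In the PYNQ framework, an HLS-generated accelerator is typically driven from Python by loading an overlay and streaming buffers through an AXI DMA. The fragment below is only a generic sketch of that pattern; the bitstream name, IP instance name, and buffer size are hypothetical, and the real Kyber core's interface will differ.

```python
from pynq import Overlay, allocate
import numpy as np

overlay = Overlay("kyber_accel.bit")          # hypothetical bitstream
dma = overlay.axi_dma_0                       # hypothetical AXI DMA instance

# Move one block of video data through the accelerator over AXI-Stream
in_buf = allocate(shape=(1024,), dtype=np.uint8)
out_buf = allocate(shape=(1024,), dtype=np.uint8)
in_buf[:] = 0                                 # placeholder frame bytes

dma.sendchannel.transfer(in_buf)
dma.recvchannel.transfer(out_buf)
dma.sendchannel.wait()
dma.recvchannel.wait()

encrypted = bytes(out_buf)                    # data processed by the FPGA core
```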

Application of Decision Tree Algorithm for Automating Public Survey Performance Review (공공측량 성과심사 자동화를 위한 결정트리 알고리즘의 적용)

  • Mi-Jin Hyeon; Cheol Jin; Myung-Jin Park; Hyun Choi
    • Journal of the Korean Society of Industry Convergence / v.27 no.2_2 / pp.333-341 / 2024
  • In the current public survey performance review, samples are extracted according to a set screening ratio and the extracted samples are examined to determine whether the survey results are suitable or inadequate. The examiner directly judges the survey results submitted by the performer and extracts samples in consideration of the various field conditions and topography of each subject. However, because samples are extracted with different methods for each subject and depend on the judgment of the examiner, the fairness of the examination needs to be secured. Accordingly, in order to automate sampling for the public survey performance review, the detailed sampling criteria of reviewers were investigated to prepare a volume calculation table, and the automation of sampling using Python was studied. In addition, by reviewing which items can and cannot be automated, the application of a decision tree algorithm to automated sampling was reviewed.
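As an illustration of applying a decision tree to such a sampling decision (not the actual review criteria), the scikit-learn sketch below trains a small classifier; the feature names and tiny training set are hypothetical stand-ins for the volume calculation table.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical review items: volume, terrain difficulty, past defects, and sampling decision
train = pd.DataFrame({
    "survey_volume": [10, 250, 40, 500, 120, 30],
    "terrain_difficulty": [1, 3, 2, 3, 2, 1],     # 1 = flat ... 3 = mountainous
    "prior_defects": [0, 2, 0, 3, 1, 0],
    "sampled": [0, 1, 0, 1, 1, 0],                # 1 = include in the review sample
})

features = ["survey_volume", "terrain_difficulty", "prior_defects"]
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(train[features], train["sampled"])

# The learned rules can be printed and compared against the reviewers' written criteria
print(export_text(clf, feature_names=features))
```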

A Case Study on Running a Game-based Programming Class for Lower Grades (저학년을 위한 게임 기반 프로그래밍 수업 운영 사례 연구)

  • Do-hyeon Choi
    • Journal of Practical Engineering Education / v.16 no.2 / pp.151-157 / 2024
  • Most existing game-based education programmes for lower grades are simple block-coding studies, and there is a lack of examples of programming-intensive classes. In this study, we ran a Minecraft-based Python coding fundamentals class for three classes at a local elementary school during a two-week school holiday. To improve class interest and motivation, the learning programme was reorganised from the standard programme on the official website, for example by building quests through a LAN party and self-scripting in-game. In addition, we analysed satisfaction with and preferences for the class topics through a survey and obtained meaningful results for future educational programme development. This study is significant as basic research for the design and development of game-based educational programmes for all age groups.

Spatial Rainfall Considering Elevation and Estimation of Rain Erosivity Factor R in Revised USLE Using 1 Minute Rainfall Data and Program Development (고도를 고려한 공간강우분포와 1분 강우자료를 이용한 RUSLE의 강우침식인자(R) 산정 및 프로그램 개발)

  • JUNG, Chung-Gil; JANG, Won-Jin; KIM, Seong-Joon
    • Journal of the Korean Association of Geographic Information Studies / v.19 no.4 / pp.130-145 / 2016
  • Soil erosion processes are affected by weather factors such as rainfall, temperature, wind, and humidity. Among these factors, rainfall directly influences soil erosion by detaching soil particles. The kinetic energy of rainfall and the water flow caused by rain entrain and transport soil particles downstream. Therefore, in order to estimate soil erosion, it is important to accurately determine the rainfall erosivity factor (R) in RUSLE (Revised Universal Soil Loss Equation). The objective of this study is to evaluate the average annual R using 14 years (2002~2015) of 1-minute rainfall data from 55 KMA (Korea Meteorological Administration) weather stations. The R results from 1-minute rainfall were compared with previous R studies that used 1-hour rainfall data. The coefficients of determination ($R^2$) between R calculated from 1-minute rainfall data and annual rainfall were 0.70-0.98. Estimating the 30-minute rainfall intensity from 1-minute rainfall data gave better $R^2$ results than using 1-hour rainfall data. For estimation of the spatial distribution of rainfall erosivity (R), the distribution of annual rainfall was estimated by IDW (Inverse Distance Weighted) interpolation, taking elevation into consideration. Because of the computational burden, the R calculation process was programmed as a Python GUI (Graphical User Interface) tool.
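One step described above, deriving the maximum 30-minute rainfall intensity (I30) from 1-minute records, can be sketched with pandas as follows (not the authors' program). The rainfall series is synthetic, and the storm-energy term needed for the full EI30 calculation is omitted.

```python
import numpy as np
import pandas as pd

# Synthetic 1-minute rainfall depths (mm) for a 2-hour storm
minutes = pd.date_range("2015-07-01 14:00", periods=120, freq="1min")
rain_mm = pd.Series(np.random.default_rng(1).gamma(0.3, 0.5, 120), index=minutes)

# Rolling 30-minute accumulation, converted to an intensity in mm/h
rolling_30min = rain_mm.rolling("30min").sum()
i30 = rolling_30min.max() * 2.0          # mm per 30 min -> mm per hour

print(f"Maximum 30-minute intensity: {i30:.1f} mm/h")
```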