• Title/Summary/Keyword: machine learning (ML)


An Extended Work Architecture for Online Threat Prediction in Tweeter Dataset

  • Sheoran, Savita Kumari;Yadav, Partibha
    • International Journal of Computer Science & Network Security / v.21 no.1 / pp.97-106 / 2021
  • Social networking platforms have become a smart way for people to interact and meet on the internet. They provide a way to keep in touch with friends, family, colleagues, business partners, and many more. Among the various social networking sites, Twitter is one of the fastest-growing, where users can read the news, share ideas, discuss issues, etc. Due to its vast popularity, the accounts of legitimate users are vulnerable to a large number of threats, with spam and malware among the most damaging found on Twitter. Therefore, to ensure seamless service, Twitter must be secured against malicious users by identifying them in advance. Various studies have applied Machine Learning (ML)-based approaches to detect spammers on Twitter. This research devises a secure system based on hybrid Cosine and Soft Cosine similarity measures combined with a Genetic Algorithm (GA) and an Artificial Neural Network (ANN) to secure the Twitter network against spammers. The similarity among tweets is determined using Cosine together with Soft Cosine similarity applied to the Twitter dataset. The GA is used to enhance training with minimum training error by selecting the most suitable features according to the designed fitness function. Tweets are classified as spammer or non-spammer by the ANN together with a voting rule. True Positive Rate (TPR), False Positive Rate (FPR), and classification accuracy are used as the evaluation parameters. The simulation results reveal that the proposed model outperforms the existing state of the art.
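
A minimal sketch of the core similarity step, assuming toy bag-of-words vectors and an invented term-similarity matrix; the paper's GA feature selection and ANN voting stages are not reproduced:

```python
# Hedged sketch: cosine vs. soft cosine similarity between two tweets.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def soft_cosine(a, b, S):
    # soft cosine generalizes cosine with a term-term similarity matrix S
    return (a @ S @ b) / (np.sqrt(a @ S @ a) * np.sqrt(b @ S @ b))

vocab = ["free", "win", "prize", "meeting", "project"]  # vector order
t1 = np.array([1.0, 1.0, 1.0, 0.0, 0.0])   # "free win prize"
t2 = np.array([1.0, 0.0, 1.0, 0.0, 0.0])   # "free prize"

# toy similarity matrix: identity plus some mass for related spam terms
S = np.eye(5)
S[0, 1] = S[1, 0] = 0.4   # "free" ~ "win"
S[1, 2] = S[2, 1] = 0.5   # "win" ~ "prize"

print("cosine:", cosine(t1, t2))
print("soft cosine:", soft_cosine(t1, t2, S))
```

Soft cosine reduces to the ordinary cosine when S is the identity; the off-diagonal entries let semantically related terms reinforce each other.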

Exploring AI Principles in Global Top 500 Enterprises: A Delphi Technique of LDA Topic Modeling Results

  • Hyun BAEK
    • Korean Journal of Artificial Intelligence / v.11 no.2 / pp.7-17 / 2023
  • Artificial Intelligence (AI) technology has already penetrated deeply into our daily lives, and we enjoy its convenience anytime, anywhere, sometimes without even noticing it. However, because AI is imitative intelligence based on human intelligence, it inevitably reflects both the good and evil sides of humans, which is why ethical principles are essential. The starting point of this study is the AI principles that companies and organizations adopt when developing products. Since the late 2010s, studies on the ethics and principles of AI have been actively published. This study focuses on the AI principles declared by global companies currently developing products with AI technology. We surveyed the AI principles of the Global 500 companies by market capitalization at a specific point in time and collected the AI principles explicitly declared by 46 of them. This text data was first analyzed with LDA (Latent Dirichlet Allocation) topic modeling, a Machine Learning (ML) technique, and we then conducted a Delphi technique to reach a meaningful consensus on the primary analysis results. Based on our results, we expect to provide meaningful guidelines for AI-related government policy, corporate ethics declarations, and academic research, where debates on AI ethics and principles have recently become frequent.
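
The topic-modeling step can be sketched with scikit-learn's LDA implementation; the four documents below are invented placeholders for the 46 collected principle statements, and the Delphi consensus stage is not shown:

```python
# Hedged sketch: LDA topic modeling over declared AI-principle texts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "fairness transparency accountability in ai systems",
    "privacy security of user data in ai products",
    "human oversight safety and reliability of models",
    "transparency explainability and fairness for users",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = comp.argsort()[-4:][::-1]      # four heaviest terms per topic
    print(f"topic {k}:", [terms[i] for i in top])
```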

'Knowing' with AI in construction - An empirical insight

  • Ramalingham, Shobha;Mossman, Alan
    • International conference on construction engineering and project management / 2022.06a / pp.686-693 / 2022
  • Construction is a collaborative endeavor. The complexity of delivering construction projects successfully is shaped by the need for effective collaboration among a multitude of stakeholders throughout the project life-cycle. Technologies such as Building Information Modelling and relational project delivery approaches such as Alliancing and Integrated Project Delivery have developed to address this conundrum. However, with the onset of the pandemic, the digital economy has surged worldwide, and advances in technologies such as machine learning (ML) and Artificial Intelligence (AI) have taken deep root across specializations and domains, to the point of rivaling capabilities of the human mind. Several recent studies have explored the role of AI in the construction process and highlighted its benefits. In contrast, the organization studies literature has highlighted the fear that tasks currently done by humans will be done by AI in the future. Motivated by these insights, and understanding that construction is a labour-intensive sector where knowledge is both fragmented and predominantly tacit, this paper explores the integration of AI in construction processes across project phases, from planning and scheduling to execution and maintenance operations, using literary evidence and experiential insights. The findings show that AI can complement human skills rather than substitute for them. This preliminary study is expected to be a stepping stone for further research and implementation in practice.


Prediction of karst sinkhole collapse using a decision-tree (DT) classifier

  • Boo Hyun Nam;Kyungwon Park;Yong Je Kim
    • Geomechanics and Engineering / v.36 no.5 / pp.441-453 / 2024
  • Sinkhole subsidence and collapse is a common geohazard in karst areas such as the state of Florida, United States of America. Predicting sinkhole occurrence requires understanding the formation mechanism of sinkholes and the underlying karst hydrogeology, so investigating the factors affecting sinkholes is an essential first step. The main objectives of the present study are (1) the development of a machine learning (ML)-based model, namely a C5.0 decision tree (C5.0 DT), for the prediction of sinkhole susceptibility, which accounts for a sinkhole/subsidence inventory and sinkhole contributing factors (e.g., geological/hydrogeological), and (2) the construction of a regional-scale sinkhole susceptibility map. The study area is east central Florida (ECF), where the cover-collapse type is commonly reported. The C5.0 DT algorithm was used to account for twelve (12) identified hydrogeological factors. In this study, a total of 1,113 sinkholes in ECF were identified, and the dataset was randomly divided into 70% and 30% subsets for training and testing, respectively. The performance of the sinkhole susceptibility model was evaluated using a receiver operating characteristic (ROC) curve, particularly the area under the curve (AUC). The C5.0 model showed a high prediction accuracy of 83.52%. It is concluded that a decision tree is a promising tool and classifier for spatial prediction of karst sinkholes and subsidence in the ECF area.
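
A hedged sketch of the workflow, using scikit-learn's CART decision tree as a stand-in for C5.0 (which has no canonical Python port) and a synthetic feature matrix in place of the ECF inventory:

```python
# Hedged sketch: decision tree on 12 contributing factors, 70/30 split,
# ROC-AUC evaluation. Data is synthetic, not the paper's inventory.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1113, 12))          # 12 hydrogeological factors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1113) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC: {auc:.3f}")
```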

Risk Estimates of Structural Changes in Freight Rates

  • Hyunsok Kim
    • Journal of Korea Port Economic Association / v.39 no.4 / pp.255-268 / 2023
  • This paper focuses on tests for generalized fluctuation in the context of assessing structural changes in linear regression models. For efficient estimation, there has been growing interest in structural-change monitoring, particularly in fields such as artificial intelligence (AI) and machine learning (ML). Specifically, the investigation elucidates the implementation of structural-change tests and presents a coherent approach for practical application to the BDI (Baltic Dry Index), a representative maritime trade index in the global market. The framework encompasses a range of F-statistic-type methodologies for fitting, visualizing, and evaluating empirical fluctuation processes, including CUSUM, MOSUM, and estimates-based processes. Additionally, it provides functionality for the computation and evaluation of pruned exact linear time (PELT) changepoint sequences.
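
The flavor of these fluctuation tests can be sketched with a scaled CUSUM path on a synthetic series containing one mean shift; the actual BDI data and the PELT computation are not reproduced (libraries such as ruptures implement PELT):

```python
# Hedged sketch: CUSUM-style fluctuation test on a synthetic series
# with one mean shift at t = 100.
import numpy as np

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])

resid = y - y.mean()
cusum = np.cumsum(resid) / (y.std(ddof=1) * np.sqrt(len(y)))

# a large excursion of the scaled CUSUM path signals instability;
# the location of the extremum roughly locates the break
k = np.argmax(np.abs(cusum))
print(f"max |CUSUM| = {np.abs(cusum).max():.2f} at t = {k}")
```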

Message Security Level Integration with IoTES: A Design Dependent Encryption Selection Model for IoT Devices

  • Saleh, Matasem;Jhanjhi, NZ;Abdullah, Azween;Saher, Raazia
    • International Journal of Computer Science & Network Security / v.22 no.8 / pp.328-342 / 2022
  • The Internet of Things (IoT) is a technology that offers lucrative services across industries to benefit human communities. Important information about people and their surroundings is gathered to ensure the availability of these services. This data is vulnerable to cybersecurity threats, since it is sent over the internet and kept in third-party databases. Data encryption is therefore an integral approach for IoT device designers to protect IoT data, yet for a variety of reasons designers have struggled to identify appropriate encryption to use. The static support provided by researchers and concerned organizations to help designers pick appropriate encryption costs significant time and effort. IoTES is a web app that uses machine learning (ML) to address this lack of support, as ML has been shown to improve data-driven human decision-making. IoTES still has some weaknesses, which are highlighted in this research, and these shortcomings must be addressed to improve the support it offers. This study proposes the "IoTES with Security" model, which adds support for the security level provided by each encryption algorithm to the traditional IoTES model. We evaluated our technique on encryption algorithms with available security levels and compared the accuracy of our model with traditional IoTES. Our model improves IoTES by helping users make security-oriented decisions when choosing the appropriate algorithm for their IoT data.
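
A hedged sketch of the selection idea only: pick the cheapest cipher whose (assumed) security level meets the message's requirement under a device constraint. The cipher table and the two scales are illustrative assumptions, not the IoTES model's actual data or ML pipeline:

```python
# Hedged sketch: security-level-aware encryption selection.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Cipher:
    name: str
    security_level: int   # assumed relative scale: higher = stronger
    cost: int             # assumed relative compute cost on the device

# illustrative table only -- not the IoTES model's actual algorithm set
CIPHERS = [
    Cipher("PRESENT-80", 1, 1),
    Cipher("AES-128", 2, 2),
    Cipher("AES-256", 3, 3),
]

def select(required_level: int, max_cost: int) -> Optional[str]:
    """Cheapest cipher meeting the required security level."""
    ok = [c for c in CIPHERS
          if c.security_level >= required_level and c.cost <= max_cost]
    return min(ok, key=lambda c: c.cost).name if ok else None

print(select(required_level=2, max_cost=3))   # -> AES-128
```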

A Unicode based Deep Handwritten Character Recognition model for Telugu to English Language Translation

  • BV Subba Rao;J. Nageswara Rao;Bandi Vamsi;Venkata Nagaraju Thatha;Katta Subba Rao
    • International Journal of Computer Science & Network Security / v.24 no.2 / pp.101-112 / 2024
  • Telugu is considered the fourth most used language in India, especially in the regions of Andhra Pradesh, Telangana, Karnataka, etc., and is a widely growing spoken language internationally. The language comprises various dependent and independent vowels, consonants, and digits, yet the enhancement of Telugu Handwritten Character Recognition (HCR) has seen little progress. HCR is a neural-network technique for converting a document image into editable text, which can serve many other applications and saves the time and effort of starting over from the beginning every time. In this work, a Unicode-based Handwritten Character Recognition (U-HCR) model is developed for translating handwritten Telugu characters into the English language. Using the Centre of Gravity (CG), the model can easily divide a compound character into individual characters with the help of Unicode values. Both online and offline Telugu character datasets were used for training. To extract features from the scanned image, we used a convolutional neural network along with Machine Learning classifiers such as Random Forest and Support Vector Machine. Stochastic Gradient Descent (SGD), Root Mean Square Propagation (RMS-P), and Adaptive Moment Estimation (ADAM) optimizers are used to enhance the performance of U-HCR and to reduce the loss function value. On both the online and offline datasets, the proposed model showed promising results, with accuracies of 90.28% for SGD, 96.97% for RMS-P, and 93.57% for ADAM.
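
The centre-of-gravity computation that supports compound-character splitting can be sketched on a toy binarized glyph; the CNN feature extractor, classifiers, and Unicode mapping are not shown:

```python
# Hedged sketch: centre of gravity (CG) of a binarized glyph, the
# quantity used to help split compound characters. The 5x5 bitmap is
# a toy stand-in for a scanned Telugu character.
import numpy as np

glyph = np.array([
    [0, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
], dtype=float)

ys, xs = np.nonzero(glyph)            # coordinates of foreground pixels
cg_row, cg_col = ys.mean(), xs.mean() # mean position = centre of gravity
print(f"CG at (row={cg_row:.2f}, col={cg_col:.2f})")
```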

Comparison of Histogram Equalization Techniques Using Normalization in Thoracic Computed Tomography

  • Lee, Young-Jun;Min, Jung-Whan
    • Journal of radiological science and technology / v.44 no.5 / pp.473-480 / 2021
  • The purpose of this study was to propose a method for improving image quality in CT and X-ray scans, especially in the lung region. We also examined image parameters, such as the mean and median values of the histogram, before and after applying Histogram Equalization (HE). These techniques are used for all types of medical images, such as chest X-ray and Low-Dose Computed Tomography (CT), and also to intensify tiny anatomies like vessels, lung nodules, airways, and pulmonary fissures. The proposed technique consists of two main steps implemented in MATLAB (R2021a). First, normalization is applied to improve the base image, actively rescaling the intensity range of the image contrast. Second, the Contrast Limited Adaptive Histogram Equalization (CLAHE) method is used to enhance small details, textures, and local contrast of the image. As a result, this paper presents modern, improved HE techniques and their advantages over traditional HE. The paper concludes that techniques related to HE can be helpful for many processes, especially image pre-processing for Machine Learning (ML) and Deep Learning (DL).
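
A hedged sketch of the two-step pipeline using OpenCV in place of the paper's MATLAB implementation, with a synthetic low-contrast image standing in for a chest CT slice:

```python
# Hedged sketch: min-max normalization followed by CLAHE.
import numpy as np
import cv2

rng = np.random.default_rng(0)
img = rng.normal(120, 10, (256, 256)).clip(0, 255).astype(np.uint8)

# Step 1: min-max normalization stretches intensities to the full range
norm = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX)

# Step 2: CLAHE enhances local contrast, with a clip limit to curb noise
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
out = clahe.apply(norm)

print("mean/median before:", img.mean(), np.median(img))
print("mean/median after :", out.mean(), np.median(out))
```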

White striping degree assessment using computer vision system and consumer acceptance test

  • Kato, Talita;Mastelini, Saulo Martiello;Campos, Gabriel Fillipe Centini;Barbon, Ana Paula Ayub da Costa;Prudencio, Sandra Helena;Shimokomaki, Massami;Soares, Adriana Lourenco;Barbon, Sylvio Jr.
    • Asian-Australasian Journal of Animal Sciences / v.32 no.7 / pp.1015-1026 / 2019
  • Objective: The objective of this study was to evaluate three degrees of white striping (WS), addressing their automatic assessment and consumer acceptance. The WS classification was performed with a computer vision system (CVS), exploring different machine learning (ML) algorithms and the most important image features, and was verified by consumer acceptance and purchase-intent testing. Methods: The samples for image analysis were classified by trained specialists according to severity degrees regarding visual and firmness aspects. Samples were photographed with a digital camera, and 25 features were extracted from these images. ML algorithms were applied with the aim of inducing a model capable of classifying the samples into three severity degrees. In addition, two sensory analyses were performed: 75 properly grilled samples were used for the first sensory test, and 9 photos for the second. All tests used a 10-cm hybrid hedonic scale (acceptance test) and a 5-point scale (purchase intention). Results: The information gain metric ranked 13 attributes; however, no single type of image feature was enough to describe the phenomenon. The classification models support vector machine, fuzzy-W, and random forest showed the best results, with similar overall accuracy (86.4%). The worst performance was obtained by a multilayer perceptron (70.9%), with a high error rate on normal (NORM) sample predictions. Conclusion: The proposed system proved adequate (fast and accurate) for the classification of WS samples. The sensory acceptance analysis showed that WS myopathy negatively affects the tenderness of broiler breast fillets when grilled, while the appearance of the raw samples influenced purchase intentions.
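
A hedged sketch of the feature-ranking and classification steps, with synthetic data standing in for the 25 extracted image features and scikit-learn's mutual information standing in for the paper's information gain metric:

```python
# Hedged sketch: rank features, then compare two of the classifiers.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 25))        # 25 image features (synthetic)
y = rng.integers(0, 3, size=300)      # three WS severity degrees
X[:, 0] += y                          # make one feature informative

mi = mutual_info_classif(X, y, random_state=0)
print("top features by MI:", np.argsort(mi)[::-1][:5])

for clf in (SVC(), RandomForestClassifier(random_state=0)):
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(type(clf).__name__, f"accuracy: {acc:.2f}")
```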

A Study on the Detection Model of Illegal Access to Large-scale Service Networks using Netflow

  • Lee, Taek-Hyun;Park, WonHyung;Kook, Kwang-Ho
    • Convergence Security Journal / v.21 no.2 / pp.11-18 / 2021
  • To protect tangible and intangible assets, most companies conduct information-protection monitoring with various security equipment across their IT service networks. As the equipment that needs protection multiplies through service-network upgrades and expansion, it becomes difficult to monitor the entire service network for possible exposure to attack. Various studies have sought to detect external attacks and illegal communication by equipment, but studies on effective monitoring of open service ports and on building illegal-communication monitoring for large-scale service networks are insufficient. In this study, we propose a framework that can monitor information leakage and illegal communication attempts across a wide service network without large-scale investment, by analyzing the Netflow statistics of backbone network equipment, the gateway for the entire data flow of the IT service network. By applying machine learning algorithms to the Netflow data, we obtained a high classification accuracy of 94% in identifying whether the Telnet service port of operating equipment is open, and we could trace the illegal communication of compromised equipment from its communication history.
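
A hedged sketch of the classification step: predicting whether a host's Telnet port (23) is open from aggregate NetFlow-style statistics. The features, labels, and random-forest choice are illustrative assumptions, not the paper's actual pipeline or backbone data:

```python
# Hedged sketch: port-open classification from flow statistics.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
# per-host features: flows to port 23, mean packets/flow, mean bytes/flow
X = np.column_stack([
    rng.poisson(3, n),
    rng.normal(10, 3, n),
    rng.normal(500, 100, n),
])
# toy label: port responds (open) when flow volume is high, plus noise
y = ((X[:, 0] + rng.normal(0, 1, n)) > 3).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```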