• Title/Summary/Keyword: machine learning (ML)

Search results: 280

An Extended Work Architecture for Online Threat Prediction in Tweeter Dataset

  • Sheoran, Savita Kumari; Yadav, Partibha
    • International Journal of Computer Science & Network Security / Vol. 21, No. 1 / pp. 97-106 / 2021
  • Social networking platforms have become a smart way for people to interact and meet on the internet. They provide a way to keep in touch with friends, family, colleagues, business partners, and many others. Among the various social networking sites, Twitter is one of the fastest-growing, where users can read the news, share ideas, discuss issues, etc. Due to its vast popularity, the accounts of legitimate users are vulnerable to a large number of threats, and spam and malware are among the most damaging threats found on Twitter. Therefore, to enjoy seamless services, Twitter must be secured against malicious users by identifying them in advance. Various studies have applied Machine Learning (ML) based approaches to detect spammers on Twitter. This research devises a secure system based on hybrid Cosine and Soft Cosine similarity measures combined with a Genetic Algorithm (GA) and an Artificial Neural Network (ANN) to secure the Twitter network against spammers. The similarity among tweets is determined using Cosine together with Soft Cosine similarity applied to the Twitter dataset. The GA is used to improve training with minimal training error by selecting the most suitable features according to the designed fitness function. Tweets are classified as spammer or non-spammer by the ANN structure together with a voting rule. True Positive Rate (TPR), False Positive Rate (FPR), and classification accuracy are used as the evaluation parameters for the designed system. The simulation results reveal that the proposed model outperforms the existing state of the art.
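
For readers unfamiliar with the soft cosine measure used in this entry, the following minimal Python sketch contrasts plain cosine similarity with soft cosine similarity on two bag-of-words tweet vectors. It is illustrative only and not the authors' pipeline; the example tweets and the term-similarity matrix `S` (here simply the identity) are placeholders.

```python
# Illustrative sketch: cosine vs. soft cosine similarity on bag-of-words vectors.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def soft_cosine(u, v, S):
    # Soft cosine generalizes cosine by weighting term pairs with a
    # term-similarity matrix S (from embeddings or a thesaurus in practice).
    num = u @ S @ v
    den = np.sqrt(u @ S @ u) * np.sqrt(v @ S @ v)
    return float(num / den)

tweets = ["win a free prize now", "claim your free reward today"]
X = CountVectorizer().fit_transform(tweets).toarray().astype(float)

S = np.eye(X.shape[1])   # identity matrix -> soft cosine reduces to plain cosine
print(cosine(X[0], X[1]), soft_cosine(X[0], X[1], S))
```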

Exploring AI Principles in Global Top 500 Enterprises: A Delphi Technique of LDA Topic Modeling Results

  • Hyun BAEK
    • Korean Journal of Artificial Intelligence / Vol. 11, No. 2 / pp. 7-17 / 2023
  • Artificial Intelligence (AI) technology has already penetrated deeply into our daily lives, and we enjoy its convenience anytime, anywhere, sometimes without even noticing it. However, because AI is imitative intelligence modeled on human intelligence, it inevitably reflects both the good and the bad sides of humans, which is why ethical principles are essential. The starting point of this study is the AI principles that companies and organizations adopt when developing products. Since the late 2010s, studies on the ethics and principles of AI have been actively published. This study focused on the AI principles declared by global companies currently developing products with AI technology. We surveyed the AI principles of the Global 500 companies by market capitalization at a given point in time and collected the AI principles explicitly declared by 46 of them. This text data was first analyzed with LDA (Latent Dirichlet Allocation) topic modeling, a Machine Learning (ML) analysis technique. We then conducted a Delphi study to reach a meaningful consensus on the primary analysis results. Based on these results, we expect to provide meaningful guidelines for AI-related government policy, corporate ethics declarations, and academic research, where debates on AI ethics and principles have recently become frequent.
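
As a rough illustration of the LDA topic-modeling step described in this abstract, the sketch below fits scikit-learn's LatentDirichletAllocation to a few hypothetical principle statements; the documents and topic count are placeholders, not the 46 collected declarations.

```python
# Minimal LDA topic-modeling sketch on hypothetical AI-principle statements.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "ai systems must be fair transparent and accountable",
    "we protect user privacy and keep data secure",
    "human oversight and safety guide our ai development",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[::-1][:5]]  # five strongest terms
    print(f"topic {k}: {top}")
```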

'Knowing' with AI in construction - An empirical insight

  • Ramalingham, Shobha; Mossman, Alan
    • International Conference Proceedings / The 9th International Conference on Construction Engineering and Project Management / pp. 686-693 / 2022
  • Construction is a collaborative endeavor. The complexity of delivering construction projects successfully is shaped by the collaboration needs of a multitude of stakeholders throughout the project life-cycle. Technologies such as Building Information Modelling and relational project delivery approaches such as Alliancing and Integrated Project Delivery have developed to address this conundrum. With the onset of the pandemic, however, the digital economy has surged worldwide, and advances in technologies such as machine learning (ML) and Artificial Intelligence (AI) have taken deep root across specializations and domains, increasingly rivalling human capabilities. Several recent studies have explored the role of AI in the construction process and highlighted its benefits. In contrast, literature in the organization studies field has highlighted the fear that tasks currently done by humans will be done by AI in the future. Motivated by these insights, and understanding that construction is a labour-intensive sector where knowledge is both fragmented and predominantly tacit, this paper explores the integration of AI in construction processes across project phases, from planning and scheduling to execution and maintenance operations, using literary evidence and experiential insights. The findings show that AI can complement human skills rather than substitute for them. This preliminary study is expected to be a stepping stone for further research and implementation in practice.


Prediction of karst sinkhole collapse using a decision-tree (DT) classifier

  • Boo Hyun Nam; Kyungwon Park; Yong Je Kim
    • Geomechanics and Engineering / Vol. 36, No. 5 / pp. 441-453 / 2024
  • Sinkhole subsidence and collapse are common geohazards in karst areas such as the state of Florida, United States of America. To predict sinkhole occurrence, the formation mechanism of sinkholes and the underlying karst hydrogeology must be understood, so investigating the factors affecting sinkholes is an essential first step. The main objectives of the present study are (1) the development of a machine learning (ML)-based model, namely a C5.0 decision tree (C5.0 DT), for the prediction of sinkhole susceptibility, which accounts for a sinkhole/subsidence inventory and sinkhole contributing factors (e.g., geological/hydrogeological), and (2) the construction of a regional-scale sinkhole susceptibility map. The study area is east central Florida (ECF), where cover-collapse sinkholes are commonly reported. The C5.0 DT algorithm was used to account for twelve (12) identified hydrogeological factors. In this study, a total of 1,113 sinkholes in ECF were identified, and the dataset was randomly divided into 70% and 30% subsets for training and testing, respectively. The performance of the sinkhole susceptibility model was evaluated using a receiver operating characteristic (ROC) curve, particularly the area under the curve (AUC). The C5.0 model showed a high prediction accuracy of 83.52%. It is concluded that a decision tree is a promising tool and classifier for spatial prediction of karst sinkholes and subsidence in the ECF area.
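
The workflow described (70/30 split, decision-tree classifier, ROC-AUC evaluation) can be sketched as follows. Scikit-learn has no C5.0 implementation, so a CART DecisionTreeClassifier stands in, and the features and labels are synthetic stand-ins for the twelve hydrogeological factors and the sinkhole inventory.

```python
# Hedged sketch of the evaluation workflow only (synthetic data, CART in place of C5.0).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1113, 12))          # 12 contributing factors (synthetic)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1113) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)[:, 1]    # susceptibility score for the ROC curve
print("AUC:", roc_auc_score(y_te, proba))
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```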

Message Security Level Integration with IoTES: A Design Dependent Encryption Selection Model for IoT Devices

  • Saleh, Matasem; Jhanjhi, NZ; Abdullah, Azween; Saher, Raazia
    • International Journal of Computer Science & Network Security / Vol. 22, No. 8 / pp. 328-342 / 2022
  • The Internet of Things (IoT) is a technology that offers lucrative services in various industries to facilitate human communities. Important information about people and their surroundings is gathered to provide these services, and this data is vulnerable to cyberattack because it is sent over the internet and kept in third-party databases. Data encryption is therefore an integral approach for IoT device designers to protect IoT data, yet for a variety of reasons designers have been unable to identify appropriate encryption to use. The static support provided by researchers and concerned organizations to help designers pick suitable encryption costs a significant amount of time and effort. IoTES is a web app that uses machine learning (ML) to address this lack of support, as ML has been shown to improve data-driven human decision-making. IoTES still has some weaknesses, which are highlighted in this research; to improve the support, these shortcomings must be addressed. This study proposes the "IoTES with Security" model, which adds support for the security level provided by the encryption algorithm to the traditional IoTES model. We evaluated our technique on encryption algorithms with available security levels and compared the accuracy of our model with traditional IoTES. Our model improves IoTES by helping users make security-oriented decisions when choosing the appropriate algorithm for their IoT data.
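
A purely illustrative way to picture the "IoTES with Security" idea of adding a required security level to the algorithm-recommendation features is sketched below; the feature names, training rows, and algorithm labels are hypothetical and not the IoTES dataset or model.

```python
# Hypothetical sketch: recommend an encryption algorithm from device constraints
# plus a required security level (the added feature discussed in the abstract).
from sklearn.ensemble import RandomForestClassifier

# [ram_kb, cpu_mhz, payload_bytes, required_security_level]  (made-up features)
X = [
    [32, 16, 64, 1], [512, 80, 256, 2], [2048, 400, 1024, 3],
    [64, 32, 128, 1], [1024, 160, 512, 3], [256, 48, 128, 2],
]
y = ["PRESENT", "AES-128", "AES-256", "PRESENT", "AES-256", "AES-128"]

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(model.predict([[128, 48, 256, 3]]))   # recommendation for a new device spec
```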

A Unicode based Deep Handwritten Character Recognition model for Telugu to English Language Translation

  • BV Subba Rao; J. Nageswara Rao; Bandi Vamsi; Venkata Nagaraju Thatha; Katta Subba Rao
    • International Journal of Computer Science & Network Security / Vol. 24, No. 2 / pp. 101-112 / 2024
  • Telugu is the fourth most widely used language in India, spoken especially in the regions of Andhra Pradesh, Telangana, and Karnataka, and its speaker base is also growing internationally. The language comprises dependent and independent vowels, consonants, and digits, yet Telugu Handwritten Character Recognition (HCR) has seen comparatively little progress. HCR is a neural-network-based technique for converting a document image into editable text, which can then serve many other applications, saving the time and effort of repeated manual transcription. In this work, a Unicode-based Handwritten Character Recognition (U-HCR) model is developed for translating handwritten Telugu characters into English. Using the Centre of Gravity (CG), the model can split a compound character into individual characters with the help of Unicode values. Both online and offline Telugu character datasets were used for training. To extract features from the scanned image, a convolutional neural network (CNN) was used along with Machine Learning classifiers such as Random Forest and Support Vector Machine. Stochastic Gradient Descent (SGD), Root Mean Square Propagation (RMS-P), and Adaptive Moment Estimation (ADAM) optimizers were used to enhance the performance of U-HCR and reduce the loss function value; this loss reduction is achieved by applying the optimizers to the CNN. On both online and offline datasets, the proposed model showed promising results, with accuracies of 90.28% for SGD, 96.97% for RMS-P, and 93.57% for ADAM.
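
A hedged sketch of the optimizer comparison mentioned above (SGD, RMS-P, ADAM) on a small Keras CNN is shown below; the input shape, class count, and random data are placeholders rather than the authors' Telugu character datasets, and the CG-based splitting and RF/SVM classifiers are omitted.

```python
# Illustrative sketch: training the same small CNN with the three optimizers compared.
import numpy as np
import tensorflow as tf

num_classes = 10                                      # placeholder class count
X = np.random.rand(64, 32, 32, 1).astype("float32")   # dummy character images
y = np.random.randint(0, num_classes, size=64)

def build_model():
    # Small CNN standing in for the feature extractor described in the abstract.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(32, 32, 1)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

for opt in ["sgd", "rmsprop", "adam"]:                # the three optimizers compared
    model = build_model()
    model.compile(optimizer=opt, loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(X, y, epochs=1, verbose=0)
    _, acc = model.evaluate(X, y, verbose=0)
    print(f"{opt}: training-set accuracy {acc:.2f} (dummy data)")
```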

Comparison of Histogram Equalization Techniques Using Normalization in Thoracic Computed Tomography

  • 이영준; 민정환
    • Journal of Radiological Science and Technology / Vol. 44, No. 5 / pp. 473-480 / 2021
  • The purpose of this study was to present a method for improving image quality in CT and X-ray scans, especially in the lung region, and to examine image parameters such as the mean and median of the histogram before and after applying Histogram Equalization (HE). These techniques are used for many types of medical images, such as chest X-ray and Low-Dose Computed Tomography (CT), and also to enhance tiny anatomical structures such as vessels, lung nodules, airways, and pulmonary fissures. The proposed method consists of two main steps implemented in MATLAB (R2021a). First, normalization is applied to correct the baseline image and redistribute the intensity range of the image contrast. Second, Contrast Limited Adaptive Histogram Equalization (CLAHE) is used to enhance small details, textures, and local contrast. The results illustrate modern, improved HE techniques and their advantages over traditional HE. The paper concludes that HE-based techniques can be helpful for many processing tasks, especially image pre-processing for Machine Learning (ML) and Deep Learning (DL).
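
The two preprocessing steps described (normalization followed by CLAHE) can be sketched in Python with scikit-image as below, using a synthetic array in place of a chest CT slice; the original work used MATLAB, so this is an approximation of the steps, not the authors' code.

```python
# Sketch of the two-step pipeline: min-max normalization, then CLAHE.
import numpy as np
from skimage import exposure

img = np.random.rand(256, 256) * 1500 + 500           # synthetic CT-like intensities

# Step 1: min-max normalization to [0, 1]
norm = (img - img.min()) / (img.max() - img.min())

# Step 2: Contrast Limited Adaptive Histogram Equalization (CLAHE)
clahe = exposure.equalize_adapthist(norm, clip_limit=0.02)

print("mean before/after:", norm.mean(), clahe.mean())
```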

White striping degree assessment using computer vision system and consumer acceptance test

  • Kato, Talita; Mastelini, Saulo Martiello; Campos, Gabriel Fillipe Centini; Barbon, Ana Paula Ayub da Costa; Prudencio, Sandra Helena; Shimokomaki, Massami; Soares, Adriana Lourenco; Barbon, Sylvio Jr.
    • Asian-Australasian Journal of Animal Sciences / Vol. 32, No. 7 / pp. 1015-1026 / 2019
  • Objective: The objective of this study was to evaluate three different degrees of white striping (WS), addressing both their automatic assessment and consumer acceptance. The WS classification was performed with a computer vision system (CVS), exploring different machine learning (ML) algorithms and the most important image features, and was verified by consumer acceptance and purchase-intent tests. Methods: The samples for image analysis were classified by trained specialists into severity degrees based on visual and firmness aspects. Sample images were obtained with a digital camera, and 25 features were extracted from these images. ML algorithms were applied to induce a model capable of classifying the samples into three severity degrees. In addition, two sensory analyses were performed: 75 properly grilled samples were used for the first sensory test, and 9 photos for the second. All tests used a 10-cm hybrid hedonic scale (acceptance test) and a 5-point scale (purchase intention). Results: The information gain metric ranked 13 attributes; however, no single type of image feature was enough to describe the phenomenon. The support vector machine, fuzzy-W, and random forest classification models showed the best results, with similar overall accuracy (86.4%). The worst performance was obtained by the multilayer perceptron (70.9%), with a high error rate on normal (NORM) sample predictions. The acceptance analysis verified that WS myopathy negatively affects the texture of broiler breast fillets when grilled and the appearance of the raw samples, which influenced the purchase-intention scores of the raw samples. Conclusion: The proposed system proved to be adequate (fast and accurate) for the classification of WS samples. The sensory analysis showed that WS myopathy negatively affects the tenderness of broiler breast fillets when grilled, while the appearance of the raw samples influenced purchase intentions.
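
To illustrate the classifier comparison reported here, the sketch below cross-validates the scikit-learn counterparts of SVM, random forest, and multilayer perceptron on a synthetic table standing in for the 25 image features; fuzzy-W has no scikit-learn counterpart and is omitted, and the data are random placeholders.

```python
# Illustrative comparison of the classifier families mentioned, on synthetic features.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 25))           # 25 image features (synthetic stand-ins)
y = rng.integers(0, 3, size=300)         # three WS severity degrees (synthetic)

for name, clf in [("SVM", SVC()),
                  ("Random forest", RandomForestClassifier(random_state=0)),
                  ("MLP", MLPClassifier(max_iter=500, random_state=0))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy {acc:.3f}")
```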

A Study on the Detection Model of Illegal Access to Large-scale Service Networks Using Netflow

  • 이택현; 박원형; 국광호
    • Convergence Security Journal / Vol. 21, No. 2 / pp. 11-18 / 2021
  • To protect their tangible and intangible assets, most enterprises deploy various security appliances on their IT service networks and perform information-security monitoring. However, as service networks are upgraded and expanded, investment in security appliances and the number of assets to protect both grow, making it increasingly difficult to monitor attack exposure across the entire service network. Various studies have addressed this by detecting external attacks and illegal device communication, but research on building an effective system for monitoring open service ports and illegal communication across large-scale service networks remains limited. This study proposes a framework that analyzes the Netflow statistics of the network backbone devices, which serve as the gateway for all data flows in the IT service network, so that information leakage and illegal communication attempts across a wide service network can be monitored without large-scale investment. As key results, six machine learning (ML) algorithms were used to determine from Netflow data whether the Telnet service is open on operational devices, verifying a high classification performance of 94% F1-score, and a model was proposed for tracing the illegal communication history of compromised devices through correlation.
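
A minimal sketch of the classification step only, deciding from per-host flow-derived features whether a Telnet service appears open, is given below; the feature names, the gradient-boosting classifier, and the data are illustrative assumptions, not the paper's Netflow fields or its six algorithms.

```python
# Hypothetical sketch: classify "Telnet service open" from flow-derived host features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(7)
n = 1000
# [flows_to_port23, bytes_per_flow, distinct_src_ips, syn_only_ratio]  (made-up features)
X = rng.normal(size=(n, 4))
y = (X[:, 0] + X[:, 2] > 0.8).astype(int)             # synthetic "telnet open" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("F1-score:", f1_score(y_te, clf.predict(X_te)))
```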

Application of Artificial Intelligence Technology for Dam-Reservoir Operation in Long-Term Solution to Flood and Drought in Upper Mun River Basin

  • Areeya Rittima; Jidapa Kraisangka; Wudhichart Sawangphol; Yutthana Phankamolsil; Allan Sriratana Tabucanon; Yutthana Talaluxmana; Varawoot Vudhivanich
    • Korea Water Resources Association Conference Proceedings / Korea Water Resources Association 2023 Annual Conference / pp. 30-30 / 2023
  • This study aims to establish a multi-reservoir operation system model for the Upper Mun River Basin, which includes five main dams: Mun Bon (MB), Lamchae (LC), Lam Takhong (LTK), Lam Phraphoeng (LPP), and Lower Lam Chiengkrai (LLCK). Domain knowledge and AI technology were applied with the aim of developing an innovative prototype for SMART dam-reservoir operation in the future. Two reservoir operation system models, based on Fuzzy Logic (FL) and Constraint Programming (CP), were developed, together with rainfall and reservoir inflow prediction models using Machine Learning (ML) techniques, to help specify the right amount of daily reservoir releases for the Royal Irrigation Department (RID). The model can also provide essential information for the Office of National Water Resources of Thailand (ONWR) to determine short-term and long-term water resource management plans and to strengthen water security against flood and drought in this region. The simulated results of the base-case scenario for reservoir operation in the Upper Mun from 2008 to 2021 indicated that, under the same circumstances, the FL and CP models could specify new release schemes that increase reservoir water storage at the beginning of the dry season by approximately 125.25 and 142.20 MCM per year, respectively. This means that supplying agricultural water to farmers in the dry season could be well managed; in other words, water scarcity could be substantially moderated even when the expansion of the cultivated area cannot be properly controlled. Moreover, using AI technology to determine the new reservoir release schemes plays an important role in reducing the actual volume of water shortfall in the basin, although drought at the LTK and LLCK Dams still occurred in some periods. Meanwhile, considering the predicted inflow and downstream hydrologic factors of the five main dams with the FL model and minimizing flood volume with the CP model ensured that flood risk was considerably reduced under the new release schemes.
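
As a rough illustration of the ML inflow-prediction component mentioned in this abstract (not the FL or CP reservoir models), the sketch below fits a random-forest regressor to lagged rainfall and inflow features; all data, lag choices, and units are synthetic placeholders.

```python
# Illustrative sketch: predicting reservoir inflow from lagged rainfall/inflow features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(3)
n = 2000
rain = rng.gamma(2.0, 5.0, size=n)                    # synthetic daily rainfall
inflow = 0.6 * rain + 0.3 * np.roll(rain, 1) + rng.normal(0, 2, n)

# Features: today's rainfall, yesterday's rainfall, yesterday's inflow
X = np.column_stack([rain, np.roll(rain, 1), np.roll(inflow, 1)])[2:]
y = inflow[2:]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("MAE (synthetic units):", mean_absolute_error(y_te, model.predict(X_te)))
```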
