• Title/Abstract/Keywords: traditional experiments

1,060 search results

Design of Smart City Considering Carbon Emissions under The Background of Industry 5.0

  • Fengjiao Zhou;Rui Ma;Mohamad Shaharudin bin Samsurijan;Xiaoqin Xie
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 18, No. 4
    • /
    • pp.903-921
    • /
    • 2024
  • Industry 5.0 puts forward higher requirements for smart cities, including being low-carbon, sustainable, and people-oriented, which poses challenges for smart city design. In response to these challenges, this study introduces the cyber-physical-social system (CPSS) and parallel system theory into the design of smart cities and constructs a smart city framework based on parallel system theory. On this basis, to enhance the security of smart cities, a sustainable patrol subsystem was established. The intelligent patrol system uses a drone platform, and trajectory planning for the drone is a key problem to be solved. Therefore, a mathematical model was established that considers multiple objectives, including minimizing carbon emissions, minimizing noise impact, and maximizing coverage area, while also taking the flight performance constraints of drones into account. In addition, an improved metaheuristic algorithm based on the ant colony optimization (ACO) algorithm was designed for trajectory planning of the patrol drones. Finally, a digital environment map was built from real urban scenes and simulation experiments were conducted. The results show that, compared with three other metaheuristic algorithms, the algorithm designed in this study performs best.
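The ACO-based trajectory planner described above can be illustrated with a minimal sketch. The waypoint graph, edge costs, and parameter values below are illustrative assumptions, not the paper's multi-objective drone model, which additionally weighs carbon emissions, noise, and coverage area:

```python
import random

def aco_shortest_path(graph, start, goal, n_ants=20, n_iters=50,
                      alpha=1.0, beta=2.0, evaporation=0.5, seed=0):
    """Basic ant colony optimization on a directed waypoint graph."""
    rng = random.Random(seed)
    pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}
    best_path, best_cost = None, float("inf")
    for _ in range(n_iters):
        paths = []
        for _ in range(n_ants):
            node, path, cost, visited = start, [start], 0.0, {start}
            while node != goal:
                choices = [v for v in graph[node] if v not in visited]
                if not choices:          # dead end: discard this ant
                    path = None
                    break
                # Transition probability: pheromone^alpha * (1/cost)^beta
                weights = [pheromone[(node, v)] ** alpha *
                           (1.0 / graph[node][v]) ** beta for v in choices]
                nxt = rng.choices(choices, weights=weights)[0]
                cost += graph[node][nxt]
                path.append(nxt); visited.add(nxt); node = nxt
            if path is not None:
                paths.append((path, cost))
                if cost < best_cost:
                    best_path, best_cost = path, cost
        # Evaporate, then deposit pheromone proportional to path quality.
        for edge in pheromone:
            pheromone[edge] *= (1.0 - evaporation)
        for path, cost in paths:
            for u, v in zip(path, path[1:]):
                pheromone[(u, v)] += 1.0 / cost
    return best_path, best_cost

graph = {
    "A": {"B": 1.0, "C": 4.0},
    "B": {"C": 1.0, "D": 5.0},
    "C": {"D": 1.0},
    "D": {},
}
print(aco_shortest_path(graph, "A", "D"))  # best path A->B->C->D, cost 3.0
```

The paper's improved variant would replace the single edge cost with a weighted combination of the emission, noise, and coverage objectives.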

Hyperspectral Image Classification using EfficientNet-B4 with Search and Rescue Operation Algorithm

  • S.Srinivasan;K.Rajakumar
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 23, No. 12
    • /
    • pp.213-219
    • /
    • 2023
  • In recent years, the popularity of deep learning (DL) has increased due to its ability to extract features from hyperspectral images. A lack of discriminative power in the features produced by traditional machine learning algorithms has resulted in poor classification results. How to obtain excellent classification results from limited samples without overfitting in hyperspectral images (HSIs) is also an open research question. These issues can be addressed by the new learning network structure developed in this study, an EfficientNet-B4-based convolutional network (EN-B4), which maintains a constant ratio between network resolution, width, and depth in order to achieve balance. The weights of the proposed model are optimized by Search and Rescue Operations (SRO), which is inspired by the explorations carried out by humans during search and rescue processes. Tests were conducted on two datasets, Indian Pines (IP) and University of Pavia (UP), to verify the efficacy of EN-B4. Experiments show that EN-B4 outperforms other state-of-the-art approaches in terms of classification accuracy.
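The weight-optimization idea can be sketched as a population-based metaheuristic in the spirit of SRO: agents explore around the best-known position. The update rule below is a simplified assumption, not the paper's exact SRO equations, and the toy quadratic objective stands in for the network's validation loss:

```python
import random

def sro_like_minimize(loss, dim, n_agents=15, n_iters=200, step=0.5, seed=1):
    """Greedy population search: move toward the best solution with noise."""
    rng = random.Random(seed)
    agents = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_agents)]
    best = min(agents, key=loss)
    for _ in range(n_iters):
        new_agents = []
        for a in agents:
            # "Search" phase: step toward the best-known position plus a
            # random perturbation; keep the move only if it improves the loss.
            cand = [x + rng.uniform(0, 1) * (b - x) + rng.gauss(0, step)
                    for x, b in zip(a, best)]
            new_agents.append(cand if loss(cand) < loss(a) else a)
        agents = new_agents
        best = min(agents + [best], key=loss)
    return best

# Toy stand-in for a validation loss over two trainable weights.
loss = lambda w: (w[0] - 3.0) ** 2 + (w[1] + 1.0) ** 2
w = sro_like_minimize(loss, dim=2)
print(w)  # near [3.0, -1.0]
```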

Experimental Study on the Generation of Ultrafine-sized Dry Fog and Removal of Particulate Matter

  • 김기웅
    • Journal of the Korean Society of Visualization
    • /
    • Vol. 22, No. 1
    • /
    • pp.34-39
    • /
    • 2024
  • Fine particulate matter (PM) poses a serious threat to public health and the environment, and ultrafine PM in particular can cause serious problems. This study investigates the effectiveness of a submicron dry fog system in removing fine PM. Two methods are used to create fine dust particles: burning incense and using an aerosol generator. Results indicate that the dry fog system effectively removes fine dust particles, with a removal efficiency of up to 81.9% for PM10 and 61.9% for PM2.5 after 30 minutes of operation. The dry fog, with a mean droplet size of approximately 1.5 ㎛, outperforms traditional water-spraying methods, which is attributed to reduced water consumption and an increased contact probability between water droplets and dust particles. Furthermore, experiments with uniform particles of 1 ㎛ and 2 ㎛ demonstrate the system's capability to remove ultrafine PM. The proposed submicron dry fog system shows promise for mitigating fine dust pollution in various industrial settings, offering advantages such as reduced water and energy consumption and enhanced safety for workers and equipment.
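The removal efficiencies quoted above follow the standard definition, eta = (C0 - Ct) / C0 x 100, where C0 is the initial concentration and Ct the concentration after t minutes of operation. The concentrations below are illustrative values chosen to reproduce the reported percentages, not measured data from the paper:

```python
def removal_efficiency(c_initial, c_final):
    """Percentage of particulate matter removed relative to the start."""
    return (c_initial - c_final) / c_initial * 100.0

# Hypothetical PM10 reading: 1000 -> 181 ug/m^3 after 30 min of dry fog.
print(round(removal_efficiency(1000.0, 181.0), 1))  # 81.9
# Hypothetical PM2.5 reading: 1000 -> 381 ug/m^3.
print(round(removal_efficiency(1000.0, 381.0), 1))  # 61.9
```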

Application of a comparative analysis of random forest programming to predict the strength of environmentally-friendly geopolymer concrete

  • Ying Bi;Yeng Yi
    • Steel and Composite Structures
    • /
    • Vol. 50, No. 4
    • /
    • pp.443-458
    • /
    • 2024
  • The construction industry, one of the biggest producers of greenhouse emissions, is under great pressure as a result of growing concern about how climate change may affect local communities. Geopolymer concrete (GPC) has emerged as a feasible choice of construction material because of the environmental issues connected with cement manufacture. The findings of this study contribute to the development of machine learning methods for estimating the properties of eco-friendly concrete, which might be used in lieu of traditional concrete to reduce CO2 emissions in the building industry. In the present work, the compressive strength (fc) of GPC, in which natural zeolite (NZ) and silica fume (SF) partially replace ground granulated blast-furnace slag (GGBFS), is predicted using random forest regression (RFR). A thorough set of experimental tests on GPC samples was compiled from the literature, totaling 254 data rows. The RFR model was integrated with the artificial hummingbird algorithm (AHA), black widow optimization algorithm (BWOA), and chimp optimization algorithm (ChOA), abbreviated ARFR, BRFR, and CRFR, respectively. All RFR models demonstrated satisfactory performance across the evaluation metrics. For the R2 metric, the CRFR model attained 0.9988 and 0.9981 on the training and test sets, higher than BRFR (0.9982 and 0.9969), followed by ARFR (0.9971 and 0.9956). Other error and distribution metrics showed a roughly 50% improvement for CRFR with respect to ARFR.
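The R2 values compared above are the standard coefficient of determination, written out here in plain Python. The strength values below are illustrative, not the paper's GPC data:

```python
def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

y_true = [30.0, 42.5, 55.0, 61.0]   # hypothetical measured fc (MPa)
y_pred = [31.0, 42.0, 54.0, 62.0]   # hypothetical model predictions
print(round(r2_score(y_true, y_pred), 4))  # 0.9943
```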

Analyzing the Influence of Spatial Sampling Rate on Three-dimensional Temperature-field Reconstruction

  • Shenxiang Feng;Xiaojian Hao;Tong Wei;Xiaodong Huang;Pan Pei;Chenyang Xu
    • Current Optics and Photonics
    • /
    • Vol. 8, No. 3
    • /
    • pp.246-258
    • /
    • 2024
  • In aerospace and energy engineering, the reconstruction of three-dimensional (3D) temperature distributions is crucial. Traditional methods such as algebraic iterative reconstruction and filtered back-projection depend on voxel division for resolution. Our algorithm, blending deep learning with computer-graphics rendering, converts 2D projections into light rays for uniform sampling, using a fully connected neural network to represent the 3D temperature field. Although effective at capturing internal details, it demands multiple cameras for projections at varied angles, increasing cost and computational load. We assess the impact of camera count on reconstruction accuracy and efficiency, conducting butane-flame simulations with different camera setups (6 to 18 cameras). The results show improved accuracy with more cameras, with 12 cameras achieving the best computational efficiency (1.263) and low error rates. Verification experiments with 9, 12, and 15 cameras, using thermocouples, confirm the 12-camera setup as the best balance of efficiency and accuracy. This offers a feasible, cost-effective solution for real-world applications such as engine testing and environmental monitoring, improving accuracy and resource management in temperature measurement.
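The core representation described above, a fully connected network mapping a 3D coordinate to a temperature value, can be sketched as a forward pass. The layer sizes and random weights here are illustrative assumptions; the real model is trained against ray-sampled 2D projections from the multi-camera setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(sizes):
    # He-style random init for each (weights, bias) layer pair.
    return [(rng.normal(0, np.sqrt(2 / m), (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(params, xyz):
    """Map a batch of (x, y, z) points to scalar temperature values."""
    h = np.atleast_2d(xyz)
    for i, (w, b) in enumerate(params):
        h = h @ w + b
        if i < len(params) - 1:
            h = np.maximum(h, 0.0)  # ReLU on hidden layers only
    return h

params = make_mlp([3, 64, 64, 1])     # (x, y, z) -> T
pts = rng.uniform(0, 1, (5, 3))       # 5 sample points along one ray
print(mlp_forward(params, pts).shape) # (5, 1): one temperature per point
```

In training, temperatures predicted along each ray would be integrated and compared with the corresponding 2D camera projection.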

Token-Based Classification and Dataset Construction for Detecting Modified Profanity

  • 고성민;신유현
    • The Transactions of the Korea Information Processing Society
    • /
    • Vol. 13, No. 4
    • /
    • pp.181-188
    • /
    • 2024
  • Existing profanity-detection methods are limited in their ability to identify intentionally modified profanity. This paper introduces a new method based on named-entity recognition, a subfield of natural language processing. We develop a profanity-detection technique based on sequence labeling and, for this purpose, construct our own dataset by labeling the profanity in a set of Korean malicious comments, then conduct experiments with it. To improve model performance, we further augment the training data by using ChatGPT, one of the large language models, to label part of a Korean hate-speech dataset; in the process, we confirm that performance can be improved simply by having humans filter the dataset generated by the large language model. This suggests that human supervision is still required during dataset augmentation.
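The sequence-labeling formulation above can be sketched with BIO tags, as in named-entity recognition: profanity spans are marked token by token and then collected. The tokens and tag names below are illustrative English stand-ins for the paper's Korean comment data:

```python
def spans_from_bio(tokens, tags):
    """Collect the token spans labeled as profanity from BIO tags."""
    spans, start = [], None
    for i, tag in enumerate(tags):
        if tag == "B-PROF":            # a new profanity span begins
            if start is not None:
                spans.append((start, i))
            start = i
        elif tag == "O" and start is not None:
            spans.append((start, i))   # the open span ends before this token
            start = None
    if start is not None:              # span running to the end of the text
        spans.append((start, len(tags)))
    return [" ".join(tokens[s:e]) for s, e in spans]

tokens = ["you", "absolute", "d4mn", "fool", "stop", "it"]
tags   = ["O",   "B-PROF",   "I-PROF", "I-PROF", "O",  "O"]
print(spans_from_bio(tokens, tags))  # ['absolute d4mn fool']
```

Because the model tags tokens rather than matching a fixed lexicon, a modified form like "d4mn" can still fall inside a labeled span.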

Acoustic Characteristics of the Haegeum Body

  • 노정욱;박상하;성굉모
    • The Journal of the Acoustical Society of Korea
    • /
    • Vol. 26, No. 7
    • /
    • pp.317-322
    • /
    • 2007
  • As the first step in a study of the acoustic characteristics of the haegeum, a traditional Korean bowed string instrument, we measured the transfer function of the haegeum body in an anechoic chamber using the impulse-response method. From the measured transfer function we examined the main resonance characteristics of the body, and to determine which part of the body each resonance is associated with, we carried out Chladni-pattern experiments on the soundboard and calculated resonance frequencies through acoustic modeling of the air cavity inside the body. As a result, we were able to identify the relationship between the main resonance characteristics of the haegeum body and each of its parts.
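Transfer-function estimation from an impulse response, the measurement method described above, amounts to H(f) = Y(f) / X(f), the ratio of the output and input spectra. The decaying sinusoid below is a synthetic stand-in for a measured haegeum body response, and the sampling rate is an assumption:

```python
import numpy as np

fs = 8000                                  # sampling rate (Hz), assumed
t = np.arange(2048) / fs
impulse = np.zeros_like(t)
impulse[0] = 1.0                           # excitation x(t): unit impulse
# Synthetic "body response" y(t): a damped 440 Hz resonance.
response = np.exp(-40 * t) * np.sin(2 * np.pi * 440 * t)

# Transfer function H(f) = Y(f) / X(f); for a unit impulse X(f) is flat.
H = np.fft.rfft(response) / np.fft.rfft(impulse)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
peak = freqs[np.argmax(np.abs(H))]
print(f"main resonance near {peak:.0f} Hz")
```

On real measurements the peaks of |H(f)| would mark the body resonances that the Chladni-pattern and air-cavity analyses then attribute to specific parts.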

Two-dimensional concrete meso-modeling research based on pixel matrix and skeleton theory

  • Jingwei Ying;Yujun Jian;Jianzhuang Xiao
    • Computers and Concrete
    • /
    • Vol. 33, No. 6
    • /
    • pp.671-688
    • /
    • 2024
  • The modeling efficiency of concrete meso-models that are close to real concrete is one of the important issues limiting the accuracy of mechanical simulation. To improve modeling efficiency and bring the numerical aggregate shapes closer to real aggregate, this paper proposes a method for generating a two-dimensional concrete meso-model based on a pixel matrix and skeleton theory. First, an initial concrete model (a container for placing aggregate) is generated using a pixel matrix. Then, the skeleton curve of the residual space, i.e., the model after excluding the existing aggregate, is obtained using a thinning algorithm. Finally, the final model is obtained by placing aggregate at the branching points of the curve. Compared with the traditional Monte Carlo placement method, the proposed method reduces the number of overlaps between aggregates by up to 95%, and the placement efficiency does not decrease significantly with increasing aggregate content. The developed model is close to actual concrete experiments in terms of aggregate gradation, aspect ratio, asymmetry, concavity and convexity, old-to-new mortar ratio, cracking form, and stress-strain curve. In addition, the cracking process of concrete under uniaxial compression is explained at the mesoscale.
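The branching-point step can be sketched on a binary pixel matrix: on a thinned skeleton (1 = skeleton pixel), a branching point is a skeleton pixel with three or more skeleton neighbors in its 8-neighborhood. The tiny Y-shaped skeleton below is illustrative; the paper applies this to the thinned residual space between already-placed aggregates:

```python
def branch_points(grid):
    """Return (row, col) of skeleton pixels with >= 3 skeleton neighbors."""
    rows, cols = len(grid), len(grid[0])
    points = []
    for r in range(rows):
        for c in range(cols):
            if not grid[r][c]:
                continue
            # Count skeleton pixels in the 8-neighborhood, clipped at edges.
            neighbors = sum(grid[rr][cc]
                            for rr in range(max(r - 1, 0), min(r + 2, rows))
                            for cc in range(max(c - 1, 0), min(c + 2, cols))
                            if (rr, cc) != (r, c))
            if neighbors >= 3:
                points.append((r, c))
    return points

# A small Y-shaped skeleton: two arms joining a vertical stem at (2, 2).
skeleton = [
    [1, 0, 0, 0, 1],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]
print(branch_points(skeleton))  # [(2, 2)]
```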

An Empirical Study on Improving the Accuracy of Demand Forecasting Based on Multi-Machine Learning

  • 김명화;이연준;박상우;김건우;김태희
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • Vol. 27, No. 3
    • /
    • pp.406-415
    • /
    • 2024
  • As military equipment has become more advanced and expensive, the cost of securing spare parts has risen continuously along with the growth in equipment assets. In particular, forecasting demand for spare parts is one of the important management tasks in the military, and the accuracy of these forecasts is directly related to military operations and cost management. However, because the demand for spare parts is intermittent and irregular, it is often difficult to make accurate predictions using traditional statistical methods or a single statistical or machine learning model. In this paper, we propose a model that increases the accuracy of demand forecasting for irregular spare-parts demand patterns by combining statistical and machine learning algorithms, and we validate it through experiments on demand data for Cheonma spare parts.
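The combination idea can be sketched as a simple average of a statistical forecaster and a second learner. Both component models and the demand series below are illustrative stand-ins; the paper combines richer models on the Cheonma spare-parts data:

```python
def ses_forecast(history, alpha=0.3):
    """Simple exponential smoothing: a classical statistical baseline."""
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def mean_of_nonzero(history):
    """Toy second model: average demand over periods with any demand,
    a crude way to handle intermittent (mostly-zero) series."""
    nonzero = [x for x in history if x > 0]
    return sum(nonzero) / len(nonzero) if nonzero else 0.0

def combined_forecast(history):
    # Equal-weight combination of the two component forecasts.
    return 0.5 * ses_forecast(history) + 0.5 * mean_of_nonzero(history)

demand = [0, 0, 4, 0, 0, 0, 6, 0, 2, 0]   # intermittent spare-parts demand
print(round(combined_forecast(demand), 3))  # 2.568
```

For genuinely intermittent series, combining models hedges against any single model's failure mode, which is the motivation stated above.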

A Study on Data Analysis Approach based on Granular Concept Hierarchies

  • 강유경;황석형;김응희;엄태정
    • Journal of the Korea Society of Computer and Information
    • /
    • Vol. 17, No. 3
    • /
    • pp.121-133
    • /
    • 2012
  • This paper proposes a new method that classifies data from various viewpoints and levels of abstraction by introducing a scaling level into formal concept analysis to control the granularity of the analysis. The method granulates the given data according to various criteria, analyzes the relationships between the granules, and constructs a granular concept hierarchy, so that diverse classifications can be produced to match the intent or purpose of the user analyzing the data. We also developed a tool (G-Tool) that supports the proposed method and, to examine its usefulness, conducted experiments on real data using G-Tool; the results confirmed that the data could be classified in various forms suited to the user's purpose. Conventional formal concept analysis cannot adjust granularity and therefore supports classification from only a single viewpoint, whereas the proposed method combines various kinds of scale information and adjusts the scaling level, enabling diverse classifications that reflect various viewpoints.
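The formal-concept-analysis step the method builds on is the pair of derivation operators between objects and attributes. The tiny context below, with a fine-grained and a coarse-grained attribute set, is illustrative; the paper's G-Tool additionally combines scale information to control the granularity:

```python
def common_attributes(objects, context):
    """Attributes shared by all given objects (the ' operator on objects)."""
    attrs = None
    for obj in objects:
        attrs = context[obj] if attrs is None else attrs & context[obj]
    return attrs or set()

def objects_having(attrs, context):
    """Objects possessing all given attributes (the ' operator on attributes)."""
    return {obj for obj, a in context.items() if attrs <= a}

# Fine-grained context: exact ages; coarse context: age bands only.
fine = {
    "ann":  {"age:23", "student"},
    "bob":  {"age:25", "student"},
    "carl": {"age:41", "employee"},
}
coarse = {
    "ann":  {"age:20s", "student"},
    "bob":  {"age:20s", "student"},
    "carl": {"age:40s", "employee"},
}
# At fine granularity ann and bob share only "student";
# at coarse granularity they also share an age band, so the
# resulting concept hierarchy groups them more strongly.
print(common_attributes({"ann", "bob"}, fine))
print(common_attributes({"ann", "bob"}, coarse))
print(objects_having({"student"}, coarse))
```

Adjusting the scaling level amounts to swapping which context the derivation operators run over, which changes the concepts, and hence the classification, that the hierarchy exposes.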