• Title/Summary/Keyword: Computation problem

A Novel RGB Channel Assimilation for Hyperspectral Image Classification using 3D-Convolutional Neural Network with Bi-Long Short-Term Memory

  • M. Preethi;C. Velayutham;S. Arumugaperumal
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.3
    • /
    • pp.177-186
    • /
    • 2023
  • Hyperspectral imaging is one of the most efficient and fast-growing technologies of recent years. A hyperspectral image (HSI) comprises contiguous spectral bands for every pixel, which allows objects to be detected with significant accuracy and detail. However, the high dimensionality of the spectral information makes it difficult to classify every pixel. To confront this problem, we propose a novel RGB channel assimilation method for classification. Color features are extracted using chromaticity computation. Additionally, this work discusses hyperspectral image classification based on a Domain Transform Interpolated Convolution Filter (DTICF) and a 3D-CNN with Bi-directional Long Short-Term Memory (Bi-LSTM). The proposed technique has three steps. First, the HSI data are converted to RGB images with spatial features. Before applying the DTICF, the RGB images of the HSI and patches of the input image from the raw HSI are integrated; paired spectral and spatial features are then extracted from the integrated HSI using the DTICF, and those spatial and spectral features are fed into the designed 3D-CNN with Bi-LSTM framework. In the second step, the extracted color features are classified by a 2D-CNN, and the probabilistic classification maps of the 3D-CNN-Bi-LSTM and the 2D-CNN are fused. In the last step, a Markov Random Field (MRF) is additionally utilized to refine the fused probabilistic classification map efficiently. Experimental results on two different hyperspectral images show that the novel RGB channel assimilation of the DTICF-3D-CNN-Bi-LSTM approach provides better classification results than other classification approaches.
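
As a hedged illustration of the fusion step in the abstract above, the sketch below combines two per-pixel class-probability maps (standing in for the 3D-CNN-Bi-LSTM and 2D-CNN outputs) and applies a simple majority-vote smoothing in place of the paper's MRF refinement. The fusion weight, map shapes, and smoothing rule are assumptions, not the paper's method.

```python
# Minimal sketch: fuse two (H, W, C) per-pixel class-probability maps, then
# smooth the labels. The majority-vote smoothing is a stand-in for the paper's
# MRF step, which the abstract does not specify.
import numpy as np

def fuse_probability_maps(p_spectral, p_color, weight=0.6):
    """Weighted fusion of two (H, W, C) per-pixel class-probability maps."""
    fused = weight * p_spectral + (1.0 - weight) * p_color
    return fused / fused.sum(axis=-1, keepdims=True)  # renormalize per pixel

def smooth_labels(labels, iterations=1):
    """Crude spatial regularizer: replace each pixel by the majority label
    in its 3x3 neighborhood (illustrative stand-in for the MRF refinement)."""
    h, w = labels.shape
    out = labels.copy()
    for _ in range(iterations):
        padded = np.pad(out, 1, mode="edge")
        for i in range(h):
            for j in range(w):
                patch = padded[i:i + 3, j:j + 3].ravel()
                out[i, j] = np.bincount(patch).argmax()
    return out

# Toy example: 4 classes on a 32x32 scene.
rng = np.random.default_rng(0)
p1 = rng.dirichlet(np.ones(4), size=(32, 32))
p2 = rng.dirichlet(np.ones(4), size=(32, 32))
fused = fuse_probability_maps(p1, p2)
labels = smooth_labels(fused.argmax(axis=-1))
print(labels.shape)  # (32, 32)
```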

Prediction of spatio-temporal AQI data

  • KyeongEun Kim;MiRu Ma;KyeongWon Lee
    • Communications for Statistical Applications and Methods
    • /
    • v.30 no.2
    • /
    • pp.119-133
    • /
    • 2023
  • With the rapid growth of the economy and of fossil fuel consumption, the concentration of air pollutants has increased significantly, and the air pollution problem is no longer limited to small areas. We conduct statistical analysis of actual air-quality data covering all of South Korea using R and Python. Factors such as SO2, CO, O3, NO2, PM10, precipitation, wind speed, wind direction, vapor pressure, local pressure, sea-level pressure, temperature, humidity, and others are used as covariates. The main goal of this paper is to predict spatio-temporal air quality index (AQI) data. The observations in spatio-temporal big datasets such as AQI data are correlated both spatially and temporally, and computing predictions or forecasts under this dependence structure is often infeasible. The likelihood function based on the spatio-temporal model may thus be complicated, and special modeling is useful for statistically reliable predictions. In this paper, we propose several methods for this big spatio-temporal AQI dataset. First, a random-effects model with spatio-temporal basis functions, a classical statistical analysis, is proposed. Next, a neural network model, a deep learning method based on artificial neural networks, is applied. Finally, a random forest model, a machine learning method closer to computational science, is introduced. We then compare the forecasting performance of the three methods in terms of predictive diagnostics. As a result of the analysis, all three methods predicted normal levels of PM2.5 well, but performance appears to be poor at extreme values.
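
The third method named above, a random forest, can be sketched with scikit-learn as follows. The synthetic covariates and the toy PM2.5-like target below are placeholders; the paper's actual AQI dataset and preprocessing are not reproduced here.

```python
# Minimal sketch of a random-forest regressor for an air-quality target.
# Covariates are synthetic stand-ins for SO2, CO, O3, NO2, wind, temperature.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 6))                                          # toy covariates
y = 30 + 5 * X[:, 0] - 3 * X[:, 3] + rng.normal(scale=2.0, size=n)   # toy PM2.5-like target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("test RMSE:", mean_squared_error(y_te, model.predict(X_te)) ** 0.5)
```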

Distributed AI Learning-based Proof-of-Work Consensus Algorithm (분산 인공지능 학습 기반 작업증명 합의알고리즘)

  • Won-Boo Chae;Jong-Sou Park
    • The Journal of Bigdata
    • /
    • v.7 no.1
    • /
    • pp.1-14
    • /
    • 2022
  • The proof-of-work consensus algorithm used by most blockchains causes a massive waste of computing resources in the form of mining. Useful proof-of-work consensus algorithms have been studied to reduce this waste, but resource waste and mining centralization problems remain when creating blocks. In this paper, the problem of resource waste in block generation is solved by replacing the relatively inefficient computation process for block generation with distributed artificial intelligence model learning. In addition, by providing fair rewards to nodes participating in the learning process, nodes with weak computing power are motivated to participate, while performance similar to the existing centralized AI learning method is maintained. To show the validity of the proposed methodology, we implemented a blockchain network capable of distributed AI learning, experimented with reward distribution through resource verification, and compared the results of the existing centralized learning method with those of the blockchain-based distributed AI learning method. As a future study, the paper concludes by suggesting problems and development directions that may arise when scaling up the blockchain main network and the artificial intelligence model.
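
As a loose, toy illustration of replacing mining work with useful learning, the sketch below runs rounds of distributed gradient averaging and splits rewards in proportion to each node's contributed computation. The model, weighting rule, and reward rule are all assumptions; the paper's actual consensus protocol, verification, and reward mechanism are not described in the abstract.

```python
# Toy sketch: each "block round" is one round of distributed gradient averaging
# on a shared linear model; rewards are proportional to data processed.
# Illustrative only -- not the paper's protocol.
import numpy as np

rng = np.random.default_rng(2)
w_true = np.array([2.0, -1.0])
w = np.zeros(2)  # shared model replicated to all nodes

def local_gradient(w, n_samples):
    """One node's least-squares gradient on its private mini-batch."""
    X = rng.normal(size=(n_samples, 2))
    y = X @ w_true + rng.normal(scale=0.1, size=n_samples)
    return X.T @ (X @ w - y) / n_samples, n_samples

node_batches = [50, 200, 100]          # nodes with unequal computing power
for round_ in range(200):              # each round stands in for one block
    grads, sizes = zip(*(local_gradient(w, n) for n in node_batches))
    total = sum(sizes)
    w -= 0.1 * sum(g * (s / total) for g, s in zip(grads, sizes))
rewards = [s / sum(node_batches) for s in node_batches]  # proportional reward share
print("learned:", np.round(w, 2), "reward shares:", rewards)
```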

Analysis of the Existing Analytical Solutions for Isotropic Rectangular Thin Elastic Plates with Three Edges Clamped and the Other Free (등방성 직사각형의 3변 고정 1변 자유 얇은 탄성판에 대한 기존 해석해의 분석)

  • Seo, Seung-Nam
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.26 no.1A
    • /
    • pp.117-132
    • /
    • 2006
  • The existing analytical solutions for rectangular plates with three edges clamped and the other free are derived from the nondimensional differential equation, and their characteristics are analyzed. Since Timoshenko and Woinowsky-Krieger's method (1959) gives solutions only for plates with an aspect ratio of less than one, this method proves impractical for the bending-moment calculation of the plates under consideration. Horii and Moto's method (1968) is modified by adding stabilizing terms to suppress overflow in the matrix computation, from which a series solution with up to 150 terms can be obtained. This series solution is used to test the convergence of the computed bending moments. The modified method is shown to calculate the deflection properties of plates over a wide range of aspect ratios, but the computed x moment at the corner points formed by the free edge and the clamped edges cannot satisfy the boundary condition; the cause of this problem is discussed in detail.
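
For context, the differential equation the abstract refers to is, in classical Kirchhoff thin-plate theory, the biharmonic equation below (standard theory, not taken from the paper itself):

```latex
% Classical Kirchhoff thin-plate equation underlying the series solutions.
\[
  \nabla^4 w
  = \frac{\partial^4 w}{\partial x^4}
  + 2\,\frac{\partial^4 w}{\partial x^2\,\partial y^2}
  + \frac{\partial^4 w}{\partial y^4}
  = \frac{q}{D},
  \qquad
  D = \frac{E t^3}{12\,(1 - \nu^2)},
\]
% where $w$ is the deflection, $q$ the distributed load, $E$ Young's modulus,
% $t$ the plate thickness, and $\nu$ Poisson's ratio. Clamped edges impose
% $w = \partial w/\partial n = 0$; the free edge imposes zero bending moment
% and zero effective shear.
```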

Ensuring Data Confidentiality and Privacy in the Cloud using Non-Deterministic Cryptographic Scheme

  • John Kwao Dawson;Frimpong Twum;James Benjamin Hayfron Acquah;Yaw Missah
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.7
    • /
    • pp.49-60
    • /
    • 2023
  • The amount of data generated by electronic systems through e-commerce, social networks, and data computation has risen. However, securing this data has always been a challenge. The problem is not the quantity of data but how to secure it by ensuring its confidentiality and privacy. Though there is considerable research on cloud data security, this study proposes a security scheme with the lowest execution time. The approach employs a non-linear time complexity to achieve data confidentiality and privacy. A symmetric algorithm dubbed the Non-Deterministic Cryptographic Scheme (NCS) is proposed to address the long execution times of existing cryptographic schemes. NCS has linear time complexity with a low and unpredictable trend of execution times. It achieves confidentiality and privacy of data on the cloud by converting the plaintext into ciphertext with a small number of iterations, thereby decreasing the execution time while maintaining high security. The algorithm is based on good prime numbers, a Linear Congruential Generator (LCG), a Sliding Window Algorithm (SWA), and an XOR gate. For the implementation in C, thirty different execution times were measured and averaged. A comparative analysis of the NCS against the AES, DES, and RSA algorithms was performed on data sizes of 128 KB, 256 KB, and 512 KB using a dataset from Kaggle. The results showed that the proposed NCS had lower execution times than AES, which in turn had better execution times than DES, with RSA the longest. Contrary to the existing understanding that execution time is relative to data size, the experimental results indicated otherwise for the proposed NCS algorithm: with data sizes of 128 KB, 256 KB, and 512 KB, the execution times in milliseconds were 38, 711, and 378 respectively. This validates the NCS as a non-deterministic cryptographic algorithm. The study findings hence support the argument that data size does not determine execution time.
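
To make the named building blocks concrete, the toy sketch below drives an XOR keystream with a linear congruential generator. It is purely illustrative: the parameters are arbitrary, the paper's actual NCS construction (including its use of good prime numbers and the sliding-window step) is not reproduced, and a bare LCG-XOR stream like this is not cryptographically secure.

```python
# Toy LCG-driven XOR stream -- an illustration of two of the components named
# in the abstract, NOT the paper's NCS and NOT secure for real use.
def lcg_keystream(seed, n, a=1103515245, c=12345, m=2**31):
    """Classic LCG; yields n pseudo-random bytes."""
    x = seed
    for _ in range(n):
        x = (a * x + c) % m
        yield (x >> 16) & 0xFF

def xor_crypt(data: bytes, seed: int) -> bytes:
    """Encrypts and decrypts symmetrically: XOR with the LCG keystream."""
    return bytes(b ^ k for b, k in zip(data, lcg_keystream(seed, len(data))))

msg = b"confidential cloud record"
ct = xor_crypt(msg, seed=0xC0FFEE)
assert xor_crypt(ct, seed=0xC0FFEE) == msg  # same key recovers the plaintext
```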

Enhancement of SBR for Speech Signal Using Adaptive Noise Floor Level (가변 잡음 레벨을 이용한 음성신호에 대한 SBR 성능 향상 기술)

  • Lee, Se-Won;Oh, Seoung-Jun;Ahn, Chang-Beom;Lee, Tae-Jin;Kang, Kyoung-Ok;Park, Ho-Chong
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.2
    • /
    • pp.148-154
    • /
    • 2009
  • In audio coding, SBR technology synthesizes the high bands using time-frequency information patched from the low bands together with correction parameters. Since SBR transmits only the correction parameters for the high bands, it provides low-rate coding of the high bands and is used as a core module of MPEG-4 HE-AAC. SBR was originally designed for audio signals, and its performance for speech signals tends to decrease; the major reason is an excessive noise floor in the high bands caused by incorrect tonality computation. In this paper, a new method that determines the noise-floor level adaptively according to the speech characteristics is proposed in order to solve this problem of SBR for speech signals. The proposed method maintains compatibility with standard SBR, and a subjective performance evaluation shows that it improves SBR performance compared with standard SBR, especially for male speech signals.
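
A rough numeric sketch of the idea: estimate per-band tonality from a spectrum and set the injected noise-floor level inversely to it, so strongly tonal (e.g. voiced speech) bands receive a low noise floor. Both the spectral-flatness tonality measure and the linear mapping below are assumptions; the paper's SBR-compatible parameter computation is not given in the abstract.

```python
# Illustrative tonality-to-noise-floor mapping; not the paper's exact method.
import numpy as np

def band_tonality(spectrum, n_bands=8):
    """Spectral-flatness-based tonality per band: near 1 = tonal, near 0 = noisy."""
    bands = np.array_split(np.abs(spectrum) ** 2 + 1e-12, n_bands)
    sfm = [np.exp(np.mean(np.log(b))) / np.mean(b) for b in bands]  # flatness in (0, 1]
    return 1.0 - np.array(sfm)

def adaptive_noise_floor(tonality, floor_min=0.05, floor_max=0.5):
    """Map high tonality to a low injected-noise level, and vice versa."""
    return floor_max - (floor_max - floor_min) * tonality

# Toy spectrum: a strong harmonic plus broadband noise.
f = np.fft.rfft(np.sin(2 * np.pi * 0.05 * np.arange(1024))
                + 0.1 * np.random.default_rng(3).normal(size=1024))
print(np.round(adaptive_noise_floor(band_tonality(f)), 3))
```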

Benchmark Numerical Simulation on the Coupled Behavior of the Ground around a Point Heat Source Using the TOUGH-FLAC Approach (TOUGH-FLAC 기법을 이용한 점열원 주변지반의 복합거동에 대한 벤치마크 수치모사)

  • Dohyun Park
    • Tunnel and Underground Space
    • /
    • v.34 no.2
    • /
    • pp.127-142
    • /
    • 2024
  • The robustness of a numerical method means that its computational performance is maintained under various modeling conditions. New numerical methods or codes need to be assessed for robustness through benchmark testing. The TOUGH-FLAC modeling approach has been applied to various fields such as subsurface carbon dioxide storage, geological disposal of spent nuclear fuel, and geothermal development both domestically and internationally, and the modeling validity has been examined by comparing the results with experimental measurements and other numerical codes. In the present study, a benchmark test of the TOUGH-FLAC approach was performed based on a coupled thermal-hydro-mechanical behavior problem with an analytical solution. The analytical solution is related to the temperature, pore water pressure, and mechanical behavior of a fully saturated porous medium that is subjected to a point heat source. The robustness of the TOUGH-FLAC approach was evaluated by comparing the analytical solution with the results of numerical simulation. Additionally, the effects of thermal-hydro-mechanical coupling terms, fluid phase change, and timestep on the computation of coupled behavior were investigated.
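
For orientation, the temperature part of the classical continuous point-heat-source solution in an infinite conducting medium (Carslaw and Jaeger) is sketched below. The benchmark's full coupled solution also covers pore pressure and mechanical response, which are not reproduced here, and the parameter values are placeholders.

```python
# Temperature rise around a constant point heat source in an infinite medium
# (standard conduction result); parameters below are illustrative placeholders.
import numpy as np
from scipy.special import erfc

def point_source_temperature(r, t, Q=1000.0, k_th=2.0, alpha=1e-6):
    """Temperature rise at radius r (m) and time t (s) around a constant point
    source of power Q (W); k_th = conductivity (W/m/K), alpha = thermal
    diffusivity (m^2/s)."""
    return Q / (4.0 * np.pi * k_th * r) * erfc(r / (2.0 * np.sqrt(alpha * t)))

radii = np.array([0.5, 1.0, 2.0, 5.0])
print(np.round(point_source_temperature(radii, t=86400 * 30), 2))  # after 30 days
```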

A Comparative Study of Subset Construction Methods in OSEM Algorithms using Simulated Projection Data of Compton Camera (모사된 컴프턴 카메라 투사데이터의 재구성을 위한 OSEM 알고리즘의 부분집합 구성법 비교 연구)

  • Kim, Soo-Mee;Lee, Jae-Sung;Lee, Mi-No;Lee, Ju-Hahn;Kim, Joong-Hyun;Kim, Chan-Hyeong;Lee, Chun-Sik;Lee, Dong-Soo;Lee, Soo-Jin
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.41 no.3
    • /
    • pp.234-240
    • /
    • 2007
  • Purpose: In this study we propose a block-iterative method for reconstructing Compton scattered data. This study shows that the well-known expectation maximization (EM) approach, along with its accelerated version based on the ordered-subsets principle, can be applied to the problem of image reconstruction for the Compton camera. This study also compares several methods of constructing subsets for optimal performance of our algorithms. Materials and Methods: Three reconstruction algorithms were implemented: simple backprojection (SBP), EM, and ordered-subset EM (OSEM). For OSEM, the projection data were grouped into subsets in a predefined order. Three different schemes for choosing nonoverlapping subsets were considered: scatter-angle-based subsets, detector-position-based subsets, and subsets based on both scatter angle and detector position. EM and OSEM with 16 subsets were performed with 64 and 4 iterations, respectively. The performance of each algorithm was evaluated in terms of computation time and normalized mean-squared error. Results: Both EM and OSEM clearly outperformed SBP in all aspects of accuracy. OSEM with 16 subsets and 4 iterations, which is equivalent to the standard EM with 64 iterations, was approximately 14 times faster in computation time than the standard EM. In OSEM, all three schemes for choosing subsets yielded similar results in computation time as well as normalized mean-squared error. Conclusion: Our results show that the OSEM algorithm, which has proven useful in emission tomography, can also be applied to the problem of image reconstruction for the Compton camera. With properly chosen subset-construction methods and moderate numbers of subsets, our OSEM algorithm significantly improves computational efficiency while keeping the original quality of the standard EM reconstruction. The OSEM algorithm with subsets based on both scatter angle and detector position appears most useful.
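
The OSEM update itself is standard and can be sketched in a few lines: the measurements are split into ordered subsets and the multiplicative EM update is applied once per subset. The random system matrix below stands in for the Compton camera's conical projection model, which is specific to the paper.

```python
# Minimal OSEM (ordered-subsets expectation maximization) sketch.
import numpy as np

def osem(y, A, n_subsets=16, n_iters=4):
    """y: (n_meas,) measured counts; A: (n_meas, n_voxels) system matrix."""
    n_meas, n_vox = A.shape
    x = np.ones(n_vox)                      # flat initial image
    subsets = np.array_split(np.arange(n_meas), n_subsets)
    for _ in range(n_iters):
        for idx in subsets:                 # one EM update per ordered subset
            As = A[idx]
            proj = As @ x + 1e-12           # forward projection
            x *= (As.T @ (y[idx] / proj)) / (As.sum(axis=0) + 1e-12)
    return x

rng = np.random.default_rng(4)
A = rng.uniform(size=(256, 64))             # stand-in for the Compton system model
x_true = rng.uniform(size=64)
y = rng.poisson(A @ x_true).astype(float)
print(np.round(osem(y, A)[:8], 3))
```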

Multi-day Trip Planning System with Collaborative Recommendation (협업적 추천 기반의 여행 계획 시스템)

  • Aprilia, Priska;Oh, Kyeong-Jin;Hong, Myung-Duk;Ga, Myeong-Hyeon;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.159-185
    • /
    • 2016
  • Planning a multi-day trip is a complex and time-consuming task. It usually starts with selecting a list of points of interest (POIs) worth visiting and then arranging them into an itinerary, taking into consideration various constraints and preferences. When choosing POIs to visit, one might ask friends for suggestions, search for information on the Web, or seek advice from travel agents; however, those options have their limitations. First, the knowledge of friends is limited to the places they have visited. Second, the tourism information on the internet may be vast, but it can force one to invest a lot of time reading and filtering it. Lastly, travel agents might be biased towards providers of certain travel products when suggesting itineraries. In recent years, many researchers have tried to deal with the huge amount of tourism information available on the internet. They have explored the wisdom of the crowd through the overwhelming number of images shared by people on social media sites. Furthermore, trip planning problems are usually formulated as 'Tourist Trip Design Problems' and solved using various search algorithms with heuristics. Recommendation systems using a variety of techniques have also been built to cope with the overwhelming tourism information on the internet. Prediction models for recommendation systems are typically built using a large dataset; however, such a dataset is not always available. For other models, especially those that require input from people, human computation has emerged as a powerful and inexpensive approach. This study proposes CYTRIP (Crowdsource Your TRIP), a multi-day trip itinerary planning system that draws on the collective intelligence of contributors in recommending POIs. In order to enable the crowd to collaboratively recommend POIs to users, CYTRIP provides a shared workspace. In the shared workspace, the crowd can recommend as many POIs to as many requesters as they can, and they can also vote on POIs recommended by other people when they find them interesting. In CYTRIP, anyone can contribute by recommending POIs to requesters based on the requesters' specified preferences. CYTRIP takes the recommended POIs as input to build a multi-day trip itinerary, taking into account the user's preferences, the various time constraints, and the locations. The input then becomes a multi-day trip planning problem formulated in Planning Domain Definition Language 3 (PDDL3). A sequence of actions formulated in a domain file is used to achieve the goals of the planning problem, namely visiting the recommended POIs. The multi-day trip planning problem is highly constrained, and sometimes it is not feasible to visit all the recommended POIs with the limited resources available, such as the time the user can spend. In order to cope with an unachievable goal that could leave the other goals without a solution, CYTRIP selects a set of feasible POIs prior to the planning process. The planning problem is created for the selected POIs and fed into the planner, and the solution returned by the planner is parsed into a multi-day trip itinerary and displayed to the user on a map. The proposed system is implemented as a web-based application built with PHP on the CodeIgniter Web Framework. In order to evaluate the proposed system, an online experiment was conducted. The results show that, with the help of the contributors, CYTRIP can plan and generate a multi-day trip itinerary that is tailored to the user's preferences and bound by constraints such as location and time. The contributors also found CYTRIP a useful tool for collecting POIs from the crowd and planning a multi-day trip.
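
The pre-planning feasibility step described above can be sketched as follows. The greedy votes-per-hour rule and the toy data are illustrative assumptions, since the abstract does not specify CYTRIP's selection heuristic.

```python
# Illustrative pre-planning step: pick a feasible POI subset under a time budget
# before handing the problem to the planner. Greedy rule and data are assumptions.
def select_feasible_pois(pois, time_budget_h):
    """pois: list of (name, visit_hours, votes). Greedily keep the POIs with
    the best votes-per-hour ratio until the budget is used up."""
    chosen, used = [], 0.0
    for name, hours, votes in sorted(pois, key=lambda p: p[2] / p[1], reverse=True):
        if used + hours <= time_budget_h:
            chosen.append(name)
            used += hours
    return chosen

pois = [("museum", 2.0, 18), ("old town", 3.0, 21), ("tower", 1.0, 12), ("park", 1.5, 6)]
print(select_feasible_pois(pois, time_budget_h=6.0))  # fits within a 6-hour day
```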

3D Modeling from 2D Stereo Image using 2-Step Hybrid Method (2단계 하이브리드 방법을 이용한 2D 스테레오 영상의 3D 모델링)

  • No, Yun-Hyang;Go, Byeong-Cheol;Byeon, Hye-Ran;Yu, Ji-Sang
    • Journal of KIISE:Software and Applications
    • /
    • v.28 no.7
    • /
    • pp.501-510
    • /
    • 2001
  • Generally, exact disparity estimation is essential for 3D modeling from stereo images. Because existing methods calculate disparities over the whole image, they require too much computation time and suffer from mismatching problems. In this article, exploiting the fact that disparity vectors in stereo images are not distributed evenly over the whole image but are concentrated on the background and the object, we apply a wavelet transform to the stereo images and, in the first step, estimate coarse disparity fields from the reduced low-pass field using an area-based method. From these coarse disparity vectors, we generate a disparity histogram and use it to separate the object from the background area. Afterwards, we restore only the object area to the original resolution and estimate dense, accurate disparity with our second-step pixel-based method, which uses the second gradient rather than pixel brightness. We also extract feature points from the separated object area and estimate depth information by applying the disparity vectors and camera parameters. Finally, we generate the 3D model using the feature points and their z coordinates. Using the proposed method, we can considerably reduce the computation time and estimate precise disparity through the additional pixel-based step using a LoG filter. Furthermore, our foreground/background separation solves the mismatching problem of the existing Delaunay triangulation and generates an accurate 3D model.
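
A minimal sketch of the first (area-based) step: block matching on a reduced low-pass field. Here a 2x2 block average stands in for the wavelet low-pass band, and the window size, search range, and SAD cost are illustrative choices rather than the paper's exact setup.

```python
# Coarse disparity by minimum-SAD block matching on a downsampled image pair.
import numpy as np

def downsample2(img):
    """Cheap stand-in for the wavelet low-pass band: 2x2 block average."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def coarse_disparity(left, right, win=4, max_d=8):
    """Per-pixel horizontal disparity by minimum-SAD block matching."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for i in range(win, h - win):
        for j in range(win + max_d, w - win):
            patch = left[i - win:i + win, j - win:j + win]
            costs = [np.abs(patch - right[i - win:i + win, j - d - win:j - d + win]).sum()
                     for d in range(max_d + 1)]
            disp[i, j] = int(np.argmin(costs))
    return disp

rng = np.random.default_rng(5)
right_img = rng.uniform(size=(64, 64))
left_img = np.roll(right_img, 3, axis=1)        # synthetic 3-pixel shift
d = coarse_disparity(downsample2(left_img), downsample2(right_img))
print(np.bincount(d.ravel()).argmax())           # dominant coarse disparity
```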
