• Title/Summary/Keyword: square partition

Characteristics of Input-Output Spaces of Fuzzy Inference Systems by Means of Membership Functions and Performance Analyses (소속 함수에 의한 퍼지 추론 시스템의 입출력 공간 특성 및 성능 분석)

  • Park, Keon-Jun;Lee, Dong-Yoon
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.4
    • /
    • pp.74-82
    • /
    • 2011
  • Fuzzy modeling of a nonlinear process requires analyzing the input-output characteristics of fuzzy inference systems according to the division of the entire input space and the fuzzy reasoning method. To this end, the fuzzy model is expressed by identifying the structure and parameters of the system by means of the input variables, the fuzzy partition of the input space, and consequence polynomial functions. In the premise part of the fuzzy rules, the Min-Max method, which uses the minimum and maximum values of the input data set, and the C-Means clustering algorithm, which forms the input data into clusters, are used to identify the fuzzy model, with triangular, Gaussian-like, and trapezoidal membership functions. In the consequence part of the fuzzy rules, fuzzy reasoning is conducted by two types of inference: simplified and linear. The consequence parameters of each rule, namely the polynomial coefficients, are identified by the standard least-squares method. Finally, we evaluate the performance and system characteristics using the gas furnace process, a widely used nonlinear benchmark.
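As a rough illustration of the premise/consequence split described above, here is a minimal sketch of simplified (constant-consequence) inference with triangular membership functions; all names are hypothetical and not taken from the paper:

```python
def tri_mf(x, a, b, c):
    """Triangular membership: rises on [a, b], falls on [b, c], 0 outside."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def simplified_inference(x, rules):
    """rules: list of ((a, b, c), w) pairs -- a triangular premise and a
    constant consequence w. Output is the membership-weighted average."""
    num = den = 0.0
    for (a, b, c), w in rules:
        mu = tri_mf(x, a, b, c)
        num += mu * w
        den += mu
    return num / den if den else 0.0
```

In the linear-inference variant described in the abstract, the constant `w` would be replaced by a polynomial in the inputs whose coefficients are fitted by least squares.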

A Study on Evolutionary Computation of Fractal Image Compression (프랙탈 영상 압축의 진화적인 계산에 관한 연구)

  • Yoo, Hwan-Young;Choi, Bong-Han
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.2
    • /
    • pp.365-372
    • /
    • 2000
  • This paper introduces evolutionary computation to Fractal Image Compression (FIC). FIC requires a partitioning of the image into ranges; it is proposed that evolutionary computation be applied to this partitioning problem. Here, ranges are connected sets of small square image blocks. Populations consist of $N_p$ configurations, each of which is a partitioning with a fractal code. In each evolution step, every configuration produces $\sigma$ children, which inherit their parent's partitioning except for two random neighboring ranges, which are merged. From the offspring, the best ones are selected for the next generation's population based on a fitness criterion derived from the Collage Theorem. Because the optimal partitioning exploits duplication in the image data, the method is more efficient in storage, speed, and image quality than techniques using other codings. FIC using evolutionary computation in multimedia image processing applies to fields such as image restoration and animation, which require high image quality and high compression ratios.
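A minimal sketch of the merge-and-select step described above; the `fitness` callable stands in for the Collage Theorem bound, adjacency is simplified to list order, and all names are hypothetical:

```python
import random

def make_children(partition, sigma):
    """Each child inherits the parent partitioning except for two random
    neighboring ranges (here: adjacent in list order), which are merged."""
    children = []
    for _ in range(sigma):
        child = [set(r) for r in partition]
        if len(child) > 1:
            i = random.randrange(len(child) - 1)
            child[i] |= child.pop(i + 1)  # merge ranges i and i+1
        children.append(child)
    return children

def next_generation(children, fitness, k):
    """Keep the k fittest children (lower fitness = better collage error)."""
    return sorted(children, key=fitness)[:k]
```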

A Study on Predictive Modeling of I-131 Radioactivity Based on Machine Learning (머신러닝 기반 고용량 I-131의 용량 예측 모델에 관한 연구)

  • Yeon-Wook You;Chung-Wun Lee;Jung-Soo Kim
    • Journal of radiological science and technology
    • /
    • v.46 no.2
    • /
    • pp.131-139
    • /
    • 2023
  • High-dose I-131 used for the treatment of thyroid cancer causes localized exposure among the radiology technologists handling it. Because there is a delay between the calibration date and the date on which a dose of I-131 is administered to a patient, it is necessary to directly measure the radioactivity of the administered dose using a dose calibrator. In this study, we applied machine learning models to external dose rates measured from shielded I-131 in order to predict its radioactivity. External dose rates were measured at distances of 1 m, 0.3 m, and 0.1 m from a shielded container with the I-131, for a total of 868 sets of measurements. For the modeling process, we used the hold-out method to partition the data at a 7:3 ratio (609 training : 259 test). For the machine learning algorithms, we chose linear regression, decision tree, random forest, and XGBoost. To evaluate the models, we calculated the root mean square error (RMSE), mean square error (MSE), and mean absolute error (MAE) for accuracy, and $R^2$ for explanatory power. The results are as follows: linear regression (RMSE 268.15, MSE 71901.87, MAE 231.68, $R^2$ 0.92), decision tree (RMSE 108.89, MSE 11856.92, MAE 19.24, $R^2$ 0.99), random forest (RMSE 8.89, MSE 79.10, MAE 6.55, $R^2$ 0.99), XGBoost (RMSE 10.21, MSE 104.22, MAE 7.68, $R^2$ 0.99). The random forest model achieved the highest predictive ability. Improving the model's performance in the future is expected to contribute to lowering exposure among radiology technologists.
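The hold-out split and the error metrics used above can be sketched in a few lines of plain Python (hypothetical helper names; no shuffling or model fitting shown):

```python
import math

def holdout_split(data, ratio=0.7):
    """7:3 hold-out partition as in the paper (ordering preserved)."""
    cut = int(len(data) * ratio)
    return data[:cut], data[cut:]

def regression_metrics(y_true, y_pred):
    """RMSE / MSE / MAE for accuracy, R^2 for explanatory power."""
    n = len(y_true)
    errs = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errs) / n
    mae = sum(abs(e) for e in errs) / n
    mean = sum(y_true) / n
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    r2 = 1 - sum(e * e for e in errs) / ss_tot
    return {"RMSE": math.sqrt(mse), "MSE": mse, "MAE": mae, "R2": r2}
```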

Segmentation of Measured Point Data for Reverse Engineering (역공학을 위한 측정점의 영역화)

  • 양민양;이응기
    • Korean Journal of Computational Design and Engineering
    • /
    • v.4 no.3
    • /
    • pp.173-179
    • /
    • 1999
  • In reverse engineering, when a shape composed of multiple surface patches is digitized, the boundaries of these surfaces should be detected. The objective of this paper is to introduce a computationally efficient segmentation technique for extracting edges and partitioning the 3D measured point data based on the location of these boundaries. The procedure begins with the identification of edge points. An automatic edge-based approach is developed on the basis of local geometry. A parametric quadric surface approximation method is used to estimate the local surface curvature properties. The least-squares approximation scheme minimizes the sum of the squares of the Euclidean distances between the neighborhood data points and the parametric quadric surface. The surface curvatures and principal directions are computed from the locally approximated surfaces. Edge points are identified as the curvature extrema and zero-crossings found from the estimated surface curvatures. After the edge points are identified, an edge-neighborhood chain-coding algorithm is used to form the boundary curves. The original point set is then broken down into subsets, which meet along the boundaries, by a scan-line algorithm; all points are tested against each boundary loop to assign them to regions. Experimental results are presented to verify the developed method.
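The local quadric fit at the heart of the curvature estimation can be sketched as an ordinary least-squares problem. This assumes a simple height-field form z = ax² + bxy + cy² + dx + ey + f, which is an illustrative stand-in rather than the paper's exact parameterization:

```python
import numpy as np

def fit_quadric(points):
    """Least-squares fit of z = a x^2 + b xy + c y^2 + d x + e y + f
    to a neighborhood of measured points (n x 3 array).
    Curvatures would then follow from the fitted coefficients."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Design matrix: one column per monomial of the quadric.
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs
```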

Rotationally Invariant Space-Time Trellis Codes with 4-D Rectangular Constellations for High Data Rate Wireless Communications

  • Sterian, Corneliu Eugen D.;Wang, Cheng-Xiang;Johnsen, Ragnar;Patzold, Matthias
    • Journal of Communications and Networks
    • /
    • v.6 no.3
    • /
    • pp.258-268
    • /
    • 2004
  • We demonstrate rotationally invariant space-time (ST) trellis codes with a 4-D rectangular signal constellation for data transmission over fading channels using two transmit antennas. Rotational invariance is a desirable property that may alleviate the task of the carrier-phase tracking circuit in the receiver. The transmitted data stream is segmented into eight-bit blocks and quadrature amplitude modulated using a 256-point 4-D signal constellation whose 2-D constituent constellation is a doubly partitioned 16-point square constellation. The 4-D signal constellation is simply the Cartesian product of the 2-D signal constellation with itself and has 32 subsets. The partition is performed, on one side, into four subsets A, B, C, and D with increased minimum squared Euclidean distance, and, on the other side, into four rings, where each ring includes four points of equal energy. We propose both linear and nonlinear ST trellis codes and perform simulations using an appropriate multiple-input multiple-output (MIMO) channel model. The 4-D ST codes constructed here demonstrate about the same frame error rate (FER) performance as their 2-D counterparts, with the added value of rotational invariance.
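The constellation construction above can be illustrated with a short sketch. The alternating-grid labeling rule below is one plausible way to obtain four subsets with increased minimum squared distance; it is not necessarily the paper's exact A/B/C/D partition:

```python
def square_16qam():
    """16-point square constellation with levels {-3, -1, 1, 3}."""
    levels = [-3, -1, 1, 3]
    return [(i, q) for i in levels for q in levels]

def four_subsets(points):
    """Partition into four subsets by alternating grid position; within a
    subset the minimum squared Euclidean distance grows from 4 to 16."""
    subsets = {}
    for i, q in points:
        key = ((i + 3) // 2 % 2, (q + 3) // 2 % 2)
        subsets.setdefault(key, []).append((i, q))
    return list(subsets.values())

def cartesian_4d(points):
    """4-D constellation: Cartesian product of the 2-D one with itself."""
    return [(p, r) for p in points for r in points]
```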

A Study on the Commercial Streetscape Design Guideline of the Historic and Cultural Environmental Districts in Ancient Capital Gyeongju (고도(古都) 경주의 역사문화환경지구 내 상업가로경관 디자인가이드라인에 관한 연구)

  • Hyun, Taek-Soo
    • Journal of the Korean Institute of Rural Architecture
    • /
    • v.16 no.1
    • /
    • pp.87-94
    • /
    • 2014
  • The objective of this study is to provide a townscape design guideline that harmonizes with the historic landscape, via condition investigation and landscape analysis of the cultural/commercial environment district where improvement is a necessity. The conclusions are as follows: 1. To enhance the identity of Gyeongju, diversity in unity should be pursued by giving the architectural landscape a consistent character. 2. The central commercial district, where tradition is valued and meets contemporary forms, needs landscape formation guided by context. 3. Since the target area serves as a regional commercial center, it should be reshaped into a simpler environment to reduce congestion. 4. Buildings on streets with cultural properties should be induced to adopt Korean traditional tiled roofs so as to fit in with their surroundings. 5. As a rule, walls should be divided into three parts, and exposed wall surfaces should use the same finishing materials. 6. Considering pedestrians' visual field and building width, the recommended facade height is 3~3.6 m. 7. For the design archetype of the cornice, four traditional types based on Korean traditional eaves are suggested. 8. Signboard design should break away from the existing square shapes and make use of icons that express the city's historicity and traditions.

Improved Dynamic Programming in Local Linear Approximation Based on a Template in a Lightweight ECG Signal-Processing Edge Device

  • Lee, Seungmin;Park, Daejin
    • Journal of Information Processing Systems
    • /
    • v.18 no.1
    • /
    • pp.97-114
    • /
    • 2022
  • Interest is increasing in electrocardiogram (ECG) signal analysis on embedded devices, creating the need for an algorithm suitable for low-power, low-memory hardware. Linear approximation of the ECG signal facilitates the detection of fiducial points by expressing the signal as a small number of vertices. However, dynamic programming, the global optimization method used for linear approximation, has the disadvantage of high complexity even with memoization. In this paper, the calculation area and memory usage are improved using a linearly approximated template. The proposed algorithm reduces the calculation area required for dynamic programming through local optimization around the vertices of the template. In addition, it minimizes the required storage by expressing the time information as the offset from the template's vertices, which is more compact than the time difference between vertices. When the length of the signal is L, the number of vertices is N, and the margin tolerance is M, the spatial complexity improves from O(NL) to O(NM). In our experiment, linear approximation was 12.45 times faster, from 18.18 ms to 1.46 ms on average per beat. The quality distribution of the percentage root-mean-square difference confirms that the proposed algorithm yields a stable approximation.
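A greedy toy version of the template-guided idea: instead of filling the full dynamic-programming table, search only within ±margin of each template vertex, which is where the O(NM) space bound comes from. This is a simplified sketch with hypothetical names, not the paper's algorithm:

```python
def segment_error(signal, i, j):
    """Sum of squared deviations between samples and the straight line
    joining (i, signal[i]) and (j, signal[j])."""
    err = 0.0
    for k in range(i + 1, j):
        interp = signal[i] + (signal[j] - signal[i]) * (k - i) / (j - i)
        err += (signal[k] - interp) ** 2
    return err

def local_vertex_search(signal, template, margin):
    """Place each vertex within +/-margin of the corresponding template
    vertex, greedily minimizing the error of the incoming segment."""
    vertices = [0]
    for t in template[1:]:
        lo = max(vertices[-1] + 1, t - margin)
        hi = min(len(signal) - 1, t + margin)
        best = min(range(lo, hi + 1),
                   key=lambda j: segment_error(signal, vertices[-1], j))
        vertices.append(best)
    return vertices
```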

A Linear-Time Heuristic Algorithm for k-Way Network Partitioning (선형의 시간 복잡도를 가지는 휴리스틱 k-방향 네트워크 분할 알고리즘)

  • Choi, Tae-Young
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.8
    • /
    • pp.1183-1194
    • /
    • 2004
  • The network partitioning problem is to partition a network into multiple blocks such that the size of the cutset is minimized while the block sizes are kept balanced. Iterative improvement algorithms based on cell moves, such as the Fiduccia-Mattheyses, Sanchis, and Kernighan-Lin algorithms, are regarded as simple and efficient. All of these algorithms stipulate balanced block sizes as a constraint that must be satisfied, which makes cell movement inefficient. Park and Park introduced a balancing coefficient R by which the block-size balance is considered part of the partitioning cost rather than a constraint. However, Park and Park's algorithm has quadratic time complexity in the number of cells. In this paper, we propose the Bucket algorithm, which has linear time complexity in the number of cells while retaining the advantage of the balancing coefficient. The reduced time complexity rests on the simple observation that the balancing cost does not vary much when a cell moves; a bucket data structure is used to maintain the partitioning cost efficiently. Experimental results on the MCNC test sets show that the cutset size of the proposed algorithm is 63.33%~92.38% of that of Sanchis' algorithm, while satisfying the predefined balancing constraints with acceptable execution time.
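The bucket structure mentioned above can be sketched as a gain-indexed table in the style of Fiduccia-Mattheyses gain buckets. This is a simplified illustration (no gain updates after a move are shown), with hypothetical names:

```python
from collections import defaultdict

def build_gain_buckets(gains):
    """Group cells by their current move gain; the bucket keyed by the
    maximum gain always holds the best candidate moves."""
    buckets = defaultdict(set)
    for cell, g in gains.items():
        buckets[g].add(cell)
    return buckets

def pop_best_move(buckets):
    """Remove and return (cell, gain) for a cell of maximum gain."""
    g = max(buckets)
    cell = buckets[g].pop()
    if not buckets[g]:
        del buckets[g]  # drop the empty bucket so max() stays correct
    return cell, g
```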

Development of QSAR Model Based on the Key Molecular Descriptors Selection and Computational Toxicology for Prediction of Toxicity of PCBs (PCBs 독성 예측을 위한 주요 분자표현자 선택 기법 및 계산독성학 기반 QSAR 모델 개발)

  • Kim, Dongwoo;Lee, Seungchel;Kim, Minjeong;Lee, Eunji;Yoo, ChangKyoo
    • Korean Chemical Engineering Research
    • /
    • v.54 no.5
    • /
    • pp.621-629
    • /
    • 2016
  • Recently, research on quantitative structure-activity relationships (QSAR), which describe the toxicities or activities of chemicals based on their structural characteristics, has been widely carried out in order to estimate the toxicity of chemicals in multiuse facilities. Because the toxicity of a chemical is explained by various kinds of molecular descriptors, an important step in QSAR model development is selecting significant molecular descriptors. This research proposes a statistical selection of significant molecular descriptors and a new QSAR model based on partial least squares (PLS). The proposed QSAR model is applied to estimate the logarithm of the partition coefficient (log P) of 130 polychlorinated biphenyls (PCBs) and the lethal concentration ($LC_{50}$) of 14 PCBs, and its prediction accuracy is compared to that of a conventional QSAR model provided by the OECD QSAR Toolbox. To select molecular descriptors that correlate highly with the activity information of the chemicals of interest, the correlation coefficient (r) and the variable importance in projection (VIP) are applied; a PLS model of the selected molecular descriptors and activity information is then used to predict the toxicities and activity information of chemicals. In terms of the coefficient of determination ($R^2$) and the prediction residual error sum of squares (PRESS), the proposed QSAR model improved the prediction of log P and $LC_{50}$ by 26% and 91%, respectively, over the conventional QSAR model. The proposed QSAR method based on computational toxicology can improve the prediction of the toxicities and activity information of chemicals, contributing to the health and environmental risk assessment of toxic chemicals.
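A toy version of the descriptor-screening step, using only the |r| filter; the VIP scores from a fitted PLS model, which the paper also uses, are omitted here, and all names and the threshold are hypothetical:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def select_descriptors(descriptors, activity, threshold=0.5):
    """Keep descriptors whose |r| with the activity values exceeds the
    threshold -- a stand-in for the paper's combined r / VIP screening."""
    return [name for name, values in descriptors.items()
            if abs(pearson_r(values, activity)) >= threshold]
```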

Genetically Optimized Hybrid Fuzzy Neural Networks Based on Linear Fuzzy Inference Rules

  • Oh Sung-Kwun;Park Byoung-Jun;Kim Hyun-Ki
    • International Journal of Control, Automation, and Systems
    • /
    • v.3 no.2
    • /
    • pp.183-194
    • /
    • 2005
  • In this study, we introduce an advanced architecture of genetically optimized Hybrid Fuzzy Neural Networks (gHFNN) and develop a comprehensive design methodology supporting their construction. A series of numeric experiments is included to illustrate the performance of the networks. The construction of the gHFNN exploits fundamental technologies of Computational Intelligence (CI), namely fuzzy sets, neural networks, and genetic algorithms (GAs). The architecture of the gHFNN results from the synergistic use of a genetic-optimization-driven hybrid system generated by combining Fuzzy Neural Networks (FNNs) with Polynomial Neural Networks (PNNs). In this tandem, an FNN supports the formation of the premise part of the rule-based structure of the gHFNN, while the consequence part is designed using PNNs. We distinguish between two types of linear fuzzy-inference rule-based FNN structures, showing how this taxonomy depends on the type of fuzzy partition of the input variables. As to the consequence part of the gHFNN, the development of the PNN rests on two general optimization mechanisms: structural optimization is realized via GAs, whereas for parametric optimization we proceed with standard least-squares learning. To evaluate the performance of the gHFNN, the models are tested on a representative numerical example. A comparative analysis demonstrates that the proposed gHFNN comes with higher accuracy as well as superb predictive capabilities compared with other neurofuzzy models.