• Title/Summary/Keyword: K-means algorithm

Performance analysis and operation simulation of the beamforming antenna applied to cellular CDMA basestation (셀룰러 CDMA 기지국에 beamforming 안테나를 적용하기 위한 동작 시뮬레이션 및 성능해석에 관한 연구)

  • Park, Jae-Jun;Bae, Byeong-Jae;Jang, Tae-Gyu
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.37 no.2
    • /
    • pp.32-45
    • /
    • 2000
  • This paper presents an analytic derivation of the SINR when a linear array antenna is incorporated into a cellular CDMA basestation receiver, in relation to the two major performance-affecting factors in beamforming (BF) applications: the direction selectivity, which refers to the narrowness of the mainbeam width, and the direction-of-arrival (DOA) estimation accuracy. The analytically derived results are compared with an operation simulation of the receiver realized with several BF algorithms, and their agreement is confirmed, verifying the correctness of both the analysis and the simulation. In order to separately investigate the effects of errors occurring in direction estimation and in interference suppression, which are the two major functional components of general BF algorithms, both the steering BF and the minimum-variance-distortionless-response (MVDR) BF algorithms are applied in the analysis. A signal model reflecting the spatial scattering of the RF waves entering the array antenna, which directly affects the accuracy of the BF algorithm's direction estimation, is also suggested and applied to the analysis and the simulation. The results confirm that enhancing the direction selectivity of the array antenna is not desirable in view of either implementation economy or the BF algorithm's robustness to error factors. Such a trade-off characteristic is significant in that it can be exploited to obtain an economical BF implementation that does not severely deteriorate performance while ensuring robustness to error effects; the analysis results of this paper can thus serve as a design reference in developing BF algorithms for cellular CDMA systems.
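
The MVDR beamformer named above computes its weights from the array covariance matrix and a steering vector toward the estimated DOA. A minimal numeric sketch for a hypothetical two-element, half-wavelength-spaced array (the geometry, interferer power, and angles below are illustrative, not the paper's simulation setup):

```python
import cmath
import math

def steer(theta_deg):
    # steering vector of a 2-element, half-wavelength-spaced array
    phi = math.pi * math.sin(math.radians(theta_deg))
    return [1 + 0j, cmath.exp(-1j * phi)]

def mvdr_weights(R, a):
    # w = R^-1 a / (a^H R^-1 a), with an explicit 2x2 inverse
    (r11, r12), (r21, r22) = R
    det = r11 * r22 - r12 * r21
    Rinv = [[r22 / det, -r12 / det], [-r21 / det, r11 / det]]
    Ra = [Rinv[0][0] * a[0] + Rinv[0][1] * a[1],
          Rinv[1][0] * a[0] + Rinv[1][1] * a[1]]
    denom = a[0].conjugate() * Ra[0] + a[1].conjugate() * Ra[1]
    return [Ra[0] / denom, Ra[1] / denom]

# covariance: unit noise plus a strong interferer arriving from 40 degrees
ai = steer(40.0)
R = [[(1 if m == n else 0) + 10 * ai[m] * ai[n].conjugate()
      for n in range(2)] for m in range(2)]

a0 = steer(0.0)                       # look direction (assumed DOA estimate)
w = mvdr_weights(R, a0)
resp_look = w[0].conjugate() * a0[0] + w[1].conjugate() * a0[1]
resp_intf = w[0].conjugate() * ai[0] + w[1].conjugate() * ai[1]
```

The distortionless constraint keeps the look-direction response at unity while the interferer is attenuated; a DOA estimation error of the kind analyzed in the paper shows up here as a mismatch between `a0` and the true signal direction.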

Joint Price and Lot-size Determination for Decaying Items with Ordering Cost Inclusive of a Freight Cost under Trade Credit in a Two-stage Supply Chain (2 단계 신용거래 공급망에서 운송비용이 포함된 주문 비용을 고려한 퇴화성제품의 재고정책 및 판매가격 결정 모형)

  • Shinn, Seong-Whan
    • The Journal of the Convergence on Culture Technology
    • /
    • v.6 no.2
    • /
    • pp.191-197
    • /
    • 2020
  • As an effective means of price discrimination, some suppliers offer trade credit to distributors in order to increase the demand for the product they produce. The availability of delayed payments from the supplier enables the distributor to discount the selling price, choosing from a wider range of price options in anticipation of increased customer demand. In this regard, we consider the problem of simultaneously determining the distributor's optimal price and lot size when the supplier permits delayed payment for an order of a product whose demand rate is represented by a constant price-elasticity function. It is assumed that the distributor pays the shipping cost for the order; hence the distributor's ordering cost consists of a fixed ordering cost and a shipping cost that depends on the order quantity. For the analysis, it is also assumed that inventory is depleted not only by customer demand but also by decay. We develop a solution algorithm from the properties of the mathematical model, and a numerical example is presented to illustrate the algorithm.
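
The joint decision described above, picking the selling price and the lot size together, can be sketched as a brute-force search over a simplified profit-rate model with iso-elastic demand. All parameters and the cost approximation below are illustrative only; they omit the paper's trade-credit term and its exact deterioration dynamics:

```python
def demand(p, k=1000.0, e=1.5):
    # constant price-elasticity demand: D(p) = k * p^(-e)
    return k * p ** -e

def profit_rate(p, Q, c=2.0, A=50.0, f=0.1, h=0.3, theta=0.05):
    # p: selling price, Q: lot size, c: unit cost, A: fixed ordering cost,
    # f: freight cost per unit, h: holding cost rate, theta: decay rate
    d = demand(p)
    T = Q / d                               # replenishment cycle length
    ordering = (A + f * Q) / T              # fixed + freight cost per unit time
    holding = (h + theta * c) * Q / 2.0     # avg. holding + decay loss (approx.)
    return (p - c) * d - ordering - holding

# grid search over price (2.5..12.0) and lot size (10..400)
best = max((profit_rate(p / 10.0, q), p / 10.0, q)
           for p in range(25, 121) for q in range(10, 401, 5))
profit, price, lot = best
```

A closed-form algorithm such as the one the paper derives would replace this grid search, but the sketch shows why the two decisions interact: a lower price raises demand, which shortens the cycle and raises ordering frequency.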

Image Watermarking for Copyright Protection of Images on Shopping Mall (쇼핑몰 이미지 저작권보호를 위한 영상 워터마킹)

  • Bae, Kyoung-Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.147-157
    • /
    • 2013
  • With the advent of a digital environment that can be accessed anytime and anywhere through high-speed networks, free distribution and use of digital content became possible. Ironically, this environment has also given rise to a variety of copyright infringements, and product images used in online shopping malls are pirated frequently. Whether shopping mall images are creative works at all is controversial. According to a Supreme Court decision in 2001, advertising photographs of ham products were judged to be mere reproductions of the appearance of the objects, conveying product information rather than creative expression; nevertheless, the photographer's losses were recognized, and the typical cost of the advertising photo shoot was estimated as damages. According to a Seoul District Court precedent in 2003, if the photographer's personality and creativity appear in the selection of the subject, the composition of the set, the control of the direction and amount of light, the camera angle, the shutter speed and shutter chance, other shooting methods, and the developing and printing process, the work should be protected by copyright law. For shopping mall images to receive copyright protection under the law, they must not simply convey the state of the product; effort is required so that the photographer's personality and creativity can be recognized. Accordingly, the cost of producing mall images increases, and the need for copyright protection grows. The product images of online shopping malls have a very distinctive configuration unlike general pictures such as portraits and landscape photos, and therefore general image watermarking techniques cannot satisfy their watermarking requirements.
Because the background of product images commonly used in shopping malls is white, black, or a gray-scale gradient, it is difficult to find space in which to embed a watermark, and such areas are very sensitive to even slight changes. In this paper, the characteristics of images used in shopping malls are analyzed, and a watermarking technique suitable for shopping mall images is proposed. The proposed technique divides a product image into smaller blocks, transforms the corresponding blocks by the DCT (Discrete Cosine Transform), and then inserts the watermark information by quantizing the DCT coefficients. Because uniform quantization of the DCT coefficients causes visible blocking artifacts, the proposed algorithm uses a weighted mask that quantizes finely the coefficients located at block boundaries and coarsely the coefficients located in the center area of the block. This mask improves the subjective visual quality as well as the objective quality of the images. In addition, to improve the safety of the algorithm, the blocks in which the watermark is embedded are randomly selected, and a turbo code is used to reduce the BER when extracting the watermark. The PSNR (Peak Signal to Noise Ratio) of a shopping mall image watermarked by the proposed algorithm is 40.7~48.5 dB, and the BER (Bit Error Rate) after JPEG compression with QF = 70 is 0. This means the watermarked image is of high quality and the algorithm is robust to the JPEG compression generally used at online shopping malls. Also, for a 40% change in size and 40 degrees of rotation, the BER is 0. In general, shopping malls use compressed images with QF higher than 90. Because a pirated image is replicated from the original image, the proposed algorithm can identify copyright infringement in most cases. As the experimental results show, the proposed algorithm is suitable for shopping mall images with simple backgrounds.
However, future study should be carried out to enhance the robustness of the proposed algorithm, because some robustness is lost after the mask process.
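
A minimal sketch of the block-DCT quantization idea follows: one bit is embedded per 8×8 block by quantizing a single mid-frequency DCT coefficient to an even or odd multiple of a step size. The weighted boundary mask, random block selection, and turbo coding of the paper are omitted, and the coefficient position and step size are illustrative:

```python
import math
import random

N = 8  # block size

def _c(k):
    return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)

def dct2(b):
    # 2-D DCT-II of an 8x8 block (direct, unoptimized)
    return [[_c(u) * _c(v) * sum(b[x][y]
             * math.cos((2 * x + 1) * u * math.pi / (2 * N))
             * math.cos((2 * y + 1) * v * math.pi / (2 * N))
             for x in range(N) for y in range(N))
             for v in range(N)] for u in range(N)]

def idct2(C):
    # inverse 2-D DCT
    return [[sum(_c(u) * _c(v) * C[u][v]
             * math.cos((2 * x + 1) * u * math.pi / (2 * N))
             * math.cos((2 * y + 1) * v * math.pi / (2 * N))
             for u in range(N) for v in range(N))
             for y in range(N)] for x in range(N)]

def embed_bit(block, bit, q=12.0, uv=(3, 2)):
    # force one mid-frequency coefficient to an even (bit 0) or odd (bit 1)
    # multiple of the quantization step q
    C = dct2(block)
    k = round(C[uv[0]][uv[1]] / q)
    if k % 2 != bit:
        k += 1
    C[uv[0]][uv[1]] = k * q
    return idct2(C)

def extract_bit(block, q=12.0, uv=(3, 2)):
    C = dct2(block)
    return round(C[uv[0]][uv[1]] / q) % 2

random.seed(42)
block = [[random.randint(0, 255) for _ in range(N)] for _ in range(N)]
marked0 = embed_bit(block, 0)
marked1 = embed_bit(block, 1)
```

In practice the marked pixel values would be rounded back to the 0-255 range, and the step size traded off against visibility, which is exactly what the paper's weighted mask tunes per coefficient position.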

Evaluating Reverse Logistics Networks with Centralized Centers: Hybrid Genetic Algorithm Approach (집중형센터를 가진 역물류네트워크 평가 : 혼합형 유전알고리즘 접근법)

  • Yun, YoungSu
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.55-79
    • /
    • 2013
  • In this paper, we propose a hybrid genetic algorithm (HGA) approach to effectively solve the reverse logistics network with centralized centers (RLNCC). In the proposed HGA approach, a genetic algorithm (GA) is used as the main algorithm. For implementing the GA, a new bit-string representation scheme using 0 and 1 values is suggested, which makes it easy to generate the initial population. As genetic operators, the elitist strategy in enlarged sampling space developed by Gen and Chang (1997), a new two-point crossover operator, and a new random mutation operator are used for selection, crossover, and mutation, respectively. For the hybrid concept, an iterative hill climbing method (IHCM) developed by Michalewicz (1994) is inserted into the HGA search loop. The IHCM is a local search technique that precisely explores the space to which the GA search has converged. The RLNCC is composed of collection centers, remanufacturing centers, redistribution centers, and secondary markets in a reverse logistics network. Of these centers and secondary markets, only one collection center, remanufacturing center, redistribution center, and secondary market should be opened in the network. Some assumptions are made for effectively implementing the RLNCC. The RLNCC is represented by a mixed integer programming (MIP) model using indexes, parameters, and decision variables. The objective function of the MIP model is to minimize the total cost, which consists of transportation cost, fixed cost, and handling cost. The transportation cost arises from transporting the returned products between the centers and secondary markets. The fixed cost is determined by the opening or closing decision at each center and secondary market. That is, if there are three collection centers (the opening costs of collection centers 1, 2, and 3 being 10.5, 12.1, and 8.9, respectively) and collection center 1 is opened while the others are closed, then the fixed cost is 10.5.
The handling cost is the cost of treating the products returned from customers at each center and secondary market opened at each RLNCC stage. The RLNCC is solved by the proposed HGA approach. In the numerical experiments, the proposed HGA and a conventional competing approach are compared using various measures of performance. As the conventional competing approach, the GA approach of Yun (2013) is used; unlike the proposed HGA approach, it does not include any local search technique such as the IHCM. As measures of performance, CPU time, optimal solution, and optimal setting are used. Two types of the RLNCC with different numbers of customers, collection centers, remanufacturing centers, redistribution centers, and secondary markets are presented for comparing the performance of the HGA and GA approaches. The MIP models for the two types of the RLNCC are programmed in Visual Basic 6.0, and the computing environment is an IBM-compatible PC with a 3.06 GHz CPU and 1 GB RAM on Windows XP. The parameters used in the HGA and GA approaches are: 10,000 total generations, population size 20, crossover rate 0.5, mutation rate 0.1, and a search range of 2.0 for the IHCM. A total of 20 iterations are made to eliminate the randomness of the searches of the HGA and GA approaches. With performance comparisons, network representations by opening/closing decision, and convergence processes for the two types of RLNCCs, the experimental results show that the HGA performs significantly better than the GA in terms of the optimal solution, though the GA is slightly quicker in terms of CPU time. Finally, it is shown that the proposed HGA approach is more efficient than the conventional GA approach on the two types of the RLNCC, since the former has both a GA search process and a local search process as an additional search scheme, while the latter has a GA search process alone.
For a future study, much larger RLNCCs will be tested to confirm the robustness of our approach.
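
The hybrid scheme above, a GA whose offspring are refined by an iterative hill-climbing local search, can be illustrated on a toy opening-decision problem where exactly one center is opened per stage. The cost table is made up (reusing the 10.5/12.1/8.9 example for the first stage), and the operators are simplified stand-ins for the paper's:

```python
import random

random.seed(1)

# hypothetical opening costs per stage
# (collection, remanufacturing, redistribution centers)
COSTS = [[10.5, 12.1, 8.9], [20.0, 17.5, 19.2], [7.3, 9.9, 6.1]]

def cost(sol):
    # sol[i] = index of the single opened center at stage i
    return sum(COSTS[i][c] for i, c in enumerate(sol))

def hill_climb(sol):
    # IHCM-style local search: try all single-stage swaps, keep improvements
    improved = True
    while improved:
        improved = False
        for i in range(len(sol)):
            for c in range(len(COSTS[i])):
                cand = sol[:]
                cand[i] = c
                if cost(cand) < cost(sol):
                    sol, improved = cand, True
    return sol

def hga(generations=30, pop=8):
    popn = [[random.randrange(len(s)) for s in COSTS] for _ in range(pop)]
    for _ in range(generations):
        popn.sort(key=cost)
        child = popn[0][:]                    # elitist parent
        i = random.randrange(len(child))      # random mutation
        child[i] = random.randrange(len(COSTS[i]))
        child = hill_climb(child)             # hybrid local-search step
        popn[-1] = child                      # replace the worst individual
    return min(popn, key=cost)

best = hga()
```

On this separable toy cost the local search alone already reaches the optimum (8.9 + 17.5 + 6.1 = 32.5); in the paper's coupled transportation/handling model the GA's global exploration is what makes the hill-climbing refinement pay off.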

A Critical Path Search and The Project Activities Scheduling (임계경로 탐색과 프로젝트 활동 일정 수립)

  • Lee, Sang-Un
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.12 no.1
    • /
    • pp.141-150
    • /
    • 2012
  • This paper suggests a critical path search algorithm that can easily draw the PERT/GANTT chart used to plan and manage a project schedule. To evaluate the critical path that determines the project schedule, the Critical Path Method (CPM) is generally utilized. However, CPM undergoes 5 stages to calculate the critical path for a network diagram previously designed according to the interdependencies and execution periods of the project activities. Moreover, it may not correctly evaluate $T_E$ (the earliest time), since it does not specify how to determine the sequence of the node activities used to calculate $T_E$. Also, the execution sequence of the network diagram activities obtained from CPM cannot be represented visually, for which Lucko suggested an algorithm that undergoes 9 stages. The suggested algorithm, on the other hand, first decides the sequence in advance by reallocating the nodes into levels after a breadth-first search of the previously designed network diagram. Next, it arbitrarily chooses nodes at each level and determines the critical path immediately after calculating $T_E$. Finally, it represents the execution sequence of the project activities precisely and visually by slightly shifting the $T_E$ of the nodes not on the critical path relative to the $T_E$ of the nodes on the critical path. The suggested algorithm has been applied to 10 real project data sets: it obtains the critical path for every project and represents the execution sequence of the activities precisely and visually. It also reduces the 5 stages of CPM to 1, simplifies Lucko's 9 stages to 2 stages that clearly express the execution sequence of the activities, and converts the representation directly into a PERT/GANTT chart.
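
The level-by-level idea, computing each node's earliest time $T_E$ only after all of its predecessors are finished, can be sketched with a breadth-first pass over a small hypothetical activity network (not one of the paper's 10 projects):

```python
from collections import deque

# activity network: node -> [(successor, activity duration)]
EDGES = {'S': [('A', 3), ('B', 2)], 'A': [('C', 4)], 'B': [('C', 6)],
         'C': [('E', 2)], 'E': []}

def earliest_times(edges):
    # BFS over indegree-0 nodes: each node's T_E is final when dequeued
    indeg = {n: 0 for n in edges}
    for n in edges:
        for m, _ in edges[n]:
            indeg[m] += 1
    te = {n: 0 for n in edges}
    q = deque(n for n in edges if indeg[n] == 0)
    while q:
        n = q.popleft()
        for m, d in edges[n]:
            te[m] = max(te[m], te[n] + d)
            indeg[m] -= 1
            if indeg[m] == 0:
                q.append(m)
    return te

def critical_path(edges, te):
    # walk back from the finish along zero-slack edges
    path = [max(te, key=te.get)]
    while True:
        preds = [n for n in edges for m, d in edges[n]
                 if m == path[0] and te[n] + d == te[m]]
        if not preds:
            return path
        path.insert(0, preds[0])

TE = earliest_times(EDGES)
CP = critical_path(EDGES, TE)
```

Here the longer route S-B-C-E (duration 10) is critical; nodes off this path (A) have slack and could be shifted in the chart, as the paper does for visualization.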

A Study on the Development of Urine Analysis System using Strip and Evaluation of Experimental Result by means of Fuzzy Inference (스트립을 이용한 요분석시스템의 개발과 퍼지추론에 의한 검사결과 평가에 관한 연구)

  • Jun, K. R.;Lee, S. J.;Choi, B. C.;An, S. H.;Ha, K.;Kim, J. Y.;Kim, J. H.
    • Journal of Biomedical Engineering Research
    • /
    • v.19 no.5
    • /
    • pp.477-486
    • /
    • 1998
  • In this paper, we implemented a urine analysis system capable of qualitative and semi-quantitative assays using a strip. The analysis algorithm adopted a fuzzy logic-based classifier that is robust to external error factors such as temperature and electric power noise. The spectroscopic properties of the nine pads in a strip were studied in order to develop a urine analysis system designed for robustness and stability. The system consists of hardware and software. The hardware is based on a one-chip microprocessor and its peripherals, which comprise an optic module, tray control, a preamplifier, PC communication, a thermal printer, and an operating status indicator. The software is composed of a system program and a classification program. The system program is responsible for system control, data acquisition, and data analysis. The classification program is composed of a fuzzy inference engine and a membership function generator. The membership function generator builds triangular membership functions by a statistical method for quality control. The resulting data are transferred through a serial cable to a PC, where they are arranged and saved by a data acquisition program coded in C++. The precision of the urine analysis system and the stability of the fuzzy classifier were evaluated by testing standard urine samples. Experimental results showed good stability and exact classification.
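
A triangular membership function of the kind the generator builds can be sketched as follows. The grade names, reflectance scale, and breakpoints below are hypothetical, not the system's actual calibration:

```python
def tri(x, a, b, c):
    # triangular membership function rising from a, peaking at b, falling to c
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# hypothetical grades for one strip pad: pad reflectance -> semi-quantitative class
GRADES = {'negative': (0.60, 0.80, 1.01),
          'trace':    (0.40, 0.55, 0.70),
          'positive': (-0.01, 0.20, 0.45)}

def classify(reflectance):
    # pick the grade with the highest membership degree
    return max(GRADES, key=lambda g: tri(reflectance, *GRADES[g]))
```

Overlapping triangles make the boundaries between grades gradual rather than hard thresholds, which is what gives the classifier its tolerance to small optical and electrical disturbances.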

Metabolic Changes in Patients with Parkinson's Disease after Stereotactic Neurosurgery by Follow-up 1H MR Spectroscopy

  • Choe, Bo-Young;Baik, Hyun-Man;Chun, Shin-Soo;Son, Byung-Chul;Kim, Moon-Chan;Kim, Bum-Soo;Lee, Hyoung-Koo;Suh, Tae-Suk
    • Journal of the Korean Magnetic Resonance Society
    • /
    • v.5 no.2
    • /
    • pp.99-109
    • /
    • 2001
  • The authors investigated neuronal changes of local cellular metabolism in the cerebral lesions of the Parkinsonian symptomatic side before and after stereotactic neurosurgery by follow-up 1H magnetic resonance spectroscopy (MRS). Patients with Parkinson's disease (PD) (n = 15) and age-matched normal controls (n = 15) underwent MRS examinations using a stimulated echo acquisition mode (STEAM) pulse sequence that provided a 2${\times}$2${\times}$2 cm³ (8 ml) volume of interest in the regions of the substantia nigra, thalamus, and lentiform nucleus. The spectral parameters were 20 ms TE, 2000 ms TR, 128 averages, 2500 Hz spectral width, and 2048 data points. Raw data were processed by the SAGE data analysis package (GE Medical Systems). Peak areas of N-acetylaspartate (NAA), creatine (Cr), choline-containing compounds (Cho), inositols (Ins), and the sum (Glx) of glutamate and GABA were calculated by fitting the spectrum to a sum of Lorentzian curves using the Marquardt algorithm. After blinded processing, we evaluated neuronal alterations of the observable metabolite ratios before and after stereotactic neurosurgery using Pearson product-moment analysis (SPSS, Ver. 6.0). A significant reduction of the NAA/Cho ratio was observed in the cerebral lesion in the substantia nigra of PD patients on the symptomatic side after neurosurgery (P = 0.03). In the thalamus, the NAA/Cho ratio was also significantly decreased in the cerebral lesion including the electrode-surgical region (P = 0.03). A significant reduction of the NAA/Cho ratio in the lentiform nucleus was not observed, but it tended toward significance after neurosurgery (P = 0.08). In particular, a remarkable lactate signal was noted in the surgical thalamic lesions of 6 of 8 patients and in the internal segments of the globus pallidus of 6 of 7 patients, respectively.
Significant metabolic alterations of the NAA/Cho ratio might reflect functional changes of neuropathological processes in the lesions of the substantia nigra, thalamus, and lentiform nucleus, and could be a valuable finding for the evaluation of Parkinson's disease after neurosurgery. The increase of lactate signals, which was remarkable in surgical lesions, could be consistent with a common consequence of neurosurgical necrosis. Thus, 1H MRS could be a useful modality to evaluate the diagnostic and prognostic implications for Parkinson's disease after functional neurosurgery.
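
The metabolite ratios above come from areas under fitted Lorentzian line shapes. A minimal sketch of that last step, evaluating a Lorentzian peak and integrating it to form a ratio such as NAA/Cho (the amplitudes, positions in ppm, and linewidths below are illustrative, not fitted values from the study; the Marquardt fitting itself is omitted):

```python
import math

def lorentzian(x, A, x0, gamma):
    # Lorentzian line shape used to model each metabolite peak
    return A * gamma ** 2 / ((x - x0) ** 2 + gamma ** 2)

def peak_area(A, x0, gamma, lo, hi, n=200000):
    # trapezoidal integration of the fitted line over [lo, hi]
    h = (hi - lo) / n
    s = 0.5 * (lorentzian(lo, A, x0, gamma) + lorentzian(hi, A, x0, gamma))
    s += sum(lorentzian(lo + i * h, A, x0, gamma) for i in range(1, n))
    return s * h

# hypothetical NAA and Cho peaks with equal linewidth
naa = peak_area(10.0, 2.01, 0.02, -50, 50)
cho = peak_area(6.0, 3.20, 0.02, -50, 50)
ratio = naa / cho
```

With equal linewidths the area ratio reduces to the amplitude ratio (10/6 here); the analytic area of a full Lorentzian is $A\gamma\pi$, which the numeric integral approaches over a wide enough interval.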

Response Modeling for the Marketing Promotion with Weighted Case Based Reasoning Under Imbalanced Data Distribution (불균형 데이터 환경에서 변수가중치를 적용한 사례기반추론 기반의 고객반응 예측)

  • Kim, Eunmi;Hong, Taeho
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.29-45
    • /
    • 2015
  • Response modeling is a well-known research issue for those who try to achieve better performance in predicting customers' responses to marketing promotions. A response model reduces marketing cost by identifying prospective customers in a very large customer database and predicting the purchasing intention of the selected customers, whereas a promotion derived from an undifferentiated marketing strategy results in unnecessary cost. In addition, the big data environment has accelerated the development of response models with data mining techniques such as CBR, neural networks, and support vector machines. CBR is one of the major tools in business because it is simple and robust to apply to response modeling, and it remains an attractive data mining technique for business applications even though it has not shown high performance compared to other machine learning techniques. Thus many studies have tried to improve CBR for business data mining with enhanced algorithms or the support of other techniques such as genetic algorithms, decision trees, and AHP (Analytic Hierarchy Process). Ahn and Kim (2008) utilized logit, neural networks, and CBR to predict which customers would purchase the items promoted by the marketing department, and optimized the number k for the k-nearest neighbors with a genetic algorithm to improve the performance of the integrated model. Hong and Park (2009) noted that an integrated approach combining CBR with logit, neural networks, and Support Vector Machines (SVM) predicted customers' responses to a marketing promotion better than each individual model. This paper presents an approach to predicting customers' responses to a marketing promotion with Case Based Reasoning, in which the model is developed by applying different weights to each feature.
We deployed a logit model on a database including the promotion and purchasing data of bath soap, and the resulting coefficients were used as the feature weights of CBR. We empirically compared the performance of the proposed weighted CBR-based model to neural networks and a pure CBR-based model, and found that the proposed weighted CBR-based model performed better than the pure CBR model. Imbalanced data are a common problem in building classification models on real data, as in bankruptcy prediction, intrusion detection, fraud detection, churn management, and response modeling. Imbalanced data means that the number of instances in one class is remarkably small or large compared to the number of instances in the other classes. A classification model such as a response model has a lot of trouble learning patterns from such data, because the model tends to ignore the small classes while classifying the large classes correctly. To resolve the problem caused by an imbalanced data distribution, sampling is one of the most representative approaches; sampling methods can be categorized into under-sampling and over-sampling. However, CBR is not sensitive to the data distribution because, unlike machine learning algorithms, it does not learn from the data. In this study, we investigated the robustness of our proposed model while changing the ratio of responding to non-responding customers, because the customers who respond to a promotion are always a small fraction of the non-responders in the real world. We simulated the proposed model 100 times to validate its robustness with different ratios of responding to non-responding customers under an imbalanced data distribution. Finally, we found that our proposed CBR-based model outperformed the compared models on the imbalanced data sets.
Our study is expected to improve the performance of response models for promotion programs with CBR under the imbalanced data distributions of the real world.
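
The core of the weighted-CBR model, a k-nearest-neighbor retrieval whose distance metric weights each feature (in the paper, by logit coefficients), can be sketched as follows. The toy cases, labels, and weights below are illustrative only:

```python
import math

def weighted_dist(a, b, w):
    # feature-weighted Euclidean distance; w plays the role of
    # the logit-derived feature weights in the paper
    return math.sqrt(sum(wi * (ai - bi) ** 2
                         for ai, bi, wi in zip(a, b, w)))

def predict(query, cases, labels, w, k=3):
    # retrieve the k nearest stored cases and vote on the response label
    ranked = sorted(range(len(cases)),
                    key=lambda i: weighted_dist(query, cases[i], w))
    top = [labels[i] for i in ranked[:k]]
    return max(set(top), key=top.count)

# toy case base: feature 0 is informative, feature 1 is noise,
# so it receives a near-zero weight
CASES = [(0.0, 9.0), (0.1, 0.0), (1.0, 9.0), (0.9, 0.0)]
LABELS = [0, 0, 1, 1]            # 0 = non-response, 1 = response
W = (1.0, 0.01)
```

Because CBR stores cases instead of fitting a decision boundary, shrinking or growing one class only changes the voting pool, which is the intuition behind the robustness to imbalance tested in the paper.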

Performance Improvement of WATM by Concatenated FEC Codes with Pilot Symbol Insertion in Indoor Wireless Channels (실내 무선 통신로에서 파일럿 심볼을 삽입한 Concatenated FEC 부호에 의한 WATM의 성능 개선)

  • 박기식;강영흥;김종원;정해원;양해권;조성준
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.24 no.9A
    • /
    • pp.1276-1284
    • /
    • 1999
  • We have evaluated the BERs and CLPs of Wireless ATM (WATM) cells employing a concatenated FEC code with pilot symbols for fading compensation, through simulation in an indoor wireless channel modeled as a Rayleigh and a Rician fading channel, respectively. The results of the performance evaluation are compared with those obtained by employing a convolutional code under the same conditions. In the Rayleigh fading channel, taking the maximum tolerable BER ($10^{-3}$) as a criterion for voice service, it is shown that a performance improvement of about 4 dB in terms of $E_b/N_o$ is obtained by employing the concatenated FEC code with pilot symbols rather than the convolutional code with pilot symbols. When the values of the K parameter, the ratio of direct to scattered signal power in the Rician fading channel, are 6 and 10, performance improvements of about 4 dB and 2 dB in terms of $E_b/N_o$, respectively, are obtained by employing the concatenated FEC code with pilot symbols, again considering the maximum tolerable BER of the voice service. Also, in the Rician fading channel with K = 6 and K = 10, taking CLP = $10^{-3}$ as a criterion, performance improvements of about 3.5 dB and 1.5 dB in terms of $E_b/N_o$, respectively, are observed with the concatenated FEC code with pilot symbols.
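
The benefit of concatenating an outer and an inner code can be illustrated with a toy stand-in: repetition codes with majority decoding over a binary symmetric channel. This is only a sketch of serial concatenation; the paper's scheme uses a much stronger concatenated FEC code over a fading channel, not a BSC:

```python
import random

random.seed(0)

def bsc(bits, p):
    # binary symmetric channel: flip each bit with probability p
    return [b ^ (random.random() < p) for b in bits]

def rep3_encode(bits):
    # rate-1/3 repetition code
    return [b for b in bits for _ in range(3)]

def rep3_decode(bits):
    # majority decoding of each 3-bit group
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

def concat_encode(bits):
    # serial concatenation: outer code first, then inner code
    return rep3_encode(rep3_encode(bits))

def concat_decode(bits):
    # decode inner code first, then outer code
    return rep3_decode(rep3_decode(bits))

msg = [random.randint(0, 1) for _ in range(500)]
p = 0.2
uncoded_errs = sum(a != b for a, b in zip(msg, bsc(msg, p)))
coded_errs = sum(a != b
                 for a, b in zip(msg, concat_decode(bsc(concat_encode(msg), p))))
```

The inner decoder turns a raw crossover probability of 0.2 into roughly 0.10 per outer-code symbol, and the outer decoder pushes it to about 0.03, the same error-cleanup cascade that makes the concatenated code outperform the single convolutional code in the paper.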

Patterning Zooplankton Dynamics in the Regulated Nakdong River by Means of the Self-Organizing Map (자가조직화 지도 방법을 이용한 조절된 낙동강 내 동물플랑크톤 역동성의 모형화)

  • Kim, Dong-Kyun;Joo, Gea-Jae;Jeong, Kwang-Seuk;Chang, Kwang-Hyson;Kim, Hyun-Woo
    • Korean Journal of Ecology and Environment
    • /
    • v.39 no.1 s.115
    • /
    • pp.52-61
    • /
    • 2006
  • The aim of this study was to analyze the seasonal patterns of zooplankton community dynamics in the lower Nakdong River (Mulgum, RK: river kilometer; 27 km from the estuarine barrage) with a Self-Organizing Map (SOM), based on weekly sampled data collected over ten years (1994${\sim}$2003). It is well known that zooplankton groups play an important role in the food web of freshwater ecosystems; however, less attention has been paid to this group than to other community constituents. The non-linear patterning algorithm of the SOM was applied to discover the relationship between river environments and zooplankton community dynamics. Limnological variables (water temperature, dissolved oxygen, pH, Secchi transparency, turbidity, chlorophyll a, discharge, etc.) were taken into account to pattern the seasonal changes of zooplankton community structure (consisting of rotifers, cladocerans, and copepods). The trained SOM model allocated zooplankton on the map plane together with the limnological parameters. The three zooplankton groups showed high similarities to one another in their seasonal patterns. Among the limnological variables, water temperature was highly related to the zooplankton community dynamics (especially for cladocerans). The SOM model illustrated the suppression of zooplankton by increased river discharge, particularly in summer. Chlorophyll a concentrations were separated from the zooplankton data set on the map plane, which suggests herbivorous activity of the dominant grazers. This study describes zooplankton dynamics in association with limnological parameters using a non-linear method, and the information will be useful for managing the river ecosystem with respect to food web interactions.
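
A minimal SOM training sketch on toy two-feature samples follows. The map size, learning-rate and neighborhood schedules, and the two illustrative "limnological" features are assumptions, not the study's configuration:

```python
import math
import random

random.seed(0)

def bmu(w, x):
    # best-matching unit: grid cell whose weight vector is closest to x
    return min(((i, j) for i in range(len(w)) for j in range(len(w[0]))),
               key=lambda ij: sum((w[ij[0]][ij[1]][k] - x[k]) ** 2
                                  for k in range(len(x))))

def train_som(data, rows=3, cols=3, epochs=200, lr0=0.5, sigma0=1.5):
    dim = len(data[0])
    # random initial codebook vectors
    w = [[[random.random() for _ in range(dim)] for _ in range(cols)]
         for _ in range(rows)]
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)               # decaying learning rate
        sigma = 0.5 + sigma0 * (1.0 - t / epochs)   # shrinking neighborhood
        for x in data:
            bi, bj = bmu(w, x)
            for i in range(rows):
                for j in range(cols):
                    # Gaussian neighborhood pulls nearby units toward x
                    h = math.exp(-((i - bi) ** 2 + (j - bj) ** 2)
                                 / (2.0 * sigma ** 2))
                    for k in range(dim):
                        w[i][j][k] += lr * h * (x[k] - w[i][j][k])
    return w

# toy samples: (scaled water temperature, scaled zooplankton density)
data = [(0.10, 0.20), (0.15, 0.25), (0.80, 0.90), (0.85, 0.80)]
som = train_som(data)
```

After training, distinct environmental regimes land on different map units, which is how the study reads off associations such as the temperature-cladoceran link from the trained map plane.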