• Title/Summary/Keyword: Segmentation Method

Case Study of Customer Value Analysis using K-means (K-means를 적용한 고객 가치 사례 분석)

  • Dong-Jun Lee;Si-Hwan Jang;Jong-Seok Ryu;Hwang-Yong Choi;Sung-Soo Kim
    • Journal of Industrial Technology
    • /
    • v.44 no.1
    • /
    • pp.25-34
    • /
    • 2024
  • Identifying customers is highly valuable for direct marketing and for increasing profit, as it allows a company to target the population likely to become its most profitable customers through target-customer analysis and customer segmentation. Customer value analysis seeks profitable groups of customers by analyzing customer attributes, and data mining techniques can help extract or detect hidden customer values and behaviors from big data. The objective of this paper is to propose a customer value analysis based on the RFM (R: Recency, F: Frequency, M: Monetary) model that identifies the profitable segments (top target customers) from customers' underlying characteristics. We use the case of S-company (122 customers with 6,639 transactions from 2017/09/01 to 2018/08/31) to illustrate the procedure of RFM-based customer value analysis, showing how the RFM attribute scores are computed and how customers are segmented. K-means, one of the most widely used techniques in data mining, is applied for a five-group market segmentation, with the intra-cluster distance serving as the validity index; it is a popular and efficient data clustering method. Our experiments identify 26 top target customers out of the 122. We also propose a product recommendation system based on the RFM model for an efficient, high-priority marketing strategy.
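
To make the described procedure concrete, below is a minimal Python sketch of RFM scoring followed by five-group K-means segmentation; the column names, the 1-5 quantile scoring, and the standardization step are illustrative assumptions, not the authors' exact settings.

```python
# A hedged sketch of RFM scoring + K-means segmentation (not the paper's code).
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def rfm_segments(transactions: pd.DataFrame, snapshot_date: pd.Timestamp,
                 k: int = 5) -> pd.DataFrame:
    """transactions: assumed columns customer_id, date (datetime64), amount."""
    rfm = transactions.groupby("customer_id").agg(
        recency=("date", lambda d: (snapshot_date - d.max()).days),
        frequency=("date", "count"),
        monetary=("amount", "sum"),
    )
    # Quantile-based 1..5 scores; rank() breaks ties so qcut bins stay unique.
    # Recency is reversed: customers who bought recently get the high score.
    rfm["R"] = pd.qcut(rfm["recency"].rank(method="first"), 5,
                       labels=[5, 4, 3, 2, 1]).astype(int)
    rfm["F"] = pd.qcut(rfm["frequency"].rank(method="first"), 5,
                       labels=[1, 2, 3, 4, 5]).astype(int)
    rfm["M"] = pd.qcut(rfm["monetary"].rank(method="first"), 5,
                       labels=[1, 2, 3, 4, 5]).astype(int)
    # Five-group market segmentation on the standardized R/F/M scores.
    X = StandardScaler().fit_transform(rfm[["R", "F", "M"]])
    rfm["segment"] = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    return rfm
```

On a dataset like the S-company case, inspecting the mean R/F/M per segment is what would single out a top-target group.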

Analyzing the Issue Life Cycle by Mapping Inter-Period Issues (기간별 이슈 매핑을 통한 이슈 생명주기 분석 방법론)

  • Lim, Myungsu;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.25-41
    • /
    • 2014
  • Recently, the number of social media users has increased rapidly owing to the prevalence of smart devices. As a result, the amount of real-time data has grown exponentially, which in turn has generated more interest in using such data to create added value. For instance, several attempts have been made to identify social issues by analyzing the search keywords frequently entered on news portal sites and the words regularly mentioned on various social media. The technique of "topic analysis" is employed to identify topics and themes in a large collection of text documents. As one of the most prevalent applications of topic analysis, issue tracking investigates how the social issues identified through topic analysis change over time. Traditionally, issue tracking identifies the main topics of the documents covering the entire period at once and then analyzes the occurrence of each topic period by period. This traditional approach has two limitations. First, whenever a new period is added, topic analysis must be repeated over the documents of the entire period rather than only over the new documents of the added period, which imposes significant time and cost burdens; the approach is therefore difficult to apply in most settings that require analysis of an additional period. Second, issues are not only constantly created and terminated: one issue can split into several issues, and multiple issues can merge into a single issue. In other words, each issue has a life cycle consisting of creation, transition (merging and segmentation), and termination, and existing issue tracking methods do not address the connections and influence relationships between issues. The purpose of this study is to overcome both limitations of existing issue tracking: the analysis-cost limitation and the lack of consideration of issue changeability. Suppose topic analysis is performed separately for each of several periods; to trace the trend of issues, the issues of different periods must then be mapped to one another. This mapping is not straightforward, because the issues derived in different periods are mutually heterogeneous. In this study, to avoid analyzing the entire period's documents simultaneously, the analysis is performed independently for each period, and issue mapping is then performed to link the issues identified in adjacent periods. The per-period results are integrated, and the issue flow over the entire integrated period is depicted. Because the whole issue life cycle, including creation, transition (merging and segmentation), and extinction, is identified and examined systematically, the changeability of issues is analyzed as well. The proposed methodology is highly efficient in terms of time and cost while sufficiently accounting for issue changeability, and its results can be adapted to practical situations. By applying the proposed methodology to actual Internet news, we analyze its potential practical applications: the methodology can extend the analysis period incrementally and follow the course of each issue's life cycle, thereby facilitating a clearer understanding of complex social phenomena through topic analysis.
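
Since inter-period mapping is the heart of the method, here is a hedged Python sketch that links per-period topics by cosine similarity of their topic-word distributions; the shared-vocabulary representation and the 0.3 threshold are assumptions, not the paper's actual mapping rule.

```python
# Hedged sketch: map issues of period t to period t+1 via topic-vector similarity.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def map_issues(topics_prev: np.ndarray, topics_next: np.ndarray,
               thresh: float = 0.3):
    """Each row is a topic's word-probability vector over a shared vocabulary.
    Returns edges (i, j) linking period-t topic i to period-t+1 topic j."""
    sim = cosine_similarity(topics_prev, topics_next)
    edges = [(i, j) for i in range(sim.shape[0])
                    for j in range(sim.shape[1]) if sim[i, j] >= thresh]
    # Reading the edge pattern as life-cycle events:
    #   no outgoing edge from i  -> issue i terminates
    #   no incoming edge to j    -> issue j is newly created
    #   i with several edges     -> issue i splits (segmentation)
    #   j with several edges     -> several issues merge into j
    return edges
```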

Liver Splitting Using 2 Points for Liver Graft Volumetry (간 이식편의 체적 예측을 위한 2점 이용 간 분리)

  • Seo, Jeong-Joo;Park, Jong-Won
    • The KIPS Transactions:PartB
    • /
    • v.19B no.2
    • /
    • pp.123-126
    • /
    • 2012
  • This paper proposes a method to separate the liver into left and right lobes for simple and accurate volumetry of the liver graft on abdominal MDCT (Multi-Detector Computed Tomography) images before living-donor liver transplantation. Using this algorithm, a medical team can evaluate the liver graft accurately with minimal interaction between the team and the system, helping to ensure the safety of both donor and recipient. On the segmented liver image, two points (PMHV: a point in the Middle Hepatic Vein, and PPV: a point at the origin of the right branch of the Portal Vein) are selected to separate the liver into left and right lobes. The middle hepatic vein is automatically segmented from PMHV, and the cutting line is determined on the basis of the segmented vein. The liver is then separated by connecting the cutting line to PPV, and the volume and ratio of the liver graft are estimated. To verify the accuracy of the estimates, the volumes obtained with the 2-point method were compared with volumes manually segmented and estimated by a diagnostic radiologist and with graft weights measured during surgery. The mean ± standard deviation of the differences between the actual weights and the estimated volumes was 162.38 ± 124.39 cm³ for manual segmentation and 107.69 ± 97.24 cm³ for the 2-point method. The correlation coefficient between the actual weight and the manually estimated volume was 0.79, while that between the actual weight and the 2-point estimate was 0.87. After selection of the two points, the time required to separate the liver into left and right lobes and compute their volumes was measured to confirm that the algorithm can be used in real time during surgery: the mean ± standard deviation of the processing time was 57.28 ± 32.81 sec per data set (149.17 ± 55.92 slices).
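
As a rough illustration of the 2-point idea, the sketch below splits a segmented liver mask with a cutting plane through two selected points and reports the lobe volumes; the paper instead derives the cutting line from the segmented middle hepatic vein, so the plane here is only an illustrative stand-in, and the function names and voxel bookkeeping are assumptions.

```python
# Simplified stand-in: split a binary liver mask by a plane through two points.
import numpy as np

def split_liver(mask: np.ndarray, p_mhv, p_pv, voxel_volume_mm3: float):
    """mask: binary 3D array (z, y, x). p_mhv, p_pv: (z, y, x) coordinates.
    The cutting plane contains both points and the z (craniocaudal) axis."""
    m = mask.astype(bool)
    p1, p2 = np.asarray(p_mhv, float), np.asarray(p_pv, float)
    d = p2[1:] - p1[1:]                       # in-slice direction (dy, dx)
    normal = np.array([-d[1], d[0]])          # perpendicular to it in (y, x)
    zz, yy, xx = np.indices(m.shape)
    side = (yy - p1[1]) * normal[0] + (xx - p1[2]) * normal[1]
    right, left = m & (side >= 0), m & (side < 0)
    vol = lambda lobe: lobe.sum() * voxel_volume_mm3 / 1000.0   # cm^3
    return vol(right), vol(left)
```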

Assessment of the Inundation Area and Volume of Tonle Sap Lake using Remote Sensing and GIS (원격탐사와 GIS를 이용한 Tonle Sap호의 홍수량 평가)

  • Chae, Hyosok
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.8 no.3
    • /
    • pp.96-106
    • /
    • 2005
  • Remote sensing and GIS techniques, which provide valuable information in both the time and space domains, are known to be very useful for producing permanent records by mapping and monitoring flooded areas. In 2000, between July and October, flooding in Tonle Sap Lake, Mekong River Basin, reached some of the worst levels of devastation on record. In this study, Landsat ETM+ and RADARSAT imagery were used to obtain the basic information for computing the inundation area and volume, using an ISODATA classifier and a segmentation technique. However, the extracted inundation area represented only a small fraction of the actually inundated area because of clouds in the imagery and complex ground conditions. To overcome these limitations, the cost-distance method of GIS was used to estimate the inundated area at the peak level by integrating the inundated area derived from satellite imagery with a digital elevation model (DEM). The estimated inundation area was then converted into an inundation volume using GIS, and this volume was compared with the volume computed by hydraulic modeling with MIKE 11, one of the most popular dynamic river modeling systems. The method is suitable for estimating inundation volume even when the Landsat ETM+ imagery contains many clouds.
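
The DEM-based volume conversion the abstract mentions reduces to simple raster arithmetic; below is a minimal Python sketch under the assumption of a uniform cell size and a known peak water level, neither of which comes from the paper.

```python
# Hedged sketch: flood-extent mask + DEM -> inundation volume.
import numpy as np

def inundation_volume(dem: np.ndarray, flooded: np.ndarray,
                      water_level: float, cell_area_m2: float) -> float:
    """dem: ground elevation grid (m); flooded: boolean flood-extent mask.
    Depth in each flooded cell is water level minus ground elevation."""
    depth = np.where(flooded, np.maximum(water_level - dem, 0.0), 0.0)
    return float(depth.sum() * cell_area_m2)   # cubic metres
```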

Method of Walking Surface Identification Technique for Automatic Change of Walking Mode of Intelligent Bionic Leg (지능형 의족의 보행모드 자동변경을 위한 보행노면 판별 기법)

  • Yoo, Seong-Bong;Lim, Young-Kwang;Eom, Su-Hong;Lee, Eung-Hyuk
    • Journal of rehabilitation welfare engineering & assistive technology
    • /
    • v.11 no.1
    • /
    • pp.81-89
    • /
    • 2017
  • In this paper, we propose a gait pattern recognition method for an intelligent prosthesis that enables transfemoral amputees to walk in various environments. The proposed gait-mode changing method is a single-sensor algorithm that can discriminate the walking surface and gait phase using only a strain gauge sensor; it is designed to simplify the multi-sensor algorithms of existing intelligent prostheses and to reduce the cost of the prosthesis system. For the recognition algorithm, we analyzed the characteristics of the ground reaction force generated during the gait of able-bodied subjects and defined the gait-step segmentation and gait detection conditions. A gait analyzer was constructed for gait experiments in an environment similar to that of a transfemoral amputee. The validity of this work was verified with the defined detection conditions and the fabricated instrument: the accuracy of the single-sensor algorithm was 95%. Based on the proposed single-sensor algorithm, the intelligent prosthesis system can be made inexpensive, the state of the walking surface can be identified directly, and the walking mode can be switched automatically to the mode suitable for that surface.
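
To illustrate what a single-sensor detection condition might look like, here is a hedged Python sketch that segments gait steps from a strain-gauge force signal by threshold crossings and computes a post-contact loading rate; the threshold, the window length, and the use of loading rate as a surface cue are assumptions, not the paper's defined conditions.

```python
# Hedged sketch of single-sensor gait segmentation from a force signal.
import numpy as np

def segment_gait(force: np.ndarray, contact_thresh: float = 30.0):
    """force: strain-gauge ground-reaction-force samples (N).
    Returns (heel_strike_indices, toe_off_indices)."""
    on = force > contact_thresh
    heel_strikes = np.where(~on[:-1] & on[1:])[0] + 1   # rising edges
    toe_offs = np.where(on[:-1] & ~on[1:])[0] + 1       # falling edges
    return heel_strikes, toe_offs

def loading_rate(force: np.ndarray, heel_strike: int, fs: float,
                 window_s: float = 0.05) -> float:
    """Mean force slope just after heel strike; a steeper slope is one
    plausible cue for distinguishing level ground from stairs or ramps."""
    w = int(window_s * fs)
    seg = force[heel_strike:heel_strike + w]
    return float(np.gradient(seg, 1.0 / fs).mean())
```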

Development of Velocity Imaging Method for Motility of Left Ventricle in Gated SPECT (게이트 심근 SPECT에서 좌심실의 운동성 분석을 위한 속도영상화 기법 개발)

  • Jo, Mi-Jung;Lee, Byeong-Il;Choi, Hyun-Ju;Hwang, Hae-Gil;Choi, Heung-Kook
    • Journal of Korea Multimedia Society
    • /
    • v.9 no.7
    • /
    • pp.808-817
    • /
    • 2006
  • Although the commonly used Doppler-based velocity index is a very significant factor in the functional evaluation of the left ventricle, it depends on the subjective judgment of the examiner. Objective motility data can be obtained from gated myocardial SPECT images by quantitative analysis, but it is difficult to visualize the velocity of the motion. The aim of our study is to develop a new method for imaging velocity from gated myocardial SPECT images and to use it as an evaluation index for analyzing motility. First, we visualized the left ventricle in three dimensions using the coordinates of points obtained by segmenting the myocardium. Each point was rendered in a different color according to its velocity. We performed a validation study with 7 normal subjects and 15 myocardial infarction patients, using the average moved distance and the average velocity to analyze motility. In normal cases, the average moved distance was 4.3 mm and the average velocity was 11.9 mm; in patient cases, they were 3.9 mm and 10.5 mm, respectively. These results show that the motility of normal subjects is higher than that of abnormal subjects. We expect the proposed method to improve the accuracy and reproducibility of the functional evaluation of the myocardial wall.
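
The velocity computation implied by the abstract can be sketched directly: given per-gate 3D coordinates of the segmented myocardial points, per-point displacement and speed follow from frame differences. The array layout and the frame-interval parameter below are assumptions, not the paper's specification.

```python
# Hedged sketch: per-point displacement and speed across gated frames.
import numpy as np

def point_speeds(points: np.ndarray, frame_interval_s: float):
    """points: array of shape (n_frames, n_points, 3) in mm.
    Returns (distance_per_frame, speed), each of shape (n_frames-1, n_points).
    Speeds can then be mapped to colors for the velocity image."""
    disp = np.diff(points, axis=0)            # (n_frames-1, n_points, 3)
    dist = np.linalg.norm(disp, axis=2)       # mm moved between gates
    speed = dist / frame_interval_s           # mm/s
    return dist, speed
```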

User-Class based Service Acceptance Policy using Cluster Analysis (군집분석 (Cluster Analysis)을 활용한 사용자 등급 기반의 서비스 수락 정책)

  • Park Hea-Sook;Baik Doo-Kwon
    • The KIPS Transactions:PartD
    • /
    • v.12D no.3 s.99
    • /
    • pp.461-470
    • /
    • 2005
  • This paper suggests a new policy for consolidating a company's profits by segmenting the clients who use a contents service and allocating the media server's resources differentially by cluster, using the cluster analysis method of CRM, which is mainly applied in marketing. Here, CRM refers to the strategy of consolidating a company's profits by managing clients efficiently, providing them with a more effective, personalized service, and managing resources more effectively. To realize the new service policy, this paper analyzes the contribution to profits of each client's service pattern (total number of visits to the homepage, service type, service usage period, total payment, average service period, and service charge per homepage visit) through cluster analysis of client data using the K-means method. Clients were grouped into 4 clusters according to their contribution to profits. A Client Request Filtering Algorithm (CRFA) was then proposed to allocate media server resources per cluster: CRFA approves a request only within the resource limit of the cluster to which the client belongs. To evaluate the efficiency of CRFA in a client/server environment, the acceptance rate per class was measured, and network traffic was compared before and after applying CRFA. The experiments showed that applying CRFA reduced network costs, increased the acceptance rate of clients in the prioritized clusters, and significantly increased the company's profits.
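
A hedged sketch of the admission idea behind CRFA follows: clients are grouped by K-means on usage features, each cluster receives a resource quota, and a request is accepted only while its cluster's quota has headroom. The class name, feature handling, and quota bookkeeping are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of cluster-based request admission (CRFA-style).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

class ClientRequestFilter:
    def __init__(self, features: np.ndarray, quotas: dict, k: int = 4):
        """features: one row per client (visits, payment, usage period, ...).
        quotas: cluster id -> maximum concurrent bandwidth for that cluster."""
        X = StandardScaler().fit_transform(features)
        self.labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        self.quotas = quotas
        self.in_use = {c: 0.0 for c in range(k)}

    def accept(self, client_idx: int, bandwidth: float) -> bool:
        c = int(self.labels[client_idx])
        if self.in_use[c] + bandwidth <= self.quotas.get(c, 0.0):
            self.in_use[c] += bandwidth       # admit within the cluster quota
            return True
        return False                          # reject: cluster quota exhausted

    def release(self, client_idx: int, bandwidth: float) -> None:
        c = int(self.labels[client_idx])
        self.in_use[c] = max(0.0, self.in_use[c] - bandwidth)
```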

Stereo Matching For Satellite Images using The Classified Terrain Information (지형식별정보를 이용한 입체위성영상매칭)

  • Bang, Soo-Nam;Cho, Bong-Whan
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.4 no.1 s.6
    • /
    • pp.93-102
    • /
    • 1996
  • Automatic generation of a DEM (Digital Elevation Model) by computer requires the time-consuming task of determining adequate matches between stereo images. Correlation over evenly distributed areas is the method generally used for matching. In this paper, we propose a new approach that computes matches efficiently by changing the size of the mask window and the search area according to the given terrain information. For image segmentation, an edge-preserving smoothing filter is first applied as preprocessing, and a region-growing algorithm is then applied to the filtered images. The segmented regions are classified into mountain, plain, and water areas using an MRF (Markov Random Field) model. Matching consists of parallax prediction and fine matching: the predicted parallax determines the location of the search area in the fine-matching stage, and the sizes of the search area and mask window are determined from the terrain information for each pixel. The execution time of matching is reduced by shrinking the search area over plain and water regions. For the experiments, four images, each covering 10 km × 10 km (1024 × 1024 pixels) of the Taejeon-Kumsan area, were studied. The results show that the proposed terrain-aware matching reduces computing time by 25% to 35%.
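
To make the terrain-dependent matching concrete, below is a simplified Python sketch of normalized cross-correlation along a scanline with window and search sizes chosen per terrain class; the specific sizes in SIZES are invented for illustration, not taken from the paper, and boundary handling is omitted.

```python
# Hedged sketch: terrain-aware area-based stereo matching with NCC.
import numpy as np

SIZES = {"mountain": (11, 40), "plain": (7, 15), "water": (5, 8)}  # (window, search)

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else -1.0

def match_pixel(left, right, y, x, terrain, predicted_dx=0):
    """Disparity for interior pixel (y, x) of `left`, given its terrain class.
    predicted_dx plays the role of the predicted parallax."""
    win, search = SIZES[terrain]
    h = win // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1]
    best, best_dx = -2.0, 0
    for dx in range(predicted_dx - search, predicted_dx + search + 1):
        cx = x + dx
        if h <= cx < right.shape[1] - h:
            score = ncc(ref, right[y - h:y + h + 1, cx - h:cx + h + 1])
            if score > best:
                best, best_dx = score, dx
    return best_dx
```

The smaller window/search pairs for plain and water are what buy the reported speedup, since those regions carry less texture to disambiguate.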

Adaptable Center Detection of a Laser Line with a Normalization Approach using Hessian-matrix Eigenvalues

  • Xu, Guan;Sun, Lina;Li, Xiaotao;Su, Jian;Hao, Zhaobing;Lu, Xue
    • Journal of the Optical Society of Korea
    • /
    • v.18 no.4
    • /
    • pp.317-329
    • /
    • 2014
  • In vision measurement systems based on structured light, the key to detection precision is accurately determining the central position of the projected laser line in the image. The purpose of this research is to extract laser line centers using a decision function built to distinguish the real centers from candidate points with a high recognition rate. First, the image is preprocessed with a difference-image method to segment the laser line. Second, feature points at the integer-pixel level are selected as initial laser line centers using the eigenvalues of the Hessian matrix. Third, because the light intensity of a laser line follows a Gaussian distribution in the transverse section and a constant distribution in the longitudinal section, a normalized model of the Hessian-matrix eigenvalues of the candidate centers is presented to reasonably balance the two eigenvalues, which indicate the variation tendencies of the second-order partial derivatives of the Gaussian function and the constant function, respectively. The proposed model integrates a Gaussian recognition function and a sinusoidal recognition function. The Gaussian recognition function estimates the characteristic that one eigenvalue approaches zero and enhances the sensitivity of the decision function to that characteristic, which corresponds to the longitudinal direction of the laser line. The sinusoidal recognition function evaluates the feature that the other eigenvalue is negative with a large absolute value, making the decision function more sensitive to that feature, which relates to the transverse direction of the laser line. In the proposed model, the decision function assigns higher values to the real centers by jointly considering the properties in the longitudinal and transverse directions of the laser line. Moreover, the method yields a decision value between 0 and 1 for an arbitrary candidate center, providing a normalized measure across different laser lines and images; pixels whose normalized values are close to 1 are determined to be the real centers by progressively scanning the image columns. Finally, the zero point of a second-order Taylor expansion along the eigenvector's direction is employed to further refine the extracted central points to the subpixel level. The experimental results show that the method based on this normalization model accurately extracts the coordinates of laser line centers and achieves a higher recognition rate in two groups of experiments.
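
The eigenvalue machinery can be sketched compactly: per-pixel Hessian eigenvalues from Gaussian derivatives, combined into a 0-to-1 decision value that rewards one eigenvalue near zero (longitudinal direction) and one strongly negative eigenvalue (transverse direction). The exponential and sinusoidal stand-ins below only approximate the paper's recognition functions, and sigma is an assumed parameter.

```python
# Hedged sketch of a Hessian-eigenvalue decision map for laser line centers.
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_decision(img: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    # Second-order Gaussian derivatives (axis 0 = rows/y, axis 1 = cols/x).
    Ixx = gaussian_filter(img, sigma, order=(0, 2))
    Iyy = gaussian_filter(img, sigma, order=(2, 0))
    Ixy = gaussian_filter(img, sigma, order=(1, 1))
    # Closed-form eigenvalues of the symmetric 2x2 Hessian at every pixel.
    tr = Ixx + Iyy
    root = np.sqrt(((Ixx - Iyy) / 2.0) ** 2 + Ixy ** 2)
    lam1 = tr / 2.0 + root   # near zero along the line (longitudinal)
    lam2 = tr / 2.0 - root   # strongly negative across the line (transverse)
    scale = np.abs(lam2).max() + 1e-12
    g = np.exp(-(lam1 / scale) ** 2)                   # Gaussian-like term
    s = np.sin(0.5 * np.pi * np.clip(-lam2 / scale, 0.0, 1.0))  # sinusoidal term
    return g * s   # decision value in [0, 1]; values near 1 mark centers
```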

High-Quality Depth Map Generation of Humans in Monocular Videos (단안 영상에서 인간 오브젝트의 고품질 깊이 정보 생성 방법)

  • Lee, Jungjin;Lee, Sangwoo;Park, Jongjin;Noh, Junyong
    • Journal of the Korea Computer Graphics Society
    • /
    • v.20 no.2
    • /
    • pp.1-11
    • /
    • 2014
  • The quality of 2D-to-3D conversion depends on the accuracy of the depth assigned to scene objects. Manual depth painting for given objects is labor intensive, as every frame must be painted. A human in particular is one of the most challenging objects for high-quality conversion: the human body is an articulated figure with many degrees of freedom (DOF), and various styles of clothes, accessories, and hair create a very complex silhouette around the 2D human object. We propose an efficient method to estimate visually pleasing depths of a human in every frame of a monocular video. First, a 3D template model is matched to the person in the video from a small number of user-specified correspondences. Our pose estimation with sequential joint angular constraints reproduces a wide range of human motions (e.g., spine bending) by allowing the use of a fully skinned 3D model with a large number of joints and DOFs. The initial depth of the 2D object is assigned from the matching results and then propagated into areas where depth is missing to produce a complete depth map. To handle complex silhouettes and appearances effectively, we introduce a partial depth propagation method based on color segmentation that preserves the detail of the results. We compared our results with depth maps painted by experienced artists; the comparison shows that our method efficiently produces viable depth maps of humans in monocular videos.
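
As a rough illustration of color-segmentation-guided depth propagation, the sketch below spreads sparse template depths within superpixels so that depth edges follow color edges; SLIC and the per-segment mean fill are stand-ins for the paper's actual propagation, and all parameters are assumptions.

```python
# Hedged sketch: fill missing depths inside color segments from known samples.
import numpy as np
from skimage.segmentation import slic

def propagate_depth(rgb: np.ndarray, sparse_depth: np.ndarray,
                    known: np.ndarray, n_segments: int = 400) -> np.ndarray:
    """rgb: HxWx3 image; sparse_depth: HxW initial depths from the matched
    template; known: HxW boolean mask of pixels with assigned depth."""
    labels = slic(rgb, n_segments=n_segments, compactness=10, start_label=0)
    depth = sparse_depth.astype(float).copy()
    for seg in np.unique(labels):
        m = labels == seg
        k = m & known
        if k.any():
            depth[m & ~known] = depth[k].mean()   # fill from in-segment samples
    return depth
```

Restricting the fill to each color segment is what keeps depth discontinuities aligned with the complex silhouette the abstract describes.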