• Title/Summary/Keyword: Level Set Segmentation

Search Results: 86

Study on the Determinants of Customer Satisfaction for Jewelry Brands (주얼리 브랜드에 대한 고객만족의 결정요인에 관한 연구)

  • Yoon, Sung-Joon
    • Asia-Pacific Journal of Business
    • /
    • v.10 no.2
    • /
    • pp.43-64
    • /
    • 2019
  • As with other product brands, it is very important for jewelry brands to correctly identify customer characteristics, pursue a high level of service quality, and develop products that set them apart from competitors in order to increase customer satisfaction and strengthen repurchase intention. In consideration of these product characteristics, this study aims to verify whether service quality and/or product traits affect customer satisfaction. In addition, the study investigates whether customer traits moderate these effects on customer satisfaction. Finally, the study provides useful theoretical and practical implications for customer segmentation strategies that are contingent upon customer characteristics.

RECOGNITION ALGORITHM OF DRIED OAK MUSHROOM GRADINGS USING GRAY LEVEL IMAGES

  • Lee, C.H.;Hwang, H.
    • Proceedings of the Korean Society for Agricultural Machinery Conference
    • /
    • 1996.06c
    • /
    • pp.773-779
    • /
    • 1996
  • Dried oak mushrooms have complex and varied visual features, and their grading and sorting has traditionally been done by human experts. Though the actions involved in human grading look simple, the underlying decision making results from complex neural processing of the visual image. Although the processing details involved in human visual recognition have not been fully investigated, it may be said that humans recognize objects in one of three ways: by extracting specific features, from the image itself without extracting such features, or in a combined manner. In most cases, extracting special quantitative features from a camera image requires complex algorithms, and processing the gray-level image imposes a heavy computing load. This is especially problematic when dealing with nonuniform, irregular, and fuzzy-shaped agricultural products, resulting in poor performance because of the sensitivity to the crisp criteria or specific rules set up by the algorithms. The real-time processing constraint also often forces the use of binary segmentation, in which case some important information about the object can be lost. In this paper, a neural-network-based real-time recognition algorithm is proposed that uses only directly captured raw gray images, without extracting any visual features. A specially formatted, adaptable-size grid is proposed for the network input. Illumination compensation is also performed to accommodate variable lighting environments. The proposed grading scheme showed very successful results.
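The grid-based network input described above can be sketched as follows; the grid size, the toy image, and the simple mean-intensity pooling are illustrative assumptions, not the paper's exact formatting scheme:

```python
import numpy as np

def grid_features(gray, rows=8, cols=8):
    """Reduce a raw gray-level image to a rows x cols grid of mean
    intensities, usable directly as a network input vector."""
    h, w = gray.shape
    rs = np.linspace(0, h, rows + 1, dtype=int)
    cs = np.linspace(0, w, cols + 1, dtype=int)
    feats = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            feats[i, j] = gray[rs[i]:rs[i + 1], cs[j]:cs[j + 1]].mean()
    # scale to [0, 1]; a crude stand-in for illumination compensation
    return feats.ravel() / 255.0

# toy gray image standing in for a captured mushroom image
img = (np.arange(64 * 64).reshape(64, 64) % 256).astype(float)
x = grid_features(img, rows=8, cols=8)
print(x.shape)  # (64,)
```

The grid pools the raw image rather than extracting shape features, mirroring the paper's idea of feeding gray levels to the network directly.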


Feasibility Study on the Optimization of Offsite Consequence Analysis by Particle Size Distribution Setting and Multi-Threading (입자크기분포 설정 및 멀티스레딩을 통한 소외사고영향분석 최적화 타당성 평가)

  • Seunghwan Kim;Sung-yeop Kim
    • Journal of the Korean Society of Safety
    • /
    • v.39 no.1
    • /
    • pp.96-103
    • /
    • 2024
  • The demand for mass calculation of offsite consequence analysis to conduct exhaustive single-unit or multi-unit Level 3 PSA is increasing. In order to perform efficient offsite consequence analyses, the Korea Atomic Energy Research Institute is conducting model optimization studies to minimize the analysis time while maintaining the accuracy of the results. A previous study developed a model optimization method using efficient plume segmentation and verified its effectiveness. In this study, we investigated the possibility of optimizing the model through particle size distribution setting by checking the reduction in analysis time and deviation of the results. Our findings indicate that particle size distribution setting affects the results, but its effect on analysis time is insignificant. Therefore, it is advantageous to set the particle size distribution as fine as possible. Furthermore, we evaluated the effect of multithreading and confirmed its efficiency. Future optimization studies should be conducted on various input factors of offsite consequence analysis, such as spatial grid settings.
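The multithreading idea above can be illustrated with a minimal sketch: independent trial cases of a consequence analysis are dispatched to worker threads. The `run_trial` function is a hypothetical stand-in, not the analysis kernel used in the study:

```python
from concurrent.futures import ThreadPoolExecutor

def run_trial(weather_case):
    """Hypothetical per-trial calculation; each trial is independent,
    so trials can be dispatched to workers in parallel."""
    return weather_case * 0.1  # placeholder result

cases = list(range(8))
with ThreadPoolExecutor(max_workers=4) as pool:
    # map preserves input order, so results line up with cases
    results = list(pool.map(run_trial, cases))
```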

Improved Shape Extraction Using Inward and Outward Curve Evolution (양방향 곡선 전개를 이용한 개선된 형태 추출)

  • Kim Ha-Hyoung;Kim Seong-Kon;Kim Doo-Young
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.1 no.1
    • /
    • pp.23-31
    • /
    • 2000
  • Iterative curve evolution techniques are powerful methods for image segmentation. Classical methods proposed curve evolutions that guarantee closed contours at convergence and, combined with the level set method, easily handle changes in curve topology. In this paper, we present a new geometric active contour model, based on the level set methods introduced by Osher & Sethian, for detecting object boundaries or shapes, and we adopt anisotropic diffusion filtering to remove noise from the original image. Classical methods allow only one-way curve evolution: shrinking or expanding of the curve. Thus, the initial curve must encircle all the objects to be segmented, or several curves must be used, each one entirely inside one object. Our method, by contrast, allows two-way curve evolution: parts of the curve evolve outward while others evolve inward. It offers much more freedom in the initial curve position than a classical geodesic search method. Our algorithm performs accurate and precise segmentation of noisy images with complex objects (including sharp angles, deep concavities, or holes), and it easily handles changes in curve topology. To minimize processing time, we use the narrow band method, which allows us to perform calculations only in the neighborhood of the contour rather than over the whole image.
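The two-way evolution and the narrow band restriction can be sketched as a single level set update step; the grid, speed field, and band width below are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def evolve(phi, F, dt=0.5, band=3.0):
    """One explicit step of phi_t + F * |grad phi| = 0, restricted to a
    narrow band around the zero contour. F may change sign pointwise,
    so parts of the curve move outward while others move inward."""
    gy, gx = np.gradient(phi)
    grad = np.sqrt(gx ** 2 + gy ** 2)
    update = -dt * F * grad
    mask = np.abs(phi) < band       # narrow band: skip far-away grid points
    phi = phi.copy()
    phi[mask] += update[mask]
    return phi

# signed distance to a circle of radius 10 on a 64x64 grid
y, x = np.mgrid[0:64, 0:64]
phi = np.sqrt((x - 32.0) ** 2 + (y - 32.0) ** 2) - 10.0
F = np.where(x < 32, 1.0, -1.0)     # left half expands, right half shrinks
phi2 = evolve(phi, F)
```

With a sign-changing speed `F`, the left part of the contour moves outward while the right part moves inward, which is the two-way behavior described in the abstract; points outside the band are untouched, which is the narrow band saving.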


Anonymity of Medical Brain Images (의료 두뇌영상의 익명성)

  • Lee, Hyo-Jong;Du, Ruoyu
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.49 no.1
    • /
    • pp.81-87
    • /
    • 2012
  • The current defacing method for keeping brain images anonymous preserves patients' privacy but damages the integrity of precise brain analysis through over-removal. A novel method has been developed to create an anonymous face model while keeping the voxel values of the image exactly the same as those of the original. The method consists of two steps: construction of a mockup brain template from ten normalized brain images, and substitution of the mockup brain into the subject's brain image. A level set segmentation algorithm is applied to segment the scalp and skull apart from the whole brain volume. The segmented mockup brain is coregistered and normalized to the subject's brain image to create an anonymous face model. The validity of this modification is tested by comparing the intensity of voxels inside the brain area of the mockup brain with that of the original brain image. The results show that the voxel intensities inside the brain from the mockup are identical to those of the original brain image, while anonymity is guaranteed.
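The substitution step above amounts to a masked voxel swap: brain voxels are kept bit-exact while scalp/face voxels come from the mockup. The arrays and mask below are synthetic stand-ins (the paper derives its mask from a level set segmentation), shown only to make the idea concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic subject volume and mockup template (hypothetical data)
subject = rng.integers(0, 255, size=(16, 16, 16)).astype(float)
mockup = rng.integers(0, 255, size=(16, 16, 16)).astype(float)

# brain voxels that must stay intact; in the paper this mask comes
# from level set segmentation of the scalp-skull boundary
brain_mask = np.zeros((16, 16, 16), dtype=bool)
brain_mask[4:12, 4:12, 4:12] = True

# swap only the non-brain (face/scalp) voxels for the mockup's
anonymized = np.where(brain_mask, subject, mockup)
```

The key property is that every voxel inside the mask is numerically identical to the original, which is exactly the integrity claim the abstract verifies.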

Adaptive Key-point Extraction Algorithm for Segmentation-based Lane Detection Network (세그멘테이션 기반 차선 인식 네트워크를 위한 적응형 키포인트 추출 알고리즘)

  • Sang-Hyeon Lee;Duksu Kim
    • Journal of the Korea Computer Graphics Society
    • /
    • v.29 no.1
    • /
    • pp.1-11
    • /
    • 2023
  • Deep-learning-based image segmentation is one of the most widely employed lane detection approaches, and it requires a post-process for extracting the key points on the lanes. A common approach to key-point extraction uses a fixed, user-defined threshold. However, finding the best threshold is a manual process requiring much effort, and the best value can differ depending on the target data set (or image). We propose a novel key-point extraction algorithm that automatically adapts to the target image without any manual threshold setting. In our adaptive key-point extraction algorithm, we propose a line-level normalization method to clearly distinguish the lane region from the background. We then extract a representative key point for each lane at each line (row of the image) using kernel density estimation. To assess the benefits of our approach, we applied our method to two lane-detection data sets, TuSimple and CULane. As a result, our method achieved up to 1.80%p and 17.27% better results than a fixed threshold in terms of accuracy and of the distance error between the ground-truth key point and the predicted point.
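The per-row extraction described above can be sketched as follows; the bandwidth and the toy response profile are illustrative assumptions, not the paper's tuned settings:

```python
import numpy as np

def row_keypoint(prob_row, bandwidth=2.0):
    """Line-level normalization followed by a kernel density estimate
    over column positions; the KDE peak is taken as the lane key point
    for this image row (no fixed threshold involved)."""
    p = prob_row / prob_row.max()            # line-level normalization
    cols = np.arange(p.size)
    # probability-weighted Gaussian KDE evaluated at every column
    diffs = cols[:, None] - cols[None, :]
    kde = (p[None, :] * np.exp(-0.5 * (diffs / bandwidth) ** 2)).sum(axis=1)
    return int(np.argmax(kde))

# toy segmentation response for one image row, peaked around column 22
row = np.zeros(50)
row[20:25] = [0.2, 0.6, 1.0, 0.6, 0.2]
print(row_keypoint(row))  # 22
```

Because the row is normalized by its own maximum before the density estimate, the extraction adapts to each image instead of relying on one global threshold.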

The Adaptive Personalization Method According to Users Purchasing Index : Application to Beverage Purchasing Predictions (고객별 구매빈도에 동적으로 적응하는 개인화 시스템 : 음료수 구매 예측에의 적용)

  • Park, Yoon-Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.95-108
    • /
    • 2011
  • This is a study of a personalization method that intelligently adapts the level of clustering to a customer's purchasing index. In the e-business era, many companies gather customers' demographic and transactional information, such as age, gender, purchasing date, and product category. They use this information to predict customers' preferences or purchasing patterns so that they can provide more customized services. The conventional Customer-Segmentation method provides customized services for each customer group: it clusters the whole customer set into groups based on similarity and builds a predictive model for each resulting group. It thus keeps the number of predictive models manageable and also provides more data for customers who do not have enough of their own to build a good predictive model, by using the data of other, similar customers. However, this method often fails to provide highly personalized services to each customer, which is especially important for VIP customers. Furthermore, it clusters customers who already have a considerable amount of data together with customers who have only a little, which unnecessarily increases computational cost without significant performance improvement. The other conventional method, the 1-to-1 method, provides more customized services than the Customer-Segmentation method, since each predictive model is built using only the data of an individual customer. This method not only provides highly personalized services but also builds a relatively simple and less costly model that satisfies each customer. However, the 1-to-1 method does not produce a good predictive model when a customer has only a small amount of data; in other words, if a customer has an insufficient number of transactions, the performance of this method deteriorates.
In order to overcome the limitations of these two conventional methods, we suggest a new method, the Intelligent Customer Segmentation method, which provides adaptively personalized services according to each customer's purchasing index. The suggested method clusters customers according to their purchasing index, so that predictions for customers with fewer purchases are based on the data of more intensively clustered groups, while VIP customers, who already have a considerable amount of data, are clustered to a much lesser extent or not at all. The main idea is to apply the clustering technique when the number of transactions of the target customer is less than a predefined criterion data size. In order to find this criterion, we suggest an algorithm called sliding window correlation analysis. The algorithm aims to find the transactional data size below which the performance of the 1-to-1 method degrades sharply due to data sparsity. After finding this criterion data size, we apply the conventional 1-to-1 method to customers who have more data than the criterion, and apply the clustering technique to those who have less, until they can use at least the predefined criterion amount of data in the model building process. We apply the two conventional methods and the newly suggested method to Nielsen's beverage purchasing data to predict customers' purchasing amounts and purchasing categories. We use two data mining techniques (Support Vector Machine and Linear Regression) and two performance measures (MAE and RMSE) to predict the two dependent variables. The results show that the suggested Intelligent Customer Segmentation method outperforms the conventional 1-to-1 method in many cases and produces the same level of performance as the Customer-Segmentation method at a much lower computational cost.
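The routing rule at the heart of the method can be sketched in a few lines; the criterion of 30 transactions is a placeholder, whereas the paper derives the criterion with its sliding window correlation analysis:

```python
def choose_model(n_transactions, criterion=30):
    """Route a customer to an individual (1-to-1) model when enough
    transactional data exists; otherwise fall back to a clustered
    segment model built from similar customers' data."""
    return "1-to-1" if n_transactions >= criterion else "segment"

# a VIP customer with plenty of data gets an individual model,
# a sparse-data customer is served by a segment model
print(choose_model(50), choose_model(5))  # 1-to-1 segment
```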

Superpixel-based Vehicle Detection using Plane Normal Vector in Disparity Space

  • Seo, Jeonghyun;Sohn, Kwanghoon
    • Journal of Korea Multimedia Society
    • /
    • v.19 no.6
    • /
    • pp.1003-1013
    • /
    • 2016
  • This paper proposes a framework for superpixel-based vehicle detection using plane normal vectors in disparity space. We use the two common stages of vehicle detection: Hypothesis Generation (HG) and Hypothesis Verification (HV). At the HG stage, we set regions of interest (ROI) by estimating the lane and track them to reduce the computational cost of the overall process. The image is then divided into compact superpixels, each of which is viewed as a plane characterized by its normal vector in disparity space. A representative normal vector is then computed at the superpixel level, which alleviates the well-known problems of conventional color-based and depth-based approaches. Based on the assumption that the central-bottom of the input image is always on the navigable region, road and obstacle candidates are simultaneously extracted from the plane normal vectors obtained by the K-means algorithm. At the HV stage, the separated obstacle candidates are verified by employing HOG and an SVM as the feature and classifier, respectively. To achieve this, we trained the SVM classifier with HOG features from the KITTI training dataset. The experimental results demonstrate that the proposed vehicle detection system outperforms conventional HOG-based methods qualitatively and quantitatively.
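The per-superpixel plane normal described above can be sketched as a least-squares plane fit to a disparity patch; the patch sizes and synthetic disparities below are illustrative assumptions (the subsequent K-means clustering of normals is omitted):

```python
import numpy as np

def plane_normal(disp_patch):
    """Fit d = a*x + b*y + c to a disparity patch by least squares and
    return the unit plane normal (a, b, -1)/||(a, b, -1)|| in
    disparity space."""
    h, w = disp_patch.shape
    y, x = np.mgrid[0:h, 0:w]
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(h * w)])
    (a, b, c), *_ = np.linalg.lstsq(A, disp_patch.ravel(), rcond=None)
    n = np.array([a, b, -1.0])
    return n / np.linalg.norm(n)

# road-like patch: disparity falls off with image row (ground plane)
y, x = np.mgrid[0:8, 0:8]
road = 20.0 - 1.5 * y
n_road = plane_normal(road)

# obstacle-like patch: roughly constant disparity (fronto-parallel)
obstacle = np.full((8, 8), 12.0)
n_obs = plane_normal(obstacle)
```

Road-like and obstacle-like superpixels yield clearly different normals, which is what makes clustering the normals (e.g. with K-means, as in the paper) separate road from obstacle candidates.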

Assessment of The Accuracy of The MR Abdominal Adipose Tissue Volumetry using 3D Gradient Dual Echo 2-Point DIXON Technique using CT as Reference

  • Kang, Sung-Jin
    • Journal of Magnetics
    • /
    • v.21 no.4
    • /
    • pp.603-615
    • /
    • 2016
  • In this study, in order to determine the validity and accuracy of MR imaging with the 3D gradient dual echo 2-point DIXON technique for measuring abdominal adipose tissue volume and distribution, measurements obtained by CT were set as the reference for comparison and the correlations were evaluated. CT and MRI scans were performed on each subject (17 healthy male volunteers who were fully informed about the study) to measure abdominal adipose tissue volume. Two skilled investigators individually observed the images acquired by CT and MRI in an independent environment and segmented the total volume directly using a region-based thresholding segmentation method; based on this, the total, subcutaneous, and visceral adipose tissue volumes were measured. The correlation of the adipose tissue volume measurements between observers was examined using the Spearman test, and inter-observer agreement was evaluated using the intra-class correlation test. The correlation between the CT and MRI adipose tissue volume measurements was examined by simple regression analysis. In addition, the degree of agreement between the two imaging methods was evaluated using a Bland-Altman plot. All statistical analyses showed statistically significant correlations (p<0.05) for each adipose tissue volume measurement. In conclusion, MR abdominal adipose volumetry using the 3D gradient dual echo 2-point DIXON technique showed a very high level of concordance with the CT reference method.
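The Bland-Altman agreement check mentioned above reduces to a bias and 95% limits of agreement computed from paired differences; the paired volumes below are made-up numbers for illustration only, not the study's measurements:

```python
import numpy as np

# hypothetical paired adipose volume measurements (cm^3)
ct = np.array([2100.0, 2550.0, 1800.0, 3000.0, 2700.0])
mr = np.array([2080.0, 2600.0, 1750.0, 3050.0, 2720.0])

diff = mr - ct
bias = diff.mean()                       # Bland-Altman bias (mean difference)
sd = diff.std(ddof=1)                    # sample standard deviation
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
```

If most paired differences fall within the limits of agreement and the bias is small relative to the measured volumes, the two modalities are considered concordant, which is the conclusion the abstract reports.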

A Study on Mapping 3-D River Boundary Using the Spatial Information Datasets (공간정보를 이용한 3차원 하천 경계선 매핑에 관한 연구)

  • Choung, Yun-Jae;Park, Hyen-Cheol;Jo, Myung-Hee
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.15 no.1
    • /
    • pp.87-98
    • /
    • 2012
  • A river boundary is defined as the intersection between the main stream of a river and the land. Mapping the river boundary is important for protecting property in river areas, preventing flooding, and monitoring topographic changes. However, ground surveying technologies are not efficient for mapping the river boundary because of the irregular surfaces of river zones and the dynamic changes in a river's water level. Recently, spatial information datasets such as airborne LiDAR and aerial images have been widely used for coastal mapping, since they can acquire topographic information in areas without human access. Because of these advantages, this research proposes a semi-automatic method for mapping the river boundary using spatial information datasets, namely airborne LiDAR data and aerial photographs. Multiple image processing technologies, such as an image segmentation algorithm and an edge detection algorithm, are applied to generate the 3D river boundary from the aerial photographs and airborne topographic LiDAR data. Check points determined by an experienced expert are used to measure the horizontal and vertical accuracy of the generated 3D river boundary. Statistical results show that the generated river boundary has high accuracy in both the horizontal and vertical directions.
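The edge-detection step mentioned above can be sketched with a plain Sobel operator; the synthetic two-region image below stands in for an actual aerial photograph, and the threshold is an illustrative assumption:

```python
import numpy as np

def sobel_edges(img, thresh=1.0):
    """Sobel gradient magnitude thresholded to a binary edge map; edge
    pixels mark candidate boundary points between water and land."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(1, h - 1):            # skip the 1-pixel border
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy) > thresh

# synthetic image: "water" half and "land" half meeting at column 5
img = np.zeros((10, 10))
img[:, 5:] = 10.0
edges = sobel_edges(img, thresh=1.0)
```

The detected edge pixels trace the water-land transition; in the paper these candidates are combined with segmentation results and LiDAR heights to place the boundary in 3D.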