• Title/Summary/Keyword: 데이터 중복제거 (Data Deduplication)


Mobile Commerce Success Factors: A Value-Focused Analysis (모바일 커머스의 성공 요인들에 관한 연구 : 가치 중심적인 분석)

  • 이정우;이승희
    • The Journal of Society for e-Business Studies / v.8 no.4 / pp.129-149 / 2003
  • The explosive growth of mobile devices over the past years has greatly increased commercial interest in mobile commerce, but mobile commerce has not yet seen the anticipated growth on the demand side. Despite the slow start, mobile commerce has considerable potential from the users' perspective. To make this potential a reality, businesses must focus on value from their customers' perspective, not on technical competencies. This research employed Keeney's (1992) value-focused thinking approach to explore the values of mobile commerce from actual users' viewpoint. In-depth interviews were conducted with seventy practical users of mobile commerce. Through extensive focus group sessions, 748 statements obtained from actual users were classified into 18 categories of means objectives and 12 categories of fundamental objectives of mobile commerce. The means-ends network diagram of values seen in mobile commerce by actual users was presented and compared to the typical electronic commerce diagram. The results suggest that mobile commerce needs to be handled differently from traditional electronic commerce.


Arrhythmia Classification Method using QRS Pattern of ECG Signal according to Personalized Type (대상 유형별 ECG 신호의 QRS 패턴을 이용한 부정맥 분류)

  • Cho, Ik-sung;Jeong, Jong-Hyeog;Kwon, Hyeog-soong
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.7 / pp.1728-1736 / 2015
  • Several algorithms have been developed to classify arrhythmia, most of which rely on a specific ECG (electrocardiogram) database. Nevertheless, because personalized differences in ECG signals exist, performance degradation occurs when diagnosis is carried out with a general classification rule. Most methods also require accurate detection of the P-QRS-T points, higher computational cost, and longer processing time, yet it is difficult to detect the P and T waves because of individual differences between people. Therefore, it is necessary to design an efficient algorithm that classifies different arrhythmias in real time and decreases computational cost by extracting minimal features. In this paper, we propose an arrhythmia classification method using the QRS pattern of the ECG signal according to personalized type. For this purpose, we detect the R wave through a preprocessing step and define QRS patterns of the ECG signal from QRS features. We then detect and modify duplicated QRS patterns by pattern classification and classify arrhythmias in real time. Classification of Normal, PVC, PAC, LBBB, RBBB, and Paced beats is evaluated using 43 records of the MIT-BIH arrhythmia database. The achieved scores indicate averages of 99.98%, 97.22%, 95.14%, 91.47%, 94.85%, and 97.48% for PVC, PAC, Normal, BBB, and Paced beat classification.
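
A minimal, illustrative Python sketch of the pipeline the abstract describes (R-wave detection from a preprocessed signal, then a QRS-width-based beat rule) is given below. The threshold rule, the width estimate, and the `classify_beat` cutoffs are assumptions for demonstration only, not the authors' algorithm.

```python
import numpy as np

def detect_r_peaks(ecg, fs, threshold_ratio=0.6):
    """Rough R-peak detector: square the first difference and threshold it.
    (Illustrative only; real detectors such as Pan-Tompkins are more robust.)"""
    energy = np.diff(ecg) ** 2
    threshold = threshold_ratio * energy.max()
    refractory = int(0.2 * fs)            # ignore peaks closer than 200 ms
    peaks, last = [], -refractory
    for i, e in enumerate(energy):
        if e > threshold and i - last > refractory:
            peaks.append(i)
            last = i
    return np.array(peaks)

def qrs_width_seconds(ecg, r_idx, fs, frac=0.1):
    """Crude QRS width: span around the R peak whose amplitude stays above
    a fraction of the peak amplitude (stands in for onset/offset detection)."""
    amp = abs(ecg[r_idx])
    left = r_idx
    while left > 0 and abs(ecg[left]) > frac * amp:
        left -= 1
    right = r_idx
    while right < len(ecg) - 1 and abs(ecg[right]) > frac * amp:
        right += 1
    return (right - left) / fs

def classify_beat(width_s, rr_s, prev_rr_s):
    """Toy rule set: a wide QRS suggests a ventricular beat (PVC-like),
    a premature narrow beat suggests PAC, otherwise Normal."""
    if width_s > 0.12:
        return "PVC-like (wide QRS)"
    if rr_s < 0.8 * prev_rr_s:
        return "PAC-like (premature beat)"
    return "Normal"
```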

An Intra Prediction Hardware Architecture Design for Computational Complexity Reduction of HEVC Decoder (HEVC 복호기의 연산 복잡도 감소를 위한 화면내 예측 하드웨어 구조 설계)

  • Jung, Hongkyun;Ryoo, Kwangki
    • Journal of the Korea Institute of Information and Communication Engineering / v.17 no.5 / pp.1203-1212 / 2013
  • In this paper, an intra prediction hardware architecture is proposed to reduce the computational complexity of intra prediction in the HEVC decoder. The architecture uses shared operation units and common operation units, and adopts a fast smoothing decision algorithm and a fast algorithm for generating filter coefficients. The shared operation unit shares adders that process common equations to remove computational redundancy, and it computes an average value in DC mode to reduce the number of execution cycles in that mode. To reduce the number of operation units, the common operation unit uses a single unit to generate predicted pixels and filtered pixels in all prediction modes. To reduce processing time and the number of operators, the decision algorithm uses only bit comparators and the fast algorithm uses a LUT instead of multiplication operators. The proposed architecture uses four shared operation units and eight common operation units, which reduces the execution cycles of intra prediction. The architecture is synthesized using TSMC 0.13um CMOS technology; the gate count and the maximum operating frequency are 40.5k and 164MHz, respectively. As a result of measuring the performance of the proposed architecture using data extracted from HM 7.1, its execution cycles are about 93.7% fewer than those of the previous design.
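
The DC-mode averaging mentioned in the abstract is the simplest intra mode: the block is filled with the mean of the top and left reference samples. The sketch below shows only that arithmetic (the rounded division by 2N used in HEVC DC mode), not the proposed shared/common operation-unit hardware; boundary filtering is omitted.

```python
import numpy as np

def intra_dc_prediction(top_refs, left_refs, block_size):
    """Fill an NxN block with the rounded average of the top and left
    reference samples, as in HEVC DC intra mode (boundary filtering omitted)."""
    n = block_size
    total = int(np.sum(top_refs[:n])) + int(np.sum(left_refs[:n]))
    dc = (total + n) >> (int(np.log2(n)) + 1)   # rounded division by 2N
    return np.full((n, n), dc, dtype=np.uint8)

# Example: a 4x4 block predicted from nearly constant references
top = np.array([100, 102, 104, 106], dtype=np.uint8)
left = np.array([98, 100, 101, 103], dtype=np.uint8)
print(intra_dc_prediction(top, left, 4))        # every pixel equals 102
```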

Evaluation of Grid-Based ROI Extraction Method Using a Seamless Digital Map (연속수치지형도를 활용한 격자기준 관심 지역 추출기법의 평가)

  • Jeong, Jong-Chul
    • Journal of Cadastre & Land InformatiX / v.49 no.1 / pp.103-112 / 2019
  • Extraction of regions of interest for satellite image classification is one of the important techniques for efficient management of national land space. However, recent studies on satellite image classification often depend on the information of the selected image when choosing the region of interest. This study proposes an effective method of selecting regions of interest using a seamless digital topographic map constructed from high-resolution images. The spatial information used in this research is based on the digital topographic maps from 2013 to 2017 provided by the National Geographical Information Institute and the 2015 Sejong City land cover map provided by the Ministry of Environment. To verify the accuracy of the extracted regions of interest, KOMPSAT-3A satellite images taken on October 28, 2018 and July 7, 2018 were used. The baseline samples for 2015 were extracted using the unchanged area of the seamless digital topographic maps for 2013-2015 and the land cover map for 2015, and the baseline samples for 2018 were extracted using the unchanged area of the seamless digital topographic maps for 2015-2017 and the land cover map for 2015. Redundant areas that occurred when merging the seamless digital topographic maps and the land cover map were removed to prevent confusion in the data. Finally, checkpoints were generated within the regions of interest, error matrices for 2015 and 2018 were computed for the regions of interest extracted from the KOMPSAT-3A satellite images, and the accuracies were approximately 93% and 72%, respectively. Regions of interest with this level of accuracy can be used as regions of interest for classification, and the misclassified regions can be used as a reference for change detection.
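
The reported accuracies (about 93% for 2015 and 72% for 2018) come from error (confusion) matrices built from checkpoints inside the extracted regions of interest. The sketch below shows only how overall accuracy is derived from such a matrix; the class labels and counts are made up for illustration and are not the study's data.

```python
import numpy as np

def overall_accuracy(error_matrix):
    """Overall accuracy from an error (confusion) matrix: correctly
    classified checkpoints (the diagonal) divided by all checkpoints."""
    m = np.asarray(error_matrix, dtype=float)
    return np.trace(m) / m.sum()

# Hypothetical 3-class error matrix (rows: reference, columns: classified)
example_matrix = [
    [50,  2,  1],
    [ 3, 40,  2],
    [ 1,  1, 30],
]
print(f"Overall accuracy: {overall_accuracy(example_matrix):.1%}")  # ~92.3%
```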

Development of an Algorithm for Automatic Quantity Take-off of Slab Rebar (슬래브 철근 물량 산출 자동화 알고리즘 개발)

  • Kim, Suhwan;Kim, Sunkuk;Suh, Sangwook;Kim, Sangchul
    • Korean Journal of Construction Engineering and Management / v.24 no.5 / pp.52-62 / 2023
  • The objective of this study is to propose an automated algorithm for the precise cutting length of slab rebar that complies with regulations such as anchorage length, standard hooks, and lap length. The algorithm aims to improve the traditional manual quantity take-off process, which is typically outsourced to external contractors. By providing accurate rebar quantity data at the BBS (Bar Bending Schedule) level from the bidding phase, uncertainty in quantity take-off can be eliminated and reliance on outsourcing reduced. In addition, the algorithm allows precise quantities to be determined early, enabling construction firms to prepare competitive and optimized bids, which leads to increased profit margins during contract negotiations. The proposed algorithm not only streamlines redundant tasks across various processes, including estimating, budgeting, and BBS generation, but also offers flexibility in handling post-contract structural drawing changes. In particular, when combined with BIM, the proposed algorithm can solve the technical problems of using BIM in the early phases of construction, and its formulas and shape codes, built as REVIT-based family files, can help save time and manpower.
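
As a rough illustration of the cutting-length idea in the abstract (clear length plus anchorage, standard hooks, and lap splices), a small Python sketch follows. The parameter names and the 40d lap-length multiplier are common rules of thumb assumed here for demonstration; they are not the paper's formulas or shape codes.

```python
def slab_bar_cutting_length(clear_span_mm, bar_dia_mm, n_laps=0,
                            lap_factor=40,            # lap length ≈ 40d (assumed)
                            anchorage_each_end_mm=0,
                            hook_each_end_mm=0):
    """Illustrative cutting length for one slab bar: clear span plus end
    anchorages, standard hooks, and lap splices (rules of thumb, not the
    paper's regulations)."""
    lap_length = n_laps * lap_factor * bar_dia_mm
    return (clear_span_mm
            + 2 * anchorage_each_end_mm
            + 2 * hook_each_end_mm
            + lap_length)

# Example: 8.4 m span, D13 bar, one lap splice, 150 mm anchorage each end
print(slab_bar_cutting_length(8400, 13, n_laps=1,
                              anchorage_each_end_mm=150))  # 9220 mm
```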

The Performance Bottleneck of Subsequence Matching in Time-Series Databases: Observation, Solution, and Performance Evaluation (시계열 데이타베이스에서 서브시퀀스 매칭의 성능 병목 : 관찰, 해결 방안, 성능 평가)

  • 김상욱
    • Journal of KIISE:Databases / v.30 no.4 / pp.381-396 / 2003
  • Subsequence matching is an operation that finds, from time-series databases, subsequences whose changing patterns are similar to a given query sequence. This paper points out the performance bottleneck in subsequence matching and then proposes an effective method that significantly improves the performance of entire subsequence matching by resolving that bottleneck. First, we analyze the disk access and CPU processing times required during the index searching and post-processing steps through preliminary experiments. Based on the results, we show that the post-processing step is the main performance bottleneck in subsequence matching, and we claim that its optimization is a crucial issue overlooked in previous approaches. To resolve the bottleneck, we propose a simple but quite effective method that processes the post-processing step in the optimal way. By rearranging the order of candidate subsequences to be compared with a query sequence, our method completely eliminates the redundancy of disk accesses and CPU processing that occurs in the post-processing step. We formally prove that our method is optimal and does not incur any false dismissal. We show the effectiveness of our method through extensive experiments. The results show that our method achieves significant speed-up in the post-processing step: 3.91 to 9.42 times when using a data set of real-world stock sequences and 4.97 to 5.61 times when using data sets of a large volume of synthetic sequences. The results also show that our method reduces the weight of the post-processing step in entire subsequence matching from about 90% to less than 70%, which implies that it successfully resolves the performance bottleneck in subsequence matching. As a result, our method provides excellent performance in entire subsequence matching: the experimental results reveal that it is 3.05 to 5.60 times faster when using a data set of real-world stock sequences and 3.68 to 4.21 times faster when using data sets of a large volume of synthetic sequences, compared with the previous method.
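
The core idea the abstract describes is to reorder the candidate subsequences returned by the index search so that all candidates stored on the same data page are verified together, letting each page be read from disk only once. A minimal sketch of that reordering is shown below; the `(page_id, offset)` candidate format and the `read_page` and `distance` callables are assumed interfaces, not the paper's actual implementation.

```python
from collections import defaultdict

def rearrange_candidates(candidates):
    """Group candidates by the data page that stores them, so each page is
    fetched only once during post-processing. Candidates are assumed to be
    (page_id, offset_in_page) pairs."""
    by_page = defaultdict(list)
    for page_id, offset in candidates:
        by_page[page_id].append(offset)
    return sorted(by_page.items())

def post_process(candidates, read_page, distance, query, epsilon):
    """Verify every candidate against the query with one disk access per page."""
    answers = []
    for page_id, offsets in rearrange_candidates(candidates):
        page = read_page(page_id)              # single read for this page
        for off in offsets:
            if distance(page[off], query) <= epsilon:
                answers.append((page_id, off))
    return answers
```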

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.173-198 / 2020
  • For a long time, many academic studies have been conducted on predicting the success of campaigns for customers, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded in various ways due to the rapid growth of online business, companies are carrying out various types of campaigns on a scale that cannot be compared to the past. However, customers tend to perceive campaigns as spam as fatigue from duplicate exposure increases. Also, from a corporate standpoint, the effectiveness of the campaigns themselves is decreasing while the cost of investing in them increases, which leads to a low actual campaign success rate. Accordingly, various studies are ongoing to improve the effectiveness of campaigns in practice. The ultimate purpose of a campaign system is to increase the success rate of campaigns by collecting and analyzing various customer-related data and using them for campaigns. In particular, recent attempts have been made to predict campaign response using machine learning. Because campaign data have many features, selecting appropriate features is very important. If all of the input data are used when classifying a large amount of data, learning time grows as the number of classes expands, so a minimal input data set must be extracted from the entire data and used. In addition, when a model is trained with too many features, prediction accuracy may be degraded due to overfitting or correlation between features. Therefore, to improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary step in analyzing a high-dimensional data set. Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used as traditional feature selection techniques, but when there are many features they suffer from poor classification prediction performance and long learning times. Therefore, in this study, we propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the existing SFFS sequential method in the process of searching for feature subsets, which are the basis for improving machine learning model performance, by using the statistical characteristics of the data processed in the campaign system. Features that strongly influence performance are derived first and features that have a negative effect are removed, after which the sequential method is applied to increase search efficiency and enable generalized prediction. It was confirmed that the proposed model showed better search and prediction performance than the traditional greedy algorithm. Compared with the original data set, the greedy algorithm, a genetic algorithm (GA), and recursive feature elimination (RFE), campaign success prediction was higher. In addition, when performing campaign success prediction, the improved feature selection algorithm was found to be helpful in analyzing and interpreting the prediction results by providing the importance of the derived features.
These include features such as age, customer rating, and sales, which were already known to be statistically important. In addition, features such as the combined product name, the average 3-month data consumption rate, and the last 3 months' wireless data usage, which campaign planners had rarely used to select campaign targets, were unexpectedly selected as important features for campaign response. It was confirmed that base attributes can also be very important features depending on the type of campaign, making it possible to analyze and understand the important characteristics of each campaign type.
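
The improved search builds on Sequential Floating Forward Selection (SFFS): repeatedly add the single best remaining feature, then conditionally drop any selected feature whose removal improves the score. A generic SFFS sketch is shown below with a caller-supplied `score` function (for example, cross-validated accuracy of a campaign-response classifier); the statistical pre-filtering that the paper adds before the sequential search is not reproduced here.

```python
def sffs(features, score, max_features=None):
    """Sequential Floating Forward Selection (generic sketch).
    `score(subset)` returns the quality of a feature subset to be maximized,
    e.g. cross-validated accuracy of a campaign-response model."""
    selected, best = [], float("-inf")
    limit = max_features or len(features)
    while len(selected) < limit:
        # Forward step: add the single best remaining feature.
        remaining = [f for f in features if f not in selected]
        if not remaining:
            break
        gain, f_add = max((score(selected + [f]), f) for f in remaining)
        if gain <= best:
            break                               # no remaining feature helps
        selected.append(f_add)
        best = gain
        # Floating (backward) step: drop features whose removal improves the score.
        improved = True
        while improved and len(selected) > 1:
            new_score, f_drop = max(
                (score([g for g in selected if g != f]), f) for f in selected)
            improved = new_score > best
            if improved:
                selected.remove(f_drop)
                best = new_score
    return selected, best
```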