• Title/Summary/Keyword: k-mean algorithm


Personalized insurance product based on similarity (유사도를 활용한 맞춤형 보험 추천 시스템)

  • Kim, Joon-Sung;Cho, A-Ra;Oh, Hayong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.11
    • /
    • pp.1599-1607
    • /
    • 2022
  • The data used for the models are primarily personal information and insurance-product information. With these data, we propose three types of models: a content-based filtering model, a collaborative filtering model, and a classification-based model. The content-based filtering model computes the cosine of the angle between user and item feature vectors and recommends items based on that cosine similarity; before computing the similarity, however, we divide users into several groups by their features. Segmentation is performed both by the K-means clustering algorithm and by a manually designed algorithm. The collaborative filtering model uses the interactions that users have with items. The classification-based model uses decision tree and random forest classifiers to recommend items. According to the results of the research, the content-based filtering model provides the best result. Since this model recommends items based on demographic and user features, the result indicates that demographic and user features are key to offering more appropriate items.
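
The pipeline this abstract describes (K-means segmentation of users, then cosine-similarity ranking of items) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the feature vectors, item names, and seeding strategy are assumptions.

```python
import math
from collections import defaultdict

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def kmeans(points, k, iters=20):
    """Naive k-means: seeds with the first k points, returns centroids."""
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        groups = defaultdict(list)
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            groups[i].append(p)
        for i, members in groups.items():
            centroids[i] = [sum(col) / len(members) for col in zip(*members)]
    return centroids

def nearest(p, centroids):
    """Index of the centroid (user segment) closest to point p."""
    return min(range(len(centroids)),
               key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))

def recommend(user, items, top_n=2):
    """Rank items for a user by cosine similarity of feature vectors."""
    ranked = sorted(items, key=lambda it: cosine(user, it["features"]), reverse=True)
    return [it["name"] for it in ranked[:top_n]]
```

In the abstract's setting, `kmeans`/`nearest` would segment users first and `recommend` would then rank only the items relevant to that segment.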

Comparison of the Quality of Various Polychromatic and Monochromatic Dual-Energy CT Images with or without a Metal Artifact Reduction Algorithm to Evaluate Total Knee Arthroplasty

  • Hye Jung Choo;Sun Joo Lee;Dong Wook Kim;Yoo Jin Lee;Jin Wook Baek;Ji-yeon Han;Young Jin Heo
    • Korean Journal of Radiology
    • /
    • v.22 no.8
    • /
    • pp.1341-1351
    • /
    • 2021
  • Objective: To compare the quality of various polychromatic and monochromatic images with or without using an iterative metal artifact reduction algorithm (iMAR) obtained from a dual-energy computed tomography (CT) to evaluate total knee arthroplasty. Materials and Methods: We included 58 patients (28 male and 30 female; mean age [range], 71.4 [61-83] years) who underwent 74 knee examinations after total knee arthroplasty using dual-energy CT. CT image sets consisted of polychromatic image sets that linearly blended 80 kVp and tin-filtered 140 kVp using weighting factors of 0.4, 0, and -0.3, and monochromatic images at 130, 150, 170, and 190 keV. These image sets were obtained with and without applying iMAR, creating a total of 14 image sets. Two readers qualitatively ranked the image quality (1 [lowest quality] through 14 [highest quality]). Volumes of high- and low-density artifacts and contrast-to-noise ratios (CNRs) between the bone and fat tissue were quantitatively measured in a subset of 25 knees unaffected by metal artifacts. Results: iMAR-applied, polychromatic images using weighting factors of -0.3 and 0.0 (P-0.3i and P0.0i, respectively) showed the highest image-quality rank scores (median of 14 for both by one reader and 13 and 14, respectively, by the other reader; p < 0.001). All iMAR-applied image series showed higher rank scores than the iMAR-unapplied ones. The smallest volumes of low-density artifacts were found in P-0.3i, P0.0i, and iMAR-applied monochromatic images at 130 keV. The smallest volumes of high-density artifacts were noted in P-0.3i. The CNRs were best in polychromatic images using a weighting factor of 0.4 with or without iMAR application, followed by polychromatic images using a weighting factor of 0.0 with or without iMAR application. Conclusion: Polychromatic images combined with iMAR application, P-0.3i and P0.0i, provided better image qualities and substantial metal artifact reduction compared with other image sets.
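
Two quantitative notions in this abstract can be illustrated numerically: linear blending of the 80 kVp and Sn 140 kVp images with a weighting factor w (here assumed to follow the common convention w·I_low + (1−w)·I_high, under which w = −0.3 is an extrapolation), and the contrast-to-noise ratio between two tissue regions. The function names and conventions are assumptions, not the study's implementation.

```python
import statistics

def blend(low_kvp, high_kvp, w):
    """Linearly blended pixel values: w * I_80kVp + (1 - w) * I_Sn140kVp.
    Negative w (e.g. -0.3) extrapolates beyond the high-energy image."""
    return [w * lo + (1 - w) * hi for lo, hi in zip(low_kvp, high_kvp)]

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio: ROI mean difference over background noise."""
    contrast = abs(statistics.mean(roi_a) - statistics.mean(roi_b))
    return contrast / statistics.pstdev(noise_roi)
```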

Enhancing Predictive Accuracy of Collaborative Filtering Algorithms using the Network Analysis of Trust Relationship among Users (사용자 간 신뢰관계 네트워크 분석을 활용한 협업 필터링 알고리즘의 예측 정확도 개선)

  • Choi, Seulbi;Kwahk, Kee-Young;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.3
    • /
    • pp.113-127
    • /
    • 2016
  • Among recommendation techniques, collaborative filtering (CF) is commonly recognized as the most effective for implementing recommender systems, and it has been widely studied and adopted in both academic and real-world applications. The basic idea of CF is to create recommendation results by finding correlations between users of a recommender system: the system compares users by how similar they are and recommends products to a user based on the evaluations of like-minded people. Computing the similarity between users is therefore critical in CF, because the recommendation quality depends on it. Typical CF uses users' explicit numeric ratings of items (i.e., quantitative information) when computing similarities; in other words, numeric ratings have been the sole source of preference information in traditional CF. However, ratings cannot always fully reflect users' actual preferences. According to several studies, users are more likely to accept recommendations from others they consider reliable when purchasing goods, so trust relationships can be regarded as an informative source for identifying user preferences accurately. Against this background, we propose a new hybrid recommender system that fuses CF with social network analysis (SNA). The proposed system adopts a recommendation algorithm that additionally reflects the results of SNA. In detail, it is based on conventional memory-based CF, but it uses both numeric ratings and trust-relationship information between users when calculating user similarities. For this purpose, the system builds not only a user-item rating matrix but also a user-to-user trust network.
As methods for calculating similarity between users, we propose two alternatives. The first calculates the degree of similarity between users by utilizing in-degree and out-degree centrality, the indices representing a central location in the social network; we name its two variants 'Trust CF - All' and 'Trust CF - Conditional'. The second weights a neighbor's score more highly when the target user trusts the neighbor directly or indirectly, where the direct or indirect trust relationship is identified by searching the users' trust network; we call this approach 'Trust CF - Search'. To validate the applicability of the proposed system, we used experimental data provided by LibRec, crawled from the entire FilmTrust website; it consists of movie ratings and a trust network indicating which users trust whom. The experimental system was implemented using Microsoft Visual Basic for Applications (VBA) and UCINET 6. To examine the effectiveness of the proposed system, we compared its performance with that of a conventional CF system, evaluating the recommender systems by average MAE (mean absolute error). The analysis confirmed that when the in-degree centrality index of the users' trust network is applied without conditions (Trust CF - All), the accuracy (MAE = 0.565134) was lower than that of conventional CF (MAE = 0.564966). When the in-degree centrality index is applied only to users whose out-degree centrality exceeds a certain threshold (Trust CF - Conditional), the proposed system improved the accuracy slightly (MAE = 0.564909) compared with traditional CF. The algorithm that searches the users' trust network (Trust CF - Search), however, showed the best performance (MAE = 0.564846).
A paired-samples t-test showed that Trust CF - Search outperformed conventional CF at the 10% statistical significance level. Our study sheds light on the application of users' trust-network information for facilitating electronic commerce by recommending proper items to users.
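
The 'Trust CF - Search' idea, boosting a neighbor's weight in memory-based CF when the target user trusts them directly or indirectly, can be sketched as follows. The Pearson similarity, BFS search depth, and boost factor here are illustrative assumptions, not the paper's exact formulation.

```python
import math

def pearson(ru, rv):
    """Pearson correlation over co-rated items (rating dicts: item -> score)."""
    common = set(ru) & set(rv)
    if len(common) < 2:
        return 0.0
    mu = sum(ru[i] for i in common) / len(common)
    mv = sum(rv[i] for i in common) / len(common)
    num = sum((ru[i] - mu) * (rv[i] - mv) for i in common)
    du = math.sqrt(sum((ru[i] - mu) ** 2 for i in common))
    dv = math.sqrt(sum((rv[i] - mv) ** 2 for i in common))
    return num / (du * dv) if du and dv else 0.0

def trusts(trust_net, u, v, max_depth=2):
    """True if u trusts v directly or indirectly (BFS over the trust network)."""
    frontier, seen = {u}, {u}
    for _ in range(max_depth):
        frontier = {w for x in frontier for w in trust_net.get(x, ())} - seen
        if v in frontier:
            return True
        seen |= frontier
    return False

def predict(user, item, ratings, trust_net, boost=1.5):
    """Weighted-average prediction; trusted neighbors' similarities are boosted."""
    num = den = 0.0
    for v, rv in ratings.items():
        if v == user or item not in rv:
            continue
        w = pearson(ratings[user], rv)
        if trusts(trust_net, user, v):
            w *= boost
        if w > 0:
            num += w * rv[item]
            den += w
    return num / den if den else None
```

With an empty trust network this degenerates to conventional memory-based CF, which is exactly the baseline the paper compares against.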

The Study of Land Surface Change Detection Using Long-Term SPOT/VEGETATION (장기간 SPOT/VEGETATION 정규화 식생지수를 이용한 지면 변화 탐지 개선에 관한 연구)

  • Yeom, Jong-Min;Han, Kyung-Soo;Kim, In-Hwan
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.13 no.4
    • /
    • pp.111-124
    • /
    • 2010
  • Monitoring land surface change is considered an important research field, since the related parameters bear on land use, climate change, meteorological study, agricultural planning, the surface energy balance, and the surface environmental system. For change detection, many different methods have been presented to provide more detailed information, with tools ranging from ground-based measurement to satellite multi-spectral sensors. Recently, using high-resolution satellite data has been considered the most efficient way to monitor an extensive land environmental system, especially at higher spatial and temporal resolution. In this study, we use two satellites with different spatial resolutions: SPOT/VEGETATION, with 1 km spatial resolution, to detect area change at coarse resolution and to determine an objective threshold, and Landsat, with high resolution, to identify detailed land environmental change. Owing to their different spatial resolutions, the two satellites show different observation characteristics, such as repeat cycle and global coverage; by correlating them, we can detect land surface change from mid resolution to high resolution. The K-means clustering algorithm is applied to detect changed areas between two images from different dates. When using solar spectral bands, complicated surface reflectance scattering characteristics make surface change detection difficult and can lead to serious problems when interpreting surface characteristics: even when the intrinsic surface reflectance is constant, the observed value can change with the relative geometry of the sun and the sensor.
To reduce these effects, this study uses a long-term Normalized Difference Vegetation Index (NDVI) derived from atmospherically and bi-directionally corrected SPOT/VEGETATION solar spectral channels to provide an objective threshold value for detecting land surface change, since NDVI is less sensitive to solar geometry than the individual solar channels. Surface change detection based on the long-term NDVI shows better results than using Landsat alone.
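
The NDVI-based change detection described above reduces to a per-pixel index, NDVI = (NIR − Red)/(NIR + Red), plus a thresholded two-date difference. A minimal sketch (the band values and threshold are hypothetical, not the study's calibrated values):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index per pixel: (NIR - Red) / (NIR + Red)."""
    return [(n - r) / (n + r) if (n + r) else 0.0 for n, r in zip(nir, red)]

def change_mask(ndvi_t1, ndvi_t2, threshold):
    """Flag pixels whose NDVI difference between two dates exceeds a threshold."""
    return [abs(b - a) > threshold for a, b in zip(ndvi_t1, ndvi_t2)]
```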

The Character of Distribution of Solar Radiation in Mongolia based on Meteorological Satellite Data (위성자료를 이용한 몽골의 일사량 분포 특성)

  • Jee, Joon-Bum;Jeon, Sang-Hee;Choi, Young-Jean;Lee, Seung-Woo;Park, Young-San;Lee, Kyu-Tae
    • Journal of the Korean earth science society
    • /
    • v.33 no.2
    • /
    • pp.139-147
    • /
    • 2012
  • Mongolia's solar-meteorological resource map has been developed using satellite data and reanalysis data. Solar radiation was calculated using a solar radiation model whose input data were satellite data from the SRTM, TERRA, AQUA, AURA, and MTSAT-1R satellites and reanalysis data from NCEP/NCAR. The calculated results were validated against the DSWRF (Downward Short-Wave Radiation Flux) from the NCEP/NCAR reanalysis. Mongolia consists of a mountainous region in the west and desert or semi-arid regions in the middle and southern parts of the country. The south-central area lies deep inside the continent, with clear days and little rainfall, so its irradiation is higher than in other regions at the same latitude. The western mountain region receives abundant solar energy because of its high elevation, but the area is covered with snow (high albedo) throughout the year; the snow cover causes false detections in the cloud-detection algorithm applied to the satellite data, so the clearness index and solar radiation are underestimated there. The southern region has high total precipitable water and aerosol optical depth, but much solar radiation still reaches the surface because it lies at a relatively lower latitude. When the calculated solar radiation is validated against the DSWRF from the NCEP/NCAR reanalysis, the monthly mean solar radiation is 547.59 MJ, approximately 2.89 MJ higher than the DSWRF. The correlation coefficient between the calculation and the reanalysis data is 0.99, and the RMSE (Root Mean Square Error) is 6.17 MJ. The correlation was highest in October (r = 0.94) and lowest in March (r = 0.62), reflecting cloud-detection errors associated with snowmelt and yellow sand.
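
The validation statistics quoted above, a correlation coefficient and an RMSE between calculated radiation and reanalysis DSWRF, are the standard definitions; a minimal sketch with made-up numbers in place of the study's monthly series:

```python
import math

def rmse(calc, ref):
    """Root mean square error between calculated and reference series."""
    return math.sqrt(sum((c - r) ** 2 for c, r in zip(calc, ref)) / len(calc))

def pearson_r(x, y):
    """Pearson correlation coefficient between two series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    dx = math.sqrt(sum((a - mx) ** 2 for a in x))
    dy = math.sqrt(sum((b - my) ** 2 for b in y))
    return num / (dx * dy)
```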

Comparison of Three- and Four-dimensional Robotic Radiotherapy Treatment Plans for Lung Cancers (폐암환자의 종양추적 정위방사선치료를 위한 삼차원 및 사차원 방사선치료계획의 비교)

  • Chai, Gyu-Young;Lim, Young-Kyung;Kang, Ki-Mun;Jeong, Bae-Gwon;Ha, In-Bong;Park, Kyung-Bum;Jung, Jin-Myung;Kim, Dong-Wook
    • Radiation Oncology Journal
    • /
    • v.28 no.4
    • /
    • pp.238-248
    • /
    • 2010
  • Purpose: To compare the dose distributions between three-dimensional (3D) and four-dimensional (4D) radiation treatment plans calculated by Ray-tracing or the Monte Carlo algorithm, and to highlight the difference of dose calculation between two algorithms for lung heterogeneity correction in lung cancers. Materials and Methods: Prospectively gated 4D CTs in seven patients were obtained with a Brilliance CT64-Channel scanner along with a respiratory bellows gating device. After 4D treatment planning with the Ray Tracing algorithm in Multiplan 3.5.1, a CyberKnife stereotactic radiotherapy planning system, 3D Ray Tracing, 3D and 4D Monte Carlo dose calculations were performed under the same beam conditions (same number, directions, monitor units of beams). The 3D plan was performed in a primary CT image setting corresponding to middle phase expiration (50%). Relative dose coverage, D95 of gross tumor volume and planning target volume, maximum doses of tumor, and the spinal cord were compared for each plan, taking into consideration the tumor location. Results: According to the Monte Carlo calculations, mean tumor volume coverage of the 4D plans was 4.4% higher than the 3D plans when tumors were located in the lower lobes of the lung, but were 4.6% lower when tumors were located in the upper lobes of the lung. Similarly, the D95 of 4D plans was 4.8% higher than 3D plans when tumors were located in the lower lobes of lung, but was 1.7% lower when tumors were located in the upper lobes of lung. This tendency was also observed at the maximum dose of the spinal cord. Lastly, a 30% reduction in the PTV volume coverage was observed for the Monte Carlo calculation compared with the Ray-tracing calculation. Conclusion: 3D and 4D robotic radiotherapy treatment plans for lung cancers were compared according to a dosimetric viewpoint for a tumor and the spinal cord. 
The difference in tumor dose distributions between the 3D and 4D treatment plans was significant only when large tumor movement and deformation were suspected. Therefore, 4D treatment planning is necessary only for large tumor motion and deformation. However, a Monte Carlo calculation is always necessary, regardless of tumor motion in the lung.
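
D95 and relative volume coverage, the plan metrics compared above, can be computed from a list of voxel doses. This is a generic sketch of the metrics, not the CyberKnife planning system's implementation, and the percentile convention (D95 as the 5th-percentile voxel dose) is an assumption.

```python
def d95(voxel_doses):
    """Dose received by at least 95% of the volume (5th-percentile voxel dose)."""
    s = sorted(voxel_doses)
    idx = int(0.05 * (len(s) - 1))
    return s[idx]

def coverage(voxel_doses, prescription):
    """Fraction of the volume receiving at least the prescription dose."""
    return sum(d >= prescription for d in voxel_doses) / len(voxel_doses)
```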

Comparison of Intensity Modulated Radiation Therapy Dose Calculations with a PBC and AAA Algorithms in the Lung Cancer (폐암의 세기조절방사선치료에서 PBC 알고리즘과 AAA 알고리즘의 비교연구)

  • Oh, Se-An;Kang, Min-Kyu;Yea, Ji-Woon;Kim, Sung-Hoon;Kim, Ki-Hwan;Kim, Sung-Kyu
    • Progress in Medical Physics
    • /
    • v.23 no.1
    • /
    • pp.48-53
    • /
    • 2012
  • Pencil beam convolution (PBC) algorithms in radiation treatment planning systems have been widely used to calculate radiation dose. A new photon dose calculation algorithm, referred to as the anisotropic analytical algorithm (AAA), was released by Varian Medical Systems. The aim of this paper was to investigate the difference in dose calculation between the AAA and the PBC algorithm using intensity modulated radiation therapy (IMRT) plans for lung cancer cases, which are inhomogeneous at low density. We quantitatively analyzed the dose differences using the Eclipse planning system (Varian Medical Systems, Palo Alto, CA) and I'mRT MatriXX equipment (IBA, Schwarzenbruck, Germany) for gamma evaluation. Eleven patients with lung cancer at various sites were included in this study. We also used TLD-100 (LiF) dosimeters to measure the differences between calculated and measured doses in the Alderson Rando phantom. The maximum, mean, and minimum doses for normal tissue did not change significantly, but the volume of the PTV covered by the 95% isodose curve decreased by 6% in the lung owing to the difference between the algorithms. The differences between the doses calculated by the PBC and AAA algorithms and the dose measured with TLD-100 (LiF) in the Alderson Rando phantom were -4.6% and -2.7%, respectively. Based on these results, a treatment plan calculated with the AAA algorithm is more accurate in low-density lung sites than one calculated with the PBC algorithm.
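
The gamma evaluation mentioned above combines a dose-difference criterion with a distance-to-agreement criterion, and a point passes when the combined index is at most 1. A simplified 1-D version is sketched below; the 3 mm / 3%-of-global-maximum defaults are assumed, and a real 2-D MatriXX analysis is more involved.

```python
import math

def gamma_1d(measured, calculated, spacing, dta=3.0, dd=0.03):
    """1-D gamma index per measured point (pass when gamma <= 1).
    spacing: distance between points in mm; dta: distance-to-agreement in mm;
    dd: dose-difference criterion as a fraction of the global maximum dose."""
    dmax = max(calculated)
    gammas = []
    for i, dm in enumerate(measured):
        best = float("inf")
        for j, dc in enumerate(calculated):
            dist = abs(i - j) * spacing
            dose_diff = (dc - dm) / (dd * dmax)
            best = min(best, math.sqrt((dist / dta) ** 2 + dose_diff ** 2))
        gammas.append(best)
    return gammas
```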

Numerical Study on the Development of the Seismic Response Prediction Method for the Low-rise Building Structures using the Limited Information (제한된 정보를 이용한 저층 건물 구조물의 지진 응답 예측 기법 개발을 위한 해석적 연구)

  • Choi, Se-Woon
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.33 no.4
    • /
    • pp.271-277
    • /
    • 2020
  • Cases of monitoring the structural response of buildings with multiple sensors are increasing. However, owing to cost and management problems, only a limited number of sensors are installed in a structure, so few structural responses are collected, which hinders analysis of the structure's behavior. A technique is therefore needed to predict, to a reliable level, the responses at locations where no sensors are installed, using the limited sensors available. In this study, a numerical study is conducted to predict the seismic response of low-rise buildings using limited information. It is assumed that the only available response information is the acceleration responses of the first and top floors. Using both, the first natural frequency of the structure can be obtained; the acceleration of the first floor is used as the ground motion information. A method is presented that uses a genetic algorithm to predict the mass and stiffness of the structure by minimizing the error in the acceleration history response of the top floor and the error in the first natural frequency of the target structure; constraints, however, are not considered. To determine the range of the design variables, which defines the search space, a parameter prediction method based on artificial neural networks is proposed. A five-story structure is used as an example to verify the proposed method.
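
The genetic-algorithm identification step can be illustrated with a toy single-degree-of-freedom version: evolve (mass, stiffness) inside given bounds so that the model's first natural frequency matches a target measured one. Everything here (the SDOF model, the GA operators, population sizes, and bounds) is a simplified assumption, not the paper's multi-story formulation.

```python
import math
import random

def natural_freq(mass, stiffness):
    """First natural frequency (Hz) of a single-degree-of-freedom idealization."""
    return math.sqrt(stiffness / mass) / (2 * math.pi)

def ga_identify(f_target, bounds, pop=30, gens=60, seed=0):
    """Toy GA: evolve (mass, stiffness) within bounds to match a target frequency."""
    rng = random.Random(seed)
    (mlo, mhi), (klo, khi) = bounds

    def rand_ind():
        return [rng.uniform(mlo, mhi), rng.uniform(klo, khi)]

    def fitness(ind):
        return -abs(natural_freq(ind[0], ind[1]) - f_target)

    population = [rand_ind() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop // 2]          # elitist selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            alpha = rng.random()                 # arithmetic crossover
            child = [alpha * x + (1 - alpha) * y for x, y in zip(a, b)]
            if rng.random() < 0.2:               # mutation, clamped to bounds
                child[0] = min(mhi, max(mlo, child[0] * rng.uniform(0.9, 1.1)))
                child[1] = min(khi, max(klo, child[1] * rng.uniform(0.9, 1.1)))
            children.append(child)
        population = parents + children
    return max(population, key=fitness)
```

The bounds passed to `ga_identify` play the role of the ANN-predicted search space in the abstract: a tighter range of design variables makes the GA converge faster.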

A Study on the Bio-response to the Underthreshold Stimulation (임계치 이하의 자극에 대한 생체의 반응 연구)

  • Che, Gyu-Shik
    • Journal of Advanced Navigation Technology
    • /
    • v.14 no.3
    • /
    • pp.439-445
    • /
    • 2010
  • Signal transmission in the human body proceeds by the action potential of each cell unit. This action potential is generated and transmitted by ions moving through the cell membrane and is ultimately explained as an electrical signal. Several studies have addressed the fact that information, as well as the various senses of the bio-organism, is established as an electrical state. Until now, however, this nervous transmission relation has been described and analyzed only qualitatively. In this paper, I establish a new algorithm to analyze these relations quantitatively and implement it using existing bio-data. The study, however, is limited to subthreshold potentials, below the level needed to excite the nervous system in response to external stimulation. This behavior is closely analogous to the electrical transient of a switching circuit, and I therefore analyzed it on the basis of this analogy. The results derived here form the basis for further study.
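
The switching-circuit analogy for subthreshold responses is the standard RC membrane transient, V(t) = I·R·(1 − e^(−t/RC)): a current step charges the membrane capacitance toward I·R without ever triggering a spike. A minimal sketch with illustrative parameter values (not the paper's bio-data):

```python
import math

def subthreshold_response(i_stim, r_m, c_m, t):
    """Membrane potential change (relative to rest) for a subthreshold
    current step, using the RC-circuit analogy:
    dV(t) = I * R * (1 - exp(-t / (R * C)))."""
    tau = r_m * c_m                      # membrane time constant
    return i_stim * r_m * (1 - math.exp(-t / tau))
```

At t = RC the response reaches about 63% of its asymptote I·R, which is the transient behavior the abstract's analogy refers to.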

Energy Saving Characteristics on Burst Packet Configuration Method using Adaptive Inverse-function Buffering Interval in IP Core Networks (IP 네트워크에서 적응적 역함수 버퍼링 구간을 적용한 버스트패킷 구성 방식에서 에너지 절약 특성)

  • Han, Chimoon
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.8
    • /
    • pp.19-27
    • /
    • 2016
  • Recently, adaptive buffering techniques for burst stream packet configuration, and operation algorithms that use them to save energy in IP core networks, have been studied. This paper explains how to select the packet buffering interval for energy saving when configuring burst stream packets at the ingress router of an IP core network. In particular, an adaptive buffering interval and an implementation scheme for it are required to improve energy-saving efficiency at the input of the ingress router. In this paper, we propose an adaptive buffering scheme in which the current buffering interval is chosen adaptively from the input traffic of the previous buffering interval, and we analyze its energy-saving and end-to-end delay characteristics by computer simulation. We show improved energy saving and reduced mean delay variation when an appropriate inverse function is used to select the buffering interval for configuring burst stream packets, and we confirm that this method has superior properties compared with other methods. The proposed method is also less sensitive to the various input traffic types at the ingress router and is practical.
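
The inverse-function idea, under which the next buffering interval shrinks as the observed input rate grows, can be sketched as below. The target burst size and the clamping bounds are assumptions for illustration, not the paper's parameters.

```python
def next_interval(prev_packets, prev_interval, burst_size=100,
                  t_min=0.001, t_max=0.5):
    """Choose the next buffering interval as an inverse function of the
    arrival rate observed over the previous interval: a high input rate
    gives a short interval, a low rate a long one (clamped to bounds)."""
    rate = prev_packets / prev_interval      # packets per second
    if rate <= 0:
        return t_max                         # idle link: buffer as long as allowed
    return min(t_max, max(t_min, burst_size / rate))
```

The clamp keeps the trade-off the abstract analyzes in view: `t_max` bounds the added end-to-end delay, while `t_min` keeps bursts large enough to be worth the energy savings.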