• Title/Abstract/Keyword: Ground truth

Search results: 297

A modified U-net for crack segmentation by Self-Attention-Self-Adaption neuron and random elastic deformation

  • Zhao, Jin;Hu, Fangqiao;Qiao, Weidong;Zhai, Weida;Xu, Yang;Bao, Yuequan;Li, Hui
    • Smart Structures and Systems
    • /
    • Vol. 29, No. 1
    • /
    • pp.1-16
    • /
    • 2022
  • Despite recent breakthroughs in deep learning and computer vision, the pixel-wise identification of tiny objects in high-resolution images with complex disturbances remains challenging. This study proposes a modified U-net for tiny crack segmentation in real-world steel-box-girder bridges. The modified U-net adopts the common U-net framework and a novel Self-Attention-Self-Adaption (SASA) neuron as the fundamental computing element. The Self-Attention module applies softmax and gate operations to obtain the attention vector, enabling the neuron to focus on the most significant receptive fields when processing large-scale feature maps. The Self-Adaption module consists of a multilayer perceptron subnet and achieves deeper feature extraction inside a single neuron. For data augmentation, a grid-based crack random elastic deformation (CRED) algorithm is designed to enrich the diversity and irregular shapes of distributed cracks. Grid-based uniform control nodes are first set on both input images and binary labels, random offsets are then applied to these control nodes, and bilinear interpolation is performed for the remaining pixels. The proposed SASA neuron and CRED algorithm are deployed together to train the modified U-net. A total of 200 raw images at a high resolution of 4928 × 3264 are collected, 160 for training and the remaining 40 for testing. As inputs, 512 × 512 patches are generated from the original images by a sliding window with an overlap of 256. Results show that the average IoU between the recognized and ground-truth cracks reaches 0.409, which is 29.8% higher than that of the regular U-net. A five-fold cross-validation study verifies that the proposed method is robust to different training and test images. Ablation experiments further demonstrate the effectiveness of the proposed SASA neuron and CRED algorithm. The IoU improvements obtained by using the SASA and CRED modules individually add up to the improvement of the full model, indicating that the SASA and CRED modules contribute to different stages of the model and data in the training process.
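The grid-based deformation described above (uniform control nodes, random offsets, bilinear interpolation of the displacement field, identical warping of image and label) can be sketched in NumPy. This is a minimal illustration of the CRED idea, not the authors' code; the grid size and offset magnitude are assumed placeholder values:

```python
import numpy as np

def crack_random_elastic_deformation(image, label, grid=8, max_offset=10.0, rng=None):
    """Grid-based random elastic deformation (a sketch of the CRED idea).

    Uniform control nodes receive random offsets; the dense displacement
    field is bilinearly interpolated from them; image and binary label are
    warped with the same field so they stay aligned.
    """
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    # Random (dy, dx) offsets at the (grid+1) x (grid+1) control nodes
    offsets = rng.uniform(-max_offset, max_offset, size=(2, grid + 1, grid + 1))
    # Map every pixel to continuous grid coordinates
    ys = np.linspace(0, grid, h)
    xs = np.linspace(0, grid, w)
    y0 = np.clip(ys.astype(int), 0, grid - 1); fy = ys - y0
    x0 = np.clip(xs.astype(int), 0, grid - 1); fx = xs - x0

    def upsample(c):
        # Bilinear interpolation of control-node values to a dense field
        top = c[y0][:, x0] * (1 - fx) + c[y0][:, x0 + 1] * fx
        bot = c[y0 + 1][:, x0] * (1 - fx) + c[y0 + 1][:, x0 + 1] * fx
        return top * (1 - fy)[:, None] + bot * fy[:, None]

    dy, dx = upsample(offsets[0]), upsample(offsets[1])
    # Warp by nearest-neighbour lookup at the displaced coordinates
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.rint(yy + dy).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xx + dx).astype(int), 0, w - 1)
    return image[src_y, src_x], label[src_y, src_x]
```

Because the same displacement field is applied to both arrays, the binary label stays pixel-aligned with the deformed image, which is what makes the augmentation usable for segmentation training.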

Development of a Compound Classification Process for Improving the Correctness of Land Information Analysis in Satellite Imagery - Using Principal Component Analysis, Canonical Correlation Classification Algorithm and Multitemporal Imagery -

  • 박민호
    • Journal of the Korean Society of Civil Engineers
    • /
    • Vol. 28, No. 4D
    • /
    • pp.569-577
    • /
    • 2008
  • The purpose of this study is to develop a compound classification process that combines multitemporal data, a specific image-enhancement technique, and an image-classification algorithm in order to extract more accurate land information from satellite imagery. Specifically, we propose a classification procedure that applies principal component analysis to the merged multitemporal data and then performs canonical correlation classification. The result of this procedure is compared with the canonical correlation classification of each single-date image, of the merged multitemporal imagery, and of each date after separate principal component analysis. The satellite images used are Landsat 5 TM scenes acquired on July 26, 1994 and September 1, 1996. Ground truth data for accuracy assessment were obtained from topographic maps and aerial photographs, and the entire study area was used for the accuracy evaluation. The proposed compound classification process improved classification accuracy by about 8.2% over canonical correlation classification of a single image. In particular, it was effective in accurately classifying urban areas, where complex land characteristics are mixed. In conclusion, when extracting land-cover information from Landsat TM imagery, applying principal component analysis to multitemporal imagery before canonical correlation classification proved highly effective for raising classification accuracy.
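The first stage of the proposed process, principal component analysis of the stacked multitemporal bands, can be sketched as follows. This is a generic eigendecomposition-based PCA, not the authors' implementation:

```python
import numpy as np

def pca_transform(stack, n_components):
    """PCA scores for a stacked multitemporal band set.

    stack: (n_pixels, n_bands) array of pixel spectra from both dates,
    flattened so each row is one pixel. Returns the leading
    principal-component scores, which would then be passed on to the
    canonical correlation classifier.
    """
    X = stack - stack.mean(axis=0)           # center each band
    cov = np.cov(X, rowvar=False)            # band-by-band covariance
    eigval, eigvec = np.linalg.eigh(cov)     # eigenvalues in ascending order
    order = np.argsort(eigval)[::-1]         # sort by descending variance
    return X @ eigvec[:, order[:n_components]]
```

Stacking the two dates before PCA is what lets the leading components capture both spectral and temporal variation in one reduced feature space.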

Development and Validation of a Deep Learning System for Segmentation of Abdominal Muscle and Fat on Computed Tomography

  • Hyo Jung Park;Yongbin Shin;Jisuk Park;Hyosang Kim;In Seob Lee;Dong-Woo Seo;Jimi Huh;Tae Young Lee;TaeYong Park;Jeongjin Lee;Kyung Won Kim
    • Korean Journal of Radiology
    • /
    • Vol. 21, No. 1
    • /
    • pp.88-100
    • /
    • 2020
  • Objective: We aimed to develop and validate a deep learning system for fully automated segmentation of abdominal muscle and fat areas on computed tomography (CT) images. Materials and Methods: A fully convolutional network-based segmentation system was developed using a training dataset of 883 CT scans from 467 subjects. Axial CT images obtained at the inferior endplate level of the 3rd lumbar vertebra were used for the analysis. Manually drawn segmentation maps of the skeletal muscle, visceral fat, and subcutaneous fat were created to serve as ground truth data. The performance of the fully convolutional network-based segmentation system was evaluated using the Dice similarity coefficient and cross-sectional area error, for both a separate internal validation dataset (426 CT scans from 308 subjects) and an external validation dataset (171 CT scans from 171 subjects from two outside hospitals). Results: The mean Dice similarity coefficients for muscle, subcutaneous fat, and visceral fat were high for both the internal (0.96, 0.97, and 0.97, respectively) and external (0.97, 0.97, and 0.97, respectively) validation datasets, while the mean cross-sectional area errors for muscle, subcutaneous fat, and visceral fat were low for both internal (2.1%, 3.8%, and 1.8%, respectively) and external (2.7%, 4.6%, and 2.3%, respectively) validation datasets. Conclusion: The fully convolutional network-based segmentation system exhibited high performance and accuracy in the automatic segmentation of abdominal muscle and fat on CT images.
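The two evaluation metrics used above, the Dice similarity coefficient and the cross-sectional area error, can each be computed from binary masks in a few lines. A generic sketch (the pixel-area argument is an assumed placeholder, not a value from the paper):

```python
import numpy as np

def dice(pred, gt):
    # Dice similarity coefficient between two binary masks:
    # 2|A ∩ B| / (|A| + |B|)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def csa_error(pred, gt, pixel_area_mm2=1.0):
    # Cross-sectional area error as a percentage of the ground-truth area
    a_pred = pred.sum() * pixel_area_mm2
    a_gt = gt.sum() * pixel_area_mm2
    return abs(a_pred - a_gt) / a_gt * 100.0
```

Dice rewards spatial overlap while the area error only compares total areas, which is why segmentation studies such as this one report both.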

Deep Learning-Based Assessment of Functional Liver Capacity Using Gadoxetic Acid-Enhanced Hepatobiliary Phase MRI

  • Hyo Jung Park;Jee Seok Yoon;Seung Soo Lee;Heung-Il Suk;Bumwoo Park;Yu Sub Sung;Seung Baek Hong;Hwaseong Ryu
    • Korean Journal of Radiology
    • /
    • Vol. 23, No. 7
    • /
    • pp.720-731
    • /
    • 2022
  • Objective: We aimed to develop and test a deep learning algorithm (DLA) for fully automated measurement of the volume and signal intensity (SI) of the liver and spleen using gadoxetic acid-enhanced hepatobiliary phase (HBP)-magnetic resonance imaging (MRI) and to evaluate the clinical utility of DLA-assisted assessment of functional liver capacity. Materials and Methods: The DLA was developed using HBP-MRI data from 1014 patients. Using an independent test dataset (110 internal and 90 external MRI data), the segmentation performance of the DLA was measured using the Dice similarity score (DSS), and the agreement between the DLA and the ground truth for the volume and SI measurements was assessed with a Bland-Altman 95% limit of agreement (LOA). In 276 separate patients (male:female, 191:85; mean age ± standard deviation, 40 ± 15 years) who underwent hepatic resection, we evaluated the correlations between various DLA-based MRI indices, including liver volume normalized by body surface area (LVBSA), liver-to-spleen SI ratio (LSSR), MRI parameter-adjusted LSSR (aLSSR), LSSR × LVBSA, and aLSSR × LVBSA, and the indocyanine green retention rate at 15 minutes (ICG-R15), and determined the diagnostic performance of the DLA-based MRI indices to detect ICG-R15 ≥ 20%. Results: In the test dataset, the mean DSS was 0.977 for liver segmentation and 0.946 for spleen segmentation. The Bland-Altman 95% LOAs were 0.08% ± 3.70% for the liver volume, 0.20% ± 7.89% for the spleen volume, -0.02% ± 1.28% for the liver SI, and -0.01% ± 1.70% for the spleen SI. Among DLA-based MRI indices, aLSSR × LVBSA showed the strongest correlation with ICG-R15 (r = -0.54, p < 0.001), with area under receiver operating characteristic curve of 0.932 (95% confidence interval, 0.895-0.959) to diagnose ICG-R15 ≥ 20%. Conclusion: Our DLA can accurately measure the volume and SI of the liver and spleen and may be useful for assessing functional liver capacity using gadoxetic acid-enhanced HBP-MRI.
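The Bland-Altman 95% limits of agreement used above for the volume and signal-intensity measurements follow a standard formula (mean percentage difference ± 1.96 × its standard deviation). A minimal sketch with synthetic inputs:

```python
import numpy as np

def bland_altman_loa(measured, reference):
    """Bias and 95% limit of agreement for percentage differences.

    Differences are expressed relative to the reference measurement,
    matching the percentage-style LOAs reported in the abstract.
    """
    diff = 100.0 * (measured - reference) / reference
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)   # 95% limits: bias ± 1.96·SD
    return bias, loa
```

A small bias with narrow limits, as reported for the liver and spleen measurements, indicates the automated and ground-truth values are interchangeable in practice.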

Feasibility of Deep Learning-Based Analysis of Auscultation for Screening Significant Stenosis of Native Arteriovenous Fistula for Hemodialysis Requiring Angioplasty

  • Jae Hyon Park;Insun Park;Kichang Han;Jongjin Yoon;Yongsik Sim;Soo Jin Kim;Jong Yun Won;Shina Lee;Joon Ho Kwon;Sungmo Moon;Gyoung Min Kim;Man-deuk Kim
    • Korean Journal of Radiology
    • /
    • Vol. 23, No. 10
    • /
    • pp.949-958
    • /
    • 2022
  • Objective: To investigate the feasibility of using a deep learning-based analysis of auscultation data to predict significant stenosis of arteriovenous fistulas (AVF) in patients undergoing hemodialysis requiring percutaneous transluminal angioplasty (PTA). Materials and Methods: Forty patients (24 male and 16 female; median age, 62.5 years) with dysfunctional native AVF were prospectively recruited. Digital sounds from the AVF shunt were recorded using a wireless electronic stethoscope before (pre-PTA) and after PTA (post-PTA), and the audio files were subsequently converted to mel spectrograms, which were used to construct various deep convolutional neural network (DCNN) models (DenseNet201, EfficientNetB5, and ResNet50). The performance of these models for diagnosing ≥ 50% AVF stenosis was assessed and compared. The ground truth for the presence of ≥ 50% AVF stenosis was obtained using digital subtraction angiography. Gradient-weighted class activation mapping (Grad-CAM) was used to produce visual explanations for DCNN model decisions. Results: Eighty audio files were obtained from the 40 recruited patients and pooled for the study. Mel spectrograms of "pre-PTA" shunt sounds showed patterns corresponding to abnormal high-pitched bruits with systolic accentuation observed in patients with stenotic AVF. The ResNet50 and EfficientNetB5 models yielded an area under the receiver operating characteristic curve of 0.99 and 0.98, respectively, at optimized epochs for predicting ≥ 50% AVF stenosis. However, Grad-CAM heatmaps revealed that only ResNet50 highlighted areas relevant to AVF stenosis in the mel spectrogram. Conclusion: Mel spectrogram-based DCNN models, particularly ResNet50, successfully predicted the presence of significant AVF stenosis requiring PTA in this feasibility study and may potentially be used in AVF surveillance.
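Converting shunt-sound recordings into mel spectrograms, as done before training the DCNNs, follows a standard recipe: frame the signal, window it, take the power spectrum, project onto triangular mel filters, and log-compress. A self-contained NumPy sketch; the FFT size, hop length, and mel-band count are illustrative assumptions, not the study's settings:

```python
import numpy as np

def hz_to_mel(f):
    # HTK-style mel scale
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    # Triangular filters spaced evenly on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def mel_spectrogram(y, sr, n_fft=1024, hop=256, n_mels=64):
    # Frame, Hann-window, |FFT|^2, mel projection, log compression (dB)
    window = np.hanning(n_fft)
    n_frames = 1 + (len(y) - n_fft) // hop
    frames = np.stack([y[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    mel = power @ mel_filterbank(sr, n_fft, n_mels).T
    return 10.0 * np.log10(np.maximum(mel, 1e-10))
```

The resulting time-frequency image is what lets image-classification CNNs such as ResNet50 be applied to audio, and why bruit patterns like systolic accentuation become visible structure in the input.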

Fully Automatic Segmentation of Acute Ischemic Lesions on Diffusion-Weighted Imaging Using Convolutional Neural Networks: Comparison with Conventional Algorithms

  • Ilsang Woo;Areum Lee;Seung Chai Jung;Hyunna Lee;Namkug Kim;Se Jin Cho;Donghyun Kim;Jungbin Lee;Leonard Sunwoo;Dong-Wha Kang
    • Korean Journal of Radiology
    • /
    • Vol. 20, No. 8
    • /
    • pp.1275-1284
    • /
    • 2019
  • Objective: To develop algorithms using convolutional neural networks (CNNs) for automatic segmentation of acute ischemic lesions on diffusion-weighted imaging (DWI) and compare them with conventional algorithms, including a thresholding-based segmentation. Materials and Methods: Between September 2005 and August 2015, 429 patients presenting with acute cerebral ischemia (training:validation:test set = 246:89:94) were retrospectively enrolled in this study, which was performed under Institutional Review Board approval. Ground truth segmentations for acute ischemic lesions on DWI were manually drawn under the consensus of two expert radiologists. CNN algorithms were developed using a two-dimensional U-Net with squeeze-and-excitation blocks (U-Net) and a DenseNet with squeeze-and-excitation blocks (DenseNet) for automatic segmentation of acute ischemic lesions on DWI. The CNN algorithms were compared with conventional algorithms based on DWI and the apparent diffusion coefficient (ADC) signal intensity. The performances of the algorithms were assessed using the Dice index with 5-fold cross-validation. The Dice indices were analyzed according to infarct volumes (< 10 mL, ≥ 10 mL), number of infarcts (≤ 5, 6-10, ≥ 11), b-value of 1000 (b1000) signal intensities (< 50, 50-100, > 100), time intervals to DWI, and DWI protocols. Results: The CNN algorithms were significantly superior to conventional algorithms (p < 0.001). Dice indices for the CNN algorithms were 0.85 for U-Net and DenseNet and 0.86 for an ensemble of U-Net and DenseNet, while the indices were 0.58 for ADC-b1000 and b1000-ADC and 0.52 for the commercial ADC algorithm. The Dice indices for small and large lesions, respectively, were 0.81 and 0.88 with U-Net, 0.80 and 0.88 with DenseNet, and 0.82 and 0.89 with the ensemble of U-Net and DenseNet. The CNN algorithms showed significant differences in Dice indices according to infarct volumes (p < 0.001). 
Conclusion: The CNN algorithm for automatic segmentation of acute ischemic lesions on DWI achieved Dice indices greater than or equal to 0.85 and showed superior performance to conventional algorithms.
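A thresholding-based baseline of the kind the CNNs were compared against can be written in a few lines: a voxel is called lesion if it is hyperintense on b1000 and shows restricted diffusion (low ADC). The threshold values below are illustrative placeholders, not the study's parameters:

```python
import numpy as np

def conventional_segmentation(b1000, adc, b1000_thresh=100.0, adc_thresh=600.0):
    """Thresholding baseline for acute ischemic lesion segmentation.

    Lesion voxels are taken as those both hyperintense on the b1000
    image and below an ADC cutoff. Thresholds are hypothetical values
    for illustration only.
    """
    return (b1000 > b1000_thresh) & (adc < adc_thresh)
```

The gap between such fixed global thresholds and learned, context-aware features is what the reported Dice difference (0.52-0.58 vs. 0.85-0.86) quantifies.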

Deep Learning Algorithm for Automated Segmentation and Volume Measurement of the Liver and Spleen Using Portal Venous Phase Computed Tomography Images

  • Yura Ahn;Jee Seok Yoon;Seung Soo Lee;Heung-Il Suk;Jung Hee Son;Yu Sub Sung;Yedaun Lee;Bo-Kyeong Kang;Ho Sung Kim
    • Korean Journal of Radiology
    • /
    • Vol. 21, No. 8
    • /
    • pp.987-997
    • /
    • 2020
  • Objective: Measurement of the liver and spleen volumes has clinical implications. Although computed tomography (CT) volumetry is considered to be the most reliable noninvasive method for liver and spleen volume measurement, it has limited application in clinical practice due to its time-consuming segmentation process. We aimed to develop and validate a deep learning algorithm (DLA) for fully automated liver and spleen segmentation using portal venous phase CT images in various liver conditions. Materials and Methods: A DLA for liver and spleen segmentation was trained using a development dataset of portal venous CT images from 813 patients. Performance of the DLA was evaluated in two separate test datasets: dataset-1, which included 150 CT examinations in patients with various liver conditions (i.e., healthy liver, fatty liver, chronic liver disease, cirrhosis, and post-hepatectomy), and dataset-2, which included 50 pairs of CT examinations performed at our institution and at other institutions. The performance of the DLA was evaluated using the Dice similarity score (DSS) for segmentation and Bland-Altman 95% limits of agreement (LOA) for measurement of the volumetric indices, which was compared with that of ground truth manual segmentation. Results: In test dataset-1, the DLA achieved a mean DSS of 0.973 and 0.974 for liver and spleen segmentation, respectively, with no significant difference in DSS across different liver conditions (p = 0.60 and 0.26 for the liver and spleen, respectively). For the measurement of volumetric indices, the Bland-Altman 95% LOA was -0.17 ± 3.07% for liver volume and -0.56 ± 3.78% for spleen volume. In test dataset-2, DLA performance using CT images obtained at outside institutions and our institution was comparable for liver (DSS, 0.982 vs. 0.983; p = 0.28) and spleen (DSS, 0.969 vs. 0.968; p = 0.41) segmentation. 
Conclusion: The DLA enabled highly accurate segmentation and volume measurement of the liver and spleen using portal venous phase CT images of patients with various liver conditions.
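Once a segmentation mask is available, CT volumetry of the kind validated above reduces to counting mask voxels and multiplying by the voxel volume from the scan header. A generic sketch:

```python
import numpy as np

def organ_volume_ml(mask, spacing_mm):
    """Volume of a binary segmentation mask in millilitres.

    mask: 3D boolean array (one organ's segmentation).
    spacing_mm: (z, y, x) voxel spacing in millimetres, as read from
    the CT header.
    """
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0  # 1 mL = 1000 mm^3
```

This is why segmentation accuracy (DSS) translates almost directly into volumetric accuracy (the Bland-Altman LOAs): the volume is a simple function of the mask.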

Daesoonjinrihoe from both Superficial Religious Perspectives and Deep Religious Perspectives: Focused on Religious Experience

  • 이은희
    • Daesoon Sasang Nonchong (Journal of Daesoon Thought)
    • /
    • Vol. 27
    • /
    • pp.245-282
    • /
    • 2016
  • Around the world, a wave of spirituality seeking to recover the divinity within oneself is rising, yet religious conflict continues: terrorist incidents and interreligious disputes keep occurring, growing in scale and spreading globally, and harmony among religions seems ever more remote. What is the root cause of interreligious conflict? Is communication between religious communities really so difficult? Although cultures differ and doctrinal and ritual expressions differ, when one looks into the deep core of any religion, there generally appears to be a common thread running across religions. When commonalities are found and differences acknowledged, a posture of mutual learning makes communication easier. What, then, do religions have in common? Many scholars point to the state of "oneness" spoken of in each religion's mysticism. This state of oneness is not attained overnight; it is the ultimate destination of a continuous effort to mature one's faith. A religion of awakening that values the process of reaching this destination can be called deep religion. Whereas superficial religion emphasizes fortune-seeking, unconditional belief, deep religion emphasizes awakening to the divinity, the true self, the greater self within. "Superficial religion" and "deep religion" are terms of convenience coined by the comparative religion scholar Oh Kang-nam; the distinction is relative and by no means clear-cut. Nevertheless, these concepts have the advantage of making discussion of religion clearer and easier, in that they can encompass both religious life and the development of religiosity, and this study uses the superficial/deep classification only in this limited sense. Borrowing these terms, I reconsider them with reference to the classifications of various scholars and connect this perspective to religious experience. How are the development of religiosity, the maturation of faith, a deep realization of truth, and an open, empathetic attitude possible? Most scholars point to "religious experience." Through religious experience, one can move from a fortune-seeking, self-centered, superficial belief to a more mature faith and, through continuing awakening and its practice, to an ever deeper faith. This study examines how the superficial and deep aspects of religion have appeared in the history of religions and what criticisms have been raised against religions over time. From this superficial/deep perspective, it analyzes several written accounts of the religious experiences of Daesoonjinrihoe practitioners to explore how the development of religiosity from the superficial to the deep occurs through religious experience, and what its characteristics are.

Usefulness of Canonical Correlation Classification Technique in Hyperspectral Image Classification

  • 박민호
    • Journal of the Korean Society of Civil Engineers
    • /
    • Vol. 26, No. 5D
    • /
    • pp.885-894
    • /
    • 2006
  • This paper focuses on developing a classification technique that remains efficient while using the many bands of hyperspectral imagery. We propose a classification method for hyperspectral imagery based on canonical correlation analysis, a multivariate statistical technique for which classification accuracy is theoretically expected to increase as the number of bands grows, and compare it with maximum likelihood classification, a representative conventional technique. The hyperspectral image used is an EO-1 Hyperion scene acquired on September 2, 2001. Thirty bands were selected for the experiment, matching the wavelength ranges of the Landsat TM data excluding the thermal band. A reference base map was adopted as ground truth; accuracy was evaluated by visually comparing classification outlines with this base map and by overlay analysis. Maximum likelihood classification failed to function as a classifier except for water, and even for water it detected only large lakes, missing small lakes, golf-course ponds, and small partially wet areas. In contrast, canonical correlation classification delineated golf-course turf almost exactly in visual comparison with the base map, classified linear features such as highways in urban areas quite well, and also detected golf-course ponds, campus ponds, other ponds, and even puddles. Consequently, owing to the nature of the canonical correlation algorithm, accurate classification was possible without trial and error in selecting training areas, and its ability to distinguish turf from other vegetation and to extract water bodies was superior to that of maximum likelihood classification. These results suggest that canonical correlation classification applied to hyperspectral imagery will be very useful for crop-condition prediction and surface-water exploration, and can further play an important role in building GIS databases from spectrally high-resolution hyperspectral data.
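Canonical correlation classification can be sketched as follows: canonical directions are computed between pixel spectra and one-hot class indicators, and each pixel is then assigned to the nearest class centroid in the canonical space. This is a minimal textbook-style sketch under those assumptions, not the author's implementation:

```python
import numpy as np

def cca_directions(X, Y, reg=1e-6):
    """Canonical directions for features X (n x p) vs one-hot classes Y (n x c)."""
    Xc = X - X.mean(0)
    Yc = Y - Y.mean(0)
    n = len(X)
    Cxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])  # regularize for stability
    Cyy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n
    Lx = np.linalg.cholesky(Cxx)
    Ly = np.linalg.cholesky(Cyy)
    # SVD of the whitened cross-covariance yields the canonical pairs
    K = np.linalg.solve(Lx, Cxy) @ np.linalg.inv(Ly).T
    U, _, _ = np.linalg.svd(K)
    A = np.linalg.solve(Lx.T, U)                    # back to spectral space
    return A[:, :min(X.shape[1], Y.shape[1])]

def cca_classify(X_train, y_train, X_test, n_classes):
    # Project onto canonical directions; assign nearest class centroid
    Y = np.eye(n_classes)[y_train]
    A = cca_directions(X_train, Y)
    mu = X_train.mean(0)
    Z_tr = (X_train - mu) @ A
    Z_te = (X_test - mu) @ A
    centroids = np.stack([Z_tr[y_train == k].mean(0) for k in range(n_classes)])
    d2 = ((Z_te[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d2.argmin(1)
```

Because the canonical directions are chosen to maximize correlation with class membership rather than to model per-class Gaussian densities, the approach degrades more gracefully with many bands and imperfect training areas than maximum likelihood classification.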

A Folksonomy Ranking Framework: A Semantic Graph-based Approach

  • 박현정;노상규
    • Asia Pacific Journal of Information Systems
    • /
    • Vol. 21, No. 2
    • /
    • pp.89-116
    • /
    • 2011
  • In collaborative tagging systems such as Delicious.com and Flickr.com, users assign keywords or tags to their uploaded resources, such as bookmarks and pictures, for future use or sharing. The collection of resources and tags generated by a user is called a personomy, and the collection of all personomies constitutes the folksonomy. The most significant need of folksonomy users is to efficiently find useful resources or experts on specific topics. An excellent ranking algorithm would assign higher rankings to more useful resources or experts. What resources are considered useful in a folksonomic system? Does a standard superior to frequency or freshness exist? A resource recommended by more users with more expertise should be worthy of attention. This ranking paradigm can be implemented through a graph-based ranking algorithm. Two well-known representatives of such a paradigm are PageRank by Google and HITS (Hypertext Induced Topic Selection) by Kleinberg. Both PageRank and HITS assign a higher evaluation score to pages linked to by more higher-scored pages. HITS differs from PageRank in that it utilizes two kinds of scores: authority and hub scores. The ranking objects of these algorithms are limited to Web pages, whereas the ranking objects of a folksonomic system are somewhat heterogeneous (i.e., users, resources, and tags). Therefore, uniformly applying the voting notion of PageRank and HITS to the links of a folksonomy would be unreasonable. In a folksonomic system, each link corresponding to a property can have an opposite direction, depending on whether the property is in an active or a passive voice. The current research stems from the idea that a graph-based ranking algorithm could be applied to the folksonomic system using the concept of mutual interactions between entities, rather than the voting notion of PageRank or HITS. 
The concept of mutual interactions, proposed for ranking Semantic Web resources, enables the calculation of importance scores of various resources unaffected by link directions. The weights of a property representing the mutual interaction between classes are assigned depending on the relative significance of the property to the resource importance of each class. This class-oriented approach is based on the fact that, in the Semantic Web, there are many heterogeneous classes; thus, applying a different appraisal standard to each class is more reasonable. This is similar to human evaluation, where different items are assigned specific weights, which are then summed to determine the weighted average. We can check for missing properties more easily with this approach than with other predicate-oriented approaches. A user of a tagging system usually assigns more than one tag to the same resource, and there can be more than one tag with the same subjectivity and objectivity. When many users assign similar tags to the same resource, grading the users differently depending on the assignment order becomes necessary. This idea comes from studies in psychology wherein expertise involves the ability to select the most relevant information for achieving a goal. An expert should be someone who not only has a large collection of documents annotated with a particular tag, but also tends to add documents of high quality to his or her collection. Such documents are identified by the number, as well as the expertise, of users who have the same documents in their collections. In other words, there is a relationship of mutual reinforcement between the expertise of a user and the quality of a document. In addition, there is a need to rank entities related more closely to a certain entity. Considering the property of social media that the popularity of a topic is temporary, recent data should have more weight than old data. 
We propose a comprehensive folksonomy ranking framework in which all these considerations are dealt with and that can be easily customized to each folksonomy site for ranking purposes. To examine the validity of our ranking algorithm and show the mechanism of adjusting property, time, and expertise weights, we first use a dataset designed for analyzing the effect of each ranking factor independently. We then show the ranking results of a real folksonomy site, with the ranking factors combined. Because the ground truth of a given dataset is not known when it comes to ranking, we inject simulated data whose ranking results can be predicted into the real dataset and compare the ranking results of our algorithm with those of a previous HITS-based algorithm. Our semantic ranking algorithm based on the concept of mutual interaction seems preferable to the HITS-based algorithm as a flexible folksonomy ranking framework. Some concrete points of difference are as follows. First, with the time concept applied to the property weights, our algorithm shows superior performance in lowering the scores of older data and raising the scores of newer data. Second, applying the time concept to the expertise weights as well as to the property weights, our algorithm controls the conflicting influence of expertise weights and enhances the overall consistency of time-valued ranking. The expertise weights of the previous study can act as an obstacle to time-valued ranking because the number of followers increases as time goes on. Third, many new properties and classes can be included in our framework. The previous HITS-based algorithm, based on the voting notion, loses ground when the domain consists of more than two classes, or when other important properties, such as "sent through Twitter" or "registered as a friend," are added to the domain. Fourth, there is a big difference in calculation time and memory use between the two kinds of algorithms. 
While matrix multiplication of two matrices has to be executed twice for the previous HITS-based algorithm, this is unnecessary with our algorithm. In our ranking framework, various folksonomy ranking policies can be expressed with the ranking factors combined, and our approach works even if the folksonomy site is not implemented with Semantic Web languages. Above all, the time weight proposed in this paper will be applicable to various domains, including social media, where time value is considered important.
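For reference, the HITS baseline discussed above reduces to a short power iteration over hub and authority scores; this generic sketch illustrates the voting-style paradigm the paper's mutual-interaction approach replaces, and is not the paper's own algorithm:

```python
import numpy as np

def hits(adj, n_iter=50):
    """Power-iteration HITS on an adjacency matrix.

    adj[i, j] = 1.0 if node i links to node j. A node linked to by many
    good hubs gets a high authority score; a node linking to many good
    authorities gets a high hub score.
    """
    n = adj.shape[0]
    hub = np.ones(n)
    auth = np.ones(n)
    for _ in range(n_iter):
        auth = adj.T @ hub                 # authorities collect hub votes
        auth /= np.linalg.norm(auth)
        hub = adj @ auth                   # hubs collect authority votes
        hub /= np.linalg.norm(hub)
    return hub, auth
```

Note that the two matrix-vector products per iteration depend on link direction (`adj` vs. `adj.T`), which is exactly the property that makes direct application awkward when folksonomy links can run in either direction depending on a property's voice.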