• Title/Summary/Keyword: space experiment

Biodeodorization of Trimethylamine by Biofilter Packed with Waste Tire-Chips (폐타이어칩 충진형 바이오 필터에 의한 Trimethylamine 제거)

  • Park, Hun-Ju;Kim, Chang-Gyun
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.30 no.8
    • /
    • pp.789-797
    • /
    • 2008
  • This study investigated the removal characteristics of gaseous trimethylamine (TMA) in a biofilter packed with waste tire-chips. The inoculum was sludge collected from an activated sludge process at a wastewater treatment facility handling malodorous pollutants; a nominal amount of the collected sludge was inoculated onto the packing materials in the filter. Once the biofilter had reached steady-state operation, removal efficiencies at varying concentrations and space velocities (SVs) were assessed based on TMA, COD$_{Cr}$, NO$_3^-$-N, NO$_2^-$-N, NH$_4^+$-N and EPS (Extracellular Polymeric Substances) in the leachate. At an influent concentration of 10 ppm, approximately 95% of the TMA was removed at SVs of 120 and 180 hr$^{-1}$, but removal dropped to 80-90% at an SV of 240 hr$^{-1}$. As the influent concentration was gradually increased from 5 to 55 ppm, the removal efficiency of TMA was initially high at about 95% in the range of 5 to 10 ppm, but fell to 80% for 10 to 30 ppm. In a kinetic study of TMA decomposition, V$_m$ (maximum substrate removal rate) and $K_s$ (substrate affinity coefficient) were 14.3 g$\cdot$m$^{-3}\cdot$h$^{-1}$ and 0.043 g$\cdot$m$^{-3}$, respectively, while the adaptation period fell in the range of 100 to 150 hr. The EPS concentration observed in the leachate remained consistently at 100 to 200 ppm, indicating that a biofilm was continuously formed and sustained throughout the tire-chip packed reactor.
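
The kinetic constants reported above are the two parameters of a saturation-type (Michaelis-Menten/Monod) rate law, the usual model behind biofilter kinetic studies; the abstract gives only V$_m$ and $K_s$, so the functional form in this sketch is an assumption, not the paper's stated model.

```python
# Minimal sketch of the removal-rate curve implied by the reported constants,
# assuming a Michaelis-Menten (Monod-type) rate law: r = V_m * C / (K_s + C).

V_M = 14.3    # maximum substrate removal rate, g·m^-3·h^-1 (from the abstract)
K_S = 0.043   # substrate affinity coefficient, g·m^-3 (from the abstract)

def removal_rate(c):
    """Volumetric TMA removal rate (g·m^-3·h^-1) at concentration c (g·m^-3)."""
    return V_M * c / (K_S + c)

for c in (0.01, 0.043, 0.1, 0.5):   # sweep a few illustrative concentrations
    print(f"C = {c:5.3f} g/m^3 -> r = {removal_rate(c):6.2f} g/(m^3 h)")
```

For concentrations well above $K_s$ the rate saturates near V$_m$, which is consistent with the efficiency drop observed at higher loadings.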

Restoring Omitted Sentence Constituents in Encyclopedia Documents Using Structural SVM (Structural SVM을 이용한 백과사전 문서 내 생략 문장성분 복원)

  • Hwang, Min-Kook;Kim, Youngtae;Ra, Dongyul;Lim, Soojong;Kim, Hyunki
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.131-150
    • /
    • 2015
  • Omission of noun phrases for obligatory cases is a common phenomenon in Korean and Japanese sentences that is not observed in English. When an argument of a predicate can be filled with a noun phrase co-referential with the title, the argument is more easily omitted in encyclopedia texts. The omitted noun phrase is called a zero anaphor or zero pronoun. Encyclopedias like Wikipedia are a major source for information extraction by intelligent application systems such as information retrieval and question answering systems. However, omission of noun phrases degrades the quality of information extraction. This paper deals with developing a system that can restore omitted noun phrases in encyclopedia documents. The problem our system addresses is closely related to zero anaphora resolution, one of the important problems in natural language processing. A noun phrase existing in the text that can be used for restoration is called an antecedent. An antecedent must be co-referential with the zero anaphor. While the candidates for the antecedent are only noun phrases in the same text in zero anaphora resolution, the title is also a candidate in our problem. In our system, the first stage detects the zero anaphor. In the second stage, antecedent search is carried out over the candidates. If antecedent search fails, an attempt is made, in the third stage, to use the title as the antecedent. The main characteristic of our system is its use of a structural SVM for finding the antecedent. The noun phrases in the text that appear before the position of the zero anaphor comprise the search space. The main technique used in previous research works is to perform binary classification on all the noun phrases in the search space; the noun phrase classified as an antecedent with the highest confidence is selected. In contrast, we propose viewing antecedent search as the problem of assigning antecedent-indicator labels to a sequence of noun phrases. In other words, sequence labeling is employed in antecedent search; we are the first to suggest this idea. To perform sequence labeling, we use a structural SVM which receives a sequence of noun phrases as input and returns a sequence of labels as output. An output label takes one of two values, indicating whether or not the corresponding noun phrase is the antecedent. The structural SVM we used is based on the modified Pegasos algorithm, which exploits a subgradient descent methodology for optimization problems. To train and test our system we selected a set of Wikipedia texts and constructed an annotated corpus providing gold-standard answers such as zero anaphors and their possible antecedents. Training examples were prepared using the annotated corpus and used to train the SVMs and test the system. For zero anaphor detection, sentences are parsed by a syntactic analyzer and omitted subject or object cases are identified. The performance of our system is therefore dependent on that of the syntactic analyzer, which is a limitation of our system. When an antecedent is not found in the text, our system tries to use the title to restore the zero anaphor; this is based on binary classification using a regular SVM. The experiment showed that our system achieves F1 = 68.58%, which suggests that a state-of-the-art system can be developed with our technique.
It is expected that future work that enables the system to utilize semantic information can lead to a significant performance improvement.
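
Since the abstract names the modified Pegasos algorithm as the optimizer behind the structural SVM, a minimal sketch of the standard binary Pegasos update (stochastic subgradient descent on the SVM objective) may be a useful reference point; the authors' structural variant, which labels whole sequences of noun phrases, is not reproduced here, and the toy data below stands in for real antecedent features.

```python
import numpy as np

def pegasos_train(X, y, lam=0.01, epochs=10, seed=0):
    """Minimal binary Pegasos: stochastic subgradient descent on the
    regularized hinge-loss objective. X: (n, d) features; y: labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)            # step-size schedule 1/(lambda*t)
            if y[i] * (X[i] @ w) < 1:        # margin violated: full subgradient
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                            # only the regularizer contributes
                w = (1 - eta * lam) * w
    return w

# Toy usage: two linearly separable clusters standing in for noun-phrase features.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(2, 1, (20, 2)), rng.normal(-2, 1, (20, 2))])
y = np.array([1] * 20 + [-1] * 20)
print("learned weights:", pegasos_train(X, y))
```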

Ultrastructural Changes Induced by Telluric Acid in the Rat Liver (Telluric Acid가 흰쥐 간조직의 미세구조에 미치는 영향)

  • Son, Serk-Joo;Jeong, Young-Gil;Cho, Seung-Muk;Baik, Tai-Kyung;Choi, Chang-Do;Choi, Wol-Bong
    • Applied Microscopy
    • /
    • v.25 no.4
    • /
    • pp.83-103
    • /
    • 1995
  • This experiment was carried out to investigate the effects of telluric acid on the histological and fine structural changes in the rat liver. Fischer 344 rats ($150{\sim}200\;gm$) were used in this study as control and experimental groups. Telluric acid (5 mg/100 gm of body weight) suspended in olive oil was given intraperitoneally to the animals of the experimental group, and olive oil alone to those of the control group. At intervals of 3, 6 and 12 hours and 1, 2, 3, 5, 10, 20, 30 and 60 days after administration, the animals were sacrificed and the livers were obtained. For light microscopic examination of the liver, sections ($5{\mu}m$) were stained with hematoxylin-eosin (H-E). For electron microscopic examination, sections were stained with uranyl acetate and lead citrate and examined with a Zeiss EM 109 electron microscope. The results obtained were as follows. 1. In the control group, a round nucleus, well developed mitochondria, Golgi apparatus, rough endoplasmic reticulum (RER) and numerous glycogen particles were observed in the cytoplasm of the hepatocyte. On the cytoplasmic membranes of the hepatocyte, the sinusoidal surface had numerous microvilli, and the cellular surface was joined to adjacent hepatocytes by desmosomes. The RER cisternae were dilated and zymogen granules were fewer than those of the dark cells. Kupffer cells with irregular nuclear membranes were observed. Fat-storing cells and collagenous fiber bundles were observed in the space of Disse. 2. Kupffer cells, inflammatory cells in the connective tissue of the hepatic triad, and lysosomes were increased in the 3, 6, and 12 hour experimental groups compared with the control group. 3. In the 1 day experimental group, infiltration of inflammatory cells into the interlobular connective tissue, dilatation of the sinusoidal capillaries and an increase in Kupffer cells were observed. Atrophic change of hepatocytes and aggregation of glycogen particles in the hepatocyte cytoplasm were observed. In this group, desmosomes near the bile canaliculi and collagenous fiber bundles in the space of Disse were increased compared with the 12 hour experimental group. In the 2 day experimental group, desmosomes, lysosomes, peroxisomes and collagenous fiber bundles were increased compared with the 1 day experimental group. Furthermore, lamellated bodies were also seen in the cytoplasm of the hepatocyte. 4. In the 3 and 5 day experimental groups, transformation of the hepatic cell cords and degeneration of the hepatocytes were markedly increased compared with all other experimental groups. Damaged RER and mitochondria and collagenous fiber bundles were also increased compared with the 2 day experimental group, and autophagosomes and fat-storing cells with large lipid droplets were observed. Tight junctions and desmosomes between the hepatocytes were separated. These degenerative changes were the most severe among all experimental groups. 5. In the 10 and 20 day experimental groups, the arrangement of the hepatic cell cords and the cell organelles of the hepatocytes were similar to those of the control group. However, aggregation of glycogen particles, dilatation of the sinusoidal capillaries and infiltration of inflammatory cells remained. 6. In the 30 day experimental group, the tissue findings were similar to those of the control group, but lamellated bodies remained in some hepatocytes and lysosomes remained in the cytoplasm of the Kupffer cells. In the 60 day experimental group, all these changes had recovered to the state of the control group. In conclusion, telluric acid directly induces degenerative and necrotic changes in the hepatic tissue; however, these changes had fully recovered to the control state by 60 days.

Semi-automated Tractography Analysis using the Allen Mouse Brain Atlas: Comparing DTI Acquisition between NEX and SNR (알렌 마우스 브레인 아틀라스를 이용한 반자동 신경섬유지도 분석 : 여기수와 신호대잡음비간의 DTI 획득 비교)

  • Im, Sang-Jin;Baek, Hyeon-Man
    • Journal of the Korean Society of Radiology
    • /
    • v.14 no.2
    • /
    • pp.157-168
    • /
    • 2020
  • Advancements in segmentation methodology have made automatic segmentation of brain structures from structural images accurate and consistent. One method of automatic segmentation, which registers atlas information from template space to subject space, requires a high quality atlas with accurate boundaries for consistent segmentation. The Allen Mouse Brain Atlas, widely accepted as a high quality reference of the mouse brain, has been used in various segmentations and can provide accurate coordinates and boundaries of mouse brain structures for tractography. Through probabilistic tractography, diffusion tensor images can be used to map the comprehensive neuronal network of white matter pathways of the brain. Comparisons between the neural networks of mouse and human brains have shown that various clinical tests on mouse models can simulate the disease pathology of human brains, increasing the importance of clinical mouse brain studies. However, the difference in brain size between humans and mice makes it difficult to achieve the image quality necessary for analysis, and the conditions required for sufficient image quality, such as a long scan time, make using live samples unrealistic. In order to secure a mouse brain image with a sufficient scan time, an ex-vivo experiment on a mouse brain was conducted for this study. Using FSL, a tool for analyzing tensor images, we propose a semi-automated segmentation and tractography analysis pipeline for the mouse brain and apply it to various mouse models. Also, in order to determine the useful signal-to-noise ratio of the diffusion tensor image acquired for tractography analysis, images with various numbers of excitations were compared.
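
The NEX-versus-SNR comparison rests on the standard MRI averaging relation: SNR grows with the square root of the number of excitations, while scan time grows linearly. A minimal sketch of that trade-off (the specific NEX values compared in the study are not listed in the abstract, so those below are illustrative):

```python
import math

# Standard MRI averaging relation: averaging NEX excitations boosts SNR by
# sqrt(NEX) but lengthens the scan proportionally -- the trade-off this
# ex-vivo DTI study weighs when choosing an acquisition protocol.

def relative_snr(nex, baseline_nex=1):
    return math.sqrt(nex / baseline_nex)

for nex in (1, 2, 4, 8):   # illustrative excitation counts
    print(f"NEX={nex}: SNR x{relative_snr(nex):.2f}, scan time x{nex}")
```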

THE STUDY ON THE NOISE IN THE VESSEL -Effect of the Noise Control by the Noise Arresting Rooms- (선박소음에 관한 연구 -방음실에 의한 소음제어효과실험-)

  • PARK Jung-Hee
    • Korean Journal of Fisheries and Aquatic Sciences
    • /
    • v.9 no.3
    • /
    • pp.215-221
    • /
    • 1976
  • In this study, the noise arresting effect of noise control rooms against the transmission of surrounding noise was tested by setting up packing noise control rooms in a test room in which prerecorded noise from an engine room was reradiated at the same level as the original pressure. The inner space of control room A is $3.389m^3(1.19\times1.19\times2.14m)$, with walls furnished with plywood board 9 mm in thickness, a noise control room door ($60\times45cm$) and an illumination lamp. In control room B, noise absorption board (10 mm fiber board holding cone-type concavities 5 mm in diameter, 5 mm deep, spaced 15 mm apart) is adhered to the internal ceiling and styrol foam boards (20 mm) to the walls; the other construction is the same as control room A. Type C is the same as B except for glass fiber wool board (33 mm) on the walls. Type D is the same as type A except that the thickness of the wall is 12 mm and wood pyramid-type cones ($5\times5\times13cm$) are adhered to the ceiling and walls (Fig. 1). The recorded noise and oscillator sound were radiated at various levels, and the noise pressure transmitted through the control rooms was measured with a sound level meter (Brüel & Kjær 2205, measuring range 37-140 dB). In order to calculate the absorption rate in the control rooms, the noise pressure was measured at different distances while the recorded noise pressure was radiated. The results obtained from the experiment are as follows. 1. When the noise pressure of the test room was 60 dB, the transmission rate of type A was $69.7\%$ and increased $3.3\%$ per 10 dB. Under the same conditions, the rate was $53.9\%$ and increased $4.5\%$ per 10 dB in type D. Type D was the most effective of the four in noise arresting, and the effectiveness ranked D, C, B and A in order (Fig. 2). 2. When the oscillator sound and vessel noise were radiated at 1,000 Hz at one meter distance from types A and D, the oscillator sound pressures were 77 dB and 73 dB, while the vessel noise pressures were 73.3 dB and 66.2 dB, respectively (Fig. 3). 3. Regarding the influence of frequency on the lower oscillator sound (1,000 Hz) pressure, types C and D were almost the same at 140 cm, but type C was 0.3 dB lower than type D at 20 cm distance (Fig. 4).
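
A short sketch of the sound-pressure-level arithmetic behind these figures may help. Reading "transmission rate" as the ratio of the level inside the room to the level in the test room (in dB) is an assumption inferred from the reported numbers; the pressure-ratio conversion is the standard relation $L = 20\log_{10}(p/p_{ref})$, not a formula stated in the paper.

```python
import math

# Assumed reading of the paper's "transmission rate": inside level / outside
# level, expressed in percent. The dB-to-pressure conversion is the standard
# sound-pressure-level relation L = 20*log10(p / p_ref).

def transmission_rate(level_inside_db, level_outside_db):
    return 100.0 * level_inside_db / level_outside_db

def pressure_ratio(delta_db):
    """Pressure ratio corresponding to a level difference in dB."""
    return 10 ** (delta_db / 20.0)

outside, inside = 60.0, 41.8   # illustrative values (yields type A's 69.7%)
print(f"transmission rate: {transmission_rate(inside, outside):.1f}%")
loss = outside - inside
print(f"insertion loss {loss:.1f} dB -> pressure reduced x{pressure_ratio(loss):.1f}")
```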

A New Item Recommendation Procedure Using Preference Boundary

  • Kim, Hyea-Kyeong;Jang, Moon-Kyoung;Kim, Jae-Kyeong;Cho, Yoon-Ho
    • Asia pacific journal of information systems
    • /
    • v.20 no.1
    • /
    • pp.81-99
    • /
    • 2010
  • Lately, in consumer markets the number of new items is rapidly increasing at an overwhelming rate, while consumers have limited access to information about those new products when making a sensible, well-informed purchase. Therefore, item providers and customers need a system which recommends the right items to the right customers. Also, whenever new items are released, a recommender system specializing in new items can help item providers locate and identify potential customers. Currently, new items are added to an existing system without being specially noted to consumers, making it difficult for consumers to identify and evaluate new products introduced in the markets. Most previous approaches for recommender systems rely on the usage history of customers. For new items, this content-based (CB) approach is simply not available for recommending those new items to potential consumers. Although the collaborative filtering (CF) approach is not directly applicable to the new item problem, it is a good idea to use the basic principle of CF, which identifies similar customers, i.e. neighbors, and recommends items to those customers who have liked similar items in the past. This research suggests a hybrid recommendation procedure based on the preference boundary of the target customer. We suggest the hybrid recommendation procedure using the preference boundary in the feature space for recommending new items only. The basic principle is that if a new item belongs within the preference boundary of a target customer, it is evaluated to be preferred by the customer. Customers' preferences and the characteristics of items, including new items, are represented in a feature space, and the scope or boundary of the target customer's preference is extended to those of the neighbors'. The new item recommendation procedure consists of three steps. The first step is analyzing the profile of items, which are represented as k-dimensional feature values. The second step is to determine the representative point of the target customer's preference boundary, the centroid, based on a personal information set. To determine the centroid of the preference boundary of a target customer, three algorithms are developed in this research: one using the centroid of the target customer only (TC), another using the centroid of a (dummy) big target customer composed of the target customer and his/her neighbors (BC), and another using the centroids of the target customer and his/her neighbors (NC). The third step is to determine the range of the preference boundary, the radius. The suggested algorithm uses the average distance (AD) between the centroid and all purchased items. We test whether the CF-based approach to determining the centroid of the preference boundary improves the recommendation quality or not. For this purpose, we develop two hybrid algorithms, BC and NC, which use neighbors when deciding the centroid of the preference boundary. To test the validity of the hybrid algorithms, BC and NC, we developed a CB algorithm, TC, which uses target customers only. We measured effectiveness scores of the suggested algorithms and compared them through a series of experiments with a set of real mobile image transaction data. We split the data into a training set covering 1 June to 31 July 2004 and a test set covering 1 to 31 August 2004. The training set is used to build the preference boundary, and the test set is used to evaluate the performance of the suggested hybrid recommendation procedure. The main aim of this research is to compare the hybrid recommendation algorithms with the CB algorithm. To evaluate the performance of each algorithm, we compare the purchased new item list in the test period with the item list recommended by the suggested algorithms, employing the hit ratio as the evaluation metric. The hit ratio is defined as the ratio of the hit set size to the recommended set size, where the hit set size means the number of successful recommendations in our experiment. The experimental results show that the hit ratios of BC and NC are higher than that of TC, which means that using neighbors is more effective for recommending new items; that is, the hybrid algorithms using CF are more effective than the CB-only algorithm when recommending new items to consumers. The reason the hit ratio of BC is lower than that of NC is that BC is defined as a dummy or virtual customer who purchased all items of the target customer and the neighbors, so the centroid of BC often shifts from that of TC and tends to reflect a skewed character of the target customer. The recommendation algorithm using NC shows the best hit ratio, because NC has sufficient information about target customers and their neighbors without damaging the information about the target customers.
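
A minimal sketch of the simplest (TC) variant of the preference boundary, together with the hit-ratio metric, may clarify the procedure; the Euclidean metric and the toy feature values are illustrative assumptions, not details stated in the abstract.

```python
import numpy as np

# TC variant of the preference boundary: the centroid of the target customer's
# purchased-item feature vectors is the boundary center, and the average
# distance (AD) to those items is its radius. A new item is recommended if it
# falls inside the boundary.

def preference_boundary(purchased):                 # purchased: (n, k) features
    centroid = purchased.mean(axis=0)
    radius = np.linalg.norm(purchased - centroid, axis=1).mean()  # AD radius
    return centroid, radius

def recommend(new_items, centroid, radius):
    dists = np.linalg.norm(new_items - centroid, axis=1)
    return dists <= radius                          # boolean recommendation mask

def hit_ratio(recommended_ids, purchased_ids):
    """Hit ratio = |hits| / |recommended set|, as defined in the abstract."""
    rec = set(recommended_ids)
    return len(rec & set(purchased_ids)) / len(rec) if rec else 0.0

# Toy usage with k = 2 feature dimensions.
purchased = np.array([[0.2, 0.8], [0.4, 0.6], [0.3, 0.9]])
new_items = np.array([[0.3, 0.7], [0.9, 0.1]])
c, r = preference_boundary(purchased)
print(recommend(new_items, c, r))                   # expect [ True False]
```

The BC and NC variants change only how the centroid (and neighbor information) enters `preference_boundary`; the boundary test itself is unchanged.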

A study on the design and applicability of stereoscopic sign for improving the visibility of traffic sign in double-deck tunnel (복층터널 교통표지판 시인성 향상을 위한 입체표지판 설계 및 적용 가능성에 대한 연구)

  • Park, Sang-Heon;Hwang, Ju-Hwan;Han, Sang-Ju;An, Sung-Joo;Kim, Hoon-Jae
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.20 no.6
    • /
    • pp.899-915
    • /
    • 2018
  • In this study, as part of constructing an eco-friendly advanced road transportation network, a double-deck tunnel serving as a roadway for small cars is considered; its low cross-section leaves less than 60 cm of installation height, so conventional traffic signs must be small and hard to read. In order to solve this problem, traffic sign characters were designed in three dimensions, and the applicability of a stereoscopic sign design that can achieve greater visibility than existing signs at the same distance was verified through virtual simulation. The stereoscopic sign is installed horizontally on the ceiling of the double-deck tunnel; so that it appears vertical to the driver, its width and height are each enlarged by ratios determined by perspective. In addition, a 3D driving simulation based on the stereoscopic sign design specifications was performed to verify the visibility of the stereoscopic signs. As a result of the design and experimental study, it was confirmed that the stereoscopic sign can be designed through the theoretical formula and that it can present the driver with larger traffic sign characters, because it is not constrained by the facility limit the way an existing vertical traffic sign is. We also confirmed that the design principle for the ceiling-mounted stereoscopic sign can be applied to the side wall. It was further confirmed that the stereoscopic sign can be designed smaller as the distance at which the driver visually recognizes the sign becomes shorter and as the vertical protrusion at the lower part of the stereoscopic sign becomes higher. A 3D simulation driving experiment based on the design information confirmed that the stereoscopic sign is visually equivalent to a vertical sign at the planned distance. Although detailed research on, and institutional standards for, stereoscopic signs have not yet been established in Korea or abroad, we expect the design and application possibilities developed in this study to evolve, through further research, into a core technology for new road traffic facilities.
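
A minimal sketch of the perspective idea behind the ceiling-mounted sign may help: a character lying flat on the ceiling is foreshortened by the sine of the driver's elevation angle to it, so stretching it lengthwise by roughly $1/\sin\theta$ makes it subtend a visual angle comparable to an upright sign. This flat-projection model is an assumption for illustration, not the paper's exact theoretical formula.

```python
import math

# Anamorphic stretch for a flat ceiling mark, under a simple flat-projection
# model: theta is the driver's elevation angle to the sign, and the mark must
# be stretched along the road axis by 1/sin(theta) to counter foreshortening.

def stretch_factor(eye_to_ceiling_m, viewing_distance_m):
    theta = math.atan2(eye_to_ceiling_m, viewing_distance_m)  # elevation angle
    return 1.0 / math.sin(theta)

# Illustrative: 2.0 m eye-to-ceiling height, several recognition distances.
for d in (20, 40, 60):
    print(f"d = {d} m -> stretch x{stretch_factor(2.0, d):.1f}")
```

Consistent with the finding above, a shorter recognition distance gives a larger elevation angle and hence a smaller required stretch, i.e., a smaller sign.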

Laboratory chamber test for prediction of hazardous ground conditions ahead of a TBM tunnel face using electrical resistivity survey (전기비저항 탐사 기반 TBM 터널 굴진면 전방 위험 지반 예측을 위한 실내 토조실험 연구)

  • Lee, JunHo;Kang, Minkyu;Lee, Hyobum;Choi, Hangseok
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.23 no.6
    • /
    • pp.451-468
    • /
    • 2021
  • Predicting hazardous ground conditions ahead of a TBM (Tunnel Boring Machine) tunnel face is essential for efficient and stable TBM advance. Although there have been several studies on the electrical resistivity survey method for TBM tunnelling, sufficient experimental data considering TBM advance have not yet been established. Therefore, in this study, laboratory-scale model experiments simulating TBM excavation were carried out to analyze the applicability of an electrical resistivity survey for predicting hazardous ground conditions ahead of a TBM tunnel face. The trend of electrical resistivity during TBM advance was experimentally evaluated under various hazardous ground conditions (fault zone, seawater-intruded zone, soil-to-rock transition zone, and rock-to-soil transition zone) ahead of the tunnel face. In the course of the experiments, a scaled-down rock ground was constructed using granite blocks to simulate rock TBM tunnelling. Based on the experimental data, the electrical resistivity tends to decrease as the tunnel approaches the fault zone. While the seawater-intruded zone follows a trend similar to the fault zone, its resistivity decreased significantly more than that of the fault zone. In the case of the soil-to-rock transition zone, the electrical resistivity increases as the TBM approaches the rock with relatively high electrical resistivity. Conversely, in the case of the rock-to-soil transition zone, the opposite trend was observed: the electrical resistivity decreases as the tunnel face approaches the soil with relatively low electrical resistivity. The experimental results demonstrate that hazardous ground conditions (fault zone, seawater-intruded zone, soil-to-rock transition zone, rock-to-soil transition zone) can be efficiently predicted by utilizing an electrical resistivity survey during TBM tunnelling.
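
As a reference for how such readings are obtained, here is a sketch using the standard apparent-resistivity relation for a Wenner electrode array, $\rho_a = 2\pi a\,\Delta V/I$; the abstract does not specify the electrode configuration used in the chamber tests, so the geometric factor here is an assumption for illustration.

```python
import math

# Standard Wenner-array apparent resistivity: rho_a = 2*pi*a * (dV / I), where
# a is the electrode spacing. A falling rho_a as the face advances is the
# signature of a conductive zone (fault or seawater intrusion) ahead.

def wenner_apparent_resistivity(spacing_m, delta_v, current_a):
    return 2.0 * math.pi * spacing_m * delta_v / current_a

# Illustrative readings as the face approaches a conductive zone.
for dv in (0.50, 0.30, 0.12):                      # measured potential drops, V
    rho = wenner_apparent_resistivity(0.1, dv, 0.01)
    print(f"dV = {dv:.2f} V -> rho_a = {rho:6.1f} ohm-m")
```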

Evaluating Reverse Logistics Networks with Centralized Centers: Hybrid Genetic Algorithm Approach (집중형센터를 가진 역물류네트워크 평가 : 혼합형 유전알고리즘 접근법)

  • Yun, YoungSu
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.55-79
    • /
    • 2013
  • In this paper, we propose a hybrid genetic algorithm (HGA) approach to effectively solve the reverse logistics network with centralized centers (RLNCC). In the proposed HGA approach, a genetic algorithm (GA) is used as the main algorithm. For implementing the GA, a new bit-string representation scheme using 0 and 1 values is suggested, which can easily produce the initial population of the GA. As genetic operators, the elitist strategy in enlarged sampling space developed by Gen and Chang (1997), a new two-point crossover operator, and a new random mutation operator are used for selection, crossover and mutation, respectively. For the hybrid concept of the GA, an iterative hill climbing method (IHCM) developed by Michalewicz (1994) is inserted into the HGA search loop. The IHCM is a local search technique that precisely explores the space converged on by the GA search. The RLNCC is composed of collection centers, remanufacturing centers, redistribution centers, and secondary markets in reverse logistics networks. Of these centers and secondary markets, only one collection center, remanufacturing center, redistribution center, and secondary market should be opened in the reverse logistics network. Some assumptions are made for effectively implementing the RLNCC. The RLNCC is represented by a mixed integer programming (MIP) model using indexes, parameters and decision variables. The objective function of the MIP model is to minimize the total cost, which consists of transportation cost, fixed cost, and handling cost. The transportation cost is incurred by transporting the returned products between the centers and secondary markets. The fixed cost is determined by the opening or closing decision at each center and secondary market. That is, if there are three collection centers (the opening costs of collection centers 1, 2, and 3 being 10.5, 12.1, and 8.9, respectively), and collection center 1 is opened while the remainder are closed, then the fixed cost is 10.5. The handling cost means the cost of treating the products returned from customers at each center and secondary market opened at each RLNCC stage. The RLNCC is solved by the proposed HGA approach. In a numerical experiment, the proposed HGA and a conventional competing approach are compared with each other using various measures of performance. As the conventional competing approach, the GA approach of Yun (2013) is used; this GA approach lacks any local search technique such as the IHCM used in the HGA approach. As measures of performance, CPU time, optimal solution, and optimal setting are used. Two types of the RLNCC with different numbers of customers, collection centers, remanufacturing centers, redistribution centers and secondary markets are presented for comparing the performances of the HGA and GA approaches. The MIP models for the two types of the RLNCC are programmed in Visual Basic Version 6.0, and the computing environment is an IBM-compatible PC with a 3.06 GHz CPU and 1 GB RAM running Windows XP. The parameters used in the HGA and GA approaches are: total number of generations 10,000, population size 20, crossover rate 0.5, mutation rate 0.1, and a search range of 2.0 for the IHCM. A total of 20 runs are made to eliminate the randomness of the HGA and GA searches. 
With performance comparisons, network representations by opening/closing decision, and convergence processes for the two types of RLNCCs, the experimental results show that the HGA has significantly better performance than the GA in terms of the optimal solution, though the GA is slightly quicker than the HGA in terms of CPU time. Finally, the proposed HGA approach proves more efficient than the conventional GA approach on the two types of the RLNCC, since the former has a local search process in addition to the GA search process, while the latter has the GA search process alone. For future study, much larger RLNCCs will be tested to assess the robustness of our approach.
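
A compact sketch of the HGA skeleton described above may be useful: bit-string chromosomes for open/close decisions, two-point crossover, random bit-flip mutation, an elitist selection step, and a hill-climbing pass over the incumbent best standing in for the IHCM. The cost function is a placeholder, not the paper's MIP objective.

```python
import random

def cost(bits):
    # Placeholder objective: opening cost of the selected facility, with a
    # penalty unless exactly one facility is open (the model opens exactly
    # one center per stage). Not the paper's MIP objective.
    open_costs = [10.5, 12.1, 8.9, 7.3]
    penalty = 1e6 * abs(sum(bits) - 1)
    return sum(b * c for b, c in zip(bits, open_costs)) + penalty

def two_point_crossover(a, b):
    i, j = sorted(random.sample(range(len(a)), 2))
    return a[:i] + b[i:j] + a[j:], b[:i] + a[i:j] + b[j:]

def mutate(bits, rate=0.1):
    return [1 - b if random.random() < rate else b for b in bits]

def hill_climb(bits):
    # Flip each bit in turn and keep any improvement: a simple stand-in
    # for Michalewicz's iterative hill climbing method (IHCM).
    best = bits[:]
    for i in range(len(best)):
        cand = best[:]
        cand[i] = 1 - cand[i]
        if cost(cand) < cost(best):
            best = cand
    return best

def hga(n_bits=4, pop_size=20, generations=200, cx_rate=0.5):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        children = [pop[0][:]]                         # elitist strategy
        while len(children) < pop_size:
            p1, p2 = random.sample(pop[:10], 2)
            if random.random() < cx_rate:
                c1, c2 = two_point_crossover(p1, p2)
            else:
                c1, c2 = p1[:], p2[:]
            children += [mutate(c1), mutate(c2)]
        pop = children[:pop_size]
        pop[0] = hill_climb(pop[0])                    # hybrid local-search step
    return min(pop, key=cost)

print(hga())   # typically opens only the cheapest facility, e.g. [0, 0, 0, 1]
```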

A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu;Park, Hyun-Jung;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.17 no.4
    • /
    • pp.31-59
    • /
    • 2007
  • We frequently use search engines to find relevant information in the Web but still end up with too much information. In order to solve this problem of information overload, ranking algorithms have been applied to various domains. As more information becomes available in the future, effectively and efficiently ranking search results will become more critical. In this paper, we propose a ranking algorithm for Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page is estimated based on the number of keywords found in the page, which is subject to manipulation. In contrast, link analysis methods such as Google's PageRank capitalize on the information inherent in the link structure of the Web graph. PageRank considers a page highly important if it is referred to by many other pages, and the degree of importance also increases if the importance of the referring pages is high. Kleinberg's algorithm is another link-structure based ranking algorithm for Web pages. Unlike PageRank, Kleinberg's algorithm utilizes two kinds of scores: the authority score and the hub score. If a page has a high authority score, it is an authority on a given topic and many pages refer to it; a page with a high hub score links to many authoritative pages. As mentioned above, link-structure based ranking methods play an essential role in the World Wide Web (WWW), and their effectiveness and efficiency are now widely recognized. On the other hand, as the Resource Description Framework (RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed as an RDF graph, making ranking algorithms for RDF knowledge bases greatly important. The RDF graph consists of nodes and directional links, similar to the Web graph, so link-structure based ranking methods seem highly applicable to ranking Semantic Web resources. However, the information space of the Semantic Web is more complex than that of the WWW. For instance, the WWW can be considered one huge class, i.e., a collection of Web pages, which has only a recursive property, a 'refers to' property corresponding to the hyperlinks. The Semantic Web, however, encompasses various kinds of classes and properties, so ranking methods used in the WWW should be modified to reflect the complexity of its information space. Previous research addressed the ranking problem for query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm in order to rank Semantic Web resources. They defined the objectivity score and the subjectivity score of a resource, corresponding to the authority score and the hub score of Kleinberg's algorithm, respectively. They concentrated on the diversity of properties and introduced property weights to control the influence of one resource on another depending on the characteristic of the property linking the two resources. A node with a high objectivity score is the object of many RDF triples, and a node with a high subjectivity score is the subject of many RDF triples. They developed several kinds of Semantic Web systems to validate their technique and showed experimental results verifying the applicability of their method to the Semantic Web. Despite their efforts, however, some limitations remained, which they reported in their paper. 
First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a certain domain; in other words, the ratio of links to nodes should be high, or overall resources should be described in detail to a certain degree, for their algorithm to work properly. Second, the Tightly-Knit Community (TKC) effect, the phenomenon that pages which are less important but densely connected score higher than ones that are more important but sparsely connected, remains problematic. Third, a resource may have a high score not because it is actually important, but simply because it is very common and consequently has many links pointing to it. In this paper, we examine these ranking problems from a novel perspective and propose a new algorithm which can solve the problems observed in the previous studies. Our proposed method is based on a class-oriented approach. In contrast to the predicate-oriented approach adopted by previous research, under our approach a user determines the weight of a property by comparing its relative significance to the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are supposed to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. It closely reflects the way people evaluate things in the real world, and turns out to be superior to the predicate-oriented approach for the Semantic Web. Our proposed algorithm can resolve the TKC effect and can further shed light on the other limitations reported in the previous research. In addition, we propose two ways to incorporate datatype properties, which had not previously been employed even when they bear on resource importance. We designed an experiment to show the effectiveness of our proposed algorithm and the validity of the ranking results, which had not been attempted in previous research. We also conducted a comprehensive mathematical analysis, which was overlooked in previous research and enabled us to simplify the calculation procedure. Finally, we summarize our experimental results and discuss further research issues.
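
For reference, here is a minimal sketch of the Kleinberg-style authority/hub iteration that the objectivity/subjectivity scores generalize; per-property weights (the class-oriented weighting this paper proposes) would replace the 0/1 adjacency entries, and the graph below is a toy.

```python
import numpy as np

# Kleinberg's HITS power iteration on a small directed graph. In the RDF
# setting, A[i, j] would carry the (class-oriented) weight of the property
# linking resource i to resource j instead of a plain 0/1 link indicator.

A = np.array([[0, 1, 1, 0],      # A[i, j] = 1 if resource i links to resource j
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

def hits(adj, iters=50):
    n = adj.shape[0]
    auth = np.ones(n)
    hub = np.ones(n)
    for _ in range(iters):
        auth = adj.T @ hub               # authority: pointed to by good hubs
        hub = adj @ auth                 # hub: points to good authorities
        auth /= np.linalg.norm(auth)     # normalize to keep scores bounded
        hub /= np.linalg.norm(hub)
    return auth, hub

auth, hub = hits(A)
print("authority:", np.round(auth, 3))
print("hub:      ", np.round(hub, 3))
```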