• Title/Summary/Keyword: Control efficiency


Viability Test and Bulk Harvest of Marine Phytoplankton Communities to Verify the Efficacy of a Ship's Ballast Water Management System Based on USCG Phase II (USCG Phase II 선박평형수 성능 평가를 위한 해양 식물플랑크톤군집 대량 확보 및 생물사멸시험)

  • Hyun, Bonggil;Baek, Seung Ho;Lee, Woo Jin;Shin, Kyoungsoon
    • Journal of the Korean Society of Marine Environment & Safety / v.22 no.5 / pp.483-489 / 2016
  • The USCG Phase II type approval test requires that living natural biota account for more than 75 % of all biota in the test tank. To meet this guideline, we harvested a community of natural organisms with a net at Masan Bay (eutrophic) and Jangmok Bay (mesotrophic) during the winter season, and measured cell viability to determine the mortality rate. Based on the concentrated volume of organisms (1 ton) at Masan and Jangmok Bay, the abundance of organisms sized ≥10 μm and <50 μm was 4.7×10⁴ cells mL⁻¹ and 0.8×10⁴ cells mL⁻¹, and their survival rates were 90.4 % and 88.0 %, respectively. In particular, chain-forming small diatoms such as Skeletonema costatum-like species were abundant at Jangmok Bay, while small flagellates (<10 μm) and non-chain-forming large dinoflagellates, such as Akashiwo sanguinea and Heterocapsa triquetra, were abundant at Masan Bay. Owing to the size difference of the dominant species, concentration efficiency was higher at Jangmok Bay than at Masan Bay. The mortality rate in samples treated by the ballast water management system (BWMS) on Day 0 was slightly lower for samples from Jangmok Bay than from Masan Bay, at 90.4 % and 93 %, respectively. After 5 days, the mortality rates in the control and treatment groups were 6.7 % and >99 %, respectively. Consequently, the phytoplankton concentration method alone did not easily satisfy the USCG Phase II type approval standard (>1.0×10³ cells mL⁻¹ in a 500-ton tank) during the winter season, and alternative options such as mass culture and/or a harvesting system using natural phytoplankton communities may help meet the USCG Phase II biological criteria.
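
As a rough back-of-the-envelope check on the abstract's conclusion, the sketch below dilutes the 1-ton concentrate into the 500-ton test tank and compares the resulting viable density with the USCG Phase II criterion. The dilution assumption and the Python helper are ours, not the paper's method.

```python
# Back-of-the-envelope sketch (our assumption, not the paper's method):
# dilute the 1-ton concentrate into the 500-ton test tank and compare the
# resulting viable density with the USCG Phase II criterion.

CONCENTRATE_DENSITY = 4.7e4   # cells/mL in the 1-ton concentrate (Masan Bay)
SURVIVAL = 0.904              # Day-0 survival fraction from the abstract
CONCENTRATE_TONS = 1.0
TANK_TONS = 500.0
USCG_MINIMUM = 1.0e3          # cells/mL required in the test tank

viable = CONCENTRATE_DENSITY * SURVIVAL * (CONCENTRATE_TONS / TANK_TONS)
print(f"viable density in tank: {viable:.0f} cells/mL")
print("meets criterion:", viable >= USCG_MINIMUM)   # False: ~85 << 1000
```

This matches the abstract's finding that net concentration alone falls well short of the tank criterion in winter.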

Analysis of the Effect of the Etching Process and Ion Injection Process in the Unit Process for the Development of High Voltage Power Semiconductor Devices (고전압 전력반도체 소자 개발을 위한 단위공정에서 식각공정과 이온주입공정의 영향 분석)

  • Gyu Cheol Choi;KyungBeom Kim;Bonghwan Kim;Jong Min Kim;SangMok Chang
    • Clean Technology / v.29 no.4 / pp.255-261 / 2023
  • Power semiconductors are semiconductors used for power conversion, transformation, distribution, and control. Recently, global demand for high-voltage power semiconductors has been increasing across various industrial fields, and optimization research on high-voltage IGBT devices is urgently needed in these industries. For high-voltage IGBT development, setting the resistance value of the wafer and optimizing the key unit processes are the main variables determining the electrical characteristics of the finished chip. Securing and optimizing the process technology that supports a high breakdown voltage is equally important. Etching transfers the mask circuit pattern from the photolithography step onto the wafer and removes the unneeded material beneath the photoresist film. Ion implantation injects impurities into the wafer substrate, together with thermal diffusion, during semiconductor manufacturing; this step establishes the desired conductivity. In this study, dry etching and wet etching were compared across four conditions during field ring etching, an important process for forming the ring structure that supports the 3.3 kV breakdown voltage of the IGBT, in order to form a stable body junction depth and secure the breakdown voltage. The field ring ion implantation process was optimized over four conditions based on the TEG design. The one-step wet etching method was advantageous in terms of process and work efficiency, and the optimal ring pattern implantation conditions were a dose of 9.0E13 at an energy of 120 keV. The p-type implantation conditions were optimized at a dose of 6.5E13 and an energy of 80 keV, and the p+ implantation conditions at a dose of 3.0E15 and an energy of 160 keV.
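
For reference, the optimized split conditions reported above can be collected into a small process table. The sketch below is ours; the cm⁻² dose unit is an assumption (it is the customary unit for implant dose, but the abstract gives bare values such as 9.0E13).

```python
# Sketch: the optimized implant split conditions reported in the abstract,
# collected into a config table. The cm^-2 dose unit is our assumption; the
# abstract gives bare values such as 9.0E13.

IMPLANT_CONDITIONS = {
    "field_ring_pattern": {"dose_cm2": 9.0e13, "energy_keV": 120},
    "p_implant":          {"dose_cm2": 6.5e13, "energy_keV": 80},
    "p_plus_implant":     {"dose_cm2": 3.0e15, "energy_keV": 160},
}

for step, c in IMPLANT_CONDITIONS.items():
    print(f"{step}: {c['dose_cm2']:.1e} cm^-2 at {c['energy_keV']} keV")
```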

Effect of Varying Excessive Air Ratios on Nitrogen Oxides and Fuel Consumption Rate during Warm-up in a 2-L Hydrogen Direct Injection Spark Ignition Engine (2 L급 수소 직접분사 전기점화 엔진의 워밍업 시 공기과잉률에 따른 질소산화물 배출 및 연료 소모율에 대한 실험적 분석)

  • Jun Ha;Yongrae Kim;Cheolwoong Park;Young Choi;Jeongwoo Lee
    • Journal of the Korean Institute of Gas / v.27 no.3 / pp.52-58 / 2023
  • With the growing recognition that carbon neutrality matters in responding to global climate change, the use of hydrogen as a carbon-free fuel is also growing. Hydrogen is commonly used in fuel cells (FC), but it can also be burned in internal combustion engines (ICE). In particular, ICEs, which already have an established production and supply infrastructure, can greatly expand hydrogen energy utilization where it is difficult to rely solely on fuel cells or to expand their infrastructure. A disadvantage of utilizing hydrogen through combustion, however, is the potential formation of nitrogen oxides (NOx), harmful emissions created when nitrogen in the air reacts with oxygen at high temperatures. In particular, the EURO-7 exhaust regulation, which includes cold-start operation, demands efforts to reduce emissions during warm-up. Therefore, in this study, nitrogen oxide emissions and fuel consumption were investigated while the coolant warmed from room temperature to 88 °C in a 2-liter direct injection spark ignition (SI) engine fueled with hydrogen. One advantage of hydrogen over conventional fuels such as gasoline, natural gas, and liquefied petroleum gas (LPG) is its wide flammable range, which allows leaner control of the excess air ratio. In this study, the excess air ratio was varied among 1.6, 1.8, and 2.0 during warm-up, and the results were analyzed. The experiments show that as the mixture becomes leaner during warm-up, nitrogen oxide emissions per unit time decrease and thermal efficiency increases somewhat. However, because reaching the target temperature takes longer, cumulative emissions and fuel consumption may worsen.
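
As a worked note on what the lambda sweep means physically, the sketch below converts each excess air ratio into the air mass flow needed per unit of hydrogen. The stoichiometric air-fuel ratio of roughly 34.3:1 by mass is a textbook value for hydrogen, not a number from the paper, and the helper function is ours.

```python
# Sketch of the excess air ratio (lambda) sweep described in the abstract.
# H2_STOICH_AFR is an approximate textbook value, not taken from the paper.

H2_STOICH_AFR = 34.3  # kg of air per kg of H2 at stoichiometry, approximate

def air_flow_for_lambda(fuel_flow_kg_h: float, lam: float) -> float:
    """Air mass flow (kg/h) needed to hold a given excess air ratio."""
    return fuel_flow_kg_h * H2_STOICH_AFR * lam

for lam in (1.6, 1.8, 2.0):  # the three warm-up operating points tested
    print(f"lambda {lam}: {air_flow_for_lambda(1.0, lam):.1f} kg air per kg H2")
```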

A study on Convergence Weapon Systems of Self propelled Mobile Mines and Supercavitating Rocket Torpedoes (자항 기뢰와 초공동 어뢰의 융복합 무기체계 연구)

  • Lee, Eunsu;Shin, Jin
    • Maritime Security / v.7 no.1 / pp.31-60 / 2023
  • This study proposes a new convergence weapon system that combines the covert placement and detection abilities of a self-propelled mobile mine with the rapid tracking and attack abilities of a supercavitating rocket torpedo. The system is designed to counter North Korea's new underwater weapon, 'Haeil'. The concept behind this convergence weapon system is to maximize the strengths and minimize the weaknesses of each weapon type. Self-propelled mobile mines, typically placed discreetly on the seabed or in the water column, are designed to explode when a vessel or submarine passes near them. Like traditional sea mines, they are generally used to defend or control specific areas, and can effectively limit enemy movement and guide it in a desired direction. Their advantage over traditional sea mines is the ability to move independently, which preserves the survivability of the platform responsible for placing them. This allows the mines to be placed discreetly and deeper into enemy waters, significantly reducing the time and cost of mine placement while keeping the deploying platforms safe. However, to cause substantial damage, the mine must detonate when the target is very close, typically within a few yards, so the timing of the explosion is crucial. Supercavitating rocket torpedoes, on the other hand, travel at groundbreaking speeds, many times faster than conventional torpedoes, leaving the target little room to evade. This comes with notable drawbacks: short range, high noise levels, and guidance issues. The high noise levels and short range are serious disadvantages that can expose the platform that launched the torpedo. This research proposes a convergence weapon system that leverages the strengths of both weapons while compensating for their weaknesses. Such a system can overcome the limitations of traditional underwater kill chains, offering swift and precise responses. By adapting the weapon acquisition criteria from the Defense Force Development Service Order, the effectiveness of the proposed system was independently analyzed and demonstrated in terms of underwater defense sustainability, survivability, and cost-efficiency. The system's utility was further demonstrated through simulated scenarios, revealing its potential to play a critical role in future underwater kill chains. Realizing this system, however, presents significant technical challenges and requires further research.


Analysis of Greenhouse Thermal Environment by Model Simulation (시뮬레이션 모형에 의한 온실의 열환경 분석)

  • 서원명;윤용철
    • Journal of Bio-Environment Control / v.5 no.2 / pp.215-235 / 1996
  • Thermal analysis by mathematical model simulation makes it possible to reasonably predict the heating and/or cooling requirements of greenhouses located in various geographical and climatic environments. Another advantage of the model simulation technique is that it supports selecting an appropriate heating system, setting up an energy utilization strategy, scheduling seasonal cropping patterns, and siting new greenhouse ranges. In this study, the control of greenhouse microclimate is categorized into cooling and heating. A dynamic model was adopted to simulate heating requirements and energy conservation measures such as energy saving by a night-time thermal curtain, estimation of Heating Degree-Hours (HDH), and long-term prediction of greenhouse thermal behavior. The cooling effects of ventilation, shading, and a pad & fan system were analyzed in part by a static model. Experiments with a small model greenhouse of 1.2 m × 2.4 m showed that cooling the greenhouse by spraying cold water directly on the cover surface, or by recirculating cold water through heat exchangers, would be effective for summer cooling. The mathematical model developed for the simulation is widely applicable because it reflects various climatic factors such as temperature, humidity, beam and diffuse solar radiation, and wind velocity. The model was closely verified against weather data obtained through long-term greenhouse experiments. Most of the results relating to greenhouse heating or cooling components were obtained from the mathematically simulated model greenhouse using typical-year (1987) data for Jinju, Gyeongnam, while some of the cooling results were obtained from model experiments, including an analysis of the cooling effect of water sprayed directly on the greenhouse roof surface. The results are summarized as follows: 1. The heating requirements of the model greenhouse were strongly related to the minimum temperature set for the greenhouse. The night-time setting temperature influences heating energy requirements much more than the day-time setting, so the night-time setting temperature should be determined and controlled carefully. 2. HDH data obtained by the conventional method are estimated from long-term average temperatures together with a standard base temperature (usually 18.3 °C). Such data can serve only as relative comparison criteria for heating load and are not applicable to calculating greenhouse heating requirements, because they consider a limited set of climatic factors and an inappropriate base temperature. Comparing the HDH data with the simulation results shows that a heating system designed from HDH data will probably overshoot the actual heating requirement. 3. The energy saving effect of the night-time thermal curtain, like the estimated heating requirement, is sensitive to weather conditions: the thermal curtain adopted in the simulation saved more than 50 % of the annual heating requirement. 4. Ventilation performance during warm seasons is governed mainly by the air exchange rate, with some variation depending on greenhouse structure, weather, and cropping conditions.
For air exchange rates above 1 volume per minute, the reduction in temperature rise in both greenhouse types becomes modest with further increases in ventilation capacity, so the desirable ventilation capacity is taken to be 1 air change per minute, the rate commonly recommended for greenhouses. 5. In a fully cropped, glass-covered greenhouse under clear weather at 50 % RH with a continuous 1 air change per minute, the temperature drop in a 50 % shaded greenhouse and in a pad & fan greenhouse was 2.6 °C and 6.1 °C, respectively. The control greenhouse under continuous air change reached 36.6 °C, 5.3 °C above ambient; as a result, the greenhouse temperature could be maintained about 3 °C below ambient. At 80 % RH, however, the greenhouse could not be cooled below ambient, because the possible temperature reduction by the pad & fan system was then no more than 2.4 °C. 6. Assuming the greenhouse is cooled during the three hot summer months only when its temperature rises above 27 °C, the relationship between the ambient RH and the greenhouse temperature drop (ΔT) was formulated as ΔT = -0.077·RH + 7.7. 7. Time-dependent cooling effects of ventilation, 50 % shading, and a pad & fan system of 80 % efficiency, alone or in combination, were predicted over a typical summer day. With 1 air change per minute alone, greenhouse air temperature stayed 5 °C above the outdoor temperature; neither method alone could bring the greenhouse below outdoor temperature under fully cropped conditions. When both systems operated together, the greenhouse air temperature could be held about 2.0-2.3 °C below ambient. 8. When cool water of 6.5-8.5 °C was sprayed on the greenhouse roof surface at a flow rate of 1.3 liter/min per unit greenhouse floor area, the greenhouse air temperature could be dropped to 16.5-18.0 °C, about 10 °C below the ambient temperature of 26.5-28.0 °C at that time. The most important requirement for cooling greenhouse air effectively with a water spray is a plentiful source of cool water, such as ground water or cold water produced by a heat pump. Future work will focus not only on the feasibility of heat pump operation but also on the relationships between greenhouse air temperature (Tg), spray water temperature (Tw), water flow rate (Q), and ambient temperature (To).
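
The regression in item 6 is straightforward to evaluate; a minimal sketch (function name ours, coefficients from the abstract):

```python
# Sketch evaluating the regression from item 6 of the abstract, relating
# ambient relative humidity (%) to the achievable greenhouse temperature
# drop (deg C): dT = -0.077*RH + 7.7.

def temperature_drop(rh_percent: float) -> float:
    """Predicted greenhouse temperature drop in deg C at a given ambient RH (%)."""
    return -0.077 * rh_percent + 7.7

for rh in (50, 80):  # the two humidity cases discussed in item 5
    print(f"RH {rh}%: predicted drop {temperature_drop(rh):.2f} deg C")
```

At 50 % RH the fit predicts a drop of about 3.9 °C and at 80 % RH about 1.5 °C, broadly consistent with the pad & fan figures discussed above.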


An Overview of the Rationale of Monetary and Banking Intervention: The Role of the Central Bank in Money and Banking Revisited (화폐(貨幣)·금융개입(金融介入)의 이론적(理論的) 근거(根據)에 대한 고찰(考察) : 중앙은행(中央銀行)의 존립근거(存立根據)에 대한 개관(槪觀))

  • Jwa, Sung-hee
    • KDI Journal of Economic Policy / v.12 no.3 / pp.71-94 / 1990
  • This paper reviews the rationale for monetary and banking intervention by an outside authority, either the government or the central bank, and seeks to delineate clearly the optimal limits to the monetary and banking deregulation currently underway in Korea as well as on a global scale. Furthermore, it seeks to establish an objective and balanced view on the role of the central bank, especially in light of the current discussion on restructuring Korea's central bank, which has been severely contaminated by interest-group politics. The discussion begins with the recognition that the modern free banking school and the new monetary economics are becoming formidable challenges to the traditional role of the government or the central bank in the monetary and banking sector. The paper reviews six arguments that have traditionally been presented to support intervention: (1) the possibility of an over-issue of bank notes under free banking instead of central banking; (2) externalities in and the public-good nature of the use of money; (3) economies of scale and natural monopoly in producing money; (4) the need for macro stabilization policy due to the instability of the real sector; (5) the external effects of bank failure due to the inherent instability of the existing banking system; and (6) protection for small banknote users and depositors. Based on an analysis of these arguments, the paper speculates on the optimal role of the government or central bank in the monetary and banking system and the optimal degree of monetary and banking deregulation. In contrast to the arguments for free banking or laissez-faire monetary systems, which have become fashionable in recent years, monopoly and intervention by the government or central bank in the outside money system can be both necessary and optimal. In this case, of course, an over-issue of fiat money may be possible due to political considerations, but this issue is beyond the scope of this paper. On the other hand, the issue of inside monies based on outside money could indeed be provided optimally under market competition by private institutions. A competitive system of issuing inside monies would help realize, to the maximum extent possible, the external economies generated by using a single outside money. According to this reasoning, free banking activities will prevail in the inside money system, while a government monopoly will prevail in the outside money system. This speculation, then, also implies that the monetary and banking deregulation currently underway should, and most likely will, be limited to the inside money system, which could be liberalized to the fullest degree. It is also implied that it will be impractical to deregulate the outside money system and to allow market competition to provide outside money, in accordance with the arguments of the free banking school and the new monetary economics. Furthermore, the role of the government or central bank in this new environment will not differ significantly from their current roles. As long as the supply of fiat money continues to be monopolized by the government, control of the supply of base money and such related responsibilities as monetary policy (argument (4)) and the lender of last resort (argument (5)) will naturally be assigned to the outside money supplier. However, a mechanism for controlling an over-issue of fiat money by a monopolistic supplier will definitely be called for (argument (1)).
A monetary policy based on a certain policy rule could be one possibility. More importantly, the deregulation of the inside money system would further increase the systemic risk inherent in the current fractional banking system, while enhancing the efficiency of the system (argument (5)). In this context, the role of the lender of last resort would again become an instrument of paramount importance in alleviating liquidity crises in their early stages, thereby preventing a widespread bank run. Similarly, prudential banking supervision would help maintain the safety and soundness of the fully deregulated banking system. These functions would also help protect depositors from losses due to bank failures (argument (6)). Finally, these speculations suggest that government or central bank authorities have probably been too conservative on the issue of deregulating the financial system, beyond the caution necessary to preserve system safety. Rather, only the fullest deregulation of the inside money system seems to guarantee the maximum enjoyment of external economies in the single outside money system.


Meteorological Constraints and Countermeasures in Rice Breeding -Breeding for cold tolerance- (기상재해와 수도육종상의 대책 - 내냉성품종육성방안-)

  • Mun-Hue Heu;Young-Soo Han
    • KOREAN JOURNAL OF CROP SCIENCE / v.27 no.4 / pp.371-384 / 1982
  • Highly cold-tolerant varieties are requested not only in high-latitude cool areas but also in tropical high-elevation areas, and the required tolerance differs from location to location. IRRI identified six different types of cold tolerance required around the world for breeding purposes: a) Hokkaido type, b) Suweon type, c) Taipei 1st season type, d) Taipei 2nd season type, e) Tropical alpine type, and f) Bangladesh type. The cold tolerance required in Korea is more urgent for Tongil-group cultivars, and the tolerance required is such that their physiological activity at low temperature is as high as that of Japonica-group cultivars, at least during the young seedling and reproductive stages. For conventional Japonica cultivars, the cold-tolerance characters required are a short growth duration with stable basic vegetative growth, low sensitivity to high temperature, and little prolongation of growth duration at low temperature. Methods of screening for cold tolerance developed rapidly after the Tongil cultivar was released. Screening facilities, such as low-temperature incubators, cold water tanks, growth cabinets, a phytotron, the cold water nursery in Chuncheon, and the breeding nurseries located in Jinbu, Unbong, and Youngduk, are well established. Foreign facilities, such as a cold water tank with rapid generation advancement facilities and the cold nurseries located in Banaue, Kathmandu, and Kashmir, may be available for screening some limited breeding materials. For reference, the screening methods applied at different growth stages in Japan are introduced. The component characters of cold tolerance are not well identified, but varietal differences in a) germinability, b) young seedling growth, c) rooting, d) tillering, e) discoloration, f) nutrient uptake, g) photosynthesis rate, h) delay in heading, i) pollen sterility, and j) grain fertility at low temperature are reported to be distinguishable. Relationships among these traits are not consistent. Reported studies on the inheritance of cold tolerance are summarized: four or more genes control low-temperature germinability, one or several genes control seedling tolerance, and four or more genes are responsible for the pollen fertility of rice treated with cold air or grown in a cold water nursery. Most of these data, however, indicate that the results may come out differently if tested at different temperatures. Many cold-tolerant parents among Japonicas, Indicas, and Javanicas have been identified as a result of improved cold-tolerance screening techniques and IRTP efforts, and they are ready to be utilized. Considering a) diversification of germplasm, b) integration of resistances to diseases and insects, c) identification of the adaptability of recommended cultivars, and d) systematic control of recommended cultivars, breeding strategies for the short term and the long term are suggested. For the short term, efforts will be concentrated mainly on the conventional cultivar group. Domestic cultivars will be used as foundation stock, and ecologically different foreign introductions, such as those from Hokkaido, China, or Taiwan, will be used as cross parents to adjust growth durations and synthesize the prototype of tolerances. On the other side, extremely early waxy Japonicas will be crossed with Indica parents identified for their resistance to diseases and insects.
Through backcrosses to waxy Japonicas, those Indica resistances will be transferred to the Japonicas, and these will be used in crosses to improve the resistances of the prototype. For the long term, efforts will be devoted to synthesizing all the available tolerances identified among Japonicas, Indicas, and Javanicas to diversify the germplasm. The newly synthesized tolerant cultivars should be stable and minimally affected by low temperature at all growth stages, and resistance to diseases and insects should be integrated as well. Rapid generation advancement, pollen culture, and international cooperation are emphasized to maximize breeding efficiency.


A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu;Park, Hyun-Jung;Park, Jin-Soo
    • Asia pacific journal of information systems / v.17 no.4 / pp.31-59 / 2007
  • We frequently use search engines to find relevant information on the Web but still end up with too much information. To address this problem of information overload, ranking algorithms have been applied in various domains. As more information becomes available, ranking search results effectively and efficiently will become ever more critical. In this paper, we propose a ranking algorithm for Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page has been estimated from the number of keywords found in the page, which is subject to manipulation. In contrast, link analysis methods such as Google's PageRank capitalize on the information inherent in the link structure of the Web graph. PageRank considers a page highly important if it is referred to by many other pages, and the degree of importance also increases if the importance of the referring pages is high. Kleinberg's algorithm is another link-structure-based ranking algorithm for Web pages. Unlike PageRank, Kleinberg's algorithm utilizes two kinds of scores: the authority score and the hub score. If a page has a high authority score, it is an authority on a given topic and many pages refer to it; a page with a high hub score links to many authoritative pages. As mentioned above, link-structure-based ranking has played an essential role in the World Wide Web (WWW), and its effectiveness and efficiency are now widely recognized. Since the Resource Description Framework (RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed as an RDF graph, which makes ranking algorithms for RDF knowledge bases greatly important. The RDF graph consists of nodes and directional links, similar to the Web graph, so link-structure-based ranking seems highly applicable to ranking Semantic Web resources. However, the information space of the Semantic Web is more complex than that of the WWW. For instance, the WWW can be considered one huge class, i.e., a collection of Web pages, with only a single recursive 'refers to' property corresponding to the hyperlinks, whereas the Semantic Web encompasses various kinds of classes and properties; consequently, ranking methods used on the WWW should be modified to reflect this complexity. Previous research addressed the problem of ranking query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm to rank Semantic Web resources. They defined the objectivity score and the subjectivity score of a resource, corresponding to Kleinberg's authority score and hub score, respectively. They concentrated on the diversity of properties and introduced property weights to control the influence of one resource on another depending on the characteristic of the property linking the two. A node with a high objectivity score is the object of many RDF triples, and a node with a high subjectivity score is the subject of many RDF triples. They developed several Semantic Web systems to validate their technique and reported experimental results verifying the applicability of their method. Despite their efforts, however, some limitations remained, which they reported in their paper.
First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a certain domain; in other words, the ratio of links to nodes should be high, or overall resources should be described in reasonable detail, for their algorithm to work properly. Second, the Tightly-Knit Community (TKC) effect, the phenomenon whereby pages that are less important yet densely connected score higher than pages that are more important but sparsely connected, remains problematic. Third, a resource may receive a high score not because it is actually important, but simply because it is very common and consequently has many links pointing to it. In this paper, we examine these ranking problems from a novel perspective and propose a new algorithm that solves the problems identified in previous studies. Our proposed method is based on a class-oriented approach. In contrast to the predicate-oriented approach taken by previous research, under our approach a user determines the weight of a property by comparing its relative significance to the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are intended to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. It closely reflects the way people evaluate things in the real world, and turns out to be superior to the predicate-oriented approach for the Semantic Web. Our proposed algorithm resolves the TKC effect and can further shed light on the other limitations posed by previous research. In addition, we propose two ways to incorporate datatype properties, which have previously gone unused even when they bear on resource importance. We designed an experiment to show the effectiveness of our proposed algorithm and the validity of the ranking results, which had not been attempted in previous research. We also conducted a comprehensive mathematical analysis, overlooked in previous work, which enabled us to simplify the calculation procedure. Finally, we summarize our experimental results and discuss further research issues.
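
For background, the sketch below gives a minimal version of the authority/hub iteration (Kleinberg's HITS) that the objectivity/subjectivity scores above generalize. It is the generic textbook algorithm run on a hypothetical toy graph, not the authors' class-oriented method.

```python
# Minimal HITS (authority/hub) sketch: the generic textbook algorithm the
# abstract builds on, run on a hypothetical toy graph.

import math

graph = {  # directed edges: node -> nodes it links to (toy data)
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

nodes = list(graph)
auth = {n: 1.0 for n in nodes}
hub = {n: 1.0 for n in nodes}

for _ in range(50):  # power iteration until approximate convergence
    # authority score: sum of hub scores of the nodes linking in
    auth = {n: sum(hub[m] for m in nodes if n in graph[m]) for n in nodes}
    norm = math.sqrt(sum(v * v for v in auth.values()))
    auth = {n: v / norm for n, v in auth.items()}
    # hub score: sum of authority scores of the nodes linked to
    hub = {n: sum(auth[m] for m in graph[n]) for n in nodes}
    norm = math.sqrt(sum(v * v for v in hub.values()))
    hub = {n: v / norm for n, v in hub.items()}

print("authority:", {n: round(v, 3) for n, v in auth.items()})
print("hub:      ", {n: round(v, 3) for n, v in hub.items()})
```

Node "c", linked by three hubs, ends up with the highest authority score, which is exactly the density effect behind the TKC problem the paper targets.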

Comparison of Batch Assay and Random Assay Using Automatic Dispenser in Radioimmunoassay (핵의학 체외 검사에서 자동분주기를 이용한 Random Assay 가능성평가)

  • Moon, Seung-Hwan;Lee, Ho-Young;Shin, Sun-Young;Min, Gyeong-Sun;Lee, Hyun-Joo;Jang, Su-Jin;Kang, Ji-Yeon;Lee, Dong-Soo;Chung, June-Key;Lee, Myung-Chul
    • Nuclear Medicine and Molecular Imaging / v.43 no.4 / pp.323-329 / 2009
  • Purpose: Radioimmunoassay (RIA) is usually performed as a batch assay. To improve the efficiency of RIA without increasing cost and time, random assay could be a choice. We investigated the feasibility of random assay using an automatic dispenser by assessing the agreement between batch assay and random assay. Materials and Methods: The experiments were performed with four items: triiodothyronine (T3), free thyroxine (fT4), prostate specific antigen (PSA), and carcinoembryonic antigen (CEA). For each item, the sera of twenty patients, the standard, and the control samples were used. The measurements were done four times at 3-hour intervals by random assay and batch assay. The coefficients of variation (CV) of the standard samples and patients' data for T3, fT4, PSA, and CEA were assessed. The intraclass correlation coefficient (ICC) and the coefficient of correlation were calculated to assess the agreement between the two methods. Results: The CVs (%) of T3, fT4, PSA, and CEA measured by batch assay were 3.2±1.7%, 3.9±2.1%, 7.1±6.2%, and 11.2±7.2%; the CVs by random assay were 2.1±1.7%, 4.8±3.1%, 3.6±4.8%, and 7.4±6.2%. The ICCs between the batch assay and random assay were 0.9968 (T3), 0.9973 (fT4), 0.9996 (PSA), and 0.9901 (CEA). The coefficients of correlation between the batch assay and random assay were 0.9924 (T3), 0.9974 (fT4), 0.9994 (PSA), and 0.9989 (CEA) (p<0.05). Conclusion: The results of random assay showed strong agreement with the batch assay within a day. These results suggest that random assay using an automatic dispenser could be used in radioimmunoassay.
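
A minimal sketch of the two agreement statistics that are easy to reproduce, the coefficient of variation and the Pearson correlation; the sample values are hypothetical, not study data, and a full ICC computation is more involved and omitted here.

```python
# Sketch of two of the agreement statistics used in the study: coefficient
# of variation of repeated measurements and Pearson correlation between the
# two assay modes. The sample values below are hypothetical.

from statistics import mean, stdev, correlation  # correlation needs Python 3.10+

def cv_percent(values):
    """Coefficient of variation (%) of repeated measurements."""
    return 100.0 * stdev(values) / mean(values)

batch_runs  = [1.32, 1.35, 1.30, 1.33]  # e.g. one T3 sample, four runs
random_runs = [1.31, 1.36, 1.29, 1.34]  # same sample, random-assay mode

print(f"CV batch:  {cv_percent(batch_runs):.1f} %")
print(f"CV random: {cv_percent(random_runs):.1f} %")
print(f"Pearson r: {correlation(batch_runs, random_runs):.4f}")
```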

Adaptive RFID anti-collision scheme using collision information and m-bit identification (충돌 정보와 m-bit인식을 이용한 적응형 RFID 충돌 방지 기법)

  • Lee, Je-Yul;Shin, Jongmin;Yang, Dongmin
    • Journal of Internet Computing and Services / v.14 no.5 / pp.1-10 /
    • 2013
  • RFID (Radio Frequency Identification) is a non-contact identification technology. A basic RFID system consists of a reader and a set of tags. RFID tags can be divided into active and passive tags: active tags carry a power source that allows independent operation, while passive tags are small and low-cost, which makes them more suitable for the distribution industry. A reader processes the information received from tags, and an RFID system achieves fast identification of multiple tags using radio frequency. RFID systems have been applied to a variety of fields such as distribution, logistics, transportation, inventory management, access control, and finance. To encourage the introduction of RFID systems, several problems (price, size, power consumption, security) should be resolved. In this paper, we propose an algorithm to significantly alleviate the collision problem caused by simultaneous responses of multiple tags. Anti-collision schemes for RFID systems fall into three categories: probabilistic, deterministic, and hybrid. We introduce ALOHA-based protocols as probabilistic methods and Tree-based protocols as deterministic ones. In ALOHA-based protocols, time is divided into multiple slots; tags randomly select their own slots and transmit their IDs. However, ALOHA-based protocols cannot guarantee that all tags are identified, because they are probabilistic. In contrast, Tree-based protocols guarantee that a reader identifies all tags within its transmission range. In Tree-based protocols, a reader sends a query, and tags respond with their own IDs; when two or more tags respond, a collision occurs, and the reader makes and sends a new query. Frequent collisions degrade identification performance, so to identify tags quickly it is necessary to reduce collisions efficiently. Each RFID tag has a 96-bit EPC (Electronic Product Code) ID, and tags from the same company or manufacturer have similar IDs with the same prefix, so unnecessary collisions occur when identifying multiple tags with the Query Tree protocol. This increases the number of query-responses and the idle time, so the identification time grows significantly. To solve this problem, the Collision Tree protocol and the M-ary Query Tree protocol have been proposed. However, in the Collision Tree and Query Tree protocols, only one bit is identified per query-response, and when similar tag IDs exist, the M-ary Query Tree protocol generates unnecessary query-responses. In this paper, we propose an Adaptive M-ary Query Tree protocol that improves identification performance using m-bit recognition, collision information of tag IDs, and a prediction technique. We compare our proposed scheme with other Tree-based protocols under the same conditions and show that it outperforms them in terms of identification time and identification efficiency.
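
For background on the baseline the proposed scheme improves, here is a minimal binary Query Tree simulation: the reader repeatedly queries bit prefixes and splits a prefix whenever two or more tags respond. This is a generic sketch with hypothetical 4-bit IDs, not the Adaptive M-ary variant proposed above.

```python
# Minimal binary Query Tree sketch (the baseline protocol, not the proposed
# Adaptive M-ary variant). Tag IDs are hypothetical 4-bit strings.

from collections import deque

def query_tree(tags: list[str]) -> int:
    """Identify all tags; return the number of reader queries used."""
    queries = 0
    pending = deque([""])          # prefixes still to be queried
    while pending:
        prefix = pending.popleft()
        queries += 1
        matches = [t for t in tags if t.startswith(prefix)]
        if len(matches) == 1:
            pass                   # exactly one response: tag identified
        elif len(matches) > 1:     # collision: extend the prefix by one bit
            pending.append(prefix + "0")
            pending.append(prefix + "1")
        # zero matches: idle slot, nothing to do
    return queries

tags = ["0010", "0011", "0110", "1011"]
print("queries needed:", query_tree(tags))  # similar prefixes inflate this
```

The shared "00" prefix in the toy IDs forces extra collision rounds, which is exactly the inefficiency the m-bit recognition and collision-information techniques above aim to remove.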
  • RFID(Radio Frequency Identification) system is non-contact identification technology. A basic RFID system consists of a reader, and a set of tags. RFID tags can be divided into active and passive tags. Active tags with power source allows their own operation execution and passive tags are small and low-cost. So passive tags are more suitable for distribution industry than active tags. A reader processes the information receiving from tags. RFID system achieves a fast identification of multiple tags using radio frequency. RFID systems has been applied into a variety of fields such as distribution, logistics, transportation, inventory management, access control, finance and etc. To encourage the introduction of RFID systems, several problems (price, size, power consumption, security) should be resolved. In this paper, we proposed an algorithm to significantly alleviate the collision problem caused by simultaneous responses of multiple tags. In the RFID systems, in anti-collision schemes, there are three methods: probabilistic, deterministic, and hybrid. In this paper, we introduce ALOHA-based protocol as a probabilistic method, and Tree-based protocol as a deterministic one. In Aloha-based protocols, time is divided into multiple slots. Tags randomly select their own IDs and transmit it. But Aloha-based protocol cannot guarantee that all tags are identified because they are probabilistic methods. In contrast, Tree-based protocols guarantee that a reader identifies all tags within the transmission range of the reader. In Tree-based protocols, a reader sends a query, and tags respond it with their own IDs. When a reader sends a query and two or more tags respond, a collision occurs. Then the reader makes and sends a new query. Frequent collisions make the identification performance degrade. Therefore, to identify tags quickly, it is necessary to reduce collisions efficiently. Each RFID tag has an ID of 96bit EPC(Electronic Product Code). The tags in a company or manufacturer have similar tag IDs with the same prefix. Unnecessary collisions occur while identifying multiple tags using Query Tree protocol. It results in growth of query-responses and idle time, which the identification time significantly increases. To solve this problem, Collision Tree protocol and M-ary Query Tree protocol have been proposed. However, in Collision Tree protocol and Query Tree protocol, only one bit is identified during one query-response. And, when similar tag IDs exist, M-ary Query Tree Protocol generates unnecessary query-responses. In this paper, we propose Adaptive M-ary Query Tree protocol that improves the identification performance using m-bit recognition, collision information of tag IDs, and prediction technique. We compare our proposed scheme with other Tree-based protocols under the same conditions. We show that our proposed scheme outperforms others in terms of identification time and identification efficiency.