
APPLICATION OF FUZZY SET THEORY IN SAFEGUARDS

  • Fattah, A.;Nishiwaki, Y.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1051-1054
    • /
    • 1993
  • The International Atomic Energy Agency's Statute in Article III.A.5 allows it "to establish and administer safeguards designed to ensure that special fissionable and other materials, services, equipment, facilities and information made available by the Agency or at its request or under its supervision or control are not used in such a way as to further any military purpose; and to apply safeguards, at the request of the parties, to any bilateral or multilateral arrangement, or at the request of a State, to any of that State's activities in the field of atomic energy". Safeguards are essentially a technical means of verifying the fulfilment of political obligations undertaken by States and given legal force in international agreements relating to the peaceful uses of nuclear energy. The main political objectives are: to assure the international community that States are complying with their non-proliferation and other peaceful undertakings; and to deter (a) the diversion of safeguarded nuclear materials to the production of nuclear explosives or for military purposes and (b) the misuse of safeguarded facilities with the aim of producing unsafeguarded nuclear material. It is clear that no international safeguards system can physically prevent diversion. The IAEA safeguards system is basically a verification measure designed to provide assurance in those cases in which diversion has not occurred. Verification is accomplished by two basic means: material accountancy and containment and surveillance measures. Nuclear material accountancy is the fundamental IAEA safeguards mechanism, while containment and surveillance serve as important complementary measures. Material accountancy refers to a collection of measurements and other determinations which enable the State and the Agency to maintain a current picture of the location and movement of nuclear material into and out of material balance areas, i.e. areas where all material entering or leaving is measurable. A containment measure is one that is designed, by taking advantage of structural characteristics such as containers, tanks or pipes, to establish the physical integrity of an area or item by preventing the undetected movement of nuclear material or equipment. Such measures involve the application of tamper-indicating or surveillance devices. Surveillance refers to both human and instrumental observation aimed at indicating the movement of nuclear material. The verification process consists of three overlapping elements: (a) provision by the State of information such as design information describing nuclear installations, accounting reports listing nuclear material inventories, receipts and shipments, documents amplifying and clarifying reports as applicable, and notification of international transfers of nuclear material; (b) collection by the IAEA of information through inspection activities such as verification of design information, examination of records and reports, measurement of nuclear material, examination of containment and surveillance measures, and follow-up activities in case of unusual findings; (c) evaluation of the information provided by the State and of that collected by inspectors to determine the completeness, accuracy and validity of the information provided by the State and to resolve any anomalies and discrepancies.
To design an effective verification system, one must identify possible ways and means by which nuclear material could be diverted from peaceful uses, including means to conceal such diversions. These theoretical ways and means, which have become known as diversion strategies, are used as one of the basic inputs for the development of safeguards procedures, equipment and instrumentation. For the analysis of implementation strategy, it is assumed that non-compliance cannot be excluded a priori and that consequently there is a low but non-zero probability that a diversion could be attempted in all safeguards situations. An important element of diversion strategies is the identification of various possible diversion paths: the amount, type and location of nuclear material involved, the physical route and conversion of the material that may take place, the rate of removal, and concealment methods, as appropriate. With regard to the physical route and conversion of nuclear material the following main categories may be considered: unreported removal of nuclear material from an installation or during transit; unreported introduction of nuclear material into an installation; unreported transfer of nuclear material from one material balance area to another; unreported production of nuclear material, e.g. enrichment of uranium or production of plutonium; and undeclared uses of the material within the installation. With respect to the amount of nuclear material that might be diverted in a given time (the diversion rate), the continuum between the following two limiting cases is considered: one significant quantity or more in a short time, often known as abrupt diversion; and one significant quantity or more per year, for example by accumulation of smaller amounts each time to add up to a significant quantity over a period of one year, often called protracted diversion. Concealment methods may include: restriction of inspectors' access; falsification of records, reports and other material balance data; replacement of nuclear material, e.g. use of dummy objects; falsification of measurements or of their evaluation; and interference with IAEA-installed equipment. As a result of diversion and its concealment or other actions, anomalies will occur. All reasonable diversion routes, scenarios/strategies and concealment methods have to be taken into account in designing safeguards implementation strategies so as to provide sufficient opportunities for the IAEA to observe such anomalies. The safeguards approach for each facility will make a different use of these procedures, equipment and instrumentation according to the various diversion strategies which could be applicable to that facility and according to the detection and inspection goals which are applied. Postulated pathway sets or scenarios comprise those elements of diversion strategies which might be carried out at a facility or across a State's fuel cycle with declared or undeclared activities. All such factors, however, contain a degree of fuzziness that needs human judgment to reach the ultimate conclusion that all material is being used for peaceful purposes. Safeguards have traditionally been based on verification of declared material and facilities using material accountancy as a fundamental measure. The strength of material accountancy lies in the fact that it allows the detection of any diversion independently of the diversion route taken.
Material accountancy detects a diversion only after it has actually happened; it cannot physically prevent a diversion and can only deter State authorities from contemplating one through the risk of early detection. Recently the IAEA has been faced with new challenges. To deal with these, various measures are being considered to strengthen the safeguards system, such as enhanced assessment of the completeness of the State's initial declaration of nuclear material and installations under its jurisdiction, and enhanced monitoring and analysis of open information that may indicate inconsistencies with the State's safeguards obligations. Precise information vital for such enhanced assessments and analyses is normally not available or, if available, would be difficult and expensive to collect. Above all, a realistic appraisal of truth needs sound human judgment.
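The material accountancy described above reduces to a simple balance over each material balance area, graded by exactly the kind of fuzzy judgment the paper argues for. Below is a minimal Python sketch under stated assumptions: the field names, the example figures, and the fuzzy ramp breakpoints are illustrative inventions (only the 8 kg significant quantity for plutonium is standard IAEA usage), not the authors' method.

```python
from dataclasses import dataclass

# Hypothetical sketch of material accountancy for one material balance
# area (MBA). Field names and thresholds are illustrative assumptions.

@dataclass
class MaterialBalance:
    beginning_inventory_kg: float  # physical inventory at period start
    receipts_kg: float             # material entering the MBA
    shipments_kg: float            # material leaving the MBA
    ending_inventory_kg: float     # measured inventory at period end

    def muf(self) -> float:
        """Material Unaccounted For: book inventory minus measured inventory."""
        book = self.beginning_inventory_kg + self.receipts_kg - self.shipments_kg
        return book - self.ending_inventory_kg

SQ_PU_KG = 8.0  # one significant quantity of plutonium in IAEA usage

def anomaly_degree(muf_kg: float, sq_kg: float = SQ_PU_KG) -> float:
    """Fuzzy membership of 'significant MUF': ramps from 0 at 1% of one
    significant quantity to 1 at 25% of it (assumed breakpoints)."""
    lo, hi = 0.01 * sq_kg, 0.25 * sq_kg
    return min(1.0, max(0.0, (muf_kg - lo) / (hi - lo)))

mba = MaterialBalance(120.0, 35.0, 30.0, 124.5)
print(f"MUF = {mba.muf():.2f} kg, anomaly degree = {anomaly_degree(mba.muf()):.2f}")
```

With MUF defined as book inventory minus measured ending inventory, a graded anomaly degree replaces a crisp alarm threshold, which is one natural way fuzzy set theory enters safeguards evaluation.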


Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy-logic-based control theory has gained much interest in the industrial world thanks to its ability to formalize and solve, in a very natural way, many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common practice [1,2,8,9,10,11] to limit the membership functions either to those having triangular or trapezoidal shape, or to a pre-defined shape. These kinds of functions are able to cover a large spectrum of applications with a limited usage of memory, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This however results in a loss of computational power due to computation on the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions on such points [3,10,14,15]. Such a solution provides a satisfying computational speed, a very high precision of definition and gives users the opportunity to choose membership functions of any shape. However, a significant memory waste can be registered as well. It is indeed possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on such hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common to fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (so as to have good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the number of bits necessary for the given specifications is 5 for 32 truth levels, 3 for 8 membership functions and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a memory word is defined by Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null values for any element of the universe of discourse, dm(m) is the dimension (in bits) of a membership value m, and dm(fm) is the dimension of the word representing the index of the membership function. In our case, then, Length = 3 × (5 + 3) = 24. The memory dimension is therefore 128 × 24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of every fuzzy set for each element; the fuzzy-set word dimension would then be 8 × 5 bits.
Therefore, the dimension of the memory would have been 128 × 40 bits. Consistently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets. Focusing on the elements 32, 64 and 96 of the universe of discourse, they will be memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then the non-null weight derived from the rule is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to the performance of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value on each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization process of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of such parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have very little influence on memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values on any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
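To make the dimensioning arithmetic concrete, the following Python sketch recomputes the word lengths for the term set of figure 1 and packs one row of the sparse antecedent memory; the packing layout and the helper names are assumptions for illustration, not the authors' circuit.

```python
import math

UNIVERSE = 128  # elements of the universe of discourse U
N_SETS = 8      # fuzzy sets in the term set
LEVELS = 32     # discretization levels of the membership axis
NFM = 3         # max non-null memberships per element (paper's hypothesis)

dm_m = math.ceil(math.log2(LEVELS))   # 5 bits per membership value
dm_fm = math.ceil(math.log2(N_SETS))  # 3 bits per function index

word_sparse = NFM * (dm_m + dm_fm)    # Length = nfm * (dm(m) + dm(fm)) = 24
word_vector = N_SETS * dm_m           # vectorial method: 8 * 5 = 40

print(f"sparse:    {UNIVERSE} x {word_sparse} bits = {UNIVERSE * word_sparse} bits")
print(f"vectorial: {UNIVERSE} x {word_vector} bits = {UNIVERSE * word_vector} bits")

def encode_row(pairs):
    """Pack up to NFM (set-index, membership-value) pairs into one word."""
    assert len(pairs) <= NFM, "hypothesis: at most NFM non-null memberships"
    word = 0
    for idx, val in pairs:
        word = (word << (dm_m + dm_fm)) | (idx << dm_m) | val
    return word

# e.g. an element of U with non-null membership only on sets 3 and 4
print(hex(encode_row([(3, 20), (4, 12)])))
```

The printed totals reproduce the paper's comparison: 128 × 24 = 3072 bits for the sparse method against 128 × 40 = 5120 bits for full vectorial memorization.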


A study on the second edition of Koryo Dae-Jang-Mock-Lock (고려재조대장목록고)

  • Jeong Pil-mo
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.17
    • /
    • pp.11-47
    • /
    • 1989
  • This study intends to examine the background and the procedure of the carving of the tablets of the second edition of Dae-Jang-Mock-Lock (재조대장목록), the time and the route of the moving of the tablets into Haein-sa, and its contents and system. This study is mainly based on the second edition of Dae-Jang-Mock-Lock, but other closely related materials, such as the restored first edition of the Dae-Jang-Mock-Lock, Koryo Sin-Jo-Dae-Jang-Byeol-Lock (고려신조대장교정별록), Kae-Won-Seok-Kyo-Lock (개원석교록), Sok-Kae-Won-Seok-Kyo-Lock (속개원석교록), Jeong-Won-Sin-Jeong-Seok-Kyo-Lock (정원신정석교록), Sok-Jeong-Won-Seok-Kyo-Lock (속정원석교록), Dae-Jung-Sang-Bu-Beob-Bo-Lock (대중상부법보록), and Kyeong-Woo-Sin-Su-Beob-Bo-Lock (경우신수법보록), are also analysed and closely examined. The results of this study can be summarized as follows: 1. The second edition of Tripitaka Koreana (고려대장경) was carved for the purpose of defending the country from Mongolia with the power of Buddhism, after the tablets of the first edition in Buin-sa (부인사) were destroyed by fire. 2. In 1236, Dae-Jang-Do-Gam (대장도감) was established, and the preparations for the recarving of the tablets, such as comparison between the contents of the first edition of Tripitaka Koreana, Gae-Bo-Chik-Pan-Dae-Jang-Kyeong and Kitan Dae-Jang-Kyeong, transcription of the original copy and preparation of the wood, were started. 3. In 1237, after the announcement of Dae-Jang-Gyeong-Gak-Pan-Gun-Sin-Gi-Go-Mun (대장경각판군신기고문), the carving was started on a full scale. Seven years later (1243), Bun-Sa-Dae-Jang-Do-Gam (분사대장도감) was established in the area of the South to expand and hasten the work, and a large number of the tablets were carved there. 4. It took 16 years to carve the main text and the supplements of the second edition of Tripitaka Koreana, the main text being carved from 1237 to 1248 and the supplements from 1244 to 1251. 5. It can be supposed that the tablets of the second edition of Tripitaka Koreana, stored in Seon-Won-Sa (선원사), Kang-Wha (강화), for about 140 years, were moved to Ji-Cheon-Sa (지천사), Yong-San (용산), and then to Hae-In-Sa (해인사), through the west and south seas and Jang-Gyeong-Po (장경포), Go-Ryeong (고령), in the autumn of the same year. 6. The second edition of Tripitaka Koreana was carved mainly based on the first edition, in comparison with Gae-Bo-Chik-Pan-Dae-Jang-Kyeong (개보칙판대장경) and Kitan Dae-Jang-Kyeong (계단대장경). And the second edition of Dae-Jang-Mock-Lock was also compiled mainly based on the first edition, with reference to Kae-Won-Seok-Kyo-Lock and Sok-Jeong-Won-Seok-Kyo-Lock. 7. Compared with the first edition of Dae-Jang-Mock-Lock, in the second edition 7 items in 9 volumes of Kitan texts such as Weol-Deung-Sam-Mae-Gyeong-Ron (월등삼매경론) are added and 3 items in 60 volumes such as Dae-Jong-Ji-Hyeon-Mun-Ron (대종지현문론) are substituted by others from the Cheon chest (천함) to the Kaeng chest (경함), and 92 items in 601 volumes such as Beob-Won-Ju-Rim-Jeon (법원주림전) are added after the Kaeng chest. And 4 items in 50 volumes such as Yuk-Ja-Sin-Ju-Wang-Kyeong (육자신주왕경) are omitted in the second edition. 8. Compared with Kae-Won-Seok-Kyo-Lock, the Cheon chest to the Young chest (영함) of the second edition are compiled according to the Ib-Jang-Lock (입장록) of Kae-Won-Seok-Kyo-Lock, but 15 items in 43 volumes such as Bul-Seol-Ban-Ju-Sam-Mae-Kyeong (불설반주삼매경) are added and 7 items in 35 volumes such as Dae-Bang-Deung-Dae-Jib-Il-Jang-Kyeong (대방등대집일장경) are omitted.
9. Compared with Sok-Jeong-Won-Seok-Kyo-Lock, 3 items in 47 volumes (or 49 volumes) are omitted and 4 items in 96 volumes are added in the Caek chest (책함) to the Mil chest (밀함) of the second edition, but the items are arranged in the same order. 10. Compared with Dae-Jung-Sang-Bu-Beob-Bo-Lock, the arrangement of the second edition is entirely different, but 170 items in 329 volumes are included in the Doo chest (두함) to the Kyeong chest (경함) of the second edition, and 53 items in 125 volumes in the Jun chest (존함) to the Jeong chest (정함). And 10 items in 108 volumes in the last part of Dae-Jung-Sang-Bu-Beob-Bo-Lock are omitted and 3 items in 131 volumes such as Beob-Won-Ju-Rim-Jeon (법원주림전) are added in the second edition. 11. Compared with Kyeong-Woo-Sin-Su-Beob-Bo-Lock, all of its items (21 items in 161 volumes) are included in the second edition, though without its classificatory system. And 22 items in 172 volumes in the Seong-Hyeon-Jib-Jeon (성현집전) part, such as Myo-Gak-Bi-Jeon (묘각비전), are omitted. 12. The last part of the second edition, the Joo chest (주함) to the Dong chest (동함), includes 14 items in 237 volumes, but these items cannot be found in any other former Buddhist catalog, so they may be supposed to be Kitan texts. 13. Besides including almost all items in Kae-Won-Seok-Kyo-Lock and all items in Sok-Jeong-Won-Seok-Kyo-Lock, Dae-Jung-Sang-Bu-Beob-Bo-Lock, and Kyeong-Woo-Sin-Su-Beob-Bo-Lock, the second edition of Dae-Jang-Mock-Lock includes more items, at least 20 items in about 300 volumes of the Kitan Tripitaka and 15 items in 43 volumes of the traditional Korean Tripitaka, that cannot be found in any other catalog. Therefore, Tripitaka Koreana can be said to be a comprehensive Tripitaka covering all items of the Tripitakas translated into Chinese characters.


Legal Study for the KSLV launching - Products & Third Party Liability - (KSLV발사에 따른 제작 및 제3자피해 책임에 대한 우주법적 소고)

  • Shin, Sung-Hwan
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.21 no.1
    • /
    • pp.169-189
    • /
    • 2006
  • In 2007, the KSLV (Korea Space Launch Vehicle) built at the Goheung National Space Center is scheduled to launch, promoting our space exploration systematically, and the 'Space Exploration Promotion Act' entered into force. Article 3, Section 1 of the 'Space Exploration Promotion Act' prescribes that "the Korean government keeps the space treaties contracted with other countries and international organizations and pursues the peaceful uses of outer space." The representative international treaties are the Outer Space Treaty (1967) and the Liability Convention (1972). Liability Convention Article 2 provides that "a launching State shall be absolutely liable to pay compensation for damage caused by its space object on the surface of the earth or to aircraft in flight." The important point of Art. 2 is that the responsible entity is the 'State', not the 'Company'. According to Art. 14 of the Korean Space Exploration Act, a person who launches space objects according to Art. 8 and Art. 11 must bear the liability for damages owing to space accidents of the space objects. Could the Korean government apply the Products Liability Act, which entered into force on July 1, 2002, to a space launching person? And what is the contract type between the Korea Aerospace Research Institute (KARI) and the Russian manufacturer: a co-development contract or a licensed-product contract? There is no exemption clause waiving the Russian manufacturer's liability of the kind we could find in other similar contract conditions. If there is no exemption clause for the Russian manufacturer, could we apply the Korean Products Liability Act to it? The most important legal point is whether we could apply the Korean Products Liability Act to the main component company. According to Art. 17 of the contract between KARI and that company, KARI already applies the Products Liability Act to the main component company. For reference, we need to examine the Appalachian Insurance Co. v. McDonnell Douglas case, in which a long-distance communications satellite owned by the Western Union Telegraph Company failed on orbit entry. Western Union's insurers had supplied Western Union with $105 million of insurance, under which the satellite was regarded as a total loss. Five insurance companies (Appalachian Insurance Company, Commonwealth Insurance Company, Industrial Indemnity, Mutual Marine Office, and Northbrook Excess & Surplus Insurance Company) went to court against McDonnell Douglas, Morton Thiokol and the Hitco Company claiming fault and strict product liability. In light of Appalachian Insurance Co. v. McDonnell Douglas, KARI should waive the main component company's product liability burden, and we could study the possibility of adapting the 'Government Contractor Defense' theory to the main component company.


A Study on Aviation Safety and Third Country Operator of EU Regulation in light of the Convention on International Civil Aviation (시카고협약체계에서의 EU의 항공법규체계 연구 - TCO 규정을 중심으로 -)

  • Lee, Koo-Hee
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.29 no.1
    • /
    • pp.67-95
    • /
    • 2014
  • Some Contracting States of the Chicago Convention issue FAOCs (Foreign Air Operator Certificates) and conduct various safety assessments of the foreign operators which operate into their territory. These FAOCs and safety audits of foreign operators are being expanded to other parts of the world. While this trend strengthens aviation safety and helps reduce aircraft accidents, FAOC also burdens the other Contracting States to the Chicago Convention with additional requirements and late permissions. EASA (European Aviation Safety Agency) is a body governed by the European Basic Regulation. EASA was set up in 2003 and conducts specific regulatory and executive tasks in the field of civil aviation safety and environmental protection. EASA's mission is to promote the highest common standards of safety and environmental protection in civil aviation. The tasks of EASA have been expanded from airworthiness to air operations and currently include the rulemaking and standardization of airworthiness, air crew, air operations, TCO, ATM/ANS safety oversight, aerodromes, etc. According to the Implementing Rule, Commission Regulation (EU) No 452/2014, EASA has the mandate to issue safety authorizations to commercial air carriers from outside the EU as from 26 May 2014. Third country operators (TCO) flying to any of the 28 EU Member States and/or to the 4 EFTA States (Iceland, Norway, Liechtenstein, Switzerland) must apply to EASA for a so-called TCO authorization. EASA will only take over the safety-related part of the foreign operator assessment; operating permits will continue to be issued by the national authorities. A 30-month transition period ensures smooth implementation without interrupting the international air operations of foreign air carriers to the EU/EASA. Operators who are currently flying to Europe can continue to do so, but must submit an application for a TCO authorization before 26 November 2014. After the transition period, which lasts until 26 November 2016, a valid TCO authorization will be a mandatory prerequisite, in the absence of which an operating permit cannot be issued by a Member State. The European TCO authorization regime does not in principle differentiate between scheduled and non-scheduled commercial air transport operations; all TCOs performing commercial air transport need to apply for a TCO authorization. Operators with a potential need to operate to the EU at some time in the near future are advised to apply for a TCO authorization in due course, even when the date of operations is unknown. For all the issues mentioned above, I have studied the function of EASA and EU regulations, including the newly introduced TCO Implementing Rule, and suggested some proposals. I hope that this paper helps 1) the preparation of TCO authorization, 2) the understanding of this international issue, 3) the improvement of Korean aviation regulations and government organizations, and 4) compliance with international standards, contributing to the promotion of aviation safety.

Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

  • Park, Hyun-Jung;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.19-43
    • /
    • 2014
  • The Pareto principle, also known as the 80-20 rule, states that roughly 80% of the effects come from 20% of the causes for many events, including natural phenomena. It has been recognized as a golden rule in business, with wide application of such discoveries as 20 percent of customers accounting for 80 percent of total sales. On the other hand, the Long Tail theory, pointing out that "the trivial many" produces more value than "the vital few," has gained popularity in recent times with a tremendous reduction of distribution and inventory costs through the development of ICT (Information and Communication Technology). This study started with a view to illuminating how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration. The importance of virtual knowledge collaboration is soaring in this era of globalization and virtualization transcending geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and resolve the social dilemma caused by the fact that rational individuals are likely to consume rather than contribute knowledge. Knowledge collaboration can be defined as the creation of knowledge not only by sharing knowledge, but also by transforming and integrating such knowledge. From this perspective of knowledge collaboration, the relative distribution of knowledge sharing among participants can count as much as the absolute amounts of individual knowledge sharing. In particular, whether a greater contribution by the upper 20 percent of participants in knowledge sharing will enhance the efficiency of overall knowledge collaboration is an issue of interest. This study deals with the effect of this sort of knowledge sharing distribution on the efficiency of knowledge collaboration and is extended to reflect work characteristics. All analyses were conducted based on actual data instead of self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of the editors of 2,978 English Wikipedia featured articles, which are the best quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of knowledge contributions of the upper 20 percent of participants to the total number of knowledge contributions made by all participants of an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents the inequality of income among a group of people, was applied to reveal the effect of inequality of knowledge contribution. Hypotheses were set up based on the assumption that a higher ratio of knowledge contribution by more highly motivated participants will lead to higher collaboration efficiency, but if the ratio gets too high, collaboration efficiency will deteriorate because overall informational diversity is threatened and the knowledge contribution of less motivated participants is discouraged. Cox regression models were formulated for each of the focal variables (Pareto ratio and Gini coefficient) with seven control variables, such as the number of editors involved in an article, the average time between successive edits of an article, and the number of sections a featured article has. The dependent variable of the Cox models is the time spent from article initiation to promotion to the featured article level, indicating the efficiency of knowledge collaboration.
To examine whether the effects of the focal variables vary depending on the characteristics of a group task, we classified the 2,978 featured articles into two categories: academic and non-academic. Academic articles refer to at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal. We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The analysis results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, the curvilinear effect of the Pareto ratio and the inequality of knowledge sharing on collaboration efficiency is more sensitive for the more academic tasks in an online community.
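For readers who want to reproduce the two focal measures, here is a minimal Python sketch; the per-editor edit counts are invented, and in the study these values would then enter the Cox models alongside the seven controls.

```python
def pareto_ratio(contribs):
    """Share of total contributions made by the top 20% of contributors."""
    s = sorted(contribs, reverse=True)
    top = max(1, round(0.2 * len(s)))
    return sum(s[:top]) / sum(s)

def gini(contribs):
    """Gini coefficient of contribution inequality (0 = equal)."""
    s = sorted(contribs)
    n, total = len(s), sum(s)
    cum = sum((i + 1) * x for i, x in enumerate(s))
    return (2 * cum) / (n * total) - (n + 1) / n

# Invented per-editor edit counts for one featured article
edits_per_editor = [120, 45, 30, 8, 5, 4, 3, 2, 2, 1]
print(f"Pareto ratio: {pareto_ratio(edits_per_editor):.2f}")      # 0.75
print(f"Gini coefficient: {gini(edits_per_editor):.2f}")          # 0.69
```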

Intelligent Brand Positioning Visualization System Based on Web Search Traffic Information : Focusing on Tablet PC (웹검색 트래픽 정보를 활용한 지능형 브랜드 포지셔닝 시스템 : 태블릿 PC 사례를 중심으로)

  • Jun, Seung-Pyo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.93-111
    • /
    • 2013
  • As Internet and information technology (IT) continue to develop and evolve, the issue of big data has emerged at the foreground of scholarly and industrial attention. Big data is generally defined as data that exceed the range that can be collected, stored, managed and analyzed by existing conventional information systems, and the term also refers to the new technologies designed to effectively extract value from such data. With the widespread dissemination of IT systems, continual efforts have been made in various fields of industry, such as R&D, manufacturing, and finance, to collect and analyze immense quantities of data in order to extract meaningful information and to use this information to solve various problems. Since IT has converged with various industries in many aspects, digital data are now being generated at a remarkably accelerating rate, while developments in state-of-the-art technology have led to continual enhancements in system performance. The types of big data that are currently receiving the most attention include information available within companies, such as information on consumer characteristics, purchase records, logistics information and log information indicating the usage of products and services by consumers, as well as information accumulated outside companies, such as information on the web search traffic of online users, social network information, and patent information. Among these various types of big data, web searches performed by online users constitute one of the most effective and important sources of information for marketing purposes, because consumers search for information on the internet in order to make efficient and rational choices. Recently, Google has provided public access to its information on the web search traffic of online users through a service named Google Trends. Research that uses this web search traffic information to analyze the information search behavior of online users is now receiving much attention in academia and in industry. Studies using web search traffic information can be broadly classified into two fields. The first field consists of empirical demonstrations that show how web search information can be used to forecast social phenomena, the purchasing power of consumers, the outcomes of political elections, etc. The other field focuses on using web search traffic information to observe consumer behavior, for example by identifying the attributes of a product that consumers regard as important or by tracking changes in consumers' expectations, but relatively little research has been completed in this field. In particular, to the extent of our knowledge, hardly any brand-related studies have yet attempted to use web search traffic information to analyze the factors that influence consumers' purchasing activities. This study aims to demonstrate that consumers' web search traffic information can be used to derive the relations among brands and the relations between an individual brand and product attributes. When consumers input their search words on the web, they may use a single keyword, but they also often input multiple keywords to seek related information (this is referred to as simultaneous searching). A consumer performs a simultaneous search either to compare two product brands and obtain information on their similarities and differences, or to acquire more in-depth information about a specific attribute of a specific brand.
Web search traffic information shows that the quantity of simultaneous searches using certain keywords increases as the relation between them grows closer in the consumer's mind, so it is possible to derive the relations between keywords by collecting this relational data and subjecting it to network analysis. Accordingly, this study proposes a method of analyzing how brands are positioned by consumers and what relationships exist between product attributes and an individual brand, using simultaneous search traffic information. It also presents case studies demonstrating the actual application of this method, with a focus on tablet PCs, which belong to an innovative product group.
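A minimal sketch of the network-analysis step might look as follows in Python with the networkx library; the keywords, the simultaneous-search volumes, and the use of degree centrality are illustrative assumptions, since the paper derives its relational data from Google Trends.

```python
import networkx as nx

# (keyword A, keyword B, relative simultaneous-search volume) -- invented
co_search = [
    ("iPad", "Galaxy Tab", 80),
    ("iPad", "battery", 55),
    ("iPad", "display", 60),
    ("Galaxy Tab", "battery", 30),
    ("Galaxy Tab", "display", 45),
]

G = nx.Graph()
G.add_weighted_edges_from(co_search)

# Heavier edges ~ closer relations in consumers' minds; central nodes hint
# at which brands or attributes anchor the product category.
for node, c in sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]):
    print(f"{node:12s} centrality = {c:.2f}")
```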

A Study on the Improvement Plans of Police Fire Investigation (경찰화재조사의 개선방안에 관한 연구)

  • SeoMoon, Su-Cheol
    • Journal of Korean Institute of Fire Investigation
    • /
    • v.9 no.1
    • /
    • pp.103-121
    • /
    • 2006
  • We are living in more comfortable circumstances thanks to social development and an improved standard of living, but, on the other hand, we are exposed to an increasing number of fires on account of larger, taller and deeper-underground buildings and the use of various energy resources. The flooring materials of modern residences have gone through various alterations in accordance with the uses of a residence and are now used as finished goods covering the floors of apartments, houses and shops. There are many kinds of materials we usually come into contact with, but in the first place we need to experiment on the spread of fire with the heated-floor systems used in apartments, etc. and the floor coverings one can usually obtain easily. We, scientific investigators, come into contact with accidents caused by incendiarism or accidental fires closely connected with petroleum products on floor materials, which give rise to many problems. On this account, I would like to propose that we conduct experiments on fire patterns for each petroleum product and thereby discriminate an accidental fire from incendiarism. In an investigation, finding a live coal may seem an essential part of clearing up the cause of a fire, but it cannot be the cause of a fire itself. Besides, all sorts of fire cases and fire accidents are subject to legislation and standards intended to minimize the damage from fires and to cope with it at an early stage. That is to say, we are supposed to install various kinds of electric apparatus, automatic alarm equipment and automatic fire extinguishers in order to protect ourselves from the danger of fire, to check them at any time, to escape urgently in case of a fire outbreak, and to build fire-proof constructions to prevent flames from spreading to neighboring areas. Namely, several factors must be taken into consideration to investigate the cause of a fire case or accident; it is not reasonable for one investigator or one investigative team to clear up both the starting point and the cause of a fire. Accordingly, in this thesis, the explanations are limited to the judgement and verification of the cause of a fire and the concrete fire-spreading part, through on-the-spot investigation of where a fire broke out. The fire discernment is also focused on the early-stage fire-spreading part and the fire-outbreak sources, and I think the realities of police fire investigations and their problems are still a matter of debate. The cause of a fire must be examined by logical judgement on the basis of abundant scientific knowledge and experience covering the whole of fire phenomena. The judgement of the cause should be made with the fire-spreading situation at the spot as the central evidence, and in verifying it one should prove, by situational proof, the path from the traces of the fire spreading back to the fire-outbreak sources. The causal relation of a fire outbreak should not be proved by arbitrary opinion far from concrete facts, and there is also much chance of making mistakes if one draws deductions from a coincidence. It is absolutely necessary to observe with an objective attitude and grasp the situation of a fire when investigating its cause. Looking at the spot with prejudice is not allowed.
The source of the fire outbreak itself is likely to be considered the cause of a fire, and that makes us doubt results shaped by the interests of the independent investigators. So to speak, they set about their investigations with the police hoping it is not incendiarism, the fire department hoping it is not a problem of installations or equipment, insurance companies hoping it is incendiarism, the electric field hoping it is not an electric defect, and the gas-related parties hoping it is not a gas problem. One cannot look forward to fairer investigation or allay these misgivings, because the firing source itself is taken as the cause of a fire, and civil or criminal responsibilities attach to the firing source itself. On this account, investigation of the cause of a fire should be conducted with research, investigation and appraisal kept independent, and finally the cause should be cleared up with all the results put together.


A Mobile Landmarks Guide : Outdoor Augmented Reality based on LOD and Contextual Device (모바일 랜드마크 가이드 : LOD와 문맥적 장치 기반의 실외 증강현실)

  • Zhao, Bi-Cheng;Rosli, Ahmad Nurzid;Jang, Chol-Hee;Lee, Kee-Sung;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.1
    • /
    • pp.1-21
    • /
    • 2012
  • In recent years, the mobile phone has experienced an extremely fast evolution. It is equipped with high-quality color displays, high-resolution cameras, and real-time accelerated 3D graphics. In addition, other features include a GPS sensor and a digital compass. This evolution significantly helps application developers use the power of smart-phones to create a rich environment that offers a wide range of services and exciting possibilities. In mobile outdoor AR research to date, there are many popular location-based AR services, such as Layar and Wikitude. These systems have a big limitation: the AR contents are hardly overlaid on the real target. Another line of research is context-based AR services using image recognition and tracking, in which the AR contents are precisely overlaid on the real target, but the real-time performance is restricted by the retrieval time and is hard to achieve in large-scale areas. In our work, we combine the advantages of location-based AR with those of context-based AR: the system can easily find the surrounding landmarks first and then perform recognition and tracking on them. The proposed system mainly consists of two major parts, a landmark browsing module and an annotation module. In the landmark browsing module, users can view augmented virtual information (information media), such as text, pictures and video, on their smart-phone viewfinder when they point their smart-phone at a certain building or landmark. For this, a landmark recognition technique is applied in this work. SURF point-based features are used in the matching process due to their robustness. To ensure that the image retrieval and matching processes are fast enough for real-time tracking, we exploit the contextual device information (GPS and digital compass). This is used to select from the database only the nearest landmarks in the pointed direction; the queried image is matched only against this selected data, so the matching speed is significantly increased. The second part is the annotation module. Instead of viewing only the augmented information media, users can create virtual annotations based on linked data. Full knowledge about the landmark is not required; users can simply look for the appropriate topic by searching for it with a keyword in linked data. This helps the system find the target URI in order to generate correct AR contents. On the other hand, in order to recognize target landmarks, images of the selected building or landmark are captured from different angles and distances. This procedure resembles building a connection between the real building and the virtual information existing in the Linked Open Data. In our experiments, the search range in the database is reduced by clustering images into groups according to their coordinates. A grid-based clustering method and user location information are used to restrict the retrieval range. In existing research using clustering and GPS information, the retrieval time is around 70~80 ms; experimental results show that our approach reduces the retrieval time to around 18~20 ms on average. Therefore the total processing time is reduced from 490~540 ms to 438~480 ms. The performance improvement will be more obvious as the database grows. This demonstrates that the proposed system is efficient and robust in many cases.
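As an illustration of the contextual filtering step described above, the Python sketch below keeps only the landmarks that lie within an assumed distance and within the compass-pointed field of view before any SURF matching would be attempted; the coordinates, thresholds, and function names are assumptions, not the authors' implementation.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters."""
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    a = math.sin((p2 - p1) / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing from point 1 to point 2, in 0..360 degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

def candidates(user, heading, landmarks, max_m=300, fov=60):
    """Landmarks within max_m meters and +/- fov/2 degrees of the heading."""
    out = []
    for name, lat, lon in landmarks:
        d = haversine_m(user[0], user[1], lat, lon)
        diff = abs((bearing_deg(user[0], user[1], lat, lon) - heading + 180) % 360 - 180)
        if d <= max_m and diff <= fov / 2:
            out.append((name, d))
    return sorted(out, key=lambda t: t[1])  # nearest first, for matching order

# Invented coordinates: only landmarks ahead of the user survive the filter.
landmarks = [("Library", 37.5662, 126.9785), ("Tower", 37.5652, 126.9882)]
print(candidates((37.5660, 126.9780), 90.0, landmarks))  # user facing east
```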

Analysis on Factors Influencing Welfare Spending of Local Authority : Implementing the Detailed Data Extracted from the Social Security Information System (지방자치단체 자체 복지사업 지출 영향요인 분석 : 사회보장정보시스템을 통한 접근)

  • Kim, Kyoung-June;Ham, Young-Jin;Lee, Ki-Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.141-156
    • /
    • 2013
  • Research on the welfare services of local governments in Korea has rather focused on isolated issues such as disabilities, childcare, and the aging phenomenon (Kang, 2004; Jung et al., 2009). Lately, however, local officials have realized that they need more comprehensive welfare services for all residents, not just for the above-mentioned focus groups. Still, cases dealt with through the focused-group approach have remained the main research stream for various reasons (Jung et al., 2009; Lee, 2009; Jang, 2011). The Social Security Information System is an information system that comprehensively manages 292 welfare benefits provided by 17 ministries and 40 thousand welfare services provided by 230 local authorities in Korea. The purpose of the system is to improve the efficiency of the social welfare delivery process. The study of local government expenditure has been on the rise over the last few decades since the restart of local autonomy, but these studies have limitations in data collection. Measurement of a local government's welfare effort (spending) has primarily relied on the expenditure or budget per individual set aside for welfare. This practice of using a monetary value per individual as a "proxy value" for welfare effort (spending) is based on the assumption that expenditure is directly linked to welfare effort (Lee et al., 2007). This expenditure/budget approach commonly uses the total welfare amount or a percentage figure as the dependent variable (Wildavsky, 1985; Lee et al., 2007; Kang, 2000). However, the current practice of using the actual amount spent or a percentage figure as a dependent variable may have some limitations; since the budget or expenditure is greatly influenced by the total budget of a local government, relying on such monetary values may inflate or deflate the true "welfare effort" (Jang, 2012). In addition, a government budget usually contains a large amount of administrative cost, i.e., salaries for local officials, which is largely unrelated to actual welfare expenditure (Jang, 2011). This paper used local government welfare service data from the detailed data sets linked to the Social Security Information System. The purpose of this paper is to analyze the factors that affected the social welfare spending of 230 local authorities in 2012. The paper applied a multiple-regression-based model to analyze the pooled financial data from the system, and based on the regression analysis, the factors affecting self-funded welfare spending were identified. In our research model, we use the welfare budget/total budget ratio (%) of a local government as the true measurement of a local government's welfare effort (spending). In doing so, we exclude central government subsidies or support used for local welfare services, because central government welfare support does not truly reflect the welfare effort (spending) of a local government. The dependent variable of this paper is the volume of welfare spending, and the independent variables of the model comprise three categories: socio-demographic perspectives, the local economy and the financial capacity of the local government. This paper categorized local authorities into three groups: districts, cities and suburban areas. The model used a dummy variable as the control variable (local political factor). This paper demonstrates that the volume of welfare spending for welfare services is commonly influenced by the ratio of the welfare budget to the total local budget, the infant population, the self-reliance ratio and the level of unemployment.
Interestingly, the influential factors differ by the size of the local government. In the analysis of the determinants of local governments' self-funded welfare spending, we found significant effects of local government financial characteristics (the degree of financial independence, the financial independence rate and the ratio of the social welfare budget), of the regional economy (the opening-to-application ratio), and of population sociology (the ratio of infants). These results mean that local authorities should have differentiated welfare strategies according to their conditions and circumstances. It is meaningful that this paper has successfully identified the significant factors influencing the welfare spending of local governments in Korea.
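A minimal sketch of a pooled multiple-regression setup of this kind, using Python and statsmodels, is shown below; the variable names, the synthetic data, and the planted coefficients are illustrative stand-ins for the detailed data sets extracted from the Social Security Information System.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 230  # number of local authorities in the study

# Synthetic stand-in data; a real analysis would load the system's extract.
df = pd.DataFrame({
    "welfare_ratio": rng.uniform(5, 35, n),    # welfare budget / total budget (%)
    "infant_pop": rng.uniform(1, 8, n),        # share of infants in population (%)
    "self_reliance": rng.uniform(10, 90, n),   # financial independence rate (%)
    "unemployment": rng.uniform(1, 6, n),      # unemployment level (%)
    "is_district": rng.integers(0, 2, n),      # dummy: district vs city/suburb
})
# Assumed linear signal plus noise, only to make the example estimable.
df["self_welfare_spending"] = (
    0.8 * df["welfare_ratio"] + 1.5 * df["infant_pop"]
    + 0.3 * df["self_reliance"] - 0.5 * df["unemployment"]
    + rng.normal(0, 3, n)
)

model = smf.ols(
    "self_welfare_spending ~ welfare_ratio + infant_pop"
    " + self_reliance + unemployment + C(is_district)",
    data=df,
).fit()
print(model.summary().tables[1])  # coefficient table
```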