• Title/Summary/Keyword: SpaceX


Analysis of Traffic Accidents Injury Severity in Seoul using Decision Trees and Spatiotemporal Data Visualization (의사결정나무와 시공간 시각화를 통한 서울시 교통사고 심각도 요인 분석)

  • Kang, Youngok;Son, Serin;Cho, Nahye
    • Journal of Cadastre & Land InformatiX
    • /
    • v.47 no.2
    • /
    • pp.233-254
    • /
    • 2017
  • The purpose of this study is to analyze the main factors influencing the severity of traffic accidents and to visualize the spatiotemporal characteristics of traffic accidents in Seoul. To do this, we collected traffic accident data for Seoul over the four years from 2012 to 2015 and classified the accidents as slight, serious, or fatal according to their severity. The spatiotemporal characteristics of the accidents were analyzed with kernel density analysis, hotspot analysis, space-time cube analysis, and Emerging HotSpot Analysis, and the factors affecting accident severity were analyzed with a decision tree model. The results show that traffic accidents in Seoul are more frequent in the suburbs than in the central areas. In particular, accidents were concentrated in some commercial and entertainment areas of Seocho and Gangnam, and they intensified over time. For fatal accidents, statistically significant hotspot areas were found in Yeongdeungpo-gu, Guro-gu, Jongno-gu, Jung-gu, and Seongbuk-gu, although hotspots of fatal accidents by time of day showed different patterns. With regard to accident severity, the type of accident was the most important factor, followed in order of importance by the type of road, the type of vehicle, the time of the accident, and the type of traffic violation. Regarding the decision rules that lead to serious accidents, vans and trucks are highly likely to cause a serious accident where the road is wide and vehicle speeds are high; bicycles, cars, motorcycles, and other vehicle types are highly likely to cause a serious accident under the same circumstances during the dawn hours.
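The severity analysis in this abstract rests on a standard decision tree classifier over accident attributes. The following is a minimal sketch of that step, assuming hypothetical column names (accident_type, road_type, vehicle_type, hour, violation), a severity label column, and a CSV file name; the authors' actual variables and preprocessing are not reproduced here.

```python
# Minimal sketch of the decision-tree step described above (not the authors' code).
# Column names and the CSV path are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("seoul_accidents_2012_2015.csv")  # hypothetical file
features = ["accident_type", "road_type", "vehicle_type", "hour", "violation"]
X = pd.get_dummies(df[features], columns=["accident_type", "road_type",
                                          "vehicle_type", "violation"])
y = df["severity"]  # e.g. "slight", "serious", "fatal"

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

tree = DecisionTreeClassifier(max_depth=5, min_samples_leaf=50, random_state=0)
tree.fit(X_train, y_train)
print("test accuracy:", tree.score(X_test, y_test))

# Feature importances indicate which factors drive severity most strongly.
for name, imp in sorted(zip(X.columns, tree.feature_importances_),
                        key=lambda t: -t[1])[:5]:
    print(f"{name}: {imp:.3f}")

# The learned decision rules (e.g. "truck + wide road + high speed -> serious")
# can be inspected as text.
print(export_text(tree, feature_names=list(X.columns))[:500])
```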

A STUDY ON IN VIVO AND IN VITRO AMALGAM CORROSION (아말감의 구강내 부식 및 인공 부식에 관한 연구)

  • Lim, Byong-Mok;Kwon, Hyuk-Choon;Um, Chung-Moon
    • Restorative Dentistry and Endodontics
    • /
    • v.22 no.1
    • /
    • pp.1-33
    • /
    • 1997
  • The objective of this study was to analyze the in vitro and in vivo corrosion products of low- and high-copper amalgams. The four amalgam alloys used in this study were Fine cut, Caulk spherical, Dispersalloy, and Tytin. After each amalgam alloy and Hg were triturated according to the manufacturer's directions in a mechanical amalgamator (Amalgam mixer, Shinhung Co., Korea), the triturated mass was inserted into a cylindrical metal mold 12 mm in diameter and 10 mm in height. The mass was condensed under a compressive force of 150 kg/cm. Each specimen was removed from the mold and aged at room temperature for about seven days. Standard surface preparation was routinely carried out by emery-paper polishing under running water. In vitro amalgam specimens were potentiostatically polarized ten times in a normal saline solution at $37^{\circ}C$ (potentiostat: HA-301, Hukuto Denko Corp., Japan). Each specimen was subjected to an anodic polarization scan within the potential range -1700 mV to +400 mV (SCE). After the corrosion tests, anodic polarization curves and corrosion potentials were obtained. The amount of each component element dissolved from the amalgams into solution was measured three times by ICP-AES (Inductively Coupled Plasma Atomic Emission Spectrometry: Plasma 40, Perkin Elmer Co., U.S.A.). The four types of amalgam were placed in occlusal and buccal Class I cavities of four human third molars. After about five years, the restorations were carefully removed following tooth extraction so as to preserve the structural details, including the deteriorated margins. The occlusal surface, the amalgam-tooth interface, and the fractured surface of the in vivo amalgam corrosion products were analyzed. In vivo and in vitro amalgam specimens were examined and analyzed metallographically by SEM (Scanning Electron Microscope: JSM 840, Jeol Co., Japan) and EDAX (Energy Dispersive Micro X-ray Analyser: JSM 840, Jeol Co., Japan). 1. The following results were obtained from the in vitro corrosion tests. 1) The corrosion potentials of all amalgams became more noble after ten cycles of the in vitro corrosion test compared with the first cycle. 2) After ten cycles, the released Cu concentration in the saline solution was almost equal among the amalgams but highest in Fine cut; the Ag and Hg ion concentrations were highest in Caulk spherical, and Sn was highest in Dispersalloy. 3) Analyses of the in vitro surface corrosion products revealed the following. a) The corroded surface of Caulk spherical had Na-Sn-Cl containing clusters of $5{\mu}m$ needle-like crystals, oval Sn-Cl phases, and polyhedral Sn oxide phases. b) In Fine cut, there appeared to be a large Sn-containing phase surrounded by many Cu-Sn phases of $1{\mu}m$ granular shape. c) Dispersalloy was covered by a thick reticular layer containing a Zn-Cl phase. d) In Tytin, a very thin corroded layer had formed, with irregularly growing Sn-Cl phases resembling a stack of plates. 2. The following results were obtained from the analysis of the in vivo amalgam corrosion products. 1) The occlusal surfaces of all amalgams were covered by thick amorphous layers containing Ca-P elements, which were abraded by occlusal force. 2) At the tooth-amalgam interface, Ca-P containing products were observed in all amalgams but were most clearly seen in the low-copper amalgams. 3) Sn oxide appeared as polyhedral shapes in internal spaces in Caulk spherical and Fine cut. 4) Apical pyramidal Sn oxide and curved plate-like Sn-Cl phases were found in Dispersalloy. 5) In Tytin, Sn oxide and Sn hydroxide were not seen, but polyhedral Ag-Hg phase crystals, assumed to be a ${\beta}_1$ phase, appeared in internal spaces.


Preparation and Reactivity of Cu-Zn-Al Based Hybrid Catalysts for Direct Synthesis of Dimethyl Ether by Physical Mixing and Precipitation Methods (물리혼합 및 침전법에 의한 DME 직접 합성용 Cu-Zn-Al계 혼성촉매의 제조 및 반응특성)

  • Bang, Byoung Man;Park, No-Kuk;Han, Gi Bo;Yoon, Suk Hoon;Lee, Tae Jin
    • Korean Chemical Engineering Research
    • /
    • v.45 no.6
    • /
    • pp.566-572
    • /
    • 2007
  • Two hybrid catalysts for the direct synthesis of DME were prepared, and their catalytic activity was investigated. A hybrid catalyst for the direct synthesis of DME combines the catalytically active components for methanol synthesis and for methanol dehydration. The methanol synthesis catalyst was formed from a precursor containing Cu and Zn, while ${\gamma}-Al_2O_3$ was used as the methanol dehydration catalyst. The two hybrid catalysts were prepared by a physical mixing method (PM-CZ+D) and a precipitation method (CP-CZA/D), respectively. PM-CZ+D was prepared by physically mixing the methanol synthesis catalyst with the methanol dehydration catalyst, whereas CP-CZA/D was prepared by depositing Cu-Zn or Cu-Zn-Al components onto ${\gamma}-Al_2O_3$. The crystallinity and surface morphology of the synthesized catalysts were analyzed by X-ray diffraction (XRD) and scanning electron microscopy (SEM) to investigate their physical properties, and the BET surface area by $N_2$ adsorption and the Cu surface area by $N_2O$ chemisorption were measured for the hybrid catalysts. In addition, the catalytic activity of these hybrid catalysts was examined under varying reaction conditions: reaction temperatures of $250{\sim}290^{\circ}C$, reaction pressures of 50~70 atm, $[H_2]/[CO]$ mole ratios of 0.5~2.0, and space velocities of $1,500{\sim}6,000h^{-1}$. From these results, it was confirmed that the reactivity of CP-CZA/D was higher than that of PM-CZ+D. At a reaction temperature of $260^{\circ}C$, a pressure of 50 atm, an $[H_2]/[CO]$ ratio of 1.0, and a space velocity of $3,000h^{-1}$, the CO conversion over the CP-CZA/D hybrid catalyst was 72%, more than 20% higher than that over PM-CZ+D. $N_2O$ chemisorption showed that the Cu surface area of the CP-CZA/D hybrid catalyst was larger than that of the PM-CZ+D catalyst. It is therefore assumed that the catalytic activity was improved because the Cu particles of the hybrid catalyst prepared by the precipitation method were well dispersed.

The effect of Postural Changes on Pleural Fluid Constituents (흉수 구성 성분의 체위에 따른 차이)

  • Park, Byung-Kyu;Lee, Hyo-Jin;Kim, Yun-Seong;Heo, Jeong;Yang, Yong-Seok;Seoung, Nak-Heon;Lee, Min-Ki;Park, Soon-Kew;Shin, Young-Kee;Han, Kyeong-Moon;Choi, Pil-Sun;Soon, Choon-Hee
    • Tuberculosis and Respiratory Diseases
    • /
    • v.43 no.2
    • /
    • pp.221-227
    • /
    • 1996
  • Background: Measurements of pleural fluid constituents are of value in the diagnosis of pleural effusions and in the separation of exudates from transudates. The position of the patient (sitting or lying) prior to thoracentesis may produce differences in the measurement of these constituents. The purpose of this study was to determine whether postural differences in pleural fluid constituents exist and, if so, whether they are of any clinical significance. Method: 41 patients with pleural effusions on chest roentgenography were studied prospectively. The fluid cell counts, partial gas tensions, and concentrations of chemical constituents were compared in the supine and upright positions. Results: 1) A total of 10 patients were found to have a transudative effusion. In the transudates there was no significant difference in pleural fluid constituents according to postural change. 2) A total of 31 patients were found to have an exudative effusion. Statistically significant postural changes were noted in the pH, WBC counts, protein, and LDH concentrations of the exudates, which may be due to a postural sedimentation effect in the pleural space. 3) The PCO2 measurements and glucose concentrations were not affected by changes in position in either exudates or transudates. Conclusion: A postural sedimentation effect occurs in the pleural space with respect to the measurement of certain pleural fluid constituents when an inflammatory process is present. It is therefore recommended that thoracentesis be performed after 30 minutes in the sitting position.


The Comparison of Knee Joint Displaying between The Anteroposterior Weight Bearing View and the Metatarsophalangeal View with Osteoarthritis Patients (골관절염 환자의 촬영방법에 대한 고찰 : AP-WB(Weight-bearing AP), MTP(semiflexed) 촬영법의 비교 고찰을 중심으로)

  • Jeon, Ju-Seob;Park, Hwan-Sang;Moon, Il-Bong;Moon, Ju-Wan;Choi, Nam-Kil;Kim, Chang-Bok;Eun, Sung-Jong
    • Journal of radiological science and technology
    • /
    • v.28 no.2
    • /
    • pp.97-103
    • /
    • 2005
  • Objective: The aim of this study was to compare the anteroposterior weight-bearing (AP-WB) view and the metatarsophalangeal (MTP) view of the knee joint for assessing joint space narrowing (JSN) and osteophytes in osteoarthritis patients. Subjects and Materials: Two hundred twenty patients (38 men) who presented to the rheumatology department with knee pain had both AP-WB and MTP views taken on the same day. The radiographs were evaluated independently by 13 experienced observers (3 orthopedic surgeons, 2 rheumatologists, 3 radiologists, and 5 radiological technologists), who assessed JSN and osteophytes on a PACS monitor. JSN was scored by visual evaluation at the narrowest point of the medial compartment of the tibiofemoral joint in both knees. Osteophytes were graded 0 to 3 (bad 0, not bad 1, good 2, and very good 3) according to a standard atlas. All examinations used a Philips (Buckey Diagnostic-TH) X-ray unit, with an exposure of 60 kV, 8 mAs, and a 100 cm focus-to-film distance. Results: JSN was scored $1.32{\pm}0.050$ in the AP-WB view and $2.51{\pm}0.046$ in the MTP view; the JSN score of the MTP view was significantly higher than that of the AP-WB view (p<0.05). Osteophytes scored $2.14{\pm}0.054$ in the AP-WB view and $2.10{\pm}0.054$ in the MTP view; there was no difference between the MTP and AP-WB views in osteophyte score, but the MTP view was more reproducible than the AP-WB view. Conclusions: Joint space narrowing is the most important factor in diagnosing knee joint osteoarthritis. In the comparison of JSN, the joint space was displayed more widely in the MTP view than in the AP-WB view; in the comparison of osteophytes, there was no difference between the two views. It is concluded that the MTP view is the more useful method for diagnosing knee joint osteoarthritis.


Crystal Structures of $Cd_6-A$ Dehydrated at $750^{\circ}C$ and Dehydrated $Cd_6-A$ Reacted with Cs Vapor ($750^{\circ}C$ 에서 탈수한 $Cd_6-A$의 결정구조와 이 결정을 세슘 증기로 반응시킨 결정구조)

  • Se Bok Jang;Yang Kim
    • Journal of the Korean Chemical Society
    • /
    • v.37 no.2
    • /
    • pp.191-198
    • /
    • 1993
  • The crystal structures of $Cd_6-A$ evacuated at $2{\times}10^{-6}$ torr and $750^{\circ}C$ (a = 12.204(1) $\AA$) and of dehydrated $Cd_6-A$ reacted with 0.1 torr of Cs vapor at $250^{\circ}C$ for 12 hours (a = 12.279(1) $\AA$) have been determined by single-crystal X-ray diffraction techniques in the cubic space group Pm3m at $21(1)^{\circ}C$. Their structures were refined to final error indices $R_1=$ 0.081 and $R_2=$ 0.091 with 151 reflections, and $R_1=$ 0.095 and $R_2=$ 0.089 with 82 reflections, respectively, for which I > $3\sigma(I)$. In vacuum-dehydrated $Cd_6-A$, six $Cd^{2+}$ ions occupy threefold-axis positions near the 6-rings, recessed 0.460(3) $\AA$ into the sodalite cavity from the (111) plane at O(3): Cd-O(3) = 2.18(2) $\AA$ and O(3)-Cd-O(3) = $115.7(4)^{\circ}$. Upon treatment with 0.1 torr of Cs vapor at $250^{\circ}C$, all six $Cd^{2+}$ ions in dehydrated $Cd_6-A$ are reduced by Cs vapor, and Cs species are found at four crystallographic sites: 3.0 $Cs^+$ ions lie at the centers of the 8-rings at sites of $D_{4h}$ symmetry; ca. 9.0 $Cs^+$ ions lie on the threefold axes of the unit cell, ca. 7 in the large cavity and ca. 2 in the sodalite cavity; and ca. 0.5 $Cs^+$ ion is found near a 4-ring. In this structure, ca. 12.5 Cs species are found per unit cell, more than the twelve $Cs^+$ ions needed to balance the anionic charge of the zeolite framework, indicating that sorption of $Cs^0$ has occurred. The observed occupancies are simply explained by two unit-cell arrangements, $Cs_{12}-A$ and $Cs_{13}-A$. About 50% of the unit cells may have two $Cs^+$ ions in the sodalite unit near opposite 6-rings, six in the large cavity near 6-rings, and one in the large cavity near a 4-ring. The remaining 50% of the unit cells may have two Cs species in the sodalite unit that are closely associated with two of the eight $Cs^+$ ions in the large cavity to form linear $(Cs_4)^{3+}$ clusters. These clusters lie on threefold axes and extend through the centers of the sodalite units. In all unit cells, three $Cs^+$ ions fill the equipoints of symmetry $D_{4h}$ at the centers of the 8-rings.


A Study on measurement of scattery ray of Computed Tomography (전산화 단층촬영실의 산란선 측정에 대한 연구)

  • Cho, Pyong-Kon;Lee, Joon-Hyup;Kim, Yoon-Sik;Lee, Chang-Yeop
    • Journal of radiological science and technology
    • /
    • v.26 no.2
    • /
    • pp.37-42
    • /
    • 2003
  • Purpose: Computed tomography equipment is essential for radiological diagnosis. With the passage of time and the development of science, computed tomography has been developed repeatedly, and examinations using this equipment are expected to increase in the future. For this reason, the authors measured the scattered radiation generated in front of the lead glass for patients within the control room of the computed tomography room and outside the entrance door used by patients, and attempted to find a method for minimizing exposure to scattered radiation. Material and Method: From November 2001, twenty-five computed tomography units already installed and operated by 13 general hospitals and university hospitals in Seoul were included in this study. The exposure conditions recommended by the manufacturers for measuring scattered radiation were used. As objects, a DALI CT Radiation Dose Test Phantom for the head (⌀16 cm Plexiglas) and a phantom for the stomach (⌀32 cm Plexiglas) were used. For measurement of the scattered radiation, a reader (Radiation Monitor Controller Model 2026) and a G-M survey meter were used, together with a Radcal Corporation survey meter, model $20{\times}5-1800$, Electrometer/Ion Chamber, S/N 21740. The measurement points included the front of the lead glass for patients within the control room, where most of the radiographers' work is carried out, the outside of the entrance door used by patients and their guardians, and a point 100 cm from the isocenter during scanning of the object. Results: The working environment of the computed tomography rooms installed and operated by each hospital differed considerably depending on the circumstances of the hospital, and the scattered radiation was as follows. 1) The average distance from the isocenter of the computed tomography equipment to the lead glass for patients within the control room was 377 cm. The scattered radiation showed a diverse distribution, from locations where none was detected to locations where about 100 mR/week was detected, but it met the weekly tolerance of $2.58{\times}10^{-5}\;C/kg$ (100 mR/week). 2) The average distance from the isocenter to the outside of the entrance door used by patients and their guardians was 439 cm. The scattered radiation again showed a diverse distribution, from locations where almost none was detected to locations with varying levels, but in most cases it satisfied the weekly tolerance of $2.58{\times}10^{-5}\;C/kg$ (100 mR/week). 3) During scanning of the object, the amount of scattered radiation at the point 100 cm from the isocenter differed considerably among the units. Conclusion: The use of computed tomography as diagnostic radiation equipment is increasing daily. Compared with general X-ray radiography, its diagnostic value is very high, but there is also a high possibility of exposure to primary and scattered radiation. To reduce scattered radiation in the computed tomography room even slightly, it is essential to secure sufficient space, and more effort should be devoted to developing a variety of techniques that produce the best possible images at minimum cost.


Strategy for Store Management Using SOM Based on RFM (RFM 기반 SOM을 이용한 매장관리 전략 도출)

  • Jeong, Yoon Jeong;Choi, Il Young;Kim, Jae Kyeong;Choi, Ju Choel
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.93-112
    • /
    • 2015
  • Reflecting changes in consumers' consumption patterns, existing retail shops have evolved into hypermarkets or convenience stores offering mostly groceries and daily products. It is therefore important to maintain appropriate inventory levels and product configurations in order to use the limited space in a retail store effectively and to increase sales. Accordingly, this study proposes a product configuration and inventory level strategy based on the RFM (Recency, Frequency, Monetary) model and the SOM (self-organizing map) for managing a retail shop effectively. The RFM model is an analytic model that analyzes customer behavior based on past buying activities, and it can distinguish important customers in large data sets using three variables. R represents recency, which refers to the time of the last purchase; a customer who purchased more recently has a larger R. F represents frequency, which refers to the number of transactions in a particular period, and M represents monetary value, which refers to the amount of money spent in a particular period. The RFM method is thus known to be a very effective model for customer segmentation. In this study, SOM cluster analysis was performed using normalized values of the RFM variables. The SOM is regarded as one of the most distinguished artificial neural network models among unsupervised learning tools. It is a popular tool for clustering and visualizing high-dimensional data in such a way that similar items are grouped spatially close to one another, and it has been successfully applied in various technical fields for finding patterns. In our research, the procedure tries to find sales patterns by analyzing product sales records with recency, frequency, and monetary values, and to suggest a business strategy we build a decision tree on the SOM results. To validate the proposed procedure, we adopted M-mart data collected between 2014.01.01 and 2014.12.31. Each product was assigned R, F, and M values, and the products were clustered into nine clusters using the SOM. We also performed three tests using the weekday data, the weekend data, and the whole data set in order to analyze changes in sales patterns. To propose a strategy for each cluster, we examined the criteria of the product clustering; the clusters obtained from the SOM can be explained by the characteristics identified by the decision trees. As a result, we can suggest an inventory management strategy for each of the nine clusters through the proposed procedure. Products in the cluster with the highest R, F, and M values need a high inventory level and should be placed where they lie on customers' paths. In contrast, products in the cluster with the lowest R, F, and M values need a low inventory level and can be placed where visibility is low. Products in the cluster with the highest R values are usually new releases and should be placed at the front of the store. Managers should gradually decrease inventory levels for products in the cluster with the highest F values that were purchased mainly in the past, because that cluster has lower R and M values than the average, from which it can be deduced that the products have sold poorly in recent days and that total sales will also be lower than the purchase frequency suggests. The procedure presented in this study is expected to contribute to raising the profitability of retail stores. The paper is organized as follows. The second chapter briefly reviews the literature related to this study. The third chapter describes the proposed procedure, the fourth chapter applies the proposed procedure to actual product sales data, and the fifth chapter presents the conclusions of the study and directions for further research.
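As a rough illustration of the RFM-plus-SOM step described above, the following sketch computes per-product R, F, and M values from a transaction table and clusters them on a 3x3 SOM grid (nine clusters, matching the study). It assumes the third-party minisom package and hypothetical column names (product_id, date, amount) and file name; it is not the authors' implementation.

```python
# Minimal sketch of RFM scoring followed by SOM clustering (illustrative only).
import pandas as pd
from minisom import MiniSom  # third-party package: pip install minisom

tx = pd.read_csv("mmart_sales_2014.csv", parse_dates=["date"])  # hypothetical file
ref_date = tx["date"].max()

# R: recency (more recent purchase -> larger R, per the convention in the abstract),
# F: number of transactions, M: total sales amount, all computed per product.
rfm = tx.groupby("product_id").agg(
    R=("date", lambda d: -(ref_date - d.max()).days),
    F=("date", "count"),
    M=("amount", "sum"),
)

# Normalize each variable to [0, 1] before feeding it to the SOM.
norm = (rfm - rfm.min()) / (rfm.max() - rfm.min())
data = norm.to_numpy()

som = MiniSom(3, 3, input_len=3, sigma=0.8, learning_rate=0.5, random_seed=0)
som.train_random(data, 1000)

# Each product is assigned to one of the 9 SOM nodes (clusters).
coords = [som.winner(row) for row in data]
rfm["cluster"] = [i * 3 + j for i, j in coords]
print(rfm.groupby("cluster")[["R", "F", "M"]].mean())  # cluster profiles
```

The cluster-level mean R, F, and M values printed at the end are the kind of profile the study then explains with a decision tree in order to derive an inventory and placement strategy per cluster.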

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy-logic-based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve, in a very natural way, many problems that are very difficult to quantify at an analytical level. This paper shows a solution for handling membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the rather large memory space that is needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to a pre-defined shape. These kinds of functions are able to cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to the computation of the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the values of the membership functions at those points [3,10,14,15]. Such a solution provides a satisfying computational speed and a very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can also occur: it is indeed possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the points common to several fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of the membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common to fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (so as to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a memory word is defined by Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values for any element of the universe of discourse, dm(m) is the number of bits needed for a membership value m, and dm(fm) is the number of bits needed to represent the index of the corresponding membership function. In our case, Length = 3 × (5 + 3) = 24, so the memory dimension is 128 × 24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of every fuzzy set; the word dimension would then have been 8 × 5 bits, and the memory dimension 128 × 40 bits. Consistently with our hypothesis, in Fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets. Focusing on the elements 32, 64, and 96 of the universe of discourse, they are memorized as shown in the figure. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word from the program memory. The output bus of the program memory (μCOD) is given as input to a comparator (combinatorial network). If the index is equal to the bus value, then one of the non-null weights derives from the rule and is produced as output; otherwise the output is zero (Fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value for each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. In any case, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have very little influence on the memory space; weight computations are done by a combinatorial network, and therefore the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values for any element of the universe of discourse is limited, a constraint that is usually not very restrictive since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
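To make the word-packing scheme concrete, here is a small sketch, in Python for readability, of how one row of the antecedent memory could be packed and queried under the parameters quoted above (128-element universe, 8 fuzzy sets, 32 truth levels, at most 3 non-null memberships per element). The field layout and packing order are illustrative assumptions, not the exact hardware format of the paper.

```python
# Illustrative packing of one antecedent-memory row: up to nfm = 3 pairs of
# (3-bit fuzzy-set index, 5-bit membership value) -> 3 * (3 + 5) = 24 bits per word.
NFM = 3          # max non-null memberships per universe element
IDX_BITS = 3     # 8 fuzzy sets
VAL_BITS = 5     # 32 discretization levels

def pack_row(pairs):
    """pairs: list of at most NFM (set_index, membership_value) tuples."""
    assert len(pairs) <= NFM
    word = 0
    for slot, (idx, val) in enumerate(pairs):
        field = (idx << VAL_BITS) | val            # 8-bit field per slot
        word |= field << (slot * (IDX_BITS + VAL_BITS))
    return word                                    # fits in 24 bits

def lookup(word, set_index):
    """Return the membership value of set_index stored in this word, else 0."""
    for slot in range(NFM):
        field = (word >> (slot * (IDX_BITS + VAL_BITS))) & 0xFF
        idx, val = field >> VAL_BITS, field & ((1 << VAL_BITS) - 1)
        if idx == set_index and val != 0:
            return val
    return 0                                       # sets not stored are null

# Example: universe element 64 belongs to sets 2 and 3 with memberships 12 and 20.
row = pack_row([(2, 12), (3, 20)])
print(lookup(row, 3), lookup(row, 5))              # -> 20 0
```

As in the paper's scheme, only non-null memberships are stored, and a query compares the stored set index against the requested one, returning zero for any set not present in the word.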


Research Trend Analysis Using Bibliographic Information and Citations of Cloud Computing Articles: Application of Social Network Analysis (클라우드 컴퓨팅 관련 논문의 서지정보 및 인용정보를 활용한 연구 동향 분석: 사회 네트워크 분석의 활용)

  • Kim, Dongsung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.195-211
    • /
    • 2014
  • Cloud computing services provide IT resources as on-demand services. This is considered a key concept that will lead a shift from an ownership-based paradigm to a new pay-per-use paradigm, which can reduce the fixed costs of IT resources and improve flexibility and scalability. As IT services, cloud services have evolved from earlier, similar computing concepts such as network computing, utility computing, server-based computing, and grid computing, so research into cloud computing is closely related to and combined with various relevant computing research areas. To identify promising research issues and topics in cloud computing, it is necessary to understand the research trends in cloud computing more comprehensively. In this study, we collect bibliographic and citation information for cloud-computing-related research papers published in major international journals from 1994 to 2012, and analyze macroscopic trends and changes in the citation relationships among papers and the co-occurrence relationships of keywords by utilizing social network analysis measures. Through this analysis, we can identify the relationships and connections among research topics in cloud-computing-related areas and highlight new potential research topics. In addition, we visualize dynamic changes in cloud computing research topics using a proposed cloud computing "research trend map." A research trend map visualizes the positions of research topics in a two-dimensional space, with the frequency of a keyword (X-axis) and the rate of increase in the degree centrality of that keyword (Y-axis) as the two dimensions. Based on these two values, the space of the research map is divided into four areas: maturation, growth, promising, and decline. An area with high keyword frequency but a low rate of increase in degree centrality is defined as a mature technology area; an area where both keyword frequency and the rate of increase in degree centrality are high is defined as a growth technology area; an area where keyword frequency is low but the rate of increase in degree centrality is high is defined as a promising technology area; and an area where both keyword frequency and the rate of increase in degree centrality are low is defined as a declining technology area. Based on this method, cloud computing research trend maps make it possible to easily grasp the main research trends in cloud computing and to explain the evolution of research topics. According to the analysis of citation relationships, research papers on security, distributed processing, and optical networking for cloud computing rank at the top by the PageRank measure. From the analysis of keywords in research papers, cloud computing and grid computing showed high centrality in 2009, and keywords dealing with main elemental technologies such as data outsourcing, error detection methods, and infrastructure construction showed high centrality in 2010~2011. In 2012, security, virtualization, and resource management showed high centrality. Moreover, it was found that interest in the technical issues of cloud computing is increasing gradually. From the annual cloud computing research trend maps, it was verified that security is located in the promising area, virtualization has moved from the promising area to the growth area, and grid computing and distributed systems have moved to the declining area. The results indicate that distributed systems and grid computing received a great deal of attention as similar computing paradigms in the early stage of cloud computing research, a period focused on understanding and investigating cloud computing as an emergent technology and linking it to relevant established computing concepts. After this early stage, security and virtualization technologies became the main issues in cloud computing, which is reflected in their movement from the promising area to the growth area on the research trend maps. This study also reveals that current research in cloud computing has rapidly shifted from a focus on technical issues to a focus on application issues, such as SLAs (Service Level Agreements).
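The trend-map construction described above amounts to computing, for each keyword, its frequency and the change in its degree centrality in a keyword co-occurrence network, then assigning one of the four areas. A rough sketch of that logic is given below using networkx; the per-paper keyword lists, the period split, and the thresholds are illustrative assumptions rather than the authors' exact settings.

```python
# Illustrative sketch of a research trend map: keyword frequency vs. growth in
# degree centrality of the keyword co-occurrence network (not the authors' code).
from collections import Counter
from itertools import combinations
import networkx as nx

def cooccurrence_graph(papers):
    """papers: list of keyword lists, one list per paper."""
    g = nx.Graph()
    for kws in papers:
        for a, b in combinations(sorted(set(kws)), 2):
            w = g.get_edge_data(a, b, {"weight": 0})["weight"]
            g.add_edge(a, b, weight=w + 1)
    return g

# Hypothetical keyword lists for an earlier and a later period.
early = [["cloud computing", "grid computing"], ["virtualization", "security"]]
late = [["cloud computing", "security"], ["security", "SLA"], ["virtualization", "SLA"]]

g_early, g_late = cooccurrence_graph(early), cooccurrence_graph(late)
freq = Counter(kw for kws in late for kw in kws)
dc_early, dc_late = nx.degree_centrality(g_early), nx.degree_centrality(g_late)

FREQ_T, GROWTH_T = 2, 0.0  # illustrative quadrant thresholds
for kw in freq:
    growth = dc_late.get(kw, 0.0) - dc_early.get(kw, 0.0)
    if freq[kw] >= FREQ_T:
        area = "growth" if growth > GROWTH_T else "maturation"
    else:
        area = "promising" if growth > GROWTH_T else "decline"
    print(f"{kw}: freq={freq[kw]}, centrality growth={growth:+.2f} -> {area}")
```

Applied year by year to the real keyword data, this kind of classification is what lets the study track, for example, virtualization moving from the promising area into the growth area.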