• Title/Summary/Keyword: application case


An Ontology Model for Public Service Export Platform (공공 서비스 수출 플랫폼을 위한 온톨로지 모형)

  • Lee, Gang-Won;Park, Sei-Kwon;Ryu, Seung-Wan;Shin, Dong-Cheon
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.149-161
    • /
    • 2014
  • The export of domestic public services to overseas markets faces many potential obstacles stemming from different export procedures, target services, and socio-economic environments. To alleviate these problems, a business incubation platform conceived as an open business ecosystem can be a powerful instrument to support the decisions taken by participants and stakeholders. In this paper, we propose an ontology model and its implementation processes for a business incubation platform with an open and pervasive architecture to support public service exports. For the conceptual model of the platform ontology, export case studies are used for requirements analysis. The conceptual model shows the basic structure, with vocabulary and its meaning, the relationships between ontologies, and key attributes. For the implementation and test of the ontology model, the logical structure is edited using the Protégé editor. The core engine of the business incubation platform is the simulator module, where the various contexts of export businesses should be captured, defined, and shared with other modules through ontologies. It is well known that an ontology, in which concepts and their relationships are represented using a shared vocabulary, is an efficient and effective tool for organizing meta-information to develop structural frameworks in a particular domain. The proposed model consists of five ontologies derived from a requirements survey of major stakeholders and their operational scenarios: service, requirements, environment, enterprise, and country. The service ontology contains several components that can find and categorize public services through a case analysis of public service exports. Key attributes of the service ontology are composed of categories including objective, requirements, activity, and service. The objective category, which has sub-attributes including operational body (organization) and user, acts as a reference to search and classify public services. The requirements category relates to the functional needs at a particular phase of system (service) design or operation; its sub-attributes are user, application, platform, architecture, and social overhead. The activity category represents business processes during the operation and maintenance phase and has sub-attributes including facility, software, and project unit. The service category, with sub-attributes such as target, time, and place, acts as a reference to sort and classify the public services. The requirements ontology is derived from the basic and common components of public services and target countries. The key attributes of the requirements ontology are business, technology, and constraints. Business requirements represent the needs of processes and activities for public service export; technology represents the technological requirements for the operation of public services; and constraints represent the business law, regulations, or cultural characteristics of the target country. The environment ontology is derived from case studies of target countries for public service operation. Key attributes of the environment ontology are user, requirements, and activity. A user includes stakeholders in public services, from citizens to operators and managers; the requirements attribute represents the managerial and physical needs during operation; the activity attribute represents business processes in detail. 
The enterprise ontology is introduced from a previous study; its attributes are activity, organization, strategy, marketing, and time. The country ontology is derived from the demographic and geopolitical analysis of the target country, and its key attributes are economy, social infrastructure, law, regulation, customs, population, location, and development strategies. The priority list of target services for a certain country, and/or the priority list of target countries for a certain public service, is generated by a matching algorithm. These lists are used as input seeds to simulate consortium partnering and the government's policies and programs. In the simulation, the environmental differences between Korea and the target country can be customized through a gap analysis and a work-flow optimization process. When the process gap between Korea and the target country is too large for a single corporation to cover, a consortium is considered as an alternative, and various alternatives are derived from the capability index of enterprises. For financial packages, a mix of various foreign aid funds can be simulated at this stage. It is expected that the proposed ontology model and the business incubation platform can be used by various participants in the public service export market. It could be especially beneficial to small and medium businesses that have relatively few resources and little experience with public service export. We also expect that the open and pervasive service architecture in a digital business ecosystem will help stakeholders find new opportunities through information sharing and collaboration on business processes.
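
The matching step described above can be illustrated with a minimal Python sketch: each (service, country) pair is scored by how many of the service's requirement attributes the country's ontology can satisfy, and the scores are ranked into priority lists. The class and attribute names below are illustrative assumptions, not the ontology vocabulary actually used in the paper.

```python
# Minimal sketch of the matching idea described in the abstract: score each
# (public service, target country) pair by the overlap between the service's
# requirements and the country's capabilities, then rank to produce priority
# lists. All names and attributes are illustrative only.
from dataclasses import dataclass
from typing import Dict, List, Set


@dataclass
class ServiceOntology:
    name: str
    requirements: Set[str]          # e.g. functional or technology needs


@dataclass
class CountryOntology:
    name: str
    capabilities: Set[str]          # infrastructure, law, regulation, customs ...


def match_score(service: ServiceOntology, country: CountryOntology) -> float:
    """Fraction of the service's requirements the country can satisfy."""
    if not service.requirements:
        return 0.0
    return len(service.requirements & country.capabilities) / len(service.requirements)


def priority_list(services: List[ServiceOntology],
                  countries: List[CountryOntology]) -> Dict[str, List[str]]:
    """For each country, rank candidate services by match score (descending)."""
    ranking: Dict[str, List[str]] = {}
    for c in countries:
        scored = sorted(services, key=lambda s: match_score(s, c), reverse=True)
        ranking[c.name] = [s.name for s in scored]
    return ranking


if __name__ == "__main__":
    services = [
        ServiceOntology("e-procurement", {"broadband", "PKI", "procurement law"}),
        ServiceOntology("smart metering", {"broadband", "utility billing"}),
    ]
    countries = [
        CountryOntology("Country A", {"broadband", "PKI"}),
        CountryOntology("Country B", {"utility billing", "broadband"}),
    ]
    print(priority_list(services, countries))
```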

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, the Go-playing artificial intelligence program by Google DeepMind, won a decisive victory against Lee Sedol. Many people thought that a machine could not beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where it was difficult to obtain good performance with existing machine learning techniques. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether the deep learning techniques studied so far can be used not only for the recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data in this paper are the telemarketing response data of a bank in Portugal. They have input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account. To evaluate the applicability of deep learning algorithms and techniques to binary classification problems, we compared the performance of various models using the CNN and LSTM algorithms and dropout, which are widely used deep learning algorithms and techniques, with that of MLP models, a traditional artificial neural network. However, since not all network design alternatives can be tested, given the nature of artificial neural networks, the experiment was conducted with restricted settings on the number of hidden layers, the number of neurons in the hidden layers, the number of output filters, and the application conditions of the dropout technique. The F1 score was used to evaluate the models, to show how well they classify the class of interest rather than overall accuracy. The detailed methods for applying each deep learning technique in the experiment are as follows. The CNN algorithm reads adjacent values around a specific value and recognizes features; however, the distance between business data fields usually does not matter because each field is typically independent. In this experiment, we therefore set the filter size of the CNN to the number of fields, so that the whole record is learned at once, and added a hidden layer to make the decision based on the extracted features. For the model with two LSTM layers, the input direction of the second layer is reversed relative to the first layer in order to reduce the influence of the position of each field. 
In the case of the dropout technique, we set neurons to be dropped with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using the dropout technique, and the next best model was the MLP model with two hidden layers using the dropout technique. Several findings emerged from the experiment. First, models using the dropout technique make slightly more conservative predictions than those without it and generally show better classification performance. Second, CNN models show better classification performance than MLP models. This is interesting because the CNN performed well on binary classification problems, to which it has rarely been applied, as well as in the fields where its effectiveness has already been proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to business binary classification problems.
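
A minimal sketch of the kind of setup the abstract describes is given below: an MLP baseline and a 1-D CNN whose filter size equals the number of input fields, each with dropout at rate 0.5 and evaluated by F1 score. It uses Keras; the layer sizes, filter counts, and the random stand-in data are assumptions, not the authors' actual configuration or the Portuguese bank data.

```python
# Illustrative sketch (not the authors' exact architecture) of an MLP baseline
# and a 1-D CNN whose kernel covers all input fields at once, each regularized
# with dropout at rate 0.5 and evaluated with the F1 score.
import numpy as np
from sklearn.metrics import f1_score
from tensorflow.keras import layers, models

n_fields = 16                                # assumed number of input variables

def build_mlp(dropout=0.5):
    model = models.Sequential([
        layers.Input(shape=(n_fields,)),
        layers.Dense(32, activation="relu"),
        layers.Dropout(dropout),             # neurons dropped with probability 0.5
        layers.Dense(32, activation="relu"),
        layers.Dropout(dropout),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

def build_cnn(dropout=0.5):
    model = models.Sequential([
        layers.Input(shape=(n_fields, 1)),
        # one convolution covering all fields at once, as the abstract describes
        layers.Conv1D(filters=16, kernel_size=n_fields, activation="relu"),
        layers.Flatten(),
        layers.Dense(32, activation="relu"), # extra hidden layer for the decision
        layers.Dropout(dropout),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

# Random stand-in data in place of the bank telemarketing set.
X = np.random.rand(1000, n_fields)
y = (np.random.rand(1000) > 0.5).astype(int)

mlp = build_mlp()
mlp.fit(X, y, epochs=5, batch_size=32, verbose=0)
print("MLP F1:", f1_score(y, (mlp.predict(X).ravel() > 0.5).astype(int)))

cnn = build_cnn()
cnn.fit(X[..., None], y, epochs=5, batch_size=32, verbose=0)
print("CNN F1:", f1_score(y, (cnn.predict(X[..., None]).ravel() > 0.5).astype(int)))
```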

Visual Media Education in Visual Arts Education (미술교육에 있어서 시각적 미디어를 통한 조형교육에 관한 연구)

  • Park Ji-Sook
    • Journal of Science of Art and Design
    • /
    • v.7
    • /
    • pp.64-104
    • /
    • 2005
  • Visual media transmit images and information reproduced in large quantities, such as photography, film, television, video, advertisements, and computer images. To correspond to how students will receive and recognize culture in the future, arts education needs to arrange a field of study for visual culture. 'Visual culture' refers to the cultural phenomena of visual images conveyed via visual media, which include not only the categories of traditional art such as painting, sculpture, print, and design, but also the performance arts, such as fashion shows or carnival parades, and the mass and electronic media, such as photography, film, television, video, advertisement, cartoon, animation, and computer images. In the world of visual media, the image functions as an essential medium of communication. The culture of today is therefore called an 'era of image culture', which has shifted from an era centered on the alphabet to one centered on the image. The image, via visual media, has become a dominant means of communication in large parts of human life, so the image can be designated as a typical aspect of today's visual culture. The image, as an essential medium of communication, plays an important role in contemporary society. Digital images are produced in two ways. One is the conversion of analogue images, such as actual pictures, photographs, or film, into digital ones through digitization with a digital camera or scanner acting as an analogue/digital converter. The other is processing with computer drawing or the modeling of objects, which is suited to the production of pictorial and surreal images. Digital images produced in the latter way can be divided into pixel form and vector form. A vector is a line linking a point of departure to an end point, which organizes the information; the computer stores each line's reference location and the locations relative to one another. The digital image shows far more 'perfectness' than any other visual medium. Digital imagery has been evolving in diverse directions, such as the production of geometrical or organic image compositing, interactive art, multimedia art, and web art, in which the computer is applied as an extended tool of painting. The digitalized copy, with its endless reproduction of the original, is sometimes even interpreted as an extension of the print. Visual art is no longer the simple representational activity of a painter or sculptor, but is now intimately associated with the application of media. There are problems with images conveyed via visual media. First, the image via media does not reflect reality as it is, but reflects an artificially manipulated world, that is, a virtual reality. Second, the introduction of digital effects and the development of image-processing technology have enhanced the spectacle of destructive and violent scenes. Third, children tend to perceive the interactive images of computer games and virtual reality as reality, or truth. Education therefore needs not only to point out the ill effects of mass media and prevent the younger generation from being damaged by them, but also to offer the knowledge and know-how to cope actively with social and cultural circumstances. Visual media education is one of the essential methods for contemporary and future human beings amid the overflow of image information. The fostering of 'visual literacy' can be considered the very purpose of visual media education; it is a way to lead individuals to become discerning, active consumers and producers of visual media in life as far as possible. 
The elements of 'visual literacy' can be divided into a faculty of recognition related to visual media, a faculty of critical reception, a faculty of appropriate application, a faculty of active work, and a faculty of creative modeling, which are promoted together by 'visual literacy' education. In conclusion, the education of 'visual literacy' guides students to comprehend and discriminate visual image media carefully, to receive them critically, to apply them properly, and to produce them creatively and voluntarily. Moreover, it leads to artistic activity by means of new media. This education can be approached and enhanced through connection and integration with real life. The visual arts and their education play an important role in the digital era, which depends on visual communication via image information. Visual media today function as an essential element both in daily life and in the arts. Students can soundly understand today's visual phenomena by means of visual media and apply them as an expressive tool of everyday culture as well. A new recognition and valuation of visual image and media education is required to cultivate the capability to deal actively and uprightly with the changes in the history of civilization. 1) Visual media education helps cultivate a sensibility for images, which reacts to and deals with the circumstances. 2) It helps students comprehend contemporary arts and culture via new media. 3) It gives students a chance to experience visual modeling by means of new media. 4) It offers educational opportunities with images that have temporality and spatiality, and thereby increases the number of discerning people. 5) Modeling activity via new media keeps students continuously interested in the study and production of the plastic arts. 6) It raises the ability of visual communication needed to deal with an image-information society. 7) An education in digital images is also significant for cultivating people of talent for the future image-information society. To correspond to changing and developing social and cultural circumstances, and to the forms in which students receive and recognize them, visual arts education must arrange a field of study on the new visual culture. Besides, a more systematic and active program needs to be developed for visual media education. Educational contents should be extended to the media of visual images, that is, photography, film, television, video, computer graphics, animation, music video, computer games, and multimedia. Each medium must be approached separately, because each maintains its own modes and peculiarities according to the form in which it conveys its message. Concrete and systematic teaching methods and the quality of education must be researched and developed, centering on the development of a course of study. Teachers' foundational teaching capability should be cultivated for visual media education. In this case, attention must be paid to the fact that the technological level of the media is secondary, because school education does not intend to train expert and skillful producers, but to lay stress on the essentially aesthetic use of visual media within the social and cultural context, from the standpoint of the consumer, including the cultured person.


Medical Information Dynamic Access System in Smart Mobile Environments (스마트 모바일 환경에서 의료정보 동적접근 시스템)

  • Jeong, Chang Won;Kim, Woo Hong;Yoon, Kwon Ha;Joo, Su Chong
    • Journal of Internet Computing and Services
    • /
    • v.16 no.1
    • /
    • pp.47-55
    • /
    • 2015
  • Recently, hospital information system environments have tended to combine various smart technologies. Accordingly, various smart devices, such as smartphones and tablet PCs, are utilized in the medical information system, and these environments consist of various applications executing on heterogeneous sensors, devices, systems, and networks. In such hospital information system environments, applying security services by traditional access control methods causes problems. Most existing security systems use an access control list structure that permits only the accesses defined in an access control matrix, such as client name and service object method name. The major problem with this static approach is that it cannot quickly adapt to changing situations. Hence, new security mechanisms are needed that are more flexible and can easily be adapted to various environments with very different security requirements. In addition, research is needed to address changes in the medical treatment given to a patient. In this paper, we suggest a dynamic approach to medical information systems in smart mobile environments. We focus on how medical information systems are accessed according to dynamic access control methods based on the hospital's existing information system environment. The physical environment consists of mobile X-ray imaging devices, dedicated mobile and general smart devices, PACS, an EMR server, and an authorization server. The software environment for the synchronization and monitoring services of the mobile X-ray imaging equipment was developed on the .NET Framework under the Windows 7 OS, and for the dedicated smart device application we implemented dynamic access services through JSP and the Java SDK on the Android OS. Medical information exchanged among the PACS, the mobile X-ray imaging devices in the hospital, and the dedicated smart devices is based on the DICOM medical image standard, and the EMR information is based on HL7. To provide the dynamic access control service, we classify the patient's context according to bio-information such as oxygen saturation, heart rate, blood pressure, and body temperature, and present event trace diagrams divided into two cases: the general situation and the emergency situation. We also designed dynamic access to the medical care information by authentication method. The authentication information contains the ID/password, roles, position, working hours, and emergency certification codes for emergency patients. In the general situation, the dynamic access control method grants access to medical information according to the values of the authentication information; in an emergency, access to medical information is granted by an emergency code, without the authentication information. We also constructed an integrated medical information database schema consisting of medical information, patient, medical staff, and medical image information according to medical information standards. Finally, we show the usefulness of the dynamic access application service based on smart devices through execution results of the proposed system for patient contexts such as the general and emergency situations. In particular, the proposed system provides effective medical information services with smart devices in emergency situations through dynamic access control methods. As a result, we expect the proposed system to be useful for u-hospital information systems and services.
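
The dynamic access decision the abstract describes can be sketched as follows: the patient context is classified from bio-information, and access is then granted either through full credentials (general situation) or through an emergency certification code alone (emergency situation). The vital-sign thresholds, credential fields, and role names below are illustrative assumptions only, not the authors' actual values.

```python
# Minimal sketch of the dynamic access-control decision described in the
# abstract. Thresholds, credentials, and roles are placeholders.
from dataclasses import dataclass


@dataclass
class Vitals:
    spo2: float          # oxygen saturation (%)
    heart_rate: int      # beats per minute
    systolic_bp: int     # mmHg
    temperature: float   # degrees Celsius


def classify_context(v: Vitals) -> str:
    """Classify the patient context as 'general' or 'emergency' from bio-information."""
    if v.spo2 < 90 or v.heart_rate > 130 or v.systolic_bp < 80 or v.temperature > 39.5:
        return "emergency"
    return "general"


def may_access(context: str, *, user_id: str = "", password: str = "",
               role: str = "", on_duty: bool = False,
               emergency_code: str = "") -> bool:
    """Grant access to medical information according to the patient context.

    General situation: full authentication (ID/PWD, role, working hours).
    Emergency situation: an emergency certification code alone is sufficient.
    """
    if context == "emergency":
        return emergency_code == "EMG-CODE"                  # placeholder code
    authenticated = bool(user_id) and password == "secret"   # placeholder check
    return authenticated and role in {"physician", "nurse"} and on_duty


if __name__ == "__main__":
    patient = Vitals(spo2=86.0, heart_rate=142, systolic_bp=75, temperature=38.2)
    ctx = classify_context(patient)                          # -> "emergency"
    print(ctx, may_access(ctx, emergency_code="EMG-CODE"))
```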

APPLICATION OF FUZZY SET THEORY IN SAFEGUARDS

  • Fattah, A.;Nishiwaki, Y.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1051-1054
    • /
    • 1993
  • The International Atomic Energy Agency's Statute, in Article III.A.5, allows it “to establish and administer safeguards designed to ensure that special fissionable and other materials, services, equipment, facilities and information made available by the Agency or at its request or under its supervision or control are not used in such a way as to further any military purpose; and to apply safeguards, at the request of the parties, to any bilateral or multilateral arrangement, or at the request of a State, to any of that State's activities in the field of atomic energy”. Safeguards are essentially a technical means of verifying the fulfilment of political obligations undertaken by States and given legal force in international agreements relating to the peaceful uses of nuclear energy. The main political objectives are: to assure the international community that States are complying with their non-proliferation and other peaceful undertakings; and to deter (a) the diversion of safeguarded nuclear materials to the production of nuclear explosives or for military purposes and (b) the misuse of safeguarded facilities with the aim of producing unsafeguarded nuclear material. It is clear that no international safeguards system can physically prevent diversion. The IAEA safeguards system is basically a verification measure designed to provide assurance in those cases in which diversion has not occurred. Verification is accomplished by two basic means: material accountancy, and containment and surveillance measures. Nuclear material accountancy is the fundamental IAEA safeguards mechanism, while containment and surveillance serve as important complementary measures. Material accountancy refers to a collection of measurements and other determinations which enable the State and the Agency to maintain a current picture of the location and movement of nuclear material into and out of material balance areas, i.e. areas where all material entering or leaving is measurable. A containment measure is one that is designed, by taking advantage of structural characteristics such as containers, tanks or pipes, to establish the physical integrity of an area or item by preventing the undetected movement of nuclear material or equipment. Such measures involve the application of tamper-indicating or surveillance devices. Surveillance refers to both human and instrumental observation aimed at indicating the movement of nuclear material. The verification process consists of three overlapping elements: (a) provision by the State of information such as design information describing nuclear installations, accounting reports listing nuclear material inventories, receipts and shipments, documents amplifying and clarifying reports as applicable, and notification of international transfers of nuclear material; (b) collection by the IAEA of information through inspection activities such as verification of design information, examination of records and reports, measurement of nuclear material, examination of containment and surveillance measures, and follow-up activities in case of unusual findings; and (c) evaluation of the information provided by the State and of that collected by inspectors to determine the completeness, accuracy and validity of the information provided by the State and to resolve any anomalies and discrepancies. 
To design an effective verification system, one must identify the possible ways and means by which nuclear material could be diverted from peaceful uses, including means to conceal such diversions. These theoretical ways and means, which have become known as diversion strategies, are used as one of the basic inputs for the development of safeguards procedures, equipment and instrumentation. For the analysis of implementation strategy, it is assumed that non-compliance cannot be excluded a priori and that consequently there is a low but non-zero probability that a diversion could be attempted in all safeguards situations. An important element of diversion strategies is the identification of the various possible diversion paths: the amount, type and location of nuclear material involved, the physical route and conversion of the material that may take place, the rate of removal, and concealment methods, as appropriate. With regard to the physical route and conversion of nuclear material, the following main categories may be considered: unreported removal of nuclear material from an installation or during transit; unreported introduction of nuclear material into an installation; unreported transfer of nuclear material from one material balance area to another; unreported production of nuclear material, e.g. enrichment of uranium or production of plutonium; and undeclared uses of the material within the installation. With respect to the amount of nuclear material that might be diverted in a given time (the diversion rate), the continuum between the following two limiting cases is considered: one significant quantity or more in a short time, often known as abrupt diversion; and one significant quantity or more per year, for example by accumulation of smaller amounts each time to add up to a significant quantity over a period of one year, often called protracted diversion. Concealment methods may include restriction of inspectors' access, falsification of records, reports and other documents, replacement of nuclear material (e.g. use of dummy objects), falsification of measurements or of their evaluation, and interference with IAEA-installed equipment. As a result of diversion and its concealment or other actions, anomalies will occur. All reasonable diversion routes, scenarios, strategies and concealment methods have to be taken into account in designing safeguards implementation strategies so as to provide sufficient opportunities for the IAEA to observe such anomalies. The safeguards approach for each facility will make different use of these procedures, equipment and instrumentation according to the various diversion strategies which could be applicable to that facility and according to the detection and inspection goals which are applied. Postulated pathway sets of scenarios comprise those elements of diversion strategies which might be carried out at a facility or across a State's fuel cycle with declared or undeclared activities. All such factors, however, contain a degree of fuzziness that needs human judgment to reach the ultimate conclusion that all material is being used for peaceful purposes. Safeguards have traditionally been based on verification of declared material and facilities using material accountancy as a fundamental measure. The strength of material accountancy lies in the fact that it allows any diversion to be detected independently of the diversion route taken. 
Material accountancy detects a diversion after it has actually happened and is thus powerless to physically prevent it; it can only deter, through the risk of early detection, any contemplation by State authorities of carrying out a diversion. Recently the IAEA has been faced with new challenges. To deal with these, various measures are being considered to strengthen the safeguards system, such as enhanced assessment of the completeness of the State's initial declaration of nuclear material and installations under its jurisdiction, and enhanced monitoring and analysis of open information that may indicate inconsistencies with the State's safeguards obligations. Precise information vital for such enhanced assessments and analyses is normally not available or, if available, would be difficult and expensive to collect. Above all, a realistic appraisal of the truth needs sound human judgment.
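
As a rough illustration of how fuzzy set theory could grade such inherently fuzzy safeguards indicators before a human judgment is made, the following sketch maps a few hypothetical indicators to membership grades in an "anomalous" fuzzy set and aggregates them. The membership functions, thresholds, and aggregation rule are invented for illustration and do not come from the paper.

```python
# Minimal illustration of fuzzy set theory applied to safeguards indicators:
# each indicator is mapped to a membership grade in the fuzzy set "anomalous",
# and the grades are aggregated. All functions and thresholds are invented.
def ramp(x: float, low: float, high: float) -> float:
    """Piecewise-linear membership rising from 0 at `low` to 1 at `high`."""
    if x <= low:
        return 0.0
    if x >= high:
        return 1.0
    return (x - low) / (high - low)


def anomaly_grade(muf_kg: float, surveillance_gap_h: float, record_discrepancies: int) -> float:
    """Aggregate membership grade (0..1) that the situation is anomalous."""
    grades = [
        ramp(muf_kg, 0.0, 8.0),                # material unaccounted for
        ramp(surveillance_gap_h, 0.0, 24.0),   # hours of lost surveillance
        ramp(record_discrepancies, 0, 5),      # inconsistent records/reports
    ]
    # The maximum (fuzzy union) is used here; a weighted mean would model a
    # more gradual overall judgment.
    return max(grades)


if __name__ == "__main__":
    print(anomaly_grade(muf_kg=2.0, surveillance_gap_h=6.0, record_discrepancies=1))
```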


Development of Porcine Pericardial Heterograft for Clinical Application (Microscopic Analysis of Various Fixation Methods) (돼지의 심낭, 판막을 이용한 이종이식 보철편의 개발(고정 방법에 따른 조직학적 분석))

  • Kim, Kwan-Chang;Choi, Chang-Hyu;Lee, Chang-Ha;Lee, Chul;Oh, Sam-Sae;Park, Seong-Sik;Kim, Woong-Han;Kim, Kyung-Hwan;Kim, Yong-Jiin
    • Journal of Chest Surgery
    • /
    • v.41 no.3
    • /
    • pp.295-304
    • /
    • 2008
  • Background: Various experimental trials for the development of bioprosthetic devices are actively underway, secondary to the limited supply of autologous and homograft tissue to treat cardiac diseases. In this study, porcine bioprostheses that were treated with glutaraldehyde (GA), ethanol, or sodium dodecylsulfate (SDS) were examined with light microscopy and transmission electron microscopy for mechanical and physical imperfections before implantation. Material and Method: 1) Porcine pericardium, aortic valve, and pulmonary valve were examined using light microscopy and JEM-100CX II transmission electron microscopy, then compared with human pericardium and commercially produced heterografts. 2) Sections from six treated groups (GA-Ethanol, Ethanol-GA, SDS only, SDS-GA, Ethanol-SDS-GA and SDS-Ethanol-GA) were observed using the same methods. Result: 1) Porcine pericardium was composed of a serosal layer, fibrosa, and epicardial connective tissue. Treatment with GA, ethanol, or SDS had little influence on the collagen skeleton of porcine pericardium, except in the case of SDS pre-treatment, and there was no alteration in the collagen skeleton of the porcine pericardium compared to commercially produced heterografts. 2) Porcine aortic valve was composed of lamina fibrosa, lamina spongiosa, and lamina ventricularis. Treatment with GA, ethanol, or SDS had little influence on these three layers and the collagen skeleton of porcine aortic valve, except in the case of SDS pre-treatment, and there were no alterations in the three layers or the collagen skeleton of porcine aortic valve compared to commercially produced heterografts. Conclusion: There was little physical and mechanical damage to porcine bioprosthesis structures during the various glutaraldehyde fixation processes combined with anti-calcification or decellularization treatments. However, SDS treatment preceding GA fixation changed the collagen fibers into a slightly condensed form, which appeared degraded on transmission electron micrographs. The optimal methods and conditions for sodium dodecylsulfate (SDS) treatment need to be modified.

End to End Model and Delay Performance for V2X in 5G (5G에서 V2X를 위한 End to End 모델 및 지연 성능 평가)

  • Bae, Kyoung Yul;Lee, Hong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.107-118
    • /
    • 2016
  • The advent of 5G mobile communications, expected in 2020, will provide many services such as the Internet of Things (IoT) and vehicle-to-infra/vehicle/nomadic (V2X) communication. There are many requirements for realizing these services: reduced latency, high data rate and reliability, and real-time service. In particular, a high level of reliability and delay sensitivity with an increased data rate is very important for M2M, IoT, and Factory 4.0. Around the world, 5G standardization organizations have considered these services and grouped them to derive the technical requirements and service scenarios. The first scenario is broadcast services that use a high data rate for multiple cases such as sporting events or emergencies. The second scenario is support for e-Health, car reliability, etc.; the third scenario relates to VR games with delay sensitivity and real-time techniques. Recently, these groups have been reaching agreement on the requirements for such scenarios and the target levels. Various techniques are being studied to satisfy such requirements and are being discussed in the context of software-defined networking (SDN) as the next-generation network architecture. SDN, which is being standardized by the ONF, basically refers to a structure that separates the signals for the control plane from the packets for the data plane. One of the best examples of the need for low latency and high reliability is an intelligent traffic system (ITS) using V2X. Because a car passes through a small cell of the 5G network very rapidly, messages to be delivered in the event of an emergency have to be transported in a very short time; this is a typical example requiring high delay sensitivity. 5G has to support the high reliability and delay sensitivity requirements for V2X in the field of traffic control, and for these reasons V2X is a major delay-critical application. V2X (vehicle-to-infra/vehicle/nomadic) represents all types of communication methods applicable to roads and vehicles; it refers to a connected or networked vehicle. V2X can be divided into three kinds of communication. The first is communication between a vehicle and infrastructure (vehicle-to-infrastructure; V2I); the second is communication between one vehicle and another (vehicle-to-vehicle; V2V); the third is communication between a vehicle and mobile equipment (vehicle-to-nomadic devices; V2N). Further kinds will be added in various fields in the future. Because the SDN structure is under consideration as the next-generation network architecture, the SDN architecture is significant. However, the centralized architecture of SDN can be unfavorable for delay-sensitive services, because a centralized architecture must communicate with many nodes and provide processing power. Therefore, in the case of emergency V2X communications, delay-related control functions require a tree-like supporting structure. In such a scenario, the architecture of the network processing the vehicle information is a major variable affecting delay. Because it is difficult to meet the desired level of delay sensitivity with a typical fully centralized SDN structure, research on the optimal size of an SDN for processing the information is needed. This study examined the SDN architecture considering the V2X emergency delay requirements of a 5G network in the worst-case scenario and performed a system-level simulation on the speed of the car, the cell radius, and the cell tier to derive the range of cells for information transfer in the SDN network. 
In the simulation, because 5G provides a sufficiently high data rate, the information supporting neighboring vehicles was assumed to be delivered to the car without errors. Furthermore, the 5G small cell was assumed to have a cell radius of 50-100 m, and the maximum speed of the vehicle was set to 30-200 km/h in order to examine the network architecture that minimizes the delay.
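
The worst-case question examined in the simulation can be illustrated with a back-of-the-envelope sketch: how long a vehicle remains inside a small cell at a given radius and speed, and whether an emergency message can traverse an assumed SDN control path within a delay budget. All delay figures and the budget below are assumptions, not values reported in the paper.

```python
# Back-of-the-envelope sketch of the worst-case question the abstract studies:
# cell residence time of a vehicle, and end-to-end delay over an assumed SDN
# control path versus an assumed emergency V2X delay budget.
def cell_residence_time_s(cell_radius_m: float, speed_kmh: float) -> float:
    """Worst-case chord of length 2*radius traversed at the given speed."""
    speed_ms = speed_kmh / 3.6
    return 2.0 * cell_radius_m / speed_ms


def end_to_end_delay_ms(cell_tiers: int,
                        radio_ms: float = 1.0,
                        per_hop_ms: float = 0.5,
                        controller_ms: float = 2.0) -> float:
    """Radio access + backhaul hops to the SDN controller + controller processing."""
    return radio_ms + cell_tiers * per_hop_ms + controller_ms


if __name__ == "__main__":
    budget_ms = 5.0                       # assumed emergency V2X delay budget
    for radius in (50, 100):              # cell radius range from the abstract
        for speed in (30, 200):           # vehicle speed range from the abstract
            t = cell_residence_time_s(radius, speed)
            print(f"radius={radius} m, speed={speed} km/h -> {t:.2f} s in cell")
    for tiers in range(1, 6):
        d = end_to_end_delay_ms(tiers)
        print(f"{tiers} cell tier(s): {d:.1f} ms ({'OK' if d <= budget_ms else 'over budget'})")
```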

Studies on the Virus in Silkworm Bombyx mori L. -Resistance to Virus Disease- (가잠 Virus에 관한 연구 -저항성에 관한 기초조사-)

  • 박광의;강석권
    • Journal of Sericultural and Entomological Science
    • /
    • v.9
    • /
    • pp.67-87
    • /
    • 1969
  • 1. Objectives and Importance. Many silkworms have been damaged by nuclear polyhedrosis virus diseases throughout the country every year, causing a decrease in cocoon production of approximately ten per cent per year. The damage caused by the infectious virus has occurred in spite of complete disinfection. In this respect, it is well known that it is impossible, at the present time, to protect the silkworm from these virus infections through chemical and physical control methods. Therefore, this author has attempted to solve this urgent problem from the viewpoint of heredity and breeding, investigating the different resistances and heritabilities among 120 strains collected from throughout the country, and selecting the ones with the highest resistance as basic materials for silkworm breeding. 2. Results of work. 1) The strains with strong resistance to the nuclear polyhedrosis virus diseases are N$_4$, N$_{15}$, N$_{48}$, C$_{55}$ and Em; their log ED$_{50}$ values vary between 0.799 and 1.611. The susceptible strains are N$_{20}$, C$_{62}$, N$_{76}$, N$_{79}$ and C$_{108}$; their log ED$_{50}$ values vary between 5.159 and 7.258 (Reference Table 4). The Japanese strain, with a log ED$_{50}$ value of 3.770, is the strongest, followed by the Chinese strain with a log ED$_{50}$ value of 3.564; the weakest is the European strain with a log ED$_{50}$ value of 3.3381. The slope coefficient of the regression equation of susceptibility varies between 0.1 and 0.6, so the uniformity of resistance of the strains preserved in this country is comparatively low. The hereditary phenomena of the resistance of each strain and the concrete method of its application to silkworm breeding remain subjects for later studies. 2) The content of water and ash in the silkworm is not correlated with the capability for resistance to the virus diseases (Reference Table 8), but it is very significantly correlated with the mortality rate in common rearing. In the case of silkworms which have just completed the fourth moult, the content of water and ash is not related to the mortality rate. In the case of silkworms which have just completed the third moult, however, the water (+0.326) and ash (+0.362) contents registered high significance. The ash content in the first (+0.520) and second (+0.386) moults is highly significant, but the water content in both cases is not significant (Reference Table 7). 3) The No. 205 strain proved to be the best in character among the various F$_1$ hybrids. No. 204 was very good in strength but a little lower in cocoon character than the control. No. 212 was a little low in cocoon character and average in mortality, but its cocoon harvest was the best among all the varieties offered (Reference Table 9). 4) In short, the above-mentioned strains known to have strong resistance to the virus disease are expected to provide basic data for breeding strong varieties. It is proposed that continued research should be conducted on the characteristics of the various strains for a satisfactory preservation of their various characteristics.
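
As an aside on how a log ED$_{50}$ value of the kind reported above can be estimated, the following minimal sketch fits a logistic dose-response curve to dose-mortality data; the dose and mortality numbers are invented for illustration and are not the paper's data.

```python
# Minimal sketch of estimating log ED50 by fitting a logistic dose-response
# curve to (log dose, mortality) data. The data points below are invented.
import numpy as np
from scipy.optimize import curve_fit

def logistic(log_dose, log_ed50, slope):
    """Expected mortality fraction as a function of log10(dose)."""
    return 1.0 / (1.0 + np.exp(-slope * (log_dose - log_ed50)))

log_dose = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])        # log10 polyhedra per ml
mortality = np.array([0.02, 0.10, 0.35, 0.62, 0.88, 0.97])  # observed fractions

params, _ = curve_fit(logistic, log_dose, mortality, p0=[2.5, 1.0])
log_ed50, slope = params
print(f"log ED50 = {log_ed50:.3f}, slope = {slope:.3f}")
```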


Development and application of prediction model of hyperlipidemia using SVM and meta-learning algorithm (SVM과 meta-learning algorithm을 이용한 고지혈증 유병 예측모형 개발과 활용)

  • Lee, Seulki;Shin, Taeksoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.111-124
    • /
    • 2018
  • This study aims to develop a classification model for predicting the occurrence of hyperlipidemia, one of the chronic diseases. Prior studies applying data mining techniques to disease prediction can be classified into studies designing models for predicting cardiovascular disease and studies comparing disease prediction results. In the foreign literature, studies predicting cardiovascular disease were predominant among disease prediction studies using data mining techniques. Domestic studies were not much different, but focused mainly on hypertension and diabetes. Since hyperlipidemia, like hypertension and diabetes, is a chronic disease of high importance, this study selected hyperlipidemia as the disease to be analyzed. We developed a model for predicting hyperlipidemia using SVM and meta-learning algorithms, which are already known to have excellent predictive power. To achieve the purpose of this study, we used the data set from the 2012 Korea Health Panel. The Korea Health Panel produces basic data on the level of health expenditure, health status, and health behavior, and has conducted an annual survey since 2008. In this study, 1,088 patients with hyperlipidemia were randomly selected from the hospitalization, outpatient, emergency, and chronic disease data of the 2012 Korea Health Panel, and 1,088 non-patients were also randomly extracted, so that a total of 2,176 people were selected for the study. Three methods were used to select input variables for predicting hyperlipidemia. First, a stepwise method was performed using logistic regression. Among the 17 variables, the categorical variables (except for length of smoking) were expressed as dummy variables, treated as separate variables relative to the reference group, and analyzed; six variables (age, BMI, education level, marital status, smoking status, gender), excluding income level and smoking period, were selected at the 0.1 significance level. Second, C4.5, a decision tree algorithm, was used; the significant input variables were age, smoking status, and education level. Finally, a genetic algorithm was used. For SVM, the input variables selected by the genetic algorithm consisted of six variables (age, marital status, education level, economic activity, smoking period, and physical activity status), and for the artificial neural network the input variables selected by the genetic algorithm consisted of three variables (age, marital status, and education level). Based on the selected variables, we compared SVM, the meta-learning algorithm, and other prediction models for hyperlipidemia patients, and compared the classification performances using TP rate and precision. The main results of the analysis are as follows. First, the accuracy of the SVM was 88.4% and the accuracy of the artificial neural network was 86.7%. Second, the accuracy of classification models using the input variables selected through the stepwise method was slightly higher than that of classification models using all the variables. Third, the precision of the artificial neural network was higher than that of SVM when only the three variables selected by the decision tree were used as input variables. For classification models based on the input variables selected through the genetic algorithm, the classification accuracy of SVM was 88.5% and that of the artificial neural network was 87.9%. 
Finally, this study indicated that stacking, the meta-learning algorithm proposed in this study, has the best performance when it uses the predicted outputs of SVM and MLP as the input variables of an SVM meta-classifier. The purpose of this study was to predict hyperlipidemia, one of the representative chronic diseases. To do this, we used SVM and meta-learning algorithms, which are known to have high accuracy. As a result, the classification accuracy for hyperlipidemia with stacking as the meta-learner was higher than that of the other meta-learning algorithms. However, the predictive performance of the proposed meta-learning algorithm is the same as that of the best-performing single model, the SVM (88.6%). The limitations of this study are as follows. First, various variable selection methods were tried, but most variables used in the study were categorical dummy variables; with a large number of categorical variables, the results may differ if continuous variables are used, because models such as decision trees may be better suited to categorical variables than general models such as neural networks. Despite these limitations, this study is significant in predicting hyperlipidemia with hybrid models such as meta-learning algorithms, which have not been studied previously. The result of improving model accuracy by applying various variable selection techniques is also meaningful. In addition, we expect that our proposed model will be effective for the prevention and management of hyperlipidemia.
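
A minimal sketch of the stacking setup the abstract describes, with SVM and MLP base learners feeding an SVM meta-classifier, could look like the following; the hyperparameters and the synthetic data are placeholders, not the study's settings or the Korea Health Panel data.

```python
# Minimal sketch of a stacking ensemble with SVM and MLP base learners and an
# SVM meta-classifier, as described in the abstract. Data and settings are
# synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.metrics import accuracy_score, precision_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic binary-classification data standing in for the 2,176 subjects.
X, y = make_classification(n_samples=2176, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("svm", SVC(kernel="rbf", probability=True, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)),
    ],
    final_estimator=SVC(kernel="rbf", random_state=0),   # meta-classifier
    cv=5,                                                 # out-of-fold predictions for the meta level
)
stack.fit(X_tr, y_tr)
pred = stack.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print("precision:", precision_score(y_te, pred))
```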

The Effects of the Computer Aided Innovation Capabilities on the R&D Capabilities: Focusing on the SMEs of Korea (Computer Aided Innovation 역량이 연구개발역량에 미치는 효과: 국내 중소기업을 대상으로)

  • Shim, Jae Eok;Byeon, Moo Jang;Moon, Hyo Gon;Oh, Jay In
    • Asia pacific journal of information systems
    • /
    • v.23 no.3
    • /
    • pp.25-53
    • /
    • 2013
  • This study empirically analyzes the effect of Computer Aided Innovation (CAI) on improving R&D capabilities. A survey was distributed by e-mail and Google Docs to the CTOs of 235 SMEs; 142 surveys were returned (a 60.4% return rate). Results from 119 companies (83.8%), the effective samples after excluding non-responses, insincere responses, estimated values, etc., were used for the statistical analysis. In terms of sample traits, companies with less than 50 billion KRW in sales account for 76.5% of the surveyed companies, and companies with fewer than 300 employees account for 83.2%. In terms of business type, companies that work with big companies (called 'partners with big companies' hereunder) account for 68.1%, and SMEs based on their own business (called 'independent SMEs') account for 31.9%. The status of IT system adoption was compared by business type, partners with big companies versus independent SMEs: ERP adoption is 18.5% to 34.5%, QMS is 11.8% to 9.2%, PLM (Product Life-cycle Management) is 6.7% to 2.5%, and 3D CAD is 47.1% to 21%. The IT system adoption and application of independent SMEs appeared very weak compared with partner companies of big companies. This study takes IT infra and IT utilization as the CAI capability factors, which are the independent variables. The R&D capability factors, which are the dependent variables, are organization capability, process capability, HR capability, technology-accumulating capability, and internal/external collaboration capability. The highest average value among the variables was 4.24, for organization capability 2. The lowest average value was 3.01, for the IT infra item on whether users can access data and information in other areas and use them with ease when required during new product development; the inferior IT infra environment of typical SMEs seems to be reflected in CAI itself. To review the validity of the measurement variables, factor analysis was performed; seven factors with eigenvalues over 1.0 were extracted from the dependent and independent variables, and they explain 71.167% of the total variance. The reliability of each item was then checked with Cronbach's alpha; all factors, with coefficients of at least 0.611, were judged reliable. Next, correlation analysis between the variables was performed. The R&D capability factors arranged as dependent variables (organization capability, process capability, HR capability, technology-accumulating capability, and internal/external collaboration capability) showed significant correlations, at the 99% confidence level, with all variables of IT infra and IT utilization, the independent variables. In addition, the correlation coefficient between each pair of factors is less than 0.8, which supports the validity of the measurement. The pair with the highest coefficient, 0.628, was IT utilization and technology-accumulating capability. A regression model was used, under the hypothesis of a linear relation between the independent and dependent variables, to identify the impact of the CAI capability factors on R&D. 
The explanatory power of IT infra, as a CAI capability factor, for organization capability, process capability, human resources capability, technology-accumulating capability, and collaboration capability is 10.3%, 7%, 11.9%, 30.9%, and 10.5% respectively. IT utilization likewise shows generally low explanatory power, with 12.4%, 5.9%, 11.1%, 38.9%, and 13.4% for organization capability, process capability, human resources capability, technology-accumulating capability, and collaboration capability respectively. However, both independent-variable factors show relatively very high explanatory power for technology-accumulating capability. The regression equations comprising the independent and dependent variables are all significant (p<0.005), and the fit of the regression models appears good. When the tests for the dependent and independent variables are estimated, the hypothesized effects of the ten factors all appeared significant in the regression model coefficients (p<0.01). As a result of the linear regression analysis between the two independent variables identified by the influence factor analysis and R&D capability, IT infra and IT utilization, the CAI capability factors, have positive relationships with the dependent R&D capability factors: organization capability, process capability, human resources capability, technology-accumulating capability, and internal/external collaboration capability. They were identified as significant factors affecting R&D capability. However, when the moderating variables are considered, a big gap is found compared to the companies as a whole. First of all, in the case of partner companies with big companies, IT infra as a CAI capability has positive relationships with organization capability, process capability, human resources capability, and technology-accumulating capability among the R&D capabilities, but collaboration capability appeared insignificant. IT utilization as a CAI capability factor has positive relationships with organization capability, process capability, human resources capability, and internal/external collaboration capability, just as for the companies as a whole. Next, analyzing independent SMEs as the moderating variable produced very different results from those of the whole sample or of the partner companies with big companies: for IT infra, all factors except technology-accumulating capability were rejected, and for IT utilization, all factors except technology-accumulating capability and collaboration capability were rejected. Considering the above moderating variables, the following conclusions were drawn in this study. First, in the case of big companies or partner companies with big companies, IT infra and IT utilization positively affect the improvement of R&D capabilities. This is because most big companies encourage innovation by requiring a certain level of IT utilization and IT infra building from their partner companies. Second, in all companies, IT infra and IT utilization as CAI capabilities positively affect at least technology-accumulating capability among the R&D capability factors. Most of the explanatory power is low, at around 10%, but for technology-accumulating capability it is rather high, around 25.6% to 38.4%; CAI capability was found to contribute strongly to technology-accumulating capability. 
Companies should not regard IT infra and IT utilization merely as simple product development tools for the R&D department. Rather, they should consider using them as strategic tools for management innovation that drive company-wide innovation centered on new product development, not only to improve the technology-accumulating capability of the R&D department. This suggests that CAI can be a way to improve technology-accumulating capability in R&D and to build the dynamic capability needed to acquire sustainable competitive advantage.
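
The per-capability regressions summarized above can be illustrated with a minimal sketch that regresses each R&D capability on the two CAI factors and reads off the explanatory power (R²); the survey data here are synthetic placeholders, not the study's 119 responses.

```python
# Minimal sketch of the regressions the abstract reports: each R&D capability
# is regressed on the CAI capability factors (IT infra, IT utilization) and the
# explanatory power (R^2) is read off. The data below are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 119                                   # effective sample size in the study
it_infra = rng.normal(3.0, 0.6, n)        # 5-point Likert-style scores (synthetic)
it_util = rng.normal(3.3, 0.6, n)
X = np.column_stack([it_infra, it_util])

# Hypothetical effect sizes used only to generate synthetic outcomes.
rd_capabilities = {
    "organization": 0.2, "process": 0.15, "human resources": 0.2,
    "technology-accumulating": 0.6, "collaboration": 0.2,
}

for name, weight in rd_capabilities.items():
    y = weight * (it_infra + it_util) + rng.normal(0, 1.0, n)   # synthetic outcome
    model = LinearRegression().fit(X, y)
    print(f"{name}: R^2 = {model.score(X, y):.3f}")
```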