• Title/Summary/Keyword: operational concept

Search Results: 530

A Study on the Development of Standard Diagnostic Table for Oak Mushroom Management and its Applicability (표고버섯 경영 표준진단표의 개발 및 현지 적용)

  • Jeon, Jun-Heon;Won, Hyun-Kyu;Yoo, Byoung-Il;Lee, Seong-Youn;Lee, Jung-Min;Ji, Dong-Hyun;Kim, Yeon-Tae;Kang, Kil-Nam;Oh, Duk-Sil
    • Journal of Korean Society of Forest Science
    • /
    • v.102 no.2
    • /
    • pp.272-280
    • /
    • 2013
  • This study aims to develop a standard diagnostic table for management and administration that oak mushroom cultivators of forestry households can utilize. By diagnosing their current level of management with the table, the cultivators themselves will be able to grasp and address their operational challenges better. The table is composed of questions on the status of forestry households, columns for a series of management performance indices, and a questionnaire with 4 categories and 20 subcategories to check the households' current level of management and administration. To prepare the standard diagnostic table for oak mushroom management, 196 forestry households throughout 10 areas - Cheong-yang, Gong-ju, Bu-yeo and Seocheon in Chungcheongnam-do, Mun-kyung and Ye-cheon in Gyeongsangbuk-do, Jin-an in Jeollabuk-do, Jangheung and Jang-seong in Jeollanam-do, and Jeju Island - were interviewed, and a total of 190 questionnaires were acquired and used in the result analysis. The management-level score of each forestry household was then determined by aggregating the scores from each subcategory. The overall average score across all households was 62.2 points, with more than half of the respondents (54.7%) falling in the range of 60 to 80. By regional group, the average score of Jin-an was the lowest at 57.6 points, while that of Jang-seong was the highest at 69.6 points. In the 'cultivation management' category, many cultivators expressed a negative awareness of the term 'pest control' because they tended to associate it with 'herbicides or pesticides'. It is therefore necessary to adapt and modify the existing groups and grades so that the cultivators can make the right choice without confusing 'pest control' with 'herbicides or pesticides'. Meanwhile, the average scores in the 'management and administration' categories were mostly low.
Notably, in these categories, forestry households in Jeolla province - which had ranked lower in the other three categories - recorded higher scores than those in Chungcheong province, showing a relatively high level of management and administration.
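The scoring scheme described above - summing the 20 subcategory scores into one management score per household, then averaging by region - can be sketched as follows. The data values and region entries below are illustrative only, not taken from the study.

```python
# Sketch of the diagnostic-table scoring described in the abstract:
# each household answers 20 subcategory questions; the management
# score is the sum of subcategory scores, and regional averages are
# then compared. Data values here are illustrative, not from the study.

def household_score(subcategory_scores):
    """Aggregate subcategory scores into one management score."""
    return sum(subcategory_scores)

def regional_averages(households):
    """Average household scores per region."""
    totals = {}
    for region, scores in households:
        totals.setdefault(region, []).append(household_score(scores))
    return {r: sum(v) / len(v) for r, v in totals.items()}

households = [
    ("Jin-an",     [3, 2, 3] * 6 + [2, 3]),   # 20 subcategory scores
    ("Jang-seong", [4, 3, 4] * 6 + [3, 4]),
]
print(regional_averages(households))
```

With these toy inputs the per-region averages come out as a simple dict keyed by region, mirroring the regional comparison reported in the abstract.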

A Method for Evaluating News Value based on Supply and Demand of Information Using Text Analysis (텍스트 분석을 활용한 정보의 수요 공급 기반 뉴스 가치 평가 방안)

  • Lee, Donghoon;Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.45-67
    • /
    • 2016
  • Given the recent development of smart devices, users are producing, sharing, and acquiring a variety of information via the Internet and social network services (SNSs). Because users tend to use multiple media simultaneously according to their goals and preferences, domestic SNS users use around 2.09 media concurrently on average. Since the information provided by such media is usually textually represented, recent studies have been actively conducting textual analysis in order to understand users more deeply. Earlier studies using textual analysis focused on analyzing a document's contents without substantive consideration of the diverse characteristics of the source medium. However, current studies argue that analytical and interpretive approaches should be applied differently according to the characteristics of a document's source. Documents can be classified into the following types: informative documents for delivering information, expressive documents for expressing emotions and aesthetics, operational documents for inducing the recipient's behavior, and audiovisual media documents for supplementing the above three functions through images and music. Further, documents can be classified according to their contents, which comprise facts, concepts, procedures, principles, rules, stories, opinions, and descriptions. Documents have unique characteristics according to the source media by which they are distributed. In terms of newspapers, only highly trained people tend to write articles for public dissemination. In contrast, with SNSs, various types of users can freely write any message and such messages are distributed in an unpredictable way. Again, in the case of newspapers, each article exists independently and does not tend to have any relation to other articles. However, messages (original tweets) on Twitter, for example, are highly organized and regularly duplicated and repeated through replies and retweets. 
There have been many studies focusing on the different characteristics between newspapers and SNSs. However, it is difficult to find a study that focuses on the difference between the two media from the perspective of supply and demand. We can regard the articles of newspapers as a kind of information supply, whereas messages on various SNSs represent a demand for information. By investigating traditional newspapers and SNSs from the perspective of supply and demand of information, we can explore and explain the information dilemma more clearly. For example, there may be superfluous issues that are heavily reported in newspaper articles despite the fact that users seldom have much interest in these issues. Such overproduced information is not only a waste of media resources but also makes it difficult to find valuable, in-demand information. Further, some issues that are covered by only a few newspapers may be of high interest to SNS users. To alleviate the deleterious effects of information asymmetries, it is necessary to analyze the supply and demand of each information source and, accordingly, provide information flexibly. Such an approach would allow the value of information to be explored and approximated on the basis of the supply-demand balance. Conceptually, this is very similar to the price of goods or services being determined by the supply-demand relationship. Adopting this concept, media companies could focus on the production of highly in-demand issues that are in short supply. In this study, we selected Internet news sites and Twitter as representative media for investigating information supply and demand, respectively. We present the notion of News Value Index (NVI), which evaluates the value of news information in terms of the magnitude of Twitter messages associated with it. In addition, we visualize the change of information value over time using the NVI. We conducted an analysis using 387,014 news articles and 31,674,795 Twitter messages. 
The analysis results revealed interesting patterns: most issues show a lower NVI than the average across all issues, whereas a few issues show a steadily higher NVI than the average.
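The supply-demand idea behind the index can be illustrated with a toy computation. The abstract does not give the authors' exact NVI formula, so the ratio below - an issue's share of demand (tweets) divided by its share of supply (articles) - is an assumed form for illustration only, with made-up counts.

```python
# Toy illustration of a supply-demand news value score. Supply is the
# share of news articles an issue receives; demand is the share of
# Twitter messages about it. The ratio used here is an assumed stand-in
# for the authors' News Value Index (NVI), not the published formula.

def news_value(articles, tweets):
    """Score each issue by demand share / supply share."""
    total_a, total_t = sum(articles.values()), sum(tweets.values())
    return {
        issue: (tweets.get(issue, 0) / total_t) / (articles[issue] / total_a)
        for issue in articles
    }

articles = {"election": 500, "festival": 50}     # supply (article counts)
tweets   = {"election": 2000, "festival": 3000}  # demand (message counts)
scores = news_value(articles, tweets)
# "festival" is undersupplied relative to demand, so it scores higher
```

An issue scoring above 1 is in short supply relative to interest - exactly the kind of issue the abstract suggests media companies should produce more of.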

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware. We have pursued two distinct approaches. The first approach is to use application-specific integrated circuit (ASIC) technology: the fuzzy inference method is directly implemented in silicon. The second approach, which is in its preliminary stage, is to use a more conventional microprocessor architecture. Here, we use a quantitative technique used by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly in silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested. Both used full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation, and they executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock and achieved approximately 80,000 Fuzzy Logical Inferences Per Second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype chip, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock. The chip has a 3-stage pipeline and initiates the computation of a new inference every 64 cycles. This chip achieved approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM); on-chip fuzzification by a table-lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D Then Do E, and Then Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the following simpler format using the same datapath: IF A and B Then Do E. With this format, the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The fuzzy logic system board places the fuzzy chip into a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality: many aspects of the design are limited or fixed. We have proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach. The quantitative approach was developed by RISC designers. In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As the first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules from the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union.
We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we could achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions. These instructions are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single dedicated program, so creating an embedded processor for fuzzy control in this way is very effective. Table I shows the measured inference speed of a MIPS R3000 microprocessor, a fictitious MIPS R3000 microprocessor with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip. The software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference. The last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes: an ASIC approach is extremely expensive. It is, therefore, an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

TABLE I. INFERENCE TIME BY 51 RULES

                     MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
    6000 inferences  125 s                  49 s                        0.0038 s
    1 inference      20.8 ms                8.2 ms                      6.4 µs
    FLIPS            48                     122                         156,250
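The max-min compositional inference with Mamdani implication that both chips implement in hardware - and that the proposed min/max instructions accelerate in software - can be sketched in a few lines. The three-element membership vectors below stand in for the 64-element fuzzy set arrays mentioned in the talk.

```python
# Minimal sketch of max-min compositional fuzzy inference with Mamdani
# implication, the mechanism the ASIC chips implement in hardware.
# Each fuzzy set is a membership vector (the chips use 64 elements);
# min acts as fuzzy intersection and max as fuzzy union, which is why
# dedicated min/max instructions speed up software inference.

def rule_strength(inputs, antecedents):
    """Firing strength: min over the matching degrees of each antecedent."""
    return min(max(min(x, a) for x, a in zip(inp, ant))
               for inp, ant in zip(inputs, antecedents))

def mamdani_infer(inputs, rules):
    """Clip each consequent by its rule strength, then max-combine."""
    out = [0.0] * len(rules[0][1])
    for antecedents, consequent in rules:
        w = rule_strength(inputs, antecedents)
        out = [max(o, min(w, c)) for o, c in zip(out, consequent)]
    return out

def centroid(membership):
    """Defuzzify by centroid, as the UNC/MCNC chip does on-chip."""
    s = sum(membership)
    return sum(i * m for i, m in enumerate(membership)) / s if s else 0.0
```

Every operation in the inner loops is a min or a max over array elements, which is exactly the workload the fictitious R3000 min/max instructions would target.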


Analysis of Lumbar Herniated Intervertebral Disc Patients' Healthcare Utilization of Western-Korean Collaborative Treatment: Using Health Insurance Review & Assessment Service's Patients Sample Data (요추 추간판 탈출증 환자의 의·한의 협진 의료이용 현황 분석: 건강보험심사평가원 환자표본 데이터를 이용하여)

  • Ko, Jun-Hyuk;Yu, Ji-Woong;Seo, Sang-Woo;Seo, Joon-Won;Kang, Jun-Hyuk;Kim, Tae-Oh;Cho, Whi-Sung;Seo, Yeon-Ho;Ahn, Jong-Hyun;Lee, Woo-Joo;Kim, Bo-Hyung;Choi, Man-Khu;Kim, Sung-Bum;Kim, Hyung-Suk;Kim, Koh-Woon;Cho, Jae-Heung;Song, Mi-Yeon;Chung, Won-Seok
    • Journal of Korean Medicine Rehabilitation
    • /
    • v.31 no.4
    • /
    • pp.105-116
    • /
    • 2021
  • Objectives: Lumbar herniated intervertebral disc (L-HIVD) is a common disease for which Western-Korean collaborative treatment is performed in Korea. This study aimed to analyze the Western-Korean collaborative treatment utilization of Korean patients with L-HIVD using the Health Insurance Review & Assessment Service's patient sample data. Methods: This study used the Health Insurance Review & Assessment Service-National Patient Sample (HIRA-NPS) for 2018. Claim data of L-HIVD patients were extracted. The claim data were rebuilt with the operational concept of an 'episode of care' and divided into a Korean medicine episode group (KM), a Western medicine episode group (WM), and a collaborative treatment episode group (CT). General characteristics, medical expenses, and healthcare utilization were analyzed. In addition, the differences in average visit days and average medical expenses between the non-collaborative group (KM plus WM) and CT were analyzed by the propensity score matching method. Results: A total of 64,333 patients and 365,745 claims were extracted. The numbers of episodes of WM, KM, and CT were 69,383 (92.97%), 3,903 (5.23%), and 1,341 (1.80%), respectively. The frequency of collaborative treatment episodes was higher among women and patients in their 50s. The most frequently prescribed treatment in CT was acupuncture therapy. As a result of the propensity score matching, the number of visit days and the medical expenses in the collaborative treatment group were higher than in the non-collaborative group. Conclusions: This analysis of the healthcare utilization of Korean-Western collaborative treatment may serve as basic data for establishing medical policies and a systematic collaborative treatment model in the future.
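The propensity score matching step used above to compare CT with non-collaborative episodes can be sketched with a minimal 1:1 nearest-neighbor matcher. The propensity scores here are assumed to be precomputed (e.g., from a logistic regression on patient characteristics), and all values are illustrative rather than from the study.

```python
# Sketch of 1:1 nearest-neighbor propensity score matching, the kind of
# method the abstract uses to compare collaborative (CT) and
# non-collaborative episodes. Propensity scores are assumed to be
# precomputed; the values below are illustrative only.

def match_nearest(treated, controls):
    """Match each treated score to its nearest unused control score."""
    pairs, pool = [], list(controls)
    for t in treated:
        nearest = min(pool, key=lambda c: abs(c - t))
        pool.remove(nearest)  # match without replacement
        pairs.append((t, nearest))
    return pairs

treated  = [0.62, 0.35]               # CT episode propensity scores
controls = [0.10, 0.33, 0.60, 0.90]   # non-collaborative episode scores
print(match_nearest(treated, controls))
```

After matching, outcome variables such as visit days and medical expenses would be compared across the matched pairs, as in the abstract's analysis.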

Process Governance Meta Model and Framework (프로세스 거버넌스 메타모델과 프레임워크)

  • Lee, JungGyu;Jeong, Seung Ryul
    • Journal of Internet Computing and Services
    • /
    • v.20 no.4
    • /
    • pp.63-72
    • /
    • 2019
  • As a sub-concept of corporate or organizational governance, business governance and IT governance have become major research topics in academia. However, despite the importance of process as a construct mediating the domain between business and information technology, research on process governance is relatively inadequate. Process governance focuses on activities that link business strategy with IT system implementation and explains the creation of corporate core values. The researcher studied the basic conceptual governance models of political science, sociology, and public administration, and classified governance styles into six categories. The researcher then focused on a series of metamodels: for example, the traditional Strategic Alignment Model (SAM) by Henderson and Venkatraman, which is replaced by the neo-SAM model; the organizational governance network model; the sequential organization governance model; the organization governance metamodel; the process governance CUBE model; the COSO and process governance CUBE comparison model; and finally the Process Governance Framework. The major difference between the SAM and the neo-SAM model is the Process Governance domain inserted between Business Governance and IT Governance. Among the several metamodels, the Process Governance framework, the core conceptual model, consists of four activity dimensions: strategic aligning, human empowering, competency enhancing, and autonomous organizing. The researcher designed five variables for each activity dimension, twenty variables in total. Besides the four activity dimensions, there are six driving forces for the Process Governance cycle: de-normalizing power, micro-power, vitalizing power, self-organizing power, normalizing power, and sense-making. With four activity dimensions and six driving forces, an organization can maintain the flexibility of the process governance cycle to cope with internal and external environmental changes.
This study aims to propose the Process Governance competency model and Process Governance variables. Industry is changing from function-oriented organization management to a process-oriented perspective. The Process Governance framework proposed by the researcher will be a contextual reference model for the further diffusion of research on the Process Governance domain and an operational definition for the development of detailed Process Governance measurement tools.
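The framework's structure - four activity dimensions with five variables each, plus six driving forces - can be laid out as a simple data structure. The individual variable names below are placeholders, since the abstract does not enumerate them.

```python
# The Process Governance framework as described in the abstract: four
# activity dimensions, each with five variables (20 in total), plus six
# driving forces. The per-dimension variable names are placeholders -
# the abstract does not list them.

framework = {
    "strategic aligning":    [f"SA{i}" for i in range(1, 6)],
    "human empowering":      [f"HE{i}" for i in range(1, 6)],
    "competency enhancing":  [f"CE{i}" for i in range(1, 6)],
    "autonomous organizing": [f"AO{i}" for i in range(1, 6)],
}

driving_forces = [
    "de-normalizing power", "micro-power", "vitalizing power",
    "self-organizing power", "normalizing power", "sense-making",
]

total_variables = sum(len(v) for v in framework.values())  # 4 x 5 = 20
```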

Survey of Operation and Status of the Human Research Protection Program (HRPP) in Korea (2019) (임상시험 및 대상자보호프로그램의 운영과 현황에 대한 설문조사 연구(2019))

  • Maeng, Chi Hoon;Lee, Sun Ju;Cho, Sung Ran;Kim, Jin Seok;Rha, Sun Young;Kim, Yong Jin;Chung, Jong Woo;Kim, Seung Min
    • The Journal of KAIRB
    • /
    • v.2 no.2
    • /
    • pp.37-48
    • /
    • 2020
  • Purpose: The purpose of this study is to assess the operational status of the Human Research Protection Program (HRPP) and the level of understanding, among IRB and HRPP staff at hospitals and research institutes, of the HRPP guideline set by the Ministry of Food and Drug Safety (MFDS), and to provide recommendations. Methods: An online survey was distributed among members of the Korean Association of IRB (KAIRB) through each IRB office. The results were grouped by topic, and descriptive statistics were used for the analysis. Result: Survey notifications were sent to 176 institutions, and 65 (37.1%) answered the survey online. Of the 65 institutions that answered, 83.1% were hospitals, 12.3% were universities, 3.1% were medical colleges, and 1.5% were research institutions. 23 institutions (25.4%) had established independent HRPP offices and 39 (60.0%) had not. 12 institutions (18.5%) had separate IRB and HRPP heads, 21 (32.3%) separated the reporting procedure and the person in charge, 12 separated the responsibilities of IRB and HRPP among staff, and 45 (69.2%) had audit and non-compliance managers. When asked about the most important basic task for HRPP, 23% answered self-audit; self-audit was also the most common answer (43.52%) among both institutions that operated an HRPP and institutions that did not. When basic task performance was analyzed, institutions that operated an HRPP scored on average 14% higher than institutions that operated only an IRB. 9 institutions (13.8%) had been evaluated and had obtained HRPP accreditation from the MFDS, and the most common reason for obtaining accreditation was to be designated as an institution for the education of persons conducting clinical trials (6 institutions). The most common reason for not obtaining HRPP accreditation was insufficient staff and the limited capacity of the institution (28%). Institutions with and without a plan to obtain HRPP accreditation from the MFDS numbered 20 (37.7%) each.
34 institutions (52.3%) answered that the MFDS's HRPP evaluation method and accreditation were appropriate, while 31 (47.7%) answered otherwise. 36 institutions answered that the MFDS's HRPP evaluation and accreditation were credible, while 29 (44.5%) answered that they were not. Conclusion: 1. The MFDS's HRPP accreditation program can facilitate the main objective of HRPP, and it should be encouraged among non-tertiary hospitals by taking their small staff sizes into consideration and issuing accreditation in segmented tiers. 2. While designation as an institution for the education of persons conducting clinical trials is a benefit of the MFDS's HRPP accreditation program, it can also hinder access to the program. It should also be considered that the non-contact culture of the COVID-19 pandemic has eliminated the time and space limitations on education. 3. For clinical research conducted internally by an institution, an internal audit is the most effective, and in Korea the sole, method of protecting the safety and rights of subjects and the integrity of research. For this reason, regardless of the size of the institution, internal audits should be enforced. 4. It is necessary for the KAIRB and the MFDS to improve HRPP awareness by advocating and teaching the concept and necessity of HRPP in clinical research. 5. A new HRPP accreditation system should be set up for all clinical research with human subjects, including Investigational New Drug (IND) applications, in the near future.


A Study on the Effect of the Introduction Characteristics of Cloud Computing Services on the Performance Expectancy and the Intention to Use: From the Perspective of the Innovation Diffusion Theory (클라우드 컴퓨팅 서비스의 도입특성이 조직의 성과기대 및 사용의도에 미치는 영향에 관한 연구: 혁신확산 이론 관점)

  • Lim, Jae Su;Oh, Jay In
    • Asia pacific journal of information systems
    • /
    • v.22 no.3
    • /
    • pp.99-124
    • /
    • 2012
  • Our society has long been talking about the necessity of innovation. Since companies in particular need to carry out business innovation in their overall processes, they have attempted to apply many innovation factors in the field and have come to pay more attention to innovation. To achieve this goal, companies have applied various information technologies (IT) as a means of innovation, and consequently IT has developed greatly. The field of IT now faces another revolution, called cloud computing, which is expected to bring innovative changes in software application via the Internet, data storage, the use of devices, and their operation. As a vehicle of innovation, cloud computing is expected to lead the changes and advancement of our society and the business world. Although many scholars have researched a variety of topics regarding innovation via IT, few studies have dealt with the issue of cloud computing as IT. Thus, the purpose of this paper is to set the variables of innovation attributes, based on previous articles, as the characteristic variables, and to clarify how these variables affect companies' Performance Expectancy and their intention to use cloud computing. The results from the analysis of the data collected in this study are as follows. First, the study utilized a research model developed on the innovation diffusion theory to identify influences on the adoption and spread of IT for cloud computing services. Second, this study summarized the characteristics of cloud computing services as a new concept that introduces innovation at its early stage of adoption by companies. Third, a theoretical model is provided that relates to future innovation by suggesting variables of innovation characteristics for adopting cloud computing services.
Finally, this study identified the factors affecting Performance Expectancy and the Intention to Use the cloud computing service for companies considering its adoption. The study deploys independent variables aligned with the characteristics of cloud computing services based on the innovation diffusion model, and uses Performance Expectancy and Intention to Use, based on the UTAUT theory, as the mediating and dependent variables, respectively. The independent variables in the research model are Relative Advantage, Complexity, Compatibility, Cost Saving, Trialability, and Observability. In addition, 'Acceptance for Adaptation' is applied as a moderating variable to verify the influences on the performance expected from the cloud computing service. The validity of the research model was secured by performing factor analysis and reliability analysis. After confirmatory factor analysis was conducted using AMOS 7.0, the 20 hypotheses were tested through structural equation model analysis, and 12 of the 20 were accepted. For example, hypothesis testing showed that Relative Advantage has a positive effect on both Individual Performance and Strategic Performance, and it also showed a meaningful correlation with Intention to Use directly. This is consistent with the many articles on diffusion that cite Relative Advantage as the most important factor in predicting the rate of innovation acceptance. Regarding the influence of Compatibility and Cost Saving on Performance Expectancy, Compatibility has a positive effect on both Individual Performance and Strategic Performance, and it showed a meaningful correlation with Intention to Use. However, although the cloud computing service has become a strategic adoption issue for companies, Cost Saving turned out to affect Individual Performance without a significant influence on Intention to Use.
This indicates that companies expect practical gains, such as time and cost savings and financial improvements, from adopting the cloud computing service in the budget-squeezed environment following the 2008 global economic crisis. Likewise, this positively affects strategic performance in companies. Trialability proved to have no effect on Performance Expectancy. This indicates that the survey participants are willing to bear the risk of the high uncertainty caused by innovation, because as innovators and early adopters they actively pursue information about new ideas. In addition, they believe it is unnecessary to test the cloud computing service before adoption, because various types of the service are available. Observability, however, positively affected both Individual Performance and Strategic Performance, and it also showed a meaningful correlation with Intention to Use. From the analysis of the direct effects of the innovation characteristics on Intention to Use, excluding the mediators, Relative Advantage, Compatibility, and Observability showed a positive influence on Intention to Use, while Complexity, Cost Saving, and Trialability did not. In the empirical verification of the characteristics believed to be the most important factors for Performance Expectancy, Relative Advantage, Compatibility, and Observability showed significant correlations in the cause-and-effect analysis. Cost Saving showed a significant relation with Strategic Performance, which indicates that the cost to build and operate IT is a burden on management. Thus, the cloud computing service reflects the expectation of an alternative that reduces the investment and operational costs of IT infrastructure in the wake of the recent economic crisis.
The cloud computing service is not yet pervasive in the business world, but it is rapidly spreading all over the world because of its inherent merits and benefits. Moreover, the results of this research regarding innovation diffusion differ somewhat from those of existing articles. This seems to be because the cloud computing service has a strong innovative factor that results in a new paradigm shift, while most IT studied under the theory of innovation diffusion is limited to companies and organizations. In addition, the participants in this study are believed to play an important role as innovators and early adopters in introducing the cloud computing service and to have the capacity to bear the higher uncertainty of innovation. In conclusion, the introduction of the cloud computing service is a critical issue in the business world.


Analysis and Implication on the International Regulations related to Unmanned Aircraft -with emphasis on ICAO, U.S.A., Germany, Australia- (세계 무인항공기 운용 관련 규제 분석과 시사점 - ICAO, 미국, 독일, 호주를 중심으로 -)

  • Kim, Dong-Uk;Kim, Ji-Hoon;Kim, Sung-Mi;Kwon, Ky-Beom
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.32 no.1
    • /
    • pp.225-285
    • /
    • 2017
  • In regard to the regulations related to the RPA (Remotely Piloted Aircraft), which in some countries is called the UA (Unmanned Aircraft), ICAO stipulates the regulations in detail in the 'RPAS Manual (2015)', based on the 'Chicago Convention' of 1944, and enacts provisions for the rules of UAS or RPAS. Other countries stipulate them in, for example, the Federal Aviation Regulations (14 CFR) and Public Law 112-95 in the United States; the Air Transport Act, Air Transport Order, and Air Transport Authorization Order (through the revision in 'Regulations to Operating Rules on Unmanned Aerial System'), based on EASA Regulation (EC) No. 216/2008, for unmanned aircraft under 150 kg in Germany; and the Civil Aviation Act (CAA 1998) and Civil Aviation Safety Regulations Part 101 (CASR Part 101) in Australia. Commonly, these laws exclude model aircraft for leisure purposes and require pilots on the ground, not onboard the aircraft, capable of controlling the RPA. The laws also require all the management necessary to operate the RPA and pilots safely and efficiently within the structure of the unmanned aircraft system and the scope of the regulations. Each country classifies RPAs as aircraft of less than 25 kg, and Australia and Germany further break down RPAs at lower weights. ICAO stipulates all general aviation operations, including commercial operation, in accordance with Annex 6 of the Chicago Convention, and this also applies to RPA operations; however, passenger transportation using RPAs is excluded. If the operational scope of an RPA includes the airspace of another country, the special permission of that country is required 7 days before the flight date, with a detailed flight plan submitted. In accordance with Federal Aviation Regulation Part 107 in the United States, a small non-leisure RPA may be operated within the line of sight of a responsible navigator or observer during the day, at speeds up to 161 km/h (87 knots) and heights up to 122 m (400 ft) above the surface or water.
An RPA must yield the flight path to other aircraft, and it is prohibited to carry dangerous materials or to operate more than two RPAs at the same time. In Germany, the regulations on UAS, except those for leisure and sports, impose a duty to avoid airborne collisions and other provisions related to ground safety and individual privacy. Although commercial UAS of 5 kg or less can be operated freely without approval under the relaxed regulatory requirements, all UAS, regardless of weight, must be operated below an altitude of 100 meters with continuous monitoring and pilot control. Australia was the first country to regulate unmanned aircraft, in 2001, and its regulations have influenced the unmanned aircraft laws of ICAO, the FAA, and EASA. In order to improve the utility of unmanned aircraft considered to be low-risk, the regulatory conditions were relaxed in the 2016 revision by adding the concept of the 'Excluded RPA'. An excluded RPA can be operated without special permission, even for commercial purposes. Furthermore, discussions on a new standard manual are being conducted for further flexibility of the current regulations.


The Effect of Structured Information on the Sleep Amount of Patients Undergoing Open Heart Surgery (계획된 간호 정보가 수면량에 미치는 영향에 관한 연구 -개심술 환자를 중심으로-)

  • 이소우
    • Journal of Korean Academy of Nursing
    • /
    • v.12 no.2
    • /
    • pp.1-26
    • /
    • 1982
  • The main purpose of this study was to test the effect of structured information on the sleep amount of patients undergoing open heart surgery. The study specifically addressed the following two basic research questions: (1) Would the structured information reduce sleep disturbance related to anxiety and physical stress before and after the operation? and (2) What would be the effects of the structured information on the level of preoperative state anxiety, the hormonal change, and the degree of behavioral change in patients undergoing open heart surgery? A quasi-experimental study was designed to answer these questions, with one experimental group and one control group. Subjects in both groups were matched as closely as possible to avoid the effect of differences inherent to the group characteristics. Baseline data were also collected on both groups for 7 days prior to the experiment, and subjects in both groups were found to have comparable sleep patterns, trait anxiety, hormonal levels, and behavioral levels. Structured information, as the experimental input, was given to the subjects in the experimental group only. Data were collected and compared between the experimental group and the control group on the sleep amount of the consecutive pre- and postoperative days, on the preoperative state anxiety level, and on hormonal and behavioral changes. To test the effectiveness of the structured information, two main hypotheses and three sub-hypotheses were formulated as follows: Main hypothesis 1: The experimental group, which received structured information, will have more sleep than the control group without structured information in the night before the open heart surgery.
Main hypothesis 2: The experimental group with structured information will have more sleep than the control group without structured information during the week following the open heart surgery. Sub-hypothesis 1: The experimental group with structured information will have a lower level of state anxiety than the control group without structured information in the night before the open heart surgery. Sub-hypothesis 2: The experimental group with structured information will have a lower hormonal level than the control group without structured information on the 5th day after the open heart surgery. Sub-hypothesis 3: The experimental group with structured information will have a lower level of behavioral change than the control group without structured information during the week after the open heart surgery. The research was conducted in a national university hospital in Seoul, Korea. The 53 subjects who participated in the study were divided into an experimental group and a control group by random sampling: 26 were placed in the experimental group and 27 in the control group. Instruments: (1) Structured information: Structured information, as the independent variable, was constructed by the researcher on the basis of Roy's adaptation model, covering physiologic needs, self-concept, role function, and interdependence needs as related to sleep and to the operational procedures. (2) Sleep amount measure: Sleep amount, as the main dependent variable, was measured by trained nurses through observation based on established criteria, such as closed or open eyes, regular or irregular respiration, body movement, posture, responses to light and questions, facial expressions, and self-report after sleep. (3) State anxiety measure: State anxiety, as a sub-dependent variable, was measured by Spielberger's STAI anxiety scale. (4) Hormonal change measure: Hormone level, as a sub-dependent variable, was measured by the cortisol level in plasma.
(5) Behavior change measure: Behavior, as a sub-dependent variable, was measured by Wyatt's Behavior and Mood Rating Scale. The data were collected over a period of four months, from June to October 1981, after a pretest period of two months. For the analysis of the data and the tests of the hypotheses, the t-test on mean differences and analysis of covariance were used. The results of the tests of the instruments were as follows: (1) The STAI measurements of trait and state anxiety, analyzed with the Cronbach's alpha coefficient for item analysis and reliability, showed reliability levels of r = .90 and r = .91, respectively. (2) The Behavior and Mood Rating Scale measurement was analyzed by means of principal component analysis. The seven factors retained were anger, anxiety, hyperactivity, depression, bizarre behavior, suspicious behavior, and emotional withdrawal; the cumulative percentage of these factors was 71.3%. The results of the hypothesis tests were as follows: (1) Main hypothesis 1 was not supported. The experimental group had 282 minutes of sleep compared to 255 minutes in the control group. Thus the sleep amount was higher in the experimental group than in the control group; however, the difference was not statistically significant at the .05 level. (2) Main hypothesis 2 was not supported. The mean sleep amounts of the experimental group and control group were 297 minutes and 278 minutes, respectively. The experimental group therefore had more sleep than the control group; however, the difference was not statistically significant at the .05 level. (3) Sub-hypothesis 1 was not supported. The mean state anxiety scores of the experimental group and control group were 42.3 and 43.9. Thus, the experimental group had a slightly lower state anxiety level than the control group; however, the difference was not statistically significant at the .05 level. (4) Sub-hypothesis 2 was not supported.
The mean hormonal levels of the experimental group and control group were 338 ㎍ and 440 ㎍, respectively. Thus, the experimental group showed a lower hormonal level than the control group; however, the difference was not statistically significant at the .05 level. (5) Sub-hypothesis 3 was supported. The mean behavioral scores of the experimental group and control group were 29.60 and 32.00, respectively. Thus, the experimental group showed a lower level of behavioral change than the control group, and the difference was statistically significant at the .05 level. In summary, the structured information did not influence the sleep amount, state anxiety, or hormonal level of the subjects undergoing open heart surgery at a statistically significant level; however, it showed definite trends in those relationships, not to mention its significant effect on the level of behavioral change. It can further be speculated that the large individual differences in variables such as sleep amount, state anxiety, and fluctuations in hormonal level may be partly responsible for the statistical insensitivity of the experiment.
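The mean-difference tests reported above can be illustrated with a minimal Welch's t-statistic sketch. The per-subject sleep values below are invented placeholders (chosen so the group means land near the reported 282 and 255 minutes); the study published only group means, not raw data.

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for the difference between two group means."""
    se = sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# Placeholder minutes of preoperative sleep (not the study's raw data)
experimental = [300, 270, 285, 290, 265]
control = [250, 260, 245, 270, 255]
t = welch_t(experimental, control)  # positive: experimental mean is higher
```

Whether such a t value is significant at the .05 level then depends on the degrees of freedom and the critical value, which is the comparison the study reports for each hypothesis.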


An Empirical Study on the Influencing Factors for Big Data Intented Adoption: Focusing on the Strategic Value Recognition and TOE Framework (빅데이터 도입의도에 미치는 영향요인에 관한 연구: 전략적 가치인식과 TOE(Technology Organizational Environment) Framework을 중심으로)

  • Ka, Hoi-Kwang;Kim, Jin-soo
    • Asia pacific journal of information systems
    • /
    • v.24 no.4
    • /
    • pp.443-472
    • /
    • 2014
  • To survive in the global competitive environment, enterprises should be able to solve various problems and find optimal solutions effectively. Big data is perceived as a tool for solving enterprise problems effectively and improving competitiveness through its varied problem-solving and advanced predictive capabilities. Owing to this remarkable performance, the implementation of big data systems has increased across many enterprises around the world. Big data is currently called the 'crude oil' of the 21st century and is expected to provide competitive superiority. The reason big data is in the limelight is that, while conventional IT technology has fallen well behind in what it makes possible, big data has gone beyond those technological possibilities and has the advantage of being used to create new value, such as business optimization and new business creation, through the analysis of large data sets. Because big data has often been introduced hastily, however, without considering the strategic value that can be derived and achieved through it, enterprises face difficulties in deriving strategic value and utilizing their data. According to a survey of 1,800 IT professionals from 18 countries worldwide, only 28% of corporations were utilizing big data well, and many respondents reported difficulties in deriving strategic value and operating through big data. To introduce big data, its strategic value should be identified and environmental factors such as internal and external corporate regulations and systems should be considered, but these factors were not well reflected. The cause of failure turned out to be that big data was introduced hastily, by following the IT trend and the surrounding environment, in situations where the conditions for introduction were not well arranged.
The strategic value that can be obtained through big data should be clearly understood, and systematic environmental analysis of applicability is very important for a successful introduction; but because corporations consider only the partial achievements and technological aspects that big data can deliver, successful introductions are not being made. Previous work shows that most big data research focuses on concepts, cases, and practical suggestions without empirical study. The purpose of this study is to provide a theoretically and practically useful implementation framework and strategies for big data systems by conducting a comprehensive literature review, finding the influencing factors for successful big data systems implementation, and analyzing empirical models. To do this, the elements that can affect the intention to adopt big data were derived by reviewing information systems success factors, strategic value perception factors, factors in the environment for information systems introduction, and the big data literature, in order to understand the factors at work when corporations introduce big data, and a structured questionnaire was developed. The questionnaire was then administered, and statistical analysis was performed, with the people in charge of big data inside corporations as respondents. The statistical analysis showed that the strategic value perception factors and the intra-industry environmental factors positively affected the intention to adopt big data. The theoretical, practical, and policy implications derived from the study results are as follows.
The first theoretical implication is that this study has proposed the factors that affect the intention to adopt big data by reviewing strategic value perception, environmental factors, and prior big data studies, and has proposed variables and measurement items that were empirically analyzed and verified. The study is meaningful in that it measured the influence of each variable on adoption intention by verifying the relationships between the independent and dependent variables through a structural equation model. Second, this study defined the independent variables (strategic value perception, environment), the dependent variable (adoption intention), and the moderating variables (type of business and corporate size) for big data adoption intention, and laid a theoretical basis for subsequent empirical studies in the big data field by developing measurement items with established reliability and validity. Third, by verifying the significance of the strategic value perception factors and the environmental factors proposed in prior studies, this study can support future empirical work on the factors affecting big data adoption. The operational implications are as follows. First, this study laid an empirical foundation for the big data field by investigating the cause-and-effect relationship between the strategic value perception and environmental factors and adoption intention, and by proposing measurement items with established validity and reliability. Second, this study found that the strategic value perception factors positively affect the intention to adopt big data, which is meaningful in that it demonstrates the importance of strategic value perception.
Third, the study proposes that a corporation introducing big data should do so only after precise analysis of its industry's internal environment. Fourth, this study proposes that the size and type of business of the corporation should be considered when introducing big data, by presenting differences in the adoption factors depending on corporate size and type of business. The policy implications are as follows. First, more varied utilization of big data is needed. The strategic value of big data can be approached in various ways, in the product and service fields, the productivity field, the decision-making field, and so on, and big data can be utilized across all business fields on that basis; but the areas that major domestic corporations are considering are limited to parts of the product and service fields. Accordingly, when introducing big data, it will be necessary to review the utilization phase in detail and design the big data system in a form that maximizes the utilization rate. Second, the study identifies, in the introduction phase, the burden of system introduction costs, difficulty in utilizing the system, and lack of credibility of the supplier corporations. Since global IT corporations dominate the big data market, the big data introductions of domestic corporations cannot but depend on foreign corporations. Considering that our country, although a world IT power, has no global IT corporations, big data can be seen as a chance to foster world-class corporations. Accordingly, the government will need to foster star corporations through active policy support. Third, corporations lack internal and external professional manpower for big data introduction and operation.
In a big data system, how valuable insights can be derived from the data matters more than the system construction itself. For this, talent equipped with academic knowledge and experience in various fields such as IT, statistics, strategy, and management is needed, and such talent should be trained through systematic education. This study has laid a theoretical basis for empirical studies in big data-related fields by identifying and verifying the main variables that affect big data adoption intention, and, by analyzing that theoretical basis empirically, it is expected to offer useful guidelines for corporations and policymakers who are considering big data implementation.
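The reliability checks behind the study's "measurement items with established reliability and validity" are conventionally done with Cronbach's alpha, which can be sketched as below. This is a generic illustration, not the study's actual computation, and the Likert-scale item scores are invented placeholders.

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha; items is one list of respondent scores per item."""
    k = len(items)
    item_var_sum = sum(pvariance(it) for it in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - item_var_sum / pvariance(totals))

# Three hypothetical Likert items answered by five respondents
items = [[4, 5, 3, 4, 2],
         [5, 5, 3, 4, 2],
         [4, 4, 2, 5, 3]]
alpha = cronbach_alpha(items)  # values near 1 indicate internally consistent items
```

A scale is commonly accepted as reliable when alpha is roughly 0.7 or higher, after which the items can enter a structural equation model of the kind the study describes.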