• Title/Summary/Keyword: implementation procedure

Search Result 822, Processing Time 0.029 seconds

Efficacy and Accuracy of Patient Specific Customize Bolus Using a 3-Dimensional Printer for Electron Beam Therapy (전자선 빔 치료 시 삼차원프린터를 이용하여 제작한 환자맞춤형 볼루스의 유용성 및 선량 정확도 평가)

  • Choi, Woo Keun;Chun, Jun Chul;Ju, Sang Gyu;Min, Byung Jun;Park, Su Yeon;Nam, Hee Rim;Hong, Chae-Seon;Kim, MinKyu;Koo, Bum Yong;Lim, Do Hoon
    • Progress in Medical Physics / v.27 no.2 / pp.64-71 / 2016
  • We develop a manufacturing procedure for producing a patient-specific customized bolus (PSCB) using a 3D printer (3DP) and evaluate the dosimetric accuracy of the 3D-PSCB for electron beam therapy. To cover the required planning target volume (PTV), we select the proper electron beam energy and field size through an initial dose calculation in a treatment planning system. The PSCB is delineated based on the initial dose distribution, and the dose calculation is repeated after applying the PSCB. We iteratively fine-tune the PSCB shape until the plan quality meets the required clinical criteria. The contour data of the PSCB are then transferred to in-house conversion software through the DICOM-RT protocol, converted into the 3DP data format (STereoLithography, STL), and printed on a 3DP. Two virtual patients, with concave and convex surface shapes, were generated with a virtual PTV and an organ at risk (OAR). Two corresponding electron treatment plans, with and without a PSCB, were then generated to evaluate the dosimetric effect of the PSCB, and the dosimetric characteristics and dose-volume histograms for the PTV and OAR were compared between the two plans. Film dosimetry was performed to verify the dosimetric accuracy of the 3D-PSCB: the calculated planar dose distribution was compared with that measured by film along the beam central axis, and percent depth dose curves and gamma analysis results (3% dose difference, 3 mm distance to agreement) were compared. No significant difference in the PTV dose was observed between the plans with and without the PSCB. The maximum, minimum, and mean doses of the OAR in the plan with the PSCB were significantly reduced by 9.7%, 36.6%, and 28.3%, respectively, compared with the plan without the PSCB.
By applying the PSCB, the OAR volumes receiving 90% and 80% of the prescribed dose were reduced from 14.40 cm³ to 0.1 cm³ and from 42.6 cm³ to 3.7 cm³, respectively, compared with the plan without the PSCB. The gamma pass rates of the concave and convex plans were 95% and 98%, respectively. A new PSCB fabrication procedure was thus developed using a 3DP, and the usefulness and dosimetric accuracy of the 3D-PSCB for clinical use were confirmed. Rapidly advancing 3DP technology should ease and expand the clinical implementation of the PSCB.
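The 3%/3 mm gamma analysis used above for film verification can be illustrated with a minimal 1-D global-gamma sketch. The depth-dose curves below are made-up toy shapes, not the paper's measurements, and a clinical implementation would work on 2-D planar dose with finer search resolution:

```python
import numpy as np

def gamma_pass_rate(ref, eva, pos_mm, dose_tol=0.03, dta_mm=3.0):
    """Minimal 1-D global gamma analysis (3%/3 mm by default).

    ref, eva : reference (calculated) and evaluated (measured) dose arrays
    pos_mm   : sample positions along the beam central axis, in mm
    """
    norm = ref.max()                                  # global dose normalization
    gammas = np.empty(len(ref))
    for i in range(len(ref)):
        dd = (eva - ref[i]) / (dose_tol * norm)       # dose-difference term
        dx = (pos_mm - pos_mm[i]) / dta_mm            # distance-to-agreement term
        gammas[i] = np.sqrt(dd**2 + dx**2).min()      # best agreement over all points
    return np.mean(gammas <= 1.0)                     # fraction of passing points

# hypothetical depth-dose curves: "measured" curve shifted by 1 mm,
# which is well inside the 3 mm DTA criterion
z = np.arange(0, 50, 1.0)                 # depth in mm
calc = np.exp(-((z - 15) / 12.0) ** 2)    # toy PDD shape
meas = np.exp(-((z - 16) / 12.0) ** 2)
print(gamma_pass_rate(calc, meas, z))
```

A 1 mm spatial shift passes everywhere (pass rate 1.0), while a shift far beyond 3 mm would drive points above gamma = 1 and lower the pass rate.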

Development of BIM-based 3D Modeling Instruction Materials and its Application Analysis for Professional Drafting Subject of Specialized Vocational High School (특성화고 전문제도 과목을 위한 BIM 기반 3D 주택설계 수업자료 개발 및 적용)

  • Kwon, Se-Jeong;Yoo, Hyun-Seok
    • 대한공업교육학회지 / v.43 no.1 / pp.1-19 / 2018
  • As BIM design technology has recently been applied in the construction field, architectural design education in industry and universities has been shifting to 3D modeling. Nevertheless, architectural design and drafting education in construction-specialized vocational high schools is not responding appropriately to this change. Although students need 3D modeling design ability, there is a serious lack of 3D housing design instructional material that meets this need. The purpose of this study is to develop BIM-based 3D modeling instructional material and to analyze its effect on interest and task performance ability in housing design. The 3D modeling instructional material used in this study was developed through the four stages of preparation, development, implementation, and evaluation, following the PDIE model procedure. A nonequivalent control group pretest-posttest design was used as the experimental design for hypothesis testing. Under this design, the BIM-based 3D modeling instructional material was taught to the experimental group and 2D CAD-based standard instructional material was taught to the control group. The experimental treatment was conducted with students of a construction-specialized vocational high school and applied to the subject of Professional Drafting over 12 class hours. Before and after the experimental treatment, interest and task performance ability in housing design were tested, and the effects of the 3D modeling instructional material were analyzed through independent-samples t-tests. The results of the study are as follows. First, BIM-based 3D modeling instructional material was developed for the 'Housing Design & Drafting' unit of the Professional Drafting subject in construction-specialized vocational high schools. Second, applying the 3D modeling instructional material was effective in improving students' interest. Third, it was also effective in improving students' task performance ability in housing design.
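The independent-samples t-test used for the group comparison above can be sketched as follows. The score gains are simulated stand-ins, not the study's measurements, and Welch's unequal-variance variant is used as a common default:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# hypothetical interest-score gains for 30 students per group
experimental = rng.normal(loc=9.0, scale=3.0, size=30)  # BIM-based 3D modeling group
control      = rng.normal(loc=4.0, scale=3.0, size=30)  # 2D CAD-based group

# Welch's t-test (does not assume equal variances)
t_stat, p_value = stats.ttest_ind(experimental, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A p-value below the chosen significance level (commonly 0.05) would support the conclusion that the experimental treatment improved scores.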

The possibility of South Korea to become a member state of APSCO: an analysis from Legal and political perspectives (韓國加入亞太空間合作組織的可能性 : 基于法律与政策的分析)

  • Nie, Mingyan
    • The Korean Journal of Air & Space Law and Policy / v.31 no.2 / pp.237-269 / 2016
  • The Asia-Pacific Space Cooperation Organization (APSCO) is the only intergovernmental space cooperation organization in Asia. Since its establishment, eight countries have signed the convention and become member states. South Korea participated actively in the preparatory phase of creating the organization, and one conference organized by AP-MCSTA, the predecessor of APSCO, was held in South Korea. However, since the APSCO Convention was opened for signature in 2005, South Korea has not ratified it or become a member. The rapid development of space commercialization and privatization, together with the fastest-growing commercial space market being in Asia, provides opportunities for Asian countries to cooperate with each other in relevant space fields, and participating in an existing cooperation framework (e.g., APSCO) could be a proper choice for Asian space countries such as South Korea. Even if substantive cooperation in particular space fields is challenging, joint space programs among different Asian countries for dealing with common events can be initiated as first steps. Since APSCO has adopted successful legal arrangements from ESA, the legal measures established by its Convention should be qualified to secure the benefits of the different member states. For example, the "fair return" principle provides that the return of interests from the relevant programs is in proportion to each member's investment in those programs. Moreover, the distinction between basic and optional activities is intended to give members the freedom to choose which programs to participate in. As for the voting procedure, the Council's acceptance of "consensus" favors protecting members' interests when making decisions. However, political factors that could block the participation of South Korea in APSCO are difficult to ignore.
A recent example is South Korea's announcement of the deployment of THAAD, which has caused tension between South Korea and China and will influence cooperation between the two states in space activities. A long-standing barrier is that China is not a member of the main international export control mechanism, the MTCR. The U.S. cites this fact as the main reason to prevent South Korea from cooperating with China in developing space programs. Although the political factors blocking South Korea's participation in APSCO are not easy to remove in the short term, legal measures can be taken to reduce their influence. More specifically, APSCO is recommended to secure the commercial interests of the different cooperation programs by regulating precisely the implementation of the "fair return" principle. APSCO is also encouraged to contribute to managing common regional events by sharing satellite data. These measures can respond effectively to the requirements of rapidly developing space commercialization and the increasing common needs of Asia, thereby providing a platform for further cooperation. In addition, to reduce the political influence directly, two legal measures are necessary: first, to clarify the rights and responsibilities of the host state (i.e., China) as providing assistance, coordination, and services to the management of the Organization, so as to allay other member states' worries that the host state will control the Organization's activities; and second, to make clear that cooperation within APSCO is for non-military purposes (a narrow sense of "peaceful purpose"), so as to reduce political concerns.
Regional cooperation in Asia on space affairs is considered a general trend for the future, so if South Korea's participation in APSCO finally proves feasible, there will be an opportunity to discuss the creation of a comprehensive institutionalized framework for space cooperation in Asia.

An Estimation of Price Elasticities of Import Demand and Export Supply Functions Derived from an Integrated Production Model (생산모형(生産模型)을 이용(利用)한 수출(輸出)·수입함수(輸入函數)의 가격탄성치(價格彈性値) 추정(推定))

  • Lee, Hong-gue
    • KDI Journal of Economic Policy / v.12 no.4 / pp.47-69 / 1990
  • Using an aggregator model, we look into the possibilities for substitution between Korea's exports, imports, domestic sales, and domestic inputs (particularly labor), and between disaggregated export and import components. Our approach draws heavily on an economy-wide GNP function similar to Samuelson's, modeling trade functions as derived from an integrated production system. Under homotheticity and weak separability, the GNP function facilitates consistent aggregation that retains certain properties of the production structure. It is also useful for a two-stage optimization process that yields not only the net output price elasticities of the first-level aggregator functions but also those of the second-level individual components of exports and imports. To implement the model, we apply the Symmetric Generalized McFadden (SGM) function developed by Diewert and Wales at both stages of estimation. The first stage estimates the unit quantity equations of the second-level exports and imports, which comprise four components each. The parameter estimates obtained in the first stage are used to derive instrumental variables for the aggregate export and import prices employed in the upper model. In the second stage, the net output supply equations derived from the GNP function are used to estimate the price elasticities of the first-level variables: exports, imports, domestic sales, and labor. With these estimates in hand, we can compute various elasticities of both the net output supply functions and the individual components of exports and imports. At the aggregate (first) level, exports appear to be substitutable with domestic sales, while labor is complementary with imports.
An increase in the price of exports would reduce the domestic sales supply, and a decrease in the wage rate would boost the demand for imports. On the other hand, labor and imports are complementary with exports and domestic sales in the input-output structure. At the disaggregate (second) level, the estimated price elasticities of the export and import components indicate that both substitution and complementarity exist between them. Although these elasticities are interesting in their own right, they would be most usefully applied as inputs to a computable general equilibrium model.
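The price elasticities discussed above all instantiate the same point-elasticity definition, e = (dQ/dP)(P/Q). A toy numeric illustration for a hypothetical linear supply curve (this only illustrates the concept, not the paper's two-stage SGM estimation):

```python
def point_elasticity(slope, price, quantity):
    """Point price elasticity e = (dQ/dP) * (P/Q) evaluated at (price, quantity)."""
    return slope * price / quantity

# hypothetical export-supply curve Q = 10 + 2P, evaluated at P = 5 (so Q = 20)
slope = 2.0
price = 5.0
quantity = 10 + slope * price
e = point_elasticity(slope, price, quantity)
print(e)  # 0.5: a 1% price rise raises supplied quantity by about 0.5%
```

Cross-price elasticities work the same way, with the slope replaced by the partial derivative with respect to the other good's price; a negative cross-price elasticity of supply indicates complementarity, a positive one substitutability.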


M2 Velocity and Expected Inflation in Korea: Implications for Interest Rate Policy (인플레와 M2 유통속도(流通速度))

  • Park, Woo-kyu
    • KDI Journal of Economic Policy / v.13 no.2 / pp.3-19 / 1991
  • This paper attempts to identify the key determinants of long-run movements in real M2 by using the Johansen procedure for estimating and testing cointegration relations. It turns out that the real M2 equation has been stable over the long run despite rapid changes in financial structure since 1975. Moreover, the real M2 equation can be reduced to a velocity equation in which the opportunity cost variable, expected inflation less the weighted average rate paid on M2 deposits, is the key determinant. However, using a market interest rate such as the yield on corporate bonds in place of expected inflation to calculate the opportunity cost does not work: in the U.S. a market interest rate can be used, but not in Korea. Two somewhat different explanations can account for this result. One is that the yield on corporate bonds may not adequately reflect inflationary expectations because of regulations on interest rate movements. The other is that M2 deposits are not readily substitutable with assets such as corporate bonds because of market segmentation, regulation, and so on. From the policymaker's point of view, this implies that the inflation rate is an important indicator for a policy response, whereas movements in the yield on corporate bonds are not. Altogether, the role of interest rates has been quite limited in Korea because of incomplete interest rate liberalization, an underdeveloped financial system, the implementation procedures of policy measures, and so on. The finding that M2 velocity has a positive cointegration relation with expected inflation minus the average rate on M2 implies that frequent adjustments of the regulated rates on M2 will be necessary as market conditions change. As expected inflation rises, M2 velocity will eventually increase if the rates on M2 do not change, which will cause higher inflation.
If interest rates are liberalized, increases in market interest rates will result in lagged increases in deposit rates on M2. In Korea, however, a substantial portion of deposit rates are regulated and will not change without the authorities' initiative. A tight monetary policy will raise a few market interest rates, but the market mechanism of upward pressure for interest rate adjustment never reaches regulated deposit rates. Hence the overall effects of tight monetary policy diminish considerably, only causing distortions in the flow of funds. Frequent adjustments of deposit rates are therefore necessary as market conditions such as inflationary expectations change, and it becomes important for the policymaker to actively adjust regulated deposit rates, because the financial sector in Korea is not fully developed.


The Adoption and Diffusion of Semantic Web Technology Innovation: Qualitative Research Approach (시맨틱 웹 기술혁신의 채택과 확산: 질적연구접근법)

  • Joo, Jae-Hun
    • Asia pacific journal of information systems / v.19 no.1 / pp.33-62 / 2009
  • Internet computing is a disruptive IT innovation, and the Semantic Web can be considered one as well, because Semantic Web technology has the potential to reduce information overload and enable semantic integration through capabilities such as semantics and machine-processability. How should organizations adopt the Semantic Web? What factors affect the adoption and diffusion of Semantic Web innovation? Most studies on the adoption and diffusion of innovation use empirical analysis as a quantitative research methodology in the post-implementation stage. There is criticism that positivist research, in requiring theoretical rigor, can sacrifice relevance to practice, and rapid advances in technology require studies relevant to practice. In particular, a quantitative approach to the factors affecting adoption of the Semantic Web is realistically impossible because the Semantic Web is in its infancy. Yet at this early stage of its introduction, it is necessary to give practitioners and researchers a model and some guidelines for the adoption and diffusion of the technology innovation. Thus, the purpose of this study is to present a model of the adoption and diffusion of the Semantic Web and to offer propositions as guidelines for successful adoption, using a qualitative research method including multiple case studies and in-depth interviews. To saturate the categories, the researcher conducted face-to-face interviews with 15 people, plus 2 interviews by telephone and e-mail. Nine interviews, including the 2 telephone interviews, were with nine user organizations adopting the technology innovation, and the others were with three supply organizations. Semi-structured interviews were used to collect data; the interviews were recorded on a digital voice recorder and subsequently transcribed verbatim, yielding 196 pages of transcripts from about 12 hours of interviews.
Triangulation of evidence was achieved by examining each organization's website and various documents such as brochures and white papers. The researcher read the transcripts several times and underlined core words, phrases, or sentences. Data analysis then used open coding, in which the researcher forms initial categories of information about the phenomenon being studied by segmenting the information. QSR NVivo version 8.0 was used to categorize sentences containing similar concepts. The 47 categories derived from the interview data were grouped into 21 categories, from which six factors were named; five of these affect adoption of the Semantic Web. The first factor is demand pull, including requirements for improving the search and integration services of existing systems and for creating new services. Second, from the perspective of technology push, environmental conduciveness, reference models, uncertainty, technology maturity, potential business value, government sponsorship programs, promising prospects for technology demand, complexity, and trialability affect adoption of the Semantic Web. Third, absorptive capacity plays an important role in adoption. Fourth, supplier's competence includes communication with and training for users, and the absorptive capacity of the supply organization. Fifth, over-expectation, which creates a gap between users' expectation levels and perceived benefits, has a negative impact on adoption of the Semantic Web. Finally, a factor including critical mass of ontology, budget, and visible effects is identified as a determinant of routinization and infusion. The researcher suggests a model of the adoption and diffusion of the Semantic Web, representing the relationships between the six factors and adoption/diffusion as dependent variables. Six propositions are derived from the adoption/diffusion model to offer guidelines to practitioners and a research model for further studies.
Proposition 1: Demand pull has an influence on the adoption of the Semantic Web. Proposition 1-1: The stronger the requirements for improving existing services, the more successfully the Semantic Web is adopted. Proposition 1-2: The stronger the requirements for new services, the more successfully the Semantic Web is adopted. Proposition 2: Technology push has an influence on the adoption of the Semantic Web. Proposition 2-1: From the perspective of user organizations, technology push forces such as environmental conduciveness, reference models, potential business value, and government sponsorship programs have a positive impact on the adoption of the Semantic Web, while uncertainty and low technology maturity have a negative impact on its adoption. Proposition 2-2: From the perspective of suppliers, technology push forces such as environmental conduciveness, reference models, potential business value, government sponsorship programs, and promising prospects for technology demand have a positive impact on the adoption of the Semantic Web, while uncertainty, low technology maturity, complexity, and low trialability have a negative impact on its adoption. Proposition 3: Absorptive capacities such as organizational formal support systems, officers' or managers' competence in analyzing technology characteristics, their passion or willingness, and top management support are positively associated with successful adoption of the Semantic Web innovation from the perspective of user organizations. Proposition 4: Supplier's competence has a positive impact on the absorptive capacities of user organizations and on technology push forces. Proposition 5: The greater the gap of expectation between users and suppliers, the later the Semantic Web is adopted.
Proposition 6: Post-adoption activities such as budget allocation, reaching critical mass, and sharing ontology to offer sustainable services are positively associated with successful routinization and infusion of the Semantic Web innovation from the perspective of user organizations.

A Study on the Component-based GIS Development Methodology using UML (UML을 활용한 컴포넌트 기반의 GIS 개발방법론에 관한 연구)

  • Park, Tae-Og;Kim, Kye-Hyun
    • Journal of Korea Spatial Information System Society / v.3 no.2 s.6 / pp.21-43 / 2001
  • The environment for developing information systems, including GIS, has changed drastically in recent years with respect to the complexity and diversity of software, distributed processing, network computing, and so on. This has shifted the software development paradigm to component-based development (CBD) built on object-oriented technology. To support these movements, OGC has released abstract and implementation standards that enable access to services for heterogeneous geographic information processing. It is also a common trend in Korea to develop GIS applications for municipal governments based on component technology. It is therefore imperative to adopt component technology, yet little related research has been done. This research proposes a component-based GIS development methodology, ATOM (Advanced Technology Of Methodology), and verifies its adoptability through a case study. ATOM can be used as a methodology to develop both components themselves and enterprise GIS, supporting the whole software development life cycle based on conventional reusable components. ATOM defines a stepwise development process comprising the activities and work units of each stage. It also provides inputs and outputs, standardized items and specifications for documentation, and detailed instructions for easy understanding of the methodology. The major characteristic of ATOM is that it is a component-based development methodology that accounts for the numerous features of the GIS domain in order to generate components with simple functions, minimal size, and maximum reusability. The case study validating the adoptability of ATOM showed it to be an efficient tool for generating components, providing relatively systematic and detailed guidelines for component development.
ATOM should therefore promote quality and productivity in developing GIS application software and eventually contribute to the automated production of GIS software, our final goal.


Effect of Corporate Characteristics of Startups on Overcoming the Death Valley: Focusing on Moderating Effect of Open Innovation and Venture Capital Support (스타트업의 기업 특성이 데스밸리 극복에 미치는 영향: 개방형 혁신과 벤처캐피탈 지원의 조절효과)

  • Park, Hyun Suk;Na, Hee Kyung;Moon, Gye Wan
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.18 no.1 / pp.13-29 / 2023
  • Overcoming the death valley, a period in the entrepreneurial process in which resources are depleted and profitability declines or stagnates, is essential for success. In this study, we examined how the strategic orientation (technology, customer, and competitor orientations) and absorptive capacity (potential and realized capacities) of startups affect their chances of overcoming the death valley, and we empirically analyzed whether open innovation and venture capital support moderate these effects. The results show that customer orientation and realized absorptive capacity have a positive influence on overcoming the death valley. In addition, we found that open innovation and venture capital support have a moderating effect only for technology orientation among the three types of strategic orientation. These results emphasize (1) the need for startups to take a more customer-oriented approach to overcome the death valley: customer-oriented behavior and strategies are vital for long-term survival and success, considering that most of the companies investigated were technology-based startups and only customer orientation showed significant results. They also show that (2) implementing innovation in a more open way and securing venture capital funding can make it easier for startups to overcome the death valley. This study has academic significance in that it empirically analyzed the relationships among the key factors influencing the overcoming of the death valley, where the majority of existing research remains at the level of conceptual discussion or case-study methodology.
Furthermore, this research provides practical implications for the establishment and implementation of effective strategies to confront the challenges of the death valley, for startups, government, and related organizations.
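A moderating effect of the kind tested above is typically estimated as an interaction term in a regression. A minimal sketch with simulated data (all variable names and coefficients are hypothetical, not the study's measures):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
tech_orient = rng.normal(size=n)      # technology orientation (standardized)
vc_support = rng.integers(0, 2, n)    # venture-capital support dummy (0/1)

# hypothetical outcome: VC support strengthens the tech-orientation effect
overcome = (0.2 * tech_orient + 0.3 * vc_support
            + 0.5 * tech_orient * vc_support
            + rng.normal(scale=0.5, size=n))

# OLS with an interaction term: beta[3] is the moderating effect,
# i.e. how much the tech-orientation slope changes under VC support
X = np.column_stack([np.ones(n), tech_orient, vc_support,
                     tech_orient * vc_support])
beta, *_ = np.linalg.lstsq(X, overcome, rcond=None)
print(beta)
```

A significant interaction coefficient (here planted at 0.5 and recovered by the fit) is the usual statistical evidence that the moderator changes the strength of the main effect.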


A Study on the long-term Hemodialysis patient's hypotension and prevention of Blood loss in coil during Hemodialysis (장기혈액투석환자의 투석중 혈압하강과 Coil내 혈액손실 방지를 위한 기초조사)

  • 박순옥
    • Journal of Korean Academy of Nursing / v.11 no.2 / pp.83-104 / 1981
  • Hemodialysis is an essential treatment for the long-term care of chronic renal failure patients and for patient management before and after kidney transplantation. It sustains the lives of end-stage renal failure patients who do not improve despite a strict regimen, and furthermore allows them to maintain civil life. Nursing implementation in hemodialysis may have a significant effect on the patient's life. The purpose of this study was to obtain basic data on two problems: the hypotension encountered by patients, and the blood loss that affects hemodialysis patients' anemic state through incomplete rinsing of blood from the coil during the dialysis process. The subjects were 44 patients treated with hemodialysis 691 times in the hemodialysis unit. The data were collected at Gang Nam St. Mary's Hospital from January 1, 1981 to April 30, 1981, using direct observation, clinical laboratory tests, and body weight measurement, and were analyzed by chi-square test, t-test, and analysis of variance. The results obtained were as follows. A. Clinical laboratory data and other data by dialysis procedure: The average initial body weight was 2.37±0.97 kg, and the average body weight after every dialysis was 2.33±0.9 kg. The subjects' average hemoglobin was 7.05±1.93 g/dl and average hematocrit was 20.84±3.82%. The average initial blood pressure was 174.03±23.75 mmHg, and after dialysis 158.45±25.08 mmHg. The subjects' average blood loss due to blood sampling for laboratory data was 32.78±13.49 cc/month, and the average blood replacement was 1.31±0.88 pints/month per patient. B. Hypotensive states and coping approaches: The occurrence rate of hypotension was 28.08%, i.e., 194 cases among 691 dialyses. 1.
Regarding initial blood pressure, the largest group (36.6%) was 150-179 mmHg; regarding the degree of hypotension during dialysis, the largest group (28.9%) was a fall of 40-50 mmHg. When the initial blood pressure was under 180 mmHg, 59.8% of clinical symptoms appeared in the group with a fall of more than 20 mmHg; when it was above 180 mmHg, 34.2% of clinical symptoms appeared in the group with a fall of more than 40 mmHg. The higher the initial blood pressure, the stronger the degree of hypotension, and these differences were statistically significant (P=0.0000). 2. Of the times at which hypotension occurred, "after 3 hours" accounted for 29.4%; the longer the dialysis procedure, the stronger the degree of hypotension, and these differences were statistically significant (P=0.0142). 3. Of the symptoms observed, sweating and flushing accounted for 43.3%, and yawning and dizziness for 37.6%; these are therefore important symptoms implying hypotension during hemodialysis. The stages of the procedures for coping with hypotension were as follows: 45.9% recovered on reducing the blood flow rate from 200 cc/min to 100 cc/min and reducing venous pressure to 0-30 mmHg; 33.51% recovered on adjusting the blood flow rate and infusing 300 cc of 0.9% normal saline; 4.1% recovered on infusion of over 300 cc of 0.9% normal saline; and 3.6% on norepinephrine, 5.7% on blood transfusion, and 7.2% on albumin. The stronger the degree of symptoms observed in hypotension, the more treatment was required for recovery, and these differences were statistically significant (P=0.0000). C. Effects of albumin and hemofiltration on changes in blood pressure and osmolality: 1. Changes in blood pressure in the group that did not require treatment for hypotension and the group that did averaged 21.5 mmHg and 44.82 mmHg, respectively.
The difference was thus bigger in the latter, and this was statistically significant (P=0.002). Changes in osmolality averaged 12.65 mOsm and 17.57 mOsm; the difference was bigger in the latter but not statistically significant (P=0.323). 2. Changes in blood pressure in the group infused with albumin and the group that did not require treatment for hypotension averaged 30 mmHg and 21.5 mmHg; the difference was not statistically significant (P=0.503). Changes in osmolality averaged 5.63 mOsm and 12.65 mOsm; the difference was smaller in the former but not statistically significant (P=0.287). Changes in blood pressure in the group infused with albumin and the group that required treatment for hypotension averaged 30 mmHg and 44.82 mmHg; the difference was smaller in the former but not statistically significant (P=0.061). Changes in osmolality averaged 8.63 mOsm and 17.59 mOsm; the difference was smaller in the former but not statistically significant (P=0.093). 3. Changes in blood pressure in the group implemented with hemofiltration and the group that did not require treatment for hypotension averaged 22 mmHg and 21.5 mmHg; the difference was not statistically significant (P=0.320). Changes in osmolality averaged 0.4 mOsm and 12.65 mOsm; the difference was smaller in the former but not statistically significant (P=0.199). Changes in blood pressure in the group implemented with hemofiltration and the group that required treatment for hypotension averaged 22 mmHg and 44.82 mmHg; the difference was smaller in the former and statistically significant (P=0.035). Changes in osmolality averaged 0.4 mOsm and 17.59 mOsm; the difference was smaller in the former but not statistically significant (P=0.086). D.
On the changes of body weight and blood pressure between the hemofiltration and hemodialysis groups: 1. Changes of body weight in the hemofiltration and hemodialysis groups averaged 3.340 and 3.320, respectively; there was no statistically significant difference (P=0.185), but the comparison of the standard deviations of the body-weight differences was statistically significant (P=0.0000). Changes of blood pressure in the hemofiltration and hemodialysis groups averaged 17.81 mmHg and 19.47 mmHg; there was no statistically significant difference (P=0.119), but the comparison of the standard deviations of the blood-pressure differences was statistically significant (P=0.0000). E. On the method of reinfusing blood from the coil after hemodialysis and the residual blood lost in the coil: 1. Comparing and analysing the Hct of residual blood in the coil by factors influencing the reinfusion method, infusion of 200 cc of saline reduced residual blood in the coil in the quantitative comparison of 0 cc, 50 cc, 100 cc, and 200 cc of saline, and the differences were statistically significant (P < 0.001). The coil-shaking method reduced residual blood in the coil compared with the non-shaking method, and this difference was statistically significant (P < 0.05). Adjusting the pressure in the coil to 0 mmHg reduced residual blood in the coil compared with 200 mmHg, and this difference was statistically significant (P < 0.001). 2. Comparing the ten reinfusion methods with respect to each factor, there was little difference in the group choosing 100 cc saline infusion with the coil at 0 mmHg; the measured quantity of blood loss averaged 13.49 cc.
The coil-shaking method combined with 50 cc saline infusion while adjusting the coil pressure to 0 mmHg was the most effective at reducing residual blood; the measured quantity of blood loss averaged 15.18 cc.
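The staged coping procedure reported above can be summarized as an escalating sequence of interventions. The following is a purely illustrative sketch of that ordering; the function name, severity scale, and stage strings are assumptions for illustration, not a clinical protocol from the study.

```python
# Illustrative sketch of the staged coping procedure for intradialytic
# hypotension summarized in the abstract. The severity scale and the
# function interface are hypothetical; only the stage ordering and the
# intervention descriptions come from the reported results.

def coping_stages(severity: int) -> list:
    """Return the cumulative interventions applied up to a given severity.

    Severity 1..5 loosely mirrors the abstract's ordering: most episodes
    (45.9%) resolved at stage 1, and progressively fewer required the
    later stages.
    """
    stages = [
        "reduce blood flow rate 200 -> 100 cc/min; venous pressure 0-30 mmHg",
        "adjust blood flow rate and infuse 300 cc of 0.9% normal saline",
        "infuse over 300 cc of 0.9% normal saline",
        "administer norepinephrine or blood transfusion",
        "administer albumin",
    ]
    if not 1 <= severity <= len(stages):
        raise ValueError("severity must be between 1 and 5")
    return stages[:severity]

# A mild episode stops at the first stage; a severe one escalates further.
print(len(coping_stages(1)))  # 1
print(len(coping_stages(4)))  # 4
```

This mirrors the study's observation that stronger symptoms required more treatments before recovery.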


Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.27-65
    • /
    • 2020
  • Many information and communication technology companies release their internally developed AI technology to the public: for example, Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software, a company can strengthen its relationship with the developer community and the artificial intelligence (AI) ecosystem, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although various analyses of open source software have been conducted, there is a lack of studies that help industry develop or use deep learning open source software. This study therefore attempts to derive a strategy for adopting a deep learning open source framework through case studies. Based on the technology-organization-environment (TOE) framework and a literature review on the adoption of open source software, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We conducted a case study analysis of three companies' adoption cases (two successes and one failure) and found that seven of the eight TOE factors, along with several factors regarding the company, team, and resources, are significant for the adoption of a deep learning open source framework.
By organizing the case study analysis results, we identified five important success factors for adopting a deep learning framework: the knowledge and expertise of the developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework work tool service. For an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, it is necessary to support research developers' use of deep learning frameworks by collecting and managing data inside and outside the company through a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures at the usage stage, companies will increase the number of deep learning research developers, the ability to use the deep learning framework, and the supply of GPU resources. In the proliferation stage of the deep learning framework, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of the developers, for example by automatically optimizing the hardware (GPU) environment. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by activating developer retraining and seminars.
To implement the five identified success factors, a step-by-step enterprise procedure for adopting the deep learning framework is proposed: defining the project problem, confirming whether the deep learning methodology is the right method, confirming whether the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework across the enterprise. The first three steps (defining the project problem, confirming whether the deep learning methodology is the right method, and confirming whether the deep learning framework is the right tool) are pre-considerations for adopting a deep learning open source framework. Once these three pre-consideration steps are clear, the next two steps (using the deep learning framework in the enterprise and spreading the framework across the enterprise) can proceed. In the fourth step, the knowledge and expertise of the developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, the five important factors are realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.
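The step-by-step adoption procedure described above can be sketched as a sequence of decision gates followed by two execution steps. This is a minimal illustrative sketch: the `AdoptionContext` fields, the `next_step` function, and the gate logic are assumptions made for illustration, not part of the original study.

```python
# Hypothetical sketch of the abstract's five-step enterprise procedure for
# adopting a deep learning open source framework. Step names follow the
# abstract; the dataclass fields and gate logic are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class AdoptionContext:
    problem_defined: bool          # step 1: the project problem is defined
    dl_is_right_method: bool       # step 2: deep learning fits the problem
    framework_is_right_tool: bool  # step 3: an open source framework fits


def next_step(ctx: AdoptionContext) -> str:
    """Walk the three pre-consideration gates, then the two execution steps."""
    if not ctx.problem_defined:
        return "define the project problem"
    if not ctx.dl_is_right_method:
        return "confirm the deep learning methodology is the right method"
    if not ctx.framework_is_right_tool:
        return "confirm the deep learning framework is the right tool"
    # All pre-considerations cleared: use, then spread, the framework.
    return "use the framework in the enterprise, then spread it"


print(next_step(AdoptionContext(True, True, False)))
# -> confirm the deep learning framework is the right tool
```

The point of the gate ordering is the one the abstract makes: a company should not commit to a specific framework before confirming that deep learning is the right methodology for a well-defined problem.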