• Title/Summary/Keyword: out-of-order


Scalable Collaborative Filtering Technique based on Adaptive Clustering (적응형 군집화 기반 확장 용이한 협업 필터링 기법)

  • Lee, O-Joun;Hong, Min-Sung;Lee, Won-Jin;Lee, Jae-Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.73-92
    • /
    • 2014
  • An Adaptive Clustering-based Collaborative Filtering Technique was proposed to solve the fundamental problems of collaborative filtering, such as the cold-start problem, the scalability problem, and the data sparsity problem. Previous collaborative filtering techniques made recommendations based on the predicted preference of a user for a particular item, using a similar-item subset and a similar-user subset composed from users' preferences for items. For this reason, if the density of the user preference matrix is low, the reliability of the recommendation system decreases rapidly, and it becomes harder to create the similar-item and similar-user subsets. In addition, as the scale of the service increases, the time needed to create these subsets increases geometrically, and the response time of the recommendation system increases with it. To solve these problems, this paper suggests a collaborative filtering technique that actively adapts conditions to the model and adopts concepts from context-based filtering. The technique consists of four major methodologies. First, the items and users are clustered according to their feature vectors, and an inter-cluster preference between each item cluster and user cluster is then estimated. In this way, the run-time for creating a similar-item or similar-user subset can be saved, the reliability of the recommendation system can be made higher than when only user preference information is used to create those subsets, and the cold-start problem can be partially solved. Second, recommendations are made using the previously composed item and user clusters and the inter-cluster preferences between them. In this phase, a list of items is made for each user by examining the item clusters in decreasing order of the inter-cluster preference of the cluster to which the user belongs, and selecting and ranking the items according to the predicted or recorded user preference information. With this method, the model-creation phase bears the highest load of the recommendation system, which minimizes the load at run-time; the scalability problem is thus addressed, and a large-scale recommendation system can be operated with highly reliable collaborative filtering. Third, missing user preference information is predicted using the item and user clusters, which mitigates the problem caused by the low density of the user preference matrix. Existing studies used either item-based or user-based prediction; in this paper, Hao Ji's idea, which uses both item-based and user-based prediction, was improved. The reliability of the recommendation service can be improved by combining the predictive values of both techniques according to the condition of the recommendation model. By predicting user preferences based on the item or user clusters, the time required for prediction can be reduced, and missing user preferences can be predicted at run-time. Fourth, the item and user feature vectors are updated by learning from subsequent user feedback; this phase applies normalized user feedback to the item and user feature vectors.
This method can mitigate the problems caused by adopting concepts from context-based filtering, namely the item and user feature vectors based on the user profile and item properties. The problems with using these feature vectors stem from the difficulty of quantifying the qualitative features of items and users. Therefore, the elements of the user and item feature vectors are made to match one to one, and if user feedback on a particular item is obtained, it is applied to the opposite feature vector. This method was verified by comparing its performance with existing hybrid filtering techniques, using two measures: MAE (Mean Absolute Error) and response time. The MAE results confirmed that the technique improves the reliability of the recommendation system, and the response-time results showed that it is suitable for a large-scale recommendation system. This paper suggested an Adaptive Clustering-based Collaborative Filtering Technique with high reliability and low time complexity, but it has some limitations. Because the technique focused on reducing time complexity, a large improvement in reliability was not expected. Future work will improve this technique with rule-based filtering.
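The clustering-plus-inter-cluster-preference idea summarized in this abstract can be illustrated with a short sketch. The Python code below is a minimal, hypothetical illustration only, not the authors' implementation: the use of scikit-learn's KMeans, the cluster counts, and the fallback to a global mean rating are all assumptions made for the example.

```python
# Minimal sketch of clustering-based collaborative filtering (assumptions only):
# cluster users and items by feature vectors, estimate an inter-cluster preference
# from the known ratings, and use it to rank unseen items and fill missing entries.
import numpy as np
from sklearn.cluster import KMeans

def build_model(ratings, user_features, item_features, n_user_clusters=4, n_item_clusters=4):
    """ratings: (n_users, n_items) array with np.nan for unknown preferences."""
    user_labels = KMeans(n_clusters=n_user_clusters, n_init=10).fit_predict(user_features)
    item_labels = KMeans(n_clusters=n_item_clusters, n_init=10).fit_predict(item_features)

    # Inter-cluster preference: mean of the known ratings between each
    # (user cluster, item cluster) pair; fall back to the global mean.
    global_mean = np.nanmean(ratings)
    inter = np.full((n_user_clusters, n_item_clusters), global_mean)
    for uc in range(n_user_clusters):
        for ic in range(n_item_clusters):
            block = ratings[np.ix_(user_labels == uc, item_labels == ic)]
            if np.any(~np.isnan(block)):
                inter[uc, ic] = np.nanmean(block)
    return user_labels, item_labels, inter

def recommend(user, ratings, user_labels, item_labels, inter, top_n=5):
    """Rank unseen items: recorded rating if present, otherwise inter-cluster preference."""
    uc = user_labels[user]
    scores = np.where(np.isnan(ratings[user]), inter[uc, item_labels], ratings[user])
    candidates = np.where(np.isnan(ratings[user]))[0]
    return candidates[np.argsort(scores[candidates])[::-1][:top_n]]
```

Because the inter-cluster preference table is computed once, per-request work reduces to a table lookup and a sort, which mirrors the abstract's point about moving the heavy load to the model-creation phase.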

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.27-65
    • /
    • 2020
  • Many information and communication technology companies have released their own AI technology to the public, for example Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software, a company can strengthen its relationship with the developer community and the artificial intelligence (AI) ecosystem, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although various analyses of open source software have been made, there is a lack of studies that help develop or use deep learning open source software in industry. This study therefore attempts to derive a strategy for adopting such a framework through case studies of deep learning open source frameworks. Based on the technology-organization-environment (TOE) framework and a literature review on the adoption of open source software, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We conducted a case study analysis of three companies' adoption cases (two successes and one failure) and found that seven of the eight TOE factors, along with several factors regarding company, team, and resources, are significant for the adoption of a deep learning open source framework. By organizing the case study results, we identified five important success factors for adopting a deep learning framework: the knowledge and expertise of developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. For an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, it is necessary to support research developers' use of deep learning frameworks by collecting and managing data inside and outside the company with a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures in the usage stage, companies will increase the number of deep learning research developers, the ability to use the deep learning framework, and the support of GPU resources. In the proliferation stage of the deep learning framework, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of the developers, for example by optimizing the hardware (GPU) environment automatically. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by activating developer retraining and seminars.
To implement the identified five success factors, a step-by-step enterprise procedure for adopting the deep learning framework was proposed: defining the project problem, confirming whether the deep learning methodology is the right method, confirming whether the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework across the enterprise. The first three steps (defining the project problem, confirming whether the deep learning methodology is the right method, and confirming whether the deep learning framework is the right tool) are pre-considerations for adopting a deep learning open source framework. After these three pre-consideration steps are clear, the next two steps (using the deep learning framework in the enterprise and spreading the framework across the enterprise) can proceed. In the fourth step, the knowledge and expertise of the developers in the team are important in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, the five important factors are realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

Lower Lung Field Tuberculosis (폐 하야 결핵)

  • Moon, Doo-Seop;Lim, Byung-Sung;Kim, Yeon-Soo;Kim, Seong-Min;Lee, Jae-Young;Lee, Dong-Suck;Sohn, Jang-Won;Lee, Kyung-Sang;Yang, Suck-Chul;Yoon, Ho-Joo;Shin, Dong-Ho;Park, Sung-Soo;Lee, Jung-Hee
    • Tuberculosis and Respiratory Diseases
    • /
    • v.44 no.2
    • /
    • pp.232-240
    • /
    • 1997
  • Background: Postprimary pulmonary tuberculosis is located mainly in the upper lobes. Tuberculous lesions involving the lower lobes usually arise from an upper lobe cavity through endobronchial spread. When tuberculosis is confined to the lower lung field, it often masquerades as pneumonia, lung cancer, bronchiectasis, or lung abscess, so the correct diagnosis may sometimes be delayed for a long time. Methods: We carried out a retrospective clinical study of 50 patients with confirmed lower lung field tuberculosis who visited the Department of Pulmonary Medicine at Hanyang University Hospital from January 1992 to December 1994. Results: Lower lung field tuberculosis without concomitant upper lobe disease occurred in fifty patients, representing 6.9% of total admissions with active pulmonary tuberculosis over a period of 3 years. It occurred most frequently in the third decade, but the age distribution was relatively even; the mean age was 43 years. Females were more frequently affected than males (male to female ratio 1 : 1.9). The most common symptom was cough (68%), followed by sputum (52%), fever (38%), and chest discomfort (30%). On the chest X-rays of the 50 patients, consolidation was the most common finding (52%), followed by solitary nodule (22%), collapse (16%), and cavitary lesion (10%), in decreasing order. The disease was confined to the right side in 25 cases, to the left side in 20 cases, and involved both sides in 5 cases. Regarding endobronchial tuberculosis: (1) endobronchial involvement was proved by bronchoscopic examination in 20 of the 50 patients; (2) the mean age of these patients was 44 years and females were more affected than males (male to female ratio 1 : 3). Sputum AFB stain and Mycobacterium tuberculosis culture were positive in only 50% of cases; unlike upper lobe tuberculosis, additional diagnostic methods were therefore needed. In our study, bronchoscopic examination and percutaneous fine needle aspiration biopsy increased the diagnostic yield by 18% and 32%, respectively. The most common associated condition was diabetes mellitus (18%); others were anemia, anorexia nervosa, stomach cancer, and systemic steroid usage. Conclusion: When we find a lower lung field lesion, we should suspect tuberculosis if the patient has diabetes mellitus, anemia, systemic steroid usage, malignancy, or another immunosuppressed state. Because the diagnostic yield of sputum AFB smear and Mycobacterium tuberculosis culture was low, additional diagnostic methods such as bronchoscopy and fine needle aspiration biopsy were needed.


A Study on People Counting in Public Metro Service using Hybrid CNN-LSTM Algorithm (Hybrid CNN-LSTM 알고리즘을 활용한 도시철도 내 피플 카운팅 연구)

  • Choi, Ji-Hye;Kim, Min-Seung;Lee, Chan-Ho;Choi, Jung-Hwan;Lee, Jeong-Hee;Sung, Tae-Eung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.131-145
    • /
    • 2020
  • In line with the trend of industrial innovation, IoT technology, utilized in a variety of fields, is emerging as a key element in the creation of new business models and the provision of user-friendly services through its combination with big data. The data accumulated from Internet-of-Things (IoT) devices are being used in many ways to build convenience-based smart systems, since they enable customized intelligent systems through analysis of user environments and patterns. Recently, IoT has been applied to innovation in the public domain and used for smart cities and smart transportation, for example to address traffic and crime problems using CCTV. In particular, when planning underground services or establishing a passenger-flow control information system to enhance the convenience of citizens and commuters under congested public transportation conditions such as subways and urban railways, it is necessary to comprehensively consider both the ease of securing real-time service data and the stability of security. However, previous studies that utilize image data are limited by reduced object-detection performance under privacy constraints and abnormal conditions. The IoT device-based sensor data used in this study are free from privacy issues because they do not identify individuals, and can therefore be effectively utilized to build intelligent public services for unspecified people. We utilize IoT-based infrared sensor devices for an intelligent pedestrian tracking system in a metro service that many people use daily, and the temperature data measured by the sensors are transmitted in real time. The experimental environment for collecting data detected in real time was established at the equally spaced midpoints of a 4×4 grid in the upper part of the ceiling of subway entrances where the actual passenger flow is high, and the temperature change was measured for objects entering and leaving the detection spots. The measured data went through a preprocessing step in which reference values for the 16 areas were set and the differences between the temperatures in the 16 areas and their reference values per unit of time were calculated; this corresponds to a methodology that maximizes the signal of movement within the detection area. In addition, the values were scaled up by a factor of 10 in order to reflect the temperature differences by area more sensitively. For example, if the temperature collected from a sensor at a given time was 28.5℃, the value was changed to 285 for analysis. The data collected from the sensors thus have the characteristics of time series data and of image data with 4×4 resolution. Reflecting the characteristics of the measured, preprocessed data, we propose a hybrid algorithm that combines CNN, which performs well for image classification, and LSTM, which is especially suitable for analyzing time series data, referred to as CNN-LSTM (Convolutional Neural Network-Long Short Term Memory). In this study, the CNN-LSTM algorithm is used to predict the number of passing persons in one of the 4×4 detection areas.
We validated the proposed model by comparing its performance with other artificial intelligence algorithms, namely the Multi-Layer Perceptron (MLP), Long Short Term Memory (LSTM), and RNN-LSTM (Recurrent Neural Network-Long Short Term Memory). As a result of the experiment, the proposed CNN-LSTM hybrid model showed the best predictive performance compared to MLP, LSTM, and RNN-LSTM. By utilizing the proposed devices and models, various metro services, such as real-time monitoring of public transport facilities and congestion-based emergency response services, are expected to be provided without legal issues regarding personal information. However, the data were collected from only one side of the entrances, and data collected over a short period were used for prediction, so verification in other environments remains a limitation. In the future, the proposed model is expected to become more reliable if experimental data are sufficiently collected in various environments or if the training data are supplemented with measurements from other sensors.
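To make the described preprocessing (per-cell reference differences scaled by 10) and the CNN-LSTM structure concrete, the following Python/Keras sketch shows one possible arrangement. It is a hedged illustration, not the authors' architecture: the window length, filter count, and layer sizes are assumptions invented for the example.

```python
# Minimal, hypothetical CNN-LSTM sketch for sequences of 4x4 sensor frames,
# in the spirit of the hybrid model described in the abstract.
import numpy as np
from tensorflow.keras import layers, models

TIMESTEPS = 20   # assumed window length of consecutive sensor frames

def preprocess(frames, reference):
    """frames, reference: arrays of shape (..., 4, 4) in deg C.
    Subtract per-cell reference values and scale by 10, as described (28.5 -> 285-style scaling)."""
    return (frames - reference) * 10.0

def build_cnn_lstm():
    model = models.Sequential([
        layers.Input(shape=(TIMESTEPS, 4, 4, 1)),
        # Apply a small CNN to every 4x4 frame in the sequence ...
        layers.TimeDistributed(layers.Conv2D(16, (2, 2), activation="relu", padding="same")),
        layers.TimeDistributed(layers.Flatten()),
        # ... then model the temporal dependence with an LSTM.
        layers.LSTM(32),
        layers.Dense(1),   # predicted number of passing persons in the detection area
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Usage with dummy data of the right shape (real data would come from the sensors).
x = np.random.rand(8, TIMESTEPS, 4, 4, 1).astype("float32")
y = np.random.rand(8, 1).astype("float32")
model = build_cnn_lstm()
model.fit(x, y, epochs=1, verbose=0)
```

The design point is simply that the convolution handles the spatial 4×4 pattern of each frame while the LSTM handles how those patterns evolve over time.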

Effects of Dietary Fats and Oils On the Growth and Serum Cholesterol Content of Rats and Chicks (섭취(攝取) 지방(脂肪)의 종류(種類)가 흰쥐와 병아리의 성장(成長) 및 혈청(血淸) Cholesterol 함량(含量)에 미치는 영향(影響))

  • Park, Kiw-Rye;Han, In-Kyu
    • Journal of Nutrition and Health
    • /
    • v.9 no.2
    • /
    • pp.59-67
    • /
    • 1976
  • A series of experiments was carried out to study the effect of commonly used dietary fats or oils on the growth, feed efficiency, nutrient utilizability, nitrogen retention, and serum cholesterol of rats and chicks fed various fats or oils at the level of 10% during 12 weeks of experimentation. The fats and oils used in this experiment were also analyzed for the composition of some fatty acids. The main observations are as follows: 1. All groups receiving fat or oils gained more body weight than the unsupplemented control group, except the chicks fed fish oil and rapeseed oil, although no statistical significance was found between treatments. Body weight gain of the rats fed soybean oil, rapeseed oil, animal fat, or corn oil was much greater than that of the other groups, and the gain of the chicks fed corn oil and animal fat was greater than that of the other vegetable oil groups, although no statistical significance was found among treatments. 2. Feed intake data indicated that the corn oil groups of both rats and chicks consumed considerably more feed than the other groups, whereas the feed intake of the fish oil groups was the lowest among the experimental animals, indicating that fish oil might contain an unfavorable compound that depresses palatability. In feed efficiency, the soybean oil group of rats and the corn oil group of chicks were significantly better than the other experimental groups. In general, the addition of fat or oils to the diet improved the feed efficiency of the diet. 3. Nutrient utilizability and nitrogen retention data showed that fat in the experimental diets containing 10% fat or oils was absorbed better than the crude fat in the control diet. There was no significant difference in nitrogen retention among treatments. 4. The liver fat content of the rapeseed oil group was much higher than that of the control and other groups. It was also noticed that feeding more polyunsaturated fatty acids resulted in a higher content of liver fat. 5. The present data indicated that the serum cholesterol content of the rapeseed oil and sesame oil groups of rats was higher than that of the control group, and the serum cholesterol content of the animal fat group of chicks was higher than that of the other groups. It was interesting to note that the serum cholesterol content of chickens was higher than that of rats regardless of the kind of oil received. 6. Analytical data revealed that the fatty acid composition of the vegetable oils consisted mainly of oleic acid and linoleic acid, whereas animal fat and fish oil were composed of saturated fatty acids such as myristic and palmitic acids. It should be mentioned that perilla oil contained a very large amount of linolenic acid (58.4%) compared with the other vegetable oils. Little arachidonic acid was detected in the vegetable oils, and none in animal fat and fish oil.


Approach to the Extraction Method on Minerals of Ginseng Extract (추출조건(抽出條件)에 따른 인삼(人蔘)엑기스의 무기성분정량(無機成分定量)에 관(關)한 연구(硏究))

  • Cho, Han-Ok;Lee, Joong-Hwa;Cho, Sung-Hwan;Choi, Young-Hee
    • Korean Journal of Food Science and Technology
    • /
    • v.8 no.2
    • /
    • pp.95-106
    • /
    • 1976
  • In order to investigate the chemical components and minerals of ginseng cultivated in Korea and to establish an appropriate extraction method, the present work was carried out with raw ginseng (SC), white ginseng (SB), and ginseng tail (SA). The results can be summarized as follows: 1. Among the proximate components, the moisture contents of SC, SB, and SA were 66.37%, 12.61%, and 12.20%, respectively. The crude ash content of SA was the highest of the three kinds of ginseng root: SA 6.04%, SB 3.52%, and SC 1.56%. The crude protein of the dried ginseng roots (SA and SB) was about 12-14%, more than twice that of SC (6.30%). The pure protein content showed a similar tendency to that of crude protein in the three kinds of ginseng root: 2.26% in SC, 5.94% in SB, and 5.76% in SA. There was no significant difference in fat content among the kinds of ginseng root (1.1~2.5%). 2. The highest yield of ginseng extract was obtained by using a continuous extractor, a modified Soxhlet apparatus, with 60-80% ethanol for 60 hours of extraction. 3. Ginseng and the above-mentioned ginseng extracts (ginseng tail extract: SAE, white ginseng extract: SBE, raw ginseng extract: SCE) were analyzed by volumetric methods for the determination of chlorine and calcium, by colorimetric methods for iron and phosphorus, and by atomic absorption spectrophotometry for zinc, copper, and manganese. The results were as follows: 1. The phosphorus contents of SA, SB, and SC were 1.818%, 1.362%, and 0.713%, respectively, and the phosphorus contents of the three kinds of extract were low (SAE: 0.03%, SBE: 0.063%, SCE: 0.036%). 2. The calcium contents of SA, SB, and SC were 0.147%, 0.238%, and 0.126%, and those of the ginseng extracts were 0.023%, 0.011%, and 0.016%. The extraction ratio of calcium from SA was the highest (15.6%), while that of SB was 4.6%. 3. The chlorine content of SA was 0.11%, slightly higher than the others (SB: 0.07%, SC: 0.09%), and the extraction ratios of SA and SB were 36.4% and 67.1%, while that of SC was 84.4%. 4. The iron contents of SA, SB, and SC were 125 ppm, 32.5 ppm, and 20 ppm, but the extraction ratios were extremely low (SAE: 1.33%, SBE: 0.83%, SCE: 1.08%). 5. The manganese contents of SA, SB, and SC were 62.5 ppm, 25.0 ppm, and 5.0 ppm, respectively, but the manganese content of the extracts could not be determined. The copper contents of SA, SB, and SC were 15.0 ppm and 20.0 ppm, and those of the extracts were 7.5 ppm, 6.5 ppm, and 4.5 ppm, while the extraction ratios were 50%, 32.5%, and 90%, respectively. Zinc was abundant in ginseng compared with other herbs (SA: 45.5 ppm, SB: 27.5 ppm, SC: 5.5 ppm), and the extracted amounts were 4.5 ppm, 1.25 ppm, and 1.50 ppm, respectively.


Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, the Baduk (Go) artificial intelligence program by Google DeepMind, won a resounding victory against Lee Sedol. Many people thought that machines would not be able to beat a human at Go because, unlike chess, the number of possible moves is larger than the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence technology came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where it was difficult to obtain good performance using existing machine learning techniques. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we tried to find out whether the deep learning techniques studied so far can be used not only for the recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compare the performance of the deep learning techniques with that of traditional artificial neural network models. The experimental data in the paper are the telemarketing response data of a bank in Portugal. They have input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account. To evaluate the applicability of deep learning algorithms to binary classification problems, we compared the performance of various models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with that of MLP models, a traditional artificial neural network. However, since all network design alternatives cannot be tested due to the nature of artificial neural networks, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons in each hidden layer, the number of output filters, and the application conditions of the dropout technique. The F1 score was used to evaluate the models, to show how well they classify the class of interest rather than overall accuracy. The detailed methods for applying each deep learning technique in the experiment are as follows. The CNN algorithm reads adjacent values around a specific value and recognizes features, but since each business data field is usually independent, the distance between fields does not matter. In this experiment, we therefore set the filter size of the CNN to the number of fields so that the whole characteristics of the data are learned at once, and added a hidden layer to make decisions based on the additional features. For the model with two LSTM layers, the input direction of the second layer is reversed relative to the first layer in order to reduce the influence of the position of each field.
For the dropout technique, we set neurons to be dropped with a probability of 0.5 for each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using the dropout technique, and the next best model was the MLP model with two hidden layers using the dropout technique. Several findings emerged from the experiment. First, models using the dropout technique make slightly more conservative predictions than those without it and generally show better classification performance. Second, CNN models show better classification performance than MLP models; this is interesting because CNN performed well in binary classification problems, to which it has rarely been applied, as well as in the fields where its effectiveness has already been proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because the training time is too long compared to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
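As an illustration of the experimental setup described above, the following Keras sketch shows an MLP with dropout and a 1D CNN whose filter width equals the number of input fields, evaluated with the F1 score. This is only a hedged sketch under assumptions: the number of fields, layer widths, and training settings are invented for the example and are not the configuration used in the paper.

```python
# Minimal, hypothetical sketch of the two model families compared in the abstract:
# an MLP with dropout (p = 0.5 per hidden layer) and a 1D CNN whose kernel width
# equals the number of fields, so each filter sees all fields at once.
import numpy as np
from tensorflow.keras import layers, models
from sklearn.metrics import f1_score

N_FIELDS = 16   # assumed number of input variables after encoding

def build_mlp():
    return models.Sequential([
        layers.Input(shape=(N_FIELDS,)),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),          # neurons dropped with probability 0.5
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),
    ])

def build_cnn():
    return models.Sequential([
        layers.Input(shape=(N_FIELDS, 1)),   # fields treated as a 1D sequence with one channel
        layers.Conv1D(32, kernel_size=N_FIELDS, activation="relu"),
        layers.Flatten(),
        layers.Dense(32, activation="relu"),  # extra hidden layer on top of the extracted features
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),
    ])

# Dummy usage: fit the MLP on random data and report the F1 score.
x = np.random.rand(200, N_FIELDS).astype("float32")
y = (np.random.rand(200) > 0.5).astype("float32")
mlp = build_mlp()
mlp.compile(optimizer="adam", loss="binary_crossentropy")
mlp.fit(x, y, epochs=1, verbose=0)
pred = (mlp.predict(x, verbose=0).ravel() > 0.5).astype(int)
print("F1:", f1_score(y, pred))
```

For the CNN variant the same tabular rows would be reshaped to (n_samples, N_FIELDS, 1) before fitting, which is what lets a kernel of width N_FIELDS cover every field in a single convolution step.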

Factors Related to Waiting and Staying Time for Patient Care in Emergency Care Center (응급의료센터 내원환자 진료시 소요시간과 관련된 요인)

  • Han, Nam Sook;Park, Jae Yong;Lee, Sam Beom;Do, Byung Soo;Kim, Seok Beom
    • Quality Improvement in Health Care
    • /
    • v.7 no.2
    • /
    • pp.138-155
    • /
    • 2000
  • Background: Factors related to waiting and staying time for patient care in an emergency care center (ECC) were examined during one month, from Apr. 1 to Apr. 30, 1997, at the ECC of Yeungnam University Hospital in Taegu metropolitan city, to obtain baseline data for a strategy of effective management of emergency patients. Method: The study subjects were the 1,742 patients who visited the ECC, and the data were obtained from ECC medical records and direct surveys. Results: The mean interval between ECC admission time and initial care time by the ECC duty residents was 83.1 minutes for male patients and 84.9 minutes for female patients, and the mean ECC staying time (the interval between admission and final disposition from the ECC) was 718.0 minutes in men and 670.5 minutes in women. The mean staying time in the ECC was longer at older ages, and both the initial care time and the staying time were longest in patients covered by medical aid and shortest in patients covered by worker's accident compensation insurance. The initial care time was much more delayed in patients without previous medical records, and the ECC staying time was longer in patients referred from the out-patient department, in patients transferred from other hospitals, in patients having previous records, and in patients for whom the order-communicating system was only partly used. The factors associated with the initial care time were the number of ECC patients, the existence of any true emergency patients, whether the patient was in cardiopulmonary resuscitation (CPR) status on admission, and whether the patient had previously been endotracheally intubated. The ECC staying time was much longer in patients with drug intoxication, in CPR patients, in medical department patients, in transfused patients, and in patients whose care involved three or more departments. According to the number of duty interns, the ECC staying time with four interns was longer than with five interns, and after the admission order was made, the staying time was also longer when no beds were available. From the above results, the factors for the ECC staying time were statistically significant (P<0.01) for the patient's age, the laboratory orders, and the X-ray films checked, and also for the status of no available beds, the laboratory orders and/or special laboratory orders, the X-ray films checked, the final disposing department, transfer to another hospital or not, home medication or not, admission or not, the grade of bed, the year grade of the residents, the cause of the ECC visit, being in CPR status on admission or not, surgical operation or not, and being known to personnel in our hospital. Conclusion: The authors concluded that long ECC staying times could be relieved by establishing a legally approved apparatus that can differentiate true emergency from non-emergency patients, and that ECC staying time could be shortened by ordering only definitely necessary laboratory tests and by managing beds more flexibly to admit ECC patients; these methods were thought to reduce the load on ECC personnel and improve the quality of care for emergency patients.


The Effect on Aviation Industry by WTO Agreement on Trade in Civil Aircraft and Policy Direction of Korea (WTO 민간항공기 교역 협정이 항공산업에 미치는 영향과 우리나라의 정책 방향)

  • Lee, Kang-Bin
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.35 no.2
    • /
    • pp.247-280
    • /
    • 2020
  • For customs-free trade and liberalization of trade in aircraft parts, the WTO Agreement on Trade in Civil Aircraft was concluded separately as a plurilateral trade agreement at the launch of the WTO in 1995; currently 33 parties, including the United States and the EU, have acceded to it, but Korea has not. The major provisions of the Agreement on Trade in Civil Aircraft cover product coverage, the elimination of customs duties and other charges, the prohibition of government-directed procurement of civil aircraft, the application of the Agreement on Subsidies and Countervailing Measures, and consultation on issues related to the Agreement and dispute resolution. Article 89 paragraph 6 of the current Customs Act was newly established on December 31, 2018; the tariff reduction rate for imports of aircraft parts will be reduced in stages from May 2019, and the tariff reduction system will be abolished in 2026. Looking at the impact of the Agreement on Trade in Civil Aircraft on the aviation industry: first, as for the impact on the air transport industry, the tariff burden on the domestic air transport industry is expected to reach about 160 billion won a year from 2026; upon acceding to the Agreement on Trade in Civil Aircraft, the domestic air transport industry would be able to import aircraft parts duty-free, so it would not have to pay import duties of 3 to 8 percent. Second, as for the impact on the aviation MRO industry, if the tariff reduction system for aircraft parts is phased out or abolished, overseas outsourcing costs for engine maintenance and parts maintenance are expected to increase; upon acceding to the Agreement on Trade in Civil Aircraft, the aviation MRO industry would be able to import aircraft parts duty-free and thereby reduce overseas outsourcing costs. The author proposes the following policy directions for the trade liberalization of aircraft parts to ensure the competitiveness of the aviation industry. First, as for tariff reduction through the use of FTAs, in order to benefit from such tariff reduction, it is necessary to secure certificates of origin from foreign traders in the United States and the EU, and to revise the provisions of the Korea-Singapore and Korea-EU FTAs. Second, as for pushing for accession to the Agreement on Trade in Civil Aircraft, it would be reasonable to pursue accession in order to achieve customs-free trade in aircraft parts, since the tariff reduction method through FTAs has limits. Third, as for improving the tariff reduction system for aircraft parts under the Customs Act, since accession to the Agreement on Trade in Civil Aircraft is expected to take a considerable amount of time, separate improvement measures are needed to continue the tariff reduction system for aircraft parts under Article 89 paragraph 6 of the Customs Act. In conclusion, Korea should accede to the WTO Agreement on Trade in Civil Aircraft to create an environment in which its aviation industry can compete fairly with foreign aviation industries and ensure competitiveness by achieving customs-free trade and liberalization of trade in aircraft parts.

The Relationship Between DEA Model-based Eco-Efficiency and Economic Performance (DEA 모형 기반의 에코효율성과 경제적 성과의 연관성)

  • Kim, Myoung-Jong
    • Journal of Environmental Policy
    • /
    • v.13 no.4
    • /
    • pp.3-49
    • /
    • 2014
  • Growing stakeholder interest in corporate responsibility for the environment and tightening environmental regulations are highlighting the importance of environmental management more than ever. However, companies' awareness of the importance of the environment still lags behind, and related academic work has not reached consistent conclusions on the relationship between environmental performance and economic performance. One reason is the different ways of measuring the two performances. The evaluation scope of economic performance is relatively narrow and can be measured in a single unified unit such as price, while the scope of environmental performance is diverse and a wide range of units is used to measure it instead of a single unified unit. Therefore, the results of studies can differ depending on the performance indicators selected. To resolve this problem, generalized and standardized performance indicators should be developed. In particular, the indicators should be able to cover both environmental and economic performance, because the recent idea of environmental management has expanded to encompass the concept of sustainability. Another reason is that most current research tends to focus on the motives for environmental investment and on environmental performance, and does not offer a guideline for an effective implementation strategy for environmental management. For example, a process improvement strategy or a market differentiation strategy can be deployed by comparing environmental competitiveness among companies in the same or similar industries, so that a virtuous cycle between environmental and economic performance can be secured. This report proposes a novel method for measuring eco-efficiency using Data Envelopment Analysis (DEA), which is able to combine multiple environmental and economic performances. Based on the eco-efficiencies, environmental competitiveness is analyzed and optimal combinations of inputs and outputs are recommended for improving the eco-efficiency of inefficient firms. Furthermore, panel analysis is applied to the causal relationship between eco-efficiency and economic performance, and a pooled regression model is used to investigate the relationship between them. The four-year eco-efficiencies of 23 companies between 2010 and 2013 are obtained from the DEA analysis; a comparison of efficiencies among the 23 companies is carried out in terms of technical efficiency (TE), pure technical efficiency (PTE), and scale efficiency (SE), and a set of recommendations for the optimal combination of inputs and outputs is suggested for the inefficient companies. The experimental results with the panel analysis demonstrate causality from eco-efficiency to economic performance. The results of the pooled regression show that eco-efficiency positively affects the financial performance (ROA and ROS) of the companies, as well as firm value (Tobin's Q, stock price, and stock returns). This report proposes a novel approach for generating standardized performance indicators from multiple environmental and economic performances, which can enhance the generality of related research and provide deep insight into the sustainability of environmental management.
Furthermore, using the efficiency indicators obtained from the DEA model, the causes of changes in eco-efficiency can be investigated and an effective strategy for environmental management can be suggested. Finally, this report can motivate environmental management by providing empirical evidence that environmental investments can improve economic performance.
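A DEA-based efficiency score of the kind used above can be sketched with the standard input-oriented CCR model solved as a linear program. The Python code below is an illustrative assumption only (environmental burdens treated as inputs, economic value as outputs, random dummy data), not the authors' exact model, variable selection, or dataset.

```python
# Minimal, hypothetical sketch of the input-oriented CCR DEA model:
# for each firm (DMU) o, minimize theta subject to
#   sum_j lambda_j * x_ij <= theta * x_io   (inputs, e.g. environmental burdens)
#   sum_j lambda_j * y_rj >= y_ro           (outputs, e.g. economic value)
#   lambda_j >= 0
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs). Returns theta for DMU o."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # minimize theta
    A_ub, b_ub = [], []
    for i in range(m):                           # input constraints
        A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(s):                           # output constraints (flipped to <=)
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[o, r])
    bounds = [(0, None)] * (n + 1)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=bounds, method="highs")
    return res.fun                               # efficiency score in (0, 1]

# Dummy usage: 5 firms, 2 environmental inputs, 1 economic output.
X = np.random.rand(5, 2) + 0.5
Y = np.random.rand(5, 1) + 0.5
scores = [ccr_efficiency(X, Y, o) for o in range(5)]
print(np.round(scores, 3))
```

A score of 1 marks a firm on the efficient frontier; lower values indicate how much the inputs could be contracted while keeping the outputs, which is the sense in which the report compares eco-efficiencies across firms.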
