• Title/Summary/Keyword: System of Systems


Aspect-Based Sentiment Analysis Using BERT: Developing Aspect Category Sentiment Classification Models (BERT를 활용한 속성기반 감성분석: 속성카테고리 감성분류 모델 개발)

  • Park, Hyun-jung;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.1-25
    • /
    • 2020
  • Sentiment Analysis (SA) is a Natural Language Processing (NLP) task that analyzes the sentiments consumers or the public feel about an arbitrary object from written texts. Aspect-Based Sentiment Analysis (ABSA) goes further and provides a fine-grained analysis of the sentiments towards each aspect of an object. Because it has more practical value for business, ABSA is drawing attention from both academia and industry. For example, given a review that says "The restaurant is expensive but the food is really fantastic", general SA evaluates the overall sentiment towards the 'restaurant' as 'positive', while ABSA identifies the restaurant's 'price' aspect as 'negative' and its 'food' aspect as 'positive'. Thus, ABSA enables a more specific and effective marketing strategy. In order to perform ABSA, it is necessary to identify the aspect terms or aspect categories included in the text and to judge the sentiments towards them. Accordingly, there are four main areas in ABSA: aspect term extraction, aspect category detection, Aspect Term Sentiment Classification (ATSC), and Aspect Category Sentiment Classification (ACSC). ABSA is usually conducted by extracting aspect terms and then performing ATSC to analyze sentiments for the given aspect terms, or by extracting aspect categories and then performing ACSC to analyze sentiments for the given aspect categories. Here, an aspect category is expressed by one or more aspect terms, or indirectly inferred from other words. In the preceding example sentence, 'price' and 'food' are both aspect categories, and the aspect category 'food' is expressed by the aspect term 'food' included in the review. If the review sentence includes 'pasta', 'steak', or 'grilled chicken special', these can all be aspect terms for the aspect category 'food'. An aspect category referred to by one or more specific aspect terms is called an explicit aspect. On the other hand, an aspect category like 'price', which has no specific aspect term but can be indirectly inferred from an evaluative word such as 'expensive', is called an implicit aspect. So far, the term 'aspect category' has been used to avoid confusion with 'aspect term'. From now on, we treat 'aspect category' and 'aspect' as the same concept and mostly use the word 'aspect' for convenience. Note that ATSC analyzes the sentiment towards given aspect terms, so it deals only with explicit aspects, while ACSC treats both explicit and implicit aspects. This study seeks answers to the following issues, ignored in previous studies, when applying the BERT pre-trained language model to ACSC, and derives superior ACSC models. First, is it more effective to reflect the output vectors of the aspect category tokens than to use only the final output vector of the [CLS] token as the classification vector? Second, is there any performance difference between QA (Question Answering) and NLI (Natural Language Inference) types in the sentence-pair configuration of the input data? Third, is there any performance difference according to the order of the sentence containing the aspect category in the QA or NLI type sentence-pair configuration of the input data? To achieve these research objectives, we implemented 12 ACSC models and conducted experiments on 4 English benchmark datasets. As a result, ACSC models that outperform existing studies without expanding the training dataset were derived. In addition, it was found that it is more effective to reflect the output vector of the aspect category token than to use only the output vector of the [CLS] token as the classification vector. It was also found that QA type input generally provides better performance than NLI, and that the order of the sentence containing the aspect category in the QA type is irrelevant to performance. There may be some differences depending on the characteristics of the dataset, but when using NLI type sentence-pair input, placing the sentence containing the aspect category second seems to provide better performance. The methodology for designing the ACSC models used in this study could be similarly applied to other tasks such as ATSC.
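A minimal sketch (not the authors' implementation) of the two design choices the abstract investigates: a BERT sentence-pair input built in NLI style (the aspect category alone as the auxiliary sentence) or QA style (a question about the aspect category), and a classification vector formed from the [CLS] output alone versus the [CLS] output combined with the pooled aspect-token outputs. The model name, question template, mean pooling, and concatenation are illustrative assumptions.

```python
# Hedged sketch: QA/NLI-style sentence-pair input for BERT-based ACSC and two
# candidate classification vectors ([CLS] only vs. [CLS] + pooled aspect tokens).
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

review = "The restaurant is expensive but the food is really fantastic."
aspect = "food"
aux_nli = aspect                                   # NLI-style auxiliary sentence
aux_qa = f"what do you think of the {aspect} ?"    # QA-style auxiliary sentence (assumed template)

# Sentence order (review first or auxiliary sentence first) is one of the factors
# the study varies; here the review comes first and the QA-style sentence second.
inputs = tokenizer(review, aux_qa, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state     # (1, seq_len, 768)

cls_vec = hidden[:, 0, :]                          # [CLS]-only classification vector

# Pool the output vectors of the aspect-category tokens and combine with [CLS].
aspect_ids = tokenizer(aspect, add_special_tokens=False)["input_ids"]
token_ids = inputs["input_ids"][0].tolist()
positions = [i for i, t in enumerate(token_ids) if t in aspect_ids]
aspect_vec = hidden[0, positions, :].mean(dim=0, keepdim=True)

clf_vec = torch.cat([cls_vec, aspect_vec], dim=-1)  # fed to a small softmax classifier
print(cls_vec.shape, clf_vec.shape)
```

A classification head over clf_vec (or cls_vec) would then be fine-tuned on the ACSC labels; swapping aux_qa for aux_nli or reversing the sentence order gives the other input configurations the paper compares.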

Robo-Advisor Algorithm with Intelligent View Model (지능형 전망모형을 결합한 로보어드바이저 알고리즘)

  • Kim, Sunwoong
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.39-55
    • /
    • 2019
  • Recently, banks and large financial institutions have introduced many Robo-Advisor products. A Robo-Advisor is a robot that produces an optimal asset allocation portfolio for investors using financial engineering algorithms, without any human intervention. Since its first introduction on Wall Street in 2008, the market size has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Since Robo-Advisor algorithms suggest asset allocations to investors, mathematical or statistical asset allocation strategies are applied. The mean-variance optimization model developed by Markowitz is the typical asset allocation model. The model is a simple but quite intuitive portfolio strategy: assets are allocated so as to minimize the risk of the portfolio while maximizing its expected return, using optimization techniques. Despite its theoretical background, both academics and practitioners find that the standard mean-variance optimization portfolio is very sensitive to the expected returns calculated from past price data, and corner solutions allocated to only a few assets are often found. The Black-Litterman optimization model overcomes these problems by choosing a neutral Capital Asset Pricing Model equilibrium point. Implied equilibrium returns of each asset are derived from the equilibrium market portfolio through reverse optimization. The Black-Litterman model uses a Bayesian approach to combine subjective views on the price forecasts of one or more assets with the implied equilibrium returns, resulting in new estimates of risk and expected returns. These new estimates can produce an optimal portfolio through the well-known Markowitz mean-variance optimization algorithm. If the investor does not have any views on his asset classes, the Black-Litterman optimization model produces the same portfolio as the market portfolio. What if the subjective views are incorrect? Surveys on the performance of stocks recommended by securities analysts show very poor results, so incorrect views combined with implied equilibrium returns may produce very poor portfolios for Black-Litterman model users. This paper suggests an objective investor views model based on Support Vector Machines (SVM), which have shown good performance in stock price forecasting. SVM is a discriminative classifier defined by a separating hyperplane; linear, radial basis, and polynomial kernel functions are used to learn the hyperplanes. Input variables for the SVM are returns, standard deviations, Stochastics %K, and the price parity degree for each asset class. The SVM outputs expected stock price movements and their probabilities, which are used as input variables in the intelligent views model. The stock price movements are categorized into three phases: down, neutral, and up. The expected stock returns form the P matrix, and their probability results are used in the Q matrix. The implied equilibrium returns vector is combined with the intelligent views matrix, resulting in the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and the risk parity model are used, and the value-weighted and equal-weighted market portfolios are used as benchmark indexes. We collect the 8 KOSPI 200 sector indexes from January 2008 to December 2018, comprising 132 monthly index values. The training period is 2008 to 2015 and the testing period is 2016 to 2018. Our suggested intelligent view model combined with implied equilibrium returns produced the optimal Black-Litterman portfolio. The out-of-sample portfolio showed better performance than the well-known Markowitz mean-variance optimization portfolio, the risk parity portfolio, and the market portfolio. The total return of the Black-Litterman portfolio over the 3-year period is 6.4%, the highest value; the maximum drawdown is -20.8%, the lowest value; and the Sharpe ratio, which measures return relative to risk, is the highest at 0.17. Overall, our suggested view model shows the possibility of replacing subjective analysts' views with an objective view model for practitioners applying Robo-Advisor asset allocation algorithms in real trading.
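A small numerical sketch, not the paper's code, of the Black-Litterman step described above: implied equilibrium returns are obtained by reverse optimization from market weights and then combined with a view matrix P and view returns Q. In the paper P and Q come from the SVM forecasts; here the three asset classes, the single view, tau, and the view uncertainty Omega are illustrative assumptions.

```python
# Hedged sketch: reverse optimization + Black-Litterman posterior returns + MV weights.
import numpy as np

def black_litterman(pi, Sigma, P, Q, Omega, tau=0.05):
    """Posterior expected returns given implied equilibrium returns pi and views (P, Q)."""
    inv_tau_Sigma = np.linalg.inv(tau * Sigma)
    inv_Omega = np.linalg.inv(Omega)
    A = inv_tau_Sigma + P.T @ inv_Omega @ P
    b = inv_tau_Sigma @ pi + P.T @ inv_Omega @ Q
    return np.linalg.solve(A, b)

# Toy example with 3 asset classes.
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
w_mkt = np.array([0.5, 0.3, 0.2])           # market-cap weights
delta = 2.5                                  # risk-aversion coefficient (assumed)
pi = delta * Sigma @ w_mkt                   # implied equilibrium returns (reverse optimization)

# One view: asset 1 will outperform asset 2 by 2%, held with some confidence.
P = np.array([[1.0, -1.0, 0.0]])
Q = np.array([0.02])
Omega = np.array([[0.0009]])                 # view uncertainty (assumption)

mu_bl = black_litterman(pi, Sigma, P, Q, Omega)
w_bl = np.linalg.solve(delta * Sigma, mu_bl)  # unconstrained mean-variance weights
print(mu_bl.round(4), w_bl.round(3))
```

With no views the posterior collapses back to pi and the optimization reproduces the market portfolio, which matches the property noted in the abstract.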

The effects of aqueous extracts of plant roots on germination of seeds and growth of seedlings (식물근의 추출물질이 종자발아 및 유식물의 생장에 미치는 영향)

  • Chan-Ho Park
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.4 no.1
    • /
    • pp.1-23
    • /
    • 1968
  • This study aimed to contribute to the improvement of cropping systems by finding out how the root excretions and root components of crops influence other crops as well as themselves. The following forage crops suitable for our country were selected for the present study. Aqueous extracts of fresh roots, aqueous extracts of rotting roots, and aqueous solutions of root excretions of red clover, orchard grass, and brome grass were studied for their effects on the germination and growth of seedlings of red clover, ladino clover, lespedeza, soybean, orchard grass, Italian ryegrass, brome grass, barley, wheat, sorghum, corn, and Hog-millet. In view of the possibility that organic acids might be closely related to the root excretions and components connected with soil sickness, the acid components of the roots of the three species were analysed by paper chromatography and gas chromatography. The following results were obtained. 1. Effects of aqueous extracts of fresh roots: Aqueous extracts of red clover inhibited the growth of seedlings of ladino clover and lespedeza and also inhibited the development of most crops except sorghum among the Graminaceae. Aqueous extracts of orchard grass promoted the seedling growth of red clover and soybean, while they inhibited the germination and growth of orchard grass itself; there were no noticeable effects on other crops, although the growth of barley and Hog-millet was inhibited. Aqueous extracts of brome grass had no effect on Italian ryegrass but an inhibiting effect on the other crops. 2. Effects of aqueous extracts of rotting roots: Aqueous extracts of red clover promoted the seedling growth of red clover but had inhibiting effects on other crops except sorghum. Aqueous extracts of orchard grass promoted the growth of red clover, ladino clover, soybean, and sorghum, while they inhibited the germination and rooting of barley and Hog-millet. Aqueous extracts of brome grass had promotive effects on the growth of red clover, soybean, and sorghum, but inhibiting effects on orchard grass, brome grass, barley, and Hog-millet. 3. Effects of aqueous solutions of root excretions: The aqueous solution of red clover excretions had inhibiting effects on the growth of the Graminaceae, while the aqueous solutions of orchard grass and Italian ryegrass excretions had promotive effects on the growth of red clover. 4. Results of organic acid analysis: Oxalic acid, citric acid, tartaric acid, malonic acid, malic acid, and succinic acid were found in the roots of red clover as non-volatile organic acids, while orchard grass and brome grass contained oxalic acid, citric acid, tartaric acid, and malic acid. Formic acid was confirmed in red clover, orchard grass, and brome grass as a volatile organic acid. In consideration of the results mentioned above, the effects of root excretions and components found in this study may be summarized as follows. 1) Red clover generally had a disadvantageous effect on the Graminaceae. This trend was considered to be caused chiefly by the presence of many organic acids, namely oxalic, citric, tartaric, malonic, malic, succinic, and formic acid. 2) Orchard grass generally had an advantageous effect on the Leguminosae. This may be due to the few kinds of organic acid contained in its roots, namely oxalic, citric, tartaric, malic, and formic acid. Furthermore, certain growth-promoting materials were noted. 3) As long as the roots of brome grass are not rotten, they had a disadvantageous effect on the Leguminosae and Graminaceae. This may be due to the fact that several unidentified volatile organic acids were present besides the confirmed organic acids, namely oxalic, citric, tartaric, malic, and formic acid. 5. Effects of root components on soil sickness: 1) It was considered that the alleged soil sickness of red clover did not result from toxic components of its roots. 2) It was recognized that toxic root components might be the cause of soil sickness in cases where orchard grass and brome grass are put into long-term single cropping. 6. Effects of root components on companion crops in the cropping system: a) In the case of aqueous extracts of fresh roots and aqueous excretions (intercropping and mixed cropping): 1) Advantageous combinations: Orchard grass->Red clover, Soybean; Italian ryegrass->Red clover. 2) Disadvantageous combinations: Red clover->Ladino clover, Lespedeza, Orchard grass, Italian ryegrass, Fescue Ky-31, Brome grass, Barley, Wheat, Corn and Hog-millet; Orchard grass->Lespedeza, Orchard grass, Barley and Hog-millet; Brome grass->Red clover, Ladino clover, Lespedeza, Soybean, Orchard grass, Brome grass, Barley, Wheat, Sorghum, Corn and Hog-millet. 3) Harmless combinations: Red clover->Red clover, Soybean and Sorghum; Orchard grass->Ladino clover, Italian ryegrass, Brome grass, Wheat, Sorghum and Corn; Brome grass->Italian ryegrass. b) In the case of aqueous extracts of rotting roots (after cropping): 1) Advantageous combinations: Red clover->Red clover and Sorghum; Orchard grass->Red clover, Ladino clover, Soybean, Sorghum and Corn; Brome grass->Red clover, Soybean and Sorghum. 2) Disadvantageous combinations: Red clover->Lespedeza, Orchard grass, Italian ryegrass, Brome grass, Barley, Wheat and Hog-millet; Orchard grass->Barley and Hog-millet; Brome grass->Orchard grass, Brome grass, Barley and Hog-millet. 3) Harmless combinations: Red clover->Ladino clover, Soybean and Corn; Orchard grass->Lespedeza, Orchard grass, Italian ryegrass, Brome grass and Wheat; Brome grass->Ladino clover, Lespedeza, Italian ryegrass and Wheat.

  • PDF

Analysis of media trends related to spent nuclear fuel treatment technology using text mining techniques (텍스트마이닝 기법을 활용한 사용후핵연료 건식처리기술 관련 언론 동향 분석)

  • Jeong, Ji-Song;Kim, Ho-Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.2
    • /
    • pp.33-54
    • /
    • 2021
  • With the fourth industrial revolution and the arrival of the New Normal era brought on by COVID-19, the importance of non-contact technologies and of artificial intelligence and big data research has been increasing. Convergent research is being conducted in earnest to keep up with these trends, but not many studies have applied artificial intelligence and big data technologies such as natural language processing and text mining to nuclear research. This study was conducted to confirm the applicability of data science analysis techniques to the field of nuclear research. Furthermore, identifying trends in the public perception of spent nuclear fuel is important because it makes it possible to set directions for nuclear industry policies and to respond in advance to changes in industrial policy. For those reasons, this study conducted a media trend analysis of pyroprocessing, a spent nuclear fuel treatment technology. We objectively analyze changes in media perception of spent nuclear fuel dry treatment technology by applying text mining techniques. Text data from Naver web news articles containing the keywords "Pyroprocessing" and "Sodium Cooled Reactor" were collected with Python code to identify changes in perception over time. The analysis period was set from 2007, when the first article was published, to 2020, and a detailed, multi-layered analysis of the text data was carried out using methods such as word clouds based on frequency analysis, TF-IDF, and degree centrality calculation. Analysis of keyword frequency showed that there was a change in media perception of spent nuclear fuel dry treatment technology in the mid-2010s, influenced by the Gyeongju earthquake in 2016 and the implementation of the new government's energy transition policy in 2017. Therefore, trend analysis was conducted for the corresponding periods, and word frequencies, TF-IDF, degree centrality values, and semantic network graphs were derived. The analysis shows that before the mid-2010s, media perception of spent nuclear fuel dry treatment technology was diplomatic and positive. However, over time, the frequency of keywords such as "safety", "reexamination", "disposal", and "disassembly" increased, indicating that the sustainability of spent nuclear fuel dry treatment technology is being seriously reconsidered. It was confirmed that social awareness also changed as spent nuclear fuel dry treatment technology, once recognized as a political and diplomatic technology, became ambiguous due to changes in domestic policy. This means that domestic policy changes such as nuclear power policy have a greater impact on media perception than issues of spent nuclear fuel processing technology itself, presumably because nuclear policy is a more widely discussed and public-friendly topic than spent nuclear fuel. Therefore, in order to improve social awareness of spent nuclear fuel processing technology, it would be necessary to provide sufficient information about it, and linking it to nuclear policy issues would also be helpful. In addition, the study highlights the importance of social science research on nuclear power: the social sciences need to be applied widely to the nuclear engineering sector, and only by taking national policy changes into account can the nuclear industry remain sustainable. However, this study has the limitation that it applied big data analysis methods only to a narrow research area, namely pyroprocessing, a spent nuclear fuel dry processing technology. Furthermore, there was no clear basis for the cause of the change in social perception, and only news articles were analyzed to determine social perception. If future work also considers reader comments and conducts a media trend analysis of nuclear power more broadly, more reliable results are expected that can be used efficiently in nuclear policy research. The academic significance of this study is that it confirmed the applicability of data science analysis technology in the field of nuclear research. Furthermore, as current government energy policies such as nuclear power plant reduction prompt a re-evaluation of spent fuel treatment technology research, analysis of key keywords in the field can help set future research directions. It is important to consider outside views, not just the safety and engineering integrity of nuclear power, and to reconsider whether it is appropriate to discuss nuclear engineering technology only internally. In addition, if multidisciplinary research on nuclear power is carried out, reasonable alternatives can be prepared to sustain the nuclear industry.
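A toy sketch of the kind of frequency, TF-IDF, and degree-centrality computation described above, run on a three-document stand-in corpus rather than the collected Naver articles; the tokenization, the whole-article co-occurrence window, and the sample terms are assumptions.

```python
# Hedged sketch: term frequency, TF-IDF, and degree centrality on a tiny corpus.
from itertools import combinations
from collections import Counter

import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "pyroprocessing research cooperation agreement",
    "pyroprocessing safety reexamination disposal",
    "sodium cooled reactor safety policy change",
]

# Term frequency across the corpus (word-cloud input).
tokens_per_doc = [d.split() for d in docs]
freq = Counter(t for toks in tokens_per_doc for t in toks)

# TF-IDF weight per term, summed over documents.
vec = TfidfVectorizer()
X = vec.fit_transform(docs)
tfidf = dict(zip(vec.get_feature_names_out(), X.sum(axis=0).A1))

# Degree centrality on a word co-occurrence network
# (words co-occur if they appear in the same article).
G = nx.Graph()
for toks in tokens_per_doc:
    G.add_edges_from(combinations(sorted(set(toks)), 2))
centrality = nx.degree_centrality(G)

for term in ["pyroprocessing", "safety", "disposal"]:
    print(term, freq[term], round(tfidf[term], 3), round(centrality[term], 3))
```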

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • A Convolutional Neural Network (ConvNet) is a class of powerful deep neural networks that can analyze and learn hierarchies of visual features. Originally, the first such neural network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in either industry or academia because of the shortage of large-scale datasets and low computational power. However, a few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network. That breakthrough revived interest in neural networks. The success of Convolutional Neural Networks rests on two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation. The second is the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, it is difficult and requires a lot of effort to gather a large-scale dataset to train a ConvNet. Moreover, even if we have a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome by using transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning approaches. The first uses the ConvNet as a fixed feature extractor, and the second fine-tunes the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (for example, one trained on ImageNet) is used to compute feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are then fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. However, applying high-dimensional features directly extracted from multiple ConvNet layers is still a challenging problem. We observe that features extracted from multiple ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Overall, our primary pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and the activation features from its three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple ConvNet layer representation, since this captures more information about an image. When the three fully connected layer features are concatenated, the resulting image representation has 9,192 (4096+4096+1000) dimensions. However, features extracted from multiple ConvNet layers are redundant and noisy since they are extracted from the same ConvNet. Thus, in the third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. When salient features are obtained, the classifier can classify images more accurately, and the performance of transfer learning can be improved. To evaluate the proposed method, experiments are conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple ConvNet layer representations against a single ConvNet layer representation, using PCA for feature selection and dimension reduction. Our experiments demonstrated the importance of feature selection for multiple ConvNet layer representations. Moreover, our proposed approach achieved 75.6% accuracy compared to 73.9% accuracy achieved by the FC7 layer on the Caltech-256 dataset, 73.1% accuracy compared to 69.2% accuracy achieved by the FC8 layer on the VOC07 dataset, and 52.2% accuracy compared to 48.7% accuracy achieved by the FC7 layer on the SUN397 dataset. We also showed that our proposed approach achieved superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on the Caltech-256, VOC07, and SUN397 datasets respectively, compared to existing work.
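A brief sketch, under stated assumptions, of the three-step pipeline summarized above: forward stand-in images through a pre-trained AlexNet, capture the activations of its three fully connected layers with hooks, concatenate them into a 9,192-dimensional representation, and reduce it with PCA before training a classifier. The weight identifier, hook indices, and the tiny random batch are illustrative, not the authors' exact setup.

```python
# Hedged sketch: multi-layer AlexNet activation features + PCA for transfer learning.
import torch
import torchvision
from sklearn.decomposition import PCA

model = torchvision.models.alexnet(weights="IMAGENET1K_V1").eval()

# Register hooks on fc6, fc7, fc8 (classifier indices 1, 4, 6 in torchvision's AlexNet).
captured = {}
def save_output(name):
    def hook(module, inputs, output):
        captured[name] = output.detach()
    return hook

for name, idx in [("fc6", 1), ("fc7", 4), ("fc8", 6)]:
    model.classifier[idx].register_forward_hook(save_output(name))

# Stand-in batch of images (ImageNet-sized; normalization omitted for brevity).
images = torch.rand(8, 3, 224, 224)
with torch.no_grad():
    model(images)

features = torch.cat([captured["fc6"], captured["fc7"], captured["fc8"]], dim=1)
print(features.shape)  # (8, 9192) = 4096 + 4096 + 1000

# PCA keeps the salient components; a classifier is then trained on the reduced
# features. n_components here is limited by the tiny batch size of this example.
reduced = PCA(n_components=8).fit_transform(features.numpy())
print(reduced.shape)
```

A linear classifier trained on `reduced` would then play the role of the final image classifier in this fixed-feature-extractor setting.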

Comparison of CT-based CTV Plan and CT-based ICRU 38 Plan in Brachytherapy Planning of Uterine Cervix Cancer (자궁경부암 강내조사 시 CT를 이용한 CTV에 근거한 치료계획과 ICRU 38에 근거한 치료계획의 비교)

  • Cho, Jung-Ken;Han, Tae-Jong
    • Journal of Radiation Protection and Research
    • /
    • v.32 no.3
    • /
    • pp.105-110
    • /
    • 2007
  • Purpose: In spite of recent remarkable improvements in diagnostic imaging modalities such as CT, MRI, and PET and in radiation therapy planning systems, the ICR plan for uterine cervix cancer based on the recommendation of ICRU 38 (2D, film-based), such as Point A, is still widely used. A 3-dimensional ICR plan based on CT images provides dose-volume histogram (DVH) information for the tumor and normal tissue. In this study, we compared tumor dose, rectal dose, and bladder dose through an analysis of DVHs between a CTV plan and an ICRU 38 plan based on CT images. Method and Material: We analyzed 11 patients with cervix cancer who received ICR with Ir-192 HDR. After 40 Gy of external beam radiation therapy, the ICR plan was established using the PLATO (Nucletron) v.14.2 planning system. CT scans were done for all patients using a CT simulator (Ultra Z, Philips). We contoured the CTV, rectum, and bladder on the CT images and established a CTV plan, which delivers 100% of the dose to the CTV, and an ICRU plan, which delivers 100% of the dose to Point A. Result: The volumes (average±SD) of the CTV, rectum, and bladder in the 11 patients were 21.8±6.6 cm³, 60.9±25.0 cm³, and 111.6±40.1 cm³, respectively. The volume covered by the 100% isodose curve was 126.7±18.9 cm³ in the ICRU plan and 98.2±74.5 cm³ in the CTV plan (p=0.0001). In ICRU planning, 22.0 cm³ of the CTV volume was not covered by the 100% isodose curve in the one patient whose residual tumor was greater than 4 cm, while more than the 100% dose was delivered unnecessarily to 62.2±4.8 cm³ of normal organs other than the tumor in the remaining 10 patients with residual tumors less than 4 cm in size. The bladder dose at the point recommended by ICRU 38 was 90.1±21.3% in the ICRU plan and 68.7±26.6% in the CTV plan (p=0.001), while the rectal dose at the point recommended by ICRU 38 was 86.4±18.3% and 76.9±15.6% in the ICRU plan and the CTV plan, respectively (p=0.08). The bladder and rectum maximum doses were 137.2±50.1% and 101.1±41.8% in the ICRU plan and 107.6±47.9% and 86.9±30.8% in the CTV plan, respectively. Therefore, the radiation dose to normal organs was lower in the CTV plan than in the ICRU plan. However, the normal tissue dose was remarkably higher than the recommended dose in the CTV plan in the one patient whose residual tumor was greater than 4 cm. The volume of rectum receiving more than the 80% isodose (V80rec) was 1.8±2.4 cm³ in the ICRU plan and 0.7±1.0 cm³ in the CTV plan (p=0.02). The volume of bladder receiving more than the 80% isodose (V80bla) was 12.2±8.9 cm³ in the ICRU plan and 3.5±4.1 cm³ in the CTV plan (p=0.005). According to these parameters, the CTV plan could also spare more normal tissue than the ICRU 38 plan. Conclusion: An unnecessary excessive radiation dose is delivered to normal tissues within the 100% isodose area in the traditional ICRU plan in the case of a small cervix cancer, but if we use a CTV plan based on CT images, the normal tissue dose can be reduced remarkably without compromising the tumor dose. However, in large tumor cases, more research on effective 3D planning is needed to reduce the normal tissue dose.
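The DVH quantities compared above reduce to voxel counting once a dose grid and structure contours are available. The sketch below, with an entirely synthetic dose grid and box-shaped structures, illustrates how 100% isodose coverage of the CTV and an organ V80 value can be computed; it is not the PLATO planning system's method, and the voxel spacing and dose values are assumptions.

```python
# Hedged sketch: DVH-style metrics (isodose coverage, V80) from a dose grid and masks.
import numpy as np

rng = np.random.default_rng(0)
voxel_cm3 = 0.2 * 0.2 * 0.25                     # voxel volume in cm^3 (assumed spacing)
dose = rng.uniform(0, 1.4, size=(60, 60, 40))    # dose as a fraction of the prescription
ctv = np.zeros(dose.shape, dtype=bool); ctv[20:35, 20:35, 10:25] = True
rectum = np.zeros(dose.shape, dtype=bool); rectum[40:50, 20:30, 10:20] = True

def volume_at_least(dose, mask, threshold):
    """Volume (cm^3) of the masked structure receiving at least `threshold` of the prescription."""
    return np.count_nonzero(mask & (dose >= threshold)) * voxel_cm3

ctv_volume = np.count_nonzero(ctv) * voxel_cm3
ctv_covered = volume_at_least(dose, ctv, 1.0)     # CTV inside the 100% isodose
v80_rectum = volume_at_least(dose, rectum, 0.8)   # V80rec analogue

print(f"CTV volume: {ctv_volume:.1f} cm^3, covered by 100% isodose: {ctv_covered:.1f} cm^3")
print(f"Rectum V80: {v80_rectum:.1f} cm^3")
```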

Current and Future Operation of Menu Management in the School Foodservices of Chungbuk (1) - Menu Planning - (충북지역 학교급식 영양(교)사의 식단관리 운영실태 및 개선방안(1) - 식단계획 -)

  • Ahn, Yoon-Ju;Lee, Young-Eun
    • Journal of the Korean Society of Food Science and Nutrition
    • /
    • v.41 no.8
    • /
    • pp.1118-1133
    • /
    • 2012
  • This research aimed to suggest an efficient improvement plan for school food services by investigating the operating situation and perception of menu management among school food service dietitians (and nutrition teachers) in Chungbuk. A total of 328 questionnaires were distributed to school food service dietitians (and nutrition teachers) in Chungbuk by e-mail in September 2010, and 265 questionnaires (80.8%) were used for the analysis. The most common allocation of nutrients and calories per day in school food services was 1:1.5:1.5 (breakfast:lunch:dinner) (38.5%). The most common reason for applying a flexible allocation of nutrients and calories per day was 'considering the ratio of students who do not eat breakfast' (59.2%), and the ways of applying the flexible allocation were 'by agreement of the school operating committee based on arbitrary data without a situation survey' (86 respondents, 49.4%) and 'by agreement of the school operating committee based on data analyzed through a situation survey' (80 respondents, 46.0%). Standardized recipes were managed mainly through the 'cooking management site of the national education information system' (87.5%), and the items included in standardized recipes were menu name, food material name, portion size, cooking method, nutrition analysis, and HACCP critical control points. The main reason for not utilizing all items of the cooking management site of the national education information system was 'no big trouble in menu management even though it is only partly used' (29.1%). In addition, the most common use of standardized recipes was 'maintaining consistency of food production quantity' (74.0%).

A Study Concerning Health Needs in Rural Korea (농촌(農村) 주민(住民)들의 의료필요도(醫療必要度)에 관(關)한 연구(硏究))

  • Lee, Sung-Kwan;Kim, Doo-Hie;Jung, Jong-Hak;Chunge, Keuk-Soo;Park, Sang-Bin;Choy, Chung-Hun;Heng, Sun-Ho;Rah, Jin-Hoon
    • Journal of Preventive Medicine and Public Health
    • /
    • v.7 no.1
    • /
    • pp.29-94
    • /
    • 1974
  • Today most developed countries provide modern medical care for most of the population, while rural areas remain the more neglected areas in the medical and health field. In public health, the philosophy is that medical care for the maintenance of health is a basic right of man; it should not be withheld on the basis of racial, environmental, or financial circumstances. Deficiencies in the medical care system, cultural bias, economic development, and residents' ignorance about health care have brought about a shortage of medical personnel and facilities in rural areas. Moreover, medical students and physicians have been taught less about rural health care than about urban health care. Medical care, therefore, is insufficient in terms of health care personnel and facilities in rural areas. Under such a situation, there is growing concern about the health problems of the rural population. The findings presented in this report are useful measures of the major health problems and, even more important, a guide to planning for improved medical care systems. It is hoped that the findings from this study will be useful to those responsible for improving the delivery of health services for the rural population. Objectives: to determine the health status of residents in rural areas; to assess the rural population's needs in terms of health and medical care; and to make recommendations concerning improvement in the delivery of health and medical care for the rural population. Procedures: For the sampling design, the ideal would be to sample according to the proportional composition of age groups. As health problems differ by group, the sample was divided into 10 different age groups. If the sample had been allocated in proportion to the composition of each age group, some age groups would have been too small to estimate their health problems, so the sample size was set at 100 people per age group. Personal interviews were conducted by specially trained medical students. The interviews dealt at length with current health status, medical care problems, utilization of medical services, medical costs paid for medical care, and attitudes toward health. In addition, more information was gained in the public health field, including environmental sanitation, maternal and child health, family planning, tuberculosis control, and dental health. The sample: The sample size was one fourth of the total population: 1,438. The 10-14 year age group was the largest, with 254 people, and the under-one-year age group was the smallest, with 81. Participation in examination: Examination sessions were usually held in the morning every Tuesday, Wednesday, and Thursday for 3 hours per session at the Namchun health station. In general, the rate of participation in the medical examination was low, especially at ages 10-19 years. The highest participation rate among age groups was 100 percent for the under-one-year age group. The lowest rate, as low as 3%, was in the 10-19 year age group, who were attending junior and senior high school in Taegu city, so the time was not convenient for them to receive examinations. In the over-20-years group, the participation rate of females was higher than that of males. The results are as follows:

A. Public health problems. Population: The number in the pre-school age group who required child health care was 724, among them 96 infants. The number of eligible women aged 15-44 years was 1,279, and women with husbands who needed maternal health care numbered 700. The age group of 65 years or older numbered 201, needed more health care, and 65 of them had disabilities (Table 2). Environmental sanitation: Seventy-nine percent of the residents relied upon well water as the primary source of drinking water. Ninety-three percent of the drinking water supply was rated as unfit for drinking. More than 90% of latrines were unhygienic in structure, design, and sanitation (Table 15). Maternal and child health: Maternal health: The average number of pregnancies of eligible women was 4. There was almost no pre- and post-natal care. Pregnancy wastage: Stillbirths were 33 per 1,000 live births, spontaneous abortions 156 per 1,000 live births, and induced abortions 137 per 1,000 live births. Delivery conditions: More than 90 percent of deliveries were conducted at home. Attendants at the last delivery were laymen in 76% of cases, and 14% of deliveries had no attendant. Non-sterilized scissors were used to cut the umbilical cord in as many as 54% of deliveries, and sickles in 14%. The rate of difficult delivery was 3%. The maternal death rate was estimated at about 35 per 10,000 live births. Child health: The consultation rate for child health was almost nonexistent. In general, the vaccination rate of children was low; vaccination rates for children aged 0-5 years with BCG and smallpox were 34 and 28 percent, respectively. The rates of vaccination with DPT and polio were 23 and 25% respectively, but the rates of completing all three injections were as low as 5 and 3%, respectively. The number of dead children was 280 per 1,000 living children. The infant death rate was 45 per 1,000 live births (Table 16). Family planning: The approval rate of married women for family planning was as high as 86%. The rate of past experience with contraception was 51%, the current rate of contraception was 37%, and willingness to use contraception in the future was as high as 86% (Table 17). Tuberculosis control: The number of patients currently registered at the health center was 25, which indicates one eighth of the estimated number of tuberculosis cases in the area. The number of cases discharged in the past was 79, of which 50% were still active at the time of discharge. The rate of complete treatment among the reasons for discharge was as low as 28%, so follow-up observation of the discharged cases is needed (Table 18). Dental problems: More than 50% of the total population had at least one dental problem (Table 19).

B. Medical care problems. Incidence rate: 1. In one month: The incidence rate of medical care problems during one month was 19.6 percent; among these, health problems that required rest at home accounted for 11.8 percent. The estimated number of patients in the total population is 1,206. The health problems reported most frequently in interviews during one month were GI trouble, respiratory disease, neuralgia, skin disease, and communicable disease, in that order. The rate of health problems by age group was highest in the 1-4 year and 60-years-or-over age groups; the lowest rate was in the 10-14 year age group. In general, the 0-29 year age groups, except the 1-4 year age group, had low incidence rates; after 30 years of age, the rate of health problems increases gradually with aging. Eighty-three percent of the health problems that occurred during one month were solved by primary medical care procedures; seventeen percent needed secondary care. Days rested at home because of illness during one month were 0.7 days per interviewee and 8 days per patient, which accounts for 2,161 days for the total productive population in the area (Table 20). 2. In a year: The incidence rate of medical care problems during a year was 74.8%; among them, health problems that required rest at home accounted for 37 percent. The estimated number of patients in the total population during a year was 4,600. The health problems that occurred most frequently among the interviewees during a year were cold (30%), GI trouble (18%), respiratory disease (11%), anemia (10%), diarrhea (10%), neuralgia (10%), parasitic disease (9%), ENT (7%), skin (7%), headache (7%), trauma (4%), communicable disease (3%), and circulatory disease (3%), in that order. The rate of health problems by age group was highest in the infant group; thereafter the rate decreased gradually until the 15-19 year age group, which showed the lowest rate, and then increased gradually with aging. Eighty-seven percent of health problems during a year were solved by primary medical care; thirteen percent needed secondary medical care procedures. Days rested at home because of illness during a year were 16 days per interviewee and 44 days per patient, which accounted for 57,335 days lost among the productive age group in the area (Table 21). Among those given medical examinations, the conditions observed most frequently were respiratory disease, GI trouble, parasitic disease, neuralgia, skin disease, trauma, tuberculosis, anemia, chronic obstructive lung disease, and eye disorders, in that order (Table 22). The main health problems that required secondary medical care are as follows (previous page). Utilization of medical care (treatment): The rate of treatment by various medical facilities for all health problems during one month was 73 percent. The rate of receiving medical care among those who had health problems that required rest at home was 52%, while the rate among those whose health problems did not require rest was 61 percent (Table 23). The rate of receiving medical care for all health problems during a year was 67 percent. The rate of receiving medical care among those who had health problems that required rest at home was 82 percent, while the rate among those whose health problems did not require rest was as low as 53 percent (Table 24). The types of medical facilities used were as follows: hospitals and clinics, 32-35%; herb clinics, 9-10%; drugstores, 53-58%. Hospitalization: The rate of hospitalization was 1.7%, and the estimated number of hospitalizations in the total population during a year would be 107 persons (Table 25). Medical cost: The average medical cost per person during one month and during a year was 171 and 2,800 won, respectively. The average medical cost per patient during one month and during a year was 1,109 and 3,740 won, respectively. The average cost per household during a year was 15,800 won (Tables 26, 27).

Solution measures for health and medical care problems in the rural area: A. Health problems that could be solved by paramedical workers such as nurses, midwives, and aid nurses are as follows: 1. Improvement of environmental sanitation. 2. MCH, except medical care problems. 3. Family planning, except surgical intervention. 4. Tuberculosis control, except diagnosis and prescription. 5. Dental care, except operative intervention. 6. Health education for residents to improve utilization of medical facilities, early diagnosis, etc. B. Medical care problems: 1. Eighty-five percent of health problems could be solved by primary care procedures by general practitioners. 2. Fifteen percent of health problems need secondary medical procedures by a specialist. C. Medical cost: Considering the economic situation in the rural area, the amount of 2,062 won per resident during a year would be burdensome, so financial assistance from the government is needed to solve the health and medical care problems of rural people.

  • PDF

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, including two stateline areas, one county, and five cities, are analyzed and zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types, Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed. The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs.

The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. But more importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. Link volume to ground count (LV/GC) ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of the %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible by using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups.

Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that only reflect the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%.

Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted by using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.

  • PDF
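A compact sketch of the doubly constrained gravity model that the calibration above refers to, with a single negative-exponential friction factor curve; the three-zone productions, attractions, travel times, and beta are illustrative assumptions, not the Wisconsin data.

```python
# Hedged sketch: doubly constrained gravity model trip distribution with one
# friction factor curve, plus the trip length frequency (TLF) used for calibration.
import numpy as np

P = np.array([400.0, 250.0, 150.0])      # zonal trip productions
A = np.array([300.0, 300.0, 200.0])      # zonal trip attractions (same total as P)
time = np.array([[ 5., 20., 40.],
                 [20.,  5., 25.],
                 [40., 25.,  5.]])       # zone-to-zone travel times (minutes)

def friction(t, beta=0.1):
    """Negative-exponential friction factor curve (one curve per trip type in the study)."""
    return np.exp(-beta * t)

F = friction(time)
a = np.ones_like(P)
b = np.ones_like(A)
for _ in range(50):                      # balance so row sums match P and column sums match A
    a = 1.0 / (F @ (b * A))
    b = 1.0 / (F.T @ (a * P))
T = (a * P)[:, None] * (b * A)[None, :] * F   # zone-to-zone truck trips

# TLF: share of trips by travel-time bin; calibration adjusts the friction factor
# curve until this matches the observed OD-survey TLF.
bins = [0, 10, 30, 60]
tlf, _ = np.histogram(time, bins=bins, weights=T)
print(T.round(1))
print("row sums", T.sum(axis=1).round(1), "col sums", T.sum(axis=0).round(1))
print("TLF shares", (tlf / T.sum()).round(3))
```

A SELINK-style step would then scale the productions and attractions of every zone pair whose trips are assigned to a selected link by the ratio of that link's ground count to its assigned volume, and the model would be re-balanced.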

The Characteristics and Performances of Manufacturing SMEs that Utilize Public Information Support Infrastructure (공공 정보지원 인프라 활용한 제조 중소기업의 특징과 성과에 관한 연구)

  • Kim, Keun-Hwan;Kwon, Taehoon;Jun, Seung-pyo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.1-33
    • /
    • 2019
  • Small and medium-sized enterprises (hereinafter SMEs) are already at a competitive disadvantage compared to large companies with more abundant resources. Manufacturing SMEs not only need a great deal of information for new product development, sustainable growth, and survival, but also seek networking to overcome their resource limitations; however, they face constraints due to their size. In a new era in which connectivity increases the complexity and uncertainty of the business environment, SMEs are increasingly urged to find information and solve networking problems. In order to solve these problems, government-funded research institutes play an important role in resolving the information asymmetry problem of SMEs. The purpose of this study is to identify the differentiating characteristics of SMEs that utilize the public information support infrastructure provided to enhance their innovation capacity, and to examine how it contributes to corporate performance. We argue that an infrastructure for providing information support to SMEs is needed as part of the effort to strengthen the role of government-funded institutions; in this study, we specifically identify the target of such a policy and furthermore empirically demonstrate the effects of such policy-based efforts. Our goal is to help establish strategies for building the information support infrastructure. To achieve this purpose, we first classified the characteristics of SMEs that have been found to utilize the information support infrastructure provided by government-funded institutions. This allows us to verify whether selection bias appears in the analyzed group, which helps us clarify the interpretative limits of our study results. Next, we performed mediator and moderator effect analyses for multiple variables to analyze the process through which the use of the information support infrastructure led to an improvement in external networking capabilities and resulted in enhanced product competitiveness. This analysis helps identify the key factors we should focus on when offering indirect support to SMEs through the information support infrastructure, which in turn helps us more efficiently manage research related to SME support policies implemented by government-funded institutions. The results of this study are as follows. First, SMEs that used the information support infrastructure were found to differ significantly in size from domestic R&D SMEs, but there was no significant difference in the cluster analysis that considered various variables. Based on these findings, we confirmed that SMEs that use the information support infrastructure are larger and include a relatively higher proportion of companies that transact to a greater degree with large companies, compared to the general group of SMEs. Also, we found that companies that already receive support from the information infrastructure include a high concentration of companies that need collaboration with government-funded institutions. Second, among the SMEs that use the information support infrastructure, we found that increasing external networking capabilities contributed to enhancing product competitiveness; this was not an effect of direct assistance, but an indirect contribution made by increasing open marketing capabilities, in other words, an indirect-only mediator effect. Also, the number of times a company received additional support in this process through mentoring related to information utilization was found to have a mediated moderator effect on improving external networking capabilities and, in turn, strengthening product competitiveness. The results of this study provide several insights that will help establish policies. KISTI's information support infrastructure may appear to intentionally support groups whose marketing is already well underway and that are already positioned to achieve good performance. As a result, the government should set clear priorities on whether to support underdeveloped companies or to help well-performing companies perform even better. Through our research, we have identified how the public information infrastructure contributes to product competitiveness, from which some policy implications can be drawn. First, the public information support infrastructure should enhance companies' ability to interact with, or to find, the experts who provide the required information. Second, if the utilization of the public information support (online) infrastructure is effective, it is not necessary to continuously provide informational mentoring as a parallel offline support; rather, offline support such as mentoring should be used as a device for monitoring abnormal symptoms. Third, SMEs need to improve their ability to utilize the infrastructure, because the effect of enhancing networking capacity and product competitiveness through the public information support infrastructure appears in most types of companies rather than only in specific SMEs.
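A small sketch, on simulated data, of the indirect (mediator) effect referred to above, with infrastructure use as X, external networking capability as the mediator M, and product competitiveness as Y, using a percentile bootstrap interval for the a*b path. The variable names, data-generating values, and OLS path models are assumptions; the paper's actual mediated moderation analysis is richer than this.

```python
# Hedged sketch: indirect effect a*b via two OLS path regressions + bootstrap CI.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 300
x = rng.normal(size=n)                       # infrastructure use (standardized, simulated)
m = 0.5 * x + rng.normal(size=n)             # external networking capability
y = 0.4 * m + 0.05 * x + rng.normal(size=n)  # product competitiveness (mostly indirect)

def ab_path(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                         # X -> M
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]   # M -> Y given X
    return a * b

boot = []
idx = np.arange(n)
for _ in range(2000):                        # percentile bootstrap for the indirect effect
    s = rng.choice(idx, size=n, replace=True)
    boot.append(ab_path(x[s], m[s], y[s]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {ab_path(x, m, y):.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```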