• Title/Summary/Keyword: measure


Construction of Consumer Confidence index based on Sentiment analysis using News articles (뉴스기사를 이용한 소비자의 경기심리지수 생성)

  • Song, Minchae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.1-27 / 2017
  • It is known that the economic sentiment index and macroeconomic indicators are closely related, because economic agents' judgments and forecasts of business conditions affect economic fluctuations. For this reason, consumer sentiment or confidence is treated as an important piece of economic information. In Korea, private consumption accounts for a substantial share of GDP, and the consumer sentiment index is closely tied to it, making the index very important for evaluating and forecasting the domestic economic situation. However, despite offering relevant insights into private consumption and GDP, the traditional survey-based approach to measuring consumer confidence has several limits. One weakness is that it takes considerable time to research, collect, and aggregate the data; if urgent issues arise, timely information is not announced until the end of each month. In addition, the survey only contains information derived from questionnaire items, which makes it difficult to capture the direct effects of newly arising issues, and it faces potential declines in response rates and erroneous responses. Therefore, it is necessary to find a way to complement it. For this purpose, we construct and assess an index designed to measure consumer economic sentiment using sentiment analysis. Unlike the survey-based measures, our index relies on textual analysis to extract sentiment from economic and financial news articles. Text data such as news articles and SNS posts are timely and cover a wide range of issues; because such sources can quickly capture the economic impact of specific economic issues, they have great potential as economic indicators. Of the two main approaches to automatic extraction of sentiment from text, we apply the lexicon-based approach, using sentiment dictionaries of words annotated with their semantic orientations. In creating the sentiment dictionaries, we enter the semantic orientation of individual words manually, though we do not attempt a full linguistic analysis (one involving word senses or argument structure); this is a limitation of our research, and further work in that direction remains possible. In this study, we generate a time series index of economic sentiment in the news. The construction of the index consists of three broad steps: (1) collecting a large corpus of economic news articles on the web, (2) applying lexicon-based methods for sentiment analysis of each article to score it in terms of sentiment orientation (positive, negative, and neutral), and (3) constructing a consumer economic sentiment index by aggregating monthly time series for each sentiment word. In line with existing scholarly assessments of the relationship between the consumer confidence index and macroeconomic indicators, any new index should be assessed for its usefulness; we examine the new index by comparing it with the CSI and other economic indicators. To check the usefulness of the new sentiment-based index, trend and cross-correlation analyses are carried out to analyze the relationships and lag structure. Finally, we analyze forecasting power using one-step-ahead out-of-sample predictions.
In almost all experiments, the news sentiment index correlates strongly with related contemporaneous key indicators. Furthermore, in most cases, news sentiment shocks predict future economic activity; in head-to-head comparisons, the news sentiment measures outperform the survey-based sentiment index (CSI). Policy makers want to understand consumer and public opinion about existing or proposed policies, and monitoring various web media, SNS, and news articles enables relevant government decision-makers to respond quickly. Although research using unstructured data in economic analysis is in its early stages, the utilization of such data is expected to increase greatly once its usefulness is confirmed.
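A minimal sketch of steps (2) and (3) above, lexicon-based scoring and monthly aggregation, assuming a tiny hypothetical English lexicon and toy articles; the study's actual dictionary is a manually annotated Korean lexicon, and its corpus is not reproduced here.

```python
from collections import Counter
from datetime import date

# Hypothetical sentiment lexicon: word -> semantic orientation.
LEXICON = {"growth": +1, "recovery": +1, "recession": -1, "slump": -1}

def score_article(text):
    """Sum the orientations of lexicon words found in one article."""
    return sum(LEXICON.get(tok, 0) for tok in text.lower().split())

def monthly_index(articles):
    """Aggregate article scores into a monthly sentiment index.

    `articles` is an iterable of (publication_date, text) pairs;
    each month's index value is the mean article score.
    """
    totals, counts = Counter(), Counter()
    for pub_date, text in articles:
        key = (pub_date.year, pub_date.month)
        totals[key] += score_article(text)
        counts[key] += 1
    return {k: totals[k] / counts[k] for k in sorted(totals)}

# Example: two toy articles in different months.
demo = [(date(2017, 1, 5), "signs of recovery and growth"),
        (date(2017, 2, 9), "deepening recession and slump")]
print(monthly_index(demo))  # {(2017, 1): 2.0, (2017, 2): -2.0}
```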

The Effects of Pergola Wisteria floribunda's LAI on Thermal Environment (그늘시렁 Wisteria floribunda의 엽면적지수가 온열환경에 미치는 영향)

  • Ryu, Nam-Hyong;Lee, Chun-Seok
    • Journal of the Korean Institute of Landscape Architecture / v.45 no.6 / pp.115-125 / 2017
  • This study investigated users' thermal environments under a pergola (L 7,200 × W 4,200 × H 2,700 mm) covered with Wisteria floribunda (Willd.) DC. according to the variation of its leaf area index (LAI). We carried out detailed measurements with two human-biometeorological stations on a popular square in Jinju, Korea (N 35°10′59.8″, E 128°05′32.0″, elevation: 38 m). One station stood under the pergola, the other in the sun. The measurement spots were instrumented with microclimate monitoring stations to continuously measure air temperature, relative humidity, wind speed, and shortwave and longwave radiation from the six cardinal directions at a height of 0.6 m, so as to calculate the Universal Thermal Climate Index (UTCI), from 9 April to 27 September 2017. The LAI was measured using the LAI-2200C Plant Canopy Analyzer. Analysis of 18 days of 1-minute-interval human-biometeorological data absorbed by a man in a sitting position from 10 a.m. to 4 p.m. showed the following. During the whole observation period, daily average air temperatures under the pergola were 0.7~2.3°C lower than those in the sun, while daily average wind speed and relative humidity under the pergola were 0.17~0.38 m/s and 0.4~3.1% higher, respectively. There was a significant relationship between Julian day number (x) and LAI (y), expressed by the equation y = -0.0004x² + 0.1719x - 11.765 (R² = 0.9897). The average mean radiant temperature (T_mrt) under the pergola was 11.9~25.4°C lower than in the sun, and the maximum ΔT_mrt was 24.1~30.2°C. There was a significant relationship between LAI (x) and the reduction ratio (%) of daily average T_mrt relative to the sun (y), expressed by the equation y = 0.0678 ln(x) + 0.3036 (R² = 0.9454). The average UTCI under the pergola was 4.1~8.3°C lower than in the sun, and the maximum ΔUTCI was 7.8~10.2°C. There was a significant relationship between LAI (x) and the reduction ratio (%) of daily average UTCI relative to the sun (y), expressed by the equation y = 0.0322 ln(x) + 0.1538 (R² = 0.8946). Shading by the vine-covered pergola was very effective in reducing the daytime UTCI absorbed by a man in a sitting position in summer, largely through a reduction in mean radiant temperature from sun protection, lowering thermal stress from very strong (UTCI > 38°C) and strong (UTCI > 32°C) down to strong (UTCI > 32°C) and moderate (UTCI > 26°C). Therefore, pergolas covered with vines for shading outdoor spaces are essential to mitigate heat stress and can create better human thermal comfort in cities, especially during summer. However, the thermal environment under the vine-covered pergola during heat waves still exposed users to very strong heat stress (UTCI > 38°C); users should therefore refrain from outdoor activities during heat waves.
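Calculating the UTCI from these measurements requires the mean radiant temperature, which the six-directional radiation data yield via the standard integral-radiation formula. Below is a minimal sketch of that calculation; the absorption coefficients and angular weights are common textbook values for a standing person (a seated posture, as in this study, uses slightly different weights) and are not figures taken from the paper.

```python
SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
A_K, A_L = 0.7, 0.97   # typical shortwave / longwave absorption coefficients
# Angular weighting factors for a standing person: four lateral directions,
# then upward and downward.
W = [0.22, 0.22, 0.22, 0.22, 0.06, 0.06]

def mean_radiant_temperature(K, L):
    """T_mrt in deg C from six-directional shortwave K and longwave L
    flux densities (W/m^2), ordered like the weights above."""
    s_str = sum(w * (A_K * k + A_L * l) for w, k, l in zip(W, K, L))
    return (s_str / (A_L * SIGMA)) ** 0.25 - 273.15

# Toy fluxes: four lateral directions, then upward and downward.
print(round(mean_radiant_temperature(
    [120, 80, 60, 40, 300, 90], [380, 380, 380, 380, 330, 420]), 1))  # ~24.3
```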

Research Trend Analysis Using Bibliographic Information and Citations of Cloud Computing Articles: Application of Social Network Analysis (클라우드 컴퓨팅 관련 논문의 서지정보 및 인용정보를 활용한 연구 동향 분석: 사회 네트워크 분석의 활용)

  • Kim, Dongsung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.195-211 / 2014
  • Cloud computing services provide IT resources as services on demand. This is considered a key concept that will lead a shift from an ownership-based paradigm to a new pay-for-use paradigm, which can reduce the fixed cost of IT resources and improve flexibility and scalability. As IT services, cloud services have evolved from earlier, similar computing concepts such as network computing, utility computing, server-based computing, and grid computing, so research into cloud computing is highly related to and combined with various relevant computing research areas. To seek promising research issues and topics in cloud computing, it is necessary to understand the research trends in cloud computing more comprehensively. In this study, we collect bibliographic and citation information for cloud computing related research papers published in major international journals from 1994 to 2012, and analyze macroscopic trends and network changes in the citation relationships among papers and the co-occurrence relationships of keywords by utilizing social network analysis measures. Through the analysis, we can identify the relationships and connections among research topics in cloud computing related areas, and highlight new potential research topics. In addition, we visualize dynamic changes of research topics relating to cloud computing using a proposed cloud computing "research trend map." A research trend map visualizes positions of research topics in two-dimensional space. Frequencies of keywords (X-axis) and the rates of increase in the degree centrality of keywords (Y-axis) are used as the two dimensions of the research trend map. Based on the values of the two dimensions, the two-dimensional space of a research map is divided into four areas: maturation, growth, promising, and decline. An area with high keyword frequency but a low rate of increase in degree centrality is defined as a mature technology area; the area where both keyword frequency and the increase rate of degree centrality are high is defined as a growth technology area; the area where keyword frequency is low but the rate of increase in degree centrality is high is defined as a promising technology area; and the area where both keyword frequency and the rate of increase in degree centrality are low is defined as a declining technology area. Based on this method, cloud computing research trend maps make it possible to easily grasp the main research trends in cloud computing and to explain the evolution of research topics. According to the results of an analysis of citation relationships, research papers on security, distributed processing, and optical networking for cloud computing rank at the top based on the PageRank measure. From the analysis of keywords in research papers, cloud computing and grid computing showed high centrality in 2009, and keywords dealing with main elemental technologies such as data outsourcing, error detection methods, and infrastructure construction showed high centrality in 2010~2011. In 2012, security, virtualization, and resource management showed high centrality. Moreover, it was found that interest in the technical issues of cloud computing has increased gradually. From annual cloud computing research trend maps, it was verified that security is located in the promising area, virtualization has moved from the promising area to the growth area, and grid computing and distributed systems have moved to the declining area.
The study results indicate that distributed systems and grid computing received a lot of attention as similar computing paradigms in the early stage of cloud computing research. That early stage was a period focused on understanding and investigating cloud computing as an emergent technology, linking it to relevant established computing concepts. After the early stage, security and virtualization technologies became the main issues in cloud computing, which is reflected in the movement of security and virtualization technologies from the promising area to the growth area in the cloud computing research trend maps. Moreover, this study revealed that current research in cloud computing has rapidly shifted from a focus on technical issues to a focus on application issues, such as SLAs (Service Level Agreements).
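The trend map's quadrant logic can be sketched directly. The thresholds and keyword statistics below are hypothetical stand-ins, not the paper's measured values, though the resulting labels echo its reported findings.

```python
def trend_area(frequency, centrality_growth, freq_threshold, growth_threshold):
    """Classify a keyword into one of the trend map's four areas.

    X-axis: keyword frequency; Y-axis: rate of increase in degree centrality.
    """
    if frequency >= freq_threshold:
        return "growth" if centrality_growth >= growth_threshold else "maturation"
    return "promising" if centrality_growth >= growth_threshold else "decline"

# Illustrative (frequency, centrality growth rate) pairs per keyword.
keywords = {"security": (20, 0.9), "virtualization": (60, 0.8),
            "grid computing": (22, -0.2), "SLA": (15, 0.6)}
for kw, (freq, growth) in keywords.items():
    print(kw, "->", trend_area(freq, growth, freq_threshold=30, growth_threshold=0.5))
# security -> promising, virtualization -> growth,
# grid computing -> decline, SLA -> promising
```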

Product Community Analysis Using Opinion Mining and Network Analysis: Movie Performance Prediction Case (오피니언 마이닝과 네트워크 분석을 활용한 상품 커뮤니티 분석: 영화 흥행성과 예측 사례)

  • Jin, Yu;Kim, Jungsoo;Kim, Jongwoo
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.49-65 / 2014
  • Word of mouth (WOM) is a behavior used by consumers to transfer or communicate their product or service experience to other consumers. Due to the popularity of social media such as Facebook, Twitter, blogs, and online communities, electronic WOM (e-WOM) has become important to the success of products or services. As a result, most enterprises pay close attention to e-WOM for their products or services. This is especially important for movies, as these are experiential products. This paper aims to identify the network factors of an online movie community that impact box office revenue, using social network analysis. In addition to traditional WOM factors (volume and valence of WOM), network centrality measures of the online community are included as influential factors in box office revenue. Based on previous research results, we develop five hypotheses on the relationships between potential influential factors (WOM volume, WOM valence, degree centrality, betweenness centrality, closeness centrality) and box office revenue. The first hypothesis is that the accumulated volume of WOM in online product communities is positively related to the total revenue of movies. The second hypothesis is that the accumulated valence of WOM in online product communities is positively related to the total revenue of movies. The third hypothesis is that the average degree centrality of reviewers in online product communities is positively related to the total revenue of movies. The fourth hypothesis is that the average betweenness centrality of reviewers in online product communities is positively related to the total revenue of movies. The fifth hypothesis is that the average closeness centrality of reviewers in online product communities is positively related to the total revenue of movies. To verify our research model, we collect movie review data from the Internet Movie Database (IMDb), a representative online movie community, and movie revenue data from the Box-Office-Mojo website. The movies in this analysis are the weekly top-10 movies from September 1, 2012, to September 1, 2013. We collect movie metadata such as screening periods and user ratings, as well as community data from IMDb, including reviewer identification, review content, review times, responder identification, reply content, reply times, and reply relationships. For the same period, revenue data from Box-Office-Mojo are collected on a weekly basis. Movie community networks are constructed based on reply relationships between reviewers. Using a social network analysis tool, NodeXL, we calculate the averages of three centralities, degree, betweenness, and closeness, for each movie. Correlation analysis of the focal variables and the dependent variable (final revenue) shows that the three centrality measures are highly correlated with one another, prompting us to perform multiple regressions separately with each centrality measure. Consistent with previous research results, our regression analysis results show that the volume and valence of WOM are positively related to the final box office revenue of movies. Moreover, the average betweenness centralities of the initial community networks impact final movie revenues, whereas the average degree centralities and closeness centralities do not influence final movie performance. Based on the regression results, hypotheses 1, 2, and 4 are accepted, and hypotheses 3 and 5 are rejected.
This study links the network structure of e-WOM in online product communities with product performance. Based on the analysis of a real online movie community, the results show that online community network structures can work as a predictor of movie performance: the betweenness centralities of the reviewer community are critical for predicting movie performance, while degree centralities and closeness centralities have no influence. As future research topics, similar analyses of other product categories, such as electronic goods and online content, are required to generalize the study results.
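A minimal sketch of the centrality-averaging step, using the open-source networkx library in place of NodeXL (the tool the authors actually used) and hypothetical reply data; the reply network is treated as undirected here for simplicity.

```python
import networkx as nx

# Hypothetical reply relationships (responder, reviewer) for one movie's
# review board; the study built such a network per movie from IMDb data.
replies = [("bob", "alice"), ("carol", "alice"), ("dave", "carol"),
           ("alice", "dave"), ("erin", "bob")]

G = nx.Graph()
G.add_edges_from(replies)

# Average each centrality measure over all reviewers, as the study does
# per movie, yielding one value of each kind per community network.
for name, scores in [("degree", nx.degree_centrality(G)),
                     ("betweenness", nx.betweenness_centrality(G)),
                     ("closeness", nx.closeness_centrality(G))]:
    print(f"average {name} centrality: {sum(scores.values()) / len(scores):.3f}")
```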

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems / v.16 no.3 / pp.77-97 / 2010
  • Market timing is an investment strategy used to obtain excess returns from financial markets. In general, detecting market timing means determining when to buy and sell so as to get excess returns from trading. In many market timing systems, trading rules have been used as an engine to generate trade signals. Some researchers have proposed rough set analysis as a proper tool for market timing because, by using its control function, it does not generate a trade signal when the market pattern is uncertain. Numeric values in the data must be discretized for rough set analysis because rough sets only accept categorical data. Discretization searches for proper "cuts" in numeric data that determine intervals, and all values that lie within an interval are transformed into the same value. In general, there are four methods of data discretization in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes a number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples falls into each interval. Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts, gathered through literature review or interviews with experts. Minimum entropy scaling recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization finds categorical values by naïve scaling of the data, then finds the optimized discretization thresholds through Boolean reasoning. Although rough set analysis is promising for market timing, there is little research on the impact of the various data discretization methods on trading performance with rough set analysis. In this study, we compare stock market timing models using rough set analysis with various data discretization methods. The research data used in this study are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market; it is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and their status in the corresponding industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning-based discretization, but expert's knowledge-based discretization is the most profitable for the validation sample; moreover, expert's knowledge-based discretization produced robust performance for both the training and validation samples. We also compared rough set analysis with a decision tree, using C4.5 for the comparison. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
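As a concrete illustration of the simplest of these four schemes, the sketch below performs equal frequency scaling with pandas quantile binning; the indicator values are made up, not the study's KOSPI 200 data.

```python
import pandas as pd

# Made-up technical-indicator values; the study discretized indicators
# computed from the KOSPI 200 index.
rsi = pd.Series([28, 35, 41, 47, 52, 58, 63, 69, 74, 81])

# Equal frequency scaling: pick cuts so that roughly the same number of
# samples falls into each interval, then replace each value by its label.
labels = pd.qcut(rsi, q=3, labels=["low", "mid", "high"])
print(labels.tolist())
# ['low', 'low', 'low', 'low', 'mid', 'mid', 'mid', 'high', 'high', 'high']
```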

A Hybrid SVM Classifier for Imbalanced Data Sets (불균형 데이터 집합의 분류를 위한 하이브리드 SVM 모델)

  • Lee, Jae Sik;Kwon, Jong Gu
    • Journal of Intelligence and Information Systems / v.19 no.2 / pp.125-140 / 2013
  • We call a data set in which the records of one class far outnumber the records of the other class an 'imbalanced data set'. Most classification techniques perform poorly on imbalanced data sets. When we evaluate the performance of a classification technique, we need to measure not only 'accuracy' but also 'sensitivity' and 'specificity'. In a customer churn prediction problem, 'retention' records form the majority class and 'churn' records form the minority class. Sensitivity measures the proportion of actual retentions that are correctly identified as such; specificity measures the proportion of churns that are correctly identified as such. The poor performance of classification techniques on imbalanced data sets is due to low specificity. Many previous studies of imbalanced data sets employed an 'oversampling' technique, in which members of the minority class are sampled more heavily than those of the majority class in order to make a relatively balanced data set. When a classification model is constructed using such an oversampled balanced data set, specificity can be improved but sensitivity will decrease. In this research, we developed a hybrid model of a support vector machine (SVM), an artificial neural network (ANN), and a decision tree that improves specificity while maintaining sensitivity. We named this hybrid model the 'hybrid SVM model'. The construction and prediction process of our hybrid SVM model is as follows. By oversampling from the original imbalanced data set, a balanced data set is prepared. The SVM_I and ANN_I models are constructed using the imbalanced data set, and the SVM_B model is constructed using the balanced data set. SVM_I is superior in sensitivity and SVM_B is superior in specificity. For a record on which SVM_I and SVM_B make the same prediction, that prediction becomes the final solution. If they make different predictions, the final solution is determined by discrimination rules obtained from the ANN and the decision tree: for such records, a decision tree model is constructed using the ANN_I output value as input and actual retention or churn as the target. We obtained the following two discrimination rules: 'IF ANN_I output value < 0.285, THEN Final Solution = Retention' and 'IF ANN_I output value ≥ 0.285, THEN Final Solution = Churn'. The threshold 0.285 is the value optimized for the data used in this research; what we present here is the structure or framework of the hybrid SVM model, not a specific threshold, so the threshold in the above rules can be changed to any value depending on the data. In order to evaluate the performance of our hybrid SVM model, we used the 'churn data set' in the UCI Machine Learning Repository, which consists of 85% retention customers and 15% churn customers. The accuracy of the hybrid SVM model is 91.08%, which is better than that of the SVM_I or SVM_B model. The points worth noticing are its sensitivity, 95.02%, and specificity, 69.24%: the sensitivity of SVM_I is 94.65%, and the specificity of SVM_B is 67.00%. Therefore, the hybrid SVM model developed in this research improves the specificity of SVM_B while maintaining the sensitivity of SVM_I.
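A compact sketch of the hybrid model's combination rule, assuming the three fitted models expose scikit-learn-style predict/predict_proba methods; the oversampling and tree-induction steps are omitted, and the threshold argument simply encodes the mined discrimination rule.

```python
import numpy as np

def hybrid_predict(svm_i, svm_b, ann_i, X, threshold=0.285):
    """Combine SVM_I (trained on the imbalanced set) and SVM_B (trained on
    the oversampled balanced set). Where the two SVMs agree, the shared
    label is final; where they disagree, the ANN_I output is thresholded
    by the discrimination rule (1 = churn, 0 = retention). The 0.285 cut
    was optimal for the paper's churn data and is data-dependent.
    """
    pred_i = svm_i.predict(X)                 # strong sensitivity
    pred_b = svm_b.predict(X)                 # strong specificity
    ann_out = ann_i.predict_proba(X)[:, 1]    # ANN_I churn score in [0, 1]
    resolved = (ann_out >= threshold).astype(int)
    return np.where(pred_i == pred_b, pred_i, resolved)
```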

A Review of Personal Radiation Dose per Radiological Technologists Working at General Hospitals (전국 종합병원 방사선사의 개인피폭선량에 대한 고찰)

  • Jung, Hong-Ryang;Lim, Cheong-Hwan;Lee, Man-Koo
    • Journal of radiological science and technology / v.28 no.2 / pp.137-144 / 2005
  • To determine the personal radiation dose of radiological technologists, a survey was conducted of 623 radiological technologists working at 44 general hospitals in Korea's 16 cities and provinces from 1998 to 2002. A total of 2,624 collected records of personal radiation dose were analyzed by region, year, and hospital, with the following results: 1. The average radiation dose per capita by region and year for the 5 years was 1.61 mSv. By region, Daegu showed the highest amount, 4.74 mSv, followed by Gangwon (4.65 mSv) and Gyeonggi (2.15 mSv); the lowest amounts were recorded in Chungbuk (0.91 mSv), Jeju (0.94 mSv), and Busan (0.97 mSv), in that order. By year, 2000 showed the highest radiation dose, 1.80 mSv, followed by 2002 (1.77 mSv), 1999 (1.55 mSv), 2001 (1.50 mSv), and 1998 (1.36 mSv). 2. In 1998, Gangwon had the highest radiation dose per capita, 3.28 mSv, followed by Gwangju (2.51 mSv) and Daejeon (2.25 mSv), while Jeju (0.86 mSv) and Chungbuk (0.85 mSv) were the areas where the dose remained below 1.0 mSv. In 1999, Gangwon again topped the list with 5.67 mSv, followed by Daegu with 4.35 mSv and Gyeonggi with 2.48 mSv; in the same year, the dose was kept below 1.0 mSv in Ulsan (0.98 mSv), Gyeongbuk (0.95 mSv), and Jeju (0.91 mSv). 3. In 2000, Gangwon was again at the top of the list with 5.73 mSv. Ulsan had stayed below 1.0 mSv in 1998 and 1999 consecutively, whereas in 2000 its figure rose sharply to 5.20 mSv; Chungbuk remained below 1.0 mSv, at 0.79 mSv. 4. In 2001, Daegu recorded the highest dose of any region in the 5 years analyzed, 9.05 mSv, followed by Gangwon with 4.01 mSv; the areas below 1.0 mSv were Gyeongbuk (0.99 mSv) and Jeonbuk (0.92 mSv). In 2002, Gangwon again led the list with 4.65 mSv, while Incheon (0.88 mSv), Jeonbuk (0.96 mSv), and Jeju (0.68 mSv) were the regions below 1.0 mSv. 5. By hospital, KMH in Daegu showed the highest average radiation dose over the 5-year period, 6.82 mSv, followed by GAH in Gangwon (5.88 mSv) and CAH in Seoul (3.66 mSv). YSH in Jeonnam had the lowest, 0.36 mSv, followed by GNH in Gyeongnam (0.39 mSv) and DKH in Chungnam (0.51 mSv). A limitation of the present study is its focus on radiological technologists working at tertiary referral hospitals, which are regarded as stable in terms of working conditions, while technologists working at small hospitals were excluded from the survey. In addition, some hospitals established less than 5 years earlier were included, and some technologists who had worked less than 5 years at a hospital were also surveyed. We also cannot exclude the possibility that the differences in average personal radiation dose by region, hospital, and year are attributable to the different working conditions and facilities of the medical institutions. It therefore seems desirable to develop standardized instruments that measure the working environment objectively, and devices that allow more accurate comparison and analysis by region and hospital, in the future.


Studies on the Estimation of Growth Patterns of Cut-up Parts in Four Broiler Strains with Growing Body Weight (육용계에 있어서 계통간 산육능력 및 체중증가에 따른 각 부위별 증가양상 추정에 관한 연구)

  • 양봉국;조병욱
    • Korean Journal of Poultry Science / v.17 no.3 / pp.141-156 / 1990
  • The experiments were conducted to investigate the possibility of improving the effectiveness of the existing method of estimating edible meat weight in live broiler chickens. A total of 360 birds, five males and five females from each line, were sacrificed at Trial 1 (body weight 900-1,000 g), Trial 2 (1,200-1,400 g), Trial 3 (1,600-1,700 g), and Trial 4 (2,000 g) in order to measure body weight, the edible meat weight of the breast, thigh, and drumsticks, and various components of body weight. Each line was reared at the Poultry Breeding Farm, Seoul National University, from July 2 to September 13, 1987. The results obtained from this study are summarized as follows: 1. The average body weights of the lines (H, T, M, A) were 2,150.5±34.9, 2,133.0±26.2, 1,960.0±23.1, and 2,319.3±27.9 g, respectively, at 7 weeks of age. The feed-to-body-weight-gain ratios for the lines were 2.55, 2.13, 2.08, and 2.03, respectively, for 0 to 7 weeks of age. The viability of each line was 99.7, 99.7, 100.0, and 100.0%, respectively, for 0 to 7 weeks of age. It was noticed that A line chicks grew significantly heavier than T, H, and M line chicks from 0 to 7 weeks of age. The regression coefficients of the growth curves for the lines were bA=1.015, bH=0.265, bM=0.950, and bT=0.242, respectively. 2. Among the body weight components, the feather, abdominal fat, breast, and thigh and drumsticks increased in weight percentage as the birds grew older, while the neck, head, giblets, and inedible viscera decreased. No difference was apparent in the shanks, wings, and back. 3. The weight percentages of breast in the edible part for the lines were 19.2, 19.0, 19.9, and 19.0% at Trial 4, respectively. The weight percentages of thigh and drumsticks in the edible part were 23.1, 23.3, 22.8, and 23.0% at Trial 4, respectively. 4. The percentage meat yields from breast were 77.2, 78.9, 73.5, and 74.8% at Trial 4 in the H, T, M, and A lines, respectively. For thigh and drumsticks, values of 80.3, 78.4, 79.7, and 80.2% were obtained. These data indicate that percentage meat yield increases as the birds grow older. 5. The correlation coefficients between body weight and blood, head, shanks, breast, and thigh-drumstick were high. The correlations between abdominal fat (%) and percentage of edible meat were extremely low at all times, but those between abdominal fat (%) and inedible viscera were significantly high.


Changes of Brain Natriuretic Peptide Levels according to Right Ventricular Hemodynamics after a Pulmonary Resection (폐절제술 후 우심실의 혈역학적 변화에 따른 BNP의 변화)

  • Na, Myung-Hoon;Han, Jong-Hee;Kang, Min-Woong;Yu, Jae-Hyeon;Lim, Seung-Pyung;Lee, Young;Choi, Jae-Sung;Yoon, Seok-Hwa;Choi, Si-Wan
    • Journal of Chest Surgery / v.40 no.9 / pp.593-599 / 2007
  • Background: The correlation between brain natriuretic peptide (BNP) levels and the effect of pulmonary resection on the right ventricle of the heart is not yet widely known. This study aims to assess the relationship between changes in the hemodynamic values of the right ventricle and increased BNP levels as a compensatory mechanism for right heart failure following pulmonary resection, and to evaluate the role of the BNP level as an index of right heart failure after pulmonary resection. Material and Method: In 12 non-small cell lung cancer patients who had undergone a lobectomy or pneumonectomy, the level of NT-proBNP was measured using an immunochemical method (Elecsys 1010®, Roche, Germany) and compared with hemodynamic variables determined with a Swan-Ganz catheter before and after surgery. Echocardiography was performed before and after surgery to measure changes in right ventricular and left ventricular pressures. For statistical analysis, the Wilcoxon rank sum test and linear regression analysis were conducted using SPSSWIN (version 11.5). Result: The level of postoperative NT-proBNP (pg/mL) increased significantly at 6 hours and at 1, 2, 3, and 7 days after surgery (p=0.003, 0.002, 0.002, 0.006, 0.004). Of the hemodynamic variables measured with the Swan-Ganz catheter, the mean pulmonary artery pressure, compared with the preoperative value, increased significantly at 0 hours, 6 hours, and 1, 2, and 3 days after surgery (p=0.002, 0.002, 0.006, 0.007, 0.008). The right ventricular pressure increased significantly at 0 hours, 6 hours, and 1 and 3 days after surgery (p=0.000, 0.009, 0.044, 0.032). The pulmonary vascular resistance index [(mean pulmonary artery pressure − mean pulmonary capillary wedge pressure)/cardiac output index] increased significantly at 6 hours and 2 days after surgery (p=0.008, 0.028). Regression analysis of changes in the mean pulmonary artery pressure against NT-proBNP levels after surgery showed significance at 6 hours (r=0.602, p=0.038) and no significance thereafter. Echocardiography displayed no significant changes after surgery. Conclusion: There was a significant correlation between changes in the mean pulmonary artery pressure and the NT-proBNP level 6 hours after pulmonary resection. Therefore, changes in the NT-proBNP level after a pulmonary resection can serve as an index reflecting early hemodynamic changes in the right ventricle.
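The pulmonary vascular resistance index defined in brackets above is a simple ratio; the following sketch restates it with illustrative values, not patient data from the study.

```python
def pvri(mean_pa_pressure, mean_pcwp, cardiac_index):
    """Pulmonary vascular resistance index, as defined in the study:
    (mean pulmonary artery pressure - mean wedge pressure) / cardiac index.
    Pressures in mmHg, cardiac index in L/min/m^2."""
    return (mean_pa_pressure - mean_pcwp) / cardiac_index

# Illustrative values only: mPAP 25 mmHg, PCWP 10 mmHg, CI 3.0 L/min/m^2.
print(pvri(25, 10, 3.0))  # 5.0
```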

Literature Analysis of Radiotherapy in Uterine Cervix Cancer for the Processing of the Patterns of Care Study in Korea (한국에서 자궁경부암 방사선치료의 Patterns of Care Study 진행을 위한 문헌 비교 연구)

  • Choi Doo Ho;Kim Eun Seog;Kim Yong Ho;Kim Jin Hee;Yang Dae Sik;Kang Seung Hee;Wu Hong Gyun;Kim Il Han
    • Radiation Oncology Journal / v.23 no.2 / pp.61-70 / 2005
  • Purpose: Uterine cervix cancer is one of the most prevalent cancers among women in Korea. We analyzed papers published in Korea, comparing them with Patterns of Care Study (PCS) articles from the United States and Japan, for the purpose of developing and processing a Korean PCS. Materials and Methods: We searched PCS-related foreign papers on the PCS homepage (212 articles and abstracts) and in PubMed to identify the Structure and Process of the PCS. To compare these studies with Korean papers, we used the internet site 'Korean PubMed' to find 99 articles regarding uterine cervix cancer and radiation therapy. We analyzed the Korean papers by comparing them with selected PCS papers regarding Structure, Process, and Outcome, and compared their items between the 1980s and the 1990s. Results: The evaluable papers comprised 28 from the United States, 10 from Japan, and 73 from Korea that addressed cervix PCS items. PCS papers from the United States and Japan commonly stratified facilities into 3~4 categories on the basis of facility scale and the numbers of patients and doctors, and researchers restricted eligible patients strictly. For the Process of the study, they analyzed factors regarding pretreatment staging in chronological order, treatment-related factors, factors in addition to FIGO staging, and treatment machines. Papers from the United States dealt with racial and socioeconomic characteristics of the patients, tumor size (6 papers), and bilaterality of parametrial or pelvic side wall invasion (5), whereas papers from Japan dealt with tumor markers. The common trend in staging work-up was decreased use of lymphangiography and barium enema and increased use of CT and MRI over time. Recent subjects in the Korean papers included concurrent chemoradiotherapy (9 papers), treatment duration (4), tumor markers (B), and unconventional fractionation. Conclusion: By comparing papers among the three nations, we collected items for a Korean uterine cervix cancer PCS. Through consensus meetings and close communication, survey items for the cervix cancer PCS were developed to measure the structure, process, and outcome of radiation treatment for cervix cancer. Subsequent research will focus on the use of brachytherapy and its impact on outcomes, including complications. These findings and future PCS studies will direct the development of educational programs aimed at correcting identified deficits in care.