• Title/Summary/Keyword: problem analysis


A study on the application of M2PL-Q model for analyzing assessment data considering both content and cognitive domains: An analysis of TIMSS 2019 mathematics data (내용 및 인지 영역을 함께 고려한 평가 데이터 분석을 위한 Q행렬 기반 다차원 문항반응모형의 활용 방안 연구: TIMSS 2019 수학 평가 분석)

  • Kim, Rae Yeong;Hwang, Su Bhin;Lee, Seul Gi;Yoo, Yun Joo
    • Communications of Mathematical Education
    • /
    • v.38 no.3
    • /
    • pp.379-400
    • /
    • 2024
  • This study aims to propose a method for analyzing mathematics assessment data that integrates both content and cognitive domains, utilizing the multidimensional two-parameter logistic model with a Q-matrix (M2PL-Q; da Silva, 2019). The method was applied to the TIMSS 2019 8th-grade mathematics assessment data. The results demonstrate that the M2PL-Q model effectively estimates students' ability levels across both domains, highlighting the interrelationships between abilities in each domain. Additionally, the M2PL-Q model was found to be effective in estimating item characteristics by differentiating between the content and cognitive domains, revealing that their influence on problem-solving can vary across items. This study is significant in that it offers a comprehensive analytical approach that incorporates both content and cognitive domains, which were traditionally analyzed separately. By using the estimated ability levels for individual student diagnostics, students' strengths and weaknesses in specific content and cognitive areas can be identified, supporting more targeted learning interventions. Furthermore, by considering the detailed characteristics of each assessment item and applying them appropriately based on the context and purpose of the assessment, the validity and efficiency of assessments can be enhanced, leading to more accurate diagnoses of students' ability levels.
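The M2PL-Q response function described in this abstract can be sketched in a few lines. This is a minimal illustration, not necessarily da Silva's (2019) exact parameterization: the Q-matrix entry q[k] (0 or 1) switches dimension k on or off for the item, a[k] is the discrimination on dimension k, theta[k] the student's ability, and d an item intercept; all numbers below are hypothetical.

```python
import math

def m2pl_q_prob(theta, a, q, d):
    """Probability of a correct response under a Q-matrix-constrained
    multidimensional 2PL model: only dimensions with q[k] = 1 contribute."""
    z = sum(qk * ak * tk for qk, ak, tk in zip(q, a, theta)) + d
    return 1.0 / (1.0 + math.exp(-z))

# Item loading on both a content dimension (k=0) and a cognitive dimension (k=1):
p = m2pl_q_prob(theta=[0.5, -0.2], a=[1.2, 0.8], q=[1, 1], d=-0.3)
```

Zeroing a Q-matrix entry removes that domain's contribution, which is how the model separates content from cognitive effects per item.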

A Study on the Problems and Promotion Plans of Eco-Friendly Ship Finance (친환경 선박 금융의 문제점 및 활성화 방안에 관한 연구)

  • Hwang, Seung-Pyo;Song, Sang-Keun;Shin, Yong-John
    • Journal of Korea Port Economic Association
    • /
    • v.40 no.3
    • /
    • pp.27-45
    • /
    • 2024
  • This study analyzed the problems of eco-friendly ship finance and plans to promote it in response to marine pollution reduction regulations. Overall awareness of the problems of eco-friendly ship financing was found to be average (4.0~4.53), as demand for the construction and introduction of eco-friendly ships is not yet high. However, the response to the item on ship finance companies' lack of awareness of the importance of eco-friendly ships was low, at 3.35. Among the plans to promote eco-friendly ship finance, the need for security token offerings (STO) and for converting financial settlement to Korean won was rated low. Issuance of green bonds, creation of a sovereign wealth fund (SWF), utilization of pension and superannuation funds, accelerated depreciation after construction, government interest subsidies, establishment of a shipping exchange (promoting the introduction of eco-friendly ships), and development of eco-friendly ship supply chain connections were evaluated slightly above average. In contrast, a high need was found for shipping subsidies, special benefits for financial institutions, strengthened credit guarantees, revival of tax benefits for ship investment funds, incentives for early vessel retirement, cooperative networks, research and development support, and standards for operating eco-friendly ships. For seven promotion plans, such as strengthening credit guarantees, the awareness of shipping companies was statistically significantly higher than that of ship finance companies.

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, including two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed.
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM.
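The calibration described above rests on a standard gravity model, which distributes each zone's productions across destinations in proportion to attractions times friction factors. A minimal sketch of the singly constrained form, with two illustrative zones (not the study's data):

```python
def gravity_trips(productions, attractions, friction):
    """Singly constrained gravity model: zone i's productions are spread
    over destinations j in proportion to attractions[j] * friction[i][j]."""
    n = len(productions)
    trips = [[0.0] * n for _ in range(n)]
    for i in range(n):
        denom = sum(attractions[j] * friction[i][j] for j in range(n))
        for j in range(n):
            trips[i][j] = productions[i] * attractions[j] * friction[i][j] / denom
    return trips

P = [100.0, 200.0]            # zonal truck trip productions (hypothetical)
A = [150.0, 150.0]            # zonal attractions
F = [[1.0, 0.5], [0.5, 1.0]]  # friction factors from a calibrated curve
T = gravity_trips(P, A, F)
```

Calibration then means adjusting the friction factor curve behind F until the model's trip length frequency distribution matches the observed OD TLF.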
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. But more importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained.
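The link adjustment factor defined in this paragraph (ground count over assigned volume, applied to every zone whose trips use the selected link) can be sketched directly; the zone indices and volumes below are hypothetical:

```python
def selink_factor(ground_count, assigned_volume):
    """Link adjustment factor: ratio of observed to assigned volume
    on a selected link."""
    return ground_count / assigned_volume

def adjust_zones(productions, zones_using_link, factor):
    """Scale the productions of every zone whose trips were assigned
    to the selected link; other zones are left unchanged."""
    return [p * factor if z in zones_using_link else p
            for z, p in enumerate(productions)]

f = selink_factor(ground_count=1200.0, assigned_volume=1500.0)  # 0.8
adjusted = adjust_zones([100.0, 200.0, 300.0], {0, 2}, f)
```

Repeating this over many selected links, as the study does with 16 and 32 links, progressively pulls zonal productions and attractions toward values consistent with the ground counts.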
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the least value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume.
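The %RMSE statistic used throughout this evaluation is not defined in the abstract; a common convention in traffic model validation, assumed here, normalizes the RMSE of estimated link volumes by the mean observed volume:

```python
import math

def pct_rmse(estimated, observed):
    """Percent RMSE: root-mean-square error of estimated vs. observed
    link volumes, as a percentage of the mean observed volume."""
    n = len(observed)
    rmse = math.sqrt(sum((e - o) ** 2 for e, o in zip(estimated, observed)) / n)
    return 100.0 * rmse / (sum(observed) / n)

err = pct_rmse([900.0, 1100.0], [1000.0, 1000.0])  # 10.0
```

This normalization explains the pattern reported above: for a fixed absolute error, screenlines and areas with low average ground counts produce a larger %RMSE.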
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production trip rate (total adjusted productions/total population) and a new trip attraction trip rate. Revised zonal production and attraction adjustment factors can then be developed that only reflect the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not available currently, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8 while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable.
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long-range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes.
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.


A Study on Commodity Asset Investment Model Based on Machine Learning Technique (기계학습을 활용한 상품자산 투자모델에 관한 연구)

  • Song, Jin Ho;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.127-146
    • /
    • 2017
  • Services using artificial intelligence have begun to emerge in daily life. Artificial intelligence is applied to consumer electronics and communications products such as artificial intelligence refrigerators and speakers. In the financial sector, Goldman Sachs improved its stock trading process using Kensho's artificial intelligence technology: for example, two stock traders could handle the work of 600, and analytical work that took 15 people 4 weeks could be processed in 5 minutes. In particular, big data analysis through machine learning, one field of artificial intelligence, is actively applied throughout the financial industry. Stock market analysis and investment modeling through machine learning theory are also actively studied. The limits of the linearity assumption in financial time series studies are overcome by using machine learning methods such as artificial intelligence prediction models. Quantitative studies of financial data based on past stock-market-related numerical data widely use artificial intelligence to forecast future movements of stock prices or indices. Various other studies have predicted the future direction of the market or of company stock prices by learning from large amounts of text data such as news and comments related to the stock market. Investing in commodity assets, one class of alternative assets, is usually done to enhance the stability and safety of a traditional stock and bond portfolio. There is relatively little research on investment models for commodity assets compared to mainstream assets like equities and bonds. Recently, machine learning techniques have been widely applied in finance, especially to stock and bond investment models, producing better trading models and driving change across the whole financial area. In this study, we built an investment model using the Support Vector Machine (SVM), one of the machine learning models.
Some research on commodity assets focuses on price prediction for a specific commodity, but research on commodity investment models for asset allocation using machine learning is hard to find. We propose a method of forecasting four major commodity indices, a portfolio made of commodity futures, and individual commodity futures, using an SVM model. The four major commodity indices are the Goldman Sachs Commodity Index (GSCI), the Dow Jones UBS Commodity Index (DJUI), the Thomson Reuters/Core Commodity CRB Index (TRCI), and the Rogers International Commodity Index (RI). We selected two individual futures from each of three sectors (energy, agriculture, and metals) that are actively traded on the CME market and have enough liquidity: Crude Oil, Natural Gas, Corn, Wheat, Gold, and Silver futures. We constructed an equally weighted portfolio of the six commodity futures for comparison with the commodity indices. We used 19 macroeconomic indicators, including stock market indices, exports and imports trade data, labor market data, and composite leading indicators, as the input data of the model, because commodity assets are closely related to macroeconomic activity: 14 US economic indicators, two Chinese economic indicators, and two Korean economic indicators. The data period is from January 1990 to May 2017. We set the first 195 monthly observations as training data and the latter 125 as test data. In this study, we verified that the performance of the equally weighted commodity futures portfolio rebalanced by the SVM model is better than that of the other commodity indices. The prediction accuracy of the model for the commodity indices does not exceed 50% regardless of the SVM kernel function. On the other hand, the prediction accuracy for the equally weighted commodity futures portfolio is 53%. The prediction accuracy of the individual commodity futures models is better than that of the commodity index models, especially in the agriculture and metal sectors.
The individual commodity futures portfolio excluding the energy sector outperformed the portfolio covering all three sectors. To verify the validity of the model, the analysis results should remain similar despite variations in the data period; we therefore also used the odd-numbered-year data as training data and the even-numbered-year data as test data, and confirmed that the analysis results are similar. As a result, when allocating commodity assets to a traditional portfolio composed of stocks, bonds, and cash, more effective investment performance can be obtained by investing in commodity futures rather than commodity indices. In particular, better performance can be obtained with the rebalanced commodity futures portfolio designed by the SVM model.
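The abstract does not spell out the rebalancing rule used for the SVM-driven portfolio; one plausible reading, sketched below under that assumption, is to go equally long each futures contract the classifier predicts to rise for the coming month. The signal values are hypothetical, not taken from the paper.

```python
def rebalance(predictions):
    """Toy monthly rebalancing rule: split weight equally across every
    futures contract predicted to rise; hold cash if none are."""
    longs = [name for name, up in predictions.items() if up]
    if not longs:
        return {}
    w = 1.0 / len(longs)
    return {name: w for name in longs}

# Hypothetical monthly direction signals for the six futures in the study:
signals = {"crude_oil": True, "natural_gas": False, "corn": True,
           "wheat": True, "gold": False, "silver": True}
weights = rebalance(signals)
```

In the paper's setup the directional signal itself would come from an SVM trained on the 19 macroeconomic indicators; only the allocation step is shown here.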

A Methodology to Develop a Curriculum based on National Competency Standards - Focused on Methodology for Gap Analysis - (국가직무능력표준(NCS)에 근거한 조경분야 교육과정 개발 방법론 - 갭분석을 중심으로 -)

  • Byeon, Jae-Sang;Ahn, Seong-Ro;Shin, Sang-Hyun
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.43 no.1
    • /
    • pp.40-53
    • /
    • 2015
  • To train manpower meeting the requirements of the industrial field, the introduction of the National Qualification Frameworks (NQF) based on National Competency Standards (NCS) was determined in 2001, led by the Office for Government Policy Coordination. For landscape architecture in the construction field, the pilot "NCS - Landscape Architecture" was developed in 2008 and test-operated for 3 years starting in 2009. In particular, as the 'realization of a competence-based society, not one based on educational background' was adopted as one of the major projects of the Park Geun-Hye government (inaugurated in 2013), the NCS system was constructed on a nationwide scale as a concrete means of realizing this. However, because the NCS developed by the nation specifies ideal job performance abilities, it has weaknesses: it cannot reflect actual operational differences in student levels between universities, problems of securing equipment and professors, or constraints in the number of current curricula. For a soft landing into a practical curriculum, the gap between the current curriculum and the NCS must first be clearly analyzed. Gap analysis is the initial-stage methodology for reorganizing an existing curriculum into an NCS-based curriculum: based on the ability unit elements and performance standards for each NCS ability unit, the level of coincidence with (or discrepancy from) the existing curriculum within the department is rated on a 1-to-5 Likert scale and analyzed. Thus, universities wishing to operate NCS in the future can, by measuring the level of coincidence and the gap between the current university curriculum and the NCS, secure a basic tool to verify the applicability of NCS and the effectiveness of further development and operation.
The advantages of reorganizing the curriculum through gap analysis are, first, that a quantitative index of the NCS adoption rate can be provided for each department in connection with government financial support projects, and, second, that an objective standard is provided for judging sufficiency or insufficiency when reorganizing to an NCS-based curriculum. In other words, when introducing the relevant NCS subdivisions, the insufficient ability units and ability unit elements can be extracted, and the supplementary matters for each ability unit element in each existing subject can be extracted at the same time; this provides direction for detailed class programs and for opening basic subjects. The Ministry of Education and the Ministry of Employment and Labor must gather people from industry to actively develop and supply NCS standards at a practical level, so that the requirements of the industrial field are systematically reflected in educational training and qualification, and universities wishing to apply NCS must reorganize their curricula to connect work and qualification based on NCS. To enable this, universities must consider the relevant industrial prospects and the relationship between faculty resources within the university and local industry in order to clearly select the NCS subdivisions to be applied. Afterwards, gap analysis must be used in the NCS-based curriculum reorganization to establish the direction of the reorganization more objectively and rationally, in order to participate efficiently in the process-evaluation-type qualification system.
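The Likert-based gap analysis described above can be sketched as a small scoring routine. The ability-unit-element names and the insufficiency threshold of 3 below are illustrative assumptions, not values taken from the paper:

```python
def gap_analysis(ratings, threshold=3):
    """Gap analysis sketch: `ratings` maps each NCS ability-unit element
    to a 1-5 Likert coincidence score against the current curriculum.
    The gap is the distance from the full score of 5; elements rated
    below `threshold` are flagged as insufficient."""
    gaps = {elem: 5 - score for elem, score in ratings.items()}
    insufficient = sorted(e for e, s in ratings.items() if s < threshold)
    return gaps, insufficient

# Hypothetical ratings for a landscape architecture department:
ratings = {"planting design": 4, "grading plan": 2, "cost estimation": 3}
gaps, todo = gap_analysis(ratings)
```

The flagged elements are exactly the "insufficient ability unit elements" the abstract says should drive supplementary subjects and new class programs.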

Major Class Recommendation System based on Deep learning using Network Analysis (네트워크 분석을 활용한 딥러닝 기반 전공과목 추천 시스템)

  • Lee, Jae Kyu;Park, Heesung;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.95-112
    • /
    • 2021
  • In university education, the choice of major classes plays an important role in students' careers. However, in line with changes in industry, the fields of major subjects by department are diversifying and growing in number. As a result, students have difficulty choosing and taking classes according to their career paths. In general, students choose classes based on experience, such as the choices of peers or advice from seniors. This has the advantage of taking the general situation into account, but it does not reflect individual tendencies or consideration of courses already taken, and it leads to information inequality, as the information is shared only among specific students. In addition, as non-face-to-face classes have recently been conducted and exchanges between students have decreased, even experience-based decisions have become harder to make. Therefore, this study proposes a recommendation system model that can recommend college major classes suited to individual characteristics based on data rather than experience. A recommendation system recommends information and content (music, movies, books, images, etc.) that a specific user may be interested in. It is already widely used in services where considering individual tendencies is important, such as YouTube and Facebook, and it is a familiar part of personalized content services such as over-the-top (OTT) media services. Taking classes is also a kind of content consumption, in the sense of selecting classes suitable for an individual from a set content list. Unlike other content consumption, however, it is characterized by the large influence of the selection results. For example, music and movies are usually consumed once, and the time required to consume the content is short; the importance of each item is therefore relatively low, and there is little deep deliberation in selecting them.
Major classes usually have a long consumption time because they must be taken for a whole semester, and each item has high importance and requires greater caution in choice because the composition of the selected classes affects many things, such as career and graduation requirements. Owing to these unique characteristics of major classes, a recommendation system in the education field supports decision-making that reflects meaningful individual characteristics that cannot be captured in experience-based decisions, even though the item range is relatively small. This study aims to realize personalized education and enhance students' educational satisfaction by presenting a recommendation model for university major classes. In the model study, class history data of undergraduate students at University from 2015 to 2017 were used, and students and their major names were used as metadata. The class history data are implicit feedback data that only indicate whether content was consumed, without reflecting preferences for classes. Therefore, when embedding vectors characterizing students and classes are derived from them, their expressive power is low. With these issues in mind, this study proposes a Net-NeuMF model that generates vectors for students and classes through network analysis and uses them as input values of the model. The model is based on the structure of NeuMF using one-hot vectors, a representative model for data with implicit feedback. The input vectors of the model are generated to represent the characteristics of students and classes through network analysis. To generate a vector representing a student, each student is set as a node, and an edge with a weight connects two students if they take the same class. Similarly, to generate a vector representing a class, each class is set as a node, and an edge connects two classes if any student has taken both.
We then utilize Node2Vec, a representation learning methodology that quantifies the characteristics of each node. For the evaluation of the model, we used four indicators commonly employed by recommendation systems, and experiments were conducted on three different dimensions to analyze the impact of the embedding dimension on the model. The results show better performance on the evaluation metrics, regardless of dimension, than when using one-hot vectors in the existing NeuMF structure. Thus, this work contributes a network of students (users) and classes (items) that increases expressiveness over existing one-hot embeddings, matches the characteristics of each structure constituting the model, and shows better performance on various evaluation metrics compared to existing methodologies.
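The co-enrollment graph construction described in this abstract (students as nodes, edges weighted by the number of classes two students share) can be sketched without any graph library; Node2Vec would then be run on these weighted edges, which is not shown here. The toy enrollment data are hypothetical:

```python
from collections import defaultdict
from itertools import combinations

def student_edges(enrollments):
    """Build the weighted student graph: for every class, connect each
    pair of its students; the edge weight counts shared classes."""
    weights = defaultdict(int)
    for takers in enrollments.values():
        for a, b in combinations(sorted(takers), 2):
            weights[(a, b)] += 1
    return dict(weights)

# Hypothetical class rosters:
enrollments = {"ML": ["s1", "s2", "s3"], "DB": ["s1", "s2"]}
edges = student_edges(enrollments)
```

The class graph is built symmetrically, with classes as nodes connected when any student has taken both; the resulting Node2Vec embeddings replace the one-hot input vectors of NeuMF.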

FEM Analysis of the Effects of Mouth guard material properties on the Head and Brain under Mandibular Impact (구강보호장치의 재료적인 특성이 하악골 충격 시악골 및 두부에 미치는 영향에 관한 유한요소분석)

  • Kang, Nam-Hyun;Kim, Hyung-Sub;Woo, Yi-Hyung;Choi, Dae-Gyun
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.46 no.4
    • /
    • pp.325-334
    • /
    • 2008
  • Statement of problem & Purpose: The purpose of this study was to investigate the effect of mouth guard material properties on the skull and brain under impact loads on the mandible. Material and methods: Two customized mouth protectors with different material properties were made for a female Korean who had no history of brain trauma, no cerebral diseases, normal occlusion, and natural dentition. A 3D finite element model of the human skull and brain, scanned by means of computed tomography, was constructed. The FEM model of the head was composed of 407,825 elements and 82,138 nodes, including skull, brain, maxilla, mandible, articular disc, teeth, and mouth guard. The stress concentrations on the maxillary teeth, maxilla, and skull with the two mouth guards were evaluated under an oblique impact load of 800 N onto 3 mandibular loading points for 0.1 sec. The brain relative displacement was also compared between the two mouth guard materials under the same conditions. Result and Conclusion: The results were as follows. 1. In the comparison of von Mises stress on the maxillary teeth, the soft mouth guard material had significantly lower stress values at the measuring points than the hard mouth protector material (P < .05). 2. In the comparison of von Mises stress on the maxilla and skull, the soft mouth protector material had significantly lower stress values at the measuring points than the hard mouth protector material (P < .05). 3. For impact loads on the mandible, there were more stress-concentrated areas on the maxilla and skull with the hard mouth guard than with the soft mouth protector. 4. For impact loads on the mandible, brain relative displacement had little relation to mouth guard material properties. Based on these results, soft mouth guard materials were superior to hard mouth guard materials under mandibular impact loads for the prevention of sports injuries.
Although the results of this study were not enough to figure out the roles of needed mouth guard material properties for a human head, we got some knowledge of the pattern about stress concentration and distribution on maxilla and skull for impact loads with soft or hard mouth protector. More studies are needed to substantiate the relationship between the mouth guard materials and sports injuries.
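The von Mises stress used above to compare the two mouth guard materials is a standard scalar measure of multiaxial stress, derived from the three principal stresses. As a minimal sketch (the input values below are hypothetical illustrations, not values reported in the study), it can be computed as:

```python
import math

def von_mises(s1, s2, s3):
    """Von Mises equivalent stress from the three principal stresses.

    sigma_vm = sqrt( ((s1-s2)^2 + (s2-s3)^2 + (s3-s1)^2) / 2 )
    """
    return math.sqrt(((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2) / 2.0)

# Hypothetical principal stresses (MPa) at one point on the maxilla;
# illustrative numbers only, not taken from the study's FEM results.
print(round(von_mises(40.0, 10.0, -5.0), 2))  # equivalent stress in MPa
```

A purely hydrostatic state (all three principal stresses equal) gives zero von Mises stress, which is why the measure is commonly used to flag shear-driven stress concentrations in FEM post-processing.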

Applicability of Partial Post-Tension Method for Deflection Control of Reinforced Concrete Slabs (RC슬래브의 처짐제어를 위한 상향긴장식 부분PT공법의 적용)

  • Lee, Deuck-Hang;Kim, Kang-Su;Kim, Sang-Sik;Kim, Yong-Nam;Lim, Joo-Hyuk
    • Journal of the Korea Concrete Institute
    • /
    • v.21 no.3
    • /
    • pp.347-358
    • /
    • 2009
  • Recently, conditions for applying the flat-plate slab system have become favorable. The flat-plate slab, which has no beams, however, often cannot control deflection as well as typical slab-beam structures, and the post-tension (PT) method is generally regarded as one of the best solutions to this problem. The PT method can effectively control deflection without increasing slab thickness. Despite this advantage, however, the application of the PT method has been very limited due to cost increases, technical problems, and lack of experience. Therefore, in order to reduce the difficulties of applying the full PT method under current domestic circumstances and to enhance the constructability of the PT system, this research proposed a partial PT method with top jacking anchorage applied to part of a span as needed. For the top jacking anchorage system, the efficiency of deflection control must be considered in detail, because it can vary widely depending on the location of the anchorage, which can be placed anywhere as needed; the tensile stresses induced behind the anchorage zone must also be examined. Therefore, in this study, analyses were performed on the efficiency of deflection control depending on the anchorage location, and on the tensile stresses and forces, using the finite element method and a strut-and-tie model for the proposed top jacking anchorage system. The proposed jacking system was also applied to floor slabs at a construction site to investigate its applicability, and the analysis results of slab behavior were compared to values measured from the PT slab constructed by the partial PT method. The results of this study indicate that the partial PT method can be applied very efficiently, with little cost increase, to control deflection and tensile stresses on an as-needed basis in regions where problems exist.
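The deflection-control idea behind post-tensioning can be sketched with the usual equivalent-load model: a parabolic tendon with force P and midspan drape e acts like a distributed uplift w_p = 8Pe/L^2, which offsets part of the gravity load in the simply supported midspan deflection formula delta = 5wL^4/(384EI). The numbers below are hypothetical, chosen only to illustrate the mechanism; they are not from the paper's analysis or its field measurements.

```python
def midspan_deflection(w, L, EI):
    """Elastic midspan deflection (m) of a simply supported strip under a
    uniform load w (kN/m): delta = 5 * w * L^4 / (384 * EI)."""
    return 5.0 * w * L ** 4 / (384.0 * EI)

def pt_equivalent_uplift(P, e, L):
    """Equivalent upward distributed load (kN/m) of a parabolic tendon
    with force P (kN) and midspan drape e (m): w_p = 8 * P * e / L^2."""
    return 8.0 * P * e / L ** 2

# Hypothetical slab strip: 8 m span, 10 kN/m service load,
# flexural stiffness EI = 40,000 kN*m^2 (illustrative values only).
L, w, EI = 8.0, 10.0, 40000.0
P, e = 300.0, 0.08  # tendon force (kN) and midspan drape (m)

w_p = pt_equivalent_uplift(P, e, L)           # uplift from the tendon
d_without = midspan_deflection(w, L, EI)      # no post-tensioning
d_with = midspan_deflection(w - w_p, L, EI)   # net load reduced by PT
print(round(d_without * 1000, 1), round(d_with * 1000, 1))  # deflections, mm
```

Raising P or the drape e increases the uplift and further reduces the net deflection, which is why anchorage location and tendon profile dominate the efficiency of deflection control in such a system.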

The Group Counseling Program for Terminal Cancer Patients and their Family Members in the Seoul National University Hospital (말기 암환자와 가족을 위한 집단상담 프로그램 - 서울대학교병원 경험의 분석-)

  • Lee, Young-Sook;Heo, Dae-Seog;Yun, Young-Ho;Kim, Hyun-Sook;Choi, Kyung-Sook;Yun, Yeo-Jung
    • Journal of Hospice and Palliative Care
    • /
    • v.1 no.1
    • /
    • pp.56-64
    • /
    • 1998
  • Purpose : Seoul National University Hospital developed a group counseling program for terminal cancer patients and their family members. The program brings together a doctor, a nutritionist, a nurse, a pharmacist, and a social worker to provide information and to enhance participants' ability to cope with terminal cancer. This research aims to introduce the new program and to assess its validity and applicability for terminal cancer patients and their family members by analyzing the concerns and specific questions of the participants. Methods : The methodological approach employed in this research is a content analysis of the 1996 group counseling reports and interviews with the 312 participants. The analysis includes the general characteristics of the subjects, family relationship to the patients, number of attendances at the group sessions, and the source of information about the program. Results : The participants consisted of 261 family members (84%) and 51 patients (16%). The majority attended only once. Diagnoses were mainly lung cancer, stomach cancer, and liver cancer. Among family members, participation decreased in the order of spouse, children, daughter-in-law, siblings, and parents. Participants learned of the program largely through medical staff (69%), compared with posters in the hospital (26%). The participants were interested primarily in medical information. Their interests varied, covering pain control, patient care, nutrition, psychosocial problems, etc. Conclusion : This program is characterized largely as a family-support program that primarily offers information on terminal cancer. It is a kind of hospice program, which maximizes the present quality of living of terminal cancer patients as long as life continues by encouraging them to live with terminal cancer.
Thus, this group program can be employed as an active support network for patients and their families. In order to develop comprehensive care-giving services, a 24-hour telephone service, hospice facilities, home care services, and, in particular, communication between referral hospitals and primary care physicians are required. Such a development of services is the ultimate goal for improving care, but the immediate goal of the program is to provide better education for patients and their families on living with terminal cancer.


A Study on the Surgical Hand Scrub and Surgical Glove Perforation (외과적 손씻기 및 외과용 장갑의 천공율에 대한 연구)

  • 윤혜상
    • Journal of Korean Academy of Nursing
    • /
    • v.25 no.4
    • /
    • pp.653-667
    • /
    • 1995
  • Post-operative wound infections have been a serious problem in operating-room nursing care and appear to be strongly related to infections occurring during the performance of operations. The purpose of this study is to identify patterns in the duration of surgical hand scrub (SHS), to evaluate the method of SHS, and to examine the rate of glove perforation. Subjects for this study include 244 doctors and 169 nurses working in the operating theatre of a hospital in the Seoul area. Test samples and related data were collected from this medical facility between April 1-15 and July 1-5, 1995 by the author and a staff member working in the operating room. For the study, data on the SHS of doctors and nurses were obtained at the time of operation, and multiple batches of surgical gloves worn by the operating doctors were collected after each operation. The duration of SHS was measured with a stopwatch, and the method of SHS was evaluated according to the Scoring Hand Scrub Criteria (SHS Criteria) and expressed as SHS scores. For the analysis of the data, the t-test was used to compare the differences in the duration and the SHS scores of doctors and nurses, and Pearson's correlation coefficient was used to examine the relationship between SHS duration and SHS scores. The results of the study are summarized as follows. 1) The mean time spent on each SHS was 167 seconds for nurses and 127 seconds for doctors. The data comparing nurses and doctors indicated a significant difference in the duration of SHS between these two groups (t=5.58, p=.000). 2) The mean time spent on the first SHS was 145 seconds and that on the second SHS was 135 seconds; there was not a significant difference in the duration of SHS between doctors and nurses (t=1.44, p=.156).
3) The mean time spent on the SHS was 162 seconds for OS (Orthopaedic surgery) doctors, 150 seconds for NS (Neurologic surgery), 121 seconds for GS (General surgery), 94 seconds for OPH (Ophthalmology) and DS (Dental surgery), 82 seconds for URO (Urology), 78 seconds for PS (Plastic surgery), and 40 seconds for ENT (Ear, Nose & Throat). These results also showed a significant difference in the duration of SHS among the medical specialties (t=4.8, p=.0001). 4) The average SHS score of the nurses was 15.2, while that of the doctors was 13.1 (t=3.66, p=.000), indicating that the nurses actually clean their hands more thoroughly than the doctors do. 5) The average SHS score was 15.5 for NS doctors, 15.3 for OPH, 14.3 for OS, 12.7 for GS, 12.0 for DS, 11.7 for URO, 10.1 for PS, and 7.5 for ENT. Comparison of the average SHS scores from the 8 specialties showed a significant difference in SHS patterns among medical specialties (F=5.08, p=.000). 6) It appears that the operating personnel scrub the palms and dorsum of their hands relatively well, but the nails and fingers less thoroughly. 7) The more time the operating personnel spend on hand scrubbing, the more correctly they clean their hands (r=.6427, p<.001). 8) The overall frequency of perforation in all post-operative gloves tested was 38 out of 389 gloves (10.3%). The perforation rate was 13% for PS, 12.1% for GS, 8.8% for OS, and 3.3% for NS.
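The statistics reported above (a two-sample t-test on scrub durations, Pearson's r between duration and score) can be reproduced in a few lines of plain Python. The sample data below are hypothetical, constructed only so the group means match the reported 167 s (nurses) and 127 s (doctors); they are not the study's raw measurements.

```python
import math
from statistics import mean, stdev

def pearson_r(x, y):
    """Sample Pearson correlation coefficient of two equal-length lists."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def t_statistic(x, y):
    """Two-sample Student's t statistic with pooled variance."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2) / (nx + ny - 2)
    return (mean(x) - mean(y)) / math.sqrt(sp2 * (1.0 / nx + 1.0 / ny))

# Hypothetical SHS durations (seconds); means match the abstract's 167/127.
nurses = [150, 160, 170, 175, 180]
doctors = [110, 120, 130, 135, 140]
print(round(t_statistic(nurses, doctors), 2))  # positive: nurses scrub longer

# Hypothetical duration-vs-score pairs showing a positive correlation.
durations = [40, 80, 120, 160, 200]
scores = [7.0, 10.0, 12.0, 14.5, 15.5]
print(round(pearson_r(durations, scores), 3))
```

With the study's full sample sizes (244 doctors, 169 nurses) the same statistics would of course be computed over hundreds of observations; the sketch only shows the form of the calculation.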
