• Title/Summary/Keyword: All-In-One


Usability Evaluation of Planning MRI Acquisition for CT/MRI Fusion in Computerized Treatment Planning (전산화 치료계획의 CT/MRI 영상 융합 시 PLANNING MRI영상 획득의 유용성 평가)

  • Park, Do-Geun; Choe, Byeong-Gi; Kim, Jin-Man; Lee, Dong-Hun; Song, Gi-Won; Park, Yeong-Hwan
    • The Journal of Korean Society for Radiation Therapy / v.26 no.1 / pp.127-135 / 2014
  • Purpose: By taking advantage of each imaging modality, the use of fused CT/MRI images has increased in prostate cancer radiation therapy. However, fusion uncertainty may cause a partial target miss or normal-organ overdose. To complement this limitation, our hospital acquired an MRI image (Planning MRI) by setting patients up with the same fixation tool and posture as in the CT simulation. This study aims to evaluate the usefulness of the Planning MRI by comparing and analyzing the diagnostic MRI and Planning MRI images. Materials and Methods: This study targeted 10 patients who had been diagnosed with prostate cancer and prescribed non-hormonal, definitive RT of 70 Gy/28 fx from August 2011 to July 2013. Each patient had both CT and MRI simulations, and the MRI images were acquired within half an hour after the CT simulation. The acquired CT/MRI images were fused primarily based on bony-structure matching. The volume of the prostate was measured on the Planning MRI and diagnostic MRI images, and the diameters in the craniocaudal, anteroposterior and left-to-right directions from the center of the prostate were measured to compare changes in prostate shape. Results: The prostate volumes on the Planning MRI and diagnostic MRI images averaged 25.01 cm³ (range 15.84-34.75 cm³) and 25.05 cm³ (range 15.28-35.88 cm³), respectively; the diagnostic MRI showed an increase of 0.12 % compared with the Planning MRI. On the Planning MRI, the volume increased by 7.46 cm³ (29 %) in the transition zone and decreased by 8.52 cm³ (34 %) in the peripheral zone. The craniocaudal, anteroposterior and left-to-right prostate diameters averaged 3.82 cm, 2.38 cm and 4.59 cm on the Planning MRI and 3.37 cm, 2.76 cm and 4.51 cm on the diagnostic MRI, respectively. All three diameters changed, and the change was significant on the Planning MRI: on average, the anteroposterior diameter decreased by 0.38 cm (13 %), while the right-to-left and craniocaudal diameters increased by 0.08 cm (1.6 %) and 0.45 cm (13 %), respectively. Conclusion: Based on these results, the total prostate volumes on the Planning MRI and the diagnostic MRI were not significantly different. However, the shape and partial volumes of the prostate changed because of the insertion of the balloon tube into the rectum. Thus, if the Planning MRI images are used for CT/MRI image fusion, the target can be included in the CTV without missing the volume increased in the transition zone, and the dose delivered to the rectum can be reduced by more clearly delineating the reduced peripheral-zone volume. Therefore, the authors believe that a Planning MRI image should be acquired to ensure target delineation and localization accuracy.
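
A minimal arithmetic sketch of the comparison made above; this is not the authors' code, and the paired per-patient values are hypothetical placeholders rather than the study data.

```python
# Sketch: percent change of diagnostic-MRI measurements relative to the
# Planning MRI for one patient. All numbers below are hypothetical.

def percent_change(reference: float, comparison: float) -> float:
    """Percent change of `comparison` relative to `reference`."""
    return (comparison - reference) / reference * 100.0

# Hypothetical paired measurements (volume in cm^3, diameters in cm).
planning   = {"volume": 25.0, "craniocaudal": 3.8, "anteroposterior": 2.4, "left_right": 4.6}
diagnostic = {"volume": 25.1, "craniocaudal": 3.4, "anteroposterior": 2.8, "left_right": 4.5}

for key in planning:
    print(f"{key}: {percent_change(planning[key], diagnostic[key]):+.1f} % vs. Planning MRI")
```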

Plasma Activity of Lysosomal Enzymes in Active Pulmonary Tuberculosis (활동성 폐결핵 환자에서 혈중 리소솜 효소의 활성도)

  • Koh, Youn-Suck; Choi, Jeong-Eun; Kim, Mi-Kyung; Lim, Chae-Man; Kim, Woo-Sung; Chi, Hyun-Sook; Kim, Won-Dong
    • Tuberculosis and Respiratory Diseases / v.42 no.5 / pp.646-653 / 1995
  • Background: The confirmative diagnosis of pulmonary tuberculosis (Tb) can be made by isolating Mycobacterium tuberculosis (MTb) from cultures of the sputum, respiratory secretions or tissues of the patients, but a positive result cannot always be obtained in pulmonary Tb cases. Although there are many indirect ways to diagnose Tb, clinicians still experience difficulty in the diagnosis of Tb because each method has its own limitations. Therefore, development of a new diagnostic tool is clinically urgent. It has been reported that silica causes some lysosomal enzymes to be released from macrophages in vitro and that one of these enzymes is elevated in workers exposed to silica dust and in silicotic subjects. In pulmonary Tb, alveolar macrophages are known to be activated after ingestion of MTb. Activated macrophages can kill MTb through oxygen free-radical species and digestive enzymes of the lysosome. But if the macrophages allow the bacilli to grow intracellularly, the macrophages will eventually die and the local lesion will enlarge. It is therefore assumed that lysosomal enzymes would be released from the dead macrophages. The goal of this investigation was to determine whether there are differences in the plasma activities of the lysosomal enzymes β-glucuronidase (GLU) and β-N-acetyl glucosaminidase (NAG) among groups of active and inactive pulmonary Tb patients and healthy controls, and to see whether the plasma activities of GLU and NAG could be used as diagnostic indices of active pulmonary Tb. Methods: Plasma was obtained from 20 patients with bacteriologically proven active pulmonary Tb, 15 persons with inactive Tb and 20 normal controls. In 10 patients with active pulmonary Tb, serial samples were obtained after 2 months of anti-Tb medication. Plasma GLU and NAG activities were measured by fluorometric methods using 4-methylumbelliferyl substrates. All data are expressed as mean ± standard error of the mean. Results: The activities of GLU and NAG in the plasma of patients with active Tb were 21.52±3.01 and 325.4±23.37 (nmol product/h/ml of plasma), respectively. Those of inactive pulmonary Tb were 24.87±3.78 and 362.36±33.92, and those of the healthy controls were 25.45±4.05 and 324.44±28.66 (nmol product/h/ml of plasma), respectively. There were no significant differences in the plasma activities of either enzyme among the three groups. The plasma activity of GLU at 2 months after anti-Tb medication was increased (42.18±5.94 nmol product/h/ml of plasma) in the patients with active pulmonary Tb compared with that at the diagnosis of Tb (P < 0.05). Conclusion: The results of the present investigation suggest that measurement of the plasma activities of GLU and NAG in patients with active pulmonary Tb is not a useful method for the diagnosis of active Tb. Further investigation is necessary to define why the plasma activity of GLU increased in the patients with active pulmonary Tb after Tb therapy.
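
The among-group and pre/post comparisons described above can be sketched as follows; this is an assumed analysis outline, not the authors' script, and all activity values are hypothetical.

```python
# Sketch: one-way ANOVA across the three groups and a paired t-test for the
# same active-Tb patients at diagnosis vs. after 2 months of anti-Tb therapy.
import numpy as np
from scipy import stats

# Hypothetical GLU activities (nmol product/h/ml of plasma), not study data.
glu_active   = np.array([18.2, 25.4, 21.0, 19.8, 23.1])
glu_inactive = np.array([26.7, 22.9, 24.5, 25.0, 23.8])
glu_control  = np.array([27.1, 24.3, 26.0, 23.5, 25.9])

f_stat, p_groups = stats.f_oneway(glu_active, glu_inactive, glu_control)
print(f"Among-group difference: F={f_stat:.2f}, p={p_groups:.3f}")

# Paired comparison for the active-Tb patients (hypothetical follow-up values).
glu_followup = np.array([35.0, 44.2, 40.1, 38.7, 46.5])
t_stat, p_paired = stats.ttest_rel(glu_active, glu_followup)
print(f"Diagnosis vs. 2 months: t={t_stat:.2f}, p={p_paired:.3f}")
```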

The Findings of Pulmonary Function Test in Patients with Inhalation Injury (흡입화상 환자에서의 폐기능검사 소견)

  • Kim, Jong Yeop; Kim, Cheol Hong; Shin, Hyun Won; Chae, Young Je; Choi, Chul Young; Shin, Tae Rim; Park, Yong Bum; Lee, Jae Young; Bahn, Joon-Woo; Park, Sang Myeon; Kim, Dong-Gyu; Lee, Myung Goo; Hyun, In-Gyu; Jung, Ki-Suck
    • Tuberculosis and Respiratory Diseases / v.60 no.6 / pp.653-662 / 2006
  • Background: The changes in pulmonary function observed in burn patients with an inhalation injury are probably the result of a combination of airway inflammation, chest wall and muscular abnormalities, and scar formation. In addition, prolonged ventilatory support and episodes of pneumonia appear to contribute to the findings. This study investigated the changes in pulmonary function in patients with an inhalation injury in the early and late post-burn periods. Methods: From August 1, 2002, to August 30, 2005, surviving burn patients who had an inhalation injury were enrolled prospectively. The inhalation injury was identified by bronchoscopy within 48 hours after admission. Spirometry was performed in the early phase during admission and in the recovery phase after discharge, and the changes in pulmonary function were compared. Results: Thirty-seven patients (28 men, 9 women) with a total burn surface area (% TBSA) ranging from 0 to 18 % were included. The initial PaO2/FiO2 ratio and COHb were 286.4±129.6 mmHg and 7.8±6.6 %, respectively. Nine patients (24.3 %) underwent endotracheal intubation and 3 (8.1 %) underwent mechanical ventilation. The initial chest X-ray revealed abnormalities in 18 cases (48.6 %); 15 (83.3 %) of these resolved completely, while 3 (16.7 %) had residual sequelae. The initial pulmonary function test showed an obstructive pattern in 9 patients (24.3 %), 4 (44.4 %) of whom showed a positive bronchodilator response; a restrictive pattern was also observed in 9 patients (24.3 %). A reduced DLco was observed in only 4 (17.4 %) of the 23 patients who underwent DLco measurement. In the follow-up study, an obstructive and a restrictive pattern were each observed in only one case (2.7 %), and all decreased DLco values returned to normal. Conclusions: Most surviving burn patients with an inhalation injury but a small burn size showed initial derangements in the pulmonary function test that were restored to normal lung function during the follow-up period.
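
As an illustration of how spirometry results are sorted into the obstructive and restrictive patterns mentioned above, the sketch below applies commonly cited cut-offs (FEV1/FVC < 0.70 for obstruction, FVC < 80 % of predicted for a restrictive pattern, and a bronchodilator response of at least 12 % and 0.2 L); the abstract does not state the study's exact criteria, so these thresholds are an assumption.

```python
# Sketch: classify a spirometry result and check bronchodilator response
# using commonly cited cut-offs (assumed, not the study's protocol).

def classify_spirometry(fev1_l: float, fvc_l: float, fvc_pct_predicted: float) -> str:
    patterns = []
    if fvc_l > 0 and fev1_l / fvc_l < 0.70:
        patterns.append("obstructive")
    if fvc_pct_predicted < 80.0:
        patterns.append("restrictive")
    return " + ".join(patterns) if patterns else "normal"

def bronchodilator_response(fev1_pre_l: float, fev1_post_l: float) -> bool:
    """Positive response: FEV1 rises by >= 12 % and >= 0.2 L (common criterion)."""
    delta = fev1_post_l - fev1_pre_l
    return delta >= 0.2 and fev1_pre_l > 0 and delta / fev1_pre_l >= 0.12

print(classify_spirometry(fev1_l=1.8, fvc_l=3.0, fvc_pct_predicted=85))  # obstructive
print(bronchodilator_response(fev1_pre_l=1.8, fev1_post_l=2.1))          # True
```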

Self-Regulatory Mode Effects on Emotion and Customer's Response in Failed Services - Focusing on the moderating effect of attribution processing - (고객의 자기조절성향이 서비스 실패에 따른 부정적 감정과 고객반응에 미치는 영향 - 귀인과정에 따른 조정적 역할을 중심으로 -)

  • Sung, Hyung-Suk; Han, Sang-Lin
    • Asia Marketing Journal / v.12 no.2 / pp.83-110 / 2010
  • Dissatisfied customers may express their dissatisfaction behaviorally, and these behavioral responses may impact a firm's profitability. How do we model the impact of self-regulatory orientation on emotions and subsequent customer behaviors? Obviously, the positive and negative emotions experienced in these situations will influence the overall degree of satisfaction or dissatisfaction with the service (Zeelenberg and Pieters 1999). Most likely, these specific emotions will also partly determine subsequent behavior in relation to the service and service provider, such as the likelihood of complaining, the degree to which customers will switch or repurchase, and the extent of word-of-mouth communication they will engage in (Zeelenberg and Pieters 2004). This study investigates the antecedents and consequences of negative consumption emotion and the moderating effect of attribution processing in an integrated model (self-regulatory mode → specific emotions → behavioral responses). We focused on the fact that regret and disappointment have effects on consumer behavior. There are essentially two approaches in this research: the valence-based approach and the specific-emotions approach. The authors indicate theoretically and show empirically that it matters to distinguish these approaches in services research. The present studies examined the influence of two regulatory mode concerns (locomotion orientation and assessment orientation) on the experience of post-decisional regret and disappointment (Pierro, Kruglanski, and Higgins 2006; Pierro et al. 2008). When contemplating a decision with a negative outcome, it was predicted that high (vs. low) locomotion would induce more disappointment than regret, whereas high (vs. low) assessment would induce more regret than disappointment. The validity of the measurement scales was confirmed by evaluations provided by the participating respondents and an independent advisory panel, whose samples provided recommendations throughout the primary, exploratory phases of the study. The resulting goodness-of-fit statistics were an RMR/RMSEA of 0.05, GFI and AGFI greater than 0.9, and a chi-square of 175.11. The indicators of each construct were good measures of their variables and showed high convergent validity, with reliabilities above 0.9. Some items were deleted, leaving those that reflected the cognitive dimension of importance; the remaining indicators were good measures with convergent validity, as evidenced by reliabilities of 0.9. These results indicate that the measurement model fits the sample data well and is adequate for use. The scale for each factor was set by fixing the factor loading of one of its indicator variables to one and then applying the maximum likelihood estimation method. The results of the analysis showed that the directions of the effects in the model are supported by the theory underpinning its causal linkages. This research proposed 6 hypotheses on 6 latent variables and tested them through structural equation modeling. Six alternative measurement models were compared through statistical significance tests of the paths of the research model and the overall fit of the structural equation model, with successful results. Also, locomotion orientation influences disappointment more positively when internal attribution is high rather than low, and assessment orientation influences regret more positively when external attribution is high rather than low.
In sum, the results of our studies suggest that assessment and locomotion concerns, both as chronic individual predispositions and as situationally induced states, influence the amount of regret and disappointment people experience. These findings contribute to our understanding of regulatory mode, regret, and disappointment. In previous studies of regulatory mode, relatively little attention has been paid to the post-actional, evaluative phase of self-regulation. The present findings indicate that assessment concerns and locomotion concerns are clearly distinct in this phase, with individuals higher in assessment delving more into possible alternatives to past actions and individuals higher in locomotion engaging less in such reflective thought. What this suggests is that, separate from decreasing the amount of counterfactual thinking per se, individuals with locomotion concerns want to move on, to get on with it. Regret is about the past and not the future; thus, individuals with locomotion concerns are less likely to experience regret. The results supported our predictions. We discuss the implications of these findings for the nature of regret and disappointment from the perspective of their relation to regulatory mode. Also, self-regulatory mode and the specific emotions (disappointment and regret) were assessed and their influence on customers' behavioral responses (inaction, word of mouth) was examined, using a sample of 275 customers. It was found that these specific emotions have a direct impact on behavior over and above a general negative-emotion effect. Hence, we argue against collapsing emotions such as regret and disappointment into a single response measure and in favor of a specific-emotions approach grounded in self-regulation. Implications for services marketing practice and theory are discussed.
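
A minimal sketch of screening structural-equation-model fit indices against the conventional thresholds quoted above; the index values in the example are placeholders, not the study's output.

```python
# Sketch: check reported fit indices against the thresholds cited in the
# abstract (RMR/RMSEA around 0.05, GFI and AGFI above 0.9). Placeholder values.

THRESHOLDS = {
    "RMR":   lambda v: v <= 0.05,
    "RMSEA": lambda v: v <= 0.05,
    "GFI":   lambda v: v >= 0.90,
    "AGFI":  lambda v: v >= 0.90,
}

fit_indices = {"RMR": 0.05, "RMSEA": 0.05, "GFI": 0.93, "AGFI": 0.91}

for name, value in fit_indices.items():
    ok = THRESHOLDS[name](value)
    print(f"{name}={value:.2f}  {'acceptable' if ok else 'poor fit'}")
```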

End to End Model and Delay Performance for V2X in 5G (5G에서 V2X를 위한 End to End 모델 및 지연 성능 평가)

  • Bae, Kyoung Yul; Lee, Hong Woo
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.107-118 / 2016
  • The advent of 5G mobile communications, expected in 2020, will provide many services such as the Internet of Things (IoT) and vehicle-to-infra/vehicle/nomadic (V2X) communication. There are many requirements for realizing these services: reduced latency, high data rate and reliability, and real-time service. In particular, a high level of reliability and delay sensitivity with an increased data rate is very important for M2M, IoT, and Factory 4.0. Around the world, 5G standardization organizations have considered these services and grouped them to derive the technical requirements and service scenarios. The first scenario is broadcast services that use a high data rate, for instance for sporting events or emergencies. The second scenario covers support for e-Health, car reliability, etc.; the third scenario is related to VR games with delay sensitivity and real-time techniques. Recently, these groups have been reaching agreements on the requirements for such scenarios and the target levels. Various techniques are being studied to satisfy these requirements and are being discussed in the context of software-defined networking (SDN) as the next-generation network architecture. SDN is being standardized by the ONF and basically refers to a structure that separates the control-plane signaling from the data-plane packets. One of the best examples of low latency and high reliability is an intelligent traffic system (ITS) using V2X. Because a car passes through a small cell of the 5G network very rapidly, the messages to be delivered in the event of an emergency have to be transported in a very short time; this is a typical example requiring high delay sensitivity. 5G has to support high reliability and delay-sensitivity requirements for V2X in the field of traffic control. For these reasons, V2X is a major delay-critical application. V2X (vehicle-to-infra/vehicle/nomadic) represents all types of communication methods applicable to roads and vehicles; it refers to a connected or networked vehicle. V2X can be divided into three kinds of communications: first, communication between a vehicle and infrastructure (vehicle-to-infrastructure; V2I); second, communication between a vehicle and another vehicle (vehicle-to-vehicle; V2V); and third, communication between a vehicle and mobile equipment (vehicle-to-nomadic devices; V2N). Further types are expected to be added in various fields in the future. Because the SDN structure is under consideration as the next-generation network architecture, the SDN architecture is significant. However, the centralized architecture of SDN can be considered an unfavorable structure for delay-sensitive services, because the central controller has to communicate with many nodes and provide processing power. Therefore, in the case of emergency V2X communications, delay-related control functions require a supporting tree structure. In such a scenario, the architecture of the network processing the vehicle information is a major variable affecting delay. Because it is difficult to meet the desired level of delay sensitivity with a typical, fully centralized SDN structure, research on the optimal size of an SDN for processing the information is needed. This study examined the SDN architecture considering the V2X emergency delay requirements of a 5G network in the worst-case scenario and performed a system-level simulation on the speed of the car, the cell radius, and the cell tier to derive the range of cells for information transfer in the SDN network.
In the simulation, because 5G provides a sufficiently high data rate, the information supporting neighboring vehicles was assumed to reach the car without errors. Furthermore, the 5G small cell was assumed to have a cell radius of 50-100 m, and the maximum speed of the vehicle was considered to be 30-200 km/h, in order to examine the network architecture that minimizes the delay.
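
A back-of-the-envelope sketch of the timing constraint implied by the parameter ranges above (cell radius 50-100 m, vehicle speed 30-200 km/h): the time a vehicle spends inside one small cell bounds how quickly the SDN control plane must process and forward an emergency message. This is an illustration, not the paper's system-level simulator.

```python
# Sketch: upper bound on the dwell time of a vehicle in one circular small
# cell, using the cell diameter as the longest possible chord. Actual dwell
# times are shorter, so the control-plane delay budget is tighter in practice.

def dwell_time_s(cell_radius_m: float, speed_kmh: float) -> float:
    speed_ms = speed_kmh / 3.6
    return 2.0 * cell_radius_m / speed_ms  # diameter / speed

for radius in (50, 100):
    for speed in (30, 200):
        t = dwell_time_s(radius, speed)
        print(f"radius={radius:3d} m, speed={speed:3d} km/h -> dwell <= {t:5.1f} s")
```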

Media Habits of Sensation Seekers (감지추구자적매체습관(感知追求者的媒体习惯))

  • Blakeney, Alisha; Findley, Casey; Self, Donald R.; Ingram, Rhea; Garrett, Tony
    • Journal of Global Scholars of Marketing Science / v.20 no.2 / pp.179-187 / 2010
  • Understanding consumers' preferences and use of media types is imperative for marketing and advertising managers, especially in today's fragmented market. A clear understanding assists managers in making more effective selections of appropriate media outlets, yet individuals' choices of media type and use are based on a variety of characteristics. This paper examines one personality trait, sensation seeking, which has not appeared in the literature examining "new" media preferences and use. Sensation seeking is a personality trait defined as "the need for varied, novel, and complex sensations and experiences and the willingness to take physical and social risks for the sake of such experiences" (Zuckerman 1979). Six hypotheses were developed from a review of the literature. Particular attention was given to the Uses and Gratifications theory (Katz 1959), which explains the various reasons why people choose media types and their motivations for using the different types of media. Current theory suggests that High Sensation Seekers (HSS), due to their needs for novelty, arousal and unconventional content and imagery, would exhibit a higher frequency of use of new media. Specifically, we hypothesize that HSS will use the internet more than broadcast (H1a) or print media (H1b) and more than low (LSS) (H2a) or medium sensation seekers (MSS) (H2b). In addition, HSS have been found to be more social and to have more friends, and are therefore expected to use social networking websites such as Facebook/MySpace (H3) and chat rooms (H4) more than LSS (a) and MSS (b). Sensation seeking can manifest in a range of behaviors, including disinhibition. It is expected that alternative social networks such as Facebook/MySpace (H5) and chat rooms (H6) will be used more often by those with high levels of disinhibition than by those with low (a) or medium (b) levels. Data were collected using an online survey of participants in extreme sports. In order to reach this group, an improved version of the snowball sampling technique, the chain-referral method, was used to select respondents for this study. This method was chosen because it is regarded as effective for reaching otherwise hidden population groups (Heckathorn, 1997). The final usable sample of 1,108 respondents, which was mainly young (56.36 % under 34), male (86.1 %) and middle class (58.7 % with household incomes over USD 50,000), was consistent with previous studies on sensation seeking. Sensation seeking was captured using an existing measure, the Brief Sensation Seeking Scale (Hoyle et al., 2002). Media usage was captured by measuring the self-reported usage of various media types. Results did not support H1a and H1b: HSS did not show higher usage of alternative media such as the internet, in fact showing lower mean usage of it than of all the other media types. The media type most used by HSS was print, suggesting a revolt against the mainstream. Results support H2a and H2b: HSS are more frequent users of the internet than LSS or MSS. Further analysis revealed significant differences in the use of print media between HSS and LSS, suggesting that HSS may seek out more specialized print publications in their respective extreme sport activity. Hypotheses 3a and 3b were supported: HSS use Facebook/MySpace more frequently than either LSS or MSS. There was no significant difference in the use of chat rooms between LSS and HSS, so H4a was not supported, although the difference with MSS was significant (H4b).
Respondents with varying levels of disinhibition were expected to have different levels of use of Facebook/MySpace and chat rooms. Facebook/MySpace use was higher for those with high levels of disinhibition than for those with low or medium levels, supporting H5a and H5b. Similarly, there was support for H6b: those with high levels of disinhibition use chat rooms significantly more than those with medium levels, but not more than those with low levels (H6a). The findings are counterintuitive and give some interesting insights for managers. First, although HSS use online media more frequently than LSS or MSS, this group's use of online media is lower than its use of either print or broadcast media; the advertising executive should not place too much emphasis on online media for this important market segment. Second, social media such as Facebook/MySpace and chat rooms should be examined by managers as potential ways to reach this group. Finally, there is some implication for public policy in the higher levels of use of social media by those who are disinhibited: these individuals are more inclined to engage in socially risky behavior, which may have dire implications, e.g. exposure to internet predators or future employers. A limitation of the study is that only those who engage in extreme sports, by nature an HSS activity, are included; a broader population is therefore needed to test whether these results hold.
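
A minimal sketch of the kind of group comparison reported above, splitting respondents into LSS/MSS/HSS tertiles by scale score and comparing self-reported social-network use; the data are randomly generated placeholders, not the survey sample.

```python
# Sketch: tertile split on a sensation-seeking score and a one-way ANOVA on
# weekly social-network hours. All values are simulated, not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
bsss_score = rng.uniform(1, 5, size=300)                       # hypothetical 1-5 scale means
sns_hours  = 2 + 0.8 * bsss_score + rng.normal(0, 1, size=300)

low, high = np.quantile(bsss_score, [1 / 3, 2 / 3])
groups = [sns_hours[bsss_score <= low],
          sns_hours[(bsss_score > low) & (bsss_score <= high)],
          sns_hours[bsss_score > high]]

f_stat, p_val = stats.f_oneway(*groups)
print(f"LSS/MSS/HSS mean use: {[round(g.mean(), 2) for g in groups]}, F={f_stat:.2f}, p={p_val:.4f}")
```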

The hydrodynamic characteristics of the canvas kite - 2. The characteristics of the triangular canvas kite - (캔버스 카이트의 유체역학적 특성에 관한 연구 - 2. 삼각형 캔버스 카이트의 특성 -)

  • Bae, Bong-Seong; Bae, Jae-Hyun; An, Heui-Chun; Lee, Ju-Hee; Shin, Jung-Wook
    • Journal of the Korean Society of Fisheries and Ocean Technology / v.40 no.3 / pp.206-213 / 2004
  • As far as opening devices for fishing gear are concerned, applications of a kite are under development around the world. Typical examples are found in the opening device of the stow net on anchor and the buoyancy material of the trawl. While the stow net on anchor has proved its capability over the past 20 years, the kite for the trawl has not been widely used, since it was first introduced for commercial use without sufficient studies and thus revealed many drawbacks. Therefore, the fundamental hydrodynamics of the kite itself need to be studied further. Models of plate and canvas kites were deployed in a circulating water tank for mechanical testing. Lift and drag tests were performed while changing the shape of the models, which resulted in different aspect ratios of rectangular and trapezoidal forms. The results obtained from these approaches are summarized as follows, where the aspect ratio, attack angle, lift coefficient and maximum lift coefficient are denoted as A, B, C_L and C_Lmax, respectively: 1. For the triangular plate, C_Lmax was 1.26-1.32 with A≤1 and 38°≤B≤42°, and when A≥1.5 and 20°≤B≤50°, C_L was around 0.85. For the inverted triangular plate, C_Lmax was 1.46-1.56 with A≤1 and 36°≤B≤38°, and when A≥1.5 and 22°≤B≤26°, C_Lmax was 1.05-1.21. For the triangular kite, C_Lmax was 1.67-1.77 with A≤1 and 46°≤B≤48°, and when A≥1.5 and 20°≤B≤50°, C_L was around 1.10. For the inverted triangular kite, C_Lmax was 1.44-1.68 with A≤1 and 28°≤B≤32°, and when A≥1.5 and 18°≤B≤24°, C_Lmax was 1.03-1.18. 2. For the model with A=1/2, an increase in B caused an increase in C_L until C_L reached its maximum, after which C_L decreased very gradually or remained unchanged. For the model with A=2/3, the tendency of C_L was similar to that of the model with A=1/2. For the model with A=1, an increase in B caused an increase in C_L until C_L reached its maximum, and the tendency of C_L did not change dramatically. For the model with A=1.5, C_L as a function of B varied only slightly, staying at 0.75-1.22 over 20°≤B≤50°. For the model with A=2, the tendency of C_L as a function of B was almost the same as in the triangular model, with no considerable change over 20°≤B≤50°. 3. The C_L of the inverted models reached its maximum rapidly as B increased and then decreased gradually, whereas that of the other models decreased dramatically. 4. The action point of the dynamic pressure was close to the rear of the model at small attack angles and close to the front of the model at large attack angles. 5. There was a camber vertex at the position where the fluid pressure was generated; the triangular canvas had a larger camber vertex value at higher aspect ratios, while the inverted triangular canvas showed the opposite tendency. 6. All canvas kites had a larger camber ratio at higher aspect ratios; the triangular canvas had a larger camber ratio at higher attack angles, while the inverted triangular canvas showed the opposite tendency.
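
A minimal sketch of how the lift coefficient values reported above can be obtained from circulating-water-tank measurements, using the standard definitions C_L = 2L/(ρV²S) and C_D = 2D/(ρV²S); the forces, flow speed, and model area below are hypothetical, not the study's measurements.

```python
# Sketch: convert measured lift/drag forces on a kite model into dimensionless
# coefficients. All input numbers are hypothetical placeholders.

RHO_WATER = 1000.0  # kg/m^3, fresh water (assumed)

def force_coefficient(force_n: float, flow_speed_ms: float, area_m2: float) -> float:
    """C = 2 F / (rho * V^2 * S)."""
    return 2.0 * force_n / (RHO_WATER * flow_speed_ms ** 2 * area_m2)

# Hypothetical measurement: 0.6 m/s flow, 0.04 m^2 model, 12 N lift, 4 N drag.
cl = force_coefficient(12.0, 0.6, 0.04)
cd = force_coefficient(4.0, 0.6, 0.04)
print(f"C_L = {cl:.2f}, C_D = {cd:.2f}, L/D = {cl / cd:.2f}")
```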

Development of Practical Problem-Based Home Economics Teaching·Learning Process Plans by Blended Learning Strategy - Focusing on a Unit 'the Youth and Consumer Life' - (Blended Learning(BL) 전략을 활용한 실천적 문제 중심 가정과 교수 학습 과정안 개발 - '청소년과 소비생활' 단원을 중심으로 -)

  • Lee, Jin-Hee; Chae, Jung-Hyun
    • Journal of Korean Home Economics Education Association / v.20 no.4 / pp.19-42 / 2008
  • The purpose of this study was to develop practical problem-based home economics teaching·learning process plans for the unit 'the youth and consumer life' of middle school eighth-grade Technology and Home Economics by applying a blended learning (BL) strategy. Following the ADDIE instructional design model, this study was conducted in the following procedure: analysis, design/development, implementation, and evaluation. In the design and development stage, the selected unit was converted into a practical problem-based unit, and practical problem-based teaching·learning process plans were designed in detail using the BL strategy. An online study room for practical problem-based home economics instruction grounded in the BL strategy was prepared using Edunet (http://community.edunet4u.net/~consumer2). Eight-session lesson plans were mapped out, and study aids for students and materials for teachers were prepared. In the implementation stage, the first-session teaching plans, which dealt with the minor question 'what preparations should be made to become a wise consumer', were used to teach 115 eighth graders attending three different middle schools, two located in provincial areas and the other in the city of Daejeon. The experimental teaching was implemented for two weeks in the following procedure: preliminary program, pre-online learning, main instruction and post-online learning. The preliminary program was carried out in one classroom session, and pre-online learning was provided before the main instruction was given in one classroom session. After the main instruction was completed, post-online learning was offered. In the evaluation stage, a survey was conducted on all the learners and teachers to find out their opinions and suggestions.

Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

  • Park, Hyun-Jung; Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.19-43 / 2014
  • The Pareto principle, also known as the 80-20 rule, states that roughly 80 % of the effects come from 20 % of the causes for many events, including natural phenomena. It has been recognized as a golden rule in business, with wide application of such findings as 20 percent of customers accounting for 80 percent of total sales. On the other hand, the Long Tail theory, pointing out that "the trivial many" produce more value than "the vital few," has gained popularity in recent times with a tremendous reduction of distribution and inventory costs through the development of ICT (Information and Communication Technology). This study started with a view to illuminating how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration. The importance of virtual knowledge collaboration is soaring in this era of globalization and virtualization, which transcends geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and to resolve the social dilemma caused by the fact that rational individuals are likely to consume rather than contribute knowledge. Knowledge collaboration can be defined as the creation of knowledge not only by sharing knowledge, but also by transforming and integrating such knowledge. From this perspective of knowledge collaboration, the relative distribution of knowledge sharing among participants can count as much as the absolute amounts of individual knowledge sharing. In particular, whether a greater contribution by the upper 20 percent of participants in knowledge sharing enhances the efficiency of overall knowledge collaboration is an issue of interest. This study deals with the effect of this kind of knowledge sharing distribution on the efficiency of knowledge collaboration and extends the analysis to reflect task characteristics. All analyses were conducted based on actual data instead of self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of the editors of 2,978 English Wikipedia featured articles, which are the best quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of knowledge contributions made by the upper 20 percent of participants to the total number of knowledge contributions made by all participants of an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents the inequality of income among a group of people, was applied to reveal the effect of inequality of knowledge contribution. Hypotheses were set up based on the assumption that a higher ratio of knowledge contribution by more highly motivated participants will lead to higher collaboration efficiency, but that if the ratio gets too high, the collaboration efficiency will deteriorate because overall informational diversity is threatened and the knowledge contribution of less motivated participants is discouraged. Cox regression models were formulated for each of the focal variables (Pareto ratio and Gini coefficient) with seven control variables, such as the number of editors involved in an article, the average time between successive edits of an article, and the number of sections a featured article has. The dependent variable of the Cox models is the time from article initiation to promotion to featured-article level, indicating the efficiency of knowledge collaboration.
To examine whether the effects of the focal variables vary depending on the characteristics of a group task, we classified the 2,978 featured articles into two categories, academic and non-academic; academic articles cite at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal. We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The analysis results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, this curvilinear effect of the Pareto ratio and the inequality of knowledge sharing on collaboration efficiency is more pronounced for the more academic tasks in an online community.
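
A minimal sketch of the two focal variables defined above, computed for one article's editors; the edit counts are hypothetical and this is not the authors' data pipeline.

```python
# Sketch: Pareto ratio (share of contributions by the top 20 % of editors)
# and Gini coefficient of the per-editor edit-count distribution.
import numpy as np

def pareto_ratio(edit_counts, top_share=0.2):
    counts = np.sort(np.asarray(edit_counts))[::-1]          # descending
    k = max(1, int(np.ceil(top_share * len(counts))))
    return counts[:k].sum() / counts.sum()

def gini(edit_counts):
    x = np.sort(np.asarray(edit_counts, dtype=float))        # ascending
    n = len(x)
    cum = np.cumsum(x)
    # Standard form: G = (n + 1 - 2 * sum(cumulative) / total) / n
    return (n + 1 - 2 * cum.sum() / cum[-1]) / n

edits = [120, 45, 30, 8, 5, 3, 2, 1, 1, 1]  # hypothetical per-editor edit counts
print(f"Pareto ratio = {pareto_ratio(edits):.2f}, Gini = {gini(edits):.2f}")
```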

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.69-94 / 2017
  • Recently, increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration rate of smart devices are producing a large amount of data. As a result, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis have been continuously increasing. This means that big data analysis will become more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each demander of analysis. However, growing interest in big data analysis has stimulated computer programming education and the development of many programs for data analysis. Accordingly, the entry barriers to big data analysis are gradually lowering and data analysis technology is spreading; as a result, big data analysis is expected to be performed by the demanders of analysis themselves. Along with this, interest in various kinds of unstructured data is continually increasing; in particular, a lot of attention is focused on using text data. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis have been utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Many text mining techniques are utilized in this field for various research purposes; topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large set of documents, identifies the documents that correspond to each issue, and provides the identified documents as a cluster. It is evaluated as a very useful technique in that it reflects the semantic elements of the documents. Traditional topic modeling is based on the distribution of key terms across the entire document set; thus, it is essential to analyze the entire set at once to identify the topic of each document. This makes the analysis time-consuming when topic modeling is applied to a large number of documents. In addition, it has a scalability problem: the processing time increases exponentially with the number of documents analyzed. This problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling. This means dividing a large number of documents into sub-units and deriving topics by repeatedly applying topic modeling to each unit. This method can be used for topic modeling on a large number of documents with limited system resources and can improve the processing speed of topic modeling. It can also significantly reduce analysis time and cost because the documents can be analyzed in each location or place without first being combined. However, despite these advantages, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire document set is unclear; local topics can be identified in each unit, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology needs to be established. That is to say, assuming that the global topics are the ideal answer, the deviation of the local topics from the global topics needs to be measured.
Because of these difficulties, this approach has not been studied sufficiently compared with other topic modeling studies. In this paper, we propose a topic modeling approach that solves the above two problems. First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping local topics to RGS topics. Along with this, we verify the accuracy of the proposed methodology by checking whether each document is assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirmed that the proposed methodology can provide results similar to topic modeling on the entire document set, and we also propose a reasonable method for comparing the results of the two methods.
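
A minimal sketch of the divide-and-conquer idea described above, assuming scikit-learn's LDA and cosine similarity of topic-word distributions for the local-to-global mapping; the corpus, the split into local sets, and the choice of delegate documents are all placeholders, not the paper's implementation.

```python
# Sketch: fit LDA on each local set and on a reduced "global" set of delegate
# documents, then map every local topic to its most similar global topic.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

corpus = ["stock market price rises", "election vote and government policy",
          "market investors and trading", "new government policy announced",
          "quarterly earnings lift the stock market", "parliament debates the vote"]

# One shared vocabulary so local and global topic-word vectors are comparable.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(corpus)

local_sets = [X[:3], X[3:]]        # naive split into two local sets (placeholder)
reduced_global = X[[0, 1]]         # delegate documents (hypothetical choice)

def fit_topic_word(matrix, n_topics):
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(matrix)
    # Row-normalize to obtain per-topic word distributions.
    return lda.components_ / lda.components_.sum(axis=1, keepdims=True)

global_topics = fit_topic_word(reduced_global, n_topics=2)

for i, local in enumerate(local_sets):
    local_topics = fit_topic_word(local, n_topics=2)
    mapping = cosine_similarity(local_topics, global_topics).argmax(axis=1)
    print(f"local set {i}: local topic -> global topic mapping {mapping.tolist()}")
```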