• Title/Summary/Keyword: C언어 (C language)


A Study on the Kindergarten Teacher's Experience in the Child Violence (아동폭력에 대한 유치원 교사의 경험에 관한 연구)

  • Seo, Young-Min;Shin, Nam-Joo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.20 no.9
    • /
    • pp.362-371
    • /
    • 2019
  • The purpose of this study is to examine kindergarten teachers' experiences of child violence, to identify the field's needs for preventive education in early childhood, and to provide basic data on child violence. To this end, nine teachers were interviewed in depth. From the study results, first, the violence-related child behaviors that commonly occur in kindergartens include physical assault, aggression, verbal violence, threats and intimidation, and bullying. Second, teachers usually used direct intervention methods in cases of child violence, but found it difficult to intervene properly given their many tasks and high teacher-to-child ratios; they recognized the need for lower teacher-to-child ratios and a lighter workload per class. Third, teachers were aware of the need for violence-prevention education targeting young children, and suggested the following as appropriate approaches, beginning with immediate interaction after a problem behavior occurs: large-group activities, concrete multimedia educational materials, and parent education. Fourth, teachers were concerned that, in implementing violence-prevention education, problem behaviors might be learned and imitated through the education itself. Accordingly, this study proposed the development of various teaching methods applicable in early childhood education settings, focusing on the types of violence-related problem behaviors that occur in kindergartens.

Development of an Input File Preparation Tool for Offline Coupling of DNDC and DSSAT Models (DNDC 지역별 구동을 위한 입력자료 생성 도구 개발)

  • Hyun, Shinwoo;Hwang, Woosung;You, Heejin;Kim, Kwang Soo
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.23 no.1
    • /
    • pp.68-81
    • /
    • 2021
  • The agricultural ecosystem is one of the major sources of greenhouse gas (GHG) emissions. In order to search for climate change adaptation options that mitigate GHG emissions while maintaining crop yield, it is advantageous to integrate multiple models at a high spatial resolution. The objective of this study was to develop a tool to support integrated assessment of climate change impact by coupling the DSSAT model and the DNDC model. The DNDC Regional Input File Tool (DRIFT) was developed to prepare input data for the regional mode of the DNDC model using the input and output data of the DSSAT model. In a case study, GHG emissions under climate change conditions were simulated using input data prepared by the DRIFT. The time needed to prepare the input data increased with the number of grid points. Most steps of the process took a relatively short time, while converting the daily flood-depth data of the DSSAT model into the flood periods of the DNDC model took the bulk of it. Processing a large amount of data would still require a long time, which could be reduced by parallelizing some calculation processes. Extending the DRIFT to other models would help reduce the time required to prepare their input data.
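
The costliest conversion step described above — turning a daily flood-depth series into contiguous flood periods — can be sketched roughly as follows. This is a minimal illustration only: the function name, the list-based format, and the depth threshold are assumptions, not the actual DRIFT interface.

```python
# Hypothetical sketch: convert a daily flood-depth series (one value per day,
# as a DSSAT run might produce) into contiguous flood periods (start_day,
# end_day) of the kind a regional DNDC input expects. Formats are illustrative.

def flood_periods(daily_depth, threshold=0.0):
    """Return (start_day, end_day) tuples (1-indexed) for runs of days
    whose flood depth exceeds `threshold`."""
    periods = []
    start = None
    for day, depth in enumerate(daily_depth, start=1):
        if depth > threshold and start is None:
            start = day                       # flood begins
        elif depth <= threshold and start is not None:
            periods.append((start, day - 1))  # flood ended the day before
            start = None
    if start is not None:                     # still flooded on the last day
        periods.append((start, len(daily_depth)))
    return periods

depths = [0, 0, 5, 7, 3, 0, 0, 2, 4, 0]      # cm of standing water per day
print(flood_periods(depths))                 # [(3, 5), (8, 9)]
```

Since each day is visited once, the scan itself is linear; the per-grid-point independence of this step is also what would make it a natural candidate for the parallelization the abstract mentions.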

Research Trends in Adaptation of Young Children from Multicultural Families : A Review of Articles (2006-2017) (다문화가정 유아의 적응에 대한 연구동향 분석: 국내 학술지를 중심으로)

  • Yoon, Gab-Jung;Son, Hwan-Hee
    • Korean Journal of Culture and Arts Education Studies
    • /
    • v.13 no.3
    • /
    • pp.81-109
    • /
    • 2018
  • This study aims to identify research trends in the adaptation of young children from multicultural families, in order to give researchers insight into research topics and the cultural-adaptation orientation of these children. The subjects of this study were 41 articles retrieved through a paper search system with the keywords 'young children from multicultural families', 'adaptation', and 'life'. Using analysis frameworks, this study analysed the research themes, methods, subjects, and perspectives on cultural adaptation in adaptation research on young children from multicultural families published in peer-reviewed journals from 2006 to 2017. The results showed that: (a) the most common research topic continues to be the adaptation conditions of young children from multicultural families; (b) among research methods, quantitative research has been used most frequently, followed by qualitative methods, mixed methods, and literature review; (c) cultural adaptation has been studied mainly from the perspective of social-cultural adaptation, the developmental importance of adaptation in early childhood, and problem-based research. The findings imply that further research should focus on psychological adaptation and the formation of classroom community, beyond personal and structural adaptation in preschools.

A Study for the Norms of Audiometric Tests in Koreans (정상한국인의 청력검사치에 관한 연구)

  • 오혜경;서장수;이근해;김희남;김영명;권영화;서옥기
    • Proceedings of the KOR-BRONCHOESO Conference
    • /
    • 1981.05a
    • /
    • pp.38.1-38
    • /
    • 1981
  • Currently in the otologic field there are various special audiometric examinations, such as tone decay, SISI, and impedance audiometry, but only a few scattered studies have been done in these fields in Korea. The purpose of this paper is to establish norms for various special audiometric tests. We therefore performed the special audiometric tests on 100 male medical students in good physical condition, and the following results were obtained. 1. All cases showed PB scores over 90%. The mean and its 2 S.D. were 98 ± 4.9% in the right ear and 97 ± 5.6% in the left ear. 2. The mean and its 2 S.D. of the MCL (most comfortable level) were 45 ± 15.4 dB in the right ear and 46 ± 17.9 dB in the left ear, and its range was 12 ± 12.2 dB in the right ear and 13 ± 12.6 dB in the left ear. 3. The mean and its 2 S.D. of the UCL (uncomfortable level) were 102 ± 7.9 dB in both ears, and about half of the cases showed a UCL over 106 dB. 4. In 95% of cases, the SISI (short increment sensitivity index) at 1,000 Hz and 4,000 Hz was below 45% at both frequencies in the right ear, and below 55% and 75%, respectively, in the left ear. 5. In 95% of cases, tone decay at 2,000 Hz and 4,000 Hz was below 10 dB in both ears. 6. The difference between SRT and PTA (speech reception threshold minus pure tone average) was 4 ± 9.2 dB in the right ear and 4 ± 10.0 dB in the left ear. 7. The dynamic range (uncomfortable level minus speech reception threshold) was 98 ± 13.5 dB in the right ear and 99 ± 13.5 dB in the left ear. In about half of the cases we had trouble estimating the dynamic range, because the UCL could not be measured with our conventional audiometry. 8. The results of the impedance audiometric tests were as follows. A. In the tympanogram, all cases were of type A, with one exception of type B in the left ear. The mean and its 2 S.D. of the peak level were 22.8 ± 32.94 mm H2O in the right ear and 23.9 ± 29.81 mm H2O in the left ear. B. The mean and its 2 S.D. of the compliance were 0.6 ± 0.54 cc in the right ear and 0.6 ± 0.53 cc in the left ear. C. The results of the stapedial reflex were: a. The mean and its 2 S.D. of the contralateral stapedial reflex at 500 Hz, 1,000 Hz, 2,000 Hz, and 4,000 Hz were 99 ± 17.7 dB, 87 ± 14.4 dB, 79 ± 13.7 dB, and 77 ± 20.0 dB in the right ear and 99 ± 15.9 dB, 88 ± 13.9 dB, 79 ± 13.7 dB, and 77 ± 21.3 dB in the left ear. Depending on the tested frequency, the stapedial reflex was not elicited in 6 cases in the right ear and 11 cases in the left ear. b. The mean and its 2 S.D. of the ipsilateral stapedial reflex at 1,000 Hz and 2,000 Hz were 89 ± 16.3 dB and 82 ± 15.9 dB in the right ear and 89 ± 18.0 dB and 83 ± 18.9 dB in the left ear. Depending on the tested frequency, the stapedial reflex was not elicited in 1 case in the right ear and 2 cases in the left ear. 9. In the Eustachian tube function test with impedance audiometry, malfunction was found in 21 cases in the right ear, depending on the tested pressure, where the range of the tympanogram peak level was 14 ± 26.9 mm H2O (tested pressure: +250 mm H2O) and 8 ± 21.9 mm H2O (tested pressure: -250 mm H2O), and in 11 cases in the left ear, where the range of the peak level was 12 ± 22.5 mm H2O (tested pressure: +250 mm H2O) and 9 ± 17.3 mm H2O (tested pressure: -250 mm H2O).
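
The norms above are all reported as "mean and its 2 S.D." — a minimal sketch of computing such a normative range from sample measurements is shown below. The values are invented stand-ins, not the study's data.

```python
# Compute a "mean ± 2 S.D." normative range from a list of measurements.
# The sample values below are hypothetical, not data from the paper.
import statistics

def norm_range(values):
    mean = statistics.fmean(values)
    two_sd = 2 * statistics.pstdev(values)   # population S.D., doubled
    return mean, two_sd

pb_scores = [98, 97, 99, 96, 100, 98, 97, 99]   # hypothetical PB scores (%)
mean, two_sd = norm_range(pb_scores)
print(f"{mean:.0f} ± {two_sd:.1f}%")
```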


Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.1-17
    • /
    • 2017
  • The deep learning framework is software designed to help develop deep learning models. Two of its most important functions are automatic differentiation and utilization of GPUs. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, the Microsoft Cognitive Toolkit (CNTK), was released under an open-source license, following Google's Tensorflow a year earlier. The early deep learning frameworks were developed mainly for research at universities. Beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given this trend, Google and other companies are expected to keep investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in a sense a predecessor of the other two. The most common and important function of a deep learning framework is the ability to perform automatic differentiation. Essentially all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. A partial derivative can be attached to each edge of a computational graph, and with these partial derivatives the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. Regarding the convenience of coding, the order is CNTK, Tensorflow, and Theano. This criterion is based simply on code length; the learning curve and ease of learning to code were not the main concern.
By this criterion, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, weight variables and biases must be defined explicitly. The reason CNTK and Tensorflow are easier to implement with is that they provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives flexibility, and with low-level coding such as in Theano one can implement and test any new deep learning model or search method one can think of. Our assessment of the execution speed of the frameworks is that there is no meaningful difference. In our experiment, the execution speeds of Theano and Tensorflow were very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept identical: the CNTK code had to be run on a PC without a GPU, where code executes as much as 50 times slower than with a GPU. We nevertheless concluded that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate them. Important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNNs, RNNs, and DBNs. For users implementing large-scale deep learning models, support for multiple GPUs or multiple servers is also important, and for users still learning deep learning, the availability of sufficient examples and references matters as well.
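
The mechanism the abstract describes — partial derivatives on the edges of a computational graph, accumulated by the chain rule — can be shown in a few lines of plain Python. This is a didactic sketch of reverse-mode automatic differentiation, not the API of Theano, Tensorflow, or CNTK.

```python
# Minimal reverse-mode automatic differentiation over a computational graph:
# each node records, for every incoming edge, the local partial derivative of
# its value with respect to that parent; backward() then pushes gradients from
# the output to every variable via the chain rule.

class Node:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # list of (parent_node, local_partial) pairs
        self.grad = 0.0

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Node(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Node(self.value * other.value,
                    [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)    # chain rule along each edge

x, y = Node(3.0), Node(4.0)
z = x * y + x                                # z = x*y + x
z.backward()
print(x.grad, y.grad)                        # dz/dx = y + 1 = 5.0, dz/dy = x = 3.0
```

A production framework does the same accumulation over a topologically sorted graph (so shared subexpressions are visited once) and on tensors rather than scalars, but the chain-rule bookkeeping is identical.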

Evaluation of Image Qualities for a Digital X-ray Imaging System Based on Gd2O2S(Tb) Scintillator and Photosensor Array by Using a Monte Carlo Imaging Simulation Code (몬테카를로 영상모의실험 코드를 이용한 Gd2O2S(Tb) 섬광체 및 광센서 어레이 기반 디지털 X-선 영상시스템의 화질평가)

  • Jung, Man-Hee;Jung, In-Bum;Park, Ju-Hee;Oh, Ji-Eun;Cho, Hyo-Sung;Han, Bong-Soo;Kim, Sin;Lee, Bong-Soo;Kim, Ho-Kyung
    • Journal of Biomedical Engineering Research
    • /
    • v.25 no.4
    • /
    • pp.253-259
    • /
    • 2004
  • In this study, we developed a Monte Carlo imaging simulation code, written in the Visual C++ programming language, for the design optimization of a digital X-ray imaging system. As the digital X-ray imaging system we considered a Gd2O2S(Tb) scintillator and a photosensor array, and included a 2D parallel grid to simulate general test conditions. The interactions between the X-ray beams and the system structure, the behavior of the light generated in the scintillator, and its collection in the photosensor array were simulated using the Monte Carlo method. The scintillator thickness and the photosensor array pitch were assumed to be 66 μm and 48 μm, respectively, and the pixel format was set to 256 × 256. Using the code, we obtained X-ray images under various simulation conditions and evaluated their image quality through calculations of the SNR (signal-to-noise ratio), MTF (modulation transfer function), NPS (noise power spectrum), and DQE (detective quantum efficiency). The image simulation code developed in this study can be applied effectively to a variety of digital X-ray imaging systems for design optimization over various design parameters.
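
Of the quality metrics listed above, the SNR is the simplest to state concretely: mean signal over noise standard deviation in a uniformly exposed region. The sketch below illustrates that calculation; the pixel values are made-up stand-ins for Monte Carlo detector output, not data from the paper.

```python
# SNR of a (simulated) uniform-exposure image region, computed as the mean
# pixel value divided by the standard deviation of the pixel noise.
import math

def snr(pixels):
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    return mean / math.sqrt(var)

region = [100, 102, 98, 101, 99, 100, 103, 97]   # hypothetical pixel values
print(round(snr(region), 1))
```

The MTF, NPS, and DQE require Fourier analysis of edge or flat-field images and are correspondingly more involved, but they build on the same signal/noise decomposition.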

TREATMENT OF ECHOLALIA IN CHILDREN WITH AUTISM (자폐아동의 반향어 치료)

  • Chung, Bo-In
    • Journal of the Korean Academy of Child and Adolescent Psychiatry
    • /
    • v.9 no.1
    • /
    • pp.47-53
    • /
    • 1998
  • The purpose of this study was to investigate the possibility of providing familiar tasks as a treatment option to decrease echolalia. Two comparisons were made: one between a 'conversation condition' and a 'task performance condition', and the other between a 'task performance alone condition' and a 'task performance with contingency of reinforcement condition'. Two echolalic children aged 12 and 13 years participated in the experiment, and an A-B-A-B-BC-B-BC design was used, in which A was conversation only, B was task performance, and C was task performance with contingency of reinforcement. In the A condition, the therapist asked the child easy, short questions; in the B condition, the child was given familiar tasks with short instructions; and in the BC condition, each child was reinforced for his performance on the given tasks, with immediate echolalia controlled by holding his hands down for 5 seconds. Delayed echolalia was recorded without any intervention. Each child went through all 7 treatment conditions, with 15-minute sessions, 5 to 6 sessions per day, for 2 weeks. The mean (immediate) echolalia rates across the 7 treatment conditions were, for child 1, A (99%)-B (65%)-A (95%)-B (10%)-BC (7%)-B (6%)-BC (7%), and for child 2, A (67%)-B (62%)-A (63%)-B (35%)-BC (8%)-B (4%)-BC (0%). As to the generalization of the treatment effect from immediate echolalia to the untreated delayed echolalia, a drastic reduction of delayed echolalia was shown in child 2: A (35%)-B (57%)-A (56%)-B (40%)-BC (8%)-B (5%)-BC (9%). Child 1's delayed echolalia was negligible (mean = 3%) pre- and post-treatment.
In conclusion, the results of this study clearly show that providing a task-performance setting with familiar tasks can be helpful in minimizing echolalic responses, and that, combined with the contingency-of-reinforcement technique, it can not only reduce echolalic behavior to a negligible degree but also help the echolalic child generalize the treatment effect to overall language improvement.


A Tool Box to Evaluate the Phased Array Coil Performance Using Retrospective 3D Coil Modeling (3차원 코일 모델링을 통해 위상배열코일 성능을 평가하기 위한 프로그램)

  • Perez, Marlon;Hernandez, Daniel;Michel, Eric;Cho, Min Hyoung;Lee, Soo Yeol
    • Investigative Magnetic Resonance Imaging
    • /
    • v.18 no.2
    • /
    • pp.107-119
    • /
    • 2014
  • Purpose: To efficiently evaluate phased array coil performance using a software tool box with which the sensitivity of every coil element can be compared visually between real experiments and EM simulations. Materials and Methods: We developed a C++- and MATLAB-based software tool called the Phased Array Coil Evaluator (PACE). PACE has the following functions: building 3D models of the coil elements, importing FDTD simulation results, and visualizing the coil sensitivity of each element in both the ordinary Cartesian coordinate system and a relative coil-position coordinate system. To build a 3D model of the phased array coil, we used an electromagnetic 3D tracker in stylus form. After building the 3D model, we imported it into the FDTD electromagnetic field simulation tool. Results: An accurate comparison between the simulated and experimentally measured coil sensitivities was made on the tool box platform through fine matching of simulation and experiment with the aid of the 3D tracker. In the simulation and experiment, we used a 36-channel helmet-style phased array coil. For the 3D MRI data acquisition with a spoiled gradient echo sequence, we used a uniform cylindrical phantom with the same geometry as the one in the FDTD simulation. In the tool box, we can conveniently choose the coil element of interest and compare the coil sensitivities of the phased array coil element by element. Conclusion: We expect the tool box to be of great use for developing phased array coils of new geometries, or for the periodic maintenance of phased array coils, in a more accurate and consistent manner.

EEG based Cognitive Load Measurement for e-learning Application (이러닝 적용을 위한 뇌파기반 인지부하 측정)

  • Kim, Jun;Song, Ki-Sang
    • Korean Journal of Cognitive Science
    • /
    • v.20 no.2
    • /
    • pp.125-154
    • /
    • 2009
  • This paper describes the possibility of using human physiological data, especially brain-wave activity, to detect cognitive overload, a phenomenon that may occur while a learner uses an e-learning system. If cognitive overload is found to be detectable, it may become possible to provide appropriate feedback to learners. To test this possibility, cognitive load levels were measured by EEG (electroencephalogram) while subjects engaged in cognitive activities. The task given to the learners was a computerized listening-and-recall test designed to measure working memory capacity, with four progressively increasing degrees of difficulty. Eight male, right-handed university students were asked to answer 4 sets of tests, each taking from 61 to 198 seconds. Correction ratios were then calculated and the EEG results analyzed. First, the correction ratios of the listening-and-recall tests were 84.5%, 90.6%, 62.5%, and 56.3%, respectively, and the effect of the degree of difficulty was statistically significant; the data highlighted learner cognitive overload on test levels 3 and 4, the higher-level tests. Second, the SEF-95% value was greater on tests 3 and 4 than on tests 1 and 2, indicating that tests 3 and 4 imposed a greater cognitive load on participants. Third, the relative power of the EEG gamma wave increased rapidly on the 3rd and 4th tests, and the signals from channels F3, F4, C4, F7, and F8 showed statistical significance. These five channels surround the brain's Broca area, and a brain-mapping analysis showed that F8, in the right half of the brain, was activated in proportion to the degree of difficulty. Lastly, cross-correlation analysis showed a greater increase in synchronization on tests 3 and 4 than on tests 1 and 2. From these findings, it is possible to measure cognitive load level and cognitive overload via brain activity, which may provide a timely feedback scheme for e-learning systems.
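
The "relative power of the gamma wave" used above is the fraction of total EEG spectral power falling in the gamma band. A stdlib-only sketch of that measure follows; the 30-50 Hz band definition and the synthetic signal are assumptions for illustration (a real pipeline would use recorded EEG and an FFT-based PSD estimate).

```python
# Relative band power: power in a frequency band divided by total power,
# computed here with a naive DFT over a short synthetic signal.
import cmath, math

def band_power_ratio(signal, fs, lo, hi):
    n = len(signal)
    total = band = 0.0
    for k in range(1, n // 2):               # skip DC, positive freqs only
        f = k * fs / n
        X = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        p = abs(X) ** 2
        total += p
        if lo <= f <= hi:
            band += p
    return band / total

fs, n = 128, 128                             # 1 second sampled at 128 Hz
sig = [math.sin(2 * math.pi * 10 * t / fs) +
       0.5 * math.sin(2 * math.pi * 40 * t / fs)
       for t in range(n)]                    # 10 Hz "alpha" + 40 Hz "gamma"
ratio = band_power_ratio(sig, fs, 30, 50)
print(round(ratio, 2))                       # the 40 Hz component carries ~20%
```

With amplitudes 1 and 0.5, the power split is 1 : 0.25, so the gamma fraction is 0.25/1.25 = 0.2, which the DFT recovers exactly since both tones fall on integer frequency bins.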


Development of a Dose Calibration Program for Various Dosimetry Protocols in High Energy Photon Beams (고 에너지 광자선의 표준측정법에 대한 선량 교정 프로그램 개발)

  • Shin Dong Oh;Park Sung Yong;Ji Young Hoon;Lee Chang Geon;Suh Tae Suk;Kwon Soo IL;Ahn Hee Kyung;Kang Jin Oh;Hong Seong Eon
    • Radiation Oncology Journal
    • /
    • v.20 no.4
    • /
    • pp.381-390
    • /
    • 2002
  • Purpose: To develop dose calibration programs for the IAEA TRS-277 and AAPM TG-21 protocols, based on the air kerma calibration factor (or the cavity-gas calibration factor), as well as for the IAEA TRS-398 and AAPM TG-51 protocols, based on the absorbed-dose-to-water calibration factor, so as to avoid the errors associated with these calculation procedures. Materials and Methods: Currently, the most widely used dosimetry protocols for high energy photon beams are based on the air kerma calibration factor, following the IAEA TRS-277 and the AAPM TG-21. These, however, have a somewhat complex formalism, and uncertainties in the physical quantities limit improvements in accuracy. Recently, the IAEA and the AAPM published protocols based on the absorbed-dose-to-water calibration factor, the IAEA TRS-398 and the AAPM TG-51. The formalism and physical parameters were strictly applied in the four dose calibration programs. The tables and graphs of physical data and the information on ion chambers were digitized for incorporation into a database. The programs were developed to be user-friendly, in the Visual C++ language, for ease of use in a Windows environment, according to the recommendations of each protocol. Results: The dose calibration programs for high energy photon beams, developed for the four protocols, allow the input of information about the dosimetry system, the characteristics of the beam quality, the measurement conditions, and the dosimetry results, minimizing inter-user variations and errors during the calculation procedure. It was also possible to compare the absorbed-dose-to-water data of the four protocols at a single reference point. Conclusion: Since this program expresses the physical parameter tables, graphs, and ion chamber information in numerical, database form, errors associated with the procedure and with differences between users could be avoided. It was possible to analyze and compare the major differences between the dosimetry protocols, since the program was designed to be user-friendly and to calculate the correction factors and absorbed dose accurately. It is expected that users can make accurate dose calculations for high energy photon beams by selecting and performing the appropriate dosimetry protocol.
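
The absorbed-dose-to-water formalism that the TG-51-style programs implement has the general shape Dw = M_corrected × kQ × N_Dw, with the raw reading first corrected for influence quantities. The sketch below shows only that multiplicative structure; every coefficient value is an illustrative placeholder, not data from any protocol table, and real correction factors must be taken from the protocol itself.

```python
# Simplified multiplicative structure of an absorbed-dose-to-water calculation
# (AAPM TG-51 style). All numeric values are illustrative placeholders.

def corrected_reading(m_raw, p_tp, p_pol, p_ion, p_elec):
    """Raw electrometer reading corrected for temperature/pressure,
    polarity, ion recombination, and electrometer calibration."""
    return m_raw * p_tp * p_pol * p_ion * p_elec

def dose_to_water(m_corr, k_q, n_dw):
    """Absorbed dose to water at the reference point, in Gy."""
    return m_corr * k_q * n_dw

m = corrected_reading(m_raw=20.05e-9,        # C, raw charge reading
                      p_tp=1.012,            # temperature-pressure correction
                      p_pol=1.001,           # polarity correction
                      p_ion=1.003,           # ion recombination correction
                      p_elec=1.000)          # electrometer correction
dw = dose_to_water(m, k_q=0.992,             # beam-quality conversion factor
                   n_dw=5.4e7)               # Gy/C calibration coefficient
print(round(dw, 3))
```

Part of what the paper's programs add on top of this arithmetic is exactly what the sketch omits: looking up each correction factor and calibration coefficient from the digitized protocol tables for the user's chamber and beam quality, which is where inter-user errors arise in manual calculation.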