• Title/Summary/Keyword: language of instruction


Early Null Pointer Check using Predication in Java Just-In-Time Compilation (자바 적시 컴파일에서의 조건 수행을 이용한 비어 있는 포인터의 조기검사)

  • Lee Sanggyu;Choi Hyug-Kyu;Moon Soo-Mook
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.7
    • /
    • pp.683-692
    • /
    • 2005
  • The Java specification states that every access to an object must be checked at runtime to determine whether the object refers to null. Since Java is an object-oriented language, object accesses are frequent enough for null pointer checks to affect performance significantly. To reduce this degradation, there have been attempts to remove redundant null pointer checks; for example, in a Java environment where a just-in-time (JIT) compiler is used, the JIT compiler removes redundant null pointer check code via code analysis. This paper proposes a technique that removes additional null pointer check code that previous JIT compilation techniques could not, via an early null pointer check using an architectural feature called predication. Null pointer check code generally consists of two instructions: a compare and a branch. Our idea is to move the compare instruction, usually located just before a use of an object, to the point right after the object is defined, so that the total number of compare instructions is reduced. This reduces dynamic and static compare instructions by 3.21% and 1.98%, respectively, on the SPECjvm98 benchmarks, compared to code already optimized by previous null pointer check elimination techniques. Its performance impact on an Itanium machine is an improvement of 0.32%.
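The compare-motion idea in this abstract can be illustrated with a toy sketch (in Python, not the paper's JIT machinery): if the null-check compare is emitted once where an object is defined instead of before each use, the number of compares drops whenever an object is used more than once. The trace format below is invented for illustration.

```python
# Count null-check compares under two placement policies over an
# invented instruction trace of ("def", obj) and ("use", obj) events.

def compares_at_each_use(trace):
    # Baseline: one compare per object use.
    return sum(1 for kind, _ in trace if kind == "use")

def compares_at_definition(trace):
    # Early check via predication: one compare per object definition.
    return sum(1 for kind, _ in trace if kind == "def")

trace = [("def", "a"), ("use", "a"), ("use", "a"), ("use", "a"),
         ("def", "b"), ("use", "b")]
print(compares_at_each_use(trace))    # one compare per use
print(compares_at_definition(trace))  # one compare per definition
```

With three uses of `a` and one of `b`, checking at definitions needs 2 compares instead of 4, mirroring the reduction in compare instructions the paper measures.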

A Study on the Development of ICT Instruction-Learning Plans for Korean Language Arts at a University of Education (교육대학에서 국어과 ICT 교수 학습 과정안 개발 연구)

  • Im, Cheon-Taek;Joo, Kang-Sik;Lee, Jae-Mu
    • Proceedings of the Korean Association of Information Education Conference
    • /
    • 2004.01a
    • /
    • pp.458-467
    • /
    • 2004
  • This research and development project aimed to help pre-service elementary school teachers understand ICT-based teaching and learning in Korean language arts, develop lesson plans, and use them in class, and also to identify problems that may arise when ICT-based Korean language education is reflected in the curriculum and to explore improvements. The work covered: a review of literature and materials on ICT use in Korean language education; an analysis of materials and classroom practice; an analysis of the learning content and objectives of the 180 class sessions of 6th-grade Korean (semesters 1 and 2); the development of ICT teaching-learning plans for those 180 sessions; the revision, review, and uploading of the developed plans to a server; and an exploration of how to apply the development process to the curriculum. The development was carried out in integration with the fall 2003 course 'Korean Language Teaching Materials Research (2 credits).' The course provided the models needed for plan development, instruction in writing plans, analysis of ICT-based lessons, and the computer literacy training needed for development, with plan development assigned as a semester project. The project ultimately produced 180 sessions of 6th-grade Korean ICT teaching-learning plans and one research report. Its outcomes are as follows. First, it improved pre-service teachers' understanding of ICT teaching-learning models and plans for Korean language arts and their ability to write such plans. Second, the report's account of the development and use of ICT materials for Korean language arts, and of the lecture planning and actual guidance process for pre-service teachers' plan development, can inform the design of future courses, research, and curricula. Third, the materials developed by the pre-service teachers can be used by in-service teachers in running Korean language classes. The problems revealed by this project, and suggested improvements for better research and instruction, are as follows. First, given the pre-service teachers' characteristics and the scope of development, the development period was too short; increasing course hours, opening additional related courses, or integrating existing courses should be considered. Second, considering the pre-service teachers' level, requiring each student to develop one session's plan including various 'supplementary materials' imposed a considerable burden; alternatives include planning the development course on a one-year basis, assigning one session per 4-5 students, or developing only the ICT teaching-learning plan itself while omitting worksheets, teacher reference materials, and self-study materials. Third, good results cannot be expected when plan development is assigned to students who lack computer literacy; given the current curriculum organization of universities of education, running computer courses and subject education courses as integrated courses is worth considering. Fourth, exploration of adapting or varying the models in light of actual teaching-learning variables remained insufficient, and the models were applied too rigidly; rather than forcing lessons into a few model frameworks, the frameworks should be opened up so that existing models can be varied in diverse ways.


Comparison of image quality with nuclide-specific uniformity correction maps in myocardial perfusion SPECT (심근 관류 SPECT에서 핵종에 따른 Uniformity correction map 설정을 통한 영상의 질 비교)

  • Song, Jae hyuk;Kim, Kyeong Sik;Lee, Dong Hoon;Kim, Sung Hwan;Park, Jang Won
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.19 no.2
    • /
    • pp.87-92
    • /
    • 2015
  • Purpose: In myocardial perfusion SPECT, the patient is injected with $^{201}Tl$, but the uniformity correction map used by the SPECT system is acquired with $^{99m}Tc$. We therefore compared the image quality obtained with a $^{99m}Tc$ uniformity correction map against that obtained with a $^{201}Tl$ uniformity correction map. Materials and Methods: A phantom study was performed. Data were acquired under Asan Medical Center daily QC conditions with a flood phantom containing $^{201}Tl$ at 21.3 kBq/mL; after post-processing, CFOV integral uniformity (I.U) and differential uniformity (D.U) were analyzed. Data were also acquired with a Jaszczak ECT phantom containing $^{201}Tl$ at 33.4 kBq/mL, following the American College of Radiology accreditation program instructions; after post-processing, spatial resolution, integral uniformity (I.U), coefficient of variation (C.V), and contrast were analyzed with an Interactive Data Language (IDL) program. Results: In the flood phantom test, the $^{99m}Tc$ uniformity correction map yielded a flood I.U of 3.6% and D.U of 3.0%, while the $^{201}Tl$ map yielded a flood I.U of 3.8% and D.U of 2.1%: the flood I.U worsened by about 5%, but the D.U improved by about 30%. In the Jaszczak ECT phantom test, the $^{99m}Tc$ map gave a SPECT I.U of 13.99%, C.V of 4.89%, and contrast of 0.69, while the $^{201}Tl$ map gave 11.37%, 4.79%, and 0.78, improvements of about 18%, 2%, and 13%, respectively. Spatial resolution showed no significant change. Conclusion: In the flood phantom test, flood I.U worsened but flood D.U improved, so it is uncertain whether image quality improves. On the other hand, SPECT I.U, C.V, and contrast improved by about 18%, 2%, and 13% in the Jaszczak ECT phantom test. This study is limited in that it could not take all variables into account and used only two phantoms. These results suggest that matching the correction map to the administered nuclide may help physicians interpret nuclear medicine images, and that image quality in other nuclear medicine examinations may likewise be improved by using uniformity correction maps for radionuclides other than $^{99m}Tc$ and $^{201}Tl$.
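The uniformity figures quoted in this abstract follow the standard NEMA-style definitions; a minimal sketch of how I.U and D.U can be computed from a flood image (the 5-pixel sliding window for D.U follows the common NEMA convention, and the pixel matrix below is invented for illustration, not the study's data):

```python
# Integral uniformity: (max - min) / (max + min) * 100 over all pixels.
def integral_uniformity(pixels):
    flat = [v for row in pixels for v in row]
    return 100.0 * (max(flat) - min(flat)) / (max(flat) + min(flat))

# Differential uniformity: worst (max - min) / (max + min) * 100 over
# 5-pixel sliding windows along every row and every column.
def differential_uniformity(pixels, window=5):
    worst = 0.0
    lines = [list(row) for row in pixels] + [list(col) for col in zip(*pixels)]
    for line in lines:
        for i in range(len(line) - window + 1):
            seg = line[i:i + window]
            worst = max(worst, 100.0 * (max(seg) - min(seg)) / (max(seg) + min(seg)))
    return worst

flood = [
    [100, 102, 101,  99, 100],
    [101, 103, 100,  98,  99],
    [100, 101, 102, 100, 101],
    [ 99, 100, 101, 102, 100],
    [100,  99, 100, 101, 100],
]
print(round(integral_uniformity(flood), 2))
print(round(differential_uniformity(flood), 2))
```

Note that D.U can never exceed I.U by this definition, which is why the abstract's observation that D.U improved while I.U worsened is meaningful: the two measures are sensitive to different spatial scales of non-uniformity.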


Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware, pursued through two distinct approaches. The first approach uses application-specific integrated circuit (ASIC) technology: the fuzzy inference method is implemented directly in silicon. The second approach, which is in its preliminary stage, uses a more conventional microprocessor architecture; here, we apply a quantitative technique used by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested, both in a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation and executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock and achieved approximately 80,000 Fuzzy Logical Inferences Per Second (FLIPS). It stored and executed 16 fuzzy if-then rules; since it was designed as a proof-of-concept prototype, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock, has a 3-stage pipeline, and initiates a new inference every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule-set memory (RAM); on-chip fuzzification by a table lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the format: IF A and B and C and D THEN Do E, and THEN Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the simpler format IF A and B THEN Do E using the same datapath; with this format the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The fuzzy logic system board places the fuzzy chip in a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast but limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach, developed by RISC designers. In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules from the introduction of specialized instructions, i.e., min and max instructions; the minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we could achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions, which would also speed up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single program or a small set of programs, so specializing an embedded processor for fuzzy control is very effective. Table I shows the measured inference speed of a MIPS R3000 microprocessor, a fictitious MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip; the software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6,000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements; the second row is the time required for a single inference; the last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes: an ASIC approach is extremely expensive. It is therefore an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

TABLE I: INFERENCE TIME BY 51 RULES

                     MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
  6000 inferences    125 s                  49 s                        0.0038 s
  1 inference        20.8 ms                8.2 ms                      6.4 us
  FLIPS              48                     122                         156,250
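The min/max-dominated inner loop that the proposed RISC instructions would accelerate can be sketched in Python (a simplified, single-antecedent version of the max-min/Mamdani scheme described above; the membership arrays and rules are invented for illustration):

```python
# Discretized membership functions over a 3-point universe (illustrative).
low  = [1.0, 0.5, 0.0]
high = [0.0, 0.5, 1.0]
slow = [1.0, 0.5, 0.0]
fast = [0.0, 0.5, 1.0]

def mamdani_infer(rules, input_idx):
    """Max-min inference: min gives both the firing strength and the
    Mamdani implication; max aggregates the clipped rule outputs."""
    out = [0.0] * len(rules[0][1])
    for antecedent, consequent in rules:
        strength = antecedent[input_idx]  # degree to which the rule fires
        out = [max(o, min(strength, c)) for o, c in zip(out, consequent)]
    return out

def centroid(mu):
    """Centroid defuzzification, as performed on-chip in the UNC/MCNC design."""
    total = sum(mu)
    return sum(i * m for i, m in enumerate(mu)) / total if total else 0.0

rules = [(low, slow), (high, fast)]
agg = mamdani_infer(rules, 1)  # input falls at the middle of the universe
print(agg, centroid(agg))
```

Nearly every operation in the loop is a `min` or `max` over membership arrays, which is why dedicated min/max instructions yield the speed-up reported for the fictitious R3000.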


The Implication and Issues of Landscape Design Education through National Exhibition of Korean Landscape Architecture (대한민국환경조경대전을 통해 본 조경 설계 교육의 쟁점과 시사점)

  • Choi, Jung-Mean;Yun, Su-jin
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.44 no.2
    • /
    • pp.108-121
    • /
    • 2016
  • The purpose of this study is to explore the issues and implications for landscape design education in Korea by analyzing the National Exhibition of Korean Landscape Architecture (NEKLA). The study analyzed the suggested topics, the selected sites, and the commentary that appeared in NEKLA's award-winning works published from 2004 to 2014. The results are as follows. First, NEKLA's topics are not merely competition guidelines but are related to exploring new areas and roles for Korean landscape architecture. Second, the most frequently addressed sites were 'industrial heritage and regeneration space' and 'green infrastructure'; in more recent years, a larger variety of sites was addressed. Third, site locations are concentrated in metropolitan areas, and awards for and participation by non-metropolitan universities were very low. Fourth, seven criteria applicable to a general landscape design competition were identified: 'newness of the concept (idea),' 'logicality of the design process,' 'selection of site and fidelity of analysis (interpretation),' 'presentation and completeness of the master plan,' 'consistency with the theme,' 'linkage of concepts and results,' and 'feasibility.' These evaluation criteria, by increasing the sophistication of the design language, offer useful suggestions for design education methods. The implications are as follows. First, training to derive innovative ideas is essential, but excessive concept-oriented education should be avoided. Second, design education should include instruction on how to define the problems related to the site. Third, more emphasis on design logic is essential to transform innovative concepts into actual results. Fourth, 'slick images' unrelated to the design should be suppressed. Fifth, practice is needed in addressing the topics within the design process of education. Sixth, 'feasibility' and 'creative thinking' should be recognized as reciprocally related and mutually helpful. This study quotes the commentary directly to minimize the researchers' subjectivity and to trace the issues of contemporary landscape architecture more directly and vividly. The study is thus a record awaiting further review as meta-criticism; in this regard, it will serve the landscape architects of the coming generation as a historical record for reviewing current thinking on landscape theory and design.

TREATMENT OF ECHOLALIA IN CHILDREN WITH AUTISM (자폐아동의 반향어 치료)

  • Chung, Bo-In
    • Journal of the Korean Academy of Child and Adolescent Psychiatry
    • /
    • v.9 no.1
    • /
    • pp.47-53
    • /
    • 1998
  • The purpose of this study was to investigate the possibility of providing familiar tasks as a treatment option to decrease echolalia. Two comparisons were made: one between a 'conversation condition' and a 'task performance condition,' and the other between a 'task performance alone condition' and a 'task performance with contingency of reinforcement condition.' Two echolalic children aged 12 and 13 years participated in the experiment, and an A-B-A-B-BC-B-BC design was used, in which A was conversation only, B was task performance, and C was task performance with contingency of reinforcement. In the A condition, the therapist asked the child easy, short questions; in the B condition, the child was given familiar tasks with short instructions; and in the BC condition, each child was reinforced for his performance on the given tasks, with immediate echolalia controlled by holding his hands down for 5 seconds. Delayed echolalia was recorded without any intervention. Each child was put through each of the 7 treatment conditions. With 15-minute sessions, each child went through 5 to 6 sessions per day for 2 weeks. The mean immediate echolalia rates across the 7 treatment conditions were, for child 1, A(99%)-B(65%)-A(95%)-B(10%)-BC(7%)-B(6%)-BC(7%) and, for child 2, A(67%)-B(62%)-A(63%)-B(35%)-BC(8%)-B(4%)-BC(0%). As to the generalization of the treatment effect from immediate echolalia to the untreated delayed echolalia, child 2 showed a drastic reduction of delayed echolalia: A(35%)-B(57%)-A(56%)-B(40%)-BC(8%)-B(5%)-BC(9%). Child 1's delayed echolalia was negligible (mean = 3%) pre- and post-treatment. In conclusion, the results of this study clearly show that providing a task performance setting with familiar tasks can be helpful in minimizing echolalic responses, and that, combined with the contingency of reinforcement technique, it can not only reduce echolalic behavior to a negligible degree but also help the echolalic child generalize the treatment effect to the child's overall language improvement.


Analyzing Different Contexts for Energy Terms through Text Mining of Online Science News Articles (온라인 과학 기사 텍스트 마이닝을 통해 분석한 에너지 용어 사용의 맥락)

  • Oh, Chi Yeong;Kang, Nam-Hwa
    • Journal of Science Education
    • /
    • v.45 no.3
    • /
    • pp.292-303
    • /
    • 2021
  • This study identifies the terms frequently used together with energy in online science news articles and the topics of the news reports, in order to find out how the term energy is used in everyday life and to draw implications for science curriculum and instruction about energy. A total of 2,171 online news articles in the science category, published by 11 major newspaper companies in Korea over one year from March 1, 2018, were selected using energy as a search term. As a result of natural language processing, a total of 51,224 sentences consisting of 507,901 words were compiled for analysis. Using the R program, term frequency analysis, semantic network analysis, and structural topic modeling were performed. The results show that the terms with exceptionally high frequencies were technology, research, and development, reflecting the characteristics of news articles that report new findings. On the other hand, terms used more than once per two articles were industry-related terms (industry, product, system, production, market) and terms readily expected to relate to energy, such as 'electricity' and 'environment.' Meanwhile, 'sun,' 'heat,' 'temperature,' and 'power generation,' which are frequently used in energy-related science classes, also appeared among the most frequent terms. From the network analysis, two clusters were found: one of terms related to industry and technology, and one of terms related to basic science and research. From the analysis of terms paired with energy, terms related to the use of energy, such as 'energy efficiency,' 'energy saving,' and 'energy consumption,' were the most frequent. Out of 16 topics found, four contexts of energy were drawn: 'high-tech industry,' 'industry,' 'basic science,' and 'environment and health.' The results suggest that introducing the concept of energy degradation as a starting point for energy classes can be effective.
It also shows the need to introduce high-tech industries or the context of environment and health into energy learning.
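The term-frequency and co-occurrence steps this abstract describes (performed in R on Korean text in the study) can be sketched in Python with invented English stand-in sentences:

```python
# Count term frequencies and sentence-level co-occurrence pairs, the two
# inputs to the study's frequency and semantic network analyses.
from collections import Counter
from itertools import combinations

sentences = [
    "solar energy technology research",
    "energy efficiency industry product",
    "energy saving environment health",
]
tokens = [s.split() for s in sentences]
freq = Counter(w for sent in tokens for w in sent)
# Sorted tuples make (a, b) and (b, a) count as the same pair.
pairs = Counter(tuple(sorted(p))
                for sent in tokens
                for p in combinations(set(sent), 2))

print(freq.most_common(1))            # the search term dominates
print(pairs[("efficiency", "energy")])
```

In the real study the co-occurrence counts become edge weights in the semantic network, and the document-term matrix feeds the structural topic model.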

A Study on the Expression Class through Story-telling about Interracially Married Women's Homeland Cultures (결혼이주여성의 자기문화 스토리텔링 활용 표현교육 사례 연구)

  • Kim, Youngsoon;Heo, Sook;Nguyen, Tuan Anh
    • Cross-Cultural Studies
    • /
    • v.25
    • /
    • pp.695-721
    • /
    • 2011
  • The purpose of this study is to present a case of expression education that uses story-telling about the home cultures of interracially married women, so that women studying Korean culture can do so with pride in their homelands. The research is also intended to help the diverse members of Korean society understand interracially married women more deeply and gain a greater appreciation of cultural diversity, and it is expected that the women themselves will learn more about Korean culture as well. In the first and second steps of this study, process-based instruction methods such as brainstorming, questioning, discussing, investigating, and teacher questioning were used to generate ideas about the learners' home countries, along with a model answer suggested by the teacher and free-writing. As the core of the process-based writing activity, the second step focused on revising and correcting: through reviewing their own writing, receiving feedback from the teacher, and interviews tracing the difficulties of writing to cultural and linguistic backgrounds, the learners came to see their writing errors and mistakes as natural, which affected their learning abilities positively. In the third step, which focused on speaking activities, the teacher provided feedback after checking the learners' common errors and habits in speaking; meanwhile, taking on the role of evaluator helped the learners build self-esteem. In interviews after the fourth step's activities, the teacher complimented each learner's improvement while pointing out some errors. Afterwards, the learners showed more positive attitudes toward learning and understanding Korean culture and toward establishing their identities, and they showed interest in one another's cultures through story-telling, confirming the popularity and interactivity that story-telling carries; they also demonstrated their participation by expressing a willingness to revise their stories. After the activities of the fifth step, there were relatively positive changes in establishing identity and cultivating pride in the learners' homeland cultures, and a strong will to become story-tellers of their homeland cultures emerged. This study examined the effectiveness of expression education using story-telling about the local cultures of interracially married women's homelands, focusing on popularity, interaction, and participation. Through this kind of education, interracially married women can not only cultivate an understanding of Korean culture but also establish their identities and improve their Korean language skills. Finally, further studies on education programs that train interracially married women as story-tellers of their homelands' local cultures are expected.

Improved Original Entry Point Detection Method Based on PinDemonium (PinDemonium 기반 Original Entry Point 탐지 방법 개선)

  • Kim, Gyeong Min;Park, Yong Su
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.7 no.6
    • /
    • pp.155-164
    • /
    • 2018
  • Many malicious programs are compressed or encrypted using various commercial packers to prevent reverse engineering, so malicious-code analysts must decompress or decrypt them first. The OEP (Original Entry Point) is the address of the first instruction executed after the encrypted or compressed executable file has been restored to its original binary state. Several unpackers, including PinDemonium, execute the packed file, keep track of the addresses until the OEP appears, and search for the OEP among those addresses. However, instead of finding the single exact OEP, unpackers provide a relatively large set of OEP candidates, and sometimes the OEP is missing from the candidates; in other words, existing unpackers have difficulty finding the correct OEP. We have developed a new tool that provides smaller OEP candidate sets by adding two methods based on properties of the OEP, namely the property that the function call sequence and parameters are the same between the packed program and the original program. The first method is based on function calls. Programs written in C/C++ are compiled into binary code, and compiler-specific system functions are added to the compiled program. After examining these functions, we added a method to PinDemonium that detects the completion of unpacking by matching the patterns of system functions called in packed and unpacked programs. The second method is based on parameters, which include not only user-entered inputs but also system inputs. We added a method to PinDemonium that finds the OEP using the system parameters of a particular function in stack memory. OEP detection experiments were performed on sample programs packed by 16 commercial packers. On average, our tool reduces the OEP candidate set by more than 40% compared to PinDemonium, excluding two commercial packers that could not be executed due to anti-debugging techniques.
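The first proposed method, matching the sequence of compiler-added system functions, can be sketched as follows (a hedged Python illustration, not the PinDemonium implementation; the API names, addresses, and traces below are invented):

```python
# Keep as OEP candidates only those addresses where the observed API-call
# sequence begins to match the call pattern known from unpacked programs.

def oep_candidates(trace, known_prologue):
    # trace: list of (address, api_name) pairs recorded during execution.
    names = [name for _, name in trace]
    k = len(known_prologue)
    return [trace[i][0] for i in range(len(trace) - k + 1)
            if names[i:i + k] == known_prologue]

trace = [
    (0x401000, "VirtualAlloc"),     # unpacking stub at work
    (0x401050, "memcpy"),
    (0x402000, "GetCommandLineA"),  # CRT-style startup after unpacking
    (0x402010, "GetStartupInfoA"),
    (0x402020, "main"),
]
prologue = ["GetCommandLineA", "GetStartupInfoA"]
print([hex(a) for a in oep_candidates(trace, prologue)])
```

Here only the address where the startup-function pattern begins survives as a candidate, which is the sense in which pattern matching shrinks the candidate set relative to keeping every written-then-executed address.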

Perceptions of Information Technology Competencies among Gifted and Non-gifted High School Students (영재와 평재 고등학생의 IT 역량에 대한 인식)

  • Shin, Min;Ahn, Doehee
    • Journal of Gifted/Talented Education
    • /
    • v.25 no.2
    • /
    • pp.339-358
    • /
    • 2015
  • This study examined perceptions of information technology (IT) competencies among gifted and non-gifted students (i.e., information science high school students and technical high school students). Of the 370 students surveyed from three high schools (a gifted academy, an information science high school, and a technical high school) in three metropolitan cities in Korea, 351 completed and returned the questionnaires, a response rate of 94.86%. The students recognized IT professional competence as the most important factor when recruiting IT employees, and considered practice-oriented education the most needed for improving their IT skills. In addition, for gifted academy students and information science high school students, the most important sub-factor of IT core competencies was basic software skills, whereas technical high school students responded that network and security capabilities were the most needed. Finally, the most appropriate training courses for enhancing IT competencies were perceived differently across the groups: gifted academy students responded that 'algorithms' was the most needed course, information science high school students pointed to 'data structures' and 'computer architecture,' and technical high school students to a 'programming language' course. Results are discussed in relation to IT corporate and school settings.