• Title/Summary/Keyword: computer files


The instrument-centering ability of four Nickel-Titanium instruments in simulated curved root canals (만곡된 레진 모형 근관에서 4종의 엔진 구동형 니켈-티타늄 기구의 근관 중심율 유지 능력)

  • Ku, Jae-Hoon;Chang, Hoon-Sang;Chang, Seok-Woo;Cho, Hwan-Hee;Bae, Ji-Myung;Min, Kyung-San
    • Restorative Dentistry and Endodontics / v.31 no.2 / pp.113-118 / 2006
  • The aim of this study was to evaluate the ability of the newly marketed NRT instruments to maintain the original root canal configuration and curvature during preparation, in comparison with three existing instruments, in simulated root canals. Simulated canals in resin blocks were prepared with ProFile, K3, ProTaper, and NRT instruments (n = 10 canals in each case). Pre- and post-operative images were recorded, and assessment of canal shape was completed with a computer image analysis program. The data were analyzed statistically using one-way ANOVA followed by Duncan's test. The ability of instruments to remain centered in prepared canals at the 1- and 2-mm levels was significantly better in the ProFile group than in the other groups (p < 0.05). The change of centering ratio in the NRT group at the 5-mm level was significantly greater than in the ProFile group, and at the 6- and 7-mm levels greater than in all other groups (p < 0.05). Although the NRT system was comparable to the other systems in its ability to maintain the canal configuration of the apical portion, it was more influenced by the mid-root curvature because of the stainless-steel files it uses for coronal preflaring.
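The abstract does not give the exact centering-ratio formula the image analysis program used; a common convention in such studies compares the resin removed toward the inner and outer canal walls at each measured level, so that a value of 1.0 means a perfectly centered preparation. A minimal sketch under that assumption:

```python
# Hypothetical centering-ratio computation; the study's exact formula is
# not stated in the abstract. Assumes the common min/max definition over
# the resin removed toward each canal wall at one level.

def centering_ratio(inner_removed_mm: float, outer_removed_mm: float) -> float:
    """Return a centering ratio in [0, 1]; 1.0 means perfectly centered."""
    if inner_removed_mm == 0 and outer_removed_mm == 0:
        return 1.0  # nothing removed on either side: instrument stayed centered
    smaller = min(inner_removed_mm, outer_removed_mm)
    larger = max(inner_removed_mm, outer_removed_mm)
    return smaller / larger

# Example: 0.12 mm removed toward the inner wall, 0.20 mm toward the outer
print(centering_ratio(0.12, 0.20))  # 0.6
```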

Using Web as CAI in the Classroom of Information Age (정보화시대를 대비한 CAI로서의 Web 활용)

  • Lee, Kwang-Hi
    • Journal of The Korean Association of Information Education / v.1 no.1 / pp.38-48 / 1997
  • This study is an attempt to present a usage of the Web as CAI in the classroom and to give a direction to future education in the face of the information age. Characteristics of the information society, the current curriculum, and educational and teacher education are first analyzed in this article. The features of the Internet and the Web are then summarized to present the benefits of their use in the classroom as a CAI tool. The literature shows several characteristics of the information society: a technological computer society, the provision and sharing of information, a multifunctional society, participative democracy, autonomy, and the value of time. Problem solving and the four Cs (e.g., cooperation, copying, communication, creativity) are newly needed in this learning environment. The Internet is a large collection of networks tied together so that users can share vast resources and a wealth of information, and it offers a key to successful, efficient individual study across time and space. The Web increases academic achievement, creativity, problem solving, cognitive thinking, and learner motivation through easy access to documents available on the Internet; files containing programs, pictures, movies, and sounds from FTP sites; Usenet newsgroups; WAIS searches; computers accessible through telnet; hypertext documents; Java applets and other multimedia browser enhancements; and much more. The Web browser will be our primary tool in searching for information on the Internet in this information age.


SAF: A Scheme of Swap Space Allocation in File Systems to Reduce Disk Seek Time (SAF: 디스크 탐색 시간 향상을 위한 파일 시스템 내 스왑 공간 할당 기법)

  • Ahn, Woo-Hyun;Kim, Bo-Gon;Kim, Byung-Gyu;Oh, Jae-Won
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.6 / pp.1289-1300 / 2011
  • In recent high-performance computer systems, users simultaneously execute programs that need large memory and programs that access files intensively. Such large memory requirements make virtual memory systems access swap spaces on disk, while intensive file accesses require file systems to access file system partitions on disk. Executing the two kinds of programs at once therefore incurs frequent long disk seeks between the swap space and the file system partition. To solve this problem, this paper proposes a new scheme called SAF that creates several swap spaces inside a file system partition, where pages to be paged out are stored. When a page is paged out, the scheme stores it in the swap space closest to the disk location of the most recently accessed file. The chosen swap space in the file system partition is closer to that disk location than the traditional swap space, so the scheme reduces the long seek time otherwise spent moving to the traditional swap space when paging out. An experiment with our scheme implemented in FreeBSD 6.2 shows that SAF reduces the execution time of several benchmarks by 14% to 42% compared with unmodified FreeBSD.
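As a rough illustration of the placement policy described above (not the kernel implementation, which lives inside FreeBSD's paging path), the following sketch picks the swap space whose disk block offset is nearest to the most recently accessed file; the data structures are assumptions:

```python
# Minimal sketch of SAF's nearest-swap-space selection, under assumed
# structures: each swap space is represented by its starting disk block
# offset, and the file system reports the block of the last-accessed file.

from bisect import bisect_left

swap_space_offsets = [1_000_000, 5_000_000, 9_000_000, 13_000_000]  # sorted block offsets

def choose_swap_space(last_file_block: int) -> int:
    """Pick the swap space nearest to the most recently accessed file's
    disk block, minimizing the seek distance for the page-out."""
    i = bisect_left(swap_space_offsets, last_file_block)
    candidates = swap_space_offsets[max(0, i - 1):i + 1]
    return min(candidates, key=lambda off: abs(off - last_file_block))

# A page-out near block 8,700,000 goes to the swap space at 9,000,000
print(choose_swap_space(8_700_000))
```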

A Text Mining-based Intrusion Log Recommendation in Digital Forensics (디지털 포렌식에서 텍스트 마이닝 기반 침입 흔적 로그 추천)

  • Ko, Sujeong
    • KIPS Transactions on Computer and Communication Systems / v.2 no.6 / pp.279-290 / 2013
  • In digital forensics, log files are stored as large data sets for the purpose of tracing users' past behaviors. It is difficult for investigators to analyze such large log data manually without clues. In this paper, we propose a text mining technique for extracting intrusion logs from a large log set in order to recommend reliable evidence to investigators. In the training stage, the proposed method extracts intrusion association words from a training log set using the Apriori algorithm after preprocessing, and the probability of intrusion for each association word is computed by combining support and confidence. Robinson's method of computing confidences for filtering spam mail is applied to extracting intrusion logs. As a result, an association-word knowledge base is constructed that includes the intrusion-probability weights of the association words to improve accuracy. In the test stage, the probability that a log in a test log set is an intrusion log and the probability that it is a normal log are computed with Fisher's inverse chi-square classification algorithm based on the association-word knowledge base, and intrusion logs are extracted by combining the two results. The intrusion logs are then recommended to investigators. The proposed method uses a training method that clearly analyzes the meaning of data in an unstructured large log set, which compensates for the loss of accuracy caused by data ambiguity. In addition, because the proposed method recommends intrusion logs using Fisher's inverse chi-square classification algorithm, it reduces the false positive (FP) rate and the laborious effort of extracting evidence manually.
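For readers unfamiliar with the combination step, here is a minimal sketch of Fisher's inverse chi-square scoring in the Robinson style the paper adapts; the per-word intrusion probabilities are hypothetical stand-ins for values that, in the paper, come from the Apriori-built knowledge base:

```python
# Fisher/Robinson-style combining of per-word probabilities into one score.
# The probabilities below are made up; the paper derives them from support
# and confidence of association words.

import math

def inv_chi_square(chi2: float, df: int) -> float:
    """P(chi-squared with df degrees of freedom exceeds chi2); df even."""
    m = chi2 / 2.0
    term = prob = math.exp(-m)
    for i in range(1, df // 2):
        term *= m / i
        prob += term
    return min(prob, 1.0)

def fisher_combine(probs: list[float]) -> float:
    """Fisher's method: -2 * sum(ln p) ~ chi-squared with 2n dof."""
    chi2 = -2.0 * sum(math.log(p) for p in probs)
    return inv_chi_square(chi2, 2 * len(probs))

def intrusion_indicator(word_probs: list[float]) -> float:
    """Robinson-style indicator in [0, 1]; near 1 suggests an intrusion log."""
    s = 1.0 - fisher_combine([1.0 - p for p in word_probs])  # intrusion evidence
    h = 1.0 - fisher_combine(word_probs)                     # normality evidence
    return (s + (1.0 - h)) / 2.0

# Hypothetical intrusion probabilities for words found in one log line
print(intrusion_indicator([0.97, 0.91, 0.88, 0.40]))  # ~0.97: recommend as evidence
```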

An Agroclimatic Data Retrieval and Analysis System for Microcomputer Users(CLIDAS) (퍼스컴을 이용한 농업기후자료 검색 및 분석시스템)

  • 윤진일;김영찬
    • KOREAN JOURNAL OF CROP SCIENCE / v.38 no.3 / pp.253-263 / 1993
  • Climatological information has not been fully utilized by agricultural research and extension workers in Korea, due mainly to the inaccessibility of the archived climate data. This study was initiated to improve access to historical climate data gathered from 72 weather stations of the Korea Meteorological Administration for agricultural applications, using a microcomputer-based methodology. The climatological elements include daily values of average, maximum, and minimum temperature, relative humidity, average and maximum wind speed, wind direction, evaporation, precipitation, sunshine duration, and cloud amount. The menu-driven, user-friendly data retrieval system (CLIDAS) provides quick summaries of the data on a daily, weekly, and monthly basis, as well as selective retrieval of weather records meeting user-specified critical conditions. Growing degree days and potential evapotranspiration are also derived from the daily climatic data. Data reports can be output to the computer screen, a printer, or ASCII data files. CLIDAS can run on any IBM-compatible machine with a Video Graphics Array card. To run the system with the whole database, more than 50 MB of hard disk space is required. Thanks to its module-structured design, the system can easily be upgraded with further functions.
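As an illustration of the derived quantities mentioned above, here is a minimal sketch of a growing-degree-day computation using the common averaging formula; the base temperature of 10 degrees C is an assumption, since the abstract does not specify one:

```python
# Daily growing degree days via the standard averaging formula.
# Base temperature is a crop-dependent assumption (10 C is typical).

def growing_degree_days(tmax_c: float, tmin_c: float, base_c: float = 10.0) -> float:
    """GDD for one day: mean of max/min temperature minus base, floored at 0."""
    return max(0.0, (tmax_c + tmin_c) / 2.0 - base_c)

# Accumulate over a season from daily (tmax, tmin) records
season = [(27.0, 15.0), (24.0, 13.0), (19.0, 9.0)]
print(sum(growing_degree_days(tmax, tmin) for tmax, tmin in season))  # 23.5
```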


Design of Heterogeneous Content Linkage Method by Analyzing Genbank (Genbank 분석을 통한 이종의 콘텐츠 연계 방안 설계)

  • Ahn, Bu-Young;Lee, Myung-Sun;Kim, Ji-Young;Oh, Chung-Shick
    • The Journal of the Korea Contents Association / v.10 no.6 / pp.49-54 / 2010
  • As information on gene sequences is not only diverse but also extremely large in volume, high-performance computers and information technology techniques are required to build and analyze gene sequence databases. This has given rise to bioinformatics, a field of research in which computers are used to collect, manage, store, evaluate, and analyze biological data. In line with this continued development of bioinformatics, the Korea Institute of Science and Technology Information (KISTI) has built an information-technology-based infrastructure for biological information and provided it to bioscience researchers. This paper analyzes the reference fields of Genbank, the gene database most frequently used by researchers worldwide among life-information databases, and proposes a method of interfacing it with NDSL, the integrated science and technology information service provided by KISTI. To this end, after collecting Genbank data from the NCBI FTP site, we rebuilt the database by separating the Genbank text files into basic gene data and reference data, and generated new tables by extracting the paper and patent information from the Genbank reference fields. We then suggest a method of linking these tables with the paper and patent databases operated by KISTI.
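A minimal sketch of the kind of separation described, splitting a GenBank flat file into basic gene rows and reference (paper/patent) rows. Biopython's GenBank parser is used here as a stand-in; the paper's actual pipeline and table schema are not specified in the abstract, so the field choices and file name are illustrative:

```python
# Split GenBank records into basic gene data and reference data.
# Biopython and the file name "gbbct1.seq" are assumptions for illustration.

from Bio import SeqIO

basic_rows, reference_rows = [], []
for record in SeqIO.parse("gbbct1.seq", "genbank"):
    basic_rows.append({
        "accession": record.id,
        "organism": record.annotations.get("organism", ""),
        "length": len(record.seq),
    })
    for ref in record.annotations.get("references", []):
        reference_rows.append({
            "accession": record.id,
            "title": ref.title,       # paper title, if present
            "journal": ref.journal,   # journal line; patent info appears here too
            "pubmed_id": ref.pubmed_id,
        })
```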

Deriving Priorities of Competences Required for Digital Forensic Experts using AHP (AHP 방법을 활용한 디지털포렌식 전문가 역량의 우선순위 도출)

  • Yun, Haejung;Lee, Seung Yong;Lee, Choong C.
    • The Journal of Society for e-Business Studies / v.22 no.1 / pp.107-122 / 2017
  • Nowadays, digital forensic experts are not only computer experts who restore and find deleted files, but also general experts who possess various capabilities, including knowledge about processes and laws, communication skills, and ethics. However, there have been few studies about the qualifications or competencies required of digital forensic experts, despite their importance. Therefore, in this study, AHP questionnaires were distributed to digital forensic experts and analyzed to derive the priorities of competencies: the first tier consisted of knowledge, technology, and attitude, and the second tier comprised 20 items. The findings showed that the most important first-tier competency was knowledge, followed by technology and attitude, but no significant difference was found among them. Among the 20 second-tier competencies, the most important was "digital forensics equipment/tool program utilization skill," followed by "data extraction and imaging skill from storage devices." Attitude items such as "judgment," "morality," "communication skill," and "concentration" followed. The least critical was "substantive law related to actual cases." Previous studies on training and education for digital forensics experts focused on law, IT knowledge, and the use of analytic tools, while attitude-related competencies have not been given proper attention. We hope this study can provide helpful implications for designing curricula and qualifying exams to foster digital forensic experts.
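For context, the standard AHP weight derivation the study relies on computes priorities from the principal eigenvector of a pairwise comparison matrix and checks answer quality with Saaty's consistency ratio. A minimal sketch with a made-up 3x3 matrix (knowledge vs. technology vs. attitude), not the paper's data:

```python
# Standard AHP: priorities from the principal eigenvector, plus Saaty's
# consistency ratio. The pairwise judgments below are hypothetical.

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [1/2, 1.0, 2.0],
              [1/3, 1/2, 1.0]])  # hypothetical pairwise comparison matrix

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # index of the principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                 # normalized priority weights

n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)          # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index for size n
cr = ci / ri                             # CR < 0.1 is conventionally acceptable

print(weights, cr)
```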

Experiencing with Splunk, a Platform for Analyzing Machine Data, for Improving Recruitment Support Services in WorldJob+ (머신 데이터 분석용 플랫폼 스플렁크를 이용한 취업지원 서비스 개선에 관한 연구 : 월드잡플러스 사례를 중심으로)

  • Lee, Jae Deug;Rhee, MoonKi Kyle;Kim, Mi Ryang
    • Journal of Digital Convergence / v.16 no.3 / pp.201-210 / 2018
  • WorldJob+, operated by the Human Resources Development Service of Korea, provides recruitment support services to overseas companies wanting to hire talented Korean applicants and interns, and supports young job-seekers through the entire course from checking information on overseas placement to enrollment, interviews, and learning. More than 300,000 young people have registered with WorldJob+, an integrated overseas employment information network, for job placement. To innovate WorldJob+'s services for young job-seekers, Splunk, a platform for analyzing machine data, was introduced to collate and view the system log files collected from its website. Leveraging Splunk's built-in data visualization and analytical features, WorldJob+ has built custom tools to gain insight into the operation of the recruitment support service system and to increase its integrity. Use cases include descriptive and predictive analytics that match employers and job seekers based on their respective needs and profiles, and that connect job seekers with the best recruiters and employers on the market, helping them secure the best jobs fast. This paper covers the numerous ways WorldJob+ has leveraged Splunk to improve its recruitment support services.

MPEG-H 3D Audio Decoder Structure and Complexity Analysis (MPEG-H 3D 오디오 표준 복호화기 구조 및 연산량 분석)

  • Moon, Hyeongi;Park, Young-cheol;Lee, Yong Ju;Whang, Young-soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.42 no.2 / pp.432-443 / 2017
  • The primary goal of the MPEG-H 3D Audio standard is to provide immersive audio environments for high-resolution broadcasting services such as UHDTV. The standard incorporates a wide range of technologies, such as encoding/decoding technology for multi-channel, object-based, and scene-based signals, rendering technology for providing 3D audio in various playback environments, and post-processing technology. The reference software decoder of this standard combines several modules and can operate in various modes; however, because each module is an independent executable and the modules run sequentially, real-time decoding is impossible. In this paper, we build DLL libraries of the standard's core decoder, format converter, object renderer, and binaural renderer and integrate them to enable frame-based decoding. In addition, by measuring the computational complexity of each mode of the MPEG-H 3D Audio decoder, this paper provides a reference for selecting the appropriate decoding mode for various hardware platforms. The measurements show that the low-complexity profiles included in the Korean broadcasting standard have a computational complexity of 2.8 to 12.4 times that of the QMF synthesis operation when rendering to channel signals, and 4.1 to 15.3 times that of the QMF synthesis operation when rendering to binaural signals.
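A minimal sketch of what the frame-based integration amounts to: the four modules, once wrapped as libraries, are invoked in sequence for each frame rather than as separate executables. All wrapper names below are hypothetical; the reference software's real interfaces are C/C++ DLL entry points:

```python
# Hypothetical frame-based pipeline over the four wrapped modules.
# None of these wrapper classes exist in the MPEG-H reference software;
# they stand in for the DLL interfaces the paper builds.

def decode_stream(bitstream_frames, core, converter, obj_renderer, bin_renderer,
                  binaural=True):
    """Run one frame at a time through the integrated module chain."""
    for frame in bitstream_frames:
        channels, objects = core.decode(frame)          # core decoder
        channels = converter.process(channels)          # format converter
        mixed = obj_renderer.render(channels, objects)  # object renderer
        yield bin_renderer.render(mixed) if binaural else mixed
```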

Research on text mining based malware analysis technology using string information (문자열 정보를 활용한 텍스트 마이닝 기반 악성코드 분석 기술 연구)

  • Ha, Ji-hee;Lee, Tae-jin
    • Journal of Internet Computing and Services / v.21 no.1 / pp.45-55 / 2020
  • Due to the development of information and communication technology, the number of new and variant malicious codes is increasing rapidly every year, and various types of malicious code are spreading due to the development of Internet of Things and cloud computing technology. In this paper, we propose a malware analysis method based on string information that can be used regardless of the operating system environment and that captures library call information related to malicious behavior. Attackers can easily create malware from existing code or with automated authoring tools, and the generated malware operates similarly to existing malware. Since most of the strings that can be extracted from malicious code consist of information closely related to malicious behavior, we weight the data features with a text mining based method to extract effective features for malware analysis. Based on the processed data, models are constructed using various machine learning algorithms, and experiments are performed on detecting malicious status and classifying malware groups. The data have been compared and verified against files used on both the Windows and Linux operating systems. The accuracy of malware detection is about 93.5%, and the accuracy of group classification is about 90%. The proposed technique has a wide range of applications: it is relatively simple, fast, and operating-system independent, and because a single model suffices, there is no need to build a separate model for each group when classifying malware. In addition, since the string information is extracted through static analysis, it can be processed faster than analysis methods that execute the code directly.
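A minimal sketch of the pipeline described: statically extract printable strings from binaries, weight them with a text mining method (TF-IDF here, as a stand-in for the paper's exact weighting), and train a classifier; the file paths and labels are hypothetical:

```python
# String extraction + text mining weighting + classifier, as a sketch of
# the paper's pipeline. TF-IDF and RandomForest stand in for the paper's
# weighting scheme and model choices; paths/labels are made up.

import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier

def extract_strings(path: str, min_len: int = 4) -> str:
    """Pull printable ASCII strings out of a binary, like the Unix `strings` tool."""
    data = open(path, "rb").read()
    found = re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data)
    return " ".join(s.decode("ascii") for s in found)

samples = ["benign/calc.exe", "malware/sample1.bin"]   # hypothetical paths
labels = [0, 1]                                        # 0 = benign, 1 = malicious

vec = TfidfVectorizer(token_pattern=r"\S+")            # weight string tokens
X = vec.fit_transform(extract_strings(p) for p in samples)
clf = RandomForestClassifier().fit(X, labels)          # detection/classification model
```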