• Title/Summary/Keyword: programming languages

Development of a Forensic Analyzing Tool based on Cluster Information of HFS+ filesystem

  • Cho, Gyu-Sang
    • International Journal of Internet, Broadcasting and Communication / v.13 no.3 / pp.178-192 / 2021
  • File system forensics typically focuses on the contents or timestamps of files, and analysis is usually organized around files and directories. However, to recover a deleted file on disk, or to use carving techniques to find and reconnect partially missing content, the evidence must be analyzed at the cluster level. Forensic tools such as EnCase, TSK, and X-Ways provide basic facilities for obtaining information about disk clusters, but these are not their core functions. Alternatively, Sysinternals' DiskView offers a more intuitive visualization that makes it easier to obtain cluster-level information. In addition, most current tools target Windows; forensic analysis tools for macOS are scarce, and cluster analysis tools are rarer still. In this paper, we developed a tool named FACT (Forensic Analyzer based Cluster Information Tool) for analyzing the state of clusters in an HFS+ file system for digital forensics. FACT consists of three features: cluster-based analysis, B-tree-based analysis, and directory-based analysis. Cluster-based analysis is the main feature and was developed specifically for cluster analysis; the tool's cluster visualization plays a central role. FACT was written in two programming languages, C/C++ and Python: the core that analyzes the HFS+ file system was programmed in C/C++, and the visualization part is implemented with the Python Tkinter library. The features presented in this study can evolve into key forensic tools for macOS, and the additional GUI capabilities can be very useful for cluster-centric forensic analysis.
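
As an illustration of the kind of cluster visualization described above, the following is a minimal sketch using the Python Tkinter library, which the paper names as its visualization layer; the cluster states, colors, and grid size are hypothetical and are not taken from the FACT tool itself.

```python
# Minimal sketch (not the authors' FACT code) of a cluster-state map drawn with Tkinter.
# The cluster states and grid geometry below are fabricated for illustration.
import tkinter as tk

CELL = 12                     # pixel size of one cluster cell
COLORS = {"free": "white", "allocated": "steelblue", "deleted": "orange"}

def draw_cluster_map(states, columns=32):
    """Render a list of per-cluster states as a colored grid."""
    root = tk.Tk()
    root.title("Cluster map sketch")
    rows = (len(states) + columns - 1) // columns
    canvas = tk.Canvas(root, width=columns * CELL, height=rows * CELL)
    canvas.pack()
    for i, state in enumerate(states):
        x, y = (i % columns) * CELL, (i // columns) * CELL
        canvas.create_rectangle(x, y, x + CELL, y + CELL,
                                fill=COLORS.get(state, "gray"), outline="black")
    root.mainloop()

if __name__ == "__main__":
    # Fabricated example data: 256 clusters in mixed states.
    demo = ["allocated"] * 100 + ["free"] * 120 + ["deleted"] * 36
    draw_cluster_map(demo)
```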

Applying Meta-model Formalization of Part-Whole Relationship to UML: Experiment on Classification of Aggregation and Composition (UML의 부분-전체 관계에 대한 메타모델 형식화 이론의 적용: 집합연관 및 복합연관 판별 실험)

  • Kim, Taekyung
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.99-118 / 2015
  • Object-oriented programming languages have been widely adopted for developing modern information systems. The use of object-oriented (OO) programming concepts has reduced the effort of reusing pre-existing code, and OO concepts have proved useful in interpreting system requirements. In line with this, modern conceptual modeling approaches support the features of object-oriented programming. The Unified Modeling Language (UML) has become one of the de facto standards for information system designers because it provides a set of visual diagrams, comprehensive frameworks, and flexible expressions. In a modeling process, UML users need to consider relationships between classes. Based on an explicit and clear representation of classes, the conceptual model produced with UML gathers the attributes and methods necessary for guiding software engineers. In particular, identifying an association between a part class and a whole class is included in the standard grammar of UML. Representing part-whole relationships is natural in real-world domains, since many physical objects are perceived in terms of part-whole relationships, and even abstract concepts such as roles are easily identified through part-whole perception. It therefore seems that representing part-whole relationships in UML is reasonable and useful. However, the use of UML is limited by the lack of practical guidelines on how to identify a part-whole relationship and how to classify it as an aggregate or a composite association. Research on developing such procedural knowledge is meaningful and timely, because misleading perceptions of part-whole relationships are hard to filter out in initial conceptual modeling and therefore degrade system usability. The current method of identifying and classifying part-whole relationships relies mainly on linguistic expression. This simple approach is rooted in the idea that a phrase expressing has-a constructs a part-whole perception between objects: if the relationship is strong, the association is classified as a composite association; otherwise, it is an aggregate association. Admittedly, linguistic expressions contain clues to part-whole relationships, so the approach is reasonable and cost-effective in general. Nevertheless, it does not address concerns about accuracy and theoretical legitimacy, and research toward guidelines for part-whole identification and classification has not yet accumulated sufficient results to resolve this issue. The purpose of this study is to provide step-by-step guidelines for identifying and classifying part-whole relationships in the context of UML use. Based on the theoretical work on Meta-model Formalization, self-check forms that help conceptual modelers work on part-whole classes were developed. To evaluate the performance of the suggested idea, an experimental approach was adopted. The findings show that UML users obtain better results with the guidelines based on Meta-model Formalization than with the natural-language classification scheme conventionally recommended by UML theorists. This study contributes to the stream of research on part-whole relationships by extending the applicability of Meta-model Formalization. Compared with traditional approaches that aim to establish criteria for evaluating the result of conceptual modeling, this study expands the scope to the modeling process itself. Traditional theories for evaluating part-whole relationships in conceptual modeling aim to rule out incomplete or wrong representations. Such qualification is still important, but without a practical alternative, posterior inspection is of limited help to modelers who want to reduce errors or misperceptions in part-whole identification and classification. The findings of this study can be further developed by introducing more comprehensive variables and real-world settings. In addition, it is highly recommended to replicate and extend the suggested idea of utilizing Meta-model Formalization by creating alternative forms of guidelines, including plugins for integrated development environments.
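
As a rough illustration of how a checklist-style guideline can separate composition from aggregation, the following minimal Python sketch encodes two commonly cited UML criteria (exclusive ownership and lifetime dependency); these questions are assumptions for illustration only and are not the paper's actual Meta-model Formalization form items.

```python
# Hypothetical checklist-style classifier; the two criteria are common UML
# rules of thumb, not the self-check form items developed in the paper.
def classify_part_whole(exclusive_owner: bool, part_deleted_with_whole: bool) -> str:
    """Classify a part-whole association as composition or aggregation."""
    if exclusive_owner and part_deleted_with_whole:
        return "composition"   # strong, lifetime-bound ownership
    return "aggregation"       # shared or independently existing part

print(classify_part_whole(True, True))    # e.g. Building -- Room  -> composition
print(classify_part_whole(False, False))  # e.g. Playlist -- Song  -> aggregation
```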

Design and Implementation of IoT based Low cost, Effective Learning Mechanism for Empowering STEM Education in India

  • Simmi Chawla;Parul Tomar;Sapna Gambhir
    • International Journal of Computer Science & Network Security / v.24 no.4 / pp.163-169 / 2024
  • India is a developing nation and has made comprehensive progress in modernizing its economy, reducing poverty, and raising living standards for a large portion of its residents. STEM (Science, Technology, Engineering, and Mathematics) education plays an important role in this. STEM is an educational curriculum that emphasizes the subjects of science, technology, engineering, and mathematics. In the traditional education scenario these subjects are taught independently, whereas the educational philosophy of STEM teaches them together in project-based lessons, supporting students' holistic development. Youth unemployment is a major concern caused by a lack of adequate skills; a huge skill gap lies behind jobless engineers, and the question arises of how we can prepare engineers for a better tomorrow. Industry 4.0, the new fourth industrial revolution, is the intelligent networking of machines and processes for industry through ICT, based on the use of cyber-physical systems and the Internet of Things (IoT). This industrial revolution influences not only production but the educational system as well. IoT in academia is a new revolution in Internet technology that introduces "smartness" into the entire IT infrastructure. To improve the socio-economic status of India, students must be equipped with 21st-century digital skills, and universities and colleges must provide individual learning kits that can help students enhance their productivity and learning outcomes. The major goal of this paper is to present a low-cost, effective learning mechanism for STEM implementation using the Raspberry Pi 3+ model (a single-board computer) and Node-RED, an open-source visual programming tool developed by IBM for wiring hardware devices together. These tools are broadly used to provide hands-on experience with IoT fundamentals during teaching and learning. This paper demonstrates the appropriateness and practicality of these concepts via an example: a user interface (UI) and dashboard are implemented in Node-RED, where the dashboard palette is used for demonstration with a switch, slider, and gauge, and the Raspberry Pi palette is used to connect to the GPIO pins on the Raspberry Pi board. An LED is connected to a GPIO pin configured as an output. The experiment shows that the Node-RED dashboard can be accessed both on the Raspberry Pi and via a smartphone, and the results are then presented in detail. Conversely, inadequate programming skills among students are the biggest challenge, because without good programming skills there would be no pioneers in engineering, robotics, and other areas. Coding plays an important role in raising the level of knowledge on a wide scale and in encouraging students' interest in coding. Today Python, an open-source language and one of the most in-demand languages in industry, is essential for data science and algorithms, and understanding computer science would not be possible without science, technology, engineering, and math. In this paper a small experiment is also carried out with an LED by writing source code in Python. These small experiments are very helpful in encouraging students and give them a playful way to learn these advanced technologies. The cost estimation for each learning kit provided to the students for hands-on experiments is presented in tabular form. In addition, some popular open-source tools for experimenting with IoT technology are described. Students can enrich their knowledge by doing many experiments with this freely available software and low-cost hardware in labs or with the learning kits provided to them.
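
For the LED experiment mentioned above ("writing source code in Python"), a minimal sketch along the following lines would blink an LED from a GPIO pin; the pin number and timings are assumptions, this is not the paper's actual source code, and the RPi.GPIO library runs only on a Raspberry Pi.

```python
# Minimal sketch of blinking an LED from a Raspberry Pi GPIO pin.
# The BCM pin number and the blink timing are hypothetical choices.
import time
import RPi.GPIO as GPIO

LED_PIN = 18                      # hypothetical BCM pin wired to the LED

GPIO.setmode(GPIO.BCM)            # address pins by BCM numbering
GPIO.setup(LED_PIN, GPIO.OUT)     # configure the pin as a digital output

try:
    for _ in range(10):           # blink the LED ten times
        GPIO.output(LED_PIN, GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output(LED_PIN, GPIO.LOW)
        time.sleep(0.5)
finally:
    GPIO.cleanup()                # release the GPIO pins
```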

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.1-19 / 2019
  • Artificial intelligence (AI) is one of the main driving forces leading the Fourth Industrial Revolution. The technologies associated with AI have already shown abilities equal to or better than those of people in many fields, including image and speech recognition. Many efforts have been actively made to identify current technology trends and analyze their development directions, because AI technologies can be utilized in a wide range of fields including medicine, finance, manufacturing, service, and education. Major platforms that can develop complex AI algorithms for learning, reasoning, and recognition have been opened to the public as open source projects, and as a result technologies and services that utilize them have increased rapidly; this has been confirmed as one of the major reasons for the fast development of AI technologies. In addition, the spread of the technology owes much to open source software, developed by major global companies, that supports natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify the practical trend of AI technology development by analyzing OSS projects associated with AI, which have been developed through the online collaboration of many parties. This study searched and collected a list of major projects related to AI generated from 2000 to July 2018 on Github, and examined the development trends of major technologies in detail by applying text mining techniques to the topic information that indicates the characteristics and technical fields of the collected projects. The results of the analysis showed that the number of software development projects per year was below 100 until 2013, but increased to 229 projects in 2014 and 597 projects in 2015. The number of open source projects related to AI increased particularly rapidly in 2016 (2,559 OSS projects). The number of projects initiated in 2017 was 14,213, almost four times the total number of projects generated from 2009 to 2016 (3,555 projects), and the number of projects initiated from January to July 2018 was 8,737. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, with the appearance frequency of topics indicating the technology trends of AI-related OSS projects. The results showed that natural language processing remained at the top across all years, implying that such OSS has been developed continuously. Until 2015, the programming languages Python, C++, and Java were listed among the top ten most frequent topics; after 2016, however, programming languages other than Python disappeared from the top ten, and platforms supporting the development of AI algorithms, such as TensorFlow and Keras, showed high appearance frequency instead. Reinforcement learning algorithms and convolutional neural networks, which have been used in various fields, were also frequent topics. The results of topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency; the main difference was that visualization and medical imaging topics were found at the top of the list, although they were not at the top from 2009 to 2012. This indicates that OSS was developed in the medical field in order to utilize AI technology. Moreover, although computer vision was in the top 10 of the appearance frequency list from 2013 to 2015, it was not in the top 10 of the degree centrality list. The topics at the top of the degree centrality list were otherwise similar to those at the top of the appearance frequency list, with the ranks of convolutional neural networks and reinforcement learning changing slightly. The trend of technology development was examined using the appearance frequency of topics and degree centrality. The results showed that machine learning had the highest frequency and the highest degree centrality in all years. It is noteworthy that, although the deep learning topic showed a low frequency and a low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015, and in recent years both of these technologies have shown high appearance frequency and degree centrality. TensorFlow first appeared in the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018 to place near the top of the lists, after deep learning and Python. Computer vision and reinforcement learning did not show abrupt increases or decreases and had relatively low appearance frequency and degree centrality compared with the above-mentioned topics. Based on these analysis results, it is possible to identify the fields in which AI technologies are actively developed, and the results of this study can be used as a baseline dataset for more empirical analysis of future technology trends and technology convergence.
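
A minimal sketch of the two measures used in the study, topic appearance frequency and degree centrality of a topic network, might look as follows; the project/topic data and the use of the networkx library are illustrative assumptions, not the authors' actual pipeline.

```python
# Sketch of topic appearance frequency and degree centrality over a
# topic co-occurrence network; the project/topic lists are fabricated examples.
from collections import Counter
from itertools import combinations
import networkx as nx

# Each inner list stands for the topics attached to one GitHub project.
projects = [
    ["machine-learning", "deep-learning", "tensorflow", "python"],
    ["machine-learning", "natural-language-processing", "python"],
    ["deep-learning", "computer-vision", "tensorflow"],
]

# 1) Appearance frequency of topics across projects.
frequency = Counter(topic for topics in projects for topic in topics)
print(frequency.most_common(5))

# 2) Degree centrality of the topic co-occurrence network.
graph = nx.Graph()
for topics in projects:
    graph.add_edges_from(combinations(sorted(set(topics)), 2))
centrality = nx.degree_centrality(graph)
print(sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)[:5])
```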

Practical visualization of discontinuity distribution in subsurface using borehole image analysis (시추공영상분석을 이용한 지하 불연속면 분포의 가시화 실용연구)

  • 송무영;박찬석
    • The Journal of Engineering Geology / v.12 no.1 / pp.23-34 / 2002
  • Borehole image analysis has been carried out to obtain detailed geological data through direct observation. Direct application of borehole image analysis inevitably imposes some restrictions on data acquisition because of the limited information available within the narrow borehole space. Considering that the apparent dip of a discontinuity surface depends on the viewing direction, a program that visualizes two-dimensional subsurface discontinuities was coded. Borehole image analysis can thus extrapolate the distribution of subsurface discontinuities into the surrounding area of investigation. To draw a subsurface profile over a proposed construction area, the visualization program was coded as a Windows GUI (Graphic User Interface) application using the Fortran and Visual Basic programming languages, and it is openly available to anyone who wants to use it. A discontinuity distribution map was visualized along the proposed tunnel line in the Janggye-ri area, Jangsu-gun. Using the program, the limited information from boreholes is applied spatially to the analysis of overall subsurface structures, and the distributional characteristics of discontinuities at the proposed site can be anticipated. In addition, the spacing and extension of joints and the depth of discontinuities affecting tunnel safety can be visualized along the direction of the proposed tunnel. These visualizations can be applied to the design and construction of fundamental structures.
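
The direction dependence of apparent dip mentioned above can be illustrated with a short sketch; the relation tan(apparent dip) = tan(true dip) x cos(angle between section azimuth and dip direction) is the standard structural-geology formula, and the sample values are hypothetical (the paper's own program is written in Fortran and Visual Basic, so this Python version is only an illustration).

```python
# Sketch of the standard apparent-dip relation used when projecting a
# discontinuity plane onto a vertical section; sample values are fabricated.
import math

def apparent_dip(true_dip_deg: float, dip_direction_deg: float,
                 section_azimuth_deg: float) -> float:
    """Apparent dip (degrees) of a plane seen on a vertical section."""
    angle = math.radians(section_azimuth_deg - dip_direction_deg)
    return math.degrees(math.atan(math.tan(math.radians(true_dip_deg)) *
                                  abs(math.cos(angle))))

# Example: a joint dipping 60 degrees toward 090, seen on a section trending 045.
print(round(apparent_dip(60.0, 90.0, 45.0), 1))
```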

A Structural Testing Strategy for PLC Programs Specified by Function Block Diagram (함수 블록 다이어그램으로 명세된 PLC 프로그램에 대한 구조적 테스팅 기법)

  • Jee, Eun-Kyoung;Jeon, Seung-Jae;Cha, Sung-Deok
    • Journal of KIISE:Software and Applications / v.35 no.3 / pp.149-161 / 2008
  • As Programmable Logic Controllers (PLCs) are frequently used to implement real-time safety-critical software, testing of PLC software is becoming more important. We propose a structural testing technique for Function Block Diagram (FBD), one of the PLC programming languages. To test FBD networks, we define templates for function blocks, including timer function blocks, and propose an algorithm based on these templates to transform a unit FBD into a flowgraph. We then generate test cases by applying existing testing techniques to the generated flowgraph. Whereas existing FBD testing techniques do not consider the internal structure of an FBD when generating test cases and can be applied only to FBDs from which a specific intermediate model can be generated, our approach has the advantages of systematic test case generation that considers the internal structure of the FBD and of applicability to any FBD regardless of its intermediate format. In particular, the proposed method enables FBD networks that include timer function blocks to be tested thoroughly. To demonstrate the effectiveness of the proposed method, we use the trip logic of the bistable processor of a digital nuclear power plant protection system that is being developed in Korea.
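
To illustrate the general idea of turning an FBD network into a flowgraph and deriving test paths from it, here is a minimal, hypothetical sketch; the toy block network and the path enumeration below are illustrations only, not the paper's templates or algorithm.

```python
# Hypothetical sketch: build flowgraph edges from a tiny FBD-like network and
# enumerate input-to-output paths (acyclic network assumed); each path suggests
# a structural test case.

# Each block: name -> (block type, list of input signal/block names).
fbd_network = {
    "AND1": ("AND", ["sensor_a", "sensor_b"]),
    "TON1": ("TON", ["AND1"]),          # on-delay timer fed by AND1
    "TRIP": ("OR",  ["TON1", "manual_trip"]),
}

def to_flowgraph(network):
    """Build flowgraph edges (source -> block) from the block input lists."""
    return [(src, block) for block, (_, inputs) in network.items()
            for src in inputs]

def paths_to(target, edges, path=None):
    """Enumerate paths ending at `target` by walking predecessors backwards."""
    path = [target] + (path or [])
    preds = [s for s, d in edges if d == target]
    if not preds:
        return [path]
    return [p for s in preds for p in paths_to(s, edges, path)]

edges = to_flowgraph(fbd_network)
for p in paths_to("TRIP", edges):
    print(" -> ".join(p))
```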

Generating Intermediate Representation of IDL Using the CFE (CFE를 사용한 IDL 중간 표현 생성)

  • Park, Chan-Mo;Song, Gi-Beom;Hong, Seong-Pyo;Lee, Hyok;Lee, Jeong-Ki;Lee, Joon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 1999.05a / pp.192-197 / 1999
  • Programmers who write distributed programs face a dilemma when writing the system's communication code. If the code is written entirely or partly by hand, the speed of the application may be maximized, but the human effort required to implement and maintain the system is greatly increased. On the other hand, if the code is generated using a CORBA IDL compiler, the programmer's effort is reduced, but the performance of the application may be poor. We therefore need to optimize the code generated by the CORBA IDL compiler. We apply techniques used in typical programming language compilers to the compilation of IDL and separate compilation into three phases. The first phase parses the interface definitions in IDL, manages nested scopes, and generates an AST (Abstract Syntax Tree). The second phase implements the optimization. The third phase generates code in the target language. In this paper, we focus on the first phase: from the AST we separate the interface definition into an interface representation and a message representation, which supports separate optimization of the code in the second phase.
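
A minimal sketch of the kind of AST the first phase could produce for a small IDL interface is shown below; the node classes and the toy interface are illustrative assumptions, not the actual CFE data structures.

```python
# Hypothetical AST node classes for a tiny IDL interface; names are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Operation:            # one IDL operation (a "message" to the object)
    name: str
    return_type: str
    params: List[str] = field(default_factory=list)

@dataclass
class Interface:            # one IDL interface with its operations
    name: str
    operations: List[Operation] = field(default_factory=list)

# AST for:  interface Account { long balance(); void deposit(in long amount); };
ast = Interface("Account", [
    Operation("balance", "long"),
    Operation("deposit", "void", ["in long amount"]),
])
print(ast)
```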

Design and Implementation of a Framework Modeler for Intranet Construction Tool (인트라넷 구축 도구를 위한 프레임워크 모델러의 설계 및 구현)

  • Lee, Chang-Mog;Yoo, Cheol-Jung;Chang, Ok-Bae;Lee, Sang-Duck
    • Journal of KIISE:Computing Practices and Letters / v.7 no.1 / pp.63-76 / 2001
  • As reusability has become more widely recognized with the introduction of object-oriented programming languages, developers want not only to reduce development time but also to develop systems robustly and safely by adapting the Hot Spots of an existing framework in order to reuse it. Such work calls for a Rapid Application Development (RAD) tool, which provides a convenient development environment. The need for RAD tools is recognized by every object-oriented programmer, and many companies are developing them. In this paper, we present the design and implementation of a module-based modeler as a technique for constructing a user-driven intranet environment and generating programs based on the framework. The framework modeler was written in Java, which is platform-independent, and partially applies the techniques of the OMT editor, which provides UML notation; the modeler also includes notations that the OMT editor does not support. In addition, it is structured so that systems can be developed consistently by applying the Agent pattern, a design pattern we propose, to relay the messages exchanged between the various views; the existing MVC (Model-View-Controller) architecture does not provide this function. The tool therefore remains flexible when user requirements change or functions are extended.
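
To illustrate the mediator-style role the abstract assigns to the Agent pattern (relaying messages between views instead of direct view-to-view calls), here is a minimal, hypothetical sketch; the class and method names are assumptions, not the paper's actual design.

```python
# Hypothetical Agent that relays messages between registered views,
# so views never call each other directly (a mediator-style design).
class Agent:
    def __init__(self):
        self._views = []

    def register(self, view):
        self._views.append(view)

    def send(self, sender, message):
        # Forward the message to every view except the one that sent it.
        for view in self._views:
            if view is not sender:
                view.receive(message)

class View:
    def __init__(self, name, agent):
        self.name = name
        agent.register(self)

    def receive(self, message):
        print(f"{self.name} received: {message}")

agent = Agent()
diagram_view, tree_view = View("DiagramView", agent), View("TreeView", agent)
agent.send(diagram_view, "class renamed to Customer")
```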

An Exploratory Study of Software Development Environment in Korean Shipbuilding and Marine Industry (조선해양산업 소프트웨어 개발환경 현황 연구)

  • Yu, Misun;Jeong, Yang-Jae;Chun, In-Geol;Kim, Byoung-Chul;Na, Gapjoo
    • KIPS Transactions on Software and Data Engineering / v.7 no.6 / pp.221-228 / 2018
  • With the increasing demand for high added value in the shipbuilding and marine industry based on information and communications technology (ICT), software technology has become more important than ever in this industry. In this paper, we present the results of our preliminary investigation of the current software development environment in the shipbuilding and marine industry, conducted in order to develop reusable software components that can enhance the competitiveness of software development. The investigation is based on survey answers from 34 developers working at different shipbuilding and marine companies. The questionnaire consists of items that gather information about each company, such as the number of employees and product domain, and about the actual software development environment, such as operating system, programming languages, deployment format, obstacles to developing components, and the adoption of software development methods and tools. According to the survey results, the most important considerations when selecting a development platform were the number of available utilities and the technical support, followed by performance, price, and security. In addition, the requirement to support various platforms with high reliability, combined with limited development budgets and manpower, made it difficult for respondents to develop reusable software components. Finally, the survey revealed that only 15% of developers used software development processes and managed quality in order to develop their software products systematically; shipbuilding and marine companies therefore need more technical and institutional support to improve their ability to develop high-quality software.

Intermediate-Representation Translation Techniques to Improve Vulnerability Analysis Efficiency for Binary Files in Embedded Devices (임베디드 기기 바이너리 취약점 분석 효율성 제고를 위한 중간어 변환 기술)

  • Jeoung, Byeoung Ho;Kim, Yong Hyuk;Bae, Sung il;Im, Eul Gyu
    • Smart Media Journal / v.7 no.1 / pp.37-44 / 2018
  • Using sequence control and numerical computation, embedded devices are employed in a variety of automated systems, including those at industrial sites, according to their control programs. Since embedded devices are now used as control systems in corporate industrial complexes, nuclear power plants, and public transport infrastructure, deliberate attacks on them can cause significant economic and social damage, and most attacks aimed at embedded devices target their data, code, or control programs. The control programs of industry-automation embedded devices are designed to represent circuit structures, unlike common programming languages, and most industrial automation control programs are written in a graphical language, LAD, which is difficult to subject to static analysis. Because of these characteristics, vulnerability analysis and security research for industrial automation control programs has only progressed to the level of formal verification and real-time monitoring, while static analysis of such programs, which could detect vulnerabilities in advance and prepare for attacks, remains poorly researched. Therefore, this study proposes a way of handling industrial automation control programs that are designed to represent circuit structures, in order to increase the efficiency of static analysis of embedded industrial automation programs. It also proposes an intermediate-representation translation technique exploiting LLVM IR to analyze the industrial automation control programs of various manufacturers in a unified way; by using LLVM IR, the results can also be integrated with dynamic analysis. In this study, a prototype program that converts a control program of company S into a logical-expression form of the intermediate language was developed in order to verify our method.
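
To illustrate the general idea of translating a graphical control program into a logical-expression form of an intermediate language, here is a minimal, hypothetical sketch that flattens a toy ladder-logic (LAD) rung into a boolean expression; the rung encoding and names are assumptions and are unrelated to the vendor's actual format or the authors' LLVM IR pipeline.

```python
# Hypothetical translation of a toy LAD rung into a logical expression:
# series contacts become AND, parallel branches OR, normally-closed contacts NOT.

def rung_to_expr(rung):
    """Flatten a rung (list of parallel branches of contacts) into an expression."""
    def contact(c):
        name, kind = c
        return f"(not {name})" if kind == "NC" else name
    def branch(b):                      # a branch is a series of contacts
        return " and ".join(contact(c) for c in b)
    return " or ".join(f"({branch(b)})" for b in rung)

# Rung: (start AND NOT stop) OR (running)  -> coil "motor"
rung = [
    [("start", "NO"), ("stop", "NC")],
    [("running", "NO")],
]
print("motor =", rung_to_expr(rung))
# prints: motor = (start and (not stop)) or (running)
```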