• Title/Summary/Keyword: Software Complexity

Designing Integrated Diagnosis Platform for Heterogeneous Combat System of Surface Vessels (다기종 수상함 전투체계의 통합 진단 플랫폼 설계)

  • Kim, Myeong-hun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.05a
    • /
    • pp.186-188
    • /
    • 2021
  • The architecture named IDPS is a design concept for a web-based integrated diagnosis platform for heterogeneous naval combat systems, which makes the diagnosis process more efficient (by decreasing its complexity) and reduces the time required to diagnose a system. Each type of surface vessel has its own diagnostic processes and applications, which means it also requires its own diagnostic engineers (an inefficiency in human resource management). In addition, manual diagnosis causes quality issues, such as log analyses that vary with engineer skill. In this paper, we therefore design an integrated diagnostic platform, IDPS, with a simplified common process that is independent of the vessel type, and we reinforce IDPS with a status decision algorithm (SDA) that judges the current software status of a vessel from the large volume of gathered logs. IDPS will enable engineers to diagnose systems more efficiently and to devote more resources to utilizing SDA-analyzed diagnostic results.
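
The abstract does not detail how SDA decides a status from logs. A minimal sketch of the general idea follows; the severity keywords, weights, threshold, and status labels are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch of an SDA-style status decision: classify a
# vessel's software status from a batch of collected log lines.
# Severity weights, the threshold, and the labels are assumptions.
from collections import Counter

SEVERITY = {"FATAL": 3, "ERROR": 2, "WARN": 1}

def decide_status(log_lines, threshold=0.05):
    """Return a coarse status label for one batch of logs."""
    if not log_lines:
        return "UNKNOWN"
    counts = Counter()
    for line in log_lines:
        for level in SEVERITY:
            if level in line:
                counts[level] += 1
    if counts["FATAL"]:
        return "CRITICAL"                      # any fatal entry dominates
    score = sum(SEVERITY[k] * n for k, n in counts.items()) / len(log_lines)
    return "DEGRADED" if score > threshold else "NORMAL"

print(decide_status(["ERROR radar sync lost", "INFO heartbeat ok"]))  # DEGRADED
```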

In vitro anti-Trypanosoma cruzi activity of methanolic extract of Bidens pilosa and identification of active compounds by gas chromatography-mass spectrometry analysis

  • Gabriel Enrique Cazares-Jaramillo;Zinnia Judith Molina-Garza;Itza Eloisa Luna-Cruz;Luisa Yolanda Solis-Soto;Jose Luis Rosales-Encina;Lucio Galaviz-Silva
    • Parasites, Hosts and Diseases
    • /
    • v.61 no.4
    • /
    • pp.405-417
    • /
    • 2023
  • Chagas disease, caused by the parasite Trypanosoma cruzi, is a significant but neglected tropical public health issue in Latin America due to the diversity of its genotypes and pathogenic profiles. This complexity is compounded by the adverse effects of current treatments, underscoring the need for new therapeutic options that employ medicinal plant extracts without negative side effects. Our research aimed to evaluate the trypanocidal activity of Bidens pilosa fractions against the epimastigote and trypomastigote stages of T. cruzi, specifically targeting the Brener and Nuevo León strains; the latter was isolated from Triatoma gerstaeckeri in General Terán, Nuevo León, México. We processed the plant's aerial parts (stems, leaves, and flowers) to obtain a methanolic extract (Bp-mOH) and fractions of varying solvent polarities. These preparations inhibited more than 90% of growth at concentrations as low as 800 µg/ml for both parasite stages. The median lethal concentration (LC50) values for the Bp-mOH extract and its fractions were below 500 µg/ml. Cytotoxicity tests using Artemia salina and Vero cells, and hemolytic activity assays, yielded negative results for the extract and its fractions. The methanol fraction (BPFC3MOH1) exhibited the strongest inhibitory activity. Its functional groups, identified as phenols, enols, alkaloids, carbohydrates, and proteins, include compounds such as 2-hydroxy-3-methylbenzaldehyde (50.9%), pentadecyl prop-2-enoate (22.1%), and linalool (15.4%). Eight compounds were identified by mass spectrometry, with matches confirmed against the National Institute of Standards and Technology (NIST-MS) library.

A Study on the Development of Adversarial Simulator for Network Vulnerability Analysis Based on Reinforcement Learning (강화학습 기반 네트워크 취약점 분석을 위한 적대적 시뮬레이터 개발 연구)

  • Jeongyoon Kim; Jongyoul Park;Sang Ho Oh
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.34 no.1
    • /
    • pp.21-29
    • /
    • 2024
  • With the development of ICT and networks, security management of ever-larger IT infrastructure has become very difficult, and many companies and public institutions struggle to manage system and network security. Moreover, as the complexity of hardware and software grows, it is becoming almost impossible for a person to manage all aspects of security. AI is therefore essential for network security management. However, because operating an attack model in a real network environment is very dangerous, we conducted cybersecurity emulation research through reinforcement learning on an implementation of a realistic network environment. To this end, this study applied reinforcement learning to the network environment, and as learning progressed, the agent accurately identified the vulnerabilities of the network. When a network vulnerability is detected through AI, an automated, customized response becomes possible.
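
The abstract does not specify the environment or learning algorithm; the sketch below illustrates the kind of agent-environment loop it describes, using tabular Q-learning on a toy network graph. The topology, reward values, and hyperparameters are all illustrative assumptions.

```python
# Minimal Q-learning sketch: an agent probes a toy network and learns
# which host-to-host moves reach a vulnerable node. Topology, rewards,
# and hyperparameters are invented for illustration.
import random

EDGES = {0: [1, 2], 1: [3], 2: [3], 3: []}   # toy network graph
VULNERABLE = 3                               # goal: reach the weak host

Q = {(s, a): 0.0 for s in EDGES for a in EDGES[s]}
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(200):
    state = 0
    while state != VULNERABLE:
        actions = EDGES[state]
        if random.random() < eps:                       # explore
            action = random.choice(actions)
        else:                                           # exploit
            action = max(actions, key=lambda a: Q[(state, a)])
        reward = 1.0 if action == VULNERABLE else -0.01
        best_next = max((Q[(action, a)] for a in EDGES[action]), default=0.0)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = action

print(max(Q, key=Q.get))  # the (host, move) pair the agent values most
```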

CIA-Level Driven Secure SDLC Framework for Integrating Security into SDLC Process (CIA-Level 기반 보안내재화 개발 프레임워크)

  • Kang, Sooyoung;Kim, Seungjoo
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.30 no.5
    • /
    • pp.909-928
    • /
    • 2020
  • From the early 1970s, the US government began to recognize that penetration testing could not assure the security quality of products: the results of penetration testing, such as the vulnerabilities and faults identified, vary with the capabilities of the team. In other words, no penetration team can guarantee that "no vulnerabilities were found" means "the product has no vulnerabilities". The US government therefore realized that, to improve the security quality of products, the development process itself should be managed systematically and strictly, and from the 1980s it began to publish various standards for development methodology and for an evaluation and procurement system embedding the "security-by-design" concept. Security-by-design means reducing a product's complexity by considering security from the earliest phases of the development lifecycle, such as requirements analysis and design, so as to ultimately achieve a trustworthy product. The concept spread to the private sector from 2002 under the name Secure SDLC, led by Microsoft and IBM, and is currently used in fields as diverse as automotive and advanced weapon systems. The problem, however, is that Secure SDLC standards and guidelines contain only abstract, declarative content, so they are not easy to implement in the field. In this paper, we therefore present a new framework for specifying the level of Secure SDLC an enterprise desires. Our proposed CIA (functional Correctness, safety Integrity, security Assurance)-level-based security-by-design framework combines an evidence-based security approach with the existing Secure SDLC. Using our methodology, one can first quantify the gap in Secure SDLC process level between a company and its competitors. Second, the methodology is very useful for building a Secure SDLC in practice, because the detailed activities and documents needed to reach the desired level can be derived easily.
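
The paper's framework itself is not reproduced here, but the kind of quantitative gap comparison it aims at can be illustrated with a trivially small example; the activity names, target levels, and scoring rule below are invented for illustration only.

```python
# Hypothetical illustration of scoring an organization's Secure SDLC
# activities against a target level. Activities and weights are made up.
TARGET = {"threat_modeling": 3, "static_analysis": 3, "pentest": 2}
CURRENT = {"threat_modeling": 1, "static_analysis": 2, "pentest": 2}

gap = {k: TARGET[k] - CURRENT.get(k, 0) for k in TARGET}
total_gap = sum(v for v in gap.values() if v > 0)
print(gap, "| total gap:", total_gap)
```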

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.27-65
    • /
    • 2020
  • Many information and communication technology companies release the AI technologies they develop, for example Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software to the public, a company can strengthen its relationship with the developer community and the artificial intelligence (AI) ecosystem, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although open source software has been analyzed from various angles, there is a lack of studies that help industry develop or use deep learning open source software. This study therefore attempts to derive an adoption strategy through case studies of deep learning open source frameworks. Based on the technology-organization-environment (TOE) framework and a literature review on open source software adoption, we employed a case study framework comprising technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge and expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We analyzed three companies' adoption cases (two successes and one failure) and found that seven of the eight TOE factors, along with several factors concerning company, team, and resources, are significant for the adoption of a deep learning open source framework. Organizing the case study results, we identified five success factors for adopting a deep learning framework: the knowledge and expertise of the developers on the team, the hardware (GPU) environment, a data cooperation system across the enterprise, a deep learning framework platform, and a deep learning framework tool service. For an organization to adopt a deep learning open source framework successfully, in the usage stage, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers on the team. Second, research developers' use of deep learning frameworks must be supported by collecting and managing data inside and outside the company through a data cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures in the usage stage, a company increases its number of deep learning research developers, its ability to use the framework, and its GPU resources. In the proliferation stage, fourth, the company builds a deep learning framework platform that improves the developers' research efficiency and effectiveness, for example by optimizing the hardware (GPU) environment automatically. Fifth, a deep learning framework tool service team complements the developers' expertise by sharing information from the external open source framework community with the in-house community and by running developer retraining sessions and seminars.
To implement the five identified success factors, a step-by-step enterprise procedure for adopting a deep learning framework was proposed: define the project problem, confirm that the deep learning methodology is the right method, confirm that a deep learning framework is the right tool, use the deep learning framework in the enterprise, and spread the framework through the enterprise. The first three steps are pre-considerations for adopting a deep learning open source framework; once they are settled, the remaining two steps can proceed. In the fourth step, the knowledge and expertise of the developers on the team matter, in addition to the hardware (GPU) environment and the data cooperation system. In the final step, all five success factors must be realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

Scalable Collaborative Filtering Technique based on Adaptive Clustering (적응형 군집화 기반 확장 용이한 협업 필터링 기법)

  • Lee, O-Joun;Hong, Min-Sung;Lee, Won-Jin;Lee, Jae-Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.73-92
    • /
    • 2014
  • An adaptive clustering-based collaborative filtering technique is proposed to solve the fundamental problems of collaborative filtering: the cold-start problem, the scalability problem, and the data sparsity problem. Previous collaborative filtering techniques make recommendations from a user's predicted preference for a particular item, using a similar-item subset and a similar-user subset composed from users' preferences for items. Consequently, when the density of the user preference matrix is low, the reliability of the recommendation system drops rapidly, and composing similar-item and similar-user subsets becomes harder. In addition, as the scale of the service grows, the time needed to compose these subsets increases geometrically, and with it the response time of the recommendation system. To solve these problems, this paper suggests a collaborative filtering technique that actively adapts its conditions to the model and adopts concepts from context-based filtering. The technique consists of four major methods. First, item clusters and user clusters are formed from their feature vectors, and an inter-cluster preference between each item cluster and user cluster is then estimated. This economizes the run time for composing similar-item and similar-user subsets, makes the recommendation system more reliable than one that uses only user preference information for that purpose, and partially solves the cold-start problem. Second, recommendations are made using the previously composed item and user clusters and the inter-cluster preferences. In this phase, a list of items is built for a user by examining the item clusters in decreasing order of the inter-cluster preference of the cluster the user belongs to, then selecting and ranking items according to predicted or recorded preference information. With this method, the model-creation phase bears the heaviest load of the recommendation system, minimizing the load at run time; large-scale recommendation can thus be performed with highly reliable collaborative filtering. Third, missing user preference information is predicted using the item and user clusters, which mitigates the problem of a low-density user preference matrix. Existing studies used either item-based or user-based prediction; this paper improves on Hao Ji's idea of combining both, merging the two predicted values under the conditions of the recommendation model to improve reliability. Predicting user preferences from the item or user clusters also reduces prediction time and allows missing preferences to be predicted at run time. Fourth, the item and user feature vectors are made to learn from subsequent user feedback, by applying normalized feedback to the vectors.
This learning step mitigates the problems that come with adopting context-based filtering concepts, namely item and user feature vectors built from user profiles and item properties, whose weakness lies in quantifying the qualitative features of items and users. The elements of the user and item feature vectors are therefore matched one to one, and when user feedback on a particular item is obtained, it is applied to the opposite feature vector. The method was verified by comparing its performance with existing hybrid filtering techniques on two measures: MAE (mean absolute error) and response time. On MAE, the technique was confirmed to improve the reliability of the recommendation system; on response time, it was found suitable for a large-scale recommendation system. This paper thus suggests an adaptive clustering-based collaborative filtering technique with high reliability and low time complexity, though with a limitation: because the technique focuses on reducing time complexity, an improvement in reliability was not expected. Future work will improve the technique with rule-based filtering.
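
As a rough illustration of the clustering-based prediction idea (not the paper's exact model), the sketch below clusters users and items with k-means and fills a missing preference with the inter-cluster mean. The cluster counts and the toy rating matrix are assumptions.

```python
# Illustrative sketch: cluster users and items, then predict a missing
# rating from the mean of the known ratings in the corresponding
# (user cluster, item cluster) block. Zeros mark unknown preferences.
import numpy as np
from sklearn.cluster import KMeans

R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)

user_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(R)
item_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(R.T)

def predict(u, i):
    """Inter-cluster mean of the known ratings for (u, i)'s clusters."""
    block = R[np.ix_(user_labels == user_labels[u],
                     item_labels == item_labels[i])]
    known = block[block > 0]
    return known.mean() if known.size else R[R > 0].mean()

print(predict(0, 2))  # predicted preference of user 0 for item 2
```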

Cache Memory and Replacement Algorithm Implementation and Performance Comparison

  • Park, Na Eun;Kim, Jongwan;Jeong, Tae Seog
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.3
    • /
    • pp.11-17
    • /
    • 2020
  • In this paper, we present practical results for cache replacement policies by measuring the cache hit rate and search time of each replacement algorithm through cache simulation. The structure of cache memory and four replacement policies, FIFO, LFU, LRU, and Random, were implemented in software to analyze the characteristics of each technique. The experiments showed that the LRU algorithm achieved a hit rate of 36.044% and a search time of 577.936 ns under a uniform distribution, and 45.636% and 504.692 ns under a skewed distribution, while the FIFO algorithm performed similarly at 36.078% and 554.772 ns under the uniform distribution and 45.662% and 489.574 ns under the skewed distribution. LFU followed, and the Random algorithm performed worst, measuring 30.042% and 622.866 ns under the uniform distribution and 36.36% and 553.878 ns under the skewed distribution. The LRU replacement method commonly used in cache memory is more complex to implement, but it is the most efficient of the conventional replacement algorithms, indicating that it is a reasonable method because it considers the reference history of the data.
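
The sketch below shows the kind of simulation the paper runs: an LRU cache measured by hit rate over a random reference stream. The cache size and the access distribution are illustrative assumptions.

```python
# Minimal LRU cache simulation measuring hit rate on a uniform stream.
import random
from collections import OrderedDict

def simulate_lru(capacity, accesses):
    cache, hits = OrderedDict(), 0
    for addr in accesses:
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)          # mark as most recently used
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)    # evict least recently used
            cache[addr] = True
    return hits / len(accesses)

stream = [random.randint(0, 99) for _ in range(10_000)]   # uniform refs
print(f"LRU hit rate: {simulate_lru(capacity=32, accesses=stream):.3f}")
```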

Convergence Organization Strategies of the Computational Thinking in Informatics Curriculums (정보과 교육과정에서 융합형 컴퓨팅사고력 구성 전략)

  • Shin, Soo-Bum;Kim, Chul;Park, Namje;Kim, Kap-Su;Sung, Young-Hoon;Jeong, Young-Sik
    • Journal of The Korean Association of Information Education
    • /
    • v.20 no.6
    • /
    • pp.607-616
    • /
    • 2016
  • Computational thinking is a complex and independent subject area through which learners can grasp concepts of computer science, and it provides a methodology for problem solving. Many experts have said that computational thinking will become an essential tool in the developing information society, and Korea has accordingly been trying to introduce it into K-12 informatics education. We therefore propose a method of introducing computational thinking into the informatics curriculum that suits its character. To do this, we analyzed its character and value, as well as prior model cases of introducing it into curricula. These prior cases divide into three types: pursuing computational thinking in its own right, connecting the aims of general subjects with it, and teaching it as part of computer science education. Based on these cases, this study chose a permeative style that blends computational thinking into the informatics curriculum. The method divides achievement criteria into content and means: the content areas of the informatics achievement criteria comprise Computing System, Information Life, and Software, while the means area is filled with subsets of computational thinking. This approach can help informatics education settle as a subject that supports problem solving through computing systems, moving beyond its character as a technology-oriented subject.

Fast Coding Unit Decision Algorithm Based on Region of Interest by Motion Vector in HEVC (움직임 벡터에 의한 관심영역 기반의 HEVC 고속 부호화 유닛 결정 방법)

  • Hwang, In Seo;Sunwoo, Myung Hoon
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.11
    • /
    • pp.41-47
    • /
    • 2016
  • High Efficiency Video Coding (HEVC) employs a coding tree unit (CTU) to improve coding efficiency. A CTU consists of coding units (CUs), prediction units (PUs), and transform units (TUs), and all possible block partitions must be evaluated at each depth level to obtain the best combination of CUs, PUs, and TUs. To reduce the complexity of this block partitioning process, this paper proposes a PU mode skip algorithm with region-of-interest (RoI) selection based on motion vectors. In addition, the paper presents a CU depth level skip algorithm that uses co-located block information from previously encoded frames. First, the RoI selection algorithm distinguishes dynamic CTUs from static CTUs, and asymmetric motion partitioning (AMP) blocks are skipped in static CTUs. Second, the depth level skip algorithm predicts the most probable target depth level from the average depth within a CTU. Experimental results show that the proposed fast CU decision algorithm reduces total encoding time by up to 44.8% compared with the HEVC test model (HM) 14.0 reference software encoder, with only a 2.5% Bjontegaard delta bit rate (BDBR) loss.
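
The two decisions the abstract describes can be sketched schematically: flag a CTU as static when its motion vectors are small (so AMP modes can be skipped), and predict a target depth from the co-located depths of the previous frame. The thresholds and the depth rule below are illustrative assumptions, not the paper's exact criteria.

```python
# Schematic sketch of the RoI test and the depth prediction step.
def is_static_ctu(motion_vectors, mv_threshold=1.0):
    """RoI test: a CTU is static if its mean MV magnitude is small."""
    mags = [(dx * dx + dy * dy) ** 0.5 for dx, dy in motion_vectors]
    return sum(mags) / len(mags) < mv_threshold

def target_depth(colocated_depths):
    """Predict the most probable depth from the co-located CTU's average."""
    return round(sum(colocated_depths) / len(colocated_depths))

mvs = [(0.25, 0.0), (0.0, 0.5), (0.25, 0.25)]
print("skip AMP modes:", is_static_ctu(mvs))
print("search around depth:", target_depth([1, 2, 2, 1]))
```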

Visual Touchless User Interface for Window Manipulation (윈도우 제어를 위한 시각적 비접촉 사용자 인터페이스)

  • Kim, Jin-Woo;Jung, Kyung-Boo;Jeong, Seung-Do;Choi, Byung-Uk
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.6
    • /
    • pp.471-478
    • /
    • 2009
  • Recently, user interface research has progressed remarkably due to the explosive growth of 3-dimensional content and applications and the widening base of computer users. This paper proposes a novel method to manipulate windows efficiently using only intuitive hand motion. Previous methods have drawbacks such as the burden of expensive devices, the high complexity of gesture recognition, and the need for additional information such as markers. To improve on these defects, we propose a novel visual touchless interface. First, we detect the hand region using the hue channel in HSV color space. The distance transform is then applied to find the centroid of the hand, and the curvature of the hand contour is used to locate the fingertips. Finally, using the hand motion information, we recognize the hand gesture as one of seven predefined motions; the recognized gesture becomes a command to control the window. Because the method adopts a stereo camera, the user can manipulate windows with a sense of depth in the real environment. Intuitive manipulation is also possible because the proposed method supports visual touch of the virtual object the user wants to manipulate, using only simple hand motions. Finally, the efficiency of the proposed method is verified via an application based on our interface.
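
The detection pipeline the abstract outlines can be sketched with OpenCV (4.x API): threshold the hue channel in HSV space for skin, take the distance-transform peak as the palm centroid, and find fingertip candidates on the contour. The skin-color range is an illustrative, lighting-dependent assumption, and convexity defects stand in for the paper's curvature analysis.

```python
# Sketch of the hand-detection steps: hue threshold -> distance
# transform (palm centroid) -> contour analysis (fingertip candidates).
import cv2
import numpy as np

def detect_hand(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))   # rough skin hue
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
    _, _, _, palm = cv2.minMaxLoc(dist)                    # deepest point
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return palm, []
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    tips = []
    if defects is not None:
        for start, _end, _far, _depth in defects.reshape(-1, 4):
            tips.append(tuple(hand[start][0]))   # defect start ~ fingertip
    return palm, tips

palm, tips = detect_hand(np.zeros((240, 320, 3), np.uint8))
print("palm centroid:", palm, "| fingertip candidates:", len(tips))
```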