• Title/Summary/Keyword: Systems engineering


Improved Original Entry Point Detection Method Based on PinDemonium (PinDemonium 기반 Original Entry Point 탐지 방법 개선)

  • Kim, Gyeong Min;Park, Yong Su
    • KIPS Transactions on Computer and Communication Systems, v.7 no.6, pp.155-164, 2018
  • Many malicious programs are compressed or encrypted with commercial packers to hinder reverse engineering, so malware analysts must first decompress or decrypt them. The OEP (Original Entry Point) is the address of the first instruction executed after the packed executable has been restored to its original binary state. Several unpackers, including PinDemonium, execute the packed file, keep track of the addresses visited until the OEP appears, and search for the OEP among them. However, instead of pinpointing a single OEP, they produce a relatively large set of OEP candidates, and the true OEP is sometimes missing from that set; in other words, existing unpackers have difficulty finding the correct OEP. We have developed a new tool that produces smaller OEP candidate sets by adding two methods based on properties of the OEP, exploiting the fact that the function call sequence and the parameters are the same in the packed and original programs. The first method is based on function calls. Programs written in C/C++ are compiled into binary code, and the compiler adds compiler-specific system functions to the compiled program. After examining these functions, we extended PinDemonium with a method that detects the completion of unpacking by matching the patterns of system functions called in the packed and unpacked programs. The second method is based on parameters, which include not only user-entered inputs but also system inputs: we extended PinDemonium with a method that finds the OEP using the system parameters of a particular function in stack memory. OEP detection experiments were performed on sample programs packed by 16 commercial packers. Our tool reduces the number of OEP candidates by more than 40% on average compared to PinDemonium, excluding two commercial packers that could not be executed due to anti-debugging techniques.
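The first idea above, matching the pattern of compiler-inserted system functions to detect when unpacking has finished, can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation; the pattern shown (Windows CRT startup calls) and the example trace are assumptions.

```python
# Sketch: flag an OEP candidate when the system-function calls observed
# after it match, in order, the call pattern a compiler-generated entry
# routine would produce. The pattern below is illustrative (MSVC CRT-style
# startup calls), not taken from the paper.

COMPILER_ENTRY_PATTERN = ["GetSystemTimeAsFileTime", "GetCurrentProcessId",
                          "GetCurrentThreadId", "GetTickCount"]

def matches_entry_pattern(call_trace, pattern=COMPILER_ENTRY_PATTERN):
    """True if `pattern` occurs as an ordered subsequence of `call_trace`."""
    it = iter(call_trace)
    return all(any(call == want for call in it) for want in pattern)

def filter_oep_candidates(candidates):
    """Keep only (address, trace) candidates whose post-candidate call
    trace looks like a compiler-generated entry point."""
    return [addr for addr, trace in candidates if matches_entry_pattern(trace)]
```

A candidate whose subsequent trace lacks the startup pattern is discarded, which is how the candidate set shrinks.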

PST Member Behavior Analysis Based on Three-Dimensional Finite Element Analysis According to Load Combination and Thickness of Grouting Layer (하중조합과 충전층 두께에 따른 3차원 유한요소 해석에 의한 PST 부재의 거동 분석)

  • Seo, Hyun-Su;Kim, Jin-Sup;Kwon, Min-Ho
    • Journal of the Korea institute for structural maintenance and inspection, v.22 no.6, pp.53-62, 2018
  • Following the accelerating speed of trains and the rising demand for large-volume transport capacity, not only in Korea but around the world, track structures for trains have improved steadily. Precast concrete slab track (PST), a concrete track structure, was developed as a system that can satisfy new safety and economic requirements for railroad traffic. The purpose of this study is to provide information required for the future development and design of the system by analyzing the behavior of each structural member of the PST system. The stress distributions for combinations of appropriate loads according to the KRL-2012 train load and the KRC code were analyzed by three-dimensional finite element analysis, and results for different thicknesses of the grouting layer are also presented. Among the structural members, the largest stress occurred in the grouting layer, and this stress changed sensitively with the layer thickness and the load combination. Compared with applying only the vertical KRL-2012 load, the stress increased by 3.3 times on the concrete panel and 14.1 times on the HSB under the starting load and the temperature load. When the thickness of the grouting layer increased from 20 mm to 80 mm, the stress generated on the concrete panel decreased by 4%, while the stress on the grouting layer increased by 24%. As for cracking, tension cracking occurred locally in the grouting layer. These results indicate that, when developing PST systems, more attention should be paid to flexural and tensile behavior under horizontal loads than under vertical loads. In addition, the safety of each structural member should be ensured by maintaining a grouting layer thickness of 40 mm or more.

Video Scene Detection using Shot Clustering based on Visual Features (시각적 특징을 기반한 샷 클러스터링을 통한 비디오 씬 탐지 기법)

  • Shin, Dong-Wook;Kim, Tae-Hwan;Choi, Joong-Min
    • Journal of Intelligence and Information Systems, v.18 no.2, pp.47-60, 2012
  • Video data is unstructured and complex in form. As the importance of efficient management and retrieval of video data increases, studies on video parsing based on the visual features of video content have been conducted to reconstruct video data into a meaningful structure. Early studies on video parsing focused on splitting video data into shots, but detecting shot boundaries, which are defined physically, does not consider the semantic associations within video data. Recently, studies that use clustering methods to organize semantically associated shots into video scenes, which are defined by semantic boundaries, have been actively pursued. Previous studies on video scene detection apply clustering algorithms based on similarity measures between shots that depend mainly on color features. However, correctly identifying a shot or scene and detecting gradual transitions such as dissolves, fades, and wipes is difficult, because the color features of video data are noisy and change abruptly when an unexpected object intervenes. In this paper, to solve these problems, we propose the Scene Detector using Color histogram, corner Edge, and Object color histogram (SDCEO), which detects video scenes by clustering similar shots belonging to the same event based on visual features including the color histogram, the corner edge, and the object color histogram. SDCEO is notable in that it uses the edge feature together with the color feature, and as a result it effectively detects gradual transitions as well as abrupt ones. SDCEO consists of the Shot Bound Identifier and the Video Scene Detector; the Shot Bound Identifier comprises a Color Histogram Analysis step and a Corner Edge Analysis step. In the Color Histogram Analysis step, SDCEO uses the color histogram feature to organize shot boundaries. The color histogram, which records the percentage of each quantized color among all pixels in a frame, was chosen for its good performance, as also reported in other work on content-based image and video analysis. To organize shot boundaries, SDCEO joins associated sequential frames into shots by measuring the similarity of the color histograms between frames. In the Corner Edge Analysis step, SDCEO identifies the final shot boundaries using the corner edge feature, comparing the corner edge features of the last frame of the previous shot and the first frame of the next shot. In the Key-frame Extraction step, SDCEO compares each frame with all frames in the same shot, measures similarity using the Euclidean distance between histograms, and selects the frame most similar to all the others as the key-frame. The Video Scene Detector clusters associated shots belonging to the same event using hierarchical agglomerative clustering based on visual features including the color histogram and the object color histogram, and organizes the final video scenes by repeated clustering until the similarity distance between shots falls below a threshold h. We constructed a prototype of SDCEO and carried out experiments on manually constructed baseline data; the results, a shot boundary detection precision of 93.3% and a video scene detection precision of 83.3%, are satisfactory.
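The first step of the Shot Bound Identifier, cutting a shot where the color-histogram distance between consecutive frames is large, can be sketched as follows. The bin count and threshold are illustrative assumptions; the abstract specifies only that a quantized color histogram and Euclidean distance are used.

```python
import numpy as np

# Sketch of histogram-based shot cutting: declare a new shot wherever the
# Euclidean distance between the normalized color histograms of two
# consecutive frames exceeds a threshold. Bins/threshold are illustrative.

def color_histogram(frame, bins=16):
    """Normalized histogram of quantized pixel values in [0, 256)."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return hist / hist.sum()

def shot_boundaries(frames, threshold=0.5):
    """Indices i where a new shot starts between frames[i-1] and frames[i]."""
    cuts = []
    for i in range(1, len(frames)):
        h1 = color_histogram(frames[i - 1])
        h2 = color_histogram(frames[i])
        if np.linalg.norm(h1 - h2) > threshold:  # Euclidean distance
            cuts.append(i)
    return cuts
```

The full SDCEO pipeline then refines these cuts with the corner-edge comparison and clusters the resulting shots into scenes.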

Cellular Protective Effects of Peanut Sprout Root Extracts (땅콩나물 뿌리 추출물의 세포 보호 효과)

  • Jo, Na Rae;Park, Chan Il;Park, Chae Won;Shin, Dong Han;Hwang, Yoon Chan;Kim, Yong Hyun;Park, Soo Nam
    • Applied Chemistry for Engineering, v.23 no.2, pp.183-189, 2012
  • In this study, the cellular protective effects and antioxidative properties of peanut sprout root extracts were investigated. The cellular protective effects of the extracts on the rose-bengal-sensitized photohemolysis of human erythrocytes were examined. The ethyl acetate fraction exhibited a cellular protective effect in a concentration-dependent manner. In particular, the aglycone fraction showed a prominent cellular protective effect over the concentration range 5~50 µg/mL, more effective than (+)-α-tocopherol, a known lipid peroxidation chain blocker. The reactive oxygen species (ROS) scavenging activities (OSC50) of the extracts against ROS generated in the Fe³⁺-EDTA/H₂O₂ system were investigated using a luminol-dependent chemiluminescence assay. The ethyl acetate fraction (OSC50, 1.59 µg/mL) showed ROS scavenging activity similar to that of L-ascorbic acid (1.50 µg/mL), a known strong antioxidant. On the other hand, the order of free radical (1,1-diphenyl-2-picrylhydrazyl, DPPH) scavenging activity (FSC50) was (+)-α-tocopherol > 80% MeOH extract > aglycone fraction > ethyl acetate fraction. These results indicate that peanut sprout root extracts can function as antioxidants in biological systems, particularly in skin exposed to solar UV radiation, by scavenging ¹O₂ and other ROS and protecting cellular membranes against ROS.

Bactericidal Efficacy of a Fumigation Disinfectant with Ortho-phenylphenol as an Active Ingredient Against Pseudomonas aeruginosa and Enterococcus hirae (Ortho-phenylphenol을 주성분을 하는 훈증소독제의 Pseudomonas aeruginosa와 Enterococcus hirae에 대한 살균효과)

  • Cha, Chun-Nam;Park, Eun-Kee;Kim, Yongpal;Yu, Eun-Ah;Yoo, Chang-Yeol;Hong, Il-Hwa;Kim, Suk;Lee, Hu-Jang
    • Journal of Food Hygiene and Safety, v.29 no.1, pp.60-66, 2014
  • This test was performed to evaluate the bactericidal efficacy of a fumigation disinfectant containing 20% ortho-phenylphenol against Pseudomonas aeruginosa (P. aeruginosa) and Enterococcus hirae (E. hirae). In preliminary tests, the P. aeruginosa and E. hirae working culture suspension numbers (N values) were 2.8×10⁸ and 4.0×10⁸ CFU/mL, respectively, and all the colony numbers on the carriers exposed to the fumigant (n1, n2, n3) were higher than 0.5N1 (the number of bacteria in the test suspension by the pour plate method), 0.5N2 (the number by the filter membrane method), and 0.5N1, respectively. In addition, the mean numbers of P. aeruginosa and E. hirae recovered from the control carriers (T values) were 2.8×10⁸ and 3.4×10⁶ CFU/mL, respectively. In the bactericidal efficacy test, the log reductions (d values) were 6.46 and 5.19 log CFU/mL, respectively. According to the French standard for fumigants, the d value of an effective bactericidal fumigant should exceed 5 log CFU/mL. Based on these results, the fumigation disinfectant containing 20% ortho-phenylphenol has effective bactericidal activity and can be applied to disinfect food materials and kitchen appliances contaminated with these pathogenic bacteria.
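The pass/fail criterion above is a log10 reduction: d = log10(control count) − log10(surviving count), which must exceed 5 log CFU/mL. A minimal check, where the surviving count of 97 CFU/mL is an illustrative value chosen to reproduce the reported d = 6.46 from the reported control count, not a figure from the paper:

```python
import math

# d value = log10 reduction between control carriers and fumigated carriers;
# the French fumigant standard requires d > 5 log CFU/mL.

def log_reduction(control_count, treated_count):
    """Log10 reduction (d value) from control to treated counts (CFU/mL)."""
    return math.log10(control_count) - math.log10(treated_count)

def passes_french_standard(control_count, treated_count, required=5.0):
    """True if the disinfectant meets the d > 5 log CFU/mL criterion."""
    return log_reduction(control_count, treated_count) > required
```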

Development of Deep Learning Structure to Improve Quality of Polygonal Containers (다각형 용기의 품질 향상을 위한 딥러닝 구조 개발)

  • Yoon, Suk-Moon;Lee, Seung-Ho
    • Journal of IKEEE, v.25 no.3, pp.493-500, 2021
  • In this paper, we propose a deep learning structure for improving the quality of polygonal containers. The structure consists of convolution layers, bottleneck layers, fully connected layers, and a softmax layer. A convolution layer obtains a feature image by applying several 3x3 convolution filters to the input image or to the feature image of the previous layer. A bottleneck layer selects only the optimal features from the feature image extracted by the convolution layers, reducing the channels with a 1x1 convolution followed by ReLU and then performing a 3x3 convolution with ReLU. A global average pooling operation performed after the bottleneck layers reduces the size of the feature image by keeping only the optimal features. The network then produces its output through six fully connected layers, and the softmax layer combines the values of the input nodes with the weights of the target nodes and converts the result into a value between 0 and 1 through an activation function. After training is completed, the recognition process classifies non-circular glass bottles by acquiring an image with a camera, detecting the position, and classifying the bottle with the trained deep network, as in the training process. To evaluate the performance of the proposed structure, an experiment was conducted at an authorized testing institute: a good/defective discrimination accuracy of 99% was obtained, on par with the world's highest level, and the inspection time averaged 1.7 seconds, within the operating time standards of production processes using non-circular machine vision systems. Therefore, the effectiveness of the proposed deep learning structure for improving the quality of polygonal containers was demonstrated.
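Two pieces of the described structure are simple enough to state exactly: global average pooling, which collapses each feature-image channel to its mean, and the final softmax, which maps the fully connected outputs to class probabilities. A minimal sketch with illustrative shapes (the convolution and bottleneck layers themselves are omitted):

```python
import numpy as np

# Global average pooling and softmax as used at the end of a CNN classifier.

def global_average_pooling(feature_maps):
    """(H, W, C) feature image -> (C,) vector: one mean per channel."""
    return feature_maps.mean(axis=(0, 1))

def softmax(logits):
    """Map fully connected outputs to probabilities summing to 1."""
    z = logits - logits.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()
```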

Development of Rainfall-runoff Analysis Algorithm on Road Surface (도로 표면 강우 유출 해석 알고리즘 개발)

  • Jo, Jun Beom;Kim, Jung Soo;Kwak, Chang Jae
    • Ecology and Resilient Infrastructure, v.8 no.4, pp.223-232, 2021
  • In general, stormwater flows across the road surface, especially in urban areas, and is discharged through drainage grate inlets on the road. The appropriate evaluation of road drainage capacity is essential not only in the design of roads and inlets but also in the design of sewer systems. However, methods of road surface flow analysis that reflect topographical and hydraulic conditions have not been fully developed. Therefore, an enhanced method of road surface flow analysis should be established by examining the existing analysis methods, namely the flow analysis module (uniform versus varied flow) and the flow travel time (critical versus fixed duration). In this study, an algorithm based on varied and uniform flow analysis was developed to analyze road surface flow patterns. Numerical analyses applying the uniform and varied flow analysis modules, with travel time as a parameter, were conducted to estimate the rainfall-runoff characteristics under various road conditions using the developed algorithm. A two-lane road width (6 m) and road slopes (longitudinal slope of 1-10%, transverse road slope of 2%, and transverse gutter slope of 2-10%) were considered. In addition, surface flow collects in the gutter along the road slope and drains through the gutter at the downstream end; the gutter width was set to 0.5 m. The simulation results revealed that the runoff characteristics were affected by the road slope conditions, and the varied flow analysis module adequately reflected the gutter flow, which changes along the downstream direction as road surface flow collects in the gutter. The varied flow analysis module simulated flow travel times 11.80% longer on average (max. 23.66%) and total road surface discharges 4.73% larger on average (max. 9.50%) than the uniform flow analysis module. To estimate road runoff accurately, it was appropriate to perform flow analysis applying the critical duration and the varied flow analysis module. The developed algorithm is expected to be usable in road drainage design because it accurately simulates the runoff characteristics on the road surface.
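A standard building block of the uniform-flow module that the study compares against is Manning's equation applied to a triangular gutter section. The sketch below is a generic hydraulics illustration, not the paper's algorithm; the roughness coefficient, the simplified wetted perimeter, and the example geometry are assumptions.

```python
import math

# Uniform-flow gutter capacity from Manning's equation,
# Q = (1/n) * A * R^(2/3) * sqrt(S), for a triangular gutter section:
# spread T (m), transverse slope Sx, longitudinal slope S_long.

def gutter_capacity(T, Sx, S_long, n=0.016):
    """Discharge (m^3/s) of a triangular gutter under uniform flow."""
    d = T * Sx                 # flow depth at the curb
    A = 0.5 * T * d            # triangular flow area
    P = T + d                  # simplified wetted perimeter
    R = A / P                  # hydraulic radius
    return (1.0 / n) * A * R ** (2.0 / 3.0) * math.sqrt(S_long)
```

Varied-flow analysis replaces this single uniform calculation with a discharge that grows along the gutter as surface flow joins it, which is why it predicts longer travel times.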

Estimation of Food Commodity Intakes from the Korea National Health and Nutrition Examination Survey Databases: With Priority Given to Intake of Perilla Leaf (국민건강영양조사 자료를 이용한 식품 섭취량 산출 방법 개발: 들깻잎 섭취량을 중심으로)

  • Kim, Seung Won;Jung, Junho;Lee, Joong-Keun;Woo, Hee Dong;Im, Moo-Hyeog;Park, Young Sig;Ko, Sanghoon
    • Food Engineering Progress, v.14 no.4, pp.307-315, 2010
  • The safety and security of the food supply should be one of the primary responsibilities of any government. Estimating a nation's food commodity intakes is important for controlling potential risks in food systems, since food hazards are often associated with the quality and safety of food commodities. The food intake databases provided by the Korea National Health and Nutrition Examination Survey (KNHANES) are good resources for estimating demographic intakes of various food commodities. A limitation of the KNHANES databases, however, is that the surveyed food intakes are based not on commodities but on ingredients and their mixtures. In this study, calculation strategies were applied to convert the intakes of ingredient mixtures from the KNHANES into food commodity intakes. For example, perilla leaf, consumed with meat, raw fish, and other foods in Korean diets, was used to estimate Korean intakes and to develop algorithms for demographic analysis. Koreans consume raw, blanched, steamed, and canned perilla leaf products. The average daily intakes of perilla leaf were analyzed demographically, for example by gender and age. The average daily intakes of total perilla leaf were 2.03±0.27 g in 1998, 2.11±0.26 g in 2001, 2.29±0.27 g in 2005, 2.75±0.35 g in 2007, and 2.27±0.20 g in 2008. In general, people aged 20 or over showed higher perilla leaf intakes than those under 20. This study should contribute to estimating intakes of possible chemical contaminants, such as residual pesticides, and to subsequent analysis of their potential risk.
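The conversion strategy described above, splitting a dish-level intake record into commodity intakes, can be sketched with a recipe-fraction table. The dishes and fractions below are invented illustrations; the study's actual conversion tables are not given in the abstract.

```python
# Sketch: convert KNHANES-style dish intakes into commodity intakes using
# a table of grams of commodity per gram of dish. All entries illustrative.

RECIPE_FRACTIONS = {
    "grilled meat wrap": {"perilla leaf": 0.05, "beef": 0.60},
    "perilla leaf, raw": {"perilla leaf": 1.00},
}

def commodity_intake(records, commodity):
    """Sum one commodity's intake (g) over (dish, grams eaten) records."""
    total = 0.0
    for dish, grams in records:
        total += grams * RECIPE_FRACTIONS.get(dish, {}).get(commodity, 0.0)
    return total
```

Summing such per-respondent totals by gender or age group yields the demographic intake tables reported in the study.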

Anomaly Detection for User Action with Generative Adversarial Networks (적대적 생성 모델을 활용한 사용자 행위 이상 탐지 방법)

  • Choi, Nam woong;Kim, Wooju
    • Journal of Intelligence and Information Systems, v.25 no.3, pp.43-62, 2019
  • At one time, the anomaly detection field was dominated by methods that judged abnormality from statistics derived from specific data. This methodology worked because data used to be low-dimensional, so classical statistical methods were effective. However, as data characteristics have grown complex in the era of big data, it has become difficult to accurately analyze and predict industrial data in the conventional way. Supervised learning algorithms such as SVMs and decision trees were therefore adopted. However, a supervised model can only predict test data accurately when the class distribution is balanced, whereas most data generated in industry has imbalanced classes, so the predictions of supervised models are not always valid. To overcome these drawbacks, many studies now use unsupervised models that are not influenced by the class distribution, such as autoencoders or generative adversarial networks. In this paper, we propose a method for detecting anomalies in user action sequences using generative adversarial networks. AnoGAN, introduced by Schlegl et al. (2017), is a convolutional-network model that performs anomaly detection on medical images. In contrast, research on anomaly detection for sequence data with generative adversarial networks is scarce compared to image data. Li et al. (2018) proposed a model based on LSTM, a type of recurrent neural network, to classify anomalies in numerical sequence data, but it was not applied to categorical sequence data, nor did it use the feature matching method of Salimans et al. (2016). This suggests that there is room for further study of anomaly classification for sequence data with generative adversarial networks. To learn sequence data, the generative adversarial network is built from LSTMs: the generator is a 2-stacked LSTM with 32-dim and 64-dim hidden unit layers, and the discriminator is an LSTM with a 64-dim hidden unit layer. Existing work on anomaly detection for sequence data derives anomaly scores from the entropy of the probabilities of the actual data, but in this paper, as mentioned above, anomaly scores are derived using the feature matching technique. In addition, the process of optimizing the latent variable was designed with an LSTM to improve model performance. The modified generative adversarial model was more precise than the autoencoder in all experiments and approximately 7% higher in accuracy. In terms of robustness, the generative adversarial network also outperformed the autoencoder: because it learns the data distribution from real categorical sequence data, it is not swayed by a single dominant normal pattern, whereas the autoencoder is. The robustness test showed an accuracy of 92% for the autoencoder versus 96% for the generative adversarial network, and sensitivities of 40% versus 51%. We also conducted experiments measuring how performance changes with the optimization structure of the latent variable; sensitivity improved by about 1%. These results offer a new perspective on optimizing the latent variable, which had previously received little attention.
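The feature-matching anomaly score used in place of the entropy-based score can be sketched as follows. This is an illustrative reduction of the idea: the `features` callable stands in for the trained LSTM discriminator's intermediate layer, and the generated sequence stands in for the generator's output at the optimized latent variable; none of these names come from the paper.

```python
import numpy as np

# Sketch of a feature-matching anomaly score (after Salimans et al., 2016):
# score an input by the L2 distance between the discriminator's
# intermediate features for the real sequence and for its GAN
# reconstruction. A large distance marks the input as anomalous.

def feature_matching_score(features, real_seq, generated_seq):
    """Anomaly score = ||features(real) - features(generated)||_2."""
    return float(np.linalg.norm(features(real_seq) - features(generated_seq)))
```

A sequence the generator can reproduce well scores near zero; one far from the learned distribution scores high.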

Economic Impact of HEMOS-Cloud Services for M&S Support (M&S 지원을 위한 HEMOS-Cloud 서비스의 경제적 효과)

  • Jung, Dae Yong;Seo, Dong Woo;Hwang, Jae Soon;Park, Sung Uk;Kim, Myung Il
    • KIPS Transactions on Computer and Communication Systems, v.10 no.10, pp.261-268, 2021
  • Cloud computing is a computing paradigm in which users utilize computing resources on a pay-as-you-go basis. In a cloud system, resources can be dynamically scaled up and down on demand, so the total cost of ownership can be reduced. Modeling and Simulation (M&S) is a well-known simulation-based approach for obtaining engineering analyses and results through CAE software without physical experiments. In general, M&S is used in Finite Element Analysis (FEA), Computational Fluid Dynamics (CFD), Multibody Dynamics (MBD), and optimization. The M&S workflow is divided into pre-processing, analysis, and post-processing steps. Pre- and post-processing are GPU-intensive jobs consisting of 3D modeling via CAE software, whereas analysis is CPU- or GPU-intensive. Because a general-purpose desktop needs a long time to analyze complicated 3D models, CAE software requires a high-end CPU- and GPU-based workstation to run smoothly; in other words, executing M&S absolutely requires high-performance computing resources. To mitigate the cost of equipping such substantial computing resources, we propose the HEMOS-Cloud service, an integrated cloud and cluster computing environment that provides CAE software and computing resources to users in industry and academia who want to use M&S. In this paper, the economic ripple effect of the HEMOS-Cloud service was analyzed using inter-industry analysis. The results estimated with the expert-guided coefficients are a production inducement effect of KRW 7.4 billion, a value-added effect of KRW 4.1 billion, and an employment-inducing effect of 50 persons per KRW 1 billion.
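In inter-industry (input-output) analysis, the production inducement effect is obtained by propagating a final-demand vector through the Leontief inverse, x = (I − A)⁻¹ f, where A is the input coefficient matrix. The 2-sector matrix and demand vector below are illustrative placeholders, not the study's coefficients.

```python
import numpy as np

# Sketch of the production inducement calculation in input-output analysis:
# total output x induced by final demand f is x = (I - A)^(-1) @ f.

def production_inducement(A, final_demand):
    """Total sector outputs induced by a final-demand vector."""
    n = A.shape[0]
    leontief_inverse = np.linalg.inv(np.eye(n) - A)
    return leontief_inverse @ final_demand

# Illustrative 2-sector input coefficients and final demand (KRW billion).
A = np.array([[0.2, 0.1],
              [0.3, 0.4]])
f = np.array([100.0, 50.0])
x = production_inducement(A, f)
```

The induced output always exceeds the final demand itself, because each round of intermediate purchases generates further production.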