• Title/Summary/Keyword: Design graph

Search Results: 683

Design for Deep Learning Configuration Management System using Block Chain (딥러닝 형상관리를 위한 블록체인 시스템 설계)

  • Bae, Su-Hwan;Shin, Yong-Tae
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.14 no.3
    • /
    • pp.201-207
    • /
    • 2021
  • Deep learning, a type of machine learning, updates its weights as it progresses through each step of the learning process. TensorFlow and Keras provide the final results of learning in graph form, so if an error occurs, the result must be discarded. Existing technologies therefore provide a function to roll back learning results, but the rollback is limited to the five most recent results. Systems that apply the concept of MLOps can track the deep learning process, but provide no rollback capability. In this paper, we design a system that records the intermediate values of the learning process on a blockchain so that training can be rolled back in the event of an error. The deep learning process and the rollback of learning results are implemented as Smart Contracts that perform the functions of the blockchain. Performance evaluation shows that the proposed method has a 100% recovery rate, whereas the recovery rate of the existing technique falls after 6 rollbacks, down to 10% at 50 rollbacks. In addition, when using Smart Contracts on the Ethereum blockchain, approximately 1.57 million won is consumed per block creation.
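The record-and-rollback idea can be illustrated independently of any particular blockchain. The sketch below is a minimal, hypothetical stand-in (not the authors' Ethereum/Smart Contract design): per-epoch weight snapshots are chained by hashes, so any recorded epoch can be restored regardless of how many steps back it lies.

```python
import hashlib
import json

class CheckpointChain:
    """Hash-chained record of per-epoch weight snapshots (a toy stand-in
    for the paper's blockchain ledger; all names are illustrative)."""

    def __init__(self):
        self.blocks = []

    def commit(self, epoch, weights):
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = json.dumps({"epoch": epoch, "weights": weights, "prev": prev},
                          sort_keys=True)
        self.blocks.append({"epoch": epoch, "weights": weights, "prev": prev,
                            "hash": hashlib.sha256(body.encode()).hexdigest()})

    def rollback(self, epoch):
        # Every recorded epoch stays retrievable, so the recovery rate does
        # not degrade with the number of steps rolled back.
        for block in reversed(self.blocks):
            if block["epoch"] == epoch:
                return block["weights"]
        raise KeyError(f"no checkpoint for epoch {epoch}")

chain = CheckpointChain()
for epoch in range(50):
    chain.commit(epoch, [0.1 * epoch])  # stand-in for real weight tensors
restored = chain.rollback(7)
```

Tampering with any stored snapshot breaks the hash chain from that block onward, which is the property the blockchain provides in the proposed system.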

Analysis Program for Offshore Wind Energy Substructures Embedded in AutoCAD (오토캐드 환경에서 구현한 해상풍력 지지구조 해석 프로그램)

  • James Ban;Chuan Ma;Sorrasak Vachirapanyakun;Pasin Plodpradit;Goangseup Zi
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.27 no.4
    • /
    • pp.33-44
    • /
    • 2023
  • Wind power is one of the most efficient and reliable energy sources in the transition to a low-carbon society. In particular, offshore wind power provides a higher-quality and more stable wind resource than onshore wind power, while both offer a higher installed capacity than other renewables. In this paper, we present our new program, X-WIND, which is well suited to the assessment of the substructures of offshore wind turbines. We developed this program to increase the usability of analysis programs for offshore wind energy substructures by addressing the shortcomings of existing programs. Unlike existing programs, which cannot perform the substructure analyses on their own or lack pre/post processors, X-WIND can complete the assessment analysis for offshore wind turbines alone. The program is embedded in AutoCAD, so both design and analysis are performed on a single platform. It also performs static and dynamic analysis for the wind, wave, and current loads essential for offshore wind structures, and includes pre/post processors for design, mesh generation, graph plotting, and code checking. With this expertise, our program enhances the usability of analysis programs for offshore wind energy substructures, promoting convenience and efficiency.

Acoustic Emission (AE) Technology-based Leak Detection System Using Macro-fiber Composite (MFC) Sensor (Macro fiber composite (MFC) 센서를 이용한 음향방출 기술 기반 배관 누수 감지 시스템)

  • Jaehyun Park;Si-Maek Lee;Beom-Joo Lee;Seon Ju Kim;Hyeong-Min Yoo
    • Composites Research
    • /
    • v.36 no.6
    • /
    • pp.429-434
    • /
    • 2023
  • In this study, aimed at improving on existing acoustic emission sensors for real-time monitoring, a macro-fiber composite (MFC) transducer was employed as the acoustic emission sensor in a gas leak detection system. Prior to implementation, structural analysis was conducted to optimize the MFC's design. The flexibility of the MFC allowed excellent adherence to curved pipes, enabling the reception of acoustic emission (AE) signals without complications. Analysis of the AE signals revealed substantial variations in parameter values for both high-pressure and low-pressure leaks. Notably, in the parameters of the Fast Fourier Transform (FFT) graph, the change amounted to 120% to 626% for high-pressure leaks compared to the no-leak case, and approximately 9% to 22% for low-pressure leaks. Furthermore, the magnitude of change in the parameters tended to decrease as the distance from the leak site increased. As a result, in the future it will be possible not only to detect a leak from the amount of parameter change, but also to identify the location of the leak from that change.
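The percent-change metric on a spectral peak can be reproduced on synthetic data. The sketch below is purely illustrative (the signals, bin, and amplitudes are hypothetical, not the paper's AE measurements): it compares the dominant FFT-graph peak of a "no-leak" and a "leak" signal using a plain discrete Fourier transform.

```python
import cmath
import math

def dft_magnitudes(x):
    """Naive discrete Fourier transform magnitudes (O(N^2), fine for a sketch)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

N, BIN = 64, 5  # hypothetical record length and dominant frequency bin

# Synthetic stand-ins: a leak is modeled as a larger amplitude at the same bin
baseline = [1.0 * math.sin(2 * math.pi * BIN * t / N) for t in range(N)]  # no leak
leak = [3.0 * math.sin(2 * math.pi * BIN * t / N) for t in range(N)]      # leak present

base_peak = dft_magnitudes(baseline)[BIN]
leak_peak = dft_magnitudes(leak)[BIN]
percent_change = 100.0 * (leak_peak - base_peak) / base_peak  # here: ~200%
```

Thresholding this percent change per parameter is one way such a detection rule could operate; attenuation of the change with distance is what would allow coarse localization.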

Evaluation of Data-based Expansion Joint-gap for Digital Maintenance (디지털 유지관리를 위한 데이터 기반 교량 신축이음 유간 평가)

  • Jongho Park;Yooseong Shin
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.28 no.2
    • /
    • pp.1-8
    • /
    • 2024
  • The expansion joint is installed to accommodate the expansion of the superstructure and must ensure a sufficient gap throughout its service life. The detailed guidelines for safety inspection and precise safety diagnosis of bridges specify damage due to an insufficient or excessive gap, but provide few standards for determining abnormal behavior of the superstructure. In this study, a data-based maintenance approach was proposed by continuously monitoring the gap data of the same expansion joints. A total of 2,756 data points were collected from 689 expansion joints, taking seasonal effects into account. We developed a method to evaluate changes in the expansion joint gap that can analyze thermal movement from four or more data points at the same location, classified the factors that affect superstructure behavior, and analyzed the influence of each factor through deep learning and explainable artificial intelligence (AI). Abnormal behavior of the superstructure was classified into narrowing and functional failure through the expansion joint-gap evaluation graph. The influence-factor analysis using deep learning and explainable AI is considered reliable because its results can be explained by the existing expansion-gap calculation formula and bridge design.
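The thermal-movement analysis from four or more readings at one location reduces to fitting a gap-versus-temperature slope and comparing it with the expected thermal movement. The sketch below uses hypothetical readings and a hypothetical 40 m steel-girder span (not the study's data); the expected slope comes from the standard relation dL = alpha * L * dT, so the gap should close at about alpha * L per degree.

```python
# Hypothetical joint-gap readings (mm) at one location across seasons,
# with the air temperature (deg C) recorded at inspection time
temps = [-5.0, 8.0, 16.0, 31.0]
gaps = [78.0, 71.5, 67.8, 60.9]

# Least-squares slope of gap vs. temperature (mm of gap per deg C)
n = len(temps)
mean_t = sum(temps) / n
mean_g = sum(gaps) / n
slope = (sum((t - mean_t) * (g - mean_g) for t, g in zip(temps, gaps))
         / sum((t - mean_t) ** 2 for t in temps))

# Expected behavior for a 40 m steel span: d(gap)/dT = -alpha * L,
# because the gap closes as the deck expands
alpha, span_mm = 1.2e-5, 40_000.0   # illustrative thermal coefficient and length
expected_slope = -alpha * span_mm   # -0.48 mm per deg C
```

A fitted slope far from the expected value (or a drifting intercept across years) is the kind of signal such an evaluation graph would flag as narrowing or functional failure.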

A Study on the Capacity Review of One-lane Hi-pass Lanes on Highways : Focusing on Using Bootstrapping Techniques (고속도로 단차로 하이패스차로 용량 검토에 관한 연구 : 부트스트랩 기법 활용 중심으로)

  • Bosung Kim;Donghee Han
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.23 no.3
    • /
    • pp.1-16
    • /
    • 2024
  • The present highway design guidelines state that the capacity of a one-lane hi-pass lane is 2,000 veh/h for a mainline toll plaza and 1,700 veh/h for an interchange toll plaza. However, a study conducted in the early 2010s reported the capacity of the mainline toll plaza as 1,476 to 1,665 veh/h/ln and the capacity of the interchange toll plaza as 1,443 veh/h/ln. Accordingly, this study examined the validity of the currently proposed capacity of the highway one-lane hi-pass lane. Based on individual vehicle passing data collected from one-lane hi-pass gantries in 2021, the speed-flow relationship graph and headways were used to calculate and compare capacities. In addition, the bootstrapping technique was introduced to make use of the headways, and new processing methods for the collected data were reviewed. As a result of the analysis, the one-lane hi-pass capacity could be estimated at 1,700 veh/h/ln for the interchange toll plaza and at least 1,700 veh/h/ln for the mainline toll plaza. Moreover, by applying the bootstrap technique to the headway data, it was possible to produce an estimated capacity similar to the observed capacity.
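The headway-to-capacity step with bootstrapping can be sketched as follows. The headway values here are hypothetical stand-ins for the 2021 gantry data; the conversion assumes capacity (veh/h/ln) is 3600 divided by the mean headway in seconds, and the bootstrap gives a distribution of that estimate by resampling with replacement.

```python
import random
import statistics

random.seed(42)  # make the resampling reproducible

# Hypothetical per-vehicle headways (seconds) at a one-lane hi-pass gantry
headways = [1.8, 2.0, 2.1, 2.2, 1.9, 2.3, 2.0, 2.1, 2.4, 1.7,
            2.2, 2.0, 1.9, 2.1, 2.3, 2.0, 2.2, 1.8, 2.1, 2.0]

def capacity_veh_per_hour(sample):
    # Capacity (veh/h/ln) from the mean headway in seconds
    return 3600.0 / statistics.mean(sample)

# Bootstrap: resample the headways with replacement and re-estimate capacity
estimates = sorted(
    capacity_veh_per_hour(random.choices(headways, k=len(headways)))
    for _ in range(2000)
)
point_estimate = statistics.median(estimates)
ci_low, ci_high = estimates[49], estimates[1949]  # ~95% percentile interval
```

With mean headways near 2.1 s this yields estimates around 1,700 veh/h/ln, which is the order of magnitude the study reports; the percentile interval shows how tightly the data pin the capacity down.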

Effects of Ingredients on the Quality Characteristics of Kimchi during Fermentation (부재료가 김치의 품질 특성에 미치는 영향)

  • Ku, Kyung-Hyung;Sunwoo, Ji-Young;Park, Wan-Soo
    • Journal of the Korean Society of Food Science and Nutrition
    • /
    • v.34 no.2
    • /
    • pp.267-276
    • /
    • 2005
  • This study was carried out to investigate the effects of Kimchi ingredients (garlic, ginger, green onion, and fermented fish sauces) on Kimchi characteristics during fermentation. The experiment used a central composite design and response surface methodology. Garlic (X1) at 0-2%, ginger (X2) at 0-1.4%, green onion (X3) at 0-4%, and fermented fish sauces (shrimp, X4; anchovy, X5) at 0-2% per 100 g of salted Chinese cabbage were taken as the independent variables. In the response surface regression analysis of the independent (ingredient) and dependent variables, the correlation coefficient (R²) differed considerably according to the added ingredient. Kimchi samples with garlic (X1) and ginger (X2) as the fixed independent variables generally showed higher correlations over the fermentation period than samples fixed on garlic (X1)-green onion (X3) or ginger (X2)-green onion (X3). The correlation coefficients (R²) for the fermented fish sauces (shrimp, X4; anchovy, X5) were above 0.8 for all Kimchi characteristics except the textural properties in the sensory evaluation. In the response-surface graphs for the fermented fish sauces, titratable acidity, lactic acid bacteria counts, redness ('a'), and yellowness ('b') increased slightly with increasing fish sauce addition. Total acceptability in the sensory evaluation increased with fish sauce content in the initial fermentation period, whereas in the middle (appropriate fermentation) and late (excessive fermentation) periods the samples with 1.0% fermented fish sauce scored highest.

Semantic Process Retrieval with Similarity Algorithms (유사도 알고리즘을 활용한 시맨틱 프로세스 검색방안)

  • Lee, Hong-Joo;Klein, Mark
    • Asia pacific journal of information systems
    • /
    • v.18 no.1
    • /
    • pp.79-96
    • /
    • 2008
  • One of the roles of Semantic Web services is to execute dynamic intra-organizational services, including the integration and interoperation of business processes. Since different organizations design their processes differently, retrieval of similar semantic business processes is necessary to support inter-organizational collaboration. Most approaches for finding services that have certain features and support certain business processes have relied on some type of logical reasoning and exact matching. This paper presents our approach of using imprecise matching to expand the results from an exact matching engine when querying the OWL (Web Ontology Language) MIT Process Handbook. The MIT Process Handbook is an electronic repository of best-practice business processes, intended to help people (1) redesign organizational processes, (2) invent new processes, and (3) share ideas about organizational practices. In order to use the MIT Process Handbook for process retrieval experiments, we had to export it into an OWL-based format: we model the Process Handbook meta-model in OWL and export the processes in the Handbook as instances of the meta-model. Next, we need a sizable number of queries and their corresponding correct answers in the Process Handbook. Many previous studies devised artificial datasets composed of randomly generated numbers without real meaning and used subjective ratings for correct answers and similarity values between processes. To generate a semantics-preserving test data set, we create 20 variants of each target process that are syntactically different but semantically equivalent, using mutation operators. These variants represent the correct answers for the target process. We devise diverse similarity algorithms based on the values of process attributes and the structures of business processes.
We use simple similarity algorithms for text retrieval, such as TF-IDF and Levenshtein edit distance, to devise our approaches, and utilize a tree edit distance measure because semantic processes appear to have a graph structure. We also design similarity algorithms that consider the similarity of process structure, such as part processes, goals, and exceptions. Since we can identify relationships between a semantic process and its subcomponents, this information can be utilized to calculate similarities between processes. Dice's coefficient and the Jaccard similarity measure are used to calculate the portion of overlap between processes in diverse ways. We perform retrieval experiments to compare the performance of the devised similarity algorithms, measuring retrieval performance in terms of precision, recall, and the F measure, the harmonic mean of precision and recall. The tree edit distance shows the poorest performance on all measures. TF-IDF and the method combining TF-IDF with Levenshtein edit distance perform better than the other devised methods; these two measures focus on similarity between the names and descriptions of processes. In addition, we calculate the rank correlation coefficient, Kendall's tau-b, between the number of process mutations and the ranking of similarity values among the mutation sets. In this experiment, similarity measures based on process structure, such as Dice's, Jaccard, and derivatives of these measures, show greater coefficients than measures based on the values of process attributes. However, the Lev-TFIDF-JaccardAll measure, which considers process structure and attribute values together, shows reasonably better performance in these two experiments. For retrieving semantic processes, it appears better to consider diverse aspects of process similarity, such as process structure and the values of process attributes.
In summary, we generate semantic process data and a dataset for retrieval experiments from the MIT Process Handbook repository, suggest imprecise query algorithms that expand the retrieval results from an exact matching engine such as SPARQL, and compare the retrieval performance of the similarity algorithms. As limitations and future work, we need to perform experiments with datasets from other domains, and, since there are many similarity values from diverse measures, we may find better ways to identify relevant processes by applying these values simultaneously.
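The simplest of the measures named above can be sketched directly; the token sets below are hypothetical stand-ins for process names and subcomponents, not data from the Process Handbook.

```python
def levenshtein(s, t):
    """Edit distance between two strings via dynamic programming."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (cs != ct)))  # substitution
        prev = cur
    return prev[-1]

def jaccard(a, b):
    """|A ∩ B| / |A ∪ B| over token sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def dice(a, b):
    """2|A ∩ B| / (|A| + |B|) over token sets."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical subcomponent tokens of two business processes
p1 = {"receive", "order", "check", "stock"}
p2 = {"receive", "order", "ship", "goods"}
```

On these sets Jaccard gives 1/3 and Dice gives 1/2, illustrating how Dice weights the overlap more heavily; the structure-aware measures in the paper apply such overlap scores across part processes, goals, and exceptions.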

Design of a Bit-Serial Divider in GF(2^m) for Elliptic Curve Cryptosystem (타원곡선 암호시스템을 위한 GF(2^m)상의 비트-시리얼 나눗셈기 설계)

  • 김창훈;홍춘표;김남식;권순학
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.27 no.12C
    • /
    • pp.1288-1298
    • /
    • 2002
  • To implement an elliptic curve cryptosystem over GF(2^m) at high speed, a fast divider is required. Although a bit-parallel architecture is well suited to high-speed division, an elliptic curve cryptosystem requires a large m (at least 163) to provide sufficient security. In other words, since the bit-parallel architecture has an area complexity of O(m²), it is not suited to this application. In this paper, we propose a new serial-in serial-out systolic array for computing division in GF(2^m) using the standard basis representation. Based on a modified version of the binary extended greatest common divisor algorithm, we obtain a new data dependence graph and design an efficient bit-serial systolic divider. The proposed divider has O(m) time complexity and O(m) area complexity. If input data arrive continuously, the proposed divider can produce division results at a rate of one per m clock cycles, after an initial delay of 5m-2 cycles. Analysis shows that the proposed divider provides a significant reduction in both chip area and computational delay compared to previously proposed systolic dividers with the same I/O format. Since the proposed divider can perform division at high speed with reduced chip area, it is well suited to the division circuit of an elliptic curve cryptosystem. Furthermore, since the proposed architecture does not restrict the choice of irreducible polynomial and has a unidirectional data flow and regularity, it provides high flexibility and scalability with respect to the field size m.
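The arithmetic being systolized can be prototyped in software. The sketch below is a plain bit-level reference implementation (not the systolic array itself): standard-basis GF(2^m) multiplication, inversion via the extended Euclidean algorithm for polynomials over GF(2), and division as multiplication by an inverse. The toy field GF(2^4) with p(x) = x^4 + x + 1 is chosen only to keep the demo small; the paper targets m of at least 163.

```python
M = 4            # field degree (illustrative; ECC uses m >= 163)
POLY = 0b10011   # irreducible polynomial x^4 + x + 1

def gf_mul(a, b):
    """Multiply in GF(2^M), reducing modulo POLY bit by bit."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):   # degree reached M: reduce
            a ^= POLY
    return r

def gf_inv(a):
    """Invert a nonzero element via the extended Euclidean algorithm
    for polynomials over GF(2) (the GCD family the divider is based on)."""
    u, v = a, POLY
    g1, g2 = 1, 0
    while u != 1:
        j = u.bit_length() - v.bit_length()
        if j < 0:
            u, v, g1, g2, j = v, u, g2, g1, -j
        u ^= v << j        # cancel the leading term of u
        g1 ^= g2 << j      # mirror the operation on the cofactor
    return g1

def gf_div(b, a):
    """Compute b / a in GF(2^M)."""
    return gf_mul(b, gf_inv(a))
```

The loop in `gf_inv` performs one leading-term cancellation per step, which is the kind of regular, bit-level operation that maps onto a systolic cell.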

Calculation of Unit Hydrograph from Discharge Curve, Determination of Sluice Dimension and Tidal Computation for Determination of the Closure curve (단위유량도와 비수갑문 단면 및 방조제 축조곡선 결정을 위한 조속계산)

  • 최귀열
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.7 no.1
    • /
    • pp.861-876
    • /
    • 1965
  • During my stay in the Netherlands I studied the following, primarily in relation to the Mokpo Yong-san project, which had been studied by NEDECO for a feasibility report. 1. Unit hydrograph at Naju. There are many ways to derive a unit hydrograph, but here I explain how to derive one from an actual run-off curve at Naju. A discharge curve produced by one rain storm depends on the rainfall intensity per hour. After finding the hydrograph every two hours, the two-hour unit hydrograph is obtained by dividing each ordinate of the two-hour hydrograph by the rainfall intensity. I used one storm, from June 24 to June 26, 1963, with an average rainfall intensity of 9.4 mm per hour for 12 hours. If several rain gauge stations had already been established in the catchment area above Naju prior to this storm, I could have gathered accurate data on the rainfall intensity throughout the catchment area. As it was, I used the automatic rain gauge record of the Mokpo meteorological station to determine the rainfall intensity. To develop the unit hydrograph at Naju, I subtracted the base flow from the total runoff. I also tried to keep the difference between the calculated and measured discharge below 10%. The discharge period of a unit graph depends on the length of the catchment area. 2. Determination of sluice dimensions. According to the design principles presently used in our country, a one-day storm with a frequency of 20 years must be discharged in 8 hours. These design criteria are not adequate, and several dams have washed out in past years. The design of the spillway and sluice dimensions must be based on the maximum peak discharge flowing into the reservoir, to avoid crop and structural damage. The total flow into the reservoir is the sum of the flow described by the Mokpo hydrograph, the base flow from all the catchment areas, and the rainfall on the reservoir area.
To calculate the amount of water discharged through the sluice (per half hour), the average head during that interval must be known. This can be calculated from the known water level outside the sluice (determined by the tide) and from an estimated water level inside the reservoir at the end of each time interval. The total amount of water discharged through the sluice can then be calculated from this average head, the time interval, and the cross-sectional area of the sluice. From the inflow into the reservoir and the outflow through the sluice gates, I calculated the change in the volume of water stored in the reservoir at half-hour intervals. From the stored volume of water and the known storage capacity of the reservoir, I was able to calculate the water level in the reservoir. The calculated water level in the reservoir must equal the estimated water level. Mean tide is adequate for determining the sluice dimensions, because spring tide is the worst case and neap tide the best for the result of the calculation. 3. Tidal computation for determination of the closure curve. During the construction of a dam, whether by building up a succession of horizontal layers or by building in from both sides, the velocity of the water flowing through the closing gap will increase because of the gradual decrease in the cross-sectional area of the gap. I calculated the velocities in the closing gap during flood and ebb for the first-mentioned method of construction until the cross-sectional area had been reduced to about 25% of the original area, the change in tidal movement within the reservoir being negligible. Up to that point, the increase in velocity is more or less hyperbolic. During the closing of the last 25% of the gap, less water can flow out of the reservoir. This causes a rise of the mean water level of the reservoir. The difference in hydraulic head is then no longer negligible and must be taken into account.
When, during the course of construction, the submerged weir becomes a free weir, critical flow occurs. The critical flow is the point, during either ebb or flood, at which the velocity reaches a maximum. When the dam is raised further, the velocity decreases because of the decrease in the height of the water above the weir. The calculation of the currents and velocities for a stage in the closure of the final gap is done in the following manner. Using an average tide with a negligible daily inequality, I estimated the water level on the upstream side of the dam (inner water level). I determined the current through the gap for each hour by multiplying the storage area by the increment of the rise in water level. The velocity at a given moment can be determined from the calculated current in m³/sec and the cross-sectional area at that moment. At the same time, from the difference between the inner water level and the tidal level (outer water level), the velocity can be calculated with the formula $h = \frac{V^2}{2g}$ and must equal the velocity determined from the current. If there is a difference in velocity, a new estimate of the inner water level must be made and the entire procedure repeated. When the higher water level is equal to or more than 2/3 of the difference between the lower water level and the crest of the dam, we speak of a "free weir." The flow over the weir is then dependent on the higher water level only, and not on the difference between the high and low water levels. When the weir is "submerged," that is, when the higher water level is less than 2/3 of the difference between the lower water level and the crest of the dam, the difference between the high and low levels is decisive. The free weir normally occurs first during ebb, because the mean level in the estuary is higher than the mean level of the tide. In building dams with barges, the maximum velocity in the closing gap may not be more than 3 m/sec. As the maximum velocities here are higher than this limit, we must use other construction methods to close the gap. This can be done with dump-cars from each side or by using a cableway.
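The unit-hydrograph derivation in part 1 reduces to simple arithmetic: subtract the base flow from the total runoff, then scale each ordinate to one unit of rainfall excess. The sketch below uses hypothetical ordinates, not the 1963 Naju record, and assumes for illustration that all rainfall becomes direct runoff.

```python
# Hypothetical two-hourly total discharge ordinates (m^3/s) and constant base flow
total_flow = [12.0, 30.0, 52.0, 40.0, 26.0, 16.0, 12.0]
base_flow = 12.0

# Rainfall of the design storm: average 9.4 mm/h for 12 h, expressed in cm
rainfall_intensity_mm_per_hr = 9.4
duration_hr = 12
rain_depth_units = rainfall_intensity_mm_per_hr * duration_hr / 10.0  # mm -> cm

# Direct runoff = total flow minus base flow; unit hydrograph = direct runoff
# divided by the rainfall depth, so it corresponds to 1 cm of rainfall excess
direct_runoff = [q - base_flow for q in total_flow]
unit_hydrograph = [q / rain_depth_units for q in direct_runoff]
```

In practice the divisor should be the rainfall excess (rainfall minus losses), checked so that the calculated and measured discharge volumes agree within the 10% tolerance the author mentions.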


Science Integrated Process Skill of the Students in Science Education Center for the Gifted (과학영재교육원 학생들의 과학 통합 탐구 능력)

  • Jeong, Eunyoung;Kwon, Yi-young;Yang, Joo-sung;Ko, Yu-mi
    • Journal of Science Education
    • /
    • v.37 no.3
    • /
    • pp.525-537
    • /
    • 2013
  • The purpose of this study was to investigate the science integrated process skills of students in a science education center for the gifted. To do this, the 'free-response test for the assessment of science process skills' developed by Yu-Hyang Kim (2013) was administered to 102 students (15 in the elementary school science class, 58 in middle school science class I, and 29 in middle school science class II) attending the program of the science education center for the gifted at C university. The assessment tool measured 9 skills: formulating inquiry questions, recognizing variables, formulating hypotheses, designing experiments, transforming data, interpreting data, drawing conclusions, formulating generalizations, and evaluating the designed experiments. The students in the science education center for the gifted had relatively high scores in 'formulating hypotheses' and 'recognizing variables', but relatively low scores in 'transforming data', 'interpreting data', and 'evaluating the designed experiments'. Two items had percentages of correct answers below 40%: one involves drawing a line graph in 'transforming data', and the other requires finding improvements to the experimental design in 'evaluation'. There was no significant difference in science integrated process skills between boys' and girls' scores, or between the scores of students in the field of biology and those in the other fields (physics, chemistry, and earth science). There was, however, a significant difference in 'formulating generalizations' according to the length of time students had received gifted education. Teaching and learning in the program of the science education center for the gifted should focus on improving science integrated process skills, and suitable teaching and learning materials need to be developed.
