• Title/Summary/Keyword: R&D input


Experimental and model study on the mixing effect of injection method in UV/H2O2 process

  • Heekyong Oh;Pyonghwa Jang;Jinseok Hyung;Jayong Koo;SungKyu Maeng
    • Membrane and Water Treatment
    • /
    • v.14 no.3
    • /
    • pp.129-140
    • /
    • 2023
  • The appropriate injection of H2O2 is essential to produce hydroxyl radicals (OH·): H2O2 must be mixed quickly and the resulting solution exposed to UV irradiation. This study evaluated the H2O2 mixing device as a design factor of a UV/H2O2 AOP pilot plant treating surface water. Both experimental and model-based analyses were used to evaluate the mixing effect of three devices available for H2O2 injection: a tubular hollow pipe, an elliptical inline mixer, and a nozzle-type injection mixer. Computational fluid dynamics (CFD) analysis was employed to model and simulate the mixing devices. After passage through each device, the elliptical inline mixer showed the highest uniformity (95%), followed by the nozzle mixer (83%) and the hollow pipe (only 18%), indicating that the elliptical inline mixer was the most effective at mixing H2O2 into the bulk flow. Regarding the pressure drop between the inlet and outlet of the pipe, the elliptical inline mixer exhibited the highest value, 15.8 kPa, which is unfavorable for operation, whereas the nozzle mixer and hollow pipe showed similarly small pressure drops of 0.4 kPa and 0.3 kPa, respectively. The experimental study showed that the elliptical inline and nozzle-type injection mixers delivered low H2O2 concentrations (less than 5 mg/L) within 10% of the input value, indicating that both mixers are appropriate for the H2O2 concentration and mixing intensity required by the UV/H2O2 AOP process. Additionally, the elliptical inline mixer proved more stable than the nozzle-type injection mixer when highly concentrated pollutants entered the UV/H2O2 AOP process. A suitable mixing device should therefore be selected to meet the desired range of H2O2 concentration in the AOP process.
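The 95%/83%/18% figures above are a cross-sectional uniformity index; in CFD mixing studies this is commonly defined from the coefficient of variation of the tracer concentration over a pipe cross-section. A minimal sketch under that assumption (the sample values are hypothetical, not from the paper):

```python
import statistics

def mixing_uniformity(concentrations):
    # CV-based uniformity index: U = 1 - pstdev/mean of the tracer
    # concentration sampled over a cross-section; U = 1.0 is perfectly mixed.
    mean = statistics.fmean(concentrations)
    return 1.0 - statistics.pstdev(concentrations) / mean

# Hypothetical cross-sectional H2O2 samples (mg/L) downstream of a mixer
well_mixed = [4.9, 5.0, 5.1, 5.0, 4.95, 5.05]
poorly_mixed = [0.5, 9.5, 1.0, 8.0, 0.8, 10.2]

print(round(mixing_uniformity(well_mixed), 3))   # 0.987
print(mixing_uniformity(well_mixed) > mixing_uniformity(poorly_mixed))  # True
```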

A Modified grid-based KIneMatic wave STOrm Runoff Model (ModKIMSTORM) (I) - Theory and Model - (격자기반 운동파 강우유출모형 KIMSTORM의 개선(I) - 이론 및 모형 -)

  • Jung, In Kyun;Lee, Mi Seon;Park, Jong Yoon;Kim, Seong Joon
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.28 no.6B
    • /
    • pp.697-707
    • /
    • 2008
  • The grid-based KIneMatic wave STOrm Runoff Model (KIMSTORM) by Kim (1998) predicts the temporal variation and spatial distribution of overland flow, subsurface flow, and stream flow in a watershed. The model, programmed in C++ on the Unix operating system, adopts a single-flowpath algorithm for the water-balance simulation of flow at each grid element. In this study, we improved the model by converting the code to FORTRAN 90 on the MS Windows operating system and named the result ModKIMSTORM. The improved functions are: addition of the GAML (Green-Ampt & Mein-Larson) infiltration model; control of the paddy runoff rate by flow depth and Manning's roughness coefficient; addition of a baseflow layer; treatment of both spatially distributed and point rainfall data; development of pre- and post-processors; and development of an automatic model-evaluation function using five evaluation criteria (Pearson's coefficient of determination, the Nash-Sutcliffe model efficiency, the deviation of runoff volume, the relative error of the peak runoff rate, and the absolute error of the time to peak runoff). The modified model adopts the Shell sort algorithm to enhance computational performance. Input data are accepted in raster and MS Excel formats, and model outputs, viz. soil moisture, discharge, flow depth and velocity, are generated in BSQ, ASCII grid, binary grid and raster formats.
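Two of the five evaluation criteria named above have compact closed forms; a sketch of the Nash-Sutcliffe efficiency and the deviation of runoff volume (the hydrograph values are hypothetical):

```python
def nash_sutcliffe(obs, sim):
    # NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
    # 1.0 is a perfect fit; values <= 0 mean the model is no better
    # than simply predicting the observed mean.
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def volume_deviation(obs, sim):
    # Dv (%) = 100 * (simulated volume - observed volume) / observed volume
    return 100.0 * (sum(sim) - sum(obs)) / sum(obs)

# Hypothetical hourly hydrographs (m^3/s)
obs = [1.0, 3.0, 8.0, 5.0, 2.0]
sim = [1.2, 2.8, 7.5, 5.4, 2.1]
print(round(nash_sutcliffe(obs, sim), 3))  # 0.984
print(round(volume_deviation(obs, sim), 2))
```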

Optimum Size Selection and Machinery Costs Analysis for Farm Machinery Systems - Programming for Personal Computer - (농기계(農機械) 투입모형(投入模型) 설정(設定) 및 기계이용(機械利用) 비용(費用) 분석연구(分析硏究) - PC용(用) 프로그램 개발(開發) -)

  • Lee, W.Y.;Kim, S.R.;Jung, D.H.;Chang, D.I.;Lee, D.H.;Kim, Y.H.
    • Journal of Biosystems Engineering
    • /
    • v.16 no.4
    • /
    • pp.384-398
    • /
    • 1991
  • A computer program was developed to select the optimum size of farm machinery and analyze its operation costs under various farming conditions. It was written in the FORTRAN 77 and BASIC languages and can be run on any personal computer supporting the Korean Standard Complete Type and Korean Language Code. The program was designed to be user-friendly, so that users can easily carry out cost analysis for the whole farm operation or for individual operations in rice production, and for plowing, rotary tilling and pest control in upland fields. The program can simultaneously analyze three different machines for plowing and rotary tilling, and two machines each for transplanting, pest control and harvesting. The input data are the sizes of arable lands, the possible working days and number of laborers during the optimum working period, and custom rates, which vary by region and individual farming conditions. The outputs include the selected optimum combination of farm machines, the surplus or shortage of working days relative to the planned working period, the capacities of the machines, break-even points by custom rate, fixed costs per month, and utilization costs per hectare.
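The break-even point by custom rate mentioned above is the area at which owning a machine costs the same as hiring the work out; a sketch with hypothetical cost figures (this is the standard machinery-cost break-even relation, not code from the 1991 program):

```python
def break_even_area(fixed_cost, variable_cost_per_ha, custom_rate_per_ha):
    # Ownership cost = fixed + variable * A; custom-hire cost = rate * A.
    # Setting them equal gives the break-even area A = fixed / (rate - variable).
    return fixed_cost / (custom_rate_per_ha - variable_cost_per_ha)

# Hypothetical figures (in 1,000 won): annual fixed cost 2,000,
# variable cost 50 per ha, custom rate 150 per ha
print(break_even_area(2000, 50, 150))  # 20.0 ha
```

Below this area, custom hire is cheaper; above it, ownership pays off.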


A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.167-181
    • /
    • 2018
  • Over the past decade, deep learning has been in the spotlight among machine learning algorithms. In particular, the CNN (Convolutional Neural Network), known as an effective solution for recognizing and classifying images or voices, has been popularly applied to classification and prediction problems. In this study, we investigate how to apply a CNN to business problem solving. Specifically, this study proposes to apply a CNN to stock market prediction, one of the most challenging tasks in machine learning research. Since CNNs are strong at interpreting images, the model proposed in this study adopts a CNN as a binary classifier that predicts the stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements to predict future financial price movements. Our proposed model, named CNN-FG (Convolutional Neural Network using Fluctuation Graph), consists of five steps. In the first step, it divides the dataset into intervals of 5 days. In step 2, it creates time series graphs for the divided dataset. Each graph is drawn as a 40 × 40-pixel image, with the graph of each independent variable drawn in a different color. In step 3, the model converts the images into matrices: each image becomes a combination of three matrices expressing the color values on the R (red), G (green) and B (blue) scales. In the next step, it splits the dataset of graph images into training and validation datasets: 80% of the total dataset was used for training and the remaining 20% for validation. Finally, CNN classifiers are trained on the images of the training dataset.
Regarding the parameters of CNN-FG, we adopted two convolution filters (5 × 5 × 6 and 5 × 5 × 9) in the convolution layers. In the pooling layers, a 2 × 2 max pooling filter was used. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend and the other for a downward trend). The activation function for the convolution layers and the hidden layers was ReLU (Rectified Linear Unit), and that for the output layer was the Softmax function. To validate CNN-FG, we applied it to the prediction of the KOSPI200 over 2,026 days in eight years (2009 to 2016). To match the proportions of the two groups of the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset from 80% of the total dataset (1,560 samples) and the validation dataset from the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators popularly used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), the A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on those graphs can be effective in terms of prediction accuracy.
Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
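Reading the architecture literally as described (40 × 40 RGB input, conv 5 × 5 × 6, 2 × 2 max pooling, conv 5 × 5 × 9, hidden layers of 900 and 32, softmax over 2 classes; the paper may differ in padding and ordering details), the trainable-parameter counts can be sanity-checked with plain arithmetic:

```python
def conv_params(kh, kw, in_ch, out_ch):
    # Each filter has kh*kw*in_ch weights plus one bias.
    return (kh * kw * in_ch + 1) * out_ch

def dense_params(n_in, n_out):
    # Fully connected layer: n_in weights plus one bias per output node.
    return (n_in + 1) * n_out

print(conv_params(5, 5, 3, 6))   # 456   (first conv over the 3 RGB channels)
print(conv_params(5, 5, 6, 9))   # 1359  (second conv over 6 feature maps)
print(dense_params(900, 32))     # 28832 (hidden 900 -> hidden 32)
print(dense_params(32, 2))       # 66    (hidden 32 -> softmax over 2 classes)
```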

Development of KD- Propeller Series using a New Blade Section (새로운 날개단면을 이용한 KD-프로펠러 씨리즈 개발)

  • J.T. Lee;M.C. Kim;J.W. Ahn;H.C. Kim
    • Journal of the Society of Naval Architects of Korea
    • /
    • v.28 no.2
    • /
    • pp.52-68
    • /
    • 1991
  • A new propeller series is developed using the newly developed blade section (KH18 section), which exhibits better cavitation characteristics and a higher lift-drag ratio over a wide range of angles of attack. The pitch and camber distributions are designed to give the same radial and chordwise loading distributions with the selected circumferentially averaged wake input. Since the geometric particulars of the series propellers, such as the chord length, thickness, skew and rake distributions, are selected by regression of recent full-scale propeller geometric data, the performance prediction of a propeller at the preliminary design stage can be more realistic. The series propellers have 4 blades, and the expanded blade area ratios are 0.3, 0.45, 0.6 and 0.75. Mean pitch ratios of 0.5, 0.65, 0.8, 0.95 and 1.1 are selected for each expanded area ratio. The new series is composed of 20 propellers and is named the KD (KRISO-DAEWOO) propeller series. Propeller open-water tests were performed at the experimental towing tank, and cavitation observation tests and fluctuating pressure measurements were carried out at the cavitation tunnel of KRISO. Bp-δ curves, which can be used to select the optimum propeller diameter at the preliminary design stage, are derived from a regression analysis of the propeller open-water test results. The KD cavitation chart is derived from the cavitation observation test results, choosing the local maximum lift coefficient and the local cavitation number as parameters. The cavity extent of a propeller can be predicted more accurately at the preliminary design stage by using the KD cavitation chart, since it is derived from cavitation observation tests in the selected ship's wake, whereas existing cavitation charts, such as Burrill's, are derived from test results in uniform flow.
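The Bp-δ charts mentioned above use Taylor's design coefficients; a minimal sketch (the design point is hypothetical, and units follow the traditional convention of rpm, delivered horsepower, knots and feet):

```python
def taylor_bp(n_rpm, p_dhp, va_knots):
    # Taylor power coefficient: Bp = N * sqrt(P) / Va^2.5
    # (N in rpm, P in delivered horsepower, Va speed of advance in knots)
    return n_rpm * p_dhp ** 0.5 / va_knots ** 2.5

def taylor_delta(n_rpm, d_ft, va_knots):
    # Taylor advance coefficient: delta = N * D / Va (D in feet)
    return n_rpm * d_ft / va_knots

# Hypothetical design point: 120 rpm, 10,000 DHP, 15 knots, 20 ft diameter
print(round(taylor_bp(120, 10000, 15), 2))  # 13.77
print(taylor_delta(120, 20, 15))            # 160.0
```

At the preliminary design stage one enters the chart at the computed Bp, reads off the optimum δ, and recovers the diameter from D = δ·Va/N.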


Development of the forecasting model for import volume by item of major countries based on economic, industrial structural and cultural factors: Focusing on the cultural factors of Korea (경제적, 산업구조적, 문화적 요인을 기반으로 한 주요 국가의 한국 품목별 수입액 예측 모형 개발: 한국의, 한국에 대한 문화적 요인을 중심으로)

  • Jun, Seung-pyo;Seo, Bong-Goon;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.4
    • /
    • pp.23-48
    • /
    • 2021
  • The Korean economy has achieved continuous growth for the past several decades thanks to the government's export-strategy policy. This increase in exports plays a leading role in driving Korea's economic growth by improving economic efficiency, creating jobs, and promoting technology development. Traditionally, the main factors affecting Korea's exports can be viewed from two perspectives: economic factors and industrial structural factors. First, economic factors relate to exchange rates and global economic fluctuations. The impact of the exchange rate on Korea's exports depends on both the exchange-rate level and its volatility. Global economic fluctuations affect global import demand, which is an absolute factor influencing Korea's exports. Second, industrial structural factors are unique characteristics of particular industries or products, such as the slowing international division of labor, increased domestic substitution of certain imported goods by China, and changes in the overseas production patterns of major export industries. Looking at the most recent studies on global exchanges, several show the importance of cultural aspects as well as economic and industrial structural factors. Therefore, this study attempted to develop a forecasting model that considers cultural factors along with economic and industrial structural factors in estimating each country's import volume from Korea. In particular, this study approaches the influence of cultural factors on imports of Korean products from the perspective of a PUSH-PULL framework. The PUSH dimension reflects Korea developing and actively promoting its own brand, and can be defined as each country's degree of interest in Korean brands represented by K-POP, K-FOOD, and K-CULTURE. The PULL dimension is centered on the cultural and psychological characteristics of the people of each country.
This can be defined as how inclined they are to accept the Korean Wave as part of their own cultural code, represented by the country's governance system, masculinity, risk avoidance, and short-term/long-term orientation. A unique feature of this study is that the proposed final prediction model can be selected based on design principles. The design principles we present are as follows: 1) the model reflects interest in Korea and cultural characteristics through newly added data sources; 2) it is designed in a practical and convenient way, so that a forecast can be recalled immediately by inputting changes in economic factors, an item code and a country code; and 3) to derive theoretically meaningful results, an algorithm was selected that can interpret the relationship between the inputs and the target variable. This study suggests meaningful implications from the technical, economic and policy perspectives, and is expected to make a meaningful contribution to the export support strategies of small and medium-sized enterprises through the use of the import forecasting model.

Regeneration Processes of Nutrients in the Polar Front Area of the East Sea IV. Chlorophyll a Distribution, New Production and the Vertical Diffusion of Nitrate (동해 극전선역의 영양염류 순환과정 IV. Chlorophyll a 분포, 신생산 및 질산염의 수직확산)

  • MOON Chang-Ho;YANG Sung-Ryull;YANG Han-Soeb;CHO Hyun-Jin;LEE Seung-Yong;KIM Seok-Yun
    • Korean Journal of Fisheries and Aquatic Sciences
    • /
    • v.31 no.2
    • /
    • pp.259-266
    • /
    • 1998
  • A study on the biological and chemical characteristics of the middle East Sea of Korea was carried out at 31 stations during October 11-18, 1995 on board the R/V Tam-Yang. The chlorophyll a concentration, new and regenerated production, and the vertical diffusion of nitrate through the thermocline were investigated. In the vertical distribution of chlorophyll a, subsurface maxima were observed near the thermocline at most stations, including the frontal zone, except at the southern stations, where the maximum chlorophyll a concentration occurred at the surface. Nanophytoplankton was the dominant fraction, comprising 83.5% of total phytoplankton cell numbers, but netphytoplankton were common at the southern stations, where the dominant species were Rhizosolenia sp. Nitrogenous new and regenerated production were measured using the stable-isotope 15N nitrate and ammonium uptake method. The vertically integrated nitrogen production varied between 8.470 and 72.945 mg N m⁻² d⁻¹. The f-ratio, the fraction of primary production that is new production, varied between 0.03 and 0.72, indicating that 3% to 72% of primary production was supported by the input of nutrients from below the euphotic zone, with the rest supported by ammonium recycled within the euphotic layer. This range of f-ratios spans characteristics from extremely oligotrophic to eutrophic areas. The differences in productivity and f-ratio among stations were related to the frontal structure and the bottom topography: values were high near the frontal zone and low outside it, and the station near Ulleung Island showed the highest f-ratio. Vertical diffusion coefficients were calculated both from the water-column stability (Kz-1), following King and Devol's equation (1979), and from the new nitrogen requirement (Kz-2). The values of Kz-2 (0.11-0.55 cm²/s) were relatively low compared with previously reported values.
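The f-ratio reported above has a simple definition; a sketch with hypothetical station values:

```python
def f_ratio(new_production, regenerated_production):
    # f = new / (new + regenerated) nitrogen production.
    # High f: production fueled by nitrate supplied from below the euphotic
    # zone; low f: production fueled by ammonium recycled within it.
    return new_production / (new_production + regenerated_production)

# Hypothetical station values (mg N m-2 d-1)
print(round(f_ratio(20.0, 40.0), 2))  # 0.33 -> mostly regenerated production
print(round(f_ratio(50.0, 20.0), 2))  # 0.71 -> strong new-nutrient input
```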


Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy-logic-based control theory has gained much interest in the industrial world thanks to its ability to formalize and solve, in a very natural way, many problems that are very difficult to quantify at an analytical level. This paper presents a solution for handling membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. Memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having triangular or trapezoidal shapes, or to pre-defined shapes. These kinds of functions can cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to the computation of intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the values of the membership functions at those points [3,10,14,15]. Such a solution provides a satisfying computational speed and a very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, a significant waste of memory can also occur: for each of the given fuzzy sets, many elements of the universe of discourse may have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few.
More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, although it does not restrict the shapes of the membership functions, strongly reduces the computation time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common for fuzzy controllers and to which we refer in the following. The term set has a universe of discourse with 128 elements (to give a good resolution), 8 fuzzy sets that describe the term set, and 32 discretization levels for the membership values. The numbers of bits necessary for these specifications are 5 for the 32 truth levels, 3 for the 8 membership functions and 7 for the 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by Length = nfm * (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values at any element of the universe of discourse, dm(m) is the dimension (in bits) of a membership value, and dm(fm) is the dimension of the word representing the index of the corresponding membership function. In our case, then, Length = 3 * (5 + 3) = 24 and the memory dimension is therefore 128 * 24 bits. Had we chosen to memorize all values of the membership functions, each memory row would have had to store the membership value of every fuzzy set, i.e. 8 * 5 bits per row, giving a memory dimension of 128 * 40 bits. Coherently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value for at most three fuzzy sets.
Focusing on the elements 32, 64 and 96 of the universe of discourse, they are memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the program memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derived from the rule is produced as output; otherwise the output is zero (fig. 2). Clearly, the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value at each element of the universe of discourse. From our studies in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. In any case, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space; and weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method.
The number of non-null membership values at any element of the universe of discourse is limited. This constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
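The word-length arithmetic in the abstract can be reproduced directly:

```python
def word_length(nfm, dm_m, dm_fm):
    # Length = nfm * (dm(m) + dm(fm)): nfm non-null membership values per
    # element, each stored as dm_m value bits plus dm_fm index bits.
    return nfm * (dm_m + dm_fm)

U = 128          # elements in the universe of discourse (memory rows)
levels_bits = 5  # 32 discretization levels for membership values
sets_bits = 3    # index bits for 8 fuzzy sets

length = word_length(3, levels_bits, sets_bits)
print(length)       # 24 bits per row
print(U * length)   # 3072 bits total, vs. 128 * (8 * 5) = 5120 for full storage
```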


Intelligent Brand Positioning Visualization System Based on Web Search Traffic Information : Focusing on Tablet PC (웹검색 트래픽 정보를 활용한 지능형 브랜드 포지셔닝 시스템 : 태블릿 PC 사례를 중심으로)

  • Jun, Seung-Pyo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.93-111
    • /
    • 2013
  • As Internet and information technology (IT) continues to develop and evolve, the issue of big data has emerged at the foreground of scholarly and industrial attention. Big data is generally defined as data that exceed the range that can be collected, stored, managed and analyzed by existing conventional information systems and it also refers to the new technologies designed to effectively extract values from such data. With the widespread dissemination of IT systems, continual efforts have been made in various fields of industry such as R&D, manufacturing, and finance to collect and analyze immense quantities of data in order to extract meaningful information and to use this information to solve various problems. Since IT has converged with various industries in many aspects, digital data are now being generated at a remarkably accelerating rate while developments in state-of-the-art technology have led to continual enhancements in system performance. The types of big data that are currently receiving the most attention include information available within companies, such as information on consumer characteristics, information on purchase records, logistics information and log information indicating the usage of products and services by consumers, as well as information accumulated outside companies, such as information on the web search traffic of online users, social network information, and patent information. Among these various types of big data, web searches performed by online users constitute one of the most effective and important sources of information for marketing purposes because consumers search for information on the internet in order to make efficient and rational choices. Recently, Google has provided public access to its information on the web search traffic of online users through a service named Google Trends. 
Research that uses this web search traffic information to analyze the information-search behavior of online users is now receiving much attention in academia and industry. Studies using web search traffic information can be broadly classified into two fields. The first consists of empirical demonstrations that web search information can be used to forecast social phenomena, the purchasing power of consumers, the outcomes of political elections, and so on. The other field focuses on using web search traffic information to observe consumer behavior, for example by identifying the product attributes that consumers regard as important or by tracking changes in consumers' expectations, but relatively little research has been completed in this field. In particular, to the best of our knowledge, hardly any brand-related studies have attempted to use web search traffic information to analyze the factors that influence consumers' purchasing activities. This study aims to demonstrate that consumers' web search traffic information can be used to derive the relations among brands and the relations between an individual brand and product attributes. When consumers enter their search words on the web, they may use a single keyword, but they also often enter multiple keywords to seek related information (referred to as simultaneous searching). A consumer performs a simultaneous search either to compare two product brands and obtain information on their similarities and differences, or to acquire more in-depth information about a specific attribute of a specific brand. Web search traffic shows that the quantity of simultaneous searches using certain keywords increases when their relation is closer in the consumer's mind, so it is possible to derive the relations between keywords by collecting this relational data and subjecting it to network analysis.
Accordingly, this study proposes a method of analyzing how brands are positioned by consumers and what relationships exist between product attributes and an individual brand, using simultaneous search traffic information. It also presents case studies demonstrating the actual application of this method, focusing on tablet PCs, an innovative product group.
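The simultaneous-search idea can be sketched as a small co-query network; the brand/attribute terms and counts below are hypothetical stand-ins for the Google-Trends-style data the study uses:

```python
from collections import Counter

# Hypothetical simultaneous-search counts between tablet brands and attributes
co_search = Counter({
    ("iPad", "Galaxy Tab"): 120,
    ("iPad", "display"): 80,
    ("Galaxy Tab", "price"): 95,
    ("iPad", "price"): 40,
})

def neighbors(term, min_count=50):
    # Edges of the positioning network: terms co-searched with `term`
    # at least `min_count` times, strongest first.
    out = []
    for (a, b), n in co_search.items():
        if n >= min_count and term in (a, b):
            out.append((b if a == term else a, n))
    return sorted(out, key=lambda x: -x[1])

print(neighbors("iPad"))  # [('Galaxy Tab', 120), ('display', 80)]
```

Feeding all such edges to a network-analysis or layout tool yields the brand-positioning map described above.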

A Basic Study on the Radiological Characteristics and Disposal Methods of NORM Wastes (공정부산물의 방사선적 특성과 처분방안에 관한 기본 연구)

  • Jeong, Jongtae;Baik, Min-Hoon;Park, Chung-Kyun;Park, Tae-Jin;Ko, Nak-Youl;Yoon, Ki Hoon
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT)
    • /
    • v.12 no.3
    • /
    • pp.217-233
    • /
    • 2014
  • Securing radiological safety is a prerequisite for the safe management of naturally occurring radioactive materials (NORM) that cannot be reused. This has become a crucial focus of our R&D efforts since the implementation of the Act on Protective Action Guidelines against Radiation in the Natural Environment. To secure safety, technical bases and procedures for the radiologically safe disposal of NORM must be established. It is therefore necessary to analyze the characteristics of NORM wastes, collect the relevant data, establish radiological safety assessment methodologies and tools, investigate disposal methods and facilities, and study the effects of the input data on safety. Here, we assess the environmental impact of NORM waste disposal with respect to the major domestic and foreign NORM characteristics. Data associated with the major industries are collected and analyzed, and the status of disposal facilities and methodologies relevant to NORM wastes is investigated. We also suggest a conceptual design of a landfill disposal facility and a management plan with respect to the characteristics of the major NORM wastes. The radionuclide pathways are identified for atmospheric transport and leachate release, and an environmental impact assessment methodology for NORM waste disposal is established using a relevant code. The exposure doses and excess cancer risks from NORM waste disposal are assessed and analyzed using the characteristics of representative domestic NORM wastes, including fly ash, phosphogypsum, and red mud. The results show that the exposure doses and excess cancer risks are too low to cause any radiation effects. This study will contribute to the development of regulatory technology for securing radiological safety relevant to NORM waste disposal and of implementation technology for the Act.