• Title/Summary/Keyword: Step-One Parallel System

Study of Optimal Design Parameter for Gearbox on Wind Power System (풍력발전시스템용 증속기의 최적화 설계요소에 관한 연구)

  • 이근호;성백주;최용혁
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2003.06a
    • /
    • pp.737-741
    • /
    • 2003
  • The wind power system is spotlighted as one of the pollution-free power generation systems. It uses wind as a power source: the wind rotates the blades, and the rotating power from the blades generates electric power. A gearbox is needed to transfer the wind power, which has high-torque, low-speed characteristics, to the generator, which has low-torque, high-speed characteristics. Because a wind power system is generally located in a remote place such as a seaside or mountainside, and the gearbox is installed in a confined, elevated space, the gearbox of a wind power system requires an optimal space design and high reliability. In this paper, a gearbox structure is proposed that achieves optimal space usage and efficiency by compounding a planetary gear train, which has high power density, with a parallel-type gear train, which has a long service life. The design parameters that affect the service life are studied, and the gear ratio and face width are investigated as parameters for the design sensitivity of the service life.

Evolutionary Neural Network based on DNA coding method for Time series prediction (시계열 예측을 위한 DNA코딩 기반의 신경망 진화)

  • 이기열;이동욱;심귀보
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.10 no.4
    • /
    • pp.315-323
    • /
    • 2000
  • In this paper, we propose a method of constructing neural networks using bio-inspired emergent and evolutionary concepts. The method is an algorithm based on the characteristics of biological DNA and the growth of plants. Specifically, we propose a DNA coding method for encoding the production rules of an L-system. The L-system is based on a so-called parallel rewriting mechanism, and the DNA coding method has no limitation in expressing its production rules. Evolutionary algorithms, motivated by Darwinian natural selection, are population-based search methods whose performance is highly dependent on the representation of the solution space. In order to verify the effectiveness of our scheme, we apply it to one-step-ahead prediction of the Mackey-Glass time series, sunspot data, and KOSPI data. (A small sketch of the parallel rewriting an L-system performs is given after this entry.)

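As background for the entry above, the following is a minimal illustrative sketch of the parallel rewriting mechanism of an L-system; the axiom and rule set are the classic Lindenmayer algae example, not anything taken from the paper.

```python
# Minimal L-system sketch: every symbol is rewritten simultaneously at each step
# (parallel rewriting), the mechanism the DNA coding method is used to encode.
def lsystem_rewrite(axiom, rules, steps):
    """Apply the production rules to all symbols in parallel for a number of steps."""
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)  # symbols without a rule are kept as-is
    return s

if __name__ == "__main__":
    rules = {"A": "AB", "B": "A"}          # Lindenmayer's algae rules (illustrative only)
    print(lsystem_rewrite("A", rules, 5))  # -> ABAABABAABAAB
```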

Improving Haskell GC-Tuning Time Using Divide-and-Conquer (분할 정복법을 이용한 Haskell GC 조정 시간 개선)

  • An, Hyungjun;Kim, Hwamok;Liu, Xiao;Kim, Yeoneo;Byun, Sugwoo;Woo, Gyun
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.6 no.9
    • /
    • pp.377-384
    • /
    • 2017
  • The performance improvement of a single-core processor has reached its limit since the circuit density cannot be increased any longer due to overheating. Therefore, multicore and manycore architectures have emerged as viable approaches, and parallel programming becomes more important. Haskell, a purely functional language, is getting popular in this situation since it naturally supports parallel programming owing to its beneficial features, including implicit parallelism in evaluating expressions and monadic tools supporting parallel constructs. However, the performance of Haskell parallel programs is strongly influenced by the performance of the run-time system, including the garbage collector. Though a memory profiling tool, namely GC-tune, has been suggested, we need a more systematic way to use this tool. Since GC-tune finds the optimal memory size by executing the target program with all the different possible GC options, the GC-tuning time takes too long. This paper suggests a basic divide-and-conquer method to reduce the number of GC-tune executions by reducing the search area by one quarter at every search step (a sketch of this idea follows this entry). Applying this method to two parallel programs, a maximal independent set program and a K-means program, the memory tuning time is reduced by a factor of 7.78 with 98% accuracy on average.
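
The following is a rough sketch of a divide-and-conquer search over a two-dimensional space of GC options, assuming that at each step the most promising quadrant (one quarter of the remaining area) is kept; this is one reading of the abstract, not the authors' implementation, and the cost function and parameter ranges are placeholders.

```python
# Divide-and-conquer sketch over a 2D search rectangle of GC parameters.
# cost(x, y) stands for "run time of the target program under GC options x, y".
def dnc_tune(cost, lo, hi, min_span=1):
    while hi[0] - lo[0] > min_span or hi[1] - lo[1] > min_span:
        mx, my = (lo[0] + hi[0]) // 2, (lo[1] + hi[1]) // 2
        quadrants = [((lo[0], lo[1]), (mx, my)), ((mx, lo[1]), (hi[0], my)),
                     ((lo[0], my), (mx, hi[1])), ((mx, my), (hi[0], hi[1]))]
        # Sample the centre of each quadrant and keep only the best quarter of the area.
        lo, hi = min(quadrants, key=lambda q: cost((q[0][0] + q[1][0]) // 2,
                                                   (q[0][1] + q[1][1]) // 2))
    return lo

if __name__ == "__main__":
    # Toy convex cost surface standing in for measured run times.
    toy_cost = lambda x, y: (x - 37) ** 2 + (y - 205) ** 2
    print(dnc_tune(toy_cost, (0, 0), (256, 1024)))
```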

Content-Addressable Systolic Array for Solving Tridiagonal Linear Equation Systems (삼중대각행렬 선형방정식의 해를 구하기 위한 내용-주소법 씨스톨릭 어레이)

  • 이병홍;김정선;채수환
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.16 no.6
    • /
    • pp.556-565
    • /
    • 1991
  • Using the WDZ decomposition algorithm, a parallel algorithm is presented for solving the linear system Ax = b, where A is an n×n nonsingular tridiagonal matrix. For implementing this algorithm, a CAM systolic array is proposed in which each processing element has its own CAM to store the nonzero elements of the tridiagonal matrix. In order to evaluate this array, the presented algorithm is compared to the LU decomposition algorithm (a sequential sketch of that baseline follows this entry). It is found that the execution time of the presented algorithm is reduced to about 1/4 of that of the LU decomposition algorithm. If each computation step can be done in one time unit, the solution of the system of equations is obtained in a systolic fashion, without central control, in 2n+1 time steps.

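For context on the baseline mentioned in the entry above, here is a short sequential sketch of the standard LU-decomposition (Thomas) algorithm for a tridiagonal system; it illustrates the conventional method the systolic array is compared against, not the paper's WDZ-based parallel algorithm.

```python
# Thomas algorithm: sequential LU-style solve of a tridiagonal system Ax = d.
# a: sub-diagonal (a[0] unused), b: main diagonal, c: super-diagonal (c[-1] unused).
def thomas(a, b, c, d):
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

if __name__ == "__main__":
    # A = [[4,1,0],[1,4,1],[0,1,4]], d = [5,6,5]  ->  x = [1, 1, 1]
    print(thomas([0, 1, 1], [4, 4, 4], [1, 1, 0], [5, 6, 5]))
```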

Low Carbon·Green Growth Paradigm for Fisheries Sector (수산부문 저탄소·녹색성장 패러다임)

  • Park, Seong-Kwae;Kwon, Suk-Jae
    • Ocean and Polar Research
    • /
    • v.31 no.1
    • /
    • pp.97-110
    • /
    • 2009
  • Two of the most important topics of the 21st century are ensuring harmony between man and his environment and the emerging long-tail economy, in which niche markets are becoming increasingly important. Since the Industrial Revolution, human beings have increasingly exploited the world's natural capital, such as the natural environment and its ecosystems. Now the world is facing limits to sustainable economic growth because of limits to this natural capital. Thus, most countries are beginning to adopt a new development paradigm, the so-called "Green Development Paradigm," which pursues environmental conservation in parallel with economic growth. Recently, the Korean government announced an ambitious national policy of Low Carbon & Green Growth for the next six decades. This is an important step that transforms the existing national policy into a new future-oriented one. The fisheries sector in particular has great potential for making a substantial contribution to this national policy initiative. For example, the ocean itself, with its sea plants and phytoplankton, has an enormous capacity for fixing carbon, and its vast areas of tidal flats have a tremendous potential for cleaning up pollutants from both the sea and the land. Furthermore, the fishing industry has great potential for the development of fuel-saving biodegradable technologies, and a long-tail economy based on digital technologies can do much to promote the production and consumption of green goods and services derived from the oceans and the fisheries. In order for this potential to be realized, the fisheries authority needs to develop a new green-growth strategy that is practical and widely supported by fishing communities and the markets, taking into account the need for greenhouse gas reduction, conservation of the ocean environment and ecosystems, an improved system for seafood safety, the establishment of a strengthened MCS (monitoring, control and surveillance) system, and the development of coastal ecotourism. In addition, fisheries green policies need to be implemented through a well-organized system of government aids, regulations and compensation, and spontaneous (voluntary) orders in fishing communities should be promoted to encourage far more responsible fisheries.

All-port Broadcasting Algorithms on Wormhole Routed Star Graph Networks (웜홀 라우팅을 지원하는 스타그래프 네트워크에서 전 포트 브로드캐스팅 알고리즘)

  • Kim, Cha-Young;Lee, Sang-Kyu;Lee, Ju-Young
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.29 no.2
    • /
    • pp.65-74
    • /
    • 2002
  • Recently, star networks have been considered by many researchers as attractive alternatives to the widely used hypercube for interconnection networks in parallel processing systems. One of the fundamental communication problems on star graph networks is broadcasting. In this paper, we consider broadcasting problems in star graph networks using wormhole routing. In a wormhole-routed system, minimizing link contention is more critical for system performance than the distance between two communicating nodes. We use Hamiltonian paths in the star graph to set up link-disjoint communication paths. We present a broadcast algorithm in the n-dimensional star graph of N (= n!) nodes such that the total completion time is no larger than $\lceil \log_n n! \rceil + 1$ steps, where $\lceil \log_n n! \rceil + 1$ is the lower bound. This result is a significant improvement over the previous n-1 step broadcasting algorithm. (A small sketch of the star-graph adjacency used here is given after this entry.)
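
As background for the entry above, the following sketch only shows the standard structure of the n-dimensional star graph, not the paper's broadcast algorithm: nodes are permutations of n symbols, and two nodes are adjacent when one is obtained from the other by swapping the first symbol with the i-th symbol, giving n-1 ports per node.

```python
# Star graph adjacency: each node is a permutation; its n-1 neighbours are obtained
# by swapping the first symbol with each of the other positions.
from itertools import permutations

def star_neighbors(node):
    node = list(node)
    out = []
    for i in range(1, len(node)):
        nb = node[:]
        nb[0], nb[i] = nb[i], nb[0]   # swap the first symbol with the (i+1)-th symbol
        out.append(tuple(nb))
    return out

if __name__ == "__main__":
    nodes = list(permutations(range(1, 5)))          # 4-star graph: 4! = 24 nodes
    print(len(nodes), star_neighbors((1, 2, 3, 4)))  # 24 nodes, three neighbours each
```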

Optimized Hardware Design using Sobel and Median Filters for Lane Detection

  • Lee, Chang-Yong;Kim, Young-Hyung;Lee, Yong-Hwan
    • Journal of Advanced Information Technology and Convergence
    • /
    • v.9 no.1
    • /
    • pp.115-125
    • /
    • 2019
  • In this paper, an image is received from a camera and the lane is detected. There are various ways to detect lanes; for edge detection, the Sobel edge detector and the Canny edge detector are commonly used. Multiplication and division are used as little as possible in the hardware design. The design is tested using black-box images recorded from a vehicle. Because the top of the black-box image is mostly background, it is excluded from the calculation. Also, to speed up processing, YCbCr is computed from the image and only the data for the desired lane colors, white and yellow, is used to detect the lane. A median filter is used to remove noise from the images. Median filters excel at noise rejection, but they generally take a long time because all values must be compared; in this paper, the median filter result is obtained using additions, which shortens the processing time. The Sobel edge detector is faster but more sensitive to noise than the Canny edge detector, and this shortcoming is addressed with complementary algorithms. The design also organizes the processing into parallel pipelines. To reduce the memory size, the system does not store all data at each step but uses four line buffers: three line buffers perform the mask operations, and one line buffer stores new data at the same time as the operation. With this design, the memory supports about six times the processing speed and about 33% more capacity than the other methods referenced in this paper. The target operating frequency is 50 MHz. At this frequency, the design processes 2157 fps for 640x360 images, 540 fps for HD images, and 240 fps for Full HD images, so it can handle most 30 fps streams as well as 60 fps streams; at the maximum operating frequency, even larger frame loads can be processed. (A small software sketch of the Sobel and median filtering steps follows this entry.)
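
The following is a software analogue, not the paper's hardware design, of the two image operations the lane-detection pipeline combines: a 3x3 median filter for noise removal followed by a Sobel gradient magnitude; image sizes and data are placeholders.

```python
# Software sketch of the median + Sobel stages of a lane-detection front end.
import numpy as np

def median3x3(img):
    """3x3 median filter over a 2D grayscale image (border pixels left unfiltered)."""
    out = img.copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.median(img[y - 1:y + 2, x - 1:x + 2])
    return out

def sobel_magnitude(img):
    """Gradient magnitude approximated as |Gx| + |Gy| with the 3x3 Sobel masks."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
    ky = kx.T
    h, w = img.shape
    mag = np.zeros_like(img, dtype=np.int32)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = img[y - 1:y + 2, x - 1:x + 2].astype(np.int32)
            mag[y, x] = abs((win * kx).sum()) + abs((win * ky).sum())
    return mag

if __name__ == "__main__":
    frame = np.random.randint(0, 256, (8, 8), dtype=np.uint8)  # stand-in for a camera frame
    edges = sobel_magnitude(median3x3(frame))
    print(edges.shape)
```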

Kinetics and Mechanism for Aquation of cis-[Co(en)$_2$YCl]$^{r+}$ (Y = NH$_3$, NO$_2^-$, NCS$^-$, H$_2$O) in Hg$^{2+}$ Aqueous Solution ($Hg^{2+}$ 수용액 내에서 cis-[Co(en)$_2$YCl]$^{r+}$ (Y = $NH_3$, NO$_2^-$, NCS$^-$, $H_2O$)의 아쿠아 반응속도와 반응메카니즘)

  • Byung-Kak Park;Joo-Sang Lim
    • Journal of the Korean Chemical Society
    • /
    • v.32 no.5
    • /
    • pp.476-482
    • /
    • 1988
  • Kinetic studies and theoretical investigations were made to illustrate the mechanism of the aquation of cis-[Co(en)$_2$YCl]$^{r+}$ (Y = NH$_3$, NO$_2^-$, NCS$^-$, $H_2O$) in $Hg^{2+}$ aqueous solution, followed UV/Vis-spectrophotometrically. The aquation of cis-[Co(en)$_2$YCl]$^{r+}$ has been found to be second order overall, being first order in each of the substrate and the Hg$^{2+}$ catalyst (the corresponding rate law is written out after this entry). The reaction rate increased in the order Y = NH$_3$ < NCS$^-$ < $H_2O$ < $NO_2^-$, these being the groups neighboring Cl. The bond-formation step was found to be rate determining, because the net charge of the central metal ion runs parallel with the observed rate constant. On the basis of the rate-determining step, the kinetic data, and the observed activation parameters, we propose the Id mechanism for this reaction system. The rate equation derived from the proposed mechanism agrees with the observed rate equation.

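For reference, a rate law consistent with the reaction orders stated in the abstract (first order in the complex and first order in the Hg$^{2+}$ catalyst) can be written as follows; this expression is implied by the stated orders rather than quoted from the paper:

$$\text{rate} \;=\; k_{\text{obs}}\,[\textit{cis}\text{-Co(en)}_2\text{YCl}^{r+}]\,[\text{Hg}^{2+}]$$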

A practical design of a direct digital frequency synthesizer with multi-ROM configuration (병렬 구조의 직접 디지털 주파수 합성기의 설계)

  • 이종선;김대용;유영갑
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.21 no.12
    • /
    • pp.3235-3245
    • /
    • 1996
  • A DDFS (Direct Digital Frequency Synthesizer) used in spread spectrum communication systems requires fast switching speed, high resolution (small step size), small size, and low power. The chip has been designed with four parallel sine look-up tables to achieve four times the throughput of a single DDFS. To achieve a high processing speed, a 24-bit pipelined CMOS technique has been applied to the phase accumulator design. To reduce the size of the ROM, each sine ROM of the DDFS stores only the 0 to π/2 portion of the sine wave, taking advantage of the fact that only one quadrant of the sine needs to be stored because of the symmetry of the sine function. Eight bits of the phase accumulator's output are used as ROM addresses, and the 2 MSBs select the quadrant to synthesize the full sine wave (a sketch of this quarter-wave scheme follows this entry). To compensate the spectral purity degraded by phase truncation, the DDFS uses a noise shaper structured like a phase accumulator. The system input clock is divided into clock, clock/2, and clock/4, and all blocks except the MUX block run at the low frequency (clock/4), reducing the power consumption. A 107 MHz DDFS implemented in a 0.8 μm CMOS gate-array technology is presented. The synthesizer covers a bandwidth from DC to 26.5 MHz in steps of 1.48 Hz, with a switching speed of 0.5 μs and a tuning latency of 55 clock cycles. The DDFS synthesizes 10-bit sine waveforms with a spectral purity of -65 dBc. Power consumption is 276.5 mW at 40 MHz and 5 V.

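The following is a behavioral sketch of the quarter-wave scheme described above, not the paper's gate-level design; the ROM address width is an assumed value, while the 24-bit accumulator and 10-bit amplitude follow the abstract. A phase accumulator drives a quarter-sine ROM, and the 2 MSBs of the truncated phase select the quadrant by mirroring the address and negating the sample.

```python
# Behavioral DDFS sketch: 24-bit phase accumulator, quarter-wave sine ROM,
# and 2 MSBs of the truncated phase used for quadrant control.
import math

ACC_BITS = 24    # phase accumulator width (as in the paper)
ADDR_BITS = 8    # quarter-wave ROM address width (assumed here)
AMP_BITS = 10    # output amplitude resolution (10-bit sine, as in the paper)

QUARTER_ROM = [round((2 ** (AMP_BITS - 1) - 1) * math.sin(math.pi / 2 * i / 2 ** ADDR_BITS))
               for i in range(2 ** ADDR_BITS)]

def ddfs(fcw, n_samples, acc=0):
    """Output frequency = fcw / 2**ACC_BITS times the clock frequency."""
    out = []
    for _ in range(n_samples):
        acc = (acc + fcw) & ((1 << ACC_BITS) - 1)       # phase accumulator wraps mod 2^24
        phase = acc >> (ACC_BITS - 2 - ADDR_BITS)       # keep 2 quadrant bits + ROM address
        quadrant, addr = phase >> ADDR_BITS, phase & (2 ** ADDR_BITS - 1)
        if quadrant in (1, 3):                          # 2nd/4th quadrant: mirror the address
            addr = 2 ** ADDR_BITS - 1 - addr
        sample = QUARTER_ROM[addr]
        if quadrant >= 2:                               # 3rd/4th quadrant: negate the sample
            sample = -sample
        out.append(sample)
    return out

if __name__ == "__main__":
    print(ddfs(fcw=2 ** 18, n_samples=8))               # a low output frequency for inspection
```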

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.141-154
    • /
    • 2019
  • Rapid growth of internet technology and social media is in progress, and data mining technology has evolved to enable unstructured document representations in a variety of applications. Sentiment analysis is an important technology that can distinguish poor from high-quality content using the text data about products, and it has proliferated with text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. It has been studied in various directions in terms of accuracy, from simple rule-based approaches to dictionary-based approaches using predefined labels; in fact, sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Real online reviews are not only easy to collect openly but also affect business. In marketing, real-world information from customers is gathered on websites rather than through surveys: depending on whether a website's posts are positive or negative, the customer response is reflected in sales, and firms try to identify that information. However, many reviews on a website are not always good and are difficult to classify. Earlier studies in this research area used review data from the Amazon.com shopping mall, while recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, etc. However, a lack of accuracy is recognized because sentiment scores change according to the subject, paragraph, sentiment lexicon direction, and sentence strength. This study aims to classify the polarity of sentiment into positive and negative categories and to increase the prediction accuracy of the polarity analysis using the IMDB review data set. First, for the text classification algorithms related to sentiment analysis, popular machine learning algorithms such as NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting are adopted as comparative models. Second, deep learning has demonstrated the ability to extract complex discriminative features from data; representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). CNN can be used similarly to BoW when processing a sentence in vector format, but it does not consider the sequential attributes of the data. RNN handles ordering well because it takes the temporal information of the data into account, but it suffers from the long-term dependency problem; to solve this problem, LSTM is used. For comparison, CNN and LSTM were chosen as simple deep learning models. In addition to the classical machine learning algorithms, CNN, LSTM, and the integrated model were analyzed. Although there are many parameters for these algorithms, we examined the relationship between parameter values and precision to find the optimal combination, and we tried to figure out how well the models work for sentiment analysis and how they work. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features in text analysis. The reasons for combining these two algorithms are as follows. CNN can extract features for classification automatically by applying convolution layers and massively parallel processing, whereas LSTM is not capable of highly parallel processing. Like faucets, the LSTM has input, output, and forget gates that can be opened and closed at the desired time; these gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can compensate for CNN's inability to model long-term dependencies. Furthermore, when LSTM is used after CNN's pooling layer, the model has an end-to-end structure, so that spatial and temporal features can be learned simultaneously. With the combined CNN-LSTM, 90.33% accuracy was measured; this is slower than CNN but faster than LSTM, and the presented model was more accurate than the other models. In addition, each word embedding layer can be improved when training the kernel step by step. CNN-LSTM can remedy the weaknesses of each model, and the end-to-end structure of the LSTM offers the advantage of layer-wise improvement of learning. Based on these reasons, this study tries to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model. (A minimal model sketch follows this entry.)
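
The following is a minimal sketch of a CNN-LSTM sentiment classifier of the kind described above, using the IMDB review data set; the layer sizes, vocabulary size, and sequence length are assumed values, not the paper's exact settings.

```python
# CNN-LSTM sketch for binary sentiment classification on IMDB reviews:
# the convolution layer extracts local n-gram features and the LSTM that
# follows the pooling layer models their order.
import tensorflow as tf
from tensorflow.keras import layers, models

MAX_WORDS, MAX_LEN = 10000, 200   # assumed vocabulary size and review length

# IMDB movie-review data set, already encoded as word indices.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.imdb.load_data(num_words=MAX_WORDS)
x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=MAX_LEN)
x_test = tf.keras.preprocessing.sequence.pad_sequences(x_test, maxlen=MAX_LEN)

model = models.Sequential([
    layers.Embedding(MAX_WORDS, 128),          # word embedding layer
    layers.Conv1D(64, 5, activation="relu"),   # local feature extraction (CNN part)
    layers.MaxPooling1D(4),                    # pooling layer feeding the LSTM
    layers.LSTM(64),                           # sequence modelling over pooled features
    layers.Dense(1, activation="sigmoid"),     # positive / negative polarity
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, batch_size=128, validation_data=(x_test, y_test))
```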