• Title/Summary/Keyword: 3D Based

Search Results: 15,822

Optimum Design of Two Hinged Steel Arches with I Sectional Type (SUMT법(法)에 의(依)한 2골절(滑節) I형(形) 강재(鋼材) 아치의 최적설계(最適設計))

  • Jung, Young Chae
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.12 no.3
    • /
    • pp.65-79
    • /
    • 1992
  • This study is concerned with the optimal design of two-hinged steel arches with an I-shaped cross section, and aims at the exact analysis of the arches and the safe and economical design of the structure. An analysis method for arches that introduces the finite difference method, considering the displacements of the structure during the analysis, is used to eliminate analysis error and to determine the sectional forces of the structure. The optimization problems for the arches are formulated with objective functions and constraints that take the sectional dimensions (B, D, $t_f$, $t_w$) as design variables. The objective function is formulated as the total weight of the arch, and the constraints are derived from criteria on the working stress and the minimum dimensions of flange and web, based on the steel-bridge part of the Korea standard code for road bridges, together with an economic-depth constraint for the I-shaped section, an upper limit on the depth of the web, and a lower limit on the breadth of the flange. The SUMT method using the modified Newton-Raphson direction method is introduced to solve the nonlinear programming problems formulated in this study. The developed optimal design program for arches is tested and examined through numerical examples for various arches, and the results are compared and analyzed to examine the possibility of optimization and the applicability and convergence of the algorithm, and are compared with the results of the numerical examples in reference (30). Correlation equations between the optimal sectional areas and moments of inertia are derived from the various numerical optimal design results of this study.
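The SUMT idea described in the abstract — converting a constrained weight minimization into a sequence of unconstrained penalized problems — can be sketched as follows. This is a minimal illustration, not the authors' program: the section problem and all numbers are invented, and a crude grid search stands in for their modified Newton-Raphson inner solver.

```python
import numpy as np

def sumt_minimize(f, constraints, grid, r0=1.0, factor=100.0, stages=4):
    """SUMT: replace a constrained minimization by a sequence of
    unconstrained penalized problems
        phi_r(x) = f(x) + r * sum(max(0, g_i(x))^2),
    solved for increasing penalty weights r. A grid search stands in
    for the paper's modified Newton-Raphson inner solver (a gradient-
    based solver would warm-start each stage from the previous one)."""
    r, best = r0, grid[0]
    for _ in range(stages):
        def phi(x, r=r):
            return f(x) + r * sum(max(0.0, g(x)) ** 2 for g in constraints)
        best = min(grid, key=phi)
        r *= factor
    return best

# Illustrative stand-in problem (all numbers invented): minimize the
# cross-sectional area b*d of a rectangular section under a bending
# moment M, subject to a working-stress limit and an upper depth limit.
M, sigma_allow, d_max = 120.0, 160.0, 3.0
area = lambda x: x[0] * x[1]                                     # ~ weight per unit length
g_stress = lambda x: 6.0 * M / (x[0] * x[1] ** 2) - sigma_allow  # sigma <= allowable
g_depth = lambda x: x[1] - d_max                                 # d <= d_max

grid = [(b, d) for b in np.arange(0.1, 3.01, 0.05)
               for d in np.arange(0.1, 3.01, 0.05)]
b_opt, d_opt = sumt_minimize(area, [g_stress, g_depth], grid)
# as r grows, the solution approaches b = 0.5, d = 3.0 (area 1.5)
```

With the stress constraint active, area = 6M/(sigma_allow * d) decreases with depth, so the depth limit also becomes active; the penalized minima converge to that corner point.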


Characteristics of Phytoplankton Succession Based on the Functional Group in the Enclosed Culture System (대형 배양장치에서 기능그룹에 기초한 식물플랑크톤 천이 특성)

  • Lee, Kyung-Lak;Noh, Seongyu;Lee, Jaeyoon;Yoon, Sungae;Lee, Jaehak;Shin, Yuna;Lee, Su-Woong;Rhew, Doughee;Lee, Jaekwan
    • Korean Journal of Ecology and Environment
    • /
    • v.50 no.4
    • /
    • pp.441-451
    • /
    • 2017
  • The present study was conducted from August to December 2016 in a cylindrical water tank with a diameter of 1 m, a height of 4 m, and a capacity of 3,000 L. Field water and sediment from the Nakdong River were sampled for the experimental culture (field water + sediment) and the control culture (field water only), respectively. In this study, we aimed to investigate the phytoplankton succession pattern using phytoplankton functional groups in the enclosed culture system. A total of 50 species in 27 genera, including Chlorophyceae (30 species), Bacillariophyceae (11 species), Cyanophyceae (7 species), and Cryptophyceae (2 species), were identified in the experimental and control culture systems. A total of 19 phytoplankton functional groups (PFGs) were identified: B, C, D, F, G, H1, J, K, Lo, M, MP, N, P, S1, $T_B$, $W_0$, X1, X2, and Y. In particular, the $W_0$, J, and M groups exhibited marked succession in the experimental culture system, with higher biovolumes than in the control culture system, which may be related to the internal cycling of nutrients by sediment in the experimental culture system. Principal component analyses demonstrated that the succession patterns of the PFGs were associated with the main environmental factors, such as nutrients (N, P), water temperature, and light intensity, in the two culture systems. In conclusion, the present study showed the potential applicability of functional groups for understanding the adaptation strategies and ecological traits of phytoplankton succession in the water bodies of Korea.

An Experimental Study on Exclusion Ability of Riprap into Bypass Pipe (저층수 배사관 내 유입된 사석 배출능력에 대한 연구)

  • Jeong, Seok Il;Lee, Seung Oh
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.37 no.1
    • /
    • pp.239-246
    • /
    • 2017
  • There are various transversal structures (small dams or drop structures) in medium and small streams in Korea. Most of them are concrete structures, and it is very hard for them to release low-level water. Unless drainage valves and/or gates are installed near the bottom of the bed, sediment from upstream is deposited, and contaminants attached to the sediments can devastatingly threaten water quality and the ecosystem. One countermeasure for this problem is a bypass pipe installed underneath the transversal structure. However, an open issue is whether it would remain workable if gravels and/or stones rolled into it and were not excluded. Therefore, in this study, the conditions required to exclude riprap stones that enter the bypass pipe were reviewed. Based on sediment transport phenomena, the behavior of the stones was investigated using the concepts of the critical shear stress of sediment and d'Alembert's principle. As a final result, the critical condition ($\tau_c^*$) was derived using the Lagrangian description, since the stones are in a moving state, not a stationary state. The relative velocity was obtained from hydraulic experiments. In order to minimize the scale effect, an extra-wide channel 5.0 m wide and 1.0 m high was constructed, and the experimental stones were fully spherical. Experimental results showed that the ratio of flow velocity to spherical particle velocity was measured between 0.5 and 0.7, and this result was substituted into the suggested equation to identify the critical condition at which the stones were excluded. Regimes for the exclusion of stones in the bypass pipe were divided into three types according to the particle Reynolds number ($Re_p$) and the dimensionless critical shear force ($\tau_c^*$): an exclusion section, a probabilistic exclusion section, and a no-exclusion section. The results of this study should provide useful and essential information for bypass pipe design in transversal structures.
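The two dimensionless quantities that define the exclusion regimes — the Shields-type dimensionless shear stress $\tau^*$ and the particle Reynolds number $Re_p$ — can be computed from their standard sediment-transport definitions. This is a generic sketch: the flow and stone values below are invented, and the paper's actual regime boundaries are not reproduced.

```python
import math

# Dimensionless (Shields-type) shear stress and particle Reynolds number
# for a spherical stone on the bed of a bypass pipe.
RHO_W, RHO_S = 1000.0, 2650.0    # water / stone density [kg/m^3]
G, NU = 9.81, 1.0e-6             # gravity [m/s^2], kinematic viscosity [m^2/s]

def shear_velocity(tau):
    """u* = sqrt(tau / rho_w)"""
    return math.sqrt(tau / RHO_W)

def dimensionless_shear(tau, d):
    """tau* = tau / ((rho_s - rho_w) * g * d)"""
    return tau / ((RHO_S - RHO_W) * G * d)

def particle_reynolds(tau, d):
    """Re_p = u* * d / nu"""
    return shear_velocity(tau) * d / NU

tau, d = 50.0, 0.05              # bed shear stress [Pa], stone diameter [m]
tau_star = dimensionless_shear(tau, d)   # tau* ≈ 0.062
re_p = particle_reynolds(tau, d)         # Re_p ≈ 1.1e4
```

A point (Re_p, tau*) computed this way would then be located on the study's regime diagram to decide whether the stone falls in the exclusion, probabilistic-exclusion, or no-exclusion section.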

Assessment of Growth Conditions and Maintenance of Law-Protected Trees in Je-cheon City (제천시 보호수의 생육환경 및 관리현황 평가)

  • Yoon, Young-Han;Ju, Jin-Hee
    • Journal of the Korean Institute of Traditional Landscape Architecture
    • /
    • v.28 no.2
    • /
    • pp.67-74
    • /
    • 2010
  • Law-protected trees are precious natural assets, carrying history and tradition as natural heritage that should be protected and well maintained to bequeath to the next generation. Law-protected trees have not only thremmatological and genetic value but also high environmental and emotional value. This study investigated the location, vitality, root-area covering condition, and maintenance status of the law-protected trees in Je-cheon in order to characterize their growth environment and maintenance in a small-to-medium city. Most of the law-protected trees in Je-cheon were more than 300 years old, and the largest number were located in flat fields. By location type, village, hill, and road types appeared in that order, and by degree of development, building sites were found most frequently. The average electric resistance of the cambium (formative layer) was measured at 8.4 kΩ, and four trees showed bark separation. Most law-protected trees had undergone tree surgery, complete exposure of the root area was observed in one tree, and the root areas of two trees were covered with concrete. Soil pH ranged from 5.0 to 8.4 with an average of 7.1, and electrical conductivity (EC) was less than 0.5 dS/m. Regarding maintenance status, protective facilities were in place for 16 of the 48 trees, and stone fences for three. Tree surgery had been conducted on 33 trees to prevent and treat decayed cavities, and information boards were installed for 23 trees. Based on these results, measures for the systematic management of the law-protected trees in Je-cheon can be suggested as follows. First, sufficient space for root growth at the base of the trees should be secured. Second, voluntary management should be encouraged by publicizing the trees to residents of the community. Third, protective facilities and information boards should be installed and related education conducted. Fourth, the trees should be maintained continuously through a dedicated department for law-protected trees consisting of related professionals and through cooperation among related departments.

An Experimental Study on the Hydration Heat of Concrete Using Phosphate based Inorganic Salt (인산계 무기염을 이용한 콘크리트의 수화 발열 특성에 관한 실험적 연구)

  • Jeong, Seok-Man;Kim, Se-Hwan;Yang, Wan-Hee;Kim, Young-Sun;Ki, Jun-Do;Lee, Gun-Cheol
    • Journal of the Korea Institute of Building Construction
    • /
    • v.20 no.6
    • /
    • pp.489-495
    • /
    • 2020
  • As concrete structures grow larger, control of the hydration heat in mass concrete has become important, but many conventional strategies show limitations in effectiveness and practicality. Therefore, in this study, as a means of controlling the heat of hydration of mass concrete, a method to reduce the heat of hydration by controlling the hardening of cement was examined. The reduction of the hydration heat by the developed phosphate-based inorganic salt was first verified in insulated boxes filled with binder paste or concrete mixture. That is, the effects of the phosphate-based inorganic salt on the hydration heat, flow or slump, and compressive strength were analyzed in binary and ternary blended cements, which are generally used for low-heat applications. As a result, the internal maximum temperature rise induced by the hydration heat was decreased by 9.5~10.6% and 10.1~11.7% for binder paste and concrete mixed with the phosphate-based inorganic salt, respectively. In addition, a delay in the time to peak temperature was clearly observed, which is beneficial for the dissipation of internal hydration heat in real structures. The phosphate-based inorganic salt developed and verified through this series of experiments showed better performance than existing admixtures in terms of hydration-heat control and other properties, and it can be used for controlling the hydration heat of mass concrete in the future.

Lip Contour Detection by Multi-Threshold (다중 문턱치를 이용한 입술 윤곽 검출 방법)

  • Kim, Jeong Yeop
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.12
    • /
    • pp.431-438
    • /
    • 2020
  • In this paper, a method to extract the lip contour by multiple thresholds is proposed. Spyridonos et al. proposed a method to extract the lip contour. The first step is to obtain the Q image from the RGB-to-YIQ transform. The second step is to find the lip corner points by change-point detection and to split the Q image into upper and lower parts at the corner points. A candidate lip contour can be obtained by applying a threshold to the Q image. From the candidate contours, a feature variance is calculated and the contour with the maximum variance is adopted as the final contour. The feature variance 'D' is based on the absolute differences near the contour points. The conventional method has three problems. The first is related to the lip corner points: the variance calculation depends on many skin pixels, so the accuracy decreases, which also affects the split of the Q image. Second, there is no analysis for color systems other than YIQ; YIQ works well, but other color systems such as HSV, CIELUV, and YCrCb should be considered. The final problem is related to the selection of the optimal contour: the selection uses the maximum of the average feature variance over the pixels near the contour points, and this maximum-of-average criterion shrinks the extracted contour compared to the ground-truth contours. To solve the first problem, the proposed method excludes some of the skin pixels, yielding a 30% performance increase. For the second problem, the HSV, CIELUV, and YCrCb coordinate systems were tested, and no dependency of the conventional method on the color system was found. For the final problem, the maximum of the total sum of the feature variance is adopted rather than the maximum of the average feature variance, yielding a 46% performance increase. Combining all of these solutions, the proposed method achieves twice the accuracy and stability of the conventional method.
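The first stage of the pipeline — taking the Q channel of the YIQ transform and thresholding it to get a candidate lip region — can be sketched as below. The weights are the standard NTSC RGB-to-YIQ Q row; the image and the threshold value are invented for illustration.

```python
import numpy as np

def q_channel(rgb):
    """Q component of the RGB -> YIQ transform (rgb values in [0, 1])."""
    w = np.array([0.211, -0.523, 0.312])   # standard NTSC Q-row weights
    return rgb @ w

def candidate_lip_mask(rgb, threshold):
    """Candidate lip region: pixels whose Q value exceeds the threshold.
    A multi-threshold method repeats this for several thresholds and
    scores the contour that each candidate region produces."""
    return q_channel(rgb) > threshold

# Synthetic 4x4 patch (invented): reddish 'lip' pixels have high Q,
# greenish 'skin' pixels have low or negative Q.
img = np.zeros((4, 4, 3))
img[1:3, 1:3] = [0.8, 0.2, 0.3]   # lip-like block -> Q ≈ 0.16
img[0, :] = [0.4, 0.6, 0.4]       # skin-like row  -> Q ≈ -0.10
mask = candidate_lip_mask(img, threshold=0.05)   # True only on the lip block
```

Lip pixels score high on Q because red contributes positively and green negatively, which is why the Q channel separates lips from skin in the first place.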

『Chūn-qiū』Wáng-lì(『春秋』王曆)➂ - from Zhōu-lì(周曆) to Xià-lì(夏曆), and "Xíng-xià-zhī-shí(行夏之時)" Mentioned by Confucius (『춘추』 왕력(王曆)➂ - 주력(周曆)에서 하력(夏曆)으로, 그리고 공자의 "행하지시(行夏之時)")

  • Seo, Jeong-Hwa
    • The Journal of Korean Philosophical History
    • /
    • no.54
    • /
    • pp.153-184
    • /
    • 2017
  • During the pre-Qin (秦) era, records indicate that there were many calendar systems, such as the gǔ-liù-lì (古六曆: six ancient calendar systems). Among them, zhōu-lì (周曆) and xià-lì (夏曆) were the two mainly discussed, because of the many debates over the difference between the calendar system of the "Chūn-qiū (春秋)", known to have been written by Confucius, and the calendar of "xíng-xià-zhī-shí (行夏之時: implement the calendar of the Xia dynasty)" that Confucius himself recommended to his disciple. Zhōu-lì (周曆), with dōngzhì-yuè (冬至月: the 11th month of the lunar calendar) as the first month of the year, followed the lunar calendar system, whereas xià-lì (夏曆), called the calendar of the Xia (夏) dynasty, followed the system of jié-qì-lì (節氣曆: a kind of solar calendar that divides the 365-day year into 24 solar terms), with yín-yuè (寅月: the month beginning around the present February 5) as the first month of the year. These two calendars differed clearly in the first month of the year, the names of the seasons, and their lunar versus solar character. The fundamental reason Confucius recommended adopting xià-lì (夏曆) as a way to run the nation was not a philosophical view of the universe built on the 'three zhēng (三正)' of tiān-zhēng (天正: the first month of the year with heaven as the standard), dì-zhēng (地正: with the earth as the standard), and rén-zhēng (人正: with humans as the standard), but his wish to emphasize practical national economic policies that enhance agricultural productivity. This becomes a criterion showing that although Confucius ideally emphasized that politicians should have no moral flaws, with regard to public policy he stressed politicians' duties grounded firmly in reality.

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy-logic-based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to pre-defined shapes. These kinds of functions can cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation of the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions at those points [3,10,14,15]. Such a solution provides satisfying computational speed, very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can also occur: for each of the given fuzzy sets, many elements of the universe of discourse may have a membership value equal to zero, and it has been observed that in almost all cases the points shared among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, while not restricting the shapes of the membership functions, strongly reduces the computation time for the membership values and optimizes the function memorization. Figure 1 represents a term set whose characteristics are common for fuzzy controllers and to which we refer in the following. This term set has a universe of discourse with 128 elements (so as to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions, and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and corresponds to the memory rows. The length of a word of memory is defined by: Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null values for each element of the universe of discourse, dm(m) is the bit width of a membership value m, and dm(fm) is the bit width of the index of a membership function. In our case, Length = 24, so the memory dimension is 128 × 24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to store on each memory row the membership value of every fuzzy set, a word of 8 × 5 bits, giving a memory of 128 × 40 bits. Consistently with our hypothesis, in Fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets. Focusing on elements 32, 64, and 96 of the universe of discourse, they are memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the program memory (μCOD) is given as input to a comparator (combinatory net). If the index equals the bus value, one of the non-null weights derived from the rule is produced as output; otherwise the output is zero (Fig. 2). Clearly, the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized; moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value at each element of the universe of discourse. From our studies in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions; at any rate, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values at any element of the universe of discourse is limited, a constraint that is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
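The word-length arithmetic in the abstract can be reproduced directly. A small sketch using the example figures given there (128-element universe, 8 fuzzy sets, 32 truth levels, at most 3 non-null memberships per element):

```python
import math

def word_length(nfm, truth_levels, n_sets):
    """Bits per memory row in the sparse scheme:
    Length = nfm * (dm(m) + dm(fm)),
    where dm(m) is the bit width of a membership value and
    dm(fm) the bit width of a fuzzy-set index."""
    dm_m = math.ceil(math.log2(truth_levels))   # 32 levels -> 5 bits
    dm_fm = math.ceil(math.log2(n_sets))        # 8 sets    -> 3 bits
    return nfm * (dm_m + dm_fm)

# Figures from the example term set.
U, n_sets, levels, nfm = 128, 8, 32, 3
row_bits = word_length(nfm, levels, n_sets)                 # 3 * (5 + 3) = 24
sparse_bits = U * row_bits                                  # 128 * 24 = 3072
vectorial_bits = U * n_sets * math.ceil(math.log2(levels))  # 128 * 40 = 5120
```

The sparse scheme stores only the (value, set-index) pairs of the non-null memberships, which is where the 3072-bit versus 5120-bit saving comes from.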


A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.167-181
    • /
    • 2018
  • Over the past decade, deep learning has been in the spotlight among machine learning algorithms. In particular, CNN (Convolutional Neural Network), known as an effective solution for recognizing and classifying images or voices, has been popularly applied to classification and prediction problems. In this study, we investigate how to apply a CNN to business problem solving. Specifically, this study proposes to apply a CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNNs are strong at interpreting images. Thus, the model proposed in this study adopts a CNN as a binary classifier that predicts the stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements and predict future price movements. Our proposed model, named CNN-FG (Convolutional Neural Network using Fluctuation Graph), consists of five steps. In the first step, it divides the dataset into intervals of 5 days. It then creates time series graphs for the divided dataset in step 2. The size of the image in which the graph is drawn is 40 × 40 pixels, and the graph of each independent variable is drawn in a different color. In step 3, the model converts the images into matrices: each image becomes a combination of three matrices expressing the color values on the R (red), G (green), and B (blue) scales. In the next step, it splits the dataset of graph images into training and validation datasets; we used 80% of the total dataset for training and the remaining 20% for validation. The CNN classifier is then trained on the training images in the final step. Regarding the parameters of CNN-FG, we adopted two convolution filters (5 × 5 × 6 and 5 × 5 × 9) in the convolution layer, and a 2 × 2 max-pooling filter in the pooling layer. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend, the other for a downward trend). The activation functions for the convolution layer and the hidden layers were set to ReLU (Rectified Linear Unit), and the one for the output layer to the Softmax function. To validate our model, CNN-FG, we applied it to the prediction of KOSPI200 for 2,026 days over eight years (from 2009 to 2016). To match the proportions of the two groups of the dependent variable (i.e. tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset from 80% of the total dataset (1,560 samples) and the validation dataset from the remaining 20% (390 samples). The independent variables of the experimental dataset comprised twelve technical indicators popularly used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on these graphs can be effective from the perspective of prediction accuracy. Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
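The data-preparation steps described in the abstract — splitting a 40 × 40 RGB graph image into three channel matrices (step 3) and the 80/20 training/validation split (steps 4-5) — can be sketched as follows. The graph image here is random dummy data standing in for an actual fluctuation graph; only the shapes and sample counts follow the paper.

```python
import numpy as np

def image_to_channel_matrices(img):
    """Step 3 of CNN-FG: split an HxWx3 RGB graph image into its
    three channel matrices (R, G, B)."""
    return img[..., 0], img[..., 1], img[..., 2]

# A 40x40-pixel stand-in for one fluctuation graph (random dummy data,
# not an actual price graph).
rng = np.random.default_rng(0)
graph = rng.random((40, 40, 3))
r, g, b = image_to_channel_matrices(graph)   # three 40x40 matrices

# Steps 4-5: the 80/20 training/validation split on the balanced sample.
n_samples = 1950
n_train = int(n_samples * 0.8)   # 1560 training samples
n_val = n_samples - n_train      # 390 validation samples
```

Stacked back together, the three matrices form the three-channel input tensor that the convolution filters (5 × 5 × 6 and 5 × 5 × 9) operate on.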

Analysis of the Causes of Subfrontal Recurrence in Medulloblastoma and Its Salvage Treatment (수모세포종의 방사선치료 후 전두엽하방 재발된 환자에서 원인 분석 및 구제 치료)

  • Cho Jae Ho;Koom Woong Sub;Lee Chang Geol;Kim Kyoung Ju;Shim Su Jung;Bak Jino;Jeong Kyoungkeun;Kim Tae-Gon;Kim Dong Seok;Choi oong-Uhn;Suh Chang Ok
    • Radiation Oncology Journal
    • /
    • v.22 no.3
    • /
    • pp.165-176
    • /
    • 2004
  • Purpose: First, to analyze factors in the radiation treatment that might have caused subfrontal relapse in two patients treated with craniospinal irradiation (CSI) for medulloblastoma; second, to explore an effective salvage treatment for these relapses. Materials and Methods: Two patients with high-risk disease (T3bM1, T3bM3) were treated with combined chemoradiotherapy. CT-simulation-based radiation treatment planning (RTP) was performed. One patient, who relapsed 16 months after CSI, was treated with salvage surgery followed by 30.6 Gy of IMRT (intensity-modulated radiotherapy). The other patient, whose tumor relapsed 12 months after CSI, was treated by surgery alone for the recurrence. To investigate factors that might have caused the subfrontal relapse, we thoroughly evaluated the charts and the treatment planning process, including portal films, and tried to find a method to help place blocks appropriately between the subfrontal-cribriform plate region and both eyes. To salvage the subfrontal relapse in one patient, re-irradiation was planned after subtotal tumor removal. We decided to treat this patient with IMRT because of the proximity of critical normal tissues and the large burden of re-irradiation. With seven beam directions, the prescribed mean dose to the PTV was 30.6 Gy (1.8 Gy per fraction), and the doses to the optic nerves and eyes were limited to 25 Gy and 10 Gy, respectively. Results: Review of the radiotherapy portals clearly indicated that the subfrontal-cribriform plate region was excluded from the therapy beam by the eye blocks in both cases, resulting in a cold spot within the target volume. When the whole brain was rendered in 3-D after organ drawing on each slice, it was easier to judge the appropriateness of the blocks in the port film. IMRT planning showed excellent dose distributions (mean doses to the PTV, right and left optic nerves, and right and left eyes: 31.1 Gy, 14.7 Gy, 13.9 Gy, 6.9 Gy, and 5.5 Gy, respectively; maximum dose to the PTV: 36 Gy). The patient who received IMRT is still alive with no evidence of recurrence and no neurologic complications after 1 year. Conclusion: To prevent recurrence of medulloblastoma in the subfrontal-cribriform plate region, close attention should be paid to the placement of the eye blocks during treatment. Once subfrontal recurrence has occurred, IMRT may be a good choice for re-irradiation as a salvage treatment to maximize the difference in dose distribution between the normal tissues and the target volume.