• Title/Abstract/Keyword: low-computational


Characteristics of Flow and Sedimentation around the Embankment (방조제 부근에서의 흐름과 퇴적환경의 특성)

  • Lee Moon Ock;Park Il Heum;Lee Yeon Gyu
    • Journal of the Korean Society for Marine Environment & Energy
    • /
    • v.3 no.4
    • /
    • pp.37-55
    • /
    • 2000
  • Two-dimensional numerical experiments and field surveys were conducted to clarify environmental variations in flow and sedimentation in the adjacent seas after the construction of a tidal embankment. Velocities of flow and water levels in the bay decreased after the construction of the barrage. When freshwater was instantly released into the bay, the conditions of flow were unaltered, with the exception of a minor variation in velocities and tidal levels around the sluices at the ebb flow. The computational results showed that freshwater released at low water reached the outside of the bay and then returned to the inside with the tidal currents at high water. The sea regions in front of the embankment had a variety of sedimentary phases, such as clayish silt, silty clay, and sandy clayish silt; in the middle of the bay, however, clayish silt was prevalent. On the other hand, the skewness, which reflects the behaviour of sediments, was about ±0.1 in the regions in front of the embankment, while it was more than ±0.3 in the middle of the bay. Analysis of drilling samples acquired from the front of the sluice gates showed that the lower part of the sediments consists of very fine silty or clayish grains, while the upper surface layer, 40~50 cm thick, consisted of shellfish such as oyster and barnacle. It therefore seems that the lower part of the sediments was an intertidal zone prior to the embankment construction, while the upper shellfish layer is debris from shellfish farms formed in the adjacent seas after the construction. This difference in sedimentary phases reflects the influence of the tidal embankment construction.
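The abstract contrasts grain-size skewness of about ±0.1 in front of the embankment with more than ±0.3 in the middle of the bay. As an illustration only, a minimal sketch of how skewness separates symmetric from tailed distributions; note that sedimentology typically uses Folk and Ward's graphic skewness rather than the moment measure below, and the sample values here are made up:

```python
import statistics

def moment_skewness(xs):
    """Third-moment skewness of a sample: ~0 for symmetric data,
    positive when the distribution has a long fine tail."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    n = len(xs)
    return sum((x - m) ** 3 for x in xs) / (n * s ** 3)

# symmetric distribution -> skewness near 0, like the ±0.1 values near the embankment
sym = moment_skewness([1, 2, 3, 4, 5])
# a long tail pushes skewness well past zero, like the >±0.3 values mid-bay
tail = moment_skewness([1, 2, 2, 3, 3, 3, 9])
```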


Study on the Heat Transfer Phenomenon around Underground Concrete Digesters for Biogas Production Systems (생물개스 발생시스템을 위한 지하매설콘크리트 다이제스터의 열전달에 관한 연구)

  • 김윤기;고재균
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.22 no.1
    • /
    • pp.53-66
    • /
    • 1980
  • This work comprises analytical and experimental studies on the heat transfer phenomenon around underground concrete digesters used for biogas production systems. A mathematical and computational method was developed to estimate heat losses from an underground cylindrical concrete digester. To test its feasibility and to evaluate the thermal parameters of the materials involved, the method was applied to six physical model digesters. The cylindrical concrete digester was taken as the physical model to which the mathematical model of heat balance was applied. The mathematical model was discretized by the finite element method and used to analyze temperature distributions with respect to several boundary conditions and design parameters. The design parameters of the experimental digesters were: three sizes, 40 cm by 80 cm, 80 cm by 160 cm, and 100 cm by 200 cm in diameter and height; two levels of insulation material, plain concrete and vermiculite mixed in concrete; and two types of installation, fully underground and half-exposed. For the purposes of this study, the liquid within the digester was substituted by water, and its temperature was controlled at five levels (35°C, 30°C, 25°C, 20°C, and 15°C), while the ambient air temperature and ground temperature were measured under natural winter climate conditions. The following results were drawn from the study. 1. The analytical method, by which estimated temperature distributions around a cylindrical digester were obtained, was generally acceptable, judging from a comparison of the estimated values with the measured ones. However, the difference between estimated and measured temperatures tended to increase considerably when the ambient temperature was relatively low. This was mainly related to variations in the input parameters applied to the numerical analysis, including the thermal conductivity of the soil; improving these input data is therefore expected to yield better refined estimates. 2. The difference between estimated and measured heat losses showed a trend similar to that of the temperature distribution discussed above. 3. A map of isothermal lines drawn from the estimated temperature distribution proved very useful for observing the direction and rate of heat transfer within the boundary. From this analysis, it was interpreted that most of the heat loss passes through the triangular section bounded within 45 degrees toward the wall at the bottom edge of the digester; any effective insulation should therefore be placed within this region. 4. It was verified by experiment that heat loss per unit volume of liquid decreased as the digester became larger. For instance, at a liquid temperature of 35°C, the heat loss per unit volume from the 0.1 m$^3$ digester was 1,050 kcal/hr·m$^3$, while that from the 1.57 m$^3$ digester was 150 kcal/hr·m$^3$. 5. As insulation, vermiculite concrete was consistently superior to plain concrete. At liquid temperatures ranging from 15°C to 35°C, the reduction in heat loss ranged from 5% to 25% for the half-exposed digester and from 10% to 28% for the fully underground digester. 6. The heat loss from the half-exposed digester was from 1.6 to 2.6 times that from the underground digester, evidence that the underground digester has an advantage in heat conservation during winter.
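Result 4 (heat loss per unit volume falling as the digester grows) follows from geometry: a larger cylinder exposes less surface per unit of contents. A minimal sketch using the paper's three digester sizes; this is only the surface-to-volume argument, not the paper's finite element model, and it ignores all other thermal factors:

```python
import math

def cylinder_sv_ratio(d, h):
    """Surface-to-volume ratio (1/m) of a closed cylinder with diameter d and height h."""
    r = d / 2
    area = 2 * math.pi * r * r + 2 * math.pi * r * h  # two ends plus the side wall
    vol = math.pi * r * r * h
    return area / vol

# the three experimental sizes, in meters (40x80, 80x160, 100x200 cm)
sizes = [(0.4, 0.8), (0.8, 1.6), (1.0, 2.0)]
ratios = [cylinder_sv_ratio(d, h) for d, h in sizes]
# the ratio drops as the digester grows, so heat loss per unit volume drops too
```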


Development of Agent-based Platform for Coordinated Scheduling in Global Supply Chain (글로벌 공급사슬에서 경쟁협력 스케줄링을 위한 에이전트 기반 플랫폼 구축)

  • Lee, Jung-Seung;Choi, Seong-Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.213-226
    • /
    • 2011
  • In global supply chains, the scheduling problems of large products such as ships, airplanes, space shuttles, assembled constructions, and automobiles are complicated by nature. New scheduling systems are often developed to reduce this inherent computational complexity: a problem is decomposed into small sub-problems that are solved by independent small scheduling systems and then integrated back into the original problem. As one of the authors experienced, DAS (Daewoo Shipbuilding Scheduling System) adopted such a two-layered hierarchical architecture, in which individual scheduling systems composed of a high-level dock scheduler, DAS-ERECT, and low-level assembly plant schedulers, DAS-PBS, DAS-3DS, DAS-NPS, and DAS-A7, search for the best schedules under their own constraints. Moreover, the rapid growth of communication technology and logistics has made it possible to introduce distributed multi-nation production plants in which different parts are produced by designated plants, so vertical and lateral coordination among the decomposed scheduling systems is necessary. No standard coordination mechanism for multiple scheduling systems exists, even though scheduling research has produced a variety of scheduling systems. Previous research on coordination mechanisms has mainly focused on external conversation without a capacity model. In agent research, prior work has focused heavily on agent-based coordination, but it has not been developed for the scheduling domain; research on agent-based scheduling, in turn, has paid ample attention to internal coordination of the scheduling process, which has not been efficient. In this study, we suggest a general framework for agent-based coordination of multiple scheduling systems in a global supply chain, with the aim of designing a standard coordination mechanism.
To do so, we first define an individual scheduling agent responsible for its own plant and a meta-level coordination agent involved with each individual scheduling agent. We then suggest variables and values describing the individual scheduling agent and the meta-level coordination agent, represented in Backus-Naur Form. Second, we suggest scheduling agent communication protocols for each scheduling agent topology, classified by system architecture, the existence or absence of a coordinator, and the direction of coordination. If there is a coordinating agent, an individual scheduling agent communicates with another individual agent indirectly through the coordinator; otherwise, it must communicate with the other agent directly. To apply an agent communication language specifically to the scheduling coordination domain, we additionally define an inner language that suitably expresses scheduling coordination; the scheduling agent communication language itself is devised for communication among agents independent of domain. We adopt three message layers: an ACL layer, a scheduling coordination layer, and an industry-specific layer. The ACL layer is a domain-independent outer language layer; the scheduling coordination layer holds the terms necessary for scheduling coordination; and the industry-specific layer expresses the industry specification. Third, to improve the efficiency of communication among scheduling agents and to avoid possible infinite loops, we suggest a look-ahead load balancing model that monitors participating agents and analyzes their status. To build the look-ahead load balancing model, the status of participating agents must be monitored, and above all the amount of shared information must be considered.
If complete information is collected, the updating and maintenance cost of the shared information increases even though the frequency of communication decreases; the level of detail and the updating period of shared information should therefore be decided contingently. By means of this standard coordination mechanism, coordination processes of multiple scheduling systems can easily be modeled in a supply chain. Finally, we apply the mechanism to the shipbuilding domain and develop a prototype system consisting of a dock-scheduling agent, four assembly-plant-scheduling agents, and a meta-level coordination agent. A series of experiments using real-world data is used to examine the mechanism empirically. The results of this study show that the effect of the agent-based platform on coordinated scheduling is evident in terms of the number of tardy jobs, tardiness, and makespan.
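As a rough illustration of the three message layers (ACL / scheduling coordination / industry-specific), an agent message envelope could be modeled as below. All field names and values are hypothetical stand-ins, not the paper's actual BNF grammar:

```python
from dataclasses import dataclass, field

@dataclass
class SchedulingMessage:
    # ACL layer: domain-independent envelope (hypothetical field names)
    performative: str   # e.g. "request" or "inform"
    sender: str
    receiver: str
    # scheduling-coordination layer: generic scheduling terms
    content: dict = field(default_factory=dict)
    # industry-specific layer: vocabulary tag for, e.g., shipbuilding
    ontology: str = "shipbuilding"

# an individual scheduling agent receiving a rescheduling request
# routed through the meta-level coordinator (all values illustrative)
msg = SchedulingMessage(
    performative="request",
    sender="meta-coordinator",
    receiver="dock-scheduler",
    content={"action": "reschedule", "job": "block-A7", "due": "2011-10-01"},
)
```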

Adaptive Data Hiding Techniques for Secure Communication of Images (영상 보안통신을 위한 적응적인 데이터 은닉 기술)

  • 서영호;김수민;김동욱
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.5C
    • /
    • pp.664-672
    • /
    • 2004
  • The widespread popularity of wireless data communication devices, coupled with the availability of higher bandwidths, has led to an increased user demand for content-rich media such as images and videos. Since such content often tends to be private, sensitive, or paid for, there exists a requirement for securing such communication. However, solutions that rely only on traditional compute-intensive security mechanisms are unsuitable for resource-constrained wireless and embedded devices. In this paper, we propose a selective partial image encryption scheme for image data hiding, which enables highly efficient secure communication of image data to and from resource-constrained wireless devices. The encryption scheme is invoked during the image compression process, with the encryption being performed between the quantizer and the entropy coder stages. Three data selection schemes are proposed: subband selection, data bit selection, and random selection. We show that these schemes make secure communication of images feasible for constrained embedded devices. In addition, we demonstrate how these schemes can be dynamically configured to trade off the amount of data hiding achieved against the computation requirements imposed on the wireless devices. Experiments conducted on over 500 test images reveal that, using our techniques, the fraction of data to be encrypted varies between 0.0244% and 0.39% of the original image size. The peak signal-to-noise ratios (PSNR) of the encrypted images were observed to vary between about 9.5 dB and 7.5 dB. In addition, visual tests indicate that our schemes are capable of providing a high degree of data hiding at much lower computational cost.
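A minimal sketch of the "random selection" idea: encrypt only a small, key-selected fraction of the quantized coefficients and leave the rest untouched, so the cipher work scales with the fraction rather than the image size. The XOR keystream below is a toy stand-in for a real cipher, and the coefficient array is synthetic:

```python
import random

def selective_encrypt(coeffs, fraction, key):
    """Encrypt only a key-selected fraction of quantized coefficients.
    The key seeds both the selection and the toy XOR keystream, so
    applying the function twice with the same key restores the input."""
    rng = random.Random(key)
    n = max(1, int(len(coeffs) * fraction))
    picked = rng.sample(range(len(coeffs)), n)
    out = list(coeffs)
    for i in picked:
        out[i] ^= rng.randrange(256)  # toy keystream byte, not a real cipher
    return out

coeffs = list(range(100))                        # stand-in for quantized coefficients
enc = selective_encrypt(coeffs, 0.05, key=42)    # only ~5% of values are touched
dec = selective_encrypt(enc, 0.05, key=42)       # XOR with the same keystream inverts
```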

A Comparative Study on the Improvement of Curriculum in the Junior College for the Industrial Design Major (2년제 대학 산업디자인전공의 교육과정 개선방안에 관한 비교연구)

  • 강사임
    • Archives of design research
    • /
    • v.13 no.1
    • /
    • pp.209-218
    • /
    • 2000
  • The purpose of this study was to improve the curriculum of industrial design departments in junior colleges. To achieve this, two methods were used: first, a job analysis of industrial designers working in small and medium manufacturing companies; second, a survey of the opinions of junior college professors. The results were as follows: 1. The junior college program for industrial designers remains two years, as at present, but an optional one-year advanced course can be established. 2. Practice subjects such as the computational formative techniques needed for product development have to be increased. In addition, elective subjects such as foreign language, manufacturing processes, new product information, and consumer behavior investigation have to be extended. 3. The following subjects need adjustments to their titles, contents, and hours. (1) The need for 3D-related subjects such as computer modeling, computer rendering, and 3D modeling was high, and the use of computers is required in design presentation subjects. (2) The need for advertising- and sales-related subjects such as printing, merchandising, packaging, typography, and photography was low, while the need for presentation techniques for new product development was high. (3) The need for field practice, special lectures on practice, and reading original texts was the same as at present, but these should not be treated as mere formalities. As designers keenly feel the necessity of using a foreign language, the need for a language subject was high.


Analysis of Hydrodynamics in a Directly-Irradiated Fluidized Bed Solar Receiver Using CPFD Simulation (CPFD를 이용한 태양열 유동층 흡열기의 수력학적 특성 해석)

  • Kim, Suyoung;Won, Geunhye;Lee, Min Ji;Kim, Sung Won
    • Korean Chemical Engineering Research
    • /
    • v.60 no.4
    • /
    • pp.535-543
    • /
    • 2022
  • A CPFD (computational particle fluid dynamics) model of a solar fluidized-bed receiver of silicon carbide particles (SiC, average dp = 123 μm) was established, and the model was verified by comparing simulation and experimental results to analyze the effect of particle behavior on the performance of the receiver. The relationship between heat-absorbing performance and particle behavior in the receiver was analyzed by simulating the behavior near the bed surface, which is difficult to access experimentally. The CPFD simulation results showed good agreement with the experimental values of the solids holdup and its standard deviation under the experimental conditions in the bed and freeboard regions. The local solids holdup near the bed surface, where particles primarily absorb solar heat energy and transfer it to the inside of the bed, showed a non-uniform distribution with a relatively low value at the center, related to the bubble behavior in the bed. The local solids holdup increased in axial and radial non-uniformity in the freeboard region with gas velocity, which explains well why the increase in the RSD (relative standard deviation) of the pressure drop across the freeboard region is responsible for the loss of solar energy reflected by entrained particles in the particle receiver. The simulation results for local gas and particle velocities with gas velocity confirmed that the local particle behavior in the fluidized bed is closely related to the bubble behavior characterized by the properties of Geldart B particles. The temperature difference of the fluidizing gas passing through the receiver per irradiance (∆T/IDNI) was highly correlated with the RSD of the pressure drop across the bed surface and freeboard regions. These CPFD simulation results can be used to improve the performance of the particle receiver through local particle behavior analysis.
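The RSD used throughout the abstract as a non-uniformity index is just the standard deviation of the pressure-drop signal normalized by its mean. A minimal sketch with hypothetical pressure-drop samples (the values below are illustrative, not from the paper):

```python
import statistics

def rsd_percent(dp_series):
    """Relative standard deviation (%) of a pressure-drop time series,
    used as a proxy for bubble/entrainment non-uniformity."""
    return 100.0 * statistics.pstdev(dp_series) / statistics.fmean(dp_series)

steady = [100.0, 100.0, 100.0, 100.0]   # perfectly uniform signal -> RSD 0%
bubbling = [90.0, 110.0, 95.0, 105.0]   # fluctuating bed surface -> RSD ~7.9%
```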

Scientific Practices Manifested in Science Textbooks: Middle School Science and High School Integrated Science Textbooks for the 2015 Science Curriculum (과학 교과서에 제시된 과학실천의 빈도와 수준 -2015 개정 교육과정에 따른 중학교 과학 및 통합과학-)

  • Kang, Nam-Hwa;Lee, Hye Rim;Lee, Sangmin
    • Journal of The Korean Association For Science Education
    • /
    • v.42 no.4
    • /
    • pp.417-428
    • /
    • 2022
  • This study analyzed the frequency and level of scientific practices presented in secondary science textbooks. A total of 1,378 student activities presented in 14 middle school science textbooks and 5 high school integrated science textbooks were analyzed using the definitions and levels of scientific practice suggested in the NGSS. The findings show that most student activities focus on three practices. Compared to textbooks for the previous science curriculum, the practice of 'obtaining, evaluating, and communicating information' was more emphasized, reflecting societal changes due to ICT development. However, the practice of 'asking questions', which can be an important element of student-led science learning, was still rarely found in textbooks, and 'developing and using models', 'using mathematics and computational thinking', and 'arguing based on evidence' were not addressed much. The practices were mostly at elementary school level, except for the practice of 'constructing explanations'. Such repeated exposure to a few practices at a low level means that many future citizens would be left with a naïve understanding of science. The findings imply that it is necessary to emphasize various practices tailored to the level of students. In the upcoming revision of the science curriculum, it is necessary to provide definitions of the practices that are not currently specified and the expected level of each practice, so that the curriculum can provide sufficient guidance for textbook writing. These efforts should be supported by benchmarking of overseas science curricula and by research exploring students' abilities and teachers' understanding of scientific practices.

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • The Convolutional Neural Network (ConvNet) is one class of powerful Deep Neural Networks that can analyze and learn hierarchies of visual features. The first such neural network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, however, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, and that breakthrough revived people's interest in neural networks. The success of Convolutional Neural Networks rests on two main factors: first, the emergence of advanced hardware (GPUs) for sufficient parallel computation; second, the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, gathering a large-scale dataset to train a ConvNet is difficult and requires a great deal of effort; moreover, even with a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome by transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (for instance, trained on ImageNet) computes the feed-forward activations of an image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only.
However, applying features of high dimensional complexity extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Overall, our pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and the activation features of its three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple-ConvNet-layer representation, which carries more information about the image; when the three fully connected layer features are concatenated, the resulting image representation has 9,192 (4,096+4,096+1,000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in a third step, we use Principal Component Analysis (PCA) to select salient features before the training phase; with salient features, the classifier can classify images more accurately, and the performance of transfer learning improves. To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple-ConvNet-layer representations against single-ConvNet-layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrated the importance of feature selection for the multiple-ConvNet-layer representation.
Moreover, our proposed approach achieved 75.6% accuracy compared to the 73.9% accuracy of the FC7 layer on the Caltech-256 dataset, 73.1% compared to the 69.2% of the FC8 layer on the VOC07 dataset, and 52.2% compared to the 48.7% of the FC7 layer on the SUN397 dataset. We also showed that our proposed approach achieved superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on the Caltech-256, VOC07, and SUN397 datasets respectively compared to existing work.
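The concatenate-then-PCA step of the pipeline can be sketched in a few lines. The sketch below uses random matrices as stand-ins for the fc6/fc7/fc8 activations (the real dimensions are 4,096+4,096+1,000, and the real features come from a pre-trained AlexNet) and plain SVD in place of a library PCA implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
# toy stand-ins for the activations of three fully connected layers over 20 images
fc6 = rng.normal(size=(20, 40))
fc7 = rng.normal(size=(20, 40))
fc8 = rng.normal(size=(20, 10))

# step 2: concatenate per-layer features into one multi-layer representation
feats = np.concatenate([fc6, fc7, fc8], axis=1)   # shape (20, 90)

# step 3: PCA via SVD on the centered matrix, keeping the top-k components
k = 8
centered = feats - feats.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
reduced = centered @ Vt[:k].T                      # shape (20, k) salient features
```

The reduced features would then be fed to a classifier in place of any single layer's activations.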