• Title/Summary/Keyword: 공학설계과정 (engineering design process)

Search Results: 1,715

Characteristics of Deformation and Shear Strength of Parallel Grading Coarse-grained Materials Using Large Triaxial Test Equipment (대형삼축시험에 의한 상사입도 조립재료의 변형 및 전단강도 특성)

  • Jin, Guang-Ri;Shin, Dong-Hoon;Im, Eun-Sang;Kim, Ki-Young
    • Journal of the Korean Geotechnical Society
    • /
    • v.25 no.12
    • /
    • pp.57-67
    • /
    • 2009
  • With advances in construction technology, the maximum size of coarse aggregate used for dam construction ranges from several centimeters to 1 m. Testing samples of the original gradation is not only expensive but also poses many technical difficulties. Generally, laboratory tests are performed on samples prepared by the parallel grading method, and the results are then applied to the design and analysis of the actual geotechnical structure. To predict the behavior of the geotechnical structure accurately, it is necessary to understand how the shear behavior changes. In this study, large triaxial tests were performed on parallel-graded samples with different maximum particle sizes, reconstituted from the river-bed sand and gravel used to construct Dam B in Korea. The stress-strain characteristics and shear strength of the parallel-graded samples were then compared and analyzed. In the test results, the coarse-grained material showed strain softening and dilative (volume expansion) behavior, which became more pronounced as the maximum particle size increased. The internal friction angle and the shear strength also increased as the maximum particle size of the parallel-graded sample increased.
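For reference, the internal friction angle reported in triaxial testing is conventionally obtained from the principal stresses at failure. The sketch below is a minimal illustration of that standard Mohr-Coulomb relation for a cohesionless material, not code from the paper; the stress values are hypothetical.

```python
import math

def friction_angle_deg(sigma1: float, sigma3: float) -> float:
    """Internal friction angle (degrees) from major/minor principal
    stresses at failure, assuming a cohesionless (c = 0) Mohr-Coulomb
    material: sin(phi) = (sigma1 - sigma3) / (sigma1 + sigma3)."""
    return math.degrees(math.asin((sigma1 - sigma3) / (sigma1 + sigma3)))

# Hypothetical example: axial stress 900 kPa, confining stress 300 kPa
print(f"{friction_angle_deg(900.0, 300.0):.1f} deg")  # -> 30.0 deg
```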

A Study on the Improvement of Computing Thinking Education through the Analysis of the Perception of SW Education Learners (SW 교육 학습자의 인식 분석을 통한 컴퓨팅 사고력 교육 개선 방안에 관한 연구)

  • ChwaCheol Shin;YoungTae Kim
    • Journal of Industrial Convergence
    • /
    • v.21 no.3
    • /
    • pp.195-202
    • /
    • 2023
  • This study analyzes the results of a survey based on classes conducted in the field in order to understand the educational needs of learners and to reflect the elements necessary for SW education. Various experimental elements related to learning motivation and learning achievement were constructed and designed based on previous studies. The survey's experimental elements in three categories, Faculty Competences (FC), Learner Competences (LC), and Educational Conditions (EC), were analyzed by primary area and by secondary major, respectively. When CT-based SW education was analyzed by area, the development of educational materials, understanding of lectures, and teaching methods showed high satisfaction, while communication with students, difficulty of lectures, and the number of students scored relatively low. The analysis by major found that humanities students perceived the courses as more difficult and less interesting than engineering students did. Based on these statistical results, this study proposes that non-major SW education should evolve into an engaging curriculum for effective liberal arts education, with a view to enhancing learners' problem-solving skills.

Parameter Sensitivity Analysis of the Vflo™ Model in the Jungnang Basin (중랑천 유역에서의 VfloTM 모형의 매개변수 민감도 분석)

  • Kim, Byung Sik;Kim, Bo Kyung;Kim, Hung Soo
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.29 no.6B
    • /
    • pp.503-512
    • /
    • 2009
  • Watershed models, a tool for representing the water cycle mechanism, are classified into distributed and lumped models. Currently, distributed models are more widely used than lumped models in research and applications. The lumped model estimates its parameters in a conceptual and empirical sense; in the distributed model, on the other hand, first-guess values are estimated from grid-based watershed characteristics and rainfall data. The distributed model therefore needs more detailed parameter adjustment during calibration, and one should precisely understand the characteristics and sensitivity of the model parameters. This study uses the Jungnang basin as the study area and analyzes the parameter sensitivity of the Vflo™ model, a physics-based distributed hydrologic model. First, a 100-year frequency design rainfall with a duration of 6 hours is derived using Huff's method; the discharge is then simulated using the calibrated parameters of the Vflo™ model. As a result, hydraulic conductivity and overland roughness affect runoff depth and peak discharge, respectively, while channel roughness influences travel time and peak discharge.
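The one-at-a-time style of sensitivity analysis described here can be sketched generically. In the sketch below, `simulate_peak_discharge` is a hypothetical placeholder for a model run (a real study would invoke Vflo™), and the parameter names and values are illustrative assumptions, not the paper's calibrated set.

```python
def simulate_peak_discharge(params: dict) -> float:
    """Placeholder for one hydrologic model run; returns peak discharge.
    Toy response only: higher conductivity or roughness lowers the peak."""
    return 500.0 / (params["hydraulic_conductivity"] * params["overland_roughness"])

def oat_sensitivity(base: dict, factor: float = 1.1) -> dict:
    """One-at-a-time sensitivity: perturb each parameter by `factor`
    and report the relative change in the model output."""
    q0 = simulate_peak_discharge(base)
    out = {}
    for name in base:
        perturbed = dict(base)
        perturbed[name] *= factor
        out[name] = (simulate_peak_discharge(perturbed) - q0) / q0
    return out

base_params = {"hydraulic_conductivity": 2.0, "overland_roughness": 0.3}
print(oat_sensitivity(base_params))
```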

Hybrid Integration of P-Wave Velocity and Resistivity for High-Quality Investigation of In Situ Shear-Wave Velocities at Urban Areas (도심지 지반 전단파속도 탐사를 위한 P-파 속도와 전기비저항의 이종 결합)

  • Joh, Sung-Ho;Kim, Bong-Chan
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.30 no.1C
    • /
    • pp.45-51
    • /
    • 2010
  • In urban areas, the design and construction of civil engineering structures such as subway tunnels, underground spaces, and deep excavations are impeded by unreliable site investigation. A variety of buried objects, electrical noise, and traffic vibration degrades the quality of site investigation, regardless of the technique used. In this research, a preliminary study was performed to develop a dedicated site-investigation technique for urban geotechnical sites that can overcome these limitations. This paper proposes the HiRAS (Hybrid Integration of Surface Waves and Resistivity) technique, the first outcome of the preliminary study. The technique combines surface-wave and electrical resistivity surveys: the CapSASW method for the surface-wave measurement and the PDC-R technique for the electrical resistivity survey were incorporated to develop HiRAS. CapSASW is well suited to evaluating material stiffness, and PDC-R is a reliable method for determining underground stratification even at sites with electrical noise. For the inversion analysis of the HiRAS technique, a site-specific relationship between stress-wave velocity and resistivity was employed. As an outgrowth of this research, the 2-D distribution of Poisson's ratio could also be determined.
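The Poisson's-ratio mapping mentioned at the end follows from the standard isotropic-elasticity relation between P- and S-wave velocities. A minimal sketch of that relation (the velocity values are illustrative, not from the paper):

```python
def poissons_ratio(vp: float, vs: float) -> float:
    """Poisson's ratio from P-wave and S-wave velocities (same units):
    nu = ((vp/vs)^2 - 2) / (2 * ((vp/vs)^2 - 1))."""
    r2 = (vp / vs) ** 2
    return (r2 - 2.0) / (2.0 * (r2 - 1.0))

# Illustrative values: vp = 1,500 m/s, vs = 700 m/s -> nu ~ 0.361
print(f"{poissons_ratio(1500.0, 700.0):.3f}")
```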

A Study on Monitoring Surface Displacement Using SAR Data from Satellite to Aid Underground Construction in Urban Areas (위성 SAR 자료를 활용한 도심지 지하 교통 인프라 건설에 따른 지표 변위 모니터링 적용성 연구)

  • Woo-Seok Kim;Sung-Pil Hwang;Wan-Kyu Yoo;Norikazu Shimizu;Chang-Yong Kim
    • The Journal of Engineering Geology
    • /
    • v.34 no.1
    • /
    • pp.39-49
    • /
    • 2024
  • The construction of underground infrastructure is garnering increasing research attention owing to population concentration and infrastructure overcrowding in urban areas. An important associated task is establishing a monitoring system to evaluate stability during infrastructure construction and operation, which relies on ground-investigation techniques that can evaluate ground stability, verify design validity, predict risk, facilitate safe operation management, and reduce construction costs. The method proposed here uses satellite imagery as a cost-effective and accurate ground-investigation technique that can be applied over a wide area during the construction and operation of infrastructure. In this study, Synthetic Aperture Radar (SAR) data were analyzed with a time-series radar interferometric technique to observe surface displacement during the construction of urban underground roads. The analysis confirmed that continuous surface displacement was occurring at some locations. In future work, comparing and analyzing on-site measurement data at the points of interest would help confirm whether the displacement is caused by tunnel excavation and assist in estimating the extent of the excavation impact zones.
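As background on radar interferometry in general (not the authors' processing chain), line-of-sight displacement is recovered from the unwrapped differential phase. A minimal sketch, assuming a C-band wavelength such as Sentinel-1's (~5.56 cm); the sensor choice is an assumption, as the abstract does not name one:

```python
import math

def los_displacement_mm(delta_phase_rad: float,
                        wavelength_m: float = 0.0556) -> float:
    """Line-of-sight displacement from unwrapped differential phase:
    d = -(lambda / (4 * pi)) * delta_phi.
    Default wavelength assumes a C-band sensor (~5.56 cm)."""
    return -(wavelength_m / (4.0 * math.pi)) * delta_phase_rad * 1000.0

# One radian of phase change corresponds to roughly -4.4 mm along the LOS.
print(f"{los_displacement_mm(1.0):.2f} mm")
```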

Algorithm for a Minimum Linear Arrangement (MinLA) of Lattice Graph (격자 그래프의 최소선형배열 알고리즘)

  • Sang-Un Lee
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.24 no.2
    • /
    • pp.105-111
    • /
    • 2024
  • This paper deals with the minimum linear arrangement (MinLA) of a lattice graph, for which an approximate algorithm of linear complexity O(n) has remained the viable solution, deriving the optimal MinLA of 31,680 for the 33×33 lattice. This paper proposes a partitioning arrangement algorithm of complexity O(1) that delivers an exact solution to the minimum linear arrangement. The proposed partitioning arrangement can be pictured as loading boxes into a container. It first partitions the m rows into r1, r2, r3 and the n columns into c1, c2, c3, obtaining 7 containers; the containers are partitioned according to a fixed rule. It then assigns numbers to the vertices in each partitioned box location-wise to obtain the MinLA. Given m, n ≥ 11, the size of boxes C2, C4, C6 is increased by 2 until an increase in the MinLA is detected; this process repeats at most 4 times for m, n ≤ 100. When tested on lattices in the range 2 ≤ n ≤ 100, the proposed algorithm proved applicable to lattices with both m = n and m ≠ n. It also obtained optimal results for the 33×33 and 100×100 lattices, superior to those obtained by existing algorithms. With its simplicity and outstanding performance, the minimum linear arrangement algorithm proposed in this paper could also be applied to the field of Very Large Scale Integration circuits, where m and n are very large.
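For reference, the objective being minimized is the sum of |π(u) − π(v)| over all lattice edges, where π is the vertex numbering. The sketch below evaluates that cost for an m × n lattice under a simple row-major numbering, as a baseline; it is not the paper's partitioning algorithm and does not reach the 31,680 optimum for 33×33.

```python
def minla_cost_row_major(m: int, n: int) -> int:
    """Linear-arrangement cost of the m x n lattice graph under
    row-major vertex numbering: each horizontal edge costs 1 and
    each vertical edge costs n, so the total is m*(n-1) + (m-1)*n*n."""
    label = lambda i, j: i * n + j
    cost = 0
    for i in range(m):
        for j in range(n):
            if j + 1 < n:                      # horizontal edge
                cost += abs(label(i, j) - label(i, j + 1))
            if i + 1 < m:                      # vertical edge
                cost += abs(label(i, j) - label(i + 1, j))
    return cost

print(minla_cost_row_major(33, 33))  # row-major baseline, not the optimum
```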

Synthesis and Properties of Ionic Polyacetylene Composite from the In-situ Quaternization Polymerization of 2-Ethynylpyridine Using Iron (III) Chloride (염화 철(III)을 이용한 2-에티닐피리딘의 in-situ 4차염화중합을 통한 이온형 폴리아세틸렌 복합체의 합성과 특성)

  • Taehyoung Kim;Sung-Ho Jin;Jongwook Park;Yeong-Soon Gal
    • Applied Chemistry for Engineering
    • /
    • v.35 no.4
    • /
    • pp.296-302
    • /
    • 2024
  • An ionic conjugated polymer-iron(III) chloride composite was prepared via in-situ quaternization polymerization of 2-ethynylpyridine (2EP) using iron(III) chloride. Various instrumental analyses revealed that the resulting conjugated polymer (P2EP)-iron(III) chloride composite has a conjugated backbone bearing the designed pyridinium ferric chloride complexes. The polymerization mechanism is assumed to proceed as follows: the activated triple bond of the 2-ethynylpyridinium salt formed in the first reaction step readily undergoes step-wise polymerization, followed by a propagation step involving the propagating macroanion and monomeric 2-ethynylpyridinium salts. The electro-optical and electrochemical properties of the P2EP-FeCl3 composite were studied. In the UV-visible spectra of the P2EP-FeCl3 composite, the absorption maxima were at 480 nm and 533 nm, and the PL maximum was at 598 nm. The cyclic voltammograms of the P2EP-FeCl3 composite exhibited irreversible electrochemical behavior between the oxidation and reduction peaks. From the plot of oxidation current density versus scan rate, the kinetics of the redox process of the composite were found to be very close to a diffusion-controlled process.
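A common way to test for the diffusion-controlled behavior reported here is a Randles-Sevcik-style check that peak current scales linearly with the square root of scan rate. The sketch below is a generic illustration of that check with made-up data; it is not the authors' analysis, and the numbers are purely hypothetical.

```python
import math

# Hypothetical data: scan rate (mV/s) vs. peak oxidation current (mA/cm^2)
scan_rates = [10.0, 25.0, 50.0, 100.0, 200.0]
peak_currents = [0.32, 0.50, 0.71, 1.00, 1.41]

# Least-squares fit of i_p = a * sqrt(v): a = sum(x*y) / sum(x*x)
xs = [math.sqrt(v) for v in scan_rates]
a = sum(x * y for x, y in zip(xs, peak_currents)) / sum(x * x for x in xs)
residual = max(abs(y - a * x) for x, y in zip(xs, peak_currents))
print(f"slope = {a:.4f}, worst residual = {residual:.4f}")
# A small residual (near-linear i_p vs sqrt(v)) indicates diffusion control.
```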

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets the interests and needs of users from the overflowing mass of content is becoming ever more important. Amid this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft also focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text-data analysis is expected to be useful, because new information is generated constantly, and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the information flow is vast and new information continuously emerges. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and it is difficult to extract high-quality triples. Second, it becomes harder for people to produce labeled text data as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult owing to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome these limits and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates the results. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data-processing methods are applied in the presented model to solve the problems of previous research and to enhance the model's effectiveness. From these processes, this study makes three contributions: first, a practical and simple automatic knowledge-extraction method that can be applied directly; second, a simple problem definition that makes performance evaluation possible; and finally, increased expressiveness of the knowledge, achieved by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance-evaluation method are also presented. The empirical study, conducted to confirm the usefulness of the presented model, uses experts' reports on 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named-entity-recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, one score function per stock is trained. Thus, when a new entity from the testing set appears, its score can be calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the presented model, we measure its predictive power and check whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set. In the empirical study, the presented model shows 69.3% hit accuracy on the testing set of 2,526 reports. This hit ratio is meaningfully high despite several constraints on the research. Looking at the model's prediction performance for individual stocks, only three stocks (LG Electronics, KiaMtr, and Mando) show performance far below average; this may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology for finding the key entities, or combinations of entities, needed to search for related information in accordance with the user's investment intention. Graph data are generated using only the named-entity-recognition tool and applied to the neural tensor network, without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain to be addressed; most notably, the especially poor performance on a few stocks shows the need for further research. Finally, the empirical study confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.
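To make the scoring step concrete, here is a minimal numpy sketch of a neural tensor network score function of the kind described, applied to a single entity vector (standard NTNs score entity pairs, so this single-entity form is a simplification). The dimensions, random initialization, and two-stock setup are illustrative assumptions, not the study's trained model.

```python
import numpy as np

def ntn_score(e: np.ndarray, W: np.ndarray, V: np.ndarray,
              b: np.ndarray, u: np.ndarray) -> float:
    """NTN score for one entity vector e:
    score = u^T tanh(e^T W[k] e + V e + b), over k tensor slices.
    One such score function is trained per stock; an entity is
    assigned to the stock whose function scores it highest."""
    bilinear = np.array([e @ W[k] @ e for k in range(W.shape[0])])
    return float(u @ np.tanh(bilinear + V @ e + b))

rng = np.random.default_rng(0)
d, k = 100, 4                      # entity dimension, tensor slices
e = np.zeros(d); e[7] = 1.0        # one-hot encoded entity (illustrative)

# One randomly initialized score function per stock (two stocks shown)
stocks = ["STOCK_A", "STOCK_B"]
params = [(rng.normal(size=(k, d, d)), rng.normal(size=(k, d)),
           rng.normal(size=k), rng.normal(size=k)) for _ in stocks]

scores = {s: ntn_score(e, *p) for s, p in zip(stocks, params)}
print(max(scores, key=scores.get), scores)
```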

A study of compaction ratio and permeability of soil with different water content (축제용흙의 함수비 변화에 의한 다짐율 및 수용계수 변화에 관한 연구)

  • 윤충섭
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.13 no.4
    • /
    • pp.2456-2470
    • /
    • 1971
  • Compaction of soil is very important in the construction of soil structures such as highway fills, reservoir embankments, and sea dikes. With increasing compaction effort, the strength, internal friction, and cohesion of the soil increase greatly, while permeability is markedly reduced. Factors that influence compaction include moisture content, grain size, grain-size distribution, and other physical properties, as well as the compaction method; among these, moisture content is the most important. To achieve maximum density for a given soil, the corresponding optimum water content is required. If the water content deviates even slightly from the optimum, the compaction ratio decreases and the corresponding mechanical properties change noticeably. The results of this study of soil compaction at different water contents are summarized as follows.
1) The maximum dry density increases and the corresponding optimum moisture content decreases with increasing coarse-grain content, and the compaction curve is steeper than when the fine-grain content increases.
2) The maximum dry density decreases with increasing optimum water content, following $\gamma_{d,max} = 2.232 - 0.02785\,w_0$; this relationship changes to $\gamma_d = ae^{-bw}$ when the water content deviates from the optimum.
3) For most soils, the dry side is better than the wet side for applying compactive effort; the wet side is preferable only when the liquid limit of the soil exceeds 50 percent.
4) The compaction ratio of cohesive soil is greater than that of cohesionless soil, even when the coarse-grain contents are the same.
5) The relationship between maximum dry density and void ratio is $\gamma_{d,max} = 2.186 - 0.872e$, which changes to $\gamma_d = ae^{be}$ when the water content deviates from the optimum.
6) The porosity increases with increasing optimum water content as $n = 15.85 + 1.075w$, but the relation becomes $n = ae^{bw}$ if the water content varies.
7) The increase in permeability is large when the soil is highly plastic or coarse.
8) The coefficient of permeability of soil compacted on the wet side is lower than that of soil compacted on the dry side.
9) Cohesive soil has higher permeability than cohesionless soil, even when the coarse-particle contents are the same.
10) In general, soil with a high optimum water content has a lower coefficient of permeability than soil with a low optimum water content.
11) The coefficient of permeability is related to density, gradation, and void ratio, and it increases with increasing degree of saturation.
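For context, the dry density on which compaction curves are built follows directly from wet density and water content. The sketch below also evaluates the fitted relation for maximum dry density quoted in result 2) above, with units assumed to be g/cm³ and percent (the coefficients come from the abstract; the sample values are illustrative).

```python
def dry_density(wet_density: float, water_content_pct: float) -> float:
    """Dry density from wet (bulk) density and gravimetric water
    content: gamma_d = gamma / (1 + w)."""
    return wet_density / (1.0 + water_content_pct / 100.0)

def gamma_d_max(w_opt_pct: float) -> float:
    """Fitted relation quoted in the abstract (assumed g/cm^3, percent):
    gamma_d,max = 2.232 - 0.02785 * w_0."""
    return 2.232 - 0.02785 * w_opt_pct

print(dry_density(2.05, 14.0))   # wet density 2.05 g/cm^3 at w = 14%
print(gamma_d_max(14.0))         # ~1.84 g/cm^3 at w_0 = 14%
```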

A Hardware Implementation of Image Scaler Based on Area Coverage Ratio (면적 점유비를 이용한 영상 스케일러의 설계)

  • 성시문;이진언;김춘호;김이섭
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.40 no.3
    • /
    • pp.43-53
    • /
    • 2003
  • Unlike in analog display devices, the physical screen resolution of digital display devices is fixed at manufacturing time, which is a weak point of digital devices. The resolution of the images to be displayed varies, so interpolation or decimation is needed to match the input pixels to the screen resolution; this process is called image scaling. Much research has been devoted to reducing the hardware cost and image distortion of image-scaling algorithms. In this paper, we propose the Winscale algorithm, which translates scale up/down in the continuous domain into scale up/down in the discrete domain and is therefore well suited to digital display devices. A hardware implementation of the image scaler was carried out using Verilog XL, and the chip was fabricated in a 0.5 µm Samsung SOG technology. The hardware cost and scalability are compared with conventional image-scaling algorithms used in other software. The Winscale algorithm proves more scalable than other image-scaling algorithms of similar hardware cost, and can be used in various digital display devices that require image scaling.
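To illustrate the area-coverage idea behind this family of scalers, here is a generic sketch of area-weighted resampling in one dimension. It shows the principle only (each output pixel averages the input intensity over the interval it covers, weighted by overlap), not the paper's Winscale datapath or its hardware implementation.

```python
def area_scale_1d(pixels: list[float], out_len: int) -> list[float]:
    """Area-coverage resampling of a 1-D signal: each output pixel is
    the average of input intensity over the span of the input axis it
    covers, with each input pixel weighted by its overlap length."""
    in_len = len(pixels)
    scale = in_len / out_len          # input units per output pixel
    out = []
    for k in range(out_len):
        lo, hi = k * scale, (k + 1) * scale
        acc = 0.0
        i = int(lo)
        while i < in_len and i < hi:
            overlap = min(hi, i + 1) - max(lo, i)
            acc += pixels[i] * overlap
            i += 1
        out.append(acc / scale)
    return out

print(area_scale_1d([10, 20, 30, 40, 50, 60], 4))  # 6 -> 4 pixels
```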