• Title/Summary/Keyword: Design performance (설계성능)

Search Results: 16,731, Processing Time: 0.04 seconds

Experimental Study to Evaluate the Durability of 100 MPa Class Ultra-high Strength Centrifugal Molding Concrete (100MPa급 초고강도 원심성형 콘크리트의 내구성 평가를 위한 실험연구)

  • Jeong-Hoi Kim;Sung-Jin Kim;Doo-Sung Lee
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.28 no.1
    • /
    • pp.12-23
    • /
    • 2024
  • In this study, a structural concrete square beam was developed using the centrifugal molding technique. To secure the bending stiffness of the cross section, the hollow ratio of the cross section was set to 10% or less. Instead of the lean concrete mix currently in use, a concrete mix with a high slump (150-200 mm) and a design strength of 100 MPa or more was developed and applied. To investigate the durability of centrifugally formed PSC square beams intended for use as the superstructure of avalanche-protection tunnels or rahmen (rigid-frame) bridges, the durability of ultra-high-strength centrifugally formed concrete with a compressive strength of 100 MPa was evaluated in terms of deterioration and chemical-resistance properties. Concrete durability tests, including chloride penetration resistance, accelerated carbonation, sulfate erosion resistance, freeze-thaw resistance, and scaling resistance, were performed on centrifugally formed square beam test specimens produced in 2022 and 2023. Based on the results of this study, the durability of centrifugally molded concrete, whose watertightness is increased in the later manufacturing stage, was found to be superior to that of general concrete.

Fabrication of Silica Nanoparticles by Recycling EMC Waste from Semiconductor Molding Process and Its Application to CMP Slurry (반도체 몰딩 공정에서 발생하는 EMC 폐기물의 재활용을 통한 실리카 나노입자의 제조 및 반도체용 CMP 슬러리로의 응용)

  • Ha-Yeong Kim;Yeon-Ryong Chu;Gyu-Sik Park;Jisu Lim;Chang-Min Yoon
    • Journal of the Korea Organic Resources Recycling Association
    • /
    • v.32 no.1
    • /
    • pp.21-29
    • /
    • 2024
  • In this study, EMC (epoxy molding compound) waste from the semiconductor molding process is recycled and synthesized into silica nanoparticles, which are then applied as the abrasive material in CMP (chemical mechanical polishing) slurry. Specifically, a silanol precursor is extracted from EMC waste by an ultra-sonication method, which supplies heat and energy, using ammonia solution as the etchant. By employing the as-extracted silanol in a facile sol-gel process, uniform silica nanoparticles (e-SiO2, experimentally synthesized SiO2) with a size of ca. 100 nm are successfully synthesized. Physical and chemical analyses confirmed that e-SiO2 has properties similar to those of commercially available SiO2 (c-SiO2). For practical CMP application, a CMP slurry was prepared using e-SiO2 as the abrasive and tested by polishing a semiconductor chip. As a result, the scratches on the rough surface of the chip were successfully removed, leaving a smooth surface. Hence, the results present a method for recycling EMC waste into silica nanoparticles and applying them in a high-quality CMP slurry for the polishing process in semiconductor packaging.

Field Applicability Evaluation Experiment for Ultra-high Strength (130MPa) Concrete (초고강도(130MPa) 콘크리트의 현장적용성 평가에 관한 실험)

  • Choonhwan Cho
    • Journal of the Society of Disaster Information
    • /
    • v.20 no.1
    • /
    • pp.20-31
    • /
    • 2024
  • Purpose: Research and development of high-strength concrete enables high-rise buildings and reduces the self-weight of a structure by reducing member cross-sections; thinner beams and slabs allow more floors to be built, a larger effective space to be secured, and the amounts of reinforcement and concrete used in the foundation to be reduced. Method: In terms of field construction and quality, the effect of reducing drying shrinkage was confirmed by studying mixes with a low water-binder ratio and by minimizing bleeding on the concrete surface. Result: Ease of site construction was confirmed thanks to the high self-compacting ability arising from the increased fluidity provided by high-performance water-reducing agents, and the early strength development of the concrete was confirmed to shorten the time needed to remove the formwork. These experimental results show that the field application of ultra-high-strength concrete with a design strength of 100 MPa or higher can be expanded to high-rise buildings. Through this study, we tested and evaluated whether ultra-high-strength concrete of 130 MPa or higher, targeting the applicability of high-rise buildings with more than 120 floors in Korea, could be applied in the field. Conclusion: This study found the optimal mixing ratio through various indoor basic experiments to confirm the applicability of ultra-high strength, produced 130 MPa ultra-high-strength concrete at a ready-mixed concrete plant at near-real scale, and tested its applicability in terms of fluidity, strength development, and heat of hydration.

A Study on the Smoke Removal Equipment in Plant Facilities Using Simulation (시뮬레이션을 이용한 플랜트 시설물 제연설비에 관한 연구)

  • Doo Chan Choi;Min Hyeok Yang;Min Hyeok Ko;Su Min Oh
    • Journal of the Society of Disaster Information
    • /
    • v.20 no.1
    • /
    • pp.40-46
    • /
    • 2024
  • Purpose: In this study, in order to ensure the evacuation safety of plant facilities, we analyze the relationship between the height of smoke-barrier walls, the presence or absence of smoke removal equipment, and evacuation safety. Method: Using fire and evacuation simulations, evacuation safety was analyzed while varying the height of the smoke-barrier wall and the air supply and exhaust volumes according to vertical distance. Result: For visibility, when only a 0.6 m barrier wall is used, the time for visibility to fall below 5 m is shortest, and with a 1.2 m barrier wall it is 20% longer than when smoke removal facilities are used. For temperature, the corresponding time is 20% longer with a 1.2 m wall than with a 0.6 m wall when only the barrier wall is used without smoke removal facilities. Conclusion: It was found that increasing the height of the smoke-barrier wall can affect visibility, and that installing smoke removal facilities affects temperature. Therefore, an appropriate smoke control plan and smoke removal equipment should be provided in consideration of the process characteristics.

A Study on the Evaluation of the Evacuation Safety of the Disabled in Indoor Sports Stadiums through Evacuation Simulation (피난시뮬레이션을 통한 실내 스포츠경기장 내 장애인의 피난 안전성 평가 연구)

  • MinEon Ju;SeHong Min
    • Journal of the Society of Disaster Information
    • /
    • v.20 no.1
    • /
    • pp.69-81
    • /
    • 2024
  • Purpose: Recently, there has been a movement to guarantee the right of the disabled to watch sports. However, sports stadiums are designed without considering wheelchair users, so their right to move within the stadium is not secured, and these restrictions on movement make the disabled vulnerable in an emergency evacuation. This study aims to develop a plan to ensure the safety of movement and evacuation of wheelchair users by conducting simulations of indoor sports stadiums. Method: The simulation was performed by constructing scenarios with the shape of the stands as a variable, and the effect of wheelchair-seat installation on evacuation was confirmed. Result: The results were compared according to whether wheelchair seats were installed, the evacuation route of wheelchair users, and whether wheelchair seats were arranged separately, and the impact of wheelchair-seat installation on evacuation and its characteristics were derived. The upward, separated seating arrangement proved the most vulnerable to evacuation. Conclusion: A plan to secure evacuation performance was derived for the top floors of the upward, separated seating arrangement. These findings can be used to secure the safety of movement and evacuation of the disabled in sports stadiums.

A Study on Key Arrangement of Virtual Keyboard based on Eyeball Input system (안구 입력 시스템 기반의 화상키보드 키 배열 연구)

  • Sa Ya Lee;Jin Gyeong Hong;Joong Sup Lee
    • Smart Media Journal
    • /
    • v.13 no.4
    • /
    • pp.94-103
    • /
    • 2024
  • The eyeball input system is a text input system designed around 'eye tracking technology' and 'virtual keyboard character-input technology'. The virtual keyboard structure currently in use is a rectangular QWERTY array optimized for a multi-input method that uses all 10 fingers of both hands simultaneously. However, since eye tracking is a single-input method that relies solely on eye movement, providing only one focal point for input, problems arise when it is combined with a rectangular virtual keyboard designed for multi-input. To solve this problem, previous studies on the shape, type, and movement of the muscles connected to the eyeball were first reviewed. This review showed that eye movement is fundamentally rotational rather than linear. This study therefore proposes a new key arrangement in which the keys are laid out in a circular structure suited to rotational motion, rather than the rectangular, both-hands-optimized arrangement of current virtual keyboards. In addition, a performance verification experiment compared the circular key arrangement with the existing rectangular one, and the experiment confirmed that the circular arrangement is a good replacement for the rectangular arrangement in virtual keyboards.
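
As a rough illustration of the circular arrangement the abstract describes (not the paper's actual layout; the key set, radius, and function name below are invented for the sketch), keys can be spaced evenly on a circle so that moving between them is a rotation of the gaze:

```python
import math

def circular_layout(keys, radius=100.0, center=(0.0, 0.0)):
    """Spread keys evenly on a circle, matching the rotational
    motion of the eye rather than a rectangular grid."""
    n = len(keys)
    layout = {}
    for i, key in enumerate(keys):
        # Key 0 sits at the top in screen coordinates (y grows downward).
        angle = 2 * math.pi * i / n - math.pi / 2
        layout[key] = (center[0] + radius * math.cos(angle),
                       center[1] + radius * math.sin(angle))
    return layout

# Eight placeholder keys on a 100-unit circle.
positions = circular_layout(list("ABCDEFGH"))
```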

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, the Baduk (Go) artificial intelligence program by Google DeepMind, achieved a huge victory against Lee Sedol. Many people thought that machines would not be able to beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled. In contrast, however, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether deep learning techniques can be used not only for the recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compare the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data in the paper are the telemarketing response data of a bank in Portugal. They include input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, together with a binary target variable recording whether the customer opened an account.
In this study, to evaluate the applicability of deep learning algorithms and techniques to binary classification problems, we compared the performance of various models using the CNN and LSTM algorithms and dropout, which are widely used in deep learning, with that of MLP models, a traditional artificial neural network. However, since not all network design alternatives can be tested, given the nature of artificial neural networks, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of the dropout technique. The F1 score was used to evaluate the models, to show how well they classify the class of interest rather than overall accuracy. The detailed methods for applying each deep learning technique were as follows. The CNN algorithm recognizes features by reading values adjacent to a specific value, but in business data the distance between fields usually does not matter because the fields are independent. In this experiment, we therefore set the filter size of the CNN to the number of fields, so that the whole data record is learned at once, and added a hidden layer so that decisions are made from the additional features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of field position. For the dropout technique, neurons in each hidden layer were dropped with a probability of 0.5. The experimental results show that the model with the highest F1 score was the CNN model using dropout, and the next best was the MLP model with two hidden layers using dropout.
This study produced several findings. First, models using dropout make slightly more conservative predictions than those without, and generally classify better. Second, CNN models show better classification performance than MLP models; this is interesting because CNNs performed well in binary classification problems, to which they have rarely been applied, as well as in fields where their effectiveness has already been proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to business binary classification problems.
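
The F1 score used above is the harmonic mean of precision and recall; a minimal sketch of the computation (the confusion-matrix counts are illustrative, not taken from the paper):

```python
def f1_score(tp, fp, fn):
    # Precision: share of predicted positives that are truly positive.
    precision = tp / (tp + fp)
    # Recall: share of actual positives that the model found.
    recall = tp / (tp + fn)
    # F1: harmonic mean, penalizing imbalance between the two.
    return 2 * precision * recall / (precision + recall)

# Illustrative confusion-matrix counts for a churn-style classifier.
score = f1_score(tp=80, fp=20, fn=40)  # 8/11, about 0.727
```

Because it ignores true negatives, F1 rewards finding the rare interesting class (e.g. responders) rather than the easy majority, which is why the paper prefers it to overall accuracy.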

The way to make training data for deep learning model to recognize keywords in product catalog image at E-commerce (온라인 쇼핑몰에서 상품 설명 이미지 내의 키워드 인식을 위한 딥러닝 훈련 데이터 자동 생성 방안)

  • Kim, Kitae;Oh, Wonseok;Lim, Geunwon;Cha, Eunwoo;Shin, Minyoung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.1-23
    • /
    • 2018
  • Since the beginning of the 21st century, various high-quality services have emerged with the growth of the internet and information and communication technologies. In particular, the E-commerce industry, in which Amazon and eBay stand out, is growing explosively. As E-commerce grows, customers can easily find what they want to buy while comparing the many products registered at online shopping malls. However, a problem has arisen with this growth: with so many products registered, it has become difficult for customers to find what they really need in the flood of products. When customers search with a general keyword, too many products come up; conversely, few products are found if customers type in product details, because concrete product attributes are rarely registered as text. In this situation, automatically recognizing the text in images can be a solution. Because the bulk of product details are presented in catalogs in image format, most product information cannot be found by text input in current text-based search systems. If the information in these images can be converted to text, customers can search by product detail, which makes shopping more convenient. Various existing OCR (optical character recognition) programs can recognize text in images, but they are hard to apply to catalogs because they fail in certain circumstances, for example when the text is small or the fonts are inconsistent. Therefore, this research proposes a way to recognize keywords in catalogs with deep learning, the state of the art in image recognition since the 2010s.
The Single Shot Multibox Detector (SSD), a model with a strong record in object detection, can be used, with its structure redesigned to account for the differences between text and objects. But the SSD model needs a lot of labeled training data, because deep learning algorithms of this kind must be trained by supervised learning. To collect data, one could manually label the location and class of each text in a catalog, but manual collection raises many problems: some keywords would be missed through human error, and collection would be too time-consuming given the scale of data needed, or too costly if many workers were hired to shorten the time. Furthermore, if specific keywords need to be trained, finding images that contain those words is also difficult. To solve this data issue, this research developed a program that creates training data automatically. The program composes catalog-like images containing various keywords and pictures and saves the location information of the keywords at the same time. With this program, not only can data be collected efficiently, but the performance of the SSD model also improves: the SSD model recorded an 81.99% recognition rate with 20,000 data items created by the program. Moreover, this research tested the efficiency of the SSD model across data variations to analyze which features of the data influence text-recognition performance. The results show that the number of labeled keywords, the addition of overlapping keyword labels, the presence of unlabeled keywords, the spacing between keywords, and differences in background images are all related to SSD performance. This test can lead to performance improvements for the SSD model, or for other deep-learning-based text recognizers, through higher-quality data.
The SSD model redesigned to recognize text in images and the program developed for creating training data are expected to contribute to improving search systems in E-commerce: suppliers can spend less time registering keywords for products, and customers can search for products using the details described in the catalog.
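
The automatic label generation idea, composing an image that contains known keywords while recording their bounding boxes at the same time, can be reduced (rendering aside) to a sketch like the following; the function name, canvas size, character metrics, and keyword list are all invented for illustration:

```python
import random

def make_training_sample(keywords, canvas=(600, 800), char_w=12, char_h=20):
    """Place each keyword at a random non-overlapping position on a blank
    canvas and record its bounding box, mimicking automatic labelling."""
    boxes = []
    for word in keywords:
        w, h = char_w * len(word), char_h
        for _ in range(100):  # retry until a free spot is found
            x = random.randint(0, canvas[0] - w)
            y = random.randint(0, canvas[1] - h)
            box = (x, y, x + w, y + h)
            # Accept only if the box overlaps no previously placed box.
            if all(box[2] <= b[0] or b[2] <= box[0] or
                   box[3] <= b[1] or b[3] <= box[1] for b in boxes):
                boxes.append(box)
                break
    return list(zip(keywords, boxes))

random.seed(0)  # reproducible sketch
samples = make_training_sample(["cotton", "waterproof", "100cm"])
```

In a real generator the same loop would also draw each word onto the image, so the (image, boxes) pair comes out labeled for free, which is the efficiency gain the abstract describes.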

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets the interests and needs of users from the overflowing content becomes more important as content continues to grow. In this flood of information, search systems increasingly try to reflect the user's intention in the results rather than treating the query as a simple string, and large IT companies such as Google and Microsoft focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful, because it constantly generates new information, and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in areas such as finance where the information flow is vast and new information keeps emerging. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and hard to extract good-quality triples. Second, producing labeled text data by hand becomes harder as the extent and scope of knowledge grow and patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual character of knowledge. To overcome these limits and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike other work, the purpose of this study is to extract knowledge entities related to individual stock items.
Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and enhance its effectiveness. These processes give this study three significances. First, a practical and simple automatic knowledge extraction method is presented. Second, the possibility of performance evaluation is shown through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the model, experts' reports on the 30 individual stocks with the highest publication frequency from May 30, 2017 to May 21, 2018 are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, one score function per stock is trained. Thus, when a new entity appears in the testing set, its score can be calculated with every score function, and the stock whose function gives the highest score is predicted as the item related to the entity. To evaluate the model, we check its predictive power and whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set.
As a result of the empirical study, the model shows 69.3% hit accuracy on a testing set of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at the prediction performance for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show performance far below average; this may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, needed to search for related information in accordance with the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network, without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the model as described above. However, some limits remain: notably, the especially poor performance on a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.
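
The abstract does not spell out the per-stock score functions; as one common formulation, a Neural Tensor Network score in the style of Socher et al. could be sketched as below. The dimensions, random initialization, and entity indices are invented for illustration; a trained model would have learned parameters per stock:

```python
import numpy as np

def ntn_score(e1, e2, W, V, b, u):
    """NTN score g(e1, e2) = u . tanh(e1' W[1..k] e2 + V [e1; e2] + b)."""
    k = W.shape[0]  # number of tensor slices
    bilinear = np.array([e1 @ W[i] @ e2 for i in range(k)])
    linear = V @ np.concatenate([e1, e2])
    return float(u @ np.tanh(bilinear + linear + b))

rng = np.random.default_rng(0)
d, k = 100, 4                       # one-hot entity dimension, slice count
e_new = np.eye(d)[7]                # a new entity from the testing set
e_stock = np.eye(d)[3]              # a representative vector for one stock
W = rng.normal(size=(k, d, d))      # untrained parameters, for shape only
V = rng.normal(size=(k, 2 * d))
b = rng.normal(size=k)
u = rng.normal(size=k)
score = ntn_score(e_new, e_stock, W, V, b, u)
```

With one such function per stock, the prediction step from the abstract is simply the argmax of the scores over all 30 stocks.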

A Study of Radon Reduction using Panel-type Activated Carbon (판재형 활성탄을 이용한 라돈 저감 연구)

  • Choi, Il-Hong;Kang, Sang-Sik;Jun, Jae-Hoon;Yang, Seung-Woo;Park, Ji-Koon
    • Journal of the Korean Society of Radiology
    • /
    • v.11 no.5
    • /
    • pp.297-302
    • /
    • 2017
  • Recently, building materials and air purification filters using eco-friendly charcoal have been actively studied to reduce the concentration of radon gas in indoor air. In this study, radon reduction performance was assessed by designing and producing a new panel-type activated carbon filter that can be handled more efficiently than the conventional charcoal filters used to reduce radon gas. To fabricate the panel-type filter, activated carbon powder was first mixed with polyurethane and press-molded; then, by diamond cutting, activated carbon filters of 2 mm, 4 mm, and 6 mm thickness were produced. To investigate the physical characteristics of the fabricated filters, surface area and flexural strength measurements were performed. In addition, to evaluate indoor radon reduction performance, the radon concentrations before and after a constant air flow passed through the filter were measured using three acrylic chambers. As a result, the surface area of the fabricated activated carbon was approximately 1,008 m²/g, similar to conventional products, and the flexural load, 435 N, was about three times that of gypsum board. Finally, the radon reduction efficiency improved as the thickness of the activated carbon increased, with an excellent radon removal rate of more than 90% for the 6 mm filter. From these results, panel-type activated carbon is considered usable as an eco-friendly building material to reduce radon gas in enclosed indoor environments.
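
The removal-rate figure quoted above follows directly from the before/after chamber concentrations; a trivial sketch (the concentrations below are invented, chosen only to illustrate what a >90% rate like the 6 mm result looks like):

```python
def reduction_rate(c_before, c_after):
    """Radon reduction efficiency in percent, from concentrations
    measured before and after the air passes through the filter."""
    return (c_before - c_after) / c_before * 100

# Invented concentrations (Bq/m^3): a drop from 200 to 18 Bq/m^3
# corresponds to a 91% removal rate, in the >90% range reported
# for the 6 mm filter.
rate = reduction_rate(200.0, 18.0)
```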