• Title/Summary/Keyword: 설계성능 (design performance)


Class-Agnostic 3D Mask Proposal and 2D-3D Visual Feature Ensemble for Efficient Open-Vocabulary 3D Instance Segmentation (효율적인 개방형 어휘 3차원 개체 분할을 위한 클래스-독립적인 3차원 마스크 제안과 2차원-3차원 시각적 특징 앙상블)

  • Sungho Song;Kyungmin Park;Incheol Kim
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.7
    • /
    • pp.335-347
    • /
    • 2024
  • Open-vocabulary 3D point cloud instance segmentation (OV-3DIS) is a challenging visual task that segments a 3D scene point cloud into object instances of both base and novel classes. In this paper, we propose Open3DME, a novel model for OV-3DIS, to address important design issues and overcome the limitations of existing approaches. First, to improve the quality of class-agnostic 3D masks, our model uses T3DIS, an advanced Transformer-based 3D point cloud instance segmentation model, as its mask proposal module. Second, to obtain semantically text-aligned visual features for each point cloud segment, our model extracts both 2D and 3D features from the point cloud and the corresponding multi-view RGB images, using pretrained CLIP and OpenSeg encoders respectively. Last, to make effective use of both the 2D and 3D visual features of each point cloud segment during label assignment, our model adopts a unique feature ensemble method. To validate our model, we conducted quantitative and qualitative experiments on the ScanNet-V2 benchmark dataset, demonstrating significant performance gains.
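The ensemble step can be pictured with a small sketch: fuse each segment's 2D and 3D features, then assign the label whose CLIP text embedding is most similar. The convex-combination fusion, the weight alpha, and all array names are illustrative assumptions; the paper's exact ensemble method may differ.

```python
import numpy as np

def assign_labels(feat_2d, feat_3d, text_emb, alpha=0.5):
    """Open-vocabulary label assignment for 3D mask proposals (sketch).

    feat_2d:  (M, D) per-segment 2D features (e.g., from the RGB views)
    feat_3d:  (M, D) per-segment 3D features from the point cloud branch
    text_emb: (C, D) CLIP text embeddings of the class prompts
    alpha:    2D/3D mixing weight (hypothetical choice)
    """
    def l2norm(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    # Convex combination of the two modalities, then cosine similarity to text
    fused = l2norm(alpha * l2norm(feat_2d) + (1.0 - alpha) * l2norm(feat_3d))
    sims = fused @ l2norm(text_emb).T              # (M, C) cosine similarities
    return sims.argmax(axis=1), sims.max(axis=1)   # label and confidence per mask

# Toy usage with random features (D=512, as in CLIP ViT-B/32)
rng = np.random.default_rng(0)
labels, conf = assign_labels(rng.normal(size=(10, 512)),
                             rng.normal(size=(10, 512)),
                             rng.normal(size=(5, 512)))
print(labels, conf.round(3))
```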

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, Google DeepMind's Baduk (Go) artificial intelligence program, won a decisive victory against Lee Sedol. Many people thought a machine could not beat a human at Go because, unlike chess, the number of possible moves at each turn exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence was highlighted as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning, the core artificial intelligence technique used in the AlphaGo algorithm, has drawn wide interest. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled to achieve good results. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether deep learning techniques can be used not only for recognizing high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared their performance with that of traditional artificial neural network models. The experimental data are telemarketing response data from a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer opened an account. To evaluate the applicability of deep learning to binary classification, we compared models using the CNN and LSTM algorithms and the dropout technique, all widely used in deep learning, with MLP models, a traditional artificial neural network. Because not all network design alternatives can be tested, the experiment used restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the conditions for applying dropout. The F1 score was used to evaluate the models, to show how well they classify the class of interest rather than overall accuracy. The detailed methods for applying each deep learning technique were as follows. The CNN algorithm recognizes features by reading adjacent values around a specific value, but the distance between business data fields carries no meaning because the fields are usually independent; we therefore set the CNN filter size to the number of fields, so that the whole record is learned at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first, to reduce the influence of each field's position.
For the dropout technique, neurons were dropped with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model with dropout, followed by the MLP model with two hidden layers and dropout. The experiment yielded several findings. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models; this is interesting because the CNN performed well not only in fields where its effectiveness is proven but also in binary classification problems, to which it has rarely been applied. Third, the LSTM algorithm appears unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we confirm that some deep learning algorithms can be applied to solve business binary classification problems.
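As a rough illustration of the best-performing configuration, the sketch below builds a Keras CNN whose Conv1D kernel spans all input fields at once, adds a hidden layer for the decision, applies dropout at p=0.5 per hidden layer, and scores with F1. The layer sizes, field count, and synthetic data are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np
from sklearn.metrics import f1_score
from tensorflow import keras
from tensorflow.keras import layers

n_fields = 16   # stand-ins for age, occupation, loan status, ...
X = np.random.rand(1000, n_fields, 1).astype("float32")
y = (np.random.rand(1000) > 0.5).astype("float32")  # account opened or not

model = keras.Sequential([
    layers.Input(shape=(n_fields, 1)),
    # Kernel size equals the number of fields: each filter sees the whole record
    layers.Conv1D(filters=32, kernel_size=n_fields, activation="relu"),
    layers.Flatten(),
    layers.Dropout(0.5),                  # drop neurons with probability 0.5
    layers.Dense(16, activation="relu"),  # extra hidden layer for the decision
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=3, batch_size=32, verbose=0)

# F1 instead of accuracy, to reward classifying the class of interest well
pred = (model.predict(X, verbose=0).ravel() > 0.5).astype(int)
print("F1:", f1_score(y, pred))
```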

The way to make training data for deep learning model to recognize keywords in product catalog image at E-commerce (온라인 쇼핑몰에서 상품 설명 이미지 내의 키워드 인식을 위한 딥러닝 훈련 데이터 자동 생성 방안)

  • Kim, Kitae;Oh, Wonseok;Lim, Geunwon;Cha, Eunwoo;Shin, Minyoung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.1-23
    • /
    • 2018
  • Since the start of the 21st century, various high-quality services have emerged with the growth of the internet and information and communication technologies. In particular, the e-commerce industry, in which Amazon and eBay stand out, is expanding rapidly. As e-commerce grows, customers can easily compare and buy products because more products are registered at online shopping malls. However, this growth has also created a problem: with so many products registered, it has become difficult for customers to find what they really need in the flood of products. When customers search with a generalized keyword, too many products come up as results; conversely, few products are found when customers type in product details, because concrete product attributes are rarely registered as text. In this situation, automatically recognizing the text in images can be a solution. Because the bulk of product details are written in catalogs in image format, most product information cannot be found by the current text-based search systems. If the information in these images can be converted to text, customers can search by product details and shop more conveniently. Various existing OCR (Optical Character Recognition) programs can recognize text in images, but they are hard to apply to catalogs because they fail in certain circumstances, such as when the text is too small or the fonts are inconsistent. Therefore, this research suggests a way to recognize keywords in catalogs with deep learning, the state of the art in image recognition since the 2010s. The Single Shot Multibox Detector (SSD), a well-established model for object detection, can be used with its structure redesigned to account for the differences between text and objects. However, because deep learning models are trained by supervised learning, the SSD model needs a large amount of labeled training data. One could manually label the location and class of each text region in catalogs, but manual collection raises many problems: some keywords would be missed through human error, collection would be too time-consuming given the scale of data needed, or too costly if many workers were hired to shorten the time. Furthermore, if specific keywords need to be trained, finding images that contain those words is also difficult. To solve this data issue, this research developed a program that creates training data automatically. The program generates catalog-like images containing various keywords and pictures, and saves the location information of the keywords at the same time. With this program, not only can data be collected efficiently, but the performance of the SSD model also improves: the SSD model recorded an 81.99% recognition rate with 20,000 images created by the program. Moreover, this research tested the efficiency of the SSD model under different data conditions to analyze which features of the data influence text recognition performance.
The results show that the number of labeled keywords, the addition of overlapping keyword labels, the presence of unlabeled keywords, the spacing between keywords, and differences in background images are all related to SSD performance. This test can guide performance improvements for the SSD model and other deep-learning-based text recognizers through higher-quality data. The SSD model redesigned to recognize text in images and the program developed for creating training data are expected to contribute to improved search systems in e-commerce: suppliers can spend less time registering keywords for products, and customers can search for products using the details written in catalogs.
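A minimal sketch of such an automatic data generator appears below: it pastes keywords at random positions on a blank background, records each keyword's bounding box, and saves the image/label pair for an SSD-style detector. The plain background, default font, and JSON label format are illustrative assumptions; the authors' program also varies pictures, spacing, and backgrounds, which their efficiency test found to matter.

```python
import json
import random
from PIL import Image, ImageDraw, ImageFont

def make_sample(keywords, size=(512, 512), path_prefix="sample"):
    """Render keywords onto a catalog-like image and record their boxes."""
    img = Image.new("RGB", size, "white")   # stand-in for a catalog background
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    labels = []
    for word in keywords:
        x = random.randint(0, size[0] - 150)
        y = random.randint(0, size[1] - 20)
        draw.text((x, y), word, fill="black", font=font)
        labels.append({"word": word,
                       "bbox": list(draw.textbbox((x, y), word, font=font))})
    img.save(f"{path_prefix}.png")          # training image
    with open(f"{path_prefix}.json", "w", encoding="utf-8") as f:
        json.dump(labels, f, ensure_ascii=False)  # location + class labels

make_sample(["cotton", "machine-wash", "slim-fit"])
```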

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets users' interests and needs from the overflow of content is becoming ever more important. In this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft are focusing on knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful, because new information is constantly generated and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in areas like finance where the information flow is vast and new information keeps emerging. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and hard to extract good-quality triples. Second, producing labeled text data manually becomes harder as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual nature of knowledge. To overcome these limits and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike other work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the model's effectiveness. This study makes three contributions. First, it presents a practical and simple automatic knowledge extraction method. Second, it shows the possibility of performance evaluation through a simple problem definition. Finally, it increases the expressiveness of the knowledge by generating input data on a sentence basis without complex morphological analysis. An empirical analysis and an objective performance evaluation method are also presented. For the empirical study, experts' reports about 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, one score function per stock is trained.
When a new entity from the testing set appears, its score is calculated with every score function, and the stock whose function gives the highest score is predicted as the item related to the entity. To evaluate the presented models, we assess their predictive power and whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set. The presented model achieves a 69.3% hit ratio on the testing set of 2,526 reports, a meaningfully high figure despite some constraints on the research. Looking at the prediction performance per stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show performance far below average; this result may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology for finding the key entities, or combinations of them, needed to retrieve information relevant to a user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network without a field-specific learning corpus or word vectors. The empirical test confirms the effectiveness of the presented model as described above. Some limits and points to complement remain: notably, the especially poor performance on a few stocks shows the need for further research. Finally, the empirical study confirms that the learning method presented here can be used to match new text information semantically with the related stocks.
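The scoring step can be sketched as one Neural Tensor Network score function per stock, each pairing an entity vector with a learned per-stock embedding through a bilinear tensor plus a linear term (Socher et al.'s NTN form); at prediction time the entity is scored by every function and the argmax stock wins. The per-stock embedding and all sizes are illustrative assumptions about the setup.

```python
import torch
import torch.nn as nn

class NTNScorer(nn.Module):
    """One NTN score function, trained per stock (sketch)."""
    def __init__(self, dim, k=4):
        super().__init__()
        self.W = nn.Parameter(torch.randn(k, dim, dim) * 0.01)  # bilinear tensor
        self.V = nn.Linear(2 * dim, k)                          # linear term
        self.u = nn.Linear(k, 1, bias=False)                    # output weights
        self.stock = nn.Parameter(torch.randn(dim) * 0.01)      # stock embedding

    def forward(self, e):                   # e: (batch, dim) entity vectors
        s = self.stock.expand_as(e)
        bilinear = torch.einsum("bd,kde,be->bk", e, self.W, s)
        return self.u(torch.tanh(bilinear + self.V(torch.cat([e, s], dim=-1))))

# Score a one-hot entity with every stock's function; argmax is the prediction
dim, n_stocks = 100, 30                     # top-100 entities, 30 stocks
scorers = [NTNScorer(dim) for _ in range(n_stocks)]
entity = torch.eye(dim)[:1]                 # one-hot vector of a single entity
scores = torch.cat([m(entity) for m in scorers], dim=-1)
print("predicted stock index:", scores.argmax().item())
```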

A Study of Radon Reduction using Panel-type Activated Carbon (판재형 활성탄을 이용한 라돈 저감 연구)

  • Choi, Il-Hong;Kang, Sang-Sik;Jun, Jae-Hoon;Yang, Seung-Woo;Park, Ji-Koon
    • Journal of the Korean Society of Radiology
    • /
    • v.11 no.5
    • /
    • pp.297-302
    • /
    • 2017
  • Recently, building materials and air purification filters using eco-friendly charcoal are being actively studied to reduce the concentration of radon gas in indoor air. In this study, radon reduction performance was assessed by designing and producing a new panel-type activated carbon filter that can be handled more efficiently than the conventional charcoal filters used to reduce radon gas. To fabricate the panel-type activated carbon filter, activated carbon powder was first mixed with polyurethane and press-molded; then, through diamond cutting, filters of 2 mm, 4 mm, and 6 mm thickness were produced. To investigate the physical characteristics of the fabricated filters, surface area and flexural strength measurements were performed. In addition, to evaluate indoor radon gas reduction performance, the radon concentrations before and after passing a constant air flow through the filter were measured using three acrylic chambers. As a result, the surface area of the fabricated activated carbon was approximately 1,008 m²/g, similar to conventional products. The flexural load, 435 N, was about three times that of gypsum board. Finally, the radon reduction efficiency improved as the thickness of the activated carbon increased, with the 6 mm filter achieving an excellent radon removal rate of more than 90%. From these experimental results, the panel-type activated carbon is considered usable as an eco-friendly building material to reduce radon gas in enclosed indoor environments.
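The reported removal rate follows directly from the radon concentrations measured before and after the filter; a one-line check is below, with hypothetical concentrations chosen to match the over-90% figure for the 6 mm filter.

```python
def removal_rate(c_in, c_out):
    """Radon removal efficiency (%) from inlet/outlet concentrations (Bq/m^3)."""
    return 100.0 * (c_in - c_out) / c_in

# e.g., 400 Bq/m^3 in, 36 Bq/m^3 out -> 91.0 % (illustrative values)
print(removal_rate(400.0, 36.0))
```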

A Study on Increasing the Efficiency of Biogas Production using Mixed Sludge in an Improved Single-Phase Anaerobic Digestion Process (개량형 단상 혐기성 소화공정에서의 혼합슬러지를 이용한 바이오가스 생산효율 증대방안 연구)

  • Jung, Jong-Cheal;Chung, Jin-Do;Kim, San
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.17 no.6
    • /
    • pp.588-597
    • /
    • 2016
  • In this study, we attempted to improve biogas production efficiency by varying the mixing ratio of a mixed sludge of organic wastes in an improved single-phase anaerobic digestion process. The organic wastes used were raw sewage sludge, food wastewater leachate, and livestock excretions. The biomethane potential was determined through the BMP test. The results showed that the biomethane potential of the livestock excretions was the highest at 1.55 m³ CH₄/kg VS, and that the highest value among the composite samples, 0.43 m³ CH₄/kg VS, was obtained with primary sludge, food waste leachate, and livestock excretions at proportions of 50%, 30%, and 20%, respectively. On the other hand, the optimal mixture ratio of composite sludge in the demonstration plant was 68.5 (raw sludge) : 18.0 (food waste leachate) : 13.5 (livestock excretions), a somewhat different result from that obtained in the BMP test. This difference was attributed to changes in the composite sludge properties and in digester operating conditions such as retention time. The amount of biogas produced in the single-phase anaerobic digestion process was 2,514 m³/d with a methane content of 62.8%; compared with the design capacity of 2,319 m³/d, the process was operating at its maximum capacity. This study also showed that, for anaerobic digestion, a two-phase digestion process is better in terms of stable tank operation and high efficiency, while the existing single-phase digestion process still allows the digestion efficiency and performance to be improved.
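For intuition, the biomethane potential of a blend can be estimated as a weighted average of the component potentials; the sketch below does this for the demonstration-plant ratio. Only the 1.55 m³ CH₄/kg VS figure for livestock excretions comes from the study; the other component values and the linear-blend assumption are illustrative.

```python
def mixture_bmp(bmp, ratio):
    """Weighted biomethane potential (m^3 CH4 / kg VS) of a sludge blend."""
    assert abs(sum(ratio) - 1.0) < 1e-9, "mixing ratios must sum to 1"
    return sum(b * r for b, r in zip(bmp, ratio))

# raw sludge, food waste leachate, livestock excretions
# (the first two BMP values are hypothetical placeholders)
print(mixture_bmp([0.25, 0.45, 1.55], [0.685, 0.180, 0.135]))
```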

Characteristics of Wheat Flour Dough and Noodles with Amylopectin Content and Hydrocolloids (아밀로펙틴 함량 변화와 하이드로콜로이드 첨가에 의한 밀가루 반죽 및 국수의 특성)

  • Cho, Young-Hwa;Shim, Jae-Yong;Lee, Hyeon-Gyu
    • Korean Journal of Food Science and Technology
    • /
    • v.39 no.2
    • /
    • pp.138-145
    • /
    • 2007
  • The effects of amylopectin and hydrocolloid (locust bean gum and guar gum) content on wheat flour dough and noodle properties were investigated. As the amount of amylopectin increased, the water absorption rate (farinograph), the tension (tension test), the gel stability (freeze-thawing treatment), and the springiness and the cohesiveness (TPA) increased, but the pasting temperature (RVA), the lightness and yellowness (color measurement), and the hardness (TPA) tended to decrease. In sensory evaluations, the scores for cohesiveness, springiness, and acceptability of cooked noodle increased as the proportion of amylopectin increased. The proper combination of amylose/amylopectin ratio and hydrocolloids improved the freeze-thaw stability and the sensory acceptability of wheat flour dough and noodle.

A Study on EMP (Electromagnetic Pulse) Protection Systems from the Perspective of Head-of-State Security (국가원수 경호적 측면에서의 EMP(Electro Magnetic Pulse) 방호 시스템에 대한 고찰)

  • Jung, Joo-Sub
    • Korean Security Journal
    • /
    • no.41
    • /
    • pp.37-66
    • /
    • 2014
  • In recent years, with the development of computers and electronics, every sector has come to depend on electronic and communication technology, so that direct or indirect EMP damage can render all electronic equipment useless. North Korea, with which Korea remains in a standoff, possesses a considerable level of EMP-related technology; it either already has EMP weapons or will complete their development within a few years. North Korea has launched long-range missiles and conducted nuclear tests on several occasions, and given its outright nuclear threats, a high-altitude nuclear EMP attack is possible at any time, and its practical EMP attack capability will only improve. Given this security reality, building an EMP protection system is needed more than anything else. Although the scale of the damage cannot be foreseen, an EMP attack would cause significant military and socio-economic damage: satellite communication systems and equipment, military and security systems, transportation, finance, and national emergency systems would all be affected. In general, no direct casualties would be expected, but lethal harm could occur to people who depend on medical devices. In addition, a failure of the national power system due to interrupted power supply would bring unforeseen damage; in a country highly dependent on nuclear power generation, a blackout could lead to internal problems at the plants and to a serious nuclear accident. Therefore, first, a special expert committee on EMP should investigate the demand for protective facilities and equipment, and contractors should be screened under strict criteria within the available budget. An agency should then be created to verify the performance of EMP protection; after verification, maintenance, repair, safety, and security management should follow an organized and systematic process involving the design and construction companies, covering guard facilities, secure communications equipment, and mobile EMP protection systems.


Design and Implementation of Game Server using the Efficient Load Balancing Technology based on CPU Utilization (게임서버의 CPU 사용율 기반 효율적인 부하균등화 기술의 설계 및 구현)

  • Myung, Won-Shig;Han, Jun-Tak
    • Journal of Korea Game Society
    • /
    • v.4 no.4
    • /
    • pp.11-18
    • /
    • 2004
  • On-line games in the past were played by only two persons exchanging data over one-to-one connections, whereas recent ones (e.g. MMORPGs: Massively Multi-player Online Role-playing Games) enable tens of thousands of people to be connected simultaneously. Korea in particular has established an excellent network infrastructure, with high-speed Internet access in almost every household; the high population density, in part, accelerated its formation. However, the rapid increase in on-line gaming can produce surging traffic that exceeds the limited communication capacity, making game connections unstable or causing server failures. Expanding the servers could solve this problem, but this measure is very costly. To deal with this problem, the present study proposes a load distribution technology that connects, in the form of a local cluster, the game servers divided by the contents they serve in each on-line game, reduces the load on specific servers using a load balancer, and enhances server performance for efficient operation. In this paper, a cluster system is proposed in which each game server provides a different content service and loads are distributed efficiently using game server resource information such as CPU utilization. Game servers with different contents are interconnected and managed with a network file system to maintain the information consistency required to support resource information updates, deletions, and additions. Simulation studies show that our method performs better than traditional methods: in terms of response time, it shows about 12% and 10% shorter latency than RR (Round Robin) and LC (Least Connection), respectively.
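The dispatching idea reduces to picking the server with the lowest CPU utilization instead of cycling through servers; a toy contrast with round-robin is below. Server names and utilization figures are hypothetical; in the paper, this resource information is shared among the clustered servers via a network file system.

```python
from itertools import cycle

servers = {"srv-a": 72.0, "srv-b": 35.0, "srv-c": 58.0}  # name -> CPU %

def pick_least_loaded(util):
    """Send the next client to the server with the lowest CPU utilization."""
    return min(util, key=util.get)

rr = cycle(servers)                 # round-robin baseline ignores actual load
print("RR  ->", next(rr))           # srv-a, regardless of its 72% CPU
print("CPU ->", pick_least_loaded(servers))  # srv-b at 35% CPU
```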


AN ORBIT PROPAGATION SOFTWARE FOR MARS ORBITING SPACECRAFT (화성 근접 탐사를 위한 우주선의 궤도전파 소프트웨어)

  • Song, Young-Joo;Park, Eun-Seo;Yoo, Sung-Moon;Park, Sang-Young;Choi, Kyu-Hong;Yoon, Jae-Cheol;Yim, Jo-Ryeong;Kim, Han-Dol;Choi, Jun-Min;Kim, Hak-Jung;Kim, Byung-Kyo
    • Journal of Astronomy and Space Sciences
    • /
    • v.21 no.4
    • /
    • pp.351-360
    • /
    • 2004
  • An orbit propagation software package for Mars orbiting spacecraft has been developed and verified in preparation for future Korean Mars missions. A dynamic model for Mars orbiting spacecraft has been studied, and Mars-centered coordinate systems are used to express spacecraft state vectors. Coordinate corrections to the Mars-centered coordinate system account for the effects of Mars precession and nutation. After the spacecraft enters the Sphere of Influence (SOI) of Mars, it experiences various perturbation effects as it approaches the planet. Every relevant perturbation effect is considered during integration of the spacecraft state vectors. The Mars50c gravity field model and the Mars-GRAM 2001 model are used to compute perturbations due to the Mars gravity field and Mars atmospheric drag, respectively. To compute exact locations of other planets, JPL's DE405 ephemerides are used. The ephemerides of Phobos and Deimos are computed analytically, because they are not released with DE405. Mars Global Surveyor mapping orbit data are used to verify the performance of the developed propagator. After one Martian day of propagation (12 orbital periods), the results show maximum errors of about ±5 m in every position component (radial, cross-track, and along-track) when compared to the Astrogator propagation in the Satellite Tool Kit. This result shows the high reliability of the developed software, which can be used to design future Korean near-Mars missions.
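As a baseline for what such a propagator integrates, the sketch below propagates only point-mass Mars gravity with an RK45 integrator for roughly 12 orbits of a near-circular mapping-altitude orbit. The full software adds the Mars50c gravity field, Mars-GRAM 2001 drag, DE405 third-body forces, and precession/nutation corrections; the initial state here is an illustrative stand-in for an MGS-like orbit.

```python
import numpy as np
from scipy.integrate import solve_ivp

GM_MARS = 4.282837e13  # m^3/s^2, Mars gravitational parameter

def two_body(t, y):
    """State derivative under point-mass Mars gravity only."""
    r = y[:3]
    a = -GM_MARS * r / np.linalg.norm(r) ** 3
    return np.concatenate([y[3:], a])

# Near-circular orbit at roughly MGS mapping altitude (~380 km)
r0 = np.array([3.776e6, 0.0, 0.0])
v0 = np.array([0.0, np.sqrt(GM_MARS / np.linalg.norm(r0)), 0.0])
period = 2 * np.pi * np.sqrt(np.linalg.norm(r0) ** 3 / GM_MARS)

sol = solve_ivp(two_body, (0.0, 12 * period), np.concatenate([r0, v0]),
                method="RK45", rtol=1e-10, atol=1e-9)  # ~one Martian day
print("final radius drift (m):",
      abs(np.linalg.norm(sol.y[:3, -1]) - np.linalg.norm(r0)))
```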