• Title/Summary/Keyword: information processing scope

Search Results: 171

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.25-38 / 2019
  • Selecting high-quality information that meets users' interests and needs from the overflowing mass of content is becoming ever more important. Amid this flood of information, efforts are being made to better reflect the user's intention in search results rather than treating an information request as a simple string, and large IT companies such as Google and Microsoft are focusing on knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful, because new information is constantly generated and the earlier information is obtained, the more valuable it is. Automatic knowledge extraction can be effective in such areas, where the flow of information is vast and new information continually emerges. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and hard to extract high-quality triples. Second, producing labeled text data manually becomes more difficult as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual nature of knowledge. To overcome these limits and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates their quality. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items.
Various but relatively simple data processing methods are applied in the presented model to resolve the problems of previous research and to enhance the model's effectiveness. This study therefore has three contributions. First, it presents a practical and simple automatic knowledge extraction method that can be applied directly. Second, it shows that performance evaluation is possible through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. An empirical analysis and an objective performance evaluation method are also presented. For the empirical study, experts' reports on 30 individual stocks, the top 30 items by frequency of publication from May 30, 2017 to May 21, 2018, are used. Of the 5,600 reports in total, 3,074 (about 55%) are designated as the training set and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted with the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, as many score functions as there are stocks are trained. When a new entity from the testing set appears, its score is calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to the entity. To evaluate the presented model, we measure its predictive power and check whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set.
As a result of the empirical study, the presented model shows 69.3% hit accuracy on a testing set of 2,526 reports, a meaningfully high ratio despite the constraints under which the research was conducted. Looking at the prediction performance for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, perform far below average; this may be due to interference with other similar items and the generation of new knowledge. In this paper, we propose a methodology for finding the key entities, or combinations of them, needed to search for related information in accordance with the user's investment intention. Graph data are generated using only a named entity recognition tool and fed to the neural tensor network without a field-specific learning corpus or word vectors. The empirical test confirms the effectiveness of the presented model as described above. Nevertheless, some limits remain: notably, the especially poor performance on a few stocks shows the need for further research. Finally, the empirical study confirms that the learning method presented here can be used to match new text information semantically with the related stocks.
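The per-stock scoring scheme described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the slice count `k`, the random initialisation, and the stock names are all assumptions.

```python
import numpy as np

n_entities, k = 100, 4  # top-100 one-hot entities; k tensor slices (assumed)
rng = np.random.default_rng(0)

def make_score_fn():
    # parameters of one per-stock neural-tensor-network score function
    W = rng.standard_normal((k, n_entities, n_entities)) * 0.01  # bilinear tensor
    V = rng.standard_normal((k, n_entities)) * 0.01              # linear term
    b = np.zeros(k)
    u = rng.standard_normal(k)
    def score(e):
        bilinear = np.einsum('i,kij,j->k', e, W, e)  # one value per tensor slice
        return float(u @ np.tanh(bilinear + V @ e + b))
    return score

# one score function per stock (untrained stand-ins for the trained functions)
score_fns = {name: make_score_fn() for name in ("StockA", "StockB", "StockC")}

e = np.zeros(n_entities)
e[7] = 1.0  # one-hot vector of a newly extracted entity
predicted = max(score_fns, key=lambda name: score_fns[name](e))
```

A new entity is scored by every function, and the stock whose function returns the highest score is the predicted related item, mirroring the hit-ratio evaluation in the abstract.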

Analysis of the Research Trend and Developmental Direction against the VDS Data (차량검지기 자료 관련 연구동향 분석 및 발전방향)

  • Kim, Han-Soo;Park, Dong-Joo;Shin, Seung-Jin;Beck, Seung-Kirl;NamKoong, Sung
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.6 no.1 s.12 / pp.13-26 / 2007
  • VDS data in Korea has been used only within narrow limits, for real-time information such as congestion management, incident management, and route guidance services. In other countries, by contrast, VDS data has been used for various purposes such as transportation policy assessment, transportation construction evaluation, and traffic safety improvement. The scope of this study covers VDS installed in uninterrupted flow, such as freeways, and in interrupted flow on freeway diversion routes. It investigates and analyzes the subjects, objectives, and methodologies of prior studies, generally classified into four sections: 1) data collection, 2) data processing, 3) data storage, and 4) data quality. This study surveys the domestic and foreign literature on VDS data and draws a development direction for future domestic research on VDS data.


Automatic Method for Extracting Homogeneity Threshold and Segmenting Homogeneous Regions in Image (영상의 동질성 문턱 값 추출과 영역 분할 자동화 방법)

  • Han, Gi-Tae
    • The KIPS Transactions:PartB / v.17B no.5 / pp.363-374 / 2010
  • In this paper, we propose a method for extracting a homogeneity threshold ($H_T$) and for segmenting homogeneous regions by unseeded region growing (USRG) with $H_T$. The $H_T$ is a criterion for distinguishing homogeneity in neighboring pixels and is computed automatically from the original image by the proposed method. The theoretical background of the method is Otsu's single-level threshold method, which is used to divide a small local part of the original image into two classes; the sum ($\sigma_c$) of the standard deviations of the two classes, satisfying special conditions for distinguishing them as different regions, is used to compute $H_T$. To validate the proposed method, we compare the original image with an image regenerated from only the segmented homogeneous regions and show that no visual difference exists between the two; we also present the steps for regenerating the image in order of the size of the segmented homogeneous regions and in order of the intensity of the pixels they include. In addition, we demonstrate the validity of the proposed method with various segmentation results obtained using homogeneity thresholds ($H^*_T$) to which a coefficient ${\alpha}$ is added to adjust the scope of $H_T$. We expect the proposed method to be applicable in various fields such as visualization and animation of natural images, anatomy, and biology.
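Otsu's single-level threshold, which the paper builds on, can be sketched as follows. This is the generic textbook form, not the paper's $H_T$ computation over local image parts.

```python
import numpy as np

def otsu_threshold(pixels):
    """Otsu's single-level threshold: pick t maximising between-class variance."""
    hist = np.bincount(np.asarray(pixels).ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class weights
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (levels[:t] * prob[:t]).sum() / w0  # class means
        mu1 = (levels[t:] * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# a bimodal toy image separates cleanly between its two modes
img = np.array([10] * 50 + [200] * 50)
t = otsu_threshold(img)  # falls strictly between the two modes
```

The paper's twist is to apply this split to small local parts of the image and derive $H_T$ from the resulting class standard deviations rather than using the threshold directly.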

A Distributed Altruistic Locking Scheme For Multilevel Secure Database in Wireless Mobile Network Environments (무선 이동 네트워크 환경에서 다단계 보안 데이터베이스를 위한 분산 이타적 잠금 기법)

  • Kim, Hee-Wan;Park, Dong-Soon;Rhee, Hae-Kyung;Kim, Ung-Mo
    • The KIPS Transactions:PartD / v.9D no.2 / pp.235-242 / 2002
  • We propose an advanced transaction scheduling protocol for concurrency control of multilevel secure databases in wireless mobile network environments. Wireless communication is characterized by frequent spurious disconnections, so short-lived transactions must be able to access the database quickly, without being delayed by long-lived ones. We adapt the two-phase locking protocol, i.e., traditional syntax-oriented serializability notions, to multilevel secure databases in wireless mobile network environments. Altruistic locking, as an advanced protocol, reduces the delay associated with the moment of lock release through the idea of donation. An improved form of altruism has also been deployed in extended altruistic locking, in which the scope of data released early is enlarged to include even data not initially intended to be donated. Our protocol is based on extended altruistic locking, but a new method, bi-directional donation locking for multilevel secure databases (MLBiDL), is additionally used to satisfy both security requirements and concurrency. Simulation experiments show that MLBiDL outperforms the other locking protocols in throughput and average waiting time.
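The donation idea at the heart of altruistic locking can be illustrated in miniature. This is a toy single-item lock with a hypothetical API, not the paper's MLBiDL protocol.

```python
class DonatingLock:
    """Toy illustration of the 'donation' idea behind altruistic locking.

    A long-lived transaction that is finished with a data item, but has not
    yet committed, can donate its lock so that short-lived transactions
    need not wait for the commit.
    """
    def __init__(self):
        self.holder = None
        self.donated = False

    def acquire(self, txn):
        if self.holder is None or self.donated:
            self.holder, self.donated = txn, False
            return True
        return False  # conflicting lock: caller must wait

    def donate(self):
        self.donated = True  # item released early, commit still pending

    def release(self):
        self.holder, self.donated = None, False

lock = DonatingLock()
lock.acquire("T_long")             # long-lived transaction holds the lock
blocked = lock.acquire("T_short")  # short transaction would have to wait...
lock.donate()                      # ...until the item is donated
granted = lock.acquire("T_short")
```

MLBiDL extends this one-directional donation to be bi-directional while also respecting the multilevel security constraints described above.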

Object Modeling for Mapping from XML Document and Query to UML Class Diagram based on XML-GDM (XML-GDM을 기반으로 한 UML 클래스 다이어그램으로 사상을 위한 XML문서와 질의의 객체 모델링)

  • Park, Dae-Hyun;Kim, Yong-Sung
    • The KIPS Transactions:PartD / v.17D no.2 / pp.129-146 / 2010
  • Nowadays, XML is favored by many companies, internally and externally, as a means of sharing and distributing data, and much research and many systems exist for modeling and storing XML documents with object-oriented methods, as a way to save and manage web-based multimedia documents more easily. The representative tool for object-oriented modeling of XML documents is UML (Unified Modeling Language). UML began as an integrated methodology for software development, but it is now used more often as a modeling language for various kinds of objects. UML supports various diagrams for object-oriented analysis and design, such as the class diagram, and is widely used to create database schemas and object-oriented code from them. This paper proposes efficient query modeling of XML-GL using the UML class diagram and OCL for searching XML documents, whose scope of application has widened with the increased use of the WWW and XML's flexible, open nature. To accomplish this, we propose modeling rules and an algorithm that map XML-GL, which offers modeling of XML documents and DTDs and a graphical query function over them. To describe the constraints of model components precisely, they are defined in OCL (Object Constraint Language). The proposed technique creates queries over XML documents that hold the various properties of the object-oriented model by modeling XML-GL queries from the XML document, the XML DTD, and the XML query using the UML class diagram. By converting, saving, and managing XML documents visually as an object-oriented graphic data model, the user gains a basis for expressing search and query over XML documents intuitively and visually. Compared with existing XML-based query languages, it has various object-oriented characteristics and uses the UML notation that is widely employed as an object modeling tool.
Hence, users can construct graphical and intuitive queries on XML-based web documents without learning a new query language. Using the same modeling tool, the UML class diagram, for XML document content and for query syntax and semantics allows all processes, such as searching and saving XML documents from/to an object-oriented database, to be performed consistently.
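The kind of structural mapping the paper formalises, from XML elements to class-diagram constructs, can be shown in miniature. This is a naive sketch with a made-up document, not the paper's XML-GDM mapping rules.

```python
import xml.etree.ElementTree as ET

doc = "<book><title>XML-GL</title><author>Park</author></book>"
root = ET.fromstring(doc)

# naive element-to-class mapping: the element name becomes a class name and
# its child element names become the attributes of that class
classes = {root.tag: [child.tag for child in root]}
# classes == {'book': ['title', 'author']}
```

The paper's contribution is a full set of such rules, covering DTDs and graphical XML-GL queries as well as plain documents, with constraints stated in OCL.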

Security Requirements Analysis on IP Camera via Threat Modeling and Common Criteria (보안위협모델링과 국제공통평가기준을 이용한 IP Camera 보안요구사항 분석)

  • Park, Jisoo;Kim, Seungjoo
    • KIPS Transactions on Computer and Communication Systems / v.6 no.3 / pp.121-134 / 2017
  • As the development and use of IoT devices increase rapidly, requirements for safe IoT devices and services, such as reliability and security, are also increasing. In security engineering, a Secure Development Life Cycle (SDLC) is applied to build trustworthy systems. The SDLC has four main steps, security requirements, design, implementation, and operation, and each step has its own goals and activities. Deriving security requirements, the first step, must be accurate and objective because it affects the rest of the SDLC. For accurate and objective security requirements, threat modeling is used, and its results satisfy the completeness of the scope of analysis and the traceability of threats. In many countries, academia and IT companies are conducting extensive research on deriving security requirements systematically, but in Korea, awareness and research on this topic are lacking. In this paper, I therefore describe a method and process for deriving security requirements systematically through threat modeling, including DFDs, STRIDE, an attack library, and attack trees. The resulting security requirements are also expressed via the Common Criteria, to give them objective meaning and broad usability.
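One small piece of the threat-modeling toolbox named above, the attack tree, evaluates recursively: a goal is feasible if any OR-child, or all AND-children, are feasible. The tree below is a made-up example, not one from the paper's IP-camera analysis.

```python
# an attack-tree node is ("leaf", feasible), ("AND", *children) or ("OR", *children)
tree = ("OR",
        ("AND", ("leaf", True),    # e.g. steal credentials...
                ("leaf", False)),  # ...AND bypass the second factor
        ("leaf", True))            # OR exploit a default password

def feasible(node):
    """An attack goal is feasible if any OR-child, or all AND-children, are."""
    kind = node[0]
    if kind == "leaf":
        return node[1]
    results = [feasible(child) for child in node[1:]]
    return any(results) if kind == "OR" else all(results)
```

Walking the tree this way is what lets threat modeling claim traceability: every derived security requirement can point back at the leaf attacks it mitigates.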

Image Processing System based on Deep Learning for Safety of Heat Treatment Equipment (열처리 장비의 Safety를 위한 딥러닝 기반 영상처리 시스템)

  • Lee, Jeong-Hoon;Lee, Ro-Woon;Hong, Seung-Taek;Kim, Young-Gon
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.20 no.6 / pp.77-83 / 2020
  • Among the root industries, heat treatment facilities, with their harsh high-heat environments and long working hours, are seeing the scope of application of remote IoT systems expand. In this heat treatment process environment, IoT middleware is required to play a pivotal role in interpreting, managing, and controlling the data of IoT devices such as sensors. Until now, remotely controlled heat treatment systems were operated by the operator's batch commands, without overall monitoring of the site. For the safety and precise control of heat treatment facilities, however, it is necessary to control various sensors and recognize the surrounding work environment. As a solution, the heat treatment safety support system presented in this paper detects workers approaching the heat treatment furnace through thermal image detection and supports safe operation when work is ordered from a remote location. In addition, an OpenCV-based deterioration analysis system using a DNN deep learning network was constructed for faster and more accurate recognition than general thermal image analysis based on fixed hot-spot monitoring. With this, we propose a system that can be used universally in heat treatment environments and that supports safety management specialized for the heat treatment industry.
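The fixed hot-spot monitoring that the paper contrasts its DNN approach with amounts to thresholding a thermal frame. A minimal sketch, with invented temperatures and threshold:

```python
import numpy as np

def hot_fraction(frame, temp_threshold=60.0):
    """Fraction of pixels in a thermal frame hotter than the threshold."""
    return float((frame > temp_threshold).mean())

frame = np.full((4, 4), 30.0)  # ambient-temperature frame (values in deg C)
frame[0, 0] = 90.0             # one hot pixel, e.g. near the furnace door
share = hot_fraction(frame)    # 1 of 16 pixels exceeds the threshold
```

A fixed threshold like this cannot tell a worker near the furnace from the furnace itself, which is the gap the paper's DNN-based analysis is meant to close.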

Effects of Jinyeosoo Clean Mist on the Improvement of Facial Skin in Middle-aged Women (진여클린미스트를 이용한 중년여성의 안면피부개선 효과)

  • Kim, Min Joo
    • Journal of Convergence for Information Technology / v.11 no.3 / pp.220-228 / 2021
  • In this study, 20 middle-aged women aged 35 to 55 were studied from October 12, 2020 to November 15, 2020 to determine the facial skin improvement effects of Jinyeosoo Clean Mist. The 20 participants were divided into an experimental group of 10 and a control group of 10. The Janus facial skin scope system (PSI Co.) was used to measure pores, wrinkles, elasticity, UV pigmentation, and skin tone, observed under different light sources. Each facial examination took place 10 minutes after cleansing, to let the skin stabilize, and a total of four examinations were carried out, one after each week of using Jinyeosoo Clean Mist over 4 weeks. For statistical processing, the SPSS Statistics 21.0 program was used to compare the pre- and post-experimental means of the two groups, the experimental group that used Jinyeosoo Clean Mist (hydrogen ion mist) 1 and 2 and the control group that did not use the products, with the paired-sample t-test. As a result of the analysis, the group that used Jinyeosoo Clean Mist showed statistically significant differences in pores (t=3.280, p<.05), wrinkles (t=4.353, p<.01), elasticity (t=3.003, p<.05), and skin tone (t=3.280, p<.01).
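The paired-sample t-test used for the pre/post comparison reduces to the statistic below. The numbers are made up for illustration, not the study's data.

```python
import numpy as np

def paired_t(pre, post):
    """Paired-sample t statistic: mean difference over its standard error."""
    d = np.asarray(post, dtype=float) - np.asarray(pre, dtype=float)
    return float(d.mean() / (d.std(ddof=1) / np.sqrt(len(d))))

pre  = [3.1, 2.8, 3.5, 3.0, 2.9]  # hypothetical pore scores before use
post = [2.6, 2.4, 3.0, 2.7, 2.5]  # after four weeks of use
t = paired_t(pre, post)           # negative: scores decreased consistently
```

The t values the abstract reports (e.g. t=3.280 for pores) come from exactly this kind of within-subject difference, compared against a t distribution with n-1 degrees of freedom.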

Artificial Intelligence-based Security Control Construction and Countermeasures (인공지능기반 보안관제 구축 및 대응 방안)

  • Hong, Jun-Hyeok;Lee, Byoung Yup
    • The Journal of the Korea Contents Association / v.21 no.1 / pp.531-540 / 2021
  • As cyber attacks and crimes increase exponentially and hacking attacks become more intelligent and advanced, attack methods and routes are evolving unpredictably and in real time. To reinforce responsiveness to attackers, this study proposes a method for developing an artificial intelligence-based security control platform: a next-generation security system that uses artificial intelligence to respond through self-learning, monitoring abnormal signs, and blocking attacks. The platform should be built on four foundations: data collection, data analysis, next-generation security system operation, and security system management. Data collection gathers external threat information on top of a big-data base and control system. Data analysis pre-processes and formalizes the collected data to perform true/false positive detection and abnormal behavior analysis through deep learning-based algorithms. The next-generation security system operates a prevention, control, response, and analysis cycle in an organic circulation structure, increasing the scope and speed of handling new threats and strengthening the identification of normal and abnormal behavior. Security system management covers the security threat response system, harmful IP management, detection policy management, and the security business legal system. Through this, we seek a way to analyze vast amounts of data comprehensively and to respond preemptively within a short time.
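The detection-to-blocking loop described above can be caricatured in a few lines; the threshold, scores, and IP addresses are invented, and a real platform would sit a trained model behind the score.

```python
blocklist = set()

def handle_event(src_ip, anomaly_score, threshold=0.9):
    """Block traffic sources whose model-assigned anomaly score is too high."""
    if anomaly_score >= threshold:
        blocklist.add(src_ip)  # harmful-IP management step
        return "blocked"
    return "allowed"

first = handle_event("10.0.0.5", 0.97)   # flagged by the (hypothetical) model
second = handle_event("10.0.0.6", 0.10)  # benign traffic passes through
```

The circulation structure in the abstract then feeds blocked and allowed outcomes back into analysis, so detection policies and the blocklist keep adapting.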

Digital Twin-Based Communication Optimization Method for Mission Validation of Swarm Robot (군집 로봇의 임무 검증 지원을 위한 디지털 트윈 기반 통신 최적화 기법)

  • Gwanhyeok, Kim;Hanjin, Kim;Junhyung, Kwon;Beomsu, Ha;Seok Haeng, Huh;Jee Hoon, Koo;Ho Jung, Sohn;Won-Tae, Kim
    • KIPS Transactions on Computer and Communication Systems / v.12 no.1 / pp.9-16 / 2023
  • Robots are expected to expand their scope of application to the military field and take on important missions such as surveillance and enemy detection in future warfare. Swarm robots can perform tasks that are difficult or time-consuming for a single robot more efficiently, thanks to the advantage of having multiple robots. Swarm robots require mutual recognition and collaboration, so they send and receive vast amounts of data, making it increasingly difficult to verify their software. Hardware-in-the-loop simulation (HILS), used to increase the reliability of mission verification, enables software verification of complex swarm robots, but the amount of verification data exchanged between the HILS devices and the simulator grows exponentially with the number of systems to be verified, so communication overload may occur. In this paper, we propose a digital twin-based communication optimization technique to solve this communication overload problem in the mission verification of swarm robots. Under the proposed digital twin-based multi-HILS framework, the Network DT can efficiently allocate network resources to each robot according to the mission scenario through the Network Controller algorithm and can satisfy all the sensor generation rates required by the individual robots participating in the swarm. In an experiment on packet loss, the technique reduced the packet loss rate from 15.7% to 0.2%.
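The resource-allocation idea behind the Network Controller can be sketched as proportional scaling of per-robot demands against a link budget. This is a toy model with invented numbers, not the paper's algorithm.

```python
def allocate(total_bw, demands):
    """Scale each robot's sensor-rate demand so the sum fits the link budget."""
    need = sum(demands.values())
    scale = min(1.0, total_bw / need)  # never hand out more than requested
    return {robot: rate * scale for robot, rate in demands.items()}

# total demand (120) exceeds the budget (100), so every rate is scaled by 5/6
alloc = allocate(100.0, {"r1": 50.0, "r2": 30.0, "r3": 40.0})
```

A scenario-aware controller like the paper's can do better than uniform scaling, e.g. by protecting the sensor rates of robots that matter most to the current mission phase, which is how it drives packet loss toward zero.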