• Title/Summary/Keyword: Autonomous Machine

Search Results: 203

Prediction Model of Real Estate Transaction Price with the LSTM Model based on AI and Bigdata

  • Lee, Jeong-hyun;Kim, Hoo-bin;Shim, Gyo-eon
    • International Journal of Advanced Culture Technology
    • /
    • v.10 no.1
    • /
    • pp.274-283
    • /
    • 2022
  • Korea is facing a number of difficulties arising from rising housing prices. As 'housing' takes the lion's share of personal assets, many difficulties are expected to arise from fluctuating housing prices. The purpose of this study is to create a housing price prediction model that prevents such risks and induces reasonable real estate purchases. This study made many attempts to understand real estate instability and to create an appropriate housing price prediction model. It predicted and validated housing prices using the LSTM technique, a type of Artificial Intelligence deep learning. LSTM is a network in which the cell state and hidden state are recursively calculated, in a structure that adds a cell state, which plays a conveyor-belt role, to the hidden state of the existing RNN. The real sale prices of apartments in autonomous districts from January 2006 to December 2019 were collected through the real sale price open system of the Ministry of Land, Infrastructure, and Transport, and basic apartment and commercial district information was collected through the Public Data Portal and Seoul Metropolitan City data. The collected real sale price data were scaled based on the monthly average sale price, and a total of 168 data points were organized by preprocessing each record based on address. To predict prices, the LSTM implementation was conducted by setting the training period to 29 months (April 2015 to August 2017), the validation period to 13 months (September 2017 to September 2018), and the test period to 13 months (December 2018 to December 2019) according to the time series data set. This study obtained 76 percent prediction similarity. We designed a prediction model of real estate transaction prices with the LSTM model based on AI and big data.
The final prediction model was created by collecting time series data, which showed that a model with 76 percent similarity can be built. This validated that predicting the rate of return through the LSTM method can be considered reliable.
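The LSTM recurrence summarized above, a cell state acting as a conveyor belt added to the RNN's hidden state, can be sketched in plain NumPy. The weights, hidden size, and the short scaled-price sequence below are illustrative stand-ins, not the authors' trained model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step; the four gates are slices of stacked weights.

    The cell state c is the 'conveyor belt' from the abstract; the
    hidden state h is the gated, squashed read-out of that belt.
    """
    z = W @ x_t + U @ h_prev + b          # stacked pre-activations
    H = h_prev.size
    f = sigmoid(z[0:H])                   # forget gate
    i = sigmoid(z[H:2*H])                 # input gate
    o = sigmoid(z[2*H:3*H])               # output gate
    g = np.tanh(z[3*H:4*H])               # candidate cell update
    c = f * c_prev + i * g                # conveyor-belt update
    h = o * np.tanh(c)                    # new hidden state
    return h, c

# Toy run over a min-max-scaled monthly price sequence (made-up values).
rng = np.random.default_rng(0)
H, D = 4, 1                               # hidden size, input size
W = rng.normal(0, 0.1, (4 * H, D))
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for price in [0.51, 0.53, 0.56, 0.60]:
    h, c = lstm_step(np.array([price]), h, c, W, U, b)
```

A real model would stack such steps over the full monthly window and attach a dense output layer predicting the next scaled price.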

The linear model analysis and Fuzzy controller design of the ship using the Nomoto model (Nomoto모델을 이용한 선박의 선형 모델 분석 및 퍼지제어기 설계)

  • Lim, Dae-Yeong;Kim, Young-Chul;Chong, Kil-To
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.12 no.2
    • /
    • pp.821-828
    • /
    • 2011
  • This paper developed an algorithm for improving the performance of the autopilot in an autonomous vehicle system consisting of track-keeping control, automatic steering, and automatic mooring control. Automatic steering is a control device that can save voyage distance and fuel cost by reducing the unnecessary burden of continuous manual navigation and by avoiding route deviation. During autonomous navigation, wind or tidal forces can make the ship deviate from the fixed course, so the automatic steering calculates the difference between the actual sailing line and the set course to keep the ship sailing in the vicinity of the intended course. First, we obtain the transfer function for the ship model according to the Nomoto model. Considering maneuverability, we propose a linear model with only four degrees of freedom to represent the heading-angle response to the rudder-angle input. In this paper, the ship model is derived from the simplified Nomoto model. Since the proposed model accounts for the maximum rudder angle and rudder rate of the ship's autopilot, and the Fuzzy controller is designed on the basis of the existing PID controller, the performance of the steering machine is substantially improved.
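The first-order Nomoto model behind this kind of autopilot relates yaw rate r to rudder angle δ through T·dr/dt + r = K·δ, with heading ψ integrating r. A minimal simulation with rudder-angle and rudder-rate limits might look like the following; the gains K, T, kp and the limits are hypothetical values, and a plain proportional law stands in for the paper's fuzzy/PID controller.

```python
def simulate_heading(K=0.2, T=10.0, psi_ref=0.1745, dt=0.1, steps=2000,
                     kp=2.0, rudder_max=0.61, rudder_rate=0.087):
    """Euler simulation of T*dr/dt + r = K*delta, dpsi/dt = r,
    steered toward psi_ref (rad) by a proportional autopilot with
    rudder-angle and rudder-rate saturation (illustrative limits)."""
    psi = r = delta = 0.0
    for _ in range(steps):
        # commanded rudder, clipped to the maximum rudder angle
        delta_cmd = max(-rudder_max, min(rudder_max, kp * (psi_ref - psi)))
        # rudder can only slew at a limited rate toward the command
        move = max(-rudder_rate * dt, min(rudder_rate * dt, delta_cmd - delta))
        delta += move
        r += dt * (K * delta - r) / T     # Nomoto yaw dynamics
        psi += dt * r                     # heading integrates yaw rate
    return psi

final_heading = simulate_heading()        # converges near psi_ref
```

With these sample constants the closed loop is underdamped but stable, so the heading settles close to the 0.1745 rad (10°) reference well within the simulated 200 s.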

Pedestrian Classification using CNN's Deep Features and Transfer Learning (CNN의 깊은 특징과 전이학습을 사용한 보행자 분류)

  • Chung, Soyoung;Chung, Min Gyo
    • Journal of Internet Computing and Services
    • /
    • v.20 no.4
    • /
    • pp.91-102
    • /
    • 2019
  • In autonomous driving systems, the ability to classify pedestrians in images captured by cameras is very important for pedestrian safety. In the past, features of pedestrians were extracted with HOG (Histogram of Oriented Gradients) or SIFT (Scale-Invariant Feature Transform) and then classified using an SVM (Support Vector Machine). However, extracting pedestrian characteristics in such a handcrafted manner has many limitations. Therefore, this paper proposes a method to classify pedestrians reliably and effectively using a CNN's (Convolutional Neural Network) deep features and transfer learning. We experimented with both the fixed feature extractor and the fine-tuning methods, the two representative transfer learning techniques. In particular, for the fine-tuning method we added a new scheme, called M-Fine (Modified Fine-tuning), which divides layers into transferred and non-transferred parts in three different sizes and adjusts the weights only for layers belonging to the non-transferred parts. Experiments on the INRIA Person data set with five CNN models (VGGNet, DenseNet, Inception V3, Xception, and MobileNet) showed that CNN deep features perform better than handcrafted features such as HOG and SIFT, and that the accuracy of Xception (threshold = 0.5) is the highest at 99.61%. MobileNet, which achieved performance similar to Xception while learning 80% fewer parameters, was the best in terms of efficiency. Among the three transfer learning schemes tested, the fine-tuning method performed best. The performance of the M-Fine method was comparable to or slightly lower than that of the fine-tuning method, but higher than that of the fixed feature extractor method.
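The fixed-feature-extractor scheme compared above can be reduced to its essence: freeze the pretrained backbone and train only the final classifier. In this sketch a random frozen projection stands in for real CNN features, and a two-cluster toy data set stands in for pedestrian/background images; nothing here reproduces the paper's INRIA experiments.

```python
import numpy as np

rng = np.random.default_rng(42)

def backbone(x, W_frozen):
    """Stand-in for a pretrained CNN: a frozen nonlinear projection."""
    return np.maximum(0.0, x @ W_frozen)   # ReLU(x W), never updated

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "background (0) vs pedestrian (1)" data: two shifted clusters.
X = np.vstack([rng.normal(-1, 1, (100, 8)), rng.normal(1, 1, (100, 8))])
y = np.concatenate([np.zeros(100), np.ones(100)])

W_frozen = rng.normal(0, 1, (8, 16))       # "pretrained" weights, frozen
F = backbone(X, W_frozen)                  # deep features, computed once
w, b = np.zeros(16), 0.0                   # the only trainable parameters

for _ in range(200):                       # logistic-regression head
    p = sigmoid(F @ w + b)
    w -= 0.1 * (F.T @ (p - y)) / len(y)    # gradient step on head only
    b -= 0.1 * float(np.mean(p - y))

acc = float(np.mean((sigmoid(F @ w + b) > 0.5) == y))
```

Fine-tuning (and the paper's M-Fine variant) differs only in also updating some or all of the backbone weights instead of leaving `W_frozen` fixed.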

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.43-61
    • /
    • 2019
  • Development of technologies in artificial intelligence has been rapidly accelerating with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. The field of artificial intelligence has achieved greater technological advances than ever, owing to recent interest in the technology and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions by using machine-readable and processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used together with statistical artificial intelligence such as machine learning. Today, the purpose of a knowledge base is to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. These knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question answering system of a smart speaker. However, building a useful knowledge base is a time-consuming task that still requires a great deal of expert effort. In recent years, much research and technology in knowledge-based artificial intelligence has used DBpedia, one of the largest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present user-created summaries of some unifying aspect of an article.
This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of knowledge accuracy, since the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: document classification into ontology classes, classification of the appropriate sentences for triple extraction, and value selection and transformation into the RDF triple structure. The structures of Wikipedia infoboxes are defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these infobox templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into the form of triples. To train the models, we generated a training data set from a Wikipedia dump using a method that adds BIO tags to sentences, and we trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF for the knowledge extraction process.
Through this proposed process, it is possible to utilize structured knowledge by extracting it according to the ontology schema from text documents. In addition, this methodology can significantly reduce the effort required from experts to construct instances according to the ontology schema.
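BIO-tagged training data of the kind described above can be decoded back into attribute values and then into RDF-style triples. The tag set, example sentence, and attribute names below are hypothetical, chosen only to illustrate the span-collection and triple-conversion step.

```python
def bio_to_spans(tokens, tags):
    """Collect contiguous B-X/I-X runs into (attribute, text) spans."""
    spans, current, label = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):                 # a new span starts
            if current:
                spans.append((label, " ".join(current)))
            current, label = [tok], tag[2:]
        elif tag.startswith("I-") and label == tag[2:]:
            current.append(tok)                  # span continues
        else:                                    # O tag or broken chain
            if current:
                spans.append((label, " ".join(current)))
            current, label = [], None
    if current:                                  # flush trailing span
        spans.append((label, " ".join(current)))
    return spans

# Hypothetical sentence from an article classified under a "City" class.
tokens = ["Seoul", "is", "the", "capital", "of", "South", "Korea", "."]
tags   = ["B-subject", "O", "O", "O", "O", "B-country", "I-country", "O"]
spans  = bio_to_spans(tokens, tags)
triples = [("Seoul", "country", text)
           for attr, text in spans if attr == "country"]
```

A sequence model such as the CRF or Bi-LSTM-CRF compared in the paper would produce the `tags` list; this decoder then yields triples like `(Seoul, country, South Korea)` ready for RDF serialization.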

A 2D / 3D Map Modeling of Indoor Environment (실내환경에서의 2 차원/ 3 차원 Map Modeling 제작기법)

  • Jo, Sang-Woo;Park, Jin-Woo;Kwon, Yong-Moo;Ahn, Sang-Chul
    • Proceedings of the Korean HCI Society Conference
    • /
    • 2006.02a
    • /
    • pp.355-361
    • /
    • 2006
  • In large-scale environments like airports, museums, large warehouses, and department stores, autonomous mobile robots will play an important role in security and surveillance tasks. Robotic security guards will provide surveyed information about large-scale environments and communicate such data to a human operator, for example whether an object is present or a window is open. Both for the visualization of information and as a human-machine interface for remote control, a 3D model can give much more useful information than the typical 2D maps used in many robotic applications today. It is easier to understand, makes the user feel present at the robot's location so that the user can interact with the robot more naturally in a remote setting, and shows structures such as windows and doors that cannot be seen in a 2D model. In this paper we present a simple, easy-to-use method to obtain a 3D textured model. To express reality, we need to integrate the 3D model with real scenes. Most other 3D modeling methods use two data acquisition devices, one for obtaining the 3D model and another for obtaining realistic textures; typically the former is a 2D laser range-finder and the latter a common camera. Our algorithm consists of building a measurement-based 2D metric map acquired by a laser range-finder, texture acquisition and stitching, and texture-mapping onto the corresponding 3D model. The algorithm is implemented with a laser sensor for obtaining the 2D/3D metric map and two cameras for gathering textures. Our geometric 3D model consists of planes that model the floor and walls, and the geometry of the planes is extracted from the 2D metric map data. Textures for the floor and walls are generated from images captured by two 1394 cameras with wide field-of-view angles. Image stitching and image cutting are used to generate textured images corresponding to the 3D model.
The algorithm is applied to two cases: a corridor and a four-walled, room-like space in a building. The generated 3D map model of the indoor environment is exported in VRML format and can be viewed in a web browser with a VRML plug-in. The proposed algorithm can be applied to a 3D model-based remote surveillance system through the WWW.
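The geometry step in this kind of pipeline, extracting wall segments from the 2D metric map and extruding them into 3D planes, can be sketched as follows; the room dimensions and ceiling height are illustrative values, not taken from the paper.

```python
import math

def extrude_walls(segments, height):
    """Turn 2D wall segments ((x1, y1), (x2, y2)) from a metric map
    into 3D quads (four corner vertices each), as in a
    floor-plan-to-VRML modeling step."""
    quads = []
    for (x1, y1), (x2, y2) in segments:
        quads.append([(x1, y1, 0.0), (x2, y2, 0.0),
                      (x2, y2, height), (x1, y1, height)])
    return quads

def wall_length(quad):
    """Horizontal length of a wall quad, used to scale its texture."""
    (x1, y1, _), (x2, y2, _) = quad[0], quad[1]
    return math.hypot(x2 - x1, y2 - y1)

# A hypothetical 4 m x 3 m room traced from a 2D metric map, 2.5 m walls.
room = [((0, 0), (4, 0)), ((4, 0), (4, 3)),
        ((4, 3), (0, 3)), ((0, 3), (0, 0))]
quads = extrude_walls(room, 2.5)
```

Each quad would then receive a stitched camera image as its texture, and the quads plus a floor polygon would be emitted as VRML `IndexedFaceSet` nodes.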


Development of a flower support for real flower decoration Automatic Production System (생화 장식 꽃받침 자동 생산 시스템 개발)

  • Song, Myung-Seok;Kim, Man-Joong;Kim, Seon-Bong;Ji, Peng;Ryuh, Beom-Sahng
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.19 no.7
    • /
    • pp.63-71
    • /
    • 2018
  • A flower support for real-flower decoration was developed as part of an automated production system that uses an ultrasonic-wave sealer. Because flower supports produced manually could not meet consumer demand, this study developed an automated manufacturing system to increase productivity. The flower support consists of a plastic cap and a plate made from non-woven fabric. A guide was designed to transport the cap to the ultrasonic-wave sealer, and the optimal guide was selected from tests of different materials and shapes. To build the entire system, the guides and accessories were weighed and appropriate motors and pulleys were sized accordingly. The automated production system is controlled by a PCB board, which increased reliability and stability, and a remote controller with manual and automatic modes was provided. After development, tests of transfer precision and repeatability showed an X-axis error of 2.7 mm, a Y-axis error of 1 mm, and a repetition error of 0 mm. Productivity was also checked: working 8 hours per day, the automated machine produced 70 supports compared with 35 made manually, so the automatic system delivers roughly double the output of manual work.

A Method to Manage Faults in SOA using Autonomic Computing (자율 컴퓨팅을 적용한 SOA 서비스 결함 관리 기법)

  • Cheun, Du-Wan;Lee, Jae-Yoo;La, Hyun-Jung;Kim, Soo-Dong
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.12
    • /
    • pp.716-730
    • /
    • 2008
  • In Service-Oriented Architecture (SOA), service providers develop and deploy reusable services in repositories, and service consumers use services as black boxes through their interfaces. Services are also highly evolvable and often heterogeneous. Due to these characteristics, it is hard to manage faults when they occur in services. Autonomic Computing (AC) is a way of designing systems that can manage themselves without direct human intervention. Applying the key disciplines of AC to service management is appealing, since key technical issues in service management can be effectively resolved by AC. In this paper, we present a theoretical model, Symptom-Cause-Actuator (SCA), to enable autonomous service fault management in SOA. We derive the SCA model from rigorous observation of how physicians treat patients. We first define a five-phase computing model and the meta-model of SCA. We then define a schema for the SCA profile, which contains instances of symptoms, causes, and actuators and their dependency values in a machine-readable form. Next, we present detailed algorithms for the five phases used to manage faults in services. To show the applicability of our approach, we present the results of a case study in the domain of 'Flight Ticket Management Services'.
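An SCA-style profile, symptoms linked to causes and actuators with dependency values in machine-readable form, can be sketched as a small lookup structure. Every symptom, cause, actuator, and weight below is invented for illustration and does not reproduce the authors' actual profile schema.

```python
# Hypothetical SCA profile: symptom -> weighted causes -> actuators.
SCA_PROFILE = {
    "slow_response": {
        "causes": {"network_congestion": 0.7, "server_overload": 0.3},
        "actuators": {"network_congestion": "reroute_request",
                      "server_overload": "scale_out_instance"},
    },
    "invalid_result": {
        "causes": {"stale_cache": 0.6, "schema_mismatch": 0.4},
        "actuators": {"stale_cache": "flush_cache",
                      "schema_mismatch": "renegotiate_interface"},
    },
}

def diagnose(symptom):
    """Pick the highest-weighted cause for a symptom and return it
    with the actuator to execute, mimicking the diagnose-then-treat
    flow the SCA model borrows from physicians."""
    entry = SCA_PROFILE[symptom]
    cause = max(entry["causes"], key=entry["causes"].get)
    return cause, entry["actuators"][cause]

cause, action = diagnose("slow_response")
```

The paper's five-phase model would wrap such a lookup in monitoring (detect the symptom), diagnosis, treatment (run the actuator), and verification phases.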

Automated Vehicle Research by Recognizing Maneuvering Modes using LSTM Model (LSTM 모델 기반 주행 모드 인식을 통한 자율 주행에 관한 연구)

  • Kim, Eunhui;Oh, Alice
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.16 no.4
    • /
    • pp.153-163
    • /
    • 2017
  • This research builds on previous findings that personally preferred safe distances, turning angles, and speeds differ among drivers. We therefore use machine learning models for recognizing maneuvering modes, trained per person or per group with similar driving patterns, and we evaluate automated driving according to those maneuvering modes. Utilizing driving knowledge, we subdivided driving into 8 kinds of longitudinal modes and 4 kinds of lateral modes, and by combining the longitudinal and lateral modes we built 21 kinds of maneuvering modes. We train on the per-timestamp labeled data set through RNN, LSTM, and Bi-LSTM models, which are supervised deep learning models, organized by drivers' trips, and evaluate the maneuvering modes of automated driving on the test data set. The evaluation data set is aggregated from naturalistic driving trips of 3,000 participants collected by VTTI in the USA over 3 years; we use 1,500 trips of 22 people, and the training, validation, and test set ratios are 80%, 10%, and 10%, respectively. For recognizing the 8 longitudinal maneuvering modes, RNN achieves better accuracy than LSTM and Bi-LSTM. However, Bi-LSTM improves the accuracy in recognizing the 21 combined longitudinal and lateral maneuvering modes by 1.54% and 0.47% compared with RNN and LSTM, respectively.
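The 80/10/10 split described above is most naturally done at the trip level, so that timestamps from a single trip never leak across the training, validation, and test sets. A minimal sketch using the paper's 1,500-trip count (the trip IDs and seed are illustrative):

```python
import random

def split_trips(trip_ids, seed=7, ratios=(0.8, 0.1, 0.1)):
    """Shuffle trip IDs deterministically and cut them into
    train/validation/test lists by the given ratios."""
    ids = list(trip_ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    return (ids[:n_train],
            ids[n_train:n_train + n_val],
            ids[n_train + n_val:])

train, val, test = split_trips(range(1500))
```

All per-timestamp sequences belonging to a trip then follow the trip into its assigned set before the RNN/LSTM/Bi-LSTM models are trained.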

An interpretive comparison of the education as event in The Structure of World History and Anti-Oedipus (『세계사의 구조』와 『안티 오이디푸스』에 나타난 사건적 교육의 해석적 비교)

  • Kim, Young-chul
    • Korean Educational Research Journal
    • /
    • v.42 no.1
    • /
    • pp.1-34
    • /
    • 2021
  • This thesis compares The Structure of World History with Anti-Oedipus, first in a textual context and then again in an educational context. By 'education' I mean an event, which contrasts starkly with an essence. The comparison adopts 5W1H, the general reporting form for an accident or event, as its distinctive framework in both comparisons. The purpose of the thesis is interpretive, not evaluative, comparison. In the textual context, the thesis discusses: 1) as WHAT, the use of Marx from Kant's vs. Nietzsche's point of view; 2) as WHO, the actual subjects of the exchanging human vs. the productive machine; 3) as WHEN/WHERE, the society of the modes of exchange vs. the modes of inscription; 4) as HOW, the revolutionary means of the simultaneous world revolution vs. the schizophrenic process; 5) as WHY, the ideal subjects of the associative human vs. the non-human of the liberation of desire. In the educational context, the thesis discusses: 1) in the WHAT, as the educational way, autonomous morality vs. active power; 2) in the WHO, as the affirmation of actual subjects, that of the ideal idea vs. that of real power; 3) in the WHEN/WHERE, as the in-between time-space of education, the incommensurable communicative situation of humans vs. the conflictive situation of machines; 4) in the HOW, as the educational method of achieving the ideal, the involuntary restoration of the already-held ideal vs. the present completion and breakthrough of the schizophrenic process; 5) in the WHY, as the aim of education, the cosmopolitan vs. the overman.


Relationship between the Ancient Silk Road and High-technology Machine in Producing Kyung-Geum (고대 실크로드와 고조선 경금 제직기의 연관성 고찰)

  • Kim, Ji-Su;Na, Young-Joo
    • Science of Emotion and Sensibility
    • /
    • v.23 no.4
    • /
    • pp.117-142
    • /
    • 2020
  • This study aims to identify the main transport roads of the ancient Silk Road and to add to the hidden history of silk, since little is known about the weaving technology behind the beautiful silk of GoJoseon. The research was conducted through the analysis of relics as empirical data, together with secondary data collected from books, papers, and photographs of artifacts. The research questions are as follows: first, to investigate the environment of silk production for GoJoseon KyungGeum and the correlation between the ancient Silk Road and the eastern region; second, to examine the advanced weaving technology of KyungGeum in GoJoseon. The findings are as follows. The production period of silk in GoJoseon can be inferred from the jade silkworms of the Hongshan Dong-Yi culture of 4500 BC. KyungGeum pieces were excavated in Loulan, Astana, and Niya in the Xinjiang Autonomous Region and in Noin-Ula in Mongolia, and the oldest KyungGeum was found in JoYang, one of the capitals of GoJoseon near Balhae Bay; KyungGeum was invented there in the 11th century BCE. It became the brocade and damask of the West, delivered along the steppe road before the 5th-6th centuries BCE. The production of KyungGeum was made possible by an advanced loom, GoJoseon's horizontal square 'Jewharu' loom, combined with a high level of weaving skill; it could not have been produced on the slanted loom of China or the vertical loom of West Asia. Based on these results, continued research on the history of the ancient Silk Road is suggested.