• Title/Summary/Keyword: Artificial vision


Fish-eye camera calibration and artificial landmarks detection for the self-charging of a mobile robot (이동로봇의 자동충전을 위한 어안렌즈 카메라의 보정 및 인공표지의 검출)

  • Kwon, Oh-Sang
    • Journal of Sensor Science and Technology
    • /
    • v.14 no.4
    • /
    • pp.278-285
    • /
    • 2005
  • This paper describes techniques of camera calibration and artificial landmark detection for the automatic charging of a mobile robot equipped with a forward-facing fish-eye camera for movement or surveillance purposes. To distinguish the charging station from the surrounding environment, three landmarks fitted with infrared LEDs were installed at the station. When the robot reaches a certain point, a signal is sent to activate the LEDs, which allows the robot to easily detect the landmarks with its vision camera. To eliminate the effect of outside light interference, a difference image is generated by comparing two images taken with the LEDs on and off, respectively. A fish-eye lens was used for the robot's vision camera, but the wide-angle lens caused significant image distortion; the radial lens distortion was corrected after a linear perspective projection transformation based on the pin-hole model. In experiments, the system showed a sensing accuracy of ±10 mm in position and ±1° in orientation at a distance of 550 mm.
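The LED on/off difference-image step described above can be illustrated with a toy numpy sketch; the frame size, landmark positions, LED brightness, and threshold below are invented for the illustration and are not the paper's actual values:

```python
import numpy as np

# Toy illustration of the on/off difference image: ambient lighting is
# common to both frames, so it cancels in the difference, leaving only
# the infrared-LED landmarks.
rng = np.random.default_rng(1)
ambient = rng.integers(0, 120, (60, 80))       # shared background lighting
frame_off = ambient.copy()                     # frame taken with LEDs off
frame_on = ambient.copy()                      # frame taken with LEDs on
for r, c in [(10, 15), (10, 40), (10, 65)]:    # three assumed 3x3 LED spots
    frame_on[r - 1:r + 2, c - 1:c + 2] += 120

diff = np.abs(frame_on - frame_off)            # background cancels out
mask = diff > 60                               # threshold isolates the LEDs
print(int(mask.sum()))                         # pixels belonging to the LEDs
```

With ideal cancellation, only the three 3x3 LED spots survive thresholding; in practice small frame-to-frame lighting changes leave residual noise that a real threshold must tolerate.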

Information Processing in Primate Retinal Ganglion

  • Je, Sung-Kwan;Cho, Jae-Hyun;Kim, Gwang-Baek
    • Journal of information and communication convergence engineering
    • /
    • v.2 no.2
    • /
    • pp.132-137
    • /
    • 2004
  • Most current computer vision theories are based on hypotheses that are difficult to apply to the real world, and they imitate only a coarse form of the human visual system; as a result, they have not shown satisfying results. The human visual system includes a mechanism that processes information under memory degradation over time and limited storage space. Starting from research on the human visual system, this study analyzes the mechanism that processes input information as it is transferred from the retina to the ganglion cells. A model of the characteristics of retinal ganglion cells is proposed after considering the structure of the retina and the efficiency of storage space. The MNIST database of handwritten digits is used as the data for this research, with ART2 and SOM as recognizers. The results show that the proposed recognition model differs little from a general recognition model in recognition rate, but the efficiency of storage space can be improved by its mechanism for processing input information.

Future Trends of IoT, 5G Mobile Networks, and AI: Challenges, Opportunities, and Solutions

  • Park, Ji Su;Park, Jong Hyuk
    • Journal of Information Processing Systems
    • /
    • v.16 no.4
    • /
    • pp.743-749
    • /
    • 2020
  • The Internet of Things (IoT) is a growing technology, along with artificial intelligence (AI). Recently, increasing numbers of knowledge services built on information collected from sensor data have been reported. Communication is required to connect IoT and AI, and 5G mobile networks have recently become widespread. IoT, AI services, and 5G mobile networks can be configured as a sensor-mobile edge-server pipeline: the sensor does not send data directly to the server but to the mobile edge for quick processing, and the mobile edge then either processes the data immediately using AI technology or forwards it to the server, with 5G mobile network technology used for this transmission. This study therefore examines the challenges, opportunities, and solutions in each of these technologies, addressing clustering, Hyperledger Fabric, data, security, machine vision, convolutional neural networks, IoT technology, and resource management of 5G mobile networks.

Generative Adversarial Networks: A Literature Review

  • Cheng, Jieren;Yang, Yue;Tang, Xiangyan;Xiong, Naixue;Zhang, Yuan;Lei, Feifei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.12
    • /
    • pp.4625-4647
    • /
    • 2020
  • Generative Adversarial Networks (GANs), among the most creative deep learning models of recent years, have achieved great success in computer vision and natural language processing. A GAN uses game theory, pitting a generator against a discriminator to produce the best possible samples. Recently, many deep learning models have been applied to the security field, and following the ideas of "generative" and "adversarial", researchers are trying to apply GANs there as well. This paper presents the development of GANs. We review traditional generative models and typical GAN models, and analyze their applications in natural language processing and computer vision. To show that GAN models are feasible for security, we separately review their contributions to defenses in information security, cyber security, and artificial intelligence security. Finally, drawing on the reviewed literature, we provide a broader outlook on this research direction.
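The generator-discriminator game this abstract refers to is usually written as the minimax objective from the original GAN formulation:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big] +
  \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

Here the discriminator D is trained to distinguish real samples x from generated samples G(z), while the generator G is trained to fool it; at the equilibrium of this game the generator's distribution matches the data distribution.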

Improved Deep Residual Network for Apple Leaf Disease Identification

  • Zhou, Changjian;Xing, Jinge
    • Journal of Information Processing Systems
    • /
    • v.17 no.6
    • /
    • pp.1115-1126
    • /
    • 2021
  • Plant disease is one of the most vexing problems for agricultural growers, so timely detection of plant diseases is of great practical value: countermeasures can be taken at the early stage of disease. Numerous researchers have made unremitting efforts in plant disease identification, but the problem was not solved effectively until the development of artificial intelligence and big data technologies, especially the wide application of deep learning models in different fields. Since the symptoms of plant diseases appear mainly on the leaves, computer vision and machine learning are effective and rapid methods for identifying various kinds of plant diseases. As one of the fruits with the highest nutritional value, apples directly affect quality of life, and it is important to prevent disease intrusion in advance for the sake of yield and taste. In this study, an improved deep residual network is proposed for apple leaf disease identification: a global residual connection is added to the original residual network, and the local residual connection architecture is optimized. On a dataset of 1,977 apple leaf disease images in three categories collected for this study, the proposed method achieves 98.74% top-1 accuracy on the test set, outperforming existing state-of-the-art models on apple leaf disease identification tasks and proving its effectiveness.
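The combination of local and global residual connections can be sketched in plain numpy; the layer shapes, depth, and fully-connected blocks below are hypothetical stand-ins for the paper's convolutional architecture:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def local_block(x, w1, w2):
    # ordinary (local) residual block: output = F(x) + x
    return relu(x @ w1) @ w2 + x

def net(x, blocks):
    # stack of local residual blocks plus one global skip from the
    # network input to its output, mirroring the added global
    # residual connection described in the abstract
    h = x
    for w1, w2 in blocks:
        h = local_block(h, w1, w2)
    return h + x  # global residual connection

rng = np.random.default_rng(0)
d = 16
blocks = [(rng.normal(0, 0.1, (d, d)), rng.normal(0, 0.1, (d, d)))
          for _ in range(3)]
x = rng.normal(size=(4, d))
y = net(x, blocks)
print(y.shape)
```

One consequence of the two skip levels is visible even in this toy: if every weight is zero, the signal still reaches the output through the identity paths, which is what makes very deep residual stacks trainable.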

A Study of the Trend of Deep Learning Technology of China (중국의 딥러닝 기술 동향에 관한 연구)

  • Fu, Yumei;Kim, Minyoung;Park, Geunho;Jang, Jongwook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2019.05a
    • /
    • pp.385-388
    • /
    • 2019
  • In recent years, China has undergone unprecedented intelligent reforms, and artificial intelligence has become a hot topic in society. The deep learning framework is the core of artificial intelligence industrialization and has attracted attention from all parties. Deep learning has been applied in the fields of computer vision, speech recognition, and natural language processing. This paper introduces China's development status and future challenges in technology, talent, and market applications.


Design of Irrigation Pumping System Controller for Operational Instrument of Articulation (관절경 수술을 위한 관주(灌注)시스 (Irrigation Pumping System) 제어기의 개발)

  • 김민수;이순걸
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2003.06a
    • /
    • pp.1294-1297
    • /
    • 2003
  • With the development of the medical field, many kinds of operations have come to be performed on human joints. Arthroscopic surgery, which uses an irrigation pumping system to secure the operator's field of vision and to wash the operation space, is preferred for its many merits. This paper presents research on a reliable control algorithm for the pumping instrument used in arthroscopic surgery. Before clinical operation, a flexible artificial joint model that closely mimics a human joint was used, and the algorithm was developed for it. The system design accounts for limited sensing points, dynamic effects from compliance, time delay from fluid flow, and so on. The system is composed of a pressure controller, a regulator for maintaining air pressure, an airtight tank holding distilled-water packs, the artificial joint, and a measuring system, and it is controlled by feedback from a pressure sensor on the artificial joint. A Smith Predictor is applied to handle the time delay, and parameter estimation is used to fit the model to the experimental data. The pressure error between the air-pressure tank and the artificial joint was measured so that the system could be identified, and a state-feedback controller was then developed and implemented on a microprocessor. The reliability of the system is demonstrated by applying the control algorithm to experiments on the artificial joint.
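The Smith Predictor idea mentioned in the abstract (compensating a transport time delay with an internal plant model) can be sketched as a discrete-time simulation; the first-order plant, delay length, and PI gains below are illustrative assumptions, not the paper's identified parameters:

```python
import collections

def simulate(use_smith=True, t_end=5.0, setpoint=1.0):
    dt = 0.01
    tau, gain = 0.5, 1.0        # assumed first-order pressure plant
    delay_steps = 50            # assumed 0.5 s transport delay in the line
    kp, ki = 2.0, 4.0           # PI gains hand-tuned for this toy model
    y = ym = ymd = integ = 0.0  # plant, delay-free model, delayed model
    buf_p = collections.deque([0.0] * delay_steps)  # plant input delay line
    buf_m = collections.deque([0.0] * delay_steps)  # model output delay line
    for _ in range(int(t_end / dt)):
        # Smith predictor: feed back the measurement plus the difference
        # between the delay-free and delayed model outputs, so the PI
        # controller effectively sees a delay-free plant.
        fb = y + (ym - ymd) if use_smith else y
        e = setpoint - fb
        integ += e * dt
        u = kp * e + ki * integ
        buf_p.append(u)
        u_delayed = buf_p.popleft()
        y += dt * (-y + gain * u_delayed) / tau   # plant with input delay
        ym += dt * (-ym + gain * u) / tau         # internal delay-free model
        buf_m.append(ym)
        ymd = buf_m.popleft()
    return y

print(round(simulate(), 3))
```

With a perfect internal model, the delayed model output cancels the measurement exactly and the controller regulates the delay-free model, so the plant output settles at the setpoint shifted only by the transport delay.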


Back-bead Prediction and Weldability Estimation Using An Artificial Neural Network (인공신경망을 이용한 이면비드 예측 및 용접성 평가)

  • Lee, Jeong-Ick;Koh, Byung-Kab
    • Transactions of the Korean Society of Machine Tool Engineers
    • /
    • v.16 no.4
    • /
    • pp.79-86
    • /
    • 2007
  • The shape of excessive penetration depends mainly on the welding conditions (welding current and welding voltage) and the welding process (groove gap and welding speed); these are the major factors affecting the width and height of the back bead. In this paper, back-bead prediction and weldability estimation using an artificial neural network were investigated. The results are as follows. 1) Once the groove gap, welding current, welding voltage, and welding speed are determined as the welding conditions, the width and height of the back bead can be predicted by the artificial neural network without experimental measurement. 2) When applied to three weld quality levels (ISO 5817), the experimental measurements obtained with a vision sensor and the mean values predicted by the artificial neural network showed good agreement. 3) The width and height of the back bead are proportional to the groove gap, welding current, and welding voltage, but not to the welding speed.
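The regression setup described (welding parameters in, bead width and height out) can be sketched as a small feedforward network trained by backpropagation; the synthetic data-generating formula, layer sizes, and learning rate are invented placeholders, not the paper's measured data or architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in data: width/height grow with gap, current and
# voltage and shrink with speed, echoing the abstract's finding.
# Columns: groove gap (mm), current (A), voltage (V), speed (cm/min).
X = rng.uniform([0.5, 80, 18, 20], [2.0, 160, 26, 60], size=(200, 4))
Y = np.stack([0.02 * X[:, 0] * X[:, 1] / X[:, 3] + 0.05 * X[:, 2],
              0.01 * X[:, 0] * X[:, 1] / X[:, 3]], axis=1)
Xn = (X - X.mean(0)) / X.std(0)   # normalize inputs and targets
Yn = (Y - Y.mean(0)) / Y.std(0)

w1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)   # 4-8-2 network
w2 = rng.normal(0, 0.5, (8, 2)); b2 = np.zeros(2)
lr, losses = 0.05, []
for _ in range(500):
    h = np.tanh(Xn @ w1 + b1)          # forward pass
    pred = h @ w2 + b2
    err = pred - Yn
    losses.append(float((err ** 2).mean()))
    g_pred = 2 * err / len(Xn)         # backward pass (mean squared error)
    g_w2 = h.T @ g_pred; g_b2 = g_pred.sum(0)
    g_h = g_pred @ w2.T * (1 - h ** 2)
    g_w1 = Xn.T @ g_h; g_b1 = g_h.sum(0)
    w2 -= lr * g_w2; b2 -= lr * g_b2
    w1 -= lr * g_w1; b1 -= lr * g_b1
print(losses[0], losses[-1])
```

After training, the network maps a new welding condition directly to predicted bead width and height, which is the "prediction without experimental measurement" the abstract claims.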

Object Detection Based on Virtual Humans Learning (가상 휴먼 학습 기반 영상 객체 검출 기법)

  • Lee, JongMin;Jo, Dongsik
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.376-378
    • /
    • 2022
  • Artificial intelligence technology is widely used in fields such as AI speakers, AI chatbots, and autonomous vehicles. Among these application fields, image processing uses artificial intelligence in various ways, such as detecting or recognizing objects. In this paper, data synthesized from a virtual human is used to analyze images taken in a specific space.


Multi-type Image Noise Classification by Using Deep Learning

  • Waqar Ahmed;Zahid Hussain Khand;Sajid Khan;Ghulam Mujtaba;Muhammad Asif Khan;Ahmad Waqas
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.7
    • /
    • pp.143-147
    • /
    • 2024
  • Image noise classification is a classical problem in image processing, machine learning, deep learning, and computer vision. In this paper, image noise classification is performed using deep learning, with the Keras library of TensorFlow. 6,900 images were selected from the Kaggle database for classification. A labeled dataset of noisy images of multiple types, comprising Salt & Pepper, Gaussian, and Sinusoidal noise, was generated with Matlab from a dataset of non-noisy images. Training and test sets were partitioned to train and test the model. Among deep neural networks, a CNN (Convolutional Neural Network) is used because of its ability to learn deep, hidden patterns and features in the images to be classified; this deep feature learning makes CNNs outperform classical methods in many classification problems.
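The labeled-dataset generation step (clean images corrupted with the three noise types) can be sketched in numpy rather than Matlab; the noise parameters and image size are arbitrary placeholders, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def salt_pepper(img, p=0.05):
    # a fraction p of pixels is forced to pure black or pure white
    out = img.copy()
    m = rng.random(img.shape)
    out[m < p / 2] = 0.0
    out[m > 1 - p / 2] = 1.0
    return out

def gaussian(img, sigma=0.1):
    # additive zero-mean Gaussian noise, clipped back to [0, 1]
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def sinusoidal(img, amp=0.2, freq=8):
    # periodic intensity bands along the rows
    rows = np.arange(img.shape[0])[:, None]
    return np.clip(img + amp * np.sin(2 * np.pi * freq * rows / img.shape[0]),
                   0.0, 1.0)

clean = rng.random((32, 32))   # stand-in for one non-noisy source image
noisy = {"salt_pepper": salt_pepper(clean),
         "gaussian": gaussian(clean),
         "sinusoidal": sinusoidal(clean)}
```

Applying these functions over a folder of clean images, with the function name as the class label, yields exactly the kind of labeled multi-class dataset the classifier is trained on.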