• Title/Summary/Keyword: AI camera

Search Results: 88

Study on Chinese Consumers' Perceptions of Samsung Smartphones through Social Media Data Analysis (소셜 미디어 데이터 분석을 통한 중국 소비자의 삼성 스마트폰에 대한 인식 연구)

  • Cui Ran;Inyong Nam
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.4
    • /
    • pp.311-321
    • /
    • 2024
  • This study comprehensively analyzed the perceptions of Chinese consumers who have and have not purchased Samsung smartphones, based on data from the social media platform Weibo. Various big data analysis techniques were used, including text mining, frequency analysis, centrality analysis, semantic network analysis, and CONCOR analysis. The results indicate that positive perceptions of Samsung smartphones include design aesthetics, camera functionality, AI features, screen quality, specifications and performance, and the brand's premium status. Negative perceptions, on the other hand, include pricing, a yellow tint in photos, slow charging speeds, and safety concerns. These findings provide a crucial basis for improving Samsung's market strategy in China.

Utilizing AI Foundation Models for Language-Driven Zero-Shot Object Navigation Tasks (언어-기반 제로-샷 물체 목표 탐색 이동 작업들을 위한 인공지능 기저 모델들의 활용)

  • Jeong-Hyun Choi;Ho-Jun Baek;Chan-Sol Park;Incheol Kim
    • The Journal of Korea Robotics Society
    • /
    • v.19 no.3
    • /
    • pp.293-310
    • /
    • 2024
  • In this paper, we propose an agent model for Language-Driven Zero-Shot Object Navigation (L-ZSON) tasks, which takes a freeform language description of an unseen target object and navigates an inexperienced environment to find that object. In general, an L-ZSON agent should be able to visually ground the target object by understanding its freeform language description and recognizing the corresponding visual object in camera images. Moreover, the agent should also be able to build a rich spatial context map of the unknown environment and decide efficient exploration actions based on the map until the target object appears in the field of view. To address these challenging issues, we propose AML (Agent Model for L-ZSON), a novel L-ZSON agent model that makes effective use of AI foundation models such as Large Language Models (LLMs) and Vision-Language Models (VLMs). To tackle the visual grounding of the target object description, our agent model employs GLEE, a VLM pretrained for locating and identifying arbitrary objects in images and videos in open-world scenarios. To address the exploration policy, the proposed agent model leverages the commonsense knowledge of an LLM to make sequential navigational decisions. Through various quantitative and qualitative experiments on RoboTHOR, a 3D simulation platform, and PASTURE, an L-ZSON benchmark dataset, we show the superior performance of the proposed agent model.

Ship Detection Using Background Estimation of Video and AIS Informations (영상의 배경추정기법과 AIS정보를 이용한 선박검출)

  • Kim, Hyun-Tae;Park, Jang-Sik;Yu, Yun-Sik
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.14 no.12
    • /
    • pp.2636-2641
    • /
    • 2010
  • To support collision avoidance between ships and sea search-and-rescue operations, the ship Automatic Identification System (AIS), which can both send and receive messages between ships and VTS traffic control, has been adopted. Port control systems can also manage vessel traffic services in cooperation with AIS. For more efficient vessel traffic service, a ship recognition and display system that cooperates with AIS is required. In this paper, we propose a ship detection system that works with AIS, using image-processing-based background estimation on sea and harbor images captured by a camera. We experimented on sea and harbor scenes extracted from real-time camera input. Through computer simulation and real-world tests, the proposed system is shown to be effective for ship monitoring.
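The background-estimation step described above can be sketched with a simple running-average model. This is a generic illustration only; the paper's exact estimator, learning rate, and detection threshold are not specified in the abstract:

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Running-average background estimate: B <- (1 - alpha) * B + alpha * F."""
    return (1.0 - alpha) * background + alpha * frame

def detect_foreground(background, frame, threshold=30.0):
    """Pixels differing from the background by more than `threshold`
    are marked as foreground (candidate ship regions)."""
    diff = np.abs(frame.astype(np.float64) - background)
    return diff > threshold

# Toy sequence: a static sea (intensity 100) observed for 20 frames.
bg = np.full((4, 4), 100.0)
for _ in range(20):
    bg = update_background(bg, np.full((4, 4), 100.0))

# A "ship" patch then appears as a bright 2x2 region.
ship_frame = np.full((4, 4), 100.0)
ship_frame[1:3, 1:3] = 200.0
mask = detect_foreground(bg, ship_frame)
print(int(mask.sum()))  # count of foreground (ship) pixels
```

In practice the background is usually updated only on pixels classified as background, so that slow-moving vessels are not absorbed into the estimate.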

Enhancing the performance of the facial keypoint detection model by improving the quality of low-resolution facial images (저화질 안면 이미지의 화질 개선를 통한 안면 특징점 검출 모델의 성능 향상)

  • KyoungOok Lee;Yejin Lee;Jonghyuk Park
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.2
    • /
    • pp.171-187
    • /
    • 2023
  • When a person's face is captured by a recording device such as a low-pixel surveillance camera, it is difficult to recognize the face due to low image quality. In such situations, problems arise such as failing to identify a criminal suspect or a missing person. Existing studies on face recognition used refined datasets, so performance could not be measured in diverse environments. Therefore, to solve the problem of poor face recognition performance on low-quality images, this paper proposes a method that first improves the quality of low-quality facial images from various environments to generate high-quality images, and then improves the performance of facial keypoint detection. To confirm the practical applicability of the proposed architecture, an experiment was conducted on a dataset in which people appear relatively small within the overall image. In addition, a facial image dataset reflecting mask-wearing situations was chosen to explore extension to real-world problems. When the performance of the keypoint detection model was measured after image quality improvement, face detection improved by an average of 3.47 times for images without masks and 9.92 times for images with masks. The RMSE of the facial keypoints decreased by an average of 8.49 times with masks and 2.02 times without masks. These results verify the applicability of the proposed method, which increases the recognition rate for low-quality facial images through image quality improvement.

Methodology for Generating UAV's Effective Flight Area that Satisfies the Required Spatial Resolution (요구 공간해상도를 만족하는 무인기의 유효 비행 영역 생성 방법)

  • Ji Won Woo;Yang Gon Kim;Jung Woo An;Sang Yun Park;Gyeong Rae Nam
    • Journal of Advanced Navigation Technology
    • /
    • v.28 no.4
    • /
    • pp.400-407
    • /
    • 2024
  • The role of unmanned aerial vehicles (UAVs) in modern warfare is increasingly significant, making their capacity for autonomous missions essential. Accordingly, autonomous target detection and identification based on captured images is crucial, yet the effectiveness of AI models depends on image sharpness. This study therefore describes how to determine the camera's field of view (FOV) and the UAV's flight position given a required spatial resolution. First, the size of the acquisition area is calculated as a function of the relative position of the UAV and the FOV of the camera. From this, the paper derives the acquisition area that satisfies the required spatial resolution, and then the UAV position and camera FOV that achieve it. Furthermore, the paper proposes a method for calculating the effective range of UAV positions that satisfies the required spatial resolution, centered on the coordinate to be photographed. This range is then processed into a tabular format that can be used for mission planning.
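A minimal sketch of the geometry involved, assuming a nadir-pointing camera over flat terrain (the paper's general oblique-view formulation is more involved, and the sensor width of 1920 pixels below is a hypothetical example):

```python
import math

def ground_footprint(altitude_m, fov_deg):
    """Width of the ground swath imaged at nadir: W = 2 h tan(FOV / 2)."""
    return 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)

def spatial_resolution(altitude_m, fov_deg, pixels):
    """Ground sample distance (m/pixel) across the swath."""
    return ground_footprint(altitude_m, fov_deg) / pixels

def max_altitude(required_gsd_m, fov_deg, pixels):
    """Highest nadir altitude that still meets the required spatial resolution."""
    return required_gsd_m * pixels / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

# Example: 90-degree FOV, 1920-pixel sensor, required 0.05 m/pixel.
h = max_altitude(0.05, 90.0, 1920)
print(round(h, 1))  # altitude bound in metres
```

Sweeping this bound over candidate FOV settings yields the kind of tabular effective-area data the paper feeds into mission planning.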

Cooperative Robot for Table Balancing Using Q-learning (테이블 균형맞춤 작업이 가능한 Q-학습 기반 협력로봇 개발)

  • Kim, Yewon;Kang, Bo-Yeong
    • The Journal of Korea Robotics Society
    • /
    • v.15 no.4
    • /
    • pp.404-412
    • /
    • 2020
  • Everyday tasks often involve at least two people moving objects such as tables and beds, and the balance of such an object changes with each person's actions. However, many previous studies performed such tasks solely with robots, without factoring in human cooperation. Therefore, in this paper, we propose a cooperative robot for table balancing using Q-learning that enables cooperative work between human and robot. The proposed robot recognizes the human's action from camera images of the table's state and performs the corresponding table-balancing action without high-performance equipment. The classification of human actions uses deep learning, specifically AlexNet, and achieves an accuracy of 96.9% over 10-fold cross-validation. The Q-learning experiment was carried out over 2,000 episodes with 200 trials. The overall results show that the Q function converged stably within this number of episodes, and this stable convergence determined the Q-learning policies for the robot's actions. Video of the robot cooperating with a human on the table-balancing task using the proposed Q-learning can be found at http://ibot.knu.ac.kr/videocooperation.html.
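The tabular Q-learning update used in such tasks can be illustrated on a toy two-state balancing problem; the states, actions, and rewards below are hypothetical stand-ins for the paper's actual setup:

```python
import random

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Toy problem: the table is "tilted" or "level"; the robot can raise its end or hold.
actions = ["raise_end", "hold"]
Q = {s: {a: 0.0 for a in actions} for s in ["tilted", "level"]}

random.seed(0)
for _ in range(2000):  # episode count mirrors the paper's 2,000 episodes
    state = "tilted"
    action = random.choice(actions)           # random exploration
    reward = 1.0 if action == "raise_end" else -1.0
    next_state = "level" if action == "raise_end" else "tilted"
    q_update(Q, state, action, reward, next_state)

print(Q["tilted"]["raise_end"] > Q["tilted"]["hold"])  # learned preference
```

With enough episodes the Q values converge, and the greedy policy (pick the highest-valued action per state) levels the table, which is the stable convergence the abstract reports.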

Development of a position sensitive CsI(Tl) crystal array

  • Shi, Guo-Zhu;Chen, Ruo-Fu;Chen, Kun;Shen, Ai-Hua;Zhang, Xiu-Ling;Chen, Jin-Da;Du, Cheng-Ming;Hu, Zheng-Guo;Fan, Guang-Wei
    • Nuclear Engineering and Technology
    • /
    • v.52 no.4
    • /
    • pp.835-840
    • /
    • 2020
  • A position-sensitive CsI(Tl) crystal array coupled with a multi-anode position-sensitive photomultiplier tube (PS-PMT), the Hamamatsu H8500C, has been developed at the Institute of Modern Physics. An effective, fast, and economical readout circuit based on a discretized positioning circuit (DPC) bridge was designed for the 64-channel multi-anode flat-panel PS-PMT. The horizontal and vertical position resolutions are 0.58 mm and 0.63 mm, respectively, for the 1.0 × 1.0 × 5.0 mm³ CsI(Tl) array, and 0.86 mm and 0.80 mm, respectively, for the 2.0 × 2.0 × 10.0 mm³ CsI(Tl) array. These results show that the low-cost CsI(Tl) crystal array could be applied in medical imaging and high-resolution gamma cameras.
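A DPC bridge reduces the 64 anode signals to four corner signals, from which the interaction position is estimated by Anger-style charge division; a minimal sketch of that final step (the actual resistor network and scaling used in the paper may differ):

```python
def dpc_position(x_a, x_b, y_c, y_d):
    """Estimate the scintillation position from the four DPC corner signals.
    Charge-division (Anger-logic) estimate, normalized to [-1, 1]:
        X = (A - B) / (A + B),  Y = (C - D) / (C + D)
    where A/B are the left/right sums and C/D the top/bottom sums."""
    x = (x_a - x_b) / (x_a + x_b)
    y = (y_c - y_d) / (y_c + y_d)
    return x, y

# Equal corner signals -> event at the center of the array.
print(dpc_position(1.0, 1.0, 1.0, 1.0))  # (0.0, 0.0)
```

Mapping these continuous coordinates back to individual crystals (via a flood-field lookup table) is what gives the sub-millimetre resolutions quoted above.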

Development of Human Detection Technology with Heterogeneous Sensors for use at Disaster Sites (재난 현장에서 이종 센서를 활용한 인명 탐지 기술 개발)

  • Seo, Myoung Kook;Yoon, Bok Joong;Shin, Hee Young;Lee, Kyong Jun
    • Journal of Drive and Control
    • /
    • v.17 no.3
    • /
    • pp.1-8
    • /
    • 2020
  • Recently, a special-purpose machine with two manipulators and a quadruped crawler system has been developed for rapid life-saving and initial restoration work at disaster sites. This special-purpose machine provides the driver with various environment recognition functions for accurate and rapid task decisions. In particular, human detection technology assists the driver in poor working conditions such as low light, dust, water vapor, fog, and rain, to prevent secondary human accidents while moving and working. In this study, a human detection module was developed to be mounted on the special-purpose machine. A thermal sensor and a CCD camera were used to detect victims and nearby workers under the difficult environmental conditions present at disaster sites. The performance of various AI-based human detection algorithms was verified, and the algorithms were then applied to detecting objects with different postures and exposure conditions. In addition, image visibility improvement technology was applied to further improve the accuracy of human detection.

A Design of IT Conversion Remote Monitoring System for Offshore Plant (IT융합 해양플랜트 원격 감시 시스템 설계)

  • Hwang, Hun-Gyu;Kim, Hun-Ki;Lee, Jae-Woong;Kim, Min-Jae;Yoo, Gang-Ju;Lee, Seong-Dae
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2013.10a
    • /
    • pp.847-850
    • /
    • 2013
  • An offshore plant is exposed to environmental threats such as typhoons and tidal waves, as well as man-made threats such as fire and ship collisions. In this paper, we design an IT convergence remote monitoring system that protects against these environmental and man-made threats using cameras and AtoN AIS. The system helps operators monitor possible situations around the offshore plant remotely, so that situations can be handled appropriately and the offshore plant managed safely.

Passive Ranging Based on Planar Homography in a Monocular Vision System

  • Wu, Xin-mei;Guan, Fang-li;Xu, Ai-jun
    • Journal of Information Processing Systems
    • /
    • v.16 no.1
    • /
    • pp.155-170
    • /
    • 2020
  • Passive ranging is a critical part of machine vision measurement. Most passive ranging methods based on machine vision use binocular technology, which requires strict hardware conditions and lacks universality. To measure the distance to an object placed on a horizontal plane, we present a passive ranging method based on a monocular vision system using a smartphone. Experimental results show that, for the same abscissas, the ordinates of the image points are linearly related to their actual imaging angles. Based on this principle, we first establish a depth extraction model by assuming a linear function and substituting the actual imaging angles and ordinates of special conjugate points into it. The vertical distance from the target object to the optical axis is then calculated according to the camera's imaging principle, and the passive range is derived from the depth and this vertical distance. Experimental results show that this method achieves higher accuracy than methods based on binocular vision systems. The mean relative error of the depth measurement is 0.937% within 3 m and 1.71% at 3-10 m. Compared with other monocular methods, this method does not require calibration before ranging and avoids the error caused by data fitting.
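The depth-extraction idea can be sketched as follows, assuming a camera at a known height above the horizontal plane. The calibration pairs and angle convention are hypothetical, and a least-squares fit is used here in place of the paper's direct substitution of conjugate points:

```python
import math

def fit_angle_model(ordinates, angles_deg):
    """Least-squares fit of the assumed linear relation angle = k * v + b
    between image-row ordinate v and imaging (depression) angle."""
    n = len(ordinates)
    mean_v = sum(ordinates) / n
    mean_a = sum(angles_deg) / n
    k = sum((v - mean_v) * (a - mean_a) for v, a in zip(ordinates, angles_deg)) \
        / sum((v - mean_v) ** 2 for v in ordinates)
    b = mean_a - k * mean_v
    return k, b

def depth_from_ordinate(v, k, b, camera_height_m):
    """Depth of a ground point: d = h / tan(angle(v)) for a camera at
    height h observing the horizontal plane."""
    angle = math.radians(k * v + b)
    return camera_height_m / math.tan(angle)

# Hypothetical calibration pairs: image row -> depression angle in degrees.
k, b = fit_angle_model([100, 200, 300], [10.0, 20.0, 30.0])
d = depth_from_ordinate(200, k, b, camera_height_m=1.5)
print(round(d, 3))  # depth at a 20-degree depression angle: 1.5 / tan(20°)
```

The horizontal offset from the optical axis is recovered analogously from the abscissa, and combining the two gives the slant range the paper reports.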