• Title/Summary/Keyword: smart

Search Result 20,578

Fabrication of Strain Sensor Based on Graphene/Polyurethane Nanoweb and Respiration Measurement (그래핀/폴리우레탄 나노웹 기반의 스트레인센서 제작 및 호흡측정)

  • Lee, Hyocheol;Cho, Hyeon-seon;Lee, Eugene;Jang, Eunji;Cho, Gilsoo
    • Science of Emotion and Sensibility
    • /
    • v.22 no.1
    • /
    • pp.15-22
    • /
    • 2019
  • The purpose of this study is to develop a nanoweb-based strain sensor by imparting electrical conductivity to a polyurethane nanoweb with graphene. To this end, 1% graphene ink was pour-coated onto a polyurethane nanoweb and post-treated with PDMS (polydimethylsiloxane) to complete a wearable strain sensor. The surface characteristics of the specimens were evaluated using a field emission scanning electron microscope (FE-SEM) to check whether the conductive material was well coated on the specimen surface. The electrical properties were measured with a multimeter by recording the linear resistance of each specimen and comparing how the line resistance changed when the specimens were stretched by 5% and 10%, respectively. To evaluate sensor performance, the gauge factor was obtained. The garment evaluation was performed by attaching the completed strain sensor to a dummy and measuring the respiration signal according to tension using MP150 (BIOPAC Systems Inc., USA) and AcqKnowledge (ver. 4.2, BIOPAC Systems Inc., USA). The surface evaluation confirmed that all conductive nanoweb specimens were uniformly coated with the graphene ink. In the resistance measurements under tension, specimen G, which was treated with graphene only, had the lowest resistance and specimen G-H had the highest, and the line resistance of both specimen G and specimen G-H increased steadily as the strain increased to 5%. Unlike the resistance results, specimen G showed a higher gauge factor than specimen G-H. In the actual garment evaluation, the strain sensor made from specimen G-H measured stable peak values and produced a signal of good quality. Therefore, we confirmed that a polyurethane nanoweb treated with graphene ink can serve as a respiration sensor.
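For reference, the gauge factor mentioned in this abstract is conventionally defined as the relative change in resistance divided by the applied strain. A minimal sketch of that calculation, using illustrative resistance values that are not taken from the paper:

```python
def gauge_factor(r0: float, r_strained: float, strain: float) -> float:
    """Gauge factor GF = ((R - R0) / R0) / strain, the usual figure of merit for strain sensors."""
    return ((r_strained - r0) / r0) / strain

# Hypothetical example: line resistance rises from 1.00 kΩ to 1.15 kΩ at 5% elongation.
print(gauge_factor(1.00e3, 1.15e3, 0.05))  # -> approximately 3.0
```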

A Study on the Change of Image Quality According to the Change of Tube Voltage in Computed Tomography Pediatric Chest Examination (전산화단층촬영 소아 흉부검사에서 관전압의 변화에 따른 화질변화에 관한 연구)

  • Kim, Gu;Kim, Gyeong Rip;Sung, Soon Ki;Kwak, Jong Hyeok
    • Journal of the Korean Society of Radiology
    • /
    • v.13 no.4
    • /
    • pp.503-508
    • /
    • 2019
  • This study evaluated chest CT image quality according to changes in tube voltage using the Volume Axial mode scanning technique, in order to obtain high image quality and to suggest an appropriate tube voltage. The CT instrument was a GE Revolution (GE Healthcare, Wisconsin, USA) and the phantom was a Pediatric Whole Body Phantom PBU-70. Scans were performed in Volume Axial mode using the pediatric protocol used at Y university hospital in Yangsan. The tube voltage was set to 70 kVp, 80 kVp, and 100 kVp, and the mAs was set with smart mA-ODM. The mean SNR of the heart was -4.53 ± 0.26 at 70 kVp, -3.34 ± 0.18 at 80 kVp, and -1.87 ± 0.15 at 100 kVp; the SNR at 70 kVp differed from that at 100 kVp by about -2.66, a statistically significant difference (p<0.05). For the lung, the mean SNR was -78.20 ± 4.16 at 70 kVp, -79.10 ± 4.39 at 80 kVp, and -77.43 ± 4.72 at 100 kVp; the SNR at 70 kVp differed from that at 100 kVp by about -0.77, which was statistically significant (p<0.05). The mean lung CNR was 73.67 ± 3.95 at 70 kVp, 75.76 ± 4.25 at 80 kVp, and 75.57 ± 4.62 at 100 kVp; the CNR at 80 kVp was about 2.09 higher than at 70 kVp, a statistically significant difference (p<0.05). When 70 kVp and 80 kVp were compared with 100 kVp, the heart SNR ratio was close to 1 and heart image quality was maintained; since there was no difference in SNR between 70 kVp and 80 kVp, 70 kVp can be used to reduce the radiation dose. On the other hand, the CNR ratio was approximately 1 at 70 kVp and there was no difference between 80 kVp and 100 kVp, so 80 kVp can reduce the radiation dose in pediatric chest CT. In addition, Volume Axial mode allows a short scan time of 0.3 seconds, which is useful for pediatric patients who tend to move or require sedation.
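For context, the SNR and CNR reported above are typically computed from region-of-interest (ROI) statistics, with the ROI mean as signal and a standard deviation as noise. A minimal sketch with hypothetical ROI values, not the measurements from this study:

```python
def snr(mean_roi: float, noise_sd: float) -> float:
    """Signal-to-noise ratio: ROI mean signal divided by noise (standard deviation)."""
    return mean_roi / noise_sd

def cnr(mean_roi: float, mean_background: float, noise_sd: float) -> float:
    """Contrast-to-noise ratio: ROI-to-background contrast divided by noise."""
    return (mean_roi - mean_background) / noise_sd

# Hypothetical ROI statistics in Hounsfield units (HU).
lung_mean, background_mean, noise = -750.0, 40.0, 9.5
print(round(snr(lung_mean, noise), 2))                   # lung SNR
print(round(cnr(lung_mean, background_mean, noise), 2))  # lung-to-background CNR
```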

Development Process for User Needs-based Chatbot: Focusing on Design Thinking Methodology (사용자 니즈 기반의 챗봇 개발 프로세스: 디자인 사고방법론을 중심으로)

  • Kim, Museong;Seo, Bong-Goon;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.221-238
    • /
    • 2019
  • Recently, companies and public institutions have been actively introducing chatbot services in the field of customer counseling and response. Introducing a chatbot service not only brings labor cost savings to companies and organizations but also enables rapid communication with customers. Advances in data analytics and artificial intelligence are driving the growth of these chatbot services. Current chatbots can understand users' questions and offer the most appropriate answers through machine learning and deep learning. The advancement of core chatbot technologies such as NLP, NLU, and NLG has made it possible to understand words, paragraphs, meanings, and emotions. For this reason, the value of chatbots continues to rise. However, technology-oriented chatbots can be inconsistent with what users inherently want, so chatbots need to be addressed in the area of user experience, not just technology. The Fourth Industrial Revolution highlights the importance of user experience as well as the advancement of artificial intelligence, big data, cloud, and IoT technologies. The development of IT technology and the growing importance of user experience have provided people with a variety of environments and changed lifestyles. This means that experiences in interactions with people, services (products), and the environment become very important. Therefore, it is time to develop user needs-based services (products) that can provide new experiences and value to people. This study proposes a user needs-based chatbot development process by applying design thinking, a representative methodology in the field of user experience, to chatbot development. The proposed process consists of four steps. The first step, 'Setting up the knowledge domain', establishes the chatbot's area of expertise. The second step, 'Knowledge accumulation and insight identification', accumulates the information corresponding to the configured domain and derives insights. The third step, 'Opportunity development and prototyping', is where full-scale development begins. Finally, the 'User feedback' step collects feedback from users on the developed prototype. This produces a 'user needs-based service (product)' that meets the objectives of the process. Beginning with fact gathering through user observation, the process moves through abstraction to derive insights and explore opportunities; a chatbot that meets users' needs is then expected to emerge by materializing those insights, structuring the desired information, and providing functions that fit the user's mental model. In this study, we present an actual construction example for the domestic cosmetics market to confirm the effectiveness of the proposed process. The domestic cosmetics market was chosen as the case because user experience characteristics are pronounced there, so responses from users can be understood quickly. This study has a theoretical implication in that it proposes a new chatbot development process by incorporating the design thinking methodology. It differs from existing chatbot development research in that it focuses on user experience rather than technology. It also has practical implications in that it offers companies and institutions realistic methods that can be applied immediately. In particular, the process proposed in this study can be accessed and utilized by anyone, since user needs-based chatbots can be developed even by non-experts. Further studies are needed because the case covered only one field; in addition to the cosmetics market, research should be conducted in various fields in which user experience is prominent, such as the smartphone and automotive markets. Through this, the proposal could develop into a general process for building chatbots centered on user experience rather than on technology.

Requirement Analysis for Agricultural Meteorology Information Service Systems based on the Fourth Industrial Revolution Technologies (4차 산업혁명 기술에 기반한 농업 기상 정보 시스템의 요구도 분석)

  • Kim, Kwang Soo;Yoo, Byoung Hyun;Hyun, Shinwoo;Kang, DaeGyoon
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.21 no.3
    • /
    • pp.175-186
    • /
    • 2019
  • Efforts have been made to introduce climate smart agriculture (CSA) for adaptation to future climate conditions, which would require the collection and management of site-specific meteorological data. The objectives of this study were to identify requirements for the construction of an agricultural meteorology information service system (AMISS) using the technologies that lead the fourth industrial revolution, e.g., the internet of things (IoT), artificial intelligence, and cloud computing. Low-cost IoT sensors with low operating current would be useful for organizing a wireless sensor network (WSN) for the collection and analysis of weather measurement data, which would support assessment of the productivity of an agricultural ecosystem. It would be advisable to extend the spatial extent of the WSN to a rural community, which would benefit a greater number of farms. It would also be preferable to build big data for agricultural meteorology in order to produce and evaluate site-specific data in rural areas. Digital climate maps can be improved using artificial intelligence such as deep neural networks. Furthermore, cloud computing and fog computing would help reduce costs and enhance the user experience of the AMISS. In addition, it would be advantageous to combine environmental data with farm management data, e.g., price data for the produce of interest. It would also be necessary to develop a mobile application whose user interface meets the needs of stakeholders. These fourth industrial revolution technologies would facilitate the development of the AMISS and wide application of CSA.
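As an illustration of the kind of site-specific data such a WSN node would feed into the AMISS, here is a minimal sketch of a sensor reading record and a simple daily aggregation step; the field names and values are assumptions made for illustration, not details from the paper:

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

@dataclass
class WeatherReading:
    station_id: str      # hypothetical identifier of a low-cost IoT node
    timestamp: datetime
    air_temp_c: float    # air temperature in degrees Celsius
    rel_humidity: float  # relative humidity in percent

def daily_mean_temperature(readings: list[WeatherReading]) -> float:
    """Aggregate raw WSN readings into a site-specific daily mean temperature."""
    return mean(r.air_temp_c for r in readings)

# Hypothetical readings from a single farm-level node.
readings = [
    WeatherReading("node-01", datetime(2019, 7, 1, 6, 0), 18.2, 85.0),
    WeatherReading("node-01", datetime(2019, 7, 1, 14, 0), 29.4, 55.0),
]
print(daily_mean_temperature(readings))  # -> about 23.8
```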

Detection Ability of Occlusion Object in Deep Learning Algorithm depending on Image Qualities (영상품질별 학습기반 알고리즘 폐색영역 객체 검출 능력 분석)

  • LEE, Jeong-Min;HAM, Geon-Woo;BAE, Kyoung-Ho;PARK, Hong-Ki
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.22 no.3
    • /
    • pp.82-98
    • /
    • 2019
  • The importance of spatial information is rapidly rising. In particular, 3D spatial information construction and modeling of real-world objects, as in smart cities and digital twins, has become an important core technology. The constructed 3D spatial information is used in various fields such as land management, landscape analysis, environment, and welfare services. Three-dimensional modeling with images provides high visibility and realism of objects through texturing. However, textures inevitably contain occlusion areas caused by physical obstructions such as roadside trees, adjacent objects, vehicles, and banners at the time of image acquisition. Such occlusion areas are a major cause of the deterioration of the realism and accuracy of the constructed 3D model. Various studies have been conducted to solve the occlusion problem, and recently deep learning algorithms have been investigated for detecting and resolving occlusion areas. Deep learning requires sufficient training data, and the quality of the collected training data directly affects the performance and results of the deep learning. Therefore, this study analyzed the ability to detect occlusion areas in images of various qualities in order to verify how deep learning performance and results depend on the quality of the training data. Images containing occlusion-causing objects were generated for each artificially quantified quality level and applied to the implemented deep learning algorithm. The study found that, for brightness adjustment, the detection ratio fell to 0.56 for brighter images, and that for adjustments of pixel size and artificial noise, detection decreased rapidly once images were degraded from the original to the middle level. In the F-measure evaluation, the change for noise-adjusted image resolution was the largest, at 0.53 points. The ability to detect occlusion areas by image quality will serve as a valuable criterion for the practical application of deep learning in the future, and by indicating the level of image quality required at the acquisition stage, these results are expected to contribute greatly to its practical application.
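For reference, the F-measure used in the evaluation above is the harmonic mean of precision and recall. A minimal sketch with hypothetical detection counts, not the study's figures:

```python
def f_measure(tp: int, fp: int, fn: int) -> float:
    """F-measure (F1): harmonic mean of precision and recall over detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for occlusion-object detections on one test image set.
print(round(f_measure(tp=45, fp=15, fn=20), 2))  # -> 0.72
```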

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.43-61
    • /
    • 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI-related research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems, such as learning and problem solving, related to human intelligence. The field of artificial intelligence has advanced more than ever, owing to recent interest in the technology and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions by using machine-readable and processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and it is increasingly used together with statistical artificial intelligence such as machine learning. More recently, the purpose of the knowledge base has become to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question answering systems of smart speakers. However, building a useful knowledge base is a time-consuming task that still requires a great deal of expert effort. In recent years, much knowledge-based AI research and technology has relied on DBpedia, one of the largest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present user-created summaries of some unifying aspect of an article. This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of the accuracy of knowledge, since the knowledge is generated from semi-structured infobox data created by users. However, because only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classification of documents into ontology classes, classification of the appropriate sentences from which to extract triples, and value selection and transformation into RDF triple structure. The structures of Wikipedia infoboxes are defined as infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these infobox templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that class. Finally, we extract knowledge from the sentences classified as appropriate and convert the knowledge into triples. To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, and trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF models for the knowledge extraction process. Through the proposed process, structured knowledge can be utilized by extracting knowledge from text documents according to the ontology schema. In addition, this methodology can significantly reduce the effort required of experts to construct instances according to the ontology schema.
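To make the final step of the pipeline concrete, here is a minimal sketch of turning a BIO-tagged sentence into an RDF-style triple; the tag labels, property name, and example sentence are illustrative assumptions rather than the paper's actual schema:

```python
# Each token is paired with a BIO tag marking the value span for one ontology property.
tagged = [("Seoul", "B-VAL"), ("is", "O"), ("the", "O"),
          ("capital", "O"), ("of", "O"), ("Korea", "O")]

def extract_value(tokens):
    """Collect the span labelled B-VAL/I-VAL as the object of the triple."""
    return " ".join(tok for tok, tag in tokens if tag in ("B-VAL", "I-VAL"))

subject, predicate = "Korea", "dbo:capital"   # assumed article subject and mapped property
triple = (subject, predicate, extract_value(tagged))
print(triple)  # -> ('Korea', 'dbo:capital', 'Seoul')
```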

Development of smartphone-based voice therapy program (스마트폰기반 음성치료 프로그램 개발연구)

  • Lee, Ha-Na;Park, Jun-Hee;Yoo, Jae-Yeon
    • Phonetics and Speech Sciences
    • /
    • v.11 no.1
    • /
    • pp.51-61
    • /
    • 2019
  • The purpose of this study was to develop a smartphone-based voice therapy program for patients with voice disorders. The contents of the voice therapy program were collected through an analysis of voice-therapy-related mobile content in Korea and through expert and user demand surveys, and the program was developed using Android Studio. A user satisfaction evaluation of the application was conducted with five patients with functional voice disorders. The results showed that the voice-therapy-related mobile content in Korea was mostly related to breathing, followed by voice and singing, but only 13 applications were practically usable for voice therapy. The expert and user demand surveys showed that both patients and therapists had a high need for content that could provide voice training in places other than the treatment room. Based on this analysis, 'Home Voice Trainer', a smartphone-based voice therapy program, was developed. Home Voice Trainer is an Android smartphone application for voice therapy and management, designed so that voice therapy activities trained offline can be practiced at home. In addition, patients' voice training records are managed online so that they can maintain their voice improvement through continuous voice consulting even after the end of voice therapy. User evaluations showed that patients were satisfied with the difficulty and content of the voice therapy program provided by Home Voice Trainer but found parts of the user interface lacking, such as the home button and the transitions between screens. A further study on the clinical application of Home Voice Trainer to patients with voice disorders is suggested. It is expected that development studies and clinical applications of smart content related to voice therapy will be actively conducted.

Analyzing the Performance of a Temperature and Humidity Measuring System of a Smart Greenhouse for Strawberry Cultivation (딸기재배 스마트 온실용 온습도 계측시스템의 성능평가)

  • Jeong, Young Kyun;Lee, Jong Goo;Ahn, Enu Ki;Seo, Jae Seok;Kim, Hyeon Tae;Yoon, Yong Cheol
    • Journal of Bio-Environment Control
    • /
    • v.28 no.2
    • /
    • pp.117-125
    • /
    • 2019
  • This study compared the temperature and humidity measured by an aspirated radiation shield (ARS), the accuracy of which has been recently verified, with those measured by a system developed by the parent company (Company A), in order to investigate and improve the performance of the developed system. The results are as follows. Overall, the two-plate system had a lower radiation shielding effect than the one-plate system but showed better performance when the effect of strawberry vegetation on the systems was excluded. The overall maximum temperature ranges measured by Company A's system and the ARS were 20.5~53.3°C and 17.8~44.1°C, respectively; thus, the maximum temperature measured by Company A's system was 2.7~9.2°C higher, and the maximum daily temperature difference was approximately 12.2°C. The overall average temperatures measured by Company A's system and the ARS were 12.4~38.6°C and 11.8~32.7°C, respectively; thus, the average temperature measured by Company A's system was 0.6~5.9°C higher, and the maximum daily temperature difference was approximately 6.7°C. The overall minimum temperature ranges measured by Company A's system and the ARS were 4.2~28.6°C and 2.9~26.4°C, respectively; thus, the minimum temperature measured by Company A's system was 1.3~2.2°C higher, and the minimum daily temperature difference was approximately 2.9°C. In addition, the overall relative humidity ranges measured by Company A's system and the ARS were 52.9~93.3% and 55.3~96.5%, respectively; thus, Company A's system showed a 2.4~3.2% lower relative humidity range than the ARS, although on one day the relative humidity measured by Company A's system was at most 18.0% lower than that measured by the ARS. In conclusion, as with temperature, there were differences in the relative humidity measured by the two systems, although the differences were insignificant.
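As a sketch of the kind of paired comparison summarized above, a daily maximum difference between two measuring systems can be computed from time-aligned readings; the values below are hypothetical and not the study's data:

```python
def daily_max_difference(system_a: list[float], reference: list[float]) -> float:
    """Largest absolute difference between paired readings taken on the same day."""
    return max(abs(a - r) for a, r in zip(system_a, reference))

# Hypothetical hourly temperatures (°C): Company A's sensor vs. the ARS reference.
company_a = [21.0, 27.5, 35.2, 31.0]
ars       = [20.4, 26.1, 30.0, 29.2]
print(daily_max_difference(company_a, ars))  # -> about 5.2
```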

The Effects of Game User's Social Capital and Information Privacy Concern on SNGReuse Intention and Recommendation Intention Through Flow (게임 이용자의 사회자본과 개인정보제공에 대한 우려가 플로우를 통해 SNG 재이용의도와 추천의도에 미치는 영향)

  • Lee, Ji-Hyeon;Kim, Han-Ku
    • Management & Information Systems Review
    • /
    • v.37 no.4
    • /
    • pp.21-39
    • /
    • 2018
  • Today, mobile instant messaging (MIM) has become a communication means commonly used by many people as smartphone technology has advanced. Among these services, KakaoGame continuously generates substantial profits by using the representative Kakao platform. However, even though the number of KakaoGame users is increasing and their characteristics are becoming more diverse, there is little research on the relationship between the characteristics of social network game (SNG) users and continuous use of the game. Since the social capital that SNG users form with acquaintances creates a sense of belonging, its role is being emphasized in the social network environment. In addition, game users' concerns about information privacy may decrease their trust in a game app and can also make them feel threatened by the game system. Therefore, this study was designed to examine the structural relationships among SNG users' social capital, information privacy concerns, flow, SNG reuse intention, and recommendation intention. The results are as follows. First, participants' bridging social capital had a positive effect on SNG flow, whereas bonding social capital had a negative effect. In addition, awareness of information privacy concern had a negative effect on SNG flow, whereas control of information privacy concern had a positive effect. Lastly, SNG flow had a positive effect on both reuse intention and recommendation intention, and reuse intention had a positive effect on recommendation intention. Based on these results, academic and practical implications can be drawn. First, this study focused on KakaoTalk, which has both the closed and open characteristics of an SNS, and found that SNG users' social capital may influence their behavior through flow experiences in the SNG. Second, this study extends the scope of prior research by empirically analysing the relationship between SNG users' information privacy concerns and SNG flow. Finally, the results can provide SNG companies with practical guidelines for developing effective marketing strategies.

A comparative study of mosquito population density according to the Sejong City areas and old city and new city (세종특별자치시 전역과 구도심 및 신도심에 따른 모기 밀도 비교 연구)

  • Na, Sumi;Doh, Jiseon;Yang, Young Cheol;Ryu, Sungmin;Yi, Hoonbok
    • Korean Journal of Environmental Biology
    • /
    • v.39 no.3
    • /
    • pp.362-373
    • /
    • 2021
  • This study was conducted to establish mosquito distribution density and habitat in Sejong City for the prevention of mosquito-borne infectious diseases. The overall distribution of mosquitoes in Sejong City was investigated, and the population density of mosquitoes in the old city and the new city was analyzed. Mosquito populations were determined using MOSHOLE and Blacklight traps once a week overnight. We also compared the mosquito population density of the old city and the new city, and the daily mosquito population was calculated using data from the smart mosquito trap (DMS). Of all the study sites, Geumnam-myeon had the highest number of mosquitoes captured, and the dominant species were Armigeres subalbatus and Culex pipiens pallens. Mosquito species with the potential for transmitting diseases were mainly found in Yeonseo-myeon (106 individuals) and Geumnam-myeon (101 individuals). Mosquito collection rates by the MOSHOLE trap and the Blacklight trap were 58.49% and 41.51%, respectively, and we concluded that using CO2 would be the most suitable approach for collecting mosquitoes. The mosquito population density in the old city (92.05±7.04) was approximately twice that of the new city (51.50±4.05). Since Sejong City is divided into an old city and a new city, noticeable effects are difficult to achieve with a standardized approach; for effective control, differentiated quarantine measures must be established. The results of this study provide a basis for Sejong City's integrated mosquito control guidelines, and we believe that effective control based on them will help limit the spread of mosquito-borne diseases and reduce damage from mosquitoes.