• Title/Summary/Keyword: Computer Application


Study on the Attitudes toward Artificial Intelligence and Digital Literacy of Dental Hygiene Students

  • Seon-Ju Sim;Ji-Hye Kim;Min-Hee Hong;Su-Min Hong;Myung-Jin Lee
    • Journal of dental hygiene science
    • /
    • v.24 no.3
    • /
    • pp.171-180
    • /
    • 2024
  • Background: The Fourth Industrial Revolution highlights the importance of artificial intelligence (AI) and digital literacy in dental hygiene education. However, research on students' attitudes toward AI and their digital literacy levels is limited. Therefore, this study investigated dental hygiene students' attitudes toward AI and their digital literacy levels. Methods: In total, 167 dental hygiene students at Baekseok University participated in the study and provided informed consent. The survey tool included general characteristics, smartphone usage patterns, attitudes toward AI, and digital literacy levels. Attitudes toward AI and digital literacy based on general characteristics and smart device usage were analyzed using t-tests and one-way ANOVA. Correlations among attitudes toward AI, digital literacy awareness, and digital literacy behaviors were analyzed using Pearson's correlation analysis. The impact of AI attitudes and digital literacy awareness on digital literacy behavior was examined using linear regression analysis. Results: Students with higher interest in their major had more positive attitudes toward AI, and those with higher smart device usage showed more positive attitudes toward AI and higher digital literacy (p<0.05). Simple frequency or duration of smartphone use did not affect digital literacy, but students who perceived their smart device usage positively and believed that they used smart devices effectively in their studies exhibited higher levels of digital literacy (p<0.05). A positive attitude toward AI was associated with higher levels of digital literacy (p<0.05). Digital literacy awareness and attitudes toward AI influenced digital literacy behavior (p<0.05). Conclusion: These results suggest that the skilled utilization and application of digital devices in dental hygiene education are important. The curriculum should be improved so that digital technology can be utilized effectively, and various educational programs should be introduced to enhance digital literacy.
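The statistical methods named in this abstract (Pearson's correlation and linear regression of literacy behavior on AI attitude) can be sketched as below. The data values are hypothetical illustrations, not the study's data, and the pure-Python implementations are a minimal sketch of the standard formulas.

```python
# Pearson correlation between attitude-toward-AI scores and digital-literacy
# scores, plus a simple least-squares regression, as used in the study design.

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ols(x, y):
    """Slope and intercept of y = slope*x + intercept by least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) \
            / sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

# Hypothetical Likert-scale means for six students (not the study's data).
ai_attitude = [3.1, 3.8, 2.9, 4.2, 3.5, 4.0]
literacy    = [3.0, 3.9, 3.1, 4.4, 3.4, 4.1]

r = pearson_r(ai_attitude, literacy)
slope, intercept = ols(ai_attitude, literacy)
print(f"r = {r:.3f}, slope = {slope:.3f}")
```

A positive r and slope here would correspond to the paper's finding that more positive AI attitudes go with higher digital literacy.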

Establishment of Valve Replacement Registry and Risk Factor Analysis Based on Database Application Program (데이터베이스 프로그램에 기반한 심장판막 치환수술 환자의 레지스트리 확립 및 위험인자 분석)

  • Kim, Kyung-Hwan;Lee, Jae-Ik;Lim, Cheong;Ahn, Hyuk
    • Journal of Chest Surgery
    • /
    • v.35 no.3
    • /
    • pp.209-216
    • /
    • 2002
  • Background: Valvular heart disease is still the most common health problem in Korea. By the end of 1999, there had been 94,586 cases of open heart surgery since the first case in 1958. Among them, 36,247 cases were acquired heart diseases, and 20,704 of those involved valvular heart disease. But there was no database system, and every surgeon and physician had great difficulty in analysing and utilizing those tremendous medical resources. Therefore, we developed a valve registry database program and utilized it for risk factor analysis. Material and Method: A personal computer-based multiuser database program was created using Microsoft Access™. It consisted of a relational database structure with fine-tuned compact field variables and a server-client architecture. A simple graphical user interface provided easy accessibility and comprehensibility. The user-oriented modular structure enabled easier modification through native Access™ functions. Unrestricted application of the query function helped users extract, summarize, analyse, and report study results promptly. Result: About three thousand valve replacement procedures were performed in our hospital from 1968 to 1999. The total number of prostheses replaced was 3,700. The numbers of mitral, aortic, and tricuspid valve replacements were 1,600, 584, and 76, respectively. Among them, 700 patients received prostheses in more than two positions. Bioprostheses and mechanical prostheses were used in 1,280 and 1,500 patients, respectively. Redo valve replacements were performed in 460 patients in total, about 40 patients annually. Conclusion: A database program for the registry of valvular heart disease was successfully developed and used in a personal computer-based multiuser environment. It showed promising results and perspectives for database management and utilization.
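A relational registry of the kind described (patients, replacement operations, prosthesis type, redo flag, ad-hoc summary queries) can be sketched with SQLite in place of Microsoft Access. The table and column names below are illustrative assumptions, not the authors' actual schema.

```python
# Minimal sketch of a valve-replacement registry as a relational database.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE patient (
    patient_id INTEGER PRIMARY KEY,
    name       TEXT,
    birth_year INTEGER
);
CREATE TABLE valve_replacement (
    op_id           INTEGER PRIMARY KEY,
    patient_id      INTEGER REFERENCES patient(patient_id),
    op_year         INTEGER,
    position        TEXT,          -- 'mitral', 'aortic', 'tricuspid'
    prosthesis_type TEXT,          -- 'mechanical' or 'bioprosthesis'
    redo            INTEGER DEFAULT 0   -- 1 if reoperation
);
""")
con.execute("INSERT INTO patient VALUES (1, 'example', 1950)")
con.execute("INSERT INTO valve_replacement VALUES (1, 1, 1995, 'mitral', 'mechanical', 0)")
con.execute("INSERT INTO valve_replacement VALUES (2, 1, 1999, 'mitral', 'mechanical', 1)")

# The abstract's 'application of the query function' corresponds to ad-hoc
# SQL summaries, e.g. the number of replacements per valve position:
for row in con.execute(
        "SELECT position, COUNT(*) FROM valve_replacement GROUP BY position"):
    print(row)
```

Risk-factor analyses then reduce to further GROUP BY or filtered queries over the same tables.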

Development of Industrial Embedded System Platform (산업용 임베디드 시스템 플랫폼 개발)

  • Kim, Dae-Nam;Kim, Kyo-Sun
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.47 no.5
    • /
    • pp.50-60
    • /
    • 2010
  • For the last half century, the personal computer and software industries have prospered due to the incessant evolution of computer systems. In the 21st century, the embedded system market has grown greatly as the market shifted to the mobile gadget field. While many multimedia gadgets such as mobile phones, navigation systems, and PMPs are pouring into the market, most industrial control systems still rely on 8-bit micro-controllers and simple application software techniques. Unfortunately, the technological barrier, which requires additional investment and higher-quality manpower to overcome, and the business risks, which come from the uncertainty of market growth and the competitiveness of the resulting products, have prevented companies in the industry from taking advantage of such advanced technologies. However, high-performance, low-power, and low-cost hardware and software platforms will enable their high-technology products to be developed and recognized by potential clients in the future. This paper presents such a platform for industrial embedded systems. The platform was designed around the Telechips TCC8300 multimedia processor, which embeds a variety of parallel hardware for implementing multimedia functions. Open-source Embedded Linux, TinyX, and GTK+ were used to implement the GUI and minimize technology costs. To estimate the expected performance and power consumption, the performance improvement and the power consumption due to each of the enabled hardware sub-systems, including the YUV2RGB frame converter, were measured. An analytic model was devised to check the feasibility of a new application and to trade off its performance and power consumption. The validity of the model was confirmed by implementing a real target system. The cost can be further mitigated by using hardware parts that are already used in mass-production products, mostly in the cell-phone market.
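The kind of analytic feasibility model the paper describes, estimating run time and energy from per-sub-system measurements, might look like the sketch below. The structure of the model and all coefficients are hypothetical placeholders, not the paper's measured TCC8300 numbers.

```python
# Hedged sketch of an analytic performance/power trade-off model: each enabled
# hardware sub-system contributes a per-frame processing time and an active
# power draw; totals are accumulated over a workload.

def estimate(workload_frames, subsystems):
    """Return (total_time_s, energy_J) for a workload.

    subsystems maps name -> (seconds_per_frame, watts_while_active).
    Sub-systems are assumed to run sequentially within a frame.
    """
    time_per_frame = sum(t for t, _ in subsystems.values())
    total_time = workload_frames * time_per_frame
    # Each unit draws its power only for its own share of the frame time.
    energy = workload_frames * sum(t * w for t, w in subsystems.values())
    return total_time, energy

# Hypothetical comparison: hardware YUV2RGB conversion vs. software conversion.
hw = {"decode": (0.010, 0.40), "yuv2rgb_hw": (0.002, 0.15)}
sw = {"decode": (0.010, 0.40), "yuv2rgb_sw": (0.012, 0.30)}
print(estimate(1000, hw))
print(estimate(1000, sw))
```

With such a model, a new application's frame budget and battery budget can be checked before committing to an implementation, which is the trade-off the paper validates against a real target system.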

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware. We pursued two distinct approaches. The first is to use application-specific integrated circuit (ASIC) technology: the fuzzy inference method is implemented directly in silicon. The second approach, still in its preliminary stage, is to use a more conventional microprocessor architecture. Here, we use a quantitative technique employed by designers of reduced instruction set computers (RISC) to modify a microprocessor architecture. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly in silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested; both used a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation and executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock and achieved approximately 80,000 fuzzy logical inferences per second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock. The chip has a 3-stage pipeline and initiates the computation of a new inference every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM); 
on-chip fuzzification by a table-lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D THEN Do E and Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the following simpler format using the same datapath: IF A and B THEN Do E. With this format, the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The fuzzy logic system board places the fuzzy chip into a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach. The quantitative approach was developed by RISC designers. In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a rule-based fuzzy inference program from the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. 
We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we could achieve as much as a 2.5x increase in inference speed if the R3000 had min and max instructions. These instructions are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single dedicated program, so extending an embedded processor in this way for fuzzy control is very effective. Table I shows the measured inference speed of a MIPS R3000 microprocessor, a fictitious MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip. The software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6,000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference. The last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, the two approaches represent two extremes: the ASIC approach is extremely expensive. It is therefore an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

TABLE I. Inference time with 51 rules:
                     MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
  6,000 inferences   125 s                  49 s                        0.0038 s
  1 inference        20.8 ms                8.2 ms                      6.4 us
  FLIPS              48                     122                         156,250
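The max-min compositional inference with Mamdani implication and centroid defuzzification that the chips implement can be sketched in a few lines. The triangular membership functions and rule values below are illustrative, not taken from the chips' rule memories.

```python
# Max-min compositional fuzzy inference: each rule's firing strength is the
# min (fuzzy AND) of its antecedent memberships; the consequent is clipped by
# that strength (Mamdani implication); rule outputs are combined by max
# (fuzzy OR); the result is defuzzified by the centroid method.

def infer(rules, inputs, universe):
    """rules: list of (tuple_of_antecedent_membership_fns, consequent_fn)."""
    out = [0.0] * len(universe)
    for antecedents, consequent in rules:
        strength = min(mu(x) for mu, x in zip(antecedents, inputs))  # min = AND
        for i, u in enumerate(universe):
            out[i] = max(out[i], min(strength, consequent(u)))       # max = OR
    return out

def centroid(universe, membership):
    s = sum(membership)
    return sum(u * m for u, m in zip(universe, membership)) / s if s else 0.0

# Triangular membership function with feet at a, c and peak at b.
tri = lambda a, b, c: lambda x: max(0.0, min((x - a) / (b - a),
                                             (c - x) / (c - b)))

universe = [i / 10 for i in range(11)]          # discretized output universe
rules = [((tri(0.0, 0.3, 0.6), tri(0.2, 0.5, 0.8)), tri(0.5, 0.8, 1.1)),
         ((tri(0.4, 0.7, 1.0), tri(0.0, 0.2, 0.5)), tri(0.0, 0.2, 0.5))]

result = centroid(universe, infer(rules, (0.35, 0.45), universe))
print(round(result, 3))
```

The inner min/max calls are exactly the operations whose promotion to dedicated instructions yielded the ~2.5x speed-up estimate on the R3000.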


Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.1-25
    • /
    • 2020
  • In this paper, we suggest an application system architecture which provides an accurate, fast, and efficient automatic gasometer reading function. The system captures a gasometer image using a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount by selective optical character recognition based on deep learning technology. In general, there are many types of characters in an image, and optical character recognition technology extracts all character information in the image. But some applications need to ignore characters that are not of interest and focus only on specific types of characters. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount from gasometer images to send bills to users. Character strings that are not of interest, such as device type, manufacturer, manufacturing date, and specification, are not valuable information for the application. Thus, the application has to analyze the region of interest and specific types of characters to extract only the valuable information. We adopted CNN (convolutional neural network) based object detection and CRNN (convolutional recurrent neural network) technology for selective optical character recognition, which analyzes only the region of interest for selective character information extraction. We built three neural networks for the application system. 
The first is a convolutional neural network which detects the regions of interest containing the gas usage amount and device ID character strings; the second is another convolutional neural network which transforms the spatial information of a region of interest into spatial sequential feature vectors; and the third is a bi-directional long short-term memory network which converts the spatial sequential information into character strings, using time-series analysis to map feature vectors to characters. In this research, the character strings of interest are the device ID and the gas usage amount. The device ID consists of 12 Arabic numerals and the gas usage amount consists of 4~5 Arabic numerals. All system components are implemented in the Amazon Web Services cloud with Intel Xeon E5-2686 v4 CPUs and an NVIDIA Tesla V100 GPU. The system architecture adopts a master-slave processing structure for efficient and fast parallel processing, coping with about 700,000 requests per day. The mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes reading requests from mobile devices into an input queue with a FIFO (first in, first out) structure. The slave process consists of the three types of deep neural networks which conduct the character recognition process, and runs on the NVIDIA GPU module. The slave process continually polls the input queue for recognition requests. If there are requests from the master process in the input queue, the slave process converts the image in the input queue into a device ID character string, a gas usage amount character string, and the position information of the strings, returns the information to the output queue, and switches to idle mode to poll the input queue. The master process gets the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three types of deep neural networks. 
22,985 images were used for training and validation, and 4,135 images were used for testing. For each training epoch, we randomly split the 22,985 images with an 8:2 ratio into training and validation sets. The 4,135 test images were categorized into 5 types (normal, noise, reflex, scale, and slant). Normal data are clean images; noise means images with a noise signal; reflex means images with light reflection in the gasometer region; scale means images with a small object size due to long-distance capturing; and slant means images which are not horizontally flat. The final character string recognition accuracies for the device ID and gas usage amount on normal data are 0.960 and 0.864, respectively.
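The master-slave FIFO queue structure described above can be sketched with Python's standard queues and a worker thread. The recognizer is a stub standing in for the CNN detection and CRNN decoding stages; all names and the returned values are illustrative.

```python
# Simplified master-slave sketch: the master pushes recognition requests into
# a FIFO input queue; a slave worker takes requests, runs a (stubbed)
# recognizer, and posts results to an output queue for the master to collect.
import queue
import threading

input_q, output_q = queue.Queue(), queue.Queue()

def recognize(image):
    # Placeholder for region-of-interest detection + CRNN string decoding.
    return {"device_id": "000000000000", "usage": "1234"}

def slave():
    while True:
        req = input_q.get()          # blocking get stands in for polling
        if req is None:              # shutdown sentinel
            break
        output_q.put((req["req_id"], recognize(req["image"])))

t = threading.Thread(target=slave)
t.start()
input_q.put({"req_id": 1, "image": b"...jpeg bytes..."})   # master side
input_q.put(None)
t.join()

req_id, result = output_q.get()      # master collects the final information
print(req_id, result)
```

In the real system the slave side runs the GPU models and multiple slaves can drain the same queue, which is how the architecture absorbs ~700,000 requests per day.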

Development Plan of Guard Service According to the LBS Introduction (경호경비 발전전략에 따른 위치기반서비스(LBS) 도입)

  • Kim, Chang-Ho;Chang, Ye-Chin
    • Korean Security Journal
    • /
    • no.13
    • /
    • pp.145-168
    • /
    • 2007
  • As society becomes information-oriented, the guard service must change accordingly. Communication and hardware technology are developing rapidly, and as the internet environment shifts from cable to wireless, people can access all kinds of information services through portable wireless devices such as laptop computers, PDAs, and mobile phones. The LBS field, which delivers the information and services a user needs at any time and place on various kinds of devices, is expanding its territory all the more with the appearance of the ubiquitous-computing concept. LBS use a chip in the mobile phone to confirm the position of a subscriber at any time, with an accuracy ranging from tens of centimeters to hundreds of meters. LBS can be divided by service method into those using mobile communication base stations and those using satellites. Each service type can also be divided into location tracking services, public safety services, location-based information services, and so on, and these are the areas to be planned alongside the development of the guard service. The market scale is projected to reach 8.460 hundred million in 2005 and 16.561 hundred million in 2007. Given this situation, the guard service can be expected to change rapidly as LBS are applied. The study basically adopts a documentary review, relying mainly on secondary analysis of documents: academic journals and monographs published at home and abroad, internet searches, various research reports, statute books, theses published by the public order research institutes of regional police headquarters, police operational data, statute-related materials, and documents and statistical data from private guard companies. 
The purpose of the study is therefore to explore directions for applying LBS and to present problems and improvements by indirectly analyzing the perspectives of managers operating LBS-based guard services; of the government, which must promote LBS; and of the systematic, operational, manpower management, and education and training aspects of guard training courses, which must study and teach the new guard service as it is applied, with the ultimate aim of providing high-quality guard services.


Development of Menu Labeling System (MLS) Using Nutri-API (Nutrition Analysis Application Programming Interface) (영양분석 API를 이용한 메뉴 라벨링 시스템 (MLS) 개발)

  • Hong, Soon-Myung;Cho, Jee-Ye;Park, Yu-Jeong;Kim, Min-Chan;Park, Hye-Kyung;Lee, Eun-Ju;Kim, Jong-Wook;Kwon, Kwang-Il;Kim, Jee-Young
    • Journal of Nutrition and Health
    • /
    • v.43 no.2
    • /
    • pp.197-206
    • /
    • 2010
  • Nowadays, people eat outside the home more and more frequently. Menu labeling can help people make more informed decisions about the foods they eat and help them maintain a healthy diet. This study was conducted to develop a menu labeling system using Nutri-API (Nutrition Analysis Application Programming Interface). The system offers a convenient user interface and menu labeling information in a printable format. It provides useful functions such as registering nutrient information for new foods and menus, a food semantic retrieval service, menu planning with subgroups, nutrient analysis, and print formatting. The system presents nutritive values with nutrient information and the ratio of the three major energy nutrients. MLS can analyze nutrients for a whole menu and for each subgroup, and can display nutrient comparisons with DRIs and % Daily Nutrient Values. It also provides six different menu labeling formats with nutrient information. Therefore, it can be used not only by the general public but also by dietitians, by restaurant managers in charge of menu planning, and by experts in the field of food and nutrition. The Menu Labeling System (MLS) is expected to be useful for menu planning, nutrition education, nutrition counseling, and expert meal management.
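The nutrient arithmetic behind such a labeling system, energy ratios of the three major nutrients (4/4/9 kcal per gram) and percent of a daily reference value, can be sketched as follows. The menu item values and the 55 g protein reference are illustrative assumptions, not the DRIs the authors' system uses.

```python
# Energy-contribution ratios of the three macronutrients and a percent-of-
# daily-value calculation, as a menu-labeling system would compute them.

KCAL_PER_G = {"carbohydrate": 4, "protein": 4, "fat": 9}

def energy_ratio(grams):
    """Total kcal and each macronutrient's percent share of energy."""
    kcal = {n: grams[n] * KCAL_PER_G[n] for n in KCAL_PER_G}
    total = sum(kcal.values())
    return total, {n: round(100 * k / total, 1) for n, k in kcal.items()}

def percent_dv(amount, daily_value):
    """Percent of a daily reference value supplied by this amount."""
    return round(100 * amount / daily_value, 1)

menu_item = {"carbohydrate": 50.0, "protein": 20.0, "fat": 10.0}  # grams
total_kcal, ratios = energy_ratio(menu_item)
print(total_kcal, ratios)
print(percent_dv(menu_item["protein"], 55.0))  # vs. an assumed 55 g reference
```

The same per-item computation, summed over subgroups, gives the menu- and subgroup-level analyses the abstract describes.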

A Study on the Application Direction of Finite Element Analysis in the Field of Packaging through Research Trend Analysis in Korea (국내 연구 동향 분석을 통한 포장분야에서 유한요소해석의 적용 방향에 관한 고찰)

  • Lee, Hakrae;Jeon, Kyubae;Ko, Euisuk;Shim, Woncheol;Kang, Wookgun;Kim, Jaineung
    • KOREAN JOURNAL OF PACKAGING SCIENCE & TECHNOLOGY
    • /
    • v.23 no.3
    • /
    • pp.191-200
    • /
    • 2017
  • Proper packaging design can meet both the environmental and economic aspects of packaging by reducing the use of packaging materials, waste generation, material costs, and logistics costs. Finite element analysis (FEM) is used as a useful tool in various fields such as structural analysis, heat transfer, fluid motion, and electromagnetic fields, but its application in the field of packaging is still insufficient. Applying FEM to packaging can save cost and time in future research, because packages can be designed by computer simulation, and proper packaging design can reduce packaging waste and logistics costs. Therefore, this study surveyed FEM papers published in Korea with the aim of assisting future research design using FEM programs in the packaging field. We analyzed 29 papers directly relevant to this purpose among the FEM papers published in domestic journals from 1991 to 2017. For each paper we analyzed the research topic, the FEM program, and the analysis method used, and presented directions for application in the packaging field. When FEM is applied to packaging, stress and vibration analysis of the packaging material makes it possible to change structures and reduce thickness, thereby reducing cost by improving mechanical strength while reducing the amount of packaging material. Therefore, if future packaging research incorporates FEM, economical and rational packaging design will be possible.

Preliminary Study on the MR Temperature Mapping using Center Array-Sequencing Phase Unwrapping Algorithm (Center Array-Sequencing 위상펼침 기법의 MR 온도영상 적용에 관한 기초연구)

  • Tan, Kee Chin;Kim, Tae-Hyung;Chun, Song-I;Han, Yong-Hee;Choi, Ki-Seung;Lee, Kwang-Sig;Jun, Jae-Ryang;Eun, Choong-Ki;Mun, Chi-Woong
    • Investigative Magnetic Resonance Imaging
    • /
    • v.12 no.2
    • /
    • pp.131-141
    • /
    • 2008
  • Purpose : To investigate the feasibility and accuracy of proton resonance frequency (PRF) shift-based magnetic resonance (MR) temperature mapping utilizing a self-developed center array-sequencing phase unwrapping (PU) method for non-invasive temperature monitoring. Materials and Methods : A computer simulation of the PU algorithm was performed for performance evaluation before applying it to MR thermometry. The MR experiments were conducted in two stages, a PU experiment and a temperature mapping experiment based on the PU technique, with all image post-processing implemented in MATLAB. A 1.5 T MR scanner employing a knee coil with a T2* GRE (gradient recalled echo) pulse sequence was used throughout the experiments. Various subjects such as a water phantom, an orange, and an agarose gel phantom were used to assess the self-developed PU algorithm. The MR temperature mapping experiment was initially attempted on the agarose gel phantom only, with a custom-made thermoregulating water pump applied as the heating source. Heat was delivered to the phantom via hot water circulation while temperature variation was observed with a T-type thermocouple. The PU program was applied to the reconstructed wrapped phase images prior to mapping the temperature distribution of the subjects. As the temperature change is directly proportional to the phase difference map, the absolute temperature could be estimated by adding the computed temperature difference to the measured ambient temperature of the subjects. Results : The PU technique successfully recovered and removed the phase wrapping artifacts in MR phase images of the various subjects, producing a smooth and continuous phase map and thus a more reliable temperature map. Conclusion : This work presented a rapid and robust self-developed center array-sequencing PU algorithm feasible for MR temperature mapping based on the PRF phase shift property.
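The PRF-shift temperature calculation that follows phase unwrapping is, in its standard form, ΔT = Δφ / (2π · γ · α · B0 · TE), with α ≈ -0.01 ppm/°C for water protons. The sketch below applies this standard relation; the TE and phase values are illustrative, and this is the generic PRF formula rather than the authors' exact implementation.

```python
# Temperature change from an unwrapped phase difference via the PRF shift.
import math

GAMMA_HZ_PER_T = 42.58e6    # proton gyromagnetic ratio / 2*pi, Hz/T
ALPHA_PPM_PER_C = -0.01     # PRF thermal coefficient of water, ppm/degC

def delta_temp(delta_phase_rad, b0_tesla, te_s):
    """Temperature change (degC) for a given unwrapped phase change (rad)."""
    return delta_phase_rad / (2 * math.pi * GAMMA_HZ_PER_T
                              * ALPHA_PPM_PER_C * 1e-6 * b0_tesla * te_s)

# Example: 1.5 T scanner, TE = 20 ms, unwrapped phase change of -0.5 rad.
dT = delta_temp(-0.5, 1.5, 0.020)
print(round(dT, 2), "deg C")
```

Adding such a ΔT map to the measured ambient temperature gives the absolute temperature estimate described in the abstract; note that with negative α, heating produces a negative phase change.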


Comparative Analysis of Nitrogen Concentration of Rainfall in South Korea for Nonpoint Source Pollution Model Application (비점오염모델 적용을 위한 우리나라 행정구역별 강수 중 질소농도 비교분석)

  • Choi, Dong Ho;Kim, Min-Kyeong;Hur, Seung-Oh;Hong, Sung-Chang;Choi, Soon-Kun
    • Korean Journal of Environmental Agriculture
    • /
    • v.37 no.3
    • /
    • pp.189-196
    • /
    • 2018
  • BACKGROUND: Water quality management of rivers requires quantification of pollutant loads and implementation of measures through monitoring studies, but this requires labour and costs. Therefore, many researchers perform nonpoint source pollution analysis using computer models. However, calibration of model parameters needs observed data. The nitrogen concentration in rainfall is one of the factors to be considered when estimating pollutant loads with a nonpoint source pollution model, but the default value provided by the model is used when there are no observed data. Therefore, this study aims to provide representative nitrogen concentrations in rainfall for each administrative district, ensuring rational modeling and reliable results. METHODS AND RESULTS: In this study, rainfall monitoring data from June 2015 to December 2017 were used to determine the nitrogen concentration in rainfall for each administrative district. The ranges of the NO₃⁻ and NH₄⁺ concentrations were 0.41~6.05 mg/L and 0.39~2.27 mg/L, respectively, and the T-N concentration ranged from 0.80 to 7.71 mg/L. Furthermore, the national average T-N concentration in this study was 2.84 ± 1.42 mg/L, similar to the national average T-N of 3.03 mg/L presented by the Ministry of Environment in 2015. Therefore, the nitrogen concentrations suggested in this study can be considered reasonable values. CONCLUSION: The nitrogen concentrations estimated in this study showed regional differences. Therefore, when estimating pollutant loads with a nonpoint source pollution model, reasonable estimation of the nitrogen concentration in rainfall is possible by reflecting regional characteristics.
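The representative value reported above (a national mean ± standard deviation of district T-N concentrations) is a simple summary statistic; a minimal sketch is below. The sample values are hypothetical and only span the reported 0.80~7.71 mg/L range, they are not the study's monitoring data.

```python
# Mean +/- standard deviation of per-district T-N concentrations, the form
# of the representative value the study reports.
import statistics

tn_mg_per_L = [0.80, 1.95, 2.40, 2.84, 3.10, 4.25, 7.71]  # illustrative

mean = statistics.mean(tn_mg_per_L)
sd = statistics.stdev(tn_mg_per_L)   # sample standard deviation
print(f"T-N = {mean:.2f} +/- {sd:.2f} mg/L")
```

A model user would substitute the district value matching the watershed being simulated rather than the national mean, which is the paper's central recommendation.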