• Title/Summary/Keyword: Language study in Korea

Search Results: 3,251

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently, AlphaGo, the Go-playing artificial intelligence program from Google DeepMind, won a landmark victory against Lee Sedol. Many people had thought machines would never beat a human at Go because, unlike chess, the number of possible moves at each turn exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning, the core artificial intelligence technique used in the AlphaGo algorithm, has drawn wide interest. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether existing deep learning techniques can be used not only for recognizing high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data is the telemarketing response data of a bank in Portugal. It has input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer opened an account. To evaluate the applicability of deep learning algorithms and techniques to binary classification, we compared models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with MLP models, the traditional artificial neural network. Since not all network design alternatives can be tested, given the nature of artificial neural networks, the experiment was conducted under restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of dropout. The F1 score was used to evaluate the models, since it shows how well a model classifies the class of interest rather than overall accuracy. The methods for applying each deep learning technique were as follows. The CNN algorithm reads adjacent values around a specific value and recognizes local features, but in business data the distance between fields usually does not matter because each field is independent. In this experiment, we therefore set the CNN filter size to the number of fields, so that the network learns the characteristics of a whole record at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of the position of each field. For the dropout technique, neurons in each hidden layer were dropped with a probability of 0.5. The experimental results show that the model with the highest F1 score was the CNN model using dropout, and the next best was the MLP model with two hidden layers using dropout. Several findings emerged from the experiment. First, models using dropout make slightly more conservative predictions than those without it, and generally classify better. Second, CNN models show better classification performance than MLP models. This is interesting because CNN performed well on a binary classification problem, to which it has rarely been applied, as well as in the fields where its effectiveness is already proven. Third, the LSTM algorithm seems unsuitable for binary classification problems, because its training time is too long relative to the performance improvement. From these results, we confirm that some deep learning algorithms can be applied to business binary classification problems.
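As a concrete illustration of the setup described above, the following is a minimal sketch, not the authors' code: the field count, layer widths, filter count, and toy data are assumptions, while the record-wide filter size, the 0.5 dropout rate, the sigmoid output, and the F1 evaluation follow the abstract.

```python
import numpy as np
from sklearn.metrics import f1_score
from tensorflow import keras
from tensorflow.keras import layers

n_fields = 16  # hypothetical number of input variables (age, occupation, loan status, ...)

# Each record is treated as a length-n_fields "sequence" with one channel, so a
# Conv1D kernel of size n_fields reads the whole record at once.
inputs = keras.Input(shape=(n_fields, 1))
x = layers.Conv1D(filters=32, kernel_size=n_fields, activation="relu")(inputs)
x = layers.Flatten()(x)
x = layers.Dense(16, activation="relu")(x)  # hidden layer over the extracted features
x = layers.Dropout(0.5)(x)                  # dropout with p = 0.5, as in the experiments
outputs = layers.Dense(1, activation="sigmoid")(x)  # binary target: account opened or not
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")

# Toy stand-in for the Portuguese bank telemarketing data.
X = np.random.rand(256, n_fields, 1).astype("float32")
y = np.random.randint(0, 2, size=256)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

# F1 score on the class of interest, the paper's evaluation metric.
pred = (model.predict(X, verbose=0) > 0.5).astype(int).ravel()
print(f1_score(y, pred))
```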

Multi-day Trip Planning System with Collaborative Recommendation (협업적 추천 기반의 여행 계획 시스템)

  • Aprilia, Priska;Oh, Kyeong-Jin;Hong, Myung-Duk;Ga, Myeong-Hyeon;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.159-185 / 2016
  • Planning a multi-day trip is a complex and time-consuming task. It usually starts with selecting a list of points of interest (POIs) worth visiting and then arranging them into an itinerary, taking into consideration various constraints and preferences. When choosing POIs to visit, one might ask friends for suggestions, search for information on the Web, or seek advice from travel agents; however, those options have their limitations. First, the knowledge of friends is limited to the places they have visited. Second, the tourism information on the internet may be vast, but it may force one to invest a lot of time reading and filtering it. Lastly, travel agents might be biased towards providers of certain travel products when suggesting itineraries. In recent years, many researchers have tried to deal with the huge amount of tourism information available on the internet. They have explored the wisdom of the crowd through the overwhelming number of images shared by people on social media sites. Furthermore, trip planning problems are usually formulated as 'Tourist Trip Design Problems' and solved using various search algorithms with heuristics. Recommendation systems using various techniques have been set up to cope with the overwhelming tourism information available on the internet. Prediction models for recommendation systems are typically built using a large dataset; however, such a dataset is not always available. For other models, especially those that require input from people, human computation has emerged as a powerful and inexpensive approach. This study proposes CYTRIP (Crowdsource Your TRIP), a multi-day trip itinerary planning system that draws on the collective intelligence of contributors in recommending POIs. In order to enable the crowd to collaboratively recommend POIs to users, CYTRIP provides a shared workspace in which the crowd can recommend as many POIs to as many requesters as they can, and can also vote on POIs recommended by other people when they find them interesting. In CYTRIP, anyone can contribute by recommending POIs to requesters based on the requesters' specified preferences. CYTRIP takes the recommended POIs as input to build a multi-day trip itinerary, taking into account the user's preferences, various time constraints, and locations. The input then becomes a multi-day trip planning problem formulated in Planning Domain Definition Language 3 (PDDL3). A sequence of actions formulated in a domain file is used to achieve the goals of the planning problem, namely visiting the recommended POIs. The multi-day trip planning problem is highly constrained. Sometimes it is not feasible to visit all the recommended POIs with the limited resources available, such as the time the user can spend. In order to cope with an unachievable goal that could leave the other goals without a solution, CYTRIP selects a set of feasible POIs prior to the planning process, as sketched below. The planning problem is created for the selected POIs and fed into the planner. The solution returned by the planner is then parsed into a multi-day trip itinerary and displayed to the user on a map. The proposed system is implemented as a web-based application built with PHP on the CodeIgniter Web Framework. In order to evaluate the proposed system, an online experiment was conducted. The results show that, with the help of the contributors, CYTRIP can plan and generate a multi-day trip itinerary that is tailored to users' preferences and bound by their constraints, such as location or time. The contributors also found CYTRIP a useful tool for collecting POIs from the crowd and planning a multi-day trip.
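The pre-selection step can be made concrete with a short sketch. This is hypothetical logic, not CYTRIP's implementation: the POI attributes, the vote-ordered greedy strategy, and the time budget are all assumptions standing in for whatever feasibility test the system actually applies before writing the PDDL3 problem file.

```python
from dataclasses import dataclass

@dataclass
class POI:
    name: str
    votes: int          # crowd votes from the shared workspace
    visit_hours: float  # estimated time needed to visit

def select_feasible(pois: list[POI], budget_hours: float) -> list[POI]:
    """Greedily keep the most-voted POIs that still fit the trip's time budget."""
    selected, used = [], 0.0
    for poi in sorted(pois, key=lambda p: p.votes, reverse=True):
        if used + poi.visit_hours <= budget_hours:
            selected.append(poi)
            used += poi.visit_hours
    return selected

pois = [POI("museum", 12, 3.0), POI("palace", 9, 4.0), POI("market", 7, 2.0)]
print([p.name for p in select_feasible(pois, budget_hours=6.0)])
# Only the surviving POIs become goals in the PDDL3 problem fed to the planner.
```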

The Usage of the Vulgate Bible in European Catholicism: from the Council of Trent until the Second Vatican Council (유럽 천주교의 불가타 성경 사용 양상: 트렌토 공의회 이후부터 2차 바티칸 공의회 이전까지)

  • CHO, Hyeon Beom
    • The Critical Review of Religion and Culture / no.32 / pp.257-287 / 2017
  • It is quite an ambitious endeavor to trace the translation history of the Catholic Vulgate Bible from Latin into Asian languages since the 16th century. I try to bring out the translation procedure from the Latin Bible to the Chinese version, which eventually led to the Korean version. The work has been supported and funded by the National Research Foundation of Korea as a three-year project. As the first step, I examined the European situation of the Vulgate Bible in the Catholic Church, i.e., the ritual use of the Vulgate in the Mass and in religious retreats. To begin with, the liturgical texts were analysed to disclose how the Vulgate Bible was reflected in them; the Lectionary and the Evangeliary were the typical ones. The structure of the Lectionaries for Mass was based on the liturgical year cycle. From this point, the Vulgate Bible was rooted in the religious life of European Catholics after the Council of Trent, which had proclaimed the Vulgate to be the authentic source of the Revelation and therefore to be respected as the only authoritative Bible. How did the Catholic Church use the Vulgate Bible outside the boundary of the liturgy? Meditation guide books for instructing religious retreats were published and circulated among priests, religious persons, and even laymen. These books also included citations, interpretations, and commentaries on the Vulgate Bible. Most devotees in Europe read the biblical phrases through such meditation guide books. There still remain unsolved problems in understanding the actual place of the Vulgate Bible in the European Catholic Church. All the biblical verses were translated into French and included in the meditation guide books published in France. What did the Holy See think of the French translation of the Vulgate Bible? Unfortunately, no Vatican decrees about the European translations of the Vulgate Bible have been found. The relationship between the Vulgate Bible and the meditation guides will be very important for the study of the Chinese translation of the Vulgate. The search for such decrees, and research on the European and non-European translations of the Vulgate Bible, will be a continuing task for me as well as for other researchers on these subjects.

Digital Humanities and Applications of the "Successful Exam Passers List" (과거 합격자 시맨틱 데이터베이스를 활용한 디지털 인문학 연구)

  • LEE, JAE OK
    • (The)Study of the Eastern Classic / no.70 / pp.303-345 / 2018
  • This article discusses how the Bangmok (榜目) documents, which are essentially lists of successful passers of the civil service examination system of the Chosŏn dynasty, can, when rendered into digital formats, serve as a source of information that not only lets us know Chosŏn individuals' social backgrounds and bloodlines but also enables us to understand the intricate nature of the Yangban network. In digital humanities studies, the Bangmok materials, literally lists of the leading elites of the Chosŏn period, constitute a very interesting and important source of information. Based on these materials, we can see what the society, as well as the Yangban community, was like. Currently, all data in these Bangmok lists are rendered in XML (eXtensible Markup Language) format and served through a DBMS (Database Management System), so anyone who wants to examine the statistics can freely do so. Also, by connecting the data in the Bangmok materials with data from genealogy records, we can identify an individual's marital relationships, home town, and political affiliation, and therefore create a complex narrative describing that individual's life. The result is a graph database which, when Bangmok data is entered, shows successful passers as individual nodes and displays blood and marital relations in a very visible way. Clicking on a node provides access to all kinds of relationships formed among more than 90,000 successful passers, and even to the overall marital network once the genealogical data is input. In Korea, from 2005 to the present, the task of digitizing data from the Civil exam Bangmok (Mun-gwa Bangmok), Military exam Bangmok (Mu-gwa Bangmok), "Sa-ma" Bangmok, and "Jab-gwa" Bangmok materials has been completed. They can be accessed through a website (http://people.aks.ac.kr/index.aks) which has information on numerous famous Korean individuals of the past. With this kind of source, we are now able to extract professional Jung-in figures from these lists. However, meaningful and practical studies using this data are yet to be announced. This article reminds us that this information should be used as a window through which we can see not only the lives of individuals but also the society.
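The node-and-edge structure described above can be sketched in a few lines. The schema below is hypothetical, with invented names and attributes rather than the AKS database's actual format; it only illustrates how passers become nodes and how blood and marital relations become typed edges.

```python
import networkx as nx

G = nx.Graph()
# Nodes: successful exam passers with illustrative attributes.
G.add_node("Kim Mungwa", exam="Mun-gwa", year=1592, clan="Andong Kim")
G.add_node("Yi Mugwa", exam="Mu-gwa", year=1594, clan="Jeonju Yi")
G.add_node("Kim Sama", exam="Sa-ma", year=1590, clan="Andong Kim")

# Edges: kinship links recovered from genealogy records.
G.add_edge("Kim Mungwa", "Kim Sama", relation="blood")
G.add_edge("Kim Mungwa", "Yi Mugwa", relation="marriage")

# Querying the network: everyone connected to a given passer, with relation type.
for neighbor in G.neighbors("Kim Mungwa"):
    print(neighbor, G.edges["Kim Mungwa", neighbor]["relation"])
```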

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.141-154 / 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representation in a variety of applications. Sentiment analysis is an important technology that can distinguish low-quality from high-quality content through the text data of products, and it has proliferated within text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. It has been studied in various directions in terms of accuracy, from simple rule-based approaches to dictionary-based approaches using predefined labels. In fact, sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Real online reviews are openly available and easy to collect, and they affect business. In marketing, real-world information from customers is gathered from websites rather than surveys: depending on whether a website's posts are positive or negative, the customer response is reflected in sales. However, many reviews on a website are not always good and are difficult to identify. Earlier studies in this area used review data from the Amazon.com shopping mall, while recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, and so on. However, accuracy is still lacking, because sentiment calculations change according to the subject, paragraph, direction of the sentiment lexicon, and sentence strength. This study aims to classify sentiment polarity into positive and negative categories and to increase the prediction accuracy of polarity analysis using the IMDB review data set. First, as comparative models for the text classification task, we adopt popular machine learning algorithms such as NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting. Second, deep learning has demonstrated the ability to extract complex, discriminative features from data; representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). CNN can be used similarly to BoW when processing a sentence in vector format, but it does not consider the sequential attributes of the data. RNN handles order well because it takes the time information of the data into account, but it suffers from the long-term dependency problem; LSTM is used to solve that problem. For comparison, CNN and LSTM were chosen as simple deep learning models, and in addition to the classical machine learning algorithms, CNN, LSTM, and the integrated model were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and precision to find the optimal combination, and tried to figure out how well the models work for sentiment analysis and how they work. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features of text. The reasons for combining these two algorithms are as follows. CNN can extract features for classification automatically by applying convolution layers with massively parallel processing, whereas LSTM is not capable of highly parallel processing. Like faucets, the LSTM's input, output, and forget gates can be opened and closed at the desired time, and these gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can compensate for CNN's inability to model long-term dependencies. Furthermore, when LSTM is attached after CNN's pooling layer, the model has an end-to-end structure in which spatial and temporal features can be learned simultaneously. The integrated CNN-LSTM achieved 90.33% accuracy; in speed it was slower than CNN but faster than LSTM, and it was more accurate than the other models. In addition, the word embedding layer can be improved as the kernels are trained step by step. CNN-LSTM can compensate for the weaknesses of each model, with the advantage of layer-by-layer learning in an end-to-end structure. For these reasons, this study tries to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
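A minimal sketch of such an integrated model is shown below, using the IMDB review set bundled with Keras as in the study. The architecture follows the described pattern (convolution, pooling, then an LSTM over the pooled feature maps), but the embedding size, filter count, pooling width, and training settings are illustrative assumptions, not the paper's reported hyperparameters.

```python
from tensorflow import keras
from tensorflow.keras import layers

vocab_size, max_len = 10000, 200
(x_train, y_train), (x_test, y_test) = keras.datasets.imdb.load_data(num_words=vocab_size)
x_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen=max_len)
x_test = keras.preprocessing.sequence.pad_sequences(x_test, maxlen=max_len)

model = keras.Sequential([
    layers.Embedding(vocab_size, 64),        # word embeddings
    layers.Conv1D(64, 5, activation="relu"), # local n-gram features, highly parallel
    layers.MaxPooling1D(4),                  # downsample feature maps before the LSTM
    layers.LSTM(64),                         # gated memory over the pooled features
    layers.Dense(1, activation="sigmoid"),   # positive / negative polarity
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, batch_size=128,
          validation_data=(x_test, y_test))
```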

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.1-17 / 2017
  • A deep learning framework is software designed to help develop deep learning models. Two of its most important functions are automatic differentiation and utilization of GPUs. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, Microsoft Cognitive Toolkit, was released under an open-source license, following Google's Tensorflow a year earlier. The early deep learning frameworks were developed mainly for research at universities. Since the inception of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given the trend, Google and other companies are expected to continue investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in a sense a predecessor of the other two. The most common and important function of deep learning frameworks is the ability to perform automatic differentiation. Basically, all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained, and with them the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First of all, convenience of coding, from most to least convenient, is in the order of CNTK, Tensorflow, and Theano. The criterion is simply the length of the code; the learning curve and ease of coding are not the main concern. By this criterion, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. The reason CNTK and Tensorflow are easier to implement with is that they provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives flexibility. With low-level coding such as in Theano, we can implement and test any new deep learning model or any new search method that we can think of. Our assessment of the execution speed of each framework is that there is no meaningful difference. According to the experiment, the execution speeds of Theano and Tensorflow are very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept the same: the CNTK code had to be run on a PC without a GPU, where code executes as much as 50 times slower than with a GPU. We concluded, however, that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate them. Some of the important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNN, RNN, and DBN. For a user implementing a large-scale deep learning model, support for multiple GPUs or multiple servers will also be important; and for someone learning deep learning, the availability of sufficient examples and references matters as well.
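The automatic differentiation that the comparison centers on can be demonstrated in a few lines. The sketch below uses TensorFlow's eager GradientTape API for brevity, a convenience assumption rather than the 2017-era graph-construction style the paper benchmarked; the chain-rule mechanics over the computational graph are the same.

```python
import tensorflow as tf

x = tf.Variable(2.0)
with tf.GradientTape() as tape:
    y = x * x + 3.0 * x      # a tiny computational graph for y = x^2 + 3x
dy_dx = tape.gradient(y, x)  # chain rule along the graph edges: dy/dx = 2x + 3
print(dy_dx.numpy())         # 7.0 at x = 2
```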

Original expression of the creative children's picture-book (창작그림동화의 독창성 연구)

  • 안경환
    • Archives of design research / v.11 no.1 / pp.185-197 / 1998
  • The domestic publishing market has been ranked No. 7 in the world publishing market (statistics from the Ministry of Culture and Sports), and the publishing quantity of children's books is about to reach No. 3. Such a condition shows that the Korean publishing world is limited in kind and genre despite its quantitative growth. Foreign juvenile publishing, on the other hand, takes a multi-publishing form: simultaneous publishing with dolls, audio material, game programs, and CD-ROM titles, with even animation considered part of the publication at the planning stage. When we look at the domestic condition, however, Korean juvenile publishing has been occupied mostly by study books. The cautious book selection by well-educated parents in the 1990s has also brought change to the juvenile publishing world. The present condition bears a problem: 190 translations were counted among the children's picture books published last year. Nevertheless, there was a successful domestically planned creative picture book last year, "Puppy's Shit," which sold 15,000 copies and was a best seller among children's books. Its commercial success shows that domestic work can hold a position in the publishing market. "Puppy's Shit" is a story about the value of nature, with Korean-styled illustration, which tells of the preference for Korean books in the domestic publishing market. This paper was carried out under the motto of finding the prospects of the Korean creative children's book. By examining the creative components of picture-book planning, such as theme, story, illustration, and edit design, through the foreign picture book "What I want to know from the little mole is who made it on top of his head" and the domestic creative picture book "Puppy's Shit," this study tried to address the following: publication of Korean creative picture books for the world; professional and more artistic inner fabric and originality (the relationship between story and illustration); improvement of illustration through a new formative language with well-expressed content; planning improvement of Korean creative picture books to include literary, artistic, and educative components; and, finally, examples of planning good books whose story and illustration can, in the long run, improve the value of life for children.


Trend Analysis of Barrier-free Academic Research using Text Mining and CONCOR (텍스트 마이닝과 CONCOR을 활용한 배리어 프리 학술연구 동향 분석)

  • Jeong-Ki Lee;Ki-Hyok Youn
    • Journal of Internet of Things and Convergence / v.9 no.2 / pp.19-31 / 2023
  • The importance of barrier-free design is being highlighted worldwide. This study attempted to identify barrier-free research trends using text mining, in order to help research and policy toward creating a barrier-free environment. The analysis data is 227 papers published in domestic academic journals from 1996, when barrier-free research began, to 2022. The researchers converted the title, keywords, and abstract of each paper into text, and then analyzed the patterns of the papers and the meaning of the data. The research results are summarized as follows. First, barrier-free research began to increase after 2009, with an annual average of 17.1 papers published; this is related to the implementation guidelines for the barrier-free certification system that took effect on July 15, 2008. Second, the barrier-free text-mining results were: i) word frequency analysis of the top keywords found important keywords such as barrier free, disabled, design, universal design, access, elderly, certification, improvement, evaluation, space, facility, and environment; ii) TF-IDF analysis found the main keywords to be universal design, design, certification, house, access, elderly, installation, disabled, park, evaluation, architecture, and space; iii) N-gram analysis found related pairs such as barrier free+certification, barrier free+design, barrier free+barrier free, elderly+disabled, disabled+elderly, disabled+convenience facilities, the disabled+the elderly, society+the elderly, convenience facilities+installation, certification+evaluation index, physical+environment, and life+quality. Third, in the CONCOR analysis, cluster 1 was barrier-free issues and challenges, cluster 2 was universal design and space utilization, cluster 3 was improving accessibility for the disabled, and cluster 4 was barrier-free certification and evaluation. Based on these results, the study presents policy implications for vitalizing barrier-free research and establishing a desirable barrier-free environment.
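The three text-mining steps reported above (word frequencies, TF-IDF, N-grams) can be sketched with scikit-learn. The corpus below is a three-line illustrative stand-in, not the study's 227 papers.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = [
    "barrier free certification for the elderly and disabled",
    "universal design and barrier free access evaluation",
    "barrier free design of convenience facilities installation",
]

# Word frequency analysis (top keywords).
counts = CountVectorizer()
freq = counts.fit_transform(docs).sum(axis=0)
print(dict(zip(counts.get_feature_names_out(), freq.tolist()[0])))

# TF-IDF weights per document.
tfidf = TfidfVectorizer()
print(tfidf.fit_transform(docs).toarray().round(2))

# Bigrams, analogous to pairs like "barrier free+certification".
bigrams = CountVectorizer(ngram_range=(2, 2)).fit(docs)
print(sorted(bigrams.vocabulary_)[:5])
```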

A Study on the Medical Application and Personal Information Protection of Generative AI (생성형 AI의 의료적 활용과 개인정보보호)

  • Lee, Sookyoung
    • The Korean Society of Law and Medicine / v.24 no.4 / pp.67-101 / 2023
  • The utilization of generative AI in the medical field is being rapidly researched. Access to vast data sets reduces the time and energy spent selecting information. However, as the effort put into content creation decreases, associated issues become more likely. For example, with generative AI, users must discern the accuracy of results themselves, since these AIs learn from data within a set period and then generate outcomes: the answers may appear plausible, but their sources are often unclear, making it challenging to determine their veracity. Additionally, on ethical grounds, the possibility of results being presented from a biased or distorted perspective cannot at present be discounted. Despite these concerns, the field of generative AI is continually advancing, and an increasing number of users are leveraging it in various sectors, including the biomedical and life sciences. This raises important legal questions about who bears responsibility, and to what extent, for damages caused by these high-performance AI algorithms. A general overview of the issues with generative AI includes those discussed above, but another perspective arises from its fundamental nature as a large language model ('LLM'). There is a civil law concern regarding the memorization of training data within artificial neural networks and its subsequent reproduction. Medical data, by nature, often reflects the personal characteristics of patients, potentially leading to issues such as the regeneration of personal information. The extensive application of generative AI in scenarios beyond traditional AI brings forth the possibility of legal challenges that cannot be ignored. Upon examining the technical characteristics of generative AI and focusing on the legal issues, especially the protection of personal information, it is evident that current laws on personal information protection, particularly in the context of health and medical data utilization, are inadequate. These laws provide processes for anonymizing and de-identifying specific personal information, but fall short when generative AI is applied as software in medical devices. To address the functionalities of generative AI in clinical software, a reevaluation and adjustment of existing laws for the protection of personal information are imperative.

Dosimetric Evaluation of a Small Intraoral X-ray Tube for Dental Imaging (치과용 초소형 X-선 튜브의 선량평가)

  • Ji, Yunseo;Kim, YeonWoo;Lee, Rena
    • Progress in Medical Physics / v.26 no.3 / pp.160-167 / 2015
  • Radiation exposure to patients from medical diagnostic imaging procedures is one of the most significant concerns in diagnostic x-ray systems. A miniature intraoral x-ray tube that can be inserted into the mouth for imaging was developed for the first time in the world. Dose evaluation must be carried out before such an imaging device can be used clinically. In this study, dose evaluation of the new x-ray unit was performed by 1) using a custom-made in vivo pig phantom, 2) determining exposure conditions for clinical use, and 3) measuring the patient dose of the new system. On the basis of the DRLs (Diagnostic Reference Levels) recommended by the KFDA (Korea Food & Drug Administration), ESD (Entrance Skin Dose) and DAP (Dose Area Product) measurements for the new x-ray imaging device were designed and performed. The maximum voltage and current of the x-ray tube used in this study were 55 kVp and 300 mA. The active area of the detector was 72×72 mm with a pixel size of 48 μm. To obtain the operating conditions of the new system, pig jaw phantom images showing the major tooth-associated tissues, such as the crown and pulp cavity, were acquired at 1 frame/sec. Varying the beam current from 20 to 80 μA, 50 x-ray image frames were obtained at each beam current with the optimum x-ray exposure setting. Pig jaw phantom images were also acquired from two commercial x-ray imaging units and compared to the new x-ray device: the CS 2100 (Carestream Dental LLC) and the EXARO (HIOSSEN, Inc.), with exposure conditions of 60 kV, 7 mA and 60 kV, 2 mA, respectively. Comparing the new x-ray device and the conventional x-ray imaging units, the images from the new device around the teeth and their neighboring tissues turned out to be better in spite of its small x-ray field size. The ESD of the new x-ray device was measured as 1.369 mGy at the beam condition for the best image quality, 0.051 mAs, which is much less than the DRLs recommended by both the IAEA (International Atomic Energy Agency) and the KFDA. Its dose distribution within the x-ray field was uniform, with a standard deviation of 5-10%. The DAP of the new x-ray device was 82.4 mGy·cm², less than the DRL established by the KFDA, even though its x-ray field size was small. This study shows that the new x-ray imaging device offers better image quality and lower radiation dose compared to conventional intraoral units. In addition, methods and know-how for studies of x-ray characteristics were accumulated through this work.
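The dose-area-product figure reported above follows the simple relation DAP ≈ dose × beam area. The sketch below uses hypothetical numbers for the in-field dose and field dimensions (only the 82.4 mGy·cm² DAP and 1.369 mGy ESD are reported in the abstract); it illustrates why a small field keeps DAP low even when the dose is adequate for imaging.

```python
# All values below are illustrative assumptions, not the device's measured data.
dose_mGy = 1.4                     # hypothetical mean in-field dose
field_w_cm, field_h_cm = 7.5, 8.0  # hypothetical beam dimensions at the measurement plane

dap_mGy_cm2 = dose_mGy * field_w_cm * field_h_cm
print(f"DAP ~= {dap_mGy_cm2:.1f} mGy*cm^2")
# Shrinking the field area lowers DAP proportionally for the same dose, which is
# how a small intraoral field can stay under the KFDA diagnostic reference level.
```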