• Title/Summary/Keyword: guide system


Acoustic Analysis and Melodization of Korean Intonation for Language Rehabilitation (언어재활을 위한 한국어의 음향적 분석과 선율화)

  • Choi, Jin Hee;Park, Jeong Mi
    • Journal of Music and Human Behavior
    • /
    • v.21 no.1
    • /
    • pp.49-68
    • /
    • 2024
  • This study aims to acoustically analyze characteristics of the Korean language and convert the findings into musical elements, providing foundational data for evidence-based music-language rehabilitation. We collected voice data from thirty men and thirty women aged 19-25, each producing six-syllable prosodic units composed of two accentual phrases, including both declarative and interrogative sentences. Analyzing the data with Praat, we extracted syllabic acoustic properties and conducted statistical analyses by acoustic property, sentence type, gender, and particle presence. Significant differences were found in syllable frequency and duration according to accentual phrase and prosodic unit (p < .001), with interrogatives showing higher frequencies and declaratives showing longer durations (p < .001). Female frequencies were significantly higher than males' (p < .001), and female durations were significantly longer (p < .001). Particle syllables also showed significantly stronger intensities (p < .001). Finally, we present melodies converted from these acoustic properties into musical scores based on pitch, duration, and accent. The insights from this analysis of six-syllable Korean sentences will guide further research on developing a system for melodizing large-scale Korean speech data, which is expected to be crucial in music-based language rehabilitation.
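
Below is a minimal sketch, not the authors' pipeline, of how the per-syllable acoustic properties described above (fundamental frequency, duration, intensity) could be pulled out of a recording with the praat-parselmouth Python wrapper for Praat; the file name and syllable boundaries are purely hypothetical placeholders.

```python
# Sketch only: per-syllable F0, duration, and intensity with praat-parselmouth.
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("six_syllable_utterance.wav")  # hypothetical recording
pitch = snd.to_pitch()
intensity = snd.to_intensity()

# Hypothetical syllable boundaries in seconds (six syllables -> seven boundaries)
boundaries = [0.00, 0.18, 0.35, 0.55, 0.74, 0.93, 1.15]

for i, (t0, t1) in enumerate(zip(boundaries[:-1], boundaries[1:]), start=1):
    f0 = call(pitch, "Get mean", t0, t1, "Hertz")        # fundamental frequency
    db = call(intensity, "Get mean", t0, t1, "energy")   # mean intensity (dB)
    print(f"syllable {i}: F0 = {f0:.1f} Hz, duration = {t1 - t0:.2f} s, intensity = {db:.1f} dB")
```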

ERF Components Patterns of Causal Question Generation during Observation of Biological Phenomena : A MEG Study (생명현상 관찰에서 나타나는 인과적 의문 생성의 ERF 특성 : MEG 연구)

  • Kwon, Suk-Won;Kwon, Yong-Ju
    • Journal of Science Education
    • /
    • v.33 no.2
    • /
    • pp.336-345
    • /
    • 2009
  • The purpose of this study was to analyze the ERF component patterns of causal questions generated during the observation of biological phenomena. First, a system presenting pictures that evoke causal questions about biological phenomena (the evoked picture system) was developed following cognitive-psychology methods. The ERF patterns of causal questions, reflecting time-series brain processing, were then observed using MEG. The evoked picture system was developed through an R&D process involving science education experts and researchers. Tasks were classified into animal (A), microbe (M), and plant (P) tasks according to biological species, and into interaction (I), all (A), and part (P) tasks based on the interaction between different species. In collaboration with the MEG team at Seoul National University Hospital, the MEG task paradigm was developed, and MEG data on scientific question generation were collected from five female graduate students. To examine the unique characteristics of causal questions, MEG ERF components were analyzed. As a result, a total of 100 pictures were produced for the evoked picture system, and four ERF components were identified: M1 (100~130 ms), M2 (220~280 ms), M3 (320~390 ms), and M4 (460~520 ms). The present study could guide personalized teaching-learning methods through the development and application of scientific question learning programs.
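
The abstract reports four ERF time windows (M1-M4). As an illustration only, and assuming MEG epochs are available in MNE-Python format (the file name below is hypothetical; the authors' actual analysis software is not stated), the mean field strength per window could be summarized like this:

```python
# Sketch only: summarize MEG field strength in the four reported ERF windows,
# assuming epochs saved in MNE-Python's .fif format (file name is hypothetical).
import numpy as np
import mne

windows = {"M1": (0.100, 0.130), "M2": (0.220, 0.280),
           "M3": (0.320, 0.390), "M4": (0.460, 0.520)}

epochs = mne.read_epochs("causal_question_task-epo.fif")  # hypothetical file
evoked = epochs.average()                                 # average ERF across trials

for name, (tmin, tmax) in windows.items():
    segment = evoked.copy().crop(tmin=tmin, tmax=tmax)
    rms = np.sqrt(np.mean(segment.data ** 2))             # RMS over sensors and time
    print(f"{name}: RMS field = {rms:.3e}")
```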

Reproducibility and accuracy of tooth size measurements obtained by the use of computer (컴퓨터를 이용한 치아크기 계측시 재현도와 정확도에 관한 연구)

  • Kim, Eun-Jeong;Hwang, Hyeon-Shik
    • The korean journal of orthodontics
    • /
    • v.29 no.5 s.76
    • /
    • pp.563-573
    • /
    • 1999
  • The purpose of this study was to evaluate the usefulness of a computer system for measuring tooth size in model analysis by comparing two sets of measurements: one obtained with a computer and the other with vernier calipers. Twenty sets of casts were used, all showing a moderate degree of crowding and full eruption of all teeth. The mesio-distal widths of 12 teeth, from the left central incisor to the left first molar, in each set of casts were measured with vernier calipers and with the computer, respectively, and these measurements were repeated two weeks later. First, for the reproducibility analysis, the two computer measurements were compared with each other, as were the two vernier-caliper measurements. Second, all the teeth were separated at the mesio-distal contact points and their widths were measured with a micrometer to obtain standard measurements. For the accuracy analysis, these standard measurements were compared with the measurements obtained from the dental casts by the two methods, and the difference between them was defined as the measurement error. To investigate the causes of measurement error, the presence and degree of contact point deviation of each tooth were examined on the upper and lower occlusograms, and the mesio-distal angulation of each tooth was measured with the TARG. The following results were obtained through statistical analysis. 1. In the reproducibility analysis, the vernier-caliper measurements showed significant differences for three of the twelve teeth, while the computer measurements showed significant differences for one of the twelve teeth. 2. In the accuracy analysis, compared with the standard measurements, the vernier-caliper measurements showed significant differences for three of the twelve teeth, while the computer measurements showed significant differences for two of the twelve teeth. 3. Compared with the standard measurements, the vernier-caliper measurements tended to be larger at the upper first molar and smaller at the lower first molar, whereas the computer measurements tended to be larger at both the upper and lower first molars. 4. The vernier-caliper measurements showed the largest error at the lower first molar, and the degree of error varied by tooth, whereas the variation in error was small for the computer measurements. 5. In the analysis of the correlation between the degree of measurement error and the contact point deviation index and mesio-distal crown angulation of each tooth, the vernier-caliper measurements showed no significant correlation, while the computer measurements showed slight positive correlations. These results indicate that a computer system may be useful for measuring tooth size in model analysis.
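
As a rough illustration of the reproducibility/accuracy comparisons described above, the sketch below compares each method's mesio-distal widths against micrometer "standard" values with a paired t-test; all numbers are hypothetical placeholders, not the study's data.

```python
# Sketch only: paired comparison of each method against micrometer "standard"
# widths; the mesio-distal widths below (in mm) are hypothetical placeholders.
import numpy as np
from scipy import stats

standard = np.array([8.61, 6.55, 7.80, 7.10, 9.95, 10.40])   # micrometer
calipers = np.array([8.70, 6.50, 7.85, 7.05, 10.05, 10.55])  # vernier calipers
computer = np.array([8.65, 6.58, 7.78, 7.12, 10.00, 10.45])  # computer system

for name, values in (("calipers", calipers), ("computer", computer)):
    error = values - standard                 # measurement error per tooth
    t_stat, p_value = stats.ttest_rel(values, standard)
    print(f"{name}: mean error = {error.mean():+.3f} mm, p = {p_value:.3f}")
```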

Comparison of using CBCT with CT Simulator for Radiation dose of Treatment Planning (CBCT와 Simulation CT를 이용한 치료계획의 선량비교)

  • Kim, Dae-Young;Choi, Ji-Won;Cho, Jung-Keun
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.12
    • /
    • pp.742-749
    • /
    • 2009
  • The use of cone-beam computed tomography (CBCT) has been proposed for guiding the delivery of radiation therapy. A kilovoltage imaging system capable of radiography, fluoroscopy, and cone-beam CT has been integrated with a medical linear accelerator. A standard clinical linear accelerator, operating in arc therapy mode, together with an amorphous-silicon (a-Si) on-board electronic portal imager, can be used to treat palliative patients and to verify the patient's position prior to treatment. On-board CBCT images are used to generate patient geometric models to assist patient setup, and the image data can also, potentially, be used for dose reconstruction in combination with the fluence maps from the treatment plan. In this study, we evaluated the accuracy of the Hounsfield units (HU) of CBCT images as well as the accuracy of dose calculations based on CBCT images of a phantom, and compared the results with those obtained using CT simulator images. Phantom and patient studies were carried out to evaluate the achievable accuracy in using CBCT and the CT simulator for dose calculation. Relative electron density as a function of HU was obtained for both the planning CT simulator and CBCT using a Catphan-600 (The Phantom Laboratory, USA) calibration phantom. A clinical treatment planning system was employed for CT-simulator-based and CBCT-based dose calculations and subsequent comparisons. The dosimetric consequence of HU variation in CBCT was evaluated by comparing MU/100 cGy: the differences were about 2.7% (3-4 MU/100 cGy) in the phantom and 2.5% (1-3 MU/100 cGy) in patients. The difference in HU values in the Catphan phantom was small; however, the magnitude of scatter and artifacts in CBCT images is affected by the limited field of view of the detector and by patients' involuntary motion. In addition to guiding the patient setup process, CBCT data acquired prior to treatment can be used to recalculate or verify the treatment plan based on the patient anatomy in the treatment area, and CBCT has the potential to become a very useful tool for on-line ART.
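
The HU-to-relative-electron-density calibration step mentioned above can be illustrated with a simple piecewise-linear lookup; the calibration points and monitor-unit values below are hypothetical, not the paper's measurements.

```python
# Sketch only: piecewise-linear HU -> relative electron density lookup and a
# percent difference in MU per 100 cGy; all numbers are illustrative.
import numpy as np

hu_points  = np.array([-1000, -500,    0,  300, 1000])   # calibration inserts (HU)
red_points = np.array([ 0.00, 0.50, 1.00, 1.28, 1.70])   # relative electron density

def hu_to_red(hu):
    """Map Hounsfield units to relative electron density via the measured curve."""
    return np.interp(hu, hu_points, red_points)

print(hu_to_red(40))                       # soft-tissue-like voxel

mu_ct, mu_cbct = 112.0, 109.0              # hypothetical MU per 100 cGy
print(f"difference = {abs(mu_ct - mu_cbct) / mu_ct * 100:.1f}%")
```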

Design and Implementation of Clipcast Service via Terrestrial DMB (지상파 DMB를 이용한 클립캐스트 서비스 설계 및 구현)

  • Cho, Suk-Hyun;Seo, Jong-Soo
    • Journal of Broadcast Engineering
    • /
    • v.16 no.1
    • /
    • pp.23-32
    • /
    • 2011
  • This paper outlines the system design and the implementation process of a clipcast service that can send clips of video, MP3 audio, text, images, etc. to terrestrial DMB terminals. To provide a clipcast service in terrestrial DMB, a separate data channel needs to be allocated, and this requires changes in the existing bandwidth allocation. Clipcast contents can be sent after midnight, at around 3 to 4 AM, when terrestrial DMB viewership is low. If the video service bit rate is lowered to 352 Kbps and the TPEG service band is fully used, then a 320 Kbps bit rate can be allocated to clipcast. To enable the clipcast service, the terminal's DMB program must be running, and this can be triggered through SMS and EPG. The clipcast service applies the MOT protocol to transmit multimedia objects and transmits each file twice in carousel format for stable delivery. Therefore, 72 Mbyte of data can be transmitted in one hour, which corresponds to about 20 minutes of full-motion video service at a 500 Kbps data rate. When playing a clip transmitted through the terrestrial DMB data channel, information about the length of each clip is obtained through communication with the CMS (Content Management Server), and only error-free files are displayed. The clips can be offered to users as previews of the complete VOD contents; to use the complete content, the user accesses the URL allocated for that specific content and downloads it after completing a billing process. This paper suggests the design and implementation of a terrestrial DMB system that provides a clipcast service, enabling file download services like those provided by MediaFLO, DVB-H, and other mobile broadcasting systems. Unlike the other mobile broadcasting systems, the proposed system applies a more reliable SMS method to activate the DMB terminals for a highly stable clipcast service, allowing hybrid activation of terminals, i.e., by both SMS and EPG.
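
The capacity figures quoted above can be checked with a short calculation: a 320 Kbps clipcast channel running for one hour, with every file sent twice in the carousel, yields about 72 MB of unique data, or roughly 20 minutes of 500 Kbps video.

```python
# Worked check of the throughput figures quoted above (decimal MB, i.e. 1 MB = 1000 kB).
clip_rate_kbps = 320        # data channel allocated to clipcast
seconds = 3600              # one hour of overnight transmission
carousel_repeats = 2        # every file is sent twice for reliability

unique_mb = clip_rate_kbps * seconds / 8 / 1000 / carousel_repeats
print(unique_mb)            # -> 72.0 MB of unique content per hour

video_rate_kbps = 500       # full-motion video service bit rate
minutes_of_video = unique_mb * 1000 * 8 / video_rate_kbps / 60
print(minutes_of_video)     # -> 19.2, i.e. about 20 minutes of video
```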

The Present Status and Prospect of GIS Learning in Teaching Geography of High School (고등학교 지리학습에서 GIS 교육의 현황과 전망)

  • Hwang, Sang-Ill;Lee, Kum-Sam
    • Journal of the Korean association of regional geographers
    • /
    • v.2 no.2
    • /
    • pp.219-231
    • /
    • 1996
  • The aim of this study is to analyze how GIS is described in all of the officially approved high school textbooks, to assess how well teachers understand GIS, and to examine the present state of GIS instruction. Most textbook authors underestimate the importance of GIS, and their awareness of it varies. Only a few textbooks explain GIS coherently, from the instructional aims through the chapter summary to the overall test, in both Korean Geography and World Geography; this depends on whether GIS specialists participated in writing the textbook, while the other textbooks give only a brief introduction to the GIS concept. Teachers are also limited in learning how to teach GIS because of its highly technical nature and the lack of prior training and teacher's guides. As a result, about half of the responding teachers taught high school students without knowledge of GIS, and a few never even referred to the concept. These facts may negatively affect the status of geography in the information society. To address these issues, the description system and its contents should be revised, the variation among textbooks should be reduced in the next revision of the 7th curriculum, and sufficient printed GIS materials should be provided for teachers to use as teaching aids. It is also desirable that GIS instruction models be further developed for college education, that in-service teacher training programs be arranged, and that such training be carried out more practically, with enough lead time, before the curriculum revision.

Distribution and Stratigraphical Significance of the Haengmae Formation in Pyeongchang and Jeongseon areas, South Korea (평창-정선 일대 "행매층"의 분포와 층서적 의의)

  • Kim, Namsoo;Choi, Sung-Ja;Song, Yungoo;Park, Chaewon;Chwae, Ueechan;Yi, Keewook
    • Economic and Environmental Geology
    • /
    • v.53 no.4
    • /
    • pp.383-395
    • /
    • 2020
  • The stratigraphic position of the Haengmae Formation can provide clues for resolving the ongoing debate over the Silurian formation also known as the Hoedongri Formation. Since the 2010s, there have been several reports denying that the Haengmae Formation is a lithostratigraphic unit. This study aimed to clarify the lithostratigraphic and chronostratigraphic significance of the Haengmae Formation. The distribution and structural geometry of the Haengmae Formation were studied through geologic mapping, and its relative and absolute geologic ages were determined through conodont biostratigraphy and zircon U-Pb dating, respectively. The representative rock of the Haengmae Formation is a massive, yellow to yellowish-brown, pebble-bearing carbonate rock with a granular texture similar to sandstone; its surface is rough with a considerable amount of pores. Based on the mineral composition, contents, and microstructure of the rocks, they are classified as pebble-bearing clastic rocks composed of dolomite pebbles and matrix. They chiefly consist of euhedral or subhedral dolomite and rounded, well-sorted, fine-grained quartz, and are continuously distributed in the study area from Biryong-dong to Pyeongan-ri. The bedding attitude and thickness of the Haengmae Formation are similar to those of the Hoedongri Formation in the north-eastern area (Biryong-dong to Haengmae-dong): a dip direction/dip of 340°/15° is maintained from Biryong-dong to Haengmae-dong, with a thickness of ca. 200 m. Toward the southwest of the studied area, however, the attitude changes abruptly and the stratigraphic sequence is disordered because of folding and thrusting; consequently, the formation is exposed over a wide, low-relief area of 1.5 km × 2.5 km. Zircon U-Pb ages range from 470 to 449 Ma, which indicates that the Haengmae Formation formed during the Late Ordovician or later. The pebble-bearing carbonate rock consists of clastic sediments, suggesting that the Middle Ordovician conodonts from the Haengmae Formation must have been reworked. Therefore, the above evidence supports a Late Ordovician or younger age for the Haengmae Formation. This study revealed that the Haengmae Formation is neither a shear zone nor an upper part of the Jeongseon Limestone, and that it is not the same age as the Jeongseon Limestone. Furthermore, it was confirmed that the Haengmae Formation should be treated as a lithostratigraphic unit in accordance with the stratigraphic guide of the International Commission on Stratigraphy (ICS).

Determining the Authenticity of Labeled Traceability Information by DNA Identity Test for Hanwoo Meats Distributed in Seoul, Korea (DNA 동일성 검사를 통한 서울지역 유통 한우육의 표시 이력정보 진위 판별)

  • Yeon-jae Bak;Mi-ae Park;Su-min Lee;Hyung-suk Park
    • Journal of Food Hygiene and Safety
    • /
    • v.38 no.1
    • /
    • pp.12-18
    • /
    • 2023
  • Beef traceability systems help prevent imported beef from being distributed as Hanwoo (Korean native cattle) meat. In particular, assigning a traceability number to each head of cattle can provide consumers with all information regarding the Hanwoo meat they purchase. In the present study, a DNA identity test was conducted on 344 samples of Hanwoo meat from large livestock product stores in Seoul between 2021 and 2022 to determine the authenticity of important label information, such as the traceability number. A traceability number mismatch was confirmed in 45 cases (13.1%). The mismatch rate decreased from 14.7% in 2021 to 11.3% in 2022, and was higher in the northern region (16.9%) than in the southern region (10.2%). In addition, of the six brands, B and D showed satisfactory traceability system management, whereas E and A showed poor management, with significant differences (P<0.001). The actual traceability number could be confirmed for only 53.9% of the mismatch samples. Examination of the authenticity of the label information of the samples within the identified range revealed false marking of the traceability number (13.1%), sex (2.9%), slaughterhouse name (2.2%), and grade (1.6%); no false marking of breed (Hanwoo) was found. To prevent the distribution of erroneously marked livestock products, the authenticity of label information must be determined promptly. Therefore, a legal basis must be established mandating that a daily work sheet, including the beef traceability number, be kept when subdividing partial meat cuts. Our findings can serve as reference data to guide the management of traceability systems and ensure transparency in the distribution of livestock products.

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.21 no.1
    • /
    • pp.103-122
    • /
    • 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some on-line documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents do not yet benefit from the use of keywords, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, the implementation itself is the obstacle: manually assigning keywords to all documents is a daunting, even impractical task, in that it is extremely tedious and time-consuming and requires a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are two main approaches to achieving this aim: the keyword assignment approach and the keyword extraction approach. Both approaches use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, there is a given vocabulary, and the aim is to match its terms to the texts; in other words, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document. Although this approach is domain dependent and is not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, the aim is to extract keywords with respect to their relevance in the text without a prior vocabulary. Here, automatic keyword generation is treated as a classification task, and keywords are commonly extracted based on supervised learning techniques; keyword extraction algorithms classify candidate keywords in a document into positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as its keywords, so keyword extraction is limited to terms that appear in the document and cannot generate implicit keywords that are not included in it. According to the experimental results of Turney, about 64% to 90% of keywords assigned by authors can be found in the full text of an article. Conversely, this also means that 10% to 36% of author-assigned keywords do not appear in the article and therefore cannot be generated by keyword extraction algorithms. Our preliminary experiment also shows that 37% of author-assigned keywords are not included in the full text. This is why we adopted the keyword assignment approach. In this paper, we propose a new approach to automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model, which is a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculate the vector length of each keyword set based on each keyword weight; (2) preprocess and parse a target document that does not have keywords; (3) calculate the vector length of the target document based on term frequency; (4) measure the cosine similarity between each keyword set and the target document; and (5) generate the keywords that have high similarity scores. Two keyword generation systems were implemented applying IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. The former is implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers and has been tested on a number of papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. According to our experiments, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability, and IVSM shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As the number of electronic documents increases, we expect that the IVSM approach proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
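
A minimal sketch of the IVSM idea as summarized above, assuming a toy vocabulary and hand-picked keyword-set weights (none of which are the authors' data): keyword sets and the target document are represented as vectors over the same vocabulary, and the sets with the highest cosine similarity supply the keywords.

```python
# Sketch only: keyword assignment by cosine similarity between a document
# vector and keyword-set vectors. Vocabulary and weights are illustrative.
import numpy as np

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom else 0.0

vocab = ["logistics", "port", "keyword", "retrieval", "shipping"]

# (1) keyword sets represented as weighted vectors over the vocabulary
keyword_sets = {
    "port economics":        np.array([0.2, 0.9, 0.0, 0.0, 0.6]),
    "information retrieval": np.array([0.0, 0.0, 0.8, 0.9, 0.0]),
}

# (2)-(3) parse the target document into term frequencies over the same vocabulary
doc_terms = "port shipping logistics port".split()
doc_vec = np.array([doc_terms.count(t) for t in vocab], dtype=float)

# (4)-(5) rank keyword sets by cosine similarity and keep the best-scoring ones
ranked = sorted(keyword_sets, key=lambda k: cosine(doc_vec, keyword_sets[k]), reverse=True)
print(ranked[0])   # -> "port economics"
```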

A Mobile Landmarks Guide : Outdoor Augmented Reality based on LOD and Contextual Device (모바일 랜드마크 가이드 : LOD와 문맥적 장치 기반의 실외 증강현실)

  • Zhao, Bi-Cheng;Rosli, Ahmad Nurzid;Jang, Chol-Hee;Lee, Kee-Sung;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.1
    • /
    • pp.1-21
    • /
    • 2012
  • In recent years, the mobile phone has undergone an extremely fast evolution. It is equipped with a high-quality color display, a high-resolution camera, and real-time accelerated 3D graphics, along with additional sensors such as GPS and a digital compass. This evolution helps application developers use the power of smart-phones to create rich environments that offer a wide range of services and exciting possibilities. In outdoor mobile AR research to date, there are many popular location-based AR services, such as Layar and Wikitude, but these systems have a major limitation: the AR contents are rarely overlaid precisely on the real target. Another line of research is context-based AR services using image recognition and tracking, in which the AR contents are precisely overlaid on the real target, but real-time performance is restricted by the retrieval time and is hard to achieve over a large-scale area. In our work, we combine the advantages of location-based AR with those of context-based AR: the system first finds the surrounding landmarks and then performs recognition and tracking on them. The proposed system mainly consists of two major parts: a landmark browsing module and an annotation module. In the landmark browsing module, users can view augmented virtual information (information media), such as text, pictures, and video, in their smart-phone viewfinder when they point the smart-phone at a certain building or landmark. For this, a landmark recognition technique is applied, and SURF point-based features are used in the matching process because of their robustness. To ensure that the image retrieval and matching processes are fast enough for real-time tracking, we exploit contextual device information (GPS and digital compass) to select from the database only the nearest landmarks in the pointed direction; the query image is matched only against this selected data, so the matching speed increases significantly. The second part is the annotation module. Instead of viewing only the augmented information media, users can create virtual annotations based on linked data. Full knowledge of the landmark is not required: users can simply look for the appropriate topic by searching for a keyword in linked data, which helps the system find the target URI needed to generate correct AR contents. To recognize target landmarks, images of the selected buildings or landmarks are captured from different angles and distances, a procedure that builds a connection between the real building and the virtual information in the Linked Open Data. In our experiments, the search range in the database is reduced by clustering images into groups according to their coordinates; a grid-based clustering method and the user's location information are used to restrict the retrieval range. In existing research using clusters and GPS information, the retrieval time is around 70~80 ms, while experimental results show that our approach reduces the retrieval time to around 18~20 ms on average. The total processing time is therefore reduced from 490~540 ms to 438~480 ms, and the performance improvement will be more pronounced as the database grows. This demonstrates that the proposed system is efficient and robust in many cases.
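
The contextual-device filtering step described above, selecting only nearby landmarks in the pointed direction before any SURF matching, can be sketched as follows; the coordinates, the 2 km radius, and the 45° bearing window are illustrative assumptions.

```python
# Sketch only: narrow candidate landmarks by GPS distance and compass bearing
# before image matching. Coordinates and thresholds are illustrative.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing from point 1 to point 2, degrees clockwise from north."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

landmarks = {"tower": (37.5512, 126.9882), "gate": (37.5759, 126.9769)}  # hypothetical
user = (37.5547, 126.9707)   # GPS fix
heading = 105.0              # digital-compass reading, degrees

candidates = [
    name for name, (lat, lon) in landmarks.items()
    if haversine_m(*user, lat, lon) < 2000.0                                          # within 2 km
    and abs((bearing_deg(*user, lat, lon) - heading + 180.0) % 360.0 - 180.0) < 45.0  # pointed at
]
print(candidates)   # only these go on to SURF feature matching
```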