• Title/Summary/Keyword: electronic current


A 1280-RGB $\times$ 800-Dot Driver based on 1:12 MUX for 16M-Color LTPS TFT-LCD Displays (16M-Color LTPS TFT-LCD 디스플레이 응용을 위한 1:12 MUX 기반의 1280-RGB $\times$ 800-Dot 드라이버)

  • Kim, Cha-Dong;Han, Jae-Yeol;Kim, Yong-Woo;Song, Nam-Jin;Ha, Min-Woo;Lee, Seung-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SD / v.46 no.1 / pp.98-106 / 2009
  • This work proposes a 1280-RGB $\times$ 800-Dot, 70.78mW, 0.13um CMOS LCD driver IC (LDI) for high-performance 16M-color low-temperature poly-silicon (LTPS) thin-film-transistor liquid crystal display (TFT-LCD) systems, such as ultra mobile PCs (UMPC) and mobile applications, that simultaneously require high resolution, low power, small size, and high speed. The proposed LDI optimizes power consumption and chip area at high resolution based on a resistor-string architecture. The single column driver employs a 1:12 MUX architecture and drives 12 channels simultaneously to minimize chip area. The implemented class-AB amplifier achieves rail-to-rail operation with high gain and low power while minimizing the effect of offset and output deviations for high definition. The supply- and temperature-insensitive current reference is implemented on chip with a small number of MOS transistors. A slew-enhancement technique applicable to next-generation source drivers, not implemented on this prototype chip, is also proposed to further reduce power consumption. The prototype LDI, implemented in a 0.13um CMOS technology, demonstrates a measured settling time of the source driver amplifiers within 1.016us and 1.072us during high-to-low and low-to-high transitions, respectively. The output voltage of the source drivers shows a maximum deviation of 11mV. The LDI, with an active die area of $12,203um \times 1500um$, consumes 70.78mW at 1.5V/5.5V.

Citing Behavior of Korean Scientists on Foreign Journals in KSCD (KSCD를 활용한 국내 과학기술자의 해외 학술지 인용행태 연구)

  • Kim, Byung-Kyu;Kang, Mu-Yeong;Choi, Seon-Heui;Kim, Soon-Young;You, Beom-Jong;Shin, Jae-Do
    • Journal of the Korean Society for Information Management / v.28 no.2 / pp.117-133 / 2011
  • There has been little comprehensive research on the impact of foreign journals on Korean scientists, mainly because no extensive citation index database of domestic journals was available for analysis. The Korea Institute of Science and Technology Information (KISTI) built the Korea Science Citation Database (KSCD) and has provided the Korea Science Citation Index (KSCI) and Korea Journal Citation Reports (KJCR) services. In this article, the citing behavior of Korean scientists toward foreign journals was examined using KSCD, which covers Korean core journals. This research covers (1) analysis of the foreign document types cited, (2) analysis of citation counts of foreign journals by subject and the ratio of citing different disciplines, (3) analysis of the language and country of the foreign documents cited, (4) analysis of journal publishers and whether the journals are listed in global citation index services, and (5) analysis of the current situation of subscribing to foreign electronic journals in Korea. The results of this research would be useful for establishing strategies for licensing foreign electronic journals and for information services. From this research, the immediacy citation rate (average 1.46%), citation peak time (average 3.9 years), and cited half-life (average 8 years) of foreign journals were identified. It was also found that Korean scientists tend to cite journals covered in SCI(E) or SCOPUS, and that 90% of the cited foreign journals have been licensed by institutions in Korea.
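The indicators reported above (immediacy citation rate, citation peak time, and cited half-life) can all be derived from a distribution of citation ages. The abstract does not spell out KSCD's exact formulas, so the short Python sketch below uses common bibliometric conventions and made-up citation-age counts purely for illustration.

```python
# Illustrative sketch only: the citation-age counts and the definitions below are
# assumptions based on common bibliometric conventions, not the KSCD study's code.

# Hypothetical counts of citations by age (years between citing and cited publication).
citations_by_age = {0: 15, 1: 90, 2: 140, 3: 180, 4: 175,
                    5: 120, 6: 90, 7: 70, 8: 55, 9: 40, 10: 25}

total = sum(citations_by_age.values())

# Immediacy-style rate: share of citations whose cited item appeared in the citing year.
immediacy_rate = citations_by_age[0] / total * 100

# Citation peak time: the age at which the largest number of citations occurs.
peak_time = max(citations_by_age, key=citations_by_age.get)

# Cited half-life: the age by which half of all citations have accumulated.
cumulative, half_life = 0, None
for age in sorted(citations_by_age):
    cumulative += citations_by_age[age]
    if cumulative >= total / 2:
        half_life = age
        break

print(f"immediacy rate: {immediacy_rate:.2f}%")
print(f"peak time: {peak_time} years, half-life: {half_life} years")
```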

Ontology-based Course Mentoring System (온톨로지 기반의 수강지도 시스템)

  • Oh, Kyeong-Jin;Yoon, Ui-Nyoung;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.149-162 / 2014
  • Course guidance is a mentoring process performed before students register for upcoming classes. It plays a very important role in checking students' degree audits and in advising on the classes to be taken in the coming semester, and it is intimately involved with graduation assessment and completion of ABEEK certification. Currently, course guidance is performed manually by advisers at most universities in Korea because they have no electronic systems for it. Lacking such systems, advisers must analyze each student's degree audit together with the curriculum information of their own departments, and the complexity of this process often causes human error. An electronic system is therefore essential to avoid human error in course guidance. If a relational data model-based system is applied to the mentoring process, the problems of the manual approach can be solved, but relational data model-based systems have limitations. Curricula and certification requirements can change depending on a new university policy or the surrounding environment, and if they change, the schema of the existing system must be changed accordingly. It is also difficult to provide semantic search because semantic relationships between subjects are hard to extract. In this paper, we model a course mentoring ontology based on an analysis of the computer science curriculum, the structure of the degree audit, and ABEEK certification. An ontology-based course guidance system is also proposed to overcome the limitations of existing methods and to make the course mentoring process effective for both advisors and students. In the proposed system, all data consist of ontology instances. To create ontology instances, an ontology population module was developed using the JENA framework, which is used for building semantic web and linked data applications. In the ontology population module, mapping rules that connect parts of the degree audit to the corresponding parts of the course mentoring ontology are designed. All ontology instances are generated from the degree audits of the students who participated in the course mentoring test. The generated instances are saved to JENA TDB as a triple repository after an inference process using the JENA inference engine. A user interface for course guidance was implemented using Java and the JENA framework. Once an advisor or a student inputs the student's information, such as name and student number, into the information request form of the user interface, the proposed system provides mentoring results based on the current student's degree audit and rules that check the scores for each part of the curriculum, such as special cultural subjects, major subjects, and MSC subjects covering math and basic science. Recall and precision are used to evaluate the performance of the proposed system: recall checks that the system retrieves all relevant subjects, and precision checks whether the retrieved subjects are relevant to the mentoring results. An officer of the computer science department attended the verification of the results derived from the proposed system. Experimental results using real data from the participating students show that the proposed course guidance system based on the course mentoring ontology provides correct course mentoring results to students at all times. Advisors can also reduce the time needed to analyze a student's degree audit and to calculate the score for each part. As a result, the proposed ontology-based system resolves the difficulty of manual mentoring methods and derives mentoring results as correct as those produced by a human.
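The population-and-rule-checking pipeline described above (mapping degree-audit entries to ontology instances, storing them as triples, and applying score-checking rules per curriculum part) is implemented in the paper with the Java-based JENA framework and TDB. As a rough illustration of the same idea only, the sketch below uses Python's rdflib as a stand-in; the namespace, property names, and credit thresholds are hypothetical and not taken from the paper.

```python
# Illustrative stand-in: the paper uses the Java JENA framework and TDB; this sketch
# uses rdflib only to show ontology population plus a simple per-area rule check.
# All names (namespace, properties, credit thresholds) are hypothetical.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/mentoring#")
g = Graph()
g.bind("ex", EX)

# Hypothetical degree-audit rows: (student id, curriculum part, earned credits).
degree_audit = [
    ("s2014001", "major", 45),
    ("s2014001", "msc", 18),       # math and basic science
    ("s2014001", "culture", 12),   # special cultural subjects
]

# Ontology population: map each audit row to triples.
for student_id, area, credits in degree_audit:
    student = EX[student_id]
    record = EX[f"{student_id}_{area}"]
    g.add((student, RDF.type, EX.Student))
    g.add((record, RDF.type, EX.AuditRecord))
    g.add((record, EX.belongsTo, student))
    g.add((record, EX.subjectArea, Literal(area)))
    g.add((record, EX.earnedCredits, Literal(credits)))

# A simple mentoring rule: minimum credits required per area (hypothetical values).
required = {"major": 60, "msc": 21, "culture": 12}

for record, _, student in g.triples((None, EX.belongsTo, None)):
    area = str(g.value(record, EX.subjectArea))
    earned = int(g.value(record, EX.earnedCredits))
    shortfall = required[area] - earned
    status = "satisfied" if shortfall <= 0 else f"{shortfall} credits short"
    print(f"{student.split('#')[-1]} / {area}: {status}")
```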

A Study on e-B/L Korea Service and its Facilitation Strategies (한국형 전자선하증권 활성화 전략에 관한 연구)

  • Jeong, Yoon-Say
    • International Commerce and Information Review / v.13 no.4 / pp.51-79 / 2011
  • Korea has accomplished the establishment of a National Single Window for paperless trade. Since 1991, it has developed a trade automation service system based on EDI technology. In 2003, the Korean government and the private sector jointly began to set up the National Paperless Trade Service (e-Trade Service) as one of the e-government projects. In 2008, they commenced the uTradeHub service, which was equipped with Internet-based e-B/L and e-Nego service systems for the first time in the world. To facilitate the service, Korea amended its e-trade facilitation acts and laws by 2007. At the end of 2011, Korea historically recorded a trade volume of 1 trillion US dollars and joined the '$1 trillion trade club' as its 9th member country, less than five decades after the country had started international trade. Rolling out the e-B/L and e-Nego service will reduce the transaction costs of trading businesses and accelerate the adoption of e-trade services. The purposes of this study are to examine the 'e-B/L Korea' service and its facilitation strategies, and to identify obstacles to utilizing the service. The paper reviewed and analyzed Korea's paperless trade system and the distinctive characteristics of the 'e-B/L Korea' service. Among the distinctive characteristics found for Korea's e-B/L service are the following: it is well equipped with IT and legal systems, and it has more than 30,000 potential users who are already uTradeHub service users. The paper indicated several weaknesses of the current system, such as global KPI issues, circulation of electronic documents not only in the domestic market but also among economies, and development of the electronic bill of exchange. As resolution measures, the paper recommended introducing a mutual recognition system for PKI among trade partner countries, setting up an e-trade solution for small and medium companies, and paying special attention to raising users' awareness of the e-B/L service.


Development Process and Methods of Audit and Certification Toolkit for Trustworthy Digital Records Management Agency (신뢰성 있는 전자기록관리기관 감사인증도구 개발에 관한 연구)

  • Rieh, Hae-young;Kim, Ik-han;Yim, Jin-Hee;Shim, Sungbo;Jo, YoonSun;Kim, Hyojin;Woo, Hyunmin
    • The Korean Journal of Archival Studies / no.25 / pp.3-46 / 2010
  • Digital records management is a whole system in which many social and technical elements interact. To maintain trustworthiness, a repository needs periodic audit and certification, so individual electronic records management agencies need a toolkit they can use to continuously self-evaluate their trustworthiness and to assess their environment and systems to recognize deficiencies. The purpose of this study is the development of a self-certification toolkit for repositories, which synthesized and analyzed four international standards and best practices, the OAIS Reference Model (ISO 14721), TRAC, DRAMBORA, and the assessment report conducted and published by TNA/UKDA, as well as MoRe2 and current national laws and standards. As this paper describes and demonstrates the development process and framework of this self-certification toolkit, other electronic records management agencies can follow the process, develop their own toolkit reflecting their situation, and utilize the self-assessment results in-house. As a result of this research, 12 assessment areas were set, which include (organizational) operation management, classification system and master data management, acquisition, registration and description, storage and preservation, disposal, services, providing finding aids, system management, access control and security, monitoring/audit trail/statistics, and risk management. For each of the 12 areas, process maps or functional charts were drawn and business functions were analyzed, and 54 'evaluation criteria', consisting of the main business functional units in each area, were derived. Under each 'evaluation criterion', 208 'specific evaluation criteria', intended to be implementable, measurable, and provable for self-evaluation in each area, were derived. The audit and certification toolkit developed by this research can be used by digital repositories to conduct periodic self-assessment of the organization, to supplement any deficiencies found, and to inform the organization's development strategy.

Standardization and Management of Interface Terminology regarding Chief Complaints, Diagnoses and Procedures for Electronic Medical Records: Experiences of a Four-hospital Consortium (전자의무기록 표준화 용어 관리 프로세스 정립)

  • Kang, Jae-Eun;Kim, Kidong;Lee, Young-Ae;Yoo, Sooyoung;Lee, Ho Young;Hong, Kyung Lan;Hwang, Woo Yeon
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.3 / pp.679-687 / 2021
  • The purpose of the present study was to document the standardization and management process for interface terminology regarding chief complaints, diagnoses, and procedures, including surgery, in a four-hospital consortium. The process was proposed, discussed, modified, and finalized in 2016 by the Terminology Standardization Committee (TSC), consisting of personnel from the four hospitals. A request regarding interface terminology was classified into one of four categories: 1) registration of a new term, 2) revision, 3) deletion of an old term and registration of a new term, and 4) deletion. A request was processed in the following order: 1) collecting testimonies from related departments and 2) voting by the TSC; at least five of the seven possible members of the voting pool needed to approve it. Mapping to the reference terminology was performed by three independent medical information managers. All processes were performed online, and the voting and mapping results were collected automatically. This made the decision-making process clear and fast, and it also made users receptive to the decisions of the TSC. In the 16 months after the process was adopted, there were 126 new terms registered, 131 revisions, 40 deletions of an old term with registration of a new term, and 1,235 deletions.
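As a minimal illustration of the workflow just described, the sketch below models the four request categories and the five-of-seven approval rule of the TSC; the class and data values are hypothetical and are not taken from the hospitals' actual system.

```python
# Minimal sketch of the terminology-request workflow described in the abstract.
# Category names follow the abstract; the class and sample data are hypothetical.
from dataclasses import dataclass

CATEGORIES = ("new_term", "revision", "delete_and_register", "deletion")
APPROVAL_THRESHOLD = 5  # at least five of the seven possible TSC members must approve

@dataclass
class TerminologyRequest:
    term: str
    category: str
    approvals: int  # number of TSC members voting to approve

    def is_approved(self) -> bool:
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown request category: {self.category}")
        return self.approvals >= APPROVAL_THRESHOLD

requests = [
    TerminologyRequest("acute cholecystitis", "new_term", approvals=6),
    TerminologyRequest("r/o appendicitis", "revision", approvals=4),
]
for r in requests:
    print(r.term, "approved" if r.is_approved() else "rejected")
```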

A Study on the Development Direction of Medical Image Information System Using Big Data and AI (빅데이터와 AI를 활용한 의료영상 정보 시스템 발전 방향에 대한 연구)

  • Yoo, Se Jong;Han, Seong Soo;Jeon, Mi-Hyang;Han, Man Seok
    • KIPS Transactions on Computer and Communication Systems / v.11 no.9 / pp.317-322 / 2022
  • The rapid development of information technology is bringing about many changes in the medical environment. In particular, it is driving rapid change in medical image information systems that use big data and artificial intelligence (AI). The prescription delivery system (OCS), which consists of an electronic medical record (EMR) and a medical image storage and transmission system (PACS), has rapidly changed the medical environment from analog to digital. When combined with multiple solutions, PACS represents a new direction for advancement in security, interoperability, efficiency, and automation. Among these, its combination with artificial intelligence (AI) using big data, which can improve image quality, is actively progressing. In particular, AI PACS, a system that can assist in reading medical images using deep-learning technology, was developed in cooperation between universities and industry and is being used in hospitals. In line with these rapid changes in the medical image information system, structural changes in the medical market and corresponding changes in medical policy are also necessary. Medical image information is based on the Digital Imaging and Communications in Medicine (DICOM) format and, according to the generation method, is divided into volume (tomographic) images and two-dimensional cross-sectional images. In addition, many medical institutions have recently been rushing to introduce next-generation integrated medical information systems by promoting smart hospital services. The next-generation integrated medical information system is built as a solution that integrates the EMR, electronic consent, big data, AI, precision medicine, and interworking with external institutions, with the aim of also enabling research. Korea's medical image information system is at a world-class level thanks to advanced IT technology and government policies; in particular, the PACS solution is the only field in which Korea exports medical information technology to the world. In this study, along with an analysis of medical image information systems using big data, the current trend was identified based on the historical background of the introduction of medical image information systems in Korea, and the future development direction was predicted. In the future, based on DICOM big data accumulated over 20 years, we plan to conduct research that can increase the image reading rate by using AI and deep-learning algorithms.
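Because the abstract notes that medical images are exchanged in the DICOM format and then read with deep-learning assistance, a minimal sketch of that first step is shown below using the pydicom and numpy packages; the file name, normalization choice, and the omitted model call are placeholders rather than anything described in the paper.

```python
# Minimal sketch, assuming the pydicom and numpy packages are installed.
# The file path, normalization, and downstream model are placeholders.
import numpy as np
import pydicom

ds = pydicom.dcmread("example_ct_slice.dcm")  # hypothetical DICOM file

# A few standard DICOM attributes carried alongside the pixel data.
print("Modality:", ds.Modality)
print("Rows x Columns:", ds.Rows, "x", ds.Columns)

# Convert the stored pixel data to a float array and normalize to [0, 1]
# so it can be fed to an AI reading model (the model itself is not shown here).
pixels = ds.pixel_array.astype(np.float32)
pixels = (pixels - pixels.min()) / (pixels.max() - pixels.min() + 1e-8)

model_input = pixels[np.newaxis, ..., np.newaxis]  # add batch and channel axes
print("Tensor ready for inference:", model_input.shape)
```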

Characteristics of the ( Pb, La ) $TiO_3$ Thin Films with Pb/La Compositions (Pb/La 조성에 따른 ( Pb, La ) $TiO_3$ 박막의 특성 변화)

  • Kang, Seong-Jun;Joung, Yang-Hee;Yoon, Yung-Sup
    • Journal of the Korean Institute of Telematics and Electronics D / v.36D no.1 / pp.29-37 / 1999
  • In this study, we prepared PLT thin films with various La concentrations using the sol-gel method and studied the effect of La concentration on the electrical properties of the films. As the La concentration increases from 5mol% to 28mol%, the dielectric constant at 10kHz increases from 428 to 761, while the loss tangent decreases from 0.063 to 0.024. The leakage current density at 150kV/cm also tends to decrease, from 6.96${\mu}A/cm^2$ to 0.79${\mu}A/cm^2$. From the hysteresis loops of the PLT thin films, the remanent polarization and the coercive field decrease from 9.55${\mu}C/cm^2$ to 1.10${\mu}C/cm^2$ and from 46.4kV/cm to 13.7kV/cm, respectively. The fatigue test on the PLT thin films shows that the fatigue properties improve remarkably as the La concentration increases from 5mol% to 28mol%. In particular, the PLT(28) film has a paraelectric phase, and its charge storage density and leakage current density at 5V are 134fC/${\mu}m^2$ and 1.01${\mu}A/cm^2$, respectively. The remanent polarization and coercive field of the PLT(10) film are 6.96${\mu}C/cm^2$ and 40.2kV/cm, respectively. After applying $10^9$ square pulses of ${\pm}5V$, the remanent polarization of the PLT(10) film decreases by about 20% from its initial value. From these results, we conclude that the 10mol% and 28mol% La-doped PLT thin films are very suitable as capacitor dielectrics for next-generation DRAM and NVFRAM, respectively.


A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia Pacific Journal of Information Systems / v.21 no.1 / pp.103-122 / 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some on-line documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents do not benefit from the use of keywords, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, the implementation itself is the obstacle: manually assigning keywords to all documents is a daunting task, or even impractical, in that it is extremely tedious and time-consuming and requires a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are mainly two approaches to achieving this aim: the keyword assignment approach and the keyword extraction approach. Both approaches use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, there is a given vocabulary, and the aim is to match its terms to the texts; in other words, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document. Although this approach is domain dependent and is not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, the aim is to extract keywords with respect to their relevance in the text without a prior vocabulary. In this approach, automatic keyword generation is treated as a classification task, and keywords are commonly extracted based on supervised learning techniques; keyword extraction algorithms classify candidate keywords in a document into positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as keywords for that document, so keyword extraction is limited to terms that appear in the document and cannot generate implicit keywords that are not included in it. According to the experimental results of Turney, about 64% to 90% of keywords assigned by authors can be found in the full text of an article. Conversely, this also means that 10% to 36% of author-assigned keywords do not appear in the article and cannot be generated by keyword extraction algorithms. Our preliminary experiment also shows that 37% of author-assigned keywords are not included in the full text. This is why we adopted the keyword assignment approach. In this paper, we propose a new approach to automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculating the vector length of each keyword set based on each keyword's weight; (2) preprocessing and parsing a target document that does not have keywords; (3) calculating the vector length of the target document based on term frequency; (4) measuring the cosine similarity between each keyword set and the target document; and (5) generating keywords that have high similarity scores. Two keyword generation systems were implemented applying IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. The former is implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers and has been tested on a number of papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. According to our experiments, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability. IVSM also shows performance comparable to that of Extractor, a representative keyword extraction system developed by Turney. As electronic documents increase, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
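The five-step assignment procedure above maps directly onto a short computation: build a weight vector for each candidate keyword set, build a term-frequency vector for the target document, and rank keyword sets by cosine similarity. The sketch below is a minimal illustration of that procedure on toy data; the keyword sets, weights, and sample document are invented, and this is not the authors' implementation.

```python
# Minimal sketch of the IVSM keyword-assignment steps described in the abstract.
# Keyword sets, weights, and the sample document are toy data, not from the paper.
import math
import re
from collections import Counter

# (1) Keyword sets with per-keyword weights (assumed weighting scheme).
keyword_sets = {
    "logistics": {"port": 0.9, "shipping": 0.8, "distribution": 0.6},
    "information systems": {"keyword": 0.7, "retrieval": 0.9, "vector": 0.8},
}

# (2) Preprocess and parse a target document that has no keywords yet.
document = "A vector space model supports keyword retrieval for digital libraries."
tokens = re.findall(r"[a-z]+", document.lower())

# (3) Term-frequency vector of the target document.
doc_vector = Counter(tokens)

def cosine_similarity(keywords: dict, doc: Counter) -> float:
    # (4) Cosine similarity between a keyword-set vector and the document vector.
    dot = sum(weight * doc.get(term, 0) for term, weight in keywords.items())
    len_keywords = math.sqrt(sum(w * w for w in keywords.values()))
    len_doc = math.sqrt(sum(c * c for c in doc.values()))
    return dot / (len_keywords * len_doc) if len_keywords and len_doc else 0.0

# (5) Rank keyword sets and keep the highest-scoring ones as assigned keywords.
scores = {name: cosine_similarity(kw, doc_vector) for name, kw in keyword_sets.items()}
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.3f}")
```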

A Study on the Archives and Records Management in Korea - Overview and Future Direction - (한국의 기록관리 현황 및 발전방향에 관한 연구)

  • Han, Sang-Wan;Kim, Sung-Soo
    • Journal of Korean Society of Archives and Records Management / v.2 no.2 / pp.1-38 / 2002
  • This study examines the status quo of Korean archives and records management, covering governmental as well as professional activities for the development of the field in relation to the new legislation on records management. Among many concerns, this study primarily explores four perspectives: 1) the Government Archives and Records Service (GARS); 2) the Korean Association of Archives; 3) the Korean Society of Archives and Records Management; and 4) the Journal of the Korean Society of Archives and Records Management. One of the primary tasks of the GARS is to build the special depository within which the Presidential Library should be located. As a result, the position of the GARS can be elevated so that it is directed by an official at the vice-minister level directly under the President, as the governmental representative for managing public records; in this manner, the GARS can sustain its independence and take custody of public records across government agencies. The Korean Association of Archives has made efforts regarding the preservation of paper records, the preservation of digital resources in new media formats, facilities and equipment, the education of archivists and continuing training of practitioners, and policy-making on records preservation. For further development, academia and industry should cooperate continuously to face current problems. The Korean Society of Archives and Records Management has held three international conferences to date, on the following topics: 1) records management and archival education in Korea, Japan, and China; 2) knowledge management and metadata for the fulfillment of archives and information science; and 3) electronic records management and preservation, with an understanding of ongoing archival research in the United States, Europe, and Asia. The Society continues to play a leading role in both theory and practice for the development of archival science in Korea; it should also suggest an educational model of archival curricula that fits the Korean context. The Journal of the Korean Society of Archives and Records Management has been published on six major topics to date. Findings suggest that 'special archives' for regional or topical collections are desirable because they can house subject holdings on specialties or particular figures in a region. In addition, archival education at the undergraduate level is more desirable for the Korean situation, where practitioners are strongly needed and professionals with master's degrees move into manager positions; departments of Library and Information Science in universities therefore need to open an archival science major or track at the undergraduate level to meet current market demands. The qualification requirements for professional archivists should also be set at a moderate level.