• Title/Summary/Keyword: library types


A Systematic Review of Trends of Domestic Digital Curation Research (체계적 문헌고찰을 통한 국내 디지털 큐레이션 연구동향 분석)

  • Minseok Park;Jisue Lee
    • Journal of Korean Society of Archives and Records Management
    • /
    • v.24 no.2
    • /
    • pp.41-63
    • /
    • 2024
  • This study investigated research trends in digital curation indexed in a prominent domestic academic information database. A systematic literature review was conducted on 39 academic papers published from 2009 to 2023. The review examined indexing status according to publication year, venue, academic discipline, research area distribution, research affiliation and occupation, and research types. In addition, network centrality analysis and cohesive group analysis were performed on 69 author keywords. The findings revealed several key points. First, digital curation research peaked in 2015 and 2016 with 5 publications each year, followed by a slight decrease, and then consistently produced 4 or more publications annually since 2019. Second, among the 39 studies, 25 were conducted in interdisciplinary fields, including library and information science, while 11 were in the humanities, such as miscellaneous humanities. The most prominent research areas were theoretical and infrastructural aspects, information management and services, and institutional domains. Third, digital curation research was predominantly led by university-affiliated professors and researchers, with collaborative research more prevalent than solo research. Lastly, analysis of author keywords revealed that "digital curation," "institution," and "content" were the most influential central keywords within the overall network.
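
The keyword network centrality analysis described above can be sketched in a few lines of Python. The keyword sets below are illustrative stand-ins, not the study's actual 69 author keywords; degree centrality is computed by hand on a co-occurrence graph.

```python
from collections import defaultdict

# Toy stand-in data: each paper contributes a set of author keywords, and
# keywords co-occurring in a paper are linked in the network (illustrative
# keywords, not the study's actual data).
papers = [
    {"digital curation", "institution", "content"},
    {"digital curation", "library"},
    {"digital curation", "content", "archive"},
    {"institution", "archive"},
]

# Build an undirected co-occurrence graph as an adjacency set.
adj = defaultdict(set)
for kws in papers:
    for a in kws:
        for b in kws:
            if a != b:
                adj[a].add(b)

# Degree centrality: the share of other nodes a keyword is connected to.
n = len(adj)
centrality = {k: len(neigh) / (n - 1) for k, neigh in adj.items()}
top = max(centrality, key=centrality.get)
print(top)  # "digital curation" links to every other keyword in this toy graph
```

Cohesive-group analysis would then look for densely connected subsets of this same graph; the centrality ranking alone already surfaces the most influential keywords.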

A historical study of the Large Banner, a symbol of the military dignity of the Late Joseon Dynasty (조선 후기 무위(武威)의 상징 대기치(大旗幟) 고증)

  • JAE, Songhee;KIM, Youngsun
    • Korean Journal of Heritage: History & Science
    • /
    • v.54 no.4
    • /
    • pp.152-173
    • /
    • 2021
  • The Large Banner was introduced during the Japanese invasions of Korea along with a new military system. It was a flag used to direct the movement of soldiers in military training. It also served other purposes: as a symbol when receiving the king at a military camp, as a flag raised at the front of a royal procession, at the reception and dispatch of envoys, and in a local official's procession. The Large Banner was thus recognized as a symbol of military dignity and of training rites. In the present study, the Large Banner was analyzed in terms of two types of decoration. Type I includes the chungdogi, gakgi, and moongi; Type II includes the grand, medium, and small obangi, the geumgogi, and the pyomigi. Each type is decorated differently according to its purpose. The flag is estimated to have been a square over 4 ja on a side, with flame edges attached to one side, running top to bottom. The Large Banner used the Five Direction Colors based on the traditional principles of Yin-Yang and the Five Elements. Its patterns fall largely into four groups: the large obangi bear divine beasts symbolizing the Five Directions together with a Taoist amulet character; the medium obangi feature the spiritual generals who escort the Five Directions; the small obangi carry the Eight Trigrams; and the moongi shows a winged tiger keeping a tight watch on the army's gates. As a historical source for the coloring of the Large Banner, the hand-colored copy titled Gije, in the collection of the Osaka Prefectural Library, was confirmed to reflect the style of the Yongho Camp in the mid-to-late 18th century, and it was used for this essay and for the visualization work. Cloud-patterned satin damask was used as the background material for the reconstruction, to convey the dignity of the military. The 4 ja flag was determined to be 170 cm long and 145 cm wide, and the 5 ja flag 200 cm long and 175 cm wide, using the Youngjochuck conversion (1 ja = 30 cm). In addition, the hierarchical position of the Flag of the King among all flags of the late Joseon Dynasty was identified. Based on this historical study, the two types of Large Banner were visualized. The visualization considered the size of the flag, the decoration of the flagpole, and the patterns described in this essay, restoring them to their original shape with the 18th-century relics as reference. By presenting color, size, materials, patterns, and auxiliary items together, it became possible to produce not only 3D content but also physical reproductions.

A Study on Differences of Contents and Tones of Arguments among Newspapers Using Text Mining Analysis (텍스트 마이닝을 활용한 신문사에 따른 내용 및 논조 차이점 분석)

  • Kam, Miah;Song, Min
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.53-77
    • /
    • 2012
  • This study analyses the difference of contents and tones of arguments among three Korean major newspapers, the Kyunghyang Shinmoon, the HanKyoreh, and the Dong-A Ilbo. It is commonly accepted that newspapers in Korea explicitly deliver their own tone of arguments when they talk about some sensitive issues and topics. It could be controversial if readers of newspapers read the news without being aware of the type of tones of arguments because the contents and the tones of arguments can affect readers easily. Thus it is very desirable to have a new tool that can inform the readers of what tone of argument a newspaper has. This study presents the results of clustering and classification techniques as part of text mining analysis. We focus on six main subjects such as Culture, Politics, International, Editorial-opinion, Eco-business and National issues in newspapers, and attempt to identify differences and similarities among the newspapers. The basic unit of text mining analysis is a paragraph of news articles. This study uses a keyword-network analysis tool and visualizes relationships among keywords to make it easier to see the differences. Newspaper articles were gathered from KINDS, the Korean integrated news database system. KINDS preserves news articles of the Kyunghyang Shinmun, the HanKyoreh and the Dong-A Ilbo and these are open to the public. This study used these three Korean major newspapers from KINDS. About 3,030 articles from 2008 to 2012 were used. International, national issues and politics sections were gathered with some specific issues. The International section was collected with the keyword of 'Nuclear weapon of North Korea.' The National issues section was collected with the keyword of '4-major-river.' The Politics section was collected with the keyword of 'Tonghap-Jinbo Dang.' All of the articles from April 2012 to May 2012 of Eco-business, Culture and Editorial-opinion sections were also collected. 
All of the collected data were edited into paragraphs. We removed stop-words using the Lucene Korean Module. We calculated keyword co-occurrence counts from the paired co-occurrence list of keywords in a paragraph and built a co-occurrence matrix from the list. Once the co-occurrence matrix was built, we used the cosine coefficient matrix as input for PFNet (Pathfinder Network). In order to analyze the three newspapers and find the significant keywords in each paper, we examined the 10 highest-frequency keywords and the keyword networks of the 20 highest-frequency keywords, to closely examine their relationships and show a detailed network map among keywords. We used NodeXL software to visualize the PFNet. After drawing all the networks, we compared the results with the classification results. Classification was first performed to identify how the tone of argument of a newspaper differs from the others. To analyze tones of arguments, all the paragraphs were divided into two types of tones, positive and negative. To identify and classify the tones of all the paragraphs and articles we had collected, a supervised learning technique was used: the Naïve Bayes classifier provided in the MALLET package classified all the paragraphs in the articles. After classification, precision, recall, and F-value were used to evaluate the results. Based on the results of this study, three subjects, Culture, Eco-business, and Politics, showed differences in contents and tones of arguments among the three newspapers. In addition, for the National issues, tones of arguments on the 4-major-rivers project differed from each other. It seems the three newspapers each have their own specific tone of argument in those sections. The keyword networks also showed different shapes from each other in the same period and the same section, meaning that the keywords appearing frequently in articles differ and their contents are composed of different keywords. The positive-negative classification showed the possibility of classifying newspapers' tones of arguments in comparison with others. These results indicate that the approach in this study is promising as a new tool to identify the different tones of argument of newspapers.
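
The co-occurrence-to-cosine step of the pipeline described above can be sketched minimally in Python. The paragraphs below are toy English stand-ins for the Korean news paragraphs from KINDS; the actual study applied Lucene-based stop-word removal first.

```python
import math
from collections import Counter
from itertools import combinations

# Toy paragraphs (keyword lists after stop-word removal), standing in for the
# study's cleaned news paragraphs.
paragraphs = [
    ["nuclear", "north", "korea", "weapon"],
    ["nuclear", "test", "north"],
    ["river", "project", "budget"],
    ["river", "budget", "environment"],
]

# Pairwise co-occurrence counts within a paragraph, plus keyword frequencies.
cooc = Counter()
freq = Counter()
for p in paragraphs:
    uniq = sorted(set(p))
    freq.update(uniq)
    for a, b in combinations(uniq, 2):
        cooc[(a, b)] += 1

# Cosine coefficient between two keywords: cooc / sqrt(freq_a * freq_b).
# This matrix would then feed PFNet pruning and NodeXL visualization.
def cosine(a, b):
    key = (a, b) if a <= b else (b, a)
    return cooc[key] / math.sqrt(freq[a] * freq[b])

print(round(cosine("nuclear", "north"), 3))  # → 1.0: they always co-occur here
```

The PFNet step then keeps only the strongest paths in this similarity matrix, which is what makes the resulting keyword maps readable.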

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.1-17
    • /
    • 2017
  • The deep learning framework is software designed to help develop deep learning models. Some of its important functions include automatic differentiation and utilization of GPUs. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, Microsoft Cognitive Toolkit, was released under an open-source license, following Google's Tensorflow a year earlier. The early deep learning frameworks were developed mainly for research at universities. Beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given the trend, Google and other companies are expected to continue investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in a sense a predecessor of the other two. The most common and important function of deep learning frameworks is the ability to perform automatic differentiation. Basically, all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained, and with these partial derivatives, the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First of all, the convenience of coding is in the order of CNTK, Tensorflow, and Theano. This criterion is based simply on the length of the code; the learning curve and the ease of coding are not the main concern.
According to these criteria, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. The reason that CNTK and Tensorflow are easier to implement with is that those frameworks provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives us flexibility. With low-level coding such as in Theano, we can implement and test any new deep learning model or any new search method that we can think of. Our assessment of the execution speed of each framework is that there is no meaningful difference. According to the experiment, the execution speeds of Theano and Tensorflow are very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept identical: the code written in CNTK had to be run in a PC environment without a GPU, where code executes as much as 50 times slower than with a GPU. But we concluded that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate each framework. Some of the important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNNs, RNNs, and DBNs. If a user implements a large-scale deep learning model, support for multiple GPUs or multiple servers will also be important. And for those learning deep learning models, the availability of sufficient examples and references matters as well.
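
The automatic differentiation mechanism the abstract describes, partial derivatives on graph edges combined by the chain rule, can be sketched as a minimal reverse-mode example. This is an illustrative toy, not the actual API of Theano, Tensorflow, or CNTK.

```python
# Minimal reverse-mode automatic differentiation over a computational graph.
# Each node stores its value and, for each parent, the local partial derivative
# on that edge; backward() applies the chain rule along the edges.
class Node:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # list of (parent_node, local_partial)
        self.grad = 0.0

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Node(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Node(self.value * other.value,
                    [(self, other.value), (other, self.value)])

def backward(output):
    # Propagate d(output)/d(node) contributions along edges; contributions to
    # the same node are summed, which is exactly the multivariate chain rule.
    stack = [(output, 1.0)]
    while stack:
        node, grad = stack.pop()
        node.grad += grad
        for parent, local in node.parents:
            stack.append((parent, grad * local))

x, y = Node(3.0), Node(4.0)
z = x * y + x          # z = x*y + x, so dz/dx = y + 1 = 5, dz/dy = x = 3
backward(z)
print(x.grad, y.grad)  # → 5.0 3.0
```

Real frameworks do the same thing at scale: they record the graph of tensor operations and sweep it backward once to get gradients of the loss with respect to every weight.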

Identification of a Potexvirus in Korean Garlic Plants (한국 마늘 Potexvirus의 cDNA 유전자 분리 및 분포에 관한 연구)

  • Song, Jong-Tae;Choi, Jin-Nam;Song, Sang-Ik;Lee, Jong-Seob;Choi, Yang-Do
    • Applied Biological Chemistry
    • /
    • v.38 no.1
    • /
    • pp.55-62
    • /
    • 1995
  • To understand the molecular structure of Korean garlic viruses, cDNA cloning of virus genomic RNA was attempted. Virus particles were isolated from virus-infected garlic leaves and a cDNA library was constructed from garlic virus RNA. One of these clones, S81, selected by random sequencing, was identified as a member of the potexvirus group, distinct from the potyviruses and carlaviruses. The clone is 873 bp long and contains most of the coat protein (CP) coding region and the 3'-noncoding region, including the poly(A) tail. A putative polyadenylation signal sequence (AAUAAA) and the hexanucleotide motif (ACUUAA), a replicational cis-acting element conserved in the 3'-noncoding region of potexvirus RNAs, were identified. The clone S81 shows about 30-40% identity in both nucleotide and amino acid sequences with the CPs of potexviruses. The genome size of the virus was determined to be 7.46 knt by Northern blot analysis, which is longer than those of other potexviruses. The open reading frame encoding CP was expressed as a fusion protein (S81CP) in Escherichia coli, and the recombinant protein was purified by immobilized metal-binding affinity chromatography. A polyclonal antibody was raised against S81CP in rabbit to examine the occurrence of garlic potexvirus in Korean garlic plants by immunoblot analysis. Two virus protein bands of Mr 27,000 and 29,000 from garlic leaf extracts of various cultivars reacted with the antibody. The Mr 27,000 band appears not to be a degradation product of the Mr 29,000 band, suggesting that two types of potexvirus differing in coat protein size may exist in Korean garlic plants.


Data Mining Approaches for DDoS Attack Detection (분산 서비스거부 공격 탐지를 위한 데이터 마이닝 기법)

  • Kim, Mi-Hui;Na, Hyun-Jung;Chae, Ki-Joon;Bang, Hyo-Chan;Na, Jung-Chan
    • Journal of KIISE:Information Networking
    • /
    • v.32 no.3
    • /
    • pp.279-290
    • /
    • 2005
  • Recently, as the serious damage caused by DDoS attacks increases, rapid detection and proper response mechanisms are urgently needed. However, existing security mechanisms do not effectively defend against these attacks, or their defense capability is limited to specific DDoS attacks. In this paper, we propose a detection architecture against DDoS attacks using data mining technology that can classify the latest types of DDoS attack and can detect modifications of existing attacks as well as novel attacks. The architecture consists of a Misuse Detection Module, modeled to classify existing attacks, and an Anomaly Detection Module, modeled to detect novel attacks, and it utilizes models generated off-line to detect DDoS attacks in real-time traffic. We gathered NetFlow data generated at an access router of our network in order to model real network traffic and test the architecture. NetFlow provides useful flow-based statistical information without extensive preprocessing. We also deployed well-known DDoS attack tools to gather attack traffic. Our experimental results show that the approach provides outstanding performance against existing attacks and demonstrates the possibility of detecting novel attacks.
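
The two-module idea, a misuse model built from known attack traffic and an anomaly model built from normal traffic, can be illustrated with a toy sketch. The flow records and field names below are assumptions for illustration, not the paper's actual NetFlow attributes or models.

```python
# Toy NetFlow-like records (packets-per-second and SYN ratio are illustrative
# features, not the study's actual attribute set).
normal_flows = [{"pps": 40, "syn_ratio": 0.10}, {"pps": 55, "syn_ratio": 0.15}]
attack_flows = [{"pps": 900, "syn_ratio": 0.90}, {"pps": 1200, "syn_ratio": 0.95}]

# Misuse model: a crude signature learned off-line from known attacks
# (minimum observed value per feature).
misuse_sig = {k: min(f[k] for f in attack_flows) for k in ("pps", "syn_ratio")}

# Anomaly model: profile of normal traffic (per-feature mean); a large
# deviation from it flags a possibly novel attack.
mean = {k: sum(f[k] for f in normal_flows) / len(normal_flows)
        for k in ("pps", "syn_ratio")}

def detect(flow, factor=5.0):
    if all(flow[k] >= misuse_sig[k] for k in misuse_sig):
        return "known attack"   # matches an existing attack profile (misuse)
    if flow["pps"] > factor * mean["pps"]:
        return "novel attack"   # deviates sharply from normal traffic (anomaly)
    return "normal"

print(detect({"pps": 1000, "syn_ratio": 0.92}))  # → known attack
print(detect({"pps": 800, "syn_ratio": 0.20}))   # → novel attack
```

The key design point mirrors the paper's architecture: the misuse module catches what has been seen before, and the anomaly module provides a fallback for traffic that matches no known signature.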

Changes in Domestic Perception of Overseas Korean Cultural Heritage Explored through Exhibitions Held in Korea (국내 전시 사례로 본 국외 소재 한국 문화재에 대한 국내의 인식 변화)

  • Shin Soyeon
    • Bangmulgwan gwa yeongu (The National Museum of Korea Journal)
    • /
    • v.1
    • /
    • pp.330-355
    • /
    • 2024
  • There are two main perspectives in Korea on Korean cultural heritage located overseas: one views these items as objects that need to be repatriated since they were scattered abroad under unfortunate historical circumstances; the other considers them a means to more widely promote Korea's culture and long history. A shift in perspective has gradually been taking place in the decades since Korea's liberation from Japanese colonial rule in 1945. This can be traced through three major types of exhibitions. The first type is exhibitions of repatriated cultural heritage, showcasing items that were illegally removed from the country but later returned or otherwise acquired through purchase or donation. The Special Exhibition of Returned Cultural Heritage, held in 1966 on the occasion of the normalization of diplomatic relations between the Republic of Korea and Japan, emphasized the legitimacy of reclaiming cultural properties that were illegally removed from Korea during the period of Japanese colonial rule. Around the 1990s, special exhibitions of private donations were held, which also highlighted the legitimacy of repatriation. The special exhibition of the Oegyujanggak Uigwe (Royal Protocols of the Joseon Dynasty from the Outer Royal Library) held in 2011 was seen as an opportunity to raise public interest in repatriation, heal the wounds of history, and restore the nation's cultural pride. The second type of exhibition involves borrowing and displaying overseas Korean cultural heritage in accordance with a theme, as a means to reenergize and provide a comprehensive view of Korean culture. The exhibitions National Treasures from the Goryeo Dynasty in 1995 and National Treasures from the Early Joseon Dynasty in 1997 (both held at the Hoam Museum of Art) and Masterpieces of Goryeo Buddhist Painting held at the National Museum of Korea in 2010 underscored the importance of overseas Korean cultural heritage for exploring Korean cultural history.
The third type is special exhibitions on the history of the collection of Korean cultural heritage. With Korea's economic growth in the 1980s and the increase in exhibitions and the number of galleries featuring Korean cultural heritage in overseas museums in the 1990s, interest in the history of acquisition also grew. Exhibitions like The Korean Collection of the Peabody Essex Museum in 1994 and Korean Art from the United States in 2012 introduced overseas galleries focused on Korean art and the diverse history of collecting Korean cultural properties. They also examined the perception of Korean art in the United States. These efforts heightened public interest in establishing and supporting Korean galleries abroad. The initiation of more systematic surveys and research on Korean cultural heritage located abroad and the contribution of overseas Korean cultural heritage to the enhancement of the local understanding and promotion of Korean culture have resulted in changes to the perception of overseas Korean cultural heritage in Korea.

A Study on Ontology and Topic Modeling-based Multi-dimensional Knowledge Map Services (온톨로지와 토픽모델링 기반 다차원 연계 지식맵 서비스 연구)

  • Jeong, Hanjo
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.4
    • /
    • pp.79-92
    • /
    • 2015
  • Knowledge maps are widely used to represent knowledge in many domains. This paper presents a method of integrating the national R&D data and of helping users navigate the integrated data via a knowledge map service. The knowledge map service is built using a lightweight ontology and a topic modeling method. The national R&D data are integrated with the research project at the center; that is, the other R&D data such as research papers, patents, and reports are connected to the research project as its outputs. The lightweight ontology is used to represent the simple relationships among the integrated data, such as project-output relationships, document-author relationships, and document-topic relationships. The knowledge map enables us to infer further relationships such as co-author and co-topic relationships. To extract the relationships among the integrated data, a Relational Data-to-Triples transformer is implemented. Also, a topic modeling approach is introduced to extract the document-topic relationships. A triple store is used to manage and process the ontology data while preserving the network characteristics of the knowledge map service. Knowledge maps can be divided into two types: one is used in knowledge management to store, manage, and process an organization's data as knowledge; the other is used for analyzing and representing knowledge extracted from science & technology documents. This research focuses on the latter. In this research, a knowledge map service is introduced for integrating the national R&D data obtained from the National Digital Science Library (NDSL) and the National Science & Technology Information Service (NTIS), the two major repositories and services of national R&D data in Korea. A lightweight ontology is used to design and build the knowledge map.
Using the lightweight ontology enables us to represent and process knowledge as a simple network, which fits the knowledge navigation and visualization characteristics of the knowledge map. The lightweight ontology represents the entities and their relationships in the knowledge maps, and an ontology repository is created to store and process the ontology. In the ontologies, researchers are implicitly connected through the national R&D data via author relationships and performer relationships. A knowledge map for displaying researchers' networks is created from the co-authoring relationships of the national R&D documents and the co-participation relationships of the national R&D projects. To sum up, a knowledge map service system based on topic modeling and ontology is introduced for processing knowledge about the national R&D data such as research projects, papers, patents, project reports, and Global Trends Briefing (GTB) data. The system has three goals: 1) to integrate the national R&D data obtained from NDSL and NTIS, 2) to provide semantic and topic-based information search on the integrated data, and 3) to provide knowledge map services based on semantic analysis and knowledge processing. The S&T information such as research papers, research reports, patents, and GTB data is updated daily from NDSL, and the R&D project information, including participants and outputs, is updated from NTIS. The S&T information and the national R&D information are obtained and merged into the integrated database. The knowledge base is constructed by transforming the relational data into triples referencing the R&D ontology. In addition, a topic modeling method is employed to extract the relationships between the S&T documents and the topic keywords representing them.
The topic modeling approach enables us to extract these relationships and topic keywords based on semantics, not on simple keyword matching. Lastly, we present an experiment on the construction of the integrated knowledge base using the lightweight ontology and topic modeling, and we introduce the knowledge map services created on top of the knowledge base.
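
The Relational Data-to-Triples idea can be sketched minimally: database rows become (subject, predicate, object) triples, over which further relationships, such as the researcher networks, can be inferred. The table fields and predicate names below are illustrative, not the NDSL/NTIS schema or the paper's actual ontology.

```python
# Toy rows from an integrated R&D database (illustrative identifiers).
papers = [
    {"id": "paper:1", "project": "proj:A", "author": "Kim"},
    {"id": "paper:2", "project": "proj:A", "author": "Lee"},
]

# Relational Data-to-Triples step: each row yields project-output and
# document-author triples in a lightweight-ontology style.
triples = set()
for row in papers:
    triples.add((row["project"], "hasOutput", row["id"]))
    triples.add((row["id"], "hasAuthor", row["author"]))

# Inferred relationship: authors connected through a shared project,
# the kind of implicit link the knowledge map exposes as a researcher network.
authors_by_project = {}
for s, p, o in triples:
    if p == "hasOutput":
        for s2, p2, o2 in triples:
            if s2 == o and p2 == "hasAuthor":
                authors_by_project.setdefault(s, set()).add(o2)

print(sorted(authors_by_project["proj:A"]))  # → ['Kim', 'Lee']
```

A real triple store would answer this with a SPARQL join rather than nested loops, but the principle, inferring co-author-style links from explicit triples, is the same.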

Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.89-105
    • /
    • 2014
  • After the emergence of the Internet, social media with highly interactive Web 2.0 applications has provided a very user-friendly means for consumers and companies to communicate with each other. Users routinely publish contents involving their opinions and interests in social media such as blogs, forums, chat rooms, and discussion boards, and the contents are released in real time on the Internet. For that reason, many researchers and marketers regard social media contents as a source of information for business analytics to develop business insights, and many studies have reported results on mining business intelligence from social media content. In particular, opinion mining and sentiment analysis, as techniques to extract, classify, understand, and assess the opinions implicit in text contents, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques, and tools have been presented by these researchers. However, we have found some weaknesses in their methods, which are often technically complicated and are not sufficiently user-friendly for helping business decisions and planning. In this study, we attempted to formulate a more comprehensive and practical approach to conduct opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining using social media content, from the initial data gathering stage to the final presentation session. Our proposed approach consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts have to choose the target social media. Each target medium requires a different way for analysts to gain access: open APIs, search tools, DB2DB interfaces, purchased contents, and so on. The second phase is pre-processing to generate useful materials for meaningful analysis.
If we do not remove garbage data, the results of social media analysis will not provide meaningful and useful business insights. To clean social media data, natural language processing techniques should be applied. The next step is the opinion mining phase, where the cleansed social media content set is analyzed. The qualified data set includes not only user-generated contents but also content identification information such as creation date, author name, user id, content id, hit counts, review or reply, favorites, etc. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool. Topic extraction and buzz analysis are usually related to market trend analysis, while sentiment analysis is utilized to conduct reputation analysis. There are also various applications, such as stock prediction, product recommendation, sales forecasting, and so on. The last phase is visualization and presentation of the analysis results. The major purpose of this phase is to explain the results of the analysis and help users comprehend their meaning. Therefore, to the extent possible, deliverables from this phase should be made simple, clear, and easy to understand, rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company. We targeted the market leader, NS Food, with a 66.5% market share; the firm has kept the No. 1 position in the Korean "Ramen" business for several decades. We collected a total of 11,869 pieces of content, including blogs, forum contents, and news articles. After collecting the social media content data, we generated instant-noodle-business-specific language resources for data manipulation and analysis using natural language processing. In addition, we classified contents into more detailed categories such as marketing features, environment, reputation, etc.
In that phase, we used free software including the tm, KoNLP, ggplot2, and plyr packages from the R project. As a result, we present several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, valence tree maps, and other visualized images, providing vivid, full-colored examples built with the open library packages of the R project. Business actors can detect at a glance which areas are weak, strong, positive, negative, quiet, or loud. A heat map can show the movement of sentiment or volume in a category-by-time matrix, where the density of color indicates the level in each time period. The valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers to quickly understand the "big picture" of the business situation, since a tree map can present buzz volume and sentiment in a hierarchical structure for a given period. This case study offers real-world business insights from market sensing, demonstrating to practical-minded business users how they can use these types of results for timely decision making in response to ongoing changes in the market. We believe our approach can provide a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well.
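
The core of the analyzing phase, lexicon-based sentiment polarity feeding the volume and sentiment graphs, can be sketched in Python (the study itself used R). The lexicon and posts below are toy English stand-ins for the domain-specific Korean lexicon and the 11,869 collected contents.

```python
from collections import Counter

# Toy domain-specific sentiment lexicon (illustrative, not the study's lexicon).
positive = {"delicious", "tasty", "love"}
negative = {"salty", "bland", "expensive"}

# Toy social media posts standing in for the collected noodle-related contents.
posts = [
    "love this ramen so delicious",
    "too salty and expensive",
    "tasty but a little salty",
]

def polarity(text):
    # Score = positive-word hits minus negative-word hits.
    words = text.split()
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# Volume per sentiment class: the raw numbers behind a sentiment/volume graph
# or a heat map cell for one time period.
volume = Counter(polarity(p) for p in posts)
print(dict(volume))
```

Aggregating these counts by category and time period yields exactly the matrix a heat map or valence tree map visualizes.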

A Study on the Curriculum for Record Management Science Education - with focus on the Faculty of Cultural Information Resources, Surugadai University; Evolving Program, New Connections (기록관리학의 발전을 위한 교육과정연구 -준하태(駿河台)(스루가다이)대학(大學)의 경우를 중심(中心)으로-)

  • Kim, Yong-Won
    • Journal of Korean Society of Archives and Records Management
    • /
    • v.1 no.1
    • /
    • pp.69-94
    • /
    • 2001
  • The purpose of this paper is to provide an overview of the current status of records management science education in Japan, and to examine the implications of the rapid growth of this field while noting some of its significant issues and problems. The goal of records management science education is to improve the quality of information services and to assure an adequate supply of information professionals. Because records management science programs prepare students for a professional career, their curricula must encompass elements of both education and practical training. This is often expressed as a contrast between theory and practice; both are affected by the confluence of the social, economic, and technological realities of the environment where the learning takes place. This paper reviews the historical background and current trends of records management science education in Japan. It also analyzes the various types of curriculum and the teaching staff of these institutions, with a focus on the status of the undergraduate program at Surugadai University, the first comprehensive, university-level program in Japan. The Faculty of Cultural Information Resources at Surugadai University, a new school oriented toward integrated information disciplines, was opened in 1994 to explore the theory and practice of the management of diverse cultural information resources. Its purpose was to stimulate and promote research in additional fields of information science by offering professional training in archival science, records management, and museum curatorship, as well as librarianship. In 1999, the school introduced a master's program, the first in Japan.
The Faculty has two departments, each offering two courses: the Department of Sensory Information Resources Management (Sound and Audiovisual Information Management; Landscape and Tourism Information Management) and the Department of Knowledge Information Resources Management (Library and Information Management; Records and Archives Management). The curriculum is organized in stages from the time of entrance, through basic instruction, and onward. Orientation subjects, taken immediately upon entering the university, introduce specialized education and the basic methods of university study. During the first and second years, students take Basic and Core courses as essential steps toward specialization; for this purpose, the courses offer a wide variety of study topics, and the number of courses offered amounts to approximately 150. From the third year onward, students begin specific courses in their major field and, through a gradual accumulation of seminar classes and practical training, put the knowledge gained to practical use. Courses pertaining to the two departments are offered to students beginning in their second year. However, there is no impenetrable wall between the departments, and there are only minor differences with regard to requirements for graduation. Students may select third- or fourth-year seminars regardless of the department to which they belong. To be awarded a B.A. in Cultural Information Resources, a student is required to earn 34 credits in Basic Courses (such as Social History of Cultural Information, Cultural Anthropology, History of Science, Behavioral Sciences, and Communication), 16 credits in Foreign Languages (including 10 in English), 14 credits in Information Processing (including both theory and practice), and 60 credits in the courses for his or her major.
Finally, several of the issues and problems currently facing records management science education in Japan are briefly summarized below: integration and incorporation of related areas and similar programs; curriculum improvement; insufficiency of textbooks; lack of qualified teachers; and problems with the employment of graduates. As we move toward more sophisticated, integrated, multimedia information services, information professionals will need to work more closely with colleagues in other specialties. It will become essential to the survival of the information professions for librarians to work with archivists, records managers, and museum curators. Managing the changes in our increasingly information-intensive society demands strong coalitions among everyone in cultural institutions. To provide our future colleagues with these competencies will require building and strengthening partnerships within and across the information professions and across national borders.