• Title/Summary/Keyword: Model Selection (모델선택)


A Study on the Success Factors of Co-Founding Start-up by Step: Focusing on the Case of Opportunity-type Start-up (공동창업의 단계별 성공요인에 관한 연구: 기회형 창업기업 사례를 중심으로)

  • Yun, Seong Man;Sung, Chang Soo
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.18 no.1 / pp.141-158 / 2023
  • From the perspective of an entrepreneur, one of the most important factors for understanding the inherent limitations of a startup, reducing the risk of failure, and succeeding is the composition of the founding team. A common concern of entrepreneurs in the pre-entrepreneurship stage or the early startup stage is therefore the choice between founding alone and co-founding. Nonetheless, in Korea the share of independent entrepreneurship is significantly higher than that of co-founding. Motivated by the fact that many successful global innovative companies were co-founded, this study examines the success factors of co-founding. Most related prior studies identify the capabilities and characteristics of individual entrepreneurs as factors influencing the survival and success of a venture, and there is little research on partnerships, that is, co-founding, which is common in the entrepreneurial ecosystem. This study therefore conducted a multi-case study of co-founders of domestic startups that succeeded as opportunity-type ventures, using in-depth interviews, collection of relevant data, analysis of contextual information, and review of previous studies. On this basis, a model for deriving the stage-by-stage characteristics and key success factors of co-founding was proposed. The results show that the key element of the pre-startup stage is 'opportunity', and the success factors are 'opportunity recognition through the entrepreneur's experience' and 'idea development'. The key element of the early startup stage is the 'founding team', and the success factor is 'trust and complementarity within the founding team'; synergy appears when the 'diversity and homogeneity of the founding team' are in harmony. Conflicts between co-founders may also occur in the early stage, and these strongly affect the survival of the startup. Such conflicts could be overcome through constant 'mutual understanding and respect through communication' and a 'clear division of work and roles'. The core element of the growth stage was confirmed to be 'resources', with 'securing excellent talent' and 'raising external funds' as the important success factors. These results are expected to help prospective entrepreneurs overcome the limitations of startups, such as limited resources, lack of experience, and risk of failure, at a time when co-founding is attracting attention as one way to increase the success rate, and they carry implications for various stakeholders in the entrepreneurial ecosystem.


Exploring Pre-Service Earth Science Teachers' Understandings of Computational Thinking (지구과학 예비교사들의 컴퓨팅 사고에 대한 인식 탐색)

  • Young Shin Park;Ki Rak Park
    • Journal of the Korean earth science society / v.45 no.3 / pp.260-276 / 2024
  • The purpose of this study is to explore whether pre-service teachers majoring in earth science improve their perception of computational thinking through STEAM classes focused on engineering-based wave power plants. The STEAM class involved designing the most efficient wave power plant model. A survey on computational thinking practices, developed from previous research, was administered to 15 earth science pre-service teachers to gauge their understanding of computational thinking. Each group developed an efficient wave power plant model based on the scientific principle of turbine operation using waves. The activities included problem recognition (problem solving), coding (coding and programming), creating a wave power plant model using a 3D printer (design and create model), and evaluating the output to correct errors (debugging). The pre-service teachers showed a high level of recognition of computational thinking practices, particularly in "logical thinking," with the top five of the 14 practices averaging five points each. However, participants lacked a clear understanding of certain computational thinking practices such as abstraction, problem decomposition, and using big data, and their comprehension of these decreased after the STEAM lesson. Although there was a significant reduction in the misconception that computational thinking is "playing online games" (from 4.06 to 0.86), some participants still equated it with "thinking like a computer" and "using a computer to do calculations". The study found slight improvements in "problem solving" (3.73 to 4.33), "pattern recognition" (3.53 to 3.66), and "best tool selection" (4.26 to 4.66). To enhance computational thinking skills, a practice-oriented curriculum should be offered, and additional STEAM classes on diverse topics could lead to a significant improvement in computational thinking practices. Therefore, establishing an educational curriculum for multi-situational learning is essential.

A Study of Organic Matter Fraction Method of the Wastewater by using Respirometry and Measurements of VFAs on the Filtered Wastewater and the Non-Filtered Wastewater (여과한 하수와 하수원액의 VFAs 측정과 미생물 호흡률 측정법을 이용한 하수의 유기물 분액 방법에 관한 연구)

  • Kang, Seong-wook;Cho, Wook-sang
    • Journal of the Korea Organic Resources Recycling Association / v.17 no.1 / pp.58-72 / 2009
  • In this study, the organic matter and biomass were characterized using respirometry based on ASM No.2d (Activated Sludge Model No.2d). The activated sludge model used is ASM No.2d, published by the IAWQ (International Association on Water Quality) task group on mathematical modeling for design and operation of biological wastewater treatment processes. For this study, OUR (Oxygen Uptake Rate) measurements were made on filtered as well as non-filtered wastewater. In addition, GC-FID and LC analyses were applied to estimate the VFAs (Volatile Fatty Acids) COD (S_A) within the readily biodegradable soluble substrates of ASM No.2d. The study was thus intended to clearly distinguish the readily biodegradable dissolved material (S_S) from the inert particulate material (X_I). In addition, a method for determining the accurate time to measure the non-biodegradable soluble COD (S_I), based on the change in the transition curves during the OUR measurement, is presented. Influent fractionation is a critical step in model calibration. From the results of respirometry on filtered wastewater, the fractions of fermentable and readily biodegradable organic matter (S_F), fermentation products (S_A), inert soluble matter (S_I), slowly biodegradable matter (X_S), and inert particulate matter (X_I) were 33.2%, 14.1%, 6.9%, 34.7%, and 5.8%, respectively. The active heterotrophic biomass fraction (X_H) was about 5.3%.
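As a quick arithmetic illustration of how such a fractionation feeds into model calibration, the sketch below converts the reported percentage fractions into concentrations for a hypothetical influent total COD and checks that the fractions close to 100%; the total COD value is an assumption for illustration, not a figure from the paper.

```python
# Hypothetical illustration: convert the reported ASM No.2d fractions into
# influent concentrations and check closure. The total COD below is assumed;
# only the percentage fractions come from the abstract.
TOTAL_COD = 500.0  # mg/L, assumed influent total COD (not from the paper)

fractions = {          # % of total COD, as reported for filtered wastewater
    "S_F": 33.2,       # fermentable, readily biodegradable
    "S_A": 14.1,       # fermentation products (VFAs)
    "S_I":  6.9,       # inert soluble
    "X_S": 34.7,       # slowly biodegradable
    "X_I":  5.8,       # inert particulate
    "X_H":  5.3,       # active heterotrophic biomass
}

concentrations = {name: TOTAL_COD * pct / 100.0 for name, pct in fractions.items()}
for name, conc in concentrations.items():
    print(f"{name}: {conc:.1f} mg COD/L")

closure = sum(fractions.values())
print(f"sum of fractions = {closure:.1f} %")  # 100.0 % if the balance closes
```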

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently, AlphaGo, the Go-playing artificial intelligence program developed by Google DeepMind, won a decisive victory over Lee Sedol. Many people believed that a machine could not beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what was predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and shows good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled. In contrast, however, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigate whether the deep learning techniques studied so far can be used not only for the recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compare their performance with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal. They include input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer opened an account. To evaluate the applicability of deep learning to binary classification, we compared models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with MLP models, a traditional artificial neural network. Since not all network design alternatives can be tested, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of dropout. The F1 score was used to evaluate the models, to show how well they classify the class of interest rather than overall accuracy. The details of applying each deep learning technique are as follows. The CNN algorithm recognizes features by reading values adjacent to a given value, but in business data the distance between fields carries little meaning because the fields are usually independent. In this experiment, we therefore set the CNN filter size to the number of fields, so that the whole record is learned at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of field position. For the dropout technique, each hidden-layer neuron was dropped with a probability of 0.5. The experimental results show that the model with the highest F1 score was the CNN model using dropout, followed by the MLP model with two hidden layers using dropout. Several findings emerged. First, models using dropout make slightly more conservative predictions than those without it and generally show better classification performance. Second, CNN models show better classification performance than MLP models; this is interesting because the CNN performed well in a binary classification setting to which it has rarely been applied, as well as in fields where its effectiveness has been proven. Third, the LSTM algorithm appears unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to business binary classification problems.
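As a concrete reading of the CNN-with-dropout setup described above, the following is a minimal sketch rather than the authors' implementation: it assumes a numeric feature matrix with a binary target (synthetic data stand in here for the Portuguese bank telemarketing set), sets the Conv1D kernel size to the number of fields, applies dropout at 0.5, and reports the F1 score; layer widths, epochs, and other hyperparameters are illustrative guesses.

```python
# Minimal sketch (not the authors' code): CNN with filter width = number of
# fields, one hidden layer, dropout 0.5, evaluated with the F1 score.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import f1_score
from tensorflow.keras import layers, models

# Synthetic stand-in for the preprocessed telemarketing features / response flag.
X, y = make_classification(n_samples=2000, n_features=16, random_state=0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

n_fields = X_tr.shape[1]
model = models.Sequential([
    layers.Input(shape=(n_fields, 1)),
    # One convolution window spanning all fields, so the whole record is read at once.
    layers.Conv1D(filters=32, kernel_size=n_fields, activation="relu"),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),   # additional hidden layer
    layers.Dropout(0.5),                   # each neuron dropped with probability 0.5
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X_tr[..., np.newaxis], y_tr, epochs=20, batch_size=64, verbose=0)

pred = (model.predict(X_te[..., np.newaxis]) > 0.5).astype(int).ravel()
print("F1 score:", f1_score(y_te, pred))   # metric used in the study
```

Replacing the convolutional block with a plain stack of Dense layers would give the MLP baseline the study compares against.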

$^1H$ MR Spectroscopy of the Normal Human Brains: Comparison between Signa and Echospeed 1.5 T System (정상 뇌의 수소 자기공명분광 소견: 1.5 T Signa와 Echospeed 자기공명영상기기에서의 비교)

  • Kang Young Hye;Lee Yoon Mi;Park Sun Won;Suh Chang Hae;Lim Myung Kwan
    • Investigative Magnetic Resonance Imaging / v.8 no.2 / pp.79-85 / 2004
  • Purpose : To evaluate the usefulness and reproducibility of $^1H$ MRS on different 1.5 T MR machines with different coils by comparing the SNR, scan time, and spectral patterns in different brain regions of normal volunteers. Materials and Methods : Localized $^1H$ MR spectroscopy ($^1H$ MRS) was performed in a total of 10 normal volunteers (age 20-45 years) with spectral parameters adjusted by the auto-prescan routine (PROBE package). In all volunteers, MRS was performed three times: on a conventional system (Signa Horizon) with a 1-channel coil and on an upgraded system (Echospeed plus with EXCITE) with both 1-channel and 8-channel coils. Using these three machine-coil combinations, the SNRs of the spectra in both the phantom and the volunteers and the (pre)scan times were compared. Two regions of the brain (basal ganglia and deep white matter) were examined, and relative metabolite ratios (NAA/Cr, Cho/Cr, and mI/Cr) were measured in all volunteers. For all spectra, a STEAM localization sequence with three-pulse CHESS $H_2O$ suppression was used, with the following acquisition parameters: TR=3.0/2.0 sec, TE=30 msec, TM=13.7 msec, SW=2500 Hz, SI=2048 pts, AVG=64/128, and NEX=2/8 (Signa/Echospeed). Results : The SNR was over $30\%$ higher on the Echospeed machine, and the prescan and scan times were almost the same across machines and coils. Reliable spectra were obtained on both MRS systems, and there were no significant differences in spectral patterns or relative metabolite ratios in the two brain regions (p>0.05). Conclusion : Both the conventional and the new MRI system are highly reliable and reproducible for $^1H$ MR spectroscopic examinations of the human brain, and there are no significant differences between the two systems in $^1H$ MRS applications.
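The "no significant difference (p>0.05)" comparison between the two systems could be reproduced, for any one metabolite ratio, with a simple paired test across the 10 volunteers; the sketch below is a generic illustration with made-up NAA/Cr values, not the paper's data or necessarily its exact statistical procedure.

```python
# Minimal sketch of a paired comparison of one metabolite ratio measured on the
# two MR systems; the values below are hypothetical placeholders, not study data.
from scipy import stats

naa_cr_signa     = [1.52, 1.61, 1.48, 1.55, 1.60, 1.47, 1.58, 1.50, 1.56, 1.53]
naa_cr_echospeed = [1.50, 1.63, 1.49, 1.57, 1.58, 1.46, 1.60, 1.52, 1.54, 1.55]

# Paired t-test across the 10 volunteers; p > 0.05 would be consistent with
# "no significant difference" between the two MR systems.
t_stat, p_value = stats.ttest_rel(naa_cr_signa, naa_cr_echospeed)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```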

  • PDF

The Causes of Conflict and the Effect of Control Mechanisms on Conflict Resolution between Manufacturer and Supplier (제조-공급자간 갈등 원인과 거래조정 방식의 갈등관리 효과)

  • Rhee, Jin Hwa
    • Journal of Distribution Research / v.17 no.4 / pp.55-80 / 2012
  • I. Introduction Developing relationships between companies is a very important issue for ensuring competitive advantage in today's business environment (Bleeke & Ernst 1991; Mohr & Spekman 1994; Powell 1990). Partnerships between companies are based on shared goals, mutual understanding, and a professional level of interdependence. Through such partnerships and cooperative efforts, companies can achieve efficiency and effectiveness in their business (Mohr and Spekman, 1994). However, these ideal results cannot be expected from the B2B transaction alone. According to agency theory, a well-accepted theory in business strategy, organization, and marketing, two independent companies have fundamentally different corporate purposes. There is also a higher chance of opportunism and conflict arising from human (organizational) characteristics such as self-interest, bounded rationality, and risk aversion, and from environmental factors such as information asymmetry (Eisenhardt 1989). That is, especially in partnerships between the principal (buyer) and the agent (supplier) within a supply chain, the business contract itself does not provide competitive advantage; managing the partnership is the key to success. Therefore, managing the partnership between manufacturer and supplier and identifying the causes of conflict are essential to improving B2B performance. In conclusion, based on prior research and agency theory, this study clarifies how business hazards cause conflict in the supply chain and identifies how the resulting conflict is managed by two control mechanisms. II. Research Model III. Method To validate our research model, we gathered questionnaires from small and medium-sized enterprises (SMEs). In Korea, SMEs are firms with fewer than 300 employees and capital under 8 billion won (about 7.2 million dollars). We asked about the manufacturer's perception of the relationship with its biggest supplier, and our key informants were limited to persons responsible for buying (e.g., CEOs, executives, and purchasing managers). In detail, we contacted our initial sample (about 1,200 firms) by telephone, introduced our research motivation, and distributed questionnaires by e-mail, mail, and direct survey. We received 361 responses, eliminated 32 inappropriate questionnaires, and used data from 329 manufacturers in the analysis. The purpose of this study is to identify the antecedent role of business hazards (environmental dynamism, asset specificity) and to investigate the moderating effect of control mechanisms (formal control, social control) on the conflict-performance relationship. To find the moderating effect of the control methods, we compared the regression weights between the low and high groups (in terms of the level of control exercised). We therefore chose structural equation modeling, which is suited to multi-group analysis. The data analysis was performed with AMOS 17.0, and the model fit is statistically good (CMIN/DF=1.982, p<.000, CFI=.936, IFI=.937, RMSEA=.056). IV. Results V. Discussion The results show that the higher the environmental dynamism and asset specificity (toward a particular supplier) a buyer (manufacturer) faces, the more B2B conflict exists, and this conflict negatively affects relationship quality and financial outcomes. In addition, social control and formal control significantly weaken the negative effect of conflict on relationship quality. However, unlike their conflict-resolution effect on relationship quality, financial outcomes were changed by neither social control nor formal control. We can explain this result by the characteristics of our sample, SMEs: the financial outcomes of these SMEs (manufacturers, or principals) are affected by their customers (usually major companies) more readily than by their suppliers (agents). Moreover, in recent years most companies have suffered financial problems because of the global economic recession, which makes it hard to evaluate the contribution of the supplier (agent). We therefore also support the suggestion of Gladstein (1984) and Poppo & Zenger (2002) that a relational performance variable can capture the focal outcomes of a relationship (exchange) better than a financial performance variable. This study has implications in that it empirically tests the sources of B2B conflict and the effect of conflict-resolution methods, and, in particular, it finds a significant moderating effect of formal control, which past B2B management studies in Korea have ignored.
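The study tests moderation via multi-group SEM in AMOS 17.0; as a simplified, hypothetical illustration of the same idea (a control mechanism weakening the conflict-performance link), one could fit an interaction model on synthetic data with statsmodels, as sketched below. The variable names, coefficients, and data-generating process are invented for illustration and do not reproduce the authors' analysis.

```python
# Simplified, hypothetical illustration of testing a moderating (interaction)
# effect; the study itself used multi-group SEM in AMOS, not this approach.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 329  # same sample size as the study; the data here are synthetic
conflict = rng.normal(size=n)
formal_control = rng.normal(size=n)
# Assumed data-generating process: formal control buffers the negative effect
# of conflict on relationship quality (positive interaction term).
relationship_quality = (-0.5 * conflict + 0.3 * formal_control
                        + 0.2 * conflict * formal_control
                        + rng.normal(scale=0.5, size=n))

df = pd.DataFrame({"conflict": conflict,
                   "formal_control": formal_control,
                   "relationship_quality": relationship_quality})

# A positive conflict:formal_control coefficient indicates that formal control
# weakens the negative conflict -> relationship-quality effect.
model = smf.ols("relationship_quality ~ conflict * formal_control", data=df).fit()
print(model.summary())
```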


The Ability of Anti-tumor Necrosis Factor Alpha (TNF-${\alpha}$) Antibodies Produced in Sheep Colostrums

  • Yun, Sung-Seob
    • 한국유가공학회:학술대회논문집 / 2007.09a / pp.49-58 / 2007
  • The inflammatory process leads to the well-known mucosal damage and thus to a further disturbance of the epithelial barrier function, resulting in abnormal intestinal wall function and further accelerating the inflammatory process [1]. Despite these observations, the etiology and pathogenesis of IBD remain rather unclear. Many studies over the past several years have led to great advances in understanding inflammatory bowel disease (IBD) and its underlying pathophysiologic mechanisms. From the current understanding, chronic inflammation in IBD is likely due to aggressive cellular immune responses, including increased serum concentrations of various cytokines. Targeted molecules can therefore be specifically suppressed in their expression directly at the transcriptional level, and interesting therapeutic trials are expected against adhesion molecules and pro-inflammatory cytokines such as TNF-${\alpha}$. The future development of immune therapies for IBD therefore holds great promise for better treatment modalities and will also open important new insights into the pathophysiology of inflammation. Cytokine inhibitors such as Immunex (Enbrel) and J&J/Centocor (Remicade), mouse-derived monoclonal antibodies, have been shown in several studies to modulate patients' symptoms; however, these TNF inhibitors also have adverse immune-related effects, are costly, and must be administered by injection. Because of the eventual development of unwanted side effects, these two products are used only in a select patient population. The present study was performed to elucidate the ability of TNF-${\alpha}$ antibodies produced in sheep colostrum to neutralize TNF-${\alpha}$ action in a cell-based bioassay and in a small-animal model of intestinal inflammation. In the in vitro study, the inhibitory effect of the anti-TNF-${\alpha}$ antibody from sheep was determined by a cell bioassay. The antibody, at a 1 in 10,000 dilution, was able to completely inhibit TNF-${\alpha}$ activity in the cell bioassay. Antibodies from the same sheep but different milkings exhibited some variability in the inhibition of TNF-${\alpha}$ activity, but all were greater than the control sample. In the in vivo study, the degree of inflammation was too severe for the experiment; despite the initial pilot trial, main trial 1 was unable to detect any effect of the antibody in reducing the impact of PAF and LPS. Main rat trial 2 produced no significant symptoms such as the acute diarrhea and weight loss characteristic of colitis. This study suggests that colostrum from sheep immunized against TNF-${\alpha}$ significantly inhibited TNF-${\alpha}$ bioactivity in the cell-based assay, while the higher-than-anticipated variability in the two animal models precluded assessment of the ability of the antibody to prevent TNF-${\alpha}$-induced intestinal damage in the intact animal. Further study will be required to find an alternative animal model more suitable for testing anti-TNF-${\alpha}$ IgA therapy for reducing the impact of inflammation on gut dysfunction, and subsequent pre-clinical and clinical testing will also require generation of more antibody, as current supplies are low.


The Abuse and Invention of Tradition from Maintenance Process of Historic Site No.135 Buyeo Gungnamji Pond (사적 제135호 부여 궁남지의 정비과정으로 살펴본 전통의 남용과 발명)

  • Jung, Woo-Jin
    • Journal of the Korean Institute of Traditional Landscape Architecture / v.35 no.2 / pp.26-44 / 2017
  • Regarded as a traditional Korean pond, Gungnamji Pond was presumed to be the 'Gungnamji' of the historical records because of its position south of Hwajisan (花枝山) and because relics at Gwanbuk-ri (官北里) were suspected of corresponding to King Muwang (武王)'s pond in The Chronicles of the Three States [三國史記] and to Sabi Palace, respectively; on this basis it was restored after being designated a national historic site. This study focuses on the distortion of authenticity and the invention of tradition identified in the course of the 'Gungnamji Pond' restoration, and its conclusions are summarized as follows. 1. Once called Maraebangjuk (마래방죽), or Macheonji (馬川池) Pond, Gungnamji Pond existed during the Japanese colonial period as a low-lying swamp covering a vast area of about 30,000 pyeong. Hong Sa-jun, who played a leading role in the restoration of 'Gungnamji Pond', said that even in the 1940s the remains of an island and stone facilities suspected of being relics of the Baekje-period Gungnamji Pond were found, and that traces of a royal palace and garden were discovered on top of them. Hong Sa-jun also proposed linking 'Gungnamji Pond' with 'Maraebangjuk' through the 'tale of Seodong [薯童說話]' and the detached palace of Hwajisan, which ultimately served as the theoretical ground for the restoration of Gungnamji Pond. Judging from Hong Sa-jun's sketch, the form and scale of Maraebangjuk are discernible, and the form closely matches that photographed during the Japanese colonial period. 2. The restoration of Gungnamji Pond was scaled down because of the land redevelopment project implemented in the 1960s, as the remaining land area attests. The fundamental problem in the restorations repeatedly attempted from 1964 through 1967 was that the work was based not on archaeological facts but on the perspective of later generations, ultimately yielding a replica of Hyangwonji Pond of Gyeongbok Palace. More specifically, the methods of placing an island and a pavilion within the pond and of bridging the island to the land show how Gungnamji Pond was modeled after Hyangwonji Pond of Gyeongbok Palace. Furthermore, Chihyanggyo (醉香橋) Bridge, which was referenced in designing the bridge, can hardly be regarded as a form indigenous to the Joseon Dynasty, and the motivation and ideas behind the misguided restoration design of the time further devalued Gungnamji Pond. Such outright replication of a design widely known as an ingredient of the traditional landscape was aimed at the aesthetic symbolism and preferences embodied in Gyeongbok Palace, and was intended to grant Gungnamji Pond a physical status on par with that of Gyeongbok Palace. 3. Detached from authenticity as a historic site from the outset, Gungnamji Pond came to embody distortions of landscape beauty and tradition through the restoration process. The restoration of such a historical monument, devoid of constructive use and certainly distorted, is closely tied to the nationalistic cultural policy promoted by the Park Jeong-hee regime through the 1960s and 1970s. In the context of 'manipulated discussions of tradition', Park's cultural policy transformed citizens' recollections into an idealized form of the past and magnified it. Consequently, historic sites across the nation were made as fancy and grand as possible, far beyond their original state, and 'Gungnamji Pond' fell victim to this monopolistic, government-led cultural policy, which incrementally swept sites away with new buildings and structures erected regardless of the original space, and hence its value.

The Ontology Based, the Movie Contents Recommendation Scheme, Using Relations of Movie Metadata (온톨로지 기반 영화 메타데이터간 연관성을 활용한 영화 추천 기법)

  • Kim, Jaeyoung;Lee, Seok-Won
    • Journal of Intelligence and Information Systems / v.19 no.3 / pp.25-44 / 2013
  • Access to movie content has become easier and has increased with the advent of smart TV, IPTV, and web services that can be used to search for and watch movies. In this situation, users increasingly search for content that matches their preferences. However, because the amount of available movie content is so large, users need considerable effort and time to find it. Hence, there is much research on personalized item recommendation through the analysis and clustering of user preferences and user profiles. In this study, we propose a recommendation system that uses an ontology-based knowledge base. Our ontology can represent not only relations between movie metadata but also relations between metadata and the user profile, and the relations between metadata items indicate similarity between movies. To build the knowledge base, our ontology model considers two aspects: the movie metadata model and the user model. For the ontology-based movie metadata model, we selected genre, actor/actress, keywords, and synopsis as the main metadata, since these affect users' choice of movies. The user model contains the user's demographic information and the relations between the user and movie metadata. In our model, the movie ontology consists of seven concepts (Movie, Genre, Keywords, Synopsis Keywords, Character, and Person), eight attributes (title, rating, limit, description, character name, character description, person job, person name), and ten relations between concepts. For the knowledge base, we entered individual data on 14,374 movies for each concept of the contents ontology model. This movie metadata knowledge base is used to search for movies related to the metadata a user is interested in, and it can find similar movies through the relations between concepts. We also propose an architecture for movie recommendation consisting of four components. The first component searches for candidate movies based on the user's demographic information; here we define rules to assign users to groups according to demographic information and generate the query used to search for candidate movies for each group. The second component searches for candidate movies based on user preferences: when choosing a movie, users consider metadata such as genre, actor/actress, synopsis, and keywords, so users enter their preferences and the system searches for movies based on them. Unlike existing movie recommendation systems, the proposed system can find similar movies through the relations between concepts. Each metadata item of the recommended candidate movies has a weight that is used to decide the recommendation order. The third component merges the results of the first and second components; in this step we calculate each movie's weight from the weight values of its metadata and sort the movies by weight. The fourth component analyzes the result of the third component, decides the level of contribution of each metadata item, and applies the contribution weight to the metadata; the result of this step is used as the recommendation for users. We tested the usability of the proposed scheme with a web application implemented for the experiment using JSP, JavaScript, and the Protégé API. In the experiment, we collected results from 20 men and women aged 20 to 29 and used 7,418 movies with a rating of at least 7.0. We presented Top-5, Top-10, and Top-20 recommended movies to each user, who then chose the movies they found interesting. On average, users chose 2.1 interesting movies in Top-5, 3.35 in Top-10, and 6.35 in Top-20, which is better than the results yielded by using each metadata item alone.
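To make the merge-and-rank step (the third component) concrete, the following is a minimal, hypothetical sketch: the metadata weights, movie titles, and match counts are invented for illustration and do not come from the paper, which derives its weights from the ontology relations and the user profile.

```python
# Hypothetical sketch of the merge-and-rank step (third component):
# metadata weights, movie data, and candidate sets below are illustrative only.
from typing import Dict, List

# Assumed per-metadata weights (not the paper's actual values).
METADATA_WEIGHTS = {"genre": 0.4, "actor": 0.3, "keyword": 0.2, "synopsis": 0.1}

def score_movie(matched_metadata: Dict[str, int]) -> float:
    """Weighted sum over metadata types: each matched genre/actor/keyword/
    synopsis term contributes its metadata weight to the movie's score."""
    return sum(METADATA_WEIGHTS[m] * count for m, count in matched_metadata.items())

def merge_and_rank(demographic_candidates: Dict[str, Dict[str, int]],
                   preference_candidates: Dict[str, Dict[str, int]],
                   top_k: int = 10) -> List[str]:
    """Union the two candidate sets, score each movie, and sort by score."""
    scores: Dict[str, float] = {}
    for candidates in (demographic_candidates, preference_candidates):
        for title, matched in candidates.items():
            scores[title] = scores.get(title, 0.0) + score_movie(matched)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Toy usage with made-up movies and match counts.
demo = {"Movie A": {"genre": 2, "actor": 1}, "Movie B": {"genre": 1}}
pref = {"Movie B": {"keyword": 3, "synopsis": 1}, "Movie C": {"actor": 2}}
print(merge_and_rank(demo, pref, top_k=3))
```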

Directions of Implementing Documentation Strategies for Local Regions (지역 기록화를 위한 도큐멘테이션 전략의 적용)

  • Seol, Moon-Won
    • The Korean Journal of Archival Studies / no.26 / pp.103-149 / 2010
  • Documentation strategy has been experimented in various subject areas and local regions since late 1980's when it was proposed as archival appraisal and selection methods by archival communities in the United States. Though it was criticized to be too ideal, it needs to shed new light on the potentialities of the strategy for documenting local regions in digital environment. The purpose of this study is to analyse the implementation issues of documentation strategy and to suggest the directions for documenting local regions of Korea through the application of the strategy. The documentation strategy which was developed more than twenty years ago in mostly western countries gives us some implications for documenting local regions even in current digital environments. They are as follows; Firstly, documentation strategy can enhance the value of archivists as well as archives in local regions because archivist should be active shaper of history rather than passive receiver of archives according to the strategy. It can also be a solution for overcoming poor conditions of local archives management in Korea. Secondly, the strategy can encourage cooperation between collecting institutions including museums, libraries, archives, cultural centers, history institutions, etc. in each local region. In the networked environment the cooperation can be achieved more effectively than in traditional environment where the heavy workload of cooperative institutions is needed. Thirdly, the strategy can facilitate solidarity of various groups in local region. According to the analysis of the strategy projects, it is essential to collect their knowledge, passion, and enthusiasm of related groups to effectively implement the strategy. It can also provide a methodology for minor groups of society to document their memories. This study suggests the directions of documenting local regions in consideration of current archival infrastructure of Korean as follows; Firstly, very selective and intensive documentation should be pursued rather than comprehensive one for documenting local regions. Though it is a very political problem to decide what subject has priority for documentation, interests of local community members as well as professional groups should be considered in the decision-making process seriously. Secondly, it is effective to plan integrated representation of local history in the distributed custody of local archives. It would be desirable to implement archival gateway for integrated search and representation of local archives regardless of the location of archives. Thirdly, it is necessary to try digital documentation using Web 2.0 technologies. Documentation strategy as the methodology of selecting and acquiring archives can not avoid subjectivity and prejudices of appraiser completely. To mitigate the problems, open documentation system should be prepared for reflecting different interests of different groups. Fourth, it is desirable to apply a conspectus model used in cooperative collection management of libraries to document local regions digitally. Conspectus can show existing documentation strength and future documentation intensity for each participating institution. Using this, documentation level of each subject area can be set up cooperatively and effectively in the local regions.