• Title/Summary/Keyword: 실험정보 (experimental information)


A personalized TV service under Open network environment (개방형 환경에서의 개인 맞춤형 TV 서비스)

  • Lye, Ji-Hye;Pyo, Sin-Ji;Im, Jeong-Yeon;Kim, Mun-Churl;Lim, Sun-Hwan;Kim, Sang-Ki
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2006.11a / pp.279-282 / 2006
  • IPTV broadcasting over IP networks has been recognized as a new revenue model, and domestic carriers such as KT and SKT are currently preparing or running IPTV trial services. Unlike previous one-way broadcasting, IPTV promotes two-way broadcasting that emphasizes interaction with the user, so an innovative broadcast service distinct from conventional broadcasting is expected. Although it would seem that many telecommunication companies and broadcasters could participate in IPTV services, in reality a few large carriers run limited businesses targeting only the subscribers of their own networks. This is because the infrastructure for IPTV services has not been established and the protocols a service developer must know to satisfy the concept of a converged broadcasting-communication network are too numerous. This paper therefore proposes Open APIs as a means to overcome this situation. We reconstructed scenarios for personalized broadcasting by referring to TV-Anytime benchmarking and user scenarios, and from these scenarios we defined the basic yet powerful functions of the converged network for IPTV broadcast services as Open API functions. The broadcast services here are NDR, EPG, and personalized advertisement services; the server for each service resides on the converged network, and because the APIs these servers expose are to be used by other applications, only the most basic functions are defined. We also verified the services by implementing a personalized broadcasting application using the proposed Open API functions. The Open APIs are functions published as web services and can be used from other networks through a gateway. Each Open API function is defined by its name, function, and input/output parameters. The detailed user information and content information delivered for personalized services were defined using the metadata schema specified by the TV-Anytime Forum.
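The abstract states that each Open API function is defined by its name, function, and input/output parameters. A minimal sketch of how such definitions might be modeled is below; the function names (`EPG_GetSchedule`, `NDR_RequestRecording`) and their parameters are illustrative assumptions, not the paper's actual API:

```python
from dataclasses import dataclass

@dataclass
class OpenAPIFunction:
    """An Open API entry: name, purpose, and I/O parameters,
    mirroring the definition style described in the abstract."""
    name: str
    purpose: str
    inputs: dict   # parameter name -> type description
    outputs: dict  # result name -> type description

# Hypothetical entries for the EPG and NDR services the paper mentions.
epg_get_schedule = OpenAPIFunction(
    name="EPG_GetSchedule",
    purpose="Return the program schedule for a channel and time range",
    inputs={"channelId": "string", "startTime": "ISO-8601", "endTime": "ISO-8601"},
    outputs={"programs": "list of TV-Anytime ProgramInformation metadata"},
)
ndr_record = OpenAPIFunction(
    name="NDR_RequestRecording",
    purpose="Request a network-side recording of a program for a subscriber",
    inputs={"userId": "string", "programId": "CRID"},
    outputs={"recordingId": "string", "status": "string"},
)

registry = {f.name: f for f in (epg_get_schedule, ndr_record)}

def describe(name: str) -> str:
    """Render a human-readable signature for a registered Open API function."""
    f = registry[name]
    ins = ", ".join(f"{k}: {v}" for k, v in f.inputs.items())
    return f"{f.name}({ins}) -> {list(f.outputs)}"
```

In a real deployment these descriptions would correspond to web-service (e.g. WSDL/SOAP-era) operations exposed through the gateway.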


A Sensitivity Analysis of JULES Land Surface Model for Two Major Ecosystems in Korea: Influence of Biophysical Parameters on the Simulation of Gross Primary Productivity and Ecosystem Respiration (한국의 두 주요 생태계에 대한 JULES 지면 모형의 민감도 분석: 일차생산량과 생태계 호흡의 모사에 미치는 생물리모수의 영향)

  • Jang, Ji-Hyeon;Hong, Jin-Kyu;Byun, Young-Hwa;Kwon, Hyo-Jung;Chae, Nam-Yi;Lim, Jong-Hwan;Kim, Joon
    • Korean Journal of Agricultural and Forest Meteorology / v.12 no.2 / pp.107-121 / 2010
  • We conducted a sensitivity test of the Joint UK Land Environment Simulator (JULES), in which the influence of biophysical parameters on the simulation of gross primary productivity (GPP) and ecosystem respiration (RE) was investigated for two typical ecosystems in Korea. For this test, we employed whole-year eddy-covariance flux observations measured in 2006 at two KoFlux sites: (1) a deciduous forest in complex terrain in Gwangneung and (2) farmland with heterogeneous mosaic patches in Haenam. Our analysis showed that the simulated GPP was most sensitive to the maximum rate of RuBP carboxylation and to leaf nitrogen concentration for both ecosystems. RE was sensitive to the wood biomass parameter for the deciduous forest in Gwangneung. For the mixed farmland in Haenam, however, RE was most sensitive to the maximum rate of RuBP carboxylation and leaf nitrogen concentration, like the simulated GPP. For both sites, the JULES model overestimated both GPP and RE when the default values of the input parameters were adopted. Considering that the leaf nitrogen concentration observed at the deciduous forest site was only about 60% of its default value, a significant portion of the model's overestimation can be attributed to this discrepancy in the input parameters. Our findings demonstrate that the abovementioned key biophysical parameters of the two ecosystems should be evaluated carefully prior to any simulation and interpretation of ecosystem carbon exchange in Korea.
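The one-at-a-time parameter perturbation underlying such a sensitivity test can be sketched generically. The toy model below stands in for JULES, and the parameter names (`vcmax`, `nl`, `wood_biomass`) are illustrative shorthand, not the model's real variable names:

```python
def sensitivity(model, base_params, perturb=0.1):
    """One-at-a-time sensitivity: perturb each parameter by +10%
    and report the relative change in the model output."""
    base = model(base_params)
    result = {}
    for name, value in base_params.items():
        p = dict(base_params)
        p[name] = value * (1 + perturb)
        result[name] = (model(p) - base) / base
    return result

# Toy GPP-like model: output scales with the carboxylation rate (vcmax)
# and leaf nitrogen (nl), and only weakly with a third parameter.
def toy_gpp(p):
    return p["vcmax"] * p["nl"] + 0.1 * p["wood_biomass"]

s = sensitivity(toy_gpp, {"vcmax": 50.0, "nl": 2.0, "wood_biomass": 10.0})
# vcmax and nl dominate the response, echoing the paper's finding for GPP.
```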

Sugar Contents Analysis of Retort Foods (레토르트식품에 함유되어 있는 당 함량 분석)

  • Jeong, Da-Un;Im, Jun;Kim, Cheon-Hoe;Kim, Young-Kyoung;Park, Yoon-Jin;Jeong, Yoon-Hwa;Om, Ae-Son
    • Journal of the Korean Society of Food Science and Nutrition / v.44 no.11 / pp.1666-1671 / 2015
  • The purpose of this study was to provide trustworthy nutritional information by analyzing the sugar contents of commercial retort foods. A total of 70 retort food samples were collected from markets in Seoul and Gyeonggi-do, comprising curries (n=21), black-bean-sauces (n=16), sauces (n=17), and meats (n=16). The contents of sugars such as glucose, fructose, sucrose, maltose, and lactose were analyzed using a high-performance liquid chromatography-refractive index detector and compared with the values given on the nutrition labels. The analyzed sugar contents of curries, black-bean-sauces, sauces, and meats ranged over 1.05~4.63 g/100 g, 1.76~5.16 g/100 g, 0.35~25.44 g/100 g, and 1.98~11.07 g/100 g, respectively. Sauces were found to contain the highest amounts of total sugar. The analyzed values corresponded to 40~119.5% of the labeled values for curries, 29~118% for black-bean-sauces, 18~118% for sauces, and 70~119.8% for meats. Therefore, this study provides reliable analytical values for the sugar contents of retort foods.
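Comparing an analyzed sugar content against its nutrition-label value, as done for each product group above, is a simple ratio; the sample values below are made up for illustration:

```python
def percent_of_label(analyzed_g, labeled_g):
    """Analyzed sugar content expressed as a percentage of the labeled value
    (both in g/100 g)."""
    return 100.0 * analyzed_g / labeled_g

# Illustrative example: a curry analyzed at 3.2 g/100 g against a 4.0 g label.
pct = percent_of_label(3.2, 4.0)  # 80.0, within the 40~119.5% range reported for curries
```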

Effects of Environmental Factors and Individual Traits on Work Stress and Ethical Decision Making (간호사의 환경적 요소와 개인적 특성이 직무스트레스와 윤리적 의사결정에 미치는 영향)

  • Kim, Sang Mi L.;Ketefian, Shake
    • Journal of Korean Academy of Nursing / v.23 no.3 / pp.417-430 / 1993
  • This study explored causal relationships among environmental factors (nurses' autonomy, organizational standardization), individual traits (locus of control, age, experience, nursing role conception, moral development), work stress, and ethical decision making by constructing and testing a theoretical framework. The model developed for this study was based on 1) Katz and Kahn's open systems theory of organization; 2) the theory of stress of Kahn, Wolfe, Quinn, and Snoek; 3) Kohlberg's theory of moral development; and 4) a review of the literature. The model comprised two major dependent variables (work stress and ethical nursing practice), two mediating variables (nursing role conception and level of moral development), and several independent variables (organizational standardization, autonomy, locus of control, education, age, experience, etc.). In short, nurses' stress and ethical nursing practice were regarded as outcomes of two factors: the individual and the environment. The subjects were 224 registered nurses working at health care institutions in two US states. To test the hypotheses, we used 1) Linear Structural Relationships (LISREL) modeling to examine causal relationships among the variables and 2) correlation analysis to examine the moderating roles of age, experience, and education. The LISREL results showed that the proposed model fit the data well while providing substantial explanatory power for each endogenous variable. The most striking finding was that autonomy, as an environmental factor, emerged as a far more important predictor of work stress and ethical decision making than the individual traits. In addition, nurses' professional and service role conceptions were the most important predictors of ethical decision making. As for moderation effects, the results suggest that younger, less experienced nurses are more strongly influenced by the environmental factor (autonomy) than older, more experienced nurses. The ethical nursing practice of nurses with baccalaureate or higher degrees was less influenced by environmental factors than that of nurses with 2- or 3-year associate degrees, while lack of autonomy was more stressful for baccalaureate graduates than for associate-degree graduates. At least two practical suggestions can be drawn from these results. First, the study showed that autonomy, as an environmental factor, is more important than any individual factor in predicting work stress. For nurse administrators, this means that autonomy must be treated as critically important in reducing nurses' work stress. If nurses' work stress seriously harms their well-being and directly affects patient care, administrators should re-evaluate the organization's work system to determine whether work redesign is needed. The study also indicates where such redesign should begin: with young, inexperienced nurses, and with careful consideration of social support, one of the most important dimensions of the work environment. Second, professional and service role conceptions need to be re-emphasized to improve nurses' ethical practice, and should be taught effectively through education, since they proved to be the most important factors influencing desirable nursing practice. The results also showed that more experienced nurses tended to grow weary of their work, with a corresponding decline in desirable ethical nursing practice. Therefore, the nurse's role within the health care system, as a professional and as a patient advocate, should be taught effectively in schools and clinical settings.
    Continuing education on nurses' roles should be provided to practicing nurses as well as students. For future research, first, more subjects should be included to improve generalizability; this does not mean that every kind of sample must be included at once, but that the goal can be achieved by studying specific samples successively. Second, adequate instruments should be developed to measure the constructs (ethical nursing practice, work stress, nursing role conception, etc.); instrument development should be preceded by obtaining qualitative information that provides rich, detailed insight. Third, experimental and longitudinal designs are needed to advance research on ethical nursing practice and work stress. Finally, for theoretical exploration, that is, theory building, to predict ethical nursing practice and work stress, qualitative studies are needed that can provide detailed information about environmental factors and individual traits.


Primary Adenosquamous Carcinoma of the Stomach (위에서 발생한 선-편평세포암종)

  • Cho, Yong-Kwon;An, Ji-Yeong;Hong, Seong-Kweon;Choi, Min-Gew;Noh, Jae-Hyung;Sohn, Tae-Sung;Kim, Sung
    • Journal of Gastric Cancer / v.6 no.1 / pp.31-35 / 2006
  • Purpose: A primary adenosquamous carcinoma of the stomach is relatively rare, accounting for only about 0.5% of all gastric cancers. However, its histopathologic characteristics are still unclear, and the most appropriate form of therapy has not yet been established. Materials and Methods: We retrospectively reviewed the clinicopathologic features of 8 patients with pathologically confirmed primary adenosquamous carcinomas out of 8,268 patients who underwent gastric cancer surgery at Samsung Medical Center between September 1994 and December 2004. Results: The median age of the 8 patients was 49 (41~69) years, and the male : female ratio was 5 : 3. In 3 patients, the tumor was located at the mid body of the stomach, and in 5 patients, at the lower body or antrum. The tumor sizes were 2.5~8 cm. Seven patients showed metastases to the regional lymph nodes. The UICC stage distribution was: 5 stage II, 2 stage III, and 1 stage IV. In the stage IV patient, a palliative gastrojejunostomy was performed, and he died 5 months after surgery. Of the 7 patients who underwent a radical gastrectomy and adjuvant chemotherapy, the median survival was 34 (12~66) months; 2 patients died of cancer recurrence, and 4 patients are being followed up without evidence of recurrence. Conclusion: As with an adenocarcinoma of the stomach, a radical gastrectomy including regional lymph node dissection, followed by postoperative adjuvant therapy, should be performed as the appropriate treatment for an adenosquamous carcinoma of the stomach.


A Methodology for Extracting Shopping-Related Keywords by Analyzing Internet Navigation Patterns (인터넷 검색기록 분석을 통한 쇼핑의도 포함 키워드 자동 추출 기법)

  • Kim, Mingyu;Kim, Namgyu;Jung, Inhwan
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.123-136 / 2014
  • Recently, online shopping has further developed as the use of the Internet and a variety of smart mobile devices becomes more prevalent. The increase in the scale of such shopping has led to the creation of many Internet shopping malls. Consequently, there is a tendency for increasingly fierce competition among online retailers, and as a result, many Internet shopping malls are making significant attempts to attract online users to their sites. One such attempt is keyword marketing, whereby a retail site pays a fee to expose its link to potential customers when they insert a specific keyword on an Internet portal site. The price related to each keyword is generally estimated by the keyword's frequency of appearance. However, it is widely accepted that the price of keywords cannot be based solely on their frequency because many keywords may appear frequently but have little relationship to shopping. This implies that it is unreasonable for an online shopping mall to spend a great deal on some keywords simply because people frequently use them. Therefore, from the perspective of shopping malls, a specialized process is required to extract meaningful keywords. Further, the demand for automating this extraction process is increasing because of the drive to improve online sales performance. In this study, we propose a methodology that can automatically extract only shopping-related keywords from the entire set of search keywords used on portal sites. We define a shopping-related keyword as a keyword that is used directly before shopping behaviors. In other words, only search keywords that direct the search results page to shopping-related pages are extracted from among the entire set of search keywords. A comparison is then made between the extracted keywords' rankings and the rankings of the entire set of search keywords. Two types of data are used in our study's experiment: web browsing history from July 1, 2012 to June 30, 2013, and site information. 
The experimental dataset came from a web site ranking service and the largest portal site in Korea. The original sample dataset contains 150 million transaction logs. First, portal sites are selected, and the search keywords used on those sites are extracted; they can be obtained by simple parsing. The extracted keywords are ranked according to their frequency. The experiment uses approximately 3.9 million search results from Korea's largest search portal site. As a result, a total of 344,822 search keywords were extracted. Next, by using the web browsing history and site information, the shopping-related keywords were taken from the entire set of search keywords, yielding 4,709 shopping-related keywords. For performance evaluation, we compared the hit ratios of all search keywords with those of the shopping-related keywords. To achieve this, we extracted 80,298 search keywords from several Internet shopping malls and then chose the top 1,000 keywords as the set of true shopping keywords. We measured the precision, recall, and F-score of the entire keyword set and of the shopping-related keywords, where the F-score is the harmonic mean of precision and recall. The precision, recall, and F-score of the shopping-related keywords derived by the proposed methodology were higher than those of the entire keyword set. This study proposes a scheme that obtains shopping-related keywords in a relatively simple manner: they can be extracted simply by examining transactions whose next visit is a shopping mall. The resultant shopping-related keyword set is expected to be a useful asset for the many shopping malls that participate in keyword marketing. Moreover, the proposed methodology can easily be applied to the construction of other special-area keyword sets as well as shopping-related ones.
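The precision/recall/F-score evaluation described above can be sketched directly; the keyword sets below are invented for illustration:

```python
def evaluate(predicted, truth):
    """Precision, recall, and F-score (harmonic mean of precision and recall)
    of a predicted keyword set against a set of true shopping keywords."""
    tp = len(predicted & truth)  # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(truth) if truth else 0.0
    f = (2 * precision * recall / (precision + recall)) if tp else 0.0
    return precision, recall, f

# Illustrative sets: 3 of 4 predicted keywords are true shopping keywords.
p, r, f = evaluate({"sneakers", "laptop", "jeans", "weather"},
                   {"sneakers", "laptop", "jeans", "handbag", "camera"})
```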

A Study of Various Filter Setups with FBP Reconstruction for Digital Breast Tomosynthesis (디지털 유방단층영상합성법의 FBP 알고리즘 적용을 위한 다양한 필터 조합에 대한 연구)

  • Lee, Haeng-Hwa;Kim, Ye-Seul;Lee, Youngjin;Choi, Sunghoon;Lee, Seungwan;Park, Hye-Suk;Kim, Hee-Joung;Choi, Jae-Gu;Choi, Young-Wook
    • Progress in Medical Physics / v.25 no.4 / pp.271-280 / 2014
  • Recently, digital breast tomosynthesis (DBT) has been investigated to overcome the limitation of conventional mammography, namely overlapping anatomical structures, and the high patient dose of cone-beam computed tomography (CBCT). However, incomplete sampling due to the limited scan angle leads to interference from neighboring slices. Many studies have investigated ways to reduce such artifacts, and appropriate filters for tomosynthesis have been researched to address artifacts resulting from incomplete sampling. The primary purpose of this study is to find an appropriate filter scheme for FBP reconstruction in a DBT system to reduce artifacts. We investigated the characteristics of various filter schemes with both simulation and a prototype digital breast tomosynthesis system under the same acquisition parameters and conditions. We evaluated artifacts and noise using profiles and the COV (coefficient of variation) to study the characteristics of each filter. As a result, the noise with a spectral-filter parameter of 0.25 was reduced by 10% compared with the Ram-Lak filter alone. Because the imbalance of information decreases with decreasing B of the slice-thickness filter, artifacts caused by incomplete sampling were reduced. In conclusion, we confirmed the basic characteristics of the filter operations and the improvement in image quality achievable with an appropriate filter scheme. The results of this study can serve as a basis for the research and development of DBT systems by providing information on how noise and artifacts depend on various filter schemes.
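The ramp (Ram-Lak) filter and the effect of a high-frequency-damping spectral window can be sketched as follows. The apodization formula and its `alpha` parameter below are illustrative stand-ins for the paper's spectral-filter definition, which the abstract does not give:

```python
import math

def ramp_filter(n):
    """Ram-Lak (ramp) filter magnitude over n one-sided frequency bins,
    with frequency normalized to [0, 1]."""
    return [k / (n - 1) for k in range(n)]

def apodized_ramp(n, alpha=0.25):
    """Ramp filter damped by a cosine apodization window; alpha controls the
    high-frequency roll-off. Illustrative only, not the paper's exact filter."""
    return [(k / (n - 1)) * (alpha + (1 - alpha) * math.cos(math.pi * k / (2 * (n - 1))))
            for k in range(n)]

def cov(values):
    """Coefficient of variation (std / mean), used to quantify noise."""
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / len(values)
    return math.sqrt(var) / m

plain = ramp_filter(8)
damped = apodized_ramp(8, alpha=0.25)
# The damped filter attenuates high frequencies, which lowers reconstructed noise.
noise_cov = cov([10.2, 9.8, 10.0, 10.1])  # COV of a near-uniform region
```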

Rhetorical Analysis of News Editorials on 'Screen Quota' Arguments: An Application of Toulmin's Argumentation Model (언론의 개방담론 논증구조 분석: 스크린쿼터제 관련 의견보도에 대한 Toulmin의 논증모델과 Stock Issue의 적용)

  • Park, Sung-Hee
    • Korean Journal of Communication and Information / v.36 / pp.399-422 / 2006
  • Whether to reduce the current 'screen quota' for domestic films in conjunction with the FTA discussions between Korea and the United States is one of the most hotly debated issues in Korea. Using Toulmin's argumentation model, this study traces the use of data and warrants in the pro and con claims portrayed in newspaper editorial columns and examines their rhetorical significance. A total of 67 editorial columns were collected from 9 nationwide daily newspapers in Korea for this purpose. The rhetorical analysis of those articles showed that the major warrants used on each side ignored the potential issues of the opponents, which inherently fails to invite rebuttals from the opposite side. This conceptual wall in the argumentation models implies an inactive conversation and a consequent absence of clash between the pro and con argumentation fields. It is thus suggested that opinion writers find more adequate evidence to support their data and warrants so that their respective claims hold persuasive power, ultimately enhancing public discourse among citizens.


A Methodology for Automatic Multi-Categorization of Single-Categorized Documents (단일 카테고리 문서의 다중 카테고리 자동확장 방법론)

  • Hong, Jin-Sung;Kim, Namgyu;Lee, Sangwon
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.77-92 / 2014
  • Recently, numerous documents including unstructured data and text have been created due to the rapid increase in the usage of social media and the Internet. Each document is usually provided with a specific category for the convenience of the users. In the past, categorization was performed manually. However, in the case of manual categorization, not only is the accuracy of the categorization not guaranteed, but the categorization also requires a large amount of time and huge costs. Many studies have been conducted towards the automatic creation of categories to solve the limitations of manual categorization. Unfortunately, most of these methods cannot be applied to categorizing complex documents with multiple topics because the methods work by assuming that one document can be categorized into one category only. In order to overcome this limitation, some studies have attempted to categorize each document into multiple categories. However, they are also limited in that their learning process involves training on a multi-categorized document set. These methods therefore cannot be applied to the multi-categorization of most documents unless multi-categorized training sets are provided. To overcome the requirement of a multi-categorized training set imposed by traditional multi-categorization algorithms, we propose a new methodology that can extend the category of a single-categorized document to multiple categories by analyzing relationships among categories, topics, and documents. First, we attempt to find the relationship between documents and topics by using the result of topic analysis for single-categorized documents. Second, we construct a correspondence table between topics and categories by investigating the relationship between them. Finally, we calculate the matching scores of each document to multiple categories. 
A document is classified into a certain category if and only if its matching score is higher than the predefined threshold; for example, a document can be classified into the three categories whose matching scores exceed the threshold. The main contribution of our study is that our methodology can improve the applicability of traditional multi-category classifiers by generating multi-categorized documents from single-categorized documents. Additionally, we propose a module for verifying the accuracy of the proposed methodology. For performance evaluation, we performed intensive experiments with news articles. News articles are clearly categorized by theme, and the use of vulgar language and slang is less common than in other typical text documents. We collected news articles from July 2012 to June 2013. The articles exhibit large variations in the number of articles per category, because readers have different levels of interest in each category and because the frequency of events differs across categories. In order to minimize the distortion of the result caused by these differences, we extracted 3,000 articles equally from each of the eight categories, so the total number of articles used in our experiments was 24,000. The eight categories were "IT Science," "Economy," "Society," "Life and Culture," "World," "Sports," "Entertainment," and "Politics." Using the collected news articles, we calculated the document/category correspondence scores from the topic/category and document/topic correspondence scores. The document/category correspondence score indicates the degree of correspondence of each document to a certain category. As a result, we could present two additional categories for each of the 23,089 documents. 
Precision, recall, and F-score were revealed to be 0.605, 0.629, and 0.617 respectively when only the top 1 predicted category was evaluated, whereas they were revealed to be 0.838, 0.290, and 0.431 when the top 1 - 3 predicted categories were considered. It was very interesting to find a large variation between the scores of the eight categories on precision, recall, and F-score.
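The matching-score computation described above, combining document/topic weights with a topic/category correspondence table, can be sketched as follows; the topic and category names and the threshold value are illustrative:

```python
def category_scores(doc_topics, topic_categories):
    """Combine document->topic weights with topic->category correspondence
    scores to get document->category matching scores (summed over topics)."""
    scores = {}
    for topic, weight in doc_topics.items():
        for cat, corr in topic_categories.get(topic, {}).items():
            scores[cat] = scores.get(cat, 0.0) + weight * corr
    return scores

def assign(scores, threshold=0.3):
    """A document belongs to every category whose score clears the threshold."""
    return {c for c, s in scores.items() if s >= threshold}

# Illustrative: a news article mixing a finance topic with a mobile-tech topic.
s = category_scores({"t_finance": 0.6, "t_mobile": 0.4},
                    {"t_finance": {"Economy": 0.9},
                     "t_mobile": {"IT Science": 0.8, "Economy": 0.2}})
cats = assign(s, threshold=0.3)  # multi-categorized: Economy and IT Science
```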

An Efficient Heuristic for Storage Location Assignment and Reallocation for Products of Different Brands at Internet Shopping Malls for Clothing (의류 인터넷 쇼핑몰에서 브랜드를 고려한 상품 입고 및 재배치 방법 연구)

  • Song, Yong-Uk;Ahn, Byung-Hyuk
    • Journal of Intelligence and Information Systems / v.16 no.2 / pp.129-141 / 2010
  • An Internet shopping mall for clothing operates a warehouse for packing and shipping products to fulfill its orders. All products in the warehouse are put into boxes of the same brand, and the boxes are stored in a row on shelves equipped in the warehouse. To make picking and managing easy, boxes of the same brand are located side by side on the shelves. When new products arrive at the warehouse for storage, the products of a brand are put into boxes and those boxes are placed adjacent to the existing boxes of the same brand. If there is not enough space for the incoming boxes, however, some boxes of other brands must be moved away, and the incoming boxes are then placed adjacently in the resulting vacant spaces. We want to minimize the movement of existing boxes of other brands to other places on the shelves during the warehousing of incoming boxes, while keeping all boxes of the same brand side by side on the shelves. First, we define the adjacency of boxes by viewing the shelves as a one-dimensional series of spaces for storing boxes, i.e., cells, tagging the series of cells with numbers starting from one, and considering any two stored boxes adjacent if their cell numbers are consecutive. We then formulated the problem as an integer programming model to obtain an optimal solution. An integer programming formulation with the Branch-and-Bound technique may not be tractable for this problem, because it would take too long to solve given the number of cells and boxes in the warehouse and the computing power available to the Internet shopping mall. As an alternative approach, we designed a fast heuristic method for this reallocation problem by focusing only on the unused spaces, the empty cells, on the shelves, which results in an assignment problem model. 
In this approach, the incoming boxes are assigned to the empty cells, and those boxes are then reorganized so that the boxes of a brand are adjacent to each other. The objective of this new approach is to minimize the movement of boxes during the reorganization process while keeping the boxes of a brand adjacent to each other. This approach, however, does not ensure the optimality of the solution in terms of the original problem, that is, minimizing the movement of existing boxes while keeping boxes of the same brand adjacent. Even though the heuristic method may produce a suboptimal solution, we could obtain a satisfactory solution within a satisfactory time, acceptable to real-world experts. To assess the quality of the heuristic solutions, we randomly generated 100 problems, in which the number of cells spans from 2,000 to 4,000, solved them with both our heuristic approach and the original integer programming approach using a commercial optimization software package, and then compared the heuristic solutions with their corresponding optimal solutions in terms of solution time and the number of box movements. We also implemented our heuristic approach in a storage location assignment system for the Internet shopping mall.
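The assignment-problem core of the heuristic, assigning incoming boxes to empty cells at minimum movement cost, can be sketched with a toy brute-force solver. The cost matrix below is invented for illustration, and a real system would use a proper assignment algorithm (e.g. the Hungarian method) at scale:

```python
from itertools import permutations

def assign_boxes(cost):
    """Toy assignment-problem solver: cost[i][j] is the cost (e.g. expected
    box movements) of putting incoming box i into empty cell j. Brute force
    over permutations is fine for a sketch; the paper's heuristic targets
    shelves with thousands of cells."""
    n = len(cost)
    best, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best:
            best, best_perm = total, perm
    return best, best_perm

# Illustrative 3x3 cost matrix: cells adjacent to a box's own brand run
# are encoded as lower cost.
cost = [[1, 4, 5],
        [6, 2, 7],
        [8, 3, 1]]
total, perm = assign_boxes(cost)  # total cost 4 with assignment (0, 1, 2)
```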