• Title/Summary/Keyword: S/R machine

Search Results: 417

The way to make training data for deep learning model to recognize keywords in product catalog image at E-commerce (온라인 쇼핑몰에서 상품 설명 이미지 내의 키워드 인식을 위한 딥러닝 훈련 데이터 자동 생성 방안)

  • Kim, Kitae;Oh, Wonseok;Lim, Geunwon;Cha, Eunwoo;Shin, Minyoung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.1-23
    • /
    • 2018
  • Since the start of the 21st century, the growth of the internet and information and communication technologies has enabled a variety of high-quality services. In particular, the e-commerce industry, in which companies such as Amazon and eBay stand out, is expanding rapidly. As e-commerce grows, more products are registered at online shopping malls, so customers can easily compare products and find what they want to buy. However, this growth has also created a problem: with so many products registered, it has become difficult for customers to find what they really need in the flood of products. When customers search with a general keyword, too many products are returned; conversely, few products are found when customers type in product details, because concrete product attributes are rarely registered as text. In this situation, automatically recognizing text in images can be a solution. Because the bulk of product details is provided in catalogs in image format, most product information cannot be found by text input in current text-based search systems. If the information in these images could be converted to text, customers could search by product detail and shop more conveniently. Various existing OCR (Optical Character Recognition) programs can recognize text in images, but they are difficult to apply to catalogs because they fail in certain circumstances, for example when the text is too small or the fonts are inconsistent. Therefore, this research proposes a way to recognize keywords in catalogs with deep learning, the state of the art in image recognition since the 2010s.
The Single Shot MultiBox Detector (SSD), a model credited for its object-detection performance, can be used after re-designing its structure to account for the differences between text and general objects. However, because deep learning models are trained by supervised learning, the SSD model requires a large amount of labeled training data. One way to collect such data is to label the location and class of each text region in catalogs manually, but manual collection raises several problems. Some keywords would be missed because humans make mistakes while labeling, and collecting data at the required scale is too time-consuming, or too costly if many workers are hired to shorten the time. Furthermore, if specific keywords need to be trained, finding images that contain those words is also difficult. To solve this data issue, this research developed a program that creates training data automatically. The program generates catalog-like images containing various keywords and pictures, and saves the location information of each keyword at the same time. With this program, data can be collected efficiently, and the performance of the SSD model improves as well: the model achieved a recognition rate of 81.99% with 20,000 images created by the program. Moreover, this research tested the SSD model under different data conditions to analyze which features of the data influence the performance of recognizing text in images. The results show that the number of labeled keywords, the addition of overlapping keyword labels, the existence of unlabeled keywords, the spacing among keywords, and differences in background images are all related to the performance of the SSD model. These findings can guide performance improvement of the SSD model, or of other deep-learning-based text recognizers, through high-quality data.
The SSD model re-designed to recognize text in images and the program developed for creating training data are expected to contribute to improved search systems in e-commerce: suppliers can spend less time registering keywords for products, and customers can search for products using the details written in the catalog.
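
The automatic data-generation idea described above can be sketched in a few lines: place keywords at random positions on a catalog-sized canvas and emit SSD-style labels (class id plus bounding box) at the same time. This is an illustrative reconstruction, not the paper's actual program; rendering the pixels would use an imaging library, and all names, sizes, and keyword classes below are assumptions.

```python
import random

KEYWORDS = ["cotton", "waterproof", "free-size"]   # example classes SSD would detect
CHAR_W, CHAR_H = 8, 16                             # assumed glyph size in pixels

def make_labels(width=300, height=300, n_words=3, seed=None):
    """Place n_words random keywords on a width x height canvas and
    return SSD-style labels: (class_id, x_min, y_min, x_max, y_max)."""
    rng = random.Random(seed)
    labels = []
    for _ in range(n_words):
        cls = rng.randrange(len(KEYWORDS))
        w = CHAR_W * len(KEYWORDS[cls])            # box width from word length
        x = rng.randint(0, width - w)
        y = rng.randint(0, height - CHAR_H)
        labels.append((cls, x, y, x + w, y + CHAR_H))
    return labels

labels = make_labels(seed=1)
# each label is ready to pair with the rendered image as one training example
```

Because the generator writes the box coordinates itself, no human labeling step is needed, which is the efficiency gain the abstract describes.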

Optimum Size Selection and Machinery Costs Analysis for Farm Machinery Systems - Programming for Personal Computer - (농기계(農機械) 투입모형(投入模型) 설정(設定) 및 기계이용(機械利用) 비용(費用) 분석연구(分析硏究) - PC용(用) 프로그램 개발(開發) -)

  • Lee, W.Y.;Kim, S.R.;Jung, D.H.;Chang, D.I.;Lee, D.H.;Kim, Y.H.
    • Journal of Biosystems Engineering
    • /
    • v.16 no.4
    • /
    • pp.384-398
    • /
    • 1991
  • A computer program was developed to select the optimum size of farm machinery and to analyze its operating costs under various farming conditions. It was written in FORTRAN 77 and BASIC and can run on any personal computer supporting the Korean Standard Complete Type and Korean Language Code. The program was designed to be user-friendly, so that users can easily carry out cost analysis for whole-farm work or for individual operations in rice production, and for plowing, rotary tillage, and pest control in upland farming. It can analyze three different machines simultaneously for plowing and rotary tillage, and two machines for transplanting, pest control, and harvesting. The input data are the size of the arable land, the possible working days and number of laborers during the optimum working period, and custom rates, which vary by region and individual farming conditions. The outputs include the selected optimum combination of farm machines, the surplus or shortage of working days relative to the planned working period, machine capacities, break-even points by custom rate, monthly fixed costs, and utilization costs per hectare.


Perfusion Impairment in Infantile Autism on Brain SPECT Using Tc-99m ECD : Comparison with MR Findings (유아 자폐증 환아에서의 Tc-99m ECD를 이용한 뇌 단일 광전자 방출 전산화 단층 촬영술상의 관류 저하: 자기 공명 영상과의 비교 분석)

  • Ryu, Young-Hoon;Lee, Jong-Doo;Yoon, Pyeong-Ho;Kim, Dong-Ik;Oh, Young-Taik;Lee, Sun-Ah;Lee, Ho-Bun;Shin, Yee-Jin;Lee, Byung-Hee
    • The Korean Journal of Nuclear Medicine
    • /
    • v.31 no.3
    • /
    • pp.320-329
    • /
    • 1997
  • The neuroanatomic substrate of autism has been the subject of continuing investigation. Because previous studies had not demonstrated consistent and specific neuroimaging findings in autism, and most comprised adults and school-aged children, we performed a retrospective review in search of common functional and anatomical abnormalities using brain SPECT with Tc-99m ECD and correlative MRI. The patient population comprised 18 children aged 28 to 89 months (mean age: 55 months) who met the diagnostic criteria for autism as defined in the DSM-IV and CARS. Brain SPECT was performed after intravenous injection of 185-370 MBq of Tc-99m ECD using a brain-dedicated annular-crystal gamma camera. MRI was performed in all patients, including T1 and T2 axial and T1 sagittal sequences. SPECT data were assessed visually. Thirteen patients had abnormal SPECT scans revealing focal areas of decreased perfusion: decreased perfusion of the cerebellar vermis (12/18), cerebellar hemispheres (11/18), thalami (13/18), basal ganglia (4/18), posterior parietal area (7/18), and temporal area (4/18) was noted on brain SPECT. In contrast, only three patients had abnormal MR findings: subtle volume loss of parieto-occipital white matter in three, mild thinning of the posterior body of the corpus callosum in two, and slightly decreased volume of the cerebellar vermis in one. Comparison of the numbers of abnormal findings revealed that the regional cerebral blood flow (rCBF) abnormalities seen on SPECT were more numerous than the anatomical abnormalities seen on MRI. In conclusion, extensive perfusion impairment involving the cerebellum, thalami, and parietal lobes was found in this study. SPECT may be more sensitive than MRI in reflecting the pathophysiology of autism. However, further studies are needed to determine the significance of thalamic and parietal perfusion impairment in autism.


Effect of Implant Types and Bone Resorption on the Fatigue Life and Fracture Characteristics of Dental Implants (임플란트 형태와 골흡수가 임플란트 피로 수명 및 파절 특성에 미치는 효과에 관한 연구)

  • Won, Ho-Yeon;Choi, Yu-Sung;Cho, In-Ho
    • Journal of Dental Rehabilitation and Applied Science
    • /
    • v.26 no.2
    • /
    • pp.121-143
    • /
    • 2010
  • To investigate the effect of implant type and bone resorption on fatigue life and fracture characteristics, four types of Osstem$^{(R)}$ implants were chosen and classified into external-parallel, internal-parallel, external-taper, and internal-taper groups. Finite element analysis was conducted with ANSYS Multiphysics software. The fatigue fracture test was performed by connecting the mold to a dynamic-load fatigue testing machine with a maximum load of 600 N and a minimum load of 60 N. The entire fatigue test was performed at a frequency of 14 Hz, and fractured specimens were observed with a Hitachi S-3000H scanning electron microscope. The results were as follows. 1. In the fatigue test of implants with 2 mm of exposure, the tapered and externally connected types had a longer fatigue life. 2. In the fatigue test of implants with 4 mm of exposure, the parallel and externally connected types had a longer fatigue life. 3. The fracture patterns of all implant systems with 4 mm of exposure appeared transversely near the dead space of the fixture. At an exposure level of 2 mm, all internally connected implant systems fractured transversely at the platform of the fixture facing the abutment, whereas the externally connected ones fractured at the fillet of the abutment body and the hex of the fixture, or near the dead space of the fixture. 4. Many fatigue striations were observed near the crack initiation and propagation sites; cleavage with facets or dimple fractures appeared at the final fracture sites. 5. The effective stress on the buccal side, under compressive stress, was higher than that on the lingual side under tensile stress, and the effective stress acting on the fixture was higher than that on the abutment screw. The maximum effective stress acting on the parallel-type fixtures was also higher. Care should be taken when using internal-type implant systems in the posterior area.

On Using Near-surface Remote Sensing Observation for Evaluation Gross Primary Productivity and Net Ecosystem CO2 Partitioning (근거리 원격탐사 기법을 이용한 총일차생산량 추정 및 순생태계 CO2 교환량 배분의 정확도 평가에 관하여)

  • Park, Juhan;Kang, Minseok;Cho, Sungsik;Sohn, Seungwon;Kim, Jongho;Kim, Su-Jin;Lim, Jong-Hwan;Kang, Mingu;Shim, Kyo-Moon
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.23 no.4
    • /
    • pp.251-267
    • /
    • 2021
  • Remotely sensed vegetation indices (VIs) are empirically related to gross primary productivity (GPP) across various spatio-temporal scales. The uncertainty in the GPP-VI relationship increases with temporal resolution. Uncertainty also exists in the eddy covariance (EC)-based estimation of GPP, arising from the partitioning of the measured net ecosystem CO2 exchange (NEE) into GPP and ecosystem respiration (RE). For two forest and two agricultural sites, we correlated the EC-derived GPP at various time scales with three near-surface remotely sensed VIs: (1) the normalized difference vegetation index (NDVI), (2) the enhanced vegetation index (EVI), and (3) the near-infrared reflectance of vegetation (NIRv), along with NIRvP (i.e., NIRv multiplied by photosynthetically active radiation, PAR). Among the compared VIs, NIRvP showed the highest correlation with half-hourly and monthly GPP at all sites. NIRvP was then used to test the reliability of GPP derived by two different NEE partitioning methods: (1) the original KoFlux method (GPPOri) and (2) a machine-learning-based method (GPPANN). GPPANN showed higher correlation with NIRvP at the half-hourly time scale, but there was no difference at the daily time scale. The NIRvP-GPP correlation was lower under clear-sky conditions because GPP is co-limited by other environmental conditions such as air temperature, vapor pressure deficit, and soil moisture. Under cloudy conditions, however, when photosynthesis is limited mainly by radiation, NIRvP was more promising for testing the credibility of NEE partitioning methods. Although further analyses are needed, the results suggest that NIRvP can be used as a proxy for GPP at high temporal scales. However, for VI-based GPP estimation at high temporal resolution to be meaningful, complex-systems-based analysis methods (related to systems thinking and self-organization, going beyond the empirical VI-GPP relationship) should be developed.
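
The indices compared above are simple functions of band reflectances. A minimal sketch of NDVI, NIRv, and NIRvP from red and near-infrared reflectance, following the standard formulations the abstract names (the input values and PAR units are made up for illustration; EVI is omitted because it needs an additional blue band):

```python
def ndvi(nir, red):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red)

def nirv(nir, red):
    """NIRv: NDVI multiplied by NIR reflectance (Badgley et al. formulation)."""
    return ndvi(nir, red) * nir

def nirvp(nir, red, par):
    """NIRvP: NIRv multiplied by photosynthetically active radiation."""
    return nirv(nir, red) * par

# Illustrative values: reflectances are unitless, PAR in umol m-2 s-1 (assumed).
v = nirvp(nir=0.45, red=0.05, par=1500.0)
```

Because NIRvP folds incoming radiation into the index, it tracks the radiation-driven variation of GPP at sub-daily scales, which is consistent with its high half-hourly correlations reported above.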

Studies on the Weed Competition 1. Interpretation of Weed Competition of Paddy Rice Under Various Cultural Patterns (잡초경합에 관한 연구 제1보 수도 재배양식에 따른 잡초 경합 구조 해석)

  • Guh, J.O.;Chung, S.T.;Chung, B.H.
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.25 no.1
    • /
    • pp.77-86
    • /
    • 1980
  • As cropping patterns change to save labor and capital in paddy rice cultivation, this study was intended to identify the weed problems under the various possible cultural systems: direct seeding (broadcast and in rows), machine transplanting, and hand transplanting. In the weedy check plots, paddy yields varied significantly among the cropping systems, and the contributions of panicle number and spikelet number to yield were negligible. However, the differences in yield and yield components among the cropping systems narrowed, and the contribution of spikelet number per unit area improved relative to the other components.


Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.27-65
    • /
    • 2020
  • Many information and communication technology companies have made their internally developed AI technology public, for example Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software, a company can strengthen its relationship with the developer community and the artificial intelligence (AI) ecosystem, while users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although open source software has been analyzed in various ways, there is a lack of studies that help industry develop or use deep learning open source software. This study therefore attempts to derive an adoption strategy through case studies of deep learning open source frameworks. Based on the technology-organization-environment (TOE) framework and a literature review on open source software adoption, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We conducted a case study analysis of three companies' adoption cases (two successes and one failure) and found that seven of the eight TOE factors, together with several factors regarding company, team, and resources, are significant for the adoption of a deep learning open source framework.
By organizing the case study results, we identified five important success factors for adopting a deep learning framework: the knowledge and expertise of the developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework work tool service. For an organization to successfully adopt a deep learning open source framework at the usage stage, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, the use of deep learning frameworks by research developers must be supported by collecting and managing data inside and outside the company through a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures at the usage stage, companies increase the number of deep learning research developers, the ability to use the deep learning framework, and the support of GPU resources. In the proliferation stage of the deep learning framework, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of the developers, for example by automatically optimizing the hardware (GPU) environment. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by activating developer retraining and seminars.
To implement the five identified success factors, a step-by-step enterprise procedure for adopting a deep learning framework was proposed: defining the project problem, confirming that the deep learning methodology is the right method, confirming that the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework across the enterprise. The first three steps (defining the project problem, confirming the methodology, and confirming the tool) are pre-considerations for adopting a deep learning open source framework. Once these three pre-consideration steps are clear, the next two steps (using the framework in the enterprise and spreading it across the enterprise) can proceed. In the fourth step, the knowledge and expertise of the developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, the five important factors are realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.